ECON 3150/4150, Spring term Lecture 3


1 ECON 3150/4150, Spring term. Lecture 3. Ragnar Nymoen, University of Oslo, 21 January. Outline: Introduction; Finding the best fit by regression; Residuals and R-sq; Regression and causality; Summary and next step.

2 References to Lecture 3 and 4: Stock and Watson (SW) Ch 3.7 and Ch 4 (main exposition) and Ch 17 (technical exposition; the level matches Ch 2 and Ch 3); Bårdsen and Nymoen (BN) Kap 2, 3 and Kap ...

3 It is customary to motivate regression, and in particular the estimation method Ordinary Least Squares, by setting finding the best-fitting line in a scatter plot as the purpose of econometric modelling. Nothing wrong in this, but it should not be taken too far. Goodness of fit is only one aspect of building a relevant econometric model. Model parsimony (explaining a phenomenon by simple models), theory consistency, and a relevant representation of counterfactuals to allow causal analysis are examples of model features that are just as important as goodness of fit. After this caveat, we start by presenting the main ideas behind OLS estimation in terms of finding the best-fitting line in a scatter plot of data points.

4 Basic ideas: Scatter plot and least squares fit. (Figure: scatter plot with fitted line; axes Y and X.)

5 Basic ideas: Which line is best? Idea: Minimize the sum of squared errors! But which errors? (Figure; axes Y and X.)

6 Basic ideas: Which squared error? (Figure: a data point (X_i, Y_i) and a line; axes Y and X.) 1. Least vertical distance to the line. 2. Least horizontal distance. 3. Shortest distance to the line.

7 Basic ideas. (Figure: the data point (X_i, Y_i) and its predicted value Ŷ_i on the line; axes Y and X.) Choose 1 when we want to minimize squared errors from predicting Y_i linearly from X_i. Residual: ε̂_i = Y_i − Ŷ_i, where Ŷ_i is the predicted value.

8 Basic ideas: Regression line and prediction errors (projections). (Figure; axes Y and X.)

9 Least squares algebra: Ordinary least squares (OLS) estimates I. The different lines that we considered placing in the scatter plot correspond to different values of the parameters β_0 and β_1 in the linear function that connects the given numbers X_1, X_2, ..., X_n with Y_1^fitted, Y_2^fitted, ..., Y_n^fitted:

Y_i^{fitted} = \beta_0 + \beta_1 X_i,  i = 1, 2, ..., n

We obtain the best fit Y_i^fitted ≡ Ŷ_i (i = 1, 2, ..., n),

\hat{Y}_i = \hat{\beta}_0 + \hat{\beta}_1 X_i,  i = 1, 2, ..., n   (1)

by finding the estimates of β_0 and β_1 that minimize the sum of squared residuals Σ_{i=1}^n (Y_i − Y_i^fitted)^2:

S(\beta_0, \beta_1) = \sum_{i=1}^{n} (Y_i - \beta_0 - \beta_1 X_i)^2   (2)
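The following is a minimal numerical sketch (not from the slides) of what minimizing S(β_0, β_1) in (2) means in practice, using a general-purpose optimizer on a small made-up data set; it assumes numpy and scipy are available, and the data values and variable names are illustrative only.

import numpy as np
from scipy.optimize import minimize

# Made-up illustrative data
X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
Y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

def S(b):
    # Sum of squared residuals S(beta0, beta1), equation (2)
    beta0, beta1 = b
    return np.sum((Y - beta0 - beta1 * X) ** 2)

res = minimize(S, x0=np.array([0.0, 0.0]))
print(res.x)  # numerical (beta0_hat, beta1_hat); matches the OLS formulas below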

10 Least squares algebra: Ordinary least squares (OLS) estimates II. Consequently β̂_0 and β̂_1 are determined by the first order conditions (1oc's):

\bar{Y} - \hat{\beta}_0 - \hat{\beta}_1 \bar{X} = 0   (3)

\sum_{i=1}^{n} X_i Y_i - \hat{\beta}_0 \sum_{i=1}^{n} X_i - \hat{\beta}_1 \sum_{i=1}^{n} X_i^2 = 0   (4)

where

\bar{X} = \frac{1}{n} \sum_{i=1}^{n} X_i   (5)

is the sample mean (empirical mean) of X. It is expected that you can solve the simultaneous equation system (3)-(4). See Question C in the first exercise set!
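As a sketch (assuming numpy and the same made-up data as above), the system (3)-(4) can be written as two linear equations in β̂_0 and β̂_1 and solved directly:

import numpy as np

X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
Y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])
n = len(X)

# (3) multiplied by n, and (4), written as A @ [beta0, beta1] = c
A = np.array([[n,       X.sum()],
              [X.sum(), (X ** 2).sum()]])
c = np.array([Y.sum(), (X * Y).sum()])

beta0_hat, beta1_hat = np.linalg.solve(A, c)
print(beta0_hat, beta1_hat)

# Cross-check against numpy's built-in least squares routine
print(np.linalg.lstsq(np.column_stack([np.ones(n), X]), Y, rcond=None)[0])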

11 Least squares algebra: A trick and a simplified derivation I. The trick is to note that

\beta_0 + \beta_1 X_i \equiv \alpha + \beta_1 (X_i - \bar{X})   (6)

when the intercept parameter α is defined as

\alpha \equiv \beta_0 + \beta_1 \bar{X}   (7)

This means that the best prediction Ŷ_i given X_i can be written as

\hat{Y}_i = \hat{\beta}_0 + \hat{\beta}_1 X_i \equiv \hat{\alpha} + \hat{\beta}_1 (X_i - \bar{X}),  where  \hat{\alpha} \equiv \hat{\beta}_0 + \hat{\beta}_1 \bar{X}   (8)

and we therefore choose the α and β_1 that minimize

S(\alpha, \beta_1) = \sum_{i=1}^{n} [Y_i - \alpha - \beta_1 (X_i - \bar{X})]^2   (9)

12 Least squares algebra: A trick and a simplified derivation II. Calculate the two partial derivatives (the chain rule, applied to each element in the sums):

\frac{\partial S(\alpha, \beta_1)}{\partial \alpha} = 2 \sum_{i=1}^{n} [Y_i - \alpha - \beta_1 (X_i - \bar{X})] (-1)

\frac{\partial S(\alpha, \beta_1)}{\partial \beta_1} = 2 \sum_{i=1}^{n} [Y_i - \alpha - \beta_1 (X_i - \bar{X})] (-(X_i - \bar{X}))

and choose α̂ and β̂_1 as the solutions of

\sum_{i=1}^{n} [Y_i - \hat{\alpha} - \hat{\beta}_1 (X_i - \bar{X})] (-1) = 0   (10)

\sum_{i=1}^{n} [Y_i - \hat{\alpha} - \hat{\beta}_1 (X_i - \bar{X})] (X_i - \bar{X}) = 0   (11)

13 Least squares algebra: A trick and a simplified derivation III.

\hat{\alpha} - \bar{Y} = 0   (12)

\sum_{i=1}^{n} (X_i - \bar{X}) Y_i - \hat{\beta}_1 \sum_{i=1}^{n} (X_i - \bar{X})^2 = 0   (13)

where

\bar{Y} = \frac{1}{n} \sum_{i=1}^{n} Y_i   (14)

is the empirical mean of Y. Another DIY exercise: Show that (10) gives (12), and (11) gives (13), and that the solutions of (12) and (13) are

\hat{\alpha} = \bar{Y}   (15)

\hat{\beta}_1 = \frac{\sum_{i=1}^{n} (X_i - \bar{X}) Y_i}{\sum_{i=1}^{n} (X_i - \bar{X})^2}   (16)

14 Least squares algebra: A trick and a simplified derivation IV. Note that for (16) to make sense, we need to assume

\sum_{i=1}^{n} (X_i - \bar{X})^2 > 0

(i.e., X is a variable, not a constant). A generalization of this will be important later, and is then called absence of perfect multicollinearity. To obtain β̂_0 we simply use

\hat{\beta}_0 = \hat{\alpha} - \hat{\beta}_1 \bar{X}   (17)
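A minimal sketch (assuming numpy and the same made-up data as before) of the closed-form estimates (15)-(17):

import numpy as np

X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
Y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

x_dev = X - X.mean()                                  # X_i - Xbar
alpha_hat = Y.mean()                                  # (15)
beta1_hat = (x_dev * Y).sum() / (x_dev ** 2).sum()    # (16)
beta0_hat = alpha_hat - beta1_hat * X.mean()          # (17)
print(alpha_hat, beta1_hat, beta0_hat)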

15 Residuals and total sum of squares I. Definition of OLS residuals:

\hat{\varepsilon}_i = Y_i - \hat{Y}_i,  i = 1, 2, ..., n   (18)

where we deviate from the S&W notation, which uses û_i for the residual. Using this definition in the 1oc's (10) and (11) gives

\sum_{i=1}^{n} \hat{\varepsilon}_i = 0   (19)

\sum_{i=1}^{n} \hat{\varepsilon}_i (X_i - \bar{X}) = 0   (20)

16 Residuals and total sum of squares II.

\sum_{i=1}^{n} \hat{\varepsilon}_i = 0 \implies \bar{\hat{\varepsilon}} = \frac{1}{n} \sum_{i=1}^{n} \hat{\varepsilon}_i = 0   (21)

\sum_{i=1}^{n} \hat{\varepsilon}_i (X_i - \bar{X}) = 0 \implies \hat{\sigma}_{\varepsilon X} = \frac{1}{n} \sum_{i=1}^{n} (\hat{\varepsilon}_i - \bar{\hat{\varepsilon}})(X_i - \bar{X}) = 0   (22)

where σ̂_εX denotes the (empirical) covariance between the residuals and the explanatory variable. These properties always hold when we include the intercept (β_0 or α) in the model. They generalize to the case of multiple regression, as we shall see later. (22) is an orthogonality condition. It says that the OLS residuals are uncorrelated with the explanatory variable.
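A quick numerical check (a sketch with numpy and the same made-up data; not from the slides) that the OLS residuals satisfy (19)-(22):

import numpy as np

X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
Y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

x_dev = X - X.mean()
beta1_hat = (x_dev * Y).sum() / (x_dev ** 2).sum()
beta0_hat = Y.mean() - beta1_hat * X.mean()
resid = Y - (beta0_hat + beta1_hat * X)               # OLS residuals, (18)

print(resid.sum())                                    # ~0, property (19)/(21)
print((resid * x_dev).sum())                          # ~0, orthogonality (20)/(22)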

17 Residuals and total sum of squares III. σ̂_εX = 0 occurs because we have defined the OLS residuals in such a way that they measure what is left unexplained in Y when we have extracted all the explanatory power of X.

18 Total Sum of Squares and Residual Sum of Squares I. We define the Total Sum of Squares for Y as

TSS = \sum_{i=1}^{n} (Y_i - \bar{Y})^2   (23)

We can guess that TSS can be split into the Explained Sum of Squares

ESS = \sum_{i=1}^{n} (\hat{Y}_i - \bar{\hat{Y}})^2   (24)

and the Residual Sum of Squares

RSS = \sum_{i=1}^{n} (\hat{\varepsilon}_i - \bar{\hat{\varepsilon}})^2 = SSR   (25)

19 Total Sum of Squares and Residual Sum of Squares II. SSR denotes Sum of Squared Residuals; RSS and SSR are both used.

TSS = ESS + RSS   (26)

To show this important decomposition, start with

(Y_i - \bar{Y})^2 = \left[ (Y_i - \hat{Y}_i) + (\hat{Y}_i - \bar{\hat{Y}}) \right]^2 = \left[ \hat{\varepsilon}_i + (\hat{Y}_i - \bar{\hat{Y}}) \right]^2

where we have used that

\bar{Y} = \frac{1}{n} \sum_{i=1}^{n} Y_i = \frac{1}{n} \sum_{i=1}^{n} (\hat{\varepsilon}_i + \hat{Y}_i) = \bar{\hat{Y}}

because of (19). Completing the square and summing over i gives

20 Total Sum of Squares and Residual Sum of Squares III.

\sum_{i=1}^{n} (Y_i - \bar{Y})^2 = RSS + 2 \sum_{i=1}^{n} \hat{\varepsilon}_i (\hat{Y}_i - \bar{\hat{Y}}) + ESS

where the left-hand side is TSS. Expand the middle term:

\sum_{i=1}^{n} \hat{\varepsilon}_i (\hat{Y}_i - \bar{\hat{Y}}) = \sum_{i=1}^{n} \hat{\varepsilon}_i \left( \hat{\alpha} + \hat{\beta}_1 (X_i - \bar{X}) - \bar{\hat{Y}} \right) = \hat{\alpha} \sum_{i=1}^{n} \hat{\varepsilon}_i + \hat{\beta}_1 \sum_{i=1}^{n} \hat{\varepsilon}_i (X_i - \bar{X}) - \bar{\hat{Y}} \sum_{i=1}^{n} \hat{\varepsilon}_i

where the first and third sums are zero by (19) and the second is zero by (20).

21 Total Sum of Squares and Residual Sum of Squares IV. Therefore

\sum_{i=1}^{n} \hat{\varepsilon}_i (\hat{Y}_i - \bar{\hat{Y}}) = 0

The residuals are uncorrelated with the predictions Ŷ_i. Could it be different? Hence we have the desired result:

TSS = ESS + RSS   (27)

22 The coefficient of determination I. To summarize the goodness of fit in the form of a single number, the coefficient of determination, almost everywhere denoted R², is used:

R^2 = \frac{ESS}{TSS} = \frac{TSS - RSS}{TSS} = 1 - \frac{RSS}{TSS} = 1 - \text{rate of unexplained Y variation}   (28)

If β̂_1 = 0, then

RSS = \sum_{i=1}^{n} (Y_i - \hat{Y}_i)^2 = \sum_{i=1}^{n} (Y_i - \hat{\alpha})^2 = \sum_{i=1}^{n} (Y_i - \bar{Y})^2 = TSS

and R² = 0. If RSS = 0 (a perfect fit), then R² = 1.
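A numerical sketch (assuming numpy and the same made-up data) that verifies the decomposition (27) and the two equivalent ways of computing R² in (28):

import numpy as np

X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
Y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

x_dev = X - X.mean()
beta1_hat = (x_dev * Y).sum() / (x_dev ** 2).sum()
beta0_hat = Y.mean() - beta1_hat * X.mean()
Y_hat = beta0_hat + beta1_hat * X

TSS = ((Y - Y.mean()) ** 2).sum()
ESS = ((Y_hat - Y_hat.mean()) ** 2).sum()
RSS = ((Y - Y_hat) ** 2).sum()

print(TSS, ESS + RSS)            # equal up to rounding, decomposition (27)
print(ESS / TSS, 1 - RSS / TSS)  # the two expressions for R2 in (28) agree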

23 The coefficient of determination II. Hence we have the property

0 \le R^2 \le 1   (29)

These results depend on defining the regression function as in (1). If we instead use

\hat{Y}_i^{o} = \hat{\beta}_1^{o} X_i

which forces the regression line through the origin, then the corresponding residuals do not sum to zero, the decomposition of TSS breaks down, and R² (as defined above) can be negative! Work with Question D in the first exercise set!
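As a sketch (numpy, with made-up numbers chosen to make the point), a regression forced through the origin on data with a large mean and little variation gives a strongly negative value of 1 − RSS/TSS:

import numpy as np

# Y has a large mean but is nearly flat in X
X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
Y = np.array([10.2, 9.9, 10.1, 9.8, 10.0])

b1 = (X * Y).sum() / (X ** 2).sum()       # OLS slope with no intercept
resid = Y - b1 * X
print(resid.sum())                        # residuals no longer sum to zero

TSS = ((Y - Y.mean()) ** 2).sum()
RSS = (resid ** 2).sum()
print(1 - RSS / TSS)                      # far below zero for these numbers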

24 Regression and correlation I. We define the empirical correlation coefficient between X and Y as

r_{X,Y} \equiv \frac{\frac{1}{n-1} \sum_{i=1}^{n} (Y_i - \bar{Y})(X_i - \bar{X})}{\sqrt{\frac{1}{n-1} \sum_{i=1}^{n} (X_i - \bar{X})^2} \sqrt{\frac{1}{n-1} \sum_{i=1}^{n} (Y_i - \bar{Y})^2}} = \frac{\hat{\sigma}_{XY}}{\hat{\sigma}_X \hat{\sigma}_Y}   (30)

σ̂_XY denotes the empirical covariance between Y and X (SW uses s_XY). σ̂_X and σ̂_Y denote the two empirical standard deviations (SW uses s_X and s_Y). They are square roots of the empirical variances, e.g.,

\hat{\sigma}_X = \sqrt{\hat{\sigma}_X^2} = \sqrt{\frac{1}{n-1} \sum_{i=1}^{n} (X_i - \bar{X})^2}

25 Regression and correlation II. Note: Dividing by n or n−1 is not really important (but it is best to stick to one convention). σ̂_{X,Y} can be written in three equivalent ways:

\hat{\sigma}_{X,Y} = \frac{1}{n-1} \sum_{i=1}^{n} (X_i - \bar{X})(Y_i - \bar{Y}) = \frac{1}{n-1} \sum_{i=1}^{n} (Y_i - \bar{Y}) X_i = \frac{1}{n-1} \sum_{i=1}^{n} (X_i - \bar{X}) Y_i
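A quick numerical check (numpy, made-up data; a sketch, not from the slides) that the three expressions for σ̂_{X,Y} coincide:

import numpy as np

X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
Y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])
n = len(X)

x_dev, y_dev = X - X.mean(), Y - Y.mean()
cov1 = (x_dev * y_dev).sum() / (n - 1)   # (X_i - Xbar)(Y_i - Ybar)
cov2 = (y_dev * X).sum() / (n - 1)       # (Y_i - Ybar) X_i
cov3 = (x_dev * Y).sum() / (n - 1)       # (X_i - Xbar) Y_i
print(cov1, cov2, cov3)                  # all three are equal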

26 Regression and correlation III. The regression coefficient can therefore be re-expressed as

\hat{\beta}_1 = \frac{\sum_{i=1}^{n} (X_i - \bar{X}) Y_i}{\sum_{i=1}^{n} (X_i - \bar{X})^2} = \frac{\frac{1}{n-1} \sum_{i=1}^{n} (X_i - \bar{X}) Y_i}{\frac{1}{n-1} \sum_{i=1}^{n} (X_i - \bar{X})^2} = \frac{\hat{\sigma}_{X,Y}}{\hat{\sigma}_X^2} = \frac{\hat{\sigma}_Y}{\hat{\sigma}_X} \cdot \frac{\hat{\sigma}_{X,Y}}{\hat{\sigma}_X \hat{\sigma}_Y} = \frac{\hat{\sigma}_Y}{\hat{\sigma}_X} r_{X,Y}   (31)

This shows that r_{X,Y} ≠ 0 is necessary for β̂_1 ≠ 0: correlation is necessary for finding regression relationships. Still, β̂_1 ≠ r_{XY} (in general), and regression analysis is different from correlation analysis.
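A minimal check (numpy, the same made-up data) of the relationship β̂_1 = (σ̂_Y/σ̂_X) r_{X,Y} in (31):

import numpy as np

X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
Y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

x_dev = X - X.mean()
beta1_hat = (x_dev * Y).sum() / (x_dev ** 2).sum()

r_xy = np.corrcoef(X, Y)[0, 1]           # empirical correlation coefficient
sx, sy = X.std(ddof=1), Y.std(ddof=1)    # empirical std devs, 1/(n-1) convention

print(beta1_hat, (sy / sx) * r_xy)       # the two numbers coincide, cf. (31)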

27 Regression and causality I. Three possible theoretical causal relationships between X and Y. (Figure: diagrams I, II and III.) Our regression is causal if I is true, and II (joint causality) and III are not true. r_XY ≠ 0 in all three cases. It can also be that a third variable (Z) causes both Y and X (spurious correlation).

28 Causal interpretation of regression analysis I. Regression analysis can refute a causal relationship, since correlation is necessary for causality. But we cannot confirm or discover a causal relationship by statistical analysis (such as regression) alone. We need to supplement the analysis by theory and by interpretation of natural experiments or quasi-experiments; see page 126 and the text box on page 131 in SW. We will see several examples later in the course.

29 Causal interpretation of regression analysis II. In time series analysis, the central concept is autonomy of regression parameters with respect to changes in policy variables. The concept is developed in ECON 4160, but for those interested, Kap. 2.4 in BN gives an introduction to this line of thinking about correlation and causality.

30 In this lecture we have learnt about the method of ordinary least squares (OLS) to fit a straight line to a scatter plot of numbers (data points). The concepts of random variables and the statistical model, which were central in Lectures 1 and 2, have not even been mentioned! In Lecture 4 we start to bridge that gap by introducing the regression model. Note also the limitation of fitting the straight line: many scatter plots do not even resemble a linear relationship; see Figure 3.3 in BN and the Phillips curve examples in Kap 3 in BN. Luckily, the OLS method can be used in many such cases; the point will be that the conditional expectation function need not be linear. Hence: several reasons to bring the statistical model back into the story, and in particular the conditional expectation function!
