Introduction to Regression

Introduction to Regression. Chad M. Schafer (cschafer@stat.cmu.edu), Carnegie Mellon University.

Outline: General Concepts of Regression and the Bias-Variance Tradeoff; Linear Regression; Nonparametric Procedures; Cross Validation; Local Polynomial Regression; Confidence Bands; Basis Methods: Splines; Multiple Regression.

Basic Concepts in Regression. The Regression Problem: observe (X_1, Y_1), ..., (X_n, Y_n) and estimate f(x) = E(Y | X = x). Equivalently: estimate f(x) in the model Y_i = f(X_i) + ǫ_i, i = 1, 2, ..., n, where E(ǫ_i) = 0. Simple estimators: f̂(x) = β̂_0 + β̂_1 x (parametric); f̂(x) = mean{Y_i : |X_i − x| ≤ h} (nonparametric).
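
To make the contrast concrete, here is a minimal R sketch on simulated data (the data and all object names are illustrative, not from the slides), fitting both simple estimators:

set.seed(1)
n <- 100
x <- runif(n)
y <- sin(2 * pi * x) + rnorm(n, sd = 0.3)

# Parametric: least-squares line, fhat(x) = beta0hat + beta1hat * x
linfit <- lm(y ~ x)
coef(linfit)

# Nonparametric: local average over the window |X_i - x| <= h
h <- 0.1
fhat <- function(x0) mean(y[abs(x - x0) <= h])
fhat(0.5)   # estimate of f(0.5)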

Regression. Parametric Regression: Y = β_0 + β_1 X + ǫ; Y = β_0 + β_1 X_1 + ... + β_d X_d + ǫ; Y = β_0 e^{β_1 X_1 + β_2 X_2} + ǫ. Nonparametric Regression: Y = f(X) + ǫ; Y = f_1(X_1) + ... + f_d(X_d) + ǫ; Y = f(X_1, X_2) + ǫ.

Next... We will first quickly look at a simple example from astronomy.

Example: Type Ia Supernovae. [Figure: distance modulus versus redshift for 182 Type Ia supernovae, from Riess et al. (2007).]

Example: Type Ia Supernovae. Assumption: the observed pairs (z_i, Y_i) are realizations of Y_i = f(z_i) + σ_i ǫ_i, where the ǫ_i are i.i.d. standard normal. The error standard deviations σ_i are assumed known. Objective: estimate f(·).

Example: Type Ia Supernovae. ΛCDM, a two-parameter model: f(z) = 5 log_10 [ (c(1+z)/H_0) ∫_0^z du / √( Ω_m (1+u)^3 + (1 − Ω_m) ) ] + 25.

Example: Type Ia Supernovae. [Figure: the data with the ΛCDM estimate overlaid, where H_0 = 72.76 and Ω_m = 0.341 (the MLE).]

Example: Type Ia Supernovae. [Figure: fourth order polynomial fit to the distance modulus versus redshift data.]

Example: Type Ia Supernovae. [Figure: fifth order polynomial fit to the distance modulus versus redshift data.]

Next... The previous two slides illustrate the fundamental challenge of constructing a regression model: How complex should it be?

The Bias-Variance Tradeoff. Must choose a model to achieve a balance between: too simple, giving high bias, i.e., E(f̂(x)) not close to f(x) (precise, but not accurate); and too complex, giving high variance, i.e., Var(f̂(x)) large (accurate, but not precise). This is the classic bias-variance tradeoff. It could be called the accuracy-precision tradeoff.

The Bias-Variance Tradeoff. Linear Regression Model: fit models of the form f̂(x) = β̂_0 + Σ_{i=1}^p β̂_i g_i(x), where the g_i(·) are fixed, specified functions. Increasing p yields a more complex model.

Next... Linear regression models are the classic approach to regression. Here, the response is modelled as a linear combination of p selected predictors.

Linear Regression. Assume that Y = β_0 + Σ_{i=1}^p β_i g_i(x) + ǫ, where the g_i(·) are specified functions, not estimated. Assume that E(ǫ) = 0 and Var(ǫ) = σ^2. Often ǫ is assumed Gaussian, but this is not essential.

Linear Regression. The Design Matrix: D is the n × (p+1) matrix whose i-th row is (1, g_1(X_i), g_2(X_i), ..., g_p(X_i)). Then L = D(D^T D)^{-1} D^T is the projection matrix. Note that ν = trace(L) = p + 1.

Linear Regression. Let Y be the n-vector (Y_1, Y_2, ..., Y_n). The least squares estimate of β: β̂ = (D^T D)^{-1} D^T Y. The fitted values: Ŷ = LY = Dβ̂. The residuals: ǫ̂ = Y − Ŷ = (I − L)Y.

Linear Regression. The estimates of β given above minimize the sum of squared errors: SSE = Σ_{i=1}^n ǫ̂_i^2 = Σ_{i=1}^n (Y_i − Ŷ_i)^2. If the ǫ_i are Gaussian, then β̂ is Gaussian, leading to confidence intervals and hypothesis tests.
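
As an illustration of these formulas, the following R sketch (simulated data; names are illustrative) computes β̂, the projection matrix L, the fitted values, and the residuals directly from the design matrix:

set.seed(1)
x <- runif(100)
y <- sin(2 * pi * x) + rnorm(100, sd = 0.3)

D <- cbind(1, x)                              # design matrix: intercept and g_1(x) = x
betahat <- solve(t(D) %*% D, t(D) %*% y)      # (D^T D)^{-1} D^T Y
L    <- D %*% solve(t(D) %*% D) %*% t(D)      # projection matrix
yhat <- L %*% y                               # fitted values Yhat = L Y
ehat <- y - yhat                              # residuals
sum(diag(L))                                  # nu = trace(L) = p + 1 = 2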

Linear Regression in R. Linear regression in R is done via the command lm(): > holdlm = lm(y ~ x). The default is to include an intercept term. To exclude it, use > holdlm = lm(y ~ x - 1).

Linear Regression in R. [Figure: scatter plot of y versus x for the example data.]

Linear Regression in R. Fitting the model Y = β_0 + β_1 x + ǫ. The summary() shows the key output:
> holdlm = lm(y ~ x)
> summary(holdlm)
< CUT SOME HERE >
Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)  -0.2468     0.1061  -2.326   0.0319 *
x             2.0671     0.1686  12.260 3.57e-10 ***
< CUT SOME HERE >
Note that β̂_1 = 2.07; the SE for β̂_1 is approximately 0.17.

Linear Regression in R.
Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)  -0.2468     0.1061  -2.326   0.0319 *
x             2.0671     0.1686  12.260 3.57e-10 ***
If the ǫ are Gaussian, then the last column gives the p-value for the test of H_0: β_i = 0 versus H_1: β_i ≠ 0. Also, if the ǫ are Gaussian, β̂_i ± 2 SE(β̂_i) is an approximate 95% confidence interval for β_i.

Linear Regression in R. [Figure: residuals versus fitted values.] It is always a good idea to look at the plot of residuals versus fitted values; there should be no apparent pattern. > plot(holdlm$fitted.values, holdlm$residuals)

Linear Regression in R. [Figure: normal probability plot of the residuals.] Is it reasonable to assume the ǫ are Gaussian? Check the normal probability plot of the residuals. > qqnorm(holdlm$residuals) > qqline(holdlm$residuals)

Linear Regression in R. It is simple to include more terms in the model: > holdlm = lm(y ~ x + z). Unfortunately, this does not work: > holdlm = lm(y ~ x + x^2). Instead, create a new variable and use that: > x2 = x^2 followed by > holdlm = lm(y ~ x + x2).

The Correlation Coefficient. A standard way of quantifying the strength of the linear relationship between two variables is the (Pearson) correlation coefficient. Sample version: r = (1/(n−1)) Σ_{i=1}^n X̃_i Ỹ_i, where X̃_i and Ỹ_i are the standardized versions of X_i and Y_i, i.e., X̃_i = (X_i − X̄)/s_X.

The Correlation Coefficient. Note that −1 ≤ r ≤ 1. r = 1 indicates a perfect linear, increasing relationship; r = −1 indicates a perfect linear, decreasing relationship. Also, note that β̂_1 = r (s_Y / s_X). In R: use cor().
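
A small R check of these facts on simulated data (names are illustrative):

set.seed(1)
x <- runif(100)
y <- 2 * x + rnorm(100, sd = 0.3)

xt <- (x - mean(x)) / sd(x)                   # standardized X
yt <- (y - mean(y)) / sd(y)                   # standardized Y
r  <- sum(xt * yt) / (length(x) - 1)
all.equal(r, cor(x, y))                       # TRUE
r * sd(y) / sd(x)                             # matches the slope coef(lm(y ~ x))[2]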

The Correlation Coefficient. [Figure: examples of scatter plots and their values of r, from Wikipedia.]

The Coefficient of Determination. A very popular, but overused, regression diagnostic is the coefficient of determination, denoted R^2: R^2 = 1 − Σ_{i=1}^n (Y_i − Ŷ_i)^2 / Σ_{i=1}^n (Y_i − Ȳ)^2, the proportion of the variation explained by the model. Note that 0 ≤ R^2 ≤ 1 and, in simple linear regression, R^2 = r^2. But: a large R^2 does not imply that the model is a good fit.

Anscombe's Quartet (1973). Each of these data sets has the same values for the marginal means and variances, the same r, the same R^2, and the same β̂_0 and β̂_1.

Next... Now we will begin our discussion of nonparametric regression. Reconsider the SNe example from the start of the lecture.

Example: Type Ia Supernovae. [Figure: nonparametric estimate overlaid on the distance modulus versus redshift data.]

The Bias-Variance Tradeoff. Nonparametric case: every nonparametric procedure requires a smoothing parameter h. Consider the regression estimator based on local averaging: f̂(x) = mean{Y_i : |X_i − x| ≤ h}. Increasing h yields a smoother estimate f̂.

What is nonparametric? [Diagram: Data and Assumptions both feed into the Estimate.] In the parametric case, the influence of the assumptions is fixed.

What is nonparametric? [Diagram: as before, but the weight given to the assumptions is controlled by h.] In the nonparametric case, the influence of the assumptions is controlled by h, where h = O(n^{-1/5}).

The Bias-Variance Tradeoff. Can formalize this via an appropriate measure of the error in the estimator. Squared error loss: L(f(x), f̂(x)) = (f(x) − f̂(x))^2. Mean squared error, MSE (risk): MSE = R(f(x), f̂(x)) = E(L(f(x), f̂(x))).

The Bias-Variance Tradeoff. The key result: R(f(x), f̂(x)) = bias_x^2 + variance_x, where bias_x = E(f̂(x)) − f(x) and variance_x = Var(f̂(x)). Often written as MSE = BIAS^2 + VARIANCE.

The Bias-Variance Tradeoff. Mean integrated squared error: MISE = ∫ R(f(x), f̂(x)) dx = ∫ bias_x^2 dx + ∫ variance_x dx. Empirical version: (1/n) Σ_{i=1}^n R(f(x_i), f̂(x_i)).

The Bias-Variance Tradeoff. [Figure: risk, squared bias, and variance versus the amount of smoothing (less to more); the risk is minimized at an intermediate, optimal amount of smoothing.]

The Bias-Variance Tradeoff. For nonparametric procedures, when x is one-dimensional, MISE ≈ c_1 h^4 + c_2/(nh), which is minimized at h = O(n^{-1/5}). Hence MISE = O(n^{-4/5}), whereas, for parametric problems, when the model is correct, MISE = O(1/n).

Next... Now we will begin our discussion of nonparametric regression procedures. But these nonparametric procedures share a common structure with linear regression: namely, they are all linear smoothers.

Linear Smoothers. For a linear smoother: f̂(x) = Σ_i Y_i l_i(x) for some weights l_1(x), ..., l_n(x). The vector of weights depends on the target point x.

Linear Smoothers. If f̂(x) = Σ_{i=1}^n l_i(x) Y_i, then Ŷ = (f̂(X_1), ..., f̂(X_n))^T = LY, where L is the n × n matrix with (i, j) entry l_j(X_i). The effective degrees of freedom is ν = trace(L) = Σ_{i=1}^n L_ii. It can be interpreted as the number of parameters in the model. Recall that ν = p + 1 for linear regression.
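
A short R sketch (simulated data; names are illustrative) that builds the smoothing matrix L for the simple local-average smoother and reads off ν = trace(L):

set.seed(1)
x <- sort(runif(50))
y <- sin(2 * pi * x) + rnorm(50, sd = 0.3)

h <- 0.1
W <- outer(x, x, function(a, b) as.numeric(abs(a - b) <= h))  # 0/1 neighbourhood weights
L <- W / rowSums(W)                           # row i holds the weights l_j(X_i)
yhat <- L %*% y                               # Yhat = L Y
sum(diag(L))                                  # effective degrees of freedom nu = trace(L)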

Linear Smoothers. Consider the extreme cases. When l_i(X_j) = 1/n for all i, j: f̂(X_j) = Ȳ for all j, and ν = 1. When l_i(X_j) = δ_ij for all i, j: f̂(X_j) = Y_j, and ν = n.

Next... To begin the discussion of nonparametric procedures, we start with a simple, but naive, approach: binning.

Binning. Divide the x-axis into bins B_1, B_2, ... of width h. f̂ is a step function based on averaging the Y_i's in each bin: for x ∈ B_j, f̂(x) = mean{Y_i : X_i ∈ B_j}. The (arbitrary) choice of the bin boundaries can affect inference, especially when h is large.
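
A minimal R sketch of the binned estimator on simulated data (the bin width and object names are illustrative):

set.seed(1)
x <- runif(200)
y <- sin(2 * pi * x) + rnorm(200, sd = 0.3)

h      <- 0.1
breaks <- seq(0, 1, by = h)                   # bin boundaries B_1, B_2, ...
bin    <- cut(x, breaks, include.lowest = TRUE)
fhat   <- tapply(y, bin, mean)                # one average Y per bin
fhat[as.character(bin[1])]                    # step-function value at x[1]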

Binning. WMAP data, binned estimate with h = 15. [Figure: binned estimate of Cl versus multipole l.]

Binning. WMAP data, binned estimate with h = 50. [Figure: binned estimate of Cl versus multipole l.]

Binning. WMAP data, binned estimate with h = 75. [Figure: binned estimate of Cl versus multipole l.]

Next... A step beyond binning is to take local averages to estimate the regression function.

Local Averaging. For each x let N_x = [x − h/2, x + h/2]. This is a moving window of length h, centered at x. Define f̂(x) = mean{Y_i : X_i ∈ N_x}. This is like binning, but removes the arbitrary boundaries.

Local Averaging. WMAP data, local average estimate with h = 15. [Figure: binned and local average estimates of Cl versus l.]

Local Averaging. WMAP data, local average estimate with h = 50. [Figure: binned and local average estimates of Cl versus l.]

Local Averaging. WMAP data, local average estimate with h = 75. [Figure: binned and local average estimates of Cl versus l.]

Next... Local averaging is a special case of kernel regression.

Kernel Regression. The local average estimator can be written f̂(x) = Σ_{i=1}^n Y_i K((x − X_i)/h) / Σ_{i=1}^n K((x − X_i)/h), where K(x) = 1 if |x| < 1/2 and 0 otherwise. We can improve this by using a smoother function K.

Kernels. A kernel is any smooth function K such that K(x) ≥ 0, ∫ K(x) dx = 1, ∫ x K(x) dx = 0, and σ_K^2 ≡ ∫ x^2 K(x) dx > 0. Some commonly used kernels are the following: the boxcar kernel, K(x) = (1/2) I(|x| < 1); the Gaussian kernel, K(x) = (1/√(2π)) e^{−x^2/2}; the Epanechnikov kernel, K(x) = (3/4)(1 − x^2) I(|x| < 1); the tricube kernel, K(x) = (70/81)(1 − |x|^3)^3 I(|x| < 1).
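
These four kernels are easy to write down in R; the sketch below (function names are my own) also checks numerically that the Epanechnikov kernel integrates to 1 and has σ_K^2 > 0:

boxcar   <- function(x) 0.5 * (abs(x) < 1)
gaussian <- function(x) dnorm(x)              # (1/sqrt(2*pi)) * exp(-x^2/2)
epan     <- function(x) 0.75 * (1 - x^2) * (abs(x) < 1)
tricube  <- function(x) (70/81) * (1 - abs(x)^3)^3 * (abs(x) < 1)

integrate(epan, -1, 1)$value                  # = 1
integrate(function(u) u^2 * epan(u), -1, 1)$value   # sigma_K^2 = 0.2 > 0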

Kernels. [Figure: the shapes of the four kernels, each plotted on the interval from −3 to 3.]

Kernel Regression. We can write f̂(x) = Σ_{i=1}^n Y_i K((x − X_i)/h) / Σ_{i=1}^n K((x − X_i)/h) as f̂(x) = Σ_i Y_i l_i(x), where l_i(x) = K((x − X_i)/h) / Σ_{j=1}^n K((x − X_j)/h).
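
A minimal R implementation of this estimator with a Gaussian kernel (simulated data; the function name kernreg is illustrative):

set.seed(1)
x <- runif(200)
y <- sin(2 * pi * x) + rnorm(200, sd = 0.3)

kernreg <- function(x0, x, y, h) {
  w <- dnorm((x0 - x) / h)                    # kernel weights K((x0 - X_i)/h)
  l <- w / sum(w)                             # the weights l_i(x0)
  sum(l * y)                                  # fhat(x0) = sum_i l_i(x0) Y_i
}
grid <- seq(0, 1, length.out = 100)
fhat <- sapply(grid, kernreg, x = x, y = y, h = 0.05)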

Kernel Regression. WMAP data, kernel regression estimates, h = 15. [Figure: boxcar, Gaussian, and tricube kernel estimates of Cl versus l.]

Kernel Regression. WMAP data, kernel regression estimates, h = 50. [Figure: boxcar, Gaussian, and tricube kernel estimates of Cl versus l.]

Kernel Regression. WMAP data, kernel regression estimates, h = 75. [Figure: boxcar, Gaussian, and tricube kernel estimates of Cl versus l.]

Kernel Regression. MISE ≈ (h^4/4) (∫ x^2 K(x) dx)^2 ∫ ( f''(x) + 2 f'(x) g'(x)/g(x) )^2 dx + (σ^2 ∫ K^2(x) dx)/(nh) ∫ (1/g(x)) dx, where g(x) is the density of X. What is especially notable is the presence of the term 2 f'(x) g'(x)/g(x), the design bias. Also, the bias is large near the boundary. We can reduce these biases using local polynomials.

Kernel Regression. WMAP data, h = 50. Note the boundary bias. [Figure: boxcar, Gaussian, and tricube estimates of Cl for l up to 100.]

Next... The deficiencies of kernel regression can be addressed by considering models which, on small scales, model the regression function as a low order polynomial.

Local Polynomial Regression. Recall polynomial regression: f̂(x) = β̂_0 + β̂_1 x + β̂_2 x^2 + ... + β̂_p x^p, where β̂ = (β̂_0, ..., β̂_p) is obtained by least squares: minimize Σ_{i=1}^n (Y_i − [β_0 + β_1 X_i + β_2 X_i^2 + ... + β_p X_i^p])^2.

Local Polynomial Regression. Local polynomial regression: approximate f(x) locally by a different polynomial for every x: f(u) ≈ β_0(x) + β_1(x)(u − x) + β_2(x)(u − x)^2 + ... + β_p(x)(u − x)^p for u near x. Estimate (β̂_0(x), ..., β̂_p(x)) by local least squares: minimize Σ_{i=1}^n K((x − X_i)/h) (Y_i − [β_0(x) + β_1(x)(X_i − x) + ... + β_p(x)(X_i − x)^p])^2, where K((x − X_i)/h) is the kernel weight. Then f̂(x) = β̂_0(x).

Local Polynomial Regression. Taking p = 0 yields the kernel regression estimator: f̂_n(x) = Σ_{i=1}^n l_i(x) Y_i with l_i(x) = K((x − X_i)/h) / Σ_{j=1}^n K((x − X_j)/h). Taking p = 1 yields the local linear estimator. This is the best all-purpose smoother. Choice of kernel K: not important. Choice of bandwidth h: crucial.

Local Polynomial Regression. The local polynomial regression estimate is f̂_n(x) = Σ_{i=1}^n l_i(x) Y_i, where l(x)^T = (l_1(x), ..., l_n(x)) is given by l(x)^T = e_1^T (X_x^T W_x X_x)^{-1} X_x^T W_x, with e_1 = (1, 0, ..., 0)^T. Here X_x is the n × (p+1) matrix whose i-th row is (1, X_i − x, ..., (X_i − x)^p / p!), and W_x is the diagonal matrix with entries K((x − X_i)/h).
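
The matrix formula translates directly into R. The sketch below (simulated data; the function name loclin is illustrative) evaluates the local linear estimate (p = 1) at a single target point:

set.seed(1)
x <- runif(200)
y <- sin(2 * pi * x) + rnorm(200, sd = 0.3)

loclin <- function(x0, x, y, h) {
  Xx <- cbind(1, x - x0)                      # rows (1, X_i - x0); p = 1
  Wx <- diag(dnorm((x0 - x) / h))             # kernel weights on the diagonal
  lx <- solve(t(Xx) %*% Wx %*% Xx, t(Xx) %*% Wx)[1, ]  # e_1^T (Xx' Wx Xx)^{-1} Xx' Wx
  sum(lx * y)                                 # fhat(x0) = beta0hat(x0)
}
loclin(0.5, x, y, h = 0.05)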

Local Polynomial Regression. Note that f̂(x) = Σ_{i=1}^n l_i(x) Y_i is a linear smoother. Define Ŷ = (Ŷ_1, ..., Ŷ_n), where Ŷ_i = f̂(X_i). Then Ŷ = LY, where L is the smoothing matrix with (i, j) entry l_j(X_i). The effective degrees of freedom is ν = trace(L) = Σ_{i=1}^n L_ii.

Theoretical Aside. Why local linear (p = 1) is better than kernel (p = 0). Both have (approximate) variance (σ^2(x)/(g(x) n h)) ∫ K^2(u) du. The kernel estimator has bias h^2 ( (1/2) f''(x) + f'(x) g'(x)/g(x) ) ∫ u^2 K(u) du, whereas the local linear estimator has asymptotic bias (1/2) h^2 f''(x) ∫ u^2 K(u) du. The local linear estimator is free from design bias. At the boundary points, the kernel estimator has asymptotic bias of O(h), while the local linear estimator has bias O(h^2).

Next... Parametric procedures force the choice of p, the number of predictors. Nonparametric procedures force the choice of the smoothing parameter h. Next we will discuss how this can be selected in a data-driven manner via cross-validation.

Choosing p and h. Estimate the risk (1/n) Σ_{i=1}^n E(f̂(X_i) − f(X_i))^2 with the leave-one-out cross-validation score: CV = (1/n) Σ_{i=1}^n (Y_i − f̂_(−i)(X_i))^2, where f̂_(−i) is the estimator obtained by omitting the i-th pair (X_i, Y_i).

Choosing p and h. Amazing shortcut formula: CV = (1/n) Σ_{i=1}^n ( (Y_i − f̂_n(X_i)) / (1 − L_ii) )^2. A commonly used approximation is GCV (generalized cross-validation): GCV = (1/n) Σ_{i=1}^n (Y_i − f̂(X_i))^2 / (1 − ν/n)^2, where ν = trace(L).
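
A small R illustration of the shortcut formulas. Any linear smoother's matrix L works the same way; here an ordinary quadratic least-squares fit keeps the sketch short (simulated data; names are illustrative):

set.seed(1)
x <- runif(100)
y <- sin(2 * pi * x) + rnorm(100, sd = 0.3)

D    <- cbind(1, x, x^2)                      # quadratic fit as the example linear smoother
L    <- D %*% solve(t(D) %*% D) %*% t(D)      # its smoothing (hat) matrix
yhat <- L %*% y
Lii  <- diag(L)
nu   <- sum(Lii)                              # effective degrees of freedom
CV   <- mean(((y - yhat) / (1 - Lii))^2)      # leave-one-out shortcut
GCV  <- mean((y - yhat)^2) / (1 - nu / length(y))^2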

Next... Next, we will go over a bit of how to perform local polynomial regression in R.

Using locfit(). Need to include the locfit library:
> install.packages("locfit")
> library(locfit)
> result = locfit(y ~ x, alpha=c(0, 1.5), deg=1)
y and x are vectors; the second argument to alpha gives the bandwidth (h); the first argument to alpha specifies the nearest neighbor fraction, an alternative to the bandwidth. fitted(result) gives the fitted values, f̂(X_i); residuals(result) gives the residuals, Y_i − f̂(X_i).

Using locfit(). [Figure: degree = 1, n = 1000, σ = 0.1; data from f(x) = x(1 − x) sin(2.1π/(x + 0.2)); estimate using h = 0.005 versus the estimate using the GCV-optimal h = 0.015.]

Using locfit(). [Figure: degree = 1, n = 1000, σ = 0.1; same target function; estimate using h = 0.1 versus the estimate using the GCV-optimal h = 0.015.]

Using locfit(). [Figure: degree = 1, n = 1000, σ = 0.5; data from f(x) = x + 5 exp(−4x^2); estimate using h = 0.05 versus the estimate using the GCV-optimal h = 0.174.]

Using locfit(). [Figure: degree = 1, n = 1000, σ = 0.5; same target function; estimate using h = 0.5 versus the estimate using the GCV-optimal h = 0.174.]

Example. WMAP data, local linear fit, h = 15. [Figure: estimate using h = 15 versus the estimate using the GCV-optimal h = 63.012.]

Example. WMAP data, local linear fit, h = 125. [Figure: estimate using h = 125 versus the estimate using the GCV-optimal h = 63.012.]

Next... We should quantify the amount of error in our estimator for the regression function. Next we will look at how this can be done using local polynomial regression.

Variance Estimation. Let σ̂^2 = Σ_{i=1}^n (Y_i − f̂(X_i))^2 / (n − 2ν + ν̃), where ν = tr(L) and ν̃ = tr(L^T L) = Σ_{i=1}^n ||l(X_i)||^2. If f is sufficiently smooth, then σ̂^2 is a consistent estimator of σ^2.

Variance Estimation. For the WMAP data, using the local linear fit:
> wmap = read.table("wmap.dat", header=T)
> opth = 63.0
> locfitwmap = locfit(wmap$cl[1:700] ~ wmap$ell[1:700], alpha=c(0,opth), deg=1)
> nu = as.numeric(locfitwmap$dp[6])
> nutilde = as.numeric(locfitwmap$dp[7])
> sigmasqrhat = sum(residuals(locfitwmap)^2)/(700-2*nu+nutilde)
> sigmasqrhat
[1] 1122214
But it does not seem reasonable to assume homoscedasticity...

Variance Estimation. Allow σ to be a function of x: Y_i = f(x_i) + σ(x_i) ǫ_i. Let Z_i = log(Y_i − f(x_i))^2 and δ_i = log ǫ_i^2. Then Z_i = log(σ^2(x_i)) + δ_i. This suggests estimating log σ^2(x) by regressing the log squared residuals on x.

Variance Estimation. 1. Estimate f(x) with any nonparametric method to get an estimate f̂_n(x). 2. Define Z_i = log(Y_i − f̂_n(x_i))^2. 3. Regress the Z_i's on the x_i's (again using any nonparametric method) to get an estimate q̂(x) of log σ^2(x), and let σ̂^2(x) = e^{q̂(x)}.
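
A sketch of the three-step recipe in R, using loess as a generic smoother and simulated heteroscedastic data (the slides use locfit; loess is just a convenient stand-in here, and the names are illustrative):

set.seed(1)
x <- runif(300)
y <- sin(2 * pi * x) + (0.2 + 0.3 * x) * rnorm(300)   # heteroscedastic errors

fit1 <- loess(y ~ x)                          # step 1: estimate f
z    <- log(residuals(fit1)^2)                # step 2: Z_i = log (Y_i - fhat(x_i))^2
fit2 <- loess(z ~ x)                          # step 3: smooth the Z_i on x
sigma2hat <- exp(fitted(fit2))                # sigmahat^2(x_i) = exp(qhat(x_i))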

Example. WMAP data, log squared residuals, local linear fit, h = 63.0. [Figure: log squared residuals versus l.]

Example. Local linear fit to the log squared residuals, with h = 130 chosen via GCV. [Figure: log squared residuals versus l with the fit.]

Example. Estimating σ(x). [Figure: σ̂(x) versus l.]

Confidence Bands. Recall that f̂(x) = Σ_i Y_i l_i(x), so Var(f̂(x)) = Σ_i σ^2(X_i) l_i^2(x). An approximate 1 − α confidence interval for f(x) is f̂(x) ± z_{α/2} √( Σ_i σ^2(X_i) l_i^2(x) ). When σ(x) is smooth, we can approximate √( Σ_i σ^2(X_i) l_i^2(x) ) ≈ σ(x) ||l(x)||.

Confidence Bands. Two caveats: 1. f̂ is biased, so this is really an interval for E(f̂(x)). Result: the bands can miss sharp peaks in the function. 2. Pointwise coverage does not imply simultaneous coverage for all x. The solution for 2 is to replace z_{α/2} with a larger number (Sun and Loader 1994). locfit() does this for you.

More on locfit().
> diaghat = predict.locfit(locfitwmap, where="data", what="infl")
> normell = predict.locfit(locfitwmap, where="data", what="vari")
diaghat will be L_ii, i = 1, 2, ..., n; normell will be ||l(X_i)||, i = 1, 2, ..., n. The Sun and Loader replacement for z_{α/2} is found using kappa0(locfitwmap)$crit.val.

Example. 95% pointwise confidence bands. [Figure: the estimate with 95% pointwise bands, assuming constant variance and using nonconstant variance.]

Example. 95% simultaneous confidence bands. [Figure: the estimate with 95% simultaneous bands, assuming constant variance and using nonconstant variance.]

Next... We discuss the use of regression splines. These are a well-motivated choice of basis for constructing regression functions.

Basis Methods. Idea: expand f as f(x) = Σ_j β_j ψ_j(x), where ψ_1(x), ψ_2(x), ... are specially chosen, known functions. Then estimate the β_j and set f̂(x) = Σ_j β̂_j ψ_j(x). Here we consider a popular version: splines.

Splines and Penalization. Define f̂_n to be the function that minimizes M(λ) = Σ_i (Y_i − f(x_i))^2 + λ ∫ (f''(x))^2 dx. λ = 0: f̂_n(X_i) = Y_i (no smoothing). λ = ∞: f̂_n(x) = β̂_0 + β̂_1 x (linear). 0 < λ < ∞: f̂_n is a cubic spline with knots at the X_i. A cubic spline is a continuous function f such that 1. f is a cubic polynomial between the X_i's, and 2. f has continuous first and second derivatives at the X_i's.

Basis For Splines. Define (z)_+ = max{z, 0} and N = n + 4, and let ψ_1(x) = 1, ψ_2(x) = x, ψ_3(x) = x^2, ψ_4(x) = x^3, ψ_5(x) = (x − X_1)^3_+, ψ_6(x) = (x − X_2)^3_+, ..., ψ_N(x) = (x − X_n)^3_+. These functions form a basis for the splines: we can write f(x) = Σ_{j=1}^N β_j ψ_j(x). (For numerical calculations it is actually more efficient to use other spline bases.) We can thus write f̂_n(x) = Σ_{j=1}^N β̂_j ψ_j(x).

Basis For Splines. We can now rewrite the minimization as follows: minimize (Y − Ψβ)^T (Y − Ψβ) + λ β^T Ω β, where Ψ_ij = ψ_j(X_i) and Ω_jk = ∫ ψ_j''(x) ψ_k''(x) dx. The value of β that minimizes this is β̂ = (Ψ^T Ψ + λΩ)^{-1} Ψ^T Y. The smoothing spline f̂_n(x) is a linear smoother; that is, there exist weights l(x) such that f̂_n(x) = Σ_{i=1}^n Y_i l_i(x).
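
A toy R sketch of this penalized fit with the truncated-power basis. For numerical stability it uses a handful of interior knots rather than one knot per data point, and approximates Ω on a fine grid; as the slide notes, other bases are preferred for serious computation. Data and names are illustrative:

set.seed(1)
x <- sort(runif(100))
y <- sin(2 * pi * x) + rnorm(100, sd = 0.3)

knots <- quantile(x, probs = seq(0.1, 0.9, by = 0.1))
Psi   <- cbind(1, x, x^2, x^3, outer(x, knots, function(a, b) pmax(a - b, 0)^3))
grid  <- seq(0, 1, length.out = 2000)
# second derivatives of the basis functions on a fine grid, to approximate Omega
d2    <- cbind(0, 0, 2, 6 * grid, outer(grid, knots, function(a, b) 6 * pmax(a - b, 0)))
Omega <- t(d2) %*% d2 * (grid[2] - grid[1])   # Omega_jk ~ int psi_j''(x) psi_k''(x) dx
lambda  <- 1e-3
betahat <- solve(t(Psi) %*% Psi + lambda * Omega, t(Psi) %*% y)
fhat    <- Psi %*% betahat                    # the penalized spline fit at the data points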

Basis For Splines. [Figure: truncated-power basis functions with 5 knots on [0, 1].]

Basis For Splines. [Figure: the B-spline basis on the same span as the previous slide, 5 knots.]

Smoothing Splines in R.
> smosplresult = smooth.spline(x, y, cv=FALSE, all.knots=TRUE)
If cv=TRUE, then cross-validation is used to choose λ; if cv=FALSE, GCV is used. If all.knots=TRUE, then knots are placed at all data points; if all.knots=FALSE, then set nknots to specify the number of knots to be used. Using fewer knots eases the computational cost. predict(smosplresult) gives the fitted values.

Example. WMAP data, smoothing spline with λ chosen using GCV. [Figure: Cl versus l with the spline fit.]

Next... Finally, we address the challenges of fitting regression models when the predictor x is of large dimension.

Recall: The Bias-Variance Tradeoff. For nonparametric procedures, when x is one-dimensional, MISE ≈ c_1 h^4 + c_2/(nh), which is minimized at h = O(n^{-1/5}). Hence MISE = O(n^{-4/5}), whereas, for parametric problems, when the model is correct, MISE = O(1/n).

The Curse of Dimensionality. For nonparametric procedures, when x is d-dimensional, MISE ≈ c_1 h^4 + c_2/(n h^d), which is minimized at h = O(n^{-1/(4+d)}). Hence MISE = O(n^{-4/(4+d)}), whereas, for parametric problems, when the model is correct, still MISE = O(1/n).

Multiple Regression Y = f(x 1,X 2,...,X d )+ǫ The curse of dimensionality: Optimal rate of convergence for d = 1 is n 4/5. In d dimensions the optimal rate of convergence is is n 4/(4+d). Thus, the sample size m required for a d-dimensional problem to have the same accuracy as a sample size n in a one-dimensional problem is m n cd where c = (4+d)/(5d) > 0. To maintain a given degree of accuracy of an estimator, the sample size must increase exponentially with the dimension d. Put another way, confidence bands get very large as the dimension d increases. Introduction to Regression p. 95/100

Multiple Local Linear. Given a nonsingular positive definite d × d bandwidth matrix H, we define K_H(x) = |H|^{-1/2} K(H^{-1/2} x). Often, one scales each covariate to have the same mean and variance and then uses the kernel h^{-d} K(||x||/h), where K is any one-dimensional kernel. Then there is a single bandwidth parameter h.

Multiple Local Linear. At a target value x = (x_1, ..., x_d)^T, the local sum of squares is given by Σ_{i=1}^n w_i(x) (Y_i − a_0 − Σ_{j=1}^d a_j (x_ij − x_j))^2, where w_i(x) = K(||x_i − x||/h). The estimator is f̂_n(x) = â_0, where â = (â_0, ..., â_d)^T is the value of a = (a_0, ..., a_d)^T that minimizes the weighted sum of squares.

Multiple Local Linear. The solution is â = (X_x^T W_x X_x)^{-1} X_x^T W_x Y, where X_x is the n × (d+1) matrix whose i-th row is (1, x_i1 − x_1, ..., x_id − x_d), and W_x is the diagonal matrix whose (i, i) element is w_i(x).

A Compromise: Additive Models. Y = α + Σ_{j=1}^d f_j(x_j) + ǫ. Usually take α̂ = Ȳ. Then estimate the f_j's by backfitting: 1. Set α̂ = Ȳ and f̂_1 = ... = f̂_d = 0. 2. Iterate until convergence: for j = 1, ..., d, compute Ỹ_i = Y_i − α̂ − Σ_{k≠j} f̂_k(X_i), i = 1, ..., n; apply a smoother to the Ỹ_i on X_j to obtain f̂_j; set f̂_j(x) equal to f̂_j(x) − n^{-1} Σ_{i=1}^n f̂_j(X_i). The last step ensures that Σ_i f̂_j(X_i) = 0 (identifiability).
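
A bare-bones backfitting sketch in R for d = 2, using loess as the smoother (simulated data; a fixed number of sweeps stands in for a convergence check; names are illustrative):

set.seed(1)
n  <- 300
x1 <- runif(n); x2 <- runif(n)
y  <- sin(2 * pi * x1) + (x2 - 0.5)^2 + rnorm(n, sd = 0.3)

alpha <- mean(y)
f1 <- rep(0, n); f2 <- rep(0, n)
for (it in 1:20) {                            # fixed number of backfitting sweeps
  r1 <- y - alpha - f2                        # partial residuals for f_1
  f1 <- fitted(loess(r1 ~ x1)); f1 <- f1 - mean(f1)   # smooth, then center
  r2 <- y - alpha - f1                        # partial residuals for f_2
  f2 <- fitted(loess(r2 ~ x2)); f2 <- f2 - mean(f2)
}
yhat <- alpha + f1 + f2                       # additive fit at the observed points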

To Sum Up... Classic linear regression is computationally and theoretically simple. Nonparametric procedures for regression offer increased flexibility. Careful consideration of MISE suggests the use of local polynomial fits. Smoothing parameters can be chosen objectively. High-dimensional data present a challenge.