Introduction to Regression
1 Introduction to Regression Chad M. Schafer May 20, 2015
2 Outline General Concepts of Regression, Bias-Variance Tradeoff Linear Regression Nonparametric Procedures Cross Validation Local Polynomial Regression Confidence Bands Basis Methods: Splines Multiple Regression
3 Basic Concepts in Regression
The Regression Problem: The objective is to construct a model that can be used to predict the response variable Y from observations of the predictor variables X. Note: in this framework, Y is real-valued, while X may be a vector.
Example: predicting redshift from photometric information, with Y = redshift and X = intensities in different photometric bands.
4 Basic Concepts in Regression
The Regression Problem: Observe a training sample (X_1, Y_1), ..., (X_n, Y_n), and use it to estimate f(x) = E(Y | X = x).
Equivalently: estimate f(x) in the model
Y_i = f(X_i) + ε_i,  i = 1, 2, ..., n,
where E(ε_i) = 0.
Simple estimators when X is real-valued:
f̂(x) = β̂_0 + β̂_1 x  (parametric)
f̂(x) = mean{Y_i : |X_i − x| ≤ h}  (nonparametric)
5 Basic Concepts of Regression
Parametric Regression:
Y = β_0 + β_1 X + ε
Y = β_0 + β_1 X_1 + ... + β_d X_d + ε
Y = β_0 e^{β_1 X_1} + β_2 X_2 + ε
Nonparametric Regression:
Y = f(X) + ε
Y = f_1(X_1) + ... + f_d(X_d) + ε
Y = f(X_1, X_2) + ε
6 Next... We will first quickly look at a simple example from astronomy
7 Example: Type Ia Supernovae. [Plot: distance modulus versus redshift.] From Riess et al. (2007), 182 Type Ia supernovae.
8 Example: Type Ia Supernovae
Assumption: the observed (redshift, distance modulus) pairs (z_i, Y_i) are related by
Y_i = f(z_i) + σ_i ε_i,
where the ε_i are i.i.d. standard normal. The error standard deviations σ_i are assumed known.
Objective: estimate f(·).
9 Example: Type Ia Supernovae
ΛCDM, two-parameter model:
f(z) = 5 log10[ (c(1+z)/H_0) ∫_0^z du / sqrt(Ω_m(1+u)^3 + (1 − Ω_m)) ] + 25
10 Example: Type Ia Supernovae. [Plot: distance modulus versus redshift, with the fitted ΛCDM model.] Estimate with H_0 and Ω_m set to their MLE values.
11 Example: Type Ia Supernovae. [Plot: distance modulus versus redshift.] Fourth-order polynomial fit: Y = β_0 + β_1 z + β_2 z² + β_3 z³ + β_4 z⁴ + ε
12 Example: Type Ia Supernovae. [Plot: distance modulus versus redshift.] Fifth-order polynomial fit: Y = β_0 + β_1 z + β_2 z² + β_3 z³ + β_4 z⁴ + β_5 z⁵ + ε
13 Next... The previous two slides illustrate the fundamental challenge of constructing a regression model: How complex should it be?
14 The Bias-Variance Tradeoff
Must choose the model to achieve a balance between:
too simple: high bias, i.e., E(f̂(x)) not close to f(x); the estimate is precise, but not accurate.
too complex: high variance, i.e., Var(f̂(x)) is large; the estimate is accurate, but not precise.
This is the classic bias-variance tradeoff. It could also be called the accuracy-precision tradeoff.
15 The Bias-Variance Tradeoff
An Example: consider the linear regression model. These models are of the form
f(x) = β_0 + Σ_{i=1}^p β_i g_i(x)
where the g_i(·) are fixed, specified functions (for example, g_i(x) = x^i). Increasing p yields a more complex model.
16 Next... Linear regression models are the classic approach to regression. Here, the response is modelled as a linear combination of p selected predictors.
17 Linear Regression
Assume that
Y = β_0 + Σ_{i=1}^p β_i g_i(x) + ε
where the g_i(·) are specified functions, not estimated. Assume that E(ε) = 0 and Var(ε) = σ². Often ε is assumed Gaussian, but this is not essential.
Start with the simple linear regression model: Y = β_0 + β_1 x + ε
18 Linear Regression. [Scatter plot: redshift versus g − r color.] LRG data from 2SLAQ.
19 Linear Regression. [Same scatter plot, with the fitted line.] Showing one of the fitted values, Ŷ_i.
20 Linear Regression. [Same scatter plot.] Showing one of the residuals, ε̂_i.
21 Linear Regression
The fitted line is Ŷ_i = β̂_0 + β̂_1 x_i. The estimates β̂_0 and β̂_1 minimize the sum of squared errors:
SSE = Σ_{i=1}^n ε̂_i² = Σ_{i=1}^n (Y_i − Ŷ_i)²
This is least squares regression.
If the ε_i are Gaussian, then the β̂ are Gaussian, leading to confidence intervals and hypothesis tests.
22 Linear Regression Least squares has a natural geometric interpretation.
23 Linear Regression
The Design Matrix:
D = [ 1  g_1(X_1)  g_2(X_1)  ...  g_p(X_1)
      1  g_1(X_2)  g_2(X_2)  ...  g_p(X_2)
      ...
      1  g_1(X_n)  g_2(X_n)  ...  g_p(X_n) ]
Then L = D(DᵀD)⁻¹Dᵀ is the projection matrix. Note that ν = trace(L) = p + 1 = the complexity of the model.
24 Linear Regression
Let Y be the n-vector (Y_1, Y_2, ..., Y_n)ᵀ.
The least squares estimates of β: β̂ = (DᵀD)⁻¹DᵀY
The fitted values: Ŷ = LY = Dβ̂
The residuals: ε̂ = Y − Ŷ = (I − L)Y
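These matrix formulas can be checked numerically. Below is a minimal NumPy sketch (Python is used here for a self-contained illustration; the toy data and variable names are our own):

```python
import numpy as np

# Toy data roughly following Y = 1 + 2x + noise
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
Y = np.array([1.1, 2.9, 5.2, 6.8, 9.0])

# Design matrix D with an intercept column and g_1(x) = x
D = np.column_stack([np.ones_like(x), x])

# Least squares estimates: beta_hat = (D^T D)^{-1} D^T Y
beta_hat = np.linalg.solve(D.T @ D, D.T @ Y)

# Projection ("hat") matrix L, fitted values, residuals
L = D @ np.linalg.solve(D.T @ D, D.T)
Y_hat = L @ Y
resid = Y - Y_hat

# Model complexity: trace(L) = p + 1 = 2 here
nu = np.trace(L)
```

The residuals are orthogonal to the columns of D, a direct consequence of L being a projection.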
25 Linear Regression in R
Linear regression in R is done via the command lm():
> holdlm = lm(y ~ x)
Note: that is a tilde (~) between y and x. The default is to include an intercept term. To exclude it, use
> holdlm = lm(y ~ x - 1)
26 Linear Regression in R. [Scatter plot: y versus x.]
27 Linear Regression in R
Fitting the model Y = β_0 + β_1 x + ε. The summary() shows the key output:
> holdlm = lm(y ~ x)
> summary(holdlm)
< CUT SOME HERE >
Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)                                    *
x                                       e-10 ***
< CUT SOME HERE >
Note that β̂_1 = 2.07; the SE for β̂_1 is approximately 0.17.
28 Linear Regression in R
Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)                                    *
x                                       e-10 ***
If the ε are Gaussian, then the last column gives the p-value for the test of H_0: β_i = 0 versus H_1: β_i ≠ 0.
Also, if the ε are Gaussian, β̂_i ± 2 ŜE(β̂_i) is an approximate 95% confidence interval for β_i.
29 Linear Regression in R
[Plot: residuals versus fitted values.]
It is always a good idea to look at a plot of the residuals versus the fitted values. There should be no apparent pattern.
> plot(holdlm$fitted.values, holdlm$residuals)
30 Linear Regression in R
[Normal probability plot: sample quantiles versus theoretical quantiles.]
Is it reasonable to assume the ε are Gaussian? Check the normal probability plot of the residuals.
> qqnorm(holdlm$residuals)
> qqline(holdlm$residuals)
31 Linear Regression in R
It is simple to include more terms in the model:
> holdlm = lm(y ~ x + z)
Unfortunately, this does not work:
> holdlm = lm(y ~ x + x^2)
Instead, use:
> holdlm = lm(y ~ x + I(x^2))
32 The Correlation Coefficient
A standard way of quantifying the strength of the linear relationship between two variables is the (Pearson) correlation coefficient.
Sample version:
r = (1/(n − 1)) Σ_{i=1}^n X̃_i Ỹ_i
where X̃_i and Ỹ_i are the standardized versions of X_i and Y_i, i.e., X̃_i = (X_i − X̄)/s_X.
33 The Correlation Coefficient
Note that −1 ≤ r ≤ 1.
r = 1 indicates a perfect linear, increasing relationship
r = −1 indicates a perfect linear, decreasing relationship
Also, note that β̂_1 = r (s_Y / s_X).
In R: use cor()
34 The Correlation Coefficient Examples of different scatter plots and values for r (Wikipedia)
35 The Coefficient of Determination
A very popular, but overused, regression diagnostic is the coefficient of determination, denoted R²:
R² = 1 − Σ_{i=1}^n (Y_i − Ŷ_i)² / Σ_{i=1}^n (Y_i − Ȳ)²
This is the proportion of the variation explained by the model. Note that 0 ≤ R² ≤ 1 and, for simple linear regression, R² = r².
But: a large R² does not imply that the model is a good fit.
36 Anscombe's Quartet (1973)
Each of these four data sets has the same marginal means and variances, the same r, the same R², and the same β̂_0 and β̂_1.
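The identities β̂_1 = r (s_Y/s_X) and R² = r² for simple linear regression are easy to verify numerically. A small NumPy sketch with made-up data:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
Y = np.array([2.0, 2.5, 4.0, 3.5, 5.5])

# Sample correlation via standardized variables (ddof=1 matches s_X, s_Y)
xt = (x - x.mean()) / x.std(ddof=1)
Yt = (Y - Y.mean()) / Y.std(ddof=1)
r = np.sum(xt * Yt) / (len(x) - 1)

# Least squares slope; should equal r * s_Y / s_X
beta1 = np.sum((x - x.mean()) * (Y - Y.mean())) / np.sum((x - x.mean())**2)

# R^2 for the simple linear fit; should equal r^2
beta0 = Y.mean() - beta1 * x.mean()
Yhat = beta0 + beta1 * x
R2 = 1 - np.sum((Y - Yhat)**2) / np.sum((Y - Y.mean())**2)
```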
37 Next... Now we will begin our discussion of nonparametric regression. Reconsider the SNe example from the start of the lecture.
38 Example: Type Ia Supernovae. [Plot: distance modulus versus redshift.] Nonparametric estimate.
39 The Bias-Variance Tradeoff
Nonparametric case: every nonparametric procedure requires a smoothing parameter h. Consider the regression estimator based on local averaging:
f̂(x) = mean{Y_i : |X_i − x| ≤ h}.
Increasing h yields a smoother estimate f̂.
40 What is nonparametric? [Diagram: Data and Assumptions combine into the Estimate.] In the parametric case, the influence of the assumptions is fixed.
41 What is nonparametric? [Same diagram, with the weight on the assumptions governed by h.] In the nonparametric case, the influence of the assumptions is controlled by h, where h = O(n^{−1/5}).
42 The Bias-Variance Tradeoff
We can formalize this via an appropriate measure of the error in the estimator.
Squared error loss: L(f(x), f̂(x)) = (f(x) − f̂(x))².
Mean squared error (risk): MSE = R(f(x), f̂(x)) = E(L(f(x), f̂(x))).
43 The Bias-Variance Tradeoff
The key result:
R(f(x), f̂(x)) = bias_x² + variance_x
where
bias_x = E(f̂(x)) − f(x)
variance_x = Var(f̂(x))
Often written as MSE = BIAS² + VARIANCE.
44 The Bias-Variance Tradeoff
Mean integrated squared error:
MISE = ∫ R(f(x), f̂(x)) dx = ∫ bias_x² dx + ∫ variance_x dx
Empirical version:
(1/n) Σ_{i=1}^n R(f(x_i), f̂(x_i)).
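The decomposition MSE = bias² + variance can be checked by simulation. The sketch below uses our own toy setup (f(x) = sin x, a local-average estimator at a single target point): it repeatedly regenerates data, collects the estimates, and compares the two sides of the identity, which holds exactly for the empirical quantities.

```python
import numpy as np

rng = np.random.default_rng(0)

# True function value at the fixed target point x0
x0 = 1.0
f_x0 = np.sin(x0)

# Repeatedly simulate data and form a local-average estimate at x0
n, h, reps = 200, 0.2, 500
estimates = np.empty(reps)
for r in range(reps):
    X = rng.uniform(0, 2, n)
    Y = np.sin(X) + rng.normal(0, 0.3, n)
    in_window = np.abs(X - x0) <= h
    estimates[r] = Y[in_window].mean()

mse = np.mean((estimates - f_x0) ** 2)
bias = estimates.mean() - f_x0
variance = estimates.var()   # population form: mean((e - mean(e))^2)
# Identity: mse equals bias**2 + variance, up to rounding
```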
45 The Bias-Variance Tradeoff. [Plot: risk, squared bias, and variance versus the amount of smoothing; squared bias rises and variance falls as smoothing increases, and the risk is minimized at an intermediate, optimal amount of smoothing.]
46 The Bias-Variance Tradeoff
For nonparametric procedures, when x is one-dimensional,
MISE ≈ c_1 h⁴ + c_2/(nh),
which is minimized at h = O(n^{−1/5}). Hence
MISE = O(n^{−4/5}),
whereas for parametric problems, when the model is correct,
MISE = O(1/n).
47 Next... Now we will begin our discussion of nonparametric regression procedures. But, these nonparametric procedures share a common structure with linear regression. Namely, they are all linear smoothers.
48 Linear Smoothers
For a linear smoother:
f̂(x) = Σ_i Y_i ℓ_i(x)
for some weights ℓ_1(x), ..., ℓ_n(x). The vector of weights depends on the target point x.
49 Linear Smoothers
If f̂(x) = Σ_{i=1}^n ℓ_i(x) Y_i, then Ŷ = LY, where Ŷ = (Ŷ_1, ..., Ŷ_n)ᵀ = (f̂(X_1), ..., f̂(X_n))ᵀ and
L = [ ℓ_1(X_1)  ℓ_2(X_1)  ...  ℓ_n(X_1)
      ℓ_1(X_2)  ℓ_2(X_2)  ...  ℓ_n(X_2)
      ...
      ℓ_1(X_n)  ℓ_2(X_n)  ...  ℓ_n(X_n) ]
The effective degrees of freedom is ν = trace(L) = Σ_{i=1}^n L_ii. It can be interpreted as the number of parameters in the model. Recall that ν = p + 1 for linear regression.
50 Linear Smoothers
Consider the extreme cases:
When ℓ_i(X_j) = 1/n for all i, j: f̂(X_j) = Ȳ for all j, and ν = 1.
When ℓ_i(X_j) = δ_ij for all i, j: f̂(X_j) = Y_j, and ν = n.
51 Next... To begin the discussion of nonparametric procedures, we start with a simple, but naive, approach: binning.
52 Binning
Divide the x-axis into bins B_1, B_2, ... of width h. f̂ is a step function based on averaging the Y_i's in each bin:
for x ∈ B_j: f̂(x) = mean{Y_i : X_i ∈ B_j}.
The (arbitrary) choice of the bin boundaries can affect inference, especially when h is large.
53 Binning. [Plot: C_l versus l; WMAP data, binned estimate with h = 15.]
54 Binning. [Plot: C_l versus l; WMAP data, binned estimate with h = 50.]
55 Binning. [Plot: C_l versus l; WMAP data, binned estimate with h = 75.]
56 Next... A step beyond binning is to take local averages to estimate the regression function.
57 Local Averaging
For each x, let N_x = [x − h/2, x + h/2]. This is a moving window of length h, centered at x. Define
f̂(x) = mean{Y_i : X_i ∈ N_x}.
This is like binning, but removes the arbitrary boundaries.
58 Local Averaging. [Plot: C_l versus l; WMAP data, binned and local average estimates with h = 15.]
59 Local Averaging. [Plot: C_l versus l; WMAP data, binned and local average estimates with h = 50.]
60 Local Averaging. [Plot: C_l versus l; WMAP data, binned and local average estimates with h = 75.]
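The moving-window estimator takes only a few lines. A NumPy sketch (the function name and data are ours, not from the slides):

```python
import numpy as np

def local_average(x0, X, Y, h):
    """Moving-window mean: average the Y_i with X_i within h/2 of x0."""
    in_window = np.abs(X - x0) <= h / 2
    return Y[in_window].mean() if in_window.any() else np.nan

X = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6])
Y = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])

# Window of half-width 0.155 around x0 = 0.35 picks up X = 0.2, 0.3, 0.4, 0.5
fit = local_average(0.35, X, Y, h=0.31)
```

An empty window returns NaN, making the estimator undefined far from the data, one practical drawback of local averaging.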
61 Next... Local averaging is a special case of kernel regression.
62 Kernel Regression
The local average estimator can be written
f̂(x) = Σ_{i=1}^n Y_i K((x − X_i)/h) / Σ_{i=1}^n K((x − X_i)/h)
where
K(x) = 1 if |x| < 1/2, and 0 otherwise.
We can improve this by using a smoother function K.
63 Kernels
A kernel is any smooth function K such that K(x) ≥ 0 and
∫ K(x) dx = 1,  ∫ x K(x) dx = 0,  σ_K² ≡ ∫ x² K(x) dx > 0.
Some commonly used kernels are the following:
the boxcar kernel: K(x) = (1/2) I(|x| < 1),
the Gaussian kernel: K(x) = (1/√(2π)) e^{−x²/2},
the Epanechnikov kernel: K(x) = (3/4)(1 − x²) I(|x| < 1),
the tricube kernel: K(x) = (1 − |x|³)³ I(|x| < 1), up to a normalizing constant.
64 Kernels. [Plots of the four kernels listed above.]
65 Kernel Regression
We can write the kernel regression estimator
f̂(x) = Σ_{i=1}^n Y_i K((x − X_i)/h) / Σ_{i=1}^n K((x − X_i)/h)
as
f̂(x) = Σ_i Y_i ℓ_i(x)
where
ℓ_i(x) = K((x − X_i)/h) / Σ_{j=1}^n K((x − X_j)/h).
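The weighted-average form above maps directly to code. A NumPy sketch of the Nadaraya-Watson estimator with a Gaussian kernel (the setup and names are ours; the example uses a noiseless f(x) = x² just to show the fit tracks the function):

```python
import numpy as np

def kernel_regression(x0, X, Y, h, K=lambda u: np.exp(-u**2 / 2)):
    """Nadaraya-Watson estimate at x0: weighted average of the Y_i."""
    w = K((x0 - X) / h)
    ell = w / w.sum()          # the weights l_i(x0); they sum to one
    return np.sum(ell * Y)

X = np.linspace(0, 1, 50)
Y = X**2                        # noiseless, to inspect the estimator itself
fit = kernel_regression(0.5, X, Y, h=0.05)
```

With a small bandwidth the estimate at x = 0.5 is close to 0.25, with a slight upward bias from the convexity of x² (the smoothing bias discussed on the following slides).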
66 Kernel Regression. [Plot: C_l versus l; WMAP data, kernel regression estimates with the boxcar, Gaussian, and tricube kernels, h = 15.]
67 Kernel Regression. [Plot: C_l versus l; WMAP data, kernel regression estimates with the boxcar, Gaussian, and tricube kernels, h = 50.]
68 Kernel Regression. [Plot: C_l versus l; WMAP data, kernel regression estimates with the boxcar, Gaussian, and tricube kernels, h = 75.]
69 Kernel Regression
MISE ≈ (h⁴/4) (∫ x² K(x) dx)² ∫ (f''(x) + 2 f'(x) g'(x)/g(x))² dx + (σ²/(nh)) ∫ K²(x) dx ∫ dx/g(x)
where g(x) is the density of X. What is especially notable is the presence of the term
2 f'(x) g'(x)/g(x) = design bias.
Also, the bias is large near the boundary. We can reduce these biases using local polynomials.
70 Kernel Regression. [Plot: C_l versus l; WMAP data, kernel regression estimates with the boxcar, Gaussian, and tricube kernels, h = 50. Note the boundary bias.]
71 Next... The deficiencies of kernel regression can be addressed by considering models which, on small scales, model the regression function as a low order polynomial.
72 Local Polynomial Regression
Recall polynomial regression:
f̂(x) = β̂_0 + β̂_1 x + β̂_2 x² + ... + β̂_p x^p
where β̂ = (β̂_0, ..., β̂_p) is obtained by least squares:
minimize Σ_{i=1}^n (Y_i − [β_0 + β_1 X_i + β_2 X_i² + ... + β_p X_i^p])²
73 Local Polynomial Regression
Local polynomial regression: approximate f(x) locally by a different polynomial for every x:
f(u) ≈ β_0(x) + β_1(x)(u − x) + β_2(x)(u − x)² + ... + β_p(x)(u − x)^p
for u near x. Estimate (β̂_0(x), ..., β̂_p(x)) by local least squares: minimize
Σ_{i=1}^n (Y_i − [β_0(x) + β_1(x)(X_i − x) + ... + β_p(x)(X_i − x)^p])² K((x − X_i)/h)
where the factor K((x − X_i)/h) is the kernel. Then f̂(x) = β̂_0(x).
74 Local Polynomial Regression
Taking p = 0 yields the kernel regression estimator:
f̂_n(x) = Σ_{i=1}^n ℓ_i(x) Y_i,  ℓ_i(x) = K((x − X_i)/h) / Σ_{j=1}^n K((x − X_j)/h).
Taking p = 1 yields the local linear estimator. This is the best all-purpose smoother.
Choice of kernel K: not important.
Choice of bandwidth h: crucial.
75 Local Polynomial Regression
The local polynomial regression estimate is
f̂_n(x) = Σ_{i=1}^n ℓ_i(x) Y_i
where ℓ(x)ᵀ = (ℓ_1(x), ..., ℓ_n(x)),
ℓ(x)ᵀ = e_1ᵀ (X_xᵀ W_x X_x)⁻¹ X_xᵀ W_x,
e_1 = (1, 0, ..., 0)ᵀ, and X_x and W_x are defined by
X_x = [ 1  X_1 − x  ...  (X_1 − x)^p / p!
        1  X_2 − x  ...  (X_2 − x)^p / p!
        ...
        1  X_n − x  ...  (X_n − x)^p / p! ]
W_x = diag( K((x − X_1)/h), ..., K((x − X_n)/h) ).
76 Local Polynomial Regression
Note that f̂(x) = Σ_{i=1}^n ℓ_i(x) Y_i is a linear smoother. Define Ŷ = (Ŷ_1, ..., Ŷ_n), where Ŷ_i = f̂(X_i). Then Ŷ = LY, where L is the smoothing matrix with entries L_ij = ℓ_j(X_i). The effective degrees of freedom is ν = trace(L) = Σ_{i=1}^n L_ii.
77 Theoretical Aside
Why local linear (p = 1) is better than kernel (p = 0). Both have (approximate) variance
σ²(x) ∫ K²(u) du / (g(x) n h).
The kernel estimator has bias
h² ((1/2) f''(x) + f'(x) g'(x)/g(x)) ∫ u² K(u) du,
whereas the local linear estimator has asymptotic bias
h² (1/2) f''(x) ∫ u² K(u) du.
The local linear estimator is free from design bias. At the boundary points, the kernel estimator has asymptotic bias of O(h), while the local linear estimator has bias O(h²).
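The weighted least squares formula for p = 1 is short to implement directly. This NumPy sketch (our own names; Gaussian kernel) also illustrates a useful property: the local linear estimator reproduces an exactly linear function.

```python
import numpy as np

def local_linear(x0, X, Y, h):
    """Local linear fit at x0 with a Gaussian kernel; returns beta0_hat(x0)."""
    W = np.exp(-((x0 - X) / h) ** 2 / 2)             # kernel weights
    Xx = np.column_stack([np.ones_like(X), X - x0])  # rows [1, X_i - x0]
    # Weighted least squares: solve (Xx^T W Xx) beta = Xx^T W Y
    A = Xx.T @ (W[:, None] * Xx)
    b = Xx.T @ (W * Y)
    return np.linalg.solve(A, b)[0]   # f_hat(x0) = beta0_hat(x0)

X = np.linspace(0, 1, 50)
Y = 2.0 + 3.0 * X                     # exactly linear, no noise
fit = local_linear(0.3, X, Y, h=0.1)  # should recover 2 + 3(0.3) = 2.9
```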
78 Next... Parametric procedures force the choice of p, the number of predictors. Nonparametric procedures force the choice of the smoothing parameter h. Next we will discuss how these can be selected in a data-driven manner via cross-validation.
79 Choosing p and h
Estimate the risk
(1/n) Σ_{i=1}^n E(f̂(x_i) − f(x_i))²
with the leave-one-out cross-validation score:
CV = (1/n) Σ_{i=1}^n (Y_i − f̂_(−i)(X_i))²
where f̂_(−i) is the estimator obtained by omitting the i-th pair (X_i, Y_i).
80 Choosing p and h
Amazing shortcut formula:
CV = (1/n) Σ_{i=1}^n ((Y_i − f̂_n(x_i)) / (1 − L_ii))².
A commonly used approximation is GCV (generalized cross-validation):
GCV = (1/n) Σ_{i=1}^n ((Y_i − f̂(x_i)) / (1 − ν/n))²,  ν = trace(L).
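The shortcut is exact for any linear smoother. The sketch below (our own toy data) checks it against brute-force leave-one-out refits for ordinary least squares, whose hat matrix plays the role of L:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20
x = np.sort(rng.uniform(0, 1, n))
Y = 1.0 + 2.0 * x + rng.normal(0, 0.1, n)

D = np.column_stack([np.ones(n), x])
L = D @ np.linalg.solve(D.T @ D, D.T)   # smoothing ("hat") matrix
fhat = L @ Y

# Shortcut: CV = (1/n) * sum ((Y_i - fhat_i) / (1 - L_ii))^2
cv_shortcut = np.mean(((Y - fhat) / (1 - np.diag(L))) ** 2)

# Brute force: refit n times, each time leaving out pair i
cv_brute = 0.0
for i in range(n):
    keep = np.arange(n) != i
    beta = np.linalg.lstsq(D[keep], Y[keep], rcond=None)[0]
    cv_brute += (Y[i] - D[i] @ beta) ** 2
cv_brute /= n
```

The two quantities agree to machine precision, so the n leave-one-out fits never need to be computed explicitly.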
81 Next... Next, we will go over a bit of how to perform local polynomial regression in R.
82 Using locfit()
Need to include the locfit library:
> install.packages("locfit")
> library(locfit)
> result = locfit(y ~ x, alpha=c(0, 1.5), deg=1)
y and x are vectors
the second component of alpha gives the bandwidth (h)
the first component of alpha specifies the nearest-neighbor fraction, an alternative to the bandwidth
fitted(result) gives the fitted values, f̂(x_i)
residuals(result) gives the residuals, Y_i − f̂(x_i)
83 Using locfit(). [Plot: degree=1, n=1000, sigma=0.1; true function x(1 − x) sin((2.1π)(x + 0.2)), an estimate using a fixed h, and the estimate using the GCV-optimal h.]
84 Using locfit(). [Plot: same setting; estimate using h = 0.1 versus the estimate using the GCV-optimal h.]
85 Using locfit(). [Plot: degree=1, n=1000, sigma=0.5; true function x + 5 exp(−4x²), estimate using h = 0.05, and the estimate using the GCV-optimal h.]
86 Using locfit(). [Plot: same setting; estimate using h = 0.5 versus the estimate using the GCV-optimal h.]
87 Example. [Plot: WMAP data, local linear fit with h = 15 versus the GCV-optimal h.]
88 Example. [Plot: WMAP data, local linear fit with h = 125 versus the GCV-optimal h.]
89 Next... We should quantify the amount of error in our estimator for the regression function. Next we will look at how this can be done using local polynomial regression.
90 Variance Estimation
Let
σ̂² = Σ_{i=1}^n (Y_i − f̂(x_i))² / (n − 2ν + ν̃)
where
ν = tr(L),  ν̃ = tr(LᵀL) = Σ_{i=1}^n ‖ℓ(x_i)‖².
If f is sufficiently smooth, then σ̂² is a consistent estimator of σ².
91 Variance Estimation
For the WMAP data, using the local linear fit:
> wmap = read.table("wmap.dat", header=T)
> opth = 63.0
> locfitwmap = locfit(wmap$cl[1:700] ~ wmap$ell[1:700], alpha=c(0, opth), deg=1)
> nu = as.numeric(locfitwmap$dp[6])
> nutilde = as.numeric(locfitwmap$dp[7])
> sigmasqrhat = sum(residuals(locfitwmap)^2)/(700 - 2*nu + nutilde)
> sigmasqrhat
But it does not seem reasonable to assume homoscedasticity...
92 Variance Estimation
Allow σ to be a function of x:
Y_i = f(x_i) + σ(x_i) ε_i.
Let Z_i = log((Y_i − f(x_i))²) and δ_i = log(ε_i²). Then
Z_i = log(σ²(x_i)) + δ_i.
This suggests estimating log σ²(x) by regressing the log squared residuals on x.
93 Variance Estimation
1. Estimate f(x) with any nonparametric method to get an estimate f̂_n(x).
2. Define Z_i = log((Y_i − f̂_n(x_i))²).
3. Regress the Z_i's on the x_i's (again using any nonparametric method) to get an estimate q̂(x) of log σ²(x), and let σ̂²(x) = e^{q̂(x)}.
94 Example. [Plot: WMAP data, log squared residuals versus l, with a local linear fit.]
95 Example. [Plot: the same, with the local linear fit using h = 130, chosen via GCV.]
96 Example. [Plot: the resulting estimate σ̂(x) versus l.]
97 Confidence Bands
Recall that f̂(x) = Σ_i Y_i ℓ_i(x), so
Var(f̂(x)) = Σ_i σ²(X_i) ℓ_i²(x).
An approximate 1 − α confidence interval for f(x) is
f̂(x) ± z_{α/2} sqrt( Σ_i σ²(X_i) ℓ_i²(x) ).
When σ(x) is smooth, we can approximate
sqrt( Σ_i σ²(X_i) ℓ_i²(x) ) ≈ σ(x) ‖ℓ(x)‖.
98 Confidence Bands
Two caveats:
1. f̂ is biased, so this is really an interval for E(f̂(x)). Result: the bands can miss sharp peaks in the function.
2. Pointwise coverage does not imply simultaneous coverage for all x.
The solution for 2 is to replace z_{α/2} with a larger number (Sun and Loader 1994). locfit() does this for you.
99 More on locfit()
> diaghat = predict.locfit(locfitwmap, where="data", what="infl")
> normell = predict.locfit(locfitwmap, where="data", what="vari")
diaghat will be L_ii, i = 1, 2, ..., n
normell will be ‖ℓ(x_i)‖, i = 1, 2, ..., n
The Sun and Loader replacement for z_{α/2} is found using kappa0(locfitwmap)$crit.val.
100 Example. [Plot: C_l versus l; the estimate with 95% pointwise confidence bands, both assuming constant variance and using nonconstant variance.]
101 Example. [Plot: C_l versus l; the estimate with 95% simultaneous confidence bands, both assuming constant variance and using nonconstant variance.]
102 Next... We discuss the use of regression splines. These are a well-motivated choice of basis for constructing regression functions.
103 Basis Methods
Idea: expand f as
f(x) = Σ_j β_j ψ_j(x)
where ψ_1(x), ψ_2(x), ... are specially chosen, known functions. Then estimate the β_j and set
f̂(x) = Σ_j β̂_j ψ_j(x).
Here we consider a popular version: splines.
104 Splines and Penalization
Define f̂_n to be the function that minimizes
M(λ) = Σ_i (Y_i − f̂_n(x_i))² + λ ∫ (f''(x))² dx.
λ = 0 ⇒ f̂_n(X_i) = Y_i (no smoothing)
λ = ∞ ⇒ f̂_n(x) = β̂_0 + β̂_1 x (linear)
0 < λ < ∞ ⇒ f̂_n(x) = a cubic spline with knots at the X_i.
A cubic spline is a continuous function f such that
1. f is a cubic polynomial between the X_i's, and
2. f has continuous first and second derivatives at the X_i's.
105 Basis For Splines
Define (z)_+ = max{z, 0} and N = n + 4, and let
ψ_1(x) = 1, ψ_2(x) = x, ψ_3(x) = x², ψ_4(x) = x³,
ψ_5(x) = (x − X_1)³_+, ψ_6(x) = (x − X_2)³_+, ..., ψ_N(x) = (x − X_n)³_+.
These functions form a basis for the splines: we can write
f(x) = Σ_{j=1}^N β_j ψ_j(x).
(For numerical calculations it is actually more efficient to use other spline bases.) We can thus write
f̂_n(x) = Σ_{j=1}^N β̂_j ψ_j(x).
106 Basis For Splines
We can now rewrite the minimization as follows:
minimize (Y − Ψβ)ᵀ(Y − Ψβ) + λ βᵀΩβ
where Ψ_ij = ψ_j(X_i) and Ω_jk = ∫ ψ_j''(x) ψ_k''(x) dx.
The value of β that minimizes this is
β̂ = (ΨᵀΨ + λΩ)⁻¹ ΨᵀY.
The smoothing spline f̂_n(x) is a linear smoother; that is, there exist weights ℓ(x) such that f̂_n(x) = Σ_{i=1}^n Y_i ℓ_i(x).
107 Basis For Splines. [Plot: the truncated power basis functions with 5 knots.]
108 Basis For Splines. [Plot: the B-spline basis over the same span, 5 knots.]
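The penalized criterion can be solved exactly as written. The sketch below builds the truncated power basis, approximates Ω by quadrature on a fine grid, and applies β̂ = (ΨᵀΨ + λΩ)⁻¹ΨᵀY. As the slide notes, this basis is numerically poor in practice; everything here, including the value of λ, is an illustrative choice, not a production smoothing-spline fit.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 30
X = np.sort(rng.uniform(0, 1, n))
Y = np.sin(2 * np.pi * X) + rng.normal(0, 0.2, n)

knots = X
def basis(x):
    cols = [np.ones_like(x), x, x**2, x**3]
    cols += [np.clip(x - k, 0, None) ** 3 for k in knots]
    return np.column_stack(cols)

def basis_dd(x):
    # second derivatives of the same basis functions
    cols = [np.zeros_like(x), np.zeros_like(x), 2 * np.ones_like(x), 6 * x]
    cols += [6 * np.clip(x - k, 0, None) for k in knots]
    return np.column_stack(cols)

# Penalty matrix Omega_jk = integral of psi_j'' psi_k'', by grid quadrature
grid = np.linspace(0, 1, 2001)
B = basis_dd(grid)
Omega = (B.T @ B) * (grid[1] - grid[0])

Psi = basis(X)
lam = 1e-3
beta = np.linalg.solve(Psi.T @ Psi + lam * Omega, Psi.T @ Y)
fitted = Psi @ beta
```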
109 Smoothing Splines in R
> smosplresult = smooth.spline(x, y, cv=FALSE, all.knots=TRUE)
If cv=TRUE, then cross-validation is used to choose λ; if cv=FALSE, GCV is used.
If all.knots=TRUE, then knots are placed at all data points; if all.knots=FALSE, then set nknots to specify the number of knots to be used. Using fewer knots eases the computational cost.
predict(smosplresult) gives fitted values.
110 Example. [Plot: C_l versus l; WMAP data, smoothing spline fit with λ chosen using GCV.]
111 Next... Finally, we address the challenges of fitting regression models when the predictor x is of large dimension.
112 Recall: The Bias-Variance Tradeoff
For nonparametric procedures, when x is one-dimensional,
MISE ≈ c_1 h⁴ + c_2/(nh),
which is minimized at h = O(n^{−1/5}). Hence
MISE = O(n^{−4/5}),
whereas for parametric problems, when the model is correct,
MISE = O(1/n).
113 The Curse of Dimensionality
For nonparametric procedures, when x is d-dimensional,
MISE ≈ c_1 h⁴ + c_2/(n h^d),
which is minimized at h = O(n^{−1/(4+d)}). Hence
MISE = O(n^{−4/(4+d)}),
whereas for parametric problems, when the model is correct, still
MISE = O(1/n).
114 Multiple Regression
Y = f(X_1, X_2, ..., X_d) + ε
The curse of dimensionality: the optimal rate of convergence for d = 1 is n^{−4/5}. In d dimensions the optimal rate of convergence is n^{−4/(4+d)}. Thus, the sample size m required for a d-dimensional problem to have the same accuracy as a sample size n in a one-dimensional problem is
m ≈ n^{cd}, where c = (4 + d)/(5d) > 0.
To maintain a given degree of accuracy of an estimator, the sample size must increase exponentially with the dimension d. Put another way, confidence bands get very large as the dimension d increases.
115 Multiple Local Linear
Given a nonsingular, positive definite d × d bandwidth matrix H, we define
K_H(x) = |H|^{−1/2} K(H^{−1/2} x).
Often one scales each covariate to have the same mean and variance, and then uses the kernel
h^{−d} K(‖x‖/h),
where K is any one-dimensional kernel. Then there is a single bandwidth parameter h.
116 Multiple Local Linear
At a target value x = (x_1, ..., x_d)ᵀ, the local sum of squares is given by
Σ_{i=1}^n w_i(x) (Y_i − a_0 − Σ_{j=1}^d a_j(x_ij − x_j))²
where w_i(x) = K(‖x_i − x‖/h). The estimator is
f̂_n(x) = â_0,
where â = (â_0, ..., â_d)ᵀ is the value of a = (a_0, ..., a_d)ᵀ that minimizes the weighted sum of squares.
117 Multiple Local Linear
The solution â is
â = (X_xᵀ W_x X_x)⁻¹ X_xᵀ W_x Y
where
X_x = [ 1  (x_11 − x_1)  ...  (x_1d − x_d)
        1  (x_21 − x_1)  ...  (x_2d − x_d)
        ...
        1  (x_n1 − x_1)  ...  (x_nd − x_d) ]
and W_x is the diagonal matrix whose (i, i) element is w_i(x).
118 A Compromise: Additive Models
Y = α + Σ_{j=1}^d f_j(x_j) + ε
Usually take α̂ = Ȳ. Then estimate the f_j's by backfitting:
1. Set α̂ = Ȳ and f̂_1 = ... = f̂_d = 0.
2. Iterate until convergence: for j = 1, ..., d:
Compute Ỹ_i = Y_i − α̂ − Σ_{k≠j} f̂_k(X_ik), i = 1, ..., n.
Apply a smoother to Ỹ_i on X_j to obtain f̂_j.
Set f̂_j(x) equal to f̂_j(x) − (1/n) Σ_{i=1}^n f̂_j(x_i).
The last step ensures that Σ_i f̂_j(X_i) = 0 (identifiability).
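The backfitting loop maps directly to code. A NumPy sketch for d = 2, with a simple kernel smoother standing in for the generic smoother (all names and the data-generating model are ours):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 300
X = rng.uniform(-1, 1, (n, 2))
Y = 1.0 + np.sin(np.pi * X[:, 0]) + X[:, 1] ** 2 + rng.normal(0, 0.1, n)

def smooth(x, y, h=0.15):
    """Kernel smoother returning fitted values at the sample points."""
    W = np.exp(-((x[:, None] - x[None, :]) / h) ** 2 / 2)
    return (W @ y) / W.sum(axis=1)

alpha = Y.mean()
f = np.zeros((2, n))                  # f[j] holds f_j evaluated at the data
for _ in range(20):                   # backfitting iterations
    for j in range(2):
        resid = Y - alpha - f[1 - j]  # partial residuals: all k != j removed
        fj = smooth(X[:, j], resid)
        f[j] = fj - fj.mean()         # center for identifiability
fitted = alpha + f[0] + f[1]
```

With d = 2 the sum over k ≠ j is just the other component, which keeps the sketch short; for general d the inner loop subtracts all the other f̂_k.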
119 To Sum Up... Classic linear regression is computationally and theoretically simple Nonparametric procedures for regression offer increased flexibility Careful consideration of MISE suggests use of local polynomial fits Smoothing parameters can be chosen objectively High-dimensional data present a challenge
More informationCh 2: Simple Linear Regression
Ch 2: Simple Linear Regression 1. Simple Linear Regression Model A simple regression model with a single regressor x is y = β 0 + β 1 x + ɛ, where we assume that the error ɛ is independent random component
More informationLecture 14 Simple Linear Regression
Lecture 4 Simple Linear Regression Ordinary Least Squares (OLS) Consider the following simple linear regression model where, for each unit i, Y i is the dependent variable (response). X i is the independent
More informationKernel density estimation
Kernel density estimation Patrick Breheny October 18 Patrick Breheny STA 621: Nonparametric Statistics 1/34 Introduction Kernel Density Estimation We ve looked at one method for estimating density: histograms
More informationStatistics 203: Introduction to Regression and Analysis of Variance Course review
Statistics 203: Introduction to Regression and Analysis of Variance Course review Jonathan Taylor - p. 1/?? Today Review / overview of what we learned. - p. 2/?? General themes in regression models Specifying
More informationSliced Inverse Regression
Sliced Inverse Regression Ge Zhao gzz13@psu.edu Department of Statistics The Pennsylvania State University Outline Background of Sliced Inverse Regression (SIR) Dimension Reduction Definition of SIR Inversed
More informationModel-free prediction intervals for regression and autoregression. Dimitris N. Politis University of California, San Diego
Model-free prediction intervals for regression and autoregression Dimitris N. Politis University of California, San Diego To explain or to predict? Models are indispensable for exploring/utilizing relationships
More informationAnnouncements. Proposals graded
Announcements Proposals graded Kevin Jamieson 2018 1 Bayesian Methods Machine Learning CSE546 Kevin Jamieson University of Washington November 1, 2018 2018 Kevin Jamieson 2 MLE Recap - coin flips Data:
More informationLinear Models in Machine Learning
CS540 Intro to AI Linear Models in Machine Learning Lecturer: Xiaojin Zhu jerryzhu@cs.wisc.edu We briefly go over two linear models frequently used in machine learning: linear regression for, well, regression,
More informationRemedial Measures for Multiple Linear Regression Models
Remedial Measures for Multiple Linear Regression Models Yang Feng http://www.stat.columbia.edu/~yangfeng Yang Feng (Columbia University) Remedial Measures for Multiple Linear Regression Models 1 / 25 Outline
More informationBoundary Correction Methods in Kernel Density Estimation Tom Alberts C o u(r)a n (t) Institute joint work with R.J. Karunamuni University of Alberta
Boundary Correction Methods in Kernel Density Estimation Tom Alberts C o u(r)a n (t) Institute joint work with R.J. Karunamuni University of Alberta November 29, 2007 Outline Overview of Kernel Density
More informationSparse Nonparametric Density Estimation in High Dimensions Using the Rodeo
Outline in High Dimensions Using the Rodeo Han Liu 1,2 John Lafferty 2,3 Larry Wasserman 1,2 1 Statistics Department, 2 Machine Learning Department, 3 Computer Science Department, Carnegie Mellon University
More information3 Nonparametric Density Estimation
3 Nonparametric Density Estimation Example: Income distribution Source: U.K. Family Expenditure Survey (FES) 1968-1995 Approximately 7000 British Households per year For each household many different variables
More informationLecture 2. The Simple Linear Regression Model: Matrix Approach
Lecture 2 The Simple Linear Regression Model: Matrix Approach Matrix algebra Matrix representation of simple linear regression model 1 Vectors and Matrices Where it is necessary to consider a distribution
More information1. Variance stabilizing transformations; Box-Cox Transformations - Section. 2. Transformations to linearize the model - Section 5.
Ch. 5: Transformations and Weighting 1. Variance stabilizing transformations; Box-Cox Transformations - Section 5.2; 5.4 2. Transformations to linearize the model - Section 5.3 3. Weighted regression -
More informationMotivational Example
Motivational Example Data: Observational longitudinal study of obesity from birth to adulthood. Overall Goal: Build age-, gender-, height-specific growth charts (under 3 year) to diagnose growth abnomalities.
More informationSimple Linear Regression
Simple Linear Regression Reading: Hoff Chapter 9 November 4, 2009 Problem Data: Observe pairs (Y i,x i ),i = 1,... n Response or dependent variable Y Predictor or independent variable X GOALS: Exploring
More informationLecture Notes 15 Prediction Chapters 13, 22, 20.4.
Lecture Notes 15 Prediction Chapters 13, 22, 20.4. 1 Introduction Prediction is covered in detail in 36-707, 36-701, 36-715, 10/36-702. Here, we will just give an introduction. We observe training data
More informationprobability of k samples out of J fall in R.
Nonparametric Techniques for Density Estimation (DHS Ch. 4) n Introduction n Estimation Procedure n Parzen Window Estimation n Parzen Window Example n K n -Nearest Neighbor Estimation Introduction Suppose
More informationDay 4A Nonparametrics
Day 4A Nonparametrics A. Colin Cameron Univ. of Calif. - Davis... for Center of Labor Economics Norwegian School of Economics Advanced Microeconometrics Aug 28 - Sep 2, 2017. Colin Cameron Univ. of Calif.
More informationFinal Overview. Introduction to ML. Marek Petrik 4/25/2017
Final Overview Introduction to ML Marek Petrik 4/25/2017 This Course: Introduction to Machine Learning Build a foundation for practice and research in ML Basic machine learning concepts: max likelihood,
More informationLecture 3: Statistical Decision Theory (Part II)
Lecture 3: Statistical Decision Theory (Part II) Hao Helen Zhang Hao Helen Zhang Lecture 3: Statistical Decision Theory (Part II) 1 / 27 Outline of This Note Part I: Statistics Decision Theory (Classical
More informationNonparametric Econometrics
Applied Microeconometrics with Stata Nonparametric Econometrics Spring Term 2011 1 / 37 Contents Introduction The histogram estimator The kernel density estimator Nonparametric regression estimators Semi-
More informationMultiple Linear Regression
Multiple Linear Regression University of California, San Diego Instructor: Ery Arias-Castro http://math.ucsd.edu/~eariasca/teaching.html 1 / 42 Passenger car mileage Consider the carmpg dataset taken from
More informationCh. 5 Transformations and Weighting
Outline Three approaches: Ch. 5 Transformations and Weighting. Variance stabilizing transformations; Box-Cox Transformations - Section 5.2; 5.4 2. Transformations to linearize the model - Section 5.3 3.
More informationUnit 10: Simple Linear Regression and Correlation
Unit 10: Simple Linear Regression and Correlation Statistics 571: Statistical Methods Ramón V. León 6/28/2004 Unit 10 - Stat 571 - Ramón V. León 1 Introductory Remarks Regression analysis is a method for
More informationECON The Simple Regression Model
ECON 351 - The Simple Regression Model Maggie Jones 1 / 41 The Simple Regression Model Our starting point will be the simple regression model where we look at the relationship between two variables In
More informationSTAT5044: Regression and Anova. Inyoung Kim
STAT5044: Regression and Anova Inyoung Kim 2 / 47 Outline 1 Regression 2 Simple Linear regression 3 Basic concepts in regression 4 How to estimate unknown parameters 5 Properties of Least Squares Estimators:
More informationSTATISTICS 174: APPLIED STATISTICS FINAL EXAM DECEMBER 10, 2002
Time allowed: 3 HOURS. STATISTICS 174: APPLIED STATISTICS FINAL EXAM DECEMBER 10, 2002 This is an open book exam: all course notes and the text are allowed, and you are expected to use your own calculator.
More informationSpatially Smoothed Kernel Density Estimation via Generalized Empirical Likelihood
Spatially Smoothed Kernel Density Estimation via Generalized Empirical Likelihood Kuangyu Wen & Ximing Wu Texas A&M University Info-Metrics Institute Conference: Recent Innovations in Info-Metrics October
More informationIEOR 165 Lecture 7 1 Bias-Variance Tradeoff
IEOR 165 Lecture 7 Bias-Variance Tradeoff 1 Bias-Variance Tradeoff Consider the case of parametric regression with β R, and suppose we would like to analyze the error of the estimate ˆβ in comparison to
More informationInference in Regression Analysis
Inference in Regression Analysis Dr. Frank Wood Frank Wood, fwood@stat.columbia.edu Linear Regression Models Lecture 4, Slide 1 Today: Normal Error Regression Model Y i = β 0 + β 1 X i + ǫ i Y i value
More informationBickel Rosenblatt test
University of Latvia 28.05.2011. A classical Let X 1,..., X n be i.i.d. random variables with a continuous probability density function f. Consider a simple hypothesis H 0 : f = f 0 with a significance
More informationLinear Methods for Prediction
Chapter 5 Linear Methods for Prediction 5.1 Introduction We now revisit the classification problem and focus on linear methods. Since our prediction Ĝ(x) will always take values in the discrete set G we
More informationFinal Review. Yang Feng. Yang Feng (Columbia University) Final Review 1 / 58
Final Review Yang Feng http://www.stat.columbia.edu/~yangfeng Yang Feng (Columbia University) Final Review 1 / 58 Outline 1 Multiple Linear Regression (Estimation, Inference) 2 Special Topics for Multiple
More informationSTAT420 Midterm Exam. University of Illinois Urbana-Champaign October 19 (Friday), :00 4:15p. SOLUTIONS (Yellow)
STAT40 Midterm Exam University of Illinois Urbana-Champaign October 19 (Friday), 018 3:00 4:15p SOLUTIONS (Yellow) Question 1 (15 points) (10 points) 3 (50 points) extra ( points) Total (77 points) Points
More informationMATH 644: Regression Analysis Methods
MATH 644: Regression Analysis Methods FINAL EXAM Fall, 2012 INSTRUCTIONS TO STUDENTS: 1. This test contains SIX questions. It comprises ELEVEN printed pages. 2. Answer ALL questions for a total of 100
More informationModel Specification Testing in Nonparametric and Semiparametric Time Series Econometrics. Jiti Gao
Model Specification Testing in Nonparametric and Semiparametric Time Series Econometrics Jiti Gao Department of Statistics School of Mathematics and Statistics The University of Western Australia Crawley
More informationAdaptive Nonparametric Density Estimators
Adaptive Nonparametric Density Estimators by Alan J. Izenman Introduction Theoretical results and practical application of histograms as density estimators usually assume a fixed-partition approach, where
More informationBasis Expansion and Nonlinear SVM. Kai Yu
Basis Expansion and Nonlinear SVM Kai Yu Linear Classifiers f(x) =w > x + b z(x) = sign(f(x)) Help to learn more general cases, e.g., nonlinear models 8/7/12 2 Nonlinear Classifiers via Basis Expansion
More informationNonparametric Regression 10/ Larry Wasserman
Nonparametric Regression 10/36-702 Larry Wasserman 1 Introduction Now we focus on the following problem: Given a sample (X 1, Y 1 ),..., (X n, Y n ), where X i R d and Y i R, estimate the regression function
More informationRegression, Ridge Regression, Lasso
Regression, Ridge Regression, Lasso Fabio G. Cozman - fgcozman@usp.br October 2, 2018 A general definition Regression studies the relationship between a response variable Y and covariates X 1,..., X n.
More informationUncertainty Quantification for Inverse Problems. November 7, 2011
Uncertainty Quantification for Inverse Problems November 7, 2011 Outline UQ and inverse problems Review: least-squares Review: Gaussian Bayesian linear model Parametric reductions for IP Bias, variance
More informationConfidence intervals for kernel density estimation
Stata User Group - 9th UK meeting - 19/20 May 2003 Confidence intervals for kernel density estimation Carlo Fiorio c.fiorio@lse.ac.uk London School of Economics and STICERD Stata User Group - 9th UK meeting
More informationOutline. Remedial Measures) Extra Sums of Squares Standardized Version of the Multiple Regression Model
Outline 1 Multiple Linear Regression (Estimation, Inference, Diagnostics and Remedial Measures) 2 Special Topics for Multiple Regression Extra Sums of Squares Standardized Version of the Multiple Regression
More informationFall 2017 STAT 532 Homework Peter Hoff. 1. Let P be a probability measure on a collection of sets A.
1. Let P be a probability measure on a collection of sets A. (a) For each n N, let H n be a set in A such that H n H n+1. Show that P (H n ) monotonically converges to P ( k=1 H k) as n. (b) For each n
More informationRegression Estimation Least Squares and Maximum Likelihood
Regression Estimation Least Squares and Maximum Likelihood Dr. Frank Wood Frank Wood, fwood@stat.columbia.edu Linear Regression Models Lecture 3, Slide 1 Least Squares Max(min)imization Function to minimize
More informationCorrelation and Regression
Correlation and Regression October 25, 2017 STAT 151 Class 9 Slide 1 Outline of Topics 1 Associations 2 Scatter plot 3 Correlation 4 Regression 5 Testing and estimation 6 Goodness-of-fit STAT 151 Class
More informationPenalized Splines, Mixed Models, and Recent Large-Sample Results
Penalized Splines, Mixed Models, and Recent Large-Sample Results David Ruppert Operations Research & Information Engineering, Cornell University Feb 4, 2011 Collaborators Matt Wand, University of Wollongong
More informationSpatial Process Estimates as Smoothers: A Review
Spatial Process Estimates as Smoothers: A Review Soutir Bandyopadhyay 1 Basic Model The observational model considered here has the form Y i = f(x i ) + ɛ i, for 1 i n. (1.1) where Y i is the observed
More informationUNIVERSITÄT POTSDAM Institut für Mathematik
UNIVERSITÄT POTSDAM Institut für Mathematik Testing the Acceleration Function in Life Time Models Hannelore Liero Matthias Liero Mathematische Statistik und Wahrscheinlichkeitstheorie Universität Potsdam
More informationLinear Regression. 1 Introduction. 2 Least Squares
Linear Regression 1 Introduction It is often interesting to study the effect of a variable on a response. In ANOVA, the response is a continuous variable and the variables are discrete / categorical. What
More informationNonparametric Inference via Bootstrapping the Debiased Estimator
Nonparametric Inference via Bootstrapping the Debiased Estimator Yen-Chi Chen Department of Statistics, University of Washington ICSA-Canada Chapter Symposium 2017 1 / 21 Problem Setup Let X 1,, X n be
More informationSection 7: Local linear regression (loess) and regression discontinuity designs
Section 7: Local linear regression (loess) and regression discontinuity designs Yotam Shem-Tov Fall 2015 Yotam Shem-Tov STAT 239/ PS 236A October 26, 2015 1 / 57 Motivation We will focus on local linear
More informationLecture 18: Simple Linear Regression
Lecture 18: Simple Linear Regression BIOS 553 Department of Biostatistics University of Michigan Fall 2004 The Correlation Coefficient: r The correlation coefficient (r) is a number that measures the strength
More informationMatrices and vectors A matrix is a rectangular array of numbers. Here s an example: A =
Matrices and vectors A matrix is a rectangular array of numbers Here s an example: 23 14 17 A = 225 0 2 This matrix has dimensions 2 3 The number of rows is first, then the number of columns We can write
More information7 Semiparametric Estimation of Additive Models
7 Semiparametric Estimation of Additive Models Additive models are very useful for approximating the high-dimensional regression mean functions. They and their extensions have become one of the most widely
More informationCopula Regression RAHUL A. PARSA DRAKE UNIVERSITY & STUART A. KLUGMAN SOCIETY OF ACTUARIES CASUALTY ACTUARIAL SOCIETY MAY 18,2011
Copula Regression RAHUL A. PARSA DRAKE UNIVERSITY & STUART A. KLUGMAN SOCIETY OF ACTUARIES CASUALTY ACTUARIAL SOCIETY MAY 18,2011 Outline Ordinary Least Squares (OLS) Regression Generalized Linear Models
More informationLecture 19 Multiple (Linear) Regression
Lecture 19 Multiple (Linear) Regression Thais Paiva STA 111 - Summer 2013 Term II August 1, 2013 1 / 30 Thais Paiva STA 111 - Summer 2013 Term II Lecture 19, 08/01/2013 Lecture Plan 1 Multiple regression
More informationLecture 3: Inference in SLR
Lecture 3: Inference in SLR STAT 51 Spring 011 Background Reading KNNL:.1.6 3-1 Topic Overview This topic will cover: Review of hypothesis testing Inference about 1 Inference about 0 Confidence Intervals
More informationNonparametric Density Estimation
Nonparametric Density Estimation Econ 690 Purdue University Justin L. Tobias (Purdue) Nonparametric Density Estimation 1 / 29 Density Estimation Suppose that you had some data, say on wages, and you wanted
More informationDESIGN-ADAPTIVE MINIMAX LOCAL LINEAR REGRESSION FOR LONGITUDINAL/CLUSTERED DATA
Statistica Sinica 18(2008), 515-534 DESIGN-ADAPTIVE MINIMAX LOCAL LINEAR REGRESSION FOR LONGITUDINAL/CLUSTERED DATA Kani Chen 1, Jianqing Fan 2 and Zhezhen Jin 3 1 Hong Kong University of Science and Technology,
More informationCS 195-5: Machine Learning Problem Set 1
CS 95-5: Machine Learning Problem Set Douglas Lanman dlanman@brown.edu 7 September Regression Problem Show that the prediction errors y f(x; ŵ) are necessarily uncorrelated with any linear function of
More informationLecture 5: LDA and Logistic Regression
Lecture 5: and Logistic Regression Hao Helen Zhang Hao Helen Zhang Lecture 5: and Logistic Regression 1 / 39 Outline Linear Classification Methods Two Popular Linear Models for Classification Linear Discriminant
More information