Solutions to obligatorisk oppgave 2, STK2100


Vinnie Ko
May 14, 2018

Disclaimer: This document is made solely for my own personal use and may contain many errors.

Oppgave 1

We load packages and read the data before we start with the assignment.

> # STK 2100
> # Obligatorisk oppgave 2
>
> # Clean up the memory before we start.
> rm(list = ls(all = TRUE))
>
> # Load packages
> library(gam)
>
> # Read data.
> Spam = read.table("http://www.uio.no/studier/emner/matnat/math/stk2100/data/spam_data.txt", header = T)
> head(Spam)
    x1   x2   x3 x4   x5   x6   x7   x8   x9  x10  x11  x12  x13  x14  x15  x16
1 0.00 0.64 0.64  0 0.32 0.00 0.00 0.00 0.00 0.00 0.00 0.64 0.00 0.00 0.00 0.32
2 0.21 0.28 0.50  0 0.14 0.28 0.21 0.07 0.00 0.94 0.21 0.79 0.65 0.21 0.14 0.14
3 0.06 0.00 0.71  0 1.23 0.19 0.19 0.12 0.64 0.25 0.38 0.45 0.12 0.00 1.75 0.06
4 0.00 0.00 0.00  0 0.63 0.00 0.31 0.63 0.31 0.63 0.31 0.31 0.31 0.00 0.00 0.31
5 0.00 0.00 0.00  0 0.63 0.00 0.31 0.63 0.31 0.63 0.31 0.31 0.31 0.00 0.00 0.31
6 0.00 0.00 0.00  0 1.85 0.00 0.00 1.85 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
   x17  x18  x19  x20  x21 x22  x23  x24 x25 x26 x27 x28 x29 x30 x31 x32 x33 x34
1 0.00 1.29 1.93 0.00 0.96   0 0.00 0.00   0   0   0   0   0   0   0   0   0   0
2 0.07 0.28 3.47 0.00 1.59   0 0.43 0.43   0   0   0   0   0   0   0   0   0   0
3 0.06 1.03 1.36 0.32 0.51   0 1.16 0.06   0   0   0   0   0   0   0   0   0   0
4 0.00 0.00 3.18 0.00 0.31   0 0.00 0.00   0   0   0   0   0   0   0   0   0   0
5 0.00 0.00 3.18 0.00 0.31   0 0.00 0.00   0   0   0   0   0   0   0   0   0   0
6 0.00 0.00 0.00 0.00 0.00   0 0.00 0.00   0   0   0   0   0   0   0   0   0   0
  x35 x36  x37 x38 x39  x40 x41 x42  x43 x44  x45  x46 x47 x48  x49   x50 x51   x52
1   0   0 0.00   0   0 0.00   0   0 0.00   0 0.00 0.00   0   0 0.00 0.000   0 0.778
2   0   0 0.07   0   0 0.00   0   0 0.00   0 0.00 0.00   0   0 0.00 0.132   0 0.372
3   0   0 0.00   0   0 0.06   0   0 0.12   0 0.06 0.06   0   0 0.01 0.143   0 0.276
4   0   0 0.00   0   0 0.00   0   0 0.00   0 0.00 0.00   0   0 0.00 0.137   0 0.137
5   0   0 0.00   0   0 0.00   0   0 0.00   0 0.00 0.00   0   0 0.00 0.135   0 0.135
6   0   0 0.00   0   0 0.00   0   0 0.00   0 0.00 0.00   0   0 0.00 0.223   0 0.000
    x53   x54   x55 x56  x57    y train
1 0.000 0.000 3.756  61  278 TRUE  TRUE
2 0.180 0.048 5.114 101 1028 TRUE FALSE
3 0.184 0.010 9.821 485 2259 TRUE  TRUE
4 0.000 0.000 3.537  40  191 TRUE FALSE
5 0.000 0.000 3.537  40  191 TRUE FALSE
6 0.000 0.000 3.000  15   54 TRUE FALSE

(a) We fit a logistic regression to the training data, using all explanatory variables.

> # a)
> n.test = sum(Spam[,"train"] == 0)
>
> # Fit logistic regression.
> glm.fit = glm(y ~ . - train, data = Spam, family = binomial, subset = Spam[,"train"])
Warning message:
glm.fit: fitted probabilities numerically 0 or 1 occurred
> # Predict from the fitted logistic regression model.
> y.hat.test = predict(glm.fit, Spam[(Spam[,"train"] == 0),], type = "response") > 0.5
> y.test = Spam[(Spam[,"train"] == 0), "y"]
> test.rss = var(y.test) * (length(y.test) - 1)
>
> # Test error rate
> test.error.rate = mean(y.test != y.hat.test)
> test.error.rate
[1] 0.08548124
>
> # Test MSE
> test.mse = mean((y.test - y.hat.test)^2)
> test.mse
[1] 0.08548124
>
> # Test R squared
> test.r.squared = 1 - n.test * test.mse / test.rss
> test.r.squared
[1] 0.6430416

We considered the test prediction error rate, the test MSE and the test R^2 as possible model performance criteria. Note that since y_i ∈ {0, 1} and ŷ_i ∈ {0, 1}, the test prediction error rate and the test MSE are always identical for this data set, because (y_i - ŷ_i)^2 equals 1 exactly when y_i ≠ ŷ_i and 0 otherwise. Throughout this assignment, we use the test prediction error rate as the model performance criterion.

Warning message:
glm.fit: fitted probabilities numerically 0 or 1 occurred

This warning most probably occurs because there exists a hyperplane that perfectly separates the two categories in the training data. When the fitted probability p equals 0 or 1, the log-likelihood becomes numerically problematic:

    L = p^y (1 - p)^(1 - y),    ℓ = y log(p) + (1 - y) log(1 - p),

since log(0) is not finite.
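As a minimal illustration of how perfect separation triggers this warning, consider the following toy data (made up for illustration only; not part of the Spam data), where a single variable separates the two classes completely:

> # Toy example (hypothetical data): x.toy separates y.toy perfectly, so the
> # maximum likelihood estimate pushes the fitted probabilities towards 0 and 1.
> x.toy = c(1, 2, 3, 10, 11, 12)
> y.toy = c(0, 0, 0, 1, 1, 1)
> sep.fit = glm(y.toy ~ x.toy, family = binomial)
> # This call produces the same "fitted probabilities numerically 0 or 1 occurred"
> # warning (and typically also a non-convergence warning).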

(b) We first transform the explanatory variables into principal components. Then we fit a logistic regression with the first k principal components. We try k = 1, ..., 57 and choose the optimal value of k based on the test prediction error rate.

> # b)
> # Number of variables
> d = ncol(Spam) - 2
>
> # Compute principal components
> X.PCA = prcomp(Spam[,1:d], retx = TRUE, scale = TRUE)
> Spam.PCA = data.frame(X.PCA$x, y = Spam[,"y"], train = Spam[,"train"])
> test.error.rate.pca.glm = data.frame(n.pc = NA, err = NA)
> # Try out different numbers of principal components.
> for(k in 1:d) {
+   # Fit logistic regression with the first k principal components.
+   glm.fit.pca = glm(y ~ . - train, data = Spam.PCA[,c(1:k,58,59)], family = binomial, subset = Spam.PCA[,"train"])
+   # Predict from the fitted model.
+   y.hat.pca.test = predict(glm.fit.pca, Spam.PCA[Spam.PCA[,"train"] == 0,], type = "response") > 0.5
+   # Compute test error rate.
+   test.error.rate.pca.glm[k,"err"] = mean(Spam.PCA[(Spam.PCA[,"train"] == 0),"y"] != y.hat.pca.test)
+   test.error.rate.pca.glm[k,"n.pc"] = k
+ }
There were 50 or more warnings (use warnings() to see the first 50)
> test.error.rate.pca.glm = test.error.rate.pca.glm[order(test.error.rate.pca.glm[,"err"]),]
> rownames(test.error.rate.pca.glm) = NULL
> head(test.error.rate.pca.glm)
  n.pc        err
1   45 0.07536705
2   44 0.07634584
3   43 0.07895595
4   40 0.07993475
5   41 0.07993475
6   47 0.07993475
>
> plot(x = test.error.rate.pca.glm[,"n.pc"], y = test.error.rate.pca.glm[,"err"],
+      xlab = "number of PC", ylab = "test error rate")
>
> warnings()
Warning messages:
1: glm.fit: fitted probabilities numerically 0 or 1 occurred
2: glm.fit: fitted probabilities numerically 0 or 1 occurred
  [... warnings 3 to 49 are identical ...]
50: glm.fit: fitted probabilities numerically 0 or 1 occurred

With k = 45, we get the lowest test error rate. With the given data set, using principal components (instead of the original variables) results in better prediction performance.

Figure 1: Test error rate against k.
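As a side check (not part of the original solution), one can also look at how much of the total variance the retained components capture; summary() on the prcomp object reports the cumulative proportion of variance explained. The indices 3, 45 and 57 below simply correspond to the numbers of components used elsewhere in this assignment.

> # Cumulative proportion of variance explained by the principal components
> # (illustrative check; output not shown).
> cum.var = summary(X.PCA)$importance["Cumulative Proportion", ]
> cum.var[c(3, 45, 57)]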

(c) We fit a logistic regression and a GAM using only the first 3 explanatory variables and compare them.

> # c)
> # Logistic regression with the first 3 variables.
> glm.fit.nv.3 = glm(y ~ x1 + x2 + x3, subset = Spam[,"train"], data = Spam, family = binomial)
> y.hat.glm.nv.3 = predict(glm.fit.nv.3, Spam[(Spam[,"train"] == 0), ], type = "response") > 0.5
> test.error.rate.glm.nv.3 = mean(Spam[(Spam[,"train"] == 0),"y"] != y.hat.glm.nv.3)
> test.error.rate.glm.nv.3
[1] 0.3615008
> # GAM with the first 3 variables.
> gam.fit.nv.3 = gam(y ~ s(x1) + s(x2) + s(x3), subset = Spam[,"train"], data = Spam, family = binomial)
> y.hat.gam.nv.3 = predict(gam.fit.nv.3, Spam[(Spam[,"train"] == 0),], type = "response") > 0.5
> test.error.rate.gam.nv.3 = mean(Spam[(Spam[,"train"] == 0),"y"] != y.hat.gam.nv.3)
> test.error.rate.gam.nv.3
[1] 0.2998369
>
> # GAM plot
> plot(gam.fit.nv.3, se = TRUE)
>
> # Histograms
> hist(Spam[,"x1"], breaks = 100)
> hist(Spam[,"x2"], breaks = 100)
> hist(Spam[,"x3"], breaks = 100)
>

The GAM has a lower test error rate than the logistic regression model, so the non-linear terms improve the model.

> summary(gam.fit.nv.3)

Call: gam(formula = y ~ s(x1) + s(x2) + s(x3), family = binomial, data = Spam,
    subset = Spam[, "train"])
Deviance Residuals:
    Min      1Q  Median      3Q     Max
-2.5594 -0.6942 -0.6942  0.9406  1.7556

(Dispersion Parameter for binomial family taken to be 1)

    Null Deviance: 2050.735 on 1535 degrees of freedom
Residual Deviance: 1677.261 on 1523 degrees of freedom
AIC: 1703.262

Number of Local Scoring Iterations: 15

Anova for Parametric Effects
            Df  Sum Sq Mean Sq F value    Pr(>F)
s(x1)        1   18.69  18.693  19.216 1.247e-05 ***
s(x2)        1   36.30  36.299  37.316 1.273e-09 ***
s(x3)        1   24.25  24.246  24.925 6.647e-07 ***
Residuals 1523 1481.52   0.973
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Anova for Nonparametric Effects
            Npar Df Npar Chisq    P(Chi)
(Intercept)
s(x1)             3     27.077 5.673e-06 ***
s(x2)             3     55.788 4.664e-12 ***
s(x3)             3     66.570 2.320e-14 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
>

Figure 2: The non-linear terms of the GAM model. (gam package was used.)

Figure 2 shows the non-linear structure of the fitted GAM. From visual inspection we conclude that there is a non-linear relationship between the response and the explanatory variables, and the ANOVA tests in the model summary of gam.fit.nv.3 point in the same direction: all three spline terms are significant. The plot for x2 looks linear at first glance, but if one zooms into the left region, where most of the data points lie, one can see wiggly curves.
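One way to zoom in on that left region (an illustrative sketch, not part of the original solution; the grid range 0 to 2 is chosen ad hoc) is to evaluate the fitted smooth term on a fine grid and plot it over a restricted range:

> # Evaluate the fitted term s(x2) on a grid restricted to 0 <= x2 <= 2,
> # where most of the observations lie, and plot it.
> grid.x2 = data.frame(x1 = 0, x2 = seq(0, 2, length.out = 200), x3 = 0)
> term.x2 = predict(gam.fit.nv.3, newdata = grid.x2, type = "terms")[, "s(x2)"]
> plot(grid.x2$x2, term.x2, type = "l", xlab = "x2", ylab = "s(x2)")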

Figure 3: Histograms of the first 3 explanatory variables of the Spam dataset.

The variables x1, x2 and x3 are word frequencies in the e-mails, and Figure 3 shows that most data points have relatively low frequency values. This means that the (non-linear) patterns in the right region of the plots are determined by only a few data points. (This is also visible from the wide 95% confidence bands on the right side of the plots.) In that case, one can consider putting restrictions on the smoothness in the right region.
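A simple way to impose such a restriction (a sketch, not part of the original solution) is to lower the target degrees of freedom of each smooth term, which forces the fitted curves to be less wiggly everywhere, including the sparse right region:

> # Refit the GAM with fewer degrees of freedom per smooth term (df = 2 is an
> # arbitrary illustrative choice) and recompute the test error rate.
> gam.fit.nv.3.df2 = gam(y ~ s(x1, df = 2) + s(x2, df = 2) + s(x3, df = 2),
+                        subset = Spam[,"train"], data = Spam, family = binomial)
> y.hat.df2 = predict(gam.fit.nv.3.df2, Spam[(Spam[,"train"] == 0),], type = "response") > 0.5
> mean(Spam[(Spam[,"train"] == 0),"y"] != y.hat.df2)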

Figure 4: The non-linear terms of the GAM model. (mgcv package was used.)

(d) We repeat part (c), but this time we use principal components instead of the original explanatory variables. (That is, we fit a logistic regression and a GAM using only the first 3 principal components and compare them.)

> # d)
> # Logistic regression with the first 3 principal components.
> glm.fit.npc.3 = glm(y ~ PC1 + PC2 + PC3, subset = Spam.PCA[,"train"], data = Spam.PCA, family = binomial)
> y.hat.glm.npc.3 = predict(glm.fit.npc.3, Spam.PCA[(Spam.PCA[,"train"] == 0), ], type = "response") > 0.5
> test.error.rate.glm.npc.3 = mean(Spam.PCA[(Spam.PCA[,"train"] == 0),"y"] != y.hat.glm.npc.3)
> test.error.rate.glm.npc.3
[1] 0.1311582
> # GAM with the first 3 principal components.

> gam.fit.npc.3 = gam(y ~ s(PC1) + s(PC2) + s(PC3), subset = Spam.PCA[,"train"], data = Spam.PCA, family = binomial)
> y.hat.gam.npc.3 = predict(gam.fit.npc.3, Spam.PCA[(Spam.PCA[,"train"] == 0),], type = "response") > 0.5
> test.error.rate.gam.npc.3 = mean(Spam.PCA[(Spam.PCA[,"train"] == 0),"y"] != y.hat.gam.npc.3)
> test.error.rate.gam.npc.3
[1] 0.1223491
>
> # Plot the composition of the principal components.
> plot.ts(
+   X.PCA$rotation[,"PC1"],
+   xlab = "Variable number",
+   ylab = "Variable loading",
+   main = "The composition of principal component 1",
+   font.main = 1)
> plot.ts(
+   X.PCA$rotation[,"PC2"],
+   xlab = "Variable number",
+   ylab = "Variable loading",
+   main = "The composition of principal component 2",
+   font.main = 1)
> plot.ts(
+   X.PCA$rotation[,"PC3"],
+   xlab = "Variable number",
+   ylab = "Variable loading",
+   main = "The composition of principal component 3",
+   font.main = 1)
>
> # GAM plot
> plot(gam.fit.npc.3, se = TRUE)
>

Adding the non-linear terms decreases the test error rate in this case as well. Figure 5 shows the non-linear structure of the fitted GAM; from visual inspection, we conclude that there is a non-linear relationship between the response and the principal components.

> summary(gam.fit.npc.3)

Call: gam(formula = y ~ s(PC1) + s(PC2) + s(PC3), family = binomial, data = Spam.PCA,
    subset = Spam.PCA[, "train"])
Deviance Residuals:
    Min      1Q  Median      3Q     Max
-2.8164 -0.4849 -0.1729  0.3589  2.8941

(Dispersion Parameter for binomial family taken to be 1)

    Null Deviance: 2050.735 on 1535 degrees of freedom
Residual Deviance: 974.3573 on 1523 degrees of freedom
AIC: 1000.358

Number of Local Scoring Iterations: 16

Anova for Parametric Effects
            Df  Sum Sq Mean Sq F value    Pr(>F)
s(PC1)       1  172.93 172.928 183.183 < 2.2e-16 ***
s(PC2)       1  141.46 141.456 149.844 < 2.2e-16 ***
s(PC3)       1   41.49  41.492  43.952 4.659e-11 ***
Residuals 1523 1437.74   0.944

---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Anova for Nonparametric Effects
            Npar Df Npar Chisq    P(Chi)
(Intercept)
s(PC1)            3     18.623 0.0003271 ***
s(PC2)            3     75.796 2.220e-16 ***
s(PC3)            3     23.193 3.682e-05 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
>

According to the ANOVA tests in the model summary of gam.fit.npc.3, all three splines are significant, so we conclude that there is a non-linear relationship between the response and the principal components. We observe a phenomenon similar to that in part (c): the principal components suffer less from the problem that most of the data points are concentrated on the left side of the plots, but the non-linearity at the right ends is still determined by only a few data points. Again, one can consider putting restrictions on the smoothness in that region.
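To see how thinly populated those right ends are, one can simply count the training observations beyond some cut-offs (an illustrative check, not part of the original solution; the thresholds 10 and 4 are chosen ad hoc from the plot ranges):

> # Number of training observations in the right tails of PC1 and PC2.
> sum(Spam.PCA[Spam.PCA[,"train"], "PC1"] > 10)
> sum(Spam.PCA[Spam.PCA[,"train"], "PC2"] > 4)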

s(pc1) 150 100 50 0 50 100 s(pc2) 10 5 0 5 0 10 20 30 PC1 4 2 0 2 4 6 8 PC2 s(pc3) 10 5 0 5 10 15 5 0 5 10 15 PC3 Figure 5: The non-linear terms of GAM model with 3 principal components. (gam package was used.) 11

s(pc1,5.93) 40000 20000 0 10000 20000 0 10 20 30 s(pc2,3.49) 10 5 0 5 10 4 2 0 2 4 6 8 PC1 PC2 s(pc3,3.06) 15 10 5 0 5 10 15 5 0 5 10 15 PC3 Figure 6: The non-linear terms of GAM model with 3 principal components. (mgcv package was used.) For both logistic regression and GAM, using 3 principal components instead of original variables decreases test error rate quite a lot. A possible explanation is that PCA merges similar variables into one principal component. Many of the variables are for example word frequency of a certain word. It can happen that word frequency of some words are highly correlated to each other. Those variables will have tendency of being merged into the same PCA. This means that different principal components will try to span the variable space as large as possible. The fact that principal component 1 and 2 are highly orthogonal in Figure 7 supports this. So, with the same number of variables, principal components cover wider variable space and hence result in better model in terms of prediction performance. 12

Figure 7: Composition of the principal components.

(e)

> # e)
> k = 20
> PCA.name.vec = names(Spam.PCA)[1:57]
> # Compose the formula.
> formula.obj = as.formula(paste("y", paste(paste("s(", PCA.name.vec[1:k], ")", sep=""), collapse = "+"), sep="~"))
> # Fit GAM with principal components.
> gam.fit.npc.20 = gam(formula.obj, subset = Spam.PCA[,"train"], data = Spam.PCA, family = binomial)
> # Predict from the fitted model.
> y.hat.gam.npc.20 = predict(gam.fit.npc.20, Spam.PCA[(Spam.PCA[,"train"] == 0),], type = "response") > 0.5
> # Compute the test error rate.
> test.error.rate.gam.npc.20 = mean(Spam.PCA[(Spam.PCA[,"train"] == 0),"y"] != y.hat.gam.npc.20)
> test.error.rate.gam.npc.20
[1] 0.07536705

The GAM model with 20 principal components gives a lower test error rate than the GAM model with 3 principal components from part (d).

(f)

# A function for part f)
library(parallel)   # needed for detectCores(), makeCluster(), parLapply(), stopCluster()

find.best.k.for.pca.gam.func = function (K, parallel = FALSE, n.core = NULL, silent = FALSE) {

  # tic: global
  start.time.global = Sys.time()

  # Read data
  Spam = read.table("http://www.uio.no/studier/emner/matnat/math/stk2100/data/spam_data.txt", header = T)
  d = ncol(Spam) - 2

  # Compute principal components
  X.PCA = prcomp(Spam[,1:d], retx = TRUE, scale = TRUE)
  Spam.PCA = data.frame(X.PCA$x, y = Spam[,"y"], train = Spam[,"train"])

  # Make a frame
  gam.fit.list = list()
  test.error.rate.mat = data.frame(n.pc = NA, test.err = NA)
  PCA.name.vec = names(Spam.PCA)[1:57]

  # Option 1: non-parallelized for loop
  if (parallel == FALSE) {
    # Display message
    if (silent == FALSE) {
      cat("Fitting process started with K = ", K, " (non-parallelized). ", "\n", sep = "")
    }
    # Try out the given k values.
    for (k in 1:K) {
      # tic: fit.gam
      start.time.fit.gam = Sys.time()
      # Display message
      if (silent == FALSE) {
        cat("Fitting GAM with k = ", k, " PC. ", sep = "")
      }

      # Compose formula
      formula.obj = as.formula(paste("y", paste(paste("s(", PCA.name.vec[1:k], ")", sep=""), collapse = "+"), sep="~"))
      # Fit GAM with principal components.
      gam.fit.list[[k]] = gam(formula.obj, subset = Spam.PCA[,"train"], data = Spam.PCA, family = binomial)
      names(gam.fit.list)[k] = paste("k.", k, sep="")
      # Predict from the fitted model.
      y.hat = predict(gam.fit.list[[k]], Spam.PCA[(Spam.PCA[,"train"] == 0),], type = "response") > 0.5
      # Compute error rate with the test set.
      test.error.rate.mat[k,"n.pc"] = k
      test.error.rate.mat[k,"test.err"] = mean(Spam.PCA[(Spam.PCA[,"train"] == 0),"y"] != y.hat)
      # Clean up
      remove(formula.obj, y.hat)

      # toc: fit.gam
      end.time.fit.gam = Sys.time()
      time.taken.fit.gam = end.time.fit.gam - start.time.fit.gam
      # Display a message.
      if (silent == FALSE) {

        cat("Finished (", round(time.taken.fit.gam, 2), " ", units(time.taken.fit.gam), " elapsed).", "\n", sep = "")
      }
    }
  } else if (parallel == TRUE) {
    # Worker function
    worker.func = function(i) {
      # Compose formula.
      formula.obj = as.formula(paste("y", paste(paste("s(", PCA.name.vec[1:i], ")", sep=""), collapse = "+"), sep="~"))
      # Fit GAM with principal components.
      gam.fit = gam(formula.obj, subset = Spam.PCA[,"train"], data = Spam.PCA, family = binomial)
      # Predict from the fitted model.
      y.hat = predict(gam.fit, Spam.PCA[(Spam.PCA[,"train"] == 0),], type = "response") > 0.5
      # Compute error rate with the test set.
      test.error.rate = mean(Spam.PCA[(Spam.PCA[,"train"] == 0),"y"] != y.hat)

      # Wrap up the result.
      result = list(k = i, gam.fit = gam.fit, test.error.rate = test.error.rate)
      return(result)
    }

    # Set the number of cores to be used.
    if (is.null(n.core)) {
      n.workers = detectCores()
    } else {
      n.workers = n.core
    }

    # tic: cluster setup
    start.time.cl.setup = Sys.time()
    # Display a message.
    if (silent == FALSE) {
      cat("Setting up ", n.workers, " clusters with ", sep = "")
    }

    # Set up clusters
    sys.type = Sys.info()[["sysname"]]
    if (sys.type == "Windows") {
      if (silent == FALSE) {
        cat(" PSOCK. ", sep = "")
      }
      cl = makeCluster(n.workers, type = "PSOCK")
      # Load packages on all clusters
      clusterCall(cl, function() {
        library(mgcv)
      }
      )
      # Make all variables available to all clusters
      clusterExport(cl = cl, varlist = objects(), envir = environment())
    } else {
      if (silent == FALSE) {
        cat(" FORK. ", sep = "")
      }
      cl = makeCluster(n.workers, type = "FORK")
    }

    # toc: cluster setup
    end.time.cl.setup = Sys.time()
    time.taken.cl.setup = end.time.cl.setup - start.time.cl.setup
    # Display a message.

    if (silent == FALSE) {
      cat("Finished (", round(time.taken.cl.setup, 2), " ", units(time.taken.cl.setup), " elapsed).", "\n", sep = "")
    }

    # tic: gam.fit
    start.time.gam.fit = Sys.time()
    # Display a message.
    if (silent == FALSE) {
      cat("Fitting GAM with K = ", K, " (parallelized with ", n.workers, " cores). ", sep = "")
    }

    # Perform the parallel calculation (parLapply)
    clusters.result = parLapply(cl, 1:K, worker.func)
    # Shut down the cluster
    stopCluster(cl)

    # toc: gam.fit
    end.time.gam.fit = Sys.time()
    time.taken.gam.fit = end.time.gam.fit - start.time.gam.fit
    # Display a message.
    if (silent == FALSE) {
      cat("Finished (", round(time.taken.gam.fit, 2), " ", units(time.taken.gam.fit), " elapsed).", "\n", sep = "")
    }

    # Rearrange the result.
    for (k in 1:K) {
      gam.fit.list[[k]] = clusters.result[[k]]$gam.fit
      names(gam.fit.list)[k] = paste("k.", k, sep="")
      test.error.rate.mat[k,] = c(clusters.result[[k]]$k, clusters.result[[k]]$test.error.rate)
    }
  }

  # Order the result matrix
  test.error.rate.mat = test.error.rate.mat[order(test.error.rate.mat[,"test.err"]),]
  rownames(test.error.rate.mat) = NULL

  # toc: global
  end.time.global = Sys.time()
  time.taken.global = end.time.global - start.time.global

  # Wrap up the result.
  result = list(k = K, gam.fit.list = gam.fit.list, test.error.rate.mat = test.error.rate.mat,
                time.taken = time.taken.global)
  return(result)
}

We try GAM models with k principal components, for k = 1, ..., 57.

> source("find.best.k.for.pca.gam.func.r")
>
> GAM.PCA.result.list = find.best.k.for.pca.gam.func(K = 57, parallel = TRUE)
Setting up 8 clusters with FORK. Finished (0.34 secs elapsed).
Fitting GAM with K = 57 (parallelized with 8 cores). Finished (2.14 mins elapsed).
>
> head(GAM.PCA.result.list$test.error.rate.mat)
  n.pc   test.err
1   50 0.06982055
2   51 0.06982055
3   53 0.07014682
4   52 0.07079935
5   49 0.07112561
6   25 0.07177814

>
> plot(x = GAM.PCA.result.list$test.error.rate.mat[,"n.pc"], y = GAM.PCA.result.list$test.error.rate.mat[,"test.err"],
+      xlab = "k", ylab = "test error rate")
> points(GAM.PCA.result.list$test.error.rate.mat[1,"n.pc"], GAM.PCA.result.list$test.error.rate.mat[1,"test.err"],
+        pch = 19, col = "red")
>

The optimal value is k = 50. That is, when we use 50 principal components in the GAM, we obtain the lowest test error rate.

Figure 8: Test error rate against k.

(g) In this assignment, we learned:

- We can model a binomial response variable with the logit link function.
- Adding non-linear terms can improve the prediction performance of the model. (We used GAMs with splines.)
- When we have many variables that measure similar things, condensing them with principal component analysis can improve the model.

Possible weaknesses:

- The GAM can be too wiggly in regions where there are only a few data points. One can consider putting extra restrictions on those regions.
- We only tried one training/test split. If we assign training and test data anew, the results can change. One could try several different splits or use K-fold cross-validation (a rough sketch is given below).
- The explanatory variables are highly skewed. One could consider log-transforming them.
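A rough sketch of the K-fold cross-validation idea mentioned above (a hypothetical helper function, not part of the original solution; it uses the full logistic regression from part (a) and an arbitrary choice of 10 folds):

# K-fold cross-validated misclassification rate for the logistic regression
# with all explanatory variables (illustrative sketch).
cv.error.rate = function(data, K = 10, seed = 1) {
  set.seed(seed)
  # Assign each observation to one of K folds at random.
  fold = sample(rep(1:K, length.out = nrow(data)))
  err = numeric(K)
  for (k in 1:K) {
    # Fit on all folds except fold k, evaluate on fold k.
    fit = glm(y ~ . - train, data = data[fold != k, ], family = binomial)
    p.hat = predict(fit, newdata = data[fold == k, ], type = "response")
    err[k] = mean((p.hat > 0.5) != data[fold == k, "y"])
  }
  mean(err)
}
# Example call (assumes the Spam data frame from Oppgave 1 is loaded):
# cv.error.rate(Spam, K = 10)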