CS6220: DATA MINING TECHNIQUES

Matrix Data: Prediction

Instructor: Yizhou Sun
yzsun@ccs.neu.edu

September 21, 2015

Announcements
- TA Monisha's office hour has changed to Thursdays 10-12pm, 462 WVH (the same location).
- Team formation is due this Sunday.
- Homework 1 will be out by tomorrow.

Today's Schedule
- Course Project Introduction
- Linear Regression Model
- Decision Tree

Methods to Learn
Representative methods for each task, organized by data type:
- Classification: Decision Tree; Naïve Bayes; Logistic Regression; SVM; kNN (matrix data); HMM (sequence data); Label Propagation* (graph & network); Neural Network (images)
- Clustering: K-means; hierarchical clustering; DBSCAN; Mixture Models; kernel k-means* (matrix data); PLSA (text data); SCAN*; Spectral Clustering* (graph & network)
- Frequent Pattern Mining: Apriori; FP-growth (set data); GSP; PrefixSpan (sequence data)
- Prediction: Linear Regression (matrix data); Autoregression (time series)
- Similarity Search: DTW (time series); P-PageRank (graph & network)
- Ranking: PageRank (graph & network)

How to Learn These Algorithms?
Three levels:
- When is it applicable? Input, output, strengths, weaknesses, time complexity.
- How does it work? Pseudo-code, workflows, major steps; you should be able to work out a toy problem by pen and paper.
- Why does it work? Intuition, philosophy, objective, derivation, proof.

Matrix Data: Prediction
- Matrix Data
- Linear Regression Model
- Model Evaluation and Selection
- Summary

Example
An $n \times p$ matrix: $n$ data objects/points, $p$ attributes/dimensions.

$$X = \begin{pmatrix} x_{11} & \cdots & x_{1f} & \cdots & x_{1p} \\ \vdots & & \vdots & & \vdots \\ x_{i1} & \cdots & x_{if} & \cdots & x_{ip} \\ \vdots & & \vdots & & \vdots \\ x_{n1} & \cdots & x_{nf} & \cdots & x_{np} \end{pmatrix}$$

Attribute Types
- Numerical: e.g., height, income
- Categorical / discrete: e.g., sex, race

Categorical Attribute Types
- Nominal: categories, states, or names of things. E.g., hair_color = {auburn, black, blond, brown, grey, red, white}; marital status, occupation, ID numbers, zip codes.
- Binary: a nominal attribute with only 2 states (0 and 1).
  - Symmetric binary: both outcomes equally important, e.g., gender.
  - Asymmetric binary: outcomes not equally important, e.g., a medical test (positive vs. negative). Convention: assign 1 to the more important outcome (e.g., HIV positive).
- Ordinal: values have a meaningful order (ranking), but the magnitude between successive values is not known. E.g., size = {small, medium, large}, grades, army rankings.

Matrix Data: Prediction
- Matrix Data
- Linear Regression Model
- Model Evaluation and Selection
- Summary

Linear Regression
- Ordinary Least Squares Regression
  - Closed-form solution
  - Online updating
- Linear Regression with a Probabilistic Interpretation

The Linear Regression Problem
From any attributes to a continuous value: x → y
- {age; major; gender; race} → GPA
- {income; credit score; profession} → loan
- {college; major; GPA} → future income

Illustration [figure not transcribed]

Formalization
- Data: $n$ independent data objects $y_i$, $i = 1, \dots, n$, and $\mathbf{x}_i = (x_{i0}, x_{i1}, x_{i2}, \dots, x_{ip})^T$, $i = 1, \dots, n$. Usually a constant factor is included, say $x_{i0} = 1$.
- Model: $y$ is the dependent variable; $\mathbf{x}$ are the explanatory variables; $\beta = (\beta_0, \beta_1, \dots, \beta_p)^T$ is the weight vector.
  $$y = \mathbf{x}^T \beta = \beta_0 + x_1 \beta_1 + x_2 \beta_2 + \dots + x_p \beta_p$$

Model Construction and Usage: A 2-Step Process
- Model construction: use training data to find the best parameter $\beta$, denoted as $\hat{\beta}$.
- Model usage:
  - Model evaluation: use test data to select the best model (feature selection).
  - Apply the model to unseen data: $\hat{y} = \mathbf{x}^T \hat{\beta}$.

Least Squares Estimation
Cost function (total squared error):
$$J(\beta) = \sum_i (\mathbf{x}_i^T \beta - y_i)^2$$
Matrix form:
$$J(\beta) = (X\beta - \mathbf{y})^T (X\beta - \mathbf{y}) = \|X\beta - \mathbf{y}\|^2$$
where $X$ is the $n \times (p+1)$ design matrix whose $i$-th row is $(1, x_{i1}, \dots, x_{ip})$, and $\mathbf{y} = (y_1, \dots, y_i, \dots, y_n)^T$ is the $n \times 1$ vector of targets.

Ordinary Least Squares (OLS)
Goal: find $\hat{\beta}$ that minimizes $J(\beta)$:
$$J(\beta) = (X\beta - \mathbf{y})^T (X\beta - \mathbf{y}) = \beta^T X^T X \beta - \mathbf{y}^T X \beta - \beta^T X^T \mathbf{y} + \mathbf{y}^T \mathbf{y}$$
Ordinary least squares: set the first derivative of $J(\beta)$ to 0:
$$\frac{\partial J}{\partial \beta} = 2\beta^T X^T X - 2\mathbf{y}^T X = 0 \quad\Rightarrow\quad \hat{\beta} = (X^T X)^{-1} X^T \mathbf{y}$$
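To make the closed-form solution concrete, here is a minimal NumPy sketch (the function name and toy data are illustrative, not from the course materials):

```python
import numpy as np

def ols_fit(X, y):
    """Closed-form OLS: beta_hat = (X^T X)^{-1} X^T y.

    X is the n x (p+1) design matrix whose first column is all ones
    (the constant factor x_i0 = 1); y is the n-vector of targets.
    """
    # Solving the normal equations is numerically safer than
    # explicitly inverting X^T X.
    return np.linalg.solve(X.T @ X, X.T @ y)

# Toy usage: recover y = 1 + 2x from noisy data
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=50)
X = np.column_stack([np.ones_like(x), x])
y = 1 + 2 * x + rng.normal(0, 0.1, size=50)
print(ols_fit(X, y))  # approximately [1. 2.]
```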

Gradient Descent
Minimize the cost function by moving down in the direction of steepest descent.

Gradient Descent: Online Updating
Move in the direction of steepest descent:
$$\beta^{(t+1)} := \beta^{(t)} - \eta \left. \frac{\partial J}{\partial \beta} \right|_{\beta = \beta^{(t)}}$$
where $\eta$ is the learning rate (e.g., $\eta = 0.1$ in practice), and
$$J(\beta) = \sum_i (\mathbf{x}_i^T \beta - y_i)^2 = \sum_i J_i(\beta), \qquad \frac{\partial J_i}{\partial \beta} = 2\mathbf{x}_i (\mathbf{x}_i^T \beta - y_i)$$
When a new observation $i$ comes in, we only need to update:
$$\beta^{(t+1)} := \beta^{(t)} + 2\eta\,(y_i - \mathbf{x}_i^T \beta^{(t)})\,\mathbf{x}_i$$
If the prediction for object $i$ is smaller than the real value, $\beta$ moves in the direction of $\mathbf{x}_i$.
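A short sketch of this online update rule, assuming the squared-error loss above; the learning rate and the streaming toy data are illustrative choices:

```python
import numpy as np

def online_update(beta, x_i, y_i, eta=0.1):
    """One online gradient step: beta <- beta + 2*eta*(y_i - x_i^T beta)*x_i."""
    residual = y_i - x_i @ beta  # positive when the prediction is too small
    return beta + 2 * eta * residual * x_i

# Streaming toy data from y = 1 + 2x: beta moves toward [1, 2]
rng = np.random.default_rng(1)
beta = np.zeros(2)
for _ in range(2000):
    x = rng.uniform(0, 1)
    x_i = np.array([1.0, x])  # constant factor plus one feature
    y_i = 1 + 2 * x + rng.normal(0, 0.05)
    beta = online_update(beta, x_i, y_i, eta=0.05)
print(beta)
```

Note how the code matches the slide's intuition: a positive residual (prediction too small) moves $\beta$ in the direction of $\mathbf{x}_i$.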

Other Practical Issues
- What if $X^T X$ is not invertible? Add a small portion of the identity matrix, $\lambda I$, to it (ridge regression*); see the sketch below.
- What if some attributes are categorical? Set dummy variables, e.g., $x = 1$ if sex = F and $x = 0$ if sex = M. For a nominal variable with multiple values, create more dummy variables for that one variable.
- What if a non-linear correlation exists? Transform the features, say, $x$ to $x^2$.
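For the first issue, a minimal ridge sketch under the same NumPy conventions as the OLS snippet (the choice of λ is an illustrative assumption; many treatments also leave the intercept unpenalized, while this sketch regularizes every coefficient for brevity):

```python
import numpy as np

def ridge_fit(X, y, lam=1e-3):
    """Ridge regression: beta_hat = (X^T X + lambda*I)^{-1} X^T y.

    Adding lam * I makes the matrix invertible even when columns of X
    are collinear or there are more parameters than objects.
    """
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

# X^T X is singular here (duplicated column), but ridge still solves it
X = np.array([[1.0, 2.0, 2.0], [1.0, 3.0, 3.0], [1.0, 5.0, 5.0]])
y = np.array([5.0, 7.0, 11.0])
print(ridge_fit(X, y))
```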

Probabilistic Interpretation
Review of the normal distribution: $X \sim N(\mu, \sigma^2)$ with density
$$f_X(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{(x - \mu)^2}{2\sigma^2}}$$

Probabilistic Interpretation
Model:
$$y_i = \mathbf{x}_i^T \beta + \epsilon_i, \qquad \epsilon_i \sim N(0, \sigma^2)$$
$$y_i \mid \mathbf{x}_i, \beta \sim N(\mathbf{x}_i^T \beta, \sigma^2), \qquad E[y_i \mid \mathbf{x}_i] = \mathbf{x}_i^T \beta$$
Likelihood:
$$L(\beta) = \prod_i p(y_i \mid \mathbf{x}_i, \beta) = \prod_i \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left\{-\frac{(y_i - \mathbf{x}_i^T \beta)^2}{2\sigma^2}\right\}$$
Maximum likelihood estimation: find $\beta$ that maximizes $L(\beta)$:
$$\arg\max_\beta L = \arg\min_\beta J$$
Equivalent to OLS!
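One step the slide leaves implicit: taking logs makes the equivalence to OLS explicit. Since $\ln$ is monotone,
$$\ln L(\beta) = -\frac{n}{2}\ln(2\pi\sigma^2) - \frac{1}{2\sigma^2}\sum_i (y_i - \mathbf{x}_i^T \beta)^2,$$
and the first term and the factor $\frac{1}{2\sigma^2}$ do not depend on $\beta$, so maximizing $L(\beta)$ is exactly minimizing $\sum_i (y_i - \mathbf{x}_i^T \beta)^2 = J(\beta)$.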

Matrix Data: Prediction
- Matrix Data
- Linear Regression Model
- Model Evaluation and Selection
- Summary

Model Selection Problem
- Basic problem: how to choose between competing linear regression models.
- Model too simple: underfits the data; poor predictions; high bias; low variance.
- Model too complex: overfits the data; poor predictions; low bias; high variance.
- Model just right: balances bias and variance to get good predictions.

Bias and Variance
- True predictor: $f(x) = \mathbf{x}^T \beta$
- Estimated predictor: $\hat{f}(x) = \mathbf{x}^T \hat{\beta}$
- Bias: $E[\hat{f}(x)] - f(x)$. How far away is the expectation of the estimator from the true value? The smaller the better.
- Variance: $Var(\hat{f}(x)) = E[(\hat{f}(x) - E[\hat{f}(x)])^2]$. How variable is the estimator? The smaller the better.

Reconsider the cost function $J(\beta) = \sum_i (\mathbf{x}_i^T \beta - y_i)^2$. It can be considered as
$$E[(\hat{f}(x) - f(x) - \epsilon)^2] = \text{bias}^2 + \text{variance} + \text{noise}$$
Note $E[\epsilon] = 0$, $Var(\epsilon) = \sigma^2$.
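Writing $\bar{f}(x) = E[\hat{f}(x)]$, the decomposition follows by expanding the square; the cross terms vanish because $E[\hat{f}(x) - \bar{f}(x)] = 0$ and $\epsilon$ is independent of $\hat{f}(x)$ with $E[\epsilon] = 0$:
$$E[(\hat{f}(x) - f(x) - \epsilon)^2] = \underbrace{(\bar{f}(x) - f(x))^2}_{\text{bias}^2} + \underbrace{E[(\hat{f}(x) - \bar{f}(x))^2]}_{\text{variance}} + \underbrace{\sigma^2}_{\text{noise}}$$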

Bias-Variance Trade-off

Cross-Validation
- Partition the data into K folds.
- Use K−1 folds for training and 1 fold for testing.
- Calculate the average accuracy based on the K training-testing pairs.
- Accuracy is computed on the validation/test dataset! Mean squared error can again be used:
  $$\sum_i (\mathbf{x}_i^T \hat{\beta} - y_i)^2 / n$$
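A compact sketch of K-fold cross-validation for the OLS model, using mean squared error as on the slide (the fold count, seed, and function name are illustrative):

```python
import numpy as np

def kfold_mse(X, y, K=5, seed=0):
    """Average MSE over K training/testing pairs."""
    idx = np.random.default_rng(seed).permutation(len(y))
    folds = np.array_split(idx, K)
    errors = []
    for k in range(K):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(K) if j != k])
        # Fit OLS on K-1 folds, evaluate on the held-out fold
        beta = np.linalg.solve(X[train].T @ X[train], X[train].T @ y[train])
        errors.append(np.mean((X[test] @ beta - y[test]) ** 2))
    return np.mean(errors)
```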

AIC & BIC*
AIC and BIC can be used to assess the quality of statistical models.
- AIC (Akaike information criterion): $AIC = 2k - 2\ln(\hat{L})$, where $k$ is the number of parameters in the model and $\hat{L}$ is the likelihood under the estimated parameters.
- BIC (Bayesian information criterion): $BIC = k\ln(n) - 2\ln(\hat{L})$, where $n$ is the number of objects.
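A sketch of both criteria for the Gaussian linear model above. One caveat: the maximized log-likelihood here uses the MLE $\hat{\sigma}^2 = RSS/n$, and whether $\sigma^2$ is counted among the $k$ parameters varies by convention; this sketch counts it:

```python
import numpy as np

def aic_bic(X, y, beta):
    """AIC = 2k - 2 ln L_hat, BIC = k ln(n) - 2 ln L_hat, Gaussian errors."""
    n = len(y)
    k = X.shape[1] + 1  # p+1 regression weights plus the noise variance
    rss = np.sum((X @ beta - y) ** 2)
    # Maximized Gaussian log-likelihood with sigma^2 = rss / n
    loglik = -0.5 * n * (np.log(2 * np.pi * rss / n) + 1)
    return 2 * k - 2 * loglik, k * np.log(n) - 2 * loglik
```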

Stepwise Feature Selection
Avoid brute-force selection over all $2^p$ feature subsets; a greedy sketch follows below.
- Forward selection: start with the best single feature; at each step, add the feature that improves performance most; stop when no feature further improves performance.
- Backward elimination: start with the full model; at each step, remove the feature whose removal improves performance most; stop when removing any feature makes performance worse.
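A greedy forward-selection sketch, scoring candidate feature sets with any held-out criterion to minimize, such as the kfold_mse function from the cross-validation sketch above (the loop and names are my own illustration, not the course's code):

```python
import numpy as np

def forward_select(X, y, score):
    """Greedy forward selection: add the feature that improves the score
    most; stop when no feature helps. Column 0 (intercept) is always kept.
    `score(X_subset, y)` returns an error to minimize, e.g. kfold_mse.
    """
    selected, best = [0], np.inf
    candidates = set(range(1, X.shape[1]))
    while candidates:
        trial = {j: score(X[:, selected + [j]], y) for j in candidates}
        j_best = min(trial, key=trial.get)
        if trial[j_best] >= best:
            break  # no remaining feature further improves performance
        best = trial[j_best]
        selected.append(j_best)
        candidates.remove(j_best)
    return selected, best

# Usage: forward_select(X, y, score=kfold_mse)
```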

Matrix Data: Prediction
- Matrix Data
- Linear Regression Model
- Model Evaluation and Selection
- Summary

Summary
- What is matrix data? Attribute types.
- Linear regression: OLS; probabilistic interpretation.
- Model evaluation and selection: bias-variance trade-off; mean squared error; cross-validation; AIC; BIC; stepwise feature selection.