Machine Learning (COMP-652 and ECSE-608)


Machine Learning (COMP-652 and ECSE-608) Instructors: Doina Precup and Guillaume Rabusseau Email: dprecup@cs.mcgill.ca and guillaume.rabusseau@mail.mcgill.ca Teaching assistants: Tianyu Li and TBA Class web page: http://www.cs.mcgill.ca/~dprecup/courses/ml.html COMP-652 and ECSE-608, Lecture 1 - January 5, 2017

Outline Administrative issues What is machine learning? Types of machine learning Linear hypotheses Error functions Overfitting COMP-652 and ECSE-608, Lecture 1 - January 5, 2017 1

Administrative issues Class materials: No required textbook, but several textbooks available Required or recommended readings (from books or research papers) posted on the class web page Class notes: posted on the web page Prerequisites: Knowledge of a programming language Knowledge of probability/statistics, calculus and linear algebra; general facility with math Some AI background is recommended but not required COMP-652 and ECSE-608, Lecture 1 - January 5, 2017 2

Evaluation Four homework assignments (40%) Midterm examination (30%) Project (30%) Participation in class discussions (up to 1% extra credit) COMP-652 and ECSE-608, Lecture 1 - January 5, 2017 3

What is learning? H. Simon: Any process by which a system improves its performance M. Minsky: Learning is making useful changes in our minds R. Michalski: Learning is constructing or modifying representations of what is being experienced L. Valiant: Learning is the process of knowledge acquisition in the absence of explicit programming COMP-652 and ECSE-608, Lecture 1 - January 5, 2017 4

Why study machine learning? Engineering reasons: Easier to build a learning system than to hand-code a working program! E.g.: a robot that learns a map of the environment by exploring, programs that learn to play games by playing against themselves. Improving on existing programs, e.g. instruction scheduling and register allocation in compilers, combinatorial optimization problems. Solving tasks that require a system to be adaptive, e.g. speech and handwriting recognition, intelligent user interfaces. COMP-652 and ECSE-608, Lecture 1 - January 5, 2017 5

Why study machine learning? Scientific reasons: Discover knowledge and patterns in high-dimensional, complex data: sky surveys, high-energy physics data, sequence analysis in bioinformatics, social network analysis, ecosystem analysis. Understanding animal and human learning: How do we learn language? How do we recognize faces? Creating real AI! "If an expert system brilliantly designed, engineered and implemented cannot learn not to repeat its mistakes, it is not as intelligent as a worm or a sea anemone or a kitten." (Oliver Selfridge) COMP-652 and ECSE-608, Lecture 1 - January 5, 2017 6

Very brief history Studied ever since computers were invented (e.g. Samuel's checkers player) Very active in the 1960s (neural networks) Died down in the 1970s Revival in the early 1980s (decision trees, backpropagation, temporal-difference learning) - the term machine learning was coined Exploded since the 1990s Now: very active research field, several yearly conferences (e.g., ICML, NIPS), major journals (e.g., Machine Learning, Journal of Machine Learning Research), rapidly growing number of researchers The time is right to study this field! Lots of recent progress in algorithms and theory Flood of data to be analyzed Computational power is available Growing demand for industrial applications COMP-652 and ECSE-608, Lecture 1 - January 5, 2017 7

What are good machine learning tasks? There is no human expert E.g., DNA analysis Humans can perform the task but cannot explain how E.g., character recognition Desired function changes frequently E.g., predicting stock prices based on recent trading data Each user needs a customized function E.g., news filtering COMP-652 and ECSE-608, Lecture 1 - January 5, 2017 8

Important application areas Bioinformatics: sequence alignment, analyzing microarray data, information integration,... Computer vision: object recognition, tracking, segmentation, active vision,... Robotics: state estimation, map building, decision making Graphics: building realistic simulations Speech: recognition, speaker identification Financial analysis: option pricing, portfolio allocation E-commerce: automated trading agents, data mining, spam,... Medicine: diagnosis, treatment, drug design,... Computer games: building adaptive opponents Multimedia: retrieval across diverse databases COMP-652 and ECSE-608, Lecture 1 - January 5, 2017 9

Kinds of learning Based on the information available: supervised learning, reinforcement learning, unsupervised learning. Based on the role of the learner: passive learning, active learning. COMP-652 and ECSE-608, Lecture 1 - January 5, 2017 10

Passive and active learning Traditionally, learning algorithms have been passive learners, which take a given batch of data and process it to produce a hypothesis or model (Diagram: Data -> Learner -> Model) Active learners are instead allowed to query the environment: ask questions, perform experiments Open issues: how to query the environment optimally? how to account for the cost of queries? COMP-652 and ECSE-608, Lecture 1 - January 5, 2017 11

Example: A data set Cell Nuclei of Fine Needle Aspirate Cell samples were taken from tumors in breast cancer patients before surgery, and imaged Tumors were excised Patients were followed to determine whether or not the cancer recurred and, if so, how long until recurrence (otherwise, how long they remained disease-free) COMP-652 and ECSE-608, Lecture 1 - January 5, 2017 12

Data (continued) Thirty real-valued variables per tumor. Two variables that can be predicted: Outcome (R = recurrence, N = non-recurrence) Time (until recurrence for R; time healthy for N)

    tumor size   texture   perimeter   ...   outcome   time
    18.02        27.6      117.5       ...   N         31
    17.99        10.38     122.8       ...   N         61
    20.29        14.34     135.1       ...   R         27
    ...

COMP-652 and ECSE-608, Lecture 1 - January 5, 2017 13

Terminology (referring to the table on the previous slide) Columns are called input variables or features or attributes The outcome and time (which we are trying to predict) are called output variables or targets A row in the table is called a training example or instance The whole table is called the (training) data set The problem of predicting the recurrence is called (binary) classification The problem of predicting the time is called regression COMP-652 and ECSE-608, Lecture 1 - January 5, 2017 14

More formally (same table as above) A training example i has the form (x_{i,1}, ..., x_{i,n}, y_i), where n is the number of attributes (30 in our case). We will use the notation x_i to denote the column vector with elements x_{i,1}, ..., x_{i,n}. The training set D consists of m training examples. We denote the m x n matrix of attributes by X and the size-m column vector of outputs from the data set by y. COMP-652 and ECSE-608, Lecture 1 - January 5, 2017 15

Supervised learning problem Let X denote the space of input values Let Y denote the space of output values Given a data set D \subseteq X \times Y, find a function h : X \to Y such that h(x) is a good predictor for the value of y. h is called a hypothesis Problems are categorized by the type of output domain: If Y = R, the problem is called regression If Y is a categorical variable (i.e., part of a finite discrete set), the problem is called classification If Y is a more complex structure (e.g., a graph), the problem is called structured prediction COMP-652 and ECSE-608, Lecture 1 - January 5, 2017 16

Steps to solving a supervised learning problem 1. Decide what the input-output pairs are. 2. Decide how to encode inputs and outputs. This defines the input space X, and the output space Y. (We will discuss this in detail later) 3. Choose a class of hypotheses/representations H. 4.... COMP-652 and ECSE-608, Lecture 1 - January 5, 2017 17

Example: What hypothesis class should we pick?

       x        y
     0.86     2.49
     0.09     0.83
    -0.85    -0.25
     0.87     3.10
    -0.44     0.87
    -0.43     0.02
    -1.10    -0.12
     0.40     1.81
    -0.96    -0.83
     0.17     0.43

COMP-652 and ECSE-608, Lecture 1 - January 5, 2017 18

Linear hypothesis Suppose y was a linear function of x: h_w(x) = w_0 + w_1 x_1 (+ ...) The w_i are called parameters or weights To simplify notation, we can add an attribute x_0 = 1 to the other n attributes (also called bias term or intercept term): h_w(x) = \sum_{i=0}^{n} w_i x_i = w^T x, where w and x are vectors of size n + 1. How should we pick w? COMP-652 and ECSE-608, Lecture 1 - January 5, 2017 19
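To make the notation concrete, here is a minimal numpy sketch of evaluating h_w(x) = w^T x after adding the bias attribute x_0 = 1; the weight values are arbitrary placeholders, and the two rows reuse attribute values from the tumor table above.

    import numpy as np

    # Minimal sketch: evaluate a linear hypothesis h_w(x) = w^T x after prepending
    # the constant attribute x_0 = 1 (the bias / intercept term).
    X = np.array([[18.02, 27.6, 117.5],
                  [17.99, 10.38, 122.8]])             # m x n matrix of attributes
    X_aug = np.hstack([np.ones((X.shape[0], 1)), X])  # add the x_0 = 1 column
    w = np.array([0.5, 0.1, -0.2, 0.03])              # placeholder weights, size n + 1
    print(X_aug @ w)                                  # h_w(x_i) = w^T x_i for every row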

Error minimization! Intuitively, w should make the predictions of h_w close to the true values y on the data we have Hence, we will define an error function or cost function to measure how much our prediction differs from the true answer We will pick w such that the error function is minimized How should we choose the error function? COMP-652 and ECSE-608, Lecture 1 - January 5, 2017 20

Least mean squares (LMS) Main idea: try to make h_w(x) close to y on the examples in the training set We define a sum-of-squares error function J(w) = \frac{1}{2} \sum_{i=1}^{m} (h_w(x_i) - y_i)^2 (the 1/2 is just for convenience) We will choose w so as to minimize J(w) COMP-652 and ECSE-608, Lecture 1 - January 5, 2017 21
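A minimal numpy sketch of computing J(w), assuming the data matrix already carries the bias column; the example evaluates the error of an arbitrary candidate line on the first three (x, y) pairs of the example data.

    import numpy as np

    # Minimal sketch of the sum-of-squares error
    #   J(w) = (1/2) * sum_i (h_w(x_i) - y_i)^2,
    # assuming the data matrix X already contains the bias column x_0 = 1.
    def sum_squared_error(w, X, y):
        residuals = X @ w - y              # h_w(x_i) - y_i for every training example
        return 0.5 * residuals @ residuals

    # First three (x, y) pairs from the example data, with a bias column added.
    X = np.array([[1.0, 0.86], [1.0, 0.09], [1.0, -0.85]])
    y = np.array([2.49, 0.83, -0.25])
    print(sum_squared_error(np.array([1.05, 1.60]), X, y))   # error of the line y = 1.60x + 1.05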

Steps to solving a supervised learning problem 1. Decide what the input-output pairs are. 2. Decide how to encode inputs and outputs. This defines the input space X, and the output space Y. 3. Choose a class of hypotheses/representations H. 4. Choose an error function (cost function) to define the best hypothesis 5. Choose an algorithm for searching efficiently through the space of hypotheses. COMP-652 and ECSE-608, Lecture 1 - January 5, 2017 22

Notation reminder Consider a function f(u_1, u_2, ..., u_n) : R^n \to R (for us, this will usually be an error function) The partial derivative with respect to u_i is denoted \frac{\partial}{\partial u_i} f(u_1, u_2, ..., u_n) : R^n \to R The partial derivative is the derivative along the u_i axis, keeping all other variables fixed. The gradient \nabla f(u_1, u_2, ..., u_n) : R^n \to R^n is a function which outputs a vector containing the partial derivatives. That is: \nabla f = \left( \frac{\partial f}{\partial u_1}, \frac{\partial f}{\partial u_2}, ..., \frac{\partial f}{\partial u_n} \right) COMP-652 and ECSE-608, Lecture 1 - January 5, 2017 23

A bit of algebra
\frac{\partial}{\partial w_j} J(w) = \frac{\partial}{\partial w_j} \frac{1}{2} \sum_{i=1}^{m} (h_w(x_i) - y_i)^2
= \frac{1}{2} \sum_{i=1}^{m} 2 (h_w(x_i) - y_i) \frac{\partial}{\partial w_j} (h_w(x_i) - y_i)
= \sum_{i=1}^{m} (h_w(x_i) - y_i) \frac{\partial}{\partial w_j} \left( \sum_{l=0}^{n} w_l x_{i,l} - y_i \right)
= \sum_{i=1}^{m} (h_w(x_i) - y_i) x_{i,j}
Setting all these partial derivatives to 0, we get a linear system with (n + 1) equations and (n + 1) unknowns. COMP-652 and ECSE-608, Lecture 1 - January 5, 2017 24
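One way to convince yourself of this derivation is to compare the analytic gradient X^T (Xw - y) with a finite-difference approximation on random toy data; a small sketch, assuming numpy only:

    import numpy as np

    # Sanity check of the derivation: the analytic partial derivatives
    # dJ/dw_j = sum_i (h_w(x_i) - y_i) x_{i,j}, i.e. grad J = X^T (Xw - y),
    # should agree with finite differences.
    def J(w, X, y):
        r = X @ w - y
        return 0.5 * r @ r

    def grad_J(w, X, y):
        return X.T @ (X @ w - y)

    rng = np.random.default_rng(0)
    X = np.hstack([np.ones((5, 1)), rng.normal(size=(5, 2))])   # toy data with a bias column
    y = rng.normal(size=5)
    w = rng.normal(size=3)

    eps = 1e-6
    numeric = np.array([(J(w + eps * e, X, y) - J(w - eps * e, X, y)) / (2 * eps)
                        for e in np.eye(3)])
    print(np.allclose(grad_J(w, X, y), numeric))   # True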

The solution Recalling some multivariate calculus:
\nabla_w J = \nabla_w \frac{1}{2} (Xw - y)^T (Xw - y)
= \nabla_w \frac{1}{2} (w^T X^T X w - y^T X w - w^T X^T y + y^T y)
= X^T X w - X^T y
Setting the gradient equal to zero: X^T X w - X^T y = 0, i.e. X^T X w = X^T y, so w = (X^T X)^{-1} X^T y. The inverse exists if the columns of X are linearly independent. COMP-652 and ECSE-608, Lecture 1 - January 5, 2017 25
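A minimal sketch of this closed-form solution in numpy; it solves the linear system X^T X w = X^T y directly rather than forming the inverse, which gives the same w but is numerically preferable.

    import numpy as np

    # Closed-form linear regression: solve X^T X w = X^T y.
    def fit_linear(X, y):
        return np.linalg.solve(X.T @ X, X.T @ y)

    # Toy usage with a bias column. np.linalg.lstsq(X, y, rcond=None) would also work,
    # and it still returns a solution when the columns of X are not linearly independent.
    X = np.array([[1.0, 0.86], [1.0, 0.09], [1.0, -0.85], [1.0, 0.87]])
    y = np.array([2.49, 0.83, -0.25, 3.10])
    print(fit_linear(X, y))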

Example: Data and best linear hypothesis (Plot: the data points and the fitted line y = 1.60x + 1.05, with y on the vertical axis and x on the horizontal axis.) COMP-652 and ECSE-608, Lecture 1 - January 5, 2017 26

Linear regression summary The optimal solution (minimizing sum-squared error) can be computed in polynomial time in the size of the data set. The solution is w = (X^T X)^{-1} X^T y, where X is the data matrix augmented with a column of ones, and y is the column vector of target outputs. A very rare case in which an analytical, exact solution is possible COMP-652 and ECSE-608, Lecture 1 - January 5, 2017 27

Coming back to mean-squared error function... Good intuitive feel (small errors are ignored, large errors are penalized) Nice math (closed-form solution, unique global optimum) Geometric interpretation Any other interpretation? COMP-652 and ECSE-608, Lecture 1 - January 5, 2017 28

A probabilistic assumption Assume y_i is a noisy target value, generated from a hypothesis h_w(x) More specifically, assume that there exists w such that: y_i = h_w(x_i) + \epsilon_i where \epsilon_i is a random variable (noise) drawn independently for each x_i from a Gaussian (normal) distribution with mean zero and variance \sigma^2. How should we choose the parameter vector w? COMP-652 and ECSE-608, Lecture 1 - January 5, 2017 29
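This assumption can be simulated directly; in the sketch below the "true" weights and noise level are arbitrary illustrative choices, and least squares approximately recovers the weights from the noisy samples.

    import numpy as np

    # Simulate the assumed data-generating process y_i = h_w(x_i) + eps_i,
    # eps_i ~ N(0, sigma^2), with illustrative choices of w and sigma.
    rng = np.random.default_rng(0)
    w_true = np.array([1.05, 1.60])              # intercept and slope
    sigma = 0.5

    x = rng.uniform(-1, 1, size=100)
    X = np.column_stack([np.ones_like(x), x])    # add the bias attribute x_0 = 1
    y = X @ w_true + rng.normal(0.0, sigma, size=100)

    w_hat = np.linalg.solve(X.T @ X, X.T @ y)    # least squares roughly recovers w_true
    print(w_hat)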

Bayes theorem in learning Let h be a hypothesis and D be the set of training data. Using Bayes theorem, we have: P(h|D) = \frac{P(D|h) P(h)}{P(D)}, where: P(h) is the prior probability of hypothesis h P(D) = \sum_h P(D|h) P(h) is the probability of the training data D (normalization, independent of h) P(h|D) is the probability of h given D P(D|h) is the probability of D given h (likelihood of the data) COMP-652 and ECSE-608, Lecture 1 - January 5, 2017 30

Choosing hypotheses What is the most probable hypothesis given the training data? Maximum a posteriori (MAP) hypothesis h_{MAP}: h_{MAP} = \arg\max_{h \in H} P(h|D) = \arg\max_{h \in H} \frac{P(D|h) P(h)}{P(D)} (using Bayes theorem) = \arg\max_{h \in H} P(D|h) P(h) The last step is because P(D) is independent of h (so constant for the maximization) This is the Bayesian answer (more in a minute) COMP-652 and ECSE-608, Lecture 1 - January 5, 2017 31

Maximum likelihood estimation h_{MAP} = \arg\max_{h \in H} P(D|h) P(h) If we assume P(h_i) = P(h_j) (all hypotheses are equally likely a priori) then we can further simplify, and choose the maximum likelihood (ML) hypothesis: h_{ML} = \arg\max_{h \in H} P(D|h) = \arg\max_{h \in H} L(h) Standard assumption: the training examples are independently identically distributed (i.i.d.) This allows us to simplify P(D|h): P(D|h) = \prod_{i=1}^{m} P(x_i, y_i | h) = \prod_{i=1}^{m} P(y_i | x_i; h) P(x_i) COMP-652 and ECSE-608, Lecture 1 - January 5, 2017 32

The log trick We want to maximize: L(h) = \prod_{i=1}^{m} P(y_i | x_i; h) P(x_i) This is a product, and products are hard to maximize! Instead, we will maximize log L(h) (the log-likelihood function): \log L(h) = \sum_{i=1}^{m} \log P(y_i | x_i; h) + \sum_{i=1}^{m} \log P(x_i) The second sum depends on D, but not on h, so it can be ignored in the search for a good hypothesis COMP-652 and ECSE-608, Lecture 1 - January 5, 2017 33

Maximum likelihood for regression Adopt the assumption that: y_i = h_w(x_i) + \epsilon_i, where \epsilon_i \sim N(0, \sigma^2). The best hypothesis maximizes the likelihood of y_i - h_w(x_i) = \epsilon_i Hence, L(w) = \prod_{i=1}^{m} \frac{1}{\sqrt{2\pi\sigma^2}} e^{-\frac{1}{2} \left( \frac{y_i - h_w(x_i)}{\sigma} \right)^2} because the noise variables \epsilon_i are from a Gaussian distribution COMP-652 and ECSE-608, Lecture 1 - January 5, 2017 34

Applying the log trick \log L(w) = \sum_{i=1}^{m} \log \frac{1}{\sqrt{2\pi\sigma^2}} - \sum_{i=1}^{m} \frac{1}{2} \frac{(y_i - h_w(x_i))^2}{\sigma^2} Maximizing the right-hand side is the same as minimizing: \sum_{i=1}^{m} \frac{1}{2} \frac{(y_i - h_w(x_i))^2}{\sigma^2} This is our old friend, the sum-squared-error function! (the constants that are independent of h can again be ignored) COMP-652 and ECSE-608, Lecture 1 - January 5, 2017 35

Maximum likelihood hypothesis for least-squares estimators Under the assumption that the training examples are i.i.d. and that we have Gaussian target noise, the maximum likelihood parameters w are those minimizing the sum squared error: w = \arg\min_{w} \sum_{i=1}^{m} (y_i - h_w(x_i))^2 This makes explicit the hypothesis behind minimizing the sum-squared error If the noise is not normally distributed, maximizing the likelihood will not be the same as minimizing the sum-squared error In practice, different loss functions are used depending on the noise assumption COMP-652 and ECSE-608, Lecture 1 - January 5, 2017 36
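A small numerical illustration of this equivalence, using synthetic data and a crude grid over w: the Gaussian negative log-likelihood is 0.5 * SSE / sigma^2 plus a constant, so both criteria pick the same weights.

    import numpy as np

    # Under Gaussian noise, minimizing the negative log-likelihood and minimizing
    # the sum-squared error select the same w (they differ by a constant and a scale).
    rng = np.random.default_rng(1)
    x = rng.uniform(-1, 1, size=50)
    X = np.column_stack([np.ones_like(x), x])
    sigma = 0.3
    y = 1.0 + 2.0 * x + rng.normal(0, sigma, size=50)

    def sse(w):
        r = X @ w - y
        return r @ r

    def neg_log_lik(w):
        r = X @ w - y
        return 0.5 * len(y) * np.log(2 * np.pi * sigma**2) + 0.5 * (r @ r) / sigma**2

    grid = [np.array([a, b]) for a in np.linspace(0, 2, 21) for b in np.linspace(1, 3, 21)]
    print(np.allclose(min(grid, key=sse), min(grid, key=neg_log_lik)))   # True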

A graphical representation for the data generation process (Diagram: a node X with X ~ P(X); the parameter w, which in the ML view is fixed but unknown; a noise node eps with eps ~ N(0, sigma); and the output node y, computed deterministically as y = h_w(x) + eps.) Circles represent (random) variables Arrows represent dependencies between variables Some variables are observed, others need to be inferred because they are hidden (latent) New assumptions can be incorporated by making the model more complicated COMP-652 and ECSE-608, Lecture 1 - January 5, 2017 37

Predicting recurrence time based on tumor size (Plot: time to recurrence (months?) on the vertical axis versus tumor radius (mm?) on the horizontal axis.) COMP-652 and ECSE-608, Lecture 1 - January 5, 2017 38

Is linear regression enough? Linear regression is too simple for most realistic problems But it should be the first thing you try for real-valued outputs! Problems can also occur if X^T X is not invertible. Two possible solutions: 1. Transform the data: add cross-terms, higher-order terms; more generally, apply a transformation of the inputs from X to some other space X', then do linear regression in the transformed space 2. Use a different hypothesis class (e.g. non-linear functions) Today we focus on the first approach COMP-652 and ECSE-608, Lecture 1 - January 5, 2017 39

Polynomial fits Suppose we want to fit a higher-degree polynomial to the data. (E.g., y = w_2 x^2 + w_1 x + w_0.) Suppose for now that there is a single input variable per training sample. How do we do it? COMP-652 and ECSE-608, Lecture 1 - January 5, 2017 40

Answer: Polynomial regression Given data: (x_1, y_1), (x_2, y_2), ..., (x_m, y_m). Suppose we want a degree-d polynomial fit. Let y be as before and let

    X = [ x_1^d  ...  x_1^2  x_1  1 ]
        [ x_2^d  ...  x_2^2  x_2  1 ]
        [            ...            ]
        [ x_m^d  ...  x_m^2  x_m  1 ]

Solve the linear regression Xw ≈ y. COMP-652 and ECSE-608, Lecture 1 - January 5, 2017 41
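A minimal numpy sketch of this construction; np.vander builds exactly the decreasing-powers matrix [x^d, ..., x^2, x, 1] shown above, and ordinary least squares then handles the rest.

    import numpy as np

    # Degree-d polynomial regression = linear regression on the design matrix
    # whose columns are x^d, ..., x^2, x, 1.
    def poly_design(x, d):
        return np.vander(x, N=d + 1)     # decreasing powers, matching the layout above

    x = np.array([0.86, 0.09, -0.85, 0.87, -0.44])
    y = np.array([2.49, 0.83, -0.25, 3.10, 0.87])
    X = poly_design(x, 2)                # quadratic fit on five of the example points
    w = np.linalg.lstsq(X, y, rcond=None)[0]
    print(w)                             # coefficients of x^2, x, 1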

Example of quadratic regression: Data matrices

    X = [ 0.75   0.86  1 ]        y = [  2.49 ]
        [ 0.01   0.09  1 ]            [  0.83 ]
        [ 0.73  -0.85  1 ]            [ -0.25 ]
        [ 0.76   0.87  1 ]            [  3.10 ]
        [ 0.19  -0.44  1 ]            [  0.87 ]
        [ 0.18  -0.43  1 ]            [  0.02 ]
        [ 1.22  -1.10  1 ]            [ -0.12 ]
        [ 0.16   0.40  1 ]            [  1.81 ]
        [ 0.93  -0.96  1 ]            [ -0.83 ]
        [ 0.03   0.17  1 ]            [  0.43 ]

(columns of X: x^2, x, 1; the signs follow the (x, y) table given earlier) COMP-652 and ECSE-608, Lecture 1 - January 5, 2017 42

X^T X Multiplying the 3 x 10 matrix X^T by the 10 x 3 matrix X above gives:

    X^T X = [  4.11  -1.64   4.95 ]
            [ -1.64   4.95  -1.39 ]
            [  4.95  -1.39  10    ]

COMP-652 and ECSE-608, Lecture 1 - January 5, 2017 43

X^T y Multiplying X^T by the target vector y above gives:

    X^T y = [ 3.60 ]
            [ 6.49 ]
            [ 8.34 ]

COMP-652 and ECSE-608, Lecture 1 - January 5, 2017 44

Solving for w

    w = (X^T X)^{-1} X^T y = [  4.11  -1.64   4.95 ]^{-1} [ 3.60 ]   = [ 0.68 ]
                             [ -1.64   4.95  -1.39 ]      [ 6.49 ]     [ 1.74 ]
                             [  4.95  -1.39  10    ]      [ 8.34 ]     [ 0.73 ]

So the best order-2 polynomial is y = 0.68x^2 + 1.74x + 0.73. COMP-652 and ECSE-608, Lecture 1 - January 5, 2017 45
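The worked example can be re-derived in a few lines of numpy from the ten (x, y) pairs listed earlier; the printed coefficients come out close to, but not exactly equal to, 0.68, 1.74 and 0.73 because the tabulated values are rounded to two decimals.

    import numpy as np

    # Recompute the quadratic fit w = (X^T X)^{-1} X^T y from the example data.
    x = np.array([0.86, 0.09, -0.85, 0.87, -0.44, -0.43, -1.10, 0.40, -0.96, 0.17])
    y = np.array([2.49, 0.83, -0.25, 3.10, 0.87, 0.02, -0.12, 1.81, -0.83, 0.43])

    X = np.column_stack([x**2, x, np.ones_like(x)])   # columns: x^2, x, 1
    w = np.linalg.solve(X.T @ X, X.T @ y)
    print(np.round(w, 2))   # roughly [0.69, 1.75, 0.74]; the slide's rounded data give [0.68, 1.74, 0.73]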

Linear function approximation in general Given a set of examples (x_i, y_i), i = 1, ..., m, we fit a hypothesis h_w(x) = \sum_{k=0}^{K-1} w_k \phi_k(x) = w^T \phi(x) where the \phi_k are called basis functions The best w is considered the one which minimizes the sum-squared error over the training data: \sum_{i=1}^{m} (y_i - h_w(x_i))^2 We can find the best w in closed form: w = (\Phi^T \Phi)^{-1} \Phi^T y or by other methods (e.g. gradient descent - as will be seen later) COMP-652 and ECSE-608, Lecture 1 - January 5, 2017 46
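A generic sketch of this recipe: build the matrix Phi with entries Phi[i, k] = phi_k(x_i) and reuse the same least-squares machinery; the particular basis chosen below (low-order polynomials) is just an example.

    import numpy as np

    # Linear function approximation with fixed basis functions:
    # h_w(x) = sum_k w_k phi_k(x), fit by least squares on Phi.
    def design_matrix(x, basis_functions):
        return np.column_stack([phi(x) for phi in basis_functions])

    def fit(x, y, basis_functions):
        Phi = design_matrix(x, basis_functions)
        return np.linalg.lstsq(Phi, y, rcond=None)[0]   # w = (Phi^T Phi)^{-1} Phi^T y

    # Example basis: phi_0(x) = 1, phi_1(x) = x, phi_2(x) = x^2.
    basis = [lambda x: np.ones_like(x), lambda x: x, lambda x: x**2]
    x = np.array([0.86, 0.09, -0.85, 0.87, -0.44])
    y = np.array([2.49, 0.83, -0.25, 3.10, 0.87])
    print(fit(x, y, basis))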

Linear models in general By linear models, we mean that the hypothesis function h_w(x) is a linear function of the parameters w This does not mean that h_w(x) is a linear function of the input vector x (e.g., polynomial regression) In general, h_w(x) = \sum_{k=0}^{K-1} w_k \phi_k(x) = w^T \phi(x) where the \phi_k are called basis functions Usually, we will assume that \phi_0(x) = 1 for all x, to create a bias term The hypothesis can alternatively be written as: h_w(x) = \Phi w where \Phi is a matrix with one row per instance; row j contains \phi(x_j). Basis functions are fixed COMP-652 and ECSE-608, Lecture 1 - January 5, 2017 47

Example basis functions: Polynomials \phi_k(x) = x^k (Plot: polynomial basis functions of increasing degree on [-1, 1].) Global functions: a small change in x may cause a large change in the output of many basis functions COMP-652 and ECSE-608, Lecture 1 - January 5, 2017 48

Example basis functions: Gaussians \phi_k(x) = \exp\left( -\frac{(x - \mu_k)^2}{2\sigma^2} \right) (Plot: Gaussian bumps with different centers on [-1, 1].) \mu_k controls the position along the x-axis \sigma controls the width (activation radius) \mu_k, \sigma fixed for now (later we discuss adjusting them) Usually thought of as local functions: if \sigma is relatively small, a small change in x only causes a change in the output of a few basis functions (the ones with means close to x) COMP-652 and ECSE-608, Lecture 1 - January 5, 2017 49

Example basis functions: Sigmoidal \phi_k(x) = \sigma\left( \frac{x - \mu_k}{s} \right), where \sigma(a) = \frac{1}{1 + \exp(-a)} (Plot: sigmoidal basis functions with different centers on [-1, 1].) \mu_k controls the position along the x-axis s controls the slope \mu_k, s fixed for now (later we discuss adjusting them) Local functions: a small change in x only causes a change in the output of a few basis functions (most others will stay close to 0 or 1) COMP-652 and ECSE-608, Lecture 1 - January 5, 2017 50
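Both of these local basis families are easy to compute as feature matrices; in the sketch below the centers mu_k and the width parameters are illustrative choices rather than values from the lecture.

    import numpy as np

    # Gaussian and sigmoidal basis features, one column per center mu_k.
    def gaussian_basis(x, centers, sigma):
        # phi_k(x) = exp(-(x - mu_k)^2 / (2 sigma^2))
        return np.exp(-(x[:, None] - centers[None, :])**2 / (2 * sigma**2))

    def sigmoid_basis(x, centers, s):
        # phi_k(x) = 1 / (1 + exp(-(x - mu_k) / s))
        return 1.0 / (1.0 + np.exp(-(x[:, None] - centers[None, :]) / s))

    x = np.linspace(-1, 1, 5)
    centers = np.linspace(-1, 1, 4)
    print(gaussian_basis(x, centers, sigma=0.3).shape)   # (5, 4): one feature per center
    print(sigmoid_basis(x, centers, s=0.3).shape)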

Order-2 fit (Plot: degree-2 polynomial fit to the data.) Is this a better fit to the data? COMP-652 and ECSE-608, Lecture 1 - January 5, 2017 51

Order-3 fit (Plot: degree-3 polynomial fit to the data.) Is this a better fit to the data? COMP-652 and ECSE-608, Lecture 1 - January 5, 2017 52

Order-4 fit (Plot: degree-4 polynomial fit to the data.) Is this a better fit to the data? COMP-652 and ECSE-608, Lecture 1 - January 5, 2017 53

Order-5 fit (Plot: degree-5 polynomial fit to the data.) Is this a better fit to the data? COMP-652 and ECSE-608, Lecture 1 - January 5, 2017 54

Order-6 fit (Plot: degree-6 polynomial fit to the data.) Is this a better fit to the data? COMP-652 and ECSE-608, Lecture 1 - January 5, 2017 55

Order-7 fit (Plot: degree-7 polynomial fit to the data.) Is this a better fit to the data? COMP-652 and ECSE-608, Lecture 1 - January 5, 2017 56

Order-8 fit (Plot: degree-8 polynomial fit to the data.) Is this a better fit to the data? COMP-652 and ECSE-608, Lecture 1 - January 5, 2017 57

Order-9 fit (Plot: degree-9 polynomial fit to the data.) Is this a better fit to the data? COMP-652 and ECSE-608, Lecture 1 - January 5, 2017 58

Overfitting A general, HUGELY IMPORTANT problem for all machine learning algorithms We can find a hypothesis that perfectly predicts the training data but does not generalize well to new data E.g., a lookup table! We are seeing an instance here: if we have a lot of parameters, the hypothesis memorizes the data points, but is wild everywhere else. Next time: defining overfitting formally, and finding ways to avoid it COMP-652 and ECSE-608, Lecture 1 - January 5, 2017 59
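The effect is easy to reproduce on synthetic data: as the polynomial degree grows, the training error keeps shrinking while the error on held-out data eventually explodes. The sketch below uses an arbitrary linear "true" function and Gaussian noise.

    import numpy as np

    # Overfitting illustration: fit polynomials of increasing degree to a small
    # noisy sample and compare training error with error on held-out data.
    rng = np.random.default_rng(0)
    f = lambda x: 1.0 + 2.0 * x                      # arbitrary "true" function
    x_train = rng.uniform(-1, 1, 15)
    x_test = rng.uniform(-1, 1, 100)
    y_train = f(x_train) + rng.normal(0, 0.3, x_train.size)
    y_test = f(x_test) + rng.normal(0, 0.3, x_test.size)

    for d in [1, 3, 9, 13]:
        w = np.linalg.lstsq(np.vander(x_train, d + 1), y_train, rcond=None)[0]
        train_mse = np.mean((np.vander(x_train, d + 1) @ w - y_train) ** 2)
        test_mse = np.mean((np.vander(x_test, d + 1) @ w - y_test) ** 2)
        print(f"degree {d:2d}: train MSE {train_mse:.3f}, test MSE {test_mse:.3f}")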