Logistic Regression
Professor Ameet Talwalkar
CS260 Machine Learning Algorithms
January 25, 2017

Outline
1. Administration
2. Review of last lecture
3. Logistic regression

HW1
- Will be returned today in class during our break
- Can also be picked up from Brooke during her office hours

Outline
1. Administration
2. Review of last lecture: Naive Bayes
3. Logistic regression

How to tell spam from ham?

Spam example:
FROM THE DESK OF MR. AMINU SALEH
DIRECTOR, FOREIGN OPERATIONS DEPARTMENT
AFRI BANK PLC
Afribank Plaza, 14th Floor
51/55 Broad Street, P.M.B 12021 Lagos-Nigeria
Attention: Honorable Beneficiary,
IMMEDIATE PAYMENT NOTIFICATION VALUED AT US$10 MILLION

Ham example:
Dear Ameet,
Do you have 10 minutes to get on a videocall before 2pm?
Thanks, Stefano

Simple strategy: count the words. For example, one email gives counts free: 100, money: 2, ..., account: 2, ...; another gives free: 1, money: 1, ..., account: 2, ...

This is the bag-of-words representation of documents (and textual data).

Naive Bayes (in our Spam Email Setting)

Assume X ∈ R^D, all X_d ∈ [K], and z_k is the number of times k appears in X.

P(X = x, Y = c) = P(Y = c) P(X = x | Y = c) = P(Y = c) ∏_k P(k | Y = c)^{z_k} = π_c ∏_k θ_ck^{z_k}

Key assumptions made?
- Conditional independence: P(X_i, X_j | Y = c) = P(X_i | Y = c) P(X_j | Y = c).
- P(X_i | Y = c) depends only on the value of X_i, not on i itself (the order of words does not matter in the bag-of-words representation of documents).

Learning problem

Training data: D = {({z_nk}_{k=1}^K, y_n)}_{n=1}^N

Goal: Learn π_c, c = 1, 2, ..., C, and θ_ck, c ∈ [C], k ∈ [K], under the constraints
∑_c π_c = 1 and ∑_k θ_ck = ∑_k P(k | Y = c) = 1,
and all quantities should be nonnegative.

Likelihood Function

Let X_1, ..., X_N be IID with PDF f(x | θ) (also written as f(x; θ)). The likelihood function L(θ | x) (also written as L(θ; x)) is defined by
L(θ | x) = ∏_{i=1}^N f(x_i; θ).

Notes: The likelihood function is just the joint density of the data, except that we treat it as a function of the parameter θ, i.e., L : Θ → [0, ∞).

Maximum Likelihood Estimator

Definition: The maximum likelihood estimator (MLE) θ̂ is the value of θ that maximizes L(θ).

The log-likelihood function is defined by ℓ(θ) = log L(θ).
- Its maximum occurs at the same place as that of the likelihood function.
- Using logs simplifies mathematical expressions (it converts exponents to products and products to sums).
- Using logs helps with numerical stability.
- The same is true of the likelihood function times any constant; thus we shall often drop constants in the likelihood function.
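
To make the definition concrete, here is a minimal sketch (my own illustration, not from the slides) that evaluates the Bernoulli log-likelihood of some coin-flip data on a grid and checks that the maximizer matches the closed-form MLE, the sample mean:

```python
import numpy as np

# Toy data: 10 coin flips (1 = heads). The Bernoulli MLE is the sample mean.
x = np.array([1, 0, 1, 1, 0, 1, 1, 1, 0, 1])

def log_likelihood(theta, x):
    # l(theta) = sum_i [ x_i log(theta) + (1 - x_i) log(1 - theta) ]
    return np.sum(x * np.log(theta) + (1 - x) * np.log(1 - theta))

grid = np.linspace(0.01, 0.99, 99)
lls = np.array([log_likelihood(t, x) for t in grid])

print("grid maximizer:", grid[np.argmax(lls)])        # ~0.7
print("closed-form MLE (sample mean):", x.mean())     # 0.7
```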

Bayes Rule

For any document x, we want to compare p(spam | x) and p(ham | x).

Axiom of probability: p(spam, x) = p(spam | x) p(x) = p(x | spam) p(spam)

This gives us (via Bayes' rule):
p(spam | x) = p(x | spam) p(spam) / p(x)
p(ham | x) = p(x | ham) p(ham) / p(x)

The denominators are the same, and it is easier to compute with logarithms, so we compare:
log[p(x | spam) p(spam)] versus log[p(x | ham) p(ham)]

Our hammer: maximum likelihood estimation

Log-likelihood of the training data:
L = log P(D) = log ∏_{n=1}^N π_{y_n} P(x_n | y_n)
  = log ∏_{n=1}^N ( π_{y_n} ∏_k θ_{y_n,k}^{z_nk} )
  = ∑_n ( log π_{y_n} + ∑_k z_nk log θ_{y_n,k} )
  = ∑_n log π_{y_n} + ∑_{n,k} z_nk log θ_{y_n,k}

Optimize it!
(π_c, θ_ck) = arg max ∑_n log π_{y_n} + ∑_{n,k} z_nk log θ_{y_n,k}

Details

Note the separation of parameters in the likelihood
∑_n log π_{y_n} + ∑_{n,k} z_nk log θ_{y_n,k}
which implies that {π_c} and {θ_ck} can be estimated separately.

Reorganize terms:
∑_n log π_{y_n} = ∑_c log π_c · (# of data points labeled as c)
and
∑_{n,k} z_nk log θ_{y_n,k} = ∑_c ∑_{n: y_n = c} ∑_k z_nk log θ_ck = ∑_c ∑_{n: y_n = c, k} z_nk log θ_ck

The latter implies that {θ_ck, k = 1, 2, ..., K} and {θ_c'k, k = 1, 2, ..., K} can be estimated independently for different classes c and c'.

Estimating {π_c}

We want to maximize ∑_c log π_c · (# of data points labeled as c)

Intuition: Similar to rolling a die (or flipping a coin): each side of the die shows up with probability π_c (C sides in total), and we have N trials of rolling this die in total.

Solution:
π_c = (# of data points labeled as c) / N

Estimating {θ_ck, k = 1, 2, ..., K}

We want to maximize ∑_{n: y_n = c, k} z_nk log θ_ck

Intuition: Again similar to rolling a die: each side of the die shows up with probability θ_ck (K sides in total), and we have ∑_{n: y_n = c, k} z_nk trials in total.

Solution:
θ_ck = (# of times side k shows up in data points labeled as c) / (total # of trials for data points labeled as c)

Translating back to our problem of detecting spam emails

Collect a lot of ham and spam emails as training examples.

Estimate the bias:
p(ham) = (# of ham emails) / (# of emails),  p(spam) = (# of spam emails) / (# of emails)

Estimate the weights (i.e., p(funny word | ham), etc.):
p(funny word | ham) = (# of "funny word" occurrences in ham emails) / (# of words in ham emails)   (1)
p(funny word | spam) = (# of "funny word" occurrences in spam emails) / (# of words in spam emails)   (2)
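
As a concrete illustration (my own sketch, not the lecture's code), here is how these counting estimates look in numpy for a toy spam/ham corpus given as bag-of-words count vectors; the data and variable names are made up:

```python
import numpy as np

# Toy training data: rows are emails, columns are vocabulary words,
# entries z_nk are word counts; y_n = 1 for spam, 0 for ham.
Z = np.array([[100, 2, 2],   # spam email: free=100, money=2, account=2
              [ 30, 5, 1],   # spam email
              [  1, 1, 2],   # ham email
              [  0, 0, 3]])  # ham email
y = np.array([1, 1, 0, 0])

C = 2  # number of classes (0 = ham, 1 = spam)
# Class priors: pi_c = (# emails labeled c) / N
pi = np.array([(y == c).mean() for c in range(C)])

# Word probabilities: theta_ck = (count of word k in class c) / (total words in class c)
counts = np.array([Z[y == c].sum(axis=0) for c in range(C)], dtype=float)
theta = counts / counts.sum(axis=1, keepdims=True)

print("pi =", pi)        # [0.5, 0.5]
print("theta =", theta)  # each row sums to 1
```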

Classification rule

Given an unlabeled data point x = {z_k, k = 1, 2, ..., K}, label it with
y = arg max_{c ∈ [C]} P(y = c | x) = arg max_{c ∈ [C]} P(y = c) P(x | y = c) = arg max_c [ log π_c + ∑_k z_k log θ_ck ]
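
Continuing the toy illustration (again my own sketch with made-up parameters), the classification rule is a log-space arg max over classes:

```python
import numpy as np

def nb_predict(z, pi, theta):
    """Label z with arg max_c [ log pi_c + sum_k z_k log theta_ck ]."""
    log_scores = np.log(pi) + z @ np.log(theta).T  # shape (C,)
    return int(np.argmax(log_scores))

# Toy parameters for a 3-word vocabulary (hypothetical values).
pi = np.array([0.5, 0.5])                # P(ham), P(spam)
theta = np.array([[0.10, 0.30, 0.60],    # P(word k | ham)
                  [0.80, 0.15, 0.05]])   # P(word k | spam)

z = np.array([7, 1, 0])  # counts of each vocabulary word in a new email
print(nb_predict(z, pi, theta))  # 1, i.e. spam-like under these made-up parameters
```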

Constrained optimization

Equality constraints:
min f(x)  s.t.  g(x) = 0

Method of Lagrange multipliers: construct the following function (the Lagrangian)
L(x, λ) = f(x) + λ g(x)

A short derivation of the maximum likelihood estimation

To maximize ∑_{n: y_n = c, k} z_nk log θ_ck, we can use Lagrange multipliers:
f(θ) = ∑_{n: y_n = c, k} z_nk log θ_ck
g(θ) = 1 − ∑_k θ_ck = 0

Lagrangian:
L(θ, λ) = f(θ) + λ g(θ) = ∑_{n: y_n = c, k} z_nk log θ_ck + λ ( 1 − ∑_k θ_ck )

L(θ, λ) = ∑_{n: y_n = c, k} z_nk log θ_ck + λ ( 1 − ∑_k θ_ck )

First take the derivative with respect to θ_ck and find the stationary point:
∑_{n: y_n = c} z_nk / θ_ck − λ = 0   ⟹   θ_ck = (1/λ) ∑_{n: y_n = c} z_nk

Plug this expression for θ_ck into the constraint ∑_k θ_ck = 1, solve for λ, and plug that λ back into the expression for θ_ck to get:
θ_ck = ∑_{n: y_n = c} z_nk / ( ∑_{k'} ∑_{n: y_n = c} z_nk' )

Chris will review this in section on Friday.

Summary

Things you should know:
- The form of the naive Bayes model: write down the joint distribution, explain the conditional independence assumption implied by the model, explain how the model can be used to classify spam vs. ham emails, and explain how it could be used for categorical variables.
- Be able to go through the short derivation for parameter estimation.
- The model illustrated here is called discrete naive Bayes. HW2 asks you to apply the same principle to other variants of naive Bayes; the derivations are very similar, except that you need to estimate different model parameters.

Moving forward

Examine the classification rule for naive Bayes:
y = arg max_c [ log π_c + ∑_k z_k log θ_ck ]

For binary classification, we thus determine the label based on the sign of
( log π_1 + ∑_k z_k log θ_1k ) − ( log π_2 + ∑_k z_k log θ_2k )

This is just a linear function of the features {z_k}:
w_0 + ∑_k z_k w_k
where we absorb w_0 = log π_1 − log π_2 and w_k = log θ_1k − log θ_2k.

Naive Bayes is a linear classifier

Fundamentally, what really matters in deciding the decision boundary is
w_0 + ∑_k z_k w_k
This motivates many new methods, including logistic regression, to be discussed next.
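
To illustrate the point (my own sketch, with made-up parameters), one can check numerically that the naive Bayes log-score difference equals the linear score w_0 + ∑_k z_k w_k:

```python
import numpy as np

# Made-up binary naive Bayes parameters for a 3-word vocabulary.
pi = np.array([0.4, 0.6])             # class priors (class 1, class 2)
theta = np.array([[0.2, 0.3, 0.5],    # P(word k | class 1)
                  [0.6, 0.3, 0.1]])   # P(word k | class 2)
z = np.array([3.0, 1.0, 2.0])         # word counts of a document

# Naive Bayes decision score: difference of the two class log-scores.
nb_score = (np.log(pi[0]) + z @ np.log(theta[0])) - (np.log(pi[1]) + z @ np.log(theta[1]))

# Equivalent linear classifier: w_0 = log pi_1 - log pi_2, w_k = log theta_1k - log theta_2k.
w0 = np.log(pi[0]) - np.log(pi[1])
w = np.log(theta[0]) - np.log(theta[1])
linear_score = w0 + z @ w

print(np.isclose(nb_score, linear_score))  # True: same decision function
```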

Outline
1. Administration
2. Review of last lecture
3. Logistic regression
   - General setup
   - Maximum likelihood estimation
   - Gradient descent
   - Newton's method

Logistic classification

Setup for two classes:
- Input: x ∈ R^D
- Output: y ∈ {0, 1}
- Training data: D = {(x_n, y_n), n = 1, 2, ..., N}
- Model: p(y = 1 | x; b, w) = σ[g(x)], where
  g(x) = b + ∑_d w_d x_d = b + w^T x
  and σ[·] stands for the sigmoid function
  σ(a) = 1 / (1 + e^{−a})

Why the sigmoid function?

What does it look like? σ(a) = 1 / (1 + e^{−a}), where a = b + w^T x.
[Plot: the sigmoid curve rising from 0 to 1 over a ∈ [−6, 6].]

Properties:
- Bounded between 0 and 1, thus interpretable as a probability
- Monotonically increasing, thus usable to derive classification rules:
  σ(a) > 0.5: positive (classify as 1)
  σ(a) < 0.5: negative (classify as 0)
  σ(a) = 0.5: undecidable
- Nice computational properties, as we will see soon

Linear or nonlinear classifier?

Linear or nonlinear?

σ(a) is nonlinear; however, the decision boundary is determined by
σ(a) = 0.5  ⟺  a = 0  ⟺  g(x) = b + w^T x = 0
which is a linear function of x. We often call b the offset term.
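
As a minimal sketch (my own, with made-up parameters), the sigmoid model, the σ(a) > 0.5 rule, and its equivalence to the linear rule b + w^T x > 0 can be checked directly:

```python
import numpy as np

def sigmoid(a):
    # sigma(a) = 1 / (1 + exp(-a)), bounded between 0 and 1
    return 1.0 / (1.0 + np.exp(-a))

def predict_proba(X, w, b):
    # p(y = 1 | x; b, w) = sigma(b + w^T x)
    return sigmoid(b + X @ w)

# Made-up parameters and inputs.
w, b = np.array([2.0, -1.0]), 0.5
X = np.array([[1.0, 0.0], [0.0, 3.0], [0.3, 1.0]])

probs = predict_proba(X, w, b)
labels_sigmoid = (probs > 0.5).astype(int)       # rule: sigma(a) > 0.5
labels_linear = (b + X @ w > 0).astype(int)      # equivalent linear rule: b + w^T x > 0

print(probs)                                      # [0.924..., 0.075..., 0.524...]
print(np.array_equal(labels_sigmoid, labels_linear))  # True: same decision boundary
```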

Contrast naive Bayes and our new model

Similar: Both classification models are linear functions of the features.

Different:
- Naive Bayes models the joint distribution: P(X, Y) = P(Y) P(X | Y)
- Logistic regression models the conditional distribution: P(Y | X)

Generative vs. discriminative: NB is a generative model, LR is a discriminative model. We will talk more about the differences later.

Likelihood function

Probability of a single training sample (x_n, y_n):
p(y_n | x_n; b, w) = σ(b + w^T x_n) if y_n = 1, and 1 − σ(b + w^T x_n) otherwise

Compact expression, exploiting that y_n is either 1 or 0:
p(y_n | x_n; b, w) = σ(b + w^T x_n)^{y_n} [1 − σ(b + w^T x_n)]^{1 − y_n}

Log-Likelihood or Cross-Entropy Error

Log-likelihood of the whole training data D:
log P(D) = ∑_n { y_n log σ(b + w^T x_n) + (1 − y_n) log[1 − σ(b + w^T x_n)] }

It is convenient to work with its negation, which is called the cross-entropy error function:
E(b, w) = −∑_n { y_n log σ(b + w^T x_n) + (1 − y_n) log[1 − σ(b + w^T x_n)] }

Shorthand notation

This is for convenience:
- Append 1 to x:  x ← [1  x_1  x_2  ...  x_D]
- Append b to w:  w ← [b  w_1  w_2  ...  w_D]

The cross-entropy error is then
E(w) = −∑_n { y_n log σ(w^T x_n) + (1 − y_n) log[1 − σ(w^T x_n)] }
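
A small sketch (my own, with toy data) of this error function, with the offset absorbed into w via an appended constant feature:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def cross_entropy(w, X, y):
    # E(w) = -sum_n [ y_n log sigma(w^T x_n) + (1 - y_n) log(1 - sigma(w^T x_n)) ]
    p = sigmoid(X @ w)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

# Toy data: prepend a column of ones so w[0] plays the role of the offset b.
X_raw = np.array([[0.5, 1.2], [-1.0, 0.3], [2.0, -0.7]])
X = np.hstack([np.ones((X_raw.shape[0], 1)), X_raw])
y = np.array([1, 0, 1])
w = np.zeros(X.shape[1])

print(cross_entropy(w, X, y))  # 3 * log(2) ~= 2.079 at w = 0
```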

How to find the optimal parameters for logistic regression?

We will minimize the error function
E(w) = −∑_n { y_n log σ(w^T x_n) + (1 − y_n) log[1 − σ(w^T x_n)] }
However, this function is complex and we cannot find a simple closed-form solution as we did for naive Bayes, so we need to use numerical methods. Numerical methods are messier than clean analytic solutions; in practice, we often have to tune a few optimization parameters, so patience is necessary.

An overview of numerical methods

We describe two:
- Gradient descent (our focus in lecture): simple, and especially effective for large-scale problems
- Newton's method: a classical and powerful method

Gradient descent is often referred to as a first-order method: it requires computation of gradients (i.e., first-order derivatives). Newton's method is often referred to as a second-order method: it requires computation of second derivatives.

Gradient Descent

Start at a random point.
Repeat:
- Determine a descent direction
- Choose a step size
- Update
Until the stopping criterion is satisfied.

[Figure: f(w) with iterates w0, w1, w2 descending toward the minimizer w*.]

Where Will We Converge?

[Figure: a convex function f(w), where any local minimum is a global minimum, versus a non-convex function g(w), where multiple local minima may exist.]

Least squares, ridge regression, and logistic regression are all convex!

Why do we move in the direction opposite the gradient?

Choosing Descent Direction (1D)

[Figure: at w0, if the slope is positive, go left; if negative, go right; if zero, done.]

We can only move in two directions, and the negative slope is the direction of descent!

Update rule: w_{i+1} = w_i − α_i (df/dw)(w_i), where α_i is the step size and −(df/dw)(w_i) is the negative slope.

Choosing Descent Direction (2D)

[Figure: function values shown in black/white, where black represents higher values; arrows are gradients. "Gradient2" by Sarang, licensed under CC BY-SA 2.5 via Wikimedia Commons: http://commons.wikimedia.org/wiki/file:gradient2.svg#/media/file:gradient2.svg]

We can move anywhere in R^D, and the negative gradient is the direction of steepest descent!

Update rule: w_{i+1} = w_i − α_i ∇f(w_i), where α_i is the step size and −∇f(w_i) is the negative gradient.

Example: min f(θ) = 0.5 (θ_1^2 − θ_2)^2 + 0.5 (θ_1 − 1)^2

We compute the gradients:
∂f/∂θ_1 = 2 (θ_1^2 − θ_2) θ_1 + θ_1 − 1
∂f/∂θ_2 = −(θ_1^2 − θ_2)

Use the following iterative procedure for gradient descent:
1. Initialize θ_1^(0) and θ_2^(0), and set t = 0.
2. Repeat:
   θ_1^(t+1) ← θ_1^(t) − η [ 2((θ_1^(t))^2 − θ_2^(t)) θ_1^(t) + θ_1^(t) − 1 ]
   θ_2^(t+1) ← θ_2^(t) − η [ −((θ_1^(t))^2 − θ_2^(t)) ]
   t ← t + 1
3. Until f(θ^(t)) does not change much.
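
A direct sketch of this procedure in Python (my own transcription; the step size η and the starting point are arbitrary choices):

```python
import numpy as np

def f(theta):
    t1, t2 = theta
    return 0.5 * (t1**2 - t2)**2 + 0.5 * (t1 - 1)**2

def grad_f(theta):
    t1, t2 = theta
    return np.array([2 * (t1**2 - t2) * t1 + t1 - 1,
                     -(t1**2 - t2)])

eta = 0.1                        # step size
theta = np.array([0.0, 0.0])     # initialization theta^(0)
prev = np.inf
while abs(prev - f(theta)) > 1e-8:   # until f does not change much
    prev = f(theta)
    theta = theta - eta * grad_f(theta)

print(theta)  # converges near the minimizer (1, 1), where f = 0
```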

Impact of step size

Choosing the right η is important: is a small η too slow? Is a large η too unstable?

[Figure: two plots of the iterates on the example objective, one with a small step size (slow progress) and one with a large step size (unstable progress).]

Gradient descent

General form for minimizing f(θ):
θ^(t+1) ← θ^(t) − η ∂f/∂θ

Remarks:
- η is the step size: how far we go in the direction of the negative gradient.
- The step size needs to be chosen carefully to ensure convergence.
- The step size can be adaptive, e.g., we can use line search.
- We are minimizing a function, hence the subtraction (−η).
- With a suitable choice of η, we converge to a stationary point, ∂f/∂θ = 0.
- A stationary point is not always a global minimum (but we are happy when the function is convex).
- A popular variant is called stochastic gradient descent.

Gradient Descent Update for Logistic Regression

Simple fact: the derivative of σ(a) is
dσ(a)/da = d/da (1 + e^{−a})^{−1} = e^{−a} / (1 + e^{−a})^2 = [1 / (1 + e^{−a})] · [e^{−a} / (1 + e^{−a})] = σ(a)[1 − σ(a)]
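
A quick numerical check of this identity (my own sketch) against a finite-difference approximation:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

a = np.linspace(-5, 5, 11)
eps = 1e-6

analytic = sigmoid(a) * (1 - sigmoid(a))                     # sigma(a) [1 - sigma(a)]
numeric = (sigmoid(a + eps) - sigmoid(a - eps)) / (2 * eps)  # central difference

print(np.allclose(analytic, numeric, atol=1e-6))  # True
```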

Gradients of the cross-entropy error function

Cross-entropy error function:
E(w) = −∑_n { y_n log σ(w^T x_n) + (1 − y_n) log[1 − σ(w^T x_n)] }

Gradient:
∂E(w)/∂w = −∑_n { y_n [1 − σ(w^T x_n)] x_n − (1 − y_n) σ(w^T x_n) x_n } = ∑_n { σ(w^T x_n) − y_n } x_n

Remark: e_n = σ(w^T x_n) − y_n is called the error for the nth training sample.

Numerical optimization

Gradient descent for logistic regression:
- Choose a proper step size η > 0.
- Iteratively update the parameters following the negative gradient to minimize the error function:
  w^(t+1) ← w^(t) − η ∑_n { σ((w^(t))^T x_n) − y_n } x_n
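
Putting the pieces together, here is a minimal batch gradient descent trainer for logistic regression (my own sketch; the toy data, step size, and iteration count are arbitrary choices):

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def fit_logistic_gd(X, y, eta=0.01, n_iters=2000):
    """Minimize the cross-entropy error with the update
    w <- w - eta * sum_n (sigma(w^T x_n) - y_n) x_n."""
    w = np.zeros(X.shape[1])
    for _ in range(n_iters):
        errors = sigmoid(X @ w) - y        # e_n = sigma(w^T x_n) - y_n
        w -= eta * X.T @ errors            # gradient step
    return w

# Toy 1D problem with an appended constant feature playing the role of the offset b.
rng = np.random.default_rng(0)
x = rng.normal(size=(100, 1))
X = np.hstack([np.ones((100, 1)), x])
y = (x[:, 0] + 0.5 * rng.normal(size=100) > 0).astype(float)

w = fit_logistic_gd(X, y)
acc = np.mean((sigmoid(X @ w) > 0.5) == y)
print(w, acc)  # learned weights [b, w1] and training accuracy
```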

Intuition for Newton's method

Approximate the true function with an easy-to-solve optimization problem.

[Figure: f(x) and its quadratic approximation f_quad(x) around x_k, with the step to x_k + d_k.]

In particular, we can approximate the cross-entropy error function around w^(t) by a quadratic function, and then minimize this quadratic function.

Approximation

Second-order Taylor expansion around x_t:
f(x) ≈ f(x_t) + f'(x_t)(x − x_t) + (1/2) f''(x_t)(x − x_t)^2

Taylor expansion of the cross-entropy error function around w^(t):
E(w) ≈ E(w^(t)) + (w − w^(t))^T ∇E(w^(t)) + (1/2) (w − w^(t))^T H^(t) (w − w^(t))
where ∇E(w^(t)) is the gradient and H^(t) is the Hessian matrix evaluated at w^(t).

So what is the Hessian matrix?

The matrix of second-order derivatives:
H = ∂²E(w) / (∂w ∂w^T), i.e., H_ij = ∂/∂w_j ( ∂E(w)/∂w_i )
So the Hessian matrix is in R^{D×D}, where w ∈ R^D.

Optimizing the approximation

Minimize the approximation
E(w) ≈ E(w^(t)) + (w − w^(t))^T ∇E(w^(t)) + (1/2) (w − w^(t))^T H^(t) (w − w^(t))
and use the solution as the new estimate w^(t+1) of the parameters:
min_w (w − w^(t))^T ∇E(w^(t)) + (1/2) (w − w^(t))^T H^(t) (w − w^(t))

The quadratic minimization has a closed form, thus we have
w^(t+1) ← w^(t) − (H^(t))^{−1} ∇E(w^(t)),
i.e., Newton's method.
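
A compact sketch of this update for logistic regression (my own illustration; it assumes the standard form of the cross-entropy Hessian, H = ∑_n σ_n(1 − σ_n) x_n x_n^T, which a later slide leaves as a homework derivation):

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def fit_logistic_newton(X, y, n_iters=10):
    """Newton updates w <- w - H^{-1} grad E(w) for the cross-entropy error."""
    w = np.zeros(X.shape[1])
    for _ in range(n_iters):
        p = sigmoid(X @ w)
        grad = X.T @ (p - y)            # sum_n (sigma_n - y_n) x_n
        S = p * (1 - p)                 # per-sample sigma_n (1 - sigma_n)
        H = (X * S[:, None]).T @ X      # assumed Hessian: sum_n S_n x_n x_n^T
        w -= np.linalg.solve(H, grad)   # Newton step (solve rather than invert)
    return w

# Same toy problem as in the gradient descent sketch; Newton converges in a few steps.
rng = np.random.default_rng(0)
x = rng.normal(size=(100, 1))
X = np.hstack([np.ones((100, 1)), x])
y = (x[:, 0] + 0.5 * rng.normal(size=100) > 0).astype(float)
print(fit_logistic_newton(X, y))
```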

Contrast gradient descent and Newton's method

Similar: Both are iterative procedures.

Different:
- Newton's method requires second-order derivatives.
- Newton's method does not have the magic η to be set.

Other important things about the Hessian

Our cross-entropy error function is convex:
∂E(w)/∂w = ∑_n { σ(w^T x_n) − y_n } x_n   (3)
H = ∂²E(w)/(∂w ∂w^T) = homework   (4)

For any vector v, v^T H v = homework ≥ 0. Thus H is positive semi-definite, and thus the cross-entropy error function is convex, with only one global optimum.

Good about Newton's method

Fast (in terms of convergence)!
- Newton's method finds the optimal point in a single iteration when the function we're optimizing is quadratic.
- In general, the better our Taylor approximation, the more quickly Newton's method will converge.

Bad about Newton's method

Not scalable!
- Computing and inverting the Hessian matrix can be very expensive for large-scale problems where the dimensionality D is very large.
- There are fixes and alternatives, such as quasi-Newton (quasi-second-order) methods.

Summary

Setup for 2 classes:
- Logistic regression models the conditional distribution as p(y = 1 | x; w) = σ[g(x)], where g(x) = w^T x
- Linear decision boundary: g(x) = w^T x = 0
- Minimize the cross-entropy error (negative log-likelihood):
  E(b, w) = −∑_n { y_n log σ(b + w^T x_n) + (1 − y_n) log[1 − σ(b + w^T x_n)] }
- No closed-form solution; must rely on iterative solvers.

Numerical optimization:
- Gradient descent: simple, scalable to large-scale problems; move in the direction opposite the gradient; the gradient of the logistic function takes a nice form.
- Newton's method: fast to converge but not scalable; at each iteration, find the optimal point of the 2nd-order Taylor expansion; a closed-form solution exists for each iteration.

Naive Bayes and logistic regression: two different modeling paradigms

- Maximize the joint likelihood ∑_n log p(x_n, y_n) (generative, NB)
- Maximize the conditional likelihood ∑_n log p(y_n | x_n) (discriminative, LR)

More on this next class.