Logistic Regression. Will Monroe, CS 109. Lecture Notes #22, August 14, 2017

Based on a chapter by Chris Piech

Logistic regression is a classification algorithm[1] that works by trying to learn a function that approximates P(Y | X). It makes the central assumption that P(Y | X) can be approximated as a sigmoid function applied to a linear combination of input features. It is particularly important to learn because logistic regression is the basic building block of artificial neural networks.

Mathematically, for a single training data point (x, y), logistic regression assumes:

    P(Y = 1 | X = x) = σ(z)    where z = θ_0 + Σ_{i=1}^{m} θ_i x_i

This assumption is often written in the equivalent forms:

    P(Y = 1 | X = x) = σ(θ^T x)        where we always set x_0 to be 1
    P(Y = 0 | X = x) = 1 − σ(θ^T x)    by the law of total probability

Using these equations for the probability of Y | X, we can create an algorithm that selects values of theta that maximize that probability for all the data. I am first going to state the log probability function and its partial derivatives with respect to theta. Then later we will (a) show an algorithm that can choose optimal values of theta and (b) show how the equations were derived.

An important thing to realize is that, given the best values for the parameters (θ), logistic regression often can do a great job of estimating the probability of different class labels. However, given bad, or even random, values of θ it does a poor job. The amount of intelligence that your logistic regression machine learning algorithm has depends on how good its values of θ are.

Notation

Before we get started I want to make sure that we are all on the same page with respect to notation. In logistic regression, θ is a vector of parameters of length m, and we are going to learn the values of those parameters based off of n training examples. The number of parameters should be equal to the number of features of each data point (see section 1).

Two pieces of notation that we use often in logistic regression that you may not be familiar with are:

    θ^T x = Σ_{i=1}^{m} θ_i x_i = θ_1 x_1 + θ_2 x_2 + ... + θ_m x_m

    σ(z) = 1 / (1 + e^{−z})

The superscript T in θ^T x represents a matrix transpose; the operation θ^T x is equivalent to taking the dot product of the vectors θ and x, or simply a weighted sum of the components of x (with θ containing the weights).

[1] Yes, this is a terribly confusing name, given that "regression" refers to tasks that require predicting continuous values. Perhaps "logistic classification" would have been better.
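To make the model assumption concrete, here is a minimal Python sketch (the function names, parameter values, and feature values are illustrative choices, not part of the notes) that prepends the constant feature x_0 = 1 and evaluates P(Y = 1 | X = x) = σ(θ^T x):

```python
import numpy as np

def sigmoid(z):
    """Logistic function: maps any real number to (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def predict_prob(theta, x):
    """P(Y = 1 | X = x) under the logistic regression assumption.

    theta has length m + 1 and x has length m; we prepend x_0 = 1 so
    that theta[0] plays the role of the constant term theta_0.
    """
    x_with_bias = np.concatenate(([1.0], x))   # set x_0 = 1
    return sigmoid(theta @ x_with_bias)        # sigma(theta^T x)

# Made-up parameter and feature values, purely for illustration.
theta = np.array([-1.0, 2.0, 0.5])
x = np.array([0.3, -1.2])
print(predict_prob(theta, x))   # a probability strictly between 0 and 1
```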

The function σ(z) = 1 / (1 + e^{−z}) is called the logistic function (or sigmoid function); it looks like this:

[Figure: plot of the sigmoid function σ(z) as a function of z.]

An important quality of this function is that it maps all real numbers to the range (0, 1). In logistic regression, σ(z) turns an arbitrary score z into a number between 0 and 1 that is interpreted as a probability. Positive numbers become high probabilities; negative numbers become low ones.

Log Likelihood

In order to choose values for the parameters of logistic regression, we use maximum likelihood estimation (MLE). As such we are going to have two steps: (1) write the log-likelihood function and (2) find the values of θ that maximize the log-likelihood function.

The labels that we are predicting are binary, and the output of our logistic regression function is supposed to be the probability that the label is one. This means that we can (and should) interpret each label as a Bernoulli random variable: Y ~ Ber(p) where p = σ(θ^T x).

To start, here is a super slick way of writing the probability of one data point (recall that this is the equation form of the probability mass function of a Bernoulli):

    P(Y = y | X = x) = σ(θ^T x)^y [1 − σ(θ^T x)]^{1 − y}

Now that we know the probability mass function, we can write the likelihood of all the data:

    L(θ) = Π_{i=1}^{n} P(Y = y^(i) | X = x^(i))                                       (the likelihood of independent training labels)
         = Π_{i=1}^{n} σ(θ^T x^(i))^{y^(i)} [1 − σ(θ^T x^(i))]^{1 − y^(i)}            (substituting the likelihood of a Bernoulli)

And if you take the log of this function, you get the log likelihood for logistic regression. The log likelihood equation is:

    LL(θ) = Σ_{i=1}^{n} y^(i) log σ(θ^T x^(i)) + (1 − y^(i)) log[1 − σ(θ^T x^(i))]

Recall that in MLE the only remaining step is to choose parameters (θ) that maximize the log likelihood.
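As a concrete companion to the formula, the following Python sketch evaluates LL(θ) for a tiny made-up dataset; the data and function names are assumptions for illustration only:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def log_likelihood(theta, X, y):
    """LL(theta) = sum_i [ y^(i) log sigma(theta^T x^(i))
                           + (1 - y^(i)) log(1 - sigma(theta^T x^(i))) ].

    X is an (n, m) matrix of training examples (with the constant
    feature x_0 = 1 already included as the first column); y is a
    length-n vector of 0/1 labels.
    """
    p = sigmoid(X @ theta)                       # predicted P(Y = 1) for each example
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

# Tiny made-up dataset: 3 examples, a constant feature plus 2 features.
X = np.array([[1.0,  0.5, -1.0],
              [1.0,  1.5,  0.2],
              [1.0, -0.3,  0.8]])
y = np.array([0.0, 1.0, 1.0])
theta = np.zeros(3)
print(log_likelihood(theta, X, y))               # equals 3 * log(0.5) when theta = 0
```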

Gradient of Log Likelihood

Now that we have a function for log-likelihood, we simply need to choose the values of theta that maximize it. Unfortunately, if we try just setting the derivative equal to zero, we'll quickly get frustrated: there's no closed form for the maximum. However, we can find the best values of theta by using an optimization algorithm. The optimization algorithm we will use requires the partial derivative of log likelihood with respect to each parameter. First I am going to give you the partial derivative (so you can see how it is used); we'll derive it a bit later:

    ∂LL(θ)/∂θ_j = Σ_{i=1}^{n} [ y^(i) − σ(θ^T x^(i)) ] x_j^(i)

Gradient Ascent Optimization

Our goal is to choose the parameters (θ) that maximize the likelihood, and we know the partial derivative of the log likelihood with respect to each parameter. We are ready for our optimization algorithm.

In the case of logistic regression we can't solve for θ mathematically. Instead we use a computer to choose θ. To do so we employ an algorithm called gradient ascent (a classic in optimization theory). The idea behind gradient ascent is that gradients point uphill. If you continuously take small steps in the direction of your gradient, you will eventually make it to a local maximum. In the case of logistic regression you can prove that the result will always be a global maximum. The update to our parameters that results in each small step can be calculated as:

    θ_j^new = θ_j^old + η · ∂LL(θ^old)/∂θ_j^old
            = θ_j^old + η · Σ_{i=1}^{n} [ y^(i) − σ(θ^T x^(i)) ] x_j^(i)

where η is the magnitude of the step size that we take. If you keep updating θ using the equation above, you will converge on the best values of θ. You now have an intelligent model. The full gradient ascent loop for logistic regression is sketched in code after the following note on θ_0.

It is also common to have a parameter θ_0 that is added as a constant to the θ^T x inside the sigmoid. Rather than computing special derivatives for θ_0, we can simply define an additional feature x_0 that always takes the value 1. Taking the weighted sum then results in adding θ_0, the weight for x_0.
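Here is a minimal Python sketch of that gradient ascent loop; the step size, iteration count, and synthetic data are illustrative assumptions rather than values from the notes:

```python
import numpy as np

def sigmoid(z):
    """Logistic function: maps any real number to (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def gradient_ascent(X, y, eta=0.01, num_steps=2000):
    """Fit logistic regression by gradient ascent on the log likelihood.

    X: (n, m) matrix of training examples whose first column is all ones
       (the constant feature x_0 = 1, so theta[0] acts as theta_0).
    y: length-n vector of 0/1 labels.
    eta and num_steps are illustrative defaults, not tuned values.
    """
    n, m = X.shape
    theta = np.zeros(m)                    # start with all parameters at 0
    for _ in range(num_steps):
        p = sigmoid(X @ theta)             # sigma(theta^T x^(i)) for every example i
        gradient = X.T @ (y - p)           # sum_i (y^(i) - sigma(theta^T x^(i))) x^(i)
        theta = theta + eta * gradient     # small step in the uphill direction
    return theta

# Made-up example: labels depend on the sign of the sum of two features.
rng = np.random.default_rng(0)
features = rng.normal(size=(100, 2))
X = np.hstack([np.ones((100, 1)), features])
y = (features.sum(axis=1) > 0).astype(float)
print(gradient_ascent(X, y))
```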

Derivations

In this section we provide the mathematical derivations for the gradient of the log-likelihood. The derivations are worth knowing because these ideas are heavily used in artificial neural networks.

Our goal is to calculate the derivative of the log likelihood with respect to each theta. To start, here is the derivative of the sigmoid function with respect to its input:

    ∂σ(z)/∂z = σ(z)[1 − σ(z)]    (to get the derivative with respect to θ, use the chain rule)

Take a moment and appreciate the beauty of the derivative of the sigmoid function. The reason that sigmoid has such a simple derivative stems from the natural exponent in the sigmoid denominator.

Since the log likelihood is a sum over all of the data, and in calculus the derivative of a sum is the sum of the derivatives, we can focus on computing the derivative for one example. The gradient of theta is simply the sum of this term over all the training data points.

First I am going to show you how to compute the derivative the hard way. Then we are going to look at an easier method. The derivative of the log likelihood for one data point (x, y) is:

    ∂LL(θ)/∂θ_j = ∂/∂θ_j [ y log σ(θ^T x) + (1 − y) log(1 − σ(θ^T x)) ]                            derivative of a sum of terms
                = [ y/σ(θ^T x) − (1 − y)/(1 − σ(θ^T x)) ] · ∂σ(θ^T x)/∂θ_j                          derivative of log f(x)
                = [ y/σ(θ^T x) − (1 − y)/(1 − σ(θ^T x)) ] · σ(θ^T x)[1 − σ(θ^T x)] x_j              chain rule + derivative of σ
                = [ (y − σ(θ^T x)) / (σ(θ^T x)[1 − σ(θ^T x)]) ] · σ(θ^T x)[1 − σ(θ^T x)] x_j        algebraic manipulation
                = [ y − σ(θ^T x) ] x_j                                                              cancelling terms
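One way to sanity-check the result of this derivation is to compare the closed-form per-example gradient (y − σ(θ^T x)) x against a finite-difference approximation of the per-example log likelihood. The Python sketch below does exactly that; the test values are made up for illustration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def example_log_likelihood(theta, x, y):
    """y log sigma(theta^T x) + (1 - y) log(1 - sigma(theta^T x))."""
    p = sigmoid(theta @ x)
    return y * np.log(p) + (1 - y) * np.log(1 - p)

def analytic_gradient(theta, x, y):
    """The result of the derivation above: (y - sigma(theta^T x)) x."""
    return (y - sigmoid(theta @ x)) * x

def numerical_gradient(theta, x, y, eps=1e-6):
    """Central finite differences, one component of theta at a time."""
    grad = np.zeros_like(theta)
    for j in range(len(theta)):
        bump = np.zeros_like(theta)
        bump[j] = eps
        grad[j] = (example_log_likelihood(theta + bump, x, y)
                   - example_log_likelihood(theta - bump, x, y)) / (2 * eps)
    return grad

# Made-up test values: x_0 = 1 plus two arbitrary features.
theta = np.array([0.5, -1.0, 0.25])
x = np.array([1.0, 2.0, -0.5])
y = 1.0
print(analytic_gradient(theta, x, y))
print(numerical_gradient(theta, x, y))   # should agree to several decimal places
```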

Derivatives Without Tears

That was the hard way. Logistic regression is the building block of artificial neural networks. If we want to scale up, we are going to have to get used to an easier way of calculating derivatives. For that we are going to have to welcome back our old friend the chain rule (of derivatives). By the chain rule:

    ∂LL(θ)/∂θ_j = ∂LL(θ)/∂p · ∂p/∂θ_j             where p = σ(θ^T x)
                = ∂LL(θ)/∂p · ∂p/∂z · ∂z/∂θ_j     where z = θ^T x

The chain rule is the decomposition mechanism of calculus. It allows us to calculate a complicated partial derivative (∂LL(θ)/∂θ_j) by breaking it down into smaller pieces:

    LL(θ) = y log p + (1 − y) log(1 − p)    where p = σ(θ^T x)
    ∂LL(θ)/∂p = y/p − (1 − y)/(1 − p)       taking the derivative

    p = σ(z)
    ∂p/∂z = σ(z)[1 − σ(z)]                  taking the derivative of the sigmoid, as previously defined

    z = θ^T x                               where z = θ^T x
    ∂z/∂θ_j = x_j                           only x_j interacts with θ_j

Each of those derivatives was much easier to calculate. Now we simply multiply them together:

    ∂LL(θ)/∂θ_j = [ y/p − (1 − y)/(1 − p) ] · σ(z)[1 − σ(z)] · x_j    substituting in for each term
                = [ y/p − (1 − y)/(1 − p) ] · p(1 − p) · x_j          since p = σ(z)
                = [ y(1 − p) − p(1 − y) ] x_j                          multiplying in
                = [ y − p ] x_j                                        expanding
                = [ y − σ(θ^T x) ] x_j                                 since p = σ(θ^T x)
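The same decomposition translates directly into code: compute the three small derivatives separately and multiply them. The sketch below (again with made-up test values, not from the notes) confirms that the product matches the simplified form (y − σ(θ^T x)) x:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def chain_rule_gradient(theta, x, y):
    """Per-example gradient built from the three chain-rule pieces."""
    z = theta @ x                          # z = theta^T x
    p = sigmoid(z)                         # p = sigma(z)
    dLL_dp = y / p - (1 - y) / (1 - p)     # dLL/dp
    dp_dz = p * (1 - p)                    # dp/dz = sigma(z)(1 - sigma(z))
    dz_dtheta = x                          # dz/dtheta_j = x_j
    return dLL_dp * dp_dz * dz_dtheta      # multiply the pieces together

def simplified_gradient(theta, x, y):
    """The simplified form from the derivation: (y - sigma(theta^T x)) x."""
    return (y - sigmoid(theta @ x)) * x

# Made-up test values, as before.
theta = np.array([0.5, -1.0, 0.25])
x = np.array([1.0, 2.0, -0.5])
y = 0.0
print(chain_rule_gradient(theta, x, y))
print(simplified_gradient(theta, x, y))   # identical up to floating-point rounding
```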