Loss Functions and Optimization. Lecture 3-1
Transcription
1 Lecture 3: Loss Functions and Optimization Lecture 3-1
2 Administrative Assignment 1 is released: Due Thursday April 20, 11:59pm on Canvas (Extending due date since it was released late) Lecture 3-2
3 Administrative Check out Project Ideas on Piazza Schedule for Office hours is on the course website TA specialties are posted on Piazza Lecture 3-3
4 Administrative Details about redeeming Google Cloud Credits should go out today; will be posted on Piazza $100 per student to use for homeworks and projects Lecture 3-4
5 Recall from last time: Challenges of recognition: Viewpoint, Illumination, Deformation, Occlusion, Clutter, Intraclass Variation. (Example images are CC0 1.0 public domain, except one by Umberto Salvagnin and one by jonsson, both licensed under CC-BY 2.0.) Lecture 3-5
6 Recall from last time: data-driven approach, kNN. 1-NN classifier vs. 5-NN classifier; split the data into train, validation, and test sets. Lecture 3-6
7 Recall from last time: Linear Classifier f(x, W) = Wx + b Lecture 3-7
8 Recall from last time: Linear Classifier TODO: 1. Define a loss function that quantifies our unhappiness with the scores across the training data. 2. Come up with a way of efficiently finding the parameters that minimize the loss function. (optimization) Cat image by Nikita is licensed under CC-BY 2.0; Car image is CC0 1.0 public domain; Frog image is in the public domain Lecture 3-8
9 Suppose: 3 training examples, 3 classes. With some W the scores f(x, W) = Wx + b are, one column per training image (a cat, a car, and a frog image): cat: 3.2, 1.3, 2.2; car: 5.1, 4.9, 2.5; frog: -1.7, 2.0, -3.1. Lecture 3-9
10 A loss function tells how good our current classifier is. Given a dataset of examples {(x_i, y_i)}, i = 1, ..., N, where x_i is an image and y_i is an (integer) label, the loss over the dataset is a sum of the per-example losses: L = (1/N) Σ_i L_i(f(x_i, W), y_i). Lecture 3-10
11 Multiclass SVM loss: Given an example (x_i, y_i), where x_i is the image and y_i is the (integer) label, and using the shorthand s = f(x_i, W) for the vector of scores, the SVM loss has the form: L_i = Σ_{j ≠ y_i} max(0, s_j - s_{y_i} + 1). Lecture 3-11
12 The max(0, ·) term is the hinge loss: it is zero once the correct-class score s_{y_i} exceeds s_j by at least the margin of 1, and grows linearly otherwise. Lecture 3-12
13 Multiclass SVM loss (setup as above): for each example, L_i = Σ_{j ≠ y_i} max(0, s_j - s_{y_i} + 1). Lecture 3-13
14 Cat image: L_1 = max(0, 5.1 - 3.2 + 1) + max(0, -1.7 - 3.2 + 1) = max(0, 2.9) + max(0, -3.9) = 2.9 + 0 = 2.9. Losses: 2.9. Lecture 3-14
15 Car image: L_2 = max(0, 1.3 - 4.9 + 1) + max(0, 2.0 - 4.9 + 1) = max(0, -2.6) + max(0, -1.9) = 0 + 0 = 0. Losses: 2.9, 0. Lecture 3-15
16 Frog image: L_3 = max(0, 2.2 - (-3.1) + 1) + max(0, 2.5 - (-3.1) + 1) = max(0, 6.3) + max(0, 6.6) = 6.3 + 6.6 = 12.9. Losses: 2.9, 0, 12.9. Lecture 3-16
17 Loss over the full dataset is the average: L = (2.9 + 0 + 12.9) / 3 = 5.27. Lecture 3-17
18 Q: What happens to the loss if the car scores change a bit? Lecture 3-18
19 Q2: What is the min/max possible loss? Lecture 3-19
20 Q3: At initialization W is small so all s ≈ 0. What is the loss? Lecture 3-20
21 Q4: What if the sum was over all classes (including j = y_i)? Lecture 3-21
22 Q5: What if we used a mean instead of a sum? Lecture 3-22
23 Q6: What if we used the squared hinge, L_i = Σ_{j ≠ y_i} max(0, s_j - s_{y_i} + 1)^2, instead? Lecture 3-23
24 Multiclass SVM Loss: Example code Lecture 3-24
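The slide shows example code; here is a minimal numpy sketch following the formula above (the function and variable names are illustrative assumptions, not necessarily the exact code on the slide):

    import numpy as np

    def svm_loss_i(x, y, W):
        # x: pixel column vector, y: integer label, W: weight matrix (one row per class)
        scores = W.dot(x)                                  # class scores s = Wx
        margins = np.maximum(0, scores - scores[y] + 1)    # max(0, s_j - s_y + 1) for every class j
        margins[y] = 0                                     # drop the j = y term so the sum is over j != y
        return np.sum(margins)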
25 E.g. Suppose that we found a W such that L = 0. Is this W unique? Lecture 3-25
26 E.g. Suppose that we found a W such that L = 0. Is this W unique? No! 2W also has L = 0! Lecture 3-26
27 Before (car image): L_2 = max(0, 1.3 - 4.9 + 1) + max(0, 2.0 - 4.9 + 1) = max(0, -2.6) + max(0, -1.9) = 0 + 0 = 0. With W twice as large: L_2 = max(0, 2.6 - 9.8 + 1) + max(0, 4.0 - 9.8 + 1) = max(0, -6.2) + max(0, -4.8) = 0 + 0 = 0. Lecture 3-27
28-31 Data loss: Model predictions should match training data (figure built up across slides 28-31). Lecture 3-28 to 3-31
32 Data loss: Model predictions should match training data Regularization: Model should be simple, so it works on test data Lecture 3-32
33 Data loss: Model predictions should match training data. Regularization: Model should be simple, so it works on test data. Occam's Razor: Among competing hypotheses, the simplest is the best (William of Ockham). Lecture 3-33
34 Regularization: the full loss becomes L(W) = (1/N) Σ_i L_i(f(x_i, W), y_i) + λ R(W), where λ is the regularization strength (a hyperparameter). In common use: L2 regularization R(W) = Σ_k Σ_l W_{k,l}^2; L1 regularization R(W) = Σ_k Σ_l |W_{k,l}|; Elastic net (L1 + L2); Max norm regularization (might see later); Dropout (will see later). Fancier: Batch normalization, stochastic depth. Lecture 3-34
35 L2 Regularization (Weight Decay) Lecture 3-35
36 L2 Regularization (Weight Decay). (If you are a Bayesian: L2 regularization also corresponds to MAP inference using a Gaussian prior on W.) Lecture 3-36
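As a sketch of how data loss and regularization combine (reusing the svm_loss_i sketch above; lam stands in for the regularization strength λ, and the names are illustrative):

    import numpy as np

    def full_loss(X, y, W, lam):
        # X: one example per column, y: integer labels, lam: regularization strength
        N = X.shape[1]
        data_loss = sum(svm_loss_i(X[:, i], y[i], W) for i in range(N)) / N
        reg_loss = lam * np.sum(W * W)   # L2 regularization: R(W) = sum of squared weights
        return data_loss + reg_loss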
37 Softmax Classifier (Multinomial Logistic Regression) cat car frog Lecture 3-37
38 Softmax Classifier (Multinomial Logistic Regression) scores = unnormalized log probabilities of the classes. cat car frog Lecture 3-38
39 Softmax Classifier (Multinomial Logistic Regression). scores = unnormalized log probabilities of the classes: P(Y = k | X = x_i) = exp(s_k) / Σ_j exp(s_j), where s = f(x_i; W). Lecture 3-39
40 Softmax Classifier (Multinomial Logistic Regression). The mapping s -> exp(s_k) / Σ_j exp(s_j) from scores to probabilities is the softmax function. Lecture 3-40
41 Softmax Classifier (Multinomial Logistic Regression). Want to maximize the log likelihood, or (for a loss function) to minimize the negative log likelihood of the correct class: L_i = -log P(Y = y_i | X = x_i). Lecture 3-41
42 Softmax Classifier (Multinomial Logistic Regression). In summary: L_i = -log( exp(s_{y_i}) / Σ_j exp(s_j) ). Lecture 3-42
43 Softmax Classifier (Multinomial Logistic Regression). For the cat image, the unnormalized log probabilities (scores) are: cat 3.2, car 5.1, frog -1.7. Lecture 3-43
44 Applying exp gives unnormalized probabilities: cat 24.5, car 164.0, frog 0.18. Lecture 3-44
45 Normalizing so they sum to 1 gives probabilities: cat 0.13, car 0.87, frog 0.00. Lecture 3-45
46 The loss is the negative log probability of the correct class: L_i = -log(0.13) = 0.89. Lecture 3-46
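A numpy sketch of the same computation; subtracting the max score before exponentiating leaves the probabilities unchanged but avoids numerical overflow (a standard trick, assumed here rather than stated on the slide):

    import numpy as np

    def softmax_loss_i(x, y, W):
        scores = W.dot(x)                 # unnormalized log probabilities
        scores = scores - np.max(scores)  # shift for numerical stability
        probs = np.exp(scores) / np.sum(np.exp(scores))   # normalized probabilities
        return -np.log(probs[y])          # negative log likelihood of the correct class

    # With the cat scores above, probs comes out to roughly [0.13, 0.87, 0.00].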
47 Softmax Classifier (Multinomial Logistic Regression), worked example as above. Q: What is the min/max possible loss L_i? Lecture 3-47
48 Softmax Classifier (Multinomial Logistic Regression). Q2: Usually at initialization W is small, so all s ≈ 0. What is the loss? Lecture 3-48
49 Lecture 3-49
50 Softmax vs. SVM: L_i = -log( exp(s_{y_i}) / Σ_j exp(s_j) ) vs. L_i = Σ_{j ≠ y_i} max(0, s_j - s_{y_i} + 1). Lecture 3-50
51 Softmax vs. SVM: assume the scores are [10, -2, 3], [10, 9, 9], and [10, -100, -100], with the first class correct in each case. Q: Suppose I take a datapoint and I jiggle it a bit (changing its score slightly). What happens to the loss in both cases? Lecture 3-51
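To make the question concrete, a small numpy check of both losses on the three score vectors above (assuming the first class is the correct one): the SVM loss is 0 in all three cases, while the softmax loss differs across them and always wants the correct-class probability pushed higher:

    import numpy as np

    for s in ([10, -2, 3], [10, 9, 9], [10, -100, -100]):
        s = np.array(s, dtype=float)
        svm = np.sum(np.maximum(0, s - s[0] + 1)) - 1.0    # subtract the j = y_i term, max(0, 1) = 1
        softmax = -np.log(np.exp(s[0]) / np.sum(np.exp(s)))
        print(s, "SVM loss:", svm, "softmax loss:", softmax)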
52 Recap: We have some dataset of (x, y). We have a score function s = f(x; W) = Wx. We have a loss function, e.g. Softmax: L_i = -log( exp(s_{y_i}) / Σ_j exp(s_j) ); SVM: L_i = Σ_{j ≠ y_i} max(0, s_j - s_{y_i} + 1); Full loss: L = (1/N) Σ_i L_i + R(W). Lecture 3-52
53 Recap (same as the previous slide). How do we find the best W? Lecture 3-53
54 Optimization Lecture 3-54
55 This image is CC0 1.0 public domain Lecture 3-55
56 Walking man image is CC0 1.0 public domain Lecture 3-56
57 Strategy #1: A first very bad idea solution: Random search Lecture 3-57
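A numpy sketch of this random-search strategy (L, X_train, and y_train are placeholders for a loss function and training data; the 10 x 3073 shape assumes CIFAR-10 images with a bias dimension, which is an assumption about the setup):

    import numpy as np

    # assume X_train (D x N), y_train (N,), and L(X, y, W) returning the scalar loss
    bestloss = float("inf")
    bestW = None
    for num in range(1000):
        W = np.random.randn(10, 3073) * 0.0001   # a random parameter guess
        loss = L(X_train, y_train, W)
        if loss < bestloss:
            bestloss, bestW = loss, W
        print("attempt %d: loss %f, best %f" % (num, loss, bestloss))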
58 Let's see how well this works on the test set: % accuracy! Not bad! (SOTA is ~95%) Lecture 3-58
59 Strategy #2: Follow the slope Lecture 3-59
60 Strategy #2: Follow the slope. In 1 dimension, the derivative of a function is df(x)/dx = lim_{h -> 0} (f(x + h) - f(x)) / h. In multiple dimensions, the gradient is the vector of partial derivatives along each dimension. The slope in any direction is the dot product of the direction with the gradient. The direction of steepest descent is the negative gradient. Lecture 3-60
61 current W: [0.34, -1.11, 0.78, 0.12, 0.55, 2.81, -3.1, -1.5, 0.33, ...], with loss L(W). Gradient dW: [?, ?, ?, ?, ?, ?, ?, ?, ?, ...]. Lecture 3-61
62 W + h (first dim): [0.34 + 0.0001, -1.11, 0.78, 0.12, 0.55, 2.81, -3.1, -1.5, 0.33, ...], with a slightly different loss. Gradient dW: [?, ?, ?, ...]. Lecture 3-62
63 First entry of the numerical gradient: (loss(W + h in the first dim) - loss(W)) / h = -2.5, with h = 0.0001, so dW = [-2.5, ?, ?, ...]. Lecture 3-63
64 W + h (second dim): [0.34, -1.11 + 0.0001, 0.78, 0.12, 0.55, 2.81, -3.1, -1.5, 0.33, ...]. Gradient dW: [-2.5, ?, ?, ...]. Lecture 3-64
65 Second entry: (loss(W + h in the second dim) - loss(W)) / h = 0.6, so dW = [-2.5, 0.6, ?, ...]. Lecture 3-65
66 W + h (third dim): [0.34, -1.11, 0.78 + 0.0001, 0.12, 0.55, 2.81, -3.1, -1.5, 0.33, ...]. Gradient dW: [-2.5, 0.6, ?, ...]. Lecture 3-66
67 Third entry: (loss(W + h in the third dim) - loss(W)) / h = 0, so dW = [-2.5, 0.6, 0, ?, ...]. Lecture 3-67
68 This is silly. The loss is just a function of W; we want its gradient with respect to W. Lecture 3-68
69 This is silly. The loss is just a function of W; we want its gradient with respect to W. (Images are in the public domain.) Lecture 3-69
70 This is silly. The loss is just a function of W. Calculus! Use calculus to compute an analytic gradient. (Hammer image is in the public domain.) Lecture 3-70
71 current W: [0.34, -1.11, 0.78, 0.12, 0.55, 2.81, -3.1, -1.5, 0.33, ...], with loss L(W). Analytic gradient dW = [-2.5, 0.6, 0, 0.2, 0.7, -0.5, 1.1, 1.3, -2.1, ...], computed as some function of the data and W. Lecture 3-71
72 In summary: - Numerical gradient: approximate, slow, easy to write - Analytic gradient: exact, fast, error-prone => In practice: Always use analytic gradient, but check implementation with numerical gradient. This is called a gradient check. Lecture 3-72
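A sketch of such a gradient check in numpy: compare a few analytic gradient entries against centered finite differences (f is any function of W returning the scalar loss; all names here are illustrative):

    import numpy as np

    def grad_check(f, W, analytic_grad, num_checks=10, h=1e-5):
        for _ in range(num_checks):
            ix = tuple(np.random.randint(n) for n in W.shape)   # pick a random coordinate
            old = W[ix]
            W[ix] = old + h; fxph = f(W)    # loss with that coordinate nudged up
            W[ix] = old - h; fxmh = f(W)    # loss with that coordinate nudged down
            W[ix] = old                     # restore
            numeric = (fxph - fxmh) / (2 * h)
            analytic = analytic_grad[ix]
            rel_err = abs(numeric - analytic) / (abs(numeric) + abs(analytic) + 1e-12)
            print("numerical: %f analytic: %f relative error: %e" % (numeric, analytic, rel_err))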
73 Gradient Descent Lecture 3-73
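In code, vanilla gradient descent is just a loop; a sketch in the spirit of the slide (evaluate_gradient, loss_fun, data, weights, and step_size are placeholders):

    # Vanilla Gradient Descent
    while True:
        weights_grad = evaluate_gradient(loss_fun, data, weights)
        weights += -step_size * weights_grad   # step in the negative gradient direction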
74 (Contour plot of the loss over two weights W_1 and W_2: starting from the original W, step in the negative gradient direction.) Lecture 3-74
75 Lecture 3-75
76 Stochastic Gradient Descent (SGD): the full sum over all N training examples is expensive when N is large! Approximate it using a minibatch of examples; minibatch sizes of 32 / 64 / 128 are common. Lecture 3-76
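A sketch of the minibatch version (sample_training_data and the other names are placeholders, as above):

    # Vanilla Minibatch Gradient Descent
    while True:
        data_batch = sample_training_data(data, 128)   # sample e.g. 128 examples
        weights_grad = evaluate_gradient(loss_fun, data_batch, weights)
        weights += -step_size * weights_grad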
77 Interactive Web Demo time... Lecture 3-77
78 Interactive Web Demo time... Lecture 3-78
79 Aside: Image Features Lecture 3-79
80 Image Features: Motivation. In the original (x, y) coordinates the red and blue points cannot be separated by a linear classifier. After applying the feature transform f(x, y) = (r(x, y), θ(x, y)) (polar coordinates), the points can be separated by a linear classifier. Lecture 3-80
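A tiny numpy illustration of such a feature transform (an assumed example, not taken from the slide): points on two concentric rings are not linearly separable in (x, y), but a simple threshold on r separates them after mapping to polar coordinates:

    import numpy as np

    def polar_features(xy):
        # xy: array of shape (N, 2) holding Cartesian coordinates
        r = np.hypot(xy[:, 0], xy[:, 1])          # radius
        theta = np.arctan2(xy[:, 1], xy[:, 0])    # angle
        return np.stack([r, theta], axis=1)       # a threshold on r now separates the rings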
81 Example: Color Histogram. Bin the hue of every pixel and count how many pixels fall into each bin; the vector of counts is the feature. Lecture 3-81
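A rough numpy sketch of a hue histogram feature (assumes the hue channel has already been extracted and scaled to [0, 1]; the bin count is illustrative):

    import numpy as np

    def color_histogram(hue_channel, num_bins=8):
        # hue_channel: 2D array of per-pixel hue values in [0, 1]
        hist, _ = np.histogram(hue_channel.ravel(), bins=num_bins, range=(0.0, 1.0))
        return hist / hist.sum()   # normalized count per hue bin is the feature vector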
82 Example: Histogram of Oriented Gradients (HoG) Divide image into 8x8 pixel regions Within each region quantize edge direction into 9 bins Example: 320x240 image gets divided into 40x30 bins; in each bin there are 9 numbers so feature vector has 30*40*9 = 10,800 numbers Lowe, Object recognition from local scale-invariant features, ICCV 1999 Dalal and Triggs, "Histograms of oriented gradients for human detection," CVPR 2005 Lecture 3-82
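For reference, a hedged sketch using scikit-image's hog function (assuming that library and API are available); with 8x8-pixel cells, 9 orientation bins, and 1x1 blocks it reproduces the 40 x 30 x 9 = 10,800-dimensional feature described above:

    import numpy as np
    from skimage.feature import hog   # assumes scikit-image is installed

    image = np.zeros((240, 320))      # placeholder 320x240 grayscale image (rows x cols)
    features = hog(image, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(1, 1))
    print(features.shape)             # expected: (10800,)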
83 Example: Bag of Words. Step 1: Build codebook: extract random patches, then cluster the patches to form a codebook of visual words. Step 2: Encode images using the codebook. Fei-Fei and Perona, A bayesian hierarchical model for learning natural scene categories, CVPR 2005 Lecture 3-83
84 Image features vs ConvNets: image -> feature extraction f -> 10 numbers giving scores for classes, with training applied to the classifier on top of the extracted features; vs. image -> ConvNet -> 10 numbers giving scores for classes, with training applied to the whole network. Lecture 3-84
85 Next time: Introduction to neural networks Backpropagation Lecture 3-85