Neural Networks: Learning the Network: Backprop, Fall 2018, Lecture 4


1 Neural Networks: Learning the Network: Backprop, Fall 2018, Lecture 4 1

2 Recap: The MLP can represent any function The MLP can be constructed to represent anything But how do we construct it? 2

3 Recap: How to learn the function By minimizing expected error 3

4 Recap: Sampling the function The target function is unknown, so sample it: basically, get input-output pairs for a number of samples of the input. Many samples (X_i, d_i), where d_i is the desired output for input X_i. Good sampling: the samples of X will be drawn from P(X). Estimate the function from the samples 4

5 The Empirical risk The expected error is the average error over the entire input space. The empirical estimate of the expected error is the average error over the samples: Err(W) = (1/T) Σ_i div(f(X_i; W), d_i) 5

6 Empirical Risk Minimization Given a training set of input-output pairs (X_1, d_1), (X_2, d_2), ..., (X_T, d_T). Error on the i-th instance: div(f(X_i; W), d_i). Empirical average error on all training data: Err(W) = (1/T) Σ_i div(f(X_i; W), d_i). Estimate the parameters W to minimize the empirical estimate of expected error, i.e. minimize the empirical error over the drawn samples 6

7 Problem Statement Given a training set of input-output pairs (X_1, d_1), ..., (X_T, d_T), minimize the following function w.r.t. W: Err(W) = (1/T) Σ_i div(f(X_i; W), d_i). This is a problem of function minimization, an instance of optimization 7

8 A CRASH COURSE ON FUNCTION OPTIMIZATION 8

9 Finding the minimum of a scalar function of a multi-variate input The optimum point is a turning point: the gradient will be 0 9

10 Unconstrained Minimization of a function (Multivariate) 1. Solve for X where the gradient equals zero: ∇f(X) = 0. 2. Compute the Hessian matrix ∇²f(X) at the candidate solution and verify that: Hessian is positive definite (eigenvalues positive) -> local minimum; Hessian is negative definite (eigenvalues negative) -> local maximum 10

11 Closed-form solutions are not always available Often it is not possible to simply solve ∇f(X) = 0: the function to minimize/maximize may have an intractable form. In these situations, iterative solutions are used: begin with a guess for the optimal X and refine it iteratively until the correct value is obtained 11

12 Iterative solutions Start from an initial guess x^0 for the optimal X. Update the guess towards a (hopefully) better value of f(x). Stop when f(x) no longer decreases. Problems: in which direction to step, and how big the steps must be 12

13 The Approach of Gradient Descent Iterative solution: Start at some point. Find the direction in which to shift this point to decrease error; this can be found from the derivative of the function: a positive derivative means moving left decreases error, a negative derivative means moving right decreases error. Shift the point in this direction

14 The Approach of Gradient Descent Iterative solution, trivial algorithm: Initialize x^0. While f'(x^k) ≠ 0: if f'(x^k) is positive: x = x − step; else: x = x + step. What must step be to ensure we actually get to the optimum?

15 The Approach of Gradient Descent Iterative solution, trivial algorithm: Initialize x^0. While f'(x^k) ≠ 0: x^(k+1) = x^k − sign(f'(x^k))·step. Identical to previous algorithm

16 The Approach of Gradient Descent Iterative solution, trivial algorithm: Initialize x^0. While f'(x^k) ≠ 0: x^(k+1) = x^k − η^k f'(x^k), where η^k is the step size

17 Gradient descent/ascent (multivariate) The gradient descent/ascent method finds the minimum or maximum of a function iteratively. To find a maximum, move in the direction of the gradient: x^(k+1) = x^k + η^k ∇f(x^k)^T. To find a minimum, move exactly opposite the direction of the gradient: x^(k+1) = x^k − η^k ∇f(x^k)^T. Many solutions to choosing the step size η^k; later lecture

18 Fixed step size 1. Fixed step size: use a fixed value for the step size η

19 Influence of step size: example (constant step size) f(x_1, x_2) = x_1² + x_1 x_2 + 4 x_2², starting from the initial point x^0 = [3 3]^T

20 What is the optimal step size? Step size is critical for fast optimization. Will revisit this topic later. For now, simply assume a potentially iteration-dependent step size 20

21 Gradient descent convergence criteria The gradient descent algorithm converges when one of the following criteria is satisfied: f(x^(k+1)) − f(x^k) < ε_1, or ‖∇f(x^k)‖ < ε_2

22 Overall Gradient Descent Algorithm Initialize: x^0, k = 0. While f(x^(k+1)) − f(x^k) > ε: x^(k+1) = x^k − η^k ∇f(x^k)^T; k = k + 1
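
To make the loop above concrete, here is a minimal gradient-descent sketch in Python/NumPy; the quadratic test function, the fixed step size eta and the tolerance tol are illustrative assumptions, not values from the slides.

import numpy as np

def gradient_descent(f, grad_f, x0, eta=0.1, tol=1e-6, max_iter=10000):
    """Minimize f by stepping opposite the gradient until f stops decreasing."""
    x = np.asarray(x0, dtype=float)
    f_prev = f(x)
    for _ in range(max_iter):
        x = x - eta * grad_f(x)          # x^(k+1) = x^k - eta * grad f(x^k)
        f_curr = f(x)
        if abs(f_prev - f_curr) < tol:   # convergence criterion from slide 21
            break
        f_prev = f_curr
    return x

# Example: f(x1, x2) = x1^2 + x1*x2 + 4*x2^2, the step-size example above
f = lambda x: x[0]**2 + x[0]*x[1] + 4*x[1]**2
grad_f = lambda x: np.array([2*x[0] + x[1], x[0] + 8*x[1]])
x_min = gradient_descent(f, grad_f, x0=[3.0, 3.0])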

23 Convergence of Gradient Descent For an appropriate step size, for convex (bowl-shaped) functions gradient descent will always find the minimum. For non-convex functions it will find a local minimum or an inflection point 23

24 Returning to our problem.. 24

25 Problem Statement Given a training set of input-output pairs (X_1, d_1), ..., (X_T, d_T), minimize the following function w.r.t. W: Err(W) = (1/T) Σ_i div(f(X_i; W), d_i). This is a problem of function minimization, an instance of optimization 25

26 Preliminaries Before we proceed: the problem setup 26

27 Problem Setup: Things to define Given a training set of input-output pairs, minimize the following function. What are these input-output pairs? 27

28 Problem Setup: Things to define Given a training set of input-output pairs, minimize the following function. What are these input-output pairs? What is f() and what are its parameters? 28

29 Problem Setup: Things to define Given a training set of input-output pairs, minimize the following function. What are these input-output pairs? What is f() and what are its parameters W? What is the divergence div()? 29

30 Problem Setup: Things to define Given a training set of input-output pairs Minimize the following function What is f() and what are its parameters W? 30

31 What is f()? Typical network Input units Hidden units Output units Multi-layer perceptron A directed network with a set of inputs and outputs No loops Generic terminology We will refer to the inputs as the input units No neurons here the input units are just the inputs We refer to the outputs as the output units Intermediate units are hidden units 31

32 The individual neurons Individual neurons operate on a set of inputs and produce a single output. Standard setup: a differentiable activation function applied to the sum of weighted inputs and a bias: y = f(Σ_i w_i x_i + b). More generally: any differentiable function 32

33 The individual neurons Individual neurons operate on a set of inputs and produce a single output. Standard setup: a differentiable activation function applied to the sum of weighted inputs and a bias: y = f(Σ_i w_i x_i + b). We will assume this unless otherwise specified. Parameters are the weights w_i and the bias b 33

34 Activations and their derivatives [*] Some popular activation functions and their derivatives 34
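
Since the plots on this slide do not survive transcription, here is a small NumPy sketch of a few popular activations and their derivatives (sigmoid, tanh, RELU); the particular selection is an assumption, not necessarily the slide's exact list.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def d_sigmoid(z):
    s = sigmoid(z)
    return s * (1.0 - s)          # sigma'(z) = sigma(z) (1 - sigma(z))

def tanh(z):
    return np.tanh(z)

def d_tanh(z):
    return 1.0 - np.tanh(z)**2    # tanh'(z) = 1 - tanh(z)^2

def relu(z):
    return np.maximum(0.0, z)

def d_relu(z):
    return (z > 0).astype(float)  # subgradient: 1 for z > 0, 0 otherwise (see slide 118)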

35 Vector Activations Input Layer Hidden Layers Output Layer We can also have neurons that have multiple coupled outputs: the function operates on a set of inputs to produce a set of outputs. Modifying a single parameter in W will affect all outputs 35

36 Vector activation example: Softmax Example: softmax vector activation: z_i = Σ_j w_ji x_j + b_i ; y_i = exp(z_i) / Σ_j exp(z_j). Parameters are the weights and biases 36

37 Multiplicative combination: Can be viewed as a case of vector activations A layer of multiplicative combination is a special case of vector activation. Parameters are weights and bias 37

38 Typical network Input Layer Hidden Layers Output Layer We assume a layered network for simplicity We will refer to the inputs as the input layer No neurons here the layer simply refers to inputs We refer to the outputs as the output layer Intermediate layers are hidden layers 38

39 Typical network Input Layer Hidden Layers Output Layer In a layered network, each layer of perceptrons can be viewed as a single vector activation 39

40 Notation The input layer is the 0th layer. We will represent the output of the i-th perceptron of the k-th layer as y_i^(k). Input to network: x_i = y_i^(0). Output of network: y_i = y_i^(N). We will represent the weight of the connection between the i-th unit of the (k-1)-th layer and the j-th unit of the k-th layer as w_ij^(k). The bias to the j-th unit of the k-th layer is b_j^(k) 40

41 Problem Setup: Things to define Given a training set of input-output pairs, minimize the following function. What are these input-output pairs? 41

42 Vector notation Given a training set of input-output pairs (X_1, d_1), (X_2, d_2), ..., (X_T, d_T): X_n is the nth input vector, d_n is the nth desired output, and Y_n is the nth vector of actual outputs of the network. We will sometimes drop the first subscript when referring to a specific instance 42

43 Representing the input Input Layer Hidden Layers Output Layer Vectors of numbers (or may even be just a scalar, if input layer is of size 1) E.g. vector of pixel values E.g. vector of speech features E.g. real-valued vector representing text We will see how this happens later in the course Other real valued vectors 43

44 Representing the output Input Layer Hidden Layers Output Layer If the desired output is real-valued, no special tricks are necessary Scalar Output : single output neuron d = scalar (real value) Vector Output : as many output neurons as the dimension of the desired output d = [d 1 d 2.. d L ] (vector of real values) 44

45 Representing the output If the desired output is binary (is this a cat or not), use a simple 1/0 representation of the desired output: 1 = Yes, it's a cat; 0 = No, it's not a cat 45

46 Representing the output If the desired output is binary (is this a cat or not), use a simple 1/0 representation of the desired output. Output activation: typically a sigmoid, σ(z) = 1 / (1 + e^(−z)). Viewed as the probability of class value 1, reflecting the fact that for actual data, in general, a feature value X may occur for both classes, but with different probabilities. Is differentiable 46

47 Representing the output If the desired output is binary (is this a cat or not), use a simple 1/0 representation of the desired output: 1 = Yes, it's a cat; 0 = No, it's not a cat. Sometimes represented by two independent outputs, one representing the desired output, the other representing the negation of the desired output: Yes: [1 0], No: [0 1] 47

48 Multi-class output: One-hot representations Consider a network that must distinguish if an input is a cat, a dog, a camel, a hat, or a flower. We can represent this set as the following vector: [cat dog camel hat flower]^T. For inputs of each of the five classes the desired output is: cat: [1 0 0 0 0]^T, dog: [0 1 0 0 0]^T, camel: [0 0 1 0 0]^T, hat: [0 0 0 1 0]^T, flower: [0 0 0 0 1]^T. For an input of any class, we will have a five-dimensional vector output with four zeros and a single 1 at the position of that class. This is a one-hot vector 48
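
A minimal sketch of constructing such one-hot target vectors for the slide's five-class example (the helper name one_hot is of course an assumption):

import numpy as np

classes = ["cat", "dog", "camel", "hat", "flower"]

def one_hot(label, classes):
    """Return a vector with a single 1 at the position of `label`."""
    d = np.zeros(len(classes))
    d[classes.index(label)] = 1.0
    return d

one_hot("camel", classes)   # array([0., 0., 1., 0., 0.])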

49 Multi-class networks Input Layer Hidden Layers Output Layer For a multi-class classifier with N classes, the one-hot representation will have N binary outputs An N-dimensional binary vector The neural network s output too must ideally be binary (N-1 zeros and a single 1 in the right place) More realistically, it will be a probability vector N probability values that sum to 1. 49

50 Multi-class classification: Output Input Layer Hidden Layers Output Layer Softmax vector activation is often used at the output of multi-class classifier nets: z_i = Σ_j w_ji y_j + b_i ; y_i = exp(z_i) / Σ_j exp(z_j). This can be viewed as the probability of class i: y_i = P(class = i | X) 50

51 Typical Problem Statement We are given a number of training data instances E.g. images of digits, along with information about which digit the image represents Tasks: Binary recognition: Is this a 2 or not Multi-class recognition: Which digit is this? Is this a digit in the first place? 51

52 Typical Problem statement: binary classification Training data: (image, 0), (image, 1), (image, 1), (image, 0), (image, 0), (image, 1). Input: vector of pixel values. Output: sigmoid. Given many positive and negative examples (training data), learn all weights such that the network does the desired job 52

53 Typical Problem statement: multiclass classification Training data: (image, 5), (image, 2), (image, 2), (image, 4), (image, 0), (image, 2). Input Layer Hidden Layers Output Layer (softmax). Input: vector of pixel values. Output: class probabilities. Given many positive and negative examples (training data), learn all weights such that the network does the desired job 53

54 Problem Setup: Things to define Given a training set of input-output pairs Minimize the following function What is the divergence div()? 54

55 Examples of divergence functions For real-valued output vectors, the (scaled) L2 divergence is popular: Div(Y, d) = (1/2) ‖Y − d‖² = (1/2) Σ_i (y_i − d_i)², the squared Euclidean distance between true and desired output. Note: this is differentiable, with dDiv/dy_i = y_i − d_i 55

56 For binary classifier For a binary classifier with scalar output Y ∈ (0, 1) and d ∈ {0, 1}, the cross entropy (KL divergence) between the probability distribution [Y, 1−Y] and the ideal output probability is popular: Div(Y, d) = −d log Y − (1 − d) log(1 − Y). Minimum when d = Y. Derivative: dDiv(Y, d)/dY = −1/Y if d = 1, and 1/(1 − Y) if d = 0 56

57 For multi-class classification Desired output d is a one-hot vector with the 1 in the c-th position (for class c). Actual output will be a probability distribution [y_1, y_2, ...]. The cross-entropy between the desired one-hot output and the actual output: Div(Y, d) = −Σ_i d_i log y_i = −log y_c. Derivative: dDiv(Y, d)/dy_i = −1/y_c for the c-th component, 0 for the remaining components; i.e. ∇_Y Div(Y, d) = [0 ... −1/y_c ... 0]
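
The three divergences just described, sketched in NumPy together with the derivatives needed later in the backward pass (the 1/2 scaling on the L2 divergence follows the "scaled" version on slide 55; function names are illustrative):

import numpy as np

def l2_div(y, d):
    return 0.5 * np.sum((y - d)**2)

def d_l2_div(y, d):
    return y - d                              # dDiv/dy_i = y_i - d_i

def binary_xent(y, d):
    # y: scalar sigmoid output in (0,1); d: 0 or 1
    return -(d * np.log(y) + (1 - d) * np.log(1 - y))

def d_binary_xent(y, d):
    return -1.0 / y if d == 1 else 1.0 / (1.0 - y)

def multiclass_xent(y, d):
    # y: probability vector (e.g. softmax output); d: one-hot vector
    return -np.sum(d * np.log(y))             # = -log y_c for the true class c

def d_multiclass_xent(y, d):
    return -d / y                             # -1/y_c at the true class, 0 elsewhere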

58 Problem Setup Given a training set of input-output pairs (X_1, d_1), ..., (X_T, d_T). The error on the i-th instance is div(Y_i, d_i). The total error: Err = (1/T) Σ_i div(Y_i, d_i). Minimize Err w.r.t. the weights and biases {w_ij^(k), b_j^(k)} 58

59 Recap: Gradient Descent Algorithm In order to minimize any function f(w) w.r.t. w: Initialize w^0, k = 0. While f(w^(k+1)) − f(w^k) > ε: w^(k+1) = w^k − η^k ∇_w f(w^k)^T; k = k + 1

60 Recap: Gradient Descent Algorithm In order to minimize any function f(w) w.r.t. w, explicitly stating it by component: Initialize w^0, k = 0. While f(w^(k+1)) − f(w^k) > ε: for every component i: w_i^(k+1) = w_i^k − η^k df(w^k)/dw_i; k = k + 1

61 Training Neural Nets through Gradient Descent Total training error: Err = (1/T) Σ_t Div(Y_t, d_t). Gradient descent algorithm (using the extended notation: the bias is also represented as a weight): Initialize all weights and biases. Do: for every layer k, for all i, j, update w_ij^(k) = w_ij^(k) − η dErr/dw_ij^(k). Until Err has converged 61

62 Training Neural Nets through Gradient Descent Total training error: Err = (1/T) Σ_t Div(Y_t, d_t). Gradient descent algorithm: Initialize all weights. Do: for every layer k, for all i, j, update w_ij^(k) = w_ij^(k) − η dErr/dw_ij^(k). Until Err has converged 62

63 The derivative Total training error: Err = (1/T) Σ_t Div(Y_t, d_t). Computing the derivative. Total derivative: dErr/dw_ij^(k) = (1/T) Σ_t dDiv(Y_t, d_t)/dw_ij^(k) 63

64 Training by gradient descent Initialize all weights w_ij^(k). Do: For all i, j, k, initialize dErr/dw_ij^(k) = 0. For all t: for every layer k, for all i, j: compute dDiv(Y_t, d_t)/dw_ij^(k); dErr/dw_ij^(k) += (1/T) dDiv(Y_t, d_t)/dw_ij^(k). For every layer k, for all i, j: update w_ij^(k) = w_ij^(k) − η dErr/dw_ij^(k). Until Err has converged 64

65 The derivative Total training error: Err = (1/T) Σ_t Div(Y_t, d_t). Total derivative: dErr/dw_ij^(k) = (1/T) Σ_t dDiv(Y_t, d_t)/dw_ij^(k). So we must first figure out how to compute the derivative of the divergences of individual training inputs 65

66 Calculus Refresher: Basic rules of calculus For any differentiable function y = f(x) with derivative dy/dx, the following must hold for sufficiently small Δx: Δy ≈ (dy/dx) Δx. For any differentiable function y = f(x_1, x_2, ..., x_M) with partial derivatives ∂y/∂x_i, the following must hold for sufficiently small Δx_1, ..., Δx_M: Δy ≈ Σ_i (∂y/∂x_i) Δx_i 66

67 Calculus Refresher: Chain rule For any nested function y = f(g(x)): dy/dx = (dy/dg(x)) · (dg(x)/dx). Check: we can confirm that Δy ≈ (dy/dg(x)) Δg(x) ≈ (dy/dg(x)) (dg(x)/dx) Δx 67

68 Calculus Refresher: Distributed Chain rule For y = f(g_1(x), g_2(x), ..., g_M(x)): dy/dx = Σ_i (∂y/∂g_i(x)) (dg_i(x)/dx). Check: Δy ≈ Σ_i (∂y/∂g_i(x)) Δg_i(x) ≈ Σ_i (∂y/∂g_i(x)) (dg_i(x)/dx) Δx 68

69 Distributed Chain Rule: Influence Diagram x affects y through each of g_1(x), ..., g_M(x) 69

70 Distributed Chain Rule: Influence Diagram Small perturbations in x cause small perturbations in each of g_1(x), ..., g_M(x), each of which individually additively perturbs y 70

71 Returning to our problem How to compute dDiv(Y, d)/dw_ij^(k) 71

72 A first closer look at the network Showing a tiny 2-input network for illustration Actual network would have many more neurons and inputs 72

73 A first closer look at the network Showing a tiny 2-input network for illustration; an actual network would have many more neurons and inputs. Explicitly separating the weighted sum of inputs (+) from the activation f(.) 73

74 A first closer look at the network Showing a tiny 2-input network for illustration; an actual network would have many more neurons and inputs. Expanded with all weights w_ij^(k) and activations shown. The overall function is differentiable w.r.t. every weight, bias and input 74

75 Computing the derivative for a single input Each yellow ellipse represents a perceptron. Aim: compute the derivative of the divergence w.r.t. each of the weights. But first, let's label all our variables and activation functions 75

76 Computing the derivative for a single input [figure: the network with every weight w_ij^(k), affine sum z_i^(k), activation f_k and output y_i^(k) labeled, ending in the divergence Div(Y, d)] 76

77 Computing the gradient What is dDiv(Y, d)/dw_ij^(k)? Derive on board 77

78 Computing the gradient What is dDiv(Y, d)/dw_ij^(k)? Derive on board. Note: computation of the derivative requires intermediate and final output values of the network in response to the input 78

79 BP: Scalar Formulation Div(Y,d) The network again

80 Gradients: Local Computation Redrawn: the network as a chain z^(k-1) → y^(k-1) → z^(k) → y^(k) → ... → z^(N) → y^(N) → Div(Y, d). Separately label the input and output of each node

81 Forward Computation Assuming y_i^(0) = x_i: z_j^(1) = Σ_i w_ij^(1) y_i^(0) + b_j^(1) and y_j^(1) = f_1(z_j^(1))

82 Forward Computation In general, for layer k: z_j^(k) = Σ_i w_ij^(k) y_i^(k-1) + b_j^(k) and y_j^(k) = f_k(z_j^(k))

83 Forward Computation z (k-1) y (k-1) z (k) y (k) z (N-1) y (N-1) z(n) y (N) f N f N 1 1 1

84 Forward Computation ITERATE for k = 1:N, for j = 1:layer-width: z_j^(k) = Σ_i w_ij^(k) y_i^(k-1) + b_j^(k) ; y_j^(k) = f_k(z_j^(k))

85 Forward Pass Input: D-dimensional vector x = [x_1, ..., x_D]. Set: D_0 = D, the width of the 0th (input) layer; y_j^(0) = x_j for j = 1..D. For layer k = 1..N (D_k is the size of the k-th layer): for j = 1..D_k: z_j^(k) = Σ_i w_ij^(k) y_i^(k-1) + b_j^(k) ; y_j^(k) = f_k(z_j^(k)). Output: Y = y^(N) 85
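
A direct Python transcription of this pseudocode, assuming weights are stored as matrices with W[k][i, j] = w_ij^(k) (the connection from unit i of layer k-1 to unit j of layer k) and biases as vectors b[k]; the storage layout and function name are assumptions for illustration.

import numpy as np

def forward_pass(x, W, b, activations):
    """W[k], b[k], activations[k] describe layer k (k = 1..N); index 0 is unused."""
    y = [np.asarray(x, dtype=float)]     # y[0] is the input layer
    z = [None]                           # keep z for the backward pass
    for k in range(1, len(W)):
        z_k = y[k-1] @ W[k] + b[k]       # z_j = sum_i w_ij y_i + b_j
        z.append(z_k)
        y.append(activations[k](z_k))    # y_j = f_k(z_j)
    return y, z                          # y[-1] is the network output

Both y and z are returned because, as the following slides show, the backward pass needs the intermediate values computed here.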

86 Gradients: Backward Computation We now work backwards through the chain z^(k-1) → y^(k-1) → ... → z^(N) → y^(N) → Div(Y, d), starting from the divergence

87 Gradients: Backward Computation First compute the derivative of the divergence w.r.t. the network outputs: dDiv/dy_i^(N)

88 Gradients: Backward Computation Then dDiv/dz_i^(N) = (dy_i^(N)/dz_i^(N)) · dDiv/dy_i^(N)

89 Gradients: Backward Computation dDiv/dz_i^(N) = f_N'(z_i^(N)) dDiv/dy_i^(N), where z_i^(N) was computed during the forward pass

90 Gradients: Backward Computation f_N'() is the derivative of the activation function of the Nth layer

91 Gradients: Backward Computation dDiv/dy_i^(N-1) = Σ_j (∂z_j^(N)/∂y_i^(N-1)) dDiv/dz_j^(N) = Σ_j w_ij^(N) dDiv/dz_j^(N), because z_j^(N) = Σ_i w_ij^(N) y_i^(N-1) + b_j^(N), so ∂z_j^(N)/∂y_i^(N-1) = w_ij^(N)

92 Gradients: Backward Computation dDiv/dz_i^(N-1) = f_{N-1}'(z_i^(N-1)) dDiv/dy_i^(N-1), where z_i^(N-1) was computed during the forward pass

93 Gradients: Backward Computation In general, dDiv/dz_i^(k) = f_k'(z_i^(k)) dDiv/dy_i^(k), using the z_i^(k) computed during the forward pass

94 Gradients: Backward Computation And dDiv/dy_i^(k-1) = Σ_j w_ij^(k) dDiv/dz_j^(k), continuing the recursion backwards layer by layer

95 Gradients: Backward Computation Finally, the derivative w.r.t. the weights: dDiv/dw_ij^(k) = (∂z_j^(k)/∂w_ij^(k)) dDiv/dz_j^(k) = y_i^(k-1) dDiv/dz_j^(k)

96 Gradients: Backward Computation Figure assumes, but does not show, the 1 bias nodes. Initialize: gradient w.r.t. the network output, dDiv/dy_i^(N)

97 Backward Pass Output layer (N): for i = 1..D_N: compute dDiv(Y, d)/dy_i^(N); dDiv/dz_i^(N) = f_N'(z_i^(N)) dDiv/dy_i^(N). For layer k = N downto 1: for i = 1..D_{k-1}: dDiv/dy_i^(k-1) = Σ_j w_ij^(k) dDiv/dz_j^(k); dDiv/dz_i^(k-1) = f_{k-1}'(z_i^(k-1)) dDiv/dy_i^(k-1); dDiv/dw_ij^(k) = y_i^(k-1) dDiv/dz_j^(k) and dDiv/db_j^(k) = dDiv/dz_j^(k) for all j 97

98 Backward Pass Output layer (N): for i = 1..D_N: compute dDiv(Y, d)/dy_i^(N); dDiv/dz_i^(N) = f_N'(z_i^(N)) dDiv/dy_i^(N). For layer k = N downto 1: for i = 1..D_{k-1}: dDiv/dy_i^(k-1) = Σ_j w_ij^(k) dDiv/dz_j^(k); dDiv/dz_i^(k-1) = f_{k-1}'(z_i^(k-1)) dDiv/dy_i^(k-1); dDiv/dw_ij^(k) = y_i^(k-1) dDiv/dz_j^(k) for all j. Called Backpropagation because the derivative of the error is propagated backwards through the network. Very analogous to the forward pass: a backward weighted combination of the next layer, and a backward equivalent of the activation 98
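
Continuing the forward_pass sketch from slide 85, a minimal scalar-activation backward pass in the same (assumed) storage convention; d_activations[k] is assumed to return f_k' evaluated elementwise at z, and dDiv_dY is the gradient of the divergence w.r.t. the network output.

import numpy as np

def backward_pass(y, z, W, d_activations, dDiv_dY):
    """Return per-layer gradients dDiv/dW[k], dDiv/db[k] (scalar activations only)."""
    N = len(W) - 1
    dW = [None] * (N + 1)
    db = [None] * (N + 1)
    dz = dDiv_dY * d_activations[N](z[N])          # output layer: dDiv/dz^(N)
    for k in range(N, 0, -1):
        dW[k] = np.outer(y[k-1], dz)               # dDiv/dw_ij = y_i^(k-1) * dDiv/dz_j^(k)
        db[k] = dz                                 # dDiv/db_j  = dDiv/dz_j^(k)
        dy_prev = W[k] @ dz                        # dDiv/dy_i^(k-1) = sum_j w_ij dDiv/dz_j
        if k > 1:
            dz = dy_prev * d_activations[k-1](z[k-1])
    return dW, db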

99 For comparison: the forward pass again Input: D-dimensional vector x. Set: D_0 = D, the width of the 0th (input) layer; y_j^(0) = x_j. For layer k = 1..N: for j = 1..D_k: z_j^(k) = Σ_i w_ij^(k) y_i^(k-1) + b_j^(k) ; y_j^(k) = f_k(z_j^(k)). Output: Y = y^(N) 99

100 Special cases Have assumed so far that 1. The computation of the output of one neuron does not directly affect computation of other neurons in the same (or previous) layers 2. Outputs of neurons only combine through weighted addition 3. Activations are actually differentiable All of these conditions are frequently not applicable Not discussed in class, but explained in slides Will appear in quiz. Please read the slides 100

101 Special Case 1. Vector activations y (k-1) z (k) y (k) y (k-1) z (k) y (k) Vector activations: all outputs are functions of all inputs 101

102 Special Case 1. Vector activations Scalar activation: modifying a z_i^(k) only changes the corresponding y_i^(k). Vector activation: modifying a z_i^(k) potentially changes all of y_1^(k), ..., y_M^(k) 102

103 Influence diagram Scalar activation: each z_i^(k) influences only one y_i^(k). Vector activation: each z_i^(k) influences all of y_1^(k), ..., y_M^(k) 103

104 The number of outputs y (k-1) z (k) y (k) y (k-1) z (k) y (k) Note: The number of outputs (y (k) ) need not be the same as the number of inputs (z (k) ) May be more or fewer 104

105 Scalar Activation: Derivative rule In the case of scalar activation functions, the derivative of the error w.r.t. the input to the unit is a simple product of derivatives: dDiv/dz_i^(k) = (dy_i^(k)/dz_i^(k)) dDiv/dy_i^(k) = f_k'(z_i^(k)) dDiv/dy_i^(k) 105

106 Derivatives of vector activation Note: derivatives of scalar activations are just a special case of vector activations, with ∂y_j^(k)/∂z_i^(k) = 0 for i ≠ j. For vector activations the derivative of the error w.r.t. any input is a sum of partial derivatives, regardless of the number of outputs: dDiv/dz_i^(k) = Σ_j (∂y_j^(k)/∂z_i^(k)) dDiv/dy_j^(k) 106

107 Special cases Examples of vector activations and other special cases on slides Please look up Will appear in quiz! 107

108 Example Vector Activation: Softmax y_i^(k) = exp(z_i^(k)) / Σ_j exp(z_j^(k)). dDiv/dz_i^(k) = Σ_j (∂y_j^(k)/∂z_i^(k)) dDiv/dy_j^(k), with ∂y_j^(k)/∂z_i^(k) = y_j^(k) (δ_ij − y_i^(k)). For future reference, δ_ij is the Kronecker delta: δ_ij = 1 if i = j, 0 otherwise 108
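
A small sketch of the softmax Jacobian using the Kronecker-delta expression above (the max-shift inside softmax is a standard numerical-stability trick, not something stated on the slide):

import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))          # shift for numerical stability
    return e / np.sum(e)

def softmax_jacobian(z):
    """J[i, j] = dy_i/dz_j = y_i * (delta_ij - y_j)."""
    y = softmax(z)
    return np.diag(y) - np.outer(y, y)

When this Jacobian is combined with the cross-entropy divergence of slide 57, the product collapses to the simple form dDiv/dz_i = y_i − d_i, which is one reason the softmax/cross-entropy pairing is so common.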

109 Vector Activations y (k-1) z (k) y (k) In reality the vector combinations can be anything E.g. linear combinations, polynomials, logistic (softmax), etc. 109

110 Special Case 2: Multiplicative networks Forward: o_i^(k) = y_j^(k-1) · y_l^(k-1). Some types of networks have multiplicative combination, in contrast to the additive combination we have seen so far. Seen in networks such as LSTMs, GRUs, attention models, etc.

111 Backpropagation: Multiplicative Networks Forward: o_i^(k) = y_j^(k-1) · y_l^(k-1). Backward: dDiv/dy_j^(k-1) = (∂o_i^(k)/∂y_j^(k-1)) dDiv/do_i^(k) = y_l^(k-1) dDiv/do_i^(k), and similarly dDiv/dy_l^(k-1) = y_j^(k-1) dDiv/do_i^(k). Some types of networks have multiplicative combination

112 Multiplicative combination as a case of vector activations y (k-1) z (k) y (k) A layer of multiplicative combination is a special case of vector activation 112

113 Multiplicative combination: Can be viewed as a case of vector activations A layer of multiplicative combination is a special case of vector activation 113

114 Gradients: Backward Computation For k = N downto 1: for i = 1:layer-width: if the layer has a vector activation: dDiv/dz_i^(k) = Σ_j (∂y_j^(k)/∂z_i^(k)) dDiv/dy_j^(k); else, if the activation is scalar: dDiv/dz_i^(k) = f_k'(z_i^(k)) dDiv/dy_i^(k). Then dDiv/dy_i^(k-1) = Σ_j w_ij^(k) dDiv/dz_j^(k) and dDiv/dw_ij^(k) = y_i^(k-1) dDiv/dz_j^(k)

115 Backward Pass for softmax output layer Output layer (N), with softmax y^(N) and KL divergence against d: for i = 1..D_N: compute dDiv/dy_i^(N); dDiv/dz_i^(N) = Σ_j dDiv/dy_j^(N) · y_j^(N) (δ_ij − y_i^(N)). For layer k = N downto 1: for i: dDiv/dy_i^(k-1) = Σ_j w_ij^(k) dDiv/dz_j^(k); dDiv/dz_i^(k-1) = f_{k-1}'(z_i^(k-1)) dDiv/dy_i^(k-1); dDiv/dw_ij^(k) = y_i^(k-1) dDiv/dz_j^(k) for all j 115

116 Special Case 3: Non-differentiable activations Activation functions are sometimes not actually differentiable. E.g. the RELU (Rectified Linear Unit): f(z) = z for z ≥ 0, f(z) = 0 for z < 0, and its variants: leaky RELU, randomized leaky RELU. E.g. the max function. Must use subgradients where available, or secants 116

117 The subgradient A subgradient of a function f at a point x is any vector v such that f(y) ≥ f(x) + v^T (y − x). Guaranteed to exist only for convex (bowl-shaped) functions; for non-convex functions, the equivalent concept is a quasi-secant. The subgradient is a direction in which the function is guaranteed to increase. If the function is differentiable at x, the subgradient is the gradient. The gradient is not always the subgradient, though 117

118 Subgradients and the RELU Can use any subgradient; at the differentiable points on the curve, this is the same as the gradient. Typically, will use: df(z)/dz = 1 for z > 0, 0 otherwise 118

119 Subgradients and the Max z 1 z 2 y z N Vector equivalent of subgradient 1 w.r.t. the largest incoming input Incremental changes in this input will change the output 0 for the rest Incremental changes to these inputs will not change the output 119
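
A sketch of this subgradient rule for the max: the incoming derivative is routed entirely to the largest input (names are illustrative):

import numpy as np

def max_forward(z):
    return np.max(z), np.argmax(z)     # remember which input won

def max_backward(dDiv_dy, winner, n):
    dz = np.zeros(n)
    dz[winner] = dDiv_dy               # 1 * dDiv/dy for the largest input, 0 for the rest
    return dz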

120 Subgradients and the Max z 1 y 1 z 2 y 2 y 3 z N y M Multiple outputs, each selecting the max of a different subset of inputs Will be seen in convolutional networks Gradient for any output: 1 for the specific component that is maximum in corresponding input subset 0 otherwise 120

121 Backward Pass: Recap Output layer (N): for i = 1..D_N: compute dDiv(Y, d)/dy_i^(N); dDiv/dz_i^(N) = f_N'(z_i^(N)) dDiv/dy_i^(N), or Σ_j dDiv/dy_j^(N) ∂y_j^(N)/∂z_i^(N) for a vector activation. For layer k = N downto 1: for i: dDiv/dy_i^(k-1) = Σ_j w_ij^(k) dDiv/dz_j^(k); dDiv/dz_i^(k-1) = f_{k-1}'(z_i^(k-1)) dDiv/dy_i^(k-1), or Σ_j dDiv/dy_j^(k-1) ∂y_j^(k-1)/∂z_i^(k-1) for a vector activation; dDiv/dw_ij^(k) = y_i^(k-1) dDiv/dz_j^(k) for all j 121

122 Overall Approach For each data instance Forward pass: Pass instance forward through the net. Store all intermediate outputs of all computation Backward pass: Sweep backward through the net, iteratively compute all derivatives w.r.t weights Actual Error is the sum of the error over all training instances Actual gradient is the sum or average of the derivatives computed for each training instance

123 Training by BackProp Initialize all weights. Do: Initialize Err = 0; for all i, j, k, initialize dErr/dw_ij^(k) = 0. For all t (loop over training instances): Forward pass: compute output Y_t; Err += Div(Y_t, d_t). Backward pass: for all i, j, k: compute dDiv(Y_t, d_t)/dw_ij^(k); dErr/dw_ij^(k) += (1/T) dDiv(Y_t, d_t)/dw_ij^(k). For all i, j, k, update: w_ij^(k) = w_ij^(k) − η dErr/dw_ij^(k). Until Err has converged 123
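
A compact sketch of this training loop, reusing the forward_pass and backward_pass sketches from slides 85 and 98 (so the names, the placement of the 1/T averaging and the data format are assumptions):

def train(data, W, b, activations, d_activations, div, d_div, eta, epochs):
    """data: list of (x, d) pairs; reuses forward_pass / backward_pass from earlier sketches."""
    for _ in range(epochs):
        dW_tot = [None] + [0.0 * Wk for Wk in W[1:]]   # accumulated dErr/dW
        db_tot = [None] + [0.0 * bk for bk in b[1:]]
        err = 0.0
        for x, d in data:
            y, z = forward_pass(x, W, b, activations)
            err += div(y[-1], d)
            dW, db = backward_pass(y, z, W, d_activations, d_div(y[-1], d))
            for k in range(1, len(W)):
                dW_tot[k] += dW[k]
                db_tot[k] += db[k]
        for k in range(1, len(W)):                     # gradient descent step
            W[k] -= eta * dW_tot[k] / len(data)
            b[k] -= eta * db_tot[k] / len(data)
    return W, b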

124 Vector formulation For layered networks it is generally simpler to think of the process in terms of vector operations Simpler arithmetic Fast matrix libraries make operations much faster We can restate the entire process in vector terms On slides, please read This is what is actually used in any real system Will appear in quiz 124

125 Vector formulation Arrange all inputs to the network in a vector x. Arrange the inputs to the neurons of the k-th layer as a vector z_k. Arrange the outputs of the neurons in the k-th layer as a vector y_k. Arrange the weights to any layer as a matrix W_k; similarly with the biases b_k 125

126 Vector formulation The computation of a single layer is easily expressed in matrix notation as (setting y_0 = x): z_k = W_k y_{k-1} + b_k ; y_k = f_k(z_k) 126

127 The forward pass: Evaluating the network 0 127

128 The forward pass

129 The forward pass The complete computation 129

130 The forward pass The complete computation 130

131 The forward pass The complete computation 131

132 The forward pass The complete computation 132

133 The forward pass The complete computation: Y = y_N = f_N(W_N f_{N-1}(W_{N-1} ... f_1(W_1 x + b_1) ... + b_{N-1}) + b_N) 133

134 Forward pass Forward pass: Initialize y_0 = x. For k = 1 to N: z_k = W_k y_{k-1} + b_k ; y_k = f_k(z_k). Output Y = y_N, from which the divergence Div(Y, d) is computed

135 The Forward Pass Set y_0 = x. For layer k = 1 to N: Recursion: z_k = W_k y_{k-1} + b_k ; y_k = f_k(z_k). Output: Y = y_N 135
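
The same recursion in matrix form, as a minimal NumPy sketch; here, matching z_k = W_k y_{k-1} + b_k, W[k] is assumed to have shape (D_k, D_{k-1}) and left-multiplies the previous layer's output (note this is the transpose of the per-weight layout used in the scalar sketches above).

import numpy as np

def forward_pass_vec(x, W, b, f):
    y = [np.asarray(x, dtype=float)]
    z = [None]
    for k in range(1, len(W)):
        z.append(W[k] @ y[k-1] + b[k])   # z_k = W_k y_{k-1} + b_k
        y.append(f[k](z[k]))             # y_k = f_k(z_k)
    return y, z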

136 The backward pass The network is a nested function: Y = f_N(W_N f_{N-1}(W_{N-1} ... f_1(W_1 x + b_1) ...) + b_N). The error for any (X, d) is also a nested function of the weights and biases

137 Calculus recap 2: The Jacobian The derivative of a vector function y = f(x) w.r.t. its vector input x is called a Jacobian: it is the matrix of partial derivatives with entries J_ij = ∂y_i/∂x_j. Using vector notation: Δy ≈ J_y(x) Δx. Check: each component Δy_i ≈ Σ_j (∂y_i/∂x_j) Δx_j 137

138 Jacobians can describe the derivatives of neural activations w.r.t their input z y For Scalar activations Number of outputs is identical to the number of inputs Jacobian is a diagonal matrix Diagonal entries are individual derivatives of outputs w.r.t inputs Not showing the superscript (k) in equations for brevity 138

139 Jacobians can describe the derivatives of neural activations w.r.t their input z y For scalar activations (shorthand notation): Jacobian is a diagonal matrix Diagonal entries are individual derivatives of outputs w.r.t inputs 139

140 For Vector activations z y Jacobian is a full matrix Entries are partial derivatives of individual outputs w.r.t individual inputs 140

141 Special case: Affine functions z = Wy + b: a matrix W and bias b operating on a vector y to produce a vector z. The Jacobian of z w.r.t. y is simply the matrix W 141

142 Vector derivatives: Chain rule We can define a chain rule for Jacobians. For vector functions of vector inputs, y = f(g(x)): J_y(x) = J_y(g(x)) · J_g(x). Check: Δy ≈ J_y(g(x)) Δg(x) ≈ J_y(g(x)) J_g(x) Δx. Note the order: the derivative of the outer function comes first 142

143 Vector derivatives: Chain rule The chain rule can combine Jacobians and gradients. For scalar functions of vector inputs (D is scalar, z = g(x) is a vector): ∇_x D = ∇_z D · J_g(x). Check: ΔD ≈ ∇_z D Δz ≈ ∇_z D J_g(x) Δx. Note the order: the derivative of the outer function comes first 143

144 Special Case: Scalar functions of affine functions For D = f(z) with z = Wy + b, the derivatives w.r.t. the parameters are ∇_W D = y ∇_z D and ∇_b D = ∇_z D. Note the reversal of order; this is in fact a simplification of a product of tensor terms that occur in the right order 144

145 The backward pass In the following slides we will also be using the notation ∇_z Y to represent the Jacobian of Y w.r.t. z, to explicitly illustrate the chain rule. In general ∇_a b represents the derivative of b w.r.t. a and could be a gradient (for scalar b) or a Jacobian (for vector b)

146 The backward pass First compute the gradient of the divergence w.r.t. the network output Y: ∇_Y Div. The actual gradient depends on the divergence function

147 The backward pass

148 The backward pass

149 The backward pass

150 The backward pass

151 The backward pass

152 The backward pass

153 The backward pass The Jacobian will be a diagonal matrix for scalar activations

154 The backward pass

155 The backward pass

156 The backward pass

157 The backward pass

158 The backward pass In some problems we will also want to compute the derivative w.r.t. the input

159 The Backward Pass Set y_N = Y, y_0 = x. Initialize: compute ∇_{y_N} Div (depends on the divergence function). For layer k = N downto 1: Compute the Jacobian ∇_{z_k} y_k (will require intermediate values computed in the forward pass). Recursion: ∇_{z_k} Div = ∇_{y_k} Div · ∇_{z_k} y_k ; ∇_{y_{k-1}} Div = ∇_{z_k} Div · W_k. Gradient computation: ∇_{W_k} Div = y_{k-1} ∇_{z_k} Div ; ∇_{b_k} Div = ∇_{z_k} Div 159

160 The Backward Pass Set y_N = Y, y_0 = x. Initialize: compute ∇_{y_N} Div. For layer k = N downto 1: Compute the Jacobian ∇_{z_k} y_k (will require intermediate values computed in the forward pass). Recursion: ∇_{z_k} Div = ∇_{y_k} Div · ∇_{z_k} y_k ; ∇_{y_{k-1}} Div = ∇_{z_k} Div · W_k. Note the analogy to the forward pass. Gradient computation: ∇_{W_k} Div = y_{k-1} ∇_{z_k} Div ; ∇_{b_k} Div = ∇_{z_k} Div 160
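
The corresponding vector backward pass, continuing the forward_pass_vec sketch; jac[k] is assumed to return the Jacobian dy_k/dz_k (a diagonal matrix for scalar activations, a full matrix for e.g. softmax), and dDiv_dY is the gradient of the divergence w.r.t. the output.

import numpy as np

def backward_pass_vec(y, z, W, jac, dDiv_dY):
    """Return gradients of Div w.r.t. each W[k] and b[k] (and the input)."""
    N = len(W) - 1
    dW = [None] * (N + 1)
    db = [None] * (N + 1)
    g_y = np.asarray(dDiv_dY, dtype=float)     # dDiv/dy_N from the divergence
    for k in range(N, 0, -1):
        J = jac[k](z[k], y[k])                 # Jacobian dy_k/dz_k
        g_z = J.T @ g_y                        # dDiv/dz_k
        dW[k] = np.outer(g_z, y[k-1])          # dDiv/dW_k = (dDiv/dz_k) y_{k-1}^T
        db[k] = g_z                            # dDiv/db_k = dDiv/dz_k
        g_y = W[k].T @ g_z                     # dDiv/dy_{k-1}
    return dW, db, g_y                         # g_y is now dDiv/dx

The final g_y that is returned is the derivative w.r.t. the input, which slide 158 notes is sometimes also wanted.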

161 For comparison: The Forward Pass Set y_0 = x. For layer k = 1 to N: Recursion: z_k = W_k y_{k-1} + b_k ; y_k = f_k(z_k). Output: Y = y_N 161

162 Neural network training algorithm Initialize all weights and biases W_1, b_1, ..., W_N, b_N. Do: Err = 0; for all k, initialize ∇_{W_k} Err = 0, ∇_{b_k} Err = 0. For all t: Forward pass: compute output Y(X_t) and divergence Div(Y_t, d_t); Err += Div(Y_t, d_t). Backward pass: for all k compute ∇_{z_k} Div = ∇_{y_k} Div · ∇_{z_k} y_k ; ∇_{y_{k-1}} Div = ∇_{z_k} Div · W_k ; ∇_{W_k} Div(Y_t, d_t) and ∇_{b_k} Div(Y_t, d_t); accumulate ∇_{W_k} Err += ∇_{W_k} Div(Y_t, d_t) ; ∇_{b_k} Err += ∇_{b_k} Div(Y_t, d_t). For all k, update: W_k = W_k − η ∇_{W_k} Err ; b_k = b_k − η ∇_{b_k} Err. Until Err has converged 162
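
One epoch of this algorithm as a sketch built from the vectorized forward/backward passes above (forward_pass_vec, backward_pass_vec and the divergence helpers are the assumed names from the earlier examples):

def train_epoch_vec(data, W, b, f, jac, div, d_div, eta):
    err = 0.0
    dW_tot = [None] + [0.0 * Wk for Wk in W[1:]]
    db_tot = [None] + [0.0 * bk for bk in b[1:]]
    for x, d in data:                              # loop over training instances
        y, z = forward_pass_vec(x, W, b, f)
        err += div(y[-1], d)
        dW, db, _ = backward_pass_vec(y, z, W, jac, d_div(y[-1], d))
        for k in range(1, len(W)):
            dW_tot[k] += dW[k]
            db_tot[k] += db[k]
    for k in range(1, len(W)):                     # update, matching the slide's W_k -= eta * grad Err
        W[k] -= eta * dW_tot[k]
        b[k] -= eta * db_tot[k]
    return err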

163 Setting up for digit recognition Training data: (image, 0), (image, 1), (image, 0), (image, 1), (image, 0), (image, 1). Sigmoid output neuron. Simple problem: recognizing "2 or not 2". Single output with sigmoid activation. Use KL divergence. Backpropagation to learn network parameters 163

164 Recognizing the digit Training data: labeled digit images, as above. More complex problem: recognizing the digit. Network with 10 (or 11) outputs: the first ten outputs correspond to the ten digits; an optional 11th is for "none of the above". Softmax output layer. Ideal output: one of the outputs goes to 1, the others go to 0. Backpropagation with KL divergence to learn the network 164

165 Issues Convergence: How well does it learn And how can we improve it How well will it generalize (outside training data) What does the output really mean? Etc.. 165

166 Next up Convergence and generalization 166


More information

Intro to Neural Networks and Deep Learning

Intro to Neural Networks and Deep Learning Intro to Neural Networks and Deep Learning Jack Lanchantin Dr. Yanjun Qi UVA CS 6316 1 Neurons 1-Layer Neural Network Multi-layer Neural Network Loss Functions Backpropagation Nonlinearity Functions NNs

More information

What Do Neural Networks Do? MLP Lecture 3 Multi-layer networks 1

What Do Neural Networks Do? MLP Lecture 3 Multi-layer networks 1 What Do Neural Networks Do? MLP Lecture 3 Multi-layer networks 1 Multi-layer networks Steve Renals Machine Learning Practical MLP Lecture 3 7 October 2015 MLP Lecture 3 Multi-layer networks 2 What Do Single

More information

Backpropagation Introduction to Machine Learning. Matt Gormley Lecture 12 Feb 23, 2018

Backpropagation Introduction to Machine Learning. Matt Gormley Lecture 12 Feb 23, 2018 10-601 Introduction to Machine Learning Machine Learning Department School of Computer Science Carnegie Mellon University Backpropagation Matt Gormley Lecture 12 Feb 23, 2018 1 Neural Networks Outline

More information

Machine Learning Lecture 10

Machine Learning Lecture 10 Machine Learning Lecture 10 Neural Networks 26.11.2018 Bastian Leibe RWTH Aachen http://www.vision.rwth-aachen.de leibe@vision.rwth-aachen.de Today s Topic Deep Learning 2 Course Outline Fundamentals Bayes

More information

Classification goals: Make 1 guess about the label (Top-1 error) Make 5 guesses about the label (Top-5 error) No Bounding Box

Classification goals: Make 1 guess about the label (Top-1 error) Make 5 guesses about the label (Top-5 error) No Bounding Box ImageNet Classification with Deep Convolutional Neural Networks Alex Krizhevsky, Ilya Sutskever, Geoffrey E. Hinton Motivation Classification goals: Make 1 guess about the label (Top-1 error) Make 5 guesses

More information

Lab 5: 16 th April Exercises on Neural Networks

Lab 5: 16 th April Exercises on Neural Networks Lab 5: 16 th April 01 Exercises on Neural Networks 1. What are the values of weights w 0, w 1, and w for the perceptron whose decision surface is illustrated in the figure? Assume the surface crosses the

More information

Computational Intelligence

Computational Intelligence Plan for Today Single-Layer Perceptron Computational Intelligence Winter Term 00/ Prof. Dr. Günter Rudolph Lehrstuhl für Algorithm Engineering (LS ) Fakultät für Informatik TU Dortmund Accelerated Learning

More information

Advanced Machine Learning

Advanced Machine Learning Advanced Machine Learning Lecture 4: Deep Learning Essentials Pierre Geurts, Gilles Louppe, Louis Wehenkel 1 / 52 Outline Goal: explain and motivate the basic constructs of neural networks. From linear

More information

ECE G: Special Topics in Signal Processing: Sparsity, Structure, and Inference

ECE G: Special Topics in Signal Processing: Sparsity, Structure, and Inference ECE 18-898G: Special Topics in Signal Processing: Sparsity, Structure, and Inference Neural Networks: A brief touch Yuejie Chi Department of Electrical and Computer Engineering Spring 2018 1/41 Outline

More information

Deep Feedforward Networks. Seung-Hoon Na Chonbuk National University

Deep Feedforward Networks. Seung-Hoon Na Chonbuk National University Deep Feedforward Networks Seung-Hoon Na Chonbuk National University Neural Network: Types Feedforward neural networks (FNN) = Deep feedforward networks = multilayer perceptrons (MLP) No feedback connections

More information

Multilayer Neural Networks. (sometimes called Multilayer Perceptrons or MLPs)

Multilayer Neural Networks. (sometimes called Multilayer Perceptrons or MLPs) Multilayer Neural Networks (sometimes called Multilayer Perceptrons or MLPs) Linear separability Hyperplane In 2D: w x + w 2 x 2 + w 0 = 0 Feature x 2 = w w 2 x w 0 w 2 Feature 2 A perceptron can separate

More information

Long-Short Term Memory and Other Gated RNNs

Long-Short Term Memory and Other Gated RNNs Long-Short Term Memory and Other Gated RNNs Sargur Srihari srihari@buffalo.edu This is part of lecture slides on Deep Learning: http://www.cedar.buffalo.edu/~srihari/cse676 1 Topics in Sequence Modeling

More information

Machine Learning Lecture 12

Machine Learning Lecture 12 Machine Learning Lecture 12 Neural Networks 30.11.2017 Bastian Leibe RWTH Aachen http://www.vision.rwth-aachen.de leibe@vision.rwth-aachen.de Course Outline Fundamentals Bayes Decision Theory Probability

More information

Neural networks. Chapter 20. Chapter 20 1

Neural networks. Chapter 20. Chapter 20 1 Neural networks Chapter 20 Chapter 20 1 Outline Brains Neural networks Perceptrons Multilayer networks Applications of neural networks Chapter 20 2 Brains 10 11 neurons of > 20 types, 10 14 synapses, 1ms

More information