Deep Neural Networks (1): Hidden layers; Back-propagation
Steve Renals, Machine Learning Practical, MLP Lecture 3
4 October 2017 / 9 October 2017
Recap: Softmax single-layer network
[Diagram: inputs fully connected to three summation units, passed through a softmax to give outputs for class 1, class 2 and class 3]
Single-layer network
Single-layer network, 1 output, 2 inputs
[Diagram: inputs x1 and x2 feeding a single summation unit]
Geometric interpretation
Single-layer network, 1 output, 2 inputs
[Diagram: decision boundary y(w; x) = 0 in the (x1, x2) plane; the weight vector w is normal to the boundary and the bias b sets its offset from the origin]
(Bishop, sec 3.1)
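A minimal NumPy sketch of this geometry (the weights and bias are arbitrary values chosen for illustration): the activation w.x + b is zero on the decision line, positive on one side and negative on the other.

```python
import numpy as np

# Hypothetical weights and bias for a 2-input, 1-output linear unit.
w = np.array([1.0, -2.0])
b = 0.5

def side_of_boundary(x):
    """Return the activation w.x + b: zero on the decision line,
    positive on one side of it, negative on the other."""
    return w @ x + b

print(side_of_boundary(np.array([2.0, 0.0])))  # 2.5  -> positive side
print(side_of_boundary(np.array([0.0, 1.0])))  # -1.5 -> negative side
```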
Single-layer network
Single-layer network, 3 outputs, 2 inputs
[Diagram: inputs x1 and x2 fully connected to three summation units]
Example data (three classes)
[Figure: scatter plot of two-dimensional example data drawn from three classes]
Classification regions with single-layer network
[Figure: decision regions learned by a single-layer network on the three-class data]
Single-layer networks are limited to linear classification boundaries
Single-layer network trained on MNIST digits
[Diagram: 784 inputs (a 28x28 image) fully connected through a 784x10 weight matrix to 10 outputs, one per digit 0-9]
Output weights define a template for each class
Hinton diagrams
Visualise the weights for class k
[Diagram: Hinton diagram of the weights from 400 (20x20) inputs to one output unit]
Hinton diagram for single-layer network trained on MNIST
Weights for each class act as a discriminative template
Inner product of class weights and input measures closeness to each template
Classify to the closest template (maximum value output)
[Figure: Hinton diagrams of the learned weights for digits 0, 1, 2 and 3]
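As a hedged sketch of the template view (the random weights and input below stand in for trained values): classification picks the class whose weight vector has the largest inner product with the input.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(10, 784))   # one 784-dim "template" per digit class
b = np.zeros(10)
x = rng.normal(size=784)         # a flattened 28x28 input image

scores = W @ x + b               # inner product of each template with the input
predicted_class = int(np.argmax(scores))  # classify to the closest template
print(predicted_class)
```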
Multi-Layer Networks
From templates to features
Good classification needs to cope with the variability of real data: scale, skew, rotation, translation, ...
Very difficult to do with a single template per class
Could have multiple templates per task... this will work, but we can do better
Use features rather than templates
(images from: Nielsen, chapter 1)
Incorporating features in neural network architecture
Layered processing: inputs - features - classification
[Diagram: input layer feeding a feature (hidden) layer, feeding output units for digits 0-9]
How to obtain features? - learning!
Multi-layer network
[Diagram: inputs x_i feed a sigmoid hidden layer h^(1) (activations a^(1), weights w^(1), biases b^(1)), which feeds a softmax output layer y (activations a^(2), weights w^(2), biases b^(2))]
$$y_k = \mathrm{softmax}\left(\sum_{r=1}^{H} w^{(2)}_{kr} h^{(1)}_r + b^{(2)}_k\right) \qquad h^{(1)}_j = \mathrm{sigmoid}\left(\sum_{s=1}^{d} w^{(1)}_{js} x_s + b^{(1)}_j\right)$$
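A minimal NumPy forward pass implementing these two equations (the layer sizes, random initialisation and variable names are illustrative assumptions, not from the slides):

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def softmax(a):
    e = np.exp(a - a.max())       # subtract max for numerical stability
    return e / e.sum()

d, H, K = 784, 100, 10            # example sizes: inputs, hidden units, outputs
rng = np.random.default_rng(0)
W1, b1 = rng.normal(scale=0.1, size=(H, d)), np.zeros(H)
W2, b2 = rng.normal(scale=0.1, size=(K, H)), np.zeros(K)

x = rng.normal(size=d)
h1 = sigmoid(W1 @ x + b1)         # h^(1) = sigmoid(W^(1) x + b^(1))
y = softmax(W2 @ h1 + b2)         # y = softmax(W^(2) h^(1) + b^(2))
print(y.sum())                    # output probabilities sum to 1
```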
Multi-layer network for MNIST
(image from: Michael Nielsen, Neural Networks and Deep Learning, http://neuralnetworksanddeeplearning.com/chap1.html)
Training multi-layer networks: Credit assignment
Hidden units make training the weights more complicated, since the hidden units affect the error function indirectly via all the outputs
The credit assignment problem:
what is the error of a hidden unit?
how important is input-to-hidden weight $w^{(1)}_{ji}$ to output unit $k$?
what is the gradient of the error with respect to each weight?
Solution: back-propagation of error (backprop)
Backprop enables the gradients to be computed. These gradients are used by gradient descent to train the weights.
Training output weights
[Diagram: hidden unit activation h^(1)_j connects through weights w^(2)_1j ... w^(2)_Kj to output units y_1 ... y_K with error signals g^(2)_1 ... g^(2)_K]
$$\frac{\partial E^n}{\partial w^{(2)}_{kj}} = g^{(2)}_k h^{(1)}_j$$
Training MLPs: Error function and required gradients
Cross-entropy error function:
$$E^n = -\sum_{c=1}^{C} t^n_c \ln y^n_c$$
Required gradients:
$$\frac{\partial E^n}{\partial w^{(2)}_{kj}}, \quad \frac{\partial E^n}{\partial b^{(2)}_k}, \quad \frac{\partial E^n}{\partial w^{(1)}_{ji}}, \quad \frac{\partial E^n}{\partial b^{(1)}_j}$$
Gradient for hidden-to-output weights is similar to the single-layer network:
$$\frac{\partial E^n}{\partial w^{(2)}_{kj}} = \frac{\partial E^n}{\partial a^{(2)}_k} \frac{\partial a^{(2)}_k}{\partial w^{(2)}_{kj}} = \left(\sum_{c=1}^{C} \frac{\partial E^n}{\partial y_c} \frac{\partial y_c}{\partial a^{(2)}_k}\right) \frac{\partial a^{(2)}_k}{\partial w^{(2)}_{kj}} = \underbrace{(y_k - t_k)}_{g^{(2)}_k} h^{(1)}_j$$
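A hedged NumPy illustration of these output-layer gradients for a single training example (the toy values of y, t and h^(1) are my own, not from the slides):

```python
import numpy as np

# Toy values standing in for a forward pass: softmax outputs y,
# one-hot target t, and sigmoid hidden activations h1.
y = np.array([0.7, 0.2, 0.1])
t = np.array([1.0, 0.0, 0.0])
h1 = np.array([0.5, 0.9])

g2 = y - t                        # g^(2)_k = y_k - t_k
dE_dW2 = np.outer(g2, h1)         # dE/dw^(2)_{kj} = g^(2)_k h^(1)_j
print(dE_dW2.shape)               # (3, 2): one gradient per weight
```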
Back-propagation of error: hidden unit error signal
[Diagram: hidden unit h_j receives input weight w^(1)_ji and connects through weights w^(2)_1j ... w^(2)_Kj to output units y_1 ... y_K with error signals g^(2)_1 ... g^(2)_K]
$$g^{(1)}_j = \left(\sum_{c=1}^{K} g^{(2)}_c w^{(2)}_{cj}\right) h_j (1 - h_j) \qquad \frac{\partial E^n}{\partial w^{(1)}_{ji}} = g^{(1)}_j x_i$$
Training MLPs: Input-to-hidden weights
$$\frac{\partial E^n}{\partial w^{(1)}_{ji}} = \underbrace{\frac{\partial E^n}{\partial a^{(1)}_j}}_{g^{(1)}_j} \underbrace{\frac{\partial a^{(1)}_j}{\partial w^{(1)}_{ji}}}_{x_i}$$
To compute $g^{(1)}_j = \partial E^n / \partial a^{(1)}_j$, the error signal for hidden unit $j$, we must sum over all the output units' contributions to $g^{(1)}_j$:
$$g^{(1)}_j = \sum_{c=1}^{K} \frac{\partial E^n}{\partial a^{(2)}_c} \frac{\partial a^{(2)}_c}{\partial a^{(1)}_j} = \sum_{c=1}^{K} g^{(2)}_c \frac{\partial a^{(2)}_c}{\partial h^{(1)}_j} \frac{\partial h^{(1)}_j}{\partial a^{(1)}_j} = \left(\sum_{c=1}^{K} g^{(2)}_c w^{(2)}_{cj}\right) h^{(1)}_j (1 - h^{(1)}_j)$$
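Continuing the toy example, the hidden error signal and input-to-hidden gradients in NumPy (shapes and values assumed for illustration):

```python
import numpy as np

g2 = np.array([-0.3, 0.2, 0.1])    # output error signals (y - t)
W2 = np.array([[0.1, -0.2],
               [0.4,  0.3],
               [-0.5, 0.6]])       # K x H hidden-to-output weights
h1 = np.array([0.5, 0.9])          # sigmoid hidden activations
x = np.array([1.0, -1.0, 0.5])     # inputs

# g^(1)_j = (sum_c g^(2)_c w^(2)_{cj}) * h^(1)_j (1 - h^(1)_j)
g1 = (W2.T @ g2) * h1 * (1.0 - h1)
dE_dW1 = np.outer(g1, x)           # dE/dw^(1)_{ji} = g^(1)_j x_i
print(dE_dW1.shape)                # (2, 3): one gradient per input weight
```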
Training MLPs: Gradients
$$\frac{\partial E^n}{\partial w^{(2)}_{kj}} = \underbrace{(y_k - t_k)}_{g^{(2)}_k} h^{(1)}_j$$
$$\frac{\partial E^n}{\partial w^{(1)}_{ji}} = \underbrace{\left(\sum_{c=1}^{K} g^{(2)}_c w^{(2)}_{cj}\right) h^{(1)}_j (1 - h^{(1)}_j)}_{g^{(1)}_j} x_i$$
Exercise: write down expressions for the gradients w.r.t. the biases, $\partial E^n / \partial b^{(2)}_k$ and $\partial E^n / \partial b^{(1)}_j$
Back-propagation of error
The back-propagation of error algorithm is summarised as follows:
1. Apply an input vector from the training set, x, to the network and forward propagate to obtain the output vector y
2. Using the target vector t, compute the error E^n
3. Evaluate the error gradients g^(2) for each output unit
4. Evaluate the error gradients g^(1) for each hidden unit using back-propagation of error
5. Evaluate the derivatives for each training pattern
Back-propagation can be extended to multiple hidden layers, in each case computing the g^(l)s for the current layer as a weighted sum of the g^(l+1)s of the next layer
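Putting the five steps together, a minimal single-example training step in NumPy (a sketch under assumed shapes and a hypothetical learning rate, not the course's actual implementation):

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

def backprop_step(x, t, W1, b1, W2, b2, lr=0.1):
    # 1. Forward propagate to obtain the output vector y.
    h1 = sigmoid(W1 @ x + b1)
    y = softmax(W2 @ h1 + b2)
    # 2. Compute the cross-entropy error E^n.
    E = -np.sum(t * np.log(y))
    # 3. Error gradients for each output unit.
    g2 = y - t
    # 4. Error gradients for each hidden unit (back-propagation).
    g1 = (W2.T @ g2) * h1 * (1.0 - h1)
    # 5. Derivatives for this pattern, then a gradient-descent update.
    W2 -= lr * np.outer(g2, h1); b2 -= lr * g2
    W1 -= lr * np.outer(g1, x);  b1 -= lr * g1
    return E
```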
Training with multiple hidden layers
[Diagram: two sigmoid hidden layers h^(1) and h^(2) between inputs x_i and outputs y_1 ... y_K, with error signals g^(1), g^(2), g^(3) and weights w^(1), w^(2), w^(3)]
$$g^{(2)}_j = \left(\sum_m g^{(3)}_m w^{(3)}_{mj}\right) h^{(2)}_j (1 - h^{(2)}_j) \qquad \frac{\partial E^n}{\partial w^{(2)}_{ji}} = g^{(2)}_j h^{(1)}_i$$
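A hedged sketch of that layer-by-layer recursion for an arbitrary number of sigmoid hidden layers (the list-of-matrices representation is an assumption of this sketch):

```python
import numpy as np

def hidden_error_signals(g_top, Ws_above, hs):
    """Back-propagate error signals through all hidden layers.

    g_top: error signal of the output layer.
    Ws_above[l]: weight matrix from hidden layer l to the layer above it.
    hs[l]: sigmoid activations of hidden layer l (input-side first).
    Returns one error signal per hidden layer, input-side first."""
    g = g_top
    gs = []
    for W, h in zip(reversed(Ws_above), reversed(hs)):
        g = (W.T @ g) * h * (1.0 - h)  # g^(l) from g^(l+1)
        gs.append(g)
    return list(reversed(gs))
```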
Are there alternatives to Sigmoid Hidden Units?
Sigmoid function
[Figure: plot of the logistic sigmoid activation function $g(a) = 1/(1 + \exp(-a))$ for $a \in [-5, 5]$; the output rises from 0 to 1, passing through 0.5 at $a = 0$]
Sigmoid Hidden Units
Compress unbounded inputs to (0,1), saturating high magnitudes to 1
Interpretable as the probability of a feature defined by their weight vector
Interpretable as the (normalised) firing rate of a neuron
However...
Saturation causes gradients to approach 0: if the output of a sigmoid unit is h, then the gradient is h(1 - h), which approaches 0 as h saturates to 0 or 1, and hence the gradients it multiplies into approach 0. Very small gradients result in very small parameter changes, so learning becomes very slow
Outputs are not centred at 0: the output of a sigmoid layer will have mean > 0. This is numerically undesirable.
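A quick numerical check of the saturation problem (the toy activation values are chosen for illustration):

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

for a in [0.0, 2.0, 5.0, 10.0]:
    h = sigmoid(a)
    print(f"a={a:5.1f}  h={h:.5f}  local gradient h(1-h)={h * (1 - h):.6f}")
# The local gradient peaks at 0.25 for a=0 and collapses towards 0
# as the unit saturates, shrinking every gradient it multiplies into.
```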
tanh
$$\tanh(x) = \frac{e^x - e^{-x}}{e^x + e^{-x}} \qquad \mathrm{sigmoid}(x) = \frac{1 + \tanh(x/2)}{2}$$
Derivative:
$$\frac{d}{dx}\tanh(x) = 1 - \tanh^2(x)$$
tanh hidden units
tanh has the same shape as sigmoid but has output range (-1, 1)
Results about the approximation capability of sigmoid networks also apply to tanh networks
Possible reason to prefer tanh over sigmoid: allowing hidden unit outputs to be positive or negative allows the gradients for the weights into a hidden unit to have different signs
Saturation is still a problem
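A small NumPy sketch of the tanh derivative plus a numerical check of the sigmoid identity from the previous slide:

```python
import numpy as np

def tanh_grad(x):
    return 1.0 - np.tanh(x) ** 2       # d/dx tanh(x) = 1 - tanh^2(x)

x = np.linspace(-3, 3, 7)
sigmoid = 1.0 / (1.0 + np.exp(-x))
identity = (1.0 + np.tanh(x / 2.0)) / 2.0
print(np.allclose(sigmoid, identity))  # True: sigmoid(x) = (1 + tanh(x/2)) / 2
```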
Rectified Linear Unit (ReLU)
$$\mathrm{relu}(x) = \max(0, x)$$
Derivative:
$$\frac{d}{dx}\mathrm{relu}(x) = \begin{cases} 0 & \text{if } x \le 0 \\ 1 & \text{if } x > 0 \end{cases}$$
ReLU hidden units
Similar approximation results to tanh and sigmoid hidden units
Empirical results for speech and vision show consistent improvements using relu over sigmoid or tanh
Unlike tanh or sigmoid there is no positive saturation (saturation results in very small derivatives, and hence slower learning)
Negative input to relu results in zero gradient (and hence no learning)
Relu is computationally efficient: max(0, x)
Relu units can die (i.e. respond with 0 to everything)
Relu units can be very sensitive to the learning rate
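A minimal NumPy sketch of relu and its derivative (the zero region of the derivative is exactly why units can die):

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def relu_grad(x):
    # 0 for x <= 0 (no learning signal), 1 for x > 0 (gradient passes through)
    return (x > 0).astype(float)

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(relu(x))       # [0.  0.  0.  0.5 2. ]
print(relu_grad(x))  # [0. 0. 0. 1. 1.]
```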
Summary
Understanding what single-layer networks compute
How multi-layer networks allow feature computation
Training multi-layer networks using back-propagation of error
Tanh and ReLU activation functions
Multi-layer networks are also referred to as deep neural networks or multi-layer perceptrons
Reading:
Nielsen, chapter 2
Goodfellow, sections 6.3, 6.4, 6.5
Bishop, sections 3.1, 3.2, and chapter 4