Computational Intelligence

Plan for Today: Single-Layer Perceptron (Accelerated Learning, Online- vs. Batch-Learning); Multi-Layer Perceptron (Model, Backpropagation).

Computational Intelligence (Winter Term) — Prof. Dr. Günter Rudolph, Lehrstuhl für Algorithm Engineering (LS 11), Fakultät für Informatik, TU Dortmund.

Single-Layer Perceptron (SLP): Acceleration of Perceptron Learning

Assumption: x ∈ {0,1}^n ⇒ ||x|| ≥ 1 for all x ≠ (0, ..., 0).
If the classification is incorrect, then w'x < 0. Consequently, the size of the error is just δ = −w'x > 0.
⇒ w_{t+1} = w_t + (δ + ε) x for small ε > 0 corrects the error in a single step, since
w'_{t+1} x = (w_t + (δ + ε) x)' x = w'_t x + (δ + ε) x'x = −δ + δ ||x||² + ε ||x||² = δ (||x||² − 1) + ε ||x||² > 0,
because ||x||² − 1 ≥ 0 and ε ||x||² > 0.

Generalization:
Assumption: x ∈ R^n ⇒ ||x|| > 0 for all x ≠ (0, ..., 0).
As before: w_{t+1} = w_t + (δ + ε) x for small ε > 0 and δ = −w'_t x > 0.
⇒ w'_{t+1} x = δ (||x||² − 1) + ε ||x||², which can be < 0 if ||x|| < 1!

Idea: Scaling of the data does not alter the classification task!
Let ℓ = min { ||x|| : x ∈ B } > 0 and set x̂ = x / ℓ ⇒ set of scaled examples B̂.
⇒ ||x̂|| ≥ 1 ⇒ ||x̂||² − 1 ≥ 0 ⇒ w'_{t+1} x̂ > 0.
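To make the correction rule concrete, here is a minimal Python/NumPy sketch (illustrative only, not taken from the slides; the function names are made up). It performs the single-step correction w_{t+1} = w_t + (δ + ε) x and the scaling x̂ = x / ℓ with ℓ = min { ||x|| : x ∈ B }.

import numpy as np

def accelerated_update(w, x, eps=0.1):
    """One accelerated correction step: if w'x < 0, add (delta + eps) * x,
    which classifies x correctly in a single step provided ||x|| >= 1
    (e.g. after scaling the training set as below)."""
    activation = w @ x
    if activation < 0:                # pattern x is misclassified
        delta = -activation           # size of the error, delta > 0
        w = w + (delta + eps) * x     # w_{t+1} = w_t + (delta + eps) x
    return w

def scale_examples(B):
    """Scale all examples by ell = min { ||x|| : x in B } so that ||x_hat|| >= 1."""
    B = np.asarray(B, dtype=float)
    ell = min(np.linalg.norm(x) for x in B)
    return B / ell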

Single-Layer Perceptron (SLP)

There exist numerous variants of perceptron learning methods.
Theorem (Duda & Hart 1973): If the rule for correcting the weights is w_{t+1} = w_t + γ_t x (if w'_t x < 0) with γ_t ≥ 0 for all t ≥ 0, then w_t → w* for t → ∞ with x'w* > 0 for all x ∈ B.
E.g.: γ_t = γ > 0, or γ_t = γ / (t+1) for γ > 0.

As yet: Online learning — update of the weights after each training pattern (if necessary).
Now: Batch learning — update of the weights only after a test of all training patterns.
Update rule: w_{t+1} = w_t + γ Σ_{x ∈ B : w'_t x < 0} x  (γ > 0).
Vague assessment in the literature: advantage: "usually faster"; disadvantage: "needs more memory" — just a single vector!

Single-Layer Perceptron (SLP): find weights by means of optimization

Let F(w) = { x ∈ B : w'x < 0 } be the set of patterns incorrectly classified by weight w.
Objective function: f(w) = − Σ_{x ∈ F(w)} w'x → min!
Optimum: f(w) = 0 iff F(w) is empty.
Possible approach: gradient method w_{t+1} = w_t − γ ∇f(w_t) (γ > 0), which converges to a local minimum (depending on w_0).

Gradient method: w_{t+1} = w_t − γ ∇f(w_t). The gradient points in the direction of steepest ascent of the function f(·); it is the vector of partial derivatives ∇f(w) = (∂f/∂w_1, ..., ∂f/∂w_n)'.
Caution: The indices i of w_i here denote components of the vector w; they are not iteration counters!
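A small sketch of batch learning seen as the update w_{t+1} = w_t + γ Σ_{x ∈ F(w_t)} x (assumptions: NumPy; all examples already brought into "positive" form so that w'x > 0 is the goal; names are illustrative):

import numpy as np

def batch_perceptron(B, gamma=0.1, max_iter=1000):
    """Batch learning = gradient method on f(w) = -sum_{x in F(w)} w'x.
    Assumes every row of B should satisfy w'x > 0 (negative examples
    already replaced by -x); ties (w'x = 0) count as misclassified here."""
    B = np.asarray(B, dtype=float)
    w = np.zeros(B.shape[1])
    for _ in range(max_iter):
        F = B[B @ w <= 0]                 # F(w): misclassified patterns
        if len(F) == 0:                   # f(w) = 0  <=>  F(w) empty
            break
        w = w + gamma * F.sum(axis=0)     # w_{t+1} = w_t + gamma * sum_{x in F} x
    return w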

Single-Layer Perceptron (SLP)

Gradient method, thus: ∇f(w) = − Σ_{x ∈ F(w)} x, and therefore w_{t+1} = w_t + γ Σ_{x ∈ F(w_t)} x ⇒ the gradient method coincides with batch learning.

How difficult is it (a) to find a separating hyperplane, provided it exists, and (b) to decide that there is no separating hyperplane?

Let B = P ∪ { −x : x ∈ N } (only positive examples), w_i ∈ R, θ ∈ R, |B| = m.
For every example x_i ∈ B the following should hold: x_{i1} w_1 + x_{i2} w_2 + ... + x_{in} w_n ≥ θ.
The trivial solution w_i = θ = 0 has to be excluded! Therefore, additionally, with η ∈ R:
x_{i1} w_1 + x_{i2} w_2 + ... + x_{in} w_n − θ − η ≥ 0.
Idea: maximize η; if η* > 0, then a solution has been found.

Matrix notation: Linear Programming Problem
f(z_1, z_2, ..., z_n, z_{n+1}, z_{n+2}) = z_{n+2} → max!  s.t.  Az ≥ 0.
If z_{n+2} = η > 0, then the weights and the threshold are given by z. Otherwise a separating hyperplane does not exist! The solution can be calculated, e.g., by Karmarkar's algorithm in polynomial time.

What can be achieved by adding a layer?
- Single-layer perceptron (SLP) ⇒ a hyperplane separates the space into two subspaces (N | P).
- Two-layer perceptron ⇒ arbitrary convex sets can be separated: the hyperplanes of the 1st layer are connected by an AND gate in the 2nd layer.
- Three-layer perceptron ⇒ arbitrary sets can be separated (depending on the number of neurons): several convex sets are representable by the 2nd layer, and these sets can be combined in the 3rd layer — the convex sets of the 2nd layer are connected by an OR gate in the 3rd layer.
⇒ More than 3 layers are not necessary!
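The LP formulation can be sketched with SciPy's linprog (an illustrative sketch, not the slides' exact matrix A: the threshold θ is folded in as an extra coordinate, and η is capped at 1 because otherwise the LP would be unbounded for separable data):

import numpy as np
from scipy.optimize import linprog

def separating_hyperplane(P, N):
    """Decide separability via linear programming (sketch).
    Examples from N are negated, so every constraint reads a_i' (w, theta) >= eta.
    Maximize eta; eta* > 0  <=>  a strictly separating hyperplane exists."""
    P, N = np.atleast_2d(P).astype(float), np.atleast_2d(N).astype(float)
    A_pos = np.hstack([P, -np.ones((len(P), 1))])   # x in P:  x'w - theta >= eta
    A_neg = np.hstack([-N, np.ones((len(N), 1))])   # x in N: -x'w + theta >= eta
    A = np.vstack([A_pos, A_neg])
    m, d = A.shape                                  # d = n + 1 variables for (w, theta)

    # variables z = (w_1, ..., w_n, theta, eta); maximize eta <=> minimize -eta
    c = np.zeros(d + 1)
    c[-1] = -1.0
    # A (w, theta) - eta >= 0   <=>   [-A | 1] z <= 0
    A_ub = np.hstack([-A, np.ones((m, 1))])
    b_ub = np.zeros(m)
    bounds = [(None, None)] * d + [(None, 1.0)]     # cap eta at 1

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    if not res.success:
        return None
    eta = -res.fun
    w, theta = res.x[:-2], res.x[-2]
    return (w, theta) if eta > 1e-9 else None       # eta* > 0: separable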

XOR with 3 neurons in 2 steps / XOR with 3 neurons in 2 layers
[Network diagrams with truth tables over the inputs x_1, x_2, the intermediate values y_1, y_2 and the output z; in the two-layer version the convex set is realized without an AND gate in the 2nd layer.]

XOR can be realized with only 2 neurons!
[Network diagram and truth table.] BUT: this is not a layered network (no MLP)!

Evidently: MLPs can be deployed for significantly more difficult problems than SLPs!
But: How can we adjust all these weights and thresholds? Is there an efficient learning algorithm for MLPs?
History: The unavailability of an efficient learning algorithm for MLPs was a major obstacle until Rumelhart, Hinton and Williams (1986): Backpropagation. It had actually been proposed by Werbos (1974), but remained unknown to ANN researchers (it was a PhD thesis).
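For concreteness, XOR with 3 threshold neurons in 2 layers can be wired by hand. The weights below are one possible choice (OR and AND in the hidden layer, "OR and not AND" at the output) and not necessarily the ones used on the slides:

import numpy as np

def step(s):
    """Threshold (Heaviside) activation: 1 if s >= 0, else 0."""
    return (s >= 0).astype(int)

def xor_two_layer(x1, x2):
    """XOR with 3 threshold neurons in 2 layers:
    y1 = x1 OR x2, y2 = x1 AND x2, z = y1 AND NOT y2."""
    x = np.array([x1, x2])
    y1 = step(np.array([1, 1]) @ x - 0.5)   # fires if at least one input is active
    y2 = step(np.array([1, 1]) @ x - 1.5)   # fires only if both inputs are active
    z = step(1 * y1 - 1 * y2 - 0.5)         # convex strip between the two hyperplanes
    return int(z)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor_two_layer(a, b))    # prints the XOR truth table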

Quantification of the classification error of an MLP

Total Sum Squared Error (TSSE): the squared differences between the output of the net for weights (w, u) and input x and the target output for input x, summed over all output neurons and all training patterns.
Total Mean Squared Error (TMSE): the TSSE divided by (# training patterns · # output neurons), i.e. a constant times the TSSE ⇒ leads to the same solution as the TSSE.

Learning algorithms for the Multi-Layer Perceptron (here: 2 layers)

Idea: minimize the error! f(w_t, u_t) = TSSE → min!
Gradient method: u_{t+1} = u_t − γ ∇_u f(w_t, u_t), w_{t+1} = w_t − γ ∇_w f(w_t, u_t).
[Network diagram: inputs x_1, ..., x_n; first-layer weights w_11, ..., w_nm; hidden values y_1, ..., y_m; second-layer weights u; outputs z.]
BUT: f(w, u) cannot be differentiated! Why? Because of the discontinuous activation function a(·) in the neurons (a step at the threshold θ).
Idea: find a smooth activation function similar to the original one!

Good idea: a sigmoid activation function (instead of the signum function), e.g. a(x) = 1 / (1 + e^{−(x − θ)}):
- monotone increasing,
- differentiable,
- non-linear,
- output in [0,1] instead of {0,1},
- threshold θ integrated in the activation function,
- values of the derivatives directly determinable from the function values: a'(x) = a(x) (1 − a(x)).

Gradient method (2 layers): f(w_t, u_t) = TSSE; u_{t+1} = u_t − γ ∇_u f(w_t, u_t), w_{t+1} = w_t − γ ∇_w f(w_t, u_t).
Notation: x_i are the inputs, y_j the values after the first layer, z_k the values after the second layer, with y_j = h(Σ_i w_ij x_i) and z_k = a(Σ_j u_jk y_j).
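A minimal sketch of the smoothed 2-layer model and the TSSE (assumptions: NumPy, sigmoid in both layers, thresholds/biases omitted for brevity; the function names are made up):

import numpy as np

def sigmoid(s):
    """Smooth replacement for the step function; a'(s) = a(s) * (1 - a(s))."""
    return 1.0 / (1.0 + np.exp(-s))

def forward(x, W, U):
    """Two-layer MLP: y_j = sigmoid(sum_i w_ij x_i), z_k = sigmoid(sum_j u_jk y_j).
    W has shape (n_inputs, n_hidden), U has shape (n_hidden, n_outputs)."""
    y = sigmoid(x @ W)
    z = sigmoid(y @ U)
    return y, z

def tsse(X, Z_target, W, U):
    """Total Sum Squared Error over all training patterns (x, z*) in B."""
    _, Z = forward(X, W, U)
    return np.sum((Z - Z_target) ** 2)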

Setup for the derivation (2 layers):
- output of neuron j after the 1st layer: y_j(x) = h(Σ_i w_ij x_i)
- output of neuron k after the 2nd layer: z_k(x) = a(Σ_j u_jk y_j(x))
- error of input x with target output z*: e(x; w, u) = Σ_k (z_k(x) − z*_k)², i.e. the squared difference between the output of the net and the target output for input x
- total error for all training patterns (x, z*) ∈ B (TSSE): E(w, u; B) = Σ_{(x, z*) ∈ B} e(x; w, u)

Gradient of the total error: ∇E = Σ_{(x, z*) ∈ B} ∇e(x; w, u), i.e. the vector of partial derivatives w.r.t. the weights u_jk and w_ij.
Assume differentiable activation functions h(·) and a(·) in both layers (e.g. sigmoids). Then, by the chain rule of differential calculus, each partial derivative is a product of an outer derivative and an inner derivative, for example
∂e / ∂u_jk = 2 (z_k − z*_k) · a'(Σ_j u_jk y_j) · y_j.
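As a sanity check of this chain-rule step, the partial derivatives ∂e/∂u_jk for a single pattern can be computed directly from the forward values (a sketch under the same assumptions as above; only the second-layer weights are treated here, the first layer follows in the backpropagation sketch below):

import numpy as np

def grad_u_single_pattern(x, z_star, W, U):
    """Partial derivatives d e / d u_jk for one pattern via the chain rule:
    outer derivative 2 (z_k - z*_k), inner derivative a'(net_k) * y_j,
    where a'(net_k) = z_k (1 - z_k) for the sigmoid."""
    y = 1.0 / (1.0 + np.exp(-(x @ W)))     # y_j = h(sum_i w_ij x_i)
    z = 1.0 / (1.0 + np.exp(-(y @ U)))     # z_k = a(sum_j u_jk y_j)
    outer = 2.0 * (z - z_star)             # d e / d z_k
    inner = z * (1.0 - z)                  # a'(net_k), from the function values
    return np.outer(y, outer * inner)      # entry (j, k) = d e / d u_jk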

Partial derivative w.r.t. u_jk (factors reordered):
∂e / ∂u_jk = 2 (z_k − z*_k) a'(Σ_j u_jk y_j) · y_j = δ_k y_j, with the error signal δ_k = 2 (z_k − z*_k) a'(Σ_j u_jk y_j).

Partial derivative w.r.t. w_ij (factors reordered):
∂e / ∂w_ij = Σ_k 2 (z_k − z*_k) a'(·) u_jk · h'(Σ_i w_ij x_i) · x_i = δ_j x_i,
with the error signal δ_j = h'(Σ_i w_ij x_i) Σ_k δ_k u_jk; here the error signals δ_k come from the previous (output) layer, and δ_j is the error signal of the current (hidden) layer.

Generalization (> 2 layers)

Let the neural network have L layers S_1, S_2, ..., S_L. Let the neurons of all layers be numbered from 1 to N. All weights w_ij are gathered in a weight matrix W. Let o_j be the output of neuron j.
Error signal: for j ∈ S_m (neuron j is in the m-th layer), the error signal of a neuron in an inner layer is determined by the error signals of all neurons of the subsequent layer and the weights of the associated connections.
⇒ First determine the error signals of the output neurons, use these error signals to calculate the error signals of the preceding layer, use those to calculate the error signals of the layer before that, and so forth, until the first inner layer is reached.
⇒ Thus the error is propagated backwards from the output layer to the first inner layer ⇒ backpropagation (of error).
Correction: in case of online learning, the weights are corrected after each training pattern presented.
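Putting the error signals together gives one backpropagation step for the 2-layer case (a sketch, assuming sigmoid units and no biases; γ and the shapes of W and U are as in the earlier sketches):

import numpy as np

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

def backprop_step(x, z_star, W, U, gamma=0.5):
    """One online backpropagation step for a two-layer MLP with sigmoid units.
    Error signals: delta_k for the output layer; delta_j for the hidden layer,
    computed from the delta_k of the subsequent layer and the weights u_jk."""
    # forward pass
    y = sigmoid(x @ W)                          # hidden outputs
    z = sigmoid(y @ U)                          # network outputs

    # error signals, propagated backwards
    delta_k = 2.0 * (z - z_star) * z * (1 - z)  # output layer
    delta_j = (U @ delta_k) * y * (1 - y)       # hidden layer: weighted sum of delta_k

    # gradient-descent corrections
    U_new = U - gamma * np.outer(y, delta_k)    # d e / d u_jk = delta_k * y_j
    W_new = W - gamma * np.outer(x, delta_j)    # d e / d w_ij = delta_j * x_i
    return W_new, U_new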

⇒ Other optimization algorithms are deployable as well! In addition to backpropagation (gradient descent), for example:

- Backpropagation with momentum: take into account also the previous change of the weights, e.g. Δw_ij(t) = −γ ∂E/∂w_ij + α Δw_ij(t−1).
- QuickProp: assumption: the error function can be approximated locally by a quadratic function; the update rule uses the last two weights, at steps t and t−1.
- Resilient Propagation (RPROP): exploits only the sign of the partial derivatives. The same sign two times in a row ⇒ increase the step size! A change of sign ⇒ reset the last step and decrease the step size! Typical values: factor for decreasing 0.5, factor for increasing 1.2.
- Evolutionary algorithms: individual = weight matrix. More about this later!
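A rough sketch of the RPROP idea (close to the "iRPROP−" variant: instead of reverting the last step after a sign change, the update for that weight is simply skipped in the current iteration; the factors 0.5 and 1.2 are the typical values mentioned above):

import numpy as np

def rprop_update(w, grad, prev_grad, step, eta_minus=0.5, eta_plus=1.2,
                 step_min=1e-6, step_max=50.0):
    """One RPROP update: only the sign of each partial derivative is used.
    Same sign twice in a row -> increase the step size (factor eta_plus),
    sign change              -> decrease the step size (factor eta_minus)
    and skip the update for that weight in this iteration."""
    sign_change = grad * prev_grad
    step = np.where(sign_change > 0, np.minimum(step * eta_plus, step_max), step)
    step = np.where(sign_change < 0, np.maximum(step * eta_minus, step_min), step)
    update = np.where(sign_change < 0, 0.0, -np.sign(grad) * step)
    w_new = w + update
    # after a sign change, the stored gradient is zeroed so the next step is neutral
    grad = np.where(sign_change < 0, 0.0, grad)
    return w_new, grad, step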