Introduction to Neural Networks


Christian Borgelt
Intelligent Data Analysis and Graphical Models Research Unit
European Center for Soft Computing
c/ Gonzalo Gutiérrez Quirós s/n, 33600 Mieres, Spain
christian.borgelt@softcomputing.es
http://www.borgelt.net/

Contents

- Introduction: Motivation, Biological Background
- Threshold Logic Units: Definition, Geometric Interpretation, Limitations, Networks of TLUs, Training
- General Neural Networks: Structure, Operation, Training
- Multilayer Perceptrons: Definition, Function Approximation, Gradient Descent, Backpropagation, Variants, Sensitivity Analysis
- Radial Basis Function Networks: Definition, Function Approximation, Initialization, Training, Generalized Version
- Self-Organizing Maps: Definition, Learning Vector Quantization, Neighborhood of Output Neurons
- Hopfield Networks: Definition, Convergence, Associative Memory, Solving Optimization Problems
- Recurrent Neural Networks: Differential Equations, Vector Networks, Backpropagation through Time

Motivation: Why (Artificial) Neural Networks?

(Neuro-)Biology / (Neuro-)Physiology / Psychology:
- Exploit similarity to real (biological) neural networks.
- Build models to understand nerve and brain operation by simulation.

Computer Science / Engineering / Economics:
- Mimic certain cognitive capabilities of human beings.
- Solve learning/adaptation, prediction, and optimization problems.

Physics / Chemistry:
- Use neural network models to describe physical phenomena.
- Special case: spin glasses (alloys of magnetic and non-magnetic metals).

Motivation: Why Neural Networks in AI?

Physical-Symbol System Hypothesis [Newell and Simon 1976]: A physical-symbol system has the necessary and sufficient means for general intelligent action.

Neural networks process simple signals, not symbols. So why study neural networks in Artificial Intelligence?
- Symbol-based representations work well for inference tasks, but are fairly bad for perception tasks.
- Symbol-based expert systems tend to get slower with growing knowledge, while human experts tend to get faster.
- Neural networks allow for highly parallel information processing.
- There are several successful applications in industry and finance.

Biological Background

Structure of a prototypical biological neuron (figure; labels: terminal button, synapse, dendrites, cell core, cell body (soma), axon, myelin sheath).

Biological Background

(Very) simplified description of neural information processing:
- Axon terminals release chemicals, called neurotransmitters.
- These act on the membrane of the receptor dendrite to change its polarization. (The inside is usually 70 mV more negative than the outside.)
- Decrease in potential difference: excitatory synapse. Increase in potential difference: inhibitory synapse.
- If there is enough net excitatory input, the axon is depolarized. The resulting action potential travels along the axon. (The speed depends on the degree to which the axon is covered with myelin.)
- When the action potential reaches the terminal buttons, it triggers the release of neurotransmitters.

Threshold Logic Units

Threshold Logic Units

A Threshold Logic Unit (TLU) is a processing unit for numbers with n inputs x_1, ..., x_n and one output y. The unit has a threshold θ and each input x_i is associated with a weight w_i. A threshold logic unit computes the function

    y = 1, if Σ_{i=1}^n w_i x_i ≥ θ,
    y = 0, otherwise.
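The TLU computation above can be sketched in Python (a minimal illustration; the function name and list-based interface are my own, not from the slides):

```python
# A minimal threshold logic unit (TLU): output 1 if the weighted
# input sum reaches the threshold, else 0.
def tlu(weights, threshold, inputs):
    """Compute the TLU output for one input vector."""
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1 if s >= threshold else 0
```

For example, with weights (3, 2) and threshold 4 this unit computes the conjunction of its two Boolean inputs.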

Threshold Logic Units: Examples

Threshold logic unit for the conjunction x_1 ∧ x_2 (weights w_1 = 3, w_2 = 2, threshold θ = 4):

    x_1  x_2 | 3x_1 + 2x_2 | y
     0    0  |      0      | 0
     1    0  |      3      | 0
     0    1  |      2      | 0
     1    1  |      5      | 1

Threshold logic unit for the implication x_2 → x_1 (weights w_1 = 2, w_2 = −2, threshold θ = −1):

    x_1  x_2 | 2x_1 − 2x_2 | y
     0    0  |      0      | 1
     1    0  |      2      | 1
     0    1  |     −2      | 0
     1    1  |      0      | 1

Threshold Logic Units: Examples

Threshold logic unit for (x_1 ∧ ¬x_2) ∨ (x_1 ∧ x_3) ∨ (¬x_2 ∧ x_3) (weights w_1 = 2, w_2 = −2, w_3 = 2, threshold θ = 1):

    x_1  x_2  x_3 | Σ_i w_i x_i | y
     0    0    0  |      0      | 0
     1    0    0  |      2      | 1
     0    1    0  |     −2      | 0
     1    1    0  |      0      | 0
     0    0    1  |      2      | 1
     1    0    1  |      4      | 1
     0    1    1  |      0      | 0
     1    1    1  |      2      | 1

Threshold Logic Units: Geometric Interpretation

Review of line representations: straight lines are usually represented in one of the following forms:

    Explicit form:        g ≡ x_2 = b x_1 + c
    Implicit form:        g ≡ a_1 x_1 + a_2 x_2 + d = 0
    Point-direction form: g ≡ x = p + k r
    Normal form:          g ≡ (x − p) · n = 0

with the parameters: b the gradient (slope) of the line, c the intercept on the x_2 axis, p the position vector of a point on the line (base vector), r a direction vector of the line, and n a normal vector of the line.

Threshold Logic Units: Geometric Interpretation

A straight line g and its defining parameters (figure; shows the normal vector n = (a_1, a_2), the base vector p, the angle φ, the distance d = p · n, and the foot point q = (d/|n|)(n/|n|)).

Threshold Logic Units: Geometric Interpretation

How to determine the side on which a point x lies (figure; project x onto the normal direction, z = (x · n)/|n|, and compare z with q = d/|n|).

Threshold Logic Units: Geometric Interpretation

Threshold logic unit for the conjunction x_1 ∧ x_2 and threshold logic unit for the implication x_2 → x_1 (figures; each shows the unit's separating line in the input space, with the 0-region and the 1-region on either side).

Threshold Logic Units: Geometric Interpretation

Visualization of 3-dimensional Boolean functions: the inputs are the corners of the unit cube from (0, 0, 0) to (1, 1, 1). For the threshold logic unit computing (x_1 ∧ ¬x_2) ∨ (x_1 ∧ x_3) ∨ (¬x_2 ∧ x_3), the figure shows the separating plane of the unit in this cube.

Threshold Logic Units: Limitations

The biimplication problem x_1 ↔ x_2: there is no separating line.

    x_1  x_2 | y
     0    0  | 1
     1    0  | 0
     0    1  | 0
     1    1  | 1

Formal proof by reductio ad absurdum:

    since (0, 0) ↦ 1:  0 ≥ θ,            (1)
    since (1, 0) ↦ 0:  w_1 < θ,          (2)
    since (0, 1) ↦ 0:  w_2 < θ,          (3)
    since (1, 1) ↦ 1:  w_1 + w_2 ≥ θ.    (4)

(2) and (3): w_1 + w_2 < 2θ. With (4): 2θ > θ, i.e. θ > 0. Contradiction to (1).

Threshold Logic Units: Limitations

Total number and number of linearly separable Boolean functions ([Widner 1960] as cited in [Zell 1994]):

    inputs | Boolean functions | linearly separable functions
       1   |                 4 |         4
       2   |                16 |        14
       3   |               256 |       104
       4   |             65536 |      1774
       5   |        ≈ 4.3·10^9 |     94572
       6   |       ≈ 1.8·10^19 | ≈ 5.0·10^6

For many inputs a threshold logic unit can compute almost no functions. Networks of threshold logic units are needed to overcome the limitations.

Networks of Threshold Logic Units

Solving the biimplication problem with a network. Idea: logical decomposition

    x_1 ↔ x_2  ≡  (x_1 → x_2) ∧ (x_2 → x_1)

The first hidden neuron computes y_1 = x_1 → x_2, the second hidden neuron computes y_2 = x_2 → x_1, and the output neuron computes y = y_1 ∧ y_2 = x_1 ↔ x_2.
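The decomposition can be checked in code. This sketch composes three TLUs; the concrete weights and thresholds are one consistent choice for the two implications and the conjunction (the slide's figure values are garbled in this transcription, so these numbers are illustrative):

```python
# A two-layer network of TLUs for the biimplication x1 <-> x2,
# following the decomposition (x1 -> x2) and (x2 -> x1).
def tlu(weights, threshold, inputs):
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1 if s >= threshold else 0

def biimplication(x1, x2):
    y1 = tlu([-2, 2], -1, [x1, x2])   # hidden neuron: x1 -> x2
    y2 = tlu([2, -2], -1, [x1, x2])   # hidden neuron: x2 -> x1
    return tlu([2, 2], 3, [y1, y2])   # output neuron: y1 and y2
```

The first layer maps the four input points to new Boolean coordinates (y_1, y_2), in which the problem becomes linearly separable.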

Networks of Threshold Logic Units

Solving the biimplication problem: geometric interpretation (figure; the input space with the two separating lines g_1 and g_2 of the hidden neurons, and the transformed space with the separating line g_3 of the output neuron). The first layer computes new Boolean coordinates for the points. After the coordinate transformation the problem is linearly separable.

Representing Arbitrary Boolean Functions

Let y = f(x_1, ..., x_n) be a Boolean function of n variables.

(i) Represent f(x_1, ..., x_n) in disjunctive normal form. That is, determine D_f = K_1 ∨ ... ∨ K_m, where all K_j are conjunctions of n literals, i.e., K_j = l_{j1} ∧ ... ∧ l_{jn} with l_{ji} = x_i (positive literal) or l_{ji} = ¬x_i (negative literal).

(ii) Create a neuron for each conjunction K_j of the disjunctive normal form (having n inputs, one input for each variable), where

    w_{ji} = 2, if l_{ji} = x_i,    w_{ji} = −2, if l_{ji} = ¬x_i,

    and θ_j = n − 1 + (1/2) Σ_{i=1}^n w_{ji}.

(iii) Create an output neuron (having m inputs, one input for each neuron that was created in step (ii)), where

    w_{(n+1)k} = 2, k = 1, ..., m, and θ_{n+1} = 1.
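The three-step construction can be sketched as follows (illustrative helper names; a conjunction is given as a mapping from variable index to literal sign, and every variable must appear, as the recipe requires conjunctions of n literals):

```python
# Build a two-layer TLU network from a disjunctive normal form:
# one hidden neuron per conjunction (weights +/-2, threshold
# n - 1 + (1/2) * sum of weights), one output neuron (weights 2, threshold 1).
def dnf_network(n, conjunctions):
    """conjunctions: list of dicts {var_index: True/False}
    (True = positive literal, False = negative literal).
    Returns a function computing the represented Boolean function."""
    units = []
    for conj in conjunctions:
        w = [0] * n
        for i, positive in conj.items():
            w[i] = 2 if positive else -2
        theta = n - 1 + sum(w) / 2
        units.append((w, theta))

    def f(x):
        hidden = [1 if sum(wi * xi for wi, xi in zip(w, x)) >= th else 0
                  for w, th in units]
        # output neuron: all weights 2, threshold 1 (fires if any K_j fires)
        return 1 if sum(2 * h for h in hidden) >= 1 else 0

    return f
```

For example, the biimplication has the DNF (x_1 ∧ x_2) ∨ (¬x_1 ∧ ¬x_2), which this construction turns into a working two-layer network.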

Training Threshold Logic Units

Training Threshold Logic Units

Geometric interpretation provides a way to construct threshold logic units with 2 and 3 inputs, but:
- It is not an automatic method (human visualization is needed).
- It is not feasible for more than 3 inputs.

General idea of automatic training:
- Start with random values for weights and threshold.
- Determine the error of the output for a set of training patterns.
- The error is a function of the weights and the threshold: e = e(w_1, ..., w_n, θ).
- Adapt weights and threshold so that the error gets smaller.
- Iterate the adaptation until the error vanishes.

Training Threshold Logic Units

Single-input threshold logic unit for the negation ¬x: desired behavior x = 0 ↦ y = 1 and x = 1 ↦ y = 0.

Output error as a function of weight and threshold (figures; the error for x = 0, the error for x = 1, and the sum of errors over the (w, θ) plane).

Training Threshold Logic Units

The error function cannot be used directly, because it consists of plateaus. Solution: if the computed output is wrong, take into account how far the weighted sum is from the threshold.

Modified output error as a function of weight and threshold (figures; the error for x = 0, the error for x = 1, and the sum of errors over the (w, θ) plane).

Training Threshold Logic Units

Schemata of the resulting directions of parameter changes (figures; the changes for x = 0, the changes for x = 1, and the sum of changes over the (w, θ) plane). Start at a random point. Iteratively adapt the parameters according to the direction corresponding to the current point.

Training Threshold Logic Units

Example training procedure: online and batch training (figures; the parameter trajectories for online training and for batch training in the (w, θ) plane, and the error surface traversed during batch training, together with the resulting unit and its separating point on the x axis).

Training Threshold Logic Units: Delta Rule

Formal training rule: Let x = (x_1, ..., x_n) be an input vector of a threshold logic unit, o the desired output for this input vector, and y the actual output of the threshold logic unit. If y ≠ o, then the threshold θ and the weight vector w = (w_1, ..., w_n) are adapted as follows in order to reduce the error:

    θ^(new) = θ^(old) + Δθ      with  Δθ = −η(o − y),
    ∀i ∈ {1, ..., n}:  w_i^(new) = w_i^(old) + Δw_i      with  Δw_i = η(o − y) x_i,

where η is a parameter that is called learning rate. It determines the severity of the weight changes. This procedure is called Delta Rule or Widrow–Hoff Procedure [Widrow and Hoff 1960].

Online training: adapt parameters after each training pattern.
Batch training: adapt parameters only at the end of each epoch, i.e. after a traversal of all training patterns.

Training Threshold Logic Units: Delta Rule

Turning the threshold value into a weight: add a fixed extra input x_0 = 1 with weight w_0 = −θ. The condition

    Σ_{i=1}^n w_i x_i ≥ θ      becomes      Σ_{i=0}^n w_i x_i ≥ 0,

so the threshold can be trained like an ordinary weight.

Training Threshold Logic Units: Delta Rule

procedure online_training (var w, var θ, L, η);
var y, e;                          (* output, sum of errors *)
begin
  repeat
    e := 0;                        (* initialize the error sum *)
    for all (x, o) ∈ L do begin    (* traverse the patterns *)
      if (w · x ≥ θ) then y := 1;  (* compute the output *)
                     else y := 0;  (* of the threshold logic unit *)
      if (y ≠ o) then begin        (* if the output is wrong *)
        θ := θ − η(o − y);         (* adapt the threshold *)
        w := w + η(o − y) x;       (* and the weights *)
        e := e + |o − y|;          (* sum the errors *)
      end;
    end;
  until (e ≤ 0);                   (* repeat the computations *)
end;                               (* until the error vanishes *)

Training Threshold Logic Units: Delta Rule

procedure batch_training (var w, var θ, L, η);
var y, e,                          (* output, sum of errors *)
    θ_c, w_c;                      (* summed changes *)
begin
  repeat
    e := 0; θ_c := 0; w_c := 0;    (* initializations *)
    for all (x, o) ∈ L do begin    (* traverse the patterns *)
      if (w · x ≥ θ) then y := 1;  (* compute the output *)
                     else y := 0;  (* of the threshold logic unit *)
      if (y ≠ o) then begin        (* if the output is wrong *)
        θ_c := θ_c − η(o − y);     (* sum the changes of the *)
        w_c := w_c + η(o − y) x;   (* threshold and the weights *)
        e := e + |o − y|;          (* sum the errors *)
      end;
    end;
    θ := θ + θ_c;                  (* adapt the threshold *)
    w := w + w_c;                  (* and the weights *)
  until (e ≤ 0);                   (* repeat the computations *)
end;                               (* until the error vanishes *)
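The online-training procedure translates almost line by line into Python. A `max_epochs` guard is added here (my own addition), because the loop never terminates for problems that are not linearly separable:

```python
# Online delta-rule training of a single threshold logic unit.
# L is a list of (input_vector, desired_output) pairs; eta is the learning rate.
def train_online(w, theta, L, eta, max_epochs=1000):
    for _ in range(max_epochs):
        error = 0
        for x, o in L:
            y = 1 if sum(wi * xi for wi, xi in zip(w, x)) >= theta else 0
            if y != o:                                       # output is wrong
                theta -= eta * (o - y)                       # adapt the threshold
                w = [wi + eta * (o - y) * xi for wi, xi in zip(w, x)]
                error += abs(o - y)                          # sum the errors
        if error == 0:                                       # error vanished
            return w, theta
    return w, theta  # not linearly separable (or max_epochs too small)
```

Training the conjunction from a zero initialization with η = 1 converges after a few epochs to a separating threshold and weights.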

Training Threshold Logic Units: Online

Example: online training of the threshold logic unit for the negation ¬x with learning rate η = 1. The table (garbled in this transcription) traces, for each epoch and training pattern, the input x, the desired output o, the network input, the actual output y, the error e, and the resulting changes Δθ and Δw, until the error vanishes and a solution with θ ≤ 0 and w < θ is reached.

Training Threshold Logic Units: Batch

The same training task with batch training: the changes of θ and w are accumulated over each epoch and applied only at its end; the table (garbled in this transcription) traces the epochs until the error vanishes.

Training Threshold Logic Units: Conjunction

Threshold logic unit with two inputs for the conjunction y = x_1 ∧ x_2:

    x_1  x_2 | y
     0    0  | 0
     1    0  | 0
     0    1  | 0
     1    1  | 1

The figure shows the trained unit and its separating line in the input space.

Training Threshold Logic Units: Conjunction

The table (garbled in this transcription) traces online training of the conjunction unit with the delta rule over several epochs until the error vanishes; the resulting threshold and weights solve the task.

Training Threshold Logic Units: Biimplication

Attempting to train a single threshold logic unit for the biimplication x_1 ↔ x_2: since the problem is not linearly separable, the error never vanishes; the table (garbled in this transcription) shows the parameters repeating cyclically, so training does not terminate.

Training Threshold Logic Units: Convergence

Convergence Theorem: Let L = {(x_1, o_1), ..., (x_m, o_m)} be a set of training patterns, each consisting of an input vector x_i ∈ IR^n and a desired output o_i ∈ {0, 1}. Furthermore, let L_0 = {(x, o) ∈ L | o = 0} and L_1 = {(x, o) ∈ L | o = 1}. If L_0 and L_1 are linearly separable, i.e., if w ∈ IR^n and θ ∈ IR exist such that

    ∀(x, 0) ∈ L_0: w · x < θ      and      ∀(x, 1) ∈ L_1: w · x ≥ θ,

then online as well as batch training terminate.

The algorithms terminate only when the error vanishes. Therefore the resulting threshold and weights must solve the problem. For problems that are not linearly separable the algorithms do not terminate.

Training Networks of Threshold Logic Units

Single threshold logic units have strong limitations: they can only compute linearly separable functions. Networks of threshold logic units can compute arbitrary Boolean functions. Training single threshold logic units with the delta rule is fast and guaranteed to find a solution if one exists. Networks of threshold logic units, however, cannot be trained with the delta rule, because
- there are no desired values for the neurons of the first layer, and
- the problem can usually be solved with several different functions computed by the neurons of the first layer.

When this situation became clear, neural networks were seen as a research dead end.

General (Artificial) Neural Networks

General Neural Networks

Basic graph theoretic notions:

A (directed) graph is a pair G = (V, E) consisting of a (finite) set V of nodes or vertices and a (finite) set E ⊆ V × V of edges. We call an edge e = (u, v) ∈ E directed from node u to node v.

Let G = (V, E) be a (directed) graph and u ∈ V a node. Then the nodes of the set pred(u) = {v ∈ V | (v, u) ∈ E} are called the predecessors of the node u and the nodes of the set succ(u) = {v ∈ V | (u, v) ∈ E} are called the successors of the node u.

General Neural Networks

General definition of a neural network:

An (artificial) neural network is a (directed) graph G = (U, C), whose nodes u ∈ U are called neurons or units and whose edges c ∈ C are called connections. The set U of nodes is partitioned into the set U_in of input neurons, the set U_out of output neurons, and the set U_hidden of hidden neurons. It is

    U = U_in ∪ U_out ∪ U_hidden,
    U_in ≠ ∅, U_out ≠ ∅, U_hidden ∩ (U_in ∪ U_out) = ∅.

General Neural Networks

Each connection (v, u) ∈ C possesses a weight w_uv and each neuron u ∈ U possesses three (real-valued) state variables: the network input net_u, the activation act_u, and the output out_u. Each input neuron u ∈ U_in also possesses a fourth (real-valued) state variable, the external input ex_u.

Furthermore, each neuron u ∈ U possesses three functions:

    the network input function  f_net^(u): IR^(2|pred(u)| + κ_1(u)) → IR,
    the activation function     f_act^(u): IR^(κ_2(u)) → IR,  and
    the output function         f_out^(u): IR → IR,

which are used to compute the values of the state variables.

General Neural Networks

Types of (artificial) neural networks:
- If the graph of a neural network is acyclic, it is called a feed-forward network.
- If the graph of a neural network contains cycles (backward connections), it is called a recurrent network.

Representation of the connection weights by a matrix over the neurons u_1, ..., u_r: the entry in row i and column j is the weight w_{u_i u_j} of the connection from u_j to u_i (0 if there is no such connection).

General Neural Networks: Example

A simple recurrent neural network: the external inputs x_1 and x_2 are fed into the neurons u_1 and u_2, and the output y is taken from the neuron u_3, which has a backward connection to u_1.

Weight matrix of this network (rows: to, columns: from; order u_1, u_2, u_3):

     0  0  4
     1  0  0
    −2  3  0

Structure of a Generalized Neuron

A generalized neuron is a simple numeric processor (figure): the outputs out_{v_1}, ..., out_{v_n} of the predecessor neurons enter as inputs in_{uv_1}, ..., in_{uv_n} with weights w_{uv_1}, ..., w_{uv_n}; together with the external input ex_u they are combined by the network input function f_net^(u) (with parameters σ_1, ..., σ_l) into the network input net_u, from which the activation function f_act^(u) (with parameters θ_1, ..., θ_k) computes the activation act_u, and the output function f_out^(u) computes the output out_u.

General Neural Networks: Example

For the example network, the functions of the neurons are:

    f_net^(u)(w_u, in_u) = Σ_{v ∈ pred(u)} w_uv in_uv = Σ_{v ∈ pred(u)} w_uv out_v,

    f_act^(u)(net_u, θ) = 1, if net_u ≥ θ,
                          0, otherwise,

    f_out^(u)(act_u) = act_u.

General Neural Networks: Example

Updating the activations of the neurons (table garbled in this transcription): in the input phase the external inputs are copied into the activations; in the work phase the neurons recompute their network inputs and activations in the order u_3, u_1, u_2, u_3, u_1, u_2, u_3, ... For this input pattern a stable state with a unique output is reached.

General Neural Networks: Example

Updating the activations of the neurons for a second input pattern (table garbled in this transcription), with the same update order u_3, u_1, u_2, u_3, u_1, u_2, u_3, ...: no stable state is reached, the output oscillates.

General Neural Networks: Training

Definition of learning tasks for a neural network:

A fixed learning task L_fixed for a neural network with n input neurons, i.e. U_in = {u_1, ..., u_n}, and m output neurons, i.e. U_out = {v_1, ..., v_m}, is a set of training patterns l = (ı^(l), o^(l)), each consisting of an input vector ı^(l) = (ex_{u_1}^(l), ..., ex_{u_n}^(l)) and an output vector o^(l) = (o_{v_1}^(l), ..., o_{v_m}^(l)).

A fixed learning task is solved if, for all training patterns l ∈ L_fixed, the neural network computes from the external inputs contained in the input vector ı^(l) of a training pattern l the outputs contained in the corresponding output vector o^(l).

General Neural Networks: Training

Solving a fixed learning task: error definition
- Measure how well a neural network solves a given fixed learning task.
- Compute differences between desired and actual outputs.
- Do not sum the differences directly, in order to avoid errors canceling each other.
- The square has favorable properties for deriving the adaptation rules.

    e = Σ_{l ∈ L_fixed} e^(l) = Σ_{v ∈ U_out} e_v = Σ_{l ∈ L_fixed} Σ_{v ∈ U_out} e_v^(l),

    where e_v^(l) = ( o_v^(l) − out_v^(l) )².

General Neural Networks: Training

Definition of learning tasks for a neural network:

A free learning task L_free for a neural network with n input neurons, i.e. U_in = {u_1, ..., u_n}, is a set of training patterns l = (ı^(l)), each consisting of an input vector ı^(l) = (ex_{u_1}^(l), ..., ex_{u_n}^(l)).

Properties: There is no desired output for the training patterns. Outputs can be chosen freely by the training method. Solution idea: similar inputs should lead to similar outputs (clustering of input vectors).

General Neural Networks: Preprocessing

Normalization of the input vectors: compute the expected value and the standard deviation for each input,

    µ_k = (1/|L|) Σ_{l ∈ L} ex_{u_k}^(l)      and      σ_k = sqrt( (1/|L|) Σ_{l ∈ L} ( ex_{u_k}^(l) − µ_k )² ),

and normalize the input vectors to expected value 0 and standard deviation 1:

    ex_{u_k}^(l)(new) = ( ex_{u_k}^(l)(old) − µ_k ) / σ_k.

This avoids unit and scaling problems.
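This normalization step is easy to transcribe directly (plain lists, no libraries; the function name is illustrative):

```python
import math

# z-score normalization of the external inputs: per input dimension,
# subtract the mean and divide by the standard deviation.
def normalize(data):
    """data: list of input vectors (lists of equal length).
    Returns normalized copies with mean 0 and standard deviation 1."""
    n = len(data)
    dims = len(data[0])
    means = [sum(row[k] for row in data) / n for k in range(dims)]
    stds = [math.sqrt(sum((row[k] - means[k]) ** 2 for row in data) / n)
            for k in range(dims)]
    return [[(row[k] - means[k]) / stds[k] for k in range(dims)]
            for row in data]
```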

Multilayer Perceptrons (MLPs)

Multilayer Perceptrons

An r-layer perceptron is a neural network with a graph G = (U, C) that satisfies the following conditions:

(i) U_in ∩ U_out = ∅,

(ii) U_hidden = U_hidden^(1) ∪ ... ∪ U_hidden^(r−2), and ∀1 ≤ i < j ≤ r − 2: U_hidden^(i) ∩ U_hidden^(j) = ∅,

(iii) C ⊆ ( U_in × U_hidden^(1) ) ∪ ( ∪_{i=1}^{r−3} U_hidden^(i) × U_hidden^(i+1) ) ∪ ( U_hidden^(r−2) × U_out ),

or, if there are no hidden neurons (r = 2, U_hidden = ∅), C ⊆ U_in × U_out.

That is, a multilayer perceptron is a feed-forward network with a strictly layered structure.

Multilayer Perceptrons

General structure of a multilayer perceptron (figure): the inputs x_1, ..., x_n enter at the input layer U_in, are processed through the hidden layers U_hidden^(1), U_hidden^(2), ..., U_hidden^(r−2), and the outputs y_1, ..., y_m leave at the output layer U_out.

Multilayer Perceptrons

The network input function of each hidden neuron and of each output neuron is the weighted sum of its inputs, i.e.

    ∀u ∈ U_hidden ∪ U_out:  f_net^(u)(w_u, in_u) = w_u · in_u = Σ_{v ∈ pred(u)} w_uv out_v.

The activation function of each hidden neuron is a so-called sigmoid function, i.e. a monotonically increasing function

    f: IR → [0, 1]  with  lim_{x→−∞} f(x) = 0  and  lim_{x→∞} f(x) = 1.

The activation function of each output neuron is either also a sigmoid function or a linear function, i.e.

    f_act(net, θ) = α net − θ.

Sigmoid Activation Functions

    step function:          f_act(net, θ) = 1, if net ≥ θ;  0, otherwise.

    semi-linear function:   f_act(net, θ) = 1, if net > θ + 1/2;  0, if net < θ − 1/2;  (net − θ) + 1/2, otherwise.

    sine until saturation:  f_act(net, θ) = 1, if net > θ + π/2;  0, if net < θ − π/2;  (sin(net − θ) + 1)/2, otherwise.

    logistic function:      f_act(net, θ) = 1 / (1 + e^(−(net − θ)))
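These four unipolar sigmoid families can be written down directly as plain functions of the network input net and the threshold θ (function names are mine):

```python
import math

# The four unipolar sigmoid activation functions, all ranging in [0, 1].
def step(net, theta):
    return 1.0 if net >= theta else 0.0

def semi_linear(net, theta):
    if net > theta + 0.5:
        return 1.0
    if net < theta - 0.5:
        return 0.0
    return (net - theta) + 0.5

def sine_saturation(net, theta):
    if net > theta + math.pi / 2:
        return 1.0
    if net < theta - math.pi / 2:
        return 0.0
    return (math.sin(net - theta) + 1.0) / 2.0

def logistic(net, theta):
    return 1.0 / (1.0 + math.exp(-(net - theta)))
```

All four take the value 1/2 at net = θ, which is where they cross between the 0-region and the 1-region.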

Sigmoid Activation Functions

All sigmoid functions on the previous slide are unipolar, i.e., they range from 0 to 1. Sometimes bipolar sigmoid functions (ranging from −1 to 1) are used, like the tangens hyperbolicus (hyperbolic tangent):

    f_act(net, θ) = tanh(net − θ) = 2 / (1 + e^(−2(net − θ))) − 1.

Multilayer Perceptrons: Weight Matrices

Let U_1 = {v_1, ..., v_m} and U_2 = {u_1, ..., u_n} be the neurons of two consecutive layers of a multilayer perceptron. Their connection weights are represented by an n × m matrix

        ( w_{u_1 v_1}  w_{u_1 v_2}  ...  w_{u_1 v_m} )
    W = ( w_{u_2 v_1}  w_{u_2 v_2}  ...  w_{u_2 v_m} )
        (     ...          ...      ...      ...     )
        ( w_{u_n v_1}  w_{u_n v_2}  ...  w_{u_n v_m} )

where w_{u_i v_j} = 0 if there is no connection from neuron v_j to neuron u_i.

Advantage: the computation of the network input can be written as

    net_{U_2} = W · in_{U_2} = W · out_{U_1},

where net_{U_2} = (net_{u_1}, ..., net_{u_n}) and in_{U_2} = out_{U_1} = (out_{v_1}, ..., out_{v_m}).
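The matrix form of the network input is just a matrix-vector product; a minimal sketch with nested lists (no linear-algebra library; the helper name is illustrative):

```python
# Layer computation net_U2 = W * out_U1: W is a list of rows, one row of
# incoming weights per neuron of the next layer.
def layer_net(W, out_prev):
    return [sum(w * o for w, o in zip(row, out_prev)) for row in W]
```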

Multilayer Perceptrons: Biimplication

Solving the biimplication problem with a multilayer perceptron (figure: inputs x_1, x_2, two hidden neurons, one output neuron with threshold 3). Note the additional input neurons compared to the TLU solution. One consistent choice of weight matrices is

    W_1 = ( −2   2 )      and      W_2 = ( 2  2 ).
          (  2  −2 )

Multilayer Perceptrons: Fredkin Gate

The Fredkin gate (controlled swap) has inputs s, x_1, x_2 and outputs s, y_1, y_2: the switch variable s is passed through unchanged; for s = 0 the data inputs are passed straight through (y_1 = x_1, y_2 = x_2), for s = 1 they are swapped (y_1 = x_2, y_2 = x_1):

    s  x_1  x_2 | y_1  y_2
    0   0    0  |  0    0
    0   0    1  |  0    1
    0   1    0  |  1    0
    0   1    1  |  1    1
    1   0    0  |  0    0
    1   0    1  |  1    0
    1   1    0  |  0    1
    1   1    1  |  1    1

Thus y_1 = (¬s ∧ x_1) ∨ (s ∧ x_2) and y_2 = (¬s ∧ x_2) ∨ (s ∧ x_1); the figures show these two functions on the unit cube.

Multilayer Perceptrons: Fredkin Gate

A multilayer perceptron computing the two data outputs of the Fredkin gate (figure: inputs x_1, s, x_2; four hidden neurons, one per conjunction of the two disjunctive normal forms; two output neurons y_1, y_2). The weight matrices W_1 (4 × 3) and W_2 (2 × 4) are garbled in this transcription; they follow the construction for representing arbitrary Boolean functions, with weights ±2 in W_1 and weights 2 (or 0) in W_2.

Why Non-linear Activation Functions?

With weight matrices we have for two consecutive layers U_1 and U_2

    net_{U_2} = W · in_{U_2} = W · out_{U_1}.

If the activation functions are linear, i.e.,

    f_act(net, θ) = α net − θ,

the activations of the neurons in the layer U_2 can be computed as

    act_{U_2} = D_act · net_{U_2} − θ,

where act_{U_2} = (act_{u_1}, ..., act_{u_n}) is the activation vector, D_act is an n × n diagonal matrix of the factors α_{u_i}, i = 1, ..., n, and θ = (θ_{u_1}, ..., θ_{u_n}) is a bias vector.

Why Non-linear Activation Functions?

If the output function is also linear, it is analogously

    out_{U_2} = D_out · act_{U_2} − ξ,

where out_{U_2} = (out_{u_1}, ..., out_{u_n}) is the output vector, D_out is again an n × n diagonal matrix of factors, and ξ = (ξ_{u_1}, ..., ξ_{u_n}) is a bias vector. Combining these computations we get

    out_{U_2} = D_out · ( D_act · ( W · out_{U_1} ) − θ ) − ξ

and thus

    out_{U_2} = A_{12} · out_{U_1} + b_{12}

with an n × m matrix A_{12} and an n-dimensional vector b_{12}.

Why Non-linear Activation Functions?

Therefore we have

    out_{U_2} = A_{12} · out_{U_1} + b_{12}      and      out_{U_3} = A_{23} · out_{U_2} + b_{23}

for the computations of two consecutive layers U_2 and U_3. These two computations can be combined into

    out_{U_3} = A_{13} · out_{U_1} + b_{13},

where A_{13} = A_{23} · A_{12} and b_{13} = A_{23} · b_{12} + b_{23}.

Result: With linear activation and output functions any multilayer perceptron can be reduced to a two-layer perceptron.
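The reduction can be verified numerically on a small example; the concrete matrices below are arbitrary illustrative values, and the helper functions are my own:

```python
# Numeric check: composing two affine layers out = A @ x + b yields a single
# affine layer with A13 = A23 * A12 and b13 = A23 * b12 + b23.
def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def matmat(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def affine(A, b, x):
    return [v + bi for v, bi in zip(matvec(A, x), b)]

A12, b12 = [[1.0, 2.0], [0.0, 1.0]], [1.0, -1.0]
A23, b23 = [[2.0, 0.0], [1.0, 1.0]], [0.5, 0.0]

A13 = matmat(A23, A12)                                  # combined matrix
b13 = [v + bi for v, bi in zip(matvec(A23, b12), b23)]  # combined bias

x = [3.0, -2.0]
two_layer = affine(A23, b23, affine(A12, b12, x))
one_layer = affine(A13, b13, x)
```

Both computations produce the same output vector, which is exactly why non-linear activation functions are needed to gain anything from extra layers.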

Multilayer Perceptrons: Function Approximation

General idea of function approximation: approximate a given function by a step function, then construct a neural network that computes the step function (figure: a function y(x), cut at the points x_1, ..., x_4 and replaced by the constant values y_0, ..., y_4 on the resulting intervals).

Multilayer Perceptrons: Function Approximation

[network diagram: a four-layer perceptron computing the step function; one hidden threshold neuron per cut point x1, ..., x4, a second hidden layer that singles out the individual intervals, and an output neuron with the identity as activation function that sums the step heights y1, y2, y3]

Christian Borgelt Introduction to Neural Networks 66

Multilayer Perceptrons: Function Approximation Theorem: Any Riemann-integrable function can be approximated with arbitrary accuracy by a four-layer perceptron. But: Error is measured as the area between the functions. More sophisticated mathematical examination allows a stronger assertion: With a three-layer perceptron any continuous function can be approximated with arbitrary accuracy (error: maximum function value difference). Christian Borgelt Introduction to Neural Networks 67

Multilayer Perceptrons: Function Approximation

[figure: the staircase approximation written with relative step heights Δy_i = y_i − y_{i−1}, i = 1, ..., 4, which are summed by the output neuron]

Christian Borgelt Introduction to Neural Networks 68

Multilayer Perceptrons: Function Approximation

[network diagram: a three-layer perceptron computing the staircase; one hidden threshold neuron per cut point x1, ..., x4 and an output neuron with identity activation whose connection weights are the relative step heights Δy1, ..., Δy4]

Christian Borgelt Introduction to Neural Networks 69

Multilayer Perceptrons: Function Approximation

[figure: the staircase approximation with relative step heights Δy1, ..., Δy4, shown together with the individual weighted step functions computed by the hidden neurons]

Christian Borgelt Introduction to Neural Networks 70

Multilayer Perceptrons: Function Approximation

[network diagram: hidden threshold neurons with biases θ_i = x_i; the output neuron (identity activation) sums their outputs weighted with the relative step heights Δy_i = y_i − y_{i−1}]

Christian Borgelt Introduction to Neural Networks 71
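The construction above can be checked directly in code. The sketch below is an illustration (not part of the lecture material): it builds the network function of the three-layer staircase network with Heaviside hidden units, thresholds θ_i = x_i, and output weights Δy_i = y_i − y_{i−1}.

```python
def step_approximator(xs, ys):
    """Network function of the staircase construction: one hidden
    threshold unit per cut point x_i (it fires iff x >= x_i); the
    identity output unit sums the relative step heights."""
    deltas = [ys[i] - (ys[i - 1] if i > 0 else 0.0) for i in range(len(ys))]
    def net(x):
        # sum of Delta y_i over all hidden units whose threshold is passed
        return sum(d for xi, d in zip(xs, deltas) if x >= xi)
    return net

# approximate f(x) = x^2 on [0, 2] with cut points every 0.5
xs = [0.0, 0.5, 1.0, 1.5]
ys = [x * x for x in xs]
f = step_approximator(xs, ys)
# on [x_i, x_{i+1}) the network outputs exactly the step height y_i
```

The network value on each interval [x_i, x_{i+1}) equals the sampled function value y_i, exactly as in the staircase figure.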

Mathematical Background: Regression Christian Borgelt Introduction to Neural Networks 7

Mathematical Background: Linear Regression

Training neural networks is closely related to regression.

Given: A dataset ((x1, y1), ..., (xn, yn)) of n data tuples and a hypothesis about the functional relationship, e.g. y = g(x) = a + bx.

Approach: Minimize the sum of squared errors, i.e.

    F(a, b) = Σ_{i=1}^{n} (g(x_i) − y_i)² = Σ_{i=1}^{n} (a + b·x_i − y_i)².

Necessary conditions for a minimum:

    ∂F/∂a = Σ_{i=1}^{n} 2(a + b·x_i − y_i) = 0   and
    ∂F/∂b = Σ_{i=1}^{n} 2(a + b·x_i − y_i)·x_i = 0

Christian Borgelt Introduction to Neural Networks 73

Mathematical Background: Linear Regression

Result of the necessary conditions: a system of so-called normal equations, i.e.

    n·a + (Σ_{i=1}^{n} x_i)·b = Σ_{i=1}^{n} y_i,
    (Σ_{i=1}^{n} x_i)·a + (Σ_{i=1}^{n} x_i²)·b = Σ_{i=1}^{n} x_i·y_i.

Two linear equations for two unknowns a and b.

The system can be solved with standard methods from linear algebra. The solution is unique unless all x-values are identical. The resulting line is called a regression line.

Christian Borgelt Introduction to Neural Networks 74
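The two normal equations can be solved in closed form. A minimal sketch (function name and variable names are illustrative; the data are those of the regression-line example that the lecture uses):

```python
def linear_regression(xs, ys):
    """Solve the two normal equations for y = a + b*x in closed form."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    # eliminate a from the two normal equations to get b, then back-substitute
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return a, b

xs = list(range(1, 9))            # x = 1, ..., 8
ys = [1, 3, 2, 3, 4, 3, 5, 6]
a, b = linear_regression(xs, ys)  # regression line y = 3/4 + (7/12) x
```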

Linear Regression: Example

    x | 1  2  3  4  5  6  7  8
    y | 1  3  2  3  4  3  5  6

    y = 3/4 + (7/12)·x

[figure: the eight data points and the regression line in the (x, y)-plane, x = 0, ..., 8, y = 0, ..., 6]

Christian Borgelt Introduction to Neural Networks 75

Mathematical Background: Polynomial Regression

Generalization to polynomials:

    y = p(x) = a_0 + a_1·x + ... + a_m·x^m

Approach: Minimize the sum of squared errors, i.e.

    F(a_0, a_1, ..., a_m) = Σ_{i=1}^{n} (p(x_i) − y_i)² = Σ_{i=1}^{n} (a_0 + a_1·x_i + ... + a_m·x_i^m − y_i)²

Necessary conditions for a minimum: All partial derivatives vanish, i.e.

    ∂F/∂a_0 = 0,  ∂F/∂a_1 = 0,  ...,  ∂F/∂a_m = 0.

Christian Borgelt Introduction to Neural Networks 76

Mathematical Background: Polynomial Regression

System of normal equations for polynomials (all sums run from i = 1 to n):

    n·a_0 + (Σ x_i)·a_1 + ... + (Σ x_i^m)·a_m = Σ y_i
    (Σ x_i)·a_0 + (Σ x_i²)·a_1 + ... + (Σ x_i^{m+1})·a_m = Σ x_i·y_i
    ...
    (Σ x_i^m)·a_0 + (Σ x_i^{m+1})·a_1 + ... + (Σ x_i^{2m})·a_m = Σ x_i^m·y_i

m + 1 linear equations for m + 1 unknowns a_0, ..., a_m.

The system can be solved with standard methods from linear algebra. The solution is unique unless all x-values are identical.

Christian Borgelt Introduction to Neural Networks 77

Mathematical Background: Multilinear Regression

Generalization to more than one argument:

    z = f(x, y) = a + b·x + c·y

Approach: Minimize the sum of squared errors, i.e.

    F(a, b, c) = Σ_{i=1}^{n} (f(x_i, y_i) − z_i)² = Σ_{i=1}^{n} (a + b·x_i + c·y_i − z_i)²

Necessary conditions for a minimum: All partial derivatives vanish, i.e.

    ∂F/∂a = Σ_{i=1}^{n} 2(a + b·x_i + c·y_i − z_i) = 0,
    ∂F/∂b = Σ_{i=1}^{n} 2(a + b·x_i + c·y_i − z_i)·x_i = 0,
    ∂F/∂c = Σ_{i=1}^{n} 2(a + b·x_i + c·y_i − z_i)·y_i = 0.

Christian Borgelt Introduction to Neural Networks 78

Mathematical Background: Multilinear Regression

System of normal equations for several arguments (all sums run from i = 1 to n):

    n·a + (Σ x_i)·b + (Σ y_i)·c = Σ z_i
    (Σ x_i)·a + (Σ x_i²)·b + (Σ x_i·y_i)·c = Σ z_i·x_i
    (Σ y_i)·a + (Σ x_i·y_i)·b + (Σ y_i²)·c = Σ z_i·y_i

3 linear equations for 3 unknowns a, b, and c.

The system can be solved with standard methods from linear algebra. The solution is unique unless all x- or all y-values are identical.

Christian Borgelt Introduction to Neural Networks 79

Multilinear Regression

General multilinear case:

    y = f(x_1, ..., x_m) = a_0 + Σ_{k=1}^{m} a_k·x_k

Approach: Minimize the sum of squared errors, i.e.

    F(a) = (X·a − y)ᵀ (X·a − y),

where

    X = ( 1  x_11  ...  x_m1 )        y = ( y_1 )        a = ( a_0 )
        ( ...                )            ( ... )            ( ... )
        ( 1  x_1n  ...  x_mn ),           ( y_n ),           ( a_m ).

Necessary conditions for a minimum:

    ∇_a F(a) = ∇_a (X·a − y)ᵀ (X·a − y) = 0

Christian Borgelt Introduction to Neural Networks 80

Multilinear Regression

∇_a F(a) may easily be computed by remembering that the differential operator

    ∇_a = (∂/∂a_0, ..., ∂/∂a_m)

behaves formally like a vector that is multiplied to the sum of squared errors. Alternatively, one may write out the differentiation componentwise.

With the former method we obtain for the derivative:

    ∇_a (X·a − y)ᵀ (X·a − y)
      = (∇_a (X·a − y))ᵀ (X·a − y) + ((X·a − y)ᵀ (∇_a (X·a − y)))ᵀ
      = (∇_a (X·a − y))ᵀ (X·a − y) + (∇_a (X·a − y))ᵀ (X·a − y)
      = 2·Xᵀ (X·a − y)
      = 2·Xᵀ·X·a − 2·Xᵀ·y = 0

Christian Borgelt Introduction to Neural Networks 81

Multilinear Regression

Necessary condition for a minimum therefore:

    ∇_a F(a) = ∇_a (X·a − y)ᵀ (X·a − y) = 2·Xᵀ·X·a − 2·Xᵀ·y = 0

As a consequence we get the system of normal equations:

    Xᵀ·X·a = Xᵀ·y

This system has a solution if Xᵀ·X is not singular. Then we have

    a = (Xᵀ·X)⁻¹·Xᵀ·y.

(Xᵀ·X)⁻¹·Xᵀ is called the (Moore-Penrose) pseudoinverse of the matrix X.

With the matrix-vector representation of the regression problem an extension to multipolynomial regression is straightforward: simply add the desired products of powers to the matrix X.

Christian Borgelt Introduction to Neural Networks 82
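A sketch of the pseudoinverse computation via the normal equations, in pure Python (the helper names and the test data are illustrative choices):

```python
def transpose(A):
    return [list(col) for col in zip(*A)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def solve(M, v):
    """Gaussian elimination with partial pivoting for M a = v."""
    n = len(M)
    T = [row[:] + [v[i]] for i, row in enumerate(M)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(T[r][c]))
        T[c], T[p] = T[p], T[c]
        for r in range(c + 1, n):
            f = T[r][c] / T[c][c]
            for k in range(c, n + 1):
                T[r][k] -= f * T[c][k]
    a = [0.0] * n
    for r in range(n - 1, -1, -1):
        a[r] = (T[r][n] - sum(T[r][k] * a[k] for k in range(r + 1, n))) / T[r][r]
    return a

def least_squares(X, y):
    """Solve the normal equations X^T X a = X^T y."""
    Xt = transpose(X)
    XtX = matmul(Xt, X)
    Xty = [sum(r * yi for r, yi in zip(row, y)) for row in Xt]
    return solve(XtX, Xty)

# z = 1 + 2x + 3y is recovered exactly from six consistent samples
X = [[1, x, y] for x, y in [(0, 0), (1, 0), (0, 1), (1, 1), (2, 1), (1, 2)]]
z = [1 + 2 * x + 3 * y for _, x, y in X]
a = least_squares(X, z)
```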

Mathematical Background: Logistic Regression

Generalization to non-polynomial functions.

Simple example: y = a·x^b

Idea: Find a transformation to the linear/polynomial case.

Transformation for the example: ln y = ln a + b·ln x.

Special case: logistic function

    y = Y / (1 + e^{a + b·x})   ⇔   1/y = (1 + e^{a + b·x}) / Y   ⇔   (Y − y)/y = e^{a + b·x}.

Result: Apply the so-called logit transformation

    ln((Y − y)/y) = a + b·x.

Christian Borgelt Introduction to Neural Networks 83

Logistic Regression: Example

    x | 1    2    3    4    5
    y | 0.4  1.0  3.0  5.0  5.6

Transform the data with z = ln((Y − y)/y), Y = 6. The transformed data points are

    x | 1     2     3     4      5
    z | 2.64  1.61  0.00  −1.61  −2.64

The resulting regression line is

    z ≈ −1.3775·x + 4.133.

Christian Borgelt Introduction to Neural Networks 84
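The example can be reproduced directly: transform, fit a line, and transform back. A minimal sketch (variable names are illustrative):

```python
import math

xs = [1, 2, 3, 4, 5]
ys = [0.4, 1.0, 3.0, 5.0, 5.6]
Y = 6.0                                   # saturation level

# logit transform z = ln((Y - y)/y) linearizes the logistic model
zs = [math.log((Y - y) / y) for y in ys]

# ordinary linear regression on the (x, z) pairs
n = len(xs)
sx, sz = sum(xs), sum(zs)
sxx = sum(x * x for x in xs)
sxz = sum(x * z for x, z in zip(xs, zs))
b = (n * sxz - sx * sz) / (n * sxx - sx * sx)   # slope,     ~ -1.3775
a = (sz - b * sx) / n                            # intercept, ~  4.133

def y_hat(x):
    """Back-transformed logistic regression function."""
    return Y / (1.0 + math.exp(a + b * x))
```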

Logistic Regression: Example

[figures: the transformed data points with the regression line in the (x, z)-plane, and the original data points with the fitted logistic curve y = 6/(1 + e^{4.133 − 1.3775·x}), saturation level Y = 6]

The logistic regression function can be computed by a single neuron with

network input function f_net(x) = w·x with w ≈ 1.3775,
activation function f_act(net, θ) = (1 + e^{−(net − θ)})⁻¹ with θ ≈ 4.133, and
output function f_out(act) = 6·act.

Christian Borgelt Introduction to Neural Networks 85

Training Multilayer Perceptrons Christian Borgelt Introduction to Neural Networks 86

Training Multilayer Perceptrons: Gradient Descent

Problem of logistic regression: It works only for two-layer perceptrons.

More general approach: gradient descent.

Necessary condition: differentiable activation and output functions.

[figure: illustration of the gradient of a real-valued function z = f(x, y) at a point (x0, y0); it is ∇z|_(x0, y0) = (∂z/∂x|_x0, ∂z/∂y|_y0)]

Christian Borgelt Introduction to Neural Networks 87

Gradient Descent: Formal Approach

General idea: Approach the minimum of the error function in small steps.

Error function:

    e = Σ_{l∈L_fixed} e^(l) = Σ_{v∈U_out} e_v = Σ_{l∈L_fixed} Σ_{v∈U_out} e_v^(l)

Form the gradient to determine the direction of the step:

    ∇_{w_u} e = ∂e/∂w_u = (−∂e/∂θ_u, ∂e/∂w_{u p_1}, ..., ∂e/∂w_{u p_n}).

Exploit the sum over the training patterns:

    ∇_{w_u} e = ∂e/∂w_u = ∂/∂w_u Σ_{l∈L_fixed} e^(l) = Σ_{l∈L_fixed} ∂e^(l)/∂w_u.

Christian Borgelt Introduction to Neural Networks 88

Gradient Descent: Formal Approach

The error of a single pattern depends on the weights only through the network input:

    ∇_{w_u} e^(l) = ∂e^(l)/∂w_u = (∂e^(l)/∂net_u^(l)) · (∂net_u^(l)/∂w_u).

Since net_u^(l) = w_u · in_u^(l), we have for the second factor

    ∂net_u^(l)/∂w_u = in_u^(l).

For the first factor we consider the error e^(l) for the training pattern l = (ı^(l), o^(l)):

    e^(l) = Σ_{v∈U_out} e_v^(l) = Σ_{v∈U_out} (o_v^(l) − out_v^(l))²,

i.e. the sum of the errors over all output neurons.

Christian Borgelt Introduction to Neural Networks 89

Gradient Descent: Formal Approach

Therefore we have

    ∂e^(l)/∂net_u^(l) = ∂/∂net_u^(l) Σ_{v∈U_out} (o_v^(l) − out_v^(l))² = −2 Σ_{v∈U_out} (o_v^(l) − out_v^(l)) · ∂out_v^(l)/∂net_u^(l),

since only the actual output out_v^(l) of an output neuron v depends on the network input net_u^(l) of the neuron u we are considering. This introduces the abbreviation

    δ_u^(l) = Σ_{v∈U_out} (o_v^(l) − out_v^(l)) · ∂out_v^(l)/∂net_u^(l)

for the important sum appearing here (the constant factor −2 is kept separate and is later absorbed into the learning rate).

Christian Borgelt Introduction to Neural Networks 90

Gradient Descent: Formal Approach

Distinguish two cases: the neuron u is an output neuron, or the neuron u is a hidden neuron.

In the first case we have

    ∀u ∈ U_out:  δ_u^(l) = (o_u^(l) − out_u^(l)) · ∂out_u^(l)/∂net_u^(l).

Therefore we have for the gradient

    ∀u ∈ U_out:  ∇_{w_u} e_u^(l) = ∂e_u^(l)/∂w_u = −2 (o_u^(l) − out_u^(l)) · (∂out_u^(l)/∂net_u^(l)) · in_u^(l)

and thus for the weight change (the constant factor 2 is absorbed into the learning rate η)

    ∀u ∈ U_out:  Δw_u^(l) = −η ∇_{w_u} e_u^(l) = η (o_u^(l) − out_u^(l)) · (∂out_u^(l)/∂net_u^(l)) · in_u^(l).

Christian Borgelt Introduction to Neural Networks 91

Gradient Descent: Formal Approach

The exact formulae depend on the choice of activation and output function, since it is

    out_u^(l) = f_out(act_u^(l)) = f_out(f_act(net_u^(l))).

Consider the special case where the output function is the identity and the activation function is logistic, i.e. f_act(x) = 1/(1 + e^{−x}).

The first assumption yields

    ∂out_u^(l)/∂net_u^(l) = ∂act_u^(l)/∂net_u^(l) = f'_act(net_u^(l)).

Christian Borgelt Introduction to Neural Networks 92

Gradient Descent: Formal Approach

For a logistic activation function we have

    f'_act(x) = d/dx (1 + e^{−x})⁻¹ = e^{−x} · (1 + e^{−x})⁻²
              = (1/(1 + e^{−x})) · (1 − 1/(1 + e^{−x}))
              = f_act(x) · (1 − f_act(x)),

and therefore

    f'_act(net_u^(l)) = f_act(net_u^(l)) · (1 − f_act(net_u^(l))) = out_u^(l) · (1 − out_u^(l)).

The resulting weight change is therefore

    Δw_u^(l) = η (o_u^(l) − out_u^(l)) · out_u^(l) · (1 − out_u^(l)) · in_u^(l),

which makes the computations very simple.

Christian Borgelt Introduction to Neural Networks 93

Error Backpropagation

Consider now: The neuron u is a hidden neuron, i.e. u ∈ U_k, 0 < k < r.

The output out_v^(l) of an output neuron v depends on the network input net_u^(l) only indirectly through the successor neurons succ(u) = {s ∈ U | (u, s) ∈ C} = {s_1, ..., s_m} ⊆ U_{k+1}, namely through their network inputs net_s^(l).

We apply the chain rule to obtain

    δ_u^(l) = Σ_{v∈U_out} Σ_{s∈succ(u)} (o_v^(l) − out_v^(l)) · (∂out_v^(l)/∂net_s^(l)) · (∂net_s^(l)/∂net_u^(l)).

Exchanging the sums yields

    δ_u^(l) = Σ_{s∈succ(u)} ( Σ_{v∈U_out} (o_v^(l) − out_v^(l)) · ∂out_v^(l)/∂net_s^(l) ) · ∂net_s^(l)/∂net_u^(l)
            = Σ_{s∈succ(u)} δ_s^(l) · ∂net_s^(l)/∂net_u^(l).

Christian Borgelt Introduction to Neural Networks 94

Error Backpropagation

Consider the network input

    net_s^(l) = w_s · in_s^(l) = ( Σ_{p∈pred(s)} w_{sp} · out_p^(l) ) − θ_s,

where one element of in_s^(l) is the output out_u^(l) of the neuron u. Therefore it is

    ∂net_s^(l)/∂net_u^(l) = ( Σ_{p∈pred(s)} w_{sp} · ∂out_p^(l)/∂net_u^(l) ) − ∂θ_s/∂net_u^(l) = w_{su} · ∂out_u^(l)/∂net_u^(l).

The result is the recursive equation (error backpropagation):

    δ_u^(l) = ( Σ_{s∈succ(u)} δ_s^(l) · w_{su} ) · ∂out_u^(l)/∂net_u^(l).

Christian Borgelt Introduction to Neural Networks 95

Error Backpropagation

The resulting formula for the weight change is

    Δw_u^(l) = −η ∇_{w_u} e^(l) = η δ_u^(l) · in_u^(l) = η ( Σ_{s∈succ(u)} δ_s^(l) w_{su} ) · (∂out_u^(l)/∂net_u^(l)) · in_u^(l).

Consider again the special case where the output function is the identity and the activation function is logistic. The resulting formula for the weight change is then

    Δw_u^(l) = η ( Σ_{s∈succ(u)} δ_s^(l) w_{su} ) · out_u^(l) · (1 − out_u^(l)) · in_u^(l).

Christian Borgelt Introduction to Neural Networks 96

Error Backpropagation: Cookbook Recipe

forward propagation:
    ∀u ∈ U_in:              out_u^(l) = ex_u^(l)
    ∀u ∈ U_hidden ∪ U_out:  out_u^(l) = ( 1 + exp(−Σ_{p∈pred(u)} w_{up} out_p^(l)) )⁻¹
    (logistic activation function, implicit bias value)

error factor:
    ∀u ∈ U_out:     δ_u^(l) = (o_u^(l) − out_u^(l)) · λ_u^(l)

backward propagation:
    ∀u ∈ U_hidden:  δ_u^(l) = ( Σ_{s∈succ(u)} δ_s^(l) w_{su} ) · λ_u^(l)

activation derivative:
    λ_u^(l) = out_u^(l) · (1 − out_u^(l))

weight change:
    Δw_{up}^(l) = η δ_u^(l) out_p^(l)

[network diagram: inputs x_1, ..., x_n on the left, outputs y_1, ..., y_m on the right; the quantities above are computed layer by layer]

Christian Borgelt Introduction to Neural Networks 97
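For a single logistic neuron the recipe collapses to the delta rule. The sketch below is an illustration (iteration count and the dropped constant factor absorbed into η are choices of this sketch); it trains one neuron on the negation task that the following example slides use, starting from the same deliberately poor parameters:

```python
import math

def logistic(z):
    return 1.0 / (1.0 + math.exp(-z))

# negation: x = 0 -> 1, x = 1 -> 0; one neuron with out = logistic(w*x - theta)
data = [(0.0, 1.0), (1.0, 0.0)]
w, theta = 3.5, 3.0       # poor starting point, as in the example
eta = 1.0
for _ in range(1000):     # online training
    for x, o in data:
        out = logistic(w * x - theta)
        delta = (o - out) * out * (1.0 - out)  # error factor times lambda
        w += eta * delta * x
        theta -= eta * delta                   # bias moves opposite to weights
```

After training, the neuron outputs a value close to 1 for x = 0 and close to 0 for x = 1; the error cannot vanish completely because the logistic function only approaches 0 and 1 asymptotically.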

Gradient Descent: Examples

Gradient descent training for the negation ¬x with a single neuron y = f(w·x − θ):

    x | y
    0 | 1
    1 | 0

[figures: the error for x = 0, the error for x = 1, and the sum of errors as functions of the parameters θ and w, each over the range −4 ... 4]

Christian Borgelt Introduction to Neural Networks 98

Gradient Descent: Examples

[tables: online and batch training of the negation neuron, epochs 0, 20, ..., 220; starting from θ = 3.00, w = 3.50 the error decreases from about 1.3 to about 0.04 within 220 epochs, and online and batch training behave almost identically]

Christian Borgelt Introduction to Neural Networks 99

Gradient Descent: Examples

Visualization of gradient descent for the negation ¬x.

[figures: trajectories of (θ, w) for online and batch training in the region −4 ... 4, and the error surface with the descent path for batch training]

Training is obviously successful. The error cannot vanish completely due to the properties of the logistic function.

Christian Borgelt Introduction to Neural Networks 100

Gradient Descent: Examples

Example function:

    f(x) = (5/6)·x⁴ − 7·x³ + (115/6)·x² − 18·x + 6

[table: i, x_i, f(x_i), f'(x_i), Δx_i for i = 0, ..., 10, starting at x_0 = 0.2; e.g. f(0.2) ≈ 3.112, f'(0.2) ≈ −11.147]

Gradient descent with initial value 0.2 and learning rate 0.001: the learning rate is too small; the steps are tiny and the process approaches the minimum only very slowly.

Christian Borgelt Introduction to Neural Networks 101

Gradient Descent: Examples

Example function:

    f(x) = (5/6)·x⁴ − 7·x³ + (115/6)·x² − 18·x + 6

[table: i, x_i, f(x_i), f'(x_i), Δx_i for i = 0, ..., 10, starting at x_0 = 1.5; e.g. f(1.5) ≈ 2.719, f'(1.5) = 3.5, Δx_0 = −0.875]

Gradient descent with initial value 1.5 and learning rate 0.25: the learning rate is too large; the process jumps back and forth erratically and does not converge.

Christian Borgelt Introduction to Neural Networks 102

Gradient Descent: Examples

Example function:

    f(x) = (5/6)·x⁴ − 7·x³ + (115/6)·x² − 18·x + 6

[table: i, x_i, f(x_i), f'(x_i), Δx_i for i = 0, ..., 10, starting at x_0 = 2.6]

Gradient descent with initial value 2.6 and learning rate 0.05: the process converges, but only to the closest local minimum (x ≈ 3.3), not to the global one.

Christian Borgelt Introduction to Neural Networks 103

Gradient Descent: Variants

Weight update rule:

    w(t + 1) = w(t) + Δw(t)

Standard backpropagation:

    Δw(t) = −η ∇_w e(t)

Manhattan training:

    Δw(t) = −η sgn(∇_w e(t)).

Momentum term:

    Δw(t) = −η ∇_w e(t) + β Δw(t − 1),

Christian Borgelt Introduction to Neural Networks 104
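A minimal sketch of gradient descent with a momentum term on the one-dimensional example function used in these slides (the iteration count and the convergence check are illustrative choices of this sketch):

```python
def f(x):        # the example function of the preceding slides
    return 5/6 * x**4 - 7 * x**3 + 115/6 * x**2 - 18 * x + 6

def fprime(x):   # its derivative
    return 10/3 * x**3 - 21 * x**2 + 115/3 * x - 18

eta, beta = 0.001, 0.9
x, dx = 0.2, 0.0                          # same start as in the example table
for _ in range(500):
    dx = -eta * fprime(x) + beta * dx     # momentum term reuses the last step
    x += dx
# x settles at the global minimum near 0.72, where f is about 0.587
```

With the plain learning rate 0.001 the process crawls; the momentum term effectively multiplies the step width by 1/(1 − β) = 10 in flat regions, so the same minimum is reached far faster.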

Gradient Descent: Variants

Self-adaptive error backpropagation:

    η_w(t) = c⁻ · η_w(t − 1),  if ∇_w e(t) · ∇_w e(t − 1) < 0,
             c⁺ · η_w(t − 1),  if ∇_w e(t) · ∇_w e(t − 1) > 0 and ∇_w e(t − 1) · ∇_w e(t − 2) ≥ 0,
             η_w(t − 1),       otherwise.

Resilient error backpropagation:

    Δw(t) = c⁻ · Δw(t − 1),  if ∇_w e(t) · ∇_w e(t − 1) < 0,
            c⁺ · Δw(t − 1),  if ∇_w e(t) · ∇_w e(t − 1) > 0 and ∇_w e(t − 1) · ∇_w e(t − 2) ≥ 0,
            Δw(t − 1),       otherwise.

Typical values: c⁻ ∈ [0.5, 0.7] and c⁺ ∈ [1.05, 1.2].

Christian Borgelt Introduction to Neural Networks 105
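A sketch of the sign-based (resilient) step-size adaptation on the same one-dimensional example function. The handling of the case ∇e(t) · ∇e(t − 1) = 0 and the omission of the additional look-back condition are simplifications of this illustration:

```python
def fprime(x):   # derivative of the example function of the preceding slides
    return 10/3 * x**3 - 21 * x**2 + 115/3 * x - 18

x, step, prev_g = 0.2, 0.05, 0.0
c_minus, c_plus = 0.5, 1.2
for _ in range(100):
    g = fprime(x)
    if prev_g * g > 0:
        step *= c_plus        # same direction twice: grow the step
    elif prev_g * g < 0:
        step *= c_minus       # sign change (minimum overshot): shrink it
    if g > 0:                 # move against the sign of the gradient
        x -= step
    elif g < 0:
        x += step
    prev_g = g
# x oscillates into the global minimum near 0.72 with shrinking steps
```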

Gradient Descent: Variants

Quickpropagation

[figures: the error e is locally approximated by a parabola through e(t − 1) and e(t), whose apex m yields the next weight w(t + 1); the update can be read off from similar triangles formed by the gradients ∇_w e(t − 1) and ∇_w e(t)]

The weight update rule can be derived from the triangles:

    Δw(t) = ( ∇_w e(t) / (∇_w e(t − 1) − ∇_w e(t)) ) · Δw(t − 1).

Christian Borgelt Introduction to Neural Networks 106

Gradient Descent: Examples

[tables: training of the negation neuron without a momentum term (epochs 0, 20, ..., 220) and with a momentum term (epochs 0, 10, ..., 120); in both cases the error falls from about 1.3 to about 0.04, i.e. with the momentum term the same error is reached in about half as many epochs]

Christian Borgelt Introduction to Neural Networks 107

Gradient Descent: Examples

[figures: trajectories of (θ, w) and the error surface with the descent path, without and with a momentum term]

Dots show the position every 20 epochs (without momentum term) or every 10 epochs (with momentum term). Learning with a momentum term is about twice as fast.

Christian Borgelt Introduction to Neural Networks 108

Gradient Descent: Examples

Example function:

    f(x) = (5/6)·x⁴ − 7·x³ + (115/6)·x² − 18·x + 6

[table: i, x_i, f(x_i), f'(x_i), Δx_i for i = 0, ..., 10, starting at x_0 = 0.2]

Gradient descent with momentum term (β = 0.9): with the same initial value 0.2 and learning rate 0.001 as before, the accumulated momentum makes the steps considerably larger, so the process approaches the minimum much faster.

Christian Borgelt Introduction to Neural Networks 109

Gradient Descent: Examples

Example function:

    f(x) = (5/6)·x⁴ − 7·x³ + (115/6)·x² − 18·x + 6

[table: i, x_i, f(x_i), f'(x_i), Δx_i for i = 0, ..., 10, starting at x_0 = 1.5; the process reaches x ≈ 0.723, f(x) ≈ 0.587, within a few steps and then stays there]

Gradient descent with a self-adapting learning rate (c⁺ = 1.2, c⁻ = 0.5) converges quickly to the global minimum.

Christian Borgelt Introduction to Neural Networks 110

Other Extensions of Error Backpropagation

Flat Spot Elimination:

    Δw(t) = −η ∇_w e(t) + ζ

Eliminates slow learning in the saturation region of the logistic function. Counteracts the decay of the error signals over the layers.

Weight Decay:

    Δw(t) = −η ∇_w e(t) − ξ w(t),

Helps to improve the robustness of the training results. Can be derived from an extended error function penalizing large weights:

    e* = e + (ξ/2) Σ_{u∈U_out∪U_hidden} ( θ_u² + Σ_{p∈pred(u)} w_{up}² ).

Christian Borgelt Introduction to Neural Networks 111

Sensitivity Analysis Christian Borgelt Introduction to Neural Networks

Sensitivity Analysis

Question: How important are different inputs to the network?

Idea: Determine the change of the output relative to a change of the input.

    ∀u ∈ U_in:  s(u) = (1/|L_fixed|) Σ_{l∈L_fixed} Σ_{v∈U_out} ∂out_v^(l) / ∂ex_u^(l).

Formal derivation: Apply the chain rule.

    ∂out_v/∂ex_u = (∂out_v/∂out_u) · (∂out_u/∂ex_u) = (∂out_v/∂net_v) · (∂net_v/∂out_u) · (∂out_u/∂ex_u).

Simplification: Assume that the output function is the identity. Then

    ∂out_u/∂ex_u = 1.

Christian Borgelt Introduction to Neural Networks 113

Sensitivity Analysis

For the second factor we get the general result:

    ∂net_v/∂out_u = ∂/∂out_u Σ_{p∈pred(v)} w_{vp} out_p = Σ_{p∈pred(v)} w_{vp} · ∂out_p/∂out_u.

This leads to the recursion formula

    ∂out_v/∂out_u = (∂out_v/∂net_v) · (∂net_v/∂out_u) = (∂out_v/∂net_v) · Σ_{p∈pred(v)} w_{vp} · ∂out_p/∂out_u.

However, for the first hidden layer we get

    ∂net_v/∂out_u = w_{vu},   therefore   ∂out_v/∂out_u = (∂out_v/∂net_v) · w_{vu}.

This formula marks the start of the recursion.

Christian Borgelt Introduction to Neural Networks 114

Sensitivity Analysis

Consider as usual the special case where the output function is the identity and the activation function is logistic.

The recursion formula is in this case

    ∂out_v/∂out_u = out_v (1 − out_v) Σ_{p∈pred(v)} w_{vp} · ∂out_p/∂out_u

and the anchor of the recursion is

    ∂out_v/∂out_u = out_v (1 − out_v) · w_{vu}.

Christian Borgelt Introduction to Neural Networks 115
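The recursion can be checked against a numerical derivative. The sketch below (an illustration with arbitrarily chosen weights) computes the sensitivity of a 1-2-1 network with logistic hidden units and an identity output unit, and compares it with a central finite difference:

```python
import math

def logistic(z):
    return 1.0 / (1.0 + math.exp(-z))

# tiny 1-2-1 network (weights are arbitrary illustration values)
w1, th = [1.5, -0.8], [0.3, -0.2]   # hidden weights and biases
w2 = [2.0, -1.0]                    # output weights (identity output unit)

def forward(x):
    return sum(w * logistic(wi * x - t) for w, wi, t in zip(w2, w1, th))

def sensitivity(x):
    # anchored recursion: d out/d x = sum_v w2_v * out_v (1 - out_v) * w1_v
    s = 0.0
    for w, wi, t in zip(w2, w1, th):
        h = logistic(wi * x - t)
        s += w * h * (1.0 - h) * wi
    return s

eps, x0 = 1e-6, 0.7
numeric = (forward(x0 + eps) - forward(x0 - eps)) / (2 * eps)
# sensitivity(x0) agrees with the finite-difference estimate
```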

Demonstration Software: xmlp/wmlp Demonstration of multilayer perceptron training: Visualization of the training process Biimplication and Exclusive Or, two continuous functions http://www.borgelt.net/mlpd.html Christian Borgelt Introduction to Neural Networks 6

Multilayer Perceptron Software: mlp/mlpgui Software for training general multilayer perceptrons: Command line version written in C, fast training Graphical user interface in Java, easy to use http://www.borgelt.net/mlp.html, http://www.borgelt.net/mlpgui.html Christian Borgelt Introduction to Neural Networks 7

Radial Basis Function Networks Christian Borgelt Introduction to Neural Networks 8

Radial Basis Function Networks

A radial basis function network (RBFN) is a neural network with a graph G = (U, C) that satisfies the following conditions:

(i) U_in ∩ U_out = ∅,
(ii) C = (U_in × U_hidden) ∪ C′,  C′ ⊆ (U_hidden × U_out)

The network input function of each hidden neuron is a distance function of the input vector and the weight vector, i.e.

    ∀u ∈ U_hidden:  f_net^(u)(w_u, in_u) = d(w_u, in_u),

where d : IRⁿ × IRⁿ → IR₀⁺ is a function satisfying ∀x, y, z ∈ IRⁿ:

(i) d(x, y) = 0 ⇔ x = y,
(ii) d(x, y) = d(y, x) (symmetry),
(iii) d(x, z) ≤ d(x, y) + d(y, z) (triangle inequality).

Christian Borgelt Introduction to Neural Networks 119

Distance Functions

Illustration of distance functions: the Minkowski family

    d_k(x, y) = ( Σ_{i=1}^{n} |x_i − y_i|^k )^{1/k}

Well-known special cases from this family are:

k = 1: Manhattan or city block distance,
k = 2: Euclidean distance,
k → ∞: maximum distance, i.e. d_∞(x, y) = max_{i=1}^{n} |x_i − y_i|.

[figure: unit circles of d_1 (diamond), d_2 (circle), and d_∞ (square)]

Christian Borgelt Introduction to Neural Networks 120
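A direct transcription of the distance family (the function names are illustrative):

```python
def minkowski(x, y, k):
    """Minkowski distance d_k: k = 1 Manhattan, k = 2 Euclidean."""
    return sum(abs(a - b) ** k for a, b in zip(x, y)) ** (1.0 / k)

def d_max(x, y):
    """Limit case k -> infinity: the maximum distance."""
    return max(abs(a - b) for a, b in zip(x, y))

p, q = (0.0, 0.0), (3.0, 4.0)
```

For p and q above, the Manhattan distance is 7, the Euclidean distance is 5 (the 3-4-5 triangle), and the maximum distance is 4.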

Radial Basis Function Networks

The network input function of the output neurons is the weighted sum of their inputs, i.e.

    ∀u ∈ U_out:  f_net^(u)(w_u, in_u) = w_u · in_u = Σ_{v∈pred(u)} w_{uv} out_v.

The activation function of each hidden neuron is a so-called radial function, i.e. a monotonically decreasing function

    f : IR₀⁺ → [0, 1] with f(0) = 1 and lim_{x→∞} f(x) = 0.

The activation function of each output neuron is a linear function, namely

    f_act^(u)(net_u, θ_u) = net_u − θ_u.

(The linear activation function is important for the initialization.)

Christian Borgelt Introduction to Neural Networks 121

Radial Activation Functions

rectangle function:
    f_act(net, σ) = 0 if net > σ, 1 otherwise.

triangle function:
    f_act(net, σ) = 0 if net > σ, 1 − net/σ otherwise.

cosine until zero:
    f_act(net, σ) = 0 if net > 2σ, (cos(π·net/(2σ)) + 1)/2 otherwise.

Gaussian function:
    f_act(net, σ) = e^{−net²/(2σ²)}

[figures: graphs of the four functions; the cosine reaches 1/2 at net = σ and 0 at net = 2σ, the Gaussian has the values e^{−1/2} at net = σ and e^{−2} at net = 2σ]

Christian Borgelt Introduction to Neural Networks 122
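A direct transcription of the four radial functions (function names are illustrative):

```python
import math

def rectangle(net, sigma):
    return 0.0 if net > sigma else 1.0

def triangle(net, sigma):
    return 0.0 if net > sigma else 1.0 - net / sigma

def cosine_until_zero(net, sigma):
    # reaches 1/2 at net = sigma and 0 at net = 2*sigma
    return 0.0 if net > 2 * sigma else (math.cos(math.pi * net / (2 * sigma)) + 1) / 2

def gaussian(net, sigma):
    # e^(-1/2) at net = sigma, e^(-2) at net = 2*sigma
    return math.exp(-net * net / (2 * sigma * sigma))
```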

Radial Basis Function Networks: Examples

Radial basis function networks for the conjunction x1 ∧ x2

[figures: two radial basis function networks computing the conjunction, together with the circular class regions they induce in the input space; a hidden neuron centered at (1, 1) with a suitably small radius separates the single pattern with y = 1 from the other three]

Christian Borgelt Introduction to Neural Networks 123

Radial Basis Function Networks: Examples

Radial basis function networks for the biimplication x1 ↔ x2

Idea: logical decomposition

    x1 ↔ x2  ≡  (x1 ∧ x2) ∨ ¬(x1 ∨ x2)

[network diagram and class regions: two hidden neurons centered at (1, 1) and (0, 0), both connected with weight 1 to an output neuron with bias 0; the class regions are two circles around (0, 0) and (1, 1)]

Christian Borgelt Introduction to Neural Networks 124

Radial Basis Function Networks: Function Approximation

[figure: approximation of a function by a step function; each step is represented by a rectangular radial function centered at the middle of its interval, with heights y1, ..., y4]

Christian Borgelt Introduction to Neural Networks 125

Radial Basis Function Networks: Function Approximation

[network diagram: one hidden neuron per interval center x_1, ..., x_4 with a rectangular radial function of radius σ, output weights y_1, ..., y_4, output bias 0]

    σ = Δx/2 = (x_{i+1} − x_i)/2

Christian Borgelt Introduction to Neural Networks 126

Radial Basis Function Networks: Function Approximation

[figure: the staircase approximation decomposed into the weighted rectangular pulses y_1, ..., y_4 computed by the individual hidden neurons]

Christian Borgelt Introduction to Neural Networks 127

Radial Basis Function Networks: Function Approximation

[figure: approximation with triangular radial functions of weights w_1, w_2, w_3 over the range x = 0, ..., 8; the overlapping triangles produce a continuous, piecewise linear approximation]

Christian Borgelt Introduction to Neural Networks 128

Radial Basis Function Networks: Function Approximation

Radial basis function network for a sum of three Gaussian functions

[network diagram and output curve: three hidden neurons with Gaussian radial functions, whose weighted outputs are summed by a linear output neuron]

Christian Borgelt Introduction to Neural Networks 129

Training Radial Basis Function Networks Christian Borgelt Introduction to Neural Networks 30

Radial Basis Function Networks: Initialization

Let L_fixed = {l_1, ..., l_m} be a fixed learning task, consisting of m training patterns l = (ı^(l), o^(l)).

Simple radial basis function network: One hidden neuron v_k, k = 1, ..., m, for each training pattern:

    ∀k ∈ {1, ..., m}:  w_{v_k} = ı^(l_k).

If the activation function is the Gaussian function, the radii σ_k are chosen heuristically as

    ∀k ∈ {1, ..., m}:  σ_k = d_max / √(2m),

where d_max = max_{l_j, l_k ∈ L_fixed} d(ı^(l_j), ı^(l_k)).

Christian Borgelt Introduction to Neural Networks 131

Radial Basis Function Networks: Initialization

Initializing the connections from the hidden to the output neurons:

    ∀u:  Σ_{k=1}^{m} w_{u v_k} out_{v_k}^(l) − θ_u = o_u^(l),   or abbreviated   A · w_u = o_u,

where o_u = (o_u^(l_1), ..., o_u^(l_m))ᵀ is the vector of desired outputs, θ_u = 0, and

    A = ( out_{v_1}^(l_1)  out_{v_2}^(l_1)  ...  out_{v_m}^(l_1) )
        ( out_{v_1}^(l_2)  out_{v_2}^(l_2)  ...  out_{v_m}^(l_2) )
        ( ...                                                    )
        ( out_{v_1}^(l_m)  out_{v_2}^(l_m)  ...  out_{v_m}^(l_m) ).

This is a linear equation system that can be solved by inverting the matrix A:

    w_u = A⁻¹ · o_u.

Christian Borgelt Introduction to Neural Networks 132
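A sketch of this initialization for the biimplication example of the following slides (Gaussian activation, σ = 1/2, θ_u = 0; the plain Gaussian-elimination solver is written for this illustration):

```python
import math

centers = [(0, 0), (1, 0), (0, 1), (1, 1)]   # one hidden neuron per pattern
targets = [1.0, 0.0, 0.0, 1.0]               # biimplication
sigma = 0.5

def phi(x, c):
    """Gaussian radial function of the Euclidean distance."""
    d2 = sum((a - b) ** 2 for a, b in zip(x, c))
    return math.exp(-d2 / (2 * sigma ** 2))

A = [[phi(x, c) for c in centers] for x in centers]

def solve(M, v):
    """Gaussian elimination with partial pivoting for M w = v."""
    n = len(M)
    T = [row[:] + [v[i]] for i, row in enumerate(M)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(T[r][c]))
        T[c], T[p] = T[p], T[c]
        for r in range(c + 1, n):
            f = T[r][c] / T[c][c]
            for k in range(c, n + 1):
                T[r][k] -= f * T[c][k]
    w = [0.0] * n
    for r in range(n - 1, -1, -1):
        w[r] = (T[r][n] - sum(T[r][k] * w[k] for k in range(r + 1, n))) / T[r][r]
    return w

w = solve(A, targets)   # approximately (1.0567, -0.2809, -0.2809, 1.0567)

def rbfn(x):
    """Linear output neuron with theta = 0."""
    return sum(wk * phi(x, c) for wk, c in zip(w, centers))
```

Evaluating the network at the four training patterns reproduces the targets exactly, confirming that the initialization alone solves the task.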

RBFN Initialization: Example

Simple radial basis function network for the biimplication x1 ↔ x2

    x1 x2 | y
    0  0  | 1
    1  0  | 0
    0  1  | 0
    1  1  | 1

[network diagram: four hidden neurons with the training patterns (0, 0), (1, 0), (0, 1), (1, 1) as centers, radius σ = 1/2, connection weights w_1, ..., w_4 to the output neuron, output bias 0]

Christian Borgelt Introduction to Neural Networks 133

RBFN Initialization: Example

Simple radial basis function network for the biimplication x1 ↔ x2, where

    A = ( 1     e⁻²   e⁻²   e⁻⁴ )
        ( e⁻²   1     e⁻⁴   e⁻² )
        ( e⁻²   e⁻⁴   1     e⁻² )
        ( e⁻⁴   e⁻²   e⁻²   1   )

    A⁻¹ = (1/D) ( a  b  b  c )
                ( b  a  c  b )
                ( b  c  a  b )
                ( c  b  b  a )

    D = 1 − 4e⁻⁴ + 6e⁻⁸ − 4e⁻¹² + e⁻¹⁶ ≈ 0.9287
    a = 1 − 2e⁻⁴ + e⁻⁸ ≈ 0.9637
    b = −e⁻² + 2e⁻⁶ − e⁻¹⁰ ≈ −0.1304
    c = e⁻⁴ − 2e⁻⁸ + e⁻¹² ≈ 0.0177

    w_u = A⁻¹ · o_u = (1/D) ( a + c, 2b, 2b, a + c )ᵀ ≈ ( 1.0567, −0.2809, −0.2809, 1.0567 )ᵀ

Christian Borgelt Introduction to Neural Networks 134

RBFN Initialization: Example

Simple radial basis function network for the biimplication x1 ↔ x2

[figures: a single basis function, the superposition of all basis functions, and the network output over the input space; the output is 1 at (0, 0) and (1, 1) and 0 at (1, 0) and (0, 1)]

Initialization already leads to a perfect solution of the learning task. Subsequent training is not necessary.

Christian Borgelt Introduction to Neural Networks 135

Radial Basis Function Networks: Initialization

Normal radial basis function networks: Select a subset of k training patterns as centers.

    A = ( out_{v_1}^(l_1)  out_{v_2}^(l_1)  ...  out_{v_k}^(l_1) )
        ( out_{v_1}^(l_2)  out_{v_2}^(l_2)  ...  out_{v_k}^(l_2) )
        ( ...                                                    )
        ( out_{v_1}^(l_m)  out_{v_2}^(l_m)  ...  out_{v_k}^(l_m) )

    A · w_u = o_u

Compute the (Moore-Penrose) pseudoinverse:

    A⁺ = (Aᵀ A)⁻¹ Aᵀ.

The weights can then be computed by

    w_u = A⁺ · o_u = (Aᵀ A)⁻¹ Aᵀ · o_u.

Christian Borgelt Introduction to Neural Networks 136

RBFN Initialization: Example

Normal radial basis function network for the biimplication x1 ↔ x2

Select two training patterns:

    l_1 = (ı^(l_1), o^(l_1)) = ((0, 0), (1))
    l_4 = (ı^(l_4), o^(l_4)) = ((1, 1), (1))

[network diagram: two hidden neurons with centers (0, 0) and (1, 1), radius σ = 1/2, connection weights w_1, w_2 and bias θ at the output neuron]

Christian Borgelt Introduction to Neural Networks 137

RBFN Initialization: Example

Normal radial basis function network for the biimplication x1 ↔ x2 (the third column of A accounts for the bias θ):

    A = ( −1   1     e⁻⁴ )
        ( −1   e⁻²   e⁻² )
        ( −1   e⁻²   e⁻² )
        ( −1   e⁻⁴   1   )

    A⁺ = (Aᵀ A)⁻¹ Aᵀ = ( a  b  b  a )
                       ( c  d  d  e )
                       ( e  d  d  c )

where a ≈ 0.1810, b ≈ −0.6810, c ≈ 1.1781, d ≈ −0.6688, e ≈ 0.1594.

Resulting weights:

    w_u = ( θ, w_1, w_2 )ᵀ = A⁺ · o_u ≈ ( 0.3620, 1.3375, 1.3375 )ᵀ.

Christian Borgelt Introduction to Neural Networks 138

RBFN Initialization: Example

Normal radial basis function network for the biimplication x1 ↔ x2

[figures: the two basis functions centered at (0, 0) and (1, 1), and the network output over the input space (output bias θ ≈ 0.36)]

Initialization already leads to a perfect solution of the learning task. This is an accident, because the linear equation system is not over-determined, due to linearly dependent equations: the patterns (1, 0) and (0, 1) yield identical rows.

Christian Borgelt Introduction to Neural Networks 139

Radial Basis Function Networks: Initialization

Finding appropriate centers for the radial basis functions

One approach: k-means clustering

- Select randomly k training patterns as centers.
- Assign to each center those training patterns that are closest to it.
- Compute new centers as the centers of gravity of the assigned training patterns.
- Repeat the previous two steps until convergence, i.e., until the centers do not change anymore.
- Use the resulting centers for the weight vectors of the hidden neurons.

Alternative approach: learning vector quantization

Christian Borgelt Introduction to Neural Networks 140
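The k-means steps above can be sketched as follows (deterministic initial centers instead of a random selection, so the run is reproducible; data and names are illustrative):

```python
def kmeans(points, centers, iters=10):
    """Plain k-means: assign each point to its nearest center,
    then recompute each center as the centroid of its cluster."""
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:
            i = min(range(len(centers)),
                    key=lambda j: sum((a - b) ** 2 for a, b in zip(p, centers[j])))
            clusters[i].append(p)
        centers = [tuple(sum(coord) / len(cl) for coord in zip(*cl)) if cl else ctr
                   for cl, ctr in zip(clusters, centers)]
    return centers

# two well-separated groups; initial centers taken from the data
pts = [(0, 0), (0, 1), (1, 0), (9, 9), (9, 10), (10, 9)]
cs = kmeans(pts, [(0, 0), (9, 9)])
# the centers converge to the two cluster centroids (1/3, 1/3) and (28/3, 28/3)
```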

Radial Basis Function Networks: Training

Training radial basis function networks: Derivation of the update rules is analogous to that for multilayer perceptrons.

Weights from the hidden to the output neurons.

Gradient:

    ∇_{w_u} e_u^(l) = ∂e_u^(l)/∂w_u = −2 (o_u^(l) − out_u^(l)) · in_u^(l),

Weight update rule (the constant factor 2 is absorbed into the learning rate η₃):

    Δw_u^(l) = −η₃ ∇_{w_u} e_u^(l) = η₃ (o_u^(l) − out_u^(l)) · in_u^(l)

(Two more learning rates are needed for the center coordinates and the radii.)

Christian Borgelt Introduction to Neural Networks 141

Radial Basis Function Networks: Training

Training radial basis function networks: Center coordinates (weights from the input to the hidden neurons).

Gradient:

    ∇_{w_v} e^(l) = ∂e^(l)/∂w_v = −2 Σ_{s∈succ(v)} (o_s^(l) − out_s^(l)) w_{sv} · (∂out_v^(l)/∂net_v^(l)) · (∂net_v^(l)/∂w_v)

Weight update rule:

    Δw_v^(l) = −η₁ ∇_{w_v} e^(l) = η₁ Σ_{s∈succ(v)} (o_s^(l) − out_s^(l)) w_{sv} · (∂out_v^(l)/∂net_v^(l)) · (∂net_v^(l)/∂w_v)

Christian Borgelt Introduction to Neural Networks 142

Radial Basis Function Networks: Training

Training radial basis function networks: Center coordinates (weights from the input to the hidden neurons).

Special case: Euclidean distance

    ∂net_v^(l)/∂w_v = ( Σ_{i=1}^{n} (w_{v p_i} − out_{p_i}^(l))² )^{−1/2} · (w_v − in_v^(l)).

Special case: Gaussian activation function

    ∂out_v^(l)/∂net_v^(l) = ∂f_act(net_v^(l), σ_v)/∂net_v^(l) = ∂/∂net_v^(l) e^{−(net_v^(l))²/(2σ_v²)} = −(net_v^(l)/σ_v²) · e^{−(net_v^(l))²/(2σ_v²)}.

Christian Borgelt Introduction to Neural Networks 143

Radial Basis Function Networks: Training

Training radial basis function networks: Radii of the radial basis functions.

Gradient:

    ∂e^(l)/∂σ_v = −2 Σ_{s∈succ(v)} (o_s^(l) − out_s^(l)) w_{sv} · ∂out_v^(l)/∂σ_v.

Weight update rule:

    Δσ_v^(l) = −η₂ ∂e^(l)/∂σ_v = η₂ Σ_{s∈succ(v)} (o_s^(l) − out_s^(l)) w_{sv} · ∂out_v^(l)/∂σ_v.

Special case: Gaussian activation function

    ∂out_v^(l)/∂σ_v = ∂/∂σ_v e^{−(net_v^(l))²/(2σ_v²)} = ((net_v^(l))²/σ_v³) · e^{−(net_v^(l))²/(2σ_v²)}.

Christian Borgelt Introduction to Neural Networks 144

Radial Basis Function Networks: Generalization

Generalization of the distance function

Idea: Use an anisotropic distance function.

Example: Mahalanobis distance

    d(x, y) = √( (x − y)ᵀ Σ⁻¹ (x − y) ).

Example: biimplication with

    Σ = ( 9  8 )
        ( 8  9 )

[figure: a single hidden neuron whose elliptical region, oriented along the diagonal, covers the points (0, 0) and (1, 1)]

Christian Borgelt Introduction to Neural Networks 145

Learning Vector Quantization Christian Borgelt Introduction to Neural Networks 46

Vector Quantization Voronoi diagram of a vector quantization Dots represent vectors that are used for quantizing the area. Lines are the boundaries of the regions of points that are closest to the enclosed vector. Christian Borgelt Introduction to Neural Networks 47

Learning Vector Quantization Finding clusters in a given set of data points Data points are represented by empty circles ( ). Cluster centers are represented by full circles ( ). Christian Borgelt Introduction to Neural Networks 48

Learning Vector Quantization Networks

A learning vector quantization network (LVQ) is a neural network with a graph G = (U, C) that satisfies the following conditions:

(i) U_in ∩ U_out = ∅,  U_hidden = ∅,
(ii) C = U_in × U_out.

The network input function of each output neuron is a distance function of the input vector and the weight vector, i.e.

    ∀u ∈ U_out:  f_net^(u)(w_u, in_u) = d(w_u, in_u),

where d : IRⁿ × IRⁿ → IR₀⁺ is a function satisfying ∀x, y, z ∈ IRⁿ:

(i) d(x, y) = 0 ⇔ x = y,
(ii) d(x, y) = d(y, x) (symmetry),
(iii) d(x, z) ≤ d(x, y) + d(y, z) (triangle inequality).

Christian Borgelt Introduction to Neural Networks 149

Distance Functions

Illustration of distance functions: the Minkowski family

    d_k(x, y) = ( Σ_{i=1}^{n} |x_i − y_i|^k )^{1/k}

Well-known special cases from this family are:

k = 1: Manhattan or city block distance,
k = 2: Euclidean distance,
k → ∞: maximum distance, i.e. d_∞(x, y) = max_{i=1}^{n} |x_i − y_i|.

[figure: unit circles of d_1 (diamond), d_2 (circle), and d_∞ (square)]

Christian Borgelt Introduction to Neural Networks 150

Learning Vector Quantization

The activation function of each output neuron is a so-called radial function, i.e. a monotonically decreasing function

    f : IR₀⁺ → [0, ∞) with f(0) = 1 and lim_{x→∞} f(x) = 0.

Sometimes the range of values is restricted to the interval [0, 1]. However, due to the special output function this restriction is irrelevant.

The output function of each output neuron is not a simple function of the activation of the neuron. Rather it takes into account the activations of all output neurons:

    f_out^(u)(act_u) = 1, if act_u = max_{v∈U_out} act_v,
                       0, otherwise.

If more than one unit has the maximal activation, one is selected at random to have an output of 1, all others are set to output 0: winner-takes-all principle.

Christian Borgelt Introduction to Neural Networks 151

Radial Activation Functions

rectangle function:
    f_act(net, σ) = 0 if net > σ, 1 otherwise.

triangle function:
    f_act(net, σ) = 0 if net > σ, 1 − net/σ otherwise.

cosine until zero:
    f_act(net, σ) = 0 if net > 2σ, (cos(π·net/(2σ)) + 1)/2 otherwise.

Gaussian function:
    f_act(net, σ) = e^{−net²/(2σ²)}

[figures: graphs of the four functions; the cosine reaches 1/2 at net = σ and 0 at net = 2σ, the Gaussian has the values e^{−1/2} at net = σ and e^{−2} at net = 2σ]

Christian Borgelt Introduction to Neural Networks 152

Learning Vector Quantization

Adaptation of reference vectors / codebook vectors:

For each training pattern find the closest reference vector.
Adapt only this reference vector (winner neuron).
For classified data the class may be taken into account: each reference vector is assigned to a class.

Attraction rule (data point and reference vector have the same class):
r^(new) = r^(old) + η(x − r^(old)),

Repulsion rule (data point and reference vector have different classes):
r^(new) = r^(old) − η(x − r^(old)).
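One online LVQ step can be sketched as follows, assuming Euclidean distance; the function name and data layout are my own, not from the lecture:

```python
def lvq_update(refs, classes, x, c, eta):
    """One LVQ step: find the closest reference vector (winner) and
    attract it if its class matches the class c of x, repel it otherwise."""
    # squared Euclidean distance suffices to determine the winner
    dist2 = lambda r: sum((xi - ri) ** 2 for xi, ri in zip(x, r))
    k = min(range(len(refs)), key=lambda i: dist2(refs[i]))
    sign = 1.0 if classes[k] == c else -1.0  # attraction vs. repulsion
    refs[k] = [ri + sign * eta * (xi - ri) for xi, ri in zip(x, refs[k])]
    return k
```

All other reference vectors remain unchanged, exactly as in the rules above.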

Learning Vector Quantization

Adaptation of reference vectors / codebook vectors:

(Figures: the attraction rule moves the winning reference vector by ηd towards the data point x; the repulsion rule moves it by ηd away from x.)

x: data point, r_i: reference vector, η = 0.4 (learning rate)

Learning Vector Quantization: Example

Adaptation of reference vectors / codebook vectors.

Left: online training with learning rate η = 0.1.
Right: batch training with learning rate η = 0.05.

Learning Vector Quantization: Learning Rate Decay

Problem: a fixed learning rate can lead to oscillations.

Solution: time-dependent learning rate

η(t) = η_0 α^t, 0 < α < 1,   or   η(t) = η_0 t^(−κ), κ > 0.
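The two decay schedules are one-liners. A sketch in plain Python (the power-law form assumes t ≥ 1; default values are illustrative, not from the lecture):

```python
def eta_exponential(t, eta0=1.0, alpha=0.9):
    """Exponential decay: eta(t) = eta0 * alpha**t with 0 < alpha < 1."""
    return eta0 * alpha ** t

def eta_power(t, eta0=1.0, kappa=0.5):
    """Power-law decay: eta(t) = eta0 * t**(-kappa) with kappa > 0, t >= 1."""
    return eta0 * t ** (-kappa)
```

In both cases the step size shrinks over time, so the reference vectors settle instead of oscillating around their final positions.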

Learning Vector Quantization: Classified Data

Improved update rule for classified data

Idea: Update not only the one reference vector that is closest to the data point (the winner neuron), but the two closest reference vectors.

Let x be the currently processed data point and c its class. Let r_j and r_k be the two closest reference vectors and z_j and z_k their classes.

Reference vectors are updated only if z_j ≠ z_k and either c = z_j or c = z_k. (Without loss of generality we assume c = z_j.)

The update rules for the two closest reference vectors are:

r_j^(new) = r_j^(old) + η(x − r_j^(old))   and
r_k^(new) = r_k^(old) − η(x − r_k^(old)),

while all other reference vectors remain unchanged.

Learning Vector Quantization: Window Rule

It was observed in practical tests that standard learning vector quantization may drive the reference vectors further and further apart.

To counteract this undesired behavior a window rule was introduced: update only if the data point x is close to the classification boundary.

"Close to the classification boundary" is made formally precise by requiring

min( d(x, r_j) / d(x, r_k), d(x, r_k) / d(x, r_j) ) > θ,   where   θ = (1 − ξ) / (1 + ξ).

ξ is a parameter that has to be specified by a user.

Intuitively, ξ describes the "width" of the window around the classification boundary, in which the data point has to lie in order to lead to an update.

Using it prevents divergence, because the update ceases for a data point once the classification boundary has been moved far enough away.
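The window test reduces to a ratio comparison. A sketch in plain Python, assuming the reconstructed threshold θ = (1 − ξ)/(1 + ξ):

```python
def in_window(d_j, d_k, xi):
    """Window rule: the data point leads to an update only if the
    ratio of its distances to the two closest reference vectors
    exceeds theta, i.e. the point lies near the class boundary."""
    theta = (1.0 - xi) / (1.0 + xi)
    return min(d_j / d_k, d_k / d_j) > theta
```

A point equidistant from both vectors (d_j = d_k) always passes; a point much closer to one vector than the other fails the test, so the update ceases once the boundary has moved far enough away.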

Soft Learning Vector Quantization

Idea: Use soft assignments instead of winner-takes-all.

Assumption: Given data was sampled from a mixture of normal distributions. Each reference vector describes one normal distribution.

Objective: Maximize the log-likelihood ratio of the data, that is, maximize

ln L_ratio = Σ_{j=1}^n ln Σ_{r∈R(c_j)} exp( −(x_j − r)ᵀ(x_j − r) / (2σ²) )
           − Σ_{j=1}^n ln Σ_{r∈Q(c_j)} exp( −(x_j − r)ᵀ(x_j − r) / (2σ²) ).

Here σ is a parameter specifying the "size" of each normal distribution. R(c) is the set of reference vectors assigned to class c and Q(c) its complement.

Intuitively: at each data point the probability density for its class should be as large as possible, while the density for all other classes should be as small as possible.

Soft Learning Vector Quantization

Update rule derived from a maximum log-likelihood approach:

r_i^(new) = r_i^(old) + η · {  u_ij⁺ · (x_j − r_i^(old)),  if c_j = z_i,
                              −u_ij⁻ · (x_j − r_i^(old)),  if c_j ≠ z_i,

where z_i is the class associated with the reference vector r_i and

u_ij⁺ = exp( −(1/(2σ²)) (x_j − r_i^(old))ᵀ(x_j − r_i^(old)) ) / Σ_{r∈R(c_j)} exp( −(1/(2σ²)) (x_j − r^(old))ᵀ(x_j − r^(old)) ),

u_ij⁻ = exp( −(1/(2σ²)) (x_j − r_i^(old))ᵀ(x_j − r_i^(old)) ) / Σ_{r∈Q(c_j)} exp( −(1/(2σ²)) (x_j − r^(old))ᵀ(x_j − r^(old)) ),

and R(c) is the set of reference vectors assigned to class c and Q(c) its complement.
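The soft assignments u_ij⁺ and u_ij⁻ are Gaussian weights normalized within the same-class set R(c) and the other-class set Q(c). A sketch in plain Python (function name and data layout are my own):

```python
import math

def soft_assignments(x, refs, classes, c, sigma):
    """u_ij for soft LVQ: the Gaussian weight of reference vector i,
    normalized over the vectors of the same class R(c) (u_plus)
    or over the vectors of all other classes Q(c) (u_minus)."""
    def g(r):
        return math.exp(-sum((a - b) ** 2 for a, b in zip(x, r))
                        / (2.0 * sigma * sigma))
    same = sum(g(r) for r, z in zip(refs, classes) if z == c)
    other = sum(g(r) for r, z in zip(refs, classes) if z != c)
    u_plus = [g(r) / same if z == c else 0.0 for r, z in zip(refs, classes)]
    u_minus = [g(r) / other if z != c else 0.0 for r, z in zip(refs, classes)]
    return u_plus, u_minus
```

By construction the u_ij⁺ sum to 1 over R(c) and the u_ij⁻ sum to 1 over Q(c), so each data point distributes one unit of attraction and one unit of repulsion.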

Hard Learning Vector Quantization

Idea: Derive a scheme with hard assignments from the soft version.

Approach: Let the size parameter σ of the Gaussian function go to zero.

The resulting update rule is in this case:

r_i^(new) = r_i^(old) + η · {  u_ij⁺ · (x_j − r_i^(old)),  if c_j = z_i,
                              −u_ij⁻ · (x_j − r_i^(old)),  if c_j ≠ z_i,

where

u_ij⁺ = 1, if r_i = argmin_{r∈R(c_j)} d(x_j, r); 0, otherwise
(r_i is the closest vector of the same class),

u_ij⁻ = 1, if r_i = argmin_{r∈Q(c_j)} d(x_j, r); 0, otherwise
(r_i is the closest vector of a different class).

This update rule is stable without a window rule restricting the update.

Learning Vector Quantization: Extensions

Frequency Sensitive Competitive Learning:
The distance to a reference vector is modified according to the number of data points that are assigned to this reference vector.

Fuzzy Learning Vector Quantization:
Exploits the close relationship to fuzzy clustering. Can be seen as an online version of fuzzy clustering. Leads to faster clustering.

Size and Shape Parameters:
Associate each reference vector with a cluster radius and update this radius depending on how close the data points are. Associate each reference vector with a covariance matrix and update this matrix depending on the distribution of the data points.

Demonstration Software: xlvq/wlvq

Demonstration of learning vector quantization:
Visualization of the training process
Arbitrary datasets, but training only in two dimensions

http://www.borgelt.net/lvqd.html

Self-Organizing Maps

Self-Organizing Maps

A self-organizing map or Kohonen feature map is a neural network with a graph G = (U, C) that satisfies the following conditions:

(i) U_hidden = ∅, U_in ∩ U_out = ∅,
(ii) C = U_in × U_out.

The network input function of each output neuron is a distance function of input and weight vector. The activation function of each output neuron is a radial function, i.e. a monotonously decreasing function

f: IR_0^+ → [0, 1] with f(0) = 1 and lim_{x→∞} f(x) = 0.

The output function of each output neuron is the identity. The output is often discretized according to the winner-takes-all principle.

On the output neurons a neighborhood relationship is defined:

d_neurons: U_out × U_out → IR_0^+.

Self-Organizing Maps: Neighborhood

Neighborhood of the output neurons: neurons form a grid (quadratic grid or hexagonal grid).

Thin black lines: indicate nearest neighbors of a neuron.
Thick gray lines: indicate regions assigned to a neuron for visualization.

Topology Preserving Mapping

Images of points close to each other in the original space should be close to each other in the image space.

Example: Robinson projection of the surface of a sphere. The Robinson projection is frequently used for world maps.

Self-Organizing Maps: Neighborhood

Find a topology preserving mapping by respecting the neighborhood.

Reference vector update rule:

r_u^(new) = r_u^(old) + η(t) · f_nb(d_neurons(u, u*), ϱ(t)) · (x − r_u^(old)),

where u* is the winner neuron (reference vector closest to the data point) and the function f_nb is a radial function.

Time-dependent learning rate:
η(t) = η_0 α_η^t, 0 < α_η < 1,   or   η(t) = η_0 t^(−κ_η), κ_η > 0.

Time-dependent neighborhood radius:
ϱ(t) = ϱ_0 α_ϱ^t, 0 < α_ϱ < 1,   or   ϱ(t) = ϱ_0 t^(−κ_ϱ), κ_ϱ > 0.
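One online SOM training step can be sketched as follows, assuming Euclidean distance in the input space and a Gaussian neighborhood function f_nb (the function name and data layout are my own):

```python
import math

def som_step(grid, refs, x, eta, rho):
    """One SOM step: grid[i] is the grid position of output neuron i,
    refs[i] its reference vector. The winner is the neuron whose
    reference vector is closest to x; every neuron is then pulled
    towards x, weighted by its grid distance to the winner."""
    dist2 = lambda a, b: sum((p - q) ** 2 for p, q in zip(a, b))
    winner = min(range(len(refs)), key=lambda i: dist2(refs[i], x))
    for i in range(len(refs)):
        # Gaussian neighborhood strength from the grid distance
        h = math.exp(-dist2(grid[i], grid[winner]) / (2.0 * rho * rho))
        refs[i] = [r + eta * h * (xi - r) for r, xi in zip(refs[i], x)]
    return winner
```

In a full training loop, eta and rho would shrink over time according to the decay schedules above, so the map first unfolds coarsely and then fine-tunes locally.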

Self-Organizing Maps: Examples

Example: Unfolding of a two-dimensional self-organizing map.

Self-Organizing Maps: Examples

Example: Unfolding of a two-dimensional self-organizing map.

Self-Organizing Maps: Examples

Example: Unfolding of a two-dimensional self-organizing map.

Training a self-organizing map may fail if the (initial) learning rate is chosen too small or the (initial) neighborhood radius is chosen too small.

Self-Organizing Maps: Examples

Example: Unfolding of a two-dimensional self-organizing map.

(a) (b) (c)

Self-organizing maps that have been trained with random points from (a) a rotation parabola, (b) a simple cubic function, (c) the surface of a sphere.

In this case the original space and the image space have different dimensionality. Self-organizing maps can be used for dimensionality reduction.

Demonstration Software: xsom/wsom

Demonstration of self-organizing map training:
Visualization of the training process
Two-dimensional areas and three-dimensional surfaces

http://www.borgelt.net/somd.html

Hopfield Networks

Hopfield Networks

A Hopfield network is a neural network with a graph G = (U, C) that satisfies the following conditions:

(i) U_hidden = ∅, U_in = U_out = U,
(ii) C = U × U − {(u, u) | u ∈ U}.

In a Hopfield network all neurons are input as well as output neurons. There are no hidden neurons. Each neuron receives input from all other neurons. A neuron is not connected to itself.

The connection weights are symmetric, i.e.

∀u, v ∈ U, u ≠ v: w_uv = w_vu.

Hopfield Networks

The network input function of each neuron is the weighted sum of the outputs of all other neurons, i.e.

∀u ∈ U: f_net^(u)(w_u, in_u) = w_uᵀ in_u = Σ_{v∈U−{u}} w_uv out_v.

The activation function of each neuron is a threshold function, i.e.

∀u ∈ U: f_act^(u)(net_u, θ_u) = 1, if net_u ≥ θ_u; −1, otherwise.

The output function of each neuron is the identity, i.e.

∀u ∈ U: f_out^(u)(act_u) = act_u.
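The three functions combine into one asynchronous update of a single neuron. A minimal sketch in plain Python (representation of W, θ and the activations as lists is my own choice):

```python
def hopfield_step(W, theta, act, u):
    """Sequential (asynchronous) update of neuron u: compute the
    network input from all other neurons, then apply the threshold
    function; the output is the activation itself (identity)."""
    net = sum(W[u][v] * act[v] for v in range(len(act)) if v != u)
    act[u] = 1 if net >= theta[u] else -1
    return act
```

Updating the neurons one at a time with this function, in any fixed cyclic order, is exactly the sequential update mode whose convergence is proved later in this chapter.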

Hopfield Networks

Alternative activation function:

∀u ∈ U: f_act^(u)(net_u, θ_u, act_u) = 1, if net_u > θ_u; −1, if net_u < θ_u; act_u, if net_u = θ_u.

This activation function has advantages w.r.t. the physical interpretation of a Hopfield network.

General weight matrix of a Hopfield network:

W = (    0       w_{u1u2}  ...  w_{u1un}
      w_{u1u2}      0      ...  w_{u2un}
        ...        ...             ...
      w_{u1un}   w_{u2un}  ...     0     )

Hopfield Networks: Examples

Very simple Hopfield network: two neurons u_1 and u_2, connected with weight 1, both thresholds 0:

W = ( 0 1
      1 0 )

The behavior of a Hopfield network can depend on the update order:
Computations can oscillate if neurons are updated in parallel.
Computations always converge if neurons are updated sequentially.

Hopfield Networks: Examples

Parallel update of neuron activations (input phase, then work phase):

The computations oscillate, no stable state is reached. The output depends on when the computations are terminated.

Hopfield Networks: Examples

Sequential update of neuron activations (input phase, then work phase):

Regardless of the update order a stable state is reached. Which state is reached depends on the update order.

Hopfield Networks: Examples

Simplified representation of a Hopfield network:

(Figure: a three-neuron network u_1, u_2, u_3 with its weight matrix W, which has a zero diagonal.)

Symmetric connections between neurons are combined into a single edge. Inputs and outputs are not explicitly represented.

Hopfield Networks: State Graph

Graph of activation states and transitions:

(Figure: the eight activation states (±1, ±1, ±1) of the example network, with transitions labeled by the updated neuron u_1, u_2 or u_3.)

Hopfield Networks: Convergence

Convergence Theorem: If the activations of the neurons of a Hopfield network are updated sequentially (asynchronously), then a stable state is reached in a finite number of steps.

If the neurons are traversed cyclically in an arbitrary, but fixed order, at most n · 2^n steps (updates of individual neurons) are needed, where n is the number of neurons of the Hopfield network.

The proof is carried out with the help of an energy function. The energy function of a Hopfield network with n neurons u_1, ..., u_n is

E = −(1/2) actᵀ W act + θᵀ act
  = −(1/2) Σ_{u,v∈U, u≠v} w_uv act_u act_v + Σ_{u∈U} θ_u act_u.
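The energy function is easy to evaluate directly from the definition. A sketch in plain Python (the double sum visits each unordered pair twice, which the factor 1/2 compensates, exactly as in the formula):

```python
def hopfield_energy(W, theta, act):
    """E = -1/2 * act^T W act + theta^T act, with zero diagonal in W."""
    n = len(act)
    quad = sum(W[u][v] * act[u] * act[v]
               for u in range(n) for v in range(n) if u != v)
    return -0.5 * quad + sum(theta[u] * act[u] for u in range(n))
```

Tracking this value during sequential updates shows the monotone descent used in the convergence proof: each activation change leaves E unchanged or decreases it, and since there are only finitely many states, a stable state must be reached.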

Hopfield Networks: Convergence

Consider the energy change resulting from an update that changes an activation:

ΔE = E^(new) − E^(old)
   = ( −Σ_{v∈U−{u}} w_uv act_u^(new) act_v + θ_u act_u^(new) )
     − ( −Σ_{v∈U−{u}} w_uv act_u^(old) act_v + θ_u act_u^(old) )
   = ( act_u^(old) − act_u^(new) ) ( Σ_{v∈U−{u}} w_uv act_v − θ_u ),

where Σ_{v∈U−{u}} w_uv act_v = net_u.

net_u < θ_u: The second factor is less than 0.
act_u^(new) = −1 and act_u^(old) = 1, therefore the first factor is greater than 0.
Result: ΔE < 0.

net_u ≥ θ_u: The second factor is greater than or equal to 0.
act_u^(new) = 1 and act_u^(old) = −1, therefore the first factor is less than 0.
Result: ΔE ≤ 0.

Hopfield Networks: Examples

Arrange the states in the state graph according to their energy E.

(Figure: the states of the example network arranged by their energy levels.)

Energy function for the example Hopfield network (all thresholds are 0):

E = −w_{u1u2} act_{u1} act_{u2} − w_{u1u3} act_{u1} act_{u3} − w_{u2u3} act_{u2} act_{u3}.

Hopfield Networks: Examples

The state graph need not be symmetric:

(Figure: a state graph arranged by energy E in which some transitions run in one direction only.)

Hopfield Networks: Physical Interpretation

Physical interpretation: Magnetism

A Hopfield network can be seen as a (microscopic) model of magnetism (so-called Ising model, [Ising 1925]).

physical                                   neural
atom                                       neuron
magnetic moment (spin)                     activation state
strength of outer magnetic field           threshold value
magnetic coupling of the atoms             connection weights
Hamilton operator of the magnetic field    energy function

Hopfield Networks: Associative Memory

Idea: Use stable states to store patterns.

First: Store only one pattern x = (act_{u1}^(l), ..., act_{un}^(l))ᵀ ∈ {−1, 1}^n, n ≥ 2, i.e., find weights, so that the pattern is a stable state.

Necessary and sufficient condition:

S(W x − θ) = x,

where S: IR^n → {−1, 1}^n, x ↦ y, with

∀i ∈ {1, ..., n}: y_i = 1, if x_i ≥ 0; −1, otherwise.

Hopfield Networks: Associative Memory

If θ = 0 an appropriate matrix W can easily be found. It suffices

W x = c x with c ∈ IR⁺.

Algebraically: Find a matrix W that has a positive eigenvalue w.r.t. x.

Choose

W = x xᵀ − E,

where x xᵀ is the so-called outer product and E is the n × n unit matrix. With this matrix we have

W x = (x xᵀ) x − E x = x (xᵀ x) − x = n x − x = (n − 1) x,

since xᵀ x = |x|² = n (as x ∈ {−1, 1}^n).

Hopfield Networks: Associative Memory

Hebbian learning rule [Hebb 1949]

Written in individual weights the computation of the weight matrix reads:

w_uv = 0, if u = v,
       1, if u ≠ v and act_u^(p) = act_v^(p),
       −1, otherwise.

Originally derived from a biological analogy: strengthen the connection between neurons that are active at the same time.

Note that this learning rule also stores the complement of the pattern: with W x = (n − 1) x it is also W(−x) = (n − 1)(−x).

Hopfield Networks: Associative Memory

Storing several patterns

Choose

W = Σ_{i=1}^m W_i = Σ_{i=1}^m ( x_i x_iᵀ − E ),

so that

W x_j = Σ_{i=1}^m x_i ( x_iᵀ x_j ) − m x_j.

If the patterns are orthogonal, we have

x_iᵀ x_j = 0, if i ≠ j; n, if i = j,

and therefore

W x_j = (n − m) x_j.

Hopfield Networks: Associative Memory

Storing several patterns

Result: As long as m < n, x_j is a stable state of the Hopfield network.

Note that the complements of the patterns are also stored: with W x_j = (n − m) x_j it is also W(−x_j) = (n − m)(−x_j).

But: The capacity is very small compared to the number of possible states (2^n).

Non-orthogonal patterns:

W x_j = (n − m) x_j + Σ_{i=1, i≠j}^m x_i ( x_iᵀ x_j ),

where the sum is a "disturbance term".

Associative Memory: Example

Example: Store patterns x_1 = (+1, +1, −1, −1)ᵀ and x_2 = (−1, +1, −1, +1)ᵀ:

W = W_1 + W_2 = x_1 x_1ᵀ + x_2 x_2ᵀ − 2E,

where

W_1 = (  0  1 −1 −1       W_2 = (  0 −1  1 −1
         1  0 −1 −1               −1  0 −1  1
        −1 −1  0  1                1 −1  0 −1
        −1 −1  1  0 ),            −1  1 −1  0 ).

The full weight matrix is:

W = (  0  0  0 −2
       0  0 −2  0
       0 −2  0  0
      −2  0  0  0 ).

Therefore it is

W x_1 = (+2, +2, −2, −2)ᵀ = 2 x_1   and   W x_2 = (−2, +2, −2, +2)ᵀ = 2 x_2.
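The example can be checked numerically. A sketch in plain Python that builds W from the Hebbian rule (zero diagonal, summed outer products off the diagonal) and recalls a pattern by thresholding W x (function names are my own):

```python
def hebbian_weights(patterns):
    """W = sum_i (x_i x_i^T - E) for patterns x_i in {-1, +1}^n:
    the diagonal is 0, entry (u, v) is sum_i x_i[u] * x_i[v]."""
    n = len(patterns[0])
    return [[sum(x[u] * x[v] for x in patterns) if u != v else 0
             for v in range(n)] for u in range(n)]

def recall(W, x):
    """One synchronous step with thresholds 0: sign of W x."""
    y = [sum(W[u][v] * x[v] for v in range(len(x))) for u in range(len(x))]
    return [1 if yu >= 0 else -1 for yu in y]
```

For the two (orthogonal) patterns above, with n = 4 and m = 2, recall reproduces each stored pattern, matching W x_j = (n − m) x_j = 2 x_j.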

Associative Memory: Examples

Example: Storing bit maps of numbers.

Left: Bit maps stored in a Hopfield network.
Right: Reconstruction of a pattern from a random input.

Hopfield Networks: Associative Memory

Training a Hopfield network with the Delta rule

Necessary condition for pattern x being a stable state:

s( 0 + w_{u1u2} act_{u2}^(p) + ... + w_{u1un} act_{un}^(p) − θ_{u1} ) = act_{u1}^(p),
s( w_{u2u1} act_{u1}^(p) + 0 + ... + w_{u2un} act_{un}^(p) − θ_{u2} ) = act_{u2}^(p),
  ...
s( w_{unu1} act_{u1}^(p) + w_{unu2} act_{u2}^(p) + ... + 0 − θ_{un} ) = act_{un}^(p),

with the standard threshold function

s(x) = 1, if x ≥ 0; −1, otherwise.

Hopfield Networks: Associative Memory

Training a Hopfield network with the Delta rule

Turn the weight matrix into a weight vector:

w = ( w_{u1u2}, w_{u1u3}, ..., w_{u1un},
                w_{u2u3}, ..., w_{u2un},
                          ...
                               w_{u_{n−1}u_n},
      −θ_{u1}, −θ_{u2}, ..., −θ_{un} ).

Construct input vectors for a threshold logic unit:

z_2 = ( act_{u1}^(p), 0, ..., 0, act_{u3}^(p), ..., act_{un}^(p), ..., 0, 1, 0, ..., 0 )
                     (n − 2 zeros)                                   (n − 2 zeros)

Apply Delta rule training until convergence.

Demonstration Software: xhfn/whfn

Demonstration of Hopfield networks as associative memory:
Visualization of the association/recognition process
Two-dimensional networks of arbitrary size

http://www.borgelt.net/hfnd.html