Introduction to Neural Networks


1 Introduction to Neural Networks
Christian Borgelt
Intelligent Data Analysis and Graphical Models Research Unit
European Center for Soft Computing
c/ Gonzalo Gutiérrez Quirós s/n, Mieres, Spain
Christian Borgelt Introduction to Neural Networks

2 Contents
Introduction: Motivation, Biological Background
Threshold Logic Units: Definition, Geometric Interpretation, Limitations, Networks of TLUs, Training
General Neural Networks: Structure, Operation, Training
Multilayer Perceptrons: Definition, Function Approximation, Gradient Descent, Backpropagation, Variants, Sensitivity Analysis
Radial Basis Function Networks: Definition, Function Approximation, Initialization, Training, Generalized Version
Self-Organizing Maps: Definition, Learning Vector Quantization, Neighborhood of Output Neurons
Hopfield Networks: Definition, Convergence, Associative Memory, Solving Optimization Problems
Recurrent Neural Networks: Differential Equations, Vector Networks, Backpropagation through Time

3 Motivation: Why (Artificial) Neural Networks?
(Neuro-)Biology / (Neuro-)Physiology / Psychology: Exploit the similarity to real (biological) neural networks. Build models to understand nerve and brain operation by simulation.
Computer Science / Engineering / Economics: Mimic certain cognitive capabilities of human beings. Solve learning/adaptation, prediction, and optimization problems.
Physics / Chemistry: Use neural network models to describe physical phenomena. Special case: spin glasses (alloys of magnetic and non-magnetic metals).

4 Motivation: Why Neural Networks in AI?
Physical-Symbol System Hypothesis [Newell and Simon 1976]: A physical-symbol system has the necessary and sufficient means for general intelligent action.
Neural networks process simple signals, not symbols. So why study neural networks in Artificial Intelligence?
Symbol-based representations work well for inference tasks, but are fairly bad for perception tasks.
Symbol-based expert systems tend to get slower with growing knowledge, while human experts tend to get faster.
Neural networks allow for highly parallel information processing.
There are several successful applications in industry and finance.

5 Biological Background Structure of a prototypical biological neuron. (Figure labels: terminal button, synapse, dendrites, cell core (nucleus), cell body (soma), axon, myelin sheath.)

6 Biological Background (Very) simplified description of neural information processing: The axon terminal releases chemicals, called neurotransmitters. These act on the membrane of the receptor dendrite to change its polarization. (The inside is usually 70 mV more negative than the outside.) A decrease in the potential difference: excitatory synapse. An increase in the potential difference: inhibitory synapse. If there is enough net excitatory input, the axon is depolarized. The resulting action potential travels along the axon. (Its speed depends on the degree to which the axon is covered with myelin.) When the action potential reaches the terminal buttons, it triggers the release of neurotransmitters.

7 Threshold Logic Units

8 Threshold Logic Units A Threshold Logic Unit (TLU) is a processing unit for numbers with n inputs x_1, ..., x_n and one output y. The unit has a threshold θ and each input x_i is associated with a weight w_i. A threshold logic unit computes the function
y = 1, if x · w = Σ_{i=1}^n w_i x_i ≥ θ,
y = 0, otherwise.
(Diagram: the inputs x_1, ..., x_n enter with weights w_1, ..., w_n a unit with threshold θ that emits the output y.)
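As a quick sketch (not part of the original slides), a TLU of this kind takes only a few lines of Python; the conjunction parameters w_1 = 3, w_2 = 2, θ = 4 used below are one valid choice:

```python
def tlu(inputs, weights, theta):
    """Threshold logic unit: output 1 iff the weighted sum of the
    inputs reaches the threshold theta."""
    net = sum(w * x for w, x in zip(weights, inputs))
    return 1 if net >= theta else 0

# conjunction x1 AND x2 with one valid parameter choice
print(tlu((1, 1), (3, 2), 4))  # → 1
print(tlu((1, 0), (3, 2), 4))  # → 0
```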

9 Threshold Logic Units: Examples Threshold logic unit for the conjunction y = x_1 ∧ x_2, with weights w_1 = 3, w_2 = 2 and threshold θ = 4:
x_1 x_2 | 3x_1 + 2x_2 | y
0 0 | 0 | 0
1 0 | 3 | 0
0 1 | 2 | 0
1 1 | 5 | 1
Threshold logic unit for the implication y = x_2 → x_1, e.g. with weights w_1 = 2, w_2 = −2 and threshold θ = −1:
x_1 x_2 | 2x_1 − 2x_2 | y
0 0 | 0 | 1
1 0 | 2 | 1
0 1 | −2 | 0
1 1 | 0 | 1

10 Threshold Logic Units: Examples Threshold logic unit for (x_1 ∧ x_2) ∨ (x_1 ∧ x_3) ∨ (x_2 ∧ x_3), e.g. with weights w_1 = w_2 = w_3 = 2 and threshold θ = 3: the output is y = 1 iff Σ_i w_i x_i ≥ 3, i.e. iff at least two of the three inputs are 1.

11 Threshold Logic Units: Geometric Interpretation Review of line representations. Straight lines are usually represented in one of the following forms:
Explicit Form: g ≡ x_2 = b x_1 + c
Implicit Form: g ≡ a_1 x_1 + a_2 x_2 + d = 0
Point-Direction Form: g ≡ x = p + k r
Normal Form: g ≡ (x − p) · n = 0
with the parameters:
b: gradient (slope) of the line
c: section of the x_2 axis (intercept)
p: vector of a point of the line (base vector)
r: direction vector of the line
n: normal vector of the line

12 Threshold Logic Units: Geometric Interpretation (Figure: a straight line g and its defining parameters: base vector p, direction vector r, normal vector n = (a_1, a_2), slope b, intercept c, and the distance d = −p · n of the line from the origin.)

13 Threshold Logic Units: Geometric Interpretation (Figure: how to determine the side on which a point x lies: compare the projection z = (x · n)/|n| of x onto the normal direction with the line's distance from the origin.)

14 Threshold Logic Units: Geometric Interpretation (Figures: the threshold logic units for x_1 ∧ x_2 and for x_2 → x_1 together with the separating lines they define in the input space; the points mapped to 0 and the points mapped to 1 lie on opposite sides of the line.)

15 Threshold Logic Units: Geometric Interpretation Visualization of 3-dimensional Boolean functions: the inputs are the corners of the unit cube between (0, 0, 0) and (1, 1, 1). (Figure: the threshold logic unit for (x_1 ∧ x_2) ∨ (x_1 ∧ x_3) ∨ (x_2 ∧ x_3) and the separating plane it defines in the unit cube.)

16 Threshold Logic Units: Limitations The biimplication problem x_1 ↔ x_2: there is no separating line.
x_1 x_2 | y
0 0 | 1
0 1 | 0
1 0 | 0
1 1 | 1
Formal proof by reductio ad absurdum:
since (0, 0) ↦ 1: 0 ≥ θ, (1)
since (1, 0) ↦ 0: w_1 < θ, (2)
since (0, 1) ↦ 0: w_2 < θ, (3)
since (1, 1) ↦ 1: w_1 + w_2 ≥ θ. (4)
(2) and (3): w_1 + w_2 < 2θ. With (4): 2θ > θ, i.e. θ > 0. Contradiction to (1).

17 Threshold Logic Units: Limitations Total number and number of linearly separable Boolean functions ([Widner 1960] as cited in [Zell 1994]):
inputs | Boolean functions (2^(2^n)) | linearly separable functions
1 | 4 | 4
2 | 16 | 14
3 | 256 | 104
4 | 65536 | 1882
5 | ≈ 4.3 · 10^9 | 94572
6 | ≈ 1.8 · 10^19 | ≈ 1.5 · 10^7
For many inputs a threshold logic unit can compute almost no functions. Networks of threshold logic units are needed to overcome the limitations.
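The two-input row of the table can be checked by brute force; the following sketch (an illustration, not part of the slides) enumerates all 16 two-input Boolean functions and searches a small integer grid for separating weights and thresholds:

```python
from itertools import product

def separable(truth):
    """Check whether a two-input Boolean function (given as its outputs
    for (0,0), (0,1), (1,0), (1,1)) is computable by a single TLU,
    searching integer weights and thresholds in a small grid."""
    grid = range(-3, 4)
    inputs = list(product((0, 1), repeat=2))
    for w1, w2, theta in product(grid, repeat=3):
        if all((1 if w1 * x1 + w2 * x2 >= theta else 0) == t
               for (x1, x2), t in zip(inputs, truth)):
            return True
    return False

# count the linearly separable functions among all 16
count = sum(separable(t) for t in product((0, 1), repeat=4))
print(count)  # → 14
```

Only XOR and the biimplication fail the search, matching the proof on the preceding slide.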

18 Networks of Threshold Logic Units Solving the biimplication problem with a network. Idea: logical decomposition
x_1 ↔ x_2 ≡ (x_1 → x_2) ∧ (x_2 → x_1)
(Figure: a two-layer network in which one hidden unit computes y_1 = x_1 → x_2, a second hidden unit computes y_2 = x_2 → x_1, and the output unit computes y = y_1 ∧ y_2 = x_1 ↔ x_2.)

19 Networks of Threshold Logic Units Solving the biimplication problem: Geometric interpretation (Figure: the input space with the two separating lines g_1 and g_2 of the hidden units, and the transformed space in which the problem becomes linearly separable.) The first layer computes new Boolean coordinates for the points. After the coordinate transformation the problem is linearly separable.

20 Representing Arbitrary Boolean Functions Let y = f(x_1, ..., x_n) be a Boolean function of n variables.
(i) Represent f(x_1, ..., x_n) in disjunctive normal form. That is, determine D_f = K_1 ∨ ... ∨ K_m, where all K_j are conjunctions of n literals, i.e., K_j = l_{j1} ∧ ... ∧ l_{jn} with l_{ji} = x_i (positive literal) or l_{ji} = ¬x_i (negative literal).
(ii) Create a neuron for each conjunction K_j of the disjunctive normal form (having n inputs, one input for each variable), where
w_{ji} = 2, if l_{ji} = x_i, w_{ji} = −2, if l_{ji} = ¬x_i, and θ_j = n − 1 + (1/2) Σ_{i=1}^n w_{ji}.
(iii) Create an output neuron (having m inputs, one input for each neuron that was created in step (ii)), where
w_{(n+1)k} = 2, k = 1, ..., m, and θ_{n+1} = 1.
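The construction in steps (i)–(iii) can be sketched directly in Python (an illustration, not part of the slides); the test case is the biimplication as the DNF (x_1 ∧ x_2) ∨ (¬x_1 ∧ ¬x_2):

```python
from itertools import product

def dnf_network(terms, n):
    """Hidden layer of the DNF construction: one TLU per conjunction.
    Each term is a tuple of n signs, +1 for x_i and -1 for ¬x_i."""
    layer = []
    for signs in terms:
        weights = [2 * s for s in signs]      # w_ji = ±2
        theta = n - 1 + sum(weights) / 2      # θ_j = n - 1 + ½ Σ w_ji
        layer.append((weights, theta))
    return layer

def evaluate(layer, x):
    """Output neuron: all weights 2, threshold 1 (a disjunction)."""
    hidden = [1 if sum(w * xi for w, xi in zip(ws, x)) >= th else 0
              for ws, th in layer]
    return 1 if sum(2 * h for h in hidden) >= 1 else 0

net = dnf_network([(1, 1), (-1, -1)], n=2)    # x1 <-> x2
for x in product((0, 1), repeat=2):
    print(x, evaluate(net, x))
```

Each hidden neuron fires exactly for "its" conjunction's input combination, and the output neuron ORs the hidden outputs.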

21 Training Threshold Logic Units

22 Training Threshold Logic Units Geometric interpretation provides a way to construct threshold logic units with 2 and 3 inputs, but: it is not an automatic method (human visualization is needed), and it is not feasible for more than 3 inputs. General idea of automatic training: Start with random values for weights and threshold. Determine the error of the output for a set of training patterns. The error is a function of the weights and the threshold: e = e(w_1, ..., w_n, θ). Adapt the weights and the threshold so that the error gets smaller. Iterate the adaptation until the error vanishes.

23 Training Threshold Logic Units Single input threshold logic unit for the negation ¬x:
x | y
0 | 1
1 | 0
(Figures: the output error as a function of weight and threshold, shown for x = 0, for x = 1, and as the sum of the errors.)

24 Training Threshold Logic Units The error function cannot be used directly, because it consists of plateaus. Solution: If the computed output is wrong, take into account how far the weighted sum is from the threshold. (Figures: the modified output error as a function of weight and threshold, for x = 0, for x = 1, and the sum of errors.)

25 Training Threshold Logic Units (Figures: schemata of the resulting directions of parameter changes in the (w, θ) plane, for x = 0, for x = 1, and the sum of changes.) Start at a random point. Iteratively adapt the parameters according to the direction corresponding to the current point.

26 Training Threshold Logic Units Example training procedure: online and batch training. (Figures: the trajectories of (w, θ) during online training and during batch training, the error surface for batch training, and the resulting unit computing ¬x.)

27 Training Threshold Logic Units: Delta Rule Formal Training Rule: Let x = (x_1, ..., x_n) be an input vector of a threshold logic unit, o the desired output for this input vector, and y the actual output of the threshold logic unit. If y ≠ o, then the threshold θ and the weight vector w = (w_1, ..., w_n) are adapted as follows in order to reduce the error:
θ^(new) = θ^(old) + Δθ with Δθ = −η(o − y),
∀i ∈ {1, ..., n}: w_i^(new) = w_i^(old) + Δw_i with Δw_i = η(o − y)x_i,
where η is a parameter that is called learning rate. It determines the severity of the weight changes. This procedure is called Delta Rule or Widrow–Hoff Procedure [Widrow and Hoff 1960].
Online Training: Adapt parameters after each training pattern.
Batch Training: Adapt parameters only at the end of each epoch, i.e. after a traversal of all training patterns.

28 Training Threshold Logic Units: Delta Rule Turning the threshold value into a weight: introduce an additional input x_0 that is fixed to 1 with the weight w_0 = −θ. Then the condition Σ_{i=1}^n w_i x_i ≥ θ becomes Σ_{i=1}^n w_i x_i − θ ≥ 0, i.e. Σ_{i=0}^n w_i x_i ≥ 0, so the threshold can be trained like a weight.

29 Training Threshold Logic Units: Delta Rule
procedure online training (var w, var θ, L, η);
var y, e; (* output, sum of errors *)
begin
repeat
e := 0; (* initialize the error sum *)
for all (x, o) ∈ L do begin (* traverse the patterns *)
if (w · x ≥ θ) then y := 1; (* compute the output *)
else y := 0; (* of the threshold logic unit *)
if (y ≠ o) then begin (* if the output is wrong *)
θ := θ − η(o − y); (* adapt the threshold *)
w := w + η(o − y)x; (* and the weights *)
e := e + |o − y|; (* sum the errors *)
end;
end;
until (e ≤ 0); (* repeat the computations *)
end; (* until the error vanishes *)

30 Training Threshold Logic Units: Delta Rule
procedure batch training (var w, var θ, L, η);
var y, e, (* output, sum of errors *)
θ_c, w_c; (* summed changes *)
begin
repeat
e := 0; θ_c := 0; w_c := 0; (* initializations *)
for all (x, o) ∈ L do begin (* traverse the patterns *)
if (w · x ≥ θ) then y := 1; (* compute the output *)
else y := 0; (* of the threshold logic unit *)
if (y ≠ o) then begin (* if the output is wrong *)
θ_c := θ_c − η(o − y); (* sum the changes of the *)
w_c := w_c + η(o − y)x; (* threshold and the weights *)
e := e + |o − y|; (* sum the errors *)
end;
end;
θ := θ + θ_c; (* adapt the threshold *)
w := w + w_c; (* and the weights *)
until (e ≤ 0); (* repeat the computations *)
end; (* until the error vanishes *)
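The online procedure above translates almost line by line into Python; the following sketch (an illustration, not part of the slides) trains a TLU on the conjunction:

```python
def online_training(weights, theta, patterns, eta=1.0, max_epochs=1000):
    """Delta-rule (online) training of a single TLU; patterns is a
    list of (input vector, desired output) pairs."""
    for _ in range(max_epochs):
        error = 0
        for x, o in patterns:
            y = 1 if sum(w * xi for w, xi in zip(weights, x)) >= theta else 0
            if y != o:                                    # output is wrong
                theta -= eta * (o - y)                    # adapt threshold
                weights = [w + eta * (o - y) * xi
                           for w, xi in zip(weights, x)]  # adapt weights
                error += abs(o - y)
        if error == 0:                                    # error vanished
            break
    return weights, theta

# learn the conjunction x1 AND x2
patterns = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, th = online_training([0.0, 0.0], 0.0, patterns)
```

For linearly separable data such as the conjunction the loop terminates with a correct unit; for the biimplication the error never vanishes and only the epoch limit stops the loop.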

31 Training Threshold Logic Units: Online (Table: epoch-by-epoch values of x, o, x · w, y, e, Δθ, Δw, θ, and w during online training of the negation unit.)

32 Training Threshold Logic Units: Batch (Table: the corresponding epoch-by-epoch values for batch training of the negation unit.)

33 Training Threshold Logic Units: Conjunction Threshold logic unit with two inputs for the conjunction y = x_1 ∧ x_2:
x_1 x_2 | y
0 0 | 0
0 1 | 0
1 0 | 0
1 1 | 1
(Figure: the unit with weights w_1, w_2 and threshold θ, and the separating line it defines in the input space.)

34 Training Threshold Logic Units: Conjunction (Table: epoch-by-epoch values of x_1, x_2, o, x · w, y, e, Δθ, Δw_1, Δw_2, θ, w_1, and w_2 while training the conjunction with the delta rule.)

35 Training Threshold Logic Units: Biimplication (Table: the corresponding training protocol for the biimplication; the error does not vanish, since the biimplication is not linearly separable.)

36 Training Threshold Logic Units: Convergence Convergence Theorem: Let L = {(x_1, o_1), ..., (x_m, o_m)} be a set of training patterns, each consisting of an input vector x_i ∈ ℝ^n and a desired output o_i ∈ {0, 1}. Furthermore, let L_0 = {(x, o) ∈ L | o = 0} and L_1 = {(x, o) ∈ L | o = 1}. If L_0 and L_1 are linearly separable, i.e., if w ∈ ℝ^n and θ ∈ ℝ exist such that
∀(x, 0) ∈ L_0: w · x < θ and ∀(x, 1) ∈ L_1: w · x ≥ θ,
then online as well as batch training terminate.
The algorithms terminate only when the error vanishes. Therefore the resulting threshold and weights must solve the problem. For problems that are not linearly separable the algorithms do not terminate.

37 Training Networks of Threshold Logic Units Single threshold logic units have strong limitations: they can only compute linearly separable functions. Networks of threshold logic units can compute arbitrary Boolean functions. Training single threshold logic units with the delta rule is fast and guaranteed to find a solution if one exists. Networks of threshold logic units cannot be trained with the delta rule, because
there are no desired values for the neurons of the first layer, and
the problem can usually be solved with several different functions computed by the neurons of the first layer.
When this situation became clear, neural networks were seen as a research dead end.

38 General (Artificial) Neural Networks

39 General Neural Networks Basic graph theoretic notions: A (directed) graph is a pair G = (V, E) consisting of a (finite) set V of nodes or vertices and a (finite) set E ⊆ V × V of edges. We call an edge e = (u, v) ∈ E directed from node u to node v. Let G = (V, E) be a (directed) graph and u ∈ V a node. Then the nodes of the set pred(u) = {v ∈ V | (v, u) ∈ E} are called the predecessors of the node u, and the nodes of the set succ(u) = {v ∈ V | (u, v) ∈ E} are called the successors of the node u.

40 General Neural Networks General definition of a neural network: An (artificial) neural network is a (directed) graph G = (U, C), whose nodes u ∈ U are called neurons or units and whose edges c ∈ C are called connections. The set U of nodes is partitioned into the set U_in of input neurons, the set U_out of output neurons, and the set U_hidden of hidden neurons. It is
U = U_in ∪ U_out ∪ U_hidden,
U_in ≠ ∅, U_out ≠ ∅, U_hidden ∩ (U_in ∪ U_out) = ∅.

41 General Neural Networks Each connection (v, u) ∈ C possesses a weight w_uv and each neuron u ∈ U possesses three (real-valued) state variables:
the network input net_u, the activation act_u, and the output out_u.
Each input neuron u ∈ U_in also possesses a fourth (real-valued) state variable, the external input ex_u. Furthermore, each neuron u ∈ U possesses three functions:
the network input function f_net^(u): ℝ^(2|pred(u)|+κ_1(u)) → ℝ,
the activation function f_act^(u): ℝ^(κ_2(u)) → ℝ, and
the output function f_out^(u): ℝ → ℝ,
which are used to compute the values of the state variables.

42 General Neural Networks Types of (artificial) neural networks: If the graph of a neural network is acyclic, it is called a feed-forward network. If the graph of a neural network contains cycles (backward connections), it is called a recurrent network. Representation of the connection weights of a network with neurons u_1, ..., u_r by an r × r matrix: the entry in row i and column j is the weight w_{u_iu_j} of the connection from neuron u_j to neuron u_i (0 if there is no such connection).

43 General Neural Networks: Example A simple recurrent neural network. (Figure: a network of three neurons u_1, u_2, u_3 with inputs x_1, x_2, output y, and a backward connection; the weight matrix of this network lists the weights of the connections between u_1, u_2, and u_3.)

44 Structure of a Generalized Neuron A generalized neuron is a simple numeric processor. (Figure: the inputs in_{uv_1} = out_{v_1}, ..., in_{uv_n} = out_{v_n}, the weights w_{uv_1}, ..., w_{uv_n}, and the external input ex_u are combined by the network input function f_net^(u) (with parameters σ_1, ..., σ_l) into the network input net_u; the activation function f_act^(u) (with parameters θ_1, ..., θ_k) computes the activation act_u; the output function f_out^(u) computes the output out_u.)

45 General Neural Networks: Example The functions of the neurons of the example network:
f_net^(u)(w_u, in_u) = Σ_{v∈pred(u)} w_uv in_uv = Σ_{v∈pred(u)} w_uv out_v
f_act^(u)(net_u, θ) = 1, if net_u ≥ θ; 0, otherwise.
f_out^(u)(act_u) = act_u

46 General Neural Networks: Example Updating the activations of the neurons. (Table: the input phase and work phase activations of u_1, u_2, u_3 and the successive network inputs.) Order in which the neurons are updated: u_3, u_1, u_2, u_3, u_1, u_2, u_3, ... A stable state with a unique output is reached.

47 General Neural Networks: Example Updating the activations of the neurons. (Table: a second update sequence of the same network.) Order in which the neurons are updated: u_3, u_1, u_2, u_3, u_1, u_2, u_3, ... No stable state is reached (the output oscillates).

48 General Neural Networks: Training Definition of learning tasks for a neural network: A fixed learning task L_fixed for a neural network with n input neurons, i.e. U_in = {u_1, ..., u_n}, and m output neurons, i.e. U_out = {v_1, ..., v_m}, is a set of training patterns l = (ı^(l), o^(l)), each consisting of
an input vector ı^(l) = (ex_{u_1}^(l), ..., ex_{u_n}^(l)) and
an output vector o^(l) = (o_{v_1}^(l), ..., o_{v_m}^(l)).
A fixed learning task is solved if, for all training patterns l ∈ L_fixed, the neural network computes from the external inputs contained in the input vector ı^(l) of a training pattern l the outputs contained in the corresponding output vector o^(l).

49 General Neural Networks: Training Solving a fixed learning task: Error definition. Measure how well a neural network solves a given fixed learning task. Compute the differences between desired and actual outputs. Do not sum the differences directly in order to avoid errors canceling each other. The square has favorable properties for deriving the adaptation rules.
e = Σ_{l∈L_fixed} e^(l) = Σ_{v∈U_out} e_v = Σ_{l∈L_fixed} Σ_{v∈U_out} e_v^(l), where e_v^(l) = (o_v^(l) − out_v^(l))².

50 General Neural Networks: Training Definition of learning tasks for a neural network: A free learning task L_free for a neural network with n input neurons, i.e. U_in = {u_1, ..., u_n}, is a set of training patterns l = (ı^(l)), each consisting of an input vector ı^(l) = (ex_{u_1}^(l), ..., ex_{u_n}^(l)). Properties: There is no desired output for the training patterns. The outputs can be chosen freely by the training method. Solution idea: Similar inputs should lead to similar outputs (clustering of the input vectors).

51 General Neural Networks: Preprocessing Normalization of the input vectors. Compute the expected value and the standard deviation for each input:
μ_k = (1/|L|) Σ_{l∈L} ex_{u_k}^(l) and σ_k = sqrt( (1/|L|) Σ_{l∈L} ( ex_{u_k}^(l) − μ_k )² ).
Normalize the input vectors to expected value 0 and standard deviation 1:
ex_{u_k}^(l)(new) = ( ex_{u_k}^(l)(old) − μ_k ) / σ_k.
This avoids unit and scaling problems.
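The normalization step can be sketched as follows (an illustration, not part of the slides):

```python
from math import sqrt

def normalize(values):
    """Normalize one input over all training patterns to expected
    value 0 and standard deviation 1 (z-score normalization)."""
    mu = sum(values) / len(values)
    sigma = sqrt(sum((v - mu) ** 2 for v in values) / len(values))
    return [(v - mu) / sigma for v in values]

print(normalize([2.0, 4.0, 6.0]))
```

After the transformation the values have mean 0 and (population) variance 1.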

52 Multilayer Perceptrons (MLPs)

53 Multilayer Perceptrons An r-layer perceptron is a neural network with a graph G = (U, C) that satisfies the following conditions:
(i) U_in ∩ U_out = ∅,
(ii) U_hidden = U_hidden^(1) ∪ ... ∪ U_hidden^(r−2), and ∀1 ≤ i < j ≤ r − 2: U_hidden^(i) ∩ U_hidden^(j) = ∅,
(iii) C ⊆ (U_in × U_hidden^(1)) ∪ ( ∪_{i=1}^{r−3} U_hidden^(i) × U_hidden^(i+1) ) ∪ (U_hidden^(r−2) × U_out)
or, if there are no hidden neurons (r = 2, U_hidden = ∅), C ⊆ U_in × U_out.
It is a feed-forward network with a strictly layered structure.

54 Multilayer Perceptrons General structure of a multilayer perceptron. (Figure: the inputs x_1, ..., x_n feed the input layer U_in; the hidden layers U_hidden^(1), ..., U_hidden^(r−2) follow; the output layer U_out produces the outputs y_1, ..., y_m.)

55 Multilayer Perceptrons The network input function of each hidden neuron and of each output neuron is the weighted sum of its inputs, i.e.
∀u ∈ U_hidden ∪ U_out: f_net^(u)(w_u, in_u) = w_u · in_u = Σ_{v∈pred(u)} w_uv out_v.
The activation function of each hidden neuron is a so-called sigmoid function, i.e. a monotonically increasing function f: ℝ → [0, 1] with lim_{x→−∞} f(x) = 0 and lim_{x→∞} f(x) = 1.
The activation function of each output neuron is either also a sigmoid function or a linear function, i.e. f_act(net, θ) = α net − θ.

56 Sigmoid Activation Functions
step function: f_act(net, θ) = 1, if net ≥ θ; 0, otherwise.
semi-linear function: f_act(net, θ) = 1, if net > θ + 1/2; 0, if net < θ − 1/2; (net − θ) + 1/2, otherwise.
sine until saturation: f_act(net, θ) = 1, if net > θ + π/2; 0, if net < θ − π/2; (sin(net − θ) + 1)/2, otherwise.
logistic function: f_act(net, θ) = 1/(1 + e^(−(net−θ)))
(Figures: the graphs of the four functions over net, each centered at the threshold θ.)

57 Sigmoid Activation Functions All sigmoid functions on the previous slide are unipolar, i.e., they range from 0 to 1. Sometimes bipolar sigmoid functions are used, like the tangens hyperbolicus.
tangens hyperbolicus: f_act(net, θ) = tanh(net − θ) = 2/(1 + e^(−2(net−θ))) − 1
(Figure: the graph of the function, ranging from −1 to 1.)
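The five activation functions can be written down directly (a sketch, not part of the slides); at net = θ the step function yields 1, the other unipolar functions 1/2, and the bipolar tanh 0:

```python
from math import exp, pi, sin, tanh

def step(net, theta):
    return 1.0 if net >= theta else 0.0

def semi_linear(net, theta):
    if net > theta + 0.5:
        return 1.0
    if net < theta - 0.5:
        return 0.0
    return (net - theta) + 0.5

def sine_saturation(net, theta):
    if net > theta + pi / 2:
        return 1.0
    if net < theta - pi / 2:
        return 0.0
    return (sin(net - theta) + 1.0) / 2.0

def logistic(net, theta):
    return 1.0 / (1.0 + exp(-(net - theta)))

def tanh_bipolar(net, theta):   # bipolar: ranges over (-1, 1)
    return tanh(net - theta)
```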

58 Multilayer Perceptrons: Weight Matrices Let U_1 = {v_1, ..., v_m} and U_2 = {u_1, ..., u_n} be the neurons of two consecutive layers of a multilayer perceptron. Their connection weights are represented by an n × m matrix
W = ( w_{u_1v_1} ... w_{u_1v_m} ; ... ; w_{u_nv_1} ... w_{u_nv_m} ),
where w_{u_iv_j} = 0 if there is no connection from neuron v_j to neuron u_i. Advantage: The computation of the network input can be written as
net_{U_2} = W · in_{U_2} = W · out_{U_1},
where net_{U_2} = (net_{u_1}, ..., net_{u_n})ᵀ and in_{U_2} = out_{U_1} = (out_{v_1}, ..., out_{v_m})ᵀ.
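As a small sketch (not from the slides), the matrix form of the network input computation in pure Python:

```python
def layer_net(W, out_prev):
    """net_{U2} = W · out_{U1}: entry i is Σ_j W[i][j] * out_prev[j]."""
    return [sum(w * o for w, o in zip(row, out_prev)) for row in W]

W = [[1.0, 2.0],
     [0.5, -1.0],
     [0.0, 3.0]]                 # 3 neurons in U2, 2 neurons in U1
print(layer_net(W, [1.0, 1.0]))  # → [3.0, -0.5, 3.0]
```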

59 Multilayer Perceptrons: Biimplication Solving the biimplication problem with a multilayer perceptron. (Figure: the network with input layer U_in, hidden layer U_hidden, and output neuron, together with its connection weight matrices W_1 and W_2.) Note the additional input neurons compared to the TLU solution.

60 Multilayer Perceptrons: Fredkin Gate (Figure and truth table: the Fredkin gate has a switch input s and two data inputs x_1, x_2, and outputs y_1, y_2. For s = 0 the inputs are passed through, y_1 = x_1 and y_2 = x_2; for s = 1 they are exchanged, y_1 = x_2 and y_2 = x_1.)

61 Multilayer Perceptrons: Fredkin Gate (Figure: a multilayer perceptron with input layer U_in, hidden layer U_hidden, and output layer U_out that computes the two outputs y_1 and y_2 of the Fredkin gate from s, x_1, x_2, with its weight matrices W_1 and W_2.)

62 Why Non-linear Activation Functions? With the weight matrices we have for two consecutive layers U_1 and U_2:
net_{U_2} = W · in_{U_2} = W · out_{U_1}.
If the activation functions are linear, i.e. f_act(net, θ) = α net − θ, the activations of the neurons in the layer U_2 can be computed as
act_{U_2} = D_act · net_{U_2} − θ,
where act_{U_2} = (act_{u_1}, ..., act_{u_n})ᵀ is the activation vector, D_act is an n × n diagonal matrix of the factors α_{u_i}, i = 1, ..., n, and θ = (θ_{u_1}, ..., θ_{u_n})ᵀ is a bias vector.

63 Why Non-linear Activation Functions? If the output function is also linear, it is analogously
out_{U_2} = D_out · act_{U_2} − ξ,
where out_{U_2} = (out_{u_1}, ..., out_{u_n})ᵀ is the output vector, D_out is again an n × n diagonal matrix of factors, and ξ = (ξ_{u_1}, ..., ξ_{u_n})ᵀ a bias vector. Combining these computations we get
out_{U_2} = D_out · ( D_act · ( W · out_{U_1} ) − θ ) − ξ
and thus
out_{U_2} = A_{12} · out_{U_1} + b_{12}
with an n × m matrix A_{12} and an n-dimensional vector b_{12}.

64 Why Non-linear Activation Functions? Therefore we have
out_{U_2} = A_{12} · out_{U_1} + b_{12} and out_{U_3} = A_{23} · out_{U_2} + b_{23}
for the computations of two consecutive layers U_2 and U_3. These two computations can be combined into
out_{U_3} = A_{13} · out_{U_1} + b_{13},
where A_{13} = A_{23} · A_{12} and b_{13} = A_{23} · b_{12} + b_{23}.
Result: With linear activation and output functions any multilayer perceptron can be reduced to a two-layer perceptron.
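The collapse of two affine layers into one can be verified numerically (an illustration with arbitrary example matrices, not from the slides):

```python
def affine(A, b, x):
    """Compute A·x + b for a matrix A and vectors b, x."""
    return [sum(a * xi for a, xi in zip(row, x)) + bi
            for row, bi in zip(A, b)]

A12, b12 = [[1.0, 2.0], [0.0, 1.0]], [1.0, -1.0]
A23, b23 = [[2.0, 0.0], [1.0, 1.0]], [0.5, 0.0]
x = [3.0, 4.0]

# applying the two layers one after the other ...
two_steps = affine(A23, b23, affine(A12, b12, x))

# ... equals one collapsed layer: A13 = A23·A12, b13 = A23·b12 + b23
A13 = [[sum(A23[i][k] * A12[k][j] for k in range(2)) for j in range(2)]
       for i in range(2)]
b13 = affine(A23, b23, b12)
one_step = affine(A13, b13, x)
print(two_steps, one_step)  # both [24.5, 15.0]
```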

65 Multilayer Perceptrons: Function Approximation General idea of function approximation: Approximate a given function by a step function. Construct a neural network that computes the step function. (Figure: a function y = f(x) on an interval, approximated by a step function with heights y_0, ..., y_4 on the subintervals defined by the borders x_1, ..., x_4.)

66 Multilayer Perceptrons: Function Approximation (Figure: a neural network that computes the step function of the preceding slide; the first hidden layer detects the interval borders x_1, ..., x_4, the second hidden layer marks the individual intervals, and the output neuron, which uses the identity id as its activation function, sums the step heights with corresponding weights.)

67 Multilayer Perceptrons: Function Approximation Theorem: Any Riemann-integrable function can be approximated with arbitrary accuracy by a four-layer perceptron. But: The error is measured as the area between the functions. A more sophisticated mathematical examination allows a stronger assertion: With a three-layer perceptron any continuous function can be approximated with arbitrary accuracy (error: maximum function value difference).

68 Multilayer Perceptrons: Function Approximation (Figure: the step-function approximation drawn with the relative step heights Δy_i = y_i − y_{i−1}, which allow a simpler network.)

69 Multilayer Perceptrons: Function Approximation (Figure: a neural network with only one hidden layer that computes the step function; the output neuron with identity activation id sums the relative step heights Δy_1, ..., Δy_4.)

70 Multilayer Perceptrons: Function Approximation (Figure: the same approximation as on the preceding slides, shown with the relative step heights Δy_i.)

71 Multilayer Perceptrons: Function Approximation (Figure: a network using semi-linear activation functions with thresholds θ_1, ..., θ_4 determined by the interval borders x_1, ..., x_4; the output neuron with identity activation id sums the weighted relative heights, yielding a piecewise linear approximation.)

72 Mathematical Background: Regression

73 Mathematical Background: Linear Regression Training neural networks is closely related to regression. Given: A dataset ((x_1, y_1), ..., (x_n, y_n)) of n data tuples and a hypothesis about the functional relationship, e.g. y = g(x) = a + bx. Approach: Minimize the sum of squared errors, i.e.
F(a, b) = Σ_{i=1}^n (g(x_i) − y_i)² = Σ_{i=1}^n (a + bx_i − y_i)².
Necessary conditions for a minimum:
∂F/∂a = Σ_{i=1}^n 2(a + bx_i − y_i) = 0 and ∂F/∂b = Σ_{i=1}^n 2(a + bx_i − y_i)x_i = 0

74 Mathematical Background: Linear Regression Result of the necessary conditions: a system of so-called normal equations, i.e.
na + ( Σ_{i=1}^n x_i ) b = Σ_{i=1}^n y_i,
( Σ_{i=1}^n x_i ) a + ( Σ_{i=1}^n x_i² ) b = Σ_{i=1}^n x_i y_i.
Two linear equations for the two unknowns a and b. The system can be solved with standard methods from linear algebra. The solution is unique unless all x-values are identical. The resulting line is called a regression line.
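The two normal equations can be solved in closed form, e.g. with Cramer's rule (a sketch, not part of the slides); the test data below are chosen to lie exactly on y = 1 + 2x, which the fit recovers:

```python
def regression_line(points):
    """Solve the two normal equations for a and b by Cramer's rule."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    # n·a + sx·b = sy   and   sx·a + sxx·b = sxy
    det = n * sxx - sx * sx        # nonzero unless all x are equal
    a = (sy * sxx - sx * sxy) / det
    b = (n * sxy - sx * sy) / det
    return a, b

a, b = regression_line([(0, 1.0), (1, 3.0), (2, 5.0)])
print(a, b)  # → 1.0 2.0
```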

75 Linear Regression: Example (Table and figure: an example dataset of x and y values and the regression line y = a + bx computed from it.)

76 Mathematical Background: Polynomial Regression Generalization to polynomials: y = p(x) = a_0 + a_1 x + ... + a_m x^m. Approach: Minimize the sum of squared errors, i.e.
F(a_0, a_1, ..., a_m) = Σ_{i=1}^n (p(x_i) − y_i)² = Σ_{i=1}^n (a_0 + a_1 x_i + ... + a_m x_i^m − y_i)².
Necessary conditions for a minimum: All partial derivatives vanish, i.e.
∂F/∂a_0 = 0, ∂F/∂a_1 = 0, ..., ∂F/∂a_m = 0.

77 Mathematical Background: Polynomial Regression System of normal equations for polynomials:
na_0 + (Σ_{i=1}^n x_i) a_1 + ... + (Σ_{i=1}^n x_i^m) a_m = Σ_{i=1}^n y_i,
(Σ_{i=1}^n x_i) a_0 + (Σ_{i=1}^n x_i²) a_1 + ... + (Σ_{i=1}^n x_i^{m+1}) a_m = Σ_{i=1}^n x_i y_i,
...
(Σ_{i=1}^n x_i^m) a_0 + (Σ_{i=1}^n x_i^{m+1}) a_1 + ... + (Σ_{i=1}^n x_i^{2m}) a_m = Σ_{i=1}^n x_i^m y_i.
m + 1 linear equations for the m + 1 unknowns a_0, ..., a_m. The system can be solved with standard methods from linear algebra. The solution is unique unless all x-values are identical.

78 Mathematical Background: Multilinear Regression Generalization to more than one argument: z = f(x, y) = a + bx + cy. Approach: Minimize the sum of squared errors, i.e.
F(a, b, c) = Σ_{i=1}^n (f(x_i, y_i) − z_i)² = Σ_{i=1}^n (a + bx_i + cy_i − z_i)².
Necessary conditions for a minimum: All partial derivatives vanish, i.e.
∂F/∂a = Σ_{i=1}^n 2(a + bx_i + cy_i − z_i) = 0,
∂F/∂b = Σ_{i=1}^n 2(a + bx_i + cy_i − z_i)x_i = 0,
∂F/∂c = Σ_{i=1}^n 2(a + bx_i + cy_i − z_i)y_i = 0.

79 Mathematical Background: Multilinear Regression System of normal equations for several arguments:
na + (Σ_{i=1}^n x_i) b + (Σ_{i=1}^n y_i) c = Σ_{i=1}^n z_i,
(Σ_{i=1}^n x_i) a + (Σ_{i=1}^n x_i²) b + (Σ_{i=1}^n x_i y_i) c = Σ_{i=1}^n z_i x_i,
(Σ_{i=1}^n y_i) a + (Σ_{i=1}^n x_i y_i) b + (Σ_{i=1}^n y_i²) c = Σ_{i=1}^n z_i y_i.
3 linear equations for the 3 unknowns a, b, and c. The system can be solved with standard methods from linear algebra. The solution is unique unless all x- or all y-values are identical.

80 Multilinear Regression General multilinear case: y = f(x_1, ..., x_m) = a_0 + Σ_{k=1}^m a_k x_k. Approach: Minimize the sum of squared errors, i.e.
F(a) = (Xa − y)ᵀ(Xa − y),
where
X = ( 1 x_{11} ... x_{m1} ; ... ; 1 x_{1n} ... x_{mn} ), y = (y_1, ..., y_n)ᵀ, and a = (a_0, a_1, ..., a_m)ᵀ.
Necessary condition for a minimum:
∇_a F(a) = ∇_a (Xa − y)ᵀ(Xa − y) = 0

81 Multilinear Regression ∇_a F(a) may easily be computed by remembering that the differential operator
∇_a = (∂/∂a_0, ..., ∂/∂a_m)
behaves formally like a vector that is multiplied to the sum of squared errors. Alternatively, one may write out the differentiation componentwise. With the former method we obtain for the derivative:
∇_a (Xa − y)ᵀ(Xa − y)
= (∇_a (Xa − y))ᵀ (Xa − y) + ((Xa − y)ᵀ (∇_a (Xa − y)))ᵀ
= (∇_a (Xa − y))ᵀ (Xa − y) + (∇_a (Xa − y))ᵀ (Xa − y)
= 2Xᵀ(Xa − y)
= 2XᵀXa − 2Xᵀy = 0

82 Multilinear Regression Necessary condition for a minimum therefore:
∇_a F(a) = ∇_a (Xa − y)ᵀ(Xa − y) = 2XᵀXa − 2Xᵀy = 0
As a consequence we get the system of normal equations:
XᵀXa = Xᵀy
This system has a solution if XᵀX is not singular. Then we have
a = (XᵀX)⁻¹Xᵀy.
(XᵀX)⁻¹Xᵀ is called the (Moore–Penrose) pseudoinverse of the matrix X. With the matrix-vector representation of the regression problem an extension to multipolynomial regression is straightforward: simply add the desired products of powers to the matrix X.

83 Mathematical Background: Logistic Regression Generalization to non-polynomial functions. Simple example: y = ax^b. Idea: Find a transformation to the linear/polynomial case. Transformation for the example: ln y = ln a + b ln x. Special case: logistic function
y = Y / (1 + e^(a+bx)) ⇔ 1/y = (1 + e^(a+bx))/Y ⇔ (Y − y)/y = e^(a+bx).
Result: Apply the so-called logit transformation
ln((Y − y)/y) = a + bx.
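The logit transformation reduces the logistic fit to ordinary linear regression; the following sketch uses synthetic data generated from y = 6/(1 + e^(2−x)) (an assumption for illustration, not the slides' dataset) and recovers a = 2 and b = −1:

```python
from math import exp, log

def logit_fit(points, Y):
    """Fit y = Y / (1 + e^(a+bx)) by regressing the transformed
    values z = ln((Y - y)/y) linearly on x."""
    data = [(x, log((Y - y) / y)) for x, y in points]
    n = len(data)
    sx = sum(x for x, _ in data)
    sz = sum(z for _, z in data)
    sxx = sum(x * x for x, _ in data)
    sxz = sum(x * z for x, z in data)
    det = n * sxx - sx * sx
    a = (sz * sxx - sx * sxz) / det
    b = (n * sxz - sx * sz) / det
    return a, b

pts = [(x, 6.0 / (1.0 + exp(2.0 - x))) for x in range(5)]
a, b = logit_fit(pts, Y=6.0)
print(round(a, 6), round(b, 6))  # → 2.0 -1.0
```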

84 Logistic Regression: Example (Table: the data points (x, y).) Transform the data with z = ln((Y − y)/y), Y = 6. (Table: the transformed data points (x, z).) The resulting regression line is z ≈ 4.133 − 1.3775x.

85 Logistic Regression: Example (Figures: the transformed data with the regression line z over x, and the original data with the fitted logistic curve, Y = 6.) The logistic regression function can be computed by a single neuron with
network input function f_net(x) ≡ wx with w ≈ 1.3775,
activation function f_act(net, θ) ≡ (1 + e^(−(net−θ)))⁻¹ with θ ≈ 4.133, and
output function f_out(act) ≡ 6 act.

86 Training Multilayer Perceptrons

87 Training Multilayer Perceptrons: Gradient Descent Problem of logistic regression: It works only for two-layer perceptrons. More general approach: gradient descent. Necessary condition: differentiable activation and output functions.
(Figure: illustration of the gradient of a real-valued function z = f(x, y) at a point (x_0, y_0). It is ∇z|_{(x_0,y_0)} = (∂z/∂x|_{x_0}, ∂z/∂y|_{y_0}).)

88 Gradient Descent: Formal Approach

General idea: approach the minimum of the error function in small steps.

Error function:
e = Σ_(l∈L_fixed) e^(l) = Σ_(v∈U_out) e_v = Σ_(l∈L_fixed) Σ_(v∈U_out) e_v^(l).

Form the gradient to determine the direction of the step:
∇_(w_u) e = ∂e/∂w_u = (−∂e/∂θ_u, ∂e/∂w_(u p1), ..., ∂e/∂w_(u pn)).

Exploit the sum over the training patterns:
∇_(w_u) e = ∂e/∂w_u = ∂/∂w_u Σ_(l∈L_fixed) e^(l) = Σ_(l∈L_fixed) ∂e^(l)/∂w_u.

89 Gradient Descent: Formal Approach

A single pattern error depends on the weights only through the network input:
∇_(w_u) e^(l) = ∂e^(l)/∂w_u = (∂e^(l)/∂net_u^(l)) · (∂net_u^(l)/∂w_u).

Since net_u^(l) = w_u · in_u^(l), we have for the second factor
∂net_u^(l)/∂w_u = in_u^(l).

For the first factor we consider the error e^(l) for the training pattern l = (ı^(l), o^(l)):
e^(l) = Σ_(v∈U_out) e_v^(l) = Σ_(v∈U_out) (o_v^(l) − out_v^(l))²,
i.e. the sum of the errors over all output neurons.

90 Gradient Descent: Formal Approach

Therefore we have
∂e^(l)/∂net_u^(l) = ∂/∂net_u^(l) Σ_(v∈U_out) (o_v^(l) − out_v^(l))² = Σ_(v∈U_out) ∂(o_v^(l) − out_v^(l))²/∂net_u^(l).

Since only the actual output out_v^(l) of an output neuron v depends on the network input net_u^(l) of the neuron u we are considering, it is
∂e^(l)/∂net_u^(l) = −2 Σ_(v∈U_out) (o_v^(l) − out_v^(l)) ∂out_v^(l)/∂net_u^(l) = −2 δ_u^(l),
which introduces the abbreviation δ_u^(l) for the important sum appearing here.

91 Gradient Descent: Formal Approach

Distinguish two cases: the neuron u is an output neuron, or u is a hidden neuron.

In the first case we have, for u ∈ U_out:
δ_u^(l) = (o_u^(l) − out_u^(l)) ∂out_u^(l)/∂net_u^(l).

Therefore we have for the gradient
∇_(w_u) e_u^(l) = ∂e_u^(l)/∂w_u = −2 (o_u^(l) − out_u^(l)) (∂out_u^(l)/∂net_u^(l)) in_u^(l),
and thus for the weight change (the constant factor 2 is absorbed into the learning rate η)
Δw_u^(l) = −η ∇_(w_u) e_u^(l) = η (o_u^(l) − out_u^(l)) (∂out_u^(l)/∂net_u^(l)) in_u^(l).

92 Gradient Descent: Formal Approach

The exact formulae depend on the choice of activation and output function, since
out_u^(l) = f_out(act_u^(l)) = f_out(f_act(net_u^(l))).

Consider the special case where the output function is the identity and the activation function is logistic, i.e. f_act(x) = 1/(1 + e^(−x)).

The first assumption yields
∂out_u^(l)/∂net_u^(l) = ∂act_u^(l)/∂net_u^(l) = f'_act(net_u^(l)).

93 Gradient Descent: Formal Approach

For a logistic activation function we have
f'_act(x) = d/dx (1 + e^(−x))⁻¹ = e^(−x)/(1 + e^(−x))² = (1/(1 + e^(−x))) · (1 − 1/(1 + e^(−x))) = f_act(x) (1 − f_act(x)),
and therefore
f'_act(net_u^(l)) = f_act(net_u^(l)) (1 − f_act(net_u^(l))) = out_u^(l) (1 − out_u^(l)).

The resulting weight change is therefore
Δw_u^(l) = η (o_u^(l) − out_u^(l)) out_u^(l) (1 − out_u^(l)) in_u^(l),
which makes the computations very simple.

94 Error Backpropagation

Consider now: the neuron u is a hidden neuron, i.e. u ∈ U_k, 0 < k < r.
The output out_v^(l) of an output neuron v depends on the network input net_u^(l) only indirectly through the successor neurons succ(u) = {s ∈ U | (u, s) ∈ C} = {s₁, ..., s_m} ⊆ U_(k+1), namely through their network inputs net_s^(l).

We apply the chain rule to obtain
δ_u^(l) = Σ_(v∈U_out) Σ_(s∈succ(u)) (o_v^(l) − out_v^(l)) (∂out_v^(l)/∂net_s^(l)) (∂net_s^(l)/∂net_u^(l)).

Exchanging the sums yields
δ_u^(l) = Σ_(s∈succ(u)) (Σ_(v∈U_out) (o_v^(l) − out_v^(l)) ∂out_v^(l)/∂net_s^(l)) (∂net_s^(l)/∂net_u^(l)) = Σ_(s∈succ(u)) δ_s^(l) (∂net_s^(l)/∂net_u^(l)).

95 Error Backpropagation

Consider the network input
net_s^(l) = w_s · in_s^(l) = (Σ_(p∈pred(s)) w_sp out_p^(l)) − θ_s,
where one element of in_s^(l) is the output out_u^(l) of the neuron u. Therefore it is
∂net_s^(l)/∂net_u^(l) = (Σ_(p∈pred(s)) w_sp ∂out_p^(l)/∂net_u^(l)) − ∂θ_s/∂net_u^(l) = w_su (∂out_u^(l)/∂net_u^(l)).

The result is the recursive equation (error backpropagation)
δ_u^(l) = (Σ_(s∈succ(u)) δ_s^(l) w_su) (∂out_u^(l)/∂net_u^(l)).

96 Error Backpropagation

The resulting formula for the weight change is
Δw_u^(l) = −η ∇_(w_u) e^(l) = η δ_u^(l) in_u^(l) = η (Σ_(s∈succ(u)) δ_s^(l) w_su) (∂out_u^(l)/∂net_u^(l)) in_u^(l).

Consider again the special case where the output function is the identity and the activation function is logistic. The resulting formula for the weight change is then
Δw_u^(l) = η (Σ_(s∈succ(u)) δ_s^(l) w_su) out_u^(l) (1 − out_u^(l)) in_u^(l).

97 Error Backpropagation: Cookbook Recipe

Forward propagation:
∀u ∈ U_in: out_u^(l) = ex_u^(l),
∀u ∈ U_hidden ∪ U_out: out_u^(l) = (1 + exp(−Σ_(p∈pred(u)) w_up out_p^(l)))⁻¹
(logistic activation function, implicit bias value).

Error factor: ∀u ∈ U_out: δ_u^(l) = (o_u^(l) − out_u^(l)) λ_u^(l).
Backward propagation: ∀u ∈ U_hidden: δ_u^(l) = (Σ_(s∈succ(u)) δ_s^(l) w_su) λ_u^(l).
Activation derivative: λ_u^(l) = out_u^(l) (1 − out_u^(l)).
Weight change: Δw_up^(l) = η δ_u^(l) out_p^(l).
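The cookbook recipe can be sketched in a few lines of NumPy; the network size, learning rate, and the logical-or training task below are illustrative choices, not part of the slides. The bias is handled implicitly by appending a constant input of 1, as in the recipe:

```python
import numpy as np

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_mlp(X, T, n_hidden=4, eta=0.5, epochs=5000, seed=0):
    """Online backpropagation for a multilayer perceptron with one hidden
    layer of logistic units, following the cookbook recipe."""
    rng = np.random.default_rng(seed)
    W1 = rng.uniform(-0.5, 0.5, (n_hidden, X.shape[1] + 1))   # hidden weights (+bias)
    W2 = rng.uniform(-0.5, 0.5, (T.shape[1], n_hidden + 1))   # output weights (+bias)
    for _ in range(epochs):
        for x, t in zip(X, T):
            x1 = np.append(x, 1.0)                 # forward propagation
            h = logistic(W1 @ x1)
            h1 = np.append(h, 1.0)
            y = logistic(W2 @ h1)
            d_out = (t - y) * y * (1 - y)          # error factor, lambda = out(1-out)
            d_hid = (W2[:, :-1].T @ d_out) * h * (1 - h)   # backward propagation
            W2 += eta * np.outer(d_out, h1)        # weight changes
            W1 += eta * np.outer(d_hid, x1)
    return W1, W2

def predict(W1, W2, x):
    h = logistic(W1 @ np.append(x, 1.0))
    return logistic(W2 @ np.append(h, 1.0))
```

Training on the logical or, the sum of squared errors drops well below its initial value.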

98 Gradient Descent: Examples

Gradient descent training for the negation ¬x: a single neuron with weight w and threshold θ, trained on the patterns x = 0 → y = 1 and x = 1 → y = 0.
(Figures: error for x = 0, error for x = 1, and the sum of errors as functions of w and θ.)

99 Gradient Descent: Examples

(Tables: epoch, θ, w, and error for online training and for batch training.)

100 Gradient Descent: Examples

Visualization of gradient descent for the negation ¬x: online training, batch training, and the error surface with the batch training trajectory.
Training is obviously successful. The error cannot vanish completely due to the properties of the logistic function.

101 Gradient Descent: Examples

Example function: f(x) = (5/6)x⁴ − 7x³ + (115/6)x² − 18x + 6.
(Table: i, x_i, f(x_i), f'(x_i), Δx_i.)
Gradient descent with initial value 0.2 and learning rate 0.001.

102 Gradient Descent: Examples

Example function: f(x) = (5/6)x⁴ − 7x³ + (115/6)x² − 18x + 6.
(Table: i, x_i, f(x_i), f'(x_i), Δx_i; the start of the trajectory is marked.)
Gradient descent with initial value 1.5 and learning rate 0.25.

103 Gradient Descent: Examples

Example function: f(x) = (5/6)x⁴ − 7x³ + (115/6)x² − 18x + 6.
(Table: i, x_i, f(x_i), f'(x_i), Δx_i.)
Gradient descent with initial value 2.6 and learning rate 0.05.

104 Gradient Descent: Variants

Weight update rule: w(t+1) = w(t) + Δw(t).

Standard backpropagation: Δw(t) = −η ∇_w e(t).
Manhattan training: Δw(t) = −η sgn(∇_w e(t)).
Momentum term: Δw(t) = −η ∇_w e(t) + β Δw(t−1).
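A minimal sketch contrasting the three update rules; a one-dimensional quadratic stands in for the error function (an assumption for illustration, not a slide example):

```python
import numpy as np

# f(w) = w**2 with gradient 2w serves as a stand-in for the error function.
grad = lambda w: 2.0 * w

def descend(w, eta=0.1, beta=0.9, steps=100, variant="standard"):
    """Compare the weight update variants on a 1-d error function."""
    dw_prev = 0.0
    for _ in range(steps):
        g = grad(w)
        if variant == "standard":            # plain gradient descent
            dw = -eta * g
        elif variant == "manhattan":         # only the sign of the gradient
            dw = -eta * np.sign(g)
        else:                                # momentum term
            dw = -eta * g + beta * dw_prev
        w, dw_prev = w + dw, dw
    return w
```

Standard descent converges to the minimum; Manhattan training ends up oscillating within a band of width η around it; the momentum variant converges with damped oscillations.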

105 Gradient Descent: Variants

Self-adaptive error backpropagation:
η_w(t) = c⁻ · η_w(t−1), if ∇_w e(t) · ∇_w e(t−1) < 0,
η_w(t) = c⁺ · η_w(t−1), if ∇_w e(t) · ∇_w e(t−1) > 0 and ∇_w e(t−1) · ∇_w e(t−2) ≥ 0,
η_w(t) = η_w(t−1), otherwise.

Resilient error backpropagation:
Δw(t) = c⁻ · Δw(t−1), if ∇_w e(t) · ∇_w e(t−1) < 0,
Δw(t) = c⁺ · Δw(t−1), if ∇_w e(t) · ∇_w e(t−1) > 0 and ∇_w e(t−1) · ∇_w e(t−2) ≥ 0,
Δw(t) = Δw(t−1), otherwise.

Typical values: c⁻ ∈ [0.5, 0.7] and c⁺ ∈ [1.05, 1.2].
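A simplified sketch of the step-size adaptation shared by self-adaptive and resilient backpropagation (the additional check on ∇_w e(t−1) · ∇_w e(t−2) is omitted here for brevity, and the growth/shrink bounds are illustrative assumptions):

```python
import numpy as np  # not strictly needed here, kept for consistency

def adapt_step(grad, grad_prev, step_prev, c_minus=0.5, c_plus=1.2,
               step_min=1e-6, step_max=1.0):
    """One step-size update per weight: shrink the step if the gradient
    changed sign, grow it if the sign is stable, keep it otherwise."""
    if grad * grad_prev < 0:
        return max(c_minus * step_prev, step_min)   # overshot: shrink
    if grad * grad_prev > 0:
        return min(c_plus * step_prev, step_max)    # stable direction: grow
    return step_prev
```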

106 Gradient Descent: Variants

Quickpropagation: approximate the error function locally by a parabola and jump to its apex.
(Figure: the parabola through e(t−1) and e(t) with apex m, and the corresponding gradient line through ∇_w e(t−1) and ∇_w e(t).)
The weight update rule can be derived from similar triangles:
Δw(t) = (∇_w e(t) / (∇_w e(t−1) − ∇_w e(t))) · Δw(t−1).

107 Gradient Descent: Examples

(Tables: epoch, θ, w, and error, without and with momentum term.)

108 Gradient Descent: Examples

Visualization of training for the negation without and with momentum term, including the error surface with the trajectory.
Dots show the position every 20 epochs (without momentum term) or every 10 epochs (with momentum term). Learning with a momentum term is about twice as fast.

109 Gradient Descent: Examples

Example function: f(x) = (5/6)x⁴ − 7x³ + (115/6)x² − 18x + 6.
(Table: i, x_i, f(x_i), f'(x_i), Δx_i.)
Gradient descent with momentum term (β = 0.9).

110 Gradient Descent: Examples

Example function: f(x) = (5/6)x⁴ − 7x³ + (115/6)x² − 18x + 6.
(Table: i, x_i, f(x_i), f'(x_i), Δx_i.)
Gradient descent with self-adapting learning rate (c⁺ = 1.2, c⁻ = 0.5).

111 Other Extensions of Error Backpropagation

Flat spot elimination: Δw(t) = −η ∇_w e(t) + ζ.
Eliminates slow learning in the saturation region of the logistic function.
Counteracts the decay of the error signals over the layers.

Weight decay: Δw(t) = −η ∇_w e(t) − ξ w(t).
Helps to improve the robustness of the training results.
Can be derived from an extended error function penalizing large weights:
e* = e + (ξ/2) Σ_(u∈U_out∪U_hidden) (θ_u² + Σ_(p∈pred(u)) w_up²).

112 Sensitivity Analysis Christian Borgelt Introduction to Neural Networks

113 Sensitivity Analysis

Question: How important are different inputs to the network?
Idea: Determine the change of the output relative to the change of the input.
∀u ∈ U_in: s(u) = (1/|L_fixed|) Σ_(l∈L_fixed) Σ_(v∈U_out) ∂out_v^(l)/∂ex_u^(l).

Formal derivation: apply the chain rule.
∂out_v/∂ex_u = (∂out_v/∂out_u)(∂out_u/∂ex_u) = (∂out_v/∂net_v)(∂net_v/∂out_u)(∂out_u/∂ex_u).

Simplification: assume that the output function is the identity. Then
∂out_u/∂ex_u = 1.

114 Sensitivity Analysis

For the second factor we get the general result:
∂net_v/∂out_u = ∂/∂out_u Σ_(p∈pred(v)) w_vp out_p = Σ_(p∈pred(v)) w_vp ∂out_p/∂out_u.

This leads to the recursion formula
∂out_v/∂out_u = (∂out_v/∂net_v)(∂net_v/∂out_u) = (∂out_v/∂net_v) Σ_(p∈pred(v)) w_vp ∂out_p/∂out_u.

However, for the first hidden layer we get ∂net_v/∂out_u = w_vu, and therefore
∂out_v/∂out_u = (∂out_v/∂net_v) w_vu.
This formula marks the start of the recursion.

115 Sensitivity Analysis

Consider as usual the special case where the output function is the identity and the activation function is logistic.

The recursion formula is in this case
∂out_v/∂out_u = out_v (1 − out_v) Σ_(p∈pred(v)) w_vp ∂out_p/∂out_u,
and the anchor of the recursion is
∂out_v/∂out_u = out_v (1 − out_v) w_vu.
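The recursion can be checked against finite differences for a small two-layer network; the sketch below assumes a logistic hidden layer and identity (linear) output units:

```python
import numpy as np

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(W1, W2, x):
    """Two-layer network: logistic hidden units, identity output units."""
    h = logistic(W1 @ x)
    return W2 @ h, h

def sensitivity(W1, W2, x):
    """d out_v / d ex_u via the recursion: the anchor out_h(1-out_h) * w_hu
    for the hidden layer, propagated through the linear output layer."""
    y, h = forward(W1, W2, x)
    dh_dx = (h * (1.0 - h))[:, None] * W1   # anchor of the recursion
    return W2 @ dh_dx                       # rows: outputs, columns: inputs
```

The analytic sensitivities agree with central finite differences.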

116 Demonstration Software: xmlp/wmlp Demonstration of multilayer perceptron training: Visualization of the training process Biimplication and Exclusive Or, two continuous functions Christian Borgelt Introduction to Neural Networks 6

117 Multilayer Perceptron Software: mlp/mlpgui Software for training general multilayer perceptrons: Command line version written in C, fast training Graphical user interface in Java, easy to use Christian Borgelt Introduction to Neural Networks 7

118 Radial Basis Function Networks Christian Borgelt Introduction to Neural Networks 8

119 Radial Basis Function Networks

A radial basis function network (RBFN) is a neural network with a graph G = (U, C) that satisfies the following conditions:
(i) U_in ∩ U_out = ∅,
(ii) C = (U_in × U_hidden) ∪ C', C' ⊆ (U_hidden × U_out).

The network input function of each hidden neuron is a distance function of the input vector and the weight vector, i.e.
∀u ∈ U_hidden: f_net^(u)(w_u, in_u) = d(w_u, in_u),
where d: ℝⁿ × ℝⁿ → ℝ₀⁺ is a function satisfying ∀x, y, z ∈ ℝⁿ:
(i) d(x, y) = 0 ⇔ x = y,
(ii) d(x, y) = d(y, x) (symmetry),
(iii) d(x, z) ≤ d(x, y) + d(y, z) (triangle inequality).

120 Distance Functions

Illustration of distance functions: the Minkowski family
d_k(x, y) = (Σ_(i=1..n) |x_i − y_i|^k)^(1/k).

Well-known special cases from this family are:
k = 1: Manhattan or city block distance,
k = 2: Euclidean distance,
k → ∞: maximum distance, i.e. d_∞(x, y) = max_(i=1..n) |x_i − y_i|.
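The Minkowski family and its limit case, sketched directly from the definitions:

```python
import numpy as np

def minkowski(x, y, k):
    """Minkowski distance d_k(x, y) = (sum |x_i - y_i|^k)^(1/k)."""
    diff = np.abs(np.asarray(x, dtype=float) - np.asarray(y, dtype=float))
    return float(np.sum(diff ** k) ** (1.0 / k))

def d_max(x, y):
    """Limit case k -> infinity: the maximum distance."""
    diff = np.abs(np.asarray(x, dtype=float) - np.asarray(y, dtype=float))
    return float(np.max(diff))
```

For the points (0, 0) and (3, 4) this gives 7 (Manhattan), 5 (Euclidean), and 4 (maximum); for large k the Minkowski distance approaches the maximum distance.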

121 Radial Basis Function Networks

The network input function of the output neurons is the weighted sum of their inputs, i.e.
∀u ∈ U_out: f_net^(u)(w_u, in_u) = w_u · in_u = Σ_(v∈pred(u)) w_uv out_v.

The activation function of each hidden neuron is a so-called radial function, i.e. a monotonously decreasing function
f: ℝ₀⁺ → [0, 1] with f(0) = 1 and lim_(x→∞) f(x) = 0.

The activation function of each output neuron is a linear function, namely
f_act^(u)(net_u, θ_u) = net_u − θ_u.
(The linear activation function is important for the initialization.)

122 Radial Activation Functions

rectangle function: f_act(net, σ) = 0 if net > σ, 1 otherwise.
triangle function: f_act(net, σ) = 0 if net > σ, 1 − net/σ otherwise.
cosine until zero: f_act(net, σ) = 0 if net > 2σ, (cos(π net/(2σ)) + 1)/2 otherwise.
Gaussian function: f_act(net, σ) = e^(−net²/(2σ²)).
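The four radial activation functions, sketched directly from their definitions:

```python
import numpy as np

def rectangle(net, sigma):
    return 0.0 if net > sigma else 1.0

def triangle(net, sigma):
    return 0.0 if net > sigma else 1.0 - net / sigma

def cosine_until_zero(net, sigma):
    return 0.0 if net > 2 * sigma else (np.cos(np.pi * net / (2 * sigma)) + 1.0) / 2.0

def gaussian(net, sigma):
    return float(np.exp(-net**2 / (2 * sigma**2)))
```

All four equal 1 at net = 0 and decrease monotonously toward 0, as required of a radial function.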

123 Radial Basis Function Networks: Examples

Radial basis function networks for the conjunction x₁ ∧ x₂.
(Figures: network diagrams and the corresponding regions in the input space.)

124 Radial Basis Function Networks: Examples

Radial basis function networks for the biimplication x₁ ↔ x₂.
Idea: logical decomposition x₁ ↔ x₂ ≡ (x₁ ∧ x₂) ∨ ¬(x₁ ∨ x₂).
(Figure: network diagram and the corresponding regions in the input space.)

125 Radial Basis Function Networks: Function Approximation

(Figures: approximation of a function through sample points (x₁, y₁), ..., (x₄, y₄) by a step function, and its representation as a weighted sum of rectangular basis functions.)

126 Radial Basis Function Networks: Function Approximation

A radial basis function network computing the staircase approximation: one hidden neuron per sample point x_i (rectangular activation function, radius σ) with hidden-to-output weight y_i, where
σ = (1/2)Δx = (1/2)(x_(i+1) − x_i).

127 Radial Basis Function Networks: Function Approximation

(Figures: approximation of the same function with triangular basis functions, yielding a piecewise linear interpolation between the sample points.)

128 Radial Basis Function Networks: Function Approximation

(Figure: approximation of a function with Gaussian basis functions with weights w₁, w₂, w₃.)

129 Radial Basis Function Networks: Function Approximation

Radial basis function network for a sum of three Gaussian functions (one input x, three hidden neurons, one output y).

130 Training Radial Basis Function Networks Christian Borgelt Introduction to Neural Networks 30

131 Radial Basis Function Networks: Initialization

Let L_fixed = {l₁, ..., l_m} be a fixed learning task, consisting of m training patterns l = (ı^(l), o^(l)).

Simple radial basis function network: one hidden neuron v_k, k = 1, ..., m, for each training pattern:
∀k ∈ {1, ..., m}: w_(v_k) = ı^(l_k).

If the activation function is the Gaussian function, the radii σ_k are chosen heuristically as
∀k ∈ {1, ..., m}: σ_k = d_max / √(2m),
where d_max = max_(l_j, l_k ∈ L_fixed) d(ı^(l_j), ı^(l_k)).

132 Radial Basis Function Networks: Initialization

Initializing the connections from the hidden to the output neurons:
∀u: Σ_(k=1..m) w_(u v_k) out_(v_k)^(l) − θ_u = o_u^(l),   or abbreviated   A · w_u = o_u,
where o_u = (o_u^(l₁), ..., o_u^(l_m))ᵀ is the vector of desired outputs, θ_u = 0, and

A =
( out_(v₁)^(l₁)  out_(v₂)^(l₁)  ...  out_(v_m)^(l₁) )
( out_(v₁)^(l₂)  out_(v₂)^(l₂)  ...  out_(v_m)^(l₂) )
( ...                                               )
( out_(v₁)^(l_m) out_(v₂)^(l_m) ...  out_(v_m)^(l_m) )

This is a linear equation system that can be solved by inverting the matrix A:
w_u = A⁻¹ · o_u.

133 RBFN Initialization: Example

Simple radial basis function network for the biimplication x₁ ↔ x₂: four hidden neurons with centers (0,0), (0,1), (1,0), (1,1) and radius σ = 1/2, hidden-to-output weights w₁, ..., w₄, and output threshold θ = 0.

134 RBFN Initialization: Example

Simple radial basis function network for the biimplication x₁ ↔ x₂:

A =
( 1    e⁻²  e⁻²  e⁻⁴ )
( e⁻²  1    e⁻⁴  e⁻² )
( e⁻²  e⁻⁴  1    e⁻² )
( e⁻⁴  e⁻²  e⁻²  1   )

A⁻¹ = (1/D) ·
( a  b  b  c )
( b  a  c  b )
( b  c  a  b )
( c  b  b  a )

where
D = 1 − 4e⁻⁴ + 6e⁻⁸ − 4e⁻¹² + e⁻¹⁶,
a = 1 − 2e⁻⁴ + e⁻⁸,
b = −e⁻² + 2e⁻⁶ − e⁻¹⁰,
c = e⁻⁴ − 2e⁻⁸ + e⁻¹²,

and therefore
w_u = A⁻¹ · o_u = (1/D) (a + c, 2b, 2b, a + c)ᵀ.

135 RBFN Initialization: Example

(Figures: a single basis function, all basis functions, and the network output over the input space.)
The initialization already yields a perfect solution of the learning task. Subsequent training is not necessary.
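The whole initialization for the biimplication can be replayed numerically (a sketch; the Gaussian activation and σ = d_max/√(2m) follow the slides):

```python
import numpy as np

# Simple RBFN initialization for the biimplication: one Gaussian per
# training pattern, sigma = d_max / sqrt(2m), output weights via w = A^(-1) o.
patterns = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
targets = np.array([1., 0., 0., 1.])

m = len(patterns)
d = np.linalg.norm(patterns[:, None, :] - patterns[None, :, :], axis=2)
sigma = d.max() / np.sqrt(2 * m)            # = sqrt(2) / sqrt(8) = 1/2
A = np.exp(-d**2 / (2 * sigma**2))          # activation matrix
w = np.linalg.solve(A, targets)             # hidden-to-output weights
outputs = A @ w                             # network outputs (theta = 0)
```

Since A is invertible, the outputs reproduce the targets exactly, matching the slide's observation that no subsequent training is needed.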

136 Radial Basis Function Networks: Initialization

Normal radial basis function networks: select a subset of k training patterns as centers.

A =
( out_(v₁)^(l₁)  out_(v₂)^(l₁)  ...  out_(v_k)^(l₁) )
( ...                                               )
( out_(v₁)^(l_m) out_(v₂)^(l_m) ...  out_(v_k)^(l_m) )

A · w_u = o_u.

Compute the (Moore-Penrose) pseudoinverse: A⁺ = (AᵀA)⁻¹Aᵀ.
The weights can then be computed by
w_u = A⁺ · o_u = (AᵀA)⁻¹Aᵀ · o_u.
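For the over-determined case a small sketch (toy data with assumed centers and radius, not from the slides) shows that the pseudoinverse solution satisfies the normal equations:

```python
import numpy as np

# Over-determined RBFN output layer: m = 5 training points, k = 2 centers.
# Weights follow from the pseudoinverse: w = (A^T A)^(-1) A^T o.
x = np.linspace(-1.0, 1.0, 5)
o = np.sin(x)                                  # desired outputs (toy task)
centers, sigma = np.array([-0.5, 0.5]), 0.6    # assumed centers and radius

A = np.exp(-(x[:, None] - centers[None, :])**2 / (2 * sigma**2))
w = np.linalg.solve(A.T @ A, A.T @ o)          # pseudoinverse solution
residual = A @ w - o                           # least-squares residual
```

The residual is orthogonal to the columns of A (the defining property of the least-squares solution), and the result agrees with `np.linalg.pinv`.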

137 RBFN Initialization: Example

Normal radial basis function network for the biimplication x₁ ↔ x₂.
Select two training patterns as centers:
l₁ = (ı^(l₁), o^(l₁)) = ((0, 0), (1)),
l₄ = (ı^(l₄), o^(l₄)) = ((1, 1), (1)).
(Network: two hidden neurons with centers (0,0) and (1,1), output weights w₁, w₂ and output threshold θ.)

138 RBFN Initialization: Example

Normal radial basis function network for the biimplication x₁ ↔ x₂ (the first column of A accounts for the output threshold θ):

A =
( −1  1    e⁻⁴ )
( −1  e⁻²  e⁻² )
( −1  e⁻²  e⁻² )
( −1  e⁻⁴  1   )

A⁺ = (AᵀA)⁻¹Aᵀ =
( a  b  b  a )
( c  d  d  e )
( e  d  d  c )

where a ≈ 0.1810, b ≈ −0.6810, c ≈ 1.1781, d ≈ −0.6688, e ≈ 0.1594.

Resulting weights:
w_u = (θ, w₁, w₂)ᵀ = A⁺ · o_u ≈ (0.3620, 1.3375, 1.3375)ᵀ.

139 RBFN Initialization: Example

(Figures: basis function (0,0), basis function (1,1), and the network output.)
The initialization already yields a perfect solution of the learning task.
This is an accident, because the linear equation system is not over-determined, due to linearly dependent equations (the patterns (0,1) and (1,0) yield identical rows).

140 Radial Basis Function Networks: Initialization

Finding appropriate centers for the radial basis functions.

One approach: k-means clustering.
Select randomly k training patterns as centers.
Assign to each center those training patterns that are closest to it.
Compute new centers as the center of gravity of the assigned training patterns.
Repeat the previous two steps until convergence, i.e., until the centers do not change anymore.
Use the resulting centers for the weight vectors of the hidden neurons.

Alternative approach: learning vector quantization.
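The k-means procedure above, sketched in NumPy (the empty-cluster guard is an implementation detail not on the slide):

```python
import numpy as np

def kmeans(X, k, seed=0, max_iter=100):
    """Plain k-means: pick k training patterns as initial centers, then
    alternate assignment and center-of-gravity steps until convergence."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(max_iter):
        # assign each pattern to its closest center
        dist2 = ((X[:, None, :] - centers[None, :, :])**2).sum(axis=2)
        assign = np.argmin(dist2, axis=1)
        # center of gravity of the assigned patterns (keep empty clusters put)
        new_centers = np.array([X[assign == j].mean(axis=0)
                                if np.any(assign == j) else centers[j]
                                for j in range(k)])
        if np.allclose(new_centers, centers):   # centers do not change anymore
            break
        centers = new_centers
    return centers
```

On two well-separated blobs the centers converge to the blob means.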

141 Radial Basis Function Networks: Training

Training radial basis function networks: derivation of the update rules is analogous to that of multilayer perceptrons.

Weights from the hidden to the output neurons. Gradient:
∇_(w_u) e_u^(l) = ∂e_u^(l)/∂w_u = −2 (o_u^(l) − out_u^(l)) in_u^(l).
Weight update rule (the constant factor 2 is absorbed into the learning rate):
Δw_u^(l) = −η₃ ∇_(w_u) e_u^(l) = η₃ (o_u^(l) − out_u^(l)) in_u^(l).
(Two more learning rates are needed for the center coordinates and the radii.)

142 Radial Basis Function Networks: Training

Center coordinates (weights from the input to the hidden neurons). Gradient:
∇_(w_v) e^(l) = ∂e^(l)/∂w_v = −2 Σ_(s∈succ(v)) (o_s^(l) − out_s^(l)) w_sv (∂out_v^(l)/∂net_v^(l)) (∂net_v^(l)/∂w_v).

Weight update rule:
Δw_v^(l) = −η₁ ∇_(w_v) e^(l) = η₁ Σ_(s∈succ(v)) (o_s^(l) − out_s^(l)) w_sv (∂out_v^(l)/∂net_v^(l)) (∂net_v^(l)/∂w_v).

143 Radial Basis Function Networks: Training

Special case: Euclidean distance.
∂net_v^(l)/∂w_v = (Σ_(i=1..n) (w_(v p_i) − out_(p_i)^(l))²)^(−1/2) (w_v − in_v^(l)).

Special case: Gaussian activation function.
∂out_v^(l)/∂net_v^(l) = ∂f_act(net_v^(l), σ_v)/∂net_v^(l) = ∂/∂net_v^(l) e^(−(net_v^(l))²/(2σ_v²)) = −(net_v^(l)/σ_v²) e^(−(net_v^(l))²/(2σ_v²)).

144 Radial Basis Function Networks: Training

Radii of the radial basis functions. Gradient:
∂e^(l)/∂σ_v = −2 Σ_(s∈succ(v)) (o_s^(l) − out_s^(l)) w_sv (∂out_v^(l)/∂σ_v).

Update rule:
Δσ_v^(l) = −η₂ (∂e^(l)/∂σ_v) = η₂ Σ_(s∈succ(v)) (o_s^(l) − out_s^(l)) w_sv (∂out_v^(l)/∂σ_v).

Special case: Gaussian activation function.
∂out_v^(l)/∂σ_v = ∂/∂σ_v e^(−(net_v^(l))²/(2σ_v²)) = ((net_v^(l))²/σ_v³) e^(−(net_v^(l))²/(2σ_v²)).

145 Radial Basis Function Networks: Generalization

Generalization of the distance function. Idea: use an anisotropic distance function.
Example: Mahalanobis distance d(x, y) = √((x − y)ᵀ Σ⁻¹ (x − y)).
Example: biimplication x₁ ↔ x₂ (the anisotropic distance lets a single radial basis function cover the diagonal of the input space).

146 Learning Vector Quantization Christian Borgelt Introduction to Neural Networks 46

147 Vector Quantization Voronoi diagram of a vector quantization Dots represent vectors that are used for quantizing the area. Lines are the boundaries of the regions of points that are closest to the enclosed vector. Christian Borgelt Introduction to Neural Networks 47

148 Learning Vector Quantization

Finding clusters in a given set of data points.
Data points are represented by empty circles (○).
Cluster centers are represented by full circles (●).

149 Learning Vector Quantization Networks

A learning vector quantization network (LVQ) is a neural network with a graph G = (U, C) that satisfies the following conditions:
(i) U_in ∩ U_out = ∅, U_hidden = ∅,
(ii) C = U_in × U_out.

The network input function of each output neuron is a distance function of the input vector and the weight vector, i.e.
∀u ∈ U_out: f_net^(u)(w_u, in_u) = d(w_u, in_u),
where d: ℝⁿ × ℝⁿ → ℝ₀⁺ is a function satisfying ∀x, y, z ∈ ℝⁿ:
(i) d(x, y) = 0 ⇔ x = y,
(ii) d(x, y) = d(y, x) (symmetry),
(iii) d(x, z) ≤ d(x, y) + d(y, z) (triangle inequality).

150 Distance Functions

Illustration of distance functions: the Minkowski family
d_k(x, y) = (Σ_(i=1..n) |x_i − y_i|^k)^(1/k).

Well-known special cases from this family are:
k = 1: Manhattan or city block distance,
k = 2: Euclidean distance,
k → ∞: maximum distance, i.e. d_∞(x, y) = max_(i=1..n) |x_i − y_i|.

151 Learning Vector Quantization

The activation function of each output neuron is a so-called radial function, i.e. a monotonously decreasing function
f: ℝ₀⁺ → [0, ∞) with f(0) = 1 and lim_(x→∞) f(x) = 0.
Sometimes the range of values is restricted to the interval [0, 1]. However, due to the special output function this restriction is irrelevant.

The output function of each output neuron is not a simple function of the activation of the neuron. Rather it takes into account the activations of all output neurons:
f_out^(u)(act_u) = 1, if act_u = max_(v∈U_out) act_v, and 0 otherwise.

If more than one unit has the maximal activation, one is selected at random to have an output of 1, all others are set to output 0: winner-takes-all principle.

152 Radial Activation Functions

rectangle function: f_act(net, σ) = 0 if net > σ, 1 otherwise.
triangle function: f_act(net, σ) = 0 if net > σ, 1 − net/σ otherwise.
cosine until zero: f_act(net, σ) = 0 if net > 2σ, (cos(π net/(2σ)) + 1)/2 otherwise.
Gaussian function: f_act(net, σ) = e^(−net²/(2σ²)).

153 Learning Vector Quantization

Adaptation of reference vectors / codebook vectors:
For each training pattern find the closest reference vector.
Adapt only this reference vector (winner neuron).
For classified data the class may be taken into account: each reference vector is assigned to a class.

Attraction rule (data point and reference vector have the same class):
r^(new) = r^(old) + η (x − r^(old)).
Repulsion rule (data point and reference vector have different classes):
r^(new) = r^(old) − η (x − r^(old)).
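One adaptation step, sketched directly from the attraction and repulsion rules:

```python
import numpy as np

def lvq_step(refs, ref_classes, x, c, eta=0.4):
    """One LVQ update: move the closest reference vector toward the data
    point x (attraction) if the classes match, away from it otherwise."""
    i = int(np.argmin(np.linalg.norm(refs - x, axis=1)))   # winner neuron
    if ref_classes[i] == c:
        refs[i] += eta * (x - refs[i])                     # attraction rule
    else:
        refs[i] -= eta * (x - refs[i])                     # repulsion rule
    return refs
```

With η = 0.4 the winner moves 40% of the distance toward (or away from) the data point.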

154 Learning Vector Quantization

Adaptation of reference vectors / codebook vectors.
(Figures: attraction rule and repulsion rule; x: data point, r_i: reference vectors, η = 0.4 (learning rate). The winner is shifted by the fraction η of its distance d to the data point.)

155 Learning Vector Quantization: Example

Adaptation of reference vectors / codebook vectors.
Left: online training. Right: batch training.

156 Learning Vector Quantization: Learning Rate Decay

Problem: a fixed learning rate can lead to oscillations.
Solution: time-dependent learning rate
η(t) = η₀ α^t, 0 < α < 1,   or   η(t) = η₀ t^(−κ), κ > 0.

157 Learning Vector Quantization: Classified Data

Improved update rule for classified data.
Idea: Update not only the one reference vector that is closest to the data point (the winner neuron), but update the two closest reference vectors.

Let x be the currently processed data point and c its class. Let r_j and r_k be the two closest reference vectors and z_j and z_k their classes.
Reference vectors are updated only if z_j ≠ z_k and either c = z_j or c = z_k. (Without loss of generality we assume c = z_j.)
The update rules for the two closest reference vectors are:
r_j^(new) = r_j^(old) + η (x − r_j^(old))   and
r_k^(new) = r_k^(old) − η (x − r_k^(old)),
while all other reference vectors remain unchanged.

158 Learning Vector Quantization: Window Rule

It was observed in practical tests that standard learning vector quantization may drive the reference vectors further and further apart.
To counteract this undesired behavior, a window rule was introduced: update only if the data point x is close to the classification boundary.
"Close to the boundary" is made formally precise by requiring
min( d(x, r_j)/d(x, r_k), d(x, r_k)/d(x, r_j) ) > θ,   where   θ = (1 − ξ)/(1 + ξ).
ξ is a parameter that has to be specified by a user.
Intuitively, ξ describes the width of the window around the classification boundary in which the data point has to lie in order to lead to an update.
Using it prevents divergence, because the update ceases for a data point once the classification boundary has been moved far enough away.

159 Soft Learning Vector Quantization

Idea: Use soft assignments instead of winner-takes-all.
Assumption: Given data was sampled from a mixture of normal distributions. Each reference vector describes one normal distribution.
Objective: Maximize the log-likelihood ratio of the data, that is, maximize

ln L_ratio = Σ_(j=1..n) ln Σ_(r∈R(c_j)) exp(−(x_j − r)ᵀ(x_j − r)/(2σ²))
           − Σ_(j=1..n) ln Σ_(r∈Q(c_j)) exp(−(x_j − r)ᵀ(x_j − r)/(2σ²)).

Here σ is a parameter specifying the size of each normal distribution.
R(c) is the set of reference vectors assigned to class c and Q(c) its complement.
Intuitively: at each data point the probability density for its class should be as large as possible while the density for all other classes should be as small as possible.

160 Soft Learning Vector Quantization

Update rule derived from a maximum log-likelihood approach:
r_i^(new) = r_i^(old) + η · u_ij⁺ · (x_j − r_i^(old)), if c_j = z_i,
r_i^(new) = r_i^(old) − η · u_ij⁻ · (x_j − r_i^(old)), if c_j ≠ z_i,
where z_i is the class associated with the reference vector r_i and

u_ij⁺ = exp(−(x_j − r_i^(old))ᵀ(x_j − r_i^(old))/(2σ²)) / Σ_(r∈R(c_j)) exp(−(x_j − r^(old))ᵀ(x_j − r^(old))/(2σ²)),
u_ij⁻ = exp(−(x_j − r_i^(old))ᵀ(x_j − r_i^(old))/(2σ²)) / Σ_(r∈Q(c_j)) exp(−(x_j − r^(old))ᵀ(x_j − r^(old))/(2σ²)),

and R(c) is the set of reference vectors assigned to class c and Q(c) its complement.

161 Hard Learning Vector Quantization

Idea: Derive a scheme with hard assignments from the soft version.
Approach: Let the size parameter σ of the Gaussian function go to zero.
The resulting update rule is in this case:
r_i^(new) = r_i^(old) + η · u_ij⁺ · (x_j − r_i^(old)), if c_j = z_i,
r_i^(new) = r_i^(old) − η · u_ij⁻ · (x_j − r_i^(old)), if c_j ≠ z_i,
where
u_ij⁺ = 1 if r_i = argmin_(r∈R(c_j)) d(x_j, r), 0 otherwise (r_i is the closest vector of the same class),
u_ij⁻ = 1 if r_i = argmin_(r∈Q(c_j)) d(x_j, r), 0 otherwise (r_i is the closest vector of a different class).

This update rule is stable without a window rule restricting the update.

162 Learning Vector Quantization: Extensions Frequency Sensitive Competitive Learning The distance to a reference vector is modified according to the number of data points that are assigned to this reference vector. Fuzzy Learning Vector Quantization Exploits the close relationship to fuzzy clustering. Can be seen as an online version of fuzzy clustering. Leads to faster clustering. Size and Shape Parameters Associate each reference vector with a cluster radius. Update this radius depending on how close the data points are. Associate each reference vector with a covariance matrix. Update this matrix depending on the distribution of the data points. Christian Borgelt Introduction to Neural Networks 6

163 Demonstration Software: xlvq/wlvq Demonstration of learning vector quantization: Visualization of the training process Arbitrary datasets, but training only in two dimensions Christian Borgelt Introduction to Neural Networks 63

164 Self-Organizing Maps Christian Borgelt Introduction to Neural Networks 64

165 Self-Organizing Maps

A self-organizing map or Kohonen feature map is a neural network with a graph G = (U, C) that satisfies the following conditions:
(i) U_hidden = ∅, U_in ∩ U_out = ∅,
(ii) C = U_in × U_out.

The network input function of each output neuron is a distance function of input and weight vector. The activation function of each output neuron is a radial function, i.e. a monotonously decreasing function
f: ℝ₀⁺ → [0, 1] with f(0) = 1 and lim_(x→∞) f(x) = 0.

The output function of each output neuron is the identity. The output is often discretized according to the winner-takes-all principle.
On the output neurons a neighborhood relationship is defined:
d_neurons: U_out × U_out → ℝ₀⁺.

166 Self-Organizing Maps: Neighborhood

Neighborhood of the output neurons: neurons form a grid (quadratic grid or hexagonal grid).
Thin black lines indicate the nearest neighbors of a neuron.
Thick gray lines indicate the regions assigned to a neuron for visualization.

167 Topology Preserving Mapping Images of points close to each other in the original space should be close to each other in the image space. Example: Robinson projection of the surface of a sphere Robinson projection is frequently used for world maps. Christian Borgelt Introduction to Neural Networks 67

168 Self-Organizing Maps: Neighborhood

Find a topology preserving mapping by respecting the neighborhood.

Reference vector update rule:
r_u^(new) = r_u^(old) + η(t) · f_nb(d_neurons(u, u*), ϱ(t)) · (x − r_u^(old)),
where u* is the winner neuron (reference vector closest to the data point) and the function f_nb is a radial function.

Time-dependent learning rate:
η(t) = η₀ α_η^t, 0 < α_η < 1,   or   η(t) = η₀ t^(−κ_η), κ_η > 0.
Time-dependent neighborhood radius:
ϱ(t) = ϱ₀ α_ϱ^t, 0 < α_ϱ < 1,   or   ϱ(t) = ϱ₀ t^(−κ_ϱ), κ_ϱ > 0.
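A sketch of self-organizing map training for a one-dimensional chain of output neurons; the Gaussian neighborhood function, decay factors, and epoch count are illustrative assumptions:

```python
import numpy as np

def train_som(data, n_units=10, eta0=0.5, rho0=3.0, epochs=50, seed=0):
    """Self-organizing map with a 1-d chain of output neurons and a
    Gaussian neighborhood function; eta and rho decay exponentially."""
    rng = np.random.default_rng(seed)
    refs = rng.uniform(data.min(), data.max(), (n_units, data.shape[1]))
    grid = np.arange(n_units)                 # neuron positions on the chain
    for t in range(epochs):
        eta = eta0 * 0.95**t                  # time-dependent learning rate
        rho = rho0 * 0.90**t                  # time-dependent radius
        for x in rng.permutation(data):
            winner = np.argmin(np.linalg.norm(refs - x, axis=1))
            nb = np.exp(-(grid - winner)**2 / (2 * rho**2))   # neighborhood
            refs += eta * nb[:, None] * (x - refs)
    return refs
```

On one-dimensional data the reference vectors stay inside the data range and end up quantizing it with small error.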

169 Self-Organizing Maps: Examples Example: Unfolding of a two-dimensional self-organizing map. Christian Borgelt Introduction to Neural Networks 69

170 Self-Organizing Maps: Examples Example: Unfolding of a two-dimensional self-organizing map. Christian Borgelt Introduction to Neural Networks 70

171 Self-Organizing Maps: Examples

Example: Unfolding of a two-dimensional self-organizing map.
Training a self-organizing map may fail if the (initial) learning rate is chosen too small or the (initial) neighborhood radius is chosen too small.

172 Self-Organizing Maps: Examples Example: Unfolding of a two-dimensional self-organizing map. (a) (b) (c) Self-organizing maps that have been trained with random points from (a) a rotation parabola, (b) a simple cubic function, (c) the surface of a sphere. In this case original space and image space have different dimensionality. Self-organizing maps can be used for dimensionality reduction. Christian Borgelt Introduction to Neural Networks 7

173 Demonstration Software: xsom/wsom Demonstration of self-organizing map training: Visualization of the training process Two-dimensional areas and three-dimensional surfaces Christian Borgelt Introduction to Neural Networks 73

174 Hopfield Networks Christian Borgelt Introduction to Neural Networks 74

175 Hopfield Networks

A Hopfield network is a neural network with a graph G = (U, C) that satisfies the following conditions:
(i) U_hidden = ∅, U_in = U_out = U,
(ii) C = U × U − {(u, u) | u ∈ U}.

In a Hopfield network all neurons are input as well as output neurons. There are no hidden neurons.
Each neuron receives input from all other neurons. A neuron is not connected to itself.
The connection weights are symmetric, i.e.
∀u, v ∈ U, u ≠ v: w_uv = w_vu.

Hopfield Networks The network input function of each neuron is the weighted sum of the outputs of all other neurons, i.e. $\forall u \in U : f_{\text{net}}^{(u)}(\vec{w}_u, \vec{\mathrm{in}}_u) = \vec{w}_u \cdot \vec{\mathrm{in}}_u = \sum_{v \in U - \{u\}} w_{uv}\,\mathrm{out}_v$. The activation function of each neuron is a threshold function, i.e. $\forall u \in U : f_{\text{act}}^{(u)}(\mathrm{net}_u, \theta_u) = \begin{cases} 1, & \text{if } \mathrm{net}_u \geq \theta_u, \\ -1, & \text{otherwise.} \end{cases}$ The output function of each neuron is the identity, i.e. $\forall u \in U : f_{\text{out}}^{(u)}(\mathrm{act}_u) = \mathrm{act}_u$.

Hopfield Networks Alternative activation function: $\forall u \in U : f_{\text{act}}^{(u)}(\mathrm{net}_u, \theta_u, \mathrm{act}_u) = \begin{cases} 1, & \text{if } \mathrm{net}_u > \theta_u, \\ -1, & \text{if } \mathrm{net}_u < \theta_u, \\ \mathrm{act}_u, & \text{if } \mathrm{net}_u = \theta_u. \end{cases}$ This activation function has advantages w.r.t. the physical interpretation of a Hopfield network. General weight matrix of a Hopfield network: $\mathbf{W} = \begin{pmatrix} 0 & w_{u_1 u_2} & \cdots & w_{u_1 u_n} \\ w_{u_1 u_2} & 0 & \cdots & w_{u_2 u_n} \\ \vdots & \vdots & & \vdots \\ w_{u_1 u_n} & w_{u_2 u_n} & \cdots & 0 \end{pmatrix}$

Hopfield Networks: Examples Very simple Hopfield network: two neurons $u_1$ and $u_2$ with thresholds $0$, coupled by weight $1$, i.e. $\mathbf{W} = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$. The behavior of a Hopfield network can depend on the update order: computations can oscillate if neurons are updated in parallel; computations always converge if neurons are updated sequentially.

Hopfield Networks: Examples Parallel update of neuron activations. [Figure: input phase and work phase of the two-neuron network under parallel updates.] The computations oscillate, so no stable state is reached; the output depends on when the computations are terminated.

Hopfield Networks: Examples Sequential update of neuron activations. [Figure: input phase and work phase of the two-neuron network under the two possible sequential update orders.] Regardless of the update order a stable state is reached; however, which state is reached depends on the update order.
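The sequential dynamics can be sketched in a few lines of NumPy. This is an illustrative sketch, not the lecture's demo code; the two-neuron weight matrix below is an assumed example of two neurons coupled with weight 1 and thresholds 0.

```python
import numpy as np

def sequential_update(W, theta, act, order, max_sweeps=100):
    """Asynchronously update neurons in a fixed order until a stable state."""
    act = act.astype(float).copy()
    for _ in range(max_sweeps):
        before = act.copy()
        for u in order:
            net = W[u] @ act              # diagonal of W is 0: act[u] excluded
            act[u] = 1.0 if net >= theta[u] else -1.0
        if np.array_equal(before, act):   # full sweep without change: stable
            return act
    return act

W = np.array([[0., 1.], [1., 0.]])        # two neurons coupled with weight 1
theta = np.zeros(2)
start = np.array([1., -1.])
print(sequential_update(W, theta, start, order=[0, 1]))  # a stable state
```

Starting from the mixed state $(+1, -1)$, updating $u_1$ first pulls both activations to the same value, after which nothing changes: a stable state of this network has both neurons equal.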

Hopfield Networks: Examples Simplified representation of a Hopfield network. [Figure: three-neuron Hopfield network with thresholds 0 and its weight matrix $\mathbf{W}$.] Symmetric connections between neurons are combined; inputs and outputs are not explicitly represented.

Hopfield Networks: State Graph [Figure: graph of the activation states and transitions of the three-neuron example network; the nodes are the $2^3 = 8$ possible states, from $({+}{+}{+})$ to $({-}{-}{-})$, and each edge is labeled with the neuron $u_1$, $u_2$, or $u_3$ whose update causes the transition.]

Hopfield Networks: Convergence Convergence Theorem: If the activations of the neurons of a Hopfield network are updated sequentially (asynchronously), then a stable state is reached in a finite number of steps. If the neurons are traversed cyclically in an arbitrary, but fixed order, at most $n \cdot 2^n$ steps (updates of individual neurons) are needed, where $n$ is the number of neurons of the Hopfield network. The proof is carried out with the help of an energy function. The energy function of a Hopfield network with $n$ neurons $u_1, \ldots, u_n$ is $E = -\tfrac{1}{2}\,\vec{\mathrm{act}}^{\,T} \mathbf{W}\, \vec{\mathrm{act}} + \vec{\theta}^{\,T} \vec{\mathrm{act}} = -\tfrac{1}{2} \sum_{u,v \in U,\, u \neq v} w_{uv}\,\mathrm{act}_u\,\mathrm{act}_v + \sum_{u \in U} \theta_u\,\mathrm{act}_u$.

Hopfield Networks: Convergence Consider the energy change resulting from an update that changes an activation: $\Delta E = E^{(\text{new})} - E^{(\text{old})} = \Bigl( -\sum_{v \in U - \{u\}} w_{uv}\,\mathrm{act}_u^{(\text{new})}\,\mathrm{act}_v + \theta_u\,\mathrm{act}_u^{(\text{new})} \Bigr) - \Bigl( -\sum_{v \in U - \{u\}} w_{uv}\,\mathrm{act}_u^{(\text{old})}\,\mathrm{act}_v + \theta_u\,\mathrm{act}_u^{(\text{old})} \Bigr) = \bigl( \mathrm{act}_u^{(\text{old})} - \mathrm{act}_u^{(\text{new})} \bigr) \Bigl( \underbrace{\sum_{v \in U - \{u\}} w_{uv}\,\mathrm{act}_v}_{=\,\mathrm{net}_u} - \theta_u \Bigr).$ Case $\mathrm{net}_u < \theta_u$: the second factor is less than 0; $\mathrm{act}_u^{(\text{new})} = -1$ and $\mathrm{act}_u^{(\text{old})} = 1$, therefore the first factor is greater than 0. Result: $\Delta E < 0$. Case $\mathrm{net}_u \geq \theta_u$: the second factor is greater than or equal to 0; $\mathrm{act}_u^{(\text{new})} = 1$ and $\mathrm{act}_u^{(\text{old})} = -1$, therefore the first factor is less than 0. Result: $\Delta E \leq 0$.
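The argument above can be checked numerically. The following sketch (an illustration, not part of the lecture material) draws random symmetric weights with zero diagonal and verifies that the energy never increases under cyclic sequential updates:

```python
import numpy as np

def energy(W, theta, act):
    # E = -1/2 * act^T W act + theta^T act   (diagonal of W is zero)
    return -0.5 * act @ W @ act + theta @ act

rng = np.random.default_rng(0)
n = 8
A = rng.normal(size=(n, n))
W = (A + A.T) / 2                  # symmetric connection weights
np.fill_diagonal(W, 0.0)           # no self-connections
theta = rng.normal(size=n)
act = rng.choice([-1.0, 1.0], size=n)

energies = [energy(W, theta, act)]
for step in range(10 * n):         # cyclic sequential traversal
    u = step % n
    act[u] = 1.0 if W[u] @ act >= theta[u] else -1.0
    energies.append(energy(W, theta, act))

# Delta E <= 0 for every single-neuron update
assert all(e1 <= e0 + 1e-9 for e0, e1 in zip(energies, energies[1:]))
```

Since the energy is bounded below and strictly decreases with every actual activation change, the network must settle into a stable state, exactly as the convergence theorem states.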

Hopfield Networks: Examples Arrange the states in the state graph according to their energy $E$. [Figure: state graph of the three-neuron example network with states arranged by energy level.] Energy function for the example Hopfield network (thresholds 0): $E = -w_{u_1 u_2}\,\mathrm{act}_{u_1}\mathrm{act}_{u_2} - w_{u_1 u_3}\,\mathrm{act}_{u_1}\mathrm{act}_{u_3} - w_{u_2 u_3}\,\mathrm{act}_{u_2}\mathrm{act}_{u_3}$.

Hopfield Networks: Examples The state graph need not be symmetric. [Figure: state graph with states arranged by energy $E$; the energy levels are distributed asymmetrically.]

Hopfield Networks: Physical Interpretation Physical interpretation: magnetism. A Hopfield network can be seen as a (microscopic) model of magnetism (so-called Ising model, [Ising 1925]). The correspondence between the physical and the neural view: atom — neuron; magnetic moment (spin) — activation state; strength of outer magnetic field — threshold value; magnetic coupling of the atoms — connection weights; Hamilton operator of the magnetic field — energy function.

Hopfield Networks: Associative Memory Idea: use stable states to store patterns. First: store only one pattern $\vec{x} = (\mathrm{act}_{u_1}^{(l)}, \ldots, \mathrm{act}_{u_n}^{(l)})^T \in \{-1, 1\}^n$, $n \geq 2$, i.e., find weights so that the pattern is a stable state. Necessary and sufficient condition: $S(\mathbf{W}\vec{x} - \vec{\theta}\,) = \vec{x}$, where $S : \mathbb{R}^n \to \{-1, 1\}^n$, $\vec{x} \mapsto \vec{y}$ with $\forall i \in \{1, \ldots, n\} : y_i = \begin{cases} 1, & \text{if } x_i \geq 0, \\ -1, & \text{otherwise.} \end{cases}$

Hopfield Networks: Associative Memory If $\vec{\theta} = \vec{0}$, an appropriate matrix $\mathbf{W}$ can easily be found: it suffices that $\mathbf{W}\vec{x} = c\vec{x}$ with $c \in \mathbb{R}^+$. Algebraically: find a matrix $\mathbf{W}$ that has a positive eigenvalue w.r.t. $\vec{x}$. Choose $\mathbf{W} = \vec{x}\vec{x}^T - \mathbf{E}$, where $\vec{x}\vec{x}^T$ is the so-called outer product and $\mathbf{E}$ is the unit matrix. With this matrix we have $\mathbf{W}\vec{x} = (\vec{x}\vec{x}^T)\vec{x} - \underbrace{\mathbf{E}\vec{x}}_{=\,\vec{x}} = \vec{x}\,\underbrace{(\vec{x}^T\vec{x})}_{=\,|\vec{x}|^2\,=\,n} - \vec{x} = n\vec{x} - \vec{x} = (n-1)\,\vec{x}$.

Hopfield Networks: Associative Memory Hebbian learning rule [Hebb 1949]. Written in individual weights, the computation of the weight matrix reads: $w_{uv} = \begin{cases} 0, & \text{if } u = v, \\ 1, & \text{if } u \neq v \text{ and } \mathrm{act}_u^{(p)} = \mathrm{act}_v^{(p)}, \\ -1, & \text{otherwise.} \end{cases}$ Originally derived from a biological analogy: strengthen the connection between neurons that are active at the same time. Note that this learning rule also stores the complement of the pattern: with $\mathbf{W}\vec{x} = (n-1)\vec{x}$ it is also $\mathbf{W}(-\vec{x}\,) = (n-1)(-\vec{x}\,)$.
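The one-pattern construction can be verified directly. A minimal NumPy sketch (the example pattern is an arbitrary choice for illustration):

```python
import numpy as np

x = np.array([1., 1., -1., 1., -1.])     # example pattern, n = 5
n = len(x)

# W = x x^T - E: outer product with the diagonal cleared
W = np.outer(x, x) - np.eye(n)

# elementwise this is the Hebbian rule:
#   w_uv = 1 if act_u^(p) = act_v^(p) (u != v), -1 otherwise, 0 if u = v
assert np.allclose(W @ x, (n - 1) * x)        # the pattern is a stable state
assert np.allclose(W @ -x, (n - 1) * (-x))    # ... and so is its complement
```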

Hopfield Networks: Associative Memory Storing several patterns. Choose $\mathbf{W} = \sum_{i=1}^m \mathbf{W}_i = \sum_{i=1}^m \bigl( \vec{x}_i\vec{x}_i^T - \mathbf{E} \bigr) = \Bigl( \sum_{i=1}^m \vec{x}_i\vec{x}_i^T \Bigr) - m\mathbf{E}$. Then $\mathbf{W}\vec{x}_j = \sum_{i=1}^m \vec{x}_i\bigl(\vec{x}_i^T\vec{x}_j\bigr) - m\,\underbrace{\mathbf{E}\vec{x}_j}_{=\,\vec{x}_j}$. If the patterns are orthogonal, we have $\vec{x}_i^T\vec{x}_j = \begin{cases} 0, & \text{if } i \neq j, \\ n, & \text{if } i = j, \end{cases}$ and therefore $\mathbf{W}\vec{x}_j = (n - m)\,\vec{x}_j$.

Hopfield Networks: Associative Memory Storing several patterns. Result: as long as $m < n$, $\vec{x}_j$ is a stable state of the Hopfield network. Note that the complements of the patterns are also stored: with $\mathbf{W}\vec{x}_j = (n-m)\vec{x}_j$ it is also $\mathbf{W}(-\vec{x}_j) = (n-m)(-\vec{x}_j)$. But: the capacity is very small compared to the number of possible states ($2^n$). Non-orthogonal patterns: $\mathbf{W}\vec{x}_j = (n-m)\,\vec{x}_j + \underbrace{\sum_{i=1,\, i \neq j}^m \vec{x}_i\bigl(\vec{x}_i^T\vec{x}_j\bigr)}_{\text{disturbance term}}$.
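Storage and recall of several patterns can be sketched as follows. This is an illustrative sketch under the assumption of orthogonal patterns and thresholds 0; the function names (`store`, `recall`) are not from the lecture.

```python
import numpy as np

def store(patterns):
    """Superimpose the one-pattern matrices: W = sum_i (x_i x_i^T - E)."""
    m, n = patterns.shape
    return sum(np.outer(p, p) for p in patterns) - m * np.eye(n)

def recall(W, x, sweeps=5):
    """Sequential (asynchronous) updates with thresholds 0."""
    x = x.copy()
    for _ in range(sweeps):
        for u in range(len(x)):
            x[u] = 1.0 if W[u] @ x >= 0.0 else -1.0
    return x

# two orthogonal patterns in {-1,+1}^4 (x1^T x2 = 0), so n = 4, m = 2
x1 = np.array([1., 1., -1., -1.])
x2 = np.array([-1., 1., -1., 1.])
W = store(np.vstack([x1, x2]))
assert np.allclose(W @ x1, (4 - 2) * x1)   # (n - m) x_j: no disturbance term

noisy = np.array([1., -1., -1., -1.])      # x1 with one component flipped
print(recall(W, noisy))                    # converges back to x1
```

Starting the dynamics from a corrupted input and letting the network settle into the nearest stable state is exactly the associative-memory behavior illustrated with the bit-map example below.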

Associative Memory: Example Example: store the patterns $\vec{x}_1 = (+1, +1, -1, -1)^T$ and $\vec{x}_2 = (-1, +1, -1, +1)^T$. Then $\mathbf{W} = \mathbf{W}_1 + \mathbf{W}_2 = \vec{x}_1\vec{x}_1^T + \vec{x}_2\vec{x}_2^T - 2\mathbf{E}$, where $\mathbf{W}_1 = \begin{pmatrix} 0 & 1 & -1 & -1 \\ 1 & 0 & -1 & -1 \\ -1 & -1 & 0 & 1 \\ -1 & -1 & 1 & 0 \end{pmatrix}, \quad \mathbf{W}_2 = \begin{pmatrix} 0 & -1 & 1 & -1 \\ -1 & 0 & -1 & 1 \\ 1 & -1 & 0 & -1 \\ -1 & 1 & -1 & 0 \end{pmatrix}.$ The full weight matrix is $\mathbf{W} = \begin{pmatrix} 0 & 0 & 0 & -2 \\ 0 & 0 & -2 & 0 \\ 0 & -2 & 0 & 0 \\ -2 & 0 & 0 & 0 \end{pmatrix}.$ Therefore $\mathbf{W}\vec{x}_1 = (+2, +2, -2, -2)^T$ and $\mathbf{W}\vec{x}_2 = (-2, +2, -2, +2)^T$.

Associative Memory: Examples Example: storing bit maps of numbers. [Figure — left: bit maps stored in a Hopfield network; right: reconstruction of a pattern from a random input.]

Hopfield Networks: Associative Memory Training a Hopfield network with the Delta rule. Necessary condition for pattern $\vec{x}$ being a stable state: $s(0 + w_{u_1 u_2}\,\mathrm{act}_{u_2}^{(p)} + \cdots + w_{u_1 u_n}\,\mathrm{act}_{u_n}^{(p)} - \theta_{u_1}) = \mathrm{act}_{u_1}^{(p)}$, $s(w_{u_2 u_1}\,\mathrm{act}_{u_1}^{(p)} + 0 + \cdots + w_{u_2 u_n}\,\mathrm{act}_{u_n}^{(p)} - \theta_{u_2}) = \mathrm{act}_{u_2}^{(p)}$, $\ldots$, $s(w_{u_n u_1}\,\mathrm{act}_{u_1}^{(p)} + w_{u_n u_2}\,\mathrm{act}_{u_2}^{(p)} + \cdots + 0 - \theta_{u_n}) = \mathrm{act}_{u_n}^{(p)}$, with the standard threshold function $s(x) = \begin{cases} 1, & \text{if } x \geq 0, \\ -1, & \text{otherwise.} \end{cases}$

Hopfield Networks: Associative Memory Training a Hopfield network with the Delta rule. Turn the weight matrix into a weight vector: $\vec{w} = (\, w_{u_1 u_2},\, w_{u_1 u_3},\, \ldots,\, w_{u_1 u_n},\, w_{u_2 u_3},\, \ldots,\, w_{u_2 u_n},\, \ldots,\, w_{u_{n-1} u_n},\, -\theta_{u_1},\, -\theta_{u_2},\, \ldots,\, -\theta_{u_n} \,)$. Construct input vectors for a threshold logic unit, e.g. for the condition of neuron $u_2$: $\vec{z}_2 = (\, \mathrm{act}_{u_1}^{(p)},\, \underbrace{0, \ldots, 0}_{n-2 \text{ zeros}},\, \mathrm{act}_{u_3}^{(p)},\, \ldots,\, \mathrm{act}_{u_n}^{(p)},\, \ldots,\, 0,\, 1,\, 0,\, \ldots,\, 0 \,)$. Apply Delta rule training until convergence.
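This training scheme can be sketched compactly. The sketch below (illustrative; the function name and the per-neuron loop are assumptions) is equivalent to running the Delta rule on the long shared weight vector $\vec{w}$: each neuron's stability condition only touches "its" weights and threshold, and applying every weight update to both the row and the column keeps $\mathbf{W}$ symmetric automatically.

```python
import numpy as np

def train_delta(patterns, eta=0.1, epochs=100):
    """Delta-rule training of a Hopfield network: adjust the shared weights
    and thresholds until every pattern satisfies its stability condition."""
    m, n = patterns.shape
    W = np.zeros((n, n))
    theta = np.zeros(n)
    for _ in range(epochs):
        changed = False
        for p in patterns:
            for u in range(n):                  # condition of neuron u
                out = 1.0 if W[u] @ p >= theta[u] else -1.0
                err = p[u] - out
                if err != 0.0:
                    changed = True
                    delta = eta * err * p
                    W[u, :] += delta            # update row u ...
                    W[:, u] += delta            # ... and column u: w_uv = w_vu
                    W[u, u] = 0.0               # no self-connection
                    theta[u] -= eta * err
        if not changed:                         # all patterns are stable
            break
    return W, theta

x1 = np.array([1., 1., -1., -1.])
x2 = np.array([-1., 1., -1., 1.])
W, theta = train_delta(np.vstack([x1, x2]))
# each stored pattern now satisfies s(W x - theta) = x
assert all(np.array_equal(np.where(W @ p - theta >= 0, 1., -1.), p)
           for p in (x1, x2))
```

Since a solution exists for these patterns (the Hebbian matrix with thresholds 0), the perceptron convergence theorem guarantees that this training stops after finitely many weight changes.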

Demonstration Software: xhfn/whfn Demonstration of Hopfield networks as associative memory: visualization of the association/recognition process; two-dimensional networks of arbitrary size.


Artificial Neural Networks Examination, June 2005 Artificial Neural Networks Examination, June 2005 Instructions There are SIXTY questions. (The pass mark is 30 out of 60). For each question, please select a maximum of ONE of the given answers (either

More information

Machine Learning. Neural Networks

Machine Learning. Neural Networks Machine Learning Neural Networks Bryan Pardo, Northwestern University, Machine Learning EECS 349 Fall 2007 Biological Analogy Bryan Pardo, Northwestern University, Machine Learning EECS 349 Fall 2007 THE

More information

Unit III. A Survey of Neural Network Model

Unit III. A Survey of Neural Network Model Unit III A Survey of Neural Network Model 1 Single Layer Perceptron Perceptron the first adaptive network architecture was invented by Frank Rosenblatt in 1957. It can be used for the classification of

More information

Artificial neural networks

Artificial neural networks Artificial neural networks Chapter 8, Section 7 Artificial Intelligence, spring 203, Peter Ljunglöf; based on AIMA Slides c Stuart Russel and Peter Norvig, 2004 Chapter 8, Section 7 Outline Brains Neural

More information

Artificial Neural Networks (ANN)

Artificial Neural Networks (ANN) Artificial Neural Networks (ANN) Edmondo Trentin April 17, 2013 ANN: Definition The definition of ANN is given in 3.1 points. Indeed, an ANN is a machine that is completely specified once we define its:

More information

Statistical Machine Learning (BE4M33SSU) Lecture 5: Artificial Neural Networks

Statistical Machine Learning (BE4M33SSU) Lecture 5: Artificial Neural Networks Statistical Machine Learning (BE4M33SSU) Lecture 5: Artificial Neural Networks Jan Drchal Czech Technical University in Prague Faculty of Electrical Engineering Department of Computer Science Topics covered

More information

CSE 352 (AI) LECTURE NOTES Professor Anita Wasilewska. NEURAL NETWORKS Learning

CSE 352 (AI) LECTURE NOTES Professor Anita Wasilewska. NEURAL NETWORKS Learning CSE 352 (AI) LECTURE NOTES Professor Anita Wasilewska NEURAL NETWORKS Learning Neural Networks Classifier Short Presentation INPUT: classification data, i.e. it contains an classification (class) attribute.

More information

ARTIFICIAL INTELLIGENCE. Artificial Neural Networks

ARTIFICIAL INTELLIGENCE. Artificial Neural Networks INFOB2KI 2017-2018 Utrecht University The Netherlands ARTIFICIAL INTELLIGENCE Artificial Neural Networks Lecturer: Silja Renooij These slides are part of the INFOB2KI Course Notes available from www.cs.uu.nl/docs/vakken/b2ki/schema.html

More information

Artificial Neural Networks Examination, June 2004

Artificial Neural Networks Examination, June 2004 Artificial Neural Networks Examination, June 2004 Instructions There are SIXTY questions (worth up to 60 marks). The exam mark (maximum 60) will be added to the mark obtained in the laborations (maximum

More information

Computational Intelligence Winter Term 2017/18

Computational Intelligence Winter Term 2017/18 Computational Intelligence Winter Term 207/8 Prof. Dr. Günter Rudolph Lehrstuhl für Algorithm Engineering (LS ) Fakultät für Informatik TU Dortmund Plan for Today Single-Layer Perceptron Accelerated Learning

More information

ECE521 Lectures 9 Fully Connected Neural Networks

ECE521 Lectures 9 Fully Connected Neural Networks ECE521 Lectures 9 Fully Connected Neural Networks Outline Multi-class classification Learning multi-layer neural networks 2 Measuring distance in probability space We learnt that the squared L2 distance

More information

CSC 578 Neural Networks and Deep Learning

CSC 578 Neural Networks and Deep Learning CSC 578 Neural Networks and Deep Learning Fall 2018/19 3. Improving Neural Networks (Some figures adapted from NNDL book) 1 Various Approaches to Improve Neural Networks 1. Cost functions Quadratic Cross

More information

Artificial Neural Networks. MGS Lecture 2

Artificial Neural Networks. MGS Lecture 2 Artificial Neural Networks MGS 2018 - Lecture 2 OVERVIEW Biological Neural Networks Cell Topology: Input, Output, and Hidden Layers Functional description Cost functions Training ANNs Back-Propagation

More information

Artificial Neuron (Perceptron)

Artificial Neuron (Perceptron) 9/6/208 Gradient Descent (GD) Hantao Zhang Deep Learning with Python Reading: https://en.wikipedia.org/wiki/gradient_descent Artificial Neuron (Perceptron) = w T = w 0 0 + + w 2 2 + + w d d where

More information

Back-Propagation Algorithm. Perceptron Gradient Descent Multilayered neural network Back-Propagation More on Back-Propagation Examples

Back-Propagation Algorithm. Perceptron Gradient Descent Multilayered neural network Back-Propagation More on Back-Propagation Examples Back-Propagation Algorithm Perceptron Gradient Descent Multilayered neural network Back-Propagation More on Back-Propagation Examples 1 Inner-product net =< w, x >= w x cos(θ) net = n i=1 w i x i A measure

More information

Multilayer Perceptrons and Backpropagation

Multilayer Perceptrons and Backpropagation Multilayer Perceptrons and Backpropagation Informatics 1 CG: Lecture 7 Chris Lucas School of Informatics University of Edinburgh January 31, 2017 (Slides adapted from Mirella Lapata s.) 1 / 33 Reading:

More information

Learning from Examples

Learning from Examples Learning from Examples Data fitting Decision trees Cross validation Computational learning theory Linear classifiers Neural networks Nonparametric methods: nearest neighbor Support vector machines Ensemble

More information

Linear & nonlinear classifiers

Linear & nonlinear classifiers Linear & nonlinear classifiers Machine Learning Hamid Beigy Sharif University of Technology Fall 1394 Hamid Beigy (Sharif University of Technology) Linear & nonlinear classifiers Fall 1394 1 / 34 Table

More information

Supervised (BPL) verses Hybrid (RBF) Learning. By: Shahed Shahir

Supervised (BPL) verses Hybrid (RBF) Learning. By: Shahed Shahir Supervised (BPL) verses Hybrid (RBF) Learning By: Shahed Shahir 1 Outline I. Introduction II. Supervised Learning III. Hybrid Learning IV. BPL Verses RBF V. Supervised verses Hybrid learning VI. Conclusion

More information

Mark Gales October y (x) x 1. x 2 y (x) Inputs. Outputs. x d. y (x) Second Output layer layer. layer.

Mark Gales October y (x) x 1. x 2 y (x) Inputs. Outputs. x d. y (x) Second Output layer layer. layer. University of Cambridge Engineering Part IIB & EIST Part II Paper I0: Advanced Pattern Processing Handouts 4 & 5: Multi-Layer Perceptron: Introduction and Training x y (x) Inputs x 2 y (x) 2 Outputs x

More information

Intelligent Systems Discriminative Learning, Neural Networks

Intelligent Systems Discriminative Learning, Neural Networks Intelligent Systems Discriminative Learning, Neural Networks Carsten Rother, Dmitrij Schlesinger WS2014/2015, Outline 1. Discriminative learning 2. Neurons and linear classifiers: 1) Perceptron-Algorithm

More information

Multilayer Perceptron Tutorial

Multilayer Perceptron Tutorial Multilayer Perceptron Tutorial Leonardo Noriega School of Computing Staffordshire University Beaconside Staffordshire ST18 0DG email: l.a.noriega@staffs.ac.uk November 17, 2005 1 Introduction to Neural

More information

Neural Networks Introduction

Neural Networks Introduction Neural Networks Introduction H.A Talebi Farzaneh Abdollahi Department of Electrical Engineering Amirkabir University of Technology Winter 2011 H. A. Talebi, Farzaneh Abdollahi Neural Networks 1/22 Biological

More information

COGS Q250 Fall Homework 7: Learning in Neural Networks Due: 9:00am, Friday 2nd November.

COGS Q250 Fall Homework 7: Learning in Neural Networks Due: 9:00am, Friday 2nd November. COGS Q250 Fall 2012 Homework 7: Learning in Neural Networks Due: 9:00am, Friday 2nd November. For the first two questions of the homework you will need to understand the learning algorithm using the delta

More information

Pattern Recognition Prof. P. S. Sastry Department of Electronics and Communication Engineering Indian Institute of Science, Bangalore

Pattern Recognition Prof. P. S. Sastry Department of Electronics and Communication Engineering Indian Institute of Science, Bangalore Pattern Recognition Prof. P. S. Sastry Department of Electronics and Communication Engineering Indian Institute of Science, Bangalore Lecture - 27 Multilayer Feedforward Neural networks with Sigmoidal

More information

Learning Neural Networks

Learning Neural Networks Learning Neural Networks Neural Networks can represent complex decision boundaries Variable size. Any boolean function can be represented. Hidden units can be interpreted as new features Deterministic

More information