Neural Networks. Hopfield Nets and Auto Associators Fall 2017

1 Neural Networks: Hopfield Nets and Auto Associators, Fall 2017

2 Story so far: Neural networks for computation. All feedforward structures. But what about..

3 Loopy network  $\Theta(z) = \begin{cases} +1 & \text{if } z > 0 \\ -1 & \text{if } z \le 0 \end{cases}$, $y_i = \Theta\!\left(\sum_{j \ne i} w_{ji} y_j + b_i\right)$. The output of a neuron affects the input to the neuron. Each neuron is a perceptron with +1/-1 output. Every neuron receives input from every other neuron. Every neuron outputs signals to every other neuron.

4 Loopy network  $\Theta(z) = \begin{cases} +1 & \text{if } z > 0 \\ -1 & \text{if } z \le 0 \end{cases}$, $y_i = \Theta\!\left(\sum_{j \ne i} w_{ji} y_j + b_i\right)$. A symmetric network: $w_{ij} = w_{ji}$. Each neuron is a perceptron with +1/-1 output. Every neuron receives input from every other neuron. Every neuron outputs signals to every other neuron.

5 Hopfield Net  $\Theta(z) = \begin{cases} +1 & \text{if } z > 0 \\ -1 & \text{if } z \le 0 \end{cases}$, $y_i = \Theta\!\left(\sum_{j \ne i} w_{ji} y_j + b_i\right)$. A symmetric network: $w_{ij} = w_{ji}$. Each neuron is a perceptron with +1/-1 output. Every neuron receives input from every other neuron. Every neuron outputs signals to every other neuron.

6 Loopy network  $\Theta(z) = \begin{cases} +1 & \text{if } z > 0 \\ -1 & \text{if } z \le 0 \end{cases}$, $y_i = \Theta\!\left(\sum_{j \ne i} w_{ji} y_j + b_i\right)$. A neuron flips if the weighted sum of the other neurons' outputs is of the opposite sign. But this may cause other neurons to flip! Each neuron is a perceptron with a +1/-1 output. Every neuron receives input from every other neuron. Every neuron outputs signals to every other neuron.

7 Loopy network  $y_i = \Theta\!\left(\sum_{j \ne i} w_{ji} y_j + b_i\right)$, $\Theta(z) = \begin{cases} +1 & \text{if } z > 0 \\ -1 & \text{if } z \le 0 \end{cases}$. At each time, each neuron receives a field $\sum_{j \ne i} w_{ji} y_j + b_i$. If the sign of the field matches its own sign, it does not respond. If the sign of the field opposes its own sign, it flips to match the sign of the field.
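As a minimal sketch of this update rule (assuming a NumPy state vector of ±1 values, a symmetric weight matrix with zero diagonal, and a bias vector; the names are mine, not from the slides):

```python
import numpy as np

def update_neuron(y, W, b, i):
    """Asynchronously update neuron i of a +1/-1 network, in place.

    y : length-N state vector of +1/-1 values
    W : symmetric N x N weight matrix with zero diagonal
    b : length-N bias vector
    Returns True if the neuron flipped.
    """
    field = W[i] @ y + b[i]        # local field; w_ii = 0, so y_i itself does not contribute
    new_value = 1 if field > 0 else -1
    flipped = new_value != y[i]
    y[i] = new_value
    return flipped
```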

8 Example Red edges are -1, blue edges are +1 Yellow nodes are +1, black nodes are -1 8

9 Example Red edges are -1, blue edges are +1 Yellow nodes are +1, black nodes are -1 9

10 Example Red edges are -1, blue edges are +1 Yellow nodes are +1, black nodes are -1 10

11 Example Red edges are -1, blue edges are +1 Yellow nodes are +1, black nodes are -1 11

12 Loopy network  If the sign of the field at any neuron opposes its own sign, it flips to match the field. Which will change the field at other nodes. Which may then flip. Which may cause other neurons, including the first one, to flip. And so on.

13 20 evolutions of a loopy net  $\Theta(z) = \begin{cases} +1 & \text{if } z > 0 \\ -1 & \text{if } z \le 0 \end{cases}$, $y_i = \Theta\!\left(\sum_{j \ne i} w_{ji} y_j + b_i\right)$. A neuron flips if the weighted sum of the other neurons' outputs is of the opposite sign. But this may cause other neurons to flip! All neurons which do not align with the local field flip.

14 120 evolutions of a loopy net All neurons which do not align with the local field flip 14

15 Loopy network  If the sign of the field at any neuron opposes its own sign, it flips to match the field. Which will change the field at other nodes. Which may then flip. Which may cause other neurons, including the first one, to flip. Will this behavior continue forever?

16 Loopy network  $y_i = \Theta\!\left(\sum_{j \ne i} w_{ji} y_j + b_i\right)$, $\Theta(z) = \begin{cases} +1 & \text{if } z > 0 \\ -1 & \text{if } z \le 0 \end{cases}$. Let $y_i^-$ be the output of the i-th neuron just before it responds to the current field, and $y_i^+$ its output just after it responds. If $y_i^- = \mathrm{sign}\!\left(\sum_{j \ne i} w_{ji} y_j + b_i\right)$, then $y_i^+ = y_i^-$: if the sign of the field matches its own sign, the neuron does not flip, and $y_i^+\left(\sum_{j \ne i} w_{ji} y_j + b_i\right) - y_i^-\left(\sum_{j \ne i} w_{ji} y_j + b_i\right) = 0$.

17 Loopy network  $y_i = \Theta\!\left(\sum_{j \ne i} w_{ji} y_j + b_i\right)$, $\Theta(z) = \begin{cases} +1 & \text{if } z > 0 \\ -1 & \text{if } z \le 0 \end{cases}$. If $y_i^- \ne \mathrm{sign}\!\left(\sum_{j \ne i} w_{ji} y_j + b_i\right)$, then $y_i^+ = -y_i^-$, and $y_i^+\left(\sum_{j \ne i} w_{ji} y_j + b_i\right) - y_i^-\left(\sum_{j \ne i} w_{ji} y_j + b_i\right) = 2 y_i^+\left(\sum_{j \ne i} w_{ji} y_j + b_i\right)$. This term is always positive! Every flip of a neuron is guaranteed to locally increase $y_i\left(\sum_{j \ne i} w_{ji} y_j + b_i\right)$.

18 Globally  Consider the following sum across all nodes: $D(y_1, y_2, \ldots, y_N) = \sum_i y_i\left(\sum_{j<i} w_{ji} y_j + b_i\right) = \sum_{i,\,j<i} w_{ij} y_i y_j + \sum_i b_i y_i$. The definition is the same as earlier, but avoids double counting and assumes $w_{ii} = 0$. For any unit $k$ that flips because of the local field, define $\Delta D(y_k) = D(y_1, \ldots, y_k^+, \ldots, y_N) - D(y_1, \ldots, y_k^-, \ldots, y_N)$.

19 Upon flipping a single unit  $\Delta D(y_k) = D(y_1, \ldots, y_k^+, \ldots, y_N) - D(y_1, \ldots, y_k^-, \ldots, y_N)$. Expanding: $\Delta D(y_k) = \left(y_k^+ - y_k^-\right)\sum_{j \ne k} w_{jk} y_j + \left(y_k^+ - y_k^-\right) b_k$; all other terms, which do not include $y_k$, cancel out. This is always positive! Every flip of a unit results in an increase in $D$.

20 Hopfield Net  Flipping a unit will result in an increase (non-decrease) of $D$. $D = \sum_{i,\,j<i} w_{ij} y_i y_j + \sum_i b_i y_i$ is bounded: $D_{\max} = \sum_{i,\,j<i} |w_{ij}| + \sum_i |b_i|$. The minimum increment of $D$ in a flip is $\Delta D_{\min} = \min_{i,\,\{y_i,\, i=1..N\}} 2\left|\sum_{j \ne i} w_{ji} y_j + b_i\right|$. Any sequence of flips must converge in a finite number of steps.

21 The Energy of a Hopfield Net  Define the energy of the network as $E = -\sum_{i,\,j<i} w_{ij} y_i y_j - \sum_i b_i y_i$, which is just the negative of $D$. The evolution of a Hopfield network constantly decreases its energy. Where did this energy concept suddenly sprout from?
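For illustration, the energy of a state can be evaluated directly from this definition; here is a small sketch (the helper name and the symmetric, zero-diagonal assumption on W are mine):

```python
import numpy as np

def hopfield_energy(y, W, b=None):
    """E = -sum_{i, j<i} w_ij y_i y_j - sum_i b_i y_i.

    Assumes W is symmetric with zero diagonal, so summing over all (i, j)
    counts every pair twice; the factor 1/2 corrects for that.
    """
    E = -0.5 * (y @ W @ y)
    if b is not None:
        E -= b @ y
    return E
```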

22 Analogy: Spin Glasses  Magnetic dipoles. Each dipole tries to align itself to the local field. In doing so it may flip. This will change the fields at other dipoles, which may flip, which changes the field at the current dipole.

23 Analogy: Spin Glasses  Total field at the current dipole: $f(p_i) = \sum_{j \ne i} \frac{r x_j}{\|p_i - p_j\|^2} + b_i$, where the sum is the intrinsic field and $b_i$ is the external field. $p_i$ is the vector position of the i-th dipole. The field at any dipole is the sum of the field contributions of all other dipoles. The contribution of a dipole to the field at any point falls off inversely with the square of the distance.

24 Analogy: Spin Glasses  Total field at the current dipole: $f(p_i) = \sum_{j \ne i} \frac{r x_j}{\|p_i - p_j\|^2} + b_i$. Response of the current dipole: $x_i \leftarrow x_i$ if $\mathrm{sign}\!\left(x_i f(p_i)\right) = 1$, and $x_i \leftarrow -x_i$ otherwise. A dipole flips if it is misaligned with the field at its location.

25 Analogy: Spin Glasses  Total field at the current dipole: $f(p_i) = \sum_{j \ne i} \frac{r x_j}{\|p_i - p_j\|^2} + b_i$. Response of the current dipole: $x_i \leftarrow x_i$ if $\mathrm{sign}\!\left(x_i f(p_i)\right) = 1$, and $x_i \leftarrow -x_i$ otherwise. Dipoles will keep flipping: a flipped dipole changes the field at other dipoles, some of which will flip, which will change the field at the current dipole, which may flip, etc.

26 Analogy: Spin Glasses  Total field at the current dipole: $f(p_i) = \sum_{j \ne i} \frac{r x_j}{\|p_i - p_j\|^2} + b_i$. Response of the current dipole: $x_i \leftarrow x_i$ if $\mathrm{sign}\!\left(x_i f(p_i)\right) = 1$, and $x_i \leftarrow -x_i$ otherwise. When will it stop?

27 Analogy: Spin Glasses  Total field at the current dipole: $f(p_i) = \sum_{j \ne i} \frac{r x_j}{\|p_i - p_j\|^2} + b_i$. Response of the current dipole: $x_i \leftarrow x_i$ if $\mathrm{sign}\!\left(x_i f(p_i)\right) = 1$, and $x_i \leftarrow -x_i$ otherwise. The total potential energy of the system is $E = C - \sum_i \sum_{j>i} \frac{r x_i x_j}{\|p_i - p_j\|^2} - \sum_i b_i x_i$. The system evolves to minimize the PE; dipoles stop flipping if any flips would result in an increase of PE.

28 Spin Glasses  (Figure: PE vs. state.) The system stops at one of its stable configurations, where the PE is a local minimum. Any small jitter from this stable configuration returns it to the stable configuration. I.e. the system remembers its stable state and returns to it.

29 Hopfield Network  $y_i = \Theta\!\left(\sum_{j \ne i} w_{ji} y_j + b_i\right)$, $\Theta(z) = \begin{cases} +1 & \text{if } z > 0 \\ -1 & \text{if } z \le 0 \end{cases}$, $E = -\sum_{i,\,j<i} w_{ij} y_i y_j - \sum_i b_i y_i$. This is analogous to the potential energy of a spin glass. The system will evolve until the energy hits a local minimum.

30 Hopfield Network  $y_i = \Theta\!\left(\sum_{j \ne i} w_{ji} y_j + b_i\right)$, $\Theta(z) = \begin{cases} +1 & \text{if } z > 0 \\ -1 & \text{if } z \le 0 \end{cases}$, $E = -\sum_{i,\,j<i} w_{ij} y_i y_j - \sum_i b_i y_i$. Typically we will not utilize the bias: the bias is similar to having a single extra neuron that is pegged to 1.0, and removing the bias term does not affect the rest of the discussion in any manner. This is analogous to the potential energy of a spin glass; the system will evolve until the energy hits a local minimum. The bias is not gone for good, though: we will bring it back later in the discussion.

31 Hopfield Network  $y_i = \Theta\!\left(\sum_{j \ne i} w_{ji} y_j\right)$, $\Theta(z) = \begin{cases} +1 & \text{if } z > 0 \\ -1 & \text{if } z \le 0 \end{cases}$, $E = -\sum_{i,\,j<i} w_{ij} y_i y_j$. This is analogous to the potential energy of a spin glass. The system will evolve until the energy hits a local minimum.

32 Evolution  $E = -\sum_{i,\,j<i} w_{ij} y_i y_j$. (Figure: PE vs. state.) The network will evolve until it arrives at a local minimum in the energy contour.

33 Content-addressable memory  (Figure: PE vs. state.) Each of the minima is a stored pattern. If the network is initialized close to a stored pattern, it will inevitably evolve to the pattern. This is a content-addressable memory: recall memory content from partial or corrupt values. Also called associative memory.

34 Evolution  $E = -\sum_{i,\,j<i} w_{ij} y_i y_j$. (Image pilfered from unknown source.) The network will evolve until it arrives at a local minimum in the energy contour.

35 Evolution  $E = -\sum_{i,\,j<i} w_{ij} y_i y_j$. The network will evolve until it arrives at a local minimum in the energy contour. We proved that every change in the network results in a decrease in energy, so the path to the energy minimum is monotonic.

36 Evolution  $E = -\sum_{i,\,j<i} w_{ij} y_i y_j$. For threshold activations the energy contour is only defined on a lattice: the corners of a unit cube on $[-1, 1]^N$. For tanh activations it will be a continuous function.

37 Evolution  $E = -\sum_{i,\,j<i} w_{ij} y_i y_j$. For threshold activations the energy contour is only defined on a lattice: the corners of a unit cube on $[-1, 1]^N$. For tanh activations it will be a continuous function.

38 Evolution  In matrix form: $E = -\frac{1}{2} y^T W y$ (note the $\frac{1}{2}$, which compensates for counting each pair twice in the full double sum). For threshold activations the energy contour is only defined on a lattice: the corners of a unit cube. For tanh activations it will be a continuous function.

39 Energy contour for a 2-neuron net Two stable states (tanh activation) Symmetric, not at corners Blue arc shows a typical trajectory for sigmoid activation 39

40 Energy contour for a 2-neuron net  Why symmetric? Because $-\frac{1}{2} y^T W y = -\frac{1}{2} (-y)^T W (-y)$: if $y$ is a local minimum, so is $-y$. Two stable states (tanh activation), symmetric, not at corners. Blue arc shows a typical trajectory for sigmoid activation.

41 3-neuron net 8 possible states 2 stable states (hard thresholded network) 41

42 Examples: Content addressable memory 42

43 Hopfield net examples 43

44 Computational algorithm  1. Initialize the network with the initial pattern: $y_i(0) = x_i$, $0 \le i \le N-1$. 2. Iterate until convergence: $y_i(t+1) = \Theta\!\left(\sum_{j \ne i} w_{ji} y_j\right)$, $0 \le i \le N-1$. Very simple. Updates can be done sequentially, or all at once. Convergence: $E = -\sum_i \sum_{j>i} w_{ji} y_j y_i$ does not change significantly any more.
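A hedged end-to-end sketch of this algorithm (the function name, the random update order, and the sweep-based convergence test are implementation choices of mine, not prescribed by the slides):

```python
import numpy as np

def run_hopfield(W, x, max_sweeps=100, rng=None):
    """Evolve a Hopfield net from the initial pattern x until no neuron flips.

    W : symmetric N x N weight matrix with zero diagonal
    x : initial +1/-1 pattern of length N
    Neurons are updated one at a time in random order (asynchronous updates).
    """
    rng = np.random.default_rng() if rng is None else rng
    y = x.copy()
    for _ in range(max_sweeps):
        changed = False
        for i in rng.permutation(len(y)):
            new_value = 1 if W[i] @ y > 0 else -1
            if new_value != y[i]:
                y[i] = new_value
                changed = True
        if not changed:            # a full sweep with no flips: converged
            break
    return y
```

Asynchronous updates (one neuron at a time) are what the convergence argument above covers; updating all neurons at once can instead end up oscillating between two states.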

45 Issues How do we make the network store a specific pattern or set of patterns? How many patterns can we store? 45

46 Issues How do we make the network store a specific pattern or set of patterns? How many patterns can we store? 46

47 How do we remember a specific pattern?  How do we teach a network to remember this image? For an image with $N$ pixels we need a network with $N$ neurons. Every neuron connects to every other neuron. Weights are symmetric (not mandatory). $\frac{N(N-1)}{2}$ weights in all.

48 Storing patterns: Training a network  A network that stores pattern $P$ also naturally stores $-P$, by symmetry: $E(P) = E(-P)$, since $E = -\sum_{i,\,j<i} w_{ji} y_j y_i$ is a function only of the products $y_i y_j$.

49 A network can store multiple patterns  (Figure: PE vs. state.) Every stable point is a stored pattern, so we could design the net to store multiple patterns. Remember that every stored pattern $P$ is actually two stored patterns, $P$ and $-P$.

50 Storing a pattern  $E = -\sum_{i,\,j<i} w_{ji} y_j y_i$. Design $\{w_{ij}\}$ such that the energy is a local minimum at the desired pattern $P = \{y_i\}$.

51 Storing specific patterns  Storing 1 pattern: we want $\mathrm{sign}\!\left(\sum_{j \ne i} w_{ji} y_j\right) = y_i$ for all $i$, i.e. the pattern is stationary.

52 Storing specific patterns  Hebbian learning: $w_{ji} = y_j y_i$. Storing 1 pattern: we want $\mathrm{sign}\!\left(\sum_{j \ne i} w_{ji} y_j\right) = y_i$ for all $i$, i.e. the pattern is stationary.

53 Storing specific patterns  Hebbian learning: $w_{ji} = y_j y_i$. Then $\mathrm{sign}\!\left(\sum_{j \ne i} w_{ji} y_j\right) = \mathrm{sign}\!\left(\sum_{j \ne i} y_j y_i y_j\right) = \mathrm{sign}\!\left(\sum_{j \ne i} y_j^2 y_i\right) = \mathrm{sign}(y_i) = y_i$.

54 Storing specific patterns  Hebbian learning: $w_{ji} = y_j y_i$. The pattern is stationary: $\mathrm{sign}\!\left(\sum_{j \ne i} w_{ji} y_j\right) = \mathrm{sign}\!\left(\sum_{j \ne i} y_j y_i y_j\right) = \mathrm{sign}\!\left(\sum_{j \ne i} y_j^2 y_i\right) = \mathrm{sign}(y_i) = y_i$.
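A small numerical check of this stationarity claim (a sketch assuming a single random ±1 pattern; the variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 16
y = rng.choice([-1, 1], size=N)

W = np.outer(y, y).astype(float)   # Hebbian weights: w_ji = y_j * y_i
np.fill_diagonal(W, 0)             # no self-connections

# The stored pattern is a fixed point: sign(W y) reproduces y exactly.
assert np.array_equal(np.sign(W @ y), y)
```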

55 Storing specific patterns  Hebbian learning: $w_{ji} = y_j y_i$. Then $E = -\sum_{i,\,j<i} w_{ji} y_j y_i = -\sum_{i,\,j<i} y_i^2 y_j^2 = -\sum_{i,\,j<i} 1 = -0.5\,N(N-1)$. This is the lowest possible energy value for the network.

56 Storing specific patterns  Hebbian learning: $w_{ji} = y_j y_i$. The pattern is STABLE: $E = -\sum_{i,\,j<i} w_{ji} y_j y_i = -\sum_{i,\,j<i} y_i^2 y_j^2 = -\sum_{i,\,j<i} 1 = -0.5\,N(N-1)$. This is the lowest possible energy value for the network.

57 Storing multiple patterns  $w_{ji} = \sum_p y_j^p y_i^p$, where $\{y^p\}$ is the set of patterns to store and the superscript $p$ represents the specific pattern.

58 Storing multiple patterns  Let $y_p$ be the vector representing the p-th pattern. Let $Y = [y_1\; y_2\; \cdots]$ be a matrix with all the stored patterns. Then $W = \sum_p \left(y_p y_p^T - I\right) = Y Y^T - N_p I$, where $N_p$ is the number of patterns.
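A sketch of this construction in NumPy (the function name is mine; it follows the formula above directly):

```python
import numpy as np

def hebbian_weights(patterns):
    """Build W = Y Y^T - N_p I from a list of +1/-1 pattern vectors.

    The diagonal of Y Y^T is exactly N_p (each pattern contributes 1),
    so subtracting N_p * I simply removes the self-connections.
    """
    Y = np.column_stack(patterns).astype(float)   # N x N_p matrix of stored patterns
    n_patterns = Y.shape[1]
    return Y @ Y.T - n_patterns * np.eye(Y.shape[0])
```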

59 Storing multiple patterns  Note that the behavior of the energy with $W = Y Y^T - N_p I$ is identical to its behavior with $W = Y Y^T$, since the energy landscape only differs by an additive constant: $y^T \left(Y Y^T - N_p I\right) y = y^T Y Y^T y - N N_p$ (because $y^T y = N$ for $\pm 1$ vectors). But the latter, $W = Y Y^T$, is easier to analyze. Hence in the following slides we will use $W = Y Y^T$.

60 Storing multiple patterns  Let $y_p$ be the vector representing the p-th pattern. Let $Y = [y_1\; y_2\; \cdots]$ be a matrix with all the stored patterns. Then $W = Y Y^T$, which is positive semidefinite!

61 Issues How do we make the network store a specific pattern or set of patterns? How many patterns can we store? 61

62 Consider the energy function  $E = -\frac{1}{2} y^T W y - b^T y$. We reinstate the bias term for completeness' sake; remember that we don't actually use it in a Hopfield net.

63 Consider the energy function  $E = -\frac{1}{2} y^T W y - b^T y$. This is a quadratic! And $W$ is positive semidefinite. (The bias term is reinstated for completeness' sake; remember that we don't actually use it in a Hopfield net.)

64 Consider the energy function  $E = -\frac{1}{2} y^T W y - b^T y$. This is a quadratic! For Hebbian learning $W$ is positive semidefinite, and with the negative sign $E$ is a concave quadratic. (The bias term is reinstated for completeness' sake; remember that we don't actually use it in a Hopfield net.)

65 The energy function  $E = -\frac{1}{2} y^T W y - b^T y$. $E$ is a concave quadratic.

66 The energy function  $E = -\frac{1}{2} y^T W y - b^T y$. $E$ is a concave quadratic, shown from above (assuming 0 bias). But the components of $y$ can only take values $\pm 1$, i.e. $y$ lies on the corners of the unit hypercube.

67 The energy function  $E = -\frac{1}{2} y^T W y - b^T y$. $E$ is a concave quadratic, shown from above (assuming 0 bias). But the components of $y$ can only take values $\pm 1$, i.e. $y$ lies on the corners of the unit hypercube.

68 The energy function  Stored patterns: $E = -\frac{1}{2} y^T W y - b^T y$. The stored values of $y$ are the ones where all adjacent corners are higher on the quadratic. Hebbian learning attempts to make the quadratic steep in the vicinity of stored patterns.
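One way to see "all adjacent corners are higher" numerically: build Hebbian weights for a few random, well-separated patterns, then flip each bit of a stored pattern in turn and compare energies (a sketch; the sizes and seed are arbitrary, and for random patterns the property holds with high probability rather than always):

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 64, 3
patterns = [rng.choice([-1, 1], size=N) for _ in range(P)]

Y = np.column_stack(patterns).astype(float)
W = Y @ Y.T - P * np.eye(N)                  # Hebbian weights, zero diagonal

def energy(y):
    return -0.5 * (y @ W @ y)

for p, y in enumerate(patterns):
    neighbours = []
    for i in range(N):                       # visit each adjacent corner (one bit flipped)
        y_flip = y.astype(float)
        y_flip[i] *= -1
        neighbours.append(energy(y_flip))
    print(f"pattern {p}: E = {energy(y):.1f}, lowest adjacent corner = {min(neighbours):.1f}")
```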

69 Patterns you can store 4-bit patterns Stored patterns would be the corners where the value of the quadratic energy is lowest 69

70 Patterns you can store  (Figure: stored patterns and their ghosts (negations) on the hypercube.) Ideally the stored patterns must be maximally separated on the hypercube. The number of patterns we can store depends on the actual distance between the patterns.

71 How many patterns can we store?  Hopfield: a network of N neurons can store up to ~0.15N patterns, provided the patterns are random and far apart.
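A rough empirical probe of this claim (a sketch rather than Hopfield's analysis; the network size, pattern counts, and "exact fixed point" criterion are my own choices): store increasing numbers of random patterns with Hebbian weights and count how many remain exact fixed points of the update rule.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100

for n_patterns in (5, 10, 15, 20, 30):
    patterns = rng.choice([-1, 1], size=(n_patterns, N))
    W = patterns.T.astype(float) @ patterns - n_patterns * np.eye(N)   # Hebbian weights
    # Count patterns that survive one update sweep unchanged.
    stable = sum(np.array_equal(np.sign(W @ p), p) for p in patterns)
    print(f"{n_patterns:3d} patterns stored, {stable:3d} are exact fixed points")
```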

72 How many patterns can we store?  Problem with Hebbian learning: it focuses on the patterns that must be stored. What about patterns that must not be stored? More recent work: we can actually store up to N patterns, using non-Hebbian learning ($W$ loses positive semidefiniteness).

73 Non-Hebbian: Storing N patterns  Requirement: given $y_1, y_2, \ldots, y_P$, design $W$ such that $\mathrm{sign}(W y_p) = y_p$ for all target patterns, and there are no other binary vectors for which this holds. I.e. $y_1, y_2, \ldots, y_P$ are the only binary eigenvectors of $W$, and the corresponding eigenvalues are positive.

74 Storing N patterns  Simple solution: design $W$ such that $y_1, y_2, \ldots, y_P$ are the eigenvectors of $W$. This is easily achieved if $y_1, y_2, \ldots, y_P$ are orthogonal to one another. Let $Y = [y_1\; y_2\; \cdots\; y_P\; r_{P+1}\; \cdots\; r_N]$, where $N$ is the number of bits, $r_{P+1}, \ldots, r_N$ are synthetic non-binary vectors, and $y_1, \ldots, y_P, r_{P+1}, \ldots, r_N$ are all orthogonal to one another. Set $W = Y \Lambda Y^T$; the eigenvalues $\lambda$ in the diagonal matrix $\Lambda$ determine the steepness of the energy function around the stored values. What must the eigenvalues corresponding to the $y_p$'s be? What must the eigenvalues corresponding to the $r$'s be? Under no condition can more than $N$ patterns be stored.
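A sketch of this construction for mutually orthogonal ±1 patterns, using Hadamard rows as the patterns (the choice of eigenvalue 1 for stored directions and 0 for the padding r directions is my reading of the rhetorical questions above; the columns here are not normalized, which only rescales the eigenvalues):

```python
import numpy as np

# An 8 x 8 Hadamard matrix: its rows are mutually orthogonal +/-1 vectors.
H2 = np.array([[1, 1], [1, -1]])
H = np.kron(np.kron(H2, H2), H2)
N = H.shape[0]

P = 3
patterns = [H[i] for i in range(P)]       # store P orthogonal patterns

# Columns of Y: the P stored patterns, then the remaining orthogonal
# directions standing in for the synthetic r vectors.
Y = H.T.astype(float)
lam = np.zeros(N)
lam[:P] = 1.0                             # positive eigenvalues only for stored directions
W = Y @ np.diag(lam) @ Y.T

for y in patterns:
    assert np.array_equal(np.sign(W @ y), y)   # each stored pattern is a fixed point
```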

75 Storing N patterns  For non-orthogonal patterns the solution is less simple: $y_1, y_2, \ldots, y_P$ can no longer be eigenvectors, but now represent quantized eigendirections: $\mathrm{sign}(W y_p) = y_p$. Note that this is not an exact eigenvalue equation. Optimization algorithms can provide a $W$ for many patterns. Under no condition can we store more than $N$ patterns.

76 Alternate approach to estimating the network  $E = -\frac{1}{2} y^T W y - b^T y$. Estimate $W$ (and $b$) such that $E$ is minimized for $y_1, y_2, \ldots, y_P$ and $E$ is maximized for all other $y$. We will encounter this solution again soon. Once again, we cannot store more than $N$ patterns.

77 Storing more than N patterns  How do we even solve the problem of storing N patterns in the first place? How do we increase the capacity of the network, i.e. store more patterns? A common answer to both problems..

78 Lookahead.. Adding capacity to a Hopfield network 78

79 Expanding the network  N neurons + K neurons: add a large number of neurons whose actual values you don't care about!

80 Expanded network  N neurons + K neurons. New capacity: ~(N+K) patterns, although we only care about the pattern of the first N neurons; we're interested in N-bit patterns.

81 Introducing  N neurons + K neurons: the Boltzmann machine. Next regular class.
