Machine Learning and Adaptive Systems. Lectures 5 & 6
ECE656 - Lectures 5 & 6, Professor, Department of Electrical and Computer Engineering, Colorado State University, Fall 2015
c. Performance Learning - LMS Algorithm (Widrow 1960)

The iterative steepest-descent procedure requires knowledge of the exact gradient of $J(w_i(k))$. In practice the gradient is not known and must be estimated from the instantaneous value of the squared error (or from instantaneous estimates of $R_{xx}$ and $R_{xd}$):

$$J(w_i(k)) \approx e_i^2(k) = (d_i(k) - net_i(k))^2, \qquad net_i(k) = w_i^t(k)\, x(k)$$

$$\frac{\partial J(w_i(k))}{\partial w_i(k)} = -2\, x(k)\, e_i(k)$$

Then, from the gradient descent rule (also called stochastic gradient descent),

$$w_i(k+1) = w_i(k) + \mu\, x(k)\, e_i(k).$$

Note that the same result is obtained by using the instantaneous estimates of $R_{xx}$ and $R_{xd}$ in place of the actual values in the steepest descent rule, i.e. $\hat R_{xx} = x(k)\, x^t(k)$ and $\hat R_{xd} = x(k)\, d_i(k)$:

$$w_i(k+1) = w_i(k) + \mu\, [\hat R_{xd} - \hat R_{xx}\, w_i(k)] = w_i(k) + \mu\, x(k)\, [d_i(k) - x^t(k)\, w_i(k)] = w_i(k) + \mu\, x(k)\, e_i(k).$$
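A minimal NumPy sketch of this sample-by-sample LMS update for a single linear neuron; the synthetic data, step size, and variable names are illustrative assumptions, not part of the lecture.

```python
import numpy as np

def lms(X, d, mu=0.05, epochs=1):
    """LMS: w(k+1) = w(k) + mu * x(k) * e(k), with e(k) = d(k) - w^t x(k)."""
    K, N = X.shape
    w = np.zeros(N)
    J = []                          # instantaneous squared errors (learning curve)
    for _ in range(epochs):
        for k in range(K):
            e = d[k] - w @ X[k]     # e_i(k) = d_i(k) - net_i(k)
            w = w + mu * X[k] * e   # stochastic gradient step
            J.append(e ** 2)
    return w, J

# Illustrative use: identify a 3-tap linear model from noisy data.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 3))
w_true = np.array([1.0, -2.0, 0.5])
d = X @ w_true + 0.05 * rng.standard_normal(500)
w_hat, J = lms(X, d)
print(w_hat)   # should be close to w_true
```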
Remarks

1. The plot of $J(w_i(k))$ as a function of $k$ is the learning curve of the LMS.

2. Comparing LMS with Amari's rule, $r(w_i(k), x(k), d_i(k)) = e_i(k)$, i.e. the error is the learning signal.

3. If we use the sample correlation and cross-correlation matrices instead of the actual values in the gradient descent rule, i.e. $R_{xx} = X^t X$ and $R_{xd} = X^t d_i$, then

$$w_i(k+1) = w_i(k) + \mu\, X^t [d_i - X\, w_i(k)] = w_i(k) + \mu\, X^t e_i = w_i(k) + \mu \sum_{l=1}^{K} x(l)\, e_i(l),$$

i.e. the LS solution corresponds to LMS after one pass over the training data (hence called batch gradient descent). When $K \to \infty$ and the process is ergodic, LMS $\to$ Wiener-Hopf.

4. Owing to the presence of noise and perturbations in the gradient estimate at each iteration, LMS does not settle at the global minimum but wanders around it. This misadjustment is measured by

$$M = \frac{\text{Average Excess MSE}}{\text{Min. MSE}} = \frac{\text{Average}(J(w_i(k))) - J_{min}}{J_{min}}.$$

It can be shown that $M = \mu\, tr[R_{xx}] = \mu \sum_{i=1}^{N} \lambda_i$, where $\lambda_i$ is the $i$th eigenvalue of $R_{xx}$, i.e. $M$ can be reduced (not eliminated) by reducing $\mu$. But this presents a trade-off between convergence speed and accuracy.
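The batch form in remark 3 can be sketched the same way, using the whole data matrix in each update; the variable names and step size are illustrative assumptions.

```python
import numpy as np

def batch_gd(X, d, mu=1e-3, n_iters=200):
    """Batch gradient descent: w <- w + mu * X^t (d - X w)."""
    # mu must satisfy 0 < mu < 2 / lambda_max(X^t X) for convergence.
    w = np.zeros(X.shape[1])
    for _ in range(n_iters):
        e = d - X @ w           # error vector over the whole training set
        w = w + mu * X.T @ e    # equals mu * sum_l x(l) e(l)
    return w

# For a stable step size this approaches the least-squares (Wiener-Hopf) solution,
# i.e. np.linalg.lstsq(X, d, rcond=None)[0].
```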
Example 1:

(a) For a small learning rate, show that the mean of the weight vector estimate satisfies the state equation

$$m(n) = (I - \mu R_{xx})^n\, [m(0) - m(\infty)] + m(\infty),$$

where $m(k) = E[w(k)]$.

(b) Show that, as with gradient descent, convergence requires $0 < \mu < 2/\lambda_{max}$, where $\lambda_{max}$ is the maximum eigenvalue of $R_{xx}$.

(a) Rewrite the weight update equation using $e(k) = d(k) - x^t(k)\, w(k)$ as

$$w(k+1) = A(k)\, w(k) + \mu\, x(k)\, d(k), \qquad A(k) = I - \mu\, x(k)\, x^t(k).$$

Now define the weight error $\epsilon(k) = w(k) - w^*$, where $w^*$ stands for the optimum Wiener-Hopf solution. Then we get

$$\epsilon(k+1) = A(k)\, \epsilon(k) + f(k), \qquad f(k) = \mu\, x(k)\, [d(k) - w^{*t} x(k)].$$

Using Kushner's direct averaging method (see Section 3.8) for small learning rates and taking $E[\cdot]$ of the above stochastic state equation gives

$$E[\epsilon(k+1)] = (I - \mu R_{xx})\, E[\epsilon(k)] = \bar A\, E[\epsilon(k)],$$

since $E[f(k)] = \mu\, E[x(k)(d(k) - w^{*t} x(k))] = 0$ by the orthogonality principle.
Since $m(\infty) = w^*$, the above equation can be written as

$$m(k+1) - m(\infty) = (I - \mu R_{xx})\,(m(k) - m(\infty)),$$

which is a state equation with no excitation. Solving this state equation for the initial condition $(m(0) - m(\infty))$ yields

$$m(n) - m(\infty) = (I - \mu R_{xx})^n\, (m(0) - m(\infty)).$$

(b) The matrix $R_{xx}$ can be diagonalized as $R_{xx} = Q \Lambda Q^t$, where the diagonal matrix $\Lambda$ contains the eigenvalues of $R_{xx}$ and $Q$ contains the associated eigenvectors as its columns. Then, using $Q Q^t = I$, we can write

$$m(n) - m(\infty) = (Q Q^t - \mu\, Q \Lambda Q^t)^n\, (m(0) - m(\infty)) \quad \Longrightarrow \quad \zeta(n) = (I - \mu \Lambda)^n\, \zeta(0),$$

where $\zeta(n) = Q^t\, (m(n) - m(\infty))$. For stability and convergence we require $|1 - \mu \lambda_i| < 1$ for all $i \in [1, N]$. Thus, the necessary and sufficient condition for convergence is

$$0 < \mu < \frac{2}{\lambda_{max}}.$$

Remark

Note that we can fit an exponential to $(1 - \mu \lambda_i) = e^{-1/\tau_i}$, where $\tau_i$ is the time constant of the $i$th mode. The slowest learning mode is determined by $\lambda_{min}$ while the fastest is dictated by $\lambda_{max}$. Thus, if the eigenvalues are widely spread, the settling time is decided by the smallest eigenvalue.
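A small numerical check of this bound, with illustrative data and names: estimate $R_{xx}$ from samples, compute $\lambda_{max}$, and compare the spectral radius of $(I - \mu R_{xx})$ for step sizes below and above $2/\lambda_{max}$.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((2000, 4)) @ np.diag([3.0, 1.0, 0.5, 0.1])   # spread eigenvalues
Rxx = X.T @ X / X.shape[0]                  # sample correlation matrix
lam = np.linalg.eigvalsh(Rxx)               # ascending eigenvalues
lam_min, lam_max = lam[0], lam[-1]
mu_bound = 2.0 / lam_max
print("stability bound 2/lambda_max =", mu_bound)

for mu in (0.5 * mu_bound, 1.5 * mu_bound):
    # spectral radius of (I - mu * Rxx) governs the decay of E[eps(k)]
    rho = max(abs(1 - mu * lam_min), abs(1 - mu * lam_max))
    print(f"mu = {mu:.3f}: spectral radius {rho:.3f}",
          "-> converges" if rho < 1 else "-> diverges")
```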
d. Performance Learning - Perceptron Rule (Rosenblatt 1958)

In contrast to Widrow's ADALINE, this supervised learning rule uses the actual output to generate the learning signal, i.e. $r = d_i - o_i$, and

1. uses the BHL (or sgn(·)) function as the activation function,
2. uses a binary ±1 desired signal.

That is, $o_i(k) = sgn(net_i(k))$ with $net_i(k) = w_i^t(k)\, x(k)$, and

$$e_i(k) = d_i(k) - o_i(k), \qquad \Delta w_i(k) = \mu\, x(k)\, e_i(k).$$

Thus, the perceptron updating rule is

$$w_i(k+1) = w_i(k) + \mu\, [d_i(k) - sgn(net_i(k))]\, x(k),$$

i.e. the weights are only adjusted when $o_i$ is incorrect: since $d_i = \pm 1$ and $o_i = \pm 1$, a correct output gives $e_i(k) = 0$, i.e. no learning. Now, if $d_i = 1$ and $o_i = -1$ then $e_i(k) = 2$, while if $d_i = -1$ and $o_i = 1$ then $e_i(k) = -2$, and the learning rule becomes

$$w_i(k+1) = w_i(k) \pm 2\mu\, x(k).$$

Assuming $w_i(0) = 0$, after one epoch

$$w_i^o = 2\mu \sum_{k \in R_1} x(k) - 2\mu \sum_{k \in R_2} x(k),$$

where $R_1$ and $R_2$ are the subsets of misclassified data indices (with $d_i = 1$ and $d_i = -1$, respectively).
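A minimal sketch of this rule for a single output cell, assuming ±1 targets; the data and names are illustrative.

```python
import numpy as np

def perceptron(X, d, mu=0.1, epochs=20):
    """Perceptron rule: update only when sgn(w^t x) differs from the target."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x, target in zip(X, d):
            o = 1.0 if w @ x >= 0 else -1.0   # sgn activation
            e = target - o                    # 0 (correct), +2 or -2 (misclassified)
            w = w + mu * e * x                # no change on correct samples
    return w
```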
e. Performance Learning - Delta Rule (McClelland & Rumelhart 1986)

In contrast to the perceptron rule, this supervised learning rule:

1. uses a continuous output (i.e. a continuous activation function),
2. uses a differentiable activation function,
3. circumvents the problems of all previous learning rules.

[Figure: cell $i$ receives inputs $x(k)$ through weights $w_{i1}, \dots, w_{iN}$, forms $net_i(k)$, and outputs $o_i(k) = f(net_i(k))$; the error $e_i(k) = d_i(k) - o_i(k)$ together with $f'(\cdot)$ drives the delta rule.]

The delta rule is a generalization of the Widrow-Hoff LMS rule in which we use the instantaneous error

$$\xi(k) = e_i^2(k) = (d_i(k) - o_i(k))^2 = (d_i(k) - f(net_i(k)))^2.$$

Now, taking the partial derivative of this error with respect to $w_i$ gives

$$\frac{\partial \xi(k)}{\partial w_i(k)} = -2\, \frac{\partial f(net_i(k))}{\partial w_i(k)}\, e_i(k).$$

Using the chain rule,

$$\frac{\partial f(net_i(k))}{\partial w_i(k)} = \frac{\partial f(net_i(k))}{\partial net_i(k)} \cdot \frac{\partial net_i(k)}{\partial w_i(k)}.$$

But $net_i(k) = w_i^t(k)\, x(k)$ and hence $\frac{\partial net_i(k)}{\partial w_i(k)} = x(k)$.
e. Performance Learning - Delta Rule (Cont.)

Thus, we get

$$\frac{\partial \xi(k)}{\partial w_i(k)} = -2\, f'(w_i^t(k)\, x(k))\, x(k)\, e_i(k) = -2\, f'(net_i(k))\, x(k)\, e_i(k),$$

where $f'(net_i(k)) = \frac{\partial f(net_i(k))}{\partial net_i(k)}$. Now, using gradient descent we get the delta updating rule

$$w_i(k+1) = w_i(k) - \frac{1}{2}\, \mu\, \frac{\partial \xi(k)}{\partial w_i(k)} = w_i(k) + \mu\, f'(w_i^t(k)\, x(k))\, x(k)\, e_i(k).$$

Remarks:

1. For the delta rule, the learning signal is $r(w_i(k), x(k), d_i(k)) = f'(w_i^t(k)\, x(k))\, e_i(k) = f'(w_i^t(k)\, x(k))\, (d_i(k) - o_i(k))$.

2. For the unipolar sigmoidal activation function $o_i = f(net_i) = \frac{1}{1 + e^{-\lambda\, net_i}}$, we have $f'(net_i(k)) = \lambda\, o_i(k)\, (1 - o_i(k))$.

3. For linear neurons, the delta rule reduces to the LMS rule.
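A sketch of the delta rule for one neuron with the unipolar sigmoid of remark 2 (targets assumed to lie in [0, 1]); λ, the step size, and the names are illustrative assumptions.

```python
import numpy as np

def delta_rule(X, d, mu=0.5, epochs=100, lam=1.0):
    """Delta rule: w <- w + mu * f'(net) * e * x, unipolar sigmoid activation."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x, target in zip(X, d):
            net = w @ x
            o = 1.0 / (1.0 + np.exp(-lam * net))   # f(net)
            fprime = lam * o * (1.0 - o)           # f'(net) = lam * o * (1 - o)
            w = w + mu * fprime * (target - o) * x
    return w
```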
f. Performance Learning - RLS Rule (Azimi-Sadjadi 1992)

In contrast to the LMS rule, this supervised learning rule:

1. uses a squared error (SE) criterion with limited memory (forgetting factor) to rely mostly on recent data,
2. uses a continuous threshold-logic activation function,
3. is significantly faster than LMS with no accuracy-speed trade-off,
4. is better suited to non-stationary environments.

[Figure: cell $i$ with inputs $x(k)$, weights $w_{i1}, \dots, w_{iN}$, threshold-logic activation $f(net)$ of slope $1/a$, output $o_i(k)$; the error $e_i(k) = d_i(k) - o_i(k)$ drives the RLS rule.]

Objective: Given $\{x(k), d_i(k)\}_{k=1}^{K}$, find the optimum $w_i(n)$ at the (current) iteration $n$ that minimizes the SE with limited memory,

$$J(w_i(n)) = \frac{1}{2} \sum_{k=1}^{n} \gamma^{\,n-k}\, e_i^2(k) = \frac{1}{2} \sum_{k=1}^{n} \gamma^{\,n-k}\, (d_i(k) - o_i(k))^2,$$

where $0 < \gamma < 1$ (e.g., $\gamma = 0.99$) is a forgetting factor that weights recent data and forgets old data.
f. Performance Learning - RLS Rule (Cont.)

In this rule a threshold-logic activation is used, i.e.

$$o_i(k) = f(net_i(k)) = \begin{cases} 0 & net_i(k) \le 0 \\ net_i(k)/a & 0 < net_i(k) < a \\ 1 & net_i(k) \ge a \end{cases}$$

where $net_i(k) = w_i^t(k)\, x(k)$.

Assuming that the current weight vector $w_i(n)$ is used in place of the previous weights, i.e. $w_i(k) = w_i(n)$ for all $k \in [1, n-1]$, taking the derivative of $J(w_i(n))$ with respect to $w_i(n)$ and setting it to zero yields the normal equation

$$\frac{1}{a} \sum_{k=1}^{n} \gamma^{\,n-k}\, x(k)\left(d_i(k) - \frac{1}{a}\, x^t(k)\, \hat w_i(n)\right) = 0.$$

Note that learning only happens when $net_i$ is within the ramp part of the activation function. The above normal equation can be solved iteratively using the weighted Recursive Least Squares (RLS) algorithm as follows:

$$K(n) = \frac{P(n-1)\, x(n)}{\gamma + x^t(n)\, P(n-1)\, x(n)} \qquad \text{(gain calculation)} \tag{1}$$

$$P(n) = \gamma^{-1}\, [I - K(n)\, x^t(n)]\, P(n-1) \qquad \text{(inverse correlation update)} \tag{2}$$

$$\hat w_i(n) = \hat w_i(n-1) + K(n)\left[d_i(n) - \frac{1}{a}\, x^t(n)\, \hat w_i(n-1)\right] \qquad \text{(weight update)} \tag{3}$$

where $P(n) = R^{-1}(n)$ and $R(n) = \frac{1}{a^2} \sum_{k=1}^{n} \gamma^{\,n-k}\, x(k)\, x^t(k)$ is the weighted correlation matrix of the data. Start with $P(0) = \delta I$ for small $\delta$ (e.g., $\delta = 0.5$), and initialize the weights randomly.
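A sketch of recursions (1)-(3) for a single cell with the ramp activation; the choices of a, γ, δ, the data handling, and the restriction of updates to samples on the ramp segment are illustrative assumptions following the note above.

```python
import numpy as np

def rls_neuron(X, d, a=1.0, gamma=0.99, delta=0.5):
    """Weighted RLS update of one neuron with ramp activation f(net) = net/a on (0, a)."""
    N = X.shape[1]
    rng = np.random.default_rng(0)
    w = 0.01 * rng.standard_normal(N)          # random weight initialization
    P = delta * np.eye(N)                      # P(0) = delta * I
    for x, target in zip(X, d):
        net = w @ x
        if 0.0 < net < a:                      # learn only on the ramp segment
            Px = P @ x
            K = Px / (gamma + x @ Px)                  # Eq. (1): gain vector
            P = (P - np.outer(K, x) @ P) / gamma       # Eq. (2): inverse correlation
            w = w + K * (target - (x @ w) / a)         # Eq. (3): weight update
    return w
```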
- Hebbian Rule (Hebb 1949)

Donald Hebb, a neuropsychologist, postulated a mechanism for learning at the cellular level in the brain.

Hebb's idea: "When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased."

Alternatively, we can break this up into two rules:

a) If two cells (A, B) (or (i, j)) on either side of a synaptic weight ($w_{ij}$) are activated simultaneously or synchronously, then the strength of that synapse is selectively increased.

b) Whereas if they are activated asynchronously, the synapse is selectively weakened (or eliminated).

[Figure: presynaptic cell $j$ with activity $x_j$ connected through weight $w_{ij}$ to postsynaptic cell $i$ with output $o_i$.]

Thus, the Hebbian synapse uses a time-dependent, highly local, and strongly interactive mechanism to increase synaptic efficiency as a function of the correlation between presynaptic and postsynaptic activities. That is,

$$\Delta w_{ij} = \mu\, x_j\, o_i.$$

Writing this for all $j \in [1, N]$ cells connected to cell $i$ yields the Hebbian learning rule,

$$w_i(k+1) = w_i(k) + \mu\, x(k)\, o_i(k).$$
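A one-line sketch of this update for a single cell, assuming (for illustration only) a linear postsynaptic activation $o_i = w^t x$; names are illustrative.

```python
import numpy as np

def hebbian_step(w, x, mu=0.01):
    """Plain Hebbian rule for one cell: w(k+1) = w(k) + mu * x(k) * o_i(k)."""
    o = w @ x              # postsynaptic activity (linear cell assumed)
    return w + mu * x * o
```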
- Hebbian Rule (Cont.)

Remarks:

1. There is no desired signal, hence this is unsupervised learning.
2. The learning signal in this case is $r(w_i(k), x(k)) = o_i(k)$.
3. Used for many applications including associative memories (either auto-associative or hetero-associative), PCA extraction, etc.

Example 1:

(a) Derive Hebb's learning rule for a network with M output cells.

(b) For the data set $\{x(k), o(k)\}_{k=0}^{K-1}$ and $W(0) = 0$, where $W(k) = [w_1(k), \dots, w_M(k)]^t$ is the ($M \times N$) weight matrix, show that the network in (a) is a perfect linear associator if the $x(k)$'s are orthonormal.

[Figure: single-layer network with input nodes $1, 2, \dots, N$ carrying $x(k)$, weights $w_i(k)$, and outputs $o_1(k), \dots, o_i(k), \dots, o_M(k)$.]

(a) For cell $i$ we have $w_i(k+1) = w_i(k) + \mu\, x(k)\, o_i(k)$ for all $i \in [1, M]$. Assuming the same $\mu$ for all M cells, then

$$W(k+1) = W(k) + \mu\, o(k)\, x^t(k),$$

where $o(k) = [o_1(k), \dots, o_M(k)]^t$ is the output vector.
- Hebbian Rule (Cont.)

(b)

$$W(1) = \mu\, o(0)\, x^t(0)$$
$$W(2) = \mu\, o(0)\, x^t(0) + \mu\, o(1)\, x^t(1)$$
$$\vdots$$
$$W(K) = W = \mu \sum_{k=0}^{K-1} o(k)\, x^t(k) \qquad \text{(outer product sum)}$$

At this point the network is trained and has stored K patterns (i.e. the storage phase of the associative memory). Now, if the patterns are orthonormal, i.e.

$$x^t(i)\, x(j) = \delta(i-j) = \begin{cases} 1 & i = j \\ 0 & i \ne j \end{cases}$$

then postmultiplying the outer product sum by $x(l)$ gives

$$W\, x(l) = \mu \left( \sum_{k=0}^{K-1} o(k)\, x^t(k) \right) x(l) = \mu\, o(l),$$

i.e. a perfect associator (i.e. the recall phase).

Notes:

(a) If the $x(k)$'s are normalized but not orthogonal, then

$$W\, x(l) = \mu\, o(l) + \mu \left( \sum_{k=0,\, k \ne l}^{K-1} o(k)\, x^t(k) \right) x(l),$$

where the second term shows the effect of cross-talk.

(b) Also, if $\tilde x = x(l) + \varepsilon(l)$, where $\varepsilon(l)$ represents a perturbation/deformation, then

$$W\, \tilde x = \mu\, o(l) + \mu \left( \sum_{k=0}^{K-1} o(k)\, x^t(k) \right) \varepsilon(l).$$
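The storage and recall phases above can be sketched directly with outer products; the orthonormal input patterns are generated here for illustration (via QR), and all names are assumptions rather than lecture data.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 8, 3, 4
Xcols, _ = np.linalg.qr(rng.standard_normal((N, K)))   # K orthonormal input patterns (columns)
O = rng.standard_normal((M, K))                        # associated output patterns (columns)

# Storage phase: W = mu * sum_k o(k) x^t(k), with mu = 1
W = sum(np.outer(O[:, k], Xcols[:, k]) for k in range(K))

# Recall phase: W x(l) = o(l) exactly, because the x(k)'s are orthonormal
for l in range(K):
    assert np.allclose(W @ Xcols[:, l], O[:, l])
```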
- Hebbian Rule (Cont.)

Example 2: Let

$$x(1) = [\ \ ]^t, \quad o(1) = [5, 1, 0]^t$$
$$x(2) = [\ \ ]^t, \quad o(2) = [2, 1, 6]^t$$
$$x(3) = [\ \ ]^t, \quad o(3) = [2, 4, 3]^t$$

(a) Store these patterns using Hebb's rule.

(b) Let $\tilde x = [.8, .15, .15, .2]^t$. Find the associated $o$. What is the closest output pattern to $o$ (in the $\|\cdot\|_2$ sense)?

Clearly, the $x(k)$'s are orthonormal, so the results of Example 1 apply here and recall is perfect for the $x(k)$'s. The weight matrix after the storage phase is (assuming $\mu = 1$)

$$W = \sum_{k=1}^{3} o(k)\, x^t(k) = $$

Now, what happens if we apply $\tilde x = [.8, .15, .15, .2]^t$, which is a perturbed version of $x(1)$?

$$o = W\, \tilde x = [4, 1.25, .45]^t$$

The Euclidean distances to the $o(k)$'s are $\|o - o(1)\|_2 = 1.26$, $\|o - o(2)\|_2 = $, $\|o - o(3)\|_2 = $, i.e. the memory generates the pattern closest to $o(1)$.
- Hebbian Rule (Cont.)

Example 3: If the $x(k)$'s are not orthogonal, devise a way to choose the best weights to obtain a linear associator.

Here, we choose to minimize the average SE,

$$J(W) = \frac{1}{K} \sum_{k=0}^{K-1} \|o(k) - W\, x(k)\|^2 = \frac{1}{K} \sum_{k=0}^{K-1} \|o(k) - \hat o(k)\|^2.$$

Let us define the matrices

$$O = [o(0), \dots, o(K-1)] \quad (M \times K), \qquad X = [x(0), \dots, x(K-1)] \quad (N \times K).$$

Then the average SE can be rewritten as

$$J(W) = \frac{1}{K}\, tr[(O - W X)(O - W X)^t].$$

Minimizing with respect to the matrix $W$ yields

$$\frac{\partial J(W)}{\partial W} = -\frac{2}{K}\, (O - W X)\, X^t = 0 \quad \Longrightarrow \quad W = O X^t (X X^t)^{-1}.$$

Note that to get this result we used the matrix derivative properties $\frac{\partial\, tr(XA)}{\partial X} = A^t$, $\frac{\partial\, tr(A X^t)}{\partial X} = A$, and $\frac{\partial\, tr(X A X^t)}{\partial X} = X A + X A^t$.

If the $x(k)$'s are orthonormal, then $X X^t = I$, $X^t x(k) = [0, \dots, 1, \dots, 0]^t$ (1 in the $k$th position), and hence $W\, x(k) = o(k)$.
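A sketch of this least-squares associator, using a pseudoinverse for numerical robustness; the data matrices are illustrative assumptions.

```python
import numpy as np

def ls_associator(O, X):
    """Least-squares linear associator: W = O X^t (X X^t)^(-1), via pseudoinverse."""
    return O @ X.T @ np.linalg.pinv(X @ X.T)

# Illustrative use with non-orthogonal input patterns (columns of X).
rng = np.random.default_rng(2)
X = rng.standard_normal((4, 6))      # N x K input patterns
O = rng.standard_normal((3, 6))      # M x K output patterns
W = ls_associator(O, X)
print(np.linalg.norm(O - W @ X))     # residual of the minimized average SE
```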