Latched recurrent neural network

Elektrotehniški vestnik 70(1-2): 46-51, 2003
Electrotechnical Review, Ljubljana, Slovenija

Latched recurrent neural network

Branko Šter

University of Ljubljana, Faculty of Computer and Information Science, Laboratory of Adaptive Systems and Parallel Processing, Tržaška 25, Ljubljana, Slovenia
E-mail:

Abstract. An extended architecture of recurrent neural networks is proposed. It is based on ignoring unimportant input information using a register of latches as the input layer of the network. The latch is implemented with a 2/1 multiplexer whose output is differentiable with respect to all of its inputs, thus enabling the derivatives to be propagated through the network. The relevance of input vectors is learned together with the weights of the network using a gradient-based algorithm.

Key words: recurrent neural networks, finite state automata, temporal processing, long-term dependencies, latch

Povzetek: Rekurentna nevronska mreža z zapahi (Recurrent neural network with latches). An extended architecture of recurrent neural networks is presented. It is based on holding back unimportant input information by means of a register of latches in the input layer of the network. The latch is implemented with a 2/1 multiplexer whose output is differentiable with respect to all of its inputs, which enables the derivatives to be propagated through the network in time. Gradient-based learning is applied, by which the network learns, in addition to the weights, also the relevance of the input vectors.

Ključne besede: recurrent neural networks, finite automata, sequence processing, latch

Received May 2002; accepted 26 November 2002.

1 Introduction

Neural networks with feedback connections are called recurrent. A recurrent neural network (RNN) can be trained to model dynamical systems. We are especially interested in discrete-time RNNs, which are applicable to sequence processing problems, such as sequence recognition and time-series prediction. RNNs were shown to be capable of behaving like finite state automata [1], thus providing a model of finite state computation by means of continuous dynamical systems. Automata can also be successfully extracted from RNNs [2]. Recurrent neural networks are able to store and process context or state information, which is required in temporal processing tasks.

It is known that recurrent neural networks are harder to train than feedforward networks. A particularly difficult problem is learning long-term dependencies, since the network must be able to store relevant information over long periods of time. It was shown in [3] that in learning by gradient descent, the propagation of derivatives through the network decays in time. Alternative learning paradigms which do not face this problem are non-gradient methods and evolutionary algorithms.

In the following, some of the existing solutions to the problem of learning long-term dependencies are presented. In [4], a hierarchy of neural networks to recursively decompose sequences was proposed. When a lower-level network cannot satisfactorily predict a subsequent output, a higher-level network becomes responsible. The system detects causal dependencies in the sequence. The approach is limited to sequence learning, since no external inputs are assumed. The LSTM (Long Short-Term Memory) algorithm [5] was designed to overcome these problems by enforcing constant error flow. In [6], wavelet-based smoothing was integrated into a feedforward neural network and treated as a uniform trainable model for continuous time-series prediction. By filtering high-frequency noise so as to decrease the prediction error, the network was able to learn smoothed time series more efficiently.
Our idea is to exclude irrelevant input vectors from RNN processing and thereby to facilitate learning of the RNN. The relevance of the input vectors is established through the learning process. We introduce a modification of the 2-layer recurrent neural network in which the input vectors to the network are selectively latched in order to suppress irrelevant ones, and the error signals at the RNN's outputs are weighted correspondingly. When an input vector is considered unimportant, the output error of the network is considered to be unimportant, too.
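To make the idea concrete before the formal description in Section 2, the following minimal sketch (Python/NumPy, not code from the paper) shows a hard version of the mechanism: when a gate signal is low, the register keeps its previous contents and the output error at that step is masked. The paper's actual latch, described below, replaces this hard gate with a differentiable multiplexer so that the gating itself can be learned by gradient descent.

```python
import numpy as np

def hard_latch_step(z_prev, new_input, enable):
    """Hard (non-differentiable) latch: pass the new input when enable is high,
    otherwise keep the previously latched value."""
    return new_input if enable >= 0.5 else z_prev

def masked_error(y, y_target, enable):
    """Squared output error masked by the same enable signal, so that time
    steps with ignored inputs do not contribute to the training error."""
    return enable * float(np.sum((y - y_target) ** 2))
```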

2 Latched recurrent neural network

First the architecture of the network is described, and then the steepest-descent learning rule is derived.

2.1 Architecture

The latched recurrent neural network (LRNN) has two feedforward processing layers and global feedback connections, as shown in Fig. 1. In addition, a register of latches in the input layer suppresses assumably irrelevant input vectors by latching previous data and context information.

Figure 1. Latched recurrent neural network. The dotted lines show optional feedback (necessary in multi-step prediction). We used the teacher-forcing technique, which applies the correct previous outputs.

The latch (Fig. 2a) is controlled through the enable input, propagating the input data while enable is active or high, and remembering the previous data while it is low. This is in fact a D-type flip-flop with clock enable. A flip-flop is an edge-triggered (synchronous) device, and the RNN also operates in a synchronous manner, i.e. we can imagine a clock signal driving the network.

Figure 2. (a) D flip-flop with clock enable and (b) differentiable neural-based 2/1 multiplexer.

To be able to propagate derivatives through time, we made the multiplexer's output differentiable with respect to its inputs. We applied a simple neural net with two sigmoid neurons with fixed weights and a summator, see Fig. 2b. The symbol L stands for a large value. The output of the multiplexer is governed by

z = mux(d_0, d_1, enable) = \sum_{i=0}^{1} \sigma(w_i d_i + w_{ei} \cdot enable - \Theta_i) = \xi_0 + \xi_1,

with \sigma(x) = 1/(1 + e^{-x}) being the sigmoid activation function. When enable = 1, z = \sigma(L d_0 - L/2) + \sigma(L d_1 - 3L/2) and d_0 is propagated, due to the large threshold for d_1. Similarly, when enable = 0, z = \sigma(L d_0 - 3L/2) + \sigma(L d_1 - L/2) and d_1 is propagated. When 0 < enable < 1, a mixture of both d_i is propagated.

There are U_o output units and U - U_o context units in the output layer, H units in the hidden layer, U + I latches in the input layer, and I external inputs (Fig. 1). Only the context units provide feedback. As previous outputs the desired outputs are used, i.e. the teacher-forcing technique is applied. The outputs of the output units and of the context units are

y_k(t) = \sigma( \sum_{l=0}^{H} v_{kl} u_l(t) ),   k = 1, ..., U.   (1)

The weights of the output layer are denoted as v_{kl}. Note that v_{k,0} is the bias of the k-th neuron in the output layer, therefore u_0(t) is a constant virtual input of unity. The outputs of the hidden units are

u_l(t) = \sigma( \sum_{m=1}^{U+I+1} w_{lm} z_m(t) ),   l = 1, ..., H.   (2)

The w_{l,U+I+1} is the bias of the l-th hidden neuron. The outputs of the input layer at time t are

z_m(t) = mux(\tilde{y}_m(t-1), z_m(t-1), enable)   (3)

for m = 1, ..., U_o (previous desired outputs \tilde{y}_m),

z_m(t) = mux(y_m(t-1), z_m(t-1), enable)   (4)

for m = U_o + 1, ..., U (context units), and

z_m(t) = mux(x_m(t), z_m(t-1), enable)   (5)

for m = U + 1, ..., U + I (external inputs x_m). The inputs to the input (register) layer are: the external inputs x_m(t), the delayed feedback outputs (previous desired outputs \tilde{y}_m(t-1) and context values y_m(t-1)), and the local feedback of the latched data z_m(t-1).

The derivatives of the multiplexer's output z with respect to its inputs will be further required. They are

\partial z / \partial enable = \xi_0 (1 - \xi_0) w_{e0} + \xi_1 (1 - \xi_1) w_{e1},
\partial z / \partial d_i = \xi_i (1 - \xi_i) w_i,   i = 0, 1.
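The following sketch (Python/NumPy) implements the differentiable 2/1 multiplexer and the partial derivatives given above. The concrete value of L and the particular assignment of the fixed weights and thresholds are illustrative assumptions chosen to reproduce the described behaviour (enable close to 1 passes d_0, enable close to 0 passes d_1); they are not values taken from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative fixed parameters (assumed): with this assignment,
# enable ~ 1 passes d0 and enable ~ 0 passes d1 (the latched value).
L = 20.0
W_D = np.array([L, L])               # w_0, w_1   (data weights)
W_E = np.array([L, -L])              # w_e0, w_e1 (enable weights)
TH = np.array([1.5 * L, 0.5 * L])    # Theta_0, Theta_1

def mux(d0, d1, enable):
    """Differentiable 2/1 multiplexer: z = xi_0 + xi_1."""
    d = np.array([d0, d1])
    xi = sigmoid(W_D * d + W_E * enable - TH)
    z = float(xi.sum())
    dz_dd = xi * (1.0 - xi) * W_D                  # dz/dd_i = xi_i (1 - xi_i) w_i
    dz_de = float(np.sum(xi * (1.0 - xi) * W_E))   # dz/denable
    return z, dz_dd, dz_de

z1, _, _ = mux(d0=1.0, d1=0.0, enable=1.0)   # z1 is close to d0 = 1
z0, _, _ = mux(d0=1.0, d1=0.0, enable=0.0)   # z0 is close to d1 = 0
```

Because the two fixed sigmoids are steep, the selected data input is passed approximately (thresholded) rather than exactly; this matters little for near-binary data but is part of the construction.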

The derivatives of the z_m of the context units (Eq. 4) with respect to the output weights v_{ij} are

\partial z_m(t) / \partial v_{ij} = (\partial mux / \partial y_m)(\partial y_m / \partial v_{ij}) + (\partial mux / \partial z_m)(\partial z_m / \partial v_{ij})
 = \xi_{m0} (1 - \xi_{m0}) w_0 \, \partial y_m / \partial v_{ij} + \xi_{m1} (1 - \xi_{m1}) w_1 \, \partial z_m / \partial v_{ij},

where y_m(t-1) and z_m(t-1) are abbreviated as y_m and z_m, and similarly with respect to the hidden weights. The other z_m (Eq. 3 and Eq. 5) are treated analogously.

2.2 Derivation of the learning rule

In this section we derive the gradient learning rule for the LRNN. It is customary to unfold the structure of a recurrent neural network in time in order to derive the learning rules more easily. The unfolded LRNN is shown in Fig. 3.

Figure 3. Unfolded LRNN (2 time-steps shown). The feedback connections of the latches are also unfolded in time. Teacher-forcing (TF) correction means only that the correct outputs are applied in the next step, rather than the predicted ones.

As a measure of performance, the mean-squared error (MSE) at the network's outputs is usually considered. We weighted the MSE(t) by the enable signal of the latch register at time t, which will be denoted in the derivations as f(t), i.e. enable(t) \equiv f(t).

Let R(x): R^I \to R be a function called the relevance of the input vector x = (x_1, x_2, ..., x_I)^T. It should be large for relevant input vectors x and small otherwise, such as

R(x) = \sum_{i=1}^{I} \beta_i x_i,

where \beta_i will be called the relevance weight, denoting the relevance of the i-th input (or input line). If the step function is applied to R(x), then the input vectors with R larger than some threshold \Theta are enabled, f(t) = 1, and the others are ignored, f(t) = 0. Also, the corresponding outputs either contribute or do not contribute to the error. Instead of the step function we may utilize some similar, but differentiable step-like function, such as the sigmoid function

f(t) = \sigma(R(x(t)) - \Theta) = 1 / (1 + e^{-a (R(x(t)) - \Theta)}),

in order to allow gradient descent methods to be applied. The effect is that the enable signal becomes continuous or soft, i.e. an input vector is enabled with a degree between 0 and 1, 0 < f(t) < 1, and the outputs contribute to the error with the same degree. Relevant input lines x_i should have large relevance weights \beta_i. When the input symbols are 1-of-n encoded, this is equal to the relevance of a symbol or of the corresponding input vector.

There are two cases:

1. The relevances \beta_i of the input signals x_i are known or guessed (prior knowledge or hint) and held constant, and the threshold \Theta is optimized. By adjusting \Theta, the enabling of the input vectors and the contribution of the corresponding outputs to the error is optimized. When \Theta is significantly below zero and the inputs are binary, f(t) is always close to 1 and no latching occurs. When \Theta increases, the ratio \sigma(R_1 - \Theta) / \sigma(R_2 - \Theta), where R_1 > R_2, also increases. This is an indication that the \beta_i are correct. By ignoring irrelevant inputs, training of the RNN is facilitated.

2. We have no clue as to what R(x) might be. Therefore, the \beta_i are learned and \Theta is held constant. The structure may be viewed as an additional neuron with the relevance weights \beta_i, see Fig. 1.

The error at the output of the RNN is weighted by the continuous enable signal f(t) and may be written as

Err(t) = ||y(t) - \tilde{y}(t)||^2 f(t) / E[f(t)],   (6)

where E[.] denotes the expectation operator and \tilde{y}(t) denotes the desired output at time t. The weights are optimized by the steepest-descent learning algorithm:

v_{kl} \leftarrow v_{kl} - \eta \, \partial Err / \partial v_{kl},   w_{lm} \leftarrow w_{lm} - \eta \, \partial Err / \partial w_{lm},   (7)

with \eta being the learning step. The same holds for the threshold \Theta and for the relevance weights \beta_i in case they are optimized.
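As a rough illustration of the soft enable and of the weighted error of Eq. (6), the sketch below computes f(t) = \sigma(a(R(x(t)) - \Theta)) from the relevance weights and weights the squared output error by f(t)/E[f], with E[f] estimated by a running mean. The slope a, the concrete numbers and the running-mean estimate of E[f] are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def soft_enable(x, beta, theta, a=10.0):
    """f(t) = sigma(a (R(x) - Theta)) with R(x) = sum_i beta_i x_i."""
    return sigmoid(a * (np.dot(beta, x) - theta))

def weighted_error(y, y_target, f_t, f_mean):
    """Err(t) = ||y(t) - y~(t)||^2 * f(t) / E[f]   (Eq. 6)."""
    return float(np.sum((y - y_target) ** 2)) * f_t / f_mean

# running estimate of E[f], here an exponential moving average
f_mean, rho = 0.5, 0.99
x = np.array([0.0, 1.0, 0.0, 0.0])        # 1-of-n coded input symbol
beta = np.array([1.0, 1.0, 0.0, 0.0])     # only the first two input lines are relevant
f_t = soft_enable(x, beta, theta=0.5)
f_mean = rho * f_mean + (1.0 - rho) * f_t
err = weighted_error(np.array([0.9]), np.array([1.0]), f_t, f_mean)
```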

The derivative of Err(t) with respect to the output weights is

\partial Err(t) / \partial v_{ij} = \partial ||y(t) - \tilde{y}(t)||^2 / \partial v_{ij} \cdot f(t) / E[f]   (8)
 = 2 \sum_{k=1}^{U_o} (y_k(t) - \tilde{y}_k(t)) \, \partial y_k / \partial v_{ij} \cdot f(t) / E[f],

and similarly for the hidden weights. The derivatives of y_k, k = 1, ..., U, with respect to all the weights are required. For the output weights v:

\partial y_k / \partial v_{ij} = y_k' \sum_{l=0}^{H} \partial (v_{kl} u_l) / \partial v_{ij}
 = y_k' \sum_{l=0}^{H} ( \delta_{kl,ij} u_l + v_{kl} \, \partial u_l / \partial v_{ij} )
 = y_k' { \delta_{ki} u_j + \sum_{l=1}^{H} v_{kl} u_l' \sum_{m=U_o+1}^{U} w_{lm} [ \xi_{m0} (1 - \xi_{m0}) w_0 \, \partial y_m / \partial v_{ij} + \xi_{m1} (1 - \xi_{m1}) w_1 \, \partial z_m / \partial v_{ij} ] },

where all the quantities y, u and z are at time t, except \partial y_m / \partial v_{ij}, which denotes \partial y_m(t-1) / \partial v_{ij}, and \partial z_m / \partial v_{ij}, which denotes \partial z_m(t-1) / \partial v_{ij}. The y_k' denotes the derivative of the sigmoid activation function and is equal to y_k (1 - y_k); similarly u_l'. For the hidden weights:

\partial y_k / \partial w_{ij} = y_k (1 - y_k) \sum_{l=1}^{H} v_{kl} \, \partial u_l / \partial w_{ij}.

Since

\partial u_l / \partial w_{ij} = u_l' \sum_{m=1}^{U+I+1} \partial (w_{lm} z_m) / \partial w_{ij} = u_l' ( \delta_{il} z_j + \sum_{m=U_o+1}^{U} w_{lm} \, \partial z_m / \partial w_{ij} ),

we have

\partial y_k / \partial w_{ij} = y_k' { v_{ki} u_i' z_j + \sum_{l=1}^{H} v_{kl} u_l' \sum_{m=U_o+1}^{U} w_{lm} [ \xi_{m0} (1 - \xi_{m0}) w_0 \, \partial y_m / \partial w_{ij} + \xi_{m1} (1 - \xi_{m1}) w_1 \, \partial z_m / \partial w_{ij} ] }.

The derivative with respect to the threshold \Theta is

\partial Err(t) / \partial \Theta = \partial ||y(t) - \tilde{y}(t)||^2 / \partial \Theta \cdot f(t) / E[f] + ||y(t) - \tilde{y}(t)||^2 \, \partial ( f(t) / E[f] ) / \partial \Theta.   (9)

The last term may be written as

\partial ( f(t) / E[f] ) / \partial \Theta = ( \partial f(t) / \partial \Theta \cdot E[f] - f(t) \, \partial E[f] / \partial \Theta ) / (E[f])^2
 = ( -a f(t) (1 - f(t)) E[f] + a f(t) E[f (1 - f)] ) / (E[f])^2
 = a f(t) ( f(t) E[f] - E[f^2] ) / (E[f])^2.

Therefore

\partial Err(t) / \partial \Theta = 2 \sum_{k=1}^{U_o} (y_k - \tilde{y}_k) \, \partial y_k / \partial \Theta \cdot f(t) / E[f] + a \sum_{k=1}^{U_o} (y_k - \tilde{y}_k)^2 f(t) ( f(t) E[f] - E[f^2] ) / (E[f])^2
 = 2 \sum_{k=1}^{U_o} (y_k - \tilde{y}_k) \, \partial y_k / \partial \Theta \cdot f(t) / E[f] + a \, Err(t) ( f(t) - E[f^2] / E[f] ).

From Eq. (1) we also require

\partial y_k / \partial \Theta = y_k (1 - y_k) \sum_{l=1}^{H} v_{kl} \, \partial u_l / \partial \Theta = y_k' \sum_{l=1}^{H} v_{kl} u_l (1 - u_l) \sum_{m=1}^{U+I} w_{lm} \, \partial z_m / \partial \Theta,   (10)

and from Eq. (10) the \partial z_m / \partial \Theta is required, where the z_m are the outputs of the multiplexers:

\partial z_m / \partial \Theta = \xi_{m0} (1 - \xi_{m0}) ( w_0 \, \partial y_m / \partial \Theta + w_{e0} \, \partial f / \partial \Theta ) + \xi_{m1} (1 - \xi_{m1}) ( w_1 \, \partial z_m / \partial \Theta + w_{e1} \, \partial f / \partial \Theta )
 = \xi_{m0} (1 - \xi_{m0}) ( w_0 \, \partial y_m / \partial \Theta - w_{e0} \, a f (1 - f) ) + \xi_{m1} (1 - \xi_{m1}) ( w_1 \, \partial z_m / \partial \Theta - w_{e1} \, a f (1 - f) ).   (11)

The derivatives with respect to the relevance weights \beta_i are

\partial Err(t) / \partial \beta_i = 2 \sum_{k=1}^{U_o} (y_k - \tilde{y}_k) \, \partial y_k / \partial \beta_i \cdot f(t) / E[f] + a \sum_{k=1}^{U_o} (y_k - \tilde{y}_k)^2 f(t) [ (1 - f(t)) x_i(t) / E[f] + ( E[f^2 x_i] - E[f x_i] ) / (E[f])^2 ].

The \partial z_m / \partial \beta_i is calculated as in Eq. (11), but

\partial z_m / \partial \beta_i = \xi_{m0} (1 - \xi_{m0}) ( w_0 \, \partial y_m / \partial \beta_i + w_{e0} \, a f (1 - f) x_i ) + \xi_{m1} (1 - \xi_{m1}) ( w_1 \, \partial z_m / \partial \beta_i + w_{e1} \, a f (1 - f) x_i ).
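The derivatives of the soft enable that enter these expressions, \partial f / \partial \Theta = -a f (1 - f) and \partial f / \partial \beta_i = a f (1 - f) x_i, can be checked numerically. The short sketch below (with an assumed slope a and arbitrary test data) compares the analytic forms against central finite differences.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def f_enable(beta, x, theta, a):
    return sigmoid(a * (np.dot(beta, x) - theta))

a, theta, eps = 10.0, 0.5, 1e-6
x = np.array([1.0, 0.0, 0.0])
beta = np.array([0.8, 0.3, 0.1])
f = f_enable(beta, x, theta, a)

# analytic derivatives used in the derivation above
df_dtheta = -a * f * (1.0 - f)
df_dbeta = a * f * (1.0 - f) * x

# central finite-difference checks
num_dtheta = (f_enable(beta, x, theta + eps, a) - f_enable(beta, x, theta - eps, a)) / (2 * eps)
num_dbeta = np.array([(f_enable(beta + eps * e, x, theta, a)
                       - f_enable(beta - eps * e, x, theta, a)) / (2 * eps)
                      for e in np.eye(len(beta))])

assert np.allclose(df_dtheta, num_dtheta, atol=1e-5)
assert np.allclose(df_dbeta, num_dbeta, atol=1e-5)
```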

To summarize, the basic procedure is the calculation of the derivatives of the network's outputs y_k(t) with respect to all adjustable parameters v_{kl}, w_{lm}, \Theta, and \beta_i, using also the derivatives from the past. At each time step the weighted error Err(t) is calculated and the weights are updated accordingly.

3 Experiments

The proposed method was tested on two tasks with long time lags.

3.1 Task 1

In the first problem [7], there are n input symbols a, b, c, etc. with 1-of-n encoding, i.e. only one of the n input lines is 1 and all others are 0 at any moment. The task is to output 1 immediately following the first occurrence of b after a has already appeared, no matter how long ago. All other signals have no influence and serve merely as distractors. After the occurrence of b, a is used up, and the next time the output should be 1 is when a new a has been followed by its first matching b. This corresponds to a small finite automaton. The solution is simple once the relevant signals are found; the hard part of the task is to identify the distractors. Caution is necessary here: if the task were off-line and therefore with a limited number of examples, other meaningful automata might be induced. In our experiments n was 10.

The RNN had 2 output units (1 output and 1 context unit), 8 hidden units, and n external inputs. After training, the RNN without latches fails when n = 10; on the reduced problem with n = 4, it succeeds. In this task we chose only to test the predefined criterion R(x) = a + b (to be compared with the threshold \Theta), i.e. \beta_a = \beta_b = 1 and the other \beta_i are zero. This is of course the correct criterion, and it remains only to observe the course of \Theta. Fig. 4a shows an increase of \Theta, which is understandable, since the ratio between f(t) at R = 1 and f(t) at R = 0 also increases. The weighted error is small, contrary to the ordinary error, which is large because of the errors made when unimportant inputs appear. It is interesting to observe the behavior of the outputs y in Fig. 4c. After an occurrence of symbol b, e.g. at 995 (the occurrences of a and b are also indicated in Fig. 4b), the output is 1, as desired, but y continues to output 1 until the next a occurs some steps later. Since the error is weighted by f(t), the errors on irrelevant symbols are not considered. The behavior of the context unit is also interesting: a single a activates this unit only partially (to about 0.45), while eventual subsequent occurrences of a activate it completely.

Figure 4. LRNN on Task 1 for n = 10: (a) MSE, weighted MSE, and \Theta, (b) enable f(t), mean enable E[f], and MSE during the final steps, (c) desired output \tilde{y}, actual output y, and the context value.

3.2 Task 2

In the second task [5], there are p input symbols a_1 = x, a_2 = y, a_3, ..., a_p. Each symbol is 1-of-n encoded with p binary input signals. The training sequence consists of random occurrences of only two similar subsequences: (x, a_3, a_4, ..., a_p, x) and (y, a_3, a_4, ..., a_p, y). After a subsequence is finished, the next one is selected with probability 0.5 for each, and so on. The sequence is fed to the network, which always has to predict the next symbol. The prediction of the first symbol of any subsequence is meaningless, since x and y occur with the same probability. The hard part is to predict the last symbol of a subsequence, since the first symbol has to be remembered over a long time lag (depending on p). We demanded that the network correctly predict the last symbol.
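For reference, one possible way to generate the two benchmark sequences described above is sketched below. The alphabet sizes, sequence lengths and the exact timing of the target in Task 1 are assumptions based on the task descriptions, not code or settings from the paper.

```python
import numpy as np

def task1_sequence(n=10, length=1000, seed=0):
    """Task 1: 1-of-n symbols; the target is 1 on the first b that follows an
    unconsumed a, and 0 otherwise (index 0 plays 'a', index 1 plays 'b')."""
    rng = np.random.default_rng(seed)
    symbols = rng.integers(0, n, size=length)
    x = np.eye(n)[symbols]                 # 1-of-n coding
    targets = np.zeros(length)
    armed = False                          # an 'a' has appeared and is not yet used up
    for t, s in enumerate(symbols):
        if s == 0:
            armed = True
        elif s == 1 and armed:
            targets[t] = 1.0
            armed = False
    return x, targets

def task2_sequence(p=10, n_subseq=50, seed=0):
    """Task 2: random concatenation of (x, a3, ..., ap, x) and (y, a3, ..., ap, y);
    the prediction target at each step is the next symbol."""
    rng = np.random.default_rng(seed)
    seq = []
    for _ in range(n_subseq):
        first = int(rng.integers(0, 2))    # 0 = 'x', 1 = 'y', each with probability 0.5
        seq.extend([first] + list(range(2, p)) + [first])
    seq = np.array(seq)
    x = np.eye(p)[seq]
    next_symbol = np.roll(seq, -1)
    return x[:-1], next_symbol[:-1]
```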
The RNN had p + 2 output units (p outputs and 2 context units), a layer of hidden units, and p external inputs. The RNN without latches failed to correctly predict the last symbol of the subsequence; the error was large, about 0.4, and there was no improvement even after a large number of steps.

When explicitly demanded, it mastered the easy part of the task, i.e. predicting a_i, i = 3, ..., p.

Using the LRNN, this time we decided to learn the relevance weights \beta_i, which signify the relevance of the i-th input signal. \Theta was set to 0.9 and all the \beta_i started at 0.5. It is clear from Fig. 5a that only \beta_1 (\beta_x) and \beta_2 (\beta_y) rise above \Theta, while the others slightly decrease. This enabled the LRNN to learn the task quickly.

From Fig. 5b it is obvious that the two context units remember or encode the symbol x with a slightly larger activation than the symbol y. Since they encode the symbols in the same way, one of them is clearly redundant, as expected. An interesting fact is that they employ an (attenuated) binary encoding, not 1-of-n, for example. The outputs are set to the final target values immediately after the first symbol of a subsequence. This explains why the context values are not very pronounced. The output neurons themselves remember the first symbol, which is actually the simplest way to solve the task.

Figure 5. LRNN on Task 2: (a) weighted MSE, \Theta, and the relevance weights, (b) desired outputs \tilde{y}_x, \tilde{y}_y, actual outputs, and the context values during the final steps.

4 Conclusion

An extended recurrent neural network architecture for temporal processing with long-term dependencies was proposed. It is based on ignoring assumably irrelevant inputs using a register of latches in the input layer of the network. The latches are differentiable, so that gradient descent learning can be applied. The method yielded good results on two standard sequence processing tasks with long time lags, where the irrelevant inputs were distracting symbols rather than repeated symbols. We intend to test the method on the latter type of problems, when dealing with large finite automata with low-frequency properties, in the near future.

5 References

[1] A. Cleeremans, D. Servan-Schreiber, J. L. McClelland, Finite State Automata and Simple Recurrent Networks, Neural Computation, vol. 1, no. 3, pp. 372-381, 1989.
[2] I. Gabrijel, A. Dobnikar, On-line Identification and Reconstruction of Finite Automata with Generalized Recurrent Neural Networks, to appear in Neural Networks.
[3] Y. Bengio, P. Simard, P. Frasconi, Learning Long-Term Dependencies with Gradient Descent is Difficult, IEEE Transactions on Neural Networks, vol. 5, no. 2, pp. 157-166, 1994.
[4] J. Schmidhuber, Learning complex, extended sequences using the principle of history compression, Neural Computation, vol. 4, no. 2, pp. 234-242, 1992.
[5] S. Hochreiter, J. Schmidhuber, Long short-term memory, Neural Computation, vol. 9, no. 8, pp. 1735-1780, 1997.
[6] U. Lotrič, Wavelet Based Denoising Integrated into Multilayered Perceptron, submitted to Neurocomputing.
[7] R. J. Williams, D. Zipser, A Learning Algorithm for Continually Running Fully Recurrent Neural Networks, Neural Computation, vol. 1, no. 2, pp. 270-280, 1989.

Branko Šter received the Ph.D. degree in Computer and Information Science from the University of Ljubljana in 1999. He works at the Faculty of Computer and Information Science in Ljubljana. His research interests include neural networks, reinforcement learning, mobile robotics, and dynamical systems.
