International University Bremen
Guided Research Proposal
Improve on chaotic time series prediction using MLPs for output training


Aakash Jain
Spring Semester 2004

1 Executive Summary

Echo State Networks (ESNs) present a novel approach to analysing and training recurrent neural networks (RNNs). The approach leads to a fast, simple and constructive algorithm for supervised training of RNNs. ESNs are a very powerful blackbox modeling tool for building models that simulate, predict, filter, classify, or control nonlinear dynamical systems; what makes them excel over traditional techniques is that they can efficiently encode and retain, in a single echo network state, massive information about a long previous history. This makes ESNs excellent approximators of, among other nonlinear dynamical systems, chaotic time series, with obvious applications in such prediction tasks as forecasting currency exchange rates.

This research proposal aims to further improve upon the best empirical result for predicting a chaotic time series, which was obtained by an ESN, by replacing the linear readout mechanism employed by ESNs with a multi-layer perceptron (MLP). This shall allow the resulting network to combine the dynamical memory of an ESN with the approximating power of the gradient descent training of an MLP, resulting in a powerful approximator that is smaller in size than an ESN alone and allows for more feasible practical implementations in telecommunications.

2 Summary Description of Project

By now, there exist many kinds of artificial neural networks (ANNs), which can be mainly characterized by their learning mechanisms (supervised or unsupervised) and network structures (feedforward-only or recurrent). In feedforward networks, activation is piped through the network from input units to output units, such as in multi-layer perceptrons (MLPs). Conversely, recurrent neural networks (RNNs) are characterized by feedback ("recurrent") loops in their synaptic connection pathways, thus closely resembling biological neural networks and exhibiting dynamic memory.

An Echo State Network (ESN) is an artificial recurrent neural network, characterized by its use of a large randomly connected RNN (50 to 1000 neurons) and by the fact that only the synaptic connections from the RNN to the output readout neurons are modified by learning. Because there are no cyclic dependencies between the trained readout connections, training an ESN becomes a simple linear regression task, solved by any offline or online linear regression algorithm to minimize the error E[(d(t) - y(t))^2], where d(t) is the desired output (the teacher signal when teacher forcing the output) and y(t) is the network-generated output. Thus ESNs benefit from reduced training complexity, allowing for sparsely connected large network structures. It is important that the network is sparsely connected in order to develop and provide a rich reservoir - the dynamical reservoir (DR) - of excited dynamics which is then tapped by the output weights. The large recurrent network structure, under certain conditions, develops so-called echo states, which can be thought of as a state-space representation of the neurons (internal units) of the network inherently encoding previous and current input (and output, for networks with output feedback). It is this echo state property of ESNs which gives them the capability to store and represent massive information about a long previous history in a single ESN echo network state, and to develop a large DR. This makes ESNs very good predictors of chaotic time series, such as that generated by the Mackey-Glass delay differential equation. ESNs have already been shown to improve upon the benchmark task of predicting the Mackey-Glass system (MGS) time series by a factor of 2400 over previous techniques [12].

This research proposal aims to study the effect of training an MLP as the readout mechanism for the ESN, instead of the conventional technique of training just the disjoint DR-to-output weights. Basically, this means cascading a suitable MLP to a modified ESN, in the sense that the internal units of the ESN are connected to the input units of the MLP in a suitable manner. The MLP is then trained to generate the desired output from the input it receives from the internal units of the ESN - the echo state. This approach replaces the linear readout mechanism of an ESN with the nonlinear mapping of an MLP, with the goal of further improving the prediction of a chaotic time series by harnessing the increased nonlinearity offered by the MLP, while maintaining the very useful echo state property of an ESN.
It is not clear whether the new neural network schema will in fact be more powerful in predicting the chaotic time series; however, one hypothesis which seems quite likely is that this schema should allow a considerable reduction in the number of internal units of the ESN, resulting in a general decrease in the number of network units and faster network response times during the exploitation phase. Section 3 provides a more detailed statement of the problem and research, along with some motivation for doing research in the desired field. Section 4 describes the planned experiments to study the desired effects.
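To make the training scheme concrete, the following is a minimal sketch of the linear readout training described above: collected echo states are regressed onto the teacher signal by ordinary least squares. It is illustrative only (random placeholder data, NumPy rather than the Matlab implementation the proposal plans to use), not the actual experimental code.

```python
# Minimal sketch (not the proposal's Matlab code): training an ESN readout
# is a linear regression from collected reservoir states to the teacher signal.
import numpy as np

rng = np.random.default_rng(0)

T, N = 500, 100                     # number of time steps, reservoir size (illustrative)
X = rng.uniform(-1.0, 1.0, (T, N))  # placeholder for collected echo states x(n)
d = rng.uniform(-1.0, 1.0, T)       # placeholder for the teacher signal d(n)

# Solve min_w E[(d(n) - w^T x(n))^2] by ordinary least squares.
w_out, *_ = np.linalg.lstsq(X, d, rcond=None)

y = X @ w_out                       # network output y(n) = w_out^T x(n)
mse = np.mean((d - y) ** 2)
print(f"training MSE: {mse:.4f}")
```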

3 Statement and Motivation of Research

The whole universe can be seen as a very complex, high-dimensional, nonlinear dynamical system, composed of smaller such systems. Such systems are not well understood, making it infeasible to obtain the executable analytical models required to simulate, predict, filter, classify or control them. In such cases, one has to resort to blackbox modeling techniques which, while ignoring the internal physical mechanisms, reproduce the outwardly observable input-output behavior of the target system. Neural networks represent one such class of blackbox modeling techniques. Depending on the network structure, they possess the capability to approximate linear and nonlinear dynamical systems to an arbitrary precision, making them practically relevant tools in applications related to telecommunications (channel equalization), control (of engines, generators, chemical plants), dynamic pattern classification (speech recognition), pattern generation (computer game animation, dynamical models of humans, machines, natural systems) and time series prediction (prediction of currency exchange rates or coronary attacks).

Echo State Networks (ESNs) present a novel approach to analysing and training recurrent neural networks [9], resulting in a fast, simple and constructive algorithm for supervised training. Recurrent neural networks are more powerful in their representative capacity since, like biological networks (which are recurrent), they can approximate arbitrary nonlinear dynamical systems to an arbitrary accuracy (the universal approximation property) [7], as opposed to the static nonlinear input-output mappings achieved by feedforward networks such as multi-layer perceptrons (MLPs). Most neural network applications and a large majority of the literature are nonetheless based on these feedforward networks, since the established training algorithms for recurrent neural networks, such as Back Propagation Through Time (BPTT) [21], Real Time Recurrent Learning (RTRL) [24] and the Extended Kalman Filter (EKF) [6], suffer from the drawbacks of slow convergence and suboptimal solutions.

Section 3.1 provides a quick introduction to the training method of an ESN and section 3.2 provides the same for MLPs. Section 3.3 introduces the new network structure proposed in this proposal, which is constructed by replacing the linear readout mechanism of an ESN with an MLP, and discusses possible advantages and drawbacks of this new schema.

3.1 Echo State Networks (ESN)

Figure 3.1: ESN Schema

The state update equation of the ESN is given as:

x(n + 1) = tanh(W x(n) + w^in u(n + 1) + w^fb y(n) + v(n)),   (1)

where
W: the N × N matrix of internal connection weights,
w^in: the N-dimensional vector of input connection weights,

w^fb (optional): the weight vector for feedback connections from the output neuron to the reservoir (internal units),
v(n) (optional): a noise vector.

The output equation y(n) for a single-output network (as shown in figure 3.1) is:

y(n) = tanh(w^out (x(n), u(n))),   (2)

where
w^out: the (N+1)-dimensional vector of weights of the connections to the output neuron.

The output nonlinearity tanh is optional. It is the output weights w^out that are adjusted by the learning procedure. Let us assume that the weight vector w^out is composed of weights w_i, where each w_i corresponds to the connection weight from internal unit i to the output unit. As seen from figure 3.1, there exist no cyclic dependencies between these w_i, so the task of adjusting the w_i boils down to a linear regression task to minimize the error E[(d(n) - y(n))^2].

Now, let us discuss an important property of ESNs that makes them such good approximators: the echo state property. The echo state property basically states that under certain conditions (e.g. σ_max < 1), certain I/O echo functions exist for teacher-forced output, modeled as:

x_i(n) = h_i(u(n), u(n-1), ..., y(n-1), y(n-2), ...),   (3)

where h_i is the echo function which produces the activation of internal unit i. ESNs build the function h of the final deterministic dynamic equation d(t) = h(u(t), u(t-1), ..., d(t-1), d(t-2), ...) from linear combinations of the I/O echo functions h_i:

Σ_i w_i h_i(u(t), ..., y(t-1), ...).   (4)

Inherently, this means that ESNs linearly tap the desired output from the dynamical reservoir (DR), which can be thought of as inherently encoding the information about current and past input and output in its state, as a result of the echo functions h_i. This is also why the ESN should be large and sparsely connected, since this allows a rich set of diverse dynamics to develop and reverberate in the dynamical memory of the DR, which can then be tapped by the readout mechanism of the ESN. It might be useful to note here that a similar ANN model has been suggested by the research group of Wolfgang Maass et al., who have termed such networks Liquid State Machines (LSMs) [14]. They refer to the DR as the "liquid" and have also suggested employing a powerful readout mechanism from the liquid, such as that proposed in section 3.3 (the main topic of this research proposal).

3.2 Multi-Layer Perceptrons (MLP)

MLPs can be considered as providing a nonlinear mapping between an input vector and a corresponding output vector. From a set of input-output vectors, an MLP with a given number of hidden neurons may be trained by minimizing a least mean square cost criterion. One of the most widely known MLP training algorithms is the so-called backpropagation algorithm, which was introduced in [19] and first applied to a time series modeling task in [13]. Backpropagation is a gradient descent technique, descending the error surface in the weight space. It involves four main stages: 1) randomly initialize the network weights, 2) propagate the input x(n) forward through the network, 3) backpropagate the associated error terms δ_i from the output layer to the hidden layer(s), and 4) update the network weights in the direction of steepest gradient descent towards a local minimum. Steps 2-4 are repeated until a desired stopping criterion is met, usually realised through a cross-validation scheme. It is well known that MLPs with just one hidden layer possess the universal approximation property, i.e. they can approximate arbitrary static nonlinear functions to arbitrary accuracy.
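As a concrete illustration of these four stages, the following is a minimal backpropagation sketch for a single-hidden-layer MLP with tanh hidden units and a linear output, trained on synthetic data. The network sizes, data and learning rate are assumptions for illustration only and do not correspond to the setup planned in section 4.

```python
# Minimal backpropagation sketch for a single-hidden-layer MLP (tanh hidden
# units, linear output), following the four stages described above.
import numpy as np

rng = np.random.default_rng(1)
n_in, n_hid, eta = 3, 8, 0.05

# Synthetic regression data: targets are a smooth nonlinear function of the inputs.
X = rng.uniform(-1.0, 1.0, (200, n_in))
t = np.sin(X.sum(axis=1))

# Stage 1: randomly initialize the weights (last column of each layer is a bias).
W1 = rng.uniform(-0.5, 0.5, (n_hid, n_in + 1))
W2 = rng.uniform(-0.5, 0.5, n_hid + 1)

for epoch in range(200):
    for x, tgt in zip(X, t):
        # Stage 2: propagate the input forward through the network.
        x1 = np.append(x, 1.0)            # input plus bias
        h = np.tanh(W1 @ x1)              # hidden activations
        h1 = np.append(h, 1.0)            # hidden plus bias
        y = W2 @ h1                       # linear output unit

        # Stage 3: backpropagate the error terms delta_i.
        delta_out = y - tgt
        delta_hid = (1.0 - h ** 2) * (W2[:-1] * delta_out)

        # Stage 4: gradient-descent weight update.
        W2 -= eta * delta_out * h1
        W1 -= eta * np.outer(delta_hid, x1)

# Report the final training error (a cross-validation stop is omitted here).
y_all = np.tanh(np.c_[X, np.ones(len(X))] @ W1.T) @ W2[:-1] + W2[-1]
print(f"final training MSE: {np.mean((y_all - t) ** 2):.4f}")
```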

3.3 ESN with MLP as Readout Mechanism

Figure 3.2: Schema of an ESN using an MLP as the readout mechanism

Figure 3.2 shows the basic schema of the new network structure we would like to investigate. The idea is to replace the linear readout mechanism of a conventional ESN with the nonlinear MLP and to train the weights associated with the MLP using the backpropagation algorithm to achieve the desired approximation. Effectively, instead of linearly combining the echo states x_i(n), we now combine them nonlinearly by feeding them as input to the MLP.

The effect of this change on the final generalization capabilities of the ESN is not easy to predict, since it could either improve or degrade performance depending on the task domain and other parameters of the network. Increased nonlinearity could drive the network into the thresholding range, resulting in poor approximation of smooth functions. On the other hand, it could also be that the new network structure emerges as a powerful approximator of nonlinear dynamical systems by exploiting the rich and dynamic memory reservoir of the ESN along with the approximating power of the MLP (a gradient descent technique, which can locate a minimum more precisely). The true behaviour of the network needs to be determined empirically, along with the effect of altering various network parameters.

One clear hypothesis that can be made about the resulting network is that, for it to have the same performance as a traditional ESN, the new network would require considerably fewer internal units in its dynamical reservoir; otherwise we will almost certainly overfit the training data, resulting in poor performance in the exploitation stage. Adding the MLP considerably increases the number of trainable parameters in the network, thus allowing us to reduce the ESN size and obtain a generally smaller network with faster response times and activation propagation. However, reducing the ESN size also means reducing the dimensions of the dynamical reservoir, where the rich set of varied dynamics evolves and lives as echo states. This will have an adverse effect on the temporal memory capabilities of the ESN. Therefore, there seems to be a tradeoff between overfitting and memory as the size of an ESN is reduced. This is another effect that needs to be empirically investigated.

Finally, to test whether the new network structure does (or does not) turn out to be a more powerful approximating tool than conventional ESNs, we shall test its performance on chaotic time series. ESNs hold the current record for predicting the Mackey-Glass 17 system (MGS 17; see section 4.1 for more on the MGS) 84 steps into the future (a benchmark task), with the lowest log10 NRMSE_84 reported to date [12], an improvement by a factor of 2400 over any other chaotic time series approximator. Improving on this would certainly establish the proposed network structure as one of the best approximators of chaotic time series, and it is expected that this can be done with an even smaller network size than that used in [12].
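To make the proposed architecture concrete, here is an illustrative sketch (in NumPy, not the planned Matlab implementation) of the combined structure: a sparse random reservoir is driven under teacher forcing according to equation (1), and the harvested echo states are passed through a small MLP readout instead of a linear one. The reservoir size, connectivity, spectral radius and toy teacher signal are all assumptions; training of the MLP weights would proceed by backpropagation as outlined in section 3.2.

```python
# Illustrative sketch of the architecture of figure 3.2: echo states x(n) are
# produced by equation (1) under teacher forcing and fed into an MLP readout
# instead of a linear one. All sizes and signals are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(2)
N, T, n_hid = 100, 300, 8

# Sparse random reservoir (toy connectivity), rescaled to spectral radius 0.8
# so that the echo state property can hold.
W = rng.uniform(-1.0, 1.0, (N, N)) * (rng.random((N, N)) < 0.05)
W *= 0.8 / np.max(np.abs(np.linalg.eigvals(W)))
w_fb = rng.uniform(-1.0, 1.0, N)          # output feedback weights
w_bias = rng.uniform(-0.1, 0.1, N)        # constant bias input

d = np.sin(0.2 * np.arange(T))            # toy teacher signal (stand-in for the MGS)

# Harvest echo states with the teacher signal forced onto the output feedback.
x = np.zeros(N)
states = np.empty((T, N))
for n in range(T):
    prev = d[n - 1] if n > 0 else 0.0
    x = np.tanh(W @ x + w_fb * prev + w_bias)   # state update, eq. (1)
    states[n] = x

# MLP readout (one tanh hidden layer, linear output) applied to each echo state;
# its weights V1, V2 would be trained by backpropagation as in section 3.2.
V1 = rng.uniform(-0.5, 0.5, (n_hid, N))
V2 = rng.uniform(-0.5, 0.5, n_hid)
y = np.tanh(states @ V1.T) @ V2            # nonlinear readout y(n)
print(y.shape)                             # (T,) network outputs
```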

4 Experimental Setup

First, we would like to determine whether the new network structure proposed in section 3.3 possesses the same or better approximation capabilities for nonlinear dynamical systems (e.g. chaotic time series), quite possibly with a smaller network size. Another parameter that we would like to test is the effect of the structure of the incoming synapses from the ESN to the MLP.

During training, the teacher signal is forced onto the output of the MLP, which is directly fed back into the ESN. The output of the ESN is then fed into the MLP, which in turn generates the final network output. This is compared with the teacher to compute the error to be minimized. After training, the teacher-forced signal is decoupled from the network and its own output (from the MLP) is fed back into the ESN. As a stopping criterion for training, we shall use cross-validation.

4.1 Preparation of training and testing data

The dataset is obtained from a discretized version of the Mackey-Glass (MG) delay differential equation:

dx/dt = 0.2 x(t - τ) / (1 + x(t - τ)^10) - 0.1 x(t).

This equation was proposed by M.C. Mackey and L. Glass in 1977 [15] to describe a model for the onset of leukaemia. Over the years, it has established itself as a benchmark dataset for time series prediction. τ is the delay; we shall first use τ = 17, followed by testing on τ = 30. If time permits, the network shall also be tested on datasets from other chaotic attractors: the Lorenz attractor and the laser time series. The dataset is prepared (discretized, shifted and scaled) as described in the supporting online material to [12] before feeding it to the network. Artificial noise is injected into the data.

4.2 Network Setup

Here, we have two individual networks to set up and initialize: the ESN and the MLP.

ESN: set up as described in [11] (using the Matlab implementation provided there). Initially, we start with a large DR of 1000 units, with a weight matrix W of 1% connectivity and random weights drawn from a uniform distribution over (-1, 1), then rescaled to a spectral radius of 0.8. As the network size is one of the main investigation parameters, and it has been postulated that we will need smaller networks, we repeat the experiments with progressively smaller DRs until performance begins to degrade; the rate at which the DR size is decreased can be determined from the results obtained. Output feedback is turned on and an auxiliary input unit is attached to feed in a constant bias input.

MLP: for the MLP setup, we use one hidden layer (see section 3.2) with h = 8 hidden units. This number has been chosen from a survey of relevant literature, such that it is neither too big (resulting in overfitting) nor too small (resulting in underfitting). [1] and [23] suggest that the effects of overparametrized MLPs can be overcome by careful selection of the range {-a, a} from which the weight values are initialized, and that one may fix the value of h and carefully pick a instead. [1] also suggests a new method for complexity analysis of an MLP, based on which the initial weights can be picked, as opposed to the commonly held belief that the smaller the value of a, the better. However, we shall empirically determine the value of a by performing pretests with decaying values of a.
In addition, for the final value of a picked, we shall repeat the experiment several times, each time re-initializing the weights from {-a, a}, thus starting at different places on the error surface in weight space (a Monte Carlo approach).

Another variable parameter here is the connection structure from the ESN internal units to the hidden units of the MLP. For this, we first experiment by connecting n/h units (n: number of internal ESN units) from the ESN to each hidden unit of the MLP. This value can then be scaled up to n, in which case there would be an incoming synapse from every ESN internal unit to every MLP hidden unit. The activation functions for the MLP are chosen to be sigmoid for the hidden units and linear for the single output unit, using the tanh function as the sigmoid. The step size η for the backpropagation algorithm is made to decay, with the decay rate determined empirically through pretests. Too large an η results in fast initial convergence followed by constant jitter around the minimum; on the other hand, too small an η might take prohibitively long to converge.

4.3 Evaluation Criterion

The evaluation criterion for our experiments is simply the log10 NRMSE, compared against the best results achieved so far for MG 17 and MG 30. Improving upon these would certainly be one goal. For this, we probably have to use the ideas posed by the refined version of the learning method in [12]: train the network using a reservoir with dynamics closer to those encountered at exploitation time, to improve modelling accuracy.

4.4 Time Scale

25th April, 2004: set up the network as a Matlab simulation and start preliminary testing to identify various parameters.
27th April, 2004: finish preliminary testing and start the real experiments.
3rd May, 2004: finish the experimentation phase.
7th May, 2004: finish data analysis.
13th May, 2004: final report due.
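For concreteness, the following is a rough sketch, referring back to sections 4.1 and 4.3, of how a Mackey-Glass (τ = 17) series can be generated by a simple Euler discretization and how a log10 NRMSE can be computed. The integration step, series length and normalization convention are assumptions for illustration; the actual data preparation will follow the supporting online material of [12].

```python
# Rough sketch for sections 4.1 and 4.3: a Mackey-Glass (tau = 17) series via
# Euler integration, and a log10 NRMSE between a prediction and the target.
import numpy as np

def mackey_glass(n_samples, tau=17, dt=1.0, x0=1.2):
    """Euler discretization of dx/dt = 0.2 x(t-tau)/(1 + x(t-tau)^10) - 0.1 x(t)."""
    history = int(tau / dt)
    x = np.full(n_samples + history, x0)        # constant initial history (assumption)
    for i in range(history, n_samples + history - 1):
        x_tau = x[i - history]
        x[i + 1] = x[i] + dt * (0.2 * x_tau / (1.0 + x_tau ** 10) - 0.1 * x[i])
    return x[history:]

def log10_nrmse(y_pred, y_true):
    """log10 of the RMSE normalized by the variance of the target (one common convention)."""
    return np.log10(np.sqrt(np.mean((y_pred - y_true) ** 2) / np.var(y_true)))

series = mackey_glass(4000)
# Dummy "prediction": the target plus a small perturbation, just to exercise the metric.
noisy_guess = series + 1e-4 * np.random.default_rng(3).standard_normal(len(series))
print(f"log10 NRMSE of a slightly perturbed copy: {log10_nrmse(noisy_guess, series):.2f}")
```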

References

[1] A. Atiya and C. Ji. How initial conditions affect generalization performance in large networks. IEEE Trans. Neural Networks, 8(2).
[2] H. Bersini, M. Birattari, and G. Bontempi. In Proc. IEEE World Congress on Computational Intelligence (IJCNN 98), 1998.
[3] T. Chow and C.T. Leung. Performance enhancement using nonlinear preprocessing. IEEE Trans. on Neural Networks, 7(4).
[4] L. Chudy and I. Farkas. Neural Network World, 8:481.
[5] L. Fausett. Fundamentals of Neural Networks. Prentice Hall.
[6] L.A. Feldkamp, D.V. Prokhorov, C.F. Eagen, and F. Yuan. In Nonlinear Modeling: Advanced Black-Box Techniques, pages 29-54.
[7] K.-I. Funahashi and Y. Nakamura. Neural Networks, 6:801.
[8] F. Gers, D. Eck, and J. Schmidhuber. Applying LSTM to time series predictable through time-window approaches. Technical Report IDSIA-22-00, IDSIA, 2000.
[9] H. Jaeger. The echo state approach to analysing and training recurrent neural networks. Technical Report 148, German National Research Center for Information Technology, 2001.
[10] H. Jaeger. Short term memory in echo state networks. Technical Report 152, German National Research Center for Information Technology.
[11] H. Jaeger. Tutorial on training recurrent neural networks, covering BPTT, RTRL, EKF and the echo state network approach. Technical Report 159, German National Research Center for Information Technology.
[12] H. Jaeger and H. Haas. Harnessing nonlinearity: Predicting chaotic systems and saving energy in wireless communication. Science, 304:78-80, April 2, 2004.
[13] A. Lapedes and R. Farber. Non-linear signal processing using neural networks: Prediction and system modelling. Technical Report LA-UR, Los Alamos National Laboratory.
[14] W. Maass, T. Natschläger, and H. Markram. Real-time computing without stable states: A new framework for neural computation based on perturbations. Neural Computation, 14:2531-2560, 2002.
[15] M.C. Mackey and L. Glass. Science, 197:287, 1977.
[16] T.M. Martinetz, S.G. Berkovich, and K.J. Schulten. IEEE Trans. Neural Networks, 4:558.
[17] J. McNames, J.A.K. Suykens, and J. Vandewalle. Int. J. Bifurcation Chaos, 9:1485.
[18] T.M. Mitchell. Machine Learning. McGraw-Hill, 1997.
[19] D.E. Rumelhart, G.E. Hinton, and R.J. Williams. Learning internal representations by error propagation. In Parallel Distributed Processing, volume 1, 1986.
[20] J. Vesanto. In Proc. WSOM '97, 1997.
[21] P.J. Werbos. Proc. IEEE, 78(10):1550, 1990.
[22] X. Yao and Y. Liu. IEEE Trans. Neural Networks, 8:694.
[23] S. Zhong and V. Cherkassky. Factors controlling generalization ability of MLP networks.
[24] D. Zipser and R.J. Williams. Neural Comput., 1:270, 1989.
