From Neural Networks To Reservoir Computing...


1 From Neural Networks To Reservoir Computing... An Introduction. Naveen Kuppuswamy, PhD Candidate, A.I. Lab, University of Zurich

2 Disclaimer: I am only an egg

3 Part I: From Classical Computation to ANNs

4 What is happening here?

5 What is happening here? Computation?

6 This might be feasible then... But let's back up a bit...

7 What is Computation?

8 What is Computation? A mapping of each element of an input set X to an output which is an element of the output set Y.

9 How are these things defined? Input: sequences of 0s and 1s, x ∈ {0,1}*. Output: y ∈ {0,1}. Implementation?
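
Formally (a restatement in symbols; the parity example is ours, not from the slides), such a computation is a function from finite binary strings to a single bit:

```latex
f : \{0,1\}^{*} \to \{0,1\}, \qquad
\text{e.g. parity: } f(x_1 x_2 \dots x_n) = x_1 \oplus x_2 \oplus \dots \oplus x_n
```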

10 How do you implement it?

11 Practical Implementation. Distinction between the following elements: program (in memory); data (input device); output (output device); computation in the processor (sequential).

12 Where is the tape here?

13 Where is the tape here? What makes it unique then?

14 The Brain: massively parallel computation; analog information processing; self-adaptation; implementation in wetware.

15 The Brain (slide 14 repeated). What's in it then?

16 Computational Units: Neurons. Many different types of neurons.

17 Computational Units: Neurons (slide 16 repeated). How to explain the big picture?

18 Abstractions of Neural Function: the McCulloch and Pitts artificial neuron, 1943; Hebb's learning rule, 1949; Rosenblatt's perceptrons (1958) and backpropagation learning.

19 From biological neurons to abstract models

20 From biological neurons to abstract models. Some important abstractions: abstract artificial neurons (simple but very powerful); a learning rule.

21 Feedforward Neural Networks [diagram: input layer (I1, I2) feeding a middle (hidden) layer, feeding an output layer (O1, O2)] The more general case can be obtained using multiple layers; nonlinearities can be introduced in the thresholding. E.g. multi-layer perceptrons + backpropagation, radial basis function networks. A minimal code sketch follows below.
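
As a concrete illustration, here is a minimal sketch of the two-input, two-output network drawn on this slide, with one hidden layer and a sigmoid thresholding nonlinearity; the layer sizes, random weights, and test input are illustrative assumptions, not values from the talk.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

rng = np.random.default_rng(0)
W_hid = rng.normal(size=(3, 2))  # middle (hidden) layer weights; inputs I1, I2
W_out = rng.normal(size=(2, 3))  # output layer weights; outputs O1, O2

def forward(x):
    """One feedforward pass: input -> hidden -> output, one direction only."""
    h = sigmoid(W_hid @ x)       # nonlinearity introduced in the thresholding
    return sigmoid(W_out @ h)

print(forward(np.array([0.5, -1.0])))  # a static map: same input, same output
```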

22 Some Applications [same network diagram] Action movie storylines

23 Some Applications [same network diagram] Classification; pattern recognition; forecasting and series prediction; control (industrial/robots).

24 Classification Examples: Handwriting (I: pixels, O: digit); Target identification (I: pixels, O: frame coords.); Lipreading (I: pixels, O: letters); Face recognition (I: pixels, O: face Y/N). Source:

25 Other Examples: obstacle climbing; function approximation; soccer (strategy); trajectory prediction. Source:

26 Feedforward Neural Networks: Features [same network diagram] Information moves in one direction; nonlinear static networks; learning can be accomplished by gradient-descent methods (error backpropagation).

27 Feedforward Neural Networks: Features (slide 26 repeated). Sweet! But can it solve everything?

28 Why is 'timing' important in the game?

29 Temporal aspects in many problems

30 Dynamic problem example. Song/speech recognition: Shazam? Works for songs and short (fixed-duration) speech!

31 Dynamic problem example. Song/speech recognition: works for songs and short (fixed-duration) speech! How can this be implemented on a neural network?

32 FF networks with time-window inputs. Practically possible, but is it general?

33 FF networks with time-window inputs. A code sketch of the idea follows below.
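
To make the idea concrete, here is a sketch of the windowing step: the static network receives the last k samples as one input vector, so it can see a fixed slice of history. The helper, window length, and toy signal are illustrative choices of ours.

```python
import numpy as np

def time_windows(signal, k):
    """Turn a 1-D time series into rows holding the last k samples,
    so a *static* feedforward network can see a fixed slice of history."""
    return np.stack([signal[t - k:t] for t in range(k, len(signal))])

u = np.sin(np.linspace(0, 8 * np.pi, 200))  # toy input signal
X = time_windows(u, k=10)                   # inputs: 10-sample windows
y = u[10:]                                  # targets: the next sample
print(X.shape, y.shape)                     # (190, 10) (190,)
```

The catch, answering the slide's question: the window length k is fixed in advance, so any dependency longer than k samples is invisible to the network; the approach is not general.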


35 Part II: From RNNs to Reservoir Computing

36 Offline vs. Online Computation: static function vs. dynamical system

37 From Feedforward to Recurrent [diagram: input layer, middle (hidden) layer, output layer]

38 Recurrent Neural Networks. RNN: O = f_t(I), i.e. O(t+1) = g(I(t), O(t)) [diagram: feedback loop from the middle (hidden) layer back onto itself, between input and output layers]

39 Recurrent Neural Network (RNN): a comparison. FFN: O = f(I). RNN: O = f_t(I), i.e. O(t+1) = g(I(t), O(t)) [side-by-side diagrams: the RNN adds a feedback loop around the middle (hidden) layer] The RNN is a dynamical system!
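
Written out as a state-space system (our notation; tanh is just a typical choice of unit nonlinearity), the hidden state x carries the memory, and the feedback loop is the W x(t) term:

```latex
x(t+1) = \tanh\big(W\,x(t) + W_{\mathrm{in}}\,u(t)\big), \qquad
y(t) = W_{\mathrm{out}}\,x(t)
```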

40 Recurrent Neural Networks: topology? RNN: O(t+1) = g(I(t), O(t))

41 RNN: Features [diagram: input layer, output layer] Universal approximator of dynamical systems. Nearly all biological neural modules exhibit recurrent pathways. Different topologies: feedback, symmetric, or fully connected. But what about learning of RNNs?

42 RNN: Learning Methods. Gradient-descent methods: Real-Time Recurrent Learning (RTRL); Backpropagation Through Time (BPTT); Atiya-Parlos Recurrent Learning (APRL). Global optimization methods: genetic algorithms; simulated annealing.

43 Backpropagation Through Time. Prepare ordered pairs of training data: <a0,y0>, <a1,y1>, ..., <a(n-1),y(n-1)>. Unfold the neural network during the training phase...

44 Backpropagation Through Time? Prepare ordered pairs of training data: <a0,y0>, <a1,y1>, ..., <a(n-1),y(n-1)>. Unfold the neural network during the training phase... then apply regular backpropagation! (Gradient descent; a sketch follows below.)
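
A minimal sketch of the unfold-then-backprop recipe for a vanilla tanh RNN with a linear readout and squared error; the sizes, learning rate, and toy next-step task are illustrative assumptions, not details from the talk.

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_hid = 1, 8
W_in  = rng.normal(scale=0.5, size=(n_hid, n_in))
W     = rng.normal(scale=0.5, size=(n_hid, n_hid))
W_out = rng.normal(scale=0.5, size=(1, n_hid))

def bptt_step(inputs, targets, lr=0.01):
    T = len(inputs)
    # Unfold: forward pass, storing the state of every "copy" of the network.
    xs = [np.zeros(n_hid)]
    for t in range(T):
        xs.append(np.tanh(W_in @ inputs[t] + W @ xs[-1]))
    # Regular backpropagation through the unfolded copies.
    dW_in, dW, dW_out = (np.zeros_like(M) for M in (W_in, W, W_out))
    dx_next = np.zeros(n_hid)                  # gradient flowing back in time
    for t in reversed(range(T)):
        e = W_out @ xs[t + 1] - targets[t]     # dL/dy for squared error
        dW_out += np.outer(e, xs[t + 1])
        dx = W_out.T @ e + dx_next             # from readout and from step t+1
        da = dx * (1.0 - xs[t + 1] ** 2)       # back through tanh
        dW_in += np.outer(da, inputs[t])
        dW    += np.outer(da, xs[t])
        dx_next = W.T @ da
    for P, dP in ((W_in, dW_in), (W, dW), (W_out, dW_out)):
        P -= lr * dP                           # one (expensive) weight update

seq = np.sin(np.linspace(0, 4 * np.pi, 50))
bptt_step([np.array([v]) for v in seq[:-1]], seq[1:])  # next-step prediction
```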

45 RNN: Learning Problems. The gradual change of network parameters might reach points where the gradient information degenerates (becomes ill-defined); convergence cannot be guaranteed. A single parameter update is expensive, and many update cycles may be necessary, so training times are long; RNN training is feasible only for relatively small networks (on the order of tens of units). It is intrinsically hard to learn dependencies requiring long-range memory: the gradient information exponentially dissolves over time. Most methods require a lot of experience; almost an art!
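
The "dissolves over time" point can be seen directly: backpropagating through T steps multiplies the error signal by the recurrent Jacobian T times, so its norm typically decays (or explodes) exponentially. A toy illustration under our own assumptions (random weights, a frozen state for simplicity):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50
W = rng.normal(scale=0.9 / np.sqrt(n), size=(n, n))  # spectral radius ~ 0.9
x = np.tanh(rng.normal(size=n))     # a hidden state, frozen for illustration
g = rng.normal(size=n)              # error gradient arriving at the last step

for steps_back in range(1, 101):
    g = W.T @ (g * (1.0 - x ** 2))  # one backprop step through tanh(W x + ...)
    if steps_back % 20 == 0:
        print(steps_back, np.linalg.norm(g))  # shrinks exponentially
```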

46 A curious insight into RNN learning: "If a random recurrent neural network (RNN) possesses certain algebraic properties, training only a linear readout from it is often sufficient to achieve excellent performance in practical applications." Jaeger, 2001

47 The Reservoir Approach. Use a large, random and fixed RNN (called a reservoir in this context), inducing in each unit of the RNN a nonlinear transform of the input. Output signals are read out from the excited RNN by some readout mechanism: typically a simple linear combination of the reservoir signals. Outputs can be trained in a supervised way: typically by linear regression of the teacher output on the tapped reservoir signals. (Jaeger, 2007) [diagram: input, reservoir, linear readout, output]

48 The Reservoir Approach (slide 47 repeated). Key idea: separation between a reservoir and a readout function. A runnable sketch follows below.
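
To make the separation concrete, here is a minimal echo state network in this spirit: a fixed random reservoir excited by the input, with only a linear readout trained, by regularized (ridge) linear regression on the tapped reservoir states. Reservoir size, spectral radius, washout length, regularization, and the toy task are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n_res, n_in = 200, 1

# The reservoir: large, random, and fixed -- it is never trained.
W_in = rng.uniform(-0.5, 0.5, size=(n_res, n_in))
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # scale spectral radius to 0.9

def run_reservoir(inputs):
    """Excite the reservoir and tap its states (one row per time step)."""
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = np.tanh(W_in @ u + W @ x)       # nonlinear transform of the input
        states.append(x)
    return np.array(states)

# Toy task: next-step prediction of a sine wave.
u = np.sin(np.linspace(0, 16 * np.pi, 800))[:, None]
X, Y = run_reservoir(u[:-1]), u[1:]
X, Y = X[100:], Y[100:]                     # discard the initial transient

# Train only the readout: ridge regression of the teacher on the states.
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ Y).T
print("train MSE:", np.mean((X @ W_out.T - Y) ** 2))
```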

49 Contrast with traditional RNN training [comparison figure: traditional methods adapt all connection weights; reservoir computing adapts only the reservoir-to-output weights]. Lukoševičius and Jaeger, 2009

50 Main Flavours: Echo State Networks; BackPropagation-DeCorrelation (BPDC); Liquid State Machines; Temporal Recurrent Networks

51 Main Flavours, ordered from engineering (application) oriented to biological modeling: Echo State Networks; BackPropagation-DeCorrelation; Temporal Recurrent Networks; Liquid State Machines

52 Echo State Networks (ESN). Observation: if a random RNN possesses certain algebraic properties, training only a linear readout from it is often sufficient. (Jaeger, 2001) The untrained RNN part of an ESN is called a dynamical reservoir, and the resulting states x(n) are termed echoes of its input history. Proposed for machine learning and nonlinear signal processing.

53 ESN: Characteristics. Uses a large number of internal neurons; weight matrices are randomly initialised. Neurons typically use a sigmoid-type nonlinearity, or the leaky-integrator variant. A weighted linear readout is used, and the outputs are trained by linear regression. (The slide's formulas were figures; the standard forms are given below.)
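
The formulas referenced on this slide did not survive transcription; the standard forms from the ESN literature (Jaeger, 2001; Lukoševičius and Jaeger, 2009) are the tanh state update, the weighted linear readout (which often also sees the input), and the leaky-integrator variant:

```latex
x(n+1) = \tanh\big(W\,x(n) + W_{\mathrm{in}}\,u(n+1)\big), \qquad
y(n) = W_{\mathrm{out}}\,x(n)

x(n+1) = (1-\alpha)\,x(n) + \alpha\,\tanh\big(W\,x(n) + W_{\mathrm{in}}\,u(n+1)\big),
\qquad \alpha \in (0,1] \ \text{(leak rate)}
```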

54 ESN: Parameters. Connection weights can be chosen randomly, *but* the reservoir has to have a fading memory: the connection weights should ensure that the network functions at the edge of chaos. (A small demonstration follows below.)
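
One way to see the fading-memory requirement: run two copies of the same reservoir from different initial states under identical input. With the spectral radius scaled below 1 (a common heuristic, though not a strict guarantee of the echo state property in general), the state difference washes out, leaving only echoes of the input. A toy demonstration with illustrative numbers:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100
W = rng.normal(size=(n, n))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # radius 0.9: fading memory
W_in = rng.uniform(-0.5, 0.5, size=n)
xa, xb = rng.normal(size=n), rng.normal(size=n)  # two different start states

for t in range(201):
    u = np.sin(0.1 * t)                    # identical input to both copies
    xa = np.tanh(W_in * u + W @ xa)
    xb = np.tanh(W_in * u + W @ xb)
    if t % 50 == 0:
        print(t, np.linalg.norm(xa - xb))  # difference dies out over time
```

Rescaling W well above radius 1 instead typically pushes the network past the edge of chaos, and the two trajectories never converge.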

55 ESN: Parameters. Time scale: sampling and leak rate. Size of the reservoir.

56 ESN: Best Learning Results. 3-fold cross-validation [results figure]

57 BackPropagation-DeCorrelation (BPDC). An interesting insight into the Atiya-Parlos Recurrent Learning (APRL) technique: a functional decomposition of the trained networks into a fast-adapting readout layer and a slowly changing dynamic reservoir. (Steil, 2004) APRL basically differentiates the error function with respect to the states instead of with respect to the weights as in BP ("virtual teacher forcing"): the error is computed with respect to the state, and the weight update drives the network towards it. The name refers to the fact that x - δx is never fed back... hence a virtual force.

58 Decorrelation of states and input? Wait. What?

59 The BPDC principle explained. To train the network, treat the inner neurons as a dynamic reservoir providing dynamical memory. The information processing capacity is maximal if the states are maximally decorrelated with the input. A compromise of decorrelation with conventional error backpropagation allows derivation of a learning rule which only modifies the output weights. (Steil, 2004)

60 The BPDC principle explained (slide 59 repeated). And it's only O(N)! Nearly warp speed!

61 Biological Models: Liquid State Machines. Developed from a computational neuroscience perspective, aiming at explaining principal computational properties of neural microcircuits. The reservoir is often referred to as the liquid, following an intuitive metaphor of the excited states as ripples on the surface of a pool of water.

62 Liquid State Machines. Sophisticated, biologically realistic models of spiking integrate-and-fire neurons and dynamic synaptic connection models in the reservoir; neuron connectivity often follows topological and metric constraints. (Maass et al., 2002) Bio-motivated inputs: spike trains. Readouts: originally multi-layer feedforward networks (of either spiking or sigmoid neurons).

63 Liquid State Machines (slide 62 repeated). Aha! Eureka!?

64 Liquid State Machines (slide 62 repeated). Hmmm... realistic, but hard to tune and train. Useful nonetheless. (A minimal spiking-unit sketch follows below.)
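
For flavour, a minimal leaky integrate-and-fire unit, the kind of spiking neuron an LSM reservoir is built from; this strips away the dynamic synapses and biological detail of Maass et al., and all constants are illustrative.

```python
import numpy as np

TAU_M, V_REST, V_THRESH, V_RESET, DT = 20.0, 0.0, 1.0, 0.0, 1.0  # toy units

def lif(input_current, v=0.0):
    """Integrate a current trace; return the spike times."""
    spikes = []
    for t, i_in in enumerate(input_current):
        v += DT / TAU_M * (-(v - V_REST) + i_in)  # leaky integration
        if v >= V_THRESH:                         # threshold crossed:
            spikes.append(t)                      # fire...
            v = V_RESET                           # ...and reset
    return spikes

rng = np.random.default_rng(5)
print(lif(rng.uniform(0.0, 2.5, size=200)))       # spike train out
```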

65 Temporal Recurrent Networks. Based on research into cortico-striatal circuits in the human brain by Dominey; focus on empirical cognitive neuroscience and functional neuroanatomy. (Dominey et al., 1995, 2003) "...there is no learning in the recurrent connections, only between the State units and the Output units. Adaptation is based on a simple associative learning mechanism..." "...It is worth noting that the simulated recurrent prefrontal network relies on fixed randomized recurrent connections,..."

66 Temporal Recurrent Networks (slide 65 repeated). We finally might see the tape ;)


68 Part III: Applications, Case Studies

69 Some Applications: nonlinear time series prediction (system modeling, financial data modelling, signal generation and prediction); classification (speech/audio, epileptic seizure detection); robotics and control.

70 Some Application Examples. Nonlinear dynamical system modeling and identification (I: input data time series, O: nonlinear dynamical system). Financial time series prediction (I: financial data time series, O: future data time series prediction). Source:

71 Epileptic Seizure Detection. I: real-time EEG data time series, O: seizure onset (t/f). Home-MATE project: Buteneers et al.

72 Robotics Applications: reservoir computing for pattern generation for running robots and frequency modulation of patterns; echo state networks for sensorimotor control of a multi-arm octopus robot.

73 Reservoir computing for pattern generation: large-scale dynamical systems (reservoirs) for implementing motor skills.

74 Frequency modulation. Introduced by Jaeger (2010), continued by Li and Jaeger (2011). 1. Train and equilibrate the RC system; 2. implement a suitable observer; 3. calculate the control weights; 4. tune the PID parameters.

75 Puppy experiment [figure: robot regulating its measured distance to a wall against a desired distance] Frequency modulation of the RC pattern generator to maintain wall distance.

76 Sensorimotor Control of an Octopus Robot

77 Bio-inspiration and ESN design [diagram: visual system; central nervous system; peripheral arm controller; peripheral nervous system]

78 Results [figure]

79 Case Studies: Morphological Computation. Literal liquid state machines: pattern recognition in a bucket. Theoretical foundation for morphological computation. Kitty robot: spine as a reservoir.

80 Literal Liquid State Machines: Pattern Recognition in a Bucket (Fernando and Sojakka, 2003). Objective: robust spatiotemporal pattern recognition in a noisy environment. Speech samples ("zero" and "one"), a few seconds in length. A short-time Fourier transform on the active frequency range (1-3000 Hz) creates an 8x8 matrix of inputs from each sample (8 motors, 8 time slices). Each sample drives the motors for 4 seconds, one after the other. Vision processing: edge detection to produce 700 outputs. 50 perceptrons in parallel, trained using the p-delta rule. (A sketch of the 8x8 preprocessing follows below.)
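
For concreteness, a sketch of that preprocessing step: each sound sample is reduced to an 8x8 time-frequency matrix with a short-time Fourier transform over the 1-3000 Hz band. The sample rate and the band-averaging scheme are our assumptions, not details from Fernando and Sojakka.

```python
import numpy as np

def bucket_features(sample, sr=8000, n_slices=8, n_bands=8, fmax=3000):
    """Reduce one audio sample to an (8 time slices) x (8 bands) matrix."""
    feats = []
    for s in np.array_split(sample, n_slices):
        spec = np.abs(np.fft.rfft(s))                   # magnitude spectrum
        freqs = np.fft.rfftfreq(len(s), d=1.0 / sr)
        band = spec[(freqs >= 1) & (freqs <= fmax)]     # active range 1-3000 Hz
        feats.append([b.mean() for b in np.array_split(band, n_bands)])
    return np.array(feats)

sample = np.random.default_rng(7).normal(size=4 * 8000)  # 4 s of noise
print(bucket_features(sample).shape)                     # (8, 8)
```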

81 States of the Liquid Brain ["zero" vs. "one" figure]

82 Physical body as a reservoir. ESN, LSM: artificial neurons forming a high-dimensional dynamical system. A physical body is also a high-dimensional dynamical system, so a physical body might be able to work as a reservoir? [diagram: an input stream u(t) drives a recurrently connected nonlinear mass-spring system (mass points plus a mount point); a linear, static readout y(t) = w1 l1(t) + ... + wN lN(t) taps the spring lengths l1(t)...lN(t)] Nonlinear mass-spring law: F = k1 x + k3 x^3 + d1 v + d3 v^3 + F_ext, with v = dx/dt and x the difference between the actual spring length and the resting one.
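
A toy version of the body-as-reservoir idea, under our own assumptions (unit masses, Euler integration, a scalar input injected as an external force, and a made-up nonlinear target): simulate nonlinear spring-damper elements and fit only a linear, static readout on their elongations.

```python
import numpy as np

rng = np.random.default_rng(6)
N, dt = 20, 0.01                         # number of masses/springs, Euler step
k1 = rng.uniform(1.0, 3.0, N)            # heterogeneous linear stiffness
k3, d1, d3 = 1.0, 0.3, 0.1               # cubic stiffness and damping terms

def run_body(u_seq):
    """Drive the mass-spring system; tap the elongations l_i(t) as states."""
    pos, vel = np.zeros(N), np.zeros(N)
    states = []
    for u in u_seq:
        # nonlinear restoring + damping force, plus the input as external force
        F = -k1 * pos - k3 * pos**3 - d1 * vel - d3 * vel**3 + u
        vel += dt * F                    # unit masses: acceleration = force
        pos += dt * vel
        states.append(pos.copy())
    return np.array(states)

u = np.sin(np.linspace(0, 20 * np.pi, 3000))
X = run_body(u)                                 # the body does the nonlinearity
Y = u**3                                        # toy target: nonlinear map of u
W_out, *_ = np.linalg.lstsq(X, Y, rcond=None)   # linear, static readout
print("readout MSE:", np.mean((X @ W_out - Y) ** 2))
```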

83 Theoretical Foundation for Morphological Computation? [figure: the input u(t) feeds the physical body, and three linear readouts produce y1(t), y2(t), y3(t); in the comparison panel the same readouts act on the raw input alone, y_i = w_i u(t) + w_{b,i}, i.e. linear regression with no physical body] The physical body contributes much to the computation!

84 Extension incorporating feedback. Gait patterns for a quadruped can be emulated by the morphological computational device (under feedback).

85 Influence of biologically-inspired constraints [figure: a musculoskeletal model built from rigid rods, ball joints and springs, driven by u(t), with two readouts producing y1(t) and y2(t)] Constraints: rigid parts; joints; arrangement of springs. The biologically-inspired model can have computational capability.

86 Future application: a quadrupedal robot with a multi-joint spine, where the spine movement itself works as a controller. [figure: input u(t), two readouts; spine movement A gives walking, B bounding, C trotting] For further questions, please contact sumioka@ifi.uzh.ch

87 Kitty robot. Spine: compliance, nonlinearities. The Kitty robot has a biologically inspired spine and is driven by the spine; 32 force sensors are embedded to detect spinal dynamics. Size: 23 cm x 29 cm x 20 cm. Weight: 1.1 kg.

88 Spine as a reservoir

89 Key References
Lukoševičius, M., and H. Jaeger, "Reservoir computing approaches to recurrent neural network training", Computer Science Review, vol. 3, no. 3, pp. 127-149, August 2009.
Jaeger, H., "Echo state network", Scholarpedia, vol. 2, no. 9, p. 2330, 2007.
Jaeger, H., and H. Haas, "Harnessing nonlinearity: predicting chaotic systems and saving energy in wireless telecommunication", Science, vol. 304, no. 5667, pp. 78-80, April 2, 2004.
Maass, W., T. Natschläger, and H. Markram, "Real-time computing without stable states: a new framework for neural computation based on perturbations", Neural Computation, vol. 14, no. 11, pp. 2531-2560, 2002.
Schiller, U. D., and J. J. Steil, "Analyzing the weight dynamics of recurrent learning algorithms", Neurocomputing, vol. 63C, pp. 5-23, 2005.
Steil, J. J., "Backpropagation-Decorrelation: online recurrent learning with O(N) complexity", Proc. IJCNN, 2004.

90 Additional Resources
Website with information and resources:
EU FP7 funded collaborative projects:
AMARSi (Adaptive Modular Architecture for Rich Motor Skills). Mission: exploit principles of neurodynamics and neurocontrol to endow complex and compliant robots with rich sets of motor skills.
PHOCUS (toward a PHOtonic liquid state machine based on delay-CoUpled Systems). Mission: design and implement a photonics realization of a liquid state machine (LSM), with the potential for versatile and fast signal handling.
ORGANIC (Self-Organized Recurrent Neural Learning for Language Processing). Mission: establish neurodynamical architectures as a viable alternative to statistical methods for speech and handwriting recognition.
The OrGanic Environment for Reservoir computing (Oger) toolbox: a Python toolbox for rapidly building, training and evaluating modular learning architectures on large datasets.

91 Shameless Plug. Need motivated students! Projects on: reservoir computing C++ implementations; tendon-driven robot characterisation using a tracker system. Contact: Helmut Hauser (hhauser@ifi.uzh.ch) or Naveen Kuppuswamy (naveenoid@ifi.uzh.ch)

92 Perhaps Someday? Questions, comments, feedback? Thanks for all the fish! A.I. Lab (Zurich): Matej Hoffmann, Dr. Hidenobu Sumioka, Dr. Kohei Nakajima, Dr. Helmut Hauser, Qian Zhao, Tao Li, Mathias Weyland. Reservoir Lab (Ghent): Francis wyffels, Tim Waegeman, Ken Caluwaerts
