Financial Informatics XVII: Unsupervised Learning

Financial Informatics XVII: Unsupervised Learning. Khurshid Ahmad, Professor of Computer Science, Department of Computer Science, Trinity College, Dublin, IRELAND. November 9th. https://www.cs.tcd.ie/khurshid.ahmad/teaching.html

Preamble: Neural Networks 'learn' by adapting in accordance with a training regimen. Five key algorithms: ERROR-CORRECTION OR PERFORMANCE LEARNING; HEBBIAN OR COINCIDENCE LEARNING; BOLTZMANN LEARNING (STOCHASTIC NET LEARNING); COMPETITIVE LEARNING; FILTER LEARNING (GROSSBERG'S NETS).

Preamble: Neural Networks 'learn' by adapting in accordance with a training regimen. A financial example: California sought to have the license of one of the largest auditing firms, Ernst & Young, removed because of its role in the well-publicized collapse of the Lincoln Savings & Loan Association. Further, regulators could use a bankruptcy

ANN Learning Algorithms. [Figure: error-correction (supervised) learning. The ENVIRONMENT supplies a vector describing the environment to both a TEACHER and the LEARNING SYSTEM; the teacher's desired response and the system's actual response are compared at a summing junction (Σ, with + and - inputs) to produce the error signal that drives learning.]

ANN Learning Algorithms. [Figure: unsupervised learning. The ENVIRONMENT supplies a vector describing the state of the environment directly to the LEARNING SYSTEM; there is no teacher.]

ANN Learning Algorithms. [Figure: reinforcement learning. The ENVIRONMENT supplies a state-vector input to a CRITIC, which converts a primary reinforcement signal into a heuristic reinforcement signal for the LEARNING SYSTEM; the learning system's actions act back on the environment.]

Hebbian Learning. DONALD HEBB, a Canadian psychologist, was interested in investigating PLAUSIBLE MECHANISMS FOR LEARNING AT THE CELLULAR LEVEL IN THE BRAIN. See, for example, Donald Hebb's 1949 The Organisation of Behaviour. New York: Wiley.

Hebbian Learning. HEBB'S POSTULATE: "When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased."

Hebbian Learning. Hebbian learning laws CAUSE WEIGHT CHANGES IN RESPONSE TO EVENTS WITHIN A PROCESSING ELEMENT THAT HAPPEN SIMULTANEOUSLY. THE LEARNING LAWS IN THIS CATEGORY ARE CHARACTERIZED BY THEIR COMPLETELY LOCAL (BOTH IN SPACE AND IN TIME) CHARACTER.

Hebbian Learning. LINEAR ASSOCIATOR: a substrate for Hebbian learning systems. [Figure: a single-layer network with inputs x_1, x_2, x_3 and outputs y_1, y_2, y_3; each input x_j is connected to each output y_i through a weight w_ij (w_11 through w_33).]
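
To make the linear associator concrete, the minimal sketch below (not from the lecture; the 3x3 weight matrix and the input values are made up) computes its output as a matrix-vector product, y_i = sum_j w_ij * x_j.

```python
import numpy as np

# Hypothetical 3x3 weight matrix: W[i, j] connects input x_j to output y_i.
W = np.array([[0.2, -0.1, 0.5],
              [0.0,  0.3, -0.4],
              [0.1,  0.1,  0.2]])

x = np.array([1.0, -1.0, 0.5])   # input pattern x_1..x_3 (illustrative values)
y = W @ x                        # linear associator output: y_i = sum_j w_ij * x_j
print(y)
```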

Hebbian Learning. A simple form of the Hebbian learning rule: w_kj(new) = w_kj(old) + η·y_k·x_j, where η is the so-called rate of learning and x and y are the input and output respectively. This rule is also called the activity product rule.
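
A hedged sketch of one application of the activity product rule (the variable names and numbers below are illustrative, not the lecture's):

```python
# Activity product rule: w_kj(new) = w_kj(old) + eta * y_k * x_j
eta = 0.1     # learning rate (eta); illustrative value
w_kj = 0.5    # current weight between input j and output k (illustrative)
x_j = 1.0     # presynaptic input
y_k = 0.8     # postsynaptic output

w_kj = w_kj + eta * y_k * x_j   # the weight grows when x_j and y_k are active together
print(w_kj)                     # 0.58
```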

Hebbian Learning. A simple form of the Hebbian learning rule: if there are m pairs of vectors (x_1, y_1), (x_2, y_2), ..., (x_m, y_m) to be stored in a network, then the training sequence will change the weight matrix W from its initial value of ZERO to its final state by simply adding together all of the incremental weight changes caused by the m applications of Hebb's law: W = y_1·x_1^T + y_2·x_2^T + ... + y_m·x_m^T.
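
The summation above can be written as a sum of outer products. A minimal sketch under that reading (the training pairs and their values below are made up):

```python
import numpy as np

# Hypothetical training pairs (x_i, y_i); values are illustrative only.
pairs = [
    (np.array([1.0, -1.0,  0.5]), np.array([ 1.0, -1.0])),
    (np.array([0.5,  1.0, -1.0]), np.array([-1.0,  1.0])),
]

W = np.zeros((2, 3))        # the weight matrix starts from zero
for x, y in pairs:
    W += np.outer(y, x)     # incremental Hebbian change for one pair: y x^T

print(W)                    # final weights after the m (= 2) applications of Hebb's law
```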

Hebbian Learning. A worked example (developed over several slides): consider the Hebbian learning of three input vectors x_1, x_2 and x_3, presented one after another to a network with a given initial weight vector w_1; each presentation adds an incremental change to the weights.
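
A minimal sketch of the structure of such a worked example (the vectors below are stand-ins rather than the lecture's values; the discrete bipolar activation is taken to be the sign of the net input, with η = 1):

```python
import numpy as np

def f_discrete(net):
    """Discrete bipolar activation: +1 if net >= 0, else -1."""
    return 1.0 if net >= 0 else -1.0

eta = 1.0
w = np.array([1.0, -1.0, 0.0, 0.5])            # initial weight vector (stand-in values)
inputs = [np.array([1.0, -2.0,  1.5,  0.0]),   # x_1, x_2, x_3 (stand-in values)
          np.array([1.0, -0.5, -2.0, -1.5]),
          np.array([0.0,  1.0, -1.0,  1.5])]

for x in inputs:
    net = float(np.dot(w, x))    # net input to the single output neuron
    y = f_discrete(net)          # discrete bipolar output (+1 or -1)
    w = w + eta * y * x          # Hebb's law: the whole pattern is added or subtracted
    print(net, y, w)
```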

Hebbian Learning. The worked example shows that with a discrete f(net) and η = 1, the weight change involves ADDING or SUBTRACTING the entire input pattern vector to or from the weight vector, respectively. Consider the case when the activation function is a continuous one. For example, take the bipolar continuous activation function: f(net) = 2 / (1 + exp(-λ·net)) - 1, where λ = 1.
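
A small sketch of this activation written out in code (λ = 1, as on the slide; the sample net values are illustrative):

```python
import math

def f_continuous(net, lam=1.0):
    """Bipolar continuous activation: 2 / (1 + exp(-lambda * net)) - 1, bounded in (-1, 1)."""
    return 2.0 / (1.0 + math.exp(-lam * net)) - 1.0

# The weight update eta * f_continuous(net) * x is a scaled ("tapered") version of the
# discrete update eta * sign(net) * x.
for net in (3.0, -1.0, 0.25):
    print(net, round(f_continuous(net), 3))
```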

Hebbian Learning. The worked example with the bipolar continuous activation function indicates that the weight adjustments are tapered for the continuous function but are generally in the same direction:

Vector | Discrete bipolar f(net) | Continuous bipolar f(net)
x_1    |  1                      |  0.95
x_2    | -1                      | -0.77
x_3    | -1                      | -0.93

Hebbian Learning. The details of the computation for the three steps with a discrete bipolar activation function are presented in the notes pages. The input vectors and the initial weight vector are those of the worked example above.

Hebbian Learning. The details of the computation for the three steps with a continuous bipolar activation function are presented in the notes pages. The input vectors and the initial weight vector are those of the worked example above.

Hebbian Learning. Recall that the simple form of the Hebbian learning law, w_kj(new) = w_kj(old) + η·y_k·x_j, suggests that the repeated application of the presynaptic signal x_j leads to an increase in y_k, and therefore to exponential growth that finally drives the synaptic connection into saturation. A number of researchers have proposed ways in which such saturation can be avoided. Sejnowski has suggested Δw_kj = η·(x_j - x̄_j)·(y_k - ȳ_k), where x̄_j is the time-averaged value of x_j and ȳ_k is the time-averaged value of y_k.

Hebbian Learning. The Hebbian synapse described by the equation below is said to involve the use of POSITIVE FEEDBACK: w_kj(new) = w_kj(old) + η·y_k·x_j.

Hebbian Learning. What is the principal limitation of this simplest form of learning? w_kj(new) = w_kj(old) + η·y_k·x_j. The above equation suggests that the repeated application of the input signal x_j leads to an increase in y_k, and therefore to exponential growth that finally drives the synaptic connection into saturation. At that point of saturation no new information can be stored in the synapse and selectivity is lost. Graphically, the relationship of the weight change with the postsynaptic activity y_k is a simple one: it is linear, with a slope η·x_j.
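
A tiny sketch of the positive-feedback effect (illustrative numbers; the postsynaptic response is assumed linear, y = w·x): with the same input presented repeatedly, the simple Hebbian update multiplies the weight by a constant factor each step, so the weight grows exponentially.

```python
eta = 0.1
x = 1.0     # the same presynaptic input, presented repeatedly (illustrative)
w = 0.5     # initial weight (illustrative)

for step in range(20):
    y = w * x                # assumed linear postsynaptic response
    w = w + eta * y * x      # simple Hebbian update: w -> w * (1 + eta * x**2)
print(w)                     # the weight has grown exponentially; a real synapse would saturate
```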

Hebbian Learning. The so-called covariance hypothesis was introduced to deal with the principal limitation of the simplest form of Hebbian learning and is given as Δw_kj(n) = η·(x_j(n) - x̄_j)·(y_k(n) - ȳ_k), where x̄_j and ȳ_k denote the time-averaged values of the pre-synaptic and post-synaptic signals.
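
A hedged sketch of the covariance rule in code (the time averages here are simple means of the recorded activities, and all numbers are illustrative):

```python
import numpy as np

eta = 0.1
rng = np.random.default_rng(0)
x_samples = rng.normal(1.0, 0.5, size=100)   # presynaptic activity over time (illustrative)
y_samples = rng.normal(2.0, 0.5, size=100)   # postsynaptic activity over time (illustrative)

x_bar = x_samples.mean()    # time-averaged presynaptic signal
y_bar = y_samples.mean()    # time-averaged postsynaptic signal

w = 0.0
for x_n, y_n in zip(x_samples, y_samples):
    # Covariance rule: the change can be positive (potentiation) or negative (depression),
    # unlike the simple activity product rule, which can only increase the weight here.
    w += eta * (x_n - x_bar) * (y_n - y_bar)
print(w)
```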

Hebbian Learning. If we expand the above equation, Δw_kj(n) = η·x_j(n)·y_k(n) - η·x̄_j·y_k(n) - η·ȳ_k·x_j(n) + η·x̄_j·ȳ_k. The last term in the expansion is a constant, and the first term, η·x_j(n)·y_k(n), is exactly what we have for the simplest Hebbian learning rule; the modified (covariance) rule is thus the simple rule plus correction terms involving the time averages.

Hebbian Learning. Graphically, the relationship of the weight change Δw_kj with the postsynaptic activity y_k is still linear, but with a slope η·(x_j(n) - x̄_j); the straight line crosses zero (no weight change) at y_k = ȳ_k, and the minimum value of the weight change Δw_kj, reached at y_k = 0, is -η·ȳ_k·(x_j(n) - x̄_j).
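
A quick numeric check of this geometric picture (all values illustrative):

```python
eta, x_j, x_bar, y_bar = 0.5, 1.5, 1.0, 2.0   # illustrative values

def delta_w(y_k):
    """Covariance-rule weight change as a function of the postsynaptic activity y_k."""
    return eta * (x_j - x_bar) * (y_k - y_bar)

print(delta_w(y_bar))   # 0.0   -> no change when y_k equals its time average
print(delta_w(0.0))     # -0.5  -> -eta * y_bar * (x_j - x_bar), the maximum depression
print(delta_w(3.0))     # 0.25  -> potentiation when y_k exceeds its time average
```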