Instituto Tecnológico y de Estudios Superiores de Occidente
Departamento de Electrónica, Sistemas e Informática
Introductory Notes on Neural Networks


Introductory Notes on Neural Networks
Dr. José Ernesto Rayas Sánchez
April

BIOLOGICAL NEURAL NETWORKS

The brain can be seen as a highly complex, nonlinear, and parallel computer. The neuron is the fundamental unit of the nervous system, particularly the brain. The neuron is a simple processing unit that receives and combines signals from many other neurons through filamentary input paths, the dendrites. A simplified representation of a neuron is shown in Fig. 1.

Fig. 1. Representation of a biological neuron (dendritic tree, soma, nucleus, axon, axonic ending).

Dendritic trees are connected with the main body of the nerve cell, the soma. The outer boundary of the cell is the membrane. When excited above a certain level, the threshold, the neuron fires; that is, it transmits an electrical signal, the action potential, along a single path called the axon. When the action potential reaches the axonic ending, chemical messengers, known as neurotransmitters, are released. The connection (or junction) between a neuron's axon and another neuron's dendrite is called a synapse. A single neuron may have 1,000 to 10,000 synapses and may be connected with some 1,000 neurons.

All signals coming from the dendritic tree are combined at the soma, and if the amplitude of the combined signals reaches the threshold of the neuron, a firing process is activated and an output signal is produced. The signal, either a single pulse or a sequence of pulses at a particular rate, is transmitted along the cell's axon to the axonic endings. One neuron can make many contacts with other neurons, thus forming a complicated neural network. The axonic endings of a neuron may contact another neuron in different ways: a contact with a dendrite, a direct contact with the trunk of a dendritic tree, a contact with the soma of a neuron, or even a contact with the axon itself, as shown in Fig. 2. Some neurons feed back directly to themselves or indirectly via a third or fourth neuron.

When a signal appears at a synapse, a charge is generated. The magnitude of this charge depends on the strength of the incoming signal, which is weighted by a factor associated with this particular input. The weighting factor may not be constant over a long time. In an embryonic neuron (i.e., a neuron under development) the axon is not fully developed and contacts with other neurons have not been fully formed. As the axon grows, however, the neuron recognizes the correct pathway using a mechanism similar to an electrochemical seeking radar to contact a target neuron.

It is believed that a pattern is formed and stored when a group of events is associated in time. Patterns are stored and linked with already stored ones. This type of memory is called associative. Evidence suggests that each neuron can store thousands of patterns in its dendritic trees. During the learning phases (via the input senses), patterns are formed, stored, and associated (and experience is gained). Memories are linked and associated. When a pattern is invoked, it triggers (recalls) other patterns, which then trigger others, so that a cascade of memories (recalled patterns) occurs.

Fig. 2. Types of synapses (axon-dendritic, axon-axonic, axon-soma).

Not all neurons are the same, nor do they perform the same function. There are many categories of neurons with unique and highly specialized functions. Research has shown that some neurons have simple functions, such as contracting and releasing a muscle, while other neurons control the excitatory or inhibitory behavior of other neurons; other types of neurons are specialized receptors (photonic, deflection, pressure, chemical, etc.), and so on.

Neural networks are quite diverse. Nevertheless, all neural networks exhibit certain properties, namely:

a) Many parallel connections exist between many neurons.
b) Many of the parallel connections provide feedback mechanisms to other neurons and to themselves.
c) Some neurons may excite other neurons while inhibiting the operation of still others.
d) Some parts of the network may be pre-wired, whereas other parts may be evolving or under development.
e) Neural networks are asynchronous in operation.
f) Neural networks have a gross and slow synchronizing mechanism, as slow as the heartbeat, which supports their vital functions.
g) Neural networks execute a program that is fully distributed, not sequentially executed.
h) Neural networks do not have a central processor; processing is distributed.

Biological neural networks are characterized by a hierarchical architecture. Lower neural networks preprocess raw information and pass their outcome to higher levels for higher-level processing. Events in a silicon chip happen in the nanosecond range, whereas neural events happen in the millisecond range.

Researchers estimate that there are 100 billion ($10^{11}$) neurons in the human cerebral cortex. Each neuron may have as many as 1,000 dendrites or more, so that the human cerebral cortex has at least 100,000 billion ($10^{14}$) synapses. Since each neuron can fire about 100 times per second, the brain can thus possibly execute 10,000 trillion ($10^{16}$) synaptic activities per second. This number is even more impressive when we realize that the human brain weighs about 1.5 kg. The energetic efficiency of the brain is about $10^{-16}$ joules per operation per second, whereas the corresponding value for the best computers in use today is about $10^{-6}$ joules per operation per second. The peak theoretical speed of the fastest computers now (around 35 gigaflops, i.e., $35 \times 10^{9}$ floating-point operations per second) is not even close to that of our biological computer, the brain.

ARTIFICIAL NEURAL NETWORKS

An Artificial Neural Network (ANN) is a massively parallel distributed processor, made up of simple processing units, that is able to acquire knowledge from its environment through a learning process. ANNs are also information processing systems that emulate biological neural networks: they are inspired by the ability of the human brain to learn from observation and to generalize by abstraction. An ANN may be thought of as a sophisticated signal processor, in which the strength of each synapse (i.e., the synaptic weight) and the bias and threshold values of each neuron at steady state constitute the network's program. Thus every neuron takes part in the massively parallel program execution.

The architecture of an ANN defines the network structure, that is, the number of artificial neurons in the network and their interconnectivity. The neurodynamics of an ANN defines how it learns, recalls, associates, and continuously compares new information with existing knowledge, as well as how it classifies new information and how it develops new classifications if necessary.

Basic models of a neuron

Several basic models that approximate the behavior of a biological neuron in a simplified manner are considered.

a) Linear Neuron

A general linear neuron can be modeled as shown in Fig. 3. Linear neurons are mostly used in the output layer of artificial neural networks.

Fig. 3. Linear neuron: (a) model, (b) symbol.

This k-th linear neuron has n inputs and one output, where $\mathbf{v} = [v_1 \cdots v_n]^T$ is the vector of inputs, which are signals coming from other neurons, $\mathbf{w}_k = [w_{k1} \cdots w_{kn}]^T$ is the vector of weights representing the corresponding synapse strengths, and $b_k$ is the bias or offset term. The output signal $z_k$ is the activation potential or induced local field, given by

$$ z_k = b_k + \mathbf{v}^T \mathbf{w}_k \qquad (1) $$
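As a minimal illustration, eq. (1) can be evaluated in a few lines of Python (NumPy assumed; the helper name `linear_neuron` is only illustrative):

```python
import numpy as np

def linear_neuron(v, w, b):
    """Linear neuron, eq. (1): z_k = b_k + v^T w_k."""
    return b + np.dot(v, w)

v = np.array([0.5, -1.0, 2.0])   # inputs (signals from other neurons)
w = np.array([0.1, 0.4, -0.3])   # synaptic weights w_k
b = 0.2                          # bias (offset) term b_k
print(linear_neuron(v, w, b))    # 0.2 + (0.05 - 0.4 - 0.6) = -0.75
```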

b) Inner-Product Nonlinear Neuron

The most popular nonlinear neuron, used mainly in the hidden layers of multiple-layer perceptrons, is the inner-product nonlinear neuron, whose model is shown in Fig. 4.

Fig. 4. Inner-product nonlinear neuron: (a) model, (b) symbol.

This inner-product nonlinear k-th neuron has n inputs and one output, where $\mathbf{v} = [v_1 \cdots v_n]^T$ is the vector of inputs, which are signals coming from other neurons, $\mathbf{w}_k = [w_{k1} \cdots w_{kn}]^T$ is the vector of weights representing the corresponding synapse strengths, $b_k$ is the bias or offset term, and $s_k$ is the activation potential or induced local field, given by

$$ s_k = b_k + \mathbf{v}^T \mathbf{w}_k \qquad (2) $$

and $z_k$ is the output signal of neuron k, expressed as

$$ z_k = \varphi(s_k) \qquad (3) $$

The purpose of the nonlinear function $\varphi(s_k)$, called the activation function or squashing function, is to ensure that the neuron's response is bounded; that is, the actual response of the neuron is conditioned, or damped, as a result of large or small activating stimuli, and is thus controllable. In the biological world, conditioning of stimuli is continuously done by all sensory inputs. Three popular nonlinearities for inner-product neurons are the sigmoid, the hard limiter, and the hyperbolic tangent. If a sigmoid or logistic function is used (see Fig. 5), then the response of neuron k can be given by

$$ z_k = \varphi(s_k) = \frac{1}{1 + e^{-s_k}} \qquad (4) $$
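Continuing the same illustrative sketch, eqs. (2)–(4) combine an inner product with a squashing nonlinearity; a possible Python rendering:

```python
import numpy as np

def sigmoid(s):
    """Logistic activation function, eq. (4)."""
    return 1.0 / (1.0 + np.exp(-s))

def inner_product_neuron(v, w, b, phi=sigmoid):
    """Inner-product nonlinear neuron, eqs. (2)-(3):
    s_k = b_k + v^T w_k, then z_k = phi(s_k)."""
    s = b + np.dot(v, w)          # activation potential (induced local field)
    return phi(s)                 # bounded output signal

v = np.array([0.5, -1.0, 2.0])
w = np.array([0.1, 0.4, -0.3])
print(inner_product_neuron(v, w, 0.2))   # sigmoid(-0.75) ~= 0.32
```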

Fig. 5. Sigmoid activation function.

The sigmoid function is the most popular nonlinear function for ANNs because it is easy to implement and is continuously differentiable. It can be shown that the sigmoid function can be approximated by the linearization

$$ z_k = \varphi(s_k) = \frac{1}{1 + e^{-s_k}} \approx \frac{1}{2} + \frac{s_k}{4} \qquad \text{for } |s_k| \ll 1 \qquad (5) $$

The sigmoid function and its linear approximation are compared in Fig. 6a.

Fig. 6. Sigmoid activation function: (a) compared with its linear approximation; (b) using different values of the slope parameter a.
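A quick numerical check of eq. (5), taking it as the standard first-order expansion of the logistic function about $s_k = 0$:

```python
import numpy as np

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

# Compare the sigmoid with its linearization 1/2 + s/4 near the origin, eq. (5)
for s in (-0.5, -0.1, 0.0, 0.1, 0.5):
    print(f"s = {s:+.1f}   sigmoid = {sigmoid(s):.4f}   1/2 + s/4 = {0.5 + s/4:.4f}")
```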

A slope parameter a for the sigmoid function can be introduced as follows:

$$ z_k = \varphi(s_k) = \frac{1}{1 + e^{-a s_k}} \qquad (6) $$

By varying a we can control the slope of the sigmoid, and therefore the amount of nonlinearity, as illustrated in Fig. 6b. Taking the Taylor series expansion of the exponential function,

$$ z_k = \varphi(s_k) = \frac{1}{1 + e^{-s_k}} = \frac{1}{1 + \left( 1 - s_k + \frac{s_k^2}{2!} - \frac{s_k^3}{3!} + \cdots \right)} \qquad \text{for } s_k \text{ small} \qquad (7) $$

If a hyperbolic tangent is used, the response of the neuron is

$$ z_k = \varphi(s_k) = \tanh(s_k) = \frac{e^{s_k} - e^{-s_k}}{e^{s_k} + e^{-s_k}} \qquad (8) $$

The hyperbolic tangent activation function is illustrated in Fig. 7, considering its behavior for different slopes. On the other hand, the hyperbolic tangent activation function exhibits a better behavior in the linear region, which is clear from its Taylor series expansion

$$ z_k = \varphi(s_k) = \tanh(s_k) = s_k - \frac{s_k^3}{3} + \frac{2 s_k^5}{15} - \frac{17 s_k^7}{315} + \cdots \qquad \text{for } s_k \text{ small} \qquad (9) $$

Fig. 7. Hyperbolic tangent activation function: (a) compared with its linear approximation; (b) using different values of the slope parameter a.

Since each neuron in a biological neural network is different and is connected in a particular manner, the vector of inputs, the vector of weights, the bias, and the nonlinear function can be different for each neuron in an ANN. In most practical cases, however, an ANN uses the same nonlinear function at each layer of neurons.
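The effect of the slope parameter in eq. (6) and the accuracy of the truncated series in eq. (9) can both be checked numerically; a short sketch:

```python
import numpy as np

def sigmoid_a(s, a=1.0):
    """Sigmoid with slope parameter a, eq. (6)."""
    return 1.0 / (1.0 + np.exp(-a * s))

s = 0.5
for a in (0.5, 1.0, 2.0):        # larger a -> steeper, more nonlinear sigmoid
    print(f"a = {a}: phi({s}) = {sigmoid_a(s, a):.4f}")

# tanh vs. its truncated Taylor series, eq. (9), for a small argument
print(np.tanh(s), s - s**3/3 + 2*s**5/15 - 17*s**7/315)
```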

c) Euclidean Distance Neuron

Euclidean distance neurons, shown in Fig. 8, are nonlinear neurons used to implement Radial Basis Function (RBF) ANNs.

Fig. 8. Euclidean distance neuron: (a) model, (b) symbol.

This Euclidean distance k-th neuron has n inputs and one output, where $\mathbf{v} = [v_1 \cdots v_n]^T$ is the vector of inputs, which are signals coming from other neurons, $\boldsymbol{\theta}_k = [\theta_{k1} \cdots \theta_{kn}]^T$ is the k-th center vector with respect to which the Euclidean distance is measured, and $s_k$ is the activation potential or induced local field, given by

$$ s_k = \left\| \mathbf{v} - \boldsymbol{\theta}_k \right\| \qquad (10) $$

and $z_k$ is the output signal of neuron k, expressed as

$$ z_k = \varphi(s_k) \qquad (11) $$

The following nonlinear functions are of particular interest for RBF networks: multiquadratics, inverse multiquadratics, and Gaussian functions. If a Gaussian function is used,

$$ z_k = \varphi(s_k) = e^{-(s_k/\sigma)^2} \qquad \text{for some } \sigma > 0 \qquad (12) $$

Gaussian activation functions for several values of σ are illustrated in Fig. 9 (a code sketch of this neuron follows the list below).

General Characteristics of ANNs

Neural networks are characterized by a number of general features:

a) Collective and synergistic computation (or neurocomputing): the program is executed collectively and synergistically; operations are decentralized.
b) Robustness: operation is insensitive to scattered failures and to partial inputs or inputs with inaccuracies.
c) Learning: the network makes associations automatically; it creates its program during learning; the network adapts with or without a teacher (no programmer intervention).
d) Asynchronous operation: biological neural networks have no explicit clock to synchronize their operation. Nevertheless, a number of ANNs require a clock.
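As referenced above, the sketch below renders the Euclidean distance neuron of eqs. (10)–(12) in Python, assuming the Gaussian form $e^{-(s_k/\sigma)^2}$ as reconstructed in eq. (12):

```python
import numpy as np

def euclidean_distance_neuron(v, theta, sigma=1.0):
    """Euclidean distance neuron with Gaussian activation, eqs. (10)-(12):
    s_k = ||v - theta_k||, then z_k = exp(-(s_k / sigma)^2)."""
    s = np.linalg.norm(v - theta)        # distance to the center vector theta_k
    return np.exp(-(s / sigma) ** 2)

v = np.array([1.0, 2.0])
theta = np.array([0.0, 1.5])             # center vector theta_k
print(euclidean_distance_neuron(v, theta))   # exp(-1.25) ~= 0.287
```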

Fig. 9. Gaussian activation function for several values of σ.

Artificial Neural Network Topologies

An ANN configuration is also called a paradigm. It is characterized by a certain topology, a specific neuron model, and sometimes a particular learning strategy. Some of the most used topologies are illustrated in Fig. 10, where each circle represents a single neuron.

The most widely used paradigm is the multilayer feedforward perceptron (MLP). An MLP is formed by many basic neurons connected in a feedforward topology, with one or more hidden layers. The most common training algorithms for the MLP are the Delta rule and Back Propagation. The sigmoid is the most common nonlinearity for the MLP, and it is usually used only for the hidden neurons (the input and output layers are linear). The neurons in the input layer don't have any synaptic weights or bias elements (they are just connecting nodes). A two-layer perceptron (2LP) consists of only an input layer and an output layer. It can be shown that a 2LP actually implements a linear transformation on the input parameter space. Another popular paradigm, not illustrated in Fig. 10, is the Recurrent Neural Network, which employs a feedback layer and delay units. It is widely used in time-domain applications.

In most practical applications, input and output scaling is applied to the signals in the ANN. This is done in order to control the relative importance of the different input parameters and to define a suitable dynamic range for the region of interest. This is also very important to improve the numerical performance of the training algorithms.
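As a sketch of the MLP paradigm just described (linear input layer, sigmoid hidden layer, linear output layer), a forward pass through a small network might look as follows; the weights here are arbitrary, since training is discussed in the next section:

```python
import numpy as np

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

def mlp_forward(x, W1, b1, W2, b2):
    """Forward pass of a feedforward MLP with one hidden layer:
    sigmoid hidden neurons, linear output neurons."""
    h = sigmoid(W1 @ x + b1)     # hidden layer of inner-product neurons
    return W2 @ h + b2           # linear output layer

rng = np.random.default_rng(0)
x = np.array([0.2, -0.7])                       # 2 inputs
W1, b1 = rng.normal(size=(3, 2)), np.zeros(3)   # 3 hidden neurons
W2, b2 = rng.normal(size=(1, 3)), np.zeros(1)   # 1 output neuron
print(mlp_forward(x, W1, b1, W2, b2))
```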

Fig. 10. Typical topologies for ANNs: multilayer perceptron (MLP), multilayer cooperative/competitive, bilayer feedforward/backward, and monolayer heterofeedback.

Learning in Artificial Neural Networks

Learning (or training) is the process by which an ANN adapts itself to a stimulus and eventually (after making the proper adjustments to its synaptic weights and biases) produces the desired response. When the actual output response is the same as the desired one, the network has completed the learning process; in other words, it has acquired knowledge. Different learning techniques suit different artificial neural networks:

a) Supervised learning. During the training session of a neural network, an input stimulus is applied that results in an output response. This response is compared with an a priori desired output signal, the target response. If the actual response differs from the target response, an error signal is generated, which is then used to calculate the adjustment that should be made to the network's synaptic weights and biases, so that the actual response matches the target output (a minimal sketch of this scheme follows the list below).

b) Unsupervised learning. During the training session, the ANN receives many different excitations, or input patterns, and it organizes the patterns into categories. When a stimulus is later applied, the ANN provides an output response indicating the class to which the stimulus belongs. If a class cannot be found for the input stimulus, a new class is generated. Even though unsupervised learning does not require a target response, it requires guidelines or selection criteria to determine how it will form groups. This kind of training is extensively used in pattern recognition.

c) Competitive learning. In this scheme, several neurons are at the output. When an input stimulus is applied, each output neuron competes with the others to produce the signal closest to the target. This output then becomes the dominant one, and the other outputs cease producing a signal for that stimulus. For another stimulus, another output neuron becomes the dominant one. Competitive learning can be viewed as a random specialization process, which may not always be desirable.

d) Hebbian learning. In this technique the synaptic weight between two neurons is modified according to the degree of correlated activity between input and output. It is inspired by observations of metabolic changes in biological neural networks.
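As referenced in item a), the following minimal sketch illustrates supervised learning on a single sigmoid neuron using a delta-rule-style gradient update (the learning rate and iteration count are arbitrary choices for the example):

```python
import numpy as np

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

def delta_rule_step(v, target, w, b, eta=0.5):
    """One supervised-learning step: compare the actual response with the
    target response, then adjust the weights and bias to reduce the error."""
    z = sigmoid(b + v @ w)        # actual output response
    e = target - z                # error signal
    grad = e * z * (1.0 - z)      # error scaled by the sigmoid derivative
    return w + eta * grad * v, b + eta * grad

v, target = np.array([1.0, -0.5]), 0.9
w, b = np.zeros(2), 0.0
for _ in range(200):              # training session on a single stimulus
    w, b = delta_rule_step(v, target, w, b)
print(sigmoid(b + v @ w))         # actual response approaches the target 0.9
```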

It is possible for an ANN to modify its own topology during learning, which is motivated by the fact that neurons in the human brain can die and new synaptic connections can grow.

The problem of training an ANN can be formulated as an optimization problem, where the ANN parameters (weights, biases, thresholds, etc.) are to be found so that the ANN response matches the desired response. Some of the most used optimization techniques for ANN training are the Delta rule, Boltzmann's algorithm, the back-propagation learning algorithm, simulated annealing, and Markovian techniques.

SUGGESTED REFERENCES

S. Haykin, Neural Networks: A Comprehensive Foundation. Upper Saddle River, NJ: Prentice Hall, 1999.

S.V. Kartalopoulos, Understanding Neural Networks and Fuzzy Logic. New York, NY: IEEE Press, 1996.