Radial-Basis Function Networks

Rossella Cancelliere

A function is a radial basis function (RBF) if its output depends on (is a non-increasing function of) the distance of the input from a given stored vector. RBFs represent local receptors, as illustrated below, where each point is a stored vector used in one RBF.

In an RBF network, one hidden layer uses neurons with RBF activation functions describing local receptors; one output node is then used to combine linearly the outputs of the hidden neurons.

[Figure: a point P surrounded by three stored vectors with weights w_1, w_2, w_3. The vector P is interpolated using the three vectors; each vector gives a contribution that depends on its weight and on its distance from the point P.]

ARCHITECTURE

One hidden layer with RBF activation functions $\varphi_1, \dots, \varphi_{m_1}$ and an output layer with a linear activation function:

$$y = w_1 \varphi_1(\lVert x - t_1 \rVert) + \dots + w_{m_1} \varphi_{m_1}(\lVert x - t_{m_1} \rVert)$$

where $\lVert x - t \rVert$ is the distance of the input $x = (x_1, \dots, x_m)$ from the vector $t$.
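As a concrete illustration of this architecture, here is a minimal NumPy sketch of the forward pass of an RBF network with Gaussian units (not part of the original slides; the function names and parameter values are illustrative assumptions).

```python
import numpy as np

def rbf_forward(x, centers, sigma, w):
    """Output y = sum_j w_j * phi_j(||x - t_j||) with Gaussian phi_j."""
    dists = np.linalg.norm(x - centers, axis=1)      # ||x - t_j|| for each center
    phi = np.exp(-(dists ** 2) / (2 * sigma ** 2))   # hidden layer: nonlinear, local
    return phi @ w                                   # output layer: linear combination

# Toy usage: 3 centers in R^2 with arbitrary weights
centers = np.array([[0.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
w = np.array([0.5, -0.2, 1.0])
print(rbf_forward(np.array([0.2, 0.8]), centers, sigma=0.7, w=w))
```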

HIDDEN NEURON MODEL

Hidden units use radial basis functions $\varphi_\sigma(\lVert x - t \rVert)$: the output depends on the distance of the input $x$ from the center $t$. Here $t$ is called the center and $\sigma$ is called the spread; center and spread are parameters of the unit.

Hidden Neurons

A hidden neuron is more sensitive to data points near its center. For Gaussian RBFs this sensitivity can be tuned by adjusting the spread $\sigma$: a larger spread implies less sensitivity.

Gaussian RBF

$\varphi$ is centered at $t$; the spread $\sigma$ is a measure of how spread out the curve is. [Figure: two Gaussian curves, one with large $\sigma$ (wide) and one with small $\sigma$ (narrow).]

Interpolation with RBF

The interpolation problem: given a set of $N$ different points $\{x_i \in \mathbb{R}^m,\ i = 1, \dots, N\}$ and a set of real numbers $\{d_i \in \mathbb{R},\ i = 1, \dots, N\}$, find a function $F : \mathbb{R}^m \to \mathbb{R}$ that satisfies the interpolation condition

$$F(x_i) = d_i, \qquad i = 1, \dots, N.$$

If $F(x) = \sum_{i=1}^{N} w_i \, \varphi(\lVert x - x_i \rVert)$, we have

$$\begin{pmatrix} \varphi(\lVert x_1 - x_1 \rVert) & \cdots & \varphi(\lVert x_1 - x_N \rVert) \\ \vdots & & \vdots \\ \varphi(\lVert x_N - x_1 \rVert) & \cdots & \varphi(\lVert x_N - x_N \rVert) \end{pmatrix} \begin{pmatrix} w_1 \\ \vdots \\ w_N \end{pmatrix} = \begin{pmatrix} d_1 \\ \vdots \\ d_N \end{pmatrix}, \qquad \text{i.e. } \Phi w = d.$$
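To make the interpolation condition concrete, the following sketch (mine, not from the slides; the spread value is an assumption) builds the matrix $\Phi$ for a Gaussian $\varphi$ and solves $\Phi w = d$ exactly.

```python
import numpy as np

def gaussian(r, sigma=1.0):
    # phi(r) = exp(-r^2 / (2 sigma^2))
    return np.exp(-(r ** 2) / (2 * sigma ** 2))

def rbf_exact_interpolation(X, d, sigma=1.0):
    """One center per data point: build Phi and solve Phi w = d."""
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # ||x_j - x_i||
    Phi = gaussian(dists, sigma)        # N-by-N interpolation matrix
    return np.linalg.solve(Phi, d)      # nonsingular for distinct points (Micchelli)

# Toy usage: interpolate 5 random 2-D points
X, d = np.random.rand(5, 2), np.random.rand(5)
w = rbf_exact_interpolation(X, d)
Phi = gaussian(np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1))
print(np.allclose(Phi @ w, d))  # the interpolation condition F(x_i) = d_i holds
```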

Types of φ

Micchelli's theorem: let $\{x_i\}_{i=1}^{N}$ be a set of distinct points in $\mathbb{R}^m$. Then the $N$-by-$N$ interpolation matrix $\Phi$, whose $ji$-th element is $\varphi_{ji} = \varphi(\lVert x_j - x_i \rVert)$, is nonsingular.

Multiquadrics: $\varphi(r) = (r^2 + c^2)^{1/2}$, with $c > 0$
Inverse multiquadrics: $\varphi(r) = 1/(r^2 + c^2)^{1/2}$, with $c > 0$
Gaussian functions (most used): $\varphi(r) = \exp\!\left(-\dfrac{r^2}{2\sigma^2}\right)$, with $\sigma > 0$ and $r = \lVert x - t \rVert$

(A small code sketch of these three functions is given after the XOR setup below.)

Example: the XOR problem

Input space: the four points $(0,0)$, $(0,1)$, $(1,0)$, $(1,1)$ in the $(x_1, x_2)$ plane. Output space: the interval from 0 to 1 on the $y$ axis.

Construct an RBF pattern classifier such that:
(0,0) and (1,1) are mapped to 0, class $C_1$
(1,0) and (0,1) are mapped to 1, class $C_2$
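For reference, here is a small sketch (mine, not from the slides) of the three radial function types listed above; the parameter values $c = 1$ and $\sigma = 1$ are arbitrary.

```python
import numpy as np

def multiquadric(r, c=1.0):
    return np.sqrt(r ** 2 + c ** 2)              # phi(r) = (r^2 + c^2)^(1/2)

def inverse_multiquadric(r, c=1.0):
    return 1.0 / np.sqrt(r ** 2 + c ** 2)        # phi(r) = (r^2 + c^2)^(-1/2)

def gaussian(r, sigma=1.0):
    return np.exp(-(r ** 2) / (2 * sigma ** 2))  # phi(r) = exp(-r^2 / (2 sigma^2))

r = np.linspace(0.0, 3.0, 7)
for phi in (multiquadric, inverse_multiquadric, gaussian):
    print(phi.__name__, np.round(phi(r), 3))
```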

Example: the XOR problem

In the feature (hidden layer) space:

$$\varphi_1(\lVert x - t_1 \rVert) = e^{-\lVert x - t_1 \rVert^2}, \qquad \varphi_2(\lVert x - t_2 \rVert) = e^{-\lVert x - t_2 \rVert^2}$$

with $t_1 = (1,1)$ and $t_2 = (0,0)$.

[Figure: feature space with axes $\varphi_1$ and $\varphi_2$; $(0,0)$ and $(1,1)$ lie near the axes, while $(0,1)$ and $(1,0)$ coincide near $(0.37, 0.37)$; a straight decision boundary separates the two groups.]

When mapped into the feature space $\langle \varphi_1, \varphi_2 \rangle$ (hidden layer), $C_1$ and $C_2$ become linearly separable. So a linear classifier with $\varphi_1(x)$ and $\varphi_2(x)$ as inputs can be used to solve the XOR problem.

RBF network for the XOR problem

$$\varphi_1(\lVert x - t_1 \rVert) = e^{-\lVert x - t_1 \rVert^2} \ \text{ with } t_1 = (1,1), \qquad \varphi_2(\lVert x - t_2 \rVert) = e^{-\lVert x - t_2 \rVert^2} \ \text{ with } t_2 = (0,0)$$

$$y = -e^{-\lVert x - t_1 \rVert^2} - e^{-\lVert x - t_2 \rVert^2} + 1$$

If $y > 0$ then class 1, otherwise class 0.
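Using the centers $t_1 = (1,1)$, $t_2 = (0,0)$ and the output expression above, here is a quick check (my own sketch, not from the slides) that the network separates the XOR classes.

```python
import numpy as np

t1, t2 = np.array([1.0, 1.0]), np.array([0.0, 0.0])

def phi(x, t):
    # Gaussian feature: exp(-||x - t||^2)
    return np.exp(-np.sum((x - t) ** 2))

for x, label in [((0, 0), 0), ((1, 1), 0), ((0, 1), 1), ((1, 0), 1)]:
    x = np.array(x, dtype=float)
    y = -phi(x, t1) - phi(x, t2) + 1.0      # RBF network output
    predicted = 1 if y > 0 else 0
    print(x, round(y, 3), predicted, label)  # predicted class matches the target
```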

RBF network parameters

What do we have to learn for an RBF network with a given architecture?
- the centers of the RBF activation functions
- the spreads of the Gaussian activation functions
- the weights from the hidden to the output layer

Different learning algorithms may be used for learning the RBF network parameters. We describe three possible methods for learning centers, spreads and weights.

Learning Algorithm 1

Centers: selected at random; the centers are chosen randomly from the training set.

Spreads: chosen by normalization,

$$\sigma = \frac{\text{maximum distance between any two centers}}{\sqrt{2 \cdot \text{number of centers}}} = \frac{d_{\max}}{\sqrt{2 m_1}}.$$

The activation function of hidden neuron $i$ then becomes

$$\varphi_i(\lVert x - t_i \rVert) = \exp\!\left(-\frac{m_1}{d_{\max}^2} \lVert x - t_i \rVert^2\right).$$
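A short sketch (mine, with toy data) of the normalization rule for the spread; it also checks that, with $\sigma = d_{\max}/\sqrt{2 m_1}$, the two ways of writing the Gaussian activation above coincide.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((100, 2))                                   # toy training inputs
m1 = 10
centers = X[rng.choice(len(X), size=m1, replace=False)]    # centers drawn from the data

# Normalized spread: sigma = d_max / sqrt(2 * m1)
d_max = np.max(np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1))
sigma = d_max / np.sqrt(2 * m1)

# The two equivalent forms of the Gaussian activation, evaluated at one sample point
x = X[0]
r2 = np.sum((x - centers) ** 2, axis=1)
phi_a = np.exp(-(m1 / d_max ** 2) * r2)     # exp(-(m1 / d_max^2) ||x - t_i||^2)
phi_b = np.exp(-r2 / (2 * sigma ** 2))      # exp(-||x - t_i||^2 / (2 sigma^2))
print(sigma, np.allclose(phi_a, phi_b))     # True
```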

Learning Algorithm 1

Weights: computed by means of the pseudo-inverse method. For an example $(x_i, d_i)$ consider the output of the network

$$y(x_i) = w_1 \varphi_1(\lVert x_i - t_1 \rVert) + \dots + w_{m_1} \varphi_{m_1}(\lVert x_i - t_{m_1} \rVert).$$

We would like $y(x_i) = d_i$ for each example, that is

$$w_1 \varphi_1(\lVert x_i - t_1 \rVert) + \dots + w_{m_1} \varphi_{m_1}(\lVert x_i - t_{m_1} \rVert) = d_i.$$

This can be re-written in matrix form for one example,

$$[\varphi_1(\lVert x_i - t_1 \rVert) \ \dots \ \varphi_{m_1}(\lVert x_i - t_{m_1} \rVert)]\,[w_1 \ \dots \ w_{m_1}]^T = d_i,$$

and for all the examples at the same time,

$$\begin{pmatrix} \varphi_1(\lVert x_1 - t_1 \rVert) & \cdots & \varphi_{m_1}(\lVert x_1 - t_{m_1} \rVert) \\ \vdots & & \vdots \\ \varphi_1(\lVert x_N - t_1 \rVert) & \cdots & \varphi_{m_1}(\lVert x_N - t_{m_1} \rVert) \end{pmatrix} [w_1 \ \dots \ w_{m_1}]^T = [d_1 \ \dots \ d_N]^T.$$

Learning Algorithm 1

Let

$$\Phi = \begin{pmatrix} \varphi_1(\lVert x_1 - t_1 \rVert) & \cdots & \varphi_{m_1}(\lVert x_1 - t_{m_1} \rVert) \\ \vdots & & \vdots \\ \varphi_1(\lVert x_N - t_1 \rVert) & \cdots & \varphi_{m_1}(\lVert x_N - t_{m_1} \rVert) \end{pmatrix};$$

then we can write $\Phi\,[w_1 \ \dots \ w_{m_1}]^T = [d_1 \ \dots \ d_N]^T$. If $\Phi^+$ is the pseudo-inverse of the matrix $\Phi$, we obtain the weights using the following formula:

$$[w_1 \ \dots \ w_{m_1}]^T = \Phi^+ [d_1 \ \dots \ d_N]^T.$$

Learning Algorithm 1: summary

1. Choose the centers randomly from the training set.
2. Compute the spread for the RBF functions using the normalization method.
3. Find the weights using the pseudo-inverse method.
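Here is a minimal end-to-end sketch of the summary above (my own code; the data and names are illustrative assumptions): random centers, normalized spread, and weights from NumPy's pseudo-inverse.

```python
import numpy as np

def design_matrix(X, centers, sigma):
    # Phi[i, j] = exp(-||x_i - t_j||^2 / (2 sigma^2))
    dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=-1)
    return np.exp(-(dists ** 2) / (2 * sigma ** 2))

def train_rbf(X, d, m1, rng=np.random.default_rng(0)):
    """Learning Algorithm 1: random centers, normalized spread, pseudo-inverse weights."""
    centers = X[rng.choice(len(X), size=m1, replace=False)]                 # step 1
    d_max = np.max(np.linalg.norm(centers[:, None] - centers[None, :], axis=-1))
    sigma = d_max / np.sqrt(2 * m1)                                         # step 2
    w = np.linalg.pinv(design_matrix(X, centers, sigma)) @ d                # step 3
    return centers, sigma, w

# Toy usage: fit a noisy 1-D sine curve (illustrative data only)
X = np.linspace(0, 2 * np.pi, 50)[:, None]
d = np.sin(X).ravel() + 0.05 * np.random.default_rng(1).standard_normal(50)
centers, sigma, w = train_rbf(X, d, m1=10)
y_hat = design_matrix(X, centers, sigma) @ w
print(np.mean((y_hat - d) ** 2))   # training mean squared error
```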

Learning Algorithm 2: Centers

A clustering algorithm is used for finding the centers:

1. Initialization: $t_k(0)$ random, $k = 1, \dots, m_1$
2. Sampling: draw a sample $x(n)$ from the input space
3. Similarity matching: find the index of the center closest to $x(n)$: $k(x) = \arg\min_k \lVert x(n) - t_k(n) \rVert$
4. Updating: adjust the centers,
$$t_k(n+1) = \begin{cases} t_k(n) + \eta\,[x(n) - t_k(n)] & \text{if } k = k(x) \\ t_k(n) & \text{otherwise} \end{cases}$$
5. Continuation: increment $n$ by 1, go to step 2 and continue until no noticeable changes of the centers occur.

Learning Algorithm 2: summary

Hybrid learning process:
- clustering for finding the centers
- spreads chosen by normalization
- LMS algorithm (see Adaline) for finding the weights
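A compact sketch (mine, with assumed learning rate, epoch count and toy data) of the online clustering step used to place the centers; initializing the centers from randomly chosen training points is one common choice, slightly different from the purely random initialization in step 1.

```python
import numpy as np

def cluster_centers(X, m1, eta=0.1, epochs=20, rng=np.random.default_rng(0)):
    """Online (competitive) clustering for the RBF centers."""
    centers = X[rng.choice(len(X), size=m1, replace=False)].copy()   # initialization
    for _ in range(epochs):
        for x in X[rng.permutation(len(X))]:                          # sampling
            k = np.argmin(np.linalg.norm(x - centers, axis=1))        # similarity matching
            centers[k] += eta * (x - centers[k])                      # update winner only
    return centers

# Toy usage: three well-separated clusters in R^2
X = np.vstack([np.random.randn(50, 2) + c for c in ([0, 0], [5, 5], [0, 5])])
print(cluster_centers(X, m1=3))
```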

Learning Algorithm 3

Apply the gradient descent method for finding centers, spreads and weights, by minimizing the (instantaneous) squared error

$$E = \tfrac{1}{2}\,(y(x) - d)^2.$$

Update rules (a small code sketch of these updates follows below):
- centers: $\Delta t_j = -\eta_{t_j} \dfrac{\partial E}{\partial t_j}$
- spreads: $\Delta \sigma_j = -\eta_{\sigma_j} \dfrac{\partial E}{\partial \sigma_j}$
- weights: $\Delta w_{ij} = -\eta_{w_{ij}} \dfrac{\partial E}{\partial w_{ij}}$

Comparison with FF NN

RBF networks are used for regression and for performing complex (non-linear) pattern classification tasks. Comparison between RBF networks and FF NN:
- Both are examples of non-linear layered feed-forward networks.
- Both are universal approximators.
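Referring to the update rules above, here is a sketch (my own; Gaussian units, one learning rate per parameter group, assumed names) of a single stochastic gradient step for Learning Algorithm 3.

```python
import numpy as np

def rbf_sgd_step(x, d, w, T, sigma, eta_w=0.05, eta_t=0.02, eta_s=0.02):
    """One stochastic gradient step on E = 1/2 (y(x) - d)^2.

    w: (m1,) output weights, T: (m1, dim) centers, sigma: (m1,) spreads.
    """
    diff = x - T                                   # x - t_j, shape (m1, dim)
    sq = np.sum(diff ** 2, axis=1)                 # ||x - t_j||^2
    phi = np.exp(-sq / (2 * sigma ** 2))           # hidden activations
    e = phi @ w - d                                # error y(x) - d

    # Gradients of E with respect to weights, centers and spreads
    grad_w = e * phi
    grad_T = (e * w * phi / sigma ** 2)[:, None] * diff
    grad_s = e * w * phi * sq / sigma ** 3

    return w - eta_w * grad_w, T - eta_t * grad_T, sigma - eta_s * grad_s

# usage: w, T, sigma = rbf_sgd_step(x_sample, d_sample, w, T, sigma)
```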

Comparison with multilayer NN

Architecture: RBF networks have one single hidden layer, while FF NN may have more hidden layers.

Neuron model: in an RBF network the neuron model of the hidden neurons is different from that of the output nodes; typically, in a FF NN, hidden and output neurons share a common neuron model. The hidden layer of an RBF network is non-linear and its output layer is linear, whereas hidden and output layers of a FF NN are usually non-linear.

Comparison with multilayer NN

Activation functions: the argument of the activation function of each hidden neuron in an RBF NN computes the Euclidean distance between the input vector and the center of that unit; the argument of the activation function of each hidden neuron in a FF NN computes the inner product of the input vector and the synaptic weight vector of that neuron.

Approximation: RBF NN using Gaussian functions construct local approximations to non-linear input-output mappings, whereas FF NN construct global approximations to non-linear input-output mappings.

Example: Astronomical image processing

We used RBF neural networks to verify the similarity of real astronomical images to predefined reference profiles: we analyse realistic images and deduce from them a set of parameters able to describe their discrepancy with respect to the ideal, non-aberrated image.

In the ideal case the focal plane image is the Airy pattern (blue line),

$$I(r) \propto \left[\frac{2 J_1(\pi r)}{\pi r}\right]^2,$$

with a prefactor depending on $P_S$, $\lambda$, $\rho_p$ and $R$. Usually the image is perturbed by aberrations associated with the characteristics of real-world instruments (the red line represents the perturbed image).

Example: Astronomical image processing

A realistic image is associated with the Fourier transform of the aperture function,

$$I_{PS}(r, \phi) = \left(\frac{\pi \rho_p^2}{\lambda R}\right)^{\!2} \left| \frac{1}{\pi} \int_0^{2\pi}\!\!\int_0^1 \rho\, e^{i\Phi(\rho,\theta)}\, e^{-i\pi r \rho \cos(\theta - \phi)}\, d\rho\, d\theta \right|^2,$$

which includes the contributions of the classical (Seidel) aberrations,

$$\Phi(\rho, \theta) = \frac{2\pi}{\lambda}\left[ A_s \rho^4 + A_c \rho^3 \cos\theta + A_a \rho^2 \cos^2\theta + A_d \rho^2 + A_t \rho \cos\theta \right],$$

where $A_s$ is the spherical aberration, $A_c$ the coma, $A_a$ the astigmatism, $A_d$ the field curvature (d = defocus) and $A_t$ the distortion (t = tilt).

Example: Astronomical image processing

- The maximum range of variation considered is $\pm 0.3\lambda$ for the training set and $\pm 0.5\lambda$ for the test set.
- To ease the computational task the image is encoded using its first low-order moments,

$$\mu_{nm} = \frac{\iint x^n y^m\, I(x, y)\, dx\, dy}{\sigma_x^n\, \sigma_y^m \iint I(x, y)\, dx\, dy}.$$

Example: Astronomical image processing

We performed this task using a FF neural network (with 60 hidden nodes) and an RBF neural network (with 75 hidden nodes). Usually the former requires fewer internal nodes, but more training iterations, than the latter; here the FF network indeed needed many more training iterations than the 75 required by the RBF network. Both optimised networks provide reasonably good results in terms of approximation of the desired unknown function relating aberrations and moments.
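As an illustration of the moment encoding, here is a small sketch of how the normalized moments of a sampled image could be computed (my own code; the pixel grid, the toy image, the chosen orders, and the use of centroid-relative coordinates are assumptions, not details from the lecture).

```python
import numpy as np

def normalized_moment(I, n, m):
    """mu_nm = sum(x^n y^m I) / (sigma_x^n sigma_y^m sum(I)), on a pixel grid,
    with coordinates taken relative to the intensity centroid (an assumption)."""
    ys, xs = np.mgrid[0:I.shape[0], 0:I.shape[1]].astype(float)
    total = I.sum()
    cx, cy = (xs * I).sum() / total, (ys * I).sum() / total        # centroid
    sx = np.sqrt(((xs - cx) ** 2 * I).sum() / total)               # sigma_x
    sy = np.sqrt(((ys - cy) ** 2 * I).sum() / total)               # sigma_y
    return ((xs - cx) ** n * (ys - cy) ** m * I).sum() / (sx ** n * sy ** m * total)

# Toy image: an off-center Gaussian blob
ys, xs = np.mgrid[0:64, 0:64]
I = np.exp(-((xs - 30) ** 2 + (ys - 36) ** 2) / 50.0)
print([round(normalized_moment(I, n, m), 3) for n, m in [(2, 0), (0, 3), (0, 4)]])
```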

Example: Astronomical image processing

[Figure: performances of the sigmoidal (left) and radial (right) neural networks; from top to bottom, coma, astigmatism, defocus and distortion are shown. The blue (solid) line follows the test set targets, whereas the green dots are the values computed by the networks.]