Self Organization Map: Learning without Examples. 15/04/2015

Learning Objectives

Self Organization Map: Learning without Examples
1. Introduction
2. MAXNET
3. Clustering
4. Feature Map
5. Self-Organizing Feature Map
6. Conclusion

Introduction

1. Learning without examples.
2. Data are input to the system one by one.
3. The data are mapped into a one- or more-dimensional space.
4. Competitive learning.

Hamming Distance

1. Define the Hamming distance HD between two binary codes A and B of the same length as the number of places in which a_i and b_i differ.
2. Example: two bipolar codes A and B that disagree in exactly three places have HD(A, B) = 3; a sketch follows below.
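A minimal sketch of the definition in Python; the two 5-bit codes A and B here are hypothetical examples, chosen only so that they differ in exactly three places:

```python
import numpy as np

def hamming_distance(a, b):
    """Count the places in which two equal-length bipolar (+1/-1) codes differ."""
    a, b = np.asarray(a), np.asarray(b)
    return int(np.sum(a != b))

# Hypothetical bipolar codes differing in exactly three places:
A = [1, -1,  1, 1, -1]
B = [1,  1, -1, 1,  1]
print(hamming_distance(A, B))  # -> 3
```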

3. Alternatively, HD can be defined as the lowest number of edges that must be traversed between the two relevant codes.
4. If A and B have bipolar binary components, then their scalar product is

   A·B = (n − HD(A, B)) − HD(A, B) = n − 2·HD(A, B),

   since the n − HD(A, B) agreeing bits each contribute +1 and the HD(A, B) differing bits each contribute −1.

MAXNET

1. For a two-layer classifier of binary bipolar vectors with p classes and p output neurons, the strongest response of a neuron indicates the minimum HD value between the input and the category this neuron represents.
2. The MAXNET operates to suppress values at the output, rather than pass through the output values of the Hamming network.
3. For the Hamming net in the figure, we have input vector X, p classes => p output neurons, and output vector Y = [y_1, ..., y_p].
[Figure: the two-layer Hamming network followed by the MAXNET layer.]
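The scalar-product identity in point 4 inverts directly to HD = (n − A·B)/2; a quick check with the same hypothetical codes:

```python
import numpy as np

def hd_from_dot(a, b):
    """Invert A.B = n - 2*HD(A, B) for bipolar codes: HD = (n - A.B) / 2."""
    n = len(a)
    return (n - int(np.dot(a, b))) // 2

A = [1, -1,  1, 1, -1]
B = [1,  1, -1, 1,  1]
print(hd_from_dot(A, B))  # -> 3, agreeing with the direct count
```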

4. For any output neuron i, i = 1, ..., p, let W_i = [w_i1, w_i2, ..., w_in] be the weights between input X and output neuron i.
5. Also assume that for each class i one has the prototype vector S^(i) as the standard to be matched.
6. For classifying p classes, one can say the m-th output is 1 if and only if X = S^(m). The unscaled outputs of the classifier are X·S^(1), X·S^(2), ..., X·S^(p). So when X = S^(m), the m-th output is n and the other outputs are smaller than n.
7. Since X·S^(m) = (n − HD(X, S^(m))) − HD(X, S^(m)),

   ½ X·S^(m) = n/2 − HD(X, S^(m)).

   So the weight matrix is W_H = ½ S:

   W_H = ½ [ s_1^(1)  s_2^(1)  ...  s_n^(1)
             s_1^(2)  s_2^(2)  ...  s_n^(2)
             ...
             s_1^(p)  s_2^(p)  ...  s_n^(p) ]

8. By giving a fixed bias n/2 to the input,

   net_m = ½ X·S^(m) + n/2,  m = 1, ..., p,

   or equivalently net_m = n − HD(X, S^(m)).
9. To scale the outputs down from the range 0..n to 0..1, one can apply the transfer function f(net_m) = net_m / n for m = 1, ..., p.
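A sketch of this first (Hamming) layer under the definitions above; `prototypes` is the p × n matrix whose rows are the S^(i):

```python
import numpy as np

def hamming_layer(X, prototypes):
    """net_m = 0.5 * X . S(m) + n/2, which equals n - HD(X, S(m));
    the transfer function f(net) = net / n scales the scores into [0, 1]."""
    S = np.asarray(prototypes, dtype=float)
    X = np.asarray(X, dtype=float)
    n = S.shape[1]
    W_H = 0.5 * S               # weight matrix W_H = (1/2) S
    net = W_H @ X + n / 2       # one matching score per class
    return net / n
```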

10. So the node with the highest output is the node with the smallest HD between the input and the prototype vectors S^(1), ..., S^(p); for that node f(net_m) = 1 when the input matches its prototype exactly, and for the other nodes f(net_m) < 1.
11. MAXNET is employed as a second layer only for the cases where an enhancement of the initial dominant response of the m-th node is required. I.e., the purpose of MAXNET is to let max{y_1, ..., y_p} equal 1 and let the others equal 0.
[Figure: the MAXNET layer with mutually inhibitory lateral connections.]
12. To achieve this, one can let

    y_i^(k+1) = f( y_i^(k) − ε Σ_{j≠i} y_j^(k) ),  i = 1, ..., p; j = 1, ..., p; j ≠ i,

    where each y_i is bounded by 0 ≤ y_i ≤ 1 (the outputs of the Hamming net), and at convergence each y_i can only be 1 or 0.
13. So ε is bounded by 0 < ε < 1/p, where ε is the lateral interaction coefficient, and the (p × p) lateral weight matrix is

    M = [  1  −ε  ...  −ε
          −ε   1  ...  −ε
          ...
          −ε  −ε  ...   1 ]

    with net = M y.
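A sketch of the MAXNET recurrence of point 12, with the bound on ε from point 13 checked explicitly:

```python
import numpy as np

def maxnet(y0, eps, max_iters=1000):
    """Iterate y <- f(M y), where M has 1 on the diagonal and -eps elsewhere,
    and f(net) = net for net > 0 and 0 otherwise, until one node survives."""
    y = np.asarray(y0, dtype=float).copy()
    p = len(y)
    assert 0 < eps < 1 / p, "lateral interaction coefficient must satisfy 0 < eps < 1/p"
    M = (1 + eps) * np.eye(p) - eps * np.ones((p, p))
    for _ in range(max_iters):
        y = np.maximum(M @ y, 0.0)
        if np.count_nonzero(y) <= 1:   # only the winner remains positive
            break
    return y
```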

14. So the transfer function is

    f(net) = net,  net > 0
    f(net) = 0,    net ≤ 0

Example: a Hamming net for classifying C, I, T

To have a Hamming net for classifying the characters C, I, and T on a 3×3 grid, take the bipolar prototypes (read row by row):

    S^(1) = [ 1  1  1  1 −1 −1  1  1  1]   (C)
    S^(2) = [−1  1 −1 −1  1 −1 −1  1 −1]   (I)
    S^(3) = [ 1  1  1 −1  1 −1 −1  1 −1]   (T)

So W_H = ½ S, with the three prototypes as rows, and net_m = ½ X·S^(m) + n/2 with n = 9. For a distorted test input X, at Hamming distances 2, 4, and 6 from the three prototypes respectively, this gives

    net = [7  5  3],

and with f(net_m) = net_m / 9,

    Y^0 = [7/9  5/9  3/9].

Input Y^0 to MAXNET and select ε = 0.2 < 1/3 (= 1/p). So

    M = [  1   −0.2  −0.2
          −0.2   1   −0.2
          −0.2  −0.2   1  ]

and iterating y^(k+1) = f(M y^(k)) gives (values rounded to three decimals):

    k = 0:  y = [0.777  0.555  0.333]
    k = 1:  y = [0.600  0.333  0.067]
    k = 2:  y = [0.520  0.200  0]
    k = 3:  y = [0.480  0.096  0]
    k = 4:  y = [0.461  0      0]

After four recursions only node 1 remains active, so the input is assigned to class C, the prototype with the smallest Hamming distance.
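Putting the pieces together for the C/I/T example. The distorted test pattern itself is not given above, so the `X` below is a hypothetical input constructed to yield the same scores (HD = 2, 4, 6 to C, I, T, hence net = [7 5 3]); everything else follows the slides:

```python
import numpy as np

S = np.array([[ 1, 1,  1,  1, -1, -1,  1, 1,  1],   # C
              [-1, 1, -1, -1,  1, -1, -1, 1, -1],   # I
              [ 1, 1,  1, -1,  1, -1, -1, 1, -1]])  # T
n, p, eps = 9, 3, 0.2

# Hypothetical distorted 'C' at HD 2, 4, 6 from the three prototypes:
X = np.array([-1, 1, -1, 1, -1, -1, 1, 1, 1])

net = 0.5 * (S @ X) + n / 2        # -> [7. 5. 3.]
y = net / n                        # -> [0.778 0.556 0.333]

M = (1 + eps) * np.eye(p) - eps * np.ones((p, p))
k = 0
while np.count_nonzero(y) > 1:     # MAXNET recursions
    y = np.maximum(M @ y, 0.0)
    k += 1
print(k, y)                        # a few steps to roughly [0.46 0 0]: class C
```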

Summary: the Hamming network only tells which class the input most resembles; it does not restore the distorted pattern.

Clustering: Unsupervised Learning

1. Introduction:
   a. to categorize or cluster data;
   b. grouping similar objects and separating dissimilar ones.
2. Assume the pattern set {X_1, X_2, ..., X_N} is submitted, and the decision function required to identify possible clusters is to be determined by similarity rules.
[Figure: clusters of patterns in the input space.]
   Euclidean distance: ||X_i − X_j|| = sqrt( (X_i − X_j)ᵀ (X_i − X_j) )
   Angle similarity:   cos ψ = X_iᵀ X_j / (||X_i|| ||X_j||)
3. Winner-take-all learning: assume the input vectors are to be classified into one of a specified number p of categories, according to the clusters detected in the training set {X_1, ..., X_N}.
[Figure: the single-layer winner-take-all network.]
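The two similarity rules above, sketched directly:

```python
import numpy as np

def euclidean(xi, xj):
    """||Xi - Xj|| = sqrt((Xi - Xj)^T (Xi - Xj))"""
    d = np.asarray(xi, dtype=float) - np.asarray(xj, dtype=float)
    return float(np.sqrt(d @ d))

def cos_psi(xi, xj):
    """cos(psi) = Xi^T Xj / (||Xi|| ||Xj||)"""
    xi, xj = np.asarray(xi, dtype=float), np.asarray(xj, dtype=float)
    return float(xi @ xj / (np.linalg.norm(xi) * np.linalg.norm(xj)))
```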

Kohonen network: Y = f(W X), for input vectors of size n and p output neurons, with weight rows

    w_i = [w_i1, ..., w_in],  i = 1, ..., p.

Prior to the learning, normalization of all weight vectors is required:

    ŵ_i = w_i / ||w_i||.

The meaning of training is to find the ŵ_m such that

    ||X − ŵ_m|| = min_{i=1,...,p} ||X − ŵ_i||,

i.e., the m-th neuron, with vector ŵ_m, is the closest approximation of the current X. From

    ||X − ŵ_i||² = ||X||² − 2 ŵ_iᵀ X + ||ŵ_i||²

and ||ŵ_i|| = 1 for all i, minimizing ||X − ŵ_i|| over i is equivalent to maximizing the scalar product:

    min_{i=1,...,p} ||X − ŵ_i||  ⟺  ŵ_mᵀ X = max_{i=1,...,p} ŵ_iᵀ X.

So the m-th neuron is the winning neuron: it has the largest value of net_i = ŵ_iᵀ X, i = 1, ..., p. Thus ŵ_m should be adjusted so that ||X − ŵ_m|| is reduced, and ||X − ŵ_m|| can be reduced along the negative gradient direction:

    ∇_{ŵ_m} ||X − ŵ_m||² = −2 (X − ŵ_m),

i.e., increase ŵ_m in the (X − ŵ_m) direction.
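A sketch of the winner search, relying on the equivalence just derived (with unit-length rows, the largest dot product picks the nearest prototype):

```python
import numpy as np

def normalize_rows(W):
    """w_hat_i = w_i / ||w_i||, required before training starts."""
    return W / np.linalg.norm(W, axis=1, keepdims=True)

def find_winner(W_hat, x):
    """With unit-length rows: argmax_i w_hat_i . x == argmin_i ||x - w_hat_i||."""
    return int(np.argmax(W_hat @ x))
```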

Since each X calls for just a single step, we want only a fraction of (X − ŵ_m); thus for the winning neuron m:

    Δŵ_m = α (X − ŵ_m),  0.1 < α < 0.7,

and for the other neurons: Δŵ_i = 0. In general:

    Δw_m = α (X − w_m),

where m is the winning neuron and α is the learning constant.

4. Geometrical interpretation: with a normalized input vector X and weights ŵ_i, the winner satisfies

    ŵ_mᵀ X = max_{i=1,...,p} ŵ_iᵀ X.

So Δŵ_m = α (X − ŵ_m), and a new vector

    ŵ_m' = ŵ_m + α (X − ŵ_m)

is created. Thus the weight adjustment is mainly a rotation of the weight vector toward the input vector, without a significant length change. ŵ_m' is no longer of unit length, so in the next training stage it must be normalized again. Each ŵ_i ends up as a vector pointing to the center of gravity of its cluster on the unit sphere.
[Figure: rotation of the winning weight vector toward X on the unit sphere.]
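One winner-take-all update step as described above; a sketch assuming unit-length weight rows and a learning constant in the stated range:

```python
import numpy as np

def wta_step(W_hat, x, alpha=0.4):
    """Move only the winning weight vector a fraction alpha toward x
    (a rotation with little length change), then renormalize it."""
    m = int(np.argmax(W_hat @ x))            # winning neuron
    W_hat[m] += alpha * (x - W_hat[m])       # delta_w = alpha (x - w_hat_m)
    W_hat[m] /= np.linalg.norm(W_hat[m])     # w' is not unit length; renormalize
    return W_hat
```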

5. In the case where some patterns are of known class, apply

    Δw_m = α (X − w_m), with α > 0 for the correct node and α < 0 otherwise.

This will accelerate the learning process significantly.
6. Another modification is to adjust the weights for both the winner and the losers: leaky competitive learning.
7. Recall Y = f(W X).
8. Initialization of weights: randomly choose weights from U(0, 1), or use the convex combination method, starting every weight vector at [1/√n, ..., 1/√n], i = 1, ..., p.

Feature Map

1. Transform from a high-dimensional pattern space to a low-dimensional feature space.
2. Feature extraction falls into two categories, natural structure and no natural structure, depending on whether the pattern is similar to human perception or not.
[Figure: mapping from pattern space to feature space.]
3. Another important aspect is to represent the feature as naturally as possible.
4. A self-organizing neural array can map features X from the pattern space.
5. Use a one-dimensional array or a two-dimensional array.
6. Example: X: input vector; m: neuron; w_mj: weight from input x_j to neuron m; y_m: output of neuron m.
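A sketch of the two initialization schemes in point 8; the equal-component start (every weight 1/√n) is one common reading of the convex combination method:

```python
import numpy as np

def init_weights(p, n, convex=False, rng=None):
    """Either U(0,1) random weights, or the convex-combination start in which
    every component of every weight vector equals 1/sqrt(n)."""
    if convex:
        return np.full((p, n), 1.0 / np.sqrt(n))
    rng = rng or np.random.default_rng()
    return rng.uniform(0.0, 1.0, size=(p, n))
```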

7. So define y_m = f( S(X, w_m) ), where S(X, w_m) means the similarity between X and w_m.
8. Thus, (b) is the result of (a). [Figure: pattern space (a) mapped onto the neuron array (b).]

Self-Organizing Feature Map

1. One-dimensional mapping: for a set of patterns X_i, i = 1, 2, ..., use a linear array of neurons 1, 2, 3, .... For each X_i the responding cell m is the one with

    y_m(X_i) = max_k y_k(X_i),  k = 1, 2, 3, ....

2. Thus, this learning is to find the best-matching neuron cells, which can activate their spatial neighbors to react to the same input X.
3. Or, to find c such that

    ||X − w_c|| = min_m ||X − w_m||.

4. In case of two winners, choose the lower index m.
5. If c is the winning neuron, then define N_c as the neighborhood around c; N_c keeps changing as learning goes on.
6. Then (see the sketch below)

    Δw_mj = α (x_j − w_mj)  if m is in the neighborhood N_c,
    Δw_mj = 0               otherwise.

7. α = α_0 (1 − t/T), where t is the current training iteration and T is the total number of training steps to be done.
8. α starts from α_0 and is decreased until it reaches the value 0.
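A sketch of one training step for the one-dimensional case, following points 3-6; ties go to the lower index because argmin returns the first minimum:

```python
import numpy as np

def sofm_step_1d(W, x, d, alpha):
    """One SOFM step on a linear array: find the best-matching cell c,
    then update every cell in the neighborhood N_c = [c - d, c + d]."""
    c = int(np.argmin(np.linalg.norm(W - x, axis=1)))  # winner (lowest index on ties)
    lo, hi = max(0, c - d), min(len(W), c + d + 1)
    W[lo:hi] += alpha * (x - W[lo:hi])                 # neighbors move with the winner
    return c
```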

9. If a rectangular neighborhood is used, then N_c is

    c_x − d < x < c_x + d,  c_y − d < y < c_y + d,

and as learning continues, d = d_0 (1 − t/T): the radius is decreased from its initial value d_0.

Self-Organizing Feature Map: example

10. Example:
    a. Input patterns are chosen randomly from a uniform distribution.
    b. The data employed in the experiment comprised points distributed uniformly over the bipolar square [−1, 1] × [−1, 1].
    c. The points thus describe a geometrically square topology.
    d. Thus, the initial weights can be plotted as in the next figure.
    e. A connection line connects two adjacent neurons in the competitive layer.
[Figures: the weight vectors of the competitive layer plotted at successive stages of training, unfolding to cover the square.]
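A compact sketch of the experiment in point 10 under stated assumptions: a 10 × 10 competitive layer and 5000 steps (neither is given above), with linear decay for both α and the rectangular radius d:

```python
import numpy as np

rng = np.random.default_rng(0)
grid, T = 10, 5000                      # assumed layer size and training length
alpha0, d0 = 0.7, grid // 2             # initial rate and neighborhood radius

W = rng.uniform(-0.1, 0.1, size=(grid, grid, 2))    # initial weights near the origin
coords = np.stack(np.meshgrid(np.arange(grid), np.arange(grid),
                              indexing="ij"), axis=-1)

for t in range(T):
    x = rng.uniform(-1.0, 1.0, size=2)              # point from the bipolar square
    dist2 = ((W - x) ** 2).sum(axis=-1)
    c = np.unravel_index(np.argmin(dist2), dist2.shape)
    alpha = alpha0 * (1 - t / T)                    # alpha(t) = alpha0 (1 - t/T)
    d = max(1, round(d0 * (1 - t / T)))             # shrinking rectangular radius
    in_Nc = (np.abs(coords - np.array(c)) <= d).all(axis=-1)
    W[in_Nc] += alpha * (x - W[in_Nc])              # update the whole neighborhood N_c

# Plotting rows and columns of W as connected lines at successive stages
# reproduces the slides' figures: the net unfolds to cover the square.
```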