Artificial Neural Networks (ANN) (Slovene: Umetne nevronske mreže, UNM)


Artificial Neural Networks (ANN)
By the learning strategy they divide into unsupervised (1) and supervised (2):
(1) Kohonen neural networks
(2) neural networks with error back-propagation (error back-propagation NN)
(2.1) counterpropagation neural networks (counterpropagation NN)
(2.2) RBF neural networks (radial basis function)

The beginnings
1943: McCulloch and Pitts, "A Logical Calculus of the Ideas Immanent in Nervous Activity".
1949: Donald Hebb, The Organization of Behavior, proposes Hebb's rule (strengthening of the connection between neighbouring excited neurons; this activity is the basis for learning and memory).
1962: Frank Rosenblatt, in the book Principles of Neurodynamics, develops the first perceptron using the McCulloch-Pitts neuron and Hebb's rule.
1969: Minsky and Papert, in the book Perceptrons, argue why perceptrons have theoretical limitations in solving nonlinear problems.
1982: John Hopfield introduces nonlinearity into the basic neuron function. This removes the shortcomings and opens a new era of research on artificial neural networks.

XOR
Input x1   Input x2   Output
0          0          0
0          1          1
1          0          1
1          1          0
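The table above is why a single perceptron fails on XOR: no single line separates the 1s from the 0s. Adding a hidden layer removes the limitation. Below is a minimal sketch (not from the original slides) of a two-layer threshold network with hand-picked weights that computes XOR; the weight values are illustrative assumptions, one of many valid choices.

```python
def step(z):
    # threshold (McCulloch-Pitts style) activation
    return 1 if z > 0 else 0

def xor_mlp(x1, x2):
    # hidden layer: an OR detector and an AND detector (hand-picked weights)
    h1 = step(1.0 * x1 + 1.0 * x2 - 0.5)   # OR(x1, x2)
    h2 = step(1.0 * x1 + 1.0 * x2 - 1.5)   # AND(x1, x2)
    # output: OR AND NOT AND  ->  XOR
    return step(1.0 * h1 - 1.0 * h2 - 0.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor_mlp(a, b))   # reproduces the XOR truth table
```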

Teuvo Kohonen (born 1934), Finnish academician. Self-organizing maps.
1960: T. Kohonen introduces new concepts into neural computing: associative memory, optimal associative mapping, self-organizing feature maps (SOMs).
Kohonen, T. and Honkela, T. (2007). "Kohonen network", http://www.scholarpedia.org/article/kohonen_network
1986: Rumelhart and co-workers publish a new algorithm for neural networks with error back-propagation.

Nature 323, 533-536 (09 October 1986); doi:10.1038/323533a0
Learning representations by back-propagating errors
DAVID E. RUMELHART, GEOFFREY E. HINTON & RONALD J. WILLIAMS
Institute for Cognitive Science, C-015, University of California, San Diego, La Jolla, California 92093, USA
Department of Computer Science, Carnegie-Mellon University, Pittsburgh, Pennsylvania 15213, USA

We describe a new learning procedure, back-propagation, for networks of neurone-like units. The procedure repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector. As a result of the weight adjustments, internal 'hidden' units which are not part of the input or output come to represent important features of the task domain, and the regularities in the task are captured by the interactions of these units. The ability to create useful new features distinguishes back-propagation from earlier, simpler methods such as the perceptron-convergence procedure [1].

References
1. Rosenblatt, F. Principles of Neurodynamics (Spartan, Washington, DC, 1961).
2. Minsky, M. L. & Papert, S. Perceptrons (MIT, Cambridge, 1969).
3. Le Cun, Y. Proc. Cognitiva 85, 599-604 (1985).
4. Rumelhart, D. E., Hinton, G. E. & Williams, R. J. in Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Vol. 1: Foundations (eds Rumelhart, D. E. & McClelland, J. L.) 318-362 (MIT, Cambridge, 1986).

Representation of neurons
[Figure: input signals X = (x1, x2, ... xj, ... xm) enter through the synapses, or weights w_j, into the body of the neuron; the axon carries the output signal y]

Output signals of neurons: error back-propagation neural networks (perceptron)

Net_j = Σ_{i=1..k} w_ji · x_i

y = f(Net) = 1 / (1 + e^(-a(Net - ϑ)))

[Figure: three shapes of the transfer function y = f(Net): a) threshold (step) function, b) piecewise-linear function, c) sigmoid function, each with threshold ϑ]
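The two formulas above combine into a one-line forward pass for a single neuron. The sketch below is illustrative only; the input, weight, gain a and threshold ϑ (theta) values are made-up assumptions.

```python
import math

def neuron_output(x, w, a=1.0, theta=0.0):
    # Net = sum of w_i * x_i; y = 1 / (1 + exp(-a * (Net - theta)))
    net = sum(wi * xi for wi, xi in zip(w, x))
    return 1.0 / (1.0 + math.exp(-a * (net - theta)))

print(neuron_output([1.0, 0.5], [0.4, -0.2]))
```

With Net = ϑ the sigmoid gives exactly 0.5, the midpoint of its output range.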

Different forms of EBP neural networks
[Figure: Net_j = Σ_{i=1..k} w_ji · x_i; two single-layer networks with inputs x1, x2 and outputs y1, y2, y3, one without and one with bias weights w_jb; and a two-layer network with a hidden layer w^h_ji (6 weights) plus bias, and an output layer w^o_ji (9 weights)]

Correcting the weights in a neural network with error back-propagation

output layer:   δ_j^out = (t_j - y_j^out) · y_j^out · (1 - y_j^out)
hidden layer:   δ_j^hid = (Σ_{k=1..r} δ_k^out · w_kj^out) · y_j^hid · (1 - y_j^hid)
weight update:  Δw_ji^l = η · δ_j^l · out_i^(l-1) + μ · Δw_ji^(l,previous)

Data flow (forward): the inactive input layer only distributes the variables x_si (x_s1, x_s2, bias); 1. transformation from x_si to out_sj^1; 2. transformation from out_sj^1 to out_sk^2; 3. transformation from out_sk^2 to y_sj; with Net_j = Σ_{i=1..k} w_ji · x_i and y = 1 / (1 + e^(-a(Net - ϑ))).

Correction flow (backward): 1. weight corrections in the output layer; 2. weight corrections in the second hidden layer; 3. weight corrections in the first hidden layer; input layer: no corrections.

In the computation of the correction of each weight we take into account data from three layers of neurons: the i-th neuron in the (hidden-1) layer gives the output signal y_i^(hid-1); the j-th neuron in the hidden layer, connected through w_ji, gives the signal y_j^hid; the output neurons, connected through w_1j^out ... w_rj^out, give the signals y_1^out ... y_r^out with the errors δ_1^out ... δ_r^out.

δ_j^hid = (Σ_{k=1..r} δ_k^out · w_kj^out) · y_j^hid · (1 - y_j^hid)
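The update rules above can be sketched end-to-end on the XOR problem from the earlier slide. This is a minimal illustration, not the lecture's implementation: the 2-2-1 architecture, learning rate, epoch count and random seed are assumptions, and the momentum term μ is omitted for brevity.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

random.seed(0)
W1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]  # hidden: 2 inputs + bias
W2 = [random.uniform(-1, 1) for _ in range(3)]                      # output: 2 hidden + bias
eta = 0.5
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in W1]
    y = sigmoid(W2[0] * h[0] + W2[1] * h[1] + W2[2])
    return h, y

for epoch in range(20000):
    for x, t in data:
        h, y = forward(x)
        d_out = (t - y) * y * (1 - y)                 # delta of the output neuron
        for j in range(2):                            # back-propagate through old W2
            d_hid = d_out * W2[j] * h[j] * (1 - h[j]) # delta of hidden neuron j
            W1[j][0] += eta * d_hid * x[0]
            W1[j][1] += eta * d_hid * x[1]
            W1[j][2] += eta * d_hid                   # bias weight
        for j in range(2):
            W2[j] += eta * d_out * h[j]
        W2[2] += eta * d_out                          # bias weight

print([round(forward(x)[1], 2) for x, _ in data])
```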

Kohonen neural network
[Figure: the input object X_s = (x1 ... x8) is presented to the Kohonen network; each neuron carries 8 weight levels, one per input variable; weight level no. 1 ... weight level no. 6; an individual weight is highlighted]


Hexagonal arrangement of neurons
[Figure: inputs x1 ... x4 mapped onto output neurons y1 ... yn arranged on a hexagonal grid; neighbouring cells a, b, c, d]

[Figure: labelled top layer of a Kohonen map: cells carry the class labels A, B and C, some cells are in conflict (A,C or C,B), and the numbered cells show which training objects excite each neuron]

Weight map of the variable x2, the concentration of the pigment
[Figure: map of weight values between 0.08 and 0.99]
Top map with cells containing the information about the quality of the paint (labels A, B and C).
The input object X_s (a paint recipe) is represented by three variables: solvent x1, pigment x2, binder x3.

Counterpropagation neural network
Input (Kohonen) layer: the input object X_s = (x_s1 ... x_s8) excites the neuron W_e.
The position of the excited neuron W_e is projected onto the output (Grossberg) layer.
Output (Grossberg) layer: after the position of W_e is projected, the response T_s = (t_s1 ... t_s4) enters the output Grossberg layer.

Radial Basis Function (RBF) neural networks
[Figure: input layer (x1, x2), 5 RBF nodes plus bias, weights w1 ... w5 and w_bias, output y1]
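A forward pass through such a network is a weighted sum of radially symmetric bumps. The sketch below is illustrative only; the centres, the width σ and the weights are made-up values, and a Gaussian basis function is assumed.

```python
import math

# illustrative RBF network: 2 inputs, 5 RBF nodes, 1 output
centres = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0), (0.5, 0.5)]
sigma = 0.5
weights = [0.2, -0.4, 0.7, 0.1, 0.3]
bias = 0.05

def rbf_output(x):
    y = bias
    for c, w in zip(centres, weights):
        d2 = (x[0] - c[0]) ** 2 + (x[1] - c[1]) ** 2
        y += w * math.exp(-d2 / (2 * sigma ** 2))  # Gaussian radial basis
    return y

print(rbf_output((0.5, 0.5)))
```

Far away from every centre all the Gaussian terms vanish and the output approaches the bias weight alone.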

[Figure: RMSE of the training set and RMSE of the test set versus learning epochs (in thousands, 1 ... 15); the best model is taken at the lowest RMSE of the test set]

Concentration of SO2 [mg/m3]
[Figure: network with input, hidden and output layers in which the output equals the input: 17 input variables are the same 17 output variables used as targets; the output value of the hidden node encodes the data (concentration contours 100 ... 700)]

KOHONEN AND COUNTERPROPAGATION NEURAL NETWORKS
Study literature
1. J. Zupan, J. Gasteiger, Neural Networks in Chemistry and Drug Design, Wiley-VCH, Weinheim, 1999.
2. J. Zupan, Računalniške metode za kemike, DZS, Ljubljana, 1992.
3. Teach/Me software, Hans Lohninger, Springer-Verlag, Berlin, 1999.
4. T. Kohonen, Self-Organizing Maps, Springer-Verlag, Berlin, 1995.
5. R. Hecht-Nielsen, "Counterpropagation networks", Applied Optics, 26, 4979-4984, 1987.
6. R. Hecht-Nielsen, "Applications of counterpropagation networks", Neural Networks, 1, 131-139, 1988.
7. R. Hecht-Nielsen, Neurocomputing, Reading, MA: Addison-Wesley, 1990.
8. J. Zupan, M. Novič, I. Ruisánchez, "Kohonen and counterpropagation artificial neural networks in analytical chemistry: a tutorial", Chemometr. Intell. Lab. Syst., 38, 1-23, 1997.

Contents:
- basic structure of artificial neural networks
- Kohonen artificial neural networks (K-ANN)
- counterpropagation artificial neural networks
- 2-D projection (mapping) with the Kohonen ANN
- classification
- modelling
- examples of use: mapping of IR spectra; models for the prediction of toxicity (log(1/LD50) for Tetrahymena pyriformis) of over 200 hydroxy-substituted aromatic compounds

Artificial neural networks (ANN) as a black box in decision making
input signals → output signals
learning strategies: unsupervised, supervised

ANN as a black box in decision making
Pattern recognition, determination of properties, process control, ...
Input signal: X = (x1, x2, ... xi, ... xm). Output signal: Y = (y1, y2, ... yj, ... yn).
An ANN becomes useful only after we have trained it on our data.

Unsupervised learning
The relation between the object and the target is not defined in advance.
Object: X = (x1, x2, ... xi, ... xm). Target: Y (not known).

Unsupervised learning
Example: the origin of Italian olive oils.
Object: X = (x1, x2, ... x8), 572 samples. Target: Y (not known).
x1 ... x8: % of fatty acids: (1) palmitic, (2) palmitoleic, (3) stearic, (4) oleic, (5) linoleic, (6) arachidic (eicosanoic), (7) linolenic, (8) eicosenoic.

[Figure: top layer of the Kohonen map of the 572 olive oil samples, labelled with the class numbers 1 ... 9]

Class  Region             Number of samples
1      North Apulia        25
2      Calabria            56
3      South Apulia       206
4      Sicily              36
5      Inner Sardinia      65
6      Coastal Sardinia    33
7      East Liguria        50
8      West Liguria        50
9      Umbria              51
                      Σ = 572

J. Zupan, M. Novic, X. Li, J. Gasteiger, Classification of Multicomponent Analytical Data of Olive Oils Using Different Neural Networks, Anal. Chim. Acta, 292 (1994), 219-234.

Supervised learning
The relation between the object and the target is known in advance, which means that for any object we know exactly which target belongs to it, i.e. which target we want to reach.
Object: X = (x1, x2, ... xi, ... xm). Target: Y = (y1, y2, ... yj, ... yn).

Supervised learning
Example: classification of points by the quadrants of the Cartesian coordinate system.
Object: X = (x1, x2), m = 2. Target: Y, either as a class index (n = 1) or as a one-hot vector (n = 4):

X = ( 1,  1)   Y = 1   Y = (1,0,0,0)
X = (-1,  1)   Y = 2   Y = (0,1,0,0)
X = (-1, -1)   Y = 3   Y = (0,0,1,0)
X = ( 1, -1)   Y = 4   Y = (0,0,0,1)
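The mapping from a point to its one-hot target can be written directly. A small illustrative sketch (points lying exactly on an axis are assigned arbitrarily to quadrant 4 here):

```python
def quadrant_onehot(x1, x2):
    # return the one-hot target Y = (y1, y2, y3, y4) for the point's quadrant
    if x1 > 0 and x2 > 0:
        q = 1
    elif x1 < 0 and x2 > 0:
        q = 2
    elif x1 < 0 and x2 < 0:
        q = 3
    else:
        q = 4
    return tuple(1 if i == q else 0 for i in range(1, 5))

print(quadrant_onehot(1, 1))   # (1, 0, 0, 0)
```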

Supervised learning (example, continued)
[Figure: the four quadrants of the (x1, x2) plane labelled Y = 1 ... Y = 4; X = (x1, x2), Y = (y1, y2, y3, y4); Y = 1 → (1,0,0,0), Y = 2 → (0,1,0,0), Y = 3 → (0,0,1,0), Y = 4 → (0,0,0,1)]

Basics of Kohonen artificial neural networks
- organization of the neurons: 1-dimensional (in a row), 2-dimensional (in a plane)
- analogy of the Kohonen ANN with the brain
- learning strategy: "the winner takes all"
- learning-rate parameters
- size and extent of the weight corrections
- shrinking of the neighbourhood
- recognition of the objects from the training set
- prediction of unknown objects
- top layer (TL, "top-layer") with labels
- empty places in the TL
- clusters in the TL
- conflicts in the TL
- weight levels

A neuron is defined as a set of weights.
W_j: the j-th neuron; w_ji: the i-th weight of the j-th neuron.
W_j = [w_ji, i = 1 ... m] = (w_j1, w_j2, w_j3, w_j4, w_j5, w_j6)

Organization of neurons
W_j, j = 1 ... n; w_ji, i = 1 ... m
n: number of neurons; m: dimension of the neurons.
The dimension m of the neurons must equal the dimension of the representation of the objects (the number of independent variables)!

1-dimensional organization of neurons (in a row)
[Figure: the object X = [x1, x2, x3] (m = 3) presented to a row of n = 6 neurons W_1 ... W_6]

2-dimensional organization of neurons (in a plane)
[Figure: the object X = [x1, x2, x3]; j_x = 1 ... 6 (n_x = 6), j_y = 1 ... 4 (n_y = 4); the number of neurons is n = n_x × n_y = 24 (6×4); a neuron is addressed as W_{jx,jy}]

Kohonen neural networks can be used for nonlinear projections of multi-dimensional objects into a two-dimensional space, a plane (mapping).
"Homunculus": in the brain, the areas of neurons that control the movement (motor function) of the individual parts of the human body are organized next to one another, as on a virtual map of the human body. However, the size of an individual area does not correspond to the size of the organ whose movement it controls; the size of the area is proportional to the importance and complexity of its activities.

"homunculus" 6

Learning strategy: "the winner takes all"

Winner = W_win: min(ε_j), j = 1 ... n

ε_j = Σ_{i=1..m} (x_i - w_ji)²

x_i: components of the object vector X; w_ji: components of the weight vector of the j-th neuron; ε_j: the difference between the object and the j-th neuron.
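The winner-takes-all step is a direct minimization of ε_j over the neurons. A minimal sketch; the weight values are made-up illustration numbers:

```python
# find the neuron whose weight vector is closest (squared Euclidean
# distance eps_j) to the object X -- "the winner takes all"
def find_winner(X, W):
    def eps(w):
        return sum((x - wi) ** 2 for x, wi in zip(X, w))
    return min(range(len(W)), key=lambda j: eps(W[j]))

W = [[0.1, 0.9], [0.8, 0.2], [0.5, 0.5]]   # weights of 3 neurons
print(find_winner([0.9, 0.1], W))           # → 1 (W[1] is closest)
```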

Analogy between the biological and the simulated neuron: W_win = W_{jx,jy}
A signal excites one single neuron in a whole bundle of real, biological neurons.
In the computer simulation the excited (winning) neuron W_win is determined by the coordinates of its position in the ANN, W_win = W_{jx,jy}.

CORRECTION: the "winning" neuron + the neighbouring neurons
Learning-rate parameter: η(t, r(t)); t: learning time; r(t): neighbourhood (neighbouring neurons).

w_ji^new = w_ji^old + η · (x_si - w_ji^old)
(i = 1 ... m; j = 1 ... n; s = 1 ... p)

The process in which we correct the winning neuron and the appropriate neighbouring neurons with respect to a single (s-th) object is called one iteration of the learning process. Once all p objects have been sent through the network one after another and the corresponding weight corrections have been made, we say that one learning epoch is completed.
p iterations = 1 epoch

One epoch of learning of the Kohonen network
[Flow diagram: for s = 1: present the object X_s to the NET; determine the winner and its neighbours; correct W^old_neighbour into W^new_neighbour by w_ji^new = w_ji^old + η(x_si - w_ji^old); s = s + 1; repeat until s = p. p: the number of objects in the training set]
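The epoch loop above can be sketched for a 1-dimensional Kohonen network. Illustrative only: the objects, initial weights, learning rate η and neighbourhood radius r are assumptions, and in practice both η and r shrink during training rather than staying fixed.

```python
# one learning epoch of a 1-D Kohonen network: p iterations = 1 epoch
def train_epoch(objects, W, eta=0.3, r=1):
    for X in objects:                                   # one iteration per object
        win = min(range(len(W)),
                  key=lambda j: sum((x - w) ** 2 for x, w in zip(X, W[j])))
        for j in range(len(W)):                         # correct winner + neighbours
            if abs(j - win) <= r:
                W[j] = [w + eta * (x - w) for x, w in zip(X, W[j])]
    return W

objects = [[0.0, 0.0], [1.0, 1.0]]
W = [[0.5, 0.4], [0.5, 0.5], [0.4, 0.6]]
print(train_epoch(objects, W))
```

Each correction pulls the winner and its neighbours toward the presented object, which is what gradually orders the map.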

Size and extent of the corrections: shrinking of the neighbourhood
[Figure: neighbourhood functions over the radius r up to r_max around the central neuron r_c: flat function ("chef hat"), triangular function, "Mexican hat" function]


Recognition of the objects from the training set
When the learning is finished, all objects from the training set must be recognizable.
This condition is as a rule fulfilled when the sum of the errors in one epoch, ε_epoch, falls below a chosen threshold:

ε_epoch = Σ_{s=1..p} Σ_{j=1..n} Σ_{i=1..m} (x_si - w_ji)²

Top layer (TL) with labels: W_win = W_{jx,jy}
- empty neurons in the TL
- clusters in the TL
- conflicts in the TL
[Figure: an 11×11 labelled top layer (n = 11); A = Acid, E = Ester, K = Ketone]

Modification of the Kohonen ANN
An additional layer of output neurons turns the Kohonen network into a counterpropagation artificial neural network.
The object consists of the descriptor vector X (independent variables) and the property Y (vector of dependent variables, the targets).
Input neurons W (input or Kohonen layer); output neurons O (output or Grossberg layer).
The winning neuron is determined only in the input layer, according to the similarity between X and W.
The weights in both layers are corrected in the same way, according to the differences X - W and Y - O.

Kohonen (input) layer, learning with X:
w_ji^new = w_ji^old + η · (x_si - w_ji^old)   (i = 1 ... m; j = 1 ... n; s = 1 ... p)

Output (Grossberg) layer O, learning with Y:
o_ji^new = o_ji^old + η · (y_si - o_ji^old)   (i = 1 ... t; j = 1 ... n; s = 1 ... p)

The position of the winner W_{jx,jy} selects the output neuron O_{jx,jy} that is corrected.
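One learning step of the counterpropagation network then combines the winner search (in the Kohonen layer only) with the two update rules above. A minimal sketch with made-up weights, assuming a 1-dimensional layout for brevity:

```python
# one counterpropagation learning step: the winner is found from X vs W only,
# then BOTH the Kohonen weights W and the Grossberg output weights O are corrected
def cp_step(X, Y, W, O, eta=0.3):
    win = min(range(len(W)),
              key=lambda j: sum((x - w) ** 2 for x, w in zip(X, W[j])))
    W[win] = [w + eta * (x - w) for x, w in zip(X, W[win])]   # learning with X
    O[win] = [o + eta * (y - o) for y, o in zip(Y, O[win])]   # learning with Y
    return win

def cp_predict(X, W, O):
    win = min(range(len(W)),
              key=lambda j: sum((x - w) ** 2 for x, w in zip(X, W[j])))
    return O[win]   # the prediction is read from the Grossberg layer

W = [[0.2, 0.8], [0.9, 0.1]]
O = [[0.0], [1.0]]
cp_step([0.25, 0.75], [0.1], W, O)
print(cp_predict([0.2, 0.8], W, O))
```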

Counterpropagation network (CP-ANN)
[Figure: X (i = 1 ... 644): representation of the molecular structure, presented to a CP-ANN with j_x × j_y neurons (25 × 25); Y: the target property; P: the predicted property]

Counterpropagation network (CP-ANN)
Mapping in the Kohonen layer: the winning neuron minimizes

D_{jx,jy} = Σ_i (x_i - w_{i,jx,jy})²

[Figure: X (i = 1 ... 644) enters the Kohonen layer with the weights W_{jx,jy,i}; the prediction is read out of the output layer at the position of the winning neuron]

Counterpropagation network (CP-ANN) for the prediction of the binding affinity for ERα
[Figure: X_j, Y; output layer, the plane of responses; HOMO-LUMO energy]