Motivation for the topic of the seminar


1 Bogdan M. Wilamowski. Motivation for the topic of the seminar. Constraints: not to talk about AMNSTC (nano-micro); bring a new perspective to students; keep it to the state of the art. The following topics were considered: Solving Engineering Problems with Computers; Advanced Network Programming; Analog Signal Processing; Computational Intelligence. Keynotes: AINA, 24th Conf. on Advanced Information Networking and Applications, Perth, Australia (April); ICAISC, 10th International Conference on Artificial Intelligence and Soft Computing, Zakopane, Poland (June); ICIT, 2nd International Conf. on Information Technology, Gdansk, Poland (June); ISIE, 19th International Symposium on Industrial Electronics, Bari, Italy (July); ISRCS, 3rd International Symposium on Resilient Control Systems, Idaho Falls, USA (August). Bogdan M. Wilamowski. Problems with computational intelligence: Introduction; Neural Network Learning; Neural Network Architectures; Challenges in Neural Networks; Fuzzy Systems; Comparison of Neural and Fuzzy Systems; Evolutionary Computation.


12 WTA (Winner Takes All): a Hamming layer (linear layer with binary inputs and outputs, unipolar neurons, summing circuits) followed by a pattern retrieval layer. The conclusion: a system of computational intelligence can be smarter than humans. Is this a new technological revolution? Years ago man power was replaced by machines (steam and electric); years ago a significant portion of human brain functions were replaced by computers (calculations, administrative functions, voice and image recognition, etc.). We still claim that we are the most intelligent creatures in the universe, but for how much longer?

13 Find clusters: find the number of clusters and their locations in 4-dimensional space, adding neurons as needed using the minimum-distance concept; this is much simpler and more efficient than ART. The first pattern is applied and the first neuron is introduced. The next pattern is applied and then: (a) if its distance from all existing clusters is larger than the threshold, a new neuron is added; (b) else the weights of the closest neuron are updated as W_k = (m W_k + alpha x) / (m + 1), where m is the number of previous patterns of the given set which were used to update this particular neuron and alpha is the learning constant.
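A minimal Python sketch of the minimum-distance clustering above; the threshold and learning constant are assumed values, not ones from the slides:

```python
import numpy as np

def min_distance_clustering(patterns, threshold=1.0, alpha=0.5):
    """Incremental clustering: add a neuron when a pattern is far from all
    existing clusters, otherwise update the closest neuron's weights."""
    weights = [np.asarray(patterns[0], dtype=float)]  # first pattern -> first neuron
    counts = [1]                                      # m: patterns used per neuron
    for x in patterns[1:]:
        d = [np.linalg.norm(x - w) for w in weights]
        k = int(np.argmin(d))
        if d[k] > threshold:                          # (a) far from all clusters
            weights.append(np.asarray(x, dtype=float))
            counts.append(1)
        else:                                         # (b) W_k = (m W_k + alpha x)/(m+1)
            m = counts[k]
            weights[k] = (m * weights[k] + alpha * np.asarray(x)) / (m + 1)
            counts[k] += 1
    return np.array(weights)
```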

14 Fuzzy systems and fuzzy controllers: X, Y -> Fuzzifier -> MIN operators (fuzzy rules) -> MAX operators -> Defuzzifier -> out. Practical examples are used from cameras to elevators. Neural networks: diagnostics (medical, mechanical, etc.), modeling (natural phenomena or complex systems), business and military (various predictions). Evolutionary computation: is it replacing design processes? Block diagram of a nonlinear dynamic system using neural networks or fuzzy systems: a neural network or fuzzy system drives integrators, each output y_i being the integral of a network output.

15 Introduction. Block diagram of an arbitrary nonlinear dynamic system: nonlinear terms feed integrators, so that each state is obtained as y_i = Integral of f_i(x_1, ..., x_n, y_1, ..., y_n) dt for i = 1, ..., n. Another area with 4 partitions: the neuron equations are linear inequalities in x and y (of the form x + y > threshold); the coefficients of these inequalities give the weights in the first layer, while the second-layer weights combine the resulting half-planes into the area.

16 Another area with 4 partitions: first layer, second layer. Design a neural network with unipolar McCulloch-Pitts neurons which has two inputs and three outputs. Each output responds to the patterns located in one of three areas, as shown in the figure below. Draw the neural network and indicate the value of each weight.
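For reference, a unipolar McCulloch-Pitts neuron is just a thresholded weighted sum; the sketch below uses illustrative weights (not the ones asked for in the exercise) to realize the half-plane x + y > 1:

```python
import numpy as np

def mcp_neuron(x, weights, bias):
    """Unipolar McCulloch-Pitts neuron: output 1 when the weighted sum
    plus bias is positive, else 0."""
    return 1 if np.dot(weights, x) + bias > 0 else 0

# Neuron realizing the half-plane x + y > 1:
print(mcp_neuron(np.array([1.0, 0.5]), np.array([1.0, 1.0]), -1.0))  # -> 1
print(mcp_neuron(np.array([0.2, 0.3]), np.array([1.0, 1.0]), -1.0))  # -> 0
```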

17 Bogdan M. Wilamowski. Problems with computational intelligence: Introduction; Neural Network Learning; Neural Network Architectures; Fuzzy Systems; Comparison of Neural and Fuzzy Systems; Evolutionary Computation. Neural networks as nonlinear elements; feedforward neural networks.

18 Soft activation functions. Hard unipolar: o = f(net) = (sign(net) + 1)/2, i.e. 1 if net > 0 and 0 if net < 0. Hard bipolar: o = f(net) = sgn(net), i.e. 1 if net > 0, 0 if net = 0, and -1 if net < 0. Soft unipolar: o = f(net) = 1/(1 + exp(-lambda net)), with f' = lambda o (1 - o). Soft bipolar: o = f(net) = tanh(0.5 lambda net) = 2/(1 + exp(-lambda net)) - 1, with f' = 0.5 lambda (1 - o^2). Neural Network Learning: let us consider binary signals and weights such that x_i = w_i; then net = Sum_{i=1..n} w_i x_i = n = 8 in this example. This is the maximum value net can have; for any other combination net would be smaller.
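The soft activation functions and their derivatives translate directly into code; a sketch (lambda is the gain parameter from the slide):

```python
import numpy as np

def unipolar(net, lam=1.0):
    """Soft unipolar: o = 1/(1 + exp(-lam*net)); derivative f' = lam*o*(1-o)."""
    o = 1.0 / (1.0 + np.exp(-lam * net))
    return o, lam * o * (1.0 - o)

def bipolar(net, lam=1.0):
    """Soft bipolar: o = tanh(0.5*lam*net); derivative f' = 0.5*lam*(1-o**2)."""
    o = np.tanh(0.5 * lam * net)
    return o, 0.5 * lam * (1.0 - o * o)
```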

19 For the same pattern x and slightly different weights, net = Sum_{i=1..n} w_i x_i = 4; in general net = n - 2 HD, where HD is the Hamming distance between the pattern and the weight vector. Supervised learning rules for a single neuron, all of the form Delta w_i = c delta x_i: correlation rule (supervised): delta = d; perceptron fixed rule: delta = d - o; perceptron adjustable rule: as above, but the learning constant is modified to alpha* = alpha net / (x^T x); LMS (Widrow-Hoff) rule: delta = d - net; delta rule: delta = (d - o) f'; pseudoinverse rule (the same solution as LMS): w = (x^T x)^-1 x^T d.
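A sketch of the listed single-neuron rules; only the delta formulas come from the slide, while the loop structure, learning constant, and epoch count are assumptions:

```python
import numpy as np

def train_single_neuron(X, d, rule="delta", c=0.1, lam=1.0, epochs=100):
    """X is (patterns x inputs), d the desired outputs; dw = c * delta * x."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x, dp in zip(X, d):
            net = w @ x
            o = np.tanh(0.5 * lam * net)          # bipolar soft activation
            fprime = 0.5 * lam * (1.0 - o * o)
            if rule == "correlation":
                delta = dp                        # delta = d
            elif rule == "perceptron":
                delta = dp - np.sign(net)         # delta = d - o (hard output)
            elif rule == "lms":
                delta = dp - net                  # Widrow-Hoff: delta = d - net
            else:
                delta = (dp - o) * fprime         # delta rule
            w += c * delta * x
    return w

def pseudoinverse_weights(X, d):
    """Batch pseudoinverse solution w = (X^T X)^-1 X^T d (same as LMS)."""
    return np.linalg.pinv(X) @ d
```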

20 EBP (Error Back Propagation) algorithm for a single pattern. With input vector z, layer j (weights V), layer k (weights W), and desired output vector d: the feedforward pass gives y = f(net_j) and o = f(net_k); backpropagation gives delta_o = (d_k - o_k) f'(net_k) and delta_y = (Sum_j w_j delta_o) f'(net_j); the updates are Delta W = eta delta_o y and Delta V = eta delta_y z, with eta the learning constant. Equivalently, for a weight w_ij feeding the j-th neuron, Delta w_ij = alpha x_j Sum_{k=1..K} [(d_kp - o_kp) gain_kj], where gain_kj = d o_k / d net_j = f_j'(net_j) F_kj'{z_j}. Illustration of the concept of gain computation in neural networks.
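A minimal sketch of one EBP step for a single pattern, following the slide's notation (x the input vector, V and W the two weight layers); bipolar soft activation is assumed:

```python
import numpy as np

def ebp_single_pattern(x, d, V, W, eta=0.1, lam=1.0):
    """One EBP update: feedforward x -> y -> o, then backpropagate errors.
    Shapes: V is (hidden x inputs), W is (outputs x hidden)."""
    f  = lambda net: np.tanh(0.5 * lam * net)
    fp = lambda o: 0.5 * lam * (1.0 - o * o)   # derivative expressed via output
    y = f(V @ x)                               # hidden layer j
    o = f(W @ y)                               # output layer k
    delta_o = (d - o) * fp(o)                  # delta_o = (d_k - o_k) f'(net_k)
    delta_y = (W.T @ delta_o) * fp(y)          # delta_y = (w_j delta_o) f'(net_j)
    W += eta * np.outer(delta_o, y)            # dW = eta * delta_o * y
    V += eta * np.outer(delta_y, x)            # dV = eta * delta_y * z
    return V, W
```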

21 Steepest descent method: w_{k+1} = w_k - alpha g_k. Newton method: w_{k+1} = w_k - A_k^-1 g_k, where A_k is the Hessian and g the gradient vector. If the error is defined as E = Sum_{p=1..P} Sum_{m=1..M} (d_pm - o_pm)^2, then A ~ 2 J^T J and g = 2 J^T e, where J is the Jacobian and e the error vector (LM or NBN algorithm). The Hessian is N x N, the Jacobian is MP x N, and the gradient is N x 1, where N is the number of weights, M the number of outputs, and P the number of patterns. Advantages of the NBN algorithm over the LM algorithm: both LM and NBN are very fast; NBN does not calculate the Jacobian matrix, so it can handle problems with a basically unlimited number of patterns, while LM can be used only for small problems; LM needs the forward and backpropagation processes to calculate the Jacobian, while NBN uses only forward calculation, so it is faster (especially for networks with multiple outputs); NBN can handle arbitrarily connected neural networks, while LM was developed only for MLPs.
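A sketch of one damped Gauss-Newton (LM-style) step built from the slide's approximations A ~ 2 J^T J and g = 2 J^T e (the factors of 2 cancel); `jacobian_fn` and `error_fn` are assumed user-supplied callables:

```python
import numpy as np

def lm_step(w, jacobian_fn, error_fn, mu=0.01):
    """One step w_{k+1} = w_k - (J^T J + mu I)^-1 J^T e, with damping mu."""
    J = jacobian_fn(w)                  # (M*P) x N Jacobian of the errors
    e = error_fn(w)                     # (M*P) error vector
    A = J.T @ J + mu * np.eye(len(w))   # damped quasi-Hessian
    return w - np.linalg.solve(A, J.T @ e)
```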

22 Sum of squared errors as a function of the number of iterations for the Parity-4 problem using the EBP algorithm, over repeated runs: success rate 46%. Result of Parity-4 training using the NBN algorithm, over repeated runs: training time 7 ms, success rate 69%. Bogdan M. Wilamowski. Problems with computational intelligence: Introduction; Neural Network Learning; Neural Network Architectures; Fuzzy Systems; Comparison of Neural and Fuzzy Systems; Evolutionary Computation.

23 Functional Link Networks: inputs pass through nonlinear elements to the outputs. Genetic algorithms? Polynomial Networks: inputs pass through polynomial terms (x, y, xy, x^2, y^2, x^2 y, ...) to the outputs. Fourier or other series? Nonlinear regression?

24 The cascade correlation architecture: hidden neurons are added one at a time and connected to the output neurons; weights are adjusted at every step, and once adjusted they are then frozen. The counterpropagation networks (ROM): a Hamming layer of unipolar neurons and summing circuits between inputs and outputs.

25 The counterpropagation networks (analog memory): a Hamming layer with binary inputs, unipolar neurons, and summing circuits, acting as an analog memory with an analog address. Consider using it as an alternative to fuzzy systems: the number of neurons equals the number of predefined values, and it allows easy implementation of systems with multiple inputs.

26 RBF (Radial Basis Function) networks: a minimum-distance classifier. Hidden "neurons" store patterns s_1, ..., s_4; each responds strongly when the input x is close to its stored s, with out = exp(-||x - s||^2 / sigma^2), followed by summing circuits and output normalization. LVQ (Learning Vector Quantization), a counterpropagation-like network: the first (competitive) layer detects subclasses, and the second (linear) layer combines subclasses into a single class. The first layer computes Euclidean distances between the input pattern and the stored patterns; the winning neuron is the one with the minimum distance.
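A sketch of the RBF hidden-layer response with output normalization as described above (sigma is a design parameter):

```python
import numpy as np

def rbf_outputs(x, stored, sigma=1.0):
    """Responses out = exp(-||x - s||^2 / sigma^2) for each stored pattern s,
    normalized so the responses sum to one."""
    d2 = np.sum((np.asarray(stored) - x) ** 2, axis=1)
    out = np.exp(-d2 / sigma ** 2)
    return out / out.sum()
```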

27 Input pattern transformation on a sphere, a fix to the Kohonen network deficiency: input patterns x_1, ..., x_n are mapped to z_1, ..., z_n, z_{n+1} so that all transformed patterns lie on a sphere of radius R. Bogdan M. Wilamowski. Problems with computational intelligence: Introduction; Neural Network Learning; Neural Network Architectures; Challenges in Neural Networks; Fuzzy Systems; Comparison of Neural and Fuzzy Systems; Evolutionary Computation.

28 Fundamentals of Fuzzy Systems: comparison of Boolean algebra with fuzzy logic. In fuzzy systems, inputs can be any value from 0 to 1. The basic fuzzy principle is similar to Boolean logic, but max and min operators are used instead of AND and OR, and the NOT operator becomes 1 - A:
Boolean A AND B  <->  fuzzy min{A, B}, the smallest value of A and B (intersection)
Boolean A OR B   <->  fuzzy max{A, B}, the largest value of A and B (union)
Boolean NOT A    <->  fuzzy 1 - A (complement)
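The fuzzy counterparts of the Boolean operators map directly onto min, max, and 1 - A; a sketch with arbitrary example values:

```python
import numpy as np

fuzzy_and = np.minimum          # A AND B -> min(A, B)  (intersection)
fuzzy_or  = np.maximum          # A OR B  -> max(A, B)  (union)
fuzzy_not = lambda a: 1.0 - a   # NOT A   -> 1 - A      (complement)

a, b = np.array([0.2, 0.7]), np.array([0.6, 0.3])
print(fuzzy_and(a, b), fuzzy_or(a, b), fuzzy_not(a))
```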

29 Boolean or fuzzy systems. Entropy is a measure of uncertainty: E(A) = a/b = l(A, A_near) / l(A, A_far); when a = b then E(A) = 1. Fuzzy entropy theorem: E(A) = a/b = M(A intersect A^c) / M(A union A^c). Example: A = (0.2, 0.7) and its complement A^c = (0.8, 0.3).

30 Fuzzy systems: m_A(x_i) is the degree of association of variable x_i with fuzzy set A. Size of a fuzzy set: M(A) = Sum_{i=1..n} m_A(x_i). Distance between fuzzy sets A and B: l_p(A, B) = (Sum_{i=1..n} |m_A(x_i) - m_B(x_i)|^p)^(1/p), where p is the distance order. If p = 1 this is the fuzzy Hamming distance, with l(A, 0) = M(A); if p = 2 this is the Euclidean distance. Example: for A = (0.2, 0.7) and B = (0.6, 0.3), l_1 = 0.4 + 0.4 = 0.8 and l_2 = 0.57. Design example of a Mamdani fuzzy controller: temperature input x with membership functions cold, cool, norm, warm, hot; humidity input y with membership functions wet, moist, norm, dry; the membership functions sum to 1, and the output is computed by the centroid.
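A sketch of the set-size and distance measures, reproducing the A = (0.2, 0.7), B = (0.6, 0.3) example:

```python
import numpy as np

def fuzzy_size(mA):
    """Size of a fuzzy set: M(A) = sum of membership degrees."""
    return np.sum(mA)

def fuzzy_distance(mA, mB, p=1):
    """l_p(A, B) = (sum |m_A - m_B|^p)^(1/p); p=1 is the fuzzy Hamming
    distance, p=2 the Euclidean distance."""
    return np.sum(np.abs(mA - mB) ** p) ** (1.0 / p)

A, B = np.array([0.2, 0.7]), np.array([0.6, 0.3])
print(fuzzy_distance(A, B, p=1))   # 0.4 + 0.4 = 0.8
print(fuzzy_distance(A, B, p=2))   # ~0.57
```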

31 Fuzzy systems. Block diagram for a Zadeh fuzzy controller: X, Y -> Fuzzifier -> MIN operators (fuzzy rules) -> MAX operators -> Defuzzifier -> out. Takagi-Sugeno type defuzzifier: the k fuzzy signals are normalized and combined by a weighted sum into an analog output. Fundamentals of fuzzy systems, fuzzy controllers: block diagram of a Mamdani type fuzzy controller (X, Y -> Fuzzifier -> rule selection cells with min-max operations -> defuzzification -> out) and block diagram of a TSK (Takagi-Sugeno-Kang) fuzzy controller (X, Y -> Fuzzifier -> voltages -> array of cluster cells -> weighted currents -> out).
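A minimal sketch of the Takagi-Sugeno type defuzzifier (a weighted sum of per-rule values normalized by the total activation); the example numbers are mine:

```python
import numpy as np

def tsk_output(rule_strengths, rule_values):
    """Weighted sum of the per-rule (analog) values, normalized by the
    sum of the fuzzy rule activations."""
    mu = np.asarray(rule_strengths, dtype=float)
    return float(mu @ np.asarray(rule_values) / mu.sum())

# Three fired rules with activations 0.2, 0.5, 0.3 and consequents 1, 4, 7:
print(tsk_output([0.2, 0.5, 0.3], [1.0, 4.0, 7.0]))   # -> 4.3
```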

32 Rule tables of the Mamdani fuzzy controller and the TSK fuzzy controller over the X-Y input space (entries A through E). Control surfaces of the Mamdani controller and the TSK controller, each with triangular and with trapezoidal membership functions.

33 Fuzzy neural networks: the inputs X, y are fuzzified, rule activations are formed by multiplication (Pi units), and the output is a sum of weighted expected values divided by the sum of activations (all weights equal). TSK fuzzy controller versus a fuzzy controller with sigmoidal membership functions, built from sigmoidal pairs over the input range.

34 Fuzzy controller with sigmoidal membership functions (continued). Neural Networks or Fuzzy Systems?
                           Fuzzy   Neural
Number of inputs             -       +
Analog implementation        -       +
Digital implementation       -       +
Speed                        -       +
Smoothness of the surface    -       +
Design complexity            +       +
So why do most researchers use FUZZY? Why are researchers frustrated with neural networks?

35 Bogdan M. Wilamowski. Problems with computational intelligence: Introduction; Neural Network Learning; Neural Network Architectures; Challenges in Neural Networks; Fuzzy Systems; Comparison of Neural and Fuzzy Systems; Evolutionary Computation. Best neural network architectures: a layered bipolar neural network with one hidden layer for the parity-8 problem (first-layer weights all +1, second-layer weights taken from (-8, -4, 0, 4, 8)); parity implemented in fully connected bipolar neural networks with five neurons in the hidden layer; parity implemented with 4 unipolar neurons in one cascade (weights +1 and selected biases).

36 Solution of the two-spiral problem using the MLP architecture with 3 neurons; using the BMLP architecture with 7 neurons; using the FCC architecture with 6 neurons. Best neural network architectures: most software can train only MLPs; exceptions are SNNS and NBN.

37 Performance of Neural Networks. Control surface of a TSK fuzzy controller: (a) required control surface; (b) surface obtained with 8*6 = 48 defuzzification rules. Control surface obtained with neural networks: (a) 3 neurons in cascade (12 weights), Training Error = .49; (b) 4 neurons in cascade (18 weights).

38 Performance of Neural Networks. Control surface obtained with neural networks: (a) 5 neurons in cascade (25 weights), Training Error = .3973; (b) 8 neurons in cascade (52 weights), with a training error orders of magnitude smaller. EBP is not able to train optimal architectures. Comparison between the EBP algorithm and the NBN algorithm for different numbers of neurons in fully connected cascade networks: (a) average training time; (b) success rate.

39 Common Mistakes: researchers are using wrong architectures; researchers are using an excessive number of neurons; first-order algorithms such as EBP are not able to train optimal networks; second-order algorithms such as LM can train only MLP networks. (Figure: success rate of EBP versus NBN as a function of the number of neurons.) The newly developed NBN algorithm is not only very fast but can train all neural network architectures, and it can find solutions for optimal neural network architectures. Bogdan M. Wilamowski. Problems with computational intelligence: Introduction; Neural Network Learning; Neural Network Architectures; Challenges in Neural Networks; Fuzzy Systems; Comparison of Neural and Fuzzy Systems; Evolutionary Computation.

40 Digital implementation. Neural network implementations usually require computation of the sigmoidal function f(net) = 1/(1 + exp(-net)) for unipolar neurons, or f(net) = tanh(net) = 2/(1 + exp(-2 net)) - 1 for bipolar neurons. These functions are relatively difficult to compute, making implementation on a microprocessor difficult. If the Elliott function f(net) = net/(1 + |net|) is used instead of the sigmoidal, the computations are relatively simple and the results are almost as good as in the case of the sigmoidal function. (Figures: Elliott versus sigmoidal, unipolar and bipolar; required surface approximated by Mamdani fuzzy with MIN using triangular, trapezoidal, and Gaussian membership functions.)
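A sketch of the Elliott alternatives; the bipolar form is the slide's net/(1 + |net|), while the unipolar scaling into [0, 1] is my assumption:

```python
import numpy as np

def elliott_bipolar(net):
    """Bipolar Elliott function f(net) = net / (1 + |net|); no exp(), so it
    is cheap on a microprocessor."""
    return net / (1.0 + np.abs(net))

def elliott_unipolar(net):
    """Unipolar variant, shifted and scaled into [0, 1] (scaling assumed)."""
    return 0.5 * net / (1.0 + np.abs(net)) + 0.5
```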

41 Required surface approximated by a neural network with one hidden layer and by fuzzy systems with 6 membership functions per input (two-input cases). Mamdani fuzzy controllers require 2*6 + 6 = 18 analog values plus a rule table of 36*3 = 108 bits. TSK fuzzy controllers require 2*6 + 6*6 = 48 analog values, and no rule table has to be stored. Neural networks: 2 hidden neurons need 2*3 + 4 = 10 analog values, 3 hidden neurons need 3*3 + 5 = 14 analog values, and 4 hidden neurons need 4*3 + 6 = 18 analog values.

42 Comparison of various fuzzy and neural controllers, by length of code, processing time (ms), and error (MSE): Mamdani with trapezoidal, triangular, and Gaussian membership functions; Takagi-Sugeno with trapezoidal, triangular, and Gaussian membership functions; a neural network with 3 neurons in cascade; a neural network with more neurons in cascade; and a neural network with 6 neurons in one hidden layer. Bogdan M. Wilamowski. Problems with computational intelligence: Introduction; Neural Network Learning; Neural Network Architectures; Challenges in Neural Networks; Fuzzy Systems; Comparison of Neural and Fuzzy Systems; Evolutionary Computation.

43 Genetic Algorithms. Genetic algorithms follow the evolution process in nature to find better solutions to complicated problems. The foundations of genetic algorithms are given in the books by Holland (1975) and Goldberg (1989). Genetic algorithms consist of the following steps: initialization; selection; reproduction with crossover and mutation. Selection and reproduction are repeated for each generation until a solution is reached. During this procedure, certain strings of symbols, known as chromosomes, evolve toward better solutions. All significant steps of the genetic algorithm will be explained using a simple example: finding the maximum of the function (sin(x) - .*x) over the range of x from 0 to 1.6. Note that in this range the function has a global maximum at x = 1.39 and a local maximum.

44 Genetic Algorithms: Coding and initialization. First, the variable x has to be represented as a string of symbols. With longer strings the process usually converges faster, so the fewer symbols used for one string field, the better. While this string may be a sequence of any symbols, the binary symbols "0" and "1" are usually used. In our example, let us use for coding six-bit binary numbers having a decimal value of 40x. The process starts with a random generation of the initial population, given in the Table (string, decimal value, variable value, function value, fraction of total).
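A sketch of the coding step, assuming the six-bit string encodes 40x (consistent with six bits covering the 0 to 1.6 range); the population size is an assumption:

```python
import random

BITS = 6                      # six-bit strings, as in the example

def decode(bits, scale=40.0):
    """Decode a 6-bit string to x, assuming the string stores round(scale*x)."""
    return int(bits, 2) / scale

def random_population(size=8):
    """Random initial population of bit strings."""
    return ["".join(random.choice("01") for _ in range(BITS))
            for _ in range(size)]

print(random_population())
```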

45 Genetic Algorithms: Selection and reproduction. Selection of the best members of the population is an important step in the genetic algorithm, and many different approaches can be used to rank individuals. In our example the ranking function is given: member number 6 has the highest rank and member number 3 the lowest. Members with a higher rank should have higher chances to reproduce. The probability of reproduction for each member can be obtained as its fraction of the sum of all objective function values; this fraction is shown in the last column of the Table. Using a random reproduction process, a population arranged in pairs is generated. If the size of the population is to stay the same from one generation to the next, two parents should generate two children: by combining two strings, two other strings should be generated. The simplest way to do this is to split each parent string in half and exchange the substrings between the parents. This process is known as crossover, and the resultant children are shown in the table.
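A sketch of fraction-of-total (roulette-wheel) selection and the split-in-half crossover described above:

```python
import random

def roulette_select(population, fitness):
    """Pick a parent with probability equal to its fraction of the total
    objective function value (the last column of the table)."""
    total = sum(fitness)
    r, acc = random.uniform(0, total), 0.0
    for member, f in zip(population, fitness):
        acc += f
        if acc >= r:
            return member
    return population[-1]

def crossover_half(parent1, parent2):
    """Split each parent string in half and exchange the substrings."""
    cut = len(parent1) // 2
    return parent1[:cut] + parent2[cut:], parent2[:cut] + parent1[cut:]
```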

46 Genetic Algorithms: Mutation. On top of the properties inherited from their parents, children acquire some new random properties; this process is known as mutation. In most cases mutation generates low-ranked children, which are eliminated in the reproduction process. Sometimes, however, mutation may introduce a better individual with a new property into the population; this prevents the process of reproduction from degenerating. In genetic algorithms mutation usually plays a secondary role, and the mutation rate is usually assumed at a level well below 1%. In our example mutation is equivalent to a random bit change of a given pattern. In this simple example, with short strings, a small population, and a typical mutation rate of 0.1%, our patterns remain practically unchanged by the mutation process. The second generation for our example is shown in the Table (string number, string, decimal value, variable value, function value, fraction of total).
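A sketch of the random bit-change mutation with a small, sub-1% rate:

```python
import random

def mutate(bits, rate=0.001):
    """Flip each bit independently with a small probability (rate assumed)."""
    return "".join(b if random.random() > rate else "10"[int(b)] for b in bits)
```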

47 Genetic Algorithms. Note that the two identical highest-ranking members of the second generation are very close to the solution x = 1.39. Parents are randomly chosen for the third generation and produce the children shown in the table. The best result in the third population is the same as in the second one. By careful inspection of all strings from the second or third generation, one may conclude that using a crossover where strings are always split in half, the best solution will never be reached, no matter how many generations are created.

48 Genetic Algorithms (figures).

49 Genetic Algorithms (figures). Pulse Coded Neural Networks.

50 Pulse Coded Neural Networks. Neural cell (NC): an input node drives a CMOS circuit (transistors M1-M4, MP, MN with RC pairs, powered from VDD) producing weighted output currents; a transient graph shows the node voltages over time (ms). NC cells are connected through RC links between inputs X and Y.

51 Pulse Coded Neural Networks. Each cell couples with its neighbors through RC links; a spatial input image applied to the mutually coupled neurons produces an output temporal pattern (example: "burning fire" complete image, number of events versus time).

52 Pulse Coded Neural Networks (figures).
