PETER PAZMANY CATHOLIC UNIVERSITY Consortium members SEMMELWEIS UNIVERSITY, DIALOG CAMPUS PUBLISHER


1 PETER PAZMANY CATHOLIC UNIVERSITY, SEMMELWEIS UNIVERSITY. Development of Complex Curricula for Molecular Bionics and Infobionics Programs within a consortial framework. Consortium leader: PETER PAZMANY CATHOLIC UNIVERSITY. Consortium members: SEMMELWEIS UNIVERSITY, DIALOG CAMPUS PUBLISHER. The Project has been realised with the support of the European Union and has been co-financed by the European Social Fund.

2 Peter Pazmany Catholic University, Faculty of Information Technology. Digital- and Neural-Based Signal Processing & Kiloprocessor Arrays (Digitális-, neurális- és kiloprocesszoros architektúrákon alapuló jelfeldolgozás). Signal Processing by a Single Neuron (Jelfeldolgozás Mesterséges Neuronnal). Treplán Gergely.

3 Outline: Historical notes; The artificial neuron (McCulloch-Pitts neuron); Elementary set separation by a single neuron; Implementation of a single logical function by a single neuron; Pattern recognition by a single neuron; The learning algorithm; Questions; Example problems.

4 Historical notes: Threshold Logic Unit (TLU) proposed by Warren McCulloch and Walter Pitts in 1943; Hebb's first rule for self-organized learning in 1949; perceptron developed by Frank Rosenblatt in 1957; ADALINE (Adaptive Linear Neuron, later Adaptive Linear Element) by Widrow and Hoff in 1960; perceptron learning rule (LMS algorithm) by Widrow in 1960; limitations of the perceptron shown by Minsky and Papert in 1969; back-propagation algorithm (1986); radial-basis function network (Broomhead and Lowe, 1988).

5 The artificial neuron (1) The artificial neuron is an information-processing unit that is the elementary building block of an artificial neural network; its structure is extracted from the biological model.

6 The artificial neuron (2) (Figure: crude simplification of a nerve cell — stimuli, synapse, output stimulus (response).) A crude simplification of a nerve cell is depicted in the Figure. Stimuli arrive from other neurons. From the synapses the dendrites carry the signal to the nerve cell body, where it is summed up; if the sum reaches a certain level, an output is generated. A synapse is called excitatory if stimulating it increases the probability of generating an output, or inhibitory if the stimulus on the synapse attenuates the overall sum.

7 The artificial neuron (3) (Figure: biological neuron — dendrite, soma, nucleus, axon with myelin sheath, Schwann cells, nodes of Ranvier, axon terminal.) The following artificial model is only a simple copy of the nerve cell; however, some important features can be extracted from this model.

8 The artificial neuron (4) The artificial neuron is connected to the outside world through the input signals x_i, where the synaptic strength is represented by the weight w_i. What arrives at the AN is thus a weighted sum of the input signals. The weight w_i quantifies two general effects: if w_i > 0 the input is amplified, otherwise it is attenuated; in this sense w_i is the descriptor of the synapse. There is also a threshold b to which the neuron compares the weighted sum of the inputs.

9 The artificial neuron (5) The output value is determined according to a nonlinearity φ(.), which is also called the threshold (activation) function. In order to obtain a more compact form, an extended weight vector can be defined by introducing a new component w_0 = −b together with a constant input x_0 = 1, so that the threshold is absorbed into the weighted sum. This representation can be seen in the Figure.

10 The artificial neuron (6) Using this interpretation we have the same model as above if the input x_0 belonging to w_0 is a constant 1, and it can easily be seen that the final output is a nonlinearity of the inner product of the weights and the inputs. Mathematically, the output y is

$y = \varphi\left(\sum_{i=0}^{N} w_i x_i\right) = \varphi\left(\mathbf{w}^T\mathbf{x}\right)$,

or, using the threshold notation,

$y = \varphi\left(\sum_{i=1}^{N} w_i x_i - b\right)$.

11 The artificial neuron (7) Activation function (1) Let the activation (threshold) function φ(.) be a monotone, differentiable, increasing function, for example an arctan-like sigmoid. Let it be called the soft nonlinearity, which is shown in the next Figure. In this case the output is

$y = \varphi(u, \lambda) = \frac{2}{1 + \exp(-\lambda u)} - 1$, where $u = \sum_{i=1}^{N} w_i x_i - b$.

12 The artificial neuron (8) Activation function (2) (Figure: the sigmoid nonlinearity function.)

13 The artificial neuron (8) Activation function (3) If the activation function is the sgn(.) function, then it is the so-called hard nonlinearity shown in the Figure, and the resulting formula fully describes the operation of the neuron. In this case the output is

$y = \mathrm{sgn}(u) = \begin{cases} +1, & \text{if } u \geq 0 \\ -1, & \text{else,} \end{cases}$ where $u = \sum_{i=1}^{N} w_i x_i - b$.
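
As a minimal illustration of the formulas above, the following Python sketch computes the output of a single artificial neuron for both the soft (sigmoid) and hard (sign) activations; the function and variable names (neuron_output, lam, the example weights) are illustrative choices, not taken from the lecture.

```python
import numpy as np

def neuron_output(x, w, b, activation="hard", lam=1.0):
    """Output of a single artificial neuron: y = phi(w^T x - b)."""
    u = np.dot(w, x) - b          # weighted sum compared to the threshold b
    if activation == "hard":      # hard nonlinearity: sgn(u) in {-1, +1}
        return 1.0 if u >= 0 else -1.0
    # soft (bipolar sigmoid) nonlinearity with steepness lambda
    return 2.0 / (1.0 + np.exp(-lam * u)) - 1.0

# Example with 0/1 inputs: weights (1, 1) and threshold 1.5 act as a 2-D AND neuron.
print(neuron_output(np.array([1.0, 1.0]), np.array([1.0, 1.0]), 1.5))  # +1
print(neuron_output(np.array([1.0, 0.0]), np.array([1.0, 1.0]), 1.5))  # -1
```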

14 The artificial neuron (8) Activation function (4) (Figure: the signum hard nonlinearity function.)

15 The artificial neuron (9) Relation between activation functions:

16 The artificial neuron (10) The operation of a McCulloch-Pitts (artificial) neuron is thus fully described as follows. When an input vector arrives at the AN, it 1. computes the weighted sum of the inputs, 2. compares the result with a threshold, 3. passes the result through a nonlinearity function to produce the output. Again, this is a gross simplification that captures some important features of the biological nerve cell. It will soon be revealed that it is nevertheless a powerful model with which we can solve hard IT problems.

17 Elementary set separation by a single neuron (1) Let φ(.) be the hard nonlinear function; then the output is a discrete −1 or +1:

$\varphi(u) = \mathrm{sgn}(u) = \begin{cases} +1, & \text{if } u \geq 0 \\ -1, & \text{else.} \end{cases}$

Rewriting the formula by substituting $u = \mathbf{w}^T\mathbf{x}$, the output is +1 if the weighted sum of the inputs is greater than or equal to zero, and −1 if it is smaller than zero:

$y = \varphi(u) = \mathrm{sgn}\left(\mathbf{w}^T\mathbf{x}\right) = \begin{cases} +1, & \text{if } \mathbf{w}^T\mathbf{x} \geq 0 \\ -1, & \text{else.} \end{cases}$

18 Elementary set separation by a single neuron (2) This is a very important formula, because in its geometrical interpretation it describes a separation by a linear hyperplane. From elementary geometry we know that

$\mathbf{w}^T\mathbf{x} = 0$

is the equation of a hyperplane. Since it is the equation of an N-dimensional hyperplane, the weights of the artificial neuron represent a linear decision boundary in a two-class pattern-classification problem.

19 Elementary set separation by a single neuron (3) Illustration of the hyperplane (in this example, a straight line) as the decision boundary for a two-dimensional, two-class pattern-classification problem.

20 Elementary set separation by a single neuron (4) If we consider a 2-D input space, this equation determines a hyperplane which is simply a line. Whatever lies on one side of that line is classified as +1, and whatever lies on the other side is classified as −1. (Figure: the simplest artificial neuron with a 2-D input.)

21 Elementary set separation by a single neuron (5) To give a specific example, let the (extended) weight vector be w = (3, 2, 1), so the hyperplane is described by the following equation:

$3 + 2x_1 + x_2 = 0$,

which explicitly means

$x_2 = -2x_1 - 3$.

The following Figure shows the decision line of this equation.

22 Elementary set separation by a single neuron (6) Decision boundary of the example.
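
The decision line from the example on slide 21 can be checked numerically; a hedged sketch in Python (the test points are arbitrary illustrations):

```python
import numpy as np

# Extended weight vector from slide 21: w = (w0, w1, w2) = (3, 2, 1),
# i.e. the decision line 3 + 2*x1 + x2 = 0 (equivalently x2 = -2*x1 - 3).
w = np.array([3.0, 2.0, 1.0])

def classify(x1, x2):
    """Sign of the extended inner product w^T (1, x1, x2)."""
    return 1 if w @ np.array([1.0, x1, x2]) >= 0 else -1

print(classify(0.0, 0.0))    # point above the line -> +1
print(classify(0.0, -5.0))   # point below the line -> -1
```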

23 Elementary set separation by a single neuron (7) The decision domain (which side of the line is classified as +1) can easily be determined from the sign of w_2. So, for a given weight vector, this is how we can visualize the set separation. Furthermore, if we change the components of the weight vector, we obtain different numbers in the equation, which means a different separation line. Basically, the vector w represents the programmability of the artificial neuron, as the figure shown above illustrates.

24 Elementary set separation by a single neuron (8) Why is set separation by a hyperplane so important? One application arises when we would like to implement logic functions. Furthermore, there are plenty of mathematical and computational tasks which can be reduced to a set separation problem by a linear hyperplane. Let us examine the implementation of logical functions by a single neuron!

25 Implementation of a single logical function by a single neuron (1) Let us first focus on the 2-D AND function. We start from the truth table of the logical AND function. The Figure also shows the input space with its simple geometric interpretation, where x_1 and x_2 are the inputs. (Figure: 2-D AND, from truth table to visualization.)

26 Implementation of a single logical function by a single neuron (2) All the input points can be seen on the plot, and the 2-D AND function amounts to a set separation, because only one point has to be classified as +1 and all the others must be −1. It is therefore a set separation problem: the correct decision can be implemented if we choose the right weights. The truth table determines, for a given input vector, whether the output is +1 or −1. On the left side the geometric interpretation can be seen, and it is easy to notice that this is indeed a problem which can be solved by a linear separator. If we know the decision boundary, we can obtain the weights from the equation of the line.

27 Implementation of a single logical function by a single neuron (3) What we have in the figure is the separation surface that we needed, which is mathematically the following equation:

$-1.5 + x_1 + x_2 = 0$, i.e. $x_2 = 1.5 - x_1$.

This separation surface looks as shown in the figure, and it can easily be seen that the corresponding (extended) weight vector is w = (−1.5, 1, 1).

28 Implementation of a single logical function by a single neuron (4) The next figure shows how to design the implementation of the 2-D AND function by an artificial neuron. (Figure: solution of the logical 2-D AND by a single neuron.)

29 Implementation of a single logical function by a single neuron (5) Furthermore, instead of 2-D, we can consider the R-dimensional AND function. The corresponding program of the neuron is the following: the weights corresponding to the inputs are all 1, and the threshold should be R − 0.5. As a result, the actual (extended) weights of the neuron are

$\mathbf{w}^T = \left(-(R - 0.5),\ 1,\ \ldots,\ 1\right)$.
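
The R-dimensional AND weights above can be checked with a short, hedged Python sketch (assuming binary 0/1 inputs, ±1 outputs and the extended-weight convention used on this slide):

```python
import numpy as np
from itertools import product

def and_neuron(R):
    """Extended weights (w0, w1, ..., wR) of the R-dimensional AND neuron."""
    return np.array([-(R - 0.5)] + [1.0] * R)

def fire(w, x):
    """Hard-limited output for inputs x, using the extended input (1, x)."""
    return 1 if w @ np.concatenate(([1.0], x)) >= 0 else -1

R = 5
w = and_neuron(R)
for x in product([0.0, 1.0], repeat=R):
    assert fire(w, np.array(x)) == (1 if all(x) else -1)
print("R-dimensional AND verified for R =", R)
```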

30 Implementation of a single logical function by a single neuron (6) In the same way, the OR function can also be implemented by a single artificial neuron, since it is also a linear separation problem, as shown in the next Figure; the weight vector must be w = (−0.5, 1, 1). (Figure: 2-D OR problem solved by a linear separator.)

31 Implementation of a single logical function by a single neuron (7) However, we cannot implement every logical function by a linear hyperplane. Unfortunately, there are functions which cannot be implemented by a single neuron; for example the exclusive OR (XOR) is such a function, which entails the separation given in the next Figure. (Figure: the 2-D XOR problem cannot be solved by one linear separator.)

32 Implementation of a single logical function by a single neuron (8) As can be seen, XOR cannot be separated by one linear hyperplane, so more neurons should be used: in this example one neuron implements one line, another implements the other line, and a third neuron realizes the AND function that combines the two separations. Is there any neural-based solution for nonlinear separation problems? Let us build neural networks!

33 Implementation of a single logical function by a single neuron (9) As a result, even the XOR function can be implemented by a neural network, for example with 3 neurons in a feed-forward arrangement. (Figure: the 2-D XOR problem solved by a network of neurons.)
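
A minimal sketch of one such 3-neuron feed-forward solution in Python, assuming 0/1 inputs, ±1 hidden outputs and the extended-weight convention used above; the particular weight values are one possible choice, not necessarily those in the Figure:

```python
import numpy as np

def fire(w, x):
    """Hard-limited neuron with extended weights w and inputs x."""
    return 1.0 if w @ np.concatenate(([1.0], x)) >= 0 else -1.0

# Hidden layer: an OR neuron and an AND neuron (0/1 inputs, +/-1 outputs).
w_or  = np.array([-0.5, 1.0, 1.0])
w_and = np.array([-1.5, 1.0, 1.0])
# Output neuron: fires only when OR is +1 and AND is -1, i.e. "OR but not AND".
w_out = np.array([-1.0, 1.0, -1.0])

def xor(x1, x2):
    h = np.array([fire(w_or,  np.array([x1, x2])),
                  fire(w_and, np.array([x1, x2]))])
    return fire(w_out, h)

for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print((x1, x2), "->", xor(float(x1), float(x2)))   # -1, +1, +1, -1
```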

34 Implementation of a single logical function by a single neuron (10) The very important conclusion is that the elementary artificial neuron is a linear set separator in the N-dimensional input space, where programmability is ensured by changing the free parameters of the system according to how well it classifies; consequently, an AN can implement a class of logical functions, more precisely the linearly separable ones. To implement an arbitrary logical function we therefore need to deploy several neurons, which results in a network (an ANN).

35 Implementation of a single logical function by a single neuron (11) Feed forward artificial neural network.

36 Pattern recognition by a single neuron (1) Now it will be shown how to solve an elementary pattern recognition task with a single neuron; this ability to recognize patterns intelligently is the reason why the artificial neuron is also called a perceptron. Let us assume that the input is a speech pattern, i.e. a continuous signal, and that the speaker says either "yes" or "no". Many of the ATM systems in the US already follow this pattern: no client orders are executed without verbal validation (your own yes-or-no verbal confirmation is usually needed to proceed with any financial action).

37 Pattern recognition by a single neuron (2) Let us assume that the input is a speech pattern in the form of a continuous signal which represents a "yes" or a "no". This continuous signal goes into an A/D converter, where it is transformed into a digital signal. The next step is to extract the features with an FFT, the result of which is represented by the vector x. All of its components are then fed into an artificial neuron, which computes the weighted sum of the input and compares it with a threshold; if the sum is greater than or equal to the threshold, the output is +1, otherwise it is −1.

38 Pattern recognition by a single neuron (3) The block diagram of the speech pattern recognition by an artificial neuron.
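
The preprocessing chain described above (digitized signal, FFT-based feature extraction, then a single hard-limited neuron) can be sketched in Python roughly as follows; the sample signal, feature length, weights and threshold are placeholders for illustration, not values from the lecture:

```python
import numpy as np

def extract_features(samples, n_features=16):
    """FFT magnitude features of an (already digitized) speech frame."""
    spectrum = np.abs(np.fft.rfft(samples))
    return spectrum[:n_features]

def perceptron_decide(features, w, b):
    """+1 ("yes") if the weighted sum reaches the threshold, -1 ("no") otherwise."""
    return 1 if np.dot(w, features) - b >= 0 else -1

# Placeholder digitized signal and placeholder weights (in practice, w and b
# come from the Gaussian-noise analysis below or from the learning algorithm).
samples = np.sin(np.linspace(0, 2 * np.pi * 5, 256))
x = extract_features(samples)
w = np.ones_like(x)
print(perceptron_decide(x, w, b=10.0))
```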

39 Pattern recognition by a single neuron (4) This is how we would like to solve the speech pattern recognition task: with a separation by a linear hyperplane. The model is now in place; the next question is what the weights of this neuron are, i.e. what the program of the perceptron should be to obtain correct recognition. In a more general view (next Figure), an arbitrary pattern can arrive which has two possible standard values, s^(1) or s^(2), representing the Fourier transform of the "yes" and of the "no".

40 Pattern recognition by a single neuron (5) Generalization of a pattern recognition task by an artificial neuron.

41 Pattern recognition by a single neuron (6) This pattern enters the system, and a preprocessing calculation is applied, depending on the specific task. After the preprocessing, an artificial neuron is implemented which finally provides +1 or −1. Having elaborated this model, the mathematical analysis of the pattern recognition task will now be derived, which proves that an elementary artificial neuron can decide optimally under certain assumptions; in other words, a linear separator is, so to speak, good enough to carry out this pattern recognition task. Furthermore, a method will be given for computing the optimal weight vector.

42 Pattern recognition by a single neuron (7) Pattern recognition under Gaussian noise (1) To address the first theorem, let us examine whether the task can be solved under Gaussian noise. The main problem is that when someone pronounces "yes", everybody pronounces it in many different ways, so there are several individual versions of the same word. Let us assume that we have a standard "yes" as s^(1) and a standard "no" as s^(2); a standard pattern, as mentioned earlier, represents either "yes" or "no".

43 Pattern recognition by a single neuron (8) Pattern recognition under Gaussian noise (2) Nevertheless, we want to be sure that a particular spoken "yes" is going to be classified as "yes". Let x be the observation belonging to the spoken speech pattern. This observation differs from the standard one by a multidimensional Gaussian noise term with zero mean and covariance matrix K. That is the most general assumption one can make when the aim is to design such a system.

44 Pattern recognition by a single neuron (9) Pattern recognition under Gaussian noise (3) Formally, the observation x satisfies

$\mathbf{x} = \boldsymbol{\xi} + \boldsymbol{\nu}$,

where the original signal must be one of the standard patterns,

$\boldsymbol{\xi} \in \{\mathbf{s}^{(1)}, \mathbf{s}^{(2)}\}$,

and the noise is

$\boldsymbol{\nu} \sim \mathcal{N}(\mathbf{0}, \mathbf{K})$.

45 Pattern recognition by a single neuron (10) Pattern recognition under Gaussian noise (4) The general block diagram of the task can be seen in the next Figure. (Figure: pattern recognition under Gaussian noise solved by an AN.)

46 Pattern recognition by a single neuron (11) Pattern recognition under Gaussian noise (5) Based on this statistical model, the conditional densities of the observed speech vector, given that the standard was "yes" or "no", are the following:

$P\left(\mathbf{x} \mid \boldsymbol{\xi} = \mathbf{s}^{(1)}\right) = \frac{1}{\sqrt{(2\pi)^N \det(\mathbf{K})}} \exp\left(-\frac{1}{2}\left(\mathbf{x}-\mathbf{s}^{(1)}\right)^T \mathbf{K}^{-1} \left(\mathbf{x}-\mathbf{s}^{(1)}\right)\right)$,

$P\left(\mathbf{x} \mid \boldsymbol{\xi} = \mathbf{s}^{(2)}\right) = \frac{1}{\sqrt{(2\pi)^N \det(\mathbf{K})}} \exp\left(-\frac{1}{2}\left(\mathbf{x}-\mathbf{s}^{(2)}\right)^T \mathbf{K}^{-1} \left(\mathbf{x}-\mathbf{s}^{(2)}\right)\right)$.

47 Pattern recognition by a single neuron (12) Pattern recognition under Gaussian noise (6) These are the traditional multidimensional Gaussian density functions with different expected values depending on the condition. Geometrically speaking, this again means that a kind of separation problem is given. If we observe x, which before the distortion was either s^(1) or s^(2), the best we can do is to find the closest original point in the likelihood sense.

48 Pattern recognition by a single neuron (13) Pattern recognition under Gaussian noise (7) Pattern classification from a geometrical point of view.

49 Pattern recognition by a single neuron (14) Pattern recognition under Gaussian noise (8) This is the classical Bayesian decision, the method by which minimal error probability can be guaranteed. The Bayes decision is optimal because it always chooses according to the likelihood functions above, i.e. it chooses the more probable original point.

50 Pattern recognition by a single neuron (15) Pattern recognition under Gaussian noise (9) Formally, the following inequality has to be evaluated to decide which standard is the more probable; we decide for s^(1) if

$P\left(\mathbf{x} \mid \boldsymbol{\xi}=\mathbf{s}^{(1)}\right) > P\left(\mathbf{x} \mid \boldsymbol{\xi}=\mathbf{s}^{(2)}\right)$,

that is, if

$\frac{1}{\sqrt{(2\pi)^N \det(\mathbf{K})}}\exp\left(-\frac{1}{2}\left(\mathbf{x}-\mathbf{s}^{(1)}\right)^T \mathbf{K}^{-1}\left(\mathbf{x}-\mathbf{s}^{(1)}\right)\right) > \frac{1}{\sqrt{(2\pi)^N \det(\mathbf{K})}}\exp\left(-\frac{1}{2}\left(\mathbf{x}-\mathbf{s}^{(2)}\right)^T \mathbf{K}^{-1}\left(\mathbf{x}-\mathbf{s}^{(2)}\right)\right)$.

51 Pattern recognition by a single neuron (16) Pattern recognition under Gaussian noise (10) Taking the logarithm of both sides, this can be rewritten as

$-\frac{1}{2}\left(\mathbf{x}-\mathbf{s}^{(1)}\right)^T \mathbf{K}^{-1}\left(\mathbf{x}-\mathbf{s}^{(1)}\right) > -\frac{1}{2}\left(\mathbf{x}-\mathbf{s}^{(2)}\right)^T \mathbf{K}^{-1}\left(\mathbf{x}-\mathbf{s}^{(2)}\right)$.

After expanding the quadratic forms:

$-\frac{1}{2}\mathbf{x}^T\mathbf{K}^{-1}\mathbf{x} + \left(\mathbf{s}^{(1)}\right)^T\mathbf{K}^{-1}\mathbf{x} - \frac{1}{2}\left(\mathbf{s}^{(1)}\right)^T\mathbf{K}^{-1}\mathbf{s}^{(1)} > -\frac{1}{2}\mathbf{x}^T\mathbf{K}^{-1}\mathbf{x} + \left(\mathbf{s}^{(2)}\right)^T\mathbf{K}^{-1}\mathbf{x} - \frac{1}{2}\left(\mathbf{s}^{(2)}\right)^T\mathbf{K}^{-1}\mathbf{s}^{(2)}$.

52 Pattern recognition by a single neuron (17) Pattern recognition under Gaussian noise (11) Rearranging the inequality:

$\left(\mathbf{s}^{(1)}-\mathbf{s}^{(2)}\right)^T \mathbf{K}^{-1}\mathbf{x} > \frac{1}{2}\left(\mathbf{s}^{(1)}\right)^T\mathbf{K}^{-1}\mathbf{s}^{(1)} - \frac{1}{2}\left(\mathbf{s}^{(2)}\right)^T\mathbf{K}^{-1}\mathbf{s}^{(2)}$.

It can now be seen that if we choose

$\mathbf{w}^T = \left(\mathbf{s}^{(1)}-\mathbf{s}^{(2)}\right)^T \mathbf{K}^{-1}$ and $b = \frac{1}{2}\left(\mathbf{s}^{(1)}\right)^T\mathbf{K}^{-1}\mathbf{s}^{(1)} - \frac{1}{2}\left(\mathbf{s}^{(2)}\right)^T\mathbf{K}^{-1}\mathbf{s}^{(2)}$,

then the inequality becomes $\mathbf{w}^T\mathbf{x} > b$.

53 Pattern recognition by a single neuron (18) Pattern recognition under Gaussian noise (12) Therefore we decide for s^(1) if $\mathbf{w}^T\mathbf{x} > b$. As a result, this is a linear set separation problem, so it can be implemented on an artificial neuron as the next Figure shows, and this neuron carries out the solution of the task under Gaussian noise.
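
The optimal weights derived above can be computed directly; a hedged numpy sketch (the standard patterns and the covariance matrix below are made-up values for illustration, and since K is symmetric, (s1 − s2)^T K^{-1} is written equivalently as K^{-1}(s1 − s2)):

```python
import numpy as np

def bayes_perceptron(s1, s2, K):
    """Optimal weights and threshold for two Gaussian classes with common covariance K:
    w = K^{-1} (s1 - s2), b = 0.5 * (s1' K^{-1} s1 - s2' K^{-1} s2)."""
    K_inv = np.linalg.inv(K)
    w = K_inv @ (s1 - s2)
    b = 0.5 * (s1 @ K_inv @ s1 - s2 @ K_inv @ s2)
    return w, b

# Illustrative standard patterns ("yes"/"no" feature vectors) and noise covariance.
s1 = np.array([2.0, 0.5])
s2 = np.array([-1.0, 1.5])
K = np.array([[1.0, 0.2], [0.2, 2.0]])
w, b = bayes_perceptron(s1, s2, K)

x = s1 + np.array([0.1, -0.2])              # noisy observation of the "yes" pattern
print("decision:", 1 if w @ x > b else -1)  # expected +1
```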

54 Pattern recognition by a single neuron (19) Pattern recognition under Gaussian noise (13) Implementation of AN solving the pattern recognition task.

55 Pattern recognition by a single neuron (20) Pattern recognition under Gaussian noise (14) Basically, the AN can decide whenever an observed pattern arrives, once we have downloaded the optimal weights, which can be calculated offline if the standard patterns and the covariance matrix are given. The conclusion is that an elementary artificial neuron can solve any pattern recognition task in which two patterns have to be distinguished.

56 Pattern recognition by a single neuron (21) Pattern recognition under Gaussian noise (14) This model is well defined when the parameters are fully given: if s^(1), s^(2) and K are known, the free parameters w and b can be calculated and the actual neuron can be implemented. In a real-life application, however, these quantities are not known, which is why the next issue is how to obtain w and b in the absence of these parameters. The next topic provides a learning algorithm with which the neuron updates itself optimally.

57 The learning algorithm (1) We do not know the actual standard patterns, and the covariance matrix is also unknown; what we have is only a set of examples, called the learning set:

$X^{+} = \{\mathbf{x} : d = +1\}$, $X^{-} = \{\mathbf{x} : d = -1\}$.

Examples like these can always be given, because the desired label can be told by a human expert. An artificial neuron can function properly only if the two classes X^+ and X^− are linearly separable.

58 The learning algorithm (2) This, in turn, means that the patterns to be classified must be sufficiently separated from each other to ensure that the decision surface can be a hyperplane. The question is how to develop an algorithm which, based on these examples, can find the right decision even though the original parameters are unknown. So instead of the actual parameters, only a learning set is available to us.

59 The learning algorithm (3) Suppose then that the input vectors of the perceptron originate from two linearly separable classes X^+ and X^−. Given the sets of vectors X^+ and X^− to train the classifier (the perceptron), the training process involves adjusting the weight vector towards an optimal w_opt that separates the two classes X^+ and X^−. Separability means that there exists an optimal vector w_opt for which the whole of X^+ and X^− fulfills the following relationships:

$X^{+} = \{\mathbf{x} : \mathbf{w}_{\mathrm{opt}}^T\mathbf{x} \geq 0\}$, $X^{-} = \{\mathbf{x} : \mathbf{w}_{\mathrm{opt}}^T\mathbf{x} < 0\}$.

60 The learning algorithm (4) Furthermore, this linear separation can be carried out with the artificial neuron shown in the next Figure; the only problem is that the program w_opt of the neuron is fully unknown. (Figure: general artificial neuron.)

61 The learning algorithm (5) But we have some examples, represented as the training set

$\tau(K) := \{(\mathbf{x}(k), d(k)),\ k = 1, \ldots, K\}$.

We are looking for the weights with which the neuron performs perfectly on the learning set. Formally, the objective is to find w such that

$d = \mathrm{sgn}\left(\mathbf{w}^T\mathbf{x}\right) = \begin{cases} +1, & \mathbf{x} \in X^{+} \\ -1, & \mathbf{x} \in X^{-}. \end{cases}$

62 The learning algorithm (6) Therefore we have to develop a recursive algorithm, called learning, which learns step by step based on the previous weight vector, the desired output and the actual output of the system. On these specific examples it recursively adapts the weight vector so that it converges to w_opt. This can be described formally as follows:

$\mathbf{w}(k+1) = \Psi\left(\mathbf{w}(k), d(k), y(k)\right) \rightarrow \mathbf{w}_{\mathrm{opt}}$.

63 The learning algorithm (7) More ambitiously, this can be called intelligent, because artificial intelligence is, in a sense, the philosophy that we can learn from examples even when the parameters are fully hidden. The so-called Rosenblatt learning algorithm, which is one manifestation of learning, can be expressed with a special update rule. Given the sets of vectors X^+ and X^− and an initial weight vector w(0), this algorithm finds an optimal weight vector w_opt for the perceptron.

64 The learning algorithm (8) The algorithm for adapting the weight vector of the elementary perceptron may now be formulated as follows: 1. Initialization. Set w(0) = 0, then perform the following computations for time steps k = 1, 2, ... 2. Activation. At time step k, activate the perceptron by applying the continuous-valued input vector x(k) and the desired response d(k). 3. Computation of the actual response. Compute the actual response of the perceptron:

$y(k) = \mathrm{sgn}\left\{\mathbf{w}^T(k)\,\mathbf{x}(k)\right\}$.

65 The learning algorithm (9) 4. Adaptation of the weight vector. Update the weight vector of the perceptron according to the rule

$\mathbf{w}(k+1) = \mathbf{w}(k) + \left[d(k) - y(k)\right]\mathbf{x}(k)$,

where

$d(k) = \begin{cases} +1 & \text{if } \mathbf{x}(k) \text{ belongs to class } X^{+} \\ -1 & \text{if } \mathbf{x}(k) \text{ belongs to class } X^{-}, \end{cases}$

and $\varepsilon(k) = d(k) - y(k)$ is the error function.

66 The learning algorithm (10) 5. Continuation. Increment the time step k by one and go back to step 2. Basically, we feed back the error signal to adapt the weights, and the next Figure shows the block diagram of the algorithm; a short code sketch of the update loop is given after the Figure. Two natural questions arise: does the algorithm converge to any fixed point, and if there is a fixed point, what is the speed of convergence?

67 The learning algorithm (11) The Rosenblatt learning algorithm.
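
A minimal Python sketch of steps 1-5 above, assuming a small hand-made, linearly separable training set; the data, the extended-input convention and the epoch limit are illustrative assumptions, not part of the lecture:

```python
import numpy as np

def sgn(u):
    return 1.0 if u >= 0 else -1.0

def rosenblatt_train(X, d, max_epochs=100):
    """Perceptron rule w(k+1) = w(k) + [d(k) - y(k)] x(k) on extended inputs."""
    w = np.zeros(X.shape[1] + 1)                 # step 1: w(0) = 0
    for _ in range(max_epochs):
        errors = 0
        for x, target in zip(X, d):              # step 2: activation
            xe = np.concatenate(([1.0], x))      # extended input (x0 = 1)
            y = sgn(w @ xe)                      # step 3: actual response
            if y != target:                      # step 4: adaptation on error
                w += (target - y) * xe
                errors += 1
        if errors == 0:                          # every example classified correctly
            break
    return w

# Learning set for the 2-D AND function (linearly separable).
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
d = np.array([-1., -1., -1., 1.])
w = rosenblatt_train(X, d)
print(w, [sgn(w @ np.concatenate(([1.0], x))) for x in X])
```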

68 The learning algorithm (12) Proof of the algorithm convergence (1) Let us define the following sets:

$X^{+} = \{\mathbf{x} : \mathbf{w}_{\mathrm{opt}}^T\mathbf{x} \geq 0\}$,
$X^{-} = \{\mathbf{x} : \mathbf{w}_{\mathrm{opt}}^T\mathbf{x} < 0\}$,
$\tilde{X}^{-} = \{-\mathbf{x} : \mathbf{w}_{\mathrm{opt}}^T(-\mathbf{x}) \geq 0\}$ (the negated elements of $X^{-}$), and

$X = X^{+} \cup \tilde{X}^{-} = \{\mathbf{x} : \mathbf{w}_{\mathrm{opt}}^T\mathbf{x} > 0\}$.

69 The learning algorithm (13) Proof of the algorithm convergence (2) Suppose that $\mathbf{w}^T(k)\mathbf{x}(k) < 0$ for k = 1, 2, ..., while the input vector x(k) belongs to the class X^+; that is, the perceptron incorrectly classifies the vectors x(0), x(1), x(2), ... In this case the perceptron learning rule is

$\mathbf{w}(k+1) = \mathbf{w}(k) + \left[d(k) - y(k)\right]\mathbf{x}(k) = \mathbf{w}(k) + \varepsilon(k)\,\mathbf{x}(k)$,

where the error signal may be 0 (no error) or 2 (erroneous decision with d(k) = +1 and y(k) = −1):

$\varepsilon(k) = d(k) - y(k) \in \{0, 2\}$.

70 The learning algorithm (14) Proof of the algorithm convergence (3) We define two sets at stage k (k = 1, 2, ...) with the updated w(k):

$X_k^{\mathrm{NOK}} = \{\mathbf{x} : \mathbf{w}^T(k)\,\mathbf{x}(k) < 0\}$ and $X_k^{\mathrm{OK}} = \{\mathbf{x} : \mathbf{w}^T(k)\,\mathbf{x}(k) > 0\}$,

where $X_k^{\mathrm{NOK}}$ is the "not OK" set (incorrect classification) and $X_k^{\mathrm{OK}}$ is the "OK" set (correct classification).

71 The learning algorithm (15) Proof of the algorithm convergence (4) Assume that each presented input vector is an element of the corresponding $X_k^{\mathrm{NOK}}$ set, so the weight vector w(k) is updated by the learning rule at every step:

$\mathbf{x}(0) \in X_0^{\mathrm{NOK}},\ \mathbf{x}(1) \in X_1^{\mathrm{NOK}},\ \ldots,\ \mathbf{x}(k-1) \in X_{k-1}^{\mathrm{NOK}}$, with $\varepsilon(0) = 2,\ \varepsilon(1) = 2,\ \ldots,\ \varepsilon(k-1) = 2$, so that

$\mathbf{w}(1) = \mathbf{w}(0) + 2\mathbf{x}(0),\quad \mathbf{w}(2) = \mathbf{w}(1) + 2\mathbf{x}(1),\quad \ldots,\quad \mathbf{w}(k) = \mathbf{w}(k-1) + 2\mathbf{x}(k-1)$.

72 The learning algorithm (16) Proof of the algorithm convergence (5) If the initial condition is w(0) = 0, then we get

$\mathbf{w}(1) = 2\mathbf{x}(0),\quad \mathbf{w}(2) = 2\left(\mathbf{x}(0)+\mathbf{x}(1)\right),\quad \ldots,\quad \mathbf{w}(k) = 2\sum_{i=0}^{k-1}\mathbf{x}(i)$.

Hence, multiplying both sides by $\mathbf{w}_{\mathrm{opt}}^T$, we get

$\mathbf{w}_{\mathrm{opt}}^T\mathbf{w}(k) = 2\sum_{i=0}^{k-1}\mathbf{w}_{\mathrm{opt}}^T\mathbf{x}(i) \geq 2\sum_{i=0}^{k-1} d_{\min} = 2\,k\,d_{\min}$,

where $d_{\min}$ is defined as the positive number $d_{\min} = \min_{\mathbf{x} \in X} \mathbf{w}_{\mathrm{opt}}^T\mathbf{x} > 0$.

73 The learning algorithm (17) Proof of the algorithm convergence (6) Next we make use of the Cauchy-Schwarz inequality,

$|\mathbf{a}^T\mathbf{b}| \leq \|\mathbf{a}\|\,\|\mathbf{b}\|$,

where $\|\cdot\|$ denotes the Euclidean norm of the enclosed vector and the inner product is a scalar quantity. Applied to the vectors $\mathbf{w}_{\mathrm{opt}}$ and $\mathbf{w}(k)$, it gives

$\|\mathbf{w}_{\mathrm{opt}}\|^2\,\|\mathbf{w}(k)\|^2 \geq \left(\mathbf{w}_{\mathrm{opt}}^T\mathbf{w}(k)\right)^2 \geq 4\,k^2\,d_{\min}^2$, i.e. $\|\mathbf{w}(k)\|^2 \geq \frac{4\,k^2\,d_{\min}^2}{\|\mathbf{w}_{\mathrm{opt}}\|^2}$.

74 The learning algorithm (18) Proof of the algorithm convergence (7) On the other hand, taking the squared Euclidean norm of both sides of the update rule, we obtain

$\|\mathbf{w}(k)\|^2 = \|\mathbf{w}(k-1) + 2\mathbf{x}(k-1)\|^2 = \|\mathbf{w}(k-1)\|^2 + 4\,\mathbf{w}^T(k-1)\,\mathbf{x}(k-1) + 4\,\|\mathbf{x}(k-1)\|^2$.

But under the assumption that the perceptron incorrectly classifies the input vector $\mathbf{x}(k-1) \in X_{k-1}^{\mathrm{NOK}}$, we have

$\mathbf{w}^T(k-1)\,\mathbf{x}(k-1) < 0$.

75 The learning algorithm (19) Proof of the algorithm convergence (8) We therefore deduce that

$\|\mathbf{w}(k)\|^2 \leq \|\mathbf{w}(k-1)\|^2 + 4\,\|\mathbf{x}(k-1)\|^2$,

or equivalently

$\|\mathbf{w}(k)\|^2 - \|\mathbf{w}(k-1)\|^2 \leq 4\,\|\mathbf{x}(k-1)\|^2 \leq 4\,d_{\max}$,

where $d_{\max}$ is the positive number defined by $d_{\max} = \max_{\mathbf{x} \in X}\|\mathbf{x}\|^2$.

76 The learning algorithm (20) Proof of the algorithm convergence (9) Adding these inequalities for n = 1, 2, ..., k and invoking the assumed initial condition w(0) = 0, we get

$\|\mathbf{w}(k)\|^2 \leq 4\sum_{n=1}^{k}\|\mathbf{x}(n-1)\|^2 \leq 4\,k\,d_{\max}$,

which gives the upper bound

$\|\mathbf{w}(k)\|^2 \leq 4\,k\,d_{\max}$.

77 The learning algorithm (21) Proof of the algorithm convergence (10) Analyzing the lower and upper bounds at the same time,

$\frac{4\,k^2\,d_{\min}^2}{\|\mathbf{w}_{\mathrm{opt}}\|^2} \leq \|\mathbf{w}(k)\|^2 \leq 4\,k\,d_{\max}$,

we can observe that the lower bound increases faster (O(k²)) than the upper bound (O(k)), so the two bounds can hold simultaneously only for a finite number of update steps k; the algorithm therefore converges after finitely many corrections.
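
Making that last step explicit, a short derivation consistent with the two bounds above (the symbol $k_{\max}$ is introduced here only for illustration):

```latex
% The lower and upper bounds can only hold together while
%   4 k^2 d_min^2 / ||w_opt||^2  <=  4 k d_max .
\[
\frac{4\,k^{2}\,d_{\min}^{2}}{\|\mathbf{w}_{\mathrm{opt}}\|^{2}}
  \;\le\; 4\,k\,d_{\max}
\quad\Longrightarrow\quad
k \;\le\; k_{\max} := \frac{d_{\max}\,\|\mathbf{w}_{\mathrm{opt}}\|^{2}}{d_{\min}^{2}},
\]
% i.e. the perceptron makes at most k_max weight corrections before every
% training vector is classified correctly.
```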

78 Problems and Questions (1) Describe the perceptron convergence theorem (algorithm), including the proof of convergence and the order of convergence! Design an analog circuit to realize a single artificial neuron! Give the weights and biases of the neurons implementing the 5-dimensional NAND and NOR logical functions! Give the weights in the case of a 12-dimensional input!

79 Problems and Questions (2) Why is it impossible to solve the XOR problem with a single neuron? Give the network implementation of the XOR problem using 3 artificial neurons! Can a pattern recognition task (recognizing only two distinct patterns under Gaussian noise) be solved by a single neuron (justify your answer)?

80 Example problems (1) Solve the following classification task using a single neuron! Solve the problem by designing the weight vector analytically! Solve the problem by adapting the weights with the Rosenblatt learning algorithm, where the initial weights and the learning rate are w_1(0) = 0.7, w_2(0) = 2, b(0) = 0.9 and μ.

81 Example problems (2) In a bearing factory the quality of the produced bearings has to be analyzed. The bearings have to be classified according to two parameters, each given by a mean value with a limited range: radius r = 10 mm ± 1 mm, sleekness d = 0.4 mm ± 0.2 mm. The task has to be solved by a neural network: if the given bearing has parameters within the limited range, the output of the network has to be +1, otherwise it has to be −1. Give the weights by an analytical solution such that the network separates correctly! Plot your solution! Give the block diagram of the separator network!

82 Example problems (3) Give the number of logic functions which can be implemented on a single AN with 2 inputs! Adapt the weights optimally using the perceptron learning rule to realize the NOR logic function on the perceptron! The initial weights are w(0) = (0, 0.5, 1) and the learning rate is μ = 1! Plot the sample points and the separator lines at time step 0 and after the training!

83 Example problems (4) Which logic function is implemented by the given network? Can a one-layer perceptron network also realize this function? If yes, give the network!

84 Example problems (5) The subscribers of an ISP have to be categorized into 4 classes according to their download and upload speeds. We would like to make a neuron-based classifier which can decide which class a subscriber belongs to. Give the structure of the perceptron network which can solve the task! Give the weights, computing them analytically, if the ISP defines the following classes: Draw the separator lines in the x_1 - x_2 input space!

85 Example problems (6) A two-layer neural network is given, as the Figure shows. Give the weights with which the network can solve the set separation problem given in the Table! Plot the solution, giving the decision boundaries!

86 Example problems (7) Give the specific neural network which solves the three-class separation problem given in the Figure! Is it possible to train the given network with the Rosenblatt learning rule?

87 Example problems (8) A perceptron network and a classification task are given in the Figure. Train the network to decide optimally with the perceptron learning rule! Let the initial weights be as follows:

88 Example problems (9) Plot the decision ranges of the following perceptron network!

89 Example problems (10) We would like to implement the A B + C logic function with the perceptron network given in the Figure. Give the weights of the network! How could this function be implemented on a two-layer perceptron network? Draw the network and give the weights!

90 Example problems (11) The classification task is given in the Figure. Give the perceptron network (with concrete weights) solving the problem. Can the network be trained with the perceptron learning rule? How many classes can be separated with this network?

91 Example problems (12) The classification task is given in the Figure. Give the perceptron network (with concrete weights) solving the problem. Can the network be trained with the perceptron learning rule?

92 Example problems (13) We would like to approximate the nonlinear decision curve of the set separation task given in the Figure with straight lines, using a network which contains three perceptrons in the hidden layer. Give the weights of the network! Create a training set τ(5) with K = 5 elements for the nonlinear set separation problem! Draw the polygonal decision boundary of the AN network!

93 Summary The McCulloch-Pitts neuron can implement plenty of logical functions; moreover, a combination of such neurons (a neural network) can implement any logical function. The McCulloch-Pitts (or artificial) neuron acts as a linear separator. Pattern recognition can be solved using a perceptron (one AN or one layer of neurons). The artificial neuron adapts to its environment using a training set (input and desired-output pairs) in polynomial time. Next lecture: the Hopfield network, the Hopfield net as associative memory and combinatorial optimizer.


More information

Artificial Neural Networks (ANN) Xiaogang Su, Ph.D. Department of Mathematical Science University of Texas at El Paso

Artificial Neural Networks (ANN) Xiaogang Su, Ph.D. Department of Mathematical Science University of Texas at El Paso Artificial Neural Networks (ANN) Xiaogang Su, Ph.D. Department of Mathematical Science University of Texas at El Paso xsu@utep.edu Fall, 2018 Outline Introduction A Brief History ANN Architecture Terminology

More information

CMSC 421: Neural Computation. Applications of Neural Networks

CMSC 421: Neural Computation. Applications of Neural Networks CMSC 42: Neural Computation definition synonyms neural networks artificial neural networks neural modeling connectionist models parallel distributed processing AI perspective Applications of Neural Networks

More information

Linear & nonlinear classifiers

Linear & nonlinear classifiers Linear & nonlinear classifiers Machine Learning Hamid Beigy Sharif University of Technology Fall 1396 Hamid Beigy (Sharif University of Technology) Linear & nonlinear classifiers Fall 1396 1 / 44 Table

More information

Artificial Intelligence

Artificial Intelligence Artificial Intelligence Jeff Clune Assistant Professor Evolving Artificial Intelligence Laboratory Announcements Be making progress on your projects! Three Types of Learning Unsupervised Supervised Reinforcement

More information

Part 8: Neural Networks

Part 8: Neural Networks METU Informatics Institute Min720 Pattern Classification ith Bio-Medical Applications Part 8: Neural Netors - INTRODUCTION: BIOLOGICAL VS. ARTIFICIAL Biological Neural Netors A Neuron: - A nerve cell as

More information

PMR5406 Redes Neurais e Lógica Fuzzy Aula 3 Single Layer Percetron

PMR5406 Redes Neurais e Lógica Fuzzy Aula 3 Single Layer Percetron PMR5406 Redes Neurais e Aula 3 Single Layer Percetron Baseado em: Neural Networks, Simon Haykin, Prentice-Hall, 2 nd edition Slides do curso por Elena Marchiori, Vrije Unviersity Architecture We consider

More information

Learning and Memory in Neural Networks

Learning and Memory in Neural Networks Learning and Memory in Neural Networks Guy Billings, Neuroinformatics Doctoral Training Centre, The School of Informatics, The University of Edinburgh, UK. Neural networks consist of computational units

More information

Pattern Recognition Prof. P. S. Sastry Department of Electronics and Communication Engineering Indian Institute of Science, Bangalore

Pattern Recognition Prof. P. S. Sastry Department of Electronics and Communication Engineering Indian Institute of Science, Bangalore Pattern Recognition Prof. P. S. Sastry Department of Electronics and Communication Engineering Indian Institute of Science, Bangalore Lecture - 27 Multilayer Feedforward Neural networks with Sigmoidal

More information

Artifical Neural Networks

Artifical Neural Networks Neural Networks Artifical Neural Networks Neural Networks Biological Neural Networks.................................. Artificial Neural Networks................................... 3 ANN Structure...........................................

More information

CS:4420 Artificial Intelligence

CS:4420 Artificial Intelligence CS:4420 Artificial Intelligence Spring 2018 Neural Networks Cesare Tinelli The University of Iowa Copyright 2004 18, Cesare Tinelli and Stuart Russell a a These notes were originally developed by Stuart

More information

Artificial Neural Networks Examination, March 2004

Artificial Neural Networks Examination, March 2004 Artificial Neural Networks Examination, March 2004 Instructions There are SIXTY questions (worth up to 60 marks). The exam mark (maximum 60) will be added to the mark obtained in the laborations (maximum

More information

Input layer. Weight matrix [ ] Output layer

Input layer. Weight matrix [ ] Output layer MASSACHUSETTS INSTITUTE OF TECHNOLOGY Department of Electrical Engineering and Computer Science 6.034 Artificial Intelligence, Fall 2003 Recitation 10, November 4 th & 5 th 2003 Learning by perceptrons

More information

Unit 8: Introduction to neural networks. Perceptrons

Unit 8: Introduction to neural networks. Perceptrons Unit 8: Introduction to neural networks. Perceptrons D. Balbontín Noval F. J. Martín Mateos J. L. Ruiz Reina A. Riscos Núñez Departamento de Ciencias de la Computación e Inteligencia Artificial Universidad

More information

Artificial Neural Networks Examination, June 2004

Artificial Neural Networks Examination, June 2004 Artificial Neural Networks Examination, June 2004 Instructions There are SIXTY questions (worth up to 60 marks). The exam mark (maximum 60) will be added to the mark obtained in the laborations (maximum

More information

Sections 18.6 and 18.7 Artificial Neural Networks

Sections 18.6 and 18.7 Artificial Neural Networks Sections 18.6 and 18.7 Artificial Neural Networks CS4811 - Artificial Intelligence Nilufer Onder Department of Computer Science Michigan Technological University Outline The brain vs artifical neural networks

More information

COMP-4360 Machine Learning Neural Networks

COMP-4360 Machine Learning Neural Networks COMP-4360 Machine Learning Neural Networks Jacky Baltes Autonomous Agents Lab University of Manitoba Winnipeg, Canada R3T 2N2 Email: jacky@cs.umanitoba.ca WWW: http://www.cs.umanitoba.ca/~jacky http://aalab.cs.umanitoba.ca

More information

Machine Learning for Large-Scale Data Analysis and Decision Making A. Neural Networks Week #6

Machine Learning for Large-Scale Data Analysis and Decision Making A. Neural Networks Week #6 Machine Learning for Large-Scale Data Analysis and Decision Making 80-629-17A Neural Networks Week #6 Today Neural Networks A. Modeling B. Fitting C. Deep neural networks Today s material is (adapted)

More information

Νεςπο-Ασαυήρ Υπολογιστική Neuro-Fuzzy Computing

Νεςπο-Ασαυήρ Υπολογιστική Neuro-Fuzzy Computing Νεςπο-Ασαυήρ Υπολογιστική Neuro-Fuzzy Computing ΗΥ418 Διδάσκων Δημήτριος Κατσαρός @ Τμ. ΗΜΜΥ Πανεπιστήμιο Θεσσαλίαρ Διάλεξη 4η 1 Perceptron s convergence 2 Proof of convergence Suppose that we have n training

More information

22c145-Fall 01: Neural Networks. Neural Networks. Readings: Chapter 19 of Russell & Norvig. Cesare Tinelli 1

22c145-Fall 01: Neural Networks. Neural Networks. Readings: Chapter 19 of Russell & Norvig. Cesare Tinelli 1 Neural Networks Readings: Chapter 19 of Russell & Norvig. Cesare Tinelli 1 Brains as Computational Devices Brains advantages with respect to digital computers: Massively parallel Fault-tolerant Reliable

More information

The Perceptron Algorithm 1

The Perceptron Algorithm 1 CS 64: Machine Learning Spring 5 College of Computer and Information Science Northeastern University Lecture 5 March, 6 Instructor: Bilal Ahmed Scribe: Bilal Ahmed & Virgil Pavlu Introduction The Perceptron

More information

Using Variable Threshold to Increase Capacity in a Feedback Neural Network

Using Variable Threshold to Increase Capacity in a Feedback Neural Network Using Variable Threshold to Increase Capacity in a Feedback Neural Network Praveen Kuruvada Abstract: The article presents new results on the use of variable thresholds to increase the capacity of a feedback

More information

Chapter 2 Single Layer Feedforward Networks

Chapter 2 Single Layer Feedforward Networks Chapter 2 Single Layer Feedforward Networks By Rosenblatt (1962) Perceptrons For modeling visual perception (retina) A feedforward network of three layers of units: Sensory, Association, and Response Learning

More information

CS 4700: Foundations of Artificial Intelligence

CS 4700: Foundations of Artificial Intelligence CS 4700: Foundations of Artificial Intelligence Prof. Bart Selman selman@cs.cornell.edu Machine Learning: Neural Networks R&N 18.7 Intro & perceptron learning 1 2 Neuron: How the brain works # neurons

More information

Neural Networks biological neuron artificial neuron 1

Neural Networks biological neuron artificial neuron 1 Neural Networks biological neuron artificial neuron 1 A two-layer neural network Output layer (activation represents classification) Weighted connections Hidden layer ( internal representation ) Input

More information

Neural Networks. Fundamentals of Neural Networks : Architectures, Algorithms and Applications. L, Fausett, 1994

Neural Networks. Fundamentals of Neural Networks : Architectures, Algorithms and Applications. L, Fausett, 1994 Neural Networks Neural Networks Fundamentals of Neural Networks : Architectures, Algorithms and Applications. L, Fausett, 1994 An Introduction to Neural Networks (nd Ed). Morton, IM, 1995 Neural Networks

More information

Neural Nets in PR. Pattern Recognition XII. Michal Haindl. Outline. Neural Nets in PR 2

Neural Nets in PR. Pattern Recognition XII. Michal Haindl. Outline. Neural Nets in PR 2 Neural Nets in PR NM P F Outline Motivation: Pattern Recognition XII human brain study complex cognitive tasks Michal Haindl Faculty of Information Technology, KTI Czech Technical University in Prague

More information

18.6 Regression and Classification with Linear Models

18.6 Regression and Classification with Linear Models 18.6 Regression and Classification with Linear Models 352 The hypothesis space of linear functions of continuous-valued inputs has been used for hundreds of years A univariate linear function (a straight

More information