Chapter ML:VI. Neural Networks

VI. Neural Networks
- Perceptron Learning
- Gradient Descent
- Multilayer Perceptron
- Radial Basis Functions

The Biological Model

[Figure: simplified model of a neuron, with its dendrites, cell body, axon, and a synapse.]

The Biological Model (continued)

Neuron characteristics:
- The numerous dendrites of a neuron serve as its input channels for electrical signals. At particular contact points between the dendrites, the so-called synapses, electrical signals can be initiated.
- A synapse can initiate signals of different strengths, where the strength is encoded by the frequency of a pulse train.
- The cell body of a neuron accumulates the incoming signals. If a particular stimulus threshold is exceeded, the cell body generates a signal, which is output via the axon.
- The processing of the signals is unidirectional (from left to right in the figure).

History

1943  Warren McCulloch and Walter Pitts present a model of the neuron.
1949  Donald Hebb postulates a new learning paradigm: reinforcement only for active neurons (those neurons that are involved in a decision process).
1958  Frank Rosenblatt develops the perceptron model.
1962  Rosenblatt proves the perceptron convergence theorem.
1969  Marvin Minsky and Seymour Papert publish a book on the limitations of the perceptron model.
1986  David Rumelhart and James McClelland present the multilayer perceptron.

The Perceptron of Rosenblatt [1958]

[Figure: the inputs $x_1, x_2, \dots, x_p$ are multiplied by the weights $w_1, w_2, \dots, w_p$ and fed into a summation unit $\Sigma$ with threshold $\theta$, which produces the output $y$.]

$x_j, w_j \in \mathbb{R},\; j = 1, \dots, p$

Remarks:
- The perceptron of Rosenblatt is based on the neuron model of McCulloch and Pitts.
- The perceptron is a feed-forward system.

Specification of Classification Problems [ML Introduction]

Characterization of the model (model world):
- X is a set of feature vectors, also called feature space, $X \subseteq \mathbb{R}^p$.
- $C = \{0, 1\}$ is a set of classes ($C = \{-1, 1\}$ in the regression setting).
- $c: X \to C$ is the ideal classifier for X; c is approximated by y (perceptron).
- $D = \{(x_1, c(x_1)), \dots, (x_n, c(x_n))\} \subseteq X \times C$ is a set of examples.

What could the hypothesis space H look like?

Computation in the Perceptron [Regression]

If $\sum_{j=1}^{p} w_j x_j \geq \theta$ then $y(x) = 1$, and
if $\sum_{j=1}^{p} w_j x_j < \theta$ then $y(x) = 0$,

where $\sum_{j=1}^{p} w_j x_j = w^T x$ (or other notations for the scalar product).

[Figure: step function that jumps from 0 to 1 where $\sum_{j=1}^{p} w_j x_j$ reaches $\theta$.]

A hypothesis is determined by $\theta, w_1, \dots, w_p$.

Computation in the Perceptron (continued)

$y(x) = \mathrm{heaviside}\!\left(\sum_{j=1}^{p} w_j x_j - \theta\right) = \mathrm{heaviside}\!\left(\sum_{j=0}^{p} w_j x_j\right)$ with $w_0 = -\theta$, $x_0 = 1$

[Figure: the perceptron with the extended input $x_0 = 1$ and weight $w_0 = -\theta$; the summation unit now compares against the threshold 0 and outputs y.]

A hypothesis is determined by $w_0, w_1, \dots, w_p$.

Remarks:
- If the weight vector is extended by $w_0 = -\theta$, and if the feature vectors are extended by the constant feature $x_0 = 1$, the learning algorithm gets a canonical form. Implementations of neural networks often introduce this extension implicitly.
- Be careful with regard to the dimensionality of the weight vector: it is always denoted as w here, regardless of whether the $w_0$-dimension, with $w_0 = -\theta$, is included.
- The function heaviside is named after the mathematician Oliver Heaviside. [Heaviside step function, O. Heaviside]
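
The canonical (extended) form can be made concrete with a small code sketch. This is not part of the original slides; it is a minimal illustration in Python, assuming numpy, with the helper names heaviside and perceptron_output chosen here for convenience.

```python
import numpy as np

def heaviside(z):
    """Step function: 1 if z >= 0, else 0."""
    return 1 if z >= 0 else 0

def perceptron_output(w, x):
    """Perceptron output y(x) = heaviside(w^T x).

    w and x are (p+1)-dimensional vectors, where x[0] = 1 and
    w[0] = -theta encode the threshold in canonical form.
    """
    return heaviside(np.dot(w, x))

# Example: two inputs, threshold theta = 0.5
theta = 0.5
w = np.array([-theta, 0.4, 0.3])   # w_0 = -theta, w_1, w_2
x = np.array([1.0, 1.0, 1.0])      # x_0 = 1, x_1, x_2
print(perceptron_output(w, x))     # 0.4 + 0.3 - 0.5 = 0.2 >= 0  ->  1
```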

Weight Adaptation [IGD Algorithm]

Algorithm: PT (Perceptron Training)

Input:    D     Training examples (x, c(x)) with |x| = p + 1, c(x) ∈ {0, 1}.
          η     Learning rate, a small positive constant.
Internal: y(D)  Set of y(x)-values computed from the elements x in D given some w.
Output:   w     Weight vector.

PT(D, η)
  1. initialize_random_weights(w), t = 0
  2. REPEAT
  3.   t = t + 1
  4.   (x, c(x)) = random_select(D)
  5.   error = c(x) − heaviside(w^T x)    // c(x) ∈ {0, 1}, heaviside ∈ {0, 1}, error ∈ {0, 1, −1}
  6.   Δw = η · error · x
  7.   w = w + Δw
  8. UNTIL(convergence(D, y(D)) OR t > t_max)
  9. return(w)

Remarks:
- The variable t denotes the time. At each point in time the learning algorithm gets an example presented and, as a consequence, may adapt the weight vector.
- The weight adaptation rule compares the true class c(x) (the ground truth) to the class computed by the perceptron. In case of a wrong classification of a feature vector x, error is either −1 or +1, regardless of the exact numeric difference between c(x) and w^T x.
- y(D) is the set of y(x)-values given w for the elements x in D.
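
For illustration, a minimal runnable sketch of the PT Algorithm, assuming numpy, 0/1 class labels, and feature vectors that already contain the constant feature x_0 = 1; the function name perceptron_training and the toy AND data set are choices made here, not part of the slides.

```python
import numpy as np

def heaviside(z):
    return 1 if z >= 0 else 0

def perceptron_training(D, eta=0.1, t_max=10000, rng=None):
    """PT Algorithm sketch: D is a list of (x, c) pairs with x including x_0 = 1."""
    rng = rng or np.random.default_rng(0)
    w = rng.uniform(-0.5, 0.5, size=len(D[0][0]))    # 1. random initialization
    t = 0
    while t <= t_max:                                # 2.-8. REPEAT ... UNTIL
        t += 1
        x, c = D[rng.integers(len(D))]               # 4. random_select(D)
        error = c - heaviside(np.dot(w, x))          # 5. error in {0, 1, -1}
        w = w + eta * error * x                      # 6.-7. weight adaptation
        if all(heaviside(np.dot(w, x)) == c for x, c in D):  # convergence check
            break
    return w

# Toy example: logical AND, inputs extended by x_0 = 1
D = [(np.array([1., 0., 0.]), 0), (np.array([1., 0., 1.]), 0),
     (np.array([1., 1., 0.]), 0), (np.array([1., 1., 1.]), 1)]
w = perceptron_training(D)
```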

Weight Adaptation: Illustration in Input Space

[Figure: two plots in the $(x_1, x_2)$ plane showing a hyperplane with normal vector n and offset d.]

Definition of an (affine) hyperplane: $L = \{\, x \mid n^T x = d \,\}$ [Wikipedia]

- n denotes a normal vector that is perpendicular to the hyperplane L.
- If $\|n\| = 1$, then $n^T x - d$ gives the distance of any point x to L.
- If $\operatorname{sgn}(n^T x_1 - d) = \operatorname{sgn}(n^T x_2 - d)$, then $x_1$ and $x_2$ lie on the same side of the hyperplane.

Weight Adaptation: Illustration in Input Space (continued)

[Figure: the same two plots, now with the normal vector labeled $(w_1, \dots, w_p)^T$ and the offset labeled $\theta$.]

Definition of an (affine) hyperplane:

$w^T x = 0 \;\Leftrightarrow\; \sum_{j=1}^{p} w_j x_j = \theta = -w_0$

(hyperplane definition as before, with notation taken from the classification problem setting)

Remarks:
- A perceptron defines a hyperplane that is perpendicular (= normal) to $(w_1, \dots, w_p)^T$. $\theta$ or $w_0$ specify the offset of the hyperplane from the origin, along $(w_1, \dots, w_p)^T$ and as a multiple of $1/\|(w_1, \dots, w_p)^T\|$.
- The set of possible weight vectors $w = (w_0, w_1, \dots, w_p)^T$ forms the hypothesis space H.
- Weight adaptation means learning, and the shown learning paradigm is supervised.
- For the weight adaptation in Lines 6-7 of the PT Algorithm, note that if some $x_j$ is zero, $\Delta w_j$ will be zero as well. Keyword: Hebbian learning [Hebb 1949]
- Note that here (and in the following illustrations) the hyperplane movement is not the result of solving a regression problem in the (p+1)-dimensional input-output space, where the residuals are to be minimized. Instead, the PT Algorithm takes each misclassified example as a trigger to correct the hyperplane's normal vector, without taking the effect on the other residuals into account.
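
The side-of-the-hyperplane test used above can be sketched as follows; this is a small illustration assuming numpy, with hypothetical example vectors and the helper name signed_distance chosen here.

```python
import numpy as np

def signed_distance(w, x):
    """Signed distance of x (with x[0] = 1) to the hyperplane w^T x = 0.

    The normal vector is (w_1, ..., w_p); dividing by its norm turns the
    raw activation w^T x into a geometric distance.
    """
    normal = w[1:]
    return np.dot(w, x) / np.linalg.norm(normal)

w = np.array([-0.5, 0.4, 0.3])          # w_0 = -theta = -0.5
x1 = np.array([1.0, 1.0, 1.0])
x2 = np.array([1.0, 0.2, 0.2])
# Same sign -> same side of the hyperplane, different sign -> opposite sides.
print(np.sign(signed_distance(w, x1)), np.sign(signed_distance(w, x2)))
```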

Example

The examples are presented to the perceptron. The perceptron computes a value that is interpreted as a class label.

Example (continued)

Encoding:
- The encoding of the examples is based on expressive features such as the number of line crossings, most acute angle, longest line, etc.
- The class label, c(x), is encoded as a number. Examples from the one class are labeled with 1, examples from the other class are labeled with 0.

$\underbrace{\begin{pmatrix} x_{11} \\ x_{12} \\ \vdots \\ x_{1p} \end{pmatrix}, \;\dots,\; \begin{pmatrix} x_{k1} \\ x_{k2} \\ \vdots \\ x_{kp} \end{pmatrix}}_{\text{Class } c(x) = 1} \qquad\qquad \underbrace{\begin{pmatrix} x_{l1} \\ x_{l2} \\ \vdots \\ x_{lp} \end{pmatrix}, \;\dots,\; \begin{pmatrix} x_{m1} \\ x_{m2} \\ \vdots \\ x_{mp} \end{pmatrix}}_{\text{Class } c(x) = 0}$

Example: Illustration in Input Space [PT Algorithm]

[Sequence of figures in the $(x_1, x_2)$ plane: a possible configuration of the encoded objects of both classes in the feature space X. Step by step, whenever an example is misclassified, the PT Algorithm adapts the normal vector $(w_1, \dots, w_p)^T$ and thereby moves the hyperplane, until all examples lie on the correct side.]

Perceptron Convergence Theorem

Questions:
1. Which kind of learning tasks can be addressed with the functions in the hypothesis space H?
2. Can the PT Algorithm construct such a function for a given task?

Theorem 1 (Perceptron Convergence [Rosenblatt 1962])
Let $X_0$ and $X_1$ be two finite sets with vectors of the form $x = (1, x_1, \dots, x_p)^T$, let $X_1 \cap X_0 = \emptyset$, and let $\hat{w}$ define a separating hyperplane with respect to $X_0$ and $X_1$. Moreover, let D be a set of examples of the form $(x, 0)$, $x \in X_0$, and $(x, 1)$, $x \in X_1$. Then the following holds: If the examples in D are processed with the PT Algorithm, the constructed weight vector w will converge within a finite number of iterations.

Perceptron Convergence Theorem: Proof

Preliminaries:
- The sets $X_1$ and $X_0$ are separated by a hyperplane $\hat{w}$. The proof requires that for all $x \in X_1$ the inequality $\hat{w}^T x > 0$ holds. This condition is always fulfilled, as the following consideration shows. Let $x \in X_1$ with $\hat{w}^T x = 0$. Since $X_0$ is finite, the members $x \in X_0$ have a minimum positive distance $\delta$ with regard to the hyperplane $\hat{w}$. Hence, $\hat{w}$ can be moved by $\delta/2$ towards $X_0$, resulting in a new hyperplane $\hat{w}'$ that still fulfills $(\hat{w}')^T x < 0$ for all $x \in X_0$, but that now also fulfills $(\hat{w}')^T x > 0$ for all $x \in X_1$.
- By defining $X = X_1 \cup \{\, -x \mid x \in X_0 \,\}$, the searched w fulfills $w^T x > 0$ for all $x \in X$. Then, with $c(x) = 1$ for all $x \in X$, error $\in \{0, 1\}$ (instead of $\{0, 1, -1\}$). [PT Algorithm, Line 5]
- The PT Algorithm performs a number of iterations, where $w(t)$ denotes the weight vector for iteration t, which forms the basis for the weight vector $w(t+1)$. $x(t) \in X$ denotes the feature vector chosen in round t. The first (and randomly chosen) weight vector is denoted as $w(0)$.
- Recall the Cauchy-Schwarz inequality: $\|a\|^2 \cdot \|b\|^2 \geq (a^T b)^2$, where $\|x\| := \sqrt{x^T x}$ denotes the Euclidean norm.

Perceptron Convergence Theorem: Proof (continued)

Line of argument:
(a) We state a lower bound for how much w must change from its initial value after n iterations (to become a separating hyperplane). The derivation of this lower bound exploits the presupposed linear separability of $X_0$ and $X_1$.
(b) We state an upper bound for how much w can change from its initial value after n iterations. The derivation of this upper bound exploits the finiteness of $X_0$ and $X_1$, which in turn guarantees the existence of an upper bound for the norm of the maximum feature vector.
(c) We observe that the lower bound grows quadratically in n, whereas the upper bound grows linearly. From the relation lower bound ≤ upper bound we derive a finite upper bound for n.

Perceptron Convergence Theorem: Proof (continued)

1. The PT Algorithm computes in iteration t the scalar product $w(t)^T x(t)$. If classified correctly, $w(t)^T x(t) > 0$ and w is unchanged. Otherwise, $w(t+1) = w(t) + \eta\, x(t)$. [Lines 5-7]

2. A sequence of n incorrectly classified feature vectors, $(x(t))$, along with the weight adaptation $w(t+1) = w(t) + \eta\, x(t)$, results in the series $w(n)$:

$w(1) = w(0) + \eta\, x(0)$
$w(2) = w(1) + \eta\, x(1) = w(0) + \eta\, x(0) + \eta\, x(1)$
$\;\vdots$
$w(n) = w(0) + \eta\, x(0) + \dots + \eta\, x(n-1)$

3. The hyperplane defined by $\hat{w}$ separates $X_1$ and $X_0$: $\forall x \in X:\ \hat{w}^T x > 0$. Let $\delta := \min_{x \in X} \hat{w}^T x$. Observe that $\delta > 0$ holds.

4. Analyze the scalar product of $w(n)$ and $\hat{w}$:

$\hat{w}^T w(n) = \hat{w}^T w(0) + \eta\, \hat{w}^T x(0) + \dots + \eta\, \hat{w}^T x(n-1)$
$\hat{w}^T w(n) \;\geq\; \hat{w}^T w(0) + n\eta\delta \;\geq\; 0$   (for $n \geq n_0$ with sufficiently large $n_0 \in \mathbb{N}$)
$(\hat{w}^T w(n))^2 \;\geq\; (\hat{w}^T w(0) + n\eta\delta)^2$

5. Apply the Cauchy-Schwarz inequality:

$\|\hat{w}\|^2 \cdot \|w(n)\|^2 \;\geq\; (\hat{w}^T w(n))^2 \;\geq\; (\hat{w}^T w(0) + n\eta\delta)^2 \;\;\Rightarrow\;\; \|w(n)\|^2 \;\geq\; \frac{(\hat{w}^T w(0) + n\eta\delta)^2}{\|\hat{w}\|^2}$

Perceptron Convergence Theorem: Proof (continued)

6. Consider again the weight adaptation $w(t+1) = w(t) + \eta\, x(t)$:

$\begin{aligned} \|w(t+1)\|^2 &= \|w(t) + \eta\, x(t)\|^2 = (w(t) + \eta\, x(t))^T (w(t) + \eta\, x(t)) \\ &= w(t)^T w(t) + \eta^2\, x(t)^T x(t) + 2\eta\, w(t)^T x(t) \\ &\leq \|w(t)\|^2 + \eta^2 \|x(t)\|^2 \qquad \text{(since } w(t)^T x(t) < 0\text{)} \end{aligned}$

7. Consider the series $w(n)$ from Step 2:

$\begin{aligned} \|w(n)\|^2 &\leq \|w(n-1)\|^2 + \eta^2 \|x(n-1)\|^2 \\ &\leq \|w(n-2)\|^2 + \eta^2 \|x(n-2)\|^2 + \eta^2 \|x(n-1)\|^2 \\ &\leq \dots \leq \|w(0)\|^2 + \eta^2 \|x(0)\|^2 + \dots + \eta^2 \|x(n-1)\|^2 = \|w(0)\|^2 + \eta^2 \sum_{i=0}^{n-1} \|x(i)\|^2 \end{aligned}$

8. With $\varepsilon := \max_{x \in X} \|x\|^2$ follows: $\|w(n)\|^2 \leq \|w(0)\|^2 + n\eta^2 \varepsilon$

Perceptron Convergence Theorem: Proof (continued)

9. Both inequalities (see Step 5 and Step 8) must be fulfilled:

$\|w(n)\|^2 \geq \frac{(\hat{w}^T w(0) + n\eta\delta)^2}{\|\hat{w}\|^2}$   and   $\|w(n)\|^2 \leq \|w(0)\|^2 + n\eta^2 \varepsilon$

$\Rightarrow\; \frac{(\hat{w}^T w(0) + n\eta\delta)^2}{\|\hat{w}\|^2} \;\leq\; \|w(n)\|^2 \;\leq\; \|w(0)\|^2 + n\eta^2 \varepsilon$

$\Rightarrow\; \frac{(\hat{w}^T w(0) + n\eta\delta)^2}{\|\hat{w}\|^2} \;\leq\; \|w(0)\|^2 + n\eta^2 \varepsilon$

Set $w(0) = 0$:

$\frac{n^2 \eta^2 \delta^2}{\|\hat{w}\|^2} \;\leq\; n\eta^2 \varepsilon \;\;\Rightarrow\;\; n \;\leq\; \frac{\varepsilon}{\delta^2}\, \|\hat{w}\|^2$

$\Rightarrow$ The PT Algorithm terminates within a finite number of iterations.

Observe: $\frac{(\hat{w}^T w(0) + n\eta\delta)^2}{\|\hat{w}\|^2} \in \Theta(n^2)$ and $\|w(0)\|^2 + n\eta^2 \varepsilon \in \Theta(n)$.
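
As a small numeric illustration of the bound $n \leq (\varepsilon/\delta^2)\,\|\hat{w}\|^2$, the following sketch computes δ, ε, and the bound for a hand-made toy data set; the data and the separating hyperplane w_hat are assumptions chosen here for illustration only.

```python
import numpy as np

# Toy linearly separable data (x_0 = 1 included); w_hat is a known separating
# hyperplane, chosen by hand for this illustration.
X1 = [np.array([1., 2., 2.]), np.array([1., 3., 1.])]       # class 1
X0 = [np.array([1., -2., -1.]), np.array([1., -1., -3.])]   # class 0
w_hat = np.array([0., 1., 1.])                               # separates X1 from X0

# Reflect X0 so that w^T x > 0 is required for all x in X (proof preliminaries).
X = X1 + [-x for x in X0]

delta = min(np.dot(w_hat, x) for x in X)        # delta = min_x w_hat^T x > 0
eps = max(np.dot(x, x) for x in X)              # eps = max_x ||x||^2
bound = eps / delta**2 * np.dot(w_hat, w_hat)   # n <= eps/delta^2 * ||w_hat||^2
print(f"delta={delta}, eps={eps}, iteration bound n <= {bound:.1f}")
```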

Perceptron Convergence Theorem: Discussion

- If a separating hyperplane between $X_0$ and $X_1$ exists, the PT Algorithm will converge. If no such hyperplane exists, convergence cannot be guaranteed.
- A separating hyperplane can be found in polynomial time with linear programming. The PT Algorithm, however, may require an exponential number of iterations.
- Classification problems with noise (right-hand side) are problematic:

[Figure: two data sets in the $(x_1, x_2)$ plane; the right-hand one contains noisy, overlapping examples.]

Gradient Descent: Classification Error

- Gradient descent considers the true error (better: the hyperplane distance) and will converge even if $X_1$ and $X_0$ cannot be separated by a hyperplane. However, this convergence process is of an asymptotic nature, and no finite iteration bound can be stated.
- Gradient descent applies the so-called delta rule, which will be derived in the following. The delta rule forms the basis of the backpropagation algorithm.

Consider the linear perceptron without a threshold function:

$y(x) = w^T x = \sum_{j=0}^{p} w_j x_j$   [Heaviside]

The classification error Err(w) of a weight vector (= hypothesis) w with regard to D can be defined as follows:

$\mathit{Err}(w) = \frac{1}{2} \sum_{(x,\,c(x)) \in D} (c(x) - y(x))^2$   [Singleton error]

Gradient Descent: Classification Error (continued)

[Figure: error surface Err(w) plotted over two weight dimensions.]

The gradient of Err(w), $\nabla \mathit{Err}(w)$, defines the steepest ascent or descent:

$\nabla \mathit{Err}(w) = \left( \frac{\partial \mathit{Err}(w)}{\partial w_0}, \frac{\partial \mathit{Err}(w)}{\partial w_1}, \dots, \frac{\partial \mathit{Err}(w)}{\partial w_p} \right)$

Gradient Descent: Weight Adaptation

$w \leftarrow w + \Delta w$, where $\Delta w = -\eta\, \nabla \mathit{Err}(w)$

Componentwise ($j = 0, \dots, p$) weight adaptation [PT Algorithm]:

$w_j \leftarrow w_j + \Delta w_j$, where $\Delta w_j = -\eta\, \frac{\partial \mathit{Err}(w)}{\partial w_j} = \eta \sum_{(x,\,c(x)) \in D} (c(x) - w^T x)\, x_j$

Derivation of the partial derivative:

$\begin{aligned} \frac{\partial \mathit{Err}(w)}{\partial w_j} &= \frac{\partial}{\partial w_j}\, \frac{1}{2} \sum_{(x,\,c(x)) \in D} (c(x) - y(x))^2 = \frac{1}{2} \sum_{(x,\,c(x)) \in D} \frac{\partial}{\partial w_j} (c(x) - y(x))^2 \\ &= \frac{1}{2} \sum_{(x,\,c(x)) \in D} 2\,(c(x) - y(x)) \cdot \frac{\partial}{\partial w_j} (c(x) - w^T x) = \sum_{(x,\,c(x)) \in D} (c(x) - w^T x)\,(-x_j) \end{aligned}$

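The error function and its gradient can be written down directly; a minimal sketch assuming numpy, with err and grad_err as illustrative names and a tiny hypothetical data set.

```python
import numpy as np

def err(w, D):
    """Err(w) = 1/2 * sum over D of (c(x) - w^T x)^2 for the linear perceptron."""
    return 0.5 * sum((c - np.dot(w, x)) ** 2 for x, c in D)

def grad_err(w, D):
    """Gradient of Err(w): dErr/dw_j = -sum over D of (c(x) - w^T x) * x_j."""
    return -sum((c - np.dot(w, x)) * x for x, c in D)

# One gradient descent step: w <- w - eta * grad Err(w)
D = [(np.array([1., 0.5, 1.5]), 1), (np.array([1., -1.0, -0.5]), 0)]
w = np.zeros(3)
eta = 0.1
w = w - eta * grad_err(w, D)
print(err(w, D))
```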

Gradient Descent: Weight Adaptation: Batch Gradient Descent [IGD Algorithm]

Algorithm: GD (Batch Gradient Descent)

Input:    D     Training examples (x, c(x)) with |x| = p + 1, c(x) ∈ {0, 1} (alternatively c(x) ∈ {−1, 1}).
          η     Learning rate, a small positive constant.
Internal: y(D)  Set of y(x)-values computed from the elements x in D given some w.
Output:   w     Weight vector.

GD(D, η)
  1.  initialize_random_weights(w), t = 0
  2.  REPEAT
  3.    t = t + 1
  4.    Δw = 0
  5.    FOREACH (x, c(x)) ∈ D DO
  6.      error = c(x) − w^T x
  7.      Δw = Δw + η · error · x
  8.    ENDDO
  9.    w = w + Δw
  10. UNTIL(convergence(D, y(D)) OR t > t_max)
  11. return(w)

Remarks:
- $\Delta w$ is set to $-\eta\, \nabla \mathit{Err}(w)$ (and not to $+\eta\, \nabla \mathit{Err}(w)$) in order to descend to the minimum.
- Each GD iteration (REPEAT ... UNTIL) corresponds to finding the direction of steepest error descent, given by $-\nabla \mathit{Err}(w_t) = \sum_{(x,\,c(x)) \in D} (c(x) - w_t^T x)\, x$, and updating $w_t$ by taking a step of length η in this direction.
- Using a constant step size η can severely impair the speed of convergence. When taking the optimal step size $\eta_t := \operatorname{argmin}_\eta \mathit{Err}(w_t - \eta\, \nabla \mathit{Err}(w_t))$ at each iteration t, it can be shown that gradient descent (merely) has a linear rate of convergence. [Meza 2010]
- As criterion for the convergence function may serve the global error, quantified either as the sum of the squared residuals, $\mathit{Err}(w_t)$, or as the norm of the error gradient, $\|\nabla \mathit{Err}(w_t)\|$, which is compared to some small positive bound ε.
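
A compact Python rendering of the GD Algorithm, assuming numpy; the function name batch_gradient_descent, the convergence test on the norm of the accumulated update, and the default parameters are choices made here, not prescribed by the slides.

```python
import numpy as np

def batch_gradient_descent(D, eta=0.05, t_max=1000, eps=1e-6, rng=None):
    """GD sketch: accumulate the weight update over all examples, then apply it.

    D is a list of (x, c) pairs with x including the constant feature x_0 = 1.
    """
    rng = rng or np.random.default_rng(0)
    w = rng.uniform(-0.5, 0.5, size=len(D[0][0]))
    for _ in range(t_max):
        delta_w = np.zeros_like(w)
        for x, c in D:
            error = c - np.dot(w, x)        # residual of the linear perceptron
            delta_w += eta * error * x      # accumulate over the whole batch
        w = w + delta_w
        if np.linalg.norm(delta_w) < eps:   # convergence criterion (see remarks)
            break
    return w
```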

Gradient Descent: Weight Adaptation: Delta Rule

The weight adaptation in the GD Algorithm is set-based: before modifying a weight component in w, the total error of all examples (the "batch") is computed.

Weight adaptation with regard to a single example (x, c(x)) ∈ D:

$\Delta w = \eta\, (c(x) - w^T x)\, x$

This adaptation rule is known under different names:
- delta rule
- Widrow-Hoff rule
- adaline rule
- least mean squares (LMS) rule

The classification error $\mathit{Err}_d(w)$ of a weight vector (= hypothesis) w with regard to a single example d ∈ D, d = (x, c(x)), is given as:

$\mathit{Err}_d(w) = \frac{1}{2} (c(x) - w^T x)^2$   [Batch error]
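
The delta rule for a single example, written as a one-line update; numpy is assumed, and the function name is illustrative.

```python
import numpy as np

def delta_rule_update(w, x, c, eta):
    """Single-example update: w + eta * (c - w^T x) * x (delta / Widrow-Hoff rule)."""
    return w + eta * (c - np.dot(w, x)) * x
```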

Gradient Descent: Weight Adaptation: Incremental Gradient Descent [Algorithms: LMS, GD, PT]

Algorithm: IGD (Incremental Gradient Descent)

Input:    D     Training examples (x, c(x)) with |x| = p + 1, c(x) ∈ {0, 1} (alternatively c(x) ∈ {−1, 1}).
          η     Learning rate, a small positive constant.
Internal: y(D)  Set of y(x)-values computed from the elements x in D given some w.
Output:   w     Weight vector.

IGD(D, η)
  1.  initialize_random_weights(w), t = 0
  2.  REPEAT
  3.    t = t + 1
  4.    FOREACH (x, c(x)) ∈ D DO
  5.      error = c(x) − w^T x
  6.      Δw = η · error · x
  7.      w = w + Δw
  8.    ENDDO
  9.  UNTIL(convergence(D, y(D)) OR t > t_max)
  10. return(w)

Remarks:
- The classification error Err of incremental gradient descent is specific for each training example d ∈ D, d = (x, c(x)): $\mathit{Err}_d(w) = \frac{1}{2} (c(x) - w^T x)^2$
- The sequence of incremental weight adaptations approximates the gradient descent of the batch approach. If η is chosen sufficiently small, this approximation can happen at arbitrary accuracy.
- The computation of the total error of batch gradient descent enables larger weight adaptation increments.
- Compared to batch gradient descent, the example-based weight adaptation of incremental gradient descent can better avoid getting stuck in a local minimum of the error function.
- Incremental gradient descent is also called stochastic gradient descent.
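
A minimal Python sketch of the IGD Algorithm, assuming numpy; the convergence test on the summed per-example errors and the function name are choices made here, not prescribed by the slides.

```python
import numpy as np

def incremental_gradient_descent(D, eta=0.05, t_max=1000, eps=1e-6, rng=None):
    """IGD/SGD sketch: apply the delta rule after every single example."""
    rng = rng or np.random.default_rng(0)
    w = rng.uniform(-0.5, 0.5, size=len(D[0][0]))
    for _ in range(t_max):
        total_error = 0.0
        for x, c in D:
            error = c - np.dot(w, x)     # residual for this example
            w = w + eta * error * x      # immediate (incremental) update
            total_error += 0.5 * error ** 2
        if total_error < eps:            # convergence on the summed example errors
            break
    return w
```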

Remarks (continued):
- When, as is done here, the residual sum of squares (RSS) is chosen as error (loss) function, the incremental gradient descent algorithm [IGD] corresponds to the least mean squares algorithm [LMS].
- The incremental gradient descent algorithm [IGD] looks similar to the perceptron training algorithm [PT], since these algorithms differ only in the error computation (Line 5), where the latter applies the Heaviside function. However, this subtle syntactic difference is a significant conceptual difference, entailing a number of consequences:
  - Gradient descent is a regression approach and exploits the residuals, which are provided by an error function of choice and whose differential is evaluated to control the hyperplane movement.
  - The PT Algorithm is not based on residuals (in the (p+1)-dimensional input-output space) but refers to the input space only, where it simply evaluates the side of the hyperplane as a binary feature (correct side or not).
  - Provided linear separability, the PT Algorithm will converge within a finite number of iterations, which, however, cannot be guaranteed for gradient descent. Gradient descent will converge even if the data is not linearly separable.
  - Data sets can be constructed whose classes are linearly separable, but where gradient descent will not determine a hyperplane that classifies all examples correctly (whereas the PT Algorithm of course does).
