CS4442/9542b Artificial Intelligence II, Prof. Olga Veksler. Lecture 5: Machine Learning. Neural Networks. Many presentation ideas are due to Andrew Ng.


Outline: Motivation: non-linear discriminant functions. Introduction to Neural Networks: inspiration from biology, history, Perceptron, Multilayer Perceptron, practical tips for implementation.

Need for Non-Linear Discriminant. The previous lecture studied the linear discriminant g(x) = w_0 + w_1 x_1 + w_2 x_2. It works for linearly (or almost linearly) separable cases, but many problems are far from linearly separable, so a linear model underfits.

Need for Non-Linear Discriminant. Can use other discriminant functions, like quadratics: g(x) = w_0 + w_1 x_1 + w_2 x_2 + w_12 x_1 x_2 + w_11 x_1^2 + w_22 x_2^2. The methodology is almost the same as in the linear case: f(x) = sign(w_0 + w_1 x_1 + w_2 x_2 + w_12 x_1 x_2 + w_11 x_1^2 + w_22 x_2^2), with feature vector z = [1, x_1, x_2, x_1 x_2, x_1^2, x_2^2] and weight vector a = [w_0, w_1, w_2, w_12, w_11, w_22]. Normalization: multiply negative class samples by -1, then use gradient descent to minimize the Perceptron objective function J_p(a) = sum over misclassified z in Z of (-a^t z).
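To make this recipe concrete, here is a minimal numpy sketch (our own illustration, not from the lecture): it maps each 2-feature sample to the quadratic feature vector z above, multiplies negative-class samples by -1, and runs gradient descent on the Perceptron objective J_p(a).

import numpy as np

def quadratic_features(x):
    # x = [x_1, x_2]  ->  z = [1, x_1, x_2, x_1*x_2, x_1^2, x_2^2]
    x1, x2 = x
    return np.array([1.0, x1, x2, x1 * x2, x1 ** 2, x2 ** 2])

def train_quadratic_perceptron(X, labels, alpha=0.1, epochs=100):
    # labels in {+1, -1}; "normalization": multiply negative samples by -1
    Z = np.array([quadratic_features(x) * y for x, y in zip(X, labels)])
    a = np.zeros(Z.shape[1])
    for _ in range(epochs):
        misclassified = Z[Z @ a <= 0]           # samples with a^t z <= 0
        if len(misclassified) == 0:
            break                               # every sample classified correctly
        a += alpha * misclassified.sum(axis=0)  # step along -grad J_p(a)
    return a                                    # classify with sign(a . quadratic_features(x))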

Need for Non-Linear Discriminant. May need highly non-linear decision boundaries. This would require too many high-order polynomial terms to fit: g(x) = w_0 + w_1 x_1 + w_2 x_2 + w_12 x_1 x_2 + w_11 x_1^2 + w_22 x_2^2 + w_111 x_1^3 + w_112 x_1^2 x_2 + w_122 x_1 x_2^2 + w_222 x_2^3 + even more terms of degree 4 + super many terms of degree k. For n features, there are O(n^k) polynomial terms of degree k. Many real-world problems are modeled with hundreds and even thousands of features; 100^10 is too large a function to deal with.

Neural Networks. Neural Networks correspond to some discriminant function g_NN(x). They can carve out arbitrarily complex decision boundaries without requiring as many terms as polynomial functions. Neural nets were inspired by research on how the human brain works, but also proved quite successful in practice and are used nowadays for a wide variety of applications. It took some time to get them to work; they are now used by the US post office for postal code recognition.

Neural Nets: Character Recognition. http://yann.lecun.com/exdb/lenet/index.html (Yann LeCun et al.)

Brain vs. Computer. Computer: usually one very fast processor, high reliability, designed to solve logic and arithmetic problems with absolute precision, can solve a gazillion arithmetic and logic problems in an hour. Brain: huge number of parallel but relatively slow and unreliable processors, not perfectly precise, not perfectly reliable, evolved (in large part) for pattern recognition, learns to solve various PR problems. We seek inspiration for classification from the human brain.

One Learning Algorithm Hypothesis. The brain does many different things; it seems like it runs many different programs, so it seems we would have to write tons of different programs to mimic the brain. Hypothesis: there is a single underlying learning algorithm shared by different parts of the brain. Evidence comes from neural rewiring experiments: cut the wire from the ear to the auditory cortex and route the signal from the eyes to the auditory cortex; the auditory cortex learns to see, and the animals eventually learn to perform a variety of object recognition tasks. There are other similar rewiring experiments [Roe et al., 1992].

Seeing with the Tongue. Scientists use the amazing ability of the brain to learn in order to retrain brain tissue. Seeing with the tongue: BrainPort technology. A camera is connected to a tongue array sensor and pictures are painted on the tongue: bright pixels correspond to high voltage, gray pixels to medium voltage, black pixels to no voltage. Learning takes from 2 to 10 hours. Some users describe the experience as resembling a low-resolution version of the vision they once had; they are able to recognize high-contrast objects, their location, and movement.

One Learning Algorithm Hypothesis. There is experimental evidence that we can plug any sensor into any part of the brain, and the brain can learn how to deal with it. Since the same physical piece of brain tissue can process sight, sound, etc., maybe there is one learning algorithm that can process sight, sound, etc., and maybe we need to figure out and implement an algorithm that approximates what the brain does. Neural Networks were developed as a simulation of networks of neurons in the human brain.

Neuron: Basic Brain Processor. Neurons (or nerve cells) are special cells that process and transmit information by electrical signaling, in the brain and also in the spinal cord. The human brain has around 10^11 neurons. A neuron connects to other neurons to form a network; each neuron cell communicates with anywhere from 1,000 to 10,000 other neurons.

Neuron: Main Components. Cell body: the computational unit. Dendrites: input wires, receive inputs from other neurons; a neuron may have thousands of dendrites, usually short. Axon: output wire, sends signals to other neurons; a single long structure (up to 1 meter) that splits into possibly thousands of branches at the end, the axon terminals.

Neurons in Action (Simplified Picture). The cell body collects and processes signals from other neurons through its dendrites. If the strength of the incoming signals is large enough, the cell body sends an electrical pulse (a spike) down its axon. The axon, in turn, connects to the dendrites of other neurons, transmitting spikes to them. This is the process by which all human thought, sensing, action, etc. happens.

Artificial Neural Network (ANN) History: Birth. 1943, famous paper by W. McCulloch (neurophysiologist) and W. Pitts (mathematician): using only math and algorithms, they constructed a model of how a neural network may work and showed that it is possible to construct any computable function with their network. Was it possible to make a model of the thoughts of a human being? This can be considered the birth of AI. 1949, D. Hebb introduced the first (purely psychological) theory of learning: the brain learns tasks throughout life and thereby goes through tremendous changes; if two neurons fire together, they strengthen each other's responses and are likely to fire together in the future.

ANN History: First Successes. 1958, F. Rosenblatt: the perceptron, the oldest neural network still in use today; that's what we studied in the lecture on linear classifiers. He gave an algorithm to train the perceptron network, built it in hardware, and proved convergence in the linearly separable case. 1959, B. Widrow and M. Hoff: Madaline, the first ANN applied to a real problem, eliminating echoes in phone lines; still in commercial use.

ANN History: Stagnation. Early success led to a lot of claims which were not fulfilled. 1969, M. Minsky and S. Papert publish the book "Perceptrons": they proved that perceptrons can learn only linearly separable classes, in particular cannot learn the very simple XOR function, and conjectured that multilayer neural networks are also limited to linearly separable functions. As a result of these two things, there was no funding and almost no research (at least in North America) in the 1970s.

ANN History: Revival. Revival of ANN in the 1980s. 1982, J. Hopfield: a new kind of network (Hopfield networks), not just a model of how the human brain might work but also a way to create useful devices; implements associative memory. 1982, joint US-Japanese conference on ANN; the US worries that it will fall behind, and many examples of multilayer NN appear. 1986, re-discovery of the backpropagation algorithm by Werbos, Rumelhart, Hinton and Williams: it allows a network to learn classes that are not linearly separable.

Artificial Neural Nets (ANN): Perceptron. Layer 1 (input layer): a bias unit and the inputs x_1, x_2, x_3, connected with weights w_0, w_1, w_2, w_3 to layer 2 (output layer), which applies h() = sign() to produce sign(w^t x + w_0). The linear classifier f(x) = sign(w^t x + w_0) is a single-neuron net. Input layer units output the features, except the bias unit, which outputs 1. The output layer unit applies sign() or some other function h(); h() is also called the activation function.

Multilayer Neural Network (MNN). Layer 1 is the input layer (x_1, x_2, x_3), layer 2 is the hidden layer, layer 3 is the output layer. The first hidden unit outputs h(w_0 + w_1 x_1 + w_2 x_2 + w_3 x_3); the second hidden unit outputs h(w_0 + w_1 x_1 + w_2 x_2 + w_3 x_3) with its own weights. The network corresponds to the classifier f(x) = h( w_1 h(...) + w_2 h(...) ). It is more complex than the Perceptron, and gives more complex boundaries.

MNN Small Example. Layer 1: input (x_1, x_2); layer 2: hidden; layer 3: output. Let the activation function be h() = sign(). The MNN corresponds to the classifier f(x) = sign( 4 h(...) + 2 h(...) + 7 ) = sign( 4 sign(3 x_1 + 5 x_2) + 2 sign(6 + 3 x_2) + 7 ). MNN terminology: computing f(x) is called the feed-forward operation; graphically, the function is computed from left to right. The edge weights are learned through training.
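The feed-forward computation for this small example can be written out directly; a minimal sketch in Python (treat the recovered weights as illustrative):

def sign(v):
    return 1.0 if v >= 0 else -1.0

def small_mnn(x1, x2):
    # hidden layer: two sign units
    h1 = sign(3 * x1 + 5 * x2)
    h2 = sign(6 + 3 * x2)
    # output layer: sign unit combining the hidden outputs
    return sign(4 * h1 + 2 * h2 + 7)

print(small_mnn(1.0, -2.0))   # feed-forward: computed from left (inputs) to right (output)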

MNN: Multiple Classes. Layer 1 is the input layer, layer 2 the hidden layer, layer 3 the output layer. Here: 3 classes, 2 features, 1 hidden layer; input units, one for each feature; 3 output units, one for each class; 2 hidden units; 1 bias unit, usually drawn in layer 1.

MNN: General Structure. Layer 1 is the input layer (x_1, x_2), layer 2 the hidden layer, layer 3 the output layer; the output units compute h(...) = f_1(x), f_2(x), f_3(x). f(x) = [f_1(x), f_2(x), f_3(x)] is multi-dimensional. Classification: if f_1(x) is largest, decide class 1; if f_2(x) is largest, decide class 2; if f_3(x) is largest, decide class 3.

MNN: General Structure. Input layer: d features, d input units. Output layer: m classes, m output units. Hidden layer: how many units?

MNN: General Structure. Layer 1 is the input layer, layers 2 and 3 are hidden layers, layer 4 is the output layer. A network can have more than 1 hidden layer. The ith layer connects to the (i+1)th layer, except that the bias unit can connect to any layer, and each hidden layer can have a different number of units. The first output unit outputs h(...) = h( w h(...) + w_0 ) = h( w h( w h(...) + w h(...) ) + w_0 ), a composition of activations across the layers.

MNN: Activation Function. h() = sign() is discontinuous, which is not good for gradient descent. Instead we can use the continuous sigmoid function, or another differentiable function; we can even use different activation functions at different layers/units. From now on, assume h() is a differentiable function.
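For example, the logistic sigmoid is one common differentiable replacement for sign(); a small sketch with its derivative, which the gradient computations will need later:

import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def sigmoid_prime(a):
    s = sigmoid(a)
    return s * (1.0 - s)   # h'(a), used by gradient descent below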

MNN: Overview. A neural network corresponds to a classifier f(x,w) that can be rather complex; the complexity depends on the number of hidden layers/units. f(x,w) is a composition of many functions, which is easier to visualize as a network, since the notation gets ugly. To train a neural network, just as before: formulate an objective function J(w) and optimize it with gradient descent. That's all! Except we need quite a few slides to write down the details, due to the complexity of f(x,w).

Expressive Power of MNN. Every continuous function from input to output can be implemented with enough hidden units, 1 hidden layer, and a proper nonlinear activation function (it is easy to show that with a linear activation function, a multilayer neural network is equivalent to a perceptron). This is more of theoretical than practical interest: the proof is not constructive (it does not tell us how to construct the MNN), and even if it were constructive it would be of no use, since we do not know the desired function; our goal is to learn it through the samples. But this result gives confidence that we are on the right track: the MNN is general (expressive) enough to construct any required decision boundary, unlike the Perceptron.

Decision Boundaries. A Perceptron (single-layer neural net) gives only linear decision boundaries; a multilayer network can carve out arbitrarily complex decision regions, even ones that are not contiguous.

Nonlinear Decision Boundary: Example. Start with two Perceptrons, h() = sign(): one tests x_1 + x_2 - 1 > 0 and the other tests x_1 + x_2 - 3 > 0, so their decision boundaries are the parallel lines x_1 + x_2 = 1 and x_1 + x_2 = 3.

Nonlinear Decision Boundary: Example. Now combine them into a 3-layer NN: the two Perceptrons become the hidden units, and the output unit applies sign() to a weighted sum of their outputs. The combined classifier assigns class 1 to the band between the two lines, a decision region that no single Perceptron can produce.

MNN: Modes of Operation. For neural networks, due to historical reasons, the training and testing stages have special names. Backpropagation (training): minimize the objective function with gradient descent. Feedforward (testing): apply the trained network to new samples.

MNN: Notation for Edge Weights. Layer 1 is the input layer (units x_1, ..., x_d plus the bias unit, also called unit 0), followed by the hidden layers and the output layer. w^k_pj is the edge weight from unit p in layer k-1 to unit j in layer k. w^k_0j is the edge weight from the bias unit to unit j in layer k. w^k_j denotes all the weights into unit j in layer k, i.e. w^k_0j, w^k_1j, ..., w^k_N(k-1)j. N(k) is the number of units in layer k, excluding the bias unit.

MNN: More Notation. Denote the output of unit j in layer k as z^k_j. For the input layer (k = 1), z^1_0 = 1 and z^1_j = x_j for j ≠ 0. For all other layers (k > 1), z^k_j = h(a^k_j). It is convenient to set z^k_0 = 1 for all k. Set z^k = [z^k_0, z^k_1, ..., z^k_N(k)].

MNN: More Notation. The net activation at unit j in layer k > 1 is the sum of its inputs: a^k_j = sum_{p=1}^{N(k-1)} z^{k-1}_p w^k_pj + w^k_0j = sum_{p=0}^{N(k-1)} z^{k-1}_p w^k_pj = z^{k-1} · w^k_j. For example, a^2_j = z^1_0 w^2_0j + z^1_1 w^2_1j + z^1_2 w^2_2j + ... For k > 1, z^k_j = h(a^k_j).
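A minimal numpy sketch of the feed-forward operation in this notation (the layout, with the bias weights stored in row 0 of each weight matrix, is our own choice):

import numpy as np

def feed_forward(x, weights, h=np.tanh):
    # weights[k-2] holds w^k: entry [p, j] is w^k_pj, row 0 the bias weights w^k_0j
    z = np.concatenate(([1.0], x))              # z^1 = [1, x_1, ..., x_d]
    activations = []
    for W in weights:
        a = z @ W                               # a^k_j = sum_p z^{k-1}_p w^k_pj
        activations.append(a)
        z = np.concatenate(([1.0], h(a)))       # z^k = [1, h(a^k_1), ..., h(a^k_N(k))]
    return z[1:], activations                   # network output f(x) and the a^k values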

MNN: Class Representation. For an m-class problem, let the neural net have t layers, and let x^i be an example of class c. It is convenient to denote its label as the vector y^i with a 1 in row c and 0 everywhere else. Recall that z^t_c is the output of unit c in layer t (the output layer), and f(x) = z^t = [z^t_1, ..., z^t_m]. If x^i is of class c, we want z^t = [0, ..., 0, 1, 0, ..., 0], with the 1 in row c.

Training MNN: Objective Function. We want to minimize the difference between y^i and f(x^i); use the squared difference. Let w be all the edge weights in the MNN collected in one vector. Error on one example x^i: J_i(w) = (1/2) sum_{c=1}^m ( f_c(x^i) - y^i_c )^2. Error on all examples: J(w) = (1/2) sum_{i=1}^n sum_{c=1}^m ( f_c(x^i) - y^i_c )^2. Gradient descent: initialize w to random, choose ε and the learning rate α; while J(w) > ε, update w = w - α ∇J(w).
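In code, the objective is a few lines; a sketch using the one-hot labels of the previous slide:

import numpy as np

def J_single(f_xi, y_i):
    # error on one example: half the summed squared difference
    return 0.5 * np.sum((f_xi - y_i) ** 2)

def J_all(F, Y):
    # F[i] = f(x^i), Y[i] = y^i (one-hot rows); error summed over all n examples
    return 0.5 * np.sum((F - Y) ** 2)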

Training MNN: Single Sample. For simplicity, first consider the error on one example x^i: J_i(w) = (1/2) || y^i - f(x^i) ||^2 = (1/2) sum_{c=1}^m ( f_c(x^i) - y^i_c )^2. Here f_c(x^i) depends on w, while y^i is independent of w. We need the partial derivatives with respect to w^k_pj for all k, p, j. Suppose the network has t layers; then f_c(x^i) = z^t_c = h(a^t_c) = h( z^{t-1} · w^t_c ).

Training MNN: Single Sample. For the derivation we use J_i(w) = (1/2) sum_{c=1}^m ( f_c(x^i) - y^i_c )^2 and f_c(x^i) = h(a^t_c) = h( z^{t-1} · w^t_c ). For weights w^t_pj into the output layer t: ∂J_i(w)/∂w^t_pj = ( f_j(x^i) - y^i_j ) ∂f_j(x^i)/∂w^t_pj = ( f_j(x^i) - y^i_j ) h'(a^t_j) z^{t-1}_p. Both h'(a^t_j) and z^{t-1}_p depend on x^i; for simpler notation, we do not make this dependence explicit.

Training MNN: Single Sample. For a hidden layer k, computing the partial derivatives with respect to w^k_pj gets complex, since there are many function compositions; we just give the rest of the derivatives. First define e^k_j, the error attributed to unit j in layer k. For the output layer t: e^t_j = f_j(x^i) - y^i_j. For layers k < t: e^k_j = sum_{c=1}^{N(k+1)} e^{k+1}_c h'(a^{k+1}_c) w^{k+1}_jc. Then, for 2 ≤ k ≤ t: ∂J_i(w)/∂w^k_pj = e^k_j h'(a^k_j) z^{k-1}_p.

MNN Training: Multiple Samples. Error on one example x^i: J_i(w) = (1/2) sum_{c=1}^m ( f_c(x^i) - y^i_c )^2, with ∂J_i(w)/∂w^k_pj = e^k_j h'(a^k_j) z^{k-1}_p. Error on all examples: J(w) = (1/2) sum_{i=1}^n sum_{c=1}^m ( f_c(x^i) - y^i_c )^2, with ∂J(w)/∂w^k_pj = sum_{i=1}^n e^k_j h'(a^k_j) z^{k-1}_p.

Training Protocols. Batch protocol: true gradient descent; weights are updated only after all examples are processed; might be slow to converge. Single-sample protocol: examples are chosen randomly from the training set and weights are updated after every example; converges faster than batch, but possibly to an inferior solution. Online protocol: each example is presented only once and weights are updated after each example presentation; used if the number of examples is large and does not fit in memory; should be avoided when possible.

MNN Training: Single Sample.
initialize w to small random numbers
choose ε, α
while J(w) > ε
    for i = 1 to n
        r = random index from {1, 2, ..., n}
        delta^k_pj = 0 for all p, j, k
        e^t = f(x^r) - y^r
        for k = t to 2
            delta^k_pj = delta^k_pj - e^k_j h'(a^k_j) z^{k-1}_p for all p, j
            e^{k-1}_j = sum_{c=1}^{N(k)} e^k_c h'(a^k_c) w^k_jc for all j
        w^k_pj = w^k_pj + α delta^k_pj for all p, j, k

MNN Training: Batch.
initialize w to small random numbers
choose ε, α
while J(w) > ε
    delta^k_pj = 0 for all p, j, k
    for i = 1 to n
        e^t = f(x^i) - y^i
        for k = t to 2
            delta^k_pj = delta^k_pj - e^k_j h'(a^k_j) z^{k-1}_p for all p, j
            e^{k-1}_j = sum_{c=1}^{N(k)} e^k_c h'(a^k_c) w^k_jc for all j
    w^k_pj = w^k_pj + α delta^k_pj for all p, j, k
(the accumulated delta is reset before the pass over the examples, and the weights are updated once per pass)

Backpropagation of Errors. In MNN terminology, training is called backpropagation: the errors are computed (propagated) backwards, from the output layer to the input layer. In the pseudocode above, the last-layer errors e^t = f(x^r) - y^r are computed first; then, inside the loop for k = t to 2, the errors e^{k-1}_j = sum_c e^k_c h'(a^k_c) w^k_jc of the earlier layers are computed backwards from them.
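Below is a runnable numpy sketch of the single-sample backpropagation procedure above. It is our own illustration: the bias weights sit in row 0 of each weight matrix, tanh is used as the differentiable h, and the stopping test on J(w) is replaced by a fixed number of passes for simplicity.

import numpy as np

def h(a):
    return np.tanh(a)                 # differentiable, antisymmetric activation

def h_prime(a):
    return 1.0 - np.tanh(a) ** 2

def forward(x, weights):
    # returns the per-layer outputs z^k (with bias entry) and net activations a^k
    zs, acts = [np.concatenate(([1.0], x))], []
    for W in weights:
        a = zs[-1] @ W                           # a^k = z^{k-1} . w^k
        acts.append(a)
        zs.append(np.concatenate(([1.0], h(a))))
    return zs, acts

def train_single_sample(X, Y, sizes, alpha=0.1, passes=1000, seed=0):
    # X: (n, d) inputs, Y: (n, m) one-hot labels, sizes: units per layer, e.g. [d, 4, m]
    rng = np.random.default_rng(seed)
    # small random nonzero initial weights; weights[k] maps layer k+1 to layer k+2
    weights = [rng.uniform(-0.5, 0.5, size=(n_in + 1, n_out))
               for n_in, n_out in zip(sizes[:-1], sizes[1:])]
    for _ in range(passes):
        for _ in range(len(X)):
            r = rng.integers(len(X))             # single-sample protocol: random example
            zs, acts = forward(X[r], weights)
            e = zs[-1][1:] - Y[r]                # e^t = f(x^r) - y^r
            for k in reversed(range(len(weights))):
                delta = e * h_prime(acts[k])     # e^k_j h'(a^k_j)
                grad = np.outer(zs[k], delta)    # times z^{k-1}_p, for every p, j
                e = weights[k][1:, :] @ delta    # propagate the error backwards
                weights[k] -= alpha * grad       # gradient-descent step for this layer
    return weights

The batch protocol would accumulate grad over all examples before changing the weights; the loop structure otherwise stays the same.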

MNN Training. Important: weights should be initialized to random nonzero numbers. Since ∂J_i(w)/∂w^k_pj = e^k_j h'(a^k_j) z^{k-1}_p with e^k_j = sum_{c=1}^{N(k+1)} e^{k+1}_c h'(a^{k+1}_c) w^{k+1}_jc, if the weights w^{k+1}_jc = 0 then the errors e^k_j are zero for layers k < t, and the weights in layers k < t will not be updated.

MNN Training: How Long to Train? Large training error: random decision regions at the beginning of training (underfitting). Small training error: the decision regions improve with time. Zero training error: the decision regions fit the training data perfectly (overfitting). We can learn when to stop training through validation.
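A sketch of stopping through validation; train_one_epoch and validation_error are assumed callables wrapping the training and error code above (the names are hypothetical):

import numpy as np

def train_with_early_stopping(train_one_epoch, validation_error, weights,
                              max_epochs=500, patience=10):
    best_err, best_weights, bad_epochs = np.inf, weights, 0
    for _ in range(max_epochs):
        weights = train_one_epoch(weights)
        err = validation_error(weights)
        if err < best_err:                       # validation error still improving
            best_err, bad_epochs = err, 0
            best_weights = [W.copy() for W in weights]
        else:
            bad_epochs += 1
            if bad_epochs >= patience:           # started to overfit: stop training
                break
    return best_weights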

MNN as Non-Linear Feature Mapping. An MNN can be interpreted as first mapping the input features to new features, and then applying a Perceptron (linear classifier) to the new features.

MNN as Non-Linear Feature Mapping. In the network diagram, the last part, from the new features y_1, y_2, y_3 to the output, implements the Perceptron (linear classifier).

MNN as Non-Linear Feature Mapping. The earlier part of the network, from the inputs x_1, x_2 to the hidden outputs y_1, y_2, y_3, implements the mapping to the new features y.

MNN as Nonlinear Feature Mapping. Consider the 3-layer NN example we saw previously: the classes are non-linearly separable in the original feature space (x_1, x_2), but become linearly separable in the new feature space (y_1, y_2) computed by the hidden layer.

Neural Network Demo. http://www.youtube.com/watch?v=nirgzgezgi

Practical Tips: Weight Decay. To avoid overfitting, it is recommended to keep the weights small. Implement weight decay after each weight update: w^new = w^new (1 - ε), with 0 < ε < 1. An additional benefit is that unused weights grow small and may be eliminated altogether (a weight is unused if it is left almost unchanged by the backpropagation algorithm).
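A minimal sketch of the decay step, applied after each regular weight update (the value of ε here is just an illustration):

def weight_decay(weights, eps=1e-4):
    # w_new = w_new * (1 - eps), 0 < eps < 1
    return [W * (1.0 - eps) for W in weights]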

Practical Tips for BP: Momentum. Gradient descent finds only a local minimum. Momentum is a popular method to avoid local minima and to speed up descent in flat (plateau) regions: add the temporal average direction in which the weights have been moving recently. Previous direction: Δw^{t-1} = w^{t-1} - w^{t-2}. Weight update rule with momentum: w^t = w^{t-1} - (1 - β) α ∇J(w^{t-1}) + β Δw^{t-1}, a mix of the steepest descent direction and the previous direction (0 ≤ β < 1).
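A sketch of the momentum update under that reading of the rule (β, the momentum parameter, is our name for the symbol lost from the slide text):

def momentum_step(w, w_prev, grad_J, alpha=0.1, beta=0.9):
    # combine the steepest descent direction with the previous direction
    prev_direction = w - w_prev
    return w - (1.0 - beta) * alpha * grad_J + beta * prev_direction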

Practical Tips for BP: Activation Function. Gradient descent works with any differentiable h; however, some choices are better. Desirable properties for h: nonlinearity, to express nonlinear decision boundaries; saturation, that is, h has minimum and maximum values, which keeps the weights bounded and thus reduces training time; monotonicity, so that the activation function itself does not introduce additional local minima; linearity for small values, so that the network can produce a linear model if the data supports it; antisymmetry, that is h(-q) = -h(q), which leads to faster learning.

Practical Tips for BP: Activation Function. The sigmoid function h(q) = a (e^{bq} - e^{-bq}) / (e^{bq} + e^{-bq}) = a tanh(bq) satisfies all of these properties. Good parameter choices are a = 1.716, b = 2/3. The asymptotic values ±1.716 are slightly bigger than our labels, which are ±1; if the asymptotic values were smaller than 1, the training error would not be small. The linear range is roughly -1 < q < 1.
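This activation is just a scaled tanh; a small sketch with its derivative:

import numpy as np

def h(q, a=1.716, b=2.0 / 3.0):
    # a * tanh(b q): saturates at +-1.716, roughly linear for -1 < q < 1
    return a * np.tanh(b * q)

def h_prime(q, a=1.716, b=2.0 / 3.0):
    return a * b * (1.0 - np.tanh(b * q) ** 2)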

Practical Tips for BP: Normalization. Features should be normalized for faster convergence. Suppose we measure fish length in meters and weight in grams: a typical sample is [length = 0.5, weight = 3000]. The length feature will be almost ignored, and if length is in fact important, learning will be very slow. Any normalization we looked at before (lecture on kNN) will do. Test samples should be normalized exactly as the training samples.
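For instance, standardizing each feature with the training-set statistics and reusing exactly those statistics on the test samples:

import numpy as np

def fit_normalizer(X_train):
    mean = X_train.mean(axis=0)
    std = X_train.std(axis=0)
    std[std == 0] = 1.0                # guard against constant features
    return mean, std

def normalize(X, mean, std):
    # apply the *training* statistics to training and test samples alike
    return (X - mean) / std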

Practical Tips: Initializing Weights. The choice depends on the activation function. Rule of thumb for the commonly used sigmoid function: recall that N(k) is the number of units in layer k; for layer k, choose the weights at random from the range -1/sqrt(N(k-1)) ≤ w^k_pj ≤ 1/sqrt(N(k-1)), i.e. based on the number of units feeding into the weight.
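A sketch of this rule of thumb, reading the range as based on the fan-in of the layer (an assumption on our part) and using the bias-in-row-0 layout from the earlier sketches:

import numpy as np

def init_layer_weights(n_in, n_out, rng=None):
    rng = rng or np.random.default_rng()
    bound = 1.0 / np.sqrt(n_in)        # n_in = number of units feeding into the layer
    return rng.uniform(-bound, bound, size=(n_in + 1, n_out))   # +1 row for the bias unit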

Practical Tips: Learning Rate. As with any gradient descent algorithm, backpropagation depends on the learning rate α. Rule of thumb: α = 0.1. However, α can be adjusted during training: the objective function J(w) should decrease during gradient descent; if J(w) oscillates, α is too large, so decrease it; if J(w) goes down but very slowly, α is too small, so increase it.

Practical Tips for BP: Number of Hidden Layers. A network with 1 hidden layer has the same expressive power as one with several hidden layers. Having more than 1 hidden layer may result in faster learning and fewer hidden units; however, networks with more than 1 hidden layer are more prone to getting stuck in a local minimum.

Practical Tips for BP: Number of Hidden Units. The number of hidden units determines the expressive power of the network: too few may not be sufficient to learn complex decision boundaries, too many may overfit the training data. It is sometimes recommended that the number of hidden units be larger than the number of input units, and that it be the same in all hidden layers. The number of hidden units can be chosen through validation.

Concluding Remarks. Advantages: an MNN can learn complex mappings from inputs to outputs based only on the training samples; it is easy to use; it is easy to incorporate a lot of heuristics. Disadvantages: it is a black box, i.e. it is difficult to analyze and predict its behavior; it may take a long time to train; it may get trapped in a bad local minimum; a lot of tricks are needed for the best implementation.