Machine Learning in Bioinformatics


Arlindo Oliveira (aml@inesc-id.pt)

Outline
- Data mining: concepts and techniques
- Learning from examples
- Decision trees
- Neural networks

Typical Supervised Learning Problem Setting
- Given a set (database) of observations
- Each observation is (x1, ..., xn, y): the xi are input variables, y is a particular output
- Build a model to predict y = f(x1, ..., xn):
  - First define a criterion to measure model quality
  - Split the dataset into training and test sets
  - Build the model using the training set
  - Validate the model using the test set

A Database (Example)

       X1        X2        X3        X4     X5      X6     Y     f(x1,...,x6)
O5001  876.029   -193.660  -98.179   1.067  54.598  1.076  GOOD  GOOD
O5002  1110.880  -423.190  -119.300  1.119  58.228  1.112  GOOD  GOOD
O5003  980.132   79.722    -122.600  1.063  62.537  1.062  GOOD  GOOD
O5004  974.139   217.073   -100.520  1.015  64.428  1.010  GOOD  GOOD
O5005  927.198   -618.470  -100.000  1.020  42.557  1.017  GOOD  GOOD
O5006  1192.590  617.266   -103.460  1.073  51.230  1.065  GOOD  GOOD
O5007  1069.120  7.137     -109.460  1.016  66.446  1.010  BAD   BAD
O5008  1189.200  905.121   -87.433   1.056  66.052  1.071  GOOD  GOOD
O5009  999.084   685.442   -107.870  1.109  59.726  1.110  GOOD  GOOD
O5010  1241.880  -442.250  -105.680  1.078  58.917  1.079  GOOD  BAD
O5011  845.574   816.962   -106.180  1.103  57.922  1.098  GOOD  GOOD
O5012  1151.250  10.003    -86.426   1.108  64.069  1.113  GOOD  GOOD
O5013  963.600   -312.430  -96.890   0.988  61.517  0.986  BAD   BAD
O5014  721.150   155.468   -95.262   1.050  63.566  1.027  GOOD  GOOD
O5015  1135.190  320.912   -117.480  1.049  56.051  1.041  GOOD  GOOD
O5016  939.189   234.557   -109.450  1.100  49.913  1.110  BAD   GOOD
O5017  923.754   294.472   -100.300  1.106  74.579  1.120  GOOD  GOOD
O5018  886.942   446.574   -87.805   0.950  57.044  0.950  GOOD  GOOD
O5019  1201.850  -415.250  -108.290  1.116  52.578  1.123  BAD   BAD
O5020  909.856   -424.200  -94.270   1.086  53.906  1.089  GOOD  GOOD
O5021  1065.490  -21.762   -102.260  0.953  58.404  0.984  BAD   BAD
O5022  1007.220  -450.000  -91.618   1.055  56.688  1.047  GOOD  GOOD
O5023  859.116   -631.820  -99.983   1.010  59.397  1.012  GOOD  GOOD
O5024  846.327   -17.598   -104.480  1.006  72.391  1.012  GOOD  GOOD
O5025  786.674   -314.950  -110.000  1.056  63.269  1.062  GOOD  GOOD
O5026  759.662   761.961   -98.087   1.124  63.185  1.113  GOOD  GOOD

Here Y is the observed class and f(x1,...,x6) the model's prediction (note the mispredictions on O5010 and O5016).

Main Steps
- Select a subset of relevant input variables
- Build a model using these variables:
  - Generate a sequence of models
  - Identify one (or several) as being good models
  - Use only the training set
- Validate the selected models:
  - Quantitatively: using the test set
  - Qualitatively: using expert knowledge

Main Classes of Methods
- Supervised learning (input/output models):
  - Decision/regression trees
  - Neural networks
- Unsupervised learning (models of p(x1, ..., xn)):
  - Bayesian networks
  - Clustering

Inductive Learning
- Learning from examples
- The general problem of inductive inference
- Inductive bias
- Examples

Training Examples for Concept "Enjoy Sport"
- Concept: days on which my friend Aldo enjoys his favourite water sports
- Task: predict the value of Enjoy Sport for an arbitrary day, based on the values of the other attributes

Sky    Temp  Humid   Wind    Water  Forecast  Enjoy Sport
Sunny  Warm  Normal  Strong  Warm   Same      Yes
Sunny  Warm  High    Strong  Warm   Same      Yes
Rainy  Cold  High    Strong  Warm   Change    No
Sunny  Warm  High    Strong  Cool   Change    Yes

Inductive Learning Hypothesis
Any hypothesis found to approximate the target function well over the training examples will also approximate the target function well over the unobserved examples.

Futility of Bias-Free Learning
A learner that makes no prior assumptions regarding the identity of the target concept has no rational basis for classifying any unseen instances. No free lunch!

Decision Trees
- Decision tree representation
- ID3 learning algorithm
- Entropy, information gain
- Overfitting

What Is a Decision Tree?
Value of X1?
- Small → Value of X2?  (< 0.34 → Y is big, > 0.34 → Y is very big)
- Medium or Large → Y is small

Training Examples

Day  Outlook   Temp  Humidity  Wind    Play Tennis
D1   Sunny     Hot   High      Weak    No
D2   Sunny     Hot   High      Strong  No
D3   Overcast  Hot   High      Weak    Yes
D4   Rain      Mild  High      Weak    Yes
D5   Rain      Cool  Normal    Weak    Yes
D6   Rain      Cool  Normal    Strong  No
D7   Overcast  Cool  Normal    Strong  Yes
D8   Sunny     Mild  High      Weak    No
D9   Sunny     Cool  Normal    Weak    Yes
D10  Rain      Mild  Normal    Weak    Yes
D11  Sunny     Mild  Normal    Strong  Yes
D12  Overcast  Mild  High      Strong  Yes
D13  Overcast  Hot   Normal    Weak    Yes
D14  Rain      Mild  High      Strong  No

Decision Tree for PlayTennis
Outlook?
- Sunny → Humidity?  (High → No, Normal → Yes)
- Overcast → Yes
- Rain → Wind?  (Strong → No, Weak → Yes)

Decision Tree for PlayTennis
- Each internal node tests an attribute
- Each branch corresponds to an attribute value
- Each leaf node assigns a classification

Classifying an instance <Outlook=Sunny, Temperature=Hot, Humidity=High, Wind=Weak>: follow the Sunny branch to the Humidity test; Humidity=High leads to the leaf No, so PlayTennis = No.

Outlook?
- Sunny → Humidity?  (High → No, Normal → Yes)
- Overcast → Yes
- Rain → Wind?  (Strong → No, Weak → Yes)

Decision Tree for Conjunction: Outlook=Sunny ∧ Wind=Weak
Outlook?
- Sunny → Wind?  (Strong → No, Weak → Yes)
- Overcast → No
- Rain → No

Decision Tree for Disjunction: Outlook=Sunny ∨ Wind=Weak
Outlook?
- Sunny → Yes
- Overcast → Wind?  (Strong → No, Weak → Yes)
- Rain → Wind?  (Strong → No, Weak → Yes)

Decision Tree for XOR: Outlook=Sunny XOR Wind=Weak
Outlook?
- Sunny → Wind?  (Strong → Yes, Weak → No)
- Overcast → Wind?  (Strong → No, Weak → Yes)
- Rain → Wind?  (Strong → No, Weak → Yes)

Decision Tree
Decision trees represent disjunctions of conjunctions. The PlayTennis tree
Outlook?
- Sunny → Humidity?  (High → No, Normal → Yes)
- Overcast → Yes
- Rain → Wind?  (Strong → No, Weak → Yes)
is equivalent to
(Outlook=Sunny ∧ Humidity=Normal) ∨ (Outlook=Overcast) ∨ (Outlook=Rain ∧ Wind=Weak)

When to Consider Decision Trees
- Instances describable by attribute-value pairs
- Target function is discrete-valued
- Disjunctive hypothesis may be required
- Possibly noisy training data
- Missing attribute values
Examples: medical diagnosis, credit risk analysis, object classification for a robot manipulator (Tan 1993)

Growing and Pruning, Pictorially
(Figure: data misfit versus tree complexity; underfitting at low complexity and overfitting at high complexity, with growing increasing complexity and pruning reducing it back toward the final tree.)

An Application in Bioinformatics
- Genetics of complex traits
- Database composed of observations on 1086 animals
  - Inputs: 20×2 genetic markers
  - Outputs: phenotypic measurements (numbers)
- Goal: identify the location of the involved chromosomal regions
- Results: unpruned and pruned trees

Another Application in Bioinformatics
- Identification of protein origin
- Database composed of the frequencies of amino acids in different families
  - Inputs: 20 frequencies
  - Outputs: class of protein
- Objective: identify the family of the protein

Yet Another Application in Bioinformatics
- Identification of regulatory mechanisms between yeast genes
- Data from microarray experiments [Spellman et al. (1998). Comprehensive Identification of Cell Cycle-regulated Genes of the Yeast Saccharomyces cerevisiae by Microarray Hybridization. Molecular Biology of the Cell 9, 3273-3297]
- Want to predict which genes activate: CLN1, CLN2, CLN3, SW14

Decision tree for CLN1 activation
(Figure: tree testing the expression of CLN2/YPL256C at threshold -0.375, CLB5/YPL120C at -0.285, and SKP1/YDR328C at 0.695; leaves predict "CLN1 active" or "CLN1 not active".)

Confusion matrix:
        -1      1
 -1    78%    22%
  1     0%   100%

Decision tree for CLN2 activation
(Figure: tree testing CLB5 at threshold 0, CLN3 at -0.455, and CDH1 at -0.475; leaves predict "CLN2 active" or "CLN2 not active".)

Confusion matrix:
        -1       1
 -1    66.6%  33.3%
  1    20%    80%

Decision tree for CLN3 activation
(Figure: tree testing CDH1/YGL003 at threshold -0.14, SKP1 at 0.3, and CDC53 at 0.025; leaves predict "CLN3 active" or "CLN3 not active".)

Confusion matrix:
        -1       1
 -1    83.3%  16.6%
  1    14.2%  85.7%

Decision tree for SW14 activation
(Figure: tree testing MBP1 at threshold -0.005, MCM1 at -0.28, CLB1 at 1.24, SIC1 at 0.025, and CLN2 at 0.1; leaves predict "SW14 active" or "SW14 not active".)

Confusion matrix:
        -1     1
 -1    70%   37%
  1     9%   90%

Top-Down Induction of Decision Trees (ID3)
1. A ← the best decision attribute for the next node
2. Assign A as the decision attribute for the node
3. For each value of A, create a new descendant
4. Sort the training examples to the leaf nodes according to the attribute value of their branch
5. If all training examples are perfectly classified (same value of the target attribute), stop; else iterate over the new leaf nodes
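The five ID3 steps above can be sketched directly in Python. This is a minimal illustration of the recursion, not Quinlan's full algorithm; the function names are my own, and the toy dataset reuses the Outlook=Sunny ∧ Wind=Weak concept from an earlier slide.

```python
import math
from collections import Counter

def entropy(labels):
    """Entropy of a list of class labels."""
    n = len(labels)
    return sum(-c / n * math.log2(c / n) for c in Counter(labels).values())

def id3(examples, attributes, target):
    """Build a decision tree top-down.

    examples   -- list of dicts mapping attribute names to values
    attributes -- attribute names still available for testing
    target     -- name of the class attribute
    Returns a class label (leaf) or a pair (attribute, {value: subtree}).
    """
    labels = [e[target] for e in examples]
    if len(set(labels)) == 1:                 # perfectly classified: stop
        return labels[0]
    if not attributes:                        # nothing left to test: majority vote
        return Counter(labels).most_common(1)[0][0]

    def gain(a):                              # information gain of splitting on a
        g = entropy(labels)
        for v in set(e[a] for e in examples):
            sub = [e[target] for e in examples if e[a] == v]
            g -= len(sub) / len(examples) * entropy(sub)
        return g

    best = max(attributes, key=gain)          # step 1: best decision attribute
    branches = {}
    for v in set(e[best] for e in examples):  # step 3: one descendant per value
        subset = [e for e in examples if e[best] == v]
        branches[v] = id3(subset, [a for a in attributes if a != best], target)
    return (best, branches)

# Tiny demo: the concept Outlook=Sunny AND Wind=Weak
examples = [
    {"Outlook": "Sunny",    "Wind": "Weak",   "Class": "Yes"},
    {"Outlook": "Sunny",    "Wind": "Strong", "Class": "No"},
    {"Outlook": "Rain",     "Wind": "Weak",   "Class": "No"},
    {"Outlook": "Overcast", "Wind": "Weak",   "Class": "No"},
]
tree = id3(examples, ["Outlook", "Wind"], "Class")
```

On this toy set the root test comes out as Outlook, with a Wind test only under the Sunny branch, mirroring the conjunction tree shown earlier.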

Which Attribute is Best?
Two candidate splits of the same sample S = [29+,35-]:
- A1: True → [21+,5-], False → [8+,30-]
- A2: True → [18+,33-], False → [11+,2-]

Entropy
- S is a sample of training examples
- p+ is the proportion of positive examples in S
- p- is the proportion of negative examples in S
- Entropy measures the impurity of S:
Entropy(S) = -p+ log2 p+ - p- log2 p-
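The entropy formula is easy to check numerically; a small sketch (the function name is illustrative):

```python
import math

def entropy(p_pos, p_neg):
    """Entropy(S) = -p+ log2 p+ - p- log2 p-  (0·log 0 is taken as 0)."""
    return sum(-p * math.log2(p) for p in (p_pos, p_neg) if p > 0)

print(entropy(1.0, 0.0))                 # 0.0 : a pure sample has no impurity
print(entropy(0.5, 0.5))                 # 1.0 : a 50/50 sample needs a full bit
print(round(entropy(29/64, 35/64), 2))   # 0.99: the [29+,35-] sample above
```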

Entropy
Entropy(S) = the expected number of bits needed to encode the class (+ or -) of a randomly drawn member of S (under the optimal, shortest-length code).
Why? Information theory: an optimal-length code assigns -log2 p bits to a message having probability p. So the expected number of bits to encode the class (+ or -) of a random member of S is
-p+ log2 p+ - p- log2 p-

Information Gain
Gain(S,A): the expected reduction in entropy due to sorting S on attribute A
Gain(S,A) = Entropy(S) - Σ_{v∈Values(A)} (|Sv|/|S|) Entropy(Sv)
Example: Entropy([29+,35-]) = -(29/64) log2(29/64) - (35/64) log2(35/64) = 0.99
with the two candidate splits as before:
- A1: True → [21+,5-], False → [8+,30-]
- A2: True → [18+,33-], False → [11+,2-]

Information Gain (worked example)
Entropy([21+,5-]) = 0.71
Entropy([8+,30-]) = 0.74
Gain(S,A1) = Entropy(S) - (26/64)·Entropy([21+,5-]) - (38/64)·Entropy([8+,30-]) = 0.27

Entropy([18+,33-]) = 0.94
Entropy([11+,2-]) = 0.62
Gain(S,A2) = Entropy(S) - (51/64)·Entropy([18+,33-]) - (13/64)·Entropy([11+,2-]) = 0.12

So A1 is the better split of [29+,35-].
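These worked numbers can be verified with a few lines of Python (the helper names are my own):

```python
import math

def entropy(pos, neg):
    """Entropy of a sample containing pos positive and neg negative examples."""
    total = pos + neg
    return sum(-p * math.log2(p) for p in (pos / total, neg / total) if p > 0)

def gain(parent, children):
    """Gain(S,A) = Entropy(S) - Σv (|Sv|/|S|)·Entropy(Sv); counts given as (pos, neg)."""
    n = sum(p + q for p, q in children)
    return entropy(*parent) - sum((p + q) / n * entropy(p, q) for p, q in children)

print(round(gain((29, 35), [(21, 5), (8, 30)]), 2))   # 0.27 (attribute A1)
print(round(gain((29, 35), [(18, 33), (11, 2)]), 2))  # 0.12 (attribute A2)
```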

Selecting the Next Attribute
S = [9+,5-], Entropy(S) = 0.940

Humidity:
- High → [3+,4-], E = 0.985
- Normal → [6+,1-], E = 0.592
Gain(S,Humidity) = 0.940 - (7/14)·0.985 - (7/14)·0.592 = 0.151

Wind:
- Weak → [6+,2-], E = 0.811
- Strong → [3+,3-], E = 1.0
Gain(S,Wind) = 0.940 - (8/14)·0.811 - (6/14)·1.0 = 0.048

Outlook:
- Sunny → [2+,3-], E = 0.971
- Overcast → [4+,0-], E = 0.0
- Rain → [3+,2-], E = 0.971
Gain(S,Outlook) = 0.940 - (5/14)·0.971 - (4/14)·0.0 - (5/14)·0.971 = 0.247
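As a cross-check, the same gains can be computed directly from the PlayTennis table (helper names are mine):

```python
import math
from collections import Counter

# (Outlook, Temp, Humidity, Wind, PlayTennis) for days D1..D14
days = [
    ("Sunny", "Hot", "High", "Weak", "No"),          ("Sunny", "Hot", "High", "Strong", "No"),
    ("Overcast", "Hot", "High", "Weak", "Yes"),      ("Rain", "Mild", "High", "Weak", "Yes"),
    ("Rain", "Cool", "Normal", "Weak", "Yes"),       ("Rain", "Cool", "Normal", "Strong", "No"),
    ("Overcast", "Cool", "Normal", "Strong", "Yes"), ("Sunny", "Mild", "High", "Weak", "No"),
    ("Sunny", "Cool", "Normal", "Weak", "Yes"),      ("Rain", "Mild", "Normal", "Weak", "Yes"),
    ("Sunny", "Mild", "Normal", "Strong", "Yes"),    ("Overcast", "Mild", "High", "Strong", "Yes"),
    ("Overcast", "Hot", "Normal", "Weak", "Yes"),    ("Rain", "Mild", "High", "Strong", "No"),
]

def entropy(labels):
    n = len(labels)
    return sum(-c / n * math.log2(c / n) for c in Counter(labels).values())

def gain(col):
    """Information gain of splitting the 14 days on the given column."""
    labels = [d[-1] for d in days]
    g = entropy(labels)
    for v in set(d[col] for d in days):
        sub = [d[-1] for d in days if d[col] == v]
        g -= len(sub) / len(days) * entropy(sub)
    return g

for name, col in [("Outlook", 0), ("Humidity", 2), ("Wind", 3)]:
    print(f"Gain(S,{name}) = {gain(col):.3f}")
```

Outlook (0.247) and Wind (0.048) match the slide; Humidity comes out as 0.152 at full precision, and the slide's 0.151 results from rounding the two branch entropies to three digits first.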

ID3 Algorithm (first split)
S = [D1,...,D14], [9+,5-]; the root test is Outlook:
- Sunny → S_sunny = [D1,D2,D8,D9,D11], [2+,3-] → ?
- Overcast → [D3,D7,D12,D13], [4+,0-] → Yes
- Rain → [D4,D5,D6,D10,D14], [3+,2-] → ?

Choosing the test for the Sunny branch:
Gain(S_sunny, Humidity) = 0.970 - (3/5)·0.0 - (2/5)·0.0 = 0.970
Gain(S_sunny, Temp) = 0.970 - (2/5)·0.0 - (2/5)·1.0 - (1/5)·0.0 = 0.570
Gain(S_sunny, Wind) = 0.970 - (2/5)·1.0 - (3/5)·0.918 = 0.019

Resulting tree:
Outlook?
- Sunny → Humidity?  (High → No [D1,D2], Normal → Yes [D8,D9,D11])
- Overcast → Yes [D3,D7,D12,D13]
- Rain → Wind?  (Strong → No [D6,D14], Weak → Yes [D4,D5,D10])

Overfitting in Decision Tree Learning
(Figure: accuracy on training and test data as the tree grows.)

Avoid Overfitting
How can we avoid overfitting?
- Stop growing when the data split is not statistically significant
- Grow the full tree, then post-prune
Minimum description length (MDL): minimize size(tree) + size(misclassifications(tree))

Reduced-Error Pruning
Split the data into a training set and a validation set. Do until further pruning is harmful:
1. Evaluate the impact on the validation set of pruning each possible node (plus those below it)
2. Greedily remove the one that most improves validation-set accuracy
This produces the smallest version of the most accurate subtree.

Effect of Reduced-Error Pruning (figure)

Continuous-Valued Attributes
Create a discrete attribute to test a continuous one:
- Temperature = 24.5 °C
- (Temperature > 20.0 °C) ∈ {true, false}
Where to set the threshold?

Temperature  15 °C  18 °C  19 °C  22 °C  24 °C  27 °C
PlayTennis   No     No     Yes    Yes    Yes    No

(See the paper by [Fayyad & Irani, 1993].)

Attributes with Many Values
Problem: if an attribute has many values, maximizing InformationGain will select it. E.g. using Date = 12.7.1996 as an attribute perfectly splits the data into subsets of size 1.
Use GainRatio instead of information gain as the criterion:
GainRatio(S,A) = Gain(S,A) / SplitInformation(S,A)
SplitInformation(S,A) = -Σ_{i=1..c} (|Si|/|S|) log2(|Si|/|S|)
where Si is the subset of S for which attribute A has the value vi.
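SplitInformation and GainRatio are one-liners; the sketch below (function names are mine) shows how a Date-like attribute that shatters 14 examples into singletons is penalised:

```python
import math

def split_information(sizes):
    """SplitInformation(S,A) = -Σi (|Si|/|S|)·log2(|Si|/|S|) over the subsets Si."""
    n = sum(sizes)
    return sum(-s / n * math.log2(s / n) for s in sizes if s)

def gain_ratio(gain, sizes):
    """GainRatio(S,A) = Gain(S,A) / SplitInformation(S,A)."""
    return gain / split_information(sizes)

# A 14-way singleton split costs log2(14) ≈ 3.81 bits in the denominator,
# while an even two-way split costs only one bit:
print(round(split_information([1] * 14), 2))   # 3.81
print(split_information([7, 7]))               # 1.0
```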

Attributes with Cost
Consider:
- Medical diagnosis: a blood test costs 1000 SEK
- Robotics: width_from_one_feet has a cost of 23 seconds
How can we learn a consistent tree with low expected cost? Replace Gain by:
- Gain²(S,A) / Cost(A)  [Tan, Schlimmer 1990]
- (2^Gain(S,A) - 1) / (Cost(A) + 1)^w, with w ∈ [0,1]  [Nunez 1988]

Unknown Attribute Values
What if some examples are missing values of attribute A? Use the training example anyway; when sorting it through the tree at a node n that tests A:
- assign it the most common value of A among the other examples sorted to node n, or
- assign it the most common value of A among the examples at n with the same target value, or
- assign probability pi to each possible value vi of A and pass fraction pi of the example down each descendant branch.
Classify new examples with missing values in the same fashion.

Cross-Validation
Uses:
- Estimate the accuracy of a hypothesis induced by a supervised learning algorithm
- Predict the accuracy of a hypothesis over future unseen instances
- Select the optimal hypothesis from a given set of alternative hypotheses:
  - Pruning decision trees
  - Model selection
  - Feature selection
- Combining multiple classifiers (boosting)

Holdout Method
Partition the data set D = {(v1,y1), ..., (vn,yn)} into a training set Dt and a validation set Dh = D \ Dt.
acc_h = (1/h) Σ_{(vi,yi)∈Dh} δ(I(Dt,vi), yi)
where I(Dt,vi) is the output, for instance vi, of the hypothesis induced by learner I trained on data Dt, and δ(i,j) = 1 if i = j and 0 otherwise.
Problems:
- makes insufficient use of the data
- training and validation sets are correlated
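A holdout estimate can be sketched as follows. The learner interface, the majority-vote learner, and the 2/3 split fraction are illustrative assumptions, not part of the slide:

```python
import random

def holdout_accuracy(data, learner, frac=2/3, seed=0):
    """Single train/validation split: acc_h = (1/h) Σ δ(I(Dt,vi), yi).
    learner(train) must return a hypothesis h with h(x) -> predicted label."""
    rng = random.Random(seed)
    shuffled = list(data)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * frac)
    train, valid = shuffled[:cut], shuffled[cut:]
    h = learner(train)
    return sum(h(x) == y for x, y in valid) / len(valid)

def majority(train):
    """Trivial learner: always predict the most common training label."""
    labels = [y for _, y in train]
    top = max(set(labels), key=labels.count)
    return lambda x: top

# Toy data: 30 GOOD / 10 BAD observations
data = [((i,), "BAD" if i % 4 == 0 else "GOOD") for i in range(40)]
acc = holdout_accuracy(data, majority)
print(acc)
```

Because the split is random, the estimate varies with the seed, which is exactly the "insufficient use of the data" problem that cross-validation addresses.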

Cross-Validation
k-fold cross-validation splits the data set D into k mutually exclusive subsets D1, D2, ..., Dk. The learning algorithm is trained and tested k times; each time i it is trained on D \ Di and tested on Di.
acc_cv = (1/n) Σ_{(vi,yi)∈D} δ(I(D \ Di, vi), yi)
- Uses all the data for training and testing.
- Complete k-fold cross-validation splits the dataset of size m in all (m choose m/k) possible ways (choosing m/k instances out of m).
- Leave-n-out cross-validation sets n instances aside for testing and uses the remaining ones for training (leave-one-out is equivalent to m-fold cross-validation, where m is the number of instances).
- In stratified cross-validation, the folds are stratified so that they contain approximately the same proportion of labels as the original data set.
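The k-fold split itself can be sketched in a few lines (the round-robin fold assignment is one possible choice):

```python
def k_fold_splits(data, k):
    """Split data into k mutually exclusive folds D1..Dk; for each i, yield the
    pair (train, test) where test = Di and train = all the other folds."""
    folds = [data[i::k] for i in range(k)]       # round-robin fold assignment
    for i in range(k):
        held_out = folds[i]
        rest = [x for j, fold in enumerate(folds) if j != i for x in fold]
        yield rest, held_out

data = list(range(12))
splits = list(k_fold_splits(data, 4))
tested = [x for _, held_out in splits for x in held_out]
print(sorted(tested) == data)   # True: every instance is tested exactly once
```

Leave-one-out is the special case k = len(data).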

Neural Networks
- Perceptrons
- Gradient descent
- Multi-layer networks
- Backpropagation

Biological Neural Systems
- Neuron switching time: ~10^-3 seconds
- Number of neurons in the human brain: ~10^10
- Connections (synapses) per neuron: ~10^4 to 10^5
- Face recognition: ~0.1 seconds
- High degree of parallel computation
- Distributed representations

Properties of Artificial Neural Nets (ANNs)
- Many simple neuron-like threshold switching units
- Many weighted interconnections among units
- Highly parallel, distributed processing
- Learning by tuning the connection weights

Appropriate Problem Domains for Neural Network Learning
- Input is high-dimensional discrete or real-valued (e.g. raw sensor input)
- Output is discrete or real-valued, possibly a vector of values
- The form of the target function is unknown
- Humans do not need to interpret the results (black-box model)

Perceptron
Linear threshold unit (LTU), with a fixed bias input x0 = 1:
o(x) = 1 if Σ_{i=0..n} wi xi > 0, and -1 otherwise

Decision Surface of a Perceptron
The decision surface is a hyperplane. A perceptron can represent some useful functions:
- AND(x1, x2): choose weights w0 = -1.5, w1 = 1, w2 = 1
But functions that are not linearly separable (e.g. XOR) are not representable.
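A minimal LTU in Python, using the AND weights quoted above (the function name is mine):

```python
def ltu(w, x):
    """Linear threshold unit: o(x) = 1 if Σ wi·xi > 0 else -1, with x0 = 1."""
    net = w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))
    return 1 if net > 0 else -1

# AND(x1, x2) with the weights from the slide: w0 = -1.5, w1 = w2 = 1
w_and = [-1.5, 1, 1]
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, ltu(w_and, x))   # only (1, 1) gives +1
```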

Perceptron Learning Rule
wi ← wi + Δwi, with Δwi = η (t - o) xi
- t = c(x) is the target value
- o is the perceptron output
- η is a small constant (e.g. 0.1) called the learning rate
If the output is correct (t = o), the weights wi are not changed; if it is incorrect (t ≠ o), the weights are changed so that the perceptron output for the new weights is closer to t. The algorithm converges to a correct classification if the training data are linearly separable and η is sufficiently small.

Worked example (figure): starting from w = [0.25, 0.1, 0.5], i.e. the decision boundary x2 = -0.2·x1 - 0.5, the rule is applied to the examples (x,t) = ([-1,-1], 1), ([2,1], -1) and ([1,1], 1); each is misclassified when presented, so each presentation updates the weights.
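The rule can be traced in code on that worked example; the learning rate η = 0.1 is an assumption, since the slide does not state it.

```python
def sgn(net):
    return 1 if net > 0 else -1

def perceptron_update(w, x, t, eta=0.1):
    """One step of wi <- wi + η (t - o) xi, with the fixed bias input x0 = 1."""
    xs = (1,) + tuple(x)
    o = sgn(sum(wi * xi for wi, xi in zip(w, xs)))
    return [wi + eta * (t - o) * xi for wi, xi in zip(w, xs)], o

w = [0.25, 0.1, 0.5]                      # initial weights from the slide
for x, t in [((-1, -1), 1), ((2, 1), -1), ((1, 1), 1)]:
    w, o = perceptron_update(w, x, t)
    print(x, "t =", t, "o =", o, "w =", [round(wi, 2) for wi in w])
# every example is misclassified once, so each step changes the weights;
# the final weight vector is [0.45, -0.3, 0.3]
```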

Gradient Descent Learning Rule
Consider a linear unit without a threshold and with continuous output o (not just -1, 1):
o = w0 + w1·x1 + ... + wn·xn
Train the wi so that they minimize the squared error
E[w1,...,wn] = ½ Σ_{d∈D} (td - od)²
where D is the set of training examples.

Gradient Descent
Example: D = {<(1,1),1>, <(-1,-1),1>, <(1,-1),-1>, <(-1,1),-1>}
Gradient: ∇E[w] = [∂E/∂w0, ..., ∂E/∂wn]
Training rule: Δw = -η ∇E[w], i.e. (w1,w2) → (w1+Δw1, w2+Δw2) with Δwi = -η ∂E/∂wi
∂E/∂wi = ∂/∂wi ½ Σd (td - od)²
       = ∂/∂wi ½ Σd (td - Σi wi xid)²
       = Σd (td - od)(-xid)

Gradient Descent
Gradient-Descent(training_examples, η)
Each training example is a pair <(x1,...,xn), t>, where (x1,...,xn) is the vector of input values and t is the target output value; η is the learning rate (e.g. 0.1).
- Initialize each wi to some small random value
- Until the termination condition is met, do:
  - Initialize each Δwi to zero
  - For each <(x1,...,xn), t> in training_examples, do:
    - Input the instance (x1,...,xn) to the linear unit and compute the output o
    - For each linear unit weight wi: Δwi ← Δwi + η (t - o) xi
  - For each linear unit weight wi: wi ← wi + Δwi

Incremental Stochastic Gradient Descent
- Batch mode: gradient descent w ← w - η ∇E_D[w] over the entire data D, with E_D[w] = ½ Σd (td - od)²
- Incremental mode: gradient descent w ← w - η ∇E_d[w] over individual training examples d, with E_d[w] = ½ (td - od)²
Incremental gradient descent can approximate batch gradient descent arbitrarily closely if η is small enough.
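Both modes can be sketched for a linear unit. The toy dataset, with the noise-free target t = 1 + 2·x1 - x2, is my own choice so that gradient descent can drive the error essentially to zero:

```python
# Toy training data for the target t = 1 + 2·x1 - x2
D = [((0, 0), 1), ((1, 0), 3), ((0, 1), 0), ((1, 1), 2)]

def output(w, x):
    """Unthresholded linear unit: o = w0 + w1·x1 + ... + wn·xn."""
    return w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))

def squared_error(w, data):
    """E[w] = 1/2 Σd (td - od)²."""
    return 0.5 * sum((t - output(w, x)) ** 2 for x, t in data)

def batch_step(w, data, eta):
    """Batch mode: one step of w <- w - η ∇E_D[w], i.e. Δwi = η Σd (td - od) xid."""
    grad = [0.0] * len(w)
    for x, t in data:
        err = t - output(w, x)
        for i, xi in enumerate((1,) + x):     # x0 = 1 bias input
            grad[i] += err * xi
    return [wi + eta * g for wi, g in zip(w, grad)]

def incremental_step(w, example, eta):
    """Incremental (stochastic) mode: update on a single example d."""
    x, t = example
    err = t - output(w, x)
    return [wi + eta * err * xi for wi, xi in zip(w, (1,) + x)]

w = [0.0, 0.0, 0.0]
for _ in range(500):
    w = batch_step(w, D, eta=0.1)
print([round(wi, 2) for wi in w])   # [1.0, 2.0, -1.0]
```

Running `incremental_step` over the same examples with a smaller η converges to (approximately) the same weights, illustrating the approximation claim above.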

Comparison of the Perceptron and Gradient Descent Rules
- The perceptron learning rule is guaranteed to succeed if the training examples are linearly separable and the learning rate η is sufficiently small.
- The linear unit training rule uses gradient descent. It is guaranteed to converge to the hypothesis with minimum squared error, given a sufficiently small learning rate η, even when the training data contain noise and even when they are not separable by H.

Multi-Layer Networks
(Figure: a feed-forward network with an input layer, a hidden layer, and an output layer.)

Sigmoid Unit
net = Σ_{i=0..n} wi xi,  o = σ(net) = 1/(1 + e^-net)
σ(x) = 1/(1 + e^-x) is the sigmoid function; its derivative is dσ(x)/dx = σ(x)(1 - σ(x)).
We can derive gradient descent rules to train:
- a single sigmoid unit: ∂E/∂wi = -Σd (td - od) od (1 - od) xid
- multilayer networks of sigmoid units, using backpropagation.

Backpropagation Algorithm
- Initialize each wi,j to some small random value
- Until the termination condition is met, do:
  - For each training example <(x1,...,xn), t>, do:
    1. Input the instance (x1,...,xn) to the network and compute the network outputs ok
    2. For each output unit k: δk = ok (1 - ok)(tk - ok)
    3. For each hidden unit h: δh = oh (1 - oh) Σk wh,k δk
    4. Update each network weight: wi,j ← wi,j + Δwi,j, where Δwi,j = η δj xi,j
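The δ rules can be checked against a numerical gradient on a tiny 2-input, 2-hidden-unit, 1-output network (the network shape and the random weights are illustrative):

```python
import math
import random

def sigmoid(v):
    return 1 / (1 + math.exp(-v))

def forward(W1, W2, x):
    """Two-layer net of sigmoid units: inputs -> 2 hidden units -> 1 output."""
    xb = [1.0] + list(x)                                  # x0 = 1 bias input
    h = [sigmoid(sum(w * xi for w, xi in zip(ws, xb))) for ws in W1]
    hb = [1.0] + h                                        # bias for the output layer
    o = sigmoid(sum(w * hi for w, hi in zip(W2, hb)))
    return xb, h, hb, o

def backprop_deltas(W1, W2, x, t):
    """Negative gradient of E = 1/2 (t - o)² via the slide's delta rules:
    δk = ok(1-ok)(tk-ok) and δh = oh(1-oh) Σk wh,k δk."""
    xb, h, hb, o = forward(W1, W2, x)
    delta_o = o * (1 - o) * (t - o)
    dW2 = [delta_o * hi for hi in hb]                     # Δw2 / η
    dW1 = []
    for j, hj in enumerate(h):
        delta_h = hj * (1 - hj) * W2[j + 1] * delta_o     # W2[0] is the bias weight
        dW1.append([delta_h * xi for xi in xb])           # Δw1 / η
    return dW1, dW2

# Check one hidden-layer entry against a central-difference gradient:
random.seed(1)
W1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
W2 = [random.uniform(-1, 1) for _ in range(3)]
x, t = (0.5, -1.0), 1.0

dW1, dW2 = backprop_deltas(W1, W2, x, t)
eps = 1e-6
W1[0][1] += eps
e_plus = 0.5 * (t - forward(W1, W2, x)[3]) ** 2
W1[0][1] -= 2 * eps
e_minus = 0.5 * (t - forward(W1, W2, x)[3]) ** 2
numeric = -(e_plus - e_minus) / (2 * eps)                 # -∂E/∂w for that weight
print(abs(numeric - dW1[0][1]) < 1e-6)                    # True
```

The agreement confirms that the delta rules are exactly gradient descent on E, propagated backwards through the network.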

Backpropagation
- Gradient descent over the entire network weight vector
- Easily generalized to arbitrary directed graphs
- Will find a local, not necessarily global, error minimum; in practice it often works well (can be invoked multiple times with different initial weights)
- Often includes a weight momentum term: Δwi,j(n) = η δj xi,j + α Δwi,j(n-1)
- Minimizes the error over the training examples; will it generalize well to unseen instances (overfitting)?
- Training can be slow: typically 1000-10000 iterations (Levenberg-Marquardt can be used instead of plain gradient descent)
- Using the network after training is fast

8-3-8 Binary Encoder-Decoder
8 inputs, 3 hidden units, 8 outputs. Learned hidden values:
.89  .04  .08
.01  .11  .88
.01  .97  .27
.99  .97  .71
.03  .05  .02
.22  .99  .99
.80  .01  .98
.60  .94  .01

Sum of Squared Errors for the Output Units (figure)

Hidden Unit Encoding for Input 01000000 (figure)

Convergence of Backprop
Gradient descent reaches some local minimum, perhaps not the global minimum. Remedies:
- Add momentum
- Use stochastic gradient descent
- Train multiple nets with different initial weights
Nature of convergence:
- Weights are initialized near zero, so the initial network is near-linear
- Increasingly non-linear functions become possible as training progresses

Expressive Capabilities of ANNs
- Boolean functions: every boolean function can be represented by a network with a single hidden layer, but this might require a number of hidden units exponential in the number of inputs.
- Continuous functions: every bounded continuous function can be approximated with arbitrarily small error by a network with one hidden layer [Cybenko 1989; Hornik 1989].
- Any function can be approximated to arbitrary accuracy by a network with two hidden layers [Cybenko 1988].