COGS Q250 Fall 2012 Homework 7: Learning in Neural Networks. Due: 9:00am, Friday 2nd November.


For the first two questions of the homework you will need to understand the learning algorithm using the delta rule (which we went over in the Friday session that half of you missed). For the third question of the homework you will need to understand the Backpropagation Algorithm (which we will go over on Monday).

Important: When answering, please make sure to work cleanly and systematically and show every step of the learning process for the network, from start to finish.

Learning Algorithm:
1. Calculate the output of the neuron for all of the training examples with the current weights and biases.
2. Pick a training pattern that the network gets wrong, and change all of its incoming weights and its bias using the delta rule (see below).
3. Go back to step 2. The algorithm stops once the output of the network is correct for all possible training patterns.

Delta Rule:

Δw_ij = η · x_i^p · (t^p − y_j^p)

where w_ij is the weight of the connection between the input x_i and the output neuron y_j, η is a constant learning rate (for these exercises you can use either 0.5 or 1.0), x_i^p is the input i for pattern p, t^p is the target output for pattern p, and y_j^p is the output of neuron j for pattern p. Remember that this rule must be applied to the bias as well, and that the input to the bias is always 1. Remember that Δw_ij means that the weight is going to change by that amount.

For example, if the weight w is currently 0, the learning rate is 1, the input for the pattern is 1, and the output of the neuron is 0 but the target output should be 1, then the weight is going to change according to:

w = w + η · x · (t − y) = 0 + 1 · 1 · (1 − 0) = 1

So the weight is going to change from 0 to 1.
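To make the update concrete, here is a minimal Python sketch of the single weight change worked out in the example above; the variable names (eta, x, t, y) are illustrative and not part of the assignment.

    eta = 1.0    # learning rate
    w = 0.0      # current weight
    x = 1.0      # input for this pattern
    t = 1.0      # target output for this pattern
    y = 0.0      # actual output of the neuron for this pattern

    delta_w = eta * x * (t - y)   # delta rule
    w = w + delta_w               # 0 + 1*1*(1 - 0) = 1
    print(w)                      # prints 1.0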

Transfer Function

All neurons have the following step function as a transfer function:

f(x) = 1 if x > 0
       0 if x ≤ 0

[Figure: plot of the step transfer function, output (0 to 1) against input (−3 to 3).]

Logic interpretation

In order to solve logic functions, we will assume 0 means False and 1 means True.
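As an illustration only, the transfer function and step 1 of the learning algorithm can be written in Python roughly as follows; the function names are my own, not the assignment's.

    def f(x):
        # step transfer function: 1 if x > 0, otherwise 0
        return 1 if x > 0 else 0

    def neuron_output(w0, w1, w2, x1, x2):
        # x0 is always 1, so w0 acts as the bias/threshold weight
        return f(w0 * 1 + w1 * x1 + w2 * x2)

    # step 1 of the learning algorithm: with all weights at 0,
    # every pattern produces f(0) = 0
    print(neuron_output(0, 0, 0, 1, 1))   # prints 0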

1. Use the learning algorithm with the delta rule given above to train a single-layer feedforward network with two inputs and one output to solve the AND logic gate, using the transfer function f(x) and the logic interpretation given previously. Assume the weights and bias of the network start off with the following values:

w0 = 0    w1 = 0    w2 = 0

Remember that w0 is the threshold term. You can think of it as the weight from an additional input x0, which is always 1. Show clearly each of the steps of the training process, until the weights have changed enough to produce the correct solution.

[Figure: a single output neuron receiving input x0 (always 1.0) through weight w0, input x1 through weight w1, and input x2 through weight w2.]

Training patterns (AND):

Pattern   x1   x2   Target output
1         0    0    0
2         0    1    0
3         1    0    0
4         1    1    1
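If you want to double-check your hand-worked answer (the written steps are still required), a short self-contained Python script along these lines runs the learning algorithm on the AND patterns. The choice of which wrong pattern to pick in step 2 is mine, so your own sequence of updates may legitimately differ.

    def f(x):                       # step transfer function
        return 1 if x > 0 else 0

    # (x1, x2) -> target output for the AND gate
    patterns = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    w0, w1, w2 = 0.0, 0.0, 0.0      # starting bias and weights
    eta = 1.0                       # learning rate

    while True:
        # step 1: compute the output for every pattern, collect the wrong ones
        wrong = [((x1, x2), t) for (x1, x2), t in patterns
                 if f(w0 + w1 * x1 + w2 * x2) != t]
        if not wrong:               # stop once every pattern is correct
            break
        # step 2: pick a wrong pattern and apply the delta rule
        (x1, x2), t = wrong[0]
        y = f(w0 + w1 * x1 + w2 * x2)
        w0 += eta * 1  * (t - y)    # bias update (its input is always 1)
        w1 += eta * x1 * (t - y)
        w2 += eta * x2 * (t - y)

    print(w0, w1, w2)               # one set of weights that solves AND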

2. Use the learning algorithm with the delta rule given above to train a single-layer feedforward network with two inputs and one output to solve the OR logic gate, using the transfer function f(x) and the logic interpretation given previously. Assume the weights and bias of the network start off with the following values:

w0 = 0    w1 = 0    w2 = 0

Show clearly each of the steps of the training process, until the weights have changed enough to produce the correct solution.

[Figure: the same network as in Question 1 — a single output neuron with inputs x0 (always 1.0), x1, and x2 through weights w0, w1, and w2.]

Training patterns (OR):

Pattern   x1   x2   Target output
1         0    0    0
2         0    1    1
3         1    0    1
4         1    1    1

3. Use the backpropagation algorithm to change the weights of the network below. Do only one run through the algorithm. This includes four main steps:

a. Propagate the activity of the network forward. Use the sigmoid function.
   i.  f(x) = 1 / (1 + e^(−x))
   ii. y_k = f( Σ_{j=1..N} w_jk · y_j + θ_k ), where θ_k is the threshold or bias (the w_0k weights).

b. Calculate the error of the output neuron.
   i.  Δ_k = y_k (1 − y_k)(T − y_k), where T is the target output.

c. Propagate the error backwards to the hidden neurons.
   i.  Δ_j = y_j (1 − y_j)(w_jk Δ_k), where Δ_k is the error of the output neuron.

d. Update the weights.
   i.  w_jk ← w_jk + η · y_j · Δ_k, where η is the learning rate (use η = 1.0).

When training a network to solve a task, you will start your weights at random. For the purposes of this assignment, you can start off with the following weights and biases:

w01 = 0.0   w11 = 0.1   w21 = 0.5
w02 = 0.0   w12 = 0.3   w22 = 0.2
w03 = 0.0   w13 = 0.2   w23 = 0.1

[Figure: a two-layer network. Inputs x1 and x2 feed hidden neurons y1 and y2 through weights w11, w21, w12, w22, with biases w01 and w02; the hidden neurons feed the output neuron y3 through weights w13 and w23, with bias w03.]

When training a network, you will usually have a whole set of training examples. For the purposes of this assignment, you will only run the algorithm once through the loop, with just one training example:

x1 = 0.1    x2 = 0.2    Target output, T = 0.0

Important: Show as clearly as possible each of the steps of the training process, including your equations and calculations. Show also the resulting weights after that first pass through the algorithm.
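For checking your arithmetic only (the written equations and calculations are still required), here is a self-contained Python sketch of the single pass described above. It assumes the usual reading of the weight labels, with w_jk connecting unit j to unit k and w_0k as the bias, and uses my own variable names.

    import math

    def f(x):                                   # sigmoid transfer function
        return 1.0 / (1.0 + math.exp(-x))

    x1, x2, T, eta = 0.1, 0.2, 0.0, 1.0         # training example and learning rate

    w01, w11, w21 = 0.0, 0.1, 0.5               # bias and weights into hidden neuron y1
    w02, w12, w22 = 0.0, 0.3, 0.2               # bias and weights into hidden neuron y2
    w03, w13, w23 = 0.0, 0.2, 0.1               # bias and weights into output neuron y3

    # (a) forward pass
    y1 = f(w01 + w11 * x1 + w21 * x2)
    y2 = f(w02 + w12 * x1 + w22 * x2)
    y3 = f(w03 + w13 * y1 + w23 * y2)

    # (b) error of the output neuron
    d3 = y3 * (1 - y3) * (T - y3)

    # (c) errors of the hidden neurons
    d1 = y1 * (1 - y1) * (w13 * d3)
    d2 = y2 * (1 - y2) * (w23 * d3)

    # (d) weight updates (each bias sees a constant input of 1)
    w13 += eta * y1 * d3
    w23 += eta * y2 * d3
    w03 += eta * 1.0 * d3
    w11 += eta * x1 * d1
    w21 += eta * x2 * d1
    w01 += eta * 1.0 * d1
    w12 += eta * x1 * d2
    w22 += eta * x2 * d2
    w02 += eta * 1.0 * d2

    print(w01, w11, w21, w02, w12, w22, w03, w13, w23)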