Introduction to TensorFlow


1 Large Scale Data Analysis Using Deep Learning (Prof. U Kang)

Introduction to TensorFlow

Beunguk Ahn (beunguk.ahn@gmail.com)

2 Contents

What is TensorFlow?
Construction Phase
Execution Phase
Examples
Gotchas and Tips

3 Contents

What is TensorFlow?
Construction Phase
Execution Phase
Examples
Gotchas and Tips

4 Prerequisites

You already know how to use Python.
You have already installed TensorFlow.

5 TensorFlow

Open source software library for numerical computation using data flow graphs. Simply put, it is one of the most widely used frameworks for deep learning programs.

7 Role of TensorFlow

Machine learning flow in general (figure)

8 Why use TensorFlow?

Large community engagement (Google)
Multiple GPU support
Data and model parallelism
Python + NumPy
TensorBoard

However, you don't have to use TensorFlow. Other frameworks such as Theano, Caffe, and Torch are also good options.

9 Tensor

A tensor is simply a multi-dimensional array.
1-D tensor: vector
2-D tensor: matrix
3-D tensor: tensor
4-D tensor: tensor
...

tensor_1d = [1, 2]            # vector
tensor_2d = [[1, 2],          # matrix
             [3, 4]]
tensor_3d = [[[ 1,  2],       # tensor
              [ 3,  4]],
             [[ 5,  6],
              [ 7,  8]],
             [[ 9, 10],
              [11, 12]]]

10 Shape of Tensor

The shape of a tensor is expressed as a tuple.

import tensorflow as tf
scalar = tf.constant(1)                 # ()
zero   = tf.constant([])                # (0,)
vector = tf.constant([1, 2, 3, 4])      # (4,)
matrix = tf.constant([[1, 2], [3, 4]])  # (2, 2)
tensor = tf.constant([[[1], [ 3]],      # (3, 2, 1)
                      [[5], [ 7]],
                      [[9], [11]]])
print '\n'.join(map(str, [scalar, zero, vector, matrix, tensor]))

>>> Tensor("Const:0", shape=(), dtype=int32)
>>> Tensor("Const_1:0", shape=(0,), dtype=float32)
>>> Tensor("Const_2:0", shape=(4,), dtype=int32)
>>> Tensor("Const_3:0", shape=(2, 2), dtype=int32)
>>> Tensor("Const_4:0", shape=(3, 2, 1), dtype=int32)
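
Note that these shapes are known while the graph is being built: get_shape() returns the static shape without running anything. A minimal sketch, assuming TF 1.x as used in these slides:

import tensorflow as tf

t = tf.constant([[1, 2], [3, 4], [5, 6]])
print t.get_shape()   # static shape, known at graph build time: (3, 2)
r = tf.reshape(t, [2, 3])
print r.get_shape()   # (2, 3)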

11 Data Type of Tensor

The data type describes what kind of data the tensor contains.

import tensorflow as tf
int32   = tf.constant([[1, 2], [3, 4]])            # int32
float32 = tf.constant([[1.0, 2], [3, 4.0]])        # float32
float64 = tf.constant([[1.0, 2], [3, 4.0]],        # float64
                      dtype=tf.float64)
_string = tf.constant([["a", "B"], ["C", "D"]],    # string
                      dtype=tf.string)
print '\n'.join(map(str, [int32, float32, float64, _string]))

>>> Tensor("Const:0", shape=(2, 2), dtype=int32)
>>> Tensor("Const_1:0", shape=(2, 2), dtype=float32)
>>> Tensor("Const_2:0", shape=(2, 2), dtype=float64)
>>> Tensor("Const_3:0", shape=(2, 2), dtype=string)

float(32, 64), [u]int(8, 16, 32, 64), bool, and complex(64, 128) are also available.
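
Tensors of one dtype can be converted to another with tf.cast; a small sketch (not from the original slides):

import tensorflow as tf

i = tf.constant([[1, 2], [3, 4]])  # int32 by default
f = tf.cast(i, tf.float32)         # element-wise conversion to float32
print i.dtype, f.dtype

>>> <dtype: 'int32'> <dtype: 'float32'>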

12 TensorFlow program in two parts

Construction Phase: build a graph
  Declare Variables, Placeholders, Operations, Optimizers
Execution Phase: initialize the model and run
  Start a session and iterate through the graph
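
To make the two phases concrete, here is a minimal sketch (not from the original slides): the graph is only described in the construction phase, and nothing is computed until a session runs it.

import tensorflow as tf

# Construction phase: build the graph; no computation happens here
a = tf.constant(2)
b = tf.constant(3)
c = tf.add(a, b)

# Execution phase: launch a session and run the graph
with tf.Session() as sess:
    print sess.run(c)

>>> 5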

13 Contents

What is TensorFlow?
Construction Phase
Execution Phase
Examples
Gotchas and Tips

14 Variable

Stores a scalar or tensor
Used as an input for Operations
Requires an initial value for future use

At construction phase, variables are "just" defined; values are assigned only after the Session initializes them. You will get the idea through the following slides.

15 Variable

import tensorflow as tf
# Define variable, not yet initialized
w = tf.Variable(tf.random_normal([1, 3]), name="weight")

with tf.Session() as sess:
    w_value = sess.run(w)  # get value for var 'w'
    print '\n'.join(map(str, [w, w_value]))

"tensorflow.python.framework.errors_impl.FailedPreconditionError:
 Attempting to use uninitialized value weight"

16 Variable

import tensorflow as tf
# Define variable, not yet initialized
w = tf.Variable(tf.random_normal([1, 3]), name="weight")
# Define init operation
init_op = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init_op)      # initialize all declared variables
    w_value = sess.run(w)  # get value for var 'w'
    print '\n'.join(map(str, [w, w_value]))

>>> Tensor("weight/read:0", shape=(1, 3), dtype=float32)
>>> [[ ... ]]  # random values

17 Variable

import tensorflow as tf
# Define variable, not yet initialized
w = tf.Variable(tf.random_normal([1, 3]), name="weight")
# Define init operation
init_op = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init_op)  # initialize all declared variables
    w_value = sess.run(w); x_value = sess.run(w)
    sess.run(init_op)  # re-initialize
    y_value = sess.run(w)
    print '\n'.join(map(str, [w_value, x_value, y_value]))

>>> [[ ... ]]
>>> [[ ... ]]  # same
>>> [[ ... ]]  # changed
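
If you only want to initialize some variables, each variable also exposes its own initializer op; a small sketch (assuming TF 1.x):

import tensorflow as tf

w = tf.Variable(tf.random_normal([1, 3]), name="weight")
b = tf.Variable(tf.zeros([3]), name="bias")

with tf.Session() as sess:
    sess.run(w.initializer)  # initialize only 'w'
    print sess.run(w)        # works; 'b' is still uninitialized
    # sess.run(b) here would raise FailedPreconditionError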

18 Placeholder

A placeholder for a tensor
Must be fed with data on execution for any operation that takes the placeholder as an operand

19 Placeholder

import tensorflow as tf
x = [[1, 2]]
y = [[1], [3]]
pla = tf.placeholder(tf.int32, (1, 2))
matmul = tf.matmul(pla, y)

with tf.Session() as sess:
    A = sess.run(matmul)
    print '\n'.join(map(str, [pla, A]))

>>> "You must feed a value for placeholder tensor 'Placeholder'
     with dtype int32 and shape [1,2]"

20 Placeholder

import tensorflow as tf
x = [[1, 2]]
y = [[1], [3]]
pla = tf.placeholder(tf.int32, (1, 2))
matmul = tf.matmul(pla, y)

with tf.Session() as sess:
    A = sess.run(matmul, feed_dict={pla: x})
    print '\n'.join(map(str, [pla, A]))

>>> Tensor("Placeholder:0", shape=(1, 2), dtype=int32)
>>> [[7]]

21 Shape of Placeholder

The shape can be partially known.

import tensorflow as tf
p1 = tf.placeholder(tf.float32, [1, None])
p2 = tf.placeholder(tf.float32, [None, 2])
p3 = tf.placeholder(tf.float32, [None, None])
ops = [tf.multiply(x, 10) for x in [p1, p2, p3]]
mat1 = [[1, 2]]
mat2 = [[3, 4], [5, 6]]
mat3 = [[1], [2]]

with tf.Session() as sess:
    print '\n'.join(map(str, sess.run(ops, feed_dict={
        p1: mat1, p2: mat2, p3: mat3})))

>>> [[ 10.  20.]]
>>> [[ 30.  40.]
     [ 50.  60.]]
>>> [[ 10.]
     [ 20.]]

22 Shape of Placeholder

Useful when you feed inputs/outputs to your model without fixing the batch size.

import tensorflow as tf
# Placeholders for MNIST
# image size 28x28
x = tf.placeholder(tf.float32, [None, 784])
# 10 classes [0..9]
y_ = tf.placeholder(tf.float32, [None, 10])
# Softmax regression: y = softmax(Wx + b)
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
y = tf.nn.softmax(tf.matmul(x, W) + b)
...

23 Operation

An Operation is a graph node that performs computation on tensors
Computation must be run in a Session
It takes inputs and optionally generates outputs
Usually it ends up as a prediction, a cost function, or a variable update
We have already seen some operations in the previous slides

24 Operation

Linear Regression (mean square error)

xW + b = y,   L = (1/2m) * Σ_{i=1}^{m} (x_i W - y_i)²

import tensorflow as tf, numpy as np
# tf Graph input
X = tf.placeholder("float")
Y = tf.placeholder("float")
# Set model weights
W = tf.Variable(np.random.randn(), name="weight")
b = tf.Variable(np.random.randn(), name="bias")
# Construct a linear model
pred = tf.add(tf.multiply(X, W), b)
# Mean square error (n_samples = number of training examples)
cost = tf.reduce_sum(tf.pow(pred - Y, 2)) / (2 * n_samples)

25 Operation

It is better to read through the documentation so that you don't have to reinvent the wheel; a short combined example follows this list.

Math: matmul, reduce_sum, argmax, norm, ...
Assign: assign, assign_add, assign_sub, ...
Tensor transform: slice, split, stack, concat, ...
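
For instance, a small sketch combining a few of these documented ops (reduce_sum, argmax, split); the values are illustrative:

import tensorflow as tf

x = tf.constant([[1., 2.], [3., 4.]])
total   = tf.reduce_sum(x)   # sum of all elements: 10.0
row_max = tf.argmax(x, 1)    # index of the max in each row: [1 1]
halves  = tf.split(x, 2, 0)  # two (1, 2) tensors split along axis 0

with tf.Session() as sess:
    print sess.run([total, row_max])
    print sess.run(halves)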

26 Optimizer

Provides methods to compute gradients for a loss and apply gradients to variables. Taking a cost function as input, an optimizer will:
1. Calculate gradients
2. Process gradients
3. Apply gradients
The "minimize" method on Optimizers does all 3 steps. Many optimizers are already implemented: GradientDescentOptimizer, AdagradOptimizer, RMSPropOptimizer, ...
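
A minimal sketch of the three steps spelled out explicitly instead of calling "minimize" (toy loss, not from the slides):

import tensorflow as tf

w = tf.Variable(1.0)
cost = tf.square(w - 3.0)  # toy loss, minimized at w = 3
opt = tf.train.GradientDescentOptimizer(learning_rate=0.1)

grads_and_vars = opt.compute_gradients(cost)    # 1. calculate gradients
# 2. process gradients here if needed (e.g. clipping)
train_op = opt.apply_gradients(grads_and_vars)  # 3. apply gradients

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(50):
        sess.run(train_op)
    print sess.run(w)  # close to 3.0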

27 Optimizer

import tensorflow as tf, numpy as np
...
# Construct a linear model
pred = tf.add(tf.multiply(X, W), b)
# Mean square error
cost = tf.reduce_sum(tf.pow(pred - Y, 2)) / (2 * n_samples)
# Gradient descent
optimizer = tf.train.GradientDescentOptimizer(learning_rate) \
                    .minimize(cost)

with tf.Session() as sess:
    ...
    for epoch in range(training_epochs):
        for (x, y) in zip(train_X, train_Y):  # SGD with batch size m'=1
            sess.run(optimizer, feed_dict={X: x, Y: y})  # update weights

28 Contents

What is TensorFlow?
Construction Phase
Execution Phase
Examples
Gotchas and Tips

29 Session

A Session launches the graph, initializes it, and runs operations as you defined. We have seen sessions throughout the slides, and I hope you got the idea of what a session is.
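
One detail worth knowing: a single run() call can fetch several operations at once, which avoids evaluating the graph redundantly. A small sketch (not from the original slides):

import tensorflow as tf

a = tf.constant(2)
b = tf.constant(3)
add = tf.add(a, b)
mul = tf.multiply(a, b)

with tf.Session() as sess:
    add_v, mul_v = sess.run([add, mul])  # both fetched in one pass
    print add_v, mul_v

>>> 5 6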

30 Session

Your data will be lost when you close the session.

import tensorflow as tf
var = tf.Variable(tf.random_normal((1, 3)))
init = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init)
    print sess.run(var)
    # Fetch train data
    # Calculate cost
    # Optimize
    # Do test
    # Display or save

print var.eval()  # outside the session

"No default session is registered. Use `with sess.as_default()`
 or pass an explicit session to..."
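
If you want variable values to survive across sessions, one common option is tf.train.Saver; a sketch assuming a hypothetical checkpoint path "./model.ckpt":

import tensorflow as tf

var = tf.Variable(tf.random_normal((1, 3)))
init = tf.global_variables_initializer()
saver = tf.train.Saver()

with tf.Session() as sess:
    sess.run(init)
    saver.save(sess, "./model.ckpt")     # persist variables to disk

with tf.Session() as sess:
    saver.restore(sess, "./model.ckpt")  # reload them in a new session
    print sess.run(var)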

31 Contents

What is TensorFlow?
Construction Phase
Execution Phase
Examples
Gotchas and Tips

32 MNIST

Famous example for deep learning
Data: hand-written digit [0-9] images (28*28)
  Each image will be treated as a vector of length 784
Task: classify which digit appears in each image (10 classes)
Model: softmax regression model, extended with two hidden layers (see the code below)
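
The later slides use mnist.train.next_batch without showing where mnist comes from; in the TF 1.x tutorials it is typically loaded like this (the "MNIST_data/" cache directory is an assumption here):

from tensorflow.examples.tutorials.mnist import input_data

# Downloads the dataset on first run, then reads it from the cache dir
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
batch_x, batch_y = mnist.train.next_batch(100)
print batch_x.shape, batch_y.shape

>>> (100, 784) (100, 10)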

33 MNIST

x = set of images, y = set of labels

layer_1 = W_1 x + b_1
layer_2 = W_2 layer_1 + b_2
pred = W_3 layer_2 + b_3

cost = -(1/m) * Σ_{i=1}^{m} y_i · log(softmax(pred_i))

34 Hyper parameters

# Parameters
learning_rate =
training_epochs = 15
batch_size = 100
display_step = 1

# Network Parameters
n_hidden_1 = 256  # 1st layer number of features
n_hidden_2 = 256  # 2nd layer number of features
n_input = 784     # MNIST data input (img shape: 28*28)
n_classes = 10    # MNIST total classes (0-9 digits)

Reference: TensorFlow Examples

35 Placeholder & Variables

import tensorflow as tf
# tf Graph input
x = tf.placeholder("float", [None, n_input])    # input
y = tf.placeholder("float", [None, n_classes])  # output

# Store layers weight & bias
h1 = tf.Variable(tf.random_normal([n_input, n_hidden_1]))
h2 = tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2]))
out = tf.Variable(tf.random_normal([n_hidden_2, n_classes]))
b1 = tf.Variable(tf.random_normal([n_hidden_1]))
b2 = tf.Variable(tf.random_normal([n_hidden_2]))
b3 = tf.Variable(tf.random_normal([n_classes]))

37 Build a model

# Hidden layer 1 with ReLU activation
layer_1 = tf.add(tf.matmul(x, h1), b1)
layer_1 = tf.nn.relu(layer_1)
# Hidden layer 2 with ReLU activation
layer_2 = tf.add(tf.matmul(layer_1, h2), b2)
layer_2 = tf.nn.relu(layer_2)
# Output layer with linear activation
pred = tf.matmul(layer_2, out) + b3

39 Optimizer

# Define loss and optimizer
y_hat_softmax = tf.nn.softmax(pred)
cost = tf.reduce_mean(
    -tf.reduce_sum(y * tf.log(y_hat_softmax), [1]))
optimizer = tf.train.AdamOptimizer(
    learning_rate=learning_rate).minimize(cost)

# Initializing the variables
init = tf.global_variables_initializer()
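
Computing softmax and then taking the log manually can be numerically unstable (log of values near 0). TF provides a fused op for this; a sketch of an equivalent cost, with placeholder shapes assumed from the model above:

import tensorflow as tf

pred = tf.placeholder(tf.float32, [None, 10])  # logits from the model
y = tf.placeholder(tf.float32, [None, 10])     # one-hot labels

# Applies softmax and cross entropy in one numerically stable step
cost = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))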

41 Run & Learn # Launch the graph with tf.session() as sess: sess.run(init) # Training cycle for epoch in range(training_epochs): avg_cost = 0. total_batch = int(mnist.train.num_examples/batch_size) # Loop over all batches for i in range(total_batch): batch_x, batch_y = mnist.train.next_batch(batch_size) # Run optimization op (backprop) and cost op (to get loss value) _, c = sess.run([optimizer, cost], feed_dict={x: batch_x, y: batch_y}) # Compute average loss avg_cost += c / total_batch # Display logs per epoch step if epoch % display_step == 0: print "Epoch:", '%04d' % (epoch+1), "cost=", \ "{:.9f}".format(avg_cost) print "Optimization Finished!" # Test model correct_prediction = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1)) # Calculate accuracy accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float")) print "Accuracy:", accuracy.eval({x: mnist.test.images, y: mnist.test.labels}) 41

42 Result

>>> ...
>>> Epoch: 0010 cost=
>>> Epoch: 0011 cost=
>>> Epoch: 0012 cost=
>>> Epoch: 0013 cost=
>>> Epoch: 0014 cost=
>>> Epoch: 0015 cost=
>>> Optimization Finished!
>>> Accuracy:

43 Contents

What is TensorFlow?
Construction Phase
Execution Phase
Examples
Gotchas and Tips

44 Gotchas & Tips

Don't forget to put a value (not a Tensor object) into feed_dict.

import tensorflow as tf
pla = tf.placeholder(tf.int32, [2, 2])
x = tf.Variable([[1, 2], [3, 4]])
y = tf.Variable([[3, 4], [1, 2]])
init = tf.global_variables_initializer()
matmul = tf.matmul(pla, y)

with tf.Session() as sess:
    sess.run(init)
    f_dict = {pla: x}  # x is actually a Tensor "object"
    result = sess.run(matmul, feed_dict=f_dict)
    print result

"ValueError: setting an array element with a sequence."

45 Gotchas & Tips

Don't forget to put a value (not a Tensor object) into feed_dict.

import tensorflow as tf
pla = tf.placeholder(tf.int32, [2, 2])
x = tf.Variable([[1, 2], [3, 4]])
y = tf.Variable([[3, 4], [1, 2]])
init = tf.global_variables_initializer()
matmul = tf.matmul(pla, y)

with tf.Session() as sess:
    sess.run(init)
    x_v = sess.run(x)  # x_v is now an array of shape (2, 2)
    f_dict = {pla: x_v}
    result = sess.run(matmul, feed_dict=f_dict)
    print result

>>> [[ 5  8]
     [13 20]]

46 Gotchas & Tips

Better not to create Variables/Placeholders/Operations inside the epoch loop: each call adds new nodes, so the graph keeps growing.

import tensorflow as tf

with tf.Session() as sess:
    for epoch in range(100):
        # This keeps adding nodes to the graph!
        v = tf.Variable([[1, 2]])
        op = tf.add(1, 2)
        pla = tf.placeholder(tf.int32, [1, 2])
        print '\n'.join(map(str, [v, op, pla]))

>>> ...
>>> Tensor("Variable_99/read:0", shape=(1, 2), dtype=int32)
>>> Tensor("Add_99:0", shape=(), dtype=int32)
>>> Tensor("Placeholder_99:0", shape=(1, 2), dtype=int32)
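
The usual fix is to build the graph once, before the loop, and only call run() inside it; a minimal sketch:

import tensorflow as tf

# Build the graph once, outside the training loop
v = tf.Variable([[1, 2]])
pla = tf.placeholder(tf.int32, [1, 2])
op = tf.add(pla, 1)
init = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init)
    for epoch in range(100):
        # Only run() inside the loop; no new nodes are created
        result = sess.run(op, feed_dict={pla: [[epoch, epoch]]})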

47 Useful References

TensorFlow-Examples
DCGAN-tensorflow
Style Transfer in TensorFlow
Word2vec in TensorFlow

48 Q & A 48
