CENG 783 Special Topics in Deep Learning, Week 8, Sinan Kalkan

Loss functions. Many-correct-labels case: make a binary prediction for each label independently, with a hinge loss per label: L_i = Σ_j max(0, 1 − y_ij f_j), where y_ij = +1 if example i is labeled with label j and y_ij = −1 otherwise. Alternatively, train a logistic loss for each label (0 or 1).
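
A minimal numpy sketch of this multi-label hinge loss for a single example, assuming a score vector f of length C and a label vector y in {−1, +1}^C (the names and toy numbers are illustrative only):

```python
import numpy as np

def multilabel_hinge(f, y):
    # sum_j max(0, 1 - y_j * f_j), computed independently per label
    return np.sum(np.maximum(0.0, 1.0 - y * f))

f = np.array([0.7, -1.2, 0.3])   # scores for 3 labels
y = np.array([+1, -1, +1])       # this example carries labels 0 and 2
print(multilabel_hinge(f, y))    # 0.3 + 0.0 + 0.7 = 1.0
```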

Data Preprocessing: mean subtraction, normalization, PCA and whitening.

Data Preprocessing: Normalization (or conditioning). Necessary if you believe that your dimensions have different scales; you may need to correct this to give equal importance to each dimension. Normalize each dimension by its standard deviation after mean subtraction: x_i ← x_i − μ_i, then x_i ← x_i / σ_i. Effect: the dimensions end up with the same scale. http://cs231n.github.io/neural-networks-2/
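
A minimal sketch of this preprocessing in numpy, assuming a data matrix X of shape (N, D) with N examples and D dimensions (as in the cs231n notes):

```python
import numpy as np

X = np.random.randn(100, 3) * np.array([1.0, 10.0, 100.0])  # toy data with very different scales
X -= np.mean(X, axis=0)   # mean subtraction: zero-center every dimension
X /= np.std(X, axis=0)    # normalization: give every dimension unit scale
```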

Data Preprocessing: Principal Component Analysis. First center the data, then find the eigenvectors e_1, …, e_n and project the data onto them: x_i^R = x_i [e_1, …, e_n]. This corresponds to rotating the data so that the eigenvectors become the axes. If you keep only the first M eigenvectors, this corresponds to dimensionality reduction. http://cs231n.github.io/neural-networks-2/

Data Preprocessing: Whitening. Normalize the scale of each rotated dimension with the square root of its eigenvalue: x_i^W = x_i^R / (√[λ_1, …, λ_n] + ε), where ε is a very small number to avoid division by zero. This stretches each dimension to have the same scale. Side effect: this may exaggerate noise. http://cs231n.github.io/neural-networks-2/
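
A short sketch of centering, PCA and whitening along the lines of the cs231n recipe (toy data; the variable names are illustrative):

```python
import numpy as np

X = np.random.randn(100, 3)            # toy data of shape (N, D)
X -= np.mean(X, axis=0)                # 1) center the data
cov = X.T @ X / X.shape[0]             # covariance matrix
U, S, _ = np.linalg.svd(cov)           # columns of U are the eigenvectors e_1 ... e_n
X_rot = X @ U                          # 2) rotate: project onto the eigenvectors
X_red = X @ U[:, :2]                   #    keeping only the first M columns reduces dimensionality
X_white = X_rot / np.sqrt(S + 1e-5)    # 3) whiten: same scale everywhere (may exaggerate noise)
```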

Data Preprocessing: Summary. We mostly don't use PCA or whitening: they are computationally very expensive, and whitening has side effects. It is quite crucial and common to zero-center the data, and most of the time we also see normalization by the standard deviation.

Weight Initialization. Zero weights: wrong! This leads to updating the weights by the same amount for every input: symmetry! Instead, initialize the weights randomly to small values, e.g., sampled from Normal(0, 0.01), but don't initialize them too small. The bias may be initialized to zero; for ReLU units, it may be a small number like 0.01. Note: none of these provide guarantees, and there is no guarantee that one of them will always be better.

Initial Weight Normalization. Solution: get rid of n in Var(s) = (n Var(w)) Var(x). How? w_i = rand(0,1)/√n. Why? Var(aX) = a² Var(X). If the numbers of inputs & outputs are not fixed: w_i = rand(0,1) √(2/(n_in + n_out)). Glorot & Bengio, Understanding the difficulty of training deep feedforward neural networks, 2010.
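
A small sketch of these two initializations in numpy (a unit-Gaussian sample scaled by 1/√n, and the Glorot & Bengio variant that also uses n_out); the helper name is made up for illustration:

```python
import numpy as np

def init_weights(n_in, n_out):
    w_simple = np.random.randn(n_in, n_out) / np.sqrt(n_in)                 # removes the n factor in Var(s)
    w_glorot = np.random.randn(n_in, n_out) * np.sqrt(2.0 / (n_in + n_out)) # Glorot & Bengio (2010)
    return w_simple, w_glorot

w1, w2 = init_weights(1000, 100)
print(w1.std(), w2.std())   # roughly 1/sqrt(1000) and sqrt(2/1100)
```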

Alternative: Batch Normalization. Normalization is differentiable, so make it part of the model (not only a preprocessing step at the beginning), i.e., perform normalization during every step of processing. It is more robust to initialization and has been shown to also regularize the network in some cases (dropping the need for dropout). Issue: how to normalize at test time? 1. Store means and variances during training, or 2. Calculate the mean & variance over your test data. Ioffe & Szegedy, Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift, 2015.
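
A minimal sketch of the batch-normalization forward pass for a mini-batch of shape (N, D), with learnable scale gamma and shift beta (based on Ioffe & Szegedy, 2015; at test time the batch statistics would be replaced by stored running estimates):

```python
import numpy as np

def batchnorm_forward(x, gamma, beta, eps=1e-5):
    mu = x.mean(axis=0)                     # per-dimension mean of the mini-batch
    var = x.var(axis=0)                     # per-dimension variance of the mini-batch
    x_hat = (x - mu) / np.sqrt(var + eps)   # normalize every dimension
    return gamma * x_hat + beta             # learnable scale and shift

x = np.random.randn(32, 100)                # a mini-batch of 32 examples
out = batchnorm_forward(x, gamma=np.ones(100), beta=np.zeros(100))
```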

Alternative Normalizations. https://medium.com/syncedreview/facebook-ai-proposes-group-normalization-alternative-to-batch-normalization-fb0699bffae7

To sum up: initialization and normalization are crucial. Different initialization & normalization strategies may be needed for different deep learning methods; e.g., in CNNs, normalization might be performed only on the convolution layers. More on this later.

Issues & tricks. Vanishing gradient: saturated units block gradient propagation (why?); a problem especially present in recurrent networks or networks with many layers. Overfitting: dropout, regularization and other tricks. Tricks: unsupervised pretraining; batch normalization (each unit's preactivation is normalized), which helps keep the preactivations non-saturated, is done over mini-batches (adding stochasticity), and requires backprop to be updated.

Today: Convolutional Neural Networks



Can we predict which model is best without looking at test data? https://calculatedcontent.com/2018/11/18/dont-peek-part-2-predictions-without-test-data/

TRUBA http://wiki.truba.gov.tr/index.php/truba-akya-cuda Count: 24 servers. GPU: 4x NVIDIA Tesla V100 (NVLink interconnect, 16 GB memory). CPU: 2x 20-core Intel Xeon Scalable 6148. Memory: 384 GB ECC 2600 MHz. Interconnect: 100 Gbps EDR InfiniBand. Disk: 2x 240 GB SSD. Scratch/tmp: 1x 1.4 TB NVMe disk. Theoretical CPU performance: 2.2 Tflops (double). GPU tensor performance: 4x 125 Tflops. GPU performance: 4x 7.8 Tflops (double), 4x 15.7 Tflops (single). HPL performance: 2.1 Tflops (CPU). SPECfp performance: 1400 (CPU).

CONVOLUTIONAL NEURAL NETWORKS

Motivation: a fully-connected network has too many parameters. On CIFAR-10, images have size 32x32x3, so one neuron in a hidden layer has 3,072 weights. With images of size 512x512x3, one neuron in a hidden layer has 786,432 weights! This explodes quickly as you increase the number of neurons & layers. Alternative: enforce local connectivity! Figure: Goodfellow et al., Deep Learning, MIT Press, 2016.

A CRASH COURSE ON CONVOLUTION

Formulating Signals in Terms of Impulse Signal

Formulating Signals in Terms of Impulse Signal ("to sift" = "elemek" in Turkish)

Unit Sample Response

Conclusion

Power of convolution Describe a system (or operation) with a very simple function (impulse response). Determine the output by convolving the input with the impulse response

Convolution. Definition of continuous-time convolution: x(t) * h(t) = ∫ x(τ) h(t − τ) dτ

Convolution. Definition of discrete-time convolution: x[n] * h[n] = Σ_k x[k] h[n − k]
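
A direct numpy translation of this definition for finite signals, checked against np.convolve (toy inputs only):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
h = np.array([0.5, 0.5])

y = np.zeros(len(x) + len(h) - 1)
for n in range(len(y)):
    for k in range(len(x)):
        if 0 <= n - k < len(h):          # h is zero outside its support
            y[n] += x[k] * h[n - k]      # y[n] = sum_k x[k] h[n-k]

print(y)                     # [0.5 1.5 2.5 1.5]
print(np.convolve(x, h))     # same result
```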

Discrete-time 2D Convolution. For images, we need two-dimensional convolution: S(i,j) = (I * K)(i,j) = Σ_m Σ_n I(m,n) K(i−m, j−n). These multi-dimensional arrays are called tensors. We have the commutative property: S(i,j) = Σ_m Σ_n I(i−m, j−n) K(m,n). Instead of subtraction, we can also write the sum with addition (easy to derive by a change of variables); this is called cross-correlation: S(i,j) = Σ_m Σ_n I(i+m, j+n) K(m,n).
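
A naive numpy sketch of the "valid" 2-D cross-correlation above (the operation CNN layers actually compute); flipping the kernel would give true convolution:

```python
import numpy as np

def cross_correlate2d(I, K):
    H, W = I.shape
    kh, kw = K.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # S(i,j) = sum_m sum_n I(i+m, j+n) K(m,n)
            out[i, j] = np.sum(I[i:i+kh, j:j+kw] * K)
    return out

I = np.arange(16.0).reshape(4, 4)
K = np.array([[1.0, 0.0], [0.0, -1.0]])
print(cross_correlate2d(I, K))   # each entry equals I[i,j] - I[i+1,j+1]
```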

Example multi-dimensional convolution Figure: Goodfellow et al., Deep Learning, MIT Press, 2016.

OVERVIEW OF CNN

CNN layers. Stages of a CNN: Convolution (in parallel) to produce pre-synaptic activations; Detector: a non-linear function; Pooling: a summary of a neighborhood. Pooling of a rectangular region: max, average, L2 norm, or a weighted average according to the distance to the center. Figure: Goodfellow et al., Deep Learning, MIT Press, 2016.

Regular ANN vs. CNN. http://cs231n.github.io/convolutional-networks/

An example architecture http://cs231n.github.io/convolutional-networks/

CONVOLUTION IN CNN

Convolution in CNN. y(t) = x(t) * h(t) = ∫ x(τ) h(t − τ) dτ, where x(t) is the input, h(t) is the impulse response (in engineering) or kernel (in CNNs), and y(t) is the output, called a feature map (in CNNs). We require h(t) = 0 for t < 0; otherwise, the system would use future input. Effectively, we are learning the filters that work best for our problem.

Convolution in CNN. The weights correspond to the kernel. The weights are shared within a channel (depth slice). We are effectively learning filters that respond to certain parts/entities/visual cues, etc. Figure: Goodfellow et al., Deep Learning, MIT Press, 2016.

Local connectivity in CNN = receptive fields. Each neuron is connected to only a local neighborhood, i.e., its receptive field. The size of the receptive field is another hyper-parameter.

Having looked at convolution in general and convolution in CNN: why convolution in neural networks? In regular ANNs, each output node considers all of the data; this is matrix multiplication in essence. It is redundant in most cases, and expensive. Instead, use a smaller matrix multiplication: sparse interaction. It can extract meaningful entities, such as edges and corners, that occupy less space. This reduces space requirements and improves speed. Regular ANN: m x n parameters. With CNN (if we restrict each output to link to k units): n x k parameters.

Having looked at convolution in general and convolution in CNN: Why convolution in neural networks? When things go deep, an output may depend on all or most of the input: Figure: Goodfellow et al., Deep Learning, MIT Press, 2016.

Having looked at convolution in general and convolution in CNN: why convolution in neural networks? Parameter sharing. In a regular ANN, each weight is independent; in a CNN, a layer re-applies the same convolution and therefore shares the parameters of a convolution. This reduces storage and learning time. Example with 320x280 input and output: for computing the whole next layer, a fully-connected ANN needs 320x280 x 320x280 multiplications, whereas a CNN with a 3x3 kernel needs 320x280 x 3x3 multiplications. Figure: Goodfellow et al., Deep Learning, MIT Press, 2016.

Having looked at convolution in general and convolution in CNN: Why convolution in neural networks? Equivariant to translation The output will be the same, just translated, since the weights are shared. Not equivariant to scale or rotation.

Connectivity in CNN. Local: the behavior of a neuron does not change, other than being restricted to a subspace of the input. Each neuron is connected to a slice of the previous layer. A layer is actually a volume with a certain width x height and depth (or channels); a neuron is connected to a subspace of width x height but to all channels (the full depth). Example: CIFAR-10. Input: 32 x 32 x 3 (3 for the RGB channels). A neuron in the next layer with receptive field size 5x5 gets input from a volume of 5x5x3. http://cs231n.github.io/convolutional-networks/

Important parameters. Depth (number of channels): we have more neurons getting input from the same receptive field; this is similar to hidden neurons with connections to the same input; these neurons learn to become selective to the presence of different signals in the same receptive field. Stride: the amount of space between neighboring receptive fields; if it is small, RFs overlap more, and if it is big, RFs overlap less. How to handle the boundaries? Option 1: don't process the boundaries, i.e., only process pixels on which the convolution window can be placed fully. Option 2: zero-pad the input so that convolution can be performed at the boundary pixels. http://cs231n.github.io/convolutional-networks/

Padding illustration. Only convolution layers are shown. Top: no padding, so the layers shrink in size. Bottom: zero padding, so the layers keep their size fixed. Figure: Goodfellow et al., Deep Learning, MIT Press, 2016.

Size of the network. Along a dimension: W: size of the input; F: size of the receptive field; S: stride; P: amount of zero-padding. Then the number of neurons at the output of a convolution layer is (W − F + 2P)/S + 1. If this number is not an integer, your strides are incorrect and your neurons cannot tile nicely to cover the input volume. http://cs231n.github.io/convolutional-networks/

Size of the network. Arranging these hyperparameters can be problematic. Example: if W=10, P=0, and F=3, then (W − F + 2P)/S + 1 = (10 − 3 + 0)/S + 1 = 7/S + 1, i.e., S cannot be an integer other than 1 or 7. Zero-padding is your friend here.
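
A tiny helper (hypothetical, for illustration) that computes this output size and complains when the stride does not tile the input:

```python
def conv_output_size(W, F, S, P):
    n, rem = divmod(W - F + 2 * P, S)
    if rem != 0:
        raise ValueError("strides do not tile the input; adjust S or the zero-padding P")
    return n + 1

print(conv_output_size(W=10, F=3, S=1, P=0))    # 8
print(conv_output_size(W=10, F=3, S=7, P=0))    # 2
print(conv_output_size(W=227, F=11, S=4, P=0))  # 55 (AlexNet's first layer, next slide)
```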

Real example: AlexNet (Krizhevsky et al., 2012). Image size: 227x227x3, so W=227, F=11, S=4, P=0, giving (227 − 11)/4 + 1 = 55 (the size of the convolutional layer). Convolution layer: 55x55x96 neurons (96 is the depth, the number of channels). Therefore, the first layer has 55x55x96 = 290,400 neurons. Each has an 11x11x3 receptive field: 363 weights and 1 bias. Then, 290,400 x 364 = 105,705,600 parameters just for the first convolution layer (if there were no sharing). A very high number!

Real example: AlexNet (Krizhevsky et al., 2012). However, we can share the parameters: for each channel (slice of depth), use the same set of weights. With 96 channels, this means 96 different sets of weights, and hence 96 x 364 = 34,944 parameters; 364 weights are shared by the 55x55 neurons in each channel. http://cs231n.github.io/convolutional-networks/
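
The arithmetic from these two slides, spelled out:

```python
neurons = 55 * 55 * 96                 # 290,400 neurons in the first CONV layer
params_per_neuron = 11 * 11 * 3 + 1    # 363 weights + 1 bias = 364
print(neurons * params_per_neuron)     # 105,705,600 parameters without sharing
print(96 * params_per_neuron)          # 34,944 parameters with one filter per channel
```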

More on connectivity. Small RFs & stacking, e.g., 3 CONV layers with 3x3 RFs. Pros: the same extent (as in the example figures), with non-linearity added by the 2nd and 3rd layers: more expressive, more representational capacity, and fewer parameters: 3 x [(3x3xC) x C] = 27CxC. Cons? Large RF & a single layer, e.g., 7x7 RFs in a single CONV layer. Cons: a linear operation in one layer, and more parameters: (7x7xC) x C = 49CxC. Pros? So, we prefer a stack of small filters over a single big one. http://cs231n.github.io/convolutional-networks/

Convolution layer: how to train? How to backpropagate? Calculate the error for each neuron and add up the gradients, but update only one set of weights per slice. More on this later.

Implementation details: numpy example. Suppose the input volume is X of shape (11,11,4). The depth slice at depth d (i.e., channel d) is X[:,:,d]; the depth column at position (x,y) is X[x,y,:]. With F=5, P=0 (no padding), S=2, the output volume (V) width and height are (11−5+0)/2+1 = 4. Example computation for some neurons in the first channel (note that this is just along one dimension): http://cs231n.github.io/convolutional-networks/
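
A sketch of the kind of computation the slide refers to, with random data standing in for the actual example (W0 and b0 denote the shared weights and bias of the first channel):

```python
import numpy as np

X = np.random.randn(11, 11, 4)           # input volume
W0, b0 = np.random.randn(5, 5, 4), 0.1   # weights and bias of the first channel
V = np.zeros((4, 4, 1))                  # output: (11 - 5)/2 + 1 = 4 along width and height

# Moving along one dimension (width) with stride S = 2:
V[0, 0, 0] = np.sum(X[0:5,  0:5, :] * W0) + b0
V[1, 0, 0] = np.sum(X[2:7,  0:5, :] * W0) + b0
V[2, 0, 0] = np.sum(X[4:9,  0:5, :] * W0) + b0
V[3, 0, 0] = np.sum(X[6:11, 0:5, :] * W0) + b0
```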

Implementation details: numpy example. A second activation map (channel) is computed in the same way with a second set of weights. To exploit fast matrix multiplication, we convert each w x h x d volume (input & weights) into a single vector using an im2col function; this introduces redundancy since the receptive fields overlap. We can then compute everything with a single dot product, np.dot(w_row, X_col), and the result needs to be reshaped back. http://cs231n.github.io/convolutional-networks/
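
A naive single-channel im2col sketch (a hypothetical helper, not the course's implementation) showing how the layer then reduces to one matrix product:

```python
import numpy as np

def im2col(X, F, S):
    # each column holds one flattened FxF receptive field of X
    H, W = X.shape
    out_h, out_w = (H - F) // S + 1, (W - F) // S + 1
    cols = np.empty((F * F, out_h * out_w))
    idx = 0
    for i in range(0, H - F + 1, S):
        for j in range(0, W - F + 1, S):
            cols[:, idx] = X[i:i+F, j:j+F].ravel()
            idx += 1
    return cols

X = np.random.randn(5, 5)
w = np.random.randn(3, 3)                 # one 3x3 filter
X_col = im2col(X, F=3, S=1)               # shape (9, 9); overlapping fields are duplicated
w_row = w.ravel()[None, :]                # shape (1, 9)
out = np.dot(w_row, X_col).reshape(3, 3)  # reshape the result back into the activation map
```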

http://cs231n.github.io/convolutional-networks/

Unshared convolution. In some cases, sharing the weights does not make sense. When? Different parts of the input might require different types of features; in such a case, we just have a network with local connectivity. E.g., a face: features are not repeated across the space.

Dilated Convolution

Deconvolution https://github.com/vdumoulin/conv_arithmetic

Convolution demos: https://github.com/vdumoulin/conv_arithmetic http://cs231n.github.io/assets/conv-demo/index.html https://ezyang.github.io/convolution-visualizer/index.html https://ikhlestov.github.io/pages/machine-learning/convolutions-types/

Efficient convolution. Convolution in the time domain means multiplication in the frequency domain. For some problem sizes, converting the signals to the frequency domain, multiplying there, and transforming back is more efficient. A kernel is called separable if it can be written as an outer product of two vectors: separable kernels can be stored with less memory if you store just the vectors, and processing with the vectors is faster than naïve convolution. http://cs231n.github.io/convolutional-networks/

Position-sensitive convolution: learn to use position information when necessary (2018).

POOLING LAYER / OPERATION

Pooling. Apply an operation on the detector results to combine or summarize the answers of a set of units; it is applied to each channel (depth slice) independently. The operation has to be differentiable, of course. Alternatives: maximum, sum, average, weighted average according to the distance from the center pixel, L2 norm, second-order statistics? Different problems may perform better with different pooling methods. Pooling can be overlapping or non-overlapping. http://cs231n.github.io/convolutional-networks/

Pooling example. A pooling layer with filters of size 2x2 and stride 2 discards 75% of the activations; the depth dimension remains unchanged. Max pooling with F=3, S=2 or F=2, S=2 is quite common. Pooling with bigger receptive field sizes can be destructive. Average pooling is an obsolete choice: max pooling is shown to work better in practice. http://cs231n.github.io/convolutional-networks/
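
A compact numpy sketch of 2x2/stride-2 max pooling on an (H, W, C) volume (H and W assumed even; illustrative only):

```python
import numpy as np

def max_pool_2x2(X):
    H, W, C = X.shape
    blocks = X.reshape(H // 2, 2, W // 2, 2, C)   # group into 2x2 spatial blocks per channel
    return blocks.max(axis=(1, 3))                # keep the max of each block, depth unchanged

X = np.random.randn(4, 4, 3)
print(max_pool_2x2(X).shape)   # (2, 2, 3): 75% of the activations are discarded
```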

Pooling. Pooling provides invariance to small translations (e.g., the input shifted to the right). If you pool over different convolution operators, you can gain invariance to different transformations. Figure: Goodfellow et al., Deep Learning, MIT Press, 2016.

Pooling can downsample. This is especially needed to produce a fixed-length output from a varying-length input. Figure: Goodfellow et al., Deep Learning, MIT Press, 2016.

Pooling can downsample. If you want to use the network on images of varying size, you can arrange this with pooling (with the help of convolutional layers): e.g., provide the last classification layer with the pooling results of the four quadrants of the image; this is invariant to the size.

Backpropagation for pooling. E.g., for a max operation, remember the derivative of the max: for f(x, y) = max(x, y), ∂f/∂x = 1(x ≥ y) and ∂f/∂y = 1(y ≥ x). Interpretation: we only route the gradient to the input that had the biggest value in the forward pass. This requires saving the index of the max activation (sometimes also called the switches) so that gradient routing is handled efficiently during backpropagation. http://cs231n.github.io/convolutional-networks/
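
A sketch of this gradient routing for 2x2/stride-2 max pooling on a single (H, W) slice, assuming dout holds the upstream gradients (illustrative only):

```python
import numpy as np

def max_pool_backward(X, dout):
    H, W = X.shape
    dX = np.zeros_like(X)
    for i in range(H // 2):
        for j in range(W // 2):
            block = X[2*i:2*i+2, 2*j:2*j+2]
            k, l = np.unravel_index(np.argmax(block), block.shape)  # the "switch"
            dX[2*i + k, 2*j + l] = dout[i, j]   # route the gradient to the max only
    return dX

X = np.array([[1.0, 3.0],
              [2.0, 0.0]])
print(max_pool_backward(X, np.array([[5.0]])))  # the 5 is routed to the position of the 3
```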

Recent developments. "Striving for Simplicity: The All Convolutional Net" proposes to discard the pooling layer in favor of an architecture that consists only of repeated CONV layers; to reduce the size of the representation, they suggest using a larger stride in a CONV layer once in a while. http://cs231n.github.io/convolutional-networks/

For more on pooling (this is for binary features though).

Convolution & pooling provide a strong bias on the model and the solution; they directly affect the overall performance of the system.

NON-LINEARITY LAYER

Non-linearity: sigmoid, tanh, ReLU and its variants. ReLU is the common choice: faster, easier (in backpropagation etc.), and it avoids saturation issues.

NORMALIZATION LAYER

Normalization layer. Normalizes the responses of neurons; necessary especially when the activations are not bounded (e.g., ReLU units). It has been replaced by dropout, batch normalization, better initialization, etc.; it has little impact and is not favored anymore.

From Krizhevsky et al. (2012):

FULLY-CONNECTED LAYER

Fully-connected layer. At the top of the network, for mapping the feature responses to output labels. Full connectivity; there can be many such layers; various activation functions can be used.

FC & Conv layer conversion. CONV to FC conversion: the weight matrix would be a large matrix that is mostly zero except at certain blocks (due to local connectivity), where the weights in many of the blocks are equal (due to parameter sharing). FC to CONV conversion: treat the neurons of the next layer as the different channels.
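
A small numpy check (toy sizes, illustrative names) that a fully-connected layer over an HxWxD volume computes the same thing as K CONV filters whose receptive field covers the whole volume:

```python
import numpy as np

H, W, D, K = 7, 7, 8, 10                 # input volume and number of outputs (toy sizes)
x = np.random.randn(H, W, D)

W_fc = np.random.randn(K, H * W * D)     # FC view: flatten the volume, one row per output
y_fc = W_fc @ x.ravel()

# CONV view: K filters of size HxWxD (the filter covers the whole input, so the output is 1x1xK);
# each filter is one row of W_fc reshaped back to the volume shape.
y_conv = np.array([np.sum(W_fc[k].reshape(H, W, D) * x) for k in range(K)])

print(np.allclose(y_fc, y_conv))         # True: the same computation
```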