EIE6207: Deep Learning and Deep Neural Networks


1 EIE6207: Deep Learning and Deep Neural Networks. Man-Wai MAK, Dept. of Electronic and Information Engineering, The Hong Kong Polytechnic University. References: mwmak/papers/bp.pdf; S.Y. Kung, M.W. Mak and S.H. Lin, Biometric Authentication: A Machine Learning Approach, Prentice Hall, 2005, Chapter 5; M.W. Mak, Deep Learning: Concepts, Tools, and Applications, IEEE Hong Kong Section, Robotics and Automation/Control Systems Joint Chapter, The Hong Kong Polytechnic University, 16 March (mwmak/papers/ieee-racs-seminar18.pdf). November 9, 2018.

2 Overview: 1. Motivations; 2. Deep Learning and DNN (Neural Networks, NN as Nonlinear Classifier, Deep Neural Networks); 3. Gradient Descent (Simple Gradient Descent, Stochastic Gradient Descent); 4. Backpropagation (Loss Function: Mean Square Error, Loss Function: Cross Entropy); 5. Convolutional Neural Networks.

3 Motivations. Deep learning is currently the most exciting area in AI, perhaps because of the success of AlphaGo and Master. Deep learning finds its way into many disciplines, e.g., Power Electronics: Neural Network Applications in Power Electronics and Motor Drives: An Introduction and Perspective. Autonomous Vehicles: End-to-End Learning for Self-Driving Cars, Nvidia. Multimedia: DNNs win almost all competitions in computer vision, face recognition, speech recognition, natural language processing, etc. Big-Data Analytics: Recommendation systems, customer relationship management. Why does deep learning work so well? Why do you need to learn deep learning?

4 Definitions: AI, Machine Learning, and Deep Learning. Deep learning is a branch of machine learning algorithms for training deep neural networks. (Figure: nested sets, with Deep Learning inside Machine Learning inside Artificial Intelligence.) 5 Conventionally, we have shallow networks such as multi-layer perceptrons (discussed later), SVMs, and GMMs. Deep neural networks have a narrow structure but many layers of non-linear processing elements.

5 What Makes DNN Successful? More complex models can be built, provided that we have more data. In the big data era, we have a large amount of data. With GPUs and the cloud, we also have a large amount of computational resources. Open source efforts (e.g., Theano, TensorFlow, Kaldi, PyTorch, CNTK) also facilitate the development of complex models.

6 Neural Networks. Artificial neural networks (ANNs) are a family of statistical learning models inspired by biological neural networks (the brain). They are used to approximate some unknown functions that depend on a large number of inputs. The simplest type of neural network, called a perceptron, has one neuron only. For an input $\mathbf{x} = [x_1, x_2, x_3]^T$, the output is $h_{\mathbf{w},b}(\mathbf{x}) = f(\mathbf{w}^T\mathbf{x} + b) = f\left(\sum_{i=1}^{3} w_i x_i + b\right)$, where $f(z) = \frac{1}{1+e^{-z}}$.
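
As a concrete illustration of the perceptron equation above, the following NumPy sketch computes $h_{\mathbf{w},b}(\mathbf{x}) = f(\mathbf{w}^T\mathbf{x} + b)$ with a sigmoid $f$; the weight, bias, and input values are made up for the example.

```python
import numpy as np

def sigmoid(z):
    """Logistic activation f(z) = 1 / (1 + exp(-z))."""
    return 1.0 / (1.0 + np.exp(-z))

def perceptron(x, w, b):
    """Single-neuron output h_{w,b}(x) = f(w^T x + b)."""
    return sigmoid(np.dot(w, x) + b)

x = np.array([0.5, -1.2, 0.3])   # example 3-D input
w = np.array([0.4, 0.7, -0.2])   # example weights
b = 0.1                          # example bias
print(perceptron(x, w, b))       # a value in (0, 1)
```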

7 Geometric Interpretation of Perceptrons. For 2-D inputs, we may use coordinate geometry to understand how a perceptron works. Slope-intercept form of a plane: $z = m_1 x_1 + m_2 x_2 + b$, where $m_1$ is the $x_1$-slope, $m_2$ is the $x_2$-slope, and $b$ is the $z$-intercept. (Figure: the plane $z = f(x_1, x_2)$, with cross-sections $z = m_1 x_1 + b$ and $z = m_2 x_2 + b$.)

8 Geometric Interpretation of Perceptrons. Apply a non-linear activation function on $z$: either the sigmoid $\sigma(z) = \frac{1}{1+e^{-z}}$ or the ReLU $\sigma(z) = \max\{z, 0\}$. (Figure: the plane over $(x_1, x_2)$ passed through $\sigma$, giving a sigmoidal or ramp-shaped surface.)
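
To see how the two activation functions above reshape the plane $z = m_1 x_1 + m_2 x_2 + b$, here is a small NumPy sketch; the grid ranges and plane parameters are arbitrary choices for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def relu(z):
    return np.maximum(z, 0.0)

# Evaluate the plane z = m1*x1 + m2*x2 + b on a grid, then apply the activations.
m1, m2, b = 1.0, -0.5, 0.2                       # illustrative slopes and intercept
x1, x2 = np.meshgrid(np.linspace(-3, 3, 5), np.linspace(-3, 3, 5))
z = m1 * x1 + m2 * x2 + b
print(sigmoid(z))   # values squashed into (0, 1)
print(relu(z))      # negative part of the plane clipped to 0
```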

9 Neural Networks with 1 Hidden Layer. A neural network is constructed by hooking many neurons together. (Figure: an input layer with a bias unit, a hidden layer with activations $a_1^{(2)}, a_2^{(2)}, a_3^{(2)}$, and an output layer producing $o(\mathbf{x})$; weights $\mathbf{W}^{(1)} \in \mathbb{R}^{3\times 3}$ and $\mathbf{W}^{(2)} \in \mathbb{R}^{1\times 3}$.) Both input and output layers can have multiple nodes.

10 Neural Networks with 1 Hidden Layer. (Figure: a network with inputs $x$ and $y$, a hidden layer of sigmoid units $\sigma(z)$, and output $o = F(x, y)$.)

11 Neural Networks with 1 Hidden Layer: A Neural Network with Multiple Neurons. (Figure: the same network but with ReLU activations $\sigma(z) = \max\{z, 0\}$ in the hidden layer; inputs $x$ and $y$, output $o = F(x, y)$.)

12 Comparing Activation Functions. You may observe the effect of different activation functions through TensorFlow Playground.

13 Comparing Activation Functions.

14 Neural Networks: Forward Propagation. General equations for forward propagation: $\mathbf{a}^{(l)} = (\mathbf{W}^{(l)})^T \mathbf{o}^{(l-1)} + \mathbf{b}^{(l)}$ and $\mathbf{o}^{(l)} = f(\mathbf{a}^{(l)})$ for $l = 1, 2$, with $\mathbf{o}^{(0)} = \mathbf{x}$.
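
A minimal NumPy sketch of the forward-propagation equations above, for a network with one hidden layer; the layer sizes and the sigmoid activation are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, weights, biases, f=sigmoid):
    """Apply a^(l) = (W^(l))^T o^(l-1) + b^(l), o^(l) = f(a^(l)) layer by layer."""
    o = x                                   # o^(0) = x
    for W, b in zip(weights, biases):
        a = W.T @ o + b                     # pre-activation a^(l)
        o = f(a)                            # activation o^(l)
    return o

rng = np.random.default_rng(0)
weights = [rng.standard_normal((3, 4)), rng.standard_normal((4, 2))]  # W^(1), W^(2)
biases  = [np.zeros(4), np.zeros(2)]                                  # b^(1), b^(2)
print(forward(np.array([0.2, -0.1, 0.5]), weights, biases))
```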

15 NN as Nonlinear Classifier. Networks can have multiple layers and multiple outputs. (Figure: feature extraction produces $\mathbf{x} = [x_1, x_2, x_3]^T$, which is fed through hidden layers to outputs $P(M\mid\mathbf{x})$ and $P(F\mid\mathbf{x})$.) To train this network, we need training samples $\{\mathbf{x}_n, \mathbf{t}_n\}_{n=1}^{N}$, where $\mathbf{x}_n \in \mathbb{R}^3$ and $\mathbf{t}_n \in \mathbb{R}^2$.

16 NN as Nonlinear Classifier. To ensure that the output nodes produce posterior probabilities given the input vectors, we typically apply the softmax function on the output nodes: $P(\text{Class}=k \mid \mathbf{x}) \approx y_k = o_k^{(L)} = \frac{\exp(a_k^{(L)})}{\sum_j \exp(a_j^{(L)})}$, where $a_k^{(L)}$ is the activation of the $k$-th output node (at layer $L$).
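
A short sketch of the softmax applied to the output activations $a^{(L)}$; subtracting the maximum before exponentiating is a standard numerical-stability trick, not part of the slide's formula.

```python
import numpy as np

def softmax(a):
    """y_k = exp(a_k) / sum_j exp(a_j), computed in a numerically stable way."""
    e = np.exp(a - np.max(a))
    return e / np.sum(e)

a_L = np.array([2.0, 1.0, 0.1])     # example output-layer activations
y = softmax(a_L)
print(y, y.sum())                   # class posteriors that sum to 1
```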

17 Deep Neural Networks. Both SVM and GMM can be considered as networks with one hidden layer. Deep neural networks (DNNs) are networks with many hidden layers (22 layers in GoogLeNet). Since the 80s, researchers knew that DNNs with many hidden layers are very powerful as long as they can be trained. From the 80s to 2005, NNs were trained by the backpropagation (BP) algorithm. Unfortunately, BP fails to train networks with more than one hidden layer. People found that shallow networks such as SVM and GMM perform much better than NNs with multiple layers.

18 Deep Neural Networks. This situation changed in 2006 when G. Hinton discovered a new approach to training DNNs. The method uses unsupervised learning to initialize the DNN and BP to fine-tune the network weights. During the unsupervised learning stage, the network is built one layer at a time (from bottom to top) by stacking a number of Restricted Boltzmann Machines (RBMs). The RBM is trained by an algorithm called contrastive divergence. For details of RBMs and contrastive divergence, see mwmak/papers/rbm.pdf

19 DNN for Classification: Internal Representation in DNNs. (Figure: a DNN maps an input image $\mathbf{x}$ to posteriors such as $P(\text{neutral}\mid\mathbf{x}) = 0.2$, $P(\text{anxiety}\mid\mathbf{x}) = 0.7$, $P(\text{happy}\mid\mathbf{x}) = 0.09$, $P(\text{angry}\mid\mathbf{x}) = 0.01$. Successive layers identify light and dark, edges and simple shapes, complex shapes and objects, and finally objects that can be used for recognizing facial expression.)

20 Internal Representation in DNNs. Data presented to the network are raw pixel values. But, internally, the network generates much higher-level features. For example, there might be a node in the network that responds to the presence of diagonal lines in an image or, at an even higher level, to the presence of faces.

21 DNN for Dimension Reduction: Encoding Faces (Autoencoder). Reduce high-dimensional facial images to 30-dimensional vectors with an autoencoder. (Figure.)

22 DNN for Dimension Reduction. Reduce 784-dimensional handwritten digit images to 2 dimensions. (Figure: the autoencoder's input, hidden, and output layers, and the resulting 2-D embeddings from PCA versus the DNN.)

23 Simple Gradient Descent. Assume that we have a set of weights in the form of a weight vector $\mathbf{w}$ and an error function $E(\mathbf{w})$. Our objective is to minimize $E(\mathbf{w})$ with respect to $\mathbf{w}$ based on the gradient of $E$: $\mathbf{w}^{(t+1)} = \mathbf{w}^{(t)} - \eta \nabla E(\mathbf{w}^{(t)})$, where $t$ is the iteration index (epoch) and $\eta$ is a small positive learning rate. Techniques that use the whole data set at once are called batch methods. At each step the weight vector is moved in the direction of the greatest rate of decrease of the error function.
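
A minimal sketch of batch gradient descent on a toy quadratic error surface; the error function, starting point, and learning rate are made up for illustration.

```python
import numpy as np

def grad_E(w):
    """Gradient of the toy error E(w) = 0.5 * ||w - w_star||^2."""
    w_star = np.array([1.0, -2.0])
    return w - w_star

w = np.zeros(2)                   # initial weight vector
eta = 0.1                         # learning rate
for t in range(100):
    w = w - eta * grad_E(w)       # w^(t+1) = w^(t) - eta * grad E(w^(t))
print(w)                          # converges towards w_star = [1, -2]
```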

24 Simple Gradient Descent: Gradient Descent for 1D.

25 Stochastic Gradient Descent. There is an on-line version of gradient descent that has proved useful in practice. Because $E(\mathbf{w}) = \sum_{n=1}^{N} E_n(\mathbf{w})$, the update formula becomes $\mathbf{w}^{(t+1)} = \mathbf{w}^{(t)} - \eta \nabla E_n(\mathbf{w}^{(t)})$. This update is repeated by cycling through the data either in sequence or by selecting points at random with replacement. There are intermediate scenarios in which the updates are based on batches of data points; these are called mini-batches in the deep learning community.
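
The sketch below shows the mini-batch variant of the update on a toy least-squares problem; the data, linear model, batch size, and learning rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 3))          # 200 training vectors
w_true = np.array([0.5, -1.0, 2.0])
t = X @ w_true + 0.01 * rng.standard_normal(200)

def grad_batch(w, Xb, tb):
    """Gradient of E = 0.5 * sum_n (x_n^T w - t_n)^2 over a (mini-)batch."""
    return Xb.T @ (Xb @ w - tb)

w = np.zeros(3)
eta, batch_size = 0.01, 16
for epoch in range(50):
    idx = rng.permutation(len(X))          # shuffle, then cycle through mini-batches
    for start in range(0, len(X), batch_size):
        b = idx[start:start + batch_size]
        w = w - eta * grad_batch(w, X[b], t[b])
print(w)                                   # close to w_true
```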

26 Backpropagation for Training Multilayer Networks. The network weights $\mathbf{W}^{(1)}$ and $\mathbf{W}^{(2)}$ in the multiple-layer neural network below can be trained by the Backpropagation (BP) algorithm. (Figure: a two-layer network with input $\mathbf{x}$ and weight matrices $\mathbf{W}^{(1)}$ and $\mathbf{W}^{(2)}$.)

27 Loss Function: Mean Square Error. Output layer: $y_k = o_k^{(2)} = h(a_k^{(2)})$, with $h(z) = \frac{1}{1+e^{-z}}$ and $a_k^{(2)} = \sum_j w_{kj}^{(2)} o_j^{(1)}$. Hidden layer: $o_j^{(1)} = h(a_j^{(1)})$, with $a_j^{(1)} = \sum_i w_{ji}^{(1)} o_i^{(0)} = \sum_i w_{ji}^{(1)} x_i$, where $x_i$ is the $i$-th element of the input vector $\mathbf{x}$.

28 Loss Function: Mean Square Error. We aim to minimize the sum of squared errors between the target output and the actual output: $E_{tot} = \frac{1}{2}\sum_n\sum_k (y_{n,k} - t_{n,k})^2 = \frac{1}{2}\sum_n\sum_k \big(o_{n,k}^{(2)} - t_{n,k}\big)^2 = \sum_n E_n$. To minimize $E_{tot}$ with respect to $\mathbf{W}^{(2)}$, we compute the gradient $\frac{\partial E_{tot}}{\partial w_{kj}^{(2)}} = \sum_n \frac{\partial E_n}{\partial w_{kj}^{(2)}}$.

29 Error Gradient in SGD. In stochastic gradient descent (SGD), we compute the error for each training vector: $E = \frac{1}{2}\sum_k (y_k - t_k)^2 = \frac{1}{2}\sum_k \big(o_k^{(2)} - t_k\big)^2$. Error gradient for a training vector: $\frac{\partial E}{\partial w_{kj}^{(2)}} = \frac{\partial E}{\partial a_k^{(2)}}\frac{\partial a_k^{(2)}}{\partial w_{kj}^{(2)}} = \delta_k^{(2)} o_j^{(1)}$, where $\delta_k^{(2)} = \frac{\partial E}{\partial a_k^{(2)}} = \frac{\partial E}{\partial o_k^{(2)}}\frac{\partial o_k^{(2)}}{\partial a_k^{(2)}} = \big(o_k^{(2)} - t_k\big)\,h'(a_k^{(2)}) = (y_k - t_k)\,h'(a_k^{(2)})$.

30 Error Gradient in SGD. Error gradient w.r.t. the hidden-layer weights: $\frac{\partial E}{\partial w_{ji}^{(1)}} = \frac{\partial E}{\partial a_j^{(1)}}\frac{\partial a_j^{(1)}}{\partial w_{ji}^{(1)}} = \delta_j^{(1)} x_i$, where $\delta_j^{(1)} = \frac{\partial E}{\partial a_j^{(1)}} = \sum_k \frac{\partial E}{\partial a_k^{(2)}}\frac{\partial a_k^{(2)}}{\partial a_j^{(1)}} = \sum_k \delta_k^{(2)}\frac{\partial a_k^{(2)}}{\partial o_j^{(1)}}\frac{\partial o_j^{(1)}}{\partial a_j^{(1)}} = h'(a_j^{(1)})\sum_k \delta_k^{(2)} w_{kj}^{(2)}$. The above derivative is based on the fact that $E$ depends on all the $a_k^{(2)}$ and that each of the $a_k^{(2)}$ depends on $a_j^{(1)}$.

31 Error Gradient in SGD. Weight update formula for the output-layer weights: $w_{kj}^{(2)} \leftarrow w_{kj}^{(2)} - \eta\frac{\partial E}{\partial w_{kj}^{(2)}} = w_{kj}^{(2)} - \eta\,\delta_k^{(2)} o_j^{(1)} = w_{kj}^{(2)} - \eta\,(y_k - t_k)\,h'(a_k^{(2)})\,o_j^{(1)}$, where $h'(a_k^{(2)}) = y_k(1 - y_k)$ for the sigmoid output non-linearity.

32 Error Gradient in SGD. Weight update formula for the hidden-layer weights: $w_{ji}^{(1)} \leftarrow w_{ji}^{(1)} - \eta\frac{\partial E}{\partial w_{ji}^{(1)}} = w_{ji}^{(1)} - \eta\,\delta_j^{(1)} x_i = w_{ji}^{(1)} - \eta\Big[h'(a_j^{(1)})\sum_k \delta_k^{(2)} w_{kj}^{(2)}\Big] x_i$, where $h'(a_j^{(1)}) = o_j^{(1)}\big(1 - o_j^{(1)}\big)$.
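
Putting the MSE forward and backward passes together, here is a minimal NumPy sketch of one SGD step for a network with a sigmoid hidden layer and sigmoid outputs; the layer sizes, data, and learning rate are illustrative assumptions, and biases are omitted for brevity.

```python
import numpy as np

def h(z):                            # sigmoid activation
    return 1.0 / (1.0 + np.exp(-z))

def sgd_step(x, t, W1, W2, eta=0.1):
    # Forward pass
    a1 = W1.T @ x                    # hidden pre-activations a_j^(1)
    o1 = h(a1)                       # hidden outputs o_j^(1)
    a2 = W2.T @ o1                   # output pre-activations a_k^(2)
    y = h(a2)                        # network outputs y_k

    # Backward pass for E = 0.5 * sum_k (y_k - t_k)^2
    delta2 = (y - t) * y * (1 - y)               # delta_k^(2) = (y_k - t_k) h'(a_k^(2))
    delta1 = o1 * (1 - o1) * (W2 @ delta2)       # delta_j^(1) = h'(a_j^(1)) sum_k delta_k^(2) w_kj^(2)

    # Gradient-descent updates
    W2 -= eta * np.outer(o1, delta2)             # dE/dw_kj^(2) = delta_k^(2) o_j^(1)
    W1 -= eta * np.outer(x, delta1)              # dE/dw_ji^(1) = delta_j^(1) x_i
    return W1, W2

rng = np.random.default_rng(0)
W1 = 0.1 * rng.standard_normal((3, 4))           # input (3) -> hidden (4)
W2 = 0.1 * rng.standard_normal((4, 2))           # hidden (4) -> output (2)
x, t = np.array([0.5, -0.2, 0.1]), np.array([1.0, 0.0])
W1, W2 = sgd_step(x, t, W1, W2)
```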

33 Loss Function: Cross Entropy. Consider a BP network with a single output node whose output is denoted as $y$. Using Eq. of Bishop's book, $E_{tot} = -\sum_n \{t_n \log y_n + (1 - t_n)\log(1 - y_n)\}$, where $n$ is the sample index and $t_n$ is the target output for the $n$-th sample. The instantaneous error is $E_n = -t_n \log y_n - (1 - t_n)\log(1 - y_n)$. To simplify notation, we drop the subscript $n$: $E = -t \log y - (1 - t)\log(1 - y)$.

34 Loss Function: Cross Entropy. Instantaneous error gradient: $\frac{\partial E}{\partial w_{1j}^{(2)}} = \frac{\partial E}{\partial a_1^{(2)}}\frac{\partial a_1^{(2)}}{\partial w_{1j}^{(2)}} = \delta_1^{(2)} o_j^{(1)}$, where $\delta_1^{(2)} = \frac{\partial E}{\partial a_1^{(2)}} = \frac{\partial E}{\partial y}\frac{\partial y}{\partial a_1^{(2)}} = \Big[-\frac{t}{y} + \frac{1-t}{1-y}\Big] h'(a_1^{(2)}) = \frac{-t(1-y) + (1-t)y}{y(1-y)}\, h(a_1^{(2)})\big[1 - h(a_1^{(2)})\big] = y - t$, with $h(z) = \frac{1}{1+e^{-z}}$ and $y = h(a_1^{(2)})$. Therefore, $\frac{\partial E}{\partial w_{1j}^{(2)}} = (y - t)\,o_j^{(1)}$.

35 Loss Function: Cross Entropy. Similarly, $\frac{\partial E}{\partial w_{ji}^{(1)}} = \frac{\partial E}{\partial a_j^{(1)}}\frac{\partial a_j^{(1)}}{\partial w_{ji}^{(1)}} = \delta_j^{(1)} x_i$, where $\delta_j^{(1)} = \frac{\partial E}{\partial a_j^{(1)}} = \frac{\partial E}{\partial a_1^{(2)}}\frac{\partial a_1^{(2)}}{\partial a_j^{(1)}} = \delta_1^{(2)}\frac{\partial a_1^{(2)}}{\partial o_j^{(1)}}\frac{\partial o_j^{(1)}}{\partial a_j^{(1)}} = (y - t)\,w_{1j}^{(2)}\,o_j^{(1)}\big(1 - o_j^{(1)}\big)$. Update formulae: $w_{1j}^{(2)} \leftarrow w_{1j}^{(2)} - \eta\,(y - t)\,o_j^{(1)}$ and $w_{ji}^{(1)} \leftarrow w_{ji}^{(1)} - \eta\Big[(y - t)\,w_{1j}^{(2)}\,o_j^{(1)}\big(1 - o_j^{(1)}\big)\Big] x_i$.
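
The key simplification above is that, with a sigmoid output and cross-entropy loss, the output delta reduces to $y - t$. A small sketch that checks the analytic gradient numerically; the weights, hidden outputs, and target are made up for the example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_and_grad(w2, o1, t):
    """Binary cross-entropy E = -t*log(y) - (1-t)*log(1-y), with y = sigmoid(w2^T o1)."""
    y = sigmoid(np.dot(w2, o1))
    E = -t * np.log(y) - (1 - t) * np.log(1 - y)
    grad = (y - t) * o1                 # dE/dw_1j^(2) = (y - t) o_j^(1)
    return E, grad

o1 = np.array([0.3, 0.8, 0.5])          # hidden-layer outputs (example values)
w2 = np.array([0.2, -0.4, 0.1])         # output weights
t = 1.0                                 # target

E, g = loss_and_grad(w2, o1, t)
# Finite-difference check of the analytic gradient
eps, g_num = 1e-6, np.zeros_like(w2)
for j in range(len(w2)):
    w_p, w_m = w2.copy(), w2.copy()
    w_p[j] += eps; w_m[j] -= eps
    g_num[j] = (loss_and_grad(w_p, o1, t)[0] - loss_and_grad(w_m, o1, t)[0]) / (2 * eps)
print(g, g_num)                         # the two should agree closely
```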

36 Cross Entropy for Multi-class [Optional Notes]. The cross-entropy loss function can also be used in networks with multiple outputs. Assume that the softmax function is used for the output nodes: $y_k = o_k^{(2)} = \frac{\exp(a_k^{(2)})}{\sum_j \exp(a_j^{(2)})}$. Because $y_k$ depends on $a_j^{(2)}$, where $j = 1,\ldots,K$, we need to compute the derivative of $y_k$ w.r.t. $a_j^{(2)}$: $\frac{\partial y_k}{\partial a_j^{(2)}} = \frac{I_{kj}\exp(a_k^{(2)})\sum_r\exp(a_r^{(2)}) - \exp(a_k^{(2)})\exp(a_j^{(2)})}{\big(\sum_r\exp(a_r^{(2)})\big)^2} = y_k I_{kj} - y_k y_j = y_k(I_{kj} - y_j)$, (5) where $I_{kj} = 1$ if $k = j$ and $0$ otherwise.

37 Cross Entropy for Multi-class [Optional Notes]. The instantaneous cross-entropy is $E = -\sum_{k=1}^{K} t_k \log y_k$. (6) Eq. 6 suggests that $E$ is a function of $y_k$, $k = 1,\ldots,K$, and according to Eq. 5, $y_k$ depends on $a_r^{(2)}$, $r = 1,\ldots,K$. Therefore, we need to use the chain rule to compute $\frac{\partial E}{\partial w_{kj}^{(2)}}$: $\frac{\partial E}{\partial w_{kj}^{(2)}} = \sum_{r=1}^{K}\frac{\partial E}{\partial a_r^{(2)}}\frac{\partial a_r^{(2)}}{\partial w_{kj}^{(2)}}$.

38 Cross Entropy for Multi-class [Optional Notes]. $\frac{\partial E}{\partial a_r^{(2)}} = \sum_{k=1}^{K}\frac{\partial E}{\partial y_k}\frac{\partial y_k}{\partial a_r^{(2)}} = -\sum_{k=1}^{K}\frac{t_k}{y_k}\,y_k(I_{kr} - y_r)$ (using Eqs. 5 and 6) $= -t_r(1 - y_r) + \sum_{k\neq r} t_k y_r = -t_r + y_r t_r + y_r\sum_{k\neq r} t_k = -t_r + y_r\sum_{k=1}^{K} t_k = y_r - t_r$ (one-of-K encoding).

39 Cross Entropy for Multi-class [Optional Notes]. Therefore, we have $\frac{\partial E}{\partial w_{kj}^{(2)}} = \sum_r \frac{\partial E}{\partial a_r^{(2)}}\frac{\partial a_r^{(2)}}{\partial w_{kj}^{(2)}} = (y_k - t_k)\,o_j^{(1)}$, because $\frac{\partial a_r^{(2)}}{\partial w_{kj}^{(2)}} = 0$ for $r \neq k$. Similarly, $\frac{\partial E}{\partial a_i^{(1)}} = \sum_{k=1}^{K}\frac{\partial E}{\partial a_k^{(2)}}\frac{\partial a_k^{(2)}}{\partial o_i^{(1)}}\frac{\partial o_i^{(1)}}{\partial a_i^{(1)}} = \sum_{k=1}^{K}(y_k - t_k)\,w_{ki}^{(2)}\,o_i^{(1)}\big(1 - o_i^{(1)}\big)$.

40 Cross Entropy for Multi-class [Optional Notes]. Instantaneous error gradient: $\frac{\partial E}{\partial w_{ji}^{(1)}} = \sum_s \frac{\partial E}{\partial a_s^{(1)}}\frac{\partial a_s^{(1)}}{\partial w_{ji}^{(1)}} = \sum_{k}(y_k - t_k)\,w_{kj}^{(2)}\,o_j^{(1)}\big(1 - o_j^{(1)}\big)\,x_i$, since $\frac{\partial a_s^{(1)}}{\partial w_{ji}^{(1)}} = 0$ for $s \neq j$. Update formulae: $w_{kj}^{(2)} \leftarrow w_{kj}^{(2)} - \eta\,(y_k - t_k)\,o_j^{(1)}$ and $w_{ji}^{(1)} \leftarrow w_{ji}^{(1)} - \eta\sum_{k=1}^{K}(y_k - t_k)\,w_{kj}^{(2)}\,o_j^{(1)}\big(1 - o_j^{(1)}\big)\,x_i$.
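
Analogously to the binary case, with softmax outputs and the loss in Eq. 6 the output delta is simply $y_k - t_k$. A short sketch of the update formulae above; the layer sizes, input, and learning rate are illustrative, and biases are again omitted.

```python
import numpy as np

def softmax(a):
    e = np.exp(a - np.max(a))
    return e / e.sum()

def multiclass_sgd_step(x, t, W1, W2, eta=0.1):
    # Forward: sigmoid hidden layer, softmax output layer
    o1 = 1.0 / (1.0 + np.exp(-(W1.T @ x)))
    y = softmax(W2.T @ o1)

    # Deltas: the output delta is y - t; the hidden delta back-propagates it
    delta2 = y - t                                 # dE/da_k^(2)
    delta1 = o1 * (1 - o1) * (W2 @ delta2)         # dE/da_j^(1)

    # Update formulae from the slide
    W2 -= eta * np.outer(o1, delta2)
    W1 -= eta * np.outer(x, delta1)
    return W1, W2

rng = np.random.default_rng(0)
W1 = 0.1 * rng.standard_normal((3, 5))             # input (3) -> hidden (5)
W2 = 0.1 * rng.standard_normal((5, 4))             # hidden (5) -> 4 classes
x = np.array([0.2, -0.7, 0.4])
t = np.array([0.0, 1.0, 0.0, 0.0])                 # one-of-K target
W1, W2 = multiclass_sgd_step(x, t, W1, W2)
```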

41 Deep Learning Software Stack [Optional Notes]. (Figure: the software stack, from high-level front-ends such as Keras and PyTorch, through frameworks such as CNTK (Microsoft), TensorFlow (Google), Theano (U. of Montreal), Torch (Facebook), and Caffe/Caffe2 (Berkeley AI Research), down to tensor/array libraries (Eigen::Tensor, NumPy, GPU arrays), numerical back-ends (BLAS, cuBLAS, cuDNN, cuRAND), and hardware (CPU, GPU).) They are all free!

42 Convolutional Neural Networks. A convolutional neural network (CNN) is a type of machine learning program that is well-suited for image classification tasks, like spotting the early signs of cervical cancer and recognizing objects in images. A CNN consists of an input and an output layer, as well as multiple hidden layers. The hidden layers of a CNN typically consist of convolutional layers, pooling layers, fully connected layers, and normalization layers.

43 Why CNN? If we use a DNN for handwritten digit recognition, we need to stack (flatten) the rows or columns of an image to form a vector.

44 Why CNN? This method works fine, but we have a problem when the digit is off-centered: the flattened vector is now completely different from the training data. A good solution is to use convolution, as humans do not look at an image at the pixel level. Instead, we look for the spatial structure of an image. This means that it does not matter where the 8 is, as long as it is an 8.
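
To make the point about spatial structure concrete, the following sketch slides a small filter over an image ("valid" 2-D cross-correlation); the same filter responds to the pattern wherever it appears, which is why an off-centered digit is no longer a problem. The image and filter values are made up for illustration.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation: apply the kernel at every position."""
    H, W = image.shape
    kH, kW = kernel.shape
    out = np.zeros((H - kH + 1, W - kW + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kH, j:j + kW] * kernel)
    return out

# A diagonal-edge detector responds to the pattern regardless of its location.
kernel = np.array([[1.0, 0.0],
                   [0.0, 1.0]])
image = np.zeros((6, 6))
image[1:3, 1:3] = np.eye(2)      # a small diagonal pattern, off-centre
print(conv2d(image, kernel))     # strong response at the pattern's location
```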

45 CNN for Digit Recognition.

46 How CNNs Detect Patterns in Images.

47 Recent Developments in Deep Learning. Xavier Initialization (Glorot and Bengio, 2010): A simple but effective weight initialization method. With Xavier initialization, per-layer generative pre-training is not necessary. Rectified Linear Unit, ReLU (Glorot and Bengio, 2011): Rectifier units are closer to biological neurons; they can avoid the gradient vanishing problem. DNNs with ReLU as non-linear units do not require pre-training on supervised tasks with large labeled datasets. Generative Adversarial Networks, GAN (Goodfellow, 2014): Use opposite objective functions during training. The generator attempts to synthesize images that can fool the discriminator into believing that the synthetic images are real.
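
A minimal sketch of Xavier (Glorot) initialization as described above, using the common uniform variant with limit $\sqrt{6/(n_{in}+n_{out})}$; the layer sizes are illustrative assumptions.

```python
import numpy as np

def xavier_uniform(n_in, n_out, rng=np.random.default_rng(0)):
    """Draw weights uniformly from [-limit, limit], limit = sqrt(6 / (n_in + n_out)).
    This keeps the variance of activations and gradients roughly constant across layers."""
    limit = np.sqrt(6.0 / (n_in + n_out))
    return rng.uniform(-limit, limit, size=(n_in, n_out))

W1 = xavier_uniform(784, 256)    # e.g., input layer -> first hidden layer
W2 = xavier_uniform(256, 10)     # hidden layer -> output layer
print(W1.std(), W2.std())        # the spread scales with the layer widths
```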

48 Recent Developments in Deep Learning. Batch Normalization (Ioffe and Szegedy, 2015): During BP training, apply z-norm to the output of each layer, using the statistics within a mini-batch. Can achieve the same accuracy with 1/14 of the training steps. Residual Networks (He et al., 2015): Aim to solve a common problem in DNNs: when depth increases, accuracy gets saturated and then degrades. Networks are trained to predict the residual H(x) − x instead of H(x), where x is the input and H(x) is the desired mapping of the application. Can train networks up to 1000 layers. Variational Autoencoders (Kingma and Welling, 2014): Combine DNNs with probabilistic models (bottleneck-layer nodes are stochastic). Work very well in generating complicated data.
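
A minimal sketch of the batch-normalization forward pass described above: z-normalize each unit's output over the mini-batch, then apply a learnable scale and shift. The mini-batch values and epsilon are illustrative, and the running-statistics bookkeeping used at test time is omitted.

```python
import numpy as np

def batch_norm_forward(A, gamma, beta, eps=1e-5):
    """Normalize each unit's activations over the mini-batch, then scale and shift.
    A has shape (batch_size, num_units)."""
    mu = A.mean(axis=0)                    # per-unit mean over the mini-batch
    var = A.var(axis=0)                    # per-unit variance over the mini-batch
    A_hat = (A - mu) / np.sqrt(var + eps)  # z-normalized activations
    return gamma * A_hat + beta            # learnable scale (gamma) and shift (beta)

rng = np.random.default_rng(0)
A = rng.standard_normal((32, 4)) * 3.0 + 1.0       # a mini-batch of layer outputs
out = batch_norm_forward(A, gamma=np.ones(4), beta=np.zeros(4))
print(out.mean(axis=0), out.std(axis=0))           # roughly zero mean, unit std per unit
```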

49 References. G. Hinton and R. R. Salakhutdinov, "Reducing the dimensionality of data with neural networks," Science, vol. 313, no. 5786, pp. , 2006. Y. Bengio, "Learning deep architectures for AI," Foundations and Trends in Machine Learning, vol. 2, no. 1, pp. 1–127, 2009. Y. LeCun, Y. Bengio, and G. E. Hinton, "Deep learning," Nature, vol. 521, pp. , May 2015. P. Vincent, H. Larochelle, I. Lajoie, Y. Bengio, and P. A. Manzagol, "Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion," The Journal of Machine Learning Research, vol. 11, pp. , 2010. D. Erhan, Y. Bengio, A. Courville, P. A. Manzagol, P. Vincent, and S. Bengio, "Why does unsupervised pre-training help deep learning?," Journal of Machine Learning Research, 11(Feb), 2010. G. E. Hinton, S. Osindero, and Y. W. Teh, "A fast learning algorithm for deep belief nets," Neural Computation, 18: , 2006. I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, and Y. Bengio, "Generative adversarial nets," in Advances in Neural Information Processing Systems, 2014, pp. .

50 References. S. Ioffe and C. Szegedy, "Batch normalization: Accelerating deep network training by reducing internal covariate shift," in Proceedings of the 32nd International Conference on Machine Learning (ICML), 2015, pp. . X. Glorot and Y. Bengio, "Understanding the difficulty of training deep feedforward neural networks," in AISTATS, 2010, vol. 9, pp. . K. M. He, X. Y. Zhang, S. Q. Ren, and J. Sun, "Deep residual learning for image recognition," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. . D. P. Kingma and M. Welling, "Auto-encoding variational Bayes," arXiv preprint arXiv: , 2014.
