Artificial Neural Networks Backpropagation & Deep Neural Networks

1 Artificial Neural Networks: Backpropagation & Deep Neural Networks. Jan Drchal, Computational Intelligence Group, Department of Computer Science and Engineering, Faculty of Electrical Engineering, Czech Technical University in Prague

2 Outline. Learning MLPs: Backpropagation. Deep Neural Networks. This presentation is partially inspired by, and uses several images and citations from, Geoffrey Hinton's Neural Networks for Machine Learning course at Coursera. Go through the course, it is great!

3 Backpropagation (BP). Paul Werbos, 1974, Harvard, PhD thesis. Still a popular method, with many modifications. BP is a learning method for MLPs: it requires continuous, differentiable activation functions!

4 BP Overview (Online Version):
random weight initialization
repeat
    repeat // epoch
        choose an instance from the training set
        apply it to the network
        evaluate network outputs
        compare outputs to desired values
        modify the weights
    until all instances selected from the training set
until global error < criterion
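
A minimal runnable sketch of this loop, assuming a single sigmoid neuron so that the weight update fits on one line (the multi-layer update is derived on the following slides); the toy AND data, learning rate and stopping criterion are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])  # toy training set
d = np.array([0., 0., 0., 1.])                          # desired outputs (AND)

w = rng.normal(scale=0.1, size=2)   # random weight initialization
b = 0.0                             # threshold
eta, criterion = 0.5, 0.01          # assumed learning rate / error criterion

def output(x):
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))   # apply instance to the network

while True:                                      # repeat
    for x, dk in zip(X, d):                      #   repeat // epoch
        y = output(x)                            #   evaluate network output
        delta = (dk - y) * y * (1.0 - y)         #   compare output to desired value
        w += eta * delta * x                     #   modify the weights
        b += eta * delta
    E = 0.5 * sum((dk - output(x)) ** 2 for x, dk in zip(X, d))
    if E < criterion:                            # until global error < criterion
        break
print(w, b, E)
```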

5 ANN Energy. Backpropagation is based on minimization of the ANN energy (= error). The energy is a measure describing how well the network is trained on the given data. For BP we define the energy function $E_{TOTAL} = \sum_p E_p$, the total sum computed over all patterns of the training set, where $E_p = \frac{1}{2} \sum_{i=1}^{N_o} (d_i - y_i)^2$ (we will omit $p$ in the following slides). Note: the 1/2 is only for convenience, as we will see later...

6 ANN Energy II. The energy/error is a function $E = f(X, W)$ of: $W$, the weights (thresholds), which are variable, and $X$, the inputs, which are fixed (for a given pattern).

7 Backpropagation Keynote. For given values at the network inputs we obtain an energy value. Our task is to minimize this value. The minimization is done via modification of the weights and thresholds. We have to identify how the energy changes when a certain weight is changed by $\Delta w$. This corresponds to the partial derivatives $\frac{\partial E}{\partial w}$. We employ a gradient method.

8 Gradient Descent in Energy Landscape. [Figure: energy/error landscape over the weights, with a step from $(w_1, w_2)$ to $(w_1 + \Delta w_1, w_2 + \Delta w_2)$; image from M. Šnorek's course 36NaN, Neuronové sítě (Neural Networks).]

9 Weight Update. We want to update the weights in the opposite direction to the gradient: $\Delta w_{jk} = -\eta \frac{\partial E}{\partial w_{jk}}$, where $\Delta w_{jk}$ is the weight delta and $\eta$ is the learning rate. Note: the gradient of the energy function is a vector which contains the partial derivatives for all weights (thresholds).

10 Notation:
$w_{0k}^m$: threshold of neuron $k$ in layer $m$ (weight of the connection from the constant +1 input),
$w_{jk}^m$: weight of the connection from neuron $j$ in layer $m-1$ to neuron $k$ in layer $m$,
$s_k^m$: inner potential of neuron $k$ in layer $m$,
$y_k^m$: output of neuron $k$ in layer $m$,
$N_i$, $N_h$, $N_o$: number of neurons in the input, hidden and output layers.
[Figure: a three-layer MLP with inputs $x_1, \dots, x_{N_i}$, hidden outputs $y_1, \dots, y_{N_h}$, outputs $y_1, \dots, y_{N_o}$, and a +1 bias unit feeding each layer.]

11-14 Energy as a Function Composition:
$\frac{\partial E}{\partial w_{jk}} = \frac{\partial E}{\partial s_k} \frac{\partial s_k}{\partial w_{jk}}$
Use $s_k = \sum_j w_{jk} y_j$, hence $\frac{\partial s_k}{\partial w_{jk}} = y_j$.
Denote $\delta_k = -\frac{\partial E}{\partial s_k}$.
Then $\Delta w_{jk} = \eta\, \delta_k\, y_j$. Remember the delta rule?

15-22 Output Layer.
$\delta_k = -\frac{\partial E}{\partial s_k} = -\frac{\partial E}{\partial y_k}\,\frac{\partial y_k}{\partial s_k}$
The derivative of the activation function gives $\frac{\partial y_k}{\partial s_k} = S'(s_k)$.
Use the dependency of the energy on a network output: from $E = \frac{1}{2}\sum_{i=1}^{N_o}(d_i - y_i)^2$ we get $-\frac{\partial E}{\partial y_k} = d_k - y_k$. That is why we used the 1/2 in the energy definition. Again, remember the delta rule?
Together: $\Delta w_{jk} = \eta\,\delta_k\,y_j = \eta\,(d_k - y_k)\,S'(s_k)\,y_j$.

23-27 Hidden Layer.
As in the output layer: $\delta_k = -\frac{\partial E}{\partial s_k} = -\frac{\partial E}{\partial y_k}\,\frac{\partial y_k}{\partial s_k}$ with $\frac{\partial y_k}{\partial s_k} = S'(s_k)$.
But note that $y_k$ is now the output of a hidden neuron, so let's look at the remaining partial derivative $-\frac{\partial E}{\partial y_k}$.

28-31 Hidden Layer II.
Apply the chain rule (http://en.wikipedia.org/wiki/Chain_rule) over all output neurons fed by $y_k$:
$-\frac{\partial E}{\partial y_k} = -\sum_{l=1}^{N_o}\frac{\partial E}{\partial s_l}\,\frac{\partial s_l}{\partial y_k}$, where $s_l = \sum_i w_{il} y_i$, hence $\frac{\partial s_l}{\partial y_k} = w_{kl}$.
But we know $\delta_l = -\frac{\partial E}{\partial s_l}$ already, so $-\frac{\partial E}{\partial y_k} = \sum_{l=1}^{N_o}\delta_l\,w_{kl}$: take the error of each output neuron and multiply it by the connecting weight. Here the back-propagation actually happens...
[Figure: hidden neuron $k$ connected to output neurons $1, \dots, N_o$ through weights $w_{k1}, \dots, w_{kN_o}$.]

32-34 Hidden Layer III. Now, let's put it all together!
$\delta_k = -\frac{\partial E}{\partial s_k} = \left(\sum_{l=1}^{N_o}\delta_l\,w_{kl}\right) S'(s_k)$
$\Delta w_{jk} = \eta\,\delta_k\,x_j = \eta\left(\sum_{l=1}^{N_o}\delta_l\,w_{kl}\right) S'(s_k)\,x_j$
The derivative of the activation function is the last thing to deal with!

35 Sigmoid Derivation.
$S'(s) = \left(\frac{1}{1 + e^{-\gamma s}}\right)' = \frac{\gamma\, e^{-\gamma s}}{(1 + e^{-\gamma s})^2} = \gamma\, y\,(1 - y)$
That is why we needed continuous & differentiable activation functions!
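
A quick numerical sanity check of this identity; the steepness $\gamma$ and the test points are arbitrary choices:

```python
import numpy as np

gamma = 2.0                                  # assumed steepness parameter
S = lambda s: 1.0 / (1.0 + np.exp(-gamma * s))

s = np.linspace(-3.0, 3.0, 7)
y = S(s)
analytic = gamma * y * (1.0 - y)             # S'(s) = gamma * y * (1 - y)
eps = 1e-6
numeric = (S(s + eps) - S(s - eps)) / (2 * eps)  # central finite difference
print(np.max(np.abs(analytic - numeric)))       # ~1e-11: the identity holds
```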

36 BP Put All Together.
Output layer: $\Delta w_{jk}^o = \eta\,\gamma\,y_k(1 - y_k)(d_k - y_k)\,y_j^h$ (this $y_j$ is equal to $x_j$ when we get to the inputs).
Hidden layer $m$ (note that for the last hidden layer, $m+1 = o$):
$\Delta w_{jk}^m = \eta\,\delta_k^m\,y_j^{m-1} = \eta\,\gamma\,y_k^m(1 - y_k^m)\left(\sum_{l=1}^{N_{m+1}}\delta_l^{m+1}\,w_{kl}^{m+1}\right)y_j^{m-1}$
Weight (threshold) updates: $w_{jk}(t+1) = w_{jk}(t) + \Delta w_{jk}(t)$.
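
As a sketch, the formulas above translated into numpy for a single-hidden-layer MLP ($\gamma = 1$, thresholds folded in as weights from a constant +1 input); the layer sizes, learning rate and toy pattern are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
Ni, Nh, No, eta = 3, 5, 2, 0.5          # assumed layer sizes and learning rate

W_h = rng.normal(scale=0.1, size=(Ni + 1, Nh))   # row 0 holds thresholds w_0k
W_o = rng.normal(scale=0.1, size=(Nh + 1, No))

def sigmoid(s):                          # gamma = 1
    return 1.0 / (1.0 + np.exp(-s))

def bp_step(x, d):
    """One online BP step (updates W_h, W_o in place); returns E_p."""
    # forward pass
    x1 = np.append(1.0, x)               # prepend the constant +1 input
    y_h = sigmoid(x1 @ W_h)              # hidden outputs
    y_h1 = np.append(1.0, y_h)
    y_o = sigmoid(y_h1 @ W_o)            # network outputs
    # backward pass: deltas of the output and hidden layers
    delta_o = (d - y_o) * y_o * (1 - y_o)
    delta_h = (W_o[1:] @ delta_o) * y_h * (1 - y_h)   # back-propagated error
    # weight (threshold) updates: w(t+1) = w(t) + eta * delta * y
    W_o += eta * np.outer(y_h1, delta_o)
    W_h += eta * np.outer(x1, delta_h)
    return 0.5 * np.sum((d - y_o) ** 2)

x, d = rng.random(Ni), np.array([0.2, 0.8])   # one toy pattern
for _ in range(500):
    E = bp_step(x, d)
print(E)                                 # the energy decreases toward 0
```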

37 How About the General Case? Arbitrary number of hidden layers? It's the same: for layer $m-1$ use the $\delta$s of layer $m$. [Figure: MLP with input, multiple hidden layers, and an output layer.]

38 Potential Problems. High dimension of the weight (threshold) space. Complexity of the energy function: multimodality, large plateaus & narrow peaks. Many layers: the back-propagated error signal vanishes, see later...

39 Weight Updates. When to apply the delta weights? Online learning/Stochastic Gradient Descent (SGD): after each training pattern. (Full) batch learning: apply the average/sum of the delta weights after sweeping through the whole training set. Mini-batch learning: after a small sample of training patterns.
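
The three regimes differ only in how many patterns are seen between weight updates; a small illustrative sketch (the helper name and toy data are assumptions):

```python
import numpy as np

def minibatches(X, D, batch_size, rng):
    """Yield (inputs, targets) batches from a shuffled training set:
    batch_size=1 gives online/SGD, batch_size=len(X) full-batch learning,
    anything in between mini-batch learning."""
    idx = rng.permutation(len(X))
    for start in range(0, len(X), batch_size):
        sel = idx[start:start + batch_size]
        yield X[sel], D[sel]

rng = np.random.default_rng(0)
X, D = rng.random((100, 4)), rng.random((100, 2))
for xb, db in minibatches(X, D, batch_size=10, rng=rng):
    pass   # accumulate delta weights over the batch, then apply their sum/average
```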

40 Momentum. Simple, but helps greatly when avoiding local minima:
$\Delta w_{ij}(t) = \eta\,\delta_j(t)\,y_i(t) + \alpha\,\Delta w_{ij}(t-1)$, momentum constant $\alpha \in [0, 1)$.
Analogy: a ball (the parameter vector) rolling down a hill (the error landscape).
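
A sketch of the update on a toy quadratic error; $\alpha$, $\eta$ and the toy gradient are assumptions:

```python
import numpy as np

alpha, eta = 0.9, 0.1                  # momentum constant in [0, 1), learning rate
w = np.zeros(4)                        # parameter vector ("the ball")
dw_prev = np.zeros_like(w)             # Delta w(t-1)

for t in range(100):
    grad = 2.0 * (w - 1.0)             # toy gradient: minimize ||w - 1||^2
    dw = -eta * grad + alpha * dw_prev # -eta*dE/dw plus the momentum term
    w, dw_prev = w + dw, dw
print(w)                               # converges to [1, 1, 1, 1]
```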

41 Resilient Propagation (RPROP). Motivation: the magnitudes of the gradient differ a lot for different weights in practice. RPROP does not use the gradient value: the step size for each weight is adapted using its sign only. Method: increase the step size for a weight if the signs of the last two partial derivatives agree (e.g., multiply by 1.2), decrease it otherwise (e.g., multiply by 0.5), and limit the step size to a fixed interval (e.g., $[\Delta_{min}, \Delta_{max}]$).
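
A sketch of the sign-based adaptation (a variant without weight backtracking; the factors 1.2 and 0.5, the step-size limits and the toy problem are illustrative):

```python
import numpy as np

def rprop_update(w, grad, grad_prev, step,
                 inc=1.2, dec=0.5, step_min=1e-6, step_max=50.0):
    """Adapt each weight's step size from gradient signs only, then move
    every weight one step against the sign of its current gradient."""
    agree = grad * grad_prev
    step = np.where(agree > 0, np.minimum(step * inc, step_max), step)
    step = np.where(agree < 0, np.maximum(step * dec, step_min), step)
    return w - np.sign(grad) * step, step

w = np.array([5.0, -3.0])
step = np.full_like(w, 0.1)
grad_prev = np.zeros_like(w)
for _ in range(60):
    grad = 2.0 * w                     # toy gradient: minimize ||w||^2
    w, step = rprop_update(w, grad, grad_prev, step)
    grad_prev = grad
print(w)                               # oscillates toward [0, 0] with shrinking steps
```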

42 Resilient Propagation (RPROP) II. Read: Igel, Hüsken: Improving the Rprop Learning Algorithm. Good news: typically faster by an order of magnitude than plain BACKPROP, robust to parameter settings, no learning rate parameter. Bad news: works for full batch learning only!

43 Resilient Propagation (RPROP) III. Why not mini-batches? Suppose a weight gets a gradient of +0.1 nine times and a gradient of -0.9 once (the tenth mini-batch): we expect it to stay roughly where it was at the beginning, but it will grow a lot (assuming the adaptation of the step size is small)! This example is due to Geoffrey Hinton (Neural Networks for Machine Learning, Coursera).

44 Other Methods. Quick Propagation (QUICKPROP): based on Newton's method, a second-order approach. Levenberg-Marquardt: combines the Gauss-Newton algorithm and Gradient Descent.

45 Other Approaches Based on Numerical Optimization. Compute the partial derivatives over the total energy, $\frac{\partial E_{TOTAL}}{\partial w_{jk}}$, and use any numerical optimization method, e.g.: conjugate gradients, quasi-Newton methods, ...
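
A sketch of this approach with scipy: flatten all weights into a single vector, define $E_{TOTAL}$ over it, and hand the pair to a conjugate-gradient optimizer. The network shape and toy data are assumptions, and the gradient is left to the optimizer's finite differences for brevity:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
Ni, Nh, No = 3, 5, 2                                   # assumed layer sizes
X, D = rng.random((20, Ni)), rng.random((20, No))      # toy training set

def unpack(w):
    W_h = w[:(Ni + 1) * Nh].reshape(Ni + 1, Nh)
    W_o = w[(Ni + 1) * Nh:].reshape(Nh + 1, No)
    return W_h, W_o

def E_total(w):
    """Total energy, summed over all patterns of the training set."""
    W_h, W_o = unpack(w)
    X1 = np.hstack([np.ones((len(X), 1)), X])          # +1 bias inputs
    Y_h = 1.0 / (1.0 + np.exp(-(X1 @ W_h)))
    Y_h1 = np.hstack([np.ones((len(X), 1)), Y_h])
    Y_o = 1.0 / (1.0 + np.exp(-(Y_h1 @ W_o)))
    return 0.5 * np.sum((D - Y_o) ** 2)

w0 = rng.normal(scale=0.1, size=(Ni + 1) * Nh + (Nh + 1) * No)
result = minimize(E_total, w0, method='CG')            # conjugate gradients
print(result.fun)                                      # minimized total energy
```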

46 Deep Neural Networks (DNNs). ANNs having many layers: complex nonlinear transformations. DNNs are hard to train: many parameters, and the vanishing (exploding) gradient problem: the backpropagated signal gets quickly reduced. Solutions: reduce connectivity/share weights, use large training sets (to prevent overfitting), unsupervised pretraining.

47 Convolutional Neural Networks (CNNs). Feed-forward architecture using convolutional, pooling and other layers. Based on visual cortex research: the receptive field. Fukushima: Neocognitron (1980); Yann LeCun: LeNet-5 (1998). Detect features regardless of their position in the visual field. [Figure: receptive fields of size 5 and size 3; shared weights, sparse connectivity. Images from http://deeplearning.net/tutorial/lenet.html]

48 BACKPROP for Shared Weights. For two weights $w_1 = w_2$ we need $\Delta w_1 = \Delta w_2$. Compute $\frac{\partial E}{\partial w_1}$ and $\frac{\partial E}{\partial w_2}$, then use $\frac{\partial E}{\partial w_1} + \frac{\partial E}{\partial w_2}$, or their average.
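
A tiny numerical illustration of this rule on an assumed toy energy with one weight appearing twice: compute both partial derivatives independently, then apply their sum to the shared parameter:

```python
# Toy energy with two occurrences of a shared weight w1 = w2 = w:
# E(w1, w2) = (3*w1 - 1)^2 + (2*w2 + 1)^2, evaluated on the tied value.
def dE_dw1(w): return 2.0 * (3.0 * w - 1.0) * 3.0
def dE_dw2(w): return 2.0 * (2.0 * w + 1.0) * 2.0

w, eta = 0.5, 0.01
for _ in range(200):
    g = dE_dw1(w) + dE_dw2(w)   # sum of both partials (an average also works,
    w -= eta * g                # it merely rescales the learning rate)
print(w)                        # minimizer of E(w, w) is w = 1/13 ~ 0.077
```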

49 Convolution Kernel. [Image from http://developer.apple.com, Copyright 2011 Apple Inc.]
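
For concreteness, a sketch of a "valid" 2D convolution (cross-correlation form, as used in CNNs) in plain numpy; the toy image and kernel are assumptions:

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Slide the kernel over the image; each output is a dot product of the
    kernel with one receptive field (shared weights, sparse connectivity)."""
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(25.0).reshape(5, 5)
kernel = np.array([[1., 0.], [0., -1.]])   # toy 2x2 edge-like kernel
print(conv2d_valid(image, kernel).shape)   # (4, 4)
```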

50 Pooling. Reduces dimensionality. Max-pooling is the method of choice. Problem: after several levels of pooling, we lose information about the precise positions. [Image from http://vaaaaaanquis.hatenablog.com]
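
A sketch of non-overlapping 2x2 max-pooling with numpy (input size assumed divisible by two):

```python
import numpy as np

def max_pool_2x2(fmap):
    """Reduce each non-overlapping 2x2 block of a feature map to its maximum."""
    h, w = fmap.shape
    return fmap.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

fmap = np.arange(16.0).reshape(4, 4)
print(max_pool_2x2(fmap))
# [[ 5.  7.]
#  [13. 15.]]
# Note: the exact position of each maximum inside its block is discarded.
```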

51 CNN Example. TORCS: Koutník, Gomez, Schmidhuber: Evolving deep unsupervised convolutional networks for vision-based RL, 2014.

52 CNN Example II. [Images from Koutník, Gomez, Schmidhuber: Evolving deep unsupervised convolutional networks for vision-based RL, 2014.]

53 CNN LeNet-5: Architecture for MNIST. MNIST: a handwritten character recognition dataset, see http://yann.lecun.com/exdb/mnist/; training set 60,000, testing set 10,000 examples. [Image by LeCun et al.: Gradient-based learning applied to document recognition, 1998.]

54 Errors by LeNet-5. 82 errors on the test set (can be reduced to about 30). The human error rate would be about 20 to 30.

55 ImageNet. A dataset of high-resolution color images. Based on the Large Scale Visual Recognition Challenge 2012 (ILSVRC2012): 1,200,000 training examples, 1000 classes. 23 layers!

56 Principal Components Analysis (PCA). Take N-dimensional data, find the M orthogonal directions in which the data have the most variance. The M principal directions form a lower-dimensional subspace. A linear projection with dimensionality reduction at the end.
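
A sketch of this with numpy: center the data, take the top-M right singular vectors as the principal directions, and project; the toy data and M are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5)) @ rng.normal(size=(5, 5))   # toy N=5 data

def pca_project(X, M):
    """Project onto the M orthogonal directions of largest variance."""
    Xc = X - X.mean(axis=0)              # center the data
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:M]                  # M principal directions
    return Xc @ components.T, components # lower-dimensional codes + basis

codes, components = pca_project(X, M=2)
print(codes.shape)        # (500, 2): linear projection with reduced dimension
```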

57 PCA with N=2 and M=1. [Figure: the direction of the first principal component, i.e., the direction of greatest variance.]

58 PCA by an MLP with BACKPROP (inefficiently). Linear hidden & output layers. The M hidden units will span the same space as the first M components found by PCA. [Figure: input vector (N units) -> code (M units) -> output vector (N units).]

59 Generalize PCA: Autoencoder. What about non-linear units? [Figure: input vector -> encoding weights -> code -> decoding weights -> output vector.]
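
A minimal sketch of such an autoencoder trained with the BP rules above (sigmoid code layer, linear output layer; sizes, learning rate and toy data are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, eta = 8, 3, 0.05                        # input size, code size, rate
X = rng.random((200, N))                      # toy data

W1 = rng.normal(scale=0.1, size=(M, N)); b1 = np.zeros(M)   # encoding weights
W2 = rng.normal(scale=0.1, size=(N, M)); b2 = np.zeros(N)   # decoding weights

for epoch in range(200):
    for x in X:
        code = 1.0 / (1.0 + np.exp(-(W1 @ x + b1)))   # non-linear code
        out = W2 @ code + b2                          # linear reconstruction
        delta_o = x - out                             # output deltas (linear units)
        delta_h = (W2.T @ delta_o) * code * (1.0 - code)
        W2 += eta * np.outer(delta_o, code); b2 += eta * delta_o
        W1 += eta * np.outer(delta_h, x);    b1 += eta * delta_h

codes = 1.0 / (1.0 + np.exp(-(X @ W1.T + b1)))
print(np.mean((X - (codes @ W2.T + b2)) ** 2))        # small reconstruction error
```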

60 Stacked Autoencoders: Unsupervised Pretraining. We know it is hard to train DNNs. We can use the following weight initialization method:
1. Train the first layer as a shallow autoencoder.
2. Use the hidden units' outputs as an input to another shallow autoencoder.
3. Repeat (2) until the desired number of layers is reached.
4. Fine-tune using supervised learning.
Steps 1 & 2 are unsupervised (no labels needed).
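
A compact sketch of the greedy loop; the shallow-autoencoder trainer mirrors the previous sketch, and the layer sizes, rates and unlabeled toy data are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_shallow_ae(X, M, eta=0.05, epochs=50):
    """Train one shallow autoencoder on X; return the encoder and the codes."""
    N = X.shape[1]
    W1 = rng.normal(scale=0.1, size=(M, N)); b1 = np.zeros(M)
    W2 = rng.normal(scale=0.1, size=(N, M)); b2 = np.zeros(N)
    for _ in range(epochs):
        for x in X:
            code = 1.0 / (1.0 + np.exp(-(W1 @ x + b1)))
            delta_o = x - (W2 @ code + b2)
            delta_h = (W2.T @ delta_o) * code * (1.0 - code)
            W2 += eta * np.outer(delta_o, code); b2 += eta * delta_o
            W1 += eta * np.outer(delta_h, x);    b1 += eta * delta_h
    return (W1, b1), 1.0 / (1.0 + np.exp(-(X @ W1.T + b1)))

X = rng.random((100, 16))                 # unlabeled data (no labels needed)
encoders, H = [], X
for M in [8, 4, 2]:                       # desired layer sizes
    enc, H = train_shallow_ae(H, M)       # step 2: codes feed the next autoencoder
    encoders.append(enc)                  # the stack initializes the deep net
# step 4 (not shown): fine-tune the whole stack with supervised BP
```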

61 Stacked Autoencoders II.

62 Stacked Autoencoders III.

63 Stacked Autoencoders IV.

64 Other Deep Learning Approaches. Deep Neural Networks are not the only implementation of Deep Learning. Graphical Model approaches. Key words: Restricted Boltzmann Machine (RBM), Stacked RBM, Deep Belief Network.

65 Tools for Deep Learning. GPU acceleration. cuda-convnet2 (C++), Caffe (C++, Python, Matlab, Mathematica), Theano (Python), DL4J (Java), and many others.

66 Next Lecture: Recurrent ANNs = RNNs.
