Artificial Neural Networks MLP, Backpropagation


1 Artificial Neural Networks MLP, Backpropagation Jan Drchal Computational Intelligence Group Department of Computer Science and Engineering Faculty of Electrical Engineering Czech Technical University in Prague

2 Outline MultiLayer Perceptron (MLP). How many layers and neurons? How to train ANNs? Backpropagation. Derived and other methods.

3 Layered ANNs (Figure: a single-layer and a two-layer feed-forward network, and a recurrent network.)

4 MultiLayer Perceptron (MLP) (Figure: inputs $x_1, x_2, x_3$ feed the input layer; weights $w_{ij}$ connect it to hidden layers of neurons $\varphi$, weights $w_{jk}$ connect the hidden layers, and weights $w_{kl}$ lead to the output layer producing outputs $y_1, y_2$.)

5 Neurons in MLPs (Figure: a McCulloch-Pitts perceptron: neuron inputs $x_1 \dots x_n$ with weights $w_1 \dots w_n$, a threshold $\Theta$, a nonlinear transfer function, and neuron output $y$.)

6 Logistic Sigmoid Function $S(s) = \frac{1}{1+e^{-s}}$ (Figure: the sigmoid for different values of the gain/slope parameter $\gamma$.) But also many other (non-)linear functions...
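A minimal NumPy sketch of the logistic sigmoid (the explicit gain argument gamma is an illustrative addition; the slides only show it in the plot):

```python
import numpy as np

def sigmoid(s, gamma=1.0):
    """Logistic sigmoid S(s) = 1 / (1 + exp(-gamma * s)).

    gamma is the gain/slope parameter; gamma = 1 gives the standard
    logistic function."""
    return 1.0 / (1.0 + np.exp(-gamma * s))

# A larger gain makes the transition around s = 0 steeper.
for gamma in (0.5, 1.0, 5.0):
    print(gamma, sigmoid(np.array([-1.0, 0.0, 1.0]), gamma))
```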

7 How Many Hidden Layers? MLPs with Discrete Activation Functions (see ftp://ftp.sas.com/pub/neural/FAQ3.html#A_hl for an overview):
Single-layer: decision regions are half planes bounded by a hyperplane.
Two-layer: convex open or closed regions.
Three-layer: arbitrary regions (complexity limited by the number of nodes).
(Figure: the decision regions each structure can form for the exclusive-OR problem, for classes with meshed regions, and the most general region shapes.)

8-9 How Many Hidden Layers? Continuous MLPs Universal Approximation property. Kurt Hornik: for MLPs using continuous, bounded, and non-constant activation functions, a single hidden layer is enough to approximate any function. Q: what about linear activation? A: it is continuous and non-constant, but not bounded! We will get back to this topic later...

10 Continuous MLPs Although one hidden layer is enough for a continuous MLP: we don't know how many neurons to use, and fewer neurons are often sufficient for ANN architectures with two (or more) hidden layers. See ftp://ftp.sas.com/pub/neural/FAQ3.html#A_hl for an example.

11 How Many Neurons? No one knows :( we have only rough estimates (upper bounds). ANN with a single hidden layer: $N_{hid} = \sqrt{N_{in} \cdot N_{out}}$. ANN with two hidden layers: $N_{hid1} = N_{out} \cdot \sqrt[3]{(N_{in}/N_{out})^2}$, $N_{hid2} = N_{out} \cdot \sqrt[3]{N_{in}/N_{out}}$. You have to experiment.
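These estimates are easy to compute; a small sketch, assuming the two-hidden-layer formulas follow the geometric pyramid rule reconstructed above:

```python
import math

def hidden_one_layer(n_in, n_out):
    """Upper-bound estimate for a single hidden layer: sqrt(N_in * N_out)."""
    return math.sqrt(n_in * n_out)

def hidden_two_layers(n_in, n_out):
    """Estimates for two hidden layers: N_out * r**2 and N_out * r,
    where r is the cube root of N_in / N_out."""
    r = (n_in / n_out) ** (1.0 / 3.0)
    return n_out * r * r, n_out * r

print(hidden_one_layer(100, 10))    # ~31.6
print(hidden_two_layers(100, 10))   # ~(46.4, 21.5)
```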

12 Generalization vs. Overfitting When training ANNs we typically want them to perform accurately on new, previously unseen data. This ability is known as generalization. When an ANN rather memorizes the training data while giving bad results on new data, we talk about overfitting (overtraining).

13 Training/Testing Sets Random samples: a training set (~70%) used to train the ANN model, and a testing set (~30%) whose error estimates generalization.
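A minimal sketch of such a random ~70/30 split in NumPy (function and argument names are illustrative, not from the slides):

```python
import numpy as np

def train_test_split(X, d, test_fraction=0.3, seed=0):
    """Randomly split patterns X and desired outputs d into
    ~70% training and ~30% testing sets."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    n_test = int(len(X) * test_fraction)
    test, train = idx[:n_test], idx[n_test:]
    return X[train], d[train], X[test], d[test]
```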

14 Overfitting Example (Figure: training error and testing error vs. training time; at the point where the testing error starts to grow, we should stop training.) http://upload.wikimedia.org/wikipedia/commons/1/1f/Overfitting_svg.svg

15 Example: Bad Choice of a Training Set

16 Training/Validation/Testing Sets Ripley, Pattern Recognition and Neural Networks, 1996:
Training set: a set of examples used for learning, that is to fit the parameters [i.e., weights] of the ANN.
Validation set: a set of examples used to tune the parameters [i.e., architecture, not weights] of an ANN, for example to choose the number of hidden units.
Test set: a set of examples used only to assess the performance (generalization) of a fully-specified ANN.
Separated: ~60%, ~20%, ~20%. Note: the meaning of the validation and test sets is often reversed in the literature (machine learning vs. statistics). For example, see Priddy, Keller: Artificial Neural Networks: An Introduction (Google Books, p. 44).

17 Training/Validation/Testing Sets II Taken from Ricardo Gutierrez-Osuna's slides: http://courses.cs.tamu.edu/rgutier/ceg499_s02/l13.pdf

18 k-fold Cross-validation Example 10-fold cross-validation: split the training data into 10 folds of equal size. Repeat 10 times, each time choosing a different fold as the validation set (1 of 10): 90% training set, 10% validation set. Create 10 ANN models and choose the best one for the testing data. Suitable for small datasets; it reduces the problems caused by random selection of training/testing sets. The cross-validation error is the average over all (10) validation sets.
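A sketch of the fold bookkeeping in NumPy (names are illustrative):

```python
import numpy as np

def k_fold_indices(n_samples, k=10, seed=0):
    """Yield (train_idx, val_idx) pairs: each fold serves as the
    validation set exactly once, the remaining k-1 folds train."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(n_samples), k)
    for i in range(k):
        train = np.concatenate([f for j, f in enumerate(folds) if j != i])
        yield train, folds[i]

# The cross-validation error is then the average validation error
# over all k folds.
```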

19 Cleaning the Dataset Imputing missing values. Outlier identification: an instance of the data distant from the rest of the data. Smoothing-out the noisy data.
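A compact sketch of two of these steps, mean imputation and z-score outlier flagging (the 3-sigma threshold is an assumed convention, not from the slides):

```python
import numpy as np

def impute_and_flag_outliers(X, z_thresh=3.0):
    """Replace missing values (NaN) with column means, then flag as
    outliers the instances whose z-score exceeds z_thresh anywhere."""
    X = np.where(np.isnan(X), np.nanmean(X, axis=0), X)
    z = np.abs((X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12))
    return X, (z > z_thresh).any(axis=1)
```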

20 Data Reduction Often needed for large data sets. The reduced dataset should be a representative sample of the original dataset. The simplest algorithm: randomly remove data instances.

21 Data Transformation Normalization: scaling/shifting values to fit a given interval. Aggregation: e.g. binning - discretization (continuous values to classes).
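Both transformations in a short NumPy sketch (equal-width binning is one possible choice of discretization):

```python
import numpy as np

def min_max_scale(x, lo=0.0, hi=1.0):
    """Normalization: scale/shift values to fit the interval [lo, hi]."""
    x = np.asarray(x, dtype=float)
    return lo + (hi - lo) * (x - x.min()) / (x.max() - x.min())

def bin_values(x, n_bins=4):
    """Binning: discretize continuous values into equal-width classes."""
    edges = np.linspace(np.min(x), np.max(x), n_bins + 1)
    return np.digitize(x, edges[1:-1])   # class indices 0 .. n_bins-1
```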

22 Learning Process Notes We don't have to choose instances sequentially: random selection is possible. We can apply certain instances more frequently than others. We often need hundreds to thousands of epochs to train the network. A good strategy might speed things up.

23 Backpropagation (BP) Paul Werbos, 1974, Harvard, PhD thesis. Still a popular method, with many modifications. BP is a learning method for MLPs: it needs continuous, differentiable activation functions!

24 BP Overview

random weight initialization
repeat
    repeat  // epoch
        choose an instance from the training set
        apply it to the network
        evaluate network outputs
        compare outputs to desired values
        modify the weights
    until all instances selected from the training set
until global error < criterion

25 ANN Energy Backpropagation is based on minimization of the ANN energy (= error). Energy is a measure describing how well the network is trained on the given data. For BP we define the energy function $E_{TOTAL} = \sum_p E_p$, where $E_p = \frac{1}{2} \sum_{i=1}^{N_o} (d_i - y_i)^2$. The total sum is computed over all patterns p of the training set; we will omit p in the following slides. Note: the 1/2 is only for convenience, as we will see later...
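The energy function translates directly to code; a small sketch using the definitions above:

```python
import numpy as np

def pattern_energy(d, y):
    """E_p = 1/2 * sum_i (d_i - y_i)**2 for one training pattern."""
    return 0.5 * np.sum((np.asarray(d) - np.asarray(y)) ** 2)

def total_energy(D, Y):
    """E_TOTAL = sum of E_p over all patterns of the training set."""
    return sum(pattern_energy(d, y) for d, y in zip(D, Y))
```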

26 ANN Energy II The energy is a function of both the weights and the inputs: $E = f(X, W)$, where W are the weights (thresholds): variable, and X are the inputs: fixed (for a given pattern).

27 Backpropagation Keynote For given values at the network inputs we obtain an energy value. Our task is to minimize this value. The minimization is done via modification of the weights and thresholds. We have to identify how the energy changes when a certain weight is changed by $\Delta w$. This corresponds to the partial derivatives $\frac{\partial E}{\partial w}$. We employ a gradient method.

28 Gradient Descent in Energy Landscape (Figure: the energy/error landscape over the weight space; one gradient step moves from $(w_1, w_2)$ to $(w_1 + \Delta w_1, w_2 + \Delta w_2)$.)

29 Weight Update We want to update the weights in the opposite direction to the gradient: $\Delta w_{jk} = -\eta \frac{\partial E}{\partial w_{jk}}$, where $\Delta w_{jk}$ is the weight delta and $\eta$ is the learning rate. Note: the gradient of the energy function is a vector which contains the partial derivatives for all weights (thresholds).
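As a one-step sketch (assuming the gradient array is already available; computing it is what the following slides derive):

```python
# One gradient step: grad[j, k] holds dE/dw_jk, eta is the learning rate.
def update_weights(w, grad, eta=0.1):
    return w - eta * grad   # delta_w_jk = -eta * dE/dw_jk
```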

30 Notation
$w_{jk}$: weight of the connection from neuron j to neuron k
$w_{0k}^m$: threshold of neuron k in layer m
$w_{jk}^m$: weight of a connection going from layer m-1 to layer m
$s_k^m$: inner potential of neuron k in layer m
$y_k^m$: output of neuron k in layer m
$x_k$: the k-th input
$N_i$, $N_h$, $N_o$: number of neurons in the input, hidden and output layers
(Figure: a network with inputs $x_1 \dots x_{N_i}$, a hidden layer, outputs $y_1 \dots y_{N_o}$, and constant +1 inputs feeding the thresholds.)

31-34 Energy as a Function Composition

$\frac{\partial E}{\partial w_{jk}} = \frac{\partial E}{\partial s_k} \cdot \frac{\partial s_k}{\partial w_{jk}}$

Use $s_k = \sum_j w_{jk} y_j$, hence $\frac{\partial s_k}{\partial w_{jk}} = y_j$. Denote $\delta_k = -\frac{\partial E}{\partial s_k}$. Together:

$\Delta w_{jk} = \eta \delta_k y_j$

Remember the delta rule?

35-42 Output Layer

$\delta_k = -\frac{\partial E}{\partial s_k} = -\frac{\partial E}{\partial y_k} \cdot \frac{\partial y_k}{\partial s_k}$

The second factor is the derivative of the activation function: $\frac{\partial y_k}{\partial s_k} = S'(s_k)$.

For the first factor, use the dependency of the energy on a network output, $E = \frac{1}{2} \sum_{i=1}^{N_o} (d_i - y_i)^2$, which gives $-\frac{\partial E}{\partial y_k} = d_k - y_k$. That is why we used the 1/2.

Together: $\delta_k = (d_k - y_k) S'(s_k)$ and

$\Delta w_{jk} = \eta \delta_k y_j = \eta (d_k - y_k) S'(s_k) y_j$

Again, remember the delta rule?
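A NumPy sketch of the output-layer update for one pattern, using the sigmoid derivative $S'(s) = y(1-y)$ that slide 56 derives below (array names are illustrative):

```python
import numpy as np

def output_layer_deltas(d, y_out):
    """delta_k = (d_k - y_k) * S'(s_k), with S'(s) = y * (1 - y)."""
    return (d - y_out) * y_out * (1.0 - y_out)

def output_weight_deltas(y_hidden, delta_out, eta=0.5):
    """delta_w_jk = eta * delta_k * y_j (outer product over j and k)."""
    return eta * np.outer(y_hidden, delta_out)
```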

43-47 Hidden Layer

$\delta_k = -\frac{\partial E}{\partial s_k} = -\frac{\partial E}{\partial y_k} \cdot \frac{\partial y_k}{\partial s_k}$

The second factor is the same as in the output layer: $\frac{\partial y_k}{\partial s_k} = S'(s_k)$. Note that $y_k$ is now the output of a hidden neuron. Let's look at the remaining partial derivative $\frac{\partial E}{\partial y_k}$.

48-52 Hidden Layer II Apply the chain rule (http://en.wikipedia.org/wiki/Chain_rule):

$\frac{\partial E}{\partial y_k} = \sum_{l=1}^{N_o} \frac{\partial E}{\partial s_l} \cdot \frac{\partial s_l}{\partial y_k}$

With $s_l = \sum_i w_{il} y_i$ we get $\frac{\partial s_l}{\partial y_k} = w_{kl}$, and we already know $\frac{\partial E}{\partial s_l} = -\delta_l$, so

$-\frac{\partial E}{\partial y_k} = \sum_{l=1}^{N_o} \delta_l w_{kl}$

Take the error $\delta_l$ of each output neuron l and multiply it by the weight of the connection leading to it. Here the back-propagation actually happens...

(Figure: a hidden neuron output $y_k$ connected through weights $w_{k1} \dots w_{kN_o}$ to output neurons with deltas $\delta_1 \dots \delta_{N_o}$.)

53-55 Hidden Layer III Now, let's put it all together!

$\delta_k = -\frac{\partial E}{\partial s_k} = \left( \sum_{l=1}^{N_o} \delta_l w_{kl} \right) S'(s_k)$

The derivative of the activation function is the last thing to deal with!

$\Delta w_{jk} = \eta \delta_k x_j = \eta \left( \sum_{l=1}^{N_o} \delta_l w_{kl} \right) S'(s_k) x_j$
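The corresponding hidden-layer sketch: back-propagate the output deltas through the hidden-to-output weights (shapes and names are illustrative):

```python
import numpy as np

def hidden_layer_deltas(delta_out, w_h2o, y_hidden):
    """delta_k = (sum_l delta_l * w_kl) * S'(s_k) for sigmoid units;
    w_h2o has shape (n_hidden, n_output)."""
    return (w_h2o @ delta_out) * y_hidden * (1.0 - y_hidden)
```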

56 Sigmoid Derivative

$S'(s_k) = \left( \frac{1}{1+e^{-s_k}} \right)' = \frac{e^{-s_k}}{(1+e^{-s_k})^2} = y_k (1 - y_k)$

That is why we needed continuous & differentiable activation functions!
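A quick numerical check of this identity (central finite difference; the step size is an arbitrary small value):

```python
import numpy as np

S = lambda s: 1.0 / (1.0 + np.exp(-s))

s = 0.7
y = S(s)
analytic = y * (1.0 - y)                    # S'(s) = y * (1 - y)
h = 1e-6
numeric = (S(s + h) - S(s - h)) / (2 * h)   # central difference
print(analytic, numeric)                    # the two values agree closely
```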

57 How About the General Case? Arbitrary number of hidden layers? (Figure: a network with inputs $x_1 \dots x_{N_i}$, several hidden layers with weight matrices $w^1, \dots, w^h$, and outputs $y_1 \dots y_{N_o}$.) It's the same: for layer h-1 use the deltas $\delta_k^h$ of layer h.

58 BP Put All Together

Output layer: $\Delta w_{jk}^o = \eta \delta_k^o y_j^h = \eta\, y_k^o (1 - y_k^o)(d_k - y_k^o)\, y_j^h$. ($y_j^h$ is equal to $x_j$ when we get to the inputs.)

Hidden layer m (note that h+1 = o): $\Delta w_{jk}^m = \eta \delta_k^m y_j^{m-1} = \eta\, y_k^m (1 - y_k^m) \left( \sum_{l=1}^{N^{m+1}} \delta_l^{m+1} w_{kl}^{m+1} \right) y_j^{m-1}$

Weight (threshold) updates: $w_{jk}(t+1) = w_{jk}(t) + \Delta w_{jk}(t)$
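Putting the whole algorithm into one runnable NumPy sketch: a small MLP trained by online BP on XOR (the task, architecture, learning rate, and stopping criterion are illustrative choices; as slide 64 notes, an unlucky random initialization may need a re-run):

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

# Illustrative toy problem: XOR with a 2-3-1 architecture.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
D = np.array([[0], [1], [1], [0]], dtype=float)

n_in, n_hid, n_out = 2, 3, 1
W1 = rng.uniform(-0.5, 0.5, (n_in + 1, n_hid))   # last row = thresholds (+1 input)
W2 = rng.uniform(-0.5, 0.5, (n_hid + 1, n_out))
eta = 0.5

for epoch in range(20000):
    E_total = 0.0
    for p in rng.permutation(len(X)):            # random instance selection
        x = np.append(X[p], 1.0)                 # +1 bias input
        y_h = sigmoid(x @ W1)                    # hidden outputs
        y_hb = np.append(y_h, 1.0)
        y_o = sigmoid(y_hb @ W2)                 # network outputs

        delta_o = (D[p] - y_o) * y_o * (1 - y_o)         # output deltas
        delta_h = (W2[:-1] @ delta_o) * y_h * (1 - y_h)  # hidden deltas

        W2 += eta * np.outer(y_hb, delta_o)      # w(t+1) = w(t) + eta*delta*y
        W1 += eta * np.outer(x, delta_h)

        E_total += 0.5 * np.sum((D[p] - y_o) ** 2)
    if E_total < 1e-3:                           # global error < criterion
        break

print("epochs:", epoch, " E_total:", E_total)
for x_p in X:
    y = sigmoid(np.append(sigmoid(np.append(x_p, 1.0) @ W1), 1.0) @ W2)
    print(x_p, "->", y.round(3))
```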

59 Potential Problems High dimension of the weight (threshold) space. Complexity of the energy function. Different shape of the energy function for different input vectors.

60 Modifications to Standard BP Momentum Simple, but it greatly helps with avoiding local minima:

$\Delta w_{ij}(t) = \eta \delta_j(t) y_i(t) + \alpha \Delta w_{ij}(t-1)$

momentum constant: $\alpha \in [0, 1)$
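A sketch of the momentum update written in gradient form (here $\eta \delta_j y_i$ is expressed as $-\eta\, \partial E/\partial w$; names are illustrative):

```python
def momentum_update(w, grad, prev_delta, eta=0.5, alpha=0.9):
    """delta_w(t) = -eta * dE/dw + alpha * delta_w(t-1), alpha in [0, 1)."""
    delta = -eta * grad + alpha * prev_delta
    return w + delta, delta   # keep delta for the next step
```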

61 Modifications to Standard BP II Normalized cumulated delta: all changes applied together. Delta-bar-delta rule: also using the previous gradient. Extended: both of the above together. Quickprop, Rprop: second-order methods (approximate the Hessian).

62 Other Approaches Based on Numerical Optimization Compute the partial derivatives over the total energy, $\frac{\partial E_{TOTAL}}{\partial w_{jk}}$, and use any numerical optimization method, e.g.: conjugate gradients, quasi-Newton methods, Levenberg-Marquardt.
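A sketch of this idea using SciPy's quasi-Newton BFGS optimizer on the total energy of a tiny XOR network (the problem and architecture are illustrative; SciPy itself is not mentioned in the slides):

```python
import numpy as np
from scipy.optimize import minimize

# Instead of online BP, hand the *total* energy over all patterns to a
# general-purpose optimizer (BFGS approximates the Hessian internally).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
D = np.array([0, 1, 1, 0], dtype=float)          # XOR targets
n_in, n_hid = 2, 3
sizes = [(n_in + 1) * n_hid, (n_hid + 1) * 1]    # weight counts incl. biases

def total_energy(flat_w):
    W1 = flat_w[:sizes[0]].reshape(n_in + 1, n_hid)
    W2 = flat_w[sizes[0]:].reshape(n_hid + 1, 1)
    Xb = np.c_[X, np.ones(len(X))]               # +1 bias column
    H = 1 / (1 + np.exp(-(Xb @ W1)))
    Y = 1 / (1 + np.exp(-(np.c_[H, np.ones(len(X))] @ W2)))
    return 0.5 * np.sum((D - Y.ravel()) ** 2)

w0 = np.random.default_rng(0).uniform(-0.5, 0.5, sum(sizes))
result = minimize(total_energy, w0, method="BFGS")
print(result.fun)   # final total energy
```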

63 Typical Energy Behaviour During Learning

64 Typical Learning Runs (Figure: three plots of energy E vs. # iterations.) First: this is best! Constant and fast decrease of the error. Second: seems like a local minimum. Change the learning rate, momentum, or ANN architecture, and run again (random weight initialization)... Third: noisy data.

65 Next Lecture Unsupervised learning, SOM etc.
