NEURAL NETWORKS


Motivation
- Humans are able to process complex tasks efficiently (perception, pattern recognition, reasoning, etc.)
- Ability to learn from examples
- Adaptability and fault tolerance

Engineering applications
- Nonlinear approximation and classification
- Learning (adaptation) from data: black-box modeling
- Very-Large-Scale Integration (VLSI) implementation

Biological neuron
- Soma: body of the neuron.
- Dendrites: receptors (inputs) of the neuron.
- Axon: output of the neuron; connected to dendrites of other neurons via synapses.
- Synapses: transfer of information between neurons (electrochemical signals).

Neural networks
Biological neural networks:
- Neuron switching time: 0.001 second
- Number of neurons: 10^11 (100 billion)
- Connections per neuron (synapses): 10^4 (100 trillion in total)
- Recognition time: 10^-3 s (milliseconds), thanks to parallel computation
Artificial neural networks:
- Weighted connections amongst units
- Highly parallel, distributed processing
- Emphasis on tuning weights automatically

Use of neural networks
- Input is high-dimensional
- Output is multidimensional
- Mathematical form of the system is unknown
- Interpretability of the identified model is unimportant
Applications: pattern recognition, classification, prediction, modeling.

Biological neural network -> artificial neural network:
- Soma -> Neuron
- Dendrite -> Input
- Axon -> Output
- Synapse -> Weight

ANN: history
1943 Warren McCulloch & Walter Pitts
Definition of a neuron: the activity of a neuron is an all-or-none process; the structure of the net does not change with time. Too simple a structure, however:
- Proved that networks of their neurons could represent any finite logical expression
- Used a massively parallel architecture
- Provided an important foundation for further development

ANN: history
1948 Donald Hebb
Major contributions:
- Recognized that information is stored in the weights of the synapses
- Postulated a learning rate that is proportional to the product of neuron activation values
- Postulated a cell assembly theory: repeated simultaneous activation of a weakly-connected cell group results in a more strongly connected group

ANN: history
1957 Frank Rosenblatt
- Defined the first computer implementation: the perceptron
- Attracted the attention of engineers and physicists, using a model of biological vision
- Defined information storage as being in connections or associations rather than topographic representations
- Defined both self-organizing and supervised learning mechanisms

ANN: history
1959 Bernard Widrow & Marcian Hoff
- Engineers who simulated networks on computers and implemented designs in hardware (Adaline and Madaline)
- Formulated the Least Mean Squares (LMS) algorithm that minimizes sum-squared error
- LMS adapts weights even when the classifier output is correct

ANN: history
1977 David Rumelhart
- Introduced a computer implementation of backpropagation learning and the delta rule
1982 John Hopfield
- Implemented a recurrent network
- Developed a way to minimize the energy of the network, defined stable states
- The first NNs on silicon chips were built by AT&T using the Hopfield net

ANN: history
1989 Cybenko (approximation theory)
1990 Jang et al. (neuro-fuzzy systems)
1993 Barron (complexity vs. accuracy)

ADAPTIVE NETWORKS

Adaptive (neural) networks
- Massively connected computational units inspired by the working of the human brain
- Provide a mathematical model for biological neural networks (brains)
- Characteristics: learning from examples; adaptive and fault tolerant; robust for fulfilling complex tasks

Network classification
- Learning methods (supervised, unsupervised)
- Architectures (feedforward, recurrent)
- Output types (binary, continuous)
- Node types (uniform, hybrid)
- Implementations (software, hardware)
- Connection weights (adjustable, hard-wired)
- Inspirations (biological, psychological)

Adaptive network architecture
- Nodes are static (no dynamics) and parametric
- The network can consist of heterogeneous nodes
- Links do not have weights or parameters associated with them
- Node functions are differentiable except at a finite number of points
[Figure: a network with an input layer, layers 1 and 2, and layer 3 (the output layer); some nodes are adaptive, others fixed.]

Adaptive network categories
Feedforward and recurrent.
[Figure: the same set of nodes wired as a feedforward network (acyclic links) and as a recurrent network (with feedback links).]

Adaptive network representations
Layered representation and topological ordering.
[Figure: a feedforward network drawn in layered form and in topologically ordered form.]

Feedforward adaptive network
- Static mapping between the input and output spaces
- Aim: construct a network to obtain a nonlinear mapping, regulated by a data set (training data set) of desired input-output pairs of the target system to be modeled
- Procedures: learning rules or adaptation algorithms (parameter adjustment to improve network performance)
- Network performance: measured as the discrepancy between the desired output and the network's output for the same input (error measure)

Examples of adaptive networks
Adaptive network with a single linear node:
x_3 = f_3(x_1, x_2; a_1, a_2, a_3) = a_1 x_1 + a_2 x_2 + a_3
Perceptron network (linear classifier):
x_3 = f_3(x_1, x_2; a_1, a_2, a_3) = a_1 x_1 + a_2 x_2 + a_3
x_4 = f_4(x_3) = 1 if x_3 >= 0, 0 if x_3 < 0

Examples of adaptive networks
Multilayer perceptron (3-3-2 neural network), with layer 0 (input layer), layer 1 (hidden layer) and layer 2 (output layer). Output of node 7:
x_7 = 1 / (1 + exp[-(w_{4,7} x_4 + w_{5,7} x_5 + w_{6,7} x_6 - t_7)])
Parameter set of node 7: {w_{4,7}, w_{5,7}, w_{6,7}, t_7} (weights and threshold).

SUPERVISED LEARNING NEURAL NETWORKS

Perceptron
- Early (and popular) attempt to build intelligent, self-learning systems from simple components
- Derived from the McCulloch-Pitts (1943) model of the biological neuron
- Models the output as a weighted combination of selected features (feature classifier)
- Essentially a linear classifier
- Incremental learning roughly based on gradient descent

Perceptron
[Figure: an input pattern feeds a fixed feature-detection layer g_1, ..., g_4 producing x_1, ..., x_4; adaptive weights w_1, ..., w_4 feed a threshold unit theta that produces the output.]

Training algorithm
Perceptron (Rosenblatt, 1958). Can only learn linearly separable functions.
Training algorithm:
1. Select an input vector x from the training data set.
2. If the perceptron gives an incorrect response, modify all connection weights w_i.

Training algorithm
Weight training:
w_i(l+1) = w_i(l) + Delta w_i(l)
The weight correction is given by the delta rule:
Delta w_i(l) = eta x_i(l) e(l),  with learning rate eta and error e(l) = y_d(l) - y(l)
Question: can we represent a simple exclusive-or (XOR) function with a single-layer perceptron?

XOR problem (o: class 1, x: class 2)
X  Y  Class
0  0  1
0  1  2
1  0  2
1  1  1
How to classify the patterns correctly? Linear classification is not possible!
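As a minimal sketch in plain MATLAB (no toolboxes), the loop below trains a single threshold neuron with the delta rule on a linearly separable toy problem (AND); the data set, learning rate and epoch count are illustrative assumptions. Run it on the XOR table above instead and the weights never settle, which is the point of the question.

% Perceptron training with the delta rule (illustrative sketch).
P  = [0 0 1 1; 0 1 0 1];             % inputs, one column per pattern (assumed toy data)
Td = [0 0 0 1];                      % desired outputs: AND, which is linearly separable
w = 0.1*randn(2,1); b = 0;           % small random initial weights and bias
eta = 0.5;                           % learning rate
for epoch = 1:20
    for l = 1:size(P,2)
        x = P(:,l);
        y = double(w'*x + b >= 0);   % threshold (step) activation
        e = Td(l) - y;               % e(l) = yd(l) - y(l)
        w = w + eta*e*x;             % delta rule: dw_i = eta*x_i*e
        b = b + eta*e;
    end
end
disp([w' b])                         % the learned weights define a separating line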

Example
Linearly separable classifications: if the classification is linearly separable, we can have any number of classes with a perceptron. For example, consider classifying furniture according to height and width.
[Figure: furniture categories plotted by height and width.]

Example
Each category can be separated from the other two by a straight line, giving 3 straight lines; each output node fires if the point is on the right side of its straight line. More than one output node could fire at the same time!

Artificial neuron
[Figure: neuron with inputs x_1, x_2, ..., x_n, weights w_1, w_2, ..., w_n and output y.]
x_i: i-th input of the neuron
w_i: synaptic strength (weight) for x_i
y = f(sum_i w_i x_i): output signal

Types of neurons
Threshold (McCulloch and Pitts, 1943):
y = sign(sum_{i=1}^{n} w_i x_i)
Other types of activation functions (net = sum_i w_i x_i):
- step: y = 1 if net >= 0, y = 0 if net < 0
- linear: y = net
- sigmoid: y = 1 / (1 + e^{-net})

Activation functions
Logistic*: f(x) = 1 / (1 + e^{-x})
Hyperbolic tangent*: f(x) = tanh(x/2) = (1 - e^{-x}) / (1 + e^{-x})
Identity (linear): f(x) = x
*sigmoidal or squashing functions
[Figure: (a) logistic function, (b) hyperbolic tangent function, (c) identity function.]

Single-layer perceptron (SLP)
A single-layer perceptron can only classify linearly separable patterns, regardless of the activation function used. How to cope with problems which are not linearly separable? Using multilayer neural networks!
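The three activation functions are one-liners in MATLAB; this small sketch just reproduces panels (a)-(c), with the plotting range an arbitrary choice:

% Common activation functions (illustrative sketch).
logistic = @(x) 1./(1 + exp(-x));   % squashes to (0,1)
hypertan = @(x) tanh(x/2);          % equals (1-exp(-x))./(1+exp(-x)), range (-1,1)
identity = @(x) x;                  % linear
x = linspace(-6, 6, 200);
plot(x, logistic(x), x, hypertan(x), x, identity(x))
legend('logistic','tanh(x/2)','identity'), grid on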

Multi-Layer Perceptron for XOR
[Figure: a two-layer perceptron solving XOR; panels (a)-(d) show the two hidden threshold units, the separating lines they implement in the (x_1, x_2) plane, and the combined decision region of the output unit.]

Backpropagation MLP
The most commonly used NN structure for applications in a wide range of areas: pattern recognition, signal processing, data compression and automatic control.
Well-known applications:
- NETtalk: trained an MLP to pronounce English text
- Carnegie Mellon University's ALVINN (Autonomous Land Vehicle In a Neural Network) used an NN for steering an autonomous vehicle
- Optical Character Recognition (OCR)
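A hedged sketch of such a network in plain MATLAB; the constants below are one standard choice of weights (hidden units computing OR and AND of the inputs), assumed rather than taken from the figure:

% Two-layer perceptron solving XOR with hand-picked weights (assumed values).
step = @(v) double(v >= 0);                        % threshold unit
xor_net = @(x1,x2) step( step(x1 + x2 - 0.5) ...   % hidden unit 1: x1 OR x2
                       - step(x1 + x2 - 1.5) ...   % hidden unit 2: x1 AND x2
                       - 0.5 );                    % output: OR minus AND
for x1 = 0:1
    for x2 = 0:1
        fprintf('%d XOR %d = %d\n', x1, x2, xor_net(x1,x2));
    end
end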

Multi-Layer Perceptron
Can learn functions that are not linearly separable.
[Figure: MLP with an input layer, a 1st hidden layer, a 2nd hidden layer and an output layer producing the output signals.]

Most common MLP
[Figure: MLP with one hidden layer; inputs x_1, ..., x_n, hidden-layer weights w_ij^h and biases b_j^h, output-layer weights w_jk and biases b_k, outputs y_1, ..., y_l.]

Most common MLP
Output of the neurons in the hidden layer (sigmoid or tanh):
h_j = tanh( sum_{i=1}^{n} w_ij^h x_i + b_j^h ) = tanh( sum_{i=0}^{n} w_ij^h x_i )   (with x_0 = 1)
Output of the neurons in the output layer (linear):
y_k = sum_{j=1}^{m} w_jk h_j + b_k = sum_{j=0}^{m} w_jk h_j   (with h_0 = 1)

Learning in NN
Biological neural networks: synaptic connections amongst neurons which simultaneously exhibit high activity are strengthened.
Artificial neural networks: mathematical approximation of biological learning. Error minimization (a nonlinear optimization problem):
- Error backpropagation (first-order gradient)
- Newton methods (second-order gradient)
- Levenberg-Marquardt (second-order gradient)
- Conjugate gradients
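In matrix form the two layers above are a few lines of MATLAB; this sketch uses arbitrary layer sizes and random parameters standing in for trained values:

% Forward pass of an n-m-l MLP (tanh hidden layer, linear output layer).
n = 3; m = 5; l = 2;                  % illustrative layer sizes
Wh = randn(m,n); bh = randn(m,1);     % hidden-layer weights w_ij^h and biases b_j^h
Wo = randn(l,m); bo = randn(l,1);     % output-layer weights w_jk and biases b_k
x = randn(n,1);                       % one input vector
h = tanh(Wh*x + bh);                  % h_j = tanh(sum_i w_ij^h x_i + b_j^h)
y = Wo*h + bo                         % y_k = sum_j w_jk h_j + b_k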

Supervised learning
[Figure: for input x, the network output is compared with the desired output y to form the error e.]
Training data:
X = [x_1^T; x_2^T; ...; x_N^T],   Y = [y_1^T; y_2^T; ...; y_N^T]

Error backpropagation
Initialize all weights and thresholds to small random numbers.
Repeat:
1. Input the training examples and compute the network and hidden-layer outputs.
2. Adjust the output weights using the output error.
3. Propagating the output error backwards, adjust the hidden-layer weights.
Until satisfied with the approximation.

Backpropagation in MLP
Compute the output of the output layer, and compute the error:
e_k = y_{d,k} - y_k,   k = 1, ..., l
The cost function to be minimized is the following:
J(w) = (1/2) sum_{k=1}^{l} sum_{q=1}^{N} e_{kq}^2,   N = number of data points

Learning using the gradient
Output weight learning for output y_k:
w_jk(p+1) = w_jk(p) - eta grad J(w_jk)
grad J(w_jk) = [ dJ/dw_1k, dJ/dw_2k, ..., dJ/dw_mk ]^T

Output-layer weights
[Figure: output neuron y_k with inputs h_0, h_1, ..., h_m and weights w_0k, ..., w_mk.]
y_k = sum_{j=0}^{m} w_jk h_j,   e_k = y_{d,k} - y_k,   J = (1/2) e_k^2

Output-layer weights
Applying the chain rule with
dJ/de_k = e_k,   de_k/dy_k = -1,   dy_k/dw_jk = h_j
then
dJ/dw_jk = (dJ/de_k)(de_k/dy_k)(dy_k/dw_jk) = -h_j e_k
Thus:
w_jk(p+1) = w_jk(p) - eta grad J(w_jk) = w_jk(p) + eta h_j e_k
Recall that for the SLP: Delta w_i = eta x_i e

Hidden-layer weights
[Figure: hidden neuron h_j with inputs x_0, x_1, ..., x_n and weights w_0j^h, ..., w_nj^h.]
net_j = sum_{i=0}^{n} w_ij^h x_i,   h_j = tanh(net_j)
w_ij^h(p+1) = w_ij^h(p) - eta grad J(w_ij^h)
dJ/dw_ij^h = (dJ/dh_j)(dh_j/dnet_j)(dnet_j/dw_ij^h)

Hidden-layer weights
Partial derivatives:
dJ/dh_j = -sum_k e_k w_jk,   dh_j/dnet_j = (1 - h_j^2),   dnet_j/dw_ij^h = x_i
then
dJ/dw_ij^h = -x_i (1 - h_j^2) sum_{k=1}^{l} e_k w_jk
and
w_ij^h(p+1) = w_ij^h(p) + eta x_i (1 - h_j^2) sum_{k=1}^{l} e_k w_jk

Error backpropagation algorithm
Initialize all weights to small random numbers.
Repeat:
1. Input a training example and compute the network outputs.
2. Adjust the output weights using the gradients:
   w_jk(p+1) = w_jk(p) + eta h_j e_k
3. Adjust the hidden-layer weights:
   w_ij^h(p+1) = w_ij^h(p) + eta x_i (1 - h_j^2) sum_{k=1}^{l} e_k w_jk
Until satisfied, or for a fixed number of epochs p.

First-order gradient methods
[Figure: cost surface J(w) with successive gradient-descent steps from w_n to w_{n+1}.]
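Put together, steps 1-3 fit in a short plain-MATLAB loop. This is a hedged sketch for a 1-m-1 network with tanh hidden units and a linear output; the toy data set, layer size, learning rate and epoch count are all assumptions:

% Error backpropagation for a 1-m-1 MLP (tanh hidden layer, linear output).
X = linspace(-1, 1, 50); Yd = sin(pi*X);      % assumed toy training set
m = 8; eta = 0.05;                            % assumed layer size and learning rate
Wh = 0.5*randn(m,1); bh = 0.5*randn(m,1);     % hidden-layer parameters
Wo = 0.5*randn(1,m); bo = 0;                  % output-layer parameters
for epoch = 1:2000
    for q = 1:numel(X)
        x = X(q);
        h = tanh(Wh*x + bh);                  % 1. forward pass: hidden layer
        y = Wo*h + bo;                        %    and linear output layer
        e = Yd(q) - y;                        %    output error e = yd - y
        d = (1 - h.^2).*(Wo'*e);              %    hidden deltas, using tanh' = 1 - h^2
        Wo = Wo + eta*e*h'; bo = bo + eta*e;  % 2. output update: w_jk += eta*h_j*e_k
        Wh = Wh + eta*d*x;  bh = bh + eta*d;  % 3. hidden update: w_ij += eta*x_i*d_j
    end
end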

Second-order gradient methods
Update rule for the weights:
w(p+1) = w(p) - H^{-1}(w(p)) grad J(w(p)),   w = [w_ij^h, w_jk]
H(w) is the Hessian matrix of J with respect to w.
- Learning does not depend on a learning coefficient
- Much more efficient in general

Second-order gradient methods
[Figure: cost surface J(w); a second-order step moves from w_n to w_{n+1} far closer to the minimum than a gradient step.]
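To make the update rule concrete, here is a hedged toy sketch in MATLAB: a damped (Levenberg-Marquardt-flavoured) Newton iteration on an arbitrary two-parameter quadratic cost standing in for the network error J(w):

% Damped Newton steps w(p+1) = w(p) - H^{-1} grad J on a toy cost (assumed).
J     = @(w) (w(1)-2)^2 + 10*(w(2)+1)^2 + w(1)*w(2);
gradJ = @(w) [2*(w(1)-2) + w(2); 20*(w(2)+1) + w(1)];
H = [2 1; 1 20];                        % Hessian of this J (constant here)
w = [0; 0]; mu = 0.01;                  % mu damps the step, LM-style
for p = 1:10
    w = w - (H + mu*eye(2)) \ gradJ(w); % second-order update
end
disp([w' J(w)])                         % converges in a handful of steps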

Approximation power
General function approximators: a feedforward neural network with one hidden layer and sigmoidal activation functions can approximate any continuous function arbitrarily well on a compact set (Cybenko).
- Intuitive relation to localized receptive fields
- Few constructive results

Function approximation
y = w_1 tanh(w_1^h x + b_1^h) + w_2 tanh(w_2^h x + b_2^h)
[Figure: activation (weighted summation): the two lines z_1 = w_1^h x + b_1^h and z_2 = w_2^h x + b_2^h as functions of x.]

Function approximation
Transformation through tanh: v_1 = tanh(z_1), v_2 = tanh(z_2).
[Figure: v_1 and v_2 as functions of x.]
Summation of the neuron outputs: y = w_1 v_1 + w_2 v_2.
[Figure: the contributions w_1 v_1 and w_2 v_2 and their sum y as functions of x.]

RADIAL BASIS FUNCTION NETWORKS

Radial Basis Function Networks (RBFN)
Feedforward neural networks whose hidden units do not implement an activation function; they represent a radial basis function. Developed as an approach to improve accuracy and decrease training time complexity.

Radial Basis Function Networks
Activation functions are radial basis functions. Activation level of the i-th receptive field (hidden unit):
R_i(x) = R_i(||x - u_i|| / sigma_i)
u_i: center of basis function i; sigma_i: spread of basis function i.
[Figure: RBFN architecture with inputs x_1, ..., x_n, basis-function hidden units, connection weights c_11, ..., c_ml, and outputs y_1, ..., y_l.]

Radial Basis Function Networks
Localized activation functions. Gaussian and logistic:
R_i(x) = exp( -||x - u_i||^2 / (2 sigma_i^2) )
R_i(x) = 1 / ( 1 + exp( ||x - u_i||^2 / sigma_i^2 ) )
Weighted-sum or weighted-average output:
y(x) = sum_{i=1}^{H} c_i R_i(x)                               (weighted sum)
y(x) = sum_{i=1}^{H} c_i R_i(x) / sum_{i=1}^{H} R_i(x)        (weighted average)
c_i can be constants or functions of the inputs: c_i = a_i^T x + b_i

RBFN architecture
[Figure: weighted-sum and weighted-average RBFN architectures, with localized activation functions in the hidden layer.]

RBFN learning
- Supervised learning to update all parameters (e.g. with genetic algorithms)
- Sequential training: fix the basis functions, then adjust the output weights by:
  - orthogonal least squares
  - data clustering
  - soft competition based on a maximum likelihood estimate
- sigma_i is sometimes estimated from standard deviations
- Many other schemes also exist

Least-squares estimate of weights
Given basis functions R_i and a set of input-output data [x_k, y_k], k = 1, ..., N, estimate the optimal weights c_ij:
1. Compute the output of the hidden neurons:
   R_i(x_k) = exp( -||x_k - u_i||^2 / (2 sigma_i^2) )
   The output is linear in the weights: y = R c.
2. Least-squares estimate: c = (R^T R)^{-1} R^T y.
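A compact sketch of this two-step recipe in plain MATLAB; the one-dimensional toy data, the basis centers u and the common spread sigma are all illustrative assumptions:

% RBFN output weights by least squares (Gaussian basis functions).
xk = linspace(0, 1, 40)'; yk = sin(2*pi*xk);        % assumed training data [xk, yk]
u = linspace(0, 1, 7); sigma = 0.15;                % assumed centers and spread
R = exp( -bsxfun(@minus, xk, u).^2 / (2*sigma^2) ); % R(k,i) = R_i(x_k), N x H matrix
c = (R'*R) \ (R'*yk);                               % c = (R'R)^{-1} R' y
yhat = R*c;                                         % weighted-sum RBFN output
fprintf('RMS fit error: %g\n', sqrt(mean((yk - yhat).^2)))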

RBFN and Sugeno systems
Equivalent if the following hold:
- Both the RBFN and the Takagi-Sugeno (TS) model use the same aggregation method for the output (weighted sum or weighted average).
- The number of basis functions in the RBFN equals the number of rules in the TS model.
- The TS model uses Gaussian membership functions with the same spread (variance) as the basis functions, and rule firing strength is determined by the product operator.
- The RBFN response functions (c_i) and the TS rule consequents are equal.

General function approximator
[Figure: an RBFN approximating a nonlinear function.]

Approximation properties of NN
[Cybenko, 1989]: A feedforward NN with at least one hidden layer can approximate any continuous function R^p -> R^n on a compact interval, if sufficient hidden neurons are available.
[Barron, 1993]: A feedforward NN with one hidden layer and sigmoidal activation functions can achieve an integrated squared error (for smooth functions) of the order
J = O(1/h),
independently of the dimension of the input space p, where h is the number of hidden neurons.

Approximation properties
For a basis function expansion (polynomial, trigonometric, singleton fuzzy model, etc.) with h terms,
J = O(1/h^{2/p}),
where p is the dimension of the input. Examples:
1. p = 2: polynomial J = O(1/h^{2/2}) = O(1/h); neural net J = O(1/h).
2. p = 10, h = 21: polynomial J = O(1/21^{2/10}) = 0.54; neural net J = O(1/21) = 0.048.
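The two numbers in the second example can be checked in a line of MATLAB each (a quick sketch of the rates, nothing more):

% Convergence rates for h = 21 terms in p = 10 dimensions.
p = 10; h = 21;
basis_err = 1 / h^(2/p)   % O(1/h^(2/p)) ~ 0.54 for the basis function expansion
nn_err    = 1 / h         % O(1/h)       ~ 0.048 for the one-hidden-layer NN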

Example of approximation
To achieve the same accuracy J = O(1/h_n) = O(1/h_b^{2/p}), the term counts must satisfy h_n = h_b^{2/p}, i.e. h_b = h_n^{p/2}: as p grows, the basis expansion needs many more terms than the neural net has neurons.

Hopfield network
Recurrent ANN. Example (single-layer):
[Figure: single-layer recurrent network with feedback from the outputs to the inputs.]
The learning capability is much higher. Successive iterations may not necessarily converge, and may lead to chaotic behavior (unstable network).

NEURAL NETWORKS: MATLAB EXAMPLE (R2007b)

Feedforward backpropagation network
1. Input and target
P = [ ]; % input
T = [ ]; % target
2. Create the net (help newff)
net = newff(P,T,5);
3. Simulate and plot the net
Yi = sim(net,P);
plot(P,T,'rs-',P,Yi,'o-')
legend('T','Yi',0), xlabel('P')

Feedforward backpropagation network
[Figure: targets T and initial network output Yi plotted against P.]

Feedforward backpropagation network
4. Train the network for 50 epochs
net.trainParam.epochs = 50;
net = train(net,P,T);
T = [ ]; % target
5. Simulate the net and plot the results
Y = sim(net,P);
figure, plot(P,T,'rs-',P,Yi,'b',P,Y,'g^')
legend('T','Yi','Y',0), xlabel('P')

Feedforward backpropagation network
[Figure: targets T, initial output Yi and trained output Y plotted against P.]

Feedforward backpropagation network
Compute the mean absolute and squared errors:
ma_error = mae(T-Y)
ma_error = 0.20
ms_error = mse(T-Y)
ms_error =
Plot the network error:
figure, plot(P,T-Y,'o'), grid
ylabel('error'), xlabel('P')

Feedforward backpropagation network
[Figure: network error T-Y plotted against P.]

Feedforward backpropagation network
Check the parameters of the network:
net
Some important parameters:
inputs: {1x1 cell} of inputs
layers: {2x1 cell} of layers
outputs: {1x2 cell} containing 1 output
targets: {1x2 cell} containing 1 target
biases: {2x1 cell} containing 2 biases
inputWeights: {2x1 cell} containing 1 input weight
layerWeights: {2x2 cell} containing 1 layer weight

Feedforward backpropagation network
adaptFcn: 'trains'
initFcn: 'initlay'
performFcn: 'mse'
trainFcn: 'trainlm'
adaptParam: .passes
trainParam: .epochs, .goal, .show, .time
IW: {2x1 cell} containing 1 input weight matrix
LW: {2x2 cell} containing 1 layer weight matrix
b: {2x1 cell} containing 2 bias vectors

Feedforward backpropagation network
Note that every time a network is initialized, different random numbers are used for the weights. Example in the following:
- Initialization and training of 10 networks
- Computation of the mean absolute error
- Computation of the mean squared error

Feedforward backpropagation network
MA_error = []; MS_error = [];
for i = 1:10
    net = newff(P,T,5);
    net.trainParam.epochs = 50;
    net = train(net,P,T);
    Y = sim(net,P);
    MA_error = [MA_error mae(T-Y)];
    MS_error = [MS_error mse(T-Y)];
end

Feedforward backpropagation network
figure, subplot(2,1,1), plot(MA_error,'o'), grid, title('Mean Absolute Error')
subplot(2,1,2), plot(MS_error,'o'), grid, title('Mean Squared Error')

Feedforward backpropagation network
[Figure: mean absolute error (top) and mean squared error (bottom) over the 10 trained networks.]
