CSCE 496/896 Lecture 2: Basic Artificial Neural Networks. Stephen Scott. Introduction. Supervised Learning. Basic Units.


Introduction
(Adapted from Vinod Variyam, Ethem Alpaydin, Tom Mitchell, Ian Goodfellow, and Aurélien Géron.) Supervised learning is the most fundamental, classic form of machine learning. The "supervised" part comes from the labels provided for the examples (instances). There are many ways to do supervised learning; we'll focus on artificial neural networks, which are the basis for deep learning. Contact: sscott@cse.unl.edu

ANN Properties
Consider humans: the total number of neurons is about 10^10; neuron switching time is about 10^-3 second (vs. about 10^-10 for hardware); there are on the order of 10^4-10^5 connections per neuron; scene recognition time is about 0.1 second. One hundred inference steps doesn't seem like enough, which suggests massively parallel computation. Properties of artificial neural nets (ANNs): many neuron-like switching units; many weighted interconnections among units; a highly parallel, distributed process; emphasis on tuning weights automatically. There are strong differences between ANNs for machine learning and ANNs for biological modeling.

When to Consider ANNs
Input is high-dimensional discrete- or real-valued (e.g., raw sensor input); output is discrete- or real-valued; output is a vector of values; possibly noisy data; the form of the target function is unknown; human readability of the result is unimportant; long training times are acceptable.

History of ANNs
The Beginning: linear units and the Perceptron algorithm (1940s). Spoiler alert: progress stagnated because of the inability to handle data that are not linearly separable. Researchers were aware of the usefulness of multi-layer networks, but could not train them. The Comeback: training multi-layer networks with backpropagation (1980s). Many applications followed, but in the 1990s ANNs were largely replaced by large-margin approaches such as support vector machines and boosting.

History of ANNs (cont'd)
The Resurgence: deep architectures (2000s). Better hardware and software support allow for deep (more than 5-8 layers) networks. They still use backpropagation, but larger datasets, algorithmic improvements (new loss and activation functions), and deeper networks improve performance considerably. Very impressive applications, e.g., captioning images. The Inevitable: (TBD). Thank a gamer today.

Outline
Supervised learning; basic ANN units (linear unit, linear threshold units, Perceptron training rule); non-linearly-separable problems and multilayer networks; backpropagation; output and activation functions; putting everything together.

Learning a Class from Examples
Let C be the target function (or target concept) to be learned. Think of C as a function that takes an example (or instance) as input and outputs a label. Goal: given a training set X = {(x_t, y_t)}_{t=1}^N, where y_t = C(x_t), output a hypothesis h in H that approximates C in its classifications of new instances. Each instance x is represented as a vector of attributes or features. E.g., let each x = (x_1, x_2) be a vector describing attributes of a car, with x_1 = price and x_2 = engine power. In this example the label is binary (positive/negative, yes/no, 1/0, +/-) indicating whether instance x is a family car.
[Figure: training instances plotted with x_1 (price) on the horizontal axis and x_2 (engine power) on the vertical axis.]

Thinking about C
We can think of the target concept C as a function. In the example, C is an axis-parallel box, equivalent to upper and lower bounds on each attribute. We might decide to set H (the set of candidate hypotheses) to the same family that C comes from, but this is not required. We can also think of the target concept C as a set of positive instances; in the example, C is the continuous set of all positive points in the plane. Use whichever view is convenient at the time.
[Figure: the box C in the (price, engine power) plane, bounded between two prices and two engine-power values.]
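
One simple learner for this family-car example is to take the tightest axis-parallel box that encloses the positive training instances. The sketch below is my own illustration of that idea, not code from the slides; the feature values are made up.

```python
# Illustrative sketch (not from the slides): learn the tightest axis-parallel
# box enclosing the positive instances, then classify by box membership.
import numpy as np

def fit_box(X, y):
    """X: (N, d) instances, y: (N,) 0/1 labels.
    Returns per-attribute lower and upper bounds from the positive instances."""
    pos = X[y == 1]
    return pos.min(axis=0), pos.max(axis=0)

def predict_box(X, lower, upper):
    """Label 1 iff every attribute lies within its learned bounds."""
    return np.all((X >= lower) & (X <= upper), axis=1).astype(int)

if __name__ == "__main__":
    # Toy data: x1 = price, x2 = engine power (values are invented).
    X = np.array([[12.0, 60.0], [15.0, 80.0], [14.0, 70.0],   # family cars
                  [5.0, 40.0], [30.0, 200.0]])                # not family cars
    y = np.array([1, 1, 1, 0, 0])
    lower, upper = fit_box(X, y)
    print("box bounds:", lower, upper)
    print("predictions:", predict_box(X, lower, upper))
```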

Hypotheses and Error
A learning algorithm uses the training set X and finds a hypothesis h in H that approximates C. In the example, H can be the set of all axis-parallel boxes. If C is guaranteed to come from H, then we know that a perfect hypothesis exists; in this case we choose h from the version space, the subset of H consistent with X. What learning algorithm can you think of to learn C?
We can think of two types of error (or loss) of h. Empirical error is the fraction of X that h gets wrong. Generalization error is the probability that a new, randomly selected instance is misclassified by h; it depends on the probability distribution over instances. Errors can be further classified as false positives and false negatives.

Linear Unit (Regression)
ŷ = f(x; w, b) = xᵀw + b = w_1 x_1 + ... + w_n x_n + b. Equivalently, set x_0 = 1 and w_0 = b and write ŷ = Σ_{i=0}^n w_i x_i.

Linear Threshold Unit (Binary Classification)
ŷ = o(x; w, b) = +1 if f(x; w, b) > 0, and -1 otherwise (sometimes 0 is used instead of -1). Each weight vector w gives a different h. If we set w_0 = b and x_0 = 1, we can fold the bias into the weight vector as above. This unit forms the basis for many other activation functions.

Linear Threshold Unit: Decision Surface
[Figure: (a) points in the plane separable by a single line; (b) points, as in XOR, that no single line separates.] A linear threshold unit represents some useful functions: what parameters (w, b) represent g(x_1, x_2; w, b) = AND(x_1, x_2)? (One choice is checked in the sketch below.) But some functions are not representable, i.e., those that are not linearly separable. Therefore we'll want networks of units.

Linear Threshold Unit: Non-Numeric Inputs
What if attributes are not numeric? Encode them numerically. E.g., if an attribute Color has values Red, Green, and Blue, it can be encoded with the one-hot vectors [1, 0, 0], [0, 1, 0], and [0, 0, 1]. This is generally better than using a single integer (e.g., Red is 1, Green is 2, and Blue is 3), since there is no implicit ordering of the values of the attribute.
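
As a concrete illustration of the AND question above, the sketch below implements a linear threshold unit and checks one choice of (w, b) that realizes AND(x_1, x_2). The particular weights are my own choice, not necessarily the answer the slides intend.

```python
# Illustrative sketch: a linear threshold unit o(x; w, b) = +1 if w.x + b > 0, else -1.
# The weights chosen for AND below are one valid choice, not the slides' official answer.
import numpy as np

def linear_unit(x, w, b):
    return np.dot(w, x) + b            # f(x; w, b) = w^T x + b

def threshold_unit(x, w, b):
    return 1 if linear_unit(x, w, b) > 0 else -1

if __name__ == "__main__":
    w, b = np.array([1.0, 1.0]), -1.5  # fires only when both inputs are 1
    for x1 in (0, 1):
        for x2 in (0, 1):
            print((x1, x2), "->", threshold_unit(np.array([x1, x2]), w, b))
```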

Perceptron Training Rule (Algorithm)
w_j ← w_j + η (y_t − ŷ_t) x_{t,j}, where x_{t,j} is the jth attribute of training instance t, y_t is the label of training instance t, ŷ_t is the Perceptron's output on training instance t, and η > 0 is a small constant (e.g., 0.1) called the learning rate. I.e., if (y_t − ŷ_t) > 0 then increase w_j with respect to x_{t,j}, else decrease it. One can prove that the rule will converge if the training data are linearly separable and η is sufficiently small.

Where Does the Training Rule Come From? Linear Regression
Recall the initial linear unit (no threshold). If there is only one feature, this is a regression problem: find a straight line that best fits the training data. For simplicity, let the line pass through the origin, so its slope is specified by the single parameter w_1. [Table: the slide lists a handful of (x, y) training pairs.]

Where Does the Training Rule Come From? Convex Quadratic Optimization
The goal is to find a parameter w_1 that minimizes the square loss
J(w_1) = Σ_{t=1}^m (ŷ_t − y_t)² = Σ_{t=1}^m (w_1 x_t − y_t)².
Expanding this sum for the slide's training pairs gives a convex quadratic in w_1, which has a unique minimum. What's special about that point?

Where Does the Training Rule Come From? (cont'd)
Recall that a function has a (local) minimum or maximum where its derivative is 0. Setting dJ/dw_1 = 0 and solving for w_1 yields the minimizer directly. Alternatively, initialize w_1 and then repeatedly update w_1 ← w_1 − η (dJ/dw_1). This motivates the use of gradient descent to solve in high-dimensional spaces with nonconvex functions:
w ← w − η ∇J(w),
where η is the learning rate that moderates updates and ∇J(w) is the vector of partial derivatives (∂J/∂w_1, ..., ∂J/∂w_n). We could also update based on one instance at a time: w_1 ← w_1 − η (w_1 x_t − y_t) x_t.
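
Below is a minimal sketch of the Perceptron training rule as stated above, assuming a {+1, -1} threshold output and a small synthetic, linearly separable data set. The data, learning rate, and epoch cap are illustrative choices, not the slides'.

```python
# Illustrative sketch of the Perceptron training rule:
#   w_j <- w_j + eta * (y_t - yhat_t) * x_{t,j}
# applied repeatedly until the training set is classified correctly.
import numpy as np

def perceptron_train(X, y, eta=0.1, epochs=100):
    X = np.hstack([np.ones((len(X), 1)), X])   # fold bias b into w_0 with x_0 = 1
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        errors = 0
        for x_t, y_t in zip(X, y):
            yhat_t = 1.0 if np.dot(w, x_t) > 0 else -1.0
            if yhat_t != y_t:
                w += eta * (y_t - yhat_t) * x_t
                errors += 1
        if errors == 0:                         # converged on separable data
            break
    return w

if __name__ == "__main__":
    X = np.array([[2.0, 3.0], [1.0, 2.0], [-1.0, -2.0], [-2.0, -1.0]])
    y = np.array([1.0, 1.0, -1.0, -1.0])
    print("learned weights (bias first):", perceptron_train(X, y))
```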

Where Does the Training Rule Come From? (cont'd)
[Figure: the error surface J(w) plotted over weight space, with gradient descent following the slope downhill.] In general: define a loss function J, compute the gradient of J with respect to J's parameters, then apply gradient descent.

Handling the XOR Problem
Using linear threshold units. Consider the four points A: (0, 0), B: (0, 1), C: (1, 0), and D: (1, 1), where B and C are positive and A and D are negative; no single line separates the classes. Represent the positive region with the intersection of two linear separators:
g_1(x) = x_1 + x_2 − 1/2 and g_2(x) = x_1 + x_2 − 3/2.
The positive class is {x in R²: g_1(x) > 0 AND g_2(x) < 0}, and the negative class is {x in R²: g_1(x), g_2(x) < 0 OR g_1(x), g_2(x) > 0}.

Handling the XOR Problem (cont'd)
Let z_i = 0 if g_i(x) < 0, and 1 otherwise.

Class          (x_1, x_2)   g_1(x)   z_1   g_2(x)   z_2
B (positive)   (0, 1)        1/2      1     -1/2     0
C (positive)   (1, 0)        1/2      1     -1/2     0
A (negative)   (0, 0)       -1/2      0     -3/2     0
D (negative)   (1, 1)        3/2      1      1/2     1

Now feed (z_1, z_2) into g(z) = z_1 − z_2 − 1/2, which is positive exactly for B and C. In other words, we remapped all vectors x to z such that the classes are linearly separable in the new vector space. [Figure: the resulting network, with input layer (x_1, x_2), a hidden layer computing z_1 and z_2 (bias weights w_30 = −1/2 and w_40 = −3/2), and an output unit computing g(z) (bias weight w_50 = −1/2).] This is a two-layer perceptron, or two-layer feedforward neural network. Many nonlinear activation functions can be used in the hidden layer. (The construction is checked in the sketch below.)

Handling the General Case
By adding up to two hidden layers of linear threshold units, we can represent any union of intersections of halfspaces: the first hidden layer defines halfspaces, the second hidden layer takes intersections (AND), and the output layer takes the union (OR).

Training Multiple Layers
In a multi-layer network, we have to tune parameters in all layers. In order to train, we need to know the gradient of the loss function with respect to each parameter. The backpropagation algorithm first feeds the network's inputs forward to its outputs, then propagates error back via repeated application of the chain rule for derivatives. This can be decomposed in a simple, modular way.
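
The XOR construction can be checked directly. The sketch below codes g_1, g_2, and g(z) exactly as written above and verifies the four points; it is an illustration, not course code.

```python
# Illustrative check of the two-layer threshold construction for XOR,
# using the separators g1, g2, and g(z) given above.
def step(v):
    """z_i = 0 if g_i(x) < 0, else 1."""
    return 0 if v < 0 else 1

def xor_net(x1, x2):
    g1 = x1 + x2 - 0.5          # first hidden separator
    g2 = x1 + x2 - 1.5          # second hidden separator
    z1, z2 = step(g1), step(g2) # remap x -> z (hidden layer)
    g = z1 - z2 - 0.5           # output separator in the z space
    return 1 if g > 0 else 0

if __name__ == "__main__":
    for x1 in (0, 1):
        for x2 in (0, 1):
            print((x1, x2), "->", xor_net(x1, x2))   # prints the XOR truth table
```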

Computation Graphs
Given a complicated function f(·), we want to know its partial derivatives with respect to its parameters. We will represent f in a modular fashion via a computation graph. E.g., let f(w, x) = w_0 x_0 + w_1 x_1, evaluated at particular numeric values of w_0, w_1, x_0, and x_1.

So what? We can now decompose the gradient calculation into basic operations. If g(y, z) = y + z, then ∂g/∂y = 1 and ∂g/∂z = 1; via the chain rule, ∂f/∂a = (∂f/∂g)(∂g/∂a). If h(y, z) = y z, then ∂h/∂y = z and ∂h/∂z = y; via the chain rule, ∂f/∂x_0 = (∂f/∂h)(∂h/∂x_0) = (∂f/∂h) w_0. Applying these rules to f(w, x) = w_0 x_0 + w_1 x_1 gives ∇_w f(w) = x: each weight's partial derivative is the input it multiplies.

How does this help us with multi-layer ANNs? First, let's replace the threshold function with a continuous approximation. σ(net) is the logistic function (a type of sigmoid function),
σ(net) = 1 / (1 + e^(−net)),
which squashes net into the [0, 1] range. Here net = Σ_{i=0}^n w_i x_i = f(x; w, b), and the unit's output is o = σ(net).
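
A minimal sketch of the modular idea on this tiny graph: each node supplies only its local derivatives (1 for each input of a sum, the other operand for a product), and the chain rule multiplies them along the path back to each parameter. The numeric values are placeholders, not the slides' numbers.

```python
# Illustrative sketch of backpropagation on the tiny graph f(w, x) = w0*x0 + w1*x1.
# Each node supplies only its local derivative; the chain rule does the rest.
def forward_backward(w, x):
    # Forward pass: two product nodes, then a sum node.
    a = w[0] * x[0]
    b = w[1] * x[1]
    f = a + b
    # Backward pass, starting from df/df = 1 at the output.
    df_da = 1.0                 # sum node: local derivative 1 w.r.t. each input
    df_db = 1.0
    df_dw0 = df_da * x[0]       # product node: local derivative w.r.t. w0 is x0
    df_dw1 = df_db * x[1]
    return f, [df_dw0, df_dw1]

if __name__ == "__main__":
    w, x = [3.0, -2.0], [1.0, 4.0]             # placeholder values
    f, grad_w = forward_backward(w, x)
    print("f =", f, " grad wrt w =", grad_w)   # gradient equals x, as noted above
```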

The Computation Graph for a Sigmoid Unit
Let f(w, x) = 1 / (1 + exp(−(w_0 x_0 + w_1 x_1))). Walking backward through the graph one node at a time: the reciprocal node 1/h contributes a local derivative of −1/h², the "add 1" node passes the gradient through unchanged, the exp node multiplies the incoming gradient by exp(d) (where d is its input), the negation node flips the sign, and so on back to the weights. Note that the whole chain collapses to the identity dσ(c)/dc = σ(c)(1 − σ(c)), so
∇_w f(w) = σ(net)(1 − σ(net)) x, where net = w_0 x_0 + w_1 x_1;
each weight's partial derivative is σ(net)(1 − σ(net)) times the input it multiplies.
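
A short sketch that checks the collapsed result ∇_w f = σ(net)(1 − σ(net)) x against a finite-difference gradient; the weight and input values here are placeholders of my own choosing.

```python
# Illustrative sketch: verify that for f(w, x) = sigmoid(w . x),
# the gradient w.r.t. w equals sigmoid(net) * (1 - sigmoid(net)) * x.
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def f(w, x):
    return sigmoid(np.dot(w, x))

def analytic_grad(w, x):
    s = f(w, x)
    return s * (1.0 - s) * x

def numeric_grad(w, x, eps=1e-6):
    g = np.zeros_like(w)
    for i in range(len(w)):
        w_plus, w_minus = w.copy(), w.copy()
        w_plus[i] += eps
        w_minus[i] -= eps
        g[i] = (f(w_plus, x) - f(w_minus, x)) / (2 * eps)   # central difference
    return g

if __name__ == "__main__":
    w = np.array([3.0, -2.0])        # placeholder weights
    x = np.array([1.0, 4.0])         # placeholder inputs
    print("analytic:", analytic_grad(w, x))
    print("numeric: ", numeric_grad(w, x))
```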

Weight Update
If ŷ_t = σ(w · x_t) is the prediction on training instance x_t with label y_t, let the loss be J(w) = (1/2)(ŷ_t − y_t)². Then ∂J/∂ŷ_t = ŷ_t − y_t, and by the chain rule
∂J/∂w = (ŷ_t − y_t) ŷ_t (1 − ŷ_t) x_t,
so the update rule is
w ← w − η ŷ_t (1 − ŷ_t)(ŷ_t − y_t) x_t.

Multilayer Networks
That update formula works for output units, where we know the target labels y_t (in general a vector, to encode multi-class labels). But for a hidden unit, we don't know its target output!

Output vs. Hidden Weights
Write w_{ji} for the weight from node i to node j, and let the loss on instance (x_t, y_t) be J(w) = (1/2) Σ_i (ŷ_i − y_i)². Weights into the output units (say units 5 and 6) have updates done as before; e.g.,
w_{53} ← w_{53} − η ŷ_5 (1 − ŷ_5)(ŷ_5 − y_5) x_3,
where x_3 is the signal sent from node 3 to node 5. For a weight into a hidden node (say node 4), the multivariate chain rule says we sum over all paths from J to that weight:
∂J/∂w_{4i} = ( [ŷ_5 (1 − ŷ_5)(ŷ_5 − y_5)] w_{54} σ_4(a)(1 − σ_4(a)) + [ŷ_6 (1 − ŷ_6)(ŷ_6 − y_6)] w_{64} σ_4(a)(1 − σ_4(a)) ) x_i,
where a is node 4's net input.

Backpropagating Error Terms
The analytical solution is messy, but we don't need the closed-form formula; we only need to compute the gradient. The modular form of a computation graph means that once we have the partial derivatives at one layer, we can plug those values in and compute gradients for earlier layers. It doesn't matter whether a layer is the output layer or farther back; we can run indefinitely backward, propagating error from outputs to inputs. Define the error term of hidden node h as
δ_h = ŷ_h (1 − ŷ_h) Σ_{k in down(h)} w_{k,h} δ_k,
where ŷ_k is the output of node k and down(h) is the set of nodes immediately downstream of h. Note that this formula is specific to sigmoid units.

We are propagating error terms back from the output layer toward the input layer, scaling by the weights; this scaling characterizes how much of the error each hidden unit is responsible for. Process: (1) submit inputs x_t; (2) feed the signal forward to the outputs; (3) compute the network loss; (4) propagate error back to compute the loss gradient with respect to each weight; (5) update the weights.
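
Here is a minimal sketch of the single sigmoid-unit update w ← w − η ŷ(1 − ŷ)(ŷ − y)x derived above, run for a few SGD steps on one made-up instance; the learning rate and data are illustrative.

```python
# Illustrative sketch of the sigmoid-unit update rule:
#   w <- w - eta * yhat * (1 - yhat) * (yhat - y) * x
# for the square loss J = 0.5 * (yhat - y)^2.
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def sgd_step(w, x, y, eta=0.5):
    yhat = sigmoid(np.dot(w, x))
    grad = yhat * (1.0 - yhat) * (yhat - y) * x   # dJ/dw
    return w - eta * grad

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.1, size=3)
    x, y = np.array([1.0, 0.5, -1.5]), 1.0        # made-up instance (x_0 = 1 is the bias input)
    for step in range(5):
        w = sgd_step(w, x, y)
        print("step", step, "loss:", 0.5 * (sigmoid(w @ x) - y) ** 2)   # loss shrinks
```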

Backpropagation Algorithm (Sigmoid Activation and Square Loss)
Initialize the weights. Until the termination condition is satisfied, for each training example (x_t, y_t):
1. Input x_t to the network and compute the outputs ŷ.
2. For each output unit k: δ_k ← ŷ_k (1 − ŷ_k)(y_k − ŷ_k).
3. For each hidden unit h: δ_h ← ŷ_h (1 − ŷ_h) Σ_{k in down(h)} w_{k,h} δ_k.
4. Update each network weight: w_{j,i} ← w_{j,i} + Δw_{j,i}, where Δw_{j,i} = η δ_j x_{j,i} and x_{j,i} is the signal sent from node i to node j.
(A runnable sketch of this loop appears below.)

Backpropagation Algorithm: Notes
The formula for δ assumes a sigmoid activation function; it is straightforward to change to a new activation function via the computation graph. Initialization used to be via random numbers near zero, e.g., from N(0, 1); more refined methods are available (later). The algorithm as presented updates weights after each instance. We can also accumulate Δw_{j,i} across multiple training instances in the same mini-batch and do a single update per mini-batch, giving stochastic gradient descent (SGD). Extreme case: the entire training set is a single batch (batch gradient descent).

Output Units
Given hidden-layer outputs h:
Linear unit: ŷ = wᵀh + b. Minimizing square loss with this output unit maximizes log likelihood when the labels come from a normal distribution, i.e., we find the set of parameters that is most likely to generate the labels of the training data. Works well with GD training.
Sigmoid: ŷ = σ(wᵀh + b). Approximates the non-differentiable threshold function; more common in older, shallower networks; can be used to predict probabilities.
Softmax unit: start with z = Wᵀh + b and predict the probability of label i to be softmax(z)_i = exp(z_i) / Σ_j exp(z_j). A continuous, differentiable approximation to argmax.

Hidden Units
Rectified linear unit (ReLU): max{0, Wᵀx + b}. A good default choice; in general, GD works well when functions are nearly linear. Variations: leaky ReLU and exponential ReLU (ELU) replace the z < 0 side with 0.01z and α(exp(z) − 1), respectively.
Logistic sigmoid (done already) and tanh: nice approximations to the threshold, but they don't train well in deep networks since they saturate.

Putting Everything Together: Hidden Layers
How many layers should we use? Deep networks build potentially useful representations of the data via composition of simple functions. The performance improvement is not simply from having a more complex network (number of parameters). Increasing the number of layers still increases the chance of overfitting, so a deep network needs a significant amount of training data; training time increases as well. [Figures: accuracy vs. depth and accuracy vs. complexity.]

Putting Everything Together: Universal Approximation Theorem
Any boolean function can be represented with two layers. Any bounded, continuous function can be represented with arbitrarily small error with two layers. Any function can be represented with arbitrarily small error with three layers. But this is only an EXISTENCE PROOF: we could need exponentially many nodes in a layer, and we may not be able to find the right weights. It also highlights the risk of overfitting and the need for regularization.
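
Below is a compact sketch of the algorithm as stated (sigmoid activations, square loss, per-instance updates) with a single hidden layer, trained on XOR. The network size, learning rate, epoch count, and seed are my own choices; convergence on XOR depends on the random initialization.

```python
# Illustrative sketch of the backpropagation algorithm above:
# one hidden layer, sigmoid activations, square loss, per-instance (SGD) updates.
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def train(X, Y, n_hidden=4, eta=0.5, epochs=10000, seed=0):
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.5, size=(n_hidden, X.shape[1] + 1))  # hidden weights (+bias)
    W2 = rng.normal(scale=0.5, size=(Y.shape[1], n_hidden + 1))  # output weights (+bias)
    for _ in range(epochs):
        for x, y in zip(X, Y):
            # 1. Feed forward.
            x1 = np.append(1.0, x)                 # bias input x_0 = 1
            h = sigmoid(W1 @ x1)
            h1 = np.append(1.0, h)
            yhat = sigmoid(W2 @ h1)
            # 2. Output-unit error terms: delta_k = yhat_k (1 - yhat_k)(y_k - yhat_k).
            delta_out = yhat * (1 - yhat) * (y - yhat)
            # 3. Hidden-unit error terms: delta_h = h (1 - h) * sum_k w_kh delta_k.
            delta_hid = h * (1 - h) * (W2[:, 1:].T @ delta_out)
            # 4. Update each weight: w_ji <- w_ji + eta * delta_j * x_ji.
            W2 += eta * np.outer(delta_out, h1)
            W1 += eta * np.outer(delta_hid, x1)
    return W1, W2

if __name__ == "__main__":
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    Y = np.array([[0], [1], [1], [0]], dtype=float)               # XOR labels
    W1, W2 = train(X, Y)
    for x in X:
        h1 = np.append(1.0, sigmoid(W1 @ np.append(1.0, x)))
        print(x, "->", sigmoid(W2 @ h1)[0])
```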

Putting Everything Together: Initialization
Previously, we initialized the weights to random numbers near 0 (from N(0, 1)); the sigmoid is nearly linear there, so GD is expected to work better. But in deep networks this increases the variance per layer, resulting in vanishing gradients and poor optimization. Glorot initialization controls the variance per layer: if a layer has n_in inputs and n_out outputs, initialize via a uniform distribution over [−r, r] or via N(0, σ²), with
r = a √(6 / (n_in + n_out)) and σ = a √(2 / (n_in + n_out)),
where the constant a depends on the activation: a = 1 for the logistic function, 4 for tanh, and √2 for ReLU.

Putting Everything Together: Optimizers
Variations on gradient descent optimization: momentum optimization, AdaGrad, RMSProp, Adam.

Momentum Optimization
Use a momentum term to keep updates moving in the same direction as previous trials. Replace the original GD update w ← w − η ∇J(w) with
w ← w − m, where m ← β m + η ∇J(w).
Using sigmoid activation and square loss, this amounts to replacing Δw_{ji} = η δ_j x_{ji} with Δw_{ji} = η δ_j x_{ji} + β Δw_{ji}. Momentum can help the search move through small local minima to better ones and move along flat surfaces.

AdaGrad
Standard GD can descend the steepest slope too quickly and then slowly crawl through a valley. AdaGrad adapts the learning rate by scaling it down in the steepest dimensions:
w ← w − η ∇J(w) ⊘ √(s + ε), where s ← s + ∇J(w) ⊗ ∇J(w),
⊗ and ⊘ are element-wise multiplication and division, and ε = 10^(−10) prevents division by 0. Here s accumulates the squares of the gradient, so the learning rate for each dimension is scaled down accordingly.

RMSProp
AdaGrad tends to stop too early for neural networks due to over-aggressive downscaling. RMSProp exponentially decays old gradients to address this:
s ← β s + (1 − β) ∇J(w) ⊗ ∇J(w), then w ← w − η ∇J(w) ⊘ √(s + ε).

Adam
Adam (adaptive moment estimation) combines momentum optimization and RMSProp:
1. m ← β_1 m + (1 − β_1) ∇J(w)
2. s ← β_2 s + (1 − β_2) ∇J(w) ⊗ ∇J(w)
3. m̂ ← m / (1 − β_1^T)
4. ŝ ← s / (1 − β_2^T)
5. w ← w − η m̂ ⊘ √(ŝ + ε)
The iteration counter T is used in steps 3 and 4 to prevent m and s from vanishing early in training. Common settings are β_1 = 0.9 and β_2 = 0.999, with ε a small constant.
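
The sketch below writes the update rules above as simple NumPy steppers and runs Adam on a placeholder quadratic loss. Hyperparameter defaults follow the slides where given; the ε used for Adam and the demo loss are my own assumptions.

```python
# Illustrative sketch of the optimizer updates described above, applied to a
# placeholder quadratic J(w) = 0.5 * ||w||^2, so that grad J(w) = w.
import numpy as np

def momentum_step(w, m, grad, eta=0.1, beta=0.9):
    m = beta * m + eta * grad
    return w - m, m

def adagrad_step(w, s, grad, eta=0.1, eps=1e-10):
    s = s + grad * grad                       # accumulate squared gradients
    return w - eta * grad / np.sqrt(s + eps), s

def rmsprop_step(w, s, grad, eta=0.1, beta=0.9, eps=1e-10):
    s = beta * s + (1 - beta) * grad * grad   # exponentially decayed accumulation
    return w - eta * grad / np.sqrt(s + eps), s

def adam_step(w, m, s, t, grad, eta=0.1, beta1=0.9, beta2=0.999, eps=1e-8):
    # eps=1e-8 is a common default; the slide leaves epsilon unspecified.
    m = beta1 * m + (1 - beta1) * grad
    s = beta2 * s + (1 - beta2) * grad * grad
    m_hat = m / (1 - beta1 ** t)              # bias correction uses iteration counter t
    s_hat = s / (1 - beta2 ** t)
    return w - eta * m_hat / np.sqrt(s_hat + eps), m, s

if __name__ == "__main__":
    w = np.array([5.0, -3.0])
    m, s = np.zeros_like(w), np.zeros_like(w)
    for t in range(1, 51):
        grad = w                              # gradient of 0.5 * ||w||^2
        w, m, s = adam_step(w, m, s, t, grad)
    print("after 50 Adam steps:", w)          # should be close to the minimum at 0
```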
