Online Learning: Recap, RWM, and Applying RWM to Bandits [ACFS02]
Avrim Blum


Online Learning: Recap

No-regret algorithms for repeated decisions:
- The algorithm has N options; the world chooses a cost vector. Can view this as a matrix (maybe an infinite number of columns): rows are the algorithm's options, columns are the world's choices.
- At each time step, the algorithm picks a row; the world (life, fate) picks a column. The algorithm pays the cost (or gets the benefit) of the action chosen, and gets the whole column as feedback (or just its own cost/benefit, in the bandit model).
- Goal: do nearly as well as the best fixed row in hindsight.

Your guide: Avrim Blum, Carnegie Mellon University [Machine Learning Summer School 2012]

RWM (Randomized Weighted Majority)

Each option i keeps a weight that is a product of multiplicative penalties, w_i = (1 - ε c_i^1)(1 - ε c_i^2)···, with costs scaled to lie in [0,1].

Guarantee: E[cost] ≤ OPT + 2(OPT · log n)^{1/2}. Since OPT ≤ T, this is at most OPT + 2(T log n)^{1/2}. So regret per time step ≤ 2(T log n)^{1/2}/T → 0.

[ACFS02]: applying RWM to bandits

What if we only get our own cost/benefit as feedback?
- [ACFS02] use RWM as a subroutine to get an algorithm with cumulative regret O((TN log N)^{1/2}) [average regret O(((N log N)/T)^{1/2})].
- We will do a somewhat weaker version of their analysis (same algorithm, but not as tight a bound).
- For fun, we'll talk about it in the context of online pricing.

Online pricing

Say you are selling lemonade (or a cool new software tool, or bottles of water at the world cup).
- View each possible price as a different row/expert.
- For t = 1, 2, ..., T: the seller sets a price p_t; a buyer arrives with valuation v_t. If v_t ≥ p_t, the buyer purchases and pays p_t; else he doesn't. Repeat.
- Assume all valuations ≤ h.
- Goal: do nearly as well as the best fixed price in hindsight.

Multi-armed bandit problem

Exponential Weights for Exploration and Exploitation (Exp3) [Auer, Cesa-Bianchi, Freund, Schapire]:
- RWM maintains a distribution p^t over the n experts. Exp3 plays expert i ~ q^t, where q^t = (1 - γ)p^t + γ · unif.
- It observes only the gain g_i^t of the chosen expert, and feeds RWM the estimated gain vector ĝ^t = (0, ..., 0, g_i^t/q_i^t, 0, ..., 0).

Analysis (n = #experts, gains in [0, h]):
1. RWM believes its gain is p^t · ĝ^t = p_i^t (g_i^t / q_i^t) =: g^RWM_t.
2. RWM's guarantee: Σ_t g^RWM_t ≥ (max_j Σ_t ĝ_j^t)/(1+ε) - O(ε^{-1} (nh/γ) log n), since each entry of ĝ^t is at most nh/γ.
3. The actual gain is g_i^t = g^RWM_t (q_i^t / p_i^t) ≥ g^RWM_t (1 - γ).
4. E[max_j Σ_t ĝ_j^t] ≥ OPT. Because E[ĝ_j^t] = (1 - q_j^t)·0 + q_j^t (g_j^t/q_j^t) = g_j^t, so E[max_j Σ_t ĝ_j^t] ≥ max_j E[Σ_t ĝ_j^t] = OPT.
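The Exp3 scheme above can be sketched in a few lines of Python. This is a minimal illustration, not the tuned algorithm of [ACFS02]: the function name `exp3` and the `gain_fn` callback are inventions of this sketch, gains are assumed to lie in [0,1] (i.e. h = 1), and the multiplicative update w_i ← w_i (1+ε)^{ĝ_i} is one common RWM variant.

```python
import random

def exp3(n_arms, T, gain_fn, gamma=0.1, eps=0.1, rng=None):
    """Sketch of Exp3: RWM run on importance-weighted gain estimates,
    with gamma-uniform exploration mixed in. Gains assumed in [0, 1].
    gain_fn(t, i) returns the gain of arm i at time t; only the played
    arm's gain is ever observed by the algorithm."""
    rng = rng or random.Random(0)
    w = [1.0] * n_arms
    total_gain = 0.0
    for t in range(T):
        z = sum(w)
        p = [wi / z for wi in w]                             # RWM's distribution
        q = [(1 - gamma) * pi + gamma / n_arms for pi in p]  # exploration mix
        i = rng.choices(range(n_arms), weights=q)[0]         # play one arm
        g = gain_fn(t, i)              # observed gain of the played arm only
        total_gain += g
        ghat = g / q[i]                # unbiased estimate: E[ghat] = g_i
        w[i] *= (1 + eps) ** ghat      # RWM update on the estimated gain
        m = max(w)                     # renormalize to avoid overflow
        w = [wi / m for wi in w]
    return total_gain
```

Run on a toy bandit where arm 0 always pays 1 and the others pay 0, Exp3's total gain approaches (1 - γ)T, as the analysis predicts.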

Multi-armed bandit problem (continued)

Conclusion (setting γ = ε): E[Exp3] ≥ OPT/(1+ε)² - O(ε^{-2} nh log n).
- Balancing the two terms would give a regret bound of O((OPT · nh log n)^{2/3}) because of the ε^{-2}.
- But this can be reduced to ε^{-1}, giving O((OPT · nh log n)^{1/2}), with more care in the analysis.

A natural generalization

(Going back to the full-info setting, thinking about paths...) A natural generalization of our regret goal: what if we also want that on rainy days, we do nearly as well as the best route for rainy days? And on Mondays, nearly as well as the best route for Mondays?

More generally, we have N rules ("on Monday, use path P"). Goal: simultaneously, for each rule i, guarantee to do nearly as well as i on the time steps in which i fires. For all i, we want E[cost_i(alg)] ≤ (1+ε)cost_i(i) + O(ε^{-1} log N), where cost_i(X) = cost of X on the time steps where rule i fires. Can we get this?

This generalization is especially natural in machine learning for combining multiple if-then rules. E.g., document classification. Rule: if <word X> appears, then predict <Y>. E.g., if "football" appears, then classify as sports. So, if 90% of documents containing "football" are about sports, we should have error ≤ 11% on them.

Specialists, or the sleeping experts problem

Assume we have N rules, explicitly given. For all i, we want E[cost_i(alg)] ≤ (1+ε)cost_i(i) + O(ε^{-1} log N). (cost_i(X) = cost of X on the time steps where rule i fires.)

A simple algorithm and analysis (all on one slide):
- Start with all rules at weight 1.
- At each time step, of the rules i that fire, select one with probability p_i = w_i / Σ_j w_j (sum over the firing rules).
- Update weights: if a rule didn't fire, leave its weight alone. If it did fire, raise or lower it depending on its performance compared to the weighted average: set r_i = [Σ_j p_j cost(j)]/(1+ε) - cost(i) and w_i ← w_i (1+ε)^{r_i}.
- So, if rule i does exactly as well as the weighted average, its weight drops a little; its weight increases only if it beats the weighted average by more than a (1+ε) factor. This ensures the sum of weights doesn't increase.
- Final w_i = (1+ε)^{E[cost_i(alg)]/(1+ε) - cost_i(i)}. Since the total weight never exceeds N, the exponent is ≤ ε^{-1} log N. So E[cost_i(alg)] ≤ (1+ε)cost_i(i) + O(ε^{-1} log N).

Lots of uses

- Can combine multiple if-then rules.
- Can combine multiple learning algorithms. Back to driving: say we are given N conditions to pay attention to (is it raining?, is it a Monday?, ...). Create N rules: "if the day satisfies condition i, then use the output of Alg_i", where Alg_i is an instantiation of an experts algorithm run on just the days satisfying that condition. Simultaneously, for each condition i, we do nearly as well as Alg_i, which itself does nearly as well as the best path for condition i.

Adapting to change

What if we want to adapt to change, i.e., do nearly as well as the best recent expert? For each expert, instantiate a copy who wakes up on day t, for each 0 ≤ t ≤ T-1. Our cost over the previous t days is then at most (1+ε)(best expert in the last t days) + O(ε^{-1} log(nT)). (Not the best possible bound, since there is an extra log(T), but not bad.)
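The one-slide specialists algorithm above can be sketched directly. This is a minimal illustration under assumed input conventions (the names `sleeping_experts`, `rules_fire`, and `costs` are invented here; costs are assumed to lie in [0,1]); it returns the algorithm's expected cost, using the slide's update r_i = [Σ_j p_j cost(j)]/(1+ε) - cost(i).

```python
def sleeping_experts(rules_fire, costs, eps=0.1):
    """Sketch of the specialists / sleeping-experts algorithm.
    rules_fire[t]  : set of rule indices awake at time t
    costs[t][i]    : cost in [0, 1] of following rule i at time t
    Returns the total expected cost of the algorithm."""
    n = max(max(s) for s in rules_fire) + 1
    w = [1.0] * n
    total = 0.0
    for fire, cost in zip(rules_fire, costs):
        awake = sorted(fire)
        z = sum(w[i] for i in awake)
        p = {i: w[i] / z for i in awake}           # play awake rule i w.p. w_i / sum
        avg = sum(p[i] * cost[i] for i in awake)   # weighted average cost
        total += avg                               # expected cost this step
        for i in awake:                            # asleep rules keep their weight
            r = avg / (1 + eps) - cost[i]          # beat the average => weight rises
            w[i] *= (1 + eps) ** r
    return total
```

On a toy instance where one always-awake rule has cost 0 and another has cost 1, the weight ratio shifts geometrically toward the good rule, so the total expected cost stays bounded rather than growing with T.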

Summary

Algorithms for online decision-making with strong guarantees on performance compared to the best fixed choice.
- Application: play a repeated game against an adversary; perform nearly as well as the best fixed strategy in hindsight.
- Can apply even with very limited feedback. Application: which way to drive to work, with only feedback about your own paths; online pricing, even if you only have buy/no-buy feedback.

More general forms of regret

1. "Best expert" or "external" regret: given n strategies, compete with the best of them in hindsight.
2. "Sleeping expert" or "regret with time-intervals": given n strategies and k properties, let S_i be the set of days satisfying property i (these might overlap). Want to simultaneously achieve low regret over each S_i.
3. "Internal" or "swap" regret: like (2), except that S_i = set of days on which we chose strategy i.

Internal/swap regret

E.g., each day we pick one stock to buy shares in. We don't want regret of the form "every time I bought IBM, I should have bought Microsoft instead." Formally, regret is w.r.t. the optimal function f: {1,...,n} → {1,...,n} such that every time you played action j, it plays f(j).

Weird... why care? Correlated equilibrium: a distribution over entries in the payoff matrix, such that if a trusted party chooses one at random and tells you your part, you have no incentive to deviate. E.g., the Shapley game, a 3×3 rock-paper-scissors-style game with rows/columns R, P, S.

In general-sum games, if all players have low swap-regret, then the empirical distribution of play is an approximate correlated equilibrium.

Internal/swap regret, contd.

Algorithms for achieving low regret of this form: Foster & Vohra, Hart & Mas-Colell, Fudenberg & Levine. We will present the method of [BM05], showing how to convert any best-expert algorithm into one achieving low swap regret.

Can convert any best-expert algorithm A into one achieving low swap regret. Idea:
- Instantiate one copy A_j responsible for our expected regret over the times we play action j.
- The copies output distributions q_j^t; stack these as the rows of a matrix Q^t, and play the distribution p^t satisfying p^t = p^t Q^t. This allows us to view p_j^t either as the probability we play action j, or as the probability we play algorithm A_j.
- On cost vector c^t, give A_j the feedback p_j^t c^t. A_j then guarantees Σ_t (p_j^t c^t) · q_j^t ≤ min_i Σ_t p_j^t c_i^t + [regret term], which we can write as: Σ_t p_j^t (q_j^t · c^t) ≤ min_i Σ_t p_j^t c_i^t + [regret term].
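To make the swap-regret definition concrete, here is a small sketch that evaluates it for a finished play sequence (the function name `swap_regret` and the input format are inventions of this illustration). Since f acts independently on each action j, the best f is found by optimizing each f(j) separately over exactly the steps where j was played.

```python
def swap_regret(plays, costs):
    """Swap regret of a play sequence: regret w.r.t. the best function f
    mapping each played action j to a replacement f(j), as on the slide.
    plays[t]    : action chosen at time t
    costs[t][i] : cost of action i at time t"""
    n = len(costs[0])
    actual = sum(costs[t][plays[t]] for t in range(len(plays)))
    best = 0.0
    for j in range(n):
        times_j = [t for t in range(len(plays)) if plays[t] == j]
        # best replacement for action j, over exactly the steps j was played
        best += min(sum(costs[t][i] for t in times_j) for i in range(n))
    return actual - best
```

E.g., if we played action 0 on two days where action 1 was free, and action 1 on a day where action 0 was free, every played step is regrettable and the swap regret equals the full cost incurred.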

Can convert any best-expert algorithm A into one achieving low swap regret (contd.)

As before: instantiate one copy A_j responsible for expected regret over the times we play j; play p^t = p^t Q^t; each A_j guarantees Σ_t p_j^t (q_j^t · c^t) ≤ min_i Σ_t p_j^t c_i^t + [regret term].

Summing over j gives: Σ_t p^t Q^t c^t ≤ Σ_j min_i Σ_t p_j^t c_i^t + n[regret term]. The left-hand side is our total cost, since p^t = p^t Q^t. On the right-hand side, for each j, we can move our probability to its own best replacement i = f(j).

Itinerary

- Stop 1: Minimizing regret and combining advice. Randomized Weighted Majority / Multiplicative Weights algorithm. Connections to game theory.
- Stop 2: Extensions. Online learning from limited feedback (bandit algorithms). Algorithms for large action spaces, sleeping experts.
- Stop 3: Powerful online LTF algorithms. Winnow, Perceptron.
- Stop 4: Powerful tools for using these algorithms. Kernels and similarity functions.
- Stop 5: Something completely different. Distributed machine learning.

Transition

So far, we have been examining problems of selecting among choices/algorithms/experts given to us from outside. Now, we turn to the design of online algorithms for learning over data described by features.

A typical ML setting

Say you want a computer program to help you decide which messages are urgent and which can be dealt with later. You might represent each message by n features (e.g., return address, keywords, header info, etc.). On each message received, you make a classification and then later find out if you messed up. Goal: if there exists a simple rule that works (is perfect? has low error?), then our algorithm does well.

Simple example: disjunctions

Suppose the features are boolean: X = {0,1}^n. The target is an OR function, like x3 v x9 v x12. Can we find an online strategy that makes at most n mistakes? (Assume a perfect target.)

Sure:
- Start with h(x) = x1 v x2 v ... v xn.
- Invariant: {vars in h} ⊇ {vars in f}.
- Mistake on a negative: throw out the variables in h set to 1 in x. This maintains the invariant and decreases |h| by at least 1.
- By the invariant, no mistakes on positives. So at most n mistakes total.

Compare to the experts setting: we could define 2^n experts, one for each OR function, and get #mistakes ≤ log(#experts). This way is much more efficient, but it requires some expert to be perfect.
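The elimination strategy above can be sketched directly (the function name `learn_disjunction` and its (x, label) input format are assumptions of this illustration):

```python
def learn_disjunction(examples):
    """Mistake-bound learner for a monotone disjunction over {0,1}^n,
    as on the slide: start with h = x1 v ... v xn; on each false positive,
    delete every variable set to 1 in that example. The examples are
    assumed consistent with some OR of variables (perfect target).
    Returns (surviving variable set, number of mistakes)."""
    n = len(examples[0][0])
    h = set(range(n))           # invariant: target's variables are a subset of h
    mistakes = 0
    for x, label in examples:
        pred = any(x[i] for i in h)
        if pred != label:
            mistakes += 1       # by the invariant, mistakes only occur on negatives
            h -= {i for i in range(n) if x[i]}
    return h, mistakes
```

On data consistent with x0 v x2 over four variables, a single false positive removes both irrelevant variables at once, illustrating why the total never exceeds n.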

Simple example: disjunctions (contd.)

But what if we believe only r out of the n variables are relevant? I.e., in principle, we should be able to get only O(log n^r) = O(r log n) mistakes. Can we do it efficiently?

Winnow algorithm

Winnow, for learning a disjunction of r out of n variables, e.g. f(x) = x3 v x9 v x12:
- h(x): predict positive iff w1 x1 + ... + wn xn ≥ n.
- Initialize w_i = 1 for all i.
- Mistake on a positive: w_i ← 2 w_i for all i with x_i = 1.
- Mistake on a negative: w_i ← 0 for all i with x_i = 1.

Thm: Winnow makes at most O(r log n) mistakes.

Proof: Each mistake on a positive doubles at least one relevant weight (and note that relevant weights are never set to 0). A weight is only doubled when w·x < n, hence when it is itself < n, so each relevant weight is doubled at most 1 + log n times: at most r(1 + log n) mistakes on positives. Each mistake on a positive adds < n to the total weight, while each mistake on a negative removes at least n from it. The total weight starts at n and stays nonnegative, so #(mistakes on negatives) ≤ 1 + #(mistakes on positives). That's it!

A generalization

Winnow for learning a linear separator with non-negative integer weights, e.g., 2 x3 + 4 x9 + x12 ≥ 5:
- h(x): predict positive iff w1 x1 + ... + wn xn ≥ n.
- Initialize w_i = 1 for all i.
- Mistake on a positive: w_i ← w_i (1+ε) for all i with x_i = 1.
- Mistake on a negative: w_i ← w_i / (1+ε) for all i with x_i = 1.
- Use ε = O(1/W), where W = sum of the weights in the target.

Thm: this makes at most O(W² log n) mistakes.

Winnow for general LTFs

More generally, one can show the following. Suppose ∃ w* s.t. w*·x ≥ c on positive x and w*·x ≤ c - γ on negative x. Then the mistake bound is O((L1(w*)/γ)² log n). (Multiply by L1(X) if the features are not {0,1}.)

Perceptron algorithm

An even older and simpler algorithm, with a bound of a different form. Suppose ∃ w* s.t. w*·x ≥ γ on positive x and w*·x ≤ -γ on negative x. Then the mistake bound is O((L2(w*) L2(x)/γ)²), where γ is the L2 margin of the examples.
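The basic Winnow update above in code (the function name `winnow` and its (x, label) input format are inventions of this sketch; features are assumed boolean):

```python
def winnow(examples, n):
    """Winnow, as on the slide, for a disjunction of r of n variables:
    predict positive iff w.x >= n; double the weights of on-bits on a
    false negative, zero them on a false positive. Returns mistake count."""
    w = [1.0] * n
    mistakes = 0
    for x, label in examples:
        pred = sum(wi * xi for wi, xi in zip(w, x)) >= n
        if pred != label:
            mistakes += 1
            for i in range(n):
                if x[i]:
                    w[i] = 2 * w[i] if label else 0.0
    return mistakes
```

With target x0 and n = 4, the relevant weight climbs 1 → 2 → 4 across two false negatives and then crosses the threshold, matching the r(1 + log n) counting in the proof.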

Perceptron algorithm (contd.)

Thm: Suppose the data is consistent with some LTF w*·x > 0, where we scale so that L2(w*) = 1 and L2(x) ≤ 1, and let γ = min_x |w*·x|. Then #mistakes ≤ 1/γ².

Algorithm:
- Initialize w = 0.
- Predict positive iff w·x > 0.
- Mistake on a positive: w ← w + x.
- Mistake on a negative: w ← w - x.

[Figure: a worked example of the Perceptron algorithm on the points (0,1), (1,1), (1,0).]

Analysis

Proof: consider w·w* and ‖w‖.
- Each mistake increases w·w* by at least γ: (w + x)·w* = w·w* + x·w* ≥ w·w* + γ (and similarly for mistakes on negatives).
- Each mistake increases w·w by at most 1: (w + x)·(w + x) = w·w + 2(w·x) + x·x ≤ w·w + 1, since w·x ≤ 0 on a mistake on a positive (and similarly for negatives).

So, after M mistakes, γM ≤ w·w* ≤ ‖w‖ ≤ M^{1/2}. So M ≤ 1/γ².

What if there is no perfect separator? In this case, a mistake could cause w·w* to drop; the impact is the magnitude of x·w*, in units of γ. So Mistakes(Perceptron) ≤ 1/γ² + O(how much, in units of γ, you would have to move the points for all of them to be correct by γ).

Note that γ was not part of the algorithm. So the mistake bound of the Perceptron is ≤ min over γ of the above. Equivalently, the mistake bound is ≤ min over w* of ‖w*‖² + O(hinge-loss(w*)).
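The Perceptron update above in code. This is a minimal sketch: the function name `perceptron` and the fixed-pass driver loop are assumptions for illustration (the slide's setting is a single online stream).

```python
def perceptron(examples, n, passes=3):
    """Perceptron, as on the slide: w starts at 0; predict positive iff
    w.x > 0; on a mistake, add x (false negative) or subtract x (false
    positive). Returns (w, total mistakes over all passes)."""
    w = [0.0] * n
    mistakes = 0
    for _ in range(passes):
        for x, label in examples:
            pred = sum(wi * xi for wi, xi in zip(w, x)) > 0
            if pred != label:
                mistakes += 1
                for i in range(n):
                    w[i] += x[i] if label else -x[i]
    return w, mistakes
```

On the two unit-length points (0.6, 0.8) labeled positive and (-0.6, -0.8) labeled negative, the margin is γ = 1 (w* = (0.6, 0.8)), so the theorem allows at most 1/γ² = 1 mistake, and indeed the single initial mistake fixes w for good.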


Macroeconomic Theory Ph.D. Qualifying Examination Fall 2005 ANSWER EACH PART IN A SEPARATE BLUE BOOK. PART ONE: ANSWER IN BOOK 1 WEIGHT 1/3 Macroeconomic Theory Ph.D. Qualifying Examinaion Fall 2005 Comprehensive Examinaion UCLA Dep. of Economics You have 4 hours o complee he exam. There are hree pars o he exam. Answer all pars. Each par has

More information

) were both constant and we brought them from under the integral.

) were both constant and we brought them from under the integral. YIELD-PER-RECRUIT (coninued The yield-per-recrui model applies o a cohor, bu we saw in he Age Disribuions lecure ha he properies of a cohor do no apply in general o a collecion of cohors, which is wha
