Reinforcement Learning
CMPSCI 383, Nov 29, 2011
Todayʼs lecture
- Review of Chapter 17: Making Complex Decisions
- Sequential decision problems
- The motivation and advantages of reinforcement learning
- Passive learning
- Policy evaluation
- Direct utility estimation
- Adaptive dynamic programming
- Temporal Difference (TD) learning
Check this out
MITʼs Technology Review has an in-depth interview with Peter Norvig, Googleʼs Director of Research, and Eric Horvitz, a Distinguished Scientist at Microsoft Research, about their optimism for the future of AI.
http://www.technologyreview.com/computing/39156/
A Simple Example
- Gridworld with 2 goal states
- Actions: Up, Down, Left, Right
- Fully observable: the agent knows where it is
Transition Model
[figure]

Markov Assumption
[figure]
Agentʼs Utility Function
- Performance depends on the entire sequence of states and actions: the environment history.
- In each state, the agent receives a reward R(s).
- The reward is real-valued. It may be positive or negative.
- Utility of an environment history = sum of the rewards received.
Reward function
[Gridworld figure: each nonterminal state has reward -0.04.]
Decision Rules
- Decision rules say what to do in each state.
- Often called policies, π. The action for state s is given by π(s).
Our Goal
- Find the policy that maximizes the expected sum of rewards.
- Called an optimal policy.
Markov Decision Process (MDP)
M = (S, A, P, R)
- S = set of possible states
- A = set of possible actions
- P(s′ | s, a) gives transition probabilities
- R = reward function
Goal: find an optimal policy, π*.
Finite/Infinite Horizon
- Finite horizon: the game ends after N steps.
- Infinite horizon: the game never ends.
- With a finite horizon, the optimal action in a given state could change over time: the optimal policy is nonstationary.
- With an infinite horizon, the optimal policy is stationary.
Utilities over time
- Additive rewards: U([s0, s1, s2, …]) = R(s0) + R(s1) + R(s2) + ⋯
- Discounted rewards: U([s0, s1, s2, …]) = R(s0) + γ R(s1) + γ² R(s2) + ⋯
- Discount factor: γ ∈ (0, 1] (additive rewards are the special case γ = 1).
Discounted Rewards
- Would you rather have a marshmallow now, or two in 20 minutes?
- Infinite sums!
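With discounting and bounded rewards, the infinite sum stays finite; a one-line derivation, assuming |R(s)| ≤ R_max and γ < 1:

```latex
U([s_0, s_1, s_2, \ldots]) \;=\; \sum_{t=0}^{\infty} \gamma^{t} R(s_t)
  \;\le\; \sum_{t=0}^{\infty} \gamma^{t} R_{\max}
  \;=\; \frac{R_{\max}}{1-\gamma}
```

So the utility of any infinite history is bounded, which is what makes stationary infinite-horizon policies well defined.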
Utility of States
Given a policy π, we can define the utility of a state:
U^π(s) = E[ Σ_{t=0}^∞ γ^t R(s_t) | π, s_0 = s ]
Policy Evaluation
- Finding the utility of states for a given policy.
- Solve a system of linear equations:
  U^π(s) = R(s) + γ Σ_{s′} P(s′ | s, π(s)) U^π(s′)
- An instance of a Bellman equation.
Policy Evaluation
- The n linear equations can be solved in O(n³) with standard linear algebra methods.
- If O(n³) is still too much, we can do it iteratively:
  U^π_{i+1}(s) ← R(s) + γ Σ_{s′} P(s′ | s, π(s)) U^π_i(s′)
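As a sketch, both the direct O(n³) solve and the iterative update fit in a few lines of Python. The two-state chain below is a made-up example (not the gridworld from the slides), just to make the block runnable:

```python
import numpy as np

# Hypothetical two-state chain, for illustration only: under the fixed
# policy pi, state 0 moves to state 1 with prob. 0.9 and stays put with
# prob. 0.1; state 1 is absorbing.  P[s, s2] = P(s2 | s, pi(s)).
P = np.array([[0.1, 0.9],
              [0.0, 1.0]])
R = np.array([-0.04, 1.0])
gamma = 0.9

# Direct O(n^3) solve of U = R + gamma * P U  <=>  (I - gamma P) U = R.
U_direct = np.linalg.solve(np.eye(2) - gamma * P, R)

# Iterative policy evaluation: repeat the Bellman update to convergence.
U = np.zeros(2)
for _ in range(10_000):
    U_next = R + gamma * P @ U
    if np.max(np.abs(U_next - U)) < 1e-12:
        break
    U = U_next
```

Both routes give the same utilities; the iterative version trades the cubic solve for a contraction that shrinks the error by a factor of γ per sweep.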
Optimal Policy
- The optimal policy doesnʼt depend on what state you start in (for the infinite-horizon discounted case).
- Optimal policy: π* = argmax_π U^π(s)
- True utility of a state: U(s) = U^{π*}(s), the optimal value function.
Selecting optimal actions
Given the true U(s) values, how can we select actions? Maximum expected utility (MEU):
π*(s) = argmax_a Σ_{s′} P(s′ | s, a) U(s′)
Utility and Rewards
- Utility = long-term total reward from s onwards
- Reward = short-term reward from s
Utility
[figure]
Value-Guided Optimal Control
[Figure: mountain-car task. Panels: "Predicted minimum time to goal (negated)"; "Get to the top of the hill as quickly as possible (roughly)".]
Munos & Moore, Variable resolution discretization for high-accuracy solutions of optimal control problems, IJCAI 99.
Searching for Optimal Policies
- Bellman equation: U(s) = R(s) + γ max_a Σ_{s′} P(s′ | s, a) U(s′)
- If we write out the Bellman equation for all n states, we get n equations with n unknowns: the U(s) values.
- We can solve this system of equations to determine the utility of every state.
Value Iteration
- The equations are non-linear (because of the max), so we canʼt use standard linear algebra methods.
- Value iteration: start with random initial values for each U(s), and iteratively update each value to fit the right-hand side of the equation:
  U_{i+1}(s) ← R(s) + γ max_a Σ_{s′} P(s′ | s, a) U_i(s′)
Value Iteration
- The update is applied simultaneously to every state.
- If this update is applied infinitely often, we are guaranteed to find the true U(s) values.
- There is one unique solution!
- Given the true U(s) values, we can select actions by maximum expected utility (MEU).
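A minimal value-iteration sketch, assuming a made-up 2-state, 2-action MDP (not the gridworld):

```python
import numpy as np

# Hypothetical 2-state, 2-action MDP, just to make the update concrete.
# P[a][s, s2] = P(s2 | s, a); state 1 is absorbing under both actions.
P = {0: np.array([[0.9, 0.1],
                  [0.0, 1.0]]),
     1: np.array([[0.2, 0.8],
                  [0.0, 1.0]])}
R = np.array([-0.04, 1.0])
gamma = 0.9

U = np.zeros(2)                      # arbitrary initial values
for _ in range(10_000):
    # Bellman update, applied simultaneously to every state.
    U_next = R + gamma * np.max([P[a] @ U for a in P], axis=0)
    if np.max(np.abs(U_next - U)) < 1e-12:
        break
    U = U_next

# MEU action selection from the converged utilities.
policy = np.argmax([P[a] @ U for a in P], axis=0)
```

Here action 1 reaches the absorbing high-reward state with probability 0.8 versus 0.1, so the greedy policy picks it in state 0.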
Policy Iteration
- Policy iteration interleaves two steps:
  - Policy evaluation: given a policy, compute the utility of each state under that policy.
  - Policy improvement: calculate a new MEU policy.
- Terminate when the policy no longer changes the utilities.
- Guaranteed to converge to an optimal policy!
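The two interleaved steps can be sketched as follows; the tiny 2-state, 2-action MDP is hypothetical, standing in for the gridworld:

```python
import numpy as np

# Illustrative MDP: P[a][s, s2] = P(s2 | s, a); state 1 is absorbing.
P = {0: np.array([[0.9, 0.1],
                  [0.0, 1.0]]),
     1: np.array([[0.2, 0.8],
                  [0.0, 1.0]])}
R = np.array([-0.04, 1.0])
gamma = 0.9
n = len(R)

policy = np.zeros(n, dtype=int)      # start from an arbitrary policy
while True:
    # Policy evaluation: exact linear solve for U^pi.  The equations are
    # linear because the max has been replaced by the current policy.
    P_pi = np.array([P[policy[s]][s] for s in range(n)])
    U = np.linalg.solve(np.eye(n) - gamma * P_pi, R)
    # Policy improvement: new MEU policy with respect to U.
    improved = np.argmax([P[a] @ U for a in P], axis=0)
    if np.array_equal(improved, policy):
        break                        # policy (and utilities) unchanged: done
    policy = improved
```

Because each improvement step never decreases utilities and there are finitely many policies, the loop must terminate at an optimal policy.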
Asynchronous Policy Iteration
- We said the utility of every state is updated simultaneously. This isnʼt necessary.
- You can pick any subset of the states and apply either policy improvement or value iteration to that subset.
- Under certain conditions, this is also guaranteed to converge to an optimal policy.
Reinforcement Learning
Edward L. Thorndike (1874–1949)
- Learning by Trial-and-Error
- [figure: puzzle box]
Law of Effect
"Of several responses made to the same situation, those which are accompanied or closely followed by satisfaction to the animal will, other things being equal, be more firmly connected with the situation, so that, when it recurs, they will be more likely to recur; those which are accompanied or closely followed by discomfort to the animal will, other things being equal, have their connections with that situation weakened, so that when it recurs, they will be less likely to occur."
Edward Thorndike, 1911
RL = Search + Memory
- Search: Trial-and-Error, Generate-and-Test, Variation-and-Selection
- Memory: remember what worked best for each situation and start from there next time
MENACE (Michie, 1961)
Matchbox Educable Noughts and Crosses Engine
Supervised learning
Training info = desired (target) outputs
Inputs → Supervised Learning System → Outputs
Error = (target output − actual output)
Reinforcement learning
Training info = evaluations ("rewards", "penalties", etc.)
Inputs → RL System → Outputs ("actions", "controls")
Objective: get as much reward as possible
Reinforcement learning
- Learning what to do: how to map situations to actions so as to maximize a numerical reward signal.
- Rewards may be provided following each action, or only when the agent reaches a goal state.
- The credit assignment problem.
Why reinforcement learning?
- A more realistic model of the kind of learning that animals do.
- Supervised learning is sometimes unrealistic: where will correct examples come from?
- Environments change, so the agent must adjust its action choices.
We stick to this kind of RL problem here:
- The environment is fully observable: the agent knows what state the environment is in.
- But: the agent does not know how the environment works or what its actions do.
Some kinds of RL agents
- Utility-based agent: learns a utility function on states and uses it to select actions. Needs an environment model to decide on actions.
- Q-learning agent: learns an action-utility function, or Q-function, giving the expected utility of taking each action in each state. Does not need an environment model.
- Reflex agent: learns a policy without first learning a state-utility function or a Q-function.
Passive versus active learning
- A passive learner simply watches the world going by and tries to learn the utility of being in various states.
- An active learner must also act using the learned information, and can use its problem generator to suggest explorations of unknown portions of the environment.
Passive learning
Given (but the agent doesnʼt know this):
- A Markov model of the environment.
- States, with probabilistic actions.
- Terminal states have rewards/utilities.
Problem:
- Learn the expected utility of each state.
Note: if the agent knew how the environment and its actions work, it could solve the relevant Bellman equations (which would be linear).
Example
[Gridworld figure: each nonterminal state has reward -0.04.]
Percepts tell the agent: [State, Reward, Terminal?]
Learning utility functions
- A training sequence (or episode) is an instance of world transitions from an initial state to a terminal state.
- The additive utility assumption: the utility of a sequence is the sum of the rewards over the states of the sequence.
- Under this assumption, the utility of a state is the expected reward-to-go of that state.
Direct Utility Estimation
- For each training sequence, compute the reward-to-go for each state in the sequence and update the utilities.
- This is just learning the utility function from examples.
- Generates utility estimates that minimize the mean square error (LMS update).
Direct Utility Estimation
U(i) ← (1 − α) U(i) + α · (reward-to-go of state i in the training sequence)
[backup diagram figure]
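A runnable sketch of the LMS update on a made-up three-state chain; the states, rewards, and episode generator are all illustrative:

```python
# Hypothetical chain 0 -> 1 -> 2 (terminal), with rewards echoing the
# -0.04 / +1 flavor of the gridworld slides.
rewards = [-0.04, -0.04, 1.0]

def episode():
    """One training sequence from the initial state to the terminal state."""
    return [0, 1, 2]          # a deterministic chain keeps the sketch short

U = [0.0, 0.0, 0.0]
alpha = 0.1                   # learning rate
for _ in range(2000):
    seq = episode()
    for i, s in enumerate(seq):
        # reward-to-go of s = sum of rewards from s to the end of the episode
        reward_to_go = sum(rewards[t] for t in seq[i:])
        # LMS update: U(i) <- (1 - alpha) U(i) + alpha * reward-to-go
        U[s] = (1 - alpha) * U[s] + alpha * reward_to_go
```

With this deterministic chain the estimates settle at the exact reward-to-go values 0.92, 0.96, and 1.0; with stochastic transitions they converge to the expected reward-to-go instead.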
Problems with direct utility estimation
Converges slowly because it ignores the relationships between neighboring states:
[Figure: New U = ?, Old U = .8, p = .9, p = .1, +1 terminal]
Key Observation: Temporal Consistency
- Utilities of states are not independent!
- The utility of each state equals its own reward plus the expected utility of its successor states:
  U^π(s) = R(s) + γ Σ_{s′} P(s′ | s, π(s)) U^π(s′)
- The key fact:
  U_t = r_{t+1} + γ r_{t+2} + γ² r_{t+3} + γ³ r_{t+4} + ⋯
      = r_{t+1} + γ (r_{t+2} + γ r_{t+3} + γ² r_{t+4} + ⋯)
      = r_{t+1} + γ U_{t+1}
Adaptive dynamic programming
- Learn a model: transition probabilities and reward function.
- Do policy evaluation: solve the Bellman equations either directly or iteratively (value iteration without the max).
- Learn the model while doing iterative policy evaluation: update the model of the environment after each step. Since the model changes only slightly after each step, policy evaluation will converge quickly.
Adaptive dynamic programming (ADP)
U(i) ← R(i) + Σ_j P_ij U(j)
[backup diagram figure]
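A sketch of passive ADP on a made-up two-state chain: the agent tallies observed transitions to estimate P_ij, then runs one undiscounted policy-evaluation sweep after each step, matching the update above. Everything here (states, probabilities, rewards) is illustrative:

```python
import random
from collections import defaultdict

random.seed(0)

# Under the fixed policy, state 0 reaches terminal state 1 with prob. 0.9
# and stays in 0 with prob. 0.1.  The agent does NOT know these numbers.
R = {0: -0.04, 1: 1.0}
counts = defaultdict(lambda: defaultdict(int))   # counts[i][j] = #(i -> j)
U = {0: 0.0, 1: 0.0}

def sweep():
    """One iterative policy-evaluation sweep using the learned model."""
    for i in U:
        total = sum(counts[i].values())
        if total == 0:
            U[i] = R[i]              # no transitions seen (e.g. terminal)
        else:
            U[i] = R[i] + sum(c / total * U[j] for j, c in counts[i].items())

for _ in range(500):                 # 500 observed episodes
    s = 0
    while s != 1:                    # 1 is terminal
        s2 = 1 if random.random() < 0.9 else 0
        counts[s][s2] += 1           # update the learned transition model
        sweep()                      # the model changed only slightly,
        s = s2                       # so one sweep per step is enough
```

The estimated utilities approach the true values R(0) + 0.9·U(1) + 0.1·U(0) ≈ 0.956 and U(1) = 1 as the frequency estimates of P_ij sharpen.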
Passive ADP learning curves
[figure]

Exact utility values for the example
[figure]
Temporal difference (TD) learning
- Approximate the constraint equations without solving them for all states.
- Modify U(i) whenever we see a transition from i to j, using the following rule:
  U(i) ← U(i) + α [ R(i) + U(j) − U(i) ]
- The modification moves U(i) closer to satisfying the original equation.
- Q. Why does it work?
TD learning contd.
U(i) ← (1 − α) U(i) + α [ R(i) + U(j) ]
[backup diagram figure]
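The same idea as a runnable sketch, using the slide's undiscounted TD rule on a made-up two-state chain (from state 0, the fixed policy reaches terminal state 1 with probability 0.9, else stays in 0):

```python
import random

random.seed(1)

R = {0: -0.04, 1: 1.0}
U = {0: 0.0, 1: 0.0}
alpha = 0.05                         # learning rate

for _ in range(5000):                # training episodes
    s = 0
    while s != 1:
        s2 = 1 if random.random() < 0.9 else 0
        # TD rule: U(i) <- (1 - alpha) U(i) + alpha [ R(i) + U(j) ]
        U[s] = (1 - alpha) * U[s] + alpha * (R[s] + U[s2])
        s = s2
    # a terminal state has no successor, so nudge it toward its own reward
    U[1] = (1 - alpha) * U[1] + alpha * R[1]
```

Unlike ADP, no transition model is stored: each observed transition pulls U(i) a little toward R(i) + U(j), and on average that pull points at the temporal-consistency fixed point (here U(0) ≈ 0.956, U(1) = 1).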
TD learning curves
[figure]
Next Class
- Active Reinforcement Learning
- Sec. 21.3

Next Class
- Reinforcement Learning
- Secs. 21.1–21.3