CSCI567 Machine Learning (Fall 2018), lecture of Nov 7, 2018: online decision making (multi-armed bandits and reinforcement learning).


Administration

CSCI567 Machine Learning, Fall 2018. Prof. Haipeng Luo, University of Southern California. Nov 7, 2018.

- HW5 is available, due on 11/18. A practice final will also be available soon.
- Remaining weeks: 11/14, guest lecture by Dr. Bilal Shaw on fraud detection in the real world; 11/21, Thanksgiving; 11/28, final exam (THH 101 and 201).

Outline

1. Review of last lecture
2. Online decision making (multi-armed bandits)
3. Reinforcement learning

Review of last lecture: Hidden Markov Models

Model parameters:
- initial distribution $P(Z_1 = s) = \pi_s$
- transition distribution $P(Z_{t+1} = s' \mid Z_t = s) = a_{s,s'}$
- emission distribution $P(X_t = o \mid Z_t = s) = b_{s,o}$

Baum-Welch algorithm
- Step 0: Initialize the parameters $\pi, A, B$.
- Step 1 (E-step): Fixing the parameters, compute forward and backward messages for all sample sequences, then use these to compute $\gamma^n_s(t)$ and $\xi^n_{s,s'}(t)$ for each $n, t, s, s'$.
- Step 2 (M-step): Update the parameters:
$$\pi_s \propto \sum_n \gamma^n_s(1), \qquad a_{s,s'} \propto \sum_n \sum_{t=1}^{T-1} \xi^n_{s,s'}(t), \qquad b_{s,o} \propto \sum_n \sum_{t:\, x_t = o} \gamma^n_s(t).$$
- Step 3: Return to Step 1 if not converged.

Viterbi algorithm
- For each $s \in [S]$, compute $\delta_s(1) = \pi_s b_{s,x_1}$.
- For each $t = 2, \ldots, T$ and each $s \in [S]$, compute
$$\delta_s(t) = b_{s,x_t} \max_{s'} a_{s',s}\, \delta_{s'}(t-1), \qquad \Delta_s(t) = \operatorname*{argmax}_{s'} a_{s',s}\, \delta_{s'}(t-1).$$
- Backtracking: let $z^*_T = \operatorname*{argmax}_s \delta_s(T)$; for each $t = T, \ldots, 2$, set $z^*_{t-1} = \Delta_{z^*_t}(t)$.
- Output the most likely path $z^*_1, \ldots, z^*_T$.

Example: arrows represent the argmax, i.e. $\Delta_s(t)$; the most likely path is rainy, rainy, sunny, sunny.
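To make the Viterbi recursion concrete, here is a minimal NumPy sketch. The two-state rainy/sunny HMM at the bottom is a hypothetical example (its parameters are not from the lecture), chosen so that the decoded path matches the slide's "rainy, rainy, sunny, sunny".

```python
import numpy as np

def viterbi(pi, A, B, x):
    """Most likely hidden state path for observations x.

    pi: (S,) initial distribution, A: (S, S) transitions a[s, s'],
    B: (S, O) emissions b[s, o], x: length-T list of observation indices.
    """
    S, T = len(pi), len(x)
    delta = np.zeros((T, S))             # delta[t, s]: best path probability ending in s at time t
    back = np.zeros((T, S), dtype=int)   # argmax pointers for backtracking
    delta[0] = pi * B[:, x[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] * A      # scores[s', s] = delta_{s'}(t-1) * a_{s', s}
        back[t] = scores.argmax(axis=0)
        delta[t] = B[:, x[t]] * scores.max(axis=0)
    z = np.zeros(T, dtype=int)                  # backtracking
    z[-1] = delta[-1].argmax()
    for t in range(T - 1, 0, -1):
        z[t - 1] = back[t, z[t]]
    return z

# Hypothetical two-state weather HMM (states: 0 = rainy, 1 = sunny; observations: 0 = umbrella, 1 = no umbrella)
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.8, 0.2], [0.3, 0.7]])
print(viterbi(pi, A, B, [0, 0, 1, 1]))   # prints [0 0 1 1], i.e. rainy, rainy, sunny, sunny
```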

Online decision making

Problems we have discussed so far:
- start with a training dataset
- learn a predictor or discover some patterns

But many real-life problems are about learning continuously:
- make a prediction/decision
- receive some feedback
- repeat

Broadly, these are called online decision making problems.

Examples. Amazon/Netflix/MSN recommendation systems:
- a user visits the website
- the system recommends some products/movies/news stories
- the system observes whether the user clicks on the recommendation

Playing games (Go/Atari/Dota 2/...) or controlling robots:
- make a move
- receive some reward (e.g. score a point) or loss (e.g. fall down)
- make another move, and so on

Two formal setups. We discuss two such problems today: multi-armed bandits and reinforcement learning.

Multi-armed bandits: motivation

Imagine going to a casino to play a slot machine: it robs you, like a bandit with a single arm. Of course there are many slot machines in the casino, like a bandit with multiple arms (hence the name). If I can play 10 times, which machines should I play?

Applications. This simple model and its variants capture many real-life applications:
- recommendation systems: each product/movie/news story is an arm (Microsoft MSN indeed employs a variant of a bandit algorithm)
- game playing: each possible move is an arm (AlphaGo indeed has a bandit algorithm as one of its components)

Formal setup. There are $K$ arms (actions/choices/...). The problem proceeds in rounds between the environment and a learner: for each time $t = 1, \ldots, T$,
- the environment decides the reward of each arm: $r_{t,1}, \ldots, r_{t,K}$
- the learner picks an arm $a_t \in [K]$
- the learner observes the reward of arm $a_t$, i.e., $r_{t,a_t}$

Importantly, the learner does not observe the rewards of the arms not picked! This kind of limited feedback is now usually referred to as bandit feedback.

Objective. What is the goal of this problem? Maximizing the total reward $\sum_{t=1}^T r_{t,a_t}$ seems natural. But the absolute value of the reward is not meaningful; instead we should compare it to some benchmark. A classic benchmark is
$$\max_{a \in [K]} \sum_{t=1}^T r_{t,a},$$
i.e. the largest reward one can achieve by always playing a fixed arm. So we want to minimize
$$\max_{a \in [K]} \sum_{t=1}^T r_{t,a} - \sum_{t=1}^T r_{t,a_t}.$$
This is called the regret: how much we regret not sticking with the best fixed arm in hindsight.
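As a sketch of this protocol, the snippet below simulates Bernoulli-reward arms and measures the regret of a learner against the best fixed arm in hindsight; the arm means and the uniformly random baseline learner are illustrative assumptions, not part of the lecture.

```python
import numpy as np

def run_bandit(mu, T, choose_arm, seed=0):
    """Simulate T rounds of the bandit protocol with Bernoulli(mu[a]) arms.

    choose_arm(t, history) must return the arm a_t in [0, K); history is the
    list of (arm, reward) pairs seen so far (bandit feedback only).
    Returns the regret with respect to the best fixed arm in hindsight.
    """
    rng = np.random.default_rng(seed)
    rewards = rng.binomial(1, np.tile(mu, (T, 1)))   # r_{t,a}, decided by the environment
    history, total = [], 0
    for t in range(T):
        a = choose_arm(t, history)                   # learner picks an arm
        total += rewards[t, a]
        history.append((a, rewards[t, a]))           # learner only observes r_{t, a_t}
    best_fixed = rewards.sum(axis=0).max()           # max_a sum_t r_{t,a}
    return best_fixed - total                        # the regret

# A weak baseline learner: ignore all feedback and pick an arm uniformly at random.
mu = [0.6, 0.5, 0.3]
uniform = lambda t, history: np.random.randint(len(mu))
print(run_bandit(mu, T=10000, choose_arm=uniform))   # regret grows linearly in T for this learner
```

The algorithms discussed next (Greedy, explore then exploit, ε-greedy, UCB) can be plugged in as `choose_arm`.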

Environments

How are the rewards generated by the environment?
- they could be generated via some fixed distribution
- they could be generated via some changing distribution
- they could even be generated completely arbitrarily/adversarially

We focus on a simple setting: the rewards of arm $a$ are i.i.d. samples of $\mathrm{Ber}(\mu_a)$, that is, $r_{t,a}$ is 1 with probability $\mu_a$ and 0 with probability $1 - \mu_a$, independent of everything else. Each arm has a different mean $\mu_1, \ldots, \mu_K$; the problem is essentially about finding the best arm $\operatorname{argmax}_a \mu_a$ as quickly as possible.

Empirical means. Let $\hat{\mu}_{t,a}$ be the empirical mean of arm $a$ up to time $t$:
$$\hat{\mu}_{t,a} = \frac{1}{n_{t,a}} \sum_{\tau \le t:\, a_\tau = a} r_{\tau,a}, \qquad \text{where } n_{t,a} = \sum_{\tau \le t} \mathbb{I}[a_\tau = a]$$
is the number of times we have picked arm $a$. Concentration: $\hat{\mu}_{t,a}$ should be close to $\mu_a$ if $n_{t,a}$ is large.

Exploitation only: Greedy
- Pick each arm once for the first $K$ rounds.
- For $t = K+1, \ldots, T$, pick $a_t = \operatorname{argmax}_a \hat{\mu}_{t-1,a}$.

What's wrong with this greedy algorithm? Consider the following example: $K = 2$, $\mu_1 = 0.6$, $\mu_2 = 0.5$, so arm 1 is the best. Suppose the algorithm first picks arm 1 and sees reward 0, then picks arm 2 and sees reward 1 (this happens with decent probability). Then the algorithm will never pick arm 1 again!

The key challenge. All bandit problems face the same dilemma (trade-off):
- on one hand we want to exploit the arms that we think are good
- on the other hand we need to explore all actions often enough in order to figure out which one is better

So at each time we need to ask: do I explore or exploit? And how? We next discuss three ways to trade off exploration and exploitation for our simple multi-armed bandit setting.
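The failure mode of Greedy can be checked empirically; the simulation below is a hedged sketch of the two-arm example ($\mu_1 = 0.6$, $\mu_2 = 0.5$). Note that the single bad scenario described above already happens with probability $0.4 \times 0.5 = 0.2$, so Greedy gets stuck on the wrong arm in a sizable fraction of runs.

```python
import numpy as np

def greedy_pull_counts(mu, T, rng):
    """Run Greedy: pull each arm once, then always pull the empirically best arm."""
    K = len(mu)
    n = np.zeros(K)                                 # n[a]: number of pulls of arm a
    s = np.zeros(K)                                 # s[a]: total reward collected from arm a
    for t in range(T):
        a = t if t < K else int(np.argmax(s / n))   # first K rounds: one pull per arm
        r = rng.binomial(1, mu[a])
        n[a] += 1
        s[a] += r
    return n

rng = np.random.default_rng(0)
mu = [0.6, 0.5]
# Fraction of runs in which Greedy spends most of its pulls on the suboptimal arm 2 (index 1).
stuck = np.mean([greedy_pull_counts(mu, 1000, rng)[1] > 500 for _ in range(500)])
print(stuck)   # empirically a sizable fraction, illustrating the lock-in problem
```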

A natural first attempt: Explore then Exploit
- Input: a parameter $T_0 \in [T]$.
- Exploration phase: for the first $T_0$ rounds, pick each arm $T_0/K$ times.
- Exploitation phase: for the remaining $T - T_0$ rounds, stick with the empirically best arm $\operatorname{argmax}_a \hat{\mu}_{T_0,a}$.

The parameter $T_0$ clearly controls the exploration/exploitation trade-off.

Issues of Explore then Exploit. It's pretty reasonable, but the disadvantages are also clear:
- it is not clear how to tune the hyperparameter $T_0$
- in the exploration phase, even if an arm is clearly worse than the others based on a few pulls, it is still pulled $T_0/K$ times
- clearly it won't work if the environment is changing

A slightly better algorithm: $\epsilon$-greedy
- Pick each arm once for the first $K$ rounds.
- For $t = K+1, \ldots, T$:
  - with probability $\epsilon$, explore: pick an arm uniformly at random
  - with probability $1 - \epsilon$, exploit: pick $a_t = \operatorname{argmax}_a \hat{\mu}_{t-1,a}$

Pros: always exploring and exploiting; applicable to many other problems; usually the first thing to try.
Cons: need to tune $\epsilon$; the exploration is still uniform over arms (not adaptive).

More adaptive exploration. Is there a more adaptive way to explore? A simple modification of Greedy leads to the well-known Upper Confidence Bound (UCB) algorithm:
- For $t = 1, \ldots, T$, pick $a_t = \operatorname{argmax}_a \mathrm{UCB}_{t,a}$, where
$$\mathrm{UCB}_{t,a} = \hat{\mu}_{t-1,a} + \sqrt{\frac{2 \ln t}{n_{t-1,a}}}.$$

- The first term in $\mathrm{UCB}_{t,a}$ represents exploitation, while the second (bonus) term represents exploration.
- The bonus term forces the algorithm to try every arm once first.
- The bonus term is large if the arm has not been pulled often enough, which encourages exploration, but an adaptive one due to the first term.
- UCB is a parameter-free algorithm, and it enjoys optimal regret!
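A minimal sketch of ε-greedy and UCB under the Bernoulli setting; the arm means are illustrative, and the exact form of the exploration bonus used here, $\sqrt{2\ln t / n_{t-1,a}}$, is one standard choice whose constant may differ from the one in the lecture.

```python
import numpy as np

def epsilon_greedy(mu, T, eps=0.1, seed=0):
    rng = np.random.default_rng(seed)
    K = len(mu)
    n, s = np.zeros(K), np.zeros(K)              # pull counts and reward sums
    for t in range(T):
        if t < K:
            a = t                                # pull each arm once first
        elif rng.random() < eps:
            a = int(rng.integers(K))             # explore: uniformly random arm
        else:
            a = int(np.argmax(s / n))            # exploit: empirically best arm
        r = rng.binomial(1, mu[a]); n[a] += 1; s[a] += r
    return n

def ucb(mu, T, seed=0):
    rng = np.random.default_rng(seed)
    K = len(mu)
    n, s = np.zeros(K), np.zeros(K)
    for t in range(1, T + 1):
        if t <= K:
            a = t - 1                            # bonus undefined for unpulled arms: pull each once
        else:
            bonus = np.sqrt(2.0 * np.log(t) / n) # exploration bonus, large for rarely pulled arms
            a = int(np.argmax(s / n + bonus))    # exploitation term + exploration term
        r = rng.binomial(1, mu[a]); n[a] += 1; s[a] += r
    return n

mu = [0.8, 0.5, 0.2]
print(epsilon_greedy(mu, 5000), ucb(mu, 5000))   # pull counts: both concentrate most pulls on the best arm 0
```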

Upper confidence bound

Why is it called an upper confidence bound? One can prove that with high probability,
$$\mu_a \le \mathrm{UCB}_{t,a},$$
so $\mathrm{UCB}_{t,a}$ is indeed an upper bound on the true mean. Another way to interpret UCB is "optimism in the face of uncertainty": the true environment is unknown due to randomness (uncertainty), so we just pretend it is the most preferable one among all plausible environments (optimism). This principle is useful for many other bandit problems.

Reinforcement learning: motivation

The multi-armed bandit is among the simplest decision making problems with limited feedback. It is often too simple to capture many real-life problems. One thing it fails to capture is the state of the learning agent, which has an impact on the reward of each action; e.g. for Atari games, after making one move, the agent moves to a different state, with possibly different rewards for each action.

Reinforcement learning (RL) is one way to deal with this issue. It has seen huge recent success when combined with deep learning techniques: Atari games, poker, self-driving cars, etc. The foundation of RL is the Markov Decision Process (MDP), a combination of a Markov model (Lecture 10) and a multi-armed bandit.

Markov Decision Processes

An MDP is parameterized by five elements:
- $S$: a set of possible states
- $A$: a set of possible actions
- $P$: transition probabilities; $P_a(s, s')$ is the probability of transitioning from state $s$ to state $s'$ after taking action $a$ (Markov property)
- $r$: reward function; $r_a(s)$ is the expected reward of action $a$ at state $s$
- $\gamma \in (0, 1)$: discount factor; informally, a reward of 1 received tomorrow only counts as $\gamma$ today

Example: 3 states, 2 actions. Different from the Markov models discussed in Lecture 10, the state transition is influenced by the action taken. Different from multi-armed bandits, the reward depends on the state.

Policy. A policy $\pi: S \to A$ indicates which action to take at each state. If we start from state $s_0 \in S$ and act according to a policy $\pi$, the discounted rewards at times $0, 1, 2, \ldots$ are respectively
$$r_{\pi(s_0)}(s_0), \quad \gamma\, r_{\pi(s_1)}(s_1), \quad \gamma^2\, r_{\pi(s_2)}(s_2), \quad \ldots$$
where $s_1 \sim P_{\pi(s_0)}(s_0, \cdot)$, $s_2 \sim P_{\pi(s_1)}(s_1, \cdot)$, and so on. If we follow the policy forever, the total discounted reward is
$$\mathbb{E}\left[\sum_{t=0}^{\infty} \gamma^t\, r_{\pi(s_t)}(s_t)\right],$$
where the randomness is from $s_{t+1} \sim P_{\pi(s_t)}(s_t, \cdot)$. Note: the discount factor allows us to consider an infinite learning process.

Optimal policy and Bellman equation. First goal: knowing all the parameters, how do we find the optimal policy
$$\operatorname*{argmax}_\pi\; \mathbb{E}\left[\sum_{t=0}^{\infty} \gamma^t\, r_{\pi(s_t)}(s_t)\right]?$$
We first answer a related question: what is the maximum reward one can achieve starting from an arbitrary state $s$?
$$V(s) = \max_\pi\; \mathbb{E}\left[\sum_{t=0}^{\infty} \gamma^t\, r_{\pi(s_t)}(s_t)\ \middle|\ s_0 = s\right] = \max_{a \in A}\left( r_a(s) + \gamma \sum_{s' \in S} P_a(s, s')\, V(s')\right).$$
$V$ is called the value function. It satisfies the above Bellman equation: $|S|$ unknowns, nonlinear; how do we solve it?
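To make the notation concrete, here is a small hypothetical MDP (3 states, 2 actions, with made-up numbers, not the slide's example) stored as NumPy arrays, together with a single Bellman backup at one state.

```python
import numpy as np

S, A, gamma = 3, 2, 0.9

# P[a, s, s'] = probability of moving from state s to s' under action a (each row sums to 1).
P = np.array([[[0.8, 0.2, 0.0], [0.1, 0.8, 0.1], [0.0, 0.2, 0.8]],
              [[0.5, 0.5, 0.0], [0.0, 0.5, 0.5], [0.5, 0.0, 0.5]]])
# r[a, s] = expected reward of taking action a in state s.
r = np.array([[0.0, 1.0, 2.0],
              [0.5, 0.0, 3.0]])
assert np.allclose(P.sum(axis=2), 1.0)              # valid transition probabilities

# One Bellman backup at state s = 1 for an arbitrary guess V of the value function:
V = np.zeros(S)
backup = np.max(r[:, 1] + gamma * P[:, 1, :] @ V)   # max_a [ r_a(s) + gamma * sum_s' P_a(s, s') V(s') ]
print(backup)                                       # with V = 0 this is just max_a r_a(1)
```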

Value Iteration
- Initialize $V_0(s)$ randomly for all $s \in S$.
- For $k = 1, 2, \ldots$ until convergence, perform the Bellman update:
$$V_k(s) = \max_{a \in A}\left( r_a(s) + \gamma \sum_{s' \in S} P_a(s, s')\, V_{k-1}(s')\right).$$

Knowing $V$, the optimal policy $\pi^*$ is simply
$$\pi^*(s) = \operatorname*{argmax}_{a \in A}\left( r_a(s) + \gamma \sum_{s' \in S} P_a(s, s')\, V(s')\right).$$

Convergence of Value Iteration. Does Value Iteration always find the true value function $V$? Yes; in HW5 you will show
$$\max_s |V_k(s) - V(s)| \le \gamma\, \max_s |V_{k-1}(s) - V(s)|,$$
i.e. $V_k$ gets closer and closer to the true $V$.

Now suppose we do not know the parameters of the MDP:
- the transition probabilities $P$
- the reward function $r$

But we do still assume we can observe the states (in contrast to HMMs); otherwise this is called a Partially Observable MDP (POMDP) and learning is much more difficult. In this case, how do we find the optimal policy? We discuss examples from two families of learning algorithms: model-based approaches and model-free approaches.

Model-based approaches. Key idea: learn the model $P$ and $r$ explicitly from samples. Suppose we have a sequence of interactions $s_1, a_1, r_1, s_2, a_2, r_2, \ldots, s_T, a_T, r_T$; then the MLEs for $P$ and $r$ are simply
$$P_a(s, s') \propto \#\{\text{transitions from } s \text{ to } s' \text{ after taking action } a\},$$
$$r_a(s) = \text{average observed reward at state } s \text{ after taking action } a.$$
Having estimates of the parameters, we can then apply value iteration to find the optimal policy.
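Below is a minimal value iteration sketch for an MDP stored as arrays P[a, s, s'] and r[a, s] (the specific numbers are hypothetical). It repeats the Bellman update until the change is tiny, then reads off the greedy policy; the same routine can be run on estimated parameters $\hat P, \hat r$ in the model-based approach.

```python
import numpy as np

def value_iteration(P, r, gamma, tol=1e-8):
    """P[a, s, s']: transition probabilities, r[a, s]: expected rewards, gamma in (0, 1)."""
    A, S, _ = P.shape
    V = np.zeros(S)                              # arbitrary initialization V_0
    while True:
        Q = r + gamma * P @ V                    # Q[a, s] = r_a(s) + gamma * sum_s' P_a(s, s') V(s')
        V_new = Q.max(axis=0)                    # Bellman update: V_k(s) = max_a Q[a, s]
        if np.max(np.abs(V_new - V)) < tol:      # contraction (factor gamma) guarantees convergence
            return V_new, Q.argmax(axis=0)       # value function and greedy policy pi*(s)
        V = V_new

# Hypothetical 3-state, 2-action MDP (same array layout as above).
P = np.array([[[0.8, 0.2, 0.0], [0.1, 0.8, 0.1], [0.0, 0.2, 0.8]],
              [[0.5, 0.5, 0.0], [0.0, 0.5, 0.5], [0.5, 0.0, 0.5]]])
r = np.array([[0.0, 1.0, 2.0],
              [0.5, 0.0, 3.0]])
V, pi = value_iteration(P, r, gamma=0.9)
print(V, pi)     # optimal values and the action pi*(s) to take in each state
```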

Model-based approaches (continued). How do we collect the data $s_1, a_1, r_1, s_2, a_2, r_2, \ldots, s_T, a_T, r_T$? The simplest idea is to follow a random policy for $T$ steps. This is similar to "explore then exploit", and we know this is not the best way. Let's adopt the $\epsilon$-greedy idea instead.

A sketch of model-based approaches:
- Initialize $V$, $P$, $r$ randomly.
- For $t = 1, 2, \ldots$:
  - with probability $\epsilon$, explore: pick an action uniformly at random
  - with probability $1 - \epsilon$, exploit: pick the optimal action based on $V$
  - update the model parameters $P, r$
  - update the value function $V$ via value iteration (or simpler methods)

Model-free approaches. Key idea: do not learn the model explicitly. What do we learn then? Define the function $Q: S \times A \to \mathbb{R}$ as
$$Q(s, a) = r_a(s) + \gamma \sum_{s' \in S} P_a(s, s') \max_{a' \in A} Q(s', a').$$
In words, $Q(s, a)$ is the expected reward one can achieve starting from state $s$ with action $a$, and then acting optimally. Clearly, $V(s) = \max_a Q(s, a)$. Knowing $Q(s, a)$, the optimal policy at state $s$ is simply $\operatorname{argmax}_a Q(s, a)$. Model-free approaches learn the Q function directly from samples.

Temporal difference. How do we learn the Q function
$$Q(s, a) = r_a(s) + \gamma \sum_{s' \in S} P_a(s, s') \max_{a' \in A} Q(s', a')?$$
On an experience $(s_t, a_t, r_t, s_{t+1})$, with the current guess of $Q$, the quantity $r_t + \gamma \max_{a'} Q(s_{t+1}, a')$ is like a sample of the right-hand side of the equation. So it is natural to do the following update:
$$Q(s_t, a_t) \leftarrow (1 - \alpha)\, Q(s_t, a_t) + \alpha \left(r_t + \gamma \max_{a'} Q(s_{t+1}, a')\right) = Q(s_t, a_t) + \alpha \underbrace{\left(r_t + \gamma \max_{a'} Q(s_{t+1}, a') - Q(s_t, a_t)\right)}_{\text{temporal difference}},$$
where $\alpha$ acts like a learning rate.

Q-learning. The simplest model-free algorithm:
- Initialize $Q$ randomly; denote the initial state by $s_1$.
- For $t = 1, 2, \ldots$:
  - with probability $\epsilon$, explore: $a_t$ is chosen uniformly at random
  - with probability $1 - \epsilon$, exploit: $a_t = \operatorname{argmax}_a Q(s_t, a)$
  - execute action $a_t$, receive reward $r_t$, arrive at state $s_{t+1}$
  - update the Q function, for some learning rate $\alpha$:
$$Q(s_t, a_t) \leftarrow (1 - \alpha)\, Q(s_t, a_t) + \alpha \left(r_t + \gamma \max_{a'} Q(s_{t+1}, a')\right)$$
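Here is a minimal tabular Q-learning sketch with ε-greedy exploration. The learner only interacts with the environment through a step(s, a) sampler; the sampler below is a hypothetical one built from the same toy MDP arrays as before (which the learner does not get to see), and the fixed α, ε, and number of steps are illustrative choices.

```python
import numpy as np

def q_learning(step, S, A, gamma, alpha=0.1, eps=0.1, T=200000, seed=0):
    """Tabular Q-learning: learn Q(s, a) directly from samples (s_t, a_t, r_t, s_{t+1})."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((S, A))                          # initialize Q (here: all zeros)
    s = 0                                         # initial state s_1
    for t in range(T):
        if rng.random() < eps:
            a = int(rng.integers(A))              # explore: uniformly random action
        else:
            a = int(np.argmax(Q[s]))              # exploit: greedy w.r.t. the current Q
        reward, s_next = step(s, a)               # execute a_t, receive r_t, arrive at s_{t+1}
        td = reward + gamma * Q[s_next].max() - Q[s, a]   # temporal difference
        Q[s, a] += alpha * td                     # Q <- (1 - alpha) Q + alpha (r + gamma max_a' Q)
        s = s_next
    return Q

# Hypothetical environment simulator built from the toy MDP arrays (same layout as before).
P = np.array([[[0.8, 0.2, 0.0], [0.1, 0.8, 0.1], [0.0, 0.2, 0.8]],
              [[0.5, 0.5, 0.0], [0.0, 0.5, 0.5], [0.5, 0.0, 0.5]]])
r = np.array([[0.0, 1.0, 2.0],
              [0.5, 0.0, 3.0]])
env_rng = np.random.default_rng(1)
step = lambda s, a: (r[a, s], int(env_rng.choice(3, p=P[a, s])))
Q = q_learning(step, S=3, A=2, gamma=0.9)
print(Q.max(axis=1))   # should roughly match the value function V found by value iteration
```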

Comparisons (model-based vs. model-free):
- What it learns: the model parameters $P, r, \ldots$ vs. the Q function
- Space: $O(|S|^2 |A|)$ vs. $O(|S| |A|)$
- Sample efficiency: usually better vs. usually worse

There are many different algorithms, and RL is an active research area.

Summary. A brief introduction to some online decision making problems:
- Multi-armed bandits: the most basic problem for understanding the exploration vs. exploitation trade-off; algorithms: explore then exploit, $\epsilon$-greedy, UCB.
- Reinforcement learning: a combination of Markov models and multi-armed bandits; learning the optimal policy for a known MDP via value iteration; learning the optimal policy for an unknown MDP via model-based approaches and model-free approaches (e.g. Q-learning).