Reinforcement Learning Wrap-up


Reinforcement Learning Wrap-up Slides courtesy of Dan Klein and Pieter Abbeel University of California, Berkeley [These slides were created by Dan Klein and Pieter Abbeel for CS188 Intro to AI at UC Berkeley. All CS188 materials are available at http://ai.berkeley.edu.]

Approximate Q-Learning

[Demo: RL pacman] Generalizing Across States. Basic Q-Learning keeps a table of all q-values. In realistic situations, we cannot possibly learn about every single state! Too many states to visit them all in training. Too many states to hold the q-tables in memory. Instead, we want to generalize: learn about some small number of training states from experience, then generalize that experience to new, similar situations. This is a fundamental idea in machine learning, and we'll see it over and over again.

[Demo: Q-learning pacman tiny watch all (L11D5)] [Demo: Q-learning pacman tiny silent train (L11D6)] [Demo: Q-learning pacman tricky watch all (L11D7)] Example: Pacman. Let's say we discover through experience that this state is bad. In naïve q-learning, we know nothing about this state, or even this one!

Feature-Based Representations. Solution: describe a state using a vector of features (properties). Features are functions from states to real numbers (often 0/1) that capture important properties of the state. Example features: distance to closest ghost; distance to closest dot; number of ghosts; 1 / (dist to dot)²; is Pacman in a tunnel? (0/1); etc. Is it the exact state on this slide? Can also describe a q-state (s, a) with features (e.g. action moves closer to food).
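As an illustration of such a feature map, here is a minimal sketch in Python. The state fields (Pacman's position, ghost positions, food positions) and the Manhattan-distance helper are assumptions made for the example, not the actual CS188 Pacman API.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

Pos = Tuple[int, int]

@dataclass
class State:
    pacman: Pos
    ghosts: List[Pos]
    food: List[Pos]

def manhattan(a: Pos, b: Pos) -> int:
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def features(state: State) -> Dict[str, float]:
    """Map a state to a small vector of real-valued features."""
    dist_ghost = min(manhattan(state.pacman, g) for g in state.ghosts)
    dist_dot = min(manhattan(state.pacman, f) for f in state.food)
    return {
        "dist-to-closest-ghost": float(dist_ghost),
        "dist-to-closest-dot": float(dist_dot),
        "num-ghosts": float(len(state.ghosts)),
        "inv-dist-to-dot-sq": 1.0 / (dist_dot + 1.0) ** 2,  # +1 avoids division by zero
    }
```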

Linear Value Functions. Using a feature representation, we can write a q function (or value function) for any state using a few weights: V(s) = w1·f1(s) + w2·f2(s) + … + wn·fn(s), and Q(s, a) = w1·f1(s, a) + w2·f2(s, a) + … + wn·fn(s, a). Advantage: our experience is summed up in a few powerful numbers. Disadvantage: states may share features but actually be very different in value!

Approximate Q-Learning. Q-learning with linear Q-functions: on each transition (s, a, r, s'), compute difference = [r + γ max_a' Q(s', a')] − Q(s, a). Exact Q's: Q(s, a) ← Q(s, a) + α · difference. Approximate Q's: w_i ← w_i + α · difference · f_i(s, a). Intuitive interpretation: adjust weights of active features. E.g., if something unexpectedly bad happens, blame the features that were on: disprefer all states with that state's features. Formal justification: online least squares.
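A compact sketch of this update in Python, assuming features come back as a dict of name → value (as in the feature extractor above) and that legal_actions(s') enumerates the actions available in the next state; both helpers are illustrative assumptions, not the course codebase.

```python
def q_value(weights, feats):
    """Linear Q-value: dot product of the weight vector and the feature vector."""
    return sum(weights.get(name, 0.0) * value for name, value in feats.items())

def approx_q_update(weights, feature_fn, legal_actions, s, a, r, s_next,
                    alpha=0.05, gamma=0.9):
    """One approximate Q-learning step: w_i <- w_i + alpha * difference * f_i(s, a)."""
    feats = feature_fn(s, a)
    future = [q_value(weights, feature_fn(s_next, a2)) for a2 in legal_actions(s_next)]
    max_next_q = max(future) if future else 0.0  # terminal state: no future value
    difference = (r + gamma * max_next_q) - q_value(weights, feats)
    for name, value in feats.items():
        weights[name] = weights.get(name, 0.0) + alpha * difference * value
    return weights
```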

Example: Q-Pacman [Demo: approximate Q-learning pacman (L11D10)]

Video of Demo Approximate Q-Learning -- Pacman

Q-Learning and Least Squares

Linear Approximation: Regression* [scatter plots illustrating linear regression with one and two input features; the fitted line/plane gives the prediction]

Optimization: Least Squares* [plot showing an observation, the prediction, and the error (residual) between them]

Minimizing Error* Imagine we had only one point x, with features f(x), target value y, and weights w: error(w) = ½ (y − Σ_k w_k f_k(x))². Stepping down the gradient gives w_m ← w_m + α (y − Σ_k w_k f_k(x)) f_m(x). Approximate q update explained: w_m ← w_m + α [r + γ max_a' Q(s', a') − Q(s, a)] f_m(s, a); the bracketed term is the "target minus prediction" error.

Overfitting: Why Limiting Capacity Can Help* [plot of a degree-15 polynomial fit oscillating wildly between the training points]

Policy Search

Policy Search. Problem: often the feature-based policies that work well (win games, maximize utilities) aren't the ones that approximate V / Q best. E.g. your value functions from project 2 were probably horrible estimates of future rewards, but they still produced good decisions. Q-learning's priority: get Q-values close (modeling). Action selection priority: get ordering of Q-values right (prediction). We'll see this distinction between modeling and prediction again later in the course. Solution: learn policies that maximize rewards, not the values that predict them. Policy search: start with an ok solution (e.g. Q-learning) then fine-tune by hill climbing on feature weights.

Policy Search. Simplest policy search: start with an initial linear value function or Q-function, then nudge each feature weight up and down and see if your policy is better than before. Problems: How do we tell the policy got better? Need to run many sample episodes! If there are a lot of features, this can be impractical. Better methods exploit lookahead structure, sample wisely, and change multiple parameters.
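A rough sketch of this naive hill climbing in Python; evaluate_policy (which would run many sample episodes and average the returns) and the step size are assumptions made for illustration.

```python
import random

def hill_climb(weights, evaluate_policy, step=0.1, iterations=100):
    """Nudge one feature weight up or down; keep the change only if the policy improves."""
    best = dict(weights)
    best_score = evaluate_policy(best)  # e.g. average return over many sample episodes
    for _ in range(iterations):
        name = random.choice(list(best))
        for delta in (+step, -step):
            candidate = dict(best)
            candidate[name] += delta
            score = evaluate_policy(candidate)
            if score > best_score:
                best, best_score = candidate, score
    return best
```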

Policy Search [Andrew Ng] [Video: HELICOPTER]

Probability Slides courtesy of Dan Klein and Pieter Abbeel University of California, Berkeley [These slides were created by Dan Klein and Pieter Abbeel for CS188 Intro to AI at UC Berkeley. All CS188 materials are available at http://ai.berkeley.edu.]

Today: Probability. Random variables. Joint and marginal distributions. Conditional distributions. Product rule, chain rule, Bayes rule. Inference. Independence. You'll need all this stuff A LOT for the next few weeks, so make sure you go over it now! Read AIMA 13.1-13.5.

Uncertainty. General situation: Observed variables (evidence): agent knows certain things about the state of the world (e.g., sensor readings or symptoms). Unobserved variables: agent needs to reason about other aspects (e.g. where an object is or what disease is present). Model: agent knows something about how the known variables relate to the unknown variables. Probabilistic reasoning gives us a framework for managing our beliefs and knowledge.

Random Variables. A random variable is some aspect of the world about which we (may) have uncertainty. R = Is it raining? T = Is it hot or cold? D = How long will it take to drive to work? L = Where is the ghost? We denote random variables with capital letters. Like variables in a CSP, random variables have domains: R in {true, false} (often written as {+r, -r}); T in {hot, cold}; D in [0, ∞); L in possible locations, maybe {(0,0), (0,1), …}.

Probability Distributions. Associate a probability with each value. Temperature: P(T=hot)=0.5, P(T=cold)=0.5. Weather: P(W=sun)=0.6, P(W=rain)=0.1, P(W=fog)=0.3, P(W=meteor)=0.0.

Probability Distributions. Unobserved random variables have distributions: T: hot 0.5, cold 0.5; W: sun 0.6, rain 0.1, fog 0.3, meteor 0.0. A distribution is a TABLE of probabilities of values. Shorthand notation (e.g. P(hot) for P(T=hot)) is OK if all domain entries are unique. A probability (lower case value) is a single number. Must have: 0 ≤ P(X=x) ≤ 1 and Σ_x P(X=x) = 1.

Joint Distributions. A joint distribution over a set of random variables specifies a real number for each assignment (or outcome). Must obey: P(x1, …, xn) ≥ 0 and the sum over all assignments is 1. Example over T, W: hot sun 0.4, hot rain 0.1, cold sun 0.2, cold rain 0.3. Size of the distribution if n variables with domain sizes d? d^n entries. For all but the smallest distributions, impractical to write out!

Probabilistic Models. A probabilistic model is a joint distribution over a set of random variables. Probabilistic models: (random) variables with domains; assignments are called outcomes; joint distributions say whether assignments (outcomes) are likely; normalized: sum to 1.0; ideally: only certain variables directly interact. Constraint satisfaction problems: variables with domains; constraints state whether assignments are possible; ideally: only certain variables directly interact. Distribution over T, W: hot sun 0.4, hot rain 0.1, cold sun 0.2, cold rain 0.3. Constraint over T, W: hot sun T, hot rain F, cold sun F, cold rain T.

Events. An event is a set E of outcomes. From a joint distribution, we can calculate the probability of any event. Probability that it's hot AND sunny? Probability that it's hot? Probability that it's hot OR sunny? (Joint over T, W: hot sun 0.4, hot rain 0.1, cold sun 0.2, cold rain 0.3.) Typically, the events we care about are partial assignments, like P(T=hot).
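To make the computation concrete, here is a small Python sketch that represents the joint over T and W as a dict and sums the outcomes in an event; the representation is just one convenient choice for illustration, not anything from the course code.

```python
# Joint distribution P(T, W) from the slide, keyed by (t, w).
joint = {
    ("hot", "sun"): 0.4, ("hot", "rain"): 0.1,
    ("cold", "sun"): 0.2, ("cold", "rain"): 0.3,
}

def prob(event):
    """P(event) = sum of joint probabilities of the outcomes in the event."""
    return sum(p for outcome, p in joint.items() if event(outcome))

print(prob(lambda o: o == ("hot", "sun")))             # hot AND sunny: 0.4
print(prob(lambda o: o[0] == "hot"))                   # hot: 0.5
print(prob(lambda o: o[0] == "hot" or o[1] == "sun"))  # hot OR sunny: 0.7
```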

Quiz: Events. P(+x, +y)? P(+x)? P(-y OR +x)? Joint P(X, Y): +x +y 0.2, +x -y 0.3, -x +y 0.4, -x -y 0.1.

Marginal Distributions. Marginal distributions are sub-tables which eliminate variables. Marginalization (summing out): combine collapsed rows by adding, e.g. P(t) = Σ_w P(t, w). From the joint P(T, W) (hot sun 0.4, hot rain 0.1, cold sun 0.2, cold rain 0.3) we get P(T): hot 0.5, cold 0.5, and P(W): sun 0.6, rain 0.4.
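A corresponding sketch of summing out a variable, using the same dict-of-outcomes representation as above (an illustrative choice, not a prescribed API):

```python
from collections import defaultdict

joint = {("hot", "sun"): 0.4, ("hot", "rain"): 0.1,
         ("cold", "sun"): 0.2, ("cold", "rain"): 0.3}

def marginal(joint, index):
    """Sum out all variables except the one at position `index` of each outcome tuple."""
    result = defaultdict(float)
    for outcome, p in joint.items():
        result[outcome[index]] += p
    return dict(result)

print(marginal(joint, 0))  # P(T): {'hot': 0.5, 'cold': 0.5}
print(marginal(joint, 1))  # P(W): sun 0.6, rain 0.4 (up to float rounding)
```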

Quiz: Marginal Distributions. From the joint P(X, Y): +x +y 0.2, +x -y 0.3, -x +y 0.4, -x -y 0.1, fill in the marginal tables P(X) (entries +x, -x) and P(Y) (entries +y, -y).

Conditional Probabilities. A simple relation between joint and conditional probabilities; in fact, this is taken as the definition of a conditional probability: P(a | b) = P(a, b) / P(b). Example joint P(T, W): hot sun 0.4, hot rain 0.1, cold sun 0.2, cold rain 0.3.

Quiz: Conditional Probabilities. P(+x | +y)? P(-x | +y)? P(-y | +x)? Joint P(X, Y): +x +y 0.2, +x -y 0.3, -x +y 0.4, -x -y 0.1.

Conditional Distributions. Conditional distributions are probability distributions over some variables given fixed values of others. From the joint P(T, W) (hot sun 0.4, hot rain 0.1, cold sun 0.2, cold rain 0.3): P(W | T=hot): sun 0.8, rain 0.2; P(W | T=cold): sun 0.4, rain 0.6.

Normalization Trick. From the joint P(T, W) (hot sun 0.4, hot rain 0.1, cold sun 0.2, cold rain 0.3): SELECT the joint probabilities matching the evidence (T=cold): cold sun 0.2, cold rain 0.3. NORMALIZE the selection (make it sum to one): P(W | T=cold): sun 0.4, rain 0.6. Why does this work? Sum of the selection is P(evidence)! (P(T=cold), here.)

Quiz: Normalization Trick. P(X | Y=-y)? Joint P(X, Y): +x +y 0.2, +x -y 0.3, -x +y 0.4, -x -y 0.1. SELECT the joint probabilities matching the evidence, then NORMALIZE the selection (make it sum to one).

To Normalize. (Dictionary) To bring or restore to a normal condition. Procedure: Step 1: compute Z = sum over all entries. Step 2: divide every entry by Z. All entries then sum to ONE. Example 1: W: sun 0.2, rain 0.3; Z = 0.5; normalized: sun 0.4, rain 0.6. Example 2: T, W: hot sun 20, hot rain 5, cold sun 10, cold rain 15; Z = 50; normalized: hot sun 0.4, hot rain 0.1, cold sun 0.2, cold rain 0.3.
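A small sketch of that two-step procedure in Python (the dict representation is again just an illustrative choice):

```python
def normalize(counts):
    """Step 1: compute Z = sum of entries. Step 2: divide every entry by Z."""
    z = sum(counts.values())
    return {k: v / z for k, v in counts.items()}

# Example 2 from the slide: counts over (T, W).
counts = {("hot", "sun"): 20, ("hot", "rain"): 5,
          ("cold", "sun"): 10, ("cold", "rain"): 15}
print(normalize(counts))
# {('hot', 'sun'): 0.4, ('hot', 'rain'): 0.1, ('cold', 'sun'): 0.2, ('cold', 'rain'): 0.3}
```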

Probabilistic Inference. Probabilistic inference: compute a desired probability from other known probabilities (e.g. conditional from joint). We generally compute conditional probabilities, e.g. P(on time | no reported accidents) = 0.90. These represent the agent's beliefs given the evidence. Probabilities change with new evidence: P(on time | no accidents, 5 a.m.) = 0.95; P(on time | no accidents, 5 a.m., raining) = 0.80. Observing new evidence causes beliefs to be updated.

Inference by Enumeration. General case: evidence variables E1 … Ek = e1 … ek; query variable Q; hidden variables H1 … Hr (together, all the variables). We want: P(Q | e1 … ek). (Works fine with multiple query variables, too.) Step 1: select the entries consistent with the evidence. Step 2: sum out H to get the joint of the query and the evidence. Step 3: normalize (multiply by 1/Z, where Z is the sum from Step 2, i.e. P(e1 … ek)).
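The three steps translate directly into a short Python sketch; identifying variables by their position in each outcome tuple is an illustrative convention, not anything from the course.

```python
from collections import defaultdict

def enumeration_inference(joint, query_index, evidence):
    """P(Q | evidence): select consistent entries, sum out hidden variables, normalize."""
    # Step 1: select entries consistent with the evidence (positions -> observed values).
    selected = {o: p for o, p in joint.items()
                if all(o[i] == v for i, v in evidence.items())}
    # Step 2: sum out everything except the query variable.
    summed = defaultdict(float)
    for outcome, p in selected.items():
        summed[outcome[query_index]] += p
    # Step 3: normalize.
    z = sum(summed.values())
    return {value: p / z for value, p in summed.items()}

# P(W | T=cold) from the running T, W example:
joint_tw = {("hot", "sun"): 0.4, ("hot", "rain"): 0.1,
            ("cold", "sun"): 0.2, ("cold", "rain"): 0.3}
print(enumeration_inference(joint_tw, query_index=1, evidence={0: "cold"}))
# {'sun': 0.4, 'rain': 0.6}
```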

Inference by Enumeration. P(W)? P(W | winter)? P(W | winter, hot)? Joint over S, T, W: summer hot sun 0.30; summer hot rain 0.05; summer cold sun 0.10; summer cold rain 0.05; winter hot sun 0.10; winter hot rain 0.05; winter cold sun 0.15; winter cold rain 0.20.

Inference by Enumeration. Obvious problems: worst-case time complexity O(d^n); space complexity O(d^n) to store the joint distribution.

The Product Rule. Sometimes we have conditional distributions but want the joint: P(x, y) = P(x | y) P(y).

The Product Rule. Example: P(W): sun 0.8, rain 0.2. P(D | W): wet | sun 0.1, dry | sun 0.9, wet | rain 0.7, dry | rain 0.3. Product P(D, W): wet sun 0.08, dry sun 0.72, wet rain 0.14, dry rain 0.06.
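A quick check of those numbers in Python, building the joint from the marginal and the conditional (a sketch; the dict layout is just for illustration):

```python
p_w = {"sun": 0.8, "rain": 0.2}                              # P(W)
p_d_given_w = {("wet", "sun"): 0.1, ("dry", "sun"): 0.9,
               ("wet", "rain"): 0.7, ("dry", "rain"): 0.3}   # P(D | W)

# Product rule: P(d, w) = P(d | w) * P(w)
p_dw = {(d, w): p * p_w[w] for (d, w), p in p_d_given_w.items()}
print({k: round(v, 2) for k, v in p_dw.items()})
# {('wet', 'sun'): 0.08, ('dry', 'sun'): 0.72, ('wet', 'rain'): 0.14, ('dry', 'rain'): 0.06}
```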

The Chain Rule. More generally, we can always write any joint distribution as an incremental product of conditional distributions: P(x1, x2, …, xn) = P(x1) P(x2 | x1) P(x3 | x1, x2) … = Π_i P(xi | x1 … x(i-1)). Why is this always true?

Bayes Rule

Bayes Rule. Two ways to factor a joint distribution over two variables: P(x, y) = P(x | y) P(y) = P(y | x) P(x). (That's my rule!) Dividing, we get: P(x | y) = P(y | x) P(x) / P(y). Why is this at all helpful? Lets us build one conditional from its reverse. Often one conditional is tricky but the other one is simple. Foundation of many systems we'll see later (e.g. ASR, MT). In the running for most important AI equation!

Inference with Bayes Rule. Example: diagnostic probability from causal probability: P(cause | effect) = P(effect | cause) P(cause) / P(effect). Example: M = meningitis, S = stiff neck. Givens: P(+m) = 0.0001, P(+s | +m) = 0.8, P(+s | -m) = 0.01. Then P(+m | +s) = P(+s | +m) P(+m) / P(+s) = P(+s | +m) P(+m) / [P(+s | +m) P(+m) + P(+s | -m) P(-m)] = (0.8 × 0.0001) / (0.8 × 0.0001 + 0.01 × 0.9999) ≈ 0.0079. Note: posterior probability of meningitis still very small. Note: you should still get stiff necks checked out! Why?
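The same calculation as a few lines of Python, just to confirm the arithmetic over the slide's numbers (a sketch, not anything from the course code):

```python
p_m = 0.0001            # P(+m): prior probability of meningitis
p_s_given_m = 0.8       # P(+s | +m)
p_s_given_not_m = 0.01  # P(+s | -m)

# Bayes rule, with the denominator expanded by total probability:
p_s = p_s_given_m * p_m + p_s_given_not_m * (1 - p_m)
p_m_given_s = p_s_given_m * p_m / p_s
print(p_m_given_s)  # ~0.0079: the posterior is still very small
```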

Quiz: Bayes Rule. Given: P(W): sun 0.8, rain 0.2, and P(D | W): wet | sun 0.1, dry | sun 0.9, wet | rain 0.7, dry | rain 0.3. What is P(W | dry)?

Next Time: Markov Models