Uncertain Knowledge and Reasoning


Uncertainty (Part IV: Uncertain Knowledge and Reasoning)

Let action A_t = leave for airport t minutes before the flight. Will A_t get me there on time?
Problems:
1) partial observability (road state, other drivers' plans, etc.)
2) noisy sensors (traffic reports)
3) uncertainty in action outcomes (flat tire, etc.)
4) immense complexity of modelling and predicting traffic
Hence a purely logical approach either
1) risks falsehood: "A_25 will get me there on time", or
2) leads to conclusions that are too weak for decision making: "A_25 will get me there on time if there's no accident on the bridge and it doesn't rain and my tires remain intact, etc., etc."
(A_1440 might reasonably be said to get me there on time, but I'd have to stay overnight in the airport...)

Methods for handling uncertainty

Default or nonmonotonic logic (Ch. 12.6):
  Assume my car does not have a flat tire.
  Assume A_25 works unless contradicted by evidence.
  Issues: What assumptions are reasonable? How to handle contradiction?
Rules with fudge factors (Ch. 14.7.1):
  A_25 ↦0.3 AtAirportOnTime
  Sprinkler ↦0.99 WetGrass
  WetGrass ↦0.7 Rain
  Issues: problems with combination, e.g., does Sprinkler cause Rain?
Probability:
  Given the available evidence, A_25 will get me there on time with probability 0.04.
(Note: fuzzy logic (Ch. 14.7.3) handles degree of truth, NOT uncertainty; e.g., WetGrass is true to degree 0.2.)

Probability

Probabilistic assertions summarize the effects of
  laziness: failure to enumerate exceptions, qualifications, etc.
  ignorance: lack of relevant facts, initial conditions, etc.
Subjective or Bayesian probability: probabilities relate propositions to one's own state of knowledge,
e.g., P(A_25 | no reported accidents) = 0.06
(these might be learned from past experience of similar situations).
Probabilities of propositions change with new evidence:
e.g., P(A_25 | no reported accidents, 5 a.m.) = 0.15
(note that the comma in the conditioning context means "and").

Making decisions under uncertainty

Using probabilities, I can reason under uncertainty. What do I need to make rational decisions under uncertainty? When are decisions considered to be rational?

Suppose I believe the following:
P(A_25 gets me there on time | ...) = 0.04
P(A_90 gets me there on time | ...) = 0.70
P(A_120 gets me there on time | ...) = 0.95
P(A_1440 gets me there on time | ...) = 0.9999
Which action should I choose? That depends on my preferences for missing the flight vs. airport cuisine, etc.
Utility theory is used to represent and infer preferences.
Decision theory = utility theory + probability theory (see the sketch after the recap below).

Refreshing probability theory (Chapter 13, Quantifying Uncertainty)

Do you recall the following?
probability distribution
probability density or mass function
prior (unconditional) and posterior (conditional) probability
(conditional) independence
chain rule, marginalisation, conditioning, Bayes' rule
properties of probabilities and distributions
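As an illustration of the "decision theory = utility theory + probability theory" slogan above, here is a minimal Python sketch: choose the action with the highest expected utility. Only the probabilities come from the slide; the utility numbers and variable names are invented for illustration.

```python
# Sketch of "decision theory = utility theory + probability theory":
# pick the action with the highest expected utility. The probabilities are the
# ones from the slide; the utility numbers below are made up for illustration.

P_on_time = {"A_25": 0.04, "A_90": 0.70, "A_120": 0.95, "A_1440": 0.9999}
minutes_early = {"A_25": 25, "A_90": 90, "A_120": 120, "A_1440": 1440}

U_MISS_FLIGHT = -1000.0        # hypothetical utility of missing the flight
U_PER_MINUTE_WAITING = -0.5    # hypothetical utility per minute spent waiting at the airport

def expected_utility(action):
    p = P_on_time[action]
    u_arrive = U_PER_MINUTE_WAITING * minutes_early[action]   # on time, but waiting
    return p * u_arrive + (1 - p) * U_MISS_FLIGHT

for a in P_on_time:
    print(a, round(expected_utility(a), 1))
print("Best action under these made-up preferences:",
      max(P_on_time, key=expected_utility))
```

Under these (purely illustrative) utilities the rational choice is A_120: leaving 25 minutes early is almost certain to miss the flight, while leaving a full day early wastes too much time waiting.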

Outline

Probability; Syntax and Semantics; Inference; Independence and Bayes' Rule

Probability basics

Begin with a set Ω, the sample space, e.g., the 6 possible rolls of a die. ω ∈ Ω is a sample point / possible world / atomic event.
A probability space or probability model is a sample space with an assignment P(ω) for every ω ∈ Ω such that
0 ≤ P(ω) ≤ 1 and Σ_ω P(ω) = 1,
e.g., P(1) = P(2) = P(3) = P(4) = P(5) = P(6) = 1/6.
An event A is any subset of Ω, with P(A) = Σ_{ω ∈ A} P(ω).
E.g., P(die roll < 4) = P(1) + P(2) + P(3) = 1/6 + 1/6 + 1/6 = 1/2.

Syntax for propositions

Propositional or Boolean random variables, e.g., Cavity (do I have a cavity?); Cavity = false is a proposition, also written ¬cavity.
Discrete random variables, e.g., Weather is one of ⟨sunny, rain, cloudy, snow⟩; Weather = rain is a proposition. Values must be exhaustive and mutually exclusive.
Continuous random variables, e.g., Temp = 21.6; we also allow, e.g., Temp < 22.0.
Arbitrary Boolean combinations of basic propositions are allowed, e.g., Weather = sunny ∧ Temp < 22.0.

Prior probability: notation and semantics

Prior or unconditional probabilities of propositions, e.g., P(Cavity = true) = 0.1 and P(Weather = sunny) = 0.72, correspond to belief prior to the arrival of any (new) evidence.
A probability distribution gives values for all possible assignments: P(Weather) = ⟨0.72, 0.1, 0.08, 0.1⟩ (normalized, i.e., sums to 1).
A joint probability distribution for a set of random variables gives the probability of every atomic event (sample point) on those variables. P(Weather, Cavity) is a 4 × 2 matrix of values:

                  Weather = sunny   rain   cloudy   snow
  Cavity = true   0.144             0.02   0.016    0.02
  Cavity = false  0.576             0.08   0.064    0.08

Every question about a domain can be answered by the joint distribution, because every event is a sum of sample points. A small sketch of this idea follows.
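To make the "every question can be answered from the joint" point concrete, here is a minimal Python sketch over the P(Weather, Cavity) table above. The dictionary layout and the helper name `prob` are illustrative choices, not anything from the slides.

```python
# The joint distribution P(Weather, Cavity) from the table above, stored as a
# dictionary mapping atomic events (sample points) to probabilities.
joint = {
    ("sunny",  True):  0.144, ("rain",  True):  0.02,
    ("cloudy", True):  0.016, ("snow",  True):  0.02,
    ("sunny",  False): 0.576, ("rain",  False): 0.08,
    ("cloudy", False): 0.064, ("snow",  False): 0.08,
}

def prob(event):
    """P(event): sum the probabilities of the sample points where `event` is true."""
    return sum(p for sample, p in joint.items() if event(sample))

# Marginals and a compound query, all read off the same joint:
print(prob(lambda s: s[1] is True))               # P(Cavity = true)   -> 0.2
print(prob(lambda s: s[0] == "sunny"))            # P(Weather = sunny) -> 0.72
print(prob(lambda s: s[0] == "sunny" and s[1]))   # P(sunny ∧ cavity)  -> 0.144
```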

Probability for continuous variables

The distribution cannot be expressed as a (multi-dimensional) vector. Rather, express the distribution as a parameterized function of the value:
P(X = x) = U[18, 26](x) = uniform density between 18 and 26.
Here P is a density; it therefore integrates to 1. P(X = 20.5) = 0.125 really means
lim_{dx→0} P(20.5 ≤ X ≤ 20.5 + dx) / dx = 0.125.
(A small numerical check of these densities appears after this section.)

Example: Gaussian density. The Gaussian density is also known as the normal distribution; with μ = 0 and σ² = 1 it is the standard normal distribution.
P(x) = N(μ, σ²)(x) = (1 / (σ√(2π))) e^{−(x−μ)² / (2σ²)}

Conditional probability: notation and semantics

Conditional or posterior probabilities, e.g., P(cavity | toothache) = 0.8, i.e., given that toothache is all I know; NOT "if toothache then 80% chance of cavity".
(Notation for conditional distributions: P(Cavity | Toothache) is a 2-element vector of 2-element vectors.)
If we know more, e.g., cavity is also given, then we have P(cavity | toothache, cavity) = 1.
Note: the less specific belief remains valid after more evidence arrives, but is not always useful.
New evidence may be irrelevant, allowing simplification, e.g., P(cavity | toothache, 49ersWin) = P(cavity | toothache) = 0.8.
This kind of inference, sanctioned by domain knowledge, is crucial!

Conditional probability: definition and derived rules

Definition of conditional probability:
P(a | b) = P(a ∧ b) / P(b), if P(b) ≠ 0.
The product rule gives an alternative formulation:
P(a ∧ b) = P(a | b) P(b) = P(b | a) P(a).
A general version holds for whole distributions, e.g.,
P(Weather, Cavity) = P(Weather | Cavity) P(Cavity)
(view this as a 4 × 2 set of equations, not as matrix multiplication).
The chain rule is derived by successive application of the product rule:
P(X_1, ..., X_n) = P(X_1, ..., X_{n−1}) P(X_n | X_1, ..., X_{n−1})
                 = P(X_1, ..., X_{n−2}) P(X_{n−1} | X_1, ..., X_{n−2}) P(X_n | X_1, ..., X_{n−1})
                 = ...
                 = Π_{i=1}^{n} P(X_i | X_1, ..., X_{i−1}).
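Here is the numerical check promised above, a minimal self-contained sketch (the function names are just illustrative): the uniform density on [18, 26] has value 0.125 at any point inside the interval, and both it and the standard Gaussian density integrate to approximately 1.

```python
import math

def uniform_density(x, a=18.0, b=26.0):
    """U[a, b](x): height 1 / (b - a) inside [a, b], 0 outside."""
    return 1.0 / (b - a) if a <= x <= b else 0.0

def gaussian_density(x, mu=0.0, sigma=1.0):
    """N(mu, sigma^2)(x) = exp(-(x - mu)^2 / (2 sigma^2)) / (sigma sqrt(2 pi))."""
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

print(uniform_density(20.5))     # 0.125, the density value used in the slide

# A density integrates to 1; check both with a crude Riemann sum.
dx = 0.001
print(sum(uniform_density(15 + i * dx) for i in range(15000)) * dx)    # ~1.0 over [15, 30)
print(sum(gaussian_density(-8 + i * dx) for i in range(16000)) * dx)   # ~1.0 over [-8, 8)
```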

Probabilistic reasoning: inference by enumeration

Start with the joint distribution P(Cavity, Catch, Toothache):

                        toothache            ¬toothache
                 catch      ¬catch       catch      ¬catch
  cavity         0.108      0.012        0.072      0.008
  ¬cavity        0.016      0.064        0.144      0.576

For any proposition φ, sum the atomic events where it is true:
P(φ) = Σ_{ω: ω ⊨ φ} P(ω)
E.g., P(toothache) = 0.108 + 0.012 + 0.016 + 0.064 = 0.2,
and P(cavity ∨ toothache) = 0.108 + 0.012 + 0.072 + 0.008 + 0.016 + 0.064 = 0.28.

Inference by enumeration: computing conditionals

From the same joint distribution you can also compute conditional probabilities, e.g., P(cavity | toothache) or its complement:

P(¬cavity | toothache) = P(¬cavity ∧ toothache) / P(toothache)
                       = (0.016 + 0.064) / (0.108 + 0.012 + 0.016 + 0.064)
                       = 0.4

A runnable sketch of these calculations follows.
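A minimal Python sketch of these enumeration steps; the full joint P(Cavity, Toothache, Catch) is the table above, and the helper name `P` is an illustrative choice.

```python
# Full joint P(Cavity, Toothache, Catch) from the table above, indexed as
# (cavity, toothache, catch) -> probability.
joint_dental = {
    (True,  True,  True):  0.108, (True,  True,  False): 0.012,
    (True,  False, True):  0.072, (True,  False, False): 0.008,
    (False, True,  True):  0.016, (False, True,  False): 0.064,
    (False, False, True):  0.144, (False, False, False): 0.576,
}

def P(event):
    """P(phi) = sum of P(omega) over the atomic events omega where phi holds."""
    return sum(p for omega, p in joint_dental.items() if event(*omega))

print(P(lambda cav, tooth, catch: tooth))           # P(toothache) = 0.2
print(P(lambda cav, tooth, catch: cav or tooth))    # P(cavity ∨ toothache) = 0.28
# Conditional: P(¬cavity | toothache) = P(¬cavity ∧ toothache) / P(toothache) = 0.4
print(P(lambda cav, tooth, catch: (not cav) and tooth) /
      P(lambda cav, tooth, catch: tooth))
```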

Normalization

Consider again the joint distribution P(Cavity, Catch, Toothache) above. The denominator can be viewed as a normalization constant α:

P(Cavity | toothache) = α P(Cavity, toothache)
                      = α [P(Cavity, toothache, catch) + P(Cavity, toothache, ¬catch)]
                      = α [⟨0.108, 0.016⟩ + ⟨0.012, 0.064⟩]
                      = α ⟨0.12, 0.08⟩ = ⟨0.6, 0.4⟩

General idea: compute the distribution on the query variable by fixing the evidence variables and summing over the hidden variables.

Inference by enumeration: posterior marginals

Let X be all the variables. Typically, we want the posterior joint distribution of the query variables Y given specific values e for the evidence variables E.
Let the hidden variables be H = X − Y − E.
Then the required summation of joint entries is done by summing out the hidden variables:
P(Y | E = e) = α P(Y, E = e) = α Σ_h P(Y, E = e, H = h)
The terms in the summation are joint entries because Y, E, and H together exhaust the set of random variables.
Summing out is also known as marginalisation; when written with conditional probabilities, the rule is known as conditioning:
P(Y | E = e) = Σ_h P(Y | E = e, H = h) P(H = h | E = e)
(A code sketch of this recipe appears after this section.)

Inference by enumeration: problems

Inference by enumeration requires summing out the hidden variables:
P(Y | E = e) = α P(Y, E = e) = α Σ_h P(Y, E = e, H = h)
Obvious problems with this approach to inference (scaling to n variables, each with arity d, i.e., d values):
1) worst-case time complexity O(d^n)
2) space complexity O(d^n) to store the joint distribution
3) how do we find the numbers for the O(d^n) entries?

Independence

A and B are independent iff
P(A | B) = P(A), or P(B | A) = P(B), or P(A, B) = P(A) P(B).
E.g., {Toothache, Catch, Cavity, Weather} decomposes into {Toothache, Catch, Cavity} and {Weather}:
P(Toothache, Catch, Cavity, Weather) = P(Toothache, Catch, Cavity) P(Weather)
32 entries are reduced to 12; for n independent biased coins, 2^n entries are reduced to n.
Absolute independence is powerful but rare. Dentistry is a large field with hundreds of variables, none of which are independent. What to do?
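The "fix the evidence, sum out the hidden variables, then normalize" recipe can be written as one short function. The sketch below uses the same dental joint as the previous sketch (redeclared here so the block is self-contained); the function name `posterior` and the index-based encoding of variables are illustrative choices. It reproduces P(Cavity | toothache) = ⟨0.6, 0.4⟩.

```python
# Posterior marginal by enumeration: P(Query | evidence) = alpha * sum_h P(Query, evidence, h).
# Same full joint as in the sketch above; variables are indexed by position:
# 0 = Cavity, 1 = Toothache, 2 = Catch.
joint_dental = {
    (True,  True,  True):  0.108, (True,  True,  False): 0.012,
    (True,  False, True):  0.072, (True,  False, False): 0.008,
    (False, True,  True):  0.016, (False, True,  False): 0.064,
    (False, False, True):  0.144, (False, False, False): 0.576,
}

def posterior(query_var, evidence, joint):
    """Normalized distribution over variable `query_var` given `evidence`
    (a dict {variable index: value}); all remaining variables are summed out."""
    dist = {}
    for omega, p in joint.items():
        if all(omega[var] == val for var, val in evidence.items()):
            dist[omega[query_var]] = dist.get(omega[query_var], 0.0) + p
    alpha = 1.0 / sum(dist.values())       # normalization constant
    return {value: alpha * p for value, p in dist.items()}

# P(Cavity | toothache): Toothache is fixed to true, Catch is summed out.
print(posterior(0, {1: True}, joint_dental))    # {True: 0.6, False: 0.4}
```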

Conditional independence

P(Toothache, Cavity, Catch) has 2³ − 1 = 7 independent entries.
If I have a cavity, the probability that the probe catches in it doesn't depend on whether I have a toothache:
(1) P(catch | toothache, cavity) = P(catch | cavity)
The same independence holds if I haven't got a cavity:
(2) P(catch | toothache, ¬cavity) = P(catch | ¬cavity)
So Catch is conditionally independent of Toothache given Cavity:
P(Catch | Toothache, Cavity) = P(Catch | Cavity)
Equivalent statements:
P(Toothache | Catch, Cavity) = P(Toothache | Cavity)
P(Toothache, Catch | Cavity) = P(Toothache | Cavity) P(Catch | Cavity)
Note that (conditional) independence is symmetric!

Conditional independence contd.

Write out the full joint distribution using the chain rule:
P(Toothache, Catch, Cavity)
  = P(Toothache | Catch, Cavity) P(Catch, Cavity)
  = P(Toothache | Catch, Cavity) P(Catch | Cavity) P(Cavity)
  = P(Toothache | Cavity) P(Catch | Cavity) P(Cavity)
i.e., 2 + 2 + 1 = 5 independent numbers (equations 1 and 2 remove 2).
In most cases, the use of conditional independence reduces the size of the representation of the joint distribution from exponential in n to linear in n.
Conditional independence is our most basic and robust form of knowledge about uncertain environments.
But how, then, do we compute e.g. P(Cavity | Toothache)?

Inference using Bayes' Rule

Product rule: P(a ∧ b) = P(a | b) P(b) = P(b | a) P(a)
⇒ Bayes' rule: P(a | b) = P(b | a) P(a) / P(b),
or in distribution form:
P(Y | X, e) = P(X | Y, e) P(Y | e) / P(X | e) = α P(X | Y, e) P(Y | e)
Useful for assessing a diagnostic probability from a causal probability:
P(Cause | Effect) = P(Effect | Cause) P(Cause) / P(Effect)
E.g., let M be meningitis, S be stiff neck:
P(m | s) = P(s | m) P(m) / P(s) = 0.8 × 0.0001 / 0.1 = 0.0008
Note: the posterior probability of meningitis is still very small!

Bayes' Rule (normalised) and conditional independence

P(Cavity | toothache ∧ catch)
  = α P(toothache ∧ catch | Cavity) P(Cavity)
  = α P(toothache | Cavity) P(catch | Cavity) P(Cavity)
This is an example of a naive Bayes model:
P(Cause, Effect_1, ..., Effect_n) = P(Cause) Π_i P(Effect_i | Cause)
(Figure: a Cause node with children Effect_1, ..., Effect_n; here Cavity with children Toothache and Catch.)
The total number of probability parameters is linear in n. A small sketch follows.
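Here is a minimal sketch of the normalized Bayes-rule / naive-Bayes computation above. The conditional probabilities are read off the dental joint used earlier (e.g., P(toothache | cavity) = 0.12 / 0.2 = 0.6); in that joint Toothache and Catch are exactly conditionally independent given Cavity, so the naive Bayes answer matches direct enumeration. Function and variable names are illustrative.

```python
# Naive Bayes: P(Cause | effect_1, ..., effect_n) = alpha * P(Cause) * prod_i P(effect_i | Cause).
# The numbers below are derived from the dental joint used earlier.

p_cavity = {True: 0.2, False: 0.8}
p_toothache_given_cavity = {True: 0.6, False: 0.1}   # P(toothache | Cavity = c)
p_catch_given_cavity     = {True: 0.9, False: 0.2}   # P(catch | Cavity = c)

def cavity_posterior(toothache, catch):
    unnormalized = {}
    for c in (True, False):
        pt = p_toothache_given_cavity[c] if toothache else 1 - p_toothache_given_cavity[c]
        pk = p_catch_given_cavity[c] if catch else 1 - p_catch_given_cavity[c]
        unnormalized[c] = p_cavity[c] * pt * pk
    alpha = 1.0 / sum(unnormalized.values())
    return {c: alpha * v for c, v in unnormalized.items()}

print(cavity_posterior(True, True))   # P(Cavity | toothache, catch) ≈ {True: 0.871, False: 0.129}

# Bayes' rule for diagnosis, the meningitis example from the slide:
print(0.8 * 0.0001 / 0.1)             # P(m | s) = P(s | m) P(m) / P(s) = 0.0008
```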

Wumpus World: description

The game ends when the player climbs out of the cave at [1,1], or dies. Death occurs from falling into a pit or running into a live Wumpus. A Wumpus can be killed with an arrow (the player has 1). The goal is to leave the cave with the gold.
Rewards: gold +1000, death −1000; −1 per step, −10 for using the arrow.
Actuators: Left turn, Right turn, Forward, Grab, Shoot, Climb.
Sensors: Stench, Breeze, Glitter, Bump, Scream:
  squares with an adjacent wumpus smell (Stench);
  squares adjacent to a pit are breezy (Breeze);
  Glitter iff the gold is in the same square;
  Bump means walking into a wall;
  upon killing a wumpus, you hear a Scream.
(Figure: the standard 4 × 4 wumpus cave with the start square [1,1], pits, breezes, stenches, and the gold.)

Wumpus World: capturing uncertainty

Suppose the current state is: the agent has visited [1,1], [1,2] and [2,1], felt no breeze in [1,1], and felt a breeze in [1,2] and [2,1].
(Figure: the 4 × 4 grid with the visited squares marked.)
Now each next step may be into a pit... What is the safest step?
Variable P_ij, with P_ij = true iff [i,j] contains a pit.
Variable B_ij, with B_ij = true iff [i,j] is breezy.
Include only B_1,1, B_1,2, B_2,1 (the currently sensed squares) in the probability model.

Specifying the probability model

The full joint distribution is P(P_1,1, ..., P_4,4, B_1,1, B_1,2, B_2,1).
Apply the product rule: P(B_1,1, B_1,2, B_2,1 | P_1,1, ..., P_4,4) P(P_1,1, ..., P_4,4).
(Do it this way round to get P(Effect | Cause).)
First term: 1 if the breezy squares are exactly those adjacent to pits, 0 otherwise.
Second term: pits are placed randomly, with probability 0.2 per square, so for n pits
P(P_1,1, ..., P_4,4) = Π_{i,j = 1,1}^{4,4} P(P_i,j) = 0.2^n × 0.8^(16−n).

Observations and query

Define Known = {P_1,1, P_1,2, P_2,1}. Then we know the following facts:
b = ¬b_1,1 ∧ b_1,2 ∧ b_2,1
known = ¬p_1,1 ∧ ¬p_1,2 ∧ ¬p_2,1
The query is P(P_1,3 | known, b).
Define Unknown = {P_ij} − {P_1,3} − Known.
For inference by enumeration, we have
P(P_1,3 | known, b) = α Σ_unknown P(P_1,3, unknown, known, b).
This grows exponentially with the number of squares!

Using conditional independence

Basic insight: observations are conditionally independent of the other hidden squares given the neighbouring hidden squares.
(Figure: the 4 × 4 grid partitioned into the QUERY square [1,3], the KNOWN squares, the FRINGE squares [2,2] and [3,1], and the OTHER squares.)
Note: Unknown = Fringe ∪ Other, and
P(b | P_1,3, Known, Unknown) = P(b | P_1,3, Known, Fringe).
Manipulate the query into a form where we can use this!

Using conditional independence contd.

P(P_1,3 | known, b)
  = α Σ_unknown P(P_1,3, unknown, known, b)
  = α Σ_unknown P(b | P_1,3, known, unknown) P(P_1,3, known, unknown)
  = α Σ_fringe Σ_other P(b | known, P_1,3, fringe, other) P(P_1,3, known, fringe, other)
  = α Σ_fringe Σ_other P(b | known, P_1,3, fringe) P(P_1,3, known, fringe, other)
  = α Σ_fringe P(b | known, P_1,3, fringe) Σ_other P(P_1,3, known, fringe, other)
  = α Σ_fringe P(b | known, P_1,3, fringe) Σ_other P(P_1,3) P(known) P(fringe) P(other)
  = α P(known) P(P_1,3) Σ_fringe P(b | known, P_1,3, fringe) P(fringe) Σ_other P(other)
  = α' P(P_1,3) Σ_fringe P(b | known, P_1,3, fringe) P(fringe)

Using conditional independence contd.

Consistent models for the variables P_1,3 and Fringe, together with P(fringe):
(Figure: five mini-grids, a circle indicating a pit. With a pit in [1,3] there are three consistent fringe models, with probabilities 0.2 × 0.2 = 0.04, 0.2 × 0.8 = 0.16 and 0.8 × 0.2 = 0.16; without a pit in [1,3] there are two, with probabilities 0.2 × 0.2 = 0.04 and 0.2 × 0.8 = 0.16.)

P(P_1,3 | known, b) = α' ⟨0.2 (0.04 + 0.16 + 0.16), 0.8 (0.04 + 0.16)⟩ ≈ ⟨0.31, 0.69⟩
Analogously: P(P_2,2 | known, b) ≈ ⟨0.86, 0.14⟩
A runnable sketch of this fringe enumeration follows the summary below.

Summary

Probability is a rigorous formalism for uncertain knowledge.
The joint probability distribution specifies the probability of every atomic event.
Queries can be answered by summing over atomic events.
For nontrivial domains, we must find a way to reduce the joint size.
Independence and conditional independence provide the tools.
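The fringe computation above is small enough to reproduce directly. The sketch below (square coordinates, data layout and helper names are illustrative choices) enumerates the two fringe squares [2,2] and [3,1], keeps only the configurations consistent with the observed breezes, and recovers P(P_1,3 | known, b) ≈ ⟨0.31, 0.69⟩.

```python
from itertools import product

PIT_PRIOR = 0.2
# Observed breezes at the visited squares: no breeze at [1,1], breeze at [1,2] and [2,1].
# The visited squares themselves are known to be pit-free.
breeze_obs = {(1, 1): False, (1, 2): True, (2, 1): True}
fringe = [(2, 2), (3, 1)]     # unknown squares adjacent to the visited ones
query = (1, 3)

def adjacent(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1]) == 1

def consistent(pits):
    """The breeze observations hold iff every visited square is breezy exactly
    when it is adjacent to some pit in `pits`."""
    return all(any(adjacent(sq, p) for p in pits) == breezy
               for sq, breezy in breeze_obs.items())

unnormalized = {}
for query_has_pit in (True, False):
    total = 0.0
    for assignment in product((True, False), repeat=len(fringe)):
        pits = {sq for sq, has_pit in zip(fringe, assignment) if has_pit}
        if query_has_pit:
            pits.add(query)
        if consistent(pits):                     # P(b | known, P_1,3, fringe) is 1 or 0
            p_fringe = 1.0
            for has_pit in assignment:
                p_fringe *= PIT_PRIOR if has_pit else 1 - PIT_PRIOR
            total += p_fringe
    prior = PIT_PRIOR if query_has_pit else 1 - PIT_PRIOR
    unnormalized[query_has_pit] = prior * total  # alpha' * P(P_1,3) * sum_fringe ...

alpha = 1.0 / sum(unnormalized.values())
print({k: round(alpha * v, 2) for k, v in unnormalized.items()})   # {True: 0.31, False: 0.69}
```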

Coming up:
Chapter 14 Probabilistic Reasoning
Chapter 15 Probabilistic Reasoning over Time
Chapter 16 Making Simple Decisions