Bayesian Networks

Introduction

Autonomous agents often face a great deal of uncertainty. An agent decides how to act based on its perceived state of itself and of the world, which is not necessarily the true state; it must rely on uncertain and incomplete data from its inputs. In other cases we may want the agent to predict a future state, which is fraught with even more uncertainty. If the agent's actions were based on probability distributions over states, perhaps we could deal with this uncertainty effectively. Given a distribution over states and a (potentially probabilistic) model for the outcomes of its actions, the agent could compute the expected reward for an action or policy. To the degree that this information is accurate, the chosen action or policy would be a good one.

This chapter will focus on how to generate the probability distribution on which the agent will base its actions. In order to have the most accurate distribution, it must use all the information at its disposal. The Bayesian Network, sometimes referred to as a BayesNet, is a framework for incorporating the available information into such an estimate.

Consider the car insurance agent. A client asks for a quote, and the agent wants to provide the quote that maximizes expected profit for his company; his quote should be the one with the highest expected profit given everything he knows about the client. This is very similar to a robot planning problem with an action model and a state. The primary difficulty is that we don't know ahead of time whether or not this client will file any claims. If we had that information, the problem would be trivial. We can instead compute an expectation over claims, which serves to push the difficulty into estimating the probability of claims for this client. This is precisely the problem I hope to address.

Given no information about the client, the agent would be forced to resort to his prior distribution. That is likely the probability distribution of cost over all drivers, and the expectation would tell him that this client will cost the average amount. However, the agent does have information about the client. Suppose that he knows that this client has been in 5 accidents in the last 7 years. A quote based on the average driver might not be prudent. How should the agent modify the quote? The agent now has some information on which to condition the probability: rather than P(cost), he is now interested in P(cost | 5 accidents). Suppose that the insurance company has compiled a database of all claims, with client information for each. This allows the agent to quote the average among clients with 5 accidents, which is surely a better estimate. But that isn't all that the agent knows. He also knows the client is a male, age 22, who drives a 1995 Buick Regal. The only problem is that when we filter the claims database on {male, 22, 5 accidents, 1995 Buick Regal} we don't get any data back; this client would be the first. How can the agent use this information to improve his estimate?
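To make the difficulty concrete, here is a minimal Java sketch of the naive filter-and-average estimate; the Claim record and the class, method, and field names are hypothetical illustrations of mine, not part of this chapter's code. With a single piece of evidence the filter usually matches many records; with the full conjunction of evidence it often matches none, which is exactly the problem just described.

import java.util.List;
import java.util.OptionalDouble;

public class QuoteEstimator {

    // Hypothetical claim record: total claim cost plus a few client attributes.
    record Claim(double cost, int accidents, int age, String sex, String car) {}

    // Prior estimate: average claim cost over every client in the database.
    static double priorEstimate(List<Claim> claims) {
        return claims.stream().mapToDouble(Claim::cost).average().orElse(0.0);
    }

    // Conditional estimate: average cost among clients matching all of the evidence.
    // With the full conjunction {male, 22, 5 accidents, 1995 Buick Regal} the filter
    // typically matches no records at all, leaving no data to average.
    static OptionalDouble conditionalEstimate(List<Claim> claims, String sex, int age,
                                              int accidents, String car) {
        return claims.stream()
                .filter(c -> c.sex().equals(sex) && c.age() == age
                          && c.accidents() == accidents && c.car().equals(car))
                .mapToDouble(Claim::cost)
                .average();
    }
}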

Problem Statement

The goal is to rationally estimate a probability using all the information available. To do this we will generate a BayesNet. A BayesNet is a directed acyclic graph whose nodes denote variables and whose edges denote ``relationships.'' A node contains the conditional probability distribution of its variable conditioned on its parents in the graph. Therefore, if two nodes are connected by an edge, the variables are dependent. The converse, however, is not true. Suppose that I'm interested in the probability of the grass being wet and I know that it is both cloudy and raining (figure 2.1). The sky being cloudy and the grass being wet are dependent, but knowing whether or not it is raining makes the cloudy information irrelevant. There is an edge between two nodes only if they are dependent regardless of the other information available. This is the essential feature of a Bayesian Network: some variables are not connected by edges because they are conditionally independent. This key insight dramatically simplifies the model.

Finally, we define the direction of the edges. We can consider these to be directions of causality. That, however, is ambiguous. In figure 2.1 the causality is clear, but that is not the case in general; consider the relationship between income and level of education, for instance. It turns out that if we are only considering two variables, the direction of the edge does not matter. In fact, if it did, that would suggest a serious flaw. But among three or more variables, it does matter. Figure 2.1 shows the difference between a chain of causality and root causality among three variables. Suppose I go to school on weekdays and I have ballet on Tuesday, Thursday and Sunday. Given the day of the week, school and ballet are independent, so figure 2.1a is correct. Figure 2.1b is not correct, because if we knew that it was a school day, ballet would not be independent of the day of the week, i.e. if we then learned the day of the week, the probability of ballet would change.
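In symbols: one structure consistent with this independence makes the day of the week a parent of both school and ballet, so that, writing D for the day of the week, S for school, and B for ballet, the joint distribution factors as

   P(D, S, B) = P(D) P(S | D) P(B | D).

From this factorization, P(B | D, S) = P(B | D) follows immediately: once the day is known, learning about school tells us nothing more about ballet, so no edge between S and B is needed.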

Inputs

There are three scenarios in which BayesNets are typically used. We start out knowing either (1) the variables, their relationships, and their joint distributions; (2) the variables and their relationships; or (3) just the variables. These cases are progressively more difficult because each includes the former as a sub-problem. This chapter will focus on the first case. The second and third cases are essentially supervised learning problems and are beyond the current scope [1]. Given a complete BayesNet we can take values of a subset of the variables and generate a probability distribution over a set of the remaining variables. Given the structure of a BayesNet and sample data over its variables, a simple way of generating the conditional probability tables is to estimate them from the data.

Correctness

Given a complete BayesNet of modest size or convenient structure, it is possible to compute exact probabilities, that is, probabilities that are consistent with Bayes' rule. In this case, that will constitute correctness. For larger networks, estimates can be achieved with sampling. The algorithm that I will present here is an exact inference algorithm.

Restrictions

In this chapter I will only consider discrete variables. This makes the joint distributions into tables rather than multivariable functions, which simplifies the treatment. Since continuous variables can be discretized, the convenience comes at only a small loss of expressiveness. However, we shall see that large variable domains have a tremendous impact on computational expense.
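As a concrete illustration of the discretization just mentioned, here is a tiny Java sketch; the bin boundaries and names are arbitrary choices of mine, not from the chapter.

public class Discretize {
    // Map a continuous age onto a small discrete domain so it can serve as a
    // variable in a table-based BayesNet. The cut points are arbitrary.
    static String ageBucket(double age) {
        if (age < 25) return "young";
        if (age < 60) return "middle-aged";
        return "senior";
    }

    public static void main(String[] args) {
        System.out.println(ageBucket(22.0)); // the 22-year-old client falls in "young"
    }
}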

Example Usage

Given this API, the car insurance agent would be able to find the probability distribution over the cost of insuring the client in front of him. Suppose that someone in company headquarters had already generated a complete Bayesian Network and stored it in the file insurance.bn. The agent could instantiate the network with

BayesNet bn = XMLBayesNet.readGraph("insurance.bn");

Had company headquarters supplied only the structure but not the probabilities, the agent could have created the network as above and then initialized it with

bn = CPTEstimator.estimateCPTs("claims.data");

With the BayesNet instantiated, he could query it as follows:

Assignment clientdata = XMLBayesNet.readAssignment("client.data");
Factor cost = bn.query("cost", clientdata);
System.out.println(cost);

The above functions use XML-style files as input. The formats of those files are defined here. Here are example files for a network, assignment and data set.
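Putting the calls above together, a minimal end-to-end sketch of the agent's workflow might look like this. The classes and method calls are the ones shown above and the file names are the same example files; the main method and comments are my additions, and anything not shown in the text (such as error handling) is left out.

public class InsuranceQuote {
    public static void main(String[] args) {
        // Load a fully specified network (structure plus probability tables).
        BayesNet bn = XMLBayesNet.readGraph("insurance.bn");

        // Alternatively, if only the structure were supplied, the tables could be
        // estimated from the claims database, as described above:
        // bn = CPTEstimator.estimateCPTs("claims.data");

        // Read the evidence about this particular client and query the cost variable.
        Assignment clientdata = XMLBayesNet.readAssignment("client.data");
        Factor cost = bn.query("cost", clientdata);

        // The factor printed here is the distribution over insurance cost
        // conditioned on everything known about the client.
        System.out.println(cost);
    }
}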

Method Description

In the inference case, we are supplied with a Bayesian Network and we want to be able to answer questions of the form ``What is P(X | e)?'', where X is the name of a variable in the network and e is a partial assignment of values to other variables in the network. Let Y denote the remaining unassigned variables. To determine the probability over the query variable, we can sum out all the unassigned variables:

   P(X | e) = α Σ_Y P(X, e, Y),

where α is a normalization constant. If we were to do this directly, the BayesNet would offer us no advantage, since we would need to know the whole joint distribution over all the variables. But from the structure of the network we can decouple the distribution into a product of the small conditional distributions stored at the nodes, as in the network in figure 3. Any conditional term whose variable is given as evidence is fixed, since the probability of a known variable taking its observed value is one, and any term that does not mention a summed-out variable can be pulled outside that sum. This makes the original expression much more compact.

Variable Elimination

The inference algorithm I am implementing is called VARIABLEELIMINATION. It computes the distribution over the query variable by eliminating the non-query variables one at a time as it combines factors, rather than first building the full joint distribution. Let me introduce the following terminology: a factor f is very much like a joint probability distribution and is defined over a set of variables SCOPE(f). Since we are only dealing with discrete variables, a factor can be represented by a probability table. A factor need not be normalized, but its entries must sum to no more than one; this will make many operations simpler. To convert a factor into a joint distribution, we perform this normalization.

The algorithm proceeds as follows. It maintains a set of factors initialized with the probability distributions supplied by the BayesNet. Any factor that contains evidence variables is adjusted to take that into account; in the discrete case that means selecting the appropriate row (or higher-dimensional equivalent) of the table. Once that initialization is complete, we iterate through all of the variables that we don't want and remove any mention of them from the factors. Delaying the details of this elimination for the moment, the next thing to do is to consolidate all the remaining factors into a single factor involving all the query variables. Factors are combined using the FACTORMULTIPLY operation. Finally, normalizing this factor turns it into a probability distribution.

VARIABLEELIMINATION(Q: set of query variables, e: evidence (variable, value) pairs, bn: BayesNet = <variables, edges, probability tables>) returns a probability distribution over Q
   local F: set of factors, initially empty
   for all variables V in bn
      if V appears in e
         add V's probability table, reduced to the value observed in e, to F
      else
         add V's probability table to F
      endif
   endfor
   for all variables V not in Q and not mentioned in e
      F <- ELIMINATEVARIABLE(V, F)
   endfor
   local g: factor, initially the trivial factor
   for all factors f in F
      g <- FACTORMULTIPLY(g, f)
   endfor
   local P: table over Q, initially empty
   for all assignments q to the query variables Q
      P(q) <- g(q) / Σ_q' g(q')
   endfor
   return P

A single variable is eliminated from the factors by collecting all of the factors that reference it, combining them into a single factor, and then summing out (marginalizing) the variable that we wished to eliminate.

ELIMINATEVARIABLE(V: variable to be eliminated, F: set of factors) returns a set of factors no longer containing V
   local g: factor, initially the trivial factor
   local F': set of factors, initially empty
   for all factors f in F
      if V is in SCOPE(f)
         g <- FACTORMULTIPLY(g, f)
      else
         add f to F'
      endif
   endfor
   add MARGINALIZE(g, {V}) to F'
   return F'

When we eliminate a variable, we add a new factor containing all the variables that appeared in a factor with the eliminated variable. This has the potential to cause a ballooning of the size of some tables, which can quickly become unmanageable: a factor over k variables, each with a domain of d values, has d^k entries. This effect depends on the order in which the variables are eliminated, which I have left unspecified. It turns out that finding the best order is NP-hard. One heuristic solution is to eliminate the variables greedily, i.e. always eliminate the one that creates the smallest new table.

The function FACTORMULTIPLY combines two factors into a single equivalent factor over the union of the scopes. The new factor is the product of the original factors evaluated at the applicable sub-assignments. Recall that factors do not need to be normalized.

FACTORMULTIPLY(f1: factor, f2: factor) returns an equivalent factor g with SCOPE(g) = SCOPE(f1) ∪ SCOPE(f2)
   local S: SCOPE(f1) ∪ SCOPE(f2)
   local g: empty factor over S
   for each assignment x to S
      g(x) <- f1(x restricted to SCOPE(f1)) * f2(x restricted to SCOPE(f2))
   endfor
   return g

For example, let f1 be the factor

A B C   f1
T T T   .11
T T F   .12
T F T   .13
T F F   .14
F T T   .15
F T F   .16
F F T   .17
F F F   .18

and f2 the factor

B C D   f2
T T T   .21
T T F   .22
T F T   .23
T F F   .24
F T T   .25
F T F   .26
F F T   .27
F F F   .28

Then FACTORMULTIPLY(f1, f2) is the factor

A B C D   f1 × f2
T T T T   .11 × .21 = .0231
T T T F   .11 × .22 = .0242
T T F T   .12 × .23 = .0276
T T F F   .12 × .24 = .0288
T F T T   .13 × .25 = .0325
T F T F   .13 × .26 = .0338
T F F T   .14 × .27 = .0378
T F F F   .14 × .28 = .0392
F T T T   .15 × .21 = .0315
F T T F   .15 × .22 = .0330
F T F T   .16 × .23 = .0368
F T F F   .16 × .24 = .0384
F F T T   .17 × .25 = .0425
F F T F   .17 × .26 = .0442
F F F T   .18 × .27 = .0486
F F F F   .18 × .28 = .0504

Now if we wanted to marginalize, or sum out, a variable from a table, for example B from f1 in the preceding example, we would use the MARGINALIZE function, which takes a factor and a set of variables and returns a new factor with those variables removed. Marginalization simply requires summing over the assignments to the variables that we wish to remove.

MARGINALIZE(f: factor, W: set of variables) returns a factor g no longer including the variables in W
   local S: SCOPE(f) \ W
   local g: factor over S with all entries 0
   for all assignments y to S
      for all assignments w to W
         g(y) <- g(y) + f(y, w)
      endfor
   endfor
   return g

To remove B from f1 in the preceding example we would get

A C   result
T T   .24
T F   .26
F T   .32
F F   .34
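These two operations are easy to prototype. Below is a minimal, self-contained Java sketch, restricted to binary variables for brevity; the Factor class here is my own illustration, not the chapter's Java API. Run on the tables above, it reproduces the product entry .0231 for A=B=C=D=T and the marginal entry .24 for A=T, C=T.

import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class FactorDemo {

    // A factor over binary variables: an ordered scope plus a table mapping each
    // assignment (one boolean per scope variable) to a non-negative value.
    static class Factor {
        final List<String> scope;
        final Map<List<Boolean>, Double> table;

        Factor(List<String> scope, Map<List<Boolean>, Double> table) {
            this.scope = scope;
            this.table = table;
        }

        // FACTORMULTIPLY: product over the union of the two scopes.
        Factor multiply(Factor other) {
            List<String> union = new ArrayList<>(scope);
            for (String v : other.scope) if (!union.contains(v)) union.add(v);
            Map<List<Boolean>, Double> t = new LinkedHashMap<>();
            for (List<Boolean> a : assignments(union.size())) {
                t.put(a, this.valueAt(union, a) * other.valueAt(union, a));
            }
            return new Factor(union, t);
        }

        // MARGINALIZE: sum the named variable out of this factor.
        Factor marginalize(String var) {
            List<String> rest = new ArrayList<>(scope);
            rest.remove(var);
            Map<List<Boolean>, Double> t = new LinkedHashMap<>();
            for (List<Boolean> a : assignments(rest.size())) t.put(a, 0.0);
            for (Map.Entry<List<Boolean>, Double> e : table.entrySet()) {
                List<Boolean> reduced = new ArrayList<>();
                for (String v : rest) reduced.add(e.getKey().get(scope.indexOf(v)));
                t.merge(reduced, e.getValue(), Double::sum);
            }
            return new Factor(rest, t);
        }

        // Value of this factor at the sub-assignment picked out of a larger assignment.
        double valueAt(List<String> vars, List<Boolean> assignment) {
            List<Boolean> key = new ArrayList<>();
            for (String v : scope) key.add(assignment.get(vars.indexOf(v)));
            return table.get(key);
        }

        // All assignments to n binary variables, T before F, as in the tables above.
        static List<List<Boolean>> assignments(int n) {
            List<List<Boolean>> out = new ArrayList<>();
            for (int i = 0; i < (1 << n); i++) {
                List<Boolean> a = new ArrayList<>();
                for (int b = n - 1; b >= 0; b--) a.add(((i >> b) & 1) == 0);
                out.add(a);
            }
            return out;
        }

        static Factor of(List<String> scope, double... values) {
            Map<List<Boolean>, Double> t = new LinkedHashMap<>();
            List<List<Boolean>> as = assignments(scope.size());
            for (int i = 0; i < values.length; i++) t.put(as.get(i), values[i]);
            return new Factor(scope, t);
        }
    }

    public static void main(String[] args) {
        Factor f1 = Factor.of(List.of("A", "B", "C"), .11, .12, .13, .14, .15, .16, .17, .18);
        Factor f2 = Factor.of(List.of("B", "C", "D"), .21, .22, .23, .24, .25, .26, .27, .28);

        Factor product = f1.multiply(f2);      // 16 entries; A=B=C=D=T maps to .11 * .21 = .0231
        Factor marginal = f1.marginalize("B"); // over {A, C}: .24, .26, .32, .34

        System.out.println(product.table);
        System.out.println(marginal.table);
    }
}

Keying a map by the list of boolean values keeps the sketch short; a more serious implementation would index into a flat array instead.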

Example run

Let's consider a simplified version of our original motivating problem, the insurance network in figure 3.2. Each variable has a binary domain. The query is on the ``cost'' variable, that is, ``What is the probability that there will be a claim?'' The evidence is that the client is a young male with a cheap car who lives in a shady neighborhood (see figure 3.2). We start by initializing with the tables for all the non-query, non-evidence nodes of the graph. But rather than use the entire tables for ``stolen,'' ``this car cost'' and ``safe driver,'' we select for the information that we know. The factor table for ``safe driver'' becomes simply .3, the table for ``stolen'' becomes .05, and the table for ``this car cost'' becomes: damage .1, no damage 0.

Suppose that we decide to eliminate ``safe driver'' first. We see that the only variable that mentions it is ``accident.'' We multiply the ``safe driver'' and ``accident'' tables, and get the following factor.

safe, accident: .03
unsafe, accident: .14

When ``safe driver'' is marginalized out, this gives a probability of .17 of an accident. Let's next eliminate ``stolen.'' The only factor mentioning it is ``this car damage.'' When multiplied we get the new factor

accident   no accident
stolen   not stolen
.95   0

which, when ``stolen'' is marginalized, becomes

accident   1
no accident   .05

Recall that the variable elimination order was left unspecified. We are not restricted to working from the top down, although sometimes it is perhaps more intuitive to do so. Next we might try to eliminate ``accident,'' but that would create a factor of size four. If we were instead to eliminate either ``this car cost'' or ``other car cost'' next, we would get a new table of size two. Choosing on this basis is no guarantee of optimality (determining the optimal order is #P-hard), but in general it offers a great deal of improvement. Next let's eliminate ``this car damage.'' I will not go through the details; the result is shown in figure 3.2. Next we choose to eliminate the ``crash'' node. This will create a compound factor involving both ``this car cost'' and ``other car cost,'' as shown in figure 3.2.

The resulting factor is over ``crash,'' ``other car cost,'' and ``this car cost'' (eight rows), and marginalizing out ``crash'' leaves a factor over ``other car cost'' and ``this car cost'' (four rows). Now we are left with only two factors, one of which is the query. So we have no choice but to eliminate the last remaining non-query factor. This yields a factor over ``other car cost,'' ``this car cost,'' and ``cost,'' which marginalizes to a distribution over ``cost'' alone; after normalization, this is the answer to our query.

Performance Profile

In order to quantify the limits of this algorithm, I generated a suite of random networks. I varied the number of variables between 4 and 64, the ratio of edges to nodes from 1 to 4, and the size of the domains (the same for all variables). For each network, I queried a single, randomly chosen variable with no evidence. I repeated this three times for each network and recorded the average computation time. The results of these trials are shown in figures 3-6. Figures 3 and 4 use a fixed domain size of 2 and 3 respectively; from these one can see how the computation time grows with the density of edges. Figures 5 and 6 use a fixed edge-to-variable ratio of 2 and 3 respectively; from these one can see how computation time grows with the size of the domains. Once the domains grow larger than about 6 values, the runs were terminated by memory constraints rather than time. All of this looks like bad news for the practicality of the algorithm, which should not be surprising, since exact inference in a multiply connected network is NP-complete [2].

Figure 3: Computation time with domains of size 2

Figure 4: Computation time with domains of size 3

Figure 5: Computation time with edges/variable = 2

Figure 6: Computation time with edges/variable = 3

Larger Example

So one might ask how practical this algorithm is on real examples. The insurance network described in the previous section is actually a simplification of the larger network shown in figure 7, which is commonly used in machine learning benchmarks. In addition to having about twice as many nodes, it is more tightly connected and the variable domains are considerably larger: the maximum domain size is about 8 values, while most have about 4. Variable Elimination on this network runs substantially faster than we would expect from a random network of similar size. Perhaps that owes itself to the fact that the ``intuition'' involved in designing the network did a great deal to simplify it. This network is approaching the limit of the tractability of the algorithm while only skimming the surface of the complexity of networks we might be interested in pursuing. This problem is shared by all exact solution methods, which, by virtue of being exact, all entail a similar amount of computation. In harder practical cases, one must resort to inexact methods. That isn't necessarily much of a drawback when you consider that the original probabilities themselves, on which the entire inference is based, are just estimates.

Figure 7: The larger insurance network

Bibliography

[1] Daphne Koller and Nir Friedman. Bayesian Networks and Beyond. Draft edition.
[2] Stuart Russell and Peter Norvig. Artificial Intelligence: A Modern Approach. Prentice Hall.
