CS 188 Introduction to AI Fall 2005 Stuart Russell Final

NAME:                 SID#:                 Section:

You have 2 hours and 50 minutes. The exam is open-book, open-notes. 100 points total. Panic not.

Mark your answers ON THE EXAM ITSELF. Write your name, SID, and section number at the top of each page.

For true/false questions, CIRCLE True OR False. For multiple-choice questions, CIRCLE ALL CORRECT CHOICES (in some cases, there may be more than one). If you are not sure of your answer you may wish to provide a brief explanation.

For official use only:

        Q. 1    Q. 2    Q. 3    Q. 4    Q. 5    Q. 6    Q. 7    Total
        /12     /15     /16     /15     /8      /16     /18     /100

1. (12 pts.) Some Easy Questions to Start With

(a) (2) True/False: Suppose that variables X_1, X_2, ..., X_k have no parents in a given Bayes net that contains n variables in all, where n > k. Then the Bayes net asserts that P(X_1, X_2, ..., X_k) = P(X_1)P(X_2)···P(X_k).

(b) (2) True/False: In a fully observable, turn-taking, zero-sum game between two perfectly rational players, it does not help the first player to know what move the second player will make.

(c) (2) True/False: For any propositional sentences α, β, γ, if at least one of [α |= γ] and [β |= γ] holds, then (α ∧ β) |= γ.

(d) (2) True/False: For any propositional sentences α, β, γ, if (α ∧ β) |= γ, then at least one of [α |= γ] and [β |= γ] holds.

(e) (2) True/False: If C_1 and C_2 are two first-order clauses that have been standardized apart, then no literal in C_1 is identical to any literal in C_2.

(f) (2) True/False: New lexical categories (such as Noun, Verb, Preposition, etc.) are frequently added to English.
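For reference in part (a): the standard semantics of a Bayes net over variables X_1, ..., X_n (as in Russell & Norvig) is the factorization

    P(X_1, \ldots, X_n) = \prod_{i=1}^{n} P\big(X_i \mid \mathrm{Parents}(X_i)\big)

so a variable with no parents contributes an unconditional factor P(X_i). Whether this licenses the specific k-variable claim in (a) is what the question asks you to decide.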

2. (15 pts.) Search

n vehicles occupy squares (1, 1) through (n, 1) (i.e., the bottom row) of an n × n grid. The vehicles must be moved to the top row but in reverse order; so the vehicle i that starts in (i, 1) must end up in (n - i + 1, n). On each time step, every one of the n vehicles can move one square up, down, left, or right, or stay put; but if a vehicle stays put, one other adjacent vehicle (but not more than one) can hop over it. Two vehicles cannot occupy the same square.

(a) (4) The size of the state space is roughly
    (i) n^2        (ii) n^3        (iii) n^(2n)        (iv) n^(n^2)

(b) (3) The branching factor is roughly
    (i) 5        (ii) 5n        (iii) 5^n

(c) (2) Suppose that vehicle i is at (x_i, y_i); write a nontrivial admissible heuristic h_i for the number of moves it will require to get to its goal location (n - i + 1, n), assuming there are no other vehicles on the grid.

(d) (2) Which of the following heuristics are admissible for the problem of moving all n vehicles to their destinations?
    (i) Σ_{i=1}^{n} h_i        (ii) max{h_1, ..., h_n}        (iii) min{h_1, ..., h_n}        (iv) None of these

(e) (4) Explain your answer to part (d).
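To make the setup in Q.2 concrete, here is a minimal sketch (Python; the names are illustrative, not part of the exam) of one possible state representation: a state is just the tuple of the n vehicles' coordinates, and the goal places vehicle i at (n - i + 1, n). Counting how many such states there are is what part (a) asks about.

    # Minimal sketch of a state representation for the n-vehicle puzzle.
    # The names (start_state, is_goal) are illustrative, not from the exam.

    def start_state(n):
        # Vehicle i (1-indexed) starts at (i, 1), i.e., on the bottom row.
        return tuple((i, 1) for i in range(1, n + 1))

    def is_goal(state, n):
        # Vehicle i must end at (n - i + 1, n): the top row, in reverse order.
        return all(pos == (n - i + 1, n) for i, pos in enumerate(state, start=1))

    # Example: for n = 3 the goal configuration is ((3, 3), (2, 3), (1, 3)).
    print(is_goal(((3, 3), (2, 3), (1, 3)), 3))   # True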

3. (16 pts.) Propositional Logic

Suppose an agent inhabits a world with two states, S and ¬S, and can do exactly one of two actions, a and b. Action a does nothing and action b flips from one state to the other. Let S^t be the proposition that the agent is in state S at time t, and let a^t be the proposition that the agent does action a at time t (similarly for b^t).

(a) (4) Write a successor-state axiom for S^{t+1}.

(b) (4) Convert the sentence in (a) into CNF.

(c) (8) Show a resolution refutation proof that if the agent is in S at time t and does a, it will still be in S at time t + 1.
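For reference in part (a): the general shape of a successor-state axiom (as in Russell & Norvig) is

    F^{t+1} \;\Leftrightarrow\; \mathit{ActionCausesF}^{\,t} \;\lor\; \big(F^{t} \land \lnot\,\mathit{ActionCausesNotF}^{\,t}\big)

read as: the fluent F holds at t + 1 iff some action at time t made it true, or it already held and no action at time t made it false. Instantiating this schema for S^{t+1} in terms of a^t, b^t, and S^t is what part (a) asks for.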

4. (15 pts.) Pruning in search trees

In the following, a max tree consists only of max nodes, whereas an expectimax tree consists of a max node at the root with alternating layers of chance and max nodes. At chance nodes, all outcome probabilities are non-zero. The goal is to find the value of the root with a bounded-depth search.

(a) (2) Assuming that leaf values are finite but unbounded, is pruning (as in alpha-beta) ever possible in a max tree? Give an example, or explain why not.

(b) (2) Is pruning ever possible in an expectimax tree under the same conditions? Give an example, or explain why not.

(c) (2) If leaf values are constrained to be nonnegative, is pruning ever possible in a max tree? Give an example, or explain why not.

(d) (2) If leaf values are constrained to be nonnegative, is pruning ever possible in an expectimax tree? Give an example, or explain why not.
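For concreteness, here is a minimal sketch (Python; the tree encoding is illustrative, not from the exam) of how the value of an expectimax tree is computed by plain recursion with no pruning: max nodes take the maximum of their children, chance nodes take the probability-weighted average. The question asks under which constraints on leaf values this computation can ever be cut short.

    # Minimal expectimax value sketch. A tree is either a number (leaf),
    # ("max", [children]), or ("chance", [(prob, child), ...]).
    # This encoding is illustrative only; no pruning is performed here.

    def value(node):
        if isinstance(node, (int, float)):
            return node
        kind, children = node
        if kind == "max":
            return max(value(c) for c in children)
        # chance node: expected value over its outcomes
        return sum(p * value(c) for p, c in children)

    tree = ("max", [("chance", [(0.5, 3), (0.5, 7)]),
                    ("chance", [(0.9, 4), (0.1, 10)])])
    print(value(tree))   # max(5.0, 4.6) = 5.0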

(e) (2) If leaf values are constrained to be in the range [0, 1], is pruning ever possible in a max tree? Give an example, or explain why not.

(f) (2) If leaf values are constrained to be in the range [0, 1], is pruning ever possible in an expectimax tree? Give an example (qualitatively different from your example in (e), if any), or explain why not.

(g) (3) Consider the outcomes of a chance node in an expectimax tree. Which of the following evaluation orders is most likely to yield pruning opportunities?
    (i) Lowest probability first        (ii) Highest probability first        (iii) Doesn't make any difference

5. (8 pts.) MDPs

Consider the world in Q.3 as an MDP, with R(S) = 3, R(¬S) = 2, and γ = 0.5. Complete the columns of the following table to perform policy iteration to find the optimal policy.

            π_0    V^{π_0}    π_1    V^{π_1}    π_2
    S        a
    ¬S       a
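For reference in Q.5: policy iteration alternates policy evaluation and policy improvement. With state-based rewards as given here, the standard equations (Russell & Norvig notation) are

    V^{\pi}(s) = R(s) + \gamma \sum_{s'} P(s' \mid s, \pi(s))\, V^{\pi}(s')

    \pi_{\text{new}}(s) = \operatorname*{argmax}_{a} \sum_{s'} P(s' \mid s, a)\, V^{\pi}(s')

In this particular MDP the transitions are deterministic (per Q.3, action a stays put and action b flips the state), so each sum has a single term.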

Fig. 1: A simple Bayes net with Boolean variables B = BrokeElectionLaw, I = Indicted, M = PoliticallyMotivatedProsecutor, G = FoundGuilty, J = Jailed. B and M are the parents of I; B, I, and M are the parents of G; and G is the parent of J. The conditional probability tables shown in the figure are:

    P(B) = .9                 P(M) = .1
    P(I | b, m) = .9          P(I | b, ¬m) = .5         P(I | ¬b, m) = .5         P(I | ¬b, ¬m) = .1
    P(G | b, i, m) = .9       P(G | b, i, ¬m) = .8      P(G | ¬b, i, m) = .2      P(G | ¬b, i, ¬m) = .1
    P(J | g) = .9

6. (16 pts.) Probabilistic inference

Consider the Bayes net shown in Fig. 1.

(a) (3) Which, if any, of the following are asserted by the network structure (ignoring the CPTs for now)?
    (i) P(B, I, M) = P(B)P(I)P(M)
    (ii) P(J | G) = P(J | G, I)
    (iii) P(M | G, B, I) = P(M | G, B, I, J)

(b) (2) Calculate the value of P(b, i, ¬m, g, j).

(c) (4) Calculate the probability that someone goes to jail given that they broke the law, have been indicted, and face a politically motivated prosecutor.

(d) (2) A context-specific independence has the following form: X is conditionally independent of Y given Z in context C = c if P(X | Y, Z, C = c) = P(X | Z, C = c). In addition to the usual conditional independences given by the graph structure, what context-specific independences exist in the Bayes net in Fig. 1?
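For reference in parts (b) and (c): by the standard Bayes-net semantics (each variable conditioned on its parents), the joint distribution defined by the network in Fig. 1 factors as

    P(B, I, M, G, J) = P(B)\,P(M)\,P(I \mid B, M)\,P(G \mid B, I, M)\,P(J \mid G)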

(e) (5) Suppose we want to add the variable P = PresidentialPardon to the network; draw the new network and briefly explain any links you add.

7. (18 pts.) Language and statistical learning

A probabilistic context-free grammar (PCFG) is a context-free grammar augmented with a probability value on each rule, so that the PCFG specifies a probability distribution over all allowable strings in the language. Specifically, the rules for each nonterminal are annotated with probabilities that sum to 1; these define how likely it is that each rule is applied to expand that nonterminal; and all such rule choices are made independently. For example, the following is a PCFG for simple verb phrases:

    0.1 : VP → Verb
    0.2 : VP → Copula Adjective
    0.5 : VP → Verb the Noun
    0.2 : VP → VP Adverb
    0.5 : Verb → is
    0.5 : Verb → shoots
    0.8 : Copula → is
    0.2 : Copula → seems
    0.5 : Adjective → unwell
    0.5 : Adjective → well
    0.5 : Adverb → well
    0.5 : Adverb → badly
    0.6 : Noun → duck
    0.4 : Noun → well

(a) (3) Which of the following have a nonzero probability of being generated as complete VPs?
    (i) shoots the duck well well well        (ii) seems the well well        (iii) shoots the unwell well badly

(b) (5) What is the probability of generating "is well well"?

(c) (2) What types of ambiguity are exhibited by the phrase in (b)?
    (i) lexical        (ii) syntactic        (iii) referential        (iv) none

(d) (2) True/False: Given any PCFG, it is possible to calculate the probability that the PCFG generates a string of exactly 10 words.

(e) (1) A PCFG learning algorithm takes as input a set of strings D and outputs the PCFG h that maximizes log P(D | h) - C(h), where C is a measure of the complexity of h. For a suitably defined C, this could be an example of
    (i) Bayesian learning        (ii) MAP learning        (iii) maximum likelihood learning
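To make the rule-probability semantics concrete, here is a minimal sketch (Python; the dictionary encoding and function name are illustrative, not from the exam) that scores a single derivation as the product of the probabilities of the rules used, for example the derivation VP → Verb the Noun, Verb → shoots, Noun → duck. Note that the probability of a string is the sum of the probabilities of all of its derivations, of which there may be more than one.

    # Probability of one derivation = product of the probabilities of its rules.
    # Rule probabilities below are a subset of the exam's example PCFG;
    # the encoding as (lhs, rhs) tuples is illustrative only.

    rule_prob = {
        ("VP", ("Verb",)): 0.1,
        ("VP", ("Copula", "Adjective")): 0.2,
        ("VP", ("Verb", "the", "Noun")): 0.5,
        ("VP", ("VP", "Adverb")): 0.2,
        ("Verb", ("is",)): 0.5,
        ("Verb", ("shoots",)): 0.5,
        ("Noun", ("duck",)): 0.6,
        ("Noun", ("well",)): 0.4,
    }

    def derivation_prob(rules):
        p = 1.0
        for rule in rules:
            p *= rule_prob[rule]
        return p

    # "shoots the duck" via VP -> Verb the Noun, Verb -> shoots, Noun -> duck:
    print(derivation_prob([("VP", ("Verb", "the", "Noun")),
                           ("Verb", ("shoots",)),
                           ("Noun", ("duck",))]))   # 0.5 * 0.5 * 0.6 = 0.15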

(f) (5, tricky) An HMM grammar is essentially a standard HMM whose state variable is N (nonterminal, with values such as Det, Adjective, Noun, and so on) and whose evidence variable is W (word, with values such as is, duck, and so on). The HMM model includes a prior P(N_0), a transition model P(N_{t+1} | N_t), and a sensor model P(W_t | N_t). Show that every HMM grammar can be written as a PCFG. [Hint: start by thinking about how the HMM prior can be represented by PCFG rules for the sentence symbol. You may find it helpful to illustrate for the particular HMM with values A, B for N and values x, y for W.]
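For reference: the HMM model named above defines the standard joint distribution over a state/word sequence of length T + 1,

    P(N_0, W_0, \ldots, N_T, W_T) = P(N_0)\,P(W_0 \mid N_0) \prod_{t=1}^{T} P(N_t \mid N_{t-1})\, P(W_t \mid N_t)

which is the distribution a corresponding PCFG would need to match.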