Principles of Autonomy and Decision Making Final Exam Solutions


16.410-13: Principles of Autonomy and Decision Making
Final Exam Solutions
December 14th, 2010

Name: ____________________  E-mail: ____________________

Note: Budget your time wisely. Some parts of this exam could take you much longer than others. Move on if you are stuck and return to the problem later.

Problem                           Max Score   Grader
Problem 1 (optional)              20
Problem 2                         20
Problem 3                         20
Problem 4                         20
Problem 5                         20
Problem 6                         20
Problem 7                         20
Total (not including Problem 1)   120

Problem 1 Activity Planning (20 points)

Note that this is an optional problem, which you can use to replace your score on the planning problem (Question 4A & C) from the mid-term. We will take the max of your midterm and final scores for the respective part of this question in your final grade. In this problem we consider the formulation of an activity planning problem, to be encoded as atomic actions in PDDL, as well as the soundness/completeness properties of GraphPlan.

Part 1.A Formulating Planning Problems (10 points)

NASA is planning a Mars sample return mission with the objective of returning rock samples to Earth. The mission involves sending two Mars rovers to the Martian surface, where they will collect samples for years. Other space vehicles will subsequently be sent to Mars to meet the rovers and carry the rocks to Mars orbit. There they will rendezvous with orbiters, which will return the rocks to Earth. Please describe this problem as an activity planning problem using PDDL.

Part 1.A.1 (3 points) List and describe your predicates, actions, initial condition, and goal predicates.

Part 1.A.2 (3 points) State any assumptions that you make, and justify why your representation is at the right level of abstraction.

Part 1.A.3 (4 points) Point out one limitation of atomic actions in PDDL (as presented in class) that limits the effectiveness of your solution. Augment PDDL with a new syntax that remedies this limitation. Explain the meaning of this augmentation and give one example action description.
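As a rough illustration of what a Part 1.A.1 answer could look like, here is a hedged PDDL domain sketch; all predicate and action names are invented for this example and are not the official solution:

```pddl
(define (domain mars-sample-return)
  (:predicates (at ?v ?loc)            ; vehicle ?v is at location ?loc
               (has-sample ?v)         ; vehicle ?v carries a rock sample
               (on-surface ?loc)
               (in-orbit ?loc))
  ;; A rover collects a sample where it stands.
  (:action collect-sample
    :parameters (?r ?loc)
    :precondition (and (at ?r ?loc) (on-surface ?loc))
    :effect (has-sample ?r))
  ;; Hand a sample over between two co-located vehicles (rover to ascent
  ;; vehicle, or ascent vehicle to orbiter).
  (:action transfer-sample
    :parameters (?from ?to ?loc)
    :precondition (and (at ?from ?loc) (at ?to ?loc) (has-sample ?from))
    :effect (and (has-sample ?to) (not (has-sample ?from))))
  ;; Move a vehicle between locations (surface sites, Mars orbit, Earth).
  (:action fly
    :parameters (?v ?from ?to)
    :precondition (at ?v ?from)
    :effect (and (at ?v ?to) (not (at ?v ?from)))))
```

The initial condition would place the rovers on the Martian surface and the return vehicles at Earth; the goal would be a predicate such as (has-sample earth-vehicle) with the vehicle at Earth.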

Part 1.C Proving Completeness of GraphPlan (10 points)

Give an inductive proof that the GraphPlan algorithm finds any complete and consistent plan of length N.

- What is the induction on?
- Formulate and prove the base case.
- Formulate and prove the induction step.

Problem 2 Model-based Diagnosis (20 points)

You are part of a team that is developing a spacecraft for deep space exploration. Your task is to monitor and diagnose the electrical subsystem of the spacecraft, shown in the figure below.

Part 2.A (6 points)

A solar panel provides power to three electrical busses. The first bus is directly connected to the power source, the second bus is connected to the power source through a switch (Switch 1), and the third bus is connected to the second bus through another switch (Switch 2). A heater unit is connected to the first bus, a wheel motor to the second bus, and a star tracker to the third bus.

The star tracker, wheel motor, and heater each have a power indicator, which outputs 1 if the device is receiving power and 0 otherwise. Assume that these three devices and their indicators are working correctly; the solar panel is exposed to the sun, and the star tracker and heater are powered, but the wheel motor is not, as indicated in the figure below.

Write down all minimal conflicts (do not provide conflicts that are not minimal). Then write down the corresponding constituent kernels and the kernel diagnoses, using the minimal set-covering algorithm. Show your work below.

Solution: There are two minimal conflicts: {S1 = G} and {S2 = G}. See the figure below. The corresponding constituent kernels are {S1 = U} and {S2 = U}, respectively. There is only one kernel diagnosis: {S1 = U, S2 = U}.

[Grading: 2 pts each for conflicts, constituent kernels, and kernel diagnoses. -2 for a missing answer. -1 if conflicts are not minimal. -1 for missing conflicts, kernels, or constituents.]

Part 2.B (14 points)

This time consider the circuit shown below. In this case Bus 1 has no load, two wheel motors are connected to Bus 2, and a star tracker is connected to Bus 3.

Part 2.B.1 (7 points)

Assume that the three devices connected to the busses can fail, that is, the star tracker and the two motors. Assume that the indicators cannot fail; they indicate that the star tracker (component M1) and the second wheel motor (component M3) are powered, and the first wheel motor (component M2) is not powered. See the figure below. Write down all minimal conflicts (do not provide conflicts that are not minimal). Then write down the corresponding constituent kernels and the kernel diagnoses, using the minimal set-covering algorithm. Show your work below.

Solution: There are three minimal conflicts: {M2 = G, M3 = G}, {M1 = G, M2 = G, S2 = G}, and {P1 = G, S1 = G, M2 = G}. The corresponding constituent kernels are {M2 = U, M3 = U}, {M1 = U, M2 = U, S2 = U}, and {P1 = U, S1 = U, M2 = U}, respectively. The kernel diagnoses are: {M2 = U}, {M3 = U, M1 = U, P1 = U}, {M3 = U, M1 = U, S1 = U}, {M3 = U, S2 = U, P1 = U}, and {M3 = U, S2 = U, S1 = U}.

[Grading: -1 if conflicts are not minimal. -1 for incorrect conflicts. -1 for each missing conflict, kernel, or constituent. Missing constituent kernels already penalized on Part A.]
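The minimal set-covering step above can be checked mechanically: a kernel diagnosis must pick at least one failure candidate from every constituent kernel, and only subset-minimal choices are kept. The brute-force sketch below (real diagnosis engines use much smarter search) reproduces the Part 2.B.1 answer; component names abbreviate the "= U" assignments:

```python
from itertools import combinations

def kernel_diagnoses(constituent_kernels):
    """Minimal set covering: every diagnosis must intersect each
    constituent kernel; keep only the subset-minimal such sets."""
    universe = sorted(set().union(*constituent_kernels))
    hitting = []
    for size in range(1, len(universe) + 1):
        for combo in combinations(universe, size):
            s = set(combo)
            if all(s & k for k in constituent_kernels):
                # discard if it strictly contains an already-found diagnosis
                if not any(h <= s for h in hitting):
                    hitting.append(s)
    return hitting

# Constituent kernels from Part 2.B.1 (each element stands for "component = U")
kernels = [{"M2", "M3"}, {"M1", "M2", "S2"}, {"P1", "S1", "M2"}]
diagnoses = kernel_diagnoses(kernels)
```

Running this yields the five kernel diagnoses listed in the solution: {M2}, {M3, M1, P1}, {M3, M1, S1}, {M3, S2, P1}, and {M3, S2, S1}.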

Part 2.B.2 (7 points)

Consider again the circuit introduced at the start of Part B. This time assume that the sensors indicate that the star tracker (component M1) and the first wheel motor (component M2) are not powered, but the second wheel motor (M3) is powered. Write down all minimal conflicts (do not provide conflicts that are not minimal). Then write down the corresponding constituent kernels and the kernel diagnoses, using the minimal set-covering algorithm. Show your work below.

Solution: There are four minimal conflicts: {M2 = G, M3 = G}, {M1 = G, M3 = G, S2 = G}, {P1 = G, S1 = G, S2 = G, M1 = G}, and {P1 = G, S1 = G, M2 = G}. See the figures on the next page for derivations. The corresponding constituent kernels are {M2 = U, M3 = U}, {M1 = U, M3 = U, S2 = U}, {P1 = U, S1 = U, S2 = U, M1 = U}, and {P1 = U, S1 = U, M2 = U}, respectively. There are four kernel diagnoses: {M3 = U, P1 = U}, {M3 = U, S1 = U}, {M2 = U, S2 = U}, and {M1 = U, M2 = U}. (The set-covering procedure also generates the candidates {M3 = U, S2 = U, P1 = U}, {M3 = U, S1 = U, S2 = U}, {M3 = U, M1 = U, P1 = U}, and {M3 = U, M1 = U, S1 = U}, but each is a superset of one of the four diagnoses above and hence not minimal.)

Problem 3 Optimal Search (20 points)

Consider a shortest path problem in the following planar environment, with polytopic obstacles.

Part 3.A (8 points) Using Dijkstra's algorithm, compute the length of the shortest collision-free path from each vertex of the obstacles to the origin. Mark clearly the order in which you compute these shortest path lengths.
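Part 3.A is a by-hand run of Dijkstra's algorithm on the visibility graph of the obstacle vertices. Since the exam's figure is not reproduced here, the sketch below runs a standard priority-queue Dijkstra on an invented graph (nodes A-D plus the origin O, with made-up collision-free edge lengths) purely to illustrate the mechanics:

```python
import heapq

def dijkstra(graph, source):
    """Shortest path lengths from source; graph: node -> {neighbor: weight}."""
    dist = {source: 0.0}
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry; a shorter path was already found
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

# Hypothetical visibility graph: obstacle vertices A..D plus the origin O;
# edge weights stand in for Euclidean lengths of collision-free segments.
graph = {
    "O": {"A": 2.0, "B": 5.0},
    "A": {"O": 2.0, "B": 2.0, "C": 4.0},
    "B": {"O": 5.0, "A": 2.0, "D": 1.0},
    "C": {"A": 4.0, "D": 3.0},
    "D": {"B": 1.0, "C": 3.0},
}
dist = dijkstra(graph, "O")  # e.g. B is reached via A (2 + 2 = 4), not directly (5)
```

The order in which nodes are popped from the queue with their final distances is exactly the "order of computation" the exam asks you to mark.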

Part 3.B (8 points) Use the information above to design an algorithm that finds the shortest collision-free path to the origin starting from any point in the plane. State the algorithm, and compute the length of the shortest collision-free path to the origin starting from the point (5, 5).

Part 3.C (4 points) Would such an algorithm be applicable to shortest-path problems in three dimensions? Why or why not?

Problem 4 Linear Programming (20 points)

Consider the following problem: Given a polyhedron, find the center and the radius of the largest ball contained within the polyhedron. In other words, what is the point that is furthest from all boundaries of the polyhedron, and what is its distance from the closest boundary?

Part 4.A (10 points) Show that this problem can be formulated as a linear program. Recall that a polyhedron is defined by a number of inequalities of the form a_i^T x <= b_i.

Part 4.B (10 points) Use the simplex algorithm on the linear program you just derived to compute the center and the radius of the largest ball within the polyhedron described by the following inequalities:

    -x2 <= 0
    sqrt(3) x1 + x2 <= sqrt(3)
    -sqrt(3) x1 + x2 <= sqrt(3)
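The standard largest-inscribed-ball (Chebyshev center) formulation maximizes r subject to a_i^T x + r ||a_i|| <= b_i for each face. The exam asks for a simplex run; as a shortcut check, note that the three inequalities above describe an equilateral triangle with vertices (1, 0), (-1, 0), and (0, sqrt(3)), so by symmetry all three constraints are tight at the optimum and the optimal (x1, x2, r) solves a 3x3 linear system. A sketch under that assumption:

```python
import math

def solve3(A, b):
    """Gauss-Jordan elimination with partial pivoting for a 3x3 system."""
    n = 3
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [mr - f * mc for mr, mc in zip(M[r], M[col])]
    return [M[i][3] / M[i][i] for i in range(n)]

s3 = math.sqrt(3.0)
# Tight Chebyshev constraints a_i . x + r*||a_i|| = b_i for the three faces:
A = [
    [0.0, -1.0, 1.0],   # -x2 + r*1 = 0            (||(0,-1)|| = 1)
    [s3, 1.0, 2.0],     # sqrt(3)x1 + x2 + r*2 = sqrt(3)   (||a|| = 2)
    [-s3, 1.0, 2.0],    # -sqrt(3)x1 + x2 + r*2 = sqrt(3)  (||a|| = 2)
]
b = [0.0, s3, s3]
x1, x2, r = solve3(A, b)  # center (0, sqrt(3)/3), radius sqrt(3)/3
```

This agrees with the known incenter of an equilateral triangle (centroid, inradius = side / (2 sqrt(3))); a full simplex run on the LP would reach the same vertex.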

Problem 5 Probabilistic Inference (20 points)

Two astronomers, in different parts of the world, make measurements M1 and M2 of the number of stars N in some small region of the sky, using their telescopes 1 and 2. There is a small probability of an error in N, up to a few stars. Also, there is a slight chance that a telescope can be badly out of focus, which influences the measurement error. The proposition that Telescope 1 is out of focus is denoted by F1; similarly, the proposition that Telescope 2 is out of focus is denoted by F2.

Part 5.A (5 points) Which of the following Bayesian network models best represents the information given above? Why? [The figure with the candidate networks, labeled (i) and (ii), is not reproduced here.]

Solution: The first model, (i), better represents the information, since measurements M1 and M2 are the outputs of the telescopes, and the inputs are the number of stars and whether each telescope is out of focus. In addition, the measurement obtained using the first telescope is directly influenced by whether the first telescope is out of focus and by the number of stars, but not by the measurement of the second telescope or whether it is out of focus. The same holds for the second telescope. Model (i) captures each of these propositions well.

Part 5.B (5 points) For both of the Bayesian networks provided in the figure above, write down the joint probability distribution P(M1, M2, F1, F2, N) using the chain rule for Bayesian networks.

Solution for Model (i):
P(M1, M2, F1, F2, N) = P(M1 | F1, N) P(M2 | F2, N) P(F1) P(F2) P(N)

[Grading: -1 if missing unconditioned probabilities. -1 for missing variables in conditional probabilities.]

Solution for Model (ii):
P(M1, M2, F1, F2, N) = P(F1 | N, M1) P(F2 | N, M2) P(N | M1, M2) P(M2 | M1) P(M1)

Part 5.C (5 points) For both of the Bayesian network models provided in the figure above, write down P(M1, F1) using marginalization. Make careful use of the order of summations to reduce the time complexity.

Solution for Model (i):
P(M1, F1) = Sum_{M2, F2, N} P(M1 | F1, N) P(M2 | F2, N) P(F1) P(F2) P(N)
          = P(F1) Sum_N P(M1 | F1, N) P(N) Sum_{F2} P(F2) Sum_{M2} P(M2 | F2, N).
Since Sum_{M2} P(M2 | F2, N) = 1 and Sum_{F2} P(F2) = 1, this simplifies further to:
P(M1, F1) = P(F1) Sum_N P(M1 | F1, N) P(N).

[Grading: Accept any variable ordering, as long as summations are pushed in as far as possible. +1 if some attempt made. +2 if marginalization stated but the Bayes net chain rule not instantiated. -1 if summations not fully reduced, or -2 for a systematic problem. -2 for assuming a false independence (note: 16.410 should teach independence) or for missing terms. -1 for summation over the wrong variables. -1 if a summation is pushed in too far.]

Solution for Model (ii):
P(M1, F1) = Sum_{M2, F2, N} P(F1 | N, M1) P(F2 | N, M2) P(N | M1, M2) P(M2 | M1) P(M1)
          = P(M1) Sum_{M2} P(M2 | M1) Sum_N P(F1 | N, M1) P(N | M1, M2) Sum_{F2} P(F2 | N, M2).
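The model (i) simplification can be verified numerically: summing the full joint over M2, F2, N must equal the reduced form P(F1) Sum_N P(M1 | F1, N) P(N), because the M2 and F2 sums collapse to 1. The sketch below uses invented binary CPTs (all numbers are illustrative, not from the exam):

```python
from itertools import product

# Toy CPTs for model (i): all variables binary with values 0/1 (numbers invented).
P_N  = {0: 0.7, 1: 0.3}
P_F1 = {0: 0.9, 1: 0.1}
P_F2 = {0: 0.8, 1: 0.2}
# P(M1 | F1, N), indexed [f][n][m]; probabilities of m=1 chosen arbitrarily.
P_M1 = {f: {n: {1: p, 0: 1 - p}
            for n, p in {0: 0.2 + 0.3 * f, 1: 0.6 + 0.2 * f}.items()}
        for f in (0, 1)}
P_M2 = P_M1  # assume, for this toy example, the telescopes behave identically

def joint(m1, m2, f1, f2, n):
    # Chain rule for model (i)
    return P_M1[f1][n][m1] * P_M2[f2][n][m2] * P_F1[f1] * P_F2[f2] * P_N[n]

m1, f1 = 1, 0
# Brute force: marginalize the full joint over M2, F2, N
brute = sum(joint(m1, m2, f1, f2, n) for m2, f2, n in product((0, 1), repeat=3))
# Eliminated form: P(F1) * Sum_N P(M1|F1,N) P(N)   (the M2 and F2 sums are 1)
fast = P_F1[f1] * sum(P_M1[f1][n][m1] * P_N[n] for n in (0, 1))
```

Both expressions give the same number, but the eliminated form touches only the N sum, which is the complexity point the question is after.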

Part 5.D (5 points) Next, we would like to estimate the probability of a particular number of stars N, given measurements M1 and M2, that is, P(N | M1, M2). Write down this probability for both models.

Solution for Model (i):
P(N | M1, M2) = P(M1, M2, N) / P(M1, M2), where
P(M1, M2, N) = Sum_{F1, F2} P(M1 | F1, N) P(M2 | F2, N) P(F1) P(F2) P(N)
             = P(N) Sum_{F1} P(M1 | F1, N) P(F1) Sum_{F2} P(M2 | F2, N) P(F2),
and P(M1, M2) = Sum_N P(M1, M2, N).

[Grading: 2 for stating P(M1, M2, N) / P(M1, M2); 5 if fully instantiated; 2 if dead-ended on Bayes' rule; 1 for a random attempt.]

Solution for Model (ii):
P(N | M1, M2) = P(M1, M2, N) / P(M1, M2), where
P(M1, M2, N) = Sum_{F1, F2} P(F1 | N, M1) P(F2 | N, M2) P(N | M1, M2) P(M2 | M1) P(M1)
             = P(M1) P(M2 | M1) P(N | M1, M2) Sum_{F1} P(F1 | N, M1) Sum_{F2} P(F2 | N, M2),
and P(M1, M2) = Sum_N P(M1, M2, N).

Problem 6 Hidden Markov Models (20 points)

Consider the following simplified model of a human operator's performance while performing a series of tasks sequentially. The operator can be in one of three states: distracted, focused, or stressed. Tasks can be either easy or hard. Each new task assigned to the operator has the same difficulty as the last one with probability 0.6, and different difficulty with probability 0.4. At the beginning of the shift, the operator is focused. The probability that the operator transitions to a different state depends on the current state and the difficulty of the task, according to the following table, where the first number is the transition probability for easy tasks and the second is the transition probability for hard tasks:

                      to Distracted   to Focused   to Stressed
    from Distracted      0.4 / 0.2      0.6 / 0.4     0 / 0.4
    from Focused         0.6 / 0.3      0.4 / 0.4     0 / 0.3
    from Stressed        0.1 / 0        0.3 / 0.1     0.6 / 0.9

Assume that the time to complete each task is observable. The time that the operator needs to complete a task is a random variable that depends on the operator's state and the difficulty of the task, according to the following table:

                 Easy     Hard
    Distracted   long     long
    Focused      short    long
    Stressed     long     very long

Formulate the problem of inferring the state of the operator and the difficulty of each task as a Hidden Markov Model. In particular, find the best estimate at each time step of the state of the operator and of the type of task, and the most likely trajectory, given the following sequence of observations: (long, short, short, long, long, very long).
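The natural HMM here has a composite hidden state (operator state, task difficulty), six states in all, with completion time as the observation. The Viterbi sketch below makes two modeling assumptions the statement leaves open: the operator transition is governed by the new task's difficulty, and the first task is easy or hard with probability 0.5 each (the operator is known to start focused):

```python
# Viterbi sketch for the composite chain (operator state, task difficulty).
OPS = ["Distracted", "Focused", "Stressed"]
DIFF = ["Easy", "Hard"]

# Operator transition P(s' | s, d') from the table, keyed by the new difficulty.
A = {
    "Easy": {"Distracted": {"Distracted": 0.4, "Focused": 0.6, "Stressed": 0.0},
             "Focused":    {"Distracted": 0.6, "Focused": 0.4, "Stressed": 0.0},
             "Stressed":   {"Distracted": 0.1, "Focused": 0.3, "Stressed": 0.6}},
    "Hard": {"Distracted": {"Distracted": 0.2, "Focused": 0.4, "Stressed": 0.4},
             "Focused":    {"Distracted": 0.3, "Focused": 0.4, "Stressed": 0.3},
             "Stressed":   {"Distracted": 0.0, "Focused": 0.1, "Stressed": 0.9}},
}

def p_diff(d_new, d_old):
    return 0.6 if d_new == d_old else 0.4

# Deterministic emissions from the completion-time table.
EMIT = {("Distracted", "Easy"): "long", ("Distracted", "Hard"): "long",
        ("Focused", "Easy"): "short", ("Focused", "Hard"): "long",
        ("Stressed", "Easy"): "long", ("Stressed", "Hard"): "very long"}

def viterbi(obs):
    states = [(s, d) for s in OPS for d in DIFF]
    # Initial: operator Focused for sure, first-task difficulty uniform.
    V = {x: (0.5 if x[0] == "Focused" else 0.0) * (EMIT[x] == obs[0])
         for x in states}
    back = []
    for o in obs[1:]:
        newV, ptr = {}, {}
        for s2, d2 in states:
            e = 1.0 if EMIT[(s2, d2)] == o else 0.0
            best, arg = 0.0, None
            for s1, d1 in states:
                p = V[(s1, d1)] * p_diff(d2, d1) * A[d2][s1][s2] * e
                if p > best:
                    best, arg = p, (s1, d1)
            newV[(s2, d2)], ptr[(s2, d2)] = best, arg
        V, back = newV, back + [ptr]
    last = max(V, key=V.get)          # best final state
    path = [last]
    for ptr in reversed(back):        # follow back-pointers to t = 0
        path.append(ptr[path[-1]])
    return list(reversed(path))

path = viterbi(["long", "short", "short", "long", "long", "very long"])
```

Because the emissions are deterministic, several states are forced regardless of the transition details: "short" pins (Focused, Easy), "very long" pins (Stressed, Hard), and the initial "long" with a focused operator pins (Focused, Hard).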

Problem 7 Markov Decision Processes (20 points)

Consider the following abstract model of a CAP (combat air patrol) mission. An aircraft can be at one of two locations: the home base or the objective. Transitions between these locations are deterministic. However, fuel consumption is not deterministic. In particular, whenever the airplane leaves the home base, it has a full tank with 10 units of fuel. At each time step, the airplane burns 1 unit of fuel with probability 0.5, 2 units with probability 0.3, and 3 units with probability 0.2, and can either stay at the current location or move to the other location (i.e., return to base, or go to the objective). Should the on-board fuel become zero or negative, the aircraft crashes and is lost (i.e., it remains crashed for all future times). The aircraft earns a unit reward for each time unit it spends at the objective, with a discount factor equal to 0.8. The aircraft earns zero reward if crashed.

Consider the following policy: go to the objective, or stay at the objective, if the on-board fuel is greater than 2 units; otherwise return to base.

Part 7.A (10 points) Compute the expected reward (starting from the home base) under this policy.
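Part 7.A is a policy-evaluation exercise: fix the policy, then iterate the Bellman backup until the values converge. The sketch below fills in some timing details the statement leaves open (stated as assumptions in the comments): reward 1 is collected for each step that starts at the objective, fuel is burned on every transition including the return leg, running out mid-transition means a crash (value 0), and the tank refills to 10 on leaving the base.

```python
GAMMA = 0.8
BURN = {1: 0.5, 2: 0.3, 3: 0.2}   # fuel burned per step and its probability
FULL = 10

# States: "base" (fuel irrelevant, refilled on departure) and ("obj", f)
# for remaining fuel f = 1..10; the crash state has value 0 and is implicit.

def policy_value(stay_threshold=2, iters=500):
    """Iterative policy evaluation for: stay at objective iff fuel > threshold."""
    V = {"base": 0.0}
    for f in range(1, FULL + 1):
        V[("obj", f)] = 0.0
    for _ in range(iters):
        newV = {}
        # From base: fly to the objective with a fresh tank (no reward at base).
        newV["base"] = GAMMA * sum(p * V[("obj", FULL - b)]
                                   for b, p in BURN.items())
        for f in range(1, FULL + 1):
            if f > stay_threshold:   # policy: stay at the objective
                nxt = sum(p * (V[("obj", f - b)] if f - b > 0 else 0.0)
                          for b, p in BURN.items())
            else:                    # policy: head back to base
                nxt = sum(p * (V["base"] if f - b > 0 else 0.0)
                          for b, p in BURN.items())
            newV[("obj", f)] = 1.0 + GAMMA * nxt   # reward 1 per step on station
        V = newV
    return V

V = policy_value()
```

With the discount of 0.8 the total reward is bounded by 1 / (1 - 0.8) = 5, which gives a quick sanity check on the converged values; Part 7.B can then be attacked by sweeping `stay_threshold` (or a full policy-iteration step) and comparing V["base"].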

Part 7.B (10 points) Compute an improved policy, if one exists, and compute its expected reward.

MIT OpenCourseWare
http://ocw.mit.edu

16.410 / 16.413 Principles of Autonomy and Decision Making, Fall 2010

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.