Final Exam V1. CS 188 Fall 2017 Introduction to Artificial Intelligence
CS 188 Fall 2017 Introduction to Artificial Intelligence
Final Exam V1

You have approximately 170 minutes. The exam is closed book, closed calculator, and closed notes except your three-page crib sheet. Mark your answers ON THE EXAM ITSELF. If you are not sure of your answer you may wish to provide a brief explanation. All short answer sections can be successfully answered in a few sentences AT MOST.

For multiple choice questions:
- a square means mark all options that apply
- a bubble means mark a single choice
When selecting an answer, please fill in the bubble or square completely.

First name / Last name / SID / Student to your right / Student to your left

Your Discussion/Exam Prep* TA (fill all that apply): Brijen (Tu), Peter (Tu), David (Tu), Nipun (Tu), Wenjing (Tu), Aaron (W), Mitchell (W), Abhishek (W), Caryn (W), Anwar (W), Aarash (W), Daniel (W), Yuchen* (Tu), Andy* (Tu), Nikita* (Tu), Shea* (W), Daniel* (W)

For staff use only:
Q1. Agent Testing Today! /1
Q2. Potpourri /14
Q3. Search /9
Q4. CSPs /8
Q5. Game Trees /9
Q6. Something Fishy /10
Q7. Policy Evaluation /8
Q8. Bayes Nets: Inference /8
Q9. Decision Networks and VPI /9
Q10. Neural Networks: Representation /15
Q11. Backpropagation /9
Total /100
THIS PAGE IS INTENTIONALLY LEFT BLANK
Q1. [1 pt] Agent Testing Today!

It's testing time! Not only for you, but for your CS188 robots as well! Circle your favorite robot below.
Q2. [14 pts] Potpourri

(a) [1 pt] Fill in the unlabelled nodes in the Bayes Net below with the variables {A, B, C, E} such that the following independence assertions are true:
1. A ⊥ B | E, D
2. E ⊥ D | B
3. E ⊥ C | A, B
4. C ⊥ D | A, B
[Bayes Net diagram with four unlabelled nodes and one node labelled D]

(b) [4 pts] For each of the 4 plots below, create a classification dataset which can or cannot be classified correctly by Naive Bayes and perceptron, as specified. Each dataset should consist of nine points represented by the boxes; shade a box for the positive class or leave it blank for the negative class. Mark Not Possible if no such dataset is possible. For "can be classified by Naive Bayes," there should be some probability distributions P(Y) and P(F1 | Y), P(F2 | Y) for the class Y and features F1, F2 that can correctly classify the data according to the Naive Bayes rule, and for "cannot" there should be no such distribution. For perceptron, assume that there is a bias feature in addition to F1 and F2.

- Naive Bayes and perceptron both can classify: [3x3 grid with axes F1, F2] ( ) Not Possible
- Naive Bayes and perceptron both cannot classify: [3x3 grid with axes F1, F2] ( ) Not Possible
- Naive Bayes can classify; perceptron cannot classify: [3x3 grid with axes F1, F2] ( ) Not Possible
- Naive Bayes cannot classify; perceptron can classify: [3x3 grid with axes F1, F2] ( ) Not Possible

(c) [1 pt] Consider a multi-class perceptron for classes A, B, and C with current weight vectors:
w_A = (__, 4, 7), w_B = (__, __, 6), w_C = (7, 9, __)
A new training sample is now considered, which has feature vector f(x) = (__, __, __) and label y = B. What are the resulting weight vectors after the perceptron has seen this example and updated the weights?
w_A = ___   w_B = ___   w_C = ___
(d) [1 pt] A single perceptron can compute the XOR function.  ( ) True  ( ) False

(e) [1 pt] A perceptron is guaranteed to learn a separating decision boundary for a separable dataset within a finite number of training steps.  ( ) True  ( ) False

(f) [1 pt] Given a linearly separable dataset, the perceptron algorithm is guaranteed to find a max-margin separating hyperplane.  ( ) True  ( ) False

(g) [1 pt] You would like to train a neural network to classify digits. Your network takes as input an image and outputs probabilities for each of the classes 0-9. The network's prediction is the class that it assigns the highest probability to. From the following functions, select all that would be suitable loss functions to minimize using gradient descent:
[ ] The square of the difference between the correct digit and the digit predicted by your network
[ ] The probability of the correct digit under your network
[ ] The negative log-probability of the correct digit under your network
[ ] None of the above

(h) [1 pt] From the list below, mark all triples that are inactive. A shaded circle means that node is conditioned on. [Diagram of candidate triples]

(i) [2 pts] [Gridworld figure with a square marked A] Consider the gridworld above. At each timestep the agent will have two available actions from the set {North, South, East, West}. Actions that would move the agent into the wall may never be chosen, and allowed actions always succeed. The agent receives a reward of +8 every time it enters the square marked A. Let the discount factor be γ = 1. At each cell in the following tables, fill in the value of that state after iteration k of Value Iteration.
k = 0   k = 1   k = 2   k = 3
[Answer grids]

(j) [1 pt] Consider an HMM with T timesteps, hidden state variables X_1, ..., X_T, and observed variables E_1, ..., E_T. Let S be the number of possible states for each hidden state variable X_t. We want to compute (with the forward algorithm) or estimate (with particle filtering) P(X_T | E_1 = e_1, ..., E_T = e_T). How many particles, in terms of S and T, would it take for particle filtering to have the same time complexity as the forward algorithm?
You can assume that, in particle filtering, each sampling step can be done in constant time for a single particle (though this is not necessarily the case in reality):
# Particles = ___
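For comparison, here is a minimal sketch (not part of the exam) of the forward algorithm the question refers to, with hypothetical transition and emission tables. Each elapse step sums over all S previous states for each of the S new states, which is where the per-timestep cost comes from.

```python
# Forward algorithm for an HMM with S states, hypothetical parameters.
# prior[s] = P(X_1 = s); transition[s][s2] = P(X_{t+1} = s2 | X_t = s);
# emission[s][e] = P(E_t = e | X_t = s).

def forward(prior, transition, emission, observations):
    """Return the normalized belief P(X_T | e_1..e_T)."""
    S = len(prior)
    belief = [prior[s] * emission[s][observations[0]] for s in range(S)]
    norm = sum(belief)
    belief = [p / norm for p in belief]
    for e in observations[1:]:
        # Elapse step: O(S^2) work, summing over every previous state.
        predicted = [sum(belief[s] * transition[s][s2] for s in range(S))
                     for s2 in range(S)]
        # Observe step and normalization: O(S) work.
        belief = [predicted[s2] * emission[s2][e] for s2 in range(S)]
        norm = sum(belief)
        belief = [p / norm for p in belief]
    return belief
```

A particle filter instead does constant work per particle per timestep, so comparing the two per-timestep costs gives the particle count the question asks for.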
Q3. [9 pts] Search

Suppose we have a connected graph with N nodes, where N is finite but large. Assume that every node in the graph has exactly D neighbors. All edges are undirected. We have exactly one start node, S, and exactly one goal node, G. Suppose we know that the shortest path in the graph from S to G has length L. That is, it takes at least L edge-traversals to get from S to G or from G to S (and perhaps there are other, longer paths). We'll consider various algorithms for searching for paths from S to G.

(a) [2 pts] Uninformed Search
Using the information above, give the tightest possible bounds, using big O notation, on both the absolute best case and the absolute worst case number of node expansions for each algorithm. Your answer should be a function in terms of variables from the set {N, D, L}. You may not need to use every variable.
(i) [1 pt] DFS Graph Search   Best case: ___   Worst case: ___
(ii) [1 pt] BFS Tree Search   Best case: ___   Worst case: ___

(b) [2 pts] Bidirectional Search
Notice that because the graph is undirected, finding a path from S to G is equivalent to finding a path from G to S, since reversing a path gives us a path of the same length in the other direction. This fact inspired bidirectional search. As the name implies, bidirectional search consists of two simultaneous searches which both use the same algorithm: one from S towards G, and another from G towards S. When these searches meet in the middle, they can construct a path from S to G. More concretely, in bidirectional search:
- We start Search 1 from S and Search 2 from G.
- The searches take turns popping nodes off of their separate fringes. First Search 1 expands a node, then Search 2 expands a node, then Search 1 again, etc.
- This continues until one of the searches expands some node X which the other search has also expanded. At that point, Search 1 knows a path from S to X, and Search 2 knows a path from G to X, which provides us with a path from X to G. We concatenate those two paths and return our path from S to G.
Don't stress about further implementation details here! Repeat part (a) with the bidirectional versions of the algorithms from before. Give the tightest possible bounds, using big O notation, on both the absolute best and worst case number of node expansions by the bidirectional search algorithm. Your bound should still be a function of variables from the set {N, D, L}.
(i) [1 pt] Bidirectional DFS Graph Search   Best case: ___   Worst case: ___
(ii) [1 pt] Bidirectional BFS Tree Search   Best case: ___   Worst case: ___
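The alternating-expansion scheme described above can be sketched directly. This is an illustrative implementation for the BFS case (unit-cost edges, adjacency-dict graph), not the exam's reference code: two fringes take turns, and the search stops once a node has been expanded by both sides.

```python
from collections import deque

def bidirectional_bfs(graph, start, goal):
    """Return the shortest path length from start to goal, or None."""
    if start == goal:
        return 0
    fringes = [deque([start]), deque([goal])]
    dist = [{start: 0}, {goal: 0}]      # distance from each endpoint
    expanded = [set(), set()]
    turn = 0                            # whose turn it is to expand
    while fringes[0] and fringes[1]:
        node = fringes[turn].popleft()
        expanded[turn].add(node)
        if node in expanded[1 - turn]:  # the two searches met at `node`
            return dist[0][node] + dist[1][node]
        for nbr in graph[node]:
            if nbr not in dist[turn]:
                dist[turn][nbr] = dist[turn][node] + 1
                fringes[turn].append(nbr)
        turn = 1 - turn                 # alternate searches
    return None
```

Each side only has to explore out to roughly half the solution depth, which is the intuition behind the best-case bounds the question asks about.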
In parts (c)-(e) below, consider the following graph, with start state S and goal state G. Edge costs are labeled on the edges, and heuristic values are given by the h values next to each state. In the search procedures below, break any ties alphabetically, so that if nodes on your fringe are tied in values, the state that comes first alphabetically is expanded first.

[Graph figure: states S, A, B, C, D, G with labeled edge costs and a heuristic value h at each state]

(c) [1 pt] Greedy Graph Search. What is the path returned by greedy graph search, using the given heuristic?
( ) S A G   ( ) S A C G   ( ) S B A C G   ( ) S B A G   ( ) S B C G

(d) A* Graph Search
(i) [1 pt] List the nodes in the order they are expanded by A* graph search: Order: ___
(ii) [1 pt] What is the path returned by A* graph search?
( ) S A G   ( ) S A C G   ( ) S B A C G   ( ) S B A G   ( ) S B C G

(e) Heuristic Properties
(i) [1 pt] Is this heuristic admissible? If so, mark Already admissible. If not, find a minimal set of nodes that would need to have their values changed to make the heuristic admissible, and mark them below.
( ) Already admissible   [ ] Change h(S)   [ ] Change h(A)   [ ] Change h(B)   [ ] Change h(C)   [ ] Change h(D)   [ ] Change h(G)
(ii) [1 pt] Is this heuristic consistent? If so, mark Already consistent. If not, find the minimal set of nodes that would need to have their values changed to make the heuristic consistent, and mark them below.
( ) Already consistent   [ ] Change h(S)   [ ] Change h(A)   [ ] Change h(B)   [ ] Change h(C)   [ ] Change h(D)   [ ] Change h(G)
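A sketch of A* graph search in the style used above, on a hypothetical graph (the exam's edge costs and heuristic values are in the figure, so the numbers below are made up for illustration). The closed set prevents re-expansion, and the heap tuple (f, state, ...) breaks f-value ties alphabetically.

```python
import heapq

def a_star(graph, h, start, goal):
    """graph[s] = list of (neighbor, cost); h[s] = heuristic value."""
    fringe = [(h[start], start, [start], 0)]   # (f = g + h, state, path, g)
    closed = set()
    order = []                                 # expansion order, for inspection
    while fringe:
        f, state, path, g = heapq.heappop(fringe)
        if state in closed:
            continue
        closed.add(state)
        order.append(state)
        if state == goal:
            return path, g, order
        for nbr, cost in graph[state]:
            if nbr not in closed:
                heapq.heappush(fringe, (g + cost + h[nbr], nbr, path + [nbr], g + cost))
    return None, float("inf"), order

# Hypothetical example (not the exam's graph):
graph = {"S": [("A", 1), ("B", 4)], "A": [("C", 2), ("G", 6)],
         "B": [("G", 1)], "C": [("G", 1)], "G": []}
h = {"S": 3, "A": 2, "B": 1, "C": 1, "G": 0}
```

Here h is admissible and consistent, so A* expands S, A, C, G and returns the optimal path S-A-C-G with cost 4.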
Q4. [8 pts] CSPs

Four people, A, B, C, and D, are all looking to rent space in an apartment building. There are three floors in the building, 1, 2, and 3 (where 1 is the lowest floor and 3 is the highest). Each person must be assigned to some floor, but it's OK if more than one person is living on a floor. We have the following constraints on assignments:
- A and B must not live together on the same floor.
- If A and C live on the same floor, they must both be living on floor __.
- If A and C live on different floors, one of them must be living on floor __.
- D must not live on the same floor as anyone else.
- D must live on a higher floor than C.

We will formulate this as a CSP, where each person has a variable and the variable values are floors.

(a) [1 pt] Draw the edges for the constraint graph representing this problem. Use binary constraints only. You do not need to label the edges. [Nodes A, B, C, D]

(b) [ pts] Suppose we have assigned C = __. Apply forward checking to the CSP, filling in the boxes next to the values for each variable that are eliminated:
[Value boxes for each of A, B, C, D]

(c) [ pts] Starting from the original CSP with full domains (i.e. without assigning any variables or doing the forward checking in the previous part), enforce arc consistency for the entire CSP graph, filling in the boxes next to the values that are eliminated for each variable:
[Value boxes for each of A, B, C, D]

(d) [ pts] Suppose that we were running local search with the min-conflicts algorithm for this CSP, and currently have the following variable assignments: [assignments for A, B, C, D]. Which variable would be reassigned, and which value would it be reassigned to? Assume that any ties are broken alphabetically for variables and in numerical order for values.
The variable ( ) A ( ) B ( ) C ( ) D will be assigned the new value ( ) 1 ( ) 2 ( ) 3
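Arc consistency as in part (c) can be sketched with AC-3. This sketch uses only the constraints of this CSP that are stated unambiguously above (A != B, D different from everyone, D > C); the two A/C floor constraints are omitted, so the domains it computes are for that reduced problem only.

```python
def ac3(domains, constraints):
    """constraints: dict mapping arc (X, Y) -> predicate on (x_val, y_val).
    Mutates and returns `domains` (variable -> set of values)."""
    queue = list(constraints)
    while queue:
        x, y = queue.pop()
        pred = constraints[(x, y)]
        # Remove values of x that have no supporting value of y.
        pruned = {vx for vx in domains[x]
                  if not any(pred(vx, vy) for vy in domains[y])}
        if pruned:
            domains[x] -= pruned
            # Re-check every arc pointing into x.
            queue.extend(arc for arc in constraints if arc[1] == x)
    return domains

domains = {v: {1, 2, 3} for v in "ABCD"}
ne = lambda a, b: a != b
constraints = {("A", "B"): ne, ("B", "A"): ne,
               ("D", "A"): ne, ("A", "D"): ne,
               ("D", "B"): ne, ("B", "D"): ne,
               ("D", "C"): lambda d, c: d > c, ("C", "D"): lambda c, d: d > c}
domains = ac3(domains, constraints)
```

With just these constraints, D > C alone already prunes: D can never be 1 and C can never be 3.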
Q5. [9 pts] Game Trees

The following problems are to test your knowledge of Game Trees.

(a) Minimax
The first part is based upon the following tree. Upward triangle nodes are maximizer nodes and downward are minimizers. (Small squares on edges will be used to mark pruned nodes in part (ii).)
[Game tree figure]
(i) [1 pt] Complete the game tree shown above by filling in values on the maximizer and minimizer nodes.
(ii) [3 pts] Indicate which nodes can be pruned by marking the edge above each node that can be pruned (you do not need to mark any edges below pruned nodes). In the case of ties, please prune any nodes that could not affect the root node's value. Fill in the bubble below if no nodes can be pruned.
( ) No nodes can be pruned
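The pruning rule in part (ii) is alpha-beta pruning. A sketch on a hypothetical tree (the exam's leaf values are in the figure): leaves are ints, internal nodes are lists, and levels alternate between max and min. The `visited` list records which leaves were actually evaluated, so pruned leaves never appear in it.

```python
def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf"), visited=None):
    """Minimax with alpha-beta pruning; returns (value, visited leaves)."""
    if visited is None:
        visited = []
    if isinstance(node, int):
        visited.append(node)
        return node, visited
    best = float("-inf") if maximizing else float("inf")
    for child in node:
        value, _ = alphabeta(child, not maximizing, alpha, beta, visited)
        if maximizing:
            best = max(best, value)
            alpha = max(alpha, best)
        else:
            best = min(best, value)
            beta = min(beta, best)
        if alpha >= beta:            # remaining siblings are pruned
            break
    return best, visited

tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]   # max root over three min nodes
value, visited = alphabeta(tree, True)
```

On this tree the second min node is cut off after its first leaf (2 is already worse than the 3 the maximizer has in hand), so leaves 4 and 6 are never visited.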
(b) Food Dimensions
The following questions are completely unrelated to the above parts.

Pacman is playing a tricky game. There are 4 portals to food dimensions. But these portals are guarded by a ghost. Furthermore, neither Pacman nor the ghost knows for sure how many pellets are behind each portal, though they know what options and probabilities there are for all but the last portal.

Pacman moves first, either moving West or East. After which, the ghost can block 1 of the portals available. You have the following game tree. The maximizer node is Pacman. The minimizer nodes are ghosts and the portals are chance nodes with the probabilities indicated on the edges to the food. In the event of a tie, the left action is taken. Assume Pacman and the ghosts play optimally.

[Game tree figure: West / East, portals P1, P2, P3, P4, with unknown pellet counts X and Y behind P4]

(i) [1 pt] Fill in values for the nodes that do not depend on X and Y.
(ii) [4 pts] What conditions must X and Y satisfy for Pacman to move East? What about to definitely reach P4? Keep in mind that X and Y denote numbers of food pellets and must be whole numbers: X, Y ∈ {0, 1, 2, 3, ...}.
To move East: ___
To reach P4: ___
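Trees like this one are evaluated with expecti-minimax: max at Pacman's node, min at the ghost's nodes, and expectation at the portals. A sketch with a hypothetical tree shape and probabilities (not the exam's numbers):

```python
def value(node):
    """Evaluate an expecti-minimax tree given as nested tuples."""
    kind = node[0]
    if kind == "leaf":
        return node[1]
    if kind == "max":                         # Pacman picks the best child
        return max(value(c) for c in node[1])
    if kind == "min":                         # ghost picks the worst child
        return min(value(c) for c in node[1])
    if kind == "chance":                      # node[1]: list of (prob, child)
        return sum(p * value(c) for p, c in node[1])

# Hypothetical portals: each is a chance node over pellet counts.
portal1 = ("chance", [(0.5, ("leaf", 8)), (0.5, ("leaf", 0))])    # EV 4
portal2 = ("chance", [(0.25, ("leaf", 8)), (0.75, ("leaf", 4))])  # EV 5
portal3 = ("chance", [(1.0, ("leaf", 6))])                        # EV 6
west = ("min", [portal1, portal2])   # ghost leaves Pacman the worse portal
east = ("min", [portal2, portal3])
root = ("max", [west, east])
```

With these numbers the ghost forces Pacman down to the lower-expectation portal on each side (4 going West, 5 going East), so Pacman moves East.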
Q6. [10 pts] Something Fishy

In this problem, we will consider the task of managing a fishery for an infinite number of days. (Fisheries farm fish, continually harvesting and selling them.) Imagine that our fishery has a very large, enclosed pool where we keep our fish.

Harvest (__ pm): Before we go home each day at __ pm, we have the option to harvest some (possibly all) of the fish, thus removing those fish from the pool and earning us some profit, x dollars for x fish.

Birth/death (midnight): At midnight each day, some fish are born and some die, so the number of fish in the pool changes. An ecologist has analyzed the ecological dynamics of the fish population. They say that if at midnight there are x fish in the pool, then after midnight there will be exactly f(x) fish in the pool, where f is a function they have provided to us. (We will pretend it is possible to have fractional fish.)

To ensure you properly maximize your profit while managing the fishery, you choose to model it using a Markov decision problem. For this problem we will define States and Actions as follows:
State: the number of fish in the pool that day (before harvesting)
Action: the number of fish you harvest that day

(a) [2 pts] How will you define the transition and reward functions?
T(s, a, s') = ___
R(s, a) = ___

(b) [4 pts] Suppose the discount rate is γ = 0.99 and f is as below. Graph the optimal policy π.
[Plot: fish population dynamic f, # fish after midnight vs. # fish before midnight] [Answer axes: optimal action for day i vs. state on day i]

(c) [4 pts] Suppose the discount rate is γ = 0.99 and f is as below. Graph the optimal policy π.
[Plot: fish population dynamic f, # fish after midnight vs. # fish before midnight] [Answer axes: optimal action for day i vs. state on day i]
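Once T and R are written down, the optimal policy in parts (b) and (c) comes out of value iteration. A tiny sketch in the spirit of this MDP, with assumed specifics: states are integer fish counts 0..3, the reward is 1 dollar per harvested fish, and the population dynamic f is a made-up "remaining fish double, capped at 3".

```python
GAMMA = 0.99
STATES = range(4)            # number of fish in the pool, 0..3

def f(n):
    """Hypothetical population dynamic: survivors double, pool caps at 3."""
    return min(2 * n, 3)

def value_iteration(iterations=500):
    V = [0.0] * len(STATES)
    for _ in range(iterations):
        # Harvesting a fish pays a dollars now; f(s - a) fish remain tomorrow.
        V = [max(a + GAMMA * V[f(s - a)] for a in range(s + 1)) for s in STATES]
    return V

V = value_iteration()
```

Because γ is close to 1, leaving enough fish to keep the population reproducing is worth far more than harvesting everything on day one, which is the qualitative behavior the policy-graphing parts are probing.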
Q7. [8 pts] Policy Evaluation

In this question, you will be working in an MDP with states S, actions A, discount factor γ, transition function T, and reward function R. We have some fixed policy π : S → A, which returns an action a = π(s) for each state s ∈ S. We want to learn the Q function Q^π(s, a) for this policy: the expected discounted reward from taking action a in state s and then continuing to act according to π:

Q^π(s, a) = Σ_{s'} T(s, a, s') [ R(s, a, s') + γ Q^π(s', π(s')) ].

The policy π will not change while running any of the algorithms below.

(a) [1 pt] Can we guarantee anything about how the values Q^π compare to the values Q* for an optimal policy π*?
( ) Q^π(s, a) ≤ Q*(s, a) for all s, a
( ) Q^π(s, a) = Q*(s, a) for all s, a
( ) Q^π(s, a) ≥ Q*(s, a) for all s, a
( ) None of the above are guaranteed

(b) Suppose T and R are unknown. You will develop sample-based methods to estimate Q^π. You obtain a series of samples (s_1, a_1, r_1), (s_2, a_2, r_2), ..., (s_T, a_T, r_T) from acting according to this policy (where a_t = π(s_t), for all t).

(i) [4 pts] Recall the update equation for the Temporal Difference algorithm, performed on each sample in sequence:

V(s_t) ← (1 - α) V(s_t) + α (r_t + γ V(s_{t+1}))

which approximates the expected discounted reward V^π(s) for following policy π from each state s, for a learning rate α.

Fill in the blank below to create a similar update equation which will approximate Q^π using the samples. You can use any of the terms Q, s_t, s_{t+1}, a_t, a_{t+1}, r_t, r_{t+1}, γ, α, π in your equation, as well as Σ and max with any index variables (i.e. you could write max_a, or Σ_a and then use a somewhere else), but no other terms.

Q(s_t, a_t) ← (1 - α) Q(s_t, a_t) + α [ ___ ]

(ii) [2 pts] Now, we will approximate Q^π using a linear function: Q(s, a) = w · f(s, a) for a weight vector w and feature function f(s, a). To decouple this part from the previous part, use Q_samp for the value in the blank in part (i) (i.e. Q(s_t, a_t) ← (1 - α) Q(s_t, a_t) + α Q_samp). Which of the following is the correct sample-based update for w?
( ) w ← w + α [Q(s_t, a_t) - Q_samp]
( ) w ← w - α [Q(s_t, a_t) - Q_samp]
( ) w ← w + α [Q(s_t, a_t) - Q_samp] f(s_t, a_t)
( ) w ← w - α [Q(s_t, a_t) - Q_samp] f(s_t, a_t)
( ) w ← w + α [Q(s_t, a_t) - Q_samp] w
( ) w ← w - α [Q(s_t, a_t) - Q_samp] w

(iii) [1 pt] The algorithms in the previous parts (parts i and ii) are:
( ) model-based   ( ) model-free
Q8. [8 pts] Bayes Nets: Inference

Consider the following Bayes Net, where we have observed that D = +d.

[Bayes Net diagram: A → B; A, B → C; C → D]

P(A):
+a  0.5
-a  0.5

P(B | A):
+a +b  0.5
+a -b  0.5
-a +b  0.2
-a -b  0.8

P(C | A, B):
+a +b +c  0.8
+a +b -c  0.2
+a -b +c  0.6
+a -b -c  0.4
-a +b +c  0.2
-a +b -c  0.8
-a -b +c  0.1
-a -b -c  0.9

P(D | C):
+c +d  0.4
+c -d  0.6
-c +d  0.2
-c -d  0.8

(a) [1 pt] Below is a list of samples that were collected using prior sampling. Mark the samples that would be rejected by rejection sampling.
[ ] +a -b +c -d
[ ] +a -b +c +d
[ ] -a +b -c -d
[ ] +a +b +c +d

(b) [3 pts] To decouple from the previous part, you now receive a new set of samples shown below:
+a +b +c +d
-a -b -c +d
+a +b +c +d
+a -b -c +d
-a -b -c +d
For this part, express your answers as exact decimals or fractions simplified to lowest terms. Estimate the probability P(+a | +d) if these new samples were collected using...
(i) [1 pt] ... rejection sampling: ___
(ii) [2 pts] ... likelihood weighting: ___

(c) [4 pts] Instead of sampling, we now wish to use variable elimination to calculate P(+a | +d). We start with the factorized representation of the joint probability:
P(A, B, C, +d) = P(A) P(B | A) P(C | A, B) P(+d | C)
(i) [1 pt] We begin by eliminating the variable B, which creates a new factor f1. Complete the expression for the factor f1 in terms of the other factors.
f1(___) = ___
(ii) [1 pt] After eliminating B to create a factor f1, we next eliminate C to create a factor f2. What are the remaining factors after both B and C are eliminated?
[ ] P(A)   [ ] P(B | A)   [ ] P(C | A, B)   [ ] P(+d | C)   [ ] f1   [ ] f2
(iii) [2 pts] After eliminating both B and C, we are now ready to calculate P(+a | +d). Write an expression for P(+a | +d) in terms of the remaining factors.
P(+a | +d) = ___
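The likelihood-weighting estimate in (b)(ii) can be checked mechanically. In this sketch each of the five fixed part-(b) samples gets weight P(+d | c) for its value of C (the evidence D = +d is clamped, so the weight is just the evidence likelihood), using the P(D | C) table above; only the (A, C) values matter for the query.

```python
P_D_GIVEN_C = {"+c": 0.4, "-c": 0.2}     # P(+d | C) from the table above

# (A, C) values of the five part-(b) samples, in order.
samples = [("+a", "+c"), ("-a", "-c"), ("+a", "+c"), ("+a", "-c"), ("-a", "-c")]

def lw_estimate(samples):
    """Weighted fraction of samples with A = +a, weights = P(+d | c)."""
    num = sum(P_D_GIVEN_C[c] for a, c in samples if a == "+a")
    den = sum(P_D_GIVEN_C[c] for a, c in samples)
    return num / den
```

The +a samples carry weights 0.4, 0.4, 0.2 and the -a samples 0.2, 0.2, so the estimate is 1.0 / 1.4 = 5/7, whereas simple rejection sampling (all five samples kept, since all have +d) would give 3/5.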
Q9. [9 pts] Decision Networks and VPI

(a) Consider the decision network structure given below:
[Decision network diagram: chance nodes M, W, T, S, N; action node A; utility node U]
Mark all of the following statements that could possibly be true, for some probability distributions for P(M), P(W), P(T), P(S | M, W), and P(N | T, S) and some utility function U(S, A):
(i) [1.5 pts]
[ ] VPI(T) < 0
[ ] VPI(T) = 0
[ ] VPI(T) > 0
[ ] VPI(T) = VPI(N)
(ii) [1.5 pts]
[ ] VPI(T | N) < 0
[ ] VPI(T | N) = 0
[ ] VPI(T | N) > 0
[ ] VPI(T | N) = VPI(T | S)
(iii) [1.5 pts]
[ ] VPI(M) > VPI(W)
[ ] VPI(M) > VPI(S)
[ ] VPI(M) < VPI(S)
[ ] VPI(M | S) > VPI(S)

(b) Consider the decision network structure given below:
[Decision network diagram: chance nodes V, W, X, Y, Z; action node A; utility node U]
Mark all of the following statements that are guaranteed to be true, regardless of the probability distributions for any of the chance nodes and regardless of the utility function.
(i) [1.5 pts]
[ ] VPI(Y) = 0
[ ] VPI(X) = 0
[ ] VPI(Z) = VPI(W, Z)
[ ] VPI(Y) = VPI(Y, X)
(ii) [1.5 pts]
[ ] VPI(X) ≤ VPI(W)
[ ] VPI(V) ≤ VPI(W)
[ ] VPI(V | W) = VPI(V)
[ ] VPI(W | V) = VPI(W)
(iii) [1.5 pts]
[ ] VPI(X | W) = 0
[ ] VPI(Z | W) = 0
[ ] VPI(X, W) = VPI(V, W)
[ ] VPI(W, Y) = VPI(W) + VPI(Y)
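VPI statements like the ones above are easiest to sanity-check numerically. A minimal sketch on a one-chance-node decision problem (entirely hypothetical numbers): S is a fair coin, the agent picks an action, and U pays 10 for a match. VPI(S) is the gain in maximum expected utility from observing S before acting, and it can never be negative.

```python
P_S = {0: 0.5, 1: 0.5}          # prior over the chance node S
ACTIONS = (0, 1)
U = lambda s, a: 10 if s == a else 0

def meu_without_info():
    # Commit to one action, then take the expectation over S.
    return max(sum(P_S[s] * U(s, a) for s in P_S) for a in ACTIONS)

def meu_with_info():
    # Observe S first, then pick the best action for each outcome.
    return sum(P_S[s] * max(U(s, a) for a in ACTIONS) for s in P_S)

vpi = meu_with_info() - meu_without_info()
```

Here the observation is worth 5 utils: knowing the coin lets the agent always match it, doubling expected utility from 5 to 10.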
Q10. [15 pts] Neural Networks: Representation

[Network diagrams. Reading off the operator chains:
G1: x → (* w1) → (* w2) → y;   H1: the same with a 2-dimensional hidden layer
G2: x → (* w1) → (+ b1) → (* w2) → y;   H2: 2-dimensional version
G3: x → (* w1) → relu → (* w2) → y;   H3: 2-dimensional version
G4: x → (* w1) → (+ b1) → relu → (* w2) → y;   H4: 2-dimensional version
G5: x → (* w1) → (+ b1) → relu → (* w2) → (+ b2) → y;   H5: 2-dimensional version]

For each of the piecewise-linear functions below, mark all networks from the list above that can represent the function exactly on the range ( , ). In the networks above, relu denotes the element-wise ReLU nonlinearity: relu(z) = max(0, z). The networks G_i use 1-dimensional layers, while the networks H_i have some 2-dimensional intermediate layers.

(a) [5 pts]
[First piecewise-linear function plot]
[ ] G1 [ ] G2 [ ] G3 [ ] G4 [ ] G5   [ ] H1 [ ] H2 [ ] H3 [ ] H4 [ ] H5   [ ] None of the above
[Second piecewise-linear function plot]
[ ] G1 [ ] G2 [ ] G3 [ ] G4 [ ] G5   [ ] H1 [ ] H2 [ ] H3 [ ] H4 [ ] H5   [ ] None of the above

(b) [5 pts]
[First piecewise-linear function plot]
[ ] G1 [ ] G2 [ ] G3 [ ] G4 [ ] G5   [ ] H1 [ ] H2 [ ] H3 [ ] H4 [ ] H5   [ ] None of the above
[Second piecewise-linear function plot]
[ ] G1 [ ] G2 [ ] G3 [ ] G4 [ ] G5   [ ] H1 [ ] H2 [ ] H3 [ ] H4 [ ] H5   [ ] None of the above

(c) [5 pts]
[First piecewise-linear function plot]
[ ] G1 [ ] G2 [ ] G3 [ ] G4 [ ] G5   [ ] H1 [ ] H2 [ ] H3 [ ] H4 [ ] H5   [ ] None of the above
[Second piecewise-linear function plot]
[ ] G1 [ ] G2 [ ] G3 [ ] G4 [ ] G5   [ ] H1 [ ] H2 [ ] H3 [ ] H4 [ ] H5   [ ] None of the above
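A concrete illustration of why the 2-dimensional H networks are strictly more expressive than the 1-dimensional G networks: with a 2-wide hidden layer, a network of the H4 shape (w2 · relu(w1 x + b1)) can represent |x|, while a single scalar ReLU of the G3/G4 shape cannot, since it is flat on one side of its kink.

```python
def relu(z):
    return [max(0.0, v) for v in z]

def h4(x, w1, b1, w2):
    """H4-shaped network: x -> *w1 -> +b1 -> relu -> .w2 (2-dim hidden)."""
    hidden = relu([w * x + b for w, b in zip(w1, b1)])
    return sum(w * h for w, h in zip(w2, hidden))

# relu(x) + relu(-x) == |x|
abs_net = lambda x: h4(x, w1=[1.0, -1.0], b1=[0.0, 0.0], w2=[1.0, 1.0])
```

The two hidden units each handle one side of the kink; this one-kink-per-hidden-unit picture is the key to matching networks to the piecewise-linear plots.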
Q11. [9 pts] Backpropagation

In this question we will perform the backward pass algorithm on the formula f = ||Ax||². Here,

A = [[A11, A12], [A21, A22]],   x = [x1, x2],   b = Ax = [A11 x1 + A12 x2, A21 x1 + A22 x2],

and f = ||Ax||² = b1² + b2² is a scalar. The computation graph is: x, A → b → f.

(a) [1 pt] Calculate the following partial derivatives of f.
(i) [1 pt] Find ∂f/∂b = [∂f/∂b1, ∂f/∂b2].
[Four candidate vectors]

(b) [3 pts] Calculate the following partial derivatives of b.
(i) [1 pt] (∂b1/∂A11, ∂b1/∂A12): [candidate pairs built from entries of A and x]
(ii) [1 pt] (∂b2/∂A21, ∂b2/∂A22): [candidate pairs built from entries of A and x]
(iii) [1 pt] (∂b1/∂x1, ∂b1/∂x2): [candidate pairs built from entries of A, x, and b]

(c) [3 pts] Calculate the following partial derivatives of f.
(i) [1 pt] (∂f/∂A11, ∂f/∂A12): [candidate pairs built from entries of A, x, and b]
(ii) [1 pt] (∂f/∂A21, ∂f/∂A22): [candidate pairs built from entries of A, x, and b]
(iii) [1 pt] (∂f/∂x1, ∂f/∂x2): [four candidate pairs, each a sum of products of entries of A and b]

(d) [2 pts] Now we consider the general case where A is an n × d matrix, and x is a d-dimensional vector. As before, f = ||Ax||².
(i) [1 pt] Find ∂f/∂A in terms of A and x only. [Candidate expressions built from A, its transpose, and x]
(ii) [1 pt] Find ∂f/∂x in terms of A and x only. [Candidate expressions built from A, its transpose, and x]
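Backward-pass answers for f = ||Ax||² can be verified with a finite-difference check. The standard results are ∂f/∂b = 2b (with b = Ax), ∂f/∂x = 2 AᵀAx, and ∂f/∂A = 2 (Ax) xᵀ; this pure-Python sketch (hypothetical values for A and x) checks the ∂f/∂x formula numerically.

```python
def matvec(A, x):
    return [sum(aij * xj for aij, xj in zip(row, x)) for row in A]

def f(A, x):
    """f = ||Ax||^2 = sum of squared entries of b = Ax."""
    return sum(bi * bi for bi in matvec(A, x))

def analytic_grad_x(A, x):
    """df/dx = 2 A^T b = 2 A^T A x."""
    b = matvec(A, x)
    return [2 * sum(A[i][j] * b[i] for i in range(len(A))) for j in range(len(x))]

def numeric_grad_x(A, x, eps=1e-6):
    """Central finite differences in each coordinate of x."""
    grads = []
    for j in range(len(x)):
        xp = list(x); xp[j] += eps
        xm = list(x); xm[j] -= eps
        grads.append((f(A, xp) - f(A, xm)) / (2 * eps))
    return grads

A = [[1.0, 2.0], [3.0, 4.0]]
x = [0.5, -1.0]
```

This kind of gradient check is exactly how you would catch a wrong index choice among the look-alike multiple-choice options.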
More informationMidterm II. Introduction to Artificial Intelligence. CS 188 Fall You have approximately 3 hours.
CS 188 Fall 2012 Introduction to Artificial Intelligence Midterm II You have approximately 3 hours. The exam is closed book, closed notes except a one-page crib sheet. Please use non-programmable calculators
More informationLab 5 Forces Part 1. Physics 211 Lab. You will be using Newton s 2 nd Law to help you examine the nature of these forces.
b Lab 5 Forces Part 1 Phsics 211 Lab Introduction This is the first week of a two part lab that deals with forces and related concepts. A force is a push or a pull on an object that can be caused b a variet
More informationCSE 546 Final Exam, Autumn 2013
CSE 546 Final Exam, Autumn 0. Personal info: Name: Student ID: E-mail address:. There should be 5 numbered pages in this exam (including this cover sheet).. You can use any material you brought: any book,
More informationIntroduction to Machine Learning Midterm, Tues April 8
Introduction to Machine Learning 10-701 Midterm, Tues April 8 [1 point] Name: Andrew ID: Instructions: You are allowed a (two-sided) sheet of notes. Exam ends at 2:45pm Take a deep breath and don t spend
More informationName: UW CSE 473 Final Exam, Fall 2014
P1 P6 Instructions Please answer clearly and succinctly. If an explanation is requested, think carefully before writing. Points may be removed for rambling answers. If a question is unclear or ambiguous,
More informationCS 570: Machine Learning Seminar. Fall 2016
CS 570: Machine Learning Seminar Fall 2016 Class Information Class web page: http://web.cecs.pdx.edu/~mm/mlseminar2016-2017/fall2016/ Class mailing list: cs570@cs.pdx.edu My office hours: T,Th, 2-3pm or
More informationFinal exam of ECE 457 Applied Artificial Intelligence for the Spring term 2007.
Spring 2007 / Page 1 Final exam of ECE 457 Applied Artificial Intelligence for the Spring term 2007. Don t panic. Be sure to write your name and student ID number on every page of the exam. The only materials
More informationCSL302/612 Artificial Intelligence End-Semester Exam 120 Minutes
CSL302/612 Artificial Intelligence End-Semester Exam 120 Minutes Name: Roll Number: Please read the following instructions carefully Ø Calculators are allowed. However, laptops or mobile phones are not
More informationD u f f x h f y k. Applying this theorem a second time, we have. f xx h f yx k h f xy h f yy k k. f xx h 2 2 f xy hk f yy k 2
93 CHAPTER 4 PARTIAL DERIVATIVES We close this section b giving a proof of the first part of the Second Derivatives Test. Part (b) has a similar proof. PROOF OF THEOREM 3, PART (A) We compute the second-order
More informationMachine Learning, Midterm Exam
10-601 Machine Learning, Midterm Exam Instructors: Tom Mitchell, Ziv Bar-Joseph Wednesday 12 th December, 2012 There are 9 questions, for a total of 100 points. This exam has 20 pages, make sure you have
More informationDecision Theory: Q-Learning
Decision Theory: Q-Learning CPSC 322 Decision Theory 5 Textbook 12.5 Decision Theory: Q-Learning CPSC 322 Decision Theory 5, Slide 1 Lecture Overview 1 Recap 2 Asynchronous Value Iteration 3 Q-Learning
More informationMATH 1710 College Algebra Final Exam Review
MATH 7 College Algebra Final Eam Review MULTIPLE CHOICE. Choose the one alternative that best completes the statement or answers the question. ) There were 80 people at a pla. The admission price was $
More informationQ1. [24 pts] The OMNIBUS
CS 188 Spring 2011 Introduction to Artificial Intelligence Final Exam Solutions Q1. [24 pts] The OMNIBUS Each question is worth 1 point. Leaving a question blank is worth 0 points. Answering a multiple
More informationBe able to define the following terms and answer basic questions about them:
CS440/ECE448 Fall 2016 Final Review Be able to define the following terms and answer basic questions about them: Probability o Random variables o Axioms of probability o Joint, marginal, conditional probability
More informationName (NetID): (1 Point)
CS446: Machine Learning (D) Spring 2017 March 16 th, 2017 This is a closed book exam. Everything you need in order to solve the problems is supplied in the body of this exam. This exam booklet contains
More informationUsing first-order logic, formalize the following knowledge:
Probabilistic Artificial Intelligence Final Exam Feb 2, 2016 Time limit: 120 minutes Number of pages: 19 Total points: 100 You can use the back of the pages if you run out of space. Collaboration on the
More informationSection 5.1: Functions
Objective: Identif functions and use correct notation to evaluate functions at numerical and variable values. A relationship is a matching of elements between two sets with the first set called the domain
More informationIntroduction to Machine Learning Midterm Exam
10-701 Introduction to Machine Learning Midterm Exam Instructors: Eric Xing, Ziv Bar-Joseph 17 November, 2015 There are 11 questions, for a total of 100 points. This exam is open book, open notes, but
More informationIntroduction to Spring 2009 Artificial Intelligence Midterm Solutions
S 88 Introduction to Spring 009 rtificial Intelligence Midterm Solutions. (6 points) True/False For the following questions, a correct answer is worth points, no answer is worth point, and an incorrect
More informationRevision: Neural Network
Revision: Neural Network Exercise 1 Tell whether each of the following statements is true or false by checking the appropriate box. Statement True False a) A perceptron is guaranteed to perfectly learn
More informationFinal Exam. CS 188 Fall Introduction to Artificial Intelligence
CS 188 Fall 2018 Introduction to rtificial Intelligence Final Exam You have 180 minutes. The time will be projected at the front of the room. You may not leave during the last 10 minutes of the exam. Do
More informationMachine Learning, Midterm Exam: Spring 2009 SOLUTION
10-601 Machine Learning, Midterm Exam: Spring 2009 SOLUTION March 4, 2009 Please put your name at the top of the table below. If you need more room to work out your answer to a question, use the back of
More informationName (NetID): (1 Point)
CS446: Machine Learning Fall 2016 October 25 th, 2016 This is a closed book exam. Everything you need in order to solve the problems is supplied in the body of this exam. This exam booklet contains four
More informationAndrew/CS ID: Midterm Solutions, Fall 2006
Name: Andrew/CS ID: 15-780 Midterm Solutions, Fall 2006 November 15, 2006 Place your name and your andrew/cs email address on the front page. The exam is open-book, open-notes, no electronics other than
More informationCross-validation for detecting and preventing overfitting
Cross-validation for detecting and preventing overfitting A Regression Problem = f() + noise Can we learn f from this data? Note to other teachers and users of these slides. Andrew would be delighted if
More informationMachine Learning Basics III
Machine Learning Basics III Benjamin Roth CIS LMU München Benjamin Roth (CIS LMU München) Machine Learning Basics III 1 / 62 Outline 1 Classification Logistic Regression 2 Gradient Based Optimization Gradient
More informationFeedforward Neural Networks
Chapter 4 Feedforward Neural Networks 4. Motivation Let s start with our logistic regression model from before: P(k d) = softma k =k ( λ(k ) + w d λ(k, w) ). (4.) Recall that this model gives us a lot
More informationCS 188 Introduction to AI Fall 2005 Stuart Russell Final
NAME: SID#: Section: 1 CS 188 Introduction to AI all 2005 Stuart Russell inal You have 2 hours and 50 minutes. he exam is open-book, open-notes. 100 points total. Panic not. Mark your answers ON HE EXAM
More information2.3 Quadratic Functions
88 Linear and Quadratic Functions. Quadratic Functions You ma recall studing quadratic equations in Intermediate Algebra. In this section, we review those equations in the contet of our net famil of functions:
More informationFunctions. Introduction
Functions,00 P,000 00 0 970 97 980 98 990 99 000 00 00 Figure Standard and Poor s Inde with dividends reinvested (credit "bull": modification of work b Praitno Hadinata; credit "graph": modification of
More informationUNIVERSITY of PENNSYLVANIA CIS 520: Machine Learning Final, Fall 2013
UNIVERSITY of PENNSYLVANIA CIS 520: Machine Learning Final, Fall 2013 Exam policy: This exam allows two one-page, two-sided cheat sheets; No other materials. Time: 2 hours. Be sure to write your name and
More informationUnit 12 Study Notes 1 Systems of Equations
You should learn to: Unit Stud Notes Sstems of Equations. Solve sstems of equations b substitution.. Solve sstems of equations b graphing (calculator). 3. Solve sstems of equations b elimination. 4. Solve
More informationMock Exam Künstliche Intelligenz-1. Different problems test different skills and knowledge, so do not get stuck on one problem.
Name: Matriculation Number: Mock Exam Künstliche Intelligenz-1 January 9., 2017 You have one hour(sharp) for the test; Write the solutions to the sheet. The estimated time for solving this exam is 53 minutes,
More information74 Maths Quest 10 for Victoria
Linear graphs Maria is working in the kitchen making some high energ biscuits using peanuts and chocolate chips. She wants to make less than g of biscuits but wants the biscuits to contain at least 8 g
More informationWe have examined power functions like f (x) = x 2. Interchanging x
CHAPTER 5 Eponential and Logarithmic Functions We have eamined power functions like f =. Interchanging and ields a different function f =. This new function is radicall different from a power function
More informationMth Quadratic functions and quadratic equations
Mth 0 - Quadratic functions and quadratic equations Name Find the product. 1) 8a3(2a3 + 2 + 12a) 2) ( + 4)( + 6) 3) (3p - 1)(9p2 + 3p + 1) 4) (32 + 4-4)(2-3 + 3) ) (4a - 7)2 Factor completel. 6) 92-4 7)
More informationSection J Venn diagrams
Section J Venn diagrams A Venn diagram is a wa of using regions to represent sets. If the overlap it means that there are some items in both sets. Worked eample J. Draw a Venn diagram representing the
More informationCSE 473: Artificial Intelligence Autumn Topics
CSE 473: Artificial Intelligence Autumn 2014 Bayesian Networks Learning II Dan Weld Slides adapted from Jack Breese, Dan Klein, Daphne Koller, Stuart Russell, Andrew Moore & Luke Zettlemoyer 1 473 Topics
More informationNova Scotia Examinations Mathematics 12 Web Sample 2. Student Booklet
Nova Scotia Eaminations Mathematics Web Sample Student Booklet General Instructions - WEB SAMPLE* This eamination is composed of two sections with the following suggested time allotment: Selected-Response
More informationLab 5 Forces Part 1. Physics 225 Lab. You will be using Newton s 2 nd Law to help you examine the nature of these forces.
b Lab 5 orces Part 1 Introduction his is the first week of a two part lab that deals with forces and related concepts. A force is a push or a pull on an object that can be caused b a variet of reasons.
More informationFunctions. Essential Question What is a function? Work with a partner. Functions can be described in many ways.
. Functions Essential Question What is a function? A relation pairs inputs with outputs. When a relation is given as ordered pairs, the -coordinates are inputs and the -coordinates are outputs. A relation
More informationThe exam is closed book, closed notes except your one-page (two sides) or two-page (one side) crib sheet.
CS 189 Spring 013 Introduction to Machine Learning Final You have 3 hours for the exam. The exam is closed book, closed notes except your one-page (two sides) or two-page (one side) crib sheet. Please
More informationChapter Nine Chapter Nine
Chapter Nine Chapter Nine 6 CHAPTER NINE ConcepTests for Section 9.. Table 9. shows values of f(, ). Does f appear to be an increasing or decreasing function of? Of? Table 9. 0 0 0 7 7 68 60 0 80 77 73
More informationCS 4700: Foundations of Artificial Intelligence Ungraded Homework Solutions
CS 4700: Foundations of Artificial Intelligence Ungraded Homework Solutions 1. Neural Networks: a. There are 2 2n distinct Boolean functions over n inputs. Thus there are 16 distinct Boolean functions
More informationMath 115 First Midterm February 9, 2016
Math First Midterm Februar 9, 06 EXAM SOLUTIONS. Do not open this eam until ou are told to do so.. This eam has pages including this cover. There are problems. Note that the problems are not of equal difficult,
More information6.867 Machine learning
6.867 Machine learning Mid-term eam October 8, 6 ( points) Your name and MIT ID: .5.5 y.5 y.5 a).5.5 b).5.5.5.5 y.5 y.5 c).5.5 d).5.5 Figure : Plots of linear regression results with different types of
More informationLimits 4: Continuity
Limits 4: Continuit 55 Limits 4: Continuit Model : Continuit I. II. III. IV. z V. VI. z a VII. VIII. IX. Construct Your Understanding Questions (to do in class). Which is the correct value of f (a) in
More informationFinal. Introduction to Artificial Intelligence. CS 188 Summer 2014
S 188 Summer 2014 Introduction to rtificial Intelligence Final You have approximately 2 hours and 50 minutes. The exam is closed book, closed notes except your two-page crib sheet. Mark your answers ON
More information1 Greedy Graph Algorithms
+//+ Algorithms & Data Structures (DL) Exam of 7 December 5 Instructions: This is a multiple-choice exam, to save ou the time of tiding up the presentation of our answers. There is exactl one correct answer
More informationAnswer Explanations. The SAT Subject Tests. Mathematics Level 1 & 2 TO PRACTICE QUESTIONS FROM THE SAT SUBJECT TESTS STUDENT GUIDE
The SAT Subject Tests Answer Eplanations TO PRACTICE QUESTIONS FROM THE SAT SUBJECT TESTS STUDENT GUIDE Mathematics Level & Visit sat.org/stpractice to get more practice and stud tips for the Subject Test
More informationCS188: Artificial Intelligence, Fall 2009 Written 2: MDPs, RL, and Probability
CS188: Artificial Intelligence, Fall 2009 Written 2: MDPs, RL, and Probability Due: Thursday 10/15 in 283 Soda Drop Box by 11:59pm (no slip days) Policy: Can be solved in groups (acknowledge collaborators)
More informationCh 3 Alg 2 Note Sheet.doc 3.1 Graphing Systems of Equations
Ch 3 Alg Note Sheet.doc 3.1 Graphing Sstems of Equations Sstems of Linear Equations A sstem of equations is a set of two or more equations that use the same variables. If the graph of each equation =.4
More informationFINAL: CS 6375 (Machine Learning) Fall 2014
FINAL: CS 6375 (Machine Learning) Fall 2014 The exam is closed book. You are allowed a one-page cheat sheet. Answer the questions in the spaces provided on the question sheets. If you run out of room for
More informationDo not tear exam apart!
6.036: Final Exam: Fall 2017 Do not tear exam apart! This is a closed book exam. Calculators not permitted. Useful formulas on page 1. The problems are not necessarily in any order of di culty. Record
More informationNotes on Discriminant Functions and Optimal Classification
Notes on Discriminant Functions and Optimal Classification Padhraic Smyth, Department of Computer Science University of California, Irvine c 2017 1 Discriminant Functions Consider a classification problem
More informationtotal 58
CS687 Exam, J. Košecká Name: HONOR SYSTEM: This examination is strictly individual. You are not allowed to talk, discuss, exchange solutions, etc., with other fellow students. Furthermore, you are only
More information