Stochastic Enumeration Method for Counting Trees
Stochastic Enumeration Method for Counting Trees
Slava Vaisman (joint work with Dirk P. Kroese), University of Queensland, January 11, 2015. (37 slides)
Overview
1. The tree counting problem: Is it hard? Is it interesting? Previous work.
2. Knuth's estimator: the problem with Knuth's estimator, and what we can do about it.
3. From Knuth to Stochastic Enumeration (SE): the algorithm.
4. Analysis: an almost-sure Fully Polynomial Randomized Approximation Scheme for random trees (supercritical branching process).
5. SE in practice: network reliability.
6. What next?
The Tree Counting Problem

Consider a rooted tree T = (V, E) with node set V and edge set E. With each node v is associated a cost c(v) ∈ R (it is also possible that the cost is a random variable C(v)). The main quantity of interest is the total cost of the tree,

Cost(T) = Σ_{v ∈ V} c(v),

or, for random costs,

Cost(T) = E( Σ_{v ∈ V} C(v) ).

A linear-time solution exists (BFS, DFS). But what if the set V is large?
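When the tree fits in memory, the total cost can of course be computed exactly by a traversal. A minimal sketch, using the 11-node example tree that appears later in the talk (its parent/child structure is inferred from the worked traces):

```python
# Exact tree cost by iterative depth-first search (linear time).
children = {"v1": ["v2", "v3"], "v2": ["v4"], "v3": ["v5", "v6"],
            "v4": ["v7", "v8", "v9"], "v5": ["v10", "v11"]}
cost = {"v1": 7, "v2": 1, "v3": 5, "v4": 3, "v5": 1, "v6": 9,
        "v7": 4, "v8": 2, "v9": 1, "v10": 14, "v11": 10}

def tree_cost(root):
    """Sum c(v) over all vertices reachable from root."""
    total, stack = 0, [root]
    while stack:
        v = stack.pop()
        total += cost[v]
        stack.extend(children.get(v, []))
    return total

print(tree_cost("v1"))  # 57
```

The whole point of the talk is what to do when |V| is far too large for such a traversal.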
Is this hard?

Figure: Complexity classes.

The general problem of estimating the cost of a tree is at least #P-hard (Valiant, 1979: counting CNF formula solutions). The existence of a computationally efficient approximation algorithm would cause the polynomial hierarchy to collapse.
Is this an interesting problem?

From a theoretical point of view:
- Research on complexity classes (#P counting problems).
- New sampling strategies for stochastic simulation algorithms.

In practice:
- Early estimates of the size of backtrack trees (Knuth).
- Efficient evaluation of strategies in Partially Observable Markov Decision Processes (POMDPs).
- Improved sampling strategies for Monte Carlo Tree Search (MCTS) algorithms: finding large rewards under rare-event settings.
- Network reliability and sensitivity.
- Many more...
POMDP Rock Sample (1)

There are 8 rocks, some of them "good" and some "bad". The robot has a sensor that can scan the rocks, but the sensor results are subject to errors. The robot can move, scan and collect rocks. Collecting a good rock, or exiting, yields a reward; any movement, and collecting a bad rock, is penalized. Our goal is to develop an optimal plan that maximizes the overall collected reward.
POMDP Rock Sample (2)

The robot operates in a belief space over "good" and "bad" rocks, b = {b_1, ..., b_8}, where b_i = P(rock i is good) (for example, b_i = 1/2 at the beginning maximizes the entropy). Let π : b → A be a mapping from the belief space to the action space, and

π* = argmax_{π ∈ Π} E_π(reward).

Using universal approximators (such as RBFs), one can compactly represent any π.

Crucial observation: as soon as an approximation to π is given, evaluating E_π(reward) becomes a tree counting problem. So, in order to approximate the optimal plan, all we need to do is optimize the parameters of the RBFs.
Previous work

- Donald E. Knuth (1975). Estimating the Efficiency of Backtrack Programs. Math. Comp. 29.
- Paul W. Purdom (1978). Tree Size by Partial Backtracking. SIAM J. Comput. 7(4).
- Pang C. Chen (1992). Heuristic Sampling: A Method for Predicting the Performance of Tree Searching Programs. SIAM J. Comput. 21(2).
- A few additional attempts based on Knuth's estimator.
Knuth's estimator

Input: A tree T_v of height h, rooted at v.
Output: An unbiased estimator C of the total cost of the tree T_v.

1. (Initialization): Set k ← 0, D ← 1, X_0 = v and C ← c(X_0). Here D is the product of all node degrees encountered along the path.
2. (Compute the successors): Let S(X_k) be the set of all successors of X_k and let D_k be the number of elements of S(X_k). If k = h or S(X_k) is empty, set D_k = 0.
3. (Terminal position?): If D_k = 0, the algorithm stops, returning C as an estimator of Cost(T_v).
4. (Advance): Choose an element X_{k+1} ∈ S(X_k) at random, each element being equally likely (thus, each choice occurs with probability 1/D_k). Set D ← D_k D, then set C ← C + c(X_{k+1}) D. Increase k by 1 and return to Step 2.

Example state: k = 0, D = 1, X_0 = v_1, C = 7.

Figure: example tree with node costs (v_1, 7), (v_2, 1), (v_3, 5), (v_4, 3), (v_5, 1), (v_6, 9), (v_7, 4), (v_8, 2), (v_9, 1), (v_10, 14), (v_11, 10).
Worked example (the algorithm statement, repeated on each original slide, is shown only once above; only the changing state is listed):
- S(X_0) = {v_2, v_3}, D_0 = 2.
- k = 1, X_1 = v_3, D = 1 · D_0 = 2, C = 7 + 5 · 2 = 17.
- S(X_1) = {v_5, v_6}, D_1 = 2.
- k = 2, X_2 = v_6, D = 2 · D_1 = 4, C = 17 + 9 · 4 = 53.
- X_2 = v_6, S(X_2) is empty, D_2 = 0.
- The algorithm stops with C = 53; we reached a terminal node. Note that Cost(T) = 57.
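The four steps above translate directly into Python. A sketch, run on the example tree (the tree structure is inferred from the worked trace; the height cap k = h is unnecessary here because the tree is finite):

```python
import random
from statistics import mean

# Example tree from the slides; structure inferred from the worked trace.
children = {"v1": ["v2", "v3"], "v2": ["v4"], "v3": ["v5", "v6"],
            "v4": ["v7", "v8", "v9"], "v5": ["v10", "v11"]}
cost = {"v1": 7, "v2": 1, "v3": 5, "v4": 3, "v5": 1, "v6": 9,
        "v7": 4, "v8": 2, "v9": 1, "v10": 14, "v11": 10}

def knuth_estimate(root, rng):
    """One run of Knuth's estimator: follow a single uniformly random
    root-to-leaf path, weighting each cost by the product D of the
    branching factors D_k seen so far."""
    X, D, C = root, 1, cost[root]          # Step 1: initialization
    while True:
        S = children.get(X, [])            # Step 2: successors of X_k
        if not S:                          # Step 3: terminal position
            return C
        D *= len(S)                        # Step 4: D <- D_k * D
        X = rng.choice(S)                  #   pick X_{k+1} w.p. 1/D_k
        C += cost[X] * D                   #   C <- C + c(X_{k+1}) * D

rng = random.Random(1)
est = mean(knuth_estimate("v1", rng) for _ in range(200_000))
print(round(est, 1))  # close to the true cost 57 (the estimator is unbiased)
```

A single run visits only one root-to-leaf path, which is exactly why the method is cheap, and, as the next slides show, also why it can fail badly.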
Does this always work? (Rare events)

Consider the hair brush tree T: a path v_1, v_2, ..., v_{n+1} in which each of the nodes v_1, ..., v_n has two children, one continuing the path and one a dead-end leaf (Figure: the hair brush tree). Suppose that the costs of all vertices are zero except for v_{n+1}, which has a cost of one, so Cost(T) = 1.

Knuth's walk reaches v_{n+1} with probability 1/2^n, in which case C = 2^n · 1; otherwise C = 0. The expectation and variance of Knuth's estimator are therefore

E(C) = (1/2^n) · 2^n · 1 + ((2^n - 1)/2^n) · 0 = 1,
E(C²) = (1/2^n) · (2^n)² = 2^n, so Var(C) = 2^n - 1,

and the squared coefficient of variation is

CV² = Var(C) / E(C)² = 2^n - 1.

The variance, and hence the required number of samples, grows exponentially in n.
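The blow-up is easy to reproduce. On the hair brush tree, Knuth's estimator returns 2^n with probability 2^{-n} and 0 otherwise, so a simulation sketch needs only a sequence of coin flips:

```python
import random

def knuth_hairbrush(n, rng):
    """Knuth's estimator on the hair brush tree: at each of the n
    internal path nodes the walk picks the 'continue' child with
    probability 1/2; only reaching v_{n+1} (cost 1) contributes."""
    D = 1
    for _ in range(n):
        D *= 2                      # each path node has two children
        if rng.random() < 0.5:      # the walk falls off into a leaf
            return 0.0
    return float(D)                 # reached v_{n+1}: C = 2^n * 1

rng = random.Random(2)
n, runs = 8, 200_000
samples = [knuth_hairbrush(n, rng) for _ in range(runs)]
m = sum(samples) / runs
v = sum((s - m) ** 2 for s in samples) / runs
print(round(m, 3), round(v, 1))  # mean near 1, variance near 2^8 - 1 = 255
```

Almost every run returns 0; the rare run that does not returns a huge value, which is precisely the rare-event structure that defeats naive sampling.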
What can we do?

The problem is the large variance. Standard variance reduction techniques include:
- Common and antithetic random variables.
- Control variates.
- Conditional Monte Carlo.
- Stratified sampling.
- Importance sampling.
- Multilevel splitting.
To start with: Multilevel Splitting

Consider (again) the hair brush tree T.
- Define some budget B >= 1 of parallel random walks.
- Start from the root. The expected number of walks that reach the "good" vertex v_2 is B/2; call these the good trajectories.
- Split the good trajectories so that there are B of them again, and continue to the next tree level.
- Choosing B carefully (at most polynomial, in fact logarithmic, in n) allows us to reach the vertex of interest v_{n+1} with reasonably high probability:

P(the process reaches the next level) = 1 - 1/2^B,
P(the process reaches the vertex v_{n+1}) = (1 - 1/2^B)^n.

With B = log_2(n), P(the process reaches v_{n+1}) = (1 - 1/n)^n → e^{-1} as n → ∞.
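The splitting scheme on the hair brush tree can be sketched in a few lines: keep B walkers, let each survive a level with probability 1/2, and resample the survivors back up to B whenever at least one survives.

```python
import random

def splitting_reaches_top(n, B, rng):
    """Multilevel splitting on the hair brush tree: B parallel walks;
    at each level each walk survives with probability 1/2, and the
    survivors are split (resampled) back up to B walks."""
    for _ in range(n):
        survivors = sum(1 for _ in range(B) if rng.random() < 0.5)
        if survivors == 0:        # all walks fell off the path
            return False
    return True                   # some walk reached v_{n+1}

rng = random.Random(3)
n, B, trials = 64, 6, 20_000
p_hat = sum(splitting_reaches_top(n, B, rng) for _ in range(trials)) / trials
print(round(p_hat, 3))   # near (1 - 2**-B)**n = (63/64)**64, about 0.365
```

With B = log_2(64) = 6 the reach probability is already close to the e^{-1} limit, matching the formula on the slide.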
SE: the main idea

1. Define a budget B ∈ N; B is the number of parallel random walks on the tree.
2. Using these B walks, run Knuth's algorithm in parallel (there are some technical issues!).
3. If some walks die, split the remaining ones so as to continue with B walks, as in multilevel splitting.
SE example with B = 2

(Figure: seven animation steps of two parallel walks descending the example tree with node costs (v_1, 7), ..., (v_11, 10); the detailed trace is given with the algorithm below.)
SE Algorithm with B = 2

Input: A forest T_v of height h rooted at a hypernode v, and a budget B >= 1.
Output: An unbiased estimator C_SE of the total cost of the forest T_v.

1. (Initialization): Set k ← 0, D ← 1, X_0 = v and C_SE ← c(X_0)/|X_0|.
2. (Compute the successors): Let S(X_k) be the set of all successors of X_k.
3. (Terminal position?): If |S(X_k)| = 0, the algorithm stops, returning C_SE as an estimator of Cost(T_v).
4. (Advance): Choose a hypernode X_{k+1} ∈ H(X_k) at random, each choice being equally likely (thus, each choice occurs with probability 1/|H(X_k)|). Set D_k = |S(X_k)|/|X_k| and D ← D_k D, then set C_SE ← C_SE + (c(X_{k+1})/|X_{k+1}|) D. Increase k by 1 and return to Step 2.

Here c(X) for a hypernode X denotes the total cost Σ_{v ∈ X} c(v).

Example state: k = 0, D = 1, X_0 = {v_1}, C_SE = 7. (Same example tree as in the Knuth walkthrough.)
Worked trace (the algorithm statement, repeated on each original slide, is shown only once above; only the changing state is listed):
- S(X_0) = {v_2, v_3}.
- k = 1, X_1 = {v_2, v_3}, D = (2/1) · 1 = 2, C_SE = 7 + ((1 + 5)/2) · 2 = 13.
- S(X_1) = {v_4, v_5, v_6}.
- k = 2, X_2 = {v_4, v_6}, D = (3/2) · 2 = 3, C_SE = 13 + ((3 + 9)/2) · 3 = 31.
- S(X_2) = {v_7, v_8, v_9}.
- k = 3, X_3 = {v_8, v_9}, D = (3/2) · 3 = 4.5, C_SE = 31 + ((2 + 1)/2) · 4.5 = 37.75.
- S(X_3) is empty; the algorithm stops with C_SE = 37.75. (Recall that Cost(T) = 57; the estimator is unbiased, so a single run need not be close.)
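The SE recursion can be sketched in Python. One assumption here: the next hypernode X_{k+1} is drawn uniformly among all subsets of S(X_k) of size min(B, |S(X_k)|); this is consistent with the worked trace, though the paper's exact definition of H(X_k) may differ.

```python
import random
from statistics import mean

# Example tree from the slides; structure inferred from the worked trace.
children = {"v1": ["v2", "v3"], "v2": ["v4"], "v3": ["v5", "v6"],
            "v4": ["v7", "v8", "v9"], "v5": ["v10", "v11"]}
cost = {"v1": 7, "v2": 1, "v3": 5, "v4": 3, "v5": 1, "v6": 9,
        "v7": 4, "v8": 2, "v9": 1, "v10": 14, "v11": 10}

def se_estimate(root, B, rng):
    """One run of the SE estimator with budget B.  X is the current
    hypernode; D accumulates the product of the ratios |S(X_k)|/|X_k|."""
    X = [root]
    D = 1.0
    C = sum(cost[v] for v in X) / len(X)
    while True:
        S = [w for v in X for w in children.get(v, [])]
        if not S:
            return C
        D *= len(S) / len(X)                 # D_k = |S(X_k)| / |X_k|
        X = rng.sample(S, min(B, len(S)))    # uniform hypernode (assumed)
        C += sum(cost[v] for v in X) / len(X) * D

rng = random.Random(4)
est = mean(se_estimate("v1", 2, rng) for _ in range(200_000))
print(round(est, 1))  # close to Cost(T) = 57
```

With B = 1 each hypernode is a single node and the procedure reduces exactly to Knuth's estimator.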
SE Algorithm: variance for the hair brush tree with B = 2

(Figure: five animation steps of the two parallel walks descending the hair brush tree.)
SE Algorithm: variance for the hair brush tree with B = 2

The expectation and variance of the SE estimator (the original slide says "Knuth's estimator", but the quantities are for SE) are

E(C_SE) = 1 · 2 · (1/2) = 1,

where the three factors are P(the process visits v_{n+1}) = 1, the value D = 2, and the final term c(X_n)/|X_n| = 1/2. Further,

E(C_SE²) = (2 · 1/2)² = 1, so Var(C_SE) = 0,

and hence

CV² = Var(C_SE) / E(C_SE)² = 0.
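This zero-variance behaviour is easy to check directly: with B = 2 the hypernode on the hair brush tree always holds both children of the current path node (one continuing, one a dead-end leaf), so the run involves no randomness at all. A standalone sketch, with the hair brush layout described earlier (root v_1 and all nodes except v_{n+1} have cost 0):

```python
def se_hairbrush(n):
    """SE with B = 2 on the hair brush tree: every hypernode holds both
    children of the current path node, so the run is deterministic."""
    size_X, D, C = 1, 1.0, 0.0                # root v_1, cost 0
    for level in range(1, n + 1):
        D *= 2 / size_X                        # |S(X_k)| = 2 at every level
        size_X = 2                             # keep both children
        c_level = 1.0 if level == n else 0.0   # only v_{n+1} has cost 1
        C += c_level / 2 * D                   # c(X_{k+1}) / |X_{k+1}| * D
    return C

assert all(se_hairbrush(n) == 1.0 for n in range(1, 25))
print(se_hairbrush(10))  # 1.0 on every run, i.e. Var(C_SE) = 0
```

D becomes 2 after the first step and then stays there (the ratio 2/2 = 1), so the single nonzero contribution is (1/2) · 2 = 1 on every run.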
Analysis

Radislav Vaisman and Dirk P. Kroese (2014). Stochastic Enumeration Method for Counting Trees. se-tree-jacm.pdf

Theorem (Unbiased Estimator). Let T_v be a tree rooted at v. Then E(C_SE(T_v)) = Cost(T_v).
Analysis: SE's variance

Theorem (Stochastic Enumeration Algorithm Variance). Let v be a hypernode and let H(S(v)) = {w_1, ..., w_d} be its set of hyperchildren. Then

Var(C_SE(T_v)) = (|S(v)| / |v|)² (1/d) Σ_{1 <= j <= d} Var(C_SE(T_{w_j}))
               + (|S(v)| / (|v| d))² Σ_{1 <= i < j <= d} ( Cost(T_{w_i})/|w_i| - Cost(T_{w_j})/|w_j| )².
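As a sanity check, the recursion can be verified numerically against brute-force enumeration of every possible SE run on the 11-node example tree. Assumption: the hyperchildren H(S(v)) are the uniformly chosen subsets of size min(B, |S(v)|), consistent with the worked trace.

```python
from itertools import combinations

# Example tree from the slides (structure inferred from the worked trace).
children = {"v1": ("v2", "v3"), "v2": ("v4",), "v3": ("v5", "v6"),
            "v4": ("v7", "v8", "v9"), "v5": ("v10", "v11")}
cost = {"v1": 7, "v2": 1, "v3": 5, "v4": 3, "v5": 1, "v6": 9,
        "v7": 4, "v8": 2, "v9": 1, "v10": 14, "v11": 10}
B = 2

def succ(x):                       # successors of a hypernode
    return [w for v in x for w in children.get(v, ())]

def hyperchildren(x):              # assumed convention: uniform subsets
    S = succ(x)
    return list(combinations(S, min(B, len(S))))

def dist(x):
    """Exact distribution of C_SE(T_x): a list of (probability, value)."""
    base = sum(cost[v] for v in x) / len(x)
    S = succ(x)
    if not S:
        return [(1.0, base)]
    H = hyperchildren(x)
    return [(p / len(H), base + len(S) / len(x) * val)
            for w in H for p, val in dist(w)]

def forest_cost(x):                # total cost of the forest below x
    S = succ(x)
    return sum(cost[v] for v in x) + (forest_cost(S) if S else 0)

def var_theorem(x):                # the theorem's recursion
    S = succ(x)
    if not S:
        return 0.0
    H = hyperchildren(x)
    d, r = len(H), len(S) / len(x)
    mus = [forest_cost(w) / len(w) for w in H]
    spread = sum((mus[i] - mus[j]) ** 2
                 for i in range(d) for j in range(i + 1, d))
    return r ** 2 / d * sum(var_theorem(w) for w in H) + (r / d) ** 2 * spread

runs = dist(("v1",))
m = sum(p * v for p, v in runs)                  # 57: the estimator is unbiased
var_bf = sum(p * (v - m) ** 2 for p, v in runs)  # brute-force variance
print(m, var_bf, var_theorem(("v1",)))           # the two variances agree
```

On this tree both computations give a variance of about 298.6, comfortably below the Knuth-estimator variance on the same tree.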
Analysis: upper bound on SE's variance (1)

(Figure: a hypernode v with hyperchildren w_1, w_2, ..., w_d and subtree costs Cost(T_{w_1}), Cost(T_{w_2}), ..., Cost(T_{w_d}).)

Suppose that Cost(T_{w_1}) ≈ Cost(T_{w_2}) ≈ ... ≈ Cost(T_{w_d}). Then the SE Algorithm can be very efficient!
Analysis: upper bound on SE's variance (2)

Theorem. Suppose, without loss of generality, that H(S(v)) = {w_1, ..., w_d} and that there exists a constant a such that

Cost(T_{w_1})/|w_1| >= Cost(T_{w_2})/|w_2| >= ... >= Cost(T_{w_d})/|w_d| >= a · Cost(T_{w_1})/|w_1|.

Then the variance of the SE estimator satisfies

Var(C_SE(T_v)) <= ( Cost(T_v)/|v| )² (β^h - 1),

where β = (a² + 2a + 1)/(4a). That is, CV <= sqrt(β^h - 1).
Analysis: upper bound on SE's variance (3)

CV <= sqrt(β^h - 1). Is this good enough? Unfortunately, for the majority of applications β > 1...
64 Some numerical results (1)

Consider the following, very structured tree of height h. We define c(v) = 1 for all v ∈ V. The root has 3 children. The leftmost child becomes the root of a full binary tree, and the remaining children repeat the root's behavior recursively.

Figure: the recursive tree.
65 Some numerical results (2)

For Knuth's algorithm the following holds:

CV^2 ≥ (1.4^h − 1) / (16(h + 1)^2).

Nevertheless, SE performance with B = h is quite satisfactory.

Figure: The performance of Knuth's Algorithm and the SE Algorithm on counting recursive trees of different heights (CV versus h). Left panel: Knuth (numerical and analytical CV). Right panel: SE (numerical CV).
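Knuth's estimator itself is simple to sketch for this tree: follow one random root-to-leaf path and accumulate products of the degrees seen along the way. The sketch below (function names and the simulation sizes are ours, not from the slides) checks unbiasedness against an exact recursive node count:

```python
# Sketch: Knuth's estimator on the recursive tree described above.
# A "root"-type node has 3 children (leftmost starts a full binary tree);
# a "binary" node has 2 children; leaves sit at depth h.
import random

def knuth_estimate(h: int, rng: random.Random) -> int:
    # Follow a single random path; the running product of degrees gives an
    # unbiased contribution for each level: E[est] = |V|.
    est, prod = 1, 1          # count the root; prod = product of degrees so far
    kind = "root"
    for _ in range(h):
        d = 3 if kind == "root" else 2
        prod *= d
        est += prod
        # uniform child choice; child 0 of a root-type node starts a binary tree
        if kind == "root" and rng.randrange(d) == 0:
            kind = "binary"
    return est

def true_size(h: int, kind: str = "root") -> int:
    # Exact node count by recursion (exponential in h, fine for small h).
    if h == 0:
        return 1
    if kind == "binary":
        return 2 ** (h + 1) - 1   # full binary tree of height h
    return 1 + true_size(h - 1, "binary") + 2 * true_size(h - 1, "root")

rng = random.Random(0)
h = 8
ests = [knuth_estimate(h, rng) for _ in range(200_000)]
print(sum(ests) / len(ests), true_size(h))  # sample mean should be near the true size
```

Averaging many independent runs recovers the true count, but the spread of individual estimates is exactly the CV problem the slide's bound describes.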
66 Random trees

Definition (Family of random trees)
Consider a probability vector p = (p_0, ..., p_k) whose entries give the probability that a vertex has 0, ..., k successors, respectively. Define the family of random trees F^h_p as all possible trees of height at most h that are generated using p up to level h.

The family F^h_p is fully characterized by the probability vector p and the parameter h. The tree generation corresponds to a branching process.
67 Random trees

Definition (Family of random trees)
Consider a probability vector p = (p_0, ..., p_k) whose entries give the probability that a vertex has 0, ..., k successors, respectively. Define the family of random trees F^h_p as all possible trees of height at most h that are generated using p up to level h.

The family F^h_p is fully characterized by the probability vector p and the parameter h. The tree generation corresponds to a branching process.

Objective
Let T = (V, E) be a random tree from F^h_p. By assigning the cost c(v) = 1 for all v ∈ V, the cost of the tree Cost(T) is equal to |V|. Our objective is to analyse the behavior of Knuth's and SE's estimators under this setting.
68 Supercritical branching process

Consider a random tree rooted at v_0, let R_m be the total number of children (population size) at level (generation) m, and denote by M_m the total progeny at generation m. Define

µ = E(R_1) = Σ_{0 ≤ j ≤ k} j p_j and σ^2 = Var(R_1) = Σ_{0 ≤ j ≤ k} j^2 p_j − µ^2.

From [Pakes 1971],

ν_m = E(M_m) = E( Σ_{0 ≤ t ≤ m} R_t ) = (1 − µ^{m+1}) / (1 − µ),

and

ζ^2_m = Var(M_m) = σ^2 / (1 − µ)^2 · [ (1 − µ^{2m+1}) / (1 − µ) − (2m + 1) µ^m ].
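The mean formula is just a geometric sum in disguise, which gives a quick consistency check. A sketch using the Model 2 offspring vector that appears later in the talk:

```python
# Sketch: check nu_m = (1 - mu^(m+1)) / (1 - mu) against the direct
# geometric sum E(M_m) = sum_{t=0}^{m} mu^t, since E(R_t) = mu^t.
p = [0.5, 0.1, 0.2, 0.2, 0.1]                                  # offspring distribution
mu = sum(j * pj for j, pj in enumerate(p))                     # E(R_1)
sigma2 = sum(j * j * pj for j, pj in enumerate(p)) - mu ** 2   # Var(R_1)

def nu(m: int) -> float:
    return (1 - mu ** (m + 1)) / (1 - mu)

m = 30
direct = sum(mu ** t for t in range(m + 1))
print(mu, sigma2)            # mu = 1.5, sigma^2 = 2.05 (up to floating point)
print(abs(nu(m) - direct))   # should be floating-point noise only
```

Since µ > 1 here, the branching process is supercritical and ν_m grows like µ^m.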
69 Random trees: expected performance

Theorem (Knuth vs. SE)
For a random tree T^{(h)} the following holds.

1. The lower bound on Knuth's expected variance satisfies

E( Var( C(T^{(h)}) | T^{(h)} ) ) ≥ (σ^2 + µ^2) · (1 − (σ^2 + µ^2)^h) / ( µ (1 − (σ^2 + µ^2)) ).

2. For

B ≥ max( 3 h k^2 ln( 2h(σ^2 + µ^2) µ (µ − 1) / σ^2 ) / ( 2 (µ − 1)^2 ), h σ^2 / µ^2 ),

the upper bound on SE's expected variance satisfies

E( Var( C_SE(T^{(h)}) | T^{(h)} ) ) ≤ 2 h e µ^{2h} σ^2 / ( B µ (µ − 1) ).

The SE Algorithm introduces an expected variance reduction that is approximately equal to (1 + σ^2/µ^2)^h.
70 How about the performance in practice?

We expect that the variance reduction is governed by the (1 + σ^2/µ^2)^h term.
71 How about the performance in practice?

We expect that the variance reduction is governed by the (1 + σ^2/µ^2)^h term.

For Model 1 we choose p = (0.3, 0.4, 0.1, 0.2) and h = 60; here µ = 1.2 and σ^2 = 1.16. The true number of nodes is . Knuth's performance is very bad.

Table: Knuth's Algorithm (columns: Run, Ĉ, RE; Average). Table: SE Algorithm (columns: Run, Ĉ_SE, RE; Average).
72 How about the performance in practice?

We expect that the variance reduction is governed by the (1 + σ^2/µ^2)^h term.

For Model 2 we choose p = (0.5, 0.1, 0.2, 0.2, 0.1) and h = 30; here µ = 1.5 and σ^2 = 2.05. The true number of nodes is 551. Knuth's performance is very bad.

Table: Knuth's Algorithm (columns: Run, Ĉ, RE; Average). Table: SE Algorithm (columns: Run, Ĉ_SE, RE; Average).
73 How about the performance in practice?

We expect that the variance reduction is governed by the (1 + σ^2/µ^2)^h term.

For Model 3 we choose p = (0.0, 0.7, 0.2, 0.1) and h = 30; here µ = 1.4 and σ^2 = 0.44. The true number of nodes is . Knuth's performance is good.

Table: Knuth's Algorithm (columns: Run, Ĉ, RE; Average). Table: SE Algorithm (columns: Run, Ĉ_SE, RE; Average).
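The three models can be compared directly by computing µ, σ^2, and the predicted reduction factor from their offspring vectors; a small sketch:

```python
# Sketch: predicted variance-reduction factor (1 + sigma^2/mu^2)^h
# for the three models above, computed from their offspring vectors.
models = [
    ("Model 1", [0.3, 0.4, 0.1, 0.2], 60),
    ("Model 2", [0.5, 0.1, 0.2, 0.2, 0.1], 30),
    ("Model 3", [0.0, 0.7, 0.2, 0.1], 30),
]
for name, p, h in models:
    mu = sum(j * pj for j, pj in enumerate(p))
    sigma2 = sum(j * j * pj for j, pj in enumerate(p)) - mu ** 2
    factor = (1 + sigma2 / mu ** 2) ** h
    print(f"{name}: mu={mu:.2f} sigma2={sigma2:.2f} reduction ~ {factor:.3g}")
```

The factor is astronomically large for Models 1 and 2 but only moderate for Model 3, which matches the observation that plain Knuth is already adequate there.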
74 Fully Polynomial Randomized Approximation Scheme

A randomized approximation scheme for Cost(T) is a randomized algorithm which, when given an input tree T and a real number ε ∈ (0, 1), outputs a random variable K such that

P( (1 − ε) Cost(T) ≤ K ≤ (1 + ε) Cost(T) ) ≥ 3/4.

Such a scheme is said to be fully polynomial if its execution time is bounded by some polynomial in the tree height and ε^{−1}. If these conditions hold, the algorithm is called a fully polynomial randomized approximation scheme, or FPRAS.
75 Random trees: FPRAS

Theorem (Almost sure FPRAS)
Let F^h_p be a family of random trees such that for T ∈ F^h_p,

lim_{h→∞} P( Cost(T) < ν_h / P(h) ) = 0,

where P(h) > 0 is some polynomial function of h and ν_h = (1 − µ^{h+1}) / (1 − µ) is the expected number of nodes. In other words, for most instances (almost surely), the actual number of nodes is not much smaller than its expectation.

Then, under the above condition, and provided that µ > 1 + ε for some ε > 0, the SE algorithm is an FPRAS for most of the instances T ∈ F^h_p; that is,

CV^2 = Var(C_SE(T) | T) / ( E(C_SE(T) | T) )^2

is bounded by a polynomial in h with high probability.
76 SE in practice: Network Reliability and Sensitivity

Terminal network reliability problems appear in many real-life applications, such as transportation grids, social and computer networks, communication systems, etc. This problem belongs to the #P complexity class.

Figure: an s-t network.
77 The Spectra

Definition (Spectra, not very formal)
The probability F(k) of finding a failure set of size k (0 ≤ k ≤ number of edges) is called the Spectra.

How many failure sets of size 2 are there?

Figure: an s-t network with 10 edges.
78 The Spectra

Definition (Spectra, not very formal)
The probability F(k) of finding a failure set of size k (0 ≤ k ≤ number of edges) is called the Spectra.

How many failure sets of size 2 are there? F(2) = 2 / C(10, 2).

Figure: an s-t network with 10 edges.
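The counting behind such a value can be sketched by brute force on a small graph. This toy uses a 4-edge network of our own (two parallel s-t paths), not the slide's 10-edge example:

```python
# Sketch: brute-force the informal Spectra
#   F(k) = (# failure sets of size k) / C(|E|, k)
# on a tiny graph: two parallel s-t paths, s-a-t and s-b-t.
from itertools import combinations
from math import comb

edges = [("s", "a"), ("a", "t"), ("s", "b"), ("b", "t")]

def connected(active, s="s", t="t"):
    # depth-first search over the surviving edges
    adj = {}
    for u, v in active:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    seen, stack = {s}, [s]
    while stack:
        u = stack.pop()
        for w in adj.get(u, []):
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return t in seen

def F(k):
    fail = sum(
        1
        for dead in combinations(edges, k)
        if not connected([e for e in edges if e not in dead])
    )
    return fail / comb(len(edges), k)

print([F(k) for k in range(len(edges) + 1)])
```

Here F(2) = 4/6: a size-2 set disconnects s from t exactly when it takes one edge from each path. Full enumeration like this is only feasible for tiny graphs, which is the motivation for the sampling approaches that follow.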
79 The Spectra: why do we care?

As soon as the Spectra is available we get the following benefits.

Reliability: calculating the network reliability Ψ(p) in linear time:

Ψ(p) = Σ_{k=0}^{|E|} C(|E|, k) F(k) p^k (1 − p)^{|E|−k}.

Sensitivity: Birnbaum Importance Measure: BIM_j = ∂Ψ(p)/∂p_j.

Sensitivity: Joint Reliability Importance: JRI(ij) = ∂^2 Ψ(p)/(∂p_i ∂p_j).
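Once the Spectra vector F is known, evaluating the formula above is a single pass over k. A minimal sketch, following the slide's convention and using illustrative (made-up) F values:

```python
# Sketch: evaluate Psi(p) = sum_k C(E, k) F(k) p^k (1-p)^(E-k)
# from a Spectra vector F indexed k = 0..E.
from math import comb

def psi(F, p):
    E = len(F) - 1
    return sum(comb(E, k) * F[k] * p ** k * (1 - p) ** (E - k) for k in range(E + 1))

# Toy Spectra for a 10-edge network; only F(2) comes from the slide's example,
# the remaining values are invented for illustration.
F = [0.0, 0.0, 2 / comb(10, 2), 0.1, 0.3, 0.6, 0.85, 1.0, 1.0, 1.0, 1.0]
print(psi(F, 0.05))

# Sanity check: if every edge subset is a failure set (F == 1 everywhere),
# the sum collapses to the binomial total, which is 1 for any p.
print(psi([1.0] * 11, 0.3))   # approx 1.0
```

The BIM and JRI quantities can then be obtained from Ψ by (numerical or symbolic) differentiation in the p_j's.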
80 Estimating the Spectra

Unfortunately, the Spectra is rarely available analytically. Crude Monte Carlo is not applicable (a rare-event problem). The state-of-the-art Permutation Monte Carlo (PMC) is better but still fails under rare-event settings.

Our suggestion: the SE algorithm (a quite straightforward extension of PMC).

Radislav Vaisman, Dirk P. Kroese and Ilya B. Gertsbakh (2014) Improved Sampling Plans for Combinatorial Invariants of Coherent Systems. IEEE Transactions on Reliability (submitted, minor revision). papers/se-spectra-ieee.pdf
81 Estimating the Spectra with SE: an example (1)

The hypercube graph H_n is a regular graph with 2^n vertices and n·2^{n−1} edges. To construct a hypercube graph, label each of the 2^n vertices with an n-bit binary number and connect two vertices by an edge whenever the Hamming distance of their labels is 1.
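The construction can be sketched exactly as described; flipping one bit of a label is precisely a Hamming-distance-1 move:

```python
# Sketch: build H_n with integer bit-labels; edges join labels at
# Hamming distance 1. Check |V| = 2^n and |E| = n * 2^(n-1).
def hypercube(n):
    vertices = list(range(2 ** n))
    edges = [
        (v, v ^ (1 << i))        # flipping bit i changes the label by one bit
        for v in vertices
        for i in range(n)
        if v < v ^ (1 << i)      # keep each undirected edge exactly once
    ]
    return vertices, edges

V, E = hypercube(5)
print(len(V), len(E))   # 32 80
```

For H_5 this gives the 32 vertices and 80 edges used in the example on the next slides.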
82 Estimating the Spectra with SE: an example (2)

We consider H_5 with two terminals K = {0, 24}; that is, (00000, 11000) in binary representation.
83 Estimating the Spectra with SE: an example (2)

We consider H_5 with two terminals K = {0, 24}; that is, (00000, 11000) in binary representation.

Using a full enumeration procedure we found that the first non-zero value is F(4), and it is equal to
84 Estimating the Spectra with SE: an example (2)

We consider H_5 with two terminals K = {0, 24}; that is, (00000, 11000) in binary representation.

Using a full enumeration procedure we found that the first non-zero value is F(4), and it is equal to

For this relatively small graph the state-of-the-art Permutation Monte Carlo (PMC) algorithm needs a huge sample size. Using N = 10^9 samples takes about 25 hours on my Core i5 laptop, and the related error is about 60%. Why? The minimal value that must be estimated by PMC is a rare event!
85 Estimating the Spectra with SE: an example (2)

We consider H_5 with two terminals K = {0, 24}; that is, (00000, 11000) in binary representation.

Using a full enumeration procedure we found that the first non-zero value is F(4), and it is equal to

For this relatively small graph the state-of-the-art Permutation Monte Carlo (PMC) algorithm needs a huge sample size. Using N = 10^9 samples takes about 25 hours on my Core i5 laptop, and the related error is about 60%. Why? The minimal value that must be estimated by PMC is a rare event!

SE delivers very reliable estimates in 28 seconds with budget B = 10 and N = , and the related error is about 1%.
86 What next?

(Hard) Finding more classes of trees that can be efficiently handled by SE; that is, showing proven performance guarantees like those for the random-tree case.

(Not very hard) Adaptation of SE to the estimation of the general expression E(S(x)).

(Easy) Extending different Sequential Monte Carlo algorithms with the SE mechanism (splitting).

(???) Adaptation of SE for optimization.

(???) Introducing Importance Sampling into the SE estimator.
87 Thank you
Fall 2007 / Page 1 Final exam of ECE 457 Applied Artificial Intelligence for the Fall term 2007. Don t panic. Be sure to write your name and student ID number on every page of the exam. The only materials
More informationComputational Logic. Davide Martinenghi. Spring Free University of Bozen-Bolzano. Computational Logic Davide Martinenghi (1/30)
Computational Logic Davide Martinenghi Free University of Bozen-Bolzano Spring 2010 Computational Logic Davide Martinenghi (1/30) Propositional Logic - sequent calculus To overcome the problems of natural
More informationACO Comprehensive Exam October 14 and 15, 2013
1. Computability, Complexity and Algorithms (a) Let G be the complete graph on n vertices, and let c : V (G) V (G) [0, ) be a symmetric cost function. Consider the following closest point heuristic for
More informationFinal exam of ECE 457 Applied Artificial Intelligence for the Spring term 2007.
Spring 2007 / Page 1 Final exam of ECE 457 Applied Artificial Intelligence for the Spring term 2007. Don t panic. Be sure to write your name and student ID number on every page of the exam. The only materials
More informationSTA 4273H: Statistical Machine Learning
STA 4273H: Statistical Machine Learning Russ Salakhutdinov Department of Computer Science! Department of Statistical Sciences! rsalakhu@cs.toronto.edu! h0p://www.cs.utoronto.ca/~rsalakhu/ Lecture 7 Approximate
More informationThe Particle Filter. PD Dr. Rudolph Triebel Computer Vision Group. Machine Learning for Computer Vision
The Particle Filter Non-parametric implementation of Bayes filter Represents the belief (posterior) random state samples. by a set of This representation is approximate. Can represent distributions that
More informationCovering Linear Orders with Posets
Covering Linear Orders with Posets Proceso L. Fernandez, Lenwood S. Heath, Naren Ramakrishnan, and John Paul C. Vergara Department of Information Systems and Computer Science, Ateneo de Manila University,
More informationCS 4100 // artificial intelligence. Recap/midterm review!
CS 4100 // artificial intelligence instructor: byron wallace Recap/midterm review! Attribution: many of these slides are modified versions of those distributed with the UC Berkeley CS188 materials Thanks
More informationData Mining Classification: Basic Concepts and Techniques. Lecture Notes for Chapter 3. Introduction to Data Mining, 2nd Edition
Data Mining Classification: Basic Concepts and Techniques Lecture Notes for Chapter 3 by Tan, Steinbach, Karpatne, Kumar 1 Classification: Definition Given a collection of records (training set ) Each
More informationCSL302/612 Artificial Intelligence End-Semester Exam 120 Minutes
CSL302/612 Artificial Intelligence End-Semester Exam 120 Minutes Name: Roll Number: Please read the following instructions carefully Ø Calculators are allowed. However, laptops or mobile phones are not
More informationFinding Consensus Strings With Small Length Difference Between Input and Solution Strings
Finding Consensus Strings With Small Length Difference Between Input and Solution Strings Markus L. Schmid Trier University, Fachbereich IV Abteilung Informatikwissenschaften, D-54286 Trier, Germany, MSchmid@uni-trier.de
More informationData Mining. CS57300 Purdue University. Bruno Ribeiro. February 8, 2018
Data Mining CS57300 Purdue University Bruno Ribeiro February 8, 2018 Decision trees Why Trees? interpretable/intuitive, popular in medical applications because they mimic the way a doctor thinks model
More informationPerformance Guarantees for Information Theoretic Active Inference
Performance Guarantees for Information Theoretic Active Inference Jason L. Williams, John W. Fisher III and Alan S. Willsky Laboratory for Information and Decision Systems and Computer Science and Artificial
More informationLecture 1 : Probabilistic Method
IITM-CS6845: Theory Jan 04, 01 Lecturer: N.S.Narayanaswamy Lecture 1 : Probabilistic Method Scribe: R.Krithika The probabilistic method is a technique to deal with combinatorial problems by introducing
More informationPhylogenetics: Parsimony and Likelihood. COMP Spring 2016 Luay Nakhleh, Rice University
Phylogenetics: Parsimony and Likelihood COMP 571 - Spring 2016 Luay Nakhleh, Rice University The Problem Input: Multiple alignment of a set S of sequences Output: Tree T leaf-labeled with S Assumptions
More informationDescription Logics: an Introductory Course on a Nice Family of Logics. Day 2: Tableau Algorithms. Uli Sattler
Description Logics: an Introductory Course on a Nice Family of Logics Day 2: Tableau Algorithms Uli Sattler 1 Warm up Which of the following subsumptions hold? r some (A and B) is subsumed by r some A
More information