Dynamic Programming: DP paradigm, discrete optimisation, Viterbi algorithm, 0/1 Knapsack. Georgy Gimel'farb
1 Dynamic Programming. Georgy Gimel'farb (with basic contributions by Michael J. Dinneen). COMPSCI 69 Computational Science
2 Outline
- Dynamic Programming (DP) paradigm
- Discrete optimisation with DP
- Viterbi algorithm
- Knapsack problem: dynamic programming solution
Learning outcomes:
- Understand DP and the problems it can solve
- Be familiar with the edit-distance problem and its DP solution
- Be familiar with the Viterbi algorithm
3 Main Algorithmic Paradigms
- Greedy: building up a solution incrementally, by optimising some local criterion at each step.
- Divide-and-conquer: breaking up a problem into separate subproblems, solving each subproblem independently, and combining the solutions to the subproblems into a solution to the original problem.
- Dynamic programming (DP): breaking up a problem into a series of overlapping subproblems, and building up solutions to larger and larger subproblems.
Unlike the divide-and-conquer paradigm, DP typically solves all possible subproblems rather than a small portion of them. DP solves each subproblem only once, stores the result, and reuses it later, dramatically reducing the amount of computation when the number of repeating subproblems is exponentially large.
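The payoff from storing subproblem results can be sketched with the classic Fibonacci illustration (an example not from the slides): plain recursion re-solves an exponentially growing set of overlapping subproblems, while the memoised DP version solves each subproblem exactly once.

```python
from functools import lru_cache

def fib_naive(n):
    """Plain recursion: re-solves the same subproblems exponentially often."""
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_dp(n):
    """DP (memoisation): each subproblem is solved once and its result stored."""
    return n if n < 2 else fib_dp(n - 1) + fib_dp(n - 2)

print(fib_dp(90))  # returns instantly; fib_naive(90) would take astronomically long
```

The same recurrence drives both functions; only the reuse of stored results distinguishes the DP version.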
4 History of Dynamic Programming
Etymology: Richard E. Bellman [1920-1984], the famous applied mathematician (USA) who pioneered the systematic study of dynamic programming in the 1950s while working at the RAND Corporation.
Dynamic programming = planning over time. The Secretary of Defense was hostile to mathematical research, so Bellman sought an impressive name to avoid confrontation: "It's impossible to use dynamic in a pejorative sense"; "something not even a Congressman could object to".
Reference: Bellman, R. E.: Eye of the Hurricane, An Autobiography.
5 Bellman's Principle of Optimality
R. E. Bellman: Dynamic Programming. Princeton Univ. Press, 1957, Ch. III: "An optimal policy has the property that whatever the initial state and initial decision are, the remaining decisions must constitute an optimal policy with regard to the state resulting from the first decision."
[Figure: a trajectory from an initial state s through intermediate states s_i over time steps t_i; the tail of the policy that is optimal w.r.t. s is itself optimal w.r.t. each intermediate state s_i, whereas a tail that is not optimal w.r.t. s_i cannot be part of the optimal policy.]
The optimal policy w.r.t. s, after any of its states s_i, cannot differ from the optimal policy w.r.t. the state s_i. See the Bellman equation below.
6-12 Simple Example: Find the Cheapest Route
[Figures over several slides: a stage-by-stage routing graph over states and time steps t; the edge costs were lost in transcription. The greedy algorithm picks the locally cheapest edge at each step and ends with total cost c = 7. DP steps 1-4 instead compute, for every state at every stage, the cheapest cost of reaching it; backtracking from the cheapest final state then recovers the overall cheapest route, which beats the greedy one.]
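The stage-by-stage computation in this example can be sketched as follows (an illustration with hypothetical edge costs, since the slide's numbers were lost in transcription): at each stage, compute the cheapest cost of reaching every state, keep a backward pointer, and finally backtrack from the cheapest final state.

```python
def cheapest_route(stage_costs):
    """stage_costs[k][i][j] = cost of the edge from state i at stage k
    to state j at stage k+1. Returns (min_cost, route as state indices)."""
    n_states = len(stage_costs[0])
    phi = [0.0] * n_states        # cheapest cost of reaching each state so far
    back = []                     # backward pointers, one list per stage
    for costs in stage_costs:
        nxt, ptr = [], []
        for j in range(len(costs[0])):
            best_i = min(range(len(costs)), key=lambda i: phi[i] + costs[i][j])
            nxt.append(phi[best_i] + costs[best_i][j])
            ptr.append(best_i)
        phi, back = nxt, back + [ptr]
    # Backtrack from the cheapest final state.
    j = min(range(len(phi)), key=phi.__getitem__)
    route = [j]
    for ptr in reversed(back):
        j = ptr[j]
        route.append(j)
    return min(phi), route[::-1]
```

Unlike the greedy strategy, every state at every stage keeps its own cheapest-so-far cost, so no globally cheap route is discarded early.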
13 Discrete Optimisation with DP
Problem: (s_1*, ..., s_n*) = arg min_{s_i ∈ S_i: i = 1, ..., n} F(s_1, ..., s_n), where the objective function F(s_1, ..., s_n) depends on states s_i, i = 1, ..., n, each having a finite set S_i of values.
Frequently, an objective function amenable to DP is additive:
F(s_1, s_2, ..., s_n) = ψ_1(s_1) + Σ_{i=2}^{n} φ_i(s_{i-1}, s_i)
Generally, each state s_i takes only a subset S_i(s_{i+1}) ⊆ S_i of values, which depends on the state s_{i+1} ∈ S_{i+1}.
Overlapping subproblems are solved for all the states s_i ∈ S_i at each step i, sequentially for i = 1, ..., n.
14 Computing the DP Solution to the Optimisation Problem Above
Bellman equation: for i = 2, ..., n and each s_i ∈ S_i, with Φ_1(s_1) = ψ_1(s_1),
Φ_i(s_i) = min_{s_{i-1} ∈ S_{i-1}(s_i)} {Φ_{i-1}(s_{i-1}) + φ_i(s_{i-1}, s_i)}
B_i(s_i) = arg min_{s_{i-1} ∈ S_{i-1}(s_i)} {Φ_{i-1}(s_{i-1}) + φ_i(s_{i-1}, s_i)}
Φ_i(s_i) is the cost of a candidate decision for state s_i at step i, and B_i(s_i) is a backward pointer for reconstructing the candidate sequence of states s_1, ..., s_{i-1}, s_i producing Φ_i(s_i).
Backtracking to reconstruct the solution:
min_{s_1,...,s_n} F(s_1, ..., s_n) = min_{s_n ∈ S_n} Φ_n(s_n)
s_n* = arg min_{s_n ∈ S_n} Φ_n(s_n)
s_{i-1}* = B_i(s_i*) for i = n, ..., 2
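A direct transcription of the Bellman equation plus backtracking might look like this (a sketch; for simplicity every transition is allowed, so the state constraints S_{i-1}(s_i) would be modelled by having phi return infinite cost for forbidden transitions):

```python
def dp_minimise(states, psi, phi):
    """states[i] = list of admissible values of the i-th state (i = 0..n-1);
    psi(s) = cost of the initial state; phi(i, s_prev, s) = transition cost.
    Returns (min F, an optimal state sequence)."""
    Phi = {s: psi(s) for s in states[0]}   # Bellman values for the first state
    B = []                                  # backward pointers per step
    for i in range(1, len(states)):
        nxt, ptr = {}, {}
        for s in states[i]:
            prev = min(Phi, key=lambda t: Phi[t] + phi(i, t, s))
            nxt[s] = Phi[prev] + phi(i, prev, s)
            ptr[s] = prev
        Phi, B = nxt, B + [ptr]
    # Backtrack from the best final state through the stored pointers.
    s = min(Phi, key=Phi.get)
    seq = [s]
    for ptr in reversed(B):
        s = ptr[s]
        seq.append(s)
    return min(Phi.values()), seq[::-1]
```

Each step solves all subproblems Φ_i(s_i) for the current state set before moving on, exactly as the slide prescribes.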
15 Computing the DP Solution to the Cheapest-Route Example
[Table: for each step i, the set of states S_i, the state constraints S_{i-1}(s_i), and the cost functions ψ_1(s_1) and φ_i(s_{i-1}, s_i); the numeric values were lost in transcription.]
Step i = 1: Φ_1(s_1) = ψ_1(s_1) for the single start state.
Step i = 2: Φ_2(s_2) = Φ_1(s_1) + φ_2(s_1, s_2) for each s_2 ∈ S_2, with the backward pointers B_2(s_2) recorded.
16-17 Computing the DP Solution to the Cheapest-Route Example (continued)
Steps i = 3 and i = 4: for each state s_i,
Φ_i(s_i) = min_{s_{i-1} ∈ S_{i-1}(s_i)} {Φ_{i-1}(s_{i-1}) + φ_i(s_{i-1}, s_i)},
with the backward pointer B_i(s_i) recorded for each minimiser. (The numeric workings were lost in transcription; at the final step two of the candidate costs equal 6.)
18 Solution: min_{s_1,...,s_n} F(s_1, ..., s_n) = min_{s_n ∈ S_n} Φ_n(s_n) = min{6, 6, ...}; the ending optimal state is s_n* = arg min_{s_n ∈ S_n} Φ_n(s_n), and the preceding states are backtracked as s_{i-1}* = B_i(s_i*).
[The accompanying trellis diagrams were lost in transcription.]
19 DP: Applications and Algorithms
Areas: control theory; signal processing; information theory; operations research; bioinformatics; computer science (theory, AI, graphics, image analysis, systems, ...).
Algorithms: Viterbi (error-correction coding, hidden Markov models); Unix diff (comparing two files); Smith-Waterman (gene sequence alignment); Bellman-Ford (shortest-path routing in networks); Cocke-Kasami-Younger (parsing context-free grammars).
20 Levenshtein, or Edit Distance: Minimum Cost of Editing
Applications: Unix diff, speech recognition, computational biology.
Different penalties: δ > 0 for an insertion or deletion, and α_xy for a mismatch between two characters x and y, with α_xx = 0.
[Figure: two alignments of 'claim' and 'lime'. In the first, l-l, i-i, m-m match (α_ll = α_ii = α_mm = 0), so cost(claim → lime) counts only the gap penalties. In the second, c is paired with l, so the cost includes the mismatch penalty α_cl plus the gap penalties. The numeric values were lost in transcription.]
21 Two Strings C_1 and C_2: Levenshtein, or Edit Distance
The minimum number D(C_1, C_2) of edit operations to transform C_1 into C_2: insertion (weight 1), deletion (weight 1), or character substitution (0 if the characters are the same, otherwise α > 0). E.g., with unit weights:
D(claim, lime) = 3: claim → laim (delete c) → lim (delete a) → lime (insert e).
[Figure: the corresponding alignment of 'claim' and 'lime', marking the deleted, substituted, and inserted characters.]
22 Levenshtein, or Edit Distance: DP Computation
Strings: x = x_1 x_2 ... x_m and y = y_1 y_2 ... y_n.
Prefixes: x[1..i] = x_1 ... x_i, 0 ≤ i ≤ m, and y[1..j] = y_1 ... y_j, 0 ≤ j ≤ n.
Distance: d(i, j) = D(x[1..i], y[1..j]).
Recurrent computation:
d(i, j) = 0 if i = 0, j = 0
d(i, j) = d(i-1, 0) + δ if i > 0, j = 0
d(i, j) = d(0, j-1) + δ if i = 0, j > 0
d(i, j) = min{ d(i-1, j) + δ, d(i, j-1) + δ, d(i-1, j-1) + α_{x_i y_j} } otherwise
with α_xx = 0.
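The recurrence translates directly into an O(mn) table computation; a sketch with unit gap penalty δ = 1 and 0/1 mismatch costs:

```python
def edit_distance(x, y, gap=1, alpha=lambda a, b: 0 if a == b else 1):
    """DP for the edit-distance recurrence: d[i][j] = D(x[:i], y[:j])."""
    m, n = len(x), len(y)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = i * gap                              # delete all of x[:i]
    for j in range(1, n + 1):
        d[0][j] = j * gap                              # insert all of y[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(d[i - 1][j] + gap,           # delete x[i-1]
                          d[i][j - 1] + gap,           # insert y[j-1]
                          d[i - 1][j - 1] + alpha(x[i - 1], y[j - 1]))
    return d[m][n]

print(edit_distance("claim", "lime"))  # 3, matching the slides' example
```

The `alpha` parameter admits arbitrary per-pair mismatch penalties α_xy, as in the general formulation above.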
23 Levenshtein, or Edit Distance: DP Computation
[Figure: the table of mismatch costs α_{x_i y_j} between the characters of 'claim' (columns c, l, a, i, m) and 'lime' (rows l, i, m, e).]
24-53 Levenshtein, or Edit Distance: DP Computation
[Animation over the following slides: the DP table d(i, j) for x = 'claim', y = 'lime' is filled in cell by cell; the numeric entries were lost in transcription. The completed table yields D(claim, lime) in its bottom-right cell.]
55 Sequence Alignment
Given two strings x = x_1 x_2 ... x_m and y = y_1 y_2 ... y_n, find their alignment of minimum cost.
Alignment M: a set of ordered pairs (x_i, y_j) such that each item occurs in at most one pair and there are no crossings. Pairs (x_i, y_j) and (x_i', y_j') cross if i < i' but j > j'.
cost(M) = Σ_{(x_i, y_j) ∈ M} α_{x_i y_j} (mismatches) + Σ_{i: x_i unmatched} δ + Σ_{j: y_j unmatched} δ (gaps)
Example 1: x = claim, y = lime; M = {x_2 y_1, x_4 y_2, x_5 y_3} (l-l, i-i, m-m); the cost is the gap penalties for the unmatched characters.
Example 2: x = ctaccg, y = tacatg; M = {x_2 y_1, x_3 y_2, x_4 y_3, x_5 y_4, x_6 y_6}; cost = α_CA plus the gap penalties for the unmatched x_1 and y_5.
56 Sequence Alignment: Algorithm

Sequence-Alignment(x_1 ... x_m, y_1 ... y_n, δ, α) {
    for i = 0 to m: D[i, 0] = i δ
    for j = 0 to n: D[0, j] = j δ
    for i = 1 to m:
        for j = 1 to n:
            D[i, j] = min( α[x_i, y_j] + D[i-1, j-1],
                           δ + D[i-1, j],
                           δ + D[i, j-1] )
    return D[m, n]
}

Time and space complexity: Θ(mn). English words: m, n are small. Computational biology: m and n in the hundreds of thousands (billions of operations are fine, but a multi-gigabyte array?).
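To recover the alignment M itself, not just its cost, the filled table can be backtracked from D[m, n]; a sketch (unit δ and 0/1 mismatch costs α by default):

```python
def align(x, y, gap=1, alpha=lambda a, b: 0 if a == b else 1):
    """Sequence-Alignment DP plus backtracking to recover the pairs M."""
    m, n = len(x), len(y)
    D = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        D[i][0] = i * gap
    for j in range(n + 1):
        D[0][j] = j * gap
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            D[i][j] = min(alpha(x[i - 1], y[j - 1]) + D[i - 1][j - 1],
                          gap + D[i - 1][j],
                          gap + D[i][j - 1])
    # Backtrack: at each cell, re-test which of the three cases produced it.
    M, i, j = [], m, n
    while i > 0 and j > 0:
        if D[i][j] == alpha(x[i - 1], y[j - 1]) + D[i - 1][j - 1]:
            M.append((i, j))          # (x_i, y_j) paired (1-based indices)
            i, j = i - 1, j - 1
        elif D[i][j] == gap + D[i - 1][j]:
            i -= 1                    # x_i left unmatched (gap)
        else:
            j -= 1                    # y_j left unmatched (gap)
    return D[m][n], M[::-1]
```

On the slides' Example 1 this recovers M = {x_2 y_1, x_4 y_2, x_5 y_3} at cost 3.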
57 Viterbi Algorithm: Probabilistic Model
DP search for the most likely sequence of unobserved (hidden) states given a sequence of observations (signals).
- Each signal depends on exactly one corresponding hidden state.
- The hidden states are produced by a first-order Markov model: a set of hidden states S = {s_1, ..., s_n} with transition probabilities P_s(s_i | s_j), i, j ∈ {1, ..., n}.
- Given the states, the signals are statistically independent: a set of signals V = {v_1, ..., v_m} with observation probabilities P_o(v_j | s_i), v_j ∈ V, s_i ∈ S.
[Figure: a chain of hidden states s[1] s[2] s[3] s[4] emitting signals v[1] v[2] v[3] v[4].]
Log-likelihood of a sequence of states s = s[1] s[2] ... s[K], given a sequence of signals v = v[1] v[2] ... v[K]:
L(s | v) = log Pr(s | v); Pr(s | v) ∝ Pr(s, v) = Pr_s(s) Pr_o(v | s)
58 Maximum (Log-)Likelihood
s* = arg max_{s ∈ S^K} L(s | v)
- s = s[1] s[2] ... s[K]: a hidden (unobserved) Markov chain of states at steps k = 1, ..., K with joint probability Pr_s(s) = π(s[1]) Π_{k=2}^{K} P_s(s[k] | s[k-1]), where π(s) is the prior probability of state s ∈ S at step k = 1, and P_s(s | s') is the probability of transition from state s' to the next one, s.
- v = v[1] v[2] ... v[K]: an observed sequence of conditionally independent signals with probability Pr(v | s) = Π_{k=1}^{K} P_o(v[k] | s[k]), where P_o(v | s) is the probability of observing v ∈ V in state s ∈ S at step k.
Therefore
s* = arg max_{s ∈ S^K} Σ_{k=1}^{K} ( ψ_k(s[k]) + φ(s[k] | s[k-1]) )
where
ψ_k(s) = log π(s) + log P_o(v[1] | s) for k = 1, and ψ_k(s) = log P_o(v[k] | s) for k > 1 (s ∈ S);
φ(s | s') = 0 for k = 1, and φ(s | s') = log P_s(s | s') for k > 1 (s, s' ∈ S).
59 Probabilistic State Transitions and Signals for States
Example for S = {a, b, c}, V = {A, B, C}.
[Figure, left: a non-deterministic (probabilistic) finite automaton (NFA) over states a, b, c for the state transitions at each step k, with probabilities P_s(s | s'), Σ_{s ∈ S} P_s(s | s') = 1 for all s' ∈ S. Right: a probabilistic signal generator at each step k, emitting A, B, or C with probabilities P_o(v | s), Σ_{v ∈ V} P_o(v | s) = 1 for all s ∈ S.]
60 Graphical Model for S = {a, b, c} and Given Signals v
[Figure: the trellis for the observed signals v[1] = A, v[2] = A, v[3] = B, v[4] = A over steps k = 1, ..., 4, with node scores ψ_k(s) for each state a, b, c and edge scores φ(s | s') on the transitions between consecutive steps.]
61 Maximum (Log-)Likelihood via Dynamic Programming
Viterbi DP algorithm:
- Initialisation: k = 1; Φ_1(s[1]) = ψ_1(s[1]) for all s[1] ∈ S.
- Forward pass, for k = 2, ..., K and all s[k] ∈ S:
Φ_k(s[k]) = ψ_k(s[k]) + max_{s[k-1] ∈ S} { φ(s[k] | s[k-1]) + Φ_{k-1}(s[k-1]) }
B_k(s[k]) = arg max_{s[k-1] ∈ S} { φ(s[k] | s[k-1]) + Φ_{k-1}(s[k-1]) }
- k = K: the maximum log-likelihood state s*[K] = arg max_{s[K] ∈ S} Φ_K(s[K]).
- Backward pass, for k = K-1, ..., 1: s*[k] = B_{k+1}(s*[k+1]).
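The forward and backward passes translate into code as follows (a sketch; the prior, transition, and observation probabilities used in the test are hypothetical, since the slides' numeric tables were lost in transcription):

```python
def viterbi(signals, states, log_prior, log_trans, log_obs):
    """Viterbi DP: forward pass over Phi_k plus backtracking via B_k.
    log_prior[s], log_trans[(s_prev, s)], log_obs[(v, s)] are log-probabilities.
    Returns (most likely state sequence, its log-likelihood score)."""
    # Initialisation (k = 1): Phi_1(s) = log pi(s) + log P_o(v[1] | s).
    Phi = {s: log_prior[s] + log_obs[(signals[0], s)] for s in states}
    B = []                          # backward pointers B_k, one dict per step
    for v in signals[1:]:
        nxt, ptr = {}, {}
        for s in states:
            prev = max(states, key=lambda t: log_trans[(t, s)] + Phi[t])
            nxt[s] = log_obs[(v, s)] + log_trans[(prev, s)] + Phi[prev]
            ptr[s] = prev
        Phi, B = nxt, B + [ptr]
    # k = K: the maximum log-likelihood ending state, then the backward pass.
    s = max(Phi, key=Phi.get)
    path = [s]
    for ptr in reversed(B):
        s = ptr[s]
        path.append(s)
    return path[::-1], max(Phi.values())
```

With self-favouring transitions and signal A most likely in state a, the observation sequence AABAB decodes to the all-a path, mirroring the slides' worked example.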
62-68 Example: S = {a, b}, V = {A, B}, v = AABAB
[Animation over several slides: the trellis for the five observed signals, with node scores ψ_k(s) and transition scores φ(s | s') (the numeric values were lost in transcription). Step k = 1 initialises Φ_1; steps k = 2, ..., 5 run the forward pass; backtracking then yields the most likely state sequence s* = aaaaa.]
69 Knapsack Problem
Maximise Σ_{i=1}^{n} x_i v_i subject to Σ_{i=1}^{n} x_i s_i ≤ S; x_i ∈ {0, 1}; s_i, v_i, S > 0.
- DP solution of pseudo-polynomial time complexity O(nS).
- No contradiction to the NP-completeness of the problem: the magnitude S is not polynomial in the length of the problem's input; the length of S is proportional to its number of bits, i.e. log S.
- Space complexity is O(nS), or O(S) if row i of the table is rewritten in place over row i-1 for each i.
μ(i, s): the maximum value that can be obtained by placing up to i items into a knapsack of size less than or equal to s. The DP solution uses a table μ(i, s), or an array μ(s), to store previous computations.
70 Knapsack Problem: DP Solution
Recursive definition of μ(i, s): μ(0, s) = 0; μ(i, 0) = 0;
μ(i, s) = μ(i-1, s) if s_i > s
μ(i, s) = max{ μ(i-1, s), μ(i-1, s - s_i) + v_i } if s_i ≤ s
[Figure: row i of the table, s = 0, ..., S, is computed from row i-1: the entry μ(i, ŝ) looks at μ(i-1, ŝ) and μ(i-1, ŝ - s_i).]
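The recurrence, plus a backtracking pass to recover which items were taken, can be sketched as follows (the item values and sizes in the example comment are illustrative):

```python
def knapsack(values, sizes, S):
    """0/1 knapsack via the recurrence above: mu[i][s] = best value using
    the first i items with capacity s. Returns (best value, chosen items)."""
    n = len(values)
    mu = [[0] * (S + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for s in range(S + 1):
            mu[i][s] = mu[i - 1][s]                    # skip item i
            if sizes[i - 1] <= s:                      # or take item i
                mu[i][s] = max(mu[i][s],
                               mu[i - 1][s - sizes[i - 1]] + values[i - 1])
    # Backtrack: item i was taken iff it changed the optimum for capacity s.
    chosen, s = [], S
    for i in range(n, 0, -1):
        if mu[i][s] != mu[i - 1][s]:
            chosen.append(i)                           # 1-based item index
            s -= sizes[i - 1]
    return mu[n][S], chosen[::-1]

# e.g. knapsack([1, 6, 18, 22, 28], [1, 2, 5, 6, 7], 11) -> (40, [3, 4])
```

The table has (n+1)(S+1) entries, each filled in O(1) time, giving the pseudo-polynomial O(nS) bound from the previous slide.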
71-77 Knapsack Problem: DP Solution (example)
[Animation over several slides: the table μ(i, s) for an example with n = 5 items is filled row by row, i = 0, ..., 5, for capacities s = 0, ..., S; the numeric entries were lost in transcription. The final slide backtracks the optimal choices x_5, ..., x_1 from the completed table.]
More informationCS 7180: Behavioral Modeling and Decision- making in AI
CS 7180: Behavioral Modeling and Decision- making in AI Hidden Markov Models Prof. Amy Sliva October 26, 2012 Par?ally observable temporal domains POMDPs represented uncertainty about the state Belief
More informationHidden Markov Models. By Parisa Abedi. Slides courtesy: Eric Xing
Hidden Markov Models By Parisa Abedi Slides courtesy: Eric Xing i.i.d to sequential data So far we assumed independent, identically distributed data Sequential (non i.i.d.) data Time-series data E.g. Speech
More informationOutline. Similarity Search. Outline. Motivation. The String Edit Distance
Outline Similarity Search The Nikolaus Augsten nikolaus.augsten@sbg.ac.at Department of Computer Sciences University of Salzburg 1 http://dbresearch.uni-salzburg.at WS 2017/2018 Version March 12, 2018
More informationProofs, Strings, and Finite Automata. CS154 Chris Pollett Feb 5, 2007.
Proofs, Strings, and Finite Automata CS154 Chris Pollett Feb 5, 2007. Outline Proofs and Proof Strategies Strings Finding proofs Example: For every graph G, the sum of the degrees of all the nodes in G
More informationCS483 Design and Analysis of Algorithms
CS483 Design and Analysis of Algorithms Lectures 15-16 Dynamic Programming Instructor: Fei Li lifei@cs.gmu.edu with subject: CS483 Office hours: STII, Room 443, Friday 4:00pm - 6:00pm or by appointments
More informationPair Hidden Markov Models
Pair Hidden Markov Models Scribe: Rishi Bedi Lecturer: Serafim Batzoglou January 29, 2015 1 Recap of HMMs alphabet: Σ = {b 1,...b M } set of states: Q = {1,..., K} transition probabilities: A = [a ij ]
More informationHidden Markov Models. Ivan Gesteira Costa Filho IZKF Research Group Bioinformatics RWTH Aachen Adapted from:
Hidden Markov Models Ivan Gesteira Costa Filho IZKF Research Group Bioinformatics RWTH Aachen Adapted from: www.ioalgorithms.info Outline CG-islands The Fair Bet Casino Hidden Markov Model Decoding Algorithm
More informationSequence analysis and Genomics
Sequence analysis and Genomics October 12 th November 23 rd 2 PM 5 PM Prof. Peter Stadler Dr. Katja Nowick Katja: group leader TFome and Transcriptome Evolution Bioinformatics group Paul-Flechsig-Institute
More informationCMPSCI 311: Introduction to Algorithms Second Midterm Exam
CMPSCI 311: Introduction to Algorithms Second Midterm Exam April 11, 2018. Name: ID: Instructions: Answer the questions directly on the exam pages. Show all your work for each question. Providing more
More informationHidden Markov Models
Hidden Markov Models Slides revised and adapted to Bioinformática 55 Engª Biomédica/IST 2005 Ana Teresa Freitas Forward Algorithm For Markov chains we calculate the probability of a sequence, P(x) How
More informationLecture 2: Pairwise Alignment. CG Ron Shamir
Lecture 2: Pairwise Alignment 1 Main source 2 Why compare sequences? Human hexosaminidase A vs Mouse hexosaminidase A 3 www.mathworks.com/.../jan04/bio_genome.html Sequence Alignment עימוד רצפים The problem:
More informationWe Live in Exciting Times. CSCI-567: Machine Learning (Spring 2019) Outline. Outline. ACM (an international computing research society) has named
We Live in Exciting Times ACM (an international computing research society) has named CSCI-567: Machine Learning (Spring 2019) Prof. Victor Adamchik U of Southern California Apr. 2, 2019 Yoshua Bengio,
More informationIntroduction to Machine Learning CMU-10701
Introduction to Machine Learning CMU-10701 Hidden Markov Models Barnabás Póczos & Aarti Singh Slides courtesy: Eric Xing i.i.d to sequential data So far we assumed independent, identically distributed
More informationLocal Alignment: Smith-Waterman algorithm
Local Alignment: Smith-Waterman algorithm Example: a shared common domain of two protein sequences; extended sections of genomic DNA sequence. Sensitive to detect similarity in highly diverged sequences.
More informationDynamic Programming( Weighted Interval Scheduling)
Dynamic Programming( Weighted Interval Scheduling) 17 November, 2016 Dynamic Programming 1 Dynamic programming algorithms are used for optimization (for example, finding the shortest path between two points,
More informationAlgorithms and Theory of Computation. Lecture 9: Dynamic Programming
Algorithms and Theory of Computation Lecture 9: Dynamic Programming Xiaohui Bei MAS 714 September 10, 2018 Nanyang Technological University MAS 714 September 10, 2018 1 / 21 Recursion in Algorithm Design
More informationGraphical Models Seminar
Graphical Models Seminar Forward-Backward and Viterbi Algorithm for HMMs Bishop, PRML, Chapters 13.2.2, 13.2.3, 13.2.5 Dinu Kaufmann Departement Mathematik und Informatik Universität Basel April 8, 2013
More informationDoctoral Course in Speech Recognition. May 2007 Kjell Elenius
Doctoral Course in Speech Recognition May 2007 Kjell Elenius CHAPTER 12 BASIC SEARCH ALGORITHMS State-based search paradigm Triplet S, O, G S, set of initial states O, set of operators applied on a state
More informationThe main algorithms used in the seqhmm package
The main algorithms used in the seqhmm package Jouni Helske University of Jyväskylä, Finland May 9, 2018 1 Introduction This vignette contains the descriptions of the main algorithms used in the seqhmm
More informationData Structures in Java
Data Structures in Java Lecture 20: Algorithm Design Techniques 12/2/2015 Daniel Bauer 1 Algorithms and Problem Solving Purpose of algorithms: find solutions to problems. Data Structures provide ways of
More informationMaximum sum contiguous subsequence Longest common subsequence Matrix chain multiplication All pair shortest path Kna. Dynamic Programming
Dynamic Programming Arijit Bishnu arijit@isical.ac.in Indian Statistical Institute, India. August 31, 2015 Outline 1 Maximum sum contiguous subsequence 2 Longest common subsequence 3 Matrix chain multiplication
More informationSimilarity Search. The String Edit Distance. Nikolaus Augsten.
Similarity Search The String Edit Distance Nikolaus Augsten nikolaus.augsten@sbg.ac.at Dept. of Computer Sciences University of Salzburg http://dbresearch.uni-salzburg.at Version October 18, 2016 Wintersemester
More informationLecture 3: ASR: HMMs, Forward, Viterbi
Original slides by Dan Jurafsky CS 224S / LINGUIST 285 Spoken Language Processing Andrew Maas Stanford University Spring 2017 Lecture 3: ASR: HMMs, Forward, Viterbi Fun informative read on phonetics The
More informationSequence labeling. Taking collective a set of interrelated instances x 1,, x T and jointly labeling them
HMM, MEMM and CRF 40-957 Special opics in Artificial Intelligence: Probabilistic Graphical Models Sharif University of echnology Soleymani Spring 2014 Sequence labeling aking collective a set of interrelated
More informationSTA 414/2104: Machine Learning
STA 414/2104: Machine Learning Russ Salakhutdinov Department of Computer Science! Department of Statistics! rsalakhu@cs.toronto.edu! http://www.cs.toronto.edu/~rsalakhu/ Lecture 9 Sequential Data So far
More informationDynamic Approaches: The Hidden Markov Model
Dynamic Approaches: The Hidden Markov Model Davide Bacciu Dipartimento di Informatica Università di Pisa bacciu@di.unipi.it Machine Learning: Neural Networks and Advanced Models (AA2) Inference as Message
More informationSTA 4273H: Statistical Machine Learning
STA 4273H: Statistical Machine Learning Russ Salakhutdinov Department of Statistics! rsalakhu@utstat.toronto.edu! http://www.utstat.utoronto.ca/~rsalakhu/ Sidney Smith Hall, Room 6002 Lecture 11 Project
More informationAnalysis and Design of Algorithms Dynamic Programming
Analysis and Design of Algorithms Dynamic Programming Lecture Notes by Dr. Wang, Rui Fall 2008 Department of Computer Science Ocean University of China November 6, 2009 Introduction 2 Introduction..................................................................
More informationOutline. Approximation: Theory and Algorithms. Motivation. Outline. The String Edit Distance. Nikolaus Augsten. Unit 2 March 6, 2009
Outline Approximation: Theory and Algorithms The Nikolaus Augsten Free University of Bozen-Bolzano Faculty of Computer Science DIS Unit 2 March 6, 2009 1 Nikolaus Augsten (DIS) Approximation: Theory and
More informationP(t w) = arg maxp(t, w) (5.1) P(t,w) = P(t)P(w t). (5.2) The first term, P(t), can be described using a language model, for example, a bigram model:
Chapter 5 Text Input 5.1 Problem In the last two chapters we looked at language models, and in your first homework you are building language models for English and Chinese to enable the computer to guess
More informationSimilarity Search. The String Edit Distance. Nikolaus Augsten. Free University of Bozen-Bolzano Faculty of Computer Science DIS. Unit 2 March 8, 2012
Similarity Search The String Edit Distance Nikolaus Augsten Free University of Bozen-Bolzano Faculty of Computer Science DIS Unit 2 March 8, 2012 Nikolaus Augsten (DIS) Similarity Search Unit 2 March 8,
More informationBio nformatics. Lecture 3. Saad Mneimneh
Bio nformatics Lecture 3 Sequencing As before, DNA is cut into small ( 0.4KB) fragments and a clone library is formed. Biological experiments allow to read a certain number of these short fragments per
More informationLecture 9. Greedy Algorithm
Lecture 9. Greedy Algorithm T. H. Cormen, C. E. Leiserson and R. L. Rivest Introduction to Algorithms, 3rd Edition, MIT Press, 2009 Sungkyunkwan University Hyunseung Choo choo@skku.edu Copyright 2000-2018
More informationMACHINE LEARNING 2 UGM,HMMS Lecture 7
LOREM I P S U M Royal Institute of Technology MACHINE LEARNING 2 UGM,HMMS Lecture 7 THIS LECTURE DGM semantics UGM De-noising HMMs Applications (interesting probabilities) DP for generation probability
More informationO 3 O 4 O 5. q 3. q 4. Transition
Hidden Markov Models Hidden Markov models (HMM) were developed in the early part of the 1970 s and at that time mostly applied in the area of computerized speech recognition. They are first described in
More informationStatistical Methods for NLP
Statistical Methods for NLP Sequence Models Joakim Nivre Uppsala University Department of Linguistics and Philology joakim.nivre@lingfil.uu.se Statistical Methods for NLP 1(21) Introduction Structured
More informationMultiple Sequence Alignment using Profile HMM
Multiple Sequence Alignment using Profile HMM. based on Chapter 5 and Section 6.5 from Biological Sequence Analysis by R. Durbin et al., 1998 Acknowledgements: M.Sc. students Beatrice Miron, Oana Răţoi,
More informationLecture 15. Probabilistic Models on Graph
Lecture 15. Probabilistic Models on Graph Prof. Alan Yuille Spring 2014 1 Introduction We discuss how to define probabilistic models that use richly structured probability distributions and describe how
More informationCS:4330 Theory of Computation Spring Regular Languages. Finite Automata and Regular Expressions. Haniel Barbosa
CS:4330 Theory of Computation Spring 2018 Regular Languages Finite Automata and Regular Expressions Haniel Barbosa Readings for this lecture Chapter 1 of [Sipser 1996], 3rd edition. Sections 1.1 and 1.3.
More informationMarkov Chains and Hidden Markov Models. COMP 571 Luay Nakhleh, Rice University
Markov Chains and Hidden Markov Models COMP 571 Luay Nakhleh, Rice University Markov Chains and Hidden Markov Models Modeling the statistical properties of biological sequences and distinguishing regions
More informationCourse 16:198:520: Introduction To Artificial Intelligence Lecture 13. Decision Making. Abdeslam Boularias. Wednesday, December 7, 2016
Course 16:198:520: Introduction To Artificial Intelligence Lecture 13 Decision Making Abdeslam Boularias Wednesday, December 7, 2016 1 / 45 Overview We consider probabilistic temporal models where the
More informationHidden Markov Models. Aarti Singh Slides courtesy: Eric Xing. Machine Learning / Nov 8, 2010
Hidden Markov Models Aarti Singh Slides courtesy: Eric Xing Machine Learning 10-701/15-781 Nov 8, 2010 i.i.d to sequential data So far we assumed independent, identically distributed data Sequential data
More informationCS532, Winter 2010 Hidden Markov Models
CS532, Winter 2010 Hidden Markov Models Dr. Alan Fern, afern@eecs.oregonstate.edu March 8, 2010 1 Hidden Markov Models The world is dynamic and evolves over time. An intelligent agent in such a world needs
More informationSequence Alignment (chapter 6)
Sequence lignment (chapter 6) he biological problem lobal alignment Local alignment Multiple alignment Introduction to bioinformatics, utumn 6 Background: comparative genomics Basic question in biology:
More informationLecture 13. More dynamic programming! Longest Common Subsequences, Knapsack, and (if time) independent sets in trees.
Lecture 13 More dynamic programming! Longest Common Subsequences, Knapsack, and (if time) independent sets in trees. Announcements HW5 due Friday! HW6 released Friday! Last time Not coding in an action
More informationIntroduction to Reinforcement Learning Part 1: Markov Decision Processes
Introduction to Reinforcement Learning Part 1: Markov Decision Processes Rowan McAllister Reinforcement Learning Reading Group 8 April 2015 Note I ve created these slides whilst following Algorithms for
More informationApproximation: Theory and Algorithms
Approximation: Theory and Algorithms The String Edit Distance Nikolaus Augsten Free University of Bozen-Bolzano Faculty of Computer Science DIS Unit 2 March 6, 2009 Nikolaus Augsten (DIS) Approximation:
More informationHuman-Oriented Robotics. Temporal Reasoning. Kai Arras Social Robotics Lab, University of Freiburg
Temporal Reasoning Kai Arras, University of Freiburg 1 Temporal Reasoning Contents Introduction Temporal Reasoning Hidden Markov Models Linear Dynamical Systems (LDS) Kalman Filter 2 Temporal Reasoning
More informationBioinformatics 2 - Lecture 4
Bioinformatics 2 - Lecture 4 Guido Sanguinetti School of Informatics University of Edinburgh February 14, 2011 Sequences Many data types are ordered, i.e. you can naturally say what is before and what
More informationDAA Unit- II Greedy and Dynamic Programming. By Mrs. B.A. Khivsara Asst. Professor Department of Computer Engineering SNJB s KBJ COE, Chandwad
DAA Unit- II Greedy and Dynamic Programming By Mrs. B.A. Khivsara Asst. Professor Department of Computer Engineering SNJB s KBJ COE, Chandwad 1 Greedy Method 2 Greedy Method Greedy Principal: are typically
More informationVL Algorithmen und Datenstrukturen für Bioinformatik ( ) WS15/2016 Woche 16
VL Algorithmen und Datenstrukturen für Bioinformatik (19400001) WS15/2016 Woche 16 Tim Conrad AG Medical Bioinformatics Institut für Mathematik & Informatik, Freie Universität Berlin Based on slides by
More informationMarkov Chains and Hidden Markov Models. = stochastic, generative models
Markov Chains and Hidden Markov Models = stochastic, generative models (Drawing heavily from Durbin et al., Biological Sequence Analysis) BCH339N Systems Biology / Bioinformatics Spring 2016 Edward Marcotte,
More informationEvolutionary Models. Evolutionary Models
Edit Operators In standard pairwise alignment, what are the allowed edit operators that transform one sequence into the other? Describe how each of these edit operations are represented on a sequence alignment
More informationIntroduction to Systems Analysis and Decision Making Prepared by: Jakub Tomczak
Introduction to Systems Analysis and Decision Making Prepared by: Jakub Tomczak 1 Introduction. Random variables During the course we are interested in reasoning about considered phenomenon. In other words,
More informationHidden Markov Models for biological sequence analysis
Hidden Markov Models for biological sequence analysis Master in Bioinformatics UPF 2017-2018 http://comprna.upf.edu/courses/master_agb/ Eduardo Eyras Computational Genomics Pompeu Fabra University - ICREA
More information