Dynamic Programming. Georgy Gimel'farb


Dynamic Programming
Georgy Gimel'farb (with basic contributions by Michael J. Dinneen)
COMPSCI 369 Computational Science

Outline

Dynamic Programming (DP) Paradigm
Discrete Optimisation with DP
Viterbi algorithm
Knapsack Problem: dynamic programming solution

Learning outcomes:
Understand DP and the problems it can solve
Be familiar with the edit-distance problem and its DP solution
Be familiar with the Viterbi algorithm

Additional sources:
http://en.wikipedia.org/wiki/Dynamic_programming
http://www.cprogramming.com/tutorial/computersciencetheory/dp.html
http://en.wikipedia.org/wiki/Knapsack_problem (the dynamic programming solution)

Main Algorithmic Paradigms

Greedy: build up a solution incrementally, optimising some local criterion at each step.
Divide-and-conquer: break a problem up into separate subproblems, solve each subproblem independently, and combine the solutions to the subproblems into a solution to the original problem.
Dynamic programming (DP): break a problem up into a series of overlapping subproblems, and build up solutions to larger and larger subproblems.

Unlike the divide-and-conquer paradigm, DP typically involves solving all possible subproblems rather than a small portion of them. DP tends to solve each subproblem only once, store the result, and reuse it later, which dramatically reduces the amount of computation when the number of repeated subproblems is exponentially large (see the sketch below).
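As a minimal illustration of this reuse of subproblems (a sketch added here, not from the lecture), the memoised Fibonacci recursion below computes each overlapping subproblem fib(k) only once, caches it, and reuses it, turning an exponential-time recursion into a linear-time one:

    from functools import lru_cache

    @lru_cache(maxsize=None)             # store each subproblem's result the first time it is computed
    def fib(n: int) -> int:
        if n < 2:
            return n
        return fib(n - 1) + fib(n - 2)   # the two overlapping subproblems are looked up, not recomputed

    print(fib(40))                       # 102334155, computed from only 41 distinct subproblems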

History of Dynamic Programming

Etymology: Richard E. Bellman (26.8.1920 - 19.3.1984), famous applied mathematician (USA) who pioneered the systematic study of dynamic programming in the 1950s while working at the RAND Corporation.

Dynamic programming = planning over time:
The Secretary of Defense was hostile to mathematical research.
Bellman sought an impressive name to avoid confrontation.
"It's impossible to use dynamic in a pejorative sense."
"Something not even a Congressman could object to."

Reference: Bellman, R. E.: Eye of the Hurricane, An Autobiography.

Bellman's Principle of Optimality

R. E. Bellman: Dynamic Programming. Princeton Univ. Press, 1957, Ch. III:
"An optimal policy has the property that whatever the initial state and initial decision are, the remaining decisions must constitute an optimal policy with regard to the state resulting from the first decision."

[Figure: a state-vs-time diagram contrasting an optimal policy w.r.t. the initial state s_0 with a path that cannot be an optimal policy w.r.t. an intermediate state s_i.]

The optimal policy w.r.t. s_0 after any of its states s_i cannot differ from the optimal policy w.r.t. the state s_i!

See http://en.wikipedia.org/wiki/Bellman_equation

Simple Example: Find the Cheapest Route

[Figure, shown over several slides: a state-vs-time lattice with edge costs. The greedy algorithm picks the locally cheapest edge at each step and reaches total cost c = 7; the DP solution then proceeds step by step, recording the cheapest cost of reaching every state at every step, and finally backtracks the cheapest route.]

Discrete Optimisation with DP

Problem: (s*_0, ..., s*_n) = arg min_{s_i ∈ S_i: i = 0,...,n} F(s_0, ..., s_n),
where the objective function F(s_0, ..., s_n) depends on states s_i; i = 0, ..., n, each having a finite set S_i of values.

Frequently, an objective function amenable to DP is additive:
F(s_0, s_1, ..., s_n) = ψ_0(s_0) + Σ_{i=1}^{n} ϕ_i(s_{i−1}, s_i)

Generally, each state s_i takes only a subset S_i(s_{i+1}) ⊆ S_i of values, which depends on the state s_{i+1} ∈ S_{i+1}.

Overlapping subproblems are solved for all the states s_i ∈ S_i at each step i, sequentially for i = 1, ..., n.

Computing the DP Solution to the Problem on Slide 7

Bellman equation: for i = 1, ..., n and each s_i ∈ S_i,
Φ_i(s_i) = min_{s_{i−1} ∈ S_{i−1}(s_i)} { Φ_{i−1}(s_{i−1}) + ϕ_i(s_{i−1}, s_i) }
B_i(s_i) = arg min_{s_{i−1} ∈ S_{i−1}(s_i)} { Φ_{i−1}(s_{i−1}) + ϕ_i(s_{i−1}, s_i) }

Φ_i(s_i) is a candidate decision for state s_i at step i, and B_i(s_i) is a backward pointer for reconstructing the candidate sequence of states s_0, ..., s_{i−1}, s_i producing Φ_i(s_i).

Backtracking to reconstruct the solution:
min_{s_n ∈ S_n} Φ_n(s_n) = min_{s_0,...,s_n} F(s_0, ..., s_n)
s*_n = arg min_{s_n ∈ S_n} Φ_n(s_n)
s*_{i−1} = B_i(s*_i) for i = n, ..., 1
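The forward pass and the backtracking above translate almost literally into code. The following Python sketch (an illustration, not part of the slides; the names states, psi0 and phi are placeholders) minimises an additive objective ψ_0(s_0) + Σ ϕ_i(s_{i−1}, s_i); the state constraints S_{i−1}(s_i) can be imposed by returning an infinite cost for forbidden transitions:

    def dp_minimise(states, psi0, phi):
        # states: list of the finite sets S_0, ..., S_n (as iterables)
        # psi0:   function s0 -> psi_0(s0)
        # phi:    list with phi[i](s_prev, s) = phi_i(s_{i-1}, s_i) for i = 1..n (phi[0] unused)
        n = len(states) - 1
        # Forward pass: Phi[i][s] = cheapest cost of any s_0..s_i ending in state s;
        # B[i][s] = backward pointer to the preceding state on that cheapest sequence.
        Phi = [{s: psi0(s) for s in states[0]}]
        B = [{}]
        for i in range(1, n + 1):
            Phi_i, B_i = {}, {}
            for s in states[i]:
                costs = {sp: Phi[i - 1][sp] + phi[i](sp, s) for sp in states[i - 1]}
                best_prev = min(costs, key=costs.get)
                Phi_i[s], B_i[s] = costs[best_prev], best_prev
            Phi.append(Phi_i)
            B.append(B_i)
        # Backtracking: cheapest final state, then follow the backward pointers.
        s_final = min(Phi[n], key=Phi[n].get)
        best_cost, seq = Phi[n][s_final], [s_final]
        for i in range(n, 0, -1):
            seq.append(B[i][seq[-1]])
        seq.reverse()
        return best_cost, seq

Plugging in the node sets and edge costs of the route example reproduces the candidate decisions Φ_i and pointers B_i built on the next slides.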

Computing the DP Solution to the Example on Slide 6

[Table: the sets of states S_i, the state constraints S_{i−1}(s_i), and the cost functions ψ_0(s_0) and ϕ_i(s_{i−1}, s_i) of the route example.]

Step i = 0: Φ_0(s_0 = 0) = ψ_0(0)
Step i = 1: Φ_1(s_1) = Φ_0(0) + ϕ_1(0, s_1), with B_1(s_1) = 0, for each s_1 ∈ S_1
Step i = 2: Φ_2(s_2) = min_{s_1 ∈ S_1(s_2)} { Φ_1(s_1) + ϕ_2(s_1, s_2) }, the minimiser being stored in B_2(s_2)
Step i = 3: Φ_3(s_3) = min_{s_2 ∈ S_2(s_3)} { Φ_2(s_2) + ϕ_3(s_2, s_3) }, the minimiser being stored in B_3(s_3)

Solution:
min_{s_0,...,s_3} F(s_0, ..., s_3) = min_{s_3 ∈ S_3} Φ_3(s_3)
Ending optimal state: s*_3 = arg min_{s_3 ∈ S_3} Φ_3(s_3)
Backtracking the preceding states of the optimal solution: s*_2 = B_3(s*_3), s*_1 = B_2(s*_2), s*_0 = B_1(s*_1)

[Figure: the route lattice annotated with the accumulated costs Φ_i(s_i) and the backward pointers at each step, and the backtracked optimal route.]

DP: Applications and Algorithms

Areas: control theory; signal processing; information theory; operations research; bioinformatics; computer science: theory, AI, graphics, image analysis, systems, ...

Algorithms:
Viterbi: error-correction coding, hidden Markov models
Unix diff: comparing two files
Smith-Waterman: gene sequence alignment
Bellman-Ford: shortest-path routing in networks
Cocke-Kasami-Younger: parsing context-free grammars

Levenshtein, or Edit Distance: Minimum Cost of Editing

Applications: Unix diff, speech recognition, computational biology.

Different penalties: δ > 0 for an insertion or deletion, and α_{xy} for a mismatch between two characters x and y; α_{xx} = 0.

Example: transforming "claim" into "lime":
matching l-l, i-i, m-m (α_ll = α_ii = α_mm = 0) and leaving c, a, e unmatched gives cost(claim → lime) = 3δ;
matching c-l, i-i, m-m instead (α_ii = α_mm = 0) leaves l, a, e unmatched and gives cost(claim → lime) = α_cl + 3δ.

http://en.wikipedia.org/wiki/Edit_distance

Two Strings C_1 and C_2: Levenshtein, or Edit Distance

Minimum number D(C_1, C_2) of edit operations to transform C_1 into C_2: insertion (weight 1), deletion (weight 1), or character substitution (weight 0 if the same character, otherwise α_{xy} > 0, e.g. α_{xy} = 1).

Operations: delete c_i; substitute c_j for c_i; insert c_j.

D(claim, lime) = 3: claim → laim → lim → lime (delete c, delete a, insert e)

[Figure: the two strings c l a i m and l i m e with the matched characters linked.]

http://en.wikipedia.org/wiki/Levenshtein_distance

Levenshtein, or Edit Distance: DP Computation

Strings: x ≡ x_[m] = x_1 x_2 ... x_m and y ≡ y_[n] = y_1 y_2 ... y_n
Prefix substrings: x_[i] = x_1 ... x_i; 0 ≤ i ≤ m, and y_[j] = y_1 ... y_j; 0 ≤ j ≤ n
Distance: d(i, j) = D(x_[i], y_[j])

Recurrent computation:
d(i, j) =
  0                 if i = 0; j = 0
  d(i−1, 0) + 1     if i > 0; j = 0
  d(0, j−1) + 1     if i = 0; j > 0
  min{ d(i−1, j) + 1, d(i, j−1) + 1, d(i−1, j−1) + α_{x_i y_j} }   otherwise;   α_{xx} = 0
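A direct Python transcription of this recurrence (added for illustration; the 0/1 substitution cost below is just one possible choice of α):

    def edit_distance(x: str, y: str, alpha=lambda a, b: 0 if a == b else 1) -> int:
        m, n = len(x), len(y)
        # d[i][j] = distance between the prefixes x[:i] and y[:j]
        d = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(1, m + 1):
            d[i][0] = i          # delete the first i characters of x
        for j in range(1, n + 1):
            d[0][j] = j          # insert the first j characters of y
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                d[i][j] = min(d[i - 1][j] + 1,                              # deletion
                              d[i][j - 1] + 1,                              # insertion
                              d[i - 1][j - 1] + alpha(x[i - 1], y[j - 1]))  # (mis)match
        return d[m][n]

    print(edit_distance("claim", "lime"))   # 3, as in the example above

It runs in Θ(mn) time and space, matching the table-filling illustration that follows.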

Levenshtein, or Edit Distance: DP Computation

[Figure, shown over a sequence of slides: the (m+1) × (n+1) table d(i, j) for x = claim and y = lime, with the first row and column initialised to the gap costs and the inner cells labelled with the mismatch penalties α_{x_i y_j}; successive slides fill the table cell by cell using the recurrence above, the bottom-right cell finally giving D(claim, lime).]

Sequence Alignment

Given two strings x = x_1 x_2 ... x_m and y = y_1 y_2 ... y_n, find their alignment of minimum cost.

Alignment M: a set of ordered pairs (x_i, y_j) such that each item occurs in at most one pair and there are no crossings. Pairs (x_i, y_j) and (x_{i'}, y_{j'}) cross if i < i' but j > j'.

cost(M) = Σ_{(x_i, y_j) ∈ M} α_{x_i y_j}  [mismatches]  +  Σ_{i: x_i unmatched} δ  +  Σ_{j: y_j unmatched} δ  [gaps]

Example 1: x = claim; y = lime; M = {x_2-y_1, x_4-y_2, x_5-y_3}; cost = 3δ
Example 2: x = ctaccg; y = tacatg; M = {x_2-y_1, x_3-y_2, x_4-y_3, x_5-y_4, x_6-y_6}; cost = α_CA + 2δ

[Figure: the two alignments drawn with the matched characters of c l a i m / l i m e and C T A C C G / T A C A T G linked.]

Sequence Alignment: Algorithm

Sequence-Alignment(x_1 x_2 ... x_m, y_1 y_2 ... y_n, δ, α) {
   for i = 0 to m
      D[i, 0] = i δ
   for j = 0 to n
      D[0, j] = j δ
   for i = 1 to m
      for j = 1 to n
         D[i, j] = min( α[x_i, y_j] + D[i−1, j−1],
                        δ + D[i−1, j],
                        δ + D[i, j−1] )
}

Time and space complexity: Θ(mn).
English words: m, n ≤ 10.
Computational biology: m = n = 100,000 (10 billion operations is fine, but a 10 GB array? See the space-saving sketch below.)
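As a sketch of that space remark (an illustration, not the lecture's code): the optimal cost D[m, n] needs only the current and the previous row of the table, so it can be computed in O(m + n) space; recovering the alignment itself within the same space bound additionally needs Hirschberg's divide-and-conquer refinement.

    def alignment_cost(x, y, delta, alpha):
        # Minimum alignment cost D[m, n], keeping only two rows of the table D.
        m, n = len(x), len(y)
        prev = [j * delta for j in range(n + 1)]          # row D[0, .]
        for i in range(1, m + 1):
            curr = [i * delta] + [0] * n                  # D[i, 0] = i * delta
            for j in range(1, n + 1):
                curr[j] = min(alpha(x[i - 1], y[j - 1]) + prev[j - 1],   # (mis)match
                              delta + prev[j],                           # x_i unmatched
                              delta + curr[j - 1])                       # y_j unmatched
            prev = curr
        return prev[n]

    # With a unit gap penalty and a 0/1 mismatch cost this is again the edit distance:
    print(alignment_cost("claim", "lime", 1, lambda a, b: 0 if a == b else 1))   # 3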

Viterbi Algorithm: Probabilistic Model

DP search for the most likely sequence of unobserved (hidden) states from a sequence of observations (signals).

Each signal depends on exactly one corresponding hidden state.
The hidden states are produced by a first-order Markov model:
   set of hidden states S = {s_1, ..., s_n};
   transition probabilities P_s(s_i | s_j): i, j ∈ {1, ..., n}.
Given the states, the signals are statistically independent:
   set of signals V = {v_1, ..., v_m};
   observation probabilities P_o(v_j | s_i): v_j ∈ V; s_i ∈ S.

[Figure: a chain of hidden states s_[1], s_[2], s_[3], s_[4], each emitting one signal v_[1], v_[2], v_[3], v_[4].]

Log-likelihood of a sequence of states s = s_[1] s_[2] ... s_[K], given a sequence of signals v = v_[1] v_[2] ... v_[K]:
L(s | v) ≡ log Pr(s | v);   Pr(s | v) ∝ Pr(s, v) = Pr_s(s) Pr_o(v | s)

http://en.wikipedia.org/wiki/Viterbi_algorithm

Maximum (Log-)Likelihood

s* = arg max_{s ∈ S^K} L(s | v)

s = s_[1] s_[2] ... s_[K]: a hidden (unobserved) Markov chain of states at steps k = 1, ..., K with joint probability
Pr_s(s) = π(s_[1]) Π_{k=2}^{K} P_s(s_[k] | s_[k−1])
   π(s): prior probability of state s ∈ S at step k = 1
   P_s(s | s′): probability of transition from state s′ to the next state s

v = v_[1] v_[2] ... v_[K]: an observed sequence of conditionally independent signals with probability
Pr(v | s) = Π_{k=1}^{K} P_o(v_[k] | s_[k])
   P_o(v | s): probability of observing v ∈ V in state s ∈ S at step k

s* = arg max_{s ∈ S^K} Σ_{k=1}^{K} ( ψ_k(s_[k]) + ϕ(s_[k] | s_[k−1]) )

ψ_k(s) = { log π(s) + log P_o(v_[k] | s)   if k = 1; s ∈ S
           log P_o(v_[k] | s)              if k > 1; s ∈ S }

ϕ(s | s′) = { 0                  if k = 1
              log P_s(s | s′)    if k > 1; s, s′ ∈ S }

Probabilistic State Transitions and Signals for States

Example for S = {a, b, c}, V = {A, B, C}:

Non-deterministic (probabilistic) finite automaton (NFA) for state transitions at each step k: Σ_{s′ ∈ S} P_s(s′ | s) = 1 for all s ∈ S.

Probabilistic signal generator at each step k: Σ_{v ∈ V} P_o(v | s) = 1 for all s ∈ S.

[Figure: the three-state transition diagram over {a, b, c} with the probabilities P_s(· | ·) on its arcs, and a per-state generator of the signals A, B, C with probabilities P_o(A | s), P_o(B | s), P_o(C | s).]

Graphical Model for S = {a, b, c} and Given Signals v

[Figure: the trellis for the observed signals v_[1] = A, v_[2] = A, v_[3] = B, v_[4] = A; each column k = 1, ..., 4 contains the three states a, b, c with node scores ψ_k(·), and the arcs between consecutive columns carry the transition scores ϕ(· | ·).]

Maximum (Log-)Likelihood via Dynamic Programming

Viterbi DP algorithm:

Initialisation: k = 1; Φ_1(s_[1]) = ψ_1(s_[1]) for all s_[1] ∈ S

Forward pass, for k = 2, ..., K and all s_[k] ∈ S:
Φ_k(s_[k]) = ψ_k(s_[k]) + max_{s_[k−1] ∈ S} { ϕ(s_[k] | s_[k−1]) + Φ_{k−1}(s_[k−1]) }
B_k(s_[k]) = arg max_{s_[k−1] ∈ S} { ϕ(s_[k] | s_[k−1]) + Φ_{k−1}(s_[k−1]) }

k = K: the maximum log-likelihood final state s*_[K] = arg max_{s_[K] ∈ S} Φ_K(s_[K])

Backward pass, for k = K−1, ..., 1: s*_[k] = B_{k+1}(s*_[k+1])
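In code, the forward and backward passes look as follows (a Python sketch for illustration; the dictionary-based parameters and the toy two-state model at the end are made up, not taken from the lecture):

    import math

    def viterbi(signals, states, log_prior, log_trans, log_obs):
        # log_prior[s]     = log pi(s)
        # log_trans[sp][s] = log P_s(s | sp)
        # log_obs[s][v]    = log P_o(v | s)
        K = len(signals)
        # Initialisation: Phi_1(s) = psi_1(s) = log pi(s) + log P_o(v_[1] | s)
        Phi = [{s: log_prior[s] + log_obs[s][signals[0]] for s in states}]
        B = [{}]
        # Forward pass for k = 2, ..., K
        for k in range(1, K):
            Phi_k, B_k = {}, {}
            for s in states:
                scores = {sp: Phi[k - 1][sp] + log_trans[sp][s] for sp in states}
                best = max(scores, key=scores.get)
                Phi_k[s] = log_obs[s][signals[k]] + scores[best]
                B_k[s] = best
            Phi.append(Phi_k)
            B.append(B_k)
        # Backward pass: start from the maximum log-likelihood final state.
        path = [max(Phi[-1], key=Phi[-1].get)]
        for k in range(K - 1, 0, -1):
            path.append(B[k][path[-1]])
        path.reverse()
        return path, Phi[-1][path[-1]]

    # Toy two-state example (made-up probabilities):
    states = ["a", "b"]
    log_prior = {s: math.log(0.5) for s in states}
    log_trans = {"a": {"a": math.log(0.8), "b": math.log(0.2)},
                 "b": {"a": math.log(0.2), "b": math.log(0.8)}}
    log_obs = {"a": {"A": math.log(0.7), "B": math.log(0.3)},
               "b": {"A": math.log(0.3), "B": math.log(0.7)}}
    print(viterbi("AABAB", states, log_prior, log_trans, log_obs))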

Example: S = {a, b}; V = {A, B}; v = AABAB

Scores for this example: the transition score ϕ(s | s′) takes one value when s = s′ and another when s ≠ s′, and the node score ψ_k(s) takes one value when the observed signal matches the state (A ↔ a, B ↔ b, C ↔ c) and another otherwise.

[Figure, shown over several slides: the two-row trellis for the five signals v_[1] = A, v_[2] = A, v_[3] = B, v_[4] = A, v_[5] = B; the slides show the initialisation at k = 1, the forward steps k = 2, ..., 5 with the accumulated scores Φ_k(a), Φ_k(b) written on the nodes, and finally the backtracking, which yields s* = aaaaa.]

Knapsack Problem

Maximise Σ_{i=1}^{n} x_i v_i subject to Σ_{i=1}^{n} x_i s_i ≤ S;  x_i ∈ {0, 1};  s_i, v_i, S > 0

DP gives a solution of pseudo-polynomial time complexity, O(nS).

No contradiction to the NP-completeness of the problem: S is not polynomial in the length of the problem's input, since the length of the encoding of S is proportional to its number of bits, i.e. log S.

Space complexity is O(nS), or O(S) if, for each i, the one-dimensional array is rewritten in place from µ(S) down to µ(0).

µ(i, s): the maximum value that can be obtained by placing up to i items into a knapsack of size less than or equal to s.

The DP solution uses a table µ(i, s) (or µ(s)) to store previous computations.

Knapsack Problem: DP Solution

Recursive definition of µ(i, s):
µ(0, s) = 0;  µ(i, 0) = 0;
µ(i, s) = µ(i−1, s)                                   if s_i > s
µ(i, s) = max{ µ(i−1, s), µ(i−1, s − s_i) + v_i }     if s_i ≤ s

[Figure: rows i−1 and i of the table µ for s = 0, ..., S; the cell µ(i, ŝ) is obtained from the cells µ(i−1, ŝ) and µ(i−1, ŝ − s_i).]
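A Python sketch of this recurrence (illustrative only; the item values and sizes at the end are hypothetical example data), keeping the full table µ(i, s) so that the chosen items can be recovered by backtracking:

    def knapsack(values, sizes, S):
        # mu[i][s] = maximum value using only the first i items with capacity s.
        n = len(values)
        mu = [[0] * (S + 1) for _ in range(n + 1)]     # mu[0][s] = mu[i][0] = 0
        for i in range(1, n + 1):
            for s in range(1, S + 1):
                if sizes[i - 1] > s:                   # item i does not fit
                    mu[i][s] = mu[i - 1][s]
                else:                                  # leave item i out, or put it in
                    mu[i][s] = max(mu[i - 1][s],
                                   mu[i - 1][s - sizes[i - 1]] + values[i - 1])
        # Backtrack through the table to recover the decisions x_1, ..., x_n.
        x, s = [0] * n, S
        for i in range(n, 0, -1):
            if mu[i][s] != mu[i - 1][s]:               # item i must have been taken
                x[i - 1] = 1
                s -= sizes[i - 1]
        return mu[n][S], x

    # Hypothetical data with n = 5 items and knapsack size S = 10:
    print(knapsack([3, 4, 5, 8, 10], [2, 3, 4, 5, 9], 10))   # (15, [1, 1, 0, 1, 0])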

Knapsack Problem: DP Solution

[Worked example, shown over several slides: n = 5 items with given values v_i and sizes s_i; the table µ(i, s) is filled row by row for i = 1, ..., 5 and s = 0, ..., S, and the final slide backtracks through the table to read off the optimal decisions x_1, ..., x_5.]

Outline DP paradigm Discrete optimisation Viterbi algorithm DP: Knapsack Knapsack Problem: DP Solution Example: n=5; S = ; i 5 v i 5 s i 6 7 i = 5 9 5 6 9 5 5 x 5 = i = 9 5 9 x = i = 9 x = i = 5 5 5 5 5 5 5 5 5 x = i = x = i = s = 5 6 7 9 /