A note on the primal-dual method for the semi-metric labeling problem
Vladimir Kolmogorov

Technical report, June 4, 2007

Abstract

Recently, Komodakis et al. [6] developed the FastPD algorithm for the semi-metric labeling problem, which extends the expansion move algorithm of Boykov et al. [2]. We present a slightly different derivation of the FastPD method.

1. Preliminaries

Consider the following energy function:

    E(x | θ) = Σ_{u∈V} θ_u(x_u) + Σ_{(u,v)∈E} θ_uv(x_u, x_v)    (1)

Here G = (V, E) is an undirected graph. Variables x_u for nodes u ∈ V belong to a discrete set x_u ∈ X_u; thus, labeling x belongs to the set X = ×_{u∈V} X_u. We denote Ē = {(u→v), (v→u) | (u,v) ∈ E}, i.e. (V, Ē) is the directed graph corresponding to the undirected graph (V, E).

The energy function is specified by unary terms θ_u(i) and pairwise terms θ_uv(i, j), i ∈ X_u, j ∈ X_v. It will be convenient to denote them as θ_{u;i} and θ_{uv;ij}, respectively. We can concatenate all these values into a single vector θ = {θ_α | α ∈ I}, where the index set is I = {(u; i)} ∪ {(uv; ij)}. Note that (uv; ij) ≡ (vu; ji), so θ_{uv;ij} and θ_{vu;ji} are the same element. We will use the notation θ_u to denote a vector of size |X_u| and θ_uv to denote a vector of size |X_u| · |X_v|.

1.1. LP relaxation

In this section we describe a linear programming (LP) relaxation of energy (1) which plays a crucial role for the algorithms in [5, 6]. This natural relaxation was studied extensively in the literature, in particular by Schlesinger [10] for the special case when θ_uv(i, j) ∈ {0, +∞}, Koster et al. [7], Chekuri et al. [3], and Wainwright et al. [11].

Primal problem. Let us introduce binary indicator variables

    τ_{u;i} = [x_u = i],    τ_{uv;ij} = [x_u = i, x_v = j]

where [·] is the Iverson bracket: it is 1 if its argument is true, and 0 otherwise.
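The definitions above can be made concrete with a small sketch (a toy instance with made-up numbers, not from the paper): it evaluates the energy (1) on a 3-node chain and builds the indicator variables τ from a labeling x; the inner product ⟨θ, τ⟩ then reproduces E(x).

```python
# Toy instance (made-up numbers): 3-node chain a-b-c, two labels per node.
theta_u = {'a': [0.0, 2.0], 'b': [1.0, 0.0], 'c': [0.5, 0.5]}   # theta_{u;i}
theta_uv = {('a', 'b'): [[0.0, 1.0], [1.0, 0.0]],               # theta_{uv;ij}
            ('b', 'c'): [[0.0, 1.0], [1.0, 0.0]]}

def energy(x):
    """E(x) = sum_u theta_u(x_u) + sum_{(u,v) in E} theta_uv(x_u, x_v), eq. (1)."""
    return (sum(theta_u[u][x[u]] for u in theta_u)
            + sum(t[x[u]][x[v]] for (u, v), t in theta_uv.items()))

def indicators(x):
    """Integer primal vector: tau_{u;i} = [x_u = i], tau_{uv;ij} = [x_u = i, x_v = j]."""
    tau_u = {(u, i): int(x[u] == i) for u in theta_u for i in (0, 1)}
    tau_uv = {(u, v, i, j): int(x[u] == i and x[v] == j)
              for (u, v) in theta_uv for i in (0, 1) for j in (0, 1)}
    return tau_u, tau_uv

x = {'a': 0, 'b': 1, 'c': 1}
tau_u, tau_uv = indicators(x)
# The linear objective <theta, tau> equals the energy of the labeling x.
inner = (sum(theta_u[u][i] * tau_u[u, i] for (u, i) in tau_u)
         + sum(theta_uv[u, v][i][j] * tau_uv[u, v, i, j] for (u, v, i, j) in tau_uv))
assert abs(inner - energy(x)) < 1e-9
print(energy(x))   # 1.5
```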
Variables {τ_{u;i}}, {τ_{uv;ij}} must belong to the following constraint set:

    Λ = { τ ∈ R_+^I | Σ_{i∈X_u} τ_{u;i} = 1 ∀ u ∈ V;  Σ_{i∈X_u} τ_{uv;ij} = τ_{v;j} ∀ (u→v) ∈ Ē, j ∈ X_v }

Clearly, the problem of minimizing function (1) can be formulated as follows:

    minimize ⟨θ, τ⟩ subject to τ ∈ Λ, τ_{uv;ij} ∈ {0, 1}

The LP relaxation in [10, 7, 3, 11, 5, 6] is obtained by dropping the integrality constraint:

    minimize ⟨θ, τ⟩ subject to τ ∈ Λ    (2a)

Reparameterization and dual problem. The dual problem to (2a) can be defined using the notion of reparameterization [10, 11].

Definition 1. Suppose vectors θ and θ̄ define the same energy function, i.e. E(x | θ) = E(x | θ̄) for all configurations x. Then θ is called a reparameterization of θ̄.

Let us introduce a message M_{uv;j} ∈ R for each directed edge (u→v) ∈ Ē and label j ∈ X_v. We denote by M_uv = {M_{uv;j} | j ∈ X_v} a vector of size |X_v|, and by M = {M_uv | (u→v) ∈ Ē} the vector of all messages. This vector defines the reparameterization θ = θ̄[M] as follows:

    θ_{u;i} = θ̄_{u;i} + Σ_{(u,v)∈E} M_{vu;i}
    θ_{uv;ij} = θ̄_{uv;ij} − M_{uv;j} − M_{vu;i}
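A small numerical check of Definition 1 and the update rules above (one edge, two labels; all numbers are made up): applying messages M_uv, M_vu shifts mass between pairwise and unary terms while leaving the energy of every labeling unchanged.

```python
# Original parameters theta_bar for a single edge (u,v), binary labels.
theta_bar_u = [0.0, 2.0]                  # theta_bar_{u;i}
theta_bar_v = [1.0, 0.0]                  # theta_bar_{v;j}
theta_bar_uv = [[0.0, 3.0], [3.0, 0.0]]   # theta_bar_{uv;ij}
M_uv = [0.0, 1.0]   # message u -> v, indexed by label j of v
M_vu = [0.5, 0.0]   # message v -> u, indexed by label i of u

# Reparameterization theta = theta_bar[M].
theta_u = [theta_bar_u[i] + M_vu[i] for i in range(2)]
theta_v = [theta_bar_v[j] + M_uv[j] for j in range(2)]
theta_uv = [[theta_bar_uv[i][j] - M_uv[j] - M_vu[i] for j in range(2)]
            for i in range(2)]

def energy(th_u, th_v, th_uv, i, j):
    return th_u[i] + th_v[j] + th_uv[i][j]

# E(x | theta) == E(x | theta_bar) for all four labelings (Definition 1).
for i in range(2):
    for j in range(2):
        assert abs(energy(theta_u, theta_v, theta_uv, i, j)
                   - energy(theta_bar_u, theta_bar_v, theta_bar_uv, i, j)) < 1e-9
```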
In this paper θ̄ denotes the original parameter vector, and θ = θ̄[M] denotes its reparameterization defined by some message vector M. To simplify notation, we write the corresponding energy function as E(x) = E(x | θ) = E(x | θ̄). Let us define

    Φ(θ) = Σ_{u∈V} min_{i∈X_u} θ_{u;i} + Σ_{(u,v)∈E} min_{i∈X_u, j∈X_v} θ_{uv;ij}

It is easy to see that for any messages M the value Φ(θ̄[M]) is a lower bound on the energy: Φ(θ̄[M]) ≤ min_x E(x). This motivates the following maximization problem:

    maximize Φ(θ) subject to θ = θ̄[M]    (2b)

In other words, the goal is to find the tightest possible bound. This maximization problem is dual to (2a) (see e.g. [12]).

1.2. Optimality conditions

Consider a fractional labeling τ ∈ Λ and messages M corresponding to reparameterization θ = θ̄[M]. It is well known that (τ, M) is an optimal primal-dual pair (i.e. τ is an optimal solution of (2a) and M is an optimal solution of (2b)) if and only if the following complementary slackness conditions hold:

    τ_{u;i} > 0  ⟹  θ_{u;i} = min_{i'∈X_u} θ_{u;i'}    (3a)
    τ_{uv;ij} > 0  ⟹  θ_{uv;ij} = min_{i'∈X_u, j'∈X_v} θ_{uv;i'j'}    (3b)

The algorithms in [5, 6] maintain an integer primal vector τ ∈ Λ, or equivalently a labeling x ∈ X. With such a restriction a reparameterization θ = θ̄[M] satisfying (3) may not exist. Thus, conditions (3) must be relaxed. Given a number μ ≥ 1, let us say that (x, M) satisfies the μ-relaxed complementary slackness conditions if the following holds for all nodes and edges:

    θ_u(x_u) ≤ min_{i∈X_u} θ_{u;i} + (1 − 1/μ) θ̄_u(x_u)    (4a)
    θ_uv(x_u, x_v) ≤ min_{i∈X_u, j∈X_v} θ_{uv;ij} + (1 − 1/μ) θ̄_uv(x_u, x_v)    (4b)

Theorem 2 (cf. [5]). Suppose that the pair (x, M) satisfies eq. (4). Then x is a μ-approximation, i.e. E(x) ≤ μ · min_y E(y).

Proof. Let us sum (4a) over nodes u ∈ V and (4b) over edges (u,v) ∈ E. We obtain E(x) ≤ Φ(θ) + (1 − 1/μ) E(x), and therefore (1/μ) E(x) ≤ Φ(θ) ≤ min_y E(y).

1.3. LP relaxation for submodular functions of binary variables

The expansion move algorithm [2] and the algorithms in [5, 6] rely on solving the minimization problem with at most binary variables. In this section we consider the case when X_u = {0, 1} or X_u = {0} for all nodes u. Furthermore, we assume that function E is submodular, i.e. each term θ_uv with X_u = X_v = {0, 1} satisfies

    θ_{uv;00} + θ_{uv;11} ≤ θ_{uv;01} + θ_{uv;10}
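The lower-bound property Φ(θ) ≤ min_x E(x) is easy to verify by brute force on a toy instance (assumed numbers, not from the paper):

```python
import itertools

# Two nodes, one edge, two labels (made-up parameters).
nodes = ['a', 'b']
theta_u = {'a': [0.0, 2.0], 'b': [1.0, 0.0]}
theta_uv = {('a', 'b'): [[0.5, 1.0], [2.0, 0.5]]}

def energy(x):
    return (sum(theta_u[u][x[u]] for u in nodes)
            + sum(t[x[u]][x[v]] for (u, v), t in theta_uv.items()))

def phi():
    """Phi(theta) = sum_u min_i theta_{u;i} + sum_{(u,v)} min_{i,j} theta_{uv;ij}."""
    return (sum(min(theta_u[u]) for u in nodes)
            + sum(min(min(row) for row in t) for t in theta_uv.values()))

# Brute-force minimum of the energy over all 4 labelings.
best = min(energy(dict(zip(nodes, lab)))
           for lab in itertools.product(range(2), repeat=len(nodes)))
assert phi() <= best + 1e-9   # Phi(theta) <= min_x E(x)
print(phi(), best)   # 0.5 1.0
```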
Note that the expression θ_{uv;00} + θ_{uv;11} − θ_{uv;01} − θ_{uv;10} is invariant to reparameterization. This case has several important properties (see e.g. [1]). First, it can be solved efficiently by computing a maximum flow in a graph with |V| + 2 nodes and |V| + |E| edges. Second, the algorithm produces an integer optimal solution τ ∈ Λ, or equivalently a labeling x ∈ X which is a global minimum of E. Thus, optimality conditions (3) reduce to

    θ_u(x_u) = min_{i∈X_u} θ_{u;i}    (5a)
    θ_uv(x_u, x_v) = min_{i∈X_u, j∈X_v} θ_{uv;ij}    (5b)

Finally, the reparameterization θ is in a normal form, i.e. the terms θ_uv satisfy

    θ_uv(0, 0) = θ_uv(i_max, j_max) = min_{i,j} θ_{uv;ij}    (6)

where i_max is the maximal label in X_u and j_max is the maximal label in X_v.

Handling singleton nodes. In a practical implementation, nodes u with X_u = {0} can be handled as follows. First, for each boundary edge (u,v) with X_u = {0}, X_v = {0, 1} we choose message M_uv so that θ_uv(0, 0) = θ_uv(0, 1); namely, M_{uv;0} = 0, M_{uv;1} = θ̄_{uv;01} − θ̄_{uv;00}. Then we solve the LP relaxation for nodes u with X_u = {0, 1}, ignoring singleton nodes and their incident edges. In this step only messages M_uv for edges (u→v) with X_u = X_v = {0, 1} are allowed to be modified. Clearly, upon termination conditions (5) and (6) will hold for all nodes and edges.

Restricting messages. It is easy to see that adding a constant to vector M_uv preserves optimality of M. Thus, when solving problem (2b) we can require that M_{uv;0} = 0 for all (u→v) ∈ Ē. This constraint will be used later. Given the term θ̄_uv and a reparameterization θ_uv for edge (u,v) with X_u = X_v = {0, 1}, messages M_uv, M_vu can be computed as follows: (1) add the constant C = θ̄_{uv;00} − θ_{uv;00} to vector θ_uv so that we get θ_{uv;00} = θ̄_{uv;00}; (2) set M_{uv;0} = M_{vu;0} = 0, M_{uv;1} = θ̄_{uv;01} − θ_{uv;01}, M_{vu;1} = θ̄_{uv;10} − θ_{uv;10}.
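The two-step message-recovery recipe above can be sketched as follows (made-up tables; the table theta below was produced from theta_bar with messages M_uv = [0.2, 1.0], M_vu = [0.3, 0.5], so a normalized pair with M_{uv;0} = M_{vu;0} = 0 must exist):

```python
# Original pairwise term theta_bar_{uv;ij} and a reparameterization of it.
theta_bar = [[1.0, 3.0], [2.0, 0.0]]
theta = [[0.5, 1.7], [1.3, -1.5]]   # = theta_bar with M_uv=[0.2,1.0], M_vu=[0.3,0.5]

# Step 1: add C = theta_bar[0][0] - theta[0][0] so that theta[0][0] = theta_bar[0][0].
C = theta_bar[0][0] - theta[0][0]
theta = [[theta[i][j] + C for j in range(2)] for i in range(2)]

# Step 2: read off messages normalized so that M_uv[0] = M_vu[0] = 0.
M_uv = [0.0, theta_bar[0][1] - theta[0][1]]   # M_{uv;1}
M_vu = [0.0, theta_bar[1][0] - theta[1][0]]   # M_{vu;1}

# The recovered messages reproduce the shifted reparameterization exactly:
# theta_{uv;ij} = theta_bar_{uv;ij} - M_uv[j] - M_vu[i].
for i in range(2):
    for j in range(2):
        assert abs(theta_bar[i][j] - M_uv[j] - M_vu[i] - theta[i][j]) < 1e-9
```

The shift in step 1 corresponds to adding a constant to the message vectors, which by the remark above preserves optimality.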
2. FastPD algorithm [6]

From now on we assume that the set X_u is the same for all nodes, X_u = L, and that vector θ̄ satisfies the following for all nodes and edges:

    θ̄_{u;i} ≥ 0    ∀ i ∈ L    (7a)
    θ̄_{uv;ii} = 0    ∀ i ∈ L    (7b)
    θ̄_{uv;ij} > 0    ∀ i, j ∈ L, i ≠ j    (7c)

We also consider the special case when the triangular inequalities hold:

    θ̄_{uv;ij} ≤ θ̄_{uv;ik} + θ̄_{uv;kj}    ∀ i, j, k ∈ L    (7d)

Note that if we add the symmetry condition θ̄_uv(i, j) = θ̄_uv(j, i), then eqs. (7a)-(7c) give the definition of a semi-metric, and eqs. (7a)-(7d) give the definition of a metric. However, the symmetry will not be needed. We denote

    θ̄_uv^min = min_{i,j∈L, i≠j} θ̄_{uv;ij},    θ̄_uv^max = max_{i,j∈L} θ̄_{uv;ij},    μ = 2 max_{(u,v)∈E} θ̄_uv^max / θ̄_uv^min

2.1. General overview

The FastPD algorithm [6] maintains an integer primal configuration x ∈ X and dual variables (messages) M which define the reparameterization θ = θ̄[M]. The following invariants hold during the execution:

    θ_uv(x_u, x_v) = 0    (8a)
    θ_uv(i, i) = 0    ∀ i ∈ L    (8b)

These properties can easily be ensured in the beginning: for any x it is straightforward to find messages M so that (8) holds. At each step, the algorithm selects some label k ∈ L and performs the k-expansion operation described in section 2.2. After the first pass over the labels in L the following condition holds for all nodes u:

    θ_u(x_u) = min_{i∈L} θ_{u;i}    (9)

The algorithm terminates when there was a pass over all labels k ∈ L but configuration x has not changed. At this point an additional property holds for all edges. In the case of triangular inequalities (7d) we have

    θ_{uv;ij} ≥ 0    ∀ i, j ∈ L with i = x_u or j = x_v    (10)

Without triangular inequalities eq. (10) cannot be guaranteed. Instead, the following weaker version holds:

    θ_{uv;ij} ≥ θ̄_{uv;ij} − θ̄_uv^max    ∀ i, j ∈ L with i = x_u or j = x_v    (10′)

The final step of the algorithm is to scale the messages down: M_{uv;j} := M_{uv;j} / μ. After that the μ-relaxed complementary slackness conditions (4) will hold, and therefore labeling x will be within factor μ of the optimum.

2.2. k-expansion step

The input to this step is a label k and a primal-dual pair (x, M).
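A short sketch (toy numbers of my own) of the two quantities just introduced: the approximation factor μ = 2 max_{(u,v)} θ̄_uv^max / θ̄_uv^min, and the final scaling step M := M / μ.

```python
# Made-up pairwise tables over 3 labels; diagonals are 0 as required by (7b).
theta_bar_uv = {
    ('a', 'b'): [[0.0, 1.0, 2.0], [1.0, 0.0, 1.0], [2.0, 1.0, 0.0]],
    ('b', 'c'): [[0.0, 4.0, 2.0], [4.0, 0.0, 2.0], [2.0, 2.0, 0.0]],
}

def ratio(t):
    """theta_max / theta_min for one edge (min taken over off-diagonal entries)."""
    n = len(t)
    t_max = max(t[i][j] for i in range(n) for j in range(n))
    t_min = min(t[i][j] for i in range(n) for j in range(n) if i != j)
    return t_max / t_min

mu = 2 * max(ratio(t) for t in theta_bar_uv.values())
print(mu)   # 2 * max(2.0/1.0, 4.0/2.0) = 4.0

# Final step of the algorithm: scale all messages down by mu.
M = {('a', 'b'): [0.5, 1.0, 0.0], ('b', 'a'): [0.0, 2.0, 1.0]}
M = {e: [m / mu for m in msgs] for e, msgs in M.items()}
assert M[('a', 'b')] == [0.125, 0.25, 0.0]
```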
Let us denote X̃_u = {x_u, k}, and let Ẽ be the restriction of function E to configurations x̃ with x̃_u ∈ X̃_u. Ẽ can be viewed as an energy function of at most binary variables; we assume that label x_u corresponds to 0 and label k corresponds to 1 (if k ≠ x_u). Note that the messages M define a reparameterization not only for the original energy E, but also for the restriction Ẽ.

Submodular case. First, let us consider the case when function Ẽ is submodular. We get this case if, for example, the triangular inequalities (7d) hold. The k-expansion step solves the LP relaxation for the restriction Ẽ, as described in section 1.3. The LP relaxation has an integer solution. Thus, the goal is to find a global minimum of Ẽ and messages M that give an optimal reparameterization for Ẽ:

    x := argmin_{x̃ : x̃_u ∈ X̃_u} Ẽ(x̃)    (12a)
    M := argmax_{M : θ = θ̄[M]} Φ̃(θ)    (12b)

where the objective function in the maximization problem is

    Φ̃(θ) = Σ_{u∈V} min_{i∈X̃_u} θ_{u;i} + Σ_{(u,v)∈E} min_{i∈X̃_u, j∈X̃_v} θ_{uv;ij}

To achieve efficiency, one could start with the current reparameterization θ° = θ̄[M°] when solving (12b). This can be formulated as follows: (i) find a message increment M^δ and the corresponding reparameterization θ = θ°[M^δ] which maximizes Φ̃(θ); (ii) set M := M° + M^δ.

When solving problem (12b), only the components M_{uv;k} for edges (u→v) with x_v ≠ k will be allowed to change. In other words, M_{uv;j} = M°_{uv;j} for labels j ∈ L_v where L_v = (L − {k}) ∪ {x_v}. This is not a severe restriction: as discussed in section 1.3, it still allows finding an optimal solution to (12b).

Function Ẽ may have several global minima. If their cost equals E(x) then x is not updated, i.e. x itself is chosen as the minimizer in (12a). This guarantees convergence, since the number of configurations is finite. Note that in the primal domain the method is equivalent to the expansion move algorithm of Boykov et al. [2].

Non-submodular case. If function Ẽ is non-submodular then the following operations are performed.
For each non-submodular edge (u,v) with θ̄_uv(x_u, x_v) > θ̄_uv(x_u, k) + θ̄_uv(k, x_v), the terms θ̄_uv(x_u, k) and θ̄_uv(k, x_v) are increased by nonnegative constants until we get an equality:

    θ̂_uv(x_u, x_v) = θ̄_uv(x_u, x_v) = θ̂_uv(x_u, k) + θ̂_uv(k, x_v)
where θ̂ is the modified parameter vector. The algorithm then proceeds in the same way as before. Namely, define Ẽ to be the restriction of function E(· | θ̂) to labelings x̃ with x̃_u ∈ X̃_u. It is easy to see that Ẽ is submodular. The LP relaxation of Ẽ is now solved, yielding a pair (x, M) and the corresponding parameter vector θ′ = θ̂[M]. Finally, vector θ̂ is restored to its original value θ̄, and vector θ′ is changed to θ = θ̄[M].

Note that in the primal domain this algorithm is a special case of the more general majorize-minimize technique (see e.g. [8]). It is also equivalent to the truncation trick described in [9]. In the context of the labeling problem, truncation was proposed independently in [4] and in [9]. Both papers state that the method produces a μ-approximation, but in [9] this fact is mentioned without proof.

2.3. Correctness of FastPD

Theorem 3 (cf. [6]). (i) After the k-expansion the reparameterization θ = θ̄[M] satisfies conditions (8). (ii) After the first pass over all labels k, condition (9) holds. (iii) Upon convergence the pair (x, M) satisfies condition (10) in the case of triangular inequalities (7d), or condition (10′) without triangular inequalities.

Proof. Submodular case. By assumption, conditions (8) hold for the initial reparameterization θ° = θ̄[M°]:

    θ°_uv(x_u, x_v) = 0    (13a)
    θ°_uv(i, i) = 0    ∀ i ∈ L    (13b)

We have M_{uv;j} = M°_{uv;j} for labels j ∈ L_v. Therefore,

    θ_u(i) = θ°_u(i)    ∀ i ∈ L_u    (13c)
    θ_uv(i, j) = θ°_uv(i, j)    ∀ i ∈ L_u, j ∈ L_v    (13d)

Eqs. (13d) and (13a) yield

    θ_uv(x_u, x_v) = 0    (13e)

(here x still denotes the labeling before the step). Using eq. (13e) and condition (6) of the normal form we obtain

    θ_uv(k, k) = 0    (13f)
    θ_uv(x_u, k) ≥ 0,    θ_uv(k, x_v) ≥ 0    (13g)

Finally, from the optimality conditions (5), applied to the labeling x produced by the step, we obtain

    θ_u(x_u) ≤ θ_{u;i}    ∀ i ∈ X̃_u    (13h)
    θ_uv(x_u, x_v) = 0    (13i)

Non-submodular case. Let θ° = θ̄[M°] and θ = θ̄[M] be the reparameterizations before modifying θ̄ and after restoring it, respectively. We claim that conditions (13) hold, except that (13g) is replaced with the following:

    θ_uv(x_u, k) ≥ θ̄_uv(x_u, k) − θ̄_uv^max,    θ_uv(k, x_v) ≥ θ̄_uv(k, x_v) − θ̄_uv^max    (13g′)

Indeed, eqs.
(13a)-(13f) and (13h)-(13i) can be shown in the same way as before, using the fact that θ′_uv(k, k) = θ_uv(k, k) and θ′_uv(x_u, x_v) = θ_uv(x_u, x_v), where θ′ = θ̂[M] is the reparameterization of the modified vector θ̂ obtained after solving the LP relaxation. Instead of (13g) we now have

    θ′_uv(x_u, k) ≥ 0,    θ′_uv(k, x_v) ≥ 0

If the term for edge (u,v) is submodular then θ_uv = θ′_uv, so (13g′) clearly holds. Otherwise,

    M_{vu;x_u} + M_{uv;k} = θ̂_uv(x_u, k) − θ′_uv(x_u, k) ≤ θ̂_uv(x_u, k) = θ̄_uv(x_u, x_v) − θ̂_uv(k, x_v) ≤ θ̄_uv(x_u, x_v) ≤ θ̄_uv^max

which implies the first inequality in (13g′). The second inequality follows from the first one applied to the reverse edge.

We now prove the theorem using equations (13).

Proof of (i). We already showed (8a) (see (13i)). Eq. (8b) follows from (13b), (13d) and (13f).

Proof of (ii). Let L′ be the set of labels processed so far at least once. Let us prove by induction on the number of steps that θ_u(x_u) ≤ θ_u(i) for each node u ∈ V and label i ∈ L′. The base of the induction is trivial: in the beginning L′ is empty. Suppose that the claim holds for vector θ° and labeling x° in the beginning of the k-expansion step. The fact that θ_u(x_u) ≤ θ_u(i) for label i = k follows from (13h). For a label i ∈ L′ − {k} we have

    θ_u(x_u) ≤ θ_u(x°_u) = θ°_u(x°_u) ≤ θ°_u(i) = θ_u(i)

The first inequality follows from (13h), the second inequality is by the induction hypothesis, and the two equalities follow from (13c).

Proof of (iii). We consider only the case with triangular inequalities (7d); the proof for the general case is entirely analogous, we just need to use eq. (13g′) instead of (13g). Consider the last iteration, i.e. a complete pass over labels k ∈ L in which configuration x has not changed. Let L′ be the set of labels processed in this iteration. Let us prove by induction on the number of steps that θ_uv(x_u, j) ≥ 0 for labels j ∈ L′. The base of the induction is trivial: in the beginning L′ is empty. Suppose that the claim holds for vector θ° in the beginning of the k-expansion step. Eqs. (13d) and (13g) imply
that the claim also holds for vector θ and set L′ ∪ {k} after the step. The analogous fact for the terms θ_uv(i, x_v) can be shown in the same way.

Theorem 4 (cf. [5, 6]). If reparameterization θ = θ̄[M] satisfies conditions (8), (9) and (10′), then after message scaling the μ-relaxed slackness conditions (4) hold.

Proof. Let M′ = M / μ be the messages after scaling and θ′ = θ̄[M′] the corresponding reparameterization.

Proof of (4a). Consider a node u. There holds

    θ′_{u;i} = θ̄_{u;i} + (1/μ) Σ_{(u,v)∈E} M_{vu;i} = θ̄_{u;i} + (1/μ)(θ_{u;i} − θ̄_{u;i}) = (1/μ) θ_{u;i} + (1 − 1/μ) θ̄_{u;i}

Using this equality, eq. (9) and the nonnegativity of θ̄_{u;i} (7a), we obtain

    θ′_u(x_u) = (1/μ) min_{i∈L} θ_{u;i} + (1 − 1/μ) θ̄_u(x_u) ≤ min_{i∈L} θ′_{u;i} + (1 − 1/μ) θ̄_u(x_u)

Proof of (4b). Now consider an edge (u,v). Let us show that θ′_{uv;ij} ≥ 0 for all i, j ∈ L. If i = j then this follows from (8b). Suppose that i ≠ j. From (10′) and (8a) we get

    M_{vu;x_u} + M_{uv;j} ≤ θ̄_uv^max
    M_{vu;i} + M_{uv;x_v} ≤ θ̄_uv^max
    M_{vu;x_u} + M_{uv;x_v} = θ̄_uv(x_u, x_v) ≥ 0

Adding the first two inequalities and subtracting the third one yields M_{vu;i} + M_{uv;j} ≤ 2 θ̄_uv^max, and thus

    M′_{vu;i} + M′_{uv;j} = (1/μ)(M_{vu;i} + M_{uv;j}) ≤ (2/μ) θ̄_uv^max ≤ θ̄_uv^min

Therefore θ′_{uv;ij} = θ̄_{uv;ij} − M′_{vu;i} − M′_{uv;j} ≥ θ̄_{uv;ij} − θ̄_uv^min ≥ 0, as claimed. Thus, min_{i,j∈L} θ′_{uv;ij} ≥ 0. Finally, using (8a) we get

    θ′_uv(x_u, x_v) = θ̄_uv(x_u, x_v) − (1/μ)(M_{uv;x_v} + M_{vu;x_u}) = (1 − 1/μ) θ̄_uv(x_u, x_v)

which together with min_{i,j∈L} θ′_{uv;ij} ≥ 0 gives (4b).

References

[1] E. Boros and P. L. Hammer. Pseudo-Boolean optimization. Discrete Applied Mathematics, 123(1-3):155-225, November 2002.

[2] Y. Boykov, O. Veksler, and R. Zabih. Fast approximate energy minimization via graph cuts. PAMI, 23(11):1222-1239, November 2001.

[3] C. Chekuri, S. Khanna, J. Naor, and L. Zosin. Approximation algorithms for the metric labeling problem via a new linear programming formulation. In SODA, pages 109-118, 2001.

[4] N. Komodakis and G. Tziritas. Approximate labeling via the primal-dual schema. Technical Report CSD-TR, University of Crete, February 2005.

[5] N. Komodakis and G. Tziritas. Approximate labeling via graph cuts based on linear programming. PAMI, to appear.

[6] N. Komodakis, G. Tziritas, and N. Paragios. Fast, approximately optimal solutions for single and dynamic MRFs. In CVPR, 2007.

[7] A. Koster, C. van Hoesel, and A. Kolen.
The partial constraint satisfaction problem: Facets and lifting theorems. Operations Research Letters, 23(3-5):89-97, 1998.

[8] K. Lange. Optimization. Springer-Verlag, New York, 2004.

[9] C. Rother, S. Kumar, V. Kolmogorov, and A. Blake. Digital tapestry. In CVPR, 2005.

[10] M. I. Schlesinger. Syntactic analysis of two-dimensional visual signals in noisy conditions. Kibernetika, 4:113-130, 1976.

[11] M. J. Wainwright, T. S. Jaakkola, and A. S. Willsky. MAP estimation via agreement on trees: Message-passing and linear-programming approaches. IEEE Transactions on Information Theory, 51(11):3697-3717, November 2005.

[12] T. Werner. A linear programming approach to max-sum problem: A review. PAMI, 29(7):1165-1179, July 2007.

Acknowledgements

I thank Pushmeet Kohli and Philip Torr for discussions on connections between the primal-dual schema and tree-reweighted message passing.
Revisiting the Limits of MAP Inference by MWSS on Perfect Graphs Adrian Weller University of Cambridge Abstract A recent, promising approach to identifying a configuration of a discrete graphical model
More informationSubmodularization for Binary Pairwise Energies
IEEE conference on Computer Vision and Pattern Recognition (CVPR), Columbus, Ohio, 214 p. 1 Submodularization for Binary Pairwise Energies Lena Gorelick Yuri Boykov Olga Veksler Computer Science Department
More informationRevisiting the Limits of MAP Inference by MWSS on Perfect Graphs
Revisiting the Limits of MAP Inference by MWSS on Perfect Graphs Adrian Weller University of Cambridge CP 2015 Cork, Ireland Slides and full paper at http://mlg.eng.cam.ac.uk/adrian/ 1 / 21 Motivation:
More informationPartial cubes: structures, characterizations, and constructions
Partial cubes: structures, characterizations, and constructions Sergei Ovchinnikov San Francisco State University, Mathematics Department, 1600 Holloway Ave., San Francisco, CA 94132 Abstract Partial cubes
More informationNetwork Design and Game Theory Spring 2008 Lecture 6
Network Design and Game Theory Spring 2008 Lecture 6 Guest Lecturer: Aaron Archer Instructor: Mohammad T. Hajiaghayi Scribe: Fengming Wang March 3, 2008 1 Overview We study the Primal-dual, Lagrangian
More informationVariational Inference (11/04/13)
STA561: Probabilistic machine learning Variational Inference (11/04/13) Lecturer: Barbara Engelhardt Scribes: Matt Dickenson, Alireza Samany, Tracy Schifeling 1 Introduction In this lecture we will further
More informationInference Methods for CRFs with Co-occurrence Statistics
Noname manuscript No. (will be inserted by the editor) Inference Methods for CRFs with Co-occurrence Statistics L ubor Ladický 1 Chris Russell 1 Pushmeet Kohli Philip H. S. Torr the date of receipt and
More informationLecture 15 (Oct 6): LP Duality
CMPUT 675: Approximation Algorithms Fall 2014 Lecturer: Zachary Friggstad Lecture 15 (Oct 6): LP Duality Scribe: Zachary Friggstad 15.1 Introduction by Example Given a linear program and a feasible solution
More informationLinear Programming: Simplex
Linear Programming: Simplex Stephen J. Wright 1 2 Computer Sciences Department, University of Wisconsin-Madison. IMA, August 2016 Stephen Wright (UW-Madison) Linear Programming: Simplex IMA, August 2016
More informationSemidefinite and Second Order Cone Programming Seminar Fall 2001 Lecture 5
Semidefinite and Second Order Cone Programming Seminar Fall 2001 Lecture 5 Instructor: Farid Alizadeh Scribe: Anton Riabov 10/08/2001 1 Overview We continue studying the maximum eigenvalue SDP, and generalize
More informationUnifying Local Consistency and MAX SAT Relaxations for Scalable Inference with Rounding Guarantees
for Scalable Inference with Rounding Guarantees Stephen H. Bach Bert Huang Lise Getoor University of Maryland Virginia Tech University of California, Santa Cruz Abstract We prove the equivalence of first-order
More information17 Variational Inference
Massachusetts Institute of Technology Department of Electrical Engineering and Computer Science 6.438 Algorithms for Inference Fall 2014 17 Variational Inference Prompted by loopy graphs for which exact
More informationAccelerated dual decomposition for MAP inference
Vladimir Jojic vjojic@csstanfordedu Stephen Gould sgould@stanfordedu Daphne Koller koller@csstanfordedu Computer Science Department, 353 Serra Mall, Stanford University, Stanford, CA 94305, USA Abstract
More informationSection Notes 8. Integer Programming II. Applied Math 121. Week of April 5, expand your knowledge of big M s and logical constraints.
Section Notes 8 Integer Programming II Applied Math 121 Week of April 5, 2010 Goals for the week understand IP relaxations be able to determine the relative strength of formulations understand the branch
More informationTruncated Max-of-Convex Models Technical Report
Truncated Max-of-Convex Models Technical Report Pankaj Pansari University of Oxford The Alan Turing Institute pankaj@robots.ox.ac.uk M. Pawan Kumar University of Oxford The Alan Turing Institute pawan@robots.ox.ac.uk
More informationTopic: Balanced Cut, Sparsest Cut, and Metric Embeddings Date: 3/21/2007
CS880: Approximations Algorithms Scribe: Tom Watson Lecturer: Shuchi Chawla Topic: Balanced Cut, Sparsest Cut, and Metric Embeddings Date: 3/21/2007 In the last lecture, we described an O(log k log D)-approximation
More informationQuadratization of symmetric pseudo-boolean functions
of symmetric pseudo-boolean functions Yves Crama with Martin Anthony, Endre Boros and Aritanan Gruber HEC Management School, University of Liège Liblice, April 2013 of symmetric pseudo-boolean functions
More informationApproximating the Partition Function by Deleting and then Correcting for Model Edges (Extended Abstract)
Approximating the Partition Function by Deleting and then Correcting for Model Edges (Extended Abstract) Arthur Choi and Adnan Darwiche Computer Science Department University of California, Los Angeles
More informationMATH 523: Primal-Dual Maximum Weight Matching Algorithm
MATH 523: Primal-Dual Maximum Weight Matching Algorithm We start with a graph G = (V, E) with edge weights {c(e) : e E} Primal P: max {c(e)x(e) : e E} subject to {x(e) : e hits i} + yi = 1 for all i V
More informationMin-Max Message Passing and Local Consistency in Constraint Networks
Min-Max Message Passing and Local Consistency in Constraint Networks Hong Xu, T. K. Satish Kumar, and Sven Koenig University of Southern California, Los Angeles, CA 90089, USA hongx@usc.edu tkskwork@gmail.com
More information14 : Theory of Variational Inference: Inner and Outer Approximation
10-708: Probabilistic Graphical Models 10-708, Spring 2014 14 : Theory of Variational Inference: Inner and Outer Approximation Lecturer: Eric P. Xing Scribes: Yu-Hsin Kuo, Amos Ng 1 Introduction Last lecture
More informationCS264: Beyond Worst-Case Analysis Lecture #11: LP Decoding
CS264: Beyond Worst-Case Analysis Lecture #11: LP Decoding Tim Roughgarden October 29, 2014 1 Preamble This lecture covers our final subtopic within the exact and approximate recovery part of the course.
More informationApproximate Inference in Graphical Models
CENTER FOR MACHINE PERCEPTION Approximate Inference in Graphical Models CZECH TECHNICAL UNIVERSITY IN PRAGUE Tomáš Werner October 20, 2014 HABILITATION THESIS Center for Machine Perception, Department
More informationarxiv: v1 [cs.ds] 29 Aug 2015
APPROXIMATING (UNWEIGHTED) TREE AUGMENTATION VIA LIFT-AND-PROJECT, PART I: STEMLESS TAP JOSEPH CHERIYAN AND ZHIHAN GAO arxiv:1508.07504v1 [cs.ds] 9 Aug 015 Abstract. In Part I, we study a special case
More informationA Linear Programming Formulation and Approximation Algorithms for the Metric Labeling Problem
University of Pennsylvania ScholarlyCommons Departmental Papers (CIS) Department of Computer & Information Science January 2005 A Linear Programming Formulation and Approximation Algorithms for the Metric
More informationSTUDY OF PERMUTATION MATRICES BASED LDPC CODE CONSTRUCTION
EE229B PROJECT REPORT STUDY OF PERMUTATION MATRICES BASED LDPC CODE CONSTRUCTION Zhengya Zhang SID: 16827455 zyzhang@eecs.berkeley.edu 1 MOTIVATION Permutation matrices refer to the square matrices with
More informationIntelligent Systems:
Intelligent Systems: Undirected Graphical models (Factor Graphs) (2 lectures) Carsten Rother 15/01/2015 Intelligent Systems: Probabilistic Inference in DGM and UGM Roadmap for next two lectures Definition
More informationTopic: Primal-Dual Algorithms Date: We finished our discussion of randomized rounding and began talking about LP Duality.
CS787: Advanced Algorithms Scribe: Amanda Burton, Leah Kluegel Lecturer: Shuchi Chawla Topic: Primal-Dual Algorithms Date: 10-17-07 14.1 Last Time We finished our discussion of randomized rounding and
More informationMinimum cost transportation problem
Minimum cost transportation problem Complements of Operations Research Giovanni Righini Università degli Studi di Milano Definitions The minimum cost transportation problem is a special case of the minimum
More informationConvex Optimization of Graph Laplacian Eigenvalues
Convex Optimization of Graph Laplacian Eigenvalues Stephen Boyd Abstract. We consider the problem of choosing the edge weights of an undirected graph so as to maximize or minimize some function of the
More informationLecture 10 February 4, 2013
UBC CPSC 536N: Sparse Approximations Winter 2013 Prof Nick Harvey Lecture 10 February 4, 2013 Scribe: Alexandre Fréchette This lecture is about spanning trees and their polyhedral representation Throughout
More informationMANY early vision problems can be naturally formulated
1568 IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, VOL. 28, NO. 10, OCTOBER 2006 Convergent Tree-Reweighted Message Passing for Energy Minimization Vladimir Kolmogorov, Member, IEEE Abstract
More informationExact Bounds for Steepest Descent Algorithms of L-convex Function Minimization
Exact Bounds for Steepest Descent Algorithms of L-convex Function Minimization Kazuo Murota Graduate School of Information Science and Technology, University of Tokyo, Tokyo 113-8656, Japan Akiyoshi Shioura
More informationRepresentations of All Solutions of Boolean Programming Problems
Representations of All Solutions of Boolean Programming Problems Utz-Uwe Haus and Carla Michini Institute for Operations Research Department of Mathematics ETH Zurich Rämistr. 101, 8092 Zürich, Switzerland
More informationInference Methods for CRFs with Co-occurrence Statistics
Int J Comput Vis (2013) 103:213 225 DOI 10.1007/s11263-012-0583-y Inference Methods for CRFs with Co-occurrence Statistics L ubor Ladický Chris Russell Pushmeet Kohli Philip H. S. Torr Received: 21 January
More informationBELIEF-PROPAGATION FOR WEIGHTED b-matchings ON ARBITRARY GRAPHS AND ITS RELATION TO LINEAR PROGRAMS WITH INTEGER SOLUTIONS
BELIEF-PROPAGATION FOR WEIGHTED b-matchings ON ARBITRARY GRAPHS AND ITS RELATION TO LINEAR PROGRAMS WITH INTEGER SOLUTIONS MOHSEN BAYATI, CHRISTIAN BORGS, JENNIFER CHAYES, AND RICCARDO ZECCHINA Abstract.
More informationON COST MATRICES WITH TWO AND THREE DISTINCT VALUES OF HAMILTONIAN PATHS AND CYCLES
ON COST MATRICES WITH TWO AND THREE DISTINCT VALUES OF HAMILTONIAN PATHS AND CYCLES SANTOSH N. KABADI AND ABRAHAM P. PUNNEN Abstract. Polynomially testable characterization of cost matrices associated
More informationA GENERAL FORMULATION FOR SUPPORT VECTOR MACHINES. Wei Chu, S. Sathiya Keerthi, Chong Jin Ong
A GENERAL FORMULATION FOR SUPPORT VECTOR MACHINES Wei Chu, S. Sathiya Keerthi, Chong Jin Ong Control Division, Department of Mechanical Engineering, National University of Singapore 0 Kent Ridge Crescent,
More informationThe Steiner Network Problem
The Steiner Network Problem Pekka Orponen T-79.7001 Postgraduate Course on Theoretical Computer Science 7.4.2008 Outline 1. The Steiner Network Problem Linear programming formulation LP relaxation 2. The
More information