Discrete Inference and Learning Lecture 3
1 Discrete Inference and Learning Lecture 3 MVA http://thoth.inrialpes.fr/~alahari/disinflearn Slides based on material from Pushmeet Kohli, Nikos Komodakis, M. Pawan Kumar
2 Recap The st-mincut problem Connection between st-mincut and energy minimization? What problems can we solve using st-mincut? st-mincut based Move algorithms
3 St-mincut based Move algorithms E(y) = Σi θi(yi) + Σij θij(yi,yj), y ∈ L^N, L = {l1, l2, ..., lk}. Commonly used for solving non-submodular multi-label problems. Extremely efficient and produce good solutions. Not exact: produce local optima
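As a quick sketch (not from the slides), the energy above can be evaluated for a candidate labeling directly; the variable names below are illustrative:

```python
# Sketch: evaluate E(y) = sum_i theta_i(y_i) + sum_{ij} theta_ij(y_i, y_j)
# for a labeling y. unary[i][l] is the cost of label l at node i;
# pairwise[(i, j)][la][lb] is the cost of the edge (i, j).
def energy(y, unary, pairwise, edges):
    e = sum(unary[i][y[i]] for i in range(len(y)))
    e += sum(pairwise[(i, j)][y[i]][y[j]] for (i, j) in edges)
    return e

# Tiny 2-node example with a Potts edge (lambda = 2)
unary = [[1, 2], [3, 0]]
pairwise = {(0, 1): [[0, 2], [2, 0]]}
print(energy((0, 1), unary, pairwise, [(0, 1)]))  # 1 + 0 + 2 = 3
```

Move-making algorithms repeatedly minimize this quantity over a restricted move space rather than over all of L^N.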
4 Move Making Algorithms Current Solution Search Neighbourhood Optimal Move Energy Solution Space
5 Move Making Algorithms Current Solution Search Neighbourhood Optimal Move Energy Solution Space
6 Computing the Optimal Move Em(t), current solution yc, search neighbourhood, optimal move. Key Property: bigger move space → better solutions, but finding the optimal move becomes hard. Move Space, Solution Space
7 Moves using Graph Cuts Expansion and Swap move algorithms [Boykov, Veksler and Zabih, PAMI 2001]. Makes a series of changes to the solution (moves); each move results in a solution with smaller energy. Move Space (t): 2^N; Space of Solutions (y): L^N, where N = number of variables and L = number of labels. Current Solution, Search Neighbourhood
8 Moves using Graph Cuts Expansion and Swap move algorithms [Boykov Veksler and Zabih, PAMI 2001] Makes a series of changes to the solution (moves) Each move results in a solution with smaller energy Current Solution Move to new solution Construct a move function Minimize move function to get optimal move How to minimize move functions?
9 General Binary Moves y = t y1 + (1−t) y2, where y is the new solution, y1 the current solution and y2 a second solution. Em(t) = E(t y1 + (1−t) y2). Minimize over move variables t to get the optimal move. Move energy is a submodular QPBF (exact minimization possible). Boykov, Veksler and Zabih, PAMI 2001
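To make the move energy concrete, here is a brute-force sketch of such a binary (fusion-style) move on a toy problem; in practice the submodular QPBF is minimized exactly with one st-mincut, but for a handful of variables we can enumerate t directly (all numbers below are illustrative, not from the slides):

```python
import itertools

def energy(y, unary, pairwise, edges):
    return (sum(unary[i][y[i]] for i in range(len(y)))
            + sum(pairwise[e][y[e[0]]][y[e[1]]] for e in edges))

def fuse(y1, y2, unary, pairwise, edges):
    """Brute-force the move energy Em(t) = E(t*y1 + (1-t)*y2) over t in {0,1}^n."""
    n = len(y1)
    best = None
    for t in itertools.product([0, 1], repeat=n):
        y = tuple(y1[i] if t[i] else y2[i] for i in range(n))
        e = energy(y, unary, pairwise, edges)
        if best is None or e < best[0]:
            best = (e, y)
    return best

unary = [[0, 5, 5], [5, 5, 0], [0, 5, 5]]            # 3 nodes, labels {0,1,2}
potts = [[0 if a == b else 1 for b in range(3)] for a in range(3)]
pairwise = {(0, 1): potts, (1, 2): potts}
edges = [(0, 1), (1, 2)]
e, y = fuse((0, 0, 0), (2, 2, 2), unary, pairwise, edges)
print(e, y)  # energy 2 with y = (0, 2, 0)
```

The fused solution mixes labels from both inputs per variable, so its energy is never worse than either input labeling.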
10 Expansion Move Variables take label α or retain current label Status: Initialize with Tree Tree Ground House Sky [Boykov, Veksler, Zabih]
11 Expansion Move Variables take label α or retain current label Status: Expand Ground Tree Ground House Sky [Boykov, Veksler, Zabih]
12 Expansion Move Variables take label α or retain current label Status: Expand House Tree Ground House Sky [Boykov, Veksler, Zabih]
13 Expansion Move Variables take label α or retain current label Status: Expand Sky Tree Ground House Sky [Boykov, Veksler, Zabih]
14 Expansion Move Variables take label α or retain current label. Move energy is submodular if: Unary potentials: arbitrary; Pairwise potentials: metric. Semi-metric conditions: θij(la,lb) ≥ 0, θij(la,lb) = 0 iff a = b. Examples: Potts model, truncated linear. Cannot solve truncated quadratic. [Boykov, Veksler, Zabih]
15 Expansion Move Variables take label α or retain current label. Move energy is submodular if: Unary potentials: arbitrary; Pairwise potentials: metric. Triangle inequality: θij(la,lb) + θij(lb,lc) ≥ θij(la,lc). Examples: Potts model, truncated linear. Cannot solve truncated quadratic. [Boykov, Veksler, Zabih]
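These conditions are easy to check mechanically. A small sketch (illustrative, not from the slides) that classifies a pairwise potential as metric or semi-metric, confirming that Potts and truncated linear are metrics while truncated quadratic violates the triangle inequality:

```python
import itertools

def check_pairwise(theta, labels):
    """Return (is_semi_metric, is_metric) for a pairwise potential theta(a, b)."""
    semi = all(theta(a, b) >= 0 for a in labels for b in labels) and \
           all((theta(a, b) == 0) == (a == b) for a in labels for b in labels)
    tri = all(theta(a, c) <= theta(a, b) + theta(b, c)
              for a, b, c in itertools.product(labels, repeat=3))
    return semi, semi and tri

L = range(5)
potts = lambda a, b: 0 if a == b else 1
trunc_lin = lambda a, b: min(abs(a - b), 2)        # truncated linear
trunc_quad = lambda a, b: min((a - b) ** 2, 16)    # truncated quadratic

print(check_pairwise(potts, L))       # (True, True):  metric -> expansion applies
print(check_pairwise(trunc_lin, L))   # (True, True):  metric -> expansion applies
print(check_pairwise(trunc_quad, L))  # (True, False): semi-metric only -> swap applies
```

For truncated quadratic, e.g. θ(0,4) = 16 > θ(0,2) + θ(2,4) = 8, which is exactly why expansion cannot be used but swap can.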
16 Swap Move Variables labeled α, β can swap their labels Swap Sky, House Tree Ground House Sky [Boykov, Veksler, Zabih]
17 Swap Move Variables labeled α, β can swap their labels Swap Sky, House Tree Ground House Sky [Boykov, Veksler, Zabih]
18 Swap Move Variables labeled α, β can swap their labels. Move energy is submodular if: Unary potentials: arbitrary; Pairwise potentials: semi-metric. θij(la,lb) ≥ 0, θij(la,lb) = 0 iff a = b. Examples: Potts model, truncated convex. [Boykov, Veksler, Zabih]
19 Summary Labelling Problem Exact Transformation (global optimum) Or Relaxed transformation (partially optimal) Submodular Quadratic Pseudoboolean Function S st-mincut Sub-problem T Move making algorithms
20 Where do we stand? Grid graph, 2-label submodular: use graph cuts; metric: use expansion; otherwise: use TRW, dual decomposition, relaxation. Chain/tree, 2- or multi-label: use BP
21 Dynamic Energy Minimization
22 Image Segmentation in Video st-mincut graph: source s (x = 0), sink t (x = 1), n-links with weights wpq; the st-cut minimizing E(x) gives the global optimum x*. Video frame, Flow, Global Optimum
23 Image Segmentation in Video Video frame Flow Global Optimum
24 Dynamic Energy Minimization EA → minimize → SA; EB → computationally expensive operation → SB. Can we do better? Recycling solutions. Boykov & Jolly ICCV 01, Kohli & Torr (ICCV05, PAMI07)
25 Dynamic Energy Minimization EA → minimize → SA. Reuse flow: A and B are similar, so only the differences between A and B need to be processed. EB* (simpler energy, cheaper operation) replaces EB (computationally expensive operation) → SB. Reparameterization. Boykov & Jolly ICCV 01, Kohli & Torr (ICCV05, PAMI07)
26 Dynamic Energy Minimization Original Energy E(a1,a2) = 2a1 + 5ā1 + 9a2 + 4ā2 + 2a1ā2 + ā1a2. Reparameterized Energy E(a1,a2) = 8 + ā1 + 3a2 + 3ā1a2. New Energy E(a1,a2) = 2a1 + 5ā1 + 9a2 + 4ā2 + 7a1ā2 + ā1a2. New Reparameterized Energy E(a1,a2) = 8 + ā1 + 3a2 + 3ā1a2 + 5a1ā2. Boykov & Jolly ICCV 01, Kohli & Torr (ICCV05, PAMI07)
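A reparameterization must assign the same energy to every assignment; the four expressions on this slide can be checked exhaustively over {0,1}^2 (here ā is written nb(a) = 1 − a):

```python
from itertools import product

# Verify the slide's reparameterizations: each pair of forms agrees on
# every assignment (a1, a2) in {0,1}^2.
nb = lambda a: 1 - a

E_orig = lambda a1, a2: 2*a1 + 5*nb(a1) + 9*a2 + 4*nb(a2) + 2*a1*nb(a2) + nb(a1)*a2
E_rep  = lambda a1, a2: 8 + nb(a1) + 3*a2 + 3*nb(a1)*a2
E_new  = lambda a1, a2: 2*a1 + 5*nb(a1) + 9*a2 + 4*nb(a2) + 7*a1*nb(a2) + nb(a1)*a2
E_new_rep = lambda a1, a2: 8 + nb(a1) + 3*a2 + 3*nb(a1)*a2 + 5*a1*nb(a2)

for a1, a2 in product([0, 1], repeat=2):
    assert E_orig(a1, a2) == E_rep(a1, a2)
    assert E_new(a1, a2) == E_new_rep(a1, a2)
print("both reparameterizations verified")
```

Note that the new energy differs from the original only in one coefficient (2a1ā2 becomes 7a1ā2), which is why the new reparameterized form is the old one plus a single 5a1ā2 term: only the changed part needs to be recomputed.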
27 Outline Reparameterization (lecture 1) Belief Propagation (lecture 1) Tree-reweighted Message Passing Integer Programming Formulation Linear Programming Relaxation and its Dual Convergent Solution for Dual Computational Issues and Theoretical Properties
28 First Recap of Integer Linear Program
29 Integer Linear Program max_x c^T x s.t. Ax ≤ b, x is an integer vector. Every element of x is an integer
30 Integer Linear Program max_x c^T x s.t. Ax ≤ b, x ∈ Z^n. Every element of x is an integer
31 Example max_x x1 + x2 s.t. x1 ≥ 0, x2 ≥ 0, 4x1 − x2 ≤ 8, 2x1 + x2 ≤ 10, 5x1 − 2x2 ≥ −2, x ∈ Z^n
32 Example x1 ≥ 0, x2 ≥ 0
33 Example x1 ≥ 0, x2 ≥ 0, 4x1 − x2 = 8
34 Example x1 ≥ 0, x2 ≥ 0, 4x1 − x2 ≤ 8
35 Example x1 ≥ 0, x2 ≥ 0, 4x1 − x2 ≤ 8, 2x1 + x2 ≤ 10, 5x1 − 2x2 ≥ −2
36 Example Vertices (0,0), (0,1), (2,0), (3,4), (2,6). x1 ≥ 0, x2 ≥ 0, 4x1 − x2 ≤ 8, 2x1 + x2 ≤ 10, 5x1 − 2x2 ≥ −2, x ∈ Z^n, max_x c^T x
37 Example Vertices (0,0), (0,1), (2,0), (3,4), (2,6). Why? True in general? x1 ≥ 0, x2 ≥ 0, 4x1 − x2 ≤ 8, 2x1 + x2 ≤ 10, 5x1 − 2x2 ≥ −2, x ∈ Z^n, max_x c^T x
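This tiny integer program can be brute-forced; the constraint set below is reconstructed from the vertices listed on the slides, and the maximizer indeed lands on the vertex (2,6):

```python
# Brute-force the integer program from the example:
#   max x1 + x2  s.t.  x1 >= 0, x2 >= 0, 4x1 - x2 <= 8,
#                      2x1 + x2 <= 10, 5x1 - 2x2 >= -2,  x integer.
feasible = [(x1, x2)
            for x1 in range(11) for x2 in range(11)
            if 4*x1 - x2 <= 8 and 2*x1 + x2 <= 10 and 5*x1 - 2*x2 >= -2]
best = max(feasible, key=lambda p: p[0] + p[1])
print(best, best[0] + best[1])  # (2, 6) with objective 8 -- a vertex of the polytope
```

The enumeration range (0..10) is an assumption that safely covers the bounded feasible region here; it is not part of the problem statement.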
38 Why is the solution at a vertex?
39 Integer Programming Formulation Unary Potentials θa;0 = 5, θb;0 = , θa;1 = 2, θb;1 = 4 (rows: label l0, label l1; variables Va, Vb). Labeling f(a) = 1, f(b) = 0 ⇔ ya;0 = 0, ya;1 = 1, yb;0 = 1, yb;1 = 0. Any f(·) has equivalent boolean variables ya;i
40 Integer Programming Formulation Unary Potentials θa;0 = 5, θb;0 = , θa;1 = 2, θb;1 = 4 (rows: label l0, label l1; variables Va, Vb). Labeling f(a) = 1, f(b) = 0 ⇔ ya;0 = 0, ya;1 = 1, yb;0 = 1, yb;1 = 0. Find the optimal variables ya;i
41 Integer Programming Formulation Unary Potentials θa;0 = 5, θb;0 = , θa;1 = 2, θb;1 = 4 (rows: label l0, label l1; variables Va, Vb). Sum of Unary Potentials: Σa Σi θa;i ya;i, with ya;i ∈ {0,1} for all Va, li and Σi ya;i = 1 for all Va
42 Integer Programming Formulation Pairwise Potentials θab;00 = 0, θab;01 = θab;10 = 1, θab;11 = 0 (rows: label l0, label l1; variables Va, Vb). Sum of Pairwise Potentials: Σ(a,b) Σik θab;ik ya;i yb;k, with ya;i ∈ {0,1}, Σi ya;i = 1
43 Integer Programming Formulation Pairwise Potentials θab;00 = 0, θab;01 = θab;10 = 1, θab;11 = 0. Sum of Pairwise Potentials: Σ(a,b) Σik θab;ik yab;ik, with ya;i ∈ {0,1}, yab;ik = ya;i yb;k, Σi ya;i = 1
44 Integer Programming Formulation min Σa Σi θa;i ya;i + Σ(a,b) Σik θab;ik yab;ik s.t. ya;i ∈ {0,1}, Σi ya;i = 1, yab;ik = ya;i yb;k
45 Integer Programming Formulation min θ^T y s.t. ya;i ∈ {0,1}, Σi ya;i = 1, yab;ik = ya;i yb;k, where θ = [ ... θa;i ... ; ... θab;ik ... ] and y = [ ... ya;i ... ; ... yab;ik ... ]
46 Integer Programming Formulation min θ^T y s.t. ya;i ∈ {0,1}, Σi ya;i = 1, yab;ik = ya;i yb;k. Solve to obtain MAP labeling y*
47 Integer Programming Formulation min θ^T y s.t. ya;i ∈ {0,1}, Σi ya;i = 1, yab;ik = ya;i yb;k. But we can't solve it in general
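For the two-variable example above the ILP is small enough to brute-force over labelings. The slide's value of θb;0 was lost in transcription, so the value 2 used below is an assumption made only for illustration:

```python
from itertools import product

# Brute-force the MAP ILP for the 2-variable, 2-label example.
theta_a = [5, 2]                 # theta_a;0, theta_a;1
theta_b = [2, 4]                 # theta_b;0 (ASSUMED: value missing), theta_b;1
theta_ab = [[0, 1], [1, 0]]      # theta_ab;00, theta_ab;01; theta_ab;10, theta_ab;11

best = min(product([0, 1], repeat=2),
           key=lambda y: theta_a[y[0]] + theta_b[y[1]] + theta_ab[y[0]][y[1]])
print(best)  # (1, 0): matches the labeling f(a) = 1, f(b) = 0 on the slides
```

With the assumed θb;0, the minimizer agrees with the labeling shown on the slides; in general, solving this ILP exactly is NP-hard, which is what motivates the relaxation below.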
48 Outline Reparameterization (lecture 1) Belief Propagation (lecture 1) Tree-reweighted Message Passing Integer Programming Formulation Linear Programming Relaxation and its Dual Convergent Solution for Dual Computational Issues and Theoretical Properties
49 Linear Programming Relaxation min θ^T y s.t. ya;i ∈ {0,1}, Σi ya;i = 1, yab;ik = ya;i yb;k. Two reasons why we can't solve this
50 Linear Programming Relaxation min θ^T y s.t. ya;i ∈ [0,1], Σi ya;i = 1, yab;ik = ya;i yb;k. One reason why we can't solve this
51 Linear Programming Relaxation min θ^T y s.t. ya;i ∈ [0,1], Σi ya;i = 1, Σk yab;ik = Σk ya;i yb;k. One reason why we can't solve this
52 Linear Programming Relaxation min θ^T y s.t. ya;i ∈ [0,1], Σi ya;i = 1, Σk yab;ik = ya;i Σk yb;k, where Σk yb;k = 1. One reason why we can't solve this
53 Linear Programming Relaxation min θ^T y s.t. ya;i ∈ [0,1], Σi ya;i = 1, Σk yab;ik = ya;i. One reason why we can't solve this
54 Linear Programming Relaxation min θ^T y s.t. ya;i ∈ [0,1], Σi ya;i = 1, Σk yab;ik = ya;i. No reason why we can't solve this* (*modulo memory requirements and time complexity)
55 Wainwright et al., 2001 Dual of the LP Relaxation min θ^T y s.t. ya;i ∈ [0,1], Σi ya;i = 1, Σk yab;ik = ya;i, over the grid Va ... Vi with potentials θ
56 Wainwright et al., 2001 Dual of the LP Relaxation Decompose θ over trees: row trees θ1, θ2, θ3 and column trees θ4, θ5, θ6 over Va ... Vi, with weights ρ1 ... ρ6, ρi ≥ 0, Σi ρi θi = θ
57 Wainwright et al., 2001 Dual of the LP Relaxation Dual of LP: max Σi ρi q*(θi) s.t. Σi ρi θi = θ, where q*(θi) is the minimum energy of tree i
58 Wainwright et al., 2001 Dual of the LP Relaxation max Σi ρi q*(θi) s.t. Σi ρi θi ≡ θ (a reparameterization of θ)
59 Wainwright et al., 2001 Dual of the LP Relaxation max Σi ρi q*(θi) s.t. Σi ρi θi ≡ θ. I can easily compute q*(θi); I can easily maintain the reparameterization constraint. So can I easily solve the dual?
60 Outline Reparameterization (lecture 1) Belief Propagation (lecture 1) Tree-reweighted Message Passing Integer Programming Formulation Linear Programming Relaxation and its Dual Convergent Solution for Dual Computational Issues and Theoretical Properties
61 Kolmogorov, 2006 TRW Message Passing Grid Va ... Vi decomposed into row trees θ1, θ2, θ3 (weights ρ1, ρ2, ρ3) and column trees θ4, θ5, θ6 (weights ρ4, ρ5, ρ6). Pick a variable Va. max Σi ρi q*(θi) s.t. Σi ρi θi ≡ θ
62 Kolmogorov, 2006 TRW Message Passing Two trees contain Va: tree 1 over Vc, Vb, Va with unaries θ1c;0/1, θ1b;0/1, θ1a;0/1, and tree 4 over Va, Vd, Vg with unaries θ4a;0/1, θ4d;0/1, θ4g;0/1. Σi ρi q*(θi), Σi ρi θi ≡ θ
63 Kolmogorov, 2006 TRW Message Passing Reparameterize to obtain min-marginals of Va: ρ1 q*(θ1) + ρ4 q*(θ4) + K, with ρ1 θ1 + ρ4 θ4 + θrest ≡ θ
64 Kolmogorov, 2006 TRW Message Passing One pass of Belief Propagation: ρ1 q*(θ1) + ρ4 q*(θ4) + K, with ρ1 θ1 + ρ4 θ4 + θrest ≡ θ
65 Kolmogorov, 2006 TRW Message Passing The tree minima q*(θ1), q*(θ4) remain the same: ρ1 q*(θ1) + ρ4 q*(θ4) + K, with ρ1 θ1 + ρ4 θ4 + θrest ≡ θ
66 Kolmogorov, 2006 TRW Message Passing ρ1 min{θ1a;0, θ1a;1} + ρ4 min{θ4a;0, θ4a;1} + K, with ρ1 θ1 + ρ4 θ4 + θrest ≡ θ
67 Kolmogorov, 2006 TRW Message Passing Compute weighted average of min-marginals of Va: ρ1 min{θ1a;0, θ1a;1} + ρ4 min{θ4a;0, θ4a;1} + K, with ρ1 θ1 + ρ4 θ4 + θrest ≡ θ
68 Kolmogorov, 2006 TRW Message Passing θ'a;0 = (ρ1 θ1a;0 + ρ4 θ4a;0) / (ρ1 + ρ4), θ'a;1 = (ρ1 θ1a;1 + ρ4 θ4a;1) / (ρ1 + ρ4). ρ1 min{θ1a;0, θ1a;1} + ρ4 min{θ4a;0, θ4a;1} + K, with ρ1 θ1 + ρ4 θ4 + θrest ≡ θ
69 Kolmogorov, 2006 TRW Message Passing Both trees now carry the averaged unaries θ'a;0, θ'a;1. θ'a;0 = (ρ1 θ1a;0 + ρ4 θ4a;0) / (ρ1 + ρ4), θ'a;1 = (ρ1 θ1a;1 + ρ4 θ4a;1) / (ρ1 + ρ4). ρ1 min{θ1a;0, θ1a;1} + ρ4 min{θ4a;0, θ4a;1} + K
70 Kolmogorov, 2006 TRW Message Passing θ'a;0 = (ρ1 θ1a;0 + ρ4 θ4a;0) / (ρ1 + ρ4), θ'a;1 = (ρ1 θ1a;1 + ρ4 θ4a;1) / (ρ1 + ρ4). ρ1 min{θ1a;0, θ1a;1} + ρ4 min{θ4a;0, θ4a;1} + K, with ρ1 θ1 + ρ4 θ4 + θrest ≡ θ
71 Kolmogorov, 2006 TRW Message Passing After averaging: ρ1 min{θ'a;0, θ'a;1} + ρ4 min{θ'a;0, θ'a;1} + K, with ρ1 θ1 + ρ4 θ4 + θrest ≡ θ
72 Kolmogorov, 2006 TRW Message Passing (ρ1 + ρ4) min{θ'a;0, θ'a;1} + K, with ρ1 θ1 + ρ4 θ4 + θrest ≡ θ
73 Kolmogorov, 2006 TRW Message Passing min{p1 + p2, q1 + q2} ≥ min{p1, q1} + min{p2, q2}, so (ρ1 + ρ4) min{θ'a;0, θ'a;1} + K is at least the previous bound, with ρ1 θ1 + ρ4 θ4 + θrest ≡ θ
74 Kolmogorov, 2006 TRW Message Passing Objective function increases or remains constant: (ρ1 + ρ4) min{θ'a;0, θ'a;1} + K, with ρ1 θ1 + ρ4 θ4 + θrest ≡ θ
75 TRW Message Passing Initialize θi (take care of the reparam constraint). Choose random variable Va. Compute min-marginals of Va for all trees. Node-average the min-marginals. REPEAT. Can also do edge-averaging. Kolmogorov, 2006
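The loop above can be sketched for one node-averaging step on a toy problem. Two trees share variable Va (tree 1 = Va-Vb, tree 2 = Va-Vc, weights ρ1 = ρ2 = 0.5); all potential values below are illustrative, not from the slides:

```python
def min_marginals_a(th_a, th_other, th_pair):
    """Min-marginals of Va in a 2-node tree: one pass of BP towards Va."""
    return [th_a[i] + min(th_pair[i][j] + th_other[j] for j in (0, 1))
            for i in (0, 1)]

rho = [0.5, 0.5]
mm1 = min_marginals_a([0, 2], [1, 0], [[0, 3], [3, 0]])  # tree 1: Va-Vb
mm2 = min_marginals_a([3, 0], [0, 1], [[0, 1], [1, 0]])  # tree 2: Va-Vc

# Dual bound before averaging: weighted sum of tree minima.
dual_before = rho[0] * min(mm1) + rho[1] * min(mm2)

# Node-average: after reparameterization the rest of each tree has minimum 0,
# so each tree's minimum is the minimum of its (now shared) unary for Va.
avg = [(rho[0] * mm1[i] + rho[1] * mm2[i]) / (rho[0] + rho[1]) for i in (0, 1)]
dual_after = (rho[0] + rho[1]) * min(avg)
print(dual_before, dual_after)  # 1.0 -> 1.5: the bound does not decrease
```

The increase is exactly the min{p1+p2, q1+q2} ≥ min{p1, q1} + min{p2, q2} inequality from the preceding slides.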
76 Example 1 l 1 2 ρ 1 = ρ 2 =1 ρ 3 = l V a V b V b V c V c V a Pick variable V a. Reparameterize.
77 Example 1 l 1 5 ρ 1 = ρ 2 =1 ρ 3 = l V a V b V b V c V c V a Average the min-marginals of V a
78 Example 1 l ρ 1 = ρ 2 =1 ρ 3 = l V a V b V b V c V c V a Pick variable V b. Reparameterize.
79 Example 1 l ρ 1 = ρ 2 =1 ρ 3 = l V a V b V b V c V c V a Average the min-marginals of V b
80 Example 1 l ρ 1 = ρ 2 =1 ρ 3 = l V a V b V b V c V c V a Value of dual does not increase
81 Example 1 l ρ 1 = ρ 2 =1 ρ 3 = l V a V b V b V c V c V a Maybe it will increase for V c NO
82 Example 1 l ρ 1 = ρ 2 =1 ρ 3 = l V a V b V b V c V c V a f 1 (a) = 0 f 1 (b) = 0 f 2 (b) = 0 f 2 (c) = 0 f 3 (c) = 0 f 3 (a) = 0 Strong Tree Agreement Exact MAP Estimate
83 Example 2 l 1 2 ρ 1 = ρ 2 =1 ρ 3 = l V a V b V b V c V c V a Pick variable V a. Reparameterize.
84 Example 2 l 1 4 ρ 1 = ρ 2 =1 ρ 3 = l V a V b V b V c V c V a Average the min-marginals of V a
85 Example 2 l 1 4 ρ 1 = ρ 2 =1 ρ 3 = l V a V b V b V c V c V a Value of dual does not increase
86 Example 2 l 1 4 ρ 1 = ρ 2 =1 ρ 3 = l V a V b V b V c V c V a Maybe it will increase for V b or V c NO
87 Example 2 l 1 4 ρ 1 = ρ 2 =1 ρ 3 = l V a V b V b V c V c V a f 1 (a) = 1 f 1 (b) = 1 f 2 (b) = 1 f 2 (c) = 0 f 3 (c) = 1 f 3 (a) = 1 f 2 (b) = 0 f 2 (c) = 1 Weak Tree Agreement Not Exact MAP Estimate
88 Example 2 l 1 4 ρ 1 = ρ 2 =1 ρ 3 = l V a V b V b V c V c V a f 1 (a) = 1 f 1 (b) = 1 f 2 (b) = 1 f 2 (c) = 0 f 3 (c) = 1 f 3 (a) = 1 f 2 (b) = 0 f 2 (c) = 1 Weak Tree Agreement Convergence point of TRW
89 Obtaining the Labeling Only solves the dual. Primal solutions? Grid Va ... Vi. Fix the label of Va. θ' = Σi ρi θi ≡ θ
90 Obtaining the Labeling Only solves the dual. Primal solutions? Fix the label of Vb. θ' = Σi ρi θi ≡ θ. Continue in some fixed order. Meltzer et al., 2006
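The fixed-order rounding just described can be sketched as a greedy pass: each variable takes the label minimizing its unary cost plus the pairwise costs to already-fixed neighbours (the names and numbers below are illustrative, not from the slides):

```python
# Sketch of sequential primal rounding on the (reparameterized) potentials.
def round_primal(order, unary, pairwise, edges):
    y = {}
    for a in order:
        def cost(i):
            c = unary[a][i]
            for (u, v) in edges:
                if u == a and v in y:
                    c += pairwise[(u, v)][i][y[v]]
                elif v == a and u in y:
                    c += pairwise[(u, v)][y[u]][i]
            return c
        y[a] = min(range(len(unary[a])), key=cost)
    return y

unary = {0: [0, 5], 1: [5, 0]}
pairwise = {(0, 1): [[0, 1], [1, 0]]}
print(round_primal([0, 1], unary, pairwise, [(0, 1)]))  # {0: 0, 1: 1}
```

On this toy instance the greedy pass happens to recover the exact minimizer; in general it only yields a primal labeling whose energy upper-bounds the dual value.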
91 Outline Reparameterization (lecture 1) Belief Propagation (lecture 1) Tree-reweighted Message Passing Integer Programming Formulation Linear Programming Relaxation and its Dual Convergent Solution for Dual Computational Issues and Theoretical Properties
92 Computational Issues of TRW Basic Component is Belief Propagation Speed-ups for some pairwise potentials Felzenszwalb & Huttenlocher, 2004 Memory requirements cut down by half Kolmogorov, 2006 Further speed-ups using monotonic chains Kolmogorov, 2006
93 Theoretical Properties of TRW Always converges, unlike BP (Kolmogorov, 2006). Strong tree agreement implies exact MAP (Wainwright et al., 2001). Optimal MAP for two-label submodular problems: θab;00 + θab;11 ≤ θab;01 + θab;10 (Kolmogorov and Wainwright, 2005)
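The two-label submodularity condition above is a one-line check on a pairwise cost table t[i][k] = θab;ik:

```python
# Check the two-label submodularity condition theta_ab;00 + theta_ab;11
# <= theta_ab;01 + theta_ab;10 on a 2x2 pairwise cost table.
def is_submodular(t):
    return t[0][0] + t[1][1] <= t[0][1] + t[1][0]

print(is_submodular([[0, 1], [1, 0]]))  # True: Potts-style costs reward agreement
print(is_submodular([[1, 0], [0, 1]]))  # False: costs that reward disagreement
```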
94 Summary Trees can be solved exactly - BP No guarantee of convergence otherwise - BP Strong Tree Agreement - TRW-S Submodular energies solved exactly - TRW-S TRW-S solves an LP relaxation of MAP estimation
More informationSubmodularity in Machine Learning
Saifuddin Syed MLRG Summer 2016 1 / 39 What are submodular functions Outline 1 What are submodular functions Motivation Submodularity and Concavity Examples 2 Properties of submodular functions Submodularity
More informationProbabilistic and Bayesian Machine Learning
Probabilistic and Bayesian Machine Learning Day 4: Expectation and Belief Propagation Yee Whye Teh ywteh@gatsby.ucl.ac.uk Gatsby Computational Neuroscience Unit University College London http://www.gatsby.ucl.ac.uk/
More informationEnergy Minimization via Graph Cuts
Energy Minimization via Graph Cuts Xiaowei Zhou, June 11, 2010, Journal Club Presentation 1 outline Introduction MAP formulation for vision problems Min-cut and Max-flow Problem Energy Minimization via
More informationGraphical Model Inference with Perfect Graphs
Graphical Model Inference with Perfect Graphs Tony Jebara Columbia University July 25, 2013 joint work with Adrian Weller Graphical models and Markov random fields We depict a graphical model G as a bipartite
More informationVariational Algorithms for Marginal MAP
Variational Algorithms for Marginal MAP Qiang Liu Department of Computer Science University of California, Irvine Irvine, CA, 92697 qliu1@ics.uci.edu Alexander Ihler Department of Computer Science University
More informationProbabilistic Graphical Models
School of Computer Science Probabilistic Graphical Models Variational Inference IV: Variational Principle II Junming Yin Lecture 17, March 21, 2012 X 1 X 1 X 1 X 1 X 2 X 3 X 2 X 2 X 3 X 3 Reading: X 4
More information17 Variational Inference
Massachusetts Institute of Technology Department of Electrical Engineering and Computer Science 6.438 Algorithms for Inference Fall 2014 17 Variational Inference Prompted by loopy graphs for which exact
More informationRevisiting the Limits of MAP Inference by MWSS on Perfect Graphs
Revisiting the Limits of MAP Inference by MWSS on Perfect Graphs Adrian Weller University of Cambridge CP 2015 Cork, Ireland Slides and full paper at http://mlg.eng.cam.ac.uk/adrian/ 1 / 21 Motivation:
More informationJoint Optimization of Segmentation and Appearance Models
Joint Optimization of Segmentation and Appearance Models David Mandle, Sameep Tandon April 29, 2013 David Mandle, Sameep Tandon (Stanford) April 29, 2013 1 / 19 Overview 1 Recap: Image Segmentation 2 Optimization
More informationMax$Sum(( Exact(Inference(
Probabilis7c( Graphical( Models( Inference( MAP( Max$Sum(( Exact(Inference( Product Summation a 1 b 1 8 a 1 b 2 1 a 2 b 1 0.5 a 2 b 2 2 a 1 b 1 3 a 1 b 2 0 a 2 b 1-1 a 2 b 2 1 Max-Sum Elimination in Chains
More informationHigher-Order Energies for Image Segmentation
IEEE TRANSACTIONS ON IMAGE PROCESSING 1 Higher-Order Energies for Image Segmentation Jianbing Shen, Senior Member, IEEE, Jianteng Peng, Xingping Dong, Ling Shao, Senior Member, IEEE, and Fatih Porikli,
More informationAccelerated dual decomposition for MAP inference
Vladimir Jojic vjojic@csstanfordedu Stephen Gould sgould@stanfordedu Daphne Koller koller@csstanfordedu Computer Science Department, 353 Serra Mall, Stanford University, Stanford, CA 94305, USA Abstract
More informationSupport Vector Machine (SVM) and Kernel Methods
Support Vector Machine (SVM) and Kernel Methods CE-717: Machine Learning Sharif University of Technology Fall 2014 Soleymani Outline Margin concept Hard-Margin SVM Soft-Margin SVM Dual Problems of Hard-Margin
More informationAccelerated dual decomposition for MAP inference
Vladimir Jojic vjojic@csstanfordedu Stephen Gould sgould@stanfordedu Daphne Koller koller@csstanfordedu Computer Science Department, 353 Serra Mall, Stanford University, Stanford, CA 94305, USA Abstract
More informationLecture 8: PGM Inference
15 September 2014 Intro. to Stats. Machine Learning COMP SCI 4401/7401 Table of Contents I 1 Variable elimination Max-product Sum-product 2 LP Relaxations QP Relaxations 3 Marginal and MAP X1 X2 X3 X4
More informationConvex Optimization M2
Convex Optimization M2 Lecture 8 A. d Aspremont. Convex Optimization M2. 1/57 Applications A. d Aspremont. Convex Optimization M2. 2/57 Outline Geometrical problems Approximation problems Combinatorial
More informationWhat Metrics Can Be Approximated by Geo-Cuts, or Global Optimization of Length/Area and Flux
Proceedings of International Conference on Computer Vision (ICCV), Beijing, China, October 2005 vol.., p.1 What Metrics Can Be Approximated by Geo-Cuts, or Global Optimization of Length/Area and Flux Vladimir
More informationMANY early vision problems can be naturally formulated
1568 IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, VOL. 28, NO. 10, OCTOBER 2006 Convergent Tree-Reweighted Message Passing for Energy Minimization Vladimir Kolmogorov, Member, IEEE Abstract
More informationEfficient Energy Maximization Using Smoothing Technique
1/35 Efficient Energy Maximization Using Smoothing Technique Bogdan Savchynskyy Heidelberg Collaboratory for Image Processing (HCI) University of Heidelberg 2/35 Energy Maximization Problem G pv, Eq, v
More informationPreliminaries. Introduction to EF-games. Inexpressivity results for first-order logic. Normal forms for first-order logic
Introduction to EF-games Inexpressivity results for first-order logic Normal forms for first-order logic Algorithms and complexity for specific classes of structures General complexity bounds Preliminaries
More informationBayesian Networks: Construction, Inference, Learning and Causal Interpretation. Volker Tresp Summer 2016
Bayesian Networks: Construction, Inference, Learning and Causal Interpretation Volker Tresp Summer 2016 1 Introduction So far we were mostly concerned with supervised learning: we predicted one or several
More information6.854J / J Advanced Algorithms Fall 2008
MIT OpenCourseWare http://ocw.mit.edu 6.85J / 8.5J Advanced Algorithms Fall 008 For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms. 8.5/6.85 Advanced Algorithms
More informationLecture 10. Sublinear Time Algorithms (contd) CSC2420 Allan Borodin & Nisarg Shah 1
Lecture 10 Sublinear Time Algorithms (contd) CSC2420 Allan Borodin & Nisarg Shah 1 Recap Sublinear time algorithms Deterministic + exact: binary search Deterministic + inexact: estimating diameter in a
More informationDual Decomposition for Inference
Dual Decomposition for Inference Yunshu Liu ASPITRG Research Group 2014-05-06 References: [1]. D. Sontag, A. Globerson and T. Jaakkola, Introduction to Dual Decomposition for Inference, Optimization for
More informationarxiv: v1 [stat.ml] 26 Feb 2013
Variational Algorithms for Marginal MAP Qiang Liu Donald Bren School of Information and Computer Sciences University of California, Irvine Irvine, CA, 92697-3425, USA qliu1@uci.edu arxiv:1302.6584v1 [stat.ml]
More informationSystem Planning Lecture 7, F7: Optimization
System Planning 04 Lecture 7, F7: Optimization System Planning 04 Lecture 7, F7: Optimization Course goals Appendi A Content: Generally about optimization Formulate optimization problems Linear Programming
More informationMessage-Passing Algorithms for GMRFs and Non-Linear Optimization
Message-Passing Algorithms for GMRFs and Non-Linear Optimization Jason Johnson Joint Work with Dmitry Malioutov, Venkat Chandrasekaran and Alan Willsky Stochastic Systems Group, MIT NIPS Workshop: Approximate
More informationVariational Inference (11/04/13)
STA561: Probabilistic machine learning Variational Inference (11/04/13) Lecturer: Barbara Engelhardt Scribes: Matt Dickenson, Alireza Samany, Tracy Schifeling 1 Introduction In this lecture we will further
More informationInference and Representation
Inference and Representation David Sontag New York University Lecture 5, Sept. 30, 2014 David Sontag (NYU) Inference and Representation Lecture 5, Sept. 30, 2014 1 / 16 Today s lecture 1 Running-time of
More informationECE 6504: Advanced Topics in Machine Learning Probabilistic Graphical Models and Large-Scale Learning
ECE 6504: Advanced Topics in Machine Learning Probabilistic Graphical Models and Large-Scale Learning Topics Summary of Class Advanced Topics Dhruv Batra Virginia Tech HW1 Grades Mean: 28.5/38 ~= 74.9%
More informationCS Lecture 8 & 9. Lagrange Multipliers & Varitional Bounds
CS 6347 Lecture 8 & 9 Lagrange Multipliers & Varitional Bounds General Optimization subject to: min ff 0() R nn ff ii 0, h ii = 0, ii = 1,, mm ii = 1,, pp 2 General Optimization subject to: min ff 0()
More informationA weighted Mirror Descent algorithm for nonsmooth convex optimization problem
Noname manuscript No. (will be inserted by the editor) A weighted Mirror Descent algorithm for nonsmooth convex optimization problem Duy V.N. Luong Panos Parpas Daniel Rueckert Berç Rustem Received: date
More informationBayesian Machine Learning - Lecture 7
Bayesian Machine Learning - Lecture 7 Guido Sanguinetti Institute for Adaptive and Neural Computation School of Informatics University of Edinburgh gsanguin@inf.ed.ac.uk March 4, 2015 Today s lecture 1
More informationEfficient Inference in Fully Connected CRFs with Gaussian Edge Potentials
Efficient Inference in Fully Connected CRFs with Gaussian Edge Potentials Philipp Krähenbühl and Vladlen Koltun Stanford University Presenter: Yuan-Ting Hu 1 Conditional Random Field (CRF) E x I = φ u
More informationarxiv: v1 [cs.cv] 22 Apr 2012
A Unified Multiscale Framework for Discrete Energy Minimization Shai Bagon Meirav Galun arxiv:1204.4867v1 [cs.cv] 22 Apr 2012 Abstract Discrete energy minimization is a ubiquitous task in computer vision,
More informationDiscrete Markov Random Fields the Inference story. Pradeep Ravikumar
Discrete Markov Random Fields the Inference story Pradeep Ravikumar Graphical Models, The History How to model stochastic processes of the world? I want to model the world, and I like graphs... 2 Mid to
More informationInference Methods for CRFs with Co-occurrence Statistics
Noname manuscript No. (will be inserted by the editor) Inference Methods for CRFs with Co-occurrence Statistics L ubor Ladický 1 Chris Russell 1 Pushmeet Kohli Philip H. S. Torr the date of receipt and
More informationBeyond Loose LP-relaxations: Optimizing MRFs by Repairing Cycles
Beyond Loose LP-relaxations: Optimizing MRFs y Repairing Cycles Nikos Komodakis 1 and Nikos Paragios 2 1 University of Crete, komod@csd.uoc.gr 2 Ecole Centrale de Paris, nikos.paragios@ecp.fr Astract.
More informationFundamentals. CS 281A: Statistical Learning Theory. Yangqing Jia. August, Based on tutorial slides by Lester Mackey and Ariel Kleiner
Fundamentals CS 281A: Statistical Learning Theory Yangqing Jia Based on tutorial slides by Lester Mackey and Ariel Kleiner August, 2011 Outline 1 Probability 2 Statistics 3 Linear Algebra 4 Optimization
More information