Inderjit Dhillon The University of Texas at Austin


1 Inderjit Dhillon, The University of Texas at Austin (Universidad Carlos III de Madrid; 15th June, 2012) (Based on joint work with J. Brickell, S. Sra, J. Tropp)

3 Introduction 2 / 29

The notion of distance is central to mathematics.
Fundamental distance axioms: non-negativity, symmetry & the triangle inequality.

5 Introduction 3 / 29

The Metric Nearness Problem
J. Brickell, I. Dhillon, S. Sra, and J. Tropp
SIAM J. Matrix Anal. Appl. (2008)
(SIAM Outstanding Paper Prize 2011)

6 Introduction - motivation 4 / 29 Imagine that you make measurements that encode distances (Euclidean or non-Euclidean) between points

11 Introduction - motivation 5 / 29

Measurement difficulties:
1 Noise
2 Incomplete observations
3 Uncertainty
4 Faulty instruments, etc.
5 Budget ($$ or time)

Measured values may violate key properties of distances:
1 Nonnegativity
2 Symmetry
3 Triangle inequality

Can we optimally restore these properties? Why?
1 Metricity is a fundamental property; we want it!
2 Your algorithm might expect metric-space inputs
3 Original motivation: faster lookup in a biological database
4 Many NP-hard problems admit better approximations on metric data

14 Introduction - setup 6 / 29

Interpoint distance measurements between n points; let the n × n interpoint distance matrix be D.
Assume for simplicity that D is symmetric and nonnegative, but entries of D might violate the triangle inequality.
[Diagram: a triangle on vertices a, b, c alongside its distance matrix D; the matrix entries were lost in extraction.]

16 Introduction - setup 7 / 29

Metric Nearness (MN): find the matrix M nearest to D that satisfies the triangle inequality.
[Diagram: a triangle a, b, c with edge lengths (10, 4, 5); the edge of length 10 is reduced to 9.]

20 Introduction - other problems 8 / 29

MN: very different from, and simpler than, Euclidean Distance Matrix (EDM) problems.
EDM (inverse problem): given a distance matrix D, find n points in R^m such that ||x_i - x_j|| = d_ij.
A Euclidean embedding does not always exist.
Approximate solution: multidimensional scaling. Metric MDS: find x_i in R^m with least distortion.
MN: just find the nearest metric — no embedding! There are far more metric spaces than Euclidean spaces; thus, MN is an easier problem!

23 MN - Formulation 9 / 29

For every distinct triple (i, k, j) of indices:
    m_ij <= m_ik + m_kj    (triangle)
#-triangles: 3·C(n,3) (each edge of a triangle appears once on the lhs of an inequality).

Let M_n denote the metric cone (a closed, convex, polyhedral cone) of all n × n metric matrices,
    M_n = { M : m_ii = 0, m_ij = m_ji >= 0, satisfy (triangle) }

Metric Nearness:
    M in argmin { ||X - D|| : X in M_n }
A convex optimization problem.
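The membership conditions defining M_n are straightforward to check directly; a minimal sketch in Python (the helper name `is_metric` is my own, not from the talk):

```python
import itertools
import numpy as np

def is_metric(M, tol=1e-9):
    """Check membership in the metric cone M_n: zero diagonal, symmetry,
    nonnegativity, and every triangle inequality m_ij <= m_ik + m_kj."""
    n = M.shape[0]
    if not np.allclose(np.diag(M), 0, atol=tol):
        return False
    if not np.allclose(M, M.T, atol=tol) or (M < -tol).any():
        return False
    # every ordered distinct triple (i, k, j): edge (i, j) against apex k
    for i, j, k in itertools.permutations(range(n), 3):
        if M[i, j] > M[i, k] + M[k, j] + tol:
            return False
    return True
```

For example, the matrix with off-diagonal distances (10, 4, 5) fails the check, since 10 > 4 + 5.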

26 MN - l_p-norm Formulation 10 / 29

    min_X  (1/p) Σ_{i<j} |x_ij - d_ij|^p
    s.t.   x_ij <= x_ik + x_kj   for all distinct triples (i, k, j)

For p = 1 and p = ∞: linear programming.
For p = 2: quadratic programming.
Other p: general convex programming.

Vectorized MN problem:
    min (1/p) ||x - d||_p^p   s.t.   Ax <= 0
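The vectorized form can be made concrete by assembling the 3·C(n,3) × C(n,2) constraint matrix A; a sketch (helper name is mine), where a positive entry of A d flags a violated triangle inequality in the raw data:

```python
import itertools
import numpy as np

def triangle_constraints(n):
    """Rows encode x_ij - x_ik - x_kj <= 0, one per (triangle, side);
    columns are indexed by the C(n,2) edges (i < j) of K_n."""
    edge = {e: idx for idx, e in enumerate(itertools.combinations(range(n), 2))}
    rows = []
    for i, j, k in itertools.combinations(range(n), 3):
        for lhs, r1, r2 in (((i, j), (i, k), (j, k)),
                            ((i, k), (i, j), (j, k)),
                            ((j, k), (i, j), (i, k))):
            row = np.zeros(len(edge))
            row[edge[lhs]], row[edge[r1]], row[edge[r2]] = 1, -1, -1
            rows.append(row)
    return np.vstack(rows)

# d holds the upper-triangular distances of the (10, 4, 5) triangle
A = triangle_constraints(3)
d = np.array([10.0, 4.0, 5.0])
print(int((A @ d > 0).sum()))  # 1 violated inequality: 10 > 4 + 5
```

For n = 3 the matrix A has 3 rows and 3 columns; for n = 5 it already has 30 rows and 10 columns, illustrating how quickly the constraint count grows.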

31 Formulation - qualitative 11 / 29

Choice of p governs the qualitative solution:
Small p: favors large changes to few edges.
Large p: favors small changes to many edges.

[Diagrams: an invalid triangle with edges (10, 4, 5); the obvious l_1 solution reduces the edge 10 to 9; an alternative l_1 solution adds α to the edge 4 and (1 - α) to the edge 5, for a total overall change of 1; the l_1, l_2 and l_∞ solutions are shown side by side.]

35 MN - Solution 12 / 29

MN problem for an n × n symmetric matrix D:
#-variables = C(n,2), #-constraints = 3·C(n,3).
n = 500 nodes: 124,750 variables, 62,125,500 constraints!
Focus on LP and QP (p = 1, 2, ∞), but general LP and QP solvers are too expensive.
Obvious imperative: exploit problem structure!
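The counts for n = 500 follow directly from the binomial coefficients:

```python
from math import comb

n = 500
print(comb(n, 2))      # 124750 variables (one per edge)
print(3 * comb(n, 3))  # 62125500 triangle-inequality constraints
```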

41 MN - Solution 13 / 29

Fixing a single triangle is easy.
[Diagram: the invalid triangle (10, 4, 5) and two alternative single-triangle fixes.]
Fix triangles one by one: a triangle-fixing algorithm.
Fix triangles enough number of times → convergence theorem.
Turns out to be a coordinate ascent procedure; a strictly convex objective is needed.
What to do for p = 1 and p = ∞? Optimally perturb; works well.
A separate talk in itself; C++ implementation available.
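For p = 2, a pass of triangle fixing can be sketched as cyclic projection onto the triangle halfspaces with Dykstra-style correction terms; this is a simplified illustration in the spirit of the talk's algorithm, not the paper's exact implementation:

```python
import itertools
import numpy as np

def metric_nearness_l2(D, sweeps=50):
    """Cyclic triangle fixing with Dykstra corrections: the sweeps converge
    to the l2 projection of the symmetric matrix D onto the metric cone."""
    n = D.shape[0]
    X = np.array(D, dtype=float)
    a = np.array([1.0, -1.0, -1.0])   # halfspace x_ab - x_ac - x_cb <= 0
    corrections = {}                  # one 3-vector per (triangle, side)
    for _ in range(sweeps):
        for i, j, k in itertools.combinations(range(n), 3):
            # each side of triangle {i, j, k} appears once on the lhs
            for side in (((i, j), (i, k), (k, j)),
                         ((i, k), (i, j), (j, k)),
                         ((j, k), (j, i), (i, k))):
                ab, ac, cb = side
                p = corrections.get(side, np.zeros(3))
                y = np.array([X[ab], X[ac], X[cb]]) + p
                v = a @ y                            # triangle violation
                proj = y - (v / 3.0) * a if v > 0 else y
                corrections[side] = y - proj         # Dykstra correction
                for e, val in zip((ab, ac, cb), proj):
                    X[e] = X[e[::-1]] = val          # keep X symmetric
    return X

M = metric_nearness_l2(np.array([[0.0, 10, 4], [10, 0, 5], [4, 5, 0]]))
print(np.round(M[0, 1], 4))  # 9.6667: the edge 10 shrinks to 29/3
```

On the (10, 4, 5) example the l_2 projection spreads the unit violation equally, giving edges (29/3, 13/3, 16/3), which satisfy the triangle inequality with equality.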

42 MN and APSP 14 / 29

45 MN and APSP 15 / 29

Famous graph-theory problem: All-Pairs Shortest Paths (APSP).
R. Floyd published his algorithm in 1962; research is still active!
Find shortest paths between every pair of vertices.
A special case of metric nearness!

49 MN and APSP 16 / 29

Allow asymmetric D; assume D >= 0 for simplicity.

Decrease-Only Metric Nearness (DOMN): find the nearest matrix M in M_n such that m_ij <= d_ij for all (i, j).

Theorem. Let M_A be the APSP solution for D. Then M_A also solves DOMN. Moreover, any M in M_n that satisfies M <= D also satisfies M <= M_A.

The APSP solution gives the tightest metric solution.
Intuitive proof: APSP gives shortest paths, and shortest paths essentially define the triangle inequality.
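Floyd-Warshall makes the theorem concrete: shrinking every entry to its shortest-path value yields exactly the decrease-only solution M_A. A compact sketch (vectorized over i, j):

```python
import numpy as np

def floyd_warshall(D):
    """All-pairs shortest paths: the tightest matrix M <= D satisfying
    every triangle inequality, i.e. the DOMN solution M_A."""
    M = np.array(D, dtype=float)
    n = M.shape[0]
    for k in range(n):
        # fix all triangles that route through intermediate vertex k
        M = np.minimum(M, M[:, [k]] + M[[k], :])
    return M

D = np.array([[0.0, 10, 4], [10, 0, 5], [4, 5, 0]])
print(floyd_warshall(D)[0, 1])  # 9.0: the edge 10 shrinks to the path 4 + 5
```

Note that this decrease-only repair (10 → 9 via the two-hop path) matches the l_1 example from the earlier slides.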

53 MN and APSP 17 / 29

Can solve APSP by solving DOMN.
Most famous APSP method: the Floyd-Warshall algorithm — a triangle-fixing algorithm that goes through the triangles in a fixed, predetermined order.
Our approach: cast DOMN as an LP and solve it with a primal-dual scheme that goes through triangles in a data-dependent order. (The primal-dual algorithm is half a talk in itself.)
1 Begin with a dual feasible solution (e.g., x_ij = min_pq d_pq)
2 Find the set of active constraints (those holding with equality)
3 Solve the resulting restricted dual to identify which variables can be increased
4 Increase variables by the maximally allowable level
5 Repeat steps 2-4 until convergence

55 MN and APSP 18 / 29

Implemented with efficient priority queues (e.g., Fibonacci heaps), the method runs in O(n^3) time.
FW runs in time Θ(n^3); finding an O(n^(3-ε)) algorithm is a major open problem.
DOMN-LP is empirically O(n^2.8) on some graphs.
Hidden open problem: metricity (more later).

56 Experimental results 19 / 29

57 Triangle fixing vs CPLEX 20 / 29

[Plot: running time (secs) vs size of input matrix (n × n), comparing CPLEX and L_2 triangle fixing on l_2-norm MN.]

58 Triangle fixing vs CPLEX 20 / 29

[Plot: running time (secs) vs size of input matrix (n × n), comparing CPLEX and L_1 triangle fixing on l_1-norm MN.]

59 Primal-dual vs FW 21 / 29

[Plot: distance from answer vs algorithm progress (iterations), comparing Primal-Dual and Floyd-Warshall — convergence comparison.]

60 Primal-dual vs FW 21 / 29

[Plot: iterations to converge vs size of problem (n); each iteration is O(n) operations.]

63 Key messages so far 22 / 29

Metric nearness is a fundamental problem.
Convex optimization formulation with triangle-fixing algorithms.
MN includes APSP as a special case.

64 Discussion & open problems 23 / 29

70 Easy Extensions 24 / 29

1 Relaxed triangle inequality: x_ij <= λ_ikj (x_ik + x_kj)
2 Missing values in D, e.g., by using Σ_ij w_ij |x_ij - d_ij|, where w_ij ∝ 1/σ_ij (w_ij = 0 for a missing value)
3 Ordinal constraints, e.g., if d_ij < d_pq then m_ij < m_pq
4 Box constraints: l_ij <= m_ij <= u_ij
5 Bregman-divergence-based objective functions Σ_ij B_φ(x_ij, d_ij), e.g., with B_φ as the KL-divergence

All are easy extensions of our triangle-fixing algorithms.

72 Open problems - combinatorial 25 / 29

The FW algorithm for APSP is combinatorial: a Θ(n^3) strongly polynomial runtime.
Our triangle-fixing algorithm is iterative. Does a combinatorial algorithm exist?

74 Open problems - combinatorial 26 / 29

DOMN was equivalent to APSP. What about increase-only MN?
Trickier than DOMN because of non-uniqueness:
[Diagram: the invalid triangle (10, 4, 5) can be repaired by increasing the short edges to 4 + α and 5 + (1 - α), for any α — no unique solution.]
Maybe easier to solve than general MN.

75 Open problems - statistics 27 / 29

MN as a complement to multidimensional scaling? Applications to data clustering, graph partitioning for metric graphs, etc.

78 Open problems - linear algebra 28 / 29

[Diagram: the complete graph on vertices a, b, c, d with its triangle-inequality constraint matrix A_4; the matrix entries were lost in extraction.]

We noticed: A_n has three distinct singular values, √(n - 2), √(2n - 2), and √(3n - 4), with multiplicities 1, n - 1, and n(n - 3)/2, respectively.
S. Sra proved this recently, but the ramifications of this result (e.g., for metric geometry, metricity, etc.) remain an open problem.
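The claimed spectrum is easy to check numerically: build the 3·C(n,3) × C(n,2) triangle-constraint matrix A_n and compare its singular values against √(n-2), √(2n-2), √(3n-4). A verification sketch (the helper name is my own):

```python
import itertools
import numpy as np

def triangle_constraint_matrix(n):
    """A_n: one row per triangle inequality x_ij - x_ik - x_kj <= 0,
    one column per edge (i < j) of the complete graph K_n."""
    edge = {e: idx for idx, e in enumerate(itertools.combinations(range(n), 2))}
    rows = []
    for i, j, k in itertools.combinations(range(n), 3):
        for lhs, r1, r2 in (((i, j), (i, k), (j, k)),
                            ((i, k), (i, j), (j, k)),
                            ((j, k), (i, j), (i, k))):
            row = np.zeros(len(edge))
            row[edge[lhs]], row[edge[r1]], row[edge[r2]] = 1, -1, -1
            rows.append(row)
    return np.vstack(rows)

n = 5
s = np.linalg.svd(triangle_constraint_matrix(n), compute_uv=False)
# distinct singular values: sqrt(n-2), sqrt(2n-2), sqrt(3n-4)
print(np.unique(np.round(s, 6)))
```

The structure behind the result: A_n^T A_n = 3(n-2) I - B, where B is the adjacency matrix of the triangular graph T(n) (edges of K_n, adjacent when they share a vertex), whose well-known eigenvalues 2(n-2), n-4, -2 give the three squared singular values above.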

79 In conclusion 29 / 29


More information

U.C. Berkeley CS294: Spectral Methods and Expanders Handout 11 Luca Trevisan February 29, 2016

U.C. Berkeley CS294: Spectral Methods and Expanders Handout 11 Luca Trevisan February 29, 2016 U.C. Berkeley CS294: Spectral Methods and Expanders Handout Luca Trevisan February 29, 206 Lecture : ARV In which we introduce semi-definite programming and a semi-definite programming relaxation of sparsest

More information

The Nearest Doubly Stochastic Matrix to a Real Matrix with the same First Moment

The Nearest Doubly Stochastic Matrix to a Real Matrix with the same First Moment he Nearest Doubly Stochastic Matrix to a Real Matrix with the same First Moment William Glunt 1, homas L. Hayden 2 and Robert Reams 2 1 Department of Mathematics and Computer Science, Austin Peay State

More information

Absolute Value Programming

Absolute Value Programming O. L. Mangasarian Absolute Value Programming Abstract. We investigate equations, inequalities and mathematical programs involving absolute values of variables such as the equation Ax + B x = b, where A

More information

Robust Principal Component Analysis

Robust Principal Component Analysis ELE 538B: Mathematics of High-Dimensional Data Robust Principal Component Analysis Yuxin Chen Princeton University, Fall 2018 Disentangling sparse and low-rank matrices Suppose we are given a matrix M

More information

III. Applications in convex optimization

III. Applications in convex optimization III. Applications in convex optimization nonsymmetric interior-point methods partial separability and decomposition partial separability first order methods interior-point methods Conic linear optimization

More information

Lecture: Cone programming. Approximating the Lorentz cone.

Lecture: Cone programming. Approximating the Lorentz cone. Strong relaxations for discrete optimization problems 10/05/16 Lecture: Cone programming. Approximating the Lorentz cone. Lecturer: Yuri Faenza Scribes: Igor Malinović 1 Introduction Cone programming is

More information

CS 6820 Fall 2014 Lectures, October 3-20, 2014

CS 6820 Fall 2014 Lectures, October 3-20, 2014 Analysis of Algorithms Linear Programming Notes CS 6820 Fall 2014 Lectures, October 3-20, 2014 1 Linear programming The linear programming (LP) problem is the following optimization problem. We are given

More information

Topic: Balanced Cut, Sparsest Cut, and Metric Embeddings Date: 3/21/2007

Topic: Balanced Cut, Sparsest Cut, and Metric Embeddings Date: 3/21/2007 CS880: Approximations Algorithms Scribe: Tom Watson Lecturer: Shuchi Chawla Topic: Balanced Cut, Sparsest Cut, and Metric Embeddings Date: 3/21/2007 In the last lecture, we described an O(log k log D)-approximation

More information

LMI MODELLING 4. CONVEX LMI MODELLING. Didier HENRION. LAAS-CNRS Toulouse, FR Czech Tech Univ Prague, CZ. Universidad de Valladolid, SP March 2009

LMI MODELLING 4. CONVEX LMI MODELLING. Didier HENRION. LAAS-CNRS Toulouse, FR Czech Tech Univ Prague, CZ. Universidad de Valladolid, SP March 2009 LMI MODELLING 4. CONVEX LMI MODELLING Didier HENRION LAAS-CNRS Toulouse, FR Czech Tech Univ Prague, CZ Universidad de Valladolid, SP March 2009 Minors A minor of a matrix F is the determinant of a submatrix

More information

Optimization for Compressed Sensing

Optimization for Compressed Sensing Optimization for Compressed Sensing Robert J. Vanderbei 2014 March 21 Dept. of Industrial & Systems Engineering University of Florida http://www.princeton.edu/ rvdb Lasso Regression The problem is to solve

More information

Lecture Notes for Chapter 25: All-Pairs Shortest Paths

Lecture Notes for Chapter 25: All-Pairs Shortest Paths Lecture Notes for Chapter 25: All-Pairs Shortest Paths Chapter 25 overview Given a directed graph G (V, E), weight function w : E R, V n. Goal: create an n n matrix of shortest-path distances δ(u,v). Could

More information

Convex Optimization on Large-Scale Domains Given by Linear Minimization Oracles

Convex Optimization on Large-Scale Domains Given by Linear Minimization Oracles Convex Optimization on Large-Scale Domains Given by Linear Minimization Oracles Arkadi Nemirovski H. Milton Stewart School of Industrial and Systems Engineering Georgia Institute of Technology Joint research

More information

Positive semidefinite rank

Positive semidefinite rank 1/15 Positive semidefinite rank Hamza Fawzi (MIT, LIDS) Joint work with João Gouveia (Coimbra), Pablo Parrilo (MIT), Richard Robinson (Microsoft), James Saunderson (Monash), Rekha Thomas (UW) DIMACS Workshop

More information

Travelling Salesman Problem

Travelling Salesman Problem Travelling Salesman Problem Fabio Furini November 10th, 2014 Travelling Salesman Problem 1 Outline 1 Traveling Salesman Problem Separation Travelling Salesman Problem 2 (Asymmetric) Traveling Salesman

More information

The quest for finding Hamiltonian cycles

The quest for finding Hamiltonian cycles The quest for finding Hamiltonian cycles Giang Nguyen School of Mathematical Sciences University of Adelaide Travelling Salesman Problem Given a list of cities and distances between cities, what is the

More information

Convex Optimization of Graph Laplacian Eigenvalues

Convex Optimization of Graph Laplacian Eigenvalues Convex Optimization of Graph Laplacian Eigenvalues Stephen Boyd Stanford University (Joint work with Persi Diaconis, Arpita Ghosh, Seung-Jean Kim, Sanjay Lall, Pablo Parrilo, Amin Saberi, Jun Sun, Lin

More information

Second-order cone programming

Second-order cone programming Outline Second-order cone programming, PhD Lehigh University Department of Industrial and Systems Engineering February 10, 2009 Outline 1 Basic properties Spectral decomposition The cone of squares The

More information

CS264: Beyond Worst-Case Analysis Lecture #11: LP Decoding

CS264: Beyond Worst-Case Analysis Lecture #11: LP Decoding CS264: Beyond Worst-Case Analysis Lecture #11: LP Decoding Tim Roughgarden October 29, 2014 1 Preamble This lecture covers our final subtopic within the exact and approximate recovery part of the course.

More information

Lecture 11. Single-Source Shortest Paths All-Pairs Shortest Paths

Lecture 11. Single-Source Shortest Paths All-Pairs Shortest Paths Lecture. Single-Source Shortest Paths All-Pairs Shortest Paths T. H. Cormen, C. E. Leiserson and R. L. Rivest Introduction to, rd Edition, MIT Press, 009 Sungkyunkwan University Hyunseung Choo choo@skku.edu

More information

A Compact Linearisation of Euclidean Single Allocation Hub Location Problems

A Compact Linearisation of Euclidean Single Allocation Hub Location Problems A Compact Linearisation of Euclidean Single Allocation Hub Location Problems J. Fabian Meier 1,2, Uwe Clausen 1 Institute of Transport Logistics, TU Dortmund, Germany Borzou Rostami 1, Christoph Buchheim

More information

Lecture 9 Sequential unconstrained minimization

Lecture 9 Sequential unconstrained minimization S. Boyd EE364 Lecture 9 Sequential unconstrained minimization brief history of SUMT & IP methods logarithmic barrier function central path UMT & SUMT complexity analysis feasibility phase generalized inequalities

More information

Facial reduction for Euclidean distance matrix problems

Facial reduction for Euclidean distance matrix problems Facial reduction for Euclidean distance matrix problems Nathan Krislock Department of Mathematical Sciences Northern Illinois University, USA Distance Geometry: Theory and Applications DIMACS Center, Rutgers

More information

Nonconvex Quadratic Programming: Return of the Boolean Quadric Polytope

Nonconvex Quadratic Programming: Return of the Boolean Quadric Polytope Nonconvex Quadratic Programming: Return of the Boolean Quadric Polytope Kurt M. Anstreicher Dept. of Management Sciences University of Iowa Seminar, Chinese University of Hong Kong, October 2009 We consider

More information

Lecture 10: Duality in Linear Programs

Lecture 10: Duality in Linear Programs 10-725/36-725: Convex Optimization Spring 2015 Lecture 10: Duality in Linear Programs Lecturer: Ryan Tibshirani Scribes: Jingkun Gao and Ying Zhang Disclaimer: These notes have not been subjected to the

More information

Bregman Divergences. Barnabás Póczos. RLAI Tea Talk UofA, Edmonton. Aug 5, 2008

Bregman Divergences. Barnabás Póczos. RLAI Tea Talk UofA, Edmonton. Aug 5, 2008 Bregman Divergences Barnabás Póczos RLAI Tea Talk UofA, Edmonton Aug 5, 2008 Contents Bregman Divergences Bregman Matrix Divergences Relation to Exponential Family Applications Definition Properties Generalization

More information

Game Theory. Greg Plaxton Theory in Programming Practice, Spring 2004 Department of Computer Science University of Texas at Austin

Game Theory. Greg Plaxton Theory in Programming Practice, Spring 2004 Department of Computer Science University of Texas at Austin Game Theory Greg Plaxton Theory in Programming Practice, Spring 2004 Department of Computer Science University of Texas at Austin Bimatrix Games We are given two real m n matrices A = (a ij ), B = (b ij

More information

Bregman Divergences for Data Mining Meta-Algorithms

Bregman Divergences for Data Mining Meta-Algorithms p.1/?? Bregman Divergences for Data Mining Meta-Algorithms Joydeep Ghosh University of Texas at Austin ghosh@ece.utexas.edu Reflects joint work with Arindam Banerjee, Srujana Merugu, Inderjit Dhillon,

More information

arxiv: v1 [cs.lg] 5 Jun 2018

arxiv: v1 [cs.lg] 5 Jun 2018 A PROJECTION METHOD FOR METRIC-CONSTRAINED OPTIMIZATION NATE VELDT 1, DAVID F. GLEICH 2, ANTHONY WIRTH 3, AND JAMES SAUNDERSON 4 arxiv:1806.01678v1 [cs.lg] 5 Jun 2018 Abstract. We outline a new approach

More information

Analysis-3 lecture schemes

Analysis-3 lecture schemes Analysis-3 lecture schemes (with Homeworks) 1 Csörgő István November, 2015 1 A jegyzet az ELTE Informatikai Kar 2015. évi Jegyzetpályázatának támogatásával készült Contents 1. Lesson 1 4 1.1. The Space

More information

Week 4. (1) 0 f ij u ij.

Week 4. (1) 0 f ij u ij. Week 4 1 Network Flow Chapter 7 of the book is about optimisation problems on networks. Section 7.1 gives a quick introduction to the definitions of graph theory. In fact I hope these are already known

More information

Agenda. Applications of semidefinite programming. 1 Control and system theory. 2 Combinatorial and nonconvex optimization

Agenda. Applications of semidefinite programming. 1 Control and system theory. 2 Combinatorial and nonconvex optimization Agenda Applications of semidefinite programming 1 Control and system theory 2 Combinatorial and nonconvex optimization 3 Spectral estimation & super-resolution Control and system theory SDP in wide use

More information

LECTURE 13 LECTURE OUTLINE

LECTURE 13 LECTURE OUTLINE LECTURE 13 LECTURE OUTLINE Problem Structures Separable problems Integer/discrete problems Branch-and-bound Large sum problems Problems with many constraints Conic Programming Second Order Cone Programming

More information

13 : Variational Inference: Loopy Belief Propagation and Mean Field

13 : Variational Inference: Loopy Belief Propagation and Mean Field 10-708: Probabilistic Graphical Models 10-708, Spring 2012 13 : Variational Inference: Loopy Belief Propagation and Mean Field Lecturer: Eric P. Xing Scribes: Peter Schulam and William Wang 1 Introduction

More information

Sparsity Matters. Robert J. Vanderbei September 20. IDA: Center for Communications Research Princeton NJ.

Sparsity Matters. Robert J. Vanderbei September 20. IDA: Center for Communications Research Princeton NJ. Sparsity Matters Robert J. Vanderbei 2017 September 20 http://www.princeton.edu/ rvdb IDA: Center for Communications Research Princeton NJ The simplex method is 200 times faster... The simplex method is

More information

1 The independent set problem

1 The independent set problem ORF 523 Lecture 11 Spring 2016, Princeton University Instructor: A.A. Ahmadi Scribe: G. Hall Tuesday, March 29, 2016 When in doubt on the accuracy of these notes, please cross chec with the instructor

More information

Delsarte s linear programming bound

Delsarte s linear programming bound 15-859 Coding Theory, Fall 14 December 5, 2014 Introduction For all n, q, and d, Delsarte s linear program establishes a series of linear constraints that every code in F n q with distance d must satisfy.

More information

CS 246 Review of Linear Algebra 01/17/19

CS 246 Review of Linear Algebra 01/17/19 1 Linear algebra In this section we will discuss vectors and matrices. We denote the (i, j)th entry of a matrix A as A ij, and the ith entry of a vector as v i. 1.1 Vectors and vector operations A vector

More information

CS168: The Modern Algorithmic Toolbox Lecture #7: Understanding Principal Component Analysis (PCA)

CS168: The Modern Algorithmic Toolbox Lecture #7: Understanding Principal Component Analysis (PCA) CS68: The Modern Algorithmic Toolbox Lecture #7: Understanding Principal Component Analysis (PCA) Tim Roughgarden & Gregory Valiant April 0, 05 Introduction. Lecture Goal Principal components analysis

More information

Optimality, Duality, Complementarity for Constrained Optimization

Optimality, Duality, Complementarity for Constrained Optimization Optimality, Duality, Complementarity for Constrained Optimization Stephen Wright University of Wisconsin-Madison May 2014 Wright (UW-Madison) Optimality, Duality, Complementarity May 2014 1 / 41 Linear

More information

Linear Programming Duality P&S Chapter 3 Last Revised Nov 1, 2004

Linear Programming Duality P&S Chapter 3 Last Revised Nov 1, 2004 Linear Programming Duality P&S Chapter 3 Last Revised Nov 1, 2004 1 In this section we lean about duality, which is another way to approach linear programming. In particular, we will see: How to define

More information

Introduction to optimization

Introduction to optimization Introduction to optimization Geir Dahl CMA, Dept. of Mathematics and Dept. of Informatics University of Oslo 1 / 24 The plan 1. The basic concepts 2. Some useful tools (linear programming = linear optimization)

More information

Lectures 6, 7 and part of 8

Lectures 6, 7 and part of 8 Lectures 6, 7 and part of 8 Uriel Feige April 26, May 3, May 10, 2015 1 Linear programming duality 1.1 The diet problem revisited Recall the diet problem from Lecture 1. There are n foods, m nutrients,

More information

Supplement: Hoffman s Error Bounds

Supplement: Hoffman s Error Bounds IE 8534 1 Supplement: Hoffman s Error Bounds IE 8534 2 In Lecture 1 we learned that linear program and its dual problem (P ) min c T x s.t. (D) max b T y s.t. Ax = b x 0, A T y + s = c s 0 under the Slater

More information

Partitioning Algorithms that Combine Spectral and Flow Methods

Partitioning Algorithms that Combine Spectral and Flow Methods CS369M: Algorithms for Modern Massive Data Set Analysis Lecture 15-11/11/2009 Partitioning Algorithms that Combine Spectral and Flow Methods Lecturer: Michael Mahoney Scribes: Kshipra Bhawalkar and Deyan

More information

Chapter 11. Approximation Algorithms. Slides by Kevin Wayne Pearson-Addison Wesley. All rights reserved.

Chapter 11. Approximation Algorithms. Slides by Kevin Wayne Pearson-Addison Wesley. All rights reserved. Chapter 11 Approximation Algorithms Slides by Kevin Wayne. Copyright @ 2005 Pearson-Addison Wesley. All rights reserved. 1 P and NP P: The family of problems that can be solved quickly in polynomial time.

More information

Approximation Algorithms

Approximation Algorithms Approximation Algorithms Chapter 26 Semidefinite Programming Zacharias Pitouras 1 Introduction LP place a good lower bound on OPT for NP-hard problems Are there other ways of doing this? Vector programs

More information

15. Conic optimization

15. Conic optimization L. Vandenberghe EE236C (Spring 216) 15. Conic optimization conic linear program examples modeling duality 15-1 Generalized (conic) inequalities Conic inequality: a constraint x K where K is a convex cone

More information

Support Vector Machine (SVM) and Kernel Methods

Support Vector Machine (SVM) and Kernel Methods Support Vector Machine (SVM) and Kernel Methods CE-717: Machine Learning Sharif University of Technology Fall 2015 Soleymani Outline Margin concept Hard-Margin SVM Soft-Margin SVM Dual Problems of Hard-Margin

More information

Solving the MWT. Recall the ILP for the MWT. We can obtain a solution to the MWT problem by solving the following ILP:

Solving the MWT. Recall the ILP for the MWT. We can obtain a solution to the MWT problem by solving the following ILP: Solving the MWT Recall the ILP for the MWT. We can obtain a solution to the MWT problem by solving the following ILP: max subject to e i E ω i x i e i C E x i {0, 1} x i C E 1 for all critical mixed cycles

More information

Introduction to Mathematical Programming IE406. Lecture 10. Dr. Ted Ralphs

Introduction to Mathematical Programming IE406. Lecture 10. Dr. Ted Ralphs Introduction to Mathematical Programming IE406 Lecture 10 Dr. Ted Ralphs IE406 Lecture 10 1 Reading for This Lecture Bertsimas 4.1-4.3 IE406 Lecture 10 2 Duality Theory: Motivation Consider the following

More information