CS711008Z Algorithm Design and Analysis
Lecture 8: Linear programming: interior point method
Dongbo Bu
Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
Outline
Brief history of interior point method
Basic idea of interior point method
A brief history of linear programming
In 1949, G. B. Dantzig proposed the simplex algorithm;
In 1971, Klee and Minty gave a counter-example showing that simplex is not a polynomial-time algorithm;
In 1975, L. V. Kantorovich and T. C. Koopmans received the Nobel prize for the application of linear programming in resource allocation;
In 1979, L. G. Khachiyan proposed a polynomial-time ellipsoid method;
In 1984, N. Karmarkar proposed another polynomial-time algorithm, the interior-point method;
In 2001, D. Spielman and S.-H. Teng proposed smoothed complexity to explain the efficiency of the simplex algorithm
In 1979, L. G. Khachiyan proposed a polynomial-time ellipsoid method for LP
Figure: Leonid G. Khachiyan
In 1984, N. Karmarkar proposed a new polynomial-time algorithm for LP
Basic idea of interior point method
KKT conditions for LP
Let's consider a linear program in slack form and its dual problem, i.e.,
Primal problem: min c^T x  s.t.  Ax = b, x ≥ 0
Dual problem: max b^T y  s.t.  A^T y ≤ c
KKT conditions:
Primal feasibility: Ax = b, x ≥ 0
Dual feasibility: A^T y ≤ c
Complementary slackness: x_i = 0 or a_i^T y = c_i for each i = 1, ..., n, where a_i denotes the i-th column of A
Rewriting the KKT conditions
Let's consider a linear program in slack form and its dual problem, i.e.,
Primal problem: min c^T x  s.t.  Ax = b, x ≥ 0
Dual problem: max b^T y  s.t.  A^T y ≤ c
KKT conditions:
Primal feasibility: Ax = b, x ≥ 0;
Dual feasibility: A^T y + σ = c, σ ≥ 0;
Complementary slackness: x_i σ_i = 0 for each i = 1, ..., n
These conditions consist of m + 2n equations over m + 2n variables, plus non-negativity constraints
Rewriting the KKT conditions further
Let's define the diagonal matrices (shown here for n = 3):

    X = [ x_1  0    0   ]        Σ = [ σ_1  0    0   ]
        [ 0    x_2  0   ]            [ 0    σ_2  0   ]
        [ 0    0    x_3 ]            [ 0    0    σ_3 ]

and, with e = (1, ..., 1)^T, rewrite the complementary slackness as:

    XΣe = [ x_1σ_1  0       0      ] [ 1 ]   [ x_1σ_1 ]
          [ 0       x_2σ_2  0      ] [ 1 ] = [ x_2σ_2 ] = 0
          [ 0       0       x_3σ_3 ] [ 1 ]   [ x_3σ_3 ]

KKT conditions:
Primal feasibility: Ax = b, x ≥ 0;
Dual feasibility: A^T y + σ = c, σ ≥ 0;
Complementary slackness: XΣe = 0
We call (x, y, σ) an interior point if x > 0 and σ > 0
Question: how can we find (x, y, σ) satisfying the KKT conditions?
A simple interior-point method
Basic idea: assume we already know (x̄, ȳ, σ̄) that is both primal and dual feasible, i.e.,
    A x̄ = b, x̄ > 0
    A^T ȳ + σ̄ = c, σ̄ > 0
and try to improve it to another point (x', y', σ') to make complementary slackness hold
Improvement strategy: starting from (x̄, ȳ, σ̄), we follow a direction (Δx, Δy, Δσ) such that (x̄ + Δx, ȳ + Δy, σ̄ + Δσ) is a better solution, i.e., it comes closer to satisfying complementary slackness
To find such a direction, we substitute (x̄ + Δx, ȳ + Δy, σ̄ + Δσ) into the KKT conditions:
Primal feasibility: A(x̄ + Δx) = b, x̄ + Δx ≥ 0;
Dual feasibility: A^T(ȳ + Δy) + (σ̄ + Δσ) = c, σ̄ + Δσ ≥ 0;
Complementary slackness: (X̄ + ΔX)(Σ̄ + ΔΣ)e = 0
Since we start from (x̄, ȳ, σ̄) such that
    A x̄ = b, x̄ > 0
    A^T ȳ + σ̄ = c, σ̄ > 0,
the above conditions change into
    A Δx = 0, x̄ + Δx ≥ 0;
    A^T Δy + Δσ = 0, σ̄ + Δσ ≥ 0;
    X̄ Δσ + Σ̄ Δx = −X̄ Σ̄ e − ΔX ΔΣ e
Note that when Δx and Δσ are small, the final non-linear term ΔX ΔΣ e is small relative to X̄ Σ̄ e and can thus be dropped, generating a linear system:
    A Δx = 0, x̄ + Δx ≥ 0;
    A^T Δy + Δσ = 0, σ̄ + Δσ ≥ 0;
    X̄ Δσ + Σ̄ Δx = −X̄ Σ̄ e
The solution is:
    Δσ = X̄^{-1}(−X̄ Σ̄ e − Σ̄ Δx) = −σ̄ − X̄^{-1} Σ̄ Δx

    [ −X̄^{-1}Σ̄   A^T ] [ Δx ]   [ σ̄ ]
    [     A        0  ] [ Δy ] = [ 0  ]
1: Set initial solution x̄ such that A x̄ = b, x̄ > 0;
2: Set initial solution ȳ, σ̄ such that A^T ȳ + σ̄ = c, σ̄ > 0;
3: while TRUE do
4:     Solve
           [ −X̄^{-1}Σ̄   A^T ] [ Δx ]   [ σ̄ ]
           [     A        0  ] [ Δy ] = [ 0  ]
5:     Set Δσ = X̄^{-1}(−X̄ Σ̄ e − Σ̄ Δx) = −σ̄ − X̄^{-1} Σ̄ Δx;
6:     Calculate a step length θ such that x̄ + θ Δx ≥ 0 and σ̄ + θ Δσ ≥ 0;
7:     Update x̄ = x̄ + θ Δx, ȳ = ȳ + θ Δy, σ̄ = σ̄ + θ Δσ;
8:     if x̄_i σ̄_i ≤ ε for all i = 1, ..., n then
9:         break;
10:    end if
11: end while
12: return (x̄, ȳ, σ̄);
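The iteration above can be sketched in a few lines of NumPy. This is a minimal illustration, not the lecture's implementation: the tiny LP, the starting point, and the damping factor 0.9 (used to keep iterates strictly interior) are all illustrative choices.

```python
import numpy as np

def affine_interior_point(A, b, c, x, y, sigma, eps=1e-8, max_iter=100):
    """Affine interior-point iteration for min c^T x s.t. Ax = b, x >= 0.

    (x, y, sigma) must be strictly feasible: Ax = b, x > 0, A^T y + sigma = c, sigma > 0.
    """
    m, n = A.shape
    for _ in range(max_iter):
        if x @ sigma <= eps:                      # complementary slackness ~ holds
            break
        # Solve [[-X^{-1} Sigma, A^T], [A, 0]] [dx; dy] = [sigma; 0]
        K = np.block([[-np.diag(sigma / x), A.T],
                      [A, np.zeros((m, m))]])
        sol = np.linalg.solve(K, np.concatenate([sigma, np.zeros(m)]))
        dx, dy = sol[:n], sol[n:]
        dsigma = -A.T @ dy                        # from A^T dy + dsigma = 0
        # Damped step length theta keeps x > 0 and sigma > 0 strictly
        theta = 1.0
        for v, dv in ((x, dx), (sigma, dsigma)):
            neg = dv < 0
            if neg.any():
                theta = min(theta, 0.9 * np.min(-v[neg] / dv[neg]))
        x, y, sigma = x + theta * dx, y + theta * dy, sigma + theta * dsigma
    return x, y, sigma

# Illustrative LP: min x1 + 2*x2  s.t.  x1 + x2 = 2, x >= 0  (optimum x = (2, 0))
A = np.array([[1.0, 1.0]])
b = np.array([2.0])
c = np.array([1.0, 2.0])
x0, y0 = np.array([1.0, 1.0]), np.array([0.0])    # strictly feasible start
sigma0 = c - A.T @ y0                             # = (1, 2) > 0
x, y, sigma = affine_interior_point(A, b, c, x0, y0, sigma0)
print(x, c @ x)                                   # close to [2, 0] and 2
```

Note that the duality gap x^T σ shrinks by a factor (1 − θ) each iteration here, since the linear system forces σ̄^T Δx + x̄^T Δσ = −x̄^T σ̄ and Δx^T Δσ = 0.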
An example
    min  2x_1 + 15x_2
    s.t. 12x_1 + 24x_2 ≥ 120
         16x_1 + 16x_2 ≥ 120
         30x_1 + 12x_2 ≥ 120
         x_1 ≤ 15
         x_2 ≤ 15
         x_1 ≥ 0
         x_2 ≥ 0
An execution
Starting point 1: x̄ = (10, 10)
[Figure: trajectory of iterates in the (x_1, x_2) plane; both axes range from 0 to 15]
Another execution
Starting point 2: x̄ = (14, 1)
[Figure: trajectory of iterates in the (x_1, x_2) plane; both axes range from 0 to 15]
Centered interior-point method
To avoid this poor performance, we need a way to keep the iterates away from the boundary until the solution approaches the optimum
An efficient way to achieve this goal is to relax the complementary slackness condition XΣe = 0 into
    XΣe = (1/t) e   (t > 0)
We start from a small t and gradually increase it as the algorithm proceeds
At each iteration, we execute the previous affine interior point method
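Relative to the plain affine step, only the right-hand side of the linear system changes: targeting XΣe = (1/t)e instead of 0 replaces σ̄ by σ̄ − (1/t)X̄^{-1}e. A hedged sketch (the tiny LP, the starting point, and the rule for shrinking 1/t with the duality gap are illustrative choices, not from the slides):

```python
import numpy as np

def centered_step(A, x, sigma, mu):
    """One centered interior-point step: targets X Sigma e = mu * e, where mu = 1/t."""
    m, n = A.shape
    K = np.block([[-np.diag(sigma / x), A.T],
                  [A, np.zeros((m, m))]])
    rhs = np.concatenate([sigma - mu / x, np.zeros(m)])   # only change vs. affine step
    sol = np.linalg.solve(K, rhs)
    dx, dy = sol[:n], sol[n:]
    return dx, dy, -A.T @ dy          # dsigma from A^T dy + dsigma = 0

# Illustrative LP: min x1 + 2*x2  s.t.  x1 + x2 = 2, x >= 0  (optimum x = (2, 0))
A, b, c = np.array([[1.0, 1.0]]), np.array([2.0]), np.array([1.0, 2.0])
x, y = np.array([1.0, 1.0]), np.array([0.0])
sigma = c - A.T @ y
for _ in range(100):
    gap = x @ sigma
    if gap < 1e-8:
        break
    mu = 0.1 * gap / len(x)           # shrink 1/t as the iterates improve
    dx, dy, dsigma = centered_step(A, x, sigma, mu)
    theta = 1.0                       # damped step keeps the iterate interior
    for v, dv in ((x, dx), (sigma, dsigma)):
        neg = dv < 0
        if neg.any():
            theta = min(theta, 0.9 * np.min(-v[neg] / dv[neg]))
    x, y, sigma = x + theta * dx, y + theta * dy, sigma + theta * dsigma
print(x)                              # near the optimum (2, 0), approached through the interior
```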
Running the central path algorithm
Starting point 1: x̄ = (10, 10)
[Figure: trajectory of iterates in the (x_1, x_2) plane; both axes range from 0 to 15]
Running the central path algorithm
Starting point 2: x̄ = (14, 1)
[Figure: trajectory of iterates in the (x_1, x_2) plane; both axes range from 0 to 15]
Another viewpoint
Actually, the relaxed complementary slackness XΣe = (1/t) e (t > 0) corresponds to the following optimization problem (the LP with a log-barrier term):
    min  c^T x − (1/t) Σ_{i=1}^{n} log x_i
    s.t. Ax = b
         x ≥ 0
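A quick sanity check of this correspondence, sketched from the standard Lagrangian derivation:

```latex
% Lagrangian of the barrier problem (multiplier y for Ax = b):
L(x, y) = c^T x - \tfrac{1}{t} \textstyle\sum_{i=1}^{n} \log x_i - y^T (Ax - b)
% Stationarity:
\nabla_x L = c - \tfrac{1}{t} X^{-1} e - A^T y = 0
% Define \sigma := \tfrac{1}{t} X^{-1} e, i.e. \sigma_i = 1/(t x_i) > 0; then
A^T y + \sigma = c, \qquad x_i \sigma_i = \tfrac{1}{t} \quad (i = 1, \dots, n)
% i.e. dual feasibility plus the relaxed complementary slackness X \Sigma e = \tfrac{1}{t} e.
```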
Consider a general optimization problem
Suppose we are trying to solve a convex program with inequality constraints:
    min  f(x)
    s.t. Ax = b
         g(x) ≤ 0
where f(x) and g(x) are convex and twice differentiable
Let's start from equality-constrained quadratic programming
Suppose we are trying to solve a QP with equality constraints:
    min  (1/2) x^T P x + Q^T x + r
    s.t. Ax = b
Applying the Lagrangian conditions, we have Ax = b and P x + Q + A^T λ = 0
Thus, the optimal point x can be found by solving:
    [ P   A^T ] [ x ]   [ −Q ]
    [ A   0   ] [ λ ] = [  b ]
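The KKT system above is a single linear solve. A minimal sketch (the particular P, Q, A, b below are an illustrative example, not from the slides):

```python
import numpy as np

def solve_eq_qp(P, Q, A, b):
    """Solve min (1/2) x^T P x + Q^T x + r  s.t.  Ax = b via the KKT linear system."""
    n, m = P.shape[0], A.shape[0]
    K = np.block([[P, A.T],
                  [A, np.zeros((m, m))]])
    rhs = np.concatenate([-Q, b])
    sol = np.linalg.solve(K, rhs)
    return sol[:n], sol[n:]           # primal solution x, multiplier lambda

# Example: min (1/2)(x1^2 + x2^2)  s.t.  x1 + x2 = 2  ->  x = (1, 1), lambda = -1
P = np.eye(2)
Q = np.zeros(2)
A = np.array([[1.0, 1.0]])
b = np.array([2.0])
x, lam = solve_eq_qp(P, Q, A, b)
print(x, lam)   # [1. 1.] [-1.]
```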
Then how do we minimize f(x) with equality constraints? Newton's method
Suppose we are trying to minimize a convex function f(x) that is not a QP:
    min  f(x)
    s.t. Ax = b
Basic idea: let's try to improve from a feasible solution x
At x, we write the Taylor expansion and use a quadratic approximation to f(x + Δx):
    f(x + Δx) ≈ f(x) + ∇f(x)^T Δx + (1/2) Δx^T ∇²f(x) Δx
    min  f(x) + ∇f(x)^T Δx + (1/2) Δx^T ∇²f(x) Δx
    s.t. A(x + Δx) = b
Thus, since Ax = b already holds, the Newton step Δx can be found by solving:
    [ ∇²f(x)  A^T ] [ Δx ]   [ −∇f(x) ]
    [ A       0   ] [ λ  ] = [    0   ]
Since this is just an approximation to f(x), we iterate, performing a line search at each step
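A minimal sketch of these feasible-start Newton steps (the objective e^{x_1} + e^{x_2} and the constraint x_1 + x_2 = 2 are illustrative choices; by symmetry the minimizer is x = (1, 1)):

```python
import numpy as np

def newton_eq(grad, hess, A, x, iters=20):
    """Newton's method for min f(x) s.t. Ax = b, from a feasible x (so A dx = 0)."""
    m = A.shape[0]
    for _ in range(iters):
        H, g = hess(x), grad(x)
        K = np.block([[H, A.T],
                      [A, np.zeros((m, m))]])
        rhs = np.concatenate([-g, np.zeros(m)])
        dx = np.linalg.solve(K, rhs)[:len(x)]
        x = x + dx                    # full step; a line search would damp this
    return x

# Example: min exp(x1) + exp(x2)  s.t.  x1 + x2 = 2  ->  x = (1, 1) by symmetry
grad = lambda x: np.exp(x)
hess = lambda x: np.diag(np.exp(x))
A = np.array([[1.0, 1.0]])
x = newton_eq(grad, hess, A, np.array([1.8, 0.2]))   # feasible start
print(x)   # [1. 1.] to numerical precision
```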
Then how do we minimize f(x) with inequality constraints?
Suppose we are trying to solve a convex program with inequality constraints:
    min  f(x)
    s.t. Ax = b
         g(x) ≤ 0
where f(x) and g(x) are convex and twice differentiable
Basic idea:
1. Transform it into a series of equality-constrained optimization problems;
2. Solve each equality-constrained optimization problem using Newton's method
Log barrier function: transform into equality constraints
Log barrier function
Suppose we are trying to solve a convex program with inequality constraints:
    min  f(x)
    s.t. Ax = b
         g(x) ≤ 0
where f(x) and g(x) are convex and twice differentiable
This is equivalent to the following optimization problem:
    min  f(x) + I_−(g(x))
    s.t. Ax = b
where I_−(u) = 0 if u ≤ 0, and ∞ otherwise
But the indicator function is not differentiable
Log barrier function: smooth approximation to the indicator function
Suppose we are trying to solve a convex program with inequality constraints:
    min  f(x)
    s.t. Ax = b
         g(x) ≤ 0
where f(x) and g(x) are convex and twice differentiable
We approximate it by the following optimization problem:
    min  f(x) − (1/t) log(−g(x))
    s.t. Ax = b
Basic idea: −(1/t) log(−u) is a smooth approximation to the indicator function I_−(u), and the approximation improves as t → ∞
Log barrier function approximates the indicator function
−(1/t) log(−u) approximates the indicator function I_−(u), and the approximation improves as t → ∞
For each setting of t, we obtain an approximation to the original optimization problem
Interior method and central path
Solve a sequence of optimization problems:
    min  f(x) − (1/t) Σ_i log(−g_i(x))
    s.t. Ax = b
For any t > 0, we define x*(t) as the optimal solution
t increases step by step
t should not be too large initially, since then the problem is hard to solve using Newton's method
Central path: {x*(t) : t > 0}
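The whole scheme fits in a short sketch: minimize the barrier objective by damped Newton at the current t, then increase t. Everything concrete below is an illustrative choice, not from the slides: the small LP (min x_1 + 2x_2 s.t. x_1 + x_2 ≥ 1, x ≥ 0, optimal value 1 at (1, 0)), the strictly feasible start, and the factor μ = 10 for growing t.

```python
import numpy as np

def phi(z, t, c, G, h):
    """Barrier objective t*c^T z - sum_i log(h_i - g_i^T z); +inf outside the interior."""
    s = h - G @ z
    return t * (c @ z) - np.log(s).sum() if np.all(s > 0) else np.inf

def barrier_method(c, G, h, x, t=1.0, mu=10.0, outer=8, inner=50):
    """Barrier method for min c^T x s.t. Gx <= h, from a strictly feasible x."""
    for _ in range(outer):
        for _ in range(inner):                       # Newton's method at this t
            s = h - G @ x
            grad = t * c + G.T @ (1.0 / s)
            hess = G.T @ np.diag(1.0 / s ** 2) @ G
            dx = np.linalg.solve(hess, -grad)
            if -grad @ dx < 1e-10:                   # squared Newton decrement: done
                break
            alpha = 1.0                              # backtracking line search;
            while alpha > 1e-12 and \
                  phi(x + alpha * dx, t, c, G, h) > phi(x, t, c, G, h) + 0.25 * alpha * (grad @ dx):
                alpha *= 0.5                         # also keeps the iterate interior
            x = x + alpha * dx
        t *= mu                                      # tighten the barrier
    return x

# Illustrative LP: min x1 + 2*x2  s.t.  x1 + x2 >= 1, x1 >= 0, x2 >= 0
c = np.array([1.0, 2.0])
G = np.array([[-1.0, -1.0], [-1.0, 0.0], [0.0, -1.0]])   # rows encode Gx <= h
h = np.array([-1.0, 0.0, 0.0])
x = barrier_method(c, G, h, np.array([0.8, 0.8]))        # strictly feasible start
print(x, c @ x)   # near (1, 0) and 1; gap after the last stage is bounded by m/t
```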
Interior point method for LP
Consider the following LP:
    min  c^T x
    s.t. Ax ≥ b
         x ≥ 0
Central path
For any t > 0, we define x*(t) as the optimal solution to:
    min  f(x) − (1/t) Σ_i log(−g_i(x))
    s.t. Ax = b
x*(t) is not an optimal solution to the original problem
However, the duality gap is bounded by m/t, where m is the number of inequality constraints; hence x*(t) approaches the optimum as t → ∞
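A sketch of why the gap is m/t: the stationarity condition at x*(t) hands us feasible dual multipliers whose complementarity residuals are each exactly 1/t.

```latex
% At x = x^*(t), stationarity of the barrier problem gives
\nabla f(x) + \sum_{i=1}^{m} \frac{1}{-t\, g_i(x)} \nabla g_i(x) + A^T \nu = 0,
% so \lambda_i := \frac{1}{-t\, g_i(x)} > 0 are feasible dual multipliers, and
\sum_{i=1}^{m} \lambda_i \bigl(-g_i(x)\bigr) = \sum_{i=1}^{m} \frac{1}{t} = \frac{m}{t},
% which bounds the duality gap between x^*(t) and the dual point (\lambda, \nu).
```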
Interior point algorithm