Chapter 6 Interior-Point Approach to Linear Programming
1 Chapter 6 Interior-Point Approach to Linear Programming Objectives: Introduce Basic Ideas of Interior-Point Methods. Motivate further research and applications. Slide#1
2 Linear Programming Problem (Primal) Minimize c^T x s.t. Ax = b, x >= 0, where A in R^{m x n}, c, x in R^n, b in R^m. Feasible domain P = {x in R^n : Ax = b} intersect {x in R^n : x >= 0} (an affine subspace intersected with the non-negative orthant). Primal interior feasible solution: x in R^n s.t. Ax = b, x > 0. Slide#2
3 Dual Problem Maximize b^T w s.t. A^T w + s = c, s >= 0, with w in R^m (dual variables) and s in R^n (dual slacks). Dual interior feasible solution: (w, s) in R^m x R^n s.t. A^T w + s = c, s > 0. Slide#3
4 Primal-Dual Problem Find (x; w; s) in R^n x R^m x R^n such that Ax = b, x >= 0 (primal feasibility); A^T w + s = c, s >= 0 (dual feasibility); x^T s = 0 (complementary slackness). Slide#4
5 What's Special about LP? P = {x in R^n : Ax = b, x >= 0} is a polyhedral set with vertices. A consistent LP is either unbounded or attains its optimum at a vertex of P. Slide#5
6 Solving LP Problems Fact: some vertex x* of P is optimal. Question: how do we find x*? Slide#6
7 Simplex Method Step 1: Start at a vertex x^0. Step 2: If the current vertex x^k is optimal, STOP! x* <- x^k. Otherwise, Step 3: Move to a better neighboring vertex x^{k+1}. GOTO Step 2. Slide#7
8 Is Simplex Method Good? In practice it visits a number of vertices roughly linear in m and sub-linear in n. In the worst case, Klee and Minty (1971) showed that it traverses 2^n - 1 vertices (an exponential-time algorithm). Large-scale problems may take a long time to run. Slide#8
9 Basic Strategy of Interior-Point Approach Stay inside P. Check more directions of movement. Shorten the travelling path. I.e., increase the work per iteration but reduce the total number of iterations. Slide#9
10 Interior-Point Approach Step 1: Start with an interior feasible solution. Step 2: If the current solution is optimal, STOP! Otherwise, Step 3: Move to a better interior solution. Go to Step 2. - good direction? - right step-length? Slide#10
11 Interior-Point Methods Projective scaling method Affine scaling method - Primal affine scaling algorithm - Dual affine scaling algorithm - Primal-Dual algorithm Potential reduction method Path-Following method Slide#11
12 Primal Affine Scaling Algorithm (P) Minimize c^T x s.t. Ax = b, x >= 0. Find an interior feasible solution x^k in R^n s.t. Ax^k = b and x^k > 0. Slide#12
13 Good direction (A) Reduce the objective value: c^T x^{k+1} <= c^T x^k. Since c^T x^{k+1} = c^T(x^k + alpha_k d^k_x) = c^T x^k + alpha_k c^T d^k_x, we need c^T d^k_x <= 0. Candidate: d^k_x = -c (negative gradient, steepest descent). Slide#13
14 (B) Keep feasibility: Ax^{k+1} = b. Since Ax^{k+1} = A(x^k + alpha_k d^k_x) = Ax^k + alpha_k A d^k_x = b + alpha_k A d^k_x, we need A d^k_x = 0, i.e., d^k_x in N(A), the null space of A. Candidate: the projected negative gradient d^k_x = (I - A^T(AA^T)^{-1}A)(-c). Slide#14
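This projected direction is easy to compute numerically. A minimal sketch (the helper name is ours; solving the small m-by-m normal system replaces forming the projection matrix explicitly):

```python
import numpy as np

def projected_neg_gradient(A, c):
    """d = -(I - A^T (A A^T)^{-1} A) c: projection of -c onto N(A).

    Solve (A A^T) y = A c rather than inverting A A^T.
    """
    y = np.linalg.solve(A @ A.T, A @ c)
    return -(c - A.T @ y)
```

By construction A d = 0, so any step x + alpha d preserves Ax = b, and c^T d = -||Pc||^2 <= 0, so it is a descent direction.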
15 Valid Step-length Fact: as long as d^k_x in N(A), Ax^{k+1} = b no matter what value alpha_k takes. However, x^{k+1} > 0 is also required! I.e., we have to know how far x^k is from the boundary of the non-negative orthant {x in R^n : x >= 0}. Slide#15
16 (C) Scaling Let e = (1, 1, ..., 1)^T. If x^k = e, then (1) x^k is one unit away from the boundary, and (2) as long as alpha_k < 1, x^{k+1} > 0. Slide#16
17 Scale x^k to e Define X_k = diag(x^k) = diag(x^k_1, ..., x^k_n); then X_k^{-1} x^k = e. Moreover, the transformation x = X_k y, y = X_k^{-1} x between the x-space R^n_+ and the y-space R^n_+ is one-to-one and onto, and maps boundary to boundary and interior to interior. Slide#17
18 Under x = X_k y, the problem Min c^T x s.t. Ax = b, x > 0 becomes Min (X_k c)^T y s.t. AX_k y = b, y > 0, and x^k maps to y^k = e. Take y^{k+1} = y^k + alpha_k d^k_y with d^k_y = [I - X_k A^T(AX_k^2 A^T)^{-1} AX_k](-X_k c). Then x^{k+1} = X_k y^{k+1} = x^k + alpha_k X_k d^k_y = x^k - alpha_k d^k_x, where d^k_x = X_k[I - X_k A^T(AX_k^2 A^T)^{-1} AX_k] X_k c, and alpha_k = 0.99 (say), 0 < alpha_k < 1. Slide#18
19 Observations (1) Another way to determine the step-length alpha_k: since d^k_y = P_k(-X_k c), we have AX_k d^k_y = 0 and AX_k y^{k+1} = AX_k y^k + alpha_k AX_k d^k_y = b. To keep y^{k+1} = y^k + alpha_k d^k_y > 0: Case 1: if d^k_y >= 0, then alpha_k in (0, infinity). Case 2: if (d^k_y)_i < 0 for some i, then alpha_k = min_i { -1/(d^k_y)_i : (d^k_y)_i < 0 }. Slide#19
20 or alpha_k = min_i { -alpha/(d^k_y)_i : (d^k_y)_i < 0 } for some alpha in (0, 1). (2) As in (1), x^{k+1} = X_k y^{k+1} = X_k(e + alpha_k d^k_y) = x^k + alpha_k X_k d^k_y = x^k + alpha_k X_k(-P_k X_k c) = x^k - alpha_k X_k[I - X_k A^T(AX_k^2 A^T)^{-1} AX_k] X_k c = x^k - alpha_k X_k^2[c - A^T(AX_k^2 A^T)^{-1} AX_k^2 c] = x^k - alpha_k X_k^2[c - A^T w^k] = x^k - alpha_k d^k_x, with w^k = (AX_k^2 A^T)^{-1} AX_k^2 c. Slide#20
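Since the current iterate in the scaled space is y^k = e, the ratio test above is one line of code (a sketch; alpha is the safety factor):

```python
import numpy as np

def step_length(d_y, alpha=0.99):
    """alpha_k = min_i { -alpha/(d_y)_i : (d_y)_i < 0 }, taken from y^k = e."""
    neg = d_y < 0
    if not neg.any():
        return np.inf            # Case 1: the whole ray stays feasible
    return np.min(-alpha / d_y[neg])
```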
21 (3) c^T x^{k+1} = c^T(x^k + alpha_k X_k d^k_y) = c^T x^k + alpha_k c^T X_k(-P_k X_k c) = c^T x^k - alpha_k ||P_k X_k c||^2 = c^T x^k - alpha_k ||d^k_y||^2. Hence c^T x^{k+1} <= c^T x^k, and c^T x^{k+1} < c^T x^k if d^k_y != 0. Lemma 7.1: If x^k in P, x^k > 0 with d^k_y >= 0, d^k_y != 0, then (P) is unbounded below. Slide#21
22 (4) For x^k in P^0 = {x in R^n : Ax = b, x > 0}, if d^k_y = -P_k X_k c = 0, then X_k c falls in the orthogonal complement of N(AX_k), i.e., X_k c is in the row space of AX_k: there exists u^k s.t. (AX_k)^T u^k = X_k c, i.e., (u^k)^T AX_k = c^T X_k, so (u^k)^T A = c^T. For any x in P, c^T x = (u^k)^T Ax = (u^k)^T b = constant. Any feasible solution is optimal!! (Lemma 7.2) In particular, x^k is optimal! Slide#22
23 (5) Combining (3) & (4): if the standard form LP is bounded below and c^T x is not constant on P, then {c^T x^k : k = 1, 2, ...} is well-defined and strictly decreasing. (Lemma 7.3) (6) w^k := (AX_k^2 A^T)^{-1} AX_k^2 c is a dual estimate and r^k := c - A^T w^k the reduced cost. If r^k >= 0, then w^k is dual feasible and (x^k)^T r^k = e^T X_k r^k becomes the duality gap, i.e., c^T x^k - b^T w^k = e^T X_k r^k. Slide#23
24 Therefore, if r^k >= 0 and e^T X_k r^k = 0 (stopping rule), then x^k -> x*, w^k -> w*. (7) d^k_y = [I - X_k A^T(AX_k^2 A^T)^{-1} AX_k](-X_k c) = -X_k(c - A^T(AX_k^2 A^T)^{-1} AX_k^2 c) = -X_k(c - A^T w^k) = -X_k r^k. Slide#24
25 Primal Affine Scaling Algorithm Step 1: Set k <- 0, epsilon > 0, 0 < alpha < 1; find x^0 > 0 with Ax^0 = b. Step 2: Compute w^k = (AX_k^2 A^T)^{-1} AX_k^2 c and r^k = c - A^T w^k. If r^k >= 0 and e^T X_k r^k <= epsilon, STOP! x* <- x^k, w* <- w^k. Otherwise, Step 3: Compute d^k_y = -X_k r^k. If d^k_y > 0, STOP! Unbounded. If d^k_y = 0, STOP! x* <- x^k. Otherwise, Slide#25
26 Step 4: Find alpha_k = min_i { -alpha/(d^k_y)_i : (d^k_y)_i < 0 }; set x^{k+1} = x^k + alpha_k X_k d^k_y, k <- k + 1; go to Step 2. Slide#26
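Steps 1-4 translate almost line for line into NumPy. An illustrative sketch (not tuned code; the exact stopping conditions r^k >= 0 and e^T X_k r^k = 0 are relaxed by a small tolerance eps, as any floating-point implementation must):

```python
import numpy as np

def primal_affine_scaling(A, b, c, x0, alpha=0.99, eps=1e-7, max_iter=200):
    """Primal affine scaling for min c^T x s.t. Ax = b, x >= 0, from x0 > 0."""
    x = np.array(x0, dtype=float)
    w = None
    for _ in range(max_iter):
        X = np.diag(x)
        AX = A @ X
        w = np.linalg.solve(AX @ AX.T, AX @ (X @ c))   # dual estimate w^k
        r = c - A.T @ w                                # reduced cost r^k
        if (r >= -eps).all() and abs(x @ r) <= eps:    # gap e^T X_k r^k small
            break
        d_y = -X @ r
        if (d_y >= 0).all():
            raise ArithmeticError("(P) is unbounded")
        step = np.min(-alpha / d_y[d_y < 0])           # ratio test of Step 4
        x = x + step * (X @ d_y)
    return x, w
```

On the worked example of the next slides (min -2x_1 + x_2 in standard form, started at x^0 = (10, 2, 7, 13)) this sketch converges to the vertex (30, 15, 0, 0) with objective -45.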
27 AN EXAMPLE min -2x_1 + x_2 s.t. x_1 - x_2 <= 15, x_2 <= 15, x_1, x_2 >= 0. Slide#27
28 Reformulate to standard form: min -2x_1 + x_2 s.t. x_1 - x_2 + x_3 = 15, x_2 + x_4 = 15, x_1, x_2, x_3, x_4 >= 0, and x^0 = (10, 2, 7, 13)^T is feasible. Slide#28
29 MATRIX FORMAT min c^T x s.t. Ax = b, x >= 0, where A = [1 -1 1 0; 0 1 0 1], b = (15, 15)^T, c = (-2, 1, 0, 0)^T, x^0 = (10, 2, 7, 13)^T, and X_0 = diag(10, 2, 7, 13). Slide#29
30 SCALING y = X_0^{-1} x: y_1 = x_1/10, y_2 = x_2/2, y_3 = x_3/7, y_4 = x_4/13. The problem is transformed to min -2(10y_1) + (2y_2) = -20y_1 + 2y_2 s.t. 10y_1 - 2y_2 + 7y_3 = 15, 2y_2 + 13y_4 = 15, y_1, y_2, y_3, y_4 >= 0. Slide#30
31 The new matrix form min c-bar^T y s.t. A-bar y = b, y >= 0, where A-bar = AX_0 = [10 -2 7 0; 0 2 0 13], b = (15, 15)^T, c-bar = X_0 c = (-20, 2, 0, 0)^T, and y^0 = X_0^{-1} x^0 = (1, 1, 1, 1)^T. Slide#31
32 Step direction in transformed space: d^0_y = -P X_0 c = (6.66, 0.65, -9.34, -0.10)^T (approximately), so y^1 = y^0 + alpha_0 d^0_y = (1.66, 1.07, 0.07, 0.99)^T (approximately). Slide#32
33 Scale back x = X_0 y: x_1 = 10 y_1 = 16.6, x_2 = 2 y_2 = 2.14, x_3 = 7 y_3 = 0.49, x_4 = 13 y_4 = 12.87 (approximately). Slide#33
34 How to Start? 1. Big-M method (LP) Min c^T x s.t. Ax = b, x >= 0. Objective: make x = e feasible (in general Ae != b). Method: add an artificial variable x_a with a large positive cost M: (LP') Min c^T x + M x_a s.t. Ax + (b - Ae) x_a = b, x >= 0, x_a >= 0. Slide#34
35 Properties: (1) (LP') is a standard form LP with n+1 variables and m constraints. (2) (e, 1) in R^{n+1} is an interior feasible solution of (LP'). (3) If x_a* > 0 in (x*, x_a*), then (LP) is infeasible. Otherwise, either (LP) is unbounded or x* is optimal to (LP). Slide#35
36 2. Two-Phase method (LP) Min c^T x s.t. Ax = b, x >= 0. Choose any x^0 > 0 and calculate V = b - Ax^0. If V = 0, then x^0 is interior feasible. Otherwise, consider (Phase-I) Min u s.t. Ax + Vu = b, x >= 0, u >= 0. Slide#36
37 Properties: (1) (Phase-I) is a standard form LP with n+1 variables and m constraints. (2) (x-hat^0, u^0) = (x^0, 1) is interior feasible for (Phase-I). (3) (Phase-I) is bounded below by 0. (4) Applying primal affine scaling to (Phase-I) generates (x*, u*). If u* > 0, (LP) is infeasible. Otherwise, x* > 0 serves as an initial feasible solution for (Phase-II). Slide#37
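The Phase-I construction is purely mechanical. A small helper (the function name is ours) that builds it from any positive x^0:

```python
import numpy as np

def phase_one_problem(A, b, x0):
    """Data of the Phase-I LP: min u s.t. Ax + Vu = b, x >= 0, u >= 0,
    with V = b - A x0, plus the interior feasible point (x0, 1)."""
    x0 = np.asarray(x0, dtype=float)
    V = b - A @ x0
    A1 = np.hstack([A, V.reshape(-1, 1)])
    c1 = np.append(np.zeros(A.shape[1]), 1.0)   # objective selects u
    xhat = np.append(x0, 1.0)
    return A1, c1, xhat
```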
38 Properties of Primal Affine Scaling (1) The convergence proof, i.e., {x^k} -> x* under the non-degeneracy assumption (Theorem 7.2), is given by Vanderbei/Meketon/Freedman (1985). (2) Convergence proofs without the non-degeneracy assumption: T. Tsuchiya (1991), P. Tseng/Z. Luo (1992). (3) The computational bottleneck is to find (AX_k^2 A^T)^{-1}. (4) No polynomial-time proof: J. Lagarias showed primal affine scaling is only of super-linear rate. Slide#38
39 N. Megiddo/M. Shub showed that primal affine scaling might visit all vertices if it moves too close to the boundary. (5) In practice, VMF reported iteration counts comparing Simplex with Affine Scaling over a range of problem sizes (m, n). (6) It may lose primal feasibility due to machine accuracy (requiring Phase-I again). (7) It may be sensitive to primal degeneracy. Slide#39
40 Improving Primal Affine Scaling Objective: stay away from the boundary! 1. Potential Push Method: replace x^k by the solution x-hat(x^k) of Min -sum_{j=1}^n log_e x_j s.t. Ax = b, x > 0, c^T x = c^T x^k. Slide#40
41 2. Logarithmic Barrier Function Method Min c^T x - mu sum_{j=1}^n log_e x_j s.t. Ax = b, x > 0. (1) {x*(mu) : mu > 0} -> x*. (2) d^k_mu = -(1/mu) X_k[I - X_k A^T(AX_k^2 A^T)^{-1} AX_k](X_k c - mu e) = (1/mu) X_k P_k(-X_k c) + X_k P_k e, i.e., the affine scaling descent direction plus a centering force X_k P_k e. (3) Polynomial-time proof, i.e., termination in O(sqrt(n) L) iterations: C. Gonzaga (1989) (problems in proof!!), C. Roos/J. Vial (1990). Total complexity O(n^3 L)! Slide#41
42 Dual Affine Scaling (D) Max b^T w s.t. A^T w + s = c, s >= 0. Given (w^k, s^k) dual interior feasible, i.e., A^T w^k + s^k = c, s^k > 0. Objective: find (d^k_w, d^k_s) and beta_k > 0 such that w^{k+1} = w^k + beta_k d^k_w, s^{k+1} = s^k + beta_k d^k_s is still dual interior feasible, and b^T w^{k+1} >= b^T w^k. Slide#42
43 Observations: (1) Scaling. w^k in R^m needs no scaling; s^k > 0 is scaled to e = S_k^{-1} s^k, where S_k = diag(s^k) = diag(s^k_1, ..., s^k_n). In the u-space, u = S_k^{-1} s, s = S_k u, d_u = S_k^{-1} d_s, d_s = S_k d_u. Slide#43
44 (2) Dual Feasibility A^T w^{k+1} + s^{k+1} = A^T(w^k + beta_k d^k_w) + (s^k + beta_k d^k_s) = (A^T w^k + s^k) + beta_k(A^T d^k_w + d^k_s) = c + beta_k(A^T d^k_w + d^k_s), so A^T d^k_w + d^k_s = 0 is required! In scaled form, S_k^{-1} A^T d^k_w + d^k_u = 0, hence AS_k^{-1}(S_k^{-1} A^T d^k_w + d^k_u) = 0, i.e., (AS_k^{-2} A^T) d^k_w + AS_k^{-1} d^k_u = 0, so d^k_w = -(AS_k^{-2} A^T)^{-1} AS_k^{-1} d^k_u = -Q d^k_u, with Q := (AS_k^{-2} A^T)^{-1} AS_k^{-1}. Slide#44
45 (3) Increase objective value: we want b^T d^k_w = -b^T Q d^k_u >= 0. Choose d^k_u = -Q^T b; then b^T d^k_w = b^T QQ^T b = ||Q^T b||^2 >= 0!! Moreover d^k_w = -Q d^k_u = QQ^T b = (AS_k^{-2} A^T)^{-1} AS_k^{-1} S_k^{-1} A^T (AS_k^{-2} A^T)^{-1} b = (AS_k^{-2} A^T)^{-1} b, and d^k_s = -A^T d^k_w = -A^T(AS_k^{-2} A^T)^{-1} b. Slide#45
46 (4) Step-size beta_k: need s^{k+1} = s^k + beta_k d^k_s > 0. (i) If d^k_s = 0, problem (D) has a constant objective value and (w^k, s^k) is optimal. (ii) If d^k_s > 0, beta_k in (0, infinity) and problem (D) is unbounded. (iii) If (d^k_s)_i < 0 for some i, beta_k = min_i { -alpha s^k_i/(d^k_s)_i : (d^k_s)_i < 0 } for alpha in (0, 1). Slide#46
47 (5) Primal estimate: let x^k = -S_k^{-2} d^k_s. Then Ax^k = -AS_k^{-2}(-A^T d^k_w) = AS_k^{-2} A^T d^k_w = (AS_k^{-2} A^T)(AS_k^{-2} A^T)^{-1} b = b. Hence x^k is a primal estimate; once x^k >= 0, x^k is primal feasible. If also c^T x^k - b^T w^k = 0, then x^k -> x*, w^k -> w*, s^k -> s*. Slide#47
48 (6) Dual Affine Scaling Algorithm Step 1: Set k = 0 and find (w^0, s^0) s.t. A^T w^0 + s^0 = c, s^0 > 0. Step 2: Set S_k = diag(s^k); compute d^k_w = (AS_k^{-2} A^T)^{-1} b and d^k_s = -A^T d^k_w. Step 3: If d^k_s = 0, STOP! w* <- w^k, s* <- s^k. If d^k_s > 0, STOP! (D) is unbounded. Step 4: Compute x^k = -S_k^{-2} d^k_s. If x^k >= 0 and c^T x^k - b^T w^k <= epsilon, STOP! w* <- w^k, s* <- s^k, x* <- x^k. Slide#48
49 Step 5: Compute beta_k = min_i { -alpha s^k_i/(d^k_s)_i : (d^k_s)_i < 0 } for 0 < alpha < 1. Step 6: w^{k+1} = w^k + beta_k d^k_w, s^{k+1} = s^k + beta_k d^k_s; set k <- k + 1; go to Step 2. Slide#49
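Steps 1-6 in NumPy (a sketch, not production code; the stopping test of Step 4 is relaxed by a tolerance eps, and w0 must be strictly dual feasible):

```python
import numpy as np

def dual_affine_scaling(A, b, c, w0, alpha=0.95, eps=1e-8, max_iter=100):
    """Dual affine scaling for max b^T w s.t. A^T w + s = c, s >= 0."""
    w = np.array(w0, dtype=float)
    s = c - A.T @ w
    assert (s > 0).all(), "w0 is not dual interior feasible"
    x = np.zeros_like(c)
    for _ in range(max_iter):
        S2inv = np.diag(1.0 / s**2)
        d_w = np.linalg.solve(A @ S2inv @ A.T, b)
        d_s = -A.T @ d_w
        x = -S2inv @ d_s                       # primal estimate x^k
        if (x >= -eps).all() and abs(c @ x - b @ w) <= eps:
            break
        neg = d_s < 0
        if not neg.any():
            raise ArithmeticError("(D) is unbounded")
        beta = np.min(-alpha * s[neg] / d_s[neg])
        w = w + beta * d_w
        s = s + beta * d_s
    return w, s, x
```

On the dual of the earlier example (started from the dual interior point w^0 = (-3, -2.5), which is one admissible choice), the iterates approach the dual optimum w* = (-2, -1) with b^T w* = -45.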
50 (7) Starting Dual Affine Scaling Find (w^0, s^0) s.t. A^T w^0 + s^0 = c, s^0 > 0. If c > 0, then w^0 = 0, s^0 = c will do. (Big-M Method) Define p in R^n with p_i = 1 if c_i <= 0 and p_i = 0 if c_i > 0. Consider, for a large M > 0, the (Big-M Problem) Max b^T w + M w_a s.t. A^T w + p w_a + s = c, with w, w_a unrestricted, s >= 0. Slide#50
51 (a) (Big-M) is a standard-form dual LP with n constraints and m+1+n variables. (b) Define c-bar = max_i |c_i| and theta > 1; then w = 0, w_a = -theta c-bar, s = c + theta c-bar p > 0 is an initial interior feasible solution. (c) (w_a)^0 = -theta c-bar < 0; since M > 0 is large, (w_a)^k increases toward 0 as k -> infinity. If (w_a)^k does not approach or cross zero, then problem (D) is infeasible. Slide#51
52 (8) Performance (i) No polynomial-time proof. (ii) Computational bottleneck: (AS_k^{-2} A^T)^{-1}. (iii) Less sensitive to primal degeneracy and numerical errors, but sensitive to dual degeneracy. (iv) Improves the dual objective function very fast, but attaining primal feasibility is slow. Slide#52
53 (9) Improvement (i) Logarithmic Barrier Function Method (mu > 0): Max b^T w + mu sum_{j=1}^n ln[c_j - A_j^T w] s.t. A^T w < c. The direction becomes d_w = (1/mu)(AS_k^{-2} A^T)^{-1} b - (AS_k^{-2} A^T)^{-1} AS_k^{-1} e, the second term being a centering force. As mu -> 0, w^k(mu) -> w*. Complexity: J. Renegar O(n^{3.5} L); P. Vaidya O(n^3 L); C. Roos/J. Vial O(n^3 L). Slide#53
54 (ii) Power Series Method: instead of the order-1 (straight-line) step, follow the continuous trajectory defined by the O.D.E. d w(beta)/d beta = lim_{beta_k -> 0} (w^{k+1} - w^k)/beta_k = [A S(beta)^{-2} A^T]^{-1} b, d s(beta)/d beta = -A^T d w(beta)/d beta, with initial condition w(0) = w^0, s(0) = s^0, where S(beta) = diag(s^0 + beta d_s). Higher-order approximations (order 2, 3, ...) track the trajectory toward (w*, s*) better than the order-1 step. Slide#54
55 Power-Series Expansion: w(beta) = w^0 + sum_{j=1}^infinity beta^j (1/j!) [d^j w(beta)/d beta^j]_{beta=0}, s(beta) = s^0 + sum_{j=1}^infinity beta^j (1/j!) [d^j s(beta)/d beta^j]_{beta=0}. (a) As long as the derivatives [d^j w(beta)/d beta^j]_{beta=0} and [d^j s(beta)/d beta^j]_{beta=0}, j = 1, 2, ..., are known, w(beta) and s(beta) are known. (b) Dual affine scaling is the first-order approximation: w(beta) = w^0 + beta [d w(beta)/d beta]_{beta=0}, s(beta) = s^0 + beta [d s(beta)/d beta]_{beta=0}. (c) A power-series approximation of order 4 or 5 cuts the total number of iterations by 1/2. Slide#55
56 Primal-Dual Algorithm (P) Min c^T x s.t. Ax = b, x >= 0. (D) Max b^T w s.t. A^T w + s = c, s >= 0. Assumptions: (A1) S = {x in R^n : Ax = b, x > 0} is nonempty. (A2) T = {(w, s) in R^{m+n} : A^T w + s = c, s > 0} is nonempty. (A3) A has full row rank. (1) Consider (for mu > 0) (P_mu) Min c^T x - mu sum_{j=1}^n ln x_j s.t. Ax = b, x > 0: a strictly convex objective with linear constraints, hence at most one global optimum, completely characterized by the K-K-T conditions. Slide#56
57 L(x, w) = c^T x - mu sum_{j=1}^n ln x_j + w^T(b - Ax) (x > 0, w unrestricted). Gradients: grad_w L(x, w) = b - Ax; grad_x L(x, w) = c - mu X^{-1} e - A^T w, where X^{-1} e = (1/x_1, ..., 1/x_n)^T. Define s_j = mu/x_j > 0. Then the Kuhn-Tucker conditions become (K-K-T): Ax - b = 0, x > 0; A^T w + s - c = 0, s > 0; XSe = mu e, i.e., x_j s_j = mu for every j, so x^T s = sum_{j=1}^n x_j s_j = n mu. Slide#57
58 (2) Consider (for mu > 0) (D_mu) Max b^T w + mu sum_{j=1}^n ln s_j s.t. A^T w + s = c, s > 0. The same (K-K-T) conditions are obtained. Basic Ideas: (1) For mu > 0, let (x(mu), w(mu), s(mu)) solve K-K-T; then x(mu) is optimal to (P_mu) and (w(mu), s(mu)) is optimal to (D_mu). (2) For x(mu) in S and (w(mu), s(mu)) in T, the duality gap is g(mu) = c^T x(mu) - b^T w(mu) = (c^T - w(mu)^T A) x(mu) = s(mu)^T x(mu) = n mu. Slide#58
59 (3) As mu -> 0, g(mu) -> 0 (no duality gap) and x(mu) -> x*, w(mu) -> w*, s(mu) -> s*. (4) Define the center path Gamma = {(x(mu), w(mu), s(mu)) satisfying the K-K-T conditions, mu > 0}. Follow the center path from large mu > 0, reducing mu to zero; at mu = 0 it reaches (x*, s*). Slide#59
60 (5) On Gamma, since x_j s_j = mu, x and s play equal roles, i.e., the iterates are not biased toward either x_j = 0 or s_j = 0; hence the name center path. (6) Path-Following: stay on (or close to) Gamma at (x(mu_1), w(mu_1), s(mu_1)); reduce mu_1 to mu_2 and move to (x(mu_2), w(mu_2), s(mu_2)); reduce mu_2 to mu_3; and so on, toward (x*, w*, s*). Slide#60
61 Questions: (1) Given µ > 0, when does (x(µ), w(µ), s(µ)) exist? (2) How to find it? (3) How to reduce µ? (4) When will (x(µ), w(µ), s(µ)) converge as µ 0? Slide#61
62 Answers: (1) Lemma 7.6: under assumptions (A1)-(A3), x(mu), w(mu), s(mu) exist, and x(mu) -> x*, w(mu) -> w*, s(mu) -> s* as mu -> 0. (2) To find a solution, solve the K-K-T conditions Ax - b = 0, A^T w + s - c = 0, XSe - mu e = 0: a system of nonlinear equations F(z) = 0 with z = (x, w, s). To find z*(mu) s.t. F(z*(mu)) = 0, use Newton's Method! Slide#62
63 Newton Method (i) One-dimensional case: from z^1, follow the tangent line through (z^1, f(z^1)) with slope f'(z^1) to its root: z^2 = z^1 - f(z^1)/f'(z^1). (ii) Higher-dimensional case: z^2 = z^1 - [J_F(z^1)]^{-1} F(z^1), or J_F(z^1) Delta z = -F(z^1), where [J_F(z^1)]_{ij} = dF_i(z)/dz_j at z = z^1 and Delta z = z^2 - z^1. Slide#63
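The one-dimensional update, iterated until the residual is small (a minimal sketch):

```python
def newton(f, fprime, z, tol=1e-12, max_iter=100):
    """Newton's method for f(z) = 0: repeat z <- z - f(z)/f'(z)."""
    for _ in range(max_iter):
        fz = f(z)
        if abs(fz) < tol:
            break
        z -= fz / fprime(z)
    return z
```

For example, starting from z = 1.5 on f(z) = z^2 - 2 it converges quadratically to sqrt(2).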
64 Solving Ax - b = 0, A^T w + s - c = 0, XSe - mu e = 0. Suppose we have (x^k, w^k, s^k) with x^k in S, (w^k, s^k) in T, and x^k_j s^k_j approx mu_k. We want to find (d^k_x, d^k_w, d^k_s) and (beta^k_P, beta^k_D) s.t. x^{k+1} = x^k + beta^k_P d^k_x in S; (w^{k+1} = w^k + beta^k_D d^k_w; s^{k+1} = s^k + beta^k_D d^k_s) in T; and x^{k+1}_j s^{k+1}_j approx mu_{k+1} < mu_k. Slide#64
65 By Newton's Method, we have the system [A 0 0; 0 A^T I; S_k 0 X_k](d^k_x; d^k_w; d^k_s) = (0; 0; v^k), where v^k = mu_k e - X_k S_k e, i.e., A d^k_x = 0 (1); A^T d^k_w + d^k_s = 0 (2); S_k d^k_x + X_k d^k_s = v^k (3). By (2), (AX_k S_k^{-1} A^T) d^k_w = -AX_k S_k^{-1} d^k_s, i.e., d^k_w = -(AX_k S_k^{-1} A^T)^{-1} AX_k S_k^{-1} d^k_s (4). By (3), d^k_s = X_k^{-1} v^k - X_k^{-1} S_k d^k_x, so AX_k S_k^{-1} d^k_s = AS_k^{-1} v^k - A d^k_x = AS_k^{-1} v^k by (1) (5). Slide#65
66 By (4) and (5), d^k_w = -(AX_k S_k^{-1} A^T)^{-1} AS_k^{-1} v^k (6); by (2), d^k_s = -A^T d^k_w (7); by (3), d^k_x = S_k^{-1}(v^k - X_k d^k_s) (8). (Observation 1) If d^k_x > 0 and c^T d^k_x < 0, then (P) is unbounded! If d^k_s > 0 and b^T d^k_w > 0, then (D) is unbounded! Otherwise beta^k_P = min_i { -alpha x^k_i/(d^k_x)_i : (d^k_x)_i < 0 }, beta^k_D = min_i { -alpha s^k_i/(d^k_s)_i : (d^k_s)_i < 0 }. (Observation 2) mu_{k+1} <- (x^{k+1})^T s^{k+1} / n. Slide#66
67 Primal-Dual Algorithm Step 1: k <- 0; choose epsilon_1 > 0, 0 < alpha, delta < 1, and (x^0, w^0, s^0) in S x T. Step 2: mu_k <- (x^k)^T s^k / n; v^k <- mu_k e - X_k S_k e. Step 3: If mu_k < epsilon_1, STOP! x* <- x^k, w* <- w^k, s* <- s^k. Otherwise, compute d^k_w <- -(AX_k S_k^{-1} A^T)^{-1} AS_k^{-1} v^k, d^k_s <- -A^T d^k_w, d^k_x <- S_k^{-1}(v^k - X_k d^k_s). Step 4: If d^k_x = 0 or d^k_s = 0, set mu_k <- (1 - delta) mu_k (0 < delta < 1) and v^k <- mu_k e - X_k S_k e, and go to Step 3. Otherwise, Slide#67
68 Step 5: If d^k_x > 0 and c^T d^k_x < 0, STOP! (P) is unbounded! If d^k_s > 0 and b^T d^k_w > 0, STOP! (D) is unbounded! Otherwise, Step 6: beta^k_P = min_i { -alpha x^k_i/(d^k_x)_i : (d^k_x)_i < 0 }, beta^k_D = min_i { -alpha s^k_i/(d^k_s)_i : (d^k_s)_i < 0 }; x^{k+1} <- x^k + beta^k_P d^k_x, w^{k+1} <- w^k + beta^k_D d^k_w, s^{k+1} <- s^k + beta^k_D d^k_s; k <- k + 1; go to Step 2. Slide#68
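The whole loop can be sketched in NumPy. Two liberties are taken, both labeled: the Newton target is set to sigma * mu_k with a centering parameter sigma in (0, 1) (a common variant; the slides instead shrink mu_k separately in Step 4), and the unboundedness checks of Step 5 are omitted.

```python
import numpy as np

def ratio_step(z, d, alpha):
    """beta = min_i { -alpha * z_i / d_i : d_i < 0 }, or a full step of 1."""
    neg = d < 0
    return np.min(-alpha * z[neg] / d[neg]) if neg.any() else 1.0

def primal_dual(A, b, c, x, w, s, alpha=0.9, sigma=0.1, eps=1e-8, max_iter=100):
    """Primal-dual interior point; (x, w, s) strictly feasible for (P), (D)."""
    x, w, s = (np.array(t, dtype=float) for t in (x, w, s))
    for _ in range(max_iter):
        mu = x @ s / len(x)
        if mu < eps:
            break
        v = sigma * mu - x * s                 # target: sigma*mu*e - X S e
        d_w = -np.linalg.solve(A @ np.diag(x / s) @ A.T, A @ (v / s))  # (6)
        d_s = -A.T @ d_w                       # (7)
        d_x = (v - x * d_s) / s                # (8)
        x = x + ratio_step(x, d_x, alpha) * d_x
        bD = ratio_step(s, d_s, alpha)
        w, s = w + bD * d_w, s + bD * d_s
    return x, w, s
```

Since A d_x = 0 and A^T d_w + d_s = 0 hold for the Newton direction, primal and dual feasibility are preserved exactly at every iteration; only positivity needs the ratio tests.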
69 Properties: (1) The computational bottleneck is to find (AX_k S_k^{-1} A^T)^{-1}. (2) The scaling matrix X_k S_k^{-1} is the geometric mean of the primal scaling X_k^2 and the dual scaling S_k^{-2}: non-biased toward x or s, hence more robust than both the primal and the dual algorithms. (3) To start the primal-dual algorithm, check 3.5 on p. 45 of Ch. 7. (4) There are many different ways of choosing mu_k and beta^k_P, beta^k_D. Slide#69
70 Kojima/Mizuno/Yoshise (Feb 87) picked mu_k and beta^k_P, beta^k_D by a special formula and presented an O(n^4 L) primal-dual algorithm. Monteiro/Adler (May 87) improved this to O(n^3 L) with properly chosen beta^k_P, beta^k_D and mu_{k+1} = mu_k(1 - 0.1/sqrt(n)). (5) A practical implementation without requiring x^k in S, (w^k, s^k) in T is included in Section 3.6 on p. 47 of Ch. 7. Slide#70
71 Matrix Computation Primal: (AX^2 A^T)^{-1}; Dual: (AS^{-2} A^T)^{-1}; Primal-Dual: (AXS^{-1} A^T)^{-1}. Matrix inversion?? Options: Cholesky factorization; conjugate gradient; LQ factorization; matrix partition; infinitely summable series (I - N)^{-1} = I + N + N^2 + ...; Chebyshev approximation; sparsity; parallel computing. Slide#71
72 Cholesky Factorization & Forward/Backward Solve Computational bottleneck: Primal (AX_k^2 A^T)^{-1}, Dual (AS_k^{-2} A^T)^{-1}, Primal-Dual (AX_k S_k^{-1} A^T)^{-1}. Observations: 1. Basically, we are solving (AD_k A^T) u = v for u, where D_k is diagonal with positive elements. 2. When A has full row rank m (< n) and D_k has positive diagonals, M = AD_k A^T is symmetric positive definite. Slide#72
73 3. When M is symmetric positive definite, there is a unique lower triangular matrix L with positive diagonals s.t. M = LL^T. 4. Mu = v becomes L(L^T u) = v; writing z = L^T u, we may first solve Lz = v for z, then solve L^T u = z for u. 5. Lz = v is lower triangular: row i reads l_{i1} z_1 + l_{i2} z_2 + ... + l_{ii} z_i = v_i. Slide#73
74 Forward Solve: l_{11} z_1 = v_1 gives z_1 = v_1/l_{11}; l_{21} z_1 + l_{22} z_2 = v_2 gives z_2 = (v_2 - l_{21} z_1)/l_{22}; ...; l_{m1} z_1 + l_{m2} z_2 + ... + l_{mm} z_m = v_m gives z_m = (v_m - sum_{i=1}^{m-1} l_{mi} z_i)/l_{mm}, i.e., z_1, z_2, ..., z_m in order. 6. L^T u = z is upper triangular. Backward Solve: l_{mm} u_m = z_m gives u_m = z_m/l_{mm}; l_{m-1,m-1} u_{m-1} + l_{m,m-1} u_m = z_{m-1} gives u_{m-1} = (z_{m-1} - l_{m,m-1} u_m)/l_{m-1,m-1}; ...; l_{11} u_1 + l_{21} u_2 + ... + l_{m1} u_m = z_1 gives u_1 = (z_1 - sum_{i=2}^m l_{i1} u_i)/l_{11}, i.e., u_m, u_{m-1}, ..., u_1 in order. Slide#74
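The two substitution sweeps, transcribed directly into code (a sketch for dense L):

```python
import numpy as np

def forward_solve(L, v):
    """Solve L z = v for lower-triangular L by forward substitution."""
    z = np.zeros(len(v))
    for i in range(len(v)):
        z[i] = (v[i] - L[i, :i] @ z[:i]) / L[i, i]
    return z

def backward_solve(L, z):
    """Solve L^T u = z (an upper-triangular system) by backward substitution."""
    m = len(z)
    u = np.zeros(m)
    for i in range(m - 1, -1, -1):
        u[i] = (z[i] - L[i+1:, i] @ u[i+1:]) / L[i, i]
    return u
```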
75 7. Example M = [1 0 2 1; 0 4 8 10; 2 8 29 22; 1 10 22 42], v = (2, 6, 16, 33)^T; solve Mu = v, i.e., u = M^{-1} v. Write M = LL^T and match entries column by column. 1st column: l_{11}^2 = 1 gives l_{11} = 1; l_{11} l_{21} = 0 gives l_{21} = 0; l_{11} l_{31} = 2 gives l_{31} = 2; l_{11} l_{41} = 1 gives l_{41} = 1. 2nd column: l_{21}^2 + l_{22}^2 = 4 gives l_{22} = 2; l_{21} l_{31} + l_{22} l_{32} = 8 gives l_{32} = 4; l_{21} l_{41} + l_{22} l_{42} = 10 gives l_{42} = 5. 3rd column: l_{31}^2 + l_{32}^2 + l_{33}^2 = 29 gives l_{33} = 3; l_{31} l_{41} + l_{32} l_{42} + l_{33} l_{43} = 22 gives l_{43} = 0. 4th column: l_{41}^2 + l_{42}^2 + l_{43}^2 + l_{44}^2 = 42 gives l_{44} = 4. Slide#75
76 L = [1 0 0 0; 0 2 0 0; 2 4 3 0; 1 5 0 4]. Solve Lz = v: z_1 = 2, z_2 = 3, z_3 = 0, z_4 = 4. Solve L^T u = z: u_4 = 1, u_3 = 0, u_2 = -1, u_1 = 1, so u = (1, -1, 0, 1)^T. Slide#76
77 8. Cholesky Factorization Algorithm
l_11 <- sqrt(m_11)
for i = 2 to m: l_i1 <- m_i1 / l_11
for j = 2 to m:
  for i = j to m:
    s <- m_ij
    for k = 1 to j-1: s <- s - l_ik * l_jk
    if i = j then l_jj <- sqrt(s) else l_ij <- s / l_jj
Slide#77
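The same column-by-column scheme in NumPy (a sketch for dense symmetric positive definite M; no pivoting or sparsity handling):

```python
import numpy as np

def cholesky(M):
    """Return lower-triangular L with positive diagonal s.t. M = L L^T."""
    m = M.shape[0]
    L = np.zeros((m, m))
    for j in range(m):
        s = M[j, j] - L[j, :j] @ L[j, :j]
        L[j, j] = np.sqrt(s)                       # diagonal entry first
        for i in range(j + 1, m):
            L[i, j] = (M[i, j] - L[i, :j] @ L[j, :j]) / L[j, j]
    return L
```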
78 9. Forward Solve
z_1 <- v_1 / l_11
for i = 2 to m:
  s <- 0
  for k = 1 to i-1: s <- s + l_ik * z_k
  z_i <- (v_i - s) / l_ii
10. Backward Solve
u_m <- z_m / l_mm
for i = m-1 down to 1:
  s <- 0
  for k = i+1 to m: s <- s + l_ki * u_k
  u_i <- (z_i - s) / l_ii
Slide#78
79 Conjugate Gradient Method Solves Mu = v in at most m iterations, where M is m x m symmetric positive definite (M.R. Hestenes/E. Stiefel 1952). Basic idea (error-correction method): (1) Start from an approximate solution u^k. (2) Evaluate an error function h^k. (3) Move along a direction d^k which reduces the error. (4) Moving directions are mutually conjugate w.r.t. M, i.e., (d^k)^T M d^j = 0 for j != k. (5) u^{k+1} = u^k + alpha_k d^k with an appropriate step-size alpha_k. Slide#79
80 (1) Given u^k, is Mu^k = v? (2) Define r^k = v - Mu^k (residual vector) and h^k = (r^k)^T M^{-1} r^k (error function), i.e., h^k = (v - Mu^k)^T M^{-1}(v - Mu^k) = (u^k)^T Mu^k - 2v^T u^k + v^T M^{-1} v. (3) Given d^k, determine alpha_k s.t. u^{k+1} = u^k + alpha_k d^k minimizes h^{k+1}. Setting dh^{k+1}/d alpha_k = 0, we have alpha_k = (d^k)^T r^k / (d^k)^T M d^k = (d^k)^T r^k / (d^k)^T p^k, where p^k = Md^k. Slide#80
81 d^k should be a good direction which reduces the error function. How about the negative gradient? -dh^k/du^k = -2(Mu^k - v) = 2r^k. (4) To make d^k conjugate to d^{k-1} w.r.t. M, define d^k = r^k - beta_k Md^{k-1} and require (Md^{k-1})^T d^k = 0. This gives beta_k = (Md^{k-1})^T r^k / (Md^{k-1})^T p^{k-1} = (p^{k-1})^T r^k / (p^{k-1})^T p^{k-1}, where p^{k-1} = Md^{k-1}. Slide#81
82 (5) Note that r^{k+1} = v - Mu^{k+1} = v - M(u^k + alpha_k d^k) = (v - Mu^k) - alpha_k Md^k = r^k - alpha_k p^k. Algorithm CG: Choose arbitrary u^0, k <- 0, epsilon > 0; compute d^0 = r^0 = v - Mu^0. Repeat: p^k = Md^k; alpha_k = (d^k)^T r^k / (d^k)^T p^k; u^{k+1} = u^k + alpha_k d^k; r^{k+1} = r^k - alpha_k p^k; beta_{k+1} = (p^k)^T r^{k+1} / (p^k)^T p^k; Slide#82
83 d^{k+1} = r^{k+1} - beta_{k+1} p^k; k <- k + 1; until ||r^{k+1}|| <= epsilon. Output u^{k+1} as the solution. Complexity: matrix-vector multiplication O(m^2); termination in m iterations O(m); total O(m^3). For M large and sparse with gamma non-zeros per row, we need (gamma + 5)km multiplications, where k <= m. Slide#83
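A sketch of CG in NumPy, with one deliberate deviation, stated plainly: for numerical robustness it uses the textbook Hestenes-Stiefel updates beta_{k+1} = (r^{k+1})^T r^{k+1} / (r^k)^T r^k and d^{k+1} = r^{k+1} + beta_{k+1} d^k in place of the slides' p^k-based formulas; both choices make d^{k+1} conjugate to d^k w.r.t. M.

```python
import numpy as np

def conjugate_gradient(M, v, eps=1e-10):
    """Solve M u = v for symmetric positive definite M."""
    u = np.zeros_like(v)
    r = v - M @ u                 # residual r^0
    d = r.copy()                  # first direction: steepest descent
    for _ in range(2 * len(v)):   # at most m steps in exact arithmetic
        if np.linalg.norm(r) <= eps:
            break
        p = M @ d
        a = (r @ r) / (d @ p)     # exact line search along d
        u = u + a * d
        r_new = r - a * p
        beta = (r_new @ r_new) / (r @ r)
        d = r_new + beta * d
        r = r_new
    return u
```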
84 LQ Factorization Basic idea: if M = I - B is symmetric p.d. and B has eigenvalues 0 < lambda_i < 1, then the series sum_{k=0}^infinity B^k = I + B + B^2 + ... converges and M^{-1} = (I - B)^{-1} = sum_{k=0}^infinity B^k, so Mu = v gives u = M^{-1} v = [I + B + B^2 + ...] v. Consider (LP) Min c^T x s.t. Ax = b, x >= 0. Slide#84
85 For A (m x n, with full row rank), factor A = L_{m x m} Q_{m x n} with QQ^T = I (orthonormal rows). Then (LP) Min c^T x s.t. LQx = b, x >= 0 becomes (LP') Min c^T x s.t. Qx = L^{-1} b, x >= 0, and the dual estimate w^k = [QD_k^2 Q^T]^{-1} QD_k^2 c can be evaluated by the series above, with alpha[I - QX_k Q^T] playing the role of B: symmetric p.d. with eigenvalues 0 < lambda_i < 1. Slide#85
86 Extensions Quadratic Programming: (QP) Min (1/2) x^T Qx + c^T x s.t. Ax = b, x >= 0. (QCQP) Min (1/2) x^T Qx + c^T x s.t. (1/2) x^T H_k x + h_k^T x <= c_k, k = 1, ..., m, x >= 0. Convex Programming: Min f(x) s.t. Ax = b, x >= 0. Slide#86
87 Semi-Infinite Programming: Min c^T x s.t. sum_{j=1}^n x_j f_j(t) >= g(t) for all t in T, x >= 0 (n variables, infinitely many constraints, one for each t in T). Slide#87
88 Second-Order Cone Programming (SOC): Min c^T x s.t. Ax = b, x in K, where x, c in R^n, b in R^m, and K = {(x_1, x_2, ..., x_n) in R^n : sqrt(x_1^2 + ... + x_{n-1}^2) <= x_n}. Semidefinite Programming (SDP): Min C . X s.t. A X = b, X in S^n_+, where X, C in S^n, b in R^m, A = (A_1; A_2; ...; A_m) with A_i in S^n, and A X = (A_1 . X, ..., A_m . X) is a linear operator (with . the trace inner product). Slide#88
More informationCS711008Z Algorithm Design and Analysis
CS711008Z Algorithm Design and Analysis Lecture 8 Linear programming: interior point method Dongbo Bu Institute of Computing Technology Chinese Academy of Sciences, Beijing, China 1 / 31 Outline Brief
More informationOptimization WS 13/14:, by Y. Goldstein/K. Reinert, 9. Dezember 2013, 16: Linear programming. Optimization Problems
Optimization WS 13/14:, by Y. Goldstein/K. Reinert, 9. Dezember 2013, 16:38 2001 Linear programming Optimization Problems General optimization problem max{z(x) f j (x) 0,x D} or min{z(x) f j (x) 0,x D}
More informationConic Linear Programming. Yinyu Ye
Conic Linear Programming Yinyu Ye December 2004, revised January 2015 i ii Preface This monograph is developed for MS&E 314, Conic Linear Programming, which I am teaching at Stanford. Information, lecture
More informationOptimality, Duality, Complementarity for Constrained Optimization
Optimality, Duality, Complementarity for Constrained Optimization Stephen Wright University of Wisconsin-Madison May 2014 Wright (UW-Madison) Optimality, Duality, Complementarity May 2014 1 / 41 Linear
More informationA new primal-dual path-following method for convex quadratic programming
Volume 5, N., pp. 97 0, 006 Copyright 006 SBMAC ISSN 00-805 www.scielo.br/cam A new primal-dual path-following method for convex quadratic programming MOHAMED ACHACHE Département de Mathématiques, Faculté
More informationCOMPARATIVE STUDY BETWEEN LEMKE S METHOD AND THE INTERIOR POINT METHOD FOR THE MONOTONE LINEAR COMPLEMENTARY PROBLEM
STUDIA UNIV. BABEŞ BOLYAI, MATHEMATICA, Volume LIII, Number 3, September 2008 COMPARATIVE STUDY BETWEEN LEMKE S METHOD AND THE INTERIOR POINT METHOD FOR THE MONOTONE LINEAR COMPLEMENTARY PROBLEM ADNAN
More informationLinear Programming: Simplex
Linear Programming: Simplex Stephen J. Wright 1 2 Computer Sciences Department, University of Wisconsin-Madison. IMA, August 2016 Stephen Wright (UW-Madison) Linear Programming: Simplex IMA, August 2016
More information5. Duality. Lagrangian
5. Duality Convex Optimization Boyd & Vandenberghe Lagrange dual problem weak and strong duality geometric interpretation optimality conditions perturbation and sensitivity analysis examples generalized
More information4TE3/6TE3. Algorithms for. Continuous Optimization
4TE3/6TE3 Algorithms for Continuous Optimization (Algorithms for Constrained Nonlinear Optimization Problems) Tamás TERLAKY Computing and Software McMaster University Hamilton, November 2005 terlaky@mcmaster.ca
More information4. Convex optimization problems
Convex Optimization Boyd & Vandenberghe 4. Convex optimization problems optimization problem in standard form convex optimization problems quasiconvex optimization linear optimization quadratic optimization
More informationWritten Examination
Division of Scientific Computing Department of Information Technology Uppsala University Optimization Written Examination 202-2-20 Time: 4:00-9:00 Allowed Tools: Pocket Calculator, one A4 paper with notes
More informationAlgorithms for constrained local optimization
Algorithms for constrained local optimization Fabio Schoen 2008 http://gol.dsi.unifi.it/users/schoen Algorithms for constrained local optimization p. Feasible direction methods Algorithms for constrained
More informationLECTURE 25: REVIEW/EPILOGUE LECTURE OUTLINE
LECTURE 25: REVIEW/EPILOGUE LECTURE OUTLINE CONVEX ANALYSIS AND DUALITY Basic concepts of convex analysis Basic concepts of convex optimization Geometric duality framework - MC/MC Constrained optimization
More informationConvex Optimization M2
Convex Optimization M2 Lecture 3 A. d Aspremont. Convex Optimization M2. 1/49 Duality A. d Aspremont. Convex Optimization M2. 2/49 DMs DM par email: dm.daspremont@gmail.com A. d Aspremont. Convex Optimization
More informationLecture 2: The Simplex method. 1. Repetition of the geometrical simplex method. 2. Linear programming problems on standard form.
Lecture 2: The Simplex method. Repetition of the geometrical simplex method. 2. Linear programming problems on standard form. 3. The Simplex algorithm. 4. How to find an initial basic solution. Lecture
More information4. Duality Duality 4.1 Duality of LPs and the duality theorem. min c T x x R n, c R n. s.t. ai Tx = b i i M a i R n
2 4. Duality of LPs and the duality theorem... 22 4.2 Complementary slackness... 23 4.3 The shortest path problem and its dual... 24 4.4 Farkas' Lemma... 25 4.5 Dual information in the tableau... 26 4.6
More informationLecture 6: Conic Optimization September 8
IE 598: Big Data Optimization Fall 2016 Lecture 6: Conic Optimization September 8 Lecturer: Niao He Scriber: Juan Xu Overview In this lecture, we finish up our previous discussion on optimality conditions
More informationLecture: Duality of LP, SOCP and SDP
1/33 Lecture: Duality of LP, SOCP and SDP Zaiwen Wen Beijing International Center For Mathematical Research Peking University http://bicmr.pku.edu.cn/~wenzw/bigdata2017.html wenzw@pku.edu.cn Acknowledgement:
More information4TE3/6TE3. Algorithms for. Continuous Optimization
4TE3/6TE3 Algorithms for Continuous Optimization (Duality in Nonlinear Optimization ) Tamás TERLAKY Computing and Software McMaster University Hamilton, January 2004 terlaky@mcmaster.ca Tel: 27780 Optimality
More informationA PRIMAL-DUAL INTERIOR POINT ALGORITHM FOR CONVEX QUADRATIC PROGRAMS. 1. Introduction Consider the quadratic program (PQ) in standard format:
STUDIA UNIV. BABEŞ BOLYAI, INFORMATICA, Volume LVII, Number 1, 01 A PRIMAL-DUAL INTERIOR POINT ALGORITHM FOR CONVEX QUADRATIC PROGRAMS MOHAMED ACHACHE AND MOUFIDA GOUTALI Abstract. In this paper, we propose
More informationA Full Newton Step Infeasible Interior Point Algorithm for Linear Optimization
A Full Newton Step Infeasible Interior Point Algorithm for Linear Optimization Kees Roos e-mail: C.Roos@tudelft.nl URL: http://www.isa.ewi.tudelft.nl/ roos 37th Annual Iranian Mathematics Conference Tabriz,
More informationLecture: Introduction to LP, SDP and SOCP
Lecture: Introduction to LP, SDP and SOCP Zaiwen Wen Beijing International Center For Mathematical Research Peking University http://bicmr.pku.edu.cn/~wenzw/bigdata2015.html wenzw@pku.edu.cn Acknowledgement:
More informationConic Linear Programming. Yinyu Ye
Conic Linear Programming Yinyu Ye December 2004, revised October 2017 i ii Preface This monograph is developed for MS&E 314, Conic Linear Programming, which I am teaching at Stanford. Information, lecture
More informationNew Infeasible Interior Point Algorithm Based on Monomial Method
New Infeasible Interior Point Algorithm Based on Monomial Method Yi-Chih Hsieh and Dennis L. Bricer Department of Industrial Engineering The University of Iowa, Iowa City, IA 52242 USA (January, 1995)
More informationInterior-Point Methods
Interior-Point Methods Stephen Wright University of Wisconsin-Madison Simons, Berkeley, August, 2017 Wright (UW-Madison) Interior-Point Methods August 2017 1 / 48 Outline Introduction: Problems and Fundamentals
More information1. Algebraic and geometric treatments Consider an LP problem in the standard form. x 0. Solutions to the system of linear equations
The Simplex Method Most textbooks in mathematical optimization, especially linear programming, deal with the simplex method. In this note we study the simplex method. It requires basically elementary linear
More informationConvex Optimization Boyd & Vandenberghe. 5. Duality
5. Duality Convex Optimization Boyd & Vandenberghe Lagrange dual problem weak and strong duality geometric interpretation optimality conditions perturbation and sensitivity analysis examples generalized
More informationHomework 4. Convex Optimization /36-725
Homework 4 Convex Optimization 10-725/36-725 Due Friday November 4 at 5:30pm submitted to Christoph Dann in Gates 8013 (Remember to a submit separate writeup for each problem, with your name at the top)
More informationMotivating examples Introduction to algorithms Simplex algorithm. On a particular example General algorithm. Duality An application to game theory
Instructor: Shengyu Zhang 1 LP Motivating examples Introduction to algorithms Simplex algorithm On a particular example General algorithm Duality An application to game theory 2 Example 1: profit maximization
More informationNonlinear Optimization for Optimal Control
Nonlinear Optimization for Optimal Control Pieter Abbeel UC Berkeley EECS Many slides and figures adapted from Stephen Boyd [optional] Boyd and Vandenberghe, Convex Optimization, Chapters 9 11 [optional]
More informationCSC Linear Programming and Combinatorial Optimization Lecture 10: Semidefinite Programming
CSC2411 - Linear Programming and Combinatorial Optimization Lecture 10: Semidefinite Programming Notes taken by Mike Jamieson March 28, 2005 Summary: In this lecture, we introduce semidefinite programming
More informationA priori bounds on the condition numbers in interior-point methods
A priori bounds on the condition numbers in interior-point methods Florian Jarre, Mathematisches Institut, Heinrich-Heine Universität Düsseldorf, Germany. Abstract Interior-point methods are known to be
More informationInput: System of inequalities or equalities over the reals R. Output: Value for variables that minimizes cost function
Linear programming Input: System of inequalities or equalities over the reals R A linear cost function Output: Value for variables that minimizes cost function Example: Minimize 6x+4y Subject to 3x + 2y
More informationA Brief Review on Convex Optimization
A Brief Review on Convex Optimization 1 Convex set S R n is convex if x,y S, λ,µ 0, λ+µ = 1 λx+µy S geometrically: x,y S line segment through x,y S examples (one convex, two nonconvex sets): A Brief Review
More informationLecture: Duality.
Lecture: Duality http://bicmr.pku.edu.cn/~wenzw/opt-2016-fall.html Acknowledgement: this slides is based on Prof. Lieven Vandenberghe s lecture notes Introduction 2/35 Lagrange dual problem weak and strong
More information6.252 NONLINEAR PROGRAMMING LECTURE 10 ALTERNATIVES TO GRADIENT PROJECTION LECTURE OUTLINE. Three Alternatives/Remedies for Gradient Projection
6.252 NONLINEAR PROGRAMMING LECTURE 10 ALTERNATIVES TO GRADIENT PROJECTION LECTURE OUTLINE Three Alternatives/Remedies for Gradient Projection Two-Metric Projection Methods Manifold Suboptimization Methods
More informationSummer School: Semidefinite Optimization
Summer School: Semidefinite Optimization Christine Bachoc Université Bordeaux I, IMB Research Training Group Experimental and Constructive Algebra Haus Karrenberg, Sept. 3 - Sept. 7, 2012 Duality Theory
More informationSolving Obstacle Problems by Using a New Interior Point Algorithm. Abstract
Solving Obstacle Problems by Using a New Interior Point Algorithm Yi-Chih Hsieh Department of Industrial Engineering National Yunlin Polytechnic Institute Huwei, Yunlin 6308 Taiwan and Dennis L. Bricer
More informationLecture: Convex Optimization Problems
1/36 Lecture: Convex Optimization Problems http://bicmr.pku.edu.cn/~wenzw/opt-2015-fall.html Acknowledgement: this slides is based on Prof. Lieven Vandenberghe s lecture notes Introduction 2/36 optimization
More informationChapter 4. Unconstrained optimization
Chapter 4. Unconstrained optimization Version: 28-10-2012 Material: (for details see) Chapter 11 in [FKS] (pp.251-276) A reference e.g. L.11.2 refers to the corresponding Lemma in the book [FKS] PDF-file
More information"SYMMETRIC" PRIMAL-DUAL PAIR
"SYMMETRIC" PRIMAL-DUAL PAIR PRIMAL Minimize cx DUAL Maximize y T b st Ax b st A T y c T x y Here c 1 n, x n 1, b m 1, A m n, y m 1, WITH THE PRIMAL IN STANDARD FORM... Minimize cx Maximize y T b st Ax
More information4.5 Simplex method. LP in standard form: min z = c T x s.t. Ax = b
4.5 Simplex method LP in standard form: min z = c T x s.t. Ax = b x 0 George Dantzig (1914-2005) Examine a sequence of basic feasible solutions with non increasing objective function values until an optimal
More informationISM206 Lecture Optimization of Nonlinear Objective with Linear Constraints
ISM206 Lecture Optimization of Nonlinear Objective with Linear Constraints Instructor: Prof. Kevin Ross Scribe: Nitish John October 18, 2011 1 The Basic Goal The main idea is to transform a given constrained
More informationCSCI : Optimization and Control of Networks. Review on Convex Optimization
CSCI7000-016: Optimization and Control of Networks Review on Convex Optimization 1 Convex set S R n is convex if x,y S, λ,µ 0, λ+µ = 1 λx+µy S geometrically: x,y S line segment through x,y S examples (one
More informationPrimal-Dual Interior-Point Methods. Ryan Tibshirani Convex Optimization /36-725
Primal-Dual Interior-Point Methods Ryan Tibshirani Convex Optimization 10-725/36-725 Given the problem Last time: barrier method min x subject to f(x) h i (x) 0, i = 1,... m Ax = b where f, h i, i = 1,...
More informationConvex Optimization. (EE227A: UC Berkeley) Lecture 6. Suvrit Sra. (Conic optimization) 07 Feb, 2013
Convex Optimization (EE227A: UC Berkeley) Lecture 6 (Conic optimization) 07 Feb, 2013 Suvrit Sra Organizational Info Quiz coming up on 19th Feb. Project teams by 19th Feb Good if you can mix your research
More information4. The Dual Simplex Method
4. The Dual Simplex Method Javier Larrosa Albert Oliveras Enric Rodríguez-Carbonell Problem Solving and Constraint Programming (RPAR) Session 4 p.1/34 Basic Idea (1) Algorithm as explained so far known
More informationAn Infeasible Interior-Point Algorithm with full-newton Step for Linear Optimization
An Infeasible Interior-Point Algorithm with full-newton Step for Linear Optimization H. Mansouri M. Zangiabadi Y. Bai C. Roos Department of Mathematical Science, Shahrekord University, P.O. Box 115, Shahrekord,
More informationOptimisation and Operations Research
Optimisation and Operations Research Lecture 22: Linear Programming Revisited Matthew Roughan http://www.maths.adelaide.edu.au/matthew.roughan/ Lecture_notes/OORII/ School
More information9.1 Linear Programs in canonical form
9.1 Linear Programs in canonical form LP in standard form: max (LP) s.t. where b i R, i = 1,..., m z = j c jx j j a ijx j b i i = 1,..., m x j 0 j = 1,..., n But the Simplex method works only on systems
More informationIntroduction to Nonlinear Stochastic Programming
School of Mathematics T H E U N I V E R S I T Y O H F R G E D I N B U Introduction to Nonlinear Stochastic Programming Jacek Gondzio Email: J.Gondzio@ed.ac.uk URL: http://www.maths.ed.ac.uk/~gondzio SPS
More information6.854J / J Advanced Algorithms Fall 2008
MIT OpenCourseWare http://ocw.mit.edu 6.85J / 8.5J Advanced Algorithms Fall 008 For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms. 8.5/6.85 Advanced Algorithms
More informationOn Superlinear Convergence of Infeasible Interior-Point Algorithms for Linearly Constrained Convex Programs *
Computational Optimization and Applications, 8, 245 262 (1997) c 1997 Kluwer Academic Publishers. Manufactured in The Netherlands. On Superlinear Convergence of Infeasible Interior-Point Algorithms for
More informationInterior Point Methods for Convex Quadratic and Convex Nonlinear Programming
School of Mathematics T H E U N I V E R S I T Y O H F E D I N B U R G Interior Point Methods for Convex Quadratic and Convex Nonlinear Programming Jacek Gondzio Email: J.Gondzio@ed.ac.uk URL: http://www.maths.ed.ac.uk/~gondzio
More informationLecture 18: Optimization Programming
Fall, 2016 Outline Unconstrained Optimization 1 Unconstrained Optimization 2 Equality-constrained Optimization Inequality-constrained Optimization Mixture-constrained Optimization 3 Quadratic Programming
More informationA full-newton step feasible interior-point algorithm for P (κ)-lcp based on a new search direction
Croatian Operational Research Review 77 CRORR 706), 77 90 A full-newton step feasible interior-point algorithm for P κ)-lcp based on a new search direction Behrouz Kheirfam, and Masoumeh Haghighi Department
More informationYinyu Ye, MS&E, Stanford MS&E310 Lecture Note #06. The Simplex Method
The Simplex Method Yinyu Ye Department of Management Science and Engineering Stanford University Stanford, CA 94305, U.S.A. http://www.stanford.edu/ yyye (LY, Chapters 2.3-2.5, 3.1-3.4) 1 Geometry of Linear
More informationAN INTERIOR POINT METHOD, BASED ON RANK-ONE UPDATES, Jos F. Sturm 1 and Shuzhong Zhang 2. Erasmus University Rotterdam ABSTRACT
October 13, 1995. Revised November 1996. AN INTERIOR POINT METHOD, BASED ON RANK-ONE UPDATES, FOR LINEAR PROGRAMMING Jos F. Sturm 1 Shuzhong Zhang Report 9546/A, Econometric Institute Erasmus University
More informationA QUADRATIC CONE RELAXATION-BASED ALGORITHM FOR LINEAR PROGRAMMING
A QUADRATIC CONE RELAXATION-BASED ALGORITHM FOR LINEAR PROGRAMMING A Dissertation Presented to the Faculty of the Graduate School of Cornell University in Partial Fulfillment of the Requirements for the
More informationA FULL-NEWTON STEP INFEASIBLE-INTERIOR-POINT ALGORITHM COMPLEMENTARITY PROBLEMS
Yugoslav Journal of Operations Research 25 (205), Number, 57 72 DOI: 0.2298/YJOR3055034A A FULL-NEWTON STEP INFEASIBLE-INTERIOR-POINT ALGORITHM FOR P (κ)-horizontal LINEAR COMPLEMENTARITY PROBLEMS Soodabeh
More informationLagrangian Duality Theory
Lagrangian Duality Theory Yinyu Ye Department of Management Science and Engineering Stanford University Stanford, CA 94305, U.S.A. http://www.stanford.edu/ yyye Chapter 14.1-4 1 Recall Primal and Dual
More information6-1 The Positivstellensatz P. Parrilo and S. Lall, ECC
6-1 The Positivstellensatz P. Parrilo and S. Lall, ECC 2003 2003.09.02.10 6. The Positivstellensatz Basic semialgebraic sets Semialgebraic sets Tarski-Seidenberg and quantifier elimination Feasibility
More informationA path following interior-point algorithm for semidefinite optimization problem based on new kernel function. djeffal
Journal of Mathematical Modeling Vol. 4, No., 206, pp. 35-58 JMM A path following interior-point algorithm for semidefinite optimization problem based on new kernel function El Amir Djeffal a and Lakhdar
More informationLecture 15 Newton Method and Self-Concordance. October 23, 2008
Newton Method and Self-Concordance October 23, 2008 Outline Lecture 15 Self-concordance Notion Self-concordant Functions Operations Preserving Self-concordance Properties of Self-concordant Functions Implications
More information