Enlarging neighborhoods of interior-point algorithms for linear programming via least values of proximity measure functions


Y. B. Zhao

Abstract. It is well known that wide-neighborhood interior-point algorithms for linear programming perform much better in implementation than their small-neighborhood counterparts. In this paper, we provide a unified way to enlarge the neighborhoods of predictor-corrector interior-point algorithms for linear programming. We prove that our methods not only enlarge the neighborhoods but also retain the best known iteration complexity and the superlinear (quadratic) convergence of the original interior-point algorithms. The idea of our methods is to use the global minimizers of proximity measure functions.

Key words. Linear programming, interior-point algorithms, iteration complexity, neighborhoods.

This work was partially supported by the National Natural Science Foundation of China. Part of the work was done while the author was a research staff member of The Fields Institute for Research in Mathematical Sciences, University of Toronto, Ontario, Canada, in 2002.

Institute of Applied Mathematics, AMSS, Chinese Academy of Sciences, Beijing, China (ybzhao@amss.ac.cn).

1 Introduction

Consider the canonical linear programming problem $\min\{c^T x : Ax = b,\ x \ge 0\}$, where $c \in R^n$, $b \in R^m$, $A \in R^{m\times n}$ and $\mathrm{rank}(A) = m$. We assume that the problem has an interior point, i.e., there exists $(x^0, y^0, s^0)$ such that $Ax^0 = b$, $A^T y^0 + s^0 = c$, $x^0 > 0$, $s^0 > 0$. This assumption guarantees the existence of the central path, on which most interior-point algorithms are based.

Two key factors are closely related to the practical behavior of an interior-point algorithm: the search direction and the neighborhood of the central path. A large body of implementation experience shows that interior-point algorithms using wide neighborhoods perform much better than their counterparts based on small neighborhoods. Thus, the enlargement of neighborhoods is an interesting and important avenue for improving the efficiency of interior-point algorithms, which is why many authors have studied wide-neighborhood interior-point algorithms for optimization problems; see, for example, [1, 5, 6, 11, 15, 18, 22, 23].

A neighborhood of the central path is determined by some proximity measure function, which is used to measure the distance between a point $(x,s) > 0$ and the central path. The following proximity measure functions are widely used in the literature of interior-point methods for linear programming [8, 16, 19, 20]:
\[
\delta_2(x,s,\mu) = \left\| \frac{xs}{\mu} - e \right\|_2, \qquad
\delta_\infty(x,s,\mu) = \left\| \frac{xs}{\mu} - e \right\|_\infty,
\]
\[
\delta_{\log}(x,s,\mu) = \frac{x^T s}{\mu} - n + n\log\mu - \sum_{i=1}^n \log(x_i s_i), \qquad
\delta_\Phi(x,s,\mu) = \frac{1}{2}\left\| \sqrt{\frac{xs}{\mu}} - \sqrt{\frac{\mu}{xs}}\, \right\|^2.
\]
In general, the parameter $\mu$ can be calculated from $(x,s)$, and thus we may write $\mu$ as $\mu(x,s)$. For any given proximity measure function $\delta(x,s,\mu(x,s))$, the neighborhood of the central path can be defined as
\[
N(\tau) = \{ (x,s) > 0 : \delta(x,s,\mu(x,s)) \le \tau \},
\]
where $\tau$ is a given positive number. For most interior-point algorithms, the parameter $\mu > 0$ is set to be the duality gap $x^T s/n$. However, we note that most proximity measure functions used in interior-point algorithms have global minimizers with respect to $\mu$. For any given $(x,s) > 0$, let $\mu^*(x,s)$ denote the global minimizer of $\delta(x,s,\cdot)$, i.e., $\delta(x,s,\mu^*(x,s)) \le \delta(x,s,\mu)$ for all $\mu > 0$. Then the corresponding neighborhood is given by
\[
N^*(\tau) = \{ (x,s) > 0 : \delta(x,s,\mu^*(x,s)) \le \tau \}.
\]
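To make the four measures concrete, the following small sketch (our own illustration, not part of the original text; all helper names are ours) evaluates each measure at a given pair $(x,s) > 0$ and a parameter $\mu > 0$.

```python
import numpy as np

def delta_2(x, s, mu):
    """2-norm proximity: ||xs/mu - e||_2."""
    return np.linalg.norm(x * s / mu - 1.0)

def delta_inf(x, s, mu):
    """infinity-norm proximity: ||xs/mu - e||_inf."""
    return np.max(np.abs(x * s / mu - 1.0))

def delta_log(x, s, mu):
    """logarithmic proximity: x^T s/mu - n + n log(mu) - sum_i log(x_i s_i)."""
    n = len(x)
    return x @ s / mu - n + n * np.log(mu) - np.sum(np.log(x * s))

def delta_phi(x, s, mu):
    """self-regular proximity: (1/2) || sqrt(xs/mu) - sqrt(mu/(xs)) ||^2."""
    v = np.sqrt(x * s / mu)
    return 0.5 * np.sum((v - 1.0 / v) ** 2)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x, s = rng.uniform(0.5, 2.0, 5), rng.uniform(0.5, 2.0, 5)
    mu = x @ s / len(x)  # the usual duality-gap choice of mu
    for f in (delta_2, delta_inf, delta_log, delta_phi):
        print(f.__name__, f(x, s, mu))
```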

Clearly, the neighborhood $N^*(\tau)$ is larger than $N(\tau)$ whenever $\mu(x,s) \ne \mu^*(x,s)$; i.e., for a given proximity measure function $\delta$, the neighborhood $N^*(\tau)$ is the largest such neighborhood. This suggests that we may use the neighborhood $N^*(\tau)$ to replace the original neighborhood $N(\tau)$ of interior-point algorithms. The purpose of this paper is to adapt predictor-corrector-type interior-point algorithms to use the new neighborhood $N^*(\tau)$, so that the algorithms can work in wider neighborhoods.

We first give a brief review of predictor-corrector methods. The primal-dual predictor-corrector algorithm for linear programming was proposed by Mizuno, Todd and Ye [10]; see also Barnes et al. [2]. This method was later extended to complementarity problems and other optimization problems (see, for example, [5, 7, 14, 15, 17, 18, 21]). Some authors also proposed infeasible versions of this method, so that the algorithm can start from an infeasible point (see, for instance, [12, 13]). A more practical predictor-corrector algorithm was proposed by Mehrotra [9], which was based on the power-series algorithm in [3]; other power-series, high-order predictor-corrector algorithms can be found in [4, 15, 17], etc. Two neighborhoods are used in predictor-corrector algorithms: one is used in the so-called predictor steps and the other in the corrector steps. Since the original Mizuno-Todd-Ye method actually works in small neighborhoods, several authors have tried to modify this algorithm to work in wider neighborhoods in order to achieve faster convergence (see, for example, [5, 6, 11, 14, 15]).

Let $\mu_{\mathrm{gap}}$ denote the duality gap $x^T s/n$ throughout this paper. We note that $\mu_{\mathrm{gap}} = x^T s/n$ is the global minimizer of the proximity measure function $\delta_{\log}(x,s,\mu)$ with respect to $\mu$. Therefore, it is natural for $\delta_{\log}$-based interior-point algorithms to use the duality gap as the updating rule for $\mu$ during the course of the iteration. For other proximity measure functions, however, $\mu_{\mathrm{gap}}$ is not necessarily a global minimizer. For example, the global minimizer (see [11]) of the function $\delta_\Phi(x,s,\mu)$ is
\[
\mu^*(x,s) = \left( \frac{x^T s}{\sum_{i=1}^n (x_i s_i)^{-1}} \right)^{1/2}.
\]
Predictor-corrector algorithms that use $\delta_\Phi(x,s,\mu)$ as the proximity measure function have been studied by several authors [11, 14]. In particular, Peng, Terlaky and Zhao [11] have already studied how to design an interior-point algorithm by using the above-mentioned minimizer $\mu^*(x,s)$. In this paper, we focus our attention on the cases of 2-norm and $\infty$-norm neighborhoods, which are commonly used in the literature of interior-point algorithms. We note that the global minimizer of the 2-norm proximity measure function $\delta_2(x,s,\mu)$ is
\[
\mu_2^*(x,s) = \frac{(xs)^T(xs)}{x^T s} = \frac{\sum_{i=1}^n (x_i s_i)^2}{x^T s},
\]
and the global minimizer (see Lemma 3.1 in this paper) of the $\infty$-norm proximity measure function $\delta_\infty(x,s,\mu)$ is
\[
\mu_\infty^*(x,s) = \frac{\max_{1\le i\le n} x_i s_i + \min_{1\le i\le n} x_i s_i}{2}.
\]
By using the global minimizers of the proximity measure functions $\delta_2$ and $\delta_\infty$, we provide in this paper a unified method to enlarge the neighborhood of the central path, and hence we give a new design for predictor-corrector algorithms. Also, we prove that our algorithms retain both the best known iteration complexity and the quadratic convergence of the original algorithms.
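As a quick numerical sanity check (ours, under the closed forms above, with hypothetical helper names), the minimizers $\mu_2^*$ and $\mu_\infty^*$ can be compared against a fine grid search.

```python
import numpy as np

def mu_2_star(x, s):
    xs = x * s
    return xs @ xs / (x @ s)          # ||xs||_2^2 / x^T s

def mu_inf_star(x, s):
    xs = x * s
    return (xs.max() + xs.min()) / 2  # (max(xs) + min(xs)) / 2

def mu_phi_star(x, s):
    xs = x * s
    return np.sqrt((x @ s) / np.sum(1.0 / xs))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    x, s = rng.uniform(0.5, 2.0, 6), rng.uniform(0.5, 2.0, 6)
    xs = x * s
    grid = np.linspace(0.9 * xs.min(), 1.1 * xs.max(), 20001)
    d2 = lambda m: np.linalg.norm(xs / m - 1.0)
    dinf = lambda m: np.max(np.abs(xs / m - 1.0))
    # the closed-form minimizers should match a fine grid search
    print(abs(grid[np.argmin([d2(m) for m in grid])] - mu_2_star(x, s)))
    print(abs(grid[np.argmin([dinf(m) for m in grid])] - mu_inf_star(x, s)))
```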

This paper is organized as follows. In Section 2, we describe the algorithm based on 2-norm neighborhoods, and prove that the algorithm retains the best known $O(\sqrt{n}\log((x^0)^T s^0/\varepsilon))$ iteration complexity for small-neighborhood algorithms. In Section 3, we study the case of $\infty$-norm neighborhoods, and prove that the algorithm retains the best known $O(n\log((x^0)^T s^0/\varepsilon))$ iteration complexity for $\infty$-norm-neighborhood algorithms. Finally, we point out that both algorithms in the paper are quadratically convergent in the sense that the duality gap sequences converge to zero quadratically.

We use the following notation throughout the paper. $e$ denotes the vector with all components equal to 1. For any positive vectors $x, s \in R^n$ and a real number $q$, the symbols $xs$ and $x^q$ denote the vectors whose components are $x_i s_i$ and $x_i^q$ ($i = 1,\dots,n$), respectively. For any vector $x \in R^n$, $\min(x)$ and $\max(x)$ denote the smallest and the largest components of $x$, respectively, i.e., $\min(x) = \min_{1\le i\le n} x_i$ and $\max(x) = \max_{1\le i\le n} x_i$.

2 Algorithms based on 2-norm neighborhoods

In this section, we consider the predictor-corrector algorithm based on the small neighborhood determined by the 2-norm proximity measure function $\delta_2(x,s,\mu) = \|xs/\mu - e\|_2$, which was used in the original Mizuno-Todd-Ye method. As we pointed out in the last section, for a given $(x,s) > 0$ it is easy to verify that the global minimizer of $\delta_2(x,s,\mu)$ with respect to $\mu$ is $\mu_2^*(x,s) = (xs)^T(xs)/(x^T s)$. In what follows, we denote $\mu_2^*(x,s)$ by $\mu_2^*$ for simplicity.

We first fix two constants $\tau$ and $\bar\tau$ in $(0,1)$ satisfying
\[
\bar\tau\, r \le \tau < \bar\tau, \quad \text{where } r := 1 - \frac{(1-\bar\tau)^3}{4(1+\bar\tau)}. \tag{1}
\]
Since $\bar\tau \in (0,1)$, it is easy to verify that $r < 1$.

The idea of predictor-corrector methods is very natural. Roughly speaking, the predictor-corrector method follows the central path of linear programming by alternating predictor steps and corrector steps. Starting from a point in a certain neighborhood $N^*(\tau)$ and moving along a predictor search direction, the predictor step produces a predicted point in $N^*(\bar\tau)$, which is larger than $N^*(\tau)$. At the predicted point, a corrector direction is calculated, and a suitable steplength along the corrector direction yields the next iterate, which is back inside the neighborhood $N^*(\tau)$. In our algorithm, the predictor search direction is the same as the one in the Mizuno-Todd-Ye method [10], i.e., the solution to the following system:
\[
A\Delta x = 0, \qquad A^T\Delta y + \Delta s = 0, \qquad s\Delta x + x\Delta s = -xs. \tag{2}
\]
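To see how (2) can be solved in practice, one standard route is to eliminate $\Delta x$ and $\Delta s$ and solve the normal equations with the matrix $A\,\mathrm{diag}(x/s)\,A^T$. The sketch below is our own dense illustration, not code from the paper.

```python
import numpy as np

def predictor_direction(A, x, s):
    """Solve (2): A dx = 0, A^T dy + ds = 0, s*dx + x*ds = -x*s.

    From the third equation, dx = -x - (x/s)*ds; substituting into A dx = 0
    gives the normal equations (A D A^T) dy = A x with D = diag(x/s).
    Dense and unscaled; an illustrative sketch only.
    """
    d = x / s
    dy = np.linalg.solve((A * d) @ A.T, A @ x)
    ds = -A.T @ dy
    dx = -x - d * ds
    return dx, dy, ds
```

A quick way to validate such a helper is to check that $A\Delta x \approx 0$ and $s\Delta x + x\Delta s \approx -xs$ on random data.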

We denote $x(\theta) = x + \theta\Delta x$, $s(\theta) = s + \theta\Delta s$. The steplength $\theta^*$ is determined by the following rule:
\[
\theta^* = \max\{ t \in (0,1] : (x(\theta),s(\theta)) > 0,\ \delta_2(x(\theta),s(\theta),\mu_2^*(\theta)) \le \bar\tau\ \ \forall\,\theta\in(0,t] \}, \tag{3}
\]
where
\[
\mu_2^*(\theta) := \mu_2^*(x(\theta),s(\theta)) = \frac{[x(\theta)s(\theta)]^T[x(\theta)s(\theta)]}{x(\theta)^T s(\theta)}.
\]
Different from traditional methods, our corrector search direction is the solution to the system
\[
A\Delta x = 0, \qquad A^T\Delta y + \Delta s = 0, \qquad s\Delta x + x\Delta s = xs - \frac{(xs)^2}{\mu_2^*}. \tag{4}
\]
A characteristic of system (4) is that the duality gap remains invariant along the direction $(\Delta x,\Delta s)$. Actually, since $\Delta x^T\Delta s = 0$, we have
\[
(x+\theta\Delta x)^T(s+\theta\Delta s) = x^T s + \theta(x^T\Delta s + s^T\Delta x) + \theta^2\Delta x^T\Delta s
= x^T s + \theta\left( x^T s - \frac{\|xs\|^2}{\mu_2^*} \right) = x^T s.
\]
The last equality above follows from the fact $\mu_2^* = \|xs\|^2/(x^T s)$. We now describe the algorithm as follows.

Algorithm 2.1. Given two positive numbers $\tau$ and $\bar\tau$ in $(0,1)$ satisfying (1), and an initial feasible point $(x^0,s^0) > 0$ such that $\delta_2(x^0,s^0,\mu_0) \le \tau$, where $\mu_0 = (x^0 s^0)^T(x^0 s^0)/((x^0)^T s^0)$. Set $k := 0$, and do the following steps:

Step 1 (Predictor). Solve system (2) with $(x,s) := (x^k,s^k)$ for the search direction $(\Delta x^k,\Delta s^k)$. Compute the steplength $\theta_k^*$ according to (3), and set
\[
(\bar x^k,\bar s^k) = (x^k + \theta_k^*\Delta x^k,\ s^k + \theta_k^*\Delta s^k), \qquad
\bar\mu_k = \frac{[\bar x^k\bar s^k]^T[\bar x^k\bar s^k]}{(\bar x^k)^T\bar s^k}.
\]

Step 2 (Corrector). Solve system (4) with $(x,s,\mu) = (\bar x^k,\bar s^k,\bar\mu_k)$ for the corrector direction $(\Delta\bar x^k,\Delta\bar s^k)$. Set $(x^{k+1},s^{k+1}) = (\bar x^k + \alpha_k\Delta\bar x^k,\ \bar s^k + \alpha_k\Delta\bar s^k)$, where
\[
\alpha_k = \frac{\min(\bar x^k\bar s^k) - \sqrt{[\min(\bar x^k\bar s^k)]^2 - 4(1-r)\bar\mu_k\max(\bar x^k\bar s^k)}}{2\max(\bar x^k\bar s^k)}.
\]

Set $k := k+1$. Repeat the above steps until a certain stopping criterion, for instance $\mu^k_{\mathrm{gap}} = (x^k)^T s^k/n \le \varepsilon$, is satisfied.
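The following skeleton (our reconstruction, with our own choice of constants) shows how the pieces of Algorithm 2.1 fit together. The predictor steplength of (3) is located by bisection rather than exactly, and $r$ follows the form of (1) as reconstructed above; both are simplifying assumptions.

```python
import numpy as np

def mu2(x, s):
    xs = x * s
    return xs @ xs / (x @ s)                      # global minimizer of delta_2

def delta2(x, s):
    return np.linalg.norm(x * s / mu2(x, s) - 1.0)

def newton(A, x, s, rhs):
    """Solve A dx = 0, A^T dy + ds = 0, s*dx + x*ds = rhs."""
    d = x / s
    dy = np.linalg.solve((A * d) @ A.T, -A @ (rhs / s))
    ds = -A.T @ dy
    dx = rhs / s - d * ds
    return dx, ds

def algorithm_2_1(A, x, s, tau=0.49, tau_bar=0.5, eps=1e-8):
    """Illustrative sketch of Algorithm 2.1; assumes a strictly feasible,
    well-centered starting point with delta_2(x, s) <= tau."""
    n = len(x)
    r = 1.0 - (1.0 - tau_bar) ** 3 / (4.0 * (1.0 + tau_bar))
    assert tau_bar * r <= tau < tau_bar           # condition (1)
    while x @ s / n > eps:
        dx, ds = newton(A, x, s, -x * s)          # predictor direction (2)
        lo, hi = 0.0, 1.0
        for _ in range(60):                       # bisection on condition (3)
            mid = 0.5 * (lo + hi)
            xt, st = x + mid * dx, s + mid * ds
            if xt.min() > 0 and st.min() > 0 and delta2(xt, st) <= tau_bar:
                lo = mid
            else:
                hi = mid
        x, s = x + lo * dx, s + lo * ds
        mu = mu2(x, s)
        dx, ds = newton(A, x, s, x * s - (x * s) ** 2 / mu)  # corrector (4)
        u, v = (x * s).min(), (x * s).max()
        alpha = (u - np.sqrt(max(u * u - 4 * (1 - r) * mu * v, 0.0))) / (2 * v)
        x, s = x + alpha * dx, s + alpha * ds     # duality gap unchanged here
    return x, s
```

With these defaults, condition (1) holds ($r \approx 0.979$, so $\bar\tau r \approx 0.4896 \le \tau$); the corrector leaves the duality gap unchanged, so the gap decreases only in predictor steps, exactly as in Theorem 2.1(ii).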

At each step, if $(x^k,s^k) > 0$, the positivity of $(\bar x^k,\bar s^k)$ follows directly from the choice of $\theta_k^*$. Later, we will see that the positivity of $(x^{k+1},s^{k+1})$ follows from the choice of $\alpha_k$ (see the proof of Theorem 2.1). Thus, Algorithm 2.1 is well defined. We now aim at proving that the algorithm has an $O(\sqrt{n}\log((x^0)^T s^0/\varepsilon))$ iteration complexity. First, we have several technical results.

Lemma 2.1 ([10]). Let $(x,s) > 0$ be given. For any given $u \in R^n$, if $(\Delta x,\Delta y,\Delta s)$ is the solution to the system
\[
A\Delta x = 0, \qquad A^T\Delta y + \Delta s = 0, \qquad s\Delta x + x\Delta s = u,
\]
then $(\Delta x,\Delta s)$ satisfies
\[
\sum_{i\in I^+}\Delta x_i\Delta s_i = -\sum_{i\in I^-}\Delta x_i\Delta s_i \le \frac{1}{4}\|(xs)^{-1/2}u\|^2,
\]
where $I^+ = \{i : \Delta x_i\Delta s_i > 0\}$ and $I^- = \{i : \Delta x_i\Delta s_i < 0\}$. Hence, $\|\Delta x\Delta s\| \le \frac{1}{4}\|(xs)^{-1/2}u\|^2$.

For the sake of convenience, we suppress the iteration index $k$ and denote $x^k, s^k, \Delta x^k, \Delta s^k$ by $x, s, \Delta x, \Delta s$, respectively, if no confusion arises. Similarly, we denote $\bar x^k, \bar s^k$ and $\Delta\bar x^k, \Delta\bar s^k$ by $\bar x, \bar s$ and $\Delta\bar x, \Delta\bar s$, respectively. We also write
\[
p = \frac{xs}{\mu_2^*}, \qquad w = \frac{\Delta x\Delta s}{\mu_2^*}.
\]
Consider the interval $[0,\hat\eta)$, where $\hat\eta = \min_{w_i<0} p_i/|w_i|$. Clearly, there exists a unique $\hat\theta \in (0,1)$ such that the transformation $\eta = \theta^2/(1-\theta)$ is a bijection from $[0,\hat\theta)$ to $[0,\hat\eta)$. For any given $(x,s) > 0$, since $\mu_2^* = \|xs\|^2/(x^T s)$, it is easy to verify that
\[
[\delta_2(x,s,\mu_2^*)]^2 = n - \frac{x^T s}{\mu_2^*} = n - \frac{(x^T s)^2}{(xs)^T(xs)}, \tag{5}
\]
and
\[
\sum_{i=1}^n p_i^2 = \sum_{i=1}^n p_i, \tag{6}
\]
where $p_i = x_i s_i/\mu_2^*$ for $i = 1,\dots,n$. We recall that $x(\theta) = x + \theta\Delta x$, $s(\theta) = s + \theta\Delta s$ and $\mu_2^*(\theta) = \|x(\theta)s(\theta)\|^2/(x(\theta)^T s(\theta))$. From system (2), it is evident that
\[
x(\theta)^T s(\theta) = (1-\theta)\,x^T s \quad\text{and}\quad x_i(\theta)s_i(\theta) = (1-\theta)x_i s_i + \theta^2\Delta x_i\Delta s_i. \tag{7}
\]
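Lemma 2.1 is easy to probe numerically. The snippet below (ours; random dense data) forms the Newton system for an arbitrary right-hand side $u$ and checks both the orthogonality $\Delta x^T\Delta s = 0$ and the $1/4$-bound on the positive part.

```python
import numpy as np

# Numerical illustration of Lemma 2.1: for the Newton system with RHS u,
# sum_{i in I+} dx_i*ds_i = -sum_{i in I-} dx_i*ds_i <= (1/4)||(xs)^{-1/2} u||^2.
rng = np.random.default_rng(2)
m, n = 3, 7
A = rng.standard_normal((m, n))
x, s = rng.uniform(0.5, 2.0, n), rng.uniform(0.5, 2.0, n)
u = rng.standard_normal(n)

d = x / s
dy = np.linalg.solve((A * d) @ A.T, -A @ (u / s))
ds = -A.T @ dy
dx = u / s - d * ds

prod = dx * ds
pos = prod[prod > 0].sum()
bound = 0.25 * np.sum(u * u / (x * s))
print(pos <= bound + 1e-12, abs(prod.sum()) < 1e-10)  # dx^T ds = 0 as well
```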

We now prove the next result.

Lemma 2.2. Let $\eta = \theta^2/(1-\theta)$. Then for any $\eta \in [0,\hat\eta)$, we have
\[
[\delta_2(x(\theta),s(\theta),\mu_2^*(\theta))]^2 - [\delta_2(x,s,\mu_2^*)]^2 \le f(\eta),
\quad\text{where } f(\eta) = \frac{n^2\eta\,(1+\eta\|w\|)}{2\sum_{i=1}^n (p_i+\eta w_i)^2}.
\]

Proof. By (5), (6) and (7), we have
\begin{align*}
[\delta_2(x(\theta),s(\theta),\mu_2^*(\theta))]^2 - [\delta_2(x,s,\mu_2^*)]^2
&= \frac{x^T s}{\mu_2^*} - \frac{[x(\theta)^T s(\theta)]^2}{[x(\theta)s(\theta)]^T[x(\theta)s(\theta)]} \\
&= \sum p_i - \frac{(1-\theta)^2(x^T s)^2}{\sum[(1-\theta)x_i s_i + \theta^2\Delta x_i\Delta s_i]^2} \\
&= \sum p_i - \frac{\left(\sum p_i\right)^2}{\sum(p_i+\eta w_i)^2}
 = \frac{\sum p_i\left( 2\eta\sum p_i w_i + \eta^2\sum w_i^2 \right)}{\sum(p_i+\eta w_i)^2}, \tag{8}
\end{align*}
where the last equality uses (6). Since $s_i\Delta x_i + x_i\Delta s_i = -x_i s_i$ for all $i = 1,\dots,n$, squaring both sides yields $(s_i\Delta x_i)^2 + (x_i\Delta s_i)^2 + 2x_i s_i\Delta x_i\Delta s_i = (x_i s_i)^2$ for all $i$. Thus $4x_i s_i\Delta x_i\Delta s_i \le (x_i s_i)^2$, which implies that
\[
\sum_{i=1}^n x_i s_i\Delta x_i\Delta s_i \le \frac{1}{4}\|xs\|^2 = \frac{1}{4}\mu_2^*\,x^T s.
\]
Note that equality (5) implies that $x^T s/\mu_2^* \le n$, i.e., $\sum p_i \le n$. Dividing both sides of the inequality above by $(\mu_2^*)^2$, we have
\[
\sum_{i=1}^n p_i w_i \le \frac{x^T s}{4\mu_2^*} \le \frac{n}{4}. \tag{9}
\]
Since $\Delta x^T\Delta s = 0$, we have $\sum w_i = 0$, which implies that $\sum_{i\in I^+} w_i = -\sum_{i\in I^-} w_i$, where $I^+ = \{i : w_i > 0\}$ and $I^- = \{i : w_i < 0\}$.

Setting $u = -xs$ in Lemma 2.1 and using (9), we can easily see that
\[
\sum_{i\in I^+} w_i \le \frac{x^T s}{4\mu_2^*} \le \frac{n}{4} \quad\text{and}\quad \|w\| \le \frac{x^T s}{4\mu_2^*} \le \frac{n}{4}. \tag{10}
\]
Therefore, by (9) and (10), we have
\[
2\eta\sum p_i w_i + \eta^2\sum w_i^2 \le \frac{n}{2}\eta + \eta^2\|w\|\cdot\frac{n}{4} \le \frac{n}{2}\eta\,(1+\eta\|w\|). \tag{11}
\]
By the harmonic (Cauchy-Schwarz) inequality, we have
\[
\Big[\sum_{i=1}^n (p_i+\eta w_i)\Big]^2 \le n\sum_{i=1}^n (p_i+\eta w_i)^2. \tag{12}
\]
Thus, by (8), (11) and (12), we have
\[
[\delta_2(x(\theta),s(\theta),\mu_2^*(\theta))]^2 - [\delta_2(x,s,\mu_2^*)]^2
\le \frac{\sum p_i\cdot\frac{n}{2}\eta(1+\eta\|w\|)}{\sum(p_i+\eta w_i)^2}
\le \frac{n^2\eta(1+\eta\|w\|)}{2\sum(p_i+\eta w_i)^2} =: f(\eta).
\]
The second inequality above follows from the fact $\sum p_i \le n$. $\Box$

Let $\theta^*$ be the steplength given by (3), and let $\eta^* = (\theta^*)^2/(1-\theta^*)$. Therefore,
\[
\theta^* = \frac{2\eta^*}{\eta^* + \sqrt{(\eta^*)^2 + 4\eta^*}}. \tag{13}
\]
To prove the iteration complexity of Algorithm 2.1, we need an estimate of the lower bound of $\theta^*$. From (13), it is sufficient to estimate a lower bound of $\eta^*$, since the function $\zeta(t) = \frac{2t}{t+\sqrt{t^2+4t}}$ is increasing on $(0,\infty)$.

Lemma 2.3. Let $\theta^*$ be the steplength in the predictor step, determined by (3), and let $\eta^* = (\theta^*)^2/(1-\theta^*)$. Then we have
\[
\eta^* \ge \min\left\{ \frac{\beta\min(p)}{\|w\|},\ \frac{2(1-\beta)^2[\min(p)]^2(\bar\tau^2-\tau^2)}{n\,(1+\beta\min(p))} \right\},
\]

where $\beta \in (0,1)$ is a given number independent of $n$, for instance $\beta = 0.9$.

Proof. We first note that $f$ satisfies the properties $f(0) = 0$ and $f(\eta) \to \infty$ as $\eta \to \hat\eta$. Thus, there exists an $\tilde\eta$ such that $f(\tilde\eta) = \bar\tau^2 - \tau^2 > 0$. Let $\tilde\eta$ be the smallest solution to this equation. If $\tilde\eta \le \beta\min(p)/\|w\|$, then
\[
\bar\tau^2 - \tau^2 = f(\tilde\eta) = \frac{n^2\tilde\eta(1+\tilde\eta\|w\|)}{2\sum(p_i+\tilde\eta w_i)^2}
\le \frac{n^2\tilde\eta\,(1+\beta\min(p))}{2n(1-\beta)^2[\min(p)]^2},
\]
since $\tilde\eta\|w\| \le \beta\min(p)$ and $p_i + \tilde\eta w_i \ge \min(p) - \tilde\eta\|w\| \ge (1-\beta)\min(p)$; i.e.,
\[
\tilde\eta \ge \frac{2(1-\beta)^2[\min(p)]^2(\bar\tau^2-\tau^2)}{n\,(1+\beta\min(p))}.
\]
Since either $\tilde\eta \ge \beta\min(p)/\|w\|$ or $\tilde\eta \le \beta\min(p)/\|w\|$, we conclude that
\[
\tilde\eta \ge \min\left\{ \frac{\beta\min(p)}{\|w\|},\ \frac{2(1-\beta)^2[\min(p)]^2(\bar\tau^2-\tau^2)}{n\,(1+\beta\min(p))} \right\}.
\]
To prove the desired result, it suffices to show that $\eta^* \ge \tilde\eta$. Actually, since $f(0) = 0$ and since $\tilde\eta$ is the smallest solution to the equation $f(t) = \bar\tau^2 - \tau^2 > 0$ in the interval $(0,\hat\eta)$, we deduce that $f(\eta) \le \bar\tau^2 - \tau^2$ for all $\eta \in [0,\tilde\eta]$. Thus, for any $\theta \in [0,\tilde\theta]$, where $\tilde\theta$ satisfies $\tilde\theta^2/(1-\tilde\theta) = \tilde\eta$, it follows from Lemma 2.2 that
\[
[\delta_2(x(\theta),s(\theta),\mu_2^*(\theta))]^2 \le [\delta_2(x,s,\mu_2^*)]^2 + (\bar\tau^2 - \tau^2) \le \bar\tau^2.
\]
The last inequality follows from $\delta_2(x,s,\mu_2^*) \le \tau$. Thus, for any $\theta \in [0,\tilde\theta]$, we have $\delta_2(x(\theta),s(\theta),\mu_2^*(\theta)) \le \bar\tau < 1$, which also implies that $(x(\theta),s(\theta)) > 0$. By the definition of $\theta^*$, we conclude that $\theta^* \ge \tilde\theta$, and hence $\eta^* \ge \tilde\eta$. The proof is complete. $\Box$

We are ready to prove the following result.

Theorem 2.1. Let $(x^k,s^k,\mu_k)$ be generated by Algorithm 2.1. Then the following properties hold.

(i) $\delta_2(x^k,s^k,\mu_k) \le \tau$ for all $k \ge 0$; $\delta_2(\bar x^k,\bar s^k,\bar\mu_k) \le \bar\tau$ for all $k \ge 0$.

(ii) The duality gap satisfies $\mu^{k+1}_{\mathrm{gap}} = (1-\theta_k^*)\,\mu^k_{\mathrm{gap}}$ for all $k \ge 0$.

(iii) There exists a constant $\sigma \in (0,1)$ independent of $n$ such that $\theta_k^* \ge \sigma/\sqrt{n}$ for all $k \ge 0$, which implies that Algorithm 2.1 has an $O(\sqrt{n}\log((x^0)^T s^0/\varepsilon))$ iteration complexity.

Proof. Result (i) can be proved by induction. We assume that $\delta_2(x^k,s^k,\mu_k) \le \tau$. By the choice of $\theta_k^*$, the inequality $\delta_2(\bar x^k,\bar s^k,\bar\mu_k) \le \bar\tau$ holds trivially. It suffices to prove that $\delta_2(x^{k+1},s^{k+1},\mu_{k+1}) \le \tau$. We first note that for any constant $\bar\tau \in (0,1)$, we have
\[
r := 1 - \frac{(1-\bar\tau)^3}{4(1+\bar\tau)} < 1.
\]
Since $\delta_2(\bar x^k,\bar s^k,\bar\mu_k) \le \bar\tau$, we have $(1-\bar\tau)\bar\mu_k \le \bar x_i^k\bar s_i^k \le (1+\bar\tau)\bar\mu_k$ for all $i$. Thus,
\[
\frac{4(1-r)\,\bar\mu_k\max(\bar x^k\bar s^k)}{[\min(\bar x^k\bar s^k)]^2}
\le \frac{(1-\bar\tau)^3}{1+\bar\tau}\cdot\frac{(1+\bar\tau)\bar\mu_k^2}{(1-\bar\tau)^2\bar\mu_k^2} = 1-\bar\tau < 1,
\]
so the quantity under the square root in the definition of $\alpha_k$ is positive and $\alpha_k$ is well defined. Therefore,
\[
\alpha_k = \frac{\min(\bar x^k\bar s^k)}{2\max(\bar x^k\bar s^k)}\left[ 1 - \sqrt{1 - \frac{4(1-r)\bar\mu_k\max(\bar x^k\bar s^k)}{[\min(\bar x^k\bar s^k)]^2}}\, \right]
\le \frac{\min(\bar x^k\bar s^k)}{2\max(\bar x^k\bar s^k)} \le \frac{\bar\mu_k}{\max(\bar x^k\bar s^k)},
\]
where the last inequality uses $\min(\bar x^k\bar s^k) \le \bar\mu_k$, a consequence of (6). Noting that for any $0 < t \le \bar\mu_k/\max(\bar x^k\bar s^k)$,
\[
\left\| e - t\,\frac{\bar x^k\bar s^k}{\bar\mu_k} \right\|_\infty
= \max_{1\le i\le n}\left( 1 - t\,\frac{\bar x_i^k\bar s_i^k}{\bar\mu_k} \right)
= 1 - t\,\frac{\min(\bar x^k\bar s^k)}{\bar\mu_k}.
\]
In particular, setting $t = \alpha_k$, we have
\[
\left\| e - \alpha_k\,\frac{\bar x^k\bar s^k}{\bar\mu_k} \right\|_\infty = 1 - \alpha_k\,\frac{\min(\bar x^k\bar s^k)}{\bar\mu_k}. \tag{14}
\]

On the other hand, by Lemma 2.1 applied to system (4), we have
\[
\|\bar w^k\| = \frac{\|\Delta\bar x^k\Delta\bar s^k\|}{\bar\mu_k}
\le \frac{1}{4\bar\mu_k}\left\| (\bar x^k\bar s^k)^{-1/2}\left( \bar x^k\bar s^k - \frac{(\bar x^k\bar s^k)^2}{\bar\mu_k} \right) \right\|^2
= \frac{1}{4\bar\mu_k}\left\| (\bar x^k\bar s^k)^{1/2}\left( e - \frac{\bar x^k\bar s^k}{\bar\mu_k} \right) \right\|^2
\le \frac{\max(\bar x^k\bar s^k)}{4\bar\mu_k}\left\| e - \frac{\bar x^k\bar s^k}{\bar\mu_k} \right\|^2. \tag{15}
\]
We also note that $\alpha_k$ is the least solution of the following quadratic equation with respect to $t$:
\[
1 - t\,\frac{\min(\bar x^k\bar s^k)}{\bar\mu_k} + t^2\,\frac{\max(\bar x^k\bar s^k)}{\bar\mu_k} = r.
\]
Therefore, by (14) and (15), and noting that $\|\bar x^k\bar s^k/\bar\mu_k - e\| \le \bar\tau < 1$, we have
\begin{align*}
\left\| \frac{x^{k+1}s^{k+1}}{\bar\mu_k} - e \right\|
&= \left\| \frac{\bar x^k\bar s^k + \alpha_k\big(\bar x^k\bar s^k - (\bar x^k\bar s^k)^2/\bar\mu_k\big) + \alpha_k^2\,\Delta\bar x^k\Delta\bar s^k}{\bar\mu_k} - e \right\| \\
&= \left\| \left( e - \alpha_k\frac{\bar x^k\bar s^k}{\bar\mu_k} \right)\left( \frac{\bar x^k\bar s^k}{\bar\mu_k} - e \right) + \alpha_k^2\,\bar w^k \right\| \\
&\le \left\| e - \alpha_k\frac{\bar x^k\bar s^k}{\bar\mu_k} \right\|_\infty \left\| \frac{\bar x^k\bar s^k}{\bar\mu_k} - e \right\| + \alpha_k^2\,\|\bar w^k\| \\
&\le \left( 1 - \alpha_k\frac{\min(\bar x^k\bar s^k)}{\bar\mu_k} \right)\left\| \frac{\bar x^k\bar s^k}{\bar\mu_k} - e \right\|
 + \alpha_k^2\,\frac{\max(\bar x^k\bar s^k)}{4\bar\mu_k}\left\| \frac{\bar x^k\bar s^k}{\bar\mu_k} - e \right\|^2 \\
&\le \left[ 1 - \alpha_k\frac{\min(\bar x^k\bar s^k)}{\bar\mu_k} + \alpha_k^2\,\frac{\max(\bar x^k\bar s^k)}{\bar\mu_k} \right]\left\| \frac{\bar x^k\bar s^k}{\bar\mu_k} - e \right\|
 = r\left\| \frac{\bar x^k\bar s^k}{\bar\mu_k} - e \right\| \le r\bar\tau \le \tau.
\end{align*}
Since $\tau < 1$, the inequality above also implies that $x^{k+1}s^{k+1} > 0$, and hence $(x^{k+1},s^{k+1}) > 0$. Since $\mu_{k+1}$ is the global minimizer of the function $\delta_2(x^{k+1},s^{k+1},\mu)$ with respect to $\mu > 0$, we conclude that
\[
\delta_2(x^{k+1},s^{k+1},\mu_{k+1}) = \left\| \frac{x^{k+1}s^{k+1}}{\mu_{k+1}} - e \right\| \le \left\| \frac{x^{k+1}s^{k+1}}{\bar\mu_k} - e \right\| \le \tau.
\]
Result (i) is proved.

Note that $\bar\mu^k_{\mathrm{gap}} = (1-\theta_k^*)\mu^k_{\mathrm{gap}}$ and
\[
(x^{k+1})^T s^{k+1} = (\bar x^k + \alpha_k\Delta\bar x^k)^T(\bar s^k + \alpha_k\Delta\bar s^k)
= (\bar x^k)^T\bar s^k + \alpha_k\,e^T\!\left( \bar x^k\bar s^k - \frac{(\bar x^k\bar s^k)^2}{\bar\mu_k} \right) = (\bar x^k)^T\bar s^k.
\]
It follows that $\mu^{k+1}_{\mathrm{gap}} = \bar\mu^k_{\mathrm{gap}} = (1-\theta_k^*)\mu^k_{\mathrm{gap}}$. Result (ii) follows.

We now prove (iii). Since $\sum p_i^2 = \sum p_i \le n$ and $\delta_2(x,s,\mu_2^*) \le \tau$, it follows that $1 \ge \min(p) \ge 1-\tau$. We also note that $\|w\| \le n/4$ by (10). From Lemma 2.3, we conclude that $\eta_k^* \ge \rho/n$, where
\[
\rho := \min\left\{ 4\beta(1-\tau),\ \frac{2(1-\beta)^2(1-\tau)^2(\bar\tau^2-\tau^2)}{1+\beta} \right\}.
\]
Since the function $\zeta(t) = \frac{2t}{t+\sqrt{t^2+4t}}$ is increasing on $(0,\infty)$, we obtain
\[
\theta_k^* = \frac{2\eta_k^*}{\eta_k^* + \sqrt{(\eta_k^*)^2+4\eta_k^*}}
\ge \frac{2(\rho/n)}{\rho/n + \sqrt{(\rho/n)^2+4\rho/n}} \ge \frac{\sigma}{\sqrt{n}},
\]
where $0 < \sigma = \frac{2}{1+\sqrt{1+4/\rho}} < 1$. Therefore, the iteration complexity of the algorithm is $O(\sqrt{n}\log((x^0)^T s^0/\varepsilon))$. $\Box$

3 Algorithms based on $\infty$-norm neighborhoods

Since the original Mizuno-Todd-Ye method actually works in small neighborhoods of the central path, many investigators have tried to adapt this method to work in wider neighborhoods, such as the $\infty$-norm neighborhood, in order to achieve faster convergence of the algorithms; see, for example, [5, 11, 14, 17]. In this section, we consider the predictor-corrector algorithm working in $\infty$-norm neighborhoods. We discuss how to further enlarge the $\infty$-norm neighborhood by using the global minimizer of the $\infty$-norm proximity measure function, and prove that the algorithm proposed here retains the best known iteration complexity for $\infty$-norm-neighborhood interior-point algorithms. First, we have the following result.

Lemma 3.1. For any given $(x,s) > 0$, the unique global minimizer of the function $\delta_\infty(x,s,\mu) = \|xs/\mu - e\|_\infty$ with respect to $\mu$ is given by
\[
\mu_\infty^* = \frac{\max(xs)+\min(xs)}{2},
\]
and the least value of $\delta_\infty(x,s,\cdot)$ is given by
\[
\delta_\infty(x,s,\mu_\infty^*) = \frac{\max(xs)-\min(xs)}{\max(xs)+\min(xs)}.
\]
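The proof below is a one-variable monotonicity argument; as a quick numerical illustration (ours, on random data), a grid search recovers both the minimizer and the least value.

```python
import numpy as np

rng = np.random.default_rng(3)
x, s = rng.uniform(0.5, 2.0, 8), rng.uniform(0.5, 2.0, 8)
xs = x * s
mu_star = (xs.max() + xs.min()) / 2                       # claimed minimizer
val_star = (xs.max() - xs.min()) / (xs.max() + xs.min())  # claimed least value

grid = np.linspace(0.5 * xs.min(), 2.0 * xs.max(), 200001)
vals = np.abs(xs[None, :] / grid[:, None] - 1.0).max(axis=1)
print(np.isclose(vals.min(), val_star, atol=1e-3))
print(np.isclose(grid[vals.argmin()], mu_star, atol=1e-3))
```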

Proof. For any given vector $(x,s) > 0$ and scalar $t > 0$, it is easy to verify that
\[
\delta_\infty(x,s,t) = \max_{1\le i\le n}\left| \frac{x_i s_i}{t} - 1 \right|
= \max\left\{ \frac{\max(xs)}{t} - 1,\ 1 - \frac{\min(xs)}{t} \right\}.
\]
When $0 < t \le \frac{\max(xs)+\min(xs)}{2}$, it follows that $\delta_\infty(x,s,t) = \frac{\max(xs)}{t} - 1$, which is decreasing in the interval $\left( 0, \frac{\max(xs)+\min(xs)}{2} \right]$. When $t \ge \frac{\max(xs)+\min(xs)}{2}$, it follows that $\delta_\infty(x,s,t) = 1 - \frac{\min(xs)}{t}$, which is increasing in the interval $\left[ \frac{\max(xs)+\min(xs)}{2}, \infty \right)$. Thus, for any given $(x,s) > 0$, the global minimizer of $\delta_\infty(x,s,\cdot)$ in $(0,\infty)$ is $t^* = \frac{\max(xs)+\min(xs)}{2}$, at which the least value of the proximity measure function is given by
\[
\delta_\infty(x,s,t^*) = \frac{\max(xs)}{t^*} - 1 = 1 - \frac{\min(xs)}{t^*} = \frac{\max(xs)-\min(xs)}{\max(xs)+\min(xs)}.
\]
The proof is complete. $\Box$

In this section, all notation is similar to that in Section 2, except that $\mu_2^*$ is replaced by $\mu_\infty^*$. For instance, we still use $\theta^*$ to denote the stepsize in predictor steps, and $(\Delta x,\Delta s)$ is the search direction in the predictor step. Here $\theta^*$ is given by
\[
\theta^* = \max\{ t \in (0,1] : (x(\theta),s(\theta)) > 0,\ \delta_\infty(x(\theta),s(\theta),\mu_\infty^*(\theta)) \le \bar\tau\ \ \forall\,\theta\in(0,t] \}, \tag{16}
\]
where $x(\theta) = x+\theta\Delta x$, $s(\theta) = s+\theta\Delta s$ and
\[
\mu_\infty^*(\theta) = \frac{\max(x(\theta)s(\theta)) + \min(x(\theta)s(\theta))}{2}.
\]
Denote by
\[
\theta_{\max} = \max\{ t \in (0,1] : (x(\theta),s(\theta)) > 0\ \ \forall\,\theta\in(0,t] \}. \tag{17}
\]
Clearly, $\theta^* \le \theta_{\max}$. We now specify the algorithm as follows.

Algorithm 3.1. Given a constant $\tau$ such that $0 < \tau \le 1/2$, a constant $\bar\tau$ such that
\[
\tau + \frac{1-\bar\tau}{2n} = \bar\tau < 2\tau,
\]
and an initial feasible point $(x^0,s^0) > 0$ such that $\delta_\infty(x^0,s^0,\mu_0) \le \tau$, where $\mu_0 = \frac{\max(x^0 s^0)+\min(x^0 s^0)}{2}$. Set $k := 0$, and do the following steps:

Step 1 (Predictor). Solve system (2) with $(x,s) := (x^k,s^k)$ for the search direction $(\Delta x^k,\Delta s^k)$. Compute the damping parameter $\theta_k^*$ according to (16), and set
\[
(\bar x^k,\bar s^k) = (x^k+\theta_k^*\Delta x^k,\ s^k+\theta_k^*\Delta s^k), \qquad
\bar\mu_k = \frac{\max(\bar x^k\bar s^k)+\min(\bar x^k\bar s^k)}{2}.
\]

Step 2 (Corrector). Solve the following system for the corrector direction $(\Delta\bar x^k,\Delta\bar s^k)$:
\[
A\Delta\bar x^k = 0, \qquad A^T\Delta\bar y^k + \Delta\bar s^k = 0, \qquad \bar s^k\Delta\bar x^k + \bar x^k\Delta\bar s^k = \bar\mu_k e - \bar x^k\bar s^k.
\]
Set $(x^{k+1},s^{k+1}) = (\bar x^k+\alpha_k\Delta\bar x^k,\ \bar s^k+\alpha_k\Delta\bar s^k)$, where
\[
\alpha_k = \frac{2(\bar\tau-\tau)}{\bar\tau\left( 1+\sqrt{1-(\bar\tau-\tau)n\bar\mu_k/\min(\bar x^k\bar s^k)} \right)}.
\]

Set $k := k+1$. Repeat the above steps until a certain stopping criterion, for instance $\mu^k_{\mathrm{gap}} \le \varepsilon$, is satisfied.

Later, we will see that the steplength $\alpha_k$ is well defined. We now start to analyze the algorithm and prove that it has an $O(n\log((x^0)^T s^0/\varepsilon))$ iteration complexity. For simplicity, we suppress the iteration index $k$ when no confusion arises.

Lemma 3.2. Let $(\Delta x,\Delta s)$ be the predictor search direction. Let $\theta_{\max}$ be defined as in (17) and
\[
\hat\eta = \min_{\Delta x_i\Delta s_i<0} \frac{x_i s_i}{|\Delta x_i\Delta s_i|} = \min_{w_i<0} \frac{p_i}{|w_i|},
\]
where $p := xs/\mu_\infty^*$ and $w := \Delta x\Delta s/\mu_\infty^*$. If $\theta_{\max} < 1$, then $\hat\eta \le \eta_{\max} := \theta_{\max}^2/(1-\theta_{\max})$.

Proof. By the definition of $\theta_{\max}$, it follows that
\[
\theta_{\max} = \min\left\{ \min_{\Delta x_i<0}\left( -\frac{x_i}{\Delta x_i} \right),\ \min_{\Delta s_i<0}\left( -\frac{s_i}{\Delta s_i} \right) \right\}.
\]
Thus, there exists some $i_0$ such that either $x_{i_0}+\theta_{\max}\Delta x_{i_0} = 0$ or $s_{i_0}+\theta_{\max}\Delta s_{i_0} = 0$. Hence,
\[
0 = (x_{i_0}+\theta_{\max}\Delta x_{i_0})(s_{i_0}+\theta_{\max}\Delta s_{i_0})
= (1-\theta_{\max})x_{i_0}s_{i_0} + \theta_{\max}^2\,\Delta x_{i_0}\Delta s_{i_0}. \tag{18}
\]
Since $\theta_{\max} < 1$ and $x_{i_0}s_{i_0} > 0$, equality (18) implies that $\Delta x_{i_0}\Delta s_{i_0} < 0$. Dividing both sides of (18) by $(1-\theta_{\max})\mu_\infty^*$, we have $p_{i_0}+\eta_{\max}w_{i_0} = 0$, where $\eta_{\max} = \theta_{\max}^2/(1-\theta_{\max})$. By the definition of $\hat\eta$, we conclude that $\eta_{\max} \ge \hat\eta$. $\Box$
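The corrector steplength $\alpha_k$ admits a numerically stable evaluation as the least root of the quadratic equation (25) appearing in the proof of Theorem 3.1. The helper below is our own sketch of the formula as reconstructed above.

```python
import numpy as np

def alpha_corrector(xbar, sbar, tau, tau_bar):
    """Least root of  (1 - t + t^2 * n*tau_bar*mu/(4*min(xs))) * tau_bar = tau,

    written via the stable form 2c / (1 + sqrt(1 - 4ac)); the discriminant
    is nonnegative when tau_bar - tau = (1 - tau_bar)/(2n), as in Algorithm 3.1.
    """
    xs = xbar * sbar
    n = len(xs)
    mu = (xs.max() + xs.min()) / 2.0
    a = n * tau_bar * mu / (4.0 * xs.min())
    c = (tau_bar - tau) / tau_bar
    disc = 1.0 - 4.0 * a * c
    return 2.0 * c / (1.0 + np.sqrt(max(disc, 0.0)))
```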

15 Lemma 33 Let η = θ /1 θ, where θ is the steplength in the predictor step Then { } 41 τ09 4 τ τ 1 η min, 1 + τ 1 + τ n Proof Noting that the predictor step starting from the point x, s that satisfies δ x, s, µ τ There are only two cases Case 1 δ xθ, sθ, µ θ δ x, s, µ < τ τ for all 0 < θ < θ max In this case, by definition of θ, we have that θ = θ max There are only two sub-cases Sub-case 1 θ max = 1 For this case, η = θ /1 θ as θ θ max = 1 Thus, η > 09 minp/ w 19 holds trivially, since η = in this case Sub-case θ max < 1 Since θ = θ max < 1, it follows that η = η max := θ max /1 θ max By Lemma 3, we have η max ˆη 09 minp/ w Therefore, for this sub-case, inequality 19 remains valid Case There exists a point θ 0, θ max such that δ x θ, s θ, µ θ δx, s, µ τ τ > 0 Since δ x0, s0, µ 0 δx, s, µ = 0, by continuity, there must exist a point θ 0, θ max such that δ xθ, sθ, µ θ δx, s, µ = τ τ Let θ be the smallest solution to the above equation Let η = θ /1 θ We now prove that η is bounded from below Since either η 09 minp/ w or η 09 minp/ w, it is sufficient to prove that η has a lower bound if η 09 minp/ w We first note that maxxs + η x s maxxs + η x s, 0 and that for all η 09 minp/ w we have minxs + η x s minxs η x s > 0 1 It is easy to see that that for any given number c > 0, the function ϕt = t c t+c is increasing with respect to t, and the function χt = c t c+t is decreasing with respect to t Thus, by 0, 1 and Lemma 31, for any η 09 minp/ w we have δ xθ, sθ, µ θ = maxxθsθ minxθsθ maxxθsθ + minxθsθ = max1 θxs + θ x s min1 θxs + θ x s max1 θxs + θ x s + min1 θxs + θ x s maxxs + η x s minxs + η x s = maxxs + η x s + minxs + η x s maxxs + η x s minxs + η x s maxxs + η x s + minxs + η x s 15

Thus, by (20), (21), Lemma 3.1, and the fact $x(\theta)s(\theta) = (1-\theta)(xs+\eta\,\Delta x\Delta s)$, for any $\eta \le 0.9\min(p)/\|w\|$ we have
\begin{align*}
\delta_\infty(x(\theta),s(\theta),\mu_\infty^*(\theta))
&= \frac{\max(x(\theta)s(\theta)) - \min(x(\theta)s(\theta))}{\max(x(\theta)s(\theta)) + \min(x(\theta)s(\theta))}
= \frac{\max(xs+\eta\Delta x\Delta s) - \min(xs+\eta\Delta x\Delta s)}{\max(xs+\eta\Delta x\Delta s) + \min(xs+\eta\Delta x\Delta s)} \\
&\le \frac{\max(xs)+\eta\|\Delta x\Delta s\| - \min(xs)+\eta\|\Delta x\Delta s\|}{\max(xs)+\eta\|\Delta x\Delta s\| + \min(xs)-\eta\|\Delta x\Delta s\|}
= \frac{\max(xs)-\min(xs)+2\eta\|\Delta x\Delta s\|}{\max(xs)+\min(xs)} \\
&= \delta_\infty(x,s,\mu_\infty^*) + \eta\|w\|, \tag{22}
\end{align*}
where $w = \Delta x\Delta s/\mu_\infty^*$. The first inequality above follows from (20) and the increasing property of $\varphi$, together with (21) and the decreasing property of $\chi$; the last equality follows from the fact $\max(xs)+\min(xs) = 2\mu_\infty^*$. Since $\delta_\infty(x,s,\mu_\infty^*) \le \tau$, we have $1-\tau \le p_i = x_i s_i/\mu_\infty^* \le 1+\tau$. Thus, by Lemma 2.1,
\[
\|w\| \le \frac{1}{4\mu_\infty^*}\|(xs)^{-1/2}xs\|^2 = \frac{x^T s}{4\mu_\infty^*} \le \frac{n(1+\bar\tau)}{4}. \tag{23}
\]
Thus, if $\tilde\eta \le 0.9\min(p)/\|w\|$, it follows from (22) and (23) that
\[
\bar\tau-\tau = \delta_\infty(x(\tilde\theta),s(\tilde\theta),\mu_\infty^*(\tilde\theta)) - \delta_\infty(x,s,\mu_\infty^*) \le \tilde\eta\,\|w\|,
\]
and hence
\[
\tilde\eta \ge \frac{\bar\tau-\tau}{\|w\|} \ge \frac{4(\bar\tau-\tau)}{n(1+\bar\tau)}.
\]
Therefore,
\[
\tilde\eta \ge \min\left\{ \frac{0.9\min(p)}{\|w\|},\ \frac{4(\bar\tau-\tau)}{n(1+\bar\tau)} \right\}.
\]
Since for every $\theta \in [0,\tilde\theta]$ we have $\delta_\infty(x(\theta),s(\theta),\mu_\infty^*(\theta)) \le \tau + (\bar\tau-\tau) = \bar\tau < 1$, which also implies $(x(\theta),s(\theta)) > 0$, the definition of $\theta^*$ gives $\theta^* \ge \tilde\theta$, which in turn implies $\eta^* \ge \tilde\eta$. Therefore, both Case 1 and Case 2 imply that
\[
\eta^* \ge \min\left\{ \frac{0.9\min(p)}{\|w\|},\ \frac{4(\bar\tau-\tau)}{n(1+\bar\tau)} \right\}
\ge \min\left\{ \frac{4(1-\bar\tau)(0.9)}{1+\bar\tau},\ \frac{4(\bar\tau-\tau)}{1+\bar\tau} \right\}\cdot\frac{1}{n}.
\]
The last inequality follows from (23) and $\min(p) \ge 1-\tau \ge 1-\bar\tau$. $\Box$

In what follows, we only consider the case $n \ge 2$.

Lemma 3.4. Suppose that $n \ge 2$. There exists a constant $\gamma \in (0, 0.5]$ independent of $n$ such that
\[
(1-\theta^*)\left( 1-\alpha+\alpha\,\frac{\bar\mu}{\bar\mu_{\mathrm{gap}}} \right) \le 1-\gamma\theta^*,
\]
where $\theta^*$ and $\alpha$ are the steplengths in the predictor and corrector steps of Algorithm 3.1, respectively.

Proof. By the choice of $\tau$ and $\bar\tau$, we have $(\bar\tau-\tau)n = (1-\bar\tau)/2$, and since $\min(\bar x\bar s) \ge (1-\bar\tau)\bar\mu$,
\[
\frac{(\bar\tau-\tau)\,n\,\bar\mu}{\min(\bar x\bar s)} \le \frac{1-\bar\tau}{2(1-\bar\tau)} = \frac{1}{2} < 1.
\]
This implies that the steplength $\alpha$ used in the corrector step is well defined. First we note that
\[
\frac{\bar\tau-\tau}{\bar\tau} \le \alpha
= \frac{2(\bar\tau-\tau)}{\bar\tau\left( 1+\sqrt{1-(\bar\tau-\tau)n\bar\mu/\min(\bar x\bar s)} \right)}
\le \frac{2(\bar\tau-\tau)}{\bar\tau\left( 1+\sqrt{1/2} \right)} \le \frac{1.18\,(\bar\tau-\tau)}{\bar\tau},
\]
and hence
\[
\alpha\,\frac{\bar\tau}{1-\bar\tau} \le \frac{1.18\,(\bar\tau-\tau)}{1-\bar\tau} = \frac{0.59}{n}.
\]
On the other hand, since $n \ge 2$ and $0 < \tau \le 1/2$, the relation $\bar\tau = \tau + (1-\bar\tau)/(2n)$ implies $\bar\tau \le 3/5$. As a result, by Lemma 3.3 and $\bar\tau-\tau = (1-\bar\tau)/(2n)$, we have
\[
\eta^* \ge \frac{4(\bar\tau-\tau)}{(1+\bar\tau)n} = \frac{2(1-\bar\tau)}{(1+\bar\tau)n^2},
\]
and since $\theta^*/(1-\theta^*) = \sqrt{\eta^*/(1-\theta^*)} \ge \sqrt{\eta^*}$, it follows that
\[
\frac{\theta^*}{1-\theta^*} \ge \sqrt{\frac{2(1-\bar\tau)}{1+\bar\tau}}\cdot\frac{1}{n} \ge \sqrt{\frac{1}{2}}\cdot\frac{1}{n} > \frac{0.7}{n}.
\]
Therefore, there exists a constant $\gamma \in (0,0.5]$ independent of $n$ (for instance, $\gamma = 0.1$) such that
\[
\alpha\,\frac{\bar\tau}{1-\bar\tau} \le (1-\gamma)\,\frac{\theta^*}{1-\theta^*},
\quad\text{i.e.,}\quad
(1-\theta^*)\left( 1+\alpha\,\frac{\bar\tau}{1-\bar\tau} \right) \le 1-\gamma\theta^*.
\]

Since $\|\bar x\bar s/\bar\mu - e\|_\infty \le \bar\tau$, which implies that $(1-\bar\tau)\bar\mu \le \bar x_i\bar s_i \le (1+\bar\tau)\bar\mu$, we have
\[
\bar\mu_{\mathrm{gap}} = \frac{\bar x^T\bar s}{n} \ge \min(\bar x\bar s) \ge (1-\bar\tau)\bar\mu,
\quad\text{i.e.,}\quad \frac{\bar\mu}{\bar\mu_{\mathrm{gap}}} \le \frac{1}{1-\bar\tau}.
\]
Therefore, we have
\[
(1-\theta^*)\left( 1-\alpha+\alpha\,\frac{\bar\mu}{\bar\mu_{\mathrm{gap}}} \right)
\le (1-\theta^*)\left( 1+\alpha\,\frac{\bar\tau}{1-\bar\tau} \right) \le 1-\gamma\theta^*.
\]
The proof is complete. $\Box$

We now prove the main result in this section.

Theorem 3.1. Suppose that $n \ge 2$. Let $\tau$ and $\bar\tau$ be given as in Algorithm 3.1, and let $(x^k,s^k,\mu_k)$ and $(\bar x^k,\bar s^k,\bar\mu_k)$ be generated by Algorithm 3.1. Then the following properties hold.

(i) $\delta_\infty(x^k,s^k,\mu_k) \le \tau$ and $\delta_\infty(\bar x^k,\bar s^k,\bar\mu_k) \le \bar\tau$ for all $k$.

(ii) $\mu^{k+1}_{\mathrm{gap}} \le (1-\gamma\theta_k^*)\,\mu^k_{\mathrm{gap}}$ for all $k$, where $\gamma$ is given as in Lemma 3.4.

(iii) There exists a constant $\sigma \in (0,1)$ independent of $n$ such that $\theta_k^* \ge \sigma/n$, and hence the algorithm has an $O(n\log((x^0)^T s^0/\varepsilon))$ iteration complexity.

Proof. Item (i) can be proved by induction. It is sufficient to show that if $\delta_\infty(x^k,s^k,\mu_k) \le \tau$, then $\delta_\infty(\bar x^k,\bar s^k,\bar\mu_k) \le \bar\tau$ and $\delta_\infty(x^{k+1},s^{k+1},\mu_{k+1}) \le \tau$. We now assume that $\delta_\infty(x^k,s^k,\mu_k) \le \tau$. The fact $\delta_\infty(\bar x^k,\bar s^k,\bar\mu_k) \le \bar\tau$ follows immediately from the choice of the steplength $\theta_k^*$. Since $\bar\tau < 1$, this also implies that $(\bar x^k,\bar s^k)$ remains positive. We now prove that the next iterate $(x^{k+1},s^{k+1})$ is back inside the original neighborhood, i.e., $\delta_\infty(x^{k+1},s^{k+1},\mu_{k+1}) \le \tau$. We first note that, by Lemma 2.1,
\[
\frac{\|\Delta\bar x^k\Delta\bar s^k\|}{\bar\mu_k}
\le \frac{1}{4\bar\mu_k}\left\| (\bar x^k\bar s^k)^{-1/2}(\bar\mu_k e - \bar x^k\bar s^k) \right\|^2
= \frac{\bar\mu_k}{4}\left\| (\bar x^k\bar s^k)^{-1/2}\left( e - \frac{\bar x^k\bar s^k}{\bar\mu_k} \right) \right\|^2
\le \frac{n\,\bar\mu_k}{4\min(\bar x^k\bar s^k)}\left\| e - \frac{\bar x^k\bar s^k}{\bar\mu_k} \right\|_\infty^2.
\]
Therefore,
\begin{align*}
\left\| \frac{x^{k+1}s^{k+1}}{\bar\mu_k} - e \right\|_\infty
&= \left\| \frac{\bar x^k\bar s^k + \alpha_k(\bar\mu_k e - \bar x^k\bar s^k) + \alpha_k^2\,\Delta\bar x^k\Delta\bar s^k}{\bar\mu_k} - e \right\|_\infty \\
&= \left\| (1-\alpha_k)\left( \frac{\bar x^k\bar s^k}{\bar\mu_k} - e \right) + \alpha_k^2\,\frac{\Delta\bar x^k\Delta\bar s^k}{\bar\mu_k} \right\|_\infty \\
&\le (1-\alpha_k)\left\| \frac{\bar x^k\bar s^k}{\bar\mu_k} - e \right\|_\infty
 + \alpha_k^2\,\frac{n\,\bar\mu_k}{4\min(\bar x^k\bar s^k)}\left\| \frac{\bar x^k\bar s^k}{\bar\mu_k} - e \right\|_\infty^2 \\
&\le \left[ 1-\alpha_k + \alpha_k^2\,\frac{n\,\bar\tau\,\bar\mu_k}{4\min(\bar x^k\bar s^k)} \right]\bar\tau. \tag{24}
\end{align*}

From the beginning of the proof of Lemma 3.4, we have $(\bar\tau-\tau)n\bar\mu_k/\min(\bar x^k\bar s^k) \le 1/2 < 1$. Thus, the following quadratic equation in $t$ has at least one solution:
\[
\left[ 1 - t + t^2\,\frac{n\,\bar\tau\,\bar\mu_k}{4\min(\bar x^k\bar s^k)} \right]\bar\tau = \tau. \tag{25}
\]
It is easy to see that
\[
\alpha_k = \frac{2(\bar\tau-\tau)}{\bar\tau\left( 1+\sqrt{1-(\bar\tau-\tau)n\bar\mu_k/\min(\bar x^k\bar s^k)} \right)}
\]
is the least solution to equation (25). Thus, it follows from (24) that
\[
\left\| \frac{x^{k+1}s^{k+1}}{\bar\mu_k} - e \right\|_\infty \le \tau.
\]
This also implies that $(x^{k+1},s^{k+1}) > 0$. Since $\mu_{k+1} = [\max(x^{k+1}s^{k+1})+\min(x^{k+1}s^{k+1})]/2$ is the global minimizer of the proximity measure function $\delta_\infty(x^{k+1},s^{k+1},\cdot)$, we conclude that
\[
\delta_\infty(x^{k+1},s^{k+1},\mu_{k+1}) = \left\| \frac{x^{k+1}s^{k+1}}{\mu_{k+1}} - e \right\|_\infty \le \left\| \frac{x^{k+1}s^{k+1}}{\bar\mu_k} - e \right\|_\infty \le \tau,
\]
as desired.

We now prove (ii). Notice that $(\Delta\bar x^k)^T\Delta\bar s^k = 0$. Using the fact that $\bar\mu^k_{\mathrm{gap}} = (1-\theta_k^*)\mu^k_{\mathrm{gap}}$, we have
\[
\mu^{k+1}_{\mathrm{gap}} = \frac{(x^{k+1})^T s^{k+1}}{n}
= \frac{(\bar x^k+\alpha_k\Delta\bar x^k)^T(\bar s^k+\alpha_k\Delta\bar s^k)}{n}
= \frac{(\bar x^k)^T\bar s^k + \alpha_k\,e^T(\bar\mu_k e - \bar x^k\bar s^k)}{n}
= (1-\alpha_k)\,\bar\mu^k_{\mathrm{gap}} + \alpha_k\,\bar\mu_k.
\]

Hence, by Lemma 3.4,
\[
\mu^{k+1}_{\mathrm{gap}} = \mu^k_{\mathrm{gap}}\,(1-\theta_k^*)\left( 1-\alpha_k+\alpha_k\,\frac{\bar\mu_k}{\bar\mu^k_{\mathrm{gap}}} \right) \le (1-\gamma\theta_k^*)\,\mu^k_{\mathrm{gap}},
\]
which proves (ii).

Finally, we prove that $\theta_k^* \ge \sigma/n$, where $\sigma \in (0,1)$ is a constant independent of $n$. Actually, in the proof of Lemma 3.4, we have pointed out that $\eta_k^* \ge 4(\bar\tau-\tau)/((1+\bar\tau)n)$. Noticing that $\bar\tau-\tau = (1-\bar\tau)/(2n)$, we conclude that $\eta_k^* \ge \nu/n^2$ for some constant $\nu$ independent of $n$ (for instance, $\nu = 2(1-\bar\tau)/(1+\bar\tau)$). Observe that the function $\zeta(t) = \frac{2t}{t+\sqrt{t^2+4t}}$ is increasing on $(0,\infty)$. Thus, we obtain
\[
\theta_k^* = \frac{2\eta_k^*}{\eta_k^* + \sqrt{(\eta_k^*)^2+4\eta_k^*}}
\ge \frac{2(\nu/n^2)}{\nu/n^2 + \sqrt{(\nu/n^2)^2+4\nu/n^2}} \ge \frac{\sigma}{n},
\]
where $0 < \sigma = \frac{2\sqrt{\nu}}{\sqrt{\nu}+\sqrt{\nu+4}} < 1$. Therefore, the algorithm has an $O(n\log((x^0)^T s^0/\varepsilon))$ iteration complexity. $\Box$

Before closing this section, we point out that both Algorithm 2.1 and Algorithm 3.1 are quadratically convergent in the sense that the duality gap sequences generated by these algorithms converge to zero quadratically. Actually, by a proof similar to that of Theorem 4.1 in [11], we can easily obtain the following result.

Theorem 3.2. Let $(x^k,s^k)$ be generated by Algorithm 2.1 or Algorithm 3.1. Then the algorithm is quadratically convergent in the sense that $\mu^{k+1}_{\mathrm{gap}} = O((\mu^k_{\mathrm{gap}})^2)$. Moreover, every accumulation point of the sequence $(x^k,s^k)$ is a strictly complementary solution of the problem.

4 Conclusions and future work

Most numerical experiments demonstrate that interior-point algorithms working in wider neighborhoods perform better than their counterparts using smaller neighborhoods. Inspired by this fact, we have presented in this paper a unified method to enlarge the neighborhoods of interior-point algorithms. As an example, we considered so-called predictor-corrector methods, and showed how to use the least value of a proximity measure function to enlarge the neighborhoods of the original methods. We also proved that the proposed algorithms retain the best known iteration complexity and the local superlinear (quadratic) convergence of the original algorithms. Our methods can be viewed as a new design for interior-point methods.

It is worth mentioning that the following proximity function is also widely used in interior-point algorithms:
\[
\delta_-(x,s,\mu) = \left\| \left( \frac{xs}{\mu} - e \right)_- \right\|,
\]
where the operation $(\cdot)_-$ acts componentwise, i.e., for any vector $y \in R^n$, the $i$th component of the vector $y_-$ is $\min(0,y_i)$, $i = 1,\dots,n$. In $R^n_{++}$, we note that $\delta_-(x,s,\mu) = 0$ for all $\mu \in (0,\min(xs)]$, and $\delta_-(x,s,\mu) > 0$ for all $\mu > \min(xs)$. This implies that the global minimum point of $\delta_-(x,s,\mu)$ with respect to $\mu$ is not unique. In fact, any $\mu \in (0,\min(xs)]$ is a global minimum point, and the least value of the proximity function is zero. Therefore, if we take $\mu$ to be one of these minimum points, the neighborhood is enlarged to be the whole positive orthant $R^n_{++}$. In this case, we can say that the interior-point algorithm does not need a particular neighborhood of the central path, and thus the algorithm does not require any proximity measure function. The analysis of such an interior-point algorithm is worthwhile and interesting future work.
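The negative-part measure $\delta_-$ is a one-liner to compute; the snippet below (ours) also illustrates the non-uniqueness of its minimizer described above.

```python
import numpy as np

def delta_minus(x, s, mu):
    """|| (xs/mu - e)_- ||: only components below the central path contribute.

    It vanishes for every mu in (0, min(xs)], so its minimizer is not unique
    and the induced neighborhood is the whole positive orthant.
    """
    return np.linalg.norm(np.minimum(x * s / mu - 1.0, 0.0))

xs_check = np.array([1.0, 2.0, 3.0])
print(delta_minus(np.ones(3), xs_check, 0.5))  # 0: mu <= min(xs)
print(delta_minus(np.ones(3), xs_check, 2.0))  # positive: mu > min(xs)
```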

References

[1] K. M. Anstreicher and R. A. Bosch, A new infinity-norm path following algorithm for linear programming, SIAM J. Optim., 5 (1995), pp. 236-246.

[2] E. R. Barnes, S. Chopra and D. J. Jensen, The affine scaling method with centering, Technical Report, Department of Mathematical Sciences, IBM T. J. Watson Research Center, Yorktown Heights, NY, 1988.

[3] D. A. Bayer and J. C. Lagarias, Karmarkar's linear programming algorithm and Newton's method, Math. Programming, 50 (1991), pp. 291-330.

[4] T. J. Carpenter, I. J. Lustig, J. M. Mulvey and D. F. Shanno, Higher-order predictor-corrector interior point methods with application to quadratic objectives, SIAM J. Optim., 3 (1993), pp. 696-725.

[5] C. C. Gonzaga, Complexity of predictor-corrector algorithms for LCP based on a large neighborhood of the central path, SIAM J. Optim., 10 (1999), pp. 183-194.

[6] P. F. Huang and Y. Ye, An asymptotical $O(\sqrt{n}L)$-iteration path-following linear programming algorithm that uses wide neighborhoods, SIAM J. Optim., 6 (1996).

[7] J. Ji, F. A. Potra and R. Sheng, On the local convergence of a predictor-corrector method for semidefinite programming, SIAM J. Optim., 10 (1999).

[8] M. Kojima, N. Megiddo, T. Noma and A. Yoshise, A Unified Approach to Interior Point Algorithms for Linear Complementarity Problems, Lecture Notes in Computer Science 538, Springer-Verlag, Berlin, 1991.

[9] S. Mehrotra, On the implementation of a primal-dual interior-point method, SIAM J. Optim., 2 (1992), pp. 575-601.

[10] S. Mizuno, M. J. Todd and Y. Ye, On adaptive-step primal-dual interior-point algorithms for linear programming, Math. Oper. Res., 18 (1993), pp. 964-981.

[11] J. M. Peng, T. Terlaky and Y. B. Zhao, A self-regularity based predictor-corrector algorithm for linear optimization, Technical Report, Department of Computing and Software, McMaster University, Canada, 2002.

[12] F. A. Potra, A quadratically convergent predictor-corrector method for solving linear programs from infeasible starting points, Math. Programming, 67 (1994), pp. 383-406.

[13] F. A. Potra, An infeasible-interior-point predictor-corrector algorithm for linear programming, SIAM J. Optim., 6 (1996), pp. 19-32.

[14] F. A. Potra, The Mizuno-Todd-Ye algorithm in a large neighborhood of the central path, European J. Oper. Res., 143 (2002), pp. 257-267.

[15] F. A. Potra, A superlinearly convergent predictor-corrector method for degenerate LCP in a wide neighborhood of the central path with $O(\sqrt{n}L)$-iteration complexity, Math. Program., 100 (2004), no. 2, Ser. A, pp. 317-337.

[16] C. Roos, T. Terlaky and J.-Ph. Vial, Theory and Algorithms for Linear Optimization: An Interior Point Approach, Wiley-Interscience Series in Discrete Mathematics and Optimization, John Wiley & Sons, Chichester, 1997.

[17] J. Stoer and M. Wechs, The complexity of high-order predictor-corrector methods for solving sufficient linear complementarity problems, Optim. Methods Softw., 10 (1998).

[18] J. Sun and G. Zhao, Global linear and local quadratic convergence of a long-step adaptive-mode interior point method for some monotone variational inequality problems, SIAM J. Optim., 8 (1998).

[19] S. J. Wright, Primal-Dual Interior-Point Methods, SIAM, Philadelphia, PA, 1997.

[20] Y. Ye, Interior-Point Algorithms: Theory and Analysis, John Wiley & Sons, New York, NY, 1997.

[21] Y. Ye and K. Anstreicher, On quadratic and $O(\sqrt{n}L)$ convergence of a predictor-corrector algorithm for LCP, Math. Programming, 62 (1993), pp. 537-551.

[22] G. Zhao, Interior-point algorithms for linear complementarity problems based on large neighborhoods of the central path, SIAM J. Optim., 8 (1998), pp. 397-413.

[23] Y. B. Zhao and J. Han, Two interior-point methods for nonlinear $P_*(\tau)$-complementarity problems, J. Optim. Theory Appl., 102 (1999), pp. 659-679.


More information

c 2005 Society for Industrial and Applied Mathematics

c 2005 Society for Industrial and Applied Mathematics SIAM J. OPTIM. Vol. 15, No. 4, pp. 1147 1154 c 2005 Society for Industrial and Applied Mathematics A NOTE ON THE LOCAL CONVERGENCE OF A PREDICTOR-CORRECTOR INTERIOR-POINT ALGORITHM FOR THE SEMIDEFINITE

More information

Interior Point Methods for Linear Programming: Motivation & Theory

Interior Point Methods for Linear Programming: Motivation & Theory School of Mathematics T H E U N I V E R S I T Y O H F E D I N B U R G Interior Point Methods for Linear Programming: Motivation & Theory Jacek Gondzio Email: J.Gondzio@ed.ac.uk URL: http://www.maths.ed.ac.uk/~gondzio

More information

Interior-Point Methods for Linear Optimization

Interior-Point Methods for Linear Optimization Interior-Point Methods for Linear Optimization Robert M. Freund and Jorge Vera March, 204 c 204 Robert M. Freund and Jorge Vera. All rights reserved. Linear Optimization with a Logarithmic Barrier Function

More information

McMaster University. Advanced Optimization Laboratory. Title: Computational Experience with Self-Regular Based Interior Point Methods

McMaster University. Advanced Optimization Laboratory. Title: Computational Experience with Self-Regular Based Interior Point Methods McMaster University Advanced Optimization Laboratory Title: Computational Experience with Self-Regular Based Interior Point Methods Authors: Guoqing Zhang, Jiming Peng, Tamás Terlaky, Lois Zhu AdvOl-Report

More information

IMPLEMENTATION OF INTERIOR POINT METHODS

IMPLEMENTATION OF INTERIOR POINT METHODS IMPLEMENTATION OF INTERIOR POINT METHODS IMPLEMENTATION OF INTERIOR POINT METHODS FOR SECOND ORDER CONIC OPTIMIZATION By Bixiang Wang, Ph.D. A Thesis Submitted to the School of Graduate Studies in Partial

More information

Lineáris komplementaritási feladatok: elmélet, algoritmusok, alkalmazások

Lineáris komplementaritási feladatok: elmélet, algoritmusok, alkalmazások General LCPs Lineáris komplementaritási feladatok: elmélet, algoritmusok, alkalmazások Illés Tibor BME DET BME Analízis Szeminárium 2015. november 11. Outline Linear complementarity problem: M u + v =

More information

Contraction Methods for Convex Optimization and monotone variational inequalities No.12

Contraction Methods for Convex Optimization and monotone variational inequalities No.12 XII - 1 Contraction Methods for Convex Optimization and monotone variational inequalities No.12 Linearized alternating direction methods of multipliers for separable convex programming Bingsheng He Department

More information

Nonlinear Programming

Nonlinear Programming Nonlinear Programming Kees Roos e-mail: C.Roos@ewi.tudelft.nl URL: http://www.isa.ewi.tudelft.nl/ roos LNMB Course De Uithof, Utrecht February 6 - May 8, A.D. 2006 Optimization Group 1 Outline for week

More information

12. Interior-point methods

12. Interior-point methods 12. Interior-point methods Convex Optimization Boyd & Vandenberghe inequality constrained minimization logarithmic barrier function and central path barrier method feasibility and phase I methods complexity

More information

Primal-Dual Interior-Point Methods. Javier Peña Convex Optimization /36-725

Primal-Dual Interior-Point Methods. Javier Peña Convex Optimization /36-725 Primal-Dual Interior-Point Methods Javier Peña Convex Optimization 10-725/36-725 Last time: duality revisited Consider the problem min x subject to f(x) Ax = b h(x) 0 Lagrangian L(x, u, v) = f(x) + u T

More information

A variant of the Vavasis-Ye layered-step interior-point algorithm for linear programming

A variant of the Vavasis-Ye layered-step interior-point algorithm for linear programming A variant of the Vavasis-Ye layered-step interior-point algorithm for linear programming R. D. C. Monteiro T. Tsuchiya April, 2001 Abstract In this paper we present a variant of Vavasis and Ye s layered-step

More information

Self-Concordant Barrier Functions for Convex Optimization

Self-Concordant Barrier Functions for Convex Optimization Appendix F Self-Concordant Barrier Functions for Convex Optimization F.1 Introduction In this Appendix we present a framework for developing polynomial-time algorithms for the solution of convex optimization

More information

On the behavior of Lagrange multipliers in convex and non-convex infeasible interior point methods

On the behavior of Lagrange multipliers in convex and non-convex infeasible interior point methods On the behavior of Lagrange multipliers in convex and non-convex infeasible interior point methods Gabriel Haeser Oliver Hinder Yinyu Ye July 23, 2017 Abstract This paper analyzes sequences generated by

More information

Limiting behavior of the central path in semidefinite optimization

Limiting behavior of the central path in semidefinite optimization Limiting behavior of the central path in semidefinite optimization M. Halická E. de Klerk C. Roos June 11, 2002 Abstract It was recently shown in [4] that, unlike in linear optimization, the central path

More information

Nonsymmetric potential-reduction methods for general cones

Nonsymmetric potential-reduction methods for general cones CORE DISCUSSION PAPER 2006/34 Nonsymmetric potential-reduction methods for general cones Yu. Nesterov March 28, 2006 Abstract In this paper we propose two new nonsymmetric primal-dual potential-reduction

More information

Lecture 17: Primal-dual interior-point methods part II

Lecture 17: Primal-dual interior-point methods part II 10-725/36-725: Convex Optimization Spring 2015 Lecture 17: Primal-dual interior-point methods part II Lecturer: Javier Pena Scribes: Pinchao Zhang, Wei Ma Note: LaTeX template courtesy of UC Berkeley EECS

More information

Introduction to Nonlinear Stochastic Programming

Introduction to Nonlinear Stochastic Programming School of Mathematics T H E U N I V E R S I T Y O H F R G E D I N B U Introduction to Nonlinear Stochastic Programming Jacek Gondzio Email: J.Gondzio@ed.ac.uk URL: http://www.maths.ed.ac.uk/~gondzio SPS

More information

Lecture 14: Primal-Dual Interior-Point Method

Lecture 14: Primal-Dual Interior-Point Method CSE 599: Interplay between Convex Optimization and Geometry Winter 218 Lecturer: Yin Tat Lee Lecture 14: Primal-Dual Interior-Point Method Disclaimer: Please tell me any mistake you noticed. In this and

More information

א K ٢٠٠٢ א א א א א א E٤

א K ٢٠٠٢ א א א א א א E٤ المراجع References المراجع العربية K ١٩٩٠ א א א א א E١ K ١٩٩٨ א א א E٢ א א א א א E٣ א K ٢٠٠٢ א א א א א א E٤ K ٢٠٠١ K ١٩٨٠ א א א א א E٥ المراجع الا جنبية [AW] [AF] [Alh] [Ali1] [Ali2] S. Al-Homidan and

More information

Optimization: Then and Now

Optimization: Then and Now Optimization: Then and Now Optimization: Then and Now Optimization: Then and Now Why would a dynamicist be interested in linear programming? Linear Programming (LP) max c T x s.t. Ax b αi T x b i for i

More information

Chapter 6 Interior-Point Approach to Linear Programming

Chapter 6 Interior-Point Approach to Linear Programming Chapter 6 Interior-Point Approach to Linear Programming Objectives: Introduce Basic Ideas of Interior-Point Methods. Motivate further research and applications. Slide#1 Linear Programming Problem Minimize

More information

Advances in Convex Optimization: Theory, Algorithms, and Applications

Advances in Convex Optimization: Theory, Algorithms, and Applications Advances in Convex Optimization: Theory, Algorithms, and Applications Stephen Boyd Electrical Engineering Department Stanford University (joint work with Lieven Vandenberghe, UCLA) ISIT 02 ISIT 02 Lausanne

More information

Interior-point methods Optimization Geoff Gordon Ryan Tibshirani

Interior-point methods Optimization Geoff Gordon Ryan Tibshirani Interior-point methods 10-725 Optimization Geoff Gordon Ryan Tibshirani Analytic center Review force field interpretation Newton s method: y = 1./(Ax+b) A T Y 2 A Δx = A T y Dikin ellipsoid unit ball of

More information

Chapter 14 Linear Programming: Interior-Point Methods

Chapter 14 Linear Programming: Interior-Point Methods Chapter 14 Linear Programming: Interior-Point Methods In the 1980s it was discovered that many large linear programs could be solved efficiently by formulating them as nonlinear problems and solving them

More information

On Mehrotra-Type Predictor-Corrector Algorithms

On Mehrotra-Type Predictor-Corrector Algorithms On Mehrotra-Type Predictor-Corrector Algorithms M. Salahi, J. Peng, T. Terlaky October 10, 006 (Revised) Abstract In this paper we discuss the polynomiality of a feasible version of Mehrotra s predictor-corrector

More information

Penalty and Barrier Methods General classical constrained minimization problem minimize f(x) subject to g(x) 0 h(x) =0 Penalty methods are motivated by the desire to use unconstrained optimization techniques

More information

Convergence Analysis of the Inexact Infeasible Interior-Point Method for Linear Optimization

Convergence Analysis of the Inexact Infeasible Interior-Point Method for Linear Optimization J Optim Theory Appl (29) 141: 231 247 DOI 1.17/s1957-8-95-5 Convergence Analysis of the Inexact Infeasible Interior-Point Method for Linear Optimization G. Al-Jeiroudi J. Gondzio Published online: 25 December

More information

Conic Linear Optimization and its Dual. yyye

Conic Linear Optimization and its Dual.   yyye Conic Linear Optimization and Appl. MS&E314 Lecture Note #04 1 Conic Linear Optimization and its Dual Yinyu Ye Department of Management Science and Engineering Stanford University Stanford, CA 94305, U.S.A.

More information

A Smoothing Newton Method for Solving Absolute Value Equations

A Smoothing Newton Method for Solving Absolute Value Equations A Smoothing Newton Method for Solving Absolute Value Equations Xiaoqin Jiang Department of public basic, Wuhan Yangtze Business University, Wuhan 430065, P.R. China 392875220@qq.com Abstract: In this paper,

More information

4TE3/6TE3. Algorithms for. Continuous Optimization

4TE3/6TE3. Algorithms for. Continuous Optimization 4TE3/6TE3 Algorithms for Continuous Optimization (Algorithms for Constrained Nonlinear Optimization Problems) Tamás TERLAKY Computing and Software McMaster University Hamilton, November 2005 terlaky@mcmaster.ca

More information

informs DOI /moor.xxxx.xxxx c 200x INFORMS

informs DOI /moor.xxxx.xxxx c 200x INFORMS MATHEMATICS OF OPERATIONS RESEARCH Vol. xx, No. x, Xxxxxxx 2x, pp. xxx xxx ISSN 364-765X EISSN 526-547 x xxx xxx informs DOI.287/moor.xxxx.xxxx c 2x INFORMS A New Complexity Result on Solving the Markov

More information

Convex Optimization and l 1 -minimization

Convex Optimization and l 1 -minimization Convex Optimization and l 1 -minimization Sangwoon Yun Computational Sciences Korea Institute for Advanced Study December 11, 2009 2009 NIMS Thematic Winter School Outline I. Convex Optimization II. l

More information

Lecture 3 Interior Point Methods and Nonlinear Optimization

Lecture 3 Interior Point Methods and Nonlinear Optimization Lecture 3 Interior Point Methods and Nonlinear Optimization Robert J. Vanderbei April 16, 2012 Machine Learning Summer School La Palma http://www.princeton.edu/ rvdb Example: Basis Pursuit Denoising L

More information

An interior-point gradient method for large-scale totally nonnegative least squares problems

An interior-point gradient method for large-scale totally nonnegative least squares problems An interior-point gradient method for large-scale totally nonnegative least squares problems Michael Merritt and Yin Zhang Technical Report TR04-08 Department of Computational and Applied Mathematics Rice

More information