SIAM J. OPTIM.
Vol. 15, No. 3, pp.
c 2005 Society for Industrial and Applied Mathematics

CONVERGENCE CONDITIONS AND KRYLOV SUBSPACE BASED CORRECTIONS FOR PRIMAL-DUAL INTERIOR-POINT METHOD

SANJAY MEHROTRA AND ZHIFENG LI

Abstract. We present convergence conditions for a generic primal-dual interior-point algorithm with multiple corrector directions. The corrector directions can be generated by any approach. The search direction is obtained by combining predictor and corrector directions through a small linear program. We also propose a new approach to generate corrector directions. This approach generates directions using information from an appropriately defined Krylov subspace. We propose efficient implementation strategies for our approach that follow the analysis of this paper. Numerical experiments illustrating the features of the proposed approach and its practical usefulness are reported.

Key words. linear program, primal-dual interior-point method, inexact search direction, Krylov subspace, convergence

AMS subject classifications. 90C05, 90C06, 90C30, 90C51

DOI /S

1. Introduction. Implementations based on primal-dual interior-point methods have emerged as an efficient approach for solving large scale linear and nonlinear programs. The practical performance of these methods is improved significantly by introducing corrector directions to a Newton direction [10, 12, 16, 18]. The corrector directions are computed from an already factored matrix to reduce the overall number of matrix factorizations in the algorithm. The predictor-corrector scheme builds on the philosophy that once an expensive factorization step is performed, it should be used to the best possible extent before doing a refactorization. Methods built on this scheme were proposed by Karmarkar et al. [15], Mehrotra [18], Gondzio [10], and Jarre and Wechs [12]. The method in Karmarkar et al. [15] is based on a power-series expansion of a particular parameterization of a primal-dual trajectory toward a point on the central path. Mehrotra [17, 18] explicitly introduced the predictor-corrector concepts and proposed strategies based on adaptively generating information from fixed point iterations and/or a higher order Taylor expansion. Gondzio [10] proposed improvements based on centrality corrections, and more recently Jarre and Wechs [12] have given improved heuristics for generating the corrector steps. Jarre and Wechs [12] also proved that their approach of generating and combining corrector directions gives a convergent algorithm.

Since there can be many different ways to generate the corrector directions, it is important to consider conditions that ensure the convergence of a predictor-corrector interior-point algorithm which allows generic computations of corrector directions. This is addressed in the first part of this paper. We give these conditions in the form of a linear program. The conditions are derived from a refinement of the analysis appearing in Kojima, Megiddo, and Mizuno [14] and Jarre and Wechs [12].

We also propose a new approach for generating corrector directions. The idea is to incorporate information generated from an iterative scheme to improve the performance of a direct method based implementation for unstructured problems.

Received by the editors July 9, 2003; accepted for publication (in revised form) March 20, 2004; published electronically April 8, 2005. The work of both authors was supported in part by NSF grants DMI and DMI and ONR grant N /P.
Department of Industrial Engineering and Management Sciences, Northwestern University, Evanston, IL (mehrotra@iems.nwu.edu, zhifeng@iems.nwu.edu).

Although efforts have been made to use indirect methods (for instance, GMRES/QMR methods [5, 6] and the conjugate gradient method [11, 13, 21]), it is generally understood that for unstructured problems direct methods are more suitable [1, 3, 16]. The iterative methods are found more suitable for structured problems, particularly network flow problems [11, 13, 20, 22, 23]. The performance of iterative methods depends on the quality of the preconditioner.

The proposed method for generating corrector directions is a hybrid of direct and iterative methods. The particular design is based on two observations. First, for the information from indirect methods to be effective, iterates should stay in a certain subspace. In particular, the infeasibility of the linear equations in the iterative methods should remain only in the equations corresponding to the nonlinear part of the KKT conditions. The second observation is that the directions generated by iterative methods are generally weak and typically by themselves not good enough to give sufficient progress in the algorithm. This is particularly true when high solution accuracy is desired. As a result, iterative methods don't give a stable implementation when accuracy requirements are high.

Our approach to generating corrector directions uses a Krylov subspace. We use an exact factorization from an earlier iteration to ensure the desired properties in the directions we generate. This can be viewed as choosing an easily available special preconditioner for the KKT equations. In this sense, our method is a hybrid of a higher-order correction strategy and an inexact iterative method. A linear programming subproblem is solved at each iteration to generate a combined direction. Numerical experiments using a modification of the software package PCx [3] show the advantage of this approach for problems with relatively dense Cholesky factors.

This paper is organized as follows. In the next section we give conditions that guarantee the convergence of a generic predictor-corrector algorithm. We present the linear programming subproblem used for generating a search direction and our modified primal-dual interior-point algorithm in this section. Convergence analysis of the proposed generic algorithm is given in section 3. In section 4 we discuss Krylov subspace based corrector directions. Computational results are given in section 5. Concluding remarks and future research directions are discussed in section 6. Some proofs and tables containing numerical results are given in the appendix.

Notation. Lower-case letters are used for vectors, and upper-case letters are used for matrices. Superscript T denotes the transpose of a vector. For a vector denoted by a lower-case letter, the corresponding upper-case letter means the diagonal matrix whose diagonal elements are the components of the vector. For example, X = diag(x_1, x_2, ..., x_n) is the diagonal matrix associated with the vector x = (x_1, x_2, ..., x_n)^T ∈ R^n. e denotes the vector (1, 1, ..., 1)^T. The vector Euclidean norm is denoted by ‖x‖ = √(x^T x), and ‖x‖_∞ is the infinity norm. The norm of a matrix A, max_{‖x‖=1} ‖Ax‖, is represented by ‖A‖. The notation x ≥ 0 (or x ∈ R^n_+) implies that all components of the vector x are nonnegative. The same symbol, 0, is used to denote the number zero, the zero vector, and the zero matrix. I denotes the identity matrix.
2. Convergence conditions for a generic primal-dual predictor-corrector algorithm.

Preliminaries. We describe our ideas for the standard form linear program:

(1)    min_{x ∈ R^n} { c^T x : Ax = b, x ≥ 0 },

where A ∈ R^{m×n}, b ∈ R^m, and c ∈ R^n. We assume that A has full row rank.
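As a concrete illustration of the primal-dual pair (1)-(2) (an example added here, not from the paper; the data are arbitrary), a small standard-form instance can be solved with an off-the-shelf LP solver and a dual pair (y, s) recovered from the equality multipliers:

```python
import numpy as np
from scipy.optimize import linprog

# An arbitrary standard-form instance of (1): min c^T x s.t. Ax = b, x >= 0.
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [1.0, 3.0, 0.0, 1.0]])
b = np.array([4.0, 6.0])
c = np.array([-1.0, -2.0, 0.0, 0.0])

res = linprog(c, A_eq=A, b_eq=b, bounds=[(0, None)] * 4, method="highs")
x = res.x
y = res.eqlin.marginals          # multipliers of Ax = b, i.e., the dual y of (2)
s = c - A.T @ y                  # dual slacks; s >= 0 at optimality
print(x, y, s)
print(np.isclose(c @ x, b @ y))  # strong duality: c^T x = b^T y
```

The sign convention for the multipliers follows SciPy's HiGHS interface.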

The dual problem to (1) is written as

(2)    max_{y ∈ R^m, s ∈ R^n} { b^T y : A^T y + s = c, s ≥ 0 }.

The KKT optimality conditions for the primal and dual problems (1) and (2) are as follows:

(3)    Ax = b,
(4)    A^T y + s = c,
(5)    Xs = 0,    x, s ≥ 0.

Equations (3) and (4) are primal and dual feasibility, respectively, and the nonlinear equation (5) is called the complementarity condition. Points (x, y, s) ∈ R^n_+ × R^m × R^n_+ satisfying (3), (4), and Xs = μe are called points on the central path and are denoted by (x(μ), y(μ), s(μ)) for some parameter μ. Primal-dual interior-point algorithms can be viewed as a variant of Newton's method applied to the system of equations

(6)    Ax = b,    A^T y + s = c,    Xs = r,

where r is a suitably chosen vector to be specified later. A linearization of (6) at (x, y, s) with x > 0, s > 0 yields the system of linear equations

(7)    [ X  S  0   ] [ Δs ]   [ r̄ ]
       [ I  0  A^T ] [ Δx ] = [ q ],
       [ 0  A  0   ] [ Δy ]   [ p ]

where the right-hand side in (7) is given by

(8)    p := b − Ax,    q := c − A^T y − s,    r̄ := r − Xs.

The predictor direction (or the primal-dual affine direction) (Δx^aff, Δy^aff, Δs^aff) is given by taking r = 0. A matrix factorization is required for computing the predictor direction. This factorization dominates the computational effort in interior-point methods. Therefore, in practice it is often advantageous to use the same matrix again with appropriately chosen right-hand sides to improve on the predictor direction. The directions computed with the goal of improving the predictor direction are called the corrector directions.
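For illustration, the following dense Python sketch (not from the paper, which uses sparse factorizations) solves (7) by the usual reduction to the normal equations A(XS^{-1})A^T Δy = rhs; the predictor direction corresponds to r̄ = −Xs:

```python
import numpy as np

def newton_direction(A, x, s, p, q, rbar):
    """Solve the linearized KKT system (7) by eliminating (ds, dx):
    ds = q - A^T dy, dx = S^{-1}(rbar - X ds), and A dx = p then gives
    A (X S^{-1}) A^T dy = p + A S^{-1}(X q - rbar)."""
    d2 = x / s                              # diagonal of D^2 = X S^{-1}
    M = (A * d2) @ A.T                      # normal-equations matrix A D^2 A^T
    rhs = p + A @ ((x * q - rbar) / s)
    L = np.linalg.cholesky(M)               # this factorization is reused for correctors
    dy = np.linalg.solve(L.T, np.linalg.solve(L, rhs))
    ds = q - A.T @ dy                       # second block row of (7)
    dx = (rbar - x * ds) / s                # first block row of (7)
    return dx, dy, ds

# Predictor (affine) direction: r = 0, so rbar = -x*s in (8), e.g.
# dx_aff, dy_aff, ds_aff = newton_direction(A, x, s, p, q, -x * s)
```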

LP subproblem. Since several different approaches [10, 12, 15, 18] are possible to compute corrections, it is useful to identify conditions under which a generic predictor-corrector algorithm will converge. Let (Δx^0, Δy^0, Δs^0), ..., (Δx^j, Δy^j, Δs^j) be j additional corrector directions generated by solving (7) with certain choices of (r̄, q, p). These can be Mehrotra correctors, Gondzio correctors, Jarre-Wechs correctors, or other correctors such as the one described in section 4. Let the direction (Δx^c, Δy^c, Δs^c) be computed from (7) with right-hand side (μe, 0, 0). Here μ is an appropriately chosen parameter. Let (Δx(ρ^x), Δy(ρ^s), Δs(ρ^s)) represent a combination of these directions:

(9)    Δx(ρ^x) = ρ^x_aff Δx^aff + ρ^x_0 Δx^0 + ρ^x_1 Δx^1 + ··· + ρ^x_j Δx^j + ρ^x_c Δx^c,
       Δy(ρ^s) = ρ^s_aff Δy^aff + ρ^s_0 Δy^0 + ρ^s_1 Δy^1 + ··· + ρ^s_j Δy^j + ρ^s_c Δy^c,
       Δs(ρ^s) = ρ^s_aff Δs^aff + ρ^s_0 Δs^0 + ρ^s_1 Δs^1 + ··· + ρ^s_j Δs^j + ρ^s_c Δs^c,
       p(ρ^x) = AΔx(ρ^x),    q(ρ^s) = A^T Δy(ρ^s) + Δs(ρ^s),

where ρ^x = (ρ^x_aff, ρ^x_0, ..., ρ^x_j, ρ^x_c) and ρ^s = (ρ^s_aff, ρ^s_0, ..., ρ^s_j, ρ^s_c). The combination (9) should be such that it ensures overall convergence by taking a step of appropriate length along (Δx(ρ^x), Δy(ρ^s), Δs(ρ^s)). Toward this we consider the linear subproblem (sublp):

       max t, subject to
(10)   x + Δx(ρ^x) ≥ 0,
(11)   s + Δs(ρ^s) ≥ 0,
(12)   ‖X^{-1}Δx(ρ^x)‖_∞ ≤ c_1 δ,
(13)   ‖S^{-1}Δs(ρ^s)‖_∞ ≤ c_1 δ,
(14)   ‖p − p(ρ^x)‖ ≤ (1 − δ)‖p‖,
(15)   ‖p(ρ^x)‖ ≤ δ‖p‖,
(16)   ‖q − q(ρ^s)‖ ≤ (1 − δ)‖q‖,
(17)   ‖q(ρ^s)‖ ≤ δ‖q‖,
(18)   s^T Δx(ρ^x) + x^T Δs(ρ^s) ≤ δ(β_2 − 1) x^T s,
(19)   SΔx(ρ^x) + XΔs(ρ^s) ≥ δ(β_1 μ̂ e − Xs),
(20)   c^T(x + Δx(ρ^x)) − b^T(y + Δy(ρ^s)) ≤ (1 − t) μ̂,
       0 ≤ t ≤ δ ≤ 1,

where 0 < β_1 < β_2 < 1 and c_1 > 1 are some parameters and

       μ = x^T s / n,    μ̂ = max{c^T x − b^T y, x^T s, γ_p ‖Ax − b‖, γ_d ‖A^T y + s − c‖}

for some γ_p > 0, γ_d > 0. The parameters β_1, β_2, and γ satisfy additional conditions: β_2 < 1 < β_1/γ, β_1 ≥ γ/(2n), and γ < 1. These conditions are used in lower bounding the step length in the proof while satisfying (20) and Lemma 3.3. The parameter γ controls closeness to centrality while taking a step. The constant c_1 is a large constant whose theoretical value is specified in (34) in Appendix A.

Conditions (10) and (11) in sublp preserve the nonnegativity of x and s. Conditions (12) and (13) ensure that the combined predictor-corrector direction remains bounded. Conditions (14)–(17) reduce the infeasibility of the primal and dual problems at a sufficiently fast rate. Condition (18) ensures that the complementarity gap reduces at a sufficient rate. Condition (19) is used to ensure that the combined search direction stays inside the neighborhood of the central path while giving sufficient progress. Condition (20) guarantees the reduction in the difference of primal and dual objectives along the direction (Δx(ρ^x), Δy(ρ^s), Δs(ρ^s)), while simultaneously ensuring that it is not very different from the complementarity gap and feasibility. The use of the two variables t and δ allows us to use different values for ρ^x and ρ^s. This allows us to develop a practical implementation that very closely maintains the theoretical properties discussed in this and the next section. The sublp (10)–(20) is more general than a similar search finding problem introduced by Jarre and Wechs [12]. In particular, it allows the use of generic corrector directions, as well as different primal and dual combinations, while maintaining global convergence. Essentially, sublp ensures that the combined direction retains the properties of the Newton direction in [14] and only strengthens them without compromising the global convergence. We will discuss implementation aspects of these conditions in section 5.
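Once the candidate directions are fixed, the sublp is itself a small linear program in the variables (ρ^x, ρ^s, t, δ). The following Python sketch (an illustration, not the authors' simplex-based solver; it keeps conditions (10), (11), (18)–(20) and t ≤ δ, and omits the norm conditions (12)–(17) for brevity) sets it up for an off-the-shelf LP code. The columns of DX, DY, DS are assumed to hold the candidate directions, and gap0 = c^T x − b^T y:

```python
import numpy as np
from scipy.optimize import linprog

def solve_sublp(x, s, gap0, DX, DY, DS, c, b, mu_hat, beta1, beta2):
    """Simplified sublp: maximize t over (rho_x, rho_s, t, delta)."""
    n, K = DX.shape
    nv = 2 * K + 2                          # variables: rho_x, rho_s, t, delta
    it, idelta = 2 * K, 2 * K + 1
    rows, rhs = [], []

    def row(rx=None, rs=None, t=0.0, d=0.0):
        r = np.zeros(nv)
        if rx is not None:
            r[:K] = rx
        if rs is not None:
            r[K:2 * K] = rs
        r[it], r[idelta] = t, d
        return r

    for i in range(n):                      # (10), (11): keep x and s nonnegative
        rows.append(row(rx=-DX[i])); rhs.append(x[i])
        rows.append(row(rs=-DS[i])); rhs.append(s[i])
    # (18): complementarity gap decreases at rate delta*(1 - beta2)
    rows.append(row(rx=s @ DX, rs=x @ DS, d=(1.0 - beta2) * (x @ s)))
    rhs.append(0.0)
    # (19): stay inside the central-path neighborhood, componentwise
    target = beta1 * mu_hat - x * s
    for i in range(n):
        rows.append(row(rx=-s[i] * DX[i], rs=-x[i] * DS[i], d=target[i]))
        rhs.append(0.0)
    # (20): duality-gap decrease coupled to the objective variable t
    rows.append(row(rx=c @ DX, rs=-(b @ DY), t=mu_hat))
    rhs.append(mu_hat - gap0)
    rows.append(row(t=1.0, d=-1.0)); rhs.append(0.0)   # t <= delta

    obj = np.zeros(nv); obj[it] = -1.0                 # maximize t
    bounds = [(None, None)] * (2 * K) + [(0.0, 1.0), (0.0, 1.0)]
    res = linprog(obj, A_ub=np.array(rows), b_ub=np.array(rhs),
                  bounds=bounds, method="highs")
    return res.x[:K], res.x[K:2 * K], res.x[it], res.x[idelta]
```

Since (ρ^x, ρ^s, t, δ) = 0 is feasible, the LP always has a solution; this mirrors the observation used later when solving the sublp by simplex methods.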

Generic predictor-corrector method.

Algorithm 1.
INPUT: An initial point (x^0, y^0, s^0) with (x^0, s^0) > 0; convergence parameters ε, ε_p, ε_d > 0, ω; and algorithm parameters μ̄_0 = max{γ_p ‖Ax^0 − b‖, γ_d ‖A^T y^0 + s^0 − c‖} with γ_p, γ_d > 0, 0 < γ < β_1 < β_2 < 1 with β_1 ≥ γ/(2n) and γ ≤ min_i x^0_i s^0_i / max{x^{0T} s^0, μ̄_0}.

For k = 0, 1, 2, ...
Step 1. Set (x, y, s) = (x^k, y^k, s^k) and let μ = x^T s / n, μ̂ = max{c^T x − b^T y, x^T s, μ̄_k}. If μ̂ < ε, ‖Ax − b‖ < ε_p, and ‖A^T y + s − c‖ < ε_d, or if ‖(x, s)‖ > ω, then STOP; else go to Step 2.
Step 2. Compute the predictor direction (Δx^aff, Δy^aff, Δs^aff) and generate j corrector directions (Δx^0, Δy^0, Δs^0), ..., (Δx^j, Δy^j, Δs^j). In addition, (if needed) compute (Δx^c, Δy^c, Δs^c); then solve sublp to obtain the search direction (Δx(ρ^x), Δy(ρ^s), Δs(ρ^s)).
Step 3. Set Δx = Δx(ρ^x), Δy = Δy(ρ^s), Δs = Δs(ρ^s), and compute

       ᾱ := max_{α ∈ [0,1]} { α : (X + αΔX)(s + αΔs) ≥ (γ/n) max{(1 − αδ*) μ̂, (x + αΔx)^T (s + αΔs)} e },

where δ* is the value of δ at an optimum solution of sublp.
Step 4. Update k ← k + 1 and the new iterate

       (x^{k+1}, y^{k+1}, s^{k+1}) := (x^k, y^k, s^k) + ᾱ (Δx, Δy, Δs),
       μ̄_{k+1} = max{γ_p ‖Ax^{k+1} − b‖, γ_d ‖A^T y^{k+1} + s^{k+1} − c‖}.

End (For)
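A minimal sketch of Step 3 (again an illustration, not the paper's code): since the admissible set of step lengths need not be an interval, a bisection on α is only a heuristic, but it conveys the computation:

```python
import numpy as np

def step_length(x, s, dx, ds, gamma, mu_hat, delta, tol=1e-10):
    """Approximate alpha-bar of Step 3: a large alpha in [0, 1] keeping
    (x + a dx)(s + a ds) >= (gamma/n) max{(1 - a delta) mu_hat, gap(a)} e."""
    n = x.size

    def inside(a):
        xa, sa = x + a * dx, s + a * ds
        bound = (gamma / n) * max((1.0 - a * delta) * mu_hat, xa @ sa)
        return np.all(xa * sa >= bound)

    if inside(1.0):
        return 1.0
    lo, hi = 0.0, 1.0        # inside(0) holds for iterates in the neighborhood
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if inside(mid) else (lo, mid)
    return lo
```

Practical codes replace the bisection with a cheaper ratio test plus backtracking, as discussed in section 5.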

3. Global convergence of the generic predictor-corrector method. We prove the following convergence theorem for Algorithm 1 in this section.

Theorem 1. Algorithm 1 stops after finitely many iterations. In particular, in Step 1, it either identifies an approximate solution of problems (1) and (2) or generates (x^k, s^k) satisfying ‖(x^k, s^k)‖ > ω for some given ω. Furthermore, the algorithm converges linearly.

Our proof of Theorem 1 borrows heavily from the analysis in Jarre and Wechs [12] and Kojima, Megiddo, and Mizuno [14]. In particular, Kojima, Megiddo, and Mizuno [14] showed (for l = 2) that for suitably chosen parameters ρ_aff (Δx^aff, Δy^aff, Δs^aff) + ρ_c (Δx^c, Δy^c, Δs^c) can be used to generate a sequence of iterates in a neighborhood defined by (21)–(23):

(21)   x_i s_i ≥ γ x^T s / n for 1 ≤ i ≤ n,
(22)   x^T s ≥ γ_p ‖Ax − b‖_l or ‖Ax − b‖_l ≤ ε_p,
(23)   x^T s ≥ γ_d ‖A^T y + s − c‖_l or ‖A^T y + s − c‖_l ≤ ε_d.

The iterates generated in Algorithm 1 are in the neighborhood for l = ∞. We divide this proof into three parts. The first part constructs a feasible solution for sublp in Step 2 of Algorithm 1. The second part shows a lower bound for the step length ᾱ in Step 3 that maintains the new iterate in the desired neighborhood. The third part proves global linear convergence. These are given in the next three subsections. Now let us assume that ‖(x, s)‖ ≤ ω is satisfied throughout the algorithm. Otherwise, the algorithm terminates and the theorem holds true.

3.1. Feasible solution for the sublp. We show that

       ρ^x = ρ^s = ρ̄ := (1/c_1, 0, ..., 0, β_1/c_1),    δ̄ := 1/c_1,    t̄ := 1/(4c_1)

is a feasible solution to the sublp. The constant c_1 is defined in (34) while proving Lemma 3.1. Note that c_1 (Δx(ρ̄), Δy(ρ̄), Δs(ρ̄)) is the Newton direction in Kojima, Megiddo, and Mizuno [14]. The construction here is similar to Jarre and Wechs except for small differences in parameter selection.

Lemma 3.1. For the choice of c_1 given in (34) and ρ̄ defined as above, Δx(ρ̄) and Δs(ρ̄) satisfy ‖X^{-1}Δx(ρ̄)‖_∞, ‖S^{-1}Δs(ρ̄)‖_∞ ≤ 1.

Proof. See the appendix.

Feasibility of conditions (10)–(13), (14)–(17), and (18)–(19). From Lemma 3.1 it is easy to see that (10)–(13) hold for (Δx(ρ̄), Δy(ρ̄), Δs(ρ̄)). The inequalities (14)–(17) are feasible because p(ρ̄) = (1/c_1) AΔx^aff = (1/c_1) p, and similarly q(ρ̄) = (1/c_1) q. Conditions (18) and (19) are satisfied because XΔs(ρ̄) + SΔx(ρ̄) = (1/c_1)(β_1 μ̂ e − Xs) and β_2 > β_1.

Feasibility of condition (20). The following result was proved in Jarre and Wechs [12].

Lemma 3.2. The vectors Δx(ρ̄) and Δs(ρ̄) satisfy

       c^T(x + Δx(ρ̄)) − b^T(y + Δy(ρ̄)) ≤ (1/(2c_1)) (x^T s + u^T Λ^{-2} u) + (1 − 1/c_1)(c^T x − b^T y),

where u := c_1 (XΔs(ρ̄) + SΔx(ρ̄)) + Xs and Λ := X^{1/2} S^{1/2}.

Using the definition of μ̂, u = β_1 μ̂ e, and the conditions on β_1 and γ, from Lemma 3.2 we obtain

       c^T(x + Δx(ρ̄)) − b^T(y + Δy(ρ̄)) ≤ (1 − 1/(2c_1)) μ̂ + (1/(2c_1)) u^T Λ^{-2} u
           = (1 − 1/(2c_1)) μ̂ + (β_1^2/(2c_1)) μ̂^2 e^T Λ^{-2} e
           ≤ (1 − 1/(2c_1)) μ̂ + (1/(4c_1)) μ̂    (by (33) in Appendix A)
           ≤ (1 − 1/(4c_1)) μ̂.

3.2. A lower bound for ᾱ in Step 3. In this section we give a lower bound for the step size taken along a direction obtained from an optimal solution of sublp. Note that although we do not require ρ^x = ρ^s, the present analysis uses equal steps for the combined primal and dual search directions. The ability to take different steps for primal and dual directions is more desirable from an implementation perspective.

Lemma 3.3. The step length ᾱ in Step 3 of Algorithm 1 is at least (β_1 − γβ_2)/((n + γ) c_1^2) > 0.

Proof. Following Jarre and Wechs [12] we consider two cases. Write r(ρ) := XΔs(ρ^s) + SΔx(ρ^x).

Case I. (1 − αδ) μ̂ ≥ (x + αΔx(ρ^x))^T (s + αΔs(ρ^s)). In this case,

       (X + αΔX(ρ^x))(s + αΔs(ρ^s)) − (γ/n)(1 − αδ) μ̂ e
         = Xs + α r(ρ) + α^2 ΔX(ρ^x)Δs(ρ^s) − (γ/n)(1 − αδ) μ̂ e
         = (1 − αδ)(Xs − (γ/n) μ̂ e) + α (r(ρ) + δ Xs) + α^2 ΔX(ρ^x)Δs(ρ^s)
         ≥ αδβ_1 μ̂ e + α^2 (X^{-1}ΔX(ρ^x))(XS)(S^{-1}Δs(ρ^s))    (from (19); the first term is ≥ 0)
         ≥ αδβ_1 μ̂ e − α^2 n μ c_1^2 δ^2 e    (by (10)–(13)).

Since μ̂ ≥ μ and δ < 1, the above expression is nonnegative when 0 ≤ α ≤ β_1/(n c_1^2 δ), and in particular when 0 ≤ α ≤ β_1/(n c_1^2) ≥ (β_1 − γβ_2)/((n + γ) c_1^2).

Case II. (1 − αδ) μ̂ < (x + αΔx(ρ^x))^T (s + αΔs(ρ^s)). In this case,

       (X + αΔX(ρ^x))(s + αΔs(ρ^s)) − (γ/n)(x + αΔx(ρ^x))^T (s + αΔs(ρ^s)) e
         = Xs + α r(ρ) + α^2 ΔX(ρ^x)Δs(ρ^s) − (γ/n)(x^T s + α (x^T Δs(ρ^s) + s^T Δx(ρ^x)) + α^2 Δx(ρ^x)^T Δs(ρ^s)) e
         = (1 − αδ)(Xs − γμe) + α (r(ρ) + δ Xs) − α (γ/n)((r(ρ) + δ Xs)^T e) e
             + α^2 (ΔX(ρ^x)Δs(ρ^s) − (γ/n)(Δx(ρ^x)^T Δs(ρ^s)) e)
         ≥ αδβ_1 μ̂ e − αδ (γ/n) β_2 (x^T s) e + α^2 ((X^{-1}ΔX(ρ^x))(XS)(S^{-1}Δs(ρ^s)) − (γ/n)((X^{-1}Δx(ρ^x))^T (XS)(S^{-1}Δs(ρ^s))) e)    (from (18) and (19); the first term is ≥ 0)
         ≥ αδ μ̂ (β_1 − γβ_2) e − α^2 (n + γ) δ^2 c_1^2 μ̂ e    (by (10)–(13) and μ ≤ μ̂).

Hence ᾱ has the lower bound 0 < (β_1 − γβ_2)/((n + γ) c_1^2) ≤ (β_1 − γβ_2)/(δ (n + γ) c_1^2) ≤ ᾱ, because β_2 < 1 < β_1/γ and δ < 1.

3.3. Convergence. From conditions (14)–(17),

       ‖Ax^{k+1} − b‖ = ‖p^k − α p^k(ρ^x)‖ ≤ ‖p^k − p^k(ρ^x)‖ + (1 − α) ‖p^k(ρ^x)‖ ≤ (1 − αδ) ‖p^k‖,

and similarly ‖A^T y^{k+1} + s^{k+1} − c‖ ≤ (1 − αδ) ‖q^k‖. Hence, μ̄_k ≤ (1 − αδ) μ̄_{k−1}. Because ᾱ ≥ α̂ := (β_1 − γβ_2)/((n + γ) c_1^2) and δ ≥ δ̂ := 1/(4c_1),

μ̄_k is bounded above by

(24)   μ̄_k ≤ (1 − α̂δ̂)^k μ̄_0.

Condition (20) of the sublp guarantees that c^T x^{k+1} − b^T y^{k+1} ≤ (1 − α̂δ̂) μ̂_k. From Ax = b − p, A^T y + s = c − q, we have

(25)   x^{k+1,T} s^{k+1} = c^T x^{k+1} − b^T y^{k+1} − q^{k+1,T} x^{k+1} + p^{k+1,T} y^{k+1}.

Note that

       ‖y^{k+1}‖ = ‖(AA^T)^{-1} A (c − q^{k+1} − s^{k+1})‖ ≤ ‖(AA^T)^{-1} A‖ (‖c‖ + μ̄_{k+1}/γ_d + ω) ≤ c_3 ω.

Inserting these estimates into (25) yields

       μ̂_{k+1} ≤ μ̂_k (1 − α̂δ̂) + ω (1 − α̂δ̂)^{k+1} μ̄_0/γ_d + c_3 ω (1 − α̂δ̂)^{k+1} μ̄_0/γ_p ≤ μ̂_k (1 − α̂δ̂) + c_4 (1 − α̂δ̂)^{k+1},

where c_4 := 2 c_3 ω μ̄_0 / min{γ_p, γ_d}. By Proposition 1 in [12] we have constants c_M < 1 and M > 0 such that μ̂_k ≤ M (c_M)^k.
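The last step, delegated above to [12, Proposition 1], can be sketched in a few lines; writing η := 1 − α̂δ̂ and dividing the recursion by η^{k+1}:

```latex
\frac{\hat\mu_{k+1}}{\eta^{k+1}} \le \frac{\hat\mu_k}{\eta^k} + c_4
\;\Longrightarrow\;
\hat\mu_k \le \eta^k\left(\hat\mu_0 + k\,c_4\right)
\le M\,c_M^{\,k}
\quad\text{for any } c_M\in(\eta,1),\;
M := \hat\mu_0 + c_4\,\sup_{k\ge 0} k\,(\eta/c_M)^k,
```

since the factor k(η/c_M)^k is bounded whenever η < c_M < 1.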

4. Generating search directions from Krylov subspace. In this section we present a new method for generating corrector directions. The basic idea is to blend information from an iterative method with information generated from a direct method. The analysis of the previous section was motivated by the desire to develop a stable implementation based on this approach for generating corrector directions.

The standard approach for solving the linear system (7) in interior-point algorithms is to use direct methods based on a factorization of some matrix. Usually, this is done by first reducing the linear system resulting from the KKT optimality conditions to a symmetric positive definite linear system that is then solved by sparse Cholesky factorization [3, 7]. An alternative is to solve the symmetric indefinite 2×2-block system by means of a sparse adaption of the Bunch-Parlett decomposition [2]. In both cases, a substantial amount of fill-in may occur. The fill-in during factorization in direct methods is one of the motivations for using iterative methods. When working with the normal equations, we may use a preconditioned CG method [13]. However, if the augmented or full system is used, Krylov subspace algorithms are useful [5]. Many iterative Krylov subspace algorithms have been developed to solve large sparse linear systems of equations, for instance, the Lanczos method, GMRES, and QMR [8, 9]. The power basis used in the definition of Krylov subspaces is, in general, ill-conditioned and requires preconditioning. When using an iterative method, linear systems are solved to a low relative accuracy.

The approach we develop in this section is a hybrid of direct and indirect methods for corrector directions. First, a tentative future iterate is generated from existing predictor and corrector directions. A Krylov subspace is generated at this tentative iterate. The Krylov subspace corrector directions are appropriately translated and combined with the predictor and possibly other corrector directions. This process can be repeated if beneficial.

To illustrate the idea, let H_k be the coefficient matrix in (7) at the kth iteration, z be the solution vector, and b be the right-hand side. That is,

(26)   H_k = [ X^k  S^k  0   ]        z = [ Δs ]        b = [ r̄ ]
             [ I    0    A^T ],           [ Δx ],           [ q ].
             [ 0    A    0   ]            [ Δy ]            [ p ]

Then the linear system (7) is

(27)   H_k z = b.

This system is solved by a direct method, e.g., Cholesky factorization of the normal equations. Let (x^+, y^+, s^+) be a new iterate which is tentatively generated by using some search direction:

       x^+ = x^k + α^+_x d_x,    y^+ = y^k + α^+_s d_y,    s^+ = s^k + α^+_s d_s,

where α^+_x and α^+_s are appropriate steps and (d_x, d_y, d_s) satisfies A d_x = p, A^T d_y + d_s = q. At the tentative iterate, in a standard interior-point algorithm we should solve

(28)   H^+ z = b^+,    b^+ = (r̄^+, q^+, p^+),

to generate subsequent iterates. In the proposed approach we use this system to generate corrector directions via a Krylov subspace. We do this by preconditioning H^+ with H_k^{-1}.

Definition 1. The Krylov subspace of the linear equation (28) with preconditioner H_k is defined to be

(29)   K_j(H_k, H^+, b^+) := span{b_1, Θb_1, Θ^2 b_1, ..., Θ^j b_1},

where b_1 = H_k^{-1} b^+ and Θ = I − H_k^{-1} H^+.

Let b_{i+1} = Θ^i b_1 for i = 1, 2, .... Then these vectors of the Krylov subspace have a useful property described in the following proposition.

Proposition 1. The vectors b_1, b_2, ... satisfy

       [ I  0  A^T ] b_1 = [ q^+ ]        and        [ I  0  A^T ] b_i = 0 for i = 2, ....
       [ 0  A  0   ]       [ p^+ ]                   [ 0  A  0   ]

Proof. The first equality is from the definition of H_k. The second equality follows from observing that H_k b_i = (H_k − H^+) Θ^{i−2} b_1 and noting that H_k and H^+ are the same in the rows under consideration.

Remark 1. We can generate at most n linearly independent Krylov subspace vectors. This is true because at most n linearly independent vectors are possible that would satisfy the equations in Proposition 1.

Remark 2. In practice, we want to generate only a few Krylov subspace vectors, so checking linear independence is not an issue.

Remark 3. A standard implementation of an interior-point method using iterative methods would use the Krylov subspace vectors to generate an approximate direction at (x^+, y^+, s^+) and move to the next iteration. This approach in our experience results in a less stable implementation with generally inferior performance.
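For illustration, a dense Python sketch (not the paper's implementation, which works with the sparse Cholesky factors of the normal equations) that generates the vectors b_1, b_2, ... of Definition 1 from a single factorization of H_k:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def krylov_correctors(Hk, Hplus, b_plus, j):
    """Vectors b_1, ..., b_{j+1} spanning K_j(H_k, H^+, b^+) in (29):
    b_1 = H_k^{-1} b^+ and b_{i+1} = (I - H_k^{-1} H^+) b_i."""
    lu = lu_factor(Hk)                    # factor H_k once; reuse for every vector
    b = lu_solve(lu, b_plus)              # b_1
    vectors = [b]
    for _ in range(j):
        b = b - lu_solve(lu, Hplus @ b)   # apply Theta = I - H_k^{-1} H^+
        vectors.append(b)
    return vectors
```

By Proposition 1, every vector after the first satisfies the linear (feasibility) blocks of (28) homogeneously, so combining these vectors with the predictor leaves infeasibility only in the complementarity equations, which is exactly the property motivating the hybrid design.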

5. Implementation and numerical experiments. In this section we first discuss modifications of the search direction conditions given in sublp. We follow this with a discussion of practical implementation strategies that use these conditions. Computational results based on an implementation of Krylov subspace corrector directions are discussed subsequently.

The discussion in this section assumes that Δx^0 is the Mehrotra corrector (obtained by solving (7) with the right-hand side (μe − Xs − ΔX^aff Δs^aff, 0, 0)), Δx^i = b_i, where b_i, i = 1, ..., are the Krylov vectors of the previous section, and (d_x, d_y, d_s) := (Δx^aff, Δy^aff, Δs^aff) + (Δx^0, Δy^0, Δs^0) in the primal and dual spaces. The steps α^+_x, α^+_s are computed using the Mehrotra step length heuristic as implemented in PCx [3]. Under these assumptions conditions (14)–(17) in the sublp simplify to

(30)   δ ≤ ρ^x_aff + (1 − α^+_x) ρ^x_1 ≤ 1 and δ ≤ ρ^s_aff + (1 − α^+_s) ρ^s_1 ≤ 1.

Modification to sublp. In our implementation we replace (30) with the following conditions:

       0.1 δ ≤ ρ^x_aff + (1 − α^+_x) ρ^x_1 ≤ 1 and 0.1 δ ≤ ρ^s_aff + (1 − α^+_s) ρ^s_1 ≤ 1,

and

       λ [ρ^x_aff + ρ^x_1 (1 − α^+_x)] + (1 − λ) [ρ^s_aff + ρ^s_1 (1 − α^+_s)] ≥ δ,

where the parameter λ is chosen in [0, 1]. The weight factor λ balances the reduction of the primal and dual infeasibility. The definition of λ follows Jarre and Wechs [12]:

       λ := γ_p ‖p‖ / (γ_p ‖p‖ + γ_d ‖q‖),

where γ_p and γ_d are some parameters. This modification gave better computational performance for several problems. This is the only modification we made to sublp in our implementation.

Implementation strategy. Each major iteration starts with a fresh matrix factorization used to generate the Mehrotra corrector and the Krylov directions as described in section 4. The computational results discussed below are based on a prespecified number of Krylov vectors. In order to compute the combined direction we first solve sublp heuristically without including (computing) (Δx^c, Δy^c, Δs^c) and ignoring conditions (12), (13), and (19). If the optimal objective value of this sublp is larger than a threshold (0.1), we scale the combined direction by 1/(ρ_aff + (1 − α^+_x) ρ_1) and 1/(ρ_aff + (1 − α^+_s) ρ_1) (ensuring feasibility of the linear equality constraints for a full step), and compute a step using the Mehrotra step length heuristic [18]. If the new point generated in this way satisfies the centrality neighborhood conditions (21)–(23) and it satisfies μ̄_{k+1} ≤ (1/2) μ̄_k, then we accept it and go to the next iteration. Otherwise (if t is less than the threshold, or the new iterate computed above fails to satisfy the neighborhood conditions, or μ̄_{k+1} > (1/2) μ̄_k), the direction (Δx^c, Δy^c, Δs^c) and conditions (12), (13), and (19) are added to the sublp and a new combined direction is computed. We use the Mehrotra step length heuristic for the combined direction. If it is successful then we update the iterates. Otherwise, we compute an equal step for the primal and dual combined directions as suggested in Step 3 of Algorithm 1, according to the theory. The implementation strategy is explained in Figure 1.
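For concreteness, hedged sketches of the two small computations introduced above (the right-hand side of the Mehrotra corrector and the weight λ); the function names are ours:

```python
import numpy as np

def mehrotra_corrector_rhs(x, s, dx_aff, ds_aff, mu):
    """First block of the right-hand side (mu*e - Xs - DX_aff ds_aff, 0, 0)
    used to compute the Mehrotra corrector from the already factored system (7)."""
    return mu - x * s - dx_aff * ds_aff

def infeasibility_weight(p, q, gamma_p=100.0, gamma_d=100.0):
    """Weight lambda = gamma_p ||p|| / (gamma_p ||p|| + gamma_d ||q||) balancing
    primal versus dual infeasibility reduction, following the choice quoted from [12].
    The zero-residual fallback value is ours."""
    wp, wd = gamma_p * np.linalg.norm(p), gamma_d * np.linalg.norm(q)
    return wp / (wp + wd) if wp + wd > 0.0 else 0.5
```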

Fig. 1. Flow chart of implementation of Algorithm 1.

We now describe the reasons for the above implementation strategy. The analysis of Algorithm 1 in section 3 assumes that an equal step length is computed in both primal and dual spaces. It is well known that the practical performance of these methods is improved significantly by using different steps in the primal and dual spaces. With the above implementation strategy we ensure the theoretical convergence properties of the algorithm while allowing different steps when possible. Also note that trial points generated using the Mehrotra step length heuristic are typically not at the boundary of the central path neighborhood. This generally works better than an implementation which always forces iterates to be at the boundary of the central path neighborhood. Conditions (12) and (13) are unlikely to be active for most of the problems because of the large value of c_1 defined in (34). Condition (19) is used in the worst case analysis, and sometimes rules out good search directions. Consequently, always maintaining feasibility of constraint (19) while solving sublp generally gives an inferior performance. There are a few exceptions to this, which are mentioned during the discussion on computational results below. Also, the direction (Δx^c, Δy^c, Δs^c) can be ignored in many problems because the Mehrotra corrector contains some centering information.

Solutions of sublp. For a few Krylov directions the sublp has a small number of variables, but a much larger number of constraints. We used a combination of primal and dual simplex methods to solve the sublp. An in-house research implementation of the primal and dual simplex method (with full pricing) was used. This was done as follows. We first identified a good candidate set of constraints from sublp and used these to define our first small LP. Conditions (14)–(18) and (20) are always in sublp, and a subset of (10), (11) is used to build the sublp. A heuristic to guess the set of constraints most likely to be active is used to build the initial sublp.

Subsequent sublps always include (14)–(18) and a subset of (10), (11). A subset of conditions (12), (13), and (19) is also included in the sublps if they are needed. Since 0 is a feasible solution for sublp, after adding slacks to this problem we immediately have a feasible basis. Using this as an initial basis we solved the small LP to optimality. The optimal solution provides a feasible basis for the dual of sublp (we call it primal). Using this as a starting primal feasible basis, the primal simplex method was used. A small linear program was constructed by including at most n most violated inequalities of sublp corresponding to the primal feasible basis, where n is the number of variables of the original problem we are trying to solve. The small linear program was solved to optimality (tolerance 10^{-8}), and the process was repeated until sublp had no violated (tolerance 10^{-2}) inequality. Interestingly, no significant difference in the solution times and the number of iterations was observed for results with the smaller optimality tolerance (10^{-8}).

Computational results. We implemented our algorithm in PCx release 1.1. The numerical results for the Netlib problems are reported in the attached tables. PCx represents the PCx version with only the predictor and Mehrotra's corrector. PCxH represents the PCx version with the predictor and Mehrotra's corrector, adaptively adding Gondzio's higher order corrections according to the ratio of the cost of a back solve to the cost of a factorization. No sublp is solved in this implementation. We denote the modified algorithm from this paper by PCx0–PCx4 for zero, one, two, three, and four Krylov subspace vectors. PCx0 uses the same (predictor and corrector) directions as PCx, except that it solves a sublp to generate the combined direction. PCx2F and PCx4F denote the results with two and four Krylov vectors and full sets of constraints and the centering direction in all the sublps, respectively. All versions are run on scaled problems. The numerical experiments were performed on a Sun Sparc Ultra-5 10 with 256MB RAM.

The parameters used were γ = 10^{-4}, γ_p = γ_d = 100, β_1 = 1.5γ, β_2 = 0.5, and c_1 = O(n). These parameters have generally worked well. Note that γ controls the width of the central path neighborhood. Larger values of γ (we tried 10^{-2} or 10^{-3}) generally result in slightly worse performance; for values of γ = 10^{-2} and 10^{-3} the implementation had additional numerical breakdowns for problems dfl001 and greenbea, respectively. If at an iteration we satisfied the centrality constraint (21) exactly, then we reduced γ to 0.99γ. This has helped occasionally with numerical stability and better performance.

The following termination criteria were used for all runs. Optimal termination occurs when the current iterate satisfies the following test:

       ‖p‖/(1 + ‖b‖) ≤ prifeastol,    ‖q‖/(1 + ‖c‖) ≤ dualfeastol,    μ/(1 + |c^T x|) ≤ opttol,

where prifeastol, dualfeastol, and opttol are three tolerances whose default values are 10^{-8}, 10^{-8}, and 10^{-10}, respectively. If ‖x‖ or ‖s‖ grows beyond a large multiple of its initial value, where (x^0, s^0) is the initial point, then we declare that unbounded variables are found. These stopping criteria are subjective; however, they have worked well in our case.
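A direct Python transcription of this optimality test (the function name is ours; the tolerances are the defaults quoted above):

```python
import numpy as np

def optimal(A, b, c, x, y, s, mu,
            prifeastol=1e-8, dualfeastol=1e-8, opttol=1e-10):
    """Optimal-termination test: relative primal/dual residuals and gap."""
    p_ok = np.linalg.norm(b - A @ x) / (1.0 + np.linalg.norm(b)) <= prifeastol
    d_ok = np.linalg.norm(c - A.T @ y - s) / (1.0 + np.linalg.norm(c)) <= dualfeastol
    g_ok = mu / (1.0 + abs(c @ x)) <= opttol
    return p_ok and d_ok and g_ok
```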

The results in Tables 1 and 2 give the number of iterations (matrix factorizations) needed to achieve the indicated accuracy. The column "density" is the density of nonzero entries in the matrix A(S^{-1}X)A^T, on which the Cholesky factorization is performed. The column "dg" is defined to be log_10(|c^T x − b^T y|/(1 + |c^T x|)), which measures the accuracy of the difference between the primal and dual objective values upon termination. The column "It." is the number of iterations. The column "#sLP" reports the average number of times small sublps were constructed and solved. The column "#C" reports the number of times a full sublp was solved as a result of either t < 0.1 or failure of the Mehrotra step length heuristic to produce an iterate in the central path neighborhood. Several situations where PCx reports Unknown status are now traced to either numerical difficulties or situations where the solutions are getting large, suggesting that the problem is either infeasible or has an unbounded optimal face. These are indicated in Tables 1 and 2.

We can draw several conclusions from the results in Tables 1 and 2. First, there is no significant difference in the performance of PCx and PCx0. This suggests that by using the suggested strategy we can implement primal-dual methods while ensuring global convergence. For most problems, PCx0 took fewer (by one or two) iterations; however, it took significantly more iterations for problem pilot4. For this problem the performance improved when a full set of constraints was used in sublp in all the iterations (see results in Table 3). Unfortunately, adding a full set of constraints makes the performance worse for other problems. This can be seen from the results for PCx2F and PCx4F in Tables 1 and 2. We conclude that adding one Krylov direction saves about 20%, two Krylov directions save about 25%, three Krylov directions save about 30%, and four Krylov directions save about 37% of the number of matrix factorizations. This decrease in factorizations is in addition to the decrease already obtained from using the Mehrotra corrector. We experimented with adding more Krylov directions, and our computational results suggest that the incremental decrease in the number of iterations as a result of additional Krylov directions is not significant.

Since for a large number of Netlib test problems the cost of a Cholesky factorization is not much more than the cost of a forward and back solve, improvement in CPU time is not expected on these problems. However, for problems where the time to compute the Cholesky factorization dominates the total time, improvements in CPU time are expected. Computation times for these problems are reported in Table 4. The best improvement is for problem pds-20, where the CPU time reduces by approximately 30%.

The results on infeasible Netlib problems are given in Table 5. Interestingly, in this case no significant reduction in the number of matrix factorizations is observed. We think that this is likely because H_k^{-1} may no longer be a good preconditioner for H^+. For problems ceria3d and cplex2, we encountered numerical problems before detecting infeasibility or unbounded variables.

Finally, we comment on the time it takes to find the composite direction from the solution of sublp (with partial or full sets of constraints). In our current implementation, for problems where the Cholesky factorization is inexpensive, this time may be as much as 20% to 30% of the total CPU time on average.
However, for problems where the Cholesky factorization is expensive (Table 4), this time is typically less than 8% of the total CPU time on average. While finding the composite direction, the calculations are typically dominated by the effort in finding violated inequalities, which requires a dense matrix-vector product. This work grows as O(nks), where k is the number of Krylov directions and s is the number of times we check for violated inequalities before finding the optimal solution of sublp. The computational results suggest that s is typically between 5 and 7 on average.

Table 1
Numerical experiments with Netlib problems. Columns: Problems, Rows, Cols, Density (%); It. and dg for PCx and PCxH; (It., #sLP, #C) and dg for PCx0 through PCx4; It. and dg for PCx2F and PCx4F.
[Per-problem rows, from 25fv through pilotnov, are not reproduced: the numeric entries did not survive transcription.]
* = Terminated with unknown status. ** = Numerical instability detected. Additional marks in the original flag detected infeasibility, unbounded variables, and infeasibility detected in preprocessing.

Table 2
Numerical experiments with Netlib problems (continued). Columns as in Table 1.
[Per-problem rows, from pilotwe through pds-20, are not reproduced: the numeric entries did not survive transcription. The recoverable summary rows, each triple being (It., #sLP, #C), are:]

            PCx0                PCx1                PCx2               PCx3                PCx4
Total       (2194, 518.1, 829)  (1804, 549.3, 456)  (1752, 618, 425)   (1587, 634.4, 318)  (1438, 685.8, 250)
Average     (21.7, 5.13, 8.2)   (17.9, 5.44, 4.5)   (17.3, 5.6, 4.21)  (15.7, 6.28, 3.15)  (14.2, 6.79, 2.48)

* = Terminated with unknown status. ** = Numerical instability detected. Additional marks in the original flag detected infeasibility, unbounded variables, and infeasibility detected in preprocessing.

Table 3
Netlib problems which need strong centering components. Columns: It. and dg for PCx, PCxH, PCx2, PCx0F, PCx2F, PCx4, PCx4F; the listed problem is pilot4. [The numeric entries did not survive transcription.]

Table 4
Netlib problems showing computational savings. Columns: Rows, Cols, Density (%); It. and Sec. for PCx, PCxH, and PCx4; the listed problems include bnl, cycle, d2q06c, d6cube, dfl, israel, maros-r, pilot (two variants), seba, and three pds problems. [The numeric entries did not survive transcription.]

6. Conclusion. In this paper, we gave conditions for achieving linear convergence for a generic primal-dual interior-point method with corrections. A small linear program is formulated to generate the search direction. We also proposed an approach for generating corrector directions by generating a Krylov subspace. Numerical results indicate that this method is promising. We described an implementation strategy balancing the need for efficiency with theoretical convergence.

In our proposed method, it is also possible to include directions from previous iterations (Δx_i, Δy_i, Δs_i) with i = 1, ..., k − 1 in the sublp. In this case, the search direction can be computed in the following way:

       Δx(θ, ρ) = θ_1 Δx_1 + ··· + θ_{k−1} Δx_{k−1} + Δx(ρ),
       Δy(θ, ρ) = θ_1 Δy_1 + ··· + θ_{k−1} Δy_{k−1} + Δy(ρ),
       Δs(θ, ρ) = θ_1 Δs_1 + ··· + θ_{k−1} Δs_{k−1} + Δs(ρ),

where (Δx(ρ), Δy(ρ), Δs(ρ)) is given in (9). Also, one can use successive tentative solutions and generate more Krylov vectors at these tentative iterates. The practical value of both of these ideas is a topic of further research.

Appendix (Proof of Lemma 3.1). Define (Δx, Δy, Δs) := c_1 (Δx(ρ̄), Δy(ρ̄), Δs(ρ̄)). Then Lemma 3.1 is equivalent to showing ‖X^{-1}Δx‖_∞, ‖S^{-1}Δs‖_∞ ≤ c_1. Note that (Δx, Δy, Δs) is a solution of (7) for p = b − Ax, q = c − A^T y − s, r̄ = β_1 μ̂ e − Xs. Let D := X^{1/2} S^{-1/2} and Π_R := DA^T (AD^2 A^T)^{-1} AD.


More information

Operations Research Lecture 4: Linear Programming Interior Point Method

Operations Research Lecture 4: Linear Programming Interior Point Method Operations Research Lecture 4: Linear Programg Interior Point Method Notes taen by Kaiquan Xu@Business School, Nanjing University April 14th 2016 1 The affine scaling algorithm one of the most efficient

More information

Interior Point Methods for LP

Interior Point Methods for LP 11.1 Interior Point Methods for LP Katta G. Murty, IOE 510, LP, U. Of Michigan, Ann Arbor, Winter 1997. Simplex Method - A Boundary Method: Starting at an extreme point of the feasible set, the simplex

More information

A strongly polynomial algorithm for linear systems having a binary solution

A strongly polynomial algorithm for linear systems having a binary solution A strongly polynomial algorithm for linear systems having a binary solution Sergei Chubanov Institute of Information Systems at the University of Siegen, Germany e-mail: sergei.chubanov@uni-siegen.de 7th

More information

Chapter 8 Cholesky-based Methods for Sparse Least Squares: The Benefits of Regularization

Chapter 8 Cholesky-based Methods for Sparse Least Squares: The Benefits of Regularization In L. Adams and J. L. Nazareth eds., Linear and Nonlinear Conjugate Gradient-Related Methods, SIAM, Philadelphia, 92 100 1996. Chapter 8 Cholesky-based Methods for Sparse Least Squares: The Benefits of

More information

Introduction to Mathematical Programming IE406. Lecture 10. Dr. Ted Ralphs

Introduction to Mathematical Programming IE406. Lecture 10. Dr. Ted Ralphs Introduction to Mathematical Programming IE406 Lecture 10 Dr. Ted Ralphs IE406 Lecture 10 1 Reading for This Lecture Bertsimas 4.1-4.3 IE406 Lecture 10 2 Duality Theory: Motivation Consider the following

More information

1 Computing with constraints

1 Computing with constraints Notes for 2017-04-26 1 Computing with constraints Recall that our basic problem is minimize φ(x) s.t. x Ω where the feasible set Ω is defined by equality and inequality conditions Ω = {x R n : c i (x)

More information

Part 4: Active-set methods for linearly constrained optimization. Nick Gould (RAL)

Part 4: Active-set methods for linearly constrained optimization. Nick Gould (RAL) Part 4: Active-set methods for linearly constrained optimization Nick Gould RAL fx subject to Ax b Part C course on continuoue optimization LINEARLY CONSTRAINED MINIMIZATION fx subject to Ax { } b where

More information

Chapter 14 Linear Programming: Interior-Point Methods

Chapter 14 Linear Programming: Interior-Point Methods Chapter 14 Linear Programming: Interior-Point Methods In the 1980s it was discovered that many large linear programs could be solved efficiently by formulating them as nonlinear problems and solving them

More information

Following The Central Trajectory Using The Monomial Method Rather Than Newton's Method

Following The Central Trajectory Using The Monomial Method Rather Than Newton's Method Following The Central Trajectory Using The Monomial Method Rather Than Newton's Method Yi-Chih Hsieh and Dennis L. Bricer Department of Industrial Engineering The University of Iowa Iowa City, IA 52242

More information

A SPECIALIZED INTERIOR-POINT ALGORITHM FOR MULTICOMMODITY NETWORK FLOWS

A SPECIALIZED INTERIOR-POINT ALGORITHM FOR MULTICOMMODITY NETWORK FLOWS SIAM J OPTIM Vol 10, No 3, pp 852 877 c 2000 Society for Industrial and Applied Mathematics A SPECIALIZED INTERIOR-POINT ALGORITHM FOR MULTICOMMODITY NETWORK FLOWS JORDI CASTRO Abstract Despite the efficiency

More information

A PREDICTOR-CORRECTOR PATH-FOLLOWING ALGORITHM FOR SYMMETRIC OPTIMIZATION BASED ON DARVAY'S TECHNIQUE

A PREDICTOR-CORRECTOR PATH-FOLLOWING ALGORITHM FOR SYMMETRIC OPTIMIZATION BASED ON DARVAY'S TECHNIQUE Yugoslav Journal of Operations Research 24 (2014) Number 1, 35-51 DOI: 10.2298/YJOR120904016K A PREDICTOR-CORRECTOR PATH-FOLLOWING ALGORITHM FOR SYMMETRIC OPTIMIZATION BASED ON DARVAY'S TECHNIQUE BEHROUZ

More information

Linear Programming: Simplex

Linear Programming: Simplex Linear Programming: Simplex Stephen J. Wright 1 2 Computer Sciences Department, University of Wisconsin-Madison. IMA, August 2016 Stephen Wright (UW-Madison) Linear Programming: Simplex IMA, August 2016

More information

Linear algebra issues in Interior Point methods for bound-constrained least-squares problems

Linear algebra issues in Interior Point methods for bound-constrained least-squares problems Linear algebra issues in Interior Point methods for bound-constrained least-squares problems Stefania Bellavia Dipartimento di Energetica S. Stecco Università degli Studi di Firenze Joint work with Jacek

More information

Infeasible Interior-Point Methods for Linear Optimization Based on Large Neighborhood

Infeasible Interior-Point Methods for Linear Optimization Based on Large Neighborhood J Optim Theory Appl 2016 170:562 590 DOI 10.1007/s10957-015-0826-5 Infeasible Interior-Point Methods for Linear Optimization Based on Large Neighborhood Alireza Asadi 1 Cornelis Roos 1 Published online:

More information

Interior-Point Methods

Interior-Point Methods Interior-Point Methods Stephen Wright University of Wisconsin-Madison Simons, Berkeley, August, 2017 Wright (UW-Madison) Interior-Point Methods August 2017 1 / 48 Outline Introduction: Problems and Fundamentals

More information

Numerical optimization. Numerical optimization. Longest Shortest where Maximal Minimal. Fastest. Largest. Optimization problems

Numerical optimization. Numerical optimization. Longest Shortest where Maximal Minimal. Fastest. Largest. Optimization problems 1 Numerical optimization Alexander & Michael Bronstein, 2006-2009 Michael Bronstein, 2010 tosca.cs.technion.ac.il/book Numerical optimization 048921 Advanced topics in vision Processing and Analysis of

More information

Chapter 6 Interior-Point Approach to Linear Programming

Chapter 6 Interior-Point Approach to Linear Programming Chapter 6 Interior-Point Approach to Linear Programming Objectives: Introduce Basic Ideas of Interior-Point Methods. Motivate further research and applications. Slide#1 Linear Programming Problem Minimize

More information

Convergence Analysis of the Inexact Infeasible Interior-Point Method for Linear Optimization

Convergence Analysis of the Inexact Infeasible Interior-Point Method for Linear Optimization J Optim Theory Appl (29) 141: 231 247 DOI 1.17/s1957-8-95-5 Convergence Analysis of the Inexact Infeasible Interior-Point Method for Linear Optimization G. Al-Jeiroudi J. Gondzio Published online: 25 December

More information

An O(nL) Infeasible-Interior-Point Algorithm for Linear Programming arxiv: v2 [math.oc] 29 Jun 2015

An O(nL) Infeasible-Interior-Point Algorithm for Linear Programming arxiv: v2 [math.oc] 29 Jun 2015 An O(nL) Infeasible-Interior-Point Algorithm for Linear Programming arxiv:1506.06365v [math.oc] 9 Jun 015 Yuagang Yang and Makoto Yamashita September 8, 018 Abstract In this paper, we propose an arc-search

More information

A full-newton step infeasible interior-point algorithm for linear programming based on a kernel function

A full-newton step infeasible interior-point algorithm for linear programming based on a kernel function A full-newton step infeasible interior-point algorithm for linear programming based on a kernel function Zhongyi Liu, Wenyu Sun Abstract This paper proposes an infeasible interior-point algorithm with

More information

Computational Finance

Computational Finance Department of Mathematics at University of California, San Diego Computational Finance Optimization Techniques [Lecture 2] Michael Holst January 9, 2017 Contents 1 Optimization Techniques 3 1.1 Examples

More information

A SUFFICIENTLY EXACT INEXACT NEWTON STEP BASED ON REUSING MATRIX INFORMATION

A SUFFICIENTLY EXACT INEXACT NEWTON STEP BASED ON REUSING MATRIX INFORMATION A SUFFICIENTLY EXACT INEXACT NEWTON STEP BASED ON REUSING MATRIX INFORMATION Anders FORSGREN Technical Report TRITA-MAT-2009-OS7 Department of Mathematics Royal Institute of Technology November 2009 Abstract

More information

SVM May 2007 DOE-PI Dianne P. O Leary c 2007

SVM May 2007 DOE-PI Dianne P. O Leary c 2007 SVM May 2007 DOE-PI Dianne P. O Leary c 2007 1 Speeding the Training of Support Vector Machines and Solution of Quadratic Programs Dianne P. O Leary Computer Science Dept. and Institute for Advanced Computer

More information

5 Handling Constraints

5 Handling Constraints 5 Handling Constraints Engineering design optimization problems are very rarely unconstrained. Moreover, the constraints that appear in these problems are typically nonlinear. This motivates our interest

More information

Preconditioned Conjugate Gradients in an Interior Point Method for Two-stage Stochastic Programming

Preconditioned Conjugate Gradients in an Interior Point Method for Two-stage Stochastic Programming Preconditioned Conjugate Gradients in an Interior Point Method for Two-stage Stochastic Programming Jacek Gondzio Systems Research Institute, Polish Academy of Sciences, Newelska 6, 01-447 Warsaw, Poland

More information

Numerical optimization

Numerical optimization Numerical optimization Lecture 4 Alexander & Michael Bronstein tosca.cs.technion.ac.il/book Numerical geometry of non-rigid shapes Stanford University, Winter 2009 2 Longest Slowest Shortest Minimal Maximal

More information

An Infeasible Interior Point Method for the Monotone Linear Complementarity Problem

An Infeasible Interior Point Method for the Monotone Linear Complementarity Problem Int. Journal of Math. Analysis, Vol. 1, 2007, no. 17, 841-849 An Infeasible Interior Point Method for the Monotone Linear Complementarity Problem Z. Kebbiche 1 and A. Keraghel Department of Mathematics,

More information

The Q Method for Second-Order Cone Programming

The Q Method for Second-Order Cone Programming The Q Method for Second-Order Cone Programming Yu Xia Farid Alizadeh July 5, 005 Key words. Second-order cone programming, infeasible interior point method, the Q method Abstract We develop the Q method

More information

MS&E 318 (CME 338) Large-Scale Numerical Optimization

MS&E 318 (CME 338) Large-Scale Numerical Optimization Stanford University, Management Science & Engineering (and ICME) MS&E 318 (CME 338) Large-Scale Numerical Optimization 1 Origins Instructor: Michael Saunders Spring 2015 Notes 9: Augmented Lagrangian Methods

More information

Numerical Methods for Large-Scale Nonlinear Systems

Numerical Methods for Large-Scale Nonlinear Systems Numerical Methods for Large-Scale Nonlinear Systems Handouts by Ronald H.W. Hoppe following the monograph P. Deuflhard Newton Methods for Nonlinear Problems Springer, Berlin-Heidelberg-New York, 2004 Num.

More information

An Infeasible Interior-Point Algorithm with full-newton Step for Linear Optimization

An Infeasible Interior-Point Algorithm with full-newton Step for Linear Optimization An Infeasible Interior-Point Algorithm with full-newton Step for Linear Optimization H. Mansouri M. Zangiabadi Y. Bai C. Roos Department of Mathematical Science, Shahrekord University, P.O. Box 115, Shahrekord,

More information

Large-scale Linear and Nonlinear Optimization in Quad Precision

Large-scale Linear and Nonlinear Optimization in Quad Precision Large-scale Linear and Nonlinear Optimization in Quad Precision Ding Ma and Michael Saunders MS&E and ICME, Stanford University US-Mexico Workshop on Optimization and its Applications Mérida, Yucatán,

More information

Lecture: Algorithms for LP, SOCP and SDP

Lecture: Algorithms for LP, SOCP and SDP 1/53 Lecture: Algorithms for LP, SOCP and SDP Zaiwen Wen Beijing International Center For Mathematical Research Peking University http://bicmr.pku.edu.cn/~wenzw/bigdata2018.html wenzw@pku.edu.cn Acknowledgement:

More information

MODIFICATION OF SIMPLEX METHOD AND ITS IMPLEMENTATION IN VISUAL BASIC. Nebojša V. Stojković, Predrag S. Stanimirović and Marko D.

MODIFICATION OF SIMPLEX METHOD AND ITS IMPLEMENTATION IN VISUAL BASIC. Nebojša V. Stojković, Predrag S. Stanimirović and Marko D. MODIFICATION OF SIMPLEX METHOD AND ITS IMPLEMENTATION IN VISUAL BASIC Nebojša V Stojković, Predrag S Stanimirović and Marko D Petković Abstract We investigate the problem of finding the first basic solution

More information

On Superlinear Convergence of Infeasible Interior-Point Algorithms for Linearly Constrained Convex Programs *

On Superlinear Convergence of Infeasible Interior-Point Algorithms for Linearly Constrained Convex Programs * Computational Optimization and Applications, 8, 245 262 (1997) c 1997 Kluwer Academic Publishers. Manufactured in The Netherlands. On Superlinear Convergence of Infeasible Interior-Point Algorithms for

More information

Research Note. A New Infeasible Interior-Point Algorithm with Full Nesterov-Todd Step for Semi-Definite Optimization

Research Note. A New Infeasible Interior-Point Algorithm with Full Nesterov-Todd Step for Semi-Definite Optimization Iranian Journal of Operations Research Vol. 4, No. 1, 2013, pp. 88-107 Research Note A New Infeasible Interior-Point Algorithm with Full Nesterov-Todd Step for Semi-Definite Optimization B. Kheirfam We

More information

IMPLEMENTING THE NEW SELF-REGULAR PROXIMITY BASED IPMS

IMPLEMENTING THE NEW SELF-REGULAR PROXIMITY BASED IPMS IMPLEMENTING THE NEW SELF-REGULAR PROXIMITY BASED IPMS IMPLEMENTING THE NEW SELF-REGULAR PROXIMITY BASED IPMS By Xiaohang Zhu A thesis submitted to the School of Graduate Studies in Partial Fulfillment

More information

Yinyu Ye, MS&E, Stanford MS&E310 Lecture Note #06. The Simplex Method

Yinyu Ye, MS&E, Stanford MS&E310 Lecture Note #06. The Simplex Method The Simplex Method Yinyu Ye Department of Management Science and Engineering Stanford University Stanford, CA 94305, U.S.A. http://www.stanford.edu/ yyye (LY, Chapters 2.3-2.5, 3.1-3.4) 1 Geometry of Linear

More information

Uniform Boundedness of a Preconditioned Normal Matrix Used in Interior Point Methods

Uniform Boundedness of a Preconditioned Normal Matrix Used in Interior Point Methods Uniform Boundedness of a Preconditioned Normal Matrix Used in Interior Point Methods Renato D. C. Monteiro Jerome W. O Neal Takashi Tsuchiya March 31, 2003 (Revised: December 3, 2003) Abstract Solving

More information

LINEAR AND NONLINEAR PROGRAMMING

LINEAR AND NONLINEAR PROGRAMMING LINEAR AND NONLINEAR PROGRAMMING Stephen G. Nash and Ariela Sofer George Mason University The McGraw-Hill Companies, Inc. New York St. Louis San Francisco Auckland Bogota Caracas Lisbon London Madrid Mexico

More information

Nonsymmetric potential-reduction methods for general cones

Nonsymmetric potential-reduction methods for general cones CORE DISCUSSION PAPER 2006/34 Nonsymmetric potential-reduction methods for general cones Yu. Nesterov March 28, 2006 Abstract In this paper we propose two new nonsymmetric primal-dual potential-reduction

More information

2.3 Linear Programming

2.3 Linear Programming 2.3 Linear Programming Linear Programming (LP) is the term used to define a wide range of optimization problems in which the objective function is linear in the unknown variables and the constraints are

More information

CS711008Z Algorithm Design and Analysis

CS711008Z Algorithm Design and Analysis CS711008Z Algorithm Design and Analysis Lecture 8 Linear programming: interior point method Dongbo Bu Institute of Computing Technology Chinese Academy of Sciences, Beijing, China 1 / 31 Outline Brief

More information

12. Interior-point methods

12. Interior-point methods 12. Interior-point methods Convex Optimization Boyd & Vandenberghe inequality constrained minimization logarithmic barrier function and central path barrier method feasibility and phase I methods complexity

More information

Improving Performance of The Interior Point Method by Preconditioning

Improving Performance of The Interior Point Method by Preconditioning Improving Performance of The Interior Point Method by Preconditioning Mid-Point Status Report Project by: Ken Ryals For: AMSC 663-664 Fall 27-Spring 28 6 December 27 Background / Refresher The IPM method

More information

Lecture 10: Linear programming duality and sensitivity 0-0

Lecture 10: Linear programming duality and sensitivity 0-0 Lecture 10: Linear programming duality and sensitivity 0-0 The canonical primal dual pair 1 A R m n, b R m, and c R n maximize z = c T x (1) subject to Ax b, x 0 n and minimize w = b T y (2) subject to

More information

2.098/6.255/ Optimization Methods Practice True/False Questions

2.098/6.255/ Optimization Methods Practice True/False Questions 2.098/6.255/15.093 Optimization Methods Practice True/False Questions December 11, 2009 Part I For each one of the statements below, state whether it is true or false. Include a 1-3 line supporting sentence

More information

On a wide region of centers and primal-dual interior. Abstract

On a wide region of centers and primal-dual interior. Abstract 1 On a wide region of centers and primal-dual interior point algorithms for linear programming Jos F. Sturm Shuzhong Zhang y Revised on May 9, 1995 Abstract In the adaptive step primal-dual interior point

More information

The Solution of Euclidean Norm Trust Region SQP Subproblems via Second Order Cone Programs, an Overview and Elementary Introduction

The Solution of Euclidean Norm Trust Region SQP Subproblems via Second Order Cone Programs, an Overview and Elementary Introduction The Solution of Euclidean Norm Trust Region SQP Subproblems via Second Order Cone Programs, an Overview and Elementary Introduction Florian Jarre, Felix Lieder, Mathematisches Institut, Heinrich-Heine

More information

On Generalized Primal-Dual Interior-Point Methods with Non-uniform Complementarity Perturbations for Quadratic Programming

On Generalized Primal-Dual Interior-Point Methods with Non-uniform Complementarity Perturbations for Quadratic Programming On Generalized Primal-Dual Interior-Point Methods with Non-uniform Complementarity Perturbations for Quadratic Programming Altuğ Bitlislioğlu and Colin N. Jones Abstract This technical note discusses convergence

More information

Farkas Lemma, Dual Simplex and Sensitivity Analysis

Farkas Lemma, Dual Simplex and Sensitivity Analysis Summer 2011 Optimization I Lecture 10 Farkas Lemma, Dual Simplex and Sensitivity Analysis 1 Farkas Lemma Theorem 1. Let A R m n, b R m. Then exactly one of the following two alternatives is true: (i) x

More information

Conjugate gradient method. Descent method. Conjugate search direction. Conjugate Gradient Algorithm (294)

Conjugate gradient method. Descent method. Conjugate search direction. Conjugate Gradient Algorithm (294) Conjugate gradient method Descent method Hestenes, Stiefel 1952 For A N N SPD In exact arithmetic, solves in N steps In real arithmetic No guaranteed stopping Often converges in many fewer than N steps

More information

Structural and Multidisciplinary Optimization. P. Duysinx and P. Tossings

Structural and Multidisciplinary Optimization. P. Duysinx and P. Tossings Structural and Multidisciplinary Optimization P. Duysinx and P. Tossings 2018-2019 CONTACTS Pierre Duysinx Institut de Mécanique et du Génie Civil (B52/3) Phone number: 04/366.91.94 Email: P.Duysinx@uliege.be

More information

Topics. The CG Algorithm Algorithmic Options CG s Two Main Convergence Theorems

Topics. The CG Algorithm Algorithmic Options CG s Two Main Convergence Theorems Topics The CG Algorithm Algorithmic Options CG s Two Main Convergence Theorems What about non-spd systems? Methods requiring small history Methods requiring large history Summary of solvers 1 / 52 Conjugate

More information

RESEARCH ARTICLE. A strategy of finding an initial active set for inequality constrained quadratic programming problems

RESEARCH ARTICLE. A strategy of finding an initial active set for inequality constrained quadratic programming problems Optimization Methods and Software Vol. 00, No. 00, July 200, 8 RESEARCH ARTICLE A strategy of finding an initial active set for inequality constrained quadratic programming problems Jungho Lee Computer

More information

Newton s Method and Efficient, Robust Variants

Newton s Method and Efficient, Robust Variants Newton s Method and Efficient, Robust Variants Philipp Birken University of Kassel (SFB/TRR 30) Soon: University of Lund October 7th 2013 Efficient solution of large systems of non-linear PDEs in science

More information

Hot-Starting NLP Solvers

Hot-Starting NLP Solvers Hot-Starting NLP Solvers Andreas Wächter Department of Industrial Engineering and Management Sciences Northwestern University waechter@iems.northwestern.edu 204 Mixed Integer Programming Workshop Ohio

More information

Convex Optimization. Newton s method. ENSAE: Optimisation 1/44

Convex Optimization. Newton s method. ENSAE: Optimisation 1/44 Convex Optimization Newton s method ENSAE: Optimisation 1/44 Unconstrained minimization minimize f(x) f convex, twice continuously differentiable (hence dom f open) we assume optimal value p = inf x f(x)

More information

Algorithms for Constrained Optimization

Algorithms for Constrained Optimization 1 / 42 Algorithms for Constrained Optimization ME598/494 Lecture Max Yi Ren Department of Mechanical Engineering, Arizona State University April 19, 2015 2 / 42 Outline 1. Convergence 2. Sequential quadratic

More information

A Second Full-Newton Step O(n) Infeasible Interior-Point Algorithm for Linear Optimization

A Second Full-Newton Step O(n) Infeasible Interior-Point Algorithm for Linear Optimization A Second Full-Newton Step On Infeasible Interior-Point Algorithm for Linear Optimization H. Mansouri C. Roos August 1, 005 July 1, 005 Department of Electrical Engineering, Mathematics and Computer Science,

More information

A Constraint-Reduced MPC Algorithm for Convex Quadratic Programming, with a Modified Active-Set Identification Scheme

A Constraint-Reduced MPC Algorithm for Convex Quadratic Programming, with a Modified Active-Set Identification Scheme A Constraint-Reduced MPC Algorithm for Convex Quadratic Programming, with a Modified Active-Set Identification Scheme M. Paul Laiu 1 and (presenter) André L. Tits 2 1 Oak Ridge National Laboratory laiump@ornl.gov

More information

Linear programming II

Linear programming II Linear programming II Review: LP problem 1/33 The standard form of LP problem is (primal problem): max z = cx s.t. Ax b, x 0 The corresponding dual problem is: min b T y s.t. A T y c T, y 0 Strong Duality

More information

An interior-point gradient method for large-scale totally nonnegative least squares problems

An interior-point gradient method for large-scale totally nonnegative least squares problems An interior-point gradient method for large-scale totally nonnegative least squares problems Michael Merritt and Yin Zhang Technical Report TR04-08 Department of Computational and Applied Mathematics Rice

More information