A WIDE NEIGHBORHOOD PRIMAL-DUAL INTERIOR-POINT ALGORITHM WITH ARC-SEARCH FOR LINEAR COMPLEMENTARITY PROBLEMS
J. Nonlinear Funct. Anal. 2018 (2018), Article ID 3

BEIBEI YUAN, MINGWANG ZHANG, ZHENGWEI HUANG
Three Gorges Mathematical Research Center, China Three Gorges University, Yichang 443002, China
College of Economics and Management, China Three Gorges University, Yichang 443002, China

Abstract. In this paper, ellipsoidal estimations are used to track the central path of the linear complementarity problem (LCP). A wide neighborhood primal-dual interior-point algorithm is devised to search for an ε-approximate solution of the LCP along the ellipse. The algorithm is proved to be polynomial with the complexity bound O(n log((x⁰)ᵀs⁰/ε)), which is as good as its linear programming analogue. The numerical results show that the proposed algorithm is efficient and reliable.

Keywords. Interior-point algorithm; Ellipsoidal approximation; Polynomial complexity; Linear complementarity problem.

2010 Mathematics Subject Classification. 90C33, 90C51.

1. INTRODUCTION

After Karmarkar's landmark paper [1], primal-dual interior-point methods (IPMs) have shown their power in solving linear programming (LP), quadratic programming (QP), linear complementarity problems (LCPs) and other optimization problems [2, 3, 4, 5]. LCPs [6, 7] are a general class of mathematical problems with important applications in mathematical programming and equilibrium problems, so IPMs for LCPs deserve particular attention. Existing primal-dual IPMs usually search for optimizers along a straight line built from the first- and higher-order derivatives associated with the central path [8, 9]. Although the central path plays an important theoretical role in primal-dual IPMs, following it closely is difficult in practice. As a remedy, Yang [10, 11, 12, 13] developed the idea of arc-search for LP, which searches for optimizers along an ellipse that approximates the central path. Yang
[11] showed that arc-search along an ellipse may be a better method than one-dimensional search methods, because the algorithm is proved to be polynomial with a better bound than those of all existing higher-order algorithms. The arc-search technique was then applied to a primal-dual path-following IPM for convex quadratic programming (CQP) by Yang [14]; that algorithm enjoys the currently best known polynomial complexity bound O(√n log(1/ε)). Later, Yang et al. [15, 16] introduced two wide neighborhood infeasible

Corresponding author. E-mail addresses: @163.com (B. Yuan), zmwang@ctgu.edu.cn (M. Zhang), zhengweihuang@ctgu.edu.cn (Z. Huang). Received January 6, 2018; Accepted August, 2018.
IPMs with arc-search for LP and symmetric optimization (SO), with the complexity bounds O(n^{5/4} log(1/ε)) and O(r^{5/4} log(1/ε)), respectively. Pirhaji et al. [17] explored an ℓ₂-neighborhood infeasible IPM for the LCP with the iteration bound O(n log(1/ε)).

Our main purpose here is to generalize the arc-search feasible IPM for LP [11, 18] to the LCP. Compared with [16, 17], the new algorithm places a weaker restriction on the step size and works in the negative infinity neighborhood of the central path, the popular wide neighborhood in which software packages run. Moreover, the algorithm enjoys the iteration complexity O(n log((x⁰)ᵀs⁰/ε)), a bound at least as good as the best bound of existing wide neighborhood IPMs. Numerical tests show that the proposed algorithm tends to solve problems faster than some other IPMs.

The outline of this paper is as follows. In Section 2, we recall some fundamental concepts of the LCP, such as the central path and its ellipsoidal estimation. In Section 3, we introduce a wide neighborhood primal-dual IPM with arc-search for the LCP. In Section 4, we give the technical results required in the convergence analysis and obtain the complexity bound of the algorithm. In Section 5, we provide some preliminary numerical results. Finally, we close the paper with some conclusions in Section 6.

We use the following notations throughout the paper. All vectors are column vectors. Denote the vector of all ones with appropriate dimension by e, the space of n×n matrices by R^{n×n}, and the non-negative orthant of Rⁿ by Rⁿ₊. The Hadamard product of two vectors x ∈ Rⁿ and s ∈ Rⁿ is written xs, the ith component of x is x_i, the Euclidean norm and the 1-norm of x are ‖x‖ and ‖x‖₁, and the transpose of a matrix A is Aᵀ. If x ∈ Rⁿ, then X := diag(x) is the diagonal matrix with the elements of x as diagonal entries. Finally, the initial point of any algorithm is (x⁰, s⁰), and the point after the kth iteration is (xᵏ, sᵏ).

2. THE CENTRAL
PATH AND ITS ELLIPSOIDAL APPROXIMATION

The LCP is to find a pair of vectors (x, s) ∈ R²ⁿ such that

s = Mx + q, xs = 0, x, s ≥ 0, (2.1)

where q ∈ Rⁿ and M is an n×n positive semi-definite matrix, that is, xᵀMx ≥ 0 for all x ∈ Rⁿ. Denote the feasible set of problem (2.1) by

F = {(x, s) ∈ Rⁿ × Rⁿ : s = Mx + q, (x, s) ≥ 0}, (2.2)

and the interior feasible set by

F⁰ = {(x, s) ∈ Rⁿ × Rⁿ : s = Mx + q, (x, s) > 0}. (2.3)

Throughout the paper, we assume that the interior-point condition (IPC) holds [19], i.e., F⁰ is not empty. The basic idea of feasible IPMs is to replace the second equation in (2.1) by the perturbed equation xs = μe, with μ > 0, to get the parameterized system

s = Mx + q, xs = μe, x, s > 0. (2.4)
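As a concrete illustration of the perturbed system above, the following sketch (not part of the paper's algorithm; NumPy assumed, test data made up) computes a single central-path point by a damped Newton iteration on s = Mx + q, xs = μe for a fixed μ > 0:

```python
import numpy as np

def central_path_point(M, q, x, s, mu, iters=50, tol=1e-10):
    """Damped Newton iteration for the perturbed system s = Mx + q, xs = mu*e,
    starting from a strictly feasible point (x, s) with s = Mx + q."""
    for _ in range(iters):
        r = mu - x * s                      # residual of the equation xs = mu*e
        if np.linalg.norm(r) < tol:
            break
        # eliminate ds = M dx (feasibility is kept), then (S + X M) dx = mu*e - xs
        dx = np.linalg.solve(np.diag(s) + np.diag(x) @ M, r)
        ds = M @ dx
        # damp the step so that (x, s) stays strictly positive
        a = 1.0
        while np.any(x + a * dx <= 0) or np.any(s + a * ds <= 0):
            a *= 0.5
        x, s = x + a * dx, s + a * ds
    return x, s
```

Starting from a strictly feasible point, the elimination ds = M dx keeps s = Mx + q satisfied exactly, so only the complementarity residual xs − μe is driven to zero.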
If the LCP satisfies the IPC, then system (2.4) has a unique solution for every μ > 0, denoted by (x(μ), s(μ)). The set of all such solutions forms a homotopy path, which is called the central path of the LCP and is used as a guideline to a solution of the LCP [20]. Thus, the central path is an arc in R²ⁿ parameterized as a function of μ:

C = {(x(μ), s(μ)) : μ > 0}. (2.5)

As μ → 0, the limit of the central path exists and is an optimal solution of the LCP.

Recently, Yang [10, 11] defined an ellipse ξ(α) in 2n-dimensional space to approximate the central path C, as follows:

ξ(α) = {(x(α), s(α)) : (x(α), s(α)) = a cos(α) + b sin(α) + c}, (2.6)

where a, b ∈ R²ⁿ are the axes of the ellipse, orthogonal to each other, and c ∈ R²ⁿ is the center of the ellipse. Let z = (x(μ), s(μ)) = (x(α₀), s(α₀)) ∈ ξ(α) be close to or on the central path. Using the approach of Yang [11], we define the first and second derivatives at (x(α₀), s(α₀)) to have the form they would have on the central path; they satisfy

[M  −I; S  X][ẋ; ṡ] = [0; xs − σμe], (2.7)

[M  −I; S  X][ẍ; s̈] = [0; −2ẋṡ], (2.8)

where μ = xᵀs/n and σ ∈ (0, 1/4) is the centering parameter. Let α ∈ [0, π/2] and let (x(α), s(α)) be the update of (x, s) after searching along the ellipse. Similar to the proof of Theorem 3.1 in [11], we have the following theorem.

Theorem 2.1. Let (x(α), s(α)) be an arc defined by (2.6) passing through a point (x, s) ∈ ξ(α), and let its first and second derivatives at (x, s) be (ẋ, ṡ) and (ẍ, s̈), defined by (2.7) and (2.8). Then an ellipse approximation of the central path is given by

x(α) = x − sin(α)ẋ + (1 − cos(α))ẍ, (2.9)
s(α) = s − sin(α)ṡ + (1 − cos(α))s̈. (2.10)

For convenience of reference, set g(α) := 1 − cos(α). Using Theorem 2.1, (2.7) and (2.8), we get

x(α)s(α) = xs − sin(α)(xs − σμe) + χ(α), (2.11)

where

χ(α) := −g²(α)ẋṡ − sin(α)g(α)(ẋs̈ + ṡẍ) + g²(α)ẍs̈. (2.12)

We will use the negative infinity neighborhood of the central path from the literature, defined as

N∞⁻(γ) = {(x, s) ∈ F⁰ : min(xs) ≥ γμ}, (2.13)

where γ ∈ (0, 1) is a
constant independent of n. This neighborhood is widely used in implementations as well. To facilitate the analysis of the algorithm, we define

sin(α̂) := max{sin(α) : (x(α), s(α)) ∈ N∞⁻(γ), sin(α) ∈ [0, 1], α ∈ [0, π/2]}. (2.14)
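The derivative systems, the arc update and the neighborhood test above can be sketched numerically as follows; this is a minimal sketch assuming NumPy (the −2ẋṡ right-hand side of the second system follows the construction above), not the authors' implementation:

```python
import numpy as np

def arc_directions(M, x, s, sigma):
    """Solve the two Newton-like systems for the first and second derivatives
    (xdot, sdot) and (xddot, sddot) of the approximating ellipse at (x, s)."""
    n = len(x)
    mu = x @ s / n
    A = np.block([[M, -np.eye(n)],
                  [np.diag(s), np.diag(x)]])
    d1 = np.linalg.solve(A, np.r_[np.zeros(n), x * s - sigma * mu * np.ones(n)])
    xdot, sdot = d1[:n], d1[n:]
    d2 = np.linalg.solve(A, np.r_[np.zeros(n), -2.0 * xdot * sdot])
    return xdot, sdot, d2[:n], d2[n:]

def arc_point(x, s, derivs, alpha):
    """Evaluate the ellipse approximation of the central path at angle alpha."""
    xdot, sdot, xddot, sddot = derivs
    x_a = x - np.sin(alpha) * xdot + (1 - np.cos(alpha)) * xddot
    s_a = s - np.sin(alpha) * sdot + (1 - np.cos(alpha)) * sddot
    return x_a, s_a

def in_neighborhood(x, s, gamma):
    """Membership test for the wide neighborhood min(xs) >= gamma * mu."""
    mu = x @ s / len(x)
    return bool(np.all(x > 0) and np.all(s > 0) and np.min(x * s) >= gamma * mu)
```

On a small monotone example one can verify that the arc preserves feasibility s(α) = Mx(α) + q exactly and that moderate values of α reduce the duality gap while staying inside the neighborhood.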
3. THE ARC-SEARCH INTERIOR-POINT ALGORITHM

The proposed algorithm starts from an initial point (x⁰, s⁰) ∈ N∞⁻(γ) that is close to or on the central path C. Using an ellipse that passes through (x⁰, s⁰), the algorithm approximates the central path and computes a new iterate (x(α), s(α)) that achieves a certain decrease of the duality gap. The procedure is repeated until an ε-approximate solution of the LCP is found. A more formal description of the algorithm is presented next.

Algorithm 3.1 (Arc-search IPM for LCP).
Input: an accuracy parameter ε > 0, a neighborhood parameter γ ∈ (0, 1), a centering parameter σ ∈ (0, 1/4), and an initial point (x⁰, s⁰) ∈ N∞⁻(γ) with μ₀ = (x⁰)ᵀs⁰/n; set k := 0.
If (xᵏ)ᵀsᵏ ≤ ε, then stop.
for k = 0, 1, 2, ...
step 1. Solve the systems (2.7) and (2.8) to get (ẋ, ṡ) and (ẍ, s̈);
step 2. Compute sin(α̂ᵏ) by (2.14), and let (x^{k+1}, s^{k+1}) = (x(α̂ᵏ), s(α̂ᵏ));
step 3. Calculate μ_{k+1} = (x^{k+1})ᵀs^{k+1}/n, set k := k + 1, and go to step 1.
end (for)

Based on the previous discussion, we state several lemmas which are necessary for the convergence analysis of the algorithm.

Lemma 3.1. Let (x, s) be a strictly feasible point of the LCP, let (ẋ, ṡ) and (ẍ, s̈) satisfy (2.7) and (2.8), respectively, and let (x(α), s(α)) be computed by (2.9) and (2.10). Then Mx(α) − s(α) = −q.

Proof. Since (x, s) is strictly feasible, we find from Theorem 2.1, (2.7) and (2.8) that

Mx(α) − s(α) = M[x − sin(α)ẋ + (1 − cos(α))ẍ] − [s − sin(α)ṡ + (1 − cos(α))s̈]
= Mx − s − sin(α)(Mẋ − ṡ) + (1 − cos(α))(Mẍ − s̈)
= −q.

This completes the proof.

Lemma 3.2. Let ẋ, ṡ, ẍ and s̈ be defined by (2.7) and (2.8), and let M be a positive semi-definite matrix. Then the following relations hold:

ẋᵀṡ = ẋᵀMẋ ≥ 0, ẍᵀs̈ = ẍᵀMẍ ≥ 0, ẍᵀṡ = ẋᵀs̈ = ẋᵀMẍ, (3.1)

−[(ẋᵀṡ)(1 − cos(α))² + (ẍᵀs̈)sin²(α)] ≤ (ẍᵀṡ + ẋᵀs̈)sin(α)(1 − cos(α)) ≤ (ẋᵀṡ)(1 − cos(α))² + (ẍᵀs̈)sin²(α), (3.2)

−[(ẋᵀṡ)sin²(α) + (ẍᵀs̈)(1 − cos(α))²] ≤ (ẍᵀṡ + ẋᵀs̈)sin(α)(1 − cos(α)) ≤ (ẋᵀṡ)sin²(α) + (ẍᵀs̈)(1 − cos(α))². (3.3)
Proof. Pre-multiplying the first rows of (2.7) and (2.8) by ẋᵀ and ẍᵀ, respectively, we have ẋᵀṡ = ẋᵀMẋ and ẍᵀs̈ = ẍᵀMẍ. The first two inequalities of (3.1) then follow from the positive semi-definiteness of M. Similarly, we also have ẍᵀMẋ = ẍᵀṡ and ẋᵀMẍ = ẋᵀs̈, which gives ẍᵀṡ = ẋᵀs̈ = ẋᵀMẍ. Since M is positive semi-definite, we have

[(1 − cos(α))ẋ + sin(α)ẍ]ᵀM[(1 − cos(α))ẋ + sin(α)ẍ]
= (ẋᵀMẋ)(1 − cos(α))² + 2(ẋᵀMẍ)sin(α)(1 − cos(α)) + (ẍᵀMẍ)sin²(α)
= (ẋᵀṡ)(1 − cos(α))² + (ẍᵀs̈)sin²(α) + (ẍᵀṡ + ẋᵀs̈)sin(α)(1 − cos(α)) ≥ 0.

From this we get the first inequality of (3.2). Similarly, we also have

[(1 − cos(α))ẋ − sin(α)ẍ]ᵀM[(1 − cos(α))ẋ − sin(α)ẍ]
= (ẋᵀṡ)(1 − cos(α))² + (ẍᵀs̈)sin²(α) − (ẍᵀṡ + ẋᵀs̈)sin(α)(1 − cos(α)) ≥ 0,

which implies the second inequality of (3.2). Replacing (1 − cos(α))ẋ and sin(α)ẍ by sin(α)ẋ and (1 − cos(α))ẍ, respectively, the same argument immediately yields (3.3).

Remark 3.3. In the LCP case, (3.1) implies that the iterative directions are not orthogonal, in contrast to the LP case. This causes some difficulties in the analysis of the algorithm for the LCP. In what follows, we develop some new results on the iterative directions.

Lemma 3.4 ([11]). For α ∈ [0, π/2], sin(α) ≥ sin²(α) = 1 − cos²(α) ≥ 1 − cos(α).

To estimate an upper bound for μ(α), we give the following lemma.

Lemma 3.5. Let (x, s) ∈ N∞⁻(γ), let (ẋ, ṡ) and (ẍ, s̈) be computed from (2.7) and (2.8), and let x(α) and s(α) be defined by (2.9) and (2.10), respectively. Then

μ(α) = μ[1 − (1 − σ)sin(α)] + (1/n)[(1 − cos(α))²(ẍᵀs̈ − ẋᵀṡ) − sin(α)(1 − cos(α))(ẋᵀs̈ + ṡᵀẍ)]
≤ μ[1 − (1 − σ)sin(α)] + (1/n)(ẍᵀs̈)(sin⁴(α) + sin²(α)).
Proof. Using (2.9), (2.10), (2.7) and (2.8), we get

nμ(α) = x(α)ᵀs(α)
= [x − ẋ sin(α) + ẍ(1 − cos(α))]ᵀ[s − ṡ sin(α) + s̈(1 − cos(α))]
= xᵀs − (xᵀṡ + sᵀẋ)sin(α) + (xᵀs̈ + sᵀẍ)(1 − cos(α)) − (ẋᵀs̈ + ṡᵀẍ)sin(α)(1 − cos(α)) + ẋᵀṡ sin²(α) + ẍᵀs̈(1 − cos(α))²
= nμ[1 − (1 − σ)sin(α)] + (1 − cos(α))²(ẍᵀs̈ − ẋᵀṡ) − (ẋᵀs̈ + ṡᵀẍ)sin(α)(1 − cos(α))
≤ nμ[1 − (1 − σ)sin(α)] + (1 − cos(α))²(ẍᵀs̈ − ẋᵀṡ) + (1 − cos(α))²ẋᵀṡ + sin²(α)ẍᵀs̈
= nμ[1 − (1 − σ)sin(α)] + (1 − cos(α))²ẍᵀs̈ + sin²(α)ẍᵀs̈
≤ nμ[1 − (1 − σ)sin(α)] + (sin⁴(α) + sin²(α))(ẍᵀs̈),

where the first inequality follows from Lemma 3.2 and the second from Lemma 3.4. This proves the lemma.

4. THE COMPLEXITY ANALYSIS

In the following, we present some technical results which are required for establishing the main results of the convergence analysis. For simplicity, we omit the iteration index k and set D = X^{1/2}S^{−1/2}, where x, s > 0.

Lemma 4.1 ([21]). Let u, v ∈ Rⁿ. Then

‖uv‖ ≤ ‖Du‖·‖D⁻¹v‖ ≤ (1/2)(‖Du‖² + ‖D⁻¹v‖²).

In what follows, we give upper bounds for ‖Dẋ‖, ‖D⁻¹ṡ‖, ‖Dẍ‖ and ‖D⁻¹s̈‖, which are very important for the subsequent analysis.

Lemma 4.2. Let (x, s) ∈ N∞⁻(γ), D = X^{1/2}S^{−1/2}, and let (ẋ, ṡ) be the solution of (2.7). Then

‖Dẋ‖² + ‖D⁻¹ṡ‖² ≤ β₁μn, ‖ẋṡ‖ ≤ (β₁/2)μn, ‖Dẋ‖ ≤ (β₁μn)^{1/2}, ‖D⁻¹ṡ‖ ≤ (β₁μn)^{1/2},

where β₁ = 1 − 2σ + σ²/γ.

Proof. Pre-multiplying the second equation in (2.7) by (XS)^{−1/2}, we get

Dẋ + D⁻¹ṡ = (XS)^{−1/2}(xs − σμe).
Then, taking the squared norm on both sides yields

‖Dẋ‖² + 2ẋᵀṡ + ‖D⁻¹ṡ‖² = xᵀs − 2σμn + σ²μ²‖(XS)^{−1/2}e‖².

From ẋᵀṡ ≥ 0, we have

‖Dẋ‖² + ‖D⁻¹ṡ‖² ≤ xᵀs − 2σμn + σ²μ²·n/min(xs) ≤ μn − 2σμn + σ²μn/γ = β₁μn.

Using this result and Lemma 4.1, we obtain

‖Dẋ‖ ≤ (β₁μn)^{1/2}, ‖D⁻¹ṡ‖ ≤ (β₁μn)^{1/2}, and ‖ẋṡ‖ ≤ (β₁/2)μn.

This proves the lemma.

Lemma 4.3. Let (x, s) ∈ N∞⁻(γ), D = X^{1/2}S^{−1/2}, and let (ẍ, s̈) be the solution of (2.8). Then

‖Dẍ‖² + ‖D⁻¹s̈‖² ≤ β₂μn², ‖ẍs̈‖ ≤ (β₂/2)μn², ‖Dẍ‖ ≤ (β₂μn²)^{1/2}, ‖D⁻¹s̈‖ ≤ (β₂μn²)^{1/2},

where β₂ = β₁²/γ.

Proof. Pre-multiplying the second equation in (2.8) by (XS)^{−1/2}, taking the squared norm on both sides, and using ẍᵀs̈ ≥ 0, we obtain

‖Dẍ‖² + ‖D⁻¹s̈‖² ≤ ‖(XS)^{−1/2}(−2ẋṡ)‖².

From this inequality and Lemma 4.2, we get

‖(XS)^{−1/2}(−2ẋṡ)‖² ≤ 4‖ẋṡ‖²/min(xs) ≤ 4‖ẋṡ‖²/(γμ) ≤ β₁²μn²/γ = β₂μn²,

and hence ‖Dẍ‖ ≤ (β₂μn²)^{1/2}, ‖D⁻¹s̈‖ ≤ (β₂μn²)^{1/2}, and ‖ẍs̈‖ ≤ (β₂/2)μn². This completes the proof.

Using Lemma 4.2 and Lemma 4.3, we have the following lemma.

Lemma 4.4. Let β₃ = (β₁β₂)^{1/2}. Then

‖ẋs̈‖ ≤ ‖Dẋ‖·‖D⁻¹s̈‖ ≤ β₃μn^{3/2}, ‖ẍṡ‖ ≤ ‖Dẍ‖·‖D⁻¹ṡ‖ ≤ β₃μn^{3/2}.
Let sin(α̂₀) = β₄/n, where β₄ = 2σ(1 − γ)/(β₁ + β₂ + 4β₃). Then we get the next lemmas.

Lemma 4.5. Let χ(α) be defined by (2.12). For all α ∈ (0, α̂₀], we have

|χ(α)| ≤ sin(α)(1 − γ)σμ e.

Proof. Using Lemma 4.2, Lemma 4.3, Lemma 4.4 and g(α) ≤ sin²(α), we have

|χ(α)| = |−g²(α)ẋṡ − sin(α)g(α)(ẋs̈ + ṡẍ) + g²(α)ẍs̈|
≤ [sin⁴(α)‖ẋṡ‖ + sin³(α)‖ẋs̈ + ṡẍ‖ + sin⁴(α)‖ẍs̈‖]e
≤ [sin⁴(α)(β₁/2)μn + 2sin³(α)β₃μn^{3/2} + sin⁴(α)(β₂/2)μn²]e
≤ sin(α)μ[(β₁/2)sin³(α̂₀)n + 2β₃sin²(α̂₀)n^{3/2} + (β₂/2)sin³(α̂₀)n²]e
= sin(α)μ[(β₁/2)β₄³/n² + 2β₃β₄²/n^{1/2} + (β₂/2)β₄³/n]e
≤ sin(α)μ(β₄/2)[β₁ + β₂ + 4β₃]e
= sin(α)(1 − γ)σμ e.

This completes the proof.

Lemma 4.6. Let (x, s) ∈ N∞⁻(γ) and let sin(α̂) be defined by (2.14). Then, for all sin(α) ∈ [0, sin(α̂₀)], we have (x(α), s(α)) ∈ N∞⁻(γ) and sin(α̂) ≥ sin(α̂₀).

Proof. Using (2.11), (2.13), Lemma 4.5 and Lemma 3.5, we have

min(x(α)s(α)) = min[xs − sin(α)(xs − σμe) + χ(α)]
≥ (1 − sin(α))min(xs) + sin(α)σμ + min(χ(α))
≥ (1 − sin(α))γμ + sin(α)σμ − sin(α)(1 − γ)σμ
= γμ[1 − (1 − σ)sin(α)].

Combining this with the expression for μ(α) in Lemma 3.5, and using ẋᵀṡ ≥ 0, Lemma 3.4 and (3.3), we obtain min(x(α)s(α)) ≥ γμ(α), which implies x(α)s(α) > 0. Since x(α) and s(α) are continuous on [0, α̂₀] and (x, s) > 0, it follows that x(α) >
0 and s(α) > 0. By (2.13) and (2.14), we have (x(α), s(α)) ∈ N∞⁻(γ). Using (2.14), we obtain sin(α̂) ≥ sin(α̂₀). The proof is completed.

Now we are ready to present the iteration complexity bound of Algorithm 3.1. To this end, we need a new upper bound on μ(α). From Lemma 3.5 and Lemma 4.3, it follows that, for α ∈ [0, α̂₀],

μ(α) ≤ μ[1 − (1 − σ)sin(α)] + (1/n)‖ẍs̈‖(sin⁴(α) + sin²(α))
≤ μ[1 − (1 − σ − (β₂/2)n(sin³(α) + sin(α)))sin(α)]
≤ μ[1 − (1 − σ − (β₂/2)β₄³/n² − (β₂/2)β₄)sin(α)]
≤ (1 − (1/4)sin(α))μ,

where the last two inequalities come from sin(α̂₀) = β₄/n, the definition of β₄ and 0 < σ < 1/4.

Theorem 4.7. Let (x⁰, s⁰) ∈ N∞⁻(γ) and sin(α̂₀) = β₄/n. Then Algorithm 3.1 terminates in at most O(n log((x⁰)ᵀs⁰/ε)) iterations.

Proof. Using the relation sin(α̂) ≥ sin(α̂₀), we have

μ(α̂) ≤ (1 − (1/4)sin(α̂))μ ≤ (1 − (1/4)sin(α̂₀))μ = (1 − β₄/(4n))μ.

Then we obtain

(xᵏ)ᵀsᵏ ≤ (1 − β₄/(4n))ᵏ (x⁰)ᵀs⁰.

Thus the inequality (xᵏ)ᵀsᵏ ≤ ε is satisfied if (1 − β₄/(4n))ᵏ (x⁰)ᵀs⁰ ≤ ε. Taking logarithms, we get

k log(1 − β₄/(4n)) ≤ log(ε/((x⁰)ᵀs⁰)).

Note that log(1 − β₄/(4n)) ≤ −β₄/(4n) < 0. Thus, for k ≥ (4n/β₄) log((x⁰)ᵀs⁰/ε), we have (xᵏ)ᵀsᵏ ≤ ε. This completes the proof.
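Putting the pieces together, the whole method can be sketched in code. This is an illustrative sketch assuming NumPy and made-up test data, not the authors' implementation; in particular, the exact step-size rule (the maximization of sin(α) over the neighborhood) is replaced here by simple backtracking on α until the trial arc point returns to the wide neighborhood:

```python
import numpy as np

def arc_search_ipm(M, q, x, s, gamma=0.5, sigma=0.2, eps=1e-8, max_iter=200):
    """Sketch of an arc-search IPM for the monotone LCP s = Mx + q, xs -> 0.

    The step size is found by halving alpha until the trial point satisfies
    min(xs) >= gamma * mu, a simple stand-in for the exact step-size rule."""
    n = len(x)
    k = 0
    for k in range(max_iter):
        if x @ s <= eps:                     # epsilon-approximate solution found
            break
        mu = x @ s / n
        # Newton-like systems for the first and second arc derivatives
        A = np.block([[M, -np.eye(n)],
                      [np.diag(s), np.diag(x)]])
        d1 = np.linalg.solve(A, np.r_[np.zeros(n), x * s - sigma * mu * np.ones(n)])
        xd, sd = d1[:n], d1[n:]
        d2 = np.linalg.solve(A, np.r_[np.zeros(n), -2.0 * xd * sd])
        xdd, sdd = d2[:n], d2[n:]
        # backtrack along the arc; terminates because small alpha stays
        # in the neighborhood when the current point is strictly inside it
        alpha = np.pi / 2
        while True:
            xa = x - np.sin(alpha) * xd + (1 - np.cos(alpha)) * xdd
            sa = s - np.sin(alpha) * sd + (1 - np.cos(alpha)) * sdd
            mua = xa @ sa / n
            if np.all(xa > 0) and np.all(sa > 0) and np.min(xa * sa) >= gamma * mua:
                break
            alpha *= 0.5
        x, s = xa, sa
    return x, s, k
```

On a small monotone test problem the duality gap decreases geometrically, in line with the linear convergence rate established above.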
5. NUMERICAL RESULTS

In this section, we present some numerical results for our new Algorithm 3.1 and, on the same problems, for the large-update algorithm of [22] based on the trigonometric kernel function ψ(t) = (t² − 1)/2 + (6/π)tan(h(t)), with h(t) = π(1 − t)/(4t + 2), and on the classical kernel function ψ(t) = (t² − 1)/2 − log t. The results show that our algorithm is more effective. We list the number of iterations (Iter), the duality gap (Gap) and the total CPU time (Time) when the algorithms terminate. The following test problems are considered.

5.1. Test problems.

Problem 1. A small fixed monotone LCP with given data M, q and starting point x⁰.

Problem 2 [23]. A family of monotone LCPs of variable dimension n with given M, q and x⁰.

Problem 3 [24]. A family of monotone LCPs of variable dimension n with given M, q and x⁰.

5.2. The number of iterations for our algorithm. In the following, we select different optimization parameters γ, σ and the accuracy parameter ε = 10⁻⁶ for Algorithm 3.1. The numbers of iterations for the test problems are stated in Tables 1-3.

TABLE 1. The number of iterations for Problem 1 (columns: σ, γ, Gap, Iter, Time).

Taking different dimensions and parameters σ, γ, the results are displayed in Tables 1-3. We find that σ has a more pronounced effect on the iteration count than γ. For Tables 1 and 3, the smaller σ is, the lower the iteration count; for Table 2, the larger σ is, the lower the iteration count.
TABLE 2. The number of iterations for Problem 2 (columns: σ, γ; Iter, Gap and Time for n = 10, 15, 20, 25, 30).

TABLE 3. The number of iterations for Problem 3 (columns: σ, γ; Iter, Gap and Time for n = 10, 50, 100, 200, 500, 1000).

5.3. The number of iterations for the large-update algorithm based on kernel functions in [22]. We take different barrier update parameters θ, the threshold parameter τ = 5, and the accuracy ε = 10⁻⁴. The iteration numbers for the test problems based on the trigonometric kernel function and the classical kernel function are presented in Tables 4-9, respectively. Comparing the results in Tables 4-9 with those in Tables 1-3 suggests that, even though it is run to higher accuracy, our algorithm needs fewer iterations. This indicates that our algorithm is more efficient in practice and that its iteration count is insensitive to the problem size. We can intuitively see that searching along an ellipse that approximates
the central path is promising and attractive. Furthermore, the iteration counts of our algorithm grow very slowly as n increases, which is precisely what one hopes for from interior-point algorithms.

TABLE 4. The number of iterations for Problem 1 based on the trigonometric kernel function (columns: θ, Gap, Iter).

TABLE 5. The number of iterations for Problem 1 based on the classical kernel function (columns: θ, Gap, Iter).

TABLE 6. The number of iterations for Problem 2 based on the trigonometric kernel function (θ; n = 10, 15, 20, 25, 30).

TABLE 7. The number of iterations for Problem 2 based on the classical kernel function (θ; n = 10, 15, 20, 25, 30).

TABLE 8. The number of iterations for Problem 3 based on the trigonometric kernel function (θ; n = 10, 50, 100, 200, 500, 1000).

TABLE 9. The number of iterations for Problem 3 based on the classical kernel function (θ; n = 10, 50, 100, 200, 500, 1000).
6. CONCLUDING REMARKS

This paper generalized a wide neighborhood IPM with arc-search from LP to the LCP; the method searches for optimizers along ellipses that approximate the central path. Moreover, the iteration complexity bound of the proposed algorithm was derived, namely O(n log((x⁰)ᵀs⁰/ε)). The numerical results show that the newly proposed algorithm is promising in practice in comparison with some other IPMs. We expect the same idea to prove efficient for the P*(κ) LCP.

Acknowledgements. The authors were supported by the Natural Science Foundation of China (No. 7470) and the Excellent Foundation of Graduate Students of China Three Gorges University (2017YPY08).

REFERENCES

[1] N.K. Karmarkar, A new polynomial-time algorithm for linear programming, Combinatorica 4 (1984).
[2] M. Kojima, S. Mizuno, A. Yoshise, A primal-dual interior-point algorithm for linear programming, in: Progress in Mathematical Programming: Interior Point and Related Methods, Springer-Verlag, New York, 1989, 29-47.
[3] Y.Q. Bai, M. El Ghami, C. Roos, A comparative study of kernel functions for primal-dual interior-point algorithms in linear optimization, SIAM J. Optim. 15 (2004), 101-128.
[4] S.J. Wright, A path-following interior-point algorithm for linear and quadratic problems, Ann. Oper. Res. 62 (1996).
[5] Y.E. Nesterov, Interior-Point Methods in Convex Programming: Theory and Applications, SIAM, Philadelphia, 1994.
[6] H. Mansouri, M. Pirhaji, A polynomial interior-point algorithm for monotone linear complementarity problems, J. Optim. Theory Appl. 157 (2013).
[7] R.W. Cottle, J.S. Pang, R.E. Stone, The Linear Complementarity Problem, Academic Press, New York, 1992.
[8] P. Huang, Y. Ye, An asymptotical O(√nL)-iteration path-following linear programming algorithm that uses wide neighborhoods, SIAM J. Optim. 6 (1996).
[9] B. Jansen, C. Roos, T. Terlaky, Improved complexity using higher-order correctors for primal-dual Dikin affine scaling, Math.
Program. 76 (1996).
[10] Y. Yang, Arc-search path-following interior-point algorithms for linear programming, Optimization Online, 2009.
[11] Y. Yang, A polynomial arc-search interior-point algorithm for linear programming, J. Optim. Theory Appl. 158 (2013).
[12] Y. Yang, CurveLP - a MATLAB implementation of an infeasible interior-point algorithm for linear programming, Numer. Algorithms 74 (2016).
[13] Y. Yang, M. Yamashita, An arc-search O(√nL) infeasible-interior-point algorithm for linear programming, Optim. Lett. (2018).
[14] Y. Yang, A polynomial arc-search interior-point algorithm for convex quadratic programming, Eur. J. Oper. Res. 215 (2011), 25-38.
[15] X. Yang, Y. Zhang, H. Liu, A wide neighborhood infeasible-interior-point method with arc-search for linear programming, J. Comput. Math. Appl. 51 (2016), 209-225.
[16] X. Yang, H. Liu, Y. Zhang, An arc-search infeasible-interior-point method for symmetric optimization in a wide neighborhood of the central path, Optim. Lett. 11 (2017), 135-152.
[17] M. Pirhaji, M. Zangiabadi, H. Mansouri, An ℓ₂-neighborhood infeasible interior-point algorithm for linear complementarity problems, 4OR 15 (2017).
[18] X.M. Yang, H.W. Liu, C.H. Liu, Arc-search interior-point algorithm, J. Jilin Univ. (Science Edition) 52 (2014).
[19] C. Roos, T. Terlaky, J.-Ph. Vial, Interior-Point Methods for Linear Optimization, Springer, New York, 2006.
[20] M. Kojima, N. Megiddo, T. Noma, A. Yoshise, A Unified Approach to Interior Point Algorithms for Linear Complementarity Problems, Lecture Notes in Computer Science, vol. 538, Springer, Berlin, 1991.
[21] Y. Zhang, D.T. Zhang, On polynomiality of the Mehrotra-type predictor-corrector interior-point algorithms, Math. Program. 68 (1995).
[22] M. El Ghami, Primal-dual algorithms for P*(κ) linear complementarity problems based on kernel-function with trigonometric barrier term, Optim. Theory Decis. Mak. Oper. Res. Appl. 31 (2013).
[23] P.T. Harker, J.S. Pang, A damped Newton method for the linear complementarity problem, Lect. Appl. Math. 26 (1990).
[24] C. Geiger, C. Kanzow, On the resolution of monotone complementarity problems, Comput. Optim. Appl. 5 (1996), 155-173.
More informationInterior Point Methods for Nonlinear Optimization
Interior Point Methods for Nonlinear Optimization Imre Pólik 1 and Tamás Terlaky 2 1 School of Computational Engineering and Science, McMaster University, Hamilton, Ontario, Canada, imre@polik.net 2 School
More informationA polynomial time interior point path following algorithm for LCP based on Chen Harker Kanzow smoothing techniques
Math. Program. 86: 9 03 (999) Springer-Verlag 999 Digital Object Identifier (DOI) 0.007/s007990056a Song Xu James V. Burke A polynomial time interior point path following algorithm for LCP based on Chen
More informationOn Superlinear Convergence of Infeasible Interior-Point Algorithms for Linearly Constrained Convex Programs *
Computational Optimization and Applications, 8, 245 262 (1997) c 1997 Kluwer Academic Publishers. Manufactured in The Netherlands. On Superlinear Convergence of Infeasible Interior-Point Algorithms for
More informationSemidefinite Programming
Chapter 2 Semidefinite Programming 2.0.1 Semi-definite programming (SDP) Given C M n, A i M n, i = 1, 2,..., m, and b R m, the semi-definite programming problem is to find a matrix X M n for the optimization
More informationInterior-Point Methods for Linear Optimization
Interior-Point Methods for Linear Optimization Robert M. Freund and Jorge Vera March, 204 c 204 Robert M. Freund and Jorge Vera. All rights reserved. Linear Optimization with a Logarithmic Barrier Function
More informationA priori bounds on the condition numbers in interior-point methods
A priori bounds on the condition numbers in interior-point methods Florian Jarre, Mathematisches Institut, Heinrich-Heine Universität Düsseldorf, Germany. Abstract Interior-point methods are known to be
More informationA projection algorithm for strictly monotone linear complementarity problems.
A projection algorithm for strictly monotone linear complementarity problems. Erik Zawadzki Department of Computer Science epz@cs.cmu.edu Geoffrey J. Gordon Machine Learning Department ggordon@cs.cmu.edu
More informationOn Generalized Primal-Dual Interior-Point Methods with Non-uniform Complementarity Perturbations for Quadratic Programming
On Generalized Primal-Dual Interior-Point Methods with Non-uniform Complementarity Perturbations for Quadratic Programming Altuğ Bitlislioğlu and Colin N. Jones Abstract This technical note discusses convergence
More information15. Conic optimization
L. Vandenberghe EE236C (Spring 216) 15. Conic optimization conic linear program examples modeling duality 15-1 Generalized (conic) inequalities Conic inequality: a constraint x K where K is a convex cone
More informationConvergence Analysis of Inexact Infeasible Interior Point Method. for Linear Optimization
Convergence Analysis of Inexact Infeasible Interior Point Method for Linear Optimization Ghussoun Al-Jeiroudi Jacek Gondzio School of Mathematics The University of Edinburgh Mayfield Road, Edinburgh EH9
More informationPrimal-dual relationship between Levenberg-Marquardt and central trajectories for linearly constrained convex optimization
Primal-dual relationship between Levenberg-Marquardt and central trajectories for linearly constrained convex optimization Roger Behling a, Clovis Gonzaga b and Gabriel Haeser c March 21, 2013 a Department
More informationLecture 5. Theorems of Alternatives and Self-Dual Embedding
IE 8534 1 Lecture 5. Theorems of Alternatives and Self-Dual Embedding IE 8534 2 A system of linear equations may not have a solution. It is well known that either Ax = c has a solution, or A T y = 0, c
More informationIMPLEMENTING THE NEW SELF-REGULAR PROXIMITY BASED IPMS
IMPLEMENTING THE NEW SELF-REGULAR PROXIMITY BASED IPMS IMPLEMENTING THE NEW SELF-REGULAR PROXIMITY BASED IPMS By Xiaohang Zhu A thesis submitted to the School of Graduate Studies in Partial Fulfillment
More informationA Second-Order Path-Following Algorithm for Unconstrained Convex Optimization
A Second-Order Path-Following Algorithm for Unconstrained Convex Optimization Yinyu Ye Department is Management Science & Engineering and Institute of Computational & Mathematical Engineering Stanford
More informationInterior Point Methods. We ll discuss linear programming first, followed by three nonlinear problems. Algorithms for Linear Programming Problems
AMSC 607 / CMSC 764 Advanced Numerical Optimization Fall 2008 UNIT 3: Constrained Optimization PART 4: Introduction to Interior Point Methods Dianne P. O Leary c 2008 Interior Point Methods We ll discuss
More informationNonsymmetric potential-reduction methods for general cones
CORE DISCUSSION PAPER 2006/34 Nonsymmetric potential-reduction methods for general cones Yu. Nesterov March 28, 2006 Abstract In this paper we propose two new nonsymmetric primal-dual potential-reduction
More informationLecture 14 Barrier method
L. Vandenberghe EE236A (Fall 2013-14) Lecture 14 Barrier method centering problem Newton decrement local convergence of Newton method short-step barrier method global convergence of Newton method predictor-corrector
More informationA Redundant Klee-Minty Construction with All the Redundant Constraints Touching the Feasible Region
A Redundant Klee-Minty Construction with All the Redundant Constraints Touching the Feasible Region Eissa Nematollahi Tamás Terlaky January 5, 2008 Abstract By introducing some redundant Klee-Minty constructions,
More informationFollowing The Central Trajectory Using The Monomial Method Rather Than Newton's Method
Following The Central Trajectory Using The Monomial Method Rather Than Newton's Method Yi-Chih Hsieh and Dennis L. Bricer Department of Industrial Engineering The University of Iowa Iowa City, IA 52242
More informationCS711008Z Algorithm Design and Analysis
CS711008Z Algorithm Design and Analysis Lecture 8 Linear programming: interior point method Dongbo Bu Institute of Computing Technology Chinese Academy of Sciences, Beijing, China 1 / 31 Outline Brief
More informationLecture 15 Newton Method and Self-Concordance. October 23, 2008
Newton Method and Self-Concordance October 23, 2008 Outline Lecture 15 Self-concordance Notion Self-concordant Functions Operations Preserving Self-concordance Properties of Self-concordant Functions Implications
More informationA tight iteration-complexity upper bound for the MTY predictor-corrector algorithm via redundant Klee-Minty cubes
A tight iteration-complexity upper bound for the MTY predictor-corrector algorithm via redundant Klee-Minty cubes Murat Mut Tamás Terlaky Department of Industrial and Systems Engineering Lehigh University
More informationChapter 14 Linear Programming: Interior-Point Methods
Chapter 14 Linear Programming: Interior-Point Methods In the 1980s it was discovered that many large linear programs could be solved efficiently by formulating them as nonlinear problems and solving them
More informationChapter 6 Interior-Point Approach to Linear Programming
Chapter 6 Interior-Point Approach to Linear Programming Objectives: Introduce Basic Ideas of Interior-Point Methods. Motivate further research and applications. Slide#1 Linear Programming Problem Minimize
More informationPrimal-Dual Interior-Point Methods for Linear Programming based on Newton s Method
Primal-Dual Interior-Point Methods for Linear Programming based on Newton s Method Robert M. Freund March, 2004 2004 Massachusetts Institute of Technology. The Problem The logarithmic barrier approach
More informationAn interior-point gradient method for large-scale totally nonnegative least squares problems
An interior-point gradient method for large-scale totally nonnegative least squares problems Michael Merritt and Yin Zhang Technical Report TR04-08 Department of Computational and Applied Mathematics Rice
More informationLecture 14: Primal-Dual Interior-Point Method
CSE 599: Interplay between Convex Optimization and Geometry Winter 218 Lecturer: Yin Tat Lee Lecture 14: Primal-Dual Interior-Point Method Disclaimer: Please tell me any mistake you noticed. In this and
More informationNew Interior Point Algorithms in Linear Programming
AMO - Advanced Modeling and Optimization, Volume 5, Number 1, 2003 New Interior Point Algorithms in Linear Programming Zsolt Darvay Abstract In this paper the abstract of the thesis New Interior Point
More informationOptimization: Then and Now
Optimization: Then and Now Optimization: Then and Now Optimization: Then and Now Why would a dynamicist be interested in linear programming? Linear Programming (LP) max c T x s.t. Ax b αi T x b i for i
More informationA SECOND ORDER MEHROTRA-TYPE PREDICTOR-CORRECTOR ALGORITHM FOR SEMIDEFINITE OPTIMIZATION
J Syst Sci Complex (01) 5: 1108 111 A SECOND ORDER MEHROTRA-TYPE PREDICTOR-CORRECTOR ALGORITHM FOR SEMIDEFINITE OPTIMIZATION Mingwang ZHANG DOI: 10.1007/s1144-01-0317-9 Received: 3 December 010 / Revised:
More informationOn well definedness of the Central Path
On well definedness of the Central Path L.M.Graña Drummond B. F. Svaiter IMPA-Instituto de Matemática Pura e Aplicada Estrada Dona Castorina 110, Jardim Botânico, Rio de Janeiro-RJ CEP 22460-320 Brasil
More informationAbsolute Value Programming
O. L. Mangasarian Absolute Value Programming Abstract. We investigate equations, inequalities and mathematical programs involving absolute values of variables such as the equation Ax + B x = b, where A
More informationLineáris komplementaritási feladatok: elmélet, algoritmusok, alkalmazások
General LCPs Lineáris komplementaritási feladatok: elmélet, algoritmusok, alkalmazások Illés Tibor BME DET BME Analízis Szeminárium 2015. november 11. Outline Linear complementarity problem: M u + v =
More informationPrimal-dual IPM with Asymmetric Barrier
Primal-dual IPM with Asymmetric Barrier Yurii Nesterov, CORE/INMA (UCL) September 29, 2008 (IFOR, ETHZ) Yu. Nesterov Primal-dual IPM with Asymmetric Barrier 1/28 Outline 1 Symmetric and asymmetric barriers
More informationKey words. linear complementarity problem, non-interior-point algorithm, Tikhonov regularization, P 0 matrix, regularized central path
A GLOBALLY AND LOCALLY SUPERLINEARLY CONVERGENT NON-INTERIOR-POINT ALGORITHM FOR P 0 LCPS YUN-BIN ZHAO AND DUAN LI Abstract Based on the concept of the regularized central path, a new non-interior-point
More informationc 2005 Society for Industrial and Applied Mathematics
SIAM J. OPTIM. Vol. 15, No. 4, pp. 1147 1154 c 2005 Society for Industrial and Applied Mathematics A NOTE ON THE LOCAL CONVERGENCE OF A PREDICTOR-CORRECTOR INTERIOR-POINT ALGORITHM FOR THE SEMIDEFINITE
More informationIntroduction to Nonlinear Stochastic Programming
School of Mathematics T H E U N I V E R S I T Y O H F R G E D I N B U Introduction to Nonlinear Stochastic Programming Jacek Gondzio Email: J.Gondzio@ed.ac.uk URL: http://www.maths.ed.ac.uk/~gondzio SPS
More informationA class of Smoothing Method for Linear Second-Order Cone Programming
Columbia International Publishing Journal of Advanced Computing (13) 1: 9-4 doi:1776/jac1313 Research Article A class of Smoothing Method for Linear Second-Order Cone Programming Zhuqing Gui *, Zhibin
More informationPredictor-corrector methods for sufficient linear complementarity problems in a wide neighborhood of the central path
Copyright information to be inserted by the Publishers Predictor-corrector methods for sufficient linear complementarity problems in a wide neighborhood of the central path Florian A. Potra and Xing Liu
More informationBarrier Method. Javier Peña Convex Optimization /36-725
Barrier Method Javier Peña Convex Optimization 10-725/36-725 1 Last time: Newton s method For root-finding F (x) = 0 x + = x F (x) 1 F (x) For optimization x f(x) x + = x 2 f(x) 1 f(x) Assume f strongly
More informationInterior Point Methods for Convex Quadratic and Convex Nonlinear Programming
School of Mathematics T H E U N I V E R S I T Y O H F E D I N B U R G Interior Point Methods for Convex Quadratic and Convex Nonlinear Programming Jacek Gondzio Email: J.Gondzio@ed.ac.uk URL: http://www.maths.ed.ac.uk/~gondzio
More informationInfeasible Primal-Dual (Path-Following) Interior-Point Methods for Semidefinite Programming*
Infeasible Primal-Dual (Path-Following) Interior-Point Methods for Semidefinite Programming* Notes for CAAM 564, Spring 2012 Dept of CAAM, Rice University Outline (1) Introduction (2) Formulation & a complexity
More informationAn E cient A ne-scaling Algorithm for Hyperbolic Programming
An E cient A ne-scaling Algorithm for Hyperbolic Programming Jim Renegar joint work with Mutiara Sondjaja 1 Euclidean space A homogeneous polynomial p : E!R is hyperbolic if there is a vector e 2E such
More informationInfeasible Full-Newton-Step Interior-Point Method for the Linear Complementarity Problems
Georgia Southern University Digital Commons@Georgia Southern Electronic Theses & Dissertations Graduate Studies, Jack N. Averitt College of Fall 2012 Infeasible Full-Newton-Step Interior-Point Method for
More information10 Numerical methods for constrained problems
10 Numerical methods for constrained problems min s.t. f(x) h(x) = 0 (l), g(x) 0 (m), x X The algorithms can be roughly divided the following way: ˆ primal methods: find descent direction keeping inside
More informationBOOK REVIEWS 169. minimize (c, x) subject to Ax > b, x > 0.
BOOK REVIEWS 169 BULLETIN (New Series) OF THE AMERICAN MATHEMATICAL SOCIETY Volume 28, Number 1, January 1993 1993 American Mathematical Society 0273-0979/93 $1.00+ $.25 per page The linear complementarity
More informationNew stopping criteria for detecting infeasibility in conic optimization
Optimization Letters manuscript No. (will be inserted by the editor) New stopping criteria for detecting infeasibility in conic optimization Imre Pólik Tamás Terlaky Received: March 21, 2008/ Accepted:
More informationPolynomial complementarity problems
Polynomial complementarity problems M. Seetharama Gowda Department of Mathematics and Statistics University of Maryland, Baltimore County Baltimore, Maryland 21250, USA gowda@umbc.edu December 2, 2016
More informationInfeasible Primal-Dual (Path-Following) Interior-Point Methods for Semidefinite Programming*
Infeasible Primal-Dual (Path-Following) Interior-Point Methods for Semidefinite Programming* Yin Zhang Dept of CAAM, Rice University Outline (1) Introduction (2) Formulation & a complexity theorem (3)
More informationA Note on Exchange Market Equilibria with Leontief s Utility: Freedom of Pricing Leads to Rationality
A Note on Exchange Market Equilibria with Leontief s Utility: Freedom of Pricing Leads to Rationality Yinyu Ye April 23, 2005 Abstract: We extend the analysis of [26] to handling more general utility functions:
More informationExchange Market Equilibria with Leontief s Utility: Freedom of Pricing Leads to Rationality
Exchange Market Equilibria with Leontief s Utility: Freedom of Pricing Leads to Rationality Yinyu Ye April 23, 2005; revised August 5, 2006 Abstract This paper studies the equilibrium property and algorithmic
More informationA Simpler and Tighter Redundant Klee-Minty Construction
A Simpler and Tighter Redundant Klee-Minty Construction Eissa Nematollahi Tamás Terlaky October 19, 2006 Abstract By introducing redundant Klee-Minty examples, we have previously shown that the central
More information18. Primal-dual interior-point methods
L. Vandenberghe EE236C (Spring 213-14) 18. Primal-dual interior-point methods primal-dual central path equations infeasible primal-dual method primal-dual method for self-dual embedding 18-1 Symmetric
More informationImproved Full-Newton-Step Infeasible Interior- Point Method for Linear Complementarity Problems
Georgia Southern University Digital Commons@Georgia Southern Electronic Theses & Dissertations Graduate Studies, Jack N. Averitt College of Summer 2015 Improved Full-Newton-Step Infeasible Interior- Point
More informationLecture 24: August 28
10-725: Optimization Fall 2012 Lecture 24: August 28 Lecturer: Geoff Gordon/Ryan Tibshirani Scribes: Jiaji Zhou,Tinghui Zhou,Kawa Cheung Note: LaTeX template courtesy of UC Berkeley EECS dept. Disclaimer:
More informationOperations Research Lecture 4: Linear Programming Interior Point Method
Operations Research Lecture 4: Linear Programg Interior Point Method Notes taen by Kaiquan Xu@Business School, Nanjing University April 14th 2016 1 The affine scaling algorithm one of the most efficient
More informationCriss-cross Method for Solving the Fuzzy Linear Complementarity Problem
Volume 118 No. 6 2018, 287-294 ISSN: 1311-8080 (printed version); ISSN: 1314-3395 (on-line version) url: http://www.ijpam.eu ijpam.eu Criss-cross Method for Solving the Fuzzy Linear Complementarity Problem
More informationSecond-order cone programming
Outline Second-order cone programming, PhD Lehigh University Department of Industrial and Systems Engineering February 10, 2009 Outline 1 Basic properties Spectral decomposition The cone of squares The
More informationConic Linear Optimization and its Dual. yyye
Conic Linear Optimization and Appl. MS&E314 Lecture Note #04 1 Conic Linear Optimization and its Dual Yinyu Ye Department of Management Science and Engineering Stanford University Stanford, CA 94305, U.S.A.
More informationCurvature as a Complexity Bound in Interior-Point Methods
Lehigh University Lehigh Preserve Theses and Dissertations 2014 Curvature as a Complexity Bound in Interior-Point Methods Murat Mut Lehigh University Follow this and additional works at: http://preserve.lehigh.edu/etd
More informationInterior-point methods Optimization Geoff Gordon Ryan Tibshirani
Interior-point methods 10-725 Optimization Geoff Gordon Ryan Tibshirani Analytic center Review force field interpretation Newton s method: y = 1./(Ax+b) A T Y 2 A Δx = A T y Dikin ellipsoid unit ball of
More informationAbsolute value equations
Linear Algebra and its Applications 419 (2006) 359 367 www.elsevier.com/locate/laa Absolute value equations O.L. Mangasarian, R.R. Meyer Computer Sciences Department, University of Wisconsin, 1210 West
More informationLecture 17: Primal-dual interior-point methods part II
10-725/36-725: Convex Optimization Spring 2015 Lecture 17: Primal-dual interior-point methods part II Lecturer: Javier Pena Scribes: Pinchao Zhang, Wei Ma Note: LaTeX template courtesy of UC Berkeley EECS
More informationLecture: Algorithms for LP, SOCP and SDP
1/53 Lecture: Algorithms for LP, SOCP and SDP Zaiwen Wen Beijing International Center For Mathematical Research Peking University http://bicmr.pku.edu.cn/~wenzw/bigdata2018.html wenzw@pku.edu.cn Acknowledgement:
More informationA semidefinite relaxation scheme for quadratically constrained quadratic problems with an additional linear constraint
Iranian Journal of Operations Research Vol. 2, No. 2, 20, pp. 29-34 A semidefinite relaxation scheme for quadratically constrained quadratic problems with an additional linear constraint M. Salahi Semidefinite
More informationCOMPARATIVE STUDY BETWEEN LEMKE S METHOD AND THE INTERIOR POINT METHOD FOR THE MONOTONE LINEAR COMPLEMENTARY PROBLEM
STUDIA UNIV. BABEŞ BOLYAI, MATHEMATICA, Volume LIII, Number 3, September 2008 COMPARATIVE STUDY BETWEEN LEMKE S METHOD AND THE INTERIOR POINT METHOD FOR THE MONOTONE LINEAR COMPLEMENTARY PROBLEM ADNAN
More information