Perturbed path following interior point algorithms (RR n° 2745)
INSTITUT NATIONAL DE RECHERCHE EN INFORMATIQUE ET EN AUTOMATIQUE. Perturbed path following interior point algorithms. J. Frédéric Bonnans, Cecilia Pola, Raja Rebaï. N° 2745, Décembre 1995. PROGRAMME 5. Apport de recherche. ISSN
Perturbed path following interior point algorithms. J. Frédéric Bonnans, Cecilia Pola, Raja Rebaï. Programme 5, Traitement du signal, automatique et productique. Projet Programmation mathématique. Rapport de recherche n° 2745, Décembre 1995.

Abstract: The path following algorithms of predictor-corrector type have proved to be very effective for solving linear optimization problems. However, the assumption that the Newton direction (corresponding to a centering or affine step) is computed exactly is unrealistic. Indeed, for large scale problems, one may need to use iterative algorithms for computing the Newton step. In this paper, we study algorithms in which the computed direction is the solution of the usual linear system with an error in the right-hand side. We give precise and explicit estimates of the error under which the computational complexity is the same as for the standard case. We also give explicit estimates that guarantee an asymptotic linear convergence at an arbitrary rate. Finally, we present some encouraging numerical results. Because our results are set in the framework of monotone linear complementarity problems, they apply to convex quadratic optimization as well.

Key-words: Linear programming, linear complementarity problems, perturbation, decomposition, parallel computation, large scale problems, interior point methods, polynomial complexity, predictor-corrector algorithm, infeasible algorithms.

(Résumé : tsvp)

Supported by the Human Capital and Mobility Program (European Community) CHRX-CT. INRIA, B.P. 105, 78153 Rocquencourt, France. Frederic.Bonnans@inria.fr. Dpto Mat., Est. y Comp., Univ. Cantabria, Spain. pola@matsun3.unican.es. INRIA, B.P. 105, 78153 Rocquencourt, France. Raja.Rebai@inria.fr.

Unité de recherche INRIA Rocquencourt, Domaine de Voluceau, Rocquencourt, BP 105, 78153 LE CHESNAY Cedex (France). Téléphone : (33 ). Télécopie : (33 ).
Algorithmes de points intérieurs par suivi de chemin avec perturbations

Résumé : Path following algorithms of predictor-corrector type have proved very effective for solving linear optimization problems. However, the assumption that the Newton direction (corresponding to an affine or centering step) is computed exactly is hardly realistic. Indeed, for large scale problems it may be necessary to use iterative algorithms. This paper studies such algorithms while taking into account an error in the right-hand side. We give precise and explicit estimates of the error under which the algorithmic complexity of the unperturbed case is preserved. We also compute explicit estimates that guarantee a given asymptotic linear convergence rate. Finally, we present some encouraging numerical results. The results are obtained in the framework of monotone linear complementarity problems; they therefore also apply to convex quadratic optimization.

Mots-clé : Linear programming, monotone linear complementarity, perturbation, decomposition, parallel computation, large scale problems, interior point methods, polynomial complexity, predictor-corrector algorithm, infeasible algorithms.
1. Introduction. In the last decade, a new generation of polynomial algorithms, based on the idea of computing a sequence of interior points, brought a revolution in the field of linear optimization [3, 4]. In the past few years, algorithms that follow the central path, using Newton directions on the equation of the central path, have focused the attention of the community because they both attain the best complexity known to date and converge quadratically [7, 6, 8]. Most currently available implementations of interior point methods are of this type.

For very large scale problems it may be useful to compute the Newton step by an iterative algorithm. It is often observed that iterative algorithms compute a rough approximation of the solution at small cost, while obtaining a precise value of the solution may be much more expensive. A question therefore arises: do the good convergence properties of path following algorithms remain if the Newton direction is computed only approximately?

This question is meaningful in the general framework of linear complementarity problems, in which linear optimization problems can be embedded (this also allows us to embed quadratic programming). Our concern is the case when the linearization of the complementarity condition is solved approximately; that is, we consider the usual linear equations corresponding to the complementarity condition with a perturbation in the right-hand side. We assume that the equations of the Newton step corresponding to the linear equations of the complementarity problem (i.e., in the case of linear optimization, the primal and dual linear feasibility constraints) are themselves not perturbed, but they can have a nonzero right-hand side if the starting point is infeasible.
In that framework, we are able to give explicit estimates on the precision with which the Newton step must be computed in order to keep the complexity at the same order as in the nonperturbed case, or to obtain asymptotic linear convergence at an arbitrary rate. We give such estimates for two algorithms of predictor-corrector type, in small and large neighborhoods, respectively.

The paper is structured as follows. Section 2 presents the main results for the perturbed predictor-corrector algorithm in small and large neighborhoods. The proofs are given in Section 3. Finally, some numerical results are reported in the last section.

Conventions. Given a vector $x \in \mathbb{R}^n$, the relation $x > 0$ means $x_i > 0$, $i = 1, 2, \dots, n$, while $x \ge 0$ means $x_i \ge 0$, $i = 1, 2, \dots, n$. We denote $\mathbb{R}^n_+ = \{x \in \mathbb{R}^n : x \ge 0\}$ and $\mathbb{R}^n_{++} = \{x \in \mathbb{R}^n : x > 0\}$. We write $\|\cdot\|$ instead of $\|\cdot\|_2$; whenever we use another norm, such as $\|\cdot\|_\infty$, we use the corresponding symbol. Given a vector $x$, the corresponding upper case symbol denotes as usual the diagonal matrix $X$ defined by the vector. The symbol $\mathbf{1}$ represents the vector of all ones, with dimension given by the context. We denote componentwise operations on vectors by the usual notations for real numbers: given two vectors $u, v$ of the same dimension, $uv$, $u/v$, etc. denote the vectors with components $u_i v_i$, $u_i/v_i$, etc. We denote the null space and range space of a matrix $M$ by $N(M)$ and $R(M)$, respectively.
The notation $x_k = O(\mu_k)$ means that there is a constant $K$ (dependent on the problem data) such that for every $k \in \mathbb{N}$, $\|x_k\| \le K\mu_k$. Similarly, if $x_k > 0$, $x_k = \Omega(\mu_k)$ means that $(x_k)^{-1} = O(1/\mu_k)$. Finally, $x_k \approx \mu_k$ means that $x_k = O(\mu_k)$ and $x_k = \Omega(\mu_k)$. We use the same notation for a point $x$ in a set parameterized by $\mu$, say $E_\mu$: we say that $x = O(\mu)$ (resp. $x = \Omega(\mu)$, $x \approx \mu$) whenever there is a constant $K$ such that $\|x\| \le K\mu$ (resp. $x^{-1} = O(1/\mu)$; $x = O(\mu)$ and $x = \Omega(\mu)$) for all $x \in E_\mu$ and all $\mu$ small enough. In particular, $x \approx 1$ in $E_\mu$ means that there are constants $K_2 \ge K_1 > 0$ such that any $x \in E_\mu$ satisfies $K_1 \le x_i \le K_2$, $i = 1, \dots, n$. Given two vector functions $x$ and $y$, $x \lesssim y$ means that $x_i \le y_i$ for $i = 1, \dots, n$, for $\mu$ small enough.

2. Main results. The monotone horizontal linear complementarity problem (LCP) is as follows: find $(x, s) \in \mathbb{R}^n \times \mathbb{R}^n$ satisfying
$$xs = 0, \quad Qx + Rs = h, \quad x, s \ge 0, \qquad (LCP)$$
where $h \in \mathbb{R}^n$ and $Q, R \in \mathbb{R}^{n \times n}$ are such that for any $u, v \in \mathbb{R}^n$,
$$(1)\qquad Qu + Rv = 0 \ \text{ implies } \ u^T v \ge 0.$$
We denote the feasible set and the set of solutions of (LCP) by
$$(2)\qquad F := \{(x, s) \in \mathbb{R}^n_+ \times \mathbb{R}^n_+ : Qx + Rs = h\},$$
$$(3)\qquad S := \{(x, s) \in F : xs = 0\}.$$
Also, we denote the set of strictly complementary solutions by
$$(4)\qquad S^0 := \{(x, s) \in S : x + s > 0\}.$$
It is well known that if (LCP) represents a linear programming problem, then if $S$ is nonempty so is $S^0$. This is not true in general, since it is easy to construct a quadratic program with nonempty $S$ and empty $S^0$. The existence of a strictly complementary solution will be an essential assumption in the asymptotic analyses of Section 3.

Let $(x^0, s^0, \mu_0) \in \mathbb{R}^n_+ \times \mathbb{R}^n_+ \times \mathbb{R}_{++}$ be given. Set
$$g = (h - Qx^0 - Rs^0)/\mu_0.$$
Then $(x^0, s^0, \mu_0)$ is an element of the set
$$(5)\qquad F_g := \{(x, s, \mu) \in \mathbb{R}^n_+ \times \mathbb{R}^n_+ \times \mathbb{R}_{++} : Qx + Rs = h - \mu g\}.$$
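As a concrete toy instance of (LCP), one may take $Q = I$ and $R = -I$, so that $Qx + Rs = h$ reads $x - s = h$ and the monotonicity condition (1) holds trivially ($Qu + Rv = 0$ forces $u = v$, hence $u^T v = \|u\|^2 \ge 0$). The small checker below is our own illustration, not part of the paper:

```python
# Toy monotone LCP instance with Q = I, R = -I, so Qx + Rs = h reads x - s = h.
# check_lcp_solution (our own helper) verifies the three conditions of (LCP).

def check_lcp_solution(x, s, h, tol=1e-12):
    """Check xs = 0 componentwise, x - s = h, and x >= 0, s >= 0."""
    comp = all(abs(xi * si) <= tol for xi, si in zip(x, s))
    feas = all(abs(xi - si - hi) <= tol for xi, si, hi in zip(x, s, h))
    nonneg = all(xi >= 0 for xi in x) and all(si >= 0 for si in s)
    return comp and feas and nonneg

h = [1.0, -2.0]
x_star, s_star = [1.0, 0.0], [0.0, 2.0]   # strictly complementary: x + s > 0
print(check_lcp_solution(x_star, s_star, h))   # -> True
```

Here $(x^*, s^*)$ is also strictly complementary, so for this instance $S^0 \neq \emptyset$.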
If in addition $x^0 s^0 = \mu_0\mathbf{1}$, then $(x^0, s^0, \mu_0)$ belongs to the infeasible central path pinned on $g$, defined as
$$(6)\qquad xs = \mu\mathbf{1}, \quad Qx + Rs = h - \mu g.$$
We consider algorithms for solving (LCP) that follow this infeasible central path approximately; whenever $g = 0$ (feasible case) it coincides with the usual central path. We assume that the algorithms under consideration generate points $(x, s, \mu)$ belonging to a "small" neighborhood of the form
$$V_g^\beta := \{(x, s, \mu) \in F_g : \|xs/\mu - \mathbf{1}\| \le \beta,\ \mu \le \mu_0\},$$
where $\beta > 0$ and $\mu_0 > 0$ are given constants, or to a "large" neighborhood
$$N_g^\gamma := \{(x, s, \mu) \in F_g : xs \ge (1-\gamma)\mu\mathbf{1},\ \mu \le \mu_0\},$$
where $0 < \gamma < 1$ and $\mu_0 > 0$ are again given constants. It is easily seen that
$$(7)\qquad V_g^\beta \subset N_g^\gamma \ \text{ for all } 0 < \beta \le \gamma,$$
$$(8)\qquad N_g^\gamma \subset V_g^\beta \ \text{ for } \beta \text{ of order } \sqrt{n}.$$
The algorithms studied in this paper start each iteration from data $(x, s, \mu) \in V_g^\beta$ (or $N_g^\gamma$) and then compute an approximate solution of (6) with $\mu$ replaced by $\sigma\mu$, where $\sigma \in [0, 1]$. The Newton direction $(u, v)$ is the solution of the following perturbed linear system:
$$(9)\qquad su + xv = -xs + \sigma\mu\mathbf{1} + \mu\delta, \quad Qu + Rv = (1-\sigma)\mu g,$$
where $\delta$ is a perturbation taking into account the error in the computation of the Newton step. (As the unperturbed right-hand side is of order $\mu$, we may expect to deal with perturbations of order $\mu$, i.e. with $\delta$ of order 1.) Note that the second equation is not perturbed. The new point, obtained by taking a steplength $\alpha \in (0, 1]$ along this direction, is
$$(10)\qquad x^\sharp = x + \alpha u, \quad s^\sharp = s + \alpha v, \quad \mu^\sharp := \mu(1 - \alpha + \alpha\sigma),$$
where $\mu^\sharp$ is such that $\mu^\sharp \le \mu_0$, and the new point $(x^\sharp, s^\sharp, \mu^\sharp)$ belongs to $V_g^\beta$ (or to $N_g^\gamma$), provided the "centering parameter" $\sigma \in [0, 1]$ and the steplength $\alpha \in (0, 1]$ are chosen such that the new point belongs to the corresponding neighborhood of the central path.
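A minimal sketch of computing the perturbed Newton direction from system (9), using the illustrative choice $Q = I$, $R = -I$. The dense Gaussian solver and all names are our own; for the large-scale setting the paper has in mind, one would use an iterative solver instead, which is precisely the source of the perturbation $\delta$:

```python
# Sketch: solve the block system (9)
#   [diag(s) diag(x)] [u]   [-xs + sigma*mu*1 + mu*delta]
#   [   Q       R   ] [v] = [      (1-sigma)*mu*g       ]
# with Q = I, R = -I (our toy choice).  solve_dense is a plain Gaussian
# elimination with partial pivoting, written out for self-containment.

def solve_dense(A, b):
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))  # partial pivoting
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    xsol = [0.0] * n
    for i in range(n - 1, -1, -1):
        xsol[i] = (M[i][n] - sum(M[i][j] * xsol[j] for j in range(i + 1, n))) / M[i][i]
    return xsol

def newton_direction(x, s, mu, g, sigma, delta):
    n = len(x)
    A = [[0.0] * (2 * n) for _ in range(2 * n)]
    b = [0.0] * (2 * n)
    for i in range(n):
        A[i][i], A[i][n + i] = s[i], x[i]              # s*u + x*v row
        b[i] = -x[i] * s[i] + sigma * mu + mu * delta[i]
        A[n + i][i], A[n + i][n + i] = 1.0, -1.0       # Qu + Rv row (Q=I, R=-I)
        b[n + i] = (1.0 - sigma) * mu * g[i]
    uv = solve_dense(A, b)
    return uv[:n], uv[n:]

# Affine-scaling direction (sigma = 0) at a feasible point, no perturbation:
u, v = newton_direction([2.0, 1.0], [0.5, 1.0], 1.0, [0.0, 0.0], 0.0, [0.0, 0.0])
print(u, v)   # here u = v = [-0.4, -0.5]
```

At an exactly centered point ($xs = \mu\mathbf{1}$) with $\delta = 0$, the centering direction ($\sigma = 1$) computed this way vanishes, as expected from (9).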
Let us note that under the monotonicity assumption (1), the system (9) has a unique solution. Note also that the system (9) can be written as
$$(11)\qquad su + xv = (1-\sigma)(-xs + \mu\delta) + \sigma(-xs + \mu\mathbf{1} + \mu\delta), \quad Qu + Rv = (1-\sigma)\mu g,$$
and therefore $(u, v) = \sigma(u^c, v^c) + (1-\sigma)(u^a, v^a)$, where $(u^c, v^c)$ is the solution of (9) with $\sigma = 1$, called the centering step, and $(u^a, v^a)$ is the solution of (9) with $\sigma = 0$, called the affine-scaling step.

Now we are ready to state a generic perturbed predictor-corrector algorithm (GPC), in which the set $G$ denotes $V_g^\beta$ or $N_g^\gamma$ depending on the algorithm:

Algorithm GPC
Data: $\mu_\infty > 0$; $(x^0, s^0, \mu_0) \in G$. $k := 0$.
repeat
  $x := x_k$; $s := s_k$; $\mu := \mu_k$.
  Corrector step: compute $(u^c, v^c)$, solution of (9) with $\sigma = 1$.
    $x(\alpha) := x + \alpha u^c$; $s(\alpha) := s + \alpha v^c$.
    Compute $\alpha_c \in (0, 1]$ such that $(x(\alpha_c), s(\alpha_c), \mu) \in G$.
    $x := x(\alpha_c)$; $s := s(\alpha_c)$.
  Predictor step: set $\sigma = 0$. Compute $(u^a, v^a)$, solution of (9) with $\sigma = 0$.
    $x(\alpha) := x + \alpha u^a$; $s(\alpha) := s + \alpha v^a$; $\mu(\alpha) := (1-\alpha)\mu$.
    Compute $\alpha_a$, the largest value in $(0, 1)$ such that $(x(\alpha), s(\alpha), \mu(\alpha)) \in G$ for all $\alpha \le \alpha_a$.
    $x_{k+1} := x(\alpha_a)$; $s_{k+1} := s(\alpha_a)$; $\mu_{k+1} := (1-\alpha_a)\mu_k$.
  $k := k + 1$.
until $\mu_k < \mu_\infty$.

In the following we consider two particular algorithms of the above general scheme, and we state our main results in both cases. The perturbed predictor-corrector algorithm in small neighborhoods is defined as follows:

Algorithm SPC. Fix $\varepsilon_c > 0$, $\varepsilon_a > 0$, $\beta \in (0, 1/2]$ and $G = V_g^\beta$. Specialize Algorithm GPC to this case with $\|\delta\| \le \varepsilon$, where $\varepsilon \le \varepsilon_c$ during the corrector step and $\varepsilon \le \varepsilon_a$ during the predictor step, and $\alpha_c = 1$ at each iteration.

Note that in GPC we require the point obtained after a restoration (corrector) step to lie in the neighborhood $G$, whereas here we fix $\alpha_c = 1$. We check below that when $\varepsilon_c$ is small enough, the point obtained in a corrector step with $\alpha_c = 1$ belongs to $V_g^{\beta/2}$. In addition, we give an explicit estimate of the size of the errors for which the complexity of $O(\sqrt{n}L)$ for the feasible case (resp. $O(nL)$ for the infeasible case) is preserved. Assuming the strict complementarity
hypothesis, we also compute a tighter bound under which a given asymptotic linear rate may be observed.

We say that $(x^0, s^0)$ dominates $(x^*, s^*)$ if $x^0 \ge x^*$ and $s^0 \ge s^*$. Similarly, we say that $(x^0, s^0)$ dominates a solution of (LCP) if there exists $(x^*, s^*) \in S$ which is dominated by $(x^0, s^0)$.

Theorem 2.1. Let $L := \log_2(\mu_0/\mu_\infty)$. We assume that $\varepsilon_c \le \beta/4$, and that $(x^0, s^0)$ dominates a solution of (LCP) whenever it is not feasible. Then Algorithm SPC has the following properties:
(i) (Feasible case) Assume $(x^0, s^0)$ to be feasible. If $\varepsilon_a \le \beta/4$, then the algorithm finds a feasible point such that $\mu_k \le \mu_\infty$ in at most $3\sqrt{n/\beta}\,L$ iterations.
(ii) (Infeasible case) If $\varepsilon_a = O(1/n)$, then the algorithm finds a point such that $\mu_k \le \mu_\infty$ in at most $O(nL)$ iterations.
(iii) (Asymptotic rate of convergence) Assume $S^0 \neq \emptyset$. Let $\tau \in (0, \beta)$ and $\varepsilon_a$ be such that
$$(12)\qquad \varepsilon_a + (1-\tau)\,\varepsilon_a\sqrt{\varepsilon_a^2 + 2} < \frac{\tau^2}{2(1-\tau)}.$$
Then $\limsup \mu_{k+1}/\mu_k \le \tau$. In particular, if $\varepsilon_a \le \beta/5$, then $\mu_{k+1} \le \mu_k/2$ for $k$ large enough.

We now state the perturbed predictor-corrector algorithm in large neighborhoods.

Algorithm LPC. Fix $\varepsilon_c > 0$, $\varepsilon_a > 0$, $\gamma \in (0, 1/2]$ and $G = N_g^\gamma$. Specialize Algorithm GPC to this case with, at each iteration, $\|\delta\| \le \varepsilon_c$ when computing the centering direction and $\|\delta\| \le \varepsilon_a$ when computing the affine direction, and
$$(13)\qquad \alpha_c = \min\left\{1,\ \frac{\mu\,(\gamma/2 - \|\delta\|)}{2\,\|u^c v^c\|}\right\}.$$
This choice of $\alpha_c$ implies, as we will see, that the point obtained after a corrector step belongs to the interior of the large neighborhood, so that the algorithm is well defined. The theorem below also gives an explicit estimate of the size of the errors for which the known complexity of the algorithm with $\delta = 0$, which is now $O(nL)$ for the feasible case and $O(n\sqrt{n}L)$ for the infeasible case, is preserved. Assuming the strict complementarity hypothesis, we also compute a tighter bound under which a given asymptotic linear rate may be observed.

Theorem 2.2. Let $L = \log_2(\mu_0/\mu_\infty)$. Assume that $\varepsilon_c \le \gamma/4$ and that $(x^0, s^0)$ dominates a solution of (LCP) whenever it is not feasible.
Then Algorithm LPC has the following properties:
(i) (Feasible case) Assume $(x^0, s^0)$ to be feasible. If $\varepsilon_a \le \gamma/4$, the algorithm finds a feasible point satisfying $\mu_k \le \mu_\infty$ in at most $6nL/\gamma^3$ iterations.
(ii) (Infeasible case) If $\varepsilon_a = O(1/\sqrt{n})$, then the algorithm finds a point satisfying $\mu_k \le \mu_\infty$ in $O(n\sqrt{n}L)$ iterations.
(iii) (Asymptotic rate of convergence) Assume $S^0 \neq \emptyset$. Let $\tau \in (0, 1)$ and $\varepsilon_a$ be such that $\varepsilon_a \le \dfrac{\tau^2\gamma^4}{40\sqrt{2}\,(1-\tau)\,n}$. Then $\limsup \mu_{k+1}/\mu_k \le \tau$.

3. Proof of main results.

3.1. Preliminary results. In this section we start by recalling some known technical results and deriving an easy consequence of them. These results will be used in the analysis of both algorithms. Set
$$\varphi := \sqrt{xs}, \quad d := \sqrt{x/s}, \quad \bar u := d^{-1}u, \quad \bar v := dv, \quad \bar Q := QD, \quad \bar R := RD^{-1}.$$
Multiplying the first equation in (9) by $1/\sqrt{xs}$, we obtain
$$\sqrt{\frac{s}{x}}\,u + \sqrt{\frac{x}{s}}\,v = -\sqrt{xs} + \frac{\sigma\mu}{\sqrt{xs}} + \frac{\mu\delta}{\sqrt{xs}}.$$
Therefore $(\bar u, \bar v)$ is the solution of the scaled equation
$$(14)\qquad \bar u + \bar v = -\varphi + \sigma\mu\varphi^{-1} + \mu\varphi^{-1}\delta, \quad \bar Q\bar u + \bar R\bar v = (1-\sigma)\mu g.$$
From (9) and (10) it is easily seen that
$$(15)\qquad \frac{x^\sharp s^\sharp}{\mu^\sharp} - \mathbf{1} = \frac{1}{1-\alpha+\alpha\sigma}\left[(1-\alpha)\left(\frac{xs}{\mu} - \mathbf{1}\right) + \alpha\delta + \alpha^2\,\frac{uv}{\mu}\right].$$
The first statement of the lemma below is due to Mizuno ([5], Lemma 1), and the second statement is due to Mizuno, Jarre and Stoer ([6], Corollary 1).

Lemma 3.1. (i) If $y, z \in \mathbb{R}^n$ satisfy $y^T z \ge 0$, then $\|yz\| \le \dfrac{1}{\sqrt{8}}\,\|y + z\|^2$.
(ii) Let $(\hat u, \hat v)$ be the solution of
$$\hat u + \hat v = \hat f, \quad \bar Q\hat u + \bar R\hat v = \hat g,$$
and let $\hat x, \hat s$ be such that $\bar Q\hat x + \bar R\hat s = \hat g$. Then
$$\|\hat u\| \le \|\hat f\| + \|\hat x\| + \|\hat s\|, \qquad \|\hat v\| \le \|\hat f\| + \|\hat x\| + \|\hat s\|.$$
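Lemma 3.1(i) is easy to test numerically. The following spot-check is our own sanity check, not a proof: it samples random vector pairs and verifies the inequality whenever the hypothesis $y^T z \ge 0$ holds.

```python
import random, math

# Spot-check of Lemma 3.1(i): if y^T z >= 0 then ||yz|| <= ||y + z||^2 / sqrt(8),
# where yz is the componentwise product.

def mizuno_holds(y, z):
    if sum(a * b for a, b in zip(y, z)) < 0:
        return True                       # hypothesis fails; nothing to check
    lhs = math.sqrt(sum((a * b) ** 2 for a, b in zip(y, z)))
    rhs = sum((a + b) ** 2 for a, b in zip(y, z)) / math.sqrt(8.0)
    return lhs <= rhs + 1e-12

random.seed(0)
ok = all(
    mizuno_holds([random.uniform(-1, 1) for _ in range(5)],
                 [random.uniform(-1, 1) for _ in range(5)])
    for _ in range(1000)
)
print(ok)   # -> True
```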
The following technical estimates will be useful in the sequel.

Lemma 3.2. Let $(x^0, s^0, \mu_0)$ and $(x, s, \mu)$ be two elements of $N_g^\gamma$ such that $(x^0, s^0)$ dominates a solution $(x^*, s^*)$ of (LCP). Set $(\tilde x, \tilde s) := (x^* - x^0, s^* - s^0)/\mu_0$. Then
$$x^T s^0 + s^T x^0 \le 4n\mu_0, \qquad Q\tilde x + R\tilde s = g, \qquad \|d^{-1}\tilde x\| + \|d\tilde s\| \le \frac{4n}{\sqrt{(1-\gamma)\mu}}.$$

Proof. The first statement follows from Theorem 2.2 of [2], while the second is obvious. Let us prove the last one. We have
$$\|d^{-1}\tilde x\| = \|\varphi^{-1}s\tilde x\| \le \|\varphi^{-1}\|_\infty\|s\tilde x\| \le \|\varphi^{-1}\|_\infty\, s^T|\tilde x|,$$
and similarly $\|d\tilde s\| \le \|\varphi^{-1}\|_\infty\, x^T|\tilde s|$. As $(x^0, s^0)$ dominates $(x^*, s^*)$, we obtain
$$\|d^{-1}\tilde x\| + \|d\tilde s\| \le \frac{\|\varphi^{-1}\|_\infty}{\mu_0}\left[s^T(x^0 - x^*) + x^T(s^0 - s^*)\right] \le \frac{\|\varphi^{-1}\|_\infty}{\mu_0}\,(s^T x^0 + x^T s^0).$$
Combining this with $\|\varphi^{-1}\|_\infty \le 1/\sqrt{(1-\gamma)\mu}$ and the first statement, the result follows. □

The next lemma gives upper bounds for the term $\|uv\|/\mu$ of (15). Part (i) will be used for the centering step and the feasible affine-scaling step, while part (ii) will be useful for the infeasible affine-scaling step.

Lemma 3.3. Let $(x, s, \mu) \in N_g^\gamma$ and let $(u, v)$ be the solution of (9). Then:
(i) If $u^T v \ge 0$, then
$$(16)\qquad \|uv\| \le \frac{1}{\sqrt{8}}\,\big\|{-\varphi} + \sigma\mu\varphi^{-1} + \mu\varphi^{-1}\delta\big\|^2.$$
(ii) If $(x^0, s^0)$ dominates a solution of (LCP) and $\mu \le \mu_0$, then the affine direction satisfies
$$(17)\qquad \|u^a v^a\| \le \frac{(\|\delta\| + 5n)^2}{1-\gamma}\,\mu.$$
Proof. (i) Using Lemma 3.1(i) and (14) we get
$$\|uv\| = \|\bar u\bar v\| \le \frac{1}{\sqrt{8}}\,\|\bar u + \bar v\|^2 = \frac{1}{\sqrt{8}}\,\big\|{-\varphi} + \sigma\mu\varphi^{-1} + \mu\varphi^{-1}\delta\big\|^2,$$
from which the result follows.
(ii) Applying Lemma 3.1(ii) and Lemma 3.2 to the scaled equation (14) with $\sigma = 0$, and using $\|\varphi^{-1}\|_\infty \le 1/\sqrt{(1-\gamma)\mu}$ and $\|\varphi\| \le \sqrt{n\mu}$, we have
$$\|\bar u\| \le \big\|{-\varphi} + \mu\varphi^{-1}\delta\big\| + \mu\left(\|d^{-1}\tilde x\| + \|d\tilde s\|\right) \le \sqrt{\mu}\left(\sqrt{n} + \frac{\|\delta\|}{\sqrt{1-\gamma}}\right) + \frac{4n\sqrt{\mu}}{\sqrt{1-\gamma}} \le \sqrt{\mu}\,\frac{\|\delta\| + 5n}{\sqrt{1-\gamma}}.$$
The same estimate holds for $\|\bar v\|$. Using $\|uv\| = \|\bar u\bar v\| \le \|\bar u\|\,\|\bar v\|$, the result follows. □

3.2. Asymptotic analysis. We now prove a general result that will be used in the study of the rapid asymptotic linear convergence. Here, as in the corresponding subsections below, we assume that (LCP) has a strictly complementary solution, i.e. $S^0 \neq \emptyset$. It is well known that in this case there is a unique partition
$$B \cup N = \{1, 2, \dots, n\}, \quad B \cap N = \emptyset,$$
such that for any $(x, s) \in S^0$ we have $x_B > 0$, $s_B = 0$, $x_N = 0$ and $s_N > 0$. Following Bonnans and Gonzaga [1] we rename the variables in the following way:
$$x \leftarrow (x_B, s_N) \quad \text{and} \quad s \leftarrow (x_N, s_B).$$
Algorithm GPC is invariant with respect to this permutation, whose only purpose is to simplify the analysis by assuming that $N = \emptyset$. Therefore, in the asymptotic analysis we will always refer to $x$ as the vector of large variables and to $s$ as the vector of small variables. Of course this change of variables can only be done in the analysis; it cannot be used by the algorithms, since $B$ and $N$ are unknown.

The following lemma indicates the orders of magnitude in the vicinity of the infeasible central path.

Lemma 3.4 ([2], Lemma 4.1). If $(x, s, \mu) \in G$, then $x \approx 1$, $s \approx \mu$ and $d \approx 1/\sqrt{\mu}$.

Define the scaled variables $\bar x := d^{-1}x/\sqrt{\mu}$, $\bar s := ds/\sqrt{\mu}$. This scaling transfers $x$ and $s$ to the same vector: $\bar x = d^{-1}x/\sqrt{\mu} = \varphi/\sqrt{\mu} = ds/\sqrt{\mu} = \bar s$. This scaling allows us to represent the solution of (9) as $O(\mu)$ perturbations of quantities that are easily analyzed. In order to do so, we represent the solution of the scaled equations (14) in terms
of orthogonal projections. Given $M \in \mathbb{R}^{m\times n}$, $q \in \mathbb{R}^m$ and the affine space defined by $Mx = q$, we define the projection operators $P_{M,q}$ and $P_M$ by
$$x \mapsto P_{M,q}\,x = \operatorname{argmin}\{\|w - x\| : Mw + q = 0\},$$
and $P_M := P_{M,0}$. Similarly, we denote by $\tilde P_M = I - P_M$ the orthogonal projection on $R(M^T)$. It is easily checked that $P_{M,q}\,x = P_M x + P_{M,q}\,0$ for any $x \in \mathbb{R}^n$ and $q \in \mathbb{R}^m$.

Lemma 3.5. Let $(x, s, \mu) \in G$.
(i) (Feasible case) If $(x, s)$ is feasible, then the solution of the scaled system (14) satisfies
$$\bar u = P_{\bar Q}(\sigma\mu\varphi^{-1}) + P_{\bar Q}(\mu\varphi^{-1}\delta) + O(\mu),$$
$$\bar v = -\varphi + \tilde P_{\bar Q}(\sigma\mu\varphi^{-1}) + \tilde P_{\bar Q}(\mu\varphi^{-1}\delta) + O(\mu).$$
(ii) (Infeasible case) Let $(\tilde x, \tilde s)$ be such that $Q\tilde x + R\tilde s = g$. Then the solution of the scaled system (14) satisfies
$$\bar u = P_{\bar Q}(\sigma\mu\varphi^{-1} + \mu d\tilde s) + P_{\bar Q}(\mu\varphi^{-1}\delta) + O(\mu),$$
$$\bar v = -\varphi + \tilde P_{\bar Q}\big(\sigma\mu\varphi^{-1} + \mu P_{\bar Q}(d\tilde s)\big) + \tilde P_{\bar Q}(\mu\varphi^{-1}\delta) + O(\mu).$$

Proof. Let us first check that
$$(18)\qquad \bar Q\hat u + \bar R\hat v = 0 \ \Rightarrow\ \hat v \in R(\bar Q^T).$$
Indeed, if $(\hat u, \hat v)$ satisfies the above relation, then $\bar Q(\hat u + u^0) + \bar R\hat v = 0$ for an arbitrary $u^0 \in N(\bar Q)$. Therefore $(\hat u + u^0)^T\hat v \ge 0$. As $u^0$ is an arbitrary element of the vector space $N(\bar Q)$, it follows that $(u^0)^T\hat v = 0$, i.e. $\hat v \in N(\bar Q)^\perp = R(\bar Q^T)$, as was to be proved.
(i) In the feasible case we have $\bar Q\bar u + \bar R\bar v = 0$. So, using (18), it follows that $\bar v = dv \in R(\bar Q^T)$. Set
$$(19)\qquad f := -\varphi + \sigma\mu\varphi^{-1} + \mu\varphi^{-1}\delta.$$
Then $\bar u \in f + R(\bar Q^T)$ and $\bar Q\bar u + \bar R\bar v = 0$. This is the optimality system characterizing the orthogonal projection of $f$ onto the set defined by the second relation. Therefore
$$\bar u = P_{\bar Q,\bar R\bar v}\,f = P_{\bar Q}\,f + P_{\bar Q,\bar R\bar v}\,0.$$
This expression may be simplified by noticing that $P_{\bar Q}\,\varphi = 0$. Indeed, let $(x^*, s^*) \in S$; then $s^* = 0$ and $Q(x - x^*) + Rs = 0$, and therefore, using (18), $ds \in R(\bar Q^T)$ and $\varphi = ds \in R(\bar Q^T)$. Hence we deduce that
$$(20)\qquad \bar u = P_{\bar Q}(\sigma\mu\varphi^{-1} + \mu\varphi^{-1}\delta) + P_{\bar Q,\bar R\bar v}\,0.$$
We obtain the desired expression for $\bar u$ by checking that
$$(21)\qquad P_{\bar Q,\bar R\bar v}\,0 = O(\mu).$$
Indeed, using $\mu\varphi^{-1}\delta = O(\sqrt\mu)$ and $\|\bar u + \bar v\| = \|f\| = O(\sqrt\mu)$, and, as $\bar u^T\bar v \ge 0$, $\|\bar v\| \le \|f\| = O(\sqrt\mu)$, whence, using Lemma 3.4, $\|v\| = \|d^{-1}\bar v\| = O(\mu)$. Now let $Q^-$ be a right inverse of $Q$. Then $\bar Q(D^{-1}Q^-\bar R\bar v) = \bar R\bar v$, whence
$$\|P_{\bar Q,\bar R\bar v}\,0\| \le \|D^{-1}Q^-\bar R\bar v\| = O(\|v\|) = O(\mu).$$
This proves the formula for $\bar u$, from which we deduce that
$$\bar v = f - \bar u = -\varphi + \sigma\mu\varphi^{-1} + \mu\varphi^{-1}\delta - P_{\bar Q}(\sigma\mu\varphi^{-1}) - P_{\bar Q}(\mu\varphi^{-1}\delta) + O(\mu),$$
and so the last relation of part (i) follows.
(ii) Now let us deal with the infeasible case. Set
$$\hat u := \bar u - (1-\sigma)\mu d^{-1}\tilde x, \quad \hat v := \bar v - (1-\sigma)\mu d\tilde s, \quad \hat f := -(1-\sigma)\mu\,(d^{-1}\tilde x + d\tilde s).$$
Then, using (14), we get
$$(22)\qquad \hat u + \hat v = f + \hat f, \quad \bar Q\hat u + \bar R\hat v = 0.$$
Hence, as in the first part of the proof, we have $\hat u = P_{\bar Q}(f + \hat f) + O(\mu)$. Let us check that $P_{\bar Q}(\varphi + \mu d\tilde s) = 0$. Let $(x^*, s^*) \in S$, $s^* = 0$; then $Q(x + \mu\tilde x - x^*) + R(s + \mu\tilde s) = 0$, and from (18) we deduce that $d(s + \mu\tilde s) \in R(\bar Q^T)$. Therefore $\varphi + \mu d\tilde s = d(s + \mu\tilde s) \in R(\bar Q^T) = N(\bar Q)^\perp$. Combining this with the above display and Lemma 3.4, we deduce
$$\hat u = P_{\bar Q}(\sigma\mu\varphi^{-1} + \mu d\tilde s) + P_{\bar Q}(\mu\varphi^{-1}\delta) + O(\mu).$$
Using $\bar u = \hat u + O(\mu)$, we obtain the formula for $\bar u$, from which we deduce the formula for $\bar v = f - \bar u$. □

We will use the above lemma for the affine-scaling step in the analysis of rapid asymptotic linear convergence.
3.3. The perturbed predictor-corrector algorithm in small neighborhoods. Let us define the centrality measure as the mapping
$$\theta(x, s, \mu) := \left\|\frac{xs}{\mu} - \mathbf{1}\right\|.$$
If $(x, s, \mu) \in F_g$ and $\theta(x, s, \mu) = 0$, then $(x, s)$ is the infeasible central point associated with the parameter value $\mu$.

3.3.1. Polynomial convergence. Centering step. We start by describing the effect of a centering step on the centrality measure.

Lemma 3.6. Let $(x, s, \mu)$ be such that $\theta(x, s, \mu) \le \beta$. If $\varepsilon_c > 0$ is so small that
$$(23)\qquad \varepsilon_c + \frac{(\beta + \varepsilon_c)^2}{\sqrt{8}\,(1-\beta)} \le \frac{\beta}{2},$$
then
$$\theta_c := \left\|\frac{(x + u^c)(s + v^c)}{\mu} - \mathbf{1}\right\| \le \frac{\beta}{2}.$$
This holds in particular if $\beta \le 1/4$ and $\varepsilon_c \le \beta/4$.

Proof. From (15) with $\alpha = \sigma = 1$ we get
$$(24)\qquad \frac{(x + u^c)(s + v^c)}{\mu} - \mathbf{1} = \delta + \frac{u^c v^c}{\mu}.$$
From (16), using $\theta(x, s, \mu) \le \beta$ and $\sigma = 1$, we have
$$(25)\qquad \|u^c v^c\| \le \frac{1}{\sqrt{8}}\,\big\|\mu\varphi^{-1}(\mathbf{1} - xs/\mu + \delta)\big\|^2 \le \frac{(\beta + \|\delta\|)^2}{\sqrt{8}\,(1-\beta)}\,\mu.$$
Combining (24), (25) and $\|\delta\| \le \varepsilon_c$, we deduce
$$\theta_c \le \varepsilon_c + \frac{(\beta + \varepsilon_c)^2}{\sqrt{8}\,(1-\beta)},$$
and (23) gives the conclusion. □

Affine-scaling step. We now analyse the effect of an affine-scaling step on the centrality measure, beginning with an upper bound for this measure. From (15) with $\sigma = 0$, we get
$$\left\|\frac{(x + \alpha u^a)(s + \alpha v^a)}{\mu^\sharp} - \mathbf{1}\right\| \le \left\|\frac{xs}{\mu} - \mathbf{1}\right\| + \frac{\alpha\|\delta\| + \alpha^2\,\|u^a v^a\|/\mu}{1-\alpha}.$$
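Before analysing the affine step further, the mechanics of the small-neighborhood scheme can be sketched on a toy feasible instance. The walk-through below is our own simplification, not the paper's code: $n = 1$, $g = 0$, $Q = 1$, $R = -1$ (so $u = v$ and (9) solves in closed form), full corrector step $\alpha_c = 1$ as in SPC, and a crude halving rule for the predictor steplength:

```python
# Toy n = 1, feasible walk-through of the predictor-corrector scheme in the
# small neighborhood V (centrality |xs/mu - 1| <= beta).  Names and the
# step-size search are our own sketch, with the perturbation delta set to 0.

def centrality(x, s, mu):
    return abs(x * s / mu - 1.0)               # ||xs/mu - 1|| for n = 1

def newton(x, s, mu, sigma, delta):
    # With Q = 1, R = -1, system (9) forces u = v, solved in closed form.
    return (-x * s + sigma * mu + mu * delta) / (x + s)

def gpc(x, s, mu, beta=0.25, mu_target=1e-10, delta=0.0):
    it = 0
    while mu >= mu_target:
        u = newton(x, s, mu, 1.0, delta)       # corrector: sigma = 1, step 1
        x, s = x + u, s + u
        u = newton(x, s, mu, 0.0, delta)       # predictor: sigma = 0
        alpha = 0.999
        while centrality(x + alpha * u, s + alpha * u, (1 - alpha) * mu) > beta:
            alpha *= 0.5                       # halve until back inside V
        x, s, mu = x + alpha * u, s + alpha * u, (1 - alpha) * mu
        it += 1
    return x, s, mu, it

x, s, mu, it = gpc(1.0, 1.0, 1.0)
print(mu, it)   # mu has been driven below 1e-10
```

For this scalar instance the strict complementarity assumption fails ($x^* = s^* = 0$), so one observes a linear rate with $\mu$ roughly halved at each iteration rather than superlinear decrease.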
By Lemma 3.6, the centrality measure after a centering step is at most $\beta/2$. Hence we deduce that the point obtained with a steplength $\alpha$ along the direction $(u^a, v^a)$ belongs to the small neighborhood whenever
$$(26)\qquad \frac{\beta}{2} + \frac{\alpha\|\delta\| + \alpha^2\,\|u^a v^a\|/\mu}{1-\alpha} \le \beta.$$
Therefore the main point is to estimate $\|u^a v^a\|$.

Lemma 3.7. Let $(x, s, \mu) \in V_g^{\beta/2}$. Then:
(i) (Feasible case) If $(x, s)$ is feasible, we have
$$(27)\qquad \|u^a v^a\| \le \frac{n\,(1 + \beta/2 + \varepsilon_a)^2}{(1-\beta/2)\sqrt{8}}\,\mu.$$
If in addition $0 < \varepsilon_a \le \beta/2$ and $\hat\alpha \le 1/2$ verifies
$$(28)\qquad \frac{2n\hat\alpha^2\,(1 + \beta/2 + \varepsilon_a)^2}{(1-\beta/2)\sqrt{8}} \le \frac{\beta}{2} - \hat\alpha\varepsilon_a,$$
then $(x + \alpha u^a, s + \alpha v^a, (1-\alpha)\mu) \in V_g^\beta$ for all $\alpha \in (0, \hat\alpha]$. In particular, if $\varepsilon_a \le \beta/4$ then $\bar\alpha_a \ge \frac13\sqrt{\beta/n}$.
(ii) (Infeasible case) Let $(x^0, s^0)$ dominate a solution of (LCP). If $\varepsilon_a = O(1/n)$, then $\bar\alpha_a = \Omega(1/n)$.

Proof. (i) Remembering that $\theta(x, s, \mu) \le \beta/2$ and using $(u^a)^T v^a \ge 0$, we deduce from (16) with $\sigma = 0$ that
$$\|u^a v^a\| \le \frac{1}{\sqrt 8}\left(\|\varphi\| + \mu\|\varphi^{-1}\|_\infty\|\delta\|\right)^2.$$
Using $\|\varphi\|^2 = x^T s \le n\mu(1 + \beta/2)$, $\mu\|\varphi^{-1}\|_\infty \le \sqrt{\mu}/\sqrt{1-\beta/2}$ and $\|\delta\| \le \varepsilon_a$, we obtain
$$\|u^a v^a\| \le \frac{\big[\sqrt{n(1+\beta/2)} + \varepsilon_a/\sqrt{1-\beta/2}\big]^2}{\sqrt 8}\,\mu \le \frac{n\,(1 + \beta/2 + \varepsilon_a)^2}{(1-\beta/2)\sqrt 8}\,\mu.$$
This proves (27). Using $\theta(x, s, \mu) \le \beta/2$, (26) and $\|\delta\| \le \varepsilon_a$, we deduce that $\alpha$ is feasible whenever
$$\frac{\alpha\varepsilon_a + \alpha^2\,\|u^a v^a\|/\mu}{1-\alpha} \le \frac{\beta}{2}.$$
As the function $f : (0, 1) \to \mathbb{R}$ defined by
$$f(\alpha) = \frac{\alpha\varepsilon_a + \alpha^2\,\|u^a v^a\|/\mu}{1-\alpha}$$
is increasing, to obtain the desired inequality it suffices to show that $f(\hat\alpha) \le \beta/2$. Using $\hat\alpha \le 1/2$, we get
$$f(\hat\alpha) \le \hat\alpha\varepsilon_a + 2\hat\alpha^2\,\frac{\|u^a v^a\|}{\mu}.$$
Hence, using part (i) and (28), after a displacement of steplength $\hat\alpha$ the new point belongs to $V_g^\beta$. For the proof of the last statement, observe that if $\bar\alpha_a \ge 1/2$, the result is obvious. Otherwise it suffices to prove that $\hat\alpha := \frac13\sqrt{\beta/n}$ satisfies (28). In fact, as $\beta \le 1/2$ and $0 < \varepsilon_a \le \beta/4$, we have $1 + \beta/2 + \varepsilon_a \le 11/8$ and $1 - \beta/2 \ge 3/4$, so
$$\frac{2n\hat\alpha^2\,(1+\beta/2+\varepsilon_a)^2}{(1-\beta/2)\sqrt 8} \le \frac{2\beta}{9}\cdot\frac{(11/8)^2}{(3/4)\sqrt 8} < \frac{\beta}{4} \le \frac{\beta}{2} - \hat\alpha\varepsilon_a,$$
since $\hat\alpha\varepsilon_a \le \beta/12$.
(ii) Combining (17) and (26), we deduce that $\alpha$ is feasible whenever
$$\frac{\alpha\|\delta\|}{1-\alpha} + \frac{\alpha^2}{1-\alpha}\cdot\frac{(\|\delta\| + 5n)^2}{1-\gamma} \le \frac{\beta}{2}.$$
If $\bar\alpha_a \ge 1/2$, the conclusion is obtained. Otherwise, assuming $\|\delta\| \le c/n$, we see that $\alpha$ is feasible whenever $\alpha n$ is small enough; the left-hand side is null when $\alpha = 0$, so the inequality holds for $\alpha = \Omega(1/n)$. The result follows. □

Using the above results, we will easily prove in Subsection 3.3.3 that Algorithm SPC has polynomial convergence.

3.3.2. Rapid asymptotic linear convergence.

Lemma 3.8. Let $(x, s, \mu)$ be such that $\theta(x, s, \mu) \le \beta$. Then
$$\frac{\|u^a v^a\|}{\mu} \le \varepsilon_a\sqrt{\varepsilon_a^2 + 2} + O(\mu).$$
Proof. Define $\bar u^a := d^{-1}u^a/\sqrt\mu$, $\bar v^a := dv^a/\sqrt\mu$ and $\eta := \sqrt\mu\,\varphi^{-1}\delta$. From Lemma 3.5 with $\sigma = 0$ we get
$$(29)\qquad \frac{u^a v^a}{\mu} = \bar u^a\bar v^a = \big(P_{\bar Q}\,\eta\big)\left(-\frac{\varphi}{\sqrt\mu} + \tilde P_{\bar Q}\,\eta\right) + O(\mu).$$
Using Lemma 3.1(i), the fact that $P_{\bar Q}$ and $\tilde P_{\bar Q}$ are contractions, $\|\varphi/\sqrt\mu\| \le \sqrt{1+\beta}$ and $\|\eta\| \le \varepsilon_a(1 + O(\mu))$, we obtain
$$\left\|\big(P_{\bar Q}\,\eta\big)\left(-\frac{\varphi}{\sqrt\mu} + \tilde P_{\bar Q}\,\eta\right)\right\| \le \frac{\|\eta\|^2}{\sqrt 8} + \|\eta\|\left\|\frac{\varphi}{\sqrt\mu}\right\|;$$
optimizing this bound over the admissible centrality values then yields the stated estimate. Combining the above inequality and (29), we obtain the conclusion. □

Lemma 3.9. Let $\tau \in (0, \beta)$ and $(x, s, \mu)$ be such that $\theta(x, s, \mu) \le \beta/2$. If there exists a strictly complementary solution and (12) is satisfied, then $\bar\alpha_a \ge 1-\tau$ for $k$ large enough.

Proof. Using the same function $f$ as in the proof of Lemma 3.7, it suffices to show $f(1-\tau) \le \beta/2$ to obtain the conclusion. From Lemma 3.8, we get
$$f(1-\tau) = \frac{(1-\tau)\varepsilon_a + (1-\tau)^2\,\|u^a v^a\|/\mu}{\tau} \le \frac{1-\tau}{\tau}\left(\varepsilon_a + (1-\tau)\,\varepsilon_a\sqrt{\varepsilon_a^2 + 2}\right) + O(\mu).$$
Hence, using (12), the result follows. □

In the next subsection we will use the previous results to prove the asymptotic rate of convergence of Algorithm SPC.
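The projections $P_M$ (onto $N(M)$) and $\tilde P_M = I - P_M$ (onto $R(M^T)$) used in Lemmas 3.5, 3.8 and 3.12 are ordinary orthogonal projections, so they are idempotent contractions. The helper below (our own single-row sketch, via $P_M x = x - M^T(MM^T)^{-1}Mx$) illustrates exactly the two properties invoked in the proofs:

```python
import math

# Orthogonal projection onto the null space of a single-row matrix M:
#   P_M x = x - M^T (M M^T)^{-1} M x.
# Our own helper, for illustrating the contraction/idempotence properties.

def project_null(M_row, x):
    mm = sum(c * c for c in M_row)
    if mm == 0.0:
        return x[:]
    t = sum(c * xi for c, xi in zip(M_row, x)) / mm
    return [xi - t * c for xi, c in zip(x, M_row)]

M = [1.0, 1.0]
x = [3.0, 1.0]
p = project_null(M, x)
print(p)                                  # -> [1.0, -1.0]

norm = lambda v: math.sqrt(sum(c * c for c in v))
# Contraction (||P_M x|| <= ||x||) and idempotence (P_M P_M = P_M):
print(norm(p) <= norm(x), project_null(M, p) == p)   # -> True True
```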
3.3.3. Proof of Theorem 2.1. (i) By Lemma 3.6, we know that the centrality measure after a centering step is at most $\beta/2$. In the feasible case, from Lemma 3.7(i), the steplength of Algorithm SPC satisfies $\alpha_a \ge \frac13\sqrt{\beta/n}$. Then
$$\mu_k \le \left(1 - \frac13\sqrt{\beta/n}\right)^k\mu_0.$$
Using $\log(1-t) \le -t$, we obtain the conclusion: at most $3\sqrt{n/\beta}\,L$ iterations are needed.
(ii) Similarly, in the infeasible case, from $\bar\alpha_a = \Omega(1/n)$, obtained in part (ii) of Lemma 3.7, we deduce that no more than $O(nL)$ iterations are necessary.
(iii) This is a simple consequence of Lemmas 3.6 and 3.9.

3.4. The perturbed predictor-corrector algorithm in large neighborhoods. In order to measure how interior a point $(x, s, \mu)$ is with respect to $N_g^\gamma$, we will use the distance of $xs/\mu$ to the boundary of the set
$$T := \{z \in \mathbb{R}^n : z \ge (1-\gamma)\mathbf{1}\}.$$

3.4.1. Polynomial convergence. Centering step. Let us denote
$$x^c = x + \alpha_c u^c, \quad s^c = s + \alpha_c v^c.$$

Lemma 3.10. Let $(x, s, \mu) \in N_g^\gamma$ and $\gamma \in (0, 1/2]$. If $\varepsilon_c \le \gamma/4$, then $(x^c, s^c, \mu) \in N_g^\gamma$ and
$$(30)\qquad \operatorname{dist}\left(\frac{x^c s^c}{\mu},\ \partial T\right) \ge \frac{\sqrt{2}\,\gamma^3}{32n}.$$

Proof. From (15) with $\sigma = 1$ and $\alpha = \alpha_c$,
$$(31)\qquad \frac{x^c s^c}{\mu} = (1-\alpha_c)\,\frac{xs}{\mu} + \alpha_c\mathbf{1} + \alpha_c\delta + \alpha_c^2\,\frac{u^c v^c}{\mu}.$$
Using $(x, s, \mu) \in N_g^\gamma$ and $\alpha_c \le 1$, we get
$$(1-\alpha_c)\,\frac{xs}{\mu} + \alpha_c\mathbf{1} \ge (1-\gamma)\mathbf{1} + \alpha_c\gamma\mathbf{1};$$
since the distance to $\partial T$ of a point of $T + \alpha_c\gamma\mathbf{1}$ is at least $\alpha_c\gamma$,
we have
$$\operatorname{dist}\left((1-\alpha_c)\,\frac{xs}{\mu} + \alpha_c\mathbf{1},\ \partial T\right) \ge \alpha_c\gamma \ge \frac{\alpha_c\gamma}{2}.$$
Using (31), the above inequality and $\alpha_c > 0$, we get
$$\operatorname{dist}\left(\frac{x^c s^c}{\mu},\ \partial T\right) \ge \alpha_c\gamma - \alpha_c\|\delta\| - \alpha_c^2\,\frac{\|u^c v^c\|}{\mu} \ge \alpha_c\left(\frac{\gamma}{2} - \|\delta\|\right) - \alpha_c^2\,\frac{\|u^c v^c\|}{\mu}.$$
Let us first consider the case $\alpha_c = 1$. Then, from (13), $\|u^c v^c\|/\mu \le (\gamma/2 - \|\delta\|)/2$. Using $\|\delta\| \le \gamma/4$, we deduce that $\operatorname{dist}(x^c s^c/\mu, \partial T) \ge (\gamma/2 - \|\delta\|)/2 \ge \gamma/8$, and the result follows. Otherwise
$$\operatorname{dist}\left(\frac{x^c s^c}{\mu},\ \partial T\right) \ge \frac{(\gamma/2 - \|\delta\|)^2}{4\,\|u^c v^c\|/\mu}.$$
Using (16), $(x, s, \mu) \in N_g^\gamma$ and $\|\delta\| \le 1$, we obtain
$$\frac{\|u^c v^c\|}{\mu} \le \frac{1}{\sqrt 8}\left\|\sqrt{\frac{xs}{\mu}} - \sqrt{\frac{\mu}{xs}} - \sqrt{\frac{\mu}{xs}}\,\delta\right\|^2 \le \frac{3n}{\sqrt 8}.$$
Combining with the previous inequality and using $\|\delta\| \le \gamma/4$, we obtain the conclusion. □

Affine-scaling step. From (15) with $\sigma = 0$, denoting by $(x^\sharp, s^\sharp, \mu^\sharp)$ the point obtained after a step of length $\alpha$, we get
$$(32)\qquad \frac{x^\sharp s^\sharp}{\mu^\sharp} = \frac{xs}{\mu} + \frac{\alpha}{1-\alpha}\,\delta + \frac{\alpha^2}{1-\alpha}\cdot\frac{u^a v^a}{\mu},$$
and therefore, by Lemma 3.10,
$$(33)\qquad \operatorname{dist}\left(\frac{x^\sharp s^\sharp}{\mu^\sharp},\ \partial T\right) \ge \frac{\sqrt 2\,\gamma^3}{32n} - \frac{\alpha}{1-\alpha}\,\|\delta\| - \frac{\alpha^2}{1-\alpha}\cdot\frac{\|u^a v^a\|}{\mu}.$$
The following technical lemma is used in the proof of Theorem 2.2.

Lemma 3.11. Let $(x, s, \mu) \in N_g^\gamma$.
(i) Assume that $(x, s)$ is feasible. If $\varepsilon_a \le \gamma/4$, then
$$(34)\qquad \|u^a v^a\| \le 3n\mu.$$
Moreover, if $\operatorname{dist}(xs/\mu, \partial T) \ge \frac{\sqrt 2\,\gamma^3}{32n}$, then $\bar\alpha := \frac{\gamma^3}{6n}$ satisfies
$$(x + \alpha u^a,\ s + \alpha v^a,\ (1-\alpha)\mu) \in N_g^\gamma \quad \text{for all } \alpha \in (0, \bar\alpha].$$
(ii) If $(x^0, s^0)$ dominates a solution of (LCP), $\operatorname{dist}(xs/\mu, \partial T) \ge \frac{\sqrt 2\,\gamma^3}{32n}$ and $\varepsilon_a = O(1/\sqrt n)$, then $\bar\alpha_a = \Omega(1/(n\sqrt n))$.

Proof. (i) Whenever $(x, s)$ is feasible, we have $(u^a)^T v^a \ge 0$. Hence from (16) with $\sigma = 0$, using $(x, s, \mu) \in N_g^\gamma$ and $\varepsilon_a \le \gamma/4 \le 1$, we get
$$\frac{\|u^a v^a\|}{\mu} \le \frac{1}{\sqrt 8}\left(\left\|\sqrt{\frac{xs}{\mu}}\right\| + \left\|\sqrt{\frac{\mu}{xs}}\right\|_\infty\|\delta\|\right)^2 \le 3n.$$
This proves (34). It is easily checked that $\bar\alpha$ satisfies
$$(35)\qquad 2\bar\alpha\|\delta\| + 6n\bar\alpha^2 \le \frac{\sqrt 2\,\gamma^3}{32n},$$
which, using (33) and (34), guarantees for $\alpha \le 1/2$ and $\|\delta\| \le \gamma/4$ that
$$\operatorname{dist}\left(\frac{x^\sharp s^\sharp}{\mu^\sharp},\ \partial T\right) \ge \frac{\sqrt 2\,\gamma^3}{32n} - 2\alpha\|\delta\| - 6n\alpha^2 \ge 0.$$
In view of (35), the right-hand side of (33) is positive whenever $\alpha \in (0, \bar\alpha]$. The conclusion follows.
(ii) If $\bar\alpha_a \ge 1/2$, the conclusion is obtained. Otherwise, combining (33) and (17), we get
$$\operatorname{dist}\left(\frac{x^\sharp s^\sharp}{\mu^\sharp},\ \partial T\right) \ge \frac{\sqrt 2\,\gamma^3}{32n} - 2\alpha\|\delta\| - 2\alpha^2\,\frac{(\|\delta\| + 5n)^2}{1-\gamma}.$$
Assuming $\|\delta\| \le c/\sqrt n$, we see that $\alpha$ is feasible whenever
$$\frac{2c\alpha}{\sqrt n} + 4\alpha^2\left(\frac{c}{\sqrt n} + 5n\right)^2 \le \frac{\sqrt 2\,\gamma^3}{32n}.$$
Therefore $\alpha$ is feasible whenever $c(\alpha n\sqrt n)$ and $(\alpha n\sqrt n)^2$ are at most of order $\gamma^3$. The left-hand side is null when $\alpha = 0$; therefore the inequality holds whenever $\alpha n\sqrt n$ is small enough. The result follows. □

Using the above results, we will prove in Subsection 3.4.3 that Algorithm LPC has polynomial convergence.

3.4.2. Asymptotic rate of convergence.

Lemma 3.12. Let $(x, s, \mu) \in N_g^\gamma$. If there exists a strictly complementary solution and $\|\delta\| \le \varepsilon_a$, then
$$\frac{\|u^a v^a\|}{\mu} \le 2\varepsilon_a + O(\mu).$$

Proof. Using Lemma 3.5 with $\sigma = 0$, and setting $\eta := \sqrt\mu\,\varphi^{-1}\delta$, we get
$$(36)\qquad \frac{u^a v^a}{\mu} = \bar u^a\bar v^a = \big(P_{\bar Q}\,\eta\big)\left(-\frac{\varphi}{\sqrt\mu} + \tilde P_{\bar Q}\,\eta\right) + O(\mu).$$
Using $\|\varphi/\sqrt\mu\| = 1 + O(\mu)$ on the asymptotic scale, $\|\eta\| \le \varepsilon_a(1 + O(\mu))$, and the fact that a projection is a contraction, we have
$$\left\|\big(P_{\bar Q}\,\eta\big)\left(-\frac{\varphi}{\sqrt\mu} + \tilde P_{\bar Q}\,\eta\right)\right\| \le \|\eta\|\left(\left\|\frac{\varphi}{\sqrt\mu}\right\| + \|\eta\|\right) \le 2\varepsilon_a + O(\mu).$$
Combining the above inequality and (36), we obtain the conclusion. □

Lemma 3.13. Let $\tau \in (0, 1)$ and let $(x, s, \mu) \in N_g^\gamma$ be such that $\operatorname{dist}(xs/\mu, \partial T) \ge \frac{\sqrt 2\,\gamma^3}{32n}$. If there exists a strictly complementary solution and $\varepsilon_a \le \frac{\tau^2\gamma^4}{40\sqrt 2\,(1-\tau)\,n}$, then $\bar\alpha_a \ge 1-\tau$ for $k$ large enough.

Proof. It suffices to show that the right-hand side of (33) is positive whenever $\alpha \le 1-\tau$, i.e.
$$\frac{(1-\tau)\,\|\delta\|}{\tau} + \frac{(1-\tau)^2}{\tau}\cdot\frac{\|u^a v^a\|}{\mu} < \frac{\sqrt 2\,\gamma^3}{32n}.$$
By Lemma 3.12, this will be satisfied if
$$\frac{1-\tau}{\tau}\,\varepsilon_a + \frac{2(1-\tau)^2}{\tau}\,\varepsilon_a < \frac{\sqrt 2\,\gamma^3}{32n},$$
which holds under the assumption on $\varepsilon_a$. The conclusion follows. □

The results of this subsection will be useful for obtaining the asymptotic rate of convergence of Algorithm LPC.

3.4.3. Proof of Theorem 2.2. (i) From Lemma 3.10 we get $\operatorname{dist}(x^c s^c/\mu, \partial T) \ge \frac{\sqrt 2\,\gamma^3}{32n}$; hence from Lemma 3.11(i), the steplength of Algorithm LPC satisfies $\alpha_a \ge \gamma^3/(6n)$. We obtain the conclusion following the same argument as in the proof of part (i) of Theorem 2.1.
(ii) Similarly, in the infeasible case, from $\bar\alpha_a = \Omega(1/(n\sqrt n))$, obtained in part (ii) of Lemma 3.11, we deduce that no more than $O(n\sqrt n\,L)$ iterations are necessary.
(iii) This is an immediate consequence of Lemma 3.13.

4. Numerical experiments. In this section we present some numerical results that strongly support the theoretical estimates obtained in the preceding sections. These experiments are limited to the large-neighborhood predictor-corrector algorithm, choosing the size of the neighborhood $\gamma = 0.01$. The algorithm is the one described in this paper, in which the stepsize for the centering displacement is computed as follows: starting from a unit step, we divide the step by two until the new point belongs to the large neighborhood. This is a rather rough line search; because it proved to be efficient, we content ourselves with it.

We apply this algorithm to a family of linear programming problems in standard form, i.e.
$$(37)\qquad \min_x\ c^T x\ ;\quad Ax = b\ ;\quad x \ge 0,$$
where $x \in \mathbb{R}^n$ and $b \in \mathbb{R}^p$. The necessary and sufficient optimality conditions for this problem are
$$xs = 0, \quad Ax = b, \quad c + A^T\lambda = s, \quad x \ge 0, \quad s \ge 0,$$
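These optimality conditions can be checked mechanically on a tiny instance. The example below is ours (data chosen for illustration): minimize $x_1 + 2x_2$ subject to $x_1 + x_2 = 1$, $x \ge 0$, whose optimum is $x^* = (1, 0)$ with multiplier $\lambda = -1$ and slack $s = (0, 1)$.

```python
# Toy check of the optimality system for the standard-form LP (37):
#   xs = 0,  Ax = b,  c + A^T lambda = s,  x >= 0,  s >= 0.
# Data and function name are our own illustration.

def lp_kkt_holds(A, b, c, x, lam, tol=1e-12):
    m, n = len(A), len(A[0])
    s = [c[j] + sum(A[i][j] * lam[i] for i in range(m)) for j in range(n)]
    primal = all(abs(sum(A[i][j] * x[j] for j in range(n)) - b[i]) <= tol
                 for i in range(m))
    comp = all(abs(x[j] * s[j]) <= tol for j in range(n))
    nonneg = all(v >= -tol for v in x) and all(v >= -tol for v in s)
    return primal and comp and nonneg

A, b, c = [[1.0, 1.0]], [1.0], [1.0, 2.0]
print(lp_kkt_holds(A, b, c, x=[1.0, 0.0], lam=[-1.0]))   # -> True
```

Swapping in the non-optimal vertex $x = (0, 1)$ violates complementarity ($x_2 s_2 = 1$), so the check fails there, as it should.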
and may be formally reduced to the linear complementarity format by writing the third condition in the equivalent form B^T c = B^T s, where B is a matrix whose columns form a basis of the kernel of A. The associated perturbed Newton step (u, v, w) is the solution of

  su + xv = −xs + γμ1 + δ ;  Au = (1 − α) g_P ;  A^T w − v = (1 − α) g_D ,

where g_P and g_D depend on the starting point. In order to test numerically the robustness of the Newton direction, we choose to generate perturbations of the right-hand side of the corresponding linear system in a random way. Then we compare the number of iterations to the one obtained without perturbation.

We generate the data of the problem to be solved as follows. Given an even value of n, we fix p = n/2. Then the matrix A is randomly generated (we performed all computations and generations of random numbers using the standard MATLAB functions). We randomly generate two nonnegative vectors x̂ and ŝ. Then we take b = Ax̂ and c = ŝ. In that way, x̂ and (ŝ, λ̂ = 0) are primal and dual feasible, and therefore the linear problem has solutions. The starting point is x₀ = s₀ = 1 and μ₀ = 1. The algorithm stops when μ < 10⁻¹⁰.

The perturbation is chosen so that it satisfies at each iteration ‖δ‖ = ε‖f‖, where f = −xs + γμ1 is the right-hand side of the equation of the Newton step corresponding to the linearization of the complementarity condition. That is, the perturbation parameter ε is the ratio (measured in the L₂ norm) between the perturbation and the right-hand side. It varies between 0 and 0.25. Note that δ = −ε‖f‖z, where z belongs to the unit sphere of ℝⁿ. One difficulty is that we do not have a direct means of generating an element of the unit sphere of ℝⁿ with uniform probability. Therefore we propose two ways of generating z. The first method consists in generating at random each component of a vector ẑ. Then we obtain z by setting to 0 half of the components of ẑ, i.e.
we have either z_i = ẑ_i or z_i = 0, i = 1, …, n. Our results are a mean over ten runs, except for n = 10000 where we perform only three runs. Using this first method for generating z, we obtain the results of Tables 1 and 2.
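As an illustration, the first method for generating z, together with the resulting perturbation δ = −ε‖f‖z, can be sketched in a few lines of pure Python. This is a sketch, not the authors' code (the experiments were run in MATLAB); the function name is ours, and the final normalization of z onto the unit sphere is our assumption, added so that the stated identity ‖δ‖ = ε‖f‖ holds exactly.

```python
import math
import random

def first_method_perturbation(f, eps, rng):
    """Hypothetical sketch of the first perturbation method: draw a random
    vector z_hat, set half of its components to zero at random, normalize
    the result to the unit sphere, and return delta = -eps * ||f|| * z."""
    n = len(f)
    z = [rng.random() for _ in range(n)]        # random components of z_hat
    for i in rng.sample(range(n), n // 2):      # zero out half the components
        z[i] = 0.0
    norm_z = math.sqrt(sum(t * t for t in z))
    z = [t / norm_z for t in z]                 # z on the unit sphere of R^n
    norm_f = math.sqrt(sum(t * t for t in f))
    return [-eps * norm_f * t for t in z]       # so that ||delta|| = eps * ||f||
```

For instance, with f of length 4 and ε = 0.1, the returned δ has exactly two zero components and Euclidean norm 0.1‖f‖.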
Table 1: mean value of the number of iterations

Table 2: number of iterations in the worst case

We now consider a second method of generating the direction z, where we choose to concentrate the perturbation in one component; i.e., all components of z are 0 except one, taken at random. We test this perturbation method for a single value of n. We perform three runs for each value of ε. We obtain the following tables:

Table 3: mean value of the number of iterations

Table 4: number of iterations in the worst case

From these results we may conclude that, at least for randomly generated problems, the large neighborhood predictor corrector algorithm described in this paper is both rapid and very robust with respect to perturbations in the computation of the Newton step.
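The random test-problem generation described in the experiments can be sketched as follows. This is a pure-Python stand-in for the MATLAB generation; the function name, the seed handling, and the choice of Gaussian entries for A and uniform entries for x̂ and ŝ are our assumptions. It produces data for which x̂ is primal feasible and (λ = 0, ŝ) is dual feasible, as in the text.

```python
import random

def generate_test_lp(n, seed=0):
    """Hypothetical sketch of the random LP generation: p = n/2 rows,
    random A, nonnegative x_hat and s_hat, then b = A x_hat and c = s_hat,
    so that x_hat is primal feasible and (lambda = 0, s_hat) dual feasible."""
    rng = random.Random(seed)
    p = n // 2
    A = [[rng.gauss(0.0, 1.0) for _ in range(n)] for _ in range(p)]
    x_hat = [rng.random() for _ in range(n)]   # nonnegative primal point
    s_hat = [rng.random() for _ in range(n)]   # nonnegative dual slack
    b = [sum(A[i][j] * x_hat[j] for j in range(n)) for i in range(p)]  # b = A x_hat
    c = list(s_hat)                            # with lambda = 0, c + A^T lambda = s_hat
    return A, b, c, x_hat, s_hat
```

The algorithm described above would then be started from x₀ = s₀ = 1 and μ₀ = 1, and stopped when μ < 10⁻¹⁰.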
More informationReal-Parameter Black-Box Optimization Benchmarking 2009: Noiseless Functions Definitions
Real-Parameter Black-Box Optimization Benchmarking 2009: Noiseless Functions Definitions Nikolaus Hansen, Steffen Finck, Raymond Ros, Anne Auger To cite this version: Nikolaus Hansen, Steffen Finck, Raymond
More information1. Introduction and background. Consider the primal-dual linear programs (LPs)
SIAM J. OPIM. Vol. 9, No. 1, pp. 207 216 c 1998 Society for Industrial and Applied Mathematics ON HE DIMENSION OF HE SE OF RIM PERURBAIONS FOR OPIMAL PARIION INVARIANCE HARVEY J. REENBER, ALLEN. HOLDER,
More informationKey words. linear complementarity problem, non-interior-point algorithm, Tikhonov regularization, P 0 matrix, regularized central path
A GLOBALLY AND LOCALLY SUPERLINEARLY CONVERGENT NON-INTERIOR-POINT ALGORITHM FOR P 0 LCPS YUN-BIN ZHAO AND DUAN LI Abstract Based on the concept of the regularized central path, a new non-interior-point
More informationDecomposing Bent Functions
2004 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 49, NO. 8, AUGUST 2003 Decomposing Bent Functions Anne Canteaut and Pascale Charpin Abstract In a recent paper [1], it is shown that the restrictions
More information1. Introduction. Consider the following parameterized optimization problem:
SIAM J. OPTIM. c 1998 Society for Industrial and Applied Mathematics Vol. 8, No. 4, pp. 940 946, November 1998 004 NONDEGENERACY AND QUANTITATIVE STABILITY OF PARAMETERIZED OPTIMIZATION PROBLEMS WITH MULTIPLE
More information