On a wide region of centers and primal-dual interior point algorithms for linear programming

Jos F. Sturm* and Shuzhong Zhang†

Revised on May 9, 1995

Abstract

In the adaptive step primal-dual interior point method for linear programming, polynomial algorithms are obtained by computing Newton directions towards targets on the central path, and restricting the iterates to a neighborhood of this central path. In this paper, the adaptive step methodology is extended by considering targets in a certain central region, which contains the usual central path, and subsequently generating iterates in a neighborhood of this region. By choosing a certain parameter, the size of the central region can vary from the central path to the whole feasible region. An O(√n L) iteration bound is obtained under very mild conditions on the choice of the target points. In particular, we leave plenty of room for experimentation with search directions. The practical performance of the new primal-dual interior point method is measured on the Netlib test set for various sizes of the central region.

Key words: Linear programming, primal-dual interior point method, central path, wide neighborhood.

* Ph.D. student, Tinbergen Institute Rotterdam, The Netherlands.
† Assistant Professor, Erasmus University Rotterdam, The Netherlands.

Sturm and Zhang: A wide region of centers for LP

1. Introduction

Consider the following standard primal and dual linear programming problems:

(P) minimize c^T x subject to Ax = b, x ≥ 0,

and

(D) maximize b^T y subject to A^T y + s = c, s ≥ 0,

where A is an m by n matrix. Further denote the feasible regions of the primal and the dual problems by

F_P := {x ∈ R^n_+ | Ax = b} and F_D := {s ∈ R^n_+ | A^T y + s = c for some y ∈ R^m},

where R^n_+ denotes the nonnegative orthant. We assume that both F_P and F_D contain positive vectors.

A vast amount of recent research has been contributed to polynomial primal-dual interior point methods that generate iterates in a certain wide neighborhood of the central path, viz. the neighborhood

N_{−∞}(β) = {(x, s) ∈ F_P × F_D | min_{1≤i≤n} x_i s_i ≥ (1−β) μ}, where μ = x^T s / n.

A main iteration of this method can be described as follows. First, a search direction is obtained by applying Newton's method to a system of equations that describes a target on the central path. Second, the method takes the largest possible step in this Newton direction without leaving the N_{−∞}(β) neighborhood. A new target will be used in the next iteration. These methods are interesting because they are polynomial and they allow long steps, which is a prerequisite for practical efficiency. We have to remark, though, that in these methods the targets for computing Newton directions are chosen on the central path only. Due to the large size of the neighborhood, the distance between the iterates and the Newton targets on the central path is usually large, which can adversely affect the accuracy of the Newton direction.

In this paper, we propose a generic method where targets are chosen in a wide central region that includes the central path, and the iterates are restricted to an even wider neighborhood of this region. It will be shown that it is efficient to use targets in the central region that are not necessarily on the central path. In this respect, the method is similar to the target following method of Jansen, Roos, Terlaky and Vial [2]. In other respects, however, the two methods differ greatly. In particular, the target following method generates iterates that trace a sequence of targets, whereas the generic method studied in this paper generates a sequence in a neighborhood of the central region, without necessarily tracing the targets.

It has been shown by Kojima, Megiddo, Noma and Yoshise [4] that there exists a one-to-one correspondence between an arbitrary positive vector v and a pair of positive primal and dual solutions x ∈ F_P and s ∈ F_D with

v_i = √(x_i s_i) for i = 1, ..., n.

The space of vectors v that is made in this way from primal and dual solutions x and s is commonly referred to as the v-space. Hence, the v-space coincides with the positive orthant R^n_{++}, where any vector v ∈ R^n_{++} is uniquely associated with a pair of positive primal and dual solutions. The v-space facilitates a unified and unambiguous treatment of search directions for the interior point method, as will be described in Section 1.1. In Section 1.2, we show how some frequently used neighborhoods of the central path fit in the v-space geometry. The central region and its neighborhood are introduced in Section 2, and the new generic algorithm is described in Section 3. We proceed in Section 4 by obtaining an O(√n L) iteration bound for this algorithm under mild conditions on the choice of the target points. In particular, we leave plenty of room for experimentation with search directions.
To illustrate this issue, we describe an implementation of the new primal-dual interior point method in Section 5, and we provide results on its performance for various sizes of the neighborhood.

Before proceeding, we mention the notation used in this paper. Let e denote the all-one vector, and e_i the i-th unit vector. For vectors other than e_i, a subscript denotes the corresponding component. For a vector x, X = diag(x) denotes the diagonal matrix formed by the elements of x. The norm ||·|| is Euclidean unless stated otherwise. Two vectors a and b satisfy a ≥ b if and only if a_i ≥ b_i for all i. We write ∠(f, w) to denote the angle between two vectors f and w; as usual, by the angle ∠(f, w) we mean the smallest nonnegative angle between the two vectors. We use sin(f, w), cos(f, w) and tan(f, w) as a

simplified notation for sin(∠(f, w)), cos(∠(f, w)) and tan(∠(f, w)). Index sets are indicated by a script capital, say S. An n-dimensional vector w subscripted by an index set S, i.e. w_S, denotes the n-dimensional vector with components w_i for i ∈ S and 0 in the other components. We write :S to denote the complement of S, so that w_{:S} = w − w_S. The cardinality of a set S is denoted by |S|.

1.1. The v-space and search directions

In this section, we review the v-space approach to the primal-dual interior point method. For a more thorough discussion of this subject, we refer the reader to Kojima, Megiddo, Noma and Yoshise [4] and Jansen, Roos, Terlaky and Vial [2]. Consider x, s ∈ R^n_{++}. Let

d_i := √(x_i / s_i) for i = 1, ..., n.

It is easily seen that for fixed p ∈ R^n, p ≠ 0, and given d, the relations

p_x + p_s = p, AD p_x = 0 and DA^T Δy + p_s = 0, Δx = D p_x and Δs = D^{−1} p_s,

uniquely describe the primal and dual directions Δx and Δs. Namely, p_x and p_s form an orthogonal decomposition of p. In the sequel, by `the search direction' we will mean the vector p which combines the primal and the dual search directions. Once the direction p is given, if we move along the directions Δx and Δs simultaneously in the primal and the dual space, then the maximum possible step length retaining feasibility is

t̄ := max{t | v + t p_x ≥ 0, v + t p_s ≥ 0},

or equivalently,

1/t̄ = − min_{1≤i≤n} min(Δx_i/x_i, Δs_i/s_i).

For 0 ≤ t ≤ t̄, let

v_i(t) := √((x_i + t Δx_i)(s_i + t Δs_i)) for i = 1, ..., n.

Remark that

v_i(t)² = (v_i + t (p_x)_i)(v_i + t (p_s)_i) for i = 1, ..., n.  (1)

and v(0) = v. From the arithmetic-geometric mean inequality, it follows that

v(t) ≤ (1/2)(v + t p_x + v + t p_s) = v + (t/2) p.  (2)

Because p_x ⊥ p_s, we also have

||v(t)||² = ||v||² + t v^T p.  (3)

As ||v||² = x^T s is the duality gap, (3) shows that p is a descent direction if and only if v^T p < 0. The trajectory v(t) in the v-space satisfies

v(t) = v + (t/2) p + o(t).

The point v + (t/2) p can thus be interpreted as the target of the next iterate in the v-space if primal and dual steps t Δx and t Δs are taken, cf. Jansen et al. [2]. From now on, for given v we will scale descent directions p such that v^T p = −||v||², and so

||v(t)||² = (1 − t) ||v||².  (4)

1.2. Neighborhoods of the central path

In this section, we review some neighborhoods of the central path in terms of the v-space geometry. In particular, we will describe the N_{−∞} neighborhood [5, 4, 10], the N_2 neighborhood [6, 4, 10] and the circular cone neighborhood [11]. Note that the terminology N_{−∞} and N_2 is from Mizuno, Todd and Ye [10]. We first mention that the primal-dual central path is a half-line in the v-space, viz.

{v ∈ R^n | v = (||v||/√n) e},

see e.g. [4, 2]. The primal-dual interior point method of Kojima, Mizuno and Yoshise [5] originally generated a sequence of primal-dual pairs in an N_{−∞} neighborhood of the central path. Remark that for any β ∈ [0, 1], (x, s) ∈ N_{−∞}(β)

if and only if

X^{1/2} S^{1/2} e ∈ {v ∈ R^n_+ | min_{1≤i≤n} v_i² ≥ (1−β) μ, with μ = ||v||²/n}.  (5)

It follows that the v-space representation of N_{−∞} is a cone. This cone will be further analyzed in Section 2. Kojima et al. [5] showed that their N_{−∞} algorithm converges in O(nL) main iterations. Subsequently, Kojima, Mizuno and Yoshise [6] modified their algorithm by restricting the iterates to an N_2(β) neighborhood, where

N_2(β) = {v ∈ R^n_+ | ||V² e − μ e|| ≤ β μ, μ = ||v||²/n}.

They obtained an O(√n L) iteration bound for this modification. In [11] it is shown that

N_2(β) = {v ∈ R^n_+ | √n tan(e, V² e) ≤ β}.

We proposed in [11] to use instead of N_2(β) the circular cone neighborhood N(1, β),

N(1, β) := {v ∈ R^n_+ | √(n−1) tan(e, v) ≤ β},  (6)

and we showed that N_2(β √(1 + 1/(n−1))) ⊆ N(1, β). The axis of the circular cone neighborhood, the all-one vector e, corresponds to the primal-dual central path. Figure 1 shows the intersection of N_2(β), N_2(β √(1 + 1/(n−1))) and N(1, β) with the unit simplex

{v ∈ R^n_+ | e^T v = 1}

for the case n = 3 and β = 0.9. To summarize, in the v-space geometry, the central path is the cone of positive multiples of the all-one vector e, and the neighborhoods are larger cones in R^n_+ that contain the central path. In the next section, we will introduce a central region, and we will propose a neighborhood of this region.

2. The central region and its neighborhood

We shall first introduce a region of centers in the v-space. The targets that are used in determining a Newton search direction will be chosen in this region. Next, we will introduce a neighborhood of this central region. Based on these two new concepts we will then propose a generic primal-dual algorithm that generates an iterative sequence in this neighborhood.
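To make the v-space geometry concrete, the membership tests for the neighborhoods reviewed above can be sketched numerically. The following is our own illustration, not code from the paper; the function names are hypothetical, and we use the v-space characterizations min_i v_i² ≥ (1−β)μ, ||V²e − μe|| ≤ βμ and √(n−1) tan(e, v) ≤ β, with μ = ||v||²/n.

```python
import numpy as np

def in_N_minus_inf(v, beta):
    """Wide neighborhood: min_i v_i^2 >= (1 - beta) * mu, mu = ||v||^2 / n."""
    mu = (v @ v) / len(v)
    return np.min(v ** 2) >= (1 - beta) * mu

def in_N2(v, beta):
    """N_2 neighborhood: ||V^2 e - mu e|| <= beta * mu."""
    mu = (v @ v) / len(v)
    return np.linalg.norm(v ** 2 - mu) <= beta * mu

def in_circular_cone(v, beta):
    """Circular cone neighborhood N(1, beta): sqrt(n-1) * tan(e, v) <= beta."""
    n = len(v)
    cos = np.sum(v) / (np.sqrt(n) * np.linalg.norm(v))  # cos of angle(e, v)
    tan = np.sqrt(max(1.0 - cos ** 2, 0.0)) / cos
    return np.sqrt(n - 1) * tan <= beta
```

On the central path (v a positive multiple of e) all three tests accept for any β ∈ (0, 1); for vectors far from the path they reject.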

2.1. Definitions and motivation

In the v-space, the central region is defined as the cone

C(γ) := {v | min_{1≤i≤n} v_i ≥ γ ||v||/√n}

for given γ ∈ [0, 1]. It is easily seen from (5) that C(γ) is the v-space representation of the N_{−∞}(1−γ²) neighborhood. As special cases, C(1) = {v | v = (||v||/√n) e} is the well known central path, and C(0) = R^n_+, the nonnegative orthant. In general, C(γ) is the intersection of n circular cones,

C(γ) = {v | cos(e_i, v) ≥ γ/√n, 1 ≤ i ≤ n} ∪ {0}.  (7)

We will restrict to γ > 0 in the sequel. For this case, we have

C(γ) = {v | 0 ≤ tan(e_i, v) ≤ r(γ), 1 ≤ i ≤ n} ∪ {0},

where

r(γ) := √(n − γ²)/γ.  (8)

Because C(γ) is a closed convex set, there exists for given v a unique vector

v̄ = arg min_{f ∈ C(γ)} ||f − v||.

This vector v̄ is known as the minimal norm projection of v onto the convex set C(γ). It is well known that (v − v̄) is the normal direction of a hyperplane that separates v from C(γ), i.e.

(v − v̄)^T (w − v̄) ≤ 0 for all w ∈ C(γ).

The inequality implies

(v̄ − v)^T (w − v) ≥ ||v̄ − v||² for all w ∈ C(γ).  (9)

The above relation is known as the strong separating hyperplane theorem. Moreover, we have:

Lemma 2.1. Let v ∈ R^n_+, v ≠ 0, and let v̄ be the projection of v onto the closed convex cone C(γ), i.e.

v̄ = arg min_{f ∈ C(γ)} ||f − v||;

then

tan(v̄, v) = min_{f ∈ C(γ)} tan(f, v).

Proof: We need only to prove

sin(v̄, v) = min_{f ∈ C(γ)} sin(f, v).

Since C(γ) is a cone, (f^T v/||f||²) f ∈ C(γ) for every f ∈ C(γ), and ||v̄ − v|| = min_{f ∈ C(γ)} ||f − v||; hence we obtain

sin(f, v) = ||(f^T v/||f||²) f − v|| / ||v|| ≥ ||v̄ − v|| / ||v|| = sin(v̄, v).  □

As a matter of notation, we now define the angle between the closed convex cone C(γ) and a vector v ∈ R^n as follows:

tan(C(γ), v) := min_{f ∈ C(γ)} tan(f, v).

The above lemma shows that

tan(C(γ), v) = tan(v̄, v).

For fixed γ ∈ (0, 1] and β ∈ (0, 1), we define a neighborhood of the central region C(γ) as

N(γ, β) := {v ∈ R^n_+ | r(γ) tan(C(γ), v) ≤ β},

where by definition r(γ) = √(n − γ²)/γ, see (8). Observe that the above definition is consistent with our earlier definition (6) of N(1, β). Figure 2 provides an illustration of the intersection of the unit simplex with C(0.6) and N(0.6, 0.7) for the case n = 3. Notice that the case γ = 1 was shown in Figure 1.

2.2. A basic property of the central region neighborhood

In the previous section, we have extended the notion of central path to central region, and we have generalized the circular cone neighborhood of the central path to a neighborhood of the central region, viz. the central region neighborhood N(γ, β). In this section, we will obtain a useful property of N(γ, β).

Figure 1: The intersection of (4) the unit simplex with (1) N_2(0.9), (2) N_2(1.10) and (3) N(1, 0.9), for n = 3.

Figure 2: The intersection of (3) the unit simplex with (1) C(0.6) and (2) N(0.6, 0.7), for n = 3.

Lemma 2.2. Let w ≥ 0 and f ∈ C(γ), f ≠ 0, for some γ ∈ (0, 1]. Then

w ≥ (1 − r(γ) tan(f, w)) (f^T w/||f||²) f.

Proof: We will prove the lemma by showing

(e_i − (f_i/||f||²) f)(e_i − (f_i/||f||²) f)^T ⪯ (1 − f_i²/||f||²)(I − f f^T/||f||²)  (10)

for arbitrary i ∈ {1, ..., n}. As a matter of notation, by A ⪯ B for symmetric matrices A and B we mean that B − A is positive semidefinite. The only nonzero eigenvalue of the rank-one matrix (e_i − (f_i/||f||²) f)(e_i − (f_i/||f||²) f)^T is 1 − f_i²/||f||², with corresponding eigenvector (e_i − (f_i/||f||²) f). The positive semidefinite matrix (1 − f_i²/||f||²)(I − f f^T/||f||²) also has an eigenvalue 1 − f_i²/||f||² corresponding to the eigenvector (e_i − (f_i/||f||²) f). This proves (10). By pre-multiplying by w^T and post-multiplying by w in (10), it follows that

| w_i − (f^T w/||f||²) f_i | ≤ √(1 − f_i²/||f||²) √(||w||² − (f^T w)²/||f||²) = √(1 − f_i²/||f||²) | (f^T w/||f||) tan(f, w) |.

Using the fact that tan(f, w) ≥ 0 if and only if f^T w ≥ 0, and using f/||f|| ≥ γ e/√n, we obtain

| w_i − (f^T w/||f||²) f_i | ≤ √(1 − γ²/n) (√n/γ) tan(f, w) (f^T w/||f||²) f_i = r(γ) tan(f, w) (f^T w/||f||²) f_i.

The lemma is proved.  □

From Lemma 2.2, it is obvious that

N(γ, β) ⊆ R^n_+ = C(0).

The following theorem provides a stronger result.

Theorem 2.1. Let γ ∈ (0, 1] and β ∈ (0, 1). Define

γ' := (1 − β) γ / √(1 + β²/r(γ)²).

There holds

C(γ) ⊆ N(γ, β) ⊆ C(γ').

Proof: The inclusion C(γ) ⊆ N(γ, β) follows immediately from the definition of N(γ, β). In order to prove the second relation, we use that for any v ∈ N(γ, β) there holds r(γ) tan(v̄, v) ≤ β, so that by applying Lemma 2.2 we obtain

v ≥ (1 − β) ((v̄)^T v/||v̄||²) v̄ = (1 − β) cos(v̄, v) (||v||/||v̄||) v̄.

Using

cos(v̄, v) = 1/√(1 + tan²(v̄, v)) ≥ 1/√(1 + β²/r(γ)²)

and v̄ ≥ γ ||v̄|| e/√n, it follows that

v ≥ ((1 − β) γ / √(1 + β²/r(γ)²)) ||v|| e/√n = γ' ||v|| e/√n,

i.e. v ∈ C(γ').  □

So far, we have seen some nice properties of the central region and its neighborhood N(γ, β). In the next section, it will be shown that it is in fact easy to check whether a given vector v ∈ R^n_+ belongs to the new neighborhood, which is important for practical implementations.

2.3. Computing the projection on the central region

In order to check the membership v ∈ N(γ, β) for some vector v ∈ R^n_+, one has to compute the projection v̄. We will show in this section that this projection can be computed very efficiently.
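As a preview of the computation developed below, here is a minimal sketch (our own illustration, with hypothetical identifiers) that finds the index set T and the threshold h by sorting the components of v, and then applies the closed-form projection formula onto the cone C(γ).

```python
import numpy as np

def project_onto_C(v, gamma):
    """Projection of v >= 0 onto C(gamma) = {f : min_i f_i >= gamma*||f||/sqrt(n)}.
    T collects the small components (those below a threshold h), which are then
    raised to h before rescaling onto the cone boundary."""
    n = len(v)
    order = np.argsort(v)           # candidates for T are the smallest components
    sq = v[order] ** 2
    tail = v @ v                    # ||v_{:T}||^2 for the current T
    T_size = 0
    while T_size < n - 1:
        tail_next = tail - sq[T_size]
        # threshold h for the enlarged set T u {next index}
        h = gamma * np.sqrt(tail_next / (n - gamma**2 * (T_size + 1)))
        if v[order[T_size]] < h:    # component qualifies for T
            T_size += 1
            tail = tail_next
        else:
            break
    h = gamma * np.sqrt(tail / (n - gamma**2 * T_size))
    w = v.copy()
    w[order[:T_size]] = h           # w = v_{:T} + h e_T, a point on the cone
    return (v @ w) / (w @ w) * w    # scale w back toward v
```

If v already lies in C(γ), the set T is empty and the sketch returns v itself; by construction the returned vector lies on C(γ) and is orthogonal to the residual v − v̄.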

Let T ⊆ {1, ..., n} be the index set of maximal cardinality such that

v_T < (γ/√(n − γ²|T|)) ||v_{:T}|| e,  (11)

and let

h := (γ/√(n − γ²|T|)) ||v_{:T}||.

The following lemma states that i ∈ T if and only if v_i < h.

Lemma 2.3. There holds v_{:T} ≥ h e_{:T}.

Proof: Suppose to the contrary that there exists some i ∈ :T with v_i < h. Let T' := T ∪ {i}. Then

||v_{:T'}||² = ||v_{:T}||² − v_i² > ||v_{:T}||² − h² = ((n − γ²|T'|)/(n − γ²|T|)) ||v_{:T}||².

Hence,

v_{T'} < h e < (γ/√(n − γ²|T'|)) ||v_{:T'}|| e,

contradicting the maximal cardinality property of T.  □

The above lemma shows that v_i < v_j for all i ∈ T and j ∈ :T. The set T can thus be computed by sorting the components of v. Notice that

||v_{:T} + h e_T||² = ||v_{:T}||² + h²|T| = n h²/γ²,

so that, using (v_{:T} + h e_T) ≥ h e, we have

(v_{:T} + h e_T) ≥ γ ||v_{:T} + h e_T|| e/√n.

It follows that

(v_{:T} + h e_T) ∈ C(γ).

The following lemma shows that using the set T, one can easily compute the projection v̄ of v onto the closed convex cone C(γ).

Lemma 2.4. Let v̄ = arg min_{f ∈ C(γ)} ||f − v||. It holds that

v̄ = (v^T (v_{:T} + h e_T)/||v_{:T} + h e_T||²) (v_{:T} + h e_T).  (12)

Proof: We use the characterization (7) of C(γ). Consider the following problem for finding the projection of v on C(γ):

min_{f ∈ R^n, f ≠ 0} { (1/2)||f − v||² | cos(e_i, f) ≥ γ/√n, i = 1, ..., n }.  (13)

The above problem has a unique solution, which is the projection v̄. Note also that the function cos(e_i, f) is pseudo-concave in f on R^n_{++}. It is clear that to prove the lemma, we need only to check that the solution given by (12) is a Karush-Kuhn-Tucker point of the problem given in (13). The Lagrangian function for (13) is

L(f, λ) = (1/2)||f − v||² − Σ_{i=1}^n λ_i (cos(e_i, f) − γ/√n).

Its gradient is given by

∇_f L(f, λ) = f − v − Σ_{i=1}^n λ_i (e_i/||f|| − f_i f/||f||³) = f − v − λ/||f|| + (f^T λ/||f||³) f.

Now consider the solution

f* = (v^T (v_{:T} + h e_T)/||v_{:T} + h e_T||²) (v_{:T} + h e_T)

with multiplier

λ* = ||f*|| (h e_T − v_T) = ||f*|| (v_{:T} + h e_T − v).

Remark that

(f*)^T v = ||f*||² = (v^T (v_{:T} + h e_T))²/||v_{:T} + h e_T||²,

which implies f* ⊥ (f* − v). Moreover,

(f*)^T λ* = ||f*|| (v^T (v_{:T} + h e_T)/||v_{:T} + h e_T||²) (v_{:T} + h e_T − v)^T (v_{:T} + h e_T) = ||f*|| v^T (v_{:T} + h e_T) − ||f*||³.

Therefore,

((f*)^T λ*/||f*||³) f* = v_{:T} + h e_T − f*.

Hence, we have

∇_f L(f*, λ*) = 0.

Observe that λ* is nonnegative and f* ∈ C(γ), i.e. cos(e_i, f*) − γ/√n ≥ 0 for i = 1, ..., n. Moreover, there holds

Σ_{i=1}^n λ*_i (cos(e_i, f*) − γ/√n) = (λ*)^T f*/||f*|| − (γ/√n) e^T λ* = 0,

where in the last equality we notice

h/||v_{:T} + h e_T|| = γ/√n.

We have thus verified that f* is a Karush-Kuhn-Tucker point, and therefore f* = v̄. The lemma is proved.  □

We want to remark here that in order to calculate h, T and v̄, it suffices to sort only a (usually small) subset of the components of v. This can be seen from the following relation:

(γ/√n) ||v|| ≤ h = (γ/√(n − γ²|T|)) ||v_{:T}|| ≤ (γ/√(n − γ²(n−1))) ||v||.

Hence, if

v_i < (γ/√n) ||v||,

then i ∈ T, and if

v_i ≥ (γ/√(n − γ²(n−1))) ||v||,

then i ∈ :T. It follows that we only need to sort those v_i's for which

1/√n ≤ v_i/(γ ||v||) < 1/√(n − γ²(n−1)).

3. A generic central region algorithm

The selection of search directions p in the v-space, or equivalently Δx and Δs in F_P and F_D, is an important issue in the design of primal-dual interior point algorithms. In our generic central region algorithm, we make sure that the v-space search direction p points towards the central region C(γ). Certainly, we are concerned only with descent directions. In mathematical terms, for a given iterate v ∈ N(γ, β), we are interested in a descent direction p, with v^T p = −||v||², such that

{t ∈ [0, 1) | v + t p ∈ C(γ)} ≠ ∅.

Let

t_L := min_{t ≥ 0} {t | v + t p ∈ C(γ)}.

Clearly, 0 ≤ t_L < 1. Since (d/dt) v(t)|_{t=0} = p/2, the step length t with t = 2 t_L can be interpreted as a Newton step towards the target (v + t_L p) ∈ C(γ). Let t(γ, β) be the largest step length towards the boundary of N(γ, β), i.e.

t(γ, β) := max{ t̂ | v(t) ∈ N(γ, β) for all 0 ≤ t ≤ t̂ }.

We propose to use such a step length rule that the resulting sequence of iterates generated by the generic central region algorithm is contained in N(γ, β). A generic central region algorithm now follows:

Algorithm 1.

Input data: (A, b, c), parameters 0 < β < 1 and 0 < γ ≤ 1, and an initial feasible solution (x^(0), s^(0)) such that v^(0) = (X^(0) S^(0))^{1/2} e ∈ N(γ, β).

Step 0 Initialization. Set k = 0.

Step 1 Optimality test. Stop if, based on (x^(k), s^(k)), a pair of optimal solutions (x*, s*) can be found.

Step 2 Choose direction. Choose p^(k) such that

(v^(k))^T p^(k) = −||v^(k)||² and {t ∈ [0, 1) | v^(k) + t p^(k) ∈ C(γ)} ≠ ∅.

Compute Δx^(k) and Δs^(k).

Step 3 Compute step length. Compute t(γ, β) and let (1/2) t(γ, β) ≤ t ≤ t(γ, β) be such that

(X^(k) + t ΔX^(k))^{1/2} (S^(k) + t ΔS^(k))^{1/2} e ∈ N(γ, β).

Step 4 Take step. Set x^(k+1) = x^(k) + t Δx^(k) and s^(k+1) = s^(k) + t Δs^(k).

Step 5 Set k = k + 1 and return to Step 1.

An initial solution can be obtained by means of a self-dual formulation [13]. For a good optimality test (Step 1), we refer the reader to [9]. In Step 3 one can use a bisection method to compute the step length. The condition t ≥ (1/2) t(γ, β) allows us to accept the step length already after the first time that the lower limit of the bisection interval is updated. Hence, only O(log(1/t(γ, β))) trials are needed by a bisection procedure for determining the step length. In Section 5, we will describe a specific implementation of Algorithm 1.

4. Convergence of the generic algorithm

In this section, we shall analyze the convergence of the generic primal-dual algorithm proposed in the previous section. The analysis is based on a lower bound on the step length t(γ, β). Because ||v(t)||² = (1 − t)||v||², see (4), a linear convergence in duality gap will be achieved once such a lower bound is known.
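Step 2 requires recovering Δx^(k) and Δs^(k) from a chosen v-space direction p, via the orthogonal decomposition p = p_x + p_s of Section 1.1 (AD p_x = 0 and p_s in the row space of AD). A minimal numerical sketch of this computation, with all identifiers our own:

```python
import numpy as np

def decompose_direction(A, d, p):
    """Split a v-space direction p into p_x + p_s with AD p_x = 0 and
    p_s in range((AD)^T), then recover the primal and dual steps."""
    AD = A * d                      # A @ diag(d); scales column j by d_j
    # p_s is the orthogonal projection of p onto range((AD)^T):
    # p_s = (AD)^T (AD (AD)^T)^{-1} AD p
    ps = AD.T @ np.linalg.solve(AD @ AD.T, AD @ p)
    px = p - ps                     # lies in null(AD), orthogonal to p_s
    dx = d * px                     # primal step      dx = D p_x
    ds = ps / d                     # dual slack step  ds = D^{-1} p_s
    return px, ps, dx, ds
```

By construction px + ps = p, px ⊥ ps, and A @ dx = 0, so the primal iterate stays on the affine set Ax = b.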

Throughout this section, we consider only step lengths t satisfying

0 ≤ t ≤ max{ t̂ | v + t̂ p ≥ 0 }.

Letting

g_i(t) := √( v_i (v_i + t p_i)/(1 − t) ) for i = 1, ..., n,

it follows that

v_i(t)² = (1 − t) g_i(t)² + t² Δx_i Δs_i for i = 1, ..., n.

Obviously, we have

t(γ, β) ≥ max{ t̂ | r(γ) tan(v̄, v(t)) ≤ β for all t ∈ [0, t̂] }.  (14)

We can therefore obtain a lower bound on t(γ, β) by finding an expression for tan(v̄, v(t)) as a function of t. In Section 4.1, the angle between v̄ and g(t) will be estimated, and these results are used in Section 4.2 to derive an upper bound on the angle between v̄ and v(t). Using (14) we shall then obtain a lower bound on t(γ, β).

4.1. Analysis concerning g(t)

In this section we shall estimate sin(v̄, g(t)). Notice that for any i ∈ {1, ..., n},

g_i(t) = √( v_i (v_i + t p_i)/(1 − t) ) = v_i √( 1 + (t/(1−t)) (v_i + p_i)/v_i ).  (15)

This expression can be rewritten by applying the following lemma.

Lemma 4.1. For α ≥ −1 there holds

√(1 + α) = 1 + α/2 − α²/(2 (1 + √(1 + α))²).

The proof of Lemma 4.1 is straightforward, and therefore omitted. Using Lemma 4.1, we obtain from (15) for all i that

g_i(t) = v_i + (t/(2(1−t))) (v_i + p_i) − (t²/(2(1−t)²)) v_i (v_i + p_i)²/(v_i + g_i(t))².

Consequently,

(v̄)^T g(t) = (v̄)^T v + (t/(2(1−t))) (v̄)^T (v + p) − (t²/(2(1−t)²)) (v̄)^T (V + G(t))^{−2} (V + P)² v.  (16)

From Lemma 2.2 and the fact that v ∈ N(γ, β), it follows that

v ≥ (1 − β) ((v̄)^T v/||v̄||²) v̄ = (1 − β) v̄,

where we used v̄ ⊥ (v − v̄). Therefore, using the obvious inequality g(t) ≥ 0, we obtain

(v̄)^T (V + G(t))^{−2} (V + P)² v ≤ ||v + p||²/(1 − β).  (17)

First consider the case v ∈ C(γ). In that case, v̄ = v, so that (v̄)^T (v + p) = 0, and we obtain from (16) and (17) that

(v̄)^T g(t) ≥ (v̄)^T v − (1/2) (t²/(1−t)²) ||v + p||²/(1 − β), if v ∈ C(γ).  (18)

Now suppose v ∉ C(γ). Then t_L > 0, and using v^T (v + p) = 0, we have

(v̄)^T (v + p) = (v̄ − v)^T (v + p) = ((1 − t_L)/t_L) (v̄ − v)^T ( (v + t_L p)/(1 − t_L) − v ).

By definition of t_L and the fact that C(γ) is a cone, there holds (v + t_L p)/(1 − t_L) ∈ C(γ). Applying the inequality (9) yields

(v̄)^T (v + p) = ((1 − t_L)/t_L) (v̄ − v)^T ( (v + t_L p)/(1 − t_L) − v ) ≥ ((1 − t_L)/t_L) ||v̄ − v||².  (19)

Combining (19) and (17) with (16), we obtain

(v̄)^T g(t) ≥ (v̄)^T v + (1/2) (t/(1−t)) ((1 − t_L)/t_L) ||v̄ − v||² − (1/2) (t²/(1−t)²) ||v + p||²/(1 − β)  (20)

for the case v ∉ C(γ). Now we easily arrive at the following lemma.

Lemma 4.2. Let 0 ≤ t ≤ max{ t̂ | v + t̂ p ≥ 0 }. If v ∈ C(γ), then

sin²(v̄, g(t)) ≤ (t²/(1−t)²) ||v + p||²/((1 − β) ||v||²).

If v ∉ C(γ), then

sin²(v̄, g(t)) ≤ (1 − (t/(1−t)) ((1 − t_L)/t_L)) sin²(C(γ), v) + (t²/(1−t)²) ||v + p||²/((1 − β) ||v||²).

Proof: Using (18), we obtain for v ∈ C(γ) that

((v̄)^T g(t))² ≥ ((v̄)^T v)² − (t²/(1−t)²) ((v̄)^T v) ||v + p||²/(1 − β),  (21)

where we applied the obvious inequality

(1 − α)² ≥ 1 − 2α for any α.  (22)

We remark here that

||g(t)||² = v^T (v + t p)/(1 − t) = ||v||².

For v ∈ C(γ), we thus obtain by using (21) and v̄ = v that

sin²(v̄, g(t)) = 1 − ((v̄)^T g(t))²/(||v̄||² ||g(t)||²) ≤ (t²/(1−t)²) ||v + p||²/((1 − β) ||v||²).  (23)

This concludes the case v ∈ C(γ). From v̄ ⊥ (v̄ − v) it follows that

(v̄)^T v = ||v̄||².

Applying (22) to (20), we thus obtain for v ∉ C(γ) that

sin²(v̄, g(t)) = 1 − ((v̄)^T g(t))²/(||v̄||² ||g(t)||²) ≤ sin²(C(γ), v) − (t/(1−t)) ((1 − t_L)/t_L) ||v̄ − v||²/||v||² + (t²/(1−t)²) ||v + p||²/((1 − β) ||v||²).  (24)

Now we notice that v̄ ⊥ (v̄ − v) implies

sin²(C(γ), v) = sin²(v̄, v) = ||v̄ − v||²/||v||².  (25)

Using (24) and (25), it follows that

sin²(v̄, g(t)) ≤ (1 − (t/(1−t)) ((1 − t_L)/t_L)) sin²(C(γ), v) + (t²/(1−t)²) ||v + p||²/((1 − β) ||v||²).  □

For notational simplicity, let

δ_1 := (1 − β) β √(n − γ²) r(γ)/(r(γ)² + β²)

and

δ := max( 1, (t_L/(1 − t_L)) r(γ) ||v + p||/(β ||v||) ).

Lemma 4.3. There holds

β γ'/√2 < δ_1 < β γ'.

Proof: By definition of δ_1 and γ', we have

δ_1 = (1 − β) β γ r(γ)²/(r(γ)² + β²) = (1 − β) β γ/(1 + β²/r(γ)²) = β γ'/√(1 + β²/r(γ)²).

Using 0 < β < 1 and 0 < γ ≤ 1 < n, so that 1 < 1 + β²/r(γ)² < 2, the lemma follows from this relation.  □

A useful consequence of Lemma 4.2 is:

Lemma 4.4. If p ≠ −v, then

0 ≤ r(γ) tan(v̄, g(t)) ≤ β for 0 ≤ t/(1−t) ≤ δ_1 ||v||/(√n δ ||v + p||).

If p = −v, then

tan(v̄, g(t)) = 0 for 0 ≤ t < 1.

Proof: A requirement on the direction p in Algorithm 1 is that t_L < 1, and so p = −v implies v ∈ C(γ). Hence g(t) = v = v̄ if p = −v, and the last part of the lemma is proved. For p ≠ −v, we first notice that δ ≥ 1 by definition. Therefore, using also Lemma 4.3,

t/(1−t) ≤ δ_1 ||v||/(√n δ ||v + p||) < γ' ||v||/(√n ||v + p||) ≤ min_{1≤i≤n} v_i/||v + p||,

where the last inequality follows from Theorem 2.1 and the fact that v ∈ N(γ, β) ⊆ C(γ'). This implies that

0 ≤ t < max{ t̂ | v + t̂ p ≥ 0 }.

We notice that v̄ ≥ 0 and g(t) > 0, so that tan(v̄, g(t)) ≥ 0. Remarking here that

sin²(v̄, g(t)) = tan²(v̄, g(t))/(1 + tan²(v̄, g(t))),

it thus follows that 0 ≤ tan(v̄, g(t)) ≤ β/r(γ) if and only if

sin²(v̄, g(t)) ≤ β²/(r(γ)² + β²).

The case t/(1−t) ≥ t_L/(1 − t_L) now follows easily from Lemma 4.2, using δ ≥ 1. In the sequel of the proof we consider 0 ≤ t/(1−t) < t_L/(1 − t_L). Because v ∈ N(γ, β), there holds

sin²(C(γ), v) = tan²(C(γ), v)/(1 + tan²(C(γ), v)) ≤ β²/(r(γ)² + β²).

Using Lemma 4.2, this implies that

sin²(v̄, g(t)) ≤ (1 − (t/(1−t)) ((1 − t_L)/t_L)) β²/(r(γ)² + β²) + (t²/(1−t)²) ||v + p||²/((1 − β) ||v||²).

Therefore,

sin²(v̄, g(t)) ≤ β²/(r(γ)² + β²) if 0 ≤ t/(1−t) ≤ min{ δ_1 ||v||/(√n ||v + p||), ((1 − t_L)/t_L) (1 − β) β² ||v||²/((r(γ)² + β²) ||v + p||²) },

concluding the proof.  □

4.2. Analysis concerning v(t)

We shall now consider step lengths t with

0 ≤ t/(1−t) ≤ δ_1 ||v||/(√n δ ||p||).

Because v ⊥ (v + p), we have ||p|| > ||v + p||, so that Lemma 4.4 yields

0 ≤ r(γ) tan(v̄, g(t)) ≤ β.  (26)

In this section we shall use this relation to estimate sin(v̄, v(t)). Remark that, using δ_1 < γ' and δ ≥ 1,

t ≤ t/(1−t) ≤ δ_1 ||v||/(√n δ ||p||) < γ' ||v||/(√n ||p||) ≤ min_{1≤i≤n} v_i/||p||,  (27)

where we applied Theorem 2.1, together with the obvious relations

||p_x||_∞ ≤ ||p_x|| ≤ ||p|| and ||p_s||_∞ ≤ ||p_s|| ≤ ||p||.

Using Lemma 4.1, we obtain

v_i(t)/√(1−t) = √( (v_i² + t v_i p_i + t² Δx_i Δs_i)/(1 − t) ) = g_i(t) √( 1 + t² Δx_i Δs_i/((1−t) g_i(t)²) )
= g_i(t) + (t²/(2(1−t))) Δx_i Δs_i/g_i(t) − (t⁴/(2(1−t)²)) (Δx_i Δs_i)²/( g_i(t) (g_i(t) + v_i(t)/√(1−t))² )

for i ∈ {1, ..., n}, so that

(v̄)^T v(t)/√(1−t) = (v̄)^T g(t) + (t²/(2(1−t))) (v̄)^T G(t)^{−1} ΔXΔs − (t⁴/(2(1−t)²)) (v̄)^T [G(t) (G(t) + V(t)/√(1−t))²]^{−1} (ΔXΔS)² e.  (28)

Using Δx ⊥ Δs and ||p_x||² + ||p_s||² = ||p||², one easily obtains the following result; see e.g. Jansen et al. [2].

Lemma 4.5. There holds

||ΔXΔs||² ≤ ||ΔXΔs||_∞ ||ΔXΔs||_1 ≤ ||p||⁴/8.

Based on (26), (28) and Lemma 4.5, we obtain the following estimate of sin(v̄, v(t)).

Lemma 4.6. If 0 ≤ t/(1−t) ≤ δ_1 ||v||/(√n ||p||), then

sin(v̄, v(t)) ≤ sin(v̄, g(t)) + (t²/√(1−t)) √n β ||p||²/(√8 γ' r(γ) ||v||²) + (t⁴/(1−t)²) n ||p||⁴/(8 (1 − β) γ'² ||v||⁴).

Proof: As Δx ⊥ Δs, we have

(v̄)^T G(t)^{−1} ΔXΔs = ( v̄ − ((v̄)^T g(t)/(g(t)^T g(t))) g(t) )^T G(t)^{−1} ΔXΔs.  (29)

Applying here the Cauchy-Schwarz inequality yields

| (v̄)^T G(t)^{−1} ΔXΔs | ≤ || v̄ − ((v̄)^T g(t)/(g(t)^T g(t))) g(t) || · ||G(t)^{−1} e||_∞ ||ΔXΔs||.  (30)

By construction, g(t) ⊥ [ v̄ − ((v̄)^T g(t)/(g(t)^T g(t))) g(t) ], and therefore

|| v̄ − ((v̄)^T g(t)/(g(t)^T g(t))) g(t) || = ||v̄|| sin(v̄, g(t)).  (31)

In order to estimate G(t)^{−1}, we apply (26) together with Lemma 2.2 and obtain

g(t) ≥ (1 − β) ((v̄)^T g(t)/||v̄||²) v̄ = (1 − β) cos(v̄, g(t)) (||v||/||v̄||) v̄.  (32)

Now using (26) and v̄ ≥ γ ||v̄|| e/√n, we have

g(t) ≥ ((1 − β)/√(1 + β²/r(γ)²)) γ ||v|| e/√n = γ' ||v|| e/√n.  (33)

Stated differently, ||G(t)^{−1} e||_∞ ≤ √n/(γ' ||v||), so that (30), together with (31) and (26), implies that

| (v̄)^T G(t)^{−1} ΔXΔs | ≤ (√n/(γ' ||v||)) sin(v̄, g(t)) ||ΔXΔs|| ||v̄|| ≤ √n β ||ΔXΔs|| ||v̄||/(γ' r(γ) ||v||).  (34)

Using (32) and the obvious inequality v(t) ≥ 0, we obtain

(v̄)^T [G(t) (G(t) + V(t)/√(1−t))²]^{−1} (ΔXΔS)² e ≤ ||G(t)^{−1} e||_∞² ||ΔXΔs||² ||v̄||/((1 − β) cos(v̄, g(t)) ||v||) ≤ n ||ΔXΔs||² ||v̄||/((1 − β) cos(v̄, g(t)) γ'² ||v||³),  (35)

where the last inequality follows from (33). Combining (34) and (35) with (28) yields

(v̄)^T v(t)/√(1−t) ≥ (v̄)^T g(t) − (t²/(2(1−t))) √n β ||ΔXΔs|| ||v̄||/(γ' r(γ) ||v||) − (t⁴/(2(1−t)²)) n ||ΔXΔs||² ||v̄||/((1 − β) cos(v̄, g(t)) γ'² ||v||³).  (36)

Using (36) and (22), we obtain

cos²(v̄, v(t)) = ((v̄)^T v(t))²/((1 − t) ||v̄||² ||v||²) ≥ cos²(v̄, g(t)) − (t²/(1−t)) cos(v̄, g(t)) √n β ||ΔXΔs||/(γ' r(γ) ||v||²) − (t⁴/(1−t)²) n ||ΔXΔs||²/((1 − β) γ'² ||v||⁴).

Applying Lemma 4.5 to the above relation yields the desired result.  □

Let

δ_0 := max( 1, (t_L/(1 − t_L)) r(γ) ||p||/(β ||v||) ).

We have already seen that, because v ⊥ (v + p), we have ||p|| > ||v + p||, so that

δ_0 ≥ max( 1, (t_L/(1 − t_L)) r(γ) ||v + p||/(β ||v||) ) = δ.

Lemma 4.7. There holds

0 ≤ r(γ) tan(v̄, v(t)) ≤ β for 0 ≤ t/(1−t) ≤ δ_1 ||v||/(√n δ_0 ||p||).

Proof: Recall from (27) that t < t̄. Therefore v(t) > 0, which implies tan(v̄, v(t)) ≥ 0. Hence,

0 ≤ tan(v̄, v(t)) ≤ β/r(γ) if and only if sin²(v̄, v(t)) ≤ β²/(r(γ)² + β²).

Consider first the case t/(1−t) ≥ t_L/(1 − t_L). It follows easily from Lemma 4.2 and Lemma 4.6, using 1 ≤ δ_0, that

sin(v̄, v(t)) ≤ (t/(1−t)) ||v + p||/(√(1 − β) ||v||) + (t²/√(1−t)) √n β ||p||²/(√8 γ' r(γ) ||v||²) + (t⁴/(1−t)²) n ||p||⁴/(8 (1 − β) γ'² ||v||⁴).

Now using 0 ≤ t/(1−t) ≤ δ_1 ||v||/(√n δ_0 ||p||) and the definition of δ_1, a direct computation bounds the right hand side by β/√(r(γ)² + β²), concluding the case t/(1−t) ≥ t_L/(1 − t_L).

Now suppose t/(1−t) < t_L/(1 − t_L). We notice that t ≤ t/(1−t) ≤ δ_1 ||v||/(√n δ_0 ||p||), so that

(t²/√(1−t)) √n β ||p||²/(√8 γ' r(γ) ||v||²) ≤ (1/√2) (t/(1−t)) ((1 − t_L)/t_L) β²/(r(γ)² + β²),  (37)

where in the last inequality we used

1 + β²/r(γ)² < 2.  (38)

From δ_1 < β γ' and δ_0 ≥ 1, it follows that

(t⁴/(1−t)²) n ||p||⁴/(8 (1 − β) γ'² ||v||⁴) ≤ (t/(1−t))² β² ||p||²/(8 (1 − β) δ_0² ||v||²).  (39)

Now applying Lemma 4.2 and Lemma 4.6 together with (37) and (39), and using (26), yields

sin(v̄, v(t)) ≤ β/√(r(γ)² + β²).

The lemma is proved.  □

4.3. The main result

In the previous section it has been shown that if

0 ≤ t/(1−t) ≤ δ_1 ||v||/(√n δ_0 ||p||),

then 0 ≤ tan(v̄, v(t)) ≤ β/r(γ), and therefore

v(t) ∈ N(γ, β).

By definition of t(γ, β), this implies that either t(γ, β) = 1 or

t(γ, β)/(1 − t(γ, β)) ≥ δ_1 ||v||/(√n δ_0 ||p||).  (40)

Based on this relation, we arrive at the following theorem.

Theorem 4.1. For Algorithm 1, choose parameters γ and β such that 1/γ = O(1), 1/β = O(1) and 1/(1 − β) = O(1). In every iteration k = 1, 2, ..., choose a direction p^(k) such that

t_L^(k) = O(1/√n) and ||p^(k)|| = O(||v^(k)||).

Suppose that for the initial solution pair (x^(0), s^(0)) there holds log((x^(0))^T s^(0)) = O(L). Then Algorithm 1 terminates with an optimal solution in O(√n L) main iterations.

Proof: From Lemma 4.3 and using (38), we know that

δ_1 = β γ'/√(1 + β²/r(γ)²) > β γ (1 − β)/2.

Hence, δ_1 is bounded from below by a positive constant, independent of the problem size. As t_L^(k) = O(1/√n) and r(γ) = O(√n), we have

δ_0^(k) = max( 1, (t_L^(k)/(1 − t_L^(k))) r(γ) ||p^(k)||/(β ||v^(k)||) ) = O(1) for k = 1, 2, ....

Using (40), we now obtain that

t^(k) ≥ (1/2) t(γ, β) = 1/O(√n).

From (4) we thus have, for k = 0, 1, 2, ..., that

(x^(k+1))^T s^(k+1) = (1 − t^(k)) (x^(k))^T s^(k) = (1 − 1/O(√n)) (x^(k))^T s^(k).

The theorem follows immediately from the above inequality.  □

Theorem 4.1 involves conditions on p and t_L. The following lemma shows how these conditions can be satisfied.

Lemma 4.8. Let f ∈ C(γ), f ≠ 0, and θ ∈ R_{++}. If

p = −v + (θ r(γ)/β) [ (||v||²/(f^T v)) f − v ],

then

t_L/(1 − t_L) ≤ β/(θ r(γ))

and

||p||² = (1 + θ² r(γ)² tan²(f, v)/β²) ||v||².

Proof: For t ∈ (0, 1) we have

v + t p = (1 − t) ( v + (t/(1−t)) (θ r(γ)/β) [ (||v||²/(f^T v)) f − v ] ),

so that

t_L/(1 − t_L) ≤ β/(θ r(γ)).

Moreover, using v ⊥ ( (||v||²/(f^T v)) f − v ),

||p||² = ||v||² + (θ r(γ)/β)² tan²(f, v) ||v||² = (1 + θ² r(γ)² tan²(f, v)/β²) ||v||²,

concluding the proof.  □

If we choose f ∈ C(γ) with r(γ) tan(f, v) = O(1), e.g. f = v̄, and if we choose θ independent of n, then Lemma 4.8 implies that the conditions of Theorem 4.1 are fulfilled. With respect to the conditions on p and t_L in Theorem 4.1, we also remark here that, because v ⊥ (v + p), we have

cos(−v, p) = ||v||/||p|| and tan(−v, p) = ||v + p||/||v||.

Moreover, it can be shown that if 0 ≤ t < max{ t̂ | v + t̂ p ≥ 0 }, then

tan(v, v + t p) = (t/(1−t)) tan(−v, p).

Therefore,

δ = max( 1, (t_L/(1 − t_L)) r(γ) ||v + p||/(β ||v||) ) = max( 1, r(γ) tan(v + t_L p, v)/β ),

where r(γ) tan(v̄, v) ≤ β. Hence, if v + t_L p is a positive multiple of v̄, then δ = 1.
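Before turning to the implementation, the bisection rule of Step 3 can be sketched as follows. This is our own illustration with hypothetical names; for the sketch we assume, as the bisection in Section 3 does, that membership of v(t) in the neighborhood is monotone in t on the search interval.

```python
import numpy as np

def step_length_bisection(v, px, ps, in_neighborhood, iters=30):
    """Approximate the largest t in [0, 1] with v(t) in the neighborhood,
    where v(t)_i^2 = (v_i + t*px_i)(v_i + t*ps_i), cf. equation (1)."""
    def feasible(t):
        a, b = v + t * px, v + t * ps
        if np.any(a <= 0) or np.any(b <= 0):    # would leave the positive orthant
            return False
        return in_neighborhood(np.sqrt(a * b))  # test v(t)
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if feasible(mid):
            lo = mid                            # step mid is still acceptable
        else:
            hi = mid
    return lo
```

Accepting the current lower limit lo as soon as it is at least half the final value mirrors the paper's rule of taking any t between (1/2) t(γ, β) and t(γ, β).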

5. An implementation

In the previous sections, we have described a generic wide neighborhood method. We have deliberately stated abstract conditions for polynomiality, in order to leave enough room for experimentation. In this section, however, we shall describe a specific implementation of the central region method. In our implementation, we require three additional fixed parameters, viz. $\omega_L$, $\omega_U$ and $\bar{\delta}$, with $0 < \omega_L \leq \omega_U$ and $\bar{\delta} > 0$. For given $f \in C(\beta)$ with

$$r(\beta) \tan(f, v) \leq \bar{\delta} \qquad (41)$$

and for some scalar

$$\omega \in [\omega_L, \omega_U], \qquad (42)$$

we set

$$p = -v + \omega\, r(\beta) \left[ \frac{\|v\|^2}{f^T v} f - v \right].$$

Using Lemma 4.8, it follows that the conditions of Theorem 4.1 are satisfied. At every iteration, we choose the ray $f \in C(\beta)$ in the cone generated by $v$ and $e$, i.e. $f = f(\lambda)$ for some $\lambda \in [0, 1]$, where

$$f(\lambda) := (1 - \lambda) \frac{\|v\|^2}{v^T v} v + \lambda \frac{\|v\|^2}{e^T v} e.$$

In order to also satisfy (41), we have to add a restriction on $\lambda$. Let

$$\bar{\lambda} = \max\{\lambda \in [0, 1] \mid r(\beta) \tan(f(\lambda), v) \leq \bar{\delta}\}.$$

In fact, $\bar{\lambda}$ is the root of a quadratic equation. It is easily seen that

$$f(\lambda) \in C(\beta) \quad \text{and} \quad r(\beta) \tan(f(\lambda), v) \leq \bar{\delta} \quad \text{for } 0 \leq \lambda \leq \bar{\lambda}.$$

In our implementation, the specific choice of $\lambda \in [0, \bar{\lambda}]$ and $\omega \in [\omega_L, \omega_U]$ is made by maximizing the step length $\bar{t}$ towards the boundary of $\Re^n_+$. In other words, we would like to solve the problem

$$\begin{array}{ll}
\max & \bar{t} \\
\text{s.t.} & v + \bar{t}\, P_{AD}\big({-v} + \omega r(\beta)((1 - \lambda) f(0) + \lambda f(1) - v)\big) \geq 0 \\
& v + \bar{t}\, (I - P_{AD})\big({-v} + \omega r(\beta)((1 - \lambda) f(0) + \lambda f(1) - v)\big) \geq 0 \\
& \omega_L \leq \omega \leq \omega_U, \quad 0 \leq \lambda \leq \bar{\lambda}, \quad \bar{t} > 0.
\end{array}$$
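The structure of this maximization can be illustrated with a coarse grid search standing in for the exact solution method. In the sketch below the projector $P_{AD}$ is replaced by a simple rank-one projector onto $\mathrm{span}\{e\}$, and `v`, `r`, the parameter bounds and the grid are all illustrative values, not taken from the paper:

```python
import math

n = 6
v = [1.0, 0.8, 1.2, 0.9, 1.1, 0.7]
norm2 = sum(vi * vi for vi in v)
f0 = v[:]                                  # f(0) = (||v||^2 / v^T v) v = v
f1 = [norm2 / sum(v)] * n                  # f(1) = (||v||^2 / e^T v) e
r = 2.0                                    # stand-in for r(beta)
omega_L, omega_U, lam_bar = 0.05, 10.0, 0.6

def proj(x):                               # stand-in for P_AD: project onto span{e}
    mean = sum(x) / n
    return [mean] * n

def comp(x):                               # I - P_AD
    mean = sum(x) / n
    return [xi - mean for xi in x]

def max_step(lam, omega):
    """Largest t with v + t*P p >= 0 and v + t*(I-P) p >= 0."""
    p = [-vi + omega * r * ((1 - lam) * a + lam * b - vi)
         for vi, a, b in zip(v, f0, f1)]
    t = math.inf
    for q in (proj(p), comp(p)):
        for vi, qi in zip(v, q):
            if qi < 0:                     # only negative components bind
                t = min(t, -vi / qi)
    return t

grid = [(max_step(l / 10 * lam_bar, w), l / 10 * lam_bar, w)
        for l in range(11)
        for w in (omega_L, 0.5, 1.0, 2.0, 5.0, omega_U)]
best_t, best_lam, best_omega = max(grid)
print("best step %.3f at lambda=%.2f omega=%.2f" % (best_t, best_lam, best_omega))
```

A grid search is of course far cruder than solving the problem exactly, but it makes the trade-off visible: each $(\lambda, \omega)$ pair fixes a direction, and the step length is limited by the first projected component to hit zero.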

Numerical experience has shown that $t(\alpha)$ is usually a large proportion of $\bar{t}$, typically more than 0.99 if $\alpha \leq 0.1$, so that the maximizer of $\bar{t}$ approximately also maximizes $t(\alpha)$. Now we transform the variables of this optimization problem as follows:

$$\eta_1 := 1/\bar{t}, \qquad \eta_2 := \omega \lambda / \bar{\lambda}, \qquad \eta_3 := \omega (1 - \lambda / \bar{\lambda}),$$

with inverse transformation

$$\bar{t} = 1/\eta_1, \qquad \omega = \eta_2 + \eta_3, \qquad \lambda = \bar{\lambda} \eta_2 / (\eta_2 + \eta_3).$$

We introduce vectors $q_0$ and $q_1$,

$$q_0 := r(\beta)(f(0) - v), \qquad q_1 := r(\beta)\big((1 - \bar{\lambda}) f(0) + \bar{\lambda} f(1) - v\big).$$

In terms of the new variables, the problem becomes

$$\begin{array}{ll}
\min & \eta_1 \\
\text{s.t.} & \eta_1 v + P_{AD}(-v) + \eta_3 P_{AD}\, q_0 + \eta_2 P_{AD}\, q_1 \geq 0 \\
& \eta_1 v + (I - P_{AD})(-v) + \eta_3 (I - P_{AD})\, q_0 + \eta_2 (I - P_{AD})\, q_1 \geq 0 \\
& \omega_L \leq \eta_2 + \eta_3 \leq \omega_U, \quad \eta_2 \geq 0, \quad \eta_3 \geq 0,
\end{array}$$

a linear program in 3 variables. Remark that the constraint $\eta_1 \geq 0$ is redundant. We apply the dual simplex method to this problem, starting from the initial feasible solution

$$\eta_2 = \omega_L, \quad \eta_3 = 0, \quad \eta_1 = -\min_{1 \leq i \leq n} \min\big(e_i^T V^{-1} P_{AD}(-v + \omega_L q_1),\ e_i^T V^{-1} (I - P_{AD})(-v + \omega_L q_1)\big).$$

We stop when an optimal solution to this auxiliary problem is found, or after a fixed maximum number of simplex iterations. In this way, the procedure takes only $O(n)$ operations. In our experiments, an optimal solution to the auxiliary problem was always found before the iteration limit was reached.

The above paragraphs fully describe our implementation of the search direction step in Algorithm 1. For the initialization, Step 0, we use the self-dual reformulation of Ye, Todd and Mizuno [13].
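The point of the change of variables is that maximizing $\bar{t}$ becomes minimizing $\eta_1$ subject to constraints that are linear in $(\eta_1, \eta_2, \eta_3)$. A quick round-trip check of a transformation of this shape (the value of `lam_bar` and the test point are illustrative):

```python
lam_bar = 0.6   # illustrative stand-in for lambda_bar

def forward(t, omega, lam):
    """(t, omega, lambda) -> (eta1, eta2, eta3)."""
    return 1.0 / t, omega * lam / lam_bar, omega * (1 - lam / lam_bar)

def inverse(eta1, eta2, eta3):
    """(eta1, eta2, eta3) -> (t, omega, lambda)."""
    return 1.0 / eta1, eta2 + eta3, lam_bar * eta2 / (eta2 + eta3)

t, omega, lam = 0.8, 2.5, 0.3
recovered = inverse(*forward(t, omega, lam))
assert all(abs(a - b) < 1e-9 for a, b in zip((t, omega, lam), recovered))

# The bound omega_L <= omega <= omega_U turns into the linear constraint
# omega_L <= eta2 + eta3 <= omega_U, and 0 <= lambda <= lambda_bar turns
# into eta2 >= 0, eta3 >= 0.
eta1, eta2, eta3 = forward(t, omega, lam)
assert abs((eta2 + eta3) - omega) < 1e-9
print("round trip ok:", recovered)
```

The round trip confirms that the map is a bijection on its domain, so nothing is lost by working in the $\eta$ variables.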

In particular, we use the all-one vector as an initial solution. Because we use standard double-precision floating point arithmetic, we cannot find an exact optimal solution. Therefore, we relax our stopping criterion to eight digits of precision, as described in Xu, Hung and Ye [12]. This criterion is the self-dual analogue of the infeasible interior-point stopping criterion of Lustig, Marsten and Shanno [7, 8].

We have applied the above procedure to those feasible Netlib problems [1] for which no BOUNDS section is specified in the MPS input file. We have used the parameters

$$\beta = 0.7, \quad \bar{\delta} = 5, \quad \omega_L = 0.05, \quad \omega_U = 10,$$

and we have tested three different values for $\alpha$, viz. $\alpha \in \{1, 0.1, 0.01\}$. The results are listed in Table 1. It appears that the wide central region choice $\alpha < 1$ performs better than the path-following choice $\alpha = 1$. The choices $\alpha = 0.1$ and $\alpha = 0.01$ appear to be more or less equally efficient. We remark that our results are comparable to [7], but are not yet competitive with the state-of-the-art interior point codes [8, 12]. However, we have implemented an $O(\sqrt{n}\, L)$ central region algorithm exactly as described in the previous pages, whereas the implementations of [8] and [12] do not have any theoretical convergence guarantee. Unlike other implementations we are aware of, we do not use any preprocessing, we take equal step lengths in the primal and the dual space, directions are not corrected, and we start simply from the all-one solution.

6. Concluding remarks

Most $O(\sqrt{n}\, L)$ iteration interior point algorithms closely trace the central path, or more generally a target sequence [2]. In this paper, however, we provided a generic algorithm in which the iterates follow the central path only in a very loose sense. Yet, it achieves the $O(\sqrt{n}\, L)$ iteration bound under mild conditions on the target points. Interestingly, the targets do not need to be traced.
In fact, our method fits in the adaptive step methodology, where the step lengths are only restricted by a neighborhood of a region of centers. We believe that our approach can help to further reduce the gap between theory and practice of interior point methods.

Table 1: Number of main iterations for each of the tested Netlib problems (25FV47, ADLITTLE, AFIRO, AGG, AGG2, AGG3, BANDM, BEACONFD, BLEND, BNL1, BNL2, BRANDY, D2Q06C, DEGEN2, DEGEN3, E226, FFFFF800, ISRAEL, LOTFI, SC105, SC205, SC50A, SC50B, SCAGR25, SCAGR7, SCFXM1, SCFXM2, SCFXM3, SCORPION, SCRS8, SCSD1, SCSD6, SCSD8, SCTAP1, SCTAP2, SCTAP3, SHARE1B, SHARE2B, SHIP04L, SHIP04S, SHIP08L, SHIP08S, SHIP12L, SHIP12S, STOCFOR1, STOCFOR2, WOOD1P), for $\alpha = 1.00$, $\alpha = 0.10$ and $\alpha = 0.01$. An asterisk (*) means that the program did not attain 8 digits of precision within 100 iterations.

References

[1] Gay, D.M., "Electronic mail distribution of linear programming test problems," Mathematical Programming Society COAL Newsletter 13 (1985).

[2] Jansen, B., Roos, C., Terlaky, T. and Vial, J.-Ph., "Primal-dual target-following algorithms for linear programming," Report, Faculty of Technical Mathematics and Informatics, Delft University of Technology, The Netherlands.

[3] Karmarkar, N.K., "A new polynomial-time algorithm for linear programming," Combinatorica 4 (1984) 373-395.

[4] Kojima, M., Megiddo, N., Noma, T. and Yoshise, A., A Unified Approach to Interior Point Algorithms for Linear Complementarity Problems, Springer-Verlag, Berlin.

[5] Kojima, M., Mizuno, S. and Yoshise, A., "A primal-dual interior point algorithm for linear programming," in Progress in Mathematical Programming: Interior-Point and Related Methods, pp. 29-47 (ed. Megiddo, N.), Springer-Verlag, New York.

[6] Kojima, M., Mizuno, S. and Yoshise, A., "A polynomial algorithm for a class of linear complementarity problems," Mathematical Programming 44 (1989) 1-26.

[7] Lustig, I.J., Marsten, R.E. and Shanno, D.F., "Computational experience with a primal-dual interior point method for linear programming," Linear Algebra and its Applications 152 (1991).

[8] Lustig, I.J., Marsten, R.E. and Shanno, D.F., "On implementing Mehrotra's predictor-corrector interior point method for linear programming," SIAM Journal on Optimization 2 (1992).

[9] Mehrotra, S. and Ye, Y., "Finding an interior point in the optimal face of linear programs," Mathematical Programming 62 (1993).

[10] Mizuno, S., Todd, M.J. and Ye, Y., "On adaptive-step primal-dual interior-point algorithms for linear programming," Mathematics of Operations Research 18 (1993).

[11] Sturm, J.F. and Zhang, S., "An $O(\sqrt{n}\,L)$ iteration bound primal-dual cone affine scaling algorithm for linear programming," submitted to Mathematical Programming.

[12] Xu, X., Hung, P.-F. and Ye, Y., "A simplified and self-dual linear programming algorithm and its implementation," Working Paper, College of Business Administration, The University of Iowa, Iowa City, IA, 1994.

[13] Ye, Y., Todd, M.J. and Mizuno, S., "An $O(\sqrt{n}\,L)$-iteration homogeneous and self-dual linear programming algorithm," Mathematics of Operations Research 19 (1994).


More information

In English, this means that if we travel on a straight line between any two points in C, then we never leave C.

In English, this means that if we travel on a straight line between any two points in C, then we never leave C. Convex sets In this section, we will be introduced to some of the mathematical fundamentals of convex sets. In order to motivate some of the definitions, we will look at the closest point problem from

More information

Research Article On the Simplex Algorithm Initializing

Research Article On the Simplex Algorithm Initializing Abstract and Applied Analysis Volume 2012, Article ID 487870, 15 pages doi:10.1155/2012/487870 Research Article On the Simplex Algorithm Initializing Nebojša V. Stojković, 1 Predrag S. Stanimirović, 2

More information

GENERALIZED CONVEXITY AND OPTIMALITY CONDITIONS IN SCALAR AND VECTOR OPTIMIZATION

GENERALIZED CONVEXITY AND OPTIMALITY CONDITIONS IN SCALAR AND VECTOR OPTIMIZATION Chapter 4 GENERALIZED CONVEXITY AND OPTIMALITY CONDITIONS IN SCALAR AND VECTOR OPTIMIZATION Alberto Cambini Department of Statistics and Applied Mathematics University of Pisa, Via Cosmo Ridolfi 10 56124

More information

Largest dual ellipsoids inscribed in dual cones

Largest dual ellipsoids inscribed in dual cones Largest dual ellipsoids inscribed in dual cones M. J. Todd June 23, 2005 Abstract Suppose x and s lie in the interiors of a cone K and its dual K respectively. We seek dual ellipsoidal norms such that

More information

PRIMAL-DUAL ENTROPY-BASED INTERIOR-POINT ALGORITHMS FOR LINEAR OPTIMIZATION

PRIMAL-DUAL ENTROPY-BASED INTERIOR-POINT ALGORITHMS FOR LINEAR OPTIMIZATION PRIMAL-DUAL ENTROPY-BASED INTERIOR-POINT ALGORITHMS FOR LINEAR OPTIMIZATION MEHDI KARIMI, SHEN LUO, AND LEVENT TUNÇEL Abstract. We propose a family of search directions based on primal-dual entropy in

More information

APPROXIMATING THE COMPLEXITY MEASURE OF. Levent Tuncel. November 10, C&O Research Report: 98{51. Abstract

APPROXIMATING THE COMPLEXITY MEASURE OF. Levent Tuncel. November 10, C&O Research Report: 98{51. Abstract APPROXIMATING THE COMPLEXITY MEASURE OF VAVASIS-YE ALGORITHM IS NP-HARD Levent Tuncel November 0, 998 C&O Research Report: 98{5 Abstract Given an m n integer matrix A of full row rank, we consider the

More information

12. Interior-point methods

12. Interior-point methods 12. Interior-point methods Convex Optimization Boyd & Vandenberghe inequality constrained minimization logarithmic barrier function and central path barrier method feasibility and phase I methods complexity

More information

An EP theorem for dual linear complementarity problems

An EP theorem for dual linear complementarity problems An EP theorem for dual linear complementarity problems Tibor Illés, Marianna Nagy and Tamás Terlaky Abstract The linear complementarity problem (LCP ) belongs to the class of NP-complete problems. Therefore

More information

Input: System of inequalities or equalities over the reals R. Output: Value for variables that minimizes cost function

Input: System of inequalities or equalities over the reals R. Output: Value for variables that minimizes cost function Linear programming Input: System of inequalities or equalities over the reals R A linear cost function Output: Value for variables that minimizes cost function Example: Minimize 6x+4y Subject to 3x + 2y

More information

An Infeasible Interior Point Method for the Monotone Linear Complementarity Problem

An Infeasible Interior Point Method for the Monotone Linear Complementarity Problem Int. Journal of Math. Analysis, Vol. 1, 2007, no. 17, 841-849 An Infeasible Interior Point Method for the Monotone Linear Complementarity Problem Z. Kebbiche 1 and A. Keraghel Department of Mathematics,

More information

New stopping criteria for detecting infeasibility in conic optimization

New stopping criteria for detecting infeasibility in conic optimization Optimization Letters manuscript No. (will be inserted by the editor) New stopping criteria for detecting infeasibility in conic optimization Imre Pólik Tamás Terlaky Received: March 21, 2008/ Accepted:

More information

A Distributed Newton Method for Network Utility Maximization, II: Convergence

A Distributed Newton Method for Network Utility Maximization, II: Convergence A Distributed Newton Method for Network Utility Maximization, II: Convergence Ermin Wei, Asuman Ozdaglar, and Ali Jadbabaie October 31, 2012 Abstract The existing distributed algorithms for Network Utility

More information

Convex Optimization Boyd & Vandenberghe. 5. Duality

Convex Optimization Boyd & Vandenberghe. 5. Duality 5. Duality Convex Optimization Boyd & Vandenberghe Lagrange dual problem weak and strong duality geometric interpretation optimality conditions perturbation and sensitivity analysis examples generalized

More information

Robust linear optimization under general norms

Robust linear optimization under general norms Operations Research Letters 3 (004) 50 56 Operations Research Letters www.elsevier.com/locate/dsw Robust linear optimization under general norms Dimitris Bertsimas a; ;, Dessislava Pachamanova b, Melvyn

More information

4TE3/6TE3. Algorithms for. Continuous Optimization

4TE3/6TE3. Algorithms for. Continuous Optimization 4TE3/6TE3 Algorithms for Continuous Optimization (Duality in Nonlinear Optimization ) Tamás TERLAKY Computing and Software McMaster University Hamilton, January 2004 terlaky@mcmaster.ca Tel: 27780 Optimality

More information

Nonlinear Programming

Nonlinear Programming Nonlinear Programming Kees Roos e-mail: C.Roos@ewi.tudelft.nl URL: http://www.isa.ewi.tudelft.nl/ roos LNMB Course De Uithof, Utrecht February 6 - May 8, A.D. 2006 Optimization Group 1 Outline for week

More information

Example: feasibility. Interpretation as formal proof. Example: linear inequalities and Farkas lemma

Example: feasibility. Interpretation as formal proof. Example: linear inequalities and Farkas lemma 4-1 Algebra and Duality P. Parrilo and S. Lall 2006.06.07.01 4. Algebra and Duality Example: non-convex polynomial optimization Weak duality and duality gap The dual is not intrinsic The cone of valid

More information

Unconstrained optimization

Unconstrained optimization Chapter 4 Unconstrained optimization An unconstrained optimization problem takes the form min x Rnf(x) (4.1) for a target functional (also called objective function) f : R n R. In this chapter and throughout

More information

Room 225/CRL, Department of Electrical and Computer Engineering, McMaster University,

Room 225/CRL, Department of Electrical and Computer Engineering, McMaster University, SUPERLINEAR CONVERGENCE OF A SYMMETRIC PRIMAL-DUAL PATH FOLLOWING ALGORITHM FOR SEMIDEFINITE PROGRAMMING ZHI-QUAN LUO, JOS F. STURM y, AND SHUZHONG ZHANG z Abstract. This paper establishes the superlinear

More information

Optimization: Then and Now

Optimization: Then and Now Optimization: Then and Now Optimization: Then and Now Optimization: Then and Now Why would a dynamicist be interested in linear programming? Linear Programming (LP) max c T x s.t. Ax b αi T x b i for i

More information

Local Self-concordance of Barrier Functions Based on Kernel-functions

Local Self-concordance of Barrier Functions Based on Kernel-functions Iranian Journal of Operations Research Vol. 3, No. 2, 2012, pp. 1-23 Local Self-concordance of Barrier Functions Based on Kernel-functions Y.Q. Bai 1, G. Lesaja 2, H. Mansouri 3, C. Roos *,4, M. Zangiabadi

More information

Linear programming. Saad Mneimneh. maximize x 1 + x 2 subject to 4x 1 x 2 8 2x 1 + x x 1 2x 2 2

Linear programming. Saad Mneimneh. maximize x 1 + x 2 subject to 4x 1 x 2 8 2x 1 + x x 1 2x 2 2 Linear programming Saad Mneimneh 1 Introduction Consider the following problem: x 1 + x x 1 x 8 x 1 + x 10 5x 1 x x 1, x 0 The feasible solution is a point (x 1, x ) that lies within the region defined

More information

Nonlinear Optimization: What s important?

Nonlinear Optimization: What s important? Nonlinear Optimization: What s important? Julian Hall 10th May 2012 Convexity: convex problems A local minimizer is a global minimizer A solution of f (x) = 0 (stationary point) is a minimizer A global

More information

Course Notes for EE227C (Spring 2018): Convex Optimization and Approximation

Course Notes for EE227C (Spring 2018): Convex Optimization and Approximation Course Notes for EE227C (Spring 2018): Convex Optimization and Approximation Instructor: Moritz Hardt Email: hardt+ee227c@berkeley.edu Graduate Instructor: Max Simchowitz Email: msimchow+ee227c@berkeley.edu

More information

Convex Programs. Carlo Tomasi. December 4, 2018

Convex Programs. Carlo Tomasi. December 4, 2018 Convex Programs Carlo Tomasi December 4, 2018 1 Introduction In an earlier note, we found methods for finding a local minimum of some differentiable function f(u) : R m R. If f(u) is at least weakly convex,

More information

y Ray of Half-line or ray through in the direction of y

y Ray of Half-line or ray through in the direction of y Chapter LINEAR COMPLEMENTARITY PROBLEM, ITS GEOMETRY, AND APPLICATIONS. THE LINEAR COMPLEMENTARITY PROBLEM AND ITS GEOMETRY The Linear Complementarity Problem (abbreviated as LCP) is a general problem

More information

An interior-point gradient method for large-scale totally nonnegative least squares problems

An interior-point gradient method for large-scale totally nonnegative least squares problems An interior-point gradient method for large-scale totally nonnegative least squares problems Michael Merritt and Yin Zhang Technical Report TR04-08 Department of Computational and Applied Mathematics Rice

More information

Linear Algebra (part 1) : Vector Spaces (by Evan Dummit, 2017, v. 1.07) 1.1 The Formal Denition of a Vector Space

Linear Algebra (part 1) : Vector Spaces (by Evan Dummit, 2017, v. 1.07) 1.1 The Formal Denition of a Vector Space Linear Algebra (part 1) : Vector Spaces (by Evan Dummit, 2017, v. 1.07) Contents 1 Vector Spaces 1 1.1 The Formal Denition of a Vector Space.................................. 1 1.2 Subspaces...................................................

More information

AN EFFICIENT APPROACH TO UPDATING SIMPLEX MULTIPLIERS IN THE SIMPLEX ALGORITHM

AN EFFICIENT APPROACH TO UPDATING SIMPLEX MULTIPLIERS IN THE SIMPLEX ALGORITHM AN EFFICIENT APPROACH TO UPDATING SIMPLEX MULTIPLIERS IN THE SIMPLEX ALGORITHM JIAN-FENG HU AND PING-QI PAN Abstract. The simplex algorithm computes the simplex multipliers by solving a system (or two

More information

1. Introduction The nonlinear complementarity problem (NCP) is to nd a point x 2 IR n such that hx; F (x)i = ; x 2 IR n + ; F (x) 2 IRn + ; where F is

1. Introduction The nonlinear complementarity problem (NCP) is to nd a point x 2 IR n such that hx; F (x)i = ; x 2 IR n + ; F (x) 2 IRn + ; where F is New NCP-Functions and Their Properties 3 by Christian Kanzow y, Nobuo Yamashita z and Masao Fukushima z y University of Hamburg, Institute of Applied Mathematics, Bundesstrasse 55, D-2146 Hamburg, Germany,

More information

CCO Commun. Comb. Optim.

CCO Commun. Comb. Optim. Communications in Combinatorics and Optimization Vol. 3 No., 08 pp.5-70 DOI: 0.049/CCO.08.580.038 CCO Commun. Comb. Optim. An infeasible interior-point method for the P -matrix linear complementarity problem

More information

Finding an interior point in the optimal face of linear programs

Finding an interior point in the optimal face of linear programs Mathematical Programming 62 (1993) 497-515 497 North-Holland Finding an interior point in the optimal face of linear programs Sanjay Mehrotra* Department of lndustrial Engineering and Management Sciences,

More information

Lecture 6: Conic Optimization September 8

Lecture 6: Conic Optimization September 8 IE 598: Big Data Optimization Fall 2016 Lecture 6: Conic Optimization September 8 Lecturer: Niao He Scriber: Juan Xu Overview In this lecture, we finish up our previous discussion on optimality conditions

More information