Self-concordant Tree and Decomposition Based Interior Point Methods for Stochastic Convex Optimization Problem

Size: px
Start display at page:

Download "Self-concordant Tree and Decomposition Based Interior Point Methods for Stochastic Convex Optimization Problem"

Transcription

1 Self-concordant Tree and Decomposition Based Interior Point Methods for Stochastic Convex Optimization Problem Michael Chen and Sanjay Mehrotra May 23, 2007 Technical Report Department of Industrial Engineering and Management Sciences, Robert R. McCormick School of Engineering, Northwestern University, Evanston, Illinois Abstract We consider barrier problems associated with two and multistage stochastic convex optimization problems. We show that the barrier recourse functions at any stage form a selfconcordant family with respect to the barrier parameter. We also show that the complexity value of the first stage problem increases additively with the number of stages and scenarios. We use these results to propose a prototype primal interior point decomposition algorithm for the two-stage and multistage stochastic convex optimization problems admitting self-concordant barriers. Introduction We study the self-concordance properties of two and multistage stochastic convex optimization problems in this paper. These properties are then used to analyze prototype decomposition based interior point algorithms for solving these problems. Let us consider a two-stage stochastic convex problem with K scenarios: min c T x + K k= ηk x) s.t. x G L ) η k x) := min d kt y k s.t. y k G k L k x), where G and G k, k =,..., K are compact convex sets of x R n and y k R nk, k =,..., K, respectively. We assume that the set G has a non-empty relative interior, and the sets G k, k =,..., K, have a non-empty relative interior for any x that is in the relative interior of G Supported by NSF grants DMI-02005, DMI and ONR grant N Department of IE/MS, Northwestern University, Evanston, IL 60208, m-chen@northwestern.edu Department of IE/MS, Northwestern University, Evanston, IL 60208, mehrotra@iems.northwestern.edu

2 complete recourse). L and L k, k =,..., K, represent affine spaces, i.e., L {x Ax = b}, and L k x) {y k Q k y k = q k + T k x}, k =,..., K. Without loss of generality, we also assume that A and Q k, k =,..., K have full row rank. The objective functions of both first and second stage problems are linear, and we assume that d k, k =,..., K, has already absorbed the scenario probabilities π k. We now discuss the assumptions imposed on the form of ). The assumption that the feasible sets are compact and have a non-empty interior can be satisfied by introducing artificial variables and constraints. We can reformulate a nonlinear convex objective function d k y k ) as min z k and z k d k y k ) 0. We can also redefine a convex set G k = {y k h k i x, yk ) 0, i =,..., l k } equivalently as Ḡk = {Y k, ȳ) h k i ȳ, yk ) 0, ȳ x = 0}, where y k, ȳ) are the second stage decision variables and ȳ x = 0 becomes part of the definition of L k x). This satisfies the assumption that the first and the second stage variables do not interact in the nonlinear constraints. A prior assurance of full recourse assumption is more difficult, however, for a given first stage solution the feasibility of the second stage problems can be ensured by using artificial variables. A discussion of problems without complete recourse assumption is beyond the scope of this paper. We refer interested readers to Birge and Louveaux [2] for some insightful examples and the potential difficulties without this assumption. The decomposition based primal interior point algorithms developed in this paper for the two-stage stochastic convex problem generalize those proposed by Zhao [8] and Mehrotra and Ozevin [8, 9] for two-stage linear and two-stage semi-definite problems. However, the analysis in the current case requires different proof techniques since we no longer assume specific form of the feasible regions. In particular, this analysis draws from the literature on sensitivity and stability analysis of nonlinear programming [5]. A central step to this analysis is showing that the second stage barrier recourse functions and the first stage barrier functions form self-concordant families. In the multistage case, any intermediate stage recourse function contains recourse functions of the next stage, and hence imposes extra difficulty in the analysis. Using the proven selfconcordant family properties of each last two-stage problem as a starting point, we prove by induction that each objective function and the recourse function of any stage are self-concordant families with respect to the barrier parameter. The complexity values accumulate additively across the scenario tree from the last stage to the first stage. This result leads to showing that the number of the first stage Newton steps required to solve multistage stochastic convex programs are polynomial in the total complexity value, under the assumption that the second and the subsequent stage centering problems are solved exactly in a decomposition setting. This paper is organized as follows. In Section 2 we consider the two-stage problem and prove that each barrier recourse function forms a self-concordant family. In Section 3 we analyze the first stage barrier objective function and show that it also forms a self-concordant family. In Section 4 we propose both short-step and long-step path following interior point algorithms, and analyze the rate of convergence of the long step algorithm. 
Section 5 shows the selfconcordant family property of any intermediate stage recourse functions and objective functions for the multistage problem. This section also gives a prototype decomposition algorithm for the multistage problem. Section 6 gives some concluding remarks on the results presented in this paper. 2

3 2 Two-Stage Stochastic Convex Programs 2. Two-Stage Barrier Recourse Problem We regularize the first and second stage problems by barrier functions bx) on intg and B k y k ) on intg k, k =,..., K. Here intg and intg k represent the relative interiors of sets G and G k, k =,..., K, respectively. The two-stage barrier problem is thus: min fx, ) c T x + bx) + K k= ηk x, ) s.t. x L 2) η k x, ) min d kt y k + B k y k ) s.t. y k L k x). 3) Here is a positive scalar, bx) and B k y k ) are assumed to be non-degenerate and strongly selfconcordant-barrier functions with complexity values ϑ 0 and ϑ k of intg and intg k, k =,..., K, respectively. Definition 2. Nesterov and Nemirovskii [] and Renegar [3]. Let G R n be a closed convex domain, and intg be its non-empty interior. A function Bx) : intg R is called non-degenerate and strongly α-self-concordant if 2 Bx) is positive definite, and 3 Bx)[h, h, h] 2α 2 2 Bx)[h, h]) 3 2, x intg, h R n. 4) Furthermore, if Bx) as x converges to a boundary point of G, then Bx) is called a ϑ-self-concordant barrier if α = in 4) and ϑ sup{ Bx) T [ 2 Bx)] Bx) x intg} <. 5) The parameter ϑ is called the complexity value of a self-concordant barrier. Definition 2.2 Nesterov and Nemirovskii [] A family of functions {ψx, ), > 0} is strongly self-concordant on nonempty open convex domain Ω R n with positive scalar parameter functions α), γ), ν), ξ), and σ), where α), γ), ν) are continuously differentiable, if the following properties hold: SCF) Convexity and differentiability. ψx, ) is convex in x, continuous in x, ) Ω R +, thrice continuously differentiable in x, and twice continuously differentiable in. SCF2) Self-concordance of members. For any > 0, ψx, ) is α)-self-concordant on Ω. SCF3) Compatibility of neighbors. For every x, ) Ω R + and h R n, x ψx, ), 2 xψx, ) are continuously differentiable in, and {h T x ψx, )} {ln ν)} h T x ψx, ) ξ)α) 2 h T 2 xψx, )h) 2 6) {h T 2 xψx, )h} {ln γ)} h T 2 xψx, )h 2σ)h T 2 xψx, )h. 7) We will call the parameter ξ) the complexity value of a self-concordant family. 3

4 The definition of a self-concordant function family defined by Nesterov and Nemirovskii [] is more general than the above definition. For our development, the above simpler definition suffice. We can scale an α-self-concordant function by a factor α to get an -self-concordant function. In the following we use abbreviation self-concordant for non-degenerate and strongly self-concordant, and barrier for non-degenerate and strongly self-concordant-barrier. Nondegeneracy requires a barrier function to be a strictly convex function. Nesterov and Nemirovskii [] show the existence of an universal barrier function for any closed convex set. The parameter ϑ is called the complexity value of a self-concordant barrier. Appropriate barriers for the linear, quadratic, semi-definite, and second-order cone problems are well known. In particular, for R+, n Bx) = ln n i= ln x i with complexity value ϑ = n; for the second-order cone Kn 2 = {t, x) R n+ t x 2 }, Bx) = ln t 2 x 2 2 ) with ϑ = 2; and for the cone of symmetric positive semi-definite n n matrices X S n +, Bx) = lndetx)) with ϑ = n 2. More recently, Faybusovich [3, 4] has given self-concordant barriers for cones generated by Tchebychev systems and report experiments with such barriers. Recently Ariyawansa and Zhu [] have given a volumetric center based algorithm for two-stage semi-definite programs. Since the Vaidya volumetric barriers are another example of a self-concordant barrier, the analysis of this paper also applies to the volumetric barrier case. 2.2 Properties of Two-Stage Barrier Recourse Function In this section we show that the barrier recourse function η k x, ) is differentiable in x and, convex in x, and concave in ; it is self-concordant, and forms a self-concordant family for > 0. Since we study common properties of any barrier recourse function η k x) for k =,..., K, we drop the superscript k and represent a barrier recourse function as: where ρx, ) = min {ry) Qy = q + T x}, 8) ry) = d T y + By). 9) Let y represent the optimal solution of 8) and u be the corresponding Lagrangian multiplier for a given and x. Since ry) is strictly convex, y is unique. The Lagrangian multiplier u is also unique because the KKT conditions ) have a unique solution since Q is a full row rank matrix. Let F x, y, u, ) d + y By) + Q T u Qy q T x ). 0) For a fixed x and the optimal solution y, u ) satisfies the first order KKT conditions: The Lagrangian function for the second stage problem is F x, y, u, ) = 0. ) Lx, y, u, ) = d T y + By) + u T Qy u T q u T T x. 2) Throughout the paper y and u means the optimal primal and dual solutions, and we write y x) to stress that y is a function of x for any fixed > 0 we shall prove this later); 4

5 similarly y ) means y is a function of for any fixed x; this convention extends to u x), u ), ρx), and ρ) as well. We also let g y By), H 2 yby), R QHy) 2, 3) where H /2 y) represent the inverse of the square-root of matrix Hy). This square-root exist since Hy) is a positive definite matrix. From ) it is easy to see that u = QH Q T ) QH y ry ) = RR T ) RH /2 y ry ) 4) 2.2. Convexity and Self-Concordance of the Barrier Recourse Function of x In this section we show that ρx) is a convex and twice continuously differentiable function for a given > 0. Proposition 2. For any > 0, the barrier recourse function ρx) is convex in x. Proof: Let α + α 2 =, α, α 2 0, and x, x 2 intg. Now, α ρx ) + α 2 ρx 2 ) = d T α y x ) + α 2 y x 2 )) + α By x )) + α 2 By x 2 )) d T α y x ) + α 2 y x 2 )) + Bα y x ) + α 2 y x 2 )) ρα x + α 2 x 2 ). The first inequality holds because of the convexity of B ) and the second inequality holds since α y x ) + α 2 y x 2 ) is feasible for Qy = q + T α x + α 2 x 2 ). Lemma 2. For each fixed > 0 the optimal solution y x) and the Lagrangian multiplier u x) are differentiable functions of x. In particular, x y ) ) x) Hx) x u = 2 Rx) T Rx)Rx) T ) T x) Rx)Rx)). 5) T Proof: Let H := Hx) and R := Rx). From ) we have ) y,u,x) F y, u, x) = y,u) F y, u, x). x F y, u, x) = H QT. 0 Q 0. T. 6) The matrix y,u) F y, u, x) is invertible since H is positive definite and Q has full row rank. Its inverse is expressed as: ) y,u) F y, u, x) = H H 2 R T RR T ) RH 2 H 2 R T RR T ) RR T ) RH 2 RR T ). 7) Since the conditions of Implicit Function Theorem [5] are satisfied at y, u ), we have x y ) x) x u = [ x) y,u) F y, u, x)] x F y, u, x), giving us the desired result. 5

6 Lemma 2.2 Let x intg R n, p R n, ϱt) := ρx + tp), then we have x ρx) = T T u x) = T T Rx)Rx) T ) Rx)Hx) /2 y ry ) = x y x) T y ry ), 2 xρx) = T T Rx)R T x) ) T, ϱ 0) = x ρx) T p = y ry )[s], ϱ 0) = 2 ρx)[p, p] = 2 yby )[s, s], ϱ 0) = 3 xρx)[p, p, p] = 3 yby )[s, s, s], where x y x) = Hx) /2 Rx) Rx)Rx) T ) T and s = x y x)p. Proof: Because of strong duality we have ρx) = Lx, y x), u x), ). Hence, by using chain rule in 2) and first order KKT conditions ) we have, x ρx) = x Lx, y x), u x)) = d T + u x) T Q) x y x) + x By x)) + x u x) T Qy x) q T x) T T u x) = d T + y By x)) + u x) T Q) x y x) T T u x) = T T u x) 8) = x y x) y ry ) using 4) and 5)) 9) By differentiating 8), and using 5) we have 2 xρx) = T T x u x) 20) = T T Rx)R T x) ) T. 2) Hence, ϱ t) = p T 2 xρx + tp)p = T T Rt)R T t) ) T, where y t) := y x + tp), Ht) := Hy t)), and Rt) := QH 2 t). In particular, at t = 0 we have ϱ 0) = p T T T RR T ) T p = 2 yby )[ y x)p, y x)p], where the last equality uses Lemma 2.. Now recall that RR T = QH Q T = Q [ 2 yb y ) ] Q T. Hence, we have ϱ t) = p T T T Rt)R T t) ) Rt)R T t)) Rt)R T t) ) T p = p T T T Rt)R T t) ) QH t)ht) H t)q T Rt)R T t) ) T p. Now, at t = 0 by using Lemma 2. we have ϱ 0) = p T x y x) T Ht) t=0 x y x)p = 3 xby )[ x y x)p, x y x)p, x y x)p]. 22) We now establish that for a fixed > 0, ρx, ) is self-concordant. Theorem 2. For each > 0, the barrier recourse function ρx) is -self-concordant. Proof: By Lemma 2.2 and the fact that B is a self-concordant barrier we have 3 ρx)[p, p, p] = 3 yby )[ y x)p, y x)p, y x)p] 2 2 yby )[ y x)p, y x)p] ) 3 2 = 2 p T T T Rx)R T x) ) ) 3 2 T p = x ρx)[p, p] )

7 2.2.2 Differentiability and Concavity of the Recourse Function of In this section we bound the second derivative of ρ) for a given x. Lemma 2.3 For an x intg, optimal second stage solution y ), and Lagrangian multiplier u ) are well defined differentiable functions of, > 0. Furthermore, y ) ) u ) = H) 2 I R) T R)R) T ) R) ) ) H) 2 g) R)R) T ) R)H), 2 g) and { x ρx, )} = T T Rx, )R T x, )) Rx, )H 2 x, )gx, ). Proof: Similar to the proof of Lemma 2., we have an equation system F y, u, ) = 0. Differentiating F y, u, ) with respective to we get y,u,) F y, u, ) = y,) F y, u, ) ). x F y, u, ) = H) QT. g). 23) Q 0. 0 Now by applying the Implicit Function Theorem we have y ) ) u ) = H) 2 I R) T R)R) T ) R) ) ) H) 2 g) R)R) T ) R)H). 24) 2 g) From 8) we have { x ρx, )} = { T T u ) } 24). which gives the desired result from using Lemma 2.4 For an x intg the barrier recourse function ρ) is a concave function and it satisfies ρ) ϑ. Proof: Because of strong duality we have ρ) = Lx, y ), u ), ) for all > 0. Hence, ρ) = Lx, y ), u )) = d T + u ) T Q + g))y ) + By )) + u ) T Qy ) q T x) = By )). By differentiating 25) and applying 24) we get ρ) = y By )) T y ) = g) T H) g) H) 2 R) T R)R) T ) R)H) 2 g) ) = g)t H) 2 I R) T R)R) T ) R) ) H) 2 g) 26) g)t H) g) ϑ. 27) 25) 7

8 The first inequality above holds since I R T RR T ) R) is a projection matrix. The second inequality holds because By) is a self-concordant barrier with complexity value ϑ. The concavity of ρ) follows from 26) because H) is a positive definite matrix. 2.3 Self-Concordant Family of the Barrier Recourse Function In this section we show that when both x and vary, {ρx, ), > 0} is a self-concordant family as defined below. Proposition 2.2 The second stage barrier objective function ry, ), > 0 forms a selfconcordant family with parameters α) =, γ) = ν) =, ξ) = ϑ, σ) = 2. A proof of Proposition 2.2 is straightforward. function is self-concordant family. Next we show that the barrier recourse Theorem 2.2 The barrier recourse function {ρx, ), > 0} is a self-concordant family with parameters: α) =, γ) = ν) =, ξ) = ϑ, σ) = 2. In particular, for any h, h T x ρx, ) ϑ h T 2 xρx, )h, and {h T 2 xρx, )h} ht 2 xρx, )h. Proof: The conditions SCF) and SCF2) follow from Theorem 2.. We now show SCF3). For simplicity we use R := Rx, ), H := Hx, ) and g := gx, ). From Lemma 2.3 we have h T { x ρx, )} = h T T T RR T ) RH 2 g h T T T RR T ) T h g T H 2 R T RR T ) RH 2 g ht x ρ 2 x)h g T H g ϑ h T x ρ 2 x)h. 28) The second inequality above uses 2) and the fact that RRR T ) R is an orthogonal projection matrix; the last inequality holds since B ) is a self-concordant barrier of complexity value ϑ. 8

9 This proves 6) for ν) =, ξ) = ϑ, and α) =. Now, { h T 2 xρx, )h } = { ht T T RR T ) } T h { } ) = ht T T RRT T h ) ) ) = ht T T RRT RRT RRT T h ) { ) = ht T T RRT Q H) Q T } RRT T h ) ) = ht T T RRT Q H) {H} H) Q T RRT T h = h T { 2 yry ) } h h T 2 yry ) h = ht 2 xρx, )h, 29) where h = H Q T RR T ) T h. The inequality above follows from Proposition 2.2. The last equality follows by substituting for h and 2 xρx, ). Hence, the inequality 7) holds for ρx, ) for parameter functions γ) =, σ) = /2. 3 Properties of the First Stage Objective Function In Section 2 we studied properties of the barrier recourse function ρx, ). For a K scenario problem, each of the K barrier recourse functions satisfy: i) the optimal solution y k and optimal Lagrangian multiplier u k of 3) are continuously differentiable functions of x or ; ii) the functions η k x, ), k =,..., K are strictly convex in x, and concave in ; iii) the functions η k x, ), k =,..., K are twice continuously differentiable in x or, and continuously differentiably in x, ); iv) the functions η k x, ), k =,..., K are a self-concordant family. We now analyze the self-concordance properties of the first stage function: fx, ) = c T x + bx) + K η k x, ). 30) In particular, we show that this first stage objective function is also a self-concordant family. We use the following notations addressing scenarios consistently with our discussion of the properties of ρ ) in Section 2.2. η k ),u k ),y k,g k ),H k ),R k ),F k ) and L k ) are functions of x and/or, k =,..., K. We let g 0 ) = b ) and H 0 ) = 2 b ), g ) = f ), H ) = 2 f ), and R ) = AH ) 2. We see immediately that gx) = x bx) + K k= xηx), and Hx) = 2 xbx)+ K k= 2 xηx). It is useful to represent these summation in matrix notation, and for this purpose we define Î = [I,..., I], h [ ] T = h T Î, ḡ T x, ) = g 0T x),..., g KT x, ), k= 9

10 D 0 x) = H 0 2 x), D0 = H 0 2 x), D i x, ) = T it R i x, )R it x, ) R i x, )R it x, )) 2 R i x, )H i 2 x, ), i =,..., K, and ) 2, Di x, ) = Dx, ) = blkdiagd 0 x, ),..., D K x, )), Dx, ) = blkdiag D 0 x, ),..., D K x, )). 3) Here blkdiag represents a block diagonal diagonal matrix. Let ϑ = K ϑ i. i=0 Corollary 3. The first stage objective function fx, ) is strictly convex in x, concave in, thrice differentiable in x, twice continuously differentiable in, and continuously differentiable in x, ). Furthermore, the optimal solutions x ) and u ) are continuously differentiable in. Specifically, we have K K Hx, ) = 2 xbx)+ 2 xηx) = 2 xbx)+ T kt R k x, )R kt x, )) T k = ÎDDT Î T, k= { x fx, )} = g0 x) + x ) u ) k= K T kt Rx, )Rx, ) T ) Rx, )Hx, ) /2 g k x, ) = ÎD Dḡ, k= {fx, )} [ ϑ ϑ 0, 0] ) ) = H) 2 I R) T R)R) T ) R))H) 2 g) R)R) T ) R)H). 32) 2 g) Proof. Conclusions about Hx, ), { x fx, )}, and {fx, )} follow from Lemma 2.2, 2.3, x fx, ) + A and 2.4, and straightforward linear algebra. Let F x, u, ) = T ) u. At the Ax b optimum solution x x, ), u x, ) from KKT conditions of 2) we know that F x x, ), u x, ), ) = 0. Since, ) x,u,) F = x,u) F. F = 2 xfx, ) A T. { x fx, )}, A 0. 0 and the matrix x,) F is invertible, 32) follows from using the Implicit Function Theorem and straightforward computation. Lemma 3. The function fx, ) is -self-concordant. 0

11 Proof. Using Lemma 2.2, Theorem 2., and the assumption that the first stage barrier is -self-concordant we have 3 x fx)[p, p, p] 3 = K 3 xbx)[p, p, p] + 3 yby k )[s, s, s] 3 3 = 2) 3 k= 3 xbx)[p, p, p] ) K k= 2 2 xbx)[p, p] ) ) xbx)[p, p] yby k )[s, s, s] K k= K 2 yby k )[s, s] k= s := x y k x)p) ) ) yby k )[s, s] ) 2 = ) 3 2 using 3 2 ) ) 2 3 ) 2 ) x fx)[p, p] ) 2. The last inequality above uses that the barriers are self-concordant functions. We now show that the first stage objective functions forms a self-concordant family for > 0. Theorem 3. The first stage barrier recourse functions {fx, ), > 0} form a self-concordant family with parameters: α) =, γ) = ν) =, ξ) = ϑ, σ) = 2. Proof: SCF) and SCF2) are shown in Corollary 3. and Lemma 3.. We now prove the two inequalities 6 7) of SCF3). By Corollary 3. we have h T { x fx)} = ht D Dḡ ht h DD T ḡt D T Dḡ K = h T H 0 + T kt R k R ) ) kt T k h k= g0t H 0 g 0 + K g kt H k 2 R kt R k R ) kt R k H k 2 g k k= h T 2 xfx)h g0t H 0 g 0 + K g kt H k g k 33) k= ) h T 2 xfx)h K ϑ 0 + ϑ k = k= ϑ h T 2 xfx)h 34) Here 33) used Corollary 3. and the fact that R kt R k R kt ) R k is an orthogonal projection matrix. The inequality in 34) uses the definition of the complexity value of self-concordant

12 barriers bx), B k y k ), k =,... K. This proves 6) for our choice of parameters. Now by using 29) from Theorem 2.2 for each η k x) we obtain h T { 2 xfx) } h = h T H 0 h + K k= { h T 2 xη k } x, ) h T H 0 h K + { ht 2 xη k x, ) h T H 0 h + k= K k= = ht 2 xfx)h. ht 2 xη k x, )h This proves the second inequality 7) in the comparability with neighbors condition SCF3) in Defintion Interior Decomposition Algorithms For Two-Stage Convex Stochastic Programs We now analyze short step and long step primal path following interior point methods. We show that both of these methods obtain an ɛ-optimal solution in polynomial number of first stage Newton iterations. This analysis assumes that x fx, ) and 2 xfx, ) are computed exactly, i.e., the second stage problems 3) are solved exactly. As a consequence the algorithms presented in this section are only prototype algorithms, and their complexity is presented in terms of first stage Newton iterations. Similar assumptions were made in [8, 8, 9], since this assumption considerably simplifies the analysis. The Newton step x is defined as a solution of the system 2 xfx, ) x A T u = x fx, ) 35) h } h A x = 0. 36) Let x ) denote the optimal solution of 2) for a give. The set of solutions {x ), > 0} is called the central path. The complexity analysis focuses on the upper bound of the number of inner loops per updating of. For this purpose we define: δx, ) := xt 2 xfx) x = xfx) T [ 2 xfx)] 2 xfx). 37) φx, ) := fx, ) fx ), ) x := x x ) δx, ) := x T 2 fx) x. 38) At a major iteration k both short and long-step algorithms generate suitable approximations of x k+ ) starting from a suitable approximation of x k ). We assume that an initial 2

13 solution x 0 is available that is a suitable approximation of x 0 ) for an initial 0. The short and long step algorithms differ in the rate of decrement of. In the short step algorithm γ = σ/ ϑ, σ., while in long step algorithms γ is taken to be a small constant, say.. The short step algorithm follows the central path more closely, and requires only one Newton iteration to obtain a suitable approximation of x k+ ). While the long-step algorithm in worst case requires O ϑ) Newton iterations to obtain the next suitable approximation, where O ϑ) depends on γ, in practice a significantly fewer number of iterations are observed [7]. Lemmas 4. and 4.2 below are from [] Theorem 2.. i) and Theorem 2.3.3, which are central to the analysis of short and long-step primal interior point methods applied to functions forming a self-concordant family. Lemma 4. If δx, ) <, τ, then for any h, h, h 2 R n, i) τδ) 2 h T 2 fx, )h h T 2 f, x + τ x)h τδ) 2 h T 2 fx, )h ii) h T [ 2 fx + τ x, ) 2 fx, )]h 2 [ τδ) 2 ] h T 2 fx, )h h T 2 2 fx, )h 2. Lemma 4.2 Let κ = 2 3 2, if δx, ) 2κ, then fx, ) fx + ln + δx, ))); if δx, ) 2κ, then δx + x, ) δx,) δx+ x,) +δx,) x, ) δx, ) ) 2 δx,) 2. Since in Theorem 3. we have shown that fx, ) is a self-concordant family, the analysis of short step algorithm immediately follows from []. We state this result without proof. Theorem 4. Let 0 be the initial barrier parameter, and ɛ be the target precision. If δx 0, 0 ) κ = 2 3)/2, and a short step algorithm reduces at a constant rate γ = σ/ ϑ, σ 0., then the short step algorithm terminates with a x k, ) satisfying δx k, ) κ, ɛ in O ϑln 0 /ɛ) number of first stage Newton iterations. Renegar [3, Section 2.4. equation 2.4) and page 46 last paragraph)] shows that for a point satisfying the conditions of Theorem 4. and Theorem 4.2 the objective of the first stage problem c T x satisfies c T x z.2 ϑ, where z is the optimum objective value of the two-stage stochastic convex programming problem. Hence, it is sufficient to specify the termination criterion in these theorems using a value of. 4. Prototype Long Step Algorithm and Complexity A prototype long-step algorithm for the two-stage stochastic convex program is given as Algorithm. In order to show the convergence of this algorithm we establish an upper bound on φx k, k+ ), and a lower bound on the reduction of fx k, k+ ) using a damped Newton step. This allows us to bound the number of Newton steps after each update of. These results are given in Lemma 4.3 and Lemma 4.4, respectively. Since the condition δx, ) < of Lemma 4.3 is not directly verifiable, we give an upper bound on δx, ) by δx, ) in Lemma 4.6. The lower bound on the reduction of objective value per Newton step is established in Lemma

14 Algorithm Prototype long step algorithm for two-stage stochastic convex problem Initialize. Given ɛ, x 0, 0, γ such that δx 0, 0 ) κ, γ 0, ). Set x = x 0, = 0 and k = 0. while k > ɛ k+ = γ k while δx, k+ ) > κ x := x k solve subproblems η i x, ), i =,..., K compute the Newton direction x using 36) update x := x + +δx,) x if δx, k+ ) > 2κ; otherwise x := x + x. x k+ := x k := k + Theorem 4.2 Let 0 be the initial barrier parameter, and ɛ be the target precision. The long step algorithm needs at most O ϑln 0 /ɛ) damped Newton iterations to generate a point x k, ) satisfying δx k, ) κ from a starting point x 0 satisfying δx 0, 0 ) κ = 2 3)/2 while reducing 0 to ɛ at a linear rate 0 < γ <. Proof: In Algorithm we would like x k, k ) returning to the central path at every major iteration, i.e. x k satisfying δx k, k ) κ, k = 0,..., K. We update by a constant factor γ 0, ), k+ = γ k. If δx k, k+ ) is not less than κ, we start the kth inner loop to generate iterates ˆx 0,..., ˆx M, where ˆx 0 x k and ˆx M is the first iterate satisfying δx k+, k+ ) 2κ. Then by Lemma 4.2, since δˆx M, k+ ) 2κ, one full Newton step will restore closeness to central path, i.e, δˆx M + x, k+ ) κ. Hence, we only need to bound M. Since m = 0,..., M, δˆx m, k+ ) 2κ, by Lemma 4.2 each damped Newton step reduces objective value by at least k+ δˆx j, k+ ) ln + δˆx j, k+ )) ι, j =,..., M, where ι = k+ 2κ ln + 2κ)) > 0. On the other hand, by Lemma 4.4 we know that φx k, k+ ) = fx k, k+ ) fx k+ ), k+ ) is at most O ϑ). Hence, the number of inner loops is at most O ϑ). The conclusion follows since it is trivial to see that we need only lnɛ/ 0 )/lnγ outer loops to reach the target precision ɛ. Lemma 4.3 Let δ := δx, ) <. Then, ) δ φx, ) + ln δ) δ φx, ) ln δ) ϑ. Proof: By the Fundamental Theorem of Calculus φx, ) = x fx), ) T x + α 0 0 x T 2 xfx) + s x, ) T xds d α. 39) 4

15 From KKT conditions for the first stage problem we have x fx), ) T = u T A, and A x = 0, where u is the Lagrangian multiplier corresponding to the first stage equality constraints Ax = b. This gives x fx), ) T x = 0. 40) Furthermore, by Lemma 4.i) and that x) = x x following the definition of x in 38), we have x T 2 xfx) + t x, ) x Hence, by using 40) and 4) in 39) we have δ 2 δ + t δ) 2. 4) φx, ) = α 0 0 α x T 2 xfx) + t x, ) xdt dα δ δ dt dα + t δ) 2 ) δ + ln δ). δ Now, φx, ) = fx, ) fx), ) x fx), )x) = fx, ) fx), ) A T u ) T x) = fx, ) fx), ). 42) The second equality uses the KKT conditions for the first stage problem. The last equality follows from using Ax) = 0. Now by applying Fundamental Theorem of Calculus to φx, ), and using Lemma 4. ii) we have φx, ) = x fx) + α x, ) T xdα 0 0 x T 2 xfx) + α x, ) x x fx) + α x, ) T [ 2 fx) + α x, )] x fx) + α x, ) d α 0 δ δ + α δ ϑ d α = ln δ) ϑ. The last inequality uses 4), and Theorem 3., especially its first inequality of compatibility with neighbors. Lemma 4.4 For a given 0 < ζ <, if δ := δx, ) ζ, then φx, + ) = fx, + ) fx + ), + ) O ϑ) + 5

16 Proof: By differentiating 42) we have φx, ) = fx, ) fx), ) x fx), ) T x) fx), ) x fx), ) T x). 43) We bound fx), ) by applying i) of Corollary 3. to obtain fx), ) ϑ ϑ 0. 44) To bound x fx), ) T x) we first use Corollary 3.ii), then using the fact that P x)) := I Rx)) T Rx))Rx)) T ) Rx)) is an orthogonal projection matrix we get x fx), ) T x) = x fx), ) T Hx)) 2 P x))hx)) 2 x fx), ) / x fx), ) T Hx)) x fx), ) / ϑ, 45) where the last inequality follows from Theorem 3. 6) and Lemma 3.. Hence, we have Using Lemma 4.3 and 46) we have φx, + ) = φx, ) + + )φx, ) + φx, ) 2 ϑ ϑ 0. 46) + α φt, x) dt dα ) δ + ln δ) ) δ + ϑln δ) + 2 ϑ ϑ 0 ) dt dα + α t ) δ + ln δ) ) δ + ϑln δ) 2 ϑ ϑ 0 ) + )lnγ The conclusion follows since δ ζ, + = γ, and ζ, γ are constants. Since δ in Lemma 4.3 is not directly computable, in the following Lemma we bound δ using δx, ). This is possible as a consequence of the following result. Lemma 4.5 [3, Theorem 2.2.3] Let ψx) be a -self-concordant function, x be its minimizer, x domainψ), and x x. Then, x + = x [ 2 ψx)] ψx) satisfies x + x x x x 2 x x x x, where y 2 x = y T 2 ψx)y. Lemma 4.6 If δx, ) 6, then δx, ) 2δx, ). 6

17 Proof: Let x + = x + x be generated from x by taking a full Newton step. Since fx, ) is a -self-concordant function and x) is the minimizer of fx, ), applying Lemma 4.5 we have x x) x x x + x + x + x) x x x + x + x x) 2 x x x) x. Since x x) x = δ and x x + x = δ as defined in 37) and 38), we have for δ [0, 6 ], δ δ δ 2 δ 2 δ. 5 Multistage Stochastic Convex Problem 5. Multistage Barrier Recourse Problem Let ξ t, t =,..., T be a discrete stochastic process, where each ξ t represents data of an optimization problem. Since the data of the first stage problem is known, ξ has only one realization. Along any path ξ,..., ξ t, ξ t has only finitely many realizations as well. Hence there are only finitely many t stage scenarios considering all possible history. So we can order the set {ξ,..., ξ t )}, or equivalently establish a one-to-one mapping between {ξ,..., ξ t )} and {,..., Kt)} for any t =,..., T, where Kt) is the cardinality of the set {ξ,..., ξ t )}. For example, a binary tree has K) =, K2) = 2, K3) = 4,..., Kt) = 2 t. Hence a tuple t, k), k Kt) uniquely identifies a node in a scenario tree for the given mapping. For the purpose of uniquely identifying any node in a scenario tree, any such one-to-one mapping will suffice. We use at, k) to denote the ancestor of t, k) and Dt k to denote the set of s such that t +, s) are direct descendants of t, k). Since any t-stage scenario is a decedent of one of the t )-stage scenarios, and Dt k D j t =, k j, we have Kt) k= Dk t = Kt + ). For convenience, a superscript tk means t, k), i.e., x tk means x t,k). We use these two notations interchangeably. We define a T-stage problem as follows. η tk x at,k) ) := min c tk x tk + η t+,s) x tk ) s.t. x tk G tk L tk x at,k) ) 47) s D k t t =,..., T, k =,..., Kt), where G tk are nonempty convex domains, L tk {x tk Q tk x tk = q tk + T tk x at,k) } are affine spaces, x a,) := 0, and η T +,k) := 0. Without loss of generality, we assume Q tk, t =,..., T, k =,..., Kt), have full row rank. The definition clearly shows that the kth t-stage subproblem, η tk x at,k) ), is well defined only if x at,k) is given, and that its objective function contains the next stage subproblems η t+,s) x tk ), s D k t. We define the multistage barrier problem as follows. f tk x tk ) = c tkt x tk + B tk x tk ) + η t+,s) x tk ), s D k t η tk x at,k) ) = min f tk x tk ) s.t. x tk L tk x at,k) ), 48) t =,..., T, k =,..., Kt), x a,) := 0, η T +,k) := 0, 7

18 where each B tk ), t =,..., T, k =,..., Kt) is a self-concordant barrier function of corresponding domain intg tk with complexity value ϑ tk. We also define that ϑ tk := ϑ tk + ϑ t+,s), 49) i.e., ϑ tk is the sum of complexity values of the sub-tree rooted at scenario t, k). At least three alternative approaches are possible for solving a multistage stochastic convex program. The first possibility is to formulate its deterministic equivalent. Subsequently we can exploit the structure of the problem to perform linear algebra in parallel Hegland et al. [6]. The second approach is to formulate the problem using nonanticipativity constraints, and subsequently relax these constraints by Lagrangian dual, the progressive hedging method Rockafellar and Wets [4], and methods proposed in Ruszczynskii [6] and Mulvey and Ruszczynskii [0]. More recently, Zhao [9] has given an interior decomposition method based on the Lagrangian dual approach. The third approach is to use the Bender decomposition formulation 47). The L-shaped method of Van Slyke and Wets [7] was extended in Birge [2] for the multistage problems using this formulation. The prototype algorithm of this section also uses the Bender decomposition formulation. However, instead of using cutting planes, as is the case with L- shaped methods, it proposes an interior barrier decomposition scheme that is an extension of the method for the two-stage problem discussed in the previous section. Our proposal is based on the property that the multistage barrier problems forms a self-concordant family tree. s D k t 5.2 Self-Concordant Tree Property in Multistage Problem For a two-stage problem, Theorems 2.2 shows that the barrier recourse functions {η k x, ), > } and {rx) := c T x + bx) + η k x, ), > 0} are self-concordant families. The following two theorems establish recursively that all recourse functions and objective functions in a multistage barrier problem 48) are also self-concordant families, and show that the complexity values accumulate additively. Theorem 5. For a multistage barrier problem in 48), fix a pair t, k) and assume that {η t+,s) x tk, ), > 0}, s D k t are self-concordant families with parameter functions α t+,s) ) =, γ t+,s) ) = ν t+,s) ) =, ξ t+,s) ) = ϑt+,s) /, σ t+,s) ) = /2, then {f tk x tk, ), > 0} is a self-concordant family with following parameter functions: α tk ) =, γ tk ) = ν tk ) =, ξ tk ) = ϑtk, σtk ) = 2. Proof: The proof of convexity and differentiability of f tk x tk ) is straight forward. It is also clear that f tk x tk, ) is a self-concordant function following a proof similar to that of Lemma 3.. We need to prove condition SCF3) in Definition 2.2, i.e., that f tk x tk, ) are compatible with neighbors. Let g tk = B tk x tk ), H tk = 2 B tk x tk ). To simplify notation we let g s := η t+,s) x tk, ), H s := 2 η t+,s) x tk, ), R s := Q t+,s) H s 2, s D k t. Hence, { f tk x tk )} = g tk + {g s }, and 2 f tk x tk ) = g tk + H s. s D k t s D k t 8

19 We note that g s, H s, {g s } are computed recursively by accumulating from stage t + up to the final stage T. We now show that {f tk x tk, ), > 0} is a self-concordant family. Let Î = [I,..., I], h [ ] T = h T Î, ḡ T x, ) = g tkt,..., g st, s Dt k, D tk = H tk 2, D tk = H tk 2, D s = ) T st R s R st 2, Ds x, ) = R s R st ) 2 R s H s 2, s Dt k, and D = blkdiagd tk,..., D s, s D k t ), Dx, ) = blkdiag D tk,..., D s, s D k t ). 50) Here blkdiag represents a block diagonal diagonal matrix. Hence, { ht f tk x tk } ) = h T D Dḡ ht DD T h ḡt D T Dḡ = h T H tk + H s h gtkt H tk g tk + g s T H s g s s D k t h T 2 f tk x)h ϑ tk + s D k t ϑ s 5) = ϑtk s D k t h T 2 f tk x)h. 52) We draw reader s attention to the differences between 5) and 33). The inequality 5) is due to the assumption that all barrier recourse functions {η t+,s) x tk, ), > 0} are self-concordant families with ν tk ) =, hence 5) follows from SCF3), which shows that g s T H s g s is bounded from above by ξ) 2 α) = ϑ s. While 33) holds because the kth second stage barrier function has complexity value, which means gkt H k g k is bounded from above by ϑk. We emphasize that g st H s g s is not necessarily bounded from above, and hence we don t require η t+,s) x tk ) to be a barrier. The proof of the second inequality of 7) for f tk x tk ) follows steps similar to that for 35). Theorem 5.2 For a multistage barrier problem in 48), fix a pair t, k) and assume that {f tk x tk, ), > 0} is a self-concordant family, then {η tk x at,k), ), > 0} is also a selfconcordant family with exactly the same parameter functions α tk ) =, γ tk ) = ν tk ) =, ξ tk ) = ϑtk, σtk ) = 2. Proof: By following steps similar to those in the proof for the two-stage problem we can show that η tk x at,k), ) is convex, twice continuously differentiable, and self-concordant. We only need to show that η tk, x at,k) ) is compatible with neighbors and compute its parameter functions. Using the approach of Lemma 2.2 and Lemma 2.3 we get 9

20 2 η tk, x at,k) ) = T T R R T ) T 53) { η tk, x at,k) )} = T T R R T ) R H 2 {ḡ}, 54) where T = T tk, Q = Q tk, ḡ = f tk, x tk ), H = 2 f tk, x tk ), R = Q tk H 2. 55) Hence, we have { ht η tk x at,k), ) } = h T T T R R T ) R H 2 {ḡ} h T T T R R T ) T h {ḡ} ) T H 2 RT R R T ) R H 2 {ḡ} ht 2 η tk x at,k), )h {ḡ} ) T H {ḡ} ϑtk h T 2 η tk x at,k), )h. 56) The last inequality of 56) holds because {f tk x tk, ), > 0} is a self-concordant family by assumption. Now, we prove the second inequality 7) in condition SCF3) of Definition 2.2. = = = { h T 2 η tk x at,k) }, )h { ht T T R ) } RT T h h T T T R ) RT Q { H H} H QT R R) T h { h T 2 f tk } x tk, ) h h T 2 f tk x tk, ) h = ht 2 η tk x at,k), )h, 57) where h = H QT R R) T h. The inequality holds because {f tk x tk, ), > 0} is selfconcordant family by assumption. The last equality follows from a straightforward computation. Finally by induction we show the structure of the barrier Bender decomposition formulation: all members are self-concordant families with respect to a single control parameter and the complexity values accumulate across the tree additively. Theorem 5.3 For any subproblem t, k) in a multistage problem, both η tk x at,k), ) and its objective function {f tk x tk, ), > 0} are self-concordant families with exactly the same parameter functions α tk ) =, γ tk ) = ν) =, ξ tk ) = ϑtk, σtk ) = 2. Especially the first stage objective function is a self-concordant family with ξ,) ) = which is the total complexity values of the scenario tree. ϑ,), 20

21 Proof: For any fixed x T 2, ), η T, ) x T 2, ), ) is a two-stage problem, hence {f T, ) x T, ), )} is a self-concordant family as shown in our analysis of a two-stage problem. Assume that {f tk x tk, ), > 0} is a self-concordant family for any 2 t T 2, then Theorem 5.2 shows that {η tk x at,k), ), > 0} is a self-concordant family. Given this conclusion and by using Theorem 5. we further conclude that {f t, ) x t ) ), )} is a self-concordant family. Hence, {f tk x tk, ), > 0}, and {η tk x at,k), ), > 0}, t =,..., T, k =,..., Kt) are self-concordant families. 5.3 A Prototype Barrier Decomposition Algorithm for multistage Stochastic Programs From Theorem 5.3 the complexity of the first stage composite barrier recourse function is ϑ,). This suggests that if the Hessian and gradient of the recourse function can be computed exactly or with sufficient accuracy) then we can apply a short or long step algorithm to the first stage problem. Computing the gradient and Hessian of the recourse function requires recursive solutions of second and subsequent stage centering problems, which also decompose. This results in a prototype decomposition algorithm for multistage stochastic programs. This algorithm is outlined as Algorithm 2. It is straightforward to follow the proofs of the two-stage problem, and draw conclusions about the number of first stage Newton steps of a short step or a long step algorithm. We state the result without proof here. Theorem 5.4 Let 0 be the initial barrier parameter, and ɛ be its target value. If δ 0, x 0 ) κ = 2 3)/2, and if the short step algorithm reduces at a constant rate γ = σ/ ϑ,), where σ 0. then the short step algorithm terminates with a solution x satisfying δɛ, x) κ ) in O ϑ, )ln 0 ɛ number of first stage Newton iterations. Theorem 5.5 Let 0 be the initial barrier parameter, and ɛ be its target value. If δ 0, x 0 ) κ = 2 3)/2, and if long step algorithm reduces at rate γ < ), then this algorithm terminates with a solution x satisfying δɛ, x) κ in O ϑ,) ln 0 ɛ number of first stage Newton iterations. 6 Conclusions And Future Work In this paper we have shown that we can regularize two and mulit-stage stochastic programming problems by using self-concordant barriers on the feasible regions. We showed that the barrier decomposition problems resulting from this regularization form a self-concordant family tree whose complexity value accumulates additively. These properties allows us to apply classical path following interior point algorithms and allow us to bound the number of first stage Newton iteration required to solve these problems. The algorithms for two and multistage problems proposed in this paper are prototype algorithms since they require exact solutions for second and subsequent stage centering problems. In practice it is only possible to find an approximate solution of these problems. The computational results for the two-stage conic programs in Mehrotra and Ozevin [7] suggest that in the two-stage setting it suffices to find inaccurate 2

22 Algorithm 2 Prototype long step algorithm for multistage convex problem solve LSMNC,, ɛ); LSMNCt, k, ɛ){ } find 0, x 0, γ such that δ 0, x 0 ) κ, γ = σ/ ϑtk, σ 0. set x = x 0, = 0 while > ɛ) update = γ while δx, ) > κ) if t = T, solve η T s, s Dt k ; otherwise, solve LSMNCt +, s, ), s Dt k compute Newton direction x and update x = x + +δx,) x if δx, ) > 2κ; otherwise x = x + x. solutions of the second stage problems. This is not surprising because, although the analysis of interior point algorithm presented here requires exact Newton direction computations, it is well understood that in practical application of this method the Newton direction computation need not be exact. It remains to be seen if this empirically observed property continues to hold for the multistage case. It is a topic of future research to see how the analysis of this paper should be extended to the situation where exact solutions of second and subsequent stage centering problems are not available. It was also observed in [7] that warm-start and adaptive addition of scenarios was possible for the two-stage conic programming problems. It is a topic of future research to find if such properties generalize to the multistage case. References [] K.A. Ariyawansa and Y. Zhu. A class of polynomial volumetric barrier decomposition algorithms for stochastic semidefinite programming. Technical report, Washington State University, [2] John R. Birge and Francois Louveaux. Introduction To Stochastic Programming. Springer, New York, 997. [3] Leonid Faybusovich and Michael Gekhtman. Calculation of universal barrier functions for cones generated by chebyshev systems over finite sets. SIAM J. on Optimization, 44): , [4] Leonid Faybusovich, Thanasak Moutonglang, and Takashi Tsuchiya. Numerical experiments with universal barrier functions for cones of chebyshev systems. Computational Optimization And Applications To Appear),

23 [5] Anthony V. Fiacco. Introduction to Sensitivity and Stability Analysis in Nonlinear Programming. Academic Press, New York, 983. [6] M. Hegland, M.R. Osborne, and J. Sun. Parallel interior point schemes for solving multistage convex programming. Annals of Operations Research, 08-4), 200. [7] Sanjay Mehrotra and M. Gokhan Ozevin. On the implementation of interior point decomposition algorithms for two-stage stochastic conic programs. Technical report, IEMS Department, Northwestern University, [8] Sanjay Mehrotra and M. Gokhan Ozevin. Decomposition-based interior point methods for two-stage stochastic convex quadratic programs with recourse. Operations Research, [9] Sanjay Mehrotra and M. Gokhan Ozevin. Decomposition-based interior point methods for two-stage stochastic semidefinite programming. SIAM Journal on Optimization, 8): , [0] John M. Mulvey and Andrzej Ruszczyński. A new scenario decomposition method for large-scale stochastic optimization. Operations Research, 433): , May 995. [] Yurii Nesterov and Arkadii Nemirovskii. Interior Point Polynomial Algorithms In Convex Programming. SIAM, Philadelphia, 994. [2] John R.Birge. An L-shaped method computer code for multistage stochastic linear programs. Numerical Methods in Stochastic Programming, R. Wets and Y. Ermoliev, eds, pages , 988. [3] James Renegar. A Mathematical View of Interior-Point Methods in Convex Optimization. SIAM, 200. [4] R. T. Rockafellar and R.J.B Wets. Scenarios and policy aggregation inoptimization under uncertainty. Mathematics of Operations Research, 6): 29, 99. [5] Walter Rudin. Principals of Mathematical Analysis. McGraw-Hill, 976. [6] Andrzej Ruszczyński. On convergence of an augmented lagrangian decomposition method for sparse convex optimization. Mathematics of Operations Research, 203): , 995. [7] R. Van Slyke and R. Wets. L-shaped linear programs with applications to optimal control and stochastic programming. SIAM Journal of Applied Mathematics, 7 4): , 967. [8] Gongyun Zhao. A log-barrier method with Benders decomposition for solving two-stage stochastic linear programs. Mathematical Programming, 903):507, 200. [9] Gongyun Zhao. A lagrangian dual method with self-concordant barriers for multi-stage stochastic convex programming. Mathematical Programming, 02): 24, January

Self-Concordant Barrier Functions for Convex Optimization

Self-Concordant Barrier Functions for Convex Optimization Appendix F Self-Concordant Barrier Functions for Convex Optimization F.1 Introduction In this Appendix we present a framework for developing polynomial-time algorithms for the solution of convex optimization

More information

Convex Optimization. Newton s method. ENSAE: Optimisation 1/44

Convex Optimization. Newton s method. ENSAE: Optimisation 1/44 Convex Optimization Newton s method ENSAE: Optimisation 1/44 Unconstrained minimization minimize f(x) f convex, twice continuously differentiable (hence dom f open) we assume optimal value p = inf x f(x)

More information

On self-concordant barriers for generalized power cones

On self-concordant barriers for generalized power cones On self-concordant barriers for generalized power cones Scott Roy Lin Xiao January 30, 2018 Abstract In the study of interior-point methods for nonsymmetric conic optimization and their applications, Nesterov

More information

Lecture 15 Newton Method and Self-Concordance. October 23, 2008

Lecture 15 Newton Method and Self-Concordance. October 23, 2008 Newton Method and Self-Concordance October 23, 2008 Outline Lecture 15 Self-concordance Notion Self-concordant Functions Operations Preserving Self-concordance Properties of Self-concordant Functions Implications

More information

Solution Methods for Stochastic Programs

Solution Methods for Stochastic Programs Solution Methods for Stochastic Programs Huseyin Topaloglu School of Operations Research and Information Engineering Cornell University ht88@cornell.edu August 14, 2010 1 Outline Cutting plane methods

More information

Interior Point Methods. We ll discuss linear programming first, followed by three nonlinear problems. Algorithms for Linear Programming Problems

Interior Point Methods. We ll discuss linear programming first, followed by three nonlinear problems. Algorithms for Linear Programming Problems AMSC 607 / CMSC 764 Advanced Numerical Optimization Fall 2008 UNIT 3: Constrained Optimization PART 4: Introduction to Interior Point Methods Dianne P. O Leary c 2008 Interior Point Methods We ll discuss

More information

We describe the generalization of Hazan s algorithm for symmetric programming

We describe the generalization of Hazan s algorithm for symmetric programming ON HAZAN S ALGORITHM FOR SYMMETRIC PROGRAMMING PROBLEMS L. FAYBUSOVICH Abstract. problems We describe the generalization of Hazan s algorithm for symmetric programming Key words. Symmetric programming,

More information

An interior-point trust-region polynomial algorithm for convex programming

An interior-point trust-region polynomial algorithm for convex programming An interior-point trust-region polynomial algorithm for convex programming Ye LU and Ya-xiang YUAN Abstract. An interior-point trust-region algorithm is proposed for minimization of a convex quadratic

More information

Interior Point Algorithms for Constrained Convex Optimization

Interior Point Algorithms for Constrained Convex Optimization Interior Point Algorithms for Constrained Convex Optimization Chee Wei Tan CS 8292 : Advanced Topics in Convex Optimization and its Applications Fall 2010 Outline Inequality constrained minimization problems

More information

Largest dual ellipsoids inscribed in dual cones

Largest dual ellipsoids inscribed in dual cones Largest dual ellipsoids inscribed in dual cones M. J. Todd June 23, 2005 Abstract Suppose x and s lie in the interiors of a cone K and its dual K respectively. We seek dual ellipsoidal norms such that

More information

Nonsymmetric potential-reduction methods for general cones

Nonsymmetric potential-reduction methods for general cones CORE DISCUSSION PAPER 2006/34 Nonsymmetric potential-reduction methods for general cones Yu. Nesterov March 28, 2006 Abstract In this paper we propose two new nonsymmetric primal-dual potential-reduction

More information

Barrier Method. Javier Peña Convex Optimization /36-725

Barrier Method. Javier Peña Convex Optimization /36-725 Barrier Method Javier Peña Convex Optimization 10-725/36-725 1 Last time: Newton s method For root-finding F (x) = 0 x + = x F (x) 1 F (x) For optimization x f(x) x + = x 2 f(x) 1 f(x) Assume f strongly

More information

A Primal-Dual Second-Order Cone Approximations Algorithm For Symmetric Cone Programming

A Primal-Dual Second-Order Cone Approximations Algorithm For Symmetric Cone Programming A Primal-Dual Second-Order Cone Approximations Algorithm For Symmetric Cone Programming Chek Beng Chua Abstract This paper presents the new concept of second-order cone approximations for convex conic

More information

Optimality Conditions for Constrained Optimization

Optimality Conditions for Constrained Optimization 72 CHAPTER 7 Optimality Conditions for Constrained Optimization 1. First Order Conditions In this section we consider first order optimality conditions for the constrained problem P : minimize f 0 (x)

More information

Diana Barro and Elio Canestrelli. Combining stochastic programming and optimal control to solve multistage stochastic optimization problems

Diana Barro and Elio Canestrelli. Combining stochastic programming and optimal control to solve multistage stochastic optimization problems Diana Barro and Elio Canestrelli Combining stochastic programming and optimal control to solve multistage stochastic optimization problems ISSN: 1827/3580 No. 24/WP/2011 Working Papers Department of Economics

More information

10 Numerical methods for constrained problems

10 Numerical methods for constrained problems 10 Numerical methods for constrained problems min s.t. f(x) h(x) = 0 (l), g(x) 0 (m), x X The algorithms can be roughly divided the following way: ˆ primal methods: find descent direction keeping inside

More information

On Conically Ordered Convex Programs

On Conically Ordered Convex Programs On Conically Ordered Convex Programs Shuzhong Zhang May 003; revised December 004 Abstract In this paper we study a special class of convex optimization problems called conically ordered convex programs

More information

A Distributed Newton Method for Network Utility Maximization, II: Convergence

A Distributed Newton Method for Network Utility Maximization, II: Convergence A Distributed Newton Method for Network Utility Maximization, II: Convergence Ermin Wei, Asuman Ozdaglar, and Ali Jadbabaie October 31, 2012 Abstract The existing distributed algorithms for Network Utility

More information

Agenda. Interior Point Methods. 1 Barrier functions. 2 Analytic center. 3 Central path. 4 Barrier method. 5 Primal-dual path following algorithms

Agenda. Interior Point Methods. 1 Barrier functions. 2 Analytic center. 3 Central path. 4 Barrier method. 5 Primal-dual path following algorithms Agenda Interior Point Methods 1 Barrier functions 2 Analytic center 3 Central path 4 Barrier method 5 Primal-dual path following algorithms 6 Nesterov Todd scaling 7 Complexity analysis Interior point

More information

12. Interior-point methods

12. Interior-point methods 12. Interior-point methods Convex Optimization Boyd & Vandenberghe inequality constrained minimization logarithmic barrier function and central path barrier method feasibility and phase I methods complexity

More information

Introduction to Nonlinear Stochastic Programming
