An Interior-Point Trust-Region Algorithm for Quadratic Stochastic Symmetric Programming
Thai Journal of Mathematics, Volume 15 (2017), Number 1 : 37-60. ISSN 1686-0209

An Interior-Point Trust-Region Algorithm for Quadratic Stochastic Symmetric Programming

Phannipa Kabcome and Thanasak Mouktonglang

Department of Mathematics, Faculty of Science, Chiang Mai University, Chiang Mai 50200, Thailand
phannipa.math@gmail.com (P. Kabcome)
thanasak@chiangmai.ac.th (T. Mouktonglang)

Abstract : Stochastic programming is a framework for modeling optimization problems that involve uncertainty. In this paper, we study two-stage stochastic quadratic symmetric programming, which handles uncertainty in the data defining a deterministic symmetric program in which a quadratic function is minimized over the intersection of an affine set and a symmetric cone, under a finite event space. Two-stage stochastic programs can be modeled as large deterministic programs, and we present an interior-point trust-region algorithm to solve the resulting problem. Numerical results on randomly generated data are reported for stochastic symmetric programs, and the complexity of our algorithm is proved.

Keywords : interior-point algorithm; trust-region subproblem; symmetric programming; stochastic programming.

2010 Mathematics Subject Classification : 90C30; 90C15.

1 Introduction

Deterministic optimization problems are formulated to find optimal solutions for problems whose data are known with certainty. In fact, in some applications we cannot specify the model entirely, because it depends on information that is not available at the time of formulation and will only be determined at some point in the future.

Corresponding author. Copyright 2017 by the Mathematical Association of Thailand. All rights reserved.
Stochastic programming began in the 1950s to handle problems that involve uncertainty in data. Many real-world applications carry an inherent uncertainty within them. In particular, two-stage stochastic programs have been established to formulate many applications of linear, nonlinear, and integer programming.

Conic programming deals with an important class of tractable convex optimization problems. Symmetric programming is a generalization of linear programming that includes second-order cone programming and semidefinite programming as special cases. All of these problems are treated in a unified theoretical framework for the design of algorithms and the analysis of their complexity. Stochastic symmetric programming may be viewed as an extension of symmetric programming that allows uncertainty in data.

Interior-point methods are considered to be among the most successful classes of algorithms for solving convex optimization problems. We refer to [1], which developed an interior-point trust-region algorithm for minimizing a quadratic objective function over the intersection of a symmetric cone and an affine subspace by solving sequential trust-region subproblems; global first-order and second-order convergence results were proved there. The techniques and properties of both interior-point algorithms and trust-region methods yield the complexity of the algorithm and provide strong theoretical support. Our algorithm is developed as an interior-point trust-region algorithm for stochastic symmetric programming.

In this paper, we present stochastic symmetric programming with a quadratic objective function and finitely many scenarios. We explicitly formulate the problem as a large-scale deterministic program and develop an interior-point trust-region algorithm to solve it.

The organization of this paper is as follows. In Section 2, we present some concepts and results on symmetric cones and self-concordant barrier functions used in the theory of interior-point methods. In Section 3, we propose a model of two-stage stochastic symmetric programs with linear and quadratic objective functions, and we describe the model as a deterministic program. In Section 4, we present an interior-point trust-region algorithm for solving stochastic symmetric programming and establish the complexity of our algorithm. In Section 5, we present computational results on a case-study problem; in addition, we compare our results with those of the quadprog function in the MATLAB Optimization Toolbox in the case of nonnegative orthant cones. Our algorithm can solve stochastic symmetric programming problems that include variables in second-order cones and cones of real symmetric positive semidefinite matrices; the quadprog function is unable to solve problems with these variables. Section 6 contains some conclusions and remarks.

2 Symmetric Cone and Self-Concordant Barrier

In this section, we introduce some of the concepts and relevant results that will be useful in Section 4. For a more detailed exposition of these concepts, see [2] and [3]. Let $E$ be a finite-dimensional real Euclidean space with inner product $\langle \cdot, \cdot \rangle$.
Definition 2.1 (Cone). A subset $K$ of $E$ is said to be a cone if $x \in K$ and $\lambda > 0$ imply that $\lambda x \in K$.

Definition 2.2 (Convex cone). A subset $K$ of $E$ is a convex cone if and only if $x, y \in K$ and $\lambda, \mu > 0$ imply that $\lambda x + \mu y \in K$.

We assume that $K$ is a convex cone in $E$. The dual of $K$ is defined as follows.

Definition 2.3 (Dual cone). Let $K$ be a cone. The set

    $K^* = \{ y \in E : \langle x, y \rangle \ge 0 \ \text{for all} \ x \in K \}$

is called the dual cone of $K$.

As the name suggests, $K^*$ is a cone, and it is always convex, even when the original cone is not. If a cone $K$ and its dual $K^*$ coincide, we say that $K$ is self-dual. In particular, this implies that $K$ has nonempty interior and does not contain any straight line (i.e., it is pointed).

We denote by $GL(E)$ the group of general linear transformations on $E$, and by $\operatorname{Aut}(K)$ the automorphism group of the convex cone $K$, that is,

    $\operatorname{Aut}(K) = \{ g \in GL(E) : gK = K \}$.

Definition 2.4 (Homogeneity). The convex cone $K$ is said to be homogeneous if for every pair $x, y \in \operatorname{int} K$ there exists an invertible linear operator $g$ for which $gK = K$ and $gx = y$.

Definition 2.5 (Symmetric cone). The convex cone $K$ is said to be symmetric if it is self-dual and homogeneous.

Almost all conic optimization problems in real-world applications are associated with symmetric cones such as nonnegative orthant cones, second-order cones, and cones of positive semidefinite matrices over the real or complex numbers.

The nonnegative orthant cone:

    $R^n_+ = \{ x \in R^n : x_i \ge 0, \ i = 1, 2, \ldots, n \}$.

The second-order cone (also known as the quadratic, Lorentz, or ice-cream cone) of dimension $n$:

    $Q^n = \{ x \in R^n : \sqrt{\textstyle\sum_{i=1}^{n-1} x_i^2} \le x_n \}$.

The positive semidefinite cone:

    $S^n_+ = \{ X \in R^{n \times n} : X \ \text{is a symmetric positive semidefinite matrix} \}$.

A self-concordant barrier function is crucial to interior-point methods. We now introduce some fundamental ideas of self-concordant barrier functions that will be useful in this paper.
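As a concrete illustration of these three cones, the following small sketch (our own, not from the paper; the function names are illustrative) tests membership numerically with NumPy, using a tolerance to absorb floating-point error.

```python
import numpy as np

def in_nonneg_orthant(x, tol=1e-12):
    """x lies in R^n_+ when every coordinate is nonnegative."""
    return bool(np.all(x >= -tol))

def in_second_order_cone(x, tol=1e-12):
    """x lies in Q^n when sqrt(x_1^2 + ... + x_{n-1}^2) <= x_n."""
    return bool(np.linalg.norm(x[:-1]) <= x[-1] + tol)

def in_psd_cone(X, tol=1e-12):
    """Symmetric X lies in S^n_+ when its smallest eigenvalue is nonnegative."""
    return bool(np.linalg.eigvalsh(X)[0] >= -tol)

print(in_nonneg_orthant(np.array([1.0, 0.0, 2.0])))       # True
print(in_second_order_cone(np.array([3.0, 4.0, 5.0])))    # True: sqrt(9 + 16) = 5
print(in_psd_cone(np.array([[2.0, -1.0], [-1.0, 2.0]])))  # True: eigenvalues 1 and 3
```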
Definition 2.6 (Self-concordant function [3]). Let $K^0 = \operatorname{int} K \subseteq E$ and let $F : K^0 \to R$ be a $C^3$ smooth convex function. $F$ is called a self-concordant function if

    $|F'''(x)[h,h,h]| \le 2 \left( F''(x)[h,h] \right)^{3/2}$  (2.1)

for all $x \in K^0$ and for all $h \in E$.

Definition 2.7 (Self-concordant barrier [3]). A self-concordant function $F$ is a self-concordant barrier for a closed convex set $K$ if

    $\vartheta = \sup_{x \in K^0} \left\langle F'(x), [F''(x)]^{-1} F'(x) \right\rangle < \infty$.  (2.2)

The value $\vartheta$ is called the barrier parameter of $F$.

In principle, every convex cone admits a self-concordant barrier (see [3], Section 4). We consider self-concordant barriers for symmetric cones as follows:

    $F(x) = -\sum_{i=1}^n \ln x_i$, with $\vartheta = n$, for the nonnegative orthant cone;
    $F(x) = -\ln\left( x_n^2 - \sum_{i=1}^{n-1} x_i^2 \right)$, with $\vartheta = 2$, for the second-order cone;
    $F(x) = -\ln \det x$, with $\vartheta = n$, for the positive semidefinite cone.

Lemma 2.1 ([4]). Let $F(x)$ be a self-concordant barrier for $K$. Then

    $[F''(x)]^{-1} F'(x) = -x$,  (2.3)
    $\langle F'(x), x \rangle = -\vartheta$.  (2.4)

Lemma 2.2 ([4]). If $K$ is a symmetric cone and $F$ is a self-concordant barrier for $K$, then $F''(x)$ is a linear automorphism of $K$ for each $x \in K^0$.

The strict convexity of $F(x)$ implies that $F''(x)$ is positive definite for every $x \in K^0$. This allows us to define a norm:

    $\|h\|_x = \sqrt{\langle h, F''(x) h \rangle}$  (2.5)

is a norm on $E$ induced by $F''(x)$. Let $B_x(y,r)$ denote the open ball of radius $r$ centered at $y$, where the radius is measured with respect to $\|\cdot\|_x$.

Lemma 2.3 ([4]). If $F(x)$ is a self-concordant function for $K$, then $B_x(x,1) \subseteq K^0$ for all $x \in K^0$.

Lemma 2.4 ([4]). Assume $F(x)$ is a self-concordant function for $K$, $x \in K^0$, and $y \in B_x(x,1)$. Then

    $F(y) \le F(x) + \langle F'(x), y-x \rangle + \frac{1}{2} \langle y-x, F''(x)(y-x) \rangle + \frac{\|y-x\|_x^3}{3\left(1-\|y-x\|_x\right)}$.  (2.6)
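The barriers above are simple to evaluate. The following sketch (ours; the names are illustrative) computes each barrier and, for the nonnegative orthant, the local norm (2.5), where the log barrier gives $F''(x) = \operatorname{diag}(1/x_i^2)$.

```python
import numpy as np

def barrier_orthant(x):
    """F(x) = -sum_i ln x_i on int R^n_+, barrier parameter theta = n."""
    return -np.sum(np.log(x))

def barrier_soc(x):
    """F(x) = -ln(x_n^2 - sum_{i<n} x_i^2) on int Q^n, theta = 2."""
    return -np.log(x[-1] ** 2 - np.dot(x[:-1], x[:-1]))

def barrier_psd(X):
    """F(X) = -ln det X on int S^n_+, theta = n (slogdet avoids overflow)."""
    return -np.linalg.slogdet(X)[1]

def local_norm_orthant(x, h):
    """||h||_x = sqrt(<h, F''(x) h>); here F''(x) = diag(1/x_i^2)."""
    return float(np.linalg.norm(h / x))

x = np.array([1.0, 2.0, 4.0])
print(barrier_orthant(x), local_norm_orthant(x, np.array([0.1, -0.2, 0.4])))
```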
Lemma 2.5 ([4]). Assume $F(x)$ is a self-concordant function for $K$. If $\|n(x)\|_x \le 1/4$, then $F(x)$ has a minimizer $z$ and

    $\|z - x\|_x \le \|n(x)\|_x + \frac{3\,\|n(x)\|_x^2}{\left(1 - \|n(x)\|_x\right)^3}$,  (2.7)

where $n(x) = -[F''(x)]^{-1} F'(x)$ is the Newton step of $F(x)$.

Lemma 2.6 ([4]). Assume $F(x)$ is a self-concordant barrier with barrier parameter $\vartheta$. If $x, y \in K^0$, then

    $\langle F'(x), y - x \rangle \le \vartheta$.  (2.8)

3 The Problem and Its Modeling

In a standard two-stage stochastic programming model, decision variables are divided into two subsets. First-stage variables are the variables that must be determined before the realizations of the random events are known. Once the uncertain events have unfolded, further design or operational adjustments can be made through the values of the second-stage variables, known as recourse variables, which are determined after the realized values of the random events are known. A standard formulation of the two-stage stochastic program is

    min  $f_1(x) + E[Q(x,\xi)]$  (3.1)
    s.t. $A_0 x = b$,  (3.2)
         $x \in K_1$,  (3.3)

where $x$ is the first-stage decision variable and $Q(x,\xi)$ is the optimal value of the second-stage problem

    min  $f_2(y,\xi)$  (3.4)
    s.t. $T(\xi)x + W(\xi)y = h(\xi)$,  (3.5)
         $y \in K_2$,  (3.6)

where $y$ is the second-stage variable and the cones $K_1$ and $K_2$ are symmetric cones. The matrix $A_0$ and the vectors $b$ and $c$ are deterministic data; the matrices $W(\xi)$ and $T(\xi)$ and the vectors $h(\xi)$ and $d(\xi)$ are random data whose realizations depend on an underlying outcome $\xi$ with a known probability function $p$.

3.1 The Stochastic Linear Symmetric Programs (SLSP)

The stochastic linear symmetric programs can be stated as

    min  $\langle c, x \rangle + E[Q(x,\xi)]$  (3.7)
    s.t. $A_0 x = b$,  (3.8)
         $x \in K_1$,  (3.9)
where $x$ is the first-stage decision variable and $Q(x,\xi)$ is the minimum of the problem

    min  $\langle g(\xi), y \rangle$  (3.10)
    s.t. $T(\xi)x + W(\xi)y = h(\xi)$,  (3.11)
         $y \in K_2$,  (3.12)

where $y$ is the second-stage variable and the cones $K_1$ and $K_2$ are symmetric cones. The formulation of the above problem assumes that the second-stage data $\xi$ can be modeled as a random vector with a known probability distribution and that $\xi$ has a finite number of possible realizations, called scenarios, say $\xi_1, \ldots, \xi_K$, with respective probability masses $p_1, \ldots, p_K$. Then the expectation in the first-stage problem's objective function can be written as the summation

    $E[Q(x,\xi)] = \sum_{k=1}^K p_k\, Q(x,\xi_k)$.  (3.13)

An SLSP can therefore be written as a deterministic program:

    min  $q(x, y^1, y^2, \ldots, y^K) = \langle c, x \rangle + \sum_{k=1}^K p_k \langle g^k, y^k \rangle$  (3.14)
    s.t. $A_0 x = b$,  (3.15)
         $T^k x + W^k y^k = h^k, \ k = 1, 2, \ldots, K$,  (3.16)
         $x \in K_1, \ y^k \in K_2, \ k = 1, 2, \ldots, K$,  (3.17)

where $x \in R^n$ is the first-stage decision variable and the $y^k \in R^{n_k}$ are the second-stage variables. The matrix $A_0 \in R^{m \times n}$ and the vectors $b \in R^m$ and $c \in R^n$ are deterministic data. The matrices $W^k \in R^{m_k \times n_k}$ and $T^k \in R^{m_k \times n}$ and the vectors $h^k \in R^{m_k}$ and $g^k \in R^{n_k}$ are random data with associated probabilities $p_k$ for $k = 1, 2, \ldots, K$. The cones $K_1$ and $K_2$ are symmetric cones in $R^n$ and $R^{n_k}$, where $n$ and $n_k$ are positive integers.
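For the nonnegative-orthant case, the expectation (3.13) can be evaluated directly by solving one second-stage linear program per scenario. Below is a minimal sketch (ours, using SciPy's linprog, whose default bounds already enforce $y \ge 0$; the dictionary layout of a scenario is our own convention):

```python
import numpy as np
from scipy.optimize import linprog

def expected_recourse(x, scenarios):
    """E[Q(x, xi)] = sum_k p_k Q(x, xi_k), as in (3.13), for K1 = K2 = R^n_+.

    Each scenario is a dict with probability p and data (g, T, W, h);
    Q(x, xi_k) = min { <g, y> : W y = h - T x, y >= 0 }.
    """
    total = 0.0
    for s in scenarios:
        res = linprog(c=s["g"], A_eq=s["W"], b_eq=s["h"] - s["T"] @ x)
        if res.status != 0:
            raise ValueError("second-stage problem infeasible or unbounded")
        total += s["p"] * res.fun
    return total

# One first-stage point and two equally likely scenarios.
x = np.array([1.0, 1.0])
scen = [{"p": 0.5, "g": np.array([1.0, 2.0]),
         "T": np.eye(2), "W": np.eye(2), "h": np.array([3.0, 3.0])},
        {"p": 0.5, "g": np.array([2.0, 1.0]),
         "T": np.eye(2), "W": np.eye(2), "h": np.array([4.0, 2.0])}]
print(expected_recourse(x, scen))  # 0.5*6 + 0.5*7 = 6.5
```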
3.2 The Stochastic Quadratic Symmetric Programs (SQSP)

The stochastic quadratic symmetric programs can be cast in the same way as the SLSP. The matrix $H_0 \in R^{n \times n}$, $H_0 \succeq 0$, is deterministic data, and the matrix $H(\xi)$ is random data whose realizations depend on an underlying outcome $\xi$ with a known probability function $p$. Let $H^k \in R^{n_k \times n_k}$, $H^k \succeq 0$, be the random data associated with probability $p_k$ for $k = 1, 2, \ldots, K$. An SQSP can be formulated as

    min  $\frac{1}{2}\langle x, H_0 x \rangle + \langle c, x \rangle + E[Q(x,\xi)]$  (3.18)
    s.t. $A_0 x = b$,  (3.19)
         $x \in K_1$,  (3.20)

where $x$ is the first-stage decision variable and $Q(x,\xi)$ is the minimum of the problem

    min  $\frac{1}{2}\langle y, H(\xi) y \rangle + \langle g(\xi), y \rangle$  (3.21)
    s.t. $T(\xi)x + W(\xi)y = h(\xi)$,  (3.22)
         $y \in K_2$,  (3.23)

where $y$ is the second-stage variable and the cones $K_1$ and $K_2$ are symmetric cones. An SQSP can be equivalently written as the deterministic symmetric program

    min  $q(x, y^1, y^2, \ldots, y^K) = \frac{1}{2}\langle x, H_0 x \rangle + \langle c, x \rangle + \sum_{k=1}^K p_k \left( \frac{1}{2}\langle y^k, H^k y^k \rangle + \langle g^k, y^k \rangle \right)$  (3.24)
    s.t. $A_0 x = b$,  (3.25)
         $T^k x + W^k y^k = h^k, \ k = 1, 2, \ldots, K$,  (3.26)
         $x \in K_1, \ y^k \in K_2, \ k = 1, 2, \ldots, K$.  (3.27)

Define

    $\mathcal{K} = K_1 \times K_2 \times K_2 \times \cdots \times K_2$ ($K$ copies of $K_2$); $\mathcal{K}$ is a symmetric cone,  (3.28)
    $X = [x, y^1, y^2, \ldots, y^K]^T \in R^{\,n + \sum_k n_k}$, with $x \in K_1$, $y^k \in K_2$ for $k = 1, 2, \ldots, K$, so $X \in \mathcal{K}$,  (3.29)
    $B = [b, h^1, h^2, \ldots, h^K]^T \in R^{\,m + \sum_k m_k}$,  (3.30)
    $C = [c, p_1 g^1, p_2 g^2, \ldots, p_K g^K]^T \in R^{\,n + \sum_k n_k}$,  (3.31)

    $A = \begin{bmatrix} A_0 & 0 & 0 & \cdots & 0 \\ T^1 & W^1 & 0 & \cdots & 0 \\ T^2 & 0 & W^2 & \cdots & 0 \\ \vdots & & & \ddots & \\ T^K & 0 & 0 & \cdots & W^K \end{bmatrix} \in R^{\left(m + \sum_k m_k\right) \times \left(n + \sum_k n_k\right)}$,  (3.32)

    $Q = \begin{bmatrix} H_0 & & & \\ & p_1 H^1 & & \\ & & \ddots & \\ & & & p_K H^K \end{bmatrix} \in R^{\left(n + \sum_k n_k\right) \times \left(n + \sum_k n_k\right)}$.  (3.33)
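The block structure (3.28)-(3.33) is mechanical to assemble. A sketch (ours; the scenario dictionary keys are illustrative) using NumPy and scipy.linalg.block_diag:

```python
import numpy as np
from scipy.linalg import block_diag

def assemble_deterministic_equivalent(A0, b, c, H0, scenarios):
    """Build A, B, C, Q of (3.30)-(3.33) from first-stage data and a list of
    scenarios, each a dict with keys p, g, h, T, W, H."""
    # A: first block row [A0 0 ... 0]; block row k is [T^k 0 ... W^k ... 0].
    rows = [np.hstack([A0] + [np.zeros((A0.shape[0], s["W"].shape[1]))
                              for s in scenarios])]
    for k, s in enumerate(scenarios):
        blocks = [s["T"]]
        for j, t in enumerate(scenarios):
            blocks.append(s["W"] if j == k
                          else np.zeros((s["T"].shape[0], t["W"].shape[1])))
        rows.append(np.hstack(blocks))
    A = np.vstack(rows)
    B = np.concatenate([b] + [s["h"] for s in scenarios])
    C = np.concatenate([c] + [s["p"] * s["g"] for s in scenarios])
    Q = block_diag(H0, *[s["p"] * s["H"] for s in scenarios])
    return A, B, C, Q
```

For large $K$ one would use sparse blocks (e.g., scipy.sparse) instead of dense zero blocks, but the layout is identical.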
With these definitions, we can rewrite the SQSP as the large deterministic program

    min  $q(X) = \frac{1}{2}\langle X, QX \rangle + \langle C, X \rangle$  (3.34)
    s.t. $AX = B$,  (3.35)
         $X \in \mathcal{K}$.  (3.36)

4 The Interior-Point Trust-Region Algorithm

In this section, we present our interior-point trust-region algorithm for solving (3.34)-(3.36). First, we define the function

    $f_{\eta_i}(X) = \eta_i q(X) + F(X) = \eta_i \left( \tfrac{1}{2}\langle x, H_0 x \rangle + \langle c, x \rangle \right) + F_1(x) + \sum_{k=1}^K \left[ \eta_i p_k \left( \tfrac{1}{2}\langle y^k, H^k y^k \rangle + \langle g^k, y^k \rangle \right) + F_2^k(y^k) \right]$,  (4.1)

where $F(X) : \mathcal{K}^0 \to R$ is a self-concordant barrier for the symmetric cone $\mathcal{K}$ with barrier parameter $\vartheta$. The self-concordant barrier is defined by

    $F(X) = F_1(x) + \sum_{k=1}^K F_2^k(y^k)$,

where $F_1$ is a self-concordant barrier function for the cone $K_1$ with barrier parameter $\vartheta_1$, $F_2^k$ is a self-concordant barrier function for the cone $K_2$ with barrier parameter $\vartheta_2^k$ for $k = 1, 2, 3, \ldots, K$, and $\vartheta = \vartheta_1 + \sum_{k=1}^K \vartheta_2^k$.

Since $q(X)$ is quadratic, it follows from Definition 2.6 that $f_{\eta_i}$ is also a self-concordant function. In each inner iteration, for a fixed $\eta_i$, we approach the central path by decreasing the value of $f_{\eta_i}(X)$. In the outer iterations, we increase the value of $\eta_i$ toward positive infinity, which implies that the central path converges to a solution that is optimal for the original problem.
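For reference, the barrier-augmented objective (4.1) can be evaluated as in the following sketch (ours; any self-concordant barriers for $K_1$ and $K_2$, such as the log barriers of Section 2, may be passed in):

```python
import numpy as np

def f_eta(eta, x, ys, H0, c, scen, barrier1, barrier2):
    """Barrier-augmented objective (4.1):
    f_eta(X) = eta * q(X) + F1(x) + sum_k F2(y^k),
    where scen[k] provides p, H, g for scenario k, and barrier1/barrier2
    are self-concordant barriers for K1 and K2."""
    val = eta * (0.5 * x @ (H0 @ x) + c @ x) + barrier1(x)
    for y, s in zip(ys, scen):
        val += eta * s["p"] * (0.5 * y @ (s["H"] @ y) + s["g"] @ y)
        val += barrier2(y)
    return val

# Example barrier for the nonnegative orthant.
log_barrier = lambda v: -np.sum(np.log(v))
```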
Let $d = (d^0, d^1, d^2, \ldots, d^K) \in E$ be the trial step for $f_{\eta_i}$. Consider the function $f_{\eta_i}$ in each inner iteration:

    $f_{\eta_i}(x_{i,j}+d^0, y^1_{i,j}+d^1, \ldots, y^K_{i,j}+d^K) - f_{\eta_i}(x_{i,j}, y^1_{i,j}, \ldots, y^K_{i,j})$
    $= \eta_i q(x_{i,j}+d^0, \ldots, y^K_{i,j}+d^K) + F_1(x_{i,j}+d^0) + \sum_{k=1}^K F_2^k(y^k_{i,j}+d^k) - \eta_i q(x_{i,j}, \ldots, y^K_{i,j}) - F_1(x_{i,j}) - \sum_{k=1}^K F_2^k(y^k_{i,j})$
    $= \frac{1}{2}\langle d^0, \eta_i H_0 d^0 \rangle + \langle \eta_i H_0 x_{i,j} + \eta_i c, d^0 \rangle + \sum_{k=1}^K \left[ \frac{1}{2}\langle d^k, \eta_i p_k H^k d^k \rangle + \langle \eta_i p_k H^k y^k_{i,j} + \eta_i p_k g^k, d^k \rangle \right]$
    $\quad + F_1(x_{i,j}+d^0) - F_1(x_{i,j}) + \sum_{k=1}^K \left[ F_2^k(y^k_{i,j}+d^k) - F_2^k(y^k_{i,j}) \right]$.  (4.2)

We want to bound $f_{\eta_i}(x_{i,j}+d^0, y^1_{i,j}+d^1, \ldots, y^K_{i,j}+d^K)$, so we consider the self-concordant barrier terms in the above equation. From Lemma 2.3, for any $X_{i,j} \in \mathcal{K}^0$ and any $d$ with $\|d\|_{X_{i,j}} < 1$, we have $X_{i,j} + d \in \mathcal{K}^0$. Suppose $\|d^0\|_{x_{i,j}} \le \alpha_{i,j} < 1$ and $\|d^k\|_{y^k_{i,j}} \le \alpha_k < 1$ for $k = 1, 2, \ldots, K$, with $\alpha_{i,j} + \sum_{k=1}^K \alpha_k < 1$. This implies that

    $\|d^0\|_{x_{i,j}} + \sum_{k=1}^K \|d^k\|_{y^k_{i,j}} \le \alpha_{i,j} + \sum_{k=1}^K \alpha_k < 1$.

Let $F''(X) = \operatorname{diag}\!\left( F_1''(x), F_2^{1\prime\prime}(y^1), \ldots, F_2^{K\prime\prime}(y^K) \right)$ denote the Hessian of the self-concordant function $F(X)$; since each block is positive definite for every $X \in \mathcal{K}^0$, $F''(X)$ is positive definite. For $h = (h^0, h^1, h^2, \ldots, h^K) \in E$, we further define the norm on $E$ induced by $F''(X)$ as

    $\|h\|_X = \sqrt{\langle h, F''(X) h \rangle} = \sqrt{ \langle h^0, F_1''(x) h^0 \rangle + \sum_{k=1}^K \langle h^k, F_2^{k\prime\prime}(y^k) h^k \rangle }$.

Applying Lemma 2.4 to each barrier term,

    $F_1(x_{i,j}+d^0) - F_1(x_{i,j}) + \sum_{k=1}^K \left[ F_2^k(y^k_{i,j}+d^k) - F_2^k(y^k_{i,j}) \right]$
    $\le \langle F_1'(x_{i,j}), d^0 \rangle + \frac{1}{2}\langle d^0, F_1''(x_{i,j}) d^0 \rangle + \frac{\|d^0\|_{x_{i,j}}^3}{3\left(1-\|d^0\|_{x_{i,j}}\right)}$
    $\quad + \sum_{k=1}^K \left[ \langle F_2^{k\prime}(y^k_{i,j}), d^k \rangle + \frac{1}{2}\langle d^k, F_2^{k\prime\prime}(y^k_{i,j}) d^k \rangle + \frac{\|d^k\|_{y^k_{i,j}}^3}{3\left(1-\|d^k\|_{y^k_{i,j}}\right)} \right]$
    $\le \langle F_1'(x_{i,j}), d^0 \rangle + \frac{1}{2}\langle d^0, F_1''(x_{i,j}) d^0 \rangle + \sum_{k=1}^K \left[ \langle F_2^{k\prime}(y^k_{i,j}), d^k \rangle + \frac{1}{2}\langle d^k, F_2^{k\prime\prime}(y^k_{i,j}) d^k \rangle \right] + \frac{\left( \alpha_{i,j} + \sum_{k=1}^K \alpha_k \right)^3}{3\left( 1 - \left( \alpha_{i,j} + \sum_{k=1}^K \alpha_k \right) \right)}$.  (4.3)

The last inequality of (4.3) follows from Lemma 2.4. From (4.3) we can rewrite (4.2) as

    $f_{\eta_i}(x_{i,j}+d^0, y^1_{i,j}+d^1, \ldots, y^K_{i,j}+d^K) - f_{\eta_i}(x_{i,j}, y^1_{i,j}, \ldots, y^K_{i,j})$
    $\le \frac{1}{2}\left\langle d^0, \left( \eta_i H_0 + F_1''(x_{i,j}) \right) d^0 \right\rangle + \left\langle \eta_i (H_0 x_{i,j} + c) + F_1'(x_{i,j}), d^0 \right\rangle$
    $\quad + \sum_{k=1}^K \left[ \frac{1}{2}\left\langle d^k, \left( \eta_i p_k H^k + F_2^{k\prime\prime}(y^k_{i,j}) \right) d^k \right\rangle + \left\langle \eta_i p_k (H^k y^k_{i,j} + g^k) + F_2^{k\prime}(y^k_{i,j}), d^k \right\rangle \right]$
    $\quad + \frac{\left( \alpha_{i,j} + \sum_{k=1}^K \alpha_k \right)^3}{3\left( 1 - \left( \alpha_{i,j} + \sum_{k=1}^K \alpha_k \right) \right)}$.  (4.4)

Now inequality (4.4) gives an upper bound on $f_{\eta_i}(x_{i,j}+d^0, y^1_{i,j}+d^1, \ldots, y^K_{i,j}+d^K)$. We can try to minimize the bound given by the right-hand side of this inequality, which leads to the following trust-region subproblem.
    min  $m_{i,j}(d^0, d^1, \ldots, d^K) = \frac{1}{2}\left\langle d^0, \left( \eta_i H_0 + F_1''(x_{i,j}) \right) d^0 \right\rangle + \left\langle \eta_i (H_0 x_{i,j} + c) + F_1'(x_{i,j}), d^0 \right\rangle$
    $\qquad\qquad + \sum_{k=1}^K \left[ \frac{1}{2}\left\langle d^k, \left( \eta_i p_k H^k + F_2^{k\prime\prime}(y^k_{i,j}) \right) d^k \right\rangle + \left\langle \eta_i p_k (H^k y^k_{i,j} + g^k) + F_2^{k\prime}(y^k_{i,j}), d^k \right\rangle \right]$  (4.5)
    s.t. $\left\| F_1''(x_{i,j})^{1/2} d^0 \right\|^2 + \sum_{k=1}^K \left\| F_2^{k\prime\prime}(y^k_{i,j})^{1/2} d^k \right\|^2 \le \left( \alpha_{i,j} + \sum_{k=1}^K \alpha_k \right)^2$.  (4.6)

Apply the transformations

    $\tilde{d}^0 = F_1''(x_{i,j})^{1/2} d^0$,  (4.7)
    $\tilde{d}^k = F_2^{k\prime\prime}(y^k_{i,j})^{1/2} d^k$, for $k = 1, 2, \ldots, K$,  (4.8)

and define

    $\tilde{Q}_{i,j} = \eta_i F_1''(x_{i,j})^{-1/2} H_0\, F_1''(x_{i,j})^{-1/2} + I$,  (4.9)
    $\tilde{C}_{i,j} = F_1''(x_{i,j})^{-1/2} \left( \eta_i (H_0 x_{i,j} + c) + F_1'(x_{i,j}) \right)$,  (4.10)
    $\tilde{Q}^k = \eta_i p_k F_2^{k\prime\prime}(y^k_{i,j})^{-1/2} H^k\, F_2^{k\prime\prime}(y^k_{i,j})^{-1/2} + I^k$, for $k = 1, 2, \ldots, K$,  (4.11)
    $\tilde{C}^k = F_2^{k\prime\prime}(y^k_{i,j})^{-1/2} \left( \eta_i p_k (H^k y^k_{i,j} + g^k) + F_2^{k\prime}(y^k_{i,j}) \right)$, for $k = 1, 2, \ldots, K$.  (4.12)

We can then rewrite (4.5) and (4.6) as

    min  $\tilde{q}_{i,j}(\tilde{d}^0, \tilde{d}^1, \ldots, \tilde{d}^K) = \frac{1}{2}\langle \tilde{d}^0, \tilde{Q}_{i,j} \tilde{d}^0 \rangle + \langle \tilde{C}_{i,j}, \tilde{d}^0 \rangle + \sum_{k=1}^K \left[ \frac{1}{2}\langle \tilde{d}^k, \tilde{Q}^k \tilde{d}^k \rangle + \langle \tilde{C}^k, \tilde{d}^k \rangle \right]$  (4.13)
    s.t. $\|\tilde{d}^0\|^2 + \sum_{k=1}^K \|\tilde{d}^k\|^2 \le \left( \alpha_{i,j} + \sum_{k=1}^K \alpha_k \right)^2$.  (4.14)
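Because every $\tilde{Q}$ block equals the identity plus a positive semidefinite term, the combined subproblem (4.13)-(4.14) is a trust-region subproblem with a positive definite Hessian, and its solution has the form $\tilde{d} = -(\tilde{Q} + \mu I)^{-1}\tilde{C}$ (see Lemma 4.1 below). A sketch of a solver (ours) that finds $\mu$ by bisection on the secular equation:

```python
import numpy as np

def solve_trs(Q, C, delta, tol=1e-10):
    """min 0.5 <d, Q d> + <C, d>  s.t. ||d|| <= delta, with Q positive definite,
    as in (4.13)-(4.14). Lemma 4.1: d = -(Q + mu I)^{-1} C with mu >= 0 and
    mu * (||d||^2 - delta^2) = 0."""
    d = -np.linalg.solve(Q, C)
    if np.linalg.norm(d) <= delta:        # trust region inactive: mu = 0
        return d, 0.0
    # Otherwise ||(Q + mu I)^{-1} C|| = delta has a unique root mu > 0;
    # since ||(Q + mu I)^{-1} C|| <= ||C|| / mu, the root lies in (0, ||C||/delta].
    lo, hi = 0.0, np.linalg.norm(C) / delta
    I = np.eye(len(C))
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        d = -np.linalg.solve(Q + mu * I, C)
        if np.linalg.norm(d) > delta:
            lo = mu                        # step too long: increase damping
        else:
            hi = mu
    return d, hi
```

Applying this to the concatenated vector $(\tilde{d}^0, \tilde{d}^1, \ldots, \tilde{d}^K)$ with the block-diagonal $\tilde{Q}$ solves (4.13)-(4.14); in practice one exploits the block structure and solves $K+1$ small linear systems per bisection step.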
Once $(\tilde{d}^0, \tilde{d}^1, \ldots, \tilde{d}^K)$ is computed, we obtain the trial step

    $d^0_{i,j} = F_1''(x_{i,j})^{-1/2}\, \tilde{d}^0$,  (4.15)
    $d^k = F_2^{k\prime\prime}(y^k_{i,j})^{-1/2}\, \tilde{d}^k$, for $k = 1, 2, \ldots, K$,  (4.16)

and from inequality (4.4) we have

    $f_{\eta_i}(x_{i,j}+d^0_{i,j}, y^1_{i,j}+d^1, \ldots, y^K_{i,j}+d^K) - f_{\eta_i}(x_{i,j}, y^1_{i,j}, \ldots, y^K_{i,j}) \le \tilde{q}_{i,j}(\tilde{d}^0, \tilde{d}^1, \ldots, \tilde{d}^K) + \frac{\left( \alpha_{i,j} + \sum_{k=1}^K \alpha_k \right)^3}{3\left( 1 - \left( \alpha_{i,j} + \sum_{k=1}^K \alpha_k \right) \right)}$.  (4.17)

Let $n_{\eta_i}(X_{i,j})$ be the Newton step of $f_{\eta_i}(x, y^1, \ldots, y^K)$ at the point $X_{i,j} = (x_{i,j}, y^1_{i,j}, y^2_{i,j}, \ldots, y^K_{i,j})$. We should point out that

    $\| n_{\eta_i}(X_{i,j}) \|^2_{X_{i,j}} = \left\langle f'_{\eta_i}(X_{i,j}), [f''_{\eta_i}(X_{i,j})]^{-1} f'_{\eta_i}(X_{i,j}) \right\rangle = \langle \tilde{C}_{i,j}, \tilde{Q}_{i,j}^{-1} \tilde{C}_{i,j} \rangle + \sum_{k=1}^K \langle \tilde{C}^k, (\tilde{Q}^k)^{-1} \tilde{C}^k \rangle$,  (4.18)

where the last equality follows from equalities (4.9)-(4.12).

Now we are ready to present our algorithm. Using the notation defined in (3.29), (3.30), and (3.31), we define the feasible set by

    $\mathcal{F} = \{ X \in E : AX = B, \ x \in K_1, \ y^k \in K_2 \ \text{for} \ k = 1, 2, \ldots, K \}$.

Algorithm 1: An interior-point trust-region algorithm

Step 0 (Initialization). Set $i = 0$, $j = 0$; choose a starting point $(x_{0,0}, y^1_{0,0}, y^2_{0,0}, \ldots, y^K_{0,0}) \in \mathcal{F}$, an initial trust-region radius $\alpha_{0,0} \in (0,1)$, and an initial parameter $\eta_0$. Until convergence, repeat the following steps.

Step 1 (Inner-iteration termination test). Compute $\| n_{\eta_i}(X_{i,j}) \|^2_{X_{i,j}}$. If

    $\| n_{\eta_i}(X_{i,j}) \|^2_{X_{i,j}} \le \frac{1}{36}$,  (4.19)

set $(x_{i+1,0}, y^1_{i+1,0}, \ldots, y^K_{i+1,0}) = (x_{i,j}, y^1_{i,j}, \ldots, y^K_{i,j})$ and go to Step 3; otherwise go to Step 2.
Step 2 (Step calculation). Solve (4.13)-(4.14) to obtain $(\tilde{d}^0_{i,j}, \tilde{d}^1, \tilde{d}^2, \ldots, \tilde{d}^K)$ and compute (4.15)-(4.16) to obtain $(d^0_{i,j}, d^1, d^2, \ldots, d^K)$. Update

    $(x_{i,j+1}, y^1_{i,j+1}, \ldots, y^K_{i,j+1}) = (x_{i,j}+d^0_{i,j}, y^1_{i,j}+d^1, \ldots, y^K_{i,j}+d^K)$,

increase $j$ by 1, and go to Step 1.

Step 3 (Update of the parameter $\eta$). Set $\eta_{i+1} = \theta \eta_i$ for some $\theta > 1$, increase $i$ by 1, and go to Step 1.
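Putting the pieces together, here is a compact sketch of Algorithm 1 for the all-orthant case $\mathcal{K} = R^n_+$ (ours; it reuses solve_trs from the previous sketch, drops the equality constraints $AX = B$ for brevity, and uses a crude $\epsilon$-test motivated by Lemma 4.3 below):

```python
import numpy as np

def ipm_trust_region(grad_hess_q, x0, eta0=1.0, theta=2.0,
                     alpha=0.25, eps=1e-6, max_outer=60):
    """Sketch of Algorithm 1 for K = R^n_+ with F(x) = -sum_i log x_i,
    so that F'(x) = -1/x and F''(x)^{-1/2} = diag(x).
    grad_hess_q(x) must return (grad q(x), Hess q(x))."""
    x, eta, n = x0.copy(), eta0, len(x0)
    for _ in range(max_outer):
        while True:                                   # inner iterations, fixed eta
            g, H = grad_hess_q(x)
            S = np.diag(x)                            # F''(x)^{-1/2}
            Qt = eta * S @ H @ S + np.eye(n)          # scaled Hessian, cf. (4.9)
            Ct = S @ (eta * g) - np.ones(n)           # scaled gradient, cf. (4.10)
            if Ct @ np.linalg.solve(Qt, Ct) <= 1/36:  # Step 1, condition (4.19)
                break
            dt, _ = solve_trs(Qt, Ct, alpha)          # Step 2: solve (4.13)-(4.14)
            x = x + S @ dt                            # unscale the step, cf. (4.15)
        if (n + np.sqrt(n)) / eta < eps:              # eps-optimal by Lemma 4.3 (theta_barrier = n)
            return x
        eta *= theta                                  # Step 3: eta <- theta * eta
    return x
```

Since the scaled step satisfies $\|\tilde{d}\| \le 1/4 < 1$, Lemma 2.3 guarantees the update stays in the interior of the cone.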
Lemma 4.1. Any global minimizer $(\tilde{d}^0_{i,j}, \tilde{d}^1, \tilde{d}^2, \ldots, \tilde{d}^K)$ of the problem (4.13)-(4.14) satisfies the equations

    $(\tilde{Q}_{i,j} + \mu_{i,j} I)\, \tilde{d}^0_{i,j} = -\tilde{C}_{i,j}$,  (4.20)
    $(\tilde{Q}^k + \mu_{i,j} I^k)\, \tilde{d}^k = -\tilde{C}^k$, for $k = 1, 2, \ldots, K$,  (4.21)

in which $\tilde{Q}_{i,j} + \mu_{i,j} I$ and $\tilde{Q}^k + \mu_{i,j} I^k$, $k = 1, 2, \ldots, K$, are positive semidefinite, $\mu_{i,j} \ge 0$, and

    $\mu_{i,j} \left( \|\tilde{d}^0_{i,j}\|^2 + \sum_{k=1}^K \|\tilde{d}^k\|^2 - \left( \alpha_{i,j} + \sum_{k=1}^K \alpha_k \right)^2 \right) = 0$.

This lemma is well known in the trust-region literature; for a proof see, e.g., Section 7.2 of [5].

Theorem 4.2. If we choose $\alpha_{i,j} + \sum_{k=1}^K \alpha_k = \frac{1}{4}$, then we have

    $f_{\eta_i}(x_{i,j+1}, y^1_{i,j+1}, \ldots, y^K_{i,j+1}) - f_{\eta_i}(x_{i,j}, y^1_{i,j}, \ldots, y^K_{i,j}) < -\frac{1}{45}$,

which is independent of $i$ and $j$.

Proof. If the solution of (4.13)-(4.14) lies on the boundary of the trust region, i.e., $\|\tilde{d}^0_{i,j}\|^2 + \sum_{k=1}^K \|\tilde{d}^k\|^2 = \left( \alpha_{i,j} + \sum_{k=1}^K \alpha_k \right)^2$, then, substituting (4.20) and (4.21) into (4.13),

    $\tilde{q}_{i,j}(\tilde{d}^0_{i,j}, \tilde{d}^1, \ldots, \tilde{d}^K) = \frac{1}{2}\langle \tilde{d}^0_{i,j}, \tilde{Q}_{i,j} \tilde{d}^0_{i,j} \rangle + \langle \tilde{C}_{i,j}, \tilde{d}^0_{i,j} \rangle + \sum_{k=1}^K \left[ \frac{1}{2}\langle \tilde{d}^k, \tilde{Q}^k \tilde{d}^k \rangle + \langle \tilde{C}^k, \tilde{d}^k \rangle \right]$
    $= \frac{1}{2}\langle \tilde{d}^0_{i,j}, \tilde{Q}_{i,j} \tilde{d}^0_{i,j} \rangle - \langle \tilde{d}^0_{i,j}, (\tilde{Q}_{i,j} + \mu_{i,j} I)\, \tilde{d}^0_{i,j} \rangle + \sum_{k=1}^K \left[ \frac{1}{2}\langle \tilde{d}^k, \tilde{Q}^k \tilde{d}^k \rangle - \langle \tilde{d}^k, (\tilde{Q}^k + \mu_{i,j} I^k)\, \tilde{d}^k \rangle \right]$
    $= -\mu_{i,j} \left( \|\tilde{d}^0_{i,j}\|^2 + \sum_{k=1}^K \|\tilde{d}^k\|^2 \right) - \frac{1}{2}\left\langle \tilde{d}^0_{i,j}, \left( \eta_i F_1''(x_{i,j})^{-1/2} H_0 F_1''(x_{i,j})^{-1/2} + I \right) \tilde{d}^0_{i,j} \right\rangle$
    $\quad - \sum_{k=1}^K \frac{1}{2}\left\langle \tilde{d}^k, \left( \eta_i p_k F_2^{k\prime\prime}(y^k_{i,j})^{-1/2} H^k F_2^{k\prime\prime}(y^k_{i,j})^{-1/2} + I^k \right) \tilde{d}^k \right\rangle$
    $\le -\frac{1}{2} \left( \|\tilde{d}^0_{i,j}\|^2 + \sum_{k=1}^K \|\tilde{d}^k\|^2 \right) = -\frac{1}{2}\left( \alpha_{i,j} + \sum_{k=1}^K \alpha_k \right)^2 = -\frac{1}{32}$.

In the above, the third equality follows from the equalities (4.9), (4.11), (4.20), and (4.21). The inequality follows from the facts that $\mu_{i,j} \ge 0$ and that $\eta_i F_1''(x_{i,j})^{-1/2} H_0 F_1''(x_{i,j})^{-1/2}$ and $\eta_i p_k F_2^{k\prime\prime}(y^k_{i,j})^{-1/2} H^k F_2^{k\prime\prime}(y^k_{i,j})^{-1/2}$, $k = 1, 2, \ldots, K$, are positive semidefinite. From (4.17) we then get

    $f_{\eta_i}(x_{i,j+1}, y^1_{i,j+1}, \ldots, y^K_{i,j+1}) - f_{\eta_i}(x_{i,j}, y^1_{i,j}, \ldots, y^K_{i,j}) \le -\frac{1}{32} + \frac{(1/4)^3}{3(1 - 1/4)} = -\frac{7}{288} < -\frac{1}{45}$.

If the solution of (4.13)-(4.14) lies in the interior of the trust region, i.e., $\|\tilde{d}^0_{i,j}\|^2 + \sum_{k=1}^K \|\tilde{d}^k\|^2 < \left( \alpha_{i,j} + \sum_{k=1}^K \alpha_k \right)^2$, then from Lemma 4.1 we know $\mu_{i,j} = 0$ and consequently

    $\tilde{d}^0_{i,j} = -\tilde{Q}_{i,j}^{-1} \tilde{C}_{i,j}$, $\qquad \tilde{d}^k = -(\tilde{Q}^k)^{-1} \tilde{C}^k$ for $k = 1, 2, \ldots, K$,

which gives
    $\tilde{q}_{i,j}(\tilde{d}^0_{i,j}, \tilde{d}^1, \ldots, \tilde{d}^K) = \frac{1}{2}\langle \tilde{d}^0_{i,j}, \tilde{Q}_{i,j} \tilde{d}^0_{i,j} \rangle + \langle \tilde{C}_{i,j}, \tilde{d}^0_{i,j} \rangle + \sum_{k=1}^K \left[ \frac{1}{2}\langle \tilde{d}^k, \tilde{Q}^k \tilde{d}^k \rangle + \langle \tilde{C}^k, \tilde{d}^k \rangle \right]$
    $= -\frac{1}{2}\langle \tilde{C}_{i,j}, \tilde{Q}_{i,j}^{-1} \tilde{C}_{i,j} \rangle - \frac{1}{2} \sum_{k=1}^K \langle \tilde{C}^k, (\tilde{Q}^k)^{-1} \tilde{C}^k \rangle = -\frac{1}{2}\, \| n_{\eta_i}(X_{i,j}) \|^2_{X_{i,j}}$.  (4.25)

By (4.25) and the mechanism of our algorithm, we know that $\| n_{\eta_i}(X_{i,j}) \|^2_{X_{i,j}} > \frac{1}{36}$ whenever Step 2 is executed, so in this case we can rewrite (4.17) as

    $f_{\eta_i}(x_{i,j+1}, y^1_{i,j+1}, \ldots, y^K_{i,j+1}) - f_{\eta_i}(x_{i,j}, y^1_{i,j}, \ldots, y^K_{i,j}) \le -\frac{1}{2}\, \| n_{\eta_i}(X_{i,j}) \|^2_{X_{i,j}} + \frac{\left( \alpha_{i,j} + \sum_{k=1}^K \alpha_k \right)^3}{3\left( 1 - \left( \alpha_{i,j} + \sum_{k=1}^K \alpha_k \right) \right)} < -\frac{1}{45}$.  (4.26)

The proof is completed.

Now we consider the number of iterations of our algorithm; we can stop the iteration when the reduction of the objective function is smaller than some constant.

Lemma 4.3. Let $X^* = \operatorname{argmin}_{X \in \mathcal{K}} q(X)$ and let $X = [x, y^1, y^2, \ldots, y^K]^T \in R^{\,n + \sum_k n_k}$ with $x \in K_1$ and $y^k \in K_2$ for $k = 1, 2, \ldots, K$. If $\| n_\eta(X) \|_X \le \frac{1}{6}$, then

    $q(X) - q(X^*) \le \frac{\vartheta + \sqrt{\vartheta}}{\eta}$.  (4.27)

Proof. Let $X(\eta) = \operatorname{argmin}_{X \in \mathcal{K}} f_\eta(X)$. From Lemma 2.5,

    $\| X - X(\eta) \|_X \le \| n_\eta(X) \|_X + \frac{3\,\| n_\eta(X) \|_X^2}{\left(1 - \| n_\eta(X) \|_X\right)^3} \le \frac{1}{6} + \frac{3\,(1/6)^2}{(1 - 1/6)^3} < \frac{1}{3}$,  (4.28)

and consequently

    $\| X - X(\eta) \|_{X(\eta)} \le \frac{\| X - X(\eta) \|_X}{1 - \| X - X(\eta) \|_X} < \frac{1/3}{1 - 1/3} = \frac{1}{2}$.  (4.29)

Consider

    $q(X(\eta)) - q(X^*) \le \langle q'(X(\eta)), X(\eta) - X^* \rangle = \frac{1}{\eta} \langle F'(X(\eta)), X^* - X(\eta) \rangle \le \frac{\vartheta}{\eta}$.  (4.30)
The first inequality follows from the convexity of $q(X)$, the equality follows from the fact that $f'_\eta(X(\eta)) = 0$, and the last inequality follows from Lemma 2.6. From the inequalities (4.28) and (4.29), we then have

    $q(X) - q(X(\eta)) = \langle q'(X(\eta)), X - X(\eta) \rangle + \frac{1}{2}\langle X - X(\eta), Q(X - X(\eta)) \rangle$
    $= -\frac{1}{\eta} \langle F'(X(\eta)), X - X(\eta) \rangle + \frac{1}{2\eta} \langle X - X(\eta), \eta Q (X - X(\eta)) \rangle$
    $\le \frac{1}{\eta} \left\| [F''(X(\eta))]^{-1/2} F'(X(\eta)) \right\| \left\| X - X(\eta) \right\|_{X(\eta)} + \frac{1}{8\eta}$
    $= \frac{1}{\eta} \sqrt{\left\langle F'(X(\eta)), [F''(X(\eta))]^{-1} F'(X(\eta)) \right\rangle}\; \| X - X(\eta) \|_{X(\eta)} + \frac{1}{8\eta}$
    $\le \frac{\sqrt{\vartheta}\, \| X - X(\eta) \|_{X(\eta)}}{\eta} + \frac{1}{8\eta} \le \frac{\sqrt{\vartheta}}{2\eta} + \frac{\sqrt{\vartheta}}{8\eta} \le \frac{\sqrt{\vartheta}}{\eta}$,  (4.31)

where the third-to-last inequality uses Definition 2.7, and the last two inequalities use (4.29) and the fact that $\vartheta$ is always greater than or equal to 1. By adding the inequalities (4.30) and (4.31), we then have

    $q(X) - q(X^*) \le \frac{\vartheta + \sqrt{\vartheta}}{\eta}$.

The proof is completed.

The above lemma tells us that, to get an $\epsilon$-solution, we only need

    $\eta_i = \eta_0 \theta^i \ge \frac{\vartheta + \sqrt{\vartheta}}{\epsilon}$,  (4.32)

provided that the number of outer iterations $i$ satisfies

    $i \ge \frac{\ln\!\left( (\vartheta + \sqrt{\vartheta}) / (\epsilon \eta_0) \right)}{\ln \theta}$.  (4.33)

Lemma 4.4. If $\| n_{\eta_i}(X) \|_X \le \frac{1}{6}$, then

    $f_{\eta_{i+1}}(X) - f_{\eta_{i+1}}(X(\eta_{i+1})) \le \theta \left( \vartheta + \sqrt{\vartheta} \right)$.  (4.34)
Proof. From the convexity of $f_{\eta_{i+1}}(X)$ and the inequality (4.28), we can show that

    $f_{\eta_{i+1}}(X) - f_{\eta_{i+1}}(X(\eta_i)) \le \langle f'_{\eta_{i+1}}(X), X - X(\eta_i) \rangle$
    $= \langle \eta_{i+1}(QX + C) + F'(X), X - X(\eta_i) \rangle$
    $= \theta \langle \eta_i (QX + C) + F'(X), X - X(\eta_i) \rangle - (\theta - 1) \langle F'(X), X - X(\eta_i) \rangle$
    $= \theta \langle f'_{\eta_i}(X), X - X(\eta_i) \rangle - (\theta - 1) \langle F'(X), X - X(\eta_i) \rangle$
    $\le \theta \| n_{\eta_i}(X) \|_X \| X - X(\eta_i) \|_X + \theta \sqrt{\vartheta}\, \| X - X(\eta_i) \|_X$
    $\le \theta \cdot \frac{1}{6} \cdot \frac{1}{3} + \theta \sqrt{\vartheta} \cdot \frac{1}{3} \le \theta \sqrt{\vartheta}$.  (4.35)

Next consider

    $f_{\eta_{i+1}}(X(\eta_i)) - f_{\eta_{i+1}}(X(\eta_{i+1})) \le \langle f'_{\eta_{i+1}}(X(\eta_i)), X(\eta_i) - X(\eta_{i+1}) \rangle$
    $= \langle \eta_{i+1}(Q X(\eta_i) + C) + F'(X(\eta_i)), X(\eta_i) - X(\eta_{i+1}) \rangle$
    $= \theta \langle f'_{\eta_i}(X(\eta_i)), X(\eta_i) - X(\eta_{i+1}) \rangle + (\theta - 1) \langle F'(X(\eta_i)), X(\eta_{i+1}) - X(\eta_i) \rangle$
    $= (\theta - 1) \langle F'(X(\eta_i)), X(\eta_{i+1}) - X(\eta_i) \rangle < \theta \vartheta$,  (4.36)

where the last equality follows from the fact that $X(\eta_i)$ minimizes $f_{\eta_i}(X)$, which implies that $f'_{\eta_i}(X(\eta_i)) = 0$, and the last inequality follows from Lemma 2.6. By adding inequalities (4.35) and (4.36), we then have

    $f_{\eta_{i+1}}(X) - f_{\eta_{i+1}}(X(\eta_{i+1})) \le \theta \left( \vartheta + \sqrt{\vartheta} \right)$.

The proof is completed.

Theorem 4.2 and Lemma 4.4 tell us that each inner iteration terminates in at most

    $45\, \theta \left( \vartheta + \sqrt{\vartheta} \right)$  (4.37)

steps.

Theorem 4.5. If the initial point $X_{0,0}$ satisfies condition (4.19), then for any $\epsilon > 0$ our algorithm obtains a solution $X$ which satisfies $q(X) - q(X^*) < \epsilon$ in at most

    $45\, \theta \left( \vartheta + \sqrt{\vartheta} \right) \frac{\ln\!\left( (\vartheta + \sqrt{\vartheta}) / (\epsilon \eta_0) \right)}{\ln \theta}$  (4.38)

steps, where $X^* = \operatorname{argmin}_{X \in \mathcal{K}} q(X)$.

Proof. Inequality (4.33) provides us with the number of outer iterations $i$, while Theorem 4.2 and Lemma 4.4 provide us with the number of steps in each inner iteration $j$. The total number of iterations is at most the number of outer iterations multiplied by the number of inner iterations, which gives the bound (4.38). The proof is completed.

Consider the case when the initial point $X_{0,0}$ does not satisfy condition (4.19). We can start from the analytic center of the feasible set, since it is a bounded convex set. Let $X(\eta_0) = \operatorname{argmin}_X f_{\eta_0}(X)$ and $X^* = \operatorname{argmin}_{X \in \mathcal{K}} q(X)$. Since $X_{0,0} = \operatorname{argmin}_X F(X)$, we have

    $f_{\eta_0}(X_{0,0}) - f_{\eta_0}(X(\eta_0)) \le \eta_0 \left( q(X_{0,0}) - q(X^*) \right)$,  (4.39)

and if we choose $\eta_0 \le 1 / \left( q(X_{0,0}) - q(X^*) \right)$, Theorem 4.2 implies that condition (4.19) will be satisfied after at most 45 steps.
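To get a feel for the bound (4.38) as reconstructed above, one can evaluate it numerically; for example, with $\theta = 2$, $\vartheta = 10$, $\eta_0 = 1$, and $\epsilon = 10^{-6}$ it is on the order of $10^4$ steps. A short check (ours):

```python
import numpy as np

def iteration_bound(theta, vartheta, eps, eta0):
    """Worst-case total step count (4.38):
    45 * theta * (vartheta + sqrt(vartheta))
       * ln((vartheta + sqrt(vartheta)) / (eps * eta0)) / ln(theta)."""
    v = vartheta + np.sqrt(vartheta)
    return 45.0 * theta * v * np.log(v / (eps * eta0)) / np.log(theta)

print(iteration_bound(theta=2.0, vartheta=10.0, eps=1e-6, eta0=1.0))
```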
5 Implementing the Algorithm

In this section we discuss the performance of our algorithm when the domain of the problem lies in nonnegative orthant cones and second-order cones; these cones are all well-known examples of symmetric cones. We consider multidimensional real sets. We compare our results with those of the quadprog function from the MATLAB Optimization Toolbox in the case of nonnegative orthant cones. Our algorithm can solve stochastic symmetric programming problems that include variables in second-order cones and cones of real symmetric positive semidefinite matrices; the quadprog function is unable to solve problems with these variables.
5.1 Problem Statement

The parameters chosen for the deterministic data of the SQSP are the matrix $H_0$, the vector $c$, the constraint matrix $A_0$, and the right-hand side $b$. The random data are $H^k$, $W^k$, $T^k$, $h^k$, and $g^k$ for $k = 1, 2, \ldots, K$. For simplicity, we consider the simple case of $K = 4$ scenarios, so the random data consist of $H^1, \ldots, H^4$, $W^1, \ldots, W^4$, and $h^1, \ldots, h^4$; the procedure can easily be extended to very large $K$.
The remaining random data are $T^1, \ldots, T^4$ and $g^1, \ldots, g^4$. We randomly generate the probability function $p = (p_1, p_2, p_3, p_4)$ in six cases (Case 1 to Case 6).

5.2 Results of the Interior-Point Trust-Region Algorithm

In order to make a comparison, the problems are as described in Section 5.1. Our algorithm was implemented under the MATLAB environment, release 2015a. The results are given in Tables 1-3. For the case of nonnegative orthant cones, we compare the results of the quadprog function in the MATLAB Optimization Toolbox with those of our algorithm in Tables 1 and 2. In particular, we present our results for stochastic second-order cone programming in Table 3. In the numerical examples, our algorithm obtains the same optimal solutions as the quadprog function. For the stochastic second-order cone programs, some cases of the problem yield the same optimal solutions and some differ because of the restrictions of the cones.
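One plausible way to generate the six probability cases (our sketch; the paper does not specify its sampling scheme) is to draw positive weights and normalize:

```python
import numpy as np

rng = np.random.default_rng(0)
# Six cases of a probability vector over K = 4 scenarios:
# draw positive weights, then normalize each row to sum to 1.
cases = rng.random((6, 4)) + 1e-3
cases /= cases.sum(axis=1, keepdims=True)
print(np.round(cases, 3))
```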
Table 1: The solution of the SQSP when $K_1$ and $K_2$ are nonnegative orthant cones, computed by the quadprog function for $R^n_+$. (Columns: Case 1 to Case 6; rows: the first-stage variables $x$, the second-stage variables $y^k$, and the optimal value.)
Table 2: The solution of the SQSP when $K_1$ and $K_2$ are nonnegative orthant cones, computed by the interior-point trust-region algorithm for $R^n_+$. (Columns: Case 1 to Case 6; rows: the first-stage variables $x$, the second-stage variables $y^k$, and the optimal value.)
Table 3: The solution of the SQSP when $K_1$ and $K_2$ are second-order cones, computed by the interior-point trust-region algorithm (stochastic SOCP). (Columns: Case 1 to Case 6; rows: the first-stage variables $x$, the second-stage variables $y^k$, and the optimal value.)
6 Conclusion and Remarks

In this paper, we presented stochastic symmetric programming with linear and quadratic objective functions and finitely many scenarios. We explicitly formulated the problem as a large-scale deterministic program. The complexity of our algorithm is proved to be as good as that of interior-point polynomial algorithms. We verified the performance of the algorithm on simple case-study problems, and the numerical results show the effectiveness of our algorithm.

Acknowledgement : The authors would like to thank the Thailand Research Fund under Project RTA and Chiang Mai University, Chiang Mai, Thailand, for the financial support.

References

[1] Y. Lu, Y. Yuan, An interior-point trust-region algorithm for general symmetric cone programming, SIAM Journal on Optimization.

[2] J. Faraut, A. Korányi, Analysis on Symmetric Cones, Oxford University Press, Oxford, 1994.

[3] Y.E. Nesterov, A.S. Nemirovski, Interior-Point Polynomial Algorithms in Convex Programming, SIAM Publications, Philadelphia, USA, 1994.

[4] J. Renegar, A Mathematical View of Interior-Point Methods in Convex Optimization, SIAM Publications, Philadelphia, USA, 2001.

[5] A.R. Conn, N.I.M. Gould, Ph.L. Toint, Trust-Region Methods, SIAM Publications, Philadelphia, USA, 2000.

Received February 2017. Accepted April 2017.