McMaster University. Advanced Optimization Laboratory. Authors: Farid Alizadeh and Yu Xia. AdvOl-Report No. 2004/15
McMaster University
Advanced Optimization Laboratory

Title: The Q Method for Second-order Cone Programming
Authors: Farid Alizadeh and Yu Xia
AdvOl-Report No. 2004/15

October 2004, Hamilton, Ontario, Canada
The Q Method for Second-Order Cone Programming

Farid Alizadeh    Yu Xia

October 2004

Abstract

Based on the Q method for SDP, we develop the Q method for SOCP. A modified Q method is also introduced. Properties of the algorithms are discussed. Convergence proofs are given. Finally, we present numerical results.

Key words: second-order cone programming, infeasible interior point method, eigen space decomposition.

1 Introduction

Second-order cone programming (abbreviated SOCP) is currently an active research area because it has many applications; see [1, 2] for a survey. It sits between SDP and LP, so both the computation time and the approximation accuracy of SOCP lie between those of LP and SDP. Most interior-point methods for LP and SDP have been extended to SOCP, but until now there has been no Q method [4] for SOCP. The Q method for SDP is quite different from other methods, and it has many attractive properties: each iterate of the Q method is cheaper to obtain than in other methods because no eigenvalue decomposition is needed, and the Schur complement can be computed by Cholesky factorization; unlike some other interior point methods, this algorithm converges fast and is numerically stable near the optimum, since the Newton system is well defined and its Jacobian is nonsingular at the solution under certain conditions (see [4]). In this paper, we extend the Q method to SOCP. We also give a modified Q method for SOCP. Convergence proofs are presented. These two methods for SOCP are also different from other methods and have the above properties. Preliminary numerical results show that they are promising. See also [6] for the Q method on symmetric programming and [3] for a Newton-type algorithm based on the Q method for SOCP.

This paper has eight parts. In Section 2, we give the eigen space decomposition of any x in R^{n+1} and the update scheme of the orthogonal transformation. In Section 3, we derive the Newton system and give the properties of its solution. In Section 4, we give an algorithm that converges under certain conditions.
In Section 5, we further give some restrictions under which the algorithm finds an ɛ-optimal solution in finitely many iterations. Numerical results are given in Section 6. A modified Q method, which doesn't need to update the orthogonal matrix, is presented in Section 7. Finally, in Section 8, we give conclusions and future work.

Notations. Throughout this paper, superscripts are used to represent iteration numbers while subscripts are for block numbers of the variables. We use capital letters for matrices, bold lower case letters for column vectors, and lower case letters for entries of a vector. In this way, the jth entry of vector x_i is written as

[RUTCOR and Business School, Rutgers, the State University of New Jersey, USA. alizadeh@rutcor.rutgers.edu. Research supported in part by the US National Science Foundation. Computing and Software, McMaster University, 1280 Main Street West, Hamilton, ON L8S 4K1, Canada. yuxia@cas.mcmaster.ca]
x_i^j. Primal and dual vectors x, z are indexed from 0. Superscript T represents matrix or vector transpose. A semicolon ";" is used to concatenate column vectors, so (x; y; z) = (x^T, y^T, z^T)^T. We use x̄ to represent the sub-vector of x excluding x_0; thus x = (x_0, x̄^T)^T. As is conventional, we use O(n) to represent the n × n real orthogonal group. For a vector λ, we use Diag(λ) to represent a diagonal matrix with λ on its diagonal. Sometimes we use the corresponding upper case letter to represent the diagonal matrix; thus Λ = Diag(λ).

‖·‖ denotes the Euclidean or l_2 norm: ‖x‖ := (Σ_{i=0}^n x_i²)^{1/2}.
‖·‖_1 denotes the LAR (Least Absolute Residual) or l_1 norm: ‖x‖_1 := Σ_{i=0}^n |x_i|.
‖·‖_∞ denotes the Tchebycheff norm or l_∞ norm: ‖x‖_∞ := max_{0≤i≤n} |x_i|.

We denote an n-dimensional all-zero vector by 0_n and an n-dimensional vector of all ones by 1_n. We omit the subscripts when the dimensions are unambiguous. The identity matrix is denoted by I. The matrix R, whose dimension is clear from the context, is defined as

    R := [ 1   0^T
           0  −I  ].

A second-order cone in R^{n+1} is represented as

    Q_{n+1} := { x ∈ R^{n+1} : x_0 ≥ ‖x̄‖ }.

Q is also known as the Lorentz cone, ice-cream cone, or quadratic cone. We write x ⪰_{Q_{n+1}} 0 interchangeably with x ∈ Q_{n+1}, since it is a partial order. We also omit the subscript and just write Q when the dimension is clear from the context. The second-order cone is self-dual. Therefore, the second-order cone program is generally written as the primal-dual pair

(1)    Primal:  min_x   c_1^T x_1 + ⋯ + c_n^T x_n
                s.t.    A_1 x_1 + ⋯ + A_n x_n = b
                        x_i ⪰_Q 0,   i = 1, …, n;

       Dual:    max_{z,y}  b^T y
                s.t.       A_i^T y + z_i = c_i,   i = 1, …, n
                           z_i ⪰_Q 0,   i = 1, …, n.

Here, x_i ∈ R^{n_i}, z_i ∈ R^{n_i}, y ∈ R^m are unknowns; A_i ∈ R^{m×n_i}, b ∈ R^m, c_i ∈ R^{n_i} (the dimensions n_i may not all be the same) are data.

2 The Basic Properties

This section lays out the basic tools for the Q method for SOCP. We first briefly sketch the Q method for semidefinite programming in Section 2.1, then give the corresponding decomposition and update scheme for SOCP in Section 2.2.

2.1 The Q Method for Semidefinite Programming

The basic idea of the Q method for SDP (see [4]) is the following. Let
real symmetric matrices X, Z denote the primal and dual variables. When XZ = µI, it is not hard to see that X and Z commute, so they share the same complete system of eigenvectors, which can be described by an orthogonal matrix Q. Hence, the eigenvalue decompositions can be written as X = Q^T Λ Q and Z = Q^T Ω Q, where Λ and Ω are diagonal matrices with the eigenvalues of X and Z as the diagonal elements, respectively. The Q method applies Newton's method to the primal-dual system on the
central path by updating Q, Λ, Ω and y at each iteration separately, instead of modifying X and Z as a whole. At each iteration of the Q method, the orthogonal matrix Q is replaced by Q(I + S + ⋯), where S is skew-symmetric. The justification is the one-to-one correspondence between the group of real orthogonal matrices and the set of skew-symmetric matrices via the exponential map exp and the Cayley transformation S ↦ (I + S)(I − S)^{-1}. The linear approximation of each map at S = 0 is I + S.

2.2 Foundations of the Q method for SOCP

To develop the Q method for SOCP, in this part we (1) give the second-order-cone-related eigen space decomposition of any vector x ∈ R^{n+1} and the corresponding approximations (l, l_π); (2) prove that the primal and dual variables share the same orthogonal transformation (Proposition 1); (3) show how to update the orthogonal transformation (Proposition 2); (4) give the linearization of the orthogonal transformation (Propositions 3, 4).

We first give the eigen space decomposition. Given x ∈ R^{n+1}, denote the eigenvalues of x as

    λ_1(x) = x_0 + ‖x̄‖,    λ_2(x) = x_0 − ‖x̄‖.

Then x ∈ Q iff λ_2 ≥ 0; x ∈ Int Q iff λ_2 > 0; x ∈ bd Q iff one of the λ_i's is zero; and x = 0 iff λ_1 = λ_2 = 0 (see [1, 7]). We define a set of orthogonal matrices K_x related to x as follows:

    K_x := { Q = [ 1  0^T
                   0   Q̃ ] : Q̃ ∈ O(n), Q̃ e_1 = x̄/‖x̄‖ }    if x̄ ≠ 0.

Note that each element of K_x maps (x_0, ‖x̄‖, 0, …, 0)^T to x. Hence, x can be written as

(2)    x = Q_x ( (λ_1 + λ_2)/2, (λ_1 − λ_2)/2, 0, …, 0 )^T.

Remark 1. In the above discussion, we assume the dimension of x is more than 2. When the dimension of x is 2, we can still write the decomposition of x in the form of (2) by letting Q_x = Diag(1, ±1) and λ_{1,2} = x_0 ± x_1. When the dimension of x is 1, we may let Q_x = 1, λ_1 = λ_2 = x.

Using the conventional notation Λ = Diag(λ), and analogous to the SDP case, we have the following proposition showing that the primal and dual variables share the same orthogonal transformation on the central path.

Proposition 1. The primal-dual pair x and z is on the analytic center for (1) iff for each block i = 1, …, n, there exists a real orthogonal matrix Q_i such that

(3)    x_i = Q_i ( (λ_{i1}+λ_{i2})/2; (λ_{i1}−λ_{i2})/2; 0 ),    z_i = Q_i ( (ω_{i1}+ω_{i2})/2; (ω_{i1}−ω_{i2})/2; 0 ),

and

(4)    Λ_i ω_i = µ1,    λ_i ≥ 0,    ω_i ≥ 0.
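Proposition 1 can be checked numerically. The following is a minimal sketch (not the authors' MATLAB implementation): the values of µ, λ, and the unit direction c are arbitrary example choices. It builds a pair (x, z) from a shared frame as in (3)-(4) and verifies the analytic-center conditions x^T z = µ and x_0 z̄ + z_0 x̄ = 0 used in the proof below.

```python
import numpy as np

def soc_eigenvalues(x):
    """Spectral values of x = (x0; x_bar) w.r.t. the second-order cone:
    lambda_1 = x0 + ||x_bar||, lambda_2 = x0 - ||x_bar||."""
    r = np.linalg.norm(x[1:])
    return x[0] + r, x[0] - r

# Example data (assumed values, chosen for illustration only).
mu = 0.5
lam = np.array([2.0, 1.0])        # eigenvalues of x, both positive
omega = mu / lam                  # eigenvalues of z, so lam_j * omega_j = mu
c = np.array([3.0, 4.0]) / 5.0    # shared unit direction, ||c|| = 1

# Build x and z from the shared frame, as in (3).
x = np.concatenate(([(lam[0] + lam[1]) / 2], (lam[0] - lam[1]) / 2 * c))
z = np.concatenate(([(omega[0] + omega[1]) / 2], (omega[0] - omega[1]) / 2 * c))

# Analytic-center conditions (5)-(6): x^T z = mu and x0*z_bar + z0*x_bar = 0.
assert abs(x @ z - mu) < 1e-12
assert np.allclose(x[0] * z[1:] + z[0] * x[1:], 0.0)
# The decomposition recovers the eigenvalues we started from.
assert np.allclose(soc_eigenvalues(x), lam)
```

This verifies the sufficiency direction of the proposition on one concrete block; the necessity direction is proved below.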
Proof: By [1], a pair (x, z) on the analytic center means that for i = 1, …, n, j = 1, …, n_i (recall n_i is the dimension of the ith block),

(5)    x_i^T z_i = µ,    x ⪰_Q 0,    z ⪰_Q 0,

(6)    x_i^j z_i^0 + x_i^0 z_i^j = 0.

The sufficiency is easy to verify from (5) and (6). Next, we prove the necessity. For i = 1, …, n: when µ = 0, by the Cauchy–Schwarz–Buniakowsky inequality, together with x_i^0 ≥ ‖x̄_i‖ and z_i^0 ≥ ‖z̄_i‖, we have

    0 ≤ x_i^0 z_i^0 − ‖x̄_i‖ ‖z̄_i‖ ≤ x_i^T z_i = 0.

So one of x_i and z_i must be zero, or both of them must lie on the boundary. If either x_i or z_i is zero, then λ_i or ω_i must also be zero correspondingly; hence (3) and (4) are satisfied trivially. When neither x_i^0 nor z_i^0 is zero, by (6), z̄_i = −(z_i^0/x_i^0) x̄_i. Setting Q_i = Q_{x_i} ∈ K_{x_i}, we get (3) and (4).

When µ ≠ 0, it is proved in [1] that on the analytic center,

(7)    x_i = (γ(x_i)/µ) R z_i,    where γ(x_i) := x_i^T R x_i.

By (2), there exists Q_{z_i} ∈ K_{z_i} such that

(8)    z_i = Q_{z_i} ( (ω_{i1}+ω_{i2})/2; (ω_{i1}−ω_{i2})/2; 0 ).

Combining (7) and (8), we see that

(9)    x_i admits a decomposition with the same orthogonal matrix Q_{z_i},

with

(10)    λ_{i1} = (γ(x_i)/µ) ω_{i2},    λ_{i2} = (γ(x_i)/µ) ω_{i1}.

That shows (3). Substituting (8) and (9) into (5), with consideration of (10), we get (4). Thus, we have proved the proposition.

Next, we will prove that the update of the orthogonal matrix can be obtained from some special orthogonal matrices. Let L be a subset of K := ∪_{x ∈ R^{n+1}} K_x
defined as

    L := { Q_c := [ 1   0     0^T
                    0  c_0   −c̄^T
                    0   c̄   I − c̄c̄^T/(1+c_0) ] : c = (c_0; c̄) ∈ R^n, ‖c‖ = 1, c_0 ≠ −1 },

together with the limiting element Q_{(−1;0)} for c = (−1; 0^T)^T. Apparently, L is a subgroup of O(n+1). We have the following propositions regarding L.

Proposition 2. Given x, y ∈ R^{n+1} and Q_x ∈ K_x, there exists Q_c ∈ L such that Q_x Q_c ∈ K_y. In addition, Q_x L ⊆ K.

Proof: When ȳ = 0, any Q_c ∈ L satisfies Q_x Q_c ∈ K_y. When ȳ ≠ 0, since Q_x is nonsingular, there is a unique c ∈ R^n such that Q_x (0; c) = (0; ȳ/‖ȳ‖). Observe ‖c‖ = 1. Note that each element of L is determined solely by a point on the unit sphere in R^n. We form Q_c ∈ L as above; a direct computation gives Q_c^T Q_c = I, and it is easy to see that Q_x Q_c ∈ K_y. It is also easy to verify that for any Q_x ∈ K and Q_c ∈ L we have Q_x Q_c ∈ K. Thus Q_x L ⊆ K.

The above proposition implies that, to update the decomposition (2) of x to that of x + Δx, we only need to restrict the orthogonal matrices to L. To apply Newton's method to (4), we next give the linear approximation of every element in L. Define l to be the set of skew-symmetric matrices of the following form:

    l := { S = [ 0  0   0^T
                 0  0  −s^T
                 0  s    0  ] : s ∈ R^{n−1} }.

Let l_π be the subset of l:

    l_π := { S ∈ l : ‖s‖ < π } ∪ { S ∈ l : s = (π; 0^T)^T }.

The following propositions relate L to l or l_π.

Proposition 3. The mapping exp: l_π → L is a bijection.

Proof: For any S ∈ l,

    S² = [ 0     0      0^T
           0  −s^T s    0^T
           0     0    −s s^T ],    S^{2k+1} = (−s^T s)^k S,    S^{2k+2} = (−s^T s)^k S².
Hence, given s ≠ 0,

(12)    exp(S) = I + S [ Σ_{k=0}^∞ (−1)^k ‖s‖^{2k}/(2k+1)! ] + S² [ Σ_{k=0}^∞ (−1)^k ‖s‖^{2k}/(2k+2)! ]
               = I + (sin ‖s‖ / ‖s‖) S + ((1 − cos ‖s‖) / ‖s‖²) S².

We use Q_c to emphasize the dependence of an element of L on c ∈ R^n with ‖c‖ = 1 in this proof. First we prove that for every Q ∈ L there exists S ∈ l_π such that exp(S) = Q. Notice exp(0) = I = Q_{(1;0)}, and for s = (π; 0^T)^T, exp(S) = Q_{(−1;0)}. Now given c ∈ R^n with ‖c‖ = 1 and c_0 ≠ ±1, there is a unique 0 < α < π such that cos α = c_0 and sin α = ‖c̄‖. Noticing c̄ ≠ 0, we let s = α c̄/‖c̄‖; then exp(S) = Q_c. Different s give different Q_c, since the (2,2)-entry of exp(S) is cos ‖s‖ and the (3:n+1, 2)-entries form sin ‖s‖ · s/‖s‖. On the other hand, given S ∈ l_π, S ≠ 0, let c̄ = sin ‖s‖ · s/‖s‖ and c_0 = cos ‖s‖; then exp(S) = Q_c ∈ L.

Proposition 4. The sets L and l can be related by the Cayley transformation S ↦ (I + S/2)(I − S/2)^{-1}.

Proof: When ‖s‖ < 2, by the Neumann lemma, (I − S/2)^{-1} can be expanded in a power series, so the Cayley transformation is (I + S/2)(I − S/2)^{-1} = I + 2 Σ_{k≥1} (S/2)^k. By (12) and the powers of S computed above, the Cayley transformation is equivalent to

(13)    (I + S/2)(I − S/2)^{-1} = I + (4/(4 + ‖s‖²)) S + (2/(4 + ‖s‖²)) S².

Since the right-hand side of (13) is well defined even for ‖s‖ ≥ 2, we use the right-hand side of (13) as the definition of the Cayley transformation for any S ∈ l. (See the Appendix for the justification of this definition.) It is not hard to see that, given S ∈ l, the Cayley transformation of S is Q_c ∈ L with

    c = ( (4 − ‖s‖²)/(4 + ‖s‖²) ; 4s/(4 + ‖s‖²) ).

Next we show that, given Q_c ∈ L, there is an S ∈ l whose Cayley transformation is Q_c. Denote the first element of s by s_1. Then when s = (s_1; 0) and s_1 → ∞, the Cayley transformation of S converges to Q_{(−1;0)}. Given c ∈ R^n with ‖c‖ = 1 and c_0 ≠ −1, let s = 2c̄/(c_0 + 1); then the Cayley transformation of S is Q_c. The uniqueness of S can be proved similarly to that in Proposition 3.

Propositions 3 and 4 show that the tangent space to L at the identity I is l.
3 The Newton System

In this section, we first derive the Newton system and then give some properties of its solution (16), including the nonsingularity.

By Proposition 1, on the central path each iterate (x, y, z) satisfies

(14)    Q P ω + A^T y = c,    A Q P λ = b,    Λ Ω = µI,

where P is block diagonal, whose ith block, denoted P_i ∈ R^{n_i×2}, has the form

    P_i := [ 1/2   1/2
             1/2  −1/2
              0     0
              ⋮     ⋮  ].

It is known that if both the primal and the dual of (1) have an interior feasible solution and A has full row rank, then for each µ > 0, (14) has a unique solution (x_µ, y_µ, z_µ), and as µ → 0, (x_µ, y_µ, z_µ) tends to the optimum of (1) (see [1]).

Assume x = Q_x P λ by the decomposition (2); then any perturbation of x can be written as Q_x Q̃ P (λ + Δλ), with Q̃ ∈ L by Proposition 2. By Propositions 3 and 4, we can replace each diagonal block of Q̃ by exp(S_i) with S_i ∈ l_π, or by the Cayley transformation of S_i with S_i ∈ l, and then discard the nonlinear terms. Notice that as s → 0, the linearizations of both the exponential and the Cayley transformation converge to I + S. Define

    r_p := b − Ax,    r_d := c − z − A^T y,    r_c := vec(µI − ΛΩ).

Given the kth iterate (x^k, y^k, z^k) = (Q^k P λ^k, y^k, Q^k P ω^k), we denote B^k := A Q^k. Note that only the vector s, not the matrix S, is involved in the calculation. After collecting the first two columns of each B_i^k into B̄^k and the remaining columns into B̂^k, and splitting Q^{kT} r_d^k accordingly into r̄_d^k and r̂_d^k, we rewrite the Newton system as

(15)    P Δω + B̄^{kT} Δy = r̄_d^k,
        ((ω_{i1}^k − ω_{i2}^k)/2) s_i + B̂_i^{kT} Δy = (r̂_d^k)_i,    i = 1, …, n,
        B̄^k P Δλ + Σ_{i=1}^n ((λ_{i1}^k − λ_{i2}^k)/2) B̂_i^k s_i = r_p^k,
        Λ^k Δω + Ω^k Δλ = r_c^k.

For simplicity, in what follows we omit the superscript k. Define

    E_i := ((ω_{i1} − ω_{i2})/2) I,    D_i := ((λ_{i1} − λ_{i2})/2) I.
Correspondingly, define E := Diag(E_i), D := Diag(D_i). Hence, the solution to (15) is

(16)    Δy = ( B̄ P Ω^{-1} Λ · 2P^T · B̄^T − B̂ D E^{-1} B̂^T )^{-1} ( r_p − B̄ P Ω^{-1} r_c − B̂ D E^{-1} r̂_d + B̄ P Ω^{-1} Λ · 2P^T · r̄_d ),
        Δω = (2P^T)(r̄_d − B̄^T Δy),
        Δλ = Ω^{-1}(r_c − Λ Δω),
        s = E^{-1}(r̂_d − B̂^T Δy).

Properties of the Solution

1. Though (14) is a primal-dual system, since we force the primal and dual variables to share the same orthogonal matrix in the decomposition, the number of variables and equations in (15) is about half of that required by other algorithms.

2. Each iterate is relatively cheap to compute, because each block of the Schur complement is two dimensions less than that of other systems, which means less computation for the search direction; moreover, to keep each iterate in Q, i.e. x + αΔx ⪰_Q 0, instead of solving ‖x̄ + αΔx̄‖ ≤ x_0 + αΔx_0 for α as in other methods, one only needs to calculate α ≤ min{ −λ_i/Δλ_i : Δλ_i < 0 }.

3. The Schur complement in (16) is symmetric positive definite, so the Cholesky factorization is applicable to the computation of the search direction. This can be seen by writing the Schur complement as A Q Diag( P Ω_i^{-1} Λ_i 2P^T, −D_i E_i^{-1} ) Q^T A^T, which is positive definite when A has full row rank, λ_{i1} > λ_{i2} > 0, and ω_{i2} > ω_{i1} > 0.

4. The Jacobian of the system is nonsingular under mild conditions; see Theorem 3. Therefore, we can expect low computation time, a high convergence rate, numerical stability near the optimum, and high accuracy of the algorithm under the assumptions of the theorem. This property is not shared by some other search directions, whose Jacobians become increasingly ill-conditioned near the optimum.

Remark 3. For each i ≤ n, we can always ensure that λ_{i1}^{k+1} > λ_{i2}^{k+1} > 0 and ω_{i2}^{k+1} > ω_{i1}^{k+1} > 0 by careful choice of step sizes. For example, if ω_{i1}^{k+1} > ω_{i2}^{k+1}, we swap them. Assume ω_{i2}^k > ω_{i1}^k; only when Δω_{i1} ≥ Δω_{i2} and β̄ = (ω_{i2}^k − ω_{i1}^k)/(Δω_{i1} − Δω_{i2}) is it possible that ω_{i1}^k + β̄ Δω_{i1} = ω_{i2}^k + β̄ Δω_{i2}. In this case, we can use a smaller step size β_i. It is obvious that β_i can be taken at least as large as β̄/2, and the β_i are not necessarily the same for all i.

Next, we will show the nonsingularity of the Jacobian at the
optimum. Assume (x*, y*, z*) is a solution of (1). Suppose Q* simultaneously diagonalizes x* and z*, and

(17)    λ_{i1} > λ_{i2} ≥ 0,    0 ≤ ω_{i1} < ω_{i2}

for every nonzero block x_i or z_i, i ∈ {1, …, n}. We also assume x* ≠ 0, since otherwise b = 0 and the dual is trivial. Analogous to [4, Theorem 6], we have the following results.
Theorem 3. Let (x*, y*, z*) = (Q P λ, y, Q P ω) be an optimal solution of (1) satisfying strict complementarity and the primal and dual nondegeneracy conditions, as well as condition (17); then the Jacobian of (15) evaluated at (x*, y*, z*) is nonsingular.

Proof: It is easy to verify that strict complementarity (see [3]) is equivalent to exactly one of λ_{ij} and ω_{ij} being zero for each i ≤ n, j = 1, 2. As in [3], we partition the index set {1, …, n} into three subsets B, I, O, and write x as (x_B; x_I; x_O), where x_B collects the boundary blocks, x_I the interior blocks, and x_O the zero blocks. Assume x_B = (x_1, …, x_r). It is proved in [3] that primal nondegeneracy means the matrix

(18)    [ A_1            ⋯  A_r            A_I   A_O
          α_1 (R x_1)^T  ⋯  α_r (R x_r)^T  0^T   ν^T ]

has linearly independent rows for all α_1, …, α_r and ν that are not all zero. Let P̂ be a block diagonal matrix with each diagonal block of the form (P, Î), where Î is an identity block of the complementary dimension (with a slight abuse of notation, P here represents the n_i × 2 matrix defined above). By [1], at the optimum there exists a vector β > 0 such that R x_i = β_i z_i for i = 1, …, r. Substituting z_i by its eigen space decomposition (2), we get R x_i = β_i Q_i P (0; ω_{i2}) with ω_{i2} > 0 for i = 1, …, r, due to strict complementarity. Postmultiplying (18) by Q P̂, we obtain a matrix

(19)    [ B̄_1 P  B̂_1  ⋯  B̄_r P  B̂_r  B_I P̂_I  B_O P̂_O
          α_1 β_1 ω_{1,2} e_2^T ⋯ α_r β_r ω_{r,2} e_2^T  0^T  ν^T Q_O P̂_O ].

Notice (18) has full row rank, and right-multiplying by a nonsingular matrix doesn't change the rank; so (19) has full row rank for all α_1, …, α_r and ν not all zero. Hence the matrix

(20)    [ B̄_1 P  B̂_1  ⋯  B̄_r P  B̂_r  B_I P̂_I ]

has full row rank. The solution satisfies dual nondegeneracy and strict complementarity iff the matrix

(21)    [ A_1 R z_1  ⋯  A_r R z_r  A_I ]

has linearly independent columns (see [3]). Because A_i R z_i = β_i^{-1} A_i x_i = β_i^{-1} A_i Q_i P̂ P̂^T Q_i^T x_i = β_i^{-1} B̄_i P λ_i, (21) having linearly independent columns is equivalent to the matrix

(22)    [ B̄_1 P  ⋯  B̄_r P  B_I P̂_I ]

having full column rank. So (20) and (22) mean we can choose all columns of B̄_i P (i = 1, …, r) and B_I P̂_I, together with some columns from the B̂_i (i = 1, …, r), to form
an m × m nonsingular matrix B. Because of the above properties, we first premultiply P̃^{-1} = Diag([1 1; 1 −1]) to the first block of equations of (15); then we form the nonsingular matrix B, collect all the remaining columns of B̄P into L̄ and all the remaining columns of B̂ into R̄, and partition D = Diag(D_1, D_2) and E = Diag(E_1, E_2) accordingly. Since D_2 includes only λ's from boundary and interior blocks, and E_2 includes only ω's from boundary and zero blocks, we see D_2 > 0 and E_2 < 0. Define D̃ := Diag(I, D_2), Ẽ := Diag(0, E_2), Ĩ := Diag(I, 0).
After permuting the rows and columns of the Jacobian of (15) appropriately, its nonsingularity is equivalent to the nonsingularity of a block matrix assembled from Ẽ, B^T, Ĩ, L̄^T, I, E_2, R̄^T, B, D̃, L̄, R̄, D_2, Λ and Ω. We first interchange the 1st and 4th block rows and the 2nd and last block columns; then we subtract Ẽ D̃^{-1} B^{-1} times the 1st block rows from the 4th block rows, and add Ẽ D̃^{-1} B^{-1} R̄ D_2 E_2^{-1} times the 3rd block rows to the 4th block rows. Hence the nonsingularity of the above matrix is equivalent to the nonsingularity of

(23)    B^T + Ẽ D̃^{-1} B^{-1} R̄ D_2 E_2^{-1} R̄^T.

Left-multiplying (23) by B^{-T}, we get the matrix I + (B^{-T} Ẽ D̃^{-1} B^{-1})(R̄ D_2 E_2^{-1} R̄^T), which is nonsingular since B^{-T} Ẽ D̃^{-1} B^{-1} and R̄ D_2 E_2^{-1} R̄^T are symmetric negative semidefinite.

4 The Algorithm

In this section, we give a convergent algorithm for the Q method for SOCP. The framework is originally for infeasible LP with exact search directions [10], while the system for the Q method is nonlinear and the search direction is not exact. It can start from an arbitrary infeasible interior point, so it doesn't employ the big-M method; consequently, it doesn't have the drawbacks of the big-M method (numerical instability and computational inefficiency; see [ ]). Its accuracy measures for primal infeasibility, dual infeasibility, and complementarity can be chosen separately; the primal and dual step sizes can be different. The algorithm is described in Section 4.1; its convergence analysis is given in Section 4.2.

4.1 Algorithm Description

Let ɛ_p, ɛ_d, and ɛ_c denote the accuracy requirements for primal feasibility, dual feasibility, and the duality gap. The neighborhood we use is

    N(γ_c, γ_p, γ_d) := { (λ, ω, y, Q) : λ ∈ R^{2n}, ω ∈ R^{2n}, y ∈ R^m, Q ∈ K, λ > 0, ω > 0;
        λ_i^j ω_i^j ≥ γ_c λ^T ω/(2n),  j = 1, 2; i = 1, …, n;
        λ^T ω ≥ γ_p ‖A Q P λ − b‖  or  ‖A Q P λ − b‖ ≤ ɛ_p;
        λ^T ω ≥ γ_d ‖A^T y + Q P ω − c‖  or  ‖A^T y + Q P ω − c‖ ≤ ɛ_d }.

The first inequality is the centrality condition. The second and third inequalities guarantee that complementarity will not be achieved before primal or dual feasibility. Obviously, when (γ_c', γ_p', γ_d') ≤ (γ_c, γ_p, γ_d), we have N(γ_c, γ_p, γ_d) ⊆ N(γ_c', γ_p', γ_d').
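The membership test for N(γ_c, γ_p, γ_d) can be sketched as follows. This is a minimal illustration (not the authors' MATLAB code); the function name and the example data are assumptions, and the centrality threshold uses λ^T ω/(2n) as in the definition above.

```python
import numpy as np

def in_neighborhood(lam, omega, r_p, r_d,
                    gamma_c, gamma_p, gamma_d, eps_p, eps_d):
    """Check (lam, omega) with residuals r_p, r_d against N(gamma_c, gamma_p, gamma_d).
    lam and omega stack the 2n eigenvalues, two per block."""
    lam, omega = np.asarray(lam, float), np.asarray(omega, float)
    if not (np.all(lam > 0) and np.all(omega > 0)):
        return False
    gap = lam @ omega
    # Centrality: every product lam_i^j * omega_i^j >= gamma_c * gap / (2n).
    centrality = np.all(lam * omega >= gamma_c * gap / lam.size)
    rp, rd = np.linalg.norm(r_p), np.linalg.norm(r_d)
    primal_ok = gap >= gamma_p * rp or rp <= eps_p
    dual_ok = gap >= gamma_d * rd or rd <= eps_d
    return bool(centrality and primal_ok and dual_ok)

# Two blocks, eigenvalue products (2, 1, 2, 1), gap = 6, gap/(2n) = 1.5.
assert in_neighborhood([2, 1, 2, 1], [1, 1, 1, 1], [0.0], [0.0],
                       gamma_c=0.5, gamma_p=1.0, gamma_d=1.0,
                       eps_p=1e-8, eps_d=1e-8)
# Tightening gamma_c to 0.9 makes the threshold 1.35 > 1, so the test fails.
assert not in_neighborhood([2, 1, 2, 1], [1, 1, 1, 1], [0.0], [0.0],
                           gamma_c=0.9, gamma_p=1.0, gamma_d=1.0,
                           eps_p=1e-8, eps_d=1e-8)
```

The two assertions illustrate the monotonicity just stated: enlarging γ_c shrinks the neighborhood.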
Moreover, ∩_{γ_c, γ_p, γ_d > 0} N(γ_c, γ_p, γ_d) = { (λ, ω, y, Q) : λ > 0, ω > 0 }. Clearly, as λ^T ω approaches 0, N tends to the optimal solution set of (1). The algorithm is the following.

Algorithm 1. Choose 0 < σ_1 < σ_2 < σ_3 < 1 and Υ > 0. To start from an arbitrary point (λ^0, ω^0, y^0, Q^0), one may select 0 < γ_c < 1, γ_p > 0, γ_d > 0 so that (λ^0, ω^0, y^0, Q^0) ∈ N(γ_c, γ_p, γ_d).
Do until ‖r_p^k‖ ≤ ɛ_p, ‖r_d^k‖ ≤ ɛ_d, and λ^{kT} ω^k ≤ ɛ_c; or ‖(λ^k, ω^k)‖ > Υ:
  1. Set µ = σ_1 λ^{kT} ω^k / (2n).
  2. Compute the search direction (Δλ, Δω, Δy, s) from (15).
  3. Choose step sizes α, β, γ; set Λ^{k+1} = Λ^k + α ΔΛ, y^{k+1} = y^k + β Δy, Ω^{k+1} = Ω^k + β ΔΩ, Q^{k+1} = Q^k (I + γS/2)(I − γS/2)^{-1}.
  4. k ← k + 1.
End

We use the Cayley transformation here. Updating the orthogonal matrices through exp can be stated in a similar way, and the later analysis also carries over with slight modifications of the constants. Note that it doesn't require much work to calculate the Cayley transformation or the exponential mapping, by (13) or (12). When the dimension of x_i is at most two, we set Q_i^k = I and S_i^k = 0 for all k.

Let α̂^k be the maximum of ᾱ ∈ [0, 1] such that for any α ∈ [0, ᾱ],

    (λ^k + α Δλ, ω^k + α Δω, y^k + α Δy, Q^k (I + αS/2)(I − αS/2)^{-1}) ∈ N,
    (λ^k + α Δλ)^T (ω^k + α Δω) ≤ [1 − α(1 − σ_2)] λ^{kT} ω^k.

The step sizes α ∈ (0, 1], β ∈ (0, 1], γ ∈ (0, 1] are chosen so that

    (λ^{k+1}, ω^{k+1}, y^{k+1}, Q^{k+1}) ∈ N(γ_c, γ_p, γ_d),
    λ^{(k+1)T} ω^{k+1} ≤ [1 − α̂^k (1 − σ_3)] λ^{kT} ω^k.

Because σ_1 < σ_2 < σ_3, the primal and dual step sizes are not necessarily the same.

4.2 Convergence Analysis

The global convergence of the preceding algorithm can be proved by contradiction, as in [10].

Theorem 4. If Algorithm 1 doesn't stop after finitely many steps, the smallest singular value of the Jacobian of (15) must converge to zero.
Proof: The key to the proof is to show that the step sizes are bounded below. Assume the algorithm doesn't stop after finitely many steps. Let ɛ := min(ɛ_c, γ_p ɛ_p, γ_d ɛ_d). Then for each iteration k, λ^{kT} ω^k ≥ ɛ and ‖(λ^k, ω^k)‖ ≤ Υ, because otherwise the iteration would terminate due to the stopping criteria. Boundedness of y^k is due to the dual feasibility constraint. Also observe that Q^k is orthogonal, and the set of orthogonal matrices is compact.

Assume the smallest singular value of the Jacobian of (15) doesn't converge to zero. Then there must exist a positive scalar d and a subsequence {(λ^{m_i}, ω^{m_i}, y^{m_i}, Q^{m_i})}_{i=1}^∞ such that for all m_i the largest singular value of the inverse of the Jacobian of (15) is at most 1/d. Both the right-hand side and the left-hand side of (15) depend continuously on the iterate (λ, ω, y, Q), which lies in a compact set; so the Newton direction of (15) is a continuous function of (λ, ω, y, Q). Therefore, the solution of (15) is uniformly bounded over the subsequence {m_i}. Hence there exists a positive constant η such that the search direction computed from (15) satisfies

    |Δλ_i^j Δω_i^j − (γ_c/(2n)) Δλ^T Δω| ≤ η,  |Δλ^T Δω| ≤ η,  ‖Δλ‖ ≤ η,  ‖Δω‖ ≤ η,  ‖s_i‖ ≤ η,    for i = 1, …, n; j = 1, 2.

Note that ‖S_i‖_2 = ‖s_i‖ for i = 1, …, n, and ‖S‖_2 = max_i ‖s_i‖. For k ∈ {m_i}_{i=1}^∞, following the notation of [10], we define

    f_ij(α) := [λ_i^j(k) + α Δλ_i^j][ω_i^j(k) + α Δω_i^j] − (γ_c/(2n)) (λ^k + α Δλ)^T (ω^k + α Δω),
    g_p(α) := (λ^k + α Δλ)^T (ω^k + α Δω) − γ_p ‖A Q^k (I + αS/2)(I − αS/2)^{-1} P (λ^k + α Δλ) − b‖,
    g_d(α) := (λ^k + α Δλ)^T (ω^k + α Δω) − γ_d ‖A^T (y^k + α Δy) + Q^k (I + αS/2)(I − αS/2)^{-1} P (ω^k + α Δω) − c‖,
    h(α) := [1 − α(1 − σ_2)] λ^{kT} ω^k − (λ^k + α Δλ)^T (ω^k + α Δω).

Therefore, α̂^k is determined by the following inequalities:

    f_ij(α) ≥ 0,  i = 1, …, n; j = 1, 2;
    g_p(α) ≥ 0  or  ‖A Q^k P λ^k − b‖ ≤ ɛ_p;
    g_d(α) ≥ 0  or  ‖A^T y^k + Q^k P ω^k − c‖ ≤ ɛ_d;
    h(α) ≥ 0.

Next, we show that there is a lower bound for each α̂^k. Each block of the Cayley transformation satisfies

(24)    (I + (α/2) S_i)(I − (α/2) S_i)^{-1} = I + α S_i − (α³ ‖s_i‖²/(4 + α² ‖s_i‖²)) S_i + (2α²/(4 + α² ‖s_i‖²)) S_i².

The inequalities for f_ij and h are obtained by arguments similar to those in [10]:

    f_ij(α) ≥ σ_1 (1 − γ_c) (ɛ/(2n)) α − η² α²,    h(α) ≥ (σ_2 − σ_1) ɛ α − η² α².
Next, we estimate g_p(α) and g_d(α). Note that the first column of S_i² is zero, and the only nonzero entry of its second column is −s_i^T s_i. Let Q̄^k denote the matrix consisting of only the 2nd column of each block of Q^k, λ_1 the vector of all the first eigenvalues of the x_i, and λ_2 the vector of all the second eigenvalues of the x_i, i = 1, …, n. When λ^{kT} ω^k ≥ γ_p ‖A Q^k P λ^k − b‖,

(25)    g_p(α) ≥ (1 − α) λ^{kT} ω^k + α σ_1 λ^{kT} ω^k + α² Δλ^T Δω − γ_p (1 − α) ‖A Q^k P λ^k − b‖ − γ_p α² C
              ≥ α σ_1 ɛ − α² (η² + γ_p C),

where C > 0 is a constant depending only on ‖A‖, η, and Υ. The first inequality is due to the Newton system (15), the assumption λ^{kT} ω^k ≥ γ_p ‖A Q^k P λ^k − b‖, and the expansion (24) of the Cayley transformation; the second inequality follows from the bounds on the variables and search directions, α ≤ 1, and the fact ‖Pλ‖ ≤ ‖λ‖. If ‖A Q^k P λ^k − b‖ ≤ ɛ_p, then

(26)    ‖A Q^k (I + αS/2)(I − αS/2)^{-1} P (λ^k + α Δλ) − b‖ ≤ (1 − α) ‖A Q^k P λ^k − b‖ + α² C ≤ (1 − α) ɛ_p + α² C.

So when α ≤ ɛ_p/C,

    ‖A Q^{k+1} P λ^{k+1} − b‖ ≤ ɛ_p.
Next, we consider the dual constraints. When λ^{kT} ω^k ≥ γ_d ‖A^T y^k + Q^k P ω^k − c‖,

(27)    g_d(α) ≥ (1 − α) λ^{kT} ω^k + α σ_1 λ^{kT} ω^k + α² Δλ^T Δω − γ_d (1 − α) ‖A^T y^k + Q^k P ω^k − c‖ − γ_d α² C'
              ≥ α σ_1 ɛ − α² (η² + γ_d C'),

and when ‖A^T y^k + Q^k P ω^k − c‖ ≤ ɛ_d,

(28)    ‖A^T (y^k + α Δy) + Q^k (I + αS/2)(I − αS/2)^{-1} P (ω^k + α Δω) − c‖ ≤ (1 − α) ɛ_d + α² C',

where C' > 0 is a constant depending only on η and Υ. Thus a lower bound on the α̂^k is

    α* = min{ 1,  (σ_2 − σ_1) ɛ/η²,  σ_1 (1 − γ_c) ɛ/(2n η²),  σ_1 ɛ/(η² + γ_p C),  ɛ_p/C,  σ_1 ɛ/(η² + γ_d C'),  ɛ_d/C' }.

After the perturbations of the step sizes to ensure λ_1 > λ_2 and ω_2 > ω_1, the lower bound on α̂^k is at least α*/2. The algorithm imposes decrease of the sequence {λ^{jT} ω^j}_{j=1}^∞. So for each m_i in the subsequence, by h(α) ≥ 0, we see

    λ^{(m_i + 1)T} ω^{m_i + 1} ≤ [1 − (α*/2)(1 − σ_3)] λ^{m_i T} ω^{m_i} ≤ ⋯ ≤ [1 − (α*/2)(1 − σ_3)]^i λ^{m_1 T} ω^{m_1}.
That means the whole sequence {λ^{jT} ω^j}_{j=1}^∞ converges to 0, which contradicts the assumption. We have proved that if the smallest singular value of the Jacobian of (15) doesn't converge to zero, either the algorithm finds an (ɛ_p, ɛ_d, ɛ_c)-optimal solution in finitely many iterations, or the iterates are unbounded.

5 Finite Convergence

Algorithm 1 may abort due to unboundedness of the eigenvalues or singularity of the Jacobians. In this section, we give some conditions under which Algorithm 1 converges to an (ɛ_p, ɛ_d, ɛ_c)-optimum in finitely many iterations. Conditions ensuring boundedness are given in Section 5.1, and those for nonsingularity in Section 5.2.

5.1 Boundedness of Iterates

To make sure that each iterate is bounded, we use some ideas from [8], which are also for LP, and further impose some restrictions on the problem. Let ρ represent a positive scalar no larger than the smallest singular value of A. Suppose (1) has an interior feasible solution (x̂, ẑ, ŷ). Denote the eigenvalues of x̂ by λ̂ and the eigenvalues of ẑ by ω̂. Assume ν_p ≤ λ̂ ≤ χ_p, ν_d ≤ ω̂ ≤ χ_d. We require the feasibility constraints to be calculated to a certain accuracy; that is, each iterate satisfies

(29)    Q P ω + A^T y = c + Δc,    A Q P λ = b + Δb,

with ‖Δb‖ ≤ ν_p ρ/2, ‖Δc‖ ≤ ν_d/2. It is shown in [8] that by some transformation the smallest singular value of a matrix can be made larger than 1, which means Δb and Δc need not be too small. If ɛ_p > ρ ν_p/2, we replace ɛ_p with ρ ν_p/2; if ɛ_d > ν_d/2, we replace ɛ_d with ν_d/2. We modify the algorithm of Section 4 so that each iterate is in the neighborhood Ñ:

    Ñ := { (λ, ω, y, Q) : λ ∈ R^{2n}, ω ∈ R^{2n}, y ∈ R^m, Q ∈ K, λ > 0, ω > 0;
        λ_i^j ω_i^j ≥ γ_c λ^T ω/(2n),  j = 1, 2; i = 1, …, n;
        [λ^T ω ≥ γ_p ‖A Q P λ − b‖ and ‖A Q P λ − b‖ ≤ ρ ν_p/2]  or  ‖A Q P λ − b‖ ≤ ɛ_p;
        [λ^T ω ≥ γ_d ‖A^T y + Q P ω − c‖ and ‖A^T y + Q P ω − c‖ ≤ ν_d/2]  or  ‖A^T y + Q P ω − c‖ ≤ ɛ_d }.

The other parts of the algorithm are the same as in Section 4. For further reference, we name the algorithm of this section Algorithm 2. As in the proofs of (26) and (28), we find that when the step size satisfies α̂^k ≤ α₂*, where α₂* is defined
as follows:

    α₂* := min{ ν_p ρ/(2C),  ν_d/(2C') },

condition (29) is satisfied. Hence α̂ in the algorithm of this section has the lower bound min{α*, α₂*}. Thus, by the results of Section 4, if the initial point is in Ñ, and the smallest singular value of the Jacobian at each element of Ñ is at least some distance d > 0 from 0, then the iterates of Algorithm 2 converge to a solution of (1) in finitely many iterations, provided each iterate is bounded.

Next, we use two lemmas to show the boundedness of each iterate. Lemma 5.1 gives the existence of an interior feasible solution of the perturbed problem, under which Lemma 5.2 guarantees boundedness. We consider the perturbed system

(30)    z + A^T y = c + Δc,    A x = b + Δb,    x ⪰_Q 0,    z ⪰_Q 0.

Lemma 5.1. Suppose (1) has an interior feasible solution (x̂, ẑ, ŷ) with ν_p ≤ λ̂ ≤ χ_p, ν_d ≤ ω̂ ≤ χ_d; then for all ‖Δb‖ ≤ ν_p ρ/2 and ‖Δc‖ ≤ ν_d/2, (30) has a feasible solution (λ̃, ω̃, ỹ, Q̃) with ν_p/2 ≤ λ̃ ≤ 3χ_p/2, ν_d/2 ≤ ω̃ ≤ 3χ_d/2.

Proof: Let h := A⁺ Δb, where A⁺ is the Moore–Penrose generalized inverse of A. Denote the decomposition of h as h = Q_h P λ_h; then ‖λ_h‖_∞ ≤ ‖h‖ ≤ ‖A⁺‖ ‖Δb‖ ≤ ν_p/2. Let x̃ := x̂ + h, ỹ := ŷ, z̃ := ẑ + Δc. Write the smaller eigenvalue of x̃_i as λ̃_{i,small}; then, using the Cauchy–Schwarz–Buniakowsky inequality,

    λ̃_{i,small} = x̂_i^0 + h_i^0 − ‖x̄̂_i + h̄_i‖ ≥ x̂_i^0 + h_i^0 − ‖x̄̂_i‖ − ‖h̄_i‖ ≥ λ̂_{i,small} − (|h_i^0| + ‖h̄_i‖) ≥ ν_p − ν_p/2 = ν_p/2.

Similarly, denoting the bigger eigenvalue of x̃_i as λ̃_{i,big},

    λ̃_{i,big} = x̂_i^0 + h_i^0 + ‖x̄̂_i + h̄_i‖ ≤ λ̂_{i,big} + (|h_i^0| + ‖h̄_i‖) ≤ 3χ_p/2.
Thus ν_p/2 ≤ λ̃ ≤ 3χ_p/2. The inequalities ν_d/2 ≤ ω̃ ≤ 3χ_d/2 can be proved in the same way.

Lemma 5.2. If (1) has an interior feasible solution (x̂, ẑ, ŷ) with ν_p ≤ λ̂ ≤ χ_p, ν_d ≤ ω̂ ≤ χ_d, then there exists a positive scalar Γ such that for any iterate (λ, ω, y, Q) ∈ Ñ, ν_d ‖λ‖_1 + ν_p ‖ω‖_1 ≤ Γ.

Proof: Given an iterate (λ, ω, y, Q) ∈ Ñ, there exist Δc and Δb so that (λ, ω, y, Q) is a solution of the system of equations

    Q P ω + A^T y = c + Δc,    A Q P λ = b + Δb.

Then, according to Lemma 5.1, there exists (λ̃, ω̃, ỹ, Q̃) satisfying the same perturbed constraints with ν_p/2 ≤ λ̃ ≤ 3χ_p/2, ν_d/2 ≤ ω̃ ≤ 3χ_d/2; so A(x̃ − x) = 0 and A^T (y − ỹ) + (z − z̃) = 0. Hence

    (x̃ − x)^T (z̃ − z) = −(x̃ − x)^T A^T (ỹ − y) = 0.

Therefore,

(31)    x̃^T z + x^T z̃ = x^T z + x̃^T z̃.

On the one hand, since x̃_i and z̃_i are interior points with eigenvalues bounded below by ν_p/2 and ν_d/2,

    x̃^T z + x^T z̃ = Σ_{i=1}^n [x̃_i^0 z_i^0 + x̄̃_i^T z̄_i] + Σ_{i=1}^n [x_i^0 z̃_i^0 + x̄_i^T z̄̃_i]
                  ≥ Σ_{i=1}^n (x̃_i^0 − ‖x̄̃_i‖)(z_i^0 + ‖z̄_i‖)/2 + Σ_{i=1}^n (z̃_i^0 − ‖z̄̃_i‖)(x_i^0 + ‖x̄_i‖)/2
                  ≥ (ν_p/4) ‖ω‖_1 + (ν_d/4) ‖λ‖_1.

The first inequality is due to x_i^0 ≥ 0, z_i^0 ≥ 0, x̃_i^0 ≥ 0, z̃_i^0 ≥ 0, and the Cauchy–Schwarz–Buniakowsky inequality; the second follows from x_i^0 ≥ ‖x̄_i‖, z_i^0 ≥ ‖z̄_i‖, the eigenvalue representation of the second-order cone, and the lower bounds on λ̃ and ω̃. On the other hand,

(32)    x^T z + x̃^T z̃ = λ^T P^T P ω + x̃^T z̃ ≤ λ^T ω/2 + Σ_{i=1}^n (x̃_i^0 + ‖x̄̃_i‖)(z̃_i^0 + ‖z̄̃_i‖)/2 ≤ λ^{0T} ω^0/2 + (9/2) n χ_p χ_d.

We use the Cauchy–Schwarz–Buniakowsky inequality to get the inequalities above; the last step also uses λ^{kT} ω^k ≤ λ^{0T} ω^0, which is enforced by the algorithm. Combining (31) and (32), we obtain

    ν_d ‖λ‖_1 + ν_p ‖ω‖_1 ≤ 2 λ^{0T} ω^0 + 18 n χ_p χ_d =: Γ.

We have proved that Algorithm 2 will terminate at an (ɛ_c, ɛ_p, ɛ_d)-solution of (1) in finitely many iterations, provided that the smallest singular value of the Jacobian of the Newton system doesn't converge to zero.
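The eigenvalue perturbation bound underlying Lemma 5.1 — shifting x by h moves each spectral value by at most |h_0| + ‖h̄‖ — can be checked numerically. This is a randomized sanity check written for this note (not from the paper); the dimension and random seed are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def soc_eigs(x):
    """Spectral values w.r.t. the second-order cone:
    lambda_1 = x0 + ||x_bar||, lambda_2 = x0 - ||x_bar||."""
    r = np.linalg.norm(x[1:])
    return x[0] + r, x[0] - r

# Randomized check: for any x and perturbation h,
#   lambda_2(x + h) >= lambda_2(x) - (|h0| + ||h_bar||)
#   lambda_1(x + h) <= lambda_1(x) + (|h0| + ||h_bar||)
for _ in range(1000):
    x, h = rng.normal(size=5), rng.normal(size=5)
    shift = abs(h[0]) + np.linalg.norm(h[1:])
    l1, l2 = soc_eigs(x)
    m1, m2 = soc_eigs(x + h)
    assert m2 >= l2 - shift - 1e-12
    assert m1 <= l1 + shift + 1e-12
```

The bound is exactly the triangle-inequality step in the proof of Lemma 5.1; with ‖Δb‖ small, the shift is small, so the perturbed solution stays safely in the interior of the cone.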
5.2 Nonsingularity of Iterates

In this subsection, we give some conditions under which the smallest singular value of the Jacobian doesn't converge to zero. Each iterate satisfies a system of equations of the following form:

(33)    Q P ω + A^T y = c + r_d,    A Q P λ = b + r_p,    Λ ω = µ1 + r_c.

The algorithm ensures λ_1 > λ_2; hence x̄ ≠ 0. Given (x, y, z), because only the first two columns of each Q_i contribute to (33), the left-hand side of (33) is the same for any decomposition of x. Keeping only the first two columns of each Q_i, denoted q_i, we see each iterate is a solution of the system

(34)    q P ω + A^T y = c + r_d,    A q P λ = b + r_p,    Λ ω = µ1 + r_c,
        ‖q_i‖ = 1,  λ_{i1} > λ_{i2},  ω_{i2} > ω_{i1},    i = 1, …, n.

Lemma 5.3. For each triple (r_p, r_d, µ1 + r_c) with µ1 + r_c > 0, if (34) has a finite solution, it is unique.

Proof: Consider the constrained minimization problem

(35)    min_x  (c + r_d)^T x − Σ_{i=1}^n (µ + r_c)_{i1} ln(x_i^0 + ‖x̄_i‖) − Σ_{i=1}^n (µ + r_c)_{i2} ln(x_i^0 − ‖x̄_i‖)
        s.t.   A x = b + r_p.

Since the Hessian of the objective function is positive definite, the objective is strictly convex; so for each (r_p, r_d, r_c), if (35) has a finite solution, it is unique. The Lagrangian of (35) is

    L = (c + r_d)^T x − Σ_{i=1}^n (µ + r_c)_{i1} ln(x_i^0 + ‖x̄_i‖) − Σ_{i=1}^n (µ + r_c)_{i2} ln(x_i^0 − ‖x̄_i‖) − y^T (A x − b − r_p).

Notice that (35) has only linear constraints, and A has full row rank. So solving ∇L = 0 is the same as solving (35). The logarithmic terms force each x_i to be in the interior of the second-order cone. So we can set

    z_i = ((µ + r_c)_{i1}/(x_i^0 + ‖x̄_i‖)) (1; x̄_i/‖x̄_i‖) + ((µ + r_c)_{i2}/(x_i^0 − ‖x̄_i‖)) (1; −x̄_i/‖x̄_i‖)

and get the system

    A x = b + r_p,    A^T y + z = c + r_d,

which has a unique solution, because it is just ∇L = 0.
Given x ∈ R^{n+1} with x ≠ 0, the decomposition x = λ₁q₁ + λ₂q₂ with λ₁ ≥ λ₂ and ‖q‖ = 1 is unique if we take q = 0 for x̄ = 0. This can be seen by directly solving the above equation for λ₁, λ₂, and q:

  λ₁ = x₀ + ‖x̄‖,  λ₂ = x₀ − ‖x̄‖,  q = x̄/‖x̄‖.

The lemma is proved by letting

  ω_{i1} = (μ + r_c)_{i1}/(x_{i0} + ‖x̄_i‖),  ω_{i2} = (μ + r_c)_{i2}/(x_{i0} − ‖x̄_i‖).

For brevity, we denote w := (λ, ω, y, Q) and use G to represent the left-hand side of (5).

Lemma 5.4 Let w* be a solution to (1) satisfying the conditions of Theorem 3. Then there are positive numbers δ and ζ such that if λ⁰ᵀω⁰ ≤ ζ, then the smallest singular value of G′(wᵏ) is at distance at least δ from 0 for k = 0, 1, …, where wᵏ is generated by Algorithm 1.

Proof: By Theorem 3, G′(w*) is nonsingular. Let B denote the open unit ball. Since G′ is Lipschitz continuous, by the implicit function theorem there exist positive numbers δ and r such that for any w ∈ w* + rB the smallest singular value of G′(w) is at distance at least δ from 0, and G(w* + rB) contains G(w*) + rδB. Suppose ‖r_pᵏ‖ > ɛ_p and ‖r_dᵏ‖ > ɛ_d. By the definition of the algorithm, λᵏᵀωᵏ is decreasing in k, ‖r_pᵏ‖ ≤ γ_p λᵏᵀωᵏ, and ‖r_dᵏ‖ ≤ γ_d λᵏᵀωᵏ. Hence, if we assume max(1, γ_p, γ_d) λ⁰ᵀω⁰ ≤ rδ, then G(wᵏ) ∈ G(w*) + rδB for k = 0, 1, … By Lemma 5.3 and the relationship between (33) and (34), wᵏ must lie in w* + rB; therefore, the smallest singular value of G′(wᵏ) is at distance at least δ from 0 for k = 0, 1, …

Observe that the assumption ‖r_pᵏ‖ > ɛ_p or ‖r_dᵏ‖ > ɛ_d is not necessary in the above proof. Combining Lemma 5.2 and Lemma 5.4, we have the following theorem.

Theorem 5.5 Under the conditions of Theorem 3 and Lemma 5.2, there is a positive number ζ such that if λ⁰ᵀω⁰ ≤ ζ, Algorithm 1 converges to an (ɛ_p, ɛ_d, ɛ_c)-solution of (1) in finitely many steps.

6 Numerical Results

To test the Q method, we have implemented the basic algorithm in MATLAB. Below are the results of our tests on 1,000 randomly generated problems with known solutions. For the step sizes, we simply choose α = min(1, τα̂), β = min(1, τβ̂), γ = αβ, where α̂ and β̂ are the maximum step sizes to the boundary of the second-order cone. We used x_i = (·; ·; 0), s_i = (·; ·; 0), y = 0
as the starting point. We picked σ = 0.5 and τ = 0.99, which may not be the best choice of parameters. Our code reduced the l₂ norm of the primal infeasibility, the l₂ norm of the dual infeasibility, and the duality gap to less than 5.0e−· for all the problems. The range of every element in our randomly generated problems is (−0.5, 0.5); therefore, we did not use a relative measure of accuracy, as done by other algorithms. Note that our accuracy requirement is much more stringent than that of most other algorithms. Below are the results.
[Table: each row summarizes 100 random instances with the same number of blocks (bk), dimension of each block, type of each block at the optimum (b = boundary, i = interior, o = zero), and number of constraints m, together with the average initial primal infeasibility r_p⁰, the average initial dual infeasibility r_d⁰, and the average iteration count (it); the numerical entries are not recoverable from this transcription.]

In the above table, each row is the summary of 100 instances of problems with the same number of blocks, dimension of each block, optimal variable type, and number of constraints. bk represents the number of blocks; "type of each block" shows whether, at the optimum, each block is on the boundary (b), zero (o), or in the interior (i); m is the number of constraints; r_p⁰ is the average l₂ norm of the initial primal infeasibility over the 100 instances; r_d⁰ is the average l₂ norm of the initial dual infeasibility over the 100 instances; it is the average number of iterations over the 100 instances. All instances terminated at ɛ-solutions within 50 iterations, which shows that our algorithm is indeed stable and can attain high accuracy. The first row shows that our algorithm can solve LP problems, since 1-dimensional SOCP is just LP [5]. Notice that the problem type and size have little effect on the total number of iterations, which is a property of interior-point methods for SOCP. The following is a typical instance of the 2nd type of problem. We use "gap" to represent the duality gap.
[Table: a typical run on an instance of the 2nd problem type, listing for each iteration (it) the primal infeasibility r_p, the dual infeasibility r_d, and the duality gap; all three decrease monotonically to the required accuracy. The individual numerical entries are not recoverable from this transcription.]

Note that the closer the iterates are to the optimum, the faster the duality gap and the primal and dual infeasibilities decrease, a property not shared by some other algorithms. Observe that the duality gap decreases much more slowly than the primal or dual infeasibility as the iterations go on. Hence we have also used the l∞ norm as the measure of the duality gap, and found that the total number of iterations was reduced by about 5 on average.

The above results were generated by the algorithm without Mehrotra's predictor-corrector (pc) procedure. We have tried the pc method as well. Numerical results show that although in most cases the algorithm with the pc procedure requires fewer iterations, sometimes down to one third of the count without it, in some cases it needs many more iterations, occasionally twice as many as without it. Averaged over these cases, the algorithm with the pc procedure saves only a few iterations, but each iteration requires more work. We have also tested the algorithm with the orthogonal matrix updated by exp(·). The solutions achieve the same digit accuracy as well, but a couple more iterations are needed on average.

Second-order cone programming has many applications (see the references in §1). We have also tested the algorithm on one application, the SMT problem from [4]. The SMT problem is to find a shortest network spanning a set of given points, called regular points, in the Euclidean plane. The solution is always a tree, called the Steiner minimal tree (SMT), which may include some additional vertices, called Steiner points. Assume the number of regular points is N; then there are at most N − 2 Steiner points, and the degree of each Steiner point is at most 3. A tree whose vertices include just the N given regular points and N − 2 Steiner points
with the degree of each Steiner point being 3 is called a full Steiner topology of the N regular points. In [4], the problem of finding the coordinates of the N − 2 Steiner points that form the shortest network under a known full Steiner topology is transformed into an SOCP and solved by an interior-point method. Their numerical examples gave better computational results than existing algorithms did. Their formulation
is the following. Denote p := 2N − 3, which is the number of edges, and q := 2N − 4, which is the total number of coordinates of the Steiner points. Let

  c = ((0; c₁); (0; c₂); …; (0; c_p)),  Aᵀ = (A₁ᵀ; …; A_pᵀ) ∈ R^{3p×(p+q)},

where each A_iᵀ is a row of N − 2 block matrices acting on the q Steiner-point coordinates. The edges are ordered so that each of the first N edges connects a regular point to a Steiner point. For i = 1, …, N, c_i is the coordinates of regular point i₁, where i₁ is the index of the regular point on the ith edge; the only nonzero block of A_iᵀ is the i₂-nd, which is I, where i₂ is the index of the Steiner point on the ith edge. For i = N + 1, …, p, c_i = 0; assume the indices of the two Steiner points on the ith edge are i₁ and i₂; then the i₁-st block of A_iᵀ is I, the i₂-nd block of A_iᵀ is −I, and the rest of the blocks of A_iᵀ are zero. For i = 1, …, p, let y_i represent the length of the ith edge, and let y_{p+1:p+q} be the coordinates of the Steiner points. The SMT problem is then to find y satisfying the dual SOCP

(36)  max bᵀy  s.t.  Aᵀy + s = c,  s ⪰_Q 0.

We tested our code on an example in [4]. The two tables below give the coordinates of the 10 regular points and the tree topology, taken from [4]. The Steiner points are indexed before the regular points.

[Table: the coordinates (x, y) of the 10 regular points of the example; the numerical entries are not recoverable from this transcription.]

The indices of the two vertices of each edge are listed next to the index of the edge.

[Table: the tree topology, giving for each edge index the indices (ea, eb) of its two vertices; the numerical entries are not recoverable from this transcription.]

Our starting points and accuracy requirements are the same as those for the randomly generated problems. The result follows.
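The objective of (36) is the total edge length of the network. As a sanity check independent of the solver, the network cost of any candidate coordinates under a fixed topology can be computed directly; a small Python sketch (the instance below is an illustrative toy, not the example from [4]):

```python
import math

def network_cost(points, edges):
    """Total Euclidean length of a network.

    points maps vertex index -> (x, y) coordinates;
    edges is a list of (ea, eb) vertex-index pairs (the tree topology)."""
    return sum(math.dist(points[a], points[b]) for a, b in edges)

# Tiny illustrative instance: three regular points and one Steiner point
# (index 1), with the Steiner point joined to each regular point.
pts = {1: (0.0, 0.0),                                  # Steiner point
       2: (1.0, 0.0), 3: (-1.0, 0.0), 4: (0.0, 1.5)}   # regular points
topo = [(1, 2), (1, 3), (1, 4)]
print(network_cost(pts, topo))  # 3.5
```

Moving the Steiner point and re-evaluating network_cost reproduces the objective the SOCP (36) minimizes over the Steiner-point coordinates.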
[Table: iteration log for the SMT instance, listing for each iteration (it) the network cost, the primal infeasibility r_p, the dual infeasibility r_d, and the duality gap; the numerical entries are not recoverable from this transcription.]

Our initial network cost is the same as that of [4]. The network cost at our 7th iteration is already better than their final cost, which shows that our accuracy requirements are higher than theirs. Our method starts from an infeasible point, while their initial point must be feasible.

7 Modified Q Method

In this section, we give a variant of the Q method for SOCP which has properties and convergence results similar to those of the preceding sections.

7.1 The System

Formulation (4) shows that only the first two columns of the orthogonal matrix Q are involved in the calculation. The first column of Q is (1; 0); the second column is a unit vector whose first element is zero. Partition A_i as A_i = [A_{i0} Ā_i], and denote Ā := [Ā₁ … Ā_n]. Decompose x_i and z_i as

  x_i = ((λ_{i1} + λ_{i2})/2; ((λ_{i1} − λ_{i2})/2) q_i),  z_i = ((ω_{i1} + ω_{i2})/2; ((ω_{i1} − ω_{i2})/2) q_i).

We let q = 0 when x̄ = 0. Then the decomposition is unique under the assumptions λ_{i1} ≥ λ_{i2} and ω_{i1} ≤ ω_{i2}. Substitute the decompositions into (4) and add the constraints q_iᵀq_i = 1, i = 1, …, n. Let

  r_pᵏ := b − Axᵏ,  r_dᵏ := c − Aᵀyᵏ − zᵏ,  r_cᵏ := μe − Λᵏωᵏ.

We use (r_dᵏ)_{i0} to represent the first element of (r_dᵏ)_i and (r̄_dᵏ)_i to represent the remaining subvector. Then the resulting Newton system
is

(37)  (Δω_{i1} + Δω_{i2})/2 + A_{i0}ᵀΔy = (r_dᵏ)_{i0},
      ((Δω_{i1} − Δω_{i2})/2) q_iᵏ + ((ω_{i1}ᵏ − ω_{i2}ᵏ)/2) Δq_i + ĀᵢᵀΔy = (r̄_dᵏ)_i,
      Σ_i [ A_{i0}(Δλ_{i1} + Δλ_{i2})/2 + Ā_i q_iᵏ (Δλ_{i1} − Δλ_{i2})/2 + Ā_i ((λ_{i1}ᵏ − λ_{i2}ᵏ)/2) Δq_i ] = r_pᵏ,
      (q_iᵏ)ᵀΔq_i = 0,  i = 1, …, n,
      ΛᵏΔω + ΩᵏΔλ = r_cᵏ.

The algorithm is the same as that in the previous sections, except that the orthogonalization is replaced by normalization:

  q_iᵏ⁺¹ = (q_iᵏ + γΔq_i) / ‖q_iᵏ + γΔq_i‖.

7.2 Properties of the Solution

Let u_i := λ_{i1}ω_{i2} + λ_{i2}ω_{i1} and v_i := λ_{i1}ω_{i2} − λ_{i2}ω_{i1}; E_i and D_i are defined as in the previous sections, but with the appropriate dimensions. Omitting the iteration index k, block elimination of (37) yields Δy from the Schur-complement system MΔy = r_p + (a right-hand side assembled from the A_{i0}, Ā_i q_i, (r_c)_i, and (r_d)_i, with coefficients involving 1/ω_i, u_i, v_i, and D_iE_i); back-substitution through the dual-feasibility and normalization rows then gives Δq_i and Δω_{i2}, together with Δω_{i1} = 2((r_d)_{i0} − A_{i0}ᵀΔy) − Δω_{i2}, and finally

  Δλ = Ω⁻¹(r_c − ΛΔω).

The Schur complement M is a sum of two symmetric parts: one assembled from the terms A_{i0}q_iᵀĀ_iᵀ + Ā_i q_i A_{i0}ᵀ with weights u_i and v_i, and one of the form Ā Diag(·) Āᵀ whose diagonal blocks involve D_iE_i and u_i(I − q_i q_iᵀ). Each block of the Schur complement is one dimension smaller than in the other systems. When λ_{i1} > λ_{i2} > 0 and ω_{i2} > ω_{i1} > 0, we have u_i > v_i > 0, and D_iE_i is a positive scalar matrix. Because 1 is the only nonzero eigenvalue of q_i q_iᵀ, the second part of M is symmetric positive definite; observe that the first part of M is symmetric positive semidefinite. Therefore M is symmetric positive definite, so we can use Cholesky factorization to solve the Schur-complement system. The number of variables and equations used by the modified Q method is also about half of that required by the other methods, so it too is efficient in storage and work per iteration. The dimension of the Schur complement M is one less per block than in other methods; moreover, to keep each iterate in Q, one only needs to compute the ratio test α̂ = min{−λ_i/Δλ_i : Δλ_i < 0}, not a solution of a second-order equation.
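Two of the cheap per-iteration steps just described, the normalization update for q and the linear ratio test on the eigenvalues, can be sketched in a few lines of Python (function names are ours):

```python
import math

def normalize_update(q, dq, gamma):
    """One normalization step: q_new = (q + gamma*dq) / ||q + gamma*dq||.

    With ||q|| = 1 and q^T dq = 0 (enforced by system (37)),
    ||q + gamma*dq||^2 = 1 + gamma^2 * ||dq||^2, so the step is well defined."""
    v = [qi + gamma * di for qi, di in zip(q, dq)]
    nrm = math.sqrt(sum(vi * vi for vi in v))
    return [vi / nrm for vi in v]

def max_step(lam, dlam, tau=0.99):
    """Damped ratio test: largest alpha in (0, 1] keeping the eigenvalues
    lam + alpha*dlam positive -- a linear computation on the eigenvalues,
    with no quadratic equation to solve."""
    ratios = [-l / d for l, d in zip(lam, dlam) if d < 0]
    alpha_hat = min(ratios) if ratios else float("inf")
    return min(1.0, tau * alpha_hat)

q_new = normalize_update([1.0, 0.0], [0.0, 0.75], 1.0)
print(q_new)                              # [0.8, 0.6]
print(max_step([2.0, 0.5], [-4.0, 1.0]))  # 0.495
```

The ratio test returns 1.0 whenever no eigenvalue decreases, so full Newton steps are taken near the solution.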
To use (37), we do not need to update the orthogonal matrix, but the price we pay is n more variables and equations. Similar to Theorem 3, we have:

Theorem 7.1 Let (x*, y*, z*) be an optimal solution of (1) satisfying strict complementarity and the primal and dual nondegeneracy conditions, and also condition (7). Assume x ≠ 0 at the optimum. Decompose x_i = ((λ_{i1} + λ_{i2})/2; ((λ_{i1} − λ_{i2})/2) q_i) and z_i = ((ω_{i1} + ω_{i2})/2; ((ω_{i1} − ω_{i2})/2) q_i). Then the Jacobian of (37) evaluated at (x*, y*, z*) is nonsingular.

Proof: For any unit vector q ∈ R^n, define an orthogonal matrix

  Q_q = [ q₀   −q̄ᵀ
          q̄    I − q̄q̄ᵀ/(1 + q₀) ],

which satisfies Q_q e₁ = q and Q_qᵀQ_q = I. After dropping the iteration index k, write the blocks of the Jacobian of (37) with the unknowns ordered as (Δω_{i1}, Δω_{i2}, Δλ_{i1}, Δλ_{i2}, Δq_i, Δy) and the rows ordered as the dual-feasibility, primal-feasibility, normalization, and complementarity equations. We first left-multiply the block of dual-feasibility equations by Diag(Q_{q_i}ᵀ), and then right-multiply the columns corresponding to Δq by Diag(Q_{q_i}). Notice that

  Ā_i q_i = Ā_i Q_{q_i} Q_{q_i}ᵀ q_i = Ā_i Q_{q_i} e₁.

After crossing out the columns of Δq_i and the rows of q_iᵀΔq_i for i = 1, …, n, we find that the Jacobian of (37) is the same as that of (5) with B = A Diag(Q_{q_i}ᵀ). Hence the whole proof of Theorem 3 applies here. So, as for the Q method, we can expect that when the iterates of the modified Q method are close to the optimum, they converge fast, and the solutions are accurate and numerically stable.
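The orthogonal matrix Q_q in the proof can be built explicitly. A Python sketch under the stated block pattern (one standard completion with first column q, assumed here since the exact signs are not recoverable from the source; it requires q₀ ≠ −1):

```python
def build_Qq(q):
    """Orthogonal matrix with prescribed first column q (a unit vector):

        Q_q = [[q0, -qbar^T], [qbar, I - qbar*qbar^T/(1 + q0)]].

    Satisfies Q_q e1 = q and Q_q^T Q_q = I; undefined when q0 == -1."""
    q0, qbar = q[0], q[1:]
    n = len(q)
    Q = [[0.0] * n for _ in range(n)]
    Q[0][0] = q0
    for i, v in enumerate(qbar):
        Q[0][i + 1] = -v       # first row: (q0, -qbar^T)
        Q[i + 1][0] = v        # first column: (q0, qbar)
    for i in range(len(qbar)):
        for j in range(len(qbar)):
            Q[i + 1][j + 1] = (1.0 if i == j else 0.0) - qbar[i] * qbar[j] / (1.0 + q0)
    return Q

Q = build_Qq([0.6, 0.8])
print([row[0] for row in Q])   # [0.6, 0.8] -- first column is q
```

For n = 2 this reduces to a plane rotation; for general n it is a rank-one correction of the identity, so applying it costs O(n) per vector.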
7.3 Convergence Analysis

All the convergence proofs in the previous sections can be adapted to the modified Q method. For example, replace s_i(η) by q_i(η) in the proof of Theorem 4. Then

(38)  g_p(α) = (λᵏ + αΔλ)ᵀ(ωᵏ + αΔω) − γ_p‖Ax(α) − b‖,

where x(α) is assembled blockwise from λ_iᵏ + αΔλ_i and q_i(α) = (q_i + αΔq_i)/‖q_i + αΔq_i‖. Expanding the first term gives (1 − α)λᵏᵀωᵏ + ασλᵏᵀωᵏ + α²ΔλᵀΔω, and the infeasibility term is bounded through ‖q_i(α) − q_i‖. Since t/(1 + t) is increasing for t ≥ 0, and since ‖q_i‖ = 1 and q_iᵀΔq_i = 0, so that ‖q_i + αΔq_i‖² = 1 + α²‖Δq_i‖², one can easily see that

  ‖q_i(α) − q_i‖ ≤ α‖Δq_i‖ ≤ αη.

Hence,

(38)  g_p(α) ≥ ασɛ − α²(η² + γ_p‖Ā‖(η³ + (Υ + η)η + Υη³)).

In other words, a lower bound on α for g_p(α) ≥ 0 is

  σɛ / (η² + γ_p‖Ā‖(η³ + (Υ + η)η + Υη³)).

Therefore, all the lemmas and theorems of the previous sections carry over.

7.4 Numerical Examples

We have implemented the basic algorithm for the modified Q method in MATLAB and tested it on 1,000 randomly generated problems. The step sizes α, β, γ are chosen as for the Q method. The problem types, accuracy requirements, starting points, and parameters are the same as those in §6. Below are the results.

[Table: for each problem type, the initial primal infeasibility r_p⁰, the initial dual infeasibility r_d⁰, and the average iteration count (it); the numerical entries are not recoverable from this transcription.]
Although the algorithm finds an ɛ-optimal solution for all the 1,000 problems, a small portion of them need more than 100 iterations to reach the required accuracy, which raises the average number of iterations. The following are the results on the SMT problem.

[Table: iteration log of the modified Q method on the SMT instance, listing for each iteration (it) the network cost, the primal infeasibility r_p, the dual infeasibility r_d, and the duality gap; the numerical entries are not recoverable from this transcription.]

Note that the total number of iterations required to reach the final network cost of [4] is one less than that of [4].

8 Conclusion and Future Research

We have developed and analyzed the Q method and a variant of it for SOCP. Preliminary numerical results show that the algorithms are promising. In the future, we intend to investigate sparse-matrix issues and large-scale applications.

Appendix

In this section, we show that (3) is valid for any S. We use the notion of a primary matrix function (see [9, 6.4, p. 40]) to define a matrix-valued function. The definition is the following.

Definition 1 Let A be a given square matrix with Jordan canonical form A = UJU⁻¹. Assume

  J = Diag(J_{n₁}(λ_{ν₁}), …, J_{n_r}(λ_{ν_r})),

where each J_k(λ) is a k-by-k Jordan block with eigenvalue λ. Let c_i be the dimension of the largest Jordan block corresponding to λ_i. Let f(t) be a scalar-valued function of t such that each λ_i with
More informationSelf-Concordant Barrier Functions for Convex Optimization
Appendix F Self-Concordant Barrier Functions for Convex Optimization F.1 Introduction In this Appendix we present a framework for developing polynomial-time algorithms for the solution of convex optimization
More informationCS 542G: Robustifying Newton, Constraints, Nonlinear Least Squares
CS 542G: Robustifying Newton, Constraints, Nonlinear Least Squares Robert Bridson October 29, 2008 1 Hessian Problems in Newton Last time we fixed one of plain Newton s problems by introducing line search
More informationAbsolute value equations
Linear Algebra and its Applications 419 (2006) 359 367 www.elsevier.com/locate/laa Absolute value equations O.L. Mangasarian, R.R. Meyer Computer Sciences Department, University of Wisconsin, 1210 West
More informationIterative Methods for Solving A x = b
Iterative Methods for Solving A x = b A good (free) online source for iterative methods for solving A x = b is given in the description of a set of iterative solvers called templates found at netlib: http
More informationLecture 2: Linear Algebra Review
EE 227A: Convex Optimization and Applications January 19 Lecture 2: Linear Algebra Review Lecturer: Mert Pilanci Reading assignment: Appendix C of BV. Sections 2-6 of the web textbook 1 2.1 Vectors 2.1.1
More informationNumerical Linear Algebra
Numerical Linear Algebra The two principal problems in linear algebra are: Linear system Given an n n matrix A and an n-vector b, determine x IR n such that A x = b Eigenvalue problem Given an n n matrix
More informationReview Questions REVIEW QUESTIONS 71
REVIEW QUESTIONS 71 MATLAB, is [42]. For a comprehensive treatment of error analysis and perturbation theory for linear systems and many other problems in linear algebra, see [126, 241]. An overview of
More informationUNDERGROUND LECTURE NOTES 1: Optimality Conditions for Constrained Optimization Problems
UNDERGROUND LECTURE NOTES 1: Optimality Conditions for Constrained Optimization Problems Robert M. Freund February 2016 c 2016 Massachusetts Institute of Technology. All rights reserved. 1 1 Introduction
More informationΩ R n is called the constraint set or feasible set. x 1
1 Chapter 5 Linear Programming (LP) General constrained optimization problem: minimize subject to f(x) x Ω Ω R n is called the constraint set or feasible set. any point x Ω is called a feasible point We
More informationE5295/5B5749 Convex optimization with engineering applications. Lecture 8. Smooth convex unconstrained and equality-constrained minimization
E5295/5B5749 Convex optimization with engineering applications Lecture 8 Smooth convex unconstrained and equality-constrained minimization A. Forsgren, KTH 1 Lecture 8 Convex optimization 2006/2007 Unconstrained
More informationPenalty and Barrier Methods. So we again build on our unconstrained algorithms, but in a different way.
AMSC 607 / CMSC 878o Advanced Numerical Optimization Fall 2008 UNIT 3: Constrained Optimization PART 3: Penalty and Barrier Methods Dianne P. O Leary c 2008 Reference: N&S Chapter 16 Penalty and Barrier
More information12. Interior-point methods
12. Interior-point methods Convex Optimization Boyd & Vandenberghe inequality constrained minimization logarithmic barrier function and central path barrier method feasibility and phase I methods complexity
More informationLecture: Cone programming. Approximating the Lorentz cone.
Strong relaxations for discrete optimization problems 10/05/16 Lecture: Cone programming. Approximating the Lorentz cone. Lecturer: Yuri Faenza Scribes: Igor Malinović 1 Introduction Cone programming is
More informationPenalty and Barrier Methods General classical constrained minimization problem minimize f(x) subject to g(x) 0 h(x) =0 Penalty methods are motivated by the desire to use unconstrained optimization techniques
More informationSAMPLE OF THE STUDY MATERIAL PART OF CHAPTER 1 Introduction to Linear Algebra
SAMPLE OF THE STUDY MATERIAL PART OF CHAPTER 1 Introduction to 1.1. Introduction Linear algebra is a specific branch of mathematics dealing with the study of vectors, vector spaces with functions that
More informationOn the Local Quadratic Convergence of the Primal-Dual Augmented Lagrangian Method
Optimization Methods and Software Vol. 00, No. 00, Month 200x, 1 11 On the Local Quadratic Convergence of the Primal-Dual Augmented Lagrangian Method ROMAN A. POLYAK Department of SEOR and Mathematical
More informationA Generalized Homogeneous and Self-Dual Algorithm. for Linear Programming. February 1994 (revised December 1994)
A Generalized Homogeneous and Self-Dual Algorithm for Linear Programming Xiaojie Xu Yinyu Ye y February 994 (revised December 994) Abstract: A generalized homogeneous and self-dual (HSD) infeasible-interior-point
More information5. Duality. Lagrangian
5. Duality Convex Optimization Boyd & Vandenberghe Lagrange dual problem weak and strong duality geometric interpretation optimality conditions perturbation and sensitivity analysis examples generalized
More informationAn Algorithm for Solving the Convex Feasibility Problem With Linear Matrix Inequality Constraints and an Implementation for Second-Order Cones
An Algorithm for Solving the Convex Feasibility Problem With Linear Matrix Inequality Constraints and an Implementation for Second-Order Cones Bryan Karlovitz July 19, 2012 West Chester University of Pennsylvania
More informationA class of Smoothing Method for Linear Second-Order Cone Programming
Columbia International Publishing Journal of Advanced Computing (13) 1: 9-4 doi:1776/jac1313 Research Article A class of Smoothing Method for Linear Second-Order Cone Programming Zhuqing Gui *, Zhibin
More informationEcon Slides from Lecture 7
Econ 205 Sobel Econ 205 - Slides from Lecture 7 Joel Sobel August 31, 2010 Linear Algebra: Main Theory A linear combination of a collection of vectors {x 1,..., x k } is a vector of the form k λ ix i for
More informationMore First-Order Optimization Algorithms
More First-Order Optimization Algorithms Yinyu Ye Department of Management Science and Engineering Stanford University Stanford, CA 94305, U.S.A. http://www.stanford.edu/ yyye Chapters 3, 8, 3 The SDM
More informationmin f(x). (2.1) Objectives consisting of a smooth convex term plus a nonconvex regularization term;
Chapter 2 Gradient Methods The gradient method forms the foundation of all of the schemes studied in this book. We will provide several complementary perspectives on this algorithm that highlight the many
More informationLecture notes: Applied linear algebra Part 1. Version 2
Lecture notes: Applied linear algebra Part 1. Version 2 Michael Karow Berlin University of Technology karow@math.tu-berlin.de October 2, 2008 1 Notation, basic notions and facts 1.1 Subspaces, range and
More informationMATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 1 x 2. x n 8 (4) 3 4 2
MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS SYSTEMS OF EQUATIONS AND MATRICES Representation of a linear system The general system of m equations in n unknowns can be written a x + a 2 x 2 + + a n x n b a
More informationLinear Algebra Massoud Malek
CSUEB Linear Algebra Massoud Malek Inner Product and Normed Space In all that follows, the n n identity matrix is denoted by I n, the n n zero matrix by Z n, and the zero vector by θ n An inner product
More informationDS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra.
DS-GA 1002 Lecture notes 0 Fall 2016 Linear Algebra These notes provide a review of basic concepts in linear algebra. 1 Vector spaces You are no doubt familiar with vectors in R 2 or R 3, i.e. [ ] 1.1
More information15-780: LinearProgramming
15-780: LinearProgramming J. Zico Kolter February 1-3, 2016 1 Outline Introduction Some linear algebra review Linear programming Simplex algorithm Duality and dual simplex 2 Outline Introduction Some linear
More informationGeometric problems. Chapter Projection on a set. The distance of a point x 0 R n to a closed set C R n, in the norm, is defined as
Chapter 8 Geometric problems 8.1 Projection on a set The distance of a point x 0 R n to a closed set C R n, in the norm, is defined as dist(x 0,C) = inf{ x 0 x x C}. The infimum here is always achieved.
More informationSemidefinite Programming Basics and Applications
Semidefinite Programming Basics and Applications Ray Pörn, principal lecturer Åbo Akademi University Novia University of Applied Sciences Content What is semidefinite programming (SDP)? How to represent
More informationRandomized Quasi-Newton Updates are Linearly Convergent Matrix Inversion Algorithms
Randomized Quasi-Newton Updates are Linearly Convergent Matrix Inversion Algorithms Robert M. Gower and Peter Richtárik School of Mathematics University of Edinburgh United Kingdom ebruary 4, 06 Abstract
More informationMath 341: Convex Geometry. Xi Chen
Math 341: Convex Geometry Xi Chen 479 Central Academic Building, University of Alberta, Edmonton, Alberta T6G 2G1, CANADA E-mail address: xichen@math.ualberta.ca CHAPTER 1 Basics 1. Euclidean Geometry
More informationLecture 17: Primal-dual interior-point methods part II
10-725/36-725: Convex Optimization Spring 2015 Lecture 17: Primal-dual interior-point methods part II Lecturer: Javier Pena Scribes: Pinchao Zhang, Wei Ma Note: LaTeX template courtesy of UC Berkeley EECS
More information