Using Schur Complement Theorem to prove convexity of some SOC-functions
Journal of Nonlinear and Convex Analysis, vol. 13, no. 3, 2012

Using Schur Complement Theorem to prove convexity of some SOC-functions

Jein-Shan Chen ¹
Department of Mathematics, National Taiwan Normal University, Taipei 11677, Taiwan
jschen@math.ntnu.edu.tw

Tsun-Ko Liao
Department of Mathematics, National Taiwan Normal University, Taipei 11677, Taiwan

Shaohua Pan ²
Department of Mathematics, South China University of Technology, Guangzhou, China
shhpan@scut.edu.cn

March 10, 2012

Abstract. In this paper, we provide an important application of the Schur Complement Theorem in establishing convexity of some functions associated with second-order cones (SOCs), called SOC-trace functions. As illustrated in the paper, these functions play a key role in the development of penalty and barrier function methods for second-order cone programs, and in the establishment of some important inequalities associated with SOCs.

Key words. Second-order cone, SOC-functions, convexity, positive semidefiniteness

AMS subject classifications. 26A27, 26B05, 26B35, 49J52, 90C33.

¹ Member of Mathematics Division, National Center for Theoretical Sciences, Taipei Office. The author's work is supported by the National Science Council of Taiwan.
² The author's work is supported by the National Young Natural Science Foundation and the Guangdong Natural Science Foundation.
1 Introduction

The second-order cone (SOC) in IR^n, also called the Lorentz cone, is the set defined as

    K^n := { (x_1, x_2) ∈ IR × IR^{n-1} : x_1 ≥ ||x_2|| },    (1)

where ||·|| denotes the Euclidean norm. When n = 1, K^n reduces to the set of nonnegative reals IR_+. As shown in [12], K^n is also the set composed of the squared elements from the Jordan algebra (IR^n, ∘), where the Jordan product ∘ is the binary operation defined by

    x ∘ y := ( ⟨x, y⟩, x_1 y_2 + y_1 x_2 )    (2)

for any x = (x_1, x_2), y = (y_1, y_2) ∈ IR × IR^{n-1}. Unless otherwise stated, in the rest of this note we use e = (1, 0, ..., 0)^T ∈ IR^n to denote the unit element of the Jordan algebra (IR^n, ∘), i.e., e ∘ x = x for any x ∈ IR^n; and for any x ∈ IR^n, we use x_1 to denote the first component of x and x_2 the vector consisting of the remaining n-1 components. From [11, 12], we recall that each x ∈ IR^n admits a spectral factorization associated with K^n of the form

    x = λ_1(x) u_x^{(1)} + λ_2(x) u_x^{(2)},    (3)

where λ_i(x) and u_x^{(i)} for i = 1, 2 are the spectral values and the associated spectral vectors of x, respectively, defined by

    λ_i(x) = x_1 + (-1)^i ||x_2||,    u_x^{(i)} = (1/2) ( 1, (-1)^i x̄_2 ),    (4)

with x̄_2 = x_2/||x_2|| if x_2 ≠ 0, and otherwise x̄_2 being any vector in IR^{n-1} with ||x̄_2|| = 1. When x_2 ≠ 0, the spectral factorization is unique. The determinant and trace of x are defined as det(x) := λ_1(x)λ_2(x) and tr(x) := λ_1(x) + λ_2(x), respectively. With the spectral factorization above, for any given scalar function f : J ⊆ IR → IR, we may define a vector-valued function f^soc : S ⊆ IR^n → IR^n by

    f^soc(x) := f(λ_1(x)) u_x^{(1)} + f(λ_2(x)) u_x^{(2)},    (5)

where J is an interval (finite or infinite, open or closed) of IR, and S is the domain of f^soc determined by f. Clearly, f^soc is well-defined whether x_2 = 0 or not. Assume that f is differentiable on int J. Then, by Lemma 2.2
in Section 2, f^soc is differentiable on int S; moreover, for any x ∈ int S, ∇f^soc(x) has a structure which invites the application of the Schur Complement Theorem in establishing its positive semidefiniteness or positive definiteness. On the other hand, Lemma 2.2 shows that the SOC-trace function

    f^tr(x) := f(λ_1(x)) + f(λ_2(x)) = tr(f^soc(x)),    x ∈ S,    (6)
is differentiable on int S with ∇f^tr(x) = 2 (f')^soc(x) for any x ∈ int S, which means that

    ∇²f^tr(x) = 2 ∇(f')^soc(x)    for all x ∈ int S.    (7)

These two facts together show that we may establish the convexity of f^tr with the help of the Schur Complement Theorem. In fact, the Schur Complement Theorem is frequently used in topics related to semidefinite programming (SDP), but to the best of our knowledge no paper has introduced its applications involving SOCs. Motivated by this, in this paper we provide such an application of the Schur Complement Theorem: establishing the convexity of SOC-trace functions and of compounds of SOC-trace functions with real-valued functions. As illustrated in the next section, some of these functions are the key to penalty and barrier function methods for second-order cone programs (SOCPs), as well as to the establishment of some important inequalities associated with SOCs, for which a proof of convexity of these functions is a necessity. But this requires computation of first and second-order derivatives, which is technically much more demanding than in the linear and semidefinite cases. As will be seen, Theorem 2.1 gives a simple way to achieve this objective via the Schur Complement Theorem, by which one only needs to check the sign of the second-order derivative of a scalar function.

Throughout this note, for any x, y ∈ IR^n, we write x ⪰_{K^n} y if x - y ∈ K^n, and x ≻_{K^n} y if x - y ∈ int K^n. For a real symmetric matrix A, we write A ⪰ O (respectively, A ≻ O) if A is positive semidefinite (respectively, positive definite). For any f : J → IR, f'(t) and f''(t) denote the first and second derivatives of f at the differentiable point t ∈ J; for any F : S ⊆ IR^n → IR, ∇F(x) and ∇²F(x) denote the gradient and the Hessian matrix of F at the differentiable point x ∈ S.
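The spectral machinery in (1)-(5) is easy to exercise numerically. Below is a minimal sketch (assuming NumPy; the helper names `spectral`, `f_soc`, `f_tr` and `jprod` are ours, not from the paper) that builds the factorization (3)-(4) and checks that f^soc with f(t) = t² reproduces the Jordan square x ∘ x:

```python
import numpy as np

def spectral(x):
    """Spectral factorization (3)-(4) of x = (x1, x2) w.r.t. K^n."""
    x1, x2 = x[0], x[1:]
    r = np.linalg.norm(x2)
    # any unit vector works when x2 = 0; pick the first coordinate vector
    w = x2 / r if r > 0 else np.eye(len(x2))[0]
    lam1, lam2 = x1 - r, x1 + r
    u1 = 0.5 * np.concatenate(([1.0], -w))
    u2 = 0.5 * np.concatenate(([1.0], w))
    return lam1, lam2, u1, u2

def f_soc(f, x):
    """f^soc(x) = f(lam1) u1 + f(lam2) u2, definition (5)."""
    lam1, lam2, u1, u2 = spectral(x)
    return f(lam1) * u1 + f(lam2) * u2

def f_tr(f, x):
    """SOC-trace function f^tr(x) = f(lam1) + f(lam2), definition (6)."""
    lam1, lam2, _, _ = spectral(x)
    return f(lam1) + f(lam2)

def jprod(x, y):
    """Jordan product (2): x o y = (<x, y>, x1*y2 + y1*x2)."""
    return np.concatenate(([x @ y], x[0] * y[1:] + y[0] * x[1:]))

x = np.array([3.0, 1.0, 2.0])      # x1 = 3 > ||x2|| = sqrt(5), so x in int K^3
lam1, lam2, u1, u2 = spectral(x)
recon = lam1 * u1 + lam2 * u2      # should reproduce x
tr_x = lam1 + lam2                 # equals 2*x1
```

In particular, `f_soc(lambda t: t*t, x)` coincides with `jprod(x, x)`, illustrating that K^n consists of the squared elements of (IR^n, ∘).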
2 Main results

The Schur Complement Theorem characterizes the positive semidefiniteness (definiteness) of a matrix via the positive semidefiniteness (definiteness) of the Schur complement with respect to a block partitioning of the matrix. It is stated as below.

Lemma 2.1 [13] (Schur Complement Theorem) Let A ∈ IR^{m×m} be a symmetric positive definite matrix, C ∈ IR^{n×n} a symmetric matrix, and B ∈ IR^{m×n}. Then,

    [ A    B ]
    [ B^T  C ]  ⪰ O    ⟺    C - B^T A^{-1} B ⪰ O,    (8)

and

    [ A    B ]
    [ B^T  C ]  ≻ O    ⟺    C - B^T A^{-1} B ≻ O.    (9)
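The equivalences (8)-(9) are easy to probe on random block matrices. A small numerical sketch (assuming NumPy; the helper `is_psd` is ours) follows:

```python
import numpy as np

rng = np.random.default_rng(0)

def is_psd(M, tol=1e-10):
    """Positive semidefiniteness test via the smallest eigenvalue."""
    return np.linalg.eigvalsh((M + M.T) / 2).min() >= -tol

checks = []
for _ in range(200):
    m, n = 3, 2
    R = rng.standard_normal((m, m))
    A = R @ R.T + np.eye(m)                  # symmetric positive definite
    B = rng.standard_normal((m, n))
    G = rng.standard_normal((n, n))
    C = (G + G.T) / 2                        # symmetric
    M = np.block([[A, B], [B.T, C]])
    schur = C - B.T @ np.linalg.solve(A, B)  # C - B^T A^{-1} B
    checks.append(is_psd(M) == is_psd(schur))
ok = all(checks)
```

Each trial compares the semidefiniteness of the full block matrix with that of its Schur complement, matching equivalence (8).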
In this section, we focus on the application of the Schur Complement Theorem in establishing convexity of SOC-trace functions. To this end, we need the following lemma.

Lemma 2.2 For any given f : J ⊆ IR → IR, let f^soc : S → IR^n and f^tr : S → IR be given by (5) and (6), respectively. Assume that J is open. Then, the following results hold.

(a) The domain S of f^soc and f^tr is also open.

(b) If f is (continuously) differentiable on J, then f^soc is (continuously) differentiable on S. Moreover, for any x ∈ S, ∇f^soc(x) = f'(x_1) I if x_2 = 0, and otherwise

    ∇f^soc(x) = [ b(x)               c(x) x_2^T/||x_2||                           ]
                [ c(x) x_2/||x_2||   a(x) I + (b(x) - a(x)) x_2 x_2^T/||x_2||²    ],    (10)

where

    b(x) = (f'(λ_2(x)) + f'(λ_1(x)))/2,    c(x) = (f'(λ_2(x)) - f'(λ_1(x)))/2,
    a(x) = (f(λ_2(x)) - f(λ_1(x)))/(λ_2(x) - λ_1(x)).

(c) If f is (continuously) differentiable, then f^tr is (continuously) differentiable on S with ∇f^tr(x) = 2 (f')^soc(x); if f is twice (continuously) differentiable, then f^tr is twice (continuously) differentiable on S with ∇²f^tr(x) = 2 ∇(f')^soc(x).

Proof. (a) Fix any x ∈ S. Then λ_1(x), λ_2(x) ∈ J. Since J is an open subset of IR, there exist δ_1, δ_2 > 0 such that {t ∈ IR : |t - λ_1(x)| < δ_1} ⊆ J and {t ∈ IR : |t - λ_2(x)| < δ_2} ⊆ J. Let δ := min{δ_1, δ_2}/√2. Then, for any y satisfying ||y - x|| < δ, we have |λ_1(y) - λ_1(x)| < δ_1 and |λ_2(y) - λ_2(x)| < δ_2 by noting that

    (λ_1(x) - λ_1(y))² + (λ_2(x) - λ_2(y))²
      = 2(x_1² + ||x_2||²) + 2(y_1² + ||y_2||²) - 4(x_1 y_1 + ||x_2|| ||y_2||)
      ≤ 2(x_1² + ||x_2||²) + 2(y_1² + ||y_2||²) - 4(x_1 y_1 + ⟨x_2, y_2⟩)
      = 2( ||x||² + ||y||² - 2⟨x, y⟩ ) = 2 ||x - y||²,

and consequently λ_1(y) ∈ J and λ_2(y) ∈ J. Since f is a function from J to IR, this means that {y ∈ IR^n : ||y - x|| < δ} ⊆ S, and therefore the set S is open.

(b) The proof is direct by using the same arguments as those of [9, Props. 4 and 5].

(c) If f is (continuously) differentiable, then from part (b) and f^tr(x) = 2 e^T f^soc(x) it follows that f^tr is (continuously) differentiable. In addition, a simple computation yields ∇f^tr(x) = 2 ∇f^soc(x) e = 2 (f')^soc(x). Similarly, by part (b), the second part follows.
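Formula (10) can be checked against finite differences. The following sketch (assuming NumPy; the helpers `spec`, `f_soc` and `grad_formula` are ours) assembles the block matrix (10) for f(t) = exp(t) and compares it with a central-difference Jacobian of f^soc:

```python
import numpy as np

def spec(x):
    """Spectral values/vectors of x w.r.t. K^n (x2 assumed nonzero here)."""
    x1, x2 = x[0], x[1:]
    r = np.linalg.norm(x2)
    w = x2 / r
    return (x1 - r, x1 + r,
            0.5 * np.concatenate(([1.0], -w)),
            0.5 * np.concatenate(([1.0], w)))

def f_soc(f, x):
    l1, l2, u1, u2 = spec(x)
    return f(l1) * u1 + f(l2) * u2

def grad_formula(f, fp, x):
    """The block matrix of formula (10), built from a(x), b(x), c(x)."""
    l1, l2, _, _ = spec(x)
    x2 = x[1:]
    w = x2 / np.linalg.norm(x2)
    b = (fp(l2) + fp(l1)) / 2
    c = (fp(l2) - fp(l1)) / 2
    a = (f(l2) - f(l1)) / (l2 - l1)
    k = len(x2)
    J = np.empty((1 + k, 1 + k))
    J[0, 0] = b
    J[0, 1:] = c * w
    J[1:, 0] = c * w
    J[1:, 1:] = a * np.eye(k) + (b - a) * np.outer(w, w)
    return J

x = np.array([0.7, 0.3, -0.4])               # x2 != 0
J = grad_formula(np.exp, np.exp, x)          # f = f' = exp
eps = 1e-6
Jnum = np.column_stack([(f_soc(np.exp, x + eps * e) - f_soc(np.exp, x - eps * e)) / (2 * eps)
                        for e in np.eye(len(x))])
err = np.abs(J - Jnum).max()
```

The maximal entrywise discrepancy `err` is at the level of the finite-difference error, as Lemma 2.2(b) predicts.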
Theorem 2.1 For any given f : J → IR, let f^soc : S → IR^n and f^tr : S → IR be given by (5) and (6), respectively. Assume that J is open. If f is twice differentiable on J, then

(a) f''(t) ≥ 0 for any t ∈ J ⟺ ∇(f')^soc(x) ⪰ O for any x ∈ S ⟺ f^tr is convex in S.

(b) f''(t) > 0 for any t ∈ J ⟺ ∇(f')^soc(x) ≻ O for any x ∈ S ⟹ f^tr is strictly convex in S.

Proof. (a) By Lemma 2.2(c), ∇²f^tr(x) = 2 ∇(f')^soc(x) for any x ∈ S, and the second equivalence follows by Prop. B.4(a) and (c) of [4]. We next come to the first equivalence. By Lemma 2.2(b), for any fixed x ∈ S, ∇(f')^soc(x) = f''(x_1) I if x_2 = 0, and otherwise ∇(f')^soc(x) has the same expression as in (10) except that

    b(x) = (f''(λ_2(x)) + f''(λ_1(x)))/2,    c(x) = (f''(λ_2(x)) - f''(λ_1(x)))/2,
    a(x) = (f'(λ_2(x)) - f'(λ_1(x)))/(λ_2(x) - λ_1(x)).

Assume that ∇(f')^soc(x) ⪰ O for any x ∈ S. Then we readily have b(x) ≥ 0 for any x ∈ S. Noting that b(x) = f''(x_1) when x_2 = 0, we particularly have f''(x_1) ≥ 0 for all x_1 ∈ J, and consequently f''(t) ≥ 0 for all t ∈ J.

Assume that f''(t) ≥ 0 for all t ∈ J. Fix any x ∈ S. Clearly, b(x) ≥ 0 and a(x) ≥ 0. If b(x) = 0, then f''(λ_1(x)) = f''(λ_2(x)) = 0, and consequently c(x) = 0, which in turn implies that

    ∇(f')^soc(x) = [ 0   0                                    ]
                   [ 0   a(x) ( I - x_2 x_2^T/||x_2||² )      ]  ⪰ O.    (11)

If b(x) > 0, then by the first equivalence of Lemma 2.1 and the expression of ∇(f')^soc(x) it suffices to argue that the matrix

    a(x) I + (b(x) - a(x)) x_2 x_2^T/||x_2||² - (c²(x)/b(x)) x_2 x_2^T/||x_2||²    (12)

is positive semidefinite. Since the rank-one matrix x_2 x_2^T has only one nonzero eigenvalue ||x_2||², the matrix in (12) has the eigenvalue a(x) of multiplicity n-2 and the eigenvalue b(x) - c²(x)/b(x) of multiplicity 1. Since a(x) ≥ 0 and b(x) - c²(x)/b(x) = f''(λ_1(x)) f''(λ_2(x))/b(x) ≥ 0, the matrix in (12) is positive semidefinite. By the arbitrariness of x, we have ∇(f')^soc(x) ⪰ O for all x ∈ S.

(b) The first equivalence is direct by using (9) of Lemma 2.1, noting that ∇(f')^soc(x) ≻ O implies a(x) > 0 when x_2 ≠ 0, and following the same arguments as in part (a). The second part is due to [4, Prop. B.4(b)].
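Theorem 2.1 can also be probed numerically: for f(t) = exp(t), which has f''(t) = e^t > 0 on all of IR, the finite-difference Hessian of f^tr should be positive definite at every sample point. A sketch (assuming NumPy; `num_hessian` is our helper) follows:

```python
import numpy as np

def f_tr(x, f=np.exp):
    """SOC-trace of f: f(lam1) + f(lam2); here f = exp, so f''(t) = e^t > 0."""
    r = np.linalg.norm(x[1:])
    return f(x[0] - r) + f(x[0] + r)

def num_hessian(F, x, eps=1e-4):
    """Central finite-difference Hessian of F at x."""
    n = len(x)
    H = np.empty((n, n))
    E = np.eye(n) * eps
    for i in range(n):
        for j in range(n):
            H[i, j] = (F(x + E[i] + E[j]) - F(x + E[i] - E[j])
                       - F(x - E[i] + E[j]) + F(x - E[i] - E[j])) / (4 * eps**2)
    return H

rng = np.random.default_rng(1)
min_eigs = []
for _ in range(50):
    x = rng.standard_normal(4)
    if np.linalg.norm(x[1:]) < 1e-2:
        continue                      # keep away from the kink of ||x_2|| at 0
    H = num_hessian(f_tr, x)
    min_eigs.append(np.linalg.eigvalsh((H + H.T) / 2).min())
pd_everywhere = all(m > 0 for m in min_eigs)
```

Every sampled Hessian has strictly positive smallest eigenvalue, consistent with the chain f'' > 0 ⟹ ∇(f')^soc ≻ O ⟹ f^tr strictly convex.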
Remark 2.1 Note that the strict convexity of f^tr does not necessarily imply the positive definiteness of ∇²f^tr(x). Consider f(t) = t⁴ for t ∈ IR. We next show that f^tr is strictly
convex. Indeed, f^tr is convex in IR^n by Theorem 2.1(a) since f''(t) = 12t² ≥ 0. Taking into account that f^tr is continuous, it remains to prove that

    f^tr( (x+y)/2 ) = ( f^tr(x) + f^tr(y) )/2    ⟹    x = y.    (13)

Since h(t) = (t_0 + t)⁴ + (t_0 - t)⁴, for any fixed t_0 ∈ IR, is increasing on [0, +∞), and the function f(t) = t⁴ is strictly convex in IR, we have that

    f^tr( (x+y)/2 )
      = [ λ_1((x+y)/2) ]⁴ + [ λ_2((x+y)/2) ]⁴
      = [ (x_1+y_1)/2 - ||x_2+y_2||/2 ]⁴ + [ (x_1+y_1)/2 + ||x_2+y_2||/2 ]⁴
      ≤ [ (x_1+y_1)/2 - (||x_2||+||y_2||)/2 ]⁴ + [ (x_1+y_1)/2 + (||x_2||+||y_2||)/2 ]⁴
      = [ (λ_1(x)+λ_1(y))/2 ]⁴ + [ (λ_2(x)+λ_2(y))/2 ]⁴
      ≤ ( λ_1(x)⁴ + λ_1(y)⁴ )/2 + ( λ_2(x)⁴ + λ_2(y)⁴ )/2
      = ( f^tr(x) + f^tr(y) )/2,

and moreover, the above inequalities become equalities if and only if

    ||x_2 + y_2|| = ||x_2|| + ||y_2||,    λ_1(x) = λ_1(y),    λ_2(x) = λ_2(y).

It is easy to verify that the three equalities hold if and only if x = y. So the implication in (13) holds, i.e., f^tr is strictly convex. However, by Theorem 2.1(b), ∇(f')^soc(x) ≻ O does not hold for all x ∈ IR^n, since f''(t) > 0 fails at t = 0.

It should be mentioned that the fact that the strict convexity of f implies the strict convexity of f^tr was proved in [2, 8] via the definition of convex function, but here we use the Schur Complement Theorem and the relation between ∇(f')^soc and ∇²f^tr to establish the convexity of SOC-trace functions. In addition, we also note that the necessity involved in the first equivalence of Theorem 2.1(a) was given in [11] in a different way.

Next, we illustrate the application of Theorem 2.1 with some SOC-trace functions.

Proposition 2.1 The following functions associated with K^n are all strictly convex.

(a) F_1(x) = -ln(det(x)) for x ∈ int K^n.

(b) F_2(x) = tr(x^{-1}) for x ∈ int K^n.
(c) F_3(x) = tr(φ(x)) for x ∈ int K^n, where

    φ(x) = (x^{p+1} - e)/(p+1) + (x^{1-q} - e)/(q-1)    if p ∈ [0,1], q > 1;
    φ(x) = (x^{p+1} - e)/(p+1) - ln x                   if p ∈ [0,1], q = 1.

(d) F_4(x) = -ln(det(e - x)) for x ≺_{K^n} e.

(e) F_5(x) = tr( (e - x)^{-1} ∘ x ) for x ≺_{K^n} e.

(f) F_6(x) = tr(exp(x)) for x ∈ IR^n.

(g) F_7(x) = ln(det(e + exp(x))) for x ∈ IR^n.

(h) F_8(x) = tr( ( x + (x² + 4e)^{1/2} )/2 ) for x ∈ IR^n.

Proof. Note that F_1(x), F_2(x) and F_3(x) are the SOC-trace functions associated with f_1(t) = -ln t (t > 0), f_2(t) = t^{-1} (t > 0) and f_3(t) (t > 0), respectively, where

    f_3(t) = (t^{p+1} - 1)/(p+1) + (t^{1-q} - 1)/(q-1)    if p ∈ [0,1], q > 1,
    f_3(t) = (t^{p+1} - 1)/(p+1) - ln t                   if p ∈ [0,1], q = 1;

F_4(x) is the SOC-trace function associated with f_4(t) = -ln(1-t) (t < 1); F_5(x) is the SOC-trace function associated with f_5(t) = t/(1-t) (t < 1), by noting that

    (e - x)^{-1} ∘ x = ( λ_1(x)/(1 - λ_1(x)) ) u_x^{(1)} + ( λ_2(x)/(1 - λ_2(x)) ) u_x^{(2)};

F_6(x) and F_7(x) are the SOC-trace functions associated with f_6(t) = exp(t) (t ∈ IR) and f_7(t) = ln(1 + exp(t)) (t ∈ IR), respectively; and F_8(x) is the SOC-trace function associated with f_8(t) = ( t + √(t² + 4) )/2 (t ∈ IR). It is easy to verify that the functions f_1-f_8 have positive second-order derivatives on their respective domains, and therefore F_1-F_8 are strictly convex by Theorem 2.1(b).

The functions F_1, F_2 and F_3 are the popular barrier functions which play a key role in the development of interior point methods for SOCPs; see, e.g., [6, 7, 14, 15, 17]. Here F_3 covers a wide range of barrier functions, including the classical logarithmic barrier function, the self-regular functions and the non-self-regular functions; see [7] for details. The functions F_4 and F_5 are the popular shifted barrier functions [1, 2, 3] for SOCPs, and F_6-F_8 can be used as penalty functions for SOCPs; these functions are added to the objective of an SOCP to force the solution to be feasible.
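The convexity claims of Proposition 2.1 are easy to spot-check at random points. A sketch (assuming NumPy; helper names are ours) tests midpoint convexity of the barrier F_1 and the penalty F_6:

```python
import numpy as np

def lam(x):
    r = np.linalg.norm(x[1:])
    return x[0] - r, x[0] + r

def F1(x):
    """Barrier -ln(det(x)) = -ln(lam1) - ln(lam2), defined on int K^n."""
    l1, l2 = lam(x)
    return -np.log(l1) - np.log(l2)

def F6(x):
    """Penalty tr(exp(x)) = exp(lam1) + exp(lam2), defined on all of R^n."""
    l1, l2 = lam(x)
    return np.exp(l1) + np.exp(l2)

rng = np.random.default_rng(2)

def interior_point(n):
    x = rng.standard_normal(n)
    x[0] = np.linalg.norm(x[1:]) + rng.uniform(0.1, 2.0)   # force x1 > ||x2||
    return x

checks = []
for _ in range(100):
    x, y = interior_point(4), interior_point(4)
    m = 0.5 * (x + y)                 # int K^n is convex, so m stays feasible
    checks.append(F1(m) <= 0.5 * (F1(x) + F1(y)) + 1e-10)
    checks.append(F6(m) <= 0.5 * (F6(x) + F6(y)) + 1e-10)
ok = all(checks)
```

Midpoint convexity plus continuity is equivalent to convexity, so these random trials are a (non-exhaustive) consistency check of parts (a) and (f).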
Besides its application in establishing convexity for SOC-trace functions, the Schur Complement Theorem can be employed to establish convexity of some compound functions of SOC-trace functions and scalar-valued functions, which is usually difficult to achieve via the definition of convex function. The following proposition presents such an application.
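The compound function studied next involves det(x)^{1/p}; its concavity over K^n for p ≥ 2 (equivalently, convexity of its negative) can be probed numerically first. A sketch, assuming NumPy (helper names ours):

```python
import numpy as np

def det_soc(x):
    """det(x) = lam1 * lam2 = x1^2 - ||x2||^2."""
    return x[0] ** 2 - np.linalg.norm(x[1:]) ** 2

rng = np.random.default_rng(3)

def cone_point(n):
    x = rng.standard_normal(n)
    x[0] = np.linalg.norm(x[1:]) + rng.uniform(0.0, 2.0)   # x1 >= ||x2||, so x in K^n
    return x

concave_ok = True
for p in (2.0, 3.0, 5.0):
    for _ in range(100):
        x, y = cone_point(4), cone_point(4)
        m = 0.5 * (x + y)
        lhs = det_soc(m) ** (1.0 / p)
        rhs = 0.5 * (det_soc(x) ** (1.0 / p) + det_soc(y) ** (1.0 / p))
        concave_ok = concave_ok and (lhs >= rhs - 1e-10)
```

No midpoint-concavity violation shows up for p = 2, 3, 5, in line with the proposition below.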
Proposition 2.2 For any x ⪰_{K^n} 0, let F_9(x) := -[det(x)]^{1/p} with p > 1. Then,

(a) F_9 is twice continuously differentiable in int K^n.

(b) F_9 is convex when p ≥ 2, and moreover, it is strictly convex when p > 2.

Proof. (a) Note that F_9(x) = -exp( p^{-1} ln(det(x)) ) for any x ∈ int K^n, and ln(det(x)) = f^tr(x) with f(t) = ln(t) for t ∈ IR_{++}. By Lemma 2.2(c), ln(det(x)) is twice continuously differentiable in int K^n. Hence F_9 is twice continuously differentiable in int K^n. The result then follows.

(b) In view of the continuity of F_9, we only need to prove its convexity over int K^n. By part (a), we achieve this goal by proving that the Hessian matrix ∇²F_9(x) is positive semidefinite for any x ∈ int K^n when p ≥ 2, and positive definite when p > 2. Fix any x ∈ int K^n. From direct computations, we obtain

    ∇F_9(x) = (2/p) ( x_1² - ||x_2||² )^{1/p - 1}  [ -x_1 ]
                                                   [  x_2 ]

and

    ∇²F_9(x) = ((p-1)/p²) (det(x))^{1/p - 2} [ 4x_1² - (2p/(p-1))(x_1² - ||x_2||²)     -4x_1 x_2^T                                  ]
                                             [ -4x_1 x_2      4 x_2 x_2^T + (2p/(p-1))(x_1² - ||x_2||²) I ].    (14)

Since x ∈ int K^n, we have x_1 > 0 and det(x) = x_1² - ||x_2||² > 0, and therefore

    a_1(x) := 4x_1² - (2p/(p-1))(x_1² - ||x_2||²) = ((2p-4)/(p-1)) x_1² + (2p/(p-1)) ||x_2||² ≥ 0

whenever p ≥ 2. We next proceed with the arguments for the following two cases: (1) a_1(x) = 0; (2) a_1(x) > 0.

Case 1: a_1(x) = 0. Since p ≥ 2, in this case we must have x_2 = 0, and consequently,

    ∇²F_9(x) = ((p-1)/p²) (x_1²)^{1/p - 2} [ 0   0                     ]
                                           [ 0   (2p/(p-1)) x_1² I     ]  ⪰ O.

Case 2: a_1(x) > 0. In this case, we calculate that

    a_1(x) [ 4 x_2 x_2^T + (2p/(p-1))(x_1² - ||x_2||²) I ] - 16 x_1² x_2 x_2^T
      = (4p/(p-1)) (x_1² - ||x_2||²) [ ((p-2)/(p-1)) x_1² I + (p/(p-1)) ||x_2||² I - 2 x_2 x_2^T ].    (15)

Since the rank-one matrix x_2 x_2^T has only one nonzero eigenvalue ||x_2||², the matrix in the bracket on the right-hand side of (15) has one eigenvalue of multiplicity 1 given by

    ((p-2)/(p-1)) x_1² + (p/(p-1)) ||x_2||² - 2||x_2||² = ((p-2)/(p-1)) ( x_1² - ||x_2||² ) ≥ 0,
and one eigenvalue of multiplicity n-2 given by

    ((p-2)/(p-1)) x_1² + (p/(p-1)) ||x_2||² ≥ 0.

Furthermore, these eigenvalues must be positive when p > 2, since x_1 > 0 and x_1² - ||x_2||² > 0. This means that the matrix on the right-hand side of (15) is positive semidefinite, and moreover positive definite when p > 2. Applying Lemma 2.1, we have ∇²F_9(x) ⪰ O, and furthermore ∇²F_9(x) ≻ O when p > 2. Since a_1(x) > 0 must hold when p > 2, the arguments above show that F_9 is convex over int K^n when p ≥ 2, and strictly convex over int K^n when p > 2.

It is worthwhile to point out that det(x) is neither convex nor concave on K^n, and it is difficult to argue the convexity of compound functions involving det(x) via the definition of convex function. But, as shown in Proposition 2.2, the Schur Complement Theorem offers a simple way to prove their convexity.

To close this section, we take a look at the application of some of the convex functions above in establishing inequalities associated with SOCs. Some of these inequalities have been used to analyze the properties of the SOC-function f^soc [10] and the convergence of interior point methods for SOCPs [2].

Proposition 2.3 For any x ⪰_{K^n} 0 and y ⪰_{K^n} 0, the following inequalities hold.

(a) det(αx + (1-α)y) ≥ (det(x))^α (det(y))^{1-α} for any 0 < α < 1.

(b) det(x+y)^{1/p} ≥ 2^{2/p - 1} ( det(x)^{1/p} + det(y)^{1/p} ) for any p ≥ 2.

(c) det(x+y) ≥ det(x) + det(y).

(d) det(αx + (1-α)y) ≥ α² det(x) + (1-α)² det(y) for any 0 < α < 1.

(e) [det(e+x)]^{1/2} ≥ 1 + det(x)^{1/2}.

(f) If x ⪰_{K^n} y, then det(x) ≥ det(y).

(g) det(x)^{1/2} = inf { (1/2) tr(x ∘ y) : det(y) = 1, y ⪰_{K^n} 0 }. Furthermore, when x ≻_{K^n} 0, the same relation holds with inf replaced by min.

(h) tr(x ∘ y) ≥ 2 det(x)^{1/2} det(y)^{1/2}.

Proof. (a) From Prop. 2.1(a), we know that -ln(det(x)) is strictly convex in int K^n, i.e., ln(det(x)) is strictly concave there. Thus,

    ln(det(αx + (1-α)y)) ≥ α ln(det(x)) + (1-α) ln(det(y)) = ln(det(x)^α) + ln(det(y)^{1-α})
for any 0 < α < 1 and x, y ∈ int K^n. This, together with the monotonicity of ln t (t > 0) and the continuity of det(x), implies the desired result.

(b) By Prop. 2.2(b), det(x)^{1/p} is concave over K^n. So, for any x, y ∈ K^n, we have

    det( (x+y)/2 )^{1/p} ≥ (1/2) [ det(x)^{1/p} + det(y)^{1/p} ],

which, together with det((x+y)/2) = det(x+y)/4 (recall that det(cx) = c² det(x) for any c ∈ IR), yields

    det(x+y)^{1/p} ≥ 4^{1/p} · (1/2) [ det(x)^{1/p} + det(y)^{1/p} ] = 2^{2/p - 1} ( det(x)^{1/p} + det(y)^{1/p} ),

which is the desired result.

(c) Using the inequality in part (b) with p = 2, we have

    det(x+y)^{1/2} ≥ det(x)^{1/2} + det(y)^{1/2}.

Squaring both sides yields

    det(x+y) ≥ det(x) + det(y) + 2 det(x)^{1/2} det(y)^{1/2} ≥ det(x) + det(y),

where the last inequality is by the nonnegativity of det(x) and det(y) since x, y ∈ K^n.

(d) The inequality is direct by part (c) and the fact that det(αx) = α² det(x).

(e) The inequality follows from part (b) with p = 2 and the fact that det(e) = 1.

(f) Using part (c) and noting that x ⪰_{K^n} y, it is easy to verify that

    det(x) = det( y + (x - y) ) ≥ det(y) + det(x - y) ≥ det(y).

(g) Using the Cauchy-Schwarz inequality, it is easy to verify that

    tr(x ∘ y) ≥ λ_1(x)λ_2(y) + λ_1(y)λ_2(x)    for all x, y ∈ IR^n.

For any x, y ∈ K^n, this along with the arithmetic-geometric mean inequality implies that

    (1/2) tr(x ∘ y) ≥ ( λ_1(x)λ_2(y) + λ_1(y)λ_2(x) )/2 ≥ √( λ_1(x)λ_2(y)λ_1(y)λ_2(x) ) = det(x)^{1/2} det(y)^{1/2},

which means that inf { (1/2) tr(x ∘ y) : det(y) = 1, y ⪰_{K^n} 0 } = det(x)^{1/2} for any fixed x ∈ K^n. If x ≻_{K^n} 0, then we can verify that the feasible point y* = det(x)^{1/2} x^{-1} satisfies (1/2) tr(x ∘ y*) = det(x)^{1/2}, and the second part follows.
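The determinant inequalities of Proposition 2.3 lend themselves to random testing. A sketch (assuming NumPy; helper names are ours, and tr(x ∘ y) = 2⟨x, y⟩ is used directly) checks parts (b) with p = 2, (c) and (h):

```python
import numpy as np

def det_soc(x):
    return x[0] ** 2 - np.linalg.norm(x[1:]) ** 2

def tr_jprod(x, y):
    """tr(x o y) = 2 <x, y>, since x o y has first component <x, y>."""
    return 2.0 * float(x @ y)

rng = np.random.default_rng(4)

def cone_point(n):
    x = rng.standard_normal(n)
    x[0] = np.linalg.norm(x[1:]) + rng.uniform(0.0, 2.0)
    return x

all_ok = True
for _ in range(200):
    x, y = cone_point(5), cone_point(5)
    dx, dy, dxy = det_soc(x), det_soc(y), det_soc(x + y)
    all_ok = all_ok and (dxy >= dx + dy - 1e-9)                              # part (c)
    all_ok = all_ok and (np.sqrt(dxy) >= np.sqrt(dx) + np.sqrt(dy) - 1e-9)   # part (b), p = 2
    all_ok = all_ok and (tr_jprod(x, y) >= 2 * np.sqrt(dx * dy) - 1e-9)      # part (h)
```

No violation appears over the random cone points, as the proposition predicts.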
(h) Using part (g), for any x ∈ K^n and y ∈ int K^n, we have

    (1/2) tr( x ∘ y/det(y)^{1/2} ) ≥ det(x)^{1/2},    since    det( y/det(y)^{1/2} ) = 1,

which together with the continuity of det(x) and tr(x) implies that

    tr(x ∘ y) ≥ 2 det(x)^{1/2} det(y)^{1/2}    for all x, y ∈ K^n.

Thus, we complete the proof.

Note that some of the inequalities in Prop. 2.3 were established with the help of the Schwarz inequality [10], but here we achieve the goal easily by using the convexity of SOC-functions. These inequalities all have corresponding counterparts for matrices [5, 13, 16]. For example, Prop. 2.3(b) with p = 2, i.e., p equal to the rank of the Jordan algebra (IR^n, ∘), corresponds to the Minkowski inequality of the matrix case:

    det(A + B)^{1/n} ≥ det(A)^{1/n} + det(B)^{1/n}

for any n×n positive semidefinite matrices A and B.

3 Conclusions

We studied an application of the Schur Complement Theorem in establishing convexity of SOC-functions, especially SOC-trace functions, which are the key to penalty and barrier function methods for SOCPs and to some important inequalities associated with SOCs. One possible future direction is proving self-concordancy for such barrier/penalty functions associated with SOCs. We also believe that the results in this paper will be helpful towards establishing further properties of other SOC-functions.

References

[1] A. Auslender, Penalty and barrier methods: a unified framework, SIAM Journal on Optimization, vol. 10.

[2] A. Auslender, Variational inequalities over the cone of semidefinite positive symmetric matrices and over the Lorentz cone, Optimization Methods and Software, vol. 18, 2003.

[3] A. Auslender and H. Ramirez, Penalty and barrier methods for convex semidefinite programming, Mathematical Methods of Operations Research, vol. 63.
[4] D. P. Bertsekas, Nonlinear Programming, 2nd edition, Athena Scientific.

[5] R. Bhatia, Matrix Analysis, Springer-Verlag, New York.

[6] A. Ben-Tal and A. Nemirovski, Lectures on Modern Convex Optimization: Analysis, Algorithms and Engineering Applications, MPS-SIAM Series on Optimization, SIAM, Philadelphia, USA, 2001.

[7] Y.-Q. Bai and G. Q. Wang, Primal-dual interior-point algorithms for second-order cone optimization based on a new parametric kernel function, Acta Mathematica Sinica, vol. 23, 2007.

[8] H. Bauschke, O. Güler, A. S. Lewis and S. Sendov, Hyperbolic polynomials and convex analysis, Canadian Journal of Mathematics, vol. 53, 2001.

[9] J.-S. Chen, X. Chen and P. Tseng, Analysis of nonsmooth vector-valued functions associated with second-order cones, Mathematical Programming, vol. 101, 2004.

[10] J.-S. Chen, The convex and monotone functions associated with second-order cone, Optimization, vol. 55, 2006.

[11] M. Fukushima, Z.-Q. Luo and P. Tseng, Smoothing functions for second-order-cone complementarity problems, SIAM Journal on Optimization, vol. 12, 2002.

[12] J. Faraut and A. Korányi, Analysis on Symmetric Cones, Oxford Mathematical Monographs, Oxford University Press, New York.

[13] R. A. Horn and C. R. Johnson, Matrix Analysis, Cambridge University Press, Cambridge.

[14] R. D. C. Monteiro and T. Tsuchiya, Polynomial convergence of primal-dual algorithms for the second-order cone programs based on the MZ-family of directions, Mathematical Programming, vol. 88, 2000.

[15] J. Peng, C. Roos and T. Terlaky, Self-Regularity: A New Paradigm for Primal-Dual Interior-Point Algorithms, Princeton University Press, 2002.

[16] J. Schott, Matrix Analysis for Statistics, 2nd edition, John Wiley and Sons, 2005.

[17] T. Tsuchiya, A convergence analysis of the scaling-invariant primal-dual path-following algorithms for second-order cone programming, Optimization Methods and Software, vol. 11.
More informationConvex functions. Definition. f : R n! R is convex if dom f is a convex set and. f ( x +(1 )y) < f (x)+(1 )f (y) f ( x +(1 )y) apple f (x)+(1 )f (y)
Convex functions I basic properties and I operations that preserve convexity I quasiconvex functions I log-concave and log-convex functions IOE 611: Nonlinear Programming, Fall 2017 3. Convex functions
More informationLimiting behavior of the central path in semidefinite optimization
Limiting behavior of the central path in semidefinite optimization M. Halická E. de Klerk C. Roos June 11, 2002 Abstract It was recently shown in [4] that, unlike in linear optimization, the central path
More informationInterior-Point Methods for Linear Optimization
Interior-Point Methods for Linear Optimization Robert M. Freund and Jorge Vera March, 204 c 204 Robert M. Freund and Jorge Vera. All rights reserved. Linear Optimization with a Logarithmic Barrier Function
More informationGeometric Mapping Properties of Semipositive Matrices
Geometric Mapping Properties of Semipositive Matrices M. J. Tsatsomeros Mathematics Department Washington State University Pullman, WA 99164 (tsat@wsu.edu) July 14, 2015 Abstract Semipositive matrices
More informationLecture 6: Conic Optimization September 8
IE 598: Big Data Optimization Fall 2016 Lecture 6: Conic Optimization September 8 Lecturer: Niao He Scriber: Juan Xu Overview In this lecture, we finish up our previous discussion on optimality conditions
More informationA smoothing Newton-type method for second-order cone programming problems based on a new smoothing Fischer-Burmeister function
Volume 30, N. 3, pp. 569 588, 2011 Copyright 2011 SBMAC ISSN 0101-8205 www.scielo.br/cam A smoothing Newton-type method for second-order cone programming problems based on a new smoothing Fischer-Burmeister
More informationAn unconstrained smooth minimization reformulation of the second-order cone complementarity problem
Math. Program., Ser. B 104, 93 37 005 Digital Object Identifier DOI 10.1007/s10107-005-0617-0 Jein-Shan Chen Paul Tseng An unconstrained smooth minimization reformulation of the second-order cone complementarity
More informationELE539A: Optimization of Communication Systems Lecture 15: Semidefinite Programming, Detection and Estimation Applications
ELE539A: Optimization of Communication Systems Lecture 15: Semidefinite Programming, Detection and Estimation Applications Professor M. Chiang Electrical Engineering Department, Princeton University March
More informationA Novel Inexact Smoothing Method for Second-Order Cone Complementarity Problems
A Novel Inexact Smoothing Method for Second-Order Cone Complementarity Problems Xiaoni Chi Guilin University of Electronic Technology School of Math & Comput Science Guilin Guangxi 541004 CHINA chixiaoni@126.com
More informationStrict diagonal dominance and a Geršgorin type theorem in Euclidean
Strict diagonal dominance and a Geršgorin type theorem in Euclidean Jordan algebras Melania Moldovan Department of Mathematics and Statistics University of Maryland, Baltimore County Baltimore, Maryland
More information12. Interior-point methods
12. Interior-point methods Convex Optimization Boyd & Vandenberghe inequality constrained minimization logarithmic barrier function and central path barrier method feasibility and phase I methods complexity
More informationOn self-concordant barriers for generalized power cones
On self-concordant barriers for generalized power cones Scott Roy Lin Xiao January 30, 2018 Abstract In the study of interior-point methods for nonsymmetric conic optimization and their applications, Nesterov
More informationJordan-algebraic aspects of optimization:randomization
Jordan-algebraic aspects of optimization:randomization L. Faybusovich, Department of Mathematics, University of Notre Dame, 255 Hurley Hall, Notre Dame, IN, 46545 June 29 2007 Abstract We describe a version
More informationAgenda. Interior Point Methods. 1 Barrier functions. 2 Analytic center. 3 Central path. 4 Barrier method. 5 Primal-dual path following algorithms
Agenda Interior Point Methods 1 Barrier functions 2 Analytic center 3 Central path 4 Barrier method 5 Primal-dual path following algorithms 6 Nesterov Todd scaling 7 Complexity analysis Interior point
More informationConvex Optimization M2
Convex Optimization M2 Lecture 3 A. d Aspremont. Convex Optimization M2. 1/49 Duality A. d Aspremont. Convex Optimization M2. 2/49 DMs DM par email: dm.daspremont@gmail.com A. d Aspremont. Convex Optimization
More informationEE/ACM Applications of Convex Optimization in Signal Processing and Communications Lecture 17
EE/ACM 150 - Applications of Convex Optimization in Signal Processing and Communications Lecture 17 Andre Tkacenko Signal Processing Research Group Jet Propulsion Laboratory May 29, 2012 Andre Tkacenko
More informationOn the simplest expression of the perturbed Moore Penrose metric generalized inverse
Annals of the University of Bucharest (mathematical series) 4 (LXII) (2013), 433 446 On the simplest expression of the perturbed Moore Penrose metric generalized inverse Jianbing Cao and Yifeng Xue Communicated
More informationA Primal-Dual Second-Order Cone Approximations Algorithm For Symmetric Cone Programming
A Primal-Dual Second-Order Cone Approximations Algorithm For Symmetric Cone Programming Chek Beng Chua Abstract This paper presents the new concept of second-order cone approximations for convex conic
More informationConvex Functions and Optimization
Chapter 5 Convex Functions and Optimization 5.1 Convex Functions Our next topic is that of convex functions. Again, we will concentrate on the context of a map f : R n R although the situation can be generalized
More informationFischer-Burmeister Complementarity Function on Euclidean Jordan Algebras
Fischer-Burmeister Complementarity Function on Euclidean Jordan Algebras Lingchen Kong, Levent Tunçel, and Naihua Xiu 3 (November 6, 7; Revised December, 7) Abstract Recently, Gowda et al. [] established
More informationLecture Note 5: Semidefinite Programming for Stability Analysis
ECE7850: Hybrid Systems:Theory and Applications Lecture Note 5: Semidefinite Programming for Stability Analysis Wei Zhang Assistant Professor Department of Electrical and Computer Engineering Ohio State
More informationAn Introduction to Linear Matrix Inequalities. Raktim Bhattacharya Aerospace Engineering, Texas A&M University
An Introduction to Linear Matrix Inequalities Raktim Bhattacharya Aerospace Engineering, Texas A&M University Linear Matrix Inequalities What are they? Inequalities involving matrix variables Matrix variables
More informationLocal Self-concordance of Barrier Functions Based on Kernel-functions
Iranian Journal of Operations Research Vol. 3, No. 2, 2012, pp. 1-23 Local Self-concordance of Barrier Functions Based on Kernel-functions Y.Q. Bai 1, G. Lesaja 2, H. Mansouri 3, C. Roos *,4, M. Zangiabadi
More informationSecond-order cone programming
Outline Second-order cone programming, PhD Lehigh University Department of Industrial and Systems Engineering February 10, 2009 Outline 1 Basic properties Spectral decomposition The cone of squares The
More information2. Linear algebra. matrices and vectors. linear equations. range and nullspace of matrices. function of vectors, gradient and Hessian
FE661 - Statistical Methods for Financial Engineering 2. Linear algebra Jitkomut Songsiri matrices and vectors linear equations range and nullspace of matrices function of vectors, gradient and Hessian
More informationLMI MODELLING 4. CONVEX LMI MODELLING. Didier HENRION. LAAS-CNRS Toulouse, FR Czech Tech Univ Prague, CZ. Universidad de Valladolid, SP March 2009
LMI MODELLING 4. CONVEX LMI MODELLING Didier HENRION LAAS-CNRS Toulouse, FR Czech Tech Univ Prague, CZ Universidad de Valladolid, SP March 2009 Minors A minor of a matrix F is the determinant of a submatrix
More informationSome characterizations of cone preserving Z-transformations
Annals of Operations Research manuscript No. (will be inserted by the editor) Some characterizations of cone preserving Z-transformations M. Seetharama Gowda Yoon J. Song K.C. Sivakumar This paper is dedicated,
More information3. Convex functions. basic properties and examples. operations that preserve convexity. the conjugate function. quasiconvex functions
3. Convex functions Convex Optimization Boyd & Vandenberghe basic properties and examples operations that preserve convexity the conjugate function quasiconvex functions log-concave and log-convex functions
More informationMatrix Inequalities by Means of Block Matrices 1
Mathematical Inequalities & Applications, Vol. 4, No. 4, 200, pp. 48-490. Matrix Inequalities by Means of Block Matrices Fuzhen Zhang 2 Department of Math, Science and Technology Nova Southeastern University,
More informationSecond-Order Cone Programming
Second-Order Cone Programming F. Alizadeh D. Goldfarb 1 Introduction Second-order cone programming (SOCP) problems are convex optimization problems in which a linear function is minimized over the intersection
More informationAn Infeasible Interior-Point Algorithm with full-newton Step for Linear Optimization
An Infeasible Interior-Point Algorithm with full-newton Step for Linear Optimization H. Mansouri M. Zangiabadi Y. Bai C. Roos Department of Mathematical Science, Shahrekord University, P.O. Box 115, Shahrekord,
More informationSpectral inequalities and equalities involving products of matrices
Spectral inequalities and equalities involving products of matrices Chi-Kwong Li 1 Department of Mathematics, College of William & Mary, Williamsburg, Virginia 23187 (ckli@math.wm.edu) Yiu-Tung Poon Department
More informationA full-newton step infeasible interior-point algorithm for linear programming based on a kernel function
A full-newton step infeasible interior-point algorithm for linear programming based on a kernel function Zhongyi Liu, Wenyu Sun Abstract This paper proposes an infeasible interior-point algorithm with
More informationAgenda. 1 Cone programming. 2 Convex cones. 3 Generalized inequalities. 4 Linear programming (LP) 5 Second-order cone programming (SOCP)
Agenda 1 Cone programming 2 Convex cones 3 Generalized inequalities 4 Linear programming (LP) 5 Second-order cone programming (SOCP) 6 Semidefinite programming (SDP) 7 Examples Optimization problem in
More informationCommon-Knowledge / Cheat Sheet
CSE 521: Design and Analysis of Algorithms I Fall 2018 Common-Knowledge / Cheat Sheet 1 Randomized Algorithm Expectation: For a random variable X with domain, the discrete set S, E [X] = s S P [X = s]
More informationConvex Optimization. (EE227A: UC Berkeley) Lecture 6. Suvrit Sra. (Conic optimization) 07 Feb, 2013
Convex Optimization (EE227A: UC Berkeley) Lecture 6 (Conic optimization) 07 Feb, 2013 Suvrit Sra Organizational Info Quiz coming up on 19th Feb. Project teams by 19th Feb Good if you can mix your research
More information4TE3/6TE3. Algorithms for. Continuous Optimization
4TE3/6TE3 Algorithms for Continuous Optimization (Algorithms for Constrained Nonlinear Optimization Problems) Tamás TERLAKY Computing and Software McMaster University Hamilton, November 2005 terlaky@mcmaster.ca
More informationLecture 7: Semidefinite programming
CS 766/QIC 820 Theory of Quantum Information (Fall 2011) Lecture 7: Semidefinite programming This lecture is on semidefinite programming, which is a powerful technique from both an analytic and computational
More informationCentral Paths in Semidefinite Programming, Generalized Proximal Point Method and Cauchy Trajectories in Riemannian Manifolds
Central Paths in Semidefinite Programming, Generalized Proximal Point Method and Cauchy Trajectories in Riemannian Manifolds da Cruz Neto, J. X., Ferreira, O. P., Oliveira, P. R. and Silva, R. C. M. February
More informationOptimization Theory. A Concise Introduction. Jiongmin Yong
October 11, 017 16:5 ws-book9x6 Book Title Optimization Theory 017-08-Lecture Notes page 1 1 Optimization Theory A Concise Introduction Jiongmin Yong Optimization Theory 017-08-Lecture Notes page Optimization
More informationConvex Functions. Daniel P. Palomar. Hong Kong University of Science and Technology (HKUST)
Convex Functions Daniel P. Palomar Hong Kong University of Science and Technology (HKUST) ELEC5470 - Convex Optimization Fall 2017-18, HKUST, Hong Kong Outline of Lecture Definition convex function Examples
More informationarxiv: v1 [math.ra] 8 Apr 2016
ON A DETERMINANTAL INEQUALITY ARISING FROM DIFFUSION TENSOR IMAGING MINGHUA LIN arxiv:1604.04141v1 [math.ra] 8 Apr 2016 Abstract. In comparing geodesics induced by different metrics, Audenaert formulated
More informationDO NOT OPEN THIS QUESTION BOOKLET UNTIL YOU ARE TOLD TO DO SO
QUESTION BOOKLET EECS 227A Fall 2009 Midterm Tuesday, Ocotober 20, 11:10-12:30pm DO NOT OPEN THIS QUESTION BOOKLET UNTIL YOU ARE TOLD TO DO SO You have 80 minutes to complete the midterm. The midterm consists
More informationOn the game-theoretic value of a linear transformation on a symmetric cone
On the game-theoretic value of a linear transformation ona symmetric cone p. 1/25 On the game-theoretic value of a linear transformation on a symmetric cone M. Seetharama Gowda Department of Mathematics
More informationConvex Optimization Theory. Chapter 5 Exercises and Solutions: Extended Version
Convex Optimization Theory Chapter 5 Exercises and Solutions: Extended Version Dimitri P. Bertsekas Massachusetts Institute of Technology Athena Scientific, Belmont, Massachusetts http://www.athenasc.com
More informationAcyclic Semidefinite Approximations of Quadratically Constrained Quadratic Programs
2015 American Control Conference Palmer House Hilton July 1-3, 2015. Chicago, IL, USA Acyclic Semidefinite Approximations of Quadratically Constrained Quadratic Programs Raphael Louca and Eilyan Bitar
More informationA Unified Analysis of Nonconvex Optimization Duality and Penalty Methods with General Augmenting Functions
A Unified Analysis of Nonconvex Optimization Duality and Penalty Methods with General Augmenting Functions Angelia Nedić and Asuman Ozdaglar April 16, 2006 Abstract In this paper, we study a unifying framework
More informationON GAP FUNCTIONS OF VARIATIONAL INEQUALITY IN A BANACH SPACE. Sangho Kum and Gue Myung Lee. 1. Introduction
J. Korean Math. Soc. 38 (2001), No. 3, pp. 683 695 ON GAP FUNCTIONS OF VARIATIONAL INEQUALITY IN A BANACH SPACE Sangho Kum and Gue Myung Lee Abstract. In this paper we are concerned with theoretical properties
More informationGeometric problems. Chapter Projection on a set. The distance of a point x 0 R n to a closed set C R n, in the norm, is defined as
Chapter 8 Geometric problems 8.1 Projection on a set The distance of a point x 0 R n to a closed set C R n, in the norm, is defined as dist(x 0,C) = inf{ x 0 x x C}. The infimum here is always achieved.
More informationLecture: Cone programming. Approximating the Lorentz cone.
Strong relaxations for discrete optimization problems 10/05/16 Lecture: Cone programming. Approximating the Lorentz cone. Lecturer: Yuri Faenza Scribes: Igor Malinović 1 Introduction Cone programming is
More informationThe extreme rays of the 5 5 copositive cone
The extreme rays of the copositive cone Roland Hildebrand March 8, 0 Abstract We give an explicit characterization of all extreme rays of the cone C of copositive matrices. The results are based on the
More informationA new Primal-Dual Interior-Point Algorithm for Second-Order Cone Optimization
A new Primal-Dual Interior-Point Algorithm for Second-Order Cone Optimization Y Q Bai G Q Wang C Roos November 4, 004 Department of Mathematics, College Science, Shanghai University, Shanghai, 00436 Faculty
More information4. Algebra and Duality
4-1 Algebra and Duality P. Parrilo and S. Lall, CDC 2003 2003.12.07.01 4. Algebra and Duality Example: non-convex polynomial optimization Weak duality and duality gap The dual is not intrinsic The cone
More informationSecond Order Cone Programming Relaxation of Positive Semidefinite Constraint
Research Reports on Mathematical and Computing Sciences Series B : Operations Research Department of Mathematical and Computing Sciences Tokyo Institute of Technology 2-12-1 Oh-Okayama, Meguro-ku, Tokyo
More informationSome Properties of the Augmented Lagrangian in Cone Constrained Optimization
MATHEMATICS OF OPERATIONS RESEARCH Vol. 29, No. 3, August 2004, pp. 479 491 issn 0364-765X eissn 1526-5471 04 2903 0479 informs doi 10.1287/moor.1040.0103 2004 INFORMS Some Properties of the Augmented
More informationarxiv:math/ v2 [math.oc] 25 May 2004
arxiv:math/0304104v2 [math.oc] 25 May 2004 THE LAX CONJECTURE IS TRUE A.S. Lewis, P.A. Parrilo, and M.V. Ramana Key words: hyperbolic polynomial, Lax conjecture, hyperbolicity cone, semidefinite representable
More informationOperators with numerical range in a closed halfplane
Operators with numerical range in a closed halfplane Wai-Shun Cheung 1 Department of Mathematics, University of Hong Kong, Hong Kong, P. R. China. wshun@graduate.hku.hk Chi-Kwong Li 2 Department of Mathematics,
More informationLecture 2: Convex Sets and Functions
Lecture 2: Convex Sets and Functions Hyang-Won Lee Dept. of Internet & Multimedia Eng. Konkuk University Lecture 2 Network Optimization, Fall 2015 1 / 22 Optimization Problems Optimization problems are
More informationA NEW PROXIMITY FUNCTION GENERATING THE BEST KNOWN ITERATION BOUNDS FOR BOTH LARGE-UPDATE AND SMALL-UPDATE INTERIOR-POINT METHODS
ANZIAM J. 49(007), 59 70 A NEW PROXIMITY FUNCTION GENERATING THE BEST KNOWN ITERATION BOUNDS FOR BOTH LARGE-UPDATE AND SMALL-UPDATE INTERIOR-POINT METHODS KEYVAN AMINI and ARASH HASELI (Received 6 December,
More informationConic Linear Optimization and its Dual. yyye
Conic Linear Optimization and Appl. MS&E314 Lecture Note #04 1 Conic Linear Optimization and its Dual Yinyu Ye Department of Management Science and Engineering Stanford University Stanford, CA 94305, U.S.A.
More informationOn the Iteration Complexity of Some Projection Methods for Monotone Linear Variational Inequalities
On the Iteration Complexity of Some Projection Methods for Monotone Linear Variational Inequalities Caihua Chen Xiaoling Fu Bingsheng He Xiaoming Yuan January 13, 2015 Abstract. Projection type methods
More informationNorm inequalities related to the matrix geometric mean
isid/ms/2012/07 April 20, 2012 http://www.isid.ac.in/ statmath/eprints Norm inequalities related to the matrix geometric mean RAJENDRA BHATIA PRIYANKA GROVER Indian Statistical Institute, Delhi Centre
More information