Approximation Algorithms for Homogeneous Polynomial Optimization with Quadratic Constraints


Simai HE, Zhening LI, Shuzhong ZHANG

July 19, 2010

Abstract

In this paper, we consider approximation algorithms for optimizing a generic multi-variate homogeneous polynomial function, subject to homogeneous quadratic constraints. Such optimization models have wide applications, e.g., in signal processing, magnetic resonance imaging (MRI), data training, approximation theory, and portfolio selection. Since polynomial functions are non-convex in general, the problems under consideration are all NP-hard. In this paper we shall focus on polynomial-time approximation algorithms. In particular, we first study optimization of a multi-linear tensor function over the Cartesian product of spheres. We shall propose approximation algorithms for such problems and derive worst-case performance ratios, which are shown to depend only on the dimensions of the models. The methods are then extended to optimize a generic multi-variate homogeneous polynomial function with a spherical constraint. Likewise, approximation algorithms are proposed with provable relative approximation performance ratios. Furthermore, the constraint set is relaxed to be an intersection of co-centered ellipsoids. In particular, we consider maximization of a homogeneous polynomial over the intersection of ellipsoids centered at the origin, and propose polynomial-time approximation algorithms with provable worst-case performance ratios. Numerical results are reported, illustrating the effectiveness of the approximation algorithms studied.

Keywords: multi-linear tensor form; polynomial function optimization; approximation algorithm.

Mathematics Subject Classification: 15A69, 90C26, 90C59.

Simai He: Department of Management Sciences, City University of Hong Kong, Kowloon Tong, Hong Kong. simaihe@cityu.edu.hk
Zhening Li: Department of Systems Engineering and Engineering Management, The Chinese University of Hong Kong, Shatin, Hong Kong. zheningli@cuhk.edu.hk
Shuzhong Zhang: Department of Systems Engineering and Engineering Management, The Chinese University of Hong Kong, Shatin, Hong Kong. zhang@se.cuhk.edu.hk. Research supported by Hong Kong RGC Grant CUHK

1 Introduction

Maximizing (or minimizing) a polynomial function, subject to some suitable polynomial constraints, is a fundamental model in optimization. As such, it is widely used in practice; to name a few examples: signal processing [5, 40], speech recognition [7], biomedical engineering [5, ], material science [43], investment science [1, 36, 1, 4, 33, 6], quantum mechanics [8, 3], and numerical linear algebra [38, 39, 3]. It is basically impossible to list, even very partially, the success stories of polynomial optimization, simply due to its sheer size in the literature. To motivate our study, below we shall nonetheless mention a few sample applications to illustrate the usefulness of polynomial optimization.

Polynomial optimization has immediate applications in signal and image processing, e.g., Magnetic Resonance Imaging (MRI). As an example, Ghosh et al. [5] formulated a fiber detection problem in Diffusion MRI by maximizing a homogeneous polynomial function, subject to a spherical constraint. In this particular case, the order of the polynomial may be high, and the problem is non-convex. Barmpoutis et al. [] presented a case for the 4th order tensor approximation in Diffusion Weighted MRI. In statistics, Micchelli and Olsen [7] considered a maximum-likelihood estimation model in speech recognition. In Maricic et al. [5], a quartic polynomial model was proposed for blind channel equalization in digital communication, and in Qi and Teo [40], a study of global optimization was conducted for high order polynomial minimization models arising from signal processing.

Polynomial functions also have wide applications in material sciences. As an example, Soare, Yoon, and Cazacu [43] proposed some 4th, 6th and 8th order homogeneous polynomials to model the plastic anisotropy of orthotropic sheet metal. In quantum physics, for example, Dahl et al. [3] proposed a polynomial optimization model to verify whether a physical system is entangled or not, which is an important problem in quantum physics. Gurvits [8] showed that the entanglement verification is NP-hard in general. In fact, the model discussed in [3] is related to the nonnegative quadratic mappings studied by Luo, Sturm and Zhang [].

Homogeneous polynomials, which we shall focus on in this paper, play an important role in approximation theory; see e.g. two recent papers by Kroó and Szabados [16] and Varjú [46]. Essentially their results state that the homogeneous polynomial functions are fairly dense among continuous functions in a certain well-defined sense. One interesting application of homogeneous polynomial optimization is related to the so-called eigenvalues of tensors; see Qi [38, 39], and Ni et al. [3].

Investment models involving more than the first two moments (for instance, to include the skewness and the kurtosis of the investment returns) have been another source of inspiration underlying polynomial optimization. Mandelbrot and Hudson [4] made a strong case against a normal view of the investment returns, making the use of higher moments in portfolio selection quite necessary. Along that line, several authors proposed investment models incorporating the higher moments; e.g., De Athayde and Flôre [1], Prakash, Chang and Pactwa [36], and Jondeau and Rockinger [1]. Moreover, Parpas and Rustem [33] and Maringer and Parpas [6] proposed diffusion-based methods to solve the non-convex polynomial optimization models arising

from portfolio selection involving higher moments.

On the front of solution methods, the search for general and efficient algorithms for polynomial optimization has been a priority for many mathematical programmers. Indeed, generic solution methods based on nonlinear programming and global optimization have been studied and tested; see e.g. Qi [37], and Qi et al. [41] and the references therein. An entirely different (and systematic) approach based on the so-called Sum of Squares (SOS) was proposed by Lasserre [17, 18], and Parrilo [34, 35]. The SOS approach has a strong theoretical appeal, since it can in principle solve any general polynomial optimization model to any given accuracy, by resorting to a (possibly large) Semidefinite Program (SDP). For univariate polynomial optimization, Nesterov [30] showed that the SOS approach in combination with the SDP solution has a polynomial-time complexity. In general, however, the SDP problems required to be solved by the SOS approach may grow very large. At any rate, thanks to the recently developed efficient SDP solvers (cf. e.g. SeDuMi of Jos Sturm [44], SDPT3 of Toh et al. [45], and SDPA of Fujisawa et al. [4]), the SOS approach appears to be attractive. Henrion and Lasserre [10] developed a specialized tool known as GloptiPoly (the latest version, GloptiPoly 3, can be found in Henrion et al. [11]) for finding a global optimal solution of a polynomial function based on the SOS approach. For an overview of the recent theoretical developments, we refer to the excellent survey by Laurent [19].

In most cases, polynomial optimization is NP-hard, even for very special ones, such as maximizing a cubic polynomial over a sphere (cf. Nesterov [31]). The reader is referred to De Klerk [13] for a survey on the computational complexity issues of polynomial optimization over some simple constraint sets. In case the constraint set is a simplex, and the polynomial has a fixed degree, it is possible to derive Polynomial-Time Approximation Schemes (PTAS); see De Klerk et al. [14], albeit the result is viewed mostly as a theoretical one. In almost all practical situations, the problem is difficult to solve, theoretically as well as numerically.

The intractability of general polynomial optimization therefore motivates the search for approximate solutions. Luo and Zhang [3] proposed an approximation algorithm for optimizing a homogeneous quartic polynomial under ellipsoidal constraints. That approach is similar, in its spirit, to the seminal SDP relaxation and randomization method of Goemans and Williamson [6], although the objective function in [6] is quadratic. Note that the approach in [6] has been generalized subsequently by many authors, including Nesterov [9], Ye [47, 48], Nemirovski et al. [8], Zhang [49], Zhang and Huang [50], Luo et al. [1], and He et al. [9]. All these works deal with quadratic objective functions. Luo and Zhang [3] considered quartic optimization, and showed that optimizing a quartic polynomial over the intersection of some co-centered ellipsoids is essentially equivalent to its (quadratic) SDP relaxation problem, which is itself also NP-hard; however, this gives a handle on the design of approximation algorithms with provable worst-case approximation ratios. Ling et al. [20] considered a special quartic optimization model: basically, the problem is to minimize a biquadratic function over two spherical constraints. In [20], approximate solutions as well as exact solutions using the SOS approach are considered. The approximation bounds in [20] are

indeed comparable to the bound in [3], although they are dealing with two different models. The current paper is concerned with general homogeneous polynomial optimization models, and we shall focus on approximate solutions. Our goal is to present a rather general scheme which will enable us to obtain approximate solutions with guaranteed worst-case performance ratios. To present the results, we shall start in the next section with some technical preparations.

2 Models, Notations, and the Organization of the Paper

Consider the following multi-linear function

$F(x^1, x^2, \ldots, x^d) = \sum_{1 \le i_1 \le n_1,\, 1 \le i_2 \le n_2,\, \ldots,\, 1 \le i_d \le n_d} a_{i_1 i_2 \cdots i_d}\, x^1_{i_1} x^2_{i_2} \cdots x^d_{i_d},$

where $x^k \in \mathbb{R}^{n_k}$, $k = 1, 2, \ldots, d$. In the shorthand notation we shall denote $M = (a_{i_1 i_2 \cdots i_d}) \in \mathbb{R}^{n_1 \times n_2 \times \cdots \times n_d}$ to be a $d$-th order tensor. Closely related to the tensor form $M$ is a general homogeneous polynomial function $f(x)$ of degree $d$, where $x \in \mathbb{R}^n$. We call the tensor form $M$ super-symmetric (see [15]) if $a_{i_1 i_2 \cdots i_d}$ is invariant under all permutations of $(i_1, i_2, \ldots, i_d)$. As any homogeneous quadratic function uniquely determines a symmetric matrix, a given homogeneous polynomial function $f(x)$ of degree $d$ also uniquely determines a super-symmetric tensor form. In particular, suppose that

$f(x) = \sum_{1 \le i_1 \le i_2 \le \cdots \le i_d \le n} b_{i_1 i_2 \cdots i_d}\, x_{i_1} x_{i_2} \cdots x_{i_d}.$

Let the super-symmetric tensor form be $M = (a_{i_1 i_2 \cdots i_d}) \in \mathbb{R}^{n^d}$, with $a_{i_1 i_2 \cdots i_d} \equiv b_{i_1 i_2 \cdots i_d} / |P(i_1, i_2, \ldots, i_d)|$, where $|P(i_1, i_2, \ldots, i_d)|$ is the number of distinct permutations of the indices $\{i_1, i_2, \ldots, i_d\}$. Let $F$ be the multi-linear function defined by the super-symmetric tensor $M$. Then $f(x) = F(\underbrace{x, x, \ldots, x}_{d})$, and this super-symmetric tensor representation is indeed unique. The Frobenius norm of the tensor form $M$ is naturally defined as

$\|M\| := \sqrt{\sum_{1 \le i_1 \le n_1,\, 1 \le i_2 \le n_2,\, \ldots,\, 1 \le i_d \le n_d} a_{i_1 i_2 \cdots i_d}^2}.$

Throughout this paper, we shall denote $F$ to be a multi-linear function defined by a tensor form, and $f$ to be a homogeneous polynomial function; without loss of generality we assume that $n_1 \le n_2 \le \cdots \le n_d$.
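As a concrete illustration of the above correspondence, the identity $f(x) = F(x, x, \ldots, x)$ can be checked numerically. Below is a minimal numpy sketch (the helper names `multilinear` and `symmetrize` are ours, not from the paper): a random fourth order tensor is symmetrized by averaging over all index permutations, and the resulting multi-linear form is then evaluated at repeated arguments.

```python
import itertools
import math
import numpy as np

def multilinear(M, *xs):
    """Evaluate F(x^1, ..., x^d) by contracting the tensor M with one vector per mode."""
    T = M
    for x in xs:
        T = np.tensordot(T, x, axes=([0], [0]))   # contract the first remaining mode
    return float(T)

def symmetrize(M):
    """Average M over all index permutations to obtain its super-symmetric part."""
    d = M.ndim
    perms = itertools.permutations(range(d))
    return sum(np.transpose(M, p) for p in perms) / math.factorial(d)

rng = np.random.default_rng(0)
M = symmetrize(rng.standard_normal((5, 5, 5, 5)))   # super-symmetric, d = 4, n = 5
x = rng.standard_normal(5)
# f(x) = F(x, x, x, x): compare the mode-by-mode contraction with a direct einsum
assert np.isclose(multilinear(M, x, x, x, x),
                  np.einsum('ijkl,i,j,k,l->', M, x, x, x, x))
```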

In this paper we shall study optimization of a generic polynomial function, subject to two types of constraints: (A) (Euclidean) spherical constraints; and (B) general ellipsoidal constraints. To be specific, we consider the following models:

$(A^1_{\max})\quad \max\ F(x^1, x^2, \ldots, x^d)\quad \text{s.t.}\ \|x^k\| = 1,\ x^k \in \mathbb{R}^{n_k},\ k = 1, 2, \ldots, d;$

$(A^2_{\max})\quad \max\ f(x) = F(\underbrace{x, x, \ldots, x}_{d})\quad \text{s.t.}\ \|x\| = 1,\ x \in \mathbb{R}^n;$

$(B^1_{\max})\quad \max\ F(x^1, x^2, \ldots, x^d)\quad \text{s.t.}\ (x^k)^T Q^k_{i_k} x^k \le 1,\ k = 1, \ldots, d,\ i_k = 1, \ldots, m_k,\ x^k \in \mathbb{R}^{n_k},\ k = 1, \ldots, d;$

$(B^2_{\max})\quad \max\ f(x) = F(\underbrace{x, x, \ldots, x}_{d})\quad \text{s.t.}\ x^T Q_i x \le 1,\ i = 1, \ldots, m,\ x \in \mathbb{R}^n.$

The models and results of type (A) are presented in Section 3; the models and results of type (B) are presented in Section 4. To put the matters in perspective, the following table summarizes the organization of the paper and the approximation results:

Subsection | Model | Approximation performance ratio
3.1 | $(A^1_{\max})$ | $(n_1 n_2 \cdots n_{d-2})^{-\frac{1}{2}}$
3.2 | $(A^2_{\max})$ | $d!\, d^{-d}\, n^{-\frac{d-2}{2}}$
4.1 | $(B^1_{\max})$ | $\Omega\left(\left(\sqrt{n_1 n_2 \cdots n_{d-2}}\ \log^{d-1} \max_{1 \le k \le d} m_k\right)^{-1}\right)$
4.2 | $(B^2_{\max})$ | $\Omega\left(d!\, d^{-d}\, n^{-\frac{d-2}{2}} \log^{-(d-1)} m\right)$

As a convention, the notation $\Omega(\lambda)$ should be read as: at least in the order of $\lambda$. Since the above table is concerned with approximation ratios, we understand the constant hidden in $\Omega(\cdot)$ as a universal constant in the interval $(0, 1]$. In case $d = 2$, $(B^2_{\max})$ is precisely the same QCQP problem considered by Nemirovski et al. [8], and our approximation ratio reduces to that of [8]. For $d > 2$, there are unfortunately not many results in the literature on approximation algorithms for optimizing higher degree (larger than 2) polynomial functions with quadratic constraints. Among the existing ones, the most noticeable recent papers include Ling et al. [20], and Luo and Zhang [3]. Both papers consider optimization of a quartic polynomial function subject to one or two quadratic constraints, and (quadratic) semidefinite

programming relaxation is proposed and analyzed in proving the approximation performance ratios. The relative ratios in [20] and [3] are in the order of $\Omega(1/n^2)$. The algorithms in the current paper solve (approximately) general homogeneous polynomials of degree $d$, with an arbitrary number of constraints. If $d = 4$ and there is only one quadratic constraint, our relative approximation ratio is $\Omega(1/n)$, which is better than the results in [20] and [3]. Very recently, in a working paper Zhang et al. [51] study the cubic spherical optimization problems, which is a special case of our model $(A^1_{\max})$ with $d = 3$. Their approximation ratio is $\Omega(1/\sqrt{n})$, which is the same as ours, when specialized to the case $d = 3$.

3 Polynomial Optimization with Spherical Constraints

3.1 Multi-linear Function Optimization with Spherical Constraints

Let us first consider the problem

$(A^1_{\max})\quad \max\ F(x^1, x^2, \ldots, x^d)\quad \text{s.t.}\ \|x^k\| = 1,\ x^k \in \mathbb{R}^{n_k},\ k = 1, 2, \ldots, d,$

where $n_1 \le n_2 \le \cdots \le n_d$. Suppose that $M$ is the tensor form associated with the multi-linear function $F$. It is clear that the optimal value of the above problem, $v(A^1_{\max})$, is positive, unless $M$ is a zero tensor. A special case of Problem $(A^1_{\max})$ is worth noting, and we shall come back to this point later.

Proposition 3.1 If $d = 2$, then Problem $(A^1_{\max})$ can be solved in polynomial time, with $v(A^1_{\max}) \ge \|M\| / \sqrt{n_1}$.

Proof. The problem is essentially $\max_{\|x\| = \|y\| = 1} x^T M y$. For any fixed $y$, the corresponding optimal $x$ must be $My / \|My\|$ due to the Cauchy-Schwarz inequality, and accordingly,

$x^T M y = \left(\frac{My}{\|My\|}\right)^T My = \|My\| = \sqrt{y^T M^T M y}.$

Thus the problem is equivalent to $\max_{\|y\| = 1} y^T M^T M y$, whose solution is the largest eigenvalue and a corresponding eigenvector of the positive semidefinite matrix $M^T M$. Denote $\lambda_{\max}(M^T M)$ to be the largest eigenvalue of $M^T M$, and we have

$\lambda_{\max}(M^T M) \ge \operatorname{tr}(M^T M) / \operatorname{rank}(M^T M) \ge \|M\|^2 / n_1,$

which implies $v(A^1_{\max}) = \sqrt{\lambda_{\max}(M^T M)} \ge \|M\| / \sqrt{n_1}$. $\square$
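The proof of Proposition 3.1 is constructive; the computation below (a numpy sketch, with a helper name of our own choosing) extracts the top singular pair of $M$, which is exactly the eigen-decomposition argument above, and it reappears later as Decomposition Routine 2.

```python
import numpy as np

def solve_bilinear_sphere(M):
    """Solve max x^T M y s.t. ||x|| = ||y|| = 1 via the top singular pair of M;
    equivalent to the top eigenvector of M^T M as in Proposition 3.1."""
    U, s, Vt = np.linalg.svd(M)
    return U[:, 0], Vt[0, :], s[0]      # optimal value is the largest singular value

rng = np.random.default_rng(1)
n1, n2 = 5, 8
M = rng.standard_normal((n1, n2))
x, y, val = solve_bilinear_sphere(M)
assert np.isclose(x @ M @ y, val)
assert val >= np.linalg.norm(M) / np.sqrt(n1) - 1e-9   # the bound of Proposition 3.1
```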

However, for any $d \ge 3$, Problem $(A^1_{\max})$ becomes NP-hard.

Proposition 3.2 If $d = 3$, then Problem $(A^1_{\max})$ is NP-hard.

Proof. We first quote a result of Nesterov [31], which states that

$\max\ \sum_{k=1}^m (x^T A_k x)^2\quad \text{s.t.}\ \|x\| = 1,\ x \in \mathbb{R}^n$

is NP-hard (here each $A_k$ is symmetric). Now, in a special case $d = 3$ and $n_1 = n_2 = n_3 = n$, the objective function of Problem $(A^1_{\max})$ can be written as

$F(x, y, z) = \sum_{i,j,k=1}^n a_{ijk}\, x_i y_j z_k = \sum_{k=1}^n z_k \left(\sum_{i,j=1}^n a_{ijk}\, x_i y_j\right) = \sum_{k=1}^n z_k\, (x^T A_k y),$

where the matrix $A_k \in \mathbb{R}^{n \times n}$ has its $(i,j)$-th entry equal to $a_{ijk}$, for $k = 1, 2, \ldots, n$. By the Cauchy-Schwarz inequality (optimizing out $z$), Problem $(A^1_{\max})$ is equivalent to

$\max\ \sum_{k=1}^n (x^T A_k y)^2\quad \text{s.t.}\ \|x\| = \|y\| = 1,\ x, y \in \mathbb{R}^n.$

We need only to show that the optimal value of the above problem is always attainable at $x = y$. To see why, denote $(\bar{x}, \bar{y})$ to be any optimal solution pair, with optimal value $v > 0$. If $\bar{x} = \pm\bar{y}$, then the claim is true; otherwise, we may suppose (flipping the sign of $\bar{y}$ if necessary) that $\bar{x} + \bar{y} \ne 0$. Let us denote $w := (\bar{x} + \bar{y}) / \|\bar{x} + \bar{y}\|$. Since $(\bar{x}, \bar{y})$ must be a KKT point, there exist $(\lambda, \mu)$ such that

$\sum_{k=1}^n (\bar{x}^T A_k \bar{y})\, A_k \bar{y} = \lambda \bar{x}, \qquad \sum_{k=1}^n (\bar{x}^T A_k \bar{y})\, A_k \bar{x} = \mu \bar{y}.$

Pre-multiplying $\bar{x}^T$ to the first equation and $\bar{y}^T$ to the second equation yields $\lambda = \mu = v$. Summing up the two equations, pre-multiplying $w^T$, and then scaling, lead us to

$\sum_{k=1}^n (\bar{x}^T A_k \bar{y})\, (w^T A_k w) = v.$

By applying the Cauchy-Schwarz inequality to the above equality, we have

$v \le \left(\sum_{k=1}^n (\bar{x}^T A_k \bar{y})^2\right)^{1/2} \left(\sum_{k=1}^n (w^T A_k w)^2\right)^{1/2} = \sqrt{v}\,\left(\sum_{k=1}^n (w^T A_k w)^2\right)^{1/2},$

which implies that $(w, w)$ is also an optimal solution. The problem is then reduced to Nesterov's quartic model, and its NP-hardness thus follows. $\square$

We remark that the above hardness result is also shown independently in [51]. In the remainder of this subsection, we shall focus on approximation algorithms for the general Problem $(A^1_{\max})$. To get the

main idea of the algorithms, let us first work with the case $d = 3$, i.e.,

$(\bar{A}^1_{\max})\quad \max\ F(x, y, z) = \sum_{1 \le i \le n_1,\, 1 \le j \le n_2,\, 1 \le k \le n_3} a_{ijk}\, x_i y_j z_k\quad \text{s.t.}\ \|x\| = \|y\| = \|z\| = 1,\ x \in \mathbb{R}^{n_1},\ y \in \mathbb{R}^{n_2},\ z \in \mathbb{R}^{n_3}.$

Denote $W = x y^T$, and we have

$\|W\|^2 = \operatorname{tr}(W W^T) = \operatorname{tr}(x y^T y x^T) = (x^T x)(y^T y) = \|x\|^2 \|y\|^2 = 1.$

Problem $(\bar{A}^1_{\max})$ can now be relaxed to

$\max\ F(W, z) = \sum_{1 \le i \le n_1,\, 1 \le j \le n_2,\, 1 \le k \le n_3} a_{ijk}\, W_{ij} z_k\quad \text{s.t.}\ \|W\| = \|z\| = 1,\ W \in \mathbb{R}^{n_1 \times n_2},\ z \in \mathbb{R}^{n_3}.$

Notice that the above problem is exactly Problem $(A^1_{\max})$ with $d = 2$, which can be solved in polynomial time by Proposition 3.1. Denote its optimal solution to be $(\hat{W}, \hat{z})$. Clearly $F(\hat{W}, \hat{z}) \ge v(\bar{A}^1_{\max})$. The key step is to recover the solution $(\hat{x}, \hat{y})$ from the matrix $\hat{W}$. Below we shall introduce two basic decomposition routines: one is based on randomization and the other on eigen-decomposition. They play fundamental roles in our proposed algorithms; all solution methods to be developed later rely on these two routines as a basis.

DR (Decomposition Routine) 1
Input: matrices $M, W \in \mathbb{R}^{n_1 \times n_2}$ with $\|W\| = 1$.
1. Construct $\tilde{W} = \begin{pmatrix} I_{n_1} & W \\ W^T & W^T W \end{pmatrix} \succeq 0$.
2. Randomly generate $(\xi; \eta) \sim \mathcal{N}(0_{n_1 + n_2}, \tilde{W})$, and repeat if necessary, until $\xi^T M \eta \ge M \bullet W$ and $\|\xi\|\,\|\eta\| \le O(\sqrt{n_1})$.
Output: $(x, y) = (\xi / \|\xi\|,\ \eta / \|\eta\|)$.
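A direct transcription of DR 1 might look as follows (a numpy sketch; the constant $\theta \approx 0.03$ and the threshold $t = \sqrt{2(n_1+2)/\theta}$ are taken from the analysis in the next paragraphs, under which each trial succeeds with constant probability).

```python
import numpy as np

def dr1(M, W, rng, theta=0.03, max_tries=1000):
    """DR 1: randomized recovery of unit vectors (x, y) from W (||W||_F = 1)
    such that x^T M y >= (M . W) / t, following the routine stated above."""
    n1, n2 = W.shape
    cov = np.block([[np.eye(n1), W], [W.T, W.T @ W]])   # the PSD matrix W~
    target = float(np.sum(M * W))                        # M . W
    t = np.sqrt(2.0 * (n1 + 2) / theta)
    for _ in range(max_tries):
        v = rng.multivariate_normal(np.zeros(n1 + n2), cov)
        xi, eta = v[:n1], v[n1:]
        if xi @ M @ eta >= target and np.linalg.norm(xi) * np.linalg.norm(eta) <= t:
            return xi / np.linalg.norm(xi), eta / np.linalg.norm(eta)
    raise RuntimeError("each trial succeeds with probability >= theta/2; raise max_tries")
```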

Now, let $M = F(\cdot, \cdot, \hat{z})$ and $W = \hat{W}$ in applying the above decomposition routine. For the randomly generated $(\xi, \eta)$, we have

$\mathrm{E}[F(\xi, \eta, \hat{z})] = \mathrm{E}[\xi^T M \eta] = M \bullet \hat{W} = F(\hat{W}, \hat{z}).$

He et al. [9] establish that if $f(x)$ is a homogeneous quadratic function and $x$ is drawn from a zero-mean multivariate normal distribution, then there is a universal constant $\theta \ge 0.03$ such that $\operatorname{Prob}\{f(x) \ge \mathrm{E}[f(x)]\} \ge \theta$. Since $\xi^T M \eta$ is a homogeneous quadratic function of the normal random vector $(\xi^T, \eta^T)^T$, we know

$\operatorname{Prob}\{\xi^T M \eta \ge M \bullet \hat{W}\} = \operatorname{Prob}\{F(\xi, \eta, \hat{z}) \ge \mathrm{E}[F(\xi, \eta, \hat{z})]\} \ge \theta.\qquad (1)$

Moreover, by using a property of normal random vectors (see Lemma 3.1 of [3]) we have

$\mathrm{E}\left[\|\xi\|^2 \|\eta\|^2\right] = \mathrm{E}\left[\sum_{i=1}^{n_1} \xi_i^2 \sum_{j=1}^{n_2} \eta_j^2\right] = \sum_{i=1}^{n_1} \sum_{j=1}^{n_2} \left(\mathrm{E}[\xi_i^2]\,\mathrm{E}[\eta_j^2] + 2\,\mathrm{E}[\xi_i \eta_j]^2\right) = \sum_{j=1}^{n_2} \left(n_1 (\hat{W}^T \hat{W})_{jj} + 2 \sum_{i=1}^{n_1} \hat{W}_{ij}^2\right) = (n_1 + 2)\operatorname{tr}(\hat{W}^T \hat{W}) = n_1 + 2.$

By applying the Markov inequality, it follows that

$\operatorname{Prob}\{\|\xi\|\,\|\eta\| \ge t\} \le \mathrm{E}\left[\|\xi\|^2 \|\eta\|^2\right] / t^2 = (n_1 + 2)/t^2\qquad (2)$

for any $t > 0$. Therefore, by the so-called union inequality for the probability of joint events, we have

$\operatorname{Prob}\left\{F(\xi, \eta, \hat{z}) \ge F(\hat{W}, \hat{z}),\ \|\xi\|\,\|\eta\| \le t\right\} \ge 1 - \operatorname{Prob}\left\{F(\xi, \eta, \hat{z}) < F(\hat{W}, \hat{z})\right\} - \operatorname{Prob}\left\{\|\xi\|\,\|\eta\| > t\right\} \ge 1 - (1 - \theta) - (n_1 + 2)/t^2 = \theta/2,$

where we let $t = \sqrt{2(n_1 + 2)/\theta}$. Thus,

$F(x, y, \hat{z}) \ge F(\hat{W}, \hat{z}) / t \ge \sqrt{\frac{\theta}{2(n_1 + 2)}}\; v(\bar{A}^1_{\max}),$

and we obtain an $\Omega(1/\sqrt{n_1})$ approximation ratio. Below we shall present an alternative (and deterministic) decomposition routine.

DR (Decomposition Routine) 2
Input: matrix $M \in \mathbb{R}^{n_1 \times n_2}$.
1. Find an eigenvector $\hat{y}$ corresponding to the largest eigenvalue of the matrix $M^T M$.

2. Compute $\hat{x} = M \hat{y}$.
Output: $(x, y) = (\hat{x} / \|\hat{x}\|,\ \hat{y} / \|\hat{y}\|)$.

This decomposition routine literally follows the proof of Proposition 3.1, which tells us that $x^T M y \ge \|M\| / \sqrt{n_1}$. Thus we have

$\sqrt{n_1}\, F(x, y, \hat{z}) = \sqrt{n_1}\, x^T M y \ge \|M\| = \max_{\|Z\| = 1} M \bullet Z \ge M \bullet \hat{W} = F(\hat{W}, \hat{z}) \ge v(\bar{A}^1_{\max}).$

The complexity is $O(n_1^2 n_2)$ (with high probability) for DR 1, and is $O(\max\{n_1^3, n_1 n_2^2\})$ for DR 2. However, DR 2 is indeed very easy to implement, and is deterministic. Both DR 1 and DR 2 lead to the following approximation result in terms of the order of the approximation ratio.

Theorem 3.3 If $d = 3$, then Problem $(A^1_{\max})$ admits a polynomial-time approximation algorithm with approximation ratio $1/\sqrt{n_1}$.

Now we proceed to the case for general $d$. Let $X = x^1 (x^d)^T$, and Problem $(A^1_{\max})$ can be relaxed to

$(\tilde{A}^1_{\max})\quad \max\ F(X, x^2, x^3, \ldots, x^{d-1})\quad \text{s.t.}\ \|x^k\| = 1,\ x^k \in \mathbb{R}^{n_k},\ k = 2, 3, \ldots, d-1,\ \|X\| = 1,\ X \in \mathbb{R}^{n_1 \times n_d}.$

Clearly it is a type of Problem $(A^1_{\max})$ with degree $d - 1$. Suppose Problem $(\tilde{A}^1_{\max})$ can be solved approximately in polynomial time with approximation ratio $\tau$, i.e., we find $(\hat{X}, \hat{x}^2, \hat{x}^3, \ldots, \hat{x}^{d-1})$ with

$F(\hat{X}, \hat{x}^2, \hat{x}^3, \ldots, \hat{x}^{d-1}) \ge \tau\, v(\tilde{A}^1_{\max}) \ge \tau\, v(A^1_{\max}).$

Observing that $F(\cdot, \hat{x}^2, \hat{x}^3, \ldots, \hat{x}^{d-1}, \cdot)$ is an $n_1 \times n_d$ matrix, using DR 2 we shall find $(\hat{x}^1, \hat{x}^d)$ such that

$F(\hat{x}^1, \hat{x}^2, \ldots, \hat{x}^{d-1}, \hat{x}^d) \ge F(\hat{X}, \hat{x}^2, \hat{x}^3, \ldots, \hat{x}^{d-1}) / \sqrt{n_1} \ge (\tau / \sqrt{n_1})\, v(A^1_{\max}).$

By induction this leads to the following:

Theorem 3.4 Problem $(A^1_{\max})$ admits a polynomial-time approximation algorithm with approximation ratio $\tau_{A^1}$, where $\tau_{A^1} := (n_1 n_2 \cdots n_{d-2})^{-\frac{1}{2}}$.

We summarize the above-described recursive procedure to solve Problem $(A^1_{\max})$ as Algorithm 1 below. Remark that the approximation performance ratio of this algorithm is tight: in the special case $F(x^1, x^2, \ldots, x^d) = \sum_{i=1}^n x^1_i x^2_i \cdots x^d_i$, the algorithm can be made to return a solution with approximation ratio being exactly $\tau_{A^1}$.

Algorithm 1
Input: $d$-th order tensor $M^d \in \mathbb{R}^{n_1 \times n_2 \times \cdots \times n_d}$ with $n_1 \le n_2 \le \cdots \le n_d$.
1. Rewrite $M^d$ as a $(d-1)$-th order tensor $M^{d-1}$ by combining its first and last modes into one, and place the combined mode in the last position of $M^{d-1}$, i.e.,
$M^d_{i_1, i_2, \ldots, i_d} = M^{d-1}_{i_2, i_3, \ldots, i_{d-1},\, (i_1 - 1) n_d + i_d},\quad 1 \le i_1 \le n_1,\ 1 \le i_2 \le n_2,\ \ldots,\ 1 \le i_d \le n_d.$
2. For Problem $(A^1_{\max})$ with the $(d-1)$-th order tensor $M^{d-1}$: if $d - 1 = 2$, then use DR 2, with input $M = M^{d-1}$ and output $(\hat{x}^2, \hat{x}^{1,d}) = (x, y)$; otherwise obtain a solution $(\hat{x}^2, \hat{x}^3, \ldots, \hat{x}^{d-1}, \hat{x}^{1,d})$ by recursion.
3. Compute the matrix $M' = F(\cdot, \hat{x}^2, \hat{x}^3, \ldots, \hat{x}^{d-1}, \cdot)$ and rewrite the vector $\hat{x}^{1,d}$ as a matrix $X \in \mathbb{R}^{n_1 \times n_d}$.
4. Apply either DR 1 or DR 2, with input $(M, W) = (M', X)$ and output $(\hat{x}^1, \hat{x}^d) = (x, y)$.
Output: a feasible solution $(\hat{x}^1, \hat{x}^2, \ldots, \hat{x}^d)$.

3.2 Homogeneous Polynomial Optimization with Spherical Constraint

Suppose that $f(x)$ is a homogeneous polynomial function of degree $d$, and consider the problem

$(A^2_{\max})\quad \max\ f(x)\quad \text{s.t.}\ \|x\| = 1,\ x \in \mathbb{R}^n.$

Let $F$ be the multi-linear super-symmetric tensor function satisfying $F(\underbrace{x, x, \ldots, x}_{d}) = f(x)$. Then the above polynomial optimization problem can be relaxed to multi-linear function optimization, as follows:

$(\bar{A}^2_{\max})\quad \max\ F(x^1, x^2, \ldots, x^d)\quad \text{s.t.}\ \|x^k\| = 1,\ x^k \in \mathbb{R}^n,\ k = 1, 2, \ldots, d.$

Theorem 3.4 asserts that Problem $(\bar{A}^2_{\max})$ can be solved approximately with an approximation ratio $n^{-\frac{d-2}{2}}$. To establish a link between $(A^2_{\max})$ and $(\bar{A}^2_{\max})$, we note the following relationship:

Lemma 3.5 Suppose that $x^1, x^2, \ldots, x^d \in \mathbb{R}^n$, and $\xi_1, \xi_2, \ldots, \xi_d$ are i.i.d. random variables, each taking values $1$ and $-1$ with equal probability $1/2$. For any super-symmetric multi-linear function $F$ of order $d$ and function $f(x) = F(\underbrace{x, x, \ldots, x}_{d})$, it holds that

$\mathrm{E}\left[\prod_{i=1}^d \xi_i\, f\!\left(\sum_{k=1}^d \xi_k x^k\right)\right] = d!\, F(x^1, x^2, \ldots, x^d).$

Proof. First we observe that

$\mathrm{E}\left[\prod_{i=1}^d \xi_i\, f\!\left(\sum_{k=1}^d \xi_k x^k\right)\right] = \mathrm{E}\left[\prod_{i=1}^d \xi_i \sum_{1 \le k_1, k_2, \ldots, k_d \le d} F(\xi_{k_1} x^{k_1}, \xi_{k_2} x^{k_2}, \ldots, \xi_{k_d} x^{k_d})\right] = \sum_{1 \le k_1, k_2, \ldots, k_d \le d} \mathrm{E}\left[\prod_{i=1}^d \xi_i \prod_{j=1}^d \xi_{k_j}\right] F(x^{k_1}, x^{k_2}, \ldots, x^{k_d}).$

If $(k_1, k_2, \ldots, k_d)$ is a permutation of $(1, 2, \ldots, d)$, then

$\mathrm{E}\left[\prod_{i=1}^d \xi_i \prod_{j=1}^d \xi_{k_j}\right] = \mathrm{E}\left[\prod_{i=1}^d \xi_i^2\right] = 1;$

otherwise, there must be an index $k_0$ with $1 \le k_0 \le d$ and $k_0 \ne k_j$ for all $j = 1, 2, \ldots, d$. In the latter case,

$\mathrm{E}\left[\prod_{i=1}^d \xi_i \prod_{j=1}^d \xi_{k_j}\right] = \mathrm{E}[\xi_{k_0}]\ \mathrm{E}\left[\prod_{1 \le i \le d,\, i \ne k_0} \xi_i \prod_{j=1}^d \xi_{k_j}\right] = 0.$

Since the number of different permutations of $(1, 2, \ldots, d)$ is $d!$, by taking into account the super-symmetric property of $F$, the claimed relation follows. $\square$

When $d$ is odd, the identity in Lemma 3.5 can be rewritten as

$d!\, F(x^1, x^2, \ldots, x^d) = \mathrm{E}\left[\prod_{i=1}^d \xi_i\, f\!\left(\sum_{k=1}^d \xi_k x^k\right)\right] = \mathrm{E}\left[f\!\left(\prod_{i=1}^d \xi_i \sum_{k=1}^d \xi_k x^k\right)\right] = \mathrm{E}\left[f\!\left(\sum_{k=1}^d \prod_{i \ne k} \xi_i\, x^k\right)\right].$

Since $\xi_1, \xi_2, \ldots, \xi_d$ are i.i.d. random variables taking values $1$ or $-1$, by randomization we may find a particular binary vector $\beta = (\beta_1, \beta_2, \ldots, \beta_d)$, with $\beta_i \in \{1, -1\}$ for $i = 1, 2, \ldots, d$, such that

$f\!\left(\sum_{k=1}^d \prod_{i \ne k} \beta_i\, x^k\right) \ge d!\, F(x^1, x^2, \ldots, x^d).\qquad (3)$

(Remark that $d$ is considered a constant parameter in this paper. Therefore, searching over all $2^d$ combinations can be done, in principle, in constant time.) Let $x' = \sum_{k=1}^d \left(\prod_{i \ne k} \beta_i\right) x^k$, and $\hat{x} = x' / \|x'\|$. By the triangle inequality, we have $\|x'\| \le d$, and thus $f(\hat{x}) = f(x')/\|x'\|^d \ge d!\, d^{-d}\, F(x^1, x^2, \ldots, x^d)$. Combining with Theorem 3.4, we have:

Theorem 3.6 For odd $d$, Problem $(A^2_{\max})$ admits a polynomial-time approximation algorithm with approximation ratio $\tau_{A^2}$, where $\tau_{A^2} := d!\, d^{-d}\, n^{-\frac{d-2}{2}}$.
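Since $d$ is a constant, the binary vector $\beta$ in (3) can be found by plain enumeration over all $2^d$ candidates. A short sketch (the helper name is ours; $f$ is assumed to be given as a callable):

```python
import itertools
import numpy as np

def best_sign_combination(f, xs):
    """Enumerate beta in {1,-1}^d and maximize f(sum_k (prod_{i != k} beta_i) x^k);
    by Lemma 3.5 the maximum is at least d! * F(x^1, ..., x^d), as in (3)."""
    best_val, best_x = -np.inf, None
    for beta in itertools.product([1.0, -1.0], repeat=len(xs)):
        p = np.prod(beta)                   # prod_{i != k} beta_i = p * beta_k
        x = sum(p * bk * xk for bk, xk in zip(beta, xs))
        if f(x) > best_val:
            best_val, best_x = f(x), x
    return best_x, best_val
```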

If $d$ is even, then evidently we can only speak of a relative approximation ratio. The following algorithm applies for Problem $(A^2_{\max})$ when $d$ is even. It is one typical case of our method for solving homogeneous polynomial optimization via multi-linear function optimization.

1. Apply Algorithm 1 to solve Problem $(\tilde{A}^2_{\max})$ approximately, where $x^0$ is either given or randomly generated with norm 1, and the function $H$ is defined by (4). Denote its approximate solution to be $(\hat{x}^1, \hat{x}^2, \ldots, \hat{x}^d)$.
2. Output a feasible solution

$\arg\max\left\{f(x^0);\ f\!\left(\sum_{i=1}^d \xi_i \hat{x}^i \Big/ \Big\|\sum_{i=1}^d \xi_i \hat{x}^i\Big\|\right),\ \xi_i \in \{1, -1\},\ i = 1, 2, \ldots, d\right\}.$

Theorem 3.7 For even $d \ge 4$, Problem $(A^2_{\max})$ admits a polynomial-time approximation algorithm with relative approximation ratio $\tau_{A^2}$, i.e., there exists a feasible solution $\hat{x}$ such that

$f(\hat{x}) - v(A^2_{\min}) \ge \tau_{A^2}\left(v(A^2_{\max}) - v(A^2_{\min})\right),$

where $v(A^2_{\min}) := \min_{\|x\| = 1} f(x)$.

Proof. Denote $H(\underbrace{x, x, \ldots, x}_{d})$ to be the super-symmetric tensor form with respect to the homogeneous polynomial $h(x) = \|x\|^d = (x^T x)^{d/2}$. Explicitly, if we denote $\Pi$ to be the set of all distinct permutations of $(1, 2, \ldots, d)$, then

$H(x^1, x^2, \ldots, x^d) = \frac{1}{|\Pi|} \sum_{(i_1, i_2, \ldots, i_d) \in \Pi} (x^{i_1})^T x^{i_2}\; (x^{i_3})^T x^{i_4} \cdots (x^{i_{d-1}})^T x^{i_d}.\qquad (4)$

For any $(x^1, x^2, \ldots, x^d)$ with $\|x^k\| = 1$ for $k = 1, 2, \ldots, d$, we have $|H(x^1, x^2, \ldots, x^d)| \le 1$ by applying the Cauchy-Schwarz inequality termwise. Our algorithm starts by picking any fixed $x^0$ with $\|x^0\| = 1$. Consider the following problem

$(\tilde{A}^2_{\max})\quad \max\ F(x^1, x^2, \ldots, x^d) - f(x^0)\, H(x^1, x^2, \ldots, x^d)\quad \text{s.t.}\ \|x^k\| = 1,\ x^k \in \mathbb{R}^n,\ k = 1, 2, \ldots, d.$

Applying Theorem 3.4 we obtain a solution $(\hat{x}^1, \hat{x}^2, \ldots, \hat{x}^d)$ in polynomial time, with

$F(\hat{x}^1, \hat{x}^2, \ldots, \hat{x}^d) - f(x^0)\, H(\hat{x}^1, \hat{x}^2, \ldots, \hat{x}^d) \ge \tau_{A^1}\, v(\tilde{A}^2_{\max}),$

where $\tau_{A^1} := n^{-\frac{d-2}{2}}$. Let us first work on the case that

$f(x^0) - v(A^2_{\min}) \le (\tau_{A^1}/4)\left(v(A^2_{\max}) - v(A^2_{\min})\right).\qquad (5)$

Since $|H(\hat{x}^1, \hat{x}^2, \ldots, \hat{x}^d)| \le 1$, we have

$F(\hat{x}^1, \ldots, \hat{x}^d) - v(A^2_{\min})\, H(\hat{x}^1, \ldots, \hat{x}^d) = F(\hat{x}^1, \ldots, \hat{x}^d) - f(x^0)\, H(\hat{x}^1, \ldots, \hat{x}^d) + \left(f(x^0) - v(A^2_{\min})\right) H(\hat{x}^1, \ldots, \hat{x}^d)$
$\ge \tau_{A^1}\, v(\tilde{A}^2_{\max}) - \left(f(x^0) - v(A^2_{\min})\right)$
$\ge \tau_{A^1}\left(v(A^2_{\max}) - f(x^0)\right) - (\tau_{A^1}/4)\left(v(A^2_{\max}) - v(A^2_{\min})\right)$
$\ge \tau_{A^1}\left(1 - \tau_{A^1}/4\right)\left(v(A^2_{\max}) - v(A^2_{\min})\right) - (\tau_{A^1}/4)\left(v(A^2_{\max}) - v(A^2_{\min})\right)$
$\ge (\tau_{A^1}/2)\left(v(A^2_{\max}) - v(A^2_{\min})\right),$

where the second inequality is due to the fact that the optimal solution of Problem $(A^2_{\max})$ is feasible to Problem $(\tilde{A}^2_{\max})$, so that $v(\tilde{A}^2_{\max}) \ge v(A^2_{\max}) - f(x^0)$, together with (5). On the other hand, let $\xi_1, \xi_2, \ldots, \xi_d$ be i.i.d. random variables, each taking values $1$ and $-1$ with equal probability $1/2$. By symmetry, we have $\operatorname{Prob}\left\{\prod_{i=1}^d \xi_i = 1\right\} = \operatorname{Prob}\left\{\prod_{i=1}^d \xi_i = -1\right\} = 1/2$. Applying Lemma 3.5 we know

$d!\left(F(\hat{x}^1, \ldots, \hat{x}^d) - v(A^2_{\min})\, H(\hat{x}^1, \ldots, \hat{x}^d)\right) = \mathrm{E}\left[\prod_{i=1}^d \xi_i \left\{f\!\left(\sum_{k=1}^d \xi_k \hat{x}^k\right) - v(A^2_{\min})\, h\!\left(\sum_{k=1}^d \xi_k \hat{x}^k\right)\right\}\right]$
$= \mathrm{E}\left[f\!\left(\sum_{k=1}^d \xi_k \hat{x}^k\right) - v(A^2_{\min})\left\|\sum_{k=1}^d \xi_k \hat{x}^k\right\|^d \,\Big|\, \prod_{i=1}^d \xi_i = 1\right]\operatorname{Prob}\left\{\prod_{i=1}^d \xi_i = 1\right\} - \mathrm{E}\left[f\!\left(\sum_{k=1}^d \xi_k \hat{x}^k\right) - v(A^2_{\min})\left\|\sum_{k=1}^d \xi_k \hat{x}^k\right\|^d \,\Big|\, \prod_{i=1}^d \xi_i = -1\right]\operatorname{Prob}\left\{\prod_{i=1}^d \xi_i = -1\right\}$
$\le \frac{1}{2}\,\mathrm{E}\left[f\!\left(\sum_{k=1}^d \xi_k \hat{x}^k\right) - v(A^2_{\min})\left\|\sum_{k=1}^d \xi_k \hat{x}^k\right\|^d \,\Big|\, \prod_{i=1}^d \xi_i = 1\right],$

where the last inequality is due to the fact that $f\!\left(\sum_{k=1}^d \xi_k \hat{x}^k\right) - v(A^2_{\min})\left\|\sum_{k=1}^d \xi_k \hat{x}^k\right\|^d \ge 0$, since $\sum_{k=1}^d \xi_k \hat{x}^k \big/ \left\|\sum_{k=1}^d \xi_k \hat{x}^k\right\|$ is feasible to Problem $(A^2_{\min})$. Thus by randomization, we can find a binary vector $\beta = (\beta_1, \beta_2, \ldots, \beta_d)$ with $\beta_i \in \{1, -1\}$ and $\prod_{i=1}^d \beta_i = 1$, such that

$f\!\left(\sum_{k=1}^d \beta_k \hat{x}^k\right) - v(A^2_{\min})\left\|\sum_{k=1}^d \beta_k \hat{x}^k\right\|^d \ge 2\, d!\, (\tau_{A^1}/2)\left(v(A^2_{\max}) - v(A^2_{\min})\right) = d!\, \tau_{A^1}\left(v(A^2_{\max}) - v(A^2_{\min})\right).$

By letting $\hat{x} = \sum_{k=1}^d \beta_k \hat{x}^k \big/ \left\|\sum_{k=1}^d \beta_k \hat{x}^k\right\|$, and noticing $\left\|\sum_{k=1}^d \beta_k \hat{x}^k\right\| \le d$, we have

$f(\hat{x}) - v(A^2_{\min}) \ge \frac{d!\, \tau_{A^1}\left(v(A^2_{\max}) - v(A^2_{\min})\right)}{\left\|\sum_{k=1}^d \beta_k \hat{x}^k\right\|^d} \ge \tau_{A^2}\left(v(A^2_{\max}) - v(A^2_{\min})\right).$

Recall that the above inequality is derived under the condition that (5) holds. In case (5) does not hold, then we shall have

$f(x^0) - v(A^2_{\min}) > (\tau_{A^1}/4)\left(v(A^2_{\max}) - v(A^2_{\min})\right) \ge \tau_{A^2}\left(v(A^2_{\max}) - v(A^2_{\min})\right).\qquad (6)$

By picking $\hat{x}^* = \arg\max\{f(\hat{x}), f(x^0)\}$, regardless whether (5) or (6) holds, we shall uniformly have $f(\hat{x}^*) - v(A^2_{\min}) \ge \tau_{A^2}\left(v(A^2_{\max}) - v(A^2_{\min})\right)$. $\square$

4 Polynomial Optimization with Quadratic Constraints

In this section, we shall consider a further generalization of the optimization models to include general ellipsoidal constraints.

4.1 Multi-linear Function Optimization with Quadratic Constraints

Consider the following model:

$(B^1_{\max})\quad \max\ F(x^1, x^2, \ldots, x^d)\quad \text{s.t.}\ (x^k)^T Q^k_{i_k} x^k \le 1,\ k = 1, \ldots, d,\ i_k = 1, \ldots, m_k,\ x^k \in \mathbb{R}^{n_k},\ k = 1, \ldots, d,$

where $F$ is a $d$-th order multi-linear function with $M$ being its associated $d$-th order tensor form, and the matrices satisfy $Q^k_{i_k} \succeq 0$ and $\sum_{i_k=1}^{m_k} Q^k_{i_k} \succ 0$ for all $1 \le k \le d$, $1 \le i_k \le m_k$.

Let us start with the case $d = 2$, and suppose $F(x^1, x^2) = (x^1)^T M x^2$ with $M \in \mathbb{R}^{n_1 \times n_2}$. Denote

$y = \begin{pmatrix} x^1 \\ x^2 \end{pmatrix},\quad \bar{M} = \begin{pmatrix} 0_{n_1 \times n_1} & M/2 \\ M^T/2 & 0_{n_2 \times n_2} \end{pmatrix},\quad \bar{Q}_i = \begin{pmatrix} Q^1_i & 0_{n_1 \times n_2} \\ 0_{n_2 \times n_1} & 0_{n_2 \times n_2} \end{pmatrix}\ \text{for all } 1 \le i \le m_1,\quad \bar{Q}_i = \begin{pmatrix} 0_{n_1 \times n_1} & 0_{n_1 \times n_2} \\ 0_{n_2 \times n_1} & Q^2_{i - m_1} \end{pmatrix}\ \text{for all } m_1 + 1 \le i \le m_1 + m_2.$

Problem $(B^1_{\max})$ is then equivalent to

$(QP)\quad \max\ y^T \bar{M} y\quad \text{s.t.}\ y^T \bar{Q}_i y \le 1,\ i = 1, \ldots, m_1 + m_2,\ y \in \mathbb{R}^{n_1 + n_2}.$

It is well known that the above model can be solved approximately by a polynomial-time randomized algorithm with approximation ratio $\Omega(1/\log(m_1 + m_2))$ (see e.g. Nemirovski, Roos, and Terlaky [8], and He et al. [9]). We now proceed to the higher order cases. To get the essential ideas, we shall focus on the case $d = 3$. The extension to any higher order can be done by induction. In case $d = 3$ we may explicitly write $(B^1_{\max})$ as:

$(B^1_{\max})\quad \max\ F(x, y, z)\quad \text{s.t.}\ x^T Q_i x \le 1,\ i = 1, \ldots, m_1;\ y^T P_j y \le 1,\ j = 1, \ldots, m_2;\ z^T R_k z \le 1,\ k = 1, \ldots, m_3;\ x \in \mathbb{R}^{n_1},\ y \in \mathbb{R}^{n_2},\ z \in \mathbb{R}^{n_3},$

where $Q_i \succeq 0$ for all $1 \le i \le m_1$, $P_j \succeq 0$ for all $1 \le j \le m_2$, $R_k \succeq 0$ for all $1 \le k \le m_3$, and $\sum_{i=1}^{m_1} Q_i \succ 0$, $\sum_{j=1}^{m_2} P_j \succ 0$, $\sum_{k=1}^{m_3} R_k \succ 0$. Combining the constraints of $x$ and $y$, we have

$\operatorname{tr}(Q_i x y^T P_j y x^T) = \operatorname{tr}(x^T Q_i x\; y^T P_j y) = (x^T Q_i x)(y^T P_j y) \le 1.$

Denoting $W = x y^T$, Problem $(B^1_{\max})$ can be relaxed to

$(\bar{B}^1_{\max})\quad \max\ F(W, z)\quad \text{s.t.}\ \operatorname{tr}(Q_i W P_j W^T) \le 1,\ i = 1, \ldots, m_1,\ j = 1, \ldots, m_2;\ z^T R_k z \le 1,\ k = 1, \ldots, m_3;\ W \in \mathbb{R}^{n_1 \times n_2},\ z \in \mathbb{R}^{n_3}.$

Observe that for any $W \in \mathbb{R}^{n_1 \times n_2}$,

$\operatorname{tr}(Q_i W P_j W^T) = \operatorname{tr}\!\left(Q_i^{1/2} W P_j^{1/2} P_j^{1/2} W^T Q_i^{1/2}\right) = \left\|Q_i^{1/2} W P_j^{1/2}\right\|^2 \ge 0,$

and that

$\sum_{1 \le i \le m_1,\, 1 \le j \le m_2} \operatorname{tr}(Q_i W P_j W^T) = \operatorname{tr}\!\left(\left(\sum_{i=1}^{m_1} Q_i\right) W \left(\sum_{j=1}^{m_2} P_j\right) W^T\right) = \left\|\left(\sum_{i=1}^{m_1} Q_i\right)^{1/2} W \left(\sum_{j=1}^{m_2} P_j\right)^{1/2}\right\|^2 > 0\quad \text{for } W \ne 0.$

Indeed, it is easy to verify that $\operatorname{tr}(Q_i W P_j W^T) = (\operatorname{vec}(W))^T (Q_i \otimes P_j)\operatorname{vec}(W)$, which implies that $\operatorname{tr}(Q_i W P_j W^T) \le 1$ is actually a convex quadratic constraint for $W$. Thus, Problem $(\bar{B}^1_{\max})$ is exactly in the form of Problem $(B^1_{\max})$ with $d = 2$. Therefore we are able to find a feasible solution $(\hat{W}, \hat{z})$ of Problem $(\bar{B}^1_{\max})$ in polynomial time, such that

$F(\hat{W}, \hat{z}) \ge \Omega\left(1/\log(m_1 m_2 + m_3)\right) v(\bar{B}^1_{\max}) \ge \Omega(1/\log \bar{m})\; v(\bar{B}^1_{\max}),$

where $\bar{m} = \max\{m_1, m_2, m_3\}$.
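The Kronecker-product identity used above is easy to verify numerically. In the sketch below (our notation) we use row-major vectorization, for which $\operatorname{tr}(QWPW^T) = \operatorname{vec}(W)^T (Q \otimes P)\operatorname{vec}(W)$ holds; a column-major convention would swap the two factors.

```python
import numpy as np

rng = np.random.default_rng(2)
n1, n2 = 4, 6
Q = (lambda R: R @ R.T)(rng.standard_normal((n1, n1)))   # random PSD matrices
P = (lambda R: R @ R.T)(rng.standard_normal((n2, n2)))
W = rng.standard_normal((n1, n2))

lhs = np.trace(Q @ W @ P @ W.T)
w = W.reshape(-1)                     # row-major vec(W)
rhs = w @ np.kron(Q, P) @ w
assert np.isclose(lhs, rhs)           # convex in W, since Q kron P is PSD
```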

Let us fix $\hat{z}$, and then $F(\cdot, \cdot, \hat{z})$ is a matrix. Our next step is to generate $(\hat{x}, \hat{y})$ from $\hat{W}$. For this purpose, we first introduce the following lemma.

Lemma 4.1 Suppose $Q_i \in S^n_+$ for all $1 \le i \le m$, and $\sum_{i=1}^m Q_i \in S^n_{++}$. Then the following SDP problem

$(P)\quad \min\ \sum_{i=1}^m t_i\quad \text{s.t.}\ \operatorname{tr}(U Q_i) \le 1,\ i = 1, \ldots, m;\ t_i \ge 0,\ i = 1, \ldots, m;\ \begin{pmatrix} U & I_n \\ I_n & \sum_{i=1}^m t_i Q_i \end{pmatrix} \succeq 0$

has an optimal solution with optimal value equal to $n$.

Proof. Straightforward computation shows that the dual of $(P)$ is

$(D)\quad \max\ 2\operatorname{tr}(Z) - \sum_{i=1}^m s_i\quad \text{s.t.}\ \operatorname{tr}(X Q_i) \le 1,\ i = 1, \ldots, m;\ s_i \ge 0,\ i = 1, \ldots, m;\ \begin{pmatrix} X & Z \\ Z^T & \sum_{i=1}^m s_i Q_i \end{pmatrix} \succeq 0.$

Observe that $(D)$ indeed resembles $(P)$. Since $\sum_{i=1}^m Q_i \in S^n_{++}$, both $(P)$ and $(D)$ satisfy the Slater condition, and thus both of them have attainable optimal solutions satisfying the strong duality relationship, i.e., $v(P) = v(D)$. Let $(U^*, t^*)$ be an optimal solution of $(P)$. Clearly $U^* \succ 0$, and by the Schur complement relationship we have $\sum_{i=1}^m t_i^* Q_i \succeq (U^*)^{-1}$. Therefore,

$v(P) = \sum_{i=1}^m t_i^* \ge \sum_{i=1}^m t_i^* \operatorname{tr}(U^* Q_i) = \operatorname{tr}\!\left(U^* \sum_{i=1}^m t_i^* Q_i\right) \ge \operatorname{tr}\!\left(U^* (U^*)^{-1}\right) = n.\qquad (7)$

Observe that for any dual feasible solution $(X, Z, s)$ we always have $\sum_{i=1}^m s_i \ge \operatorname{tr}\!\left(X \sum_{i=1}^m s_i Q_i\right)$. Hence the following problem, to be called $(RD)$, is a relaxation of $(D)$:

$(RD)\quad \max\ 2\operatorname{tr}(Z) - \operatorname{tr}(X Y)\quad \text{s.t.}\ \begin{pmatrix} X & Z \\ Z^T & Y \end{pmatrix} \succeq 0.$

Consider any feasible solution $(X, Y, Z)$ of $(RD)$. Let $X = P^T D P$ be an orthonormal decomposition with $D = \operatorname{Diag}(d_1, d_2, \ldots, d_n)$ and $P^{-1} = P^T$. Notice that $(D, Y', Z') := (P X P^T, P Y P^T, P Z P^T)$ is also a feasible solution of $(RD)$ with the same objective value. By the feasibility, it follows that $d_i Y'_{ii} - (Z'_{ii})^2 \ge 0$, for $i = 1, 2, \ldots, n$. Therefore,

$2\operatorname{tr}(Z') - \operatorname{tr}(D Y') = \sum_{i=1}^n \left(2 Z'_{ii} - d_i Y'_{ii}\right) \le \sum_{i=1}^n \left(2 Z'_{ii} - (Z'_{ii})^2\right) = -\sum_{i=1}^n (Z'_{ii} - 1)^2 + n \le n.$

18 This implies that vd) vrd) n. By combining this with 7), and noticing the strong duality relationship, it follows that vp ) = vd) = n. We then have the following decomposition method, to be called DR 3, as a further extension of DR 1. It plays a similar role in Algorithm as DR does in Algorithm 1. DR Decomposition Routine) 3 Input: Q i S n 1 + for all 1 i m 1 with m 1 Q i S n 1 ++, P j S n + for all 1 j m with m j=1 P j S n ++, W Rn 1 n with tr Q i W P j W T ) 1 for all 1 i m 1 and 1 j m, and M R n 1 n. For matrices Q 1, Q,, Q m1, solve the SDP problem P ) in Lemma 4.1 to get an optimal solution of matrix U and scalars t 1, t,, t m1. Construct W = [ U W W T W T m 1 t iq i )W ] 0. 8) Randomly generate and repeat if necessary, until ξ η ) N 0 n1 +n, W ) 9) ξ T Mη M W, ξ T Q i ξ O log m 1 ) 1 i m 1, and η T P j η O n 1 log m ) 1 j m. Output: x, y) = ξ/ max i {ξ T Q i ξ}, η/ max j {η T P j η}). The complexity of DR 3 depends on the solution for the SDP problem P ), which has On 1 ) variables and Om 1 ) constraints. The current best interior point method has a computational complexity of O m 1 + n 1 )3 n 1 log 1/ϵ) ) to get an ϵ-solution. Besides, it involves Omax{n 1 n m 1, n m }) operations for other steps to get the quality assured solution with high probability. Lemma 4. Under the input of DR 3, we can find x R n 1 and y R n by a polynomial-time randomized algorithm, satisfying x T Q i x 1 for all 1 i m 1 and y T P j y 1 for all 1 j m, such that where m = max{m 1, m }. x T My Ω 1 n1 log m ) M W, 18

Lemma 4.2 Under the input of DR 3, we can find $x \in \mathbb{R}^{n_1}$ and $y \in \mathbb{R}^{n_2}$ by a polynomial-time randomized algorithm, satisfying $x^T Q_i x \le 1$ for all $1 \le i \le m_1$ and $y^T P_j y \le 1$ for all $1 \le j \le m_2$, such that

$x^T M y \ge \Omega\left(\frac{1}{\sqrt{n_1}\log \bar{m}}\right) M \bullet W,$

where $\bar{m} = \max\{m_1, m_2\}$.

Proof. Following the randomization procedure (9) in DR 3, by Lemma 4.1 we have, for any $1 \le i \le m_1$ and $1 \le j \le m_2$,

$\mathrm{E}[\xi^T Q_i \xi] = \operatorname{tr}(Q_i U) \le 1,$
$\mathrm{E}[\eta^T P_j \eta] = \operatorname{tr}\!\left(P_j W^T \left(\sum_{i=1}^{m_1} t_i Q_i\right) W\right) = \sum_{i=1}^{m_1} t_i \operatorname{tr}(P_j W^T Q_i W) \le \sum_{i=1}^{m_1} t_i = n_1.$

So et al. [4] have established that if $\xi$ is a normal random vector and $Q \succeq 0$, then $\operatorname{Prob}\{\xi^T Q \xi \ge \alpha\,\mathrm{E}[\xi^T Q \xi]\} \le e^{-\alpha/2}$ for any $\alpha > 0$. Using this result we have

$\operatorname{Prob}\{\xi^T Q_i \xi \ge \alpha_1\} \le \operatorname{Prob}\{\xi^T Q_i \xi \ge \alpha_1\,\mathrm{E}[\xi^T Q_i \xi]\} \le e^{-\alpha_1/2},$

and

$\operatorname{Prob}\{\eta^T P_j \eta \ge \alpha_2 n_1\} \le \operatorname{Prob}\{\eta^T P_j \eta \ge \alpha_2\,\mathrm{E}[\eta^T P_j \eta]\} \le e^{-\alpha_2/2}.$

Moreover, $\mathrm{E}[\xi^T M \eta] = M \bullet W$. Now let $\hat{x} = \xi/\sqrt{\alpha_1}$ and $\hat{y} = \eta/\sqrt{\alpha_2 n_1}$, and we have

$\operatorname{Prob}\left\{\hat{x}^T M \hat{y} \ge \frac{M \bullet W}{\sqrt{\alpha_1 \alpha_2 n_1}},\ \hat{x}^T Q_i \hat{x} \le 1\ \forall\, 1 \le i \le m_1,\ \hat{y}^T P_j \hat{y} \le 1\ \forall\, 1 \le j \le m_2\right\}$
$\ge 1 - \operatorname{Prob}\{\xi^T M \eta < M \bullet W\} - \sum_{i=1}^{m_1} \operatorname{Prob}\{\xi^T Q_i \xi > \alpha_1\} - \sum_{j=1}^{m_2} \operatorname{Prob}\{\eta^T P_j \eta > \alpha_2 n_1\}$
$\ge 1 - (1 - \theta) - m_1 e^{-\alpha_1/2} - m_2 e^{-\alpha_2/2} \ge \theta/2,$

where we let $\alpha_1 = 2\log(8 m_1/\theta)$ and $\alpha_2 = 2\log(8 m_2/\theta)$. Since $\sqrt{\alpha_1 \alpha_2 n_1} = O(\sqrt{n_1}\log \bar{m})$, the desired $(\hat{x}, \hat{y})$ can be found with high probability with multiple trials. $\square$

Let us turn back to Problem $(\bar{B}^1_{\max})$. If we pick $W = \hat{W}$ and $M = F(\cdot, \cdot, \hat{z})$ in applying Lemma 4.2, then in polynomial time we can find $(\hat{x}, \hat{y})$, satisfying the constraints of Problem $(B^1_{\max})$, such that

$F(\hat{x}, \hat{y}, \hat{z}) = \hat{x}^T M \hat{y} \ge \Omega\left(\frac{1}{\sqrt{n_1}\log \bar{m}}\right) M \bullet \hat{W} = \Omega\left(\frac{1}{\sqrt{n_1}\log \bar{m}}\right) F(\hat{W}, \hat{z}) \ge \Omega\left(\frac{1}{\sqrt{n_1}\log^2 \bar{m}}\right) v(B^1_{\max}).$

Thus we have shown the following result.

Theorem 4.3 For $d = 3$, Problem $(B^1_{\max})$ admits a polynomial-time randomized approximation algorithm with approximation ratio $\Omega\left(\left(\sqrt{n_1}\log^2 \bar{m}\right)^{-1}\right)$, where $\bar{m} = \max\{m_1, m_2, m_3\}$.

The result can be generalized to Problem $(B^1_{\max})$ of any degree $d$.

Theorem 4.4 Problem $(B^1_{\max})$ admits a polynomial-time randomized approximation algorithm with approximation ratio $\tau_{B^1}$, where $\tau_{B^1} := \Omega\left(\left(\sqrt{n_1 n_2 \cdots n_{d-2}}\ \log^{d-1} \bar{m}\right)^{-1}\right)$ and $\bar{m} = \max_{1 \le k \le d}\{m_k\}$.

Proof. We shall again take recursive steps. Denoting $W = x^1 (x^d)^T$, Problem $(B^1_{\max})$ is relaxed to

$(\hat{B}^1_{\max})\quad \max\ F(W, x^2, x^3, \ldots, x^{d-1})\quad \text{s.t.}\ \operatorname{tr}\!\left(Q^1_{i_1} W Q^d_{i_d} W^T\right) \le 1,\ i_1 = 1, \ldots, m_1,\ i_d = 1, \ldots, m_d;\ (x^k)^T Q^k_{i_k} x^k \le 1,\ k = 2, 3, \ldots, d-1,\ i_k = 1, \ldots, m_k;\ W \in \mathbb{R}^{n_1 \times n_d},\ x^k \in \mathbb{R}^{n_k},\ k = 2, 3, \ldots, d-1.$

Notice that Problem $(\hat{B}^1_{\max})$ is exactly in the form of Problem $(B^1_{\max})$ of degree $d - 1$, by treating $W$ as a vector of dimension $n_1 n_d$. By recursion, with high probability we can find a feasible solution $(\hat{W}, \hat{x}^2, \hat{x}^3, \ldots, \hat{x}^{d-1})$ of Problem $(\hat{B}^1_{\max})$ in polynomial time, such that

$F(\hat{W}, \hat{x}^2, \hat{x}^3, \ldots, \hat{x}^{d-1}) \ge \Omega\left(\left(\sqrt{n_2 n_3 \cdots n_{d-2}}\ \log^{d-2} \bar{m}\right)^{-1}\right) v(\hat{B}^1_{\max}) \ge \Omega\left(\left(\sqrt{n_2 n_3 \cdots n_{d-2}}\ \log^{d-2} \bar{m}\right)^{-1}\right) v(B^1_{\max}).$

As long as we fix $(\hat{x}^2, \hat{x}^3, \ldots, \hat{x}^{d-1})$, and pick $W = \hat{W}$ and $M = F(\cdot, \hat{x}^2, \hat{x}^3, \ldots, \hat{x}^{d-1}, \cdot)$ in applying Lemma 4.2, we shall be able to find $(\hat{x}^1, \hat{x}^d)$ satisfying the constraints of Problem $(B^1_{\max})$, such that

$F(\hat{x}^1, \hat{x}^2, \ldots, \hat{x}^{d-1}, \hat{x}^d) \ge \Omega\left(\frac{1}{\sqrt{n_1}\log \bar{m}}\right) F(\hat{W}, \hat{x}^2, \hat{x}^3, \ldots, \hat{x}^{d-1}) \ge \tau_{B^1}\, v(B^1_{\max}).\ \square$

Summarizing, the recursive procedure for Problem $(B^1_{\max})$ (Theorem 4.4) is highlighted as follows:

Algorithm 2
Input: $d$-th order tensor $M^d \in \mathbb{R}^{n_1 \times n_2 \times \cdots \times n_d}$ with $n_1 \le n_2 \le \cdots \le n_d$; matrices $Q^k_{i_k} \in S^{n_k}_+$ for all $1 \le i_k \le m_k$ with $\sum_{i_k=1}^{m_k} Q^k_{i_k} \in S^{n_k}_{++}$, for all $1 \le k \le d$.
1. Rewrite $M^d$ as a $(d-1)$-th order tensor $M^{d-1}$ by combining its first and last modes into one, and place the combined mode in the last position of $M^{d-1}$, i.e.,
$M^d_{i_1, i_2, \ldots, i_d} = M^{d-1}_{i_2, i_3, \ldots, i_{d-1},\, (i_1 - 1) n_d + i_d},\quad 1 \le i_1 \le n_1,\ 1 \le i_2 \le n_2,\ \ldots,\ 1 \le i_d \le n_d.$
2. Compute the matrices $P_{i_1, i_d} = Q^1_{i_1} \otimes Q^d_{i_d}$ for all $1 \le i_1 \le m_1$ and $1 \le i_d \le m_d$.

3. For Problem $(B^1_{\max})$ with the $(d-1)$-th order tensor $M^{d-1}$, matrices $Q^k_{i_k}$ ($2 \le k \le d-1$) and $P_{i_1, i_d}$: if $d - 1 = 2$, then the problem is essentially Problem $(QP)$, and admits an approximate solution $(\hat{x}^2, \hat{x}^{1,d})$; otherwise obtain a solution $(\hat{x}^2, \hat{x}^3, \ldots, \hat{x}^{d-1}, \hat{x}^{1,d})$ by recursion.
4. Compute the matrix $M' = F(\cdot, \hat{x}^2, \hat{x}^3, \ldots, \hat{x}^{d-1}, \cdot)$ and rewrite the vector $\hat{x}^{1,d}$ as a matrix $X \in \mathbb{R}^{n_1 \times n_d}$.
5. Apply DR 3, with input $(Q_i, P_j, W, M) = (Q^1_i, Q^d_j, X, M')$ and output $(\hat{x}^1, \hat{x}^d) = (x, y)$.
Output: a feasible solution $(\hat{x}^1, \hat{x}^2, \ldots, \hat{x}^d)$.

4.2 Homogeneous Polynomial Optimization with Quadratic Constraints

Similar to the spherically constrained case, we now consider the problem

$(B^2_{\max})\quad \max\ f(x)\quad \text{s.t.}\ x^T Q_i x \le 1,\ i = 1, \ldots, m,\ x \in \mathbb{R}^n,$

where $f(x)$ is a homogeneous polynomial function of degree $d$, $Q_i \succeq 0$ for all $1 \le i \le m$, and $\sum_{i=1}^m Q_i \succ 0$. If we relax Problem $(B^2_{\max})$ to the multi-linear form like Problem $(B^1_{\max})$, then we have

$(\bar{B}^2_{\max})\quad \max\ F(x^1, x^2, \ldots, x^d)\quad \text{s.t.}\ (x^k)^T Q_i x^k \le 1,\ k = 1, \ldots, d,\ i = 1, \ldots, m,\ x^k \in \mathbb{R}^n,\ k = 1, \ldots, d.$

Theorem 4.5 For odd $d$, Problem $(B^2_{\max})$ admits a polynomial-time randomized approximation algorithm with approximation ratio $\tau_{B^2}$, where $\tau_{B^2} := \Omega\left(d!\, d^{-d}\, n^{-\frac{d-2}{2}}\log^{-(d-1)} m\right)$.

Proof. According to Theorem 4.4 we can find a feasible solution $(\hat{x}^1, \hat{x}^2, \ldots, \hat{x}^d)$ of Problem $(\bar{B}^2_{\max})$ in polynomial time, such that

$F(\hat{x}^1, \hat{x}^2, \ldots, \hat{x}^d) \ge \tau_{\bar{B}^2}\, v(\bar{B}^2_{\max}) \ge \tau_{\bar{B}^2}\, v(B^2_{\max}),$

where $\tau_{\bar{B}^2} := \Omega\left(n^{-\frac{d-2}{2}}\log^{-(d-1)} m\right)$. By (3), we can find a binary vector $\beta = (\beta_1, \beta_2, \ldots, \beta_d)$ with $\beta_i \in \{1, -1\}$ for all $1 \le i \le d$ (absorbing the sign products in (3) into $\beta$), such that

$f\!\left(\sum_{i=1}^d \beta_i \hat{x}^i\right) \ge d!\, F(\hat{x}^1, \hat{x}^2, \ldots, \hat{x}^d).$

Notice that for any $1 \le k \le m$,

$\left(\sum_{i=1}^d \beta_i \hat{x}^i\right)^T Q_k \left(\sum_{j=1}^d \beta_j \hat{x}^j\right) = \sum_{i,j=1}^d \beta_i \beta_j\, (\hat{x}^i)^T Q_k \hat{x}^j = \sum_{i,j=1}^d \left(\beta_i Q_k^{1/2} \hat{x}^i\right)^T \left(\beta_j Q_k^{1/2} \hat{x}^j\right) \le \sum_{i,j=1}^d \left\|\beta_i Q_k^{1/2} \hat{x}^i\right\| \left\|\beta_j Q_k^{1/2} \hat{x}^j\right\| = \sum_{i,j=1}^d \sqrt{(\hat{x}^i)^T Q_k \hat{x}^i}\,\sqrt{(\hat{x}^j)^T Q_k \hat{x}^j} \le \sum_{i,j=1}^d 1 \cdot 1 = d^2.\qquad (10)$

If we denote $\hat{x} = \frac{1}{d}\sum_{i=1}^d \beta_i \hat{x}^i$, then $\hat{x}$ is a feasible solution of Problem $(B^2_{\max})$, satisfying

$f(\hat{x}) \ge d^{-d}\, d!\, F(\hat{x}^1, \hat{x}^2, \ldots, \hat{x}^d) \ge d^{-d}\, d!\, \tau_{\bar{B}^2}\, v(B^2_{\max}) = \tau_{B^2}\, v(B^2_{\max}).\ \square$

Theorem 4.6 For even $d$, Problem $(B^2_{\max})$ admits a polynomial-time randomized approximation algorithm with relative approximation ratio $\tau_{B^2}$, i.e., there exists a feasible solution $\hat{x}$ such that

$f(\hat{x}) - v(B^2_{\min}) \ge \tau_{B^2}\left(v(B^2_{\max}) - v(B^2_{\min})\right),$

where $v(B^2_{\min}) := \min_{x^T Q_i x \le 1,\, i = 1, \ldots, m} f(x)$.

Proof. First, we observe that $v(B^2_{\max}) \le v(\bar{B}^2_{\max})$ and $-v(B^2_{\min}) \le v(\bar{B}^2_{\max})$. Therefore,

$2\, v(\bar{B}^2_{\max}) \ge v(B^2_{\max}) - v(B^2_{\min}).$

Let $(\hat{x}^1, \hat{x}^2, \ldots, \hat{x}^d)$ be the feasible solution of Problem $(\bar{B}^2_{\max})$ as in the proof of Theorem 4.5. By (10) it follows that $\frac{1}{d}\sum_{k=1}^d \xi_k \hat{x}^k$ is feasible to Problem $(B^2_{\max})$, where $\xi_1, \xi_2, \ldots, \xi_d$ are i.i.d. random variables, each taking values $1$ and $-1$ with equal probability $1/2$. Therefore, by Lemma 3.5 we have

$d!\, F(\hat{x}^1, \hat{x}^2, \ldots, \hat{x}^d) = \mathrm{E}\left[\prod_{i=1}^d \xi_i\, f\!\left(\sum_{k=1}^d \xi_k \hat{x}^k\right)\right]$
$= \frac{d^d}{2}\left\{\mathrm{E}\left[f\!\left(\frac{1}{d}\sum_{k=1}^d \xi_k \hat{x}^k\right) - v(B^2_{\min}) \,\Big|\, \prod_{i=1}^d \xi_i = 1\right] - \mathrm{E}\left[f\!\left(\frac{1}{d}\sum_{k=1}^d \xi_k \hat{x}^k\right) - v(B^2_{\min}) \,\Big|\, \prod_{i=1}^d \xi_i = -1\right]\right\}$
$\le \frac{d^d}{2}\,\mathrm{E}\left[f\!\left(\frac{1}{d}\sum_{k=1}^d \xi_k \hat{x}^k\right) - v(B^2_{\min}) \,\Big|\, \prod_{i=1}^d \xi_i = 1\right],$

where the last inequality holds because $f\!\left(\frac{1}{d}\sum_{k=1}^d \xi_k \hat{x}^k\right) - v(B^2_{\min}) \ge 0$, as $\frac{1}{d}\sum_{k=1}^d \xi_k \hat{x}^k$ is always feasible to Problem $(B^2_{\min})$. Therefore by randomization, we are able to find a binary vector $\beta = (\beta_1, \beta_2, \ldots, \beta_d)$ with $\beta_i \in \{1, -1\}$ and $\prod_{i=1}^d \beta_i = 1$, such that

$f\!\left(\frac{1}{d}\sum_{i=1}^d \beta_i \hat{x}^i\right) - v(B^2_{\min}) \ge 2\, d^{-d}\, d!\, F(\hat{x}^1, \hat{x}^2, \ldots, \hat{x}^d) \ge 2\, \tau_{B^2}\, v(\bar{B}^2_{\max}) \ge \tau_{B^2}\left(v(B^2_{\max}) - v(B^2_{\min})\right).\ \square$
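Inequality (10) is the feasibility argument behind both Theorems 4.5 and 4.6, and it is easy to verify numerically: any $\pm 1$ combination of feasible points, scaled by $1/d$, stays feasible. A small sketch (with hypothetical feasible points generated by rescaling random vectors):

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, d = 6, 4, 4
Qs = [(lambda R: R @ R.T)(rng.standard_normal((n, n))) for _ in range(m)]

# hypothetical feasible points x^1..x^d: scale so that x^T Q_i x <= 1 for all i
xs = []
for _ in range(d):
    x = rng.standard_normal(n)
    x /= np.sqrt(max(x @ Q @ x for Q in Qs))
    xs.append(x)

# inequality (10): any +-1 combination, scaled by 1/d, remains feasible
beta = rng.choice([-1.0, 1.0], size=d)
xhat = sum(b * x for b, x in zip(beta, xs)) / d
assert all(xhat @ Q @ xhat <= 1 + 1e-9 for Q in Qs)
```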

We remark that whether the approximation ratios derived in this paper are tight or not is not known, including the case of $d = 3$.

5 Numerical Results

In this section we are going to test the performance of the approximation algorithms proposed. We shall focus on the case $d = 4$, i.e., a fourth order multi-linear function or a homogeneous quartic polynomial, as a typical case. All the numerical computations are conducted using an Intel Pentium 4 CPU 2.80GHz computer with 2GB of RAM. The supporting software is MATLAB (R2008b), and cvx v1.2 (Grant and Boyd [7]) is called for solving the SDP problems whenever applicable.

5.1 Multi-linear Function with Spherical Constraints

Numerical test results on Problem $(A^1_{\max})$ for $d = 4$ are reported in this subsection. In particular, the model to be tested is:

$(E^1)\quad \max\ F(x, y, z, w) = \sum_{1 \le i, j, k, l \le n} a_{ijkl}\, x_i y_j z_k w_l\quad \text{s.t.}\ \|x\| = \|y\| = \|z\| = \|w\| = 1,\ x, y, z, w \in \mathbb{R}^n.$

5.1.1 Randomly Generated Tensors

A fourth order tensor $F$ is generated randomly, with its $n^4$ entries following i.i.d. normal distributions. Basically we have a choice to make in the recursion in Algorithm 1, yielding the two test procedures described below. Both methods use the deterministic routine, namely DR 2.

Test Procedure 1
1. Solve the relaxation problem $\max\ F(X, Z) = \sum_{1 \le i,j,k,l \le n} a_{ijkl}\, X_{ij} Z_{kl}$ s.t. $\|X\| = \|Z\| = 1$, $X, Z \in \mathbb{R}^{n \times n}$, by DR 2. Denote its optimal solution to be $(\hat{X}, \hat{Z})$ and optimal value to be $\bar{v}_1$.
2. Compute the matrix $M_1 = F(\cdot, \cdot, \hat{Z})$ and solve the problem $\max_{\|x\| = \|y\| = 1} x^T M_1 y$ by DR 2. Denote its optimal solution to be $(\hat{x}, \hat{y})$.
3. Compute the matrix $M_2 = F(\hat{x}, \hat{y}, \cdot, \cdot)$ and solve the problem $\max_{\|z\| = \|w\| = 1} z^T M_2 w$ by DR 2. Denote its optimal solution to be $(\hat{z}, \hat{w})$.
4. Construct a feasible solution $(\hat{x}, \hat{y}, \hat{z}, \hat{w})$ with objective value $\hat{v}_1 = F(\hat{x}, \hat{y}, \hat{z}, \hat{w})$, and report the upper bound $\bar{v}_1$ of the optimal value, and the ratio $\tau_1 := \hat{v}_1 / \bar{v}_1$.

Test Procedure 2
1. Solve the relaxation problem $\max\ F(Z, w) = \sum_{1 \le i,j,k,l \le n} a_{ijkl}\, Z_{ijk} w_l$ s.t. $\|Z\| = \|w\| = 1$, $Z \in \mathbb{R}^{n \times n \times n}$, $w \in \mathbb{R}^n$, by DR 2. Denote its optimal solution to be $(\hat{Z}, \hat{w})$ and optimal value to be $\bar{v}_2$.
2. Compute the third order tensor $F_3 = F(\cdot, \cdot, \cdot, \hat{w})$ and solve the problem $\max_{\|Y\| = \|z\| = 1} F_3(Y, z)$ by DR 2. Denote its optimal solution to be $(\hat{Y}, \hat{z})$.
3. Compute the matrix $M_4 = F_3(\cdot, \cdot, \hat{z})$ and solve the problem $\max_{\|x\| = \|y\| = 1} x^T M_4 y$ by DR 2. Denote its optimal solution to be $(\hat{x}, \hat{y})$.
4. Construct a feasible solution $(\hat{x}, \hat{y}, \hat{z}, \hat{w})$ with objective value $\hat{v}_2 = F(\hat{x}, \hat{y}, \hat{z}, \hat{w})$, and report the upper bound $\bar{v}_2$ of the optimal value, and the ratio $\tau_2 := \hat{v}_2 / \bar{v}_2$.

Test Procedure 2 is an explicit description of Algorithm 1 when $d = 4$ and $n_1 = n_2 = n_3 = n_4$. It enjoys a theoretical worst-case performance ratio of $1/n$ by Theorem 3.4. Test Procedure 1 follows a similar fashion as Algorithm 1, with a different recursion; it also enjoys a worst-case performance ratio of $1/n$, which can be proven by using exactly the same argument as for Theorem 3.4. From the simulation results in Table 1, the objective values of the feasible solutions are indeed very similar. However, Test Procedure 1 computes a much better upper bound for $v(E^1)$, and thus ends up with a better approximation ratio. The numerical results in Table 1 seem to indicate that the performance ratio of Test Procedure 1 is about $1/\sqrt{n}$, while that of Test Procedure 2 is about $2/n$. The main reason for the difference between the upper bounds of $v(E^1)$ ($\bar{v}_1$ vs. $\bar{v}_2$) is the relaxation methods. By Proposition 3.1 we may guess that $\bar{v}_1 = \Omega(\|M\|/n)$, while $\bar{v}_2 = \Omega(\|M\|/\sqrt{n})$, and this may contribute to the large gap between $\bar{v}_1$ and $\bar{v}_2$. Consequently, it is quite possible that the true value of $v(E^1)$ is closer to the solution values ($\hat{v}_1$ and $\hat{v}_2$) than to the optimal value of the relaxed problem ($\bar{v}_1$). The quality of the solutions produced is possibly much better than that of the upper bounds.
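For reference, Test Procedure 2 is compact enough to state in full; below is a numpy sketch (our function names; `dr2` is the routine from Section 3.1) which mirrors Steps 1 to 4 and returns the ratio $\tau_2 = \hat{v}_2/\bar{v}_2$.

```python
import numpy as np

def dr2(M):
    """DR 2: the top singular pair of M solves max x^T M y over unit spheres."""
    U, s, Vt = np.linalg.svd(M)
    return U[:, 0], Vt[0, :], s[0]

def test_procedure_2(A):
    """A sketch of Test Procedure 2 for (E1), A being an n x n x n x n tensor."""
    n = A.shape[0]
    # Step 1: relax (x, y, z) into one n^3 block; solved exactly as a bilinear problem
    _, w, v2_bar = dr2(A.reshape(n**3, n))
    # Step 2: fix w and form the third order tensor F3 = F(., ., ., w)
    F3 = np.tensordot(A, w, axes=([3], [0]))
    _, z, _ = dr2(F3.reshape(n**2, n))
    # Step 3: fix z and recover (x, y) from the matrix M4 = F3(., ., z)
    M4 = np.tensordot(F3, z, axes=([2], [0]))
    x, y, _ = dr2(M4)
    v_hat = np.einsum('ijkl,i,j,k,l->', A, x, y, z, w)
    return (x, y, z, w), v_hat, v2_bar

A = np.random.default_rng(4).standard_normal((8, 8, 8, 8))
_, v_hat, v_bar = test_procedure_2(A)
print(v_hat / v_bar)   # observed tau_2; the worst-case guarantee here is 1/n = 1/8
```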

Table 1: Numerical results (average of 10 instances for each $n$) of Problem $(E^1)$; columns: $n$, $\hat{v}_1$, $\hat{v}_2$, $\bar{v}_1$, $\bar{v}_2$, $\tau_1$, $\tau_2$, and the scaled ratios $\sqrt{n}\,\tau_1$ and $n\,\tau_2$. (The numerical entries are not preserved in this transcription.)

Table 2: CPU seconds (average of 10 instances for each $n$) for solving Problem $(E^1)$; columns: $n$, Test Procedure 1, Test Procedure 2. (The numerical entries are not preserved in this transcription.)

Although Test Procedure 1 works clearly better than Test Procedure 2 in terms of the upper bound of $v(E^1)$, it requires much more computational time. The most expensive part of Test Procedure 1 is Step 1, computing the largest eigenvalue and its corresponding eigenvector of an $n^2 \times n^2$ matrix. In comparison, for Test Procedure 2 the corresponding part involves only an $n \times n$ matrix. Evidence in Table 2 shows that Test Procedure 2 can find a good quality solution very fast even for large size problems. We remark here that for $n = 100$, the size of the input data is already in the magnitude of $10^8$.

5.1.2 Examples with Known Optimal Solutions

The upper bounds seem to be quite loose in general from the previous numerical results. To test how good the solutions are without referring to the computed upper bounds, in this subsection we report tests where the problem instances are constructed in such a way that the optimal solutions are known. By this we hope to get some impression, from a different angle, on the quality of the approximate solutions produced by our algorithms. We first randomly generate an $n$-dimensional vector $a$ with norm 1, and generate $m$ symmetric matrices $A_i$ ($1 \le i \le m$) with eigenvalues lying

in the interval $[-1, 1]$ and $A_i a = a$. Then, we randomly generate an $n$-dimensional vector $b$ with norm 1, and $m$ symmetric matrices $B_i$ ($1 \le i \le m$) with eigenvalues in the interval $[-1, 1]$ and $B_i b = b$. Define

$F(x, y, z, w) = \sum_{i=1}^m \left(x^T A_i y\right)\left(z^T B_i w\right).$

For this particular multi-linear function $F(x, y, z, w)$, it is easy to see that Problem $(E^1)$ has an optimal solution $(a, a, b, b)$ and the optimal value is equal to $m$. We generate such random instances with $n = 50$ for various $m$, and subsequently apply Test Procedure 2 to solve them. Since the optimal values are known, it is possible to compute the exact performance ratios. For each $m$, 200 random instances are generated and tested. The results are shown in Table 3, which suggest that our algorithm works very well and the performance ratios are much better than the theoretical worst-case bounds. Indeed, whenever $m \ge 50$ our algorithm always finds optimal solutions.

Table 3: Numerical results of Problem $(E^1)$ with known optimal value when $n = 50$; columns: $m$, minimal ratio, maximal ratio, average ratio, percentage of optimality. (The numerical entries are not preserved in this transcription.)
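The construction of these known-optimum instances can be sketched as follows (our helper names; each matrix is built from a random orthonormal basis whose first column is the prescribed eigenvector):

```python
import numpy as np

def spectral_family(v, m, rng):
    """m random symmetric matrices with spectra in [-1, 1] and the prescribed
    unit eigenvector v (eigenvalue 1), via a random orthonormal basis."""
    n = len(v)
    mats = []
    for _ in range(m):
        B = np.column_stack([v, rng.standard_normal((n, n - 1))])
        Q, _ = np.linalg.qr(B)
        Q[:, 0] = v                                   # first column is exactly v
        eigs = np.concatenate(([1.0], rng.uniform(-1.0, 1.0, n - 1)))
        mats.append(Q @ np.diag(eigs) @ Q.T)
    return mats

rng = np.random.default_rng(5)
n, m = 50, 10
a = rng.standard_normal(n); a /= np.linalg.norm(a)
b = rng.standard_normal(n); b /= np.linalg.norm(b)
As, Bs = spectral_family(a, m, rng), spectral_family(b, m, rng)
# F(x,y,z,w) = sum_i (x^T A_i y)(z^T B_i w) attains its maximum m at (a, a, b, b)
assert np.isclose(sum((a @ A @ a) * (b @ B @ b) for A, B in zip(As, Bs)), m)
```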

5.2 Homogeneous Polynomial Function with Quadratic Constraints

In this subsection we shall test our solution methods for Problem $(B^2_{\max})$ when $d = 4$:

$(E^2)\quad \max\ f(x) = \sum_{1 \le i, j, k, l \le n} a_{ijkl}\, x_i x_j x_k x_l\quad \text{s.t.}\ x^T Q_i x \le 1,\ i = 1, \ldots, m,\ x \in \mathbb{R}^n,$

where $M = (a_{ijkl})$ is super-symmetric, and $Q_i$ is positive semidefinite for all $1 \le i \le m$. First, a fourth order tensor is randomly generated, with its $n^4$ entries following i.i.d. normal distributions. We then symmetrize it (averaging of the related entries) to form a super-symmetric tensor $M$. For the constraints, we generate $n \times n$ matrices $R_i$, whose entries also follow i.i.d. normal distributions, and then let $Q_i = R_i^T R_i$. The following test procedure is applied in (approximately) solving Problem $(E^2)$. For the particular nature of Problem $(E^2)$, Test Procedure 3 is a simplification of the algorithm proposed in proving Theorem 4.6. By following essentially the same proof, this procedure also has a worst-case relative performance ratio of $\Omega\left(1/(n\log^3 m)\right)$, similar to what Theorem 4.6 asserts.

Test Procedure 3
1. Solve the problem $\max\ F(X, X) = \sum_{1 \le i,j,k,l \le n} a_{ijkl}\, X_{ij} X_{kl}$ s.t. $\operatorname{tr}(Q_i X Q_j X^T) \le 1$, $i = 1, \ldots, m$, $j = 1, \ldots, m$, $X \in \mathbb{R}^{n \times n}$, by SDP relaxation, and randomly sample 10 times to keep the best sampled solution (see [1]). Let the solution be $\hat{X}$, and the optimal value of the SDP relaxation be $\bar{v}_3$.
2. Solve the SDP $(P)$ in Lemma 4.1. Apply the randomized process as described in (8) and (9), and sample 10 times to keep the best sampled $\hat{x}$ and $\hat{y}$ with maximum $F(\hat{x}, \hat{y}, \hat{x}, \hat{y})$.
3. Compare the objective values of $0_n$, $\hat{x}$, $\hat{y}$, $(\hat{x} + \hat{y})/2$, and $(\hat{x} - \hat{y})/2$, and output the best one as the final solution. Report its objective value $\hat{v}_3$ and the ratio $\tau_3 := \hat{v}_3 / \bar{v}_3$.

Table 4: Numerical results of Problem $(E^2)$ when $n = 10$ and $m = 30$; columns: instance, $\hat{v}_3$, $\bar{v}_3$, $\tau_3$, and $n\log^3 m \cdot \tau_3$. (The numerical entries are not preserved in this transcription.)

For $n = 10$ and $m = 30$, we randomly generate 10 instances of Problem $(E^2)$. The solution results are shown in Table 4. Table 5 shows the absolute approximation ratios for various $n$ and $m$ by following Test Procedure 3; each entry is the average performance ratio of 10 randomly generated instances. Next we compare our solution method with the so-called SOS approach for solving $(E^2)$. Due to the limitations of the current SDP solvers (constraining the size of the SDP relaxation problem at Step 1 in Test Procedure 3), our test procedures work only for small size problems. Since the SOS approach [17, 18] works quite efficiently for small size problems, it is interesting to know how the SOS methods

Table 5: Absolute approximation ratios of Problem $(E^2)$ for various $n$ and $m$; columns $n = 2, 5, 8, 10, 12$, with one row per tested value of $m$. (The numerical entries are not preserved in this transcription.)

Table 6: Comparison with SOS methods on Problem $(E^2)$ when $n = 12$ and $m = 30$, for 10 instances; rows: $\hat{v}_3$, $\bar{v}_3$, $v_{\mathrm{sos}}$, optimality of $v_{\mathrm{sos}}$ (No, No, Yes, Yes, No, Yes, No, Yes, No, No), $\hat{v}_3/\bar{v}_3$, $\hat{v}_3/v_{\mathrm{sos}}$. (The remaining numerical entries are not preserved in this transcription.)

would perform in solving these randomly generated instances of Problem $(E^2)$. In particular, we shall use GloptiPoly 3 of Henrion, Lasserre, and Löfberg [11]. We randomly generated 10 instances of Problem $(E^2)$. By using the first SDP relaxation (Lasserre's procedure [17]), GloptiPoly 3 found global optimal solutions for 4 instances, and got upper bounds on the optimal values for the other 6 instances. In the latter case, however, no feasible solutions are generated, while our algorithm always finds feasible solutions with a guaranteed approximation ratio, and so the two approaches are complementary to each other. Moreover, GloptiPoly 3 always yields a better upper bound than $\bar{v}_3$ for our test instances, which helps to yield better approximation ratios. The average ratio is 0.11 by using the upper bound $\bar{v}_3$, and is 0.26 by using the upper bound produced by GloptiPoly 3 (see Table 6).

To conclude this section and the whole paper, we remark that the algorithms proposed are actually practical, and they produce very high quality solutions. The worst-case performance analysis offers a theoretical safety net, which is usually far from the typical performance. Moreover, it is of course possible to improve the solution by some local search procedure. A stable local improvement procedure is a nontrivial task for problems in high dimensions, which is one of our future research topics.


More information

SOS TENSOR DECOMPOSITION: THEORY AND APPLICATIONS

SOS TENSOR DECOMPOSITION: THEORY AND APPLICATIONS SOS TENSOR DECOMPOSITION: THEORY AND APPLICATIONS HAIBIN CHEN, GUOYIN LI, AND LIQUN QI Abstract. In this paper, we examine structured tensors which have sum-of-squares (SOS) tensor decomposition, and study

More information

COURSE ON LMI PART I.2 GEOMETRY OF LMI SETS. Didier HENRION henrion

COURSE ON LMI PART I.2 GEOMETRY OF LMI SETS. Didier HENRION   henrion COURSE ON LMI PART I.2 GEOMETRY OF LMI SETS Didier HENRION www.laas.fr/ henrion October 2006 Geometry of LMI sets Given symmetric matrices F i we want to characterize the shape in R n of the LMI set F

More information

Acyclic Semidefinite Approximations of Quadratically Constrained Quadratic Programs

Acyclic Semidefinite Approximations of Quadratically Constrained Quadratic Programs 2015 American Control Conference Palmer House Hilton July 1-3, 2015. Chicago, IL, USA Acyclic Semidefinite Approximations of Quadratically Constrained Quadratic Programs Raphael Louca and Eilyan Bitar

More information

Convex Optimization M2

Convex Optimization M2 Convex Optimization M2 Lecture 8 A. d Aspremont. Convex Optimization M2. 1/57 Applications A. d Aspremont. Convex Optimization M2. 2/57 Outline Geometrical problems Approximation problems Combinatorial

More information

arxiv: v1 [math.oc] 14 Oct 2014

arxiv: v1 [math.oc] 14 Oct 2014 arxiv:110.3571v1 [math.oc] 1 Oct 01 An Improved Analysis of Semidefinite Approximation Bound for Nonconvex Nonhomogeneous Quadratic Optimization with Ellipsoid Constraints Yong Hsia a, Shu Wang a, Zi Xu

More information

A General Framework for Convex Relaxation of Polynomial Optimization Problems over Cones

A General Framework for Convex Relaxation of Polynomial Optimization Problems over Cones Research Reports on Mathematical and Computing Sciences Series B : Operations Research Department of Mathematical and Computing Sciences Tokyo Institute of Technology 2-12-1 Oh-Okayama, Meguro-ku, Tokyo

More information

Grothendieck s Inequality

Grothendieck s Inequality Grothendieck s Inequality Leqi Zhu 1 Introduction Let A = (A ij ) R m n be an m n matrix. Then A defines a linear operator between normed spaces (R m, p ) and (R n, q ), for 1 p, q. The (p q)-norm of A

More information

Positive semidefinite matrix approximation with a trace constraint

Positive semidefinite matrix approximation with a trace constraint Positive semidefinite matrix approximation with a trace constraint Kouhei Harada August 8, 208 We propose an efficient algorithm to solve positive a semidefinite matrix approximation problem with a trace

More information

Lagrange Duality. Daniel P. Palomar. Hong Kong University of Science and Technology (HKUST)

Lagrange Duality. Daniel P. Palomar. Hong Kong University of Science and Technology (HKUST) Lagrange Duality Daniel P. Palomar Hong Kong University of Science and Technology (HKUST) ELEC5470 - Convex Optimization Fall 2017-18, HKUST, Hong Kong Outline of Lecture Lagrangian Dual function Dual

More information

Semidefinite Programming Basics and Applications

Semidefinite Programming Basics and Applications Semidefinite Programming Basics and Applications Ray Pörn, principal lecturer Åbo Akademi University Novia University of Applied Sciences Content What is semidefinite programming (SDP)? How to represent

More information

Interior Point Methods: Second-Order Cone Programming and Semidefinite Programming

Interior Point Methods: Second-Order Cone Programming and Semidefinite Programming School of Mathematics T H E U N I V E R S I T Y O H F E D I N B U R G Interior Point Methods: Second-Order Cone Programming and Semidefinite Programming Jacek Gondzio Email: J.Gondzio@ed.ac.uk URL: http://www.maths.ed.ac.uk/~gondzio

More information

Handout 8: Dealing with Data Uncertainty

Handout 8: Dealing with Data Uncertainty MFE 5100: Optimization 2015 16 First Term Handout 8: Dealing with Data Uncertainty Instructor: Anthony Man Cho So December 1, 2015 1 Introduction Conic linear programming CLP, and in particular, semidefinite

More information

c 2000 Society for Industrial and Applied Mathematics

c 2000 Society for Industrial and Applied Mathematics SIAM J. OPIM. Vol. 10, No. 3, pp. 750 778 c 2000 Society for Industrial and Applied Mathematics CONES OF MARICES AND SUCCESSIVE CONVEX RELAXAIONS OF NONCONVEX SES MASAKAZU KOJIMA AND LEVEN UNÇEL Abstract.

More information

1 Directional Derivatives and Differentiability

1 Directional Derivatives and Differentiability Wednesday, January 18, 2012 1 Directional Derivatives and Differentiability Let E R N, let f : E R and let x 0 E. Given a direction v R N, let L be the line through x 0 in the direction v, that is, L :=

More information

c 2004 Society for Industrial and Applied Mathematics

c 2004 Society for Industrial and Applied Mathematics SIAM J. OPTIM. Vol. 15, No. 1, pp. 275 302 c 2004 Society for Industrial and Applied Mathematics GLOBAL MINIMIZATION OF NORMAL QUARTIC POLYNOMIALS BASED ON GLOBAL DESCENT DIRECTIONS LIQUN QI, ZHONG WAN,

More information

SDP Relaxations for MAXCUT

SDP Relaxations for MAXCUT SDP Relaxations for MAXCUT from Random Hyperplanes to Sum-of-Squares Certificates CATS @ UMD March 3, 2017 Ahmed Abdelkader MAXCUT SDP SOS March 3, 2017 1 / 27 Overview 1 MAXCUT, Hardness and UGC 2 LP

More information

Complexity of Deciding Convexity in Polynomial Optimization

Complexity of Deciding Convexity in Polynomial Optimization Complexity of Deciding Convexity in Polynomial Optimization Amir Ali Ahmadi Joint work with: Alex Olshevsky, Pablo A. Parrilo, and John N. Tsitsiklis Laboratory for Information and Decision Systems Massachusetts

More information

CSC Linear Programming and Combinatorial Optimization Lecture 10: Semidefinite Programming

CSC Linear Programming and Combinatorial Optimization Lecture 10: Semidefinite Programming CSC2411 - Linear Programming and Combinatorial Optimization Lecture 10: Semidefinite Programming Notes taken by Mike Jamieson March 28, 2005 Summary: In this lecture, we introduce semidefinite programming

More information

Semidefinite and Second Order Cone Programming Seminar Fall 2001 Lecture 5

Semidefinite and Second Order Cone Programming Seminar Fall 2001 Lecture 5 Semidefinite and Second Order Cone Programming Seminar Fall 2001 Lecture 5 Instructor: Farid Alizadeh Scribe: Anton Riabov 10/08/2001 1 Overview We continue studying the maximum eigenvalue SDP, and generalize

More information

Convex Optimization. (EE227A: UC Berkeley) Lecture 6. Suvrit Sra. (Conic optimization) 07 Feb, 2013

Convex Optimization. (EE227A: UC Berkeley) Lecture 6. Suvrit Sra. (Conic optimization) 07 Feb, 2013 Convex Optimization (EE227A: UC Berkeley) Lecture 6 (Conic optimization) 07 Feb, 2013 Suvrit Sra Organizational Info Quiz coming up on 19th Feb. Project teams by 19th Feb Good if you can mix your research

More information

Convex relaxation. In example below, we have N = 6, and the cut we are considering

Convex relaxation. In example below, we have N = 6, and the cut we are considering Convex relaxation The art and science of convex relaxation revolves around taking a non-convex problem that you want to solve, and replacing it with a convex problem which you can actually solve the solution

More information

The maximal stable set problem : Copositive programming and Semidefinite Relaxations

The maximal stable set problem : Copositive programming and Semidefinite Relaxations The maximal stable set problem : Copositive programming and Semidefinite Relaxations Kartik Krishnan Department of Mathematical Sciences Rensselaer Polytechnic Institute Troy, NY 12180 USA kartis@rpi.edu

More information

ON SUM OF SQUARES DECOMPOSITION FOR A BIQUADRATIC MATRIX FUNCTION

ON SUM OF SQUARES DECOMPOSITION FOR A BIQUADRATIC MATRIX FUNCTION Annales Univ. Sci. Budapest., Sect. Comp. 33 (2010) 273-284 ON SUM OF SQUARES DECOMPOSITION FOR A BIQUADRATIC MATRIX FUNCTION L. László (Budapest, Hungary) Dedicated to Professor Ferenc Schipp on his 70th

More information

Disentangling Orthogonal Matrices

Disentangling Orthogonal Matrices Disentangling Orthogonal Matrices Teng Zhang Department of Mathematics, University of Central Florida Orlando, Florida 3286, USA arxiv:506.0227v2 [math.oc] 8 May 206 Amit Singer 2 Department of Mathematics

More information

Largest dual ellipsoids inscribed in dual cones

Largest dual ellipsoids inscribed in dual cones Largest dual ellipsoids inscribed in dual cones M. J. Todd June 23, 2005 Abstract Suppose x and s lie in the interiors of a cone K and its dual K respectively. We seek dual ellipsoidal norms such that

More information

Introduction to Semidefinite Programming I: Basic properties a

Introduction to Semidefinite Programming I: Basic properties a Introduction to Semidefinite Programming I: Basic properties and variations on the Goemans-Williamson approximation algorithm for max-cut MFO seminar on Semidefinite Programming May 30, 2010 Semidefinite

More information

6-1 The Positivstellensatz P. Parrilo and S. Lall, ECC

6-1 The Positivstellensatz P. Parrilo and S. Lall, ECC 6-1 The Positivstellensatz P. Parrilo and S. Lall, ECC 2003 2003.09.02.10 6. The Positivstellensatz Basic semialgebraic sets Semialgebraic sets Tarski-Seidenberg and quantifier elimination Feasibility

More information

Applications of semidefinite programming in Algebraic Combinatorics

Applications of semidefinite programming in Algebraic Combinatorics Applications of semidefinite programming in Algebraic Combinatorics Tohoku University The 23rd RAMP Symposium October 24, 2011 We often want to 1 Bound the value of a numerical parameter of certain combinatorial

More information

arxiv: v2 [math-ph] 3 Jun 2017

arxiv: v2 [math-ph] 3 Jun 2017 Elasticity M -tensors and the Strong Ellipticity Condition Weiyang Ding Jinjie Liu Liqun Qi Hong Yan June 6, 2017 arxiv:170509911v2 [math-ph] 3 Jun 2017 Abstract In this paper, we propose a class of tensors

More information

Semidefinite and Second Order Cone Programming Seminar Fall 2012 Project: Robust Optimization and its Application of Robust Portfolio Optimization

Semidefinite and Second Order Cone Programming Seminar Fall 2012 Project: Robust Optimization and its Application of Robust Portfolio Optimization Semidefinite and Second Order Cone Programming Seminar Fall 2012 Project: Robust Optimization and its Application of Robust Portfolio Optimization Instructor: Farid Alizadeh Author: Ai Kagawa 12/12/2012

More information

15. Conic optimization

15. Conic optimization L. Vandenberghe EE236C (Spring 216) 15. Conic optimization conic linear program examples modeling duality 15-1 Generalized (conic) inequalities Conic inequality: a constraint x K where K is a convex cone

More information

We describe the generalization of Hazan s algorithm for symmetric programming

We describe the generalization of Hazan s algorithm for symmetric programming ON HAZAN S ALGORITHM FOR SYMMETRIC PROGRAMMING PROBLEMS L. FAYBUSOVICH Abstract. problems We describe the generalization of Hazan s algorithm for symmetric programming Key words. Symmetric programming,

More information

Sum-of-Squares Method, Tensor Decomposition, Dictionary Learning

Sum-of-Squares Method, Tensor Decomposition, Dictionary Learning Sum-of-Squares Method, Tensor Decomposition, Dictionary Learning David Steurer Cornell Approximation Algorithms and Hardness, Banff, August 2014 for many problems (e.g., all UG-hard ones): better guarantees

More information

Exact SDP Relaxations for Classes of Nonlinear Semidefinite Programming Problems

Exact SDP Relaxations for Classes of Nonlinear Semidefinite Programming Problems Exact SDP Relaxations for Classes of Nonlinear Semidefinite Programming Problems V. Jeyakumar and G. Li Revised Version:August 31, 2012 Abstract An exact semidefinite linear programming (SDP) relaxation

More information

Proximal-like contraction methods for monotone variational inequalities in a unified framework

Proximal-like contraction methods for monotone variational inequalities in a unified framework Proximal-like contraction methods for monotone variational inequalities in a unified framework Bingsheng He 1 Li-Zhi Liao 2 Xiang Wang Department of Mathematics, Nanjing University, Nanjing, 210093, China

More information

The Simplest Semidefinite Programs are Trivial

The Simplest Semidefinite Programs are Trivial The Simplest Semidefinite Programs are Trivial Robert J. Vanderbei Bing Yang Program in Statistics & Operations Research Princeton University Princeton, NJ 08544 January 10, 1994 Technical Report SOR-93-12

More information

Fast Algorithms for SDPs derived from the Kalman-Yakubovich-Popov Lemma

Fast Algorithms for SDPs derived from the Kalman-Yakubovich-Popov Lemma Fast Algorithms for SDPs derived from the Kalman-Yakubovich-Popov Lemma Venkataramanan (Ragu) Balakrishnan School of ECE, Purdue University 8 September 2003 European Union RTN Summer School on Multi-Agent

More information

6.854J / J Advanced Algorithms Fall 2008

6.854J / J Advanced Algorithms Fall 2008 MIT OpenCourseWare http://ocw.mit.edu 6.85J / 8.5J Advanced Algorithms Fall 008 For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms. 8.5/6.85 Advanced Algorithms

More information

UNDERGROUND LECTURE NOTES 1: Optimality Conditions for Constrained Optimization Problems

UNDERGROUND LECTURE NOTES 1: Optimality Conditions for Constrained Optimization Problems UNDERGROUND LECTURE NOTES 1: Optimality Conditions for Constrained Optimization Problems Robert M. Freund February 2016 c 2016 Massachusetts Institute of Technology. All rights reserved. 1 1 Introduction

More information

Fast ADMM for Sum of Squares Programs Using Partial Orthogonality

Fast ADMM for Sum of Squares Programs Using Partial Orthogonality Fast ADMM for Sum of Squares Programs Using Partial Orthogonality Antonis Papachristodoulou Department of Engineering Science University of Oxford www.eng.ox.ac.uk/control/sysos antonis@eng.ox.ac.uk with

More information

Lecture 6: Conic Optimization September 8

Lecture 6: Conic Optimization September 8 IE 598: Big Data Optimization Fall 2016 Lecture 6: Conic Optimization September 8 Lecturer: Niao He Scriber: Juan Xu Overview In this lecture, we finish up our previous discussion on optimality conditions

More information

On deterministic reformulations of distributionally robust joint chance constrained optimization problems

On deterministic reformulations of distributionally robust joint chance constrained optimization problems On deterministic reformulations of distributionally robust joint chance constrained optimization problems Weijun Xie and Shabbir Ahmed School of Industrial & Systems Engineering Georgia Institute of Technology,

More information

Real Symmetric Matrices and Semidefinite Programming

Real Symmetric Matrices and Semidefinite Programming Real Symmetric Matrices and Semidefinite Programming Tatsiana Maskalevich Abstract Symmetric real matrices attain an important property stating that all their eigenvalues are real. This gives rise to many

More information

The Hilbert Space of Random Variables

The Hilbert Space of Random Variables The Hilbert Space of Random Variables Electrical Engineering 126 (UC Berkeley) Spring 2018 1 Outline Fix a probability space and consider the set H := {X : X is a real-valued random variable with E[X 2

More information

CSCI 1951-G Optimization Methods in Finance Part 10: Conic Optimization

CSCI 1951-G Optimization Methods in Finance Part 10: Conic Optimization CSCI 1951-G Optimization Methods in Finance Part 10: Conic Optimization April 6, 2018 1 / 34 This material is covered in the textbook, Chapters 9 and 10. Some of the materials are taken from it. Some of

More information

A. Derivation of regularized ERM duality

A. Derivation of regularized ERM duality A. Derivation of regularized ERM dualit For completeness, in this section we derive the dual 5 to the problem of computing proximal operator for the ERM objective 3. We can rewrite the primal problem as

More information

Lecture 1. 1 Conic programming. MA 796S: Convex Optimization and Interior Point Methods October 8, Consider the conic program. min.

Lecture 1. 1 Conic programming. MA 796S: Convex Optimization and Interior Point Methods October 8, Consider the conic program. min. MA 796S: Convex Optimization and Interior Point Methods October 8, 2007 Lecture 1 Lecturer: Kartik Sivaramakrishnan Scribe: Kartik Sivaramakrishnan 1 Conic programming Consider the conic program min s.t.

More information

Principal Component Analysis

Principal Component Analysis Machine Learning Michaelmas 2017 James Worrell Principal Component Analysis 1 Introduction 1.1 Goals of PCA Principal components analysis (PCA) is a dimensionality reduction technique that can be used

More information

Bounding Probability of Small Deviation: A Fourth Moment Approach. December 13, 2007

Bounding Probability of Small Deviation: A Fourth Moment Approach. December 13, 2007 Bounding Probability of Small Deviation: A Fourth Moment Approach Simai He, Jiawei Zhang, and Shuzhong Zhang December, 007 Abstract In this paper we study the problem of bounding the value of the probability

More information

min f(x). (2.1) Objectives consisting of a smooth convex term plus a nonconvex regularization term;

min f(x). (2.1) Objectives consisting of a smooth convex term plus a nonconvex regularization term; Chapter 2 Gradient Methods The gradient method forms the foundation of all of the schemes studied in this book. We will provide several complementary perspectives on this algorithm that highlight the many

More information

A New Self-Dual Embedding Method for Convex Programming

A New Self-Dual Embedding Method for Convex Programming A New Self-Dual Embedding Method for Convex Programming Shuzhong Zhang October 2001; revised October 2002 Abstract In this paper we introduce a conic optimization formulation to solve constrained convex

More information

MTH 2032 SemesterII

MTH 2032 SemesterII MTH 202 SemesterII 2010-11 Linear Algebra Worked Examples Dr. Tony Yee Department of Mathematics and Information Technology The Hong Kong Institute of Education December 28, 2011 ii Contents Table of Contents

More information

Global Quadratic Minimization over Bivalent Constraints: Necessary and Sufficient Global Optimality Condition

Global Quadratic Minimization over Bivalent Constraints: Necessary and Sufficient Global Optimality Condition Global Quadratic Minimization over Bivalent Constraints: Necessary and Sufficient Global Optimality Condition Guoyin Li Communicated by X.Q. Yang Abstract In this paper, we establish global optimality

More information

ON THE MINIMUM VOLUME COVERING ELLIPSOID OF ELLIPSOIDS

ON THE MINIMUM VOLUME COVERING ELLIPSOID OF ELLIPSOIDS ON THE MINIMUM VOLUME COVERING ELLIPSOID OF ELLIPSOIDS E. ALPER YILDIRIM Abstract. Let S denote the convex hull of m full-dimensional ellipsoids in R n. Given ɛ > 0 and δ > 0, we study the problems of

More information

A new approximation hierarchy for polynomial conic optimization

A new approximation hierarchy for polynomial conic optimization A new approximation hierarchy for polynomial conic optimization Peter J.C. Dickinson Janez Povh July 11, 2018 Abstract In this paper we consider polynomial conic optimization problems, where the feasible

More information

Introduction to Real Analysis Alternative Chapter 1

Introduction to Real Analysis Alternative Chapter 1 Christopher Heil Introduction to Real Analysis Alternative Chapter 1 A Primer on Norms and Banach Spaces Last Updated: March 10, 2018 c 2018 by Christopher Heil Chapter 1 A Primer on Norms and Banach Spaces

More information

What can be expressed via Conic Quadratic and Semidefinite Programming?

What can be expressed via Conic Quadratic and Semidefinite Programming? What can be expressed via Conic Quadratic and Semidefinite Programming? A. Nemirovski Faculty of Industrial Engineering and Management Technion Israel Institute of Technology Abstract Tremendous recent

More information

ELE539A: Optimization of Communication Systems Lecture 15: Semidefinite Programming, Detection and Estimation Applications

ELE539A: Optimization of Communication Systems Lecture 15: Semidefinite Programming, Detection and Estimation Applications ELE539A: Optimization of Communication Systems Lecture 15: Semidefinite Programming, Detection and Estimation Applications Professor M. Chiang Electrical Engineering Department, Princeton University March

More information

Research Reports on Mathematical and Computing Sciences

Research Reports on Mathematical and Computing Sciences ISSN 1342-284 Research Reports on Mathematical and Computing Sciences Exploiting Sparsity in Linear and Nonlinear Matrix Inequalities via Positive Semidefinite Matrix Completion Sunyoung Kim, Masakazu

More information

Solving large Semidefinite Programs - Part 1 and 2

Solving large Semidefinite Programs - Part 1 and 2 Solving large Semidefinite Programs - Part 1 and 2 Franz Rendl http://www.math.uni-klu.ac.at Alpen-Adria-Universität Klagenfurt Austria F. Rendl, Singapore workshop 2006 p.1/34 Overview Limits of Interior

More information

A strongly polynomial algorithm for linear systems having a binary solution

A strongly polynomial algorithm for linear systems having a binary solution A strongly polynomial algorithm for linear systems having a binary solution Sergei Chubanov Institute of Information Systems at the University of Siegen, Germany e-mail: sergei.chubanov@uni-siegen.de 7th

More information

A CONIC DANTZIG-WOLFE DECOMPOSITION APPROACH FOR LARGE SCALE SEMIDEFINITE PROGRAMMING

A CONIC DANTZIG-WOLFE DECOMPOSITION APPROACH FOR LARGE SCALE SEMIDEFINITE PROGRAMMING A CONIC DANTZIG-WOLFE DECOMPOSITION APPROACH FOR LARGE SCALE SEMIDEFINITE PROGRAMMING Kartik Krishnan Advanced Optimization Laboratory McMaster University Joint work with Gema Plaza Martinez and Tamás

More information

arxiv: v1 [math.oc] 26 Sep 2015

arxiv: v1 [math.oc] 26 Sep 2015 arxiv:1509.08021v1 [math.oc] 26 Sep 2015 Degeneracy in Maximal Clique Decomposition for Semidefinite Programs Arvind U. Raghunathan and Andrew V. Knyazev Mitsubishi Electric Research Laboratories 201 Broadway,

More information

Convex Optimization Overview (cnt d)

Convex Optimization Overview (cnt d) Conve Optimization Overview (cnt d) Chuong B. Do November 29, 2009 During last week s section, we began our study of conve optimization, the study of mathematical optimization problems of the form, minimize

More information

Chapter 1. Preliminaries. The purpose of this chapter is to provide some basic background information. Linear Space. Hilbert Space.

Chapter 1. Preliminaries. The purpose of this chapter is to provide some basic background information. Linear Space. Hilbert Space. Chapter 1 Preliminaries The purpose of this chapter is to provide some basic background information. Linear Space Hilbert Space Basic Principles 1 2 Preliminaries Linear Space The notion of linear space

More information

THE PERTURBATION BOUND FOR THE SPECTRAL RADIUS OF A NON-NEGATIVE TENSOR

THE PERTURBATION BOUND FOR THE SPECTRAL RADIUS OF A NON-NEGATIVE TENSOR THE PERTURBATION BOUND FOR THE SPECTRAL RADIUS OF A NON-NEGATIVE TENSOR WEN LI AND MICHAEL K. NG Abstract. In this paper, we study the perturbation bound for the spectral radius of an m th - order n-dimensional

More information

Lecture 1 Introduction

Lecture 1 Introduction L. Vandenberghe EE236A (Fall 2013-14) Lecture 1 Introduction course overview linear optimization examples history approximate syllabus basic definitions linear optimization in vector and matrix notation

More information

A priori bounds on the condition numbers in interior-point methods

A priori bounds on the condition numbers in interior-point methods A priori bounds on the condition numbers in interior-point methods Florian Jarre, Mathematisches Institut, Heinrich-Heine Universität Düsseldorf, Germany. Abstract Interior-point methods are known to be

More information

Lecture: Examples of LP, SOCP and SDP

Lecture: Examples of LP, SOCP and SDP 1/34 Lecture: Examples of LP, SOCP and SDP Zaiwen Wen Beijing International Center For Mathematical Research Peking University http://bicmr.pku.edu.cn/~wenzw/bigdata2018.html wenzw@pku.edu.cn Acknowledgement:

More information

CSC Linear Programming and Combinatorial Optimization Lecture 12: The Lift and Project Method

CSC Linear Programming and Combinatorial Optimization Lecture 12: The Lift and Project Method CSC2411 - Linear Programming and Combinatorial Optimization Lecture 12: The Lift and Project Method Notes taken by Stefan Mathe April 28, 2007 Summary: Throughout the course, we have seen the importance

More information

Certifying the Global Optimality of Graph Cuts via Semidefinite Programming: A Theoretic Guarantee for Spectral Clustering

Certifying the Global Optimality of Graph Cuts via Semidefinite Programming: A Theoretic Guarantee for Spectral Clustering Certifying the Global Optimality of Graph Cuts via Semidefinite Programming: A Theoretic Guarantee for Spectral Clustering Shuyang Ling Courant Institute of Mathematical Sciences, NYU Aug 13, 2018 Joint

More information

A Randomized Algorithm for the Approximation of Matrices

A Randomized Algorithm for the Approximation of Matrices A Randomized Algorithm for the Approximation of Matrices Per-Gunnar Martinsson, Vladimir Rokhlin, and Mark Tygert Technical Report YALEU/DCS/TR-36 June 29, 2006 Abstract Given an m n matrix A and a positive

More information