Real eigenvalues of nonsymmetric tensors
Comput Optim Appl (2018) 70:1–32

Real eigenvalues of nonsymmetric tensors

Jiawang Nie · Xinzhen Zhang

Received: 30 May 2017 / Published online: 4 December 2017
© Springer Science+Business Media, LLC, part of Springer Nature 2017

Abstract This paper discusses the computation of real Z-eigenvalues and H-eigenvalues of nonsymmetric tensors. A generic nonsymmetric tensor has finitely many Z-eigenvalues, while there may be infinitely many for special tensors. The number of H-eigenvalues is finite for all tensors. We propose Lasserre type semidefinite relaxation methods for computing such eigenvalues. For every tensor that has finitely many real Z-eigenvalues, we can compute all of them; each of them can be computed by solving a finite sequence of semidefinite relaxations. For every tensor, we can compute all its real H-eigenvalues; each of them can be computed by solving a finite sequence of semidefinite relaxations.

Keywords Tensor · Z-eigenvalue · H-eigenvalue · Lasserre's hierarchy · Semidefinite relaxation

Mathematics Subject Classification 15A18 · 15A69 · 90C22

✉ Xinzhen Zhang, xzzhang@tju.edu.cn
Jiawang Nie, njw@math.ucsd.edu
1 Department of Mathematics, University of California San Diego, 9500 Gilman Drive, La Jolla, CA 92093, USA
2 School of Mathematics, Tianjin University, Tianjin, China

1 Introduction

For positive integers m and n_1, n_2, ..., n_m, an m-order and (n_1, n_2, ..., n_m)-dimensional real tensor can be viewed as an array in the space R^{n_1 × n_2 × ··· × n_m}. Such a tensor A can be indexed as
A = (A_{i_1 ... i_m}),  1 ≤ i_j ≤ n_j,  j = 1, ..., m.   (1.1)

When n_1 = ··· = n_m = n, A is called an m-order n-dimensional tensor. In such a case, the tensor space R^{n × ··· × n} is denoted as T_m(R^n). A tensor in T_m(R^n) is said to be symmetric if its entries are invariant under permutations of indices [i.e., A_{i_1...i_m} = A_{j_1...j_m} whenever (i_1, ..., i_m) is a permutation of (j_1, ..., j_m)]. The subspace of symmetric tensors in T_m(R^n) is denoted as S_m(R^n). For A ∈ T_m(R^n) and x := (x_1, ..., x_n), we use the notation

A x^m := Σ_{1 ≤ i_1, ..., i_m ≤ n} A_{i_1 i_2 ... i_m} x_{i_1} x_{i_2} ··· x_{i_m},

A x^{m-1} := ( Σ_{1 ≤ i_2, ..., i_m ≤ n} A_{j i_2 ... i_m} x_{i_2} ··· x_{i_m} )_{j = 1, ..., n}.   (1.2)

Note that A x^{m-1} is an n-dimensional vector. First, we give some definitions of tensor eigenvalues, which can be found in [2,3].

Definition 1.1 For a tensor A ∈ T_m(R^n), a pair (λ, u) ∈ R × R^n is called a real Z-eigenpair of A if

A u^{m-1} = λ u,  u^T u = 1.   (1.3)

(The superscript T denotes the transpose.) Such a λ is called a real Z-eigenvalue, and such a u is called a real Z-eigenvector associated to λ.

Definition 1.2 For a tensor A ∈ T_m(R^n), a pair (λ, u) ∈ R × R^n is called a real H-eigenpair of A if

A u^{m-1} = λ u^{[m-1]},  u ≠ 0.   (1.4)

(The symbol u^{[m-1]} denotes the vector such that (u^{[m-1]})_i = (u_i)^{m-1} for i = 1, ..., n.) Such a λ is called a real H-eigenvalue, and such a u is called a real H-eigenvector associated to λ.

Tensor eigenvalues are defined in Lim [2]. For real symmetric tensors, they are also defined in Qi [3]. The Z-eigenvalues and H-eigenvalues have applications in signal processing, control, and diffusion imaging [35,37,38]. For recent work on tensor computations, we refer to [22,30,34]. In particular, Z-eigenvalues have important applications in higher order Markov chains, which are shown in Sect. 2.2. Definitions 1.1 and 1.2 are about real eigenvalues. Complex eigenvalues can be defined similarly, if complex values are allowed for (λ, u). We refer to the work [3,32]. There also exist other types of tensor eigenvalues in [4,5].
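The contractions in (1.2) and the eigenpair conditions (1.3)-(1.4) are straightforward to evaluate numerically. The following sketch (plain Python; the diagonal test tensor is a hypothetical example chosen here for illustration, not one from the paper) checks a Z-eigenpair and an H-eigenpair of a 3-order 2-dimensional tensor:

```python
import itertools
import math

def contract(A, x, m, n):
    """Compute the vector A x^{m-1} as in (1.2): the j-th entry sums
    A[j][i2]...[im] * x[i2] * ... * x[im] over all index tuples."""
    Ax = []
    for j in range(n):
        total = 0.0
        for idx in itertools.product(range(n), repeat=m - 1):
            entry = A[j]
            for i in idx:
                entry = entry[i]
            prod = 1.0
            for i in idx:
                prod *= x[i]
            total += entry * prod
        Ax.append(total)
    return Ax

def z_residual(A, lam, u, m, n):
    """Max-norm residual of A u^{m-1} = lam*u with u^T u = 1, as in (1.3)."""
    Au = contract(A, u, m, n)
    res = max(abs(Au[j] - lam * u[j]) for j in range(n))
    return max(res, abs(sum(ui * ui for ui in u) - 1.0))

def h_residual(A, lam, u, m, n):
    """Max-norm residual of A u^{m-1} = lam * u^{[m-1]}, as in (1.4)."""
    Au = contract(A, u, m, n)
    return max(abs(Au[j] - lam * u[j] ** (m - 1)) for j in range(n))

# A diagonal example: A_{111} = 2, A_{222} = 3, all other entries zero.
m, n = 3, 2
A = [[[2.0, 0.0], [0.0, 0.0]], [[0.0, 0.0], [0.0, 3.0]]]

# (2, (1, 0)) is both a Z-eigenpair and an H-eigenpair here.
print(z_residual(A, 2.0, [1.0, 0.0], m, n))   # ~0
print(h_residual(A, 2.0, [1.0, 0.0], m, n))   # ~0

# A "mixed" Z-eigenpair: lam = 6/sqrt(13), u = (3, 2)/sqrt(13).
lam = 6.0 / math.sqrt(13)
u = [3.0 / math.sqrt(13), 2.0 / math.sqrt(13)]
print(z_residual(A, lam, u, m, n))            # ~0
```

For the diagonal tensor above, the mixed pair solves 2u_1^2 = λu_1, 3u_2^2 = λu_2 on the unit circle, which is easy to verify by hand.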
In this paper, we focus on the computation of real Z/H-eigenvalues. When a tensor is symmetric, the computation of real Z/H-eigenvalues is discussed in [5]. For the case (m, n) = (3, 2), computing the largest Z-eigenvalue is discussed in [36]. Shifted power methods are proposed in [5,40]. In [24], a method is proposed for finding best rank-1 approximations, which is equivalent to computing the largest Z-eigenvalue. As shown in [2,40], it is NP-hard to compute extreme eigenvalues of tensors. For nonnegative tensors, the largest H-eigenvalue can be computed by using the Perron-Frobenius theorem [4,23].
For nonsymmetric tensors, there is not much earlier work on computing all real Z/H-eigenvalues. An elementary approach for this task is to solve the polynomial systems (1.3) and (1.4) directly, obtaining all complex solutions by symbolic methods and then selecting the real ones. This approach is typically too expensive to be used in computation. A numerical experiment for this is given in Sect. 5.

There are fundamental differences between symmetric and nonsymmetric tensor eigenvalues. For symmetric tensors, the Eqs. (1.3) and (1.4) follow from the Karush-Kuhn-Tucker (KKT) conditions of some polynomial optimization problems, which show the existence of real Z-eigenvalues and H-eigenvalues. Furthermore, every symmetric tensor has finitely many Z-eigenvalues [5]. However, the Eqs. (1.3) and (1.4) for nonsymmetric tensors may not be the KKT conditions of optimization problems. The following examples show the differences between symmetric and nonsymmetric tensor eigenvalues.

Example 1.3 Consider the tensor A ∈ T_4(R^2) such that A_{ijkl} = 0 except

A_{1112} = A_{1222} = 1,  A_{2111} = A_{2122} = −1.

By the definition, (λ, x) is a Z-eigenpair if and only if

(x_1^2 + x_2^2) x_2 = λ x_1,  −(x_1^2 + x_2^2) x_1 = λ x_2,  x_1^2 + x_2^2 = 1.

One can check that A has no real Z-eigenvalues, and no complex ones either. This is confirmed by Theorem 3.1(i), because the semidefinite relaxation (3.4) is infeasible for the order k = 3. By the definition, (λ, x) is an H-eigenpair if and only if it satisfies

(x_1^2 + x_2^2) x_2 = λ x_1^3,  −(x_1^2 + x_2^2) x_1 = λ x_2^3,  (x_1, x_2) ≠ (0, 0).

The tensor A has no real H-eigenvalues. This is also confirmed by Theorem 4.2(i), because the semidefinite relaxation (4.5) is infeasible for the order k = 4.

It is possible that a tensor has infinitely many Z-eigenvalues.

Example 1.4 Consider the tensor A ∈ T_4(R^2) such that A_{ijkl} = 0 except A_{1111} = A_{2211} = 1. Then, (λ, x) is a Z-eigenpair of A if and only if

x_1^3 = λ x_1,  x_1^2 x_2 = λ x_2,  x_1^2 + x_2^2 = 1.   (1.5)
One can check that every λ ∈ [0, 1] is a real Z-eigenvalue, associated to the Z-eigenvectors (±√λ, ±√(1−λ)).

Note that the set of Z/H-eigenvalues of a nonsymmetric tensor changes if A x^{m-1} is defined with respect to a different mode. To illustrate this fact, we consider the (2)-mode contraction [A x^{m-1}]_{(2)} defined by

[A x^{m-1}]_{(2)} := ( Σ_{1 ≤ i_1, i_3, ..., i_m ≤ n} A_{i_1 j i_3 ... i_m} x_{i_1} x_{i_3} ··· x_{i_m} )_{j = 1, ..., n}.

By direct computation, there is only one Z-eigenvalue and one H-eigenvalue in the (2)-mode for the tensors in Examples 1.3 and 1.4. These are different from the eigenvalues found in Examples 1.3 and 1.4.

However, we would like to remark that the above examples are not generic cases. That is, every generic tensor has finitely many Z-eigenvalues, and their number is given by an explicit formula. Here, the word generic is meant in the sense of the Zariski topology (i.e., there exists a polynomial φ in the entries of A such that if φ(A) ≠ 0 then A has finitely many Z-eigenvalues). We refer to [3] for this result. On the other hand, every tensor has finitely many H-eigenvalues, as shown in Proposition 4.1. From these facts, computing all real Z/H-eigenvalues is a well-posed question for generic tensors.

In this paper, we propose numerical methods for computing all real Z-eigenvalues (if there are finitely many) and all real H-eigenvalues. For symmetric tensors, the Z-eigenvalues and H-eigenvalues are critical values of certain polynomial optimization problems. This property was used extensively in [5]. The method in [5] is based on Jacobian SDP relaxations [26], which are used for polynomial optimization. Indeed, the same kind of method can be used to compute all local minimum values in polynomial optimization [29]. However, the method in [5] is not suitable for computing eigenvalues of nonsymmetric tensors, because their eigenvalues are no longer critical values of polynomial optimization problems.

This paper is organized as follows.
Section 2 presents some preliminaries on polynomial optimization, tensor eigenvalues, and an application example in higher order Markov chains. Section 3 proposes Lasserre type semidefinite relaxations for computing Z-eigenvalues. If there are finitely many real Z-eigenvalues, each of them can be computed by solving a finite sequence of semidefinite relaxations. Section 4 proposes Lasserre type semidefinite relaxations for computing all H-eigenvalues. Each of them can be computed by solving a finite sequence of semidefinite relaxations. Numerical examples are shown in Sect. 5. We make some discussions in Sect. 6.

2 Preliminaries and motivation

In this section, we recall some basics on polynomial optimization and tensor eigenvalues. After that, we present an application example arising in higher order Markov chains.
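Before turning to the preliminaries, the continuum of Z-eigenvalues in Example 1.4 is easy to confirm numerically: for each λ ∈ [0, 1], the vector (√λ, √(1−λ)) satisfies the system (1.5). A short check in plain Python:

```python
import math

def residual_1_5(lam):
    """Residual of the system (1.5) at x = (sqrt(lam), sqrt(1 - lam))."""
    x1, x2 = math.sqrt(lam), math.sqrt(1.0 - lam)
    return max(
        abs(x1 ** 3 - lam * x1),        # x1^3 = lam * x1
        abs(x1 ** 2 * x2 - lam * x2),   # x1^2 * x2 = lam * x2
        abs(x1 ** 2 + x2 ** 2 - 1.0),   # x1^2 + x2^2 = 1
    )

# Every lam in [0, 1] yields a genuine Z-eigenpair.
for lam in [0.0, 0.25, 0.5, 0.75, 1.0]:
    assert residual_1_5(lam) < 1e-12
print("all residuals below 1e-12")
```

This is exactly the degenerate behavior that the genericity discussion above rules out for generic tensors.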
2.1 Preliminaries

In this subsection, we review some basics in polynomial optimization. We refer to [7,8] for surveys in the area. In the space R^n, ||u|| denotes the standard Euclidean norm of a vector u. For t ∈ R, ⌈t⌉ denotes the smallest integer not smaller than t. Let R[x] be the ring of polynomials with real coefficients in the variables x := (x_1, ..., x_n). The degree of a polynomial p, denoted by deg(p), is the total degree, i.e., the highest degree of its monomials. Denote by R[x]_d the set of real polynomials in R[x] whose degrees are no greater than d.

For a polynomial tuple h = (h_1, h_2, ..., h_s), the ideal generated by h is the set

I(h) := h_1 R[x] + h_2 R[x] + ··· + h_s R[x].

The k-th truncation of I(h) is the set

I_k(h) := h_1 R[x]_{k − deg(h_1)} + ··· + h_s R[x]_{k − deg(h_s)}.

The complex and real algebraic varieties of h are respectively defined as

V_C(h) := {x ∈ C^n : h(x) = 0},  V_R(h) := V_C(h) ∩ R^n.

A polynomial p is said to be a sum of squares (SOS) if there exist p_1, p_2, ..., p_r ∈ R[x] such that p = p_1^2 + p_2^2 + ··· + p_r^2. The set of all SOS polynomials is denoted as Σ[x]. For a given degree m, denote Σ[x]_m := Σ[x] ∩ R[x]_m. The quadratic module generated by a polynomial tuple g = (g_1, ..., g_t) is

Q(g) := Σ[x] + g_1 Σ[x] + ··· + g_t Σ[x].

The k-th truncation of the quadratic module Q(g) is the set

Q_k(g) := Σ[x]_{2k} + g_1 Σ[x]_{2k − deg(g_1)} + ··· + g_t Σ[x]_{2k − deg(g_t)}.

Clearly, it holds that

I(h) = ∪_k I_k(h),  Q(g) = ∪_k Q_k(g).

If the tuple g is empty, then Q(g) = Σ[x] and Q_k(g) = Σ[x]_{2k}. Let N be the set of nonnegative integers. For x := (x_1, ..., x_n), α := (α_1, ..., α_n) and a degree d, denote

x^α := x_1^{α_1} ··· x_n^{α_n},  |α| := α_1 + ··· + α_n,  N^n_d := {α ∈ N^n : |α| ≤ d}.
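The index set N^n_d just defined, together with the monomials x^α it labels, is the basic data structure behind the moment vectors used below; it has C(n+d, n) elements. A small enumeration sketch:

```python
from itertools import product
from math import comb

def multi_indices(n, d):
    """Enumerate N^n_d = {alpha in N^n : |alpha| <= d}."""
    return [a for a in product(range(d + 1), repeat=n) if sum(a) <= d]

def monomial(x, alpha):
    """Evaluate x^alpha = x_1^{a_1} * ... * x_n^{a_n}."""
    val = 1.0
    for xi, ai in zip(x, alpha):
        val *= xi ** ai
    return val

idx = multi_indices(2, 2)
print(len(idx))                        # 6 = C(2+2, 2)
assert len(idx) == comb(2 + 2, 2)
print(monomial([2.0, 3.0], (1, 1)))    # 6.0
```

The cardinality C(n+d, n) grows quickly with n and d, which is why the semidefinite relaxations below are solved for the smallest orders first.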
Denote by R^{N^n_d} the space of all vectors y that are indexed by α ∈ N^n_d. For y ∈ R^{N^n_d}, we write y = (y_α)_{α ∈ N^n_d}. For f = Σ_{α ∈ N^n_d} f_α x^α ∈ R[x]_d and y ∈ R^{N^n_d}, we define the operation

⟨f, y⟩ := Σ_{α ∈ N^n_d} f_α y_α.   (2.1)

For every fixed f ∈ R[x]_d, ⟨f, y⟩ is a linear functional in y; for every fixed y ∈ R^{N^n_d}, ⟨f, y⟩ is a linear functional in f. The operation ⟨·,·⟩ gives a dual relationship between the polynomial space R[x]_d and the vector space R^{N^n_d}. In ⟨f, y⟩, the polynomial f is on the left side, while the vector y is on the right. For an integer t ≤ d and y ∈ R^{N^n_d}, denote the t-th truncation of y as

y|_t := (y_α)_{α ∈ N^n_t}.   (2.2)

Let q ∈ R[x] with deg(q) ≤ 2k. For each y ∈ R^{N^n_{2k}}, ⟨q p^2, y⟩ is a quadratic form in vec(p), the coefficient vector of a polynomial p with deg(q p^2) ≤ 2k. Let L_q^{(k)}(y) be the symmetric matrix such that

⟨q p^2, y⟩ = vec(p)^T ( L_q^{(k)}(y) ) vec(p).   (2.3)

The matrix L_q^{(k)}(y) is called the k-th localizing matrix of q generated by y. It is linear in y. For instance, when n = 2, k = 2 and q = x_1 x_2 − x_1^2 − x_2^2,

L_q^{(2)}(y) =
[ y_{11} − y_{20} − y_{02}   y_{21} − y_{30} − y_{12}   y_{12} − y_{21} − y_{03} ]
[ y_{21} − y_{30} − y_{12}   y_{31} − y_{40} − y_{22}   y_{22} − y_{31} − y_{13} ]
[ y_{12} − y_{21} − y_{03}   y_{22} − y_{31} − y_{13}   y_{13} − y_{22} − y_{04} ].

If q = (q_1, ..., q_r) is a tuple of polynomials, we then define

L_q^{(k)}(y) := diag( L_{q_1}^{(k)}(y), ..., L_{q_r}^{(k)}(y) ).

It is a block diagonal matrix. When q = 1 (the constant one polynomial), L_1^{(k)}(y) is called the k-th moment matrix generated by y, and we denote

M_k(y) := L_1^{(k)}(y).   (2.4)

For instance, when n = 2 and k = 2,
M_2(y) =
[ y_{00}  y_{10}  y_{01}  y_{20}  y_{11}  y_{02} ]
[ y_{10}  y_{20}  y_{11}  y_{30}  y_{21}  y_{12} ]
[ y_{01}  y_{11}  y_{02}  y_{21}  y_{12}  y_{03} ]
[ y_{20}  y_{30}  y_{21}  y_{40}  y_{31}  y_{22} ]
[ y_{11}  y_{21}  y_{12}  y_{31}  y_{22}  y_{13} ]
[ y_{02}  y_{12}  y_{03}  y_{22}  y_{13}  y_{04} ].

For a degree d, denote the monomial vector

[x]_d := [ 1, x_1, ..., x_n, x_1^2, x_1 x_2, ..., x_n^2, ..., x_1^d, ..., x_n^d ]^T.   (2.5)

As shown in Example 1.4, there may be infinitely many real Z-eigenvalues for some tensors. But this is not the case for a generic tensor. In (1.3), a real Z-eigenpair (λ, u) of a tensor A is called isolated if there exists ε > 0 such that no other real Z-eigenpair (μ, v) satisfies |λ − μ| + ||u − v|| < ε. Similarly, a real Z-eigenvalue λ is called isolated if there exists ε > 0 such that no other real Z-eigenvalue μ satisfies |λ − μ| < ε. In practice, we need to check whether a Z-eigenvalue is isolated or not. Denote the function

F(λ, x) := [ A x^{m-1} − λ x ; x^T x − 1 ].   (2.6)

Clearly, (λ, u) is a Z-eigenpair if and only if F(λ, u) = 0. Let J(λ, x) be the Jacobian matrix of the vector function F(λ, x) with respect to (λ, x).

Lemma 2.1 Let A ∈ T_m(R^n) and let λ be a real Z-eigenvalue of A.
(i) If u is a real Z-eigenvector associated to λ and J(λ, u) is nonsingular, then (λ, u) is isolated.
(ii) If u_1, ..., u_N are all the real Z-eigenvectors of A associated to λ and each J(λ, u_i) is nonsingular, then λ is an isolated real Z-eigenvalue of A.

Lemma 2.1 follows directly from the Inverse Function Theorem. For cleanness of the paper, its proof is omitted here.

2.2 Motivation: application in higher order Markov chains

In higher order Markov chains [9,20], an mth order Markov chain fits observed data by an m-order nonsymmetric tensor P ∈ T_m(R^n), whose entries are given such that

P_{i_1 i_2 ... i_m} = Prob( X_t = i_1 | X_{t-1} = i_2, ..., X_{t-m+1} = i_m ) ∈ [0, 1],

Σ_{i_1 = 1}^{n} P_{i_1 i_2 ... i_m} = 1,  for all i_2, ..., i_m ∈ {1, ..., n}.
Such a P is called a transition probability tensor. A vector v ∈ R^n is said to be a limiting (or stationary) probability distribution for P if

P v^{m-1} = v,  e^T v = 1,  v ≥ 0,

where e is the vector of all ones. The existence and uniqueness of stationary probability distributions were discussed in [9]. Earlier work on this subject can be found in [3,9,20]. Interestingly, the stationary probability distributions can be obtained by computing Z-eigenvectors of the nonsymmetric tensor P. This can be seen as follows. Suppose v is a stationary probability distribution of P. Let u = v/||v||; then

u^T u = 1,  u ≥ 0,  1 = e^T v = ||v|| (e^T u),  v = u/(e^T u),

P u^{m-1} = ||v||^{−(m−1)} P v^{m-1} = ||v||^{−(m−1)} v = ||v||^{2−m} u = (e^T u)^{m−2} u.

That is, u is a Z-eigenvector of P associated to the Z-eigenvalue (e^T u)^{m−2}. Conversely, if λ is a Z-eigenvalue with a Z-eigenvector u ≥ 0, then P u^{m-1} = λ u, u^T u = 1, and

λ (e^T u) = e^T ( P u^{m-1} ) = Σ_{1 ≤ i_2,...,i_m ≤ n} u_{i_2} ··· u_{i_m} Σ_{i_1 = 1}^{n} P_{i_1 i_2 ... i_m} = Σ_{1 ≤ i_2,...,i_m ≤ n} u_{i_2} ··· u_{i_m} = (e^T u)^{m−1}.

So, λ = (e^T u)^{m−2}. If we let v = u/(e^T u), then

P v^{m-1} = (e^T u)^{−(m−1)} P u^{m-1} = (e^T u)^{−(m−1)} λ u = (e^T u)^{−(m−1)} (e^T u)^{m−2} u = v,  e^T v = 1,  v ≥ 0.

That is, v is a stationary probability distribution vector. The above can be summarized as follows:

- If v is a stationary probability distribution vector for P, then u = v/||v|| is a nonnegative Z-eigenvector.
- If u is a nonnegative Z-eigenvector for P, then v = u/(e^T u) is a stationary probability distribution vector.

Therefore, the stationary probability distribution vectors can be obtained by computing all nonnegative Z-eigenvectors.

3 Computing real Z-eigenvalues

In this section, we first reformulate Z-eigenvalue computation as a polynomial optimization problem.
Let A ∈ T_m(R^n) be a tensor. A real pair (λ, u) is a Z-eigenpair of A if and only if A u^{m-1} = λ u and u^T u = 1. So,

λ = λ u^T u = u^T A u^{m-1} = A u^m.

Hence, u is a Z-eigenvector if and only if

A u^{m-1} = (A u^m) u,  u^T u = 1,

and the associated Z-eigenvalue is A u^m. A generic tensor has finitely many Z-eigenvalues, which are all simple, as shown in [3]. For special tensors, there might be infinitely many (see Example 1.4). In this section, we aim at computing all real Z-eigenvalues when there are finitely many.

Let h be the polynomial tuple

h = ( A x^{m-1} − (A x^m) x,  x^T x − 1 ).   (3.1)

Then, u is a Z-eigenvector of A if and only if h(u) = 0. Let Z(R, A) denote the set of real Z-eigenvalues of A. If it is a finite set, we list Z(R, A) monotonically as

λ_1 < λ_2 < ··· < λ_N.

We aim at computing them sequentially, from the smallest to the largest.

3.1 The smallest Z-eigenvalue

To compute the smallest Z-eigenvalue λ_1, we consider the polynomial optimization problem

min f(x) := A x^m  s.t.  h(x) = 0,   (3.2)

where h is as in (3.1). Note that u is a Z-eigenvector if and only if h(u) = 0, with the Z-eigenvalue f(u). The optimal value of (3.2) is λ_1, if V_R(h) ≠ ∅. Let

k_0 = ⌈(m + 1)/2⌉.   (3.3)

Lasserre's hierarchy [6] of semidefinite relaxations for solving (3.2) is

f_{1,k} := min ⟨f, y⟩  s.t.  ⟨1, y⟩ = 1,  L_h^{(k)}(y) = 0,  M_k(y) ⪰ 0,  y ∈ R^{N^n_{2k}},   (3.4)

for the orders k = k_0, k_0 + 1, .... See (2.3)-(2.4) for the notation L_h^{(k)}(y) and M_k(y). In the above, X ⪰ 0 means that the symmetric matrix X is positive semidefinite. The dual optimization problem of (3.4) is
f_{2,k} := max γ  s.t.  f − γ ∈ I_{2k}(h) + Σ[x]_{2k}.   (3.5)

As in [6], it can be shown that f_{2,k} ≤ f_{1,k} ≤ λ_1 for all k, and the sequences {f_{1,k}} and {f_{2,k}} are monotonically increasing.

Theorem 3.1 Let A ∈ T_m(R^n) and let Z(R, A) be the set of its real Z-eigenvalues. Then, we have the properties:
(i) The set Z(R, A) = ∅ if and only if the semidefinite relaxation (3.4) is infeasible for some order k.
(ii) If Z(R, A) ≠ ∅ and λ_1 is the smallest real Z-eigenvalue, then

lim_{k→∞} f_{2,k} = lim_{k→∞} f_{1,k} = λ_1.   (3.6)

If, in addition, Z(R, A) is a finite set, then for all k sufficiently big

f_{2,k} = f_{1,k} = λ_1.   (3.7)

(iii) Let k_0 be as in (3.3). Suppose y* is a minimizer of (3.4). If there exists t ≤ k such that

rank M_{t−k_0}(y*) = rank M_t(y*),   (3.8)

then λ_1 = f_{1,k} and there are r := rank M_t(y*) distinct real Z-eigenvectors u_1, ..., u_r associated to λ_1.
(iv) Suppose Z(R, A) is a finite set. If there are finitely many real Z-eigenvectors associated to λ_1, then, for all k big enough and for every minimizer y* of (3.4), the condition (3.8) is satisfied for some t ≤ k.

Proof (i) We prove the equivalence in two directions.
Sufficiency: Assume (3.4) is infeasible for some k. Then A has no real Z-eigenpairs. Suppose otherwise that (λ, u) is such a pair. Then [u]_{2k} [see (2.5) for the notation] is feasible for (3.4) for all values of k, which is a contradiction. So Z(R, A) = ∅.
Necessity: Assume Z(R, A) = ∅. Then the equation h(x) = 0 has no real solutions. By the Positivstellensatz [2], −1 ∈ I(h) + Σ[x]. So, when k is big enough, −1 ∈ I_{2k}(h) + Σ[x]_{2k}, and then the optimization (3.5) is unbounded from above. By duality theory, (3.4) must be infeasible for all k big enough.
(ii) Note that x^T x − 1 is a polynomial in the tuple h. So −(x^T x − 1)^2 ∈ I(h), and the set {x : −(x^T x − 1)^2 ≥ 0} = {x : x^T x = 1} is compact. Hence the ideal I(h) is archimedean [6]. The asymptotic convergence (3.6) is proved in Theorem 4.2 of [6]. Next, we prove the finite convergence (3.7) when Z(R, A) ≠ ∅ is a finite set.
Write Z(R, A) = {λ_1, ..., λ_N}, with λ_1 < ··· < λ_N. Let b_1, ..., b_N ∈ R[t] be univariate real polynomials in t such that b_i(λ_j) = 0 when i ≠ j and b_i(λ_j) = 1 when i = j. For i = 1, ..., N, let

s_i := (λ_i − λ_1) ( b_i(f(x)) )^2.

Let s := s_1 + ··· + s_N. Then s ∈ Σ[x]_{2k_1} for some k_1 > 0. The polynomial

f̂ := f − λ_1 − s

vanishes identically on V_R(h). By the Real Nullstellensatz [2, Corollary 4.1.8], there exist an integer l > 0 and q ∈ Σ[x] such that

f̂^{2l} + q ∈ I(h).

For all ε > 0 and c > 0, we can write f̂ + ε = φ_ε + θ_ε, where

φ_ε = −c ε^{1−2l} ( f̂^{2l} + q ),

θ_ε = ε ( 1 + f̂/ε + c (f̂/ε)^{2l} ) + c ε^{1−2l} q.

By Lemma 2.1 of [25], when c ≥ 1/(2l), there exists k_2 ≥ k_1 such that, for all ε > 0,

φ_ε ∈ I_{2k_2}(h),  θ_ε ∈ Σ[x]_{2k_2}.

Hence, we can get

f − (λ_1 − ε) = φ_ε + σ_ε,  where σ_ε = θ_ε + s ∈ Σ[x]_{2k_2},

for all ε > 0. This implies that, for all ε > 0, γ = λ_1 − ε is feasible in (3.5) for the order k_2. Thus, we get f_{2,k_2} ≥ λ_1. Note that f_{2,k} ≤ f_{1,k} ≤ λ_1 for all k, and the sequence {f_{2,k}} is monotonically increasing. So, (3.7) must be true when k ≥ k_2.
(iii) Note that M_t(y*) ⪰ 0 and L_h^{(t)}(y*) = 0, because t ≤ k. When (3.8) is satisfied, by Theorem 1.1 of [6], there exist r := rank M_t(y*) vectors u_1, ..., u_r ∈ V_R(h) such that [we refer to (2.5) for the notation [u_i]_{2t}]

y*|_{2t} = c_1 [u_1]_{2t} + ··· + c_r [u_r]_{2t},

with numbers c_1, ..., c_r > 0. The condition ⟨1, y⟩ = 1 implies that c_1 + ··· + c_r = 1. By the notation as in (2.1), we can see that ⟨f, [u_i]_{2k}⟩ = f(u_i), so

f_{1,k} = ⟨f, y*⟩ = c_1 f(u_1) + ··· + c_r f(u_r),
and

f_{1,k} ≤ f(u_i),  i = 1, ..., r,

because each [u_i]_{2k} is a feasible point for (3.4). Thus, we must have f_{1,k} = f(u_1) = ··· = f(u_r). Also note that f_{1,k} ≤ λ_1 and each u_i is a Z-eigenvector. So,

f_{1,k} = f(u_1) = ··· = f(u_r) = λ_1.

(iv) When Z(R, A) is a finite set, both {f_{1,k}} and {f_{2,k}} have finite convergence to λ_1, by item (ii). If (3.2) has finitely many minimizers, i.e., λ_1 has finitely many Z-eigenvectors, the rank condition (3.8) must be satisfied when k is sufficiently big. The conclusion follows from Theorem 2.6 of [27].

In computation, we start solving the semidefinite relaxation (3.4) from the smallest value of k, which is k_0 as in (3.3). If (3.4) is infeasible for k = k_0, then A has no real Z-eigenvalues; if (3.4) is feasible for k = k_0, then we solve it for an optimizer y*. If y* satisfies (3.8), then we get λ_1 = f_{1,k}; otherwise, we increase the value of k by one and repeat the above process. See Algorithm 3.6 for the implementation details. The tensor A in Example 1.3 has no real Z-eigenvalues. This is confirmed by the fact that the relaxation (3.4) is infeasible for k = 3.

Remark 3.2 (1) The rank condition (3.8) can be used as a criterion to check whether f_{1,k} = λ_1 or not. If it is satisfied, then we can get r distinct minimizers u_1, ..., u_r of (3.2), i.e., each u_i is a Z-eigenvector of A. The vectors u_i can be computed by the method in [10], which is implemented in GloptiPoly 3 [11].
(2) Suppose (3.8) holds. If rank M_k(y*) is maximum among the set of all optimizers of (3.4), then we can get all minimizers of (3.2) [8, 6.6], i.e., we can get all real Z-eigenvectors associated to λ_1. When (3.4)-(3.5) are solved by primal-dual interior point methods, typically we can get all real Z-eigenvectors associated to λ_1, provided there are finitely many.
(3) The condition (3.8) requires knowing the ranks of M_{t−k_0}(y*) and M_t(y*). It would be hard to evaluate them when the matrices are singular or close to singular.
The matrix rank equals the number of positive singular values. In common practice, one often determines the rank by counting the number of singular values bigger than a tolerance (say, 10^{-6}). This is a classical problem in numerical linear algebra. We refer to Demmel's book [8] for this question.
(4) For generic polynomial optimization problems, Lasserre's hierarchies of semidefinite relaxations have finite convergence, as shown in [28]. However, the problem (3.2) is not generic, because h depends on f. So, the finite convergence of (3.4) cannot be proved by using the results of [28].
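The rank evaluation in Remark 3.2(3) is usually implemented by thresholding singular values; a sketch with numpy (the helper name and the default tolerance 1e-6 are our choices for illustration, not from the paper):

```python
import numpy as np

def numerical_rank(M, tol=1e-6):
    """Number of singular values of M exceeding tol, as in Remark 3.2(3)."""
    return int(np.sum(np.linalg.svd(M, compute_uv=False) > tol))

# A 3x3 matrix of exact rank 2, perturbed by noise of size ~1e-9:
rng = np.random.default_rng(0)
B = rng.standard_normal((3, 2))
M = B @ B.T + 1e-9 * rng.standard_normal((3, 3))
print(numerical_rank(M))   # 2: the third singular value is ~1e-9, below tol
```

The choice of tolerance is the delicate point: if the smallest "true" singular value of the moment matrix is close to the threshold, the detected rank, and hence the flat truncation test (3.8), becomes unreliable.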
3.2 Larger Z-eigenvalues

Suppose the ith smallest Z-eigenvalue λ_i is known. We want to determine whether the next larger one λ_{i+1} exists or not. If it exists, we show how to compute it; if it does not, we give a certificate for the nonexistence. Let δ > 0 be a small number. Consider the problem

min f(x)  s.t.  h(x) = 0,  f(x) ≥ λ_i + δ.   (3.9)

Clearly, the optimal value of (3.9) is the smallest Z-eigenvalue that is greater than or equal to λ_i + δ. Let

ω(λ_i + δ) := min{ λ ∈ Z(R, A) : λ ≥ λ_i + δ }.   (3.10)

Lasserre's hierarchy of semidefinite relaxations for solving (3.9) is

f_{1,k}^{i+1} := min ⟨f, y⟩  s.t.  ⟨1, y⟩ = 1,  L_h^{(k)}(y) = 0,  M_k(y) ⪰ 0,  L_{f − λ_i − δ}^{(k)}(y) ⪰ 0,  y ∈ R^{N^n_{2k}},   (3.11)

for the orders k = k_0, k_0 + 1, .... The dual problem of (3.11) is

f_{2,k}^{i+1} := max γ  s.t.  f − γ ∈ I_{2k}(h) + Q_k(f − λ_i − δ).   (3.12)

Similar to Theorem 3.1, we have the following convergence result.

Theorem 3.3 Let A ∈ T_m(R^n) and Z(R, A) be the set of real Z-eigenvalues of A. Suppose λ_i ∈ Z(R, A). Then, we have the properties:
(i) The intersection Z(R, A) ∩ [λ_i + δ, +∞) = ∅ if and only if the semidefinite relaxation (3.11) is infeasible for some order k.
(ii) If Z(R, A) ∩ [λ_i + δ, +∞) ≠ ∅, then

lim_{k→∞} f_{2,k}^{i+1} = lim_{k→∞} f_{1,k}^{i+1} = ω(λ_i + δ).   (3.13)

If, in addition, Z(R, A) ∩ [λ_i + δ, +∞) is a finite set, then

f_{2,k}^{i+1} = f_{1,k}^{i+1} = ω(λ_i + δ)   (3.14)

for all k sufficiently big.
(iii) Suppose y* is a minimizer of (3.11). If (3.8) is satisfied for some t ≤ k, then ω(λ_i + δ) = f_{1,k}^{i+1} and there are r := rank M_t(y*) distinct real Z-eigenvectors u_1, ..., u_r associated to ω(λ_i + δ).
(iv) Suppose Z(R, A) ∩ [λ_i + δ, +∞) is a finite set and ω(λ_i + δ) has finitely many real Z-eigenvectors. Then, for all k big enough, and for every minimizer y* of (3.11), there exists t ≤ k satisfying (3.8).
Proof The proof is mostly the same as for Theorem 3.1. In the following, we only list the major differences.
(i) Necessity: If Z(R, A) ∩ [λ_i + δ, +∞) = ∅, then (3.9) is infeasible. By the Positivstellensatz, −1 ∈ I(h) + Q(f − λ_i − δ). The rest of the proof is the same as for Theorem 3.1(i).
(ii) The ideal I(h) is archimedean, and so is I(h) + Q(f − λ_i − δ). The asymptotic convergence (3.13) is implied by Theorem 4.2 of [6]. To prove the finite convergence (3.14), we follow the same proof as for Theorem 3.1(ii). Suppose Z(R, A) ∩ [λ_i + δ, +∞) = {ν_1, ..., ν_L}. Construct the polynomial s in the same way as there and let

f̂ = f − ω(λ_i + δ) − s.

Then f̂ vanishes identically on the set

{x ∈ R^n : h(x) = 0, f(x) − λ_i − δ ≥ 0}.

By the Real Nullstellensatz [2, Corollary 4.1.8], there exist an integer l > 0 and q ∈ Q(f − λ_i − δ) such that f̂^{2l} + q ∈ I(h). The rest of the proof is the same, except replacing Σ[x] by Q(f − λ_i − δ), and Σ[x]_{2k} by Q_k(f − λ_i − δ).
(iii)-(iv) The proof is the same as for Theorem 3.1(iii)-(iv).

The convergence of the semidefinite relaxations (3.11)-(3.12) can be checked by the condition (3.8). When it is satisfied, the real Z-eigenvectors u_1, ..., u_r can be computed by the method in [10]. Typically, we can get all real Z-eigenvectors if primal-dual interior-point methods are used to solve the semidefinite relaxations. We refer to Remark 3.2.

Next, we show how to use ω(λ_i + δ) to determine λ_{i+1}. Assume λ_i is isolated (otherwise, there are infinitely many Z-eigenvalues). If λ_i is the largest real Z-eigenvalue, then λ_1, ..., λ_i are all the real Z-eigenvalues and we can stop; otherwise, the next larger one λ_{i+1} exists. In that case, if δ > 0 in (3.9) is small enough, then ω(λ_i + δ) as in (3.10) equals λ_{i+1}. Consider the optimization problem

ν^+(λ_i, δ) := max f(x)  s.t.  h(x) = 0,  f(x) ≤ λ_i + δ.   (3.15)

The optimal value of (3.15) is the largest Z-eigenvalue of A that is smaller than or equal to λ_i + δ, i.e.,

ν^+(λ_i, δ) = max{ λ ∈ Z(R, A) : λ ≤ λ_i + δ }.
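On a finite spectrum, the roles of ω(λ_i + δ) and ν^+(λ_i, δ) are transparent. The toy sketch below (with the eigenvalue list given explicitly, rather than obtained from the relaxations) illustrates how the test ν^+(λ_i, δ) = λ_i certifies that δ did not skip over the next eigenvalue:

```python
def omega(spectrum, t):
    """omega(t) = min{lam in spectrum : lam >= t}, as in (3.10); None if empty."""
    cands = [lam for lam in spectrum if lam >= t]
    return min(cands) if cands else None

def nu_plus(spectrum, lam_i, delta):
    """nu^+(lam_i, delta) = max{lam in spectrum : lam <= lam_i + delta}, as in (3.15)."""
    return max(lam for lam in spectrum if lam <= lam_i + delta)

Z = [0.0, 1.0, 2.5]          # a toy set of real Z-eigenvalues
lam_i = 1.0

# delta small enough: nu^+ certifies lam_i, and omega finds the next eigenvalue.
print(nu_plus(Z, lam_i, 0.3))   # 1.0, equal to lam_i
print(omega(Z, lam_i + 0.3))    # 2.5, which is lam_{i+1}

# delta too large: nu^+ jumps past lam_i, so delta must be shrunk (e.g. delta/5).
print(nu_plus(Z, lam_i, 2.0))   # 2.5, not equal to lam_i
```

In the actual method, of course, the spectrum is unknown and these two values are obtained by solving the semidefinite relaxations; the logic of comparing them is the same.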
The next larger Z-eigenvalue λ_{i+1} can be determined as follows.
Theorem 3.4 Let A ∈ T_m(R^n) and δ > 0. Assume λ_i is an isolated Z-eigenvalue of A and λ_max is the largest one. Then, we have the properties:
(i) For all δ > 0 sufficiently small, ν^+(λ_i, δ) = λ_i.
(ii) If ν^+(λ_i, δ) = λ_i and (3.11) is infeasible for some k, then λ_i = λ_max and the next larger Z-eigenvalue λ_{i+1} does not exist.
(iii) If ν^+(λ_i, δ) = λ_i and the condition (3.8) is satisfied for some k, then the next larger Z-eigenvalue λ_{i+1} = f_{1,k}^{i+1}.

Proof (i) Note that ν^+(λ_i, δ) is the largest Z-eigenvalue smaller than or equal to λ_i + δ. When λ_i is isolated, for δ > 0 sufficiently small, we must have ν^+(λ_i, δ) = λ_i.
(ii) When (3.11) is infeasible for some k, by Theorem 3.3(i), all the Z-eigenvalues are smaller than λ_i + δ. Note that ν^+(λ_i, δ) is the largest Z-eigenvalue that is smaller than or equal to λ_i + δ. If λ_i = ν^+(λ_i, δ), then λ_i must be the largest Z-eigenvalue, i.e., λ_i = λ_max.
(iii) When (3.8) is satisfied for some k, by Theorem 3.3(iii), we know ω(λ_i + δ) = f_{1,k}^{i+1}. Note that ν^+(λ_i, δ) is the largest Z-eigenvalue that is smaller than or equal to λ_i + δ, while ω(λ_i + δ) is the smallest Z-eigenvalue that is bigger than or equal to λ_i + δ. Since λ_i = ν^+(λ_i, δ) and λ_i is isolated, we must have λ_{i+1} = ω(λ_i + δ), which is the smallest Z-eigenvalue bigger than λ_i.

Next, we show how to check whether a real Z-eigenvalue λ_i is isolated or not. Lemma 2.1 can be used to verify isolatedness, but it only gives a sufficient condition, which may not be necessary. Here, we give a sufficient and necessary condition. Consider the optimization problem

ν^−(λ_i, δ) := min f(x)  s.t.  h(x) = 0,  f(x) ≥ λ_i − δ.   (3.16)

Clearly, for all δ > 0, it holds that ν^−(λ_i, δ) ≤ λ_i ≤ ν^+(λ_i, δ).

Lemma 3.5 Let ν^+(λ_i, δ), ν^−(λ_i, δ) be the optimal values as in (3.15) and (3.16). Then, λ_i is an isolated real Z-eigenvalue of A if and only if, for some δ > 0,

ν^+(λ_i, δ) = ν^−(λ_i, δ).   (3.17)

When the above holds, λ_i is the unique Z-eigenvalue of A in [λ_i − δ, λ_i + δ].
Proof By the construction of h as in (3.1), u is a Z-eigenvector if and only if h(u) = 0. So, ν^+(λ_i, δ) is the largest Z-eigenvalue that is smaller than or equal to λ_i + δ, while ν^−(λ_i, δ) is the smallest one that is greater than or equal to λ_i − δ. We prove the equivalence in two directions.
Necessity: If λ_i is isolated, then for δ > 0 small enough, λ_i is the unique Z-eigenvalue of A in the interval [λ_i − δ, λ_i + δ]. So, (3.17) holds.
Sufficiency: Assume (3.17) holds for some δ > 0; then ν^−(λ_i, δ) = λ_i = ν^+(λ_i, δ). So, λ_i is the unique Z-eigenvalue in [λ_i − δ, λ_i + δ], and it must be isolated.

The problems (3.15) and (3.16) are polynomial optimization problems. Their optimal values ν^+(λ_i, δ), ν^−(λ_i, δ) can be computed by solving Lasserre type semidefinite relaxations similar to (3.11)-(3.12).

3.3 An algorithm for computing real Z-eigenvalues

For a given tensor A, we compute its real Z-eigenvalues, if they exist, from the smallest to the largest. First, we compute λ_1, if it exists, by solving (3.4)-(3.5). After getting λ_1, we solve (3.11)-(3.12) and then determine λ_2. If λ_2 does not exist, we stop; otherwise, we then determine λ_3. Repeating this procedure, we can get all the real Z-eigenvalues, when there are finitely many. This results in the following algorithm.

Algorithm 3.6 For a given tensor A ∈ T_m(R^n), compute its real Z-eigenvalues as follows:
Step 0: Let k := k_0, with k_0 as in (3.3).
Step 1: Solve the relaxation (3.4). If it is infeasible, then A has no real Z-eigenvalues and stop; if it is feasible, compute an optimizer y*.
Step 2: If (3.8) is satisfied, then λ_1 = f_{1,k} and go to Step 3 with i = 1. Otherwise, let k := k + 1 and go to Step 1.
Step 3: Choose a small δ > 0. Solve (3.15), (3.16) for the optimal values ν^+(λ_i, δ), ν^−(λ_i, δ). If ν^+(λ_i, δ) = ν^−(λ_i, δ), then λ_i is an isolated Z-eigenvalue; go to Step 4. Otherwise, let δ := δ/5 and compute ν^+(λ_i, δ), ν^−(λ_i, δ) again. Repeat this process until we get ν^+(λ_i, δ) = ν^−(λ_i, δ).
Step 4: Let k := k_0, with k_0 as in (3.3).
Step 5: Solve the relaxation (3.11). If it is infeasible, the largest Z-eigenvalue is λ_i and stop. Otherwise, compute an optimizer y* for it.
Step 6: If (3.8) is satisfied, then λ_{i+1} = f_{1,k}^{i+1} and go to Step 3 with i := i + 1. Otherwise, let k := k + 1 and go to Step 5.

The correctness of Algorithm 3.6 and its convergence properties are proved in Theorems 3.1, 3.3 and 3.4.

4 Computing real H-eigenvalues

In this section, we compute real H-eigenvalues. For every tensor A, the number of H-eigenvalues is always finite. In Definition 1.2, if λ, u are allowed to achieve complex values, then we call such a λ a complex H-eigenvalue and such a u a complex H-eigenvector.

Proposition 4.1 Every tensor A ∈ T_m(C^n) has n(m − 1)^{n−1} complex H-eigenvalues, counting their multiplicities.

Proof Let I ∈ T_m(C^n) be the identity tensor whose only non-zero entries are I_{ii...i} = 1 for i = 1, 2, ..., n. Note that λ is a complex H-eigenvalue if and only if there exists 0 ≠ u ∈ C^n such that A u^{m-1} = λ u^{[m-1]}, that is, (A − λI) u^{m-1} = 0. By the definition of the resultant [39], which we denote by Res, λ is an H-eigenvalue if and only if

Res( (A − λI) x^{m-1} ) = 0.   (4.1)

The resultant Res( (A − λI) x^{m-1} ) is homogeneous in the entries of A and λ. It has degree D := n(m − 1)^{n−1}. We can expand it as

Res( (A − λI) x^{m-1} ) = p_0(A) + p_1(A) λ + ··· + p_D(A) λ^D.

By the homogeneity of Res, we know p_D(A) = ±Res( I x^{m-1} ) ≠ 0, because the homogeneous polynomial system I x^{m-1} = 0 has no nonzero complex solutions. Hence, the leading coefficient of the polynomial Res( (A − λI) x^{m-1} ) in λ is not zero, and its degree in λ is D. This implies that (4.1) has D complex solutions, counting multiplicities, and the proposition is proved.

Recall that (λ, u) is a real H-eigenpair if and only if

A u^{m-1} = λ u^{[m-1]},  0 ≠ u ∈ R^n.

Let m_0 be the largest even number less than or equal to m, i.e., m_0 = 2⌊m/2⌋. Note that m − 1 ≤ m_0 ≤ m. We can normalize u as

(u_1)^{m_0} + ··· + (u_n)^{m_0} = 1.   (4.2)

Under this normalization, the H-eigenvalue λ can be given as

λ = λ (u^{[m_0 − m + 1]})^T u^{[m-1]} = (u^{[m_0 − m + 1]})^T A u^{m-1}.

Let h be the polynomial tuple

h := ( A x^{m-1} − ( (x^{[m_0 − m + 1]})^T A x^{m-1} ) x^{[m-1]},  Σ_{i=1}^{n} (x_i)^{m_0} − 1 ).   (4.3)
Then, u is an H-eigenvector normalized as in (4.2) if and only if h(u) = 0. Since A has finitely many H-eigenvalues, we can order the real ones monotonically as

μ_1 < μ_2 < ··· < μ_N,

if at least one of them exists. We call μ_i the ith smallest H-eigenvalue.

4.1 The smallest H-eigenvalue

In this subsection, we show how to determine μ_1. Let h be as in (4.3); then μ_1 equals the optimal value of the optimization problem

min f(x) := (x^{[m_0−m+1]})^T Ax^{m−1}  s.t.  h(x) = 0.   (4.4)

Lasserre's hierarchy [16] of semidefinite relaxations for solving (4.4) is

ρ_{1,k} := min ⟨f, z⟩  s.t.  ⟨1, z⟩ = 1, L^{(k)}_h(z) = 0, M_k(z) ⪰ 0, z ∈ R^{N^n_{2k}},   (4.5)

for the orders k = k_0, k_0 + 1, ..., where

k_0 := ⌈(m_0 + m − 1)/2⌉.   (4.6)

The dual optimization problem of (4.5) is

ρ_{2,k} := max γ  s.t.  f − γ ∈ I_{2k}(h) + Σ[x]_{2k}.   (4.7)

As can be shown as in [16], ρ_{2,k} ≤ ρ_{1,k} ≤ μ_1 for all k, and the sequences {ρ_{1,k}} and {ρ_{2,k}} are monotonically increasing.

Theorem 4.2 Let A ∈ T_m(R^n) and let H(R, A) be the set of its real H-eigenvalues. Then, we have the properties:
(i) The set H(R, A) = ∅ if and only if the semidefinite relaxation (4.5) is infeasible for some order k.
(ii) If H(R, A) ≠ ∅, then for all k sufficiently big

ρ_{1,k} = ρ_{2,k} = μ_1.   (4.8)

(iii) Let k_0 be as in (4.6). Suppose z* is a minimizer of (4.5). If there exists an integer t ≤ k such that

rank M_{t−k_0}(z*) = rank M_t(z*),   (4.9)
then μ_1 = ρ_{1,k} and there are r := rank M_t(z*) distinct real H-eigenvectors v_1, ..., v_r associated to μ_1 and normalized as in (4.2).
(iv) Suppose A has finitely many real H-eigenvectors associated to μ_1. Then, for all k big enough and for every minimizer z* of (4.5), there exists an integer t ≤ k satisfying (4.9).

Proof It can be proved in the same way as for Theorem 3.1. The only difference is that H(R, A) is always a finite set, by Proposition 4.1. To avoid being repetitive, the proof is omitted here.

The tensor A in Example 1.3 has no real H-eigenvalues. This can be confirmed by Theorem 4.2(i), because the semidefinite relaxation (4.5) is infeasible for the order k = 4. The rank condition (4.9) is a criterion for checking the finite convergence in (4.8). When it is satisfied, the real H-eigenvectors u_1, ..., u_r can be computed by the method in [10]. Typically, we can get all H-eigenvectors if primal-dual interior-point methods are used to solve (4.5). We refer to Remark 3.2.

4.2 Larger H-eigenvalues

Suppose the ith smallest H-eigenvalue μ_i is known. We want to determine the next larger one, μ_{i+1}. If it exists, we show how to compute it; if not, we get a certificate for the nonexistence. Let δ > 0 be a small number. Consider the optimization problem

min f(x)  s.t.  h(x) = 0,  f(x) ≥ μ_i + δ,   (4.10)

where f, h are the same as in (4.4). The optimal value of (4.10) is the smallest H-eigenvalue of A that is greater than or equal to μ_i + δ. Denote

ϖ(μ_i + δ) := min{ μ ∈ H(R, A) : μ ≥ μ_i + δ }.   (4.11)

Lasserre's hierarchy of semidefinite relaxations for solving (4.10) is

ρ^{i+1}_{1,k} := min ⟨f, z⟩  s.t.  ⟨1, z⟩ = 1, L^{(k)}_h(z) = 0, M_k(z) ⪰ 0, L^{(k)}_{f−μ_i−δ}(z) ⪰ 0, z ∈ R^{N^n_{2k}},   (4.12)

for the orders k = k_0, k_0 + 1, .... The dual problem of (4.12) is

ρ^{i+1}_{2,k} := max γ  s.t.  f − γ ∈ I_{2k}(h) + Q_k(f − μ_i − δ).   (4.13)

The properties of the relaxations (4.12)–(4.13) are as follows.

Theorem 4.3 Let A ∈ T_m(R^n) and let H(R, A) be the set of its real H-eigenvalues. Assume that μ_i ∈ H(R, A).
Then, we have the properties:
(i) The intersection H(R, A) ∩ [μ_i + δ, +∞) = ∅ if and only if the semidefinite relaxation (4.12) is infeasible for some order k.
(ii) If H(R, A) ∩ [μ_i + δ, +∞) ≠ ∅, then for all k sufficiently big

ρ^{i+1}_{2,k} = ρ^{i+1}_{1,k} = ϖ(μ_i + δ).   (4.14)

(iii) Let z* be a minimizer of (4.12). If (4.9) is satisfied for some t ≤ k, then there exist r := rank M_t(z*) real H-eigenvectors v_1, ..., v_r that are associated to ϖ(μ_i + δ) and that are normalized as in (4.2).
(iv) Suppose A has finitely many real H-eigenvectors that are associated to ϖ(μ_i + δ) and that are normalized as in (4.2). Then, for all k big enough and for every minimizer z* of (4.12), there exists t ≤ k satisfying (4.9).

Proof It can be proved in the same way as for Theorem 3.3. Note that H(R, A) is always a finite set, by Proposition 4.1.

In the following, we show how to use ϖ(μ_i + δ) to determine μ_{i+1}. Consider the maximization problem

υ_i := max f(x)  s.t.  h(x) = 0,  f(x) ≤ μ_i + δ.   (4.15)

The optimal value of (4.15) is the largest H-eigenvalue of A that is smaller than or equal to μ_i + δ, i.e.,

υ_i = max{ μ ∈ H(R, A) : μ ≤ μ_i + δ }.

The next larger H-eigenvalue μ_{i+1} can be determined as follows.

Theorem 4.4 Let A ∈ T_m(R^n) and δ > 0. Suppose μ_i ∈ H(R, A) and μ_max is the maximum real H-eigenvalue. Let ϖ(μ_i + δ) be as in (4.11). Then, we have:
(i) For all δ > 0 small enough, υ_i = μ_i.
(ii) If υ_i = μ_i and (4.12) is infeasible for some k, then μ_i = μ_max.
(iii) If υ_i = μ_i and (4.9) is satisfied for some k, then μ_{i+1} = ρ^{i+1}_{1,k}.

Proof The proof is the same as for Theorem 3.4. Note that A has finitely many H-eigenvalues, and each μ_i is always isolated, by Proposition 4.1.

Since (4.15) is a polynomial optimization problem, the optimal value υ_i can also be computed by solving Lasserre type semidefinite relaxations that are similar to (4.12)–(4.13).

4.3 An algorithm for all H-eigenvalues

We can compute all real H-eigenvalues of a tensor A sequentially, from the smallest one to the largest one, if they exist.
A similar version of Algorithm 3.6 can be applied.
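The sequential sweep behind this scheme (shrink δ until υ_i = μ_i, then jump to ϖ(μ_i + δ)) can be illustrated on a toy finite eigenvalue set, with the two optimization oracles (4.10) and (4.15) replaced by direct set operations; in the paper these oracles are computed via the relaxations (4.12)–(4.13). The set H below is hypothetical, chosen with a close pair of eigenvalues to show why δ must sometimes shrink:

```python
# Toy illustration of the sweep in Sects. 4.2-4.3; the oracles are replaced
# by operations on a known (hypothetical) finite eigenvalue set.
H = [-1.0, -0.999, 2.5, 7.0]

def varpi(t):                          # min{mu in H : mu >= t}, None if empty
    cands = [mu for mu in H if mu >= t]
    return min(cands) if cands else None

def upsilon(t):                        # max{mu in H : mu <= t}
    return max(mu for mu in H if mu <= t)

eigs = [min(H)]                        # mu_1, computed via (4.5) in the paper
while True:
    mu_i, delta = eigs[-1], 1.0
    while upsilon(mu_i + delta) != mu_i:   # Step 3: shrink delta until the
        delta /= 5                          # interval (mu_i, mu_i + delta] is empty
    nxt = varpi(mu_i + delta)          # Steps 4-6: smallest eigenvalue >= mu_i + delta
    if nxt is None:                    # infeasible relaxation <-> mu_i is the largest
        break
    eigs.append(nxt)

assert eigs == sorted(H)               # every eigenvalue found, in increasing order
```

Note how the close pair −1.0 and −0.999 forces several reductions of δ before υ_i = μ_i holds; without that test, the jump to ϖ(μ_i + δ) could skip an eigenvalue.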
Algorithm 4.5 For a given tensor A ∈ T_m(R^n), compute its real H-eigenvalues as follows:
Step 0: Let k := k_0, with k_0 as in (4.6).
Step 1: Solve the relaxation (4.5). If it is infeasible, then A has no real H-eigenvalues and stop. Otherwise, compute an optimizer z*.
Step 2: If (4.9) is satisfied, then μ_1 = ρ_{1,k} and go to Step 3 with i := 1. Otherwise, let k := k + 1 and go to Step 1.
Step 3: Let δ = 1. Solve (4.15) for its optimal value υ_i. If υ_i = μ_i, go to Step 4. Otherwise, let δ := δ/5 and compute υ_i again. Repeat this process until we get υ_i = μ_i.
Step 4: Let k := k_0, with k_0 as in (4.6).
Step 5: Solve the relaxation (4.12). If it is infeasible, the largest H-eigenvalue is μ_i and stop. If it is feasible, compute an optimizer z*.
Step 6: If (4.9) is satisfied, then μ_{i+1} = ρ^{i+1}_{1,k} and go to Step 3 with i := i + 1. Otherwise, let k := k + 1 and go to Step 5.

The correctness of Algorithm 4.5 and its convergence properties are proved in Theorems 4.2, 4.3 and 4.4.

5 Numerical examples

In this section, we present numerical experiments for computing real Z-eigenvalues and H-eigenvalues. Algorithms 3.6 and 4.5 are implemented in MATLAB 7.0 on a Dell desktop with Linux as OS, 8GB memory, and an Intel(R) 2.8GHz CPU. The software GloptiPoly 3 [11] is used to solve the semidefinite relaxations in the algorithms. The computational results are displayed with four decimal digits, for cleanness of the presentation. The isolatedness of Z-eigenvalues is checked by Lemma 2.1 or Lemma 3.5. In checking the conditions (3.8) and (4.9), the rank of a matrix is evaluated as the number of its singular values that are greater than 10^{−6}. For odd order tensors, the Z-eigenvalues always appear in ± pairs, so only nonnegative Z-eigenvalues are displayed for them. In each table of this section, "time" denotes the time consumed, in seconds, by the computation.

5.1 Numerical examples of Z(H)-eigenvalues

Example 5.1
[9, Example 3] Consider the tensor A ∈ T_4(R^2) with entries A_{i_1 i_2 i_3 i_4} = 0 except

A_{1111} = 25.1,  A_{1122} = 25.6,  A_{2211} = 24.8,  A_{2222} = 23.

Applying Algorithms 3.6 and 4.5, we get all the real Z/H-eigenvalues. They are shown in Table 1. It took about 1 s to compute the Z-eigenpairs, and about 3 s to compute the H-eigenpairs. The convergence f^i_{1,k} → λ_i and ρ^i_{1,k} → μ_i occurs as follows:

f^1_{1,3} = λ_1,  f^2_{1,3} = λ_2,  ρ^1_{1,4} = μ_1,  ρ^2_{1,5} = μ_2,  ρ^3_{1,4} = μ_3.
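The eigenpairs of Example 5.1 can be cross-checked independently with a few lines of NumPy. The sketch below is not from the paper; it assumes the entry placement reconstructed above (which modes carry 25.6 and 24.8), though the two checks on the coordinate eigenvectors do not depend on that assumption:

```python
import numpy as np

# Tensor of Example 5.1 (entries as reconstructed above)
A = np.zeros((2, 2, 2, 2))
A[0, 0, 0, 0] = 25.1
A[0, 0, 1, 1] = 25.6
A[1, 1, 0, 0] = 24.8
A[1, 1, 1, 1] = 23.0

def Axm1(A, x):
    """(A x^{m-1})_j = sum over i2..im of A[j,i2,...,im] x_{i2}...x_{im}."""
    T = A
    for _ in range(A.ndim - 1):
        T = T @ x          # contract the last mode with x
    return T

# Z-eigenpair check: A u^3 = lambda * u with u^T u = 1
for lam, u in [(23.0, np.array([0.0, 1.0])), (25.1, np.array([1.0, 0.0]))]:
    assert np.linalg.norm(Axm1(A, u) - lam * u) < 1e-12

# Third H-eigenpair: eliminating lambda gives 25.6 t^2 + 2.1 t - 24.8 = 0
# for t = (u2/u1)^2; then lambda = 25.1 + 25.6 t = (481 + sqrt(254393))/20.
t = (-2.1 + np.sqrt(2.1 ** 2 + 4 * 25.6 * 24.8)) / (2 * 25.6)
u1 = (1.0 / (1.0 + t ** 2)) ** 0.25          # normalization (4.2): u1^4 + u2^4 = 1
u = np.array([u1, np.sqrt(t) * u1])
lam = 25.1 + 25.6 * t
assert np.linalg.norm(Axm1(A, u) - lam * u ** 3) < 1e-9
```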
Table 1 Z/H-eigenpairs of the tensor in Example 5.1

  Z-eigenvalue 23:       Z-eigenvectors ±(0, 1)
  Z-eigenvalue 25.1:     Z-eigenvectors ±(1, 0)
  H-eigenvalue 23:       H-eigenvectors ±(0, 1)
  H-eigenvalue 25.1:     H-eigenvectors ±(1, 0)
  H-eigenvalue 49.2687:  H-eigenvectors ±(0.8527, ±0.8286)

The Z-eigenvalues λ_1, λ_2 are isolated. This can be verified by Lemma 3.5, because ν_+(λ_i, 0.05) = ν_−(λ_i, 0.05) for all λ_i. Their isolatedness can also be verified by Lemma 2.1: for the function F(λ, u) as in (2.6), the Jacobian matrices at (λ_1, u_1) and (λ_2, u_2) are both nonsingular. By a direct calculation, one can show that the real Z-eigenvalues are λ_1 = 23, λ_2 = 25.1, while the real H-eigenvalues are μ_1 = 23, μ_2 = 25.1, μ_3 = (481 + √254393)/20 ≈ 49.2687. The computed Z/H-eigenpairs are all correct, up to round-off errors. Actually, we have ‖Au_i^3 − λ_i u_i‖ = ‖Av_i^3 − μ_i v_i^{[3]}‖ = 0 for i = 1, 2, and ‖Av_3^3 − μ_3 v_3^{[3]}‖ ≈ 0 for the third H-eigenpair.

Example 5.2 [33, Example 1] Consider the tensor A ∈ T_3(R^3) with the entries A_{i_1 i_2 i_3} = 0 except

A_{111} = , A_{121} = , A_{131} = 0.440, A_{211} = 0.854, A_{221} = 0.099, A_{231} = , A_{311} = , A_{321} = 0.385, A_{331} = ,
A_{112} = , A_{122} = , A_{132} = , A_{212} = 0.764, A_{222} = , A_{232} = , A_{312} = , A_{322} = , A_{332} = 0.25,
A_{113} = 0.387, A_{123} = , A_{133} = 0.35, A_{213} = 0.355, A_{223} = , A_{233} = , A_{313} = 0.975, A_{323} = , A_{333} = .

By Algorithms 3.6 and 4.5, we get all the real Z-eigenvalues (only nonnegative ones are computed because the order is odd), and get all the real H-eigenvalues. Three Z-eigenpairs (λ_i, u_i) and four H-eigenpairs (μ_i, v_i) are obtained. The results are shown in Table 2. It took about 4 s to compute the Z-eigenvalues, and about 5 s to get the H-eigenvalues. The three Z-eigenvalues λ_1, λ_2, λ_3 are all isolated. This can be verified by Lemma 3.5, because

ν_−(λ_i, 0.05) = ν_+(λ_i, 0.05)
Table 2 Z/H-eigenpairs of the tensor in Example 5.2

  Z-eig. λ_i   Z-eigvec. u_i            ‖Au_i^2 − λ_i u_i‖
               (0.3736, 0.703, 0.98)
               (0.6666, , )
               (0.4086, , 0.637)

  H-eig. μ_i   H-eigvec. v_i            ‖Av_i^2 − μ_i v_i^{[2]}‖
  1.3586       (0.5795, , )
               (0.6782, , )
               ( , , )
               (0.4956, 0.674, 0.609)

for all λ_i. The isolatedness is also confirmed by Lemma 2.1. The smallest singular values of the Jacobian matrix of F(λ, x) at the pairs (λ_i, u_i) are all positive (the first two are 0.605 and 0.706), so these Jacobian matrices are all nonsingular. The computed eigenpairs are all correct, up to tiny numerical errors. The residual errors ‖Au_i^2 − λ_i u_i‖, ‖Av_i^2 − μ_i v_i^{[2]}‖ are shown in Table 2.

Example 5.3 [4, 4.1] Consider the tensor A ∈ T_3(R^3) with A_{i_1 i_2 i_3} = 0 except

A_{111} = , A_{121} = 0.443, A_{131} = 0.94, A_{211} = 0.443, A_{221} = , A_{231} = 0.590, A_{311} = 0.94, A_{321} = , A_{331} = 0.02,
A_{112} = 0.443, A_{122} = , A_{132} = , A_{212} = , A_{222} = 0.283, A_{232} = , A_{312} = 0.590, A_{322} = , A_{332} = ,
A_{113} = 0.94, A_{123} = 0.590, A_{133} = 0.02, A_{213} = , A_{223} = , A_{233} = , A_{313} = 0.02, A_{323} = , A_{333} = .

By Algorithms 3.6 and 4.5, we get all the real Z-eigenvalues (only nonnegative ones are computed because the order is odd), and get all the real H-eigenvalues. They are shown in Table 3. It took about 4 s to compute the Z-eigenvalues, and about 3 s to get the H-eigenvalues. It can be verified that ν_−(λ_i, 0.05) = ν_+(λ_i, 0.05) for all λ_i. By Lemma 3.5, we know λ_1, λ_2 are isolated Z-eigenvalues. This is also confirmed by Lemma 2.1. The smallest singular values of the Jacobian matrix of F(λ, x) as in (2.6) at (λ_1, u_1) and (λ_2, u_2) are all positive. (For λ_2, there are four Z-eigenvectors u_2; at each of them, the smallest singular value is the same.) Hence these Jacobian matrices are all nonsingular. The computed eigenpairs (λ_i, u_i) and (μ_i, v_i) are correct, up to small numerical errors. Their residual errors are shown in Table 3.
Table 3 Z/H-eigenvalues of the tensor in Example 5.3

  Z-eig. λ_i   Z-eigvec. u_i          ‖Au_i^2 − λ_i u_i‖
               ( , 0.395, )
               ( , , )
               ( 0.2, 0.995, )
               (0.820, , 0.323)
               (0.065, , )

  H-eig. μ_i   H-eigvec. v_i          ‖Av_i^2 − μ_i v_i^{[2]}‖
               (0.7954, , )
               (0.760, , )

Table 4 Z/H-eigenvalues of the tensor in Example 5.4

  n   Z-eig. λ_i (≥ 0)   time   H-eig. μ_i   time
  None ,.664, , ; , , , , , ,.9260, 4.040, , , , , , 8.844, , , 7.458, ,.090

Example 5.4 [24, Example 3.9] Consider the tensor A ∈ T_3(R^n) such that

A_{i_1 i_2 i_3} = tan(i_1 i_2 i_3).

Applying Algorithms 3.6 and 4.5, we get all the real Z-eigenvalues (only nonnegative ones are computed because the order is odd), and get all the real H-eigenvalues. The results are shown in Table 4, for the dimensions n = 2, 3, 4, 5. There are no real H-eigenvalues when n = 2. The Z-eigenvalues are all isolated. This is verified by Lemma 3.5, because ν_−(λ_i, 0.05) = ν_+(λ_i, 0.05) for all λ_i. Since there are 28 eigenvalues, the eigenvectors are not shown, for neatness of the paper. They are all correct, up to small numerical errors.

Example 5.5 Consider the tensor A ∈ T_4(R^3) such that

A_{i_1 ... i_4} = arctan(i_1 i_2^2 i_3^3 i_4^4).

By Algorithms 3.6 and 4.5, we get all the real Z/H-eigenvalues. It took about 3 s to get the Z-eigenvalues, and about 5 s to get the H-eigenvalues. The results are shown in Table 5. All the Z-eigenvalues λ_i are isolated, which is verified by Lemma 3.5 because
Table 5 Z/H-eigenvalues of the tensor in Example 5.5

  Z-eig. λ_i   Z-eigvec. u_i          ‖Au_i^3 − λ_i u_i‖
               (0.9306, , 0.364)
               (0.2402, , )
               (0.570, , )

  H-eig. μ_i   H-eigvec. v_i          ‖Av_i^3 − μ_i v_i^{[3]}‖
               (0.943, , )
               (0.2924, 0.939, )
               (0.7566, , )

Table 6 Z/H-eigenvalues of the tensor in Example 5.6

  Z-eig. λ_i   Z-eigvec. u_i          ‖Au_i^3 − λ_i u_i‖
               (0.23, , )
               (0.6244, 0.938, )
               (0.6083, , )

  H-eig. μ_i   H-eigvec. v_i          ‖Av_i^3 − μ_i v_i^{[3]}‖
               (0.2552, , 0.773)
               ( , , 0.897)
               (0.7732, , )

ν_−(λ_i, 0.05) = ν_+(λ_i, 0.05) for all λ_i. The computed Z/H-eigenpairs (λ_i, u_i), (μ_i, v_i) are all correct, up to small numerical errors. The residual errors are shown in Table 5.

Example 5.6 Consider the tensor A ∈ T_4(R^3) such that

A_{i_1 ... i_4} = 1/(1 + i_1 + 2i_2 + 3i_3 + 4i_4).

By Algorithms 3.6 and 4.5, we get all the real Z/H-eigenvalues. It took about 4 s to compute the Z-eigenvalues, and about 5 s for the H-eigenvalues. The results are shown in Table 6. The Z-eigenvalues λ_i are all isolated. This is verified by Lemma 3.5, because ν_−(λ_i, 0.0001) = ν_+(λ_i, 0.0001) for all λ_i. The computed eigenpairs (λ_i, u_i), (μ_i, v_i) are all correct, up to small numerical errors. The residual errors are shown in Table 6.

Example 5.7 Consider the tensor A ∈ T_5(R^n) such that

A_{i_1 ... i_5} = Σ_{j=1}^{5} (−1)^j exp(i_j).
Table 7 Z/H-eigenvalues of the tensor in Example 5.7

  n   Z-eig. λ_i (≥ 0)   time   H-eig. μ_i   time
  2 , , , , , , , 2.399, , , , , , , , 8, , , , 4.473, , , , , 5.036, 5.789, , , , ,

Table 8 Z/H-eigenvalues of the tensor in Example 5.8

  n   Z-eig. λ_i (≥ 0)   time   H-eig. μ_i   time
  2 , , , , 0.06, , , , , 0.568, ,.4958, , 0.007, , , , 0.000, , 0.002, , ,.576, , 4.089, ,

For the dimensions n = 2, 3, 4, 5, all the real Z/H-eigenvalues are found by Algorithms 3.6 and 4.5. Because the order is odd, only nonnegative Z-eigenvalues are computed. The results are shown in Table 7. The Z-eigenvalues are all isolated, verified by Lemma 3.5. This is because ν_−(λ_i, 0.05) = ν_+(λ_i, 0.05) for all λ_i. The computed eigenpairs are all correct, up to round-off errors. Since there are 34 computed eigenpairs, for neatness of the paper, the eigenvectors and the residual errors are not displayed.

Example 5.8 Consider the tensor A ∈ T_3(R^n) such that

A_{i_1 i_2 i_3} = (−1)^{i_1 + 2i_2 + 3i_3} / (i_1 i_2 i_3^2).

For the dimensions n = 2, 3, 4, we get all the real Z-eigenvalues (only nonnegative ones are computed because the order is odd), and get all the real H-eigenvalues. The results are shown in Table 8.
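For n = 2, the real H-eigenpairs of such formula-defined tensors can be cross-checked without any SDP machinery: eliminating λ from Au^{m−1} = λu^{[m−1]} on u = (1, t) leaves a single univariate polynomial, whose real roots can be found with np.roots. The sketch below (not from the paper) does this for the n = 2 tensor of Example 5.7, using the entry formula as reconstructed above; the assertions are self-consistency checks only:

```python
import numpy as np
from itertools import product

m, n = 5, 2
# n = 2 tensor of Example 5.7: A_{i1...i5} = sum_{j=1}^{5} (-1)^j exp(i_j)
A = np.zeros((n,) * m)
for idx in product(range(n), repeat=m):
    A[idx] = sum((-1) ** (j + 1) * np.exp(idx[j] + 1) for j in range(m))

def Axm1(A, x):
    T = A
    for _ in range(m - 1):
        T = T @ x
    return T

# On u = (1, t), the eigenpair equation forces
# g(t) := t^{m-1} * (A u^{m-1})_1 - (A u^{m-1})_2 = 0.
def slice_poly(slice0):     # coefficients of (A u^{m-1})_j in t, ascending
    c = np.zeros(m)
    for idx in product(range(n), repeat=m - 1):
        c[sum(idx)] += slice0[idx]
    return c

c1, c2 = slice_poly(A[0]), slice_poly(A[1])
g = np.zeros(2 * (m - 1) + 1)
g[m - 1:] += c1             # t^{m-1} * (A u^{m-1})_1
g[:m] -= c2                 # - (A u^{m-1})_2
roots = np.roots(g[::-1])   # np.roots expects descending coefficients

# u1 = 0 would require A[0,1,...,1] = 0; here it is nonzero,
# so u = (1, t) covers all H-eigenvectors up to scaling.
assert abs(A[0, 1, 1, 1, 1]) > 1e-12

m0 = 4                      # largest even number <= m
pairs = []
for t in roots[np.abs(roots.imag) < 1e-8].real:
    u = np.array([1.0, t]) / (1 + t ** m0) ** (1 / m0)   # normalization (4.2)
    lam = Axm1(A, u)[0] / u[0] ** (m - 1)
    pairs.append((lam, u))
    assert np.linalg.norm(Axm1(A, u) - lam * u ** (m - 1)) < 1e-4

assert 1 <= len(pairs) <= n * (m - 1) ** (n - 1)   # bound from Proposition 4.1
```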
The Z-eigenvalues are all isolated, verified by Lemma 3.5. This is because ν_−(λ_i, 0.001) = ν_+(λ_i, 0.001) for all λ_i. The computed eigenpairs are all correct, up to round-off errors. Since there are 32 computed eigenpairs, for neatness of the paper, the eigenvectors and the residual errors are not displayed.

5.2 Numerical examples of stationary probability distribution vectors

In this subsection, we report numerical examples on computing all stationary probability distribution vectors.

Example 5.9 Consider the 3rd order 3-dimensional transition probability tensor P ∈ T_3(R^3) whose nonzero entries are

P_{112} = P_{121} = P_{113} = P_{131} = P_{211} = P_{222} = P_{223} = P_{232} = P_{333} = 1.

Its stationary probability distribution vectors are exactly

v_1 = (0, 1, 0),  v_2 = (0, 0, 1),  v_3 = (1/2, 1/2, 0).

By Algorithm 3.3, we get all the stationary probability distribution vectors, with the relaxation orders 2, 2, 3, respectively. It took about 4 s. One can verify that ‖Pv_i^2 − v_i‖ = 0 for i = 1, 2, 3.

In the following examples, for P ∈ T_m(R^n), P_{ij...} (i, j, ... ∈ {1, ..., n}) denotes the matrix whose entries are given as (P_{ij...})_{t_1 t_2} = P_{t_1 t_2 ij...} (1 ≤ t_1, t_2 ≤ n).

Example 5.10 Consider the transition probability tensor P ∈ T_3(R^4) that is given by its matrix slices P_1, P_2, P_3, P_4. It has only one nonnegative Z-eigenvalue. By Algorithm 3.6, we get the Z-eigenvector u = (0.477, 0.576, , ).
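The three stationary distribution vectors of Example 5.9 can be verified directly. The sketch below is not from the paper; it assumes the nonzero entries reconstructed above (each equal to 1) and checks both the stochasticity of P and the fixed-point equation v = Pv^2:

```python
import numpy as np

# Transition probability tensor of Example 5.9 (entries as reconstructed above)
P = np.zeros((3, 3, 3))
for idx in [(0, 0, 1), (0, 1, 0), (0, 0, 2), (0, 2, 0), (1, 0, 0),
            (1, 1, 1), (1, 1, 2), (1, 2, 1), (2, 2, 2)]:
    P[idx] = 1.0

# Stochasticity: sum over i1 of P_{i1 i2 i3} = 1 for every (i2, i3)
assert np.allclose(P.sum(axis=0), 1.0)

# Stationary probability distribution vectors satisfy v = P v^2,
# where (P v^2)_i = sum_{j,k} P_{ijk} v_j v_k
for v in [np.array([0.0, 1.0, 0.0]),
          np.array([0.0, 0.0, 1.0]),
          np.array([0.5, 0.5, 0.0])]:
    Pv2 = np.einsum('ijk,j,k->i', P, v, v)
    assert np.allclose(Pv2, v)
```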
More informationConvex Optimization M2
Convex Optimization M2 Lecture 3 A. d Aspremont. Convex Optimization M2. 1/49 Duality A. d Aspremont. Convex Optimization M2. 2/49 DMs DM par email: dm.daspremont@gmail.com A. d Aspremont. Convex Optimization
More informationUniversity of Colorado at Denver Mathematics Department Applied Linear Algebra Preliminary Exam With Solutions 16 January 2009, 10:00 am 2:00 pm
University of Colorado at Denver Mathematics Department Applied Linear Algebra Preliminary Exam With Solutions 16 January 2009, 10:00 am 2:00 pm Name: The proctor will let you read the following conditions
More informationEigenvalues and Eigenvectors
Eigenvalues and Eigenvectors week -2 Fall 26 Eigenvalues and eigenvectors The most simple linear transformation from R n to R n may be the transformation of the form: T (x,,, x n ) (λ x, λ 2,, λ n x n
More informationLMI MODELLING 4. CONVEX LMI MODELLING. Didier HENRION. LAAS-CNRS Toulouse, FR Czech Tech Univ Prague, CZ. Universidad de Valladolid, SP March 2009
LMI MODELLING 4. CONVEX LMI MODELLING Didier HENRION LAAS-CNRS Toulouse, FR Czech Tech Univ Prague, CZ Universidad de Valladolid, SP March 2009 Minors A minor of a matrix F is the determinant of a submatrix
More information5. Duality. Lagrangian
5. Duality Convex Optimization Boyd & Vandenberghe Lagrange dual problem weak and strong duality geometric interpretation optimality conditions perturbation and sensitivity analysis examples generalized
More informationConvergence rates of moment-sum-of-squares hierarchies for volume approximation of semialgebraic sets
Convergence rates of moment-sum-of-squares hierarchies for volume approximation of semialgebraic sets Milan Korda 1, Didier Henrion,3,4 Draft of December 1, 016 Abstract Moment-sum-of-squares hierarchies
More informationMoments and Positive Polynomials for Optimization II: LP- VERSUS SDP-relaxations
Moments and Positive Polynomials for Optimization II: LP- VERSUS SDP-relaxations LAAS-CNRS and Institute of Mathematics, Toulouse, France Tutorial, IMS, Singapore 2012 LP-relaxations LP- VERSUS SDP-relaxations
More informationOptimization Theory. A Concise Introduction. Jiongmin Yong
October 11, 017 16:5 ws-book9x6 Book Title Optimization Theory 017-08-Lecture Notes page 1 1 Optimization Theory A Concise Introduction Jiongmin Yong Optimization Theory 017-08-Lecture Notes page Optimization
More information2. Linear algebra. matrices and vectors. linear equations. range and nullspace of matrices. function of vectors, gradient and Hessian
FE661 - Statistical Methods for Financial Engineering 2. Linear algebra Jitkomut Songsiri matrices and vectors linear equations range and nullspace of matrices function of vectors, gradient and Hessian
More information3 (Maths) Linear Algebra
3 (Maths) Linear Algebra References: Simon and Blume, chapters 6 to 11, 16 and 23; Pemberton and Rau, chapters 11 to 13 and 25; Sundaram, sections 1.3 and 1.5. The methods and concepts of linear algebra
More information11 a 12 a 21 a 11 a 22 a 12 a 21. (C.11) A = The determinant of a product of two matrices is given by AB = A B 1 1 = (C.13) and similarly.
C PROPERTIES OF MATRICES 697 to whether the permutation i 1 i 2 i N is even or odd, respectively Note that I =1 Thus, for a 2 2 matrix, the determinant takes the form A = a 11 a 12 = a a 21 a 11 a 22 a
More informationCHAPTER 5. Basic Iterative Methods
Basic Iterative Methods CHAPTER 5 Solve Ax = f where A is large and sparse (and nonsingular. Let A be split as A = M N in which M is nonsingular, and solving systems of the form Mz = r is much easier than
More informationChapter 1. Preliminaries
Introduction This dissertation is a reading of chapter 4 in part I of the book : Integer and Combinatorial Optimization by George L. Nemhauser & Laurence A. Wolsey. The chapter elaborates links between
More informationIntrinsic products and factorizations of matrices
Available online at www.sciencedirect.com Linear Algebra and its Applications 428 (2008) 5 3 www.elsevier.com/locate/laa Intrinsic products and factorizations of matrices Miroslav Fiedler Academy of Sciences
More informationLinear Algebra. Session 12
Linear Algebra. Session 12 Dr. Marco A Roque Sol 08/01/2017 Example 12.1 Find the constant function that is the least squares fit to the following data x 0 1 2 3 f(x) 1 0 1 2 Solution c = 1 c = 0 f (x)
More information7. Symmetric Matrices and Quadratic Forms
Linear Algebra 7. Symmetric Matrices and Quadratic Forms CSIE NCU 1 7. Symmetric Matrices and Quadratic Forms 7.1 Diagonalization of symmetric matrices 2 7.2 Quadratic forms.. 9 7.4 The singular value
More informationA strongly polynomial algorithm for linear systems having a binary solution
A strongly polynomial algorithm for linear systems having a binary solution Sergei Chubanov Institute of Information Systems at the University of Siegen, Germany e-mail: sergei.chubanov@uni-siegen.de 7th
More informationWhat can be expressed via Conic Quadratic and Semidefinite Programming?
What can be expressed via Conic Quadratic and Semidefinite Programming? A. Nemirovski Faculty of Industrial Engineering and Management Technion Israel Institute of Technology Abstract Tremendous recent
More informationA new approximation hierarchy for polynomial conic optimization
A new approximation hierarchy for polynomial conic optimization Peter J.C. Dickinson Janez Povh July 11, 2018 Abstract In this paper we consider polynomial conic optimization problems, where the feasible
More informationarzelier
COURSE ON LMI OPTIMIZATION WITH APPLICATIONS IN CONTROL PART II.1 LMIs IN SYSTEMS CONTROL STATE-SPACE METHODS STABILITY ANALYSIS Didier HENRION www.laas.fr/ henrion henrion@laas.fr Denis ARZELIER www.laas.fr/
More informationq n. Q T Q = I. Projections Least Squares best fit solution to Ax = b. Gram-Schmidt process for getting an orthonormal basis from any basis.
Exam Review Material covered by the exam [ Orthogonal matrices Q = q 1... ] q n. Q T Q = I. Projections Least Squares best fit solution to Ax = b. Gram-Schmidt process for getting an orthonormal basis
More informationa 11 a 12 a 11 a 12 a 13 a 21 a 22 a 23 . a 31 a 32 a 33 a 12 a 21 a 23 a 31 a = = = = 12
24 8 Matrices Determinant of 2 2 matrix Given a 2 2 matrix [ ] a a A = 2 a 2 a 22 the real number a a 22 a 2 a 2 is determinant and denoted by det(a) = a a 2 a 2 a 22 Example 8 Find determinant of 2 2
More informationMatrices and Vectors. Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A =
30 MATHEMATICS REVIEW G A.1.1 Matrices and Vectors Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A = a 11 a 12... a 1N a 21 a 22... a 2N...... a M1 a M2... a MN A matrix can
More informationConjugate Gradient (CG) Method
Conjugate Gradient (CG) Method by K. Ozawa 1 Introduction In the series of this lecture, I will introduce the conjugate gradient method, which solves efficiently large scale sparse linear simultaneous
More informationON SUM OF SQUARES DECOMPOSITION FOR A BIQUADRATIC MATRIX FUNCTION
Annales Univ. Sci. Budapest., Sect. Comp. 33 (2010) 273-284 ON SUM OF SQUARES DECOMPOSITION FOR A BIQUADRATIC MATRIX FUNCTION L. László (Budapest, Hungary) Dedicated to Professor Ferenc Schipp on his 70th
More informationLecture 7: Convex Optimizations
Lecture 7: Convex Optimizations Radu Balan, David Levermore March 29, 2018 Convex Sets. Convex Functions A set S R n is called a convex set if for any points x, y S the line segment [x, y] := {tx + (1
More informationElementary linear algebra
Chapter 1 Elementary linear algebra 1.1 Vector spaces Vector spaces owe their importance to the fact that so many models arising in the solutions of specific problems turn out to be vector spaces. The
More informationAnalysis-3 lecture schemes
Analysis-3 lecture schemes (with Homeworks) 1 Csörgő István November, 2015 1 A jegyzet az ELTE Informatikai Kar 2015. évi Jegyzetpályázatának támogatásával készült Contents 1. Lesson 1 4 1.1. The Space
More informationECS231 Handout Subspace projection methods for Solving Large-Scale Eigenvalue Problems. Part I: Review of basic theory of eigenvalue problems
ECS231 Handout Subspace projection methods for Solving Large-Scale Eigenvalue Problems Part I: Review of basic theory of eigenvalue problems 1. Let A C n n. (a) A scalar λ is an eigenvalue of an n n A
More informationChapter 3. Linear and Nonlinear Systems
59 An expert is someone who knows some of the worst mistakes that can be made in his subject, and how to avoid them Werner Heisenberg (1901-1976) Chapter 3 Linear and Nonlinear Systems In this chapter
More informationPositive semidefinite matrix approximation with a trace constraint
Positive semidefinite matrix approximation with a trace constraint Kouhei Harada August 8, 208 We propose an efficient algorithm to solve positive a semidefinite matrix approximation problem with a trace
More informationChapter 7 Iterative Techniques in Matrix Algebra
Chapter 7 Iterative Techniques in Matrix Algebra Per-Olof Persson persson@berkeley.edu Department of Mathematics University of California, Berkeley Math 128B Numerical Analysis Vector Norms Definition
More information2 Two-Point Boundary Value Problems
2 Two-Point Boundary Value Problems Another fundamental equation, in addition to the heat eq. and the wave eq., is Poisson s equation: n j=1 2 u x 2 j The unknown is the function u = u(x 1, x 2,..., x
More informationACI-matrices all of whose completions have the same rank
ACI-matrices all of whose completions have the same rank Zejun Huang, Xingzhi Zhan Department of Mathematics East China Normal University Shanghai 200241, China Abstract We characterize the ACI-matrices
More informationOn the Moreau-Yosida regularization of the vector k-norm related functions
On the Moreau-Yosida regularization of the vector k-norm related functions Bin Wu, Chao Ding, Defeng Sun and Kim-Chuan Toh This version: March 08, 2011 Abstract In this paper, we conduct a thorough study
More informationCourse 311: Michaelmas Term 2005 Part III: Topics in Commutative Algebra
Course 311: Michaelmas Term 2005 Part III: Topics in Commutative Algebra D. R. Wilkins Contents 3 Topics in Commutative Algebra 2 3.1 Rings and Fields......................... 2 3.2 Ideals...............................
More informationStrong duality in Lasserre s hierarchy for polynomial optimization
Strong duality in Lasserre s hierarchy for polynomial optimization arxiv:1405.7334v1 [math.oc] 28 May 2014 Cédric Josz 1,2, Didier Henrion 3,4,5 Draft of January 24, 2018 Abstract A polynomial optimization
More information