ON HAZAN'S ALGORITHM FOR SYMMETRIC PROGRAMMING PROBLEMS

L. FAYBUSOVICH

Abstract. We describe the generalization of Hazan's algorithm for symmetric programming problems.

Key words. Symmetric programming, Euclidean Jordan algebras, low-rank approximations to optimal solutions

AMS subject classifications. 90C25, 17C99

1. Introduction. In [3] E. Hazan proposed an algorithm for solving semidefinite programming problems. Though the complexity estimates for this algorithm are not as good as for the most popular primal-dual algorithms, it nevertheless has a number of interesting properties. It provides a low-rank approximation to the optimal solution, and it requires the computation of the gradient of an auxiliary convex function rather than the Hessian of a barrier function, as in interior-point algorithms. In the present paper we show that these properties are preserved in the much more general setting of symmetric programming problems. Moreover, we show that the major computational step of the algorithm (finding the maximal eigenvalue and a corresponding eigenvector of a positive definite symmetric matrix) admits a natural decomposition in the general case of a direct sum of simple Euclidean Jordan algebras. For some types of irreducible blocks (e.g., generally infinite-dimensional spin factors) the corresponding eigenvalue problem has a simple analytic solution. In particular, Hazan's algorithm offers a computationally attractive alternative to other methods in the case of second-order cone programming (including the infinite-dimensional version considered in [2], [4]).

2. Jordan-algebraic concepts. We follow the notation of the excellent book [1]. We do not attempt to describe the Jordan-algebraic language here but instead provide detailed references to [1].
Throughout this paper: V is a Euclidean Jordan algebra; rank(V) stands for the rank of V; x ∘ y is the Jordan-algebraic multiplication of x, y ∈ V; ⟨x, y⟩ = tr(x ∘ y) is the canonical scalar product on V, where tr is the trace operator on V; Ω is the cone of invertible squares in V, and Ω̄ is the closure of Ω in V. An element g ∈ V such that g² = g and tr(g) = 1 is called a primitive idempotent in V. Given x ∈ V, we denote by L(x) the corresponding multiplication operator on V, i.e. L(x)y = x ∘ y, y ∈ V. Given x ∈ V, we denote by P(x) the so-called quadratic representation of x, i.e.

P(x) = 2L(x)² − L(x²).

The author was supported in part by National Science Foundation Grant NO. DMS. Department of Mathematics, University of Notre Dame, Notre Dame, IN, USA.
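As a concrete illustration (not part of the paper), in the algebra of n × n real symmetric matrices with x ∘ y = (xy + yx)/2 the operators above can be checked numerically; in that algebra the quadratic representation acts as P(x)y = xyx. A minimal sketch, with randomly chosen matrices:

```python
import numpy as np

def jmul(x, y):
    # Jordan product on symmetric matrices: x ∘ y = (xy + yx)/2
    return (x @ y + y @ x) / 2

def P(x, y):
    # Quadratic representation applied to y: P(x)y = 2 x∘(x∘y) − x²∘y
    return 2 * jmul(x, jmul(x, y)) - jmul(jmul(x, x), y)

rng = np.random.default_rng(0)
a = rng.standard_normal((4, 4)); x = (a + a.T) / 2
b = rng.standard_normal((4, 4)); y = (b + b.T) / 2

# In this algebra P(x)y coincides with the congruence x y x
assert np.allclose(P(x, y), x @ y @ x)
# The canonical scalar product: tr(x ∘ y) equals trace(xy)
assert np.isclose(np.trace(jmul(x, y)), np.trace(x @ y))
print("P(x)y = xyx verified")
```

The identity P(x)y = xyx follows by expanding 2x∘(x∘y) − x²∘y = (x²y + 2xyx + yx²)/2 − (x²y + yx²)/2.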
Given x ∈ V, there exist idempotents g₁, …, g_k in V such that g_i ∘ g_j = 0 for i ≠ j, g₁ + g₂ + ⋯ + g_k = e, and distinct real numbers λ₁, …, λ_k with the following property:

(2.1) x = Σ_{i=1}^k λ_i g_i.

The numbers λ_i and the idempotents g_i are uniquely determined by x (see Theorem III.1.1 in [1]). Here e is the identity element of the Jordan algebra V. We define λ_min(x) (resp. λ_max(x)) as min{λ_i : i = 1, …, k} (resp. max{λ_i : i = 1, …, k}). Within the context of this paper the notion of the rank of x is very important. By definition,

(2.2) rank(x) = Σ_{i : λ_i ≠ 0} tr(g_i).

Given x ∈ V, the operator L(x) is symmetric with respect to the canonical scalar product. If g is an idempotent in V, it turns out that the spectrum of L(g) belongs to {0, 1/2, 1}. Following [1], we denote by V(1, g), V(1/2, g), V(0, g) the corresponding eigenspaces. It is clear that

(2.3) V = V(0, g) ⊕ V(1, g) ⊕ V(1/2, g),

and the eigenspaces are pairwise orthogonal with respect to the scalar product ⟨·, ·⟩. This is the so-called Peirce decomposition of V with respect to the idempotent g. However, the eigenspaces have more structure (see [1], Proposition IV.1.1). In particular, V(0, g) and V(1, g) are subalgebras of V. We summarize some of the properties of the algebras V(1, g) = V(0, e − g).

Proposition 1. Let g be an idempotent in a Euclidean Jordan algebra V. Then V(1, g) is a Euclidean Jordan subalgebra with identity element g. Moreover, rank(V(1, g)) = rank(g). The trace operator on V(1, g) coincides with the restriction of the trace operator on V. If Ω₁ is the cone of invertible squares in V(1, g), then Ω̄₁ = Ω̄ ∩ V(1, g).

Proposition 1 easily follows from the properties of the Peirce decomposition of V (see Section IV.2 in [1]). Notice that if c is a primitive idempotent in V(1, g), then c is a primitive idempotent in V. For a proof see e.g. [6]. Let g₁, …, g_r, where r = rank(V), be a system of primitive idempotents such that g_i ∘ g_j = 0 for i ≠ j and g₁ + ⋯ + g_r = e.
Such a system is called a Jordan frame. Given x ∈ V, there exist a Jordan frame g₁, …, g_r and real numbers λ₁, …, λ_r (the eigenvalues of x) such that x = Σ_{i=1}^r λ_i g_i. Such a representation is called a spectral decomposition of x. Since primitive idempotents in V(1, g) remain primitive in V, it easily follows that the rank of x ∈ V(1, g) is the same as its rank in V.
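For real symmetric matrices the spectral decomposition above is the usual eigendecomposition: the Jordan frame consists of the rank-one projectors onto an orthonormal eigenbasis, and rank(x) counts the nonzero eigenvalues. A small numerical sketch (the tolerance is our choice):

```python
import numpy as np

def spectral_decomposition(x):
    """Eigenvalues and Jordan frame (rank-one projectors) of a symmetric
    matrix x, so that x = sum_i lam[i] * frame[i]."""
    lam, u = np.linalg.eigh(x)
    frame = [np.outer(u[:, i], u[:, i]) for i in range(len(lam))]
    return lam, frame

rng = np.random.default_rng(1)
a = rng.standard_normal((5, 5)); x = (a + a.T) / 2

lam, frame = spectral_decomposition(x)
# Each frame element is a primitive idempotent: g² = g, tr(g) = 1
assert all(np.allclose(g @ g, g) and np.isclose(np.trace(g), 1) for g in frame)
# The frame sums to the identity element e
assert np.allclose(sum(frame), np.eye(5))
# x is recovered from its spectral decomposition
assert np.allclose(sum(l * g for l, g in zip(lam, frame)), x)
# rank(x) = number of nonzero eigenvalues (here, almost surely 5)
print("rank(x) =", int(np.sum(np.abs(lam) > 1e-10)))
```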
3. Auxiliary functions. Let V be a Euclidean Jordan algebra. Consider the function

(3.1) f_K(x) = (1/K) ln( Σ_{i=1}^m e^{K(⟨a_i, x⟩ − b_i)} ).

Here K > 0 is a real parameter, a_i ∈ V, b_i ∈ R, i = 1, …, m. The following Proposition is, of course, well known, but we will prove it, since we will need the explicit expressions for the gradient and the Hessian of f_K later on.

Proposition 2. The function f_K is convex on V.

Proof. We have:

(3.2) Df_K(x)h = (1/α(x)) Σ_{i=1}^m α_i(x) ⟨a_i, h⟩, h ∈ V.

Here

α_i(x) = e^{K(⟨a_i, x⟩ − b_i)}, α(x) = Σ_{i=1}^m α_i(x);

(3.3) D²f_K(x)(h, h) = K [ Σ_{i=1}^m α_i(x) ⟨a_i, h⟩² / α(x) − ( Σ_{i=1}^m α_i(x) ⟨a_i, h⟩ )² / α(x)² ].

Here Df_K(x), D²f_K(x) are the first and second Frechet derivatives of f_K, respectively. Let

β_i = ⟨a_i, h⟩ e^{K(⟨a_i, x⟩ − b_i)/2}, γ_i = e^{K(⟨a_i, x⟩ − b_i)/2}.

Then the Cauchy-Schwarz inequality

( Σ_{i=1}^m β_i γ_i )² ≤ ( Σ_{i=1}^m β_i² ) ( Σ_{i=1}^m γ_i² )

shows that D²f_K(x)(h, h) ≥ 0 for all x ∈ V, h ∈ V.

As usual, we introduce the gradient ∇f_K(x) ∈ V and the Hessian H_{f_K}(x) as follows:

(3.4) ⟨∇f_K(x), h⟩ = Df_K(x)h, ⟨h, H_{f_K}(x)h⟩ = D²f_K(x)(h, h), x, h ∈ V.

In particular,

(3.5) ∇f_K(x) = Σ_{i=1}^m α_i(x) a_i / α(x),
4 4 L. Faybusovich and H fk (x) is a symmetric linear map from V to V. Consider the function (3.6) It is clear that Φ K : R m R, Φ K (y 1,..., y m ) = ln( m ekyi ) f K (x) = Φ K ( a 1, x b 1,..., a m, x b m ). The next Proposition is, of course, well-known. Proposition 3. (3.7) (4.1) max{y i : i [1, m]} Φ K (y 1,..., y m ) max{y i : i [1, m]} + ln m K. 4. Feasibility problem. Consider the following feasibility problem: K a i, x b i, i = 1, 2,..., m, tr(x) = 1, x Ω V. We can assume without loss of generality that a i = a i, a i = 1, i = 1,..., m. The key in Hazan s approach is the following observation: Proposition 4. Fix ɛ > 0. Let x be an optimal solution to the optimization problem: (4.2) (4.3) f K (x) min, tr(x) = 1, x Ω, where K = 2 ln m ɛ. Let, further, x be such that (4.4) f K ( x) f K (x ) + ɛ/2, tr( x) = 1, x Ω. If f K ( x) ɛ, then (4.5) If f K ( x) > ɛ, then the set (4.1) s empty. Proof. By Proposition 3 (4.6) where a i, x b i ɛ, i = 1,..., m. (x) f K (x) (x) + ln m K, (x) = max{ a i, x b i : i [1, m]}, x V. If f K ( x) ɛ, the lower bound in (4.6) shows that (4.5) holds true. If, however, f K ( x) > ɛ, then f K (x ) > ɛ 2 by (4.4). Suppose that x is feasible for (4.1). Then, in particular, f K (x) f K (x ) > ɛ 2. But then the upper bound in (4.6) yields: f K (x) (x) + ln m K ɛ 2, since (x) 0 and K = 2 ln m ɛ. Thus, ɛ 2 f K(x) > ɛ 2. Contradiction. Hence, f K ( x) > ɛ implies that (4.1) has no feasible solutions..
5. Solving an optimization problem on the generalized simplex. In light of Proposition 4, we need to provide an efficient algorithm for solving the optimization problem (4.2), (4.3). E. Hazan uses a version of the Frank-Wolfe algorithm on the set of positive semidefinite matrices with bounded trace. Let us show that this approach works in the more general symmetric cone situation. Consider the following optimization problem:

(5.1) f(x) → min,

(5.2) tr(x) = 1, x ∈ Ω̄ ⊂ V.

Here f is a twice continuously differentiable function on some open neighbourhood of the generalized simplex ∆ defined by (5.2). Denote by Γ(f) the following expression:

(5.3) Γ(f) = max{λ_max(H_f(x)) : x ∈ ∆},

where λ_max(H_f(x)) is the maximal eigenvalue of the Hessian H_f(x) of the function f evaluated at the point x. Fix ε > 0. Let x₁ be any primitive idempotent in V. Clearly, x₁ ∈ ∆. For k = 1, 2, …, let s_k be a primitive idempotent in V such that

(5.4) ⟨∇f(x_k), s_k⟩ ≤ λ_min(∇f(x_k)) + ε/4.

Set

(5.5) x_{k+1} = x_k + α_k (s_k − x_k), where α_k = min(1, 2/k).

Notice that ∇f(x_k) ∈ V and hence λ_min(∇f(x_k)) is defined as in Section 2.

Theorem 1. Let x* be an optimal solution to the problem (5.1), (5.2). Then

(5.6) f(x_k) ≤ f(x*) + ε/4 + 4Γ(f)/k, k = 1, 2, ….

Remark 1. Notice that (5.5) implies that rank(x_{k+1}) ≤ rank(x_k) + rank(s_k) (for the rank inequality see the Appendix). Since s_k is a primitive idempotent, we conclude that rank(x_k) ≤ k, k = 1, 2, ….

Proof. Our proof follows the one in [5], where the case of the cone of positive semidefinite matrices was considered. Notice that x_{k+1} is a convex combination of x_k and s_k and hence belongs to ∆. By Taylor's formula,

(5.7) f(x_{k+1}) = f(x_k) + α_k ⟨∇f(x_k), s_k − x_k⟩ + (α_k²/2) ⟨H_f(x′)(s_k − x_k), s_k − x_k⟩

for some x′ ∈ conv(x_k, x_{k+1}). Now, H_f(x′) ⪯ Γ(f) I_V, where I_V is the identity map on V. Hence,

f(x_{k+1}) ≤ f(x_k) + α_k ⟨∇f(x_k), s_k − x_k⟩ + (α_k²/2) Γ(f) ‖s_k − x_k‖².
By Lemma 1 (see below), ‖s_k − x_k‖² ≤ 2. Hence,

(5.8) f(x_{k+1}) ≤ f(x_k) + α_k ⟨∇f(x_k), s_k − x_k⟩ + α_k² Γ(f).

Convexity of f implies

(5.9) f(x*) − f(x_k) ≥ ⟨∇f(x_k), x* − x_k⟩.

Notice that λ_min(∇f(x_k)) is the optimal value of the optimization problem ⟨∇f(x_k), z⟩ → min, z ∈ ∆. This is discussed in the Appendix. Combining this with (5.4), we obtain

(5.10) ⟨∇f(x_k), s_k⟩ ≤ ⟨∇f(x_k), x*⟩ + ε/4.

Substituting (5.10) into (5.8), we obtain:

f(x_{k+1}) ≤ f(x_k) + α_k (⟨∇f(x_k), x*⟩ + ε/4) − α_k ⟨∇f(x_k), x_k⟩ + α_k² Γ(f)
≤ f(x_k) + α_k (⟨∇f(x_k), x*⟩ + ε/4) + α_k [f(x*) − f(x_k) − ⟨∇f(x_k), x*⟩] + α_k² Γ(f),

where in the last inequality we used (5.9). Hence, we obtain:

(5.11) f(x_{k+1}) ≤ (1 − α_k) f(x_k) + α_k (f(x*) + ε/4) + α_k² Γ(f).

We will use (5.11) to prove (5.6) by induction on k. For k = 1, 2 we have α_k = 1, and (5.11) yields f(x_{k+1}) ≤ f(x*) + ε/4 + Γ(f), so that (5.6) holds for k = 2, 3. If (5.6) holds for some k ≥ 2, then

f(x_{k+1}) ≤ (1 − 2/k)(f(x*) + ε/4 + 4Γ(f)/k) + (2/k)(f(x*) + ε/4) + 4Γ(f)/k²
= f(x*) + ε/4 + 4Γ(f)(1/k − 1/k²) ≤ f(x*) + ε/4 + 4Γ(f)/(k + 1).

In the proof of Theorem 1 we used the following.

Lemma 1. Let x, y ∈ ∆. Then ‖x − y‖ ≤ √2.

Proof. Let

x = Σ_{i=1}^r λ_i g_i, y = Σ_{i=1}^r μ_i h_i
be spectral decompositions of x, y as discussed in Section 2. Since x, y ∈ ∆, we have Σ_{i=1}^r λ_i = Σ_{i=1}^r μ_i = 1, λ_i ≥ 0, μ_i ≥ 0 for i = 1, …, r. Now:

‖x − y‖ = ‖ Σ_{i=1}^r Σ_{j=1}^r λ_i μ_j (g_i − h_j) ‖ ≤ Σ_{i=1}^r Σ_{j=1}^r λ_i μ_j ‖g_i − h_j‖ ≤ max{‖g_i − h_j‖ : i = 1, …, r, j = 1, …, r}.

Hence, it suffices to prove that if g, h are primitive idempotents, then ‖g − h‖² ≤ 2. But

‖g − h‖² = ⟨g, g⟩ + ⟨h, h⟩ − 2⟨g, h⟩ = tr(g²) + tr(h²) − 2⟨g, h⟩ = 2 − 2⟨g, h⟩ ≤ 2,

since ⟨g, h⟩ ≥ 0. The last inequality follows, for example, from the self-duality of the cone Ω.

Remark 2. Notice that

(5.12) f(x_k) ≤ f(x*) + ε/2

provided

(5.13) k ≥ 16Γ(f)/ε.

In light of this remark, we need to estimate Γ(f_K) for the function f_K.

Proposition 5. Under the assumptions ‖a_i‖ ≤ 1, i = 1, …, m, we have:

(5.14) Γ(f_K) ≤ K.

Proof. By (3.3),

(5.15) D²f_K(x)(h, h) = ⟨h, H_{f_K}(x)h⟩ ≤ K Σ_{i=1}^m α_i(x) ⟨a_i, h⟩² / α(x).

Let v₁, …, v_N be an orthonormal basis in V. Then

Tr(H_{f_K}(x)) = Σ_{j=1}^N ⟨v_j, H_{f_K}(x)v_j⟩ ≤ K Σ_{i=1}^m α_i(x) Σ_{j=1}^N ⟨a_i, v_j⟩² / α(x)
by (5.15). By the Pythagoras theorem,

Σ_{j=1}^N ⟨a_i, v_j⟩² = ‖a_i‖² ≤ 1, i = 1, …, m.

Hence,

(5.16) Tr(H_{f_K}(x)) ≤ K Σ_{i=1}^m α_i(x)/α(x) = K,

since α(x) = Σ_{i=1}^m α_i(x). Since H_{f_K}(x) is positive semidefinite by Proposition 2, we obtain λ_max(H_{f_K}(x)) ≤ Tr(H_{f_K}(x)). The result follows by (5.16).

Recall that K = (2 ln m)/ε. Combining Proposition 5 with (5.13), we obtain:

(5.17) f_K(x_k) ≤ f_K(x*) + ε/2, provided k ≥ (32 ln m)/ε².

6. Optimization problem and extensions. Consider the optimization problem

(6.1) ⟨c, x⟩ → min, ⟨a_i, x⟩ ≤ b_i, i = 1, …, m, x ∈ Ω̄, tr(x) = 1.

We can assume without loss of generality that ‖a_i‖ = 1, i = 1, …, m, and ‖c‖ = 1.

Lemma 2. Suppose that the feasible set of (6.1) is not empty. Then the optimal value γ_opt of (6.1) belongs to [−1, 1].

Proof. Since |⟨c, x⟩| ≤ ‖c‖ ‖x‖ = ‖x‖, it suffices to show that ‖x‖ ≤ 1 for x ∈ ∆. Let x have a spectral decomposition

x = Σ_{i=1}^r λ_i g_i.
Then λ_i ≥ 0, Σ_{i=1}^r λ_i = 1. We have:

‖x‖ ≤ Σ_{i=1}^r λ_i ‖g_i‖.

But

‖g_i‖ = ⟨g_i, g_i⟩^{1/2} = [tr(g_i²)]^{1/2} = [tr(g_i)]^{1/2} = 1,

since the g_i are primitive idempotents. Hence, ‖x‖ ≤ Σ_{i=1}^r λ_i = 1.

We can now apply the standard bisection method to the feasibility problems

⟨c, x⟩ ≤ γ, ⟨a_i, x⟩ ≤ b_i, i = 1, …, m, x ∈ ∆,

starting with γ = 0. Hence, after O(ln(1/ε) · (ln m)/ε²) iterations described in Theorem 1 we can find an ε-approximate solution x̃ to problem (6.1) such that ⟨c, x̃⟩ ≤ γ_opt + ε.

At first glance, the requirement tr(x) = 1 is very restrictive in formulation (6.1). However, a subset of Ω̄ is bounded if and only if the function tr is bounded from above on it (see [1], Corollary I.1.6). Hence, any bounded set of the form

⟨a_i, x⟩ ≤ b_i, i = 1, …, m, x ∈ Ω̄,

can be described as

(6.2) ⟨a_i, x⟩ ≤ b_i, i = 1, …, m, tr(x) ≤ t, x ∈ Ω̄ ⊂ V,

for some fixed t > 0. Making the change of variables y = x/t and adding a slack variable z ≥ 0, we can rewrite (6.2) in the following equivalent form:

⟨a_i, y⟩ ≤ b_i/t, i = 1, …, m, tr(y) + z = 1, (y, z) ∈ Ω̄ × R₊ ⊂ V ⊕ R.

Notice that Ω̄ × R₊ is the cone of squares in the Euclidean Jordan algebra V ⊕ R.

7. Appendix. Each iteration described in Theorem 1 requires solving (approximately) an optimization problem of the form

⟨∇f_K(x), z⟩ → min, z ∈ ∆.

Notice that by (3.5),

‖∇f_K(x)‖ ≤ 1, x ∈ V.
Hence, we need to consider the following problem:

(7.1) ⟨c, x⟩ → min, x ∈ Ω̄, tr(x) = 1,

with ‖c‖ ≤ 1. Notice that the Lagrange dual to (7.1) has the form:

(7.2) μ → max, c − μe ∈ Ω̄.

Here e is the unit element in the Jordan algebra V. It is quite obvious that strong duality holds. Let

(7.3) c = Σ_{i=1}^r λ_i g_i

be the spectral decomposition of c. Then

c − μe = Σ_{i=1}^r (λ_i − μ) g_i,

and the condition c − μe ∈ Ω̄ is equivalent to min{λ_i − μ : i = 1, …, r} ≥ 0, i.e. μ ≤ min{λ_i : i = 1, …, r}. Hence, the common optimal value of (7.1) and (7.2) is λ_min(c), and an optimal solution to (7.1) is g_i, where g_i is the primitive idempotent in the spectral decomposition (7.3) corresponding to the eigenvalue λ_min(c).

Let q = 2e − c. It is obvious that q and c have the same sets of primitive idempotents (Jordan frames) in their spectral decompositions and λ_max(q) = 2 − λ_min(c).

Lemma 3. We have: i) λ_max(q) ∈ [1, 3]; ii) if s is a primitive idempotent such that ⟨q, s⟩ ≥ λ_max(q) − ε, then ⟨c, s⟩ ≤ λ_min(c) + ε.

Proof. Let (7.3) be the spectral decomposition of c. Then ‖c‖ = ⟨c, c⟩^{1/2} = ( Σ_{i=1}^r λ_i² )^{1/2}. Since ‖c‖ ≤ 1, we have |λ_i| ≤ 1, i = 1, …, r. Furthermore,

q = Σ_{i=1}^r (2 − λ_i) g_i.

We obviously have

1 ≤ 2 − λ_i ≤ 3,
which proves i). If ⟨q, s⟩ ≥ λ_max(q) − ε, then

⟨c, s⟩ = ⟨2e − q, s⟩ = 2 − ⟨q, s⟩ ≤ 2 − λ_max(q) + ε = λ_min(c) + ε.

Each iteration described in Theorem 1 requires solving a problem of the type (7.1) approximately. In light of Lemma 3 and the preceding discussion, it suffices to solve the following problem: given q ∈ Ω such that λ_max(q) ∈ [1, 3] and ε > 0, find a primitive idempotent s ∈ V satisfying the following property:

⟨q, s⟩ ≥ λ_max(q) − ε.

Let

V = V₁ ⊕ V₂ ⊕ ⋯ ⊕ V_l

be the decomposition of V into the direct sum of pairwise orthogonal simple algebras. Such a decomposition exists and, moreover, all simple Euclidean algebras are completely classified (see [1], Chapter 5). If

q = Σ_{i=1}^l q_i, q_i ∈ V_i, i = 1, …, l,

then it is clear that

λ_max(q) = max{λ_max(q_i) : i = 1, …, l}.

Hence, solving (7.1) is reduced to finding λ_max(q_i) for each i. Thus, it suffices to consider the problem of finding λ_max(q) and the corresponding primitive idempotent in the case where V is simple.

1. V is a spin algebra, i.e. V = R ⊕ W, where (W, ⟨·, ·⟩) is a Euclidean vector space (possibly infinite-dimensional [2], [4]). Let v = (s, x) ∈ V. Then tr(v) = 2s, and the spectral decomposition has the form:

v = λ₁ g₁ + λ₂ g₂, λ₁ = s + ‖x‖, λ₂ = s − ‖x‖, g₁ = (1/2)(1, x/‖x‖), g₂ = (1/2)(1, −x/‖x‖),

assuming x ≠ 0. Hence, λ_max(v) = s + ‖x‖ and the maximum is attained at g₁. The problem (7.1) admits an analytic solution.

2. V = Herm(n, R) or V = Herm(n, C), i.e. V is the algebra of n by n real symmetric (respectively, complex Hermitian) matrices. This case is analyzed in detail in [5], where it is proposed to use the standard power method.

3. V = Herm(n, H), i.e. the algebra of n by n Hermitian matrices with quaternion entries. As in [6], we consider a realization of V by Hermitian matrices with complex entries. Namely,

V = {C ∈ Herm(2n, C) : J C̄ = C J}.
Here

J = [ 0, I_n ; −I_n, 0 ],

where I_n is the n by n identity matrix. Let q ∈ V. We can consider the usual power iterations

ξ_k = q^k ξ₀ / ‖q^k ξ₀‖_{2n},

where ξ₀ ∈ C^{2n} is an arbitrary nonzero vector. Notice that for ξ ∈ C^{2n}, ‖ξ‖_{2n} = (ξ*ξ)^{1/2}, where ξ* = ξ̄ᵀ. Then we can expect that ρ_k = ξ_k* q ξ_k will converge to λ_max(q), and ξ_k will converge to an eigenvector η, ‖η‖_{2n} = 1, of q with the eigenvalue λ_max(q). In this case, the corresponding primitive idempotent will have the form:

ϕ(η) = ηη* + (J η̄)(J η̄)*.

For details, see [6]. In particular, it is shown in [6] that ⟨q, ϕ(η)⟩ = η* q η = λ_max(q). It is worthwhile to mention that the multiplicity of the eigenvalue λ_max(q) of a Hermitian matrix in V will be at least 2. Thus, it is important that the complexity analysis of the power method in [5] allows a multiple maximal eigenvalue. Notice that

ρ_k = ξ₀* q^{2k+1} ξ₀ / ξ₀* q^{2k} ξ₀ = ⟨q^{2k+1}, ϕ(ξ₀)⟩ / ⟨q^{2k}, ϕ(ξ₀)⟩.

Let

q = Σ_{i=1}^n λ_i g_i

be the spectral decomposition of q as an element of the Jordan algebra V. Let, further,

ϕ(ξ₀) = Σ_{i=1}^n w_i g_i + Σ_{1≤i<j≤n} ϕ_ij

be the Peirce decomposition of ϕ(ξ₀) with respect to the Jordan frame g₁, …, g_n (see [1], Theorem IV.2.1). Then

ρ_k = Σ_{i=1}^n w_i λ_i^{2k+1} / Σ_{i=1}^n w_i λ_i^{2k},

where w_i = ⟨ϕ(ξ₀), g_i⟩ ≥ 0, i = 1, …, n. Compare this expression with (5.15) in [5]. It is therefore clear that the complexity analysis of power iterations in [5] is applicable in our situation.

Finally, we notice that for any q ∈ Ω̄ we have P(q^k) = P(q)^k and λ_max(P(q)) = λ_max(q)². Here P is the quadratic representation of V (see [1], p. 32). Thus, for a general simple algebra V one can apply power iterations to P(q) (which is a linear map of V into itself) to find λ_max(q). While in general this would dramatically increase the dimension of the vector space, it definitely can be applied to the exceptional simple algebra Herm(3, O), whose real dimension is just 27.
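For V = Herm(n, R) the whole scheme (5.4)-(5.5) applied to f_K is easy to sketch. The following is only an illustration on a synthetic feasibility instance: for brevity we substitute an exact eigensolver for the power iterations discussed above, and the data A, b are chosen ad hoc so that I/4 is strictly feasible.

```python
import numpy as np

def f_K_and_grad(x, A, b, K):
    """Value and gradient (3.5) of f_K on symmetric matrices, ⟨a,x⟩ = tr(ax)."""
    y = np.array([np.trace(a @ x) - bi for a, bi in zip(A, b)])
    M = y.max()
    val = M + np.log(np.exp(K * (y - M)).sum()) / K   # stable log-sum-exp
    w = np.exp(K * (y - M)); w /= w.sum()             # weights α_i(x)/α(x)
    return val, sum(wi * a for wi, a in zip(w, A))

def hazan_frank_wolfe(A, b, eps, iters):
    n = A[0].shape[0]
    K = 2 * np.log(len(A)) / eps                      # K = (2 ln m)/ε
    x = np.zeros((n, n)); x[0, 0] = 1.0               # x₁: a primitive idempotent
    for k in range(1, iters + 1):
        _, g = f_K_and_grad(x, A, b, K)
        lam, u = np.linalg.eigh(g)                    # exact λ_min eigenvector,
        v = u[:, 0]                                   # in place of power iterations
        s = np.outer(v, v)                            # primitive idempotent, (5.4)
        x = x + min(1.0, 2.0 / k) * (s - x)           # step (5.5)
    return x, K

# Synthetic instance ⟨a_i, x⟩ ≤ b_i, tr(x) = 1, with I/4 strictly feasible
rng = np.random.default_rng(2)
A = []
for _ in range(6):
    m_ = rng.standard_normal((4, 4)); a = (m_ + m_.T) / 2
    A.append(a / np.linalg.norm(a))                   # normalize: ‖a_i‖ = 1
b = [np.trace(a) / 4 + 0.2 for a in A]

x, K = hazan_frank_wolfe(A, b, eps=0.1, iters=2000)
val, _ = f_K_and_grad(x, A, b, K)
assert abs(np.trace(x) - 1) < 1e-8                    # iterates stay on ∆
assert val <= 0.1                                     # so ⟨a_i,x⟩ ≤ b_i + ε (Prop. 4)
print("f_K(x) =", val)
```

With iters = 2000 the bound of Theorem 1 (with Γ(f_K) ≤ K ≈ 35.8 and an exact linear minimization oracle) already guarantees f_K(x) < 0 here, so the final assertion holds with a comfortable margin.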
In the Remark after Theorem 1 we used the rank inequality. We now prove it.

Theorem 2. Let x, y ∈ Ω̄. Then rank(x + y) ≤ rank(x) + rank(y).

Proof. Since rank is obviously an additive function with respect to the decomposition of V into a direct sum of irreducible subalgebras, we can assume without loss of generality that V is simple. Consider first the case x + y ∈ Ω. Denote by A the linear operator P(x + y)^{−1/2} = P((x + y)^{−1/2}). Here P is the quadratic representation of V. Then we have

A(x) + A(y) = e.

Since A is an automorphism of the cone Ω, it preserves the rank (see [1], Proposition IV.3.1). Thus, it suffices to consider the case x + y = e. If

x = Σ_{i=1}^r λ_i g_i

is the spectral decomposition of x, then

y = Σ_{i=1}^r (1 − λ_i) g_i

is the spectral decomposition of y. Since x, y ∈ Ω̄, we have 0 ≤ λ_i ≤ 1 for all i. Let

I₀ = {i ∈ [1, r] : λ_i = 0}, I₁ = {i ∈ [1, r] : λ_i = 1}, I₂ = {i ∈ [1, r] : 0 < λ_i < 1}.

Then rank(x) = card I₁ + card I₂ and rank(y) = card I₀ + card I₂. Hence,

rank(x) + rank(y) = card I₀ + card I₁ + 2 card I₂ ≥ card I₀ + card I₁ + card I₂ = r = rank(e) = rank(x + y).

Consider now the case rank(x + y) < rank(V). Let

x + y = Σ_{i=1}^r λ_i g_i

be the spectral decomposition with λ₁ ≥ λ₂ ≥ ⋯ ≥ λ_s > λ_{s+1} = ⋯ = λ_r = 0. Consider the idempotent

g = Σ_{i=s+1}^r g_i.
Then x + y belongs to the subalgebra V(0, g) = {z ∈ V : z ∘ g = 0}. But then both x and y belong to this subalgebra. Indeed,

x ∘ g + y ∘ g = (x + y) ∘ g = 0.

Hence, tr(x ∘ g) + tr(y ∘ g) = 0, or

⟨x, g⟩ + ⟨y, g⟩ = 0.

Since ⟨x, g⟩ ≥ 0 and ⟨y, g⟩ ≥ 0, this implies

⟨x, g⟩ = ⟨y, g⟩ = 0.

But then x ∘ g = y ∘ g = 0; see Proposition 2 in [6]. Notice that x + y is invertible in V(0, g). Indeed, e − g is the unit element in V(0, g) and

(x + y) ∘ ( Σ_{i=1}^s λ_i^{−1} g_i ) = e − g.

Hence, we have reduced our problem to the previous case (notice that the rank with respect to the subalgebra V(0, g) coincides with the rank in V; see e.g. [6]).

8. Concluding remarks. In the present paper we have generalized Hazan's algorithm to the case of a general symmetric programming problem with a bounded feasible domain. We have shown that the major property of the algorithm, which detects a low-rank approximation to an optimal solution, is preserved if the concept of the rank of a symmetric matrix is replaced (generalized) by the concept of the rank of an element of a Euclidean Jordan algebra. This gives additional evidence that the concept of Jordan-algebraic rank plays an important role in understanding approximate solutions of general symmetric programming problems (see in this respect also [7]). In addition, we have described in detail how to use the block-diagonal structure in decomposing the Frank-Wolfe algorithm for general symmetric cones. In particular, in the case of second-order cone programming problems (even with infinite-dimensional blocks) the corresponding eigenvalue problem admits a simple analytic solution.

REFERENCES

[1] J. Faraut and A. Koranyi, Analysis on Symmetric Cones, Clarendon Press, Oxford, 1994, 382p.
[2] L. Faybusovich and T. Tsuchiya, Primal-dual algorithms and infinite-dimensional Jordan algebras of finite rank, Mathematical Programming, vol. 97 (2003), no.
3.
[3] E. Hazan, Sparse approximate solutions to semidefinite programs, in Proceedings of the 8th Latin American Conference on Theoretical Informatics (LNCS 4957), Springer-Verlag, Berlin.
[4] L. Faybusovich, T. Mouktonglang and T. Tsuchiya, Implementation of infinite-dimensional interior-point method for solving multi-criteria linear-quadratic control problem, Optim. Methods Softw. 21 (2006), no. 2.
[5] B. Gartner and J. Matousek, Approximation Algorithms and Semidefinite Programming, Springer-Verlag, 2010, 251p.
[6] L. Faybusovich, Jordan-algebraic approach to convexity theorems for quadratic mappings, SIAM J. Optim. 17 (2006), no. 2.
[7] L. Faybusovich, Jordan-algebraic aspects of optimization: randomization, Optim. Methods Softw. 25 (2010), no. 4-6.
positivity manuscript No. (will be inserted by the editor) Some inequalities involving determinants, eigenvalues, and Schur complements in Euclidean Jordan algebras M. Seetharama Gowda Jiyuan Tao February
More informationLecture Notes 1: Vector spaces
Optimization-based data analysis Fall 2017 Lecture Notes 1: Vector spaces In this chapter we review certain basic concepts of linear algebra, highlighting their application to signal processing. 1 Vector
More informationEE 227A: Convex Optimization and Applications October 14, 2008
EE 227A: Convex Optimization and Applications October 14, 2008 Lecture 13: SDP Duality Lecturer: Laurent El Ghaoui Reading assignment: Chapter 5 of BV. 13.1 Direct approach 13.1.1 Primal problem Consider
More informationA classification of sharp tridiagonal pairs. Tatsuro Ito, Kazumasa Nomura, Paul Terwilliger
Tatsuro Ito Kazumasa Nomura Paul Terwilliger Overview This talk concerns a linear algebraic object called a tridiagonal pair. We will describe its features such as the eigenvalues, dual eigenvalues, shape,
More informationMATH 583A REVIEW SESSION #1
MATH 583A REVIEW SESSION #1 BOJAN DURICKOVIC 1. Vector Spaces Very quick review of the basic linear algebra concepts (see any linear algebra textbook): (finite dimensional) vector space (or linear space),
More informationReal Symmetric Matrices and Semidefinite Programming
Real Symmetric Matrices and Semidefinite Programming Tatsiana Maskalevich Abstract Symmetric real matrices attain an important property stating that all their eigenvalues are real. This gives rise to many
More informationPrimal-Dual Interior-Point Methods for Linear Programming based on Newton s Method
Primal-Dual Interior-Point Methods for Linear Programming based on Newton s Method Robert M. Freund March, 2004 2004 Massachusetts Institute of Technology. The Problem The logarithmic barrier approach
More informationChapter 1. Preliminaries. The purpose of this chapter is to provide some basic background information. Linear Space. Hilbert Space.
Chapter 1 Preliminaries The purpose of this chapter is to provide some basic background information. Linear Space Hilbert Space Basic Principles 1 2 Preliminaries Linear Space The notion of linear space
More informationL(C G (x) 0 ) c g (x). Proof. Recall C G (x) = {g G xgx 1 = g} and c g (x) = {X g Ad xx = X}. In general, it is obvious that
ALGEBRAIC GROUPS 61 5. Root systems and semisimple Lie algebras 5.1. Characteristic 0 theory. Assume in this subsection that chark = 0. Let me recall a couple of definitions made earlier: G is called reductive
More informationINTRODUCTION TO LIE ALGEBRAS. LECTURE 10.
INTRODUCTION TO LIE ALGEBRAS. LECTURE 10. 10. Jordan decomposition: theme with variations 10.1. Recall that f End(V ) is semisimple if f is diagonalizable (over the algebraic closure of the base field).
More informationSemidefinite and Second Order Cone Programming Seminar Fall 2001 Lecture 4
Semidefinite and Second Order Cone Programming Seminar Fall 2001 Lecture 4 Instructor: Farid Alizadeh Scribe: Haengju Lee 10/1/2001 1 Overview We examine the dual of the Fermat-Weber Problem. Next we will
More information08a. Operators on Hilbert spaces. 1. Boundedness, continuity, operator norms
(February 24, 2017) 08a. Operators on Hilbert spaces Paul Garrett garrett@math.umn.edu http://www.math.umn.edu/ garrett/ [This document is http://www.math.umn.edu/ garrett/m/real/notes 2016-17/08a-ops
More informationClassification of root systems
Classification of root systems September 8, 2017 1 Introduction These notes are an approximate outline of some of the material to be covered on Thursday, April 9; Tuesday, April 14; and Thursday, April
More informationHomework 4. Convex Optimization /36-725
Homework 4 Convex Optimization 10-725/36-725 Due Friday November 4 at 5:30pm submitted to Christoph Dann in Gates 8013 (Remember to a submit separate writeup for each problem, with your name at the top)
More informationAlgebra II. Paulius Drungilas and Jonas Jankauskas
Algebra II Paulius Drungilas and Jonas Jankauskas Contents 1. Quadratic forms 3 What is quadratic form? 3 Change of variables. 3 Equivalence of quadratic forms. 4 Canonical form. 4 Normal form. 7 Positive
More informationStructural and Multidisciplinary Optimization. P. Duysinx and P. Tossings
Structural and Multidisciplinary Optimization P. Duysinx and P. Tossings 2018-2019 CONTACTS Pierre Duysinx Institut de Mécanique et du Génie Civil (B52/3) Phone number: 04/366.91.94 Email: P.Duysinx@uliege.be
More informationAssignment 1: From the Definition of Convexity to Helley Theorem
Assignment 1: From the Definition of Convexity to Helley Theorem Exercise 1 Mark in the following list the sets which are convex: 1. {x R 2 : x 1 + i 2 x 2 1, i = 1,..., 10} 2. {x R 2 : x 2 1 + 2ix 1x
More informationLecture Note 5: Semidefinite Programming for Stability Analysis
ECE7850: Hybrid Systems:Theory and Applications Lecture Note 5: Semidefinite Programming for Stability Analysis Wei Zhang Assistant Professor Department of Electrical and Computer Engineering Ohio State
More informationNUMERICAL EXPERIMENTS WITH UNIVERSAL BARRIER FUNCTIONS FOR CONES OF CHEBYSHEV SYSTEMS
NUMERICAL EXPERIMENTS WITH UNIVERSAL BARRIER FUNCTIONS FOR CONES OF CHEBYSHEV SYSTEMS L FAYBUSOVICH, T MOUKTONGLANG, T TSUCHIYA Abstract Based on previous explicit computations of universal barrier functions,
More informationLINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM
LINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM Unless otherwise stated, all vector spaces in this worksheet are finite dimensional and the scalar field F is R or C. Definition 1. A linear operator
More informationConstrained Optimization and Lagrangian Duality
CIS 520: Machine Learning Oct 02, 2017 Constrained Optimization and Lagrangian Duality Lecturer: Shivani Agarwal Disclaimer: These notes are designed to be a supplement to the lecture. They may or may
More informationLecture: Introduction to LP, SDP and SOCP
Lecture: Introduction to LP, SDP and SOCP Zaiwen Wen Beijing International Center For Mathematical Research Peking University http://bicmr.pku.edu.cn/~wenzw/bigdata2015.html wenzw@pku.edu.cn Acknowledgement:
More informationFurther Mathematical Methods (Linear Algebra)
Further Mathematical Methods (Linear Algebra) Solutions For The 2 Examination Question (a) For a non-empty subset W of V to be a subspace of V we require that for all vectors x y W and all scalars α R:
More informationImplementation of Infinite Dimensional Interior Point Method for Solving Multi-criteria Linear Quadratic Control Problem
Implementation of Infinite Dimensional Interior Point Method for Solving Multi-criteria Linear Quadratic Control Problem L. Faybusovich Department of Mathematics, University of Notre Dame, Notre Dame,
More informationLecture 14: Optimality Conditions for Conic Problems
EE 227A: Conve Optimization and Applications March 6, 2012 Lecture 14: Optimality Conditions for Conic Problems Lecturer: Laurent El Ghaoui Reading assignment: 5.5 of BV. 14.1 Optimality for Conic Problems
More informationLINEAR ALGEBRA REVIEW
LINEAR ALGEBRA REVIEW JC Stuff you should know for the exam. 1. Basics on vector spaces (1) F n is the set of all n-tuples (a 1,... a n ) with a i F. It forms a VS with the operations of + and scalar multiplication
More informationSome characterizations of cone preserving Z-transformations
Annals of Operations Research manuscript No. (will be inserted by the editor) Some characterizations of cone preserving Z-transformations M. Seetharama Gowda Yoon J. Song K.C. Sivakumar This paper is dedicated,
More informationINVARIANT PROBABILITIES ON PROJECTIVE SPACES. 1. Introduction
INVARIANT PROBABILITIES ON PROJECTIVE SPACES YVES DE CORNULIER Abstract. Let K be a local field. We classify the linear groups G GL(V ) that preserve an probability on the Borel subsets of the projective
More informationSOS TENSOR DECOMPOSITION: THEORY AND APPLICATIONS
SOS TENSOR DECOMPOSITION: THEORY AND APPLICATIONS HAIBIN CHEN, GUOYIN LI, AND LIQUN QI Abstract. In this paper, we examine structured tensors which have sum-of-squares (SOS) tensor decomposition, and study
More informationKey words. Complementarity set, Lyapunov rank, Bishop-Phelps cone, Irreducible cone
ON THE IRREDUCIBILITY LYAPUNOV RANK AND AUTOMORPHISMS OF SPECIAL BISHOP-PHELPS CONES M. SEETHARAMA GOWDA AND D. TROTT Abstract. Motivated by optimization considerations we consider cones in R n to be called
More informationECE580 Fall 2015 Solution to Midterm Exam 1 October 23, Please leave fractions as fractions, but simplify them, etc.
ECE580 Fall 2015 Solution to Midterm Exam 1 October 23, 2015 1 Name: Solution Score: /100 This exam is closed-book. You must show ALL of your work for full credit. Please read the questions carefully.
More information5 Compact linear operators
5 Compact linear operators One of the most important results of Linear Algebra is that for every selfadjoint linear map A on a finite-dimensional space, there exists a basis consisting of eigenvectors.
More informationElementary linear algebra
Chapter 1 Elementary linear algebra 1.1 Vector spaces Vector spaces owe their importance to the fact that so many models arising in the solutions of specific problems turn out to be vector spaces. The
More informationLinear Algebra: Matrix Eigenvalue Problems
CHAPTER8 Linear Algebra: Matrix Eigenvalue Problems Chapter 8 p1 A matrix eigenvalue problem considers the vector equation (1) Ax = λx. 8.0 Linear Algebra: Matrix Eigenvalue Problems Here A is a given
More informationAN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES
AN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES JOEL A. TROPP Abstract. We present an elementary proof that the spectral radius of a matrix A may be obtained using the formula ρ(a) lim
More informationECON 4117/5111 Mathematical Economics Fall 2005
Test 1 September 30, 2005 Read Me: Please write your answers on the answer book provided. Use the rightside pages for formal answers and the left-side pages for your rough work. Do not forget to put your
More informationStochastic Design Criteria in Linear Models
AUSTRIAN JOURNAL OF STATISTICS Volume 34 (2005), Number 2, 211 223 Stochastic Design Criteria in Linear Models Alexander Zaigraev N. Copernicus University, Toruń, Poland Abstract: Within the framework
More informationSome parametric inequalities on strongly regular graphs spectra
INTERNATIONAL JOURNAL OF PURE MATHEMATICS Volume 3, 06 Some parametric inequalities on strongly regular graphs spectra Vasco Moço Mano and Luís de Almeida Vieira Email: vascomocomano@gmailcom Faculty of
More informationAnalysis Preliminary Exam Workshop: Hilbert Spaces
Analysis Preliminary Exam Workshop: Hilbert Spaces 1. Hilbert spaces A Hilbert space H is a complete real or complex inner product space. Consider complex Hilbert spaces for definiteness. If (, ) : H H
More informationOn John type ellipsoids
On John type ellipsoids B. Klartag Tel Aviv University Abstract Given an arbitrary convex symmetric body K R n, we construct a natural and non-trivial continuous map u K which associates ellipsoids to
More informationSupport Vector Machines
Support Vector Machines Support vector machines (SVMs) are one of the central concepts in all of machine learning. They are simply a combination of two ideas: linear classification via maximum (or optimal
More informationLinear Algebra Massoud Malek
CSUEB Linear Algebra Massoud Malek Inner Product and Normed Space In all that follows, the n n identity matrix is denoted by I n, the n n zero matrix by Z n, and the zero vector by θ n An inner product
More informationOptimality, Duality, Complementarity for Constrained Optimization
Optimality, Duality, Complementarity for Constrained Optimization Stephen Wright University of Wisconsin-Madison May 2014 Wright (UW-Madison) Optimality, Duality, Complementarity May 2014 1 / 41 Linear
More informationContents. Preface for the Instructor. Preface for the Student. xvii. Acknowledgments. 1 Vector Spaces 1 1.A R n and C n 2
Contents Preface for the Instructor xi Preface for the Student xv Acknowledgments xvii 1 Vector Spaces 1 1.A R n and C n 2 Complex Numbers 2 Lists 5 F n 6 Digression on Fields 10 Exercises 1.A 11 1.B Definition
More informationProblems in Linear Algebra and Representation Theory
Problems in Linear Algebra and Representation Theory (Most of these were provided by Victor Ginzburg) The problems appearing below have varying level of difficulty. They are not listed in any specific
More informationREVIEW OF DIFFERENTIAL CALCULUS
REVIEW OF DIFFERENTIAL CALCULUS DONU ARAPURA 1. Limits and continuity To simplify the statements, we will often stick to two variables, but everything holds with any number of variables. Let f(x, y) be
More informationSemidefinite and Second Order Cone Programming Seminar Fall 2001 Lecture 2
Semidefinite and Second Order Cone Programming Seminar Fall 2001 Lecture 2 Instructor: Farid Alizadeh Scribe: Xuan Li 9/17/2001 1 Overview We survey the basic notions of cones and cone-lp and give several
More informationNonlinear equations. Norms for R n. Convergence orders for iterative methods
Nonlinear equations Norms for R n Assume that X is a vector space. A norm is a mapping X R with x such that for all x, y X, α R x = = x = αx = α x x + y x + y We define the following norms on the vector
More informationAn LMI description for the cone of Lorentz-positive maps II
An LMI description for the cone of Lorentz-positive maps II Roland Hildebrand October 6, 2008 Abstract Let L n be the n-dimensional second order cone. A linear map from R m to R n is called positive if
More informationDeterministic Kalman Filtering on Semi-infinite Interval
Deterministic Kalman Filtering on Semi-infinite Interval L. Faybusovich and T. Mouktonglang Abstract We relate a deterministic Kalman filter on semi-infinite interval to linear-quadratic tracking control
More informationFIXED POINT ITERATIONS
FIXED POINT ITERATIONS MARKUS GRASMAIR 1. Fixed Point Iteration for Non-linear Equations Our goal is the solution of an equation (1) F (x) = 0, where F : R n R n is a continuous vector valued mapping in
More informationSTRUCTURE AND DECOMPOSITIONS OF THE LINEAR SPAN OF GENERALIZED STOCHASTIC MATRICES
Communications on Stochastic Analysis Vol 9, No 2 (2015) 239-250 Serials Publications wwwserialspublicationscom STRUCTURE AND DECOMPOSITIONS OF THE LINEAR SPAN OF GENERALIZED STOCHASTIC MATRICES ANDREAS
More informationLecture 8: Semidefinite programs for fidelity and optimal measurements
CS 766/QIC 80 Theory of Quantum Information (Fall 0) Lecture 8: Semidefinite programs for fidelity and optimal measurements This lecture is devoted to two examples of semidefinite programs: one is for
More informationMATHEMATICS. Course Syllabus. Section A: Linear Algebra. Subject Code: MA. Course Structure. Ordinary Differential Equations
MATHEMATICS Subject Code: MA Course Structure Sections/Units Section A Section B Section C Linear Algebra Complex Analysis Real Analysis Topics Section D Section E Section F Section G Section H Section
More informationApplications of Linear Programming
Applications of Linear Programming lecturer: András London University of Szeged Institute of Informatics Department of Computational Optimization Lecture 9 Non-linear programming In case of LP, the goal
More informationConvex Optimization M2
Convex Optimization M2 Lecture 3 A. d Aspremont. Convex Optimization M2. 1/49 Duality A. d Aspremont. Convex Optimization M2. 2/49 DMs DM par email: dm.daspremont@gmail.com A. d Aspremont. Convex Optimization
More informationLinear Algebra. Session 12
Linear Algebra. Session 12 Dr. Marco A Roque Sol 08/01/2017 Example 12.1 Find the constant function that is the least squares fit to the following data x 0 1 2 3 f(x) 1 0 1 2 Solution c = 1 c = 0 f (x)
More informationUniqueness of the Solutions of Some Completion Problems
Uniqueness of the Solutions of Some Completion Problems Chi-Kwong Li and Tom Milligan Abstract We determine the conditions for uniqueness of the solutions of several completion problems including the positive
More informationSimultaneous Diagonalization of Positive Semi-definite Matrices
Simultaneous Diagonalization of Positive Semi-definite Matrices Jan de Leeuw Version 21, May 21, 2017 Abstract We give necessary and sufficient conditions for solvability of A j = XW j X, with the A j
More informationALGEBRA QUALIFYING EXAM PROBLEMS LINEAR ALGEBRA
ALGEBRA QUALIFYING EXAM PROBLEMS LINEAR ALGEBRA Kent State University Department of Mathematical Sciences Compiled and Maintained by Donald L. White Version: August 29, 2017 CONTENTS LINEAR ALGEBRA AND
More information