Implementation of Infinite Dimensional Interior Point Method for Solving Multi-criteria Linear Quadratic Control Problem


L. Faybusovich, Department of Mathematics, University of Notre Dame, Notre Dame, IN, USA.
T. Mouktonglang, Department of Mathematics, University of Notre Dame, Notre Dame, IN, USA.
T. Tsuchiya, The Institute of Statistical Mathematics, Minami-Azabu, Minato-Ku, Tokyo, Japan.

May 2004

Footnotes: The first two authors were supported in part by National Science Foundation grants. The third author was supported in part by the Grant-in-aid for Scientific Research (C) of the Ministry of Education, Culture, Sports, Science and Technology of Japan.

Abstract. We describe an implementation of an infinite-dimensional primal-dual algorithm based on the Nesterov-Todd direction. Several applications to both continuous and discrete-time multi-criteria linear-quadratic control problems and to the linear-quadratic control problem with quadratic constraints are described. Numerical results show very fast convergence (typically, within 3-4 iterations) to optimal solutions.

1 Introduction

One of the current trends in the development of interior-point algorithms of optimization is the extension of the domain of their applicability. While at first the linear programming problem was the major target, subsequent developments led to the creation of algorithms and software for a much broader class of symmetric programming problems. This class includes semidefinite programming and second-order cone programming problems. See e.g. [2] for numerous

engineering applications. Of course, these extensions are not without a price. While the general theory (very similar to the linear programming case) has been developed and cast in a very elegant form with the help of the technique of Euclidean Jordan algebras (see e.g. [1, 3, 4, 5, 6, 13, 14, 16, 18]), the computational cost of implementation grew quite significantly (mostly due to the cost of the major computational Newton-like step). In the present paper we make the next natural step along this road. We consider a class of infinite-dimensional optimization problems and a primal-dual algorithm based on the Nesterov-Todd direction. We implemented this algorithm for a class of control problems: the multi-criteria linear-quadratic control problem and the linear-quadratic control problem with quadratic constraints (in both continuous and discrete-time settings). The computation of the Nesterov-Todd direction is an infinite-dimensional (linear) problem. Thus, the cost of the major computational step is even higher in comparison with previous, essentially finite-dimensional schemes. What is crucial here is a careful consideration of the structure of the problem. In particular, in the control applications mentioned above the infinite-dimensional part of the Newton-like step reduces to solving a linear-quadratic control problem with an extra linear term in the cost function. As we show in this paper, the presence of this extra linear term does not lead to any significant complications, and (as in the standard control-theoretic setting) the solution of this problem essentially reduces to solving one of the standard Riccati equations (differential, difference or algebraic, depending on the concrete setting of the problem).
The conceptual difference in the implementation of the algorithm for continuous-time and discrete-time systems is quite minimal: it lies essentially in the organization of data, in different schemes for the computation of scalar products in various functional spaces, and in solving the linear-quadratic problem briefly described above. A Newton-like step for a continuous-time problem is a computationally expensive operation. The good news here is that it is much cheaper for discrete-time systems. Combining this with the experimental fact (which can be partially explained by theoretical complexity estimates [10]) that three to four iterations of the algorithm lead to very reasonable approximations of optimal solutions, one can hope that on-line versions of the algorithm can be implemented. This, in turn, may lead to entirely new and exciting methodologies for the multi-criteria regulation of control systems.

2 Primal-dual interior-point algorithms

In this section, following [10], we describe a general setup for a class of infinite-dimensional optimization problems. We then proceed with the optimality criterion and describe a primal-dual algorithm based on the Nesterov-Todd (NT) direction. Let (H, <,>) be a Hilbert space and let V = IR × H. Let, further, V = V × ... × V (m times), and let X ⊂ V be a closed vector subspace in V.

Consider the following standard optimization problem:

<a, z>_V → min, (2.1)
z ∈ (b + X) ∩ cl Ω, (2.2)

and its dual

<b, w>_V → min, (2.3)
w ∈ (a + X^⊥) ∩ cl Ω, (2.4)

where our cone Ω is a second-order cone (an open convex set in V):

Ω = {(s, y) ∈ IR × H : s > ||y||_H}

(sometimes also called the cone of squares), and Ω = Ω × ... × Ω (m times). It is known that its closure is

cl Ω = {(s, y) ∈ IR × H : s ≥ ||y||_H}.

Suppose z, w ∈ V,

z = {(s_1, x_1), ..., (s_m, x_m)}, w = {(t_1, y_1), ..., (t_m, y_m)}.

Then we define an inner product on V as follows:

<z, w>_V = Σ_{i=1}^m (s_i t_i + <x_i, y_i>_H).

The cone cl Ω is self-dual, i.e.

(cl Ω)* = {z ∈ V : <w, z> ≥ 0 for all w ∈ cl Ω} = cl Ω.

Denote by X^⊥ the orthogonal complement of X in V with respect to <,>_V. Let, further,

F = [(b + X) ∩ cl Ω] × [(a + X^⊥) ∩ cl Ω].

We assume that

int(F) = [(b + X) ∩ Ω] × [(a + X^⊥) ∩ Ω] ≠ ∅. (2.5)

It is easy to see that if a pair z, w satisfies (2.2), (2.4) and <z, w> = 0, then z is an optimal solution to (2.1)-(2.2) and w is an optimal solution to (2.3)-(2.4).
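A finite-dimensional sketch makes the cone and the inner product on V concrete. Here H is replaced by IR^3, and the function names are ours, not the paper's:

```python
import numpy as np

def in_cone(block, strict=False):
    """Membership of a block (s, y) in the second-order cone
    {(s, y) : s >= ||y||}; with strict=True, the interior s > ||y||."""
    s, y = block
    return s > np.linalg.norm(y) if strict else s >= np.linalg.norm(y)

def inner_V(z, w):
    """<z, w>_V = sum_i (s_i t_i + <x_i, y_i>_H) over the m blocks."""
    return sum(s * t + x @ y for (s, x), (t, y) in zip(z, w))

# Two blocks (m = 2) with H = R^3.
z = [(2.0, np.array([1.0, 1.0, 0.0])), (1.0, np.array([0.0, 0.5, 0.0]))]
w = [(3.0, np.array([0.0, 1.0, 2.0])), (2.0, np.array([1.0, 0.0, 1.0]))]

assert all(in_cone(b, strict=True) for b in z + w)   # all blocks interior
# Self-duality in action: <z, w>_V >= 0 whenever both points lie in the cone.
assert inner_V(z, w) >= 0.0
```

The per-block nonnegativity of s t + <x, y> for cone points is exactly the Cauchy-Schwarz inequality, which is what makes the cone self-dual.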

Definition 2.1. Given (z, w), the duality gap μ(z, w) is defined as follows:

μ(z, w) = <z, w>_V / r,

where r = 2m. A typical primal-dual algorithm generates a sequence of pairs of primal and dual feasible points (z^(k), w^(k)) ∈ int(F), k = 1, 2, ..., such that

μ(z^(k+1), w^(k+1)) ≤ (1 − δ/r^ω) μ(z^(k), w^(k)), (2.6)

for some positive δ and ω (see e.g. [9]). The following theorem (see [10]) provides necessary and sufficient conditions for a pair (z*, w*) to be an optimal solution to (2.1),(2.2) and (2.3),(2.4).

Theorem 2.2. Suppose the condition (2.5) holds. Then both (2.1),(2.2) and (2.3),(2.4) have optimal solutions. Moreover, z* is an optimal solution to (2.1),(2.2) and w* is an optimal solution to (2.3),(2.4) if and only if <z*, w*>_V = 0.

One of the most important steps in the implementation of primal-dual algorithms is the computation of a descent direction which drives the duality gap μ to zero. One of these directions (the so-called NT-direction) is described below. Let z_i = (t_i, x_i), w_i = (s_i, y_i) ∈ V. We define the determinant of z_i as follows:

det(z_i) = t_i^2 − ||x_i||^2.

The multiplication in V is defined as follows:

(t_i, x_i) ∘ (s_i, y_i) = (t_i s_i + <x_i, y_i>_H, s_i x_i + t_i y_i).

Observe that this multiplication is commutative but not associative. It defines the structure of a Jordan algebra (see e.g. [10] for more details). The inverse z_i^{-1} of z_i is given by the formula

z_i^{-1} = det(z_i)^{-1} (t_i, −x_i).

Next we consider the function

f(z_i) = −ln det(z_i) = −ln(t_i^2 − ||x_i||^2).

The quadratic representation P(z_i) is defined as follows:

P(z_i) = H(z_i)^{-1},

where H(z_i) is the Hessian of f(z_i).
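The Jordan-algebraic operations above are easy to state in code. For this cone the quadratic representation has the standard closed form P(z)w = 2<z, w>z − det(z)Rw with R(s, y) = (s, −y) — a known identity for second-order cones, used here as an assumption since the text does not spell it out. A sketch with H = IR^2:

```python
import numpy as np

def jdet(z):
    """det(t, x) = t^2 - ||x||^2."""
    t, x = z
    return t * t - x @ x

def jmul(z, w):
    """Jordan product (t, x) o (s, y) = (ts + <x, y>, s x + t y)."""
    t, x = z
    s, y = w
    return (t * s + x @ y, s * x + t * y)

def jinv(z):
    """z^{-1} = det(z)^{-1} (t, -x)."""
    t, x = z
    d = jdet(z)
    return (t / d, -x / d)

def quad_rep(z, w):
    """P(z)w = 2<z, w>z - det(z) R w, with R(s, y) = (s, -y)."""
    t, x = z
    s, y = w
    ip = t * s + x @ y
    d = jdet(z)
    return (2 * ip * t - d * s, 2 * ip * x + d * y)

z = (3.0, np.array([1.0, 2.0]))
e = (1.0, np.zeros(2))              # the unit element

s1, y1 = jmul(z, e)                 # z o e = z
assert abs(s1 - z[0]) < 1e-12 and np.allclose(y1, z[1])
ti, xi = jmul(z, jinv(z))           # z o z^{-1} = e
assert abs(ti - 1.0) < 1e-12 and np.allclose(xi, 0.0)
s0, y0 = quad_rep(z, jinv(z))       # P(z) z^{-1} = z, a basic identity
assert abs(s0 - z[0]) < 1e-12 and np.allclose(y0, z[1])
```

One can also check numerically that det(P(z)w) = det(z)^2 det(w), the determinant identity that underlies the scaling-point formula of the next page.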

Remark: Since V is a product of m copies of V, we define the quadratic representation P(z) for z = (z_1, ..., z_m) ∈ V as follows: P(z) = (P(z_1), ..., P(z_m)).

One of the main ingredients in the construction of the NT-direction is the so-called scaling point.

Proposition 2.3. Given (z_1, z_2) ∈ Ω × Ω, there exists a unique z_3 ∈ Ω such that

P(z_3) z_1 = z_2. (2.7)

The following corollary gives an explicit formula for the scaling point.

Corollary 2.4. Let z_1, z_2 ∈ Ω, z_1 = (s, y), z_2 = (t, x). Consider z_3 = (r, u) with

r = μ_1 (μ_1 s + μ_2 t) / sqrt(μ_1 μ_2 (2 + 2 μ_1 μ_2 <(s, y), (t, x)>)),
u = μ_1 (μ_2 x − μ_1 y) / sqrt(μ_1 μ_2 (2 + 2 μ_1 μ_2 <(s, y), (t, x)>)),
μ_i = 1/sqrt(det(z_i)), i = 1, 2.

Then z_3 ∈ Ω is the unique solution to (2.7) for V = IR × H.

Proposition 2.5. Every element z ∈ V admits the following spectral decomposition:

z = λ_1 e_1 + λ_2 e_2,

where λ_i ∈ IR and e_i ∘ e_j = δ_ij e_i for i, j = 1, 2.

The following proposition gives an explicit formula for the spectral decomposition of an element z ∈ V.

Proposition 2.6. Let z = (s, y) ∈ V, y ≠ 0. Consider

e_1 = (1/2) (1, y/||y||), e_2 = (1/2) (1, −y/||y||), λ_1 = s + ||y||, λ_2 = s − ||y||.

Then (s, y) = λ_1 e_1 + λ_2 e_2 and e_i ∘ e_j = δ_ij e_i for i, j = 1, 2.

Proposition 2.7. Let Ω be the cone of squares of V and (s, y) ∈ Ω. Consider

z = (μ/2, y/μ), μ = sqrt(s + ||y||) + sqrt(s − ||y||).

Then z ∈ Ω and z^2 = (s, y). Moreover, if z^2 = λ_1 e_1 + λ_2 e_2 is the spectral decomposition of z^2, then

z = sqrt(λ_1) e_1 + sqrt(λ_2) e_2.

Denote z by (s, y)^{1/2}.

Proposition 2.8. We have P(z^{1/2})^2 = P(z). Hence,

P(z^{1/2}) = P(z)^{1/2}. (2.8)

For proofs of the above propositions and corollary, see [10]. We are now in a position to introduce the Nesterov-Todd direction (see e.g. [15]). Given z_1, z_2 ∈ Ω, let z_3 ∈ Ω be the scaling point of z_1 and z_2 defined in (2.7). Then the NT-direction (ξ, η) ∈ X × X^⊥ is defined to be the solution to the following system of linear equations:

P(z_3) ξ + η = γ μ(z_1, z_2) z_1^{-1} − z_2, (2.9)
ξ ∈ X, η ∈ X^⊥. (2.10)

Here 0 < γ < 1 is a real parameter. One can show (see [10]) that (2.9),(2.10) admits a unique solution.

Proposition 2.9. By following the NT-direction, the duality gap decreases.

Proof. Let (z_1^(k), z_2^(k)) ∈ Ω × Ω, let (ξ^(k), η^(k)) be the NT-direction at (z_1^(k), z_2^(k)), and let t^(k) > 0. Define

z_1^(k+1) = z_1^(k) + t^(k) ξ^(k), z_2^(k+1) = z_2^(k) + t^(k) η^(k).

Then consider the following equality (the quadratic term vanishes since <ξ^(k), η^(k)> = 0):

<z_1^(k+1), z_2^(k+1)> = <z_1^(k) + t^(k) ξ^(k), z_2^(k) + t^(k) η^(k)> = <z_1^(k), z_2^(k)> + t^(k) (<z_1^(k), η^(k)> + <z_2^(k), ξ^(k)>).

So it suffices to show that <z_1^(k), η^(k)> + <z_2^(k), ξ^(k)> < 0. Indeed,

<z_1^(k), η^(k)> + <z_2^(k), ξ^(k)> = <z_1^(k), η^(k)> + <P(z_3) z_1^(k), ξ^(k)>
= <z_1^(k), η^(k) + P(z_3) ξ^(k)>
= <z_1^(k), γ μ(z_1^(k), z_2^(k)) (z_1^(k))^{-1} − z_2^(k)>
= (γ − 1) <z_1^(k), z_2^(k)> < 0 (since 0 < γ < 1),

where we also used <z_1^(k), (z_1^(k))^{-1}> = 2m and μ(z_1^(k), z_2^(k)) = <z_1^(k), z_2^(k)>/(2m).

Here we used (2.9). Hence we have

<z_1^(k+1), z_2^(k+1)> < <z_1^(k), z_2^(k)>, μ(z_1^(k+1), z_2^(k+1)) < μ(z_1^(k), z_2^(k)).

We conclude this section with a brief description of a primal-dual algorithm based on the NT-direction. For a fixed ε > 0, suppose that

(z^(0), w^(0)) ∈ [int(Ω) ∩ (b + X)] × [int(Ω) ∩ (a + X^⊥)].

Let (ξ_k, η_k) be the descent NT-direction obtained by solving the system of linear equations (2.9),(2.10), and let t > 0 be the largest value such that

(z^(k) + t ξ_k, w^(k) + t η_k) ∈ [int(Ω) ∩ (b + X)] × [int(Ω) ∩ (a + X^⊥)].

Then we set (z^(k+1), w^(k+1)) = (z^(k) + t ξ_k, w^(k) + t η_k). We stop the iteration when μ(z^(k), w^(k)) ≤ ε. In [10] it is shown that, by choosing the step size carefully, the algorithm runs in polynomial time, i.e. it generates a sequence of primal and dual feasible iterates satisfying (2.6) with ω = 0.5 or 1 and with δ an independent positive constant.

3 Abstract Formulation of the Problem and Implementation

Let H be a Hilbert space. Consider the following optimization problem:

t → min, (3.1)
||W_i(y − y_i)||_H ≤ t, i = 1, ..., m, (3.2)
y ∈ c + Z. (3.3)

Here ||·||_H is the norm induced by the scalar product <,>_H, the y_i ∈ H are fixed, Z is a closed vector subspace in H, and W_i : H → H is a bounded linear operator for i = 1, ..., m. The problem (3.1)-(3.3) is an example of an infinite-dimensional second-order cone programming problem; it has been analyzed in detail in [10]. We have implemented a version of a primal-dual algorithm based on the Nesterov-Todd direction described in [10]. The problem (3.1)-(3.3) can easily be rewritten in conic form. Let Ω = {(s, y) ∈ IR × H : s ≥ ||y||_H}, V = IR × H, Ω = Ω × ... × Ω (m times), V = V × ... × V (m times). Let, further, Λ : IR × H → V be the linear operator such that:

Λ(s, y) = {(s, W_1 y), ..., (s, W_m y)},

a = ((1, 0), (0, 0), ..., (0, 0)) ∈ V, b = ((0, W_1(c − y_1)), ..., (0, W_m(c − y_m))), and X = Λ(IR × Z). The scalar product <,>_V in V is defined as follows:

<((t_1, x_1), ..., (t_m, x_m)), ((s_1, y_1), ..., (s_m, y_m))>_V = Σ_{i=1}^m {t_i s_i + <x_i, y_i>_H}.

With these notations we can rewrite the problem (3.1)-(3.3) in the conic form (2.1),(2.2).

Proposition 3.1. We have

X^⊥ = {((r_1, u_1), ..., (r_m, u_m)) ∈ V : r_1 + ... + r_m = 0, Σ_{i=1}^m W_i* u_i ∈ Z^⊥}, (3.4)

where Z^⊥ is the orthogonal complement of Z and W_i* is the adjoint of W_i for each i.

The conic dual to (2.1),(2.2) will have the following form ([10]):

Σ_{i=1}^m <W_i(c − y_i), u_i>_H → min, (3.5)
Σ_{i=1}^m W_i* u_i ∈ Z^⊥, ||u_i|| ≤ r_i, i = 1, ..., m, (3.6)
Σ_{i=1}^m r_i = 1. (3.7)

3.1 Calculation of the NT-direction

Let (z_1, z_2) be a pair of feasible solutions to the problems (2.1),(2.2) and (2.3),(2.4). To obtain the NT-direction at each iteration, we need to solve the following equation for (ξ, η):

P(z_3) ξ + η = Δ, ξ ∈ X, η ∈ X^⊥, (3.8)

where z_3 is the scaling point of z_1 and z_2, and Δ = γ μ(z_1, z_2) z_1^{-1} − z_2. (Compare with (2.9),(2.10).) The equation (3.8) is equivalent to:

(1/2) <P(z_3) ξ, ξ> − <ξ, Δ> → min, ξ ∈ X.
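In finite dimensions, the minimization characterizing (3.8) is an equality-constrained quadratic program: parametrizing X by a basis reduces it to one linear solve, and the residual η = Δ − P(z_3)ξ automatically lies in X^⊥. A sketch with randomly generated stand-ins for P(z_3), Δ and the subspace (all data synthetic):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 6, 3
Bmat = rng.standard_normal((n, k))      # columns span the subspace X
M0 = rng.standard_normal((n, n))
P = M0 @ M0.T + n * np.eye(n)           # positive definite stand-in for P(z3)
delta = rng.standard_normal(n)          # stand-in for gamma*mu*z1^{-1} - z2

# Minimize 0.5 <P xi, xi> - <xi, delta> over xi in X = range(Bmat):
v = np.linalg.solve(Bmat.T @ P @ Bmat, Bmat.T @ delta)
xi = Bmat @ v
eta = delta - P @ xi                    # so that P xi + eta = delta exactly

assert np.allclose(Bmat.T @ eta, 0.0)   # eta is orthogonal to X
```

The stationarity condition B^T(Pξ − Δ) = 0 of the reduced problem is precisely the statement η ∈ X^⊥, which is why solving the small system recovers the full NT system (3.8).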

Let z_3 = ((t_1, x_1), (t_2, x_2), ..., (t_m, x_m)) and Δ = ((r_1, u_1), ..., (r_m, u_m)). For (μ, ζ) ∈ IR × Z and ξ = Λ(μ, ζ),

ρ(μ, ζ) = (1/2) <P(z_3) ξ, ξ> − <ξ, Δ>
= (1/2) Σ_{i=1}^m (t_i^2 − ||x_i||^2) ||W_i ζ||^2 + Σ_{i=1}^m <x_i, W_i ζ>^2 − Σ_{i=1}^m <u_i, W_i ζ> + ν_1 μ^2/2 + ν_2 μ,

where

ν_1 = Σ_{i=1}^m (t_i^2 + ||x_i||^2), ν_2 = 2 Σ_{i=1}^m t_i <x_i, W_i ζ> − Σ_{i=1}^m r_i.

Hence, if we denote φ(ζ) = min{ρ(μ, ζ) : μ ∈ IR}, then

φ(ζ) = (1/2) <ζ, M ζ> + (1/2) Σ_{i=1}^{m+1} ε_i <υ_i, ζ>^2 + <υ_0, ζ> − (Σ_{i=1}^m r_i)^2 / (2 ν_1), (3.9)

where

M = Σ_{i=1}^m (t_i^2 − ||x_i||^2) W_i* W_i, (3.10)

υ_0 = ((Σ_{i=1}^m r_i)/sqrt(ν_1)) υ_{m+1} − Σ_{i=1}^m W_i* u_i, (3.11)

υ_i = sqrt(2) W_i* x_i, i = 1, 2, ..., m, υ_{m+1} = (2/sqrt(ν_1)) Σ_{i=1}^m t_i W_i* x_i, (3.12)

ε_i = 1, i = 1, ..., m, ε_{m+1} = −1. (3.13)

(In the control applications below, W_i* W_i is the block-diagonal operator with blocks Q_i and R_i.) As a result, to obtain the NT-direction it suffices to solve the following optimization problem: φ(ζ) → min, ζ ∈ Z, which is equivalent to solving:

(1/2) <ζ, M ζ> + (1/2) Σ_{i=1}^{m+1} ε_i <υ_i, ζ>^2 + <υ_0, ζ> → min, (3.14)
ζ ∈ Z.

The following theorem further simplifies the problem (3.14).

Theorem 3.2. Let ζ_0 be the optimal solution to the problem:

(1/2) <ζ, M ζ> + <υ_0, ζ> → min, (3.15)
ζ ∈ Z, (3.16)

and let ζ_i, i = 1, 2, ..., m+1, be the optimal solutions to the problems:

(1/2) <ζ, M ζ> + <V_i, ζ> → min, (3.17)
ζ ∈ Z, (3.18)

where V_i = ε_i υ_i. Let S = (s_ij), s_ij = <υ_i, ζ_j>, i, j = 1, 2, ..., m+1. Then the set of optimal solutions to the problem (3.14) is in one-to-one correspondence with the set of solutions of the system of linear equations

(I − S) (δ_1, ..., δ_{m+1})^T = (<υ_1, ζ_0>, ..., <υ_{m+1}, ζ_0>)^T. (3.19)

More precisely, if (δ_1, ..., δ_{m+1}) is a solution to (3.19), then

ζ(δ) = ζ_0 + Σ_{i=1}^{m+1} δ_i ζ_i (3.20)

is the optimal solution to the problem (3.14). For a proof see [8, 10].

Remark: The procedure reducing (3.14) to (3.15)-(3.18) is a version of the Sherman-Morrison-Woodbury formula. Solving (3.14) is therefore equivalent to solving m+2 problems of the following type:

(1/2) <M ζ, ζ> + <υ, ζ> → min, ζ ∈ Z, (3.21)

together with a system of (m+1) × (m+1) linear algebraic equations. Let ζ be an optimal solution to (3.14). Since dρ/dμ = 0 at the minimum, we have μ = −ν_2/ν_1. Therefore, we obtain

ξ = Λ(μ, ζ) = ((μ, W_1 ζ), ..., (μ, W_m ζ)),

which is the primal descent direction. The dual descent direction is then η = Δ − P(z_3) ξ.
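Theorem 3.2 is the Sherman-Morrison-Woodbury mechanism in disguise: a quadratic term of rank m+1 is absorbed by m+2 solves with M plus one small (m+1) × (m+1) system. A finite-dimensional sketch, with Z taken to be the whole space so the constrained solves become plain solves (all data synthetic):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 8, 3
A0 = rng.standard_normal((n, n))
M = A0 @ A0.T + 2 * n * np.eye(n)             # positive definite "base" operator
V = rng.standard_normal((n, m + 1))           # columns v_1, ..., v_{m+1}
eps = np.array([1.0] * m + [-1.0])            # epsilon_i = 1, epsilon_{m+1} = -1
v0 = rng.standard_normal(n)

# Theorem 3.2-style reduction: m+2 solves with M, then one small system.
zeta0 = np.linalg.solve(M, -v0)               # problem (3.15)
Zcols = np.linalg.solve(M, -V * eps)          # problems (3.17): M zeta_i = -eps_i v_i
S = V.T @ Zcols                               # s_ij = <v_i, zeta_j>
delta = np.linalg.solve(np.eye(m + 1) - S, V.T @ zeta0)
zeta = zeta0 + Zcols @ delta                  # formula (3.20)

# Direct solve of the rank-corrected stationarity system, for comparison.
zeta_ref = np.linalg.solve(M + (V * eps) @ V.T, -v0)
assert np.allclose(zeta, zeta_ref)
```

In the infinite-dimensional setting each "solve with M" is itself an LQ control problem with a linear term, which is where the Riccati machinery of Section 4 enters.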

3.2 Computation of the step size

The Jordan-algebraic technique significantly simplifies the computation of the step size. Observe that

z + tξ = P(z^{1/2})(e + t P(z^{−1/2}) ξ).

Hence z + tξ ∈ Ω is equivalent to e + t P(z^{−1/2}) ξ ∈ Ω. If

t P(z^{−1/2}) ξ = μ_1 f_1 + μ_2 f_2

is the spectral decomposition, then z + tξ ∈ Ω is equivalent to

1 + t μ_1 ≥ 0, 1 + t μ_2 ≥ 0. (3.22)

Since the primal-dual cone is a product of several copies of Ω, we obtain the step size t̄ as the maximal value of t for which all inequalities (3.22) are satisfied. We use 0.9 t̄ as the actual step size in our algorithm. The stopping rule we used is quite standard: the duality gap μ < ε for a given ε > 0. Recall (see [10]) that the Nesterov-Todd direction (ξ, η) is determined by the conditions

ξ + P(z_3)^{−1} η = γ μ(z_1, z_2) z_2^{−1} − z_1, ξ ∈ X, η ∈ X^⊥,

where (z_1, z_2) ∈ Ω × Ω and z_3 ∈ Ω is the scaling point uniquely determined by the equation P(z_3) z_1 = z_2. As a result of numerical experiments, we arrived at γ = 0.5 as a value which minimizes the number of iterations.

4 Examples of concrete problems

We now consider several concrete examples of the problem (3.1)-(3.3).

4.1 Multi-criteria linear quadratic control problem

Denote by L_2^n[0, T] the Hilbert space of square-integrable functions f : [0, T] → IR^n, T > 0. Let (x, u) ∈ H = L_2^n[0, T] × L_2^l[0, T]. Consider the following optimization problem:

J(x, u) = max_{i ∈ [1, m]} ∫_0^T [(x − x_i)^T Q_i (x − x_i) + (u − u_i)^T R_i (u − u_i)] dt → min, (4.1)

ẋ(t) = A x(t) + B u(t), x(0) = x_0. (4.2)

Here (x_i, u_i) ∈ H, i = 1, 2, ..., m, and A, B, Q_i, R_i, i = 1, 2, ..., m, are given matrices of appropriate dimensions. We assume that Q_i, R_i are symmetric positive definite matrices. The problem (4.1),(4.2) can easily be rewritten as a second-order cone programming problem of the form (3.1)-(3.3). Here ||·||_H is the norm induced by the scalar product <,>_H, y_i = (x_i, u_i), i = 1, 2, ..., m, and Z is the closed vector subspace in H described as follows:

Z = {(x, u) : ẋ(t) = A x(t) + B u(t), x(0) = 0}; (4.3)

W_i = [L_{Q_i}^T 0; 0 L_{R_i}^T], i = 1, ..., m,

where L_{Q_i}, L_{R_i} are the lower-triangular matrices obtained by Cholesky factorization of Q_i and R_i respectively. Following the scheme described in the previous section, we have to show that the condition (2.5) is satisfied, and then compute the NT-direction.

Constructing feasible solutions

Primal problem: After some numerical experiments, we ended up with the following construction. Let K_0 be the unique stabilizing solution to the algebraic Riccati equation:

K B B^T K − A^T K − K A − I = 0.

Here stabilizing means that the closed-loop matrix A − B B^T K_0 is stable. Let x̄ be the solution to the following system of linear differential equations:

ẋ = (A − B B^T K_0) x, x(0) = x_0, ū = −B^T K_0 x̄.

It is quite obvious that y = (x̄, ū) ∈ c + Z, and hence, if we choose

s = max_{i ∈ [1, m]} ||W_i(y − y_i)|| + δ

for some δ > 0, the initial point for the primal problem is the point of b + X with components (s, W_i(y − y_i)), i = 1, ..., m.

Dual problem: We simply take u_i = 0, r_i = 1/m, i = 1, 2, ..., m.

Computation of the NT-direction

For T < ∞, the problem (3.21) is simply the classical LQ problem with a linear term on a finite interval (see e.g. [8, 1]). Hence on each iteration we need to solve a differential Riccati equation, a system of linear differential equations, and a system of (m+1) × (m+1) linear algebraic equations (see [10]). For the case T = ∞, the problem (3.21) is the LQ problem with a linear term on a semi-infinite interval. The complete solution to such a problem is described in the following section.
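The primal feasible point above requires the stabilizing solution K_0 of K B B^T K − A^T K − K A − I = 0. A minimal sketch via the stable invariant subspace of the associated Hamiltonian matrix, a standard technique (the matrices A, B here are illustrative, not taken from the examples of Section 6; scipy.linalg.solve_continuous_are would do the same job):

```python
import numpy as np

def care_stabilizing(A, B):
    """Stabilizing solution K0 of K B B^T K - A^T K - K A - I = 0, computed
    from the stable invariant subspace of the Hamiltonian matrix."""
    n = A.shape[0]
    H = np.block([[A, -B @ B.T], [-np.eye(n), -A.T]])
    w, V = np.linalg.eig(H)
    Vs = V[:, w.real < 0]                  # the n stable eigenvectors
    U1, U2 = Vs[:n, :], Vs[n:, :]
    K = np.real(U2 @ np.linalg.inv(U1))
    return 0.5 * (K + K.T)                 # symmetrize against round-off

A = np.array([[0.0, 1.0], [2.0, -1.0]])    # illustrative, one unstable mode
B = np.array([[0.0], [1.0]])
K0 = care_stabilizing(A, B)

# Riccati residual is ~0 and A - B B^T K0 is stable (closed loop).
res = K0 @ B @ B.T @ K0 - A.T @ K0 - K0 @ A - np.eye(2)
assert np.max(np.abs(res)) < 1e-8
assert np.all(np.linalg.eigvals(A - B @ B.T @ K0).real < 0)
```

With K_0 in hand, the feasible pair (x̄, ū) is obtained by integrating the stable closed-loop system, exactly as described in the text.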

4.1.1 Solution to the LQ problem with a linear term on a semi-infinite interval

Let (x, u) ∈ H = L_2^n[0, ∞) × L_2^m[0, ∞), Σ = Σ^T. Consider the following LQ-control problem with a linear term on a semi-infinite interval:

J(x, u) = (1/2) ∫_0^∞ <[x; u], Σ [x; u]> dt + ∫_0^∞ <[x; u], [y; v]> dt → min, (4.4)

ẋ = A x + B u, x(0) = x_0. (4.5)

By block partition, denote

Σ = [Σ_11 Σ_12; Σ_12^T Σ_22].

For proofs of the theorems in this section, see [9].

Theorem 4.1. Let A be an antistable n × n matrix (i.e. the real parts of all eigenvalues of A are positive). Consider the following system of linear differential equations:

ẋ = A x + f, (4.6)

where f ∈ L_2^n[0, ∞). There exists a unique solution L(f) of (4.6) such that L(f) ∈ L_2^n[0, ∞). Moreover, the map f → L(f) is linear and bounded. Explicitly:

L(f)(t) = −∫_0^{+∞} e^{−Aτ} f(τ + t) dτ. (4.7)

Theorem 4.2. The following conditions are equivalent:

i) Σ_22 is a positive definite (symmetric) matrix and the following Riccati equation has a stabilizing solution:

K L K + K Ã + Ã^T K − Q = 0, (4.8)

where

Ã = A − B Σ_22^{−1} Σ_12^T, Q = Σ_11 − Σ_12 Σ_22^{−1} Σ_12^T, L = B Σ_22^{−1} B^T.

ii) The pair (A, B) is stabilizable and there exists ε > 0 such that

Γ(x, u) = ∫_0^{+∞} <[x; u], Σ [x; u]> dt ≥ ε ∫_0^{+∞} (||x||^2 + ||u||^2) dt

for all (x, u) ∈ Z. Here Z ⊂ H is as follows:

Z = {(x, u) ∈ H : ẋ = A x + B u, x is absolutely continuous, x(0) = 0}.

Theorem 4.3. Suppose that the pair (A, B) is stabilizable. Then Z is a closed vector subspace in H, and

Z^⊥ = {(ṗ + A^T p, B^T p) : p ∈ L_2^n[0, +∞), p is absolutely continuous, ṗ ∈ L_2^n[0, +∞)}.

We are now in a position to describe the optimal solution to (4.4).

Theorem 4.4. Suppose that the conditions of Theorem 4.2 are satisfied. Then the problem (4.4)-(4.5) has a unique solution, which can be described as follows. There exists a stabilizing solution K_0 to the Riccati equation (4.8). The matrix C = −(Ã + L K_0) is antistable, and (K_0 B − Σ_12) Σ_22^{−1} v + y ∈ L_2^n[0, +∞). Let ρ be the unique solution from L_2^n[0, +∞) of the system of differential equations

ρ̇ = C^T ρ + (K_0 B − Σ_12) Σ_22^{−1} v + y (4.9)

(which exists according to Theorem 4.1), and let x be the solution to the system of differential equations

ẋ = (Ã + L K_0) x + L ρ − B Σ_22^{−1} v, x(0) = x_0. (4.10)

Then

p = K_0 x + ρ, u = Σ_22^{−1} (B^T p − v − Σ_12^T x).

Now back to the problem of finding the NT-direction for T = ∞. Solving (3.21) is reduced to solving an algebraic Riccati equation (4.8), the systems of differential equations (4.9),(4.10), and a system of (m+1) × (m+1) linear algebraic equations on each iteration.

4.2 Discrete-time multi-criteria linear quadratic control problem

In this section we consider a discrete-time formulation of the problem (4.1)-(4.2). For simplicity, let us introduce some useful notation. Let x denote a sequence {x_k} ⊂ IR^n, k = 0, 1, .... We say that x ∈ l_2^n(IN) if Σ_{k=0}^∞ ||x_k||^2 < ∞, where ||·|| is the norm induced by an inner product <,> in IR^n. Let (x, u) ∈ l_2^n(IN) × l_2^m(IN). Then the discrete-time multi-criteria linear quadratic control problem takes the form:

max_{i=1,...,m} Σ_{k=0}^∞ {<(x_k − ϕ_k^(i)), Q_i (x_k − ϕ_k^(i))> + <(u_k − φ_k^(i)), R_i (u_k − φ_k^(i))>} → min, (4.11)

x_{k+1} = A x_k + B u_k, k = 0, 1, ..., x_0 = γ_0,

where (ϕ^(i), φ^(i)) ∈ H = l_2^n(IN) × l_2^m(IN) for i = 1, ..., m, and A, B, Q_i, R_i, i = 1, ..., m, are matrices of appropriate sizes. We assume that the Q_i and R_i are positive definite. It is easy to see that the problem (4.11)
can be rewritten as the second-order cone programming problem (3.1)-(3.3).

Observe now that the inner product in H has the following form:

<(x, y), (u, v)>_H = Σ_{k=0}^∞ {<x_k, u_k> + <y_k, v_k>}. (4.12)

The vector subspace Z now takes the form:

Z = {(x, u) ∈ H : x_{k+1} = A x_k + B u_k, k = 0, 1, ..., x_0 = 0}.

The constructions of initial feasible points for the primal and dual problems are similar to the constructions of feasible points in the continuous case. In the primal, we solve a discrete LQ problem via the discrete algebraic Riccati equation. In the dual, we take the same choice as in the continuous case. The problem (3.21) is now a discrete-time linear-quadratic control problem with a linear term.

4.2.1 Discrete-time linear-quadratic control problem with a linear term

In this section we describe a complete solution to the discrete linear-quadratic control problem with a linear term. In order to derive the solution, we need the following auxiliary results.

Theorem 4.5. Let A be a d-stable matrix (i.e. all eigenvalues of A lie inside the unit circle in the complex plane). Consider the following system of linear difference equations:

x_k = A x_{k+1} + f_k, k = 0, 1, ..., (4.13)

where f = {f_k} ∈ l_2^n(IN). Then there exists a unique solution L(f) of (4.13) such that L(f) ∈ l_2^n(IN). Moreover, the map f → L(f) is linear and bounded. Explicitly,

x_k = L(f)_k = Σ_{r=0}^∞ A^r f_{r+k}, k = 0, 1, .... (4.14)

Proof. Since A is a stable matrix, there exists S = S^T ≥ 0 such that

A^T S A − S = −I (4.15)

(see e.g. [2]). Let w_k = <x_k, S x_k>. Obviously, w_k ≥ 0 for all k. Using (4.13) and (4.15), we obtain the following:

w_{k+1} − w_k = <x_{k+1}, S x_{k+1}> − <x_k, S x_k>
= <x_{k+1}, S x_{k+1}> − <(A x_{k+1} + f_k), S (A x_{k+1} + f_k)>
= <x_{k+1}, (S − A^T S A) x_{k+1}> − 2 <A x_{k+1}, S f_k> − <f_k, S f_k>
= ||x_{k+1}||^2 − 2 <A x_{k+1}, S f_k> − <f_k, S f_k>
≥ ||x_{k+1}||^2 − 2 ||x_{k+1}|| ||A^T S|| ||f_k|| − ||S|| ||f_k||^2.

But then it is easy to see that the following inequality is true:

2 ||x_{k+1}|| ||A^T S|| ||f_k|| ≤ (1/2) ||x_{k+1}||^2 + 2 ||A^T S||^2 ||f_k||^2.

Then

w_{k+1} − w_k ≥ (1/2) ||x_{k+1}||^2 − (2 ||A^T S||^2 + ||S||) ||f_k||^2.

It follows that

w_{k+1} − w_0 = Σ_{r=0}^k (w_{r+1} − w_r) ≥ Σ_{r=0}^k ((1/2) ||x_{r+1}||^2 − (2 ||A^T S||^2 + ||S||) ||f_r||^2).

Hence,

Σ_{r=1}^{k+1} ||x_r||^2 ≤ 2 (2 ||A^T S||^2 + ||S||) Σ_{r=0}^k ||f_r||^2 + 2 (w_{k+1} − w_0). (4.16)

Let now

x_k = Σ_{r=0}^∞ A^r f_{r+k}, k = 0, 1, ...; (4.17)

then we have

A x_{k+1} + f_k = A Σ_{r=0}^∞ A^r f_{r+k+1} + f_k = Σ_{r=0}^∞ A^{r+1} f_{r+k+1} + f_k = Σ_{r=1}^∞ A^r f_{r+k} + f_k = Σ_{r=0}^∞ A^r f_{r+k} = x_k.

Hence, (4.17) satisfies (4.13). Let us now consider the following inequalities:

||x_k|| = ||Σ_{r=0}^∞ A^r f_{r+k}|| ≤ Σ_{r=0}^∞ ||A^r f_{r+k}|| ≤ Σ_{r=0}^∞ ||A^r|| ||f_{r+k}||.

By the Cauchy-Schwarz inequality we have

||x_k||^2 ≤ (Σ_{r=0}^∞ ||A||^{2r}) (Σ_{s=0}^∞ ||f_s||^2)

for all k. Since A is stable and all norms are equivalent, by Householder's theorem [7] we can assume without loss of generality that ||A|| < 1. Therefore, ||x_k||^2 < ∞ for all k. Hence {x_k} is bounded, and as a result {w_k} is also bounded. Moreover, since x_0 = Σ_{r=0}^∞ A^r f_r,

||x_0||^2 ≤ (Σ_{r=0}^∞ ||A||^{2r}) Σ_{r=0}^∞ ||f_r||^2 = C_A Σ_{r=0}^∞ ||f_r||^2, where C_A = 1/(1 − ||A||^2).

Combining this with (4.16), we then have

Σ_{r=0}^{k+1} ||x_r||^2 ≤ 2 (2 ||A^T S||^2 + ||S|| + C_A) Σ_{r=0}^∞ ||f_r||^2 + 2 (w_{k+1} − w_0) (4.18)

for all k. Since {w_k} is bounded, we conclude from (4.18) that {x_k} ∈ l_2^n(IN). Hence

lim_{k→∞} x_k = 0, and as a result lim_{k→∞} w_k = 0.

Letting k → ∞ in (4.18), we now have

Σ_{r=0}^∞ ||x_r||^2 ≤ 2 (2 ||A^T S||^2 + ||S|| + C_A) Σ_{r=0}^∞ ||f_r||^2 − 2 w_0.

But since w_0 = <x_0, S x_0> ≥ 0,

we have

Σ_{r=0}^∞ ||x_r||^2 ≤ 2 (2 ||A^T S||^2 + ||S|| + C_A) Σ_{r=0}^∞ ||f_r||^2 ≤ C Σ_{r=0}^∞ ||f_r||^2 (4.19)

for some constant C. Now suppose that {x̃_k} ∈ l_2^n(IN) is another solution to (4.13). Then

y_k = x_k − x̃_k = A (x_{k+1} − x̃_{k+1}) = A y_{k+1}

for k = 0, 1, .... Since A is d-stable and {y_k} ∈ l_2^n(IN), one can easily see that y_k = 0 for all k = 0, 1, .... Therefore the system (4.13) has only one solution in l_2^n(IN). Hence we can conclude that the linear map f → L(f) is correctly defined and bounded.

Let us now consider the following classical discrete-time LQ problem:

(1/2) Σ_{k=0}^∞ <[x_k; u_k], Σ [x_k; u_k]> → min, (4.20)

where x_{k+1} = A x_k + B u_k, x_0 = x̄_0 ∈ IR^n and

Σ = [Σ_11 Σ_12; Σ_12^T Σ_22].

It is well known that the following discrete algebraic Riccati equation plays a very important role in finding the optimal solution to the problem (4.20) [2]:

K = A^T K A − (A^T K B + Σ_12)(Σ_22 + B^T K B)^{−1} (A^T K B + Σ_12)^T + Σ_11. (4.21)

Observe that (4.21) is defined under the assumption that the matrix Σ_22 + B^T K B is invertible. Let

F = (A^T K B + Σ_12)(Σ_22 + B^T K B)^{−1}.

A symmetric solution K_0 to (4.21) is called stabilizing if Σ_22 + B^T K_0 B > 0 and A^T − F B^T is d-stable.

Theorem 4.6. Suppose Σ_22 is a positive definite matrix and (4.21) has a stabilizing solution. Then

(1/2) Σ_{k=0}^∞ <[x_k; u_k], Σ [x_k; u_k]> > 0 (4.22)

for all (x_k, u_k) ≠ 0 satisfying

x_{k+1} = A x_k + B u_k, (4.23)
x_0 = 0, (x, u) ∈ l_2^n(IN) × l_2^m(IN).

Proof. Let K be a stabilizing solution to (4.21). Using (4.23), we obtain the following equalities:

<x_{k+1}, K x_{k+1}> − <x_k, K x_k> = <(A x_k + B u_k), K (A x_k + B u_k)> − <x_k, K x_k>
= <x_k, (A^T K A − K) x_k> + <x_k, A^T K B u_k> + <u_k, B^T K A x_k> + <u_k, (B^T K B + Σ_22) u_k> − <u_k, Σ_22 u_k>.

Let now R_K = Σ_22 + B^T K B and S_K = A^T K B + Σ_12. Then, using (4.21), we obtain:

<x_{k+1}, K x_{k+1}> − <x_k, K x_k>
= <x_k, (S_K R_K^{−1} S_K^T − Σ_11) x_k> + <x_k, A^T K B u_k> + <u_k, B^T K A x_k> + <u_k, R_K u_k> − <u_k, Σ_22 u_k>
= <x_k, S_K R_K^{−1} S_K^T x_k> + <x_k, S_K u_k> + <u_k, S_K^T x_k> + <u_k, R_K u_k> − <x_k, Σ_12 u_k> − <u_k, Σ_12^T x_k> − <u_k, Σ_22 u_k> − <x_k, Σ_11 x_k>
= <(R_K^{−1} S_K^T x_k + u_k), R_K (R_K^{−1} S_K^T x_k + u_k)> − <[x_k; u_k], Σ [x_k; u_k]>.

Taking the summation with respect to k from k = 0 to k = N, we obtain:

Σ_{k=0}^N <[x_k; u_k], Σ [x_k; u_k]> = Σ_{k=0}^N <(R_K^{−1} S_K^T x_k + u_k), R_K (R_K^{−1} S_K^T x_k + u_k)> + <x_0, K x_0> − <x_{N+1}, K x_{N+1}>.

Since x_0 = 0 and lim_{N→∞} x_N = 0,

Σ_{k=0}^∞ <[x_k; u_k], Σ [x_k; u_k]> = Σ_{k=0}^∞ <(R_K^{−1} S_K^T x_k + u_k), R_K (R_K^{−1} S_K^T x_k + u_k)>.

Hence, since R_K > 0, (4.22) is satisfied. The theorem is proved.

Remark: Suppose that (A, B) is d-stabilizable and Σ = I (the identity matrix). Then (4.21) has a stabilizing solution (see e.g. [2]).

Theorem 4.7. Let H = l_2^n(IN) × l_2^m(IN). Suppose the pair (A, B) is d-stabilizable. Let X be defined as follows:

X = {(x, u) = ({x_k}, {u_k}) ∈ H : x_{k+1} = A x_k + B u_k, k = 0, 1, ..., x_0 = 0 ∈ IR^n}.

Let also

Z = {({ξ_k}, {η_k}) : ξ_0 = −A^T p_0, ξ_k = p_{k−1} − A^T p_k for k = 1, 2, ..., η_k = −B^T p_k for k = 0, 1, ..., {p_k} ∈ l_2^n(IN)}.

Then X is a closed vector subspace and Z is the orthogonal complement of X in H, i.e. Z = X^⊥.

Proof. Let (x, u) ∈ X and ({ξ_k}, {η_k}) ∈ Z. Then

<(x, u), ({ξ_k}, {η_k})>_H = <x_0, −A^T p_0> + <u_0, −B^T p_0> + lim_{n→∞} Σ_{k=1}^n (<x_k, p_{k−1} − A^T p_k> + <u_k, −B^T p_k>)
= lim_{n→∞} (Σ_{k=1}^n <x_k, p_{k−1}> − Σ_{k=0}^n <A x_k + B u_k, p_k>)
= lim_{n→∞} (Σ_{k=1}^n <x_k, p_{k−1}> − Σ_{k=0}^n <x_{k+1}, p_k>)
= −lim_{n→∞} <x_{n+1}, p_n> = 0.

Next we show that any given ({ψ_k}, {φ_k}) ∈ H admits the following representation:

ψ_0 = x_0 − A^T p_0, (4.24)

and, for k = 1, 2, ...,

ψ_k = x_k + p_{k−1} − A^T p_k, (4.25)

φ_k = u_k − B^T p_k (4.26)

for k = 0, 1, ..., where ({x_k}, {u_k}) ∈ X and {p_k} ∈ l_2^n(IN). This will easily imply that Z = X^⊥ and that both X and Z are closed. We look for p_k in the form

p_k = −K x_{k+1} + ρ_{k+1}, (4.27)

where K = K^T is a symmetric matrix. From (4.25) we have

p_{k−1} = ψ_k − x_k + A^T p_k,

and substituting (4.27) into this, we get

ρ_k = A^T p_k + ψ_k − x_k + K x_k. (4.28)

But then (4.26) implies:

u_k = B^T p_k + φ_k. (4.29)

Our goal now is to eliminate p_k. By (4.27),

p_k = −K x_{k+1} + ρ_{k+1} = −K (A x_k + B u_k) + ρ_{k+1}.

Substituting this into (4.29), we have

u_k = B^T (−K (A x_k + B u_k) + ρ_{k+1}) + φ_k = −B^T K A x_k − B^T K B u_k + B^T ρ_{k+1} + φ_k.

Hence,

(I + B^T K B) u_k = −B^T K A x_k + B^T ρ_{k+1} + φ_k,

and therefore

u_k = −(I + B^T K B)^{−1} B^T K A x_k + (I + B^T K B)^{−1} (B^T ρ_{k+1} + φ_k). (4.30)

Using x_{k+1} = A x_k + B u_k, we obtain

x_{k+1} = A x_k − B (I + B^T K B)^{−1} B^T K A x_k + B (I + B^T K B)^{−1} (B^T ρ_{k+1} + φ_k). (4.31)

But then, since p_k = −K x_{k+1} + ρ_{k+1}, we have

p_k = −K A x_k + K B (I + B^T K B)^{−1} B^T K A x_k − K B (I + B^T K B)^{−1} (B^T ρ_{k+1} + φ_k) + ρ_{k+1}.

From (4.28) we then have

ρ_k = A^T p_k + ψ_k − x_k + K x_k
= ψ_k − x_k + K x_k − A^T K A x_k + A^T K B (I + B^T K B)^{−1} B^T K A x_k − A^T K B (I + B^T K B)^{−1} (B^T ρ_{k+1} + φ_k) + A^T ρ_{k+1}.

Recall now that if Σ is the identity matrix then, according to the Remark following Theorem 4.6, the corresponding discrete algebraic Riccati equation (4.21) has a stabilizing solution. In this case Σ_11 = I_n, Σ_22 = I_m, Σ_12 = 0, i.e.

K = A^T K A − A^T K B (I + B^T K B)^{−1} B^T K A + I. (4.32)

If we choose K to be the stabilizing solution, the x_k-terms cancel by (4.32), and we obtain

ρ_k = A^T (I − K B (I + B^T K B)^{−1} B^T) ρ_{k+1} − A^T K B (I + B^T K B)^{−1} φ_k + ψ_k. (4.33)

It is known that if K_0 is a stabilizing solution to the DARE (4.32), then the matrix A^T (I − K_0 B (I + B^T K_0 B)^{−1} B^T) is d-stable. By Theorem 4.5, there then exists a unique solution {ρ_k} ∈ l_2^n(IN) to (4.33). Hence, by using (4.31) with x_0 = 0, then (4.30) and (4.27), we have shown that any ({ψ_k}, {φ_k}) ∈ H admits a unique decomposition as a sum of elements in X and X^⊥. The theorem is proved.

We are now in a position to describe a solution of the LQ-problem with a linear term on a semi-infinite interval:

(1/2) Σ_{k=0}^∞ <[x_k; u_k], Σ [x_k; u_k]> + Σ_{k=0}^∞ <[x_k; u_k], [ψ_k; φ_k]> → min, (4.34)

where

x_{k+1} = A x_k + B u_k + g_k, x_0 = x̄_0 ∈ IR^n, {g_k} ∈ l_2^n(IN), (x, u) ∈ H. (4.35)

Theorem 4.8. Suppose there exists a stabilizing solution K to the DARE (4.21), and let

C = A^T − (A^T K B + Σ_12)(Σ_22 + B^T K B)^{−1} B^T.

Then {f_k} ∈ l_2^n(IN), where

f_k = (A^T K B + Σ_12)(Σ_22 + B^T K B)^{−1} φ_k − C K g_k − ψ_k.

Furthermore, let {ρ_k} be the unique solution from l_2^n(IN) of the system

ρ_k = C ρ_{k+1} + f_k.

Then the solution ({x_k}, {u_k}) to the LQ-problem on a semi-infinite interval (4.34) can be described by the following recurrent relations:

x_{k+1} = C^T x_k + B (Σ_22 + B^T K B)^{−1} (B^T ρ_{k+1} − B^T K g_k − φ_k) + g_k, (4.36)

u_k = −(Σ_22 + B^T K B)^{−1} (B^T K A + Σ_12^T) x_k + (Σ_22 + B^T K B)^{−1} (B^T ρ_{k+1} − B^T K g_k − φ_k). (4.37)

Proof. By Theorem 4.6, the functional (4.34) restricted to the subspace X is convex. Hence the necessary and sufficient optimality condition for (4.34),(4.35) takes the form

Σ [x_k; u_k] + [ψ_k; φ_k] = [A^T p_k − p_{k−1}; B^T p_k]

when (x, u) satisfy (4.35), for some {p_k} ∈ l_2^n(IN). Moreover, we look for {p_k} in the form p_k = −K x_{k+1} + ρ_{k+1}, where K is a stabilizing solution to the DARE (4.21). We finish the proof exactly as in Theorem 4.7.

5 Some extensions

5.1 Linear-quadratic control problem with quadratic constraints

Consider the following optimization problem:

||W_0(y − y_0)||_H → min, (5.1)
||W_i(y − y_i)||_H ≤ t_i, i = 1, ..., m, (5.2)
y ∈ c + Z. (5.3)

With our choice of H, Z and the other parameters, we arrive at the linear-quadratic control problem with quadratic constraints (see e.g. [17, 20]). We first consider the problem of checking the feasibility of (5.1)-(5.3):

t → min, (5.4)
||W_i(y − y_i)||_H ≤ t + t_i, i = 1, ..., m, (5.5)
y ∈ c + Z. (5.6)

To cast (5.4)-(5.6) in the conic form, consider

Λ : IR × Z → (IR × H) × (IR × H) × ... × (IR × H) (m times)

such that

Λ(t, y) = ((t, W_1 y), (t, W_2 y), ..., (t, W_m y)),
a = ((1, 0), (0, 0), ..., (0, 0)),
b = ((t_1, W_1(c − y_1)), ..., (t_m, W_m(c − y_m))),

and X = Λ(IR × Z). Then we can rewrite (5.4)-(5.6) in the form (2.1),(2.2). Observe that the only difference between (2.1),(2.2) and our conic formulation is the choice of b. It is quite obvious that the dual will be of the form

Σ_{i=1}^m t_i r_i + Σ_{i=1}^m <W_i(c − y_i), u_i>_H → min, (5.7)
Σ_{i=1}^m W_i* u_i ∈ Z^⊥, ||u_i|| ≤ r_i, i = 1, ..., m, (5.8)
Σ_{i=1}^m r_i = 1. (5.9)

Thus, the whole numerical scheme remains the same for (5.4)-(5.6) (the only difference is in the choice of the initial feasible solution). Returning to (5.1)-(5.3), consider the following:

Λ : ℝ × Z → (ℝ × H) × (ℝ × H) × … × (ℝ × H)  ((m+1) times)

such that Λ(t, y) = ((t, W_0 y), (0, W_1 y), …, (0, W_m y)), X = Λ(ℝ × Z),

a = ((1, 0), (0, 0), …, (0, 0)),
b = ((0, W_0(c − y^0)), (t_1, W_1(c − y^1)), …, (t_m, W_m(c − y^m))).

We can rewrite (5.1)-(5.3) in the conic form

⟨a, z⟩ → min,  z ∈ (b + X) ∩ Ω.

One can easily see that the dual will have the following form:

Σ_{i=1}^m t_i r_i + Σ_{i=0}^m ⟨W_i(c − y^i), u_i⟩_H → min,   (5.10)
Σ_{i=0}^m W_i^* u_i ∈ Z^⊥,  ‖u_i‖ ≤ r_i,  i = 1, …, m,   (5.11)
‖u_0‖ ≤ 1.   (5.12)

It is quite easy to work out a scheme for the computation of the Nesterov-Todd direction following the ideas developed in [10].

6 Numerical results

In this section we consider several examples in both continuous and discrete-time formulations. The major difference between the two settings lies in solving the linear-quadratic control problem with a linear term. In the continuous-time case on a finite interval (respectively, an infinite interval), this requires solving a differential Riccati equation (respectively, the continuous algebraic Riccati equation) together with systems of linear differential equations. In the discrete-time case, we have to solve a discrete Riccati equation and systems of difference equations. The numerical experiments confirm fast convergence to the optimal solution and good approximation of the continuous formulations by the discrete ones. The algorithms are implemented in MATLAB, and integration of the ordinary differential equations is carried out with ODE45, a MATLAB ODE solver for nonstiff problems.
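In the discrete-time setting, the inner subproblem — the linear-quadratic control problem with a linear term — reduces to the recurrences of Theorem 4.8. The following is a minimal NumPy sketch of those recurrences (the paper's own implementation is in MATLAB); the finite truncation ρ_N = 0 of the backward recursion and the naive fixed-point iteration for the DARE are simplifications of ours, and all names are illustrative:

```python
import numpy as np

def solve_dare(A, B, S11, S12, S22, iters=500):
    # Naive fixed-point iteration for the DARE
    #   K = S11 + A'KA - (A'KB + S12)(S22 + B'KB)^{-1}(B'KA + S12').
    K = np.eye(A.shape[0])
    for _ in range(iters):
        G = np.linalg.solve(S22 + B.T @ K @ B, B.T @ K @ A + S12.T)
        K = S11 + A.T @ K @ A - (A.T @ K @ B + S12) @ G
    return K

def lq_with_linear_term(A, B, S11, S12, S22, psi, phi, g, x0):
    # Backward pass for rho_k (truncated with rho_N = 0), then a
    # forward rollout of x_k, u_k via the relations (4.36)-(4.37).
    N = len(psi)
    K = solve_dare(A, B, S11, S12, S22)
    M = S22 + B.T @ K @ B
    C = A.T - (A.T @ K @ B + S12) @ np.linalg.solve(M, B.T)
    rho = [np.zeros_like(x0) for _ in range(N + 1)]
    for k in range(N - 1, -1, -1):
        f_k = (-(A.T @ K @ B + S12) @ np.linalg.solve(M, phi[k] + B.T @ K @ g[k])
               + A.T @ K @ g[k] + psi[k])
        rho[k] = C @ rho[k + 1] + f_k
    xs, us = [x0], []
    for k in range(N):
        u = (-np.linalg.solve(M, (B.T @ K @ A + S12.T) @ xs[k])
             - np.linalg.solve(M, B.T @ rho[k + 1] + B.T @ K @ g[k] + phi[k]))
        us.append(u)
        xs.append(A @ xs[k] + B @ u + g[k])
    return xs, us
```

With {ψ_k}, {φ_k}, {g_k} all zero and x_0 = 0 the sketch returns the zero trajectory, as the theory predicts.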

6.1 Example 1: Multi-criteria Linear Quadratic Control Problem

We consider the Multi-criteria Linear Quadratic Control Problem in both continuous and discrete settings. For the continuous formulation, we first consider the problem (4.1), (4.2) (i.e. the problem on the finite interval [0, T]) with m = n = 3. The target state trajectories x_{1i}, x_{2i}, x_{3i} are built from decaying exponentials (terms such as e^{-3t}, sin(t)e^{-4t}, cos(t)e^{-4t}), the target controls u_{1i}, u_{2i}, u_{3i} and the entries of the time-varying matrices A and B are rational functions of the form 1/(t^2 + c), and the initial state is α_0. The weight matrices are diagonal:

Q_1 = diag(169, 196, 225), Q_2 = diag(169, 169, 9), Q_3 = diag(196, 121, 25), Q_4 = diag(225, 81, 49), Q_5 = diag(196, 49, 81), Q_6 = diag(289, 25, 121), Q_7 = diag(324, 9, 169), Q_8 = diag(36, 121, 225),

R_1 = diag(196, 144, 196), R_2 = diag(144, 9, 16), R_3 = diag(169, 225, 36), R_4 = diag(196, 49, 64), R_5 = diag(225, 81, 100), R_6 = diag(256, 121, 144), R_7 = diag(289, 169, 196), R_8 = diag(324, 225, 36).

Then, by applying a primal-dual algorithm based on the NT-direction with tolerance ε = 0.01, we obtain a table of optimal values and numbers of iterations for different values of T.
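For a candidate trajectory pair, the multi-criteria objective — the maximum over the individual quadratic tracking costs — can be evaluated directly. A minimal NumPy sketch (function name and data layout are ours; trajectories are stored row-wise, one time step per row):

```python
import numpy as np

def multicriteria_cost(xs, us, x_refs, u_refs, Q, R):
    # Worst-case quadratic tracking cost over the m criteria:
    #   max_i sum_k ( <x_k - x^(i)_k, Q_i (x_k - x^(i)_k)>
    #               + <u_k - u^(i)_k, R_i (u_k - u^(i)_k)> ).
    vals = []
    for Qi, Ri, xr, ur in zip(Q, R, x_refs, u_refs):
        dx, du = xs - xr, us - ur
        # einsum sums dx_k' Q_i dx_k over all time steps k at once
        vals.append(np.einsum('kj,jl,kl->', dx, Qi, dx)
                    + np.einsum('kj,jl,kl->', du, Ri, du))
    return max(vals)
```

Minimizing this max-of-quadratics functional over trajectories satisfying the dynamics is precisely what the interior-point scheme does via its conic reformulation.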

Figure 1: Plot of the approximation of the optimal control after 3 iterations (dotted) and the optimal control (solid), T = 3.

Note also that the approximation of the optimal control approaches the optimal control after only a few iterations; in fact, after only 3-4 iterations we obtain a very good approximation to the optimal solution. Figures 1 and 2 are graphs of the approximate optimal solution after three iterations and the optimal solution. The dotted line is the plot of the approximation and the solid line is the actual optimal solution (the actual optimal solution meaning the solution obtained in the last iteration).

Our next goal is to use the discrete formulation to estimate the optimal solution and the optimal value for the above problem. The construction is quite straightforward. First, we choose a sufficiently large K > 0 and partition [0, T] uniformly into K sub-intervals, letting h = T/K. The targets in the discrete formulation can then be computed by the following simple procedure: for the state components,

(x_k^{(1i)}, x_k^{(2i)}, x_k^{(3i)}) = (x_{1i}(kh), x_{2i}(kh), x_{3i}(kh))  for i = 1, …, m and k = 0, 1, …, K;

similarly, for the control components,

(u_k^{(1i)}, u_k^{(2i)}, u_k^{(3i)}) = (u_{1i}(kh), u_{2i}(kh), u_{3i}(kh))  for i = 1, …, m and k = 0, 1, …, K.
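The sampling procedure above, together with the Euler scalings used for (6.1), can be sketched in a few lines of NumPy (the paper's code is MATLAB; this sketch assumes constant matrices A, B, whereas in the example they are time-varying, and the function names are ours):

```python
import numpy as np

def discretize(A, B, Q, R, T, K):
    # Euler discretization behind (6.1): h = T/K,
    # A~ = I + hA, B~ = hB, Q~_i = hQ_i, R~_i = hR_i.
    h = T / K
    Ad = np.eye(A.shape[0]) + h * A
    Bd = h * B
    return Ad, Bd, [h * Qi for Qi in Q], [h * Ri for Ri in R], h

def sample_targets(trajs, h, K):
    # Continuous targets sampled at t = kh, k = 0, ..., K.
    return [np.array([f(k * h) for k in range(K + 1)]) for f in trajs]
```

With T = 3 and K = 300 as in the text, h = 0.01 and each sampled target has K + 1 = 301 entries.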

Figure 2: Plot of the approximation of the optimal state and the optimal state, T = 3.

The corresponding discrete-time problem has the form

max_{i=1,…,m} Σ_{k=0}^{K} { ⟨x_k − ϕ_k^{(i)}, Q̃_i (x_k − ϕ_k^{(i)})⟩ + ⟨u_k − φ_k^{(i)}, R̃_i (u_k − φ_k^{(i)})⟩ } → min,   (6.1)

x_{k+1} = Ã x_k + B̃ u_k,  x_0 = γ_0,

where Q̃_i = hQ_i, R̃_i = hR_i, Ã = I + hA and B̃ = hB. We choose K = 300. Using the above formulation, we obtain an optimal solution which is very close to the optimal solution of the continuous case. As shown in Figures 3-4, we take the pointwise difference of the optimal solutions obtained from the continuous and discrete formulations. The result is quite good: the two optimal solutions are almost identical.

6.2 Example 3: Discrete-time linear-quadratic control problem with quadratic constraints

In this section we solve the linear-quadratic control problem with quadratic constraints using the discrete formulation. Let us first consider the feasibility-checking problem (5.4)-(5.6) in the discrete formulation of the linear-quadratic control problem with quadratic constraints, where Z, c, W_i, y^i for i = 1, …, m are the same as in Example 1, and the bounds t_i are given by a table of prescribed values. It is easy to see that there exists a feasible solution to the problem (5.1)-(5.3) if and only if the optimal value of the problem (5.4)-(5.6) is negative.

Figure 3: Plot of the discrete-time optimal state for the Multi-criteria Linear Quadratic Control Problem: (3.1) optimal state obtained using the discrete formulation; (3.2) pointwise difference of the state trajectories between the discrete and continuous formulations.

Figure 4: Plot of the discrete-time optimal control for the Multi-criteria Linear Quadratic Control Problem: (4.1) optimal control obtained using the discrete formulation; (4.2) pointwise difference of the controls between the discrete and continuous formulations.

Figure 5: Discrete-time linear-quadratic control problem with quadratic constraints: (5.1) plot of the optimal state for T = 20; (5.2) plot of the optimal control for T = 20.

Then, by applying a primal-dual algorithm described in [10] based on the NT-direction for T = 3, we obtain a table of optimal values of t after each iteration. After the fourth iteration the optimal value becomes negative, i.e. we obtain a feasible solution to the linear-quadratic control problem with quadratic constraints (5.1)-(5.3). Once we obtain a feasible solution for the optimization problem (5.1)-(5.3), we can then apply primal-dual algorithms to solve the problem itself. The calculations of the NT-direction for this problem are quite similar to the calculation for (6.1). The only differences are in the definition of the closed vector subspace X and its orthogonal complement X^⊥. The optimal solution to the problem (5.1)-(5.3) for T = 20 based on the NT-direction is shown in Figure 5.

References

[1] F. Alizadeh and D. Goldfarb, Second-order cone programming. ISMP 2000, Part 3 (Atlanta, GA). Math. Program. 95 (2003), no. 1, Ser. B, 3-51.

[2] A. Ben-Tal and A. Nemirovski, Lectures on Modern Convex Optimization, SIAM, Philadelphia, 2001.

[3] L. Faybusovich, Euclidean Jordan algebras and interior-point algorithms, Positivity, Vol. 1 (1997), pp. 331-357.

[4] L. Faybusovich, A Jordan-algebraic approach to potential-reduction algorithms, Math. Zeitschrift, Vol. 239 (2002).

[5] L. Faybusovich, Linear systems in Jordan algebras and primal-dual interior-point algorithms, Journal of Computational and Applied Mathematics, Vol. 86 (1997), pp. 149-175.

[6] L. Faybusovich and R. Arana, A long-step primal-dual algorithm for the symmetric programming problem. Lie theory and its applications in control (Würzburg, 1999). Systems Control Lett. 43 (2001), no. 1.

[7] L. Faybusovich and J.B. Moore, Infinite-dimensional quadratic optimization: interior-point methods and control applications, Appl. Math. Optim., Vol. 36 (1997), pp. 43-66.

[8] L. Faybusovich and T. Mouktonglang, Finite-rank perturbation of the linear-quadratic control problem. Proceedings of the American Control Conference, Denver, Colorado, June 4-6, 2003.

[9] L. Faybusovich and T. Mouktonglang, Linear-quadratic control problem with a linear term on semiinfinite interval: theory and applications. Preprint (2003).

[10] L. Faybusovich and T. Tsuchiya, Primal-dual algorithms and infinite-dimensional Jordan algebras of finite rank. New trends in optimization and computational algorithms (NTOC 2001) (Kyoto). Math. Program. 97 (2003), no. 3, Ser. B, 471-493.

[11] H. Kwakernaak and R. Sivan, Linear Optimal Control Systems, Wiley Interscience, New York, 1972.

[12] P. Lancaster and L. Rodman, Algebraic Riccati Equations, Oxford Science Publications, 1995.

[13] M. Muramatsu, On a commutative class of search directions for linear programming over symmetric cones, Journal of Optimization Theory and Applications, Vol. 112, No. 3 (2002).

[14] R.D.C. Monteiro and T. Tsuchiya, Polynomial convergence of primal-dual algorithms for the second-order cone program based on the MZ-family of directions, Mathematical Programming, 88(1), 61-83, 2000.

[15] Y.E. Nesterov and M. Todd, Self-scaled barriers and interior-point methods for convex programming, Mathematics of Operations Research, Vol. 22 (1997), pp. 1-42.

[16] S. Schmieta and F. Alizadeh, Extension of primal-dual interior point algorithms to symmetric cones, Math. Program. 96 (2003), no. 3, Ser. A, 409-438.

[17] D. Serre, Matrices: Theory and Applications, Springer, p. 66.

[18] T. Tsuchiya, A convergence analysis of the scaling-invariant primal-dual path-following algorithms for second-order cone programming, Optimization Methods and Software, 11/12, 141-182, 1999.

[19] S. Wright, Primal-Dual Interior-Point Methods, SIAM Publications, Philadelphia, Pennsylvania, USA, 1997.

[20] V.A. Yakubovich, Nonconvex optimization problem: the infinite-horizon linear-quadratic control problem with quadratic constraints. Systems and Control Letters, 19: 13-22, 1992.


More information

Nonlinear Optimization for Optimal Control

Nonlinear Optimization for Optimal Control Nonlinear Optimization for Optimal Control Pieter Abbeel UC Berkeley EECS Many slides and figures adapted from Stephen Boyd [optional] Boyd and Vandenberghe, Convex Optimization, Chapters 9 11 [optional]

More information

On Two Measures of Problem Instance Complexity and their Correlation with the Performance of SeDuMi on Second-Order Cone Problems

On Two Measures of Problem Instance Complexity and their Correlation with the Performance of SeDuMi on Second-Order Cone Problems 2016 Springer International Publishing AG. Part of Springer Nature. http://dx.doi.org/10.1007/s10589-005-3911-0 On Two Measures of Problem Instance Complexity and their Correlation with the Performance

More information

Contents. Preface for the Instructor. Preface for the Student. xvii. Acknowledgments. 1 Vector Spaces 1 1.A R n and C n 2

Contents. Preface for the Instructor. Preface for the Student. xvii. Acknowledgments. 1 Vector Spaces 1 1.A R n and C n 2 Contents Preface for the Instructor xi Preface for the Student xv Acknowledgments xvii 1 Vector Spaces 1 1.A R n and C n 2 Complex Numbers 2 Lists 5 F n 6 Digression on Fields 10 Exercises 1.A 11 1.B Definition

More information

The norms can also be characterized in terms of Riccati inequalities.

The norms can also be characterized in terms of Riccati inequalities. 9 Analysis of stability and H norms Consider the causal, linear, time-invariant system ẋ(t = Ax(t + Bu(t y(t = Cx(t Denote the transfer function G(s := C (si A 1 B. Theorem 85 The following statements

More information

15. Conic optimization

15. Conic optimization L. Vandenberghe EE236C (Spring 216) 15. Conic optimization conic linear program examples modeling duality 15-1 Generalized (conic) inequalities Conic inequality: a constraint x K where K is a convex cone

More information

Optimization and Root Finding. Kurt Hornik

Optimization and Root Finding. Kurt Hornik Optimization and Root Finding Kurt Hornik Basics Root finding and unconstrained smooth optimization are closely related: Solving ƒ () = 0 can be accomplished via minimizing ƒ () 2 Slide 2 Basics Root finding

More information

A Second Full-Newton Step O(n) Infeasible Interior-Point Algorithm for Linear Optimization

A Second Full-Newton Step O(n) Infeasible Interior-Point Algorithm for Linear Optimization A Second Full-Newton Step On Infeasible Interior-Point Algorithm for Linear Optimization H. Mansouri C. Roos August 1, 005 July 1, 005 Department of Electrical Engineering, Mathematics and Computer Science,

More information

LMI MODELLING 4. CONVEX LMI MODELLING. Didier HENRION. LAAS-CNRS Toulouse, FR Czech Tech Univ Prague, CZ. Universidad de Valladolid, SP March 2009

LMI MODELLING 4. CONVEX LMI MODELLING. Didier HENRION. LAAS-CNRS Toulouse, FR Czech Tech Univ Prague, CZ. Universidad de Valladolid, SP March 2009 LMI MODELLING 4. CONVEX LMI MODELLING Didier HENRION LAAS-CNRS Toulouse, FR Czech Tech Univ Prague, CZ Universidad de Valladolid, SP March 2009 Minors A minor of a matrix F is the determinant of a submatrix

More information

SEMIDEFINITE PROGRAM BASICS. Contents

SEMIDEFINITE PROGRAM BASICS. Contents SEMIDEFINITE PROGRAM BASICS BRIAN AXELROD Abstract. A introduction to the basics of Semidefinite programs. Contents 1. Definitions and Preliminaries 1 1.1. Linear Algebra 1 1.2. Convex Analysis (on R n

More information

Optimality, Duality, Complementarity for Constrained Optimization

Optimality, Duality, Complementarity for Constrained Optimization Optimality, Duality, Complementarity for Constrained Optimization Stephen Wright University of Wisconsin-Madison May 2014 Wright (UW-Madison) Optimality, Duality, Complementarity May 2014 1 / 41 Linear

More information

The Jordan Algebraic Structure of the Circular Cone

The Jordan Algebraic Structure of the Circular Cone The Jordan Algebraic Structure of the Circular Cone Baha Alzalg Department of Mathematics, The University of Jordan, Amman 1194, Jordan Abstract In this paper, we study and analyze the algebraic structure

More information

Permutation invariant proper polyhedral cones and their Lyapunov rank

Permutation invariant proper polyhedral cones and their Lyapunov rank Permutation invariant proper polyhedral cones and their Lyapunov rank Juyoung Jeong Department of Mathematics and Statistics University of Maryland, Baltimore County Baltimore, Maryland 21250, USA juyoung1@umbc.edu

More information

Lecture: Algorithms for LP, SOCP and SDP

Lecture: Algorithms for LP, SOCP and SDP 1/53 Lecture: Algorithms for LP, SOCP and SDP Zaiwen Wen Beijing International Center For Mathematical Research Peking University http://bicmr.pku.edu.cn/~wenzw/bigdata2018.html wenzw@pku.edu.cn Acknowledgement:

More information

Quasi-Newton methods for minimization

Quasi-Newton methods for minimization Quasi-Newton methods for minimization Lectures for PHD course on Numerical optimization Enrico Bertolazzi DIMS Universitá di Trento November 21 December 14, 2011 Quasi-Newton methods for minimization 1

More information

CSC Linear Programming and Combinatorial Optimization Lecture 10: Semidefinite Programming

CSC Linear Programming and Combinatorial Optimization Lecture 10: Semidefinite Programming CSC2411 - Linear Programming and Combinatorial Optimization Lecture 10: Semidefinite Programming Notes taken by Mike Jamieson March 28, 2005 Summary: In this lecture, we introduce semidefinite programming

More information

Elementary linear algebra

Elementary linear algebra Chapter 1 Elementary linear algebra 1.1 Vector spaces Vector spaces owe their importance to the fact that so many models arising in the solutions of specific problems turn out to be vector spaces. The

More information

The Q Method for Second-Order Cone Programming

The Q Method for Second-Order Cone Programming The Q Method for Second-Order Cone Programming Yu Xia Farid Alizadeh July 5, 005 Key words. Second-order cone programming, infeasible interior point method, the Q method Abstract We develop the Q method

More information

The Nearest Doubly Stochastic Matrix to a Real Matrix with the same First Moment

The Nearest Doubly Stochastic Matrix to a Real Matrix with the same First Moment he Nearest Doubly Stochastic Matrix to a Real Matrix with the same First Moment William Glunt 1, homas L. Hayden 2 and Robert Reams 2 1 Department of Mathematics and Computer Science, Austin Peay State

More information

Projection methods to solve SDP

Projection methods to solve SDP Projection methods to solve SDP Franz Rendl http://www.math.uni-klu.ac.at Alpen-Adria-Universität Klagenfurt Austria F. Rendl, Oberwolfach Seminar, May 2010 p.1/32 Overview Augmented Primal-Dual Method

More information

Algebra II. Paulius Drungilas and Jonas Jankauskas

Algebra II. Paulius Drungilas and Jonas Jankauskas Algebra II Paulius Drungilas and Jonas Jankauskas Contents 1. Quadratic forms 3 What is quadratic form? 3 Change of variables. 3 Equivalence of quadratic forms. 4 Canonical form. 4 Normal form. 7 Positive

More information

Putzer s Algorithm. Norman Lebovitz. September 8, 2016

Putzer s Algorithm. Norman Lebovitz. September 8, 2016 Putzer s Algorithm Norman Lebovitz September 8, 2016 1 Putzer s algorithm The differential equation dx = Ax, (1) dt where A is an n n matrix of constants, possesses the fundamental matrix solution exp(at),

More information

Optimization Theory. A Concise Introduction. Jiongmin Yong

Optimization Theory. A Concise Introduction. Jiongmin Yong October 11, 017 16:5 ws-book9x6 Book Title Optimization Theory 017-08-Lecture Notes page 1 1 Optimization Theory A Concise Introduction Jiongmin Yong Optimization Theory 017-08-Lecture Notes page Optimization

More information

Lecture notes: Applied linear algebra Part 1. Version 2

Lecture notes: Applied linear algebra Part 1. Version 2 Lecture notes: Applied linear algebra Part 1. Version 2 Michael Karow Berlin University of Technology karow@math.tu-berlin.de October 2, 2008 1 Notation, basic notions and facts 1.1 Subspaces, range and

More information

2.1. Jordan algebras. In this subsection, we introduce Jordan algebras as well as some of their basic properties.

2.1. Jordan algebras. In this subsection, we introduce Jordan algebras as well as some of their basic properties. FULL NESTEROV-TODD STEP INTERIOR-POINT METHODS FOR SYMMETRIC OPTIMIZATION G. GU, M. ZANGIABADI, AND C. ROOS Abstract. Some Jordan algebras were proved more than a decade ago to be an indispensable tool

More information

FINITE-DIMENSIONAL LINEAR ALGEBRA

FINITE-DIMENSIONAL LINEAR ALGEBRA DISCRETE MATHEMATICS AND ITS APPLICATIONS Series Editor KENNETH H ROSEN FINITE-DIMENSIONAL LINEAR ALGEBRA Mark S Gockenbach Michigan Technological University Houghton, USA CRC Press Taylor & Francis Croup

More information

Structured Condition Numbers of Symmetric Algebraic Riccati Equations

Structured Condition Numbers of Symmetric Algebraic Riccati Equations Proceedings of the 2 nd International Conference of Control Dynamic Systems and Robotics Ottawa Ontario Canada May 7-8 2015 Paper No. 183 Structured Condition Numbers of Symmetric Algebraic Riccati Equations

More information

A path following interior-point algorithm for semidefinite optimization problem based on new kernel function. djeffal

A path following interior-point algorithm for semidefinite optimization problem based on new kernel function.   djeffal Journal of Mathematical Modeling Vol. 4, No., 206, pp. 35-58 JMM A path following interior-point algorithm for semidefinite optimization problem based on new kernel function El Amir Djeffal a and Lakhdar

More information

Interval solutions for interval algebraic equations

Interval solutions for interval algebraic equations Mathematics and Computers in Simulation 66 (2004) 207 217 Interval solutions for interval algebraic equations B.T. Polyak, S.A. Nazin Institute of Control Sciences, Russian Academy of Sciences, 65 Profsoyuznaya

More information

Alexander Ostrowski

Alexander Ostrowski Ostrowski p. 1/3 Alexander Ostrowski 1893 1986 Walter Gautschi wxg@cs.purdue.edu Purdue University Ostrowski p. 2/3 Collected Mathematical Papers Volume 1 Determinants Linear Algebra Algebraic Equations

More information

A PRIMAL-DUAL INTERIOR POINT ALGORITHM FOR CONVEX QUADRATIC PROGRAMS. 1. Introduction Consider the quadratic program (PQ) in standard format:

A PRIMAL-DUAL INTERIOR POINT ALGORITHM FOR CONVEX QUADRATIC PROGRAMS. 1. Introduction Consider the quadratic program (PQ) in standard format: STUDIA UNIV. BABEŞ BOLYAI, INFORMATICA, Volume LVII, Number 1, 01 A PRIMAL-DUAL INTERIOR POINT ALGORITHM FOR CONVEX QUADRATIC PROGRAMS MOHAMED ACHACHE AND MOUFIDA GOUTALI Abstract. In this paper, we propose

More information

Semidefinite Programming

Semidefinite Programming Semidefinite Programming Notes by Bernd Sturmfels for the lecture on June 26, 208, in the IMPRS Ringvorlesung Introduction to Nonlinear Algebra The transition from linear algebra to nonlinear algebra has

More information

Second-Order Cone Program (SOCP) Detection and Transformation Algorithms for Optimization Software

Second-Order Cone Program (SOCP) Detection and Transformation Algorithms for Optimization Software and Second-Order Cone Program () and Algorithms for Optimization Software Jared Erickson JaredErickson2012@u.northwestern.edu Robert 4er@northwestern.edu Northwestern University INFORMS Annual Meeting,

More information

A new Primal-Dual Interior-Point Algorithm for Second-Order Cone Optimization

A new Primal-Dual Interior-Point Algorithm for Second-Order Cone Optimization A new Primal-Dual Interior-Point Algorithm for Second-Order Cone Optimization Y Q Bai G Q Wang C Roos November 4, 004 Department of Mathematics, College Science, Shanghai University, Shanghai, 00436 Faculty

More information

Primal-dual relationship between Levenberg-Marquardt and central trajectories for linearly constrained convex optimization

Primal-dual relationship between Levenberg-Marquardt and central trajectories for linearly constrained convex optimization Primal-dual relationship between Levenberg-Marquardt and central trajectories for linearly constrained convex optimization Roger Behling a, Clovis Gonzaga b and Gabriel Haeser c March 21, 2013 a Department

More information

A NEW SECOND-ORDER CONE PROGRAMMING RELAXATION FOR MAX-CUT PROBLEMS

A NEW SECOND-ORDER CONE PROGRAMMING RELAXATION FOR MAX-CUT PROBLEMS Journal of the Operations Research Society of Japan 2003, Vol. 46, No. 2, 164-177 2003 The Operations Research Society of Japan A NEW SECOND-ORDER CONE PROGRAMMING RELAXATION FOR MAX-CUT PROBLEMS Masakazu

More information

Quasi-Newton Methods

Quasi-Newton Methods Quasi-Newton Methods Werner C. Rheinboldt These are excerpts of material relating to the boos [OR00 and [Rhe98 and of write-ups prepared for courses held at the University of Pittsburgh. Some further references

More information

Optimality Conditions for Constrained Optimization

Optimality Conditions for Constrained Optimization 72 CHAPTER 7 Optimality Conditions for Constrained Optimization 1. First Order Conditions In this section we consider first order optimality conditions for the constrained problem P : minimize f 0 (x)

More information