Implementation of Infinite Dimensional Interior Point Method for Solving Multi-criteria Linear Quadratic Control Problem

Implementation of Infinite Dimensional Interior Point Method for Solving Multi-criteria Linear Quadratic Control Problem

L. Faybusovich, Department of Mathematics, University of Notre Dame, Notre Dame, IN 46556, USA
T. Mouktonglang, Department of Mathematics, University of Notre Dame, Notre Dame, IN 46556, USA
T. Tsuchiya, The Institute of Statistical Mathematics, 4-6-7 Minami-Azabu, Minato-Ku, Tokyo, 06-8569, Japan

May 2004

The first two authors were supported in part by National Science Foundation Grant No. 002628. The third author was supported in part by the Grant-in-Aid for Scientific Research (C) 55044 of the Ministry of Education, Culture, Sports, Science and Technology of Japan.

Abstract. We describe an implementation of an infinite-dimensional primal-dual algorithm based on the Nesterov-Todd direction. Several applications to both continuous- and discrete-time multi-criteria linear-quadratic control problems and to the linear-quadratic control problem with quadratic constraints are described. Numerical results show very fast convergence (typically, within 3-4 iterations) to optimal solutions.

1 Introduction

One of the current trends in the development of interior-point algorithms of optimization is the extension of the domain of their applicability. While at first the linear programming problem was the major target, subsequent developments led to the creation of algorithms and software for a much broader class of symmetric programming problems. This class includes semidefinite programming and second-order cone programming problems. See e.g. [2] for numerous

engineering applications. Of course, these extensions are not without a price. While the general theory (very similar to the linear programming case) has been developed and cast in a very elegant form with the help of the technique of Euclidean Jordan algebras (see e.g. [, 3, 4, 5, 6, 3, 4, 6, 8]), the computational cost of implementation grew quite significantly (mostly due to the cost of the major computational Newton-like step). In the present paper we make the next natural step along this road. We consider a class of infinite-dimensional optimization problems and a primal-dual algorithm based on the Nesterov-Todd direction. We implemented this algorithm for a class of control problems: the multi-criteria linear-quadratic control problem and the linear-quadratic control problem with quadratic constraints (in both continuous- and discrete-time settings). The computation of the Nesterov-Todd direction is an infinite-dimensional (linear) problem. Thus, the cost of the major computational step is even higher in comparison with previous, essentially finite-dimensional schemes. What is crucial here is a careful consideration of the structure of the problem. In particular, in the control applications mentioned above, the infinite-dimensional part of the Newton-like step reduces to solving a linear-quadratic control problem with an extra linear term in the cost function. As we show in this paper, the presence of this extra linear term does not lead to any significant complications, and (as in the standard control-theoretic setting) the solution of this problem is essentially reduced to solving one of the standard Riccati equations (differential, difference or algebraic, depending on the concrete setting of the problem).
The conceptual difference in the implementation of the algorithm for continuous-time and discrete-time systems is quite minimal: it lies essentially in the organization of data, in different schemes for the computation of scalar products in various functional spaces, and in solving the linear-quadratic problem briefly described above. A Newton-like step for a continuous-time problem is a computationally expensive operation. The good news here is that it is much cheaper for discrete-time systems. Combining this with the experimental fact (which can be partially explained by theoretical complexity estimates [0]) that three-four iterations of the algorithm lead to very reasonable approximations of optimal solutions, one can hope that on-line versions of the algorithm can be implemented. This, in turn, may lead to entirely new and exciting methodologies for the multi-criteria regulation of control systems.

2 Primal-dual interior-point algorithms

In this section, following [0], we describe a general setup for a class of infinite-dimensional optimization problems. We then proceed with the optimality criterion and describe a primal-dual algorithm based on the Nesterov-Todd (NT) direction. Let (H, <, >) be a Hilbert space and let V be the vector space V = IR × H. Let, further, V = V × ... × V (m times), and let X ⊂ V be a closed vector subspace in V.

Consider the following standard optimization problem:

< a, z >_V → min, (2.1)
z ∈ (b + X) ∩ cl(Ω), (2.2)

and its dual

< b, w >_V → min, (2.3)
w ∈ (a + X^⊥) ∩ cl(Ω), (2.4)

where the cone Ω is a second-order cone (an open convex set in V):

Ω = {(s, y) ∈ IR × H : s > ||y||_H}

(sometimes also called the cone of squares), with closure

cl(Ω) = {(s, y) ∈ IR × H : s ≥ ||y||_H},

and Ω = Ω × ... × Ω (m times). Suppose z, w ∈ V,

z = {(s_1, x_1), ..., (s_m, x_m)}, w = {(t_1, y_1), ..., (t_m, y_m)}.

Then we define an inner product on V as follows:

< z, w >_V = Σ_{i=1}^m (s_i t_i + < x_i, y_i >_H).

It is known that the cone cl(Ω) is self-dual, i.e.

cl(Ω)* = {z ∈ V : < w, z > ≥ 0 for all w ∈ cl(Ω)} = cl(Ω).

Denote by X^⊥ the orthogonal complement of X in V with respect to <, >_V. Let, further,

F = [(b + X) ∩ cl(Ω)] × [(a + X^⊥) ∩ cl(Ω)].

We assume that

int(F) = [(b + X) ∩ int(cl(Ω))] × [(a + X^⊥) ∩ int(cl(Ω))] ≠ ∅. (2.5)

It is easy to see that if a pair (z, w) satisfies (2.2), (2.4) and < z, w > = 0, then z is an optimal solution to (2.1)-(2.2) and w is an optimal solution to (2.3)-(2.4).
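The cone and inner product above are easy to experiment with in finite dimensions. A minimal sketch (taking H = IR^d; the helper names `in_cone` and `inner` are ours, not the paper's) that exercises self-duality: inner products of points of the closed cone are nonnegative.

```python
import math

# A point of V = IR x H (H = IR^d here) is a pair (s, y) with y a list.
def in_cone(z, tol=0.0):
    """Membership in the closed second-order cone: s >= ||y||_H."""
    s, y = z
    return s >= math.sqrt(sum(v * v for v in y)) - tol

def inner(z, w):
    """Inner product on V = IR x H: s*t + <x, y>_H."""
    (s, x), (t, y) = z, w
    return s * t + sum(a * b for a, b in zip(x, y))

# Self-duality in action: <z, w> >= 0 for any two points of the closed cone
# (Cauchy-Schwarz: s*t >= ||x||*||y|| >= -<x, y>_H).
pts = [(2.0, [1.0, -1.0]), (1.0, [0.5, 0.5]), (3.0, [0.0, 3.0])]
assert all(in_cone(z) for z in pts)
assert all(inner(z, w) >= 0.0 for z in pts for w in pts)
```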

Definition 2.1. Given (z, w), the duality gap µ(z, w) is defined as follows:

µ(z, w) = < z, w > / (2m).

A typical primal-dual algorithm generates a sequence of pairs of primal and dual feasible points (z^(k), w^(k)) ∈ int(F), k = 1, 2, ..., such that

µ(z^(k+1), w^(k+1)) ≤ (1 − δ/r^ω) µ(z^(k), w^(k)), (2.6)

for some positive δ and ω, where r is a positive constant (see e.g. [9]). The following theorem (see [0]) provides necessary and sufficient conditions for a pair (z*, w*) to be an optimal solution to (2.1),(2.2) and (2.3),(2.4).

Theorem 2.2. Suppose that condition (2.5) holds. Then both (2.1),(2.2) and (2.3),(2.4) have optimal solutions. Moreover, z* is an optimal solution to (2.1),(2.2) and w* is an optimal solution to (2.3),(2.4) if and only if < z*, w* >_V = 0.

One of the most important steps in the implementation of primal-dual algorithms is the computation of a descent direction which drives the duality gap µ to zero. One of these directions (the so-called NT-direction) is described below. Let z_i = (t_i, x_i), w_i = (s_i, y_i) ∈ V. We define the determinant of z_i as follows:

det(z_i) = t_i^2 − ||x_i||^2.

The multiplication in V is defined as follows:

(t_i, x_i) ∘ (s_i, y_i) = (t_i s_i + < x_i, y_i >_H, s_i x_i + t_i y_i).

Observe that this multiplication is commutative but not associative. It defines the structure of a Jordan algebra (see e.g. [0] for more details). The inverse z_i^{-1} of z_i is given by the formula

z_i^{-1} = det(z_i)^{-1} (t_i, −x_i).

Next we consider the function

f(z_i) = −ln det(z_i) = −ln(t_i^2 − ||x_i||^2).

The quadratic representation P(z_i) is defined as follows:

P(z_i) = H(z_i)^{-1},

where H(z_i) is the Hessian of f(z_i) (normalized so that P(e) is the identity, e = (1, 0) being the unit element); explicitly, P(z_i)w = 2 < z_i, w > z_i − det(z_i)(s, −y) for w = (s, y).
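The determinant, Jordan product, inverse and quadratic representation above can be collected into a small numerical toolkit. A sketch for V = IR × IR^d (helper names `jmul`, `jinv`, `quad_rep` are ours; the explicit form P(z)w = 2<z,w>z − det(z)(s,−y) is the standard second-order-cone identity), checking z ∘ z^{-1} = e and P(z)e = z^2:

```python
# Jordan-algebra toolkit for V = IR x IR^d, following Section 2's formulas.
def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def det(z):                      # det(t, x) = t^2 - ||x||^2
    t, x = z
    return t * t - dot(x, x)

def jmul(z, w):                  # Jordan product: commutative, non-associative
    (t, x), (s, y) = z, w
    return (t * s + dot(x, y), [s * a + t * b for a, b in zip(x, y)])

def jinv(z):                     # z^{-1} = det(z)^{-1} (t, -x)
    t, x = z
    d = det(z)
    return (t / d, [-a / d for a in x])

def quad_rep(z, w):              # P(z)w = 2<z,w> z - det(z) (s, -y)
    (t, x), (s, y) = z, w
    c = 2.0 * (t * s + dot(x, y))
    d = det(z)
    return (c * t - d * s, [c * a + d * b for a, b in zip(x, y)])

z = (2.0, [1.0, 0.5])
e = (1.0, [0.0, 0.0])
t, x = jmul(z, jinv(z))          # should be the unit element e
assert abs(t - 1.0) < 1e-12 and all(abs(a) < 1e-12 for a in x)
s2, y2 = quad_rep(z, e)          # P(z)e = z o z = z^2
t2, x2 = jmul(z, z)
assert abs(s2 - t2) < 1e-12 and all(abs(a - b) < 1e-12 for a, b in zip(y2, x2))
```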

Remark: Since V is the product of m copies of V, we define the quadratic representation P(z) for z = (z_1, ..., z_m) ∈ V componentwise:

P(z) = (P(z_1), ..., P(z_m)).

One of the main ingredients in the construction of the NT-direction is the so-called scaling point.

Proposition 2.3. Given (z_1, z_2) ∈ Ω × Ω, there exists a unique z_3 ∈ Ω such that

P(z_3) z_1 = z_2. (2.7)

The following corollary gives an explicit formula for the scaling point.

Corollary 2.4. Let z_1, z_2 ∈ Ω, z_1 = (s, y), z_2 = (t, x). Consider z_3 = (r, u) with

r = (µ_1^2 s + µ_1 µ_2 t) / N, u = (µ_1 µ_2 x − µ_1^2 y) / N,

where

N = sqrt(2 µ_1 µ_2 + 2 µ_1^2 µ_2^2 < (s, y), (t, x) >), µ_i = 1 / sqrt(det(z_i)), i = 1, 2.

Then z_3 ∈ Ω is the unique solution to (2.7) for V = IR × H.

Proposition 2.5. Every element z ∈ V admits the following spectral decomposition:

z = λ_1 e_1 + λ_2 e_2,

where λ_i ∈ IR and e_i ∘ e_j = δ_ij e_i for i, j = 1, 2. The following proposition gives an explicit formula for the spectral decomposition of an element z ∈ V.

Proposition 2.6. Let z = (s, y) ∈ V, y ≠ 0. Consider

e_1 = (1/2)(1, y/||y||), e_2 = (1/2)(1, −y/||y||), λ_1 = s + ||y||, λ_2 = s − ||y||.

Then (s, y) = λ_1 e_1 + λ_2 e_2 and e_i ∘ e_j = δ_ij e_i for i, j = 1, 2.

Proposition 2.7. Let cl(Ω) be the cone of squares of V and (s, y) ∈ cl(Ω). Consider

z = (µ/2, y/µ), µ = sqrt(s + ||y||) + sqrt(s − ||y||).

Then z ∈ cl(Ω) and z^2 = (s, y). Moreover, if z^2 = λ_1 e_1 + λ_2 e_2 is the spectral decomposition of z^2, then

z = sqrt(λ_1) e_1 + sqrt(λ_2) e_2.

Denote z by (s, y)^{1/2}.

Proposition 2.8. We have

P(z^{1/2})^2 = P(z). Hence, P(z^{1/2}) = P(z)^{1/2}. (2.8)

For proofs of the above propositions and corollary, see [0]. We are now in a position to introduce the Nesterov-Todd direction (see e.g. [5]). Given z_1, z_2 ∈ Ω, let z_3 ∈ Ω be the scaling point of z_1 and z_2 defined by (2.7). Then the NT-direction (ξ, η) ∈ X × X^⊥ is defined to be the solution of the following system of linear equations:

P(z_3) ξ + η = γ µ(z_1, z_2) z_1^{-1} − z_2, (2.9)
ξ ∈ X, η ∈ X^⊥. (2.10)

Here 0 < γ < 1 is a real parameter. One can show (see [0]) that (2.9),(2.10) admits a unique solution.

Proposition 2.9. By following the NT-direction, the duality gap decreases.

Proof. Let (z_1^(k), z_2^(k)) ∈ Ω × Ω, let (ξ^(k), η^(k)) be the NT-direction at (z_1^(k), z_2^(k)), and let t^(k) > 0. Define

z_1^(k+1) = z_1^(k) + t^(k) ξ^(k), z_2^(k+1) = z_2^(k) + t^(k) η^(k).

Then consider the following equality (note that < ξ^(k), η^(k) > = 0, since ξ^(k) ∈ X and η^(k) ∈ X^⊥):

< z_1^(k+1), z_2^(k+1) > = < z_1^(k) + t^(k) ξ^(k), z_2^(k) + t^(k) η^(k) > = < z_1^(k), z_2^(k) > + t^(k) (< z_1^(k), η^(k) > + < z_2^(k), ξ^(k) >).

So it suffices to show that < z_1^(k), η^(k) > + < z_2^(k), ξ^(k) > < 0. Indeed,

< z_1^(k), η^(k) > + < z_2^(k), ξ^(k) > = < z_1^(k), η^(k) > + < P(z_3) z_1^(k), ξ^(k) > = < z_1^(k), η^(k) + P(z_3) ξ^(k) > = < z_1^(k), γ µ(z_1^(k), z_2^(k)) (z_1^(k))^{-1} − z_2^(k) > = (γ − 1) < z_1^(k), z_2^(k) > < 0 (since 0 < γ < 1).

Here we used (2.9) and the self-adjointness of P(z_3). Hence we have

< z_1^(k+1), z_2^(k+1) > < < z_1^(k), z_2^(k) >, i.e. µ(z_1^(k+1), z_2^(k+1)) < µ(z_1^(k), z_2^(k)).

We conclude this section with a brief description of a primal-dual algorithm based on the NT-direction. For a fixed ε > 0, suppose that

(z^(0), w^(0)) ∈ [int(cl(Ω)) ∩ (b + X)] × [int(cl(Ω)) ∩ (a + X^⊥)].

Let (ξ_k, η_k) be the descent NT-direction obtained by solving the system of linear equations (2.9),(2.10), and let t > 0 be the largest value such that

(z^(k) + t ξ_k, w^(k) + t η_k) ∈ [int(cl(Ω)) ∩ (b + X)] × [int(cl(Ω)) ∩ (a + X^⊥)].

Then we set (z^(k+1), w^(k+1)) = (z^(k) + t ξ_k, w^(k) + t η_k). We stop the iteration when µ(z^(k), w^(k)) ≤ ε. In [0] it is shown that, by choosing the step size carefully, the algorithm runs in polynomial time, i.e. it generates a sequence of primal and dual feasible iterates satisfying (2.6) with ω = 0.5 or 1 and with δ an independent positive constant.

3 Abstract Formulation of the Problem and Implementation

Let H be a Hilbert space. Consider the following optimization problem:

t → min, (3.1)
||W_i(y − y_i)||_H ≤ t, i = 1, ..., m, (3.2)
y ∈ c + Z. (3.3)

Here ||·||_H is the norm induced by the scalar product <, >_H, the elements y_i ∈ H are fixed, Z is a closed vector subspace in H, and W_i : H → H is a bounded linear operator for i = 1, ..., m. The problem (3.1)-(3.3) is an example of an infinite-dimensional second-order cone programming problem; it has been analyzed in detail in [0]. We have implemented a version of a primal-dual algorithm based on the Nesterov-Todd direction described in [0]. The problem (3.1)-(3.3) can easily be rewritten in conic form. Let

Ω = {(s, y) ∈ IR × H : s ≥ ||y||_H}; V = IR × H; Ω = Ω × ... × Ω (m times); V = V × ... × V (m times).

Let, further, Λ : IR × H → V be the linear operator such that:

Λ(s, y) = {(s, W_1 y), ..., (s, W_m y)},

a = ((1, 0), (0, 0), ..., (0, 0)) ∈ V, b = ((0, W_1(c − y_1)), ..., (0, W_m(c − y_m))),

and X = Λ(IR × Z). The scalar product <, >_V in V is defined as follows:

< ((t_1, x_1), ..., (t_m, x_m)), ((s_1, y_1), ..., (s_m, y_m)) >_V = Σ_{i=1}^m {t_i s_i + < x_i, y_i >_H}.

With these notations we can rewrite the problem (3.1)-(3.3) in the conic form (2.1),(2.2).

Proposition 3.1. We have

X^⊥ = {((r_1, u_1), ..., (r_m, u_m)) ∈ V : r_1 + ... + r_m = 0, Σ_{i=1}^m W_i* u_i ∈ Z^⊥}, (3.4)

where Z^⊥ is the orthogonal complement of Z and W_i* is the adjoint of W_i for each i.

The conic dual to (2.1),(2.2) has the following form ([0]):

Σ_{i=1}^m < W_i(c − y_i), u_i >_H → min, (3.5)
Σ_{i=1}^m W_i* u_i ∈ Z^⊥, ||u_i||_H ≤ r_i, i = 1, ..., m, (3.6)
Σ_{i=1}^m r_i = 1. (3.7)

3.1 Calculation of the NT-direction

Given a pair (z_1, z_2) of feasible solutions to the problems (2.1),(2.2) and (2.3),(2.4), to obtain the NT-direction at each iteration we need to solve the following equation for (ξ, η):

P(z_3) ξ − Δ ∈ X^⊥, ξ ∈ X, (3.8)

where z_3 is the scaling point of z_1 and z_2, and Δ = γ µ(z_1, z_2) z_1^{-1} − z_2. (Compare with (2.9),(2.10).) The equation (3.8) is equivalent to:

(1/2) < P(z_3) ξ, ξ > − < ξ, Δ > → min, ξ ∈ X.
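The explicit scaling-point formula of Corollary 2.4, which underlies the system above, can be sanity-checked numerically. A sketch for V = IR × IR (scalar H; function names are ours) verifying that P(z_3) z_1 = z_2, i.e. equation (2.7):

```python
import math

# Numerical check of Corollary 2.4 for V = IR x IR^1:
# the scaling point z3 satisfies P(z3) z1 = z2 (equation (2.7)).
def det(z):
    t, x = z
    return t * t - x * x

def quad_rep(z, w):              # P(z)w = 2<z,w> z - det(z)(s, -y), w = (s, y)
    (t, x), (s, y) = z, w
    c = 2.0 * (t * s + x * y)
    return (c * t - det(z) * s, c * x + det(z) * y)

def scaling_point(z1, z2):
    (s, y), (t, x) = z1, z2
    m1, m2 = 1.0 / math.sqrt(det(z1)), 1.0 / math.sqrt(det(z2))
    ip = s * t + y * x           # <(s, y), (t, x)>
    n = math.sqrt(2.0 * m1 * m2 + 2.0 * (m1 * m2) ** 2 * ip)
    return ((m1 * m1 * s + m1 * m2 * t) / n, (m1 * m2 * x - m1 * m1 * y) / n)

z1, z2 = (2.0, 1.0), (3.0, -1.0)            # both interior points of the cone
z3 = scaling_point(z1, z2)
t, x = quad_rep(z3, z1)
assert abs(t - z2[0]) < 1e-9 and abs(x - z2[1]) < 1e-9   # P(z3) z1 = z2
```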

Let z_3 = ((t_1, x_1), (t_2, x_2), ..., (t_m, x_m)) and Δ = ((r_1, u_1), ..., (r_m, u_m)). For (µ, ζ) ∈ IR × Z and ξ = Λ(µ, ζ),

ρ(µ, ζ) = (1/2) < P(z_3) ξ, ξ > − < ξ, Δ >
= (1/2) Σ_{i=1}^m (t_i^2 − ||x_i||^2) ||W_i ζ||^2 + Σ_{i=1}^m < x_i, W_i ζ >^2 − Σ_{i=1}^m < u_i, W_i ζ > + ν_1 µ^2 / 2 + ν_2 µ,

where

ν_1 = Σ_{i=1}^m (t_i^2 + ||x_i||^2), ν_2 = 2 Σ_{i=1}^m t_i < x_i, W_i ζ > − Σ_{i=1}^m r_i.

Hence, if we denote φ(ζ) = min{ρ(µ, ζ) : µ ∈ IR}, then

φ(ζ) = (1/2) < ζ, M ζ > + (1/2) Σ_{i=1}^{m+1} ε_i < υ_i, ζ >^2 + < υ_0, ζ > − (Σ_{i=1}^m r_i)^2 / (2 ν_1), (3.9)

where

M = Σ_{i=1}^m (t_i^2 − ||x_i||^2) W_i* W_i, (3.10)

υ_0 = ((Σ_{i=1}^m r_i) / sqrt(ν_1)) υ_{m+1} − Σ_{i=1}^m W_i* u_i, (3.11)

υ_i = 2 W_i* x_i, i = 1, 2, ..., m, υ_{m+1} = (2 / sqrt(ν_1)) Σ_{i=1}^m t_i W_i* x_i, (3.12)

ε_i = 1/2, i = 1, ..., m, ε_{m+1} = −1. (3.13)

As a result, to obtain the NT-direction it suffices to solve the optimization problem φ(ζ) → min, ζ ∈ Z, which is equivalent to solving:

(1/2) < ζ, M ζ > + (1/2) Σ_{i=1}^{m+1} ε_i < υ_i, ζ >^2 + < υ_0, ζ > → min, ζ ∈ Z. (3.14)

The following theorem further simplifies the problem (3.14).

Theorem 3.2. Let ζ_0 be the optimal solution to the problem:

(1/2) < ζ, M ζ > + < v_0, ζ > → min, (3.15)
ζ ∈ Z, (3.16)

and let ζ_i, i = 1, 2, ..., m+1, be optimal solutions to the problems:

(1/2) < ζ, M ζ > + < V_i, ζ > → min, (3.17)
ζ ∈ Z, (3.18)

where V_i = ε_i v_i. Let S = (s_ij), s_ij = < v_i, ζ_j >, i, j = 1, 2, ..., m+1. Then the set of optimal solutions to the problem (3.14) is in one-to-one correspondence with the set of solutions of the system of linear equations

(I − S) (δ_1, ..., δ_{m+1})^T = (< v_1, ζ_0 >, ..., < v_{m+1}, ζ_0 >)^T. (3.19)

More precisely, if (δ_1, ..., δ_{m+1}) is a solution to (3.19), then

ζ(δ) = ζ_0 + Σ_{i=1}^{m+1} δ_i ζ_i (3.20)

is the optimal solution to the problem (3.14). For a proof see [8, 0].

Remark: the procedure reducing (3.14) to (3.15)-(3.18) and (3.19) is a version of the Sherman-Morrison-Woodbury formula. Solving (3.14) is therefore equivalent to solving m+2 problems of the following type,

(1/2) < M ζ, ζ > + < υ, ζ > → min, ζ ∈ Z, (3.21)

plus a system of (m+1) × (m+1) linear algebraic equations. Let ζ be an optimal solution to (3.14). Setting dρ/dµ = 0 gives µ = −ν_2 / ν_1. Therefore we obtain

ξ = Λ(µ, ζ) = ((µ, W_1 ζ), ..., (µ, W_m ζ)),

which is the primal descent direction. The dual descent direction is simply η = γ µ(z_1, z_2) z_1^{-1} − z_2 − P(z_3) ξ, by (2.9).
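Theorem 3.2's reduction can be checked in finite dimensions, where taking Z = IR^n makes each constrained solve a plain linear solve. A minimal sketch with illustrative data (m+1 = 2 rank-one terms in IR^2), comparing the reduced procedure against a direct solve of the stationarity equation (M + Σ ε_i v_i v_i^T) ζ = −v_0:

```python
# Verify Theorem 3.2 in IR^2 with m+1 = 2 rank-one terms (Z = IR^2).
def solve2(A, b):                         # Cramer's rule for a 2x2 system
    d = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - A[0][1] * b[1]) / d,
            (A[0][0] * b[1] - b[0] * A[1][0]) / d]

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

M = [[3.0, 1.0], [1.0, 2.0]]
v = [[1.0, 0.0], [0.0, 1.0]]              # v_1, v_2 (illustrative data)
eps = [0.5, -1.0]                         # epsilon_1, epsilon_{m+1}
v0 = [1.0, 1.0]

# Step 1: m+2 solves with the same operator M (cf. (3.15)-(3.18)).
zeta0 = solve2(M, [-a for a in v0])
zetas = [solve2(M, [-eps[i] * a for a in v[i]]) for i in range(2)]

# Step 2: the small (m+1)x(m+1) system (I - S) delta = (<v_i, zeta0>).
S = [[dot(v[i], zetas[j]) for j in range(2)] for i in range(2)]
I_S = [[(1.0 if i == j else 0.0) - S[i][j] for j in range(2)] for i in range(2)]
delta = solve2(I_S, [dot(v[i], zeta0) for i in range(2)])
zeta = [zeta0[k] + delta[0] * zetas[0][k] + delta[1] * zetas[1][k] for k in range(2)]

# Direct solve of the stationarity equation (M + sum eps_i v_i v_i^T) zeta = -v0.
A = [[M[i][j] + sum(eps[k] * v[k][i] * v[k][j] for k in range(2)) for j in range(2)]
     for i in range(2)]
direct = solve2(A, [-a for a in v0])
assert all(abs(zeta[k] - direct[k]) < 1e-9 for k in range(2))
```

The point of the reduction is that all m+2 large solves share the single operator M, so in the control application one Riccati factorization serves every right-hand side.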

3.2 Computation of the step size

The Jordan-algebraic technique significantly simplifies the computation of the step size. Observe that

z + t ξ = P(z^{1/2})(e + t P(z^{−1/2}) ξ).

Hence z + t ξ ∈ cl(Ω) is equivalent to e + t P(z^{−1/2}) ξ ∈ cl(Ω). If P(z^{−1/2}) ξ = µ_1 f_1 + µ_2 f_2 is the spectral decomposition, then z + t ξ ∈ cl(Ω) is equivalent to

1 + t µ_1 ≥ 0, 1 + t µ_2 ≥ 0. (3.22)

Since the primal-dual cone is the product of several copies of cl(Ω), we obtain the step size t* as the maximal value of t for which all inequalities (3.22) are satisfied. We use 0.9 t* for the actual step size in our algorithm. The stopping rule we used is quite standard: the duality gap µ ≤ ε for a given ε > 0. Recall (see [0]) that the Nesterov-Todd direction (ξ, η) is determined by the following conditions:

P(z_3) ξ + η = γ µ(z_1, z_2) z_1^{−1} − z_2, ξ ∈ X, η ∈ X^⊥,

where (z_1, z_2) ∈ Ω × Ω and z_3 ∈ Ω is the scaling point uniquely determined by the equation P(z_3) z_1 = z_2. As a result of numerical experiments, we arrived at γ = 0.5 as the value which minimizes the number of iterations.

4 Examples of concrete problems

We now consider several concrete examples of the problem (3.1)-(3.3).

4.1 Multi-criteria linear quadratic control problem

Denote by L_2^n[0, T] the Hilbert space of square-integrable functions f : [0, T] → IR^n, T > 0. Let (x, u) ∈ H = L_2^n[0, T] × L_2^l[0, T]. Consider the following optimization problem:

J(x, u) = max_{i ∈ [1, m]} ∫_0^T [(x − x_i)^T Q_i (x − x_i) + (u − u_i)^T R_i (u − u_i)] dt → min, (4.1)

ẋ(t) = A x(t) + B u(t), x(0) = x_0. (4.2)

Here (x i, u i ) H, i =, 2,..., m; A, B, Q i, R i, i =, 2,..., m are given matrices of appropriate dimensions. We will assume that Q i, R i are symmetric positive definite matrices. The problem (4.),(4.2) can be easily rewritten in the second order cone programming problem of the form (3.)- (3.3) Here H is the norm induced by the scalar product < > H, y i = (x i, u i ), i =, 2,..., m; Z is a closed vector subspace in H described as follows: Z = {(x, u); ẋ(t) = Ax(t) + Bu(t), x(0) = 0}; (4.3) [ ] L T W i = Qi 0 0 L T, i =,..., m, R i where L Qi, L Ri are lower-triangular matrices obtained by Cholesky factorization of Q i and R i respectively. Then by following the scheme described in the previous section, we have to show that the condition (2.5) is satisfied, and then compute NT-direction. Constructing feasible solutions Primal problem: After some numerical experiments, we end up with the following construction. Let K 0 be a unique stabilizing solution to the algebraic Riccati equation: KBB T K A T K KA I = 0. Here stabilizing means that the close-loop matrix A BB T K 0 is stable. Let x be the solution to the following system of linear differential equations: ẋ = (A BB T K 0 )x; x(0) = x 0, u = B T K 0 x. It is quite obvious that y = (x, u) c + Z. and hence if we choose s = max i [,m] W i(y y i ) + δ for some δ > 0, the initial point for primal problem is Λ(s, y). Dual problem: We simply take u i = 0, r i = m, i =, 2,..., m. Computation of NT-direction For T <, the problem (3.2) is simply classical LQ problem with a linear term on a finite interval (see e.g. [8, ]). Hence on each iteration, we need to solve a differential Riccati equation, system of linear differential equations and a system of (m + ) (m + ) linear algebraic equations (see [0]). For the case when T =, the problem (3.2) is LQ problem with a linear term on a semi-infinite interval. The complete solution to such a problem is described in the following section. 2

4.1.1 Solution of the LQ problem with a linear term on a semi-infinite interval

Let (x, u) ∈ H = L_2^n[0, ∞) × L_2^m[0, ∞), Σ = Σ^T. Consider the following LQ-control problem with a linear term on a semi-infinite interval:

J(x, u) = (1/2) ∫_0^∞ < (x, u), Σ (x, u) > dt + ∫_0^∞ < (x, u), (y, v) > dt → min, (4.4)

ẋ = A x + B u, x(0) = x_0, (4.5)

where (y, v) ∈ H is fixed. Using a block partition, denote

Σ = [Σ_11 Σ_12; Σ_12^T Σ_22].

For proofs of the theorems in this section, see [9].

Theorem 4.1. Let A be an antistable n × n matrix (i.e., the real parts of all eigenvalues of A are positive). Consider the following system of linear differential equations:

ẋ = A x + f, (4.6)

where f ∈ L_2^n[0, ∞). There exists a unique solution L(f) of (4.6) such that L(f) ∈ L_2^n[0, ∞). Moreover, the map f → L(f) is linear and bounded. Explicitly:

L(f)(t) = −∫_0^{+∞} e^{−Aτ} f(τ + t) dτ. (4.7)

Theorem 4.2. The following conditions are equivalent:

i) Σ_22 is a positive definite (symmetric) matrix and the following Riccati equation has a stabilizing solution:

K L K + K Ã + Ã^T K − Q = 0, (4.8)

where

Ã = A − B Σ_22^{−1} Σ_12^T, Q = Σ_11 − Σ_12 Σ_22^{−1} Σ_12^T, L = B Σ_22^{−1} B^T.

ii) The pair (A, B) is stabilizable and there exists ε > 0 such that

Γ(x, u) = ∫_0^{+∞} < (x, u), Σ (x, u) > dt ≥ ε ∫_0^{+∞} (||x||^2 + ||u||^2) dt

for all (x, u) ∈ Z. Here Z ⊂ H is as follows:

Z = {(x, u) ∈ H : ẋ = A x + B u, x is absolutely continuous, x(0) = 0}.
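Formula (4.7) of Theorem 4.1 can be checked by hand in the scalar case, where the integral has a closed form. A sketch with illustrative data (a > 0 antistable, f(t) = e^{-t}, so L(f)(t) = −e^{-t}/(a+1)):

```python
import math

# Check (4.7) for a scalar antistable system: a > 0, f(t) = e^{-t}.
# Then L(f)(t) = -integral_0^inf e^{-a*tau} e^{-(tau+t)} dtau = -e^{-t}/(a+1),
# and x = L(f) satisfies x' = a*x + f while staying in L_2[0, inf).
a = 2.0
x = lambda t: -math.exp(-t) / (a + 1.0)

# ODE residual x'(t) - a*x(t) - f(t) at sample points, with
# x'(t) = e^{-t}/(a+1) computed analytically.
for t in (0.0, 0.5, 1.0, 3.0):
    xdot = math.exp(-t) / (a + 1.0)
    assert abs(xdot - a * x(t) - math.exp(-t)) < 1e-12
```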

Theorem 4.3. Suppose that the pair (A, B) is stabilizable. Then Z is a closed vector subspace in H, and

Z^⊥ = {(ṗ + A^T p, B^T p) : p ∈ L_2^n[0, +∞), p is absolutely continuous, ṗ ∈ L_2^n[0, +∞)}.

We are now in a position to describe the optimal solution to (4.4).

Theorem 4.4. Suppose that the conditions of Theorem 4.2 are satisfied. Then the problem (4.4),(4.5) has a unique solution, which can be described as follows. There exists a stabilizing solution K_0 to the Riccati equation (4.8). The matrix C = −(Ã + L K_0) is antistable, and (K_0 B − Σ_12) Σ_22^{−1} v + y ∈ L_2^n[0, +∞). Let ρ be the unique solution from L_2^n[0, +∞) of the system of differential equations

ρ̇ = C^T ρ + (K_0 B − Σ_12) Σ_22^{−1} v + y (4.9)

(which exists according to Theorem 4.1), and let x be the solution to the system of differential equations

ẋ = (Ã + L K_0) x + L ρ − B Σ_22^{−1} v, x(0) = x_0. (4.10)

Then

p = K_0 x + ρ, u = Σ_22^{−1}(B^T p − v − Σ_12^T x).

Now back to the problem of finding the NT-direction for T = ∞. Solving (3.21) is reduced to solving an algebraic Riccati equation (4.8), systems of differential equations (4.9),(4.10), and a system of (m+1) × (m+1) linear algebraic equations on each iteration.

4.2 Discrete-time multi-criteria linear quadratic control problem

In this section we consider a discrete-time formulation of the problem (4.1),(4.2). For simplicity, let us introduce some useful notation. Let x denote a sequence {x_k} ⊂ IR^n, k = 0, 1, .... We say that x ∈ l_2^n(IN) if Σ_{k=0}^∞ ||x_k||^2 < ∞, where ||·|| is the norm induced by an inner product <, > in IR^n. Let (x, u) ∈ l_2^n(IN) × l_2^m(IN). Then the discrete-time multi-criteria linear quadratic control problem takes the form:

max_{i=1,...,m} Σ_{k=0}^∞ {< (x_k − ϕ_k^(i)), Q_i (x_k − ϕ_k^(i)) > + < (u_k − φ_k^(i)), R_i (u_k − φ_k^(i)) >} → min, (4.11)

x_{k+1} = A x_k + B u_k, k = 0, 1, ..., x_0 = γ_0,

where (ϕ^(i), φ^(i)) ∈ H = l_2^n(IN) × l_2^m(IN) for i = 1, ..., m, and A, B, Q_i, R_i for i = 1, ..., m are matrices of appropriate sizes. We assume that the Q_i's and R_i's are positive definite. It is easy to see that the problem (4.11) can be rewritten as the second-order cone programming problem (3.1)-(3.3).
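In the discrete-time case the feasibility and Newton-step computations rest on a discrete algebraic Riccati equation. A minimal scalar sketch (illustrative data; standard fixed-point "value iteration", not the paper's code) of the DARE K = A^T K A − (A^T K B)(Σ_22 + B^T K B)^{-1}(A^T K B)^T + Σ_11 with Σ_12 = 0:

```python
# Scalar sketch of the discrete algebraic Riccati equation
#   K = A^T K A - (A^T K B)(Sigma_22 + B^T K B)^{-1}(A^T K B)^T + Sigma_11
# (here Sigma_12 = 0, Sigma_11 = Q, Sigma_22 = R), solved by fixed-point
# (value) iteration. Illustrative scalar data.
A, B, Q, R = 0.9, 1.0, 1.0, 1.0
K = 0.0
for _ in range(200):
    K = A * K * A - (A * K * B) ** 2 / (R + B * K * B) + Q
residual = A * K * A - (A * K * B) ** 2 / (R + B * K * B) + Q - K
F = (A * K * B) / (R + B * K * B)      # feedback gain
assert abs(residual) < 1e-9
assert abs(A - B * F) < 1.0            # closed loop is d-stable
```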

Observe now that the inner product in H has the following form:

< (x, y), (u, v) >_H = Σ_{k=0}^∞ {< x_k, u_k > + < y_k, v_k >}. (4.12)

The vector subspace Z now takes the form:

Z = {(x, u) ∈ H : x_{k+1} = A x_k + B u_k, k = 0, 1, ..., x_0 = 0}.

The constructions of initial feasible points for the primal and dual problems are similar to the constructions of feasible points in the continuous case. In the primal, we solve a discrete LQ problem via the discrete algebraic Riccati equation. In the dual, we make the same choice as in the continuous case. Now the problem (3.21) is a discrete-time linear-quadratic control problem with a linear term.

4.2.1 Discrete-time linear-quadratic control problem with a linear term

In this section we describe a complete solution to the discrete linear-quadratic control problem with a linear term. In order to derive the solution, we need the following auxiliary results.

Theorem 4.5. Let A be a d-stable matrix (i.e., all eigenvalues of A lie inside the unit circle in the complex plane). Consider the following system of linear difference equations:

x_k = A x_{k+1} + f_k, k = 0, 1, ..., (4.13)

where f = {f_k} ∈ l_2^n(IN). Then there exists a unique solution L(f) of (4.13) such that L(f) ∈ l_2^n(IN). Moreover, the map f → L(f) is linear and bounded. Explicitly,

x_k = L(f)_k = Σ_{r=0}^∞ A^r f_{r+k}, k = 0, 1, .... (4.14)

Proof. Since A is d-stable, there exists S = S^T > 0 such that

A^T S A − S = −I (4.15)

(see e.g. [2]). Let w_k = < x_k, S x_k >. Obviously, w_k ≥ 0 for all k. Then, using (4.13) and (4.15), we obtain:

w_{k+1} − w_k = < x_{k+1}, S x_{k+1} > − < x_k, S x_k >
= < x_{k+1}, S x_{k+1} > − < (A x_{k+1} + f_k), S (A x_{k+1} + f_k) >
= < x_{k+1}, S x_{k+1} > − < x_{k+1}, A^T S A x_{k+1} > − 2 < A x_{k+1}, S f_k > − < f_k, S f_k >
= < x_{k+1}, x_{k+1} > − 2 < A x_{k+1}, S f_k > − < f_k, S f_k >
≥ ||x_{k+1}||^2 − 2 ||x_{k+1}|| ||A^T S|| ||f_k|| − ||S|| ||f_k||^2.

But then it is easy to see that the following inequality is true:

2 ||x_{k+1}|| ||A^T S|| ||f_k|| ≤ (1/2) ||x_{k+1}||^2 + 2 ||A^T S||^2 ||f_k||^2.

Then

w_{k+1} − w_k ≥ (1/2) ||x_{k+1}||^2 − (2 ||A^T S||^2 + ||S||) ||f_k||^2.

It then follows that

w_{k+1} − w_0 = Σ_{r=1}^{k+1} (w_r − w_{r−1}) ≥ Σ_{r=1}^{k+1} ((1/2) ||x_r||^2 − (2 ||A^T S||^2 + ||S||) ||f_{r−1}||^2).

Hence,

Σ_{r=1}^{k+1} ||x_r||^2 ≤ 2 (2 ||A^T S||^2 + ||S||) Σ_{r=0}^k ||f_r||^2 + 2 (w_{k+1} − w_0). (4.16)

Let now

x_k = Σ_{r=0}^∞ A^r f_{r+k}, k = 0, 1, ...; (4.17)

then we have

A x_{k+1} + f_k = A Σ_{r=0}^∞ A^r f_{r+k+1} + f_k = Σ_{r=0}^∞ A^{r+1} f_{r+k+1} + f_k = Σ_{r=1}^∞ A^r f_{r+k} + f_k = Σ_{r=0}^∞ A^r f_{r+k} = x_k.

Hence, (4.17) satisfies (4.13). Let us now consider the following inequalities:

||x_k|| = ||Σ_{r=0}^∞ A^r f_{r+k}|| ≤ Σ_{r=0}^∞ ||A^r f_{r+k}|| ≤ Σ_{r=0}^∞ ||A^r|| ||f_{r+k}||.

Then by the Cauchy-Schwarz inequality we have

||x_k||^2 ≤ (Σ_{r=0}^∞ ||A^r||^2)(Σ_{s=0}^∞ ||f_s||^2)

for all k. But since A is d-stable and all norms are equivalent, by Householder's theorem [7] we can assume without loss of generality that ||A|| < 1. Therefore,

||x_k||^2 < ∞ for all k.

Hence {x_k} is bounded and, as a result, {w_k} is also bounded. Moreover, since x_0 = Σ_{r=0}^∞ A^r f_r,

||x_0||^2 ≤ (Σ_{r=0}^∞ ||A||^{2r})(Σ_{r=0}^∞ ||f_r||^2) = C_A Σ_{r=0}^∞ ||f_r||^2, C_A = (1 − ||A||^2)^{−1}.

From (4.16) we then have

Σ_{r=0}^{k+1} ||x_r||^2 ≤ 2 (2 ||A^T S||^2 + ||S|| + C_A) Σ_{r=0}^∞ ||f_r||^2 + 2 (w_{k+1} − w_0) (4.18)

for all k. Since {w_k} is bounded, we conclude from (4.18) that {x_k} ∈ l_2^n(IN). Hence

lim_{k→∞} x_k = 0, and as a result lim_{k→∞} w_k = 0.

From (4.18) we now have

Σ_{r=0}^∞ ||x_r||^2 ≤ 2 (2 ||A^T S||^2 + ||S|| + C_A) Σ_{r=0}^∞ ||f_r||^2 − 2 w_0.

But since

w_0 = < x_0, S x_0 > ≥ 0,

we have
\[
\sum_{r=0}^{\infty}\|x_r\|^2 \le 2\Bigl(2\|A^T S\|^2 + \|S\| + \frac{1}{1-\|A\|^2}\Bigr)\sum_{r=0}^{\infty}\|f_r\|^2 \le C\sum_{r=0}^{\infty}\|f_r\|^2 \tag{4.19}
\]
for some constant C. Suppose now that \{\tilde{x}_k\}\in\ell_2^n(\mathbb{N}) is another solution to (4.13). Then
\[
y_k = x_k - \tilde{x}_k = A(x_{k+1} - \tilde{x}_{k+1}) = Ay_{k+1},\quad k = 0,1,\ldots.
\]
Since A is d-stable and \{y_k\}\in\ell_2^n(\mathbb{N}), one can easily see that y_k = 0 for all k = 0,1,\ldots. Therefore the system (4.13) has at most one solution in \ell_2^n(\mathbb{N}). Hence we can conclude that the linear map f\mapsto L(f) is correctly defined and bounded. \square

Let us now consider the following classical discrete-time LQ problem:
\[
\frac{1}{2}\sum_{k=0}^{\infty}\Bigl\langle\begin{bmatrix} x_k\\ u_k\end{bmatrix}, \Sigma\begin{bmatrix} x_k\\ u_k\end{bmatrix}\Bigr\rangle \to \min, \tag{4.20}
\]
where x_{k+1} = Ax_k + Bu_k, x_0 = x^0\in\mathbb{R}^n and
\[
\Sigma = \begin{bmatrix}\Sigma_{11} & \Sigma_{12}\\ \Sigma_{12}^T & \Sigma_{22}\end{bmatrix}.
\]
It is well known that the following discrete algebraic Riccati equation (DARE) plays a very important role in finding the optimal solution to the problem (4.20) [12]:
\[
K = A^T K A - (A^T K B + \Sigma_{12})(\Sigma_{22} + B^T K B)^{-1}(A^T K B + \Sigma_{12})^T + \Sigma_{11}. \tag{4.21}
\]
Observe that (4.21) is defined under the assumption that the matrix \Sigma_{22} + B^T K B is invertible. Let
\[
F = (A^T K B + \Sigma_{12})(\Sigma_{22} + B^T K B)^{-1}.
\]
A symmetric solution K \succeq 0 to (4.21) is called stabilizing if \Sigma_{22} + B^T K B \succ 0 and A^T - F B^T is d-stable.

Theorem 4.6. Suppose \Sigma_{22} is a positive definite matrix and (4.21) has a stabilizing solution. Then
\[
\frac{1}{2}\sum_{k=0}^{\infty}\Bigl\langle\begin{bmatrix} x_k\\ u_k\end{bmatrix}, \Sigma\begin{bmatrix} x_k\\ u_k\end{bmatrix}\Bigr\rangle \ge 0 \tag{4.22}
\]

for all (x_k, u_k) satisfying
\[
x_{k+1} = Ax_k + Bu_k, \tag{4.23}
\]
x_0 = 0, (x,u)\in\ell_2^n(\mathbb{N})\times\ell_2^m(\mathbb{N}).

Proof. Let K be a stabilizing solution to (4.21). Using (4.23) we obtain the following equalities:
\[
\begin{aligned}
\langle x_{k+1}, Kx_{k+1}\rangle - \langle x_k, Kx_k\rangle &= \langle Ax_k + Bu_k,\, K(Ax_k + Bu_k)\rangle - \langle x_k, Kx_k\rangle\\
&= \langle x_k, (A^T K A - K)x_k\rangle + \langle x_k, A^T K B\, u_k\rangle + \langle u_k, B^T K A\, x_k\rangle\\
&\quad + \langle u_k, (B^T K B + \Sigma_{22})u_k\rangle - \langle u_k, \Sigma_{22} u_k\rangle.
\end{aligned}
\]
Let now R_K = \Sigma_{22} + B^T K B and S_K = A^T K B + \Sigma_{12}. Then, using (4.21), we obtain:
\[
\begin{aligned}
\langle x_{k+1}, Kx_{k+1}\rangle - \langle x_k, Kx_k\rangle &= \langle x_k, (S_K R_K^{-1} S_K^T - \Sigma_{11})x_k\rangle + \langle x_k, A^T K B\, u_k\rangle + \langle u_k, B^T K A\, x_k\rangle\\
&\quad + \langle u_k, R_K u_k\rangle - \langle u_k, \Sigma_{22} u_k\rangle\\
&= \langle x_k, S_K R_K^{-1} S_K^T x_k\rangle + \langle x_k, S_K u_k\rangle + \langle u_k, S_K^T x_k\rangle + \langle u_k, R_K u_k\rangle\\
&\quad - \langle x_k, \Sigma_{12} u_k\rangle - \langle u_k, \Sigma_{12}^T x_k\rangle - \langle u_k, \Sigma_{22} u_k\rangle - \langle x_k, \Sigma_{11} x_k\rangle\\
&= \bigl\langle R_K^{-1} S_K^T x_k + u_k,\, R_K\bigl(R_K^{-1} S_K^T x_k + u_k\bigr)\bigr\rangle - \Bigl\langle\begin{bmatrix} x_k\\ u_k\end{bmatrix}, \Sigma\begin{bmatrix} x_k\\ u_k\end{bmatrix}\Bigr\rangle.
\end{aligned}
\]
Then take the summation with respect to k from k = 0 to k = N to obtain:
\[
\sum_{k=0}^{N}\Bigl\langle\begin{bmatrix} x_k\\ u_k\end{bmatrix}, \Sigma\begin{bmatrix} x_k\\ u_k\end{bmatrix}\Bigr\rangle = \sum_{k=0}^{N}\bigl\langle R_K^{-1} S_K^T x_k + u_k,\, R_K\bigl(R_K^{-1} S_K^T x_k + u_k\bigr)\bigr\rangle + \langle x_0, Kx_0\rangle - \langle x_{N+1}, Kx_{N+1}\rangle.
\]
Since x_0 = 0 and \lim_{N\to\infty} x_N = 0, we have
\[
\sum_{k=0}^{\infty}\Bigl\langle\begin{bmatrix} x_k\\ u_k\end{bmatrix}, \Sigma\begin{bmatrix} x_k\\ u_k\end{bmatrix}\Bigr\rangle = \sum_{k=0}^{\infty}\bigl\langle R_K^{-1} S_K^T x_k + u_k,\, R_K\bigl(R_K^{-1} S_K^T x_k + u_k\bigr)\bigr\rangle.
\]
Hence, since R_K \succ 0, (4.22) is satisfied. The theorem is proved. \square

Remark. Suppose that (A, B) is d-stabilizable and \Sigma = I (the identity matrix). Then (4.21) has a stabilizing solution (see e.g. [12]).

Theorem 4.7. Let H = \ell_2^n(\mathbb{N})\times\ell_2^m(\mathbb{N}). Suppose the pair (A, B) is d-stabilizable. Let X be defined as follows:

\[
X = \{(x,u) = (\{x_k\},\{u_k\})\in H : x_{k+1} = Ax_k + Bu_k,\ x_0 = 0\in\mathbb{R}^n,\ k = 0,1,\ldots\}.
\]
Let also
\[
Z = \{(\{\xi_k\},\{\eta_k\}) : \xi_0 = A^T p_0,\ \xi_k = A^T p_k - p_{k-1}\ \text{for } k = 1,2,\ldots,\ \eta_k = B^T p_k\ \text{for } k = 0,1,\ldots,\ \{p_k\}\in\ell_2^n(\mathbb{N})\}.
\]
Then X is a closed vector subspace and Z is the orthogonal complement of X in H, i.e. Z = X^\perp.

Proof. Let (x,u)\in X and (\{\xi_k\},\{\eta_k\})\in Z. Then
\[
\begin{aligned}
\langle (x,u), (\{\xi_k\},\{\eta_k\})\rangle_H &= \langle x_0, A^T p_0\rangle + \langle u_0, B^T p_0\rangle + \lim_{n\to\infty}\sum_{k=1}^{n}\bigl(\langle x_k, A^T p_k - p_{k-1}\rangle + \langle u_k, B^T p_k\rangle\bigr)\\
&= \langle x_1, p_0\rangle + \lim_{n\to\infty}\sum_{k=1}^{n}\bigl(\langle Ax_k + Bu_k, p_k\rangle - \langle x_k, p_{k-1}\rangle\bigr)\\
&= \langle x_1, p_0\rangle + \lim_{n\to\infty}\sum_{k=1}^{n}\bigl(\langle x_{k+1}, p_k\rangle - \langle x_k, p_{k-1}\rangle\bigr)\\
&= \lim_{n\to\infty}\langle x_{n+1}, p_n\rangle = 0.
\end{aligned}
\]
Next we show that any (\{\psi_k\},\{\phi_k\})\in H admits the following representation:
\[
\psi_0 = x_0 - A^T p_0, \tag{4.24}
\]
and for k = 1,2,\ldots,
\[
\psi_k = x_k - [A^T p_k - p_{k-1}], \tag{4.25}
\]
\[
\phi_k = u_k - B^T p_k \tag{4.26}
\]
for k = 0,1,\ldots, where (\{x_k\},\{u_k\})\in X and \{p_k\}\in\ell_2^n(\mathbb{N}). This will easily imply that Z = X^\perp and that both X and Z are closed. We look for p_k in the form
\[
p_{k-1} = -Kx_k + \rho_k, \tag{4.27}
\]
where K = K^T is a symmetric matrix. By (4.25), we have
\[
p_{k-1} = \psi_k - x_k + A^T p_k.
\]
Substituting this into (4.27), we get
\[
\rho_k = A^T p_k + \psi_k - x_k + Kx_k. \tag{4.28}
\]

But then (4.26) implies
\[
u_k = B^T p_k + \phi_k. \tag{4.29}
\]
Our goal now is to eliminate p_k. By (4.27),
\[
p_k = -Kx_{k+1} + \rho_{k+1} = -K(Ax_k + Bu_k) + \rho_{k+1}.
\]
By substituting this into (4.29), we have
\[
u_k = B^T\bigl(-K(Ax_k + Bu_k) + \rho_{k+1}\bigr) + \phi_k = -B^T K A x_k - B^T K B u_k + B^T\rho_{k+1} + \phi_k.
\]
Hence,
\[
(I + B^T K B)u_k = -B^T K A x_k + B^T\rho_{k+1} + \phi_k,
\]
and therefore
\[
u_k = -(I + B^T K B)^{-1} B^T K A\, x_k + (I + B^T K B)^{-1}\bigl(B^T\rho_{k+1} + \phi_k\bigr). \tag{4.30}
\]
Using x_{k+1} = Ax_k + Bu_k, we obtain
\[
x_{k+1} = Ax_k - B(I + B^T K B)^{-1} B^T K A\, x_k + B(I + B^T K B)^{-1}\bigl(B^T\rho_{k+1} + \phi_k\bigr). \tag{4.31}
\]
But then, since p_k = -Kx_{k+1} + \rho_{k+1}, we have
\[
p_k = -KAx_k + KB(I + B^T K B)^{-1} B^T K A\, x_k - KB(I + B^T K B)^{-1}\bigl(B^T\rho_{k+1} + \phi_k\bigr) + \rho_{k+1}.
\]
Then from (4.28) we have the following:
\[
\begin{aligned}
\rho_k &= A^T p_k + \psi_k - x_k + Kx_k\\
&= \psi_k - x_k + Kx_k - A^T K A\, x_k + A^T K B(I + B^T K B)^{-1} B^T K A\, x_k\\
&\quad - A^T K B(I + B^T K B)^{-1}\bigl(B^T\rho_{k+1} + \phi_k\bigr) + A^T\rho_{k+1}.
\end{aligned}
\]
Recall now that if \Sigma is the identity matrix, then, according to the Remark following Theorem 4.6, the corresponding discrete algebraic Riccati equation (4.21) has a stabilizing solution. In this case \Sigma_{11} = I_n, \Sigma_{22} = I_m, \Sigma_{12} = 0, i.e.
\[
K = A^T K A - A^T K B(I + B^T K B)^{-1} B^T K A + I. \tag{4.32}
\]
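Numerically, the special DARE (4.32) can be solved with standard tools. The paper's implementation is in MATLAB; the following is a hedged sketch in Python using SciPy, with made-up matrices A and B. SciPy's `solve_discrete_are(A, B, Q, R)` solves A^T X A - X - A^T X B (R + B^T X B)^{-1} B^T X A + Q = 0, so taking Q = I and R = I recovers (4.32) exactly.

```python
# Sketch only: solving the special DARE (4.32) with SciPy (not the paper's MATLAB code).
import numpy as np
from scipy.linalg import solve_discrete_are

A = np.array([[0.9, 0.2], [0.0, 0.7]])   # illustrative data, not from the paper
B = np.array([[0.0], [1.0]])

n, m = A.shape[0], B.shape[1]
K = solve_discrete_are(A, B, np.eye(n), np.eye(m))

# Residual of (4.32): K - (A^T K A - A^T K B (I + B^T K B)^{-1} B^T K A + I) should vanish.
R_K = np.eye(m) + B.T @ K @ B
residual = A.T @ K @ A - A.T @ K @ B @ np.linalg.solve(R_K, B.T @ K @ A) + np.eye(n) - K
print(np.linalg.norm(residual))          # ~ 0

# Stabilizing property: A^T (I - K B (I + B^T K B)^{-1} B^T) must be d-stable.
M = A.T @ (np.eye(n) - K @ B @ np.linalg.solve(R_K, B.T))
print(max(abs(np.linalg.eigvals(M))))    # < 1
```

The second check is exactly the d-stability condition quoted below for the operator driving (4.33), which is what lets Theorem 4.5 apply to the \rho-recursion.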

If we choose K to be the stabilizing solution, the terms involving x_k cancel and we obtain
\[
\rho_k = A^T\bigl(I - KB(I + B^T K B)^{-1} B^T\bigr)\rho_{k+1} - A^T K B(I + B^T K B)^{-1}\phi_k + \psi_k. \tag{4.33}
\]
It is known that if K_0 \succeq 0 is a stabilizing solution to the DARE (4.32), then the matrix A^T(I - K_0 B(I + B^T K_0 B)^{-1} B^T) is d-stable. Then, by Theorem 4.5, there exists a unique solution \{\rho_k\}\in\ell_2^n(\mathbb{N}) to (4.33). Hence, by using (4.31) with x_0 = 0, then (4.30), and then (4.27), we have shown that any (\{\psi_k\},\{\phi_k\})\in H admits a unique decomposition as a sum of elements of X and X^\perp. Hence the theorem is proved. \square

Now we are in a position to describe the solution of the LQ problem on a semi-infinite interval:
\[
\frac{1}{2}\sum_{k=0}^{\infty}\Bigl\langle\begin{bmatrix} x_k\\ u_k\end{bmatrix}, \Sigma\begin{bmatrix} x_k\\ u_k\end{bmatrix}\Bigr\rangle + \sum_{k=0}^{\infty}\Bigl\langle\begin{bmatrix} x_k\\ u_k\end{bmatrix}, \begin{bmatrix}\psi_k\\ \phi_k\end{bmatrix}\Bigr\rangle \to \min, \tag{4.34}
\]
where
\[
x_{k+1} = Ax_k + Bu_k + g_k,\quad x_0 = x^0\in\mathbb{R}^n,\quad \{g_k\}\in\ell_2^n(\mathbb{N}),\quad (x,u)\in H. \tag{4.35}
\]

Theorem 4.8. Suppose there exists a stabilizing solution K to the DARE (4.21). Then \{f_k\}\in\ell_2^n(\mathbb{N}), where
\[
f_k = (A^T K B + \Sigma_{12})(\Sigma_{22} + B^T K B)^{-1}\bigl(\phi_k + B^T K g_k\bigr) - A^T K g_k - \psi_k.
\]
Furthermore, let \{\rho_k\} be the unique solution in \ell_2^n(\mathbb{N}) of the system
\[
\rho_k = C\rho_{k+1} + f_k,
\]
where
\[
C = A^T - (A^T K B + \Sigma_{12})(\Sigma_{22} + B^T K B)^{-1} B^T.
\]
Then the solution (\{x_k\},\{u_k\}) of the LQ problem (4.34) on the semi-infinite interval can be described by the following recurrent relations:
\[
x_{k+1} = C^T x_k + B(\Sigma_{22} + B^T K B)^{-1}\bigl(B^T\rho_{k+1} - B^T K g_k - \phi_k\bigr) + g_k, \tag{4.36}
\]
\[
u_k = -(\Sigma_{22} + B^T K B)^{-1}(B^T K A + \Sigma_{12}^T)x_k + (\Sigma_{22} + B^T K B)^{-1}\bigl(B^T\rho_{k+1} - B^T K g_k - \phi_k\bigr). \tag{4.37}
\]

Proof. By Theorem 4.6, the functional (4.34) restricted to the subspace X is convex. Hence the necessary and sufficient optimality condition for (4.34), (4.35) takes the form
\[
\Sigma\begin{bmatrix} x_k\\ u_k\end{bmatrix} + \begin{bmatrix}\psi_k\\ \phi_k\end{bmatrix} = \begin{bmatrix} A^T p_k - p_{k-1}\\ B^T p_k\end{bmatrix}
\]

when (x, u) satisfies (4.35), for some \{p_k\}\in\ell_2^n(\mathbb{N}). Moreover, we look for \{p_k\} in the form p_k = -Kx_{k+1} + \rho_{k+1}, where K is a stabilizing solution to the DARE (4.21). We finish the proof exactly as in Theorem 4.7. \square

5 Some extensions

5.1 Linear-quadratic control problem with quadratic constraints

Consider the following optimization problem:
\[
\|W_0(y - y_0)\|_H \to \min, \tag{5.1}
\]
\[
\|W_i(y - y_i)\|_H \le t_i,\quad i = 1,\ldots,m, \tag{5.2}
\]
\[
y \in c + Z. \tag{5.3}
\]
With our choice of H, Z and the other parameters, we arrive at the linear-quadratic control problem with quadratic constraints (see e.g. [7, 20]). We first consider the problem of checking the feasibility of (5.1)-(5.3):
\[
t \to \min, \tag{5.4}
\]
\[
\|W_i(y - y_i)\|_H \le t + t_i,\quad i = 1,\ldots,m, \tag{5.5}
\]
\[
y \in c + Z. \tag{5.6}
\]
To cast (5.4)-(5.6) in conic form, consider
\[
\Lambda : \mathbb{R}\times Z \to \underbrace{(\mathbb{R}\times H)\times(\mathbb{R}\times H)\times\cdots\times(\mathbb{R}\times H)}_{m\ \text{times}}
\]
such that
\[
\Lambda(t,y) = \bigl((t, W_1 y), (t, W_2 y), \ldots, (t, W_m y)\bigr),
\]
\[
a = \bigl((1,0),(0,0),\ldots,(0,0)\bigr),\quad b = \bigl((t_1, W_1(c - y_1)),\ldots,(t_m, W_m(c - y_m))\bigr),
\]
and X = \Lambda(\mathbb{R}\times Z). Then we can rewrite (5.4)-(5.6) in the form (2.1), (2.2). Observe that the only difference between (2.1), (2.2) and our conic formulation is the choice of b. It is quite obvious that the dual will be of the form
\[
\sum_{i=1}^{m} t_i r_i + \sum_{i=1}^{m}\langle W_i(c - y_i), u_i\rangle_H \to \min, \tag{5.7}
\]
\[
\sum_{i=1}^{m} W_i^* u_i \in Z^\perp,\quad \|u_i\| \le r_i,\quad i = 1,\ldots,m, \tag{5.8}
\]
\[
\sum_{i=1}^{m} r_i = 1. \tag{5.9}
\]

Thus, the whole numerical scheme remains the same for (5.4)-(5.6) (the only difference is in the choice of the initial feasible solution). Returning to (5.1)-(5.3), consider the following:
\[
\Lambda : \mathbb{R}\times Z \to \underbrace{(\mathbb{R}\times H)\times(\mathbb{R}\times H)\times\cdots\times(\mathbb{R}\times H)}_{(m+1)\ \text{times}}
\]
such that
\[
\Lambda(t,y) = \bigl((t, W_0 y), (0, W_1 y), \ldots, (0, W_m y)\bigr),\quad X = \Lambda(\mathbb{R}\times Z),
\]
\[
a = \bigl((1,0),(0,0),\ldots,(0,0)\bigr),\quad b = \bigl((0, W_0(c - y_0)), (t_1, W_1(c - y_1)), \ldots, (t_m, W_m(c - y_m))\bigr).
\]
We can rewrite (5.1)-(5.3) in the conic form
\[
\langle a, z\rangle \to \min,\quad z\in(b + X)\cap\Omega.
\]
One can easily see that the dual will have the following form:
\[
\sum_{i=1}^{m} t_i r_i + \sum_{i=0}^{m}\langle W_i(c - y_i), u_i\rangle_H \to \min, \tag{5.10}
\]
\[
\sum_{i=0}^{m} W_i^* u_i \in Z^\perp,\quad \|u_i\| \le r_i,\quad i = 1,\ldots,m, \tag{5.11}
\]
\[
\|u_0\| \le 1. \tag{5.12}
\]
It is quite easy to work out a scheme for the computation of the Nesterov-Todd direction following the ideas developed in [10].

6 Numerical results

In this section we consider several examples for both continuous and discrete-time formulations. The major difference between the two methods is in solving the linear-quadratic control problem with a linear term. In the continuous-time case on a finite interval (respectively, an infinite interval), solving the linear-quadratic control problem with a linear term requires solving a differential Riccati equation (respectively, a continuous algebraic Riccati equation) and systems of linear differential equations. In the discrete-time case, we have to solve a discrete Riccati equation and systems of difference equations. The numerical experiments confirm fast convergence to the optimal solution and good approximation of the continuous formulations by the discrete formulations. The algorithms are implemented in MATLAB, and integration of the ordinary differential equations is carried out with ODE45, a MATLAB ODE solver for nonstiff problems.
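As a minimal illustration of the differential-Riccati step: the paper's code integrates with MATLAB's ODE45; an analogous sketch in Python uses SciPy's RK45 via `solve_ivp`. The matrices below are illustrative, not the example data. Integrating the finite-horizon Riccati equation backward (in the reversed time s = T - t) over a long horizon, P(0) approaches the stabilizing solution of the continuous algebraic Riccati equation, which is the connection between the finite- and infinite-interval cases mentioned above.

```python
# Sketch only: backward integration of dP/ds = A^T P + P A - P B R^{-1} B^T P + Q,
# P(s=0) = 0, with RK45 (the Python analogue of MATLAB's ODE45).
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [-1.0, -0.5]])   # illustrative stable system
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.eye(1)

def riccati_rhs(s, p):
    # Riccati right-hand side in reversed time s = T - t.
    P = p.reshape(2, 2)
    dP = A.T @ P + P @ A - P @ B @ np.linalg.solve(R, B.T) @ P + Q
    return dP.ravel()

T = 30.0   # long horizon: P(0) should approach the CARE solution
sol = solve_ivp(riccati_rhs, (0.0, T), np.zeros(4), method="RK45",
                rtol=1e-8, atol=1e-10)
P0 = sol.y[:, -1].reshape(2, 2)
print(np.linalg.norm(P0 - solve_continuous_are(A, B, Q, R)))   # small
```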

6.1 Example 1: Multi-criteria Linear Quadratic Control Problem

We consider the multi-criteria linear quadratic control problem in both the continuous and the discrete case. For the continuous formulation, we first consider the problem (4.1), (4.2) (i.e. the problem on the finite interval [0, T]) with m = n = 3 and the following data. The target state trajectories, for criteria i = 1,\ldots,6, are

x_{1i}: e^{3t},          e^{3t},          \sin(t)e^{4t},   e^{2t},          \cos(t)e^{4t},   \cos(t)e^{3t}
x_{2i}: \cos(t)e^{4t},   \sin(t)e^{4t},   e^{3t},          \sin(t)e^{5t},   e^{3t},          e^{4t}
x_{3i}: \cos(t)e^{4t},   \cos(t)e^{4t},   \cos(t)e^{4t},   \sin(t)e^{5t},   e^{2t},          e^{3t}

and, for i = 7, 8,

x_{1i}: \cos(t)e^{4t},   \sin(t)e^{3t}
x_{2i}: e^{4t},          e^{4t}
x_{3i}: e^{3t},          e^{3t}

The target controls u_{1i}, u_{2i}, u_{3i} are rational functions of the form 1/(t^2 + c) with c\in\{3,5,6,7,8\}, and A, B and the initial state \alpha_0 are fixed constant matrices (of sizes 3\times 3, 3\times 3 and 3\times 1, respectively). The weight matrices are

Q_1 = diag(169, 196, 225), Q_2 = diag(169, 169, 9),  Q_3 = diag(196, 121, 25), Q_4 = diag(225, 81, 49),
Q_5 = diag(196, 49, 81),   Q_6 = diag(289, 25, 121), Q_7 = diag(324, 9, 169),  Q_8 = diag(361, 121, 225),
R_1 = diag(196, 144, 196), R_2 = diag(144, 9, 16),   R_3 = diag(169, 225, 36), R_4 = diag(196, 49, 64),
R_5 = diag(225, 81, 100),  R_6 = diag(256, 121, 144), R_7 = diag(289, 169, 196), R_8 = diag(324, 225, 36).

Then, applying the primal-dual algorithm based on the NT-direction with \varepsilon = 0.01, we obtain the following table of solutions for different T:

T                     1     2     3     4     5
Opt. value            2035  2071  2075  2076  2077
Number of iterations  6     6     6     6     6

T                     6     7     8     9     10
Opt. value            2077  2077  2077  2077  2077
Number of iterations  6     6     6     6     6

Figure 1: Plot of the approximation of the optimal control and the optimal control: T = 3.

Note also that the approximation of the optimal control approaches the optimal control after only a few iterations. In fact, after only 3-4 iterations, we obtain a very good approximation to the optimal solution. Figures 1 and 2 are graphs of the approximate optimal solution after three iterations and the optimal solution. The dotted line is the plot of the approximation and the solid line is the actual optimal solution (by the actual optimal solution we mean the solution obtained in the last iteration). Our next goal is to use the discrete formulation to estimate the optimal solution and optimal value for the above problem. The construction is quite straightforward. First, we choose a sufficiently large K > 0 and partition [0, T] into K uniform sub-intervals. Next we let h = T/K. Then our targets in the discrete formulation can be computed by the following simple procedure: for the state components,
\[
\begin{bmatrix} x_k^{(1i)}\\ x_k^{(2i)}\\ x_k^{(3i)}\end{bmatrix} = \begin{bmatrix} x_{1i}(kh)\\ x_{2i}(kh)\\ x_{3i}(kh)\end{bmatrix}
\]
for i = 1,\ldots,m and k = 0,1,\ldots,K. Similarly, for the control components,
\[
\begin{bmatrix} u_k^{(1i)}\\ u_k^{(2i)}\\ u_k^{(3i)}\end{bmatrix} = \begin{bmatrix} u_{1i}(kh)\\ u_{2i}(kh)\\ u_{3i}(kh)\end{bmatrix}
\]
for i = 1,\ldots,m and k = 0,1,\ldots,K.
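The discretization step itself (sampling with step h = T/K and forming the forward-Euler data \tilde{A} = I + hA, \tilde{B} = hB, \tilde{Q}_i = hQ_i, \tilde{R}_i = hR_i) can be sketched as follows. This is a hedged illustration: the matrices are made up, not the example data, and `euler_discretize` is a hypothetical helper name.

```python
# Sketch only: forward-Euler discretization of continuous LQ data (illustrative matrices).
import numpy as np
from scipy.linalg import expm

def euler_discretize(A, B, Q, R, T, K):
    """Return (A~, B~, Q~, R~) with A~ = I + hA, B~ = hB, Q~ = hQ, R~ = hR, h = T/K."""
    h = T / K
    return np.eye(A.shape[0]) + h * A, h * B, h * Q, h * R

A = np.array([[0.0, 1.0], [-2.0, -1.0]])
B = np.array([[0.0], [1.0]])
T, K = 3.0, 300
Ad, Bd, Qd, Rd = euler_discretize(A, B, np.eye(2), np.eye(1), T, K)

# Sanity check: K Euler steps approximate the exact transition matrix e^{AT}.
err = np.linalg.norm(np.linalg.matrix_power(Ad, K) - expm(A * T))
print(err)   # O(h); shrinks as K grows
```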

Figure 2: Plot of the approximation of the optimal state and the optimal state: T = 3.

The corresponding discrete-time problem will have the form
\[
\max_{i=1,\ldots,m}\sum_{k=0}^{K}\Bigl\{\bigl\langle x_k - \varphi_k^{(i)},\, \tilde{Q}_i\bigl(x_k - \varphi_k^{(i)}\bigr)\bigr\rangle + \bigl\langle u_k - \phi_k^{(i)},\, \tilde{R}_i\bigl(u_k - \phi_k^{(i)}\bigr)\bigr\rangle\Bigr\} \to \min, \tag{6.1}
\]
\[
x_{k+1} = \tilde{A}x_k + \tilde{B}u_k,\quad x_0 = \gamma_0,
\]
where \tilde{Q}_i = hQ_i, \tilde{R}_i = hR_i, \tilde{A} = I + hA and \tilde{B} = hB. We choose K = 300. Using the above formulation, we obtain an optimal solution which is very close to the optimal solution of the continuous case. As shown in Figures 3-4, we take the pointwise difference of the optimal solutions obtained from the continuous and discrete formulations. The result is quite good: the two optimal solutions are almost identical.

6.2 Example 3: Discrete-time linear-quadratic control problem with quadratic constraints

In this section we solve the linear-quadratic control problem with quadratic constraints using the discrete formulation. Let us first consider the feasibility-checking problem (5.4)-(5.6) in the discrete formulation of the linear-quadratic control problem with quadratic constraints, where Z, c, W_i, y_i for i = 1,\ldots,m are the same as in Example 1, and t_i is given by the following table:

i    1   2   3   4   5   6   7   8
t_i  4   20  39  30  32  38  43  45

It is easy to see that there exists a feasible solution to the problem (5.1)-(5.3) if and only if the optimal value of the problem (5.4)-(5.6) is negative.

Figure 3: Plot of the discrete-time optimal state for the multi-criteria linear quadratic control problem (3.1: optimal state obtained using the discrete formulation; 3.2: pointwise difference between the discrete-time and continuous-time optimal states).

Figure 4: Plot of the discrete-time optimal control for the multi-criteria linear quadratic control problem (4.1: optimal control obtained using the discrete formulation; 4.2: pointwise difference between the discrete-time and continuous-time optimal controls).

Figure 5: Plot of the optimal state (5.1) and the optimal control (5.2) for the discrete-time linear-quadratic control problem with quadratic constraints: T = 20.

Then, applying the primal-dual algorithm described in [10] based on the NT-direction for T = 3, we obtain the following optimal values of t after each number of iterations:

#iterations  1      2     3     4      5      6      7      8
Opt. value   11.87  5.91  1.86  -0.19  -0.16  -0.42  -0.49  -0.50

As shown in the above table, after the fourth iteration we obtain a feasible solution to the linear-quadratic control problem with quadratic constraints (5.1)-(5.3). Once we obtain a feasible solution for the optimization problem (5.1)-(5.3), we can then apply the primal-dual algorithm to solve the problem. The calculations of the NT-direction for this problem are quite similar to the calculations for (6.1); the only differences are in the definition of the closed vector subspace X and its orthogonal complement X^\perp. The optimal solution to the problem (5.1)-(5.3) for T = 20 based on the NT-direction is shown in Figure 5.
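The sign test used above (the problem (5.1)-(5.3) is feasible if and only if the optimal value of (5.4)-(5.6) is negative) can be illustrated on a toy finite-dimensional instance. This is a hedged sketch with made-up data, not the paper's example: here H = R^3, Z = span(d), W_i = I, so the optimal t equals min over s of the worst constraint violation along c + s d.

```python
# Toy instance of the feasibility problem (5.4)-(5.6):
#   minimize t  s.t.  ||W_i(y - y_i)|| <= t + t_i,  y in c + Z,
# with H = R^3, W_i = I, Z = span(d); illustrative data only.
import numpy as np
from scipy.optimize import minimize_scalar

c = np.array([1.0, 0.0, 0.0])
d = np.array([0.0, 1.0, 0.0])            # Z = span(d)
targets = [np.array([1.0, 2.0, 0.0]), np.array([1.0, -2.0, 0.0])]
t_lvl = [1.0, 1.0]

def worst_violation(s):
    # Optimal t for fixed y = c + s*d is the largest of ||y - y_i|| - t_i.
    y = c + s * d
    return max(np.linalg.norm(y - yi) - ti for yi, ti in zip(targets, t_lvl))

res = minimize_scalar(worst_violation, bounds=(-10.0, 10.0), method="bounded")
print(res.x, res.fun)   # best shift is s ~ 0 by symmetry; feasible iff res.fun < 0
```

For this data the optimal value is +1 (the two targets are 2 apart from every point of c + Z but the tolerances are only 1), so the test correctly reports infeasibility of the corresponding (5.1)-(5.3).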

References

[1] F. Alizadeh and D. Goldfarb, Second-order cone programming, ISMP 2000, Part 3 (Atlanta, GA), Math. Program. 95 (2003), no. 1, Ser. B, 3-51.

[2] A. Ben-Tal and A. Nemirovski, Lectures on Modern Convex Optimization, SIAM, Philadelphia, 2001.

[3] L. Faybusovich, Euclidean Jordan algebras and interior-point algorithms, Positivity 1 (1997), pp. 331-357.

[4] L. Faybusovich, A Jordan-algebraic approach to potential-reduction algorithms, Math. Zeitschrift 239 (2002), 117-129.

[5] L. Faybusovich, Linear systems in Jordan algebras and primal-dual interior-point algorithms, Journal of Computational and Applied Mathematics 86 (1997), pp. 149-175.

[6] L. Faybusovich and R. Arana, A long-step primal-dual algorithm for the symmetric programming problem, Lie theory and its applications in control (Wurzburg, 1999), Systems Control Lett. 43 (2001), 3-7.

[7] L. Faybusovich and J.B. Moore, Infinite-dimensional quadratic optimization: interior-point methods and control applications, Appl. Math. Optim. 36 (1997), pp. 43-66.

[8] L. Faybusovich and T. Mouktonglang, Finite-rank perturbation of the linear-quadratic control problem, Proceedings of the American Control Conference, Denver, Colorado, June 4-6, 2003, pp. 5347-5350.

[9] L. Faybusovich and T. Mouktonglang, Linear-quadratic control problem with a linear term on semiinfinite interval: theory and applications, Preprint (2003).

[10] L. Faybusovich and T. Tsuchiya, Primal-dual algorithms and infinite-dimensional Jordan algebras of finite rank, New trends in optimization and computational algorithms (NTOC 2001) (Kyoto), Math. Program. 97 (2003), no. 3, Ser. B, 471-493.

[11] H. Kwakernaak and R. Sivan, Linear Optimal Control Systems, Wiley Interscience, New York, 1972.

[12] P. Lancaster and L. Rodman, Algebraic Riccati Equations, Oxford Science Publications, 1995.

[13] M. Muramatsu, On a commutative class of search directions for linear programming over symmetric cones, Journal of Optimization Theory and Applications 112 (2002), no. 3, 595-625.

[14] R.D.C. Monteiro and T. Tsuchiya, Polynomial convergence of primal-dual algorithms for the second-order cone program based on the MZ-family of directions, Mathematical Programming 88 (2000), no. 1, 61-83.

[15] Y.E. Nesterov and M. Todd, Self-scaled barriers and interior-point methods for convex programming, Mathematics of Operations Research 22 (1997), pp. 1-42.

[16] S. Schmieta and F. Alizadeh, Extensions of primal-dual interior point algorithms to symmetric cones, Math. Program. 96 (2003), no. 3, Ser. A, 409-438.

[17] D. Serre, Matrices: Theory and Applications, Springer, 2002.

[18] T. Tsuchiya, A convergence analysis of the scaling-invariant primal-dual path-following algorithms for second-order cone programming, Optimization Methods and Software 11/12 (1999), 141-182.

[19] S. Wright, Primal-Dual Interior-Point Methods, SIAM Publications, Philadelphia, Pennsylvania, USA, 1997.

[20] V.A. Yakubovich, Nonconvex optimization problem: the infinite-horizon linear-quadratic control problem with quadratic constraints, Systems and Control Letters 19 (1992), 13-22.