
Infinite Dimensional Quadratic Optimization: Interior-Point Methods and Control Applications

January 1995

Leonid Faybusovich, Department of Mathematics, University of Notre Dame, Mail Distribution Center, Notre Dame, IN, USA

John B. Moore, Department of Systems Engineering and Cooperative Research Centre for Robust and Adaptive Systems, Research School of Information Sciences and Engineering, Canberra ACT 0200, Australia

Abstract. An infinite-dimensional convex optimization problem with a linear-quadratic cost function and linear-quadratic constraints is considered. We generalize the interior-point techniques of Nesterov and Nemirovsky to this infinite-dimensional situation. The complexity estimates obtained are similar to the finite-dimensional ones. We apply our results to the linear-quadratic control problem with quadratic constraints. It is shown that for this problem the Newton step is basically reduced to the standard LQ problem.

AMS classification: 90C20, 93C05, 49N05

Key words: control problems, quadratic constraints, path-following algorithms

1 Introduction

In their fundamental monograph [9], Nesterov and Nemirovsky have shown that a broad class of finite-dimensional optimization problems can be solved using interior-point techniques based on the notion of a self-concordant barrier. Many control problems (such as multi-criteria LQ and LQG problems) could be treated with the help of this technique provided an infinite-dimensional version were available. In this paper we make a natural step in this direction. Namely, we consider a convex quadratic programming problem with quadratic constraints in a Hilbert space. The generalization of the Nesterov-Nemirovsky technique is somewhat straightforward in this situation. We consider both long-step and short-step versions of a path-following algorithm and obtain estimates on the number of Newton steps analogous to the best known for the finite-dimensional situation. In [11] J. Renegar also considers an infinite-dimensional version of the Nesterov-Nemirovsky scheme. His approach is quite different, however, and aims at the reformulation of complexity estimates in terms of certain generalizations of condition numbers. Of course, performing the Newton step in our situation requires solving an infinite-dimensional linear-quadratic problem (without inequality constraints). We consider Yakubovich-type problems [13] and show that performing the Newton step reduces to solving the standard LQ problem of control plus a system of $m$ linear algebraic equations, where $m$ is the number of inequality constraints. It is quite natural to expect that the task of finding the Newton direction will at least require solving the standard LQ problem. Thus the real situation is as good as one could hope. The case of linear inequality constraints admits a direct treatment without the Nesterov-Nemirovsky technique and leads to sharper complexity estimates [4].

2 Duality, Central Trajectories and Dynamical Systems

Let $(H, \langle\cdot,\cdot\rangle)$ be a Hilbert space and $X$ a closed vector subspace of $H$. Given self-adjoint nonnegative definite bounded operators $Q_0, \ldots, Q_m$ on $H$, vectors $x_0, a_0, \ldots, a_m$ in $H$ and real numbers $b_0 = 0, b_1, \ldots, b_m$, consider the following optimization problem:

$$q_0(x) \to \min, \qquad (1)$$
$$x \in x_0 + X, \qquad (2)$$

$$q_i(x) \le 0, \quad i = 1, 2, \ldots, m, \qquad (3)$$

where $q_i(x) = \frac{1}{2}\langle x, Q_i x\rangle + \langle a_i, x\rangle + b_i$. We will assume throughout this paper that the set $P \subset H$ defined by the constraints (2), (3) is bounded and has a nonempty interior. More precisely,

$$\mathrm{int}(P) = \{x \in P : q_i(x) < 0,\ i = 1, 2, \ldots, m\} \qquad (4)$$

is not empty. Along with (1)-(3) consider the following (dual) problem:

$$\psi(y, \lambda) = q_0(y) + \sum_{i=1}^m \lambda_i q_i(y) \to \max, \qquad (5)$$
$$\nabla q_0(y) + \sum_{i=1}^m \lambda_i \nabla q_i(y) \in X^\perp, \qquad (6)$$
$$y \in x_0 + X, \quad \lambda_i \ge 0, \quad i = 1, 2, \ldots, m. \qquad (7)$$

Proposition 2.1 For every $x$ satisfying (2), (3) and every $(y, \lambda)$ satisfying (6), (7) we have

$$q_0(x) \ge \psi(y, \lambda). \qquad (8)$$

Proof: Indeed, with $L(x, \lambda) = q_0(x) + \sum_{i=1}^m \lambda_i q_i(x)$,

$$q_0(x) - \psi(y, \lambda) \ge L(x, \lambda) - L(y, \lambda) \ge \langle \nabla L(y, \lambda), x - y\rangle = 0,$$

where in the second inequality we used the fact that $L(x, \lambda)$ is convex in $x$ for nonnegative $\lambda_i$ (see, for example, [10]). The last equality follows by (6) and the fact that $x - y \in X$.

Remark. If $f$ is a smooth function on $H$, then $\nabla f(x) \in H$, $x \in H$, is uniquely determined by the relation

$$Df(x)\xi = \langle \nabla f(x), \xi\rangle, \quad \xi \in H. \qquad (9)$$

Here $Df(x)\xi$ stands for the Fréchet derivative of $f$ at the point $x$ evaluated on the vector $\xi$.

Consider the family of optimization problems of the form

$$f_\tau(x) = \tau q_0(x) - \sum_{i=1}^m \ln(-q_i(x)) \to \min, \qquad (10)$$
$$q_i(x) < 0, \quad i = 1, 2, \ldots, m, \qquad (11)$$
$$x \in x_0 + X. \qquad (12)$$

Here $\tau \ge 0$ is a parameter. In this section we will make the following assumption:

$$D^2 f_\tau(x)(\xi, \xi) \ge a_\tau(x)\langle \xi, \xi\rangle, \quad \tau \ge 0, \qquad (13)$$

for some $a_\tau(x) > 0$. Here $x \in \mathrm{int}(P)$, $\xi \in X$, and $D^2 f_\tau(x)(\xi, \xi)$ stands for the second Fréchet derivative of $f_\tau$ at the point $x \in \mathrm{int}(P)$ evaluated on the pair $(\xi, \xi) \in X \times X$. Observe that it is sufficient to verify the condition (13) for the case $\tau = 0$.
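To make the barrier family (10) concrete, here is a minimal finite-dimensional sketch (an illustration of this reviewer's own making, not code from the paper): $H = \mathbb{R}^n$, and all names (`q_val`, `barrier`, the arrays `Q0`, `a0`, `Qs`, ...) are assumptions. It evaluates $f_\tau$ together with its gradient and Hessian, the latter anticipating formula (14) below.

```python
import numpy as np

# Minimal sketch of the barrier family (10) for H = R^n; all names are
# illustrative, not from the paper.  q_i(x) = 0.5 <x, Q_i x> + <a_i, x> + b_i.

def q_val(Q, a, b, x):
    """Value of a quadratic form q(x) = 0.5 x'Qx + a'x + b."""
    return 0.5 * x @ Q @ x + a @ x + b

def barrier(tau, x, Q0, a0, Qs, As, bs):
    """Value, gradient and Hessian of f_tau(x) = tau*q_0(x) - sum ln(-q_i(x))."""
    val = tau * q_val(Q0, a0, 0.0, x)          # b_0 = 0, as in the paper
    g = tau * (Q0 @ x + a0)
    H = tau * Q0
    for Q, a, b in zip(Qs, As, bs):
        qi = q_val(Q, a, b, x)
        assert qi < 0, "x must lie in int(P)"
        gi = Q @ x + a                          # nabla q_i(x)
        val -= np.log(-qi)
        g -= gi / qi
        H = H - Q / qi + np.outer(gi, gi) / qi**2   # cf. formula (14) below
    return val, g, H
```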

One can easily compute

$$D^2 f_\tau(x)(\xi, \xi) = \left\langle \xi, \left(\tau Q_0 - \sum_{i=1}^m \frac{Q_i}{q_i(x)}\right)\xi\right\rangle + \sum_{i=1}^m \frac{\langle \nabla q_i(x), \xi\rangle^2}{q_i(x)^2}. \qquad (14)$$

We conclude from (14) that $f_\tau$ is convex on $\mathrm{int}(P)$.

Proposition 2.2 Under the assumption (13) the problem (10)-(12) has a unique solution $x(\tau) \in \mathrm{int}(P)$, $\tau \ge 0$.

Proof: Fix $\tau \ge 0$. Let $\tilde{x} \in \mathrm{int}(P)$ and $f_\tau(\tilde{x}) = \alpha$. Consider the set $P_\alpha = \{x \in \mathrm{int}(P) : f_\tau(x) \le \alpha\}$. It is clear that $P_\alpha$ is a convex and bounded subset of $\mathrm{int}(P)$. Suppose that $x_i$, $i = 1, 2, \ldots$, is a sequence of points in $P_\alpha$ such that $\lim x_i = x$, $i \to +\infty$, in the strong topology (i.e. in the topology induced by the norm). It is clear that $x \in \bar{P}$. If $x \in \partial P = \bar{P} \setminus \mathrm{int}(P)$, then $f_\tau(x_i) \to +\infty$, $i \to +\infty$. But $f_\tau(x_i) \le \alpha$ for all $i$. Hence $x \in \mathrm{int}(P)$. We have proved that $P_\alpha$ is convex, bounded and strongly closed. Hence $P_\alpha$ is weakly compact (see, for example, [2]). Since $f_\tau$ is convex, it is weakly semi-continuous from below on $\mathrm{int}(P)$. We conclude that $f_\tau$ attains its minimum $x(\tau)$ on $P_\alpha$. Observe that $\inf\{f_\tau(x) : x \in \mathrm{int}(P)\} = \inf\{f_\tau(x) : x \in P_\alpha\}$. The uniqueness of the minimum follows easily from the assumption (13).

Remark. In the finite-dimensional situation the curve $x(\tau)$, $\tau \ge 0$, is usually called the central trajectory, and the point $x(0)$ is the analytic center (see, for example, [3], p. 50).

Our intermediate goal is to prove that $x(\tau)$ is a smooth function of $\tau$ and converges to the optimal solution of (1)-(3) as $\tau \to +\infty$. Denote by $\pi : H \to X$ the orthogonal projection onto $X$. The point $x(\tau) \in \mathrm{int}(P)$ is uniquely determined by the condition

$$\nabla f_\tau(x(\tau)) \in X^\perp. \qquad (15)$$

In other words,

$$\pi\left(\tau a_0 + \tau Q_0 x(\tau) - \sum_{i=1}^m \frac{\nabla q_i(x(\tau))}{q_i(x(\tau))}\right) = 0. \qquad (16)$$

For any $\tau \ge 0$ consider the map $\Phi_\tau : \mathrm{int}(P) \to X$,

$$\Phi_\tau(x) = \pi\left(\tau Q_0 x - \sum_{i=1}^m \frac{\nabla q_i(x)}{q_i(x)}\right). \qquad (17)$$

We can rewrite (16) in the form

$$\Phi_\tau(x(\tau)) = -\tau\,\pi(a_0). \qquad (18)$$

Proposition 2.3 For each $\tau \ge 0$ the map $\Phi_\tau$ is a smooth isomorphism of $\mathrm{int}(P)$ onto $X$ such that $\Phi_\tau^{-1}$ depends smoothly on $\tau$.

Proof: By Proposition 2.2, for any $c \in X$ the problem

$$-\langle c, x\rangle + \frac{\tau}{2}\langle Q_0 x, x\rangle - \sum_{i=1}^m \ln(-q_i(x)) \to \min, \quad x \in \mathrm{int}(P), \qquad (19)$$

has a unique solution $x_c \in \mathrm{int}(P)$, and by (18) $\Phi_\tau(x_c) = c$. Hence $\Phi_\tau$ is surjective. On the other hand, if $\Phi_\tau(x_1) = \Phi_\tau(x_2)$, $x_1, x_2 \in \mathrm{int}(P)$, then again by (18) both $x_1$ and $x_2$ are solutions to (19). Hence by Proposition 2.2, $x_1 = x_2$, i.e. $\Phi_\tau$ is injective. It remains to prove that $\Phi_\tau^{-1}$ is smooth and depends smoothly on $\tau$. Observe that for every $\xi \in X$, $x \in \mathrm{int}(P)$,

$$D\Phi_\tau(x)\xi = \pi\left(\tau Q_0\xi - \sum_{i=1}^m \left(\frac{Q_i\xi}{q_i(x)} - \frac{\langle\nabla q_i(x), \xi\rangle\,\nabla q_i(x)}{q_i(x)^2}\right)\right). \qquad (20)$$

If $D\Phi_\tau(x)\xi = 0$, then by (20)

$$0 = \langle \xi, D\Phi_\tau(x)\xi\rangle = D^2 f_\tau(x)(\xi, \xi).$$

Hence, by (13), $\xi = 0$. Thus $D\Phi_\tau(x)$ induces an injective map from $X$ into $X$. Further, by (20),

$$D\Phi_\tau(x)\circ \mathrm{in} = \pi\left(\tau Q_0 - \sum_{i=1}^m \frac{Q_i}{q_i(x)} + \sum_{i=1}^m \frac{\nabla q_i(x)\otimes\nabla q_i(x)}{q_i(x)^2}\right)\Big|_X, \qquad (21)$$

where $\mathrm{in} : X \to H$ is the canonical immersion. The condition (13) easily implies that $D\Phi_\tau(x)|_X$ is surjective. We can now apply the implicit function theorem to conclude that $\Phi_\tau^{-1}$ is smooth and depends smoothly on $\tau$.

Corollary 2.1 The solution $x(\tau)$ of the problem (10)-(12) depends smoothly on $\tau$.

Proof: By (18), $x(\tau) = \Phi_\tau^{-1}(-\tau\,\pi(a_0))$. The result follows by Proposition 2.3.

We now introduce a family of Riemannian metrics $g_\tau$ determined by the Hessians of the functions $f_\tau$, namely

$$g_\tau(x; \xi, \eta) = D^2 f_\tau(x)(\xi, \eta), \qquad (22)$$

$\xi, \eta \in H$ and $q_i(x) < 0$, $i = 1, 2, \ldots, m$. We clearly have

$$g_\tau(x)(\xi, \eta) = \langle H_\tau(x)\xi, \eta\rangle, \qquad (23)$$
$$H_\tau(x) = \tau Q_0 - \sum_{i=1}^m \frac{Q_i}{q_i(x)} + \sum_{i=1}^m \frac{\nabla q_i(x)\otimes\nabla q_i(x)}{q_i(x)^2}, \qquad (24)$$

$\tau \ge 0$. We can think of $\mathrm{int}(P)$ (see (4)) as a Riemannian submanifold of the open subset in $H$ defined by the conditions $q_i(x) < 0$, $i = 1, 2, \ldots, m$. Given a smooth function $\varphi$ defined on an open neighborhood of $\mathrm{int}(P)$ in $H$, denote by $\nabla^\tau \varphi(x) \in X$ the gradient of $\varphi$ relative to the metric $g_\tau$; it is uniquely determined by the conditions

$$D\varphi(x)\xi = \langle \nabla\varphi(x), \xi\rangle = g_\tau(x; \xi, \nabla^\tau\varphi(x)) = D^2 f_\tau(x)(\xi, \nabla^\tau\varphi(x)), \qquad (25)$$

$\xi \in X$, $x \in \mathrm{int}(P)$. The existence and uniqueness of $\nabla^\tau\varphi(x) \in X$ satisfying (25) follow easily from (13).

Proposition 2.4 We have

$$H_\tau(x)\nabla^\tau\varphi(x) - \nabla\varphi(x) \in X^\perp. \qquad (26)$$

Proof: For $\xi \in X$:

$$\langle \xi, H_\tau(x)\nabla^\tau\varphi(x) - \nabla\varphi(x)\rangle = g_\tau(x; \xi, \nabla^\tau\varphi(x)) - \langle \xi, \nabla\varphi(x)\rangle = \langle\xi, \nabla\varphi(x)\rangle - \langle\xi, \nabla\varphi(x)\rangle = 0.$$

The result follows.

Corollary 2.2 We have

$$D\Phi_\tau(x)\nabla^\tau\varphi(x) = \pi(\nabla\varphi(x)). \qquad (27)$$

Proof: By (20), (24) we have $D\Phi_\tau(x)\nabla^\tau\varphi(x) = \pi(H_\tau(x)\nabla^\tau\varphi(x))$. The result follows by (26).
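In a finite-dimensional model, the metric gradient of (25) reduces to a single linear solve in subspace coordinates. A sketch under stated assumptions: `Z` is an assumed basis matrix with $X = \mathrm{range}(Z)$, and `H_tau`, `grad_phi` are taken from the barrier sketch above; the names are illustrative.

```python
import numpy as np

def metric_gradient(H_tau, grad_phi, Z):
    """Gradient of phi in the metric g_tau, as in (25): the unique v in
    X = range(Z) with D^2 f_tau(x)(xi, v) = <grad phi, xi> for all xi in X."""
    A = Z.T @ H_tau @ Z          # restriction of H_tau(x) to X, cf. (21), (24)
    rhs = Z.T @ grad_phi         # coordinates of the projection pi(grad phi)
    return Z @ np.linalg.solve(A, rhs)
```

Assumption (13) is what makes the matrix `A` invertible here; the Newton direction of Section 3 will simply be `-metric_gradient(H_tau, grad_f_tau, Z)` with `grad_f_tau` the gradient of $f_\tau$ itself.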

7 Proposition.5 Let '(x) =?q 0 (x). Then Proof: Set () = (x()). We have (see (7)): On the other hand, by (8) Hence, dx() d =? 5 '(x); 0: (8) d d () = D (x()) dx() d + (Q 0x()) : d d () =?a 0: D (x()) dx() d =? (5q 0(x())) : Comparing this with (7) and using Proposition., we arrive at (8). Corollary. The function q 0 (x()) is a monotonically decreasing function of for 0. Proof: Indeed, d d q 0(x()) = 5q 0 (x()); dx() d = g x(); 5 q 0 (x()); dx() d = g (x(); 5 q 0 (x()); 5 q 0 (x())) 0; where in the second equality we used (5) and in the third equality we used (8). Consider now the dual of the problem (0) - (): (y; ) = (y; ) + 5q 0 (y) + ln i + m( + ln)! max; (9) i 5 q i (y) X? ; (0) i > 0; i = ; ; : : :; m; () y x 0 + X Here (y; ) was dened in (5). 7

8 Proposition.6 Let x() be a solution to (0) - (). Set y() = x(), i () =? ; i = ; ; : : :; m: q i (x()) Then f (x()) = (y(); ()): () Proof: A direct computation. Proposition.7 For every x satisfying (), () and every (y; ) satisfying (0), () we have: f (x) (y; ): Proof: The proof is quite similar to the one of Lemma.9 in []. Corollary.4 The pair of (x(); ()) is an optimal solution to (9) - (). Corollary.5 Let x be the optimal solution to the problem () - (). Then q 0 (x()) q 0 (x ) q 0 (x())? m ; () lim q 0 (x()) = q 0 (x );! +: (4) Proof: Observe that (x(); ()) is a feasible solution to (5) - (7). By Proposition. q 0 (x ) (x(); ()) = q 0 (x())? m. Since by Proposition.5 q 0 (x()) is a monotonically decreasing function of, (4) follows from (). Corollary.6 The function (y(); ()) is a monotonically increasing function of for 0. Proof: The proof is quite similar to one of Theorem.8 in []. Corollary.7 If > > 0; then q 0 (x( ))? q 0 (x( )) m? m : (5) 8

Corollary 2.7 If $\tau_2 > \tau_1 > 0$, then

$$q_0(x(\tau_1)) - q_0(x(\tau_2)) \le \frac{m}{\tau_1} - \frac{m}{\tau_2}. \qquad (35)$$

Proof: By the previous corollary, $\psi(x(\tau_1), \lambda(\tau_1)) \le \psi(x(\tau_2), \lambda(\tau_2))$. We also know that

$$q_0(x(\tau)) - \psi(x(\tau), \lambda(\tau)) = \frac{m}{\tau}. \qquad (36)$$

Hence

$$q_0(x(\tau_1)) - q_0(x(\tau_2)) \le \left[q_0(x(\tau_1)) - \psi(x(\tau_1), \lambda(\tau_1))\right] - \left[q_0(x(\tau_2)) - \psi(x(\tau_2), \lambda(\tau_2))\right] = \frac{m}{\tau_1} - \frac{m}{\tau_2}.$$

3 Complexity Estimates for Path-Following Algorithms

The main idea behind path-following algorithms is to produce a finite-step approximation to the central trajectory. Using the interpretation of points on the central trajectory as optimal solutions to the family of optimization problems (10)-(12), one can reduce the construction of path-following algorithms to the performance of Newton steps for the functions (10) and the updating of the parameter $\tau$. The choice of logarithmic barriers enables us to use the powerful technique of self-concordant functions [9]. In this section we show that an infinite-dimensional generalization of the Nesterov-Nemirovsky scheme is quite straightforward. We use some simplifications of this scheme proposed by F. Jarre [6] and D. den Hertog [3]. Interior-point methods for finite-dimensional convex quadratic programming problems with quadratic constraints have also been considered in [7], [8] and [5].

Let $q(x) = \frac{1}{2}\langle x, Qx\rangle + \langle a, x\rangle + b$ be a convex quadratic form on $H$. Given $x, h \in H$, consider the function

$$\chi(t) = q(x + th), \quad t \in \mathbb{R}. \qquad (37)$$

We obviously have

$$\chi(t) = q(x) + (Dq(x)h)\,t + \left[D^2 q(x)(h, h)\right]\frac{t^2}{2}. \qquad (38)$$

Suppose that $D^2 q(x)(h, h) > 0$ and $\chi(0) = q(x) < 0$. Since $\chi$ is convex we conclude that

$$\chi(t) = \ell(t - t_1)(t - t_2), \qquad (39)$$

where $\ell = D^2 q(x)(h, h)/2$ and $t_1 < 0 < t_2$. In particular,

$$\chi(t) < 0 \iff t_1 < t < t_2. \qquad (40)$$

Furthermore,

$$-\ln(-\chi(t)) = -\ln\ell - \ln(t_2 - t) - \ln(t - t_1). \qquad (41)$$

Using (41) we can easily compute the Taylor expansion of the function $\omega(x) = -\ln(-q(x))$. Indeed, by (41),

$$\omega(x + th) = \sum_{n=0}^\infty \frac{t^n}{n!}D^n\omega(x)(h, \ldots, h) = -\ln\left(\ell\, t_2(-t_1)\right) + \sum_{n=1}^\infty \frac{t^n}{n}\left(\frac{1}{t_1^n} + \frac{1}{t_2^n}\right). \qquad (42)$$

Hence

$$D^n\omega(x)(h, \ldots, h) = (n - 1)!\left(\frac{1}{t_1^n} + \frac{1}{t_2^n}\right), \quad n \ge 1. \qquad (43)$$

Here $D^n\omega(x)$ denotes the $n$-th Fréchet derivative of $\omega$ at the point $x \in H$. If $D^2 q(x)(h, h) = 0$ but $Dq(x)h \ne 0$, then $\chi(t)$ is a linear function with a single root $t_1 \ne 0$. The corresponding expansion for $\omega$ will have the form

$$\omega(x + th) = -\ln(-q(x)) + \sum_{n\ge 1} \frac{t^n}{n\, t_1^n}. \qquad (44)$$

Return now to the problem (10)-(12). Given $x \in \mathrm{int}(P)$, introduce a measure of proximity of $x$ to the point $x(\tau)$ on the central trajectory:

$$\delta_\tau(x) = \left[D^2 f_\tau(x)(p_\tau(x), p_\tau(x))\right]^{1/2}, \qquad (45)$$

where

$$p_\tau(x) = -\nabla^\tau f_\tau(x) \qquad (46)$$

(see (25)).
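In the finite-dimensional model, (45)-(46) again amount to one linear solve. A sketch with illustrative names, reusing the conventions of the barrier sketch (`H_tau`, `grad_f` the Hessian and gradient of $f_\tau$ at $x$, `Z` a basis of $X$):

```python
import numpy as np

def newton_dir(H_tau, grad_f, Z):
    """Newton direction p_tau(x) of (46) and proximity delta_tau(x) of (45)."""
    A = Z.T @ H_tau @ Z
    w = np.linalg.solve(A, Z.T @ grad_f)       # coordinates of -p_tau(x)
    delta = float(np.sqrt(w @ A @ w))          # delta^2 = D^2 f_tau(x)(p, p)
    return -Z @ w, delta
```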

Proposition 3.1 Suppose that $q_i(x) < 0$ for $i = 1, 2, \ldots, m$ and $h \in H$ is such that

$$D^2 f_0(x)(h, h) < 1; \qquad (47)$$

then $q_i(x + h) < 0$ for $i = 1, 2, \ldots, m$.

Proof: Consider the functions $\chi_i(t) = q_i(x + th)$ and $\omega_i(t) = -\ln(-\chi_i(t))$, $i = 1, 2, \ldots, m$. Fix $i \in [1, m]$. Consider first the case $D^2 q_i(x)(h, h) > 0$. We have by (43)

$$D^2\omega_i(x)(h, h) = \frac{1}{(t_1^{(i)})^2} + \frac{1}{(t_2^{(i)})^2}, \qquad (48)$$

where $t_1^{(i)} < 0 < t_2^{(i)}$ are the roots of the quadratic polynomial $\chi_i(t)$. Since

$$D^2 f_0(x)(h, h) = \sum_{i=1}^m D^2\omega_i(x)(h, h) < 1,$$

we conclude by (48) that $\min\{|t_1^{(i)}|,\ t_2^{(i)}\} > 1$. Hence $[-1, 1] \subset\ ]t_1^{(i)}, t_2^{(i)}[$. But $\chi_i(t) < 0$ for $t \in\ ]t_1^{(i)}, t_2^{(i)}[$. This proves $q_i(x + h) < 0$. If $D^2 q_i(x)(h, h) = 0$ but $Dq_i(x)h \ne 0$, then $\chi_i(t)$ is a linear (nonconstant) function with a root $t_1^{(i)}$ such that $|t_1^{(i)}| > 1$. Since $\chi_i(0) < 0$, we necessarily have $\chi_i(1) < 0$. The case of constant $\chi_i$ is trivial.

Lemma 3.1 Given $A_1 > 0, \ldots, A_m > 0$, we have

$$\left(A_1^x + A_2^x + \cdots + A_m^x\right)^{1/x} \le \left(A_1^y + A_2^y + \cdots + A_m^y\right)^{1/y}$$

for $0 < y < x$.

Proof: It is sufficient to prove that the function

$$X(x) = \frac{1}{x}\ln\left(A_1^x + \cdots + A_m^x\right)$$

is monotonically decreasing for $x > 0$. But

$$X'(x) = \frac{1}{x^2\left(A_1^x + \cdots + A_m^x\right)}\left[\sum_{i=1}^m A_i^x\ln(A_i^x) - \left(A_1^x + \cdots + A_m^x\right)\ln\left(A_1^x + \cdots + A_m^x\right)\right].$$

The function $S(p_1, \ldots, p_m) = \sum_{i=1}^m p_i\ln p_i$ is convex on the positive orthant $p_i \ge 0$. Hence $\max\{S(p) : p_i \ge 0,\ i = 1, 2, \ldots, m,\ p_1 + \cdots + p_m = c\}$ is attained at one of the extreme points of the simplex $\{p \in \mathbb{R}^m : p_i \ge 0,\ p_1 + \cdots + p_m = c\}$. We conclude that $S(p_1, \ldots, p_m) \le (p_1 + \cdots + p_m)\ln(p_1 + \cdots + p_m)$. Hence $X'(x) \le 0$ for any positive $x$.

Proposition 3.2 For any $x \in \mathrm{int}(P)$, $h \in X$, $\tau \ge 0$ we have

$$\left[\frac{D^j f_\tau(x)(h, \ldots, h)}{(j-1)!}\right]^{1/j} \le \left[\frac{D^k f_\tau(x)(h, \ldots, h)}{(k-1)!}\right]^{1/k} \qquad (49)$$

for any $k, j$ such that $k$ is even and $k \le j$. In particular,

$$D^3 f_\tau(x)(h, h, h) \le 2\left[D^2 f_\tau(x)(h, h)\right]^{3/2}. \qquad (50)$$

Remark. The condition (50) means that the function $f_\tau$ is self-concordant (with the constant 2). It is crucial in the Nesterov-Nemirovsky scheme [3], [9].

Proof: Observe that $f_\tau(x) = \tau q_0(x) + \sum_{i=1}^m \omega_i(x)$ with $\omega_i(x) = -\ln(-q_i(x))$. The inequalities (49) follow easily from (43) and Lemma 3.1.

Proposition 3.3 Suppose that $x \in \mathrm{int}(P)$ and $t^* = 1/(1 + \delta_\tau(x))$. Then $x + t^* p_\tau(x) \in \mathrm{int}(P)$ and

$$f_\tau(x + t^* p_\tau(x)) \le f_\tau(x) + \ln(1 + \delta_\tau(x)) - \delta_\tau(x). \qquad (51)$$

Remark. Recall (46) that $p_\tau(x) = -\nabla^\tau f_\tau(x)$ is the Newton direction for $f_\tau$ at the point $x$; $\delta_\tau(x)$ was defined in (45).

Proof: Set $p = p_\tau(x)$, $\delta = \delta_\tau(x)$ in this proof. We have

$$f_\tau(x + tp) = f_\tau(x) + t\,Df_\tau(x)p + \frac{t^2}{2}D^2 f_\tau(x)(p, p) + \sum_{j\ge 3} D^j f_\tau(x)(p, \ldots, p)\frac{t^j}{j!}. \qquad (52)$$

Let $\chi_i(t) = q_i(x + tp) = \ell_i\left(t - t_1^{(i)}\right)\left(t - t_2^{(i)}\right)$ with $t_1^{(i)} < 0 < t_2^{(i)}$. By (43),

$$D^j f_\tau(x)(p, \ldots, p) = (j-1)!\sum_{i=1}^m\left(\frac{1}{(t_1^{(i)})^j} + \frac{1}{(t_2^{(i)})^j}\right), \quad j \ge 3, \qquad (53)$$

and

$$\delta_\tau(x)^2 = D^2 f_\tau(x)(p, p) = \tau D^2 q_0(x)(p, p) + \sum_{i=1}^m\left(\frac{1}{(t_1^{(i)})^2} + \frac{1}{(t_2^{(i)})^2}\right). \qquad (54)$$

(If $\chi_i(t)$ is linear, the formulas (53), (54) hold true if we set $t_2^{(i)} = +\infty$; see (44).) Observe, further, that by (25),

$$Df_\tau(x)p = -D^2 f_\tau(x)(p, p) = -\delta_\tau(x)^2, \qquad (55)$$

and by (53), (54) and Lemma 3.1,

$$\sum_{j\ge 3} D^j f_\tau(x)(p, \ldots, p)\frac{t^j}{j!} \le \sum_{j\ge 3} \frac{|t|^j}{j}\,\delta_\tau(x)^j. \qquad (56)$$

Combining (52), (55), (56), we obtain

$$f_\tau(x + tp) - f_\tau(x) \le -t\delta^2 - t\delta - \ln(1 - t\delta), \qquad (57)$$

$0 \le t < 1/\delta$. For $t = t^*$ we obtain (51). Observe now that

$$D^2 f_0(x)(t^* p,\ t^* p) \le D^2 f_\tau(x)(t^* p,\ t^* p) = \frac{\delta_\tau(x)^2}{(1 + \delta_\tau(x))^2} < 1.$$

Hence $x + t^* p \in \mathrm{int}(P)$ by Proposition 3.1.
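Proposition 3.3 is exactly the classical damped Newton step: with step length $t^* = 1/(1+\delta)$ the iterate stays strictly feasible and $f_\tau$ decreases by at least $\delta - \ln(1+\delta)$. A one-function sketch, assuming the `newton_dir` helper from the proximity sketch above:

```python
def damped_step(x, H_tau, grad_f, Z):
    """One damped Newton step of Proposition 3.3: feasibility is guaranteed
    by (47), and f_tau decreases by at least delta - ln(1 + delta), by (51)."""
    p, delta = newton_dir(H_tau, grad_f, Z)    # from the proximity sketch
    return x + p / (1.0 + delta), delta
```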

Lemma 3.2 Suppose that $F$ is a symmetric trilinear form and $G$ a nonnegative symmetric bilinear form on a Hilbert space $H$ such that

$$F(h, h, h) \le \sigma\, G(h, h)^{3/2} \qquad (58)$$

for some $\sigma > 0$ and every $h \in H$. Then

$$F(h_1, h_2, h_3) \le \sigma\left[G(h_1, h_1)\,G(h_2, h_2)\,G(h_3, h_3)\right]^{1/2} \qquad (59)$$

for any $h_1, h_2, h_3 \in H$.

Proof: Given $h_1, h_2, h_3 \in H$, consider the vector subspace $V \subset H$ spanned by the vectors $h_1, h_2, h_3$. It is clear that $V$ is at most 3-dimensional. Consider the restrictions of $F, G$ to $V$. It is sufficient to establish (59) for any three vectors in $V$. For the proof of this see, for example, [6].

Proposition 3.4 Let $x \in \mathrm{int}(P)$, $\delta_\tau(x) < 1$. Then $x^+ = x + p_\tau(x) \in \mathrm{int}(P)$ and

$$\delta_\tau(x^+) \le \left(\frac{\delta_\tau(x)}{1 - \delta_\tau(x)}\right)^2. \qquad (60)$$

Proof: Given $\xi \in H$, consider the function

$$\varphi_\xi(t) = Df_\tau(x + tp)\xi - (1 - t)Df_\tau(x)\xi, \qquad (61)$$

where $p = p_\tau(x)$. It is clear that $\varphi_\xi(1) = Df_\tau(x + p)\xi$ and $\varphi_\xi(0) = 0$. We have

$$\varphi_\xi'(t) = D^2 f_\tau(x + tp)(\xi, p) + Df_\tau(x)\xi, \qquad (62)$$
$$\varphi_\xi''(t) = D^3 f_\tau(x + tp)(\xi, p, p). \qquad (63)$$

Using the inequality (50) and Lemma 3.2, we obtain

$$\varphi_\xi''(t) \le 2\, D^2 f_\tau(x + tp)(p, p)\left[D^2 f_\tau(x + tp)(\xi, \xi)\right]^{1/2}. \qquad (64)$$

Observe now that

$$D^2 f_\tau(x)(\xi, \xi)\left[1 - t\delta_\tau(x)\right]^2 \le D^2 f_\tau(x + tp)(\xi, \xi) \le \frac{D^2 f_\tau(x)(\xi, \xi)}{\left[1 - t\delta_\tau(x)\right]^2} \qquad (65)$$

for $0 \le t < 1/\delta_\tau(x)$. Indeed, let $u(t) = D^2 f_\tau(x + tp)(\xi, \xi)$. Using the inequality (50) and Lemma 3.2, we have

$$|u'(t)| \le 2\,u(t)\,v(t)^{1/2}, \qquad (66)$$

where $v(t) = D^2 f_\tau(x + tp)(p, p)$. Now (again using (50) and Lemma 3.2),

$$|v'(t)| \le 2\,v(t)^{3/2}. \qquad (67)$$

By (67) (see [3], p. 54),

$$v(t)^{1/2} \le \frac{v(0)^{1/2}}{1 - t\,v(0)^{1/2}}, \quad 0 \le t < v(0)^{-1/2}.$$

By (66) we will have ([3], p. 54)

$$|u'(t)| \le \frac{2\,u(t)\,v(0)^{1/2}}{1 - t\,v(0)^{1/2}}. \qquad (68)$$

By (68) ([3], p. 55),

$$u(0)\left(1 - t\,v(0)^{1/2}\right)^2 \le u(t) \le \frac{u(0)}{\left(1 - t\,v(0)^{1/2}\right)^2}, \qquad (69)$$

which coincides with (65), since $v(0)^{1/2} = \delta_\tau(x)$. Observe now that $\varphi_\xi'(0) = D^2 f_\tau(x)(p, \xi) + Df_\tau(x)\xi = -Df_\tau(x)\xi + Df_\tau(x)\xi = 0$, where we used (25), (46). Hence from (64), (65) we obtain

$$\varphi_\xi'(t) = \int_0^t \varphi_\xi''(s)\,ds \le \left[D^2 f_\tau(x)(\xi, \xi)\right]^{1/2}\int_0^t \frac{2\delta^2}{(1 - s\delta)^3}\,ds = \left[D^2 f_\tau(x)(\xi, \xi)\right]^{1/2}\delta\left[\frac{1}{(1 - t\delta)^2} - 1\right], \qquad (70)$$

where $\delta = \delta_\tau(x)$. Further, taking into account $\varphi_\xi(0) = 0$, we obtain

$$|\varphi_\xi(t)| \le \left[D^2 f_\tau(x)(\xi, \xi)\right]^{1/2}\delta\int_0^t\left[\frac{1}{(1 - s\delta)^2} - 1\right]ds = \left[D^2 f_\tau(x)(\xi, \xi)\right]^{1/2}\frac{t^2\delta^2}{1 - t\delta}. \qquad (71)$$

Take $t = 1$ and $\xi = -p_\tau(x^+)$. We have

$$\varphi_\xi(1) = -Df_\tau(x + p)\,p_\tau(x^+) = D^2 f_\tau(x^+)(p_\tau(x^+), p_\tau(x^+)) = \delta_\tau(x^+)^2.$$

Hence, by (71),

$$\delta_\tau(x^+)^2 \le \left[D^2 f_\tau(x)(p_\tau(x^+), p_\tau(x^+))\right]^{1/2}\frac{\delta^2}{1 - \delta} \le \frac{\delta_\tau(x^+)}{1 - \delta}\cdot\frac{\delta^2}{1 - \delta},$$

which coincides with (60). In the last inequality we used (65) again.

Remark. The main idea of the proof of Proposition 3.4 is due to Y. Nesterov and A. Nemirovsky [9]. We used simplifications suggested by [6] and [3].

Given $\mu \in \mathbb{R}$, denote by $P_{\tau,\mu}$ the set $\{x \in \mathrm{int}(P) : f_\tau(x) \le \mu\}$. We know (see the proof of Proposition 2.2) that $P_{\tau,\mu}$ is a convex, closed and bounded set. Suppose that for every $\mu \in \mathbb{R}$ there exist positive constants $m_\mu$ and $M_\mu$ such that

$$m_\mu\|\xi\|^2 \le D^2 f_\tau(x)(\xi, \xi) \le M_\mu\|\xi\|^2, \quad \xi \in X,\ x \in P_{\tau,\mu}. \qquad (72)$$

The next proposition states the convergence of the Newton method for the function $f_\tau$.

Proposition 3.5 Given $\tau \ge 0$ and $x \in \mathrm{int}(P)$ with $\delta_\tau(x) \le 1/3$, consider the sequence

$$x_0 = x, \quad x_1 = x_0 + p_\tau(x_0), \quad \ldots, \quad x_i = x_{i-1} + p_\tau(x_{i-1}), \quad i \ge 1.$$

Then $x_i \in \mathrm{int}(P)$ for all $i$ and, moreover, under the assumption (72),

$$\lim_{i\to+\infty} x_i = x(\tau). \qquad (73)$$

Proof: By (60),

$$\delta_\tau(x_1) \le \frac{\delta_\tau(x_0)^2}{(1 - \delta_\tau(x_0))^2} \le \frac{9}{4}\delta_\tau(x_0)^2.$$

Using induction on $i$, we can easily conclude from (60) that

$$\delta_\tau(x_i) \le \left(\frac{9}{4}\right)^{2^i - 1}\delta_\tau(x_0)^{2^i} \le \frac{4}{9}\left(\frac{3}{4}\right)^{2^i}. \qquad (74)$$

In particular,

$$\delta_\tau(x_i) \to 0, \quad i \to +\infty. \qquad (75)$$

It is also clear from (74) that $\delta_\tau(x_i) < 1/3$ for all $i$. Hence, by Proposition 3.3, $x_i \in \mathrm{int}(P)$ for all $i$. Moreover, by (74), $\delta_\tau(x_i) < 1$ for all $i$. Since the function $\delta^2 + \delta + \ln(1 - \delta)$ is positive for $0 < \delta < 1$, we conclude by (57) that $f_\tau(x_i) < f_\tau(x_{i-1})$, $i \ge 1$. Hence all $x_i$ belong to the set $P_{\tau,\mu}$ for an appropriately chosen $\mu$. Since by (72)

$$\delta_\tau(x_i) \ge m_\mu^{1/2}\|p_\tau(x_i)\|, \quad i \ge 1,$$

we have by (75)

$$p_\tau(x_i) \to 0, \quad i \to +\infty. \qquad (76)$$

Recall that by (26), $\pi(H_\tau(x_i)\,p_\tau(x_i)) = -\pi(\nabla f_\tau(x_i))$. The condition (72) implies that $\|\pi H_\tau(x)|_X\| \le M_\mu$ for all $x \in P_{\tau,\mu}$. Hence (76) implies

$$\pi(\nabla f_\tau(x_i)) \to 0, \quad i \to +\infty. \qquad (77)$$

Observe now that $(Df_\tau(y) - Df_\tau(x))(y - x) \ge m_\mu\|x - y\|^2$ for all $x, y \in P_{\tau,\mu}$ (see, for example, [10]). Take $x = x(\tau)$, $y = x_i$. We have

$$(Df_\tau(y) - Df_\tau(x))(y - x) = \langle\nabla f_\tau(x_i) - \nabla f_\tau(x(\tau)),\ x_i - x(\tau)\rangle = \langle\pi(\nabla f_\tau(x_i)),\ x_i - x(\tau)\rangle \ge m_\mu\|x_i - x(\tau)\|^2, \qquad (78)$$

where in the second equality we used $x_i - x(\tau) \in X$ and $\pi(\nabla f_\tau(x(\tau))) = 0$. Now (77), (78) and the boundedness of $P_{\tau,\mu}$ imply $\|x_i - x(\tau)\| \to 0$, $i \to +\infty$.
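Propositions 3.3-3.5 combine into the usual two-phase centering loop: damped steps while $\delta \ge 1/3$ (each cuts $f_\tau$ by a constant), full steps afterwards, with (60) giving quadratic convergence. A sketch under the same illustrative conventions; `oracle(x, tau)` is assumed to return the pair `(p, delta)` of the proximity sketch.

```python
def center(x, tau, oracle, tol=1e-12, max_iter=200):
    """Newton iteration of Proposition 3.5 toward the central point x(tau)."""
    for _ in range(max_iter):
        p, delta = oracle(x, tau)
        if delta <= tol:
            break
        if delta < 1.0 / 3.0:
            x = x + p                      # full step: quadratic phase, (60)
        else:
            x = x + p / (1.0 + delta)      # damped step: Proposition 3.3
    return x
```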

Proposition 3.6 Given $x \in \mathrm{int}(P)$ with $\delta_\tau(x) \le 1/3$, we have

$$f_\tau(x) - f_\tau(x(\tau)) \le \frac{\delta_\tau(x)^2}{1 - \left[\frac{9}{4}\delta_\tau(x)\right]^2}. \qquad (79)$$

Proof: Set $p = p_\tau(x)$ in this proof. Since $f_\tau$ is convex, we have

$$f_\tau(x + p) \ge f_\tau(x) + Df_\tau(x)p = f_\tau(x) - \delta_\tau(x)^2, \qquad (80)$$

where we used (55). Let $x_0 = x$ and let $x_1, x_2, \ldots$ be the sequence of points obtained by repeating Newton steps. We have by (74)

$$\delta_\tau(x_i) \le \left(\frac{9}{4}\right)^{2^i - 1}\delta_\tau(x_0)^{2^i}, \qquad (81)$$

and $x_i \to x(\tau)$, $i \to +\infty$, by Proposition 3.5. Hence,

$$f_\tau(x) - f_\tau(x(\tau)) = \sum_{i=0}^\infty\left[f_\tau(x_i) - f_\tau(x_{i+1})\right] \le \sum_{i=0}^\infty \delta_\tau(x_i)^2 \le \delta_\tau(x_0)^2\sum_{i=0}^\infty\left[\frac{9}{4}\delta_\tau(x_0)\right]^{2^{i+1} - 2} \le \frac{\delta_\tau(x_0)^2}{1 - \left[\frac{9}{4}\delta_\tau(x_0)\right]^2},$$

where we used (80) in the first inequality and (81) in the second.

Proposition 3.7 If $x \in \mathrm{int}(P)$ and $\delta_\tau(x) \le 1/3$, then

$$|q_0(x) - q_0(x(\tau))| \le \left[\frac{\delta_\tau(x)}{1 - \delta_\tau(x)} + \frac{9}{4}\cdot\frac{\delta_\tau(x)^2}{1 - \delta_\tau(x)}\right]\frac{\sqrt{m}}{\tau}. \qquad (82)$$

Proof: The proof is quite similar to that of the corresponding lemma in [3].

We are now in a position to describe a version of the large-step path-following algorithm for solving (1)-(3). Suppose we are given $\tau_0 > 0$ and $x_0 \in \mathrm{int}(P)$ such that $\delta_{\tau_0}(x_0) \le 1/3$. Given $\theta > 0$, we perform what are called outer iterations $\tau_1 = (1 + \theta)\tau_0, \ldots, \tau_i = (1 + \theta)^i\tau_0$. Suppose we are able to construct a sequence of points $x_1, x_2, \ldots$ in $\mathrm{int}(P)$ such that $\delta_{\tau_i}(x_i) \le 1/3$.

Theorem 3.1 Suppose $\varepsilon > 0$ is given and

$$i \ge \frac{\ln\frac{4m}{\varepsilon\tau_0}}{\ln(1 + \theta)}. \qquad (83)$$

Then $q_0(x_i) - q_0(x^*) \le \varepsilon$, where $x^*$ is an optimal solution to the problem (1)-(3).

Proof: Observe that (83) is equivalent to

$$\tau_i \ge \frac{4m}{\varepsilon}. \qquad (84)$$

By (33),

$$q_0(x(\tau_i)) - q_0(x^*) \le \frac{m}{\tau_i}.$$

By (82) with $\delta_{\tau_i}(x_i) \le 1/3$ we obtain

$$q_0(x_i) - q_0(x(\tau_i)) \le 3\,\frac{\sqrt{m}}{\tau_i}.$$

Thus

$$q_0(x_i) - q_0(x^*) = q_0(x_i) - q_0(x(\tau_i)) + q_0(x(\tau_i)) - q_0(x^*) \le 3\,\frac{\sqrt{m}}{\tau_i} + \frac{m}{\tau_i} \le \frac{4m}{\tau_i} \le \varepsilon,$$

where in the last inequality we used (84).

It remains to describe a procedure for updating $x_i$. Suppose we are given $x_i \in \mathrm{int}(P)$ such that $\delta_{\tau_i}(x_i) \le 1/3$. Set $\tau_{i+1} = (1 + \theta)\tau_i$. We have to find $x_{i+1} \in \mathrm{int}(P)$ such that $\delta_{\tau_{i+1}}(x_{i+1}) \le 1/3$. To do that we perform several Newton steps for the function $f_{\tau_{i+1}}$, using $x_i$ as a starting point. Each such Newton step is called an inner iteration.

Theorem 3.2 Each outer iteration requires at most

$$\left(\frac{1}{3} - \ln\frac{4}{3}\right)^{-1}\left[\frac{16}{63} + \theta\left(\frac{5\sqrt{m}}{2} + \frac{m\theta}{1 + \theta}\right)\right] \qquad (85)$$

inner iterations.

Proof: We start with a point $x_i \in \mathrm{int}(P)$ such that $\delta_{\tau_i}(x_i) \le 1/3$. By Proposition 3.3 the point

$$x = x_i + t^* p_{\tau_{i+1}}(x_i), \quad t^* = \frac{1}{1 + \delta_{\tau_{i+1}}(x_i)},$$

belongs to $\mathrm{int}(P)$ and, moreover,

$$f_{\tau_{i+1}}(x_i) - f_{\tau_{i+1}}(x) \ge \delta_{\tau_{i+1}}(x_i) - \ln\left(1 + \delta_{\tau_{i+1}}(x_i)\right) \ge \frac{1}{3} - \ln\frac{4}{3} > 0 \qquad (86)$$

as long as $\delta_{\tau_{i+1}}(x_i) \ge 1/3$. We continue to perform the iterations described above until $\delta_{\tau_{i+1}}(x_{i+1}) \le 1/3$. By (86) the value of the function $f_{\tau_{i+1}}$ is decreased by at least $\frac{1}{3} - \ln\frac{4}{3}$ after each such iteration. If $N$ is the number of such iterations, we obviously have

$$N\left(\frac{1}{3} - \ln\frac{4}{3}\right) \le f_{\tau_{i+1}}(x_i) - f_{\tau_{i+1}}(x(\tau_{i+1})). \qquad (87)$$

Let $\Delta(x, \tau) = f_\tau(x) - f_\tau(x(\tau))$. We wish to estimate $\Delta(x_i, \tau_{i+1})$. By the mean value theorem,

$$\Delta(x_i, \tau_{i+1}) = \Delta(x_i, \tau_i) + \frac{\partial\Delta}{\partial\tau}(x_i, \hat\tau)\,\theta\tau_i \qquad (88)$$

for some $\hat\tau \in (\tau_i, \tau_{i+1})$. We have

$$\frac{\partial\Delta}{\partial\tau}(x, \tau) = q_0(x) - q_0(x(\tau)) - \left\langle\nabla f_\tau(x(\tau)),\ \frac{dx(\tau)}{d\tau}\right\rangle.$$

But $\nabla f_\tau(x(\tau)) \in X^\perp$ and $dx(\tau)/d\tau \in X$. Hence

$$\frac{\partial\Delta}{\partial\tau}(x, \tau) = q_0(x) - q_0(x(\tau)). \qquad (89)$$

By (88),

$$\Delta(x_i, \tau_{i+1}) \le \Delta(x_i, \tau_i) + \theta\tau_i\left[|q_0(x_i) - q_0(x(\tau_i))| + \left(q_0(x(\tau_i)) - q_0(x(\tau_{i+1}))\right)\right]. \qquad (90)$$

In the last inequality we used the fact that $q_0(x(\tau))$ is a monotonically decreasing function of $\tau$. By (82),

$$|q_0(x_i) - q_0(x(\tau_i))| \le \frac{5}{2}\cdot\frac{\sqrt{m}}{\tau_i}. \qquad (91)$$

Further, by (79),

$$\Delta(x_i, \tau_i) \le \frac{16}{63} < 1. \qquad (92)$$

Finally, by (35),

$$q_0(x(\tau_i)) - q_0(x(\tau_{i+1})) \le \frac{m}{\tau_i} - \frac{m}{\tau_{i+1}} = \frac{m\theta}{(1 + \theta)\tau_i}. \qquad (93)$$

Substituting (91)-(93) into (90), we obtain

$$\Delta(x_i, \tau_{i+1}) \le \frac{16}{63} + \theta\left(\frac{5\sqrt{m}}{2} + \frac{m\theta}{1 + \theta}\right). \qquad (94)$$

Combining (94) and (87), we arrive at (85).

From Theorems 3.1 and 3.2 we immediately obtain:

Theorem 3.3 An upper bound for the total number of Newton iterations is given by

$$\frac{\ln\frac{4m}{\varepsilon\tau_0}}{\ln(1 + \theta)}\left(\frac{1}{3} - \ln\frac{4}{3}\right)^{-1}\left[\frac{16}{63} + \theta\left(\frac{5\sqrt{m}}{2} + \frac{m\theta}{1 + \theta}\right)\right].$$
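Theorems 3.1-3.3 describe the following driver loop. The sketch below (illustrative names; `oracle` as before) multiplies $\tau$ by $(1+\theta)$ and re-centers until $\delta \le 1/3$, stopping once $\tau \ge 4m/\varepsilon$, which by Theorem 3.1 guarantees $q_0(x) - q_0(x^*) \le \varepsilon$.

```python
def path_following(x, tau, theta, eps, m, oracle):
    """Long-step path following (Theorems 3.1-3.3).  The starting pair is
    assumed to satisfy delta_tau(x) <= 1/3, as in the theorems."""
    while tau < 4.0 * m / eps:
        tau *= 1.0 + theta                    # outer iteration
        while True:                           # inner iterations (Theorem 3.2)
            p, delta = oracle(x, tau)
            if delta <= 1.0 / 3.0:
                break
            x = x + p / (1.0 + delta)         # damped step of Proposition 3.3
    return x, tau
```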

20 Proposition.8 Given x int(p ); > 0; # > 0. If + = ( + #), then +(x) ( + #) (x) + # p m: (95) Proof: Given X, D f +(x)(; ) = + D q 0 (x)(; ) + Introduce a vector ~p X by the condition: [Dq i (x) ] D f q i (x) (x)(; ): (96) D f (x)(; ~p) =?Df +(x) ; X (97) We have Hence, by (97): Df +(x) = ( + #)Df (x) + # D f (x)(; ~p) = ( + #)D f (x)(; p (x))? # h5q i (x); i : q i (x) h5q i (x); i : (98) q i (x) Taking = ~p, we obtain from (98): h i D f (x)(~p; ~p) ( + #) D f (x)(~p; ~p) + p m h5q i (x); ~pi q i (x)! h D f (x)(~p; ~p) hd i f (x)(p (x); p (x)) i? (x)( + #) + p m : (99) Here we used the Cauchy-Schwarz inequality twice. It remains to prove that D f +(x)? p +(x); p +(x) D f (x)(~p; ~p): (00) Indeed, by (97):?Df +(x) = D f (x)(; ~p) = D f +(x)? ; p +(x) : In particular, for = p +(x):? D f +(x) p +(x); p +(x) = D f (x) (~p; p (x)) hd? i f (x) p +(x); p +(x) h D f (x)(~p; ~p) i hd f +(x)? p +(x); p +(x) i hd f (x)(~p; ~p) i where in the last inequality we used (96). The result follows by (99), (00). 0

21 Theorem.4 Let x int(p ); (x) and + = ( + #), where # = p m, for suciently small > 0. Then x + = x + p (x) is such that +(x + ). Proof: By Propositions.4,.8: +(x + ) +(x)? +(x) 9 4 +(x) 9 ( + #) (x) + # p m 4 + p (x) + = () (0) m = 9 4 Observe that ()! 9 4 (x) 4 for! 0. Hence +(x+ ) for suciently small. 4 Example Consider the linear-quadratic time-dependent control problem with quadratic inequality constraints. Namely, let q i (y; u) = Z T 0 h i y T (t)q i (t)y(t) + u(t) T R i (t)u(t) dt + b i ; i = ; ; : : :; m (0) where (u; y) L p [0; T ]Ln [0; T ]; y is absolutely continuous function with values in IR n and such that _y L n[0; T ]. We use the standard notation Ln [0; T ] for the Hilbert space of measurable square integrable functions on [0; T ] with values in IR n. Further, Q i (t) (respectively, R i (t)) is a symmetric positive denite n by n (respectively, p by p) matrix which depends continuously on t [0; T ]; A(t) (respectively, B(t)) is a n by n (respectively, n by p) matrix which depends continuously on t [0; T ]; b i ; i = ; ; : : :; m are real numbers and b 0 = 0; T is a xed positive number. Consider the problem q 0 (y; u)! min (0) q i (y; u) 0; i = ; ; : : :; m (y; u) x 0 + X where X = f(y; u) L n[0; T ] Lp[0; T ] : y is absolutely continuous on [0; T ]; _y Ln [0; T ]; y(0) = 0 and _y(t) = A(t)y(t) + B(t)u(t); t [0; T ]g. Theorem. gives a rather sharp estimate for the number of Newton steps for the path following algorithm. From the computational viewpoint it is crucial to see that the performance

Set $x = (y, u)$. We will frequently omit the time variable $t$ in what follows. We have

$$f_\tau(x) = \tau q_0(y, u) - \sum_{i=1}^m \ln(-q_i(y, u)),$$

$$\nabla f_\tau(y, u) = \begin{pmatrix} v_y \\ v_u \end{pmatrix} = \begin{pmatrix} \tau Q_0 y - \sum_{i=1}^m \dfrac{Q_i y}{q_i(y, u)} \\[6pt] \tau R_0 u - \sum_{i=1}^m \dfrac{R_i u}{q_i(y, u)} \end{pmatrix}, \qquad p_\tau(x) = \begin{pmatrix} \xi \\ \eta \end{pmatrix},$$

$$d_i = \frac{1}{q_i(y, u)^2}\int_0^T\left[y^T Q_i\xi + u^T R_i\eta\right]dt, \quad i = 1, 2, \ldots, m, \qquad (104)$$

$$X^\perp = \left\{\left(\dot p + A^T p,\ B^T p\right) : p \text{ is absolutely continuous on } [0, T],\ \dot p \in L_n^2[0, T],\ p(T) = 0\right\}.$$

The condition (26) defining the Newton direction will take the form

$$\dot p + A^T p - v_y = Q_\tau(x)\xi + \sum_{i=1}^m d_i Q_i y, \qquad (105)$$
$$B^T p - v_u = R_\tau(x)\eta + \sum_{i=1}^m d_i R_i u, \qquad (106)$$

where

$$Q_\tau(x) = \tau Q_0 - \sum_{i=1}^m \frac{Q_i}{q_i(x)}, \qquad R_\tau(x) = \tau R_0 - \sum_{i=1}^m \frac{R_i}{q_i(x)}.$$

We further have

$$\dot\xi = A\xi + B\eta, \quad \xi(0) = 0, \qquad (107)$$

and $p(T) = 0$. We have to solve (105)-(107) with respect to $\xi, \eta$. The problem (105)-(107) is quite similar to the one arising in connection with the standard linear-quadratic control problem (see, for example, [1], [12]). The only difference is that the constants $d_i$ are unknown. We proceed, nevertheless, as in the standard linear-quadratic case. We look for the solution of (105)-(107) in the form

$$p(t) = K(t)\xi(t) + s(t). \qquad (108)$$

By (106),

$$\eta(t) = R_\tau(x)^{-1}\left(B^T p - v_u - \sum_{i=1}^m d_i R_i u\right). \qquad (109)$$

Substituting (108), (109) into (105), (107), we obtain

$$\dot K + KA + A^T K + K L_\tau(x)K - Q_\tau(x) = 0, \quad K(T) = 0, \qquad (110)$$

where $L_\tau(x) = B(t)R_\tau(x)^{-1}B^T(t)$, together with

$$\dot p = -A^T p + Q_\tau(x)\xi + w_1(t), \quad p(T) = 0, \qquad (111)$$
$$\dot\xi = A\xi + L_\tau(x)p + w_2(t), \quad \xi(0) = 0, \qquad (112)$$

where

$$w_1(t) = v_y + \sum_{i=1}^m d_i Q_i y, \qquad (113)$$
$$w_2(t) = -B R_\tau(x)^{-1}\left(v_u + \sum_{i=1}^m d_i R_i u\right), \qquad (114)$$
$$\eta = R_\tau(x)^{-1}\left(B^T p - v_u - \sum_{i=1}^m d_i R_i u\right), \qquad (115)$$

and

$$\dot s = -\left(A^T + K L_\tau\right)s + w_1(t) - K w_2(t), \quad s(T) = 0. \qquad (116)$$

From (110) we find $K(t)$, $0 \le t \le T$. Let $\Phi(t, \sigma)$ be the fundamental solution of

$$\dot\zeta = -\left(A^T + K L_\tau\right)\zeta. \qquad (117)$$

Then $\Psi(t, \sigma) = \Phi(\sigma, t)^T$ is the fundamental solution of $\dot\zeta = (A + L_\tau K)\zeta$. By (116),

$$s(t) = -\int_t^T \Phi(t, \sigma)\left[w_1(\sigma) - K(\sigma)w_2(\sigma)\right]d\sigma = v(t) + \sum_{i=1}^m d_i v_i(t) \qquad (118)$$

for known functions $v, v_i$ (since $w_1, w_2$ are affine in the unknown constants $d_i$; see (113), (114)). By (112),

$$\xi(t) = \int_0^t \Psi(t, \sigma)\left[L_\tau(\sigma)s(\sigma) + w_2(\sigma)\right]d\sigma = \bar\xi(t) + \sum_{i=1}^m d_i\,\xi_i(t) \qquad (119)$$

for known functions $\bar\xi, \xi_i$ (see (114)).
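Steps (110)-(117) can be organized as standard LQ sweeps. Below is a crude Euler-discretized sketch of the backward Riccati integration (110) only; the callables `A`, `B`, `Qt`, `Rt`, standing for $A(t)$, $B(t)$, $Q_\tau(x)(t)$, $R_\tau(x)(t)$, are assumed inputs, and production code would use a proper ODE integrator.

```python
import numpy as np

def riccati_sweep(A, B, Qt, Rt, T, N):
    """Integrate the Riccati equation (110) backward from K(T) = 0 on a
    uniform grid with N steps (explicit Euler; illustrative only)."""
    dt = T / N
    n = A(0.0).shape[0]
    K = np.zeros((N + 1, n, n))
    for j in range(N, 0, -1):
        t = j * dt
        L = B(t) @ np.linalg.solve(Rt(t), B(t).T)       # L_tau(x)(t)
        Kdot = -(K[j] @ A(t) + A(t).T @ K[j] + K[j] @ L @ K[j] - Qt(t))
        K[j - 1] = K[j] - dt * Kdot                     # step backward in t
    return K
```

Note that $R_\tau(x)(t)$ is positive definite here (each $R_i$ is positive definite and $q_i(x) < 0$), so the solve inside the loop is well posed.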

By (108), (118), (119),

$$p(t) = \bar p(t) + \sum_{i=1}^m d_i\,p_i(t) \qquad (120)$$

for known functions $\bar p, p_i$, and by (115),

$$\eta(t) = \bar\eta(t) + \sum_{i=1}^m d_i\,\eta_i(t) \qquad (121)$$

for known functions $\bar\eta, \eta_i$. Substituting (119), (121) into (104), we obtain

$$d_i = \sum_{j=1}^m c_{ij}d_j + \varepsilon_i, \quad i = 1, 2, \ldots, m, \qquad (122)$$

where

$$c_{ij} = \frac{1}{q_i(x)^2}\int_0^T\left[y^T Q_i\xi_j + u^T R_i\eta_j\right]dt, \qquad \varepsilon_i = \frac{1}{q_i(x)^2}\int_0^T\left[y^T Q_i\bar\xi + u^T R_i\bar\eta\right]dt.$$

Finally, we find the $d_i$ by solving the system of linear algebraic equations (122). We see that the performance of the Newton step involves three essential steps:

(a) the numerical integration of the Riccati equation (110);

(b) finding the fundamental matrix of the linear time-dependent system (117);

(c) solving the $m \times m$ system of linear algebraic equations (122).

The first two steps are the same as for solving the standard time-dependent LQ problem. We see that the performance of the Newton step in our situation is essentially equivalent to solving the standard LQ problem plus a system of linear algebraic equations. The standard numerical treatment of the problem (103) involves introducing Lagrange multipliers and considering the dual problem. From the computational viewpoint, finding the value of the cost function of the dual problem at a point is more or less equivalent to performing the Newton step for the primal problem.
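Step (c) is then a dense $m \times m$ solve. A sketch: `inner(i, xi, eta)` is an assumed helper evaluating the integral in (104), i.e. $\int_0^T (y^T Q_i\,\xi + u^T R_i\,\eta)\,dt / q_i(x)^2$, by quadrature on the grid, and the trajectories $\bar\xi, \bar\eta$ and $\xi_j, \eta_j$ come from the sweeps of (118)-(121).

```python
import numpy as np

def solve_d(inner, xi_bar, eta_bar, xi_list, eta_list, m):
    """Assemble and solve the m x m linear system (122): d = C d + eps."""
    C = np.array([[inner(i, xi_list[j], eta_list[j]) for j in range(m)]
                  for i in range(m)])
    eps = np.array([inner(i, xi_bar, eta_bar) for i in range(m)])
    return np.linalg.solve(np.eye(m) - C, eps)
```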

5 Concluding Remarks

In the present paper we have considered both long-step and short-step versions of a path-following algorithm for the infinite-dimensional quadratic programming problem with quadratic constraints. We found an estimate for the number of Newton steps which coincides with the best known [3] in the finite-dimensional case. We then considered an example of the linear-quadratic time-dependent control problem with quadratic constraints. We have shown that in this situation the performance of the Newton step is essentially equivalent to solving the standard LQ problem without inequality constraints. Thus the interior-point methodology has been extended to an infinite-dimensional situation useful for control applications.

Acknowledgment. This research was supported in part by the Cooperative Research Centre for Robust and Adaptive Systems while the first author visited the Australian National University, and by an NSF grant (DMS).

References

[1] B.D.O. Anderson and J.B. Moore, "Optimal Control: Linear Quadratic Methods", Prentice-Hall, 1989.

[2] A.V. Balakrishnan, "Applied Functional Analysis", Springer, New York, 1976.

[3] D. den Hertog, "Interior Point Approach to Linear, Quadratic and Convex Programming", Kluwer Academic Publishers, 1994, 208 pp.

[4] L. Faybusovich and J.B. Moore, "Long-step path-following algorithm for the convex quadratic programming in a Hilbert space" (preprint).

[5] D. Goldfarb and S. Liu, "An O(n^3 L) primal interior point algorithm for convex quadratic programming", Mathematical Programming, vol. 49, pp. 325-340, 1991.

[6] F. Jarre, "Interior-point methods for convex programming", Applied Mathematics and Optimization, vol. 26, pp. 287-311, 1992.

[7] F. Jarre, "On the convergence of the method of analytic centers when applied to convex quadratic programs", Technical Report, Dept. of Applied Mathematics, University of Würzburg, 1987.

[8] S. Mehrotra and J. Sun, "A method of analytic centers for quadratically constrained convex quadratic programs", SIAM Journal on Numerical Analysis, vol. 28, pp. 529-544, 1991.

[9] Y. Nesterov and A. Nemirovsky, "Interior-Point Polynomial Methods in Convex Programming", SIAM, Philadelphia, 1994.

[10] J.M. Ortega and W.C. Rheinboldt, "Iterative Solution of Nonlinear Equations in Several Variables", Academic Press, New York, 1970.

[11] J. Renegar, "Linear Programming, Complexity Theory and Elementary Functional Analysis", Technical Report, Cornell University, 1993; to appear in revised form in Mathematical Programming.

[12] A. Sage and C. White, III, "Optimum Systems Control", Prentice-Hall, Englewood Cliffs, 1977.

[13] V. Yakubovich, "Nonconvex optimization problem: the infinite-horizon linear-quadratic control problem with quadratic constraints", Systems and Control Letters, vol. 19, pp. 13-22, 1992.


Primal-Dual Symmetric Interior-Point Methods from SDP to Hyperbolic Cone Programming and Beyond

Primal-Dual Symmetric Interior-Point Methods from SDP to Hyperbolic Cone Programming and Beyond Primal-Dual Symmetric Interior-Point Methods from SDP to Hyperbolic Cone Programming and Beyond Tor Myklebust Levent Tunçel September 26, 2014 Convex Optimization in Conic Form (P) inf c, x A(x) = b, x

More information

2 Garrett: `A Good Spectral Theorem' 1. von Neumann algebras, density theorem The commutant of a subring S of a ring R is S 0 = fr 2 R : rs = sr; 8s 2

2 Garrett: `A Good Spectral Theorem' 1. von Neumann algebras, density theorem The commutant of a subring S of a ring R is S 0 = fr 2 R : rs = sr; 8s 2 1 A Good Spectral Theorem c1996, Paul Garrett, garrett@math.umn.edu version February 12, 1996 1 Measurable Hilbert bundles Measurable Banach bundles Direct integrals of Hilbert spaces Trivializing Hilbert

More information

Introduction to Nonlinear Stochastic Programming

Introduction to Nonlinear Stochastic Programming School of Mathematics T H E U N I V E R S I T Y O H F R G E D I N B U Introduction to Nonlinear Stochastic Programming Jacek Gondzio Email: J.Gondzio@ed.ac.uk URL: http://www.maths.ed.ac.uk/~gondzio SPS

More information

Midterm 1. Every element of the set of functions is continuous

Midterm 1. Every element of the set of functions is continuous Econ 200 Mathematics for Economists Midterm Question.- Consider the set of functions F C(0, ) dened by { } F = f C(0, ) f(x) = ax b, a A R and b B R That is, F is a subset of the set of continuous functions

More information

Lecture: Cone programming. Approximating the Lorentz cone.

Lecture: Cone programming. Approximating the Lorentz cone. Strong relaxations for discrete optimization problems 10/05/16 Lecture: Cone programming. Approximating the Lorentz cone. Lecturer: Yuri Faenza Scribes: Igor Malinović 1 Introduction Cone programming is

More information

B553 Lecture 3: Multivariate Calculus and Linear Algebra Review

B553 Lecture 3: Multivariate Calculus and Linear Algebra Review B553 Lecture 3: Multivariate Calculus and Linear Algebra Review Kris Hauser December 30, 2011 We now move from the univariate setting to the multivariate setting, where we will spend the rest of the class.

More information

Contents Real Vector Spaces Linear Equations and Linear Inequalities Polyhedra Linear Programs and the Simplex Method Lagrangian Duality

Contents Real Vector Spaces Linear Equations and Linear Inequalities Polyhedra Linear Programs and the Simplex Method Lagrangian Duality Contents Introduction v Chapter 1. Real Vector Spaces 1 1.1. Linear and Affine Spaces 1 1.2. Maps and Matrices 4 1.3. Inner Products and Norms 7 1.4. Continuous and Differentiable Functions 11 Chapter

More information

LECTURE 15: COMPLETENESS AND CONVEXITY

LECTURE 15: COMPLETENESS AND CONVEXITY LECTURE 15: COMPLETENESS AND CONVEXITY 1. The Hopf-Rinow Theorem Recall that a Riemannian manifold (M, g) is called geodesically complete if the maximal defining interval of any geodesic is R. On the other

More information

YURI LEVIN, MIKHAIL NEDIAK, AND ADI BEN-ISRAEL

YURI LEVIN, MIKHAIL NEDIAK, AND ADI BEN-ISRAEL Journal of Comput. & Applied Mathematics 139(2001), 197 213 DIRECT APPROACH TO CALCULUS OF VARIATIONS VIA NEWTON-RAPHSON METHOD YURI LEVIN, MIKHAIL NEDIAK, AND ADI BEN-ISRAEL Abstract. Consider m functions

More information

(1.) For any subset P S we denote by L(P ) the abelian group of integral relations between elements of P, i.e. L(P ) := ker Z P! span Z P S S : For ea

(1.) For any subset P S we denote by L(P ) the abelian group of integral relations between elements of P, i.e. L(P ) := ker Z P! span Z P S S : For ea Torsion of dierentials on toric varieties Klaus Altmann Institut fur reine Mathematik, Humboldt-Universitat zu Berlin Ziegelstr. 13a, D-10099 Berlin, Germany. E-mail: altmann@mathematik.hu-berlin.de Abstract

More information

Linear Programming Methods

Linear Programming Methods Chapter 11 Linear Programming Methods 1 In this chapter we consider the linear programming approach to dynamic programming. First, Bellman s equation can be reformulated as a linear program whose solution

More information

4TE3/6TE3. Algorithms for. Continuous Optimization

4TE3/6TE3. Algorithms for. Continuous Optimization 4TE3/6TE3 Algorithms for Continuous Optimization (Algorithms for Constrained Nonlinear Optimization Problems) Tamás TERLAKY Computing and Software McMaster University Hamilton, November 2005 terlaky@mcmaster.ca

More information

Input: System of inequalities or equalities over the reals R. Output: Value for variables that minimizes cost function

Input: System of inequalities or equalities over the reals R. Output: Value for variables that minimizes cost function Linear programming Input: System of inequalities or equalities over the reals R A linear cost function Output: Value for variables that minimizes cost function Example: Minimize 6x+4y Subject to 3x + 2y

More information

An Optimal Affine Invariant Smooth Minimization Algorithm.

An Optimal Affine Invariant Smooth Minimization Algorithm. An Optimal Affine Invariant Smooth Minimization Algorithm. Alexandre d Aspremont, CNRS & École Polytechnique. Joint work with Martin Jaggi. Support from ERC SIPA. A. d Aspremont IWSL, Moscow, June 2013,

More information

12. Interior-point methods

12. Interior-point methods 12. Interior-point methods Convex Optimization Boyd & Vandenberghe inequality constrained minimization logarithmic barrier function and central path barrier method feasibility and phase I methods complexity

More information

Structural and Multidisciplinary Optimization. P. Duysinx and P. Tossings

Structural and Multidisciplinary Optimization. P. Duysinx and P. Tossings Structural and Multidisciplinary Optimization P. Duysinx and P. Tossings 2018-2019 CONTACTS Pierre Duysinx Institut de Mécanique et du Génie Civil (B52/3) Phone number: 04/366.91.94 Email: P.Duysinx@uliege.be

More information

The iterative convex minorant algorithm for nonparametric estimation

The iterative convex minorant algorithm for nonparametric estimation The iterative convex minorant algorithm for nonparametric estimation Report 95-05 Geurt Jongbloed Technische Universiteit Delft Delft University of Technology Faculteit der Technische Wiskunde en Informatica

More information

Observer design for a general class of triangular systems

Observer design for a general class of triangular systems 1st International Symposium on Mathematical Theory of Networks and Systems July 7-11, 014. Observer design for a general class of triangular systems Dimitris Boskos 1 John Tsinias Abstract The paper deals

More information

Lecture 5. Theorems of Alternatives and Self-Dual Embedding

Lecture 5. Theorems of Alternatives and Self-Dual Embedding IE 8534 1 Lecture 5. Theorems of Alternatives and Self-Dual Embedding IE 8534 2 A system of linear equations may not have a solution. It is well known that either Ax = c has a solution, or A T y = 0, c

More information

Polynomial complementarity problems

Polynomial complementarity problems Polynomial complementarity problems M. Seetharama Gowda Department of Mathematics and Statistics University of Maryland, Baltimore County Baltimore, Maryland 21250, USA gowda@umbc.edu December 2, 2016

More information

Optimization Problems with Constraints - introduction to theory, numerical Methods and applications

Optimization Problems with Constraints - introduction to theory, numerical Methods and applications Optimization Problems with Constraints - introduction to theory, numerical Methods and applications Dr. Abebe Geletu Ilmenau University of Technology Department of Simulation and Optimal Processes (SOP)

More information

NORMS ON SPACE OF MATRICES

NORMS ON SPACE OF MATRICES NORMS ON SPACE OF MATRICES. Operator Norms on Space of linear maps Let A be an n n real matrix and x 0 be a vector in R n. We would like to use the Picard iteration method to solve for the following system

More information

GENERALIZED COVARIATION FOR BANACH SPACE VALUED PROCESSES, ITÔ FORMULA AND APPLICATIONS

GENERALIZED COVARIATION FOR BANACH SPACE VALUED PROCESSES, ITÔ FORMULA AND APPLICATIONS Di Girolami, C. and Russo, F. Osaka J. Math. 51 (214), 729 783 GENERALIZED COVARIATION FOR BANACH SPACE VALUED PROCESSES, ITÔ FORMULA AND APPLICATIONS CRISTINA DI GIROLAMI and FRANCESCO RUSSO (Received

More information

Local strong convexity and local Lipschitz continuity of the gradient of convex functions

Local strong convexity and local Lipschitz continuity of the gradient of convex functions Local strong convexity and local Lipschitz continuity of the gradient of convex functions R. Goebel and R.T. Rockafellar May 23, 2007 Abstract. Given a pair of convex conjugate functions f and f, we investigate

More information