Interior Point Methods for Convex Quadratic and Convex Nonlinear Programming
1 School of Mathematics, The University of Edinburgh
Interior Point Methods for Convex Quadratic and Convex Nonlinear Programming
Jacek Gondzio
Email: J.Gondzio@ed.ac.uk
URL: http://www.maths.ed.ac.uk/~gondzio
Paris, January
2 Outline
Part 1: IPM for QP
- quadratic forms
- duality in QP
- first order optimality conditions
- primal-dual framework
Part 2: IPM for NLP
- NLP notation
- Lagrangian
- first order optimality conditions
- primal-dual framework
- Algebraic Modelling Languages
- Self-concordant barrier
3 IPM for Convex QP
4 Convex Quadratic Programs
The quadratic function f(x) = x^T Q x is convex if and only if the matrix Q is positive semidefinite. In such a case the quadratic programming problem

    min  c^T x + (1/2) x^T Q x
    s.t. Ax = b,  x ≥ 0,

is well defined. If there exists a feasible solution to it, then there exists an optimal solution.
5 QP Background
Def. A matrix Q ∈ R^{n×n} is positive semidefinite if x^T Q x ≥ 0 for any x ≠ 0. We write Q ⪰ 0.
Def. A matrix Q ∈ R^{n×n} is positive definite if x^T Q x > 0 for any x ≠ 0. We write Q ≻ 0.
Example: Consider quadratic functions f(x) = x^T Q x with the following 2×2 matrices (entries partially illegible in the transcription):

    Q_1 = [· ·; 0 2],   Q_2 = [· ·; 0 1],   Q_3 = [· ·; 4 3].

Q_1 is positive definite (hence f_1 is convex). Q_2 and Q_3 are indefinite (f_2, f_3 are not convex).
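Definiteness can be checked numerically via the eigenvalues of the symmetric matrix. A minimal numpy sketch, with made-up example matrices rather than the slide's Q_1, Q_2, Q_3:

```python
import numpy as np

def definiteness(Q, tol=1e-10):
    """Classify a symmetric matrix by the signs of its eigenvalues."""
    eig = np.linalg.eigvalsh(Q)              # eigenvalues of a symmetric matrix
    if np.all(eig > tol):
        return "positive definite"
    if np.all(eig >= -tol):
        return "positive semidefinite"
    return "indefinite"

# Hypothetical 2x2 examples (not the slide's matrices):
print(definiteness(np.array([[2.0, 0.0], [0.0, 2.0]])))   # positive definite
print(definiteness(np.array([[0.0, 1.0], [1.0, 0.0]])))   # indefinite
```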
6 QP with IPMs
Consider the convex quadratic programming problem. The primal:

    min  c^T x + (1/2) x^T Q x
    s.t. Ax = b,  x ≥ 0,

and the dual:

    max  b^T y − (1/2) x^T Q x
    s.t. A^T y + s − Qx = c,  x, s ≥ 0.

Apply the usual procedure:
- replace inequalities with log barriers;
- form the Lagrangian;
- write the first order optimality conditions;
- apply the Newton method to them.
7 QP with IPMs: Log Barriers
Replace the primal QP

    min  c^T x + (1/2) x^T Q x
    s.t. Ax = b,  x ≥ 0,

with the primal barrier QP

    min  c^T x + (1/2) x^T Q x − µ Σ_{j=1}^n ln x_j
    s.t. Ax = b.
8 QP with IPMs: Log Barriers
Replace the dual QP

    max  b^T y − (1/2) x^T Q x
    s.t. A^T y + s − Qx = c,  y free, s ≥ 0,

with the dual barrier QP

    max  b^T y − (1/2) x^T Q x + µ Σ_{j=1}^n ln s_j
    s.t. A^T y + s − Qx = c.
9 First Order Optimality Conditions
Consider the primal barrier quadratic program

    min  c^T x + (1/2) x^T Q x − µ Σ_{j=1}^n ln x_j
    s.t. Ax = b,

where µ ≥ 0 is a barrier parameter. Write out the Lagrangian

    L(x,y,µ) = c^T x + (1/2) x^T Q x − y^T (Ax − b) − µ Σ_{j=1}^n ln x_j.
10 First Order Optimality Conditions (cont'd)
The conditions for a stationary point of the Lagrangian

    L(x,y,µ) = c^T x + (1/2) x^T Q x − y^T (Ax − b) − µ Σ_{j=1}^n ln x_j

are

    ∇_x L(x,y,µ) = c − A^T y − µ X^{-1} e + Qx = 0,
    ∇_y L(x,y,µ) = Ax − b = 0,

where X^{-1} = diag{x_1^{-1}, x_2^{-1}, ..., x_n^{-1}}.
Let us denote s = µ X^{-1} e, i.e. XSe = µe. The First Order Optimality Conditions are:

    Ax = b,
    A^T y + s − Qx = c,
    XSe = µe.
11 Apply Newton Method to the FOC
The first order optimality conditions for the barrier problem form a large system of nonlinear equations

    F(x,y,s) = 0,

where F : R^{2n+m} → R^{2n+m} is the mapping defined as follows:

    F(x,y,s) = [ Ax − b ;  A^T y + s − Qx − c ;  XSe − µe ].

Actually, the first two terms of it are linear; only the last one, corresponding to the complementarity condition, is nonlinear.
Note that

    ∇F(x,y,s) = [ A  0  0 ;  −Q  A^T  I ;  S  0  X ].
12 Newton Method for the FOC (cont'd)
Thus, for a given point (x,y,s) we find the Newton direction (Δx, Δy, Δs) by solving the system of linear equations:

    [ A  0  0 ;  −Q  A^T  I ;  S  0  X ] [ Δx ; Δy ; Δs ] = [ b − Ax ;  c − A^T y − s + Qx ;  µe − XSe ].
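A minimal dense-linear-algebra sketch of one such Newton step in numpy (all names illustrative; a production IPM would factorize a reduced system rather than form this block matrix):

```python
import numpy as np

def qp_newton_direction(A, Q, b, c, x, y, s, mu):
    """Solve the Newton system for the QP barrier FOC (dense, illustrative)."""
    m, n = A.shape
    X, S, I = np.diag(x), np.diag(s), np.eye(n)
    # Block matrix [[A, 0, 0], [-Q, A^T, I], [S, 0, X]]
    K = np.block([
        [A,  np.zeros((m, m)), np.zeros((m, n))],
        [-Q, A.T,              I],
        [S,  np.zeros((n, m)), X],
    ])
    rhs = np.concatenate([b - A @ x,
                          c - A.T @ y - s + Q @ x,
                          mu * np.ones(n) - x * s])
    d = np.linalg.solve(K, rhs)
    return d[:n], d[n:n+m], d[n+m:]        # (dx, dy, ds)
```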
13 Interior-Point QP Algorithm
Initialize:
    k = 0, (x^0, y^0, s^0) ∈ F^0, µ_0 = (1/n) (x^0)^T s^0, α_0 ∈ (0,1) (a steplength factor close to 1).
Repeat until optimality:
    k = k + 1
    µ_k = σ µ_{k−1}, where σ ∈ (0,1)
    (Δx, Δy, Δs) = Newton direction towards the µ-center
    Ratio test:
        α_P := max {α > 0 : x + α Δx ≥ 0},
        α_D := max {α > 0 : s + α Δs ≥ 0}.
    Make step:
        x^{k+1} = x^k + α_0 α_P Δx,
        y^{k+1} = y^k + α_0 α_D Δy,
        s^{k+1} = s^k + α_0 α_D Δs.
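Putting the pieces together, a bare-bones version of this loop under the same illustrative assumptions (dense data, fixed centering parameter σ, and a simplistic stopping test on µ only):

```python
def qp_ipm(A, Q, b, c, x, y, s, sigma=0.1, alpha0=0.99, tol=1e-8, max_iter=50):
    """Bare-bones primal-dual IPM for min c'x + 0.5 x'Qx s.t. Ax = b, x >= 0."""
    n = len(x)
    mu = x @ s / n
    for _ in range(max_iter):
        if mu < tol:
            break
        mu *= sigma                        # target a smaller mu-center
        dx, dy, ds = qp_newton_direction(A, Q, b, c, x, y, s, mu)
        # Ratio test: largest step keeping x (resp. s) nonnegative.
        aP = min((-x[i] / dx[i] for i in range(n) if dx[i] < 0), default=1.0)
        aD = min((-s[i] / ds[i] for i in range(n) if ds[i] < 0), default=1.0)
        aP, aD = min(1.0, alpha0 * aP), min(1.0, alpha0 * aD)
        x, y, s = x + aP * dx, y + aD * dy, s + aD * ds
        mu = x @ s / n
    return x, y, s
```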
14 From LP to QP
QP problem:

    min  c^T x + (1/2) x^T Q x
    s.t. Ax = b,  x ≥ 0.

First order conditions (for the barrier problem):

    Ax = b,
    A^T y + s − Qx = c,
    XSe = µe.
15 IPMs for Convex NLP
16 Convex Nonlinear Optimization
Consider the nonlinear optimization problem

    min  f(x)
    s.t. g(x) ≤ 0,

where x ∈ R^n, and f : R^n → R and g : R^n → R^m are convex, twice differentiable.
Assumptions:
- f and g are convex: if there exists a local minimum, then it is a global one.
- f and g are twice differentiable: we can use second order Taylor approximations.
- Some additional (technical) conditions: we need them to prove that the point which satisfies the first order optimality conditions is the optimum. We won't use them in this course.
17 Taylor Expansion of f : R → R
Let f : R → R. If all derivatives of f are continuously differentiable at x_0, then

    f(x) = Σ_{k=0}^∞ (f^{(k)}(x_0) / k!) (x − x_0)^k,

where f^{(k)}(x_0) is the k-th derivative of f at x_0.
The first order approximation of the function:

    f(x) = f(x_0) + f'(x_0)(x − x_0) + r_2(x − x_0),

where the remainder satisfies:

    lim_{x→x_0} r_2(x − x_0) / |x − x_0| = 0.
18 Taylor Expansion (cont'd)
The second order approximation:

    f(x) = f(x_0) + f'(x_0)(x − x_0) + (1/2) f''(x_0)(x − x_0)^2 + r_3(x − x_0),

where the remainder satisfies:

    lim_{x→x_0} r_3(x − x_0) / (x − x_0)^2 = 0.
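A quick numerical illustration of this remainder behaviour (example of my own, f(x) = e^x around x_0 = 0, where the second order model is 1 + h + h²/2):

```python
import numpy as np

f = np.exp                      # f(x) = e^x: every derivative is e^x
x0 = 0.0
for h in (1e-1, 1e-2, 1e-3):
    quad = f(x0) + f(x0) * h + 0.5 * f(x0) * h**2    # second order model
    r3 = f(x0 + h) - quad                            # remainder r3(h)
    print(h, r3 / h**2)         # ratio ~ h/6, so it tends to 0 as h -> 0
```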
19 Derivatives of f : R^n → R
The vector

    (∇f(x))^T = ( ∂f/∂x_1 (x), ∂f/∂x_2 (x), ..., ∂f/∂x_n (x) )

is called the gradient of f at x. The matrix

    ∇²f(x) = [ ∂²f/∂x_1² (x)     ∂²f/∂x_1∂x_2 (x)  ...  ∂²f/∂x_1∂x_n (x) ;
               ∂²f/∂x_2∂x_1 (x)  ∂²f/∂x_2² (x)     ...  ∂²f/∂x_2∂x_n (x) ;
               ...
               ∂²f/∂x_n∂x_1 (x)  ∂²f/∂x_n∂x_2 (x)  ...  ∂²f/∂x_n² (x) ]

is called the Hessian of f at x.
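As a concrete check (example of my own), for a quadratic f(x) = c^T x + (1/2) x^T Q x with symmetric Q, the gradient is ∇f(x) = c + Qx and the Hessian is ∇²f(x) = Q:

```python
import numpy as np

Q = np.array([[2.0, 0.5], [0.5, 4.0]])    # made-up symmetric matrix
c = np.array([1.0, -1.0])
f = lambda x: c @ x + 0.5 * x @ Q @ x

x = np.array([0.3, -0.7])
grad = c + Q @ x                           # analytic gradient at x

# Central-difference check of the first gradient component:
h, e1 = 1e-6, np.array([1.0, 0.0])
print(grad[0], (f(x + h * e1) - f(x - h * e1)) / (2 * h))   # the two agree
```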
20 Taylor Expansion of f : R^n → R
Let f : R^n → R. If all derivatives of f are continuously differentiable at x_0, then

    f(x) = Σ_{k=0}^∞ (f^{(k)}(x_0) / k!) (x − x_0)^k,

where f^{(k)}(x_0) is the k-th derivative of f at x_0.
The first order approximation of the function:

    f(x) = f(x_0) + ∇f(x_0)^T (x − x_0) + r_2(x − x_0),

where the remainder satisfies:

    lim_{x→x_0} r_2(x − x_0) / ‖x − x_0‖ = 0.
21 Taylor Expansion (cont'd)
The second order approximation of the function:

    f(x) = f(x_0) + ∇f(x_0)^T (x − x_0) + (1/2)(x − x_0)^T ∇²f(x_0)(x − x_0) + r_3(x − x_0),

where the remainder satisfies:

    lim_{x→x_0} r_3(x − x_0) / ‖x − x_0‖² = 0.
22 Convexity: Reminder
Property 1. For any collection {C_i | i ∈ I} of convex sets, the intersection ∩_{i∈I} C_i is convex.
Property 4. If C is a convex set and f : C → R is a convex function, the level sets {x ∈ C | f(x) ≤ α} and {x ∈ C | f(x) < α} are convex for all scalars α.
Lemma 1: If g : R^n → R^m is a convex function, then the set {x ∈ R^n | g(x) ≤ 0} is convex.
Proof: Since every function g_i : R^n → R, i = 1,2,...,m is convex, from Property 4 we conclude that every set X_i = {x ∈ R^n | g_i(x) ≤ 0} is convex. Next, from Property 1, we deduce that the intersection

    X = ∩_{i=1}^m X_i = {x ∈ R^n | g(x) ≤ 0}

is convex, which completes the proof.
23 Differentiable Convex Functions
Property 8. Let C ⊆ R^n be a convex set and f : C → R be twice continuously differentiable over C.
(a) If ∇²f(x) is positive semidefinite for all x ∈ C, then f is convex.
(b) If ∇²f(x) is positive definite for all x ∈ C, then f is strictly convex.
(c) If f is convex, then ∇²f(x) is positive semidefinite for all x ∈ C.
Let the second order approximation of the function be given:

    f(x) ≈ f(x_0) + c^T (x − x_0) + (1/2)(x − x_0)^T Q (x − x_0),

where c = ∇f(x_0) and Q = ∇²f(x_0). From Property 8, it follows that when f is convex and twice differentiable, then Q exists and is a positive semidefinite matrix.
Conclusion: If f is convex and twice differentiable, then optimization of f(x) can (locally) be replaced with the minimization of its quadratic model.
24 Nonlinear Optimization with IPMs
Nonlinear optimization via QPs: Sequential Quadratic Programming (SQP).
Repeat until optimality:
- approximate the NLP (locally) with a QP;
- solve (approximately) the QP.
Nonlinear optimization with IPMs works similarly to the SQP scheme. However, the (local) QP approximations are not solved to optimality. Instead, only one step in the Newton direction corresponding to a given QP approximation is made, and then the new QP approximation is computed.
25 Nonlinear Optimization with IPMs
Derive an IPM for NLP:
- replace inequalities with log barriers;
- form the Lagrangian;
- write the first order optimality conditions;
- apply the Newton method to them.
26 NLP Notation
Consider the nonlinear optimization problem

    min  f(x)
    s.t. g(x) ≤ 0,

where x ∈ R^n, and f : R^n → R and g : R^n → R^m are convex, twice differentiable.
The vector-valued function g : R^n → R^m has a derivative

    A(x) = ∇g(x) = [ ∂g_i/∂x_j ]_{i=1..m, j=1..n} ∈ R^{m×n},

which is called the Jacobian of g.
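For intuition, a forward-difference approximation of this Jacobian; a rough numerical stand-in for the automatic differentiation discussed later (the example g is my own):

```python
import numpy as np

def jacobian_fd(g, x, h=1e-7):
    """Forward-difference approximation of the m x n Jacobian of g at x."""
    gx = np.asarray(g(x))
    J = np.zeros((gx.size, x.size))
    for j in range(x.size):
        e = np.zeros(x.size); e[j] = h
        J[:, j] = (np.asarray(g(x + e)) - gx) / h
    return J

# Example: g(x) = (x1^2 + x2 - 1, x1 - x2^2)
g = lambda x: np.array([x[0]**2 + x[1] - 1.0, x[0] - x[1]**2])
print(jacobian_fd(g, np.array([1.0, 2.0])))    # approx [[2, 1], [1, -4]]
```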
27 NLP Notation (cont'd)
The Lagrangian associated with the NLP is:

    L(x,y) = f(x) + y^T g(x),

where y ∈ R^m, y ≥ 0 are Lagrange multipliers (dual variables).
The first derivatives of the Lagrangian:

    ∇_x L(x,y) = ∇f(x) + ∇g(x)^T y,
    ∇_y L(x,y) = g(x).

The Hessian of the Lagrangian, Q(x,y) ∈ R^{n×n}:

    Q(x,y) = ∇²_{xx} L(x,y) = ∇²f(x) + Σ_{i=1}^m y_i ∇²g_i(x).
28 Convexity in NLP
Lemma 2: If f : R^n → R and g : R^n → R^m are convex, twice differentiable, then the Hessian of the Lagrangian

    Q(x,y) = ∇²f(x) + Σ_{i=1}^m y_i ∇²g_i(x)

is positive semidefinite for any x and any y ≥ 0. If f is strictly convex, then Q(x,y) is positive definite for any x and any y ≥ 0.
Proof: Using Property 8, the convexity of f implies that ∇²f(x) is positive semidefinite for any x. Similarly, the convexity of g implies that for all i = 1,2,...,m, ∇²g_i(x) is positive semidefinite for any x. Since y_i ≥ 0 for all i = 1,2,...,m and Q(x,y) is the sum of positive semidefinite matrices, we conclude that Q(x,y) is positive semidefinite. If f is strictly convex, then ∇²f(x) is positive definite and so is Q(x,y).
29 IPM for NLP
Add slack variables to the nonlinear inequalities:

    min  f(x)
    s.t. g(x) + z = 0,
         z ≥ 0,

where z ∈ R^m. Replace the inequality z ≥ 0 with the logarithmic barrier:

    min  f(x) − µ Σ_{i=1}^m ln z_i
    s.t. g(x) + z = 0.

Write out the Lagrangian

    L(x,y,z,µ) = f(x) + y^T (g(x) + z) − µ Σ_{i=1}^m ln z_i.
30 IPM for NLP
For the Lagrangian

    L(x,y,z,µ) = f(x) + y^T (g(x) + z) − µ Σ_{i=1}^m ln z_i,

write the conditions for a stationary point:

    ∇_x L(x,y,z,µ) = ∇f(x) + ∇g(x)^T y = 0,
    ∇_y L(x,y,z,µ) = g(x) + z = 0,
    ∇_z L(x,y,z,µ) = y − µ Z^{-1} e = 0,

where Z^{-1} = diag{z_1^{-1}, z_2^{-1}, ..., z_m^{-1}}.
The First Order Optimality Conditions are:

    ∇f(x) + ∇g(x)^T y = 0,
    g(x) + z = 0,
    YZe = µe.
31 Newton Method for the FOC
The first order optimality conditions for the barrier problem form a large system of nonlinear equations

    F(x,y,z) = 0,

where F : R^{n+2m} → R^{n+2m} is the mapping defined as follows:

    F(x,y,z) = [ ∇f(x) + ∇g(x)^T y ;  g(x) + z ;  YZe − µe ].

Note that all three terms of it are nonlinear. (In LP and QP the first two terms were linear.)
32 Newton Method for the FOC
Observe that

    ∇F(x,y,z) = [ Q(x,y)  A(x)^T  0 ;  A(x)  0  I ;  0  Z  Y ],

where A(x) is the Jacobian of g and Q(x,y) is the Hessian of L. They are defined as follows:

    A(x) = ∇g(x) ∈ R^{m×n},
    Q(x,y) = ∇²f(x) + Σ_{i=1}^m y_i ∇²g_i(x) ∈ R^{n×n}.
33 Newton Method (cont'd)
For a given point (x,y,z) we find the Newton direction (Δx, Δy, Δz) by solving the system of linear equations:

    [ Q(x,y)  A(x)^T  0 ;  A(x)  0  I ;  0  Z  Y ] [ Δx ; Δy ; Δz ] = [ −∇f(x) − A(x)^T y ;  −g(x) − z ;  µe − YZe ].

Using the third equation we eliminate

    Δz = µ Y^{-1} e − Ze − Z Y^{-1} Δy

from the second equation and get

    [ Q(x,y)  A(x)^T ;  A(x)  −Z Y^{-1} ] [ Δx ; Δy ] = [ −∇f(x) − A(x)^T y ;  −g(x) − µ Y^{-1} e ].
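A small numpy sketch of this reduced (augmented) system, in the same illustrative style as the QP code above; g, grad_f, jac_g and hess_L stand for user-supplied callbacks:

```python
import numpy as np

def nlp_newton_direction(g, grad_f, jac_g, hess_L, x, y, z, mu):
    """Newton direction for the NLP barrier FOC via the augmented system."""
    A = jac_g(x)                         # m x n Jacobian of g at x
    Q = hess_L(x, y)                     # n x n Hessian of the Lagrangian
    n = x.size
    # Augmented system [[Q, A^T], [A, -Z Y^{-1}]] obtained by eliminating dz.
    K = np.block([[Q, A.T],
                  [A, -np.diag(z / y)]])
    rhs = np.concatenate([-grad_f(x) - A.T @ y,
                          -g(x) - mu / y])
    d = np.linalg.solve(K, rhs)
    dx, dy = d[:n], d[n:]
    dz = mu / y - z - (z / y) * dy       # recover dz from the third block row
    return dx, dy, dz
```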
34 Interior-Point NLP Algorithm
Initialize:
    k = 0, (x^0, y^0, z^0) such that y^0 > 0 and z^0 > 0, µ_0 = (1/m) (y^0)^T z^0.
Repeat until optimality:
    k = k + 1
    µ_k = σ µ_{k−1}, where σ ∈ (0,1)
    Compute A(x) and Q(x,y)
    (Δx, Δy, Δz) = Newton direction towards the µ-center
    Ratio test:
        α_1 := max {α > 0 : y + α Δy ≥ 0},
        α_2 := max {α > 0 : z + α Δz ≥ 0}.
    Choose the step (use trust region or line search): α ≤ min{α_1, α_2}.
    Make step:
        x^{k+1} = x^k + α Δx,  y^{k+1} = y^k + α Δy,  z^{k+1} = z^k + α Δz.
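In the same bare-bones style as the QP loop earlier, a sketch of this algorithm with a fixed fraction-to-boundary factor standing in for the trust region or line search:

```python
def nlp_ipm(g, grad_f, jac_g, hess_L, x, y, z,
            sigma=0.1, alpha0=0.99, tol=1e-8, max_iter=50):
    """Bare-bones primal-dual IPM for min f(x) s.t. g(x) <= 0 (illustrative)."""
    m = len(z)
    mu = y @ z / m
    for _ in range(max_iter):
        if mu < tol:
            break
        mu *= sigma
        dx, dy, dz = nlp_newton_direction(g, grad_f, jac_g, hess_L, x, y, z, mu)
        # Ratio test on the nonnegative variables y and z; one common steplength.
        a1 = min((-y[i] / dy[i] for i in range(m) if dy[i] < 0), default=1.0)
        a2 = min((-z[i] / dz[i] for i in range(m) if dz[i] < 0), default=1.0)
        a = min(1.0, alpha0 * a1, alpha0 * a2)
        x, y, z = x + a * dx, y + a * dy, z + a * dz
        mu = y @ z / m
    return x, y, z
```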
35 From QP to NLP
Newton direction for QP:

    [ −Q  A^T  I ;  A  0  0 ;  S  0  X ] [ Δx ; Δy ; Δs ] = [ ξ_d ; ξ_p ; ξ_µ ].

Augmented system for QP:

    [ −Q − SX^{-1}  A^T ;  A  0 ] [ Δx ; Δy ] = [ ξ_d − X^{-1} ξ_µ ;  ξ_p ].
36 From QP to NLP
Newton direction for NLP:

    [ Q(x,y)  A(x)^T  0 ;  A(x)  0  I ;  0  Z  Y ] [ Δx ; Δy ; Δz ] = [ −∇f(x) − A(x)^T y ;  −g(x) − z ;  µe − YZe ].

Augmented system for NLP:

    [ Q(x,y)  A(x)^T ;  A(x)  −Z Y^{-1} ] [ Δx ; Δy ] = [ −∇f(x) − A(x)^T y ;  −g(x) − µ Y^{-1} e ].

Conclusion: NLP is a natural extension of QP.
37 Linear Algebra in IPM for NLP
Newton direction for NLP:

    [ Q(x,y)  A(x)^T  0 ;  A(x)  0  I ;  0  Z  Y ] [ Δx ; Δy ; Δz ] = [ −∇f(x) − A(x)^T y ;  −g(x) − z ;  µe − YZe ].

The corresponding augmented system:

    [ Q(x,y)  A(x)^T ;  A(x)  −Z Y^{-1} ] [ Δx ; Δy ] = [ −∇f(x) − A(x)^T y ;  −g(x) − µ Y^{-1} e ],

where A(x) ∈ R^{m×n} is the Jacobian of g and Q(x,y) ∈ R^{n×n} is the Hessian of L:

    A(x) = ∇g(x),
    Q(x,y) = ∇²f(x) + Σ_{i=1}^m y_i ∇²g_i(x).
38 Linear Algebra in IPM for NLP (cont'd)
Automatic differentiation is very useful: get Q(x,y) and A(x) from an Algebraic Modelling Language.
[Diagram: the Model is written in an AML, passed to a Numerical Analysis Package / SOLVER, and the Output yields the Solution.]
39 Automatic Differentiation
AD on the Internet:
- ADIFOR (FORTRAN code for AD)
- ADOL-C (C/C++ code for AD): .../AD Tools/adolc.anl/adolc.html
- AD page at Cornell
40 IPMs: Remarks
- Interior Point Methods provide a unified framework for convex optimization.
- IPMs provide polynomial algorithms for LP, QP and NLP.
- The linear algebra in LP, QP and NLP is very similar.
- Use IPMs to solve very large problems.
Further extensions: nonconvex optimization.
IPMs on the Internet:
- LP FAQ (Frequently Asked Questions)
- Interior Point Methods On-Line
- NEOS (Network Enabled Optimization Services)
41 Newton Method and Self-concordant Barriers
42 Another View of Newton Method for Optimization
Let f : R^n → R be a twice continuously differentiable function. Suppose we build a quadratic model f̃ of f around a given point x_k, i.e., we define Δx = x − x_k and write:

    f̃(x) = f(x_k) + ∇f(x_k)^T Δx + (1/2) Δx^T ∇²f(x_k) Δx.

Now we optimize the model f̃ instead of optimizing f. A minimum (or, more generally, a stationary point) of the quadratic model satisfies:

    ∇f̃(x) = ∇f(x_k) + ∇²f(x_k) Δx = 0,

i.e.

    Δx = x − x_k = −(∇²f(x_k))^{-1} ∇f(x_k),

which reduces to the usual equation:

    x_{k+1} = x_k − (∇²f(x_k))^{-1} ∇f(x_k).
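A minimal sketch of this iteration in numpy (the example function is my own; no line search or other safeguards):

```python
import numpy as np

def newton_minimize(grad, hess, x, tol=1e-10, max_iter=50):
    """Plain Newton iteration: x <- x - hess(x)^{-1} grad(x)."""
    for _ in range(max_iter):
        gx = grad(x)
        if np.linalg.norm(gx) < tol:
            break
        x = x - np.linalg.solve(hess(x), gx)
    return x

# Example: f(x) = x1^4 + x2^2, with minimizer at the origin.
grad = lambda x: np.array([4 * x[0]**3, 2 * x[1]])
hess = lambda x: np.array([[12 * x[0]**2, 0.0], [0.0, 2.0]])
print(newton_minimize(grad, hess, np.array([1.0, 1.0])))
```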
43 −log x Barrier Function
Consider the primal barrier linear program

    min  c^T x − µ Σ_{j=1}^n ln x_j
    s.t. Ax = b,

where µ ≥ 0 is a barrier parameter. Write out the Lagrangian

    L(x,y,µ) = c^T x − y^T (Ax − b) − µ Σ_{j=1}^n ln x_j,

and the conditions for a stationary point:

    ∇_x L(x,y,µ) = c − A^T y − µ X^{-1} e = 0,
    ∇_y L(x,y,µ) = Ax − b = 0,

where X^{-1} = diag{x_1^{-1}, x_2^{-1}, ..., x_n^{-1}}.
44 −log x Barrier Function (cont'd)
Let us denote s = µ X^{-1} e, i.e. XSe = µe.
The First Order Optimality Conditions are:

    Ax = b,
    A^T y + s = c,
    XSe = µe.
45 −log x Barrier Function: Newton Method
The first order optimality conditions for the barrier problem form a large system of nonlinear equations

    F(x,y,s) = 0,

where F : R^{2n+m} → R^{2n+m} is the mapping defined as follows:

    F(x,y,s) = [ Ax − b ;  A^T y + s − c ;  XSe − µe ].

Actually, the first two terms of it are linear; only the last one, corresponding to the complementarity condition, is nonlinear.
Note that

    ∇F(x,y,s) = [ A  0  0 ;  0  A^T  I ;  S  0  X ].
46 −log x Barrier Function: Newton Method (cont'd)
Thus, for a given point (x,y,s) we find the Newton direction (Δx, Δy, Δs) by solving the system of linear equations:

    [ A  0  0 ;  0  A^T  I ;  S  0  X ] [ Δx ; Δy ; Δs ] = [ b − Ax ;  c − A^T y − s ;  µe − XSe ].
47 1/x^α (α > 0) Barrier Function
Consider the primal barrier linear program

    min  c^T x + µ Σ_{j=1}^n 1/x_j^α
    s.t. Ax = b,

where µ ≥ 0 is a barrier parameter and α > 0. Write out the Lagrangian

    L(x,y,µ) = c^T x − y^T (Ax − b) + µ Σ_{j=1}^n 1/x_j^α,

and the conditions for a stationary point:

    ∇_x L(x,y,µ) = c − A^T y − µα X^{−α−1} e = 0,
    ∇_y L(x,y,µ) = Ax − b = 0,

where X^{−α−1} = diag{x_1^{−α−1}, x_2^{−α−1}, ..., x_n^{−α−1}}.
48 1/x^α (α > 0) Barrier Function (cont'd)
Let us denote s = µα X^{−α−1} e, i.e. X^{α+1} S e = µα e.
The First Order Optimality Conditions are:

    Ax = b,
    A^T y + s = c,
    X^{α+1} S e = µα e.
49 1/x^α (α > 0) Barrier Function: Newton Method
The first order optimality conditions for the barrier problem are

    F(x,y,s) = 0,

where F : R^{2n+m} → R^{2n+m} is the mapping defined as follows:

    F(x,y,s) = [ Ax − b ;  A^T y + s − c ;  X^{α+1} S e − µα e ].

As before, only the last term, corresponding to the complementarity condition, is nonlinear. Note that

    ∇F(x,y,s) = [ A  0  0 ;  0  A^T  I ;  (α+1) X^α S  0  X^{α+1} ].
50 1/x^α (α > 0) Barrier Function: Newton Method (cont'd)
Thus, for a given point (x,y,s) we find the Newton direction (Δx, Δy, Δs) by solving the system of linear equations:

    [ A  0  0 ;  0  A^T  I ;  (α+1) X^α S  0  X^{α+1} ] [ Δx ; Δy ; Δs ] = [ b − Ax ;  c − A^T y − s ;  µα e − X^{α+1} S e ].
51 e^{1/x} Barrier Function
Consider the primal barrier linear program

    min  c^T x + µ Σ_{j=1}^n e^{1/x_j}
    s.t. Ax = b,

where µ ≥ 0 is a barrier parameter. Write out the Lagrangian

    L(x,y,µ) = c^T x − y^T (Ax − b) + µ Σ_{j=1}^n e^{1/x_j},

and the conditions for a stationary point:

    ∇_x L(x,y,µ) = c − A^T y − µ X^{-2} exp(X^{-1}) e = 0,
    ∇_y L(x,y,µ) = Ax − b = 0,

where exp(X^{-1}) = diag{e^{1/x_1}, e^{1/x_2}, ..., e^{1/x_n}}.
52 e^{1/x} Barrier Function (cont'd)
Let us denote s = µ X^{-2} exp(X^{-1}) e, i.e. X^2 exp(−X^{-1}) S e = µe.
The First Order Optimality Conditions are:

    Ax = b,
    A^T y + s = c,
    X^2 exp(−X^{-1}) S e = µe.
53 e^{1/x} Barrier Function: Newton Method
The first order optimality conditions are

    F(x,y,s) = 0,

where F : R^{2n+m} → R^{2n+m} is defined as follows:

    F(x,y,s) = [ Ax − b ;  A^T y + s − c ;  X^2 exp(−X^{-1}) S e − µe ].

As before, only the last term, corresponding to the complementarity condition, is nonlinear. Note that

    ∇F(x,y,s) = [ A  0  0 ;  0  A^T  I ;  (2X+I) exp(−X^{-1}) S  0  X^2 exp(−X^{-1}) ].
54 e^{1/x} Barrier Function: Newton Method (cont'd)
The Newton direction (Δx, Δy, Δs) solves the following system of linear equations:

    [ A  0  0 ;  0  A^T  I ;  (2X+I) exp(−X^{-1}) S  0  X^2 exp(−X^{-1}) ] [ Δx ; Δy ; Δs ] = [ b − Ax ;  c − A^T y − s ;  µe − X^2 exp(−X^{-1}) S e ].
55 Why Is the Log Barrier the Best?
The First Order Optimality Conditions:

    −log x:   XSe = µe,
    1/x^α:    X^{α+1} S e = µα e,
    e^{1/x}:  X^2 exp(−X^{-1}) S e = µe.

The log barrier ensures the symmetry between the primal and the dual.
The 3rd row in the Newton equation system:

    −log x:   [ S,  0,  X ],
    1/x^α:    [ (α+1) X^α S,  0,  X^{α+1} ],
    e^{1/x}:  [ (2X+I) exp(−X^{-1}) S,  0,  X^2 exp(−X^{-1}) ].

The log barrier produces the weakest nonlinearity.
56 Self-concordant Functions
There is a nice property of a function that is responsible for the good behaviour of the Newton method.
Def. Let C ⊆ R^n be an open nonempty convex set. Let f : C → R be a three times continuously differentiable convex function. A function f is called self-concordant if there exists a constant p > 0 such that

    |∇³f(x)[h,h,h]| ≤ 2 p^{-1/2} (∇²f(x)[h,h])^{3/2}

for all x ∈ C and all h such that x + h ∈ C. (We then say that f is p-self-concordant.)
Note that a self-concordant function is always well approximated by its quadratic model, because the error of such an approximation can be bounded by the 3/2 power of ∇²f(x)[h,h].
57 Self-concordant Barriers
Lemma. The barrier function −log x is self-concordant on R_+.
Proof. Consider f(x) = −log x. Compute f'(x) = −x^{-1}, f''(x) = x^{-2} and f'''(x) = −2x^{-3}, and check that the self-concordance condition is satisfied with p = 1.
Lemma. The barrier function 1/x^α, with α ∈ (0, ∞), is not self-concordant on R_+.
Lemma. The barrier function e^{1/x} is not self-concordant on R_+.
Use self-concordant barriers in optimization!
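A quick numerical check of these claims in the one-dimensional case (script of my own): the self-concordance ratio |f'''(x)| / f''(x)^{3/2} must stay bounded by 2/√p for some fixed p > 0; for −log x it is identically 2, while for 1/x and e^{1/x} it grows without bound on R_+.

```python
import numpy as np

def ratio(d2, d3, x):
    """Self-concordance ratio |f'''(x)| / f''(x)^{3/2}; bounded iff p exists."""
    return abs(d3(x)) / d2(x)**1.5

# Second and third derivatives of each candidate barrier on R_+.
barriers = {
    "-log x":  (lambda x: x**-2,                 lambda x: -2 * x**-3),
    "1/x":     (lambda x: 2 * x**-3,             lambda x: -6 * x**-4),
    "e^{1/x}": (lambda x: np.exp(1/x) * (x**-4 + 2 * x**-3),
                lambda x: -np.exp(1/x) * (x**-6 + 6 * x**-5 + 6 * x**-4)),
}
for name, (d2, d3) in barriers.items():
    print(name, [round(ratio(d2, d3, x), 3) for x in (0.1, 1.0, 10.0, 100.0)])
# "-log x" prints 2.0 at every x; the other two ratios keep growing with x.
```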