Structural and Multidisciplinary Optimization. P. Duysinx and P. Tossings



CONTACTS

Pierre Duysinx, Institut de Mécanique et du Génie Civil (B52/3). Phone number: 04/
Patricia Tossings, Institut de Mathématique (B37), 0/57. Phone number: 04/

TABLE OF CONTENTS

01 - Optimization in Engineering Design
02 - Fundamentals of Structural Optimization
03 - Introduction to Mathematical Programming
04 - Algorithms for Unconstrained Optimization: Gradient Methods (including conjugate directions)
05 - Line Search Techniques
06 - Algorithms for Unconstrained Optimization: Newton and quasi-Newton Methods
07 - Quasi-Unconstrained Optimization
08 - General Constrained Optimization: Dual Methods
09 - General Constrained Optimization: Transformation Methods (including SLP and SQP)
10 - Optimality Criteria
11 - Structural Approximations
12 - CONLIN and MMA
13 - Sensitivity Analysis for Finite Element Models
14 - Introduction to Shape Optimization
15 - Introduction to Topology Optimization

Chapter 03
INTRODUCTION TO MATHEMATICAL PROGRAMMING THEORY
(slide 1)

Contents of chapter 03

Motivation and standard formulation of MPT (slide 3)
Feasible point, feasible domain (slide 6)
Global (strict) optimum (slide 8)
Local optimum (slide 9)
Looking for optimality conditions (slide 10)
Optimality conditions for unconstrained problems (slide 14)
Optimality conditions for problems with one equality constraint (slide 17)
Optimality conditions for problems with m equality constraints (slide 20)
Optimality conditions for problems with m inequality constraints (slide 21)
In practice (slide 27)
Linear sets (slide 28)
Linear functions (slide 29)
Convex sets and functions (slide 30)
MPT: partial classification (slide 34)
In practice - Taylor expansion (slide 37)
A few words about the topology of R^n (slide 38)
In practice - Efficiency of an algorithm (slide 41)
Complements about the topology of R^n (slide 46)
Important results with regard to optimization (Weierstrass) (slide 50)
(slide 2)

MOTIVATION

Concrete problem → Modelling → Mathematical Optimization Problem (P)

Find x, solution of problem (P):

    Minimize f(x)
    subject to g_j(x) ≤ 0 (j = 1,...,m)
               x ∈ S ⊆ R^n

Mathematical Programming Theory (MPT)

Convention. In this course, all the functions are assumed to be real valued.
(slide 3)

Vocabulary and notations

In problem (P)...
- f is the objective or cost function.
- The conditions g_j(x) ≤ 0 (j = 1,...,m) and x ∈ S ⊆ R^n are the constraints.
- The set S is often an interval of R^n. In that case, one speaks about side constraints.
- For x ∈ R^n: x = (x_1,...,x_n)^T, with x_i ∈ R (i = 1,...,n).
(slide 4)

The formulation of problem (P) is not restrictive.

With regard to the objective function:

    Maximize f(x)  ⟺  Minimize [-f(x)]
    (same optimal points, opposite optimal values)

    An interesting alternative (valid when f > 0):
    Maximize f(x)  ⟺  Minimize 1/f(x)

With regard to the constraints:

    g_j(x) ≥ 0  ⟺  -g_j(x) ≤ 0
    g_j(x) = 0  ⟺  g_j(x) ≤ 0 and -g_j(x) ≤ 0
(slide 5)

Feasible point, feasible domain

x ∈ R^n is a solution or feasible point of problem (P) iff

    g_j(x) ≤ 0 (j = 1,...,m) and x ∈ S.

The feasible domain of problem (P) is the set of all its feasible points.
(slide 6)

Interior, boundary and exterior point

Assume that S = R^n.

- x is an interior point of the feasible domain of problem (P) iff g_j(x) < 0, ∀ j ∈ {1,...,m}.
- x is a boundary point of this domain iff ∃ j ∈ {1,...,m} : g_j(x) = 0, with no constraint violated.
- x is an exterior point of this domain iff ∃ j ∈ {1,...,m} : g_j(x) > 0.

Note. The points IP, BP and EP on the previous figure illustrate these notions with S different from the whole space.

Inactive, active and violated constraint

The constraint g_j is inactive at the point x iff g_j(x) < 0. It is active at x iff g_j(x) = 0. It is violated at x iff g_j(x) > 0.
(slide 7)

Global (strict) optimum

x* is an optimal solution or a global optimum (global minimizer) of problem (P) iff
- x* is feasible,
- f(x*) ≤ f(y) for any feasible point y of (P).

The optimum is said to be strict (or strong) if the strict inequality holds for any feasible y ≠ x*.
(slide 8)

Local optimum

x* is a local optimum of problem (P) iff it admits a neighbourhood V(x*) such that x* is a global optimum of the local problem

    (P_loc)  Minimize f(x)
             subject to g_j(x) ≤ 0 (j = 1,...,m)
                        x ∈ S ∩ V(x*)
(slide 9)

Looking for optimality conditions - A first example in R^2

    f(x_1,x_2) = x_1² + x_2²

    f(0,0) = 0 and f(x_1,x_2) > 0 for all (x_1,x_2) ≠ (0,0)

    ⟹ f admits a global strict optimum at (0,0).

Extract from the notes of Pr. E. Delhez, Analyse mathématique
(slide 10)

An interesting comment

f ∈ C²(R²) and

    ∂f/∂x_1 (x_1,x_2) = 2x_1,   ∂²f/∂x_1² (x_1,x_2) = 2,
    ∂f/∂x_2 (x_1,x_2) = 2x_2,   ∂²f/∂x_2² (x_1,x_2) = 2,
    ∂²f/∂x_1∂x_2 (x_1,x_2) = ∂²f/∂x_2∂x_1 (x_1,x_2) = 0

    ∇f(x_1,x_2) = 0 iff x_1 = x_2 = 0

    ∇²f(x_1,x_2) = [2 0; 0 2] is positive definite (see slide 15).

Notations. ∇f denotes the gradient of f and ∇²f denotes its Hessian.
(slide 11)

Looking for optimality conditions - A second example in R^2

    f(x_1,x_2) = x_2² - x_1²

    f(x_1,0) = -x_1² ≤ 0, ∀ x_1 ∈ R,   f(0,0) = 0,   f(0,x_2) = x_2² ≥ 0, ∀ x_2 ∈ R.

At the origin, f admits a maximum with regard to x_1 and a minimum with regard to x_2. One says that f admits a saddle point at the origin.

Extract from the notes of Pr. E. Delhez, Analyse mathématique
(slide 12)

An interesting comment

f ∈ C²(R²) and

    ∂f/∂x_1 (x_1,x_2) = -2x_1,   ∂²f/∂x_1² (x_1,x_2) = -2,
    ∂f/∂x_2 (x_1,x_2) = 2x_2,    ∂²f/∂x_2² (x_1,x_2) = 2,
    ∂²f/∂x_1∂x_2 (x_1,x_2) = ∂²f/∂x_2∂x_1 (x_1,x_2) = 0

    ∇f(x_1,x_2) = 0 iff x_1 = x_2 = 0

    ∇²f(x_1,x_2) = [-2 0; 0 2] is not definite (see slide 15).
(slide 13)
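The eigenvalue test used in the two examples above is easy to automate. A small NumPy sketch (ours, not from the slides; the helper name `classify_hessian` is an assumption):

```python
import numpy as np

def classify_hessian(H, tol=1e-12):
    """Classify a symmetric matrix by the signs of its eigenvalues."""
    eig = np.linalg.eigvalsh(H)
    if np.all(eig > tol):
        return "positive definite"
    if np.all(eig >= -tol):
        return "positive semi-definite"
    if np.all(eig < -tol):
        return "negative definite"
    if np.all(eig <= tol):
        return "negative semi-definite"
    return "not definite"

# Hessian of f(x1, x2) = x1^2 + x2^2 (first example: strict minimum)
H1 = np.array([[2.0, 0.0], [0.0, 2.0]])
# Hessian of f(x1, x2) = x2^2 - x1^2 (second example: saddle point)
H2 = np.array([[-2.0, 0.0], [0.0, 2.0]])

print(classify_hessian(H1))  # positive definite
print(classify_hessian(H2))  # not definite
```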

Optimality conditions - (1)

UNCONSTRAINED problems

    (P_unc)  Minimize f(x),  x ∈ R^n

NECESSARY optimality conditions

Let x* be a local optimum of problem (P_unc).

- If f is continuously differentiable in a neighbourhood of x* [f is C¹ at x*], then x* is a stationary point of f:

      ∇f(x*) = 0.

- If f is twice continuously differentiable in a neighbourhood of x* [f is C² at x*], then ∇²f(x*) is positive semi-definite.
(slide 14)

Reminder of Algebra

Definition
A matrix A ∈ R^{n×n} is positive semi-definite (respectively negative semi-definite) iff x^T A x ≥ 0 (resp. x^T A x ≤ 0), ∀ x ∈ R^n.
It is positive definite (respectively negative definite) if, moreover, x^T A x = 0 ⟹ x = 0.
A matrix A ∈ R^{n×n} is not definite if it is neither positive semi-definite nor negative semi-definite.

Proposition
A symmetric matrix A ∈ R^{n×n} is
- positive semi-definite iff all its eigenvalues are positive (≥ 0),
- positive definite iff all its eigenvalues are strictly positive (> 0),
- negative semi-definite iff all its eigenvalues are negative (≤ 0),
- negative definite iff all its eigenvalues are strictly negative (< 0),
- not definite if it admits both strictly positive and strictly negative eigenvalues.

(See also the Sylvester criterion.)
(slide 15)
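The Sylvester criterion mentioned above tests positive definiteness through the leading principal minors. A minimal sketch (ours), assuming the determinant-based form of the criterion:

```python
import numpy as np

def sylvester_positive_definite(A):
    """Sylvester criterion: a symmetric matrix is positive definite
    iff all its leading principal minors are strictly positive."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    return all(np.linalg.det(A[:k, :k]) > 0 for k in range(1, n + 1))

A = np.array([[2.0, -1.0], [-1.0, 2.0]])   # eigenvalues 1 and 3
B = np.array([[1.0, 2.0], [2.0, 1.0]])     # eigenvalues 3 and -1

print(sylvester_positive_definite(A))  # True
print(sylvester_positive_definite(B))  # False
```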

Optimality conditions - (2)

UNCONSTRAINED problems

    (P_unc)  Minimize f(x),  x ∈ R^n

SUFFICIENT optimality conditions

Assume that
- f is C² at x*,
- ∇f(x*) = 0,
- ∇²f(x*) is positive definite.

Then x* is a strict local optimum of (P_unc).

Relaxation. If ∇²f is positive semi-definite in a neighbourhood of x*, then x* is a local optimum of (P_unc).

Characterization of a global optimum?... Except if f is convex (see slides 30-32).
(slide 16)

Optimality conditions - (3)

Problems with ONE EQUALITY CONSTRAINT

    (P_eqc)  Minimize f(x)
             subject to g(x) = 0

NECESSARY optimality conditions

Assume that
- f and g are C¹ at x*,
- ∇g(x*) ≠ 0,
- x* is an optimal solution of (P_eqc).

There is a real number λ* such that (x*, λ*) is a stationary point of the Lagrangian function

    L(x,λ) = f(x) + λ g(x),   x ∈ R^n, λ ∈ R.

Vocabulary. λ* is called a Lagrange multiplier.
(slide 17)

An intuitive explanation of the previous result

(x*, λ*) is a stationary point of the Lagrangian function iff

    ∇f(x*) + λ* ∇g(x*) = 0  and  g(x*) = 0.

The second condition means that x* satisfies the constraint. The first one implies that ∇f(x*) and ∇g(x*) are parallel. If this condition is not satisfied, there exists a direction e, tangent to the constraint [e^T ∇g(x*) = 0], such that

    D_e f(x*) = e^T ∇f(x*) > 0  while  D_{-e} f(x*) = -e^T ∇f(x*) < 0,

and f can not be extremal at x*.

Extract from the notes of Pr. E. Delhez, Analyse mathématique
(slide 18)
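The stationarity system ∇f(x) + λ∇g(x) = 0, g(x) = 0 can be solved numerically. A sketch (ours, on an illustrative problem that is not from the slides), using a few hand-coded Newton steps on the gradient of the Lagrangian:

```python
import numpy as np

# Illustrative problem (ours):
# minimize f(x) = x1 + x2  subject to  g(x) = x1^2 + x2^2 - 2 = 0.
# Lagrangian: L(x, lam) = x1 + x2 + lam * (x1^2 + x2^2 - 2);
# we solve grad L = 0 with Newton's method.

def F(z):
    x1, x2, lam = z
    return np.array([1.0 + 2.0 * lam * x1,       # dL/dx1
                     1.0 + 2.0 * lam * x2,       # dL/dx2
                     x1**2 + x2**2 - 2.0])       # dL/dlam = g(x)

def J(z):
    x1, x2, lam = z
    return np.array([[2.0 * lam, 0.0,       2.0 * x1],
                     [0.0,       2.0 * lam, 2.0 * x2],
                     [2.0 * x1,  2.0 * x2,  0.0]])

z = np.array([-1.5, -1.5, 1.0])   # starting guess near the constrained minimum
for _ in range(20):
    z = z - np.linalg.solve(J(z), F(z))

print(np.round(z, 6))  # close to x* = (-1, -1), multiplier lam* = 0.5
```

With the symmetric starting point used here the iteration settles on the minimizer (-1, -1); starting near (1, 1) would instead find the constrained maximizer.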

Reminder of Mathematical Analysis

Let e ∈ R^n be a direction: e = (e_1,...,e_n)^T satisfies ‖e‖ = sqrt(Σ_{i=1}^n e_i²) = 1 (norm: see slide 38).

Definition
The directional derivative of f in the direction e at the point x ∈ R^n, denoted D_e f(x), is the number defined by

    D_e f(x) = lim_{θ→0+} [f(x + θe) - f(x)] / θ

if the limit in the right hand side exists and is finite.

Proposition
If D_e f(x) exists, then, starting from x, the function f increases (respectively decreases) in the direction of e iff D_e f(x) > 0 (resp. D_e f(x) < 0).

Proposition
If f is C¹ at x, then D_e f(x) = e^T ∇f(x) (inner product: see slide 38).

Proposition
If f is C¹ at x, then, starting from x,
- the greatest increase of f is obtained in the direction of ∇f(x),
- its greatest decrease is obtained in the direction of -∇f(x),
- the variation of f is null in any direction e orthogonal to ∇f(x) [e^T ∇f(x) = 0].

The last result will be useful in a later chapter.
(slide 19)
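The proposition D_e f(x) = e^T ∇f(x) can be checked against a one-sided finite difference; the example function and numbers below are ours:

```python
import numpy as np

# Example (ours): f(x) = x1^2 + 3*x2, with gradient (2*x1, 3).
f = lambda x: x[0]**2 + 3.0 * x[1]
grad_f = lambda x: np.array([2.0 * x[0], 3.0])

x = np.array([1.0, 2.0])
e = np.array([3.0, 4.0]) / 5.0     # unit direction, ||e|| = 1

# One-sided finite-difference approximation of the directional derivative
theta = 1e-6
d_num = (f(x + theta * e) - f(x)) / theta
d_ana = e.dot(grad_f(x))           # e^T grad f(x) = 2*1*0.6 + 3*0.8 = 3.6

print(d_ana)                       # 3.6 (up to rounding)
print(abs(d_num - d_ana) < 1e-4)   # True
```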

Optimality conditions - (4)

Problems with m EQUALITY CONSTRAINTS

    (P_eqc)  Minimize f(x)
             subject to g_j(x) = 0 (j = 1,...,m)

NECESSARY optimality conditions

Assume that
- f and g_j (j = 1,...,m) are C¹ at x*,
- the gradients of the constraints are linearly independent at x*,
- x* is an optimal solution of (P_eqc).

There are real numbers λ_1*,...,λ_m* such that (x*, λ_1*,...,λ_m*) is a stationary point of the Lagrangian function

    L(x, λ_1,...,λ_m) = f(x) + Σ_{j=1}^m λ_j g_j(x),   x ∈ R^n, λ_j ∈ R.

Convention. When there is no risk of confusion, we set λ = (λ_1,...,λ_m).
(slide 20)

Optimality conditions - (5)

Problems with m INEQUALITY CONSTRAINTS

    (P_ineqc)  Minimize f(x)
               subject to g_j(x) ≤ 0 (j = 1,...,m)

We associate to (P_ineqc) the Lagrangian function

    L(x,λ) = f(x) + Σ_{j=1}^m λ_j g_j(x),   x ∈ R^n, λ ≥ 0  (Attention!)

and the quasi-unconstrained problem (see slide 34)

    (P_Lag)  min_x max_{λ≥0} L(x,λ)

Convention. λ ≥ 0 is written for λ_j ≥ 0, j = 1,...,m.

Remark. (P_ineqc) ⟺ (P_Lag) (see chapter 08).

Interpretation. With regard to problem (P_Lag), the second term of the Lagrangian function can be seen as a penalization for unfeasible points (see chapter 09).
(slide 21)

NECESSARY optimality conditions for (P_ineqc): Karush-Kuhn-Tucker (in brief: KKT)

Assume that
- f and g_j (j = 1,...,m) are C¹ at x*,
- x* is a regular point, i.e. the gradients of the active constraints at x* are linearly independent,
- x* is an optimal solution of (P_ineqc).

There is a set of Lagrange multipliers λ_1*,...,λ_m* such that

    ∇f(x*) + Σ_{j=1}^m λ_j* ∇g_j(x*) = 0

and, for j = 1,...,m,

    g_j(x*) ≤ 0,   λ_j* ≥ 0,   λ_j* g_j(x*) = 0.

Remark. The last condition implies that the Lagrange multipliers corresponding to inactive constraints are zero (complementary slackness).
(slide 22)

KKT - ILLUSTRATION / DISCUSSION

Objective function: f(x_1,x_2) = (x_1² + x_2²)/2

Constraints:
    g_1(x_1,x_2) = 1 - x_1 ≤ 0
    g_2(x_1,x_2) = x_1² - 4x_1 + x_2 ≤ 0

The optimal value of f over the whole space is 0 and is achieved for x_1 = x_2 = 0. The isovalue or level curves of f are circles centered at the origin. On the circle with radius r, the value of f is r²/2.
(slide 23)

The optimal value of f over the feasible domain is achieved at

    x* = (1, 0)^T   [f(x*) = 1/2]

    g_1(x*) = 0 : active constraint,
    g_2(x*) = -3 < 0 : inactive constraint.

∇g_1(x) ≠ 0, ∀ x ∈ R². In particular, x* is a regular point.
(slide 24)

Lagrangian function

    L(x,λ_1,λ_2) = (x_1² + x_2²)/2 + λ_1 (1 - x_1) + λ_2 (x_1² - 4x_1 + x_2)

    ∂L/∂x_1 (x,λ_1,λ_2) = x_1 - λ_1 + (2x_1 - 4) λ_2
    ∂L/∂x_2 (x,λ_1,λ_2) = x_2 + λ_2

At x*:

    ∇_x L(x*,λ_1,λ_2) = 0  ⟺  { 1 - λ_1 - 2λ_2 = 0 and λ_2 = 0 }  ⟺  { λ_1 = 1, λ_2 = 0 }

λ_1 ≥ 0 and λ_2 ≥ 0: the positivity of the Lagrange multipliers is satisfied.
λ_1 g_1(x*) = 0 and λ_2 g_2(x*) = 0: the complementary slackness is also satisfied.
(slide 25)

Let us now consider the point x = (2, 0)^T.

x is a feasible point. g_1 and g_2 are both inactive at x.

    ∇_x L(x,λ_1,λ_2) = 0  ⟺  { 2 - λ_1 = 0 and λ_2 = 0 }  ⟺  { λ_1 = 2, λ_2 = 0 }

λ_1 ≥ 0 and λ_2 ≥ 0, λ_2 g_2(x) = 0 BUT λ_1 g_1(x) ≠ 0.

The positivity of the Lagrange multipliers is satisfied BUT the complementary slackness is not. Due to the necessary optimality conditions of KKT, x can not be an optimal point of f over the feasible domain. x is effectively not such a point, since f(x) = 2 > 1/2 = f(x*).
(slide 26)
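The KKT checks carried out by hand on the two candidate points above can be scripted directly. A NumPy sketch (the problem data follow the slides; the helper `kkt_satisfied` is ours):

```python
import numpy as np

# Problem from the slides: f = (x1^2 + x2^2)/2,
# g1 = 1 - x1 <= 0,  g2 = x1^2 - 4*x1 + x2 <= 0.
grad_f = lambda x: np.array([x[0], x[1]])
g      = lambda x: np.array([1.0 - x[0], x[0]**2 - 4.0*x[0] + x[1]])
grad_g = lambda x: np.array([[-1.0, 0.0],
                             [2.0*x[0] - 4.0, 1.0]])   # one row per constraint

def kkt_satisfied(x, lam, tol=1e-9):
    """Check stationarity, feasibility, positivity and complementary slackness."""
    stationary    = np.allclose(grad_f(x) + lam.dot(grad_g(x)), 0.0, atol=tol)
    feasible      = np.all(g(x) <= tol)
    positive      = np.all(lam >= -tol)
    complementary = np.allclose(lam * g(x), 0.0, atol=tol)
    return stationary and feasible and positive and complementary

x_star = np.array([1.0, 0.0])
print(kkt_satisfied(x_star, np.array([1.0, 0.0])))   # True: the optimum

x_bad = np.array([2.0, 0.0])
print(kkt_satisfied(x_bad, np.array([2.0, 0.0])))    # False: slackness fails
```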

IN PRACTICE

- f is given but its values may be hard to compute.
- f is not necessarily differentiable and, even if it is sufficiently differentiable, ∇f and ∇²f may be difficult to approximate.
- Most of the algorithms introduced to solve problem (P) are iterative.

Challenge: to obtain globally convergent methods (see slide 41).
Response: adaptation to some classes of problems (see slides 34-36).
An interesting approach: global phase + local one.

Measures of efficiency:
- number of function (and derivatives) evaluations required,
- number of arithmetic operations required,
- storage requirements,
- order (or rate) of convergence (see slides 42-44).
(slide 27)

LINEAR SETS

A set L ⊆ R^n is linear iff

    x, y ∈ L, λ ∈ R  ⟹  (x + y) ∈ L and (λx) ∈ L.

Generalization: x_k ∈ L, λ_k ∈ R (k = 1,...,K)  ⟹  Σ_{k=1}^K λ_k x_k ∈ L (linear combination of the x_k).

An affine set results from the translation of a linear one:

    A = a + L,   a ∈ R^n, L linear set of R^n.
(slide 28)

LINEAR FUNCTIONS

Let L be a linear set of R^n. A function f : L → R is linear iff

    f(Σ_{k=1}^K λ_k x_k) = Σ_{k=1}^K λ_k f(x_k)

for any linear combination of elements in L.

Consequence: f(x) = (a|x) = a^T x = Σ_{i=1}^n a_i x_i, a ∈ R^n.

An affine function results from the addition of a linear function with a constant:

    f(x) = (a|x) + b,   a ∈ R^n, b ∈ R.
(slide 29)

CONVEX SETS AND FUNCTIONS

A set C ⊆ R^n is convex iff

    x, y ∈ C, ϑ ∈ [0,1]  ⟹  [ϑx + (1-ϑ)y] ∈ C.

Generalization: x_k ∈ C, ϑ_k ∈ [0,1] (k = 1,...,K), Σ_{k=1}^K ϑ_k = 1  ⟹  Σ_{k=1}^K ϑ_k x_k ∈ C (convex combination of the x_k).

Let C be a convex set of R^n. A function f : C → R is convex iff

    f[ϑx + (1-ϑ)y] ≤ ϑ f(x) + (1-ϑ) f(y),   ∀ x, y ∈ C, ∀ ϑ ∈ [0,1].

It is concave if the inequality is reversed. f is strictly convex (respectively strictly concave) if the strict (appropriate) inequality holds whenever x ≠ y and ϑ ∈ ]0,1[.

A function f defined on a convex set C is (strictly) concave iff its opposite is (strictly) convex.
(slide 30)

Some important properties of convex functions - (1)

- A strictly convex function f : R^n → R admits at most one minimizer.

- A C¹ function f : R^n → R is convex iff

      f(y) ≥ f(x) + [∇f(x)]^T (y - x),   ∀ x, y ∈ R^n.

  It is strictly convex iff the strict inequality holds whenever x ≠ y.

- A C² function f : R^n → R is convex iff ∇²f(x) is positive semi-definite for any x ∈ R^n. It is strictly convex if ∇²f(x) is positive definite for any x ∈ R^n.

- Example: a quadratic function

      f(x) = (1/2) x^T A x + b^T x + c,   A ∈ R^{n×n} (symmetric), b ∈ R^n, c ∈ R

  is convex (resp. strictly convex) iff the matrix A is positive semi-definite (resp. positive definite).
(slide 31)
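For the quadratic example, convexity reduces to an eigenvalue test on A. A small sketch (ours; the matrices are illustrative):

```python
import numpy as np

def quadratic_is_convex(A, strict=False, tol=1e-12):
    """f(x) = 0.5 x^T A x + b^T x + c is convex iff A (symmetric) is
    positive semi-definite, and strictly convex iff A is positive definite."""
    eig = np.linalg.eigvalsh(A)
    return np.all(eig > tol) if strict else np.all(eig >= -tol)

A1 = np.array([[2.0, 0.0], [0.0, 1.0]])   # eigenvalues 2, 1
A2 = np.array([[1.0, 0.0], [0.0, 0.0]])   # eigenvalues 1, 0
A3 = np.array([[1.0, 0.0], [0.0, -1.0]])  # eigenvalues 1, -1

print(quadratic_is_convex(A1, strict=True))   # True
print(quadratic_is_convex(A2))                # True (convex, not strictly)
print(quadratic_is_convex(A2, strict=True))   # False
print(quadratic_is_convex(A3))                # False
```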

Some important properties of convex functions - (2)

If f : R^n → R is a C¹ convex function, then

    f(y) ≥ f(x) + [∇f(x)]^T (y - x),   ∀ x, y ∈ R^n,

where ∇f denotes the gradient of f.

Any stationary point of a C¹ convex function f : R^n → R is a global minimizer of this function. In other words, x* is a global minimizer of a C¹ convex function f : R^n → R iff ∇f(x*) = 0.

As a consequence, a C¹ strictly convex function f : R^n → R admits at most one stationary point.
(slide 32)

Subdifferential of a convex function

If f : R^n → R is a C¹ convex function, then

    f(y) ≥ f(x) + [∇f(x)]^T (y - x),   ∀ x, y ∈ R^n,

where ∇f denotes the gradient of f.

Generalization. A vector s ∈ R^n is a subgradient of a convex function f : R^n → R at a point x ∈ R^n iff

    f(y) ≥ f(x) + s^T (y - x),   ∀ y ∈ R^n.

The subdifferential of f at x [denoted ∂f(x)] is the set of all its subgradients at this point.

If f is C¹ at x, then ∂f(x) = {∇f(x)}.

x* is a global minimizer of f iff 0 ∈ ∂f(x*).
(slide 33)
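A classical illustration (ours, not from the slides): f(x) = |x| is convex but not differentiable at 0, its subdifferential there is [-1, 1], and 0 ∈ ∂f(0) confirms that 0 is the global minimizer. The grid check below gives numerical evidence of the subgradient inequality, not a proof:

```python
import numpy as np

def is_subgradient(f, s, x, sample_points):
    """Check the subgradient inequality f(y) >= f(x) + s*(y - x)
    on a grid of sample points (evidence on the grid, not a proof)."""
    return all(f(y) >= f(x) + s * (y - x) - 1e-12 for y in sample_points)

f = abs
ys = np.linspace(-5.0, 5.0, 101)

print(is_subgradient(f, 0.5, 0.0, ys))   # True:  0.5 lies in [-1, 1]
print(is_subgradient(f, 0.0, 0.0, ys))   # True:  0 in the subdifferential
print(is_subgradient(f, 1.5, 0.0, ys))   # False: 1.5 lies outside [-1, 1]
```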

MPT: partial classification - (1)

Preliminary convention:
- f is continuous on a continuous set (f is not necessarily differentiable),
- we don't consider integer (or discrete) programming.

Unconstrained problem: m = 0 and S = R^n
- Basis of everything!
- Many general methods solve a sequence of unconstrained problems.

Quasi-unconstrained problem: m = 0 and S is an interval, i.e. the minimization is only subject to side constraints

    x_i^min ≤ x_i ≤ x_i^max   (i = 1,...,n)

- Special case of linearly constrained problems (see next slide).
- Straightforward adaptation of unconstrained optimization methods.
- Useful for dual problems (see chapter 08).
(slide 34)

MPT: partial classification - (2)

Linear problem: f and g_j linear, S interval of R^n
- Well documented (standard packages).
- Some general methods solve a sequence of linear problems (SLP).

Linearly constrained problem: f nonlinear, g_j linear, S interval of R^n
- Easy adaptation of unconstrained optimization techniques.
- Some general methods solve a sequence of linearly constrained problems (see structural optimization).

An interesting particular case: f quadratic,

    f(x) = (1/2) x^T A x + b^T x + c

with A (symmetric) positive (semi-)definite.
- Special case of convex programming.
- Reference problem, especially in the unconstrained case (conjugacy, convergence properties, etc).
- Some general methods solve a sequence of quadratic problems (SQP).
(slide 35)

MPT: partial classification - (3)

Convex problem: f and g_j convex, S convex subset of R^n
- Global solution (see slide 32).
- KKT = sufficient conditions if the Slater condition is satisfied.
- Duality is rigorous (see chapter 08).

Separable problem:

    f(x) = Σ_{i=1}^n f_i(x_i),   g_j(x) = Σ_{i=1}^n g_ji(x_i)

and S is an interval defined by side constraints

    x_i^min ≤ x_i ≤ x_i^max   (i = 1,...,n)

Simplifications!
- The problem is equivalent to n one-dimensional subproblems.
- ∇²f and ∇²g_j are diagonal matrices.
- Possibility to use parallelism.
(slide 36)
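The "n one-dimensional subproblems" simplification can be shown on a toy separable problem (ours): minimizing Σ_i (x_i - c_i)² under side constraints splits into n independent one-dimensional subproblems, each solved by clipping c_i to its interval:

```python
import numpy as np

# Separable problem (ours): minimize sum_i (x_i - c_i)^2
# subject to side constraints lo_i <= x_i <= hi_i only.
# Each 1-D subproblem min (x_i - c_i)^2 on [lo_i, hi_i] is solved by clipping.
c  = np.array([0.5, -3.0, 10.0])
lo = np.array([0.0, -1.0, 0.0])
hi = np.array([1.0,  1.0, 4.0])

x_opt = np.clip(c, lo, hi)   # solve the n subproblems independently
print(x_opt)                 # [ 0.5 -1.   4. ]
```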

IN PRACTICE

Use the Taylor expansion, with appropriate approximations of ∇f and ∇²f.

Assume that
- f is sufficiently continuously differentiable in V(x),
- h is such that [x, x + h] ⊂ V(x).

Then

    f(x + h) = f(x) + Σ_{i=1}^n h_i ∂f/∂x_i(x) + (1/2) Σ_{k=1}^n Σ_{l=1}^n h_k h_l ∂²f/∂x_k∂x_l(x) + ...
             = f(x) + h^T ∇f(x) + (1/2) h^T ∇²f(x) h + ...

[h^T ∇f(x) may also be written [∇f(x)]^T h.]
(slide 37)
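The expansion can be checked numerically: with an exact gradient and Hessian, the second-order model should match f(x + h) up to O(‖h‖³). The example function is ours:

```python
import numpy as np

# Example (ours): f(x) = exp(x1) + x1*x2^2
f      = lambda x: np.exp(x[0]) + x[0] * x[1]**2
grad_f = lambda x: np.array([np.exp(x[0]) + x[1]**2, 2.0 * x[0] * x[1]])
hess_f = lambda x: np.array([[np.exp(x[0]), 2.0 * x[1]],
                             [2.0 * x[1],   2.0 * x[0]]])

x = np.array([0.2, 0.3])
h = np.array([1e-3, -2e-3])

# Second-order Taylor model: f(x) + h^T grad f(x) + 0.5 h^T hess f(x) h
model = f(x) + h.dot(grad_f(x)) + 0.5 * h.dot(hess_f(x)).dot(h)
error = abs(f(x + h) - model)
print(error < np.linalg.norm(h)**3 * 10)   # True: the error is O(||h||^3)
```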

A FEW WORDS ABOUT THE TOPOLOGY OF R^n

R^n is a vector space for the following operations:
- addition: (x + y)_i = x_i + y_i,
- multiplication by a scalar: (λx)_i = λ x_i.

R^n is classically equipped with the inner product

    (x|y) = Σ_{i=1}^n x_i y_i = x^T y = y^T x,

the associated euclidean norm

    ‖x‖ = sqrt((x|x)) = sqrt(Σ_{i=1}^n x_i²)

and the associated euclidean distance or metric

    d(x,y) = ‖x - y‖ = sqrt(Σ_{i=1}^n (x_i - y_i)²).

R^n can thus be equipped with a topological structure.

Other definitions can be adopted for the inner product and, as a consequence, for the norm, etc (see chapter 04).
(slide 38)

Sequences in R^n

{x^k}_{k∈N} converges to x* [x^k → x*] iff

    (∀ ε > 0)(∃ K ∈ N)(∀ k ≥ K) : ‖x^k - x*‖ ≤ ε.

In that case, x* is called the limit of {x^k}_{k∈N}.

x is an accumulation point of {x^k} if there is a sub-sequence of {x^k} that converges to x.

A convergent sequence admits a unique accumulation point (its limit). The converse is not true.
(slide 39)

Sequences in R^n - A particular case (n = 1)

The upper-limit of {x^k} [limsup x^k] is its greatest accumulation point. The lower-limit of {x^k} [liminf x^k] is its smallest accumulation point.

    liminf x^k = -limsup (-x^k)

    liminf x^k = limsup x^k = x  ⟺  x^k → x
(slide 40)

IN PRACTICE - Efficiency of an algorithm

Global behaviour

An algorithm introduced to solve problem (P) is said to be globally convergent if, for any starting point x^0, the sequence generated by this algorithm converges to a point which satisfies a necessary optimality condition.

Asymptotic (or local) behaviour

Hypothesis: {x^k} converges to x* in R^n.
Objective: to measure the speed of convergence of {x^k} for k large (asymptotic behaviour) or, in other words, in a neighbourhood of x* (local behaviour).
(slide 41)

Order (or rate) of convergence

The order (or rate) of convergence of the sequence {x^k} is the greatest positive integer p for which there exist K ∈ N and C > 0 such that

    ‖x^{k+1} - x*‖ ≤ C ‖x^k - x*‖^p,   ∀ k ≥ K.   (1)

The speed of convergence of {x^k} increases with p. If this rate is 2, the convergence is said to be quadratic.

(1) is satisfied if

    limsup_{k→∞} ‖x^{k+1} - x*‖ / ‖x^k - x*‖^p < +∞.
(slide 42)

p-linear convergence

The sequence {x^k} is p-linearly convergent if there exist K ∈ N and 0 < C < 1 such that

    ‖x^{k+1} - x*‖ ≤ C ‖x^k - x*‖^p,   ∀ k ≥ K.

For p = 1: linear convergence.

{x^k} is p-linearly convergent if

    lim_{k→∞} ‖x^{k+1} - x*‖ / ‖x^k - x*‖^p = ρ < 1.   (2)

The speed of convergence of the sequence increases when ρ decreases. The smallest ρ for which (2) holds is the ratio of convergence of the sequence.
(slide 43)

p-superlinear convergence

The sequence {x^k} is p-superlinearly convergent if there exist K ∈ N and C_k → 0 such that

    ‖x^{k+1} - x*‖ ≤ C_k ‖x^k - x*‖^p,   ∀ k ≥ K.

{x^k} is p-superlinearly convergent if

    lim_{k→∞} ‖x^{k+1} - x*‖ / ‖x^k - x*‖^p = 0.

For p = 1: superlinear convergence.
(slide 44)
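The difference between linear and quadratic convergence is easy to observe on two sequences (ours): a geometric one (p = 1, C = 0.5) and Newton's iteration for √2, which converges quadratically:

```python
import math

target = math.sqrt(2.0)

# Linearly convergent sequence: the error is 0.5**k (p = 1, C = 0.5).
lin = [target + 0.5**k for k in range(1, 6)]

# Newton's method for x^2 = 2: quadratically convergent (p = 2).
newt = [1.5]
for _ in range(4):
    x = newt[-1]
    newt.append(x - (x * x - 2.0) / (2.0 * x))

lin_err = [abs(x - target) for x in lin]
newt_err = [abs(x - target) for x in newt]

# Successive error ratios e_{k+1}/e_k: roughly constant (0.5) for the
# linear sequence, shrinking rapidly towards 0 for the quadratic one.
lin_ratios = [e2 / e1 for e1, e2 in zip(lin_err, lin_err[1:])]
newt_ratios = [e2 / e1 for e1, e2 in zip(newt_err, newt_err[1:])]
print(lin_ratios)
print(newt_ratios)
```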

IN PRACTICE - Efficiency of an algorithm - Another approach

Another measure of the efficiency of an algorithm introduced to solve problem (P) can be obtained by considering not the sequence {x^k} itself but the corresponding sequence {f(x^k)}. This approach leads to replacing, in the previous definitions, expressions of the form ‖x^k - x*‖ by |f(x^k) - f(x*)|.

The two approaches are equivalent when f is C² at x* and ∇²f(x*) is positive definite. In other cases, they can differ.
(slide 45)

COMPLEMENTS ABOUT THE TOPOLOGY OF R^n - Balls and neighbourhood of a point

Let a ∈ R^n and R > 0 be given.

The open ball with center a and radius R is the set

    B(a,R) = {x ∈ R^n : ‖x - a‖ < R}.

The corresponding closed ball is

    B̄(a,R) = {x ∈ R^n : ‖x - a‖ ≤ R}.

The corresponding sphere is

    S(a,R) = {x ∈ R^n : ‖x - a‖ = R}.

A neighbourhood of a point x in R^n is a set that contains at least one (open) ball centered on x.
(slide 46)

COMPLEMENTS ABOUT THE TOPOLOGY OF R^n - Interior, boundary and closure of a set

Let S ⊆ R^n be given.

A point x ∈ S is an interior point of S iff it admits at least one neighbourhood entirely included in S. The interior of S [denoted int(S)] is the set of all its interior points.

The exterior of S is the interior of its complement in R^n.

The boundary of S [denoted δ(S)] is the set of the points that are neither in its interior nor in its exterior.

The closure of S [denoted cl(S)] is the set resulting from the union of its interior with its boundary.
(slide 47)

COMPLEMENTS ABOUT THE TOPOLOGY OF R^n - Open and closed sets

Let S ⊆ R^n be given.

S is open iff it coincides with its interior. S is closed iff it coincides with its closure. S is open iff its complement in R^n is closed.

The intersection of a finite number of open sets is open; any union of open sets is open.

A closed set contains the limits of its convergent sequences:

    {x^k : k ∈ N} ⊆ S, S closed, x^k → x  ⟹  x ∈ S.
(slide 48)

COMPLEMENTS ABOUT THE TOPOLOGY OF R^n - Bounded and compact sets

Let S ⊆ R^n be given.

S is bounded iff it is included in a ball. S is compact iff, from any sequence in S, one can extract a subsequence which converges to a point of S.

A set S ⊆ R^n is compact iff it is both closed and bounded.
(slide 49)

Important results with regard to optimization

Theorem (Weierstrass)
Assume that f is a continuous (real valued) function defined on a compact set K ⊆ R^n. The problem

    (P_K)  Minimize f(x)
           subject to x ∈ K

admits a global optimum.

Corollary
Assume that f is a continuous (real valued) function defined on R^n, such that f(x) → +∞ when ‖x‖ → +∞ (one says that f is coercive). Then f admits a global minimizer on R^n.
(slide 50)


More information

MATH 5720: Unconstrained Optimization Hung Phan, UMass Lowell September 13, 2018

MATH 5720: Unconstrained Optimization Hung Phan, UMass Lowell September 13, 2018 MATH 57: Unconstrained Optimization Hung Phan, UMass Lowell September 13, 18 1 Global and Local Optima Let a function f : S R be defined on a set S R n Definition 1 (minimizers and maximizers) (i) x S

More information

A Brief Review on Convex Optimization

A Brief Review on Convex Optimization A Brief Review on Convex Optimization 1 Convex set S R n is convex if x,y S, λ,µ 0, λ+µ = 1 λx+µy S geometrically: x,y S line segment through x,y S examples (one convex, two nonconvex sets): A Brief Review

More information

Examination paper for TMA4180 Optimization I

Examination paper for TMA4180 Optimization I Department of Mathematical Sciences Examination paper for TMA4180 Optimization I Academic contact during examination: Phone: Examination date: 26th May 2016 Examination time (from to): 09:00 13:00 Permitted

More information

Lecture 13: Constrained optimization

Lecture 13: Constrained optimization 2010-12-03 Basic ideas A nonlinearly constrained problem must somehow be converted relaxed into a problem which we can solve (a linear/quadratic or unconstrained problem) We solve a sequence of such problems

More information

Analysis-3 lecture schemes

Analysis-3 lecture schemes Analysis-3 lecture schemes (with Homeworks) 1 Csörgő István November, 2015 1 A jegyzet az ELTE Informatikai Kar 2015. évi Jegyzetpályázatának támogatásával készült Contents 1. Lesson 1 4 1.1. The Space

More information

Primal/Dual Decomposition Methods

Primal/Dual Decomposition Methods Primal/Dual Decomposition Methods Daniel P. Palomar Hong Kong University of Science and Technology (HKUST) ELEC5470 - Convex Optimization Fall 2018-19, HKUST, Hong Kong Outline of Lecture Subgradients

More information

Computational Finance

Computational Finance Department of Mathematics at University of California, San Diego Computational Finance Optimization Techniques [Lecture 2] Michael Holst January 9, 2017 Contents 1 Optimization Techniques 3 1.1 Examples

More information

CSCI : Optimization and Control of Networks. Review on Convex Optimization

CSCI : Optimization and Control of Networks. Review on Convex Optimization CSCI7000-016: Optimization and Control of Networks Review on Convex Optimization 1 Convex set S R n is convex if x,y S, λ,µ 0, λ+µ = 1 λx+µy S geometrically: x,y S line segment through x,y S examples (one

More information

ISM206 Lecture Optimization of Nonlinear Objective with Linear Constraints

ISM206 Lecture Optimization of Nonlinear Objective with Linear Constraints ISM206 Lecture Optimization of Nonlinear Objective with Linear Constraints Instructor: Prof. Kevin Ross Scribe: Nitish John October 18, 2011 1 The Basic Goal The main idea is to transform a given constrained

More information

Constrained optimization: direct methods (cont.)

Constrained optimization: direct methods (cont.) Constrained optimization: direct methods (cont.) Jussi Hakanen Post-doctoral researcher jussi.hakanen@jyu.fi Direct methods Also known as methods of feasible directions Idea in a point x h, generate a

More information

Scientific Computing: Optimization

Scientific Computing: Optimization Scientific Computing: Optimization Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 Course MATH-GA.2043 or CSCI-GA.2112, Spring 2012 March 8th, 2011 A. Donev (Courant Institute) Lecture

More information

KKT Examples. Stanley B. Gershwin Massachusetts Institute of Technology

KKT Examples. Stanley B. Gershwin Massachusetts Institute of Technology Stanley B. Gershwin Massachusetts Institute of Technology The purpose of this note is to supplement the slides that describe the Karush-Kuhn-Tucker conditions. Neither these notes nor the slides are a

More information

GEORGIA INSTITUTE OF TECHNOLOGY H. MILTON STEWART SCHOOL OF INDUSTRIAL AND SYSTEMS ENGINEERING LECTURE NOTES OPTIMIZATION III

GEORGIA INSTITUTE OF TECHNOLOGY H. MILTON STEWART SCHOOL OF INDUSTRIAL AND SYSTEMS ENGINEERING LECTURE NOTES OPTIMIZATION III GEORGIA INSTITUTE OF TECHNOLOGY H. MILTON STEWART SCHOOL OF INDUSTRIAL AND SYSTEMS ENGINEERING LECTURE NOTES OPTIMIZATION III CONVEX ANALYSIS NONLINEAR PROGRAMMING THEORY NONLINEAR PROGRAMMING ALGORITHMS

More information

g(x,y) = c. For instance (see Figure 1 on the right), consider the optimization problem maximize subject to

g(x,y) = c. For instance (see Figure 1 on the right), consider the optimization problem maximize subject to 1 of 11 11/29/2010 10:39 AM From Wikipedia, the free encyclopedia In mathematical optimization, the method of Lagrange multipliers (named after Joseph Louis Lagrange) provides a strategy for finding the

More information

Tangent spaces, normals and extrema

Tangent spaces, normals and extrema Chapter 3 Tangent spaces, normals and extrema If S is a surface in 3-space, with a point a S where S looks smooth, i.e., without any fold or cusp or self-crossing, we can intuitively define the tangent

More information

Nonlinear Optimization: What s important?

Nonlinear Optimization: What s important? Nonlinear Optimization: What s important? Julian Hall 10th May 2012 Convexity: convex problems A local minimizer is a global minimizer A solution of f (x) = 0 (stationary point) is a minimizer A global

More information

Some Properties of the Augmented Lagrangian in Cone Constrained Optimization

Some Properties of the Augmented Lagrangian in Cone Constrained Optimization MATHEMATICS OF OPERATIONS RESEARCH Vol. 29, No. 3, August 2004, pp. 479 491 issn 0364-765X eissn 1526-5471 04 2903 0479 informs doi 10.1287/moor.1040.0103 2004 INFORMS Some Properties of the Augmented

More information

EE/AA 578, Univ of Washington, Fall Duality

EE/AA 578, Univ of Washington, Fall Duality 7. Duality EE/AA 578, Univ of Washington, Fall 2016 Lagrange dual problem weak and strong duality geometric interpretation optimality conditions perturbation and sensitivity analysis examples generalized

More information

Elements of Convex Optimization Theory

Elements of Convex Optimization Theory Elements of Convex Optimization Theory Costis Skiadas August 2015 This is a revised and extended version of Appendix A of Skiadas (2009), providing a self-contained overview of elements of convex optimization

More information

Local strong convexity and local Lipschitz continuity of the gradient of convex functions

Local strong convexity and local Lipschitz continuity of the gradient of convex functions Local strong convexity and local Lipschitz continuity of the gradient of convex functions R. Goebel and R.T. Rockafellar May 23, 2007 Abstract. Given a pair of convex conjugate functions f and f, we investigate

More information

Seminars on Mathematics for Economics and Finance Topic 5: Optimization Kuhn-Tucker conditions for problems with inequality constraints 1

Seminars on Mathematics for Economics and Finance Topic 5: Optimization Kuhn-Tucker conditions for problems with inequality constraints 1 Seminars on Mathematics for Economics and Finance Topic 5: Optimization Kuhn-Tucker conditions for problems with inequality constraints 1 Session: 15 Aug 2015 (Mon), 10:00am 1:00pm I. Optimization with

More information

Chapter 2: Preliminaries and elements of convex analysis

Chapter 2: Preliminaries and elements of convex analysis Chapter 2: Preliminaries and elements of convex analysis Edoardo Amaldi DEIB Politecnico di Milano edoardo.amaldi@polimi.it Website: http://home.deib.polimi.it/amaldi/opt-14-15.shtml Academic year 2014-15

More information

Lecture 2 - Unconstrained Optimization Definition[Global Minimum and Maximum]Let f : S R be defined on a set S R n. Then

Lecture 2 - Unconstrained Optimization Definition[Global Minimum and Maximum]Let f : S R be defined on a set S R n. Then Lecture 2 - Unconstrained Optimization Definition[Global Minimum and Maximum]Let f : S R be defined on a set S R n. Then 1. x S is a global minimum point of f over S if f (x) f (x ) for any x S. 2. x S

More information

In view of (31), the second of these is equal to the identity I on E m, while this, in view of (30), implies that the first can be written

In view of (31), the second of these is equal to the identity I on E m, while this, in view of (30), implies that the first can be written 11.8 Inequality Constraints 341 Because by assumption x is a regular point and L x is positive definite on M, it follows that this matrix is nonsingular (see Exercise 11). Thus, by the Implicit Function

More information

minimize x subject to (x 2)(x 4) u,

minimize x subject to (x 2)(x 4) u, Math 6366/6367: Optimization and Variational Methods Sample Preliminary Exam Questions 1. Suppose that f : [, L] R is a C 2 -function with f () on (, L) and that you have explicit formulae for

More information

Part 4: Active-set methods for linearly constrained optimization. Nick Gould (RAL)

Part 4: Active-set methods for linearly constrained optimization. Nick Gould (RAL) Part 4: Active-set methods for linearly constrained optimization Nick Gould RAL fx subject to Ax b Part C course on continuoue optimization LINEARLY CONSTRAINED MINIMIZATION fx subject to Ax { } b where

More information

Written Examination

Written Examination Division of Scientific Computing Department of Information Technology Uppsala University Optimization Written Examination 202-2-20 Time: 4:00-9:00 Allowed Tools: Pocket Calculator, one A4 paper with notes

More information

14. Duality. ˆ Upper and lower bounds. ˆ General duality. ˆ Constraint qualifications. ˆ Counterexample. ˆ Complementary slackness.

14. Duality. ˆ Upper and lower bounds. ˆ General duality. ˆ Constraint qualifications. ˆ Counterexample. ˆ Complementary slackness. CS/ECE/ISyE 524 Introduction to Optimization Spring 2016 17 14. Duality ˆ Upper and lower bounds ˆ General duality ˆ Constraint qualifications ˆ Counterexample ˆ Complementary slackness ˆ Examples ˆ Sensitivity

More information

AM 205: lecture 18. Last time: optimization methods Today: conditions for optimality

AM 205: lecture 18. Last time: optimization methods Today: conditions for optimality AM 205: lecture 18 Last time: optimization methods Today: conditions for optimality Existence of Global Minimum For example: f (x, y) = x 2 + y 2 is coercive on R 2 (global min. at (0, 0)) f (x) = x 3

More information

Optimization. Escuela de Ingeniería Informática de Oviedo. (Dpto. de Matemáticas-UniOvi) Numerical Computation Optimization 1 / 30

Optimization. Escuela de Ingeniería Informática de Oviedo. (Dpto. de Matemáticas-UniOvi) Numerical Computation Optimization 1 / 30 Optimization Escuela de Ingeniería Informática de Oviedo (Dpto. de Matemáticas-UniOvi) Numerical Computation Optimization 1 / 30 Unconstrained optimization Outline 1 Unconstrained optimization 2 Constrained

More information

Numerical Optimization Professor Horst Cerjak, Horst Bischof, Thomas Pock Mat Vis-Gra SS09

Numerical Optimization Professor Horst Cerjak, Horst Bischof, Thomas Pock Mat Vis-Gra SS09 Numerical Optimization 1 Working Horse in Computer Vision Variational Methods Shape Analysis Machine Learning Markov Random Fields Geometry Common denominator: optimization problems 2 Overview of Methods

More information

Lecture 2: Convex Sets and Functions

Lecture 2: Convex Sets and Functions Lecture 2: Convex Sets and Functions Hyang-Won Lee Dept. of Internet & Multimedia Eng. Konkuk University Lecture 2 Network Optimization, Fall 2015 1 / 22 Optimization Problems Optimization problems are

More information

2.3 Linear Programming

2.3 Linear Programming 2.3 Linear Programming Linear Programming (LP) is the term used to define a wide range of optimization problems in which the objective function is linear in the unknown variables and the constraints are

More information

January 29, Introduction to optimization and complexity. Outline. Introduction. Problem formulation. Convexity reminder. Optimality Conditions

January 29, Introduction to optimization and complexity. Outline. Introduction. Problem formulation. Convexity reminder. Optimality Conditions Olga Galinina olga.galinina@tut.fi ELT-53656 Network Analysis Dimensioning II Department of Electronics Communications Engineering Tampere University of Technology, Tampere, Finl January 29, 2014 1 2 3

More information

Lagrange Duality. Daniel P. Palomar. Hong Kong University of Science and Technology (HKUST)

Lagrange Duality. Daniel P. Palomar. Hong Kong University of Science and Technology (HKUST) Lagrange Duality Daniel P. Palomar Hong Kong University of Science and Technology (HKUST) ELEC5470 - Convex Optimization Fall 2017-18, HKUST, Hong Kong Outline of Lecture Lagrangian Dual function Dual

More information

Outline. Roadmap for the NPP segment: 1 Preliminaries: role of convexity. 2 Existence of a solution

Outline. Roadmap for the NPP segment: 1 Preliminaries: role of convexity. 2 Existence of a solution Outline Roadmap for the NPP segment: 1 Preliminaries: role of convexity 2 Existence of a solution 3 Necessary conditions for a solution: inequality constraints 4 The constraint qualification 5 The Lagrangian

More information

Optimization Theory. A Concise Introduction. Jiongmin Yong

Optimization Theory. A Concise Introduction. Jiongmin Yong October 11, 017 16:5 ws-book9x6 Book Title Optimization Theory 017-08-Lecture Notes page 1 1 Optimization Theory A Concise Introduction Jiongmin Yong Optimization Theory 017-08-Lecture Notes page Optimization

More information

Convex Optimization Theory. Chapter 5 Exercises and Solutions: Extended Version

Convex Optimization Theory. Chapter 5 Exercises and Solutions: Extended Version Convex Optimization Theory Chapter 5 Exercises and Solutions: Extended Version Dimitri P. Bertsekas Massachusetts Institute of Technology Athena Scientific, Belmont, Massachusetts http://www.athenasc.com

More information

Course 212: Academic Year Section 1: Metric Spaces

Course 212: Academic Year Section 1: Metric Spaces Course 212: Academic Year 1991-2 Section 1: Metric Spaces D. R. Wilkins Contents 1 Metric Spaces 3 1.1 Distance Functions and Metric Spaces............. 3 1.2 Convergence and Continuity in Metric Spaces.........

More information

Constrained Optimization

Constrained Optimization 1 / 22 Constrained Optimization ME598/494 Lecture Max Yi Ren Department of Mechanical Engineering, Arizona State University March 30, 2015 2 / 22 1. Equality constraints only 1.1 Reduced gradient 1.2 Lagrange

More information

MVE165/MMG631 Linear and integer optimization with applications Lecture 13 Overview of nonlinear programming. Ann-Brith Strömberg

MVE165/MMG631 Linear and integer optimization with applications Lecture 13 Overview of nonlinear programming. Ann-Brith Strömberg MVE165/MMG631 Overview of nonlinear programming Ann-Brith Strömberg 2015 05 21 Areas of applications, examples (Ch. 9.1) Structural optimization Design of aircraft, ships, bridges, etc Decide on the material

More information

Lecture 3: Lagrangian duality and algorithms for the Lagrangian dual problem

Lecture 3: Lagrangian duality and algorithms for the Lagrangian dual problem Lecture 3: Lagrangian duality and algorithms for the Lagrangian dual problem Michael Patriksson 0-0 The Relaxation Theorem 1 Problem: find f := infimum f(x), x subject to x S, (1a) (1b) where f : R n R

More information

1. f(β) 0 (that is, β is a feasible point for the constraints)

1. f(β) 0 (that is, β is a feasible point for the constraints) xvi 2. The lasso for linear models 2.10 Bibliographic notes Appendix Convex optimization with constraints In this Appendix we present an overview of convex optimization concepts that are particularly useful

More information

Mathematical Economics. Lecture Notes (in extracts)

Mathematical Economics. Lecture Notes (in extracts) Prof. Dr. Frank Werner Faculty of Mathematics Institute of Mathematical Optimization (IMO) http://math.uni-magdeburg.de/ werner/math-ec-new.html Mathematical Economics Lecture Notes (in extracts) Winter

More information

Machine Learning. Support Vector Machines. Manfred Huber

Machine Learning. Support Vector Machines. Manfred Huber Machine Learning Support Vector Machines Manfred Huber 2015 1 Support Vector Machines Both logistic regression and linear discriminant analysis learn a linear discriminant function to separate the data

More information

SECTION C: CONTINUOUS OPTIMISATION LECTURE 9: FIRST ORDER OPTIMALITY CONDITIONS FOR CONSTRAINED NONLINEAR PROGRAMMING

SECTION C: CONTINUOUS OPTIMISATION LECTURE 9: FIRST ORDER OPTIMALITY CONDITIONS FOR CONSTRAINED NONLINEAR PROGRAMMING Nf SECTION C: CONTINUOUS OPTIMISATION LECTURE 9: FIRST ORDER OPTIMALITY CONDITIONS FOR CONSTRAINED NONLINEAR PROGRAMMING f(x R m g HONOUR SCHOOL OF MATHEMATICS, OXFORD UNIVERSITY HILARY TERM 5, DR RAPHAEL

More information

Introduction to Optimization Techniques. Nonlinear Optimization in Function Spaces

Introduction to Optimization Techniques. Nonlinear Optimization in Function Spaces Introduction to Optimization Techniques Nonlinear Optimization in Function Spaces X : T : Gateaux and Fréchet Differentials Gateaux and Fréchet Differentials a vector space, Y : a normed space transformation

More information

Nonlinear Programming and the Kuhn-Tucker Conditions

Nonlinear Programming and the Kuhn-Tucker Conditions Nonlinear Programming and the Kuhn-Tucker Conditions The Kuhn-Tucker (KT) conditions are first-order conditions for constrained optimization problems, a generalization of the first-order conditions we

More information

Optimization Tutorial 1. Basic Gradient Descent

Optimization Tutorial 1. Basic Gradient Descent E0 270 Machine Learning Jan 16, 2015 Optimization Tutorial 1 Basic Gradient Descent Lecture by Harikrishna Narasimhan Note: This tutorial shall assume background in elementary calculus and linear algebra.

More information

Appendix A Taylor Approximations and Definite Matrices

Appendix A Taylor Approximations and Definite Matrices Appendix A Taylor Approximations and Definite Matrices Taylor approximations provide an easy way to approximate a function as a polynomial, using the derivatives of the function. We know, from elementary

More information

TMA 4180 Optimeringsteori KARUSH-KUHN-TUCKER THEOREM

TMA 4180 Optimeringsteori KARUSH-KUHN-TUCKER THEOREM TMA 4180 Optimeringsteori KARUSH-KUHN-TUCKER THEOREM H. E. Krogstad, IMF, Spring 2012 Karush-Kuhn-Tucker (KKT) Theorem is the most central theorem in constrained optimization, and since the proof is scattered

More information

On the Convergence of the Concave-Convex Procedure

On the Convergence of the Concave-Convex Procedure On the Convergence of the Concave-Convex Procedure Bharath K. Sriperumbudur and Gert R. G. Lanckriet Department of ECE UC San Diego, La Jolla bharathsv@ucsd.edu, gert@ece.ucsd.edu Abstract The concave-convex

More information

Lecture 1: Introduction. Outline. B9824 Foundations of Optimization. Fall Administrative matters. 2. Introduction. 3. Existence of optima

Lecture 1: Introduction. Outline. B9824 Foundations of Optimization. Fall Administrative matters. 2. Introduction. 3. Existence of optima B9824 Foundations of Optimization Lecture 1: Introduction Fall 2009 Copyright 2009 Ciamac Moallemi Outline 1. Administrative matters 2. Introduction 3. Existence of optima 4. Local theory of unconstrained

More information

Unconstrained optimization

Unconstrained optimization Chapter 4 Unconstrained optimization An unconstrained optimization problem takes the form min x Rnf(x) (4.1) for a target functional (also called objective function) f : R n R. In this chapter and throughout

More information

CONSTRAINT QUALIFICATIONS, LAGRANGIAN DUALITY & SADDLE POINT OPTIMALITY CONDITIONS

CONSTRAINT QUALIFICATIONS, LAGRANGIAN DUALITY & SADDLE POINT OPTIMALITY CONDITIONS CONSTRAINT QUALIFICATIONS, LAGRANGIAN DUALITY & SADDLE POINT OPTIMALITY CONDITIONS A Dissertation Submitted For The Award of the Degree of Master of Philosophy in Mathematics Neelam Patel School of Mathematics

More information

Lecture 1: Introduction. Outline. B9824 Foundations of Optimization. Fall Administrative matters. 2. Introduction. 3. Existence of optima

Lecture 1: Introduction. Outline. B9824 Foundations of Optimization. Fall Administrative matters. 2. Introduction. 3. Existence of optima B9824 Foundations of Optimization Lecture 1: Introduction Fall 2010 Copyright 2010 Ciamac Moallemi Outline 1. Administrative matters 2. Introduction 3. Existence of optima 4. Local theory of unconstrained

More information

Lecture 1: Entropy, convexity, and matrix scaling CSE 599S: Entropy optimality, Winter 2016 Instructor: James R. Lee Last updated: January 24, 2016

Lecture 1: Entropy, convexity, and matrix scaling CSE 599S: Entropy optimality, Winter 2016 Instructor: James R. Lee Last updated: January 24, 2016 Lecture 1: Entropy, convexity, and matrix scaling CSE 599S: Entropy optimality, Winter 2016 Instructor: James R. Lee Last updated: January 24, 2016 1 Entropy Since this course is about entropy maximization,

More information

8. Constrained Optimization

8. Constrained Optimization 8. Constrained Optimization Daisuke Oyama Mathematics II May 11, 2018 Unconstrained Maximization Problem Let X R N be a nonempty set. Definition 8.1 For a function f : X R, x X is a (strict) local maximizer

More information

1 Overview. 2 A Characterization of Convex Functions. 2.1 First-order Taylor approximation. AM 221: Advanced Optimization Spring 2016

1 Overview. 2 A Characterization of Convex Functions. 2.1 First-order Taylor approximation. AM 221: Advanced Optimization Spring 2016 AM 221: Advanced Optimization Spring 2016 Prof. Yaron Singer Lecture 8 February 22nd 1 Overview In the previous lecture we saw characterizations of optimality in linear optimization, and we reviewed the

More information

The Lagrangian L : R d R m R r R is an (easier to optimize) lower bound on the original problem:

The Lagrangian L : R d R m R r R is an (easier to optimize) lower bound on the original problem: HT05: SC4 Statistical Data Mining and Machine Learning Dino Sejdinovic Department of Statistics Oxford Convex Optimization and slides based on Arthur Gretton s Advanced Topics in Machine Learning course

More information

LINEAR AND NONLINEAR PROGRAMMING

LINEAR AND NONLINEAR PROGRAMMING LINEAR AND NONLINEAR PROGRAMMING Stephen G. Nash and Ariela Sofer George Mason University The McGraw-Hill Companies, Inc. New York St. Louis San Francisco Auckland Bogota Caracas Lisbon London Madrid Mexico

More information

Numerisches Rechnen. (für Informatiker) M. Grepl P. Esser & G. Welper & L. Zhang. Institut für Geometrie und Praktische Mathematik RWTH Aachen

Numerisches Rechnen. (für Informatiker) M. Grepl P. Esser & G. Welper & L. Zhang. Institut für Geometrie und Praktische Mathematik RWTH Aachen Numerisches Rechnen (für Informatiker) M. Grepl P. Esser & G. Welper & L. Zhang Institut für Geometrie und Praktische Mathematik RWTH Aachen Wintersemester 2011/12 IGPM, RWTH Aachen Numerisches Rechnen

More information

In English, this means that if we travel on a straight line between any two points in C, then we never leave C.

In English, this means that if we travel on a straight line between any two points in C, then we never leave C. Convex sets In this section, we will be introduced to some of the mathematical fundamentals of convex sets. In order to motivate some of the definitions, we will look at the closest point problem from

More information

Lakehead University ECON 4117/5111 Mathematical Economics Fall 2003

Lakehead University ECON 4117/5111 Mathematical Economics Fall 2003 Test 1 September 26, 2003 1. Construct a truth table to prove each of the following tautologies (p, q, r are statements and c is a contradiction): (a) [p (q r)] [(p q) r] (b) (p q) [(p q) c] 2. Answer

More information

Finite Dimensional Optimization Part III: Convex Optimization 1

Finite Dimensional Optimization Part III: Convex Optimization 1 John Nachbar Washington University March 21, 2017 Finite Dimensional Optimization Part III: Convex Optimization 1 1 Saddle points and KKT. These notes cover another important approach to optimization,

More information

Inequality Constraints

Inequality Constraints Chapter 2 Inequality Constraints 2.1 Optimality Conditions Early in multivariate calculus we learn the significance of differentiability in finding minimizers. In this section we begin our study of the

More information