
1 NONLINEAR PROGRAMMING (Hillier & Lieberman, Introduction to Operations Research, 8th edition)

2 Nonlinear Programming

Linear programming has a fundamental role in OR. In linear programming all of the functions (objective function and constraint functions) are linear. This assumption frequently does not hold, and nonlinear programming problems are formulated instead:
Find $x = (x_1, x_2, \ldots, x_n)$ to
Maximize $f(x)$,
subject to $g_i(x) \le b_i$, for $i = 1, 2, \ldots, m$, and $x \ge 0$.

3 Nonlinear Programming

There are many types of nonlinear programming problems, depending on whether $f(x)$ and the $g_i(x)$ are assumed to be differentiable or piecewise linear functions. Different algorithms are used for the different types. Some problems can be solved very efficiently, whilst others, even small ones, can be very difficult. Nonlinear programming is a particularly large subject, so only some important types are dealt with here. Some applications are given in the following.

4 Application: product mix problem

In product mix problems (such as the Wyndor Glass Co. problem) the goal is to determine the optimal mix of production levels. Sometimes price elasticity is present: the amount of product sold has an inverse relation to the price charged (see the curve on the next slide).

5 Price elasticity

$p(x)$ is the price required to sell $x$ units. $c$ is the unit cost for producing and distributing the product. The profit from producing and selling $x$ units is:
$P(x) = x\,p(x) - cx$
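As a quick illustration (not from the book), here is a minimal sketch assuming a hypothetical linear demand curve $p(x) = a - bx$, which makes $P(x) = x\,p(x) - cx$ a concave quadratic; all coefficient values are made up:

```python
# Profit under price elasticity: P(x) = x*p(x) - c*x.
# The linear demand curve p(x) = a - b*x and every coefficient below
# are illustrative assumptions, not values from the book.

def p(x, a=100.0, b=2.0):
    """Hypothetical price required to sell x units."""
    return a - b * x

def profit(x, c=20.0):
    """Profit from producing and selling x units at unit cost c."""
    return x * p(x) - c * x

# Here P(x) = 80x - 2x^2, a concave quadratic maximized at x = 20.
print(profit(20.0))  # 800.0
```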

6 Product mix problem

If each product has a similar profit function, the overall objective function is
$f(x) = \sum_{j=1}^{n} P_j(x_j)$
Another source of nonlinearity: marginal cost varies with the production level. It may decrease when the production level is increased, due to the learning curve effect. It may increase, due to overtime or more expensive production facilities, when production increases.

7 Application: transportation problem

Determine the optimal plan for shipping goods from various sources to various destinations (see the P&T Company problem). The cost per unit shipped may not be fixed: volume discounts are sometimes available for large shipments, so the marginal cost can have a pattern like the one in the figure. The cost of shipping $x$ units is then a piecewise linear function $C(x)$, with slope equal to the marginal cost.
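A minimal sketch of such a piecewise linear shipping cost function; the breakpoints and marginal costs below are illustrative assumptions, not data from the book:

```python
# Piecewise linear shipping cost C(x): the marginal cost drops at each
# volume-discount breakpoint, so C(x) is piecewise linear with slope
# equal to the current marginal cost.

BREAKS = [(0, 6.0), (10, 5.0), (25, 4.0)]   # (threshold, marginal cost)

def shipping_cost(x):
    """Total cost C(x) of shipping x units under volume discounts."""
    cost = 0.0
    for i, (lo, rate) in enumerate(BREAKS):
        hi = BREAKS[i + 1][0] if i + 1 < len(BREAKS) else float("inf")
        cost += rate * max(0.0, min(x, hi) - lo)   # units on this segment
    return cost

print(shipping_cost(30))  # 10*6.0 + 15*5.0 + 5*4.0 = 155.0
```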

8 Volume discounts on shipping costs

(Figure: marginal cost and total cost of shipping as functions of the amount shipped.)

9 Transportation problem

If each combination of source and destination has a similar shipping cost function, so that the cost of shipping $x_{ij}$ units from source $i$ ($i = 1, 2, \ldots, m$) to destination $j$ ($j = 1, 2, \ldots, n$) is given by a nonlinear function $C_{ij}(x_{ij})$, the overall objective function is:
Minimize $f(x) = \sum_{i=1}^{m} \sum_{j=1}^{n} C_{ij}(x_{ij})$

10 Graphical illustration

Example: Wyndor Glass Co. problem with a nonlinear constraint (figure).

11 Graphical illustration

Example: Wyndor Glass Co. with a nonlinear objective function (figure).

12 Example: Wyndor Glass Co. (3) (figure)

13 Global and local optimum

Example: $f(x)$ with three local maxima (where?) and three local minima (where?). Which of them are global?

14 Guaranteed local maximum

A local maximum is guaranteed to be a global maximum when
$\frac{d^2 f(x)}{dx^2} \le 0$, for all $x$.
A function that always curves downward is a concave function (concave downward). A function that always curves upward is a convex function (concave upward).

15 Guaranteed local optimum

For a nonlinear program with no constraints and a concave objective function, a local maximum is the global maximum. For a nonlinear program with no constraints and a convex objective function, a local minimum is the global minimum. With constraints, these guarantees still hold if the feasible region is a convex set. The feasible region for an NP problem is a convex set if all the $g_i(x)$ are convex functions.

16 Example: Wyndor Glass with one concave $g_i(x)$ (figure)

17 Convex Programming problem

To guarantee that a local maximum is a global maximum for an NP problem with constraints $g_i(x) \le b_i$, for $i = 1, 2, \ldots, m$, and $x \ge 0$, the objective function $f(x)$ must be a concave function and each $g_i(x)$ must be a convex function. See Appendix 2 of Hillier's book for convexity properties and definitions.

18 Types of NP problems

Unconstrained optimization: no constraints. Maximize $f(x)$. A necessary condition for a solution $x = x^*$ to be optimal is
$\frac{\partial f}{\partial x_j} = 0$ at $x = x^*$, for $j = 1, 2, \ldots, n$.
When $f(x)$ is a concave function this condition is also sufficient. When $x_j$ has a constraint $x_j \ge 0$, the condition changes to:
$\frac{\partial f}{\partial x_j} \le 0$ at $x = x^*$, if $x_j^* = 0$;
$\frac{\partial f}{\partial x_j} = 0$ at $x = x^*$, if $x_j^* > 0$.

19 Example: nonnegative constraint (figure)

20 Types of NP problems

Linearly constrained optimization: all constraints are linear and the objective function is nonlinear. Special case: quadratic programming, where the objective function is quadratic. It has many applications, e.g. portfolio selection and predictive control.
Convex programming assumptions for maximization:
1. $f(x)$ is a concave function.
2. Each $g_i(x)$ is a convex function.
For a minimization problem, $f(x)$ must be a convex function.

21 Types of NP problems

Separable programming is a special case of convex programming with the additional assumption:
3. All $f(x)$ and $g_i(x)$ are separable functions.
A separable function is a function in which each term involves only a single variable (it satisfies the assumption of additivity but not proportionality):
$f(x) = \sum_{j=1}^{n} f_j(x_j)$
Nonconvex programming: a local optimum is not assured to be a global optimum.

22 Types of NP problems

Geometric programming is applied to engineering design as well as economics and statistics problems. The objective function and constraint functions are of the form
$g(x) = \sum_{i=1}^{N} c_i P_i(x)$, where $P_i(x) = x_1^{a_{i1}} x_2^{a_{i2}} \cdots x_n^{a_{in}}$.
The $c_i$ and $a_{ij}$ are typically physical constants. When all $c_i$ are strictly positive, the functions are generalized positive polynomials (posynomials). If the objective function is to be minimized, a convex programming algorithm can be applied.

23 Types of NP problems

Fractional programming: Maximize $f(x) = \frac{f_1(x)}{f_2(x)}$. When $f(x)$ has the linear fractional programming form
$f(x) = \frac{cx + c_0}{dx + d_0}$,
the problem can be transformed into a linear programming problem.
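The slide does not spell out the transformation; the standard route is the Charnes-Cooper substitution, sketched here under the assumption that $dx + d_0 > 0$ over the feasible region:

```latex
% Charnes-Cooper transformation (sketch). Assumption: dx + d_0 > 0 on
% the feasible region {x : Ax <= b, x >= 0}. Substitute
% y = x/(dx + d_0) and t = 1/(dx + d_0).
\[
  \max \frac{cx + c_0}{dx + d_0}
  \quad \text{s.t.} \quad Ax \le b,\ x \ge 0
\]
becomes the linear program
\[
  \max\ cy + c_0 t
  \quad \text{s.t.} \quad Ay - bt \le 0, \quad dy + d_0 t = 1, \quad
  y \ge 0,\ t \ge 0,
\]
and $x = y/t$ recovers the original variables.
```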

24 One-variable unconstrained optimization

Methods for solving unconstrained optimization with only one variable $x$ ($n = 1$), where the differentiable function $f(x)$ is concave. Necessary and sufficient condition for $x = x^*$ to be optimal:
$\frac{df(x)}{dx} = 0$ at $x = x^*$.

25 Solving the optimization problem

If $f(x)$ is not simple, the problem cannot be solved analytically; in that case, search procedures can solve the problem numerically. We will describe two common search procedures: the bisection method and Newton's method.

26 Bisection method

Since $f(x)$ is concave, we know that
$\frac{df(x)}{dx} > 0$ if $x < x^*$, $\quad \frac{df(x)}{dx} = 0$ if $x = x^*$, $\quad \frac{df(x)}{dx} < 0$ if $x > x^*$.
(These relations also hold if $\frac{d^2 f(x)}{dx^2} = 0$ for some, but not all, values of $x$.) If the derivative at a trial $x$ is positive, $x$ is a lower bound on $x^*$; if it is negative, $x$ is an upper bound on $x^*$.

27 Bisection method

Notation:
$x'$ = current trial solution,
$\underline{x}$ = current lower bound on $x^*$,
$\bar{x}$ = current upper bound on $x^*$,
$\varepsilon$ = error tolerance for $x^*$.
In the bisection method, the new trial solution is the midpoint between the two current bounds: $x' = \frac{\underline{x} + \bar{x}}{2}$.

28 Algorithm of the Bisection Method

Initialization: select $\varepsilon$ and find initial bounds $\underline{x}$, $\bar{x}$. Select the initial trial solution $x' = \frac{\underline{x} + \bar{x}}{2}$.
Iteration:
1. Evaluate $\frac{df(x)}{dx}$ at $x = x'$.
2. If $\frac{df(x)}{dx} \ge 0$, reset $\underline{x} = x'$.
3. If $\frac{df(x)}{dx} \le 0$, reset $\bar{x} = x'$.
4. Select a new $x' = \frac{\underline{x} + \bar{x}}{2}$.
Stopping rule: if $\bar{x} - \underline{x} \le 2\varepsilon$, stop ($x'$ is within $\varepsilon$ of $x^*$). Otherwise, go to step 1.

29 Example

Maximize $f(x) = 12x - 3x^4 - 2x^6$.

30 Solution

The first two derivatives are
$\frac{df(x)}{dx} = 12(1 - x^3 - x^5)$, $\qquad \frac{d^2 f(x)}{dx^2} = -12(3x^2 + 5x^4)$.
With $\varepsilon = 0.01$, the table records $\frac{df(x')}{dx}$, $\underline{x}$, $\bar{x}$, the new $x'$, and $f(x')$ at each iteration.
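A short sketch that reproduces the bisection iterations; the initial bounds $\underline{x} = 0$, $\bar{x} = 2$ are an assumption, since the slide's iteration table did not survive transcription:

```python
# Bisection method applied to the example f(x) = 12x - 3x^4 - 2x^6.

def fprime(x):
    return 12.0 * (1.0 - x**3 - x**5)        # df/dx

def bisection(lo, hi, eps=0.01):
    while hi - lo > 2 * eps:                 # stopping rule
        x = (lo + hi) / 2.0                  # new trial: midpoint of bounds
        if fprime(x) >= 0:
            lo = x                           # x' is a lower bound on x*
        else:
            hi = x                           # x' is an upper bound on x*
        print(f"x'={x:.6f}  df/dx={fprime(x):+.4f}  "
              f"bounds=({lo:.6f}, {hi:.6f})")
    return (lo + hi) / 2.0

print("final x' =", bisection(0.0, 2.0))     # converges near x* = 0.836
```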

31 Solution

The final bounds bracket the optimum: $\underline{x} < x^* < \bar{x}$. The bisection method converges relatively slowly, since only information about the first derivative is being used. Additional information can be obtained by using the second derivative, as in Newton's method.

32 Newton's method

This method approximates $f(x)$ within a neighborhood of the current trial solution by a quadratic function. This quadratic approximation is the Taylor series truncated after the second-derivative term:
$f(x_{i+1}) \approx f(x_i) + f'(x_i)(x_{i+1} - x_i) + \frac{f''(x_i)}{2}(x_{i+1} - x_i)^2$
It is maximized by setting $f'(x_{i+1})$ equal to zero ($x_i$, $f(x_i)$, $f'(x_i)$ and $f''(x_i)$ are constants):
$f'(x_{i+1}) \approx f'(x_i) + f''(x_i)(x_{i+1} - x_i) = 0 \implies x_{i+1} = x_i - \frac{f'(x_i)}{f''(x_i)}$

33 Algorithm of Newton's Method

Initialization: select $\varepsilon$ and find an initial trial solution $x_i$ by inspection. Set $i = 1$.
Iteration $i$:
1. Calculate $f'(x_i)$ and $f''(x_i)$.
2. Set $x_{i+1} = x_i - \frac{f'(x_i)}{f''(x_i)}$.
Stopping rule: if $|x_{i+1} - x_i| \le \varepsilon$, stop; $x_{i+1}$ is essentially optimal. Otherwise, set $i = i + 1$ and perform another iteration.

34 Example

Maximize again $f(x) = 12x - 3x^4 - 2x^6$. The new solution is given by:
$x_{i+1} = x_i - \frac{f'(x_i)}{f''(x_i)} = x_i + \frac{12(1 - x_i^3 - x_i^5)}{12(3x_i^2 + 5x_i^4)} = x_i + \frac{1 - x_i^3 - x_i^5}{3x_i^2 + 5x_i^4}$
Selecting $x_1 = 1$ and a small $\varepsilon$, the table records $x_i$, $f(x_i)$, $f'(x_i)$, $f''(x_i)$ and $x_{i+1}$ at each iteration.
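A matching sketch of Newton's method on the same function; the tolerance value is an assumption, since it did not survive transcription:

```python
# Newton's method on f(x) = 12x - 3x^4 - 2x^6, starting at x1 = 1.

def fprime(x):
    return 12.0 * (1.0 - x**3 - x**5)          # f'(x)

def fsecond(x):
    return -12.0 * (3.0 * x**2 + 5.0 * x**4)   # f''(x)

def newton(x, eps=1e-5, max_iter=50):
    for i in range(1, max_iter + 1):
        x_next = x - fprime(x) / fsecond(x)    # maximize the quadratic fit
        print(f"i={i}  x_i={x:.6f}  x_i+1={x_next:.6f}")
        if abs(x_next - x) <= eps:             # stopping rule
            return x_next
        x = x_next
    return x

print("x* ≈", newton(1.0))   # converges in a few iterations to about 0.836
```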

35 Multivariable unconstrained optimization

Problem: maximizing a concave function $f(x)$ of multiple variables $x = (x_1, x_2, \ldots, x_n)$ with no constraints. Necessary and sufficient condition for optimality: all partial derivatives equal to zero. When there is no analytical solution, a numerical search procedure must be used. One of these is the gradient search procedure: it identifies and uses the direction of movement from the current trial solution that maximizes the rate at which $f(x)$ increases.

36 Gradient search procedure

Use the values of the partial derivatives to select the specific direction in which to move, using the gradient. The gradient at a point $x = x'$ is the vector of partial derivatives evaluated at $x = x'$:
$\nabla f(x') = \left( \frac{\partial f}{\partial x_1}, \frac{\partial f}{\partial x_2}, \ldots, \frac{\partial f}{\partial x_n} \right)$ at $x = x'$
Move in the direction of this gradient until $f(x)$ stops increasing. Each iteration changes the trial solution:
Reset $x' = x' + t^* \nabla f(x')$

37 Gradient search procedure

where $t^*$ is the value of $t \ge 0$ that maximizes $f(x' + t \nabla f(x'))$:
$f(x' + t^* \nabla f(x')) = \max_{t \ge 0} f(x' + t \nabla f(x'))$
The function $f(x' + t \nabla f(x'))$ is simply $f(x)$ where
$x_j = x_j' + t \left. \frac{\partial f}{\partial x_j} \right|_{x = x'}$, for $j = 1, 2, \ldots, n$.
Iterations continue until $\nabla f(x) = 0$ within a tolerance $\varepsilon$:
$\left| \frac{\partial f}{\partial x_j} \right| \le \varepsilon$, for $j = 1, 2, \ldots, n$.

38 Summary of gradient search procedure

Initialization: select $\varepsilon$ and any initial trial solution $x'$. Go to the stopping rule.
Iteration:
1. Express $f(x' + t \nabla f(x'))$ as a function of $t$ by setting
$x_j = x_j' + t \left. \frac{\partial f}{\partial x_j} \right|_{x = x'}$, for $j = 1, 2, \ldots, n$,
and substituting these expressions into $f(x)$.

39 Summary of gradient search procedure

Iteration (concl.):
2. Use a one-variable search procedure to find $t = t^*$ that maximizes $f(x' + t \nabla f(x'))$ over $t \ge 0$.
3. Reset $x' = x' + t^* \nabla f(x')$. Go to the stopping rule.
Stopping rule: evaluate $\nabla f(x')$ at $x = x'$ and check whether
$\left| \frac{\partial f}{\partial x_j} \right| \le \varepsilon$, for $j = 1, 2, \ldots, n$.
If so, stop with the current $x'$ as the approximation of $x^*$. Otherwise, perform another iteration.

40 Example

Maximize $f(x) = 2x_1 x_2 + 2x_2 - x_1^2 - 2x_2^2$. Thus,
$\frac{\partial f}{\partial x_1} = 2x_2 - 2x_1$, $\qquad \frac{\partial f}{\partial x_2} = 2x_1 + 2 - 4x_2$.
Verify that $f(x)$ is concave (see Appendix 2 of Hillier's book). Suppose that $x' = (0, 0)$ is the initial trial solution. Then $\nabla f(0, 0) = (0, 2)$.

41 Example (2)

Iteration 1 sets $x_1 = 0 + t(0) = 0$ and $x_2 = 0 + t(2) = 2t$. Substituting these expressions into $f(x)$:
$f(x' + t \nabla f(x')) = f(0, 2t) = 2(0)(2t) + 2(2t) - 0^2 - 2(2t)^2 = 4t - 8t^2$
so
$f(0, 2t^*) = \max_{t \ge 0} f(0, 2t) = \max_{t \ge 0} \{4t - 8t^2\}$

42 Example (3)

Since $\frac{d}{dt}(4t - 8t^2) = 4 - 16t = 0$, it follows that $t^* = \frac{1}{4}$, so
Reset $x' = (0, 0) + \frac{1}{4}(0, 2) = \left(0, \frac{1}{2}\right)$
This completes the first iteration. For the new trial solution, the gradient is:
$\nabla f\left(0, \frac{1}{2}\right) = (1, 0)$

43 Example (4)

As $\varepsilon < 1$, iteration 2 sets
$x = \left(0, \frac{1}{2}\right) + t(1, 0) = \left(t, \frac{1}{2}\right)$
so
$f(x' + t \nabla f(x')) = f\left(t, \frac{1}{2}\right) = 2t \cdot \frac{1}{2} + 2 \cdot \frac{1}{2} - t^2 - 2\left(\frac{1}{2}\right)^2 = t - t^2 + \frac{1}{2}$
$f\left(t^*, \frac{1}{2}\right) = \max_{t \ge 0} f\left(t, \frac{1}{2}\right) = \max_{t \ge 0} \left\{t - t^2 + \frac{1}{2}\right\}$

44 Example (5)

Because $\frac{d}{dt}\left(t - t^2 + \frac{1}{2}\right) = 1 - 2t = 0$, then $t^* = \frac{1}{2}$, so
Reset $x' = \left(0, \frac{1}{2}\right) + \frac{1}{2}(1, 0) = \left(\frac{1}{2}, \frac{1}{2}\right)$
This completes the second iteration. See the figure.
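A compact sketch of the whole procedure reproducing these iterations; the bounded scalar minimizer stands in for the exact line search done by hand, and the upper bound $t \le 1$ is an implementation assumption:

```python
# Gradient search on f(x) = 2*x1*x2 + 2*x2 - x1^2 - 2*x2^2 from (0, 0).
import numpy as np
from scipy.optimize import minimize_scalar

def f(x):
    return 2*x[0]*x[1] + 2*x[1] - x[0]**2 - 2*x[1]**2

def grad(x):
    return np.array([2*x[1] - 2*x[0], 2*x[0] + 2 - 4*x[1]])

x, eps = np.array([0.0, 0.0]), 1e-4
while np.max(np.abs(grad(x))) > eps:      # stopping rule
    g = grad(x)
    # t* maximizes f(x' + t * grad f(x')) over t >= 0
    t = minimize_scalar(lambda t: -f(x + t*g),
                        bounds=(0.0, 1.0), method="bounded").x
    x = x + t * g                         # reset x' = x' + t* grad f(x')
    print(x)                              # (0, 0.5), (0.5, 0.5), ...
print("optimum ≈", x)                     # converges to (1, 1)
```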

45 Illustration of example

(Figure.) The optimal solution is $(1, 1)$, since $\nabla f(1, 1) = (0, 0)$.

46 Newton's method

It uses a quadratic approximation of the objective function $f(x)$. When the objective function is concave and $x$ and its gradient $\nabla f(x)$ are written as column vectors, the solution $x'$ that maximizes the approximating quadratic function is:
$x' = x - [\nabla^2 f(x)]^{-1} \nabla f(x)$
where $\nabla^2 f(x)$ is the $n \times n$ Hessian matrix.
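A minimal sketch (not from the slides) applying this formula to the previous example; since that objective is quadratic, the Hessian is constant and a single Newton step from $(0, 0)$ lands on the optimum $(1, 1)$:

```python
# One Newton step on f = 2*x1*x2 + 2*x2 - x1^2 - 2*x2^2.
import numpy as np

def grad(x):
    return np.array([2*x[1] - 2*x[0], 2*x[0] + 2 - 4*x[1]])

H = np.array([[-2.0,  2.0],    # constant Hessian of the quadratic f
              [ 2.0, -4.0]])

x = np.array([0.0, 0.0])
x_new = x - np.linalg.solve(H, grad(x))   # x' = x - [Hessian]^(-1) grad f
print(x_new)                              # [1. 1.]
```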

47 Newton's method

The inverse of the Hessian matrix is commonly approximated in various ways. These approximations of Newton's method are referred to as quasi-Newton methods (or variable metric methods). Recall that this topic was mentioned in Intelligent Systems, e.g. in neural network learning.

48 Conditions for optimality

Problem | Necessary conditions for optimality | Also sufficient if:
One-variable unconstrained | $\frac{df}{dx} = 0$ | $f(x)$ concave
Multivariable unconstrained | $\frac{\partial f}{\partial x_j} = 0$, $j = 1, 2, \ldots, n$ | $f(x)$ concave
Constrained, nonnegativity constraints only | $\frac{\partial f}{\partial x_j} = 0$, $j = 1, 2, \ldots, n$ (or $\le 0$ if $x_j = 0$) | $f(x)$ concave
General constrained problem | Karush-Kuhn-Tucker conditions | $f(x)$ concave and $g_i(x)$ convex ($i = 1, 2, \ldots, m$)

49 Karush-Kuhn-Tucker conditions

Theorem: assume that $f(x), g_1(x), g_2(x), \ldots, g_m(x)$ are differentiable functions satisfying certain regularity conditions. Then $x^* = (x_1^*, x_2^*, \ldots, x_n^*)$ can be an optimal solution for the NP problem only if there exist $m$ numbers $u_1, u_2, \ldots, u_m$ such that all the KKT conditions are satisfied:
1. $\frac{\partial f}{\partial x_j} - \sum_{i=1}^{m} u_i \frac{\partial g_i}{\partial x_j} \le 0$
2. $x_j^* \left( \frac{\partial f}{\partial x_j} - \sum_{i=1}^{m} u_i \frac{\partial g_i}{\partial x_j} \right) = 0$
both at $x = x^*$, for $j = 1, 2, \ldots, n$.

50 Karush-Kuhn-Tucker conditions

3. $g_i(x^*) - b_i \le 0$, for $i = 1, 2, \ldots, m$.
4. $u_i [g_i(x^*) - b_i] = 0$, for $i = 1, 2, \ldots, m$.
5. $x_j^* \ge 0$, for $j = 1, 2, \ldots, n$.
6. $u_i \ge 0$, for $i = 1, 2, \ldots, m$.
Conditions 2 and 4 each require that one of the two factors be zero. Thus, conditions 3 and 4 can be combined:
(3,4) $g_i(x^*) - b_i = 0$ (or $\le 0$ if $u_i = 0$), for $i = 1, 2, \ldots, m$.

51 Karush-Kuhn-Tucker conditions

Similarly, conditions 1 and 2 can be combined:
(1,2) $\frac{\partial f}{\partial x_j} - \sum_{i=1}^{m} u_i \frac{\partial g_i}{\partial x_j} = 0$ (or $\le 0$ if $x_j^* = 0$), for $j = 1, 2, \ldots, n$.
The variables $u_i$ correspond to the dual variables in linear programming. The previous conditions are necessary but not sufficient to ensure optimality (see the conditions for optimality on slide 48).

52 Karush-Kuhn-Tucker conditions

Corollary: assume that $f(x)$ is concave and $g_1(x), g_2(x), \ldots, g_m(x)$ are convex functions, where all these functions satisfy the regularity conditions. Then $x^* = (x_1^*, x_2^*, \ldots, x_n^*)$ is an optimal solution if and only if all the conditions of the theorem are satisfied.

53 Example

Maximize $f(x) = \ln(x_1 + 1) + x_2$ subject to $2x_1 + x_2 \le 3$ and $x_1 \ge 0$, $x_2 \ge 0$. Thus $m = 1$, and $g_1(x) = 2x_1 + x_2$ is convex. Further, $f(x)$ is concave (check it using Appendix 2). Thus, any solution that satisfies the KKT conditions is an optimal solution.

54 Example: KKT conditions

1. (j = 1) $\frac{1}{x_1 + 1} - 2u_1 \le 0$
2. (j = 1) $x_1 \left( \frac{1}{x_1 + 1} - 2u_1 \right) = 0$
1. (j = 2) $1 - u_1 \le 0$
2. (j = 2) $x_2 (1 - u_1) = 0$
3. $2x_1 + x_2 - 3 \le 0$
4. $u_1 (2x_1 + x_2 - 3) = 0$
5. $x_1 \ge 0$, $x_2 \ge 0$
6. $u_1 \ge 0$

55 Example: solving the KKT conditions

From condition 1 (j = 2), $u_1 \ge 1$; from condition 5, $x_1 \ge 0$. Therefore $\frac{1}{x_1 + 1} - 2u_1 < 0$, so $x_1 = 0$ from condition 2 (j = 1).
$u_1 \ge 1 > 0$ implies that $2x_1 + x_2 - 3 = 0$, from condition 4.
The two previous steps imply that $x_2 = 3$.
$x_2 \ne 0$ implies that $u_1 = 1$, from condition 2 (j = 2).
No conditions are violated for $x_1 = 0$, $x_2 = 3$, $u_1 = 1$. Consequently $x^* = (0, 3)$.
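A minimal numeric check of these KKT conditions at the candidate $x_1 = 0$, $x_2 = 3$, $u_1 = 1$:

```python
# Check the KKT conditions at the candidate solution of
# maximize ln(x1 + 1) + x2  s.t.  2*x1 + x2 <= 3, x >= 0.
x1, x2, u1 = 0.0, 3.0, 1.0
df_dx1 = 1.0 / (x1 + 1.0)          # partial derivatives of f
df_dx2 = 1.0
g, b = 2*x1 + x2, 3.0              # constraint g1(x) <= b1

checks = {
    "1 (j=1)": df_dx1 - 2*u1 <= 0,
    "2 (j=1)": x1 * (df_dx1 - 2*u1) == 0,
    "1 (j=2)": df_dx2 - u1 <= 0,
    "2 (j=2)": x2 * (df_dx2 - u1) == 0,
    "3":       g - b <= 0,
    "4":       u1 * (g - b) == 0,
    "5":       x1 >= 0 and x2 >= 0,
    "6":       u1 >= 0,
}
print(all(checks.values()))        # True: (0, 3) satisfies all conditions
```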

56 Quadratic Programming

Maximize $f(x) = cx - \frac{1}{2} x^T Q x$ subject to $Ax \le b$ and $x \ge 0$. The objective function can be expressed as:
$f(x) = cx - \frac{1}{2} x^T Q x = \sum_{j=1}^{n} c_j x_j - \frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} q_{ij} x_i x_j$

57 Example

Maximize $f(x) = 15x_1 + 30x_2 + 4x_1 x_2 - 2x_1^2 - 4x_2^2$ subject to $x_1 + 2x_2 \le 30$ and $x_1 \ge 0$, $x_2 \ge 0$. In this case,
$c = \begin{bmatrix} 15 & 30 \end{bmatrix}$, $\quad x = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}$, $\quad Q = \begin{bmatrix} 4 & -4 \\ -4 & 8 \end{bmatrix}$, $\quad A = \begin{bmatrix} 1 & 2 \end{bmatrix}$, $\quad b = [30]$.

58 Solving QP problems

The objective function is concave if $x^T Q x \ge 0$ for all $x$, i.e., if $Q$ is a positive semidefinite matrix. The KKT conditions for quadratic programming problems can be transformed into equality constraints by introducing slack variables ($y_1$, $y_2$, $v_1$). The KKT conditions can then be condensed, thanks to the complementary variable pairs $(x_1, y_1)$, $(x_2, y_2)$, $(u_1, v_1)$, by introducing a complementarity constraint (combining conditions 2 and 4).

59 Solving QP problems

Applying the KKT conditions to the example:
1. (j = 1) $15 + 4x_2 - 4x_1 - u_1 \le 0$
   (j = 2) $30 + 4x_1 - 8x_2 - 2u_1 \le 0$
2. (j = 1) $x_1 (15 + 4x_2 - 4x_1 - u_1) = 0$
   (j = 2) $x_2 (30 + 4x_1 - 8x_2 - 2u_1) = 0$
3. $x_1 + 2x_2 - 30 \le 0$
4. $u_1 (x_1 + 2x_2 - 30) = 0$
5. $x_1 \ge 0$, $x_2 \ge 0$
6. $u_1 \ge 0$

60 Solving QP problems

Introducing the slack variables:
1. (j = 1) $4x_1 - 4x_2 + u_1 - y_1 = 15$
   (j = 2) $-4x_1 + 8x_2 + 2u_1 - y_2 = 30$
2. (j = 1) $x_1 y_1 = 0$
   (j = 2) $x_2 y_2 = 0$
3. $x_1 + 2x_2 + v_1 = 30$
4. $u_1 v_1 = 0$
Combining 2 (j = 1), 2 (j = 2) and 4 gives the complementarity constraint: $x_1 y_1 + x_2 y_2 + u_1 v_1 = 0$.

61 Solving QP problems

$4x_1 - 4x_2 + u_1 - y_1 = 15$
$-4x_1 + 8x_2 + 2u_1 - y_2 = 30$   (linear programming constraints)
$x_1 + 2x_2 + v_1 = 30$
$x_1, x_2, u_1, y_1, y_2, v_1 \ge 0$
plus the complementarity constraint $x_1 y_1 + x_2 y_2 + u_1 v_1 = 0$.
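As a cross-check, a sketch that solves the same QP with a general-purpose NLP solver (SLSQP), rather than the modified simplex method described on the next slide:

```python
# Solve: maximize 15*x1 + 30*x2 + 4*x1*x2 - 2*x1^2 - 4*x2^2
#        s.t. x1 + 2*x2 <= 30, x >= 0.
from scipy.optimize import minimize

def neg_f(x):                       # SLSQP minimizes, so negate f
    x1, x2 = x
    return -(15*x1 + 30*x2 + 4*x1*x2 - 2*x1**2 - 4*x2**2)

res = minimize(
    neg_f,
    x0=[0.0, 0.0],
    method="SLSQP",
    bounds=[(0, None), (0, None)],                  # x >= 0
    constraints=[{"type": "ineq",                    # 30 - x1 - 2*x2 >= 0
                  "fun": lambda x: 30 - x[0] - 2*x[1]}],
)
print(res.x, -res.fun)              # optimum (12, 9) with f = 270
```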

62 Solving QP problems

Using the previous properties, QP problems can be solved with a modified simplex method. See the example of a QP problem worked out in Hillier's book. Excel, LINGO, LINDO, and MPL/CPLEX can all solve quadratic programming problems.

63 Separable Programming

It is assumed that $f(x)$ is concave and the $g_i(x)$ are convex, with
$f(x) = \sum_{j=1}^{n} f_j(x_j)$
where $f(x)$ is a (concave) piecewise linear function (see the example). If the $g_i(x)$ are linear, this problem can be reformulated as an LP problem by using a separate variable for each line segment. The same technique can be used for nonlinear $g_i(x)$.

64 Example (figure)

65 Example (figure)

66 Convex Programming

Many algorithms can be used, falling into 3 categories:
1. Gradient algorithms, where the gradient search procedure is modified to avoid violating a constraint. Example: the generalized reduced gradient (GRG) method.
2. Sequential unconstrained algorithms, which include penalty function and barrier function methods. Example: the sequential unconstrained minimization technique (SUMT).
3. Sequential approximation algorithms, which include linear and quadratic approximation methods. Example: the Frank-Wolfe algorithm for linear constraints.

67 Frank-Wolfe algorithm

It is a sequential linear approximation algorithm. It replaces the objective function $f(x)$ by the first-order Taylor expansion of $f(x)$ around $x = x'$, namely:
$f(x) \approx f(x') + \nabla f(x')(x - x') = f(x') + \sum_{j=1}^{n} \frac{\partial f(x')}{\partial x_j} (x_j - x_j')$
As $f(x')$ and $\nabla f(x') x'$ have fixed values, they can be dropped to give the equivalent linear objective function:
$g(x) = \nabla f(x') x = \sum_{j=1}^{n} c_j x_j$, where $c_j = \left. \frac{\partial f}{\partial x_j} \right|_{x = x'}$.

68 Frank-Wolfe algorithm

The simplex method is applied to find a solution $x_{LP}$. Then, choose the point that maximizes the nonlinear objective function along the line segment between the current trial solution and $x_{LP}$. This can be done using a one-variable unconstrained optimization algorithm. The algorithm continues iterating until the stopping condition is satisfied.

69 Summary of Frank-Wolfe algorithm

Initialization: find a feasible initial trial solution $x^{(0)}$, e.g. using LP to find an initial BF solution. Set $k = 1$.
Iteration $k$:
1. For $j = 1, 2, \ldots, n$, evaluate $\frac{\partial f(x)}{\partial x_j}$ at $x = x^{(k-1)}$ and set $c_j$ equal to this value.
2. Find an optimal solution $x_{LP}^{(k)}$ by solving the LP problem:
Maximize $g(x) = \sum_{j=1}^{n} c_j x_j$, subject to $Ax \le b$ and $x \ge 0$.

70 Summary of Frank-Wolfe algorithm

3. For the variable $t \in [0, 1]$, set
$h(t) = f(x)$ for $x = x^{(k-1)} + t \left( x_{LP}^{(k)} - x^{(k-1)} \right)$,
so that $h(t)$ gives the value of $f(x)$ on the line segment between $x^{(k-1)}$ (where $t = 0$) and $x_{LP}^{(k)}$ (where $t = 1$). Use one-variable unconstrained optimization to maximize $h(t)$, giving $x^{(k)}$.
Stopping rule: if $x^{(k-1)}$ and $x^{(k)}$ are sufficiently close, stop; $x^{(k)}$ is the estimate of the optimal solution. Otherwise, reset $k = k + 1$.

71 Example

Maximize $f(x) = 5x_1 - x_1^2 + 8x_2 - 2x_2^2$ subject to $3x_1 + 2x_2 \le 6$ and $x_1 \ge 0$, $x_2 \ge 0$. Note that
$\frac{\partial f}{\partial x_1} = 5 - 2x_1$, $\qquad \frac{\partial f}{\partial x_2} = 8 - 4x_2$,
so the unconstrained maximum $x = (2.5, 2)$ violates the functional constraint.

72 Example (2)

Iteration 1: $x = (0, 0)$ is feasible (initial trial $x^{(0)}$). Step 1 gives $c_1 = 5$ and $c_2 = 8$, so $g(x) = 5x_1 + 8x_2$.
Step 2: solving graphically yields $x_{LP}^{(1)} = (0, 3)$.
Step 3: the points between $(0, 0)$ and $(0, 3)$ are
$(x_1, x_2) = (0, 0) + t[(0, 3) - (0, 0)] = (0, 3t)$, for $t \in [0, 1]$.
This expression gives
$h(t) = f(0, 3t) = 8(3t) - 2(3t)^2 = 24t - 18t^2$

73 Example (3)

The value $t = t^*$ that maximizes $h(t)$ is given by
$\frac{dh(t)}{dt} = 24 - 36t = 0$
so $t^* = 2/3$. This result leads to the next trial solution (see figure):
$x^{(1)} = (0, 0) + \frac{2}{3}[(0, 3) - (0, 0)] = (0, 2)$
Iteration 2: following the same procedure leads to the next trial solution $x^{(2)} = (5/6, 7/6)$.
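A sketch of the full Frank-Wolfe loop on this example, using an LP solver for step 2 and a bounded line search for step 3; it reproduces the iterates $x^{(1)} = (0, 2)$ and $x^{(2)} = (5/6, 7/6)$:

```python
# Frank-Wolfe on: maximize 5*x1 - x1^2 + 8*x2 - 2*x2^2
#                 s.t. 3*x1 + 2*x2 <= 6, x >= 0.
import numpy as np
from scipy.optimize import linprog, minimize_scalar

def f(x):
    return 5*x[0] - x[0]**2 + 8*x[1] - 2*x[1]**2

def grad(x):
    return np.array([5 - 2*x[0], 8 - 4*x[1]])

x = np.array([0.0, 0.0])                 # feasible initial trial x(0)
for k in range(1, 8):
    c = grad(x)                          # step 1: linearized coefficients
    # step 2: LP subproblem (linprog minimizes, so pass -c)
    x_lp = linprog(-c, A_ub=[[3.0, 2.0]], b_ub=[6.0],
                   bounds=[(0, None)] * 2).x
    # step 3: maximize h(t) = f(x + t*(x_lp - x)) over t in [0, 1]
    t = minimize_scalar(lambda t: -f(x + t*(x_lp - x)),
                        bounds=(0.0, 1.0), method="bounded").x
    x = x + t * (x_lp - x)
    print(f"k={k}  x={x.round(4)}")      # (0, 2), (0.8333, 1.1667), ...
print("optimum near (1, 1.5)")
```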

74 Example (4) (figure)

75 Example (5)

The figure shows the next iterations. Note that the trial solutions alternate between two trajectories that intersect at the point $x = (1, 1.5)$. This is the optimal solution (it satisfies the KKT conditions). Using quadratic instead of linear approximations leads to much faster convergence.

76 Sequential unconstrained minimization

Main versions of SUMT:
exterior-point algorithm: deals with infeasible solutions and a penalty function;
interior-point algorithm: deals with feasible solutions and a barrier function.
It exploits the fact that unconstrained problems are much easier to solve. Each unconstrained problem in the sequence chooses a smaller and smaller value of $r$, and solves for $x$ to
Maximize $P(x; r) = f(x) - r B(x)$

77 SUMT

$B(x)$ is a barrier function with the following properties (for $x$ feasible for the original problem):
1. $B(x)$ is small when $x$ is far from the boundary of the feasible region.
2. $B(x)$ is large when $x$ is close to the boundary of the feasible region.
3. $B(x) \to \infty$ as the distance from the (nearest) boundary of the feasible region $\to 0$.
The most common choice of $B(x)$ (when all assumptions of convex programming are satisfied, $P(x; r)$ is concave):
$B(x) = \sum_{i=1}^{m} \frac{1}{b_i - g_i(x)} + \sum_{j=1}^{n} \frac{1}{x_j}$

78 Summary of SUMT

Initialization: find a feasible initial trial solution $x^{(0)}$ that is not on the boundary of the feasible region. Set $k = 1$. Choose values for $r$ and $\theta < 1$ (e.g. $r = 1$ and $\theta = 0.01$).
Iteration $k$: starting from $x^{(k-1)}$, apply a multivariable unconstrained optimization procedure (e.g. the gradient search procedure) to find a local maximum $x^{(k)}$ of
$P(x; r) = f(x) - r \left[ \sum_{i=1}^{m} \frac{1}{b_i - g_i(x)} + \sum_{j=1}^{n} \frac{1}{x_j} \right]$

79 Summary of SUMT

Stopping rule: if the change from $x^{(k-1)}$ to $x^{(k)}$ is very small, stop and use $x^{(k)}$ as the local maximum. Otherwise, set $k = k + 1$ and $r = \theta r$ for another iteration.
SUMT can be extended to equality constraints. Note that SUMT is quite sensitive to numerical instability, so it should be applied cautiously.

80 Example

Maximize $f(x) = x_1 x_2$ subject to $x_1^2 + x_2 \le 3$ and $x_1 \ge 0$, $x_2 \ge 0$.
$g_1(x) = x_1^2 + x_2$ is convex, but $f(x) = x_1 x_2$ is not concave.
Initialization: $(x_1, x_2) = x^{(0)} = (1, 1)$, $r = 1$ and $\theta = 0.01$. For each iteration:
$P(x; r) = x_1 x_2 - r \left[ \frac{1}{3 - x_1^2 - x_2} + \frac{1}{x_1} + \frac{1}{x_2} \right]$

81 Example (2)

For $r = 1$, maximization leads to $x^{(1)} = (0.90, 1.36)$. With $r = 10^{-2}, 10^{-4}, \ldots$, the iterates $x^{(k)}$ in the table converge to $(1, 2)$.
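A sketch of the SUMT loop on this example; Nelder-Mead stands in for the gradient search procedure, and points outside the barrier's domain are rejected with an infinite value (an implementation choice, not part of the method as stated):

```python
# SUMT on: maximize x1*x2  s.t.  x1^2 + x2 <= 3, x >= 0.
import numpy as np
from scipy.optimize import minimize

def neg_P(x, r):                           # -P(x; r), to be minimized
    x1, x2 = x
    slack = 3.0 - x1**2 - x2
    if slack <= 0 or x1 <= 0 or x2 <= 0:   # outside the barrier's domain
        return np.inf
    return -(x1*x2 - r*(1.0/slack + 1.0/x1 + 1.0/x2))

x, r, theta = np.array([1.0, 1.0]), 1.0, 0.01
for k in range(1, 6):
    x = minimize(neg_P, x, args=(r,), method="Nelder-Mead").x
    print(f"k={k}  r={r:g}  x={x.round(4)}")   # (0.90, 1.36) -> ... -> (1, 2)
    r *= theta                             # shrink the barrier weight
```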

82 Nonconvex Programming

The assumptions of convex programming often fail. Nonconvex programming problems can be much more difficult to solve. Dealing with non-differentiable and non-continuous objective functions is usually very complicated. LINDO, LINGO and MPL have efficient algorithms to deal with these problems. Simple problems can be solved by running hill climbing from several starting points, finding a local maximum several times.

83 Nonconvex Programming

An example is given in Hillier's book using Excel Solver to solve simple problems. More difficult problems can use the Evolutionary Solver. It uses metaheuristics based on genetics, evolution and survival of the fittest: a genetic algorithm. The next section presents some well-known metaheuristics.
