2.3 Linear Programming
Linear Programming (LP) is the term used for a wide range of optimization problems in which the objective function is linear in the unknown variables and the constraints are a combination of linear equalities and inequalities. LP problems occur in many real-life settings in transportation, manufacturing, scheduling, and so on. LP also has applications in nonlinear programming (NLP) through so-called sequential linear programming (SLP), a technique consisting of successive linearizations that lead to a sequence of LP problems.

Two types of solution techniques are used in LP:

Basic exchange algorithms:
- Simplex algorithm of Dantzig;
- Criss-cross algorithm.

Interior point algorithms:
- Ellipsoid algorithm of Khachiyan;
- Projective algorithm of Karmarkar;
- Path-following algorithms.

Multidisciplinary Design Optimization of Aircrafts 141
The general form of the LP problem is

minimize c_1 x_1 + c_2 x_2 + ... + c_n x_n (2.78)
subject to a_i1 x_1 + a_i2 x_2 + ... + a_in x_n ≤ b_i, LE constraints, i = 1, ..., l
a_j1 x_1 + a_j2 x_2 + ... + a_jn x_n ≥ b_j, GE constraints, j = l + 1, ..., l + r
a_k1 x_1 + a_k2 x_2 + ... + a_kn x_n = b_k, EQ constraints, k = l + r + 1, ..., l + r + q
x_1 ≥ 0, x_2 ≥ 0, ..., x_n ≥ 0

where a_ij are constant coefficients, b_i are fixed positive real constants and x_i are the design variables. When all constraints are converted into equality type, the LP problem is said to be in standard form:

- LE-type constraints are converted into equalities by adding a non-negative variable, called the slack variable, to the left-hand side.
- GE-type constraints are converted into equalities by subtracting a non-negative variable, called the surplus variable, from the left-hand side.
- Any variable unrestricted in sign, also called a free variable, can be put in standard form by expressing it as the difference of two non-negative variables.
The standard form of the LP (2.78) can be written as

minimize c^T x (2.79)
subject to Ax = b
x ≥ 0

where c and x are (n + l + r) × 1 vectors, A is an m × (n + l + r) matrix and b is an m × 1 vector, with m = l + r + q.
Example 2.11: Cast the LP problem in standard form

Given the problem in general form

maximize −3x_1 − 5x_2
subject to −x_1 + x_2 ≤ −2
4x_1 − x_2 ≤ 5
x_1 ≥ 0, x_2 unrestricted in sign

First, convert maximize to minimize and make the right-hand side positive:

minimize 3x_1 + 5x_2
subject to x_1 − x_2 ≥ 2
4x_1 − x_2 ≤ 5
x_1 ≥ 0, x_2 unrestricted in sign
Then, introduce surplus and slack variables, S and s, respectively, to convert the inequality constraints into equalities:

minimize 3x_1 + 5x_2
subject to x_1 − x_2 − S = 2
4x_1 − x_2 + s = 5
x_1 ≥ 0, x_2 unrestricted in sign, S ≥ 0, s ≥ 0

Finally, express the free variable as the difference of two non-negative variables, x_2 = y_1 − y_2:

minimize 3x_1 + 5y_1 − 5y_2
subject to x_1 − y_1 + y_2 − S = 2
4x_1 − y_1 + y_2 + s = 5
x_1 ≥ 0, y_1 ≥ 0, y_2 ≥ 0, S ≥ 0, s ≥ 0
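As a check on this conversion, the general-form problem can also be handed directly to an off-the-shelf LP solver, which performs the standard-form bookkeeping (slack, surplus and split free variables) internally. A sketch using SciPy's linprog (SciPy is an assumption here, not part of the notes), taking the Example 2.11 data as minimize 3x_1 + 5x_2 subject to x_1 − x_2 ≥ 2, 4x_1 − x_2 ≤ 5, x_1 ≥ 0, x_2 free:

```python
from scipy.optimize import linprog

# minimize 3*x1 + 5*x2
c = [3.0, 5.0]
# x1 - x2 >= 2 is rewritten as -(x1 - x2) <= -2 for linprog's A_ub form
A_ub = [[-1.0, 1.0],   # -x1 + x2 <= -2
        [4.0, -1.0]]   # 4*x1 - x2 <= 5
b_ub = [-2.0, 5.0]
bounds = [(0, None),    # x1 >= 0
          (None, None)] # x2 unrestricted: the solver splits it internally
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.x, res.fun)   # optimum at (0, -5) with objective -25
```

At (0, −5) both x_1 ≥ 0 and the second constraint are active, and no feasible direction decreases the objective, so the reported vertex is indeed optimal.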
2.3.1 Simplex Method

As shown in [6], the maximum value of the function occurs on the boundary of the feasible region, and there is always an extreme point attaining this maximum value. The simplex method provides a systematic algebraic procedure for moving from one extreme point to an adjacent one while improving the function value. The method was developed by Dantzig in 1947 [17, 18, 19] and has been widely used since then.
Input: LP problem in standard form and starting feasible point x_0
Output: minimum of f
begin
    create initial tableau
    repeat
        if there exists a negative objective-row coefficient then
            choose incoming variable j corresponding to the most negative objective-row coefficient (pivot column)
            if there exists a positive column coefficient a_ij then
                perform ratio test b_i / a_ij for positive coefficients a_ij and determine row i corresponding to the least ratio (pivot row)
                perform elementary row operations to make the pivot 1 and the other coefficients in the column 0, including the first row
            else
                solution unbounded; stop
            end
        else
            optimum attained
        end
    until optimum attained
end

Pseudo-code 13: Simplex Method
Example 2.12: Solve the two-variable LP problem [6]:

maximize 2x_1 + x_2
subject to 2x_1 − x_2 ≤ 8
x_1 + 2x_2 ≤ 14
−x_1 + x_2 ≤ 4
x_1 ≥ 0, x_2 ≥ 0

Convert the LP problem into standard form:

minimize −2x_1 − x_2 + 0x_3 + 0x_4 + 0x_5
subject to 2x_1 − x_2 + x_3 = 8
x_1 + 2x_2 + x_4 = 14
−x_1 + x_2 + x_5 = 4
x_1 ≥ 0, x_2 ≥ 0, x_3 ≥ 0, x_4 ≥ 0, x_5 ≥ 0
Thus, c^T = [−2, −1, 0, 0, 0], x^T = [x_1, x_2, x_3, x_4, x_5] and

A = [ 2  −1  1  0  0 ]
    [ 1   2  0  1  0 ]
    [−1   1  0  0  1 ]

A basic solution is obtained if any two variables are set to zero (because A has rank 3 and there are 5 variables). Setting x_1 = x_2 = 0 yields x_3 = 8, x_4 = 14 and x_5 = 4.

First Tableau

        x_1   x_2   x_3   x_4   x_5   rhs
f       −2    −1     0     0     0     0
x_3     [2]   −1     1     0     0     8
x_4      1     2     0     1     0    14
x_5     −1     1     0     0     1     4

Second Tableau

        x_1   x_2    x_3   x_4   x_5   rhs
f        0    −2      1     0     0     8
x_1      1   −1/2    1/2    0     0     4
x_4      0   [5/2]  −1/2    1     0    10
x_5      0    1/2    1/2    0     1     8
Third Tableau

        x_1   x_2    x_3    x_4   x_5   rhs
f        0     0     3/5    4/5    0    16
x_1      1     0     2/5    1/5    0     6
x_2      0     1    −1/5    2/5    0     4
x_5      0     0     3/5   −1/5    1     6

The minimum value is −16, thus the maximum value of f is 16, at the point x_1 = 6, x_2 = 4, x_3 = 0, x_4 = 0, x_5 = 6. Further details about the simplex method applied to GE and EQ constraints can be found in reference [6]. Also refer to that reference for the concept of duality in LP.
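Pseudo-code 13 can be sketched compactly with NumPy (NumPy is an assumption, not part of the notes). This is an illustrative implementation only: it assumes, as in Example 2.12, that the last m columns of A are the slack columns and therefore provide an obvious starting basis.

```python
import numpy as np

def simplex(c, A, b):
    """Minimize c@x subject to A@x = b, x >= 0, assuming the last m
    columns of A form an identity (slack variables give the start basis)."""
    m, n = A.shape
    # tableau: objective row [c | 0] on top, then the constraint rows [A | b]
    T = np.zeros((m + 1, n + 1))
    T[0, :n] = c
    T[1:, :n] = A
    T[1:, n] = b
    basis = list(range(n - m, n))          # slack variables start in the basis
    while True:
        j = int(np.argmin(T[0, :n]))       # most negative objective coefficient
        if T[0, j] >= -1e-12:
            break                          # optimum attained
        ratios = [T[i, n] / T[i, j] if T[i, j] > 1e-12 else np.inf
                  for i in range(1, m + 1)]
        i = int(np.argmin(ratios)) + 1     # ratio test -> pivot row
        if ratios[i - 1] == np.inf:
            raise ValueError("solution unbounded")
        T[i] /= T[i, j]                    # make the pivot 1
        for r in range(m + 1):
            if r != i:
                T[r] -= T[r, j] * T[i]     # zero the rest of the pivot column
        basis[i - 1] = j
    x = np.zeros(n)
    for k, i in enumerate(basis):
        x[i] = T[k + 1, n]
    return x, -T[0, n]                     # f-row rhs holds minus the objective

# Example 2.12 in standard form
c = np.array([-2.0, -1.0, 0.0, 0.0, 0.0])
A = np.array([[2.0, -1.0, 1.0, 0.0, 0.0],
              [1.0,  2.0, 0.0, 1.0, 0.0],
              [-1.0, 1.0, 0.0, 0.0, 1.0]])
b = np.array([8.0, 14.0, 4.0])
x, fmin = simplex(c, A, b)
print(x, fmin)   # x1 = 6, x2 = 4, minimum -16 (so the maximum of f is 16)
```

Running this reproduces the three tableaus above: the pivots land on [2] and then [5/2], exactly as marked.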
2.3.2 Interior Point Approach

Consider the LP problem in standard form

maximize c^T x (2.80)
subject to Ax = b
x ≥ 0

Let n be the number of variables in x and let m be the number of constraints, corresponding to the number of rows of A. The rows of A give the directions normal to the constraints; thus, the columns of A^T are the normals to the constraints and c is the direction of steepest increase. In interior point methods, the initial guess x_0 is a strictly interior point, i.e., all its components are strictly positive. To determine a direction that increases the objective function for maximization, a scaling scheme is first applied and a projection idea is then used to obtain a reasonable direction. There are various scaling schemes, but Karmarkar's [35] is chosen here for its simplicity.
Let x_0 = [x_1^0, x_2^0, ..., x_n^0]^T be the initial point and define the diagonal scaling matrix D as

D = diag(x_1^0, x_2^0, ..., x_n^0) (2.81)

and the transformed coordinates x̂ such that

x = Dx̂ (2.82)

The transformed problem at the current point becomes

maximize ĉ^T x̂ (2.83)
subject to Âx̂ = b
x̂ ≥ 0

where ĉ = Dc and Â = AD.
The search direction is then given by the projection vector

ĉ_p = Pĉ (2.84)

where the projection operator is given by

P = I − Â^T(ÂÂ^T)^{−1}Â (2.85)

The step Δx̂ is then computed as

Δx̂ = ασĉ_p (2.86)

where α ∈ ]0, 1[ (typically a value between 0.6 and 0.98 is chosen) and σ is the maximum extension parameter, given as

σ = 1 / |ĉ_p^−| (2.87)

where ĉ_p^− is the most negative component of ĉ_p.
Input: LP problem in standard form, starting feasible interior point x_0 and convergence parameters ε_a and ε_r
Output: maximum of f
begin
    k = 0
    repeat
        define scaling matrix D using (2.81)
        evaluate ĉ = Dc and Â = AD
        compute projection operator P using (2.85)
        find search direction ĉ_p from (2.84)
        compute parameters α and σ using (2.87)
        determine increment Δx = ασDĉ_p
        determine new point: x_{k+1} = x_k + Δx
        if |c^T x_{k+1} − c^T x_k| ≤ ε_a + ε_r |c^T x_k| is satisfied for two successive iterations then
            converged
        else
            set k = k + 1
        end
    until converged
end

Pseudo-code 14: Interior Point Method
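Pseudo-code 14 can be sketched in a few lines of NumPy. The toy problem below (maximize x_1 + x_2 subject to x_1 + x_2 + x_3 = 2 with x > 0) is an assumption chosen for illustration, and the sketch omits the two-iteration convergence test of the pseudo-code in favor of a fixed iteration budget:

```python
import numpy as np

def affine_scaling_max(c, A, x0, alpha=0.9, iters=30):
    """Scaling-and-projection interior point sketch for:
    maximize c@x subject to A@x = b, x > 0, starting from the
    strictly feasible x0 (b is implied by x0's feasibility)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        D = np.diag(x)                       # scaling matrix (2.81)
        Ah, ch = A @ D, D @ c                # transformed problem (2.83)
        P = np.eye(len(x)) - Ah.T @ np.linalg.solve(Ah @ Ah.T, Ah)  # (2.85)
        cp = P @ ch                          # projected direction (2.84)
        neg = cp.min()
        if neg >= -1e-12:                    # nothing pushes toward the x >= 0
            break                            # boundary: at the optimal face
        sigma = 1.0 / abs(neg)               # maximum extension (2.87)
        x = x + alpha * sigma * (D @ cp)     # step, mapped back by D (2.86)
    return x

# maximize x1 + x2 subject to x1 + x2 + x3 = 2 (x3 a slack), x > 0
c = np.array([1.0, 1.0, 0.0])
A = np.array([[1.0, 1.0, 1.0]])
x = affine_scaling_max(c, A, x0=[0.5, 0.5, 1.0])
print(c @ x)   # approaches the optimum value 2 from the interior
```

Note how the iterates stay strictly positive: σ measures the distance to the boundary in scaled coordinates, and α < 1 stops each step just short of it.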
Example 2.13: Problem Modeling 1 - Structural Design Problem [6]

Consider the platform support system shown, used for scaffolding. The maximum load each cable can support is given in a table (cable vs. max. load [lb]; values in [6]). Determine the maximum total load that the system can support.

From the static equilibrium of forces and moments, W_i = F_i^L + F_i^R, with F_i^L = W_i b/(a + b) and F_i^R = W_i a/(a + b), where W_i is the weight applied to platform i, F_i^L and F_i^R are the forces on the cables attached to the left and right ends, and a and b are the distances of W_i to the right and left ends. The resulting LP has the form

maximize W_1 + W_2
subject to W_1 + W_2 ≤ ... (cables 3 and 4)
4W_1 + 3W_2 ≤ ... (cable 2)
4W_1 + W_2 ≤ ... (cable 1)
W_1 ≥ 0, W_2 ≥ 0
2.4 Constrained Gradient-Based Minimization

The majority of engineering problems involve constrained minimization. A common problem in aerodynamics is minimizing drag with constraints on lift. In structures, one often wishes to minimize the weight of the structure with constraints on stress and deflection. The constraints in these problems are most often nonlinear, and therefore it is important to learn about methods for the solution of nonlinearly constrained optimization problems. As we will see, such problems can be converted into a sequence of unconstrained problems, which can then be solved with the methods we are already familiar with.

Constrained problems may be expressed in the following nonlinear programming (NP) form:

minimize f(x) w.r.t. x ∈ R^n (2.88)
subject to g_i(x) ≤ 0, i = 1, ..., m
h_j(x) = 0, j = 1, ..., l

where x = [x_1, x_2, ..., x_n]^T is a column vector of n real-valued design variables, f is the objective or cost function, the g's are the inequality constraints and the h's are the equality constraints.
A point x that satisfies all the constraints is called feasible. An inequality constraint g_i(x) ≤ 0 is said to be active at the feasible point x if g_i(x) = 0 and inactive if g_i(x) < 0.

Conceptually, (2.88) can be expressed in the form

minimize f(x) (2.89)
subject to x ∈ Ω

where Ω is the feasible region defined by all constraints as

Ω = {x ∈ R^n : g ≤ 0, h = 0}

Four gradient-based methods are presented for solving the NP problem:

- Rosen's Gradient Projection method for linear constraints;
- Zoutendijk's Method of Feasible Directions;
- Generalized Reduced Gradient method;
- Sequential Quadratic Programming method.
Example 2.14: Graphical Solution of a Constrained Optimization Problem

minimize f(x) = 4x_1^2 − x_1 x_2 + ... w.r.t. x_1, x_2
subject to c_1(x) ≤ 0
c_2(x) ≤ 0

Problems are constraint-bound when the constraints themselves dictate the values of the design variables. When many constraints are active at the optimum, the gradient of the objective is not zero there, and a modification of the gradient methods is needed.
2.4.1 KKT Optimality Conditions and Lagrange Multipliers

The optimality conditions for nonlinearly constrained problems (2.88) are important because they form the basis of algorithms for solving such problems. In contrast to the straightforward necessary condition for unconstrained problems, ∇f(x*) = 0, NP problems naturally require the consideration of the constraints in the optimality conditions. First, only equality constraints are considered and the concept of Lagrange multipliers is introduced. Then, inequalities are studied and the KKT optimality conditions for constrained problems, derived by Karush, Kuhn and Tucker [36, 37], are expressed.
Nonlinear Equality Constraints

Consider the following optimization problem with equality constraints:

minimize f(x) (2.90)
w.r.t. x ∈ R^n
subject to h_j(x) = 0, j = 1, ..., l

To solve this problem, one could solve for l components of x by using the equality constraints to express them in terms of the other components. Function f would then depend only on n − l unconstrained variables, for which the optimality conditions are known (see unconstrained minimization). However, this procedure is only feasible for simple explicit functions. Lagrange devised a method to solve this problem in general.
At a stationary point, the total differential of the objective function has to be equal to zero, i.e.,

df = (∂f/∂x_1)dx_1 + (∂f/∂x_2)dx_2 + ... + (∂f/∂x_n)dx_n = ∇f^T dx = 0. (2.91)

For a feasible point, the total differential of each constraint (h_1, ..., h_l) must also be zero, and so

dh_j = (∂h_j/∂x_1)dx_1 + ... + (∂h_j/∂x_n)dx_n = ∇h_j^T dx = 0, (j = 1, ..., l) (2.92)

Lagrange suggested multiplying each constraint variation by a scalar λ_j and adding it to the objective function variation,

df + Σ_{j=1}^{l} λ_j dh_j = 0  ⇒  Σ_{i=1}^{n} ( ∂f/∂x_i + Σ_{j=1}^{l} λ_j ∂h_j/∂x_i ) dx_i = 0. (2.93)

Note that the components of the variation vector dx are independent and arbitrary. Thus, for this equation to be satisfied, the vector λ has to be such that the expression inside the parentheses vanishes, i.e.,

∂f/∂x_i + Σ_{j=1}^{l} λ_j ∂h_j/∂x_i = 0, (i = 1, ..., n) (2.94)
Consider the Lagrangian function, defined as

L(x, λ) = f(x) + Σ_{j=1}^{l} λ_j h_j(x) = f(x) + λ^T h(x) (2.95)

If x is a stationary point of this function, then applying the necessary optimality conditions for unconstrained problems yields

∂L/∂x_i = ∂f/∂x_i + Σ_{j=1}^{l} λ_j ∂h_j/∂x_i = 0, (i = 1, ..., n) (2.96)
∂L/∂λ_j = h_j(x) = 0, (j = 1, ..., l).

These first-order conditions, known as the Karush-Kuhn-Tucker (KKT) conditions, are necessary conditions for the optimum of a constrained problem. They can also be written as ∇_x L = 0 and ∇_λ L = 0. Note that the Lagrangian function is defined such that making it stationary with respect to the Lagrange multipliers recovers the constraints of the original problem. The constrained optimization problem of n variables and l constraints has thus been transformed into an unconstrained problem of n + l variables.
Example 2.15: Problem with Single Equality Constraint

Consider the following equality constrained problem:

minimize f(x) = x_1 + x_2
subject to h(x) = x_1^2 + x_2^2 − 2 = 0

By inspection, the feasible region for this problem is a circle of radius √2. The solution x* is obviously (−1, −1)^T. From any other point on the circle it is easy to find a way to move in the feasible region (the boundary of the circle) while decreasing f.

The Lagrangian is

L = x_1 + x_2 + λ(x_1^2 + x_2^2 − 2)

and the optimality conditions are

∇_x L = [1 + 2λx_1, 1 + 2λx_2]^T = 0  ⇒  x_1 = x_2 = −1/(2λ)
∇_λ L = x_1^2 + x_2^2 − 2 = 0  ⇒  λ = ±1/2

In this case λ = 1/2 corresponds to the minimum, and the negative value of the Lagrange multiplier corresponds to the maximum. These two situations can be distinguished by checking for positive definiteness of the Hessian of the Lagrangian.
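This example can be reproduced numerically with SciPy's SLSQP routine, a sequential quadratic programming method of the family listed earlier (SciPy and the chosen starting point are assumptions, not part of the notes):

```python
from scipy.optimize import minimize

# Example 2.15: minimize x1 + x2 subject to x1^2 + x2^2 - 2 = 0
res = minimize(lambda x: x[0] + x[1],
               x0=[-1.2, -0.8],          # start near the feasible circle
               method="SLSQP",
               constraints={"type": "eq",
                            "fun": lambda x: x[0]**2 + x[1]**2 - 2.0})
print(res.x)   # converges to approximately (-1, -1)
```

Being a local method, SLSQP started near (1, 1) could instead stall at the maximizer, the other stationary point of the Lagrangian, which is exactly the ambiguity the Hessian check above resolves.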
Also note that at the solution the constraint normal ∇h(x*) is parallel to ∇f(x*), i.e., there is a scalar λ* such that

∇f(x*) = −λ* ∇h(x*) (2.97)

This expression can be derived using first-order Taylor series approximations of the objective and constraint functions. To retain feasibility with respect to h(x) = 0,

h(x + d) = h(x) + ∇h^T(x)d + O(d^T d) = 0, with h(x) = 0. (2.98)

Linearizing yields

∇h^T(x)d = 0. (2.99)

Also, a direction of improvement must result in a decrease in f, i.e.,

f(x + d) − f(x) < 0. (2.100)

Then, to first order, the following must hold:

f(x) + ∇f^T(x)d − f(x) < 0  ⇒  ∇f^T(x)d < 0. (2.101)

A necessary condition for optimality is that there is no direction satisfying both of these conditions. The only way such a direction cannot exist is if ∇f(x) and ∇h(x) are parallel, that is, if ∇f(x) = −λ∇h(x) holds.
If the objective gradient has a component tangent to the constraint, then it is possible to move the solution while remaining feasible and decreasing f. By defining the Lagrangian function

L(x, λ) = f(x) + λh(x), (2.102)

and noting that

∇_x L(x, λ) = ∇f(x) + λ∇h(x), (2.103)

the necessary optimality condition can be stated as follows: at the solution x*, there is a scalar λ* such that ∇_x L(x*, λ*) = 0. Thus one can search for solutions of the equality-constrained problem by searching for a stationary point of the Lagrangian function. The scalar λ* is called the Lagrange multiplier for the constraint h(x) = 0.
Nonlinear Inequality Constraints

Consider the following optimization problem with inequality constraints:

minimize f(x) (2.104)
w.r.t. x ∈ R^n
subject to g_i(x) ≤ 0, i = 1, ..., m

The vector d is a descent direction at point x_k if

∇f^T(x_k)d < 0. (2.105)

This condition ensures that, for sufficiently small α > 0, the inequality f(x_k + αd) < f(x_k) holds. Thus, d is a descent direction if it points into the half-space opposite to ∇f. The half-space defined by (2.105) forms the descent cone within which d should lie.
The vector d is a feasible direction if

∇g_i^T d < 0 for each i ∈ I, (2.106)

where I is the active set, I = {i : g_i(x_k) = 0, i = 1, ..., m}, that is, the set of active constraints at the feasible point x_k. This condition ensures that, for sufficiently small α > 0, (x_k + αd) will be feasible, i.e., g_i(x_k + αd) < 0, i = 1, ..., m. The intersection of all half-spaces defined by (2.106) forms the feasible cone within which d should lie.

These two concepts, descent direction and feasible direction, are important for the derivation of the optimality conditions for the inequality constrained problem (2.104). Two cases are possible for the solution x* to (2.104):

- If x* lies in the interior, then it is necessary that ∇f(x*) = 0. This is the optimality condition in the absence of active constraints, which coincides with the unconstrained case;
- If some constraints are active, then it is necessary that no vector d exists at x* which is both a descent direction and a feasible direction. That is, the intersection of the descent and feasible cones must be empty.
Farkas' lemma [22] is a result used in the proof of the Karush-Kuhn-Tucker (KKT) theorem of nonlinear programming.

Lemma [Farkas 1902]. If A is a matrix and b a vector, then exactly one of the following two systems has a solution:

A^T y ≥ 0 for some y such that b^T y < 0

or, in the alternative,

Ax = b for some x ≥ 0

where the notation x ≥ 0 means that all components of the vector x are non-negative.

Associating x with μ, b with ∇f, y with d, and the columns of A with the negatives of the active constraint gradients, −∇g_i(x*), i ∈ I, Farkas' lemma implies that a vector d which is both descent and feasible cannot exist if ∇f = Σ_{i∈I} μ_i(−∇g_i), μ_i ≥ 0. The scalars μ_i are called Lagrange multipliers. The previous condition may be written as

−∇f = Σ_{i∈I} μ_i ∇g_i, μ_i ≥ 0 (2.107)
The optimality conditions in (2.107) do not refer to constraints that are inactive (g_i < 0). It is usual to include the inactive constraints as well by assigning a zero Lagrange multiplier to each of them. Thus, for each inactive constraint g_i, a scalar μ_i ≥ 0 is introduced and the condition μ_i g_i = 0 imposed. A point x is a local minimum of (2.104) if

∇_x L = 0: ∂L/∂x_p = ∂f/∂x_p + Σ_{i=1}^{m} μ_i ∂g_i/∂x_p = 0, (p = 1, ..., n) (2.108)
μ_i ≥ 0, (i = 1, ..., m)
μ_i g_i = 0, (i = 1, ..., m)
g_i(x) ≤ 0, (i = 1, ..., m)

where the Lagrangian L is defined as L = f + Σ_{i=1}^{m} μ_i g_i. These optimality conditions can be compactly written as

∇_x L = 0, μ^T g = 0, μ ≥ 0, g ≤ 0. (2.109)

These necessary conditions for optimality are called Karush-Kuhn-Tucker (KKT) conditions, and a point satisfying them is referred to as a KKT point.
Nonlinear Equality and Inequality Constraints

Consider the following optimization problem with equality and inequality constraints:

minimize f(x) (2.110)
w.r.t. x ∈ R^n
subject to g_i(x) ≤ 0, i = 1, ..., m
h_j(x) = 0, j = 1, ..., l

In view of (2.96) and (2.108), the first-order KKT necessary optimality conditions for the general problem (2.110) are

Optimality: ∇_x L = 0: ∂L/∂x_p = ∂f/∂x_p + Σ_{i=1}^{m} μ_i ∂g_i/∂x_p + Σ_{j=1}^{l} λ_j ∂h_j/∂x_p = 0, (p = 1, ..., n)
Non-negativity: μ_i ≥ 0, (i = 1, ..., m) (2.111)
Complementarity: μ_i g_i = 0, (i = 1, ..., m)
Feasibility: g_i(x) ≤ 0, (i = 1, ..., m)
∇_λ L = 0: ∂L/∂λ_j = h_j(x) = 0, (j = 1, ..., l).
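For a concrete candidate point, the conditions (2.111) reduce to a handful of arithmetic checks. A sketch with NumPy (an assumption), using the circle-constrained problem from Example 2.15 with the equality treated as the inequality g(x) = x_1^2 + x_2^2 − 2 ≤ 0, candidate x* = (−1, −1) and multiplier μ* = 1/2:

```python
import numpy as np

# Candidate data: minimize x1 + x2 s.t. g(x) = x1^2 + x2^2 - 2 <= 0
x = np.array([-1.0, -1.0])
mu = 0.5
grad_f = np.array([1.0, 1.0])            # gradient of the objective
grad_g = 2.0 * x                         # gradient of the constraint
g = x @ x - 2.0

stationarity = grad_f + mu * grad_g      # should be the zero vector
assert np.allclose(stationarity, 0.0)    # optimality
assert mu >= 0.0                         # non-negativity
assert abs(mu * g) < 1e-12               # complementarity
assert g <= 1e-12                        # feasibility
print("KKT point")
```

Failing any one of the four checks is enough to rule the candidate out; satisfying all of them only certifies a KKT point, with minimality still subject to the second-order conditions of the next subsection.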
There are now n + l + m equations, plus the 2m inequalities μ_i ≥ 0 and g_i ≤ 0, and two main possibilities for each inequality constraint i:

- inactive: the i-th constraint is inactive, and μ_i = 0;
- active: the i-th constraint is active, and μ_i ≥ 0. μ_i must be non-negative, because otherwise, from the first equations, the gradient of the objective and the gradient of the constraint would point in the same direction.
2.4.2 Sufficient Conditions for Optimality

The KKT conditions are necessary conditions: if a design x* is a local minimum of the problem (2.110), then it must satisfy (2.111). However, if a design x* satisfies (2.111), it is only guaranteed to be a stationary point: a minimum, a maximum or a saddle point. The sufficient conditions are obtained by examining second-order requirements.

Let f, g and h be twice-continuously differentiable functions. Then the point x* is a strict local minimum of (2.110) if there exist Lagrange multipliers μ* and λ* such that

1. the KKT necessary conditions (2.111) are satisfied at x*, and
2. the Hessian matrix of the Lagrangian,

∇²L(x*) = ∇²f(x*) + Σ_{i=1}^{m} μ_i ∇²g_i(x*) + Σ_{j=1}^{l} λ_j ∇²h_j(x*) (2.112)

is positive definite in the feasible subspace of R^n, defined by all directions y that satisfy

y ≠ 0
∇h_j^T(x*)y = 0, for all j = 1, ..., l (2.113)
∇g_i^T(x*)y = 0, for all i for which g_i(x*) = 0 with μ_i > 0.
Then the Hessian of the Lagrangian in this feasible subspace must be positive definite,

y^T ∇²L(x*) y > 0. (2.114)

The conditions on y simply define the subspace, or tangent plane. Thus, the KKT sufficient conditions require the Hessian of the Lagrangian, ∇²L(x*), to be positive definite on the tangent plane M defined by

M = {y : ∇h_j^T(x*)y = 0, j = 1, ..., l, and ∇g_i^T(x*)y = 0 for all i for which g_i(x*) = 0 with μ_i > 0},

that is, on the plane tangent to all active constraints.
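The second-order check amounts to building a basis Z of the tangent plane M and testing the reduced Hessian Z^T ∇²L Z. A sketch for the single-equality example (minimize x_1 + x_2 subject to x_1^2 + x_2^2 − 2 = 0 at x* = (−1, −1), λ* = 1/2), using SciPy's null_space as an assumed convenience:

```python
import numpy as np
from scipy.linalg import null_space

# Data for the single-equality example at its minimizer
x = np.array([-1.0, -1.0])
lam = 0.5
hess_L = lam * 2.0 * np.eye(2)      # Hess f = 0 and Hess h = 2I, per (2.112)
grad_h = 2.0 * x                    # constraint normal at x*

Z = null_space(grad_h[None, :])     # basis of the tangent plane M (2.113)
reduced = Z.T @ hess_L @ Z          # Hessian restricted to M
print(np.linalg.eigvalsh(reduced))  # all eigenvalues positive -> strict minimum
```

Repeating the check with λ = −1/2 (the maximizer) flips the sign of the reduced Hessian, which is how the two stationary points of Example 2.15 are told apart.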
Example 2.16: Problem with a Single Inequality Constraint

Consider the previous example with the equality constraint replaced by an inequality constraint:

minimize f(x) = x_1 + x_2
subject to g(x) = x_1^2 + x_2^2 − 2 ≤ 0

The feasible region is now the circle and its interior. Note that ∇g(x) points outwards from the center of the circle. Graphically, we can see that the solution is still (−1, −1)^T, and therefore μ = 1/2.

Defining the inequality as g(x) ≤ 0 means that −∇g(x) points into the feasible region (of decreasing g). For optimality in minimization problems, we want ∇f(x) in the opposite direction, such that the intersection between the feasible and descent cones is empty:

∇_x L = ∇_x f + μ∇_x g = 0  ⇒  ∇_x f = −μ∇_x g

The sign of the Lagrange multiplier μ is now important. If it were negative, the gradients would point in the same direction and the intersection of the two cones would be an entire half-space.
Given a point x that is not optimal, we can find a step d that both stays feasible and decreases the objective function f, to first order. As in the equality constrained case, the latter condition is expressed as

∇f^T(x)d < 0. (2.115)

The first condition, however, is slightly different, since the constraint is not necessarily zero, i.e.,

g(x + d) ≤ 0. (2.116)

Performing a Taylor series expansion we have

g(x + d) ≈ g(x) + ∇g^T(x)d. (2.117)

Thus feasibility is retained to first order if

g(x) + ∇g^T(x)d ≤ 0. (2.118)
In order to find valid steps d it helps to consider two possibilities:

1. Suppose x lies strictly inside the circle (g(x) < 0). In this case, any vector d satisfies condition (2.118), provided that its length is sufficiently small. The only situation that prevents us from finding a descent direction is ∇f(x) = 0.

2. Consider now the case in which x lies on the boundary, i.e., g(x) = 0. The conditions then become ∇f^T(x)d < 0 and ∇g^T(x)d ≤ 0. The two regions defined by these conditions fail to intersect only when ∇f(x) and ∇g(x) point in opposite directions, that is, when

∇f(x) = −μ∇g(x), for some μ ≥ 0. (2.119)

In case 1, g(x*) < 0, so we require μ = 0 and hence ∇f(x) = 0. Case 2 allows μ to take a non-negative value, so the conditions become equivalent to the equality-constrained case.
The optimality conditions for these two cases can again be summarized using the Lagrangian function, that is,

∇_x L(x*, μ*) = 0, for some μ* ≥ 0 and μ*g(x*) = 0. (2.120)

The last condition is known as the complementarity condition and implies that the Lagrange multiplier can be strictly positive only when the constraint is active.

Figure 2.9: Contour plots of f(x), g = 0 and L(x, μ*)
Example 2.17: Lagrangian Whose Hessian is Not Positive Definite

minimize f(x) = x_1 x_2
subject to h(x) = 2 − x_1^2 − x_2^2 = 0
x_1 ≥ 0, x_2 ≥ 0

Figure 2.10: Contour plots of f(x), h = 0 and L(x, λ*)
Example 2.18: Problem with Two Inequality Constraints

Suppose we now add another inequality constraint:

minimize f(x) = x_1 + x_2
subject to g_1(x) = x_1^2 + x_2^2 − 2 ≤ 0
g_2(x) = −x_2 ≤ 0

The feasible region is now a half disk. Graphically, we can see that the solution is now (−√2, 0)^T and that both constraints are active at this point. For ∇g_i(x)^T d ≤ 0, d must lie in the cone between the half-spaces defined by the two constraint gradients; any such vector results in an increase of f.
The Lagrangian for this problem is

L(x, μ) = f(x) + μ_1 g_1(x) + μ_2 g_2(x), (2.121)

where μ = (μ_1, μ_2)^T is the vector of Lagrange multipliers. The first-order optimality conditions are thus

∇_x L(x*, μ*) = 0, for some μ* ≥ 0. (2.122)

Applying the complementarity conditions to both inequality constraints,

μ_1* g_1(x*) = 0 and μ_2* g_2(x*) = 0. (2.123)

For x* = (−√2, 0)^T we have

∇f(x*) = [1, 1]^T, ∇g_1(x*) = [−2√2, 0]^T, ∇g_2(x*) = [0, −1]^T,

and ∇_x L(x*, μ*) = 0 when μ* = [1/(2√2), 1]^T.
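With both constraints active, the multipliers follow from the stationarity condition alone: the active gradients form a square system ∇f + μ_1∇g_1 + μ_2∇g_2 = 0. A sketch with NumPy (an assumption) for Example 2.18 at x* = (−√2, 0):

```python
import numpy as np

# Example 2.18 at x* = (-sqrt(2), 0), where both constraints are active
x = np.array([-np.sqrt(2.0), 0.0])
grad_f = np.array([1.0, 1.0])
grad_g1 = 2.0 * x                   # gradient of g1 = x1^2 + x2^2 - 2
grad_g2 = np.array([0.0, -1.0])     # gradient of g2 = -x2

# Solve [grad_g1 grad_g2] mu = -grad_f for the multipliers
G = np.column_stack([grad_g1, grad_g2])
mu = np.linalg.solve(G, -grad_f)
print(mu)   # [1/(2*sqrt(2)), 1]: both non-negative, so x* is a KKT point
```

The same computation at a non-optimal point with both constraints active yields a negative component of μ, which is exactly the situation examined next.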
Let us now consider other feasible points that are not optimal and examine the Lagrangian and its gradients at these points.

For the point x = (√2, 0)^T, both constraints are again active. However, −∇f(x) no longer lies in the cone defined by ∇g_i(x)^T d ≤ 0, i = 1, 2, and therefore there are descent directions that are feasible, for example d = (−1, 0)^T. Note that ∇_x L(x, μ) = 0 at this point for μ = (−1/(2√2), 1)^T. However, since μ_1 is negative, the first-order conditions are not satisfied at this point.
Now consider the point x = (1, 0)^T, for which only the second constraint is active. Linearizing f and g as before, d must satisfy the following to be a feasible descent direction:

g_1(x + d) ≤ 0 ⇐ g_1(x) + ∇g_1(x)^T d ≤ 0, (2.124)
g_2(x + d) ≤ 0 ⇐ ∇g_2(x)^T d ≤ 0, (2.125)
f(x + d) − f(x) < 0 ⇐ ∇f(x)^T d < 0. (2.126)

We only need to worry about the last two conditions, since the first is always satisfied for a small enough step. By noting that

∇f(x) = [1, 1]^T, ∇g_2(x) = [0, −1]^T,

we can see that the vector d = (−1/2, 1/4)^T, for example, satisfies the two conditions. Since g_1(x) < 0, we must have μ_1 = 0. In order to satisfy ∇_x L(x, μ) = 0 we would have to find μ_2 such that ∇f(x) = −μ_2 ∇g_2(x). No such μ_2 exists, and this point is therefore not an optimum.
2.4.3 Projection Methods

These methods find the search direction by projecting the steepest descent direction onto the feasible region. Rosen's gradient projection method is presented here for linear constraints [52]. While Rosen did publish an extension of his method to nonlinear constraints [53], it is very difficult to implement and other methods are preferred.

Consider problems that can be expressed in the form

minimize f(x) w.r.t. x ∈ R^n (2.127)
subject to a_i^T x − b_i ≤ 0, i = 1, ..., m
a_j^T x − b_j = 0, j = m + 1, ..., m + l

The active set consists of all the equalities and the active inequalities, expressed as

I(x) = {m + 1, ..., m + l} ∪ {j : a_j^T x − b_j = 0, j = 1, ..., m}. (2.128)

Let the current design x_k be feasible, i.e., let it satisfy the constraints in (2.127).
If x_k is an interior point, the active set I(x) is empty and the steepest descent direction can be used,

d = −∇f(x). (2.129)

Otherwise, which is the general case, this direction needs to be adjusted. Rosen's idea is to determine the direction vector by projecting the steepest descent direction onto the tangent plane, defined by

M = {y : By = 0}, (2.130)

where the matrix B consists of the gradient vectors of the active constraints as rows,

B = [a_1^T; ...; a_t^T], (2.131)

where t is the number of active constraints. This matrix has size t × n. Thus, d can be obtained as the solution of

minimize (∇f(x_k) + d)^T (∇f(x_k) + d) w.r.t. d (2.132)
subject to Bd = 0
Recognizing that (2.132) is an equality constrained problem, the Lagrangian is

L(d, β) = (∇f(x_k) + d)^T (∇f(x_k) + d) + β^T Bd, (2.133)

and the necessary optimality conditions apply:

∇_d L = 2(∇f + d) + B^T β = 0. (2.134)

Pre-multiplying (2.134) by B and using Bd = 0 yields

[BB^T]β = −2B∇f, (2.135)

which can be solved for the Lagrange multipliers β.
From (2.134), the direction vector is then

d = −∇f − (1/2)B^T β. (2.136)

Substituting (2.135) into (2.136) leads to

d = −∇f + B^T[BB^T]^{−1}B∇f = P(−∇f), (2.137)

where P is the projection matrix defined as

P = I − B^T[BB^T]^{−1}B. (2.138)

After the search direction d has been determined, the next task is to compute the step size. The first step is to determine the step to the nearest intersecting boundary, α_U, as depicted in the figure.
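Equations (2.137)-(2.138) are a two-line computation once B is assembled. A NumPy sketch (NumPy and the toy data are assumptions) with one active linear constraint of normal a = (1, 1) and ∇f = (1, 2):

```python
import numpy as np

# Rows of B are the gradients of the active constraints (here just one)
B = np.array([[1.0, 1.0]])
grad_f = np.array([1.0, 2.0])

P = np.eye(2) - B.T @ np.linalg.solve(B @ B.T, B)  # projection matrix (2.138)
d = -P @ grad_f                                    # projected direction (2.137)
print(d, B @ d, grad_f @ d)  # d = (0.5, -0.5): in the tangent plane, descending
```

By construction B d = 0 (the step stays on the active constraints) and ∇f^T d ≤ 0; if d vanishes, the sign test on β in Pseudo-code 15 below decides whether to stop or to drop a constraint.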
This can be determined from the requirement that

a_i^T(x_k + αd) − b_i ≤ 0, i ∉ I(x_k). (2.139)

Next, the slope of f is evaluated at α = α_U as

f′ = df/dα = ∇f(x_k + α_U d)^T d. (2.140)

If f′ < 0, then α = α_U. If f′ > 0, then the minimum of f lies in the interval [0, α_U] and a 1-D minimization method is used. More details can be found in Belegundu and Chandrupatla [7].
Input: Problem in the form (2.127) and initial feasible point x_0
Output: Optimal solution of the linearly constrained problem
begin
    k = 0
    repeat
        determine the active set (2.128) and form the matrix B (2.131)
        solve (2.135) for β
        evaluate d from (2.136)
        if d ≠ 0 then
            obtain step α_k as discussed
            update current point x_{k+1} = x_k + α_k d
        else
            if β_j ≥ 0 for all j corresponding to active constraints then
                x_k satisfies the KKT optimality conditions
            else
                delete the row of B corresponding to the most negative component of β (associated with an inequality)
            end
        end
        set k = k + 1
    until x_k satisfies the optimality conditions
end

Pseudo-code 15: Rosen's Gradient Projection Method
2.4.4 Feasible Direction Methods

In 1960, Zoutendijk [58] developed the Method of Feasible Directions, which is still a popular and robust method for optimization. The method is geared towards solving problems with inequality constraints in the form

minimize f(x) w.r.t. x ∈ R^n (2.141)
subject to g_i(x) ≤ 0, i = 1, ..., m

Assuming that the current design point x_k is feasible, the method is based on the following idea: if we have a procedure to determine a direction d which is both descent (∇f^T d < 0) and feasible (∇g_i^T d < 0), then a line search along d will yield an improved design. Define the active set I as

I = {j : g_j(x_k) + ε ≥ 0, j = 1, ..., m}, (2.142)

where ε is the constraint thickness parameter.
Introduce an artificial variable α as

α = max{∇fᵀd, ∇g_jᵀd for each j ∈ I}.    (2.143)

To obtain a descent-feasible direction, the aim is to reduce α until it becomes a negative number. To this end, a subproblem is formulated to determine a descent-feasible direction:

minimize α    (2.144)
subject to ∇fᵀd ≤ α
  ∇g_jᵀd ≤ α,  for each j ∈ I
  −1 ≤ d_i ≤ 1,  i = 1, ..., n

There are n + 1 variables in this subproblem: d and α. The constraint on the magnitude of d_i is imposed to bound the solution.
The subproblem (2.144) can be cast in standard LP form

minimize cᵀx    (2.145)
subject to Ax ≤ b
  x ≥ 0    (2.146)

where xᵀ = (d, α), and solved using the Simplex method. If the solution α* < 0, then a descent-feasible direction has been found and a line search has to be performed.
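The direction-finding subproblem (2.144) is small enough to hand to any LP solver. The sketch below builds it for SciPy's `linprog` (assumed available); the variables are stacked as x = (d, α), with α left free, rather than converted to the non-negative standard form.

```python
import numpy as np
from scipy.optimize import linprog

def feasible_direction(grad_f, grad_g_active):
    """Zoutendijk direction-finding LP (2.144): min alpha subject to
    grad_f^T d <= alpha, grad_g_j^T d <= alpha (j active), -1 <= d_i <= 1."""
    n = grad_f.size
    rows = np.vstack([grad_f] + list(grad_g_active))
    A_ub = np.hstack([rows, -np.ones((rows.shape[0], 1))])  # row^T d - alpha <= 0
    c = np.zeros(n + 1)
    c[-1] = 1.0                                   # objective: minimize alpha
    bounds = [(-1.0, 1.0)] * n + [(None, None)]   # d_i bounded, alpha free
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(rows.shape[0]), bounds=bounds)
    return res.x[:n], res.x[-1]
```

If the returned α* is negative, d is a usable descent-feasible direction; α* = 0 signals a KKT point, as in Pseudo-code 16 below.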
The step size problem can be expressed as a constrained one-dimensional search,

minimize f(α) ≡ f(x_k + αd)    (2.147)
subject to g_i(α) ≡ g_i(x_k + αd) ≤ 0,  i = 1, ..., m    (2.148)

More details can be found in Belegundu and Chandrupatla [7].
Input: Problem in the form (2.141) and initial feasible point x_0
Output: Optimal solution of the inequality constrained problem
begin
  k = 0
  repeat
    Determine the active set (2.142)
    Solve the LP problem (2.145) for (d, α*)
    if α* = 0 then
      x_k satisfies the KKT optimality conditions
    else
      Perform line search: determine maximum step α_U and optimum step size α_k
      Update current point x_{k+1} = x_k + α_k d
    end
    Set k = k + 1
  until x_k satisfies the optimality conditions
end
Pseudo-code 16: Zoutendijk's Method of Feasible Directions
2.4.5 Reduced Gradient Methods

The reduced gradient method was proposed by Wolfe [56] to solve nonlinear programming problems with linear constraints. It was extended by Abadie, Carpentier and Hensgen [1] to solve problems with nonlinear constraints through approximate linearized problems. This extension is called the generalized reduced gradient (GRG) method.

The GRG method is very robust for solving problems in the form

minimize f(x)  w.r.t. x ∈ ℝⁿ    (2.149)
subject to h_j(x) = 0,  j = 1, ..., l
  x_L ≤ x ≤ x_U

The first step in GRG is to partition x into independent variables z and dependent variables y as

x = {y; z},    (2.150)

where y collects the l dependent variables and z the n − l independent variables.
The y variables are chosen so as to be strictly off their bounds, that is,

y_L < y < y_U.    (2.151)

Let the (l × l) square matrix B and the l × (n − l) matrix C be defined from the partition

[∇h]ᵀ = [B, C].    (2.152)

Assuming that the point x_k in the iterative process is regular, then for a certain choice of y the matrix B is nonsingular. This property allows the use of the Implicit Function Theorem:

Theorem 1. [Implicit Function Theorem] There is a small neighborhood of x_k such that for z in this neighborhood, y = y(z) is a differentiable function of z with h(y(z), z) = 0.

This means f can be treated as an implicit function of z as f = f(y(z), z), with dimensionality (n − l). The gradient of f in this reduced z-space is called the reduced gradient and is given by

Rᵀ ≡ df/dz = ∂f/∂z + ∂f/∂y · ∂y/∂z,    (2.153)

where ∂f/∂z = [∂f/∂z_1, ..., ∂f/∂z_{n−l}].
Differentiating h(y(z), z) = 0 we get

B · ∂y/∂z + C = 0.    (2.154)

Combining (2.153) and (2.154), we get

Rᵀ = ∂f/∂z − ∂f/∂y · B⁻¹C.    (2.155)

A direction vector d, an (n × 1) column vector, is determined as follows, using the partition

d = {d_y; d_z}.    (2.156)

The direction d_z is chosen to be the steepest descent direction in z-space, with components of z on their boundary held fixed if the movement would violate the bound:

(d_z)_i = 0 if z_i = z_i^L and R_i > 0,
(d_z)_i = 0 if z_i = z_i^U and R_i < 0,
(d_z)_i = −R_i otherwise.    (2.157)
Next, the direction d_y is obtained from (2.154) as

d_y = −B⁻¹C d_z.    (2.158)

If the direction vector d = 0, then the current point is a KKT point and the iterations are terminated. Otherwise, the maximum step size α_U along d is determined from

x_L ≤ x_k + αd ≤ x_U.    (2.159)

Then, to determine the step size α_k, compute f′ as

f′ = df/dα = ∇f(x_k + α_U d)ᵀd.    (2.160)

If f′ < 0, then α_k = α_U. If f′ > 0 or f(α_U) > f(x_k), then the minimum is in the interval [0, α_U]. The golden section search can then be used to find it. The new point is then obtained as x_{k+1} = x_k + α_k d.
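Equations (2.155), (2.157) and (2.158) can be sketched as follows. This is the interior case only (bound handling omitted); the function names are ours.

```python
import numpy as np

def reduced_gradient(B, C, f_y, f_z):
    """Reduced gradient (2.155): R^T = df/dz - df/dy B^{-1} C,
    returned as a column vector."""
    return f_z - np.linalg.solve(B, C).T @ f_y

def grg_direction(B, C, f_y, f_z):
    """Steepest descent in z-space (2.157, interior case), with d_y
    from (2.158) so that the linearized constraints stay satisfied:
    B d_y + C d_z = 0."""
    d_z = -reduced_gradient(B, C, f_y, f_z)
    d_y = -np.linalg.solve(B, C @ d_z)
    return d_y, d_z
```

For instance, for f = x_1² + x_2² on h = x_1 + x_2 − 1 = 0 with y = x_1, z = x_2, the direction returned at any feasible point satisfies B d_y + C d_z = 0, and the reduced gradient vanishes at the optimum (0.5, 0.5).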
When handling nonlinear constraints, because the procedure is based on a linearization, the new point might be infeasible. In such a case, the dependent variables y need to be adjusted while keeping the independent variables fixed. The problem is then to solve h(y, z_{k+1}) = 0 by adjusting y only. Starting with the point y⁽⁰⁾_{k+1} = y_{k+1}, Newton's iterations are executed as

J Δy = −h(y⁽ʳ⁾_{k+1}, z_{k+1}),    (2.161)

where J is the (l × l) Jacobian matrix defined as

J = ∂h(y⁽ʳ⁾_{k+1}, z_{k+1})/∂y,    (2.162)

followed by the update

y⁽ʳ⁺¹⁾_{k+1} = y⁽ʳ⁾_{k+1} + Δy.    (2.163)

More details can be found in Belegundu and Chandrupatla [7].
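The restoration step (2.161)–(2.163) is a plain Newton solve in y with z frozen. A sketch (our function names), with h and its Jacobian ∂h/∂y passed in as callables:

```python
import numpy as np

def restore_feasibility(h, J, y, z, tol=1e-10, max_iter=50):
    """Newton iterations (2.161)-(2.163): adjust the dependent variables y,
    holding the independent variables z fixed, until h(y, z) = 0."""
    for _ in range(max_iter):
        r = h(y, z)
        if np.linalg.norm(r) < tol:
            break
        y = y + np.linalg.solve(J(y, z), -r)   # solve J dy = -h, then y <- y + dy
    return y
```

For example, restoring onto the circle h = y² + z² − 1 = 0 with z fixed at 0.6, starting from y = 1, converges to y = 0.8.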
Input: Problem in the form (2.149) and initial feasible point x_0
Output: Optimal solution of the equality constrained problem
begin
  k = 0
  repeat
    Determine the dependent and independent variables
    Evaluate reduced gradient R from (2.155)
    Determine direction vector d from (2.157) and (2.158)
    if d = 0 then
      x_k satisfies the KKT optimality conditions
    else
      Perform line search: determine maximum step α_U and optimum step size α_k
      Update current point x_{k+1} = x_k + α_k d
      if x_{k+1} not feasible then
        Perform Newton iterations to return to the feasible region using (2.161)–(2.163)
      end
    end
    Set k = k + 1
  until x_k satisfies the optimality conditions
end
Pseudo-code 17: Generalized Reduced Gradient Method
2.4.6 Sequential Quadratic Programming (SQP)

To understand the use of SQP in problems with general constraints, consider the equality-constrained problem

minimize f(x)
subject to h_j(x) = 0,  j = 1, ..., l

The idea of SQP is to model this problem at the current point x_k by a quadratic subproblem and to use the solution of this subproblem to find the new point x_{k+1}. SQP is in a way the application of Newton's method to the KKT optimality conditions.

The Lagrangian function for this problem is L(x, λ) = f(x) − λᵀh(x). The Jacobian of the constraints is defined by

A(x)ᵀ = ∇h(x)ᵀ = [∇h_1(x), ..., ∇h_l(x)],    (2.164)

which is an n × l matrix, and g(x) = ∇f(x) is an n-vector as before.
Applying the first-order necessary KKT conditions to this problem yields

∇L(x, λ) = [∇_x L; ∇_λ L] = [g(x) − A(x)ᵀλ; h(x)] = 0.    (2.165)

This set of nonlinear equations can be solved using Newton's method, where each iteration solves a linear system:

[W(x, λ), −A(x)ᵀ; A(x), 0] [p_k; p_λ] = [−g_k + A_kᵀλ_k; −h_k],    (2.166)

where the Hessian of the Lagrangian, obtained by differentiating g(x) − A(x)ᵀλ with respect to x, is denoted by W(x, λ) = ∇²_xx L(x, λ), and the Newton step from the current point is given by

[x_{k+1}; λ_{k+1}] = [x_k; λ_k] + [p_k; p_λ].    (2.167)
An alternative way of looking at this formulation of SQP is to define the following quadratic problem at (x_k, λ_k):

minimize (1/2)pᵀW_k p + g_kᵀp
subject to A_k p + h_k = 0

This problem has a unique solution (p_k, λ_{k+1}) that satisfies

W_k p_k + g_k − A_kᵀλ_{k+1} = 0
A_k p_k + h_k = 0

By writing this in matrix form, p_k and λ_{k+1} can be identified as the solution of the Newton equations derived previously:

[W_k, −A_kᵀ; A_k, 0] [p_k; λ_{k+1}] = [−g_k; −h_k].    (2.168)

This problem is equivalent to (2.166), but the second set of variables is now the actual vector of Lagrange multipliers λ_{k+1} instead of the Lagrange multiplier step p_λ.
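One SQP iteration for the equality-constrained case is then just the linear solve (2.168). A minimal NumPy sketch (function name ours):

```python
import numpy as np

def sqp_step(W, A, g, h):
    """Solve the SQP KKT system (2.168):
    [W  -A^T] [p      ]   [-g]
    [A   0  ] [lam_new] = [-h]."""
    n, l = W.shape[0], A.shape[0]
    K = np.block([[W, -A.T], [A, np.zeros((l, l))]])
    sol = np.linalg.solve(K, np.concatenate([-g, -h]))
    return sol[:n], sol[n:]          # step p_k and multipliers lambda_{k+1}
```

For the quadratic test problem min ½xᵀx subject to x_1 + x_2 = 1, from x_k = 0 (so g = 0, h = −1, W = I), a single step lands on the solution p = (0.5, 0.5) with λ = 0.5, as expected of Newton's method on a problem the quadratic model captures exactly.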
Quasi-Newton Approximations

Any SQP method relies on a choice of W_k (an approximation of the Hessian of the Lagrangian) in the quadratic model. When W_k is exact, the SQP method becomes Newton's method applied to the optimality conditions. One way to approximate the Lagrangian Hessian is to use a quasi-Newton approximation, such as the BFGS update formula. One could define

s_k = x_{k+1} − x_k,  y_k = ∇_x L(x_{k+1}, λ_{k+1}) − ∇_x L(x_k, λ_{k+1}),    (2.169)

and then compute the new approximation B_{k+1} using the same formula used in the unconstrained case. If ∇²_xx L is positive definite at the sequence of points x_k, the method will converge rapidly, just as in the unconstrained case. If, however, ∇²_xx L is not positive definite, then using the BFGS update may not work well.
To ensure that the update is always well defined, damped BFGS updating for SQP was devised. Using this scheme, one sets

r_k = θ_k y_k + (1 − θ_k) B_k s_k,    (2.170)

where the scalar θ_k is defined as

θ_k = 1 if s_kᵀy_k ≥ 0.2 s_kᵀB_k s_k,
θ_k = (0.8 s_kᵀB_k s_k) / (s_kᵀB_k s_k − s_kᵀy_k) if s_kᵀy_k < 0.2 s_kᵀB_k s_k.    (2.171)

Then B_{k+1} is updated using

B_{k+1} = B_k − (B_k s_k s_kᵀ B_k)/(s_kᵀ B_k s_k) + (r_k r_kᵀ)/(s_kᵀ r_k),    (2.172)

which is the standard BFGS update formula with y_k replaced by r_k. This guarantees that the Hessian approximation is positive definite. Note that θ_k = 0 results in B_{k+1} = B_k, while θ_k = 1 yields the unmodified BFGS update. The modified method thus produces an interpolation between the current B_k and the one corresponding to BFGS. The choice of θ_k ensures that the new approximation stays close enough to the current approximation to guarantee positive definiteness.
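The damped update (2.170)–(2.172) translates directly into NumPy. This sketch assumes s_kᵀB_k s_k > 0, which holds whenever B_k is positive definite and s_k ≠ 0.

```python
import numpy as np

def damped_bfgs_update(B, s, y):
    """Damped BFGS (2.170)-(2.172): the standard BFGS update with y
    replaced by r = theta*y + (1 - theta)*B s, chosen so that s^T r > 0
    and the updated matrix stays positive definite."""
    Bs = B @ s
    sBs = s @ Bs
    sy = s @ y
    theta = 1.0 if sy >= 0.2 * sBs else 0.8 * sBs / (sBs - sy)    # Eq. (2.171)
    r = theta * y + (1.0 - theta) * Bs                            # Eq. (2.170)
    return B - np.outer(Bs, Bs) / sBs + np.outer(r, r) / (s @ r)  # Eq. (2.172)
```

Even when (s, y) exhibits negative curvature (sᵀy < 0), the damping keeps the updated matrix symmetric positive definite, while a well-curved pair reduces to the ordinary BFGS update.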
In addition to using a different quasi-Newton update, SQP algorithms also need modifications to the line search criteria in order to ensure that the method converges from remote starting points. It is common to use a merit function φ to control the size of the steps in the line search. The following is one of the possibilities for such a function:

φ(x_k; µ) = f(x) + (1/µ)‖h‖₁,    (2.173)

where the penalty parameter µ is positive and the L₁ norm of the equality constraints is

‖h‖₁ = Σ_{j=1}^{l} |h_j|.    (2.174)

For the line search, the directional derivative of φ in the p_k direction is needed, which is given by

D(φ(x_k; µ); p_k) = −p_kᵀW_k p_k + p_kᵀA_kᵀλ_{k+1} − (1/µ)‖h_k‖₁.    (2.175)

For the sequence of penalty parameters, the following strategy is often used:

µ_k = µ_{k−1} if µ_{k−1}⁻¹ ≥ γ + δ,
µ_k = (γ + 2δ)⁻¹ otherwise,    (2.176)

where γ is set to max(λ_{k+1}) and δ is a small positive constant.
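The merit function (2.173)–(2.174) itself is a one-liner; a sketch with the penalty written as 1/µ exactly as above (function name ours):

```python
import numpy as np

def l1_merit(f, h, x, mu):
    """L1 merit function (2.173)-(2.174): phi(x; mu) = f(x) + ||h(x)||_1 / mu."""
    return f(x) + np.sum(np.abs(h(x))) / mu
```

During the line search, a step is accepted only if it sufficiently decreases φ, which trades reduction of the objective against reduction of the constraint violation.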
A line search SQP algorithm:

1. Choose parameters 0 < η < 0.5, 0 < τ < 1 and the initial point (x_0, λ_0).
2. Initialize the Hessian estimate, say B_0 = I.
3. Evaluate f_0, g_0, h_0 and A_0.
4. Begin major iteration loop in k:
   (a) If termination criteria are met, then stop.
   (b) Compute p_k by solving (2.166).
   (c) Choose µ_k such that p_k is a descent direction for φ at x_k.
   (d) Set α_k = 1.
       While φ(x_k + α_k p_k; µ_k) > φ(x_k; µ_k) + η α_k Dφ(x_k; p_k),
       set α_k = τ_α α_k for some 0 < τ_α < τ.
   (e) Set x_{k+1} = x_k + α_k p_k.
   (f) Evaluate f_{k+1}, g_{k+1}, h_{k+1} and A_{k+1}.
   (g) Compute λ_{k+1} by solving λ_{k+1} = [A_{k+1} A_{k+1}ᵀ]⁻¹ A_{k+1} g_{k+1}.
   (h) Set s_k = α_k p_k, y_k = ∇_x L(x_{k+1}, λ_{k+1}) − ∇_x L(x_k, λ_{k+1}).
   (i) Obtain B_{k+1} by updating B_k using a quasi-Newton formula.
5. End major iteration loop.
The SQP method can be extended to handle inequality constraints. Consider the general nonlinear optimization problem

minimize f(x)
subject to h_j(x) = 0,  j = 1, ..., l
  g_i(x) ≤ 0,  i = 1, ..., m

To define the subproblem we now linearize both the inequality and equality constraints and obtain

minimize (1/2)pᵀW_k p + g_kᵀp
subject to ∇h_j(x)ᵀp + h_j(x) = 0,  j = 1, ..., l
  ∇g_i(x)ᵀp + g_i(x) ≤ 0,  i = 1, ..., m

One of the most common strategies to solve this problem, the active-set method, is to consider only the active constraints at a given iteration and treat those as equality constraints. This is a significantly more difficult problem because one does not know a priori which inequality constraints are active at the solution. If one did, one could just solve the equality-constrained problem considering only the active constraints. The most commonly used active-set methods are feasible-point methods. These start with a feasible solution and never let the new point be infeasible.
Example 2.19: Constrained Minimization Using SQP

minimize f(x)  w.r.t. x_1, x_2
subject to g_1(x) ≤ 0,  g_2(x) ≤ 0
2.4.7 Sequential Unconstrained Minimization Techniques

Fiacco and McCormick developed the sequential unconstrained minimization techniques (SUMT) [23], which constitute the now classical penalty approach. It is perhaps the simplest algorithm to implement for solving constrained problems.
2.4.8 Penalty and Barrier Methods

One of the ways of replacing a constrained optimization problem with an unconstrained one is by adding a penalty to the objective function that depends in some logical way on the value of the constraints. The idea is to solve a sequence of unconstrained minimization problems in which the infeasibility of the constraints is minimized together with the objective function. There are two main types of penalization methods:

Exterior penalty functions, which impose a penalty for violation of constraints. Exterior penalty methods are also referred to simply as penalty methods, because the penalty is activated only when the design point is outside the feasible region.

Interior penalty functions, which impose a penalty for approaching the boundary of an inequality constraint. These barrier methods are also known as interior point methods because they ensure that all intermediate iterates are feasible.

These methods are often used due to their relatively easy implementation, but they can raise some computational problems. The penalty functions are a precursor of the augmented Lagrangian method.
Exterior Penalty Functions

The modified objective function is defined as the original one plus a term for each constraint, which is positive when the current point violates the constraint and zero otherwise. Consider the equality-constrained problem:

minimize f(x)
subject to h(x) = 0

where h(x) is an l-dimensional vector whose j-th component is h_j(x). We assume that all functions are twice continuously differentiable. We require the penalty for constraint violation to be a continuous function φ with the following properties:

φ(x) = 0 if x is feasible,
φ(x) > 0 otherwise.
The new objective function is

π(x, ρ) = f(x) + ρφ(x),    (2.177)

where ρ is positive and is called the penalty parameter. The penalty method consists of solving a sequence of unconstrained minimizations of the form

minimize π(x, ρ_k)  w.r.t. x

for an increasing sequence of positive values of ρ_k tending to infinity. In general, for finite values of ρ_k, the minimizer of the penalty function violates the equality constraints. The increasing penalty forces the minimizer toward the feasible region.
The Quadratic Penalty Method

The quadratic penalty function is defined as

π(x, ρ) = f(x) + (ρ/2) Σ_{j=1}^{l} h_j(x)² = f(x) + (ρ/2) h(x)ᵀh(x).    (2.178)

The penalty is equal to the sum of the squares of all the constraints: it is greater than zero when any constraint is violated and zero when the point is feasible. The method can be modified to handle inequality constraints by defining the penalty for these constraints as

φ(x) = Σ_{i=1}^{m} (max[0, g_i(x)])².    (2.179)

Penalty functions suffer from problems of ill conditioning. The solution of the modified problem approaches the real solution, lim_{ρ→+∞} x*(ρ) = x*. However, as the penalty parameter increases, the condition number of the Hessian matrix of π(x, ρ) also tends to infinity. This makes the subproblems increasingly difficult to solve numerically.
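The full exterior quadratic penalty loop fits in a few lines. The sketch below uses SciPy's general-purpose `minimize` for the unconstrained subproblems (an assumption; the notes leave the inner solver open) and warm-starts each subproblem at the previous minimizer.

```python
import numpy as np
from scipy.optimize import minimize

def quadratic_penalty(f, h, x0, rho0=1.0, factor=10.0, n_outer=8):
    """Exterior quadratic penalty method (2.178): repeatedly minimize
    pi(x, rho) = f(x) + rho/2 * ||h(x)||^2 for increasing rho."""
    x, rho = np.asarray(x0, dtype=float), rho0
    for _ in range(n_outer):
        pi = lambda v: f(v) + 0.5 * rho * np.sum(h(v) ** 2)
        x = minimize(pi, x).x        # warm start from previous minimizer
        rho *= factor                # "ambitious" increase, cf. Pseudo-code 18
    return x
```

For min x_1² + x_2² subject to x_1 + x_2 = 1, the iterates approach (0.5, 0.5) from outside the feasible set as ρ grows, illustrating why exterior methods cannot guarantee feasible intermediate points.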
Input: Modified objective function π(x, ρ_k), initial penalty parameter ρ_0 and initial guess x_0
Output: Optimal solution of the constrained problem
begin
  k = 0
  repeat
    Minimize the penalty function: with x_k as the starting point, execute an algorithm to solve the unconstrained subproblem
      minimize π(x, ρ_k)  w.r.t. x
    and let the solution of this subproblem be x_{k+1}
    Increase the penalty parameter: set ρ_{k+1} to a value larger than ρ_k
    Set k = k + 1
  until x_k satisfies the optimality conditions
end
Pseudo-code 18: Exterior Penalty Method

Note: the increase in the penalty parameter at each iteration can range from modest (ρ_{k+1} = 1.4 ρ_k) to ambitious (ρ_{k+1} = 10 ρ_k), depending on the problem.
Interior Penalty Methods

Exterior penalty methods generate infeasible points and are therefore not suitable when feasibility has to be strictly maintained. This might be the case if the objective function is undefined or ill-defined outside the feasible region. The method is analogous to the exterior penalty method: it creates a sequence of unconstrained modified differentiable functions whose unconstrained minima converge to the optimum of the constrained problem in the limit. Consider the inequality-constrained problem:

minimize f(x)
subject to g(x) ≥ 0

where g(x) is an m-dimensional vector whose i-th component is g_i(x). Again, all functions are assumed to be twice continuously differentiable.
The Logarithmic Barrier Method

The logarithmic barrier function adds a penalty that tends to infinity as x approaches infeasibility. The function is defined as

π(x, µ) = f(x) − µ Σ_{i=1}^{m} log(g_i(x)),    (2.180)

where the positive scalar µ is called the barrier parameter.

The Inverse Barrier Function

The inverse barrier function is defined as

π(x, µ) = f(x) + µ Σ_{i=1}^{m} 1/g_i(x),    (2.181)

and shares many of the characteristics of the logarithmic barrier. For both functions, the solution of the modified problem approaches the real solution, lim_{µ→0} x*(µ) = x*. Again, the Hessian matrix becomes increasingly ill conditioned as µ approaches zero.
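A minimal barrier loop for constraints g(x) ≥ 0 using the logarithmic barrier (2.180) can be sketched as follows. SciPy's Nelder–Mead is used for the inner minimizations (our choice, not prescribed by the notes) because the barrier jumps to +∞ outside the feasible region; trial points that leave it are simply rejected as infinitely bad.

```python
import numpy as np
from scipy.optimize import minimize

def log_barrier(f, g, x0, mu0=1.0, factor=0.1, n_outer=10):
    """Interior log-barrier method (2.180): minimize
    pi(x, mu) = f(x) - mu * sum(log g_i(x)) for a decreasing sequence mu.
    x0 must be strictly feasible: g(x0) > 0 componentwise."""
    x, mu = np.asarray(x0, dtype=float), mu0
    for _ in range(n_outer):
        def pi(v):
            gv = g(v)
            if np.any(gv <= 0.0):
                return np.inf        # infeasible point: infinite barrier
            return f(v) - mu * np.sum(np.log(gv))
        x = minimize(pi, x, method="Nelder-Mead").x
        mu *= factor                 # mu_{k+1}/mu_k = 0.1, the "ambitious" ratio
    return x
```

For min x subject to x − 1 ≥ 0, the unconstrained minimizers are x*(µ) = 1 + µ, so the iterates stay strictly feasible and approach the solution x* = 1 from the interior as µ → 0.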
Similarly to an exterior point method, an algorithm using these barrier functions finds the minimum of π(x, µ_k) for a given (feasible) starting point and terminates when the norm of the gradient is close to zero. The algorithm then chooses a new barrier parameter µ_{k+1} and a new starting point, finds the minimum of the new problem, and so on. A value of 0.1 for the ratio µ_{k+1}/µ_k is usually considered ambitious.
Example 2.20: Constrained Minimization Using a Quadratic Penalty Function

minimize f(x)  w.r.t. x_1, x_2
subject to g_1(x) ≤ 0,  g_2(x) ≤ 0

Note that if we had started with a high ρ, we might have found a local minimum. SQP would always find the local minimum if started in some regions.
More informationComputational Finance
Department of Mathematics at University of California, San Diego Computational Finance Optimization Techniques [Lecture 2] Michael Holst January 9, 2017 Contents 1 Optimization Techniques 3 1.1 Examples
More informationCONVERGENCE ANALYSIS OF AN INTERIOR-POINT METHOD FOR NONCONVEX NONLINEAR PROGRAMMING
CONVERGENCE ANALYSIS OF AN INTERIOR-POINT METHOD FOR NONCONVEX NONLINEAR PROGRAMMING HANDE Y. BENSON, ARUN SEN, AND DAVID F. SHANNO Abstract. In this paper, we present global and local convergence results
More informationOptimisation in Higher Dimensions
CHAPTER 6 Optimisation in Higher Dimensions Beyond optimisation in 1D, we will study two directions. First, the equivalent in nth dimension, x R n such that f(x ) f(x) for all x R n. Second, constrained
More informationNumerical Optimization of Partial Differential Equations
Numerical Optimization of Partial Differential Equations Part I: basic optimization concepts in R n Bartosz Protas Department of Mathematics & Statistics McMaster University, Hamilton, Ontario, Canada
More informationConvex Optimization. Newton s method. ENSAE: Optimisation 1/44
Convex Optimization Newton s method ENSAE: Optimisation 1/44 Unconstrained minimization minimize f(x) f convex, twice continuously differentiable (hence dom f open) we assume optimal value p = inf x f(x)
More informationLecture 1: Introduction. Outline. B9824 Foundations of Optimization. Fall Administrative matters. 2. Introduction. 3. Existence of optima
B9824 Foundations of Optimization Lecture 1: Introduction Fall 2010 Copyright 2010 Ciamac Moallemi Outline 1. Administrative matters 2. Introduction 3. Existence of optima 4. Local theory of unconstrained
More informationg(x,y) = c. For instance (see Figure 1 on the right), consider the optimization problem maximize subject to
1 of 11 11/29/2010 10:39 AM From Wikipedia, the free encyclopedia In mathematical optimization, the method of Lagrange multipliers (named after Joseph Louis Lagrange) provides a strategy for finding the
More informationInterior-Point Methods for Linear Optimization
Interior-Point Methods for Linear Optimization Robert M. Freund and Jorge Vera March, 204 c 204 Robert M. Freund and Jorge Vera. All rights reserved. Linear Optimization with a Logarithmic Barrier Function
More informationExamination paper for TMA4180 Optimization I
Department of Mathematical Sciences Examination paper for TMA4180 Optimization I Academic contact during examination: Phone: Examination date: 26th May 2016 Examination time (from to): 09:00 13:00 Permitted
More informationIntroduction to Optimization Techniques. Nonlinear Optimization in Function Spaces
Introduction to Optimization Techniques Nonlinear Optimization in Function Spaces X : T : Gateaux and Fréchet Differentials Gateaux and Fréchet Differentials a vector space, Y : a normed space transformation
More informationCS-E4830 Kernel Methods in Machine Learning
CS-E4830 Kernel Methods in Machine Learning Lecture 3: Convex optimization and duality Juho Rousu 27. September, 2017 Juho Rousu 27. September, 2017 1 / 45 Convex optimization Convex optimisation This
More informationLecture 3: Lagrangian duality and algorithms for the Lagrangian dual problem
Lecture 3: Lagrangian duality and algorithms for the Lagrangian dual problem Michael Patriksson 0-0 The Relaxation Theorem 1 Problem: find f := infimum f(x), x subject to x S, (1a) (1b) where f : R n R
More informationAlgorithms for nonlinear programming problems II
Algorithms for nonlinear programming problems II Martin Branda Charles University in Prague Faculty of Mathematics and Physics Department of Probability and Mathematical Statistics Computational Aspects
More informationWhat s New in Active-Set Methods for Nonlinear Optimization?
What s New in Active-Set Methods for Nonlinear Optimization? Philip E. Gill Advances in Numerical Computation, Manchester University, July 5, 2011 A Workshop in Honor of Sven Hammarling UCSD Center for
More informationAN AUGMENTED LAGRANGIAN AFFINE SCALING METHOD FOR NONLINEAR PROGRAMMING
AN AUGMENTED LAGRANGIAN AFFINE SCALING METHOD FOR NONLINEAR PROGRAMMING XIAO WANG AND HONGCHAO ZHANG Abstract. In this paper, we propose an Augmented Lagrangian Affine Scaling (ALAS) algorithm for general
More informationMathematical Economics. Lecture Notes (in extracts)
Prof. Dr. Frank Werner Faculty of Mathematics Institute of Mathematical Optimization (IMO) http://math.uni-magdeburg.de/ werner/math-ec-new.html Mathematical Economics Lecture Notes (in extracts) Winter
More informationINTERIOR-POINT METHODS FOR NONCONVEX NONLINEAR PROGRAMMING: CONVERGENCE ANALYSIS AND COMPUTATIONAL PERFORMANCE
INTERIOR-POINT METHODS FOR NONCONVEX NONLINEAR PROGRAMMING: CONVERGENCE ANALYSIS AND COMPUTATIONAL PERFORMANCE HANDE Y. BENSON, ARUN SEN, AND DAVID F. SHANNO Abstract. In this paper, we present global
More informationOptimization and Root Finding. Kurt Hornik
Optimization and Root Finding Kurt Hornik Basics Root finding and unconstrained smooth optimization are closely related: Solving ƒ () = 0 can be accomplished via minimizing ƒ () 2 Slide 2 Basics Root finding
More informationYinyu Ye, MS&E, Stanford MS&E310 Lecture Note #06. The Simplex Method
The Simplex Method Yinyu Ye Department of Management Science and Engineering Stanford University Stanford, CA 94305, U.S.A. http://www.stanford.edu/ yyye (LY, Chapters 2.3-2.5, 3.1-3.4) 1 Geometry of Linear
More information5. Duality. Lagrangian
5. Duality Convex Optimization Boyd & Vandenberghe Lagrange dual problem weak and strong duality geometric interpretation optimality conditions perturbation and sensitivity analysis examples generalized
More informationI.3. LMI DUALITY. Didier HENRION EECI Graduate School on Control Supélec - Spring 2010
I.3. LMI DUALITY Didier HENRION henrion@laas.fr EECI Graduate School on Control Supélec - Spring 2010 Primal and dual For primal problem p = inf x g 0 (x) s.t. g i (x) 0 define Lagrangian L(x, z) = g 0
More informationDate: July 5, Contents
2 Lagrange Multipliers Date: July 5, 2001 Contents 2.1. Introduction to Lagrange Multipliers......... p. 2 2.2. Enhanced Fritz John Optimality Conditions...... p. 14 2.3. Informative Lagrange Multipliers...........
More information1 Newton s Method. Suppose we want to solve: x R. At x = x, f (x) can be approximated by:
Newton s Method Suppose we want to solve: (P:) min f (x) At x = x, f (x) can be approximated by: n x R. f (x) h(x) := f ( x)+ f ( x) T (x x)+ (x x) t H ( x)(x x), 2 which is the quadratic Taylor expansion
More informationPart 2: NLP Constrained Optimization
Part 2: NLP Constrained Optimization James G. Shanahan 2 Independent Consultant and Lecturer UC Santa Cruz EMAIL: James_DOT_Shanahan_AT_gmail_DOT_com WIFI: SSID Student USERname ucsc-guest Password EnrollNow!
More informationNonlinear Programming
Nonlinear Programming Kees Roos e-mail: C.Roos@ewi.tudelft.nl URL: http://www.isa.ewi.tudelft.nl/ roos LNMB Course De Uithof, Utrecht February 6 - May 8, A.D. 2006 Optimization Group 1 Outline for week
More informationAlgorithms for nonlinear programming problems II
Algorithms for nonlinear programming problems II Martin Branda Charles University Faculty of Mathematics and Physics Department of Probability and Mathematical Statistics Computational Aspects of Optimization
More informationConvex Optimization. Dani Yogatama. School of Computer Science, Carnegie Mellon University, Pittsburgh, PA, USA. February 12, 2014
Convex Optimization Dani Yogatama School of Computer Science, Carnegie Mellon University, Pittsburgh, PA, USA February 12, 2014 Dani Yogatama (Carnegie Mellon University) Convex Optimization February 12,
More informationLagrange Multipliers
Lagrange Multipliers (Com S 477/577 Notes) Yan-Bin Jia Nov 9, 2017 1 Introduction We turn now to the study of minimization with constraints. More specifically, we will tackle the following problem: minimize
More informationReview of Classical Optimization
Part II Review of Classical Optimization Multidisciplinary Design Optimization of Aircrafts 51 2 Deterministic Methods 2.1 One-Dimensional Unconstrained Minimization 2.1.1 Motivation Most practical optimization
More informationNumerisches Rechnen. (für Informatiker) M. Grepl P. Esser & G. Welper & L. Zhang. Institut für Geometrie und Praktische Mathematik RWTH Aachen
Numerisches Rechnen (für Informatiker) M. Grepl P. Esser & G. Welper & L. Zhang Institut für Geometrie und Praktische Mathematik RWTH Aachen Wintersemester 2011/12 IGPM, RWTH Aachen Numerisches Rechnen
More informationNotes on Constrained Optimization
Notes on Constrained Optimization Wes Cowan Department of Mathematics, Rutgers University 110 Frelinghuysen Rd., Piscataway, NJ 08854 December 16, 2016 1 Introduction In the previous set of notes, we considered
More informationMATH2070 Optimisation
MATH2070 Optimisation Nonlinear optimisation with constraints Semester 2, 2012 Lecturer: I.W. Guo Lecture slides courtesy of J.R. Wishart Review The full nonlinear optimisation problem with equality constraints
More informationOptimization Tutorial 1. Basic Gradient Descent
E0 270 Machine Learning Jan 16, 2015 Optimization Tutorial 1 Basic Gradient Descent Lecture by Harikrishna Narasimhan Note: This tutorial shall assume background in elementary calculus and linear algebra.
More informationMS&E 318 (CME 338) Large-Scale Numerical Optimization
Stanford University, Management Science & Engineering (and ICME) MS&E 318 (CME 338) Large-Scale Numerical Optimization 1 Origins Instructor: Michael Saunders Spring 2015 Notes 9: Augmented Lagrangian Methods
More informationUniversity of Houston, Department of Mathematics Numerical Analysis, Fall 2005
3 Numerical Solution of Nonlinear Equations and Systems 3.1 Fixed point iteration Reamrk 3.1 Problem Given a function F : lr n lr n, compute x lr n such that ( ) F(x ) = 0. In this chapter, we consider
More information"SYMMETRIC" PRIMAL-DUAL PAIR
"SYMMETRIC" PRIMAL-DUAL PAIR PRIMAL Minimize cx DUAL Maximize y T b st Ax b st A T y c T x y Here c 1 n, x n 1, b m 1, A m n, y m 1, WITH THE PRIMAL IN STANDARD FORM... Minimize cx Maximize y T b st Ax
More informationLecture 18: Optimization Programming
Fall, 2016 Outline Unconstrained Optimization 1 Unconstrained Optimization 2 Equality-constrained Optimization Inequality-constrained Optimization Mixture-constrained Optimization 3 Quadratic Programming
More informationIMM OPTIMIZATION WITH CONSTRAINTS CONTENTS. 2nd Edition, March K. Madsen, H.B. Nielsen, O. Tingleff
IMM CONTENTS OPTIMIZATION WITH CONSTRAINTS 2nd Edition, March 2004 K. Madsen, H.B. Nielsen, O. Tingleff. INTRODUCTION..... Smoothness and Descent Directions.... 4.2. Convexity.... 7 2. LOCAL, CONSTRAINED
More informationOptimization Methods
Optimization Methods Decision making Examples: determining which ingredients and in what quantities to add to a mixture being made so that it will meet specifications on its composition allocating available
More informationTMA 4180 Optimeringsteori KARUSH-KUHN-TUCKER THEOREM
TMA 4180 Optimeringsteori KARUSH-KUHN-TUCKER THEOREM H. E. Krogstad, IMF, Spring 2012 Karush-Kuhn-Tucker (KKT) Theorem is the most central theorem in constrained optimization, and since the proof is scattered
More informationMore First-Order Optimization Algorithms
More First-Order Optimization Algorithms Yinyu Ye Department of Management Science and Engineering Stanford University Stanford, CA 94305, U.S.A. http://www.stanford.edu/ yyye Chapters 3, 8, 3 The SDM
More informationOptimality Conditions
Chapter 2 Optimality Conditions 2.1 Global and Local Minima for Unconstrained Problems When a minimization problem does not have any constraints, the problem is to find the minimum of the objective function.
More informationNumerical optimization
Numerical optimization Lecture 4 Alexander & Michael Bronstein tosca.cs.technion.ac.il/book Numerical geometry of non-rigid shapes Stanford University, Winter 2009 2 Longest Slowest Shortest Minimal Maximal
More informationMath 273a: Optimization Basic concepts
Math 273a: Optimization Basic concepts Instructor: Wotao Yin Department of Mathematics, UCLA Spring 2015 slides based on Chong-Zak, 4th Ed. Goals of this lecture The general form of optimization: minimize
More informationComputational Optimization. Augmented Lagrangian NW 17.3
Computational Optimization Augmented Lagrangian NW 17.3 Upcoming Schedule No class April 18 Friday, April 25, in class presentations. Projects due unless you present April 25 (free extension until Monday
More informationAn Inexact Newton Method for Nonlinear Constrained Optimization
An Inexact Newton Method for Nonlinear Constrained Optimization Frank E. Curtis Numerical Analysis Seminar, January 23, 2009 Outline Motivation and background Algorithm development and theoretical results
More informationCONVEX FUNCTIONS AND OPTIMIZATION TECHINIQUES A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF
CONVEX FUNCTIONS AND OPTIMIZATION TECHINIQUES A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE IN MATHEMATICS SUBMITTED TO NATIONAL INSTITUTE OF TECHNOLOGY,
More information6.252 NONLINEAR PROGRAMMING LECTURE 10 ALTERNATIVES TO GRADIENT PROJECTION LECTURE OUTLINE. Three Alternatives/Remedies for Gradient Projection
6.252 NONLINEAR PROGRAMMING LECTURE 10 ALTERNATIVES TO GRADIENT PROJECTION LECTURE OUTLINE Three Alternatives/Remedies for Gradient Projection Two-Metric Projection Methods Manifold Suboptimization Methods
More informationConstrained Optimization Theory
Constrained Optimization Theory Stephen J. Wright 1 2 Computer Sciences Department, University of Wisconsin-Madison. IMA, August 2016 Stephen Wright (UW-Madison) Constrained Optimization Theory IMA, August
More information