CHAPTER 2: QUADRATIC PROGRAMMING


Overview

Quadratic programming (QP) problems are characterized by objective functions that are quadratic in the design variables, and by linear constraints. In this sense, QPs are a generalization of LPs and a special case of the general nonlinear programming problem. QPs are ubiquitous in engineering problems, including civil & environmental engineering systems. A classic example is least squares optimization, often performed during regression analysis.

Like LPs, QPs can be solved graphically. However, the nature of the solutions is quite different. Namely, interior optima are possible. This leads us toward conditions for optimality, which extend basic optimization principles learned in first-year calculus courses. Finally, QPs provide a building block for approximately solving nonlinear programs. That is, a nonlinear program can be solved by appropriately constructing a sequence of approximate QPs.

By the end of this chapter, students will be able to identify and formulate QPs. They will also be able to assess the nature of a QP's solution, i.e. unique local maxima/minima, infinite solutions, or no solution. Finally, they will have one tool to approximately solve a more general nonlinear programming problem.

Chapter Organization

This chapter is organized as follows:

(Section 1) Quadratic Programs (QP)
(Section 2) Least Squares
(Section 3) Graphical QP
(Section 4) Optimality Conditions
(Section 5) Sequential Quadratic Programming (SQP)

1 Quadratic Programs

A quadratic program (QP) is the problem of optimizing a quadratic objective function subject to linear constraints. Mathematically,

Minimize: (1/2) x^T Q x + R^T x + S    (1)

subject to: A x <= b    (2)
            A_eq x = b_eq    (3)

where x ∈ R^n is the vector of design variables. The remaining matrices/vectors have dimensions Q ∈ R^{n×n}, R ∈ R^n, S ∈ R, A ∈ R^{m×n}, b ∈ R^m, A_eq ∈ R^{l×n}, b_eq ∈ R^l, where n is the number of design variables, m is the number of inequality constraints, and l is the number of equality constraints.

Remark 1.1 (The S Term). Note that the S term in (1) can be dropped without loss of generality, since it has no impact on the optimal solution x*. That is, suppose x* is the optimal solution to (1)-(3). Then it is also the optimal solution to

Minimize: (1/2) x^T Q x + R^T x    (4)
subject to: A x <= b    (5)
            A_eq x = b_eq    (6)

As a consequence, we often disregard the S term. In fact, some computational solvers do not even accept the S term as an input.

Remark 1.2 (Quadratically Constrained Quadratic Program). A generalization of the QP is to consider quadratic inequality constraints, resulting in the quadratically constrained quadratic program (QCQP)

Minimize: (1/2) x^T Q x + R^T x    (7)
subject to: (1/2) x^T U x + V^T x + W <= 0    (8)
            A_eq x = b_eq    (9)

When U = 0, this problem degenerates into a standard QP. We will not discuss this class of problems further, beyond noting that it exists along with associated solvers.
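As a concrete illustration of the standard form (4)-(6), the sketch below sets up a small QP and passes it to the Matlab solver quadprog. The numerical values are invented placeholders for demonstration, not data from the text.

    % Minimal sketch: solve min (1/2) x'Qx + R'x  s.t.  A*x <= b  with quadprog.
    % All matrix values below are illustrative placeholders.
    Q   = [2 0; 0 2];      % quadratic term (symmetric)
    R   = [-2; -5];        % linear term
    A   = [1 2; -1 0];     % inequality constraints A*x <= b
    b   = [6; 0];
    Aeq = [];  beq = [];   % no equality constraints in this sketch
    x_star = quadprog(Q, R, A, b, Aeq, beq);

Note that quadprog accepts no constant term S, consistent with Remark 1.1.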

2 Least Squares

To this point, QPs are simply an abstract generalization of LPs. In this section, we seek to provide some practical motivation for QPs.

Consider an overdetermined linear system of equations. That is, consider a "skinny" matrix A and the following equation to solve:

y = Ax    (10)

where x ∈ R^n is a vector of unknowns, y ∈ R^m contains known data, and A ∈ R^{m×n} is a known matrix, where m > n. In general, eqn. (10) has no exact solution, since there are more equations than unknowns. We call this an overdetermined system of equations. A common approach is to seek a best-fit solution. For this, define the residual error

r = Ax − y    (11)

and consider a solution x that minimizes the norm of the residual, ||r||. This solution is called the least squares solution.¹ It is often used in regression analysis, among many other applications, as demonstrated below.

Example 2.1 (Linear Regression). Suppose you have collected measured data pairs (x_i, y_i), for i = 1, ..., N, where N > 6, as shown in Fig. 1. You seek to fit a fifth-order polynomial to this data, i.e.

y = c_0 + c_1 x + c_2 x^2 + c_3 x^3 + c_4 x^4 + c_5 x^5    (12)

[Figure 1: You seek to fit a fifth-order polynomial to the measured data above.]

The goal is to determine the parameters c_j, j = 0, ..., 5 that best fit the data in some sense. To this end, you may compute the residual for each data pair:

c_0 + c_1 x_1 + c_2 x_1^2 + c_3 x_1^3 + c_4 x_1^4 + c_5 x_1^5 − y_1 = r_1,
c_0 + c_1 x_2 + c_2 x_2^2 + c_3 x_2^3 + c_4 x_2^4 + c_5 x_2^5 − y_2 = r_2,
  ⋮
c_0 + c_1 x_N + c_2 x_N^2 + c_3 x_N^3 + c_4 x_N^4 + c_5 x_N^5 − y_N = r_N,    (13)

¹ All columns of A must be linearly independent.

which can be arranged into matrix-vector form Ac − y = r, where

A = [ 1  x_1  x_1^2  x_1^3  x_1^4  x_1^5
      1  x_2  x_2^2  x_2^3  x_2^4  x_2^5
      ⋮
      1  x_N  x_N^2  x_N^3  x_N^4  x_N^5 ],
c = [c_0; c_1; c_2; c_3; c_4; c_5],  y = [y_1; y_2; ...; y_N],  r = [r_1; r_2; ...; r_N].    (14)

Now we compute an optimal fit for c in the following sense. We seek the value of c which minimizes the squared residual

min_c (1/2)||r||^2 = (1/2) r^T r = (1/2)(Ac − y)^T (Ac − y) = (1/2) c^T A^T A c − y^T A c + (1/2) y^T y.    (15)

Note that (15) is quadratic in the variable c. In this case the problem is also unconstrained. As a result, we can set the gradient with respect to c to zero and directly solve for the minimizer:

∇_c (1/2)||r||^2 = A^T A c − A^T y = 0,  =>  A^T A c = A^T y,  =>  c = (A^T A)^{-1} A^T y.    (16)

This provides a direct formula for fitting the polynomial coefficients c_j, j = 0, ..., 5 using the measured data.

Exercise 1. Consider fitting the coefficients c_1, c_2, c_3 of the following sum of radial basis functions to data pairs (x_i, y_i), i = 1, ..., N:

y = c_1 e^{−(x−0.2)^2} + c_2 e^{−(x−0.5)^2} + c_3 e^{−(x−0.7)^2}    (17)

Formulate and solve the corresponding QP problem.

Exercise 2. Repeat the same exercise for the following Fourier series:

y = c_1 sin(ωx) + c_2 cos(ωx) + c_3 sin(2ωx) + c_4 cos(2ωx)    (18)
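Returning to the polynomial fit of Example 2.1, the normal-equation formula (16) translates directly into Matlab. Below is a brief sketch on synthetic data (the data-generating curve is our own invention for illustration). In practice the backslash operator is preferred, since it solves the least squares problem without explicitly forming (A^T A)^{-1}.

    % Synthetic data for illustration; any (x_i, y_i) pairs with N > 6 would do.
    N = 50;
    x = linspace(0, 1, N)';
    y = 2 - x + 3*x.^3 + 0.05*randn(N, 1);        % noisy "measurements"

    % Build the matrix A of eqn. (14): one column per power of x.
    A = [ones(N,1), x, x.^2, x.^3, x.^4, x.^5];

    % Least squares fit via the normal equations, eqn. (16):
    c = (A'*A) \ (A'*y);

    % Numerically preferred, equivalent solution:
    c_backslash = A \ y;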

3 Graphical QP

For problems of one, two, or three dimensions, it is possible to solve QPs graphically. Consider the following QP example:

min J = (x_1 − 2)^2 + (x_2 − 2)^2
s. to  2x_1 + 4x_2 <= 28
       x_1 + x_2 <= 8
       x_2 <= 6
       x_1, x_2 >= 0

The feasible set and corresponding iso-contours are illustrated in the left-hand side of Fig. 2. In this case, the solution is an interior optimum. That is, no constraints are active at the minimum. In contrast, consider the objective function J = (x_1 − 6)^2 + (x_2 − 6)^2 shown on the right-hand side of Fig. 2. In this case, the minimum occurs at the boundary and is unique.

[Figure 2: An interior optimum [LEFT] and boundary optimum [RIGHT] for a QP solved graphically.]
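Although the point of this section is the graphical method, the interior optimum above can be sanity-checked numerically. A minimal sketch, using the constraint set as written above and expanding J into the standard form (4) (the additive constant is dropped per Remark 1.1):

    % J = (x1-2)^2 + (x2-2)^2 = (1/2) x'Qx + R'x + const, with:
    Q  = [2 0; 0 2];
    R  = [-4; -4];
    A  = [2 4; 1 1; 0 1];     % 2x1 + 4x2 <= 28, x1 + x2 <= 8, x2 <= 6
    b  = [28; 8; 6];
    lb = [0; 0];              % x1, x2 >= 0
    x_star = quadprog(Q, R, A, b, [], [], lb);   % returns [2; 2], the interior optimum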

Exercise 3. Solve the following QP graphically:

min J = (x_1 − 2)^2 + (x_2 − 4)^2
s. to  x_1 + x_2 >= 3
       x_1 <= 3

Exercise 4. Solve the following QP graphically:

min J = x_2^2
s. to  x_1 + x_2 <= 1
       x_1, x_2 >= 0

4 Optimality Conditions

In this section, we establish a connection to optimization from high-school / first-year college² calculus. We establish this connection within the context of QPs, since the optimality conditions also reveal the nature of QP solutions. Consider an unconstrained QP of the form

min f(x) = (1/2) x^T Q x + R^T x    (19)

In calculus, you learned that a necessary condition for a minimizer is that the function's slope is zero at the optimum. We extend this notion to multivariable functions. That is, if x* is an optimum, then the gradient is zero at the optimum. Mathematically,

d/dx f(x*) = 0 = Q x* + R  =>  Q x* = −R    (20)

We call this condition the first-order necessary condition (FONC) for optimality. This condition is necessary for an optimum, but not sufficient for completely characterizing a minimizer or maximizer. As a result, we call a solution to the FONC a stationary point, x*. In calculus, the term extremum is often used.

Recall from calculus that the second derivative can reveal whether a stationary point is a minimizer, maximizer, or neither. For single-variable functions f(x), the nature of a stationary point x* is

² At UC Berkeley, the related course is Math 1A.

characterized by the following³:

x* is a minimizer if f''(x*) > 0
x* is a maximizer if f''(x*) < 0
x* is an inflection point if f''(x*) = 0    (21)

We now extend this notion to multivariable optimization problems. Consider the second derivative of a multivariable function, which is called the Hessian. Mathematically,

d²/dx² f(x*) = Q    (22)

and Q is the Hessian. The nature of the stationary point is given by the definiteness of the matrix Q, as shown in Table 1. Namely, if Q is positive definite, then x* is a local minimizer. If Q is negative definite, then x* is a local maximizer. It is also possible to have infinite solutions, which can be characterized by the Hessian Q. If Q is positive (negative) semi-definite, then x* is a valley (ridge). If Q is indefinite, then x* is a saddle point. We call this the second-order sufficient condition (SOSC). Visualizations of each type of stationary point are provided in Figs. 3 and 4. A quick review of positive definite matrices is provided in Section 4.1.

Table 1: Relationship between the Hessian and the nature of the stationary point x*.

Hessian matrix            Quadratic form        Nature of x*
positive definite         x^T Q x > 0           local minimizer
negative definite         x^T Q x < 0           local maximizer
positive semi-definite    x^T Q x >= 0          valley
negative semi-definite    x^T Q x <= 0          ridge
indefinite                x^T Q x any sign      saddle point

³ Technically, an inflection point is a point x* where the curve f(x) changes from concave to convex, or vice versa. It is possible that f''(x*) = 0 and the concavity does not change. An example is f(x) = x^4 at x* = 0. In this case x* is not an inflection point, but an undulation point.

[Figure 3: Visualizations of stationary points of different nature (Local Minimum, Local Maximum). In each case, the objective function takes the form f(x_1, x_2) = x^T Q x and has a stationary point at the origin (x_1*, x_2*) = (0, 0), denoted by the red star. The Hessian for each case: Local Minimum, Q = [3 2; 2 3]; Local Maximum, a negative definite Q.]

[Figure 4: Visualizations of stationary points of different nature (Valley, Ridge, Saddle Point). In each case, the objective function takes the form f(x_1, x_2) = x^T Q x and has a stationary point at the origin (x_1*, x_2*) = (0, 0), denoted by the red star. The Hessian for each case: Valley, Q = [8 4; 4 2]; Ridge, a negative semi-definite Q; Saddle Point, Q = [2 4; 4 −0.5].]

4.1 Review of Positive Definite Matrices

In linear algebra, matrix positive definiteness is a generalization of positivity for scalar variables.

Definition 4.1 (Positive Definite Matrix). Consider a symmetric matrix Q ∈ R^{n×n}. The following conditions are all equivalent:

Q is positive definite
x^T Q x > 0 for all x ≠ 0
all eigenvalues of Q are positive
−Q is negative definite

Similarly, the following conditions are all equivalent:

Q is positive semi-definite
x^T Q x >= 0 for all x
all eigenvalues of Q are nonnegative, and at least one eigenvalue is zero
−Q is negative semi-definite

In practice, the simplest way to check positive definiteness is to examine the signs of the eigenvalues. In Matlab, one can compute the eigenvalues using the eig command.

Exercise 5. Determine whether the following matrices are positive definite, negative definite, positive semi-definite, negative semi-definite, or indefinite.

(a) M = [1 0; 0 1]      (b) M = [9 3; 3 2]      (c) M = [−2 1; 1 −2]
(d) M = [0 1; 1 0]      (e) M = [8 4; 4 2]      (f) M = [2 4; 4 −0.5]
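For instance, the following short sketch classifies the valley Hessian from Fig. 4 using eig:

    M = [8 4; 4 2];
    lambda = eig(M);    % eigenvalues are 0 and 10
    % Nonnegative with a zero eigenvalue: M is positive semi-definite,
    % so as a Hessian it indicates a valley (cf. Table 1 and Fig. 4).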

4.2 Examples for Optimality Conditions

Example 4.1. Consider the following unconstrained QP:

f(x_1, x_2) = (3 − x_1)^2 + (4 − x_2)^2
            = x^T [1 0; 0 1] x + [−6 −8] x + 25    (23)

Check the FONC. That is, find values of x* = [x_1, x_2]^T where the gradient of f(x_1, x_2) equals zero. The FONC

∇f = [−6 + 2x_1; −8 + 2x_2] = 0    (24)

has the solution (x_1*, x_2*) = (3, 4). Next, check the SOSC. That is, check the positive definiteness of the Hessian:

∇²f = [2 0; 0 2],  positive definite    (25)

since the eigenvalues are 2, 2. Consequently, the point (x_1*, x_2*) = (3, 4) is a unique minimizer.

Example 4.2. Consider the following unconstrained QP:

min_{x_1,x_2} f(x_1, x_2) = 4x_1^2 + x_2^2 + 4 − 4x_1x_2 − 4x_1 + 2x_2
                          = x^T [4 −2; −2 1] x + [−4 2] x + 4    (26)

Check the FONC. That is, find values of x* = [x_1, x_2]^T where the gradient of f(x_1, x_2) equals zero. The FONC

∇f = [−4 + 8x_1 − 4x_2; 2 − 4x_1 + 2x_2] = 0    (27)

has an infinity of solutions (x_1*, x_2*) on the line 2x_1 − x_2 = 1. Next, check the SOSC. That is, check the positive definiteness of the Hessian:

∇²f = [8 −4; −4 2],  positive semi-definite    (28)

since the eigenvalues are 0 and 10. Consequently, there is an infinite set of minima (a valley) on the line 2x_1 − x_2 = 1.
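Example 4.2 can also be verified numerically with a few lines of Matlab (a sketch; the variable names are ours). Since the Hessian is singular, pinv returns one particular stationary point on the valley line:

    % f(x) = x'*P*x + r'*x + 4, from eqn. (26):
    P = [4 -2; -2 1];
    r = [-4; 2];
    H = 2*P;               % Hessian of x'*P*x + r'*x
    lambda = eig(H);       % 0 and 10: positive semi-definite
    x_s = pinv(H)*(-r);    % [0.4; -0.2], which satisfies 2*x1 - x2 = 1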

since the eigenvalues are,. Consequently, there is an infinite set of minima (valley) on the line 2x x 2 =. Exercise 6. Examine the FONC and SOSC for the following unconstrained QP problems. What is the stationary point x? What is its nature, i.e. unique minimizer, unique maximizer, valley, ridge, or no solution? (a) min x, f(x, ) = 3x + 4 2 + x (b) min x, f(x, ) = 2 4x +. 2 + (c) min x,,x 3 f(x,, x 3 ) = + 2x2 2 + 3x2 3 + 3x + 4x x 3 3 x 3 (d) min x,,x 3 f(x,, x 3 ) = 2 + x + 2 + x 3 + 3 6x 7 8x 3 + 9 (e) min x,,x 3 f(x,, x 3 ) = + 4x2 2 + 4x2 3 + 4x + 4x x 3 + 6 x 3 Sequential Quadratic Programming In our discussion of QPs so far, we have defined QPs, motivated their use with least squares in regression analysis, examined graphical solutions, and discussed optimality conditions for unconstrained QPs. To this point, however, we are still not equipped to solve general nonlinear programs (NLPs). In this section, we provide a direct method for handling NLPs with constraints, called the Sequential Quadratic Programming (SQP) method. The idea is simple. We solve a single NLP as a sequence QP subproblems. In particular, at each iteration we approximate the objective function and constraints by a QP. Then, within each iteration, we solve the corresponding QP and use the solution as the next iterate. This process continues until an appropriate stopping criterion is satisfied. SQP is very widely used in engineering problems and often the first go-to method for NLPs. For many practical energy system problems, it produces fast convergence thanks to its strong theoretical basis. This method is commonly used under-the-hood of Matlab function fmincon. Consider the general NLP min x f(x) (29) subject to g(x), (3) h(x) =, (3) and the k th iterate x k for the decision variable. We utilize the Taylor series expansion. At each iteration of SQP, we consider the 2nd-order Taylor series expansion of the objective function (29), Revised December 9, 24 NOT FOR DISTRIBUTION Page 2

and 1st-order expansions of the constraints (30)-(31), around x = x_k:

f(x) ≈ f(x_k) + ∇f(x_k)^T (x − x_k) + (1/2)(x − x_k)^T ∇²f(x_k) (x − x_k),    (32)
g(x) ≈ g(x_k) + ∇g(x_k)^T (x − x_k) <= 0,    (33)
h(x) ≈ h(x_k) + ∇h(x_k)^T (x − x_k) = 0.    (34)

To simplify the notation, define Δx = x − x_k. Then we arrive at the following approximate QP:

min (1/2) Δx^T Q Δx + R^T Δx,    (35)
s. to  A Δx <= b,    (36)
       A_eq Δx = b_eq,    (37)

where

Q = ∇²f(x_k),   R = ∇f(x_k),    (38)
A = ∇g(x_k)^T,  b = −g(x_k),    (39)
A_eq = ∇h(x_k)^T,  b_eq = −h(x_k).    (40)

Suppose (35)-(37) yields the optimal solution Δx*. Then let x_{k+1} = x_k + Δx*, and repeat.

Remark 5.1. Note that the iterates in SQP are not guaranteed to be feasible for the original NLP problem. That is, it is possible to obtain a solution to the QP subproblem which satisfies the approximate QP's constraints, but not the original NLP constraints.

Example 5.1. Consider the NLP

min_{x_1,x_2} e^{−x_1} + (x_2 − 2)^2,    (41)
s. to  x_1 x_2 <= 1,    (42)

with the initial guess [x_{1,0}, x_{2,0}]^T = [1, 1]^T. By hand, formulate the Q, R, A, b matrices for the first three iterates. Use the Matlab command quadprog to solve each subproblem. What is the solution after three iterations?

We have f(x) = e^{−x_1} + (x_2 − 2)^2 and g(x) = x_1 x_2 − 1. The iso-contours of the objective function and the constraint are provided in Fig. 5. From visual inspection, it is clear the optimal solution is near

[0.5, 2]^T. We seek to formulate the approximate QP subproblem

min (1/2) Δx^T Q Δx + R^T Δx    (43)
s. to  A Δx <= b    (44)

Taking derivatives of f(x) and g(x), one obtains

Q = [e^{−x_1} 0; 0 2],   R = [−e^{−x_1}; 2(x_2 − 2)],    (45)
A = [x_2  x_1],   b = 1 − x_1 x_2.    (46)

Now consider the initial guess [x_{1,0}, x_{2,0}]^T = [1, 1]^T. Note that this guess is feasible. We obtain the following matrices for the first QP subproblem:

Q = [e^{−1} 0; 0 2],   R = [−e^{−1}; −2],   A = [1  1],   b = 0.

Solving this QP subproblem results in Δx = [−0.6893, 0.6893]^T. Then the next iterate is given by [x_{1,1}, x_{2,1}] = [x_{1,0}, x_{2,0}] + Δx = [0.3107, 1.6893]. Repeating the formulation and solution of the QP subproblem at iteration 1 produces [x_{1,2}, x_{2,2}] = [0.5443, 1.9483]. Note that this solution is infeasible. Continued iterations will produce solutions that converge toward the true solution.

Iteration    [x_1, x_2]           f(x)      g(x)
0            [1, 1]               1.3679    0
1            [0.3107, 1.6893]     0.829     −0.4751
2            [0.5443, 1.9483]     0.5829    0.0605
3            [0.5220, 1.9171]     0.6002    0.001
4            [0.5211, 1.9192]     0.6004    −0.000

[Figure 5 & Table 2: [LEFT] Iso-contours of the objective function and constraint for Example 5.1. [RIGHT] Numerical results for the first four iterations of SQP. Note that some iterates are infeasible.]
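The iteration just described is compact enough to script. The following is a minimal sketch of the SQP loop for Example 5.1, with quadprog solving each subproblem (our own implementation of the procedure above, with no step-size safeguards):

    % SQP for Example 5.1: min exp(-x1) + (x2-2)^2  s.t.  x1*x2 - 1 <= 0.
    x = [1; 1];                              % initial guess
    for k = 1:4
        Q = [exp(-x(1)) 0; 0 2];             % Hessian of f at x_k, eqn. (38)
        R = [-exp(-x(1)); 2*(x(2) - 2)];     % gradient of f at x_k
        A = [x(2) x(1)];                     % gradient of g at x_k, eqn. (39)
        b = -(x(1)*x(2) - 1);                % b = -g(x_k)
        dx = quadprog(Q, R, A, b);           % solve QP subproblem (35)-(36)
        x = x + dx;                          % step to the next iterate
    end
    % After four iterations, x is approximately [0.5211; 1.9192].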

SQP provides an algorithmic way to solve NLPs in civil & environmental engineering systems. However, it still relies on approximations, namely truncated Taylor series expansions, to solve the optimization problem via a sequence of QP subproblems. In Chapter 4, we will discuss a direct method for solving NLPs, without approximation.

Exercise 7 (Example 7.1 of [1]). Consider the NLP

min_{x_1,x_2} x_1 x_2^2 + x_1    (47)
s. to  x_1 + x_2 = 1,    (48)
       x_1 >= 0,    (49)
       x_2 >= 0,    (50)

with the initial guess [x_{1,0}, x_{2,0}]^T = [2, 0.5]^T. By hand, formulate the Q, R, A, b matrices for the first three iterates. Use the Matlab command quadprog to solve each subproblem. What is the solution after three iterations?

Exercise 8 (Exercise 7.33 of [1]). Consider the NLP

min_{x_1,x_2} x_1^2 − 6x_1x_2 + x_2^2    (51)
s. to  x_1 x_2 = 1,    (52)
       x_1 >= 0,    (53)

with the initial guess [x_{1,0}, x_{2,0}]^T = [1, 2]^T. By hand, formulate the Q, R, A, b matrices for the first three iterates. Use the Matlab command quadprog to solve each subproblem. What is the solution after three iterations?

6 Notes

Many solvers have been developed specifically for quadratic programs. In Matlab, the quadprog command uses algorithms tailored to the specific structure of QPs. More information on the theory behind QPs can be found in Section 4.4 of [2], including applications such as minimum variance problems and the famous Markowitz portfolio optimization problem in economics. Readers interested in further details and examples/exercises for optimality conditions should consult Section 4.3 of [1]. Section 7.7 of [1] also provides an excellent exposition of Sequential Quadratic Programming. Applications of QPs in systems engineering problems include electric vehicles [3], hydropower networks [4], groundwater planning and management [5], transportation engineering [6], financial engineering [7], and structural reliability [8].

References

[1] P. Y. Papalambros and D. J. Wilde, Principles of Optimal Design: Modeling and Computation. Cambridge University Press, 2000.

[2] S. Boyd and L. Vandenberghe, Convex Optimization. Cambridge University Press, 2009.

[3] S. Bashash and H. Fathy, "Optimizing demand response of plug-in hybrid electric vehicles using quadratic programming," in American Control Conference (ACC), 2013, pp. 716-721.

[4] S. A. A. Moosavian, A. Ghaffari, and A. Salimi, "Sequential quadratic programming and analytic hierarchy process for nonlinear multiobjective optimization of a hydropower network," Optimal Control Applications and Methods, vol. 31, no. 4, pp. 351-364, 2010.

[5] W. Yeh, "Systems analysis in groundwater planning and management," Journal of Water Resources Planning and Management, vol. 118, no. 3, pp. 224-237, 1992.

[6] M. E. O'Kelly, "A quadratic integer program for the location of interacting hub facilities," European Journal of Operational Research, vol. 32, no. 3, pp. 393-404, 1987.

[7] R. O. Michaud, "The Markowitz optimization enigma: Is 'optimized' optimal?" Financial Analysts Journal, vol. 45, no. 1, pp. 31-42, 1989.

[8] P.-L. Liu and A. D. Kiureghian, "Optimization algorithms for structural reliability," Structural Safety, vol. 9, no. 3, pp. 161-177, 1991.