CONVEX FUNCTIONS AND OPTIMIZATION TECHNIQUES
CONVEX FUNCTIONS AND OPTIMIZATION TECHNIQUES

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE IN MATHEMATICS

SUBMITTED TO NATIONAL INSTITUTE OF TECHNOLOGY, ROURKELA

BY ARUN KUMAR GUPTA, ROLL NO. 409MA2066

UNDER THE SUPERVISION OF Dr. ANIL KUMAR

DEPARTMENT OF MATHEMATICS, NATIONAL INSTITUTE OF TECHNOLOGY, ROURKELA
NATIONAL INSTITUTE OF TECHNOLOGY ROURKELA

DECLARATION

I hereby certify that the work being presented in the thesis entitled "Convex Functions and Optimization Techniques," in partial fulfillment of the requirement for the award of the degree of Master of Science, submitted in the Department of Mathematics, National Institute of Technology, Rourkela, is an authentic record of my own work carried out under the supervision of Dr. Anil Kumar. The matter embodied in this thesis has not been submitted by me for the award of any other degree.

Date: (Arun Kumar Gupta)

This is to certify that the above statement made by the candidate is correct to the best of my knowledge.

Dr. Anil Kumar, Department of Mathematics, National Institute of Technology Rourkela, Orissa, India.
ACKNOWLEDGEMENTS

I wish to express my deep sense of gratitude to my supervisor, Prof. Anil Kumar, Department of Mathematics, National Institute of Technology, Rourkela, for his inspiring guidance and assistance in the preparation of this thesis. I am grateful to Prof. P.C. Panda, Director, National Institute of Technology, Rourkela, for providing excellent facilities in the Institute for carrying out research. I also take the opportunity to acknowledge, quite explicitly and with gratitude, my debt to the Head of the Department, Prof. G.K. Panda, and all the professors and staff of the Department of Mathematics, National Institute of Technology, Rourkela, for their encouragement and valuable suggestions during the preparation of this thesis. I am extremely grateful to my parents and my friends, who are a constant source of inspiration for me. (ARUN KUMAR GUPTA)
TABLE OF CONTENTS

Chapter 1  Introduction
Chapter 2  Convex Functions and Generalizations
Chapter 3  Constrained Minimization
Chapter 4  Optimality Conditions
Chapter 5  References
Chapter 1

Introduction:

Optimization is the process of maximizing or minimizing a desired objective function while satisfying the prevailing constraints. Optimization problems fall into two major divisions: linear programming problems and nonlinear programming problems. Modern game theory, dynamic programming, and integer programming are also part of optimization theory and have a wide range of applications in modern science, economics, and management. In the present work I compare the solution of mathematical programming problems by the graphical method with other approaches, together with the underlying theory. Unlike linear programming, where multidimensional problems have a great many applications, nonlinear programming problems are mostly considered here in only two variables. For such problems we can therefore plot the solution space in two dimensions and obtain a concrete picture of the feasible region, which is a useful first step toward a solution. Nonlinear programming deals with the problem of optimizing an objective function in the presence of equality and inequality constraints. The development of highly efficient and robust algorithms and software for linear programming, the advent of high-speed computers, and the education of managers and practitioners in the advantages and profitability of mathematical modeling and analysis have made linear programming an important tool for solving problems in diverse fields. However, many realistic problems cannot be adequately represented or approximated as a linear program owing to nonlinearity of the objective function or of some of the constraints.
Chapter 2

Convex Functions and Generalizations:

Convex and concave functions have many special and important properties. In this chapter I introduce convex and concave functions and develop some of their properties. For example, any local minimum of a convex function over a convex set is also a global minimum. These properties can be utilized in developing suitable optimality conditions and computational schemes for optimization problems that involve convex and concave functions.

2.1 Definitions and basic properties:

Convex function: A function f defined on a convex subset X of ℝⁿ is said to be convex on X if

f(λx₁ + (1 − λ)x₂) ≤ λf(x₁) + (1 − λ)f(x₂)

for each x₁, x₂ ∈ X and λ ∈ [0, 1]. The function f is called strictly convex on X if the above inequality holds as a strict inequality for each pair of distinct x₁, x₂ ∈ X and λ ∈ (0, 1).

Concave function: A real-valued function f defined on a convex subset X of ℝⁿ is said to be concave on X if

f(λx₁ + (1 − λ)x₂) ≥ λf(x₁) + (1 − λ)f(x₂)

for each x₁, x₂ ∈ X and λ ∈ [0, 1]. The function f is called strictly concave on X if the above inequality is strict for distinct x₁, x₂ ∈ X and λ ∈ (0, 1). Also, f(x) is concave on [a, b] if and only if −f(x) is convex on [a, b].

Now let us consider the geometric interpretation of convex and concave functions. Let x₁, x₂ be two distinct points in the domain of f, and consider the points λx₁ + (1 − λ)x₂ with λ ∈ [0, 1]. For a convex function, the value of f at points on the line segment joining x₁ and x₂ is less than or equal to the height of the chord joining the points (x₁, f(x₁)) and (x₂, f(x₂)). For a concave function, the chord lies below the function itself. Hence, a function is both convex and concave if and only if it is affine. Figure 2.1 shows some examples of convex and concave functions.
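The defining inequality of convexity can be checked numerically on a grid of values of λ. The sketch below does this for f(x) = x² (a standard convex example chosen for illustration, not one taken verbatim from the thesis):

```python
# Numerical check of the convexity inequality
#   f(l*x1 + (1-l)*x2) <= l*f(x1) + (1-l)*f(x2)
# on a grid of values l in [0, 1].

def is_midpoint_convex(f, x1, x2, samples=11):
    """Test the defining inequality of convexity on a grid of lambdas."""
    for k in range(samples):
        lam = k / (samples - 1)
        lhs = f(lam * x1 + (1 - lam) * x2)
        rhs = lam * f(x1) + (1 - lam) * f(x2)
        if lhs > rhs + 1e-12:          # allow tiny floating-point slack
            return False
    return True

f = lambda x: x ** 2        # convex, so the check passes
g = lambda x: -abs(x)       # concave, so the check fails

print(is_midpoint_convex(f, -3.0, 5.0))   # True
print(is_midpoint_convex(g, -3.0, 5.0))   # False
```

A grid test like this can only refute convexity, never prove it, but it is a quick sanity check on a candidate function.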
Figure 2.1: examples of a convex function, a concave function, and a function that is neither convex nor concave.

The following are some examples of convex functions:
1. Any quadratic function f(x) = ax² + bx + c with a ≥ 0.
2. Powers: f(x) = xᵖ is convex on (0, ∞) for p ≥ 1 or p ≤ 0.
3. Powers of the absolute value: f(x) = |x|ᵖ for p ≥ 1.
4. Exponential: f(x) = e^(ax) for any real a.
5. Every linear transformation taking values in ℝ is convex.
6. The absolute value function f(x) = |x|.
7. Every norm is a convex function, by the triangle inequality and positive homogeneity.
8. f(x) = x³ on the interval [0, ∞).
Examples of concave functions:
1. f(x) = √x on [0, ∞).
2. f(x) = −x².
3. Any linear function f(x) = ax + b.
4. The function f(x) = sin x is concave on the interval [0, π].
5. Logarithm: f(x) = log x on (0, ∞).

We give below some particularly important instances of convex functions that arise very often in practice.

1. Let f₁, f₂, …, f_k : ℝⁿ → ℝ be convex functions. Then,
(a) f(x) = Σⱼ αⱼfⱼ(x), where αⱼ ≥ 0 for j = 1, …, k, is a convex function.
(b) f(x) = max{f₁(x), f₂(x), …, f_k(x)} is a convex function.
2. Suppose that g : ℝⁿ → ℝ is a concave function. Let S = {x : g(x) > 0}, and define f(x) = 1/g(x). Then f is convex over S.
3. Let g : ℝ → ℝ be a nondecreasing, univariate, convex function, and let h : ℝⁿ → ℝ be a convex function. Then the composite function f defined as f(x) = g(h(x)) is a convex function.
4. Let g : ℝᵐ → ℝ be a convex function, and let h be an affine function of the form h(x) = Ax + b, where A is an m × n matrix and b is an m-vector. Then the composite function f defined as f(x) = g(h(x)) is a convex function.

2.2 Epigraph and hypograph of a function:

A function f on S can be described by the set {(x, f(x)) : x ∈ S}, which is referred to as the graph of the function. We can construct two sets related to the graph of f: the epigraph, which consists of points on or above the graph of f, and the hypograph, which consists of points on or below the graph of f. These notions are made precise in the definition below.

Definition: Let S be a nonempty set in ℝⁿ and let f : S → ℝ. The epigraph of f, denoted epi f, is a subset of ℝⁿ⁺¹ defined by

epi f = {(x, y) : x ∈ S, y ∈ ℝ, y ≥ f(x)},

and the strict epigraph of the function is

{(x, y) : x ∈ S, y ∈ ℝ, y > f(x)}.
The hypograph of a function f, denoted hyp f, is a subset of ℝⁿ⁺¹ defined by

hyp f = {(x, y) : x ∈ S, y ∈ ℝ, y ≤ f(x)},

and the strict hypograph of the function is

{(x, y) : x ∈ S, y ∈ ℝ, y < f(x)}.

A function is convex if and only if its epigraph is a convex set and, equivalently, a function is concave if and only if its hypograph is a convex set.

2.3 Differentiable convex functions:

Now we focus on differentiable convex and concave functions. A function f defined on an open convex set S of ℝⁿ is said to be differentiable if the gradient

∇f(x) = (∂f(x)/∂x₁, ∂f(x)/∂x₂, …, ∂f(x)/∂xₙ)

exists at each x ∈ S.

First-order differentiability condition: Let S be a nonempty open convex set in ℝⁿ, and let f : S → ℝ be differentiable on S. Then f is convex if and only if

f(x) ≥ f(x̄) + ∇f(x̄)ᵀ(x − x̄) for all x, x̄ ∈ S.
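The first-order condition says that a differentiable convex function lies on or above each of its tangent lines. A small numerical sketch, using f(x) = eˣ (a standard convex example chosen here for illustration):

```python
# First-order characterization of convexity in one variable:
#   f(x) >= f(xb) + f'(xb) * (x - xb)  for all x, xb.
import math

def tangent_gap(f, fprime, xb, x):
    """f(x) minus the tangent line at xb; nonnegative when convexity holds."""
    return f(x) - (f(xb) + fprime(xb) * (x - xb))

f, fp = math.exp, math.exp          # f(x) = e^x, whose derivative is itself
gaps = [tangent_gap(f, fp, xb, x)
        for xb in (-2.0, 0.0, 1.5)
        for x in (-3.0, -1.0, 0.5, 2.0)]
print(min(gaps))                    # positive: the graph stays above every tangent
```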
Second-order differentiability condition: Let S be a nonempty set in ℝⁿ, and let f : S → ℝ. Then f is said to be twice differentiable at x̄ if there exist a vector ∇f(x̄) and a symmetric n × n matrix H(x̄), called the Hessian matrix, such that

f(x) = f(x̄) + ∇f(x̄)ᵀ(x − x̄) + ½(x − x̄)ᵀH(x̄)(x − x̄) + o(‖x − x̄‖²).

For a twice differentiable function f, the Hessian matrix H(x) is comprised of the second-order partial derivatives:

H(x) = [∂²f(x)/∂xᵢ∂xⱼ], i, j = 1, …, n.

Example: Let f(x₁, x₂) = −x₁² − x₂² + x₁x₂ (a representative strictly concave quadratic). We have

∇f(x) = (−2x₁ + x₂, −2x₂ + x₁) and H(x) = [−2 1; 1 −2].

Taking x̄ = (0, 0), the second-order expansion of this function is

f(x) = f(x̄) + ∇f(x̄)ᵀ(x − x̄) + ½(x − x̄)ᵀH(x̄)(x − x̄).

Since the given function is quadratic, no remainder term is present, and so the above representation is exact. Here −H(x) is positive definite, so H(x) is negative definite and the function f is strictly concave.
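Definiteness of a Hessian is most easily checked through its eigenvalues: all positive means positive definite, all negative means negative definite. A sketch for the strictly concave quadratic f(x₁, x₂) = −x₁² − x₂² + x₁x₂ (an illustrative example with a constant Hessian; numpy is assumed to be available):

```python
# Checking definiteness of a Hessian via its eigenvalues.
import numpy as np

# Hessian of f(x1, x2) = -x1**2 - x2**2 + x1*x2 (constant for a quadratic).
H = np.array([[-2.0, 1.0],
              [1.0, -2.0]])

eigvals = np.linalg.eigvalsh(H)     # eigvalsh: for symmetric matrices
print(eigvals)                      # [-3. -1.]

negative_definite = bool(np.all(eigvals < 0))
print(negative_definite)            # True, so f is strictly concave
```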
Example: Let f(x₁, x₂) = x₁³ + 2x₂² (another illustrative function). We have ∇f(x) = (3x₁², 4x₂) and H(x) = [6x₁ 0; 0 4]. Hence the second-order expansion of this function about a point x̄ is

f(x) = f(x̄) + ∇f(x̄)ᵀ(x − x̄) + ½(x − x̄)ᵀH(x̄)(x − x̄) + o(‖x − x̄‖²).

Theorem: Let S be a nonempty open convex set in ℝⁿ, and let f : S → ℝ be twice differentiable on S. Then f is convex on S if and only if the Hessian matrix is positive semidefinite (PSD) at each point of S; that is, for any x̄ ∈ S and any x ∈ ℝⁿ, we have xᵀH(x̄)x ≥ 0. Symmetrically, a function f is concave on S if and only if its Hessian matrix is negative semidefinite (NSD) everywhere in S; that is, for any x̄ ∈ S and any x ∈ ℝⁿ, xᵀH(x̄)x ≤ 0.

2.4 Generalizations of a convex function:

In this section we present various types of functions that are similar to convex and concave functions but share only some of their desirable properties. Many of the results do not require the restrictive assumption of convexity, but only the less restrictive assumptions of quasi-convexity or pseudo-convexity.

2.4.1 Quasi-convex functions: A function f defined on a convex subset S of a real vector space is quasi-convex if

f(λx₁ + (1 − λ)x₂) ≤ max{f(x₁), f(x₂)}

for any x₁, x₂ ∈ S and λ ∈ [0, 1]. Furthermore, if

f(λx₁ + (1 − λ)x₂) < max{f(x₁), f(x₂)}

whenever f(x₁) ≠ f(x₂) and λ ∈ (0, 1), then f is strictly quasi-convex. The function f is said to be quasi-concave if −f is quasi-convex, and strictly quasi-concave if −f is strictly quasi-convex. Equivalently, a function f is quasi-concave if

f(λx₁ + (1 − λ)x₂) ≥ min{f(x₁), f(x₂)}
and strictly quasi-concave if

f(λx₁ + (1 − λ)x₂) > min{f(x₁), f(x₂)} whenever f(x₁) ≠ f(x₂) and λ ∈ (0, 1).

Examples of quasi-convex functions:
f(x) = √|x| is quasi-convex on ℝ.
The floor function f(x) = ⌊x⌋ is quasi-linear on ℝ.

2.4.2 Differentiable quasi-convex functions: Let S be a nonempty open convex set in ℝⁿ, and let f : S → ℝ be differentiable on S. Then f is quasi-convex if and only if either one of the following equivalent statements holds:
If f(x₁) ≤ f(x₂), then ∇f(x₂)ᵀ(x₁ − x₂) ≤ 0.
If ∇f(x₂)ᵀ(x₁ − x₂) > 0, then f(x₁) > f(x₂).

To illustrate the above theorem, let f(x) = x³. To check quasi-convexity, suppose f(x₁) ≤ f(x₂). This is true only if x₁ ≤ x₂. Now consider ∇f(x₂)(x₁ − x₂) = 3x₂²(x₁ − x₂). Since x₁ − x₂ ≤ 0 and 3x₂² ≥ 0, we have ∇f(x₂)(x₁ − x₂) ≤ 0, and hence, by the above theorem, f is quasi-convex.

Next let f(x) = x³ − x, the sum of the quasi-convex functions x³ and −x. Let x₁ = −1 and x₂ = 0. Then f(x₁) = 0 and f(x₂) = 0, so that f(x₁) ≤ f(x₂). But ∇f(x₂)(x₁ − x₂) = (3·0² − 1)(−1 − 0) = 1 > 0. By the necessary part of the theorem, f is not quasi-convex. This shows that the sum of two quasi-convex functions is not necessarily quasi-convex.

Propositions:
A function which is both quasi-convex and quasi-concave is called quasi-linear.
Every convex function is quasi-convex.
A concave function can be a quasi-convex function. For example, log x is concave, and it is quasi-convex.
If f(x) and g(x) are positive convex decreasing functions, then f(x)g(x) is quasi-convex.
The sum of two quasi-convex functions is not necessarily quasi-convex.
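The failure of quasi-convexity for a sum can also be exhibited directly from the defining inequality. A sketch for f(x) = x³ − x (counterexample values chosen for illustration):

```python
# x**3 and -x are each quasi-convex (both are monotone), but their sum
# f(x) = x**3 - x violates the defining inequality
#   f(l*x1 + (1-l)*x2) <= max(f(x1), f(x2)).

def f(x):
    return x ** 3 - x

x1, x2, lam = -1.0, 0.0, 0.5
point = lam * x1 + (1 - lam) * x2      # midpoint -0.5
lhs = f(point)                         # 0.375
rhs = max(f(x1), f(x2))                # 0.0
print(lhs > rhs)                       # True: quasi-convexity is violated
```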
2.4.3 Strongly quasi-convex functions: The concept of strong quasi-convexity extends the notion of strict convexity. Let S be a nonempty convex set in ℝⁿ, and let f : S → ℝ. The function f is said to be strongly quasi-convex if for each x₁, x₂ ∈ S with x₁ ≠ x₂ we have

f(λx₁ + (1 − λ)x₂) < max{f(x₁), f(x₂)} for each λ ∈ (0, 1).

The function f is said to be strongly quasi-concave if −f is strongly quasi-convex. From the definitions of convex, quasi-convex, and strictly quasi-convex functions, the following statements hold:
Every strictly convex function is strongly quasi-convex.
Every strongly quasi-convex function is strictly quasi-convex.
Every strongly quasi-convex function is quasi-convex.

2.4.4 Pseudo-convex functions: Differentiable strongly (or strictly) quasi-convex functions do not share the property of convex functions which states that if ∇f(x̄) = 0 at some x̄, then x̄ is a global minimum of f. This motivates the definition of pseudo-convex functions, which share this important property with convex functions and lead to a generalization of various derivative-based optimality conditions.

Definition: Let S be a nonempty open convex set in ℝⁿ, and let f : S → ℝ be differentiable on S. The function f is said to be pseudo-convex if for each x₁, x₂ ∈ S with ∇f(x₁)ᵀ(x₂ − x₁) ≥ 0 we have f(x₂) ≥ f(x₁); or, equivalently, if f(x₂) < f(x₁), then ∇f(x₁)ᵀ(x₂ − x₁) < 0. The function f is said to be pseudo-concave if −f is pseudo-convex. The function f is said to be strictly pseudo-convex if, for each pair of distinct x₁, x₂ ∈ S satisfying ∇f(x₁)ᵀ(x₂ − x₁) ≥ 0, we have f(x₂) > f(x₁); or, equivalently, if for each pair of distinct x₁, x₂ ∈ S, f(x₂) ≤ f(x₁) implies ∇f(x₁)ᵀ(x₂ − x₁) < 0. The function f is said to be strictly pseudo-concave if −f is strictly pseudo-convex.
2.5 Relationship among the various types of convex functions:

Strictly convex ⟹ Convex
Strictly convex ⟹ Strictly pseudo-convex (under differentiability)
Convex ⟹ Pseudo-convex (under differentiability)
Strictly pseudo-convex ⟹ Strongly quasi-convex
Pseudo-convex ⟹ Strictly quasi-convex
Strongly quasi-convex ⟹ Strictly quasi-convex
Strictly quasi-convex ⟹ Quasi-convex (under lower semicontinuity)
Chapter 3

Constrained Minimization:

3.1 Introduction and problem formulation:

The majority of engineering problems involve constrained minimization; that is, the task is to minimize a function subject to constraints. A constrained problem may be expressed in the following general nonlinear programming form:

Minimize f(x)
subject to gᵢ(x) ≤ 0, i = 1, …, m, (3.1.1)
and hⱼ(x) = 0, j = 1, …, l,

where x = (x₁, x₂, …, xₙ)ᵀ is a column vector of n real-valued design variables. In equation (3.1.1), f is the objective or cost function, the gᵢ are inequality constraints, and the hⱼ are equality constraints. A vector x satisfying all the constraints is called a feasible solution to the problem. The collection of all such solutions forms the feasible region. The nonlinear programming problem, then, is to find a feasible point x̄ such that f(x̄) ≤ f(x) for each feasible point x. Such a point is called an optimal solution. We define the feasible set Ω as

Ω = {x : gᵢ(x) ≤ 0 for i = 1, …, m, and hⱼ(x) = 0 for j = 1, …, l}.

The phrase "minimization problem" refers to finding a local minimizer in the feasible set Ω.

3.2 Graphical method:

The graphical method for solving a mathematical programming problem is based on a well-defined set of logical steps. Using the graphical method, a given programming problem can often be solved with a minimum of computational effort. The simplex method is the well-studied and widely useful method for solving linear programming problems, while for the class of nonlinear programming problems no comparably general method exists. Programming problems involving only two variables can easily be solved graphically. The systematic procedure is as follows:

Algorithm: The solution of a nonlinear programming problem by the graphical method, in general, involves the following steps:
Step 1: Construct the graph of the given nonlinear programming problem.
Step 2: Identify the convex region (solution space) generated by the objective function and constraints of the given problem.
Step 3: Determine the point in the convex region at which the objective function is optimal (maximum or minimum).
Step 4: Interpret the optimal solution so obtained.

Solution of various kinds of problems by the graphical method:

Case 1: Problems with a linear objective function and nonlinear constraints. Consider the problem (the data follow a standard example consistent with the optimum quoted below):

Maximize Z = 2x₁ + 3x₂
subject to x₁² + x₂² ≤ 20, (3.2.1)
x₁x₂ ≤ 8,
x₁, x₂ ≥ 0.

We solve this problem by the graphical method.

Figure 3.1: the convex feasible region OABCD in the first quadrant.
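The graphical optimum can be cross-checked by brute-force enumeration over a fine grid of the first quadrant. The sketch below uses the problem data assumed above (a reconstruction consistent with the optimum C(2, 4) quoted in the text):

```python
# Brute-force check of the graphical solution:
#   maximize Z = 2*x1 + 3*x2
#   subject to x1**2 + x2**2 <= 20, x1*x2 <= 8, x1, x2 >= 0.

best, best_pt = float("-inf"), None
n = 400                                    # grid resolution per axis
for i in range(n + 1):
    for j in range(n + 1):
        x1, x2 = 5.0 * i / n, 5.0 * j / n  # the feasible region fits in [0, 5]^2
        if x1 * x1 + x2 * x2 <= 20.0 and x1 * x2 <= 8.0:
            z = 2.0 * x1 + 3.0 * x2
            if z > best:
                best, best_pt = z, (x1, x2)

print(best_pt, best)    # (2.0, 4.0) 16.0
```

Both constraints are active at the optimum, which is why the maximizing point sits at a corner of the feasible region where the circle and the hyperbola meet.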
To do this, we trace the graphs of the constraints, treating the inequalities as equations, in the first quadrant (since x₁, x₂ ≥ 0). We obtain the convex region OABCD above. We must find a point that maximizes the value of Z and also lies in the convex region OABCD. The desired point is obtained by moving the line Z = k parallel to itself, for increasing k, so long as it still touches an extreme boundary point of the convex region OABCD. The last such point is C(2, 4), which gives the maximum value Z = 16 at x₁ = 2, x₂ = 4.

Case 2: Problems with a linear objective function and nonlinear as well as linear constraints. Consider a problem of the form

Maximize Z = x₁ + x₂
subject to x₁² + x₂² ≤ 1, (3.2.2)
a linear constraint, and x₁, x₂ ≥ 0.

We solve this problem by the graphical method.

Figure 3.2: the convex region OABC with extreme points O(0, 0), A(0, 1), B(1/√2, 1/√2), and C(1, 0).
Here the objective function is linear and the constraints are nonlinear: constraint one is a circle of radius 1 with center (0, 0), and constraint two is a straight line. OABC is the convex region with extreme points O(0, 0), A(0, 1), B(1/√2, 1/√2), and C(1, 0). The maximum, Z = √2, is attained at x₁ = 1/√2 and x₂ = 1/√2.

Nonlinear programming problems having more than two variables cannot be solved by the graphical method. In such cases we use the method of Lagrange multipliers.

3.3 Lagrange multiplier method:

The basic ideas behind Lagrange multipliers can be illustrated by considering the following special case of the general problem, with three variables and only two equality constraints:

Minimize f(x₁, x₂, x₃)
subject to h₁(x₁, x₂, x₃) = 0, (3.3.1)
h₂(x₁, x₂, x₃) = 0.

The feasible set Ω is a curve in ℝ³, determined by the intersection of the two surfaces h₁ = 0 and h₂ = 0. We assume that x* = (x₁*, x₂*, x₃*) is a point of minimum in Ω, that the functions f, h₁, h₂ have continuous first-order derivatives on some open set containing x*, and that ∇h₁(x*), ∇h₂(x*) are linearly independent. Since the gradients are linearly independent and continuous, the constraints can be solved for x₂ and x₃ in terms of x₁, as x₂ = u(x₁) and x₃ = v(x₁), in a neighborhood of x*. Further, u and v are differentiable near x₁*. Hence, Eq. (3.3.1) reduces to an unconstrained minimization problem of the form

minimize φ(x₁) = f(x₁, u(x₁), v(x₁)), (3.3.2)

involving the single variable x₁. Since x₁* is a point of minimum of Eq. (3.3.2),

dφ/dx₁ = ∂f/∂x₁ + (∂f/∂x₂)(du/dx₁) + (∂f/∂x₃)(dv/dx₁) = 0 (3.3.3)

holds at x*. Also, from the constraint equations, we have two additional relations
dh₁/dx₁ = ∂h₁/∂x₁ + (∂h₁/∂x₂)(du/dx₁) + (∂h₁/∂x₃)(dv/dx₁) = 0,
dh₂/dx₁ = ∂h₂/∂x₁ + (∂h₂/∂x₂)(du/dx₁) + (∂h₂/∂x₃)(dv/dx₁) = 0, (3.3.4)

at x*. Equations (3.3.3) and (3.3.4) say that the 3 × 3 matrix A whose rows are ∇f(x*), ∇h₁(x*), and ∇h₂(x*) satisfies A(1, du/dx₁, dv/dx₁)ᵀ = 0. Since (1, du/dx₁, dv/dx₁) is not a null vector, we have det A = 0, which implies

a∇f(x*) + b∇h₁(x*) + c∇h₂(x*) = 0, (3.3.5)

where a, b, c are not all zero. But ∇h₁(x*), ∇h₂(x*) are linearly independent, so a ≠ 0, and hence there exist λ₁, λ₂, not both simultaneously zero in general, such that

∇f(x*) + λ₁∇h₁(x*) + λ₂∇h₂(x*) = 0. (3.3.6)

Here λ₁, λ₂ are called Lagrange multipliers. So, in equality-constrained minimization problems, we find the critical points of the Lagrange function

L(x, λ) = f(x) + λ₁h₁(x) + λ₂h₂(x) (3.3.7)

instead of the critical points of f(x). A minimum of the Lagrangian function is also a minimum of f, because the equality constraints at the critical point imply h₁(x*) = h₂(x*) = 0, and hence

L(x*, λ) = f(x*) + λ₁h₁(x*) + λ₂h₂(x*) = f(x*).

The above analysis leads to the following generalization to the n-dimensional minimization problem with equality constraints:

minimize f(x)
subject to hⱼ(x) = 0, j = 1, …, l. (3.3.8)
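For a quadratic objective with linear equality constraints, the stationarity system ∇f + Σⱼ λⱼ∇hⱼ = 0, h(x) = 0 is linear in the unknowns (x, λ) and can be solved directly. A sketch for an illustrative problem (the data below are an assumption, chosen only to demonstrate the method; numpy is assumed to be available):

```python
# Solve: minimize x1^2 + x2^2 + x3^2
#        subject to x1 + x2 + 3*x3 = 2 and x1 - x2 = 4,
# by solving the linear Lagrange system in (x1, x2, x3, v1, v2).
import numpy as np

a1 = np.array([1.0, 1.0, 3.0])     # gradient of the first constraint
a2 = np.array([1.0, -1.0, 0.0])    # gradient of the second constraint

K = np.zeros((5, 5))
K[:3, :3] = 2.0 * np.eye(3)        # stationarity rows: 2*x + v1*a1 + v2*a2 = 0
K[:3, 3] = a1
K[:3, 4] = a2
K[3, :3] = a1                      # constraint rows
K[4, :3] = a2
rhs = np.array([0.0, 0.0, 0.0, 2.0, 4.0])

sol = np.linalg.solve(K, rhs)
x, v = sol[:3], sol[3:]
print(x)                           # the constrained minimizer
print(a1 @ x, a2 @ x)              # 2.0 4.0: both constraints satisfied
```

The 5 × 5 system is nonsingular because the constraint gradients a1, a2 are linearly independent, exactly the regularity hypothesis used in the derivation above.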
Example: Solve the problem (an illustrative choice of data)

Minimize x₁ + x₂ subject to x₁² + x₂² = 1.

Solution: The Lagrangian L(x, λ) is given by

L(x, λ) = x₁ + x₂ + λ(x₁² + x₂² − 1).

The critical points of the Lagrangian L(x, λ) are given by

∂L/∂x₁ = 1 + 2λx₁ = 0, (1)
∂L/∂x₂ = 1 + 2λx₂ = 0, (2)
∂L/∂λ = x₁² + x₂² − 1 = 0. (3)

Solving (1) and (2) and substituting in (3), we get λ = ±1/√2, with the corresponding points (−1/√2, −1/√2) for λ = 1/√2 and (1/√2, 1/√2) for λ = −1/√2. The Hessian matrix of L(x, λ) with respect to x is

∇²ₓL = [2λ 0; 0 2λ].

The Hessian matrix is positive definite at λ = 1/√2 and negative definite at λ = −1/√2. Hence (−1/√2, −1/√2) is a minimizer and (1/√2, 1/√2) is a maximizer for the above problem.

Example: Minimize f(x) = x₁² + x₂² + x₃² subject to x₁ + x₂ + 3x₃ = 2 and x₁ − x₂ = 4 (an illustrative choice of data).

Solution: The Lagrangian L(x, λ) is given by
L(x, λ) = x₁² + x₂² + x₃² + λ₁(x₁ + x₂ + 3x₃ − 2) + λ₂(x₁ − x₂ − 4).

The critical points of the Lagrangian L(x, λ) are given by ∇L(x, λ) = 0; that is,

2x₁ + λ₁ + λ₂ = 0, 2x₂ + λ₁ − λ₂ = 0, 2x₃ + 3λ₁ = 0,
x₁ + x₂ + 3x₃ = 2, x₁ − x₂ = 4.

Solving these five equations, we get x = (24/11, −20/11, 6/11), λ₁ = −4/11, and λ₂ = −4. The Hessian of L(x, λ) with respect to x is diag(2, 2, 2), which is positive definite, and hence x = (24/11, −20/11, 6/11) is a point of minimum, with

min f = (24/11)² + (−20/11)² + (6/11)² = 92/11.

Example: Minimize f(x) subject to a single equality constraint h(x) = 0. We have ∇ₓL = ∇f + λ∇h. The optimality conditions are ∇f(x) + λ∇h(x) = 0 and h(x) = 0. Solving the first condition for x in terms of λ and substituting x(λ) into the constraint equation, we obtain an equation in λ; for each root we obtain a candidate point: point A and point B in Fig. 3.3.
The minimum point is A.

Figure 3.3: the constraint curve with the two candidate points A and B; the minimum is at A.
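A Lagrange critical point can be verified by checking that the gradient of the Lagrangian vanishes there. A sketch for a standard problem of this type, minimize x₁ + x₂ on the unit circle x₁² + x₂² = 1 (problem data assumed here for illustration):

```python
# Stationarity check: grad L = (1 + 2*lam*x1, 1 + 2*lam*x2) must vanish
# at each critical point of L = x1 + x2 + lam*(x1**2 + x2**2 - 1).
import math

def grad_L(x1, x2, lam):
    return (1 + 2 * lam * x1, 1 + 2 * lam * x2)

s = 1 / math.sqrt(2)
candidates = [(-s, -s, 1 / (2 * s)),   # the minimizer, with lam = 1/sqrt(2)
              (s, s, -1 / (2 * s))]    # the maximizer, with lam = -1/sqrt(2)

residuals = []
for x1, x2, lam in candidates:
    g = grad_L(x1, x2, lam)
    residuals.append(max(abs(g[0]), abs(g[1])))
print(residuals)    # both residuals are numerically zero
```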
Chapter 4

Optimality Conditions:

4.1 The Fritz John (FJ) optimality conditions:

We now reduce the necessary optimality conditions to a statement in terms of gradients of the objective function and of the constraints. The resulting optimality conditions, credited to Fritz John, are given below. The Fritz John conditions are necessary conditions for a solution of a nonlinear program to be optimal. The concepts of Karush-Kuhn-Tucker conditions and duality are central to the discussion of constrained optimization.

Theorem (The Fritz John necessary conditions): Let X be a nonempty open set in ℝⁿ, and let f : ℝⁿ → ℝ and gᵢ : ℝⁿ → ℝ for i = 1, 2, …, m. Consider the problem

Minimize f(x)
subject to gᵢ(x) ≤ 0, i = 1, …, m, (4.1.1)
and x ∈ X.

Let x̄ be a feasible solution, and denote I = {i : gᵢ(x̄) = 0}. Suppose that gᵢ for i ∉ I is continuous at x̄ and that f and gᵢ for i ∈ I are differentiable at x̄. If x̄ locally solves the above problem, then there exist scalars u₀ and uᵢ for i ∈ I such that

u₀∇f(x̄) + Σ_{i∈I} uᵢ∇gᵢ(x̄) = 0,
u₀ ≥ 0, uᵢ ≥ 0 for i ∈ I, (4.1.2)
(u₀, u_I) ≠ (0, 0),

where u_I is the vector whose components are uᵢ for i ∈ I. Furthermore, if each gᵢ for i ∉ I is also differentiable at x̄, then the Fritz John conditions can be written in the following equivalent form:
u₀∇f(x̄) + Σᵢ₌₁ᵐ uᵢ∇gᵢ(x̄) = 0,
uᵢgᵢ(x̄) = 0, i = 1, …, m,
u₀ ≥ 0, uᵢ ≥ 0, i = 1, …, m, (4.1.3)
(u₀, u) ≠ (0, 0).

In the above theorem, the scalars uᵢ for i = 0, 1, …, m are usually called Lagrange multipliers. The condition that x̄ be feasible for problem (4.1.1) is called the primal feasibility (PF) condition, whereas the requirements u₀∇f(x̄) + Σᵢ uᵢ∇gᵢ(x̄) = 0, (u₀, u) ≠ (0, 0), and (u₀, u) ≥ (0, 0) are referred to as the dual feasibility (DF) conditions. The condition uᵢgᵢ(x̄) = 0 for i = 1, …, m is called the complementary slackness (CS) condition. Together, the PF, DF, and CS conditions are called the Fritz John (FJ) optimality conditions. Any point x̄ for which there exist Lagrange multipliers (u₀, u) such that (x̄, u₀, u) satisfies the FJ conditions is called a Fritz John (FJ) point.

Example (the data follow a standard illustration consistent with the points quoted below):

Minimize (x₁ − 3)² + (x₂ − 2)²
subject to x₁² + x₂² ≤ 5,
x₁ + 2x₂ ≤ 4,
x₁, x₂ ≥ 0.

The feasible region for the above problem is illustrated in Figure 4.1. The Fritz John conditions hold at the optimal point (2, 1). The set of binding constraints I at (2, 1) is I = {1, 2}. Thus, the Lagrange multipliers u₃ and u₄, associated with −x₁ ≤ 0 and −x₂ ≤ 0 respectively, are equal to zero. We compute

∇f(2, 1) = (−2, −2), ∇g₁(2, 1) = (4, 2), ∇g₂(2, 1) = (1, 2).
Figure 4.1: the feasible region, with the optimum (2, 1), the origin, and the point (0, 2) marked.

Hence, to satisfy the Fritz John conditions, we need a nonzero vector (u₀, u₁, u₂) ≥ 0 satisfying

u₀(−2, −2) + u₁(4, 2) + u₂(1, 2) = (0, 0).

This implies 4u₁ + u₂ = 2u₀ and 2u₁ + 2u₂ = 2u₀. Taking u₀ = 3λ, u₁ = λ, and u₂ = 2λ for any λ > 0, we satisfy the Fritz John conditions.

Let us check whether the point (0, 0) is an FJ point. Here the set of binding constraints is I = {3, 4}, and thus u₁ = u₂ = 0. Note that

∇f(0, 0) = (−6, −4), ∇g₃(0, 0) = (−1, 0), ∇g₄(0, 0) = (0, −1).

Also, the DF condition u₀∇f(0, 0) + u₃∇g₃(0, 0) + u₄∇g₄(0, 0) = 0 holds true if and only if u₃ = −6u₀ and u₄ = −4u₀. If u₀ > 0, then u₃ < 0, contradicting the nonnegativity restrictions. If, on the other hand, u₀ = 0, then u₃ = u₄ = 0, which contradicts the stipulation that the vector (u₀, u₃, u₄) is nonzero. Thus, the Fritz John conditions do not hold at (0, 0), which also shows that the origin is not a local optimal point.
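With the binding-constraint gradients in hand, the FJ multipliers reduce to a small linear solve. A sketch using assumed problem data (minimize (x₁−3)² + (x₂−2)² subject to x₁² + x₂² ≤ 5, x₁ + 2x₂ ≤ 4, x₁, x₂ ≥ 0; numpy is assumed to be available):

```python
# Fritz John check at (2, 1): fix u0 and solve
#   u0*grad_f + u1*grad_g1 + u2*grad_g2 = 0
# for the multipliers (u1, u2) of the two binding constraints.
import numpy as np

x = np.array([2.0, 1.0])
grad_f = np.array([2 * (x[0] - 3), 2 * (x[1] - 2)])   # (-2, -2)
grad_g1 = np.array([2 * x[0], 2 * x[1]])              # (4, 2)
grad_g2 = np.array([1.0, 2.0])

u0 = 3.0
A = np.column_stack([grad_g1, grad_g2])
u12 = np.linalg.solve(A, -u0 * grad_f)
print(u12)    # [1. 2.]: nonnegative, so (2, 1) is an FJ point
```

Scaling (u₀, u₁, u₂) = (3, 1, 2) by any positive factor gives the whole family of FJ multiplier vectors at this point.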
4.2 KKT optimality conditions with both equality and inequality constraints:

In the Fritz John conditions, the Lagrange multiplier u₀ associated with the objective function is not necessarily positive. Under further assumptions on the constraint set, we can show that at any local minimum there exists a set of Lagrange multipliers for which u₀ is positive. We thus obtain a generalization, the KKT necessary optimality conditions.

4.2.1 KKT necessary optimality conditions: Let X be a nonempty open set in ℝⁿ, and let f : ℝⁿ → ℝ, gᵢ : ℝⁿ → ℝ for i = 1, …, m, and hⱼ : ℝⁿ → ℝ for j = 1, …, l. Consider the problem

Minimize f(x)
subject to gᵢ(x) ≤ 0, i = 1, …, m, (4.2.1)
and hⱼ(x) = 0, j = 1, …, l.

Let x̄ be a feasible solution, and denote I = {i : gᵢ(x̄) = 0}. Suppose that gᵢ for i ∉ I is continuous at x̄, that f and gᵢ for i ∈ I are differentiable at x̄, and that each hⱼ is continuously differentiable at x̄. We assume that x̄ is a regular point (the gradients of the active inequalities and of all the equality constraints are linearly independent). If x̄ locally solves the above problem, and each gᵢ for i ∉ I is also differentiable at x̄, then there exist scalars uᵢ for i = 1, …, m and λⱼ for j = 1, …, l such that

∇f(x̄) + Σᵢ₌₁ᵐ uᵢ∇gᵢ(x̄) + Σⱼ₌₁ˡ λⱼ∇hⱼ(x̄) = 0,
uᵢgᵢ(x̄) = 0, i = 1, …, m, (4.2.2)
uᵢ ≥ 0, i = 1, …, m.

The Lagrange multipliers λⱼ associated with the equality constraints are unrestricted in sign (they can be positive or negative); only the multipliers associated with the inequalities have to be nonnegative.

4.2.2 KKT sufficient conditions for optimality: The KKT conditions are necessary conditions: if a point x̄ is a local minimum of the problem in Eq. (4.2.1), then it should satisfy Eq. (4.2.2). Conversely, if a point satisfies Eq. (4.2.2), it can be a local minimum or a saddle point (neither a minimum nor a maximum). A set of sufficient
conditions will now be stated which, if satisfied, ensure that the KKT point is a strict local minimum. Let f, gᵢ, hⱼ be twice-continuously differentiable functions. Then the point x̄ is a strict local minimum of Eq. (4.2.1) if there exist Lagrange multipliers μ and λ such that the KKT necessary conditions in Eq. (4.2.2) are satisfied at (x̄, μ, λ) and the Hessian matrix of the Lagrangian

∇²L(x̄) = ∇²f(x̄) + Σᵢ μᵢ∇²gᵢ(x̄) + Σⱼ λⱼ∇²hⱼ(x̄) (4.2.3)

is positive definite on a subspace of ℝⁿ, as defined by the condition:

yᵀ∇²L(x̄)y > 0 for every y ≠ 0 which satisfies ∇hⱼ(x̄)ᵀy = 0 for all j and ∇gᵢ(x̄)ᵀy = 0 for all i for which μᵢ > 0. (4.2.4)

Consider the following two remarks to understand Eq. (4.2.4):
(1) To understand what y is, consider a constraint h(x) = 0. This can be graphed as a surface in x-space. Then ∇h(x̄) is normal to the tangent plane of the surface at x̄. Thus, if ∇h(x̄)ᵀy = 0, then y is perpendicular to ∇h(x̄) and hence lies in the tangent plane. The collection of all such y defines the entire tangent plane, which is a subspace of the whole space. Thus, the KKT sufficient conditions only require ∇²L(x̄) to be positive definite on the tangent subspace M defined by

M = {y : ∇hⱼ(x̄)ᵀy = 0 for all j, ∇gᵢ(x̄)ᵀy = 0 for all i with μᵢ > 0}.

(2) If yᵀ∇²L(x̄)y > 0 for every vector y ≠ 0, then the Hessian ∇²L(x̄) is positive definite on the whole space ℝⁿ. It is then also positive definite on the (tangent) subspace M of ℝⁿ.

Example: For the problem (data assumed consistent with the points discussed below)

Minimize f(x) = (x₁ − 3)² + (x₂ − 3)²
subject to g₁(x) = 2x₁ + x₂ − 2 ≤ 0,
g₂(x) = −x₂ ≤ 0:

(i) Write down the KKT conditions and find the KKT points.
(ii) Graph the problem and check whether the KKT point is a solution to the problem.
(iii) Sketch the feasible cone at the point (1, 0); that is, sketch the intersection of the descent and feasible cones.
(iv) What is the feasible cone at the KKT point obtained in (i)?

We have ∇f(x) = (2(x₁ − 3), 2(x₂ − 3)). The KKT conditions are

∇f(x) + μ₁∇g₁(x) + μ₂∇g₂(x) = 0, μ₁g₁(x) = 0, μ₂g₂(x) = 0, μ₁, μ₂ ≥ 0.

Owing to the switching nature of the complementarity conditions, we need to guess which constraints are active and then proceed with solving the KKT conditions. We have the following cases:

Case 1: only the first constraint active.
Case 2: the first and second constraints both active, which gives the point (1, 0).

In Case 2, using the KKT conditions at (1, 0), we get μ₁ = 2 and μ₂ = −4. Since μ₂ is negative, we have violated the KKT conditions, and our guess is incorrect. The correct guess is Case 1: μ₂ = 0, g₁(x) = 0. This leads to x₁ = 3 − μ₁ and x₂ = 3 − μ₁/2. Substituting this into 2x₁ + x₂ = 2 gives μ₁ = 14/5. We can then recover x₁ = 1/5, x₂ = 8/5. Thus, we have the KKT point

x* = (1/5, 8/5), μ₁ = 14/5, μ₂ = 0.
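The case analysis above can be reproduced directly (problem data assumed: minimize (x₁−3)² + (x₂−3)² subject to 2x₁ + x₂ ≤ 2 and x₂ ≥ 0):

```python
# Case 2: both constraints active at (1, 0).
# Stationarity: grad_f + m1*(2, 1) + m2*(0, -1) = 0, with grad_f(1,0) = (-4, -6).
m1 = 2.0                 # from the first component: -4 + 2*m1 = 0
m2 = -6.0 + m1           # from the second: -6 + m1 - m2 = 0  =>  m2 = -4
print(m2)                # -4.0 < 0, so Case 2 is rejected

# Case 1: only g1 active.  Stationarity gives x1 = 3 - m1, x2 = 3 - m1/2;
# substituting into 2*x1 + x2 = 2 gives 9 - (5/2)*m1 = 2, i.e. m1 = 14/5.
m1 = 14.0 / 5.0
x1, x2 = 3.0 - m1, 3.0 - m1 / 2.0
print((x1, x2), m1)      # the KKT point (1/5, 8/5) with m1 = 2.8 > 0
```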
The feasible cone at (1, 0) is shown in Figure 4.2. At the KKT point x* = (1/5, 8/5), the negative cost gradient is −∇f(x*) = (28/5, 14/5) and the active constraint gradient is ∇g₁(x*) = (2, 1). Since these two vectors are aligned with one another, there is no descent/feasible cone (the cone is empty).

Figure 4.2: the feasible region Ω below the line through (0, 2) and (1, 0), with the feasible cone at (1, 0).

4.3 Duality:

Consider the general nonlinear programming problem

Minimize f(x)
subject to gᵢ(x) ≤ 0, i = 1, …, m, (4.3.1)
and hⱼ(x) = 0, j = 1, …, l,

where x = (x₁, …, xₙ)ᵀ is a column vector of n real-valued design variables. In equation (4.3.1), f is the objective or cost function, the gᵢ are inequality constraints, and the hⱼ are equality constraints. First, we assume that all functions are twice-continuously differentiable and that a solution x* of the primal problem in Eq. (4.3.1) is a regular point satisfying the KKT necessary conditions given in the previous section. In the sufficient conditions for a minimum of Eq. (4.3.1), we only required that the Hessian of the Lagrangian ∇²ₓₓL(x*) be positive definite on the tangent
subspace. Now, to introduce duality, we make the stronger assumption that the Hessian of the Lagrangian ∇²ₓₓL(x*) is positive definite. This is a stronger assumption because we require positive definiteness of the Hessian at x* on the whole space and not just on a subspace. It is equivalent to requiring L to be locally convex at x*. Under these assumptions, we can state the following equivalent assertions:

(1) x*, together with μ* and λ*, is a solution to the primal problem of Eq. (4.3.1).
(2) (x*, μ*, λ*) is a saddle point of the Lagrangian function L(x, μ, λ); that is,

L(x*, μ, λ) ≤ L(x*, μ*, λ*) ≤ L(x, μ*, λ*)

in a neighborhood of (x*, μ*, λ*).
(3) (x*, μ*, λ*) solves the dual problem

Maximize L(x, μ, λ) = f(x) + Σᵢ μᵢgᵢ(x) + Σⱼ λⱼhⱼ(x) (4.3.2)
subject to ∇ₓL(x, μ, λ) = 0 (4.3.3)
and μ ≥ 0, (4.3.4)

where both x and the Lagrange multipliers (μ, λ) are variables, and the two extrema are equal: min f = max L.

From the above discussion, the following two corollaries follow:
(a) Any x that satisfies the constraints in Eq. (4.3.1) is called primal-feasible, and any (x, μ, λ) that satisfies Eqs. (4.3.3) and (4.3.4) is called dual-feasible. Any primal-feasible point provides an upper bound on the optimum cost: f(x) ≥ f(x*). Any dual-feasible point provides a lower bound on the optimum cost: L(x, μ, λ) ≤ f(x*). It is important to realize that the lower bound applies only to the local minimum of f in the neighborhood in question.
(b) If the dual objective is unbounded, then the primal has no feasible solution.

4.4 The dual function φ(μ, λ):

The conditions in Eq. (4.3.3), i.e. ∇ₓL(x, μ, λ) = 0, can be viewed as equations involving two sets of variables: x and [μ, λ]. The Jacobian of this system with respect to the x variables is ∇²ₓₓL. Since this matrix is nonsingular at (x*, μ*, λ*) (by the hypothesis that it is
positive definite), we can invoke the implicit function theorem, which states that in a small neighborhood of [μ*, λ*] the stationary point x(μ, λ) is a differentiable function of (μ, λ) with ∇ₓL(x(μ, λ), μ, λ) = 0. This allows us to treat L as an implicit function of (μ, λ), namely L(x(μ, λ), μ, λ), which is called the dual function φ(μ, λ). Since ∇²ₓₓL remains positive definite near x*, we can define φ(μ, λ) from the minimization problem

φ(μ, λ) = min over x of L(x, μ, λ) = f(x) + Σᵢ μᵢgᵢ(x) + Σⱼ λⱼhⱼ(x), (4.4.1)

where (μ, λ) are near (μ*, λ*) and the minimization is carried out with respect to x near x*.

4.4.1 Gradient and Hessian of φ(μ, λ) in dual space: The gradient of the dual function is

∇_μ φ(μ, λ) = g(x(μ, λ)), (4.4.2)
∇_λ φ(μ, λ) = h(x(μ, λ)). (4.4.3)

The proof follows from the chain rule applied to φ(μ, λ) = L(x(μ, λ), μ, λ), in view of the fact that ∇ₓL = 0 by the very construction of x(μ, λ). The Hessian of φ in (μ, λ)-space is given by

∇²φ = −A(∇²ₓₓL)⁻¹Aᵀ,

where the rows of A are the gradient vectors of the constraints. Since we assume that x* is a regular point, A has full row rank, and hence the dual Hessian is negative definite.

Example (data assumed): Minimize f(x) = x₁² + x₂² subject to 2 − x₁ − x₂ ≤ 0.

We have L(x, μ) = x₁² + x₂² + μ(2 − x₁ − x₂) and ∇²ₓₓL = diag(2, 2). It is clear that ∇²ₓₓL is positive definite. Setting ∇ₓL = (2x₁ − μ, 2x₂ − μ) = 0, we obtain x₁ = x₂ = μ/2.
Substituting into L, we have

φ(μ) = μ²/4 + μ²/4 + μ(2 − μ) = 2μ − μ²/2.

From Eq. (4.4.2) we have dφ/dμ = 2 − μ = g(x(μ)). The dual problem is to maximize φ(μ) with the restriction that μ ≥ 0. The solution is μ* = 2. Correspondingly, we obtain x₁ = x₂ = 1, and φ(μ*) = f(x*) = 2, so there is no duality gap.
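The dual construction can be sketched end to end for the same assumed data (minimize x₁² + x₂² subject to 2 − x₁ − x₂ ≤ 0):

```python
# Dual function: minimizing L = x1**2 + x2**2 + m*(2 - x1 - x2) over x
# gives x1 = x2 = m/2, and substituting back yields phi(m) = 2*m - m**2/2.

def dual(m):
    x = m / 2.0                              # the inner minimizer x(m)
    return x * x + x * x + m * (2.0 - 2.0 * x)

# Maximize phi over m >= 0: phi'(m) = 2 - m = 0 gives m* = 2.
m_star = 2.0
x_star = m_star / 2.0
primal = x_star ** 2 + x_star ** 2           # f at the recovered primal point

print(dual(m_star), primal)                  # equal: no duality gap
```

Note how the dual is concave in μ (its second derivative is −1/2 here, matching the general result that the dual Hessian is negative definite), so maximizing it is itself a convex problem.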
Chapter 5

References:

1) Mokhtar S. Bazaraa, Hanif D. Sherali, C. M. Shetty, Nonlinear Programming: Theory and Algorithms, second edition, John Wiley and Sons.
2) Ashok D. Belegundu, Tirupathi R. Chandrupatla, Optimization Concepts and Applications in Engineering, Pearson Education (Singapore) Pte. Ltd.
3) Mohan C. Joshi, Kannan M. Moudgalya, Optimization: Theory and Practice, Narosa Publishing House.
4) Bimal Chandra Das, Effect of graphical method for solving mathematical programming problem, Department of Textile Engineering, Daffodil International University, Journal of Science and Technology, Vol. 4, January 2009, Dhaka, Bangladesh.
5) Stephen Boyd and Lieven Vandenberghe, Convex Optimization, Cambridge University Press (2009).
6) C. R. Bector, S. Chandra, J. Dutta, Principles of Optimization Theory, Narosa Publishing House.
7) Josef Stoer, Christoph Witzgall, Convexity and Optimization in Finite Dimensions I, Springer-Verlag, Berlin, Heidelberg, New York.
More informationOptimality Conditions for Constrained Optimization
72 CHAPTER 7 Optimality Conditions for Constrained Optimization 1. First Order Conditions In this section we consider first order optimality conditions for the constrained problem P : minimize f 0 (x)
More informationTMA 4180 Optimeringsteori KARUSH-KUHN-TUCKER THEOREM
TMA 4180 Optimeringsteori KARUSH-KUHN-TUCKER THEOREM H. E. Krogstad, IMF, Spring 2012 Karush-Kuhn-Tucker (KKT) Theorem is the most central theorem in constrained optimization, and since the proof is scattered
More informationGENERALIZED CONVEXITY AND OPTIMALITY CONDITIONS IN SCALAR AND VECTOR OPTIMIZATION
Chapter 4 GENERALIZED CONVEXITY AND OPTIMALITY CONDITIONS IN SCALAR AND VECTOR OPTIMIZATION Alberto Cambini Department of Statistics and Applied Mathematics University of Pisa, Via Cosmo Ridolfi 10 56124
More informationCHAPTER 2: CONVEX SETS AND CONCAVE FUNCTIONS. W. Erwin Diewert January 31, 2008.
1 ECONOMICS 594: LECTURE NOTES CHAPTER 2: CONVEX SETS AND CONCAVE FUNCTIONS W. Erwin Diewert January 31, 2008. 1. Introduction Many economic problems have the following structure: (i) a linear function
More informationA Brief Review on Convex Optimization
A Brief Review on Convex Optimization 1 Convex set S R n is convex if x,y S, λ,µ 0, λ+µ = 1 λx+µy S geometrically: x,y S line segment through x,y S examples (one convex, two nonconvex sets): A Brief Review
More informationNonlinear Programming and the Kuhn-Tucker Conditions
Nonlinear Programming and the Kuhn-Tucker Conditions The Kuhn-Tucker (KT) conditions are first-order conditions for constrained optimization problems, a generalization of the first-order conditions we
More informationI.3. LMI DUALITY. Didier HENRION EECI Graduate School on Control Supélec - Spring 2010
I.3. LMI DUALITY Didier HENRION henrion@laas.fr EECI Graduate School on Control Supélec - Spring 2010 Primal and dual For primal problem p = inf x g 0 (x) s.t. g i (x) 0 define Lagrangian L(x, z) = g 0
More informationNonlinear Optimization: What s important?
Nonlinear Optimization: What s important? Julian Hall 10th May 2012 Convexity: convex problems A local minimizer is a global minimizer A solution of f (x) = 0 (stationary point) is a minimizer A global
More informationCS-E4830 Kernel Methods in Machine Learning
CS-E4830 Kernel Methods in Machine Learning Lecture 3: Convex optimization and duality Juho Rousu 27. September, 2017 Juho Rousu 27. September, 2017 1 / 45 Convex optimization Convex optimisation This
More informationCSCI : Optimization and Control of Networks. Review on Convex Optimization
CSCI7000-016: Optimization and Control of Networks Review on Convex Optimization 1 Convex set S R n is convex if x,y S, λ,µ 0, λ+µ = 1 λx+µy S geometrically: x,y S line segment through x,y S examples (one
More informationConvex Optimization M2
Convex Optimization M2 Lecture 3 A. d Aspremont. Convex Optimization M2. 1/49 Duality A. d Aspremont. Convex Optimization M2. 2/49 DMs DM par email: dm.daspremont@gmail.com A. d Aspremont. Convex Optimization
More informationUniversity of California, Davis Department of Agricultural and Resource Economics ARE 252 Lecture Notes 2 Quirino Paris
University of California, Davis Department of Agricultural and Resource Economics ARE 5 Lecture Notes Quirino Paris Karush-Kuhn-Tucker conditions................................................. page Specification
More informationMicroeconomics I. September, c Leopold Sögner
Microeconomics I c Leopold Sögner Department of Economics and Finance Institute for Advanced Studies Stumpergasse 56 1060 Wien Tel: +43-1-59991 182 soegner@ihs.ac.at http://www.ihs.ac.at/ soegner September,
More informationConvex Programs. Carlo Tomasi. December 4, 2018
Convex Programs Carlo Tomasi December 4, 2018 1 Introduction In an earlier note, we found methods for finding a local minimum of some differentiable function f(u) : R m R. If f(u) is at least weakly convex,
More informationIntroduction to Machine Learning Lecture 7. Mehryar Mohri Courant Institute and Google Research
Introduction to Machine Learning Lecture 7 Mehryar Mohri Courant Institute and Google Research mohri@cims.nyu.edu Convex Optimization Differentiation Definition: let f : X R N R be a differentiable function,
More informationReview of Optimization Methods
Review of Optimization Methods Prof. Manuela Pedio 20550 Quantitative Methods for Finance August 2018 Outline of the Course Lectures 1 and 2 (3 hours, in class): Linear and non-linear functions on Limits,
More information5. Duality. Lagrangian
5. Duality Convex Optimization Boyd & Vandenberghe Lagrange dual problem weak and strong duality geometric interpretation optimality conditions perturbation and sensitivity analysis examples generalized
More informationJanuary 29, Introduction to optimization and complexity. Outline. Introduction. Problem formulation. Convexity reminder. Optimality Conditions
Olga Galinina olga.galinina@tut.fi ELT-53656 Network Analysis Dimensioning II Department of Electronics Communications Engineering Tampere University of Technology, Tampere, Finl January 29, 2014 1 2 3
More information2.3 Linear Programming
2.3 Linear Programming Linear Programming (LP) is the term used to define a wide range of optimization problems in which the objective function is linear in the unknown variables and the constraints are
More informationNonlinear Programming (NLP)
Natalia Lazzati Mathematics for Economics (Part I) Note 6: Nonlinear Programming - Unconstrained Optimization Note 6 is based on de la Fuente (2000, Ch. 7), Madden (1986, Ch. 3 and 5) and Simon and Blume
More informationIntroduction to Mathematical Programming IE406. Lecture 10. Dr. Ted Ralphs
Introduction to Mathematical Programming IE406 Lecture 10 Dr. Ted Ralphs IE406 Lecture 10 1 Reading for This Lecture Bertsimas 4.1-4.3 IE406 Lecture 10 2 Duality Theory: Motivation Consider the following
More informationRoles of Convexity in Optimization Theory. Efor, T. E and Nshi C. E
IDOSR PUBLICATIONS International Digital Organization for Scientific Research ISSN: 2550-7931 Roles of Convexity in Optimization Theory Efor T E and Nshi C E Department of Mathematics and Computer Science
More informationCourse Notes for EE227C (Spring 2018): Convex Optimization and Approximation
Course Notes for EE227C (Spring 2018): Convex Optimization and Approximation Instructor: Moritz Hardt Email: hardt+ee227c@berkeley.edu Graduate Instructor: Max Simchowitz Email: msimchow+ee227c@berkeley.edu
More informationAppendix A Taylor Approximations and Definite Matrices
Appendix A Taylor Approximations and Definite Matrices Taylor approximations provide an easy way to approximate a function as a polynomial, using the derivatives of the function. We know, from elementary
More information. This matrix is not symmetric. Example. Suppose A =
Notes for Econ. 7001 by Gabriel A. ozada The equation numbers and page numbers refer to Knut Sydsæter and Peter J. Hammond s textbook Mathematics for Economic Analysis (ISBN 0-13- 583600-X, 1995). 1. Convexity,
More information5 Handling Constraints
5 Handling Constraints Engineering design optimization problems are very rarely unconstrained. Moreover, the constraints that appear in these problems are typically nonlinear. This motivates our interest
More informationConvex Optimization and Modeling
Convex Optimization and Modeling Duality Theory and Optimality Conditions 5th lecture, 12.05.2010 Jun.-Prof. Matthias Hein Program of today/next lecture Lagrangian and duality: the Lagrangian the dual
More informationNumerical Optimization
Constrained Optimization Computer Science and Automation Indian Institute of Science Bangalore 560 012, India. NPTEL Course on Constrained Optimization Constrained Optimization Problem: min h j (x) 0,
More informationSeminars on Mathematics for Economics and Finance Topic 5: Optimization Kuhn-Tucker conditions for problems with inequality constraints 1
Seminars on Mathematics for Economics and Finance Topic 5: Optimization Kuhn-Tucker conditions for problems with inequality constraints 1 Session: 15 Aug 2015 (Mon), 10:00am 1:00pm I. Optimization with
More informationLecture 18: Optimization Programming
Fall, 2016 Outline Unconstrained Optimization 1 Unconstrained Optimization 2 Equality-constrained Optimization Inequality-constrained Optimization Mixture-constrained Optimization 3 Quadratic Programming
More informationConvex Functions. Daniel P. Palomar. Hong Kong University of Science and Technology (HKUST)
Convex Functions Daniel P. Palomar Hong Kong University of Science and Technology (HKUST) ELEC5470 - Convex Optimization Fall 2017-18, HKUST, Hong Kong Outline of Lecture Definition convex function Examples
More informationLinear Programming. Larry Blume Cornell University, IHS Vienna and SFI. Summer 2016
Linear Programming Larry Blume Cornell University, IHS Vienna and SFI Summer 2016 These notes derive basic results in finite-dimensional linear programming using tools of convex analysis. Most sources
More informationSupport Vector Machines: Maximum Margin Classifiers
Support Vector Machines: Maximum Margin Classifiers Machine Learning and Pattern Recognition: September 16, 2008 Piotr Mirowski Based on slides by Sumit Chopra and Fu-Jie Huang 1 Outline What is behind
More informationMATH2070 Optimisation
MATH2070 Optimisation Nonlinear optimisation with constraints Semester 2, 2012 Lecturer: I.W. Guo Lecture slides courtesy of J.R. Wishart Review The full nonlinear optimisation problem with equality constraints
More informationConstrained Optimization
1 / 22 Constrained Optimization ME598/494 Lecture Max Yi Ren Department of Mechanical Engineering, Arizona State University March 30, 2015 2 / 22 1. Equality constraints only 1.1 Reduced gradient 1.2 Lagrange
More informationLagrange Duality. Daniel P. Palomar. Hong Kong University of Science and Technology (HKUST)
Lagrange Duality Daniel P. Palomar Hong Kong University of Science and Technology (HKUST) ELEC5470 - Convex Optimization Fall 2017-18, HKUST, Hong Kong Outline of Lecture Lagrangian Dual function Dual
More informationLecture: Duality.
Lecture: Duality http://bicmr.pku.edu.cn/~wenzw/opt-2016-fall.html Acknowledgement: this slides is based on Prof. Lieven Vandenberghe s lecture notes Introduction 2/35 Lagrange dual problem weak and strong
More informationMATHEMATICAL ECONOMICS: OPTIMIZATION. Contents
MATHEMATICAL ECONOMICS: OPTIMIZATION JOÃO LOPES DIAS Contents 1. Introduction 2 1.1. Preliminaries 2 1.2. Optimal points and values 2 1.3. The optimization problems 3 1.4. Existence of optimal points 4
More informationLagrange duality. The Lagrangian. We consider an optimization program of the form
Lagrange duality Another way to arrive at the KKT conditions, and one which gives us some insight on solving constrained optimization problems, is through the Lagrange dual. The dual is a maximization
More informationminimize x subject to (x 2)(x 4) u,
Math 6366/6367: Optimization and Variational Methods Sample Preliminary Exam Questions 1. Suppose that f : [, L] R is a C 2 -function with f () on (, L) and that you have explicit formulae for
More informationCONSTRAINED OPTIMALITY CRITERIA
5 CONSTRAINED OPTIMALITY CRITERIA In Chapters 2 and 3, we discussed the necessary and sufficient optimality criteria for unconstrained optimization problems. But most engineering problems involve optimization
More informationReview of Optimization Basics
Review of Optimization Basics. Introduction Electricity markets throughout the US are said to have a two-settlement structure. The reason for this is that the structure includes two different markets:
More informationOptimization Tutorial 1. Basic Gradient Descent
E0 270 Machine Learning Jan 16, 2015 Optimization Tutorial 1 Basic Gradient Descent Lecture by Harikrishna Narasimhan Note: This tutorial shall assume background in elementary calculus and linear algebra.
More informationMathematical Economics: Lecture 16
Mathematical Economics: Lecture 16 Yu Ren WISE, Xiamen University November 26, 2012 Outline 1 Chapter 21: Concave and Quasiconcave Functions New Section Chapter 21: Concave and Quasiconcave Functions Concave
More informationSummary Notes on Maximization
Division of the Humanities and Social Sciences Summary Notes on Maximization KC Border Fall 2005 1 Classical Lagrange Multiplier Theorem 1 Definition A point x is a constrained local maximizer of f subject
More informationIE 5531: Engineering Optimization I
IE 5531: Engineering Optimization I Lecture 12: Nonlinear optimization, continued Prof. John Gunnar Carlsson October 20, 2010 Prof. John Gunnar Carlsson IE 5531: Engineering Optimization I October 20,
More informationEconomics 101A (Lecture 3) Stefano DellaVigna
Economics 101A (Lecture 3) Stefano DellaVigna January 24, 2017 Outline 1. Implicit Function Theorem 2. Envelope Theorem 3. Convexity and concavity 4. Constrained Maximization 1 Implicit function theorem
More informationThe Kuhn-Tucker Problem
Natalia Lazzati Mathematics for Economics (Part I) Note 8: Nonlinear Programming - The Kuhn-Tucker Problem Note 8 is based on de la Fuente (2000, Ch. 7) and Simon and Blume (1994, Ch. 18 and 19). The Kuhn-Tucker
More informationIntroduction to Optimization Techniques. Nonlinear Optimization in Function Spaces
Introduction to Optimization Techniques Nonlinear Optimization in Function Spaces X : T : Gateaux and Fréchet Differentials Gateaux and Fréchet Differentials a vector space, Y : a normed space transformation
More informationConvex Optimization Boyd & Vandenberghe. 5. Duality
5. Duality Convex Optimization Boyd & Vandenberghe Lagrange dual problem weak and strong duality geometric interpretation optimality conditions perturbation and sensitivity analysis examples generalized
More informationSeptember Math Course: First Order Derivative
September Math Course: First Order Derivative Arina Nikandrova Functions Function y = f (x), where x is either be a scalar or a vector of several variables (x,..., x n ), can be thought of as a rule which
More informationSymmetric and Asymmetric Duality
journal of mathematical analysis and applications 220, 125 131 (1998) article no. AY975824 Symmetric and Asymmetric Duality Massimo Pappalardo Department of Mathematics, Via Buonarroti 2, 56127, Pisa,
More informationKarush-Kuhn-Tucker Conditions. Lecturer: Ryan Tibshirani Convex Optimization /36-725
Karush-Kuhn-Tucker Conditions Lecturer: Ryan Tibshirani Convex Optimization 10-725/36-725 1 Given a minimization problem Last time: duality min x subject to f(x) h i (x) 0, i = 1,... m l j (x) = 0, j =
More informationLecture: Duality of LP, SOCP and SDP
1/33 Lecture: Duality of LP, SOCP and SDP Zaiwen Wen Beijing International Center For Mathematical Research Peking University http://bicmr.pku.edu.cn/~wenzw/bigdata2017.html wenzw@pku.edu.cn Acknowledgement:
More informationISM206 Lecture Optimization of Nonlinear Objective with Linear Constraints
ISM206 Lecture Optimization of Nonlinear Objective with Linear Constraints Instructor: Prof. Kevin Ross Scribe: Nitish John October 18, 2011 1 The Basic Goal The main idea is to transform a given constrained
More informationQuiz Discussion. IE417: Nonlinear Programming: Lecture 12. Motivation. Why do we care? Jeff Linderoth. 16th March 2006
Quiz Discussion IE417: Nonlinear Programming: Lecture 12 Jeff Linderoth Department of Industrial and Systems Engineering Lehigh University 16th March 2006 Motivation Why do we care? We are interested in
More informationConstrained Optimization Theory
Constrained Optimization Theory Stephen J. Wright 1 2 Computer Sciences Department, University of Wisconsin-Madison. IMA, August 2016 Stephen Wright (UW-Madison) Constrained Optimization Theory IMA, August
More informationConvex Optimization. Dani Yogatama. School of Computer Science, Carnegie Mellon University, Pittsburgh, PA, USA. February 12, 2014
Convex Optimization Dani Yogatama School of Computer Science, Carnegie Mellon University, Pittsburgh, PA, USA February 12, 2014 Dani Yogatama (Carnegie Mellon University) Convex Optimization February 12,
More informationLectures 9 and 10: Constrained optimization problems and their optimality conditions
Lectures 9 and 10: Constrained optimization problems and their optimality conditions Coralia Cartis, Mathematical Institute, University of Oxford C6.2/B2: Continuous Optimization Lectures 9 and 10: Constrained
More informationChap 2. Optimality conditions
Chap 2. Optimality conditions Version: 29-09-2012 2.1 Optimality conditions in unconstrained optimization Recall the definitions of global, local minimizer. Geometry of minimization Consider for f C 1
More informationCO 250 Final Exam Guide
Spring 2017 CO 250 Final Exam Guide TABLE OF CONTENTS richardwu.ca CO 250 Final Exam Guide Introduction to Optimization Kanstantsin Pashkovich Spring 2017 University of Waterloo Last Revision: March 4,
More informationEE/AA 578, Univ of Washington, Fall Duality
7. Duality EE/AA 578, Univ of Washington, Fall 2016 Lagrange dual problem weak and strong duality geometric interpretation optimality conditions perturbation and sensitivity analysis examples generalized
More informationLecture 3. Optimization Problems and Iterative Algorithms
Lecture 3 Optimization Problems and Iterative Algorithms January 13, 2016 This material was jointly developed with Angelia Nedić at UIUC for IE 598ns Outline Special Functions: Linear, Quadratic, Convex
More informationOutline. Roadmap for the NPP segment: 1 Preliminaries: role of convexity. 2 Existence of a solution
Outline Roadmap for the NPP segment: 1 Preliminaries: role of convexity 2 Existence of a solution 3 Necessary conditions for a solution: inequality constraints 4 The constraint qualification 5 The Lagrangian
More informationSemidefinite Programming
Semidefinite Programming Notes by Bernd Sturmfels for the lecture on June 26, 208, in the IMPRS Ringvorlesung Introduction to Nonlinear Algebra The transition from linear algebra to nonlinear algebra has
More informationApplication of Harmonic Convexity to Multiobjective Non-linear Programming Problems
International Journal of Computational Science and Mathematics ISSN 0974-3189 Volume 2, Number 3 (2010), pp 255--266 International Research Publication House http://wwwirphousecom Application of Harmonic
More informationICS-E4030 Kernel Methods in Machine Learning
ICS-E4030 Kernel Methods in Machine Learning Lecture 3: Convex optimization and duality Juho Rousu 28. September, 2016 Juho Rousu 28. September, 2016 1 / 38 Convex optimization Convex optimisation This
More informationMathematical Economics. Lecture Notes (in extracts)
Prof. Dr. Frank Werner Faculty of Mathematics Institute of Mathematical Optimization (IMO) http://math.uni-magdeburg.de/ werner/math-ec-new.html Mathematical Economics Lecture Notes (in extracts) Winter
More informationModern Optimization Theory: Concave Programming
Modern Optimization Theory: Concave Programming 1. Preliminaries 1 We will present below the elements of modern optimization theory as formulated by Kuhn and Tucker, and a number of authors who have followed
More informationOptimization Theory. Lectures 4-6
Optimization Theory Lectures 4-6 Unconstrained Maximization Problem: Maximize a function f:ú n 6 ú within a set A f ú n. Typically, A is ú n, or the non-negative orthant {x0ú n x$0} Existence of a maximum:
More informationWritten Examination
Division of Scientific Computing Department of Information Technology Uppsala University Optimization Written Examination 202-2-20 Time: 4:00-9:00 Allowed Tools: Pocket Calculator, one A4 paper with notes
More informationChapter 1. Preliminaries
Introduction This dissertation is a reading of chapter 4 in part I of the book : Integer and Combinatorial Optimization by George L. Nemhauser & Laurence A. Wolsey. The chapter elaborates links between
More informationLinear and non-linear programming
Linear and non-linear programming Benjamin Recht March 11, 2005 The Gameplan Constrained Optimization Convexity Duality Applications/Taxonomy 1 Constrained Optimization minimize f(x) subject to g j (x)
More informationNonlinear Programming (Hillier, Lieberman Chapter 13) CHEM-E7155 Production Planning and Control
Nonlinear Programming (Hillier, Lieberman Chapter 13) CHEM-E7155 Production Planning and Control 19/4/2012 Lecture content Problem formulation and sample examples (ch 13.1) Theoretical background Graphical
More informationEC /11. Math for Microeconomics September Course, Part II Lecture Notes. Course Outline
LONDON SCHOOL OF ECONOMICS Professor Leonardo Felli Department of Economics S.478; x7525 EC400 20010/11 Math for Microeconomics September Course, Part II Lecture Notes Course Outline Lecture 1: Tools for
More informationLecture 3: Lagrangian duality and algorithms for the Lagrangian dual problem
Lecture 3: Lagrangian duality and algorithms for the Lagrangian dual problem Michael Patriksson 0-0 The Relaxation Theorem 1 Problem: find f := infimum f(x), x subject to x S, (1a) (1b) where f : R n R
More informationLecture 1. 1 Conic programming. MA 796S: Convex Optimization and Interior Point Methods October 8, Consider the conic program. min.
MA 796S: Convex Optimization and Interior Point Methods October 8, 2007 Lecture 1 Lecturer: Kartik Sivaramakrishnan Scribe: Kartik Sivaramakrishnan 1 Conic programming Consider the conic program min s.t.
More informationComputational Finance
Department of Mathematics at University of California, San Diego Computational Finance Optimization Techniques [Lecture 2] Michael Holst January 9, 2017 Contents 1 Optimization Techniques 3 1.1 Examples
More informationConvex Functions and Optimization
Chapter 5 Convex Functions and Optimization 5.1 Convex Functions Our next topic is that of convex functions. Again, we will concentrate on the context of a map f : R n R although the situation can be generalized
More informationOptimality Conditions
Chapter 2 Optimality Conditions 2.1 Global and Local Minima for Unconstrained Problems When a minimization problem does not have any constraints, the problem is to find the minimum of the objective function.
More informationFIRST- AND SECOND-ORDER OPTIMALITY CONDITIONS FOR MATHEMATICAL PROGRAMS WITH VANISHING CONSTRAINTS 1. Tim Hoheisel and Christian Kanzow
FIRST- AND SECOND-ORDER OPTIMALITY CONDITIONS FOR MATHEMATICAL PROGRAMS WITH VANISHING CONSTRAINTS 1 Tim Hoheisel and Christian Kanzow Dedicated to Jiří Outrata on the occasion of his 60th birthday Preprint
More informationOn the Method of Lagrange Multipliers
On the Method of Lagrange Multipliers Reza Nasiri Mahalati November 6, 2016 Most of what is in this note is taken from the Convex Optimization book by Stephen Boyd and Lieven Vandenberghe. This should
More informationLecture 7: Convex Optimizations
Lecture 7: Convex Optimizations Radu Balan, David Levermore March 29, 2018 Convex Sets. Convex Functions A set S R n is called a convex set if for any points x, y S the line segment [x, y] := {tx + (1
More information2.098/6.255/ Optimization Methods Practice True/False Questions
2.098/6.255/15.093 Optimization Methods Practice True/False Questions December 11, 2009 Part I For each one of the statements below, state whether it is true or false. Include a 1-3 line supporting sentence
More informationChapter 3: Constrained Extrema
Chapter 3: Constrained Extrema Math 368 c Copyright 2012, 2013 R Clark Robinson May 22, 2013 Chapter 3: Constrained Extrema 1 Implicit Function Theorem For scalar fn g : R n R with g(x ) 0 and g(x ) =
More informationTangent spaces, normals and extrema
Chapter 3 Tangent spaces, normals and extrema If S is a surface in 3-space, with a point a S where S looks smooth, i.e., without any fold or cusp or self-crossing, we can intuitively define the tangent
More informationMotivation. Lecture 2 Topics from Optimization and Duality. network utility maximization (NUM) problem:
CDS270 Maryam Fazel Lecture 2 Topics from Optimization and Duality Motivation network utility maximization (NUM) problem: consider a network with S sources (users), each sending one flow at rate x s, through
More informationThe Relation Between Pseudonormality and Quasiregularity in Constrained Optimization 1
October 2003 The Relation Between Pseudonormality and Quasiregularity in Constrained Optimization 1 by Asuman E. Ozdaglar and Dimitri P. Bertsekas 2 Abstract We consider optimization problems with equality,
More informationOPTIMALITY AND STABILITY OF SYMMETRIC EVOLUTIONARY GAMES WITH APPLICATIONS IN GENETIC SELECTION. (Communicated by Yang Kuang)
MATHEMATICAL BIOSCIENCES doi:10.3934/mbe.2015.12.503 AND ENGINEERING Volume 12, Number 3, June 2015 pp. 503 523 OPTIMALITY AND STABILITY OF SYMMETRIC EVOLUTIONARY GAMES WITH APPLICATIONS IN GENETIC SELECTION
More informationSupport Vector Machines
Support Vector Machines Support vector machines (SVMs) are one of the central concepts in all of machine learning. They are simply a combination of two ideas: linear classification via maximum (or optimal
More information