Part 2: NLP Constrained Optimization


1 Part 2: NLP Constrained Optimization. James G. Shanahan, Independent Consultant and Lecturer, UC Santa Cruz, James_DOT_Shanahan_AT_gmail_DOT_com. WiFi: SSID Student, username ucsc-guest, password EnrollNow! TIM Introduction to Optimization Theory and Applications, Thursday, March 14, 2013, Lecture 14, University of California, Santa Cruz. TIM Introduction to Optimization Theory and Applications, Winter 2013, James G. Shanahan 1

2 Final Exam. Take-home final exam, due on March 24, 2013. You will receive it on March 17, 2013. Happy Saint Patrick's Day! 2

3 Outline. Lecture 10: Multivariable unconstrained optimization (Linear Regression, Logistic Regression). KKT conditions for Constrained Optimization: Lagrangian, KKT conditions. Quadratic Programming: Modified Simplex Algorithm. Convex Programming: Frank-Wolfe Algorithm; Penalty or barrier functions, e.g., SUMT. Nonconvex Programming. Course review 3

4 Newton's Method in Optimization. This iterative scheme can be generalized to several dimensions by replacing the derivative with the gradient, $\nabla f(X)$, and the reciprocal of the second derivative with the inverse of the Hessian matrix. One obtains the Newton iterative scheme: for one variable, $x^{i+1} = x^i - f'(x^i)/f''(x^i)$; for multiple variables, in matrix form, $X^{i+1} = X^i - [\nabla^2 f(X^i)]^{-1} \nabla f(X^i)$. 4

5 Newton's Method in Optimization. As we have seen above, Newton's method is used to find the roots of equations in one or more dimensions. It can also be used to find local maxima and local minima of functions, since these extrema are the roots of the derivative, i.e., solutions of $f'(x) = 0$. We define a series of $x$'s, starting from an initial guess $x^0$, such that the series converges towards an $x^*$ which satisfies $f'(x^*) = 0$. This $x^*$ will also be an extremum, i.e., a stationary point, of $f$. Thus, provided that $f$ is a twice-differentiable function and the initial guess is chosen close enough to $x^*$, the sequence $x^i$ defined by the iteration function $x^{i+1} = x^i - f'(x^i)/f''(x^i)$ will converge to $x^*$. 5
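
A minimal sketch of this 1D iteration (the quartic test function is assumed here for illustration; it is not from the slides):

```python
# Newton's method for 1D optimization: iterate x <- x - f'(x)/f''(x)
# until the step is (numerically) zero, i.e. we have a root of f'.

def newton_1d(f_prime, f_double_prime, x0, tol=1e-8, max_iter=50):
    x = x0
    for _ in range(max_iter):
        step = f_prime(x) / f_double_prime(x)
        x -= step
        if abs(step) < tol:          # converged to a stationary point of f
            break
    return x

# Illustrative (assumed) objective: f(x) = x**4 - 3*x**2 + 2
# f'(x) = 4x^3 - 6x,  f''(x) = 12x^2 - 6
x_star = newton_1d(lambda x: 4*x**3 - 6*x,
                   lambda x: 12*x**2 - 6,
                   x0=2.0)
print(x_star)   # converges to the local minimum near x = sqrt(1.5) ~ 1.2247
```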

6 Gradient. Nabla, $\nabla$, is the symbol. The gradient at a specific point $X = \bar{x}$ is the vector whose elements are the respective partial derivatives evaluated at $X = \bar{x}$, so that $\nabla f(\bar{x}) = (\partial f/\partial x_1, \partial f/\partial x_2, \ldots, \partial f/\partial x_n)$. The significance of the gradient is that the infinitesimal change in $X$ that maximizes the rate at which $f(X)$ increases is the change that is proportional to $\nabla f(\bar{x})$. 6

7 Gradient. 7

8 Significance of the Gradient Vector. The gradient vector, $\nabla f(x,y)$, gives the direction of fastest increase of $f(x, y)$ (assuming a two-variable function here) [Newton-Raphson]. The gradient vector, $\nabla f(x,y)$, is orthogonal to the contour lines. Imagine climbing an upside-down bowl from below, where I can move in any $\langle x, y \rangle$ direction (NOTE: I can't move in $z$; $x$ and $y$ are independent variables). If I follow the level curve $f(x,y) = k$ then I make no progress toward the summit or bottom, but if I move perpendicular to the level curve then I make the quickest progress to the summit of the bowl. 8

9 MultiVariate Taylor. $\nabla F(x) = [\partial F/\partial x_1, \partial F/\partial x_2, \ldots, \partial F/\partial x_n]^T$ is the gradient of $F$ evaluated at $x = x^*$. First-order (linear) Taylor approximation: $F(x) \approx F(x^*) + \nabla F(x^*)^T (x - x^*)$. Iteration function $x^{i+1} = x^i - g(x^i)/g'(x^i)$, where $g = f'$; i.e., $x^{i+1} = x^i - f'(x^i)/f''(x^i)$ is the iteration function for finding roots of $f'$. 9

10 MultiVariate Taylor. Second-order Taylor approximation: $F(x) \approx F(x^*) + \nabla F(x^*)^T (x - x^*) + \tfrac{1}{2}(x - x^*)^T \nabla^2 F(x^*) (x - x^*)$, where $\nabla F(x^*)$ is the gradient of $F$ evaluated at $x^*$ and $\nabla^2 F(x^*)$ is the Hessian of $F$ evaluated at $x^*$. Iteration function $x^{i+1} = x^i - g(x^i)/g'(x^i)$, where $g = f'$; i.e., $x^{i+1} = x^i - f'(x^i)/f''(x^i)$ for finding roots of $f'$. 10

11 Multivariate Newton's Method. Find the roots of an equation or system of equations. Calculating the gradient and Hessian is not very time-consuming, but calculating the inverse of H is! In R, have a look at ?optim #method=BFGS [ 29/lecture-29.pdf] [Hand, Mannila, Smyth, Data Mining, Section 8.3] 11
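
A minimal NumPy sketch of the multivariate iteration, solving $H d = \nabla f$ rather than explicitly inverting $H$; the quadratic test function is assumed for illustration. In practice, scipy.optimize.minimize(method="BFGS") avoids the Hessian entirely, much like R's optim:

```python
import numpy as np

def newton_multivariate(grad, hess, x0, tol=1e-8, max_iter=50):
    """Newton iteration X <- X - H(X)^{-1} grad(X), via a linear solve."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:      # stationary point reached
            break
        d = np.linalg.solve(hess(x), g)  # solve H d = g instead of inverting H
        x = x - d
    return x

# Illustrative (assumed) objective: f(x, y) = (x - 1)^2 + 2*(y + 0.5)^2 + x*y
grad = lambda v: np.array([2*(v[0] - 1) + v[1], 4*(v[1] + 0.5) + v[0]])
hess = lambda v: np.array([[2.0, 1.0], [1.0, 4.0]])

print(newton_multivariate(grad, hess, x0=[0.0, 0.0]))  # one step: ~(1.43, -0.86)
```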

12 Tangent Approximations 12

13 A plane from a point and orthogonal vector Although a line in space is determined by a point and a direction, a plane in space is more difficult to describe. A single vector parallel to a plane is not enough to convey the direction of the plane, but a vector perpendicular to the plane does completely specify its direction. Thus, a plane in space is determined by a point in the plane and a vector that is orthogonal to the plane. This orthogonal vector is called a normal vector. 13

14 Tangent Planes and Linear Approximation. Just as we can visualize the line tangent to a curve at a point in 2-space, in 3-space we can picture the plane tangent to a surface at a point. Consider the surface given by $z = f(x, y)$. Let $(x_0, y_0, z_0)$ be any point on this surface. If $f(x, y)$ is differentiable at $(x_0, y_0)$, then the surface has a tangent plane at $(x_0, y_0, z_0)$. The equation of the tangent plane at $(x_0, y_0, z_0)$ is given by: $z - z_0 = f_x(x_0, y_0)(x - x_0) + f_y(x_0, y_0)(y - y_0)$ (Tangent Plane), of similar form to the tangent line $y - y_0 = f'(x_0)(x - x_0)$, where $f_x(x_0, y_0)$ is the partial derivative of $f$ with respect to $x$ evaluated at $(x_0, y_0)$; similarly for $f_y(x_0, y_0)$. 14

15 Notation. $\nabla F(x) = [\partial F/\partial x_1, \ldots, \partial F/\partial x_n]^T$ is the gradient of $F$ evaluated at $x = x^*$. 1D linear approximation: $f(x) \approx f(a) + f'(a)(x - a)$. Multivariable linear approximation: $F(x) \approx F(x^*) + \nabla F(x^*)^T (x - x^*)$.

16 Tangent Plane. $z - z_0 = f_x(x_0, y_0)(x - x_0) + f_y(x_0, y_0)(y - y_0)$. For a function of two variables, calculate the gradient vector by evaluating the partial derivatives at the tangential point. The gradient vector at $(1, 1)$ is $(4, 2)$; $\nabla f(1,1) = (4, 2)$ and $f(1,1) = 3$. Tangent plane at $(1, 1, 3)$ with gradient $(4, 2)$. [Adapted from Multivariable Calculus: Concepts and Contexts, James Stewart] 16
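
A small sketch of this slide's calculation; the surface $f(x, y) = 2x^2 + y^2$ is assumed here (it is consistent with the quoted values $f(1,1) = 3$ and $\nabla f(1,1) = (4, 2)$, but the slide does not state it explicitly):

```python
# Tangent plane to z = f(x, y) at (x0, y0):
#   z = f(x0, y0) + fx(x0, y0)*(x - x0) + fy(x0, y0)*(y - y0)

def f(x, y):              # assumed surface
    return 2*x**2 + y**2

def grad_f(x, y):         # (fx, fy) = (4x, 2y)
    return 4*x, 2*y

x0, y0 = 1.0, 1.0
z0 = f(x0, y0)            # 3.0
fx, fy = grad_f(x0, y0)   # (4.0, 2.0)

def tangent_plane(x, y):
    return z0 + fx*(x - x0) + fy*(y - y0)

# Near (1, 1) the plane tracks the surface closely:
print(f(1.05, 0.95), tangent_plane(1.05, 0.95))   # 3.1075 vs 3.1
```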

17 Tangent Plane Example. Gradient vector $(4, 2)$ at the point $(1, 1)$. 17

18 Tangent Plane to Ellipsoid Example 18

19 Approximate Δy with dy via the tangent. The difference in f is the second term in the linear Taylor expansion $f(x) \approx f(a) + f'(a)(x - a)$. The actual change Δy is approximated by the predicted change dy, i.e., Δy ≈ dy, where dy is the predicted difference in f given the linear approximation. We can change Δx as much as we like, but the bigger Δx is, the bigger the gap between the tangent approximation and the actual function. 19

20 Linear and Quadratic Approximations. Approximate $f(x)$ for $x$ around a point $a$ by the tangent at the point $(a, f(a))$: $f(x) \approx f(a) + f'(a)(x - a)$, where $f'(a)$ is the slope at $(a, f(a))$. The Taylor series explores different approximations of $f(x)$; the tangential form above is the linear approximation. General form of a Taylor series: $f(x) = f(a) + f'(a)(x - a) + \frac{f''(a)}{2!}(x - a)^2 + \ldots + \frac{f^{(n)}(a)}{n!}(x - a)^n + \ldots$, or more compactly, $f(x) = \sum_{k=0}^{\infty} \frac{f^{(k)}(a)}{k!}(x - a)^k$. 20
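
A short sketch comparing the linear and quadratic Taylor approximations around a point a (the exponential test function is assumed purely for illustration):

```python
import math

# Taylor approximations of f around a:
#   linear:    f(a) + f'(a)*(x - a)
#   quadratic: f(a) + f'(a)*(x - a) + f''(a)*(x - a)**2 / 2

f = f1 = f2 = math.exp          # for f(x) = e^x, every derivative is e^x

a, x = 0.0, 0.5
linear    = f(a) + f1(a)*(x - a)
quadratic = linear + f2(a)*(x - a)**2 / 2

print(f(x), linear, quadratic)  # 1.6487...  1.5  1.625  (quadratic is closer)
```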

21 Total Differential, dz, for $z = f(x, y)$ in 2D. The total differential in 2D gives the estimated change in $z = f(x, y)$ using a linear approximation; it corresponds to the second (linear) term in Taylor's expansion. Estimated change: $dz = f_x(a, b)\,\Delta x + f_y(a, b)\,\Delta y$. Total change in $f(x, y)$: $\Delta z \approx dz$. 21

22 Total Differential in 2D (estimated change in z). First-order Taylor series: $f(a + \Delta x, b + \Delta y) \approx L(a + \Delta x, b + \Delta y) = f(a, b) + f_x(a, b)\,\Delta x + f_y(a, b)\,\Delta y$, where $L$ is the linear approximation of $f(x, y)$ around the point $(a, b)$. 22

23 Total Differential in 2D (estimated change in z). The total differential estimates how much z changes (estimated, as it is based on the tangent-plane approximation). Any $f(x, y)$ can be approximated by $f(a, b)$ + total differential, for any $(x, y)$ close to $(a, b)$. 23

24 Total Differential in 2D: An Example. Change in height of f when I travel by (0.05, ...). Total differential: estimated z difference, i.e., the difference between the tangent-plane value $f_{\mathrm{Tang}(a,b)}(x, y)$ and $f(a, b)$. Actual z difference, i.e., $\Delta z = f(x, y) - f(a, b)$. 24
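
A minimal sketch of the dz-versus-Δz comparison on this slide; the function, base point, and step below are assumed for illustration, not the slide's own numbers:

```python
# Total differential dz = fx(a, b)*dx + fy(a, b)*dy estimates the actual
# change dz_actual = f(a + dx, b + dy) - f(a, b) via the tangent plane.

def f(x, y):                         # assumed example surface
    return x**2 * y + 3*y

def partials(x, y):                  # (fx, fy) = (2xy, x^2 + 3)
    return 2*x*y, x**2 + 3

a, b = 2.0, 1.0
dx, dy = 0.05, -0.02

fx, fy = partials(a, b)
dz = fx*dx + fy*dy                   # estimated change (total differential)
actual = f(a + dx, b + dy) - f(a, b) # actual change

print(dz, actual)                    # 0.06 vs ~0.0585: close for small steps
```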

25 Total Differential and Directional Derivative. The total differential tells us how much z changes (only estimated, as it is based on the tangent-plane approximation) when we travel in a particular direction u, i.e., $D_u f(x, y) = \nabla f(x, y) \cdot u$, where $u = (x - a, y - b)$. Any $f(x, y)$ can be approximated by $f(a, b)$ + total differential. 25

26 Directional Derivative == Total Differential == Change in f. Change in f when we travel in direction v: we can approximate the change in z, i.e., $f(x + v) - f(x)$, using $D_v f(x, y) = \nabla f(x, y) \cdot v$; likewise $D_A f(x, y) = \nabla f(x, y) \cdot (x - A_1, y - A_2)$. 26

27 Necessary and Sufficient Conditions for Optimality.. 27

28 KKT Conditions.. 28

29 Problem needs to be concave.. 29

30 Example.. 30

31 Example.. 31

32 Shadow prices / dual variables $u_i$.. 32

33 Quadratic Programming. 33

34 Example.. 34

35 Concave Quadratic Programming.. 35

36 KKT Conditions of QP.. 36

37 .. 37

38 Complementarity Constraint.. 38

39 .. 39

40 From Quadratic to LP via KKT conditions.. 40

41 Find a feasible solution to these constraints.. 41

42 Modified Simplex Algorithm. Find a feasible solution to this: 42

43 .. 43

44 Restricted Entry Rule.. 44

45 Example.. 45

46 Simplex Tableau.. 46

47 Convex Programming. Three families of approaches: Sequential Approximation Algorithms; Sequential Unconstrained Minimization Techniques (SUMT); Generalized Reduced Gradient. 47

48 Sequential Approximation Algorithms. Sequential-approximation algorithms include linear-approximation and quadratic-approximation methods. These algorithms replace the nonlinear objective function by a succession of linear or quadratic approximations. For linearly constrained optimization problems, these approximations allow repeated application of linear or quadratic programming algorithms. This work is accompanied by other analysis that yields a sequence of solutions converging to an optimal solution of the original problem. The approach can be extended to problems with nonlinear constraint functions by the use of appropriate linear approximations. 48

49 Frank-Wolfe Algorithm. As one example of a sequential-approximation algorithm, consider the Frank-Wolfe algorithm for the case of linearly constrained convex programming, with constraints $Ax \le b$ and $x \ge 0$ in matrix form. This procedure is particularly straightforward; it combines linear approximations of the objective function (enabling us to use the simplex method) with a one-dimensional (line) search procedure. 49

50 Frank-Wolfe Algorithm. The Frank-Wolfe algorithm is a simple iterative first-order optimization algorithm for constrained convex optimization (e.g., quadratic programming) problems. Also known as the conditional gradient method, the reduced gradient algorithm, and the convex combination algorithm, the method was originally proposed by Marguerite Frank and Philip Wolfe in 1956. In each iteration, the Frank-Wolfe algorithm considers a linear approximation of the objective function and moves slightly towards a minimizer of this linear function (taken over the same domain). 50

51 Introduction. The basic idea: 1. Use linear functions to approximate both the objective function and the constraints (linearization). 2. Employ LP algorithms to solve this new linear program. 51

52 Introduction. Linearization can be achieved in two ways: any non-linear function can be approximated in the vicinity of a point by using Taylor's expansion, where that point is called the linearization point; or by using piecewise linear approximations and then applying a modified simplex algorithm (separable programming). 52

53 Direct Use of Successive Linear Programs. Using Taylor's expansion, linearize all problem functions at some selected estimate of the solution; the result is an LP, and the estimate is called the linearization point. With some additional precautions, the LP solution ought to be an improvement over the linearization point. There are two cases to be considered: 1. the linearly constrained NLP case; 2. the general NLP case. 53

54 Linearly Constrained NLP Case. The linearly constrained NLP problem is that of minimizing a nonlinear objective function subject to linear constraints. The feasible region is a polyhedron; however, the optimal solution can lie anywhere within the feasible region, not necessarily at a corner point. 54

55 Linearly Constrained NLP Case. Using Taylor's approximation around the linearization point and ignoring the second- and higher-order terms, we obtain the linear approximation of the objective around that point; this gives the linearized version of the problem. The solution of the linearized version is a corner point of the polyhedron. How close is it to the solution of the original NLP? By virtue of minimization, the linearized objective at that corner point can be no larger than at the linearization point. 55

56 Linear and Quadratic Approximations. Using Taylor's approximation around the linearization point $x_0 = a$: $f(x) = f(a) + f'(a)(x - a) + \frac{f''(a)}{2!}(x - a)^2 + \ldots + \frac{f^{(n)}(a)}{n!}(x - a)^n + \ldots$, or more compactly, $f(x) = \sum_{k=0}^{\infty} \frac{f^{(k)}(a)}{k!}(x - a)^k$. 56

57 Total Differential in 2D (estimated change in z). First-order Taylor series: $f(a + \Delta x, b + \Delta y) \approx L(a + \Delta x, b + \Delta y) = f(a, b) + f_x(a, b)\,\Delta x + f_y(a, b)\,\Delta y$, where $L$ is the linear approximation of $f(x, y)$ around the point $(a, b)$. 57

58 .. 58

59 FW Algo.. 59

60 Linearly Constrained NLP Case. A bit of algebra leads to the result that the vector from the linearization point to the LP solution is a descent direction. We previously studied that a descent direction can lead to an improved point only if it is coupled with a step-adjustment procedure. All points between the linearization point and the LP solution are feasible; moreover, since the LP solution is a corner point, any point beyond it on the line lies outside the feasible region. So, to improve upon the linearization point, a line search is employed on the line segment joining the two points: minimizing the objective along this segment finds a point whose objective value is no worse than at the linearization point. 60

61 Linearly Constrained NLP Case. The resulting point will not in general be the optimal solution, but it will serve as the linearization point for the next approximating LP. The textbook presents the Frank-Wolfe algorithm, which employs this sequence of alternating LPs and line searches; a sketch of the loop follows below. 61
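
A minimal sketch of this alternating LP / line-search loop for min $f(x)$ s.t. $Ax \le b$, $x \ge 0$, using SciPy's LP solver and bounded 1D search. The quadratic objective and constraint data at the bottom are assumed for illustration and are not the textbook's Example 8.1:

```python
import numpy as np
from scipy.optimize import linprog, minimize_scalar

def frank_wolfe(f, grad_f, A, b, x0, max_iter=50, tol=1e-6):
    """Frank-Wolfe for min f(x) s.t. A x <= b, x >= 0 (x0 must be feasible)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad_f(x)
        # LP step: minimize the linearized objective g^T y over the polyhedron.
        lp = linprog(c=g, A_ub=A, b_ub=b, bounds=[(0, None)] * len(x))
        y = lp.x
        d = y - x                               # descent direction
        if abs(g @ d) < tol:                    # FW gap ~ 0 => KKT point
            break
        # Line search on the feasible segment x + t*d, t in [0, 1].
        t = minimize_scalar(lambda t: f(x + t * d), bounds=(0, 1),
                            method="bounded").x
        x = x + t * d
    return x

# Assumed example: min (x1-2)^2 + (x2-3)^2  s.t.  x1 + x2 <= 3, x >= 0
f      = lambda x: (x[0] - 2)**2 + (x[1] - 3)**2
grad_f = lambda x: np.array([2*(x[0] - 2), 2*(x[1] - 3)])
A, b   = np.array([[1.0, 1.0]]), np.array([3.0])

print(frank_wolfe(f, grad_f, A, b, x0=[0.0, 0.0]))   # approx (1, 2)
```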

62 FW Algo.. 62

63 FW Example.. 63

64 .. 64

65 Iteration

66 .. 66

67 Iteration

68 Optimal Solution.. 68

69 .. 69

70 Frank-Wolfe Algorithm Execution: Example 8.1, Page Number

71 Frank-Wolfe Algorithm Execution: Example 8.1, Page Number

72 Frank-Wolfe Algorithm. The Frank-Wolfe algorithm converges to a Kuhn-Tucker point from any feasible starting point. There is no general analysis of the rate of convergence. However, if the objective is convex, we can obtain estimates of how much remaining improvement can be achieved: if the objective is convex and is linearized at a point, the linearization underestimates the objective everywhere. Hence, after each cycle, the difference between the current objective value and the optimal value of the linearized problem gives an estimate of the remaining improvement. 72

73 Conclusions: Frank-Wolfe Algorithm. In conclusion, we emphasize that the Frank-Wolfe algorithm is just one example of sequential-approximation algorithms. Many of these algorithms use quadratic instead of linear approximations at each iteration, because quadratic approximations provide a considerably closer fit to the original problem and thus enable the sequence of solutions to converge considerably more rapidly toward an optimal solution than was the case in the linear FW algorithm. For this reason, even though sequential linear approximation methods such as the Frank-Wolfe algorithm are relatively straightforward to use, sequential quadratic approximation methods now are generally preferred in actual applications. Popular among these are the quasi-Newton (or variable metric) methods, which compute a quadratic approximation to the curvature of a nonlinear function without explicitly calculating second partial derivatives. For further information about convex programming algorithms, see Selected References 4 and 6. 73

74 Conclusions: Frank-Wolfe Algorithm. In conclusion, we emphasize that the Frank-Wolfe algorithm is just one example of sequential-approximation algorithms. Many of these algorithms use quadratic instead of linear approximations at each iteration, because quadratic approximations provide a considerably closer fit to the original problem and thus enable the sequence of solutions to converge considerably more rapidly toward an optimal solution than was the case in Fig. b (H&L Book). For this reason, even though sequential linear approximation methods such as the Frank-Wolfe algorithm are relatively straightforward to use, sequential quadratic approximation methods now are generally preferred in actual applications. Popular among these are the quasi-Newton (or variable metric) methods, which compute a quadratic approximation to the curvature of a nonlinear function without explicitly calculating second partial derivatives. For linearly constrained optimization problems, this nonlinear function is just the objective function; whereas with nonlinear constraints, it is the Lagrangian function described in Appendix 3. Some quasi-Newton algorithms do not even explicitly form and solve an approximating quadratic programming problem at each iteration, but instead incorporate some of the basic ingredients of gradient algorithms. 74

75 Linearly Constrained NLP case: Frank-Wolfe Algorithm 75

76 MULTIVARIABLE OPTIMIZATION PROCEDURES. Introduction; Multivariable Search Methods Overview; Unconstrained Multivariable Search Methods; Quasi-Newton Methods; Conjugate Gradient and Direction Methods; Logical Methods; Constrained Multivariable Search Methods; Successive Linear Programming; Successive Quadratic Programming; Generalized Reduced Gradient Method; Penalty, Barrier and Augmented Lagrangian Functions; Other Multivariable Constrained Search Methods; Comparison of Constrained Multivariable Search Methods; Stochastic Approximation Procedures; Closure; FORTRAN Program for BFGS Search of an Unconstrained Function; References; Problems. 76

77 Sequential Unconstrained Minimization Techniques SUMT. 77

78 Penalty and Barrier Methods. General classical constrained minimization problem: minimize $f(x)$ subject to $g(x) \le 0$, $h(x) = 0$. Penalty methods are motivated by the desire to use unconstrained optimization techniques to solve constrained problems. This is achieved by either adding a penalty for infeasibility and forcing the solution to feasibility and a subsequent optimum, or adding a barrier to ensure that a feasible solution never becomes infeasible. 78

79 Penalty Methods. Penalty methods use a mathematical function that will increase the objective for any given constraint violation. General transformation of a constrained problem into an unconstrained problem: min $T(x) = f(x) + r_k P(x)$, where $f(x)$ is the objective function of the constrained problem, $r_k$ is a scalar known as the penalty (or controlling) parameter, and $P(x)$ is a function which imposes penalties for infeasibility (note that P is controlled by $r_k$); $T$ is the pseudo (transformed) objective. Two approaches exist in the choice of transformation algorithms: (1) sequential penalty transformations; (2) exact penalty transformations. 79

80 Sequential Penalty Transformations. Sequential penalty transformations are the oldest penalty methods. Also known as Sequential Unconstrained Minimization Techniques (SUMT), based upon the work of Fiacco and McCormick. Consider the following frequently used general-purpose penalty function: $T(x) = f(x) + r_k \{ \sum_{i=1}^{m} (\max[0, g_i(x)])^2 + \sum_{j=1}^{l} (h_j(x))^2 \}$. Sequential transformations work as follows: choose P and a sequence of $r_k$ such that as k goes to infinity, the solution $x^*$ is found. For example, for k = 1, set $r_1 = 1$ and solve the problem. Then, for the second iteration, $r_k$ is increased by a factor of 10 and the problem is re-solved starting from the previous solution. Note that an increasing value of $r_k$ will increase the effect of the penalties on T. The process is terminated when no improvement in T is found and all constraints are satisfied. 80
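
A minimal sketch of this sequential (exterior-point) penalty loop, using SciPy's unconstrained minimizer for each subproblem; the objective and single constraint at the bottom are assumed for illustration, not the course's worked example:

```python
import numpy as np
from scipy.optimize import minimize

def sumt_penalty(f, gs, hs, x0, r0=1.0, factor=10.0, outer_iters=8):
    """Exterior-point SUMT: minimize T(x) = f(x) + r_k * P(x), increasing r_k."""
    x, r = np.asarray(x0, dtype=float), r0
    for _ in range(outer_iters):
        def T(x):
            p = sum(max(0.0, g(x))**2 for g in gs)   # violations of g(x) <= 0
            p += sum(h(x)**2 for h in hs)            # violations of h(x) = 0
            return f(x) + r * p
        x = minimize(T, x).x        # warm-start each subproblem at the last solution
        r *= factor                 # strengthen the penalty
    return x

# Assumed example: min x1^2 + x2^2  s.t.  x1 + x2 >= 1, i.e. g(x) = 1 - x1 - x2 <= 0
f  = lambda x: x[0]**2 + x[1]**2
g1 = lambda x: 1.0 - x[0] - x[1]

print(sumt_penalty(f, gs=[g1], hs=[], x0=[0.0, 0.0]))   # approaches (0.5, 0.5)
```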

81 Two Classes of Sequential Methods. Two major classes exist among sequential methods: (1) The first class uses a sequence of infeasible points, and feasibility is obtained only at the optimum. These are referred to as penalty function or exterior-point penalty function methods. (2) The second class is characterized by the property of preserving feasibility at all times. These are referred to as barrier function methods or interior-point penalty function methods. General barrier function transformation: $T(x) = f(x) + r_k B(x)$, where B is a barrier function and $r_k$ is the penalty parameter, which is supposed to go to zero as k approaches infinity. Typical barrier functions are the inverse or the logarithmic, that is, $B(x) = -\sum_{i=1}^{m} 1/g_i(x)$ or $B(x) = -\sum_{i=1}^{m} \ln[-g_i(x)]$ (for constraints written as $g_i(x) \le 0$). 81
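
And a matching sketch of the interior-point (barrier) variant, where $r_k$ shrinks toward zero and the iterates stay strictly feasible; the same assumed problem is reused, with the constraint written as $g(x) \le 0$:

```python
import numpy as np
from scipy.optimize import minimize

def barrier_method(f, gs, x0, r0=1.0, factor=0.1, outer_iters=8):
    """Interior-point SUMT: minimize T(x) = f(x) + r_k * B(x) with r_k -> 0,
    using the log barrier B(x) = -sum ln(-g_i(x)) for constraints g_i(x) <= 0."""
    x, r = np.asarray(x0, dtype=float), r0
    for _ in range(outer_iters):
        def T(x):
            vals = [g(x) for g in gs]
            if any(v >= 0 for v in vals):        # outside the interior: reject
                return np.inf
            return f(x) - r * sum(np.log(-v) for v in vals)
        x = minimize(T, x, method="Nelder-Mead").x
        r *= factor                              # relax the barrier
    return x

# Same assumed problem: min x1^2 + x2^2  s.t.  g(x) = 1 - x1 - x2 <= 0
f  = lambda x: x[0]**2 + x[1]**2
g1 = lambda x: 1.0 - x[0] - x[1]

print(barrier_method(f, gs=[g1], x0=[1.0, 1.0]))   # stays feasible, -> (0.5, 0.5)
```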

82 SUMT Algo. 82

83 SUMT Summary.. 83

84 What to Choose? Some prefer barrier methods because even if they do not converge, you will still have a feasible solution. Others prefer penalty function methods because you are less likely to be stuck in a feasible pocket with a local minimum, and penalty methods are more robust because in practice you may often have an infeasible starting point. However, penalty functions typically require more function evaluations. The choice becomes simple if you have equality constraints. Why? 84

85 SUMT Closing Remarks. Typically, you will encounter sequential approaches. Various penalty functions P exist in the literature. Various approaches to selecting the penalty parameter sequence exist; the simplest is to keep it constant during all iterations. Always ensure that the penalty does not dominate the objective function during the initial iterations of an exterior-point method. 85

86 Penalty, Barrier Methods These methods convert the constrained optimization problem into an unconstrained one. The idea is to modify the economic model by adding the constraints in such a manner to have the optimum be located and the constraints be satisfied. There are several forms for the function of the constraints that can be used. These create a penalty to the economic model if the constraints are not satisfied or form a barrier to force the constraints to be satisfied, as the unconstrained search method moves from the starting point to the optimum. This approach is related to the method of Lagrange multipliers which is a procedure that modifies the economic model with the constraint equations to have an unconstrained problem. Also, the Lagrangian function can be used with an unconstrained search technique to locate the optimum and satisfy the constraints. In addition, the augmented Lagrangian function combines a penalty function with the Lagrangian function to alleviate computational difficulties associated with boundaries formed by equality constraints when the Lagrangian function is used alone. 86

87 SUMT Example.. 87

88 Iterations in SUMT Example.. Homework b 88

89 See the book website for more worked examples.. 89

90 Generalized Reduced Gradient GRG 90

91 Nonconvex Programming. Convert into subproblems; find local optima. Evolutionary Computing (e.g., genetic algorithms): run EC, and then use gradient descent. 91

92 Outline. Lecture 10: Multivariable unconstrained optimization (Linear Regression, Logistic Regression). KKT conditions for Constrained Optimization: Lagrangian, KKT conditions. Quadratic Programming: Modified Simplex Algorithm. Convex Programming: Frank-Wolfe Algorithm; Penalty or barrier functions, e.g., SUMT. Nonconvex Programming. Course review 92

93 Course Review 93

94 End of Lecture 94
