Penalty, Barrier and Augmented Lagrangian Methods
Jesús Omar Ocegueda González

Abstract — The infeasible-interior-point methods shown in previous homeworks are well behaved when the number of constraints is small and the dimension of the domain of the energy function is also small. This is easy to see, since each iteration of such methods requires solving a linear system of equations whose size depends precisely on the number of constraints and the dimension of the search space. In addition, the energy functions we could optimize with the previous approaches were restricted to be linear with linear constraints. In this homework I describe three new methods that deal with these inconveniences.

I. INTRODUCTION

The main idea of the kind of methods described here is to construct a sequence of unconstrained optimization problems whose approximate solutions converge to a solution of the original constrained problem. Each problem in the sequence penalizes every non-feasible point; the penalty term starts small and is sequentially increased over time. At each iteration $k$ we find an approximation $x_k$ to the solution of the modified unconstrained problem within a given tolerance $\tau_k$. We expect the sequence $\{x_k\}$ to converge to a solution of the original problem, i.e. $\lim_{k\to\infty} x_k = x^*$.

II. THE PROBLEM

To test the performance of the three methods described here, I will use the energy function

$$U(f) = \sum_{\langle r,s\rangle \in C_2} (f_r - f_s)^2 \quad \text{subject to } |f_r - g_r| \le \delta.$$

These constraints are equivalent to

$$f_r - g_r + \delta \ge 0, \qquad g_r - f_r + \delta \ge 0.$$

In this context $r, s \in S$, where $S$ is the set of sites on which the functions $f$ and $g$ are defined, $C_2$ is the set of cliques of order 2, and $\delta$ is a fixed positive constant. Intuitively, we are looking for the smoothest $f$ for which the constraints hold. The expected result is an edge-preserving regularization of the observed image $g$.

III. THE QUADRATIC PENALTY METHOD

A. Equality-constrained problem

We first examine the case in which we have only equality constraints:

$$\min_x f(x) \quad \text{subject to } c_i(x) = 0, \; i \in E.$$

The quadratic penalty method constructs the following unconstrained function:

$$Q(x, \mu_k) = f(x) + \frac{1}{2\mu_k} \sum_{i \in E} c_i(x)^2,$$

where $\mu_k > 0$ is the penalty parameter. As $\mu_k \to 0$, the infeasibilities are increasingly penalized, forcing the solution to be almost feasible.

B. The general constrained optimization problem

In the general case, we would like to penalize a point $x$ whenever $c_i(x) < 0$ but not when $c_i(x) \ge 0$. To achieve this we define the operator $[\cdot]^-$ as

$$[x]^- = \max\{-x, 0\}.$$

Fig. 1. (a) Effect of the $[\cdot]^-$ operator. (b) Quadratic penalty function for inequality constraints for different values of $\mu$.

Using this operator we define the function (see Fig. 1)

$$Q(x, \mu) = f(x) + \frac{1}{2\mu} \left( \sum_{i \in E} c_i(x)^2 + \sum_{i \in I} \left( [c_i(x)]^- \right)^2 \right).$$
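As a concrete illustration (not part of the original report), here is a minimal sketch of the general penalty function $Q(x, \mu)$, assuming the objective and constraints are supplied as callables returning numpy values; the names and signatures are mine.

```python
import numpy as np

def neg_part(x):
    # The [.]^- operator: max{-x, 0}, applied elementwise.
    return np.maximum(-x, 0.0)

def quadratic_penalty(f, eq_cons, ineq_cons, mu):
    """Build Q(x, mu) = f(x) + (1/(2 mu)) (sum_E c_i(x)^2 + sum_I ([c_i(x)]^-)^2).

    f: objective callable; eq_cons, ineq_cons: lists of callables returning
    arrays of constraint values. A generic sketch, not the author's code."""
    def Q(x):
        pen = sum(np.sum(c(x) ** 2) for c in eq_cons)
        pen += sum(np.sum(neg_part(c(x)) ** 2) for c in ineq_cons)
        return f(x) + pen / (2.0 * mu)
    return Q
```

Minimizing $Q$ for a decreasing sequence $\mu_k$ with any unconstrained solver recovers the scheme described in the introduction.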
C. Implementation

For the problem of Section II, which has only inequality constraints, the unconstrained problem is

$$\min_f Q(f, \mu) = \sum_{\langle r,s\rangle \in C_2} (f_r - f_s)^2 + \frac{1}{2\mu} \sum_{r \in S} \left( \left([f_r - g_r + \delta]^-\right)^2 + \left([g_r - f_r + \delta]^-\right)^2 \right).$$

Taking the first derivative of $Q$ with respect to $f_r$ we have

$$\frac{\partial Q}{\partial f_r}(f, \mu) = 2 \sum_{s \in N_r} (f_r - f_s) - \frac{1}{\mu}[f_r - g_r + \delta]^- I(g_r - f_r - \delta) + \frac{1}{\mu}[g_r - f_r + \delta]^- I(f_r - g_r - \delta),$$

where $N_r$ is the first-order neighborhood of the site $r$ and $I$ is the step function given by

$$I(x) = \begin{cases} 0, & x \le 0 \\ 1, & x > 0. \end{cases}$$

Since

$$\frac{d}{dx}\left([x]^-\right)^2 = 2[x]^- \left(-I(-x)\right) = -2[x]^- I(-x) = 2x\, I(-x),$$

we have

$$\frac{\partial Q}{\partial f_r}(f, \mu) = 2 \sum_{s \in N_r} (f_r - f_s) + \frac{1}{\mu}(f_r - g_r + \delta)\, I(g_r - f_r - \delta) - \frac{1}{\mu}(g_r - f_r + \delta)\, I(f_r - g_r - \delta).$$

Setting $\frac{\partial Q}{\partial f_r}(f, \mu) = 0$ and writing $I_1 = I(g_r - f_r - \delta)$ and $I_2 = I(f_r - g_r - \delta)$, we obtain

$$f_r \left( 2|N_r| + \frac{I_1 + I_2}{\mu} \right) = 2 \sum_{s \in N_r} f_s - \frac{1}{\mu}(\delta - g_r)\, I_1 + \frac{1}{\mu}(\delta + g_r)\, I_2.$$

Since the values of $I_1$ and $I_2$ can be evaluated at the current estimate of $f_r$ and held fixed during the update, we can use the Gauss-Seidel iterative scheme given by

$$f_r \leftarrow \frac{2 \sum_{s \in N_r} f_s - \frac{1}{\mu}(\delta - g_r)\, I_1 + \frac{1}{\mu}(\delta + g_r)\, I_2}{2|N_r| + \frac{I_1 + I_2}{\mu}}.$$

D. Experimental results

For the experiments below I show the original image, the solution found, the estimated Lagrange multipliers, the numerical value of the energy function, and the average constraint violation $v$ defined as

$$v = \frac{1}{|S|} \sum_{i \in I} [c_i(x^*)]^-.$$

TABLE I. Numerical results obtained with the quadratic penalty method (using normalized images): iterations, optimum value and mean constraint violation for the images bola and taza at $\delta = 0.2$, $0.1$ and $0.05$.

Fig. 2. (a) Optimal image found. (b) $\lambda$. (c) $\bar\lambda$. Results obtained for $\delta = 0.2$ (first row), $\delta = 0.1$ (second row) and $\delta = 0.05$ (third row).

E. Conclusions

The quadratic penalty method reduces the constrained optimization problem to a sequence of unconstrained problems that can be solved with conventional methods. For quadratic energy functions the Gauss-Seidel approach can be used with excellent results, at least visually; since it is difficult even to know the optimal value of the energy function, we cannot measure how good the obtained result is. Convergence is fast and the results are good, but the constraints are slightly violated. In image processing a slight violation of the constraints is not important, but in applications where it is, the solution must be corrected to make it feasible. Finally, the results on noisy images are not as good as on unperturbed images.

Fig. 3. (a) Noisy image, $\sigma = 0.1$. (b) Result obtained by the quadratic penalty method on the noisy image.
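To close this section, here is a minimal sketch of the Gauss-Seidel sweep of Section III-C, assuming $f$ and $g$ are 2-D numpy arrays and that the order-2 cliques form the usual 4-neighborhood; this is an illustrative reconstruction, not the author's code.

```python
import numpy as np

def gauss_seidel_sweep(f, g, delta, mu):
    """One in-place Gauss-Seidel sweep for min Q(f, mu) from Section III-C."""
    H, W = f.shape
    for i in range(H):
        for j in range(W):
            # Sum over the first-order neighborhood N_r and its size |N_r|.
            nbrs = [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
            nbrs = [(a, b) for a, b in nbrs if 0 <= a < H and 0 <= b < W]
            s = sum(f[a, b] for a, b in nbrs)
            n = len(nbrs)
            # Indicators evaluated at the current estimate of f_r.
            I1 = 1.0 if g[i, j] - f[i, j] - delta > 0 else 0.0  # I(g_r - f_r - delta)
            I2 = 1.0 if f[i, j] - g[i, j] - delta > 0 else 0.0  # I(f_r - g_r - delta)
            num = 2.0 * s + ((g[i, j] - delta) * I1 + (g[i, j] + delta) * I2) / mu
            den = 2.0 * n + (I1 + I2) / mu
            f[i, j] = num / den
    return f
```

Repeated sweeps for a decreasing sequence of $\mu$ values implement the quadratic penalty method for this problem.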
IV. LOGARITHMIC BARRIER METHOD

As we saw, the quadratic penalty method has the drawback of slightly violating the constraints, which in image processing is not too important. Barrier methods are used in cases where it is very important for the solution to be feasible. The general formulation of these methods begins by defining a barrier function, a function $T$ defined only on $F^0$ with the following properties: (i) $T(x) \to \infty$ as $x$ approaches the boundary of $F^0$; and (ii) $T$ is smooth on $F^0$, where $F^0$ is the strictly feasible set

$$F^0 = \{x \in \mathbb{R}^n : c_i(x) > 0 \;\; \forall i \in I\}.$$

The most important barrier function is the logarithmic barrier function, defined as

$$T(x) = -\sum_{i \in I} \log(c_i(x)).$$

The unconstrained optimization problem is, then, to minimize

$$P(x, \mu) = f(x) - \mu \sum_{i \in I} \log(c_i(x)),$$

where $\mu$ is the barrier parameter, which is in fact written $\mu_k$ with $\lim_{k\to\infty} \mu_k = 0$. As we can see, it is necessary to have a feasible starting point and, clearly, $F^0$ must be non-empty. The presence of the log function makes this function harder to minimize than the quadratic penalty function.

A. Implementation

Once again we solve the problem described in Section II, now using the log-barrier method. We can use any of the conventional strategies for unconstrained optimization, taking care to avoid leaving the strictly feasible set. For this homework I implemented a simple gradient-descent iterative method. In this case we have

$$\nabla P(x, \mu) = \nabla f(x) - \mu \sum_{i \in I} \frac{\nabla c_i(x)}{c_i(x)},$$

which leads to the iterative scheme

$$x_{t+1} = x_t - h\, \nabla P(x_t, \mu).$$

Our problem becomes

$$\min_f P(f, \mu_k) = \sum_{\langle r,s\rangle \in C_2} (f_r - f_s)^2 - \mu_k \sum_{r \in S} \left[ \log(f_r - g_r + \delta) + \log(g_r - f_r + \delta) \right],$$

with

$$\frac{\partial P}{\partial f_r}(f, \mu_k) = 2 \sum_{s \in N_r} (f_r - f_s) - \mu_k \left[ \frac{1}{f_r - g_r + \delta} - \frac{1}{g_r - f_r + \delta} \right].$$

The modification we have to make to avoid leaving the strictly feasible set is the following. Let $\varphi_r^t = -h\, \frac{\partial P}{\partial f_r}(f^t, \mu_t)$. We would leave the strictly feasible set if either

$$f_r^{t+1} - g_r + \delta \le 0 \;\Leftrightarrow\; f_r^t + \varphi_r^t - g_r + \delta \le 0 \;\Leftrightarrow\; \varphi_r^t \le g_r - f_r^t - \delta$$

or

$$g_r - f_r^{t+1} + \delta \le 0 \;\Leftrightarrow\; g_r - f_r^t - \varphi_r^t + \delta \le 0 \;\Leftrightarrow\; \varphi_r^t \ge g_r - f_r^t + \delta.$$

Let $\Omega \subseteq S$ be the subset of sites for which one of the conditions above holds. We choose $\alpha = 1$ if $\Omega$ is empty; if $\Omega$ is non-empty,

$$\alpha = \min_{r \in \Omega} \left\{ \frac{g_r - f_r - \delta}{\varphi_r^t} \right\}$$

if the first condition holds, and

$$\alpha = \min_{r \in \Omega} \left\{ \frac{g_r - f_r + \delta}{\varphi_r^t} \right\}$$

if the second condition holds. To ensure the next point is strictly feasible, we set

$$f^{t+1} = f^t + \alpha (1 - \epsilon)\, \varphi^t.$$
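This safeguard translates almost directly into code. Below is a sketch under the same assumptions as before ($f$, $g$ and the tentative step $\varphi = -h\,\nabla P$ are numpy arrays); the vectorized form and the default $\epsilon$ are my choices, not the author's.

```python
import numpy as np

def safeguarded_step(f, g, delta, phi, eps=1e-2):
    """Scale the step phi so that f + alpha*(1-eps)*phi stays strictly
    inside |f_r - g_r| < delta (Section IV-A)."""
    alpha = 1.0
    # Sites where the full step would cross the lower bound f_r > g_r - delta.
    low = phi <= (g - f - delta)
    if np.any(low):
        alpha = min(alpha, np.min((g[low] - f[low] - delta) / phi[low]))
    # Sites where the full step would cross the upper bound f_r < g_r + delta.
    high = phi >= (g - f + delta)
    if np.any(high):
        alpha = min(alpha, np.min((g[high] - f[high] + delta) / phi[high]))
    return f + alpha * (1.0 - eps) * phi
```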
B. Experimental results

This method has serious difficulties converging. The minimum value of the energy function is greater than that obtained with the quadratic penalty method, and visually the regularization is notably worse than with the previous method. I show the results for the same cases, except the noisy images; given these results, we cannot expect good news there. The Lagrange multipliers are difficult to estimate because the constraints quickly take values close to zero.

Fig. 4. (a) $\delta = 0.2$. (b) $\delta = 0.1$. (c) $\delta = 0.05$. Results using the log-barrier method.

TABLE II. Numerical results obtained with the log-barrier method (using normalized images): iterations, optimum value and mean constraint violation for the images bola and taza at $\delta = 0.2$, $0.1$ and $0.05$.

C. Conclusions

Given the experimental results, it is clear that the gradient-descent method used here is not a good choice: the optimal values found are very poor, both numerically and visually. The method rapidly becomes unstable, and we need to monitor the numerical values of the variables to prevent them from going to zero or infinity. The constraint violation is clearly smaller than that obtained with the quadratic penalty method; in fact the remaining violation is due to numerical error, because the way in which we avoid leaving the strictly feasible set is theoretically correct.

V. AUGMENTED LAGRANGIAN METHOD

When we use the quadratic penalty method, the equality constraints are not satisfied exactly; instead they take the values

$$c_i(x_k) = \mu_k \lambda_i, \quad i \in E,$$

so that as $\mu_k \to 0$, $c_i(x) \to 0$ and the constraints become satisfied in the limit. The augmented Lagrangian method avoids these infeasibilities by estimating the Lagrange multipliers. The augmented Lagrangian function is defined as

$$L_A(x, \lambda, \mu) = f(x) - \sum_{i \in E} \lambda_i c_i(x) + \frac{1}{2\mu} \sum_{i \in E} c_i(x)^2;$$

this is the Lagrangian with a quadratic penalty term on the constraints. Taking the first derivative of this function, we have

$$\nabla L_A(x, \lambda, \mu) = \nabla f(x) - \sum_{i \in E} \lambda_i \nabla c_i(x) + \frac{1}{\mu} \sum_{i \in E} c_i(x) \nabla c_i(x) = \nabla f(x) - \sum_{i \in E} \left( \lambda_i - \frac{c_i(x)}{\mu} \right) \nabla c_i(x).$$

In an iterative scheme, we can see from the previous expression that

$$\lambda_i^* \approx \lambda_i^k - \frac{c_i(x_k)}{\mu_k}$$

when $x_k \approx x^*$. This property motivates the method of multipliers for equality constraints, which consists of the same scheme as before: we generate a sequence of partial solutions $\{x_k\}$ by minimizing the augmented Lagrangian function with $\lambda^k$ fixed, and at each iteration we set

$$\lambda_i^{k+1} = \lambda_i^k - \frac{c_i(x_k)}{\mu_k}.$$

Now we need an extension of this idea for inequality constraints; this is developed in the next subsection, after the sketch below.
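Here is a minimal sketch of the equality-constrained outer loop just described; `minimize_LA` (the inner unconstrained solver) and the schedule for $\mu_k$ are assumptions, since the report does not specify them.

```python
import numpy as np

def method_of_multipliers(x0, c, minimize_LA, mu0=1.0, shrink=0.5, iters=20):
    """Method of multipliers for c_i(x) = 0 (Section V).

    c(x) returns the vector of constraint values; minimize_LA(x, lam, mu)
    approximately minimizes L_A(., lam, mu) starting from x."""
    x, mu = x0, mu0
    lam = np.zeros_like(c(x0))
    for _ in range(iters):
        x = minimize_LA(x, lam, mu)   # inner unconstrained minimization
        lam = lam - c(x) / mu         # multiplier update lambda^{k+1}
        mu *= shrink                  # decrease the penalty parameter
    return x, lam
```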
A. Augmented Lagrangian for inequality constraints

The technique we will use to handle inequality constraints is the same one we used when we studied the simplex method: we introduce slack variables. Assume first that we have only inequality constraints. Given the problem

$$\min_x f(x) \quad \text{subject to } c_i(x) \ge 0, \; i \in I,$$

we reformulate it as

$$\min_x f(x) \quad \text{subject to } c_i(x) - s_i = 0, \;\; s_i \ge 0, \; i \in I.$$

This reformulation may seem unhelpful because of the presence of the new constraints $s_i \ge 0$, $i \in I$. The difference is that the inequality constraints are now linear and, as we will see, easier to handle. Writing the augmented Lagrangian function, our new problem is

$$\min_{x,s} L_A(x, \lambda, \mu) = f(x) - \sum_{i \in I} \lambda_i \left(c_i(x) - s_i\right) + \frac{1}{2\mu} \sum_{i \in I} \left(c_i(x) - s_i\right)^2 \quad \text{subject to } s_i \ge 0.$$

Let us now see which values of $s_i$ minimize this function. We have

$$\frac{\partial L_A}{\partial s_i}(x, \lambda, \mu) = \lambda_i - \frac{1}{\mu}\left(c_i(x) - s_i\right),$$

so this function has a critical point at $s_i = c_i(x) - \mu \lambda_i$, and since it is quadratic with respect to $s_i$, the restricted minimizer is

$$s_i = \max\{c_i(x) - \mu \lambda_i, 0\}.$$

Using this expression and substituting its value into the original problem, we see that

$$-\lambda_i^k \left(c_i(x) - s_i\right) + \frac{1}{2\mu}\left(c_i(x) - s_i\right)^2 = \begin{cases} -\lambda_i^k c_i(x) + \frac{1}{2\mu} c_i(x)^2 & \text{if } c_i(x) - \mu \lambda_i^k \le 0 \\ -\frac{\mu}{2} (\lambda_i^k)^2 & \text{otherwise.} \end{cases}$$

To make this expression clearer, we introduce the function $\Psi$ given by

$$\Psi(t, \sigma, \mu) = \begin{cases} -\sigma t + \frac{t^2}{2\mu} & \text{if } t - \mu\sigma \le 0 \\ -\frac{\mu}{2} \sigma^2 & \text{otherwise.} \end{cases}$$

Finally, we obtain the transformed problem

$$\min_x L_A(x, \lambda^k, \mu_k) = f(x) + \sum_{i \in I} \Psi(c_i(x), \lambda_i^k, \mu_k).$$

If we compare the derivative of this function,

$$\nabla_x L_A(x_k, \lambda^k, \mu_k) = \nabla f(x_k) - \sum_{i \in I:\; c_i(x_k) \le \mu_k \lambda_i^k} \left( \lambda_i^k - \frac{c_i(x_k)}{\mu_k} \right) \nabla c_i(x_k) \approx 0,$$

with the first KKT condition for an optimal point,

$$\nabla f(x^*) - \sum_{i \in I} \lambda_i^* \nabla c_i(x^*) = 0, \qquad \lambda_i^* c_i(x^*) = 0,$$

we see that the values of the Lagrange multipliers should be

$$\lambda_i^* \approx \lambda_i^k - \frac{c_i(x_k)}{\mu_k}.$$

Keeping these values nonnegative (we know the Lagrange multipliers must be nonnegative), we can construct the sequence of partial solutions $\{x_k\}$ as before, with the extra step of updating the Lagrange multipliers using the formula

$$\lambda_i^{k+1} = \max\left\{ \lambda_i^k - \frac{c_i(x_k)}{\mu_k},\; 0 \right\}.$$

TABLE III. Numerical results obtained with the augmented Lagrangian method (using normalized images): iterations, optimum value and mean constraint violation for the images bola and taza at $\delta = 0.2$, $0.1$ and $0.05$.

B. Implementation

For the problem stated in Section II, the augmented Lagrangian for inequality constraints has the form

$$L_A(f, \lambda, \bar\lambda, \mu) = \sum_{\langle r,s\rangle \in C_2} (f_r - f_s)^2 + \sum_{r \in S} \left( \Psi(c_r(f), \lambda_r, \mu) + \Psi(\bar c_r(f), \bar\lambda_r, \mu) \right),$$

where

$$c_r(f) = g_r - f_r + \delta, \qquad \bar c_r(f) = f_r - g_r + \delta.$$

Taking the first derivative with respect to $f_r$, we have

$$\frac{\partial L_A}{\partial f_r}(f, \lambda, \bar\lambda, \mu) = 2 \sum_{s \in N_r} (f_r - f_s) + \left[ \lambda_r - \frac{g_r - f_r + \delta}{\mu} \right] I_1 - \left[ \bar\lambda_r - \frac{f_r - g_r + \delta}{\mu} \right] I_2,$$

where $I$ is the step function defined above and

$$I_1 = I\!\left( \lambda_r - \frac{g_r - f_r + \delta}{\mu} \right), \qquad I_2 = I\!\left( \bar\lambda_r - \frac{f_r - g_r + \delta}{\mu} \right).$$

Again, $I_1$ and $I_2$ can be evaluated at the current estimate of $f_r$ and held fixed, so we can apply the Gauss-Seidel iterative scheme to minimize the augmented Lagrangian:

$$f_r \leftarrow \frac{2 \sum_{s \in N_r} f_s + \left[ \frac{g_r + \delta}{\mu} - \lambda_r \right] I_1 + \left[ \bar\lambda_r - \frac{\delta - g_r}{\mu} \right] I_2}{2|N_r| + \frac{I_1 + I_2}{\mu}}.$$
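The function $\Psi$ and the projected multiplier update translate directly into code. Here is a vectorized sketch over the constraint values (the names are mine, not the author's):

```python
import numpy as np

def psi(t, sigma, mu):
    """Psi(t, sigma, mu) from Section V-A, applied elementwise."""
    quad = -sigma * t + t ** 2 / (2.0 * mu)  # branch t - mu*sigma <= 0
    flat = -(mu / 2.0) * sigma ** 2          # branch t - mu*sigma > 0
    return np.where(t - mu * sigma <= 0.0, quad, flat)

def update_multipliers(lam, c_vals, mu):
    """lambda^{k+1} = max{lambda^k - c_i(x_k)/mu_k, 0} (inequality case)."""
    return np.maximum(lam - c_vals / mu, 0.0)
```

The inner minimization for the image problem uses the Gauss-Seidel update above, with $c_r$ and $\bar c_r$ each contributing a $\Psi$ term.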
6 a)noisy image, σ = 0.1. b)result with δ = 0.2. Fig. 6. Results obtained by the Augmented Lagrangian method on noisy images. results for the test on noisy images are similar than the obtained eith the quadratic penalty method. a)result. b)λ. c) λ. d)λ + λ. Fig. 5. Results obtained for δ = 0.2 rows 1 & 4), δ = 0.1 rows 2 & 5) and δ = 0.05 rows 3 & 6). using the Augmented Lagrangian show clearly this property. Clearly, since an edge is defined by two sites say < r, s >, if the lagrange multiplier corresponding to the inequality c r f) is activated then the multiplier corresponding to the inequality c s f) should be activated on s. This property can also be clearly seen on the image formed by the sum of the Lagrange multipliers the edges are wider than any of the other two images). Another important characteristic that can be seen in table III is that the mean constraint violation is considerably smaller than the obtained with the quadratic penalty method. D. Conclusion Given the results, both numerical and visual), and the characteristics mentioned in the previous section, there is no much to say... This method is considerably better than the other two methods developed in this homework. In fact this method improves all the defitiencies presented by its competitors. The