ECS550NFB Introduction to Numerical Methods using Matlab Day 2
1 ECS550NFB Introduction to Numerical Methods using Matlab, Day 2. Lukas Laffers, lukas.laffers@umb.sk, Department of Mathematics, University of Matej Bel. June 9, 2015.
2 Today. Root-finding: find x that solves $f(x) = 0$. Optimization: find x that optimizes (minimizes/maximizes) $f(x)$; constrained and unconstrained optimization.
3 Root-finding. Given a univariate function on an interval, we want to find x that solves $f(x) = 0$. Bisection (next slide): simple, slow, robust. In MATLAB: fzero; fsolve requires the Optimization Toolbox.
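A minimal usage sketch of fzero; the function and bracketing interval are illustrative, borrowed from the example on a later slide:
f = @(x) x - 2*sin(x);      % illustrative function with a sign change on [1, 3]
x0 = fzero(f, [1, 3])       % fzero accepts a sign-changing bracket or a start point
f(x0)                       % should be (numerically) zero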
4 Bisection Image source: wiki
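A bare-bones bisection sketch, assuming a sign change f(a) < 0 < f(b) on the bracket (function and interval are illustrative):
f = @(x) x - 2*sin(x);
a = 1; b = 3; tol = 1e-10;       % f(a) < 0 < f(b)
while b - a > tol
    c = (a + b)/2;               % midpoint
    if sign(f(c)) == sign(f(a))
        a = c;                   % root lies in [c, b]
    else
        b = c;                   % root lies in [a, c]
    end
end
root = (a + b)/2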
5 Root-finding: Newton Method. Based on a Taylor approximation: $f(x+h) \approx f(x) + h f'(x)$, so $h \approx -f(x)/f'(x)$ and $x_{k+1} = x_k - f(x_k)/f'(x_k)$. The key is the linear approximation: the NR method is only as good as the linear approximation. Multiple dimensions: $f(x+h) \approx f(x) + \nabla f(x)\,h$ (with $\nabla f$ the Jacobian), so $h \approx -(\nabla f(x))^{-1} f(x)$ and $x_{k+1} = x_k - (\nabla f(x_k))^{-1} f(x_k)$. A one-dimensional sketch follows the figure.
6 Newton Method Image source: wiki
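A minimal Newton iteration sketch; the derivative is supplied by hand and the starting point is illustrative:
f  = @(x) x - 2*sin(x);
df = @(x) 1 - 2*cos(x);           % derivative, supplied analytically
x = 1.5;                          % illustrative starting point
for k = 1:20
    step = f(x)/df(x);
    x = x - step;                 % x_{k+1} = x_k - f(x_k)/f'(x_k)
    if abs(step) < 1e-12, break; end
end
x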
7 Newton Method. Experiment with different formulations: $x - 2\sin x = 0$ vs. $2/x - 1/\sin x = 0$ (the same roots); this may help us get rid of a flat part. If $x_k$ fails to settle down, use a different starting point. Verification: $f(x^* - \epsilon) < 0 < f(x^* + \epsilon)$.
8 Secant Method. In Newton's method we had to calculate the derivative; this may be computationally expensive. The secant method approximates the derivative: solving $\frac{0 - f(x_1)}{x_2 - x_1} = \frac{f(x_1) - f(x_0)}{x_1 - x_0}$ for $x_2$ gives the iteration $x_{k+1} = x_k - f(x_k)\,\frac{x_k - x_{k-1}}{f(x_k) - f(x_{k-1})}$. A sketch follows the figure.
9 Secant Method Image source: mathworld.wolfram.com
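The same root via a secant sketch; no derivative is needed, only two starting points (illustrative):
f = @(x) x - 2*sin(x);
x0 = 1; x1 = 3;                   % two illustrative starting points
for k = 1:30
    x2 = x1 - f(x1)*(x1 - x0)/(f(x1) - f(x0));   % secant step
    x0 = x1; x1 = x2;
    if abs(x1 - x0) < 1e-12, break; end
end
x1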
10 Optimization - problem classification. Local vs Global. Local: most numerical methods concern local optimization; the nature of the problem may guarantee uniqueness. Global: usually stochastic methods; there is a chance that we jump off local extrema. Constrained vs Unconstrained.
11 Optimization. Understand the problem! There exists no method that is superior in all situations. How many variables do we optimize over? What is the shape of the objective function (e.g. concave/convex)? Is the constraint set convex? How costly is it to evaluate the objective function? How costly is it to evaluate the gradient of the objective function? There is a trade-off between speed and generality.
12 What is available. Without a toolbox: fminbnd - minimum of a single-variable function on a fixed interval; fminsearch - unconstrained derivative-free minimization; fzero - find a root of a nonlinear function. Optimization Toolbox: constrained minimization, linear and quadratic programming, mixed-integer programming. Global Optimization Toolbox: global search, genetic algorithms, pattern search, simulated annealing (see slide 48).
13 Formulating the Problem in MATLAB. Structure problem: $\min_x f(x)$ s.t. $A x \le b$, $A_{eq} x = b_{eq}$, $lb \le x \le ub$. f - function to be minimized; A and b define the inequality restrictions; Aeq and beq define the equality restrictions; lb and ub define bounds on the variables; x0 - starting point; solver - (e.g. linprog, fminunc); options. In MATLAB run: solver(problem).
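A minimal sketch of the structure interface, using fminsearch (available without any toolbox) on an illustrative objective:
problem.objective = @(x) (x - 2)^2;                % illustrative objective
problem.x0        = 0;
problem.solver    = 'fminsearch';
problem.options   = optimset('Display', 'iter');
[x, fval, exitflag, output] = fminsearch(problem)  % i.e. run solver(problem)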
14 Optimization in MATLAB. Structure output: exitflag - reason why the solver terminated (1, 0, -1, -2, -3, ...); lambda - Lagrange multipliers at the solution (lower, upper, ineqlin, eqlin); output - information about the optimization (iterations, constrviolation, firstorderopt, ...). Structure options: Algorithm - algorithm to be used; MaxIter - maximum number of iterations; TolFun - termination tolerance on the function value.
15 Unconstrained optimization.
- Golden Section Search: univariate function on an interval.
- Newton Method: multivariate function; gradient supplied; Hessian supplied.
- Quasi-Newton Methods: multivariate function; gradient supplied; Hessian approximated.
- Nelder-Mead Method: multivariate function; only needs function values.
- Trust-region Methods: multivariate function; useful for large-scale problems.
16 What is a good method. Fast - in terms of computing speed, usually measured in the number of function evaluations, or in general the rate of convergence $\|x_{k+1} - x^*\| / \|x_k - x^*\|$. Reliable - guarantees success (assumptions are needed for this!). Robust - behaves well under different scenarios. Efficient - is the fastest (for a certain class of problems, under certain assumptions).
17 Convex and non-convex sets and functions Convex sets and functions. Image source: Brandimarte
18 Unconstrained optimization - conditions for smooth functions. First order necessary conditions: if $x^*$ is a local min of f and f is continuously differentiable around $x^*$, then $\nabla f(x^*) = 0$. Second order necessary conditions: if $x^*$ is a local min of f and $\nabla^2 f$ is continuous around $x^*$, then $\nabla f(x^*) = 0$ and $\nabla^2 f(x^*) \succeq 0$. Second order sufficient conditions: if $\nabla^2 f$ is continuous around $x^*$, $\nabla f(x^*) = 0$ and $\nabla^2 f(x^*) \succ 0$, then $x^*$ is a strict local min of f. For a convex/concave differentiable function, a stationary point is a global minimizer/maximizer.
19 Unconstrained optimization - strategies. Line search: $\min_{\alpha>0} f(x + \alpha p)$; set a direction p and make a step of size $\alpha$. Trust-region: approximate f with some model function m in a region around x.
20 Unconstrained optimization - strategies Image source: Nocedal and Wright
21 Step size. How to choose the step size $\alpha$?
- Fixed step size - reduce the step size if no optimum is found.
- Wolfe conditions ($c_1 = 10^{-4}$, $c_2 \in (c_1, 1)$). Sufficient decrease: $f(x_k + \alpha_k h_k) \le f(x_k) + c_1 \alpha_k \nabla f(x_k)^T h_k$. Curvature: $\nabla f(x_k + \alpha_k h_k)^T h_k \ge c_2 \nabla f(x_k)^T h_k$ (the step is not too small).
- Backtracking line search - adaptively reduce the step size (from $\alpha$ to $\beta\alpha$, for some $\beta \in (0.1, 0.8)$) until the step is good enough according to some criterion (e.g. sufficient decrease); a sketch appears after the next slide.
- Exact line search - $\arg\min_{\alpha \ge 0} f(x - \alpha \nabla f(x))$ - usually not efficient.
22 Wolfe conditions Image source: Nocedal and Wright
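A backtracking sketch that enforces only the sufficient-decrease (Armijo) condition; the objective, direction, and constants other than c1 are illustrative:
f     = @(x) x(1)^2 + 10*x(2)^2;                 % illustrative objective
gradf = @(x) [2*x(1); 20*x(2)];
x = [1; 1]; g = gradf(x); h = -g;                % steepest-descent direction
alpha = 1; beta = 0.5; c1 = 1e-4;
while f(x + alpha*h) > f(x) + c1*alpha*(g'*h)    % sufficient decrease violated?
    alpha = beta*alpha;                          % shrink the step to beta*alpha
end
x_new = x + alpha*h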
23 Golden Section Search. Finds the extremum of a unimodal function on an interval. We choose the points at which we evaluate f in a clever way: we make sure that the interval that contains the extremum shrinks at the best possible constant rate, $|I_{n+1}| \approx 0.618\,|I_n|$. A sketch follows the figure.
24 Golden Section Search Image source: wiki
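A golden-section sketch that reuses one function value per iteration (this reuse is the "clever" part; fminbnd combines the same idea with parabolic interpolation). Function and interval are illustrative:
f = @(x) (x - 2)^2;               % illustrative unimodal function
a = 0; b = 5; tol = 1e-8;
r = (sqrt(5) - 1)/2;              % = 0.618..., the golden ratio conjugate
x1 = b - r*(b - a); x2 = a + r*(b - a);
f1 = f(x1); f2 = f(x2);
while b - a > tol
    if f1 < f2                    % minimum lies in [a, x2]
        b = x2; x2 = x1; f2 = f1;
        x1 = b - r*(b - a); f1 = f(x1);
    else                          % minimum lies in [x1, b]
        a = x1; x1 = x2; f1 = f2;
        x2 = a + r*(b - a); f2 = f(x2);
    end
end
xmin = (a + b)/2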
25 Steepest descent. Let us choose the natural direction going down the hill, $-\nabla f(x)$: $x_{k+1} = x_k - \alpha \nabla f(x_k)$. When do we stop the algorithm? When $\|\nabla f(x)\| < \epsilon$. A fixed-step sketch appears after the scaling figure.
26 Steepest descent Image source: Nocedal and Wright
27 Steepest descent - scaling Image source: Nocedal and Wright
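A fixed-step steepest-descent sketch on an illustrative (badly scaled) quadratic; the step size alpha is chosen by hand:
f     = @(x) 0.5*(x(1)^2 + 10*x(2)^2);    % illustrative objective
gradf = @(x) [x(1); 10*x(2)];
x = [10; 1]; alpha = 0.1; tol = 1e-6;
while norm(gradf(x)) >= tol               % stop when the gradient is small
    x = x - alpha*gradf(x);               % step along -grad f(x)
end
x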
28 Newton Method. We require information about the gradient and Hessian (this can be very costly). $f(x+h) \approx f(x) + \nabla f(x)^T h + \frac{1}{2} h^T \nabla^2 f(x) h$; minimizing over the direction h, we get $h = -(\nabla^2 f(x))^{-1} \nabla f(x)$. Basically root-finding applied to the first derivative. Computation of $(\nabla^2 f(x))^{-1}$ is costly. Convergence is not guaranteed (the iteration may get stuck in an infinite cycle).
29 Quasi-Newton Methods. We avoid the calculation of the Hessian matrix and simplify the computation of the search direction. Start with $B_0$, some positive definite matrix, and update $B_0 \to B_1 \to \dots$. Given the gradient information, we iteratively update our approximation of the Hessian matrix. Our Hessian approximation at step k, $B_k$:
- differs from $B_{k-1}$ by a low-rank update;
- is symmetric;
- is positive definite;
- is chosen so that the quadratic approximation of f matches the gradient of f at $x_{k+1}$ and $x_k$;
- is not very different from $B_{k-1}$ (in a certain sense - the Frobenius norm), so that $B_k$ does not change wildly;
- makes it easy to find $B_{k+1}^{-1}$ (using the Sherman-Morrison formula), which simplifies the calculation of the optimal step.
30 Quasi-Newton Methods. There are different ways to update the approximation of the Hessian matrix. DFP (Davidon-Fletcher-Powell). BFGS (Broyden-Fletcher-Goldfarb-Shanno), which supersedes DFP: instead of imposing conditions on $B_k$, we impose conditions on $H_k = B_k^{-1}$. L-BFGS, L-DFP - limited-memory versions.
31 Algorithm Overview. We need: $H_0$, $x_0$.
Step 1: Check whether your solution is good enough; if $\|\nabla f(x_k)\| > \epsilon$, continue.
Step 2: Compute the search direction $-H_k \nabla f(x_k)$.
Step 3: Compute the size of the optimal step in this direction (line search - a one-dimensional optimization) and compute $x_{k+1}$.
Step 4: Update the inverse Hessian approximation $H_{k+1}$; set $k = k + 1$.
Step 5: Go to Step 1.
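A sketch of this loop with a BFGS-style update of the inverse Hessian H; the objective, starting point, and the simple backtracking line search are illustrative stand-ins (a full implementation would use a Wolfe line search so that y'*s > 0 is guaranteed):
f     = @(x) (x(1) - 1)^2 + 10*(x(2) - 2)^2;     % illustrative objective
gradf = @(x) [2*(x(1) - 1); 20*(x(2) - 2)];
x = [0; 0]; H = eye(2); g = gradf(x);
for k = 1:100
    if norm(g) < 1e-8, break; end                % Step 1: solution good enough?
    p = -H*g;                                    % Step 2: search direction
    alpha = 1;                                   % Step 3: backtracking line search
    while f(x + alpha*p) > f(x) + 1e-4*alpha*(g'*p)
        alpha = alpha/2;
    end
    s = alpha*p; x = x + s;
    gnew = gradf(x); y = gnew - g;
    rho = 1/(y'*s);                              % Step 4: BFGS update of H
    H = (eye(2) - rho*(s*y'))*H*(eye(2) - rho*(y*s')) + rho*(s*s');
    g = gnew;
end
x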
32 Quasi-Newton Methods. More efficient than Newton methods in situations when evaluating the Hessian is costly ($O(n^2)$ vs $O(n^3)$). We need our function to have a quadratic Taylor approximation near an optimum. Superlinear convergence rate. We can use $H_0 = \frac{(\nabla f(x_{k+1}) - \nabla f(x_k))^T (x_{k+1} - x_k)}{(\nabla f(x_{k+1}) - \nabla f(x_k))^T (\nabla f(x_{k+1}) - \nabla f(x_k))}\, I$. For a quadratic function, n steps of a quasi-Newton method equal one Newton step.
33 Linear Programming. $\min_x c^T x$ s.t. $A x \le b$, $A_{eq} x = b_{eq}$, $lb \le x \le ub$. [x, fval, exit] = linprog(c, A, b, Aeq, beq, lb, ub, x0, options)
34 Linear Programming. Classical examples: travelling salesman problem, vehicle routing problem, cost minimization, manufacturing and transportation. More recent economics applications: tests for rationality of consumption behaviour (Cherchye, De Rock and Vermeulen); identification and shape restrictions in nonparametric instrumental variables estimation (Freyberger and Horowitz).
35 Linear Programming - Example. $\min_x\ x_1 + 2x_2 - 3x_3$ s.t. $-2x_1 + x_2 + 3x_3 \le 1$, $x_1 + 2x_2 - 0.5x_3 \le 2$, $x_1 + x_2 + x_3 = 1$, $0 \le x_1, x_2, x_3 \le 1$.
c = [1 2 -3]; A = [-2 1 3; 1 2 -0.5]; b = [1; 2]; Aeq = [1 1 1]; beq = 1; lb = [0 0 0]; ub = [1 1 1]; options = optimset('linprog');
[x,fval,exit] = linprog(c,A,b,Aeq,beq,lb,ub,[],options)
36 Linear Programming - Algorithms: interior-point, dual-simplex, active-set, simplex.
37 Linear Programming - Simplex algorithm Image source: wiki
38 Integer Programming. $\min_x c^T x$ s.t. x(intcon) are integers, $A x \le b$, $A_{eq} x = b_{eq}$, $lb \le x \le ub$. [x, fval, exit] = intlinprog(c, intcon, A, b, Aeq, beq, lb, ub, x0, options)
39 Integer Programming - Example. $\min_x\ x_1 + 2x_2 - 3x_3 + x_4$ s.t. $x_4$ is an integer, $-2x_1 + x_2 + 3x_3 - 2x_4 \le 1$, $x_1 + 2x_2 - 0.5x_3 - 4x_4 \le 2$, $x_1 + x_2 + x_3 + x_4 = 1$, $0 \le x_1, x_2, x_3 \le 1$.
c = [1 2 -3 1]; A = [-2 1 3 -2; 1 2 -0.5 -4]; b = [1; 2]; Aeq = [1 1 1 1]; beq = 1; lb = [0 0 0 -Inf]; ub = [1 1 1 Inf]; intcon = 4;
[x,fval,exit] = intlinprog(c,intcon,A,b,Aeq,beq,lb,ub)
40 Integer Programming - Branch and Bound. Example from cs2035/courses/ieor4600.s07/bb-lecb.pdf. $\max_x\ -x_1 + 4x_2$ s.t. $x_1, x_2$ are integers, $-10x_1 + 20x_2 \le 22$, $5x_1 + 10x_2 \le 49$, $0 \le x_1, x_2$. Optimal LP-relaxation solution: (3.8, 3), with Z = 8.2.
41 Integer Programming - Branch and Bound. Branch on $x_1 \le 3$ or $x_1 \ge 4$. Branch $x_1 \le 3$: x = (3, 2.6), Z = 7.4. Branch $x_1 \ge 4$: x = (4, 2.9), Z = 7.6.
42 Branch $x_1 \ge 4$: x = (4, 2.9), Z = 7.6. Branch $x_1 \ge 4$ and $x_2 \le 2$: x = (4, 2), Z = 4. Branch $x_1 \ge 4$ and $x_2 \ge 3$: NO SOLUTION.
43 Branch $x_1 \le 3$: x = (3, 2.6), Z = 7.4. Branch $x_1 \le 3$ and $x_2 \le 2$: x = (1.8, 2), Z = 6.2. Branch $x_1 \le 3$ and $x_2 \ge 3$: NO SOLUTION.
44 Branch $x_1 \le 3$ and $x_2 \le 2$: x = (1.8, 2), Z = 6.2. Branch $x_1 \le 3$, $x_2 \le 2$ and $x_1 \le 1$: x = (1, 1.6), Z = 5.4. Branch $x_1 \le 3$, $x_2 \le 2$ and $x_1 \ge 2$: x = (2, 2), Z = 6.
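The tree can be cross-checked with intlinprog; rewriting the max problem as a minimization by flipping the sign of the objective, this sketch should recover the integer optimum x = (2, 2) with Z = 6:
c = [1; -4];                       % min x1 - 4*x2  ==  max -x1 + 4*x2
A = [-10 20; 5 10]; b = [22; 49];
lb = [0; 0]; intcon = [1 2];       % both variables integer
[x, fval] = intlinprog(c, intcon, A, b, [], [], lb, [])
Z = -fval                          % objective value of the original max problem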
45 Quadratic Programming. $\min_x \frac{1}{2} x^T H x + f^T x$ s.t. $A x \le b$, $A_{eq} x = b_{eq}$, $lb \le x \le ub$. [x, fval, exit] = quadprog(H, f, A, b, Aeq, beq, lb, ub, x0, options)
46 Quadratic Programming - Example. $\min_x\ x_1^2 - x_1 x_2 + x_2^2 + 2x_1 + 3x_2$ s.t. $-2x_1 + x_2 \le -1$, $x_1 - 2x_2 \le 3$, $x_1 + 2x_2 = 4$.
H = [2 -1; -1 2]; f = [2; 3]; A = [-2 1; 1 -2]; b = [-1; 3]; Aeq = [1 2]; beq = 4; lb = []; ub = [];
[x,fval,exit] = quadprog(H,f,A,b,Aeq,beq,lb,ub)
47 Global Optimization methods
48 Global Optimization methods. In MATLAB's Global Optimization Toolbox:
- Global Search and MultiStart solvers - generate multiple starting points, filter non-promising points.
- Genetic Algorithm solver - a population of points; we simulate the evolution of the population. Phases - Selection: we select good parents; Crossover: they produce children; Mutation: induce randomness, which can jump off local optima.
- Pattern Search solver - direct search, no derivative needed.
- Simulated Annealing - a probabilistic search algorithm that mimics the physical process of annealing; we slowly reduce the system temperature to minimize the system energy.
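Without the toolbox, a poor man's multistart can be improvised by running a local solver from a grid of starting points (objective and grid are illustrative):
f = @(x) x.^2 + 10*sin(x);         % illustrative function with many local minima
best_f = Inf;
for s = linspace(-10, 10, 21)      % grid of starting points
    [x, fx] = fminsearch(f, s);    % local solver from each start
    if fx < best_f
        best_f = fx; best_x = x;   % keep the best local optimum found
    end
end
best_x, best_f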
49 Choice of Algorithm. Identify the objective function: linear, quadratic, smooth nonlinear, nonsmooth. Identify the types of constraints: none, bound, linear, smooth, discrete.
50 Optimization Cookbook - practical advice. What to do if...? (a) The solver did not succeed. (b) Not sure if the solver succeeded. (c) The solver succeeded.
51 Solver did not succeed. Try to find out what is going on: set the Display option to 'iter'. Does the objective function / max constraint violation / first-order optimality criterion / trust-region radius decrease? Increase MaxIter or MaxFunEvals. Relax tolerances. Change the initial point. Center and scale your problem. If the problem is unbounded, check the formulation of the problem. Start from a simpler problem, iteratively add restrictions, and use the optimal solutions as starting points for the more complex problem.
52 Not sure whether the solver succeeded.
- The first-order optimality condition is not satisfied, or final point = initial point: change the initial point to some nearby points.
- "Local minimum possible": for a non-smooth function this is the best we can possibly get. Set the optimum as a new starting point and re-run the optimization. Try a different algorithm. Play with tolerances.
- Takes too long? Use a sparse solver (uses less memory); use parallel computing.
53 Solver succeeds - Robustness. Local minimum vs global minimum? Use a grid of initial points. Check whether the formulation of the problem in MATLAB corresponds to the problem at hand: try the objective function at a few points, check its sign if maximizing, check the signs of the inequalities. Check with the Global Optimization Toolbox.
54 Lagrange multipliers. Lagrange multipliers tell you how important particular restrictions are at the optimal solution. Use this to your advantage: which restrictions are important, and which cause problems for your solver?
55 Tolerances and Stopping Criteria. TolX, TolFun, TolCon, MaxIter, MaxFunEvals, and many others for different algorithms (e.g. interior-point).
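A minimal sketch of setting these via optimset (the solver and objective are illustrative):
opts = optimset('TolX', 1e-10, 'TolFun', 1e-10, 'MaxIter', 500, 'MaxFunEvals', 2000);
[x, fval] = fminsearch(@(x) (x - 2)^2, 0, opts)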
56 For Tomorrow Install Dynare
57 Literature.
Miranda, M., and P. Fackler. Applied Computational Economics. (2001).
Nocedal, Jorge, and Stephen J. Wright. Numerical Optimization. New York: Springer, 1999.
Dennis, J. E., Jr., and Jorge J. Moré. Quasi-Newton Methods, Motivation and Theory. SIAM Review, Vol. 19, No. 1 (Jan. 1977), pp. 46-89.
Nocedal, J. (1980). Updating Quasi-Newton Matrices with Limited Storage. Mathematics of Computation 35 (151): 773-782.
Branch and Bound method explained on an example: cs2035/courses/ieor4600.s07/bb-lecb.pdf
Branch and Bound method - example with animals: -science-spring-2013/tutorials/mit15 053S13 tut10.pdf