Optimization. Note: this is totally not complete yet... don't use it yet...


Bisection?
Finding a root is akin to finding an optimum, but bisection would not be an effective optimization method: it can detect a sign change, but not a change in the character of the function.
Even if we took the derivative and applied bisection to it, it would not work well.
A trisection method could be developed, though it would not be very efficient.
Instead, a variant of a trisection method called the golden-section search can be used.
It is based on the golden ratio, or divine proportion (see the book The Divine Proportion: A Study in Mathematical Beauty, Dover Books on Mathematics). Fun math that gives some ideas on the nature of the golden ratio... check it out.

Golden Ratio
Used in art (golden ratio, golden rectangle, golden spiral, etc.); said to enhance the visual aspect of a painting or building; also found in nature.
Related to the Fibonacci numbers.
$\frac{l+m}{l} = \frac{l}{m} = \varphi, \qquad \varphi = \frac{1+\sqrt{5}}{2} \approx 1.618$

Golden Rectangle
[Figure: a golden rectangle, with side lengths in the ratio φ : 1.]

Golden-Section Search
Choose two values that bracket the maximum of the function, as in the bisection method: $x_l$ and $x_u$.
Choose efficient intermediate points, determined from the golden ratio:
$d = x_u - x_l, \qquad x_0 = x_l + \frac{d}{\varphi}, \qquad x_1 = x_u - \frac{d}{\varphi}$
Compare $f(x_0)$ to $f(x_1)$ to determine how to narrow the bracket and continue the search.

Golden-Section Search: Criteria
If $f(x_0) > f(x_1)$, the region from $x_l$ to $x_1$ cannot contain the maximum; therefore $x_l$ should be moved to $x_1$.
If $f(x_0) < f(x_1)$, the region from $x_0$ to $x_u$ cannot contain the maximum; therefore $x_u$ should be moved to $x_0$.
If $f(x_0) = f(x_1)$ within tolerance AND $|x_0 - x_1| \approx 0$ within tolerance, the maximum has been found; if not, narrow the bracket (move $x_l$ to $x_1$, or $x_u$ to $x_0$) and repeat.
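A minimal sketch of the golden-section search described above, in Python; the function name, tolerance, and the test function 2 sin(x) - x^2/10 are illustrative choices, not from the original notes:

import math

def golden_section_max(f, xl, xu, tol=1e-8, max_iter=200):
    # Golden-section search for the maximum of a unimodal f on [xl, xu].
    phi = (1 + math.sqrt(5)) / 2                  # the golden ratio, about 1.618
    for _ in range(max_iter):
        d = xu - xl
        x0 = xl + d / phi                         # interior point nearer xu
        x1 = xu - d / phi                         # interior point nearer xl
        if f(x0) > f(x1):
            xl = x1                               # the maximum cannot lie in [xl, x1]
        else:
            xu = x0                               # the maximum cannot lie in [x0, xu]
        if abs(xu - xl) < tol:
            break
    return (xl + xu) / 2

# Example: the maximum of f(x) = 2*sin(x) - x**2/10 on [0, 4] is near x = 1.4276
x_max = golden_section_max(lambda x: 2*math.sin(x) - x**2/10, 0.0, 4.0)

Each pass shrinks the bracket by a factor of 1/φ ≈ 0.618, which is why the method always converges but does so slowly (see the next slide).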

Golden-Section Search: Issues
Bracketing has a number of advantages, but some disadvantages as well.
Pros: guaranteed convergence.
Cons: slow; not good when multiple optima are packed tightly together.
The golden-section search faces the same issues as the bisection method.
Most real optimization problems are complicated and can be time-consuming to solve; therefore slow is NOT good.

Newton's Method
We can use the method developed for root solving, but it needs to change a little: we find the optimum value instead of the root by making a substitution.
Define a new function $g(x) = f'(x)$ and let $x_o$ be the optimal value, so that $f'(x_o) = g(x_o) = 0$ (a root of $g$ is an optimum of $f$).
Newton's iteration for the root of $g(x)$ is $x_{i+1} = x_i - \frac{g(x_i)}{g'(x_i)}$.
Therefore the iteration for the optimum is $x_{i+1} = x_i - \frac{f'(x_i)}{f''(x_i)}$.
Alternatively, the same formula can be derived by setting the derivative of a second-order Taylor expansion of $f$ equal to zero.
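A corresponding sketch of Newton's method for the optimum, $x_{i+1} = x_i - f'(x_i)/f''(x_i)$; it assumes $f'$ and $f''$ can be supplied, and reuses the same illustrative test function:

import math

def newton_optimize(df, d2f, x0, tol=1e-10, max_iter=50):
    # Newton's method applied to f'(x) = 0: x_{i+1} = x_i - f'(x_i) / f''(x_i).
    x = x0
    for _ in range(max_iter):
        step = df(x) / d2f(x)
        x = x - step
        if abs(step) < tol:
            break
    return x

# Test function f(x) = 2*sin(x) - x**2/10, so f'(x) = 2*cos(x) - x/5 and f''(x) = -2*sin(x) - 1/5
x_opt = newton_optimize(lambda x: 2*math.cos(x) - x/5,
                        lambda x: -2*math.sin(x) - 0.2,
                        x0=1.0)                  # converges to about 1.4276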

Brent's Method
Combines the golden-section method with parabolic interpolation.
Parabolic interpolation models the function as a parabola; if the optimum of that parabola stays within the current bracket, the method continues with it, otherwise it switches back to the golden-section search...

Parabolic Interpolation
In the local region near an optimal value, the function looks parabolic.
You need three initial guesses.
The next set of points is chosen in a way similar to the golden-section search: eliminate the section that does not contain the maximum.
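A sketch of one parabolic-interpolation step: the vertex of the parabola through three points gives the next estimate of the optimum. The formula below is the standard vertex formula; the helper name and the sample points are illustrative:

import math

def parabolic_step(f, x1, x2, x3):
    # Vertex of the parabola through (x1, f(x1)), (x2, f(x2)), (x3, f(x3));
    # this is the next estimate of the optimum.
    f1, f2, f3 = f(x1), f(x2), f(x3)
    num = (x2 - x1)**2 * (f2 - f3) - (x2 - x3)**2 * (f2 - f1)
    den = (x2 - x1) * (f2 - f3) - (x2 - x3) * (f2 - f1)
    return x2 - 0.5 * num / den

# Three guesses bracketing the maximum of f(x) = 2*sin(x) - x**2/10
f = lambda x: 2*math.sin(x) - x**2/10
x_new = parabolic_step(f, 0.0, 1.0, 4.0)   # about 1.51, already close to the true optimum near 1.43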

Optimization: Several Variables
Optimizing a system with several variables is rather complicated and really requires a computer.
There are two standard families of methods (plus simulation methods) used for these systems:
Non-gradient methods: no derivatives.
Gradient methods: derivatives (first, and sometimes second).
Simulation methods (beyond this course): simulated annealing, neural nets, genetic algorithms.

Optimization: Non-gradient Methods
Optimize one variable at a time using a method like Newton's.
Unfortunately this can easily get lost.
Powell's method is one particular non-gradient method (from personal experience it was not very effective).

Optimization: Gradient Methods
Use the first derivative for direction: the gradient, $\nabla f = \frac{\partial f}{\partial x}\hat{i} + \frac{\partial f}{\partial y}\hat{j}$.
But to determine whether we have a local min or max we need the second derivatives, i.e. the Hessian (strictly, the determinant of the Hessian matrix):
$H = \frac{\partial^2 f}{\partial x^2}\frac{\partial^2 f}{\partial y^2} - \left(\frac{\partial^2 f}{\partial x\,\partial y}\right)^2$
If $H > 0$ and $\frac{\partial^2 f}{\partial x^2} > 0$, then $f(x, y)$ has a local min.
If $H > 0$ and $\frac{\partial^2 f}{\partial x^2} < 0$, then $f(x, y)$ has a local max.
If $H < 0$, then $f(x, y)$ has a 'saddle' point.
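A small helper that applies this second-derivative test, assuming the second partials at a point where the gradient is zero are already known; the function name and the two example functions are illustrative:

def classify_critical_point(fxx, fyy, fxy):
    # Second-derivative test at a point where the gradient is zero: H = fxx*fyy - fxy**2.
    H = fxx * fyy - fxy**2
    if H > 0:
        return "local min" if fxx > 0 else "local max"
    if H < 0:
        return "saddle point"
    return "inconclusive (H = 0)"

# f(x, y) = x**2 + y**2 at (0, 0): fxx = 2, fyy = 2, fxy = 0  -> local min
print(classify_critical_point(2.0, 2.0, 0.0))
# f(x, y) = x**2 - y**2 at (0, 0): fxx = 2, fyy = -2, fxy = 0 -> saddle point
print(classify_critical_point(2.0, -2.0, 0.0))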

Optimization: Gradient Methods
One gradient method is called gradient descent, or the steepest descent method (another method also goes by this name, so it is a bit confusing).
Other methods include:
Conjugate gradient method: needs a matrix that is symmetric and positive-definite; for nonlinear problems, assume a local quadratic approximation to the function.
Quasi-Newton methods.
Nelder-Mead method: uses a form of the nonlinear simplex; a heuristic method (strictly derivative-free rather than gradient-based).
Biconjugate gradient method.
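Several of the methods listed above are available in SciPy's optimize module; a minimal sketch, assuming SciPy is installed and using the Rosenbrock function purely as a stand-in objective:

import numpy as np
from scipy.optimize import minimize

# The Rosenbrock function as a stand-in nonlinear objective; its minimum is at (1, 1).
rosen = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
x0 = np.array([-1.2, 1.0])

res_nm   = minimize(rosen, x0, method='Nelder-Mead')   # simplex-style, derivative-free
res_cg   = minimize(rosen, x0, method='CG')            # nonlinear conjugate gradient
res_bfgs = minimize(rosen, x0, method='BFGS')          # quasi-Newton
# Each result's res.x is close to [1, 1]; CG and BFGS estimate the gradient by finite differences here.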

Optimization: Gradient Methods with Finite Differences
Sometimes the derivatives cannot be obtained analytically, so we use finite differences.
This is like the secant method, but for partial derivatives: perturb each variable slightly.
Centered-difference approximations:
$\frac{\partial f}{\partial x} \approx \frac{f(x+\delta x,\, y) - f(x-\delta x,\, y)}{2\delta x}$
$\frac{\partial f}{\partial y} \approx \frac{f(x,\, y+\delta y) - f(x,\, y-\delta y)}{2\delta y}$
$\frac{\partial^2 f}{\partial x^2} \approx \frac{f(x+\delta x,\, y) - 2f(x,\, y) + f(x-\delta x,\, y)}{\delta x^2}$
$\frac{\partial^2 f}{\partial x\,\partial y} \approx \frac{f(x+\delta x,\, y+\delta y) - f(x+\delta x,\, y-\delta y) - f(x-\delta x,\, y+\delta y) + f(x-\delta x,\, y-\delta y)}{4\,\delta x\,\delta y}$
where $\delta x$, $\delta y$ are small perturbations.
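A sketch of the centered-difference gradient; the perturbation sizes and the example function x**2 * y are illustrative choices:

def grad_central(f, x, y, dx=1e-6, dy=1e-6):
    # Centered-difference approximations to the first partial derivatives of f(x, y).
    dfdx = (f(x + dx, y) - f(x - dx, y)) / (2 * dx)
    dfdy = (f(x, y + dy) - f(x, y - dy)) / (2 * dy)
    return dfdx, dfdy

# Example: f(x, y) = x**2 * y; the exact gradient at (1, 2) is (4, 1)
print(grad_central(lambda x, y: x**2 * y, 1.0, 2.0))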

Optimization: Gradient Methods, Steepest Ascent
The steepest ascent method is a very common method that moves along the direction given by the gradient.
It is an iterative method: at each step, find the maximum along the current gradient direction (solve for $h$ in the equations below) and use it as the new starting point.
$x = x_0 + \frac{\partial f}{\partial x} h, \qquad y = y_0 + \frac{\partial f}{\partial y} h$
Linearly convergent: SLOW.
Available as a built-in in Matlab/Octave (as well as in a lot of other fourth-generation programming languages).
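A self-contained sketch of the steepest-ascent iteration described above, assuming the gradient is supplied analytically and choosing the step size h with a golden-section search as on the earlier slides; the function names, the step bound h_max, and the quadratic test function are illustrative:

import math

def steepest_ascent(f, grad, x0, y0, n_steps=50, h_max=1.0):
    # Steepest ascent: at each iteration move along the gradient direction,
    # choosing the step size h by a 1-D golden-section search on [0, h_max].
    phi = (1 + math.sqrt(5)) / 2
    x, y = x0, y0
    for _ in range(n_steps):
        gx, gy = grad(x, y)
        line = lambda h: f(x + gx*h, y + gy*h)    # f restricted to the gradient line
        a, b = 0.0, h_max
        for _ in range(60):                       # golden-section search for the best h
            d = (b - a) / phi
            if line(a + d) > line(b - d):
                a = b - d
            else:
                b = a + d
        h = (a + b) / 2
        x, y = x + gx*h, y + gy*h
    return x, y

# Example: f(x, y) = -(x - 1)**2 - (y - 2)**2 has its maximum at (1, 2)
f = lambda x, y: -(x - 1)**2 - (y - 2)**2
grad = lambda x, y: (-2*(x - 1), -2*(y - 2))
print(steepest_ascent(f, grad, 0.0, 0.0))         # approximately (1.0, 2.0)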

Levenberg-Marquardt Method
One of the best methods (personal experience).
Combines the steepest descent method with Newton's method: essentially, steepest-descent behaviour is used far from the optimum to reach a very good starting point for Newton's method.
Other methods do exist (literally hundreds).
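A sketch of the damped-Newton idea behind Levenberg-Marquardt as described here (a blend of steepest descent and Newton's method): a damping term λ is added to the Hessian, grown when a step fails and shrunk when it succeeds. Note that in most libraries the Levenberg-Marquardt name refers specifically to nonlinear least squares; the names and update factors below are illustrative:

import numpy as np

def lm_minimize(f, grad, hess, x0, lam=1e-3, n_iter=100):
    # Damped Newton step: solve (H + lam*I) step = -grad.
    # Large lam behaves like (scaled) steepest descent, small lam like Newton's method.
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        g, H = grad(x), hess(x)
        step = np.linalg.solve(H + lam * np.eye(len(x)), -g)
        if f(x + step) < f(x):
            x, lam = x + step, lam * 0.5          # step helped: trust the Newton model more
        else:
            lam = lam * 2.0                       # step failed: lean toward steepest descent
    return x

# Example: minimize f(x, y) = (x - 1)**2 + 10*(y + 2)**2, minimum at (1, -2)
f = lambda v: (v[0] - 1)**2 + 10*(v[1] + 2)**2
grad = lambda v: np.array([2*(v[0] - 1), 20*(v[1] + 2)])
hess = lambda v: np.array([[2.0, 0.0], [0.0, 20.0]])
print(lm_minimize(f, grad, hess, [0.0, 0.0]))     # approaches [1, -2]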

Linear Programming: Simplex Method
An objective is optimized subject to constraints; this is an optimization method.
'Programming' is used in the older sense of planning, not computer programming.
This method can get very complicated in the details, which I will not cover; I will just cover the simple case.

Linear Programming: Simplex Method, Example
A window frame company has three plants: Plant 1 produces specialty frames, Plant 2 produces wood frames, and Plant 3 produces the glass and assembles the frames.
Two new products are proposed to increase profitability: one will use a specialty frame, the other a wood frame.
A branch of the company was able to figure out the plant capacity used per unit of each product, and the profit per unit.

Linear Programming: Simplex Method
Percent of plant capacity used per unit produced per minute, and percent of capacity available:

              Product 1   Product 2   Available capacity
Plant 1           1           0               4
Plant 2           0           2              12
Plant 3           3           2              18
Unit profit    $3.00       $5.00

Linear Programming: Simplex Method, Formulation
Define $x_1$ and $x_2$ as the units of the respective products produced per minute.
We want to maximize $Z = 3x_1 + 5x_2$.
The constraints (written first in fractional-capacity form, e.g. $0.01x_1 \le 0.04$, then multiplied through by 100) are:
$x_1 \le 4$
$2x_2 \le 12$
$3x_1 + 2x_2 \le 18$
This could be solved graphically, or by introducing slack variables:
$Z - 3x_1 - 5x_2 = 0$
$x_1 + x_3 = 4$
$2x_2 + x_4 = 12$
$3x_1 + 2x_2 + x_5 = 18$
$x_i \ge 0$ for $i = 1, 2, 3, 4, 5$
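The same problem can be checked with SciPy's linear-programming routine; a minimal sketch, assuming SciPy is installed (linprog minimizes, so the objective is negated):

from scipy.optimize import linprog

# Maximize Z = 3*x1 + 5*x2 by minimizing -Z (linprog only minimizes).
c = [-3, -5]
A_ub = [[1, 0],      # x1            <= 4
        [0, 2],      # 2*x2          <= 12
        [3, 2]]      # 3*x1 + 2*x2   <= 18
b_ub = [4, 12, 18]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
# res.x is approximately [2, 6] and Z = -res.fun = 36, matching the simplex tableau result below.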

Linear Programming: Simplex Method, Initial Tableau
The set-up is as follows. Pick the column with the largest negative coefficient in the Z row (here the x2 column, coefficient -5).

Basis      Z   x1   x2   x3   x4   x5   Right side   Ratio
Z          1   -3   -5    0    0    0        0
x3 (s1)    0    1    0    1    0    0        4       4/0 = ...
x4 (s2)    0    0    2    0    1    0       12       12/2 = 6 (min)
x5 (s3)    0    3    2    0    0    1       18       18/2 = 9

Linear Programming: Simplex Method, Choosing the Pivot
Pick the pivot row as the row with the minimum ratio (here the x4 row).
Then use the pivot element (not the whole row, but the number 2 in this case) as in Gauss-Jordan elimination.

Basis   Z   x1   x2   x3   x4   x5   Right side   Ratio
Z       1   -3   -5    0    0    0        0
x3      0    1    0    1    0    0        4        4/0 = ...
x4      0    0    2    0    1    0       12        12/2 = 6 (min)
x5      0    3    2    0    0    1       18        18/2 = 9

Linear Programming: Simplex Method, First Pivot
Follow these row operations (the pivot element is 2):
$x_{2,\,new} = s_{2,\,old} / \text{pivot}$
$Z_{new} = Z_{old} - (-5)\, x_{2,\,new}$
$s_{1,\,new} = s_{1,\,old} - 0 \cdot x_{2,\,new}$
$s_{3,\,new} = s_{3,\,old} - 2\, x_{2,\,new}$

Basis   Z   x1   x2   x3   x4   x5   Right side   Ratio
Z       1   -3    0    0  5/2    0       30
x3      0    1    0    1    0    0        4
x2      0    0    1    0  1/2    0        6
x5      0    3    0    0   -1    1        6
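The Gauss-Jordan pivot used above can be written as a small tableau operation; a sketch with NumPy, where the helper name is illustrative (row index 2 and column index 2 correspond to the x4 row and x2 column of the tableau):

import numpy as np

def pivot(T, row, col):
    # Gauss-Jordan pivot on tableau T: scale the pivot row to make the pivot 1,
    # then eliminate the pivot column from every other row.
    T = T.astype(float)
    T[row] = T[row] / T[row, col]
    for r in range(T.shape[0]):
        if r != row:
            T[r] = T[r] - T[r, col] * T[row]
    return T

# Initial tableau; columns are Z, x1, x2, x3, x4, x5, right side.
T = np.array([[1, -3, -5, 0, 0, 0,  0],
              [0,  1,  0, 1, 0, 0,  4],
              [0,  0,  2, 0, 1, 0, 12],
              [0,  3,  2, 0, 0, 1, 18]])
T1 = pivot(T, row=2, col=2)   # pivot on the 2 in the x4 row / x2 column
# T1 reproduces the tableau above: the Z row becomes 1, -3, 0, 0, 5/2, 0, 30.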

Linear Programming: Simplex Method, Second Pivot
Repeat: the most negative Z-row coefficient is now in the x1 column, and the minimum ratio picks the x5 row.

Basis   Z   x1   x2   x3   x4   x5   Right side   Ratio
Z       1   -3    0    0  5/2    0       30       NA
x3      0    1    0    1    0    0        4       4/1 = 4
x2      0    0    1    0  1/2    0        6       NA
x5      0    3    0    0   -1    1        6       6/3 = 2 (min)

Linear Programming: Simplex Method, Final Tableau
Repeat; no negative coefficients remain in the Z row, so the tableau is optimal.
Optimal result: profit $36, given $x_1 = 2$ and $x_2 = 6$.

Basis   Z   x1   x2   x3   x4   x5   Right side   Note
Z       1    0    0    0  3/2    1       36       Optimal Z
x3      0    0    0    1  1/3 -1/3        2
x2      0    0    1    0  1/2    0        6       Optimal x2
x1      0    1    0    0 -1/3  1/3        2       Optimal x1

Linear Programming: Simplex Method, Shadow Prices
The shadow prices show the rate at which Z would increase if a resource (a constraint's right-hand side) were increased slightly; they appear as the slack-variable coefficients in the final Z row (here 0, 3/2, and 1 for the three plants).

Basis   Z   x1   x2   x3   x4   x5   Right side   Note
Z       1    0    0    0  3/2    1       36       Shadow prices
x3      0    0    0    1  1/3 -1/3        2
x2      0    0    1    0  1/2    0        6
x1      0    1    0    0 -1/3  1/3        2