MULTI VARIABLE OPTIMIZATION


1 MULTI VARIABLE OPTIMIZATION Min f(x1, x2, x3, ..., xn) UNIDIRECTIONAL SEARCH - CONSIDER A DIRECTION s - ALONG THE LINE x(α) = x + α s - REDUCE TO Min f(α) - SOLVE AS A SINGLE VARIABLE PROBLEM (figure: minimum point along the direction s)

2 Unidirectional search (example) Min f(x1, x2) = (x1 - 10)² + (x2 - 10)², s = (2, 5) (search direction), x = (2, 1) (initial guess)
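A minimal Python sketch of this reduction, assuming the objective and initial data from the slide above; the golden_section helper, its tolerance, and the bracketing interval [0, 5] for α are illustrative choices, not part of the slides.

import numpy as np

def f(x):
    # Objective from the slide: f(x1, x2) = (x1 - 10)^2 + (x2 - 10)^2
    return (x[0] - 10.0) ** 2 + (x[1] - 10.0) ** 2

def line_objective(alpha, x0, s):
    # Restriction of f to the line x(alpha) = x0 + alpha * s
    return f(x0 + alpha * s)

def golden_section(phi, a, b, tol=1e-6):
    # Minimize a single-variable function phi on [a, b] by golden-section search
    gr = (np.sqrt(5.0) - 1.0) / 2.0
    c, d = b - gr * (b - a), a + gr * (b - a)
    while abs(b - a) > tol:
        if phi(c) < phi(d):
            b, d = d, c
            c = b - gr * (b - a)
        else:
            a, c = c, d
            d = a + gr * (b - a)
    return 0.5 * (a + b)

x0 = np.array([2.0, 1.0])   # initial guess from the slide
s = np.array([2.0, 5.0])    # search direction from the slide
alpha_star = golden_section(lambda a: line_objective(a, x0, s), 0.0, 5.0)
print(alpha_star, x0 + alpha_star * s)   # minimum along the line, not the overall minimum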

3 DIRECT SEARCH METHODS - SEARCH THROUGH MANY DIRECTIONS - FOR N VARIABLES, 2^N DIRECTIONS - obtained by altering each of the N variable values and taking all combinations

4 EVOLUTIONARY OPTIMIZATION METHOD - COMPARE ALL 2^N + 1 POINTS & CHOOSE THE BEST - CONTINUE TILL THERE IS AN IMPROVEMENT - ELSE DECREASE THE INCREMENT. STEP 1: x0 = INITIAL POINT, Δi = STEP REDUCTION PARAMETER FOR EACH VARIABLE, ε = TERMINATION PARAMETER

5 2 & 2 ; 2 / 4 : 1) (2 3: 2 / 2 2 : 0 0 GOO x x ELSE GOO x x IF SEP Min x SEP x x POINS CREAE ELSE SOP IF SEP i i N i i i N = = = + = ± = <

6

7 Hooke-Jeeves pattern search. Pattern search: create a set of search directions iteratively; they should be linearly independent. A combination of exploratory and pattern moves. Exploratory: find the best point in the vicinity of the current point. Pattern: jump in the direction of change; if the new point is better, continue, else reduce the size of the exploratory move and continue.

8 Exploratory move. Current solution is x_c; set i = 1; x = x_c. S1: f = f(x), f+ = f(x_i + Δ_i), f- = f(x_i - Δ_i). S2: f_min = min(f, f+, f-); set x to the point corresponding to f_min. S3: If i = N, go to S4; else i = i + 1, go to S1. S4: If x ≠ x_c, success; else failure.

9 Pattern Move. S1: Choose x^(0), increments Δ_i for i = 1, 2, ..., N, a step reduction factor α, a termination parameter ε, and set k = 0. S2: Perform an exploratory move with x^(k) as the base point; if success, x^(k+1) = x, go to S4, else go to S3. S3: If ||Δ|| < ε, terminate; else set Δ_i = Δ_i / α for all i and go to S2.

10 Pattern Move (contd). S4: k = k + 1; x_p^(k+1) = x^(k) + (x^(k) - x^(k-1)). S5: Perform another exploratory move with x_p^(k+1) as the base point; result = x^(k+1). S6: If f(x^(k+1)) < f(x^(k)), go to S4; else go to S3.
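The exploratory and pattern moves of slides 8-10 can be sketched in Python roughly as follows; the function names, the max_iter guard, and the use of the Himmelblau function (introduced on the next slide) as the test objective are illustrative assumptions, not the slides' own code.

import numpy as np

def himmelblau(x):
    # f(x1, x2) = (x1^2 + x2 - 11)^2 + (x1 + x2^2 - 7)^2
    return (x[0] ** 2 + x[1] - 11.0) ** 2 + (x[0] + x[1] ** 2 - 7.0) ** 2

def exploratory_move(f, xc, delta):
    # Perturb each variable by +/- delta[i] in turn, keeping the best point found
    x = xc.copy()
    for i in range(len(x)):
        trials = [x.copy(), x.copy(), x.copy()]
        trials[1][i] += delta[i]
        trials[2][i] -= delta[i]
        x = min(trials, key=f)
    return x, not np.allclose(x, xc)          # (new point, success flag)

def hooke_jeeves(f, x0, delta, alpha=2.0, eps=1e-3, max_iter=1000):
    x = np.asarray(x0, dtype=float)
    delta = np.asarray(delta, dtype=float)
    for _ in range(max_iter):
        y, success = exploratory_move(f, x, delta)
        if not success:
            if np.linalg.norm(delta) < eps:   # S3: increments small enough, stop
                return x
            delta = delta / alpha             # S3: reduce increments and retry
            continue
        while True:                           # S4-S6: pattern moves while they keep improving
            x_prev, x = x, y
            xp = x + (x - x_prev)             # jump along the direction of improvement
            y, _ = exploratory_move(f, xp, delta)
            if f(y) >= f(x):
                break                         # pattern move failed; back to exploration
    return x

print(hooke_jeeves(himmelblau, [0.0, 0.0], [0.5, 0.5]))   # approaches (3, 2)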

11 Example: Consider the Himmelblau function: f(x1, x2) = (x1² + x2 - 11)² + (x1 + x2² - 7)². Solution. Step 1: Selection of initial conditions. 1. Initial point: x^(0) = (0, 0). 2. Increment vector: Δ = (0.5, 0.5). 3. Reduction factor: α = 2. 4. Termination parameter: ε. 5. Iteration counter: k = 0.
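As a quick sanity check, the objective can be coded directly (the helper name is mine); this reproduces the function values used in the walkthrough below, f(0, 0) = 170 at the initial point and f(3, 2) = 0 at the true minimum.

def himmelblau(x1, x2):
    # f(x1, x2) = (x1^2 + x2 - 11)^2 + (x1 + x2^2 - 7)^2
    return (x1 ** 2 + x2 - 11) ** 2 + (x1 + x2 ** 2 - 7) ** 2

print(himmelblau(0, 0))   # 170
print(himmelblau(3, 2))   # 0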

12

13 Step 2: Perform an iteration of the exploratory move with the base point x = x^(0). Thus we set x^(0) = x = (0, 0) and i = 1. The exploratory move is performed with the following steps.

14 Steps for the exploratory move. Step 1: Explore the vicinity of the variable x1. Calculate the function values at three points:
(x^(0) + Δ1 ê1) = (0.5, 0), f+ = f((0.5, 0)) = 157.81
x^(0) = (0, 0), f = f((0, 0)) = 170
(x^(0) - Δ1 ê1) = (-0.5, 0), f- = f((-0.5, 0)) = 171.81
Step 2: Take the minimum of the above function values and the corresponding point.
Step 3: As i = 1 ≠ N, all variables are not yet explored; increment the counter to i = 2 and explore the second variable. The first variable is now done.

15 Step 1: At this point the base point is x = (0.5, 0). Explore the variable x2 and calculate the function values:
f+ = f((0.5, 0.5)) = 144.12
f = f((0.5, 0)) = 157.81
f- = f((0.5, -0.5)) = 165.62
Step 2: f_min = 144.12 at the point x = (0.5, 0.5).
Step 3: As i = 2 = N, move to Step 4 of the exploratory move.
Step 4 (of the exploratory move): Since x ≠ x_c, the move is a success and we set x = (0.5, 0.5).

16 As the move is a success, set x^(1) = x = (0.5, 0.5) and move to Step 4.
STEP 4: We set k = 1 and perform the pattern move: x_p^(2) = x^(1) + (x^(1) - x^(0)) = 2(0.5, 0.5) - (0, 0) = (1, 1).
Step 5: Perform another exploratory move as before with x_p^(2) as the base point. The new point is x = (1.5, 1.5). Set the new point x^(2) = x = (1.5, 1.5).
Step 6: f(x^(2)) = 63.12 is smaller than f(x^(1)) = 144.12. Proceed to the next step to perform another pattern move.

17 STEP 4: Set k = 2 and create a new point x_p^(3) = (2x^(2) - x^(1)) = (2.5, 2.5).
Note: as x^(2) is better than x^(1), a jump along the direction (x^(2) - x^(1)) is made; this takes the search closer to the true minimum.
STEP 5: Perform another exploratory move to find any better point around the new point. Performing the move on both variables gives the new point x^(3) = (3.0, 2.0). This point is the true minimum point.

18 STEP 6: The function value at the new point is f(x^(3)) = 0 < f(x^(2)) = 63.12; thus move on to Step 4.
In this example the minimum of the Hooke-Jeeves algorithm happens to be reached in two iterations; this may not always be the case.
Even though the minimum point is reached, the algorithm has no way of knowing that the optimum has been found. It proceeds until the norm of the increment vector Δ is small.

19 STEP 4: The iteration counter is k = 3 and the new point is x_p^(4) = (2x^(3) - x^(2)) = (4.5, 2.5).
STEP 5: With the new point as base, the exploratory move is a success with x = (4.0, 2.0), and thus we set x^(4) = (4.0, 2.0).
STEP 6: The function value is 50, which is larger than the earlier value 0. Thus we move to Step 3.
Step 3: Since ||Δ|| = 0.5 > ε, we reduce the increment vector: Δ = (0.5, 0.5)/2 = (0.25, 0.25), and proceed to Step 2 to perform the iterations.

20 Step 2: Perform an exploratory move with x^(3) = (3.0, 2.0) as the current point. The exploratory move on both variables is a failure and we obtain x = (3.0, 2.0); thus we proceed to Step 3.
Step 3: Since ||Δ|| is not small, reduce the increment vector and move to Step 2. The new increment vector is Δ = (0.125, 0.125).
The algorithm now continues with Step 2 and Step 3 until ||Δ|| is smaller than the termination factor. The final solution is x* = (3.0, 2.0) with the function value 0.

21 POWELL'S CONJUGATE DIRECTION METHOD. For a quadratic function in 2 variables: - TAKE 2 POINTS x1 & x2 AND A DIRECTION d. IF y1 IS A SOLUTION OF MIN f(x1 + λd) & y2 IS A SOLUTION OF MIN f(x2 + λd), THEN (y2 - y1) IS CONJUGATE TO d. THE OPTIMUM LIES ALONG (y2 - y1). (figure: points x1, x2 and the corresponding line minima y1, y2)

22 OR: FROM x1 GET y1 ALONG (1,0); FROM y1 GET x2 ALONG (0,1); FROM x2 GET y2 ALONG (1,0); TAKE (y2 - y1).

23 Alternative to the above method: one point (x1) and both coordinate directions ((1,0) and (0,1)) can be used to create a pair of conjugate directions (d and (y2 - y1)).

24 Point (y1) is obtained by a unidirectional search along (1,0) from the point (x1). Point (x2) is obtained by a unidirectional search along (0,1) from the point (y1). Point (y2) is obtained by a unidirectional search along (1,0) from the point (x2). The figure shown also follows the parallel subspace property. This requires three unidirectional searches.

25 Example: Consider the Himmelblau function: f(x1, x2) = (x1² + x2 - 11)² + (x1 + x2² - 7)².
Solution. Step 1: Begin with a point x^(0) = (0, 4) and initial directions s^(1) = (1, 0) and s^(2) = (0, 1).
Step 2: Find the minimum along the first search direction. Any point along that direction can be written as x_p = x^(0) + α s^(1).

26 Thus the point x_p can be written as x_p = (α, 4). The two-variable function can now be expressed in terms of one variable: F(α) = (α² - 7)² + (α + 9)². We are looking for the point at which this function value is minimum, using the single-variable search procedures. With the bounding phase method the minimum is bracketed in the interval (1, 4), and with the golden section search the minimum α* = 2.083 is found with three decimal places of accuracy. Thus x^(1) = (2.083, 4.000).
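This single-variable minimization can be reproduced with a short golden-section search; the helper and its tolerance are illustrative, while the bracket (1, 4) is the one quoted above from the bounding phase method.

def F(alpha):
    # Restriction of the Himmelblau function along s(1) = (1, 0) from x(0) = (0, 4):
    # f(alpha, 4) = (alpha^2 - 7)^2 + (alpha + 9)^2
    return (alpha ** 2 - 7.0) ** 2 + (alpha + 9.0) ** 2

def golden_section(phi, a, b, tol=1e-4):
    gr = (5.0 ** 0.5 - 1.0) / 2.0
    c, d = b - gr * (b - a), a + gr * (b - a)
    while abs(b - a) > tol:
        if phi(c) < phi(d):
            b, d = d, c
            c = b - gr * (b - a)
        else:
            a, c = c, d
            d = a + gr * (b - a)
    return 0.5 * (a + b)

print(round(golden_section(F, 1.0, 4.0), 3))   # approximately 2.083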

27

28 Similarly, find the minimum point along the second search direction. A general point on the line is x(α) = x^(1) + α s^(2) = (2.083, α + 4). Using a similar approach we obtain α* = -1.592 and x^(2) = (2.083, 2.408). From this point, perform a unidirectional search along the first search direction and obtain the minimum point x^(3) = (2.881, 2.408).

29 Step 3: According to the parallel subspace property, the new conjugate direction is d = x^(3) - x^(1) = (0.798, -1.592).
Step 4: The magnitude of the search vector d is not small. Thus the new conjugate search directions are s^(2) = (1, 0) and s^(1) = (0.798, -1.592) / ||(0.798, -1.592)|| = (0.448, -0.894). This completes one iteration of Powell's conjugate direction method.

30 Step 2: A single-variable minimization along the search direction s^(1) from the point x^(3) = (2.881, 2.408) results in the new point x^(4) = (3.063, 2.045). One more unidirectional search along s^(2) from the point x^(4) results in the point x^(5). Another minimization along s^(1) results in x^(6).
Step 3: The new conjugate direction is d = (x^(6) - x^(4)) = (0.055, -0.039). The unit vector along this direction is (0.816, -0.578).

31 Step 4: The new pair of conjugate search directions is s^(1) = (0.448, -0.894) and s^(2) = (0.055, -0.039). The search direction d (before normalizing) may be considered small, and therefore the algorithm may be terminated.

32 EXTENDED PARALLEL SUBSPACE PROPERTY. Assume that the point y1 is found after unidirectional searches along each of m (< N) conjugate directions from a chosen point x1, and similarly the point y2 is found after unidirectional searches along each of the m conjugate directions from another point x2. The vector (y2 - y1) is then conjugate to all m search directions.

33 ALGORITHM. Step 1: Choose a starting point and a set of N linearly independent directions. Step 2: Minimize along the N unidirectional search directions, using the previous minimum point to begin the next search. Step 3: Form a new conjugate direction d using the extended parallel subspace property. Step 4: If d is small or the search directions are linearly dependent, TERMINATE; else replace the starting point, set s^(1) = d / ||d||, and go to Step 2.

34 FOR N VARIABLES. STEP 1: TAKE x^(0) & N LINEARLY INDEPENDENT DIRECTIONS s^(1), s^(2), s^(3), ..., s^(N); usually s^(i) = e^(i). STEP 2: MINIMIZE ALONG THE N UNIDIRECTIONAL SEARCH DIRECTIONS, USING THE PREVIOUS BEST POINT EVERY TIME; PERFORM ANOTHER SEARCH ALONG s^(1). STEP 3: FORM THE CONJUGATE DIRECTION d. STEP 4: IF ||d|| IS SMALL, TERMINATE; ELSE SET s^(j) = s^(j-1) FOR j = N, ..., 2 AND s^(1) = d / ||d||, GOTO 2.

35 Powell's method with N variables. Start from x1. Get y1 by doing a search along s1. Find y2 by doing searches along s2, s3, s4, ..., sn and then s1. (y2 - y1) is conjugate to s1. Replace sn by (y2 - y1) and repeat the same procedure starting from s2.
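A compact sketch of this N-variable procedure, assuming a golden-section line search over a fixed bracket as the inner unidirectional search; the helper names, the bracket width, and the termination test on ||d|| are illustrative choices rather than part of the slides.

import numpy as np

def line_minimize(f, x, s, bracket=5.0, tol=1e-6):
    # Golden-section search for f along x + alpha * s, alpha in [-bracket, bracket]
    gr = (np.sqrt(5.0) - 1.0) / 2.0
    a, b = -bracket, bracket
    phi = lambda alpha: f(x + alpha * s)
    c, d = b - gr * (b - a), a + gr * (b - a)
    while abs(b - a) > tol:
        if phi(c) < phi(d):
            b, d = d, c
            c = b - gr * (b - a)
        else:
            a, c = c, d
            d = a + gr * (b - a)
    return x + 0.5 * (a + b) * s

def powell(f, x0, eps=1e-6, max_iter=100):
    n = len(x0)
    S = [np.eye(n)[i] for i in range(n)]       # start with the coordinate directions e_i
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        y1 = line_minimize(f, x, S[0])         # y1: minimum along s1
        y = y1
        for s in S[1:] + [S[0]]:               # searches along s2..sN and then s1 again give y2
            y = line_minimize(f, y, s)
        d = y - y1                             # (y2 - y1) is conjugate to s1
        if np.linalg.norm(d) < eps:
            return y
        S = [d / np.linalg.norm(d)] + S[:-1]   # shift the directions and put d first
        x = y
    return x

print(powell(lambda x: (x[0]**2 + x[1] - 11)**2 + (x[0] + x[1]**2 - 7)**2, [0.0, 4.0]))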

36 GRADIENT BASED METHODS. These methods exploit the derivative information of the function and are faster. They cannot be applied to problems where the objective function is discrete or discontinuous. They are efficient when the derivative information is easily available. Some algorithms require first-order derivatives while others require both first- and second-order derivatives of the objective function. The derivatives can be obtained by numerical computation.

37 Methods in gradient search: Descent direction; Cauchy's (steepest descent) method; Newton's method; Marquardt's method; Conjugate gradient method; Variable-metric method.

38 By definition, the first derivative ∇f(x^(t)) at any point x^(t) represents the direction of maximum increase of the function value. If we are interested in finding a point with the minimum function value, we should ideally search opposite to the first-derivative direction, that is, along -∇f(x^(t)). Any search made in this direction will have a smaller function value.

39 DESCENT DIRECTION. A search direction d^(t) is a descent direction at a point x^(t) if the condition ∇f(x^(t)) · d^(t) ≤ 0 is satisfied in the vicinity of the point x^(t). This can also be verified by comparing function values at two points along any descent direction. The magnitude of ∇f(x^(t)) · d^(t) for a descent direction d^(t) specifies how steep the descent along that search direction is. This is illustrated with an example on the next slide.

40 If d^(t) = -∇f(x^(t)) is used, the quantity ∇f(x^(t)) · d^(t) = -||∇f(x^(t))||² is maximally negative. Thus the search direction -∇f(x^(t)) is called the steepest descent direction. Note: ∇ = (∂/∂x1, ∂/∂x2, ∂/∂x3, ..., ∂/∂xn)^T.

41 Example. Consider the Himmelblau function: f(x1, x2) = (x1² + x2 - 11)² + (x1 + x2² - 7)². We would like to determine whether the direction d^(t) = (1, 0) at the point x^(t) = (1, 1) is a descent direction or not. Refer to the figure below.

42 It is clear from the figure that moving locally along d^(t) from the point x^(t) will reduce the function value. We can investigate this by calculating the derivative ∇f(x^(t)) at the point. The derivative, calculated numerically, is ∇f(x^(t)) = (-46, -38). Taking the dot product of ∇f(x^(t)) and d^(t) we obtain ∇f(x^(t)) · d^(t) = (-46, -38) · (1, 0)^T = -46. This is a negative quantity, thus the search direction is a descent direction.

43 The magnitude of this (negative) dot product suggests the extent of descent in the direction. If the search direction d^(t) = -∇f(x^(t)) = (46, 38) is used, the magnitude of the dot product is (46, 38) · (46, 38)^T = 3,560. The direction (46, 38), or (0.771, 0.637) as a unit vector, is therefore a better descent direction than (1, 0); it is the steepest descent direction at the point x^(t).
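A small numerical check of this dot-product test; the central-difference gradient helper is an illustrative choice.

def himmelblau(x1, x2):
    return (x1 ** 2 + x2 - 11) ** 2 + (x1 + x2 ** 2 - 7) ** 2

def numerical_gradient(f, x1, x2, h=1e-6):
    # Central-difference approximation of the gradient
    g1 = (f(x1 + h, x2) - f(x1 - h, x2)) / (2 * h)
    g2 = (f(x1, x2 + h) - f(x1, x2 - h)) / (2 * h)
    return g1, g2

g = numerical_gradient(himmelblau, 1.0, 1.0)
print(g)                              # approximately (-46, -38)
d = (1.0, 0.0)
print(g[0] * d[0] + g[1] * d[1])      # -46 < 0, so (1, 0) is a descent direction
print((-g[0], -g[1]))                 # steepest descent direction, approximately (46, 38)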

44 For a nonlinear function, the steepest descent direction at any point may not exactly pass through the true minimum. The steepest descent direction is only a locally best direction. It is not guaranteed that moving along the steepest descent direction will always take the search closer to the true minimum.

45 CAUCHY'S (STEEPEST DESCENT) METHOD. The search direction used is the negative of the gradient at the current point x^(k): s^(k) = -∇f(x^(k)). As this direction gives the maximum descent in function value, the method is also known as the steepest descent method. The algorithm guarantees improvement in the function value at every iteration.

46 Algorithm in brief. At every iteration: compute the derivative at the current point; perform a unidirectional search along the negative of this derivative direction to find the minimum point along it; the minimum point becomes the current point and the search continues from there. The algorithm continues until a point having a sufficiently small gradient vector is found.

47 Algorithm in steps. STEP 1: CHOOSE M (MAX. NO. OF ITERATIONS), ε1, ε2, x^(0); SET k = 0. STEP 2: CALCULATE ∇f(x^(k)). STEP 3: IF ||∇f(x^(k))|| ≤ ε1, TERMINATE; IF k ≥ M, TERMINATE. STEP 4: UNIDIRECTIONAL SEARCH USING ε2:

48 MIN. f(x^(k+1)) = f(x^(k) - α ∇f(x^(k))). STEP 5: IF ||x^(k+1) - x^(k)|| / ||x^(k)|| ≤ ε2, TERMINATE; ELSE k = k + 1 AND GOTO STEP 2.
- THE METHOD WORKS WELL WHEN x^(k) IS FAR FROM x* (THE OPTIMUM).
- IF THE POINT IS CLOSE TO x*, THE CHANGE IN THE GRADIENT VECTOR IS VERY SMALL.
- OTHER METHODS USE VARIATIONS: SECOND DERIVATIVES (NEWTON'S), A COMBINATION (MARQUARDT'S), THE CONJUGATE GRADIENT METHOD.
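A minimal sketch of Steps 1-5 in Python, assuming a central-difference gradient and a golden-section line search over the fixed interval (0, 1) for the step length; helper names and tolerances are illustrative.

import numpy as np

def grad(f, x, h=1e-6):
    # Central-difference gradient
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

def line_search(f, x, d, a=0.0, b=1.0, tol=1e-6):
    # Golden-section search for the step length alpha in [a, b] along d
    gr = (np.sqrt(5.0) - 1.0) / 2.0
    c, e = b - gr * (b - a), a + gr * (b - a)
    while abs(b - a) > tol:
        if f(x + c * d) < f(x + e * d):
            b, e = e, c
            c = b - gr * (b - a)
        else:
            a, c = c, e
            e = a + gr * (b - a)
    return 0.5 * (a + b)

def cauchy(f, x0, eps1=1e-3, eps2=1e-3, M=100):
    x = np.asarray(x0, dtype=float)
    for k in range(M):                          # Step 3 also terminates after M iterations
        g = grad(f, x)
        if np.linalg.norm(g) <= eps1:           # Step 3: small gradient, terminate
            break
        alpha = line_search(f, x, -g)           # Step 4: unidirectional search along -grad
        x_new = x - alpha * g
        if np.linalg.norm(x_new - x) / (np.linalg.norm(x) + 1e-12) <= eps2:
            x = x_new                           # Step 5: consecutive points close, terminate
            break
        x = x_new
    return x

himmelblau = lambda x: (x[0]**2 + x[1] - 11)**2 + (x[0] + x[1]**2 - 7)**2
print(cauchy(himmelblau, [0.0, 0.0]))           # moves towards the minimum (3, 2)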

49 Example. Consider the Himmelblau function: f(x1, x2) = (x1² + x2 - 11)² + (x1 + x2² - 7)².
Step 1: Choose a large value of M for proper convergence; M = 100 is normally chosen. Initial conditions: M = 100, x^(0) = (0, 0), ε1 = ε2 = 10⁻³, k = 0.

50 Step 2: The derivative at the initial point x^(0) is calculated and found to be (-14, -22).
Step 3: As the magnitude of the derivative is not small and k = 0 < M = 100, do not terminate; proceed to Step 4.

51 Step 4: Perform a line search from x^(0) in the direction -∇f(x^(0)) such that the function value is minimum. Along that direction, any point can be expressed by fixing a value of the parameter α^(0) in the equation x = x^(0) - α^(0) ∇f(x^(0)) = (14α^(0), 22α^(0)). Using the golden section search in the interval (0, 1), the minimum point along the direction is found to be x^(1) = (1.788, 2.810).
Step 5: As x^(1) and x^(0) are quite different, we do not terminate but go back to Step 2. This completes one iteration.

52 Step 2: Compute the derivative vector at this point.
Step 3: The magnitude of the derivative vector is not smaller than ε1, thus we continue with Step 4.
Step 4: Another unidirectional search along -∇f(x^(1)) from the point x^(1) = (1.788, 2.810) using the golden section search finds the new point x^(2) = (3.008, 1.99).
Continue the process until the termination criteria are reached.

53 Penalty function approach. A transformation method: convert the constrained problem into a sequence of unconstrained problems. Assign a penalty to (violated) constraints, add it to the objective function, and solve. Use the result as the starting point for the next iteration, alter the penalties, and repeat.

54 Minimise f(x1, x2) = (x1² + x2 - 11)² + (x1 + x2² - 7)²
Subjected to (x1 - 5)² + x2² ≥ c (c denotes the constraint bound)
Penalty term: R⟨(x1 - 5)² + x2² - c⟩²
(figure: feasible region, infeasible region, and the minimum point in the x1-x2 plane)

55 Process
1. Choose ε1, ε2, an initial penalty parameter R, and the penalty form Ω. Start with x^(0); set k = 0.
2. Form the modified objective function P(x^(k), R^(k)) = f(x^(k)) + Ω(R^(k), g(x^(k)), h(x^(k))).
3. Starting from x^(k), find x^(k+1) so as to minimize P (use ε1).
4. If |P(x^(k+1), R^(k)) - P(x^(k), R^(k-1))| < ε2, TERMINATE; else R^(k+1) = cR^(k), k = k + 1; go to step 2.

56 At any stage, minimize P(x, R) = f(x) + Ω(R, g(x), h(x)), where R is the set of penalty parameters and Ω is the penalty function.
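A rough sketch of this sequential process for a single inequality constraint g(x) ≥ 0, using the bracket-operator (exterior) penalty described on a later slide; the constraint bound 26, the initial R, the growth factor c = 10, and the use of scipy's Nelder-Mead minimizer for the inner unconstrained solves are all assumptions made for illustration.

import numpy as np
from scipy.optimize import minimize    # inner unconstrained solver (assumed choice)

def f(x):
    # Himmelblau objective
    return (x[0]**2 + x[1] - 11)**2 + (x[0] + x[1]**2 - 7)**2

def g(x):
    # Illustrative inequality constraint g(x) >= 0; the bound 26 is an assumed value
    return (x[0] - 5.0)**2 + x[1]**2 - 26.0

def bracket(a):
    # Bracket operator: <a> = a if a < 0, else 0 (penalizes only infeasible points)
    return a if a < 0.0 else 0.0

def P(x, R):
    # Penalized objective P(x, R) = f(x) + R * <g(x)>^2
    return f(x) + R * bracket(g(x)) ** 2

def penalty_method(x0, R=0.1, c=10.0, eps=1e-4, max_outer=20):
    x = np.asarray(x0, dtype=float)
    prev = P(x, R)
    for _ in range(max_outer):
        x = minimize(lambda z: P(z, R), x, method="Nelder-Mead").x   # step 3
        cur = P(x, R)
        if abs(cur - prev) < eps:      # step 4: little change in the penalized objective
            break
        prev, R = cur, c * R           # step 4: increase the penalty parameter and repeat
    return x

print(penalty_method([0.0, 0.0]))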

57

58

59 Types of penalty function.
Parabolic penalty: Ω = R{h(x)}² - for equality constraints - penalizes only infeasible points.
Interior penalty functions - penalize feasible points.
Exterior penalty functions - penalize infeasible points.
Mixed penalty functions - a combination of both.

60 Infinite barrier penalty: Ω = R Σ_j |g_j(x)|, summed over the violated inequality constraints - R is very large - exterior.
Log penalty: Ω = -R ln[g(x)] - for inequality constraints - for feasible points - interior - initially large R - larger penalty close to the border.

61 Inverse penalty: Ω = R [1/g(x)] - interior - larger penalty close to the border - initially large R.
Bracket operator penalty: Ω = R⟨g(x)⟩², where ⟨A⟩ = A if A < 0 and ⟨A⟩ = 0 otherwise - exterior - initially small R.
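The penalty terms listed above, written as small Python helpers for a single constraint value (the function names are mine):

import math

def parabolic_penalty(R, h):
    # Parabolic penalty for an equality constraint value h = h(x)
    return R * h ** 2

def log_penalty(R, g):
    # Log penalty (interior): defined for feasible points with g = g(x) > 0
    return -R * math.log(g)

def inverse_penalty(R, g):
    # Inverse penalty (interior): grows as g(x) approaches the constraint boundary
    return R * (1.0 / g)

def bracket_penalty(R, g):
    # Bracket operator penalty (exterior): <g> = g if g < 0, else 0
    return R * (g if g < 0.0 else 0.0) ** 2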

62 Direct search: variable elimination - for equality constraints - express one variable as a function of the others and eliminate it - this removes all equality constraints.

63 Complex search - generate a set of points at random - if a point is infeasible, reflect it beyond the centroid of the remaining points - take the worst point and push it towards the centroid of the feasible points.

64 Complex Search Algo
S1: Assume bounds on x (x_L, x_U), a reflection parameter α, and termination parameters ε & δ.
S2: Generate a set of P (= 2N) initial points. For each point, sample N times to determine x_i^(p) within the given bounds; if x^(p) is infeasible, calculate x̄ (the centroid of the current set of points) and set x^(p) = x^(p) + 1/2 (x̄ - x^(p)) until x^(p) is feasible; if x^(p) is feasible, continue until you get P points.

65 Complex Search Algo (contd)
S3: Reflection step. Select x_R such that f(x_R) = max(f(x^(p))) = F_max; calculate x̄, the centroid of the remaining points; set x_m = x̄ + α (x̄ - x_R).
If x_m is feasible and f(x_m) ≥ F_max, retract half the distance to the centroid x̄ and continue until f(x_m) < F_max.
If x_m is feasible and f(x_m) < F_max, go to S5.
If x_m is infeasible, go to S4.

66 Complex Search Algo (contd)
S4: Check the feasibility of the solution. For all i, reset violated variable bounds: if x_i^m < x_i^L, set x_i^m = x_i^L; if x_i^m > x_i^U, set x_i^m = x_i^U. If the resulting x^m is still infeasible, retract half the distance to the centroid; repeat until x^m is feasible.

67 Complex Search Algo (contd)
S5: Replace x_R by x_m and check for termination:
f_mean = mean of f(x^(p)), x_mean = mean of x^(p)
Σ_p (f(x^(p)) - f_mean)² ≤ ε and Σ_p ||x^(p) - x_mean||² ≤ δ
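A rough sketch of the reflection step (S3) and this termination test in Python; the reflection parameter α = 1.3, the bounded retraction loop, and the omission of the bound/feasibility handling of S2 and S4 are simplifying assumptions.

import numpy as np

def reflect_worst(f, points, alpha=1.3):
    # S3: reflect the worst point through the centroid of the remaining points
    fvals = [f(p) for p in points]
    worst = int(np.argmax(fvals))
    others = [p for i, p in enumerate(points) if i != worst]
    centroid = np.mean(others, axis=0)
    xm = centroid + alpha * (centroid - points[worst])
    for _ in range(20):                    # retract towards the centroid while no improvement
        if f(xm) < fvals[worst]:
            break
        xm = 0.5 * (xm + centroid)
    points[worst] = xm                     # S5: replace the worst point by x_m
    return points

def terminated(f, points, eps=1e-4, delta=1e-4):
    # S5: stop when both the function values and the points have collapsed together
    fvals = np.array([f(p) for p in points])
    xs = np.array(points)
    f_spread = np.sum((fvals - fvals.mean()) ** 2)
    x_spread = np.sum(np.linalg.norm(xs - xs.mean(axis=0), axis=1) ** 2)
    return f_spread <= eps and x_spread <= delta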

68 Complex Search Algo (contd)

69 Characteristics of complex search. It works for complex feasible regions. If the optimum is well inside the search space, the algorithm is efficient. It is not so good if the search space is narrow or if the optimum is close to the constraint boundary.
