MULTI VARIABLE OPTIMIZATION


Min f(x1, x2, x3, ..., xn)

UNIDIRECTIONAL SEARCH
- CONSIDER A DIRECTION s
- x(α) = x + α s
- REDUCE TO Min f(α)
- SOLVE AS A SINGLE VARIABLE PROBLEM

Unidirectional search (example)
Min f(x1, x2) = (x1 − 10)^2 + (x2 − 10)^2
s = (2, 5) (search direction)
x = (2, 1) (initial guess)
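
A minimal Python sketch of this reduction (not from the original slides; the golden-section helper, the bracketing interval (0, 5), and the variable names are assumptions):

def f(x1, x2):
    return (x1 - 10)**2 + (x2 - 10)**2

def golden_section(phi, a, b, tol=1e-6):
    # minimize a one-variable function phi on the bracket [a, b]
    gr = (5 ** 0.5 - 1) / 2                   # golden ratio fraction ~0.618
    c, d = b - gr * (b - a), a + gr * (b - a)
    while abs(b - a) > tol:
        if phi(c) < phi(d):
            b, d = d, c
            c = b - gr * (b - a)
        else:
            a, c = c, d
            d = a + gr * (b - a)
    return 0.5 * (a + b)

x = (2.0, 1.0)                                # initial guess
s = (2.0, 5.0)                                # search direction
phi = lambda a: f(x[0] + a * s[0], x[1] + a * s[1])
alpha_star = golden_section(phi, 0.0, 5.0)    # single-variable problem in alpha
x_new = (x[0] + alpha_star * s[0], x[1] + alpha_star * s[1])
print(alpha_star, x_new)                      # alpha* ~ 2.10, x_new ~ (6.21, 11.52)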

DIRECT SEARCH METHODS
- SEARCH THROUGH MANY DIRECTIONS
- FOR N VARIABLES, 2^N DIRECTIONS
- Obtained by altering each of the N values and taking all combinations

EVOLUTIONARY OPTIMIZATION METHOD
- COMPARE ALL 2^N + 1 POINTS & CHOOSE THE BEST
- CONTINUE TILL THERE IS AN IMPROVEMENT
- ELSE DECREASE THE INCREMENT
STEP 1: x^0 = INITIAL POINT, Δ_i = STEP REDUCTION PARAMETER FOR EACH VARIABLE, ε = TERMINATION PARAMETER

STEP 2: IF ‖Δ‖ < ε, STOP;
ELSE CREATE 2^N POINTS BY SETTING x_i = x_i^0 ± Δ_i/2 FOR EACH VARIABLE (ALL COMBINATIONS)
STEP 3: x̄ = THE POINT WITH MINIMUM f AMONG ALL (2^N + 1) POINTS
STEP 4: IF x̄ = x^0, SET Δ_i = Δ_i/2 & GOTO 2;
ELSE SET x^0 = x̄ & GOTO 2
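
A rough Python sketch of this method under the reconstruction above (the corner points offset by Δ_i/2, the termination test on the largest increment, and all names are assumptions):

import itertools

def evolutionary_opt(f, x0, delta, eps=1e-3):
    x0, delta = list(x0), list(delta)
    n = len(x0)
    while max(abs(d) for d in delta) >= eps:               # STEP 2 termination test
        # create the 2^N corner points around the current point
        corners = [[x0[i] + s * delta[i] / 2 for i, s in enumerate(signs)]
                   for signs in itertools.product((-1, 1), repeat=n)]
        best = min(corners + [x0], key=lambda p: f(*p))    # STEP 3: best of 2^N + 1
        if best == x0:                                     # STEP 4: no improvement
            delta = [d / 2 for d in delta]                 # shrink the increments
        else:
            x0 = best                                      # move to the better point
    return x0

himmelblau = lambda x1, x2: (x1**2 + x2 - 11)**2 + (x1 + x2**2 - 7)**2
print(evolutionary_opt(himmelblau, (0.0, 0.0), (0.5, 0.5)))   # close to (3, 2)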

Hooke-Jeeves pattern search
Pattern search: create a set of search directions iteratively; they should be linearly independent.
A combination of exploratory and pattern moves:
- Exploratory: find the best point in the vicinity of the current point
- Pattern: jump in the direction of change; if the new point is better, continue, else reduce the size of the exploratory move and continue

Exploratory move
Current solution is x^c; set i = 1; x = x^c
S1: f = f(x), f+ = f(x with x_i + Δ_i), f− = f(x with x_i − Δ_i)
S2: f_min = min(f, f+, f−); set x to the point corresponding to f_min
S3: If i = N, go to S4; else i = i + 1, go to S1
S4: If x ≠ x^c, success; else failure
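
A small Python sketch of the exploratory move (S1-S4); the helper name is an assumption, and the Himmelblau function from the example below is used as the test function:

def exploratory_move(f, xc, delta):
    x = list(xc)
    for i in range(len(x)):                    # S1-S3: explore each variable in turn
        trial_values = [x[i] - delta[i], x[i], x[i] + delta[i]]
        x[i] = min(trial_values,
                   key=lambda v: f(*(x[:i] + [v] + x[i+1:])))   # S2: keep the best
    return x, x != list(xc)                    # S4: success if any variable changed

himmelblau = lambda x1, x2: (x1**2 + x2 - 11)**2 + (x1 + x2**2 - 7)**2
print(exploratory_move(himmelblau, (0.0, 0.0), (0.5, 0.5)))     # ([0.5, 0.5], True)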

Pattern Move
S1: Choose x^(0), increments Δ_i for i = 1, 2, ..., N, a step reduction factor α, a termination parameter ε, and set k = 0
S2: Perform an exploratory move with x^(k) as the base point; if it succeeds, x^(k+1) = x, go to S4; else go to S3
S3: If ‖Δ‖ < ε, terminate; else set Δ_i = Δ_i / α for all i, go to S2

Pattern Move (contd)
S4: k = k + 1; x_p^(k+1) = x^(k) + (x^(k) − x^(k−1))
S5: Perform another exploratory move with x_p^(k+1) as the base point; result = x^(k+1)
S6: If f(x^(k+1)) < f(x^(k)), go to S4; else go to S3
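
A rough Python sketch of the full Hooke-Jeeves loop (S1-S6), restating the exploratory move from the previous sketch for completeness; the termination test on the largest increment and the parameter defaults are assumptions:

def exploratory_move(f, xc, delta):
    x = list(xc)
    for i in range(len(x)):
        x[i] = min([x[i] - delta[i], x[i], x[i] + delta[i]],
                   key=lambda v: f(*(x[:i] + [v] + x[i+1:])))
    return x, x != list(xc)

def hooke_jeeves(f, x0, delta, alpha=2.0, eps=1e-3):
    x_prev, delta = list(x0), list(delta)
    x, ok = exploratory_move(f, x_prev, delta)           # S2: first exploratory move
    while max(delta) >= eps:                             # S3: termination test
        if not ok:
            delta = [d / alpha for d in delta]           # S3: shrink the increments
            x, ok = exploratory_move(f, x_prev, delta)   # S2: retry around the base
            continue
        xp = [2 * a - b for a, b in zip(x, x_prev)]      # S4: pattern move
        x_new, _ = exploratory_move(f, xp, delta)        # S5: explore around it
        if f(*x_new) < f(*x):                            # S6: accept and repeat
            x_prev, x, ok = x, x_new, True
        else:
            x_prev, ok = x, False                        # S6: fall back to S3
    return x

himmelblau = lambda x1, x2: (x1**2 + x2 - 11)**2 + (x1 + x2**2 - 7)**2
print(hooke_jeeves(himmelblau, (0.0, 0.0), (0.5, 0.5)))  # close to (3.0, 2.0)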

Example: Consider the Himmelblau function:
f(x1, x2) = (x1^2 + x2 − 11)^2 + (x1 + x2^2 − 7)^2
Solution
Step 1: Selection of initial conditions
1. Initial point: x^(0) = (0, 0)
2. Increment vector: Δ = (0.5, 0.5)
3. Reduction factor: α = 2
4. Termination parameter: ε = 10^−3
5. Iteration counter: k = 0

Step 2: Perform an iteration of the exploratory move with the base point x = x^(0). Thus we set x^(0) = x = (0, 0) and i = 1. The exploratory move is performed with the following steps.

Steps for the Exploratory move
Step 1: Explore the vicinity of the variable x1. Calculate the function values at three points:
(x1^(0) + Δ1, x2^(0)) = (0.5, 0),   f+ = f((0.5, 0)) = 157.81
(x1^(0), x2^(0)) = (0, 0),          f  = f((0, 0)) = 170
(x1^(0) − Δ1, x2^(0)) = (−0.5, 0),  f− = f((−0.5, 0)) = 171.81
Step 2: Take the minimum of the above function values and the corresponding point, x = (0.5, 0)
Step 3: As i = 1 ≠ N, not all variables have been explored; increment the counter to i = 2 and explore the second variable. First iteration completed.

Step 1: At this point the base point is x = (0.5, 0). Explore the variable x2 and calculate the function values:
f+ = f((0.5, 0.5)) = 144.12
f  = f((0.5, 0)) = 157.81
f− = f((0.5, −0.5)) = 165.62
Step 2: f_min = 144.12 and the corresponding point is x = (0.5, 0.5)
Step 3: As i = 2 = N, move to Step 4 of the exploratory move
Step 4 (of the exploratory move): Since x ≠ x^c, the move is a success and we set x = (0.5, 0.5)

As the move is a success, we set x^(1) = x = (0.5, 0.5) and move to Step 4.
STEP 4: We set k = 1 and perform the pattern move
x_p^(2) = x^(1) + (x^(1) − x^(0)) = 2(0.5, 0.5) − (0, 0) = (1, 1)
Step 5: Perform another exploratory move as before with x_p^(2) as the base point. The new point is x = (1.5, 1.5). Set the new point x^(2) = x = (1.5, 1.5)
Step 6: f(x^(2)) = 63.12 is smaller than f(x^(1)) = 144.12. Proceed to the next step to perform another pattern move.

STEP 4: Set k = 2 and create a new point
x_p^(3) = (2x^(2) − x^(1)) = (2.5, 2.5)
Note: as x^(2) is better than x^(1), a jump along the direction (x^(2) − x^(1)) is made; this will take the search closer to the true minimum.
STEP 5: Perform another exploratory move to find any better point around the new point. Performing the move on both variables, we obtain the new point x^(3) = (3.0, 2.0). This point is the true minimum point.

STEP 6: The function value at the new point is f(x^(3)) = 0 < f(x^(2)) = 63.12. Thus move on to Step 4.
In this example the minimum of the Hooke-Jeeves algorithm is reached in two iterations; this may not always be the case. Even though the minimum point has been reached, there is no way of knowing whether the optimum has been reached or not. The algorithm proceeds until the norm of the increment vector is small.

STEP 4: The iteration counter is k = 3 and the new point is
x_p^(4) = (2x^(3) − x^(2)) = (4.5, 2.5)
STEP 5: With the new point as base, the exploratory search is a success with x = (4.0, 2.0), and thus we set x^(4) = (4.0, 2.0)
STEP 6: The function value is 50, which is larger than the earlier value 0. Thus we move to Step 3.
Step 3: Since ‖Δ‖ = 0.5 > ε, we reduce the increment vector, Δ = (0.5, 0.5)/2 = (0.25, 0.25), and proceed to Step 2 to perform further iterations.

Step 2: Perform an exploratory move with x^(3) = (3.0, 2.0) as the current point. The exploratory move on both variables is a failure and we obtain x^(3) = (3.0, 2.0); thus we proceed to Step 3.
Step 3: Since ‖Δ‖ is not small, reduce the increment vector and move to Step 2. The new increment vector is Δ = (0.125, 0.125).
The algorithm now continues with Step 2 and Step 3 until ‖Δ‖ is smaller than the termination parameter. The final solution is x* = (3.0, 2.0) with function value 0.

POWELL'S CONJUGATE DIRECTION METHOD
For a quadratic function IN 2 VARIABLES:
- TAKE 2 POINTS x^1 & x^2 AND A DIRECTION d
- IF y^1 IS A SOLUTION OF MIN f(x^1 + λd) AND y^2 IS A SOLUTION OF MIN f(x^2 + λd),
  THEN (y^2 − y^1) IS CONJUGATE TO d
- THE OPTIMUM LIES ALONG (y^2 − y^1)

OR:
FROM x^1 GET y^1 ALONG (1,0)
FROM y^1 GET x^2 ALONG (0,1)
FROM x^2 GET y^2 ALONG (1,0)
TAKE (y^2 − y^1)

Alternative to the above method
- One point (x^1) and both coordinate directions ((1,0) and (0,1))
- can be used to create a pair of conjugate directions (d and (y^2 − y^1))

Point (y^1) is obtained by a unidirectional search along (1,0) from the point (x^1).
Point (x^2) is obtained by a unidirectional search along (0,1) from the point (y^1).
Point (y^2) is obtained by a unidirectional search along (1,0) from the point (x^2).
The figure shown also follows the parallel subspace property. This requires three unidirectional searches.
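
A small Python sketch of this three-search construction on a 2-variable quadratic f(x) = 0.5 x'Ax − b'x (the property is stated for quadratics); the matrix A, the vector b, and the use of SciPy's scalar minimizer for each unidirectional search are assumptions:

import numpy as np
from scipy.optimize import minimize_scalar

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
f = lambda x: 0.5 * x @ A @ x - b @ x

def line_min(x, s):
    # unidirectional search: minimize f(x + alpha*s) over alpha
    alpha = minimize_scalar(lambda a: f(x + a * s)).x
    return x + alpha * s

e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
x1 = np.array([0.0, 0.0])
y1 = line_min(x1, e1)              # search along (1,0) from x1
x2 = line_min(y1, e2)              # search along (0,1) from y1
y2 = line_min(x2, e1)              # search along (1,0) from x2

d = y2 - y1                        # conjugate to (1,0) with respect to A
print("conjugacy check d.A.e1 =", d @ A @ e1)          # ~ 0
print("one more search along d:", line_min(y1, d))     # the optimum A^-1 b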

Example: Consider the Himmelblau function:
f(x1, x2) = (x1^2 + x2 − 11)^2 + (x1 + x2^2 − 7)^2
Solution
Step 1: Begin with the point x^(0) = (0, 4) and the initial directions s^(1) = (1, 0) and s^(2) = (0, 1)
Step 2: Find the minimum along the first search direction. Any point along that direction can be written as
x_p = x^(0) + α s^(1)

Thus the point x_p can be written as x_p = (α, 4). Now the two-variable function can be expressed in terms of one variable:
F(α) = (α^2 − 7)^2 + (α + 9)^2
We are looking for the point at which this function value is minimum. Using the bounding phase method, the minimum is bracketed in the interval (1, 4), and using the golden section search the minimum α* = 2.083 is found with three decimal places of accuracy. Thus x^(1) = (2.083, 4.00)

Similarly, find the minimum point along the second search direction. A general point on that line is
x(α) = x^(1) + α s^(2) = (2.083, α + 4)
Using a similar approach as before, α* = −1.592 and x^(2) = (2.083, 2.408). From this point, perform a unidirectional search along the first search direction and obtain the minimum point x^(3) = (2.881, 2.408)

Step 3: According to the parallel subspace property, the new conjugate direction is
d = x^(3) − x^(1) = (0.798, −1.592)
Step 4: The magnitude of the search vector d is not small. Thus the new conjugate search directions are
s^(1) = (0.798, −1.592)/‖(0.798, −1.592)‖ = (0.448, −0.894) and s^(2) = (1, 0)
This completes one iteration of Powell's conjugate direction method.

Step 2: A single-variable minimization along the search direction s^(1) from the point x^(3) = (2.881, 2.408) results in the new point x^(4) = (3.063, 2.045). One more unidirectional search along s^(2) from the point x^(4) results in the point x^(5). Another minimization along s^(1) results in x^(6).
Step 3: The new conjugate direction is
d = (x^(6) − x^(4)) = (0.055, −0.039)
The unit vector along this direction is (0.816, −0.578)

Step 4: The new pair of conjugate search directions is
s^(1) = (0.448, −0.894), s^(2) = (0.055, −0.039)
The search direction d (before normalizing) may be considered small, and therefore the algorithm may be terminated.

EXTENDED PARALLEL SUBSPACE PROPERTY
Let us assume that the point y^1 is found after unidirectional searches along each of m (< N) conjugate directions from a chosen point x^1 and, similarly, that the point y^2 is found after unidirectional searches along each of the same m conjugate directions from another point x^2. The vector (y^2 − y^1) is then conjugate to all m search directions.

ALGORITHM
Step 1: Choose a starting point and a set of N linearly independent directions.
Step 2: Minimize along the N unidirectional search directions, using the previous minimum point to begin the next search.
Step 3: Form a new conjugate direction d using the extended parallel subspace property.
Step 4: If ‖d‖ is small or the search directions are linearly dependent, TERMINATE; else update the set of N directions, set s^(1) = d/‖d‖, and go to Step 2.

FOR N VARIABLES
STEP 1: TAKE x^0 & N LINEARLY INDEPENDENT DIRECTIONS s^1, s^2, s^3, s^4, ..., s^N (e.g. s^i = e^i)
STEP 2:
- MINIMIZE ALONG THE N UNIDIRECTIONAL SEARCH DIRECTIONS, USING THE PREVIOUS BEST POINT EVERY TIME
- PERFORM ANOTHER SEARCH ALONG s^1
STEP 3: FORM THE CONJUGATE DIRECTION d
STEP 4: IF ‖d‖ IS SMALL, TERMINATE;
ELSE s^j = s^(j−1) FOR j = N, ..., 2 & s^1 = d/‖d‖, GOTO 2

Powell's method with N variables
- Start from x^1
- Get y^1 by doing a search along s^1
- Find y^2 by doing searches along s^2, s^3, s^4, ..., s^N, s^1
- (y^2 − y^1) is conjugate to s^1
- Replace s^N by (y^2 − y^1) and repeat the same procedure starting from s^2
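
A rough Python sketch of Powell's method for N variables, following the step numbering of the N-variable outline above; SciPy's scalar minimizer stands in for the bounding-phase/golden-section searches, and the parameter defaults are assumptions:

import numpy as np
from scipy.optimize import minimize_scalar

def line_min(f, x, s):
    alpha = minimize_scalar(lambda a: f(x + a * s)).x
    return x + alpha * s

def powell(f, x0, eps=1e-6, max_iter=50):
    n = len(x0)
    S = [np.eye(n)[i] for i in range(n)]            # STEP 1: s^i = e^i
    x = np.asarray(x0, float)
    for _ in range(max_iter):
        y1 = line_min(f, x, S[0])                   # STEP 2: search along s^1 ...
        y = y1
        for s in S[1:] + [S[0]]:                    # ... then s^2..s^N and s^1 again
            y = line_min(f, y, s)
        d = y - y1                                  # STEP 3: new conjugate direction
        if np.linalg.norm(d) < eps:                 # STEP 4: terminate if d is small
            return y
        S = [d / np.linalg.norm(d)] + S[:-1]        # STEP 4: shift the directions
        x = y
    return x

himmelblau = lambda x: (x[0]**2 + x[1] - 11)**2 + (x[0] + x[1]**2 - 7)**2
# which of the four Himmelblau minima is reached depends on the line searches
print(powell(himmelblau, [0.0, 4.0]))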

GRADIENT-BASED METHODS
These methods exploit the derivative information of the function and are typically faster than direct search methods. They cannot be applied to problems where the objective function is discrete or discontinuous. They are efficient when derivative information is easily available. Some algorithms require first-order derivatives, while others require both first- and second-order derivatives of the objective function. The derivatives can also be obtained by numerical computation.

Methods in gradient search:
- Descent direction
- Cauchy's (steepest descent) method
- Newton's method
- Marquardt's method
- Conjugate gradient method
- Variable-metric method

By definition, the first derivative ∇f(x^(t)) at any point x^(t) represents the direction of maximum increase of the function value. If we are interested in finding a point with the minimum function value, ideally we should search in the direction opposite to the first derivative, that is, along −∇f(x^(t)). A sufficiently small move in this direction reduces the function value.

DESCENT DIRECTION
A search direction d^(t) is a descent direction at a point x^(t) if the condition ∇f(x^(t)) · d^(t) ≤ 0 is satisfied in the vicinity of the point x^(t). This can also be verified by comparing function values at two points along any descent direction. The magnitude of the quantity ∇f(x^(t)) · d^(t) for a descent direction d^(t) indicates how steep the descent along that direction is. This statement is illustrated with an example on the next slide.

If d^(t) = −∇f(x^(t)) is used, the quantity ∇f(x^(t)) · d^(t) = −‖∇f(x^(t))‖^2 is maximally negative.
Thus the search direction −∇f(x^(t)) is called the steepest descent direction.
Note: ∇ = (∂/∂x1, ∂/∂x2, ∂/∂x3, ..., ∂/∂xn)^T

Example: Consider the Himmelblau function:
f(x1, x2) = (x1^2 + x2 − 11)^2 + (x1 + x2^2 − 7)^2
We would like to determine whether the direction d^(t) = (1, 0) at the point x^(t) = (1, 1) is a descent direction or not. Refer to the figure below.

It is clear from the figure that moving locally along d^(t) from the point x^(t) will reduce the function value. We investigate this aspect by calculating the derivative ∇f(x^(t)) at the point. The derivative, calculated numerically, is
∇f(x^(t)) = (−46, −38)
Taking the dot product of ∇f(x^(t)) and d^(t), we obtain
∇f(x^(t)) · d^(t) = (−46, −38) · (1, 0) = −46
Since this is a negative quantity, the search direction (1, 0) is a descent direction.

The magnitude of the negative dot product suggests the extent of descent in the direction. If the search direction d^(t) = −∇f(x^(t)) = (46, 38) is used, the dot product is
∇f(x^(t)) · d^(t) = (−46, −38) · (46, 38) = −3,560
The direction (46, 38), or normalized (0.771, 0.637), gives a steeper descent than (1, 0). This direction is the steepest descent direction at the point x^(t).
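
A short Python sketch of this check, with the gradient taken by central differences (the helper names and the step size are assumptions):

def himmelblau(x1, x2):
    return (x1**2 + x2 - 11)**2 + (x1 + x2**2 - 7)**2

def num_grad(f, x, h=1e-6):
    # central-difference approximation of the gradient
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        g.append((f(*xp) - f(*xm)) / (2 * h))
    return g

x = (1.0, 1.0)
g = num_grad(himmelblau, x)                     # ~ (-46, -38)
for d in [(1.0, 0.0), (46.0, 38.0), (-1.0, 0.0)]:
    dot = sum(gi * di for gi, di in zip(g, d))
    print(d, "descent" if dot < 0 else "not descent", round(dot, 2))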

For nonlinear functions, the steepest descent direction at any point may not pass exactly through the true minimum. The steepest descent direction is only a locally best direction; it is not guaranteed that moving along it will always take the search closer to the true minimum.

CAUCHY'S (STEEPEST DESCENT) METHOD
The search direction used is the negative of the gradient at the current point x^(k):
s^(k) = −∇f(x^(k))
As this direction gives the maximum descent in function value, the method is also known as the steepest descent method. The algorithm guarantees improvement in the function value at every iteration.

Algorithm in brief: at every iteration,
- compute the derivative at the current point
- perform a unidirectional search in the direction negative to this derivative to find the minimum point along that direction
- the minimum point becomes the current point and the search continues from it
The algorithm continues until a point with a small enough gradient vector is found.

Algorithm in Steps
STEP 1: CHOOSE M (MAX. NO. OF ITERATIONS), ε1, ε2, x^0; SET k = 0
STEP 2: CALCULATE ∇f(x^k)
STEP 3: IF ‖∇f(x^k)‖ ≤ ε1, TERMINATE; IF k ≥ M, TERMINATE
STEP 4: PERFORM A UNIDIRECTIONAL SEARCH USING ε2:

MIN f(x^(k+1)) = f(x^k − α ∇f(x^k))
STEP 5: IF ‖x^(k+1) − x^k‖ / ‖x^k‖ ≤ ε1, TERMINATE; ELSE k = k + 1, GOTO STEP 2

- THE METHOD WORKS WELL WHEN x^k IS FAR FROM x* (THE OPTIMUM)
- IF THE POINT IS CLOSE TO x*, THE CHANGE IN THE GRADIENT VECTOR IS VERY SMALL
- OTHER METHODS USE VARIATIONS:
  - SECOND DERIVATIVES (NEWTON'S)
  - A COMBINATION (MARQUARDT'S)
  - CONJUGATE GRADIENT METHOD
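
A compact Python sketch of Cauchy's method (STEPS 1-5 above), assuming SciPy's bounded scalar minimizer for the unidirectional search in the interval (0, 1), as in the worked example that follows; the analytic gradient of the Himmelblau function is used instead of a numerical one:

import numpy as np
from scipy.optimize import minimize_scalar

def himmelblau(x):
    return (x[0]**2 + x[1] - 11)**2 + (x[0] + x[1]**2 - 7)**2

def grad(x):
    # analytic gradient of the Himmelblau function
    g1 = 4 * x[0] * (x[0]**2 + x[1] - 11) + 2 * (x[0] + x[1]**2 - 7)
    g2 = 2 * (x[0]**2 + x[1] - 11) + 4 * x[1] * (x[0] + x[1]**2 - 7)
    return np.array([g1, g2])

def steepest_descent(f, grad, x0, M=100, eps1=1e-3, eps2=1e-3):
    x = np.asarray(x0, float)
    for k in range(M):                                    # STEP 3: stop after M
        g = grad(x)                                       # STEP 2
        if np.linalg.norm(g) <= eps1:                     # STEP 3: small gradient
            break
        res = minimize_scalar(lambda a: f(x - a * g),     # STEP 4: line search
                              bounds=(0.0, 1.0), method='bounded',
                              options={'xatol': eps2})    # eps2: search accuracy
        x_new = x - res.x * g
        if np.linalg.norm(x_new - x) / max(np.linalg.norm(x), 1e-12) <= eps1:
            return x_new                                  # STEP 5: little movement
        x = x_new
    return x

print(steepest_descent(himmelblau, grad, [0.0, 0.0]))     # approaches (3, 2)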

Example: Consider the Himmelblau function:
f(x1, x2) = (x1^2 + x2 − 11)^2 + (x1 + x2^2 − 7)^2
Step 1: Choose a large value of M for proper convergence; M = 100 is normally chosen.
Initial conditions: M = 100, x^(0) = (0, 0), ε1 = ε2 = 10^−3, k = 0

Step 2: The derivative at the initial point x^0 is calculated and found to be (−14, −22).
Step 3: As the magnitude of the derivative is not small and k = 0 < M = 100, do not terminate; proceed to Step 4.

Step 4: Perform a line search from x^(0) in the direction −∇f(x^(0)) such that the function value is minimum. Along that direction, any point can be expressed by fixing a value of the parameter α^0 in the equation
x = x^0 − α^0 ∇f(x^0) = (14α^0, 22α^0)
Using the golden section search in the interval (0, 1), α^0* = 0.127 and the minimum point along the direction is x^1 = (1.788, 2.810).
Step 5: As x^1 and x^0 are quite different, we do not terminate but go back to Step 2. This completes one iteration.

Step 2: The derivative vector at this point is (−30.707, 18.803).
Step 3: The magnitude of the derivative vector is not smaller than ε1; thus we continue with Step 4.
Step 4: Another unidirectional search along (30.707, −18.803) from the point x^1 = (1.788, 2.810) using the golden section search finds the new point x^2 = (3.008, 1.99) with a function value equal to 0.018.
Continue the process until the termination criterion is reached.

Penalty function approach
A transformation method: convert the constrained problem to a sequence of unconstrained problems.
- Give a penalty to (violated) constraints
- Add it to the objective function and solve
- Use the result as the starting point for the next iteration
- Alter the penalties and repeat

Minimise f(x1, x2) = (x1^2 + x2 − 11)^2 + (x1 + x2^2 − 7)^2
Subject to g(x) = (x1 − 5)^2 + x2^2 − 26 ≥ 0
Penalised function: P(x) = f(x) + 0.1 ⟨(x1 − 5)^2 + x2^2 − 26⟩^2
(Figure: contour plot marking the feasible region, the infeasible region, and the minimum point.)

Process
1. Choose ε1, ε2, R^0, Ω.
2. Form the modified objective function P(x^k, R^k) = f(x^k) + Ω(R^k, g(x^k), h(x^k)).
3. Start with x^k. Find x^(k+1) so as to minimize P (use ε1).
4. If |P(x^(k+1), R^k) − P(x^k, R^k)| < ε2, terminate.
5. Else R^(k+1) = c R^k, k = k + 1; go to step 2.

At any stage, minimize P(x, R) = f(x) + Ω(R, g(x), h(x))
R = set of penalty parameters
Ω = penalty function

Types of penalty functions
Parabolic penalty: Ω = R{h(x)}^2
- for equality constraints
- penalizes only infeasible points
Interior penalty functions - penalize feasible points
Exterior penalty functions - penalize infeasible points
Mixed penalty functions - a combination of both

Infinite barrier penalty: Ω = R Σ_j |g_j(x)| (summed over the violated constraints)
- inequality constraints
- R is very large
- exterior
Log penalty: Ω = −R ln[g(x)]
- inequality constraints
- for feasible points
- interior
- initially large R
- larger penalty close to the boundary

Inverse penalty: Ω = R [1/g(x)]
- interior
- larger penalty close to the boundary
- initially large R
Bracket operator penalty: Ω = R ⟨g(x)⟩^2
- ⟨A⟩ = A if A < 0, else 0
- exterior
- initially small R
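
A rough Python sketch of the penalty function approach using the bracket operator penalty on the constrained Himmelblau problem shown earlier; SciPy's general unconstrained minimizer stands in for the methods above, and the starting point, R, c, and the termination test on the change in x are assumptions:

import numpy as np
from scipy.optimize import minimize

f = lambda x: (x[0]**2 + x[1] - 11)**2 + (x[0] + x[1]**2 - 7)**2
g = lambda x: (x[0] - 5.0)**2 + x[1]**2 - 26.0        # constraint g(x) >= 0

def bracket(a):
    # bracket operator: <a> = a if a < 0, else 0 (penalizes only violations)
    return a if a < 0 else 0.0

def penalty_method(x0, R=0.1, c=10.0, eps=1e-4, max_outer=10):
    x = np.asarray(x0, float)
    for _ in range(max_outer):
        P = lambda y: f(y) + R * bracket(g(y))**2     # P(x, R) = f + Omega
        x_new = minimize(P, x, method='Nelder-Mead').x   # unconstrained subproblem
        if np.linalg.norm(x_new - x) < eps:
            return x_new                              # successive solutions agree
        x, R = x_new, c * R                           # restart from x, raise R
    return x

# starting from the unconstrained minimum (3, 2), which violates g(x) >= 0,
# the iterates are pushed out towards the constraint boundary g(x) = 0
print(penalty_method([3.0, 2.0]))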

Direct search
Variable elimination
- for equality constraints
- express one variable as a function of the others and eliminate it
- remove all equality constraints this way

Complex search
- generate a set of points at random
- if a point is infeasible, push it towards the centroid of the feasible points
- take the worst point and reflect it beyond the centroid of the remaining points

Complex Search Algo
S1: Assume bounds on x (x^L, x^U), a reflection parameter α, and termination parameters ε & δ
S2: Generate a set of P (= 2N) initial points. For each point, sample N times to determine x_i^(p) within the given bounds; if x^(p) is infeasible, calculate the centroid x̄ of the current set of points and set x^(p) = x^(p) + 1/2 (x̄ − x^(p)) until x^(p) is feasible. If x^(p) is feasible, continue till P points are obtained.

Complex Search Algo (contd)
S3: Reflection step
- select x^R such that f(x^R) = max(f(x^p)) = F_max
- calculate x̄, the centroid of the remaining points
- x^m = x̄ + α (x̄ − x^R)
- If x^m is feasible and f(x^m) ≥ F_max, retract half the distance to the centroid x̄, and continue till f(x^m) < F_max
- If x^m is feasible and f(x^m) < F_max, go to S5
- If x^m is infeasible, go to S4

Complex Search Algo (contd)
S4: Check for feasibility of the solution
- For all i, reset violated variable bounds:
  if x_i^m < x_i^L, set x_i^m = x_i^L
  if x_i^m > x_i^U, set x_i^m = x_i^U
- If the resulting x^m is still infeasible, retract half the distance to the centroid; repeat till x^m is feasible

Complex Search Algo (contd)
S5: Replace x^R by x^m and check for termination:
f_mean = mean of f(x^p), x_mean = mean of x^p
Σ_p (f(x^p) − f_mean)^2 ≤ ε  and  Σ_p ‖x^p − x_mean‖^2 ≤ δ
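
A compact Python sketch of the complex search (S1-S5); the test problem (a convex feasible region, so that retraction towards the centroid works), the parameter values, and the simplified termination test on the spread of f alone are assumptions, and some bookkeeping is simplified:

import random
random.seed(0)

def centroid(pts):
    n = len(pts[0])
    return [sum(p[i] for p in pts) / len(pts) for i in range(n)]

def complex_search(f, feasible, lo, hi, alpha=1.3, eps=1e-6, max_iter=500):
    n, P = len(lo), 2 * len(lo)                           # S2: P = 2N points
    pts = []
    while len(pts) < P:
        x = [random.uniform(lo[i], hi[i]) for i in range(n)]
        while pts and not feasible(x):                    # push an infeasible point
            c = centroid(pts)                             # towards the centroid
            x = [xi + 0.5 * (ci - xi) for xi, ci in zip(x, c)]
        if feasible(x):
            pts.append(x)
    for _ in range(max_iter):
        pts.sort(key=lambda p: f(p))
        worst = pts[-1]                                   # S3: x^R with maximum f
        c = centroid(pts[:-1])
        xm = [ci + alpha * (ci - wi) for ci, wi in zip(c, worst)]
        for _ in range(50):                               # S3/S4: retraction loop
            xm = [min(max(v, lo[i]), hi[i]) for i, v in enumerate(xm)]
            if feasible(xm) and f(xm) < f(worst):
                break
            xm = [xi + 0.5 * (ci - xi) for xi, ci in zip(xm, c)]
        pts[-1] = xm                                      # S5: replace the worst
        fs = [f(p) for p in pts]
        fmean = sum(fs) / P
        if sum((v - fmean) ** 2 for v in fs) <= eps:      # S5: termination test
            break
    return min(pts, key=lambda p: f(p))

# assumed test problem: Himmelblau inside the circle (x1-5)^2 + x2^2 <= 26
f = lambda x: (x[0]**2 + x[1] - 11)**2 + (x[0] + x[1]**2 - 7)**2
feasible = lambda x: (x[0] - 5.0)**2 + x[1]**2 <= 26.0
print(complex_search(f, feasible, lo=[0.0, 0.0], hi=[6.0, 6.0]))   # typically near (3, 2)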


Characteristics of complex search
- suitable for complex feasible regions
- if the optimum is well inside the search space, the algorithm is efficient
- not so good if the search space is narrow, or the optimum is close to the constraint boundary