Solution Methods
Richard Lusby
Department of Management Engineering, Technical University of Denmark
Lecture Overview
- One-Variable Unconstrained Optimization
- Unconstrained Optimization: Several Variables
- Quadratic Programming
- Separable Programming
- SUMT

R Lusby, Solution Methods
One Variable Unconstrained Optimization

Let's consider the simplest case: unconstrained optimization with just a single variable x, where the differentiable function f(x) to be maximized is concave. The necessary and sufficient condition for a particular solution x = x* to be a global maximum is

    df/dx = 0 at x = x*

If this equation can be solved directly, you are done. What if it cannot be solved easily analytically? We can use search procedures to solve it numerically: find a sequence of trial solutions that leads towards the optimal solution.
Bisection Method: One Variable Unconstrained Optimization

Can always be applied when f(x) is concave; it can also be used for certain other functions. If x* denotes the optimal solution, all that is needed is that

    df/dx > 0  if x < x*
    df/dx = 0  if x = x*
    df/dx < 0  if x > x*

These conditions automatically hold when f(x) is concave. The sign of the gradient indicates the direction of improvement.
Example

[Figure: a concave f(x), with df/dx = 0 at the maximizer x*]
Bisection Method: Overview

Given two values x_lo < x_hi, with f'(x_lo) > 0 and f'(x_hi) < 0:
- Find the midpoint x_new = (x_lo + x_hi)/2
- Find the sign of the slope at the midpoint
- The next two values are:
    x_hi = x_new  if f'(x_new) < 0
    x_lo = x_new  if f'(x_new) > 0

What is the stopping criterion? x_hi - x_lo < 2ε (the final midpoint is then within ε of x*)
Bisection Method: Example

The Problem

    maximize f(x) = 12x - 3x^4 - 2x^6

with initial bounds x_lo = 0, x_hi = 2.
Bisection Method: What does the function look like?

[Figure: plot of f(x) = 12x - 3x^4 - 2x^6 for 0 <= x <= 2]
Bisection Method: Calculations

Iteration | f'(x_new) | x_lo     | x_hi    | x_new     | f(x_new)
0         |           | 0        | 2       | 1         | 7.0000
1         | -12.00    | 0        | 1       | 0.5       | 5.7813
2         | 10.12     | 0.5      | 1       | 0.75      | 7.6948
3         | 4.09      | 0.75     | 1       | 0.875     | 7.8439
4         | -2.19     | 0.75     | 0.875   | 0.8125    | 7.8672
5         | 1.31      | 0.8125   | 0.875   | 0.84375   | 7.8829
6         | -0.34     | 0.8125   | 0.84375 | 0.828125  | 7.8815
7         | 0.51      | 0.828125 | 0.84375 | 0.8359375 | 7.8839

x* ≈ 0.836, with 0.828125 < x* < 0.84375 and f(x*) = 7.8839
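The calculations above can be reproduced with a short script (a sketch; the tolerance ε = 0.01 and the starting bounds 0 and 2 are taken from the example):

```python
def bisection_max(df, lo, hi, eps=0.01):
    """Maximize a concave f on [lo, hi] using only the sign of its derivative df."""
    while True:
        mid = (lo + hi) / 2
        if hi - lo <= 2 * eps:      # final trial is within eps of x*
            return mid
        if df(mid) > 0:             # slope positive: x* lies to the right
            lo = mid
        else:                       # slope negative: x* lies to the left
            hi = mid

f  = lambda x: 12*x - 3*x**4 - 2*x**6
df = lambda x: 12 - 12*x**3 - 12*x**5

x_star = bisection_max(df, 0, 2)
print(x_star, round(f(x_star), 4))   # 0.8359375 7.8839
```

Each iteration matches a row of the table: only the sign of f' at the midpoint is used to shrink the bracket.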
Bisection Method Continued

An intuitive and straightforward procedure, but it converges slowly: each iteration decreases the difference between the bounds by only one half. Only information on the first derivative of f(x) is used; more information could be obtained by looking at f''(x).
Newton's Method: Introduction

Basic Idea: approximate f(x) within the neighbourhood of the current trial solution by a quadratic function. This approximation is obtained by truncating the Taylor series after the second-derivative term:

    f(x_{i+1}) ≈ f(x_i) + f'(x_i)(x_{i+1} - x_i) + (f''(x_i)/2)(x_{i+1} - x_i)^2

Having set x_i at iteration i, this is just a quadratic function of x_{i+1}, which can be maximized by setting its derivative to zero.
Newton's Method: Overview

Maximize the quadratic approximation

    f(x_{i+1}) ≈ f(x_i) + f'(x_i)(x_{i+1} - x_i) + (f''(x_i)/2)(x_{i+1} - x_i)^2

by setting its derivative with respect to x_{i+1} to zero:

    f'(x_{i+1}) ≈ f'(x_i) + f''(x_i)(x_{i+1} - x_i) = 0

which gives the update

    x_{i+1} = x_i - f'(x_i)/f''(x_i)

What is the stopping criterion? |x_{i+1} - x_i| < ε
Same Example

The Problem

    maximize f(x) = 12x - 3x^4 - 2x^6

    f'(x)  = 12 - 12x^3 - 12x^5
    f''(x) = -36x^2 - 60x^4

    x_{i+1} = x_i + (1 - x_i^3 - x_i^5) / (3x_i^2 + 5x_i^4)
Newton's Method: the function again [Figure: same plot of f(x) = 12x - 3x^4 - 2x^6]
Newton's Method: Calculations

Iteration | x_i     | f(x_i) | f'(x_i) | f''(x_i) | x_{i+1}
1         | 1       | 7      | -12     | -96      | 0.875
2         | 0.875   | 7.8439 | -2.1940 | -62.733  | 0.84003
3         | 0.84003 | 7.8838 | -0.1325 | -55.279  | 0.83763
4         | 0.83763 | 7.8839 | -0.0006 | -54.790  | 0.83762

x* ≈ 0.83763, f(x*) = 7.8839
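The same iteration can be coded directly (a sketch; the starting point x = 1 and the tolerance below are assumptions chosen to match the table):

```python
def newton_max(df, ddf, x, eps=1e-4):
    """Maximize f by Newton's method: x_{i+1} = x_i - f'(x_i)/f''(x_i)."""
    while True:
        x_next = x - df(x) / ddf(x)
        if abs(x_next - x) < eps:     # stopping criterion |x_{i+1} - x_i| < eps
            return x_next
        x = x_next

df  = lambda x: 12 - 12*x**3 - 12*x**5   # f'(x)
ddf = lambda x: -36*x**2 - 60*x**4       # f''(x)

x_star = newton_max(df, ddf, x=1.0)
print(round(x_star, 4))   # 0.8376
```

Note how few iterations it takes compared with bisection: using second-derivative information buys a much faster (quadratic) convergence rate near x*.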
Several Variables: Newton still works

Newton (multivariable): given x, the next iterate x' maximizes the quadratic approximation

    f(x) + ∇f(x)(x' - x) + (1/2)(x' - x)^T H(x)(x' - x)

    x' = x - H(x)^{-1} ∇f(x)^T

Gradient search: the next iterate maximizes f along the gradient ray

    maximize: g(t) = f(x + t ∇f(x)^T)  subject to t ≥ 0
    x' = x + t* ∇f(x)^T
Example

The Problem

    maximize: f(x, y) = 2xy + 2y - x^2 - 2y^2
Example Continued

The vector of partial derivatives is given as

    ∇f(x) = (∂f/∂x_1, ∂f/∂x_2, ..., ∂f/∂x_n)

Here

    ∂f/∂x = 2y - 2x
    ∂f/∂y = 2x + 2 - 4y

Suppose we select the point (x, y) = (0, 0) as our initial point: ∇f(0, 0) = (0, 2).
Example Continued

Perform an iteration:

    x = 0 + t(0) = 0
    y = 0 + t(2) = 2t

Substituting these expressions into f(x) we get

    f(x + t ∇f(x)) = f(0, 2t) = 2(2t) - 2(2t)^2 = 4t - 8t^2

Differentiate with respect to t:

    d/dt (4t - 8t^2) = 4 - 16t = 0

Therefore t* = 1/4, and x' = (0, 0) + (1/4)(0, 2) = (0, 1/2).
Example: Perform a second iteration

The gradient at x' = (0, 1/2) is ∇f(0, 1/2) = (1, 0). Determine the step length:

    x = (0, 1/2) + t(1, 0)

Substituting these expressions into f(x) we get

    f(x + t ∇f(x)) = f(t, 1/2) = t - t^2 + 1/2

Differentiate with respect to t:

    d/dt (t - t^2 + 1/2) = 1 - 2t = 0

Therefore t* = 1/2, and x'' = (0, 1/2) + (1/2)(1, 0) = (1/2, 1/2).
[Figure: the zig-zag path of gradient search in the (x, y) plane: (0, 1/2), (1/2, 1/2), (1/2, 3/4), (3/4, 3/4), (3/4, 7/8), (7/8, 7/8), ..., converging to the optimum (1, 1)]
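The zig-zag iterates above can be generated with an exact line search. For a quadratic objective the optimal step length has a closed form (a sketch; the constant Hessian H below is worked out by hand from f):

```python
import numpy as np

# f(x, y) = 2xy + 2y - x^2 - 2y^2 is a concave quadratic with constant Hessian.
H = np.array([[-2.0,  2.0],
              [ 2.0, -4.0]])

def grad(p):
    x, y = p
    return np.array([2*y - 2*x, 2*x + 2 - 4*y])

def gradient_search(p, eps=1e-6, max_iter=1000):
    p = np.asarray(p, dtype=float)
    for _ in range(max_iter):
        g = grad(p)
        if np.abs(g).max() < eps:
            break
        # Exact line search: maximizing f(p + t*g) over t >= 0 gives
        # t* = -(g.g)/(g^T H g), which is positive since H is negative definite.
        t = -g.dot(g) / g.dot(H).dot(g)
        p = p + t * g
    return p

print(gradient_search([0.0, 0.0]))   # approaches the optimum (1, 1)
```

The first two steps reproduce the slides: t* = 1/4 takes (0, 0) to (0, 1/2), then t* = 1/2 takes it to (1/2, 1/2).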
Quadratic Programming

    maximize:   c^T x - (1/2) x^T Q x
    subject to: Ax ≤ b   (λ)
                x ≥ 0    (μ)

Q is symmetric and positive semidefinite.

Lagrangian:

    L(x, λ, μ) = c^T x - (1/2) x^T Q x + λ^T (b - Ax) + μ^T x

Applying the KKT conditions yields

    Qx + A^T λ - μ = c
    Ax + v = b
    x, λ, μ, v ≥ 0
    x^T μ = 0, λ^T v = 0   (complementarity constraints)
Example

Problem

    minimize:   f(x)
    subject to: g_1(x) ≤ 0
                g_2(x) ≤ 0

KKT Conditions

    ∇f(x) + u_1 ∇g_1(x) + u_2 ∇g_2(x) = 0
    u_1 g_1(x) = 0,  u_1 ≥ 0
    u_2 g_2(x) = 0,  u_2 ≥ 0
QP Solution

KKT Conditions

    Qx + A^T λ - μ = c
    Ax + v = b
    x, λ, μ, v ≥ 0
    x^T μ = 0, λ^T v = 0

- Add artificial variables to constraints with positive c_j
- Subtract artificial variables from constraints with negative b_i
- The initial basic variables are the artificial variables, some of μ, and some of v
- Do phase 1 of the simplex method with a restricted-entry rule, which ensures the complementarity constraints are always satisfied
Example

The Problem

    maximize:   15x + 30y + 4xy - 2x^2 - 4y^2
    subject to: x + 2y ≤ 30
                x, y ≥ 0
Example Continued

The parameters are as follows:

    c = (15, 30)^T,  A = [1 2],  x = (x, y)^T,  Q = | 4  -4 |,  b = 30
                                                    | -4  8 |

The non-linear component of the objective function is

    -(1/2) x^T Q x = -2x^2 + 4xy - 4y^2
Example: New Linear Program

    minimize:   Z = z_1 + z_2
    subject to:  4x - 4y +  λ - μ_1 + z_1 = 15
                -4x + 8y + 2λ - μ_2 + z_2 = 30
                  x + 2y + v = 30
                x, y, λ, μ_1, μ_2, v, z_1, z_2 ≥ 0

Complementarity conditions: x μ_1 = 0, y μ_2 = 0, λ v = 0
Modified Simplex: Initial Tableau

BV | Z  | x  | y  | λ  | μ1 | μ2 | v | z1 | z2 | rhs
Z  | -1 | 0  | -4 | -3 | 1  | 1  | 0 | 0  | 0  | -45
z1 | 0  | 4  | -4 | 1  | -1 | 0  | 0 | 1  | 0  | 15
z2 | 0  | -4 | 8  | 2  | 0  | -1 | 0 | 0  | 1  | 30
v  | 0  | 1  | 2  | 0  | 0  | 0  | 1 | 0  | 0  | 30
Modified Simplex: First Pivot (y enters, z2 leaves)

BV | Z  | x    | y | λ    | μ1 | μ2   | v | z1 | z2   | rhs
Z  | -1 | -2   | 0 | -2   | 1  | 1/2  | 0 | 0  | 1/2  | -30
z1 | 0  | 2    | 0 | 2    | -1 | -1/2 | 0 | 1  | 1/2  | 30
y  | 0  | -1/2 | 1 | 1/4  | 0  | -1/8 | 0 | 0  | 1/8  | 15/4
v  | 0  | 2    | 0 | -1/2 | 0  | 1/4  | 1 | 0  | -1/4 | 45/2
Modified Simplex: Second Pivot (x enters, v leaves)

BV | Z  | x | y | λ    | μ1 | μ2    | v   | z1 | z2    | rhs
Z  | -1 | 0 | 0 | -5/2 | 1  | 3/4   | 1   | 0  | 1/4   | -15/2
z1 | 0  | 0 | 0 | 5/2  | -1 | -3/4  | -1  | 1  | 3/4   | 15/2
y  | 0  | 0 | 1 | 1/8  | 0  | -1/16 | 1/4 | 0  | 1/16  | 75/8
x  | 0  | 1 | 0 | -1/4 | 0  | 1/8   | 1/2 | 0  | -1/8  | 45/4
Modified Simplex: Final Tableau (λ enters, z1 leaves)

BV | Z  | x | y | λ | μ1    | μ2    | v    | z1    | z2    | rhs
Z  | -1 | 0 | 0 | 0 | 0     | 0     | 0    | 1     | 1     | 0
λ  | 0  | 0 | 0 | 1 | -2/5  | -3/10 | -2/5 | 2/5   | 3/10  | 3
y  | 0  | 0 | 1 | 0 | 1/20  | -1/40 | 3/10 | -1/20 | 1/40  | 9
x  | 0  | 1 | 0 | 0 | -1/10 | 1/20  | 2/5  | 1/10  | -1/20 | 12

Z = 0, so both artificial variables have been driven out: the optimal solution is (x, y) = (12, 9) with λ = 3.
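The solution read off the final tableau can be checked against the KKT conditions numerically (a sketch using the example's data):

```python
import numpy as np

Q = np.array([[4.0, -4.0], [-4.0, 8.0]])
c = np.array([15.0, 30.0])
A = np.array([[1.0, 2.0]])
b = np.array([30.0])

x   = np.array([12.0, 9.0])   # primal solution from the final tableau
lam = np.array([3.0])         # dual value of the single constraint

# Stationarity: Qx + A^T lam - mu = c, so mu = Qx + A^T lam - c
mu = Q @ x + A.T @ lam - c
v  = b - A @ x                # slack of Ax + v = b

print(mu, v)                  # both zero: x > 0 forces mu = 0, lam > 0 forces v = 0
assert np.allclose(mu, 0) and np.allclose(v, 0)
assert x @ mu == 0 and lam @ v == 0   # complementarity holds
```

Since both x components are positive, complementarity requires μ = 0, and since λ > 0 the constraint must be tight (v = 0); both are confirmed above.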
Separable Programming

The Problem

    maximize:   Σ_j f_j(x_j)
    subject to: Ax ≤ b
                x ≥ 0

Each f_j is approximated by a piecewise-linear function:

    f(y) = s_1 y_1 + s_2 y_2 + s_3 y_3
    y = y_1 + y_2 + y_3
    0 ≤ y_1 ≤ u_1,  0 ≤ y_2 ≤ u_2,  0 ≤ y_3 ≤ u_3
Separable Programming Continued

Special restrictions:

    y_2 = 0 whenever y_1 < u_1
    y_3 = 0 whenever y_2 < u_2

If each f_j is concave, these restrictions are automatically satisfied by the simplex method. Why?
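The reason the special restrictions take care of themselves is that concavity makes the segment slopes decrease, so a maximizing simplex never finds it profitable to use y_2 before y_1 is at its bound. A small check of this slope ordering, using a hypothetical concave function f(x) = 4x - x^2 with unit-length segments:

```python
# Segment slopes of the piecewise-linear approximation of a concave function
# are strictly decreasing, so filling y_2 before y_1 is exhausted can never
# pay when maximizing: the special restrictions hold automatically.
f = lambda x: 4*x - x**2               # example concave function (an assumption)
breakpoints = [0, 1, 2, 3]
slopes = [(f(b) - f(a)) / (b - a)
          for a, b in zip(breakpoints, breakpoints[1:])]
print(slopes)                          # [3.0, 1.0, -1.0]
assert all(s1 > s2 for s1, s2 in zip(slopes, slopes[1:]))
```

With s_1 > s_2 > s_3, an optimal basis always exhausts the earlier (steeper) segments first, exactly as the restrictions demand.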
Sequential Unconstrained Minimization Technique (SUMT)

The Problem

    maximize:   f(x)
    subject to: g(x) ≤ b
                x ≥ 0

For a sequence of decreasing positive r's, solve

    maximize: P(x; r) = f(x) - r B(x)

B is a barrier function approaching ∞ as a feasible point approaches the boundary of the feasible region. For example,

    B(x) = Σ_i 1/(b_i - g_i(x)) + Σ_j 1/x_j
The Problem

    maximize:   xy
    subject to: x^2 + y ≤ 3
                x, y ≥ 0

    r     | x     | y
    1     | 0.90  | 1.36
    10^-2 | 0.987 | 1.925
    10^-4 | 0.998 | 1.993

Class exercise: verify that the KKT conditions are satisfied at x = 1 and y = 2.
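A rough SUMT loop for this example can be sketched by pairing the barrier objective with a crude interior-point gradient ascent as the unconstrained maximizer (the step-size scheme and iteration limit below are assumptions of this sketch, not part of the method itself):

```python
def P(p, r):
    """Barrier objective: xy minus r times the barrier terms."""
    x, y = p
    s = 3 - x*x - y                        # slack of x^2 + y <= 3
    return x*y - r*(1/s + 1/x + 1/y)

def gradP(p, r):
    x, y = p
    s = 3 - x*x - y
    return (y - r*(2*x/s**2 - 1/x**2),
            x - r*(1/s**2 - 1/y**2))

def interior(p):
    x, y = p
    return x > 0 and y > 0 and x*x + y < 3

def maximize_P(p, r, iters=20000):
    # Gradient ascent with backtracking to stay strictly inside the region.
    for _ in range(iters):
        dx, dy = gradP(p, r)
        t = 0.25
        q = (p[0] + t*dx, p[1] + t*dy)
        while not interior(q) or P(q, r) <= P(p, r):
            t /= 2
            if t < 1e-15:
                return p                   # no improving step: stationary
            q = (p[0] + t*dx, p[1] + t*dy)
        p = q
    return p

p = (1.0, 1.0)
for r in [1.0, 1e-2, 1e-4]:
    p = maximize_P(p, r)                   # warm-start each solve from the last
    print(r, p)                            # approaches (1, 2) as r -> 0
```

Each barrier solve is warm-started from the previous one, tracing the same trajectory as the table: the iterates approach the constrained optimum (1, 2) from the interior as r shrinks.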
Class exercises

Separable programming

    maximize:   32x - x^4 + 4y - y^2
    subject to: x^2 + y^2 ≤ 9
                x, y ≥ 0

Formulate this as an LP model using x = 0, 1, 2, 3 and y = 0, 1, 2, 3 as breakpoints for the approximating piecewise-linear functions.
Class exercises Continued

Sequential Unconstrained Minimization Technique: if f is concave and every g_i is convex, show that the following function is concave:

    f(x) - r ( Σ_i 1/(b_i - g_i(x)) + Σ_j 1/x_j )