Introduction to Unconstrained Optimization: Part 2
James Allison
ME 555
January 29, 2007
Overview
- Recap selected concepts from last time (with examples)
  - Use of quadratic functions
  - Tests for positive definiteness
- Demonstration: tire clearance problem using gradient descent
- Convex sets and functions
- Demonstration: Newton's method for a non-quadratic problem
Use of Function Approximations
- Derivation of optimality conditions
- Development of optimization algorithms
- Numerical examples
Gradient of a Quadratic Function

Example:
    f(x) = f_0 + b^T x + (1/2) x^T A x

where:
    A = [a_1  a_2; a_3  a_4],   b = [b_1; b_2]

Gradient (assuming A is symmetric, i.e. a_2 = a_3):
    ∇f(x) = [b_1 + a_1 x_1 + a_2 x_2;  b_2 + a_3 x_1 + a_4 x_2] = b + A x
Hessian of a Quadratic Function

Gradient:
    ∇f(x) = [b_1 + a_1 x_1 + a_2 x_2;  b_2 + a_3 x_1 + a_4 x_2]

Hessian:
    H = [a_1  a_2; a_3  a_4] = A
Jacobian Recap
- Used in the discussion of Newton's method
- Used again in Ch. 5
- First derivative of a vector-valued, multivariate function

If f = [f_1, f_2, ..., f_m]^T and x = [x_1, x_2, ..., x_n]^T, then

    J = [∂f_1/∂x_1 ... ∂f_1/∂x_n;  ...;  ∂f_m/∂x_1 ... ∂f_m/∂x_n]

Written in terms of the gradients of f:

    J = [∇f_1(x)^T;  ∇f_2(x)^T;  ...;  ∇f_m(x)^T]
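As a sanity check on hand-derived Jacobians, a finite-difference approximation can be compared against the analytical result. A minimal sketch (the example function `f` and step size `h` are illustrative assumptions, not from the slides):

```python
import math

def jacobian_fd(f, x, h=1e-6):
    """Forward-difference approximation of the m-by-n Jacobian of f at x."""
    fx = f(x)
    m, n = len(fx), len(x)
    J = [[0.0] * n for _ in range(m)]
    for j in range(n):
        xp = list(x)
        xp[j] += h                      # perturb one coordinate at a time
        fxp = f(xp)
        for i in range(m):
            J[i][j] = (fxp[i] - fx[i]) / h
    return J

# Example: f(x) = [x1^2 + x2, sin(x1)]; exact J = [[2x1, 1], [cos(x1), 0]]
f = lambda x: [x[0]**2 + x[1], math.sin(x[0])]
J = jacobian_fd(f, [1.0, 2.0])
```

Each row of the result is one gradient transposed, matching the row-of-gradients form above.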
Tests for Positive Definiteness
1. λ_i > 0, i = 1, 2, ..., n
   - Practical, fast algorithms available; eigenvalues provide insight (How?)
2. Determinants of all leading principal minors are positive
   - What is a leading principal minor? Determinant?
   - Useful for quick checks on small matrices
   - Equivalent to all eigenvalues positive
3. All pivots of A (from Gaussian elimination) are positive
   - Also equivalent to all eigenvalues positive (may as well compute eigenvalues)
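The first two tests can be sketched for a symmetric 2×2 matrix, where the eigenvalues have a closed form. The example matrices here are illustrative assumptions, not from the slides:

```python
import math

def eig_sym2(a11, a12, a22):
    """Eigenvalues of the symmetric 2x2 matrix [[a11, a12], [a12, a22]]."""
    mean = 0.5 * (a11 + a22)
    r = math.hypot(0.5 * (a11 - a22), a12)
    return mean - r, mean + r

def is_pd_minors(a11, a12, a22):
    """Test 2: leading principal minors a11 and det(A) both positive."""
    return a11 > 0 and (a11 * a22 - a12 * a12) > 0

# A = [[2, -1], [-1, 2]]: eigenvalues 1 and 3, so positive definite;
# both tests agree.
lam1, lam2 = eig_sym2(2.0, -1.0, 2.0)
pd = is_pd_minors(2.0, -1.0, 2.0)

# A = [[1, 2], [2, 1]]: det = -3 < 0, so indefinite.
not_pd = is_pd_minors(1.0, 2.0, 1.0)
```

For larger matrices, a Cholesky factorization attempt is the usual practical test.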
Example

    min_x f(x) = 4x_1 + 2x_2 + 4x_1^2 − 4x_1x_2 + x_2^2

Gradient:
    ∇f(x) = [4 + 8x_1 − 4x_2;  2 − 4x_1 + 2x_2]

Hessian:
    H = [8  −4; −4  2]
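The derived gradient can be checked against the objective with central differences; the evaluation point below is an arbitrary illustrative choice:

```python
def f(x1, x2):
    """Objective from the slide: 4x1 + 2x2 + 4x1^2 - 4x1x2 + x2^2."""
    return 4*x1 + 2*x2 + 4*x1**2 - 4*x1*x2 + x2**2

def grad(x1, x2):
    """Hand-derived gradient from the slide."""
    return (4 + 8*x1 - 4*x2, 2 - 4*x1 + 2*x2)

# Central differences at an arbitrary test point
x1, x2 = 0.7, -1.3
h = 1e-6
g_fd = ((f(x1 + h, x2) - f(x1 - h, x2)) / (2 * h),
        (f(x1, x2 + h) - f(x1, x2 - h)) / (2 * h))
g = grad(x1, x2)

# Note: H = [8 -4; -4 2] has det(H) = 0 (eigenvalues 10 and 0),
# so the Hessian is only positive semidefinite.
```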
Saddle Point Descent Example

How can you determine what directions will result in descent after moving from a saddle point?

Approach: find perturbations (Δf = f(x) − f(x_0)) that result in a function decrease.

Example (from Monday):
    f(x) = x^T A x,  where A = [5  2.6; 2.6  −2]

    v_1 = [0.949; 0.314],  λ_1 = 5.86
    v_2 = [−0.314; 0.949],  λ_2 = −2.86
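The descent directions follow from the eigendecomposition: a step of size α along a unit eigenvector changes f by α²λ, so directions with a negative eigenvalue give descent. A minimal sketch, assuming the reconstructed A = [5 2.6; 2.6 −2] from the slide:

```python
import math

# f(x) = x^T A x with a saddle point at the origin
a11, a12, a22 = 5.0, 2.6, -2.0

def f(x1, x2):
    return a11*x1*x1 + 2*a12*x1*x2 + a22*x2*x2

# Eigenvalues of symmetric A (closed form for the 2x2 case)
mean = 0.5 * (a11 + a22)
r = math.hypot(0.5 * (a11 - a22), a12)
lam1, lam2 = mean + r, mean - r        # approx. 5.86 and -2.86

# Eigenvector for lam2: (A - lam2*I)v = 0 gives v proportional to
# (a12, lam2 - a11); normalize it.
v2 = (a12, lam2 - a11)
nrm = math.hypot(*v2)
v2 = (v2[0] / nrm, v2[1] / nrm)

# Moving from the saddle along v2 decreases f (delta_f = alpha^2 * lam2 < 0);
# moving along v1 increases it.
df_descent = f(0.1 * v2[0], 0.1 * v2[1])
```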
Recap
[Figure: contour plots]
Recap / Saddle Point Descent Example
[Figure: three contour plots, labeled λ_1 = 0.769, λ_2 = 7.23;  λ_1 = 0.723, λ_2 = 0.769;  λ_1 = 5.86, λ_2 = −2.86]
Tire Clearance Design Problem
- Physical design optimization problem
- Simplified to two variables for visualization
- Solved using gradient descent

What is the minimum clearance between a vehicle tire and its surrounding wheel well throughout its range of motion?
Orthogonal Gradient Descent Directions
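With exact line search on a quadratic, each new steepest-descent direction is orthogonal to the previous one, which produces the characteristic zig-zag path. A minimal sketch (the matrix Q, vector b, and starting point are illustrative assumptions, not from the slides):

```python
# Steepest descent with exact line search on f(x) = 1/2 x^T Q x - b^T x.
# For this quadratic the exact step length is alpha = (g^T g) / (g^T Q g).

def matvec(Q, x):
    return [Q[0][0]*x[0] + Q[0][1]*x[1], Q[1][0]*x[0] + Q[1][1]*x[1]]

def dot(u, v):
    return u[0]*v[0] + u[1]*v[1]

Q = [[4.0, 1.0], [1.0, 3.0]]   # a positive definite example matrix
b = [1.0, 2.0]

x = [2.0, 1.0]
grads = []
for _ in range(3):
    qx = matvec(Q, x)
    g = [qx[0] - b[0], qx[1] - b[1]]        # gradient = Qx - b
    grads.append(g)
    alpha = dot(g, g) / dot(g, matvec(Q, g))  # exact line search step
    x = [x[0] - alpha * g[0], x[1] - alpha * g[1]]

# Consecutive search directions are orthogonal: g_k . g_{k+1} = 0
ortho01 = dot(grads[0], grads[1])
ortho12 = dot(grads[1], grads[2])
```

The orthogonality follows from the line-search optimality condition d/dα f(x − αg) = 0, which is exactly −∇f(x_{k+1})^T g_k = 0.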
Convex Sets and Functions

Convex set: a line segment connecting any two points in the set lies entirely within the set.
    x(λ) = λx_2 + (1 − λ)x_1,  0 ≤ λ ≤ 1

Convex function: a line segment connecting any two points on the function lies on or above the function.
    f(x(λ)) ≤ λf(x_2) + (1 − λ)f(x_1)
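The defining inequality can be spot-checked numerically by sampling random point pairs and λ values. A minimal sketch (the test functions, interval, and trial count are illustrative assumptions):

```python
import random

def is_convex_sample(f, a, b, trials=1000, seed=0):
    """Monte-Carlo check of f(lam*x2 + (1-lam)*x1) <= lam*f(x2) + (1-lam)*f(x1).

    Returns False as soon as a sampled triple violates the inequality;
    True only means no violation was found, not a proof of convexity.
    """
    rng = random.Random(seed)
    for _ in range(trials):
        x1 = rng.uniform(a, b)
        x2 = rng.uniform(a, b)
        lam = rng.random()
        lhs = f(lam * x2 + (1 - lam) * x1)
        rhs = lam * f(x2) + (1 - lam) * f(x1)
        if lhs > rhs + 1e-12:
            return False
    return True

convex = is_convex_sample(lambda x: x * x, -5, 5)       # x^2 is convex
not_convex = is_convex_sample(lambda x: x ** 3, -5, 5)  # x^3 is not on [-5, 5]
```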
Newton's Method for a Cubic Objective

    min_x f(x) = x_1^3 − 9x_1^2 + 23x_1 + x_2^3 − 9x_2^2 + 23x_2 − 30

Gradient:
    ∇f(x) = [3x_1^2 − 18x_1 + 23;  3x_2^2 − 18x_2 + 23]

Hessian:
    H = [6x_1 − 18  0;  0  6x_2 − 18]
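Because the Hessian here is diagonal, the Newton iteration x ← x − H⁻¹∇f can be sketched in a few lines. The starting point is an illustrative assumption, chosen where H is positive definite so the iteration heads to the local minimum:

```python
def grad(x1, x2):
    """Gradient of the cubic objective from the slide."""
    return (3*x1**2 - 18*x1 + 23, 3*x2**2 - 18*x2 + 23)

def newton_step(x1, x2):
    """Full Newton step x - H^{-1} grad(f); H is diagonal for this objective."""
    g1, g2 = grad(x1, x2)
    h1, h2 = 6*x1 - 18, 6*x2 - 18
    return x1 - g1 / h1, x2 - g2 / h2

x = (4.0, 4.0)          # start in the region where H is positive definite
for _ in range(10):
    x = newton_step(*x)

# Stationary points satisfy 3x^2 - 18x + 23 = 0, i.e. x = 3 +/- 2/sqrt(3);
# the local minimum has x_i = 3 + 2/sqrt(3), roughly 4.155.
```

Starting instead where 6x_i − 18 < 0 would drive the iterates toward the local maximum, which is why Newton's method on non-quadratic objectives needs safeguards.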