Introduction to Optimization Techniques: How to Solve Equations
Iterative Methods of Optimization

Solution of the nonlinear equations resulting from an optimization problem is usually beyond analytic tractability but can be treated by computer methods. There are two basic approaches for resolving complex optimization problems by numerical techniques:
1. Formulate the necessary conditions describing the optimal solution and solve these equations numerically.
2. Bypass the formulation of the necessary conditions and implement a direct iterative search for the optimum.
Successive Approximation

Consider an equation of the form x = T(x). A solution x is a fixed point of the transformation T: it is invariant under T. Form the successive vectors x_{n+1} = T(x_n); under appropriate conditions the sequence {x_n} converges to a solution of the original equation.

Definition. Let S be a subset of a normed space X and let T be a transformation mapping S into S. Then T is said to be a contraction mapping if there is an α, 0 ≤ α < 1, such that

    ||T(x_1) - T(x_2)|| ≤ α ||x_1 - x_2||   for all x_1, x_2 ∈ S.
Contraction Mapping Theorem

Theorem (Contraction Mapping Theorem). If T is a contraction mapping on a closed subset S of a Banach space, there is a unique vector x_0 ∈ S satisfying x_0 = T(x_0). Furthermore, x_0 can be obtained by the method of successive approximation starting from an arbitrary initial vector in S.

Proof (Existence and Uniqueness). First we show existence. Select an arbitrary element x_1 ∈ S and define the sequence {x_n} by the formula x_{n+1} = T(x_n). Then

    ||x_{n+1} - x_n|| = ||T(x_n) - T(x_{n-1})|| ≤ α ||x_n - x_{n-1}||.

Therefore ||x_{n+1} - x_n|| ≤ α^{n-1} ||x_2 - x_1||. It follows that, for p > n,

    ||x_p - x_n|| ≤ ||x_p - x_{p-1}|| + ||x_{p-1} - x_{p-2}|| + ... + ||x_{n+1} - x_n||
                 ≤ (α^{p-2} + ... + α^{n-1}) ||x_2 - x_1||
                 ≤ (α^{n-1} / (1 - α)) ||x_2 - x_1|| → 0 as n → ∞.
Contraction Mapping Theorem

Hence {x_n} is a Cauchy sequence. Since S is a closed subset of a complete space, there is an element x_0 ∈ S such that x_n → x_0.

We now show that x_0 = T(x_0). We have

    ||x_0 - T(x_0)|| ≤ ||x_0 - x_n|| + ||x_n - T(x_0)||
                     = ||x_0 - x_n|| + ||T(x_{n-1}) - T(x_0)||
                     ≤ ||x_0 - x_n|| + α ||x_{n-1} - x_0||,

which can be made arbitrarily small by appropriate choice of n; hence x_0 = T(x_0).

Uniqueness of x_0: assume that x_0 and y_0 are both fixed points. Then

    ||x_0 - y_0|| = ||T(x_0) - T(y_0)|| ≤ α ||x_0 - y_0||,

which forces ||x_0 - y_0|| = 0, i.e. x_0 = y_0.
Successive Approximatio f ( x) x x 3 x 2 x Successive approximatio. 6
Iterative Methods of Optimization
Newton's Method

An iterative technique for solving an equation of the form P(x) = 0.

Theorem. Let X and Y be Banach spaces and let P be a mapping from X to Y. Assume that
1. P is twice Fréchet differentiable and ||P''(x)|| ≤ K;
2. there is a point x_1 ∈ X such that P'(x_1) has a bounded inverse, with ||[P'(x_1)]^{-1}|| ≤ β_1 and ||[P'(x_1)]^{-1} P(x_1)|| ≤ η_1;
3. the constant h_1 = β_1 η_1 K satisfies h_1 ≤ 1/2.

Then the sequence

    x_{n+1} = x_n - [P'(x_n)]^{-1} P(x_n)

exists for all n and converges to a solution of P(x) = 0.
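The theorem is stated for Banach spaces; a minimal scalar sketch of the same iteration, with P, its derivative, and the test equation being illustrative choices of mine:

```python
def newton(P, dP, x, tol=1e-12, max_iter=50):
    """Newton iteration x_{n+1} = x_n - P(x_n)/P'(x_n) for a scalar equation P(x) = 0."""
    for _ in range(max_iter):
        step = P(x) / dP(x)   # scalar analogue of [P'(x_n)]^{-1} P(x_n)
        x = x - step
        if abs(step) < tol:
            return x
    raise RuntimeError("iteration did not converge")

# Solve x^2 - 2 = 0 starting from x = 1
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
print(root)  # ~1.414213562, i.e. sqrt(2)
```

The quadratic convergence guaranteed under condition 3 shows up clearly: the number of correct digits roughly doubles each step.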
Iterative Methods of Optimization
Descent Methods

Direct approach for optimization problems: iterate in such a way as to decrease the cost functional continuously from one step to the next.

General iteration formula:

    x_{n+1} = x_n + α_n p_n

α_n: scalar step size; p_n: direction vector.
- p_n is arranged so that f(x_n + α p_n) < f(x_n) for small positive α.
- α_n is selected to minimize f(x_n + α p_n); it is taken as the smallest positive root of the equation

    d/dα f(x_n + α p_n) = 0.
Iterative Methods of Optimization

[Figure: the descent process in X, showing level sets of increasing f, the iterate x_n, and the direction p_n.]

Newton's method for optimization problems:

    x_{n+1} = x_n - [f''(x_n)]^{-1} f'(x_n)
Iterative Methods of Optimization
Steepest Descent

Method of steepest descent: the most widely used descent procedure for minimizing a functional f. The direction vector p_n at a given point x_n is the negative of the gradient of f at x_n.

Example. Minimization of the quadratic functional

    f(x) = [x, Qx] - 2[x, b]

where Q is self-adjoint and positive definite. Assume the constants

    m = inf_{x ≠ θ} [x, Qx] / [x, x],   M = sup_{x ≠ θ} [x, Qx] / [x, x],   0 < m ≤ M.
Iterative Methods of Optimization

f is minimized by a unique vector x_0 satisfying Qx_0 = b. For an approximate solution x, call r = b - Qx the residual of the approximation; then f'(x) = -2r.

Method of steepest descent:

    x_{n+1} = x_n + α_n r_n,   r_n = b - Qx_n

α_n is chosen to minimize f(x_{n+1}):

    f(x_{n+1}) = [x_n + α r_n, Q(x_n + α r_n)] - 2[x_n + α r_n, b]
               = α^2 [r_n, Qr_n] - 2α [r_n, r_n] + [x_n, Qx_n] - 2[x_n, b],

which has a minimum at

    α_n = [r_n, r_n] / [r_n, Qr_n].
Iterative Methods of Optimization

Steepest descent method:

    (1)  x_{n+1} = x_n + ([r_n, r_n] / [r_n, Qr_n]) r_n,   r_n = b - Qx_n

Theorem. For any x_1 ∈ X the sequence {x_n} defined by (1) converges (in norm) to the unique solution x_0 of Qx = b. Furthermore, defining

    F(x) = [x - x_0, Q(x - x_0)],

the rate of convergence satisfies

    m [y_n, y_n] ≤ F(x_n) ≤ (1 - m/M)^{n-1} M [y, y],

where y = x_0 - x_1 and y_n = x_0 - x_n.
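A small numerical illustration of recursion (1) in finite dimensions, with [x, y] the ordinary dot product; the matrix Q and vector b are arbitrary test data of mine.

```python
import numpy as np

def steepest_descent(Q, b, x, tol=1e-10, max_iter=10000):
    """Minimize [x,Qx] - 2[x,b] via x_{n+1} = x_n + a_n r_n, a_n = [r,r]/[r,Qr]."""
    for _ in range(max_iter):
        r = b - Q @ x                       # residual; f'(x) = -2r
        if np.linalg.norm(r) < tol:
            break
        x = x + (r @ r) / (r @ (Q @ r)) * r
    return x

Q = np.array([[4.0, 1.0],
              [1.0, 3.0]])                  # self-adjoint, positive definite
b = np.array([1.0, 2.0])
x0 = steepest_descent(Q, b, np.zeros(2))
print(x0)  # converges to the solution of Qx = b, i.e. [1/11, 7/11]
```

Here m and M are the extreme eigenvalues of Q, and the observed geometric decrease of F(x_n) matches the (1 - m/M) factor of the theorem.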
Conjugate Direction Method

The problem of minimizing a quadratic functional on a Hilbert space can be viewed as a minimum norm problem in a Hilbert space. The normal equation can then be solved by the machinery of orthogonalization, the Gram-Schmidt procedure, and Fourier series; this is the philosophy of the conjugate direction method.

Quadratic objective functional:

    f(x) = [x, Qx] - 2[x, b]

Q: self-adjoint linear operator with

    m [x, x] ≤ [x, Qx] ≤ M [x, x],   M ≥ m > 0, for all x ∈ H.
Conjugate Direction Method

The unique vector x_0 minimizing f is the unique solution of the equation Qx = b.

Formulation as a minimum norm problem. Introduce the new inner product

    [x, y]_Q = [x, Qy].

Minimizing f(x) = [x, Qx] - 2[x, b] is equivalent to minimizing

    ||x - x_0||_Q^2 = [x - x_0, Q(x - x_0)].

Suppose we have (or can generate) a sequence of vectors {p_1, p_2, ...} orthogonal with respect to the inner product [ , ]_Q: a Q-orthogonal sequence, i.e. a sequence of conjugate directions.
Conjugate Direction Method

The vector x_0 can be expanded in a Fourier series with respect to {p_1, p_2, ...}:

    x_0 = Σ_k α_k p_k

with Fourier coefficients

    α_i = [p_i, x_0]_Q / [p_i, p_i]_Q = [p_i, Qx_0] / [p_i, Qp_i] = [p_i, b] / [p_i, Qp_i].

The n-th partial sum

    x_n = Σ_{k=1}^{n} α_k p_k

minimizes ||x - x_0||_Q over the subspace S{p_1, p_2, ..., p_n}. If {p_n} is complete, x_n converges to x_0.
Conjugate Direction Method

Theorem (Method of Conjugate Directions). Let {p_i} be a sequence in H such that [p_i, Qp_j] = 0 for i ≠ j, and such that the closed linear subspace generated by the sequence is H. Then for any x_1 ∈ H the sequence generated by the recursion

    x_{n+1} = x_n + α_n p_n,   α_n = [p_n, r_n] / [p_n, Qp_n],   r_n = b - Qx_n

satisfies [r_{n+1}, p_k] = 0, k = 1, 2, ..., n, and x_n → x_0, the unique solution of Qx = b.
Conjugate Direction Method

Proof. Define y_n = x_n - x_0; we must show y_n → θ. Since b - Qx_n = Q(x_0 - x_n) = -Qy_n,

    α_n = [p_n, b - Qx_n] / [p_n, Qp_n] = -[p_n, y_n]_Q / [p_n, p_n]_Q,

so the recursion becomes

    y_{n+1} = y_n - ([p_n, y_n]_Q / [p_n, p_n]_Q) p_n.

Since y_n - y_1 ∈ S{p_1, p_2, ..., p_{n-1}} and {p_i} is Q-orthogonal, [p_n, y_n]_Q = [p_n, y_1]_Q. Hence

    y_{n+1} = y_1 - Σ_{k=1}^{n} ([p_k, y_1]_Q / [p_k, p_k]_Q) p_k,

i.e. y_{n+1} is y_1 minus the n-th partial sum of a Fourier expansion of y_1. Completeness of {p_i} gives convergence with respect to || ||_Q, and since m[x, x] ≤ [x, Qx], convergence with respect to || ||_Q implies convergence with respect to || ||: y_n → θ, that is, x_n → x_0.
Conjugate Direction Method

The orthogonality relation [r_{n+1}, p_k] = 0 follows from the fact that the error y_{n+1} = x_{n+1} - x_0 is Q-orthogonal to the subspace S{p_1, p_2, ..., p_n}.

Example. Basic approximation problem in a Hilbert space X: find x̂ = Σ_{i=1}^{n} a_i y_i ∈ S{y_1, y_2, ..., y_n} which best approximates a given vector x.

Normal equation: Ga = b, where G is the Gram matrix of {y_1, y_2, ..., y_n} and b_i = [x, y_i]. Solving Ga = b is equivalent to the unconstrained minimization

    min_{a ∈ E^n}  a^T G a - 2 a^T b.
Conjugate Direction Method

This problem can be solved by the conjugate direction method, with the directions {p_i} constructed by the Gram-Schmidt procedure. The k-th approximation a^k yields

    x̂_k = Σ_{i=1}^{k} a_i^k y_i,

the best approximation to x in the subspace S{y_1, y_2, ..., y_k}. The conjugate direction method thus amounts to solving the original approximation problem by a Gram-Schmidt orthogonalization of the vectors {y_1, y_2, ..., y_n}.
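The whole pipeline (Q-orthogonalize a spanning set by Gram-Schmidt, then apply the recursion of the conjugate direction theorem) can be sketched as follows; the matrix, right-hand side, and starting vectors are illustrative data of mine.

```python
import numpy as np

def q_orthogonalize(vectors, Q):
    """Gram-Schmidt in the inner product [x, y]_Q = x^T Q y."""
    ps = []
    for v in vectors:
        p = v.astype(float).copy()
        for q in ps:
            p = p - (q @ (Q @ v)) / (q @ (Q @ q)) * q
        ps.append(p)
    return ps

def conjugate_directions(Q, b, x, ps):
    """x_{n+1} = x_n + ([p_n, r_n] / [p_n, Qp_n]) p_n, with r_n = b - Qx_n."""
    for p in ps:
        r = b - Q @ x
        x = x + (p @ r) / (p @ (Q @ p)) * p
    return x

Q = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])
ps = q_orthogonalize([np.array([1.0, 0.0]), np.array([0.0, 1.0])], Q)
x0 = conjugate_directions(Q, b, np.zeros(2), ps)
print(x0)  # the solution of Qx = b after n = 2 steps
```

Since the n directions span the whole space, the n-th partial sum of the Fourier expansion is x_0 itself, so the method terminates exactly in n steps (up to rounding).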
The Conjugate Gradient Method

A method of selecting the direction vectors for minimizing the functional f(x) = [x, Qx] - 2[b, x]:
1. Choose p_1 = r_1 = b - Qx_1: the direction of the negative gradient of f at x_1.
2. Choose

    p_2 = r_2 - ([r_2, Qp_1] / [p_1, Qp_1]) p_1,   where r_2 = b - Qx_2.

p_2 is Q-orthogonal to p_1, and S{p_1, p_2} = S{r_1, r_2}.

{r_1, r_2, ...}: the sequence of negative gradients; {p_1, p_2, ...}: the Q-orthogonalized version of {r_i}.
Conjugate Gradient Method

3. Recursive formula:

    x_{n+1} = x_n + ([r_n, p_n] / [p_n, Qp_n]) p_n
    p_{n+1} = r_{n+1} - ([r_{n+1}, Qp_n] / [p_n, Qp_n]) p_n

Theorem. Let x_1 ∈ H be given. Define p_1 = r_1 = b - Qx_1 and

    (*)  r_n = b - Qx_n,   x_{n+1} = x_n + α_n p_n,   p_{n+1} = r_{n+1} + β_n p_n

where α_n = [r_n, p_n] / [p_n, Qp_n] and β_n = -[r_{n+1}, Qp_n] / [p_n, Qp_n]. Then the sequence {x_n} converges to x_0 = Q^{-1} b.
Conjugate Gradient Method

Proof. Claim: this is a method of conjugate directions. Assume the claim is true for {p_k}, {x_k}, k ≤ n; we show that it is true for one more step. Since α_n is chosen according to the conjugate direction method, we must only show that p_{n+1} is Q-orthogonal to {p_k}, k ≤ n:

    [p_{n+1}, Qp_k] = [r_{n+1}, Qp_k] + β_n [p_n, Qp_k].

For k = n,

    [p_{n+1}, Qp_n] = [r_{n+1}, Qp_n] + β_n [p_n, Qp_n] = 0

by the choice of β_n. For k < n, [p_n, Qp_k] = 0 by the induction hypothesis, and

    [r_{n+1}, Qp_k] = [Qp_k, r_{n+1}] = 0

because Qp_k ∈ S{p_1, p_2, ..., p_{k+1}} ⊂ S{p_1, p_2, ..., p_n}.
Conjugate Gradient Method

For any conjugate direction method, [r_{n+1}, p_i] = 0, i ≤ n; hence the inner products above vanish and the method is a conjugate direction method.

Next, we prove that the sequence {x_n} converges to x_0 = Q^{-1} b. Define

    E(x) = [x - x_0, Q(x - x_0)].

Then

    E(x_{n+1}) = E(x_n) - [r_n, p_n]^2 / [p_n, Qp_n].

From (*), p_n = r_n + β_{n-1} p_{n-1} and [r_n, p_{n-1}] = 0, so [r_n, p_n] = [r_n, r_n]. Also E(x_n) = [r_n, Q^{-1} r_n], since Q(x_n - x_0) = -r_n. Therefore

    (1)  E(x_{n+1}) = E(x_n) { 1 - [r_n, r_n]^2 / ([p_n, Qp_n] [r_n, Q^{-1} r_n]) }.
Conjugate Gradient Method

From (*) and the Q-orthogonality of p_n and p_{n-1},

    (2)  [p_n, Qp_n] = [r_n, Qr_n] - β_{n-1}^2 [p_{n-1}, Qp_{n-1}] ≤ [r_n, Qr_n] ≤ M [r_n, r_n].

By the definition of m,

    (3)  [r_n, Q^{-1} r_n] ≤ (1/m) [r_n, r_n].

From (1), (2), (3),

    E(x_{n+1}) ≤ (1 - m/M) E(x_n),

so E(x_n) → 0 and r_n → θ.
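Recursion (*) translates directly into code. This is a finite-dimensional sketch with [x, y] the dot product; the test matrix is mine.

```python
import numpy as np

def conjugate_gradient(Q, b, x, tol=1e-10):
    """Recursion (*): r_n = b - Qx_n, x_{n+1} = x_n + a_n p_n, p_{n+1} = r_{n+1} + b_n p_n."""
    r = b - Q @ x
    p = r.copy()
    for _ in range(len(b)):
        Qp = Q @ p
        a = (r @ p) / (p @ Qp)              # a_n = [r_n, p_n] / [p_n, Qp_n]
        x = x + a * p
        r = b - Q @ x
        if np.linalg.norm(r) < tol:
            break
        bn = -(r @ Qp) / (p @ Qp)           # b_n = -[r_{n+1}, Qp_n] / [p_n, Qp_n]
        p = r + bn * p
    return x

Q = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])
x0 = conjugate_gradient(Q, b, np.zeros(2))
print(np.allclose(Q @ x0, b))  # True: exact termination within n steps
```

Unlike the general conjugate direction method, no directions need to be prepared in advance: each p_{n+1} is built from the current negative gradient r_{n+1} and the single previous direction.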
Conjugate Gradient Method

The conjugate gradient method gives a significant improvement in the rate of convergence. For m[x, x] ≤ [x, Qx] ≤ M[x, x], the convergence rate satisfies

    ||x_{n+1} - x_0||^2 ≤ (4/m) [(√c - 1) / (√c + 1)]^{2n} E(x_1),   c = M/m,

whereas for steepest descent the best estimate is

    ||x_{n+1} - x_0||^2 ≤ (1/m) [(c - 1) / (c + 1)]^{2n} E(x_1).

Moreover, the conjugate direction method converges to the optimal solution within n steps in an n-dimensional quadratic problem.
PARTAN

An extension of the conjugate direction method, applicable to the minimization of a non-quadratic functional f: PARTAN, the method of PARallel TANgents.

[Figure: the PARTAN construction. r_1 is the negative gradient of f at x_1; A is the minimum of f along r_1 from x_1; B is the minimum of f along the negative gradient from A; x_2 is the minimum of f along the line(x_1, B).]
PARTAN

PARTAN procedure for minimizing an arbitrary functional f:
1. First minimize f along the negative gradient direction from x_1 to find the point A; then minimize along the negative gradient from A to find B.
2. Minimize f along the line determined by x_1 and B. Find x_2.
3. Set x_1 = x_2. Go to Step 1.

For quadratic functionals, PARTAN coincides with the conjugate gradient method.
- Rapid convergence on nonquadratic functionals.
- No complete theoretical convergence analysis is available.

Methods for Solving Constrained Problems

Devising a computational procedure for constrained problems requires a good deal of ingenuity and a thorough familiarity with the basic principles and existing techniques of the area.
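One PARTAN cycle can be sketched with a generic one-dimensional line minimization. The ternary search, its bracket [0, 5], and the quadratic test problem are my own illustrative choices, not part of the slides.

```python
import numpy as np

def line_min(f, x, d, lo=0.0, hi=5.0, iters=200):
    """Ternary search for min over t of f(x + t d) on [lo, hi]; assumes unimodality."""
    for _ in range(iters):
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if f(x + m1 * d) < f(x + m2 * d):
            hi = m2
        else:
            lo = m1
    return x + 0.5 * (lo + hi) * d

def partan_step(f, grad, x1):
    """One PARTAN cycle: two gradient minimizations, then accelerate along line(x1, B)."""
    A = line_min(f, x1, -grad(x1))          # minimum of f along -grad from x1
    B = line_min(f, A, -grad(A))            # minimum of f along -grad from A
    return line_min(f, x1, B - x1)          # x2: minimum of f along line(x1, B)

# On a quadratic, one PARTAN cycle reproduces the conjugate gradient point
Q = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
f = lambda x: x @ (Q @ x) - 2 * (b @ x)
grad = lambda x: 2 * (Q @ x - b)
x2 = partan_step(f, grad, np.zeros(2))
print(x2)  # ~[1/11, 7/11], the exact minimizer of this 2-D quadratic
```

This sketch restarts each cycle from the new point, matching Step 3 above; note that the method uses only function and gradient evaluations, so it applies unchanged to nonquadratic f.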
Projection Method

Optimization problem with constraints: use a descent method while remaining within the constraint region. Consider

    minimize  f(x)
    subject to  Ax = b

f: functional on the Hilbert space H; A: bounded linear operator from H into Y; b: fixed. The range of A is assumed to be closed.

1. Start from a point x_1 ∈ {x : Ax = b}.
2. Find p_1 (direction vector) from steepest descent or Newton's method.
3. Project p_1 onto the nullspace of A, N(A), to find the new direction vector g_1.
Projection Method

4. The next point is x_2 = x_1 + α_1 g_1, where α_1 is chosen from

    min over α of  f(x_1 + α g_1).

Since g_1 ∈ N(A),

    Ax_2 = A(x_1 + α_1 g_1) = Ax_1 = b,

so x_2 ∈ {x : Ax = b} remains a feasible solution.

Because R(A) is closed, -f'(x_1) decomposes as

    -f'(x_1) = g_1 + A* z_1,   g_1 ∈ N(A),  A* z_1 ∈ R(A*),

i.e. g_1 is the component of -f'(x_1) in N(A).
Projection Method

Explicitly, g_1 is chosen as

    g_1 = -[I - A* (A A*)^{-1} A] f'(x_1) ∈ N(A).

At the solution x_0, the projection of f'(x_0) onto N(A) vanishes:

    f'(x_0) ⊥ N(A),  i.e.  f'(x_0) = A* z_0 ∈ R(A*),

which is the necessary condition in terms of the Lagrange multiplier z_0.
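A finite-dimensional sketch of the projected-gradient step using the explicit projector onto N(A) above. The fixed step size and the test problem are my own choices; the slides choose α by line minimization instead.

```python
import numpy as np

def projected_gradient(f_grad, A, b, x, step=0.1, iters=500):
    """Descend along g = -[I - A*(AA*)^{-1}A] f'(x), staying on {x : Ax = b}."""
    P = np.eye(A.shape[1]) - A.T @ np.linalg.inv(A @ A.T) @ A   # projector onto N(A)
    assert np.allclose(A @ x, b)            # start from a feasible point
    for _ in range(iters):
        g = -P @ f_grad(x)                  # projected negative gradient
        x = x + step * g                    # A(x + step*g) = Ax = b: still feasible
    return x

# minimize ||x||^2 subject to x1 + x2 + x3 = 1; the minimizer is [1/3, 1/3, 1/3]
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([1.0])
x_star = projected_gradient(lambda x: 2 * x, A, b, np.array([1.0, 0.0, 0.0]))
print(x_star)  # ~[1/3, 1/3, 1/3]
```

At convergence the projected gradient P f'(x) vanishes, which is exactly the condition f'(x_0) ⊥ N(A) stated above.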
Primal-Dual Method

The duality theorem serves as a basis for an optimization procedure. Consider a dual method for the convex problem:

    minimize  f(x)
    subject to  G(x) ≤ θ,  x ∈ Ω

Assuming that the constraint is regular, this problem is equivalent to

    max over z* ≥ θ of  inf over x ∈ Ω of  { f(x) + <G(x), z*> }.

Define the dual functional

    φ(z*) = inf over x ∈ Ω of { f(x) + <G(x), z*> }.

Equivalent dual problem:

    maximize  φ(z*)
    subject to  z* ≥ θ
Primal-Dual Method

Example (Hildreth's Quadratic Programming Procedure). Consider

    minimize  (1/2) x^T Q x - b^T x
    subject to  Ax ≤ c

x: n-vector; Q: n x n positive definite matrix; A: m x n matrix; b: n-vector; c: m-vector.
Primal-Dual Method

Dual problem:

    maximize  -(1/2) λ^T P λ + λ^T d - (1/2) b^T Q^{-1} b
    subject to  λ ≥ 0

where P = A Q^{-1} A^T and d = A Q^{-1} b - c. Equivalently,

    minimize  (1/2) λ^T P λ - λ^T d
    subject to  λ ≥ 0

Optimal solution:

    x_0 = Q^{-1} (b - A^T λ_0)
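Hildreth's procedure solves this dual by cyclic coordinate descent, clipping each λ_i at zero after its one-dimensional minimization. A sketch, where the small test QP is my own:

```python
import numpy as np

def hildreth_qp(Q, b, A, c, sweeps=1000):
    """min (1/2)x^T Q x - b^T x s.t. Ax <= c, via coordinate descent on the dual."""
    Qinv = np.linalg.inv(Q)
    P = A @ Qinv @ A.T                      # P = A Q^{-1} A^T
    d = A @ Qinv @ b - c                    # d = A Q^{-1} b - c
    lam = np.zeros(A.shape[0])
    for _ in range(sweeps):
        for i in range(len(lam)):
            # unconstrained minimizer of (1/2) l^T P l - l^T d in coordinate i
            li = (d[i] - P[i] @ lam + P[i, i] * lam[i]) / P[i, i]
            lam[i] = max(0.0, li)           # project onto lam_i >= 0
    return Qinv @ (b - A.T @ lam)           # x0 = Q^{-1}(b - A^T lam0)

# min (1/2)||x||^2 - [1,1].x  subject to  x1 + x2 <= 1; the solution is [0.5, 0.5]
Q = np.eye(2)
b = np.array([1.0, 1.0])
A = np.array([[1.0, 1.0]])
c = np.array([1.0])
print(hildreth_qp(Q, b, A, c))  # ~[0.5, 0.5]
```

Each coordinate update needs only one row of P, so the method is attractive when m is large; the dual constraint λ ≥ 0 is handled by simple clipping rather than a projection.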