Department of Chemical and Biological Engineering LECTURE NOTE II. Chapter 3. Function of Several Variables


LECTURE NOTE II

Chapter 3. Functions of Several Variables

Unconstrained multivariable minimization problem:
$\min_{x} f(x)$, $x \in R^N$
where $x$ is a vector of design variables of dimension $N$, and $f$ is a scalar objective function.

- Gradient of $f$: $\nabla f = \left[ \dfrac{\partial f}{\partial x_1}\;\; \dfrac{\partial f}{\partial x_2}\;\; \dfrac{\partial f}{\partial x_3}\;\; \cdots\;\; \dfrac{\partial f}{\partial x_N} \right]^T$

- Possible locations of local optima:
  points where the gradient of $f$ is zero
  boundary points (only if a feasible region is defined)
  points where $f$ is discontinuous
  points where the gradient of $f$ is discontinuous or does not exist

- Assumption for the development of optimality criteria: $f$ and its derivatives exist and are continuous everywhere.

3.1 Optimality Criteria

- Optimality criteria are necessary to recognize the solution.
- Optimality criteria provide the motivation for most of the useful methods.
- Taylor series expansion of $f$:
  $f(x) = f(\bar{x}) + \nabla f(\bar{x})^T \Delta x + \tfrac{1}{2}\,\Delta x^T \nabla^2 f(\bar{x})\, \Delta x + O_3(\Delta x)$
  where $\bar{x}$ is the current expansion point, $\nabla f(\bar{x})$ is the gradient evaluated at $\bar{x}$, $\Delta x = x - \bar{x}$ is the change in $x$, $\nabla^2 f(\bar{x})$ is the $N \times N$ symmetric Hessian matrix at $\bar{x}$, and $O_3(\Delta x)$ is the error of the 2nd-order expansion.
- In order for $\bar{x}$ to be a local minimum:
  $\Delta f = f(x) - f(\bar{x}) \ge 0$ for all $\|x - \bar{x}\| \le \delta$ ($\delta > 0$).
- In order for $\bar{x}$ to be a strict local minimum:
  $\Delta f = f(x) - f(\bar{x}) > 0$ for all $0 < \|x - \bar{x}\| \le \delta$ ($\delta > 0$).

- Optimality criterion (strict):
  $\Delta f = f(x) - f(\bar{x}) = \nabla f(\bar{x})^T \Delta x + \tfrac{1}{2}\,\Delta x^T \nabla^2 f(\bar{x})\, \Delta x > 0$ for $0 < \|\Delta x\| \le \delta$
  $\Rightarrow\; \nabla f(\bar{x}) = 0$ and $\nabla^2 f(\bar{x}) > 0$ (positive definite)

- For the quadratic form $Q(z) = z^T A z$:
  $A$ is positive definite if $Q(z) > 0$ for all $z \ne 0$
  $A$ is positive semidefinite if $Q(z) \ge 0$ for all $z$, and $Q(z) = 0$ for some $z \ne 0$
  $A$ is negative definite if $Q(z) < 0$ for all $z \ne 0$
  $A$ is negative semidefinite if $Q(z) \le 0$ for all $z$, and $Q(z) = 0$ for some $z \ne 0$
  $A$ is indefinite if $Q(z) > 0$ for some $z$ and $Q(z) < 0$ for other $z$

Tests for positive definite matrices $A$:
  1. If any one of the diagonal elements is not positive, then $A$ is not p.d.
  2. All the leading principal determinants must be positive.
  3. All eigenvalues of $A$ are positive.

Tests for negative definite matrices $A$:
  1. If any one of the diagonal elements is not negative, then $A$ is not n.d.
  2. The leading principal determinants must alternate in sign starting from $D_1 < 0$ ($D_2 > 0$, $D_3 < 0$, $D_4 > 0$, ...).
  3. All eigenvalues of $A$ are negative.

Tests for positive semidefinite matrices $A$:
  1. If any one of the diagonal elements is negative, then $A$ is not p.s.d.
  2. All the principal determinants are nonnegative.

Tests for negative semidefinite matrices $A$:
  1. If any one of the diagonal elements is positive, then $A$ is not n.s.d.
  2. All the $k$-th order principal determinants are nonpositive if $k$ is odd, and nonnegative if $k$ is even.

Remark 1: A principal minor of order $k$ of an $N \times N$ matrix $Q$ is a submatrix of size $k \times k$ obtained by deleting any $N-k$ rows and their corresponding columns from $Q$.
Remark 2: The leading principal minor of order $k$ of an $N \times N$ matrix $Q$ is the submatrix of size $k \times k$ obtained by deleting the last $N-k$ rows and their corresponding columns.
Remark 3: The determinant of a principal minor is called a principal determinant. For an $N \times N$ matrix, there are $2^N - 1$ principal determinants in all.
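These tests are easy to check numerically. A minimal sketch (assuming NumPy and a small symmetric test matrix of my own choosing) that applies the eigenvalue test and the leading principal determinant test:

```python
import numpy as np

def leading_principal_determinants(A):
    """Determinants of the k-by-k upper-left submatrices, k = 1..N."""
    return [np.linalg.det(A[:k, :k]) for k in range(1, A.shape[0] + 1)]

def classify_definiteness(A, tol=1e-10):
    """Classify a symmetric matrix via its eigenvalues (test 3 above)."""
    w = np.linalg.eigvalsh(A)          # eigenvalues of a symmetric matrix
    if np.all(w > tol):
        return "positive definite"
    if np.all(w < -tol):
        return "negative definite"
    if np.all(w >= -tol):
        return "positive semidefinite"
    if np.all(w <= tol):
        return "negative semidefinite"
    return "indefinite"

A = np.array([[2.0, -1.0], [-1.0, 2.0]])    # an example p.d. matrix
print(leading_principal_determinants(A))     # both leading determinants positive
print(classify_definiteness(A))              # "positive definite"
```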

- The stationary point $\bar{x}$ (where $\nabla f(\bar{x}) = 0$) is:
  a minimum if $\nabla^2 f(\bar{x})$ is positive definite,
  a maximum if $\nabla^2 f(\bar{x})$ is negative definite,
  a saddle point if $\nabla^2 f(\bar{x})$ is indefinite.

- Theorem 3.1 (Necessary condition for a local minimum)
  For $x^*$ to be a local minimum of $f(x)$, it is necessary that $\nabla f(x^*) = 0$ and $\nabla^2 f(x^*) \ge 0$ (positive semidefinite).

- Theorem 3.2 (Sufficient condition for a strict local minimum)
  If $\nabla f(x^*) = 0$ and $\nabla^2 f(x^*) > 0$ (positive definite), then $x^*$ is a strict or isolated local minimum of $f(x)$.

Remark 1: The converse of Theorem 3.1 is not true (e.g., $f(x) = x^3$ at $x = 0$).
Remark 2: The converse of Theorem 3.2 is not true (e.g., $f(x) = x^4$ at $x = 0$).

3.2 Direct Search Methods

- Direct search methods use only function values.
- They are for the cases where $\nabla f$ is not available or may not exist.

Modified simplex search method (Nelder and Mead)
- In $n$ dimensions, a regular simplex is a polyhedron composed of $n+1$ equidistant points which form its vertices (an equilateral triangle in 2-D, a tetrahedron in 3-D).
- Let $x_i = (x_{i1}, x_{i2}, \ldots, x_{in})$ ($i = 1, 2, \ldots, n+1$) be the $i$-th vertex of the simplex on each step of the search. Define
  $f(x_h) = \max\{f(x_i);\ i = 1, \ldots, n+1\}$,
  $f(x_g) = \max\{f(x_i);\ i = 1, \ldots, n+1,\ i \ne h\}$, and
  $f(x_l) = \min\{f(x_i);\ i = 1, \ldots, n+1\}$.

Select an initial simplex and termination criteria (set $M = 0$).
i) Decide $x_h$, $x_g$, $x_l$ among the $(n+1)$ simplex vertices and let $x_c$ be the centroid of all vertices excluding the worst point $x_h$:
  $x_c = \dfrac{1}{n} \sum_{i \ne h} x_i$
ii) Calculate $f(x_h)$, $f(x_l)$, and $f(x_g)$. If $x_l$ is the same as in the previous iteration, let $M = M+1$. If $M > 1.65n + 0.05n^2$, set $M = 0$ and go to vi).
iii) Reflection: $x_r = x_c + \alpha(x_c - x_h)$ (usually $\alpha = 1$). If $f(x_l) \le f(x_r) \le f(x_g)$, then set $x_h = x_r$ and go to i).
iv) Expansion: if $f(x_r) < f(x_l)$, compute $x_e = x_c + \gamma(x_r - x_c)$

  ($2.8 \le \gamma \le 3.0$). If $f(x_e) \le f(x_r)$, then set $x_h = x_e$ and go to i); else set $x_h = x_r$ and go to i).
v) Contraction: if $f(x_r) \ge f(x_h)$, $x_t = x_c + \beta(x_h - x_c)$ ($0.4 \le \beta \le 0.6$); else if $f(x_r) > f(x_g)$, $x_t = x_c - \beta(x_h - x_c)$. Then set $x_h = x_t$ and go to i).
vi) If the simplex is small enough, then stop. Otherwise, Reduction: $x_i = x_l + 0.5(x_i - x_l)$ for $i = 1, 2, \ldots, n+1$, and go to i).

Remark 1: The indices $h$, $g$, and $l$ each take one of the values $i = 1, \ldots, n+1$.
Remark 2: The termination criteria can be that the longest segment between vertices is small enough and that the largest difference between function values is small enough.
Remark 3: If the contour of the objective function is severely distorted and elongated, the search can be very inefficient and fail to converge.

Hooke-Jeeves Pattern Search
- It consists of exploratory moves and pattern moves.
Select an initial guess $x^{(0)}$, increment vectors $\Delta_i$ for $i = 1, 2, \ldots, n$, and termination criteria. Start with $k = 1$.
i) Exploratory search:
  A. Let $i = 1$ and $x_B = x^{(k-1)}$.
  B. Try $x_N = x_B + \Delta_i$. If $f(x_N) < f(x_B)$, then $x_B = x_N$.
  C. Else, try $x_N = x_B - \Delta_i$. If $f(x_N) < f(x_B)$, then $x_B = x_N$.
  D. Else, let $i = i+1$ and go to B until $i > n$. Set $x_B^{(k)} = x_B$.
ii) If the exploratory search fails ($x_B^{(k)} = x^{(k-1)}$):
  A. If $\|\Delta_i\| < \varepsilon$ for $i = 1, 2, \ldots, n$, then $x^* = x^{(k-1)}$ and stop.
  B. Else, set $\Delta_i = 0.5\,\Delta_i$ for $i = 1, 2, \ldots, n$ and go to i).
iii) Pattern search:
  A. Let $x_P = x_B^{(k)} + (x_B^{(k)} - x_B^{(k-1)})$.
  B. If $f(x_P) < f(x_B^{(k)})$, then $x^{(k)} = x_P$, $k = k+1$, and go to i).
  C. Else, $x^{(k)} = x_B^{(k)}$, $k = k+1$, and go to i).

Remark 1: The HJ method may be terminated prematurely in the presence of severe nonlinearity and will degenerate to a sequence of exploratory moves.
Remark 2: For efficiency, the pattern search can be modified to perform a line search in the pattern search direction.
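A minimal, simplified sketch of the Hooke-Jeeves moves (exploratory move followed by a pattern move, with the increments halved when the exploratory search fails, as in step ii) above). The helper names, the test function, and the starting point are illustrative, not part of the original note:

```python
import numpy as np

def exploratory_move(f, x, delta):
    """Perturb each coordinate by +/- delta[i], keeping any improvement."""
    xb = x.copy()
    for i in range(len(x)):
        for step in (+delta[i], -delta[i]):
            trial = xb.copy()
            trial[i] += step
            if f(trial) < f(xb):
                xb = trial
                break
    return xb

def hooke_jeeves(f, x0, delta, eps=1e-6, max_iter=1000):
    x_prev = np.asarray(x0, dtype=float)
    delta = np.asarray(delta, dtype=float)
    for _ in range(max_iter):
        x_new = exploratory_move(f, x_prev, delta)
        if np.allclose(x_new, x_prev):            # exploratory search failed
            if np.all(delta < eps):
                return x_prev
            delta *= 0.5                           # reduce the increments
            continue
        x_pattern = x_new + (x_new - x_prev)       # pattern move
        x_prev = x_pattern if f(x_pattern) < f(x_new) else x_new
    return x_prev

f = lambda x: 100 * (x[1] - x[0]**2)**2 + (1 - x[0])**2   # Rosenbrock's function
print(hooke_jeeves(f, [-1.2, 1.0], [0.5, 0.5]))            # best point found
```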

Remark 3: Rosenbrock's rotating direction method rotates the exploratory search directions based on the previous moves using Gram-Schmidt orthogonalization:
- Let $\xi_1, \xi_2, \ldots, \xi_n$ be the initial search directions.
- Let $\alpha_i$ be the net distance moved in the $\xi_i$ direction, and
  $u_1 = \alpha_1\xi_1 + \alpha_2\xi_2 + \cdots + \alpha_n\xi_n$
  $u_2 = \alpha_2\xi_2 + \cdots + \alpha_n\xi_n$
  $\cdots$
  $u_n = \alpha_n\xi_n$
  Then $\hat{\xi}_1 = u_1/\|u_1\|$ and $\hat{\xi}_i = w_i/\|w_i\|$ for $i = 2, 3, \ldots, n$, where $w_i = u_i - \sum_{j=1}^{i-1} (u_i^T \hat{\xi}_j)\,\hat{\xi}_j$.
- Use $\hat{\xi}_1, \hat{\xi}_2, \ldots, \hat{\xi}_n$ as the new search directions for the next exploratory search.
Remark 4: More complicated methods can be derived. However, Powell's conjugate direction method (next) is the better choice if a more sophisticated algorithm is to be used.

Powell's Conjugate Direction Method
- Motivations:
  The method is based on a quadratic model of the objective function.
  If the objective function of $n$ variables is quadratic and in the form of a perfect square, the optimum can be found after exactly $n$ single-variable searches.
  Quadratic functions: $q(x) = a + b^T x + \tfrac{1}{2}\, x^T C x$
  Similarity transform (diagonalization): find $T$ such that, with $x = Tz$, $Q(x) = x^T C x = z^T T^T C T z = z^T D z$ ($D$ a diagonal matrix).
  cf) If $C$ is diagonalizable, the columns of $T$ are the eigenvectors of $C$. For optimization, however, $C$ of the objective function is generally not available.

- Conjugate directions
  Definition: Given an $n \times n$ symmetric matrix $C$, the directions $s_1, s_2, \ldots, s_r$ ($r \le n$) are said to be C-conjugate if the directions are linearly independent and $s_i^T C s_j = 0$ for all $i \ne j$.

Remark 1: If $s_i^T s_j = 0$ for all $i \ne j$, they are orthogonal.
Remark 2: If $s_i$ is the $i$-th column of a matrix $S$, then $S^T C S$ is a diagonal matrix.

- Parallel subspace property
  For a 2-D quadratic function, pick a direction $d$ and two initial points $x^{(1)}$ and $x^{(2)}$. Let $z^{(i)}$ be the minimum point of $\min_\lambda f(x^{(i)} + \lambda d)$. Since
  $\dfrac{\partial f}{\partial \lambda} = \nabla f(x)^T d = (b + Cx)^T d = 0$ at $x = z^{(1)}$ and at $x = z^{(2)}$,
  $(b + Cz^{(1)})^T d = 0$ and $(b + Cz^{(2)})^T d = 0 \;\Rightarrow\; (z^{(1)} - z^{(2)})^T C\, d = 0$
  Hence $(z^{(1)} - z^{(2)})$ and $d$ are conjugate directions.

- Extended parallel subspace property
  For a quadratic function, pick the $n$ directions $s_i = e_i$ ($i = 1, 2, \ldots, n$) and an initial point $x^{(0)}$.
  i) Perform a line search in the $s_n$ direction and let the result be $x^{(1)}$.
  ii) Perform $n$ line searches for $s_1, s_2, \ldots, s_n$, each starting from the previous line search result. Let the last point after the $n$ line searches be $z^{(1)}$.
  iii) Then replace $s_i$ with $s_{i+1}$ ($i = 1, \ldots, n-1$) and set $s_n = (z^{(1)} - x^{(1)})$.
  iv) Repeat ii) and iii) $(n-1)$ times. Then $s_1, s_2, \ldots, s_n$ are conjugate to each other.
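The parallel subspace property is easy to verify numerically. A minimal sketch (the quadratic, the direction $d$, and the two starting points are assumed for illustration) that minimizes along the same direction from two different points and checks that the difference of the minimizers is C-conjugate to $d$:

```python
import numpy as np

# Quadratic f(x) = a + b^T x + 0.5 x^T C x (values chosen for illustration)
C = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([-1.0, 2.0])

def line_minimum(x, d):
    """Exact minimizer of f(x + lam*d) along d for the quadratic above."""
    lam = -(b + C @ x) @ d / (d @ C @ d)
    return x + lam * d

d  = np.array([1.0, 2.0])
z1 = line_minimum(np.array([0.0, 0.0]), d)
z2 = line_minimum(np.array([3.0, -1.0]), d)

# (z1 - z2)^T C d should vanish: (z1 - z2) and d are C-conjugate
print((z1 - z2) @ C @ d)   # ~ 0 up to round-off
```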

- Given $C$, find $n$ conjugate directions:
  A. Choose $n$ linearly independent vectors $u_1, u_2, \ldots, u_n$. Let $z_1 = u_1$ and
    $z_k = u_k - \sum_{i=1}^{k-1} \dfrac{u_k^T C z_i}{z_i^T C z_i}\, z_i$ for $k = 2, 3, \ldots, n$
  B. Recursive method (from an arbitrary direction $z_1$):
    $z_2 = C z_1 - \dfrac{z_1^T C C z_1}{z_1^T C z_1}\, z_1$
    $z_{k+1} = C z_k - \dfrac{z_k^T C C z_k}{z_k^T C z_k}\, z_k - \dfrac{z_{k-1}^T C C z_k}{z_{k-1}^T C z_{k-1}}\, z_{k-1}$ for $k = 2, 3, \ldots, n-1$
    cf) The coefficient $b$ on $z_k$ is selected so that $z_k^T C z_{k+1} = z_k^T C (C z_k + b z_k) = 0$.

- Powell's conjugate direction method
  Select an initial guess $x^{(0)}$ and a set of $n$ linearly independent directions ($s_i = e_i$).
  i) Perform a line search in the $e_n$ direction, let the result be $x^{(1)}$, and set $x_0^{(1)} = x^{(1)}$ ($k = 1$).
  ii) Starting at $x_0^{(k)}$, perform $n$ line searches in the $s_i$ directions, each from the previous line search result, for $i = 1, 2, \ldots, n$. Let the point obtained from the $i$-th line search be $x_i^{(k)}$.
  iii) Form a new conjugate direction $s_{n+1}$ using the extended parallel subspace property:
    $s_{n+1} = (x_n^{(k)} - x_0^{(k)}) / \|x_n^{(k)} - x_0^{(k)}\|$
  iv) If $\|x_n^{(k)} - x_0^{(k)}\| < \varepsilon$, then $x^* = x_n^{(k)}$ and stop.
  v) Perform an additional line search in the $s_{n+1}$ direction and let the result be $x_{n+1}^{(k)}$.
  vi) Delete $s_1$ and replace $s_i$ with $s_{i+1}$ for $i = 1, 2, \ldots, n$. Then set $x_0^{(k+1)} = x_{n+1}^{(k)}$, $k = k+1$, and go to ii).

Remark 1: If the objective function is quadratic, the optimum will be found after $n$ iterations of the procedure.
Remark 2: Before step vi), a procedure is needed to check the linear independence of the conjugate direction set.
  A. Modification by Sargent:
    Suppose $\lambda^*$ is obtained by $\min_\lambda f(x^{(k)} + \lambda s_{n+1})$ and let $x^{(k+1)} = x^{(k)} + \lambda^* s_{n+1}$.
    Find $m$ such that $f(x_{m-1}^{(k)}) - f(x_m^{(k)}) = \max_i \{ f(x_{i-1}^{(k)}) - f(x_i^{(k)}) \}$.
    Check whether $[f(x^{(k)}) - f(x^{(k+1)})]\,\lambda^* < 0.5\,[f(x_{m-1}^{(k)}) - f(x_m^{(k)})]$. If yes, use the old directions again; else delete $s_m$ and add $s_{n+1}$.
  B. Modification by Zangwill:
    Let $D = [\,s_1\; s_2\; \cdots\; s_n\,]$ and find $m$ such that $\|x_{m-1}^{(k)} - x_m^{(k)}\| = \max_i \|x_{i-1}^{(k)} - x_i^{(k)}\|$.
    Check whether $\dfrac{\|x_{m-1}^{(k)} - x_m^{(k)}\|}{\|s_{n+1}\|}\,\det(D) \le \varepsilon$. If yes, use the old directions again; else delete $s_m$ and add $s_{n+1}$.
Remark 3: This method converges to a local minimum at a superlinear convergence rate.
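SciPy ships an implementation of Powell's conjugate direction method, so the algorithm above can be exercised directly; a minimal usage sketch on Rosenbrock's function (the tolerances are illustrative):

```python
from scipy.optimize import minimize

def rosenbrock(x):
    return 100.0 * (x[1] - x[0]**2)**2 + (1.0 - x[0])**2

res = minimize(rosenbrock, x0=[-1.2, 1.0], method="Powell",
               options={"xtol": 1e-8, "ftol": 1e-8})
print(res.x, res.nfev)   # minimizer near (1, 1); no gradients were required
```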

cf) Let $\lim_{k \to \infty} \dfrac{\|\varepsilon^{(k+1)}\|}{\|\varepsilon^{(k)}\|^{\,r}} = C$, where $\varepsilon^{(k)} = x^{(k)} - x^*$.
  If $C < 1$, then the method is convergent with order of convergence $r$:
  $r = 1$: linear convergence rate
  $r = 2$: quadratic convergence rate
  $r = 1$ and $C = 0$: superlinear convergence rate

Among unconstrained multidimensional direct search methods, Powell's conjugate direction method is the most recommended.

3.3 Gradient-Based Methods

- All techniques employ a similar iteration procedure:
  $x^{(k+1)} = x^{(k)} + \alpha^{(k)} s(x^{(k)})$
  where $\alpha^{(k)}$ is the step-length parameter found by a line search, and $s(x^{(k)})$ is the search direction.
- The $\alpha^{(k)}$ is decided by a line search in the search direction $s(x^{(k)})$:
  i) Start from an initial guess $x^{(0)}$ ($k = 0$).
  ii) Decide the search direction $s(x^{(k)})$.
  iii) Perform a line search in the search direction and get an improved point $x^{(k+1)}$.
  iv) Check the termination criteria. If satisfied, then stop.
  v) Else set $k = k+1$ and go to ii).
- Gradient-based methods require accurate values of the first derivatives of $f(x)$.
- Second-order methods additionally use values of the second derivatives of $f(x)$.

Steepest Descent Method (Cauchy's Method)
  $f(x) = f(x^{(k)}) + \nabla f(x^{(k)})^T \Delta x + \cdots$ (higher-order terms ignored)
  $\Delta f = f(x) - f(x^{(k)}) \approx \nabla f(x^{(k)})^T \Delta x$
  The steepest descent direction maximizes the descent:
  $\Delta x^* = \arg\max_{\Delta x} \left[ -\nabla f(x^{(k)})^T \Delta x \right] \;\Rightarrow\; \Delta x = -\alpha \nabla f(x^{(k)})$ ($\alpha > 0$)
  The search direction: $s(x^{(k)}) = -\alpha^{(k)} \nabla f(x^{(k)})$
  Termination criteria: $\|\nabla f(x^{(k)})\| \le \varepsilon_f$ and/or $\|x^{(k+1)} - x^{(k)}\| / \|x^{(k)}\| \le \varepsilon_x$
  Remark 1: This method shows slow improvement near the optimum (where $\nabla f(x) \to 0$).
  Remark 2: This method possesses the descent property: $\nabla f(x^{(k)})^T s(x^{(k)}) < 0$.
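A minimal steepest descent loop with the step length $\alpha^{(k)}$ found by a numerical line search (the test function and its gradient are assumed for illustration; the gradient-norm termination test above is used):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def steepest_descent(f, grad, x0, eps_f=1e-6, max_iter=500):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < eps_f:           # ||grad f|| termination test
            break
        s = -g                                   # steepest descent direction
        alpha = minimize_scalar(lambda a: f(x + a * s)).x   # line search on alpha
        x = x + alpha * s
    return x

f    = lambda x: (x[0] - 3)**2 + 10 * (x[1] + 2)**2
grad = lambda x: np.array([2 * (x[0] - 3), 20 * (x[1] + 2)])
print(steepest_descent(f, grad, [0.0, 0.0]))     # -> approximately (3, -2)
```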

Newton's Method (Modified Newton's Method)
  $f(x) = f(x^{(k)}) + \nabla f(x^{(k)})^T \Delta x + \tfrac{1}{2}\,\Delta x^T \nabla^2 f(x^{(k)})\,\Delta x$ (higher-order terms ignored)
  The optimality condition for the approximate derivative at $x^{(k)}$:
  $\nabla f(x) \approx \nabla f(x^{(k)}) + \nabla^2 f(x^{(k)})\,\Delta x = 0 \;\Rightarrow\; \Delta x = -\left[\nabla^2 f(x^{(k)})\right]^{-1} \nabla f(x^{(k)})$
  The search direction:
  $s(x^{(k)}) = -\left[\nabla^2 f(x^{(k)})\right]^{-1} \nabla f(x^{(k)})$ (Newton's method)
  $s(x^{(k)}) = -\alpha^{(k)} \left[\nabla^2 f(x^{(k)})\right]^{-1} \nabla f(x^{(k)})$ (modified Newton's method)
  Remark 1: In the modified Newton's method, the step-size parameter $\alpha^{(k)}$ is decided by a line search to ensure the best improvement.
  Remark 2: The calculation of the inverse of the Hessian matrix $\nabla^2 f(x^{(k)})$ imposes a heavy computational load when the dimension of the optimization variable is high.
  Remark 3: The family of Newton's methods exhibits quadratic convergence,
  $\|\varepsilon^{(k+1)}\| \le C\,\|\varepsilon^{(k)}\|^2$ ($C$ is related to the condition of the Hessian $\nabla^2 f(x^{(k)})$),
  which follows from
  $x^{(k+1)} - x^* = x^{(k)} - x^* - \left[\nabla^2 f(x^{(k)})\right]^{-1} \nabla f(x^{(k)})$
  together with $\nabla f(x^{(k)}) = \nabla f(x^*) + \nabla^2 f(x^*)(x^{(k)} - x^*) + \cdots$ and $\nabla f(x^*) = 0$.
  Also, if the initial guess is chosen such that $\|\varepsilon^{(0)}\| < 1/C$, the method will converge; this implies that if the initial guess is chosen poorly, it may diverge.
  Remark 4: The family of Newton's methods does not possess the descent property in general:
  $\nabla f(x^{(k)})^T s(x^{(k)}) = -\nabla f(x^{(k)})^T \left[\nabla^2 f(x^{(k)})\right]^{-1} \nabla f(x^{(k)}) < 0$ only if the Hessian is positive definite.
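A minimal (modified) Newton iteration sketch: the Newton step solves $\nabla^2 f\,\Delta x = -\nabla f$, and the line search on $\alpha$ follows the modified method above. The analytic gradient and Hessian of Rosenbrock's function are supplied here for illustration; this is a sketch, not a robust implementation (no safeguard for an indefinite Hessian):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def modified_newton(f, grad, hess, x0, eps=1e-8, max_iter=50):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < eps:
            break
        s = np.linalg.solve(hess(x), -g)                    # Newton direction
        alpha = minimize_scalar(lambda a: f(x + a * s)).x   # modified Newton: line search
        x = x + alpha * s
    return x

f    = lambda x: 100 * (x[1] - x[0]**2)**2 + (1 - x[0])**2
grad = lambda x: np.array([-400 * x[0] * (x[1] - x[0]**2) - 2 * (1 - x[0]),
                           200 * (x[1] - x[0]**2)])
hess = lambda x: np.array([[1200 * x[0]**2 - 400 * x[1] + 2, -400 * x[0]],
                           [-400 * x[0], 200.0]])
print(modified_newton(f, grad, hess, [-1.2, 1.0]))          # -> approximately (1, 1)
```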

Marquardt's Method (Marquardt's compromise)
- This method combines the steepest descent and Newton's methods.
- The steepest descent method gives good reduction in $f$ when $x^{(k)}$ is far from $x^*$; Newton's method possesses quadratic convergence near $x^*$.
- The search direction: $s(x^{(k)}) = -\left[ H^{(k)} + \lambda^{(k)} I \right]^{-1} \nabla f(x^{(k)})$
- Start with a large $\lambda^{(0)}$, say $10^4$ (steepest descent direction), and decrease it toward zero:
  If $f(x^{(k+1)}) < f(x^{(k)})$, then set $\lambda^{(k+1)} = 0.5\,\lambda^{(k)}$;
  else set $\lambda^{(k+1)} = 2\,\lambda^{(k)}$.

Remark 1: This is quite useful for problems with an objective function of the form $f(x) = f_1(x)^2 + f_2(x)^2 + \cdots + f_m(x)^2$ (Levenberg-Marquardt method).
Remark 2: Goldstein and Price algorithm
  Let $\delta$ ($0 < \delta < 0.5$) and $\gamma$ be positive numbers.
  i) Start from $x^{(0)}$ with $k = 1$ and let $\varphi(x^{(0)}) = \nabla f(x^{(0)})$.
  ii) Check if $\|\nabla f(x^{(k)})\| < \varepsilon$. If yes, then stop.
  iii) Calculate
    $g(x^{(k)}, \theta) = \dfrac{f(x^{(k)}) - f(x^{(k)} - \theta\varphi(x^{(k)}))}{\theta\, \nabla f(x^{(k)})^T \varphi(x^{(k)})}$
    If $g(x^{(k)}, 1) < \delta$, select $\theta$ such that $\delta \le g(x^{(k)}, \theta) \le 1 - \delta$; else $\theta = 1$.
  iv) Let $Q = [\,Q_1\; Q_2\; \cdots\; Q_n\,]$ (approximation of the Hessian), where
    $Q_i = \dfrac{\nabla f(x^{(k-1)} + \gamma^{(k-1)} e_i) - \nabla f(x^{(k-1)})}{\gamma^{(k-1)}}$
    If $Q(x^{(k)})$ is singular or $\nabla f(x^{(k)})^T Q(x^{(k)})^{-1} \nabla f(x^{(k)}) \le 0$, then $\varphi(x^{(k)}) = \nabla f(x^{(k)})$;
    else $\varphi(x^{(k)}) = Q(x^{(k)})^{-1} \nabla f(x^{(k)})$.
  v) Set $x^{(k+1)} = x^{(k)} - \theta\varphi(x^{(k)})$, $k = k+1$, and go to ii).
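A minimal sketch of Marquardt's compromise: the search direction is $-(H + \lambda I)^{-1}\nabla f$, with $\lambda$ halved after a successful step and doubled otherwise, following the update rule above. The test function, starting point, and iteration limits are illustrative assumptions:

```python
import numpy as np

def marquardt(f, grad, hess, x0, lam=1e4, eps=1e-8, max_iter=500):
    x = np.asarray(x0, dtype=float)
    n = x.size
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < eps:
            break
        s = np.linalg.solve(hess(x) + lam * np.eye(n), -g)   # compromise direction
        x_new = x + s
        if f(x_new) < f(x):
            x, lam = x_new, 0.5 * lam     # success: shift toward Newton's method
        else:
            lam = 2.0 * lam               # failure: shift toward steepest descent
    return x

f    = lambda x: 100 * (x[1] - x[0]**2)**2 + (1 - x[0])**2
grad = lambda x: np.array([-400 * x[0] * (x[1] - x[0]**2) - 2 * (1 - x[0]),
                           200 * (x[1] - x[0]**2)])
hess = lambda x: np.array([[1200 * x[0]**2 - 400 * x[1] + 2, -400 * x[0]],
                           [-400 * x[0], 200.0]])
print(marquardt(f, grad, hess, [-1.2, 1.0]))   # best point found; approaches (1, 1)
```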

Conjugate Gradient Method
- A quadratically convergent method: the optimum of an $n$-dimensional quadratic function can be found in approximately $n$ steps using exact arithmetic.
- This method generates conjugate directions using gradient information.
- For a quadratic function, consider two distinct points $x^{(0)}$ and $x^{(1)}$, and let
  $g(x^{(0)}) = \nabla f(x^{(0)}) = C x^{(0)} + b$ and $g(x^{(1)}) = \nabla f(x^{(1)}) = C x^{(1)} + b$, so that
  $\Delta g(x) = g(x^{(1)}) - g(x^{(0)}) = C(x^{(1)} - x^{(0)}) = C\,\Delta x$
  (property of a quadratic function: expression for a change in gradient).
- Iterative update equation: $x^{(k+1)} = x^{(k)} + \alpha^{(k)} s(x^{(k)})$
- Optimal step length:
  $\dfrac{\partial f(x^{(k+1)})}{\partial \alpha} = b^T s^{(k)} + s^{(k)T} C (x^{(k)} + \alpha s^{(k)}) = s^{(k)T}(b + C x^{(k)}) + \alpha\, s^{(k)T} C s^{(k)} = 0$
  $\Rightarrow\; \alpha^{(k)} = -\dfrac{s^{(k)T} \nabla f(x^{(k)})}{s^{(k)T} C s^{(k)}}$, and $\nabla f(x^{(k+1)})^T s^{(k)} = 0$ (optimality of the line search)
- Search direction: $s^{(k)} = -g^{(k)} + \sum_{i=0}^{k-1} \gamma^{(i)} s^{(i)}$ with $s^{(0)} = -g^{(0)}$
- In order that $s^{(k)}$ is C-conjugate to all previous search directions:
  i) Choose $\gamma^{(0)}$ such that $s^{(1)T} C s^{(0)} = 0$, where $s^{(1)} = -g^{(1)} + \gamma^{(0)} s^{(0)} = -g^{(1)} - \gamma^{(0)} g^{(0)}$:
    $[-g^{(1)} - \gamma^{(0)} g^{(0)}]^T C\,[\Delta x / \alpha^{(0)}] = 0$ with $\Delta x = \alpha^{(0)} s^{(0)}$
    $[-g^{(1)} - \gamma^{(0)} g^{(0)}]^T \Delta g = 0$ (property of a quadratic function)
    $\gamma^{(0)} = -\dfrac{g^{(1)T}(g^{(1)} - g^{(0)})}{g^{(0)T}(g^{(1)} - g^{(0)})} = \dfrac{g^{(1)T} g^{(1)}}{g^{(0)T} g^{(0)}}$
  ii) Choose $\gamma^{(0)}$ and $\gamma^{(1)}$ such that $s^{(2)T} C s^{(1)} = 0$ and $s^{(2)T} C s^{(0)} = 0$, where $s^{(2)} = -g^{(2)} + \gamma^{(1)} s^{(1)} + \gamma^{(0)} s^{(0)}$; this gives $\gamma^{(0)} = 0$ and $\gamma^{(1)} = \dfrac{g^{(2)T} g^{(2)}}{g^{(1)T} g^{(1)}}$.
  iii) In general, $s^{(k)} = -g^{(k)} + \gamma^{(k-1)} s^{(k-1)}$ with
    $s^{(k)} = -\nabla f(x^{(k)}) + \dfrac{\|g^{(k)}\|^2}{\|g^{(k-1)}\|^2}\, s^{(k-1)}$ (Fletcher and Reeves method)
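A minimal Fletcher-Reeves conjugate gradient loop with a numerical line search (a sketch only: the restarts and safeguards discussed in the remarks below are omitted, and the test function is assumed):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def fletcher_reeves(f, grad, x0, eps=1e-6, max_iter=200):
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    s = -g                                    # first direction: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < eps:
            break
        alpha = minimize_scalar(lambda a: f(x + a * s)).x   # line search on alpha
        x = x + alpha * s
        g_new = grad(x)
        gamma = (g_new @ g_new) / (g @ g)     # Fletcher-Reeves coefficient
        s = -g_new + gamma * s
        g = g_new
    return x

f    = lambda x: 100 * (x[1] - x[0]**2)**2 + (1 - x[0])**2
grad = lambda x: np.array([-400 * x[0] * (x[1] - x[0]**2) - 2 * (1 - x[0]),
                           200 * (x[1] - x[0]**2)])
print(fletcher_reeves(f, grad, [-1.2, 1.0]))  # best point found (no restarts, so progress may be slow)
```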

Remark 1: Variations of the conjugate gradient method
  i) Miele and Cantrell (memory gradient method):
    $s^{(k)} = -\nabla f(x^{(k)}) + \gamma^{(k-1)} s^{(k-1)}$, where $\gamma^{(k-1)}$ is sought directly at each iteration such that $s^{(k)T} C s^{(k-1)} = 0$.
    cf) Use when the objective and gradient evaluations are very inexpensive.
  ii) Daniel:
    $s^{(k)} = -\nabla f(x^{(k)}) + \dfrac{s^{(k-1)T}\, \nabla^2 f(x^{(k)})\, \nabla f(x^{(k)})}{s^{(k-1)T}\, \nabla^2 f(x^{(k)})\, s^{(k-1)}}\, s^{(k-1)}$
  iii) Sorenson and Wolfe:
    $s^{(k)} = -\nabla f(x^{(k)}) + \dfrac{\Delta g(x^{(k-1)})^T\, \nabla f(x^{(k)})}{\Delta g(x^{(k-1)})^T\, s^{(k-1)}}\, s^{(k-1)}$
  iv) Polak and Ribiere:
    $s^{(k)} = -\nabla f(x^{(k)}) + \dfrac{\Delta g(x^{(k-1)})^T\, \nabla f(x^{(k)})}{\nabla f(x^{(k-1)})^T\, \nabla f(x^{(k-1)})}\, s^{(k-1)}$
Remark 2: These methods are doomed to a linear rate of convergence in the absence of periodic restarts, which are needed to avoid dependency among the directions: set $s^{(k)} = -g(x^{(k)})$ whenever $g(x^{(k-1)})^T g(x^{(k)})$ becomes significant relative to $g(x^{(k)})^T g(x^{(k)})$, or every $n$ iterations.
Remark 3: The Polak and Ribiere method is more efficient for general functions and less sensitive to inexact line searches than the Fletcher and Reeves method.

Quasi-Newton Method
- Mimics Newton's method using only first-order information.
- Form of the search direction: $s(x^{(k)}) = -A^{(k)} \nabla f(x^{(k)})$, where $A^{(k)}$ is an $n \times n$ matrix called the metric.
- Variable metric methods employ search directions of this form. A quasi-Newton method is a variable metric method with the quadratic property $\Delta x = C^{-1} \Delta g$.
- Recursive form for the estimation of the inverse of the Hessian:
  $A^{(k+1)} = A^{(k)} + A_c^{(k)}$ ($A_c^{(k)}$ is a correction to the current metric)
- If $A^{(k)}$ approaches $H^{-1} = \left[\nabla^2 f(x^*)\right]^{-1}$, one additional line search will produce the minimum if the function is quadratic.
- Assume $A^{(k+1)} \Delta g^{(k)} = \dfrac{1}{\beta}\,\Delta x^{(k)}$; then the correction must satisfy $A_c^{(k)} \Delta g^{(k)} = \dfrac{1}{\beta}\,\Delta x^{(k)} - A^{(k)} \Delta g^{(k)}$.
  Family of solutions:
  $A_c^{(k)} = \dfrac{\Delta x^{(k)} y^T}{\beta\, y^T \Delta g^{(k)}} - \dfrac{A^{(k)} \Delta g^{(k)} z^T}{z^T \Delta g^{(k)}}$ ($y$ and $z$ are arbitrary vectors)

- DFP method (Davidon-Fletcher-Powell)
  Let $\beta = 1$, $y = \Delta x^{(k-1)}$ and $z = A^{(k-1)} \Delta g^{(k-1)}$:
  $A^{(k)} = A^{(k-1)} + \dfrac{\Delta x^{(k-1)} \Delta x^{(k-1)T}}{\Delta x^{(k-1)T} \Delta g^{(k-1)}} - \dfrac{A^{(k-1)} \Delta g^{(k-1)} \Delta g^{(k-1)T} A^{(k-1)}}{\Delta g^{(k-1)T} A^{(k-1)} \Delta g^{(k-1)}}$
  If $A^{(0)}$ is any symmetric positive definite matrix, then $A^{(k)}$ will remain so in the absence of round-off error ($A^{(0)} = I$ is a convenient choice). For any $z \ne 0$, with $a = [A^{(k-1)}]^{1/2} z$ and $b = [A^{(k-1)}]^{1/2} \Delta g^{(k-1)}$,
  $z^T A^{(k)} z = \dfrac{(a^T a)(b^T b) - (a^T b)^2}{b^T b} + \dfrac{(z^T \Delta x^{(k-1)})^2}{\Delta x^{(k-1)T} \Delta g^{(k-1)}}$
  i) $\Delta x^{(k-1)T} \Delta g^{(k-1)} = \Delta x^{(k-1)T} g^{(k)} - \Delta x^{(k-1)T} g^{(k-1)} = -\Delta x^{(k-1)T} g^{(k-1)} = \alpha^{(k-1)} g^{(k-1)T} A^{(k-1)} g^{(k-1)} > 0$
  ii) $(a^T a)(b^T b) \ge (a^T b)^2$ (Schwarz inequality)
  iii) If $a$ and $b$ are proportional ($z$ and $\Delta g^{(k-1)}$ are too), then $(a^T a)(b^T b) - (a^T b)^2 = 0$, but $z^T \Delta x^{(k-1)} = c\, \Delta g^{(k-1)T} \Delta x^{(k-1)} = c\,\alpha^{(k-1)} g^{(k-1)T} A^{(k-1)} g^{(k-1)} \ne 0$, so $z^T A^{(k)} z > 0$.
  This method has the descent property:
  $\Delta f \approx \nabla f(x^{(k)})^T \Delta x = -\alpha^{(k)} \nabla f(x^{(k)})^T A^{(k)} \nabla f(x^{(k)}) < 0$ for $\alpha^{(k)} > 0$.
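A minimal DFP quasi-Newton loop using the update above (a sketch: $A^{(0)} = I$, a numerical line search, and a crude guard instead of the restart logic discussed in the remarks that follow; the test function is assumed):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def dfp(f, grad, x0, eps=1e-6, max_iter=200):
    x = np.asarray(x0, dtype=float)
    A = np.eye(x.size)                         # initial metric A(0) = I
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < eps:
            break
        s = -A @ g                             # search direction s = -A grad f
        alpha = minimize_scalar(lambda a: f(x + a * s)).x   # line search on alpha
        dx = alpha * s
        x_new = x + dx
        dg = grad(x_new) - g
        if abs(dx @ dg) < 1e-12:               # guard against a degenerate update
            break
        # DFP rank-two update of the inverse-Hessian estimate
        A = A + np.outer(dx, dx) / (dx @ dg) \
              - (A @ np.outer(dg, dg) @ A) / (dg @ A @ dg)
        x, g = x_new, grad(x_new)
    return x

f    = lambda x: 100 * (x[1] - x[0]**2)**2 + (1 - x[0])**2
grad = lambda x: np.array([-400 * x[0] * (x[1] - x[0]**2) - 2 * (1 - x[0]),
                           200 * (x[1] - x[0]**2)])
print(dfp(f, grad, [-1.2, 1.0]))               # -> approximately (1, 1)
```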

- Variations
  McCormick (Pearson No. 2):
    $A^{(k)} = A^{(k-1)} + \dfrac{(\Delta x^{(k-1)} - A^{(k-1)} \Delta g^{(k-1)})\, \Delta x^{(k-1)T}}{\Delta x^{(k-1)T} \Delta g^{(k-1)}}$
  Pearson (Pearson No. 3):
    $A^{(k)} = A^{(k-1)} + \dfrac{(\Delta x^{(k-1)} - A^{(k-1)} \Delta g^{(k-1)})\, \Delta g^{(k-1)T} A^{(k-1)}}{\Delta g^{(k-1)T} A^{(k-1)} \Delta g^{(k-1)}}$
  Broyden (1965) method (not symmetric):
    $A^{(k)} = A^{(k-1)} + \dfrac{(\Delta x^{(k-1)} - A^{(k-1)} \Delta g^{(k-1)})\, \Delta x^{(k-1)T} A^{(k-1)}}{\Delta x^{(k-1)T} A^{(k-1)} \Delta g^{(k-1)}}$
  Broyden symmetric rank-one method (1967), also given by Zoutendijk:
    $A^{(k)} = A^{(k-1)} + \dfrac{(\Delta x^{(k-1)} - A^{(k-1)} \Delta g^{(k-1)})(\Delta x^{(k-1)} - A^{(k-1)} \Delta g^{(k-1)})^T}{(\Delta x^{(k-1)} - A^{(k-1)} \Delta g^{(k-1)})^T \Delta g^{(k-1)}}$
  BFGS method (Broyden-Fletcher-Goldfarb-Shanno, rank-two method):
    $A^{(k)} = \left( I - \dfrac{\Delta x^{(k-1)} \Delta g^{(k-1)T}}{\Delta x^{(k-1)T} \Delta g^{(k-1)}} \right) A^{(k-1)} \left( I - \dfrac{\Delta g^{(k-1)} \Delta x^{(k-1)T}}{\Delta x^{(k-1)T} \Delta g^{(k-1)}} \right) + \dfrac{\Delta x^{(k-1)} \Delta x^{(k-1)T}}{\Delta x^{(k-1)T} \Delta g^{(k-1)}}$
  Invariant (self-scaling) DFP (Oren, 1974):
    $A^{(k)} = \left( A^{(k-1)} - \dfrac{A^{(k-1)} \Delta g^{(k-1)} \Delta g^{(k-1)T} A^{(k-1)}}{\Delta g^{(k-1)T} A^{(k-1)} \Delta g^{(k-1)}} \right) \dfrac{\Delta x^{(k-1)T} \Delta g^{(k-1)}}{\Delta g^{(k-1)T} A^{(k-1)} \Delta g^{(k-1)}} + \dfrac{\Delta x^{(k-1)} \Delta x^{(k-1)T}}{\Delta x^{(k-1)T} \Delta g^{(k-1)}}$
  Huang (unification of many variations):
    $A^{(k)} = A^{(k-1)} + [\,\Delta x^{(k-1)}\;\; A^{(k-1)} \Delta g^{(k-1)}\,]\, B\, [\,\Delta x^{(k-1)}\;\; A^{(k-1)} \Delta g^{(k-1)}\,]^T$, where $B$ is a $2 \times 2$ matrix of free parameters (with a scaling parameter $\omega$).
    Remark: If $\omega = 1$ and $B = \mathrm{diag}\!\left( 1/(\Delta x^{(k-1)T} \Delta g^{(k-1)}),\; -1/(\Delta g^{(k-1)T} A^{(k-1)} \Delta g^{(k-1)}) \right)$, this method is the same as the DFP method.

Remark 1: As these methods iterate, $A^{(k)}$ tends to become ill-conditioned or nearly singular; thus they require restarts ($A^{(k)} = I$: loss of the 2nd-order information).
  cf) Condition number = ratio of the maximum and minimum magnitudes of the eigenvalues of $A$. Ill-conditioned: $A$ has a large condition number.
Remark 2: The size of $A$ is quite big if $n$ is large (computation and storage).
Remark 3: The BFGS method is widely used and is known to have a decreased need for restarts and to be less dependent on exact line searches.
Remark 4: The line search is the most time-consuming phase of these methods.
Remark 5: If the gradient is not explicitly available, a numerical gradient can be obtained using, for example, forward and central difference approximations. If the changes in $x$ and/or $f$ between iterations are small, the central difference approximation is better, at the cost of more computation.
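A minimal sketch of the forward and central difference gradient approximations mentioned in Remark 5 (the perturbation size h and the test point are assumed for illustration):

```python
import numpy as np

def forward_diff_grad(f, x, h=1e-6):
    """Forward difference: one extra f evaluation per variable."""
    x = np.asarray(x, dtype=float)
    f0 = f(x)
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f0) / h
    return g

def central_diff_grad(f, x, h=1e-6):
    """Central difference: two extra f evaluations per variable, O(h^2) accurate."""
    x = np.asarray(x, dtype=float)
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

f = lambda x: 100 * (x[1] - x[0]**2)**2 + (1 - x[0])**2
print(forward_diff_grad(f, [-1.2, 1.0]))
print(central_diff_grad(f, [-1.2, 1.0]))
```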

3.4 Comparison of Methods

- Test functions
  Rosenbrock's function: $f(x) = 100(x_2 - x_1^2)^2 + (1 - x_1)^2$
  Fenton and Eason's function: $f(x) = \dfrac{1}{10}\left[ 12 + x_1^2 + \dfrac{1 + x_2^2}{x_1^2} + \dfrac{x_1^2 x_2^2 + 100}{(x_1 x_2)^4} \right]$
  Wood's function: $f(x) = 100(x_2 - x_1^2)^2 + (1 - x_1)^2 + 90(x_4 - x_3^2)^2 + (1 - x_3)^2 + 10.1\left[ (x_2 - 1)^2 + (x_4 - 1)^2 \right] + 19.8(x_2 - 1)(x_4 - 1)$

- Test results
  Himmelblau (1972): BFGS, DFP and Powell's direct search methods are superior.
  Sargent and Sebastian (1971): BFGS is the best among the BFGS, DFP and FR methods.
  Shanno and Phua (1980): BFGS.
  Reklaitis (1983): FR is the best among the Cauchy, FR, DFP, and BFGS methods.
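This kind of comparison can be reproduced informally with SciPy on Rosenbrock's function; a minimal sketch follows (the function-evaluation counts will differ from the published studies, which used different implementations and test conditions):

```python
import numpy as np
from scipy.optimize import minimize

def rosenbrock(x):
    return 100.0 * (x[1] - x[0]**2)**2 + (1.0 - x[0])**2

def rosenbrock_grad(x):
    return np.array([-400.0 * x[0] * (x[1] - x[0]**2) - 2.0 * (1.0 - x[0]),
                     200.0 * (x[1] - x[0]**2)])

x0 = [-1.2, 1.0]
for method in ["Nelder-Mead", "Powell", "CG", "BFGS"]:
    jac = rosenbrock_grad if method in ("CG", "BFGS") else None
    res = minimize(rosenbrock, x0, method=method, jac=jac)
    print(f"{method:12s}  x* = {res.x}  nfev = {res.nfev}")
```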
