Fixed-Point Iterations, Krylov Spaces, and Krylov Methods


Fixed-Point Iterations

Solve a nonsingular linear system: Ax = b (solution x̂ = A^{-1}b).
Solve an approximate, but simpler, system: Mx_0 = b, so x_0 = M^{-1}b.
Improve the solution using the residual: r_0 = b - Ax_0 (iterative refinement).
The error e_0 = x̂ - x_0 satisfies Ae_0 = b - Ax_0 = r_0.
Don't compute the exact error; instead solve Mz_0 = r_0 and set x_1 = x_0 + z_0.
Iterate:
  r_{i+1} = b - Ax_{i+1} = b - A(x_i + z_i) = r_i - Az_i,
  solve Mz_{i+1} = r_{i+1},
  x_{i+2} = x_{i+1} + z_{i+1}.
Methods: Jacobi iteration (M diagonal), Gauss-Seidel (M triangular), and many others such as (S)SOR, ...
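As an illustration of the simplest splitting, M = diag(A) (Jacobi), here is a minimal Python sketch; the 2×2 matrix and right-hand side are an assumed example, not from the slides.

```python
# Jacobi iteration: M = diag(A); solve M z_i = r_i, then x_{i+1} = x_i + z_i.
# Small diagonally dominant system (assumed example).

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def jacobi(A, b, x0, iters):
    x = x0[:]
    for _ in range(iters):
        r = [bi - axi for bi, axi in zip(b, matvec(A, x))]   # residual r = b - Ax
        z = [ri / A[i][i] for i, ri in enumerate(r)]         # solve M z = r, M = diag(A)
        x = [xi + zi for xi, zi in zip(x, z)]                # x <- x + z
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = jacobi(A, b, [0.0, 0.0], 50)
```

For this system the exact solution is (1/11, 7/11); diagonal dominance guarantees ρ(M^{-1}N) < 1, so the iterates converge to it.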

Fixed-Point Iterations

Convergence of such iterations? Let A = M - N (matrix splitting). Then
  Mz_i = b - Ax_i  ⟹  Mx_{i+1} = Mx_i + b - Ax_i = Nx_i + b
(the fixed-point iteration x_{i+1} = M^{-1}Nx_i + M^{-1}b).
Note that the fixed point is the solution (proof?).
Error: e_{i+1} = x̂ - x_{i+1} = (M^{-1}Nx̂ + M^{-1}b) - (M^{-1}Nx_i + M^{-1}b) = M^{-1}N(x̂ - x_i) = M^{-1}Ne_i = (M^{-1}N)^{i+1} e_0.
Residual: r_{i+1} = (NM^{-1})^{i+1} r_0 and M^{-1}r_{i+1} = (M^{-1}N)^{i+1} M^{-1}r_0 (proof?).
To analyze convergence we need to introduce/review a number of concepts.

Rate of Convergence

Let x̂ be the solution of Ax = b, and suppose we have iterates x_0, x_1, x_2, ...
- {x_i} converges (q-)linearly to x̂ if there are N and c ∈ [0,1) such that for i ≥ N: ||x_{i+1} - x̂|| ≤ c ||x_i - x̂||.
- {x_i} converges (q-)superlinearly to x̂ if there are N and a sequence {c_i} that converges to 0 such that for i ≥ N: ||x_{i+1} - x̂|| ≤ c_i ||x_i - x̂||.
- {x_i} converges to x̂ with (q-)order at least p if there are p > 1, c ≥ 0, and N such that for i ≥ N: ||x_{i+1} - x̂|| ≤ c ||x_i - x̂||^p (quadratic if p = 2, cubic if p = 3, and so on).
- {x_i} converges to x̂ with j-step (q-)order at least p if there are a fixed integer j, p > 1, c ≥ 0, and N such that for i ≥ N: ||x_{i+j} - x̂|| ≤ c ||x_i - x̂||^p.
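The q-linear definition can be observed numerically: for the error recurrence e_{i+1} = M^{-1}N e_i, the ratios ||e_{i+1}||/||e_i|| stay below a constant c < 1. A sketch with a Jacobi splitting of an assumed 2×2 system (not from the slides):

```python
# Observe q-linear convergence: the error ratios ||e_{i+1}|| / ||e_i|| stay
# below 1 for a convergent splitting A = M - N (here Jacobi, M = diag(A)).
import math

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
xhat = [1.0 / 11.0, 7.0 / 11.0]           # exact solution, used to measure the error

x = [0.0, 0.0]
ratios = []
prev_err = math.hypot(xhat[0] - x[0], xhat[1] - x[1])
for _ in range(20):
    r = [bi - axi for bi, axi in zip(b, matvec(A, x))]
    z = [r[0] / A[0][0], r[1] / A[1][1]]  # Jacobi: M = diag(A)
    x = [x[0] + z[0], x[1] + z[1]]
    err = math.hypot(xhat[0] - x[0], xhat[1] - x[1])
    ratios.append(err / prev_err)
    prev_err = err
```

Here ρ(M^{-1}N)² = 1/12, and the geometric mean of consecutive ratios reproduces that value.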

Norms

A norm on a vector space V is any function f : V → ℝ such that
1. f(x) ≥ 0, and f(x) = 0 ⟺ x = 0,
2. f(αx) = |α| f(x),
3. f(x + y) ≤ f(x) + f(y),
where x, y ∈ V and α is a scalar.
Important vector spaces in this course: ℝ^n, ℂ^n, ℝ^{m×n}, and ℂ^{m×n} (matrices). Note that the set of all m-by-n matrices (real or complex) is a vector space.
Many matrix norms possess the submultiplicative or consistency property:
  f(AB) ≤ f(A) f(B) for all complex (or real) matrices A and B.
Note that strictly speaking this is a property of a family of norms, because in general each f is defined on a different vector space.

Norms

We can define a matrix norm using a vector norm (an induced matrix norm):
  ||A|| = max_{x ≠ 0} ||Ax|| / ||x|| = max_{||x|| = 1} ||Ax||.
Induced norms are always consistent (satisfy the consistency property).
Two norms ||·||_α and ||·||_β are equivalent if there exist positive, real constants a and b such that for all x: a ||x||_α ≤ ||x||_β ≤ b ||x||_α. The constants depend on the two norms but not on x. All norms on a finite-dimensional vector space are equivalent.

Norms

Some useful norms on ℝ^n, ℂ^n are the p-norms ||x||_p = (Σ_{i=1}^n |x_i|^p)^{1/p}, especially p = 1, 2, ∞, where ||x||_∞ = max_i |x_i|.
The induced matrix p-norms are:
  ||A||_1 = max_j Σ_{i=1}^m |a_{ij}| (max absolute column sum),
  ||A||_2 = σ_max(A) (max singular value; harder to compute than the others),
  ||A||_∞ = max_i Σ_{j=1}^n |a_{ij}| (max absolute row sum).
Matrix Frobenius norm: ||A||_F = (Σ_{i,j} |a_{ij}|²)^{1/2} (similar to the vector 2-norm, applied to a matrix).
All these norms are consistent (satisfy the submultiplicative property).

Eigenvalues and Eigenvectors

Let Ax = λx and yA = λy (for the same λ). We call the column vector x a (right) eigenvector, the row vector y a left eigenvector, and λ an eigenvalue; the triple (λ, x, y) is called an eigentriple, and (λ, x) and (λ, y) a (right) eigenpair or left eigenpair.
The set of all eigenvalues of A, Λ(A), is called the spectrum of A.
If the matrix A is diagonalizable (has a complete set of eigenvectors) we have A = VΛV^{-1} ⟺ AV = VΛ, where V is a matrix with the right eigenvectors as columns and Λ is a diagonal matrix with the eigenvalues as coefficients. A similar decomposition can be given for the left eigenvectors.
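A quick sketch of the 1-, ∞-, and Frobenius norms, together with a check of the consistency property ||AB|| ≤ ||A|| ||B||, on small assumed matrices:

```python
# Matrix 1-norm (max absolute column sum), infinity-norm (max absolute row
# sum), Frobenius norm, and a check of ||AB|| <= ||A|| ||B||.
import math

def norm1(A):
    return max(sum(abs(A[i][j]) for i in range(len(A))) for j in range(len(A[0])))

def norm_inf(A):
    return max(sum(abs(a) for a in row) for row in A)

def norm_fro(A):
    return math.sqrt(sum(a * a for row in A for a in row))

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1.0, -2.0], [3.0, 4.0]]
B = [[0.0, 1.0], [-1.0, 2.0]]
AB = matmul(A, B)
```

For this A, the column sums give ||A||_1 = 6 and the row sums give ||A||_∞ = 7, and all three norms satisfy the consistency inequality for the product AB.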

Spectral Radius

The spectral radius ρ(A) is defined as ρ(A) = max{ |λ| : λ ∈ Λ(A) }.
Theorem: For every A and every ε > 0 there exists a consistent norm ||·|| such that ||A|| ≤ ρ(A) + ε.
So, if ρ(A) < 1, then a consistent norm ||·|| exists such that ||A|| < 1: take ε = (1 - ρ(A))/2 and apply the theorem above.
Define A* = Ā^T (the complex conjugate transpose).
If A is Hermitian (A* = A), then ρ(A) = ||A||_2.
If A is normal (AA* = A*A), then ρ(A) = ||A||_2.

Fixed-Point Iterations

Under what conditions does e_i → 0 and x_i → x̂ (convergence) for arbitrary e_0?
Theorem: e_i = (M^{-1}N)^i e_0 → 0 for arbitrary e_0 iff (M^{-1}N)^i → 0.
Proof: Let G = M^{-1}N and let the matrix norm ||·|| be induced by a vector norm ||·||.
1. Assume G^i → 0. Then ||G^i|| → 0, and ||G^i e_0|| ≤ ||G^i|| ||e_0|| → 0 for any e_0.
2. Assume G^i e_0 → 0 for all e_0. Consider the identity matrix I = [η_1 η_2 ... η_n]. Then G^i I = [G^i η_1, G^i η_2, ..., G^i η_n] → 0; so G^i → 0, since G^i I = G^i. Alternatively, consider ||G^i||_1 ≤ ||G^i η_1||_1 + ||G^i η_2||_1 + ... + ||G^i η_n||_1 (note that G^i η_k → 0).
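Since ρ(A) = ||A||_2 for Hermitian A, the spectral radius can be estimated with a simple power iteration. A sketch on an assumed symmetric 2×2 matrix with eigenvalues 1 and 3 (not from the slides):

```python
# Estimate the spectral radius by power iteration; for a symmetric (real
# Hermitian) matrix this also estimates ||A||_2, since rho(A) = ||A||_2.
import math

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def power_iteration(A, iters=200):
    x = [1.0] * len(A)
    lam = 0.0
    for _ in range(iters):
        y = matvec(A, x)
        lam = math.sqrt(sum(yi * yi for yi in y))   # ||Ax||_2 once x is normalized
        x = [yi / lam for yi in y]                  # normalize for the next step
    return lam

A = [[2.0, 1.0], [1.0, 2.0]]     # symmetric, eigenvalues 1 and 3
rho = power_iteration(A)
```

The iteration converges to the dominant eigenvalue 3; the rate depends on the eigenvalue gap.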

Norms

Note that we can generalize the result for the one-norm to all norms by using the equivalence of norms on finite-dimensional vector spaces. Similarly, the results are readily generalized for inconsistent matrix norms (with ||AB|| > ||A|| ||B|| possible), again by using the equivalence of norms on finite-dimensional spaces.

Fixed-Point Iterations

Theorem: G^i → 0 iff ρ(G) < 1.
Proof:
1. G^i → 0 ⟹ ρ(G) < 1. For each eigenvalue λ of G there exists at least one eigenvector v s.t. Gv = λv. Then ||G^i v|| = |λ|^i ||v||, and G^i v → 0. So |λ|^i ||v|| → 0, hence |λ| < 1 must hold. Since this holds for each eigenvalue, ρ(G) < 1.
2. ρ(G) < 1 ⟹ G^i → 0. There exists a consistent norm ||·|| s.t. ||G|| < 1. Hence ||G^i|| ≤ ||G||^i → 0. Therefore G^i → 0.
So e_i → 0 (x_i → x̂) for arbitrary e_0 iff ρ(M^{-1}N) < 1.

Krylov Spaces

Given x_0, set r_0 = b - Ax_0. For i = 0, 1, 2, ...:
  Mz_i = r_i,  x_{i+1} = x_i + z_i,  r_{i+1} = b - Ax_{i+1} = r_i - Az_i.
Note that x_{i+1} - x_i = z_i = M^{-1}r_i, and hence x_{i+1} - x_0 = z_0 + z_1 + ... + z_i.
This implies
  x_{i+1} - x_0 = M^{-1}r_0 + (M^{-1}N)M^{-1}r_0 + ... + (M^{-1}N)^i M^{-1}r_0.
So the correction x_{i+1} - x_0 is given by the polynomial S_i(t) = 1 + t + t² + ... + t^i:
  x_{i+1} - x_0 = Σ_{k=0}^i (M^{-1}N)^k M^{-1}r_0 = S_i(M^{-1}N) M^{-1}r_0.
Note also e_i = (M^{-1}N)^i e_0 and r_i = (NM^{-1})^i r_0 = M (M^{-1}N)^i M^{-1}r_0.

Similarity Transformations

Let A have eigenpairs (λ_i, v_i): Av_i = λ_i v_i. Define the similarity transformation A ↦ BAB^{-1}. The matrix BAB^{-1} has the same eigenvalues λ_i as A, and eigenvectors Bv_i:
  (BAB^{-1})(Bv_i) = BAv_i = λ_i Bv_i.
Show that M^{-1}N has the same eigenvalues as NM^{-1}.
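The polynomial form of the correction can be verified directly: running the fixed-point iteration and evaluating S_i(M^{-1}N)M^{-1}r_0 term by term should agree. A sketch with a Jacobi splitting of an assumed 2×2 system (not from the slides):

```python
# Check x_{i+1} - x_0 = (I + G + ... + G^i) M^{-1} r_0 with G = M^{-1}N,
# for the Jacobi splitting M = diag(A), N = M - A (assumed small example).

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x0 = [0.5, -0.5]
r0 = [bi - axi for bi, axi in zip(b, matvec(A, x0))]

M = [[4.0, 0.0], [0.0, 3.0]]
G = [[0.0, -1.0 / 4.0], [-1.0 / 3.0, 0.0]]   # G = M^{-1}N = I - M^{-1}A

# run three steps of the fixed-point iteration
x = x0[:]
for _ in range(3):
    r = [bi - axi for bi, axi in zip(b, matvec(A, x))]
    z = [r[0] / M[0][0], r[1] / M[1][1]]
    x = [x[0] + z[0], x[1] + z[1]]

# evaluate S_2(G) M^{-1} r_0 = (I + G + G^2) M^{-1} r_0 term by term
v = [r0[0] / M[0][0], r0[1] / M[1][1]]       # M^{-1} r_0
corr = v[:]
for _ in range(2):                           # add G v and then G^2 v
    v = matvec(G, v)
    corr = [corr[0] + v[0], corr[1] + v[1]]

diff = [(x[0] - x0[0]) - corr[0], (x[1] - x0[1]) - corr[1]]
```

Both routes compute the same correction, so `diff` vanishes up to rounding.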

Polynomials and Spaces

The main computational cost is in the multiplication by A and solving with M. So we can try to generate better polynomials (faster convergence) at the same cost.
Also, the correction satisfies
  x_{i+1} - x_0 = S_i(M^{-1}N)M^{-1}r_0 ∈ span{ M^{-1}r_0, (M^{-1}N)M^{-1}r_0, ..., (M^{-1}N)^i M^{-1}r_0 }.
We call the space K^m(B, y) = span{ y, By, B²y, ..., B^{m-1}y } the Krylov (sub)space of dimension m associated with B and y.
So x_{i+1} - x_0 ∈ K^{i+1}(M^{-1}N, M^{-1}r_0), e_i = (M^{-1}N)^i e_0 ∈ K^{i+1}(M^{-1}N, e_0), and M^{-1}r_i = (M^{-1}N)^i M^{-1}r_0 ∈ K^{i+1}(M^{-1}N, M^{-1}r_0).
Therefore, alternatively, we can compute better approximate solutions from the same space (faster convergence) at the same cost.

Krylov Spaces

So a Krylov space is a space of polynomials in a matrix times a vector. These spaces inherit the many important approximation properties that polynomials on the real line or in the complex plane possess.
For simplicity let the matrix B be diagonalizable, B = VΛV^{-1}. Then B² = VΛV^{-1}VΛV^{-1} = VΛ²V^{-1}, and generally B^k = VΛ^k V^{-1}.
So the polynomial p_m(t) = γ_0 + γ_1 t + ... + γ_m t^m applied to B gives
  p_m(B) = V(γ_0 I + γ_1 Λ + γ_2 Λ² + ... + γ_m Λ^m)V^{-1} = V p_m(Λ) V^{-1}, where p_m(Λ) = diag(p_m(λ_1), ..., p_m(λ_n)).
So the polynomial is applied to the eigenvalues individually. This allows us to approximate solutions to linear systems, eigenvalue problems, and more general problems using polynomial approximation.
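The identity p_m(B) = V p_m(Λ) V^{-1} can be checked on a small matrix whose eigen-decomposition is known by hand; the matrix and the polynomial p(t) = 1 + 2t + t² are an assumed example.

```python
# Verify p(B) = V p(Lambda) V^{-1} for a diagonalizable 2x2 matrix with
# eigenvalues 2 and 3 and hand-computed eigenvectors (assumed example).

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def p(t):                       # p(t) = 1 + 2t + t^2
    return 1.0 + 2.0 * t + t * t

B = [[2.0, 1.0], [0.0, 3.0]]
V = [[1.0, 1.0], [0.0, 1.0]]    # columns: eigenvectors for lambda = 2 and 3
Vinv = [[1.0, -1.0], [0.0, 1.0]]

# p(B) evaluated directly: I + 2B + B^2
B2 = matmul(B, B)
pB = [[(1.0 if i == j else 0.0) + 2.0 * B[i][j] + B2[i][j] for j in range(2)]
      for i in range(2)]

# p applied to the eigenvalues individually: V diag(p(2), p(3)) V^{-1}
pLam = [[p(2.0), 0.0], [0.0, p(3.0)]]
pB_eig = matmul(matmul(V, pLam), Vinv)
```

Both evaluations give the same matrix, with diagonal entries p(2) = 9 and p(3) = 16.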

Approximation by Matrix Polynomials

Let B = VΛV^{-1} and let Λ(B) ⊆ Ω. If |p_m(t)| ≤ ε for all t ∈ Ω, then p_m(B) is small in norm (of order ε, up to the conditioning of V).
Let y = Vζ = Σ_i ζ_i v_i. Then p_m(B)y = Σ_i v_i p_m(λ_i) ζ_i.
Furthermore, suppose |p_m(t)| ≤ ε for t ∈ Ω and |t - λ_k| > δ for all t ∈ Ω (for some eigenvalue λ_k outside Ω). If p_m(λ_k) = 1, then p_m(B)y ≈ ζ_k v_k.
If we can construct such polynomials for modest m, we have an efficient linear solver or eigensolver.

Krylov Spaces

We have M^{-1}N = M^{-1}(M - A) = I - M^{-1}A. So
  x_{i+1} - x_0 = S_i(I - M^{-1}A)M^{-1}r_0 ∈ K^{i+1}(M^{-1}A, M^{-1}r_0),
and the iteration x_{i+1} = M^{-1}Nx_i + M^{-1}b has fixed point x̂ = (I - M^{-1}A)x̂ + M^{-1}b ⟺ M^{-1}Ax̂ = M^{-1}b.
So we solve the preconditioned problem M^{-1}Ax = M^{-1}b. Preconditioning aims to improve the convergence of Richardson's iteration x_{i+1} = (I - A)x_i + b.
With à = M^{-1}A, M = I, and b̃ = M^{-1}b, our iteration becomes Richardson's iteration, and we have Krylov spaces and polynomials based on Ã. Hence, for simplicity, we consider A and b as an explicitly preconditioned matrix and vector, and work with Krylov spaces in A and r_0 (most of the time).
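A sketch contrasting plain Richardson with Richardson applied to the explicitly preconditioned system M^{-1}Ax = M^{-1}b, using M = diag(A) on an assumed 2×2 system (not from the slides); plain Richardson diverges here because ρ(I - A) > 1, while the preconditioned iteration converges.

```python
# Richardson's iteration x_{i+1} = (I - A)x_i + b, applied (a) directly and
# (b) to the explicitly preconditioned matrix and right-hand side M^{-1}A,
# M^{-1}b with M = diag(A).  Assumed small example.
import math

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def richardson(A, b, iters):
    x = [0.0] * len(b)
    for _ in range(iters):
        r = [bi - axi for bi, axi in zip(b, matvec(A, x))]
        x = [xi + ri for xi, ri in zip(x, r)]       # x <- (I - A)x + b
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]

# explicitly preconditioned matrix and right-hand side
Ap = [[A[i][j] / A[i][i] for j in range(2)] for i in range(2)]
bp = [b[i] / A[i][i] for i in range(2)]

x_prec = richardson(Ap, bp, 60)
x_plain = richardson(A, b, 60)
err_prec = math.hypot(x_prec[0] - 1 / 11, x_prec[1] - 7 / 11)
err_plain = math.hypot(x_plain[0] - 1 / 11, x_plain[1] - 7 / 11)
```

The preconditioned error shrinks to roundoff, while the unpreconditioned error grows geometrically.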

Approximations from Krylov Spaces

Richardson's iteration for Ax = b seeks an update z_m ∈ K^m(I - A, r_0) = K^m(A, r_0). How do we define an iteration that finds a better approximation in the same space?
Given x_0 and r_0 = b - Ax_0, find z_m ∈ K^m(A, r_0) and set x_m = x_0 + z_m. There are several possibilities; two particularly important ones are:
1. Find z_m ∈ K^m(A, r_0) such that ||r_m||_2 = ||r_0 - Az_m||_2 is minimal.
2. Find z_m ∈ K^m(A, r_0) such that ||e_m|| = ||x̂ - (x_0 + z_m)|| is minimal.
The second one seems hard, but is possible in practice for special norms. Further possibilities for optimal solutions exist, and for non-Hermitian matrices certain non-optimal solutions turn out to have advantages as well.

Inner Products

Many methods to select z_m from the Krylov space are related to projections.
We call f : S × S → ℝ an inner product over the real vector space S if, for all vectors x, y, z and scalars α:
1. f(x, x) ≥ 0, and f(x, x) = 0 ⟺ x = 0,
2. f(αx, z) = α f(x, z),
3. f(x + y, z) = f(x, z) + f(y, z),
4. f(x, z) = f(z, x).
For a complex inner product, f : S × S → ℂ, over a complex vector space S we have instead of property (4): f(x, z) = conj( f(z, x) ).
Inner products are often written as ⟨x, y⟩, (x, y), etc. We say x and y are orthogonal (w.r.t. the given inner product), x ⊥ y, if ⟨x, y⟩ = 0.

Inner Products and Norms

Each inner product defines, or induces, a norm: ||x|| = ⟨x, x⟩^{1/2} (proof?). Many norms are induced by inner products, but not all. Those norms that are have additional nice properties (that we'll discuss soon).
An inner product and its induced norm satisfy: |⟨x, y⟩| ≤ ||x|| ||y|| (the Cauchy-Schwarz inequality).
A norm induced by an inner product satisfies the parallelogram equality:
  ||x + y||² + ||x - y||² = 2(||x||² + ||y||²).
In this case we can find the inner product from the norm as well:
Real case: ⟨x, y⟩ = (1/4)(||x + y||² - ||x - y||²).
Complex case: Re⟨x, y⟩ = (1/4)(||x + y||² - ||x - y||²), Im⟨x, y⟩ = (1/4)(||x + iy||² - ||x - iy||²).

Minimum Residual Solutions

First, we consider minimizing the 2-norm of the residual: find z_m ∈ K^m(A, r_0) such that ||r_m||_2 = ||r_0 - Az_m||_2 is minimal.
The vector 2-norm is induced by the Euclidean inner product y*x = Σ_{i=1}^n ȳ_i x_i.
It makes sense to minimize the residual, because the error is in general unknown, and we can only directly minimize special norms of the error (those that don't require the error itself). Moreover, ||e||_2 = ||A^{-1}r||_2 ≤ ||A^{-1}||_2 ||r||_2, so the norm of the error is bounded by a constant times the norm of the residual. Finally, note that ||r||²_2 = (Ae)*(Ae) = e*A*Ae = ||e||²_{A*A} (show this is a norm if A is nonsingular).
Theorem: z_m is the minimizer iff r_m = b - A(x_0 + z_m) ⊥ K^m(A, Ar_0). Note that b - A(x_0 + z_m) = r_0 - Az_m.
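The orthogonality characterization can be checked numerically: solve for the minimizer over K²(A, r_0) and test that the new residual is orthogonal to a basis of K²(A, Ar_0). A sketch on an assumed 3×3 system, with Cramer's rule standing in for a general 2×2 solve:

```python
# Minimum residual over K^2(A, r0): solve the 2x2 normal equations for the
# coefficients of z in the basis {r0, A r0}, then verify that the residual
# r0 - A z is orthogonal to K^2(A, A r0).  Assumed small example.

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

A = [[3.0, 1.0, 0.0], [1.0, 4.0, 1.0], [0.0, 1.0, 5.0]]
r0 = [1.0, 0.0, 2.0]

w1 = r0
w2 = matvec(A, r0)
c1 = matvec(A, w1)              # K^2(A, A r0) is spanned by c1 = A w1, c2 = A w2
c2 = matvec(A, w2)

# normal equations: [c1.c1  c1.c2; c1.c2  c2.c2] zeta = [c1.r0; c2.r0]
a11, a12, a22 = dot(c1, c1), dot(c1, c2), dot(c2, c2)
g1, g2 = dot(c1, r0), dot(c2, r0)
det = a11 * a22 - a12 * a12
z1 = (g1 * a22 - g2 * a12) / det     # Cramer's rule for the 2x2 system
z2 = (a11 * g2 - a12 * g1) / det

res = [r0[i] - z1 * c1[i] - z2 * c2[i] for i in range(3)]   # r0 - A z_m
```

The residual of the minimizer is orthogonal to both basis vectors of K²(A, Ar_0), as the theorem states.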

Minimum Residual Solutions

Proof: Let f(z) = ||r_0 - Az||². If ẑ ∈ K^m(A, r_0) makes ||r_0 - Aẑ||_2 minimal, then ẑ is a stationary point of f(z): for any unit vector p ∈ K^m(A, r_0) we have
  lim_{ε→0} [ f(ẑ + εp) - f(ẑ) ] / ε = 0,
which gives
  lim_{ε→0} [ ||r_0 - Aẑ - εAp||² - ||r_0 - Aẑ||² ] / ε
  = lim_{ε→0} [ -ε p*A*(r_0 - Aẑ) - ε (r_0 - Aẑ)*Ap + ε² p*A*Ap ] / ε
  = -p*A*(r_0 - Aẑ) - (r_0 - Aẑ)*Ap = 0
for any unit vector p ∈ K^m(A, r_0). Since p*A*(r_0 - Aẑ) is the conjugate of (r_0 - Aẑ)*Ap, this says Re[(r_0 - Aẑ)*Ap] = 0; it follows that (r_0 - Aẑ)*Ap = 0 for any unit p ∈ K^m(A, r_0) (why?).
Since Ap ∈ K^m(A, Ar_0), we have (r_0 - Aẑ) ⊥ K^m(A, Ar_0) = AK^m(A, r_0).

Minimum Residual Solutions

Minimizing a norm (a continuous function) is in general complicated. However, the orthogonality conditions lead naturally to a linear system of equations.
Let {w_1, w_2, ..., w_m} form a basis for K^m(A, r_0); then {Aw_k} forms a basis for K^m(A, Ar_0). Let z_m = W_m ζ (for some unknown ζ). Now the orthogonality conditions Aw_k ⊥ (r_0 - AW_m ζ) yield the linear equations
  Σ_j (w_k* A* A w_j) ζ_j = w_k* A* r_0.
In matrix form: W_m* A* A W_m ζ = W_m* A* r_0 — the normal equations (accuracy problems).
LS problem: min_ζ ||r_0 - K_m ζ||_2, where K_m = AW_m.
It is more accurate to solve the LS problem using a QR decomposition. Solving for ζ requires only the solution of an m×m system, independent of n.
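The QR route can be sketched on a small assumed 3×2 least squares problem (the columns playing the role of K_m = AW_m), using modified Gram-Schmidt and back substitution:

```python
# Solve min ||r0 - K zeta||_2 via a QR decomposition of K (modified
# Gram-Schmidt), the numerically preferred alternative to the normal
# equations.  The 3x2 matrix K and vector r0 are an assumed example.
import math

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

k1 = [3.0, 3.0, 10.0]           # columns of K (think K = A W_m)
k2 = [12.0, 25.0, 53.0]
r0 = [1.0, 0.0, 2.0]

# modified Gram-Schmidt: K = Q R, Q with orthonormal columns, R upper triangular
r11 = math.sqrt(dot(k1, k1))
q1 = [v / r11 for v in k1]
r12 = dot(q1, k2)
w = [k2[i] - r12 * q1[i] for i in range(3)]
r22 = math.sqrt(dot(w, w))
q2 = [v / r22 for v in w]

# R zeta = Q^T r0, solved by back substitution
g1, g2 = dot(q1, r0), dot(q2, r0)
z2 = g2 / r22
z1 = (g1 - r12 * z2) / r11

residual = [r0[i] - z1 * k1[i] - z2 * k2[i] for i in range(3)]
```

The computed coefficients agree with those from the normal equations for the same data (6656/11323 and -819/11323), and the residual is orthogonal to both columns of Q.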

Minimum Residual Solutions

We have seen that min_ζ ||r_0 - K_m ζ||_2 ⟺ K_m ⊥ (r_0 - K_m ζ).
Compute K_m = Q_m R_m (QR decomposition), where Q_m ∈ ℂ^{n×m} s.t. Q_m* Q_m = I and R_m ∈ ℂ^{m×m} is upper triangular.
If rank(K_m) = m, then R_m is nonsingular and range(Q_m) = range(K_m).
Now Q_m* (r_0 - K_m ζ) = 0 gives Q_m* r_0 - Q_m* Q_m R_m ζ = 0, i.e. R_m ζ = Q_m* r_0 (easy solve).
Note that K_m ζ = Q_m Q_m* r_0 and r_0 - K_m ζ = r_0 - Q_m Q_m* r_0.
K_m ζ is the orthogonal projection of r_0 onto range(K_m) = K^m(A, Ar_0).
r_0 - K_m ζ is the orthogonal projection of r_0 onto the orthogonal complement of K^m(A, Ar_0).
Q_m Q_m* and I - Q_m Q_m* are orthogonal projectors. P is a projector if P² = P, and an orthogonal projector (R(P) ⊥ N(P)) if in addition P* = P.

Minimum Residual Solutions

Iteration-wise the problem is solved in four steps:
1. Extend the Krylov spaces K^m(A, r_0) and K^m(A, Ar_0) by adding the respective next vectors A^{m-1}r_0 and A^m r_0 (only a matvec).
2. Update the orthogonal basis for K^m(A, Ar_0): QR decomposition of K_m.
3. Update the projected matrix and the (orthogonal) projection of r_0 onto K^m(A, Ar_0).
4. Solve the projected problem, e.g. R_m ζ = Q_m* r_0. Note that this problem is only m×m or (m+1)×m, irrespective of the size of the linear system.
These steps vary somewhat for different methods. We would like to carry out these steps efficiently. The GCR method (Generalized Conjugate Residuals) illustrates these steps well. (Eisenstat, Elman, and Schultz, 1983)

Minimum Residual Solutions: GCR

GCR for Ax = b:
Choose x_0 (e.g. x_0 = 0) and tolerance ε; set r_0 = b - Ax_0; k = 0.
while ||r_k||_2 > ε do
  k = k + 1
  u_k = r_{k-1}; c_k = Au_k        (r_{k-1} adds a search vector to K^k(A, r_0); Ar_{k-1} extends K^k(A, Ar_0))
  for j = 1, ..., k-1 do            (start QR decomposition)
    u_k = u_k - (c_j* c_k) u_j      (orthogonalize c_k against the previous c_j,
    c_k = c_k - (c_j* c_k) c_j       and update u_k so that Au_k = c_k is maintained)
  end do
  u_k = u_k / ||c_k||_2; c_k = c_k / ||c_k||_2   (normalize; end QR decomposition)
  x_k = x_{k-1} + u_k (c_k* r_{k-1})             (project the new c_k out of the residual and update
  r_k = r_{k-1} - c_k (c_k* r_{k-1})              the solution accordingly; note r_k ⊥ c_j for j ≤ k)
end do
What happens if c_k* r_{k-1} = 0?

Minimum Residual Solutions: GCR

From the algorithm we see that, if c_j* r_{j-1} ≠ 0 for j = 1, ..., k:
  u_1 = ν r_0 ∈ span{r_0} = K^1(A, r_0), where ν is a normalization constant,
  c_1 = ν Ar_0 ∈ span{Ar_0} = K^1(A, Ar_0),
  r_1 = r_0 - γ c_1 = r_0 - γν Ar_0 ∈ K^2(A, r_0); also Ar_1 ∈ span{Ar_0, A²r_0}.
By induction (and the statements above) we can show
  u_k = r_{k-1} - Σ_{j<k} β_j u_j ∈ span{r_0, Ar_0, ..., A^{k-1}r_0} = K^k(A, r_0),
  c_k = Ar_{k-1} - Σ_{j<k} β_j c_j ∈ span{Ar_0, ..., A^k r_0} = K^k(A, Ar_0),
and also that c_k = Au_k, c_k ⊥ c_1, c_2, ..., c_{k-1}, and c_k* c_k = 1 (the last by construction);
  r_k = r_0 - Σ_j γ_j c_j ∈ span{r_0, Ar_0, ..., A^k r_0} = K^{k+1}(A, r_0).
These relations hold even if {r_0, Ar_0, ..., A^{k-1}r_0} are dependent for some k; in that case all the spaces have a correspondingly smaller maximum dimension.
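The GCR pseudocode above can be sketched in Python; the small SPD test system is an assumed example, and the code follows the slides' steps (orthonormal c_k = Au_k, residual projection along the c_k), not any official implementation.

```python
# Compact GCR sketch: search directions u_k with c_k = A u_k kept
# orthonormal; the residual is projected along the c_k each step.
import math

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def gcr(A, b, x0, tol=1e-12, maxit=50):
    n = len(b)
    x = x0[:]
    r = [b[i] - v for i, v in enumerate(matvec(A, x))]
    us, cs = [], []
    while math.sqrt(dot(r, r)) > tol and len(us) < maxit:
        u = r[:]
        c = matvec(A, u)
        for uj, cj in zip(us, cs):           # orthogonalize c against previous c_j,
            alpha = dot(cj, c)               # updating u so that A u = c is maintained
            c = [c[i] - alpha * cj[i] for i in range(n)]
            u = [u[i] - alpha * uj[i] for i in range(n)]
        nc = math.sqrt(dot(c, c))
        u = [v / nc for v in u]
        c = [v / nc for v in c]
        beta = dot(c, r)                     # project the new c out of the residual
        x = [x[i] + beta * u[i] for i in range(n)]
        r = [r[i] - beta * c[i] for i in range(n)]
        us.append(u)
        cs.append(c)
    return x, r

A = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]
b = [1.0, 2.0, 3.0]
x, r = gcr(A, b, [0.0, 0.0, 0.0])
```

For this 3×3 system GCR reaches the exact solution (2/9, 1/9, 13/9) in at most three steps, as the minimum-residual property predicts.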

Minimum Residual Solutions: GCR

Theorem: Let c_j* r_{j-1} ≠ 0 for j = 1, 2, ..., k. Then ||b - Ax_k||_2 for GCR is minimal.
We use our earlier theorem on the minimum residual. We have (previous slide) x_k = x_0 + Σ_j β_j u_j = x_0 + z_k with z_k ∈ K^k(A, r_0), as required. Now we only need to show that r_k ⊥ K^k(A, Ar_0).
Note that our assumption implies r_j ≠ 0 for j = 0, 1, ..., k-1.
By construction we have
  c_k* r_k = c_k* r_{k-1} - c_k* c_k c_k* r_{k-1} = 0.
Assume that for j = 1, ..., k-1 we have r_j ⊥ c_1, c_2, ..., c_j (induction hypothesis).
From r_k = r_{k-1} - c_k c_k* r_{k-1} we have r_k ⊥ c_k. For j < k,
  c_j* r_k = c_j* r_{k-1} - c_j* c_k c_k* r_{k-1} = 0
(from the orthogonality of the c_j and the induction hypothesis).
This proves the required orthogonality result, since the c_j span K^k(A, Ar_0).

Minimum Residual Solutions: GCR

Recapitulation of GCR after k steps:
  ||r_k||_2 = min{ ||r_0 - Az||_2 : z = U_k ζ },
  u_i ∈ K^i(A, r_0), c_i ∈ K^i(A, Ar_0), and r_i ⊥ c_1, ..., c_i for i = 1, ..., k.
This implies that K^{i+1}(A, r_0) = span{r_0, r_1, ..., r_i} for i = 0, ..., k (if c_j* r_{j-1} ≠ 0).
Let U_k = [u_1 u_2 ... u_k] and C_k = [c_1 c_2 ... c_k]. Then AU_k = C_k, C_k* C_k = I, and range(U_k) = K^k(A, r_0).
For GCR the projected system is the matrix form of the normal equations (but computed implicitly):
  U_k* A* A U_k = (AU_k)*(AU_k) = C_k* C_k = I.
The projected right-hand side is C_k* r_0. z_k = U_k ζ is given by the condition C_k*(r_0 - AU_k ζ) = 0, which gives ζ = C_k* r_0.
This gives z_k = U_k C_k* r_0, x_k = x_0 + z_k, and r_k = r_0 - C_k C_k* r_0.

Minimum Residual Solutions: GCR

Note that C_k C_k* and I - C_k C_k* are both orthogonal projectors.
C_k C_k* is a projector since C_k C_k* C_k C_k* = C_k C_k*; it is an orthogonal projector since C_k C_k* is Hermitian.
I - C_k C_k* is a projector since (I - C_k C_k*)(I - C_k C_k*) = I - 2 C_k C_k* + C_k C_k* = I - C_k C_k*; it is an orthogonal projector since I - C_k C_k* is Hermitian.

Model Problems

Discretize -(pu_x)_x - (qu_y)_y + ru_x + su_y + tu = f.
(Figure: a control volume V around grid point (i,j), with neighbors (i-1,j), (i+1,j), (i,j-1), (i,j+1) and corners A, B, C, D.)
Integrate the equality over the box V. Use the Gauss divergence theorem to get
  ∫∫_V [ -(pu_x)_x - (qu_y)_y ] dx dy = -∮_{∂V} (pu_x, qu_y) · n ds,
and approximate the line integral numerically.

Model Problems

Now we approximate the boundary integral ∮_{∂V} (pu_x, qu_y) · n ds. We approximate the integral over each side of the box V using the midpoint rule, and we approximate the derivatives using central differences:
  ∫_B^C pu_x dy ≈ (Δy/Δx) p_{i+1/2,j} (U_{i+1,j} - U_{i,j}), and so on for the other sides.
We approximate the integrals over ru_x, su_y, tu, and f using the area of the box ΔxΔy and the value at the midpoint of the box, where we use central differences for the derivatives: u_x ≈ (U_{i+1,j} - U_{i-1,j})/(2Δx), and so on.
For various examples we will also do this when strong convection relative to the mesh size makes central differences a poor choice (as it gives interesting systems).

Model Problems

This gives the discrete equations
  -(Δy/Δx) [ p_{i+1/2,j} (U_{i+1,j} - U_{i,j}) - p_{i-1/2,j} (U_{i,j} - U_{i-1,j}) ]
  - (Δx/Δy) [ q_{i,j+1/2} (U_{i,j+1} - U_{i,j}) - q_{i,j-1/2} (U_{i,j} - U_{i,j-1}) ]
  + (Δy/2) r_{i,j} (U_{i+1,j} - U_{i-1,j}) + (Δx/2) s_{i,j} (U_{i,j+1} - U_{i,j-1})
  + ΔxΔy t_{i,j} U_{i,j} = ΔxΔy f_{i,j}.
Often we divide this result again by ΔxΔy.

Minimum Residual Solutions: GMRES

An alternative is to generate, iteration-wise, an orthogonal basis for K^m(A, r_0). The Arnoldi algorithm (iteration) goes as follows:
  v_1 = r_0 / ||r_0||_2;
  for i = 1, 2, ...
    ṽ_{i+1} = Av_i;
    for j = 1, ..., i
      h_{j,i} = v_j* ṽ_{i+1};
      ṽ_{i+1} = ṽ_{i+1} - h_{j,i} v_j;
    end
    h_{i+1,i} = ||ṽ_{i+1}||_2; v_{i+1} = ṽ_{i+1} / h_{i+1,i};
  end
Note/show the following results: AV_m = V_{m+1} H̄_m (the Arnoldi recurrence, with H̄_m the (m+1)×m upper Hessenberg matrix), V_m* V_m = I (orthonormal basis), and H_m = V_m* A V_m (upper Hessenberg).

Minimum Residual Solutions: GMRES

Using AV_m = V_{m+1} H̄_m, we solve min{ ||r_0 - Az||_2 : z ∈ K^m(A, r_0) } as follows. Let z = V_m ζ, and minimize ||r_0 - AV_m ζ||_2 over all m-vectors ζ. Note that this is an n×m least squares problem (as before).
Now substitute r_0 = V_{m+1} e_1 ||r_0||_2 and AV_m = V_{m+1} H̄_m. This gives
  || V_{m+1} ( e_1 ||r_0||_2 - H̄_m ζ ) ||_2 = || e_1 ||r_0||_2 - H̄_m ζ ||_2.
The latter is a small (m+1)×m least squares problem that we can solve by standard dense linear algebra techniques (e.g. using LAPACK). We can exploit the structure of H̄_m and of the least squares problem to (1) do this efficiently, and (2) compute the residual norm without computing the residual.
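The Arnoldi iteration above can be sketched and its defining relations checked numerically; the matrix and starting vector are assumed examples.

```python
# Arnoldi sketch: builds an orthonormal basis V of K^m(A, r0) and the
# (m+1) x m Hessenberg matrix H with A V_m = V_{m+1} H, then verifies
# the recurrence column by column.
import math

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def arnoldi(A, r0, m):
    n = len(r0)
    beta = math.sqrt(dot(r0, r0))
    V = [[ri / beta for ri in r0]]                 # v_1 = r0 / ||r0||
    H = [[0.0] * m for _ in range(m + 1)]
    for i in range(m):
        w = matvec(A, V[i])
        for j in range(i + 1):                     # orthogonalize against v_1..v_{i+1}
            H[j][i] = dot(V[j], w)
            w = [w[k] - H[j][i] * V[j][k] for k in range(n)]
        H[i + 1][i] = math.sqrt(dot(w, w))
        V.append([wk / H[i + 1][i] for wk in w])
    return V, H

A = [[2.0, 1.0, 0.0], [0.0, 3.0, 1.0], [1.0, 0.0, 4.0]]
r0 = [1.0, 1.0, 0.0]
V, H = arnoldi(A, r0, 2)

# check A V_m = V_{m+1} H column by column
err = 0.0
for i in range(2):
    Av = matvec(A, V[i])
    rec = [sum(H[j][i] * V[j][k] for j in range(3)) for k in range(3)]
    err = max(err, max(abs(Av[k] - rec[k]) for k in range(3)))
```

Besides the recurrence, the basis vectors come out orthonormal and H is zero below the first subdiagonal.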

GMRES

By construction, the (m+1)×m matrix H̄_m from Arnoldi has upper Hessenberg structure:

  H̄_m = [ h_{1,1}  h_{1,2}  h_{1,3}  ...  h_{1,m}
          h_{2,1}  h_{2,2}  h_{2,3}  ...  h_{2,m}
                   h_{3,2}  h_{3,3}  ...  h_{3,m}
                            h_{4,3}  ...    ...
                                     ...  h_{m,m}
                                          h_{m+1,m} ]

The cheapest QR decomposition is by Givens rotations that zero the subdiagonal. The first rotation,
  G_1* = [ c̄_1 s̄_1 ; -s_1 c_1 ] ⊕ I,
is chosen to zero the entry h_{2,1} of G_1* H̄_m.

GMRES

In the next step we compute G_2* G_1* H̄_m, where G_2 is chosen to zero the (transformed) entry h_{3,2}; each further rotation G_i zeroes the next subdiagonal entry h_{i+1,i}.
After m Givens rotations:
  G_m* ... G_2* G_1* H̄_m = Q_{m+1}* H̄_m = [ R_m ; 0 ],
with R_m an m×m upper triangular matrix (entries r_{1,1}, r_{1,2}, ..., r_{m,m}) and a zero bottom row.

GMRES

Theorem: An unreduced (m+1)×m Hessenberg matrix has full (column) rank m. (Unreduced means no zeros on the subdiagonal.)
Proof: ?

GMRES

So the least squares problem y_m = arg min_y || e_1 ||r_0||_2 - H̄_m y ||_2 can be solved by multiplying H̄_m y ≈ e_1 ||r_0||_2 from the left by R_m^{-1} Q_m*:
  y_m = R_m^{-1} Q_m* e_1 ||r_0||_2.
In practice we work stepwise: compute G_m*(G_{m-1}* ... G_1* H̄_m) and G_m*(G_{m-1}* ... G_1* e_1 ||r_0||_2).
In each Arnoldi step, update H̄ with the new column; then carry out the previous Givens rotations on the new column. Compute the new Givens rotation and update H̄ and the right-hand side (of the small least squares problem): G_m*(G_{m-1}* ... G_1* e_1 ||r_0||_2).

GMRES

The least squares system now looks like R̄_m y = Q_{m+1}* e_1 ||r_0||_2, where R̄_m = [R_m; 0]. We may assume R_m has no zeros on the diagonal (see later). Since the bottom row of R̄_m is zero, we can only match the first m coefficients of Q_{m+1}* e_1 ||r_0||_2. This is exactly what we do by solving R_m y = (Q_{m+1}* e_1 ||r_0||_2)_{1:m}.
Note that the LS residual norm equals the norm of the actual residual (q_{m+1}, the last column of Q_{m+1}, changes with m):
  ||r_0 - AV_m y||_2 = ||V_{m+1}( e_1 ||r_0||_2 - H̄_m y )||_2 = || e_1 ||r_0||_2 - H̄_m y ||_2 = |q_{m+1}* e_1| ||r_0||_2.
This way we can monitor convergence without actually computing updates to the solution and residual (cheap).

GMRES

GMRES for Ax = b:
Choose x_0 (e.g. x_0 = 0) and tol;
r_0 = b - Ax_0; m = 0; v_1 = r_0 / ||r_0||_2;
while ||r_m||_2 > tol
  m = m + 1; ṽ_{m+1} = Av_m;
  for j = 1:m, h_{j,m} = v_j* ṽ_{m+1}; ṽ_{m+1} = ṽ_{m+1} - h_{j,m} v_j; end
  h_{m+1,m} = ||ṽ_{m+1}||_2; v_{m+1} = ṽ_{m+1} / h_{m+1,m};
  update the QR decomposition: H̄_m = Q_{m+1} R̄_m;
  ||r_m||_2 = |q_{m+1}* e_1| ||r_0||_2;
end
y_m = R_m^{-1} Q_m* e_1 ||r_0||_2; x_m = x_0 + V_m y_m;
r_m = r_0 - V_{m+1} H̄_m y_m = V_{m+1}( I - Q̂_m Q̂_m* ) e_1 ||r_0||_2, where Q̂_m denotes the first m columns of Q_{m+1} (or simply r_m = b - Ax_m).
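A compact GMRES sketch following the slides: Arnoldi, Givens rotations applied column by column, the residual norm monitored from the rotated right-hand side only, and a final back substitution. The 3×3 system is an assumed example, and the code is an illustration rather than a production solver.

```python
# GMRES sketch: Arnoldi + Givens rotations on the Hessenberg columns,
# residual norm read off the rotated right-hand side g.
import math

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def gmres(A, b, x0, tol=1e-12, maxit=None):
    n = len(b)
    maxit = maxit or n
    r0 = [b[i] - v for i, v in enumerate(matvec(A, x0))]
    beta = math.sqrt(dot(r0, r0))
    V = [[ri / beta for ri in r0]]
    Hcols = []                    # rotated (upper triangular) Hessenberg columns
    cs, sn = [], []               # Givens rotation coefficients
    g = [beta]                    # rotated right-hand side, grows by one per step
    m = 0
    while m < maxit:
        w = matvec(A, V[m])
        h = []
        for j in range(m + 1):    # Arnoldi orthogonalization
            hj = dot(V[j], w)
            h.append(hj)
            w = [w[k] - hj * V[j][k] for k in range(n)]
        hnext = math.sqrt(dot(w, w))
        if hnext > 0.0:
            V.append([wk / hnext for wk in w])
        else:
            V.append(w)           # lucky breakdown: solution lies in K^m
        for j in range(m):        # apply previous rotations to the new column
            h[j], h[j + 1] = cs[j] * h[j] + sn[j] * h[j + 1], -sn[j] * h[j] + cs[j] * h[j + 1]
        rho = math.hypot(h[m], hnext)          # new rotation zeroes hnext
        cs.append(h[m] / rho)
        sn.append(hnext / rho)
        h[m] = rho
        Hcols.append(h)
        g.append(-sn[m] * g[m])
        g[m] = cs[m] * g[m]
        m += 1
        if abs(g[m]) < tol:       # |g[m]| equals the current residual norm
            break
    y = [0.0] * m                 # back substitution on the m x m triangular system
    for i in range(m - 1, -1, -1):
        s = g[i] - sum(Hcols[j][i] * y[j] for j in range(i + 1, m))
        y[i] = s / Hcols[i][i]
    return [x0[k] + sum(y[j] * V[j][k] for j in range(m)) for k in range(n)]

A = [[2.0, 1.0, 0.0], [0.0, 3.0, 1.0], [1.0, 0.0, 4.0]]
b = [1.0, 1.0, 1.0]
x = gmres(A, b, [0.0, 0.0, 0.0])
```

For this nonsymmetric system the exact solution is (0.36, 0.28, 0.16), reached within n = 3 steps.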

Minimum Residual Solutions: GMRES

GMRES for Ax = b:
Choose x_0 and tolerance ε; set r_0 = b - Ax_0; v_1 = r_0 / ||r_0||_2; m = 0.
while ||r_m||_2 > ε do
  m = m + 1; ṽ_{m+1} = Av_m;
  for j = 1, ..., m
    h_{j,m} = v_j* ṽ_{m+1}; ṽ_{m+1} = ṽ_{m+1} - h_{j,m} v_j;
  end
  h_{m+1,m} = ||ṽ_{m+1}||_2; v_{m+1} = ṽ_{m+1} / h_{m+1,m};
  Solve the LS problem min_ζ || e_1 ||r_0||_2 - H̄_m ζ ||_2, whose residual norm equals ||r_m||_2 by construction (actually we update the solution rather than solve from scratch — see later).
end
x_m = x_0 + V_m ζ; r_m = r_0 - V_{m+1} H̄_m ζ = V_{m+1}( e_1 ||r_0||_2 - H̄_m ζ ) (or simply r_m = b - Ax_m).

Convergence of Restarted GCR

Test problem on the unit square: a Poisson-type equation -Δu = 0 at the interior grid points, with the boundary values of u prescribed (one value on the sides x = 0 and y = 0, another on the remaining boundary).
(Figure: residual norm versus number of iterations — log ||r|| against iteration count — for full GCR and restarted GCR(m) at several restart lengths m.)

Convergence of Restarted GMRES

Test problem on the unit square, as before: -Δu = 0 at the interior grid points, with prescribed boundary values for u.
(Figure: two plots of log ||r|| versus iteration count for full GMRES and restarted GMRES(m) at several restart lengths m.)

GMRES vs GCR

(Table residue: columns "number of unknowns", "time (s)", "iterations", "log(||r||/||b||)" for full GMRES/GMRES(m) and restarted GCR(m); the numerical entries did not survive transcription.)

Conjugate Gradients (1)

Hermitian matrices: error minimization in the A-norm.
We are solving Ax = b with initial guess x_0, r_0 = b - Ax_0, and x̂ the solution of Ax = b. The error at iteration m is e_m = x̂ - (x_0 + z_m), where z_m ∈ K^m(A, r_0) is the m-th update to the initial guess.
Theorem: Let A be Hermitian (positive definite, so that ||·||_A is a norm). Then the vector z_m ∈ K^m(A, r_0) satisfies
  z_m = arg min{ ||x̂ - (x_0 + z)||_A : z ∈ K^m(A, r_0) }
iff r_m = r_0 - Az_m satisfies r_m ⊥ K^m(A, r_0).
The most important algorithm of this class is the Conjugate Gradient algorithm.

Conjugate Gradients (2)

Proof:
  z_m = arg min{ ||x̂ - (x_0 + z)||_A : z ∈ K^m(A, r_0) } ⟺ (x̂ - x_0 - z_m) ⊥_A K^m(A, r_0).
We know K^m(A, r_0) = span{ r_0, r_1, ..., r_{m-1} }. This gives
  (x̂ - x_0 - z_m) ⊥_A r_i for i = 0, ..., m-1
  ⟺ ⟨A(x̂ - x_0 - z_m), r_i⟩ = 0 for i = 0, ..., m-1
  ⟺ ⟨b - Ax_0 - Az_m, r_i⟩ = 0 for i = 0, ..., m-1
  ⟺ ⟨r_0 - Az_m, r_i⟩ = 0 for i = 0, ..., m-1
  ⟺ ⟨r_m, r_i⟩ = 0 for i = 0, ..., m-1
  ⟺ r_m ⊥ K^m(A, r_0).

Conjugate Gradients (3)

So we can minimize the error by choosing the update such that the new residual is orthogonal to all previous residuals. Hence the name orthogonal residual methods. (Note the comparison between Orthomin(1) and Steepest Descent.)
We can generate an orthogonal basis for the Krylov subspace using the Lanczos iteration, the 3-term recurrence version of the Arnoldi iteration.

Conjugate Gradients (4)

Lanczos iteration:
  Choose q_1 with ||q_1||_2 = 1; β_0 = 0; q_0 = 0;
  for i = 1, 2, ... do
    q̃_{i+1} = Aq_i;
    α_i = ⟨Aq_i, q_i⟩;
    q̃_{i+1} = q̃_{i+1} - α_i q_i - β_{i-1} q_{i-1};
    β_i = ||q̃_{i+1}||_2; q_{i+1} = q̃_{i+1} / β_i;
  end
Show that q̃_{i+1} = Aq_i - α_i q_i - β_{i-1} q_{i-1} sets q_{i+1} ⊥ q_j for all j ≤ i. (One argument is the symmetry of the Hessenberg matrix from Arnoldi; give another.)
This algorithm generates the recurrence relation
  AQ_m = Q_m T_m + β_m q_{m+1} e_m^T,
where Q_m = [q_1 q_2 ... q_m] and T_m is the symmetric tridiagonal matrix with diagonal entries α_1, ..., α_m and off-diagonal entries β_1, ..., β_{m-1}.
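The Lanczos iteration above can be sketched and its claims checked numerically: for a symmetric matrix, the three-term recurrence alone yields (numerically) orthonormal q_i, and Q*AQ is tridiagonal. The matrix and start vector are assumed examples.

```python
# Lanczos three-term recurrence for a symmetric matrix: no explicit
# orthogonalization against q_1..q_{i-2} is performed, yet the basis
# comes out orthonormal and Q^T A Q is tridiagonal.
import math

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def lanczos(A, q1, m):
    n = len(q1)
    nq = math.sqrt(dot(q1, q1))
    Q = [[v / nq for v in q1]]
    alphas, betas = [], []
    q_prev = [0.0] * n
    beta_prev = 0.0
    for i in range(m):
        w = matvec(A, Q[i])
        alpha = dot(w, Q[i])
        w = [w[k] - alpha * Q[i][k] - beta_prev * q_prev[k] for k in range(n)]
        beta = math.sqrt(dot(w, w))
        alphas.append(alpha)
        betas.append(beta)
        q_prev, beta_prev = Q[i], beta
        Q.append([wk / beta for wk in w])
    return Q, alphas, betas

A = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]
Q, alphas, betas = lanczos(A, [1.0, 0.0, 0.0], 2)
```

With q_1 = e_1 the first diagonal entry is α_1 = a_{11} = 4, and q_1* A q_3 vanishes, confirming tridiagonality.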

Conjugate Gradients (5)

Use the Lanczos orthonormal basis for minimizing the A-norm of the error:
  z_m = arg min{ ||x̂ - (x_0 + z)||_A : z ∈ K^m(A, r_0) } iff r_m = r_0 - Az_m satisfies r_m ⊥ K^m(A, r_0).
Take q_1 = r_0 / ||r_0||_2; the Lanczos method gives AQ_m = Q_m T_m + β_m q_{m+1} e_m^T.
Solve r_0 - AQ_m y ⊥ Q_m ⟺ Q_m*( ||r_0||_2 q_1 - AQ_m y ) = 0 ⟺ ||r_0||_2 e_1 - Q_m* A Q_m y = 0.
Notice: range(Q_m) = span{ r_0, r_1, ..., r_{m-1} }, and AQ_m = Q_m T_m + β_m q_{m+1} e_m^T implies Q_m* A Q_m = T_m.
So we have reduced our problem to solving T_m y = e_1 ||r_0||_2: y = T_m^{-1} e_1 ||r_0||_2.

Conjugate Gradients (6)

In order to update the iterates step by step we use the same trick as in MINRES.
Let T_m = L_m D_m L_m*; then y = L_m^{-*} D_m^{-1} L_m^{-1} e_1 ||r_0||_2, where L_m is unit lower bidiagonal with subdiagonal coefficients l_1, l_2, ..., l_{m-1} (the index gives the column).
Change of variables: P_m = Q_m L_m^{-*} and ŷ = D_m^{-1} L_m^{-1} e_1 ||r_0||_2, so that Q_m y = P_m ŷ.
Each iteration only the last component of ŷ changes. From P_m L_m* = Q_m we get a recurrence for p_m:
  p_m + l_{m-1} p_{m-1} = q_m (with p_1 = q_1).
So every new step we compute a new q_{m+1}, update the decomposition of T_m, and from that obtain ŷ_m and p_m:
  x_m = x_{m-1} + p_m ŷ_m,  r_m = r_{m-1} - A p_m ŷ_m (a multiple of q_{m+1}),
where ŷ_m is the m-th component of the vector ŷ.

Conjugate Gradients (7)

This leads to the coupled two-term recurrences of CG.
CG algorithm for Ax = b:
  Choose x_0; r_0 = b - Ax_0; p_1 = r_0; m = 0;
  while ||r_m||_2 > tol do
    m = m + 1;
    α_m = ⟨r_{m-1}, r_{m-1}⟩ / ⟨p_m, Ap_m⟩;
    x_m = x_{m-1} + α_m p_m;
    r_m = r_{m-1} - α_m Ap_m;
    β_m = ⟨r_m, r_m⟩ / ⟨r_{m-1}, r_{m-1}⟩;
    p_{m+1} = r_m + β_m p_m;
  end

Conjugate Gradients

(Figure: log ||r|| versus number of iterations (matvecs), comparing GMRES and CG.)
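The coupled two-term recurrence above translates directly into code; a sketch on an assumed small SPD system (not from the slides):

```python
# Conjugate Gradients with the coupled two-term recurrences for x, r, p.
import math

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def cg(A, b, x0, tol=1e-12, maxit=100):
    x = x0[:]
    r = [b[i] - v for i, v in enumerate(matvec(A, x))]
    p = r[:]
    rr = dot(r, r)
    it = 0
    while math.sqrt(rr) > tol and it < maxit:
        Ap = matvec(A, p)
        alpha = rr / dot(p, Ap)
        x = [x[i] + alpha * p[i] for i in range(len(x))]
        r = [r[i] - alpha * Ap[i] for i in range(len(r))]
        rr_new = dot(r, r)
        beta = rr_new / rr
        p = [r[i] + beta * p[i] for i in range(len(p))]
        rr = rr_new
        it += 1
    return x, it

A = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]
b = [1.0, 2.0, 3.0]
x, it = cg(A, b, [0.0, 0.0, 0.0])
```

In exact arithmetic CG finds the solution of this 3×3 SPD system, (2/9, 1/9, 13/9), in at most three iterations.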


Gradient Descent Learning and Backpropagation

Gradient Descent Learning and Backpropagation Artfcal Neural Networks (art 2) Chrstan Jacob Gradent Descent Learnng and Backpropagaton CSC 533 Wnter 200 Learnng by Gradent Descent Defnton of the Learnng roble Let us start wth the sple case of lnear

More information

Denote the function derivatives f(x) in given points. x a b. Using relationships (1.2), polynomials (1.1) are written in the form

Denote the function derivatives f(x) in given points. x a b. Using relationships (1.2), polynomials (1.1) are written in the form SET OF METHODS FO SOUTION THE AUHY POBEM FO STIFF SYSTEMS OF ODINAY DIFFEENTIA EUATIONS AF atypov and YuV Nulchev Insttute of Theoretcal and Appled Mechancs SB AS 639 Novosbrs ussa Introducton A constructon

More information

Quantum Mechanics for Scientists and Engineers

Quantum Mechanics for Scientists and Engineers Quantu Mechancs or Scentsts and Engneers Sangn K Advanced Coputatonal Electroagnetcs Lab redkd@yonse.ac.kr Nov. 4 th, 26 Outlne Quantu Mechancs or Scentsts and Engneers Blnear expanson o lnear operators

More information

System in Weibull Distribution

System in Weibull Distribution Internatonal Matheatcal Foru 4 9 no. 9 94-95 Relablty Equvalence Factors of a Seres-Parallel Syste n Webull Dstrbuton M. A. El-Dacese Matheatcs Departent Faculty of Scence Tanta Unversty Tanta Egypt eldacese@yahoo.co

More information

ρ some λ THE INVERSE POWER METHOD (or INVERSE ITERATION) , for , or (more usually) to

ρ some λ THE INVERSE POWER METHOD (or INVERSE ITERATION) , for , or (more usually) to THE INVERSE POWER METHOD (or INVERSE ITERATION) -- applcaton of the Power method to A some fxed constant ρ (whch s called a shft), x λ ρ If the egenpars of A are { ( λ, x ) } ( ), or (more usually) to,

More information

Fall 2012 Analysis of Experimental Measurements B. Eisenstein/rev. S. Errede. ) with a symmetric Pcovariance matrix of the y( x ) measurements V

Fall 2012 Analysis of Experimental Measurements B. Eisenstein/rev. S. Errede. ) with a symmetric Pcovariance matrix of the y( x ) measurements V Fall Analyss o Experental Measureents B Esensten/rev S Errede General Least Squares wth General Constrants: Suppose we have easureents y( x ( y( x, y( x,, y( x wth a syetrc covarance atrx o the y( x easureents

More information

n α j x j = 0 j=1 has a nontrivial solution. Here A is the n k matrix whose jth column is the vector for all t j=0

n α j x j = 0 j=1 has a nontrivial solution. Here A is the n k matrix whose jth column is the vector for all t j=0 MODULE 2 Topcs: Lnear ndependence, bass and dmenson We have seen that f n a set of vectors one vector s a lnear combnaton of the remanng vectors n the set then the span of the set s unchanged f that vector

More information

STAT 309: MATHEMATICAL COMPUTATIONS I FALL 2018 LECTURE 17. a ij x (k) b i. a ij x (k+1) (D + L)x (k+1) = b Ux (k)

STAT 309: MATHEMATICAL COMPUTATIONS I FALL 2018 LECTURE 17. a ij x (k) b i. a ij x (k+1) (D + L)x (k+1) = b Ux (k) STAT 309: MATHEMATICAL COMPUTATIONS I FALL 08 LECTURE 7. sor method remnder: n coordnatewse form, Jacob method s = [ b a x (k) a and Gauss Sedel method s = [ b a = = remnder: n matrx form, Jacob method

More information

Lecture 21: Numerical methods for pricing American type derivatives

Lecture 21: Numerical methods for pricing American type derivatives Lecture 21: Numercal methods for prcng Amercan type dervatves Xaoguang Wang STAT 598W Aprl 10th, 2014 (STAT 598W) Lecture 21 1 / 26 Outlne 1 Fnte Dfference Method Explct Method Penalty Method (STAT 598W)

More information

Salmon: Lectures on partial differential equations. Consider the general linear, second-order PDE in the form. ,x 2

Salmon: Lectures on partial differential equations. Consider the general linear, second-order PDE in the form. ,x 2 Salmon: Lectures on partal dfferental equatons 5. Classfcaton of second-order equatons There are general methods for classfyng hgher-order partal dfferental equatons. One s very general (applyng even to

More information

Lecture Notes on Linear Regression

Lecture Notes on Linear Regression Lecture Notes on Lnear Regresson Feng L fl@sdueducn Shandong Unversty, Chna Lnear Regresson Problem In regresson problem, we am at predct a contnuous target value gven an nput feature vector We assume

More information

8.4 COMPLEX VECTOR SPACES AND INNER PRODUCTS

8.4 COMPLEX VECTOR SPACES AND INNER PRODUCTS SECTION 8.4 COMPLEX VECTOR SPACES AND INNER PRODUCTS 493 8.4 COMPLEX VECTOR SPACES AND INNER PRODUCTS All the vector spaces you have studed thus far n the text are real vector spaces because the scalars

More information

Elastic Collisions. Definition: two point masses on which no external forces act collide without losing any energy.

Elastic Collisions. Definition: two point masses on which no external forces act collide without losing any energy. Elastc Collsons Defnton: to pont asses on hch no external forces act collde thout losng any energy v Prerequstes: θ θ collsons n one denson conservaton of oentu and energy occurs frequently n everyday

More information

Iterative Linear Solvers and Jacobian-free Newton-Krylov Methods

Iterative Linear Solvers and Jacobian-free Newton-Krylov Methods Eric de Sturler Iterative Linear Solvers and Jacobian-free Newton-Krylov Methods Eric de Sturler Departent of Matheatics, Virginia Tech www.ath.vt.edu/people/sturler/index.htl sturler@vt.edu Efficient

More information

COS 511: Theoretical Machine Learning

COS 511: Theoretical Machine Learning COS 5: Theoretcal Machne Learnng Lecturer: Rob Schapre Lecture #0 Scrbe: José Sões Ferrera March 06, 203 In the last lecture the concept of Radeacher coplexty was ntroduced, wth the goal of showng that

More information

Linear Approximation with Regularization and Moving Least Squares

Linear Approximation with Regularization and Moving Least Squares Lnear Approxmaton wth Regularzaton and Movng Least Squares Igor Grešovn May 007 Revson 4.6 (Revson : March 004). 5 4 3 0.5 3 3.5 4 Contents: Lnear Fttng...4. Weghted Least Squares n Functon Approxmaton...

More information

A Radon-Nikodym Theorem for Completely Positive Maps

A Radon-Nikodym Theorem for Completely Positive Maps A Radon-Nody Theore for Copletely Postve Maps V P Belavn School of Matheatcal Scences, Unversty of Nottngha, Nottngha NG7 RD E-al: vpb@aths.nott.ac.u and P Staszews Insttute of Physcs, Ncholas Coperncus

More information

Hongyi Miao, College of Science, Nanjing Forestry University, Nanjing ,China. (Received 20 June 2013, accepted 11 March 2014) I)ϕ (k)

Hongyi Miao, College of Science, Nanjing Forestry University, Nanjing ,China. (Received 20 June 2013, accepted 11 March 2014) I)ϕ (k) ISSN 1749-3889 (prnt), 1749-3897 (onlne) Internatonal Journal of Nonlnear Scence Vol.17(2014) No.2,pp.188-192 Modfed Block Jacob-Davdson Method for Solvng Large Sparse Egenproblems Hongy Mao, College of

More information

An Optimal Bound for Sum of Square Roots of Special Type of Integers

An Optimal Bound for Sum of Square Roots of Special Type of Integers The Sxth Internatonal Syposu on Operatons Research and Its Applcatons ISORA 06 Xnang, Chna, August 8 12, 2006 Copyrght 2006 ORSC & APORC pp. 206 211 An Optal Bound for Su of Square Roots of Specal Type

More information

Relaxation Methods for Iterative Solution to Linear Systems of Equations

Relaxation Methods for Iterative Solution to Linear Systems of Equations Relaxaton Methods for Iteratve Soluton to Lnear Systems of Equatons Gerald Recktenwald Portland State Unversty Mechancal Engneerng Department gerry@pdx.edu Overvew Techncal topcs Basc Concepts Statonary

More information

BAYESIAN CURVE FITTING USING PIECEWISE POLYNOMIALS. Dariusz Biskup

BAYESIAN CURVE FITTING USING PIECEWISE POLYNOMIALS. Dariusz Biskup BAYESIAN CURVE FITTING USING PIECEWISE POLYNOMIALS Darusz Bskup 1. Introducton The paper presents a nonparaetrc procedure for estaton of an unknown functon f n the regresson odel y = f x + ε = N. (1) (

More information

Math 217 Fall 2013 Homework 2 Solutions

Math 217 Fall 2013 Homework 2 Solutions Math 17 Fall 013 Homework Solutons Due Thursday Sept. 6, 013 5pm Ths homework conssts of 6 problems of 5 ponts each. The total s 30. You need to fully justfy your answer prove that your functon ndeed has

More information

CHAPTER 5 NUMERICAL EVALUATION OF DYNAMIC RESPONSE

CHAPTER 5 NUMERICAL EVALUATION OF DYNAMIC RESPONSE CHAPTER 5 NUMERICAL EVALUATION OF DYNAMIC RESPONSE Analytcal soluton s usually not possble when exctaton vares arbtrarly wth tme or f the system s nonlnear. Such problems can be solved by numercal tmesteppng

More information

CME 302: NUMERICAL LINEAR ALGEBRA FALL 2005/06 LECTURE 13

CME 302: NUMERICAL LINEAR ALGEBRA FALL 2005/06 LECTURE 13 CME 30: NUMERICAL LINEAR ALGEBRA FALL 005/06 LECTURE 13 GENE H GOLUB 1 Iteratve Methods Very large problems (naturally sparse, from applcatons): teratve methods Structured matrces (even sometmes dense,

More information

APPENDIX A Some Linear Algebra

APPENDIX A Some Linear Algebra APPENDIX A Some Lnear Algebra The collecton of m, n matrces A.1 Matrces a 1,1,..., a 1,n A = a m,1,..., a m,n wth real elements a,j s denoted by R m,n. If n = 1 then A s called a column vector. Smlarly,

More information

arxiv: v2 [math.co] 3 Sep 2017

arxiv: v2 [math.co] 3 Sep 2017 On the Approxate Asyptotc Statstcal Independence of the Peranents of 0- Matrces arxv:705.0868v2 ath.co 3 Sep 207 Paul Federbush Departent of Matheatcs Unversty of Mchgan Ann Arbor, MI, 4809-043 Septeber

More information

Riccati Equation Solution Method for the Computation of the Solutions of X + A T X 1 A = Q and X A T X 1 A = Q

Riccati Equation Solution Method for the Computation of the Solutions of X + A T X 1 A = Q and X A T X 1 A = Q he Open Appled Inforatcs Journal, 009, 3, -33 Open Access Rccat Euaton Soluton Method for the Coputaton of the Solutons of X A X A Q and X A X A Q Mara Ada *, Ncholas Assas, Grgors zallas and Francesca

More information

2.3 Nilpotent endomorphisms

2.3 Nilpotent endomorphisms s a block dagonal matrx, wth A Mat dm U (C) In fact, we can assume that B = B 1 B k, wth B an ordered bass of U, and that A = [f U ] B, where f U : U U s the restrcton of f to U 40 23 Nlpotent endomorphsms

More information

1. Statement of the problem

1. Statement of the problem Volue 14, 010 15 ON THE ITERATIVE SOUTION OF A SYSTEM OF DISCRETE TIMOSHENKO EQUATIONS Peradze J. and Tsklaur Z. I. Javakhshvl Tbls State Uversty,, Uversty St., Tbls 0186, Georga Georgan Techcal Uversty,

More information

Implicit scaling of linear least squares problems

Implicit scaling of linear least squares problems RAL-TR-98-07 1 Iplct scalng of lnear least squares probles by J. K. Red Abstract We consder the soluton of weghted lnear least squares probles by Householder transforatons wth plct scalng, that s, wth

More information

Synopsis of Numerical Linear Algebra

Synopsis of Numerical Linear Algebra Synopsis of Numerical Linear Algebra Eric de Sturler Department of Mathematics, Virginia Tech sturler@vt.edu http://www.math.vt.edu/people/sturler Iterative Methods for Linear Systems: Basics to Research

More information

5 The Rational Canonical Form

5 The Rational Canonical Form 5 The Ratonal Canoncal Form Here p s a monc rreducble factor of the mnmum polynomal m T and s not necessarly of degree one Let F p denote the feld constructed earler n the course, consstng of all matrces

More information

Yong Joon Ryang. 1. Introduction Consider the multicommodity transportation problem with convex quadratic cost function. 1 2 (x x0 ) T Q(x x 0 )

Yong Joon Ryang. 1. Introduction Consider the multicommodity transportation problem with convex quadratic cost function. 1 2 (x x0 ) T Q(x x 0 ) Kangweon-Kyungk Math. Jour. 4 1996), No. 1, pp. 7 16 AN ITERATIVE ROW-ACTION METHOD FOR MULTICOMMODITY TRANSPORTATION PROBLEMS Yong Joon Ryang Abstract. The optmzaton problems wth quadratc constrants often

More information

Preference and Demand Examples

Preference and Demand Examples Dvson of the Huantes and Socal Scences Preference and Deand Exaples KC Border October, 2002 Revsed Noveber 206 These notes show how to use the Lagrange Karush Kuhn Tucker ultpler theores to solve the proble

More information

By M. O'Neill,* I. G. Sinclairf and Francis J. Smith

By M. O'Neill,* I. G. Sinclairf and Francis J. Smith 52 Polynoal curve fttng when abscssas and ordnates are both subject to error By M. O'Nell,* I. G. Snclarf and Francs J. Sth Departents of Coputer Scence and Appled Matheatcs, School of Physcs and Appled

More information

Norms, Condition Numbers, Eigenvalues and Eigenvectors

Norms, Condition Numbers, Eigenvalues and Eigenvectors Norms, Condton Numbers, Egenvalues and Egenvectors 1 Norms A norm s a measure of the sze of a matrx or a vector For vectors the common norms are: N a 2 = ( x 2 1/2 the Eucldean Norm (1a b 1 = =1 N x (1b

More information

Our focus will be on linear systems. A system is linear if it obeys the principle of superposition and homogenity, i.e.

Our focus will be on linear systems. A system is linear if it obeys the principle of superposition and homogenity, i.e. SSTEM MODELLIN In order to solve a control syste proble, the descrptons of the syste and ts coponents ust be put nto a for sutable for analyss and evaluaton. The followng ethods can be used to odel physcal

More information

Lecture 6/7 (February 10/12, 2014) DIRAC EQUATION. The non-relativistic Schrödinger equation was obtained by noting that the Hamiltonian 2

Lecture 6/7 (February 10/12, 2014) DIRAC EQUATION. The non-relativistic Schrödinger equation was obtained by noting that the Hamiltonian 2 P470 Lecture 6/7 (February 10/1, 014) DIRAC EQUATION The non-relatvstc Schrödnger equaton was obtaned by notng that the Hamltonan H = P (1) m can be transformed nto an operator form wth the substtutons

More information

Difference Equations

Difference Equations Dfference Equatons c Jan Vrbk 1 Bascs Suppose a sequence of numbers, say a 0,a 1,a,a 3,... s defned by a certan general relatonshp between, say, three consecutve values of the sequence, e.g. a + +3a +1

More information

CHAPTER 6 CONSTRAINED OPTIMIZATION 1: K-T CONDITIONS

CHAPTER 6 CONSTRAINED OPTIMIZATION 1: K-T CONDITIONS Chapter 6: Constraned Optzaton CHAPER 6 CONSRAINED OPIMIZAION : K- CONDIIONS Introducton We now begn our dscusson of gradent-based constraned optzaton. Recall that n Chapter 3 we looked at gradent-based

More information

Additional Codes using Finite Difference Method. 1 HJB Equation for Consumption-Saving Problem Without Uncertainty

Additional Codes using Finite Difference Method. 1 HJB Equation for Consumption-Saving Problem Without Uncertainty Addtonal Codes usng Fnte Dfference Method Benamn Moll 1 HJB Equaton for Consumpton-Savng Problem Wthout Uncertanty Before consderng the case wth stochastc ncome n http://www.prnceton.edu/~moll/ HACTproect/HACT_Numercal_Appendx.pdf,

More information

On Syndrome Decoding of Punctured Reed-Solomon and Gabidulin Codes 1

On Syndrome Decoding of Punctured Reed-Solomon and Gabidulin Codes 1 Ffteenth Internatonal Workshop on Algebrac and Cobnatoral Codng Theory June 18-24, 2016, Albena, Bulgara pp. 35 40 On Syndroe Decodng of Punctured Reed-Soloon and Gabduln Codes 1 Hannes Bartz hannes.bartz@tu.de

More information

1 Review From Last Time

1 Review From Last Time COS 5: Foundatons of Machne Learnng Rob Schapre Lecture #8 Scrbe: Monrul I Sharf Aprl 0, 2003 Revew Fro Last Te Last te, we were talkng about how to odel dstrbutons, and we had ths setup: Gven - exaples

More information

Recap: the SVM problem

Recap: the SVM problem Machne Learnng 0-70/5-78 78 Fall 0 Advanced topcs n Ma-Margn Margn Learnng Erc Xng Lecture 0 Noveber 0 Erc Xng @ CMU 006-00 Recap: the SVM proble We solve the follong constraned opt proble: a s.t. J 0

More information

A New Refinement of Jacobi Method for Solution of Linear System Equations AX=b

A New Refinement of Jacobi Method for Solution of Linear System Equations AX=b Int J Contemp Math Scences, Vol 3, 28, no 17, 819-827 A New Refnement of Jacob Method for Soluton of Lnear System Equatons AX=b F Naem Dafchah Department of Mathematcs, Faculty of Scences Unversty of Gulan,

More information

Fall 2012 Analysis of Experimental Measurements B. Eisenstein/rev. S. Errede

Fall 2012 Analysis of Experimental Measurements B. Eisenstein/rev. S. Errede Fall 0 Analyss of Expermental easurements B. Esensten/rev. S. Errede We now reformulate the lnear Least Squares ethod n more general terms, sutable for (eventually extendng to the non-lnear case, and also

More information

Formal solvers of the RT equation

Formal solvers of the RT equation Formal solvers of the RT equaton Formal RT solvers Runge- Kutta (reference solver) Pskunov N.: 979, Master Thess Long characterstcs (Feautrer scheme) Cannon C.J.: 970, ApJ 6, 55 Short characterstcs (Hermtan

More information

CHAPTER 7 CONSTRAINED OPTIMIZATION 1: THE KARUSH-KUHN-TUCKER CONDITIONS

CHAPTER 7 CONSTRAINED OPTIMIZATION 1: THE KARUSH-KUHN-TUCKER CONDITIONS CHAPER 7 CONSRAINED OPIMIZAION : HE KARUSH-KUHN-UCKER CONDIIONS 7. Introducton We now begn our dscusson of gradent-based constraned optzaton. Recall that n Chapter 3 we looked at gradent-based unconstraned

More information

Errors for Linear Systems

Errors for Linear Systems Errors for Lnear Systems When we solve a lnear system Ax b we often do not know A and b exactly, but have only approxmatons  and ˆb avalable. Then the best thng we can do s to solve ˆx ˆb exactly whch

More information

Deriving the X-Z Identity from Auxiliary Space Method

Deriving the X-Z Identity from Auxiliary Space Method Dervng the X-Z Identty from Auxlary Space Method Long Chen Department of Mathematcs, Unversty of Calforna at Irvne, Irvne, CA 92697 chenlong@math.uc.edu 1 Iteratve Methods In ths paper we dscuss teratve

More information

However, since P is a symmetric idempotent matrix, of P are either 0 or 1 [Eigen-values

However, since P is a symmetric idempotent matrix, of P are either 0 or 1 [Eigen-values Fall 007 Soluton to Mdterm Examnaton STAT 7 Dr. Goel. [0 ponts] For the general lnear model = X + ε, wth uncorrelated errors havng mean zero and varance σ, suppose that the desgn matrx X s not necessarly

More information

Bézier curves. Michael S. Floater. September 10, These notes provide an introduction to Bézier curves. i=0

Bézier curves. Michael S. Floater. September 10, These notes provide an introduction to Bézier curves. i=0 Bézer curves Mchael S. Floater September 1, 215 These notes provde an ntroducton to Bézer curves. 1 Bernsten polynomals Recall that a real polynomal of a real varable x R, wth degree n, s a functon of

More information

MATH 241B FUNCTIONAL ANALYSIS - NOTES EXAMPLES OF C ALGEBRAS

MATH 241B FUNCTIONAL ANALYSIS - NOTES EXAMPLES OF C ALGEBRAS MATH 241B FUNCTIONAL ANALYSIS - NOTES EXAMPLES OF C ALGEBRAS These are nformal notes whch cover some of the materal whch s not n the course book. The man purpose s to gve a number of nontrval examples

More information

Feature Selection: Part 1

Feature Selection: Part 1 CSE 546: Machne Learnng Lecture 5 Feature Selecton: Part 1 Instructor: Sham Kakade 1 Regresson n the hgh dmensonal settng How do we learn when the number of features d s greater than the sample sze n?

More information

Quantum Mechanics I - Session 4

Quantum Mechanics I - Session 4 Quantum Mechancs I - Sesson 4 Aprl 3, 05 Contents Operators Change of Bass 4 3 Egenvectors and Egenvalues 5 3. Denton....................................... 5 3. Rotaton n D....................................

More information

{ In general, we are presented with a quadratic function of a random vector X

{ In general, we are presented with a quadratic function of a random vector X Quadratc VAR odel Mchael Carter à Prelnares Introducton Suppose we wsh to quantfy the value-at-rsk of a Japanese etals tradng fr that has exposure to forward and opton postons n platnu. Soe of the postons

More information

The Geometry of Logit and Probit

The Geometry of Logit and Probit The Geometry of Logt and Probt Ths short note s meant as a supplement to Chapters and 3 of Spatal Models of Parlamentary Votng and the notaton and reference to fgures n the text below s to those two chapters.

More information

Scattering by a perfectly conducting infinite cylinder

Scattering by a perfectly conducting infinite cylinder Scatterng by a perfectly conductng nfnte cylnder Reeber that ths s the full soluton everywhere. We are actually nterested n the scatterng n the far feld lt. We agan use the asyptotc relatonshp exp exp

More information

princeton univ. F 17 cos 521: Advanced Algorithm Design Lecture 7: LP Duality Lecturer: Matt Weinberg

princeton univ. F 17 cos 521: Advanced Algorithm Design Lecture 7: LP Duality Lecturer: Matt Weinberg prnceton unv. F 17 cos 521: Advanced Algorthm Desgn Lecture 7: LP Dualty Lecturer: Matt Wenberg Scrbe: LP Dualty s an extremely useful tool for analyzng structural propertes of lnear programs. Whle there

More information

Economics 101. Lecture 4 - Equilibrium and Efficiency

Economics 101. Lecture 4 - Equilibrium and Efficiency Economcs 0 Lecture 4 - Equlbrum and Effcency Intro As dscussed n the prevous lecture, we wll now move from an envronment where we looed at consumers mang decsons n solaton to analyzng economes full of

More information

On Finite Rank Perturbation of Diagonalizable Operators

On Finite Rank Perturbation of Diagonalizable Operators Functonal Analyss, Approxmaton and Computaton 6 (1) (2014), 49 53 Publshed by Faculty of Scences and Mathematcs, Unversty of Nš, Serba Avalable at: http://wwwpmfnacrs/faac On Fnte Rank Perturbaton of Dagonalzable

More information

IV. Performance Optimization

IV. Performance Optimization IV. Performance Optmzaton A. Steepest descent algorthm defnton how to set up bounds on learnng rate mnmzaton n a lne (varyng learnng rate) momentum learnng examples B. Newton s method defnton Gauss-Newton

More information

Multiplicative Functions and Möbius Inversion Formula

Multiplicative Functions and Möbius Inversion Formula Multplcatve Functons and Möbus Inverson Forula Zvezdelna Stanova Bereley Math Crcle Drector Mlls College and UC Bereley 1. Multplcatve Functons. Overvew Defnton 1. A functon f : N C s sad to be arthetc.

More information

From Biot-Savart Law to Divergence of B (1)

From Biot-Savart Law to Divergence of B (1) From Bot-Savart Law to Dvergence of B (1) Let s prove that Bot-Savart gves us B (r ) = 0 for an arbtrary current densty. Frst take the dvergence of both sdes of Bot-Savart. The dervatve s wth respect to

More information

763622S ADVANCED QUANTUM MECHANICS Solution Set 1 Spring c n a n. c n 2 = 1.

763622S ADVANCED QUANTUM MECHANICS Solution Set 1 Spring c n a n. c n 2 = 1. 7636S ADVANCED QUANTUM MECHANICS Soluton Set 1 Sprng 013 1 Warm-up Show that the egenvalues of a Hermtan operator  are real and that the egenkets correspondng to dfferent egenvalues are orthogonal (b)

More information

Description of the Force Method Procedure. Indeterminate Analysis Force Method 1. Force Method con t. Force Method con t

Description of the Force Method Procedure. Indeterminate Analysis Force Method 1. Force Method con t. Force Method con t Indeternate Analyss Force Method The force (flexblty) ethod expresses the relatonshps between dsplaceents and forces that exst n a structure. Prary objectve of the force ethod s to deterne the chosen set

More information