ON THE EFFICIENT UPDATE OF RECTANGULAR LU FACTORIZATIONS SUBJECT TO LOW RANK MODIFICATIONS


PETER STANGE, ANDREAS GRIEWANK, AND MATTHIAS BOLLHÖFER

Abstract. In this paper we introduce a new method for the computation of KKT matrices that arise from solving constrained, nonlinear optimization problems. This method requires the updating of null-space factorizations after a low rank modification. The update procedure has the advantage that it is significantly cheaper than a re-factorization of the system at each new iterate. This paper focuses on the cheap update of a rectangular LU decomposition after a rank-1 modification. Two different procedures for updating the LU factorization are presented in detail and compared regarding their computational cost and their stability. Moreover we introduce an extension of these algorithms which further improves the computation time. This turns out to be an excellent alternative to algorithms based on orthogonal transformations.

Key words. KKT system, LU factorization, low-rank modification

1. Introduction. This work is motivated by the solution of the following constrained optimization problem:

    min_{x in R^n} f(x)  subject to  c_i(x) = 0 (i in I),  c_j(x) >= 0 (j in E),
    where I ∩ E = ∅ and I ∪ E = {1, ..., k}.                                  (1.1)

If the m <= k active constraints are known, i.e. c_j(x) = 0 holds for the active j in E, optima of this problem are locally characterized as saddle points of the Lagrange function

    L(x, λ) = f(x) + λ^T c(x) = f(x) + Σ_{i=1}^m λ_i c_i(x),  λ = (λ_1, ..., λ_m)^T.   (1.2)

These saddle points can be computed by solving the stationarity condition

    0 = ∇_{x,λ} L(x, λ) = [g(x, λ), c(x)] = [∇f(x) + Σ_{i=1}^m λ_i ∇c_i(x), c(x)].

This can be done by the quasi-Newton method introduced in [7]. Thereby a sequence of linearized KKT systems of the form

    [ B_k  A_k^T ] [ s_k ]       [ g_k ]
    [ A_k   0    ] [ σ_k ]  = −  [ c_k ]                                      (1.3)

has to be solved. Here the matrices A_k in R^{m×n} and B_k in R^{n×n} (with n >= m) approximate the Jacobian ∇c(x)^T of the active constraints and the Hessian of the Lagrange function ∇_x^2 L = ∇^2 f(x) + Σ_{i=1}^m λ_i ∇^2 c_i(x).
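One linearized KKT step (1.3) can be illustrated in a few lines of numpy; the data below are random stand-ins chosen for shape only, not a real optimization problem:

```python
import numpy as np

# Solve one KKT system (1.3): [[B, A^T], [A, 0]] [s; sigma] = -[g; c].
rng = np.random.default_rng(0)
n, m = 6, 2
B = np.eye(n)                      # SPD Hessian approximation (stand-in)
A = rng.standard_normal((m, n))    # full-rank Jacobian approximation (stand-in)
g = rng.standard_normal(n)
c = rng.standard_normal(m)

K = np.block([[B, A.T], [A, np.zeros((m, m))]])
sol = np.linalg.solve(K, -np.concatenate([g, c]))
s, sigma = sol[:n], sol[n:]

# The step satisfies both block equations of (1.3):
assert np.allclose(B @ s + A.T @ sigma, -g)
assert np.allclose(A @ s, -c)
```

The point of the paper is precisely to avoid forming and refactorizing this dense (n+m)-dimensional system at every iterate.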
The vectors s_k and σ_k are the current optimization steps, from which the new point and the new Lagrange multipliers can be obtained by

    x_{k+1} = x_k + s_k  and  λ_{k+1} = λ_k + σ_k.                            (1.4)

This is a full step. The convergence can be globalized by reduced steps

    x_{k+1} = x_k + α_k s_k  and  λ_{k+1} = λ_k + α_k σ_k                     (1.5)

(This work was supported by the DFG research center Matheon "Mathematics for key technologies" in Berlin. Affiliations: Institute for Mathematics, Technische Universität Berlin, Germany; Institute for Mathematics, Humboldt-Universität zu Berlin, Germany; Institute for Mathematics, Technische Universität Berlin, Germany.)

with a step length α_k, e.g. obtained by a line search. Throughout the paper we make no assumptions on the sparsity pattern of ∇c and ∇^2 L, i.e., A_k, B_k might be dense matrices. The approximate Jacobian A_k and the approximate Hessian B_k will be updated in every step of the optimization procedure. Here we consider rank-one updates that are linearly invariant and can be efficiently computed by Automatic Differentiation [5]. With this technique it is feasible to compute matrix-vector and vector-matrix products with derivative matrices cheaply. In the sequel we drop the index k to simplify the notation; the updated matrices A_{k+1}, B_{k+1} will be denoted by A^+, B^+.

1.1. Updating the Hessian. The Hessian will be modified as shown in [7] by the symmetric-rank-one update formula (SR1):

    B^+ = B + (w − Bs)(w − Bs)^T / ((w − Bs)^T s) ≡ B + ε h h^T,              (1.6)

where

    w = B^+ s ≡ g(x^+, λ) − g(x, λ)                                           (1.7)

and s ≡ s_k is as in (1.3). It can be seen from (1.7) that this update satisfies the direct secant condition. By Automatic Differentiation the vectors g can be evaluated in the adjoint mode without knowledge of the full Jacobian ∇c(x).

1.2. Updating the Jacobian. This matrix can be updated similarly to the Hessian with the two-sided-rank-one update formula (TR1) [7]:

    A^+ = A + (y − As)(μ^T − σ^T A) / (μ^T s − σ^T A s) ≡ A + δ r ρ^T.        (1.8)

It satisfies the direct secant condition

    A^+ s = y ≡ c(x^+) − c(x)                                                 (1.9)

and the adjoint secant condition up to O(||σ|| ||s||^2),

    σ^T A^+ = μ^T ≡ σ^T ∇c(x^+),                                              (1.10)

where σ ≡ σ_k and s ≡ s_k are as in (1.3). The term (1.10) can be computed in the reverse or adjoint mode of automatic differentiation. Unless the constraint function c(x) is affine, the two conditions will not be exactly consistent; but the deviation is only of order O(||σ|| ||s||^2), which is within the scope of quasi-Newton methods. More details about these two updates in this optimization context are given in [8].

1.3. Null-space Representation. For the solution of KKT systems it is necessary to solve the linear system of equations (1.3). One way to do this consists of decomposing the matrices A and B. Here and throughout the paper we assume that A has full rank.
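The secant properties of the SR1 and TR1 updates above can be checked numerically. In the following sketch all data are random stand-ins; for TR1 we generate y and μ from an affine constraint model c(x) = Jx + b, for which, as noted above, both secant conditions hold exactly:

```python
import numpy as np

# Check the secant properties of SR1 (1.6)-(1.7) and TR1 (1.8)-(1.10).
rng = np.random.default_rng(1)
n, m = 5, 3
B = rng.standard_normal((n, n)); B = B + B.T   # symmetric Hessian stand-in
A = rng.standard_normal((m, n))                # Jacobian approximation
J = rng.standard_normal((m, n))                # "true" affine Jacobian (assumed)
s = rng.standard_normal(n)
sigma = rng.standard_normal(m)
w = rng.standard_normal(n)                     # stands in for g(x+,l) - g(x,l)
y = J @ s                                      # c(x+) - c(x) for affine c
mu = J.T @ sigma                               # (sigma^T grad c(x+))^T

# SR1 (1.6): B+ = B + (w - Bs)(w - Bs)^T / ((w - Bs)^T s)
r = w - B @ s
Bp = B + np.outer(r, r) / (r @ s)
assert np.allclose(Bp @ s, w)                  # direct secant condition (1.7)

# TR1 (1.8): A+ = A + (y - As)(mu - A^T sigma)^T / ((mu - A^T sigma)^T s)
q = y - A @ s
p = mu - A.T @ sigma
Ap = A + np.outer(q, p) / (p @ s)
assert np.allclose(Ap @ s, y)                  # direct secant condition (1.9)
assert np.allclose(Ap.T @ sigma, mu)           # adjoint secant condition (1.10)
```

With a nonlinear c, the adjoint condition would only hold up to the O(||σ|| ||s||^2) deviation discussed above.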
In contrast to [8], a complete LU factorization instead of a QR decomposition of the approximate Jacobian will be performed:

    P_z A P_s = L U,                                                          (1.11)

where U = [U_1  U_2] with U_1 in R^{m×m} nonsingular, U_2 in R^{m×d} and d = n − m. As in [8], the approximate Hessian will be projected onto the null- and range-space of A^T. The columns of

    Z = P_s Z̃  with  Z̃ = [ −U_1^{−1} U_2 ]
                          [      I_d      ]                                   (1.12)

form a basis of the null-space of A. As range-space basis we use the columns of the permuted identity augmented by a block of zeros,

    Y = P_s Ỹ = P_s [ I_m ]
                     [  0  ]   (zero block of size (n−m)×m).                  (1.13)

Combining these spaces to Q = [Y  Z], system (1.3) can be transformed by multiplying from the left and the right by Q^T and Q, respectively:

    [ E            C    U_1^T L^T P_z ] [ s_y ]       [ Y^T g ]
    [ C^T          M    0             ] [ s_z ]  = −  [ Z^T g ]               (1.14)
    [ P_z^T L U_1  0    0             ] [ σ   ]       [   c   ]

with the notation

    E = Y^T B Y in R^{m×m},  C = Y^T B Z in R^{m×d},  M = Z^T B Z in R^{d×d}.  (1.15)

Initially we assume that M is positive definite. This can be achieved by starting with identity matrices for A and B. Factorizing the matrix M is necessary to solve system (1.14). To do so we use a transposed Cholesky factorization M = R R^T, where R is an upper triangular matrix. This is only possible if M is positive definite in every step; we ensure this e.g. by damping the (SR1) or (TR1) update. Further note that, because of their structure, it is not necessary to store Y and Z explicitly. Instead we use the implicit representation via the LU decomposition of A. Consequently our total storage requirement is the same as that for A and B, and far less than that required by a QR factorization.

1.4. Updating Decompositions. Decomposing the KKT system (1.3) in the previously described way costs O(n^3) operations in every Newton step, which is quite expensive in the case of large problems. The computational effort can be reduced by one order, to O(n^2), if A and B need not be refactorized in every step. Because of the low-rank corrections to A and B by the (SR1) and (TR1) update formulas, this is possible by updating the factors directly. Two different procedures for updating the LU factorization will be presented in detail and compared regarding their computational cost and their stability. Moreover we introduce an extension of these algorithms which further improves the computation time. Numerical examples confirm that this approach is an excellent alternative to algorithms based on orthogonal transformations.

2. LU Updating.
The LU factorization P_z A P_s = L U of a dense matrix A in R^{m×n} has an algebraic complexity of O(n m^2) operations in general. In particular in our application, where a sequence of KKT systems has to be solved, the associated sub-matrices undergo a sequence of low-rank modifications; recomputing the factorization of A in every step would be wasteful. Thus it is better to factorize A only once at the beginning of the computation. Then the factors P_z, P_s, L and U can be updated directly with an effort of O(mn) operations. Next, two different algorithms, by Bennett [1] and by Schwetlick/Kielbasinski [10] respectively Fletcher/Matthews [2], will be illustrated. Also a new method combining these two algorithms is shown. In addition a new approach for the case of column permutations will be presented. Further we show how the updating procedure can be made faster by an efficient row-wise implementation and by exploiting the structure of the low rank term. Moreover it will be shown that the algorithm by Bennett is advantageous in the symmetric positive definite case. The basic problem can be described as follows. Let the rank-one modification be

    A^+ = A + u v^T,                                                          (2.1)

where u in R^m and v in R^n, let the decomposition

    P_z A P_s = L U                                                           (2.2)

be given, and assume that A^+ also has full rank. We want to compute new updated factors L^+, U^+, P_z^+, P_s^+ such that

    A^+ = (P_z^+)^T L^+ U^+ (P_s^+)^T = P_z^T L U P_s^T + u v^T.              (2.3)

2.1. Algorithm I - Schwetlick/Kielbasinski. At first we introduce the updating algorithm by Schwetlick/Kielbasinski [10]. It can be used for updating an LU factorization by a rank-one term using only row pivoting. In this method L is a lower triangular matrix with unit diagonal. During the procedure the column permutation P_s is kept unchanged. In the case of rectangular matrices A this may cause problems: after updating it can occur that the leading part U_1^+ of U^+ does not have full rank. Then it is necessary to permute a column from the rear part to the front in order to restore the regularity of U_1. A new method for finding and permuting appropriate columns will be described in addition to the main algorithm. The updating procedure without column pivoting (P_s ≡ I) works as follows. From (2.3) one obtains

    A^+ = P_z^T L (U + ũ v^T)  with  P_z^T L ũ = u.                           (2.4)

Using a sequence of elementary transformations, the vector ũ is reduced to a multiple of the first unit vector. This is done by eliminating the components of ũ step by step, starting from the last one and going upwards. This procedure requires pivoting since the entries of ũ may be small. The elimination process looks like

    A^+ = (P_z^T P_u^T) (P_u L T_u^{−1}) (T_u U + T_u ũ v^T),                 (2.5)

where

    T_u = T_{m−1} T_{m−2} ... T_1,  P_u = P_{m−1} P_{m−2} ... P_1.            (2.6)

In the simplest case we can find a lower triangular matrix T_i such that

    A^{(i)} = P_z^T (L T_i^{−1}) (T_i U + T_i ũ v^T)                          (2.7)

and T_i eliminates ũ_{i+1}. If pivoting is required then T_i = T_{i,U} T_{i,L} P_i is used, where a) P_i is chosen to interchange ũ_i and ũ_{i+1}, b) T_{i,L} is a lower unit triangular matrix that eliminates the (i+1)-th component of P_i ũ, and finally c) T_{i,U} turns (P_i L P_i^T) T_{i,L}^{−1} back to lower triangular form. Schematically, writing L_a = P_i L P_i^T, U_a = P_i U and u_a = P_i ũ, one pivoted step reads

    P_i^T (P_i L P_i^T) (P_i U + P_i ũ v^T)
      = P_i^T (L_a T_{i,L}^{−1}) (T_{i,L} U_a + T_{i,L} u_a v^T)
      = P_i^T (L_b T_{i,U}^{−1}) (T_{i,U} U_b + T_{i,U} u_b v^T)
      = P_i^T L_0 (U_0 + u_0 v^T),

where the (i+1)-th component of u_0 has been eliminated.

Repeating the elimination process finally leads to

    A^+ = (P_z^T P_u^T) (P_u L T_u^{−1}) (T_u U + T_u ũ v^T) = (P_z^T P_u^T) L̄ (Ũ + û v^T),   (2.8)

where

    T_u = T_{m−1} T_{m−2} ... T_1,  P_u = P_{m−1} P_{m−2} ... P_1,            (2.9)

Ũ = T_u U is upper Hessenberg, û = T_u ũ, P_u is a permutation matrix and L̄ = P_u L T_u^{−1} is lower unit triangular. Since û is a multiple of the first unit vector, the rank-one term û v^T can be added to Ũ without destroying the Hessenberg form. Finally, to eliminate the extra lower sub-diagonal of the resulting Hessenberg matrix H = Ũ + û v^T, a second sequence of transformations has to be done according to the same principle:

    A^+ = (P̄^T P_d^T) (P_d L̄ T_d^{−1}) (T_d H) = (P_z^+)^T L^+ U^+,          (2.10)

where P̄ = P_u P_z and

    T_d = T̄_1 T̄_2 ... T̄_{m−1},  P_d = P̄_1 P̄_2 ... P̄_{m−1}.              (2.11)

The number of operations for updating an (m×n)-matrix by this algorithm amounts to k n m with 5 <= k <= 9, where the best case arises if no permutations are required and the worst bound is attained in the opposite case, i.e. if one has to permute every row. To achieve numerical stability, pivoting is done in [10] if

    |ũ_i| < |l_{i+1,i} ũ_i + ũ_{i+1}|.                                        (2.12)

This condition ensures that T_{i,L}^{−1} and T_{i,U}^{−1} always eliminate a small element by a larger one in modulus. Further it guarantees that all entries in the first lower sub-diagonal of L̄ remain smaller than one in modulus; hence all other elements of this matrix can grow only by a factor of three. Certainly, this strategy causes a large number of permutations, whereby the runtime of the algorithm grows. For this reason it is advisable to include a damping factor τ in (2.12):

    |ũ_i| < τ |l_{i+1,i} ũ_i + ũ_{i+1}|,                                      (2.13)

where 0 < τ <= 1. As a compromise between numerical stability and algebraic efficiency we suggest to choose τ = 0.1.

2.2. Column Permutations. In the case of updating a rectangular LU factorization, column permutations in U are required to keep the leading part of U nonsingular. They are necessary if an element on the main diagonal of U_1 becomes very small or equals zero. Then this column u_i of U_1 must be interchanged with a suitable column u_j of U_2. This permutation corresponds to

    U^+ = U P_{ij},                                                           (2.14)

where P_{ij} interchanges the columns i, j and 0 < i <= m < j <= n.
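A column interchange of this kind can equivalently be written as a rank-one correction of U; this is the identity exploited later in (2.19). A quick numpy check (indices 0-based, data random):

```python
import numpy as np

# A column interchange U @ P_ij equals the rank-one correction
# U + (u_i - u_j)(e_j - e_i)^T, cf. (2.19).
rng = np.random.default_rng(2)
m, n = 3, 5
U = np.triu(rng.standard_normal((m, n)))
i, j = 1, 4                              # swap a leading and a rear column

Uswap = U.copy()
Uswap[:, [i, j]] = Uswap[:, [j, i]]      # explicit permutation U @ P_ij

e = np.eye(n)
Urank1 = U + np.outer(U[:, i] - U[:, j], e[j] - e[i])
assert np.allclose(Urank1, Uswap)
```

This is why the interchange can be fed through the same rank-one machinery as the update of A itself.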
Assume that after a rank-one modification the matrix U^{(1)} = [U_1  U_2] has a very small entry ε on the diagonal of U_1, say in position i of column u_i:

    U^{(1)} = [ U_1  U_2 ],  (U_1)_{ii} = ε,                                  (2.15)

where 0 <= |ε| << max |diag(U_1)|. Then column u_i can be seen as an approximate linear combination of the columns in front of it; in this case U_1 is almost rank deficient. To restore nonsingularity, u_i is moved to the rear part U_2. The first task is to determine a suitable column u_j of U_2 which can be interchanged with u_i to restore the regularity of the first block. Therefore the element ε must first be transformed to the last diagonal position u_{mm}: we remove column u_i and re-insert it in the m-th position. This corresponds to a column permutation in U_1:

    U^{(2)} = U^{(1)} P^{(1)}.                                                (2.16)

Now U^{(2)} has become upper Hessenberg and the new lower sub-diagonal entries have to be eliminated with the elementary transformations introduced in Section 2.1:

    U^{(3)} = T U^{(2)}.                                                      (2.17)

Now the small element ε has been moved down in U and we can compare it with the other elements in the last row of U_2. The column u_j of U_2 associated with the largest entry u_{mj} in modulus is interchanged with the transformed column ũ_i:

    U^+ = U^{(3)} P^{(2)}.                                                    (2.18)

Now the matrix U_1 has been transformed to the desired nonsingular form. The steps (2.16) and (2.17) are only necessary to determine which column of U_2 must be permuted. The permutation itself can be done as the following rank-one update, directly interchanging u_i and u_j:

    U^+ = U + (u_i − u_j)(e_j − e_i)^T.                                       (2.19)

So it is more advantageous to perform (2.16) and (2.17) on a temporary vector instead of explicitly on U.

2.3. Algorithm II - Bennett. The algorithm by Bennett updates a triangular factorization by directly changing the factors L and U step by step. All matrices except the permutations are as in (2.1)-(2.3). To our knowledge this procedure cannot be combined with pivoting. Of course this can cause numerical instabilities, but it offers great improvements in runtime. Bennett's approach is quite different from Algorithm I. Here the update is done recursively, based on

    A = [ A_11  A_12 ]  =  L_1 [ 1  0 ] U_1,                                  (2.20)
        [ A_21  A_22 ]         [ 0  Ã ]

where Ã = A_22 − L_21 U_12 represents the Schur complement.
Here L_1 is the identity matrix in which only the elements of the first column are replaced by the corresponding elements (1, L_21) of the first column of L; U_1 is defined analogously with the first row (U_11, U_12) of U. Using this notation we can update the matrix factors row by row and column by column, beginning at the top. Given

    A^+ = A + γ u v^T = L_1 [ 1  0 ] U_1 + γ [ u_1 ] [ v_1  v_2 ],            (2.21)
                            [ 0  Ã ]         [ u_2 ]

where u_1 and v_1 are the (scalar) first components of u and v, we obtain

    A^+ = L_1^+ [ 1  0   ] U_1^+,                                             (2.22)
                [ 0  Ã^+ ]

where

    U_11^+ = U_11 + γ u_1 v_1,
    L_21^+ = (L_21 U_11 + γ u_2 v_1) / U_11^+,                                (2.23)
    U_12^+ = U_12 + γ u_1 v_2^T.

In addition the new Schur complement Ã^+ can be represented as a remaining rank-one modification of the former Schur complement in the form

    Ã^+ = Ã + γ̃ ũ ṽ^T,                                                      (2.24)

where

    γ̃ = γ / U_11^+,  ũ = u_2 − L_21 u_1,  ṽ^T = U_11 v_2^T − v_1 U_12.      (2.25)

After calculating these terms the procedure can be restarted with the new reduced rank-one update (2.24) for Ã. The scalar factor γ̃ can be included in one of the vectors ũ, ṽ^T. That way the complete updating process can be done step by step, yielding the new factors L^+ and U^+. The number of operations for updating an (m×n)-matrix by this algorithm amounts to 4nm, cf. Figure 2.1. This is always better than Algorithm I, even if no permutations are necessary there. Furthermore this method can easily be extended to higher rank modifications, and it is applicable to the symmetric positive definite case. Moreover, Bennett's algorithm can be implemented in a way which allows row-wise computation of the new elements of L and U. This is a significant point in modern computations, where memory accesses need more and more time compared to floating point operations, which become faster. As in Algorithm I, where row-wise memory access is only possible if no pivoting is required, this significantly reduces the computational time for updating the matrix L. Therefore the modification of L has to be delayed; this corresponds to the kj-variant of Gaussian elimination which is used e.g. for incomplete LU factorizations [11]. Figure 2.1 displays the new row-wise algorithm in contrast to the standard version of Bennett. Figures 2.2 and 2.3 show the differences in matrix access between these two algorithms.
The hatching shows which matrix areas have been changed up to step i of the updating procedure. Nevertheless, the main problem with this algorithm is that no way is known to combine pivoting with the low rank update; this can cause numerical stability problems during the updating procedure. Remark: in the case of updating the LDL^T factorization of a symmetric positive definite matrix, this method corresponds to the algorithm introduced in [3]. It turns out that this updating technique is an excellent alternative to methods using plane rotations.
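Bennett's recursion (2.21)-(2.25), specialized to an LU pair with γ = 1, boils down to the elementwise loops shown in Figure 2.1. A minimal numpy translation (ours, without pivoting) together with a consistency check on a well-conditioned example:

```python
import numpy as np

def bennett_update(L, U, u, v):
    """Rank-one update of A = L @ U to A + u v^T, in place, no pivoting.
    L: m x m unit lower triangular, U: m x n upper triangular.
    Elementwise form of Bennett's recursion (2.21)-(2.25) with gamma = 1."""
    m, n = U.shape
    u, v = u.astype(float).copy(), v.astype(float).copy()
    for i in range(m):
        U[i, i] += u[i] * v[i]          # diagonal update
        v[i] /= U[i, i]
        for j in range(i + 1, m):       # L update
            u[j] -= u[i] * L[j, i]
            L[j, i] += v[i] * u[j]
        for j in range(i + 1, n):       # U update
            U[i, j] += u[i] * v[j]
            v[j] -= v[i] * U[i, j]
    return L, U

# quick check on a rectangular, well-conditioned example
rng = np.random.default_rng(3)
m, n = 4, 6
L = np.tril(0.1 * rng.standard_normal((m, m)), -1) + np.eye(m)
U = np.triu(0.1 * rng.standard_normal((m, n))) + np.eye(m, n)
u = 0.1 * rng.standard_normal(m)
v = 0.1 * rng.standard_normal(n)
Aplus = L @ U + np.outer(u, v)          # reference result
bennett_update(L, U, u, v)
assert np.allclose(L @ U, Aplus)
```

Without pivoting, a small updated diagonal entry U_ii would contaminate the remaining steps, which is exactly the stability concern raised in the text.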

Standard recursive LU updating
1: for i = 1 to m do
2:   // diagonal update
3:   U_ii = U_ii + u_i v_i
4:   v_i = v_i / U_ii
5:   for j = i+1 to m do
6:     // L update
7:     u_j = u_j − u_i L_ji
8:     L_ji = L_ji + v_i u_j
9:   end for
10:  for j = i+1 to n do
11:    // U update
12:    U_ij = U_ij + u_i v_j
13:    v_j = v_j − v_i U_ij
14:  end for
15: end for

Row-wise recursive LU updating
1: for i = 1 to m do
2:   for j = 1 to i−1 do
3:     // delayed L update
4:     u_i = u_i − u_j L_ij
5:     L_ij = L_ij + v_j u_i
6:   end for
7:   // diagonal update
8:   U_ii = U_ii + u_i v_i
9:   v_i = v_i / U_ii
10:  for j = i+1 to n do
11:    // U update
12:    U_ij = U_ij + u_i v_j
13:    v_j = v_j − v_i U_ij
14:  end for
15: end for

Fig. 2.1. Standard and row-wise algorithms for the rank-one modification of the LU factorization

Fig. 2.2. Standard Bennett algorithm (matrix-access pattern; figure not reproduced)

Fig. 2.3. Row-wise Bennett algorithm (matrix-access pattern; figure not reproduced)

2.4. Algorithm III - Extension. Here a new combination of the previously described algorithms is presented, combining the good features of both. Furthermore the method is improved for update vectors beginning with a leading part consisting of zero elements. This is useful in the KKT application introduced in this paper, where linear constraints cause such zero entries. The algorithm proceeds sequentially in the following three stages:
- exploiting leading zeros in u resp. v for the update,
- starting with Algorithm II as long as it is stable,
- switching to Algorithm I in the case that pivoting is required.
In each step the whole updating problem is reduced as displayed in Figure 2.4.

Fig. 2.4. Alg. III (successive reduction of the updating problem to the trailing blocks L̂, Û; figure not reproduced)

Step 1: If one of the update vectors has a leading block of zero entries, only a sub-matrix of A has to be changed. We illustrate the influence on the factors for the case u^T = (0, u_2^T). Here only the lower part of A is modified:

    A^+ = [ A_1           ]  =  [ L_11 U_11              L_11 U_12                         ]   (2.26)
          [ A_2 + u_2 v^T ]     [ L_21 U_11 + u_2 v_1^T  L_21 U_12 + L_22 U_22 + u_2 v_2^T ]

That means that the matrix parts L_11, U_11 and U_12 remain unchanged. The other parts can be computed as follows. From

    L_21^+ U_11 = L_21 U_11 + u_2 v_1^T,  i.e.  (L_21^+ − L_21) U_11 = u_2 v_1^T,

we obtain the formula to calculate L_21^+:

    L_21^+ = L_21 + u_2 v_1^T U_11^{−1}.

Now the remaining new sub-matrices L_22^+ and U_22^+ can be computed in the form of a new dense LU updating problem. This is represented by

    L_21^+ U_12 + L_22^+ U_22^+ = L_21 U_12 + L_22 U_22 + u_2 v_2^T
    ⟹  L_22^+ U_22^+ = L_22 U_22 + u_2 (v_2^T − v_1^T U_11^{−1} U_12) = L_22 U_22 + u_2 ṽ^T.   (2.27)

From (2.27) we conclude that it is sufficient to consider the reduced updating problem for L_22 and U_22. The computation can be done in an analogous way if the vector v has leading zero entries.

Step 2: Given the remaining submatrices L_22, U_22 and the update vectors u_2, ṽ^T, we initially use Algorithm II, with v_2 replaced by ṽ. As we will see in Section 4, this algorithm is significantly faster than Algorithm I. For reasons of efficiency we choose the row-wise version. Algorithm II is used as long as it is stable (see Step 3). If we have to stop it for stability reasons, L_21^+ is not yet computed; then L_21 is modified using L_11, Ũ_11 by the standard version of Algorithm II.

Step 3: In step i we switch to Algorithm I if

    |U_ii^+| <= τ max_j |U_ij^+|,  where 0 < τ <= 1.                          (2.28)

This condition is used to safeguard the updating process; it avoids tiny pivots. From this point on, the remaining matrix parts L̂ and Û are updated by Algorithm I. Notice that row and column permutations applied to L̂ and Û also have to be performed on the corresponding parts L_21^+, U_12^+, Ũ_12.

3. KKT Updating.
At first we have to modify the approximate Jacobian,

    A^+ = A + δ r ρ^T,                                                        (3.1)

where δ, r and ρ are as in Section 1.2. These terms are computed as part of the optimization process, using s and σ according to (1.8), after solving (1.3). Here we assume that A and A^+ have full rank; this can be guaranteed, e.g., by damping the update (3.1). The modification (3.1) can be done by one of the algorithms described in Section 2. Whenever A is modified we have to adjust its corresponding null-space basis Z. Due to the full rank of the approximate Jacobian, Z has full rank in every step.
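Because the block Ẑ = −U_1^{−1} U_2 of Z depends on A only through the factors, a rank-one change of A induces a rank-one change of Ẑ; Section 3.1 derives the corresponding Sherman-Morrison form (3.4). A numerical sketch, with P_z = P_s = I and random stand-in data:

```python
import numpy as np

# Null-space tracking: with P_z = P_s = I and A = L [U1 U2], verify A Z = 0
# for Z = [Zhat; I_d], Zhat = -U1^{-1} U2, and check the rank-one correction
# of Zhat under A+ = A + delta * r rho^T against direct recomputation.
rng = np.random.default_rng(4)
m, n = 3, 5
d = n - m
Lf = np.tril(0.2 * rng.standard_normal((m, m)), -1) + np.eye(m)
U = np.triu(0.2 * rng.standard_normal((m, n))) + np.eye(m, n)
A = Lf @ U
U1, U2 = U[:, :m], U[:, m:]
Zhat = -np.linalg.solve(U1, U2)
Z = np.vstack([Zhat, np.eye(d)])
assert np.allclose(A @ Z, 0)                   # Z spans the null-space of A

delta = 0.1                                    # rank-one update of A, as in (3.1)
r = rng.standard_normal(m)
rho = rng.standard_normal(n)
rt = delta * np.linalg.solve(Lf, r)            # r-tilde = delta * L^{-1} r
rho1, rho2 = rho[:m], rho[m:]

# direct recomputation from the updated factors ...
Zhat_direct = -np.linalg.solve(U1 + np.outer(rt, rho1), U2 + np.outer(rt, rho2))

# ... versus the Sherman-Morrison rank-one correction
alpha = 1.0 + rho1 @ np.linalg.solve(U1, rt)
z = -np.linalg.solve(U1, rt) / alpha
rho_z = rho2 - np.linalg.solve(U1.T, rho1) @ U2   # rho2^T - rho1^T U1^{-1} U2
Zhat_sm = Zhat + np.outer(z, rho_z)
assert np.allclose(Zhat_sm, Zhat_direct)
```

The correction costs only triangular solves and outer products, i.e. O(mn) work, matching the overall updating budget.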

3.1. Updating the Null-Space. If the approximate Jacobian is updated by a rank-one term, the modification of the corresponding null-space basis of A is of rank one, too. This rank correction can be computed in a way that Z maintains its trapezoidal structure. So we obtain

    Z^+ = Z + z ρ_z^T,  where z in R^n, ρ_z in R^d.                           (3.2)

Modifications only occur in the block Ẑ = −U_1^{−1} U_2 of Z; hence only the upper part of z, denoted by z̃ in R^m, is non-zero. These vectors z and ρ_z can be calculated using the Sherman-Morrison formula [4]. The new significant null-space block is given by

    Ẑ^+ = −(U_1^+)^{−1} U_2^+ = −[ U_1 + r̃ ρ̃_1^T ]^{−1} [ U_2 + r̃ ρ̃_2^T ],   (3.3)

where ρ̃^T = [ρ̃_1^T  ρ̃_2^T] = ρ^T P_s and r̃ = δ L^{−1} P_z r. Straightforward computation yields

    Ẑ^+ = Ẑ − ( U_1^{−1} r̃ / (1 + ρ̃_1^T U_1^{−1} r̃) ) ( ρ̃_2^T − ρ̃_1^T U_1^{−1} U_2 ) = Ẑ + z̃ ρ_z^T,   (3.4)

which represents the null-space modification as a rank-one correction corresponding to the update of A. Since a column interchange in A can also be read as a rank-one update, the resulting modification of Ẑ can be computed analogously. In the particular case of an additional column interchange in A, it is necessary to collect the two separate rank-one updates into a single rank-two update to avoid numerical problems. With the knowledge of z and ρ_z the projected Hessian can be adjusted with respect to the null-space of A. In the following this is described in detail.

3.2. Updating the Projected Hessian. The projected Hessian has to be modified whenever one of the following three cases occurs:
(a) the SR1 update (1.6): B^+ = B + ε h h^T,
(b) modification of the null-space (3.2): Z^+ = Z + z ρ_z^T,
(c) column permutation in A: P_s^+ = P_{ij} P_s.
(a) The matrices E and C are updated with respect to the rank-one correction of B. We obtain

    E^+ = Y^T B^+ Y = Y^T (B + ε h h^T) Y = E + ε Y^T h h^T Y                 (3.5)

and

    C^+ = Y^T B^+ Z = Y^T (B + ε h h^T) Z = C + ε Y^T h h^T Z.                (3.6)

The update of the projected Hessian M = Z^T B Z can be represented by

    Z^T B^+ Z = R^+ (R^+)^T = Z^T (B + ε h h^T) Z = R R^T + ε h_z h_z^T,      (3.7)

where h_z = Z^T h.
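One way to realize the update (3.7) for the transposed Cholesky form M = R R^T, with R upper triangular, is to reduce it to the standard lower-triangular rank-one Cholesky update via the row/column reversal permutation. The following sketch is our own reduction, not the paper's implementation:

```python
import numpy as np

def chol_update_lower(L, x):
    """Standard rank-one update: given lower triangular L with M = L L^T,
    overwrite L so that L L^T = M + x x^T (cf. the algorithms in [3, 4])."""
    x = x.astype(float).copy()
    n = L.shape[0]
    for k in range(n):
        r = np.hypot(L[k, k], x[k])
        c, s = r / L[k, k], x[k] / L[k, k]
        L[k, k] = r
        L[k+1:, k] = (L[k+1:, k] + s * x[k+1:]) / c
        x[k+1:] = c * x[k+1:] - s * L[k+1:, k]
    return L

def rrt_update(R, h):
    """Update the 'transposed Cholesky' form M = R R^T (R upper triangular)
    to M + h h^T: reversing rows and columns turns R R^T into an ordinary
    lower Cholesky factorization, which is updated and flipped back."""
    Lhat = np.flip(R).copy()            # lower triangular factor of J M J
    chol_update_lower(Lhat, np.flip(h))
    return np.flip(Lhat)

# check: M = R R^T updated by h_z h_z^T (eps = 1)
rng = np.random.default_rng(5)
d = 4
R = np.triu(rng.standard_normal((d, d))) + 3.0 * np.eye(d)
hz = rng.standard_normal(d)
M_new = R @ R.T + np.outer(hz, hz)
R_new = rrt_update(R, hz)
assert np.allclose(np.triu(R_new), R_new)      # still upper triangular
assert np.allclose(R_new @ R_new.T, M_new)
```

For ε < 0 (a downdate) the analogous hyperbolic variant would be needed, subject to the positive-definiteness safeguards discussed next.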
As long as the rank-one update is constructed to preserve positive definiteness, it can be computed with several algorithms for updating the Cholesky factorization [3, 4]. Here we have to pay attention to choose ε such that M remains positive definite; this can be achieved e.g. by damping the rank-one term or by pre-adjusting the Hessian [8]. (b) The changes in E and C caused by the null-space modification of Z can be computed very similarly to case (a). In contrast to the first case, M undergoes a rank-two modification:

    R^+ (R^+)^T = (Z^+)^T B Z^+ = (Z^T + ρ_z z^T) B (Z + z ρ_z^T) = R R^T + ρ_z b_z^T + b_z ρ_z^T + μ ρ_z ρ_z^T,   (3.8)

where b_z = Z^T B z and μ = z^T B z. Once more we have to avoid losing positive definiteness of M; this can be achieved using the strategies described in [8]. (c) In the case of column permutations applied to A, we first adjust the projected Hessian similarly to case (b), using a rank-two term. This rank-two correction consists of the modifications in Ẑ caused by the rank-one update of A (1.8) and by the column permutation in U (2.19), compare Section 3.1. Furthermore, the two rows of P_s which were interchanged cause additional changes in E, C and M. As before, the row interchange in the null-space, P_s^+ Z̃^+ = P_{ij} P_s Z̃^+, can be read as a rank-one update, which leads us back to case (b). Finally the permutation P_s occurring in the range-space basis Y (1.13) has to be regarded, too. This requires removing and re-computing some single rows, resp. single columns, in E and C. We show this for the matrix C. Starting with C = Ỹ^T P_s^T B P_s Z̃ we obtain C^{(b)} = Ỹ^T P_s^T B P_s^+ Z̃^+ after a low rank correction from the right. Our objective is to compute

    C^+ = Ỹ^T (P_s^+)^T B P_s^+ Z̃^+ = Ỹ^T P_s^T P_{ij}^T B P_s^+ Z̃^+.      (3.9)

Suppose that C^{(b)} is given; then C^+ can be obtained from C^{(b)} by replacing the i-th row c_i^T by c_j^T, i.e.

    C^{(b)} = [ c_1^T; ...; c_i^T; ...; c_m^T ]  ⟹  C^+ = [ c_1^T; ...; c_j^T; ...; c_m^T ],   (3.10)

where 1 <= i <= m < j <= n, and c_j^T can be computed as c_j^T = e_j^T P_s^T B Z. To do so we need the single row e_j^T P_s^T B of B. Since B is not explicitly stored we have to use its representation

    [ Y^T ] B [ Y  Z ] = [ E    C     ]
    [ Z^T ]              [ C^T  R R^T ]

    ⟹  B = P_s [ I_m    0  ] [ E    C     ] [ I_m  −Ẑ ] P_s^T.               (3.11)
                [ −Ẑ^T  I_d ] [ C^T  R R^T ] [ 0    I_d ]

4. Numerical Results. In this section we compare the algorithms described previously regarding their runtime. The tests were done on an Athlon XP machine with 256 kB CPU cache and 512 MB main memory. We implemented the algorithms in C under the operating system Linux, using the gcc compiler with option -O3. At first we compare the computation time of several low-rank update algorithms for rectangular matrices.
After that we solve a KKT problem arising from constrained optimization by the quasi-Newton approach of Section 1.

4.1. Rectangular LU Updating. The updating algorithms Alg. I, Alg. II and the QR updating algorithm of [10] are compared in runtime. In our computation we started with the identity and applied our algorithms to 50 randomly generated rank-one modifications. The computation times in Table 4.1 are given in seconds. Alg. I represents the updating method of Schwetlick; it is used in four different variations. The first two cases use τ = 0, which means that no row permutation is done; these two variants differ in the way L is updated. The third version (τ = 1) represents the original algorithm described in [10]. Version 4 of Schwetlick's method uses the relaxed parameter τ = 0.1, which typically reduces the number of row interchanges. Alg. II shows the method of Bennett, which is applied in its original form [1] and in the new row-wise implementation (Figure 2.1). The performance of the QR updating algorithm is shown in the column QR.

Table 4.1. Comparing LU updating. Columns: dimension (m, n); Alg. I (τ = 0); Alg. I (τ = 0, row-wise); Alg. I (τ = 1); Alg. I (τ = 0.1); Alg. II; Alg. II (row-wise); QR. (Times in seconds; the numerical entries are not preserved in this transcription.)

As we can see, the row-wise version of Alg. II is always the fastest; especially for large matrix dimensions it is significantly faster than all other methods. Of course it offers no possibility to prevent numerical instabilities by pivoting. Among all algorithms that address stability, Alg. I with τ = 0.1 turned out to perform best: it is approx. 30% faster than the case τ = 1 and approx. 40% faster than the QR update. For these experiments based on random updates we did not use column pivoting, since we observed that column interchanges were hardly necessary for this class of problems. For this reason Alg. III, which combines Alg. I and Alg. II, is discussed in the next example. In a further example we update a rectangular matrix 50 times by structured rank-one modifications. That means we use a sequence of rank-one corrections of the kind we expect in the optimization procedure. Starting with vectors of large norm leads to a lot of pivoting at the beginning; step by step we reduce this vector norm, so that finally no permutations are required. We compare Alg. III to the QR-based method, to Alg. I using full pivoting, and to the fast Bennett method. Table 4.2 presents the runtime for the total updating process.

Table 4.2. Comparing structured updating. Columns: dimension (n, m); QR-based; LU-based Alg. I; Alg. II; Alg. III. (The numerical entries are not preserved in this transcription.)

We can see that all LU-based algorithms, Alg. I, Alg. II (row-wise) and Alg. III, are faster than the QR version. Furthermore it turns out that only Alg. III can take advantage of the special structure of the updating sequence.

4.2. KKT solving. In this section we compare methods for solving a whole KKT problem. On the one hand, QR-based algorithms for factorizing and updating A_k, together with Givens techniques for modifying B_k, are considered. On the other hand, the new LU methods Alg. I and Alg. III applied to A_k are used in combination with the algorithm of [3] for B_k.
We use an optimization environment provided by Andrea Walther (Institute for Scientific Computing, Dresden, Germany). This code uses a globalization approach based on line search. For computing the required derivatives the AD tool ADOL-C [6] is used. We solve the following optimization problem [9]:

Minimize

    f(x) = Σ_{i=1}^{n−1} (x_i + x_{i+1})^2                                    (4.1)

subject to

    c_i(x) = x_i + 2 x_{i+1} − x_{i+2} − 1 = 0  (1 <= i <= n−2).              (4.2)

We initialize x as a random vector with −0.5 <= x_i <= 0.5. The derivative matrices in the first step are chosen as A_0 = [I  0] and B_0 = I. Owing to the quadratic-linear structure of this problem, the optimal solution should be reached after n iterations. As we can see in Table 4.3, the LU-based method is significantly faster than the QR-based version. Further, the number of iterations is close to the expected theoretical value.

Table 4.3. KKT example. Columns: n; LU-based (time, steps); QR-based (time, steps). (The numerical entries are not preserved in this transcription.)

5. Conclusions. We have introduced several new efficient algorithms for updating LU factorizations after low-rank modifications. The algorithm of Kielbasinski/Schwetlick was extended to rectangular matrices using additional column pivoting, without losing its quadratic complexity; this is a central point for applying LU decompositions to non-square systems. Moreover we improved the algorithm of Bennett with regard to efficient matrix access in memory. In addition a new method for specially structured updates was developed. These new algorithms can significantly improve different applications, as shown here for solving nonlinearly constrained optimization problems: the numerical results show that our method is much faster than, e.g., a QR-based version. So far we have not discussed the case when B_k turns out to be indefinite; currently the updates are sufficiently damped, and the generalization to the indefinite case will be discussed in an upcoming paper. Our promising numerical results indicate that our low-rank-modification algorithms can also be used efficiently in a wide field of applications.

REFERENCES

[1] J. Bennett, Triangular factors of modified matrices, Numerische Mathematik, 7 (1965).
[2] R. Fletcher and S. Matthews, Stable modification of explicit LU factors for simplex updates, Mathematical Programming, 30 (1984).
[3] R. Fletcher and M. Powell, On the modification of LDL^T factorizations, Mathematics of Computation, 28 (1974).
[4] G. Golub and C. van Loan, Matrix Computations, The Johns Hopkins University Press, second ed.
[5] A. Griewank, Evaluating Derivatives: Principles and Techniques of Algorithmic Differentiation, no. 19 in Frontiers in Appl. Math., SIAM.
[6] A. Griewank, D. Juedes, and J. Utke, ADOL-C, a package for the automatic differentiation of algorithms written in C/C++, ACM Trans. Math. Software, 22 (1996).
[7] A. Griewank and A. Walther, On constrained optimization by adjoint based quasi-Newton methods, Optimization Methods and Software, 17 (2002).
[8] A. Griewank, A. Walther, and M. Korzec, Maintaining factorized KKT systems subject to rank-one updates of Hessians and Jacobians, 2005.
[9] W. Hock and K. Schittkowski, Test examples for nonlinear programming codes, Lecture Notes in Economics and Mathematical Systems, 1987.
[10] A. Kielbasinski and H. Schwetlick, Numerische lineare Algebra, Verlag Harri Deutsch.
[11] Y. Saad, Iterative Methods for Sparse Linear Systems, PWS Publishing, Boston.


More information

Structure and Drive Paul A. Jensen Copyright July 20, 2003

Structure and Drive Paul A. Jensen Copyright July 20, 2003 Structure and Drve Paul A. Jensen Copyrght July 20, 2003 A system s made up of several operatons wth flow passng between them. The structure of the system descrbes the flow paths from nputs to outputs.

More information

EEE 241: Linear Systems

EEE 241: Linear Systems EEE : Lnear Systems Summary #: Backpropagaton BACKPROPAGATION The perceptron rule as well as the Wdrow Hoff learnng were desgned to tran sngle layer networks. They suffer from the same dsadvantage: they

More information

On a direct solver for linear least squares problems

On a direct solver for linear least squares problems ISSN 2066-6594 Ann. Acad. Rom. Sc. Ser. Math. Appl. Vol. 8, No. 2/2016 On a drect solver for lnear least squares problems Constantn Popa Abstract The Null Space (NS) algorthm s a drect solver for lnear

More information

4DVAR, according to the name, is a four-dimensional variational method.

4DVAR, according to the name, is a four-dimensional variational method. 4D-Varatonal Data Assmlaton (4D-Var) 4DVAR, accordng to the name, s a four-dmensonal varatonal method. 4D-Var s actually a drect generalzaton of 3D-Var to handle observatons that are dstrbuted n tme. The

More information

Time-Varying Systems and Computations Lecture 6

Time-Varying Systems and Computations Lecture 6 Tme-Varyng Systems and Computatons Lecture 6 Klaus Depold 14. Januar 2014 The Kalman Flter The Kalman estmaton flter attempts to estmate the actual state of an unknown dscrete dynamcal system, gven nosy

More information

form, and they present results of tests comparng the new algorthms wth other methods. Recently, Olschowka & Neumaer [7] ntroduced another dea for choo

form, and they present results of tests comparng the new algorthms wth other methods. Recently, Olschowka & Neumaer [7] ntroduced another dea for choo Scalng and structural condton numbers Arnold Neumaer Insttut fur Mathematk, Unverstat Wen Strudlhofgasse 4, A-1090 Wen, Austra emal: neum@cma.unve.ac.at revsed, August 1996 Abstract. We ntroduce structural

More information

CSci 6974 and ECSE 6966 Math. Tech. for Vision, Graphics and Robotics Lecture 21, April 17, 2006 Estimating A Plane Homography

CSci 6974 and ECSE 6966 Math. Tech. for Vision, Graphics and Robotics Lecture 21, April 17, 2006 Estimating A Plane Homography CSc 6974 and ECSE 6966 Math. Tech. for Vson, Graphcs and Robotcs Lecture 21, Aprl 17, 2006 Estmatng A Plane Homography Overvew We contnue wth a dscusson of the major ssues, usng estmaton of plane projectve

More information

Hongyi Miao, College of Science, Nanjing Forestry University, Nanjing ,China. (Received 20 June 2013, accepted 11 March 2014) I)ϕ (k)

Hongyi Miao, College of Science, Nanjing Forestry University, Nanjing ,China. (Received 20 June 2013, accepted 11 March 2014) I)ϕ (k) ISSN 1749-3889 (prnt), 1749-3897 (onlne) Internatonal Journal of Nonlnear Scence Vol.17(2014) No.2,pp.188-192 Modfed Block Jacob-Davdson Method for Solvng Large Sparse Egenproblems Hongy Mao, College of

More information

Formulas for the Determinant

Formulas for the Determinant page 224 224 CHAPTER 3 Determnants e t te t e 2t 38 A = e t 2te t e 2t e t te t 2e 2t 39 If 123 A = 345, 456 compute the matrx product A adj(a) What can you conclude about det(a)? For Problems 40 43, use

More information

Lectures - Week 4 Matrix norms, Conditioning, Vector Spaces, Linear Independence, Spanning sets and Basis, Null space and Range of a Matrix

Lectures - Week 4 Matrix norms, Conditioning, Vector Spaces, Linear Independence, Spanning sets and Basis, Null space and Range of a Matrix Lectures - Week 4 Matrx norms, Condtonng, Vector Spaces, Lnear Independence, Spannng sets and Bass, Null space and Range of a Matrx Matrx Norms Now we turn to assocatng a number to each matrx. We could

More information

Yong Joon Ryang. 1. Introduction Consider the multicommodity transportation problem with convex quadratic cost function. 1 2 (x x0 ) T Q(x x 0 )

Yong Joon Ryang. 1. Introduction Consider the multicommodity transportation problem with convex quadratic cost function. 1 2 (x x0 ) T Q(x x 0 ) Kangweon-Kyungk Math. Jour. 4 1996), No. 1, pp. 7 16 AN ITERATIVE ROW-ACTION METHOD FOR MULTICOMMODITY TRANSPORTATION PROBLEMS Yong Joon Ryang Abstract. The optmzaton problems wth quadratc constrants often

More information

On a Parallel Implementation of the One-Sided Block Jacobi SVD Algorithm

On a Parallel Implementation of the One-Sided Block Jacobi SVD Algorithm Jacob SVD Gabrel Okša formulaton One-Sded Block-Jacob Algorthm Acceleratng Parallelzaton Conclusons On a Parallel Implementaton of the One-Sded Block Jacob SVD Algorthm Gabrel Okša 1, Martn Bečka, 1 Marán

More information

Lecture 12: Discrete Laplacian

Lecture 12: Discrete Laplacian Lecture 12: Dscrete Laplacan Scrbe: Tanye Lu Our goal s to come up wth a dscrete verson of Laplacan operator for trangulated surfaces, so that we can use t n practce to solve related problems We are mostly

More information

5 The Rational Canonical Form

5 The Rational Canonical Form 5 The Ratonal Canoncal Form Here p s a monc rreducble factor of the mnmum polynomal m T and s not necessarly of degree one Let F p denote the feld constructed earler n the course, consstng of all matrces

More information

Chapter - 2. Distribution System Power Flow Analysis

Chapter - 2. Distribution System Power Flow Analysis Chapter - 2 Dstrbuton System Power Flow Analyss CHAPTER - 2 Radal Dstrbuton System Load Flow 2.1 Introducton Load flow s an mportant tool [66] for analyzng electrcal power system network performance. Load

More information

MEM 255 Introduction to Control Systems Review: Basics of Linear Algebra

MEM 255 Introduction to Control Systems Review: Basics of Linear Algebra MEM 255 Introducton to Control Systems Revew: Bascs of Lnear Algebra Harry G. Kwatny Department of Mechancal Engneerng & Mechancs Drexel Unversty Outlne Vectors Matrces MATLAB Advanced Topcs Vectors A

More information

CHAPTER 5 NUMERICAL EVALUATION OF DYNAMIC RESPONSE

CHAPTER 5 NUMERICAL EVALUATION OF DYNAMIC RESPONSE CHAPTER 5 NUMERICAL EVALUATION OF DYNAMIC RESPONSE Analytcal soluton s usually not possble when exctaton vares arbtrarly wth tme or f the system s nonlnear. Such problems can be solved by numercal tmesteppng

More information

Workshop: Approximating energies and wave functions Quantum aspects of physical chemistry

Workshop: Approximating energies and wave functions Quantum aspects of physical chemistry Workshop: Approxmatng energes and wave functons Quantum aspects of physcal chemstry http://quantum.bu.edu/pltl/6/6.pdf Last updated Thursday, November 7, 25 7:9:5-5: Copyrght 25 Dan Dll (dan@bu.edu) Department

More information

Additional Codes using Finite Difference Method. 1 HJB Equation for Consumption-Saving Problem Without Uncertainty

Additional Codes using Finite Difference Method. 1 HJB Equation for Consumption-Saving Problem Without Uncertainty Addtonal Codes usng Fnte Dfference Method Benamn Moll 1 HJB Equaton for Consumpton-Savng Problem Wthout Uncertanty Before consderng the case wth stochastc ncome n http://www.prnceton.edu/~moll/ HACTproect/HACT_Numercal_Appendx.pdf,

More information

APPENDIX A Some Linear Algebra

APPENDIX A Some Linear Algebra APPENDIX A Some Lnear Algebra The collecton of m, n matrces A.1 Matrces a 1,1,..., a 1,n A = a m,1,..., a m,n wth real elements a,j s denoted by R m,n. If n = 1 then A s called a column vector. Smlarly,

More information

Kernel Methods and SVMs Extension

Kernel Methods and SVMs Extension Kernel Methods and SVMs Extenson The purpose of ths document s to revew materal covered n Machne Learnng 1 Supervsed Learnng regardng support vector machnes (SVMs). Ths document also provdes a general

More information

LOW BIAS INTEGRATED PATH ESTIMATORS. James M. Calvin

LOW BIAS INTEGRATED PATH ESTIMATORS. James M. Calvin Proceedngs of the 007 Wnter Smulaton Conference S G Henderson, B Bller, M-H Hseh, J Shortle, J D Tew, and R R Barton, eds LOW BIAS INTEGRATED PATH ESTIMATORS James M Calvn Department of Computer Scence

More information

Deriving the X-Z Identity from Auxiliary Space Method

Deriving the X-Z Identity from Auxiliary Space Method Dervng the X-Z Identty from Auxlary Space Method Long Chen Department of Mathematcs, Unversty of Calforna at Irvne, Irvne, CA 92697 chenlong@math.uc.edu 1 Iteratve Methods In ths paper we dscuss teratve

More information

Lecture 3. Ax x i a i. i i

Lecture 3. Ax x i a i. i i 18.409 The Behavor of Algorthms n Practce 2/14/2 Lecturer: Dan Spelman Lecture 3 Scrbe: Arvnd Sankar 1 Largest sngular value In order to bound the condton number, we need an upper bound on the largest

More information

Lecture 10 Support Vector Machines II

Lecture 10 Support Vector Machines II Lecture 10 Support Vector Machnes II 22 February 2016 Taylor B. Arnold Yale Statstcs STAT 365/665 1/28 Notes: Problem 3 s posted and due ths upcomng Frday There was an early bug n the fake-test data; fxed

More information

College of Computer & Information Science Fall 2009 Northeastern University 20 October 2009

College of Computer & Information Science Fall 2009 Northeastern University 20 October 2009 College of Computer & Informaton Scence Fall 2009 Northeastern Unversty 20 October 2009 CS7880: Algorthmc Power Tools Scrbe: Jan Wen and Laura Poplawsk Lecture Outlne: Prmal-dual schema Network Desgn:

More information

Some modelling aspects for the Matlab implementation of MMA

Some modelling aspects for the Matlab implementation of MMA Some modellng aspects for the Matlab mplementaton of MMA Krster Svanberg krlle@math.kth.se Optmzaton and Systems Theory Department of Mathematcs KTH, SE 10044 Stockholm September 2004 1. Consdered optmzaton

More information

Salmon: Lectures on partial differential equations. Consider the general linear, second-order PDE in the form. ,x 2

Salmon: Lectures on partial differential equations. Consider the general linear, second-order PDE in the form. ,x 2 Salmon: Lectures on partal dfferental equatons 5. Classfcaton of second-order equatons There are general methods for classfyng hgher-order partal dfferental equatons. One s very general (applyng even to

More information

On the Multicriteria Integer Network Flow Problem

On the Multicriteria Integer Network Flow Problem BULGARIAN ACADEMY OF SCIENCES CYBERNETICS AND INFORMATION TECHNOLOGIES Volume 5, No 2 Sofa 2005 On the Multcrtera Integer Network Flow Problem Vassl Vasslev, Marana Nkolova, Maryana Vassleva Insttute of

More information

Mathematical Preparations

Mathematical Preparations 1 Introducton Mathematcal Preparatons The theory of relatvty was developed to explan experments whch studed the propagaton of electromagnetc radaton n movng coordnate systems. Wthn expermental error the

More information

Inner Product. Euclidean Space. Orthonormal Basis. Orthogonal

Inner Product. Euclidean Space. Orthonormal Basis. Orthogonal Inner Product Defnton 1 () A Eucldean space s a fnte-dmensonal vector space over the reals R, wth an nner product,. Defnton 2 (Inner Product) An nner product, on a real vector space X s a symmetrc, blnear,

More information

Lecture 20: November 7

Lecture 20: November 7 0-725/36-725: Convex Optmzaton Fall 205 Lecturer: Ryan Tbshran Lecture 20: November 7 Scrbes: Varsha Chnnaobreddy, Joon Sk Km, Lngyao Zhang Note: LaTeX template courtesy of UC Berkeley EECS dept. Dsclamer:

More information

Lecture 21: Numerical methods for pricing American type derivatives

Lecture 21: Numerical methods for pricing American type derivatives Lecture 21: Numercal methods for prcng Amercan type dervatves Xaoguang Wang STAT 598W Aprl 10th, 2014 (STAT 598W) Lecture 21 1 / 26 Outlne 1 Fnte Dfference Method Explct Method Penalty Method (STAT 598W)

More information

U.C. Berkeley CS294: Beyond Worst-Case Analysis Luca Trevisan September 5, 2017

U.C. Berkeley CS294: Beyond Worst-Case Analysis Luca Trevisan September 5, 2017 U.C. Berkeley CS94: Beyond Worst-Case Analyss Handout 4s Luca Trevsan September 5, 07 Summary of Lecture 4 In whch we ntroduce semdefnte programmng and apply t to Max Cut. Semdefnte Programmng Recall that

More information

Numerical Heat and Mass Transfer

Numerical Heat and Mass Transfer Master degree n Mechancal Engneerng Numercal Heat and Mass Transfer 06-Fnte-Dfference Method (One-dmensonal, steady state heat conducton) Fausto Arpno f.arpno@uncas.t Introducton Why we use models and

More information

Vector Norms. Chapter 7 Iterative Techniques in Matrix Algebra. Cauchy-Bunyakovsky-Schwarz Inequality for Sums. Distances. Convergence.

Vector Norms. Chapter 7 Iterative Techniques in Matrix Algebra. Cauchy-Bunyakovsky-Schwarz Inequality for Sums. Distances. Convergence. Vector Norms Chapter 7 Iteratve Technques n Matrx Algebra Per-Olof Persson persson@berkeley.edu Department of Mathematcs Unversty of Calforna, Berkeley Math 128B Numercal Analyss Defnton A vector norm

More information

The Minimum Universal Cost Flow in an Infeasible Flow Network

The Minimum Universal Cost Flow in an Infeasible Flow Network Journal of Scences, Islamc Republc of Iran 17(2): 175-180 (2006) Unversty of Tehran, ISSN 1016-1104 http://jscencesutacr The Mnmum Unversal Cost Flow n an Infeasble Flow Network H Saleh Fathabad * M Bagheran

More information

Physics 5153 Classical Mechanics. Principle of Virtual Work-1

Physics 5153 Classical Mechanics. Principle of Virtual Work-1 P. Guterrez 1 Introducton Physcs 5153 Classcal Mechancs Prncple of Vrtual Work The frst varatonal prncple we encounter n mechancs s the prncple of vrtual work. It establshes the equlbrum condton of a mechancal

More information

CME 302: NUMERICAL LINEAR ALGEBRA FALL 2005/06 LECTURE 13

CME 302: NUMERICAL LINEAR ALGEBRA FALL 2005/06 LECTURE 13 CME 30: NUMERICAL LINEAR ALGEBRA FALL 005/06 LECTURE 13 GENE H GOLUB 1 Iteratve Methods Very large problems (naturally sparse, from applcatons): teratve methods Structured matrces (even sometmes dense,

More information

6.854J / J Advanced Algorithms Fall 2008

6.854J / J Advanced Algorithms Fall 2008 MIT OpenCourseWare http://ocw.mt.edu 6.854J / 18.415J Advanced Algorthms Fall 2008 For nformaton about ctng these materals or our Terms of Use, vst: http://ocw.mt.edu/terms. 18.415/6.854 Advanced Algorthms

More information

On the correction of the h-index for career length

On the correction of the h-index for career length 1 On the correcton of the h-ndex for career length by L. Egghe Unverstet Hasselt (UHasselt), Campus Depenbeek, Agoralaan, B-3590 Depenbeek, Belgum 1 and Unverstet Antwerpen (UA), IBW, Stadscampus, Venusstraat

More information

Module 3 LOSSY IMAGE COMPRESSION SYSTEMS. Version 2 ECE IIT, Kharagpur

Module 3 LOSSY IMAGE COMPRESSION SYSTEMS. Version 2 ECE IIT, Kharagpur Module 3 LOSSY IMAGE COMPRESSION SYSTEMS Verson ECE IIT, Kharagpur Lesson 6 Theory of Quantzaton Verson ECE IIT, Kharagpur Instructonal Objectves At the end of ths lesson, the students should be able to:

More information

VARIATION OF CONSTANT SUM CONSTRAINT FOR INTEGER MODEL WITH NON UNIFORM VARIABLES

VARIATION OF CONSTANT SUM CONSTRAINT FOR INTEGER MODEL WITH NON UNIFORM VARIABLES VARIATION OF CONSTANT SUM CONSTRAINT FOR INTEGER MODEL WITH NON UNIFORM VARIABLES BÂRZĂ, Slvu Faculty of Mathematcs-Informatcs Spru Haret Unversty barza_slvu@yahoo.com Abstract Ths paper wants to contnue

More information

Lecture Notes on Linear Regression

Lecture Notes on Linear Regression Lecture Notes on Lnear Regresson Feng L fl@sdueducn Shandong Unversty, Chna Lnear Regresson Problem In regresson problem, we am at predct a contnuous target value gven an nput feature vector We assume

More information

ρ some λ THE INVERSE POWER METHOD (or INVERSE ITERATION) , for , or (more usually) to

ρ some λ THE INVERSE POWER METHOD (or INVERSE ITERATION) , for , or (more usually) to THE INVERSE POWER METHOD (or INVERSE ITERATION) -- applcaton of the Power method to A some fxed constant ρ (whch s called a shft), x λ ρ If the egenpars of A are { ( λ, x ) } ( ), or (more usually) to,

More information

Report on Image warping

Report on Image warping Report on Image warpng Xuan Ne, Dec. 20, 2004 Ths document summarzed the algorthms of our mage warpng soluton for further study, and there s a detaled descrpton about the mplementaton of these algorthms.

More information

A new Approach for Solving Linear Ordinary Differential Equations

A new Approach for Solving Linear Ordinary Differential Equations , ISSN 974-57X (Onlne), ISSN 974-5718 (Prnt), Vol. ; Issue No. 1; Year 14, Copyrght 13-14 by CESER PUBLICATIONS A new Approach for Solvng Lnear Ordnary Dfferental Equatons Fawz Abdelwahd Department of

More information

Chapter Newton s Method

Chapter Newton s Method Chapter 9. Newton s Method After readng ths chapter, you should be able to:. Understand how Newton s method s dfferent from the Golden Secton Search method. Understand how Newton s method works 3. Solve

More information

C/CS/Phy191 Problem Set 3 Solutions Out: Oct 1, 2008., where ( 00. ), so the overall state of the system is ) ( ( ( ( 00 ± 11 ), Φ ± = 1

C/CS/Phy191 Problem Set 3 Solutions Out: Oct 1, 2008., where ( 00. ), so the overall state of the system is ) ( ( ( ( 00 ± 11 ), Φ ± = 1 C/CS/Phy9 Problem Set 3 Solutons Out: Oct, 8 Suppose you have two qubts n some arbtrary entangled state ψ You apply the teleportaton protocol to each of the qubts separately What s the resultng state obtaned

More information

Markov Chain Monte Carlo Lecture 6

Markov Chain Monte Carlo Lecture 6 where (x 1,..., x N ) X N, N s called the populaton sze, f(x) f (x) for at least one {1, 2,..., N}, and those dfferent from f(x) are called the tral dstrbutons n terms of mportance samplng. Dfferent ways

More information

STAT 309: MATHEMATICAL COMPUTATIONS I FALL 2018 LECTURE 17. a ij x (k) b i. a ij x (k+1) (D + L)x (k+1) = b Ux (k)

STAT 309: MATHEMATICAL COMPUTATIONS I FALL 2018 LECTURE 17. a ij x (k) b i. a ij x (k+1) (D + L)x (k+1) = b Ux (k) STAT 309: MATHEMATICAL COMPUTATIONS I FALL 08 LECTURE 7. sor method remnder: n coordnatewse form, Jacob method s = [ b a x (k) a and Gauss Sedel method s = [ b a = = remnder: n matrx form, Jacob method

More information

Supplement: Proofs and Technical Details for The Solution Path of the Generalized Lasso

Supplement: Proofs and Technical Details for The Solution Path of the Generalized Lasso Supplement: Proofs and Techncal Detals for The Soluton Path of the Generalzed Lasso Ryan J. Tbshran Jonathan Taylor In ths document we gve supplementary detals to the paper The Soluton Path of the Generalzed

More information

= = = (a) Use the MATLAB command rref to solve the system. (b) Let A be the coefficient matrix and B be the right-hand side of the system.

= = = (a) Use the MATLAB command rref to solve the system. (b) Let A be the coefficient matrix and B be the right-hand side of the system. Chapter Matlab Exercses Chapter Matlab Exercses. Consder the lnear system of Example n Secton.. x x x y z y y z (a) Use the MATLAB command rref to solve the system. (b) Let A be the coeffcent matrx and

More information

Appendix for Causal Interaction in Factorial Experiments: Application to Conjoint Analysis

Appendix for Causal Interaction in Factorial Experiments: Application to Conjoint Analysis A Appendx for Causal Interacton n Factoral Experments: Applcaton to Conjont Analyss Mathematcal Appendx: Proofs of Theorems A. Lemmas Below, we descrbe all the lemmas, whch are used to prove the man theorems

More information

Integrals and Invariants of Euler-Lagrange Equations

Integrals and Invariants of Euler-Lagrange Equations Lecture 16 Integrals and Invarants of Euler-Lagrange Equatons ME 256 at the Indan Insttute of Scence, Bengaluru Varatonal Methods and Structural Optmzaton G. K. Ananthasuresh Professor, Mechancal Engneerng,

More information

P A = (P P + P )A = P (I P T (P P ))A = P (A P T (P P )A) Hence if we let E = P T (P P A), We have that

P A = (P P + P )A = P (I P T (P P ))A = P (A P T (P P )A) Hence if we let E = P T (P P A), We have that Backward Error Analyss for House holder Reectors We want to show that multplcaton by householder reectors s backward stable. In partcular we wsh to show fl(p A) = P (A) = P (A + E where P = I 2vv T s the

More information

1 GSW Iterative Techniques for y = Ax

1 GSW Iterative Techniques for y = Ax 1 for y = A I m gong to cheat here. here are a lot of teratve technques that can be used to solve the general case of a set of smultaneous equatons (wrtten n the matr form as y = A), but ths chapter sn

More information

Transfer Functions. Convenient representation of a linear, dynamic model. A transfer function (TF) relates one input and one output: ( ) system

Transfer Functions. Convenient representation of a linear, dynamic model. A transfer function (TF) relates one input and one output: ( ) system Transfer Functons Convenent representaton of a lnear, dynamc model. A transfer functon (TF) relates one nput and one output: x t X s y t system Y s The followng termnology s used: x y nput output forcng

More information

Solving Security Constrained Optimal Power. Flow Problems by a Structure Exploiting. Interior Point Method

Solving Security Constrained Optimal Power. Flow Problems by a Structure Exploiting. Interior Point Method Noname manuscrpt No. (wll be nserted by the edtor) Solvng Securty Constraned Optmal Power Flow Problems by a Structure Explotng Interor Pont Method Na-Yuan Chang Andreas Grothey Receved: date / Accepted:

More information

1 Convex Optimization

1 Convex Optimization Convex Optmzaton We wll consder convex optmzaton problems. Namely, mnmzaton problems where the objectve s convex (we assume no constrants for now). Such problems often arse n machne learnng. For example,

More information

Linear Approximation with Regularization and Moving Least Squares

Linear Approximation with Regularization and Moving Least Squares Lnear Approxmaton wth Regularzaton and Movng Least Squares Igor Grešovn May 007 Revson 4.6 (Revson : March 004). 5 4 3 0.5 3 3.5 4 Contents: Lnear Fttng...4. Weghted Least Squares n Functon Approxmaton...

More information

The Study of Teaching-learning-based Optimization Algorithm

The Study of Teaching-learning-based Optimization Algorithm Advanced Scence and Technology Letters Vol. (AST 06), pp.05- http://dx.do.org/0.57/astl.06. The Study of Teachng-learnng-based Optmzaton Algorthm u Sun, Yan fu, Lele Kong, Haolang Q,, Helongang Insttute

More information

Numerical Properties of the LLL Algorithm

Numerical Properties of the LLL Algorithm Numercal Propertes of the LLL Algorthm Frankln T. Luk a and Sanzheng Qao b a Department of Mathematcs, Hong Kong Baptst Unversty, Kowloon Tong, Hong Kong b Dept. of Computng and Software, McMaster Unv.,

More information

Assortment Optimization under MNL

Assortment Optimization under MNL Assortment Optmzaton under MNL Haotan Song Aprl 30, 2017 1 Introducton The assortment optmzaton problem ams to fnd the revenue-maxmzng assortment of products to offer when the prces of products are fxed.

More information

Linear Feature Engineering 11

Linear Feature Engineering 11 Lnear Feature Engneerng 11 2 Least-Squares 2.1 Smple least-squares Consder the followng dataset. We have a bunch of nputs x and correspondng outputs y. The partcular values n ths dataset are x y 0.23 0.19

More information

1 Matrix representations of canonical matrices

1 Matrix representations of canonical matrices 1 Matrx representatons of canoncal matrces 2-d rotaton around the orgn: ( ) cos θ sn θ R 0 = sn θ cos θ 3-d rotaton around the x-axs: R x = 1 0 0 0 cos θ sn θ 0 sn θ cos θ 3-d rotaton around the y-axs:

More information

The optimal delay of the second test is therefore approximately 210 hours earlier than =2.

The optimal delay of the second test is therefore approximately 210 hours earlier than =2. THE IEC 61508 FORMULAS 223 The optmal delay of the second test s therefore approxmately 210 hours earler than =2. 8.4 The IEC 61508 Formulas IEC 61508-6 provdes approxmaton formulas for the PF for smple

More information

The Order Relation and Trace Inequalities for. Hermitian Operators

The Order Relation and Trace Inequalities for. Hermitian Operators Internatonal Mathematcal Forum, Vol 3, 08, no, 507-57 HIKARI Ltd, wwwm-hkarcom https://doorg/0988/mf088055 The Order Relaton and Trace Inequaltes for Hermtan Operators Y Huang School of Informaton Scence

More information

Feature Selection: Part 1

Feature Selection: Part 1 CSE 546: Machne Learnng Lecture 5 Feature Selecton: Part 1 Instructor: Sham Kakade 1 Regresson n the hgh dmensonal settng How do we learn when the number of features d s greater than the sample sze n?

More information

Introduction to Vapor/Liquid Equilibrium, part 2. Raoult s Law:

Introduction to Vapor/Liquid Equilibrium, part 2. Raoult s Law: CE304, Sprng 2004 Lecture 4 Introducton to Vapor/Lqud Equlbrum, part 2 Raoult s Law: The smplest model that allows us do VLE calculatons s obtaned when we assume that the vapor phase s an deal gas, and

More information

Module 9. Lecture 6. Duality in Assignment Problems

Module 9. Lecture 6. Duality in Assignment Problems Module 9 1 Lecture 6 Dualty n Assgnment Problems In ths lecture we attempt to answer few other mportant questons posed n earler lecture for (AP) and see how some of them can be explaned through the concept

More information

An efficient algorithm for multivariate Maclaurin Newton transformation

An efficient algorithm for multivariate Maclaurin Newton transformation Annales UMCS Informatca AI VIII, 2 2008) 5 14 DOI: 10.2478/v10065-008-0020-6 An effcent algorthm for multvarate Maclaurn Newton transformaton Joanna Kapusta Insttute of Mathematcs and Computer Scence,

More information

A Hybrid Variational Iteration Method for Blasius Equation

A Hybrid Variational Iteration Method for Blasius Equation Avalable at http://pvamu.edu/aam Appl. Appl. Math. ISSN: 1932-9466 Vol. 10, Issue 1 (June 2015), pp. 223-229 Applcatons and Appled Mathematcs: An Internatonal Journal (AAM) A Hybrd Varatonal Iteraton Method

More information

SL n (F ) Equals its Own Derived Group

SL n (F ) Equals its Own Derived Group Internatonal Journal of Algebra, Vol. 2, 2008, no. 12, 585-594 SL n (F ) Equals ts Own Derved Group Jorge Macel BMCC-The Cty Unversty of New York, CUNY 199 Chambers street, New York, NY 10007, USA macel@cms.nyu.edu

More information

Temperature. Chapter Heat Engine

Temperature. Chapter Heat Engine Chapter 3 Temperature In prevous chapters of these notes we ntroduced the Prncple of Maxmum ntropy as a technque for estmatng probablty dstrbutons consstent wth constrants. In Chapter 9 we dscussed the

More information

n α j x j = 0 j=1 has a nontrivial solution. Here A is the n k matrix whose jth column is the vector for all t j=0

n α j x j = 0 j=1 has a nontrivial solution. Here A is the n k matrix whose jth column is the vector for all t j=0 MODULE 2 Topcs: Lnear ndependence, bass and dmenson We have seen that f n a set of vectors one vector s a lnear combnaton of the remanng vectors n the set then the span of the set s unchanged f that vector

More information

Single-Facility Scheduling over Long Time Horizons by Logic-based Benders Decomposition

Single-Facility Scheduling over Long Time Horizons by Logic-based Benders Decomposition Sngle-Faclty Schedulng over Long Tme Horzons by Logc-based Benders Decomposton Elvn Coban and J. N. Hooker Tepper School of Busness, Carnege Mellon Unversty ecoban@andrew.cmu.edu, john@hooker.tepper.cmu.edu

More information

Estimating the Fundamental Matrix by Transforming Image Points in Projective Space 1

Estimating the Fundamental Matrix by Transforming Image Points in Projective Space 1 Estmatng the Fundamental Matrx by Transformng Image Ponts n Projectve Space 1 Zhengyou Zhang and Charles Loop Mcrosoft Research, One Mcrosoft Way, Redmond, WA 98052, USA E-mal: fzhang,cloopg@mcrosoft.com

More information
