A Riemannian Limited-Memory BFGS Algorithm for Computing the Matrix Geometric Mean
A Riemannian Limited-Memory BFGS Algorithm for Computing the Matrix Geometric Mean
Xinru Yuan (1), Wen Huang (2), P.-A. Absil (2), and Kyle A. Gallivan (1)
(1) Florida State University; (2) Université Catholique de Louvain
July 3, 2016
Symmetric Positive Definite (SPD) Matrices

Definition: A symmetric matrix A is called positive definite if x^T A x > 0 for all nonzero vectors x in R^n.

[Figures: eigenvector/eigenvalue visualizations (u, lambda u; v, lambda v; w, lambda w) of a 2 x 2 and a 3 x 3 SPD matrix.]
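As a quick numerical companion to the definition, positive definiteness of a symmetric matrix can be tested via a Cholesky factorization, which succeeds exactly when the matrix is SPD. This is a sketch in Python/NumPy; the function name `is_spd` and the example matrices are illustrative, not from the slides.

```python
import numpy as np

def is_spd(A, tol=1e-12):
    """Check symmetry first, then positive definiteness via a Cholesky attempt."""
    if not np.allclose(A, A.T, atol=tol):
        return False
    try:
        np.linalg.cholesky(A)   # succeeds iff A is positive definite
        return True
    except np.linalg.LinAlgError:
        return False

A = np.array([[2.0, 1.0], [1.0, 2.0]])   # eigenvalues 1 and 3 -> SPD
B = np.array([[1.0, 2.0], [2.0, 1.0]])   # eigenvalues -1 and 3 -> not positive definite
```

Checking eigenvalues of A (1 and 3, both positive) and B (-1 and 3) confirms the Cholesky-based test.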
Possible Applications of SPD Matrices

Diffusion tensors in medical imaging
Motivation for Averaging SPD Matrices

Subtask in interpolation schemes. Example: retrieve a tensor field between measured tensor values at sparse points [PFA06].

[Figure: initial tensor measurements. Figure: retrieved tensor field after a certain interpolation scheme.]

Footnote: X. Pennec, P. Fillard and N. Ayache, A Riemannian framework for tensor computing, International Journal of Computer Vision, 2006.
Averaging Schemes: from Scalars to Matrices

Let A_1, ..., A_K be SPD matrices.

Generalized arithmetic mean: (1/K) \sum_{i=1}^K A_i. Not appropriate in many practical applications.
[Figure: ellipses for A, (A+B)/2, and B with det A = det B = 50 but det((A+B)/2) > 50.]

Generalized geometric mean: (A_1 \cdots A_K)^{1/K}. Not appropriate due to non-commutativity.

How to define a matrix geometric mean?
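The determinant "swelling" of the arithmetic mean is easy to reproduce numerically. A small NumPy sketch with illustrative matrices (chosen to match the slides' det = 50, not taken from them):

```python
import numpy as np

# Two SPD matrices with the same determinant, 50
A = np.diag([10.0, 5.0])    # det A = 50
B = np.diag([1.0, 50.0])    # det B = 50

M = (A + B) / 2             # arithmetic mean: diag(5.5, 27.5)
det_M = np.linalg.det(M)    # 5.5 * 27.5 = 151.25, much larger than 50
```

By the Minkowski determinant inequality, det((A+B)/2)^{1/n} >= (det(A)^{1/n} + det(B)^{1/n})/2, so the determinant of the arithmetic mean always exceeds 50 here unless A = B; this is why the arithmetic mean is inappropriate for, e.g., diffusion tensors.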
Desired Properties of a Matrix Geometric Mean

The desired properties are given in the ALM list [footnote], some of which are:

- Invariance under permutation: G(A_{\pi(1)}, ..., A_{\pi(K)}) = G(A_1, ..., A_K), with \pi a permutation of (1, ..., K)
- Consistency with scalars: if A_1, ..., A_K commute, then G(A_1, ..., A_K) = (A_1 \cdots A_K)^{1/K}
- Invariance under inversion: G(A_1, ..., A_K)^{-1} = G(A_1^{-1}, ..., A_K^{-1})
- Determinant equality: det(G(A_1, ..., A_K)) = (det(A_1) \cdots det(A_K))^{1/K}

Footnote: T. Ando, C.-K. Li, and R. Mathias, Geometric means, Linear Algebra and Its Applications, 385, 2004.
Geometric Mean of SPD Matrices

A well-known mean on the manifold of SPD matrices is the Karcher mean [Kar77]:

G(A_1, ..., A_K) = \arg\min_{X \in S^n_{++}} \frac{1}{K} \sum_{i=1}^K \mathrm{dist}^2(X, A_i),

where dist(X, Y) = \| \log(X^{-1/2} Y X^{-1/2}) \|_F is the distance under the Riemannian metric

g(\eta_X, \xi_X) = \mathrm{trace}(\eta_X X^{-1} \xi_X X^{-1}).

It has been shown recently [LL11] that the Karcher mean satisfies all the geometric properties in the ALM list.
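The Riemannian distance in this definition can be evaluated directly with SciPy's matrix functions. A sketch (the name `spd_dist` is illustrative):

```python
import numpy as np
from scipy.linalg import sqrtm, logm

def spd_dist(X, Y):
    """dist(X, Y) = ||log(X^{-1/2} Y X^{-1/2})||_F under the affine-invariant metric."""
    Xih = np.linalg.inv(np.real(sqrtm(X)))            # X^{-1/2}
    return np.linalg.norm(np.real(logm(Xih @ Y @ Xih)), 'fro')
```

For commuting matrices this reduces to the Euclidean distance between the logarithms of the eigenvalues; for instance dist(I, diag(e^2, 1)) = 2, and the distance is symmetric in its arguments.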
Riemannian Manifolds

A Riemannian manifold M is a smooth set with a smoothly-varying inner product on the tangent spaces.
Manifold of SPD Matrices

Let S^n_{++} be the manifold of n x n SPD matrices.

- Tangent space at X \in S^n_{++}: T_X S^n_{++} = {S \in R^{n x n} : S = S^T}
- Dimension: d = n(n+1)/2
- Riemannian metric: g(\eta_X, \xi_X) = \mathrm{trace}(\eta_X X^{-1} \xi_X X^{-1})
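This metric is called affine-invariant because it is unchanged under congruences X -> A X A^T by any invertible A. The sketch below writes the metric down and checks that invariance numerically; all variable names and the test data are illustrative.

```python
import numpy as np

def metric(X, eta, xi):
    """Affine-invariant metric g_X(eta, xi) = trace(eta X^{-1} xi X^{-1})."""
    Xinv = np.linalg.inv(X)
    return np.trace(eta @ Xinv @ xi @ Xinv)

rng = np.random.default_rng(0)
n = 4
W = rng.standard_normal((n, n))
X = W @ W.T + n * np.eye(n)                      # an SPD point on the manifold
eta = rng.standard_normal((n, n)); eta = eta + eta.T   # symmetric tangent vectors
xi = rng.standard_normal((n, n)); xi = xi + xi.T
A = rng.standard_normal((n, n))                  # a (generically invertible) congruence

g1 = metric(X, eta, xi)
g2 = metric(A @ X @ A.T, A @ eta @ A.T, A @ xi @ A.T)   # equal by invariance
```

Algebraically, (A X A^T)^{-1} = A^{-T} X^{-1} A^{-1}, so the A factors cancel inside the trace.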
Algorithms

G(A_1, ..., A_K) = \arg\min_{X \in S^n_{++}} \frac{1}{K} \sum_{i=1}^K \mathrm{dist}^2(X, A_i)

- Riemannian steepest descent [RA11]
- Riemannian steepest descent, conjugate gradient, BFGS, and trust-region Newton methods [JVV12]
- Richardson-like iteration [BI13]
- Limited-memory Riemannian BFGS method [YHAG16]
Karcher Mean of SPD Matrices

Hemstitching phenomenon for steepest descent:
[Figures: steepest-descent iterates for a well-conditioned Hessian and an ill-conditioned Hessian.]

Small condition number: fast convergence. Large condition number: slow convergence.
Karcher Mean of SPD Matrices

Conditioning of the Riemannian Hessian of F at the minimizer: for the cost function F(X) = \frac{1}{2K} \sum_{i=1}^K \mathrm{dist}^2(A_i, X), we have

\|\Delta_X\|_X^2 \le \mathrm{Hess}\, F(X)[\Delta_X, \Delta_X] \le \left(1 + \frac{\ln(\max_i \kappa_i)}{2}\right) \|\Delta_X\|_X^2,

where \kappa_i is the condition number of A_i. If \max_i \kappa_i = 10^{10}, then 1 + \ln(\max_i \kappa_i)/2 \approx 12.5, so the Hessian remains well conditioned even for very ill-conditioned data.
Optimization Algorithms: from Euclidean to Riemannian

Update formula: x_{k+1} = x_k + \alpha_k \eta_k, where \eta_k is the search direction and \alpha_k is the stepsize.
Search direction: \eta_k = -B_k^{-1} \mathrm{grad} f(x_k).

On a manifold, the update x_k + \alpha_k \eta_k is replaced by R_{x_k}(\alpha_k \eta_k): optimization on a manifold.
Retraction

Definition: A retraction is a mapping R from TM to M satisfying the following:
- R is continuously differentiable
- R_x(0) = x
- D R_x(0)[\eta] = \eta

Euclidean: x_{k+1} = x_k + \alpha_k \eta_k. Riemannian: x_{k+1} = R_{x_k}(\alpha_k \eta_k).
Riemannian Optimization Algorithms

Euclidean: x_{k+1} = x_k + \alpha_k \eta_k. Riemannian: x_{k+1} = R_{x_k}(\alpha_k \eta_k).

- Steepest descent: \eta_k = -\mathrm{grad} f(x_k)
- Newton's method: \eta_k = -\mathrm{Hess} f(x_k)^{-1} \mathrm{grad} f(x_k)
- Quasi-Newton method: \eta_k = -B_k^{-1} \mathrm{grad} f(x_k)
Riemannian Gradient and Hessian

Riemannian gradient:

\mathrm{grad}\, F(X) = \frac{1}{K} \sum_{i=1}^K X^{1/2} \log(X^{1/2} A_i^{-1} X^{1/2}) X^{1/2}

Riemannian Hessian:

\mathrm{Hess}\, F(X)[\xi_X] = \frac{1}{2K} \sum_{i=1}^K \xi_X \log(A_i^{-1} X) + \frac{1}{K} \sum_{i=1}^K X \, \mathrm{D}\log(A_i^{-1} X)[A_i^{-1} \xi_X] - \frac{1}{2K} \sum_{i=1}^K \log(X A_i^{-1}) \xi_X
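The gradient formula can be checked numerically: for commuting (here diagonal) matrices the Karcher mean is the entrywise geometric mean, so the gradient must vanish there. A NumPy sketch with illustrative data:

```python
import numpy as np
from scipy.linalg import sqrtm, logm

def grad_F(X, As):
    """grad F(X) = (1/K) sum_i X^{1/2} log(X^{1/2} A_i^{-1} X^{1/2}) X^{1/2}."""
    Xh = np.real(sqrtm(X))
    G = np.zeros_like(X)
    for A in As:
        G = G + Xh @ np.real(logm(Xh @ np.linalg.inv(A) @ Xh)) @ Xh
    return G / len(As)

As = [np.diag([1.0, 2.0]), np.diag([4.0, 8.0])]
Xbar = np.diag([2.0, 4.0])   # entrywise geometric mean of the diagonals
```

Here `grad_F(Xbar, As)` is numerically zero, confirming that Xbar is the Karcher mean of this commuting family, while the gradient at, say, the identity is nonzero.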
Richardson-like Iteration [BI13]

Update formula:

X_{k+1} = X_k - \alpha \, X_k^{1/2} \sum_{i=1}^K \log(X_k^{1/2} A_i^{-1} X_k^{1/2}) \, X_k^{1/2}

Stepsize:

\alpha = 2 \Big/ \sum_{i=1}^K \frac{\kappa_i + 1}{\kappa_i - 1} \log \kappa_i,

where \kappa_i is the condition number of the matrix X^{1/2} A_i^{-1} X^{1/2}.

Local linear convergence is proved.
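A minimal sketch of this iteration in Python, using a simple fixed stepsize alpha = 1/K instead of the optimal stepsize above (the starting point and test data are illustrative assumptions, not from the slides):

```python
import numpy as np
from scipy.linalg import sqrtm, logm

def rl_iteration(As, steps=30):
    """X_{k+1} = X_k - alpha X_k^{1/2} sum_i log(X_k^{1/2} A_i^{-1} X_k^{1/2}) X_k^{1/2},
    here with the simple choice alpha = 1/K (not the optimal [BI13] stepsize)."""
    K = len(As)
    Ainv = [np.linalg.inv(A) for A in As]
    X = sum(As) / K                      # start from the arithmetic mean
    for _ in range(steps):
        Xh = np.real(sqrtm(X))
        S = sum(np.real(logm(Xh @ Ai @ Xh)) for Ai in Ainv)
        X = X - (1.0 / K) * (Xh @ S @ Xh)
    return X
```

With alpha = 1/K each step subtracts exactly the Riemannian gradient (via the retraction R_X(eta) = X + eta); for commuting matrices such as diag(1,2) and diag(4,8) the iterates converge to the geometric mean diag(2,4).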
Richardson-like Iteration, Ctd

The Richardson-like iteration can be considered from the viewpoint of Riemannian optimization algorithms: X_{k+1} = R_{X_k}(\alpha \eta_k)

- Search direction: \eta_k = -\mathrm{grad}\, F(X_k)
- Choice of retraction: R_X(\eta) = X + \eta
- Stepsize \alpha satisfies the Wolfe conditions
BFGS Quasi-Newton Algorithm: from Euclidean to Riemannian

Search direction: \eta_k = -B_k^{-1} \mathrm{grad} f(x_k)

B_k update:

B_{k+1} = B_k - \frac{B_k s_k s_k^T B_k}{s_k^T B_k s_k} + \frac{y_k y_k^T}{y_k^T s_k},

where s_k = x_{k+1} - x_k and y_k = \mathrm{grad} f(x_{k+1}) - \mathrm{grad} f(x_k).

On a manifold:
- the difference x_{k+1} - x_k is replaced by R_{x_k}^{-1}(x_{k+1})
- \mathrm{grad} f(x_{k+1}) and \mathrm{grad} f(x_k) lie on different tangent spaces
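The Euclidean update can be sketched and sanity-checked against the secant equation B_{k+1} s_k = y_k, which is the property the Riemannian version preserves once differences are replaced by transported tangent vectors. The vectors below are arbitrary illustrative data chosen to satisfy the curvature condition y^T s > 0.

```python
import numpy as np

def bfgs_update(B, s, y):
    """Euclidean BFGS update:
    B+ = B - (B s s^T B) / (s^T B s) + (y y^T) / (y^T s)."""
    Bs = B @ s
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (y @ s)

s = np.array([1.0, 0.5, -0.3, 0.2, 0.1])
y = np.array([1.2, 0.4, -0.2, 0.3, 0.05])    # y @ s = 1.525 > 0 (curvature condition)
B_next = bfgs_update(np.eye(5), s, y)
```

`B_next` stays symmetric and maps s to y exactly, mimicking the action of the true Hessian along the last step.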
Vector Transport

A vector transport moves a tangent vector from one tangent space to another: T_{\eta_x} \xi_x denotes the transport of \xi_x to the tangent space of R_x(\eta_x).
Riemannian BFGS (RBFGS) Algorithm

Update formula: x_{k+1} = R_{x_k}(\alpha_k \eta_k) with \eta_k = -B_k^{-1} \mathrm{grad} f(x_k)

B_k update [HGA15]:

B_{k+1} = \tilde{B}_k - \frac{\tilde{B}_k s_k (\tilde{B}_k s_k)^\flat}{(\tilde{B}_k s_k)^\flat s_k} + \frac{y_k y_k^\flat}{y_k^\flat s_k},

where s_k = T_{\alpha_k \eta_k}(\alpha_k \eta_k), y_k = \beta_k^{-1} \mathrm{grad} f(x_{k+1}) - T_{\alpha_k \eta_k} \mathrm{grad} f(x_k), and \tilde{B}_k = T_{\alpha_k \eta_k} \circ B_k \circ T_{\alpha_k \eta_k}^{-1}.

- Stores and transports B_k^{-1} as a dense matrix
- Requires excessive computation time and storage space for large-scale problems
Limited-Memory RBFGS (LRBFGS)

Riemannian BFGS:

B_{k+1} = \tilde{B}_k - \frac{\tilde{B}_k s_k (\tilde{B}_k s_k)^\flat}{(\tilde{B}_k s_k)^\flat s_k} + \frac{y_k y_k^\flat}{y_k^\flat s_k},

where s_k = T_{\alpha_k \eta_k}(\alpha_k \eta_k), y_k = \beta_k^{-1} \mathrm{grad} f(x_{k+1}) - T_{\alpha_k \eta_k} \mathrm{grad} f(x_k), and \tilde{B}_k = T_{\alpha_k \eta_k} \circ B_k \circ T_{\alpha_k \eta_k}^{-1}.

Limited-memory Riemannian BFGS:
- Stores only the m most recent s_k and y_k
- Transports those vectors to the new tangent space rather than the entire matrix B_k^{-1}
- Computational and storage complexity depends upon m
Limited-Memory RBFGS (LRBFGS), Ctd

B_{k+1}^{-1} update [HGA15]:

B_{k+1}^{-1} = \tilde{V}_k^\flat \tilde{V}_{k-1}^\flat \cdots \tilde{V}_{k-m}^\flat \, H_{k+1}^0 \, \tilde{V}_{k-m} \cdots \tilde{V}_{k-1} \tilde{V}_k + \rho_{k-m} \tilde{V}_k^\flat \cdots \tilde{V}_{k-m+1}^\flat \, s_{k-m}^{(k+1)} (s_{k-m}^{(k+1)})^\flat \, \tilde{V}_{k-m+1} \cdots \tilde{V}_k + \cdots + \rho_k \, s_k^{(k+1)} (s_k^{(k+1)})^\flat,

where \tilde{V}_i = \mathrm{id} - \rho_i y_i^{(k+1)} (s_i^{(k+1)})^\flat, \rho_i = 1/g(y_i, s_i), H_{k+1}^0 = \frac{g(s_k, y_k)}{g(y_k, y_k)} \mathrm{id}, and s_i^{(k+1)} represents a tangent vector in T_{x_{k+1}} M given by transporting s_i, and likewise for y_i^{(k+1)}.

- Avoids expensive matrix operations
- Requires less memory
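In practice B_k^{-1} grad f is applied with the classical two-loop recursion rather than by forming the product above. Below is a Euclidean sketch of that recursion; on the manifold, the stored s_i, y_i would first be transported to the current tangent space, and the dot products would become metric inner products. Function and variable names are illustrative.

```python
import numpy as np

def lbfgs_direction(grad, S, Y, gamma=1.0):
    """Two-loop recursion: apply the L-BFGS inverse-Hessian approximation to grad.
    S, Y hold the m most recent pairs (oldest first); H^0 = gamma * identity."""
    q = grad.copy()
    rhos = [1.0 / (y @ s) for s, y in zip(S, Y)]
    alphas = []
    for s, y, rho in zip(reversed(S), reversed(Y), reversed(rhos)):
        a = rho * (s @ q)
        alphas.append(a)
        q = q - a * y
    r = gamma * q                        # apply the initial inverse Hessian H^0
    for (s, y, rho), a in zip(zip(S, Y, rhos), reversed(alphas)):
        b = rho * (y @ r)
        r = r + (a - b) * s
    return r                             # approximates B_k^{-1} grad
```

A quick sanity check: with a single stored pair, the recursion returns s when applied to y (the inverse secant condition H y = s), regardless of gamma; with empty memory it reduces to gamma * grad.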
Complexity Comparison

RBFGS: obtain B_{k+1}^{-1} from B_k^{-1}: O(d^2); apply B_k^{-1} \eta: O(d^2).
Limited-memory RBFGS: apply B_k^{-1} \eta: O(md).

d denotes the dimension of the SPD manifold.
Algorithm Comparison

(Not sure where to put this table.)

Algorithm | Convergence
Riemannian steepest descent | global, linear
Riemannian BFGS | global (convex), superlinear
Riemannian limited-memory BFGS | global (convex), faster than linear
Riemannian Newton's method | global, quadratic
Outline

- Background of the SPD Karcher mean computation
- A brief description of the limited-memory RBFGS (LRBFGS) method
- Applying LRBFGS to the SPD Karcher mean computation
- Practical implementation
- Experiments on various data sets
Implementations

Cost function:

F(X) = \frac{1}{2K} \sum_{i=1}^K \mathrm{dist}^2(A_i, X) = \frac{1}{2K} \sum_{i=1}^K \| \log(A_i^{-1/2} X A_i^{-1/2}) \|_F^2

Riemannian gradient:

\mathrm{grad}\, F(X) = \frac{1}{K} \sum_{i=1}^K X^{1/2} \log(X^{1/2} A_i^{-1} X^{1/2}) X^{1/2}

The dominant computation time is in the function evaluation.
Implementations

Efficient numerical representations of tangent vectors, see [YHAG16].

Retraction:
- Exponential mapping: \mathrm{Exp}_X(\xi_X) = X^{1/2} \exp(X^{-1/2} \xi_X X^{-1/2}) X^{1/2}
- Second-order retraction [JVV12]: R_X(\xi_X) = X + \xi_X + \frac{1}{2} \xi_X X^{-1} \xi_X

Vector transport:
- Parallel translation: T_{p_{\eta_X}}(\xi_X) = Q \xi_X Q^T, where Q = X^{1/2} \exp\big(\frac{X^{-1/2} \eta_X X^{-1/2}}{2}\big) X^{-1/2}
- Vector transport by parallelization [footnote]: essentially an identity

Footnote: W. Huang, P.-A. Absil, and K. A. Gallivan, A Riemannian symmetric rank-one trust-region method, Mathematical Programming, 2014.
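The exponential mapping and parallel translation above can be implemented directly with scipy.linalg. The sketch below also verifies two expected properties: Exp_X(0) = X, and parallel translation is an isometry of the affine-invariant metric (the test matrices are illustrative assumptions).

```python
import numpy as np
from scipy.linalg import sqrtm, expm

def spd_exp(X, xi):
    """Exponential mapping Exp_X(xi) = X^{1/2} exp(X^{-1/2} xi X^{-1/2}) X^{1/2}."""
    Xh = np.real(sqrtm(X)); Xih = np.linalg.inv(Xh)
    return Xh @ np.real(expm(Xih @ xi @ Xih)) @ Xh

def spd_transport(X, eta, xi):
    """Parallel translation of xi along the geodesic with initial velocity eta:
    T(xi) = Q xi Q^T with Q = X^{1/2} exp(X^{-1/2} eta X^{-1/2} / 2) X^{-1/2}."""
    Xh = np.real(sqrtm(X)); Xih = np.linalg.inv(Xh)
    Q = Xh @ np.real(expm(Xih @ eta @ Xih / 2)) @ Xih
    return Q @ xi @ Q.T

X = np.diag([1.0, 4.0])
eta = np.array([[0.5, 0.2], [0.2, 1.0]])
xi = np.array([[0.0, 1.0], [1.0, 0.0]])
Y = spd_exp(X, eta)                  # endpoint of the geodesic
txi = spd_transport(X, eta, xi)      # xi transported to the tangent space at Y
```

Since Q X Q^T = Y, the transport acts by a congruence mapping X to Y, and congruences preserve the metric g_X(xi, xi) = trace(xi X^{-1} xi X^{-1}).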
Numerical Results I: K = 100, 3 x 3 matrices, d = 6, m = 2

[Figures: evolution of the averaged distance dist(\mu, X_t) between the current iterate and the exact Karcher mean, with respect to iterations and time (s), comparing RSD, RCG, LRBFGS, and the RL iteration; one experiment with condition numbers \kappa(A_i) of order 1 and one with \kappa(A_i) of order 10^3.]
Numerical Results II: K = 30, 100 x 100 matrices (d = n(n+1)/2 = 5050), m = 2

[Figures: evolution of the averaged distance dist(\mu, X_t) between the current iterate and the exact Karcher mean, with respect to iterations and time (s), comparing RSD, RCG, LRBFGS, and the RL iteration; one experiment with condition numbers \kappa(A_i) of order 1 and one with \kappa(A_i) of order 10^4.]
Summary

- Analyzed the conditioning of the Hessian of the Karcher mean cost function
- Applied a Riemannian version of the limited-memory BFGS method to computing the SPD Karcher mean
- Presented efficient implementations
- Compared with the state-of-the-art methods

The proposed LRBFGS method is seen to be the method of choice for computing the SPD Karcher mean when the data matrices increase in size or number.
Implementation in Matlab

F(X) = \frac{1}{2K} \sum_{i=1}^K \| \log(A_i^{-1/2} X A_i^{-1/2}) \|_F^2 = \frac{1}{2K} \sum_{i=1}^K \| \log(A_i^{-1/2} L L^T A_i^{-1/2}) \|_F^2 = \frac{1}{2K} \sum_{i=1}^K \| \log[(A_i^{-1/2} L)(A_i^{-1/2} L)^T] \|_F^2,

where X = L L^T is a Cholesky factorization. Let S_i = A_i^{-1/2} L, and S_i S_i^T = Q U Q^T, where Q Q^T = I and U is an upper triangular matrix whose diagonal elements are the eigenvalues (a Schur decomposition). Then

F(X) = \frac{1}{2K} \sum_{i=1}^K \| \log(S_i S_i^T) \|_F^2 = \frac{1}{2K} \sum_{i=1}^K \| Q \log(\mathrm{diag}(U)) Q^T \|_F^2
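A Python/NumPy analogue of this evaluation scheme, checked against a direct matrix-logarithm computation. The slide uses a Schur decomposition; since S S^T is symmetric positive definite, an eigenvalue decomposition (`eigvalsh`) is the equivalent step here. Function names and test data are illustrative.

```python
import numpy as np
from scipy.linalg import sqrtm, logm

def cost_direct(X, As):
    """F(X) = 1/(2K) sum_i || log(A_i^{-1/2} X A_i^{-1/2}) ||_F^2 via logm."""
    total = 0.0
    for A in As:
        Aih = np.linalg.inv(np.real(sqrtm(A)))          # A_i^{-1/2}
        total += np.linalg.norm(np.real(logm(Aih @ X @ Aih)), 'fro') ** 2
    return total / (2 * len(As))

def cost_chol(X, As):
    """Same cost via X = L L^T and the eigenvalues of S S^T with S = A_i^{-1/2} L:
    || log(S S^T) ||_F^2 = sum_j log(lambda_j)^2."""
    L = np.linalg.cholesky(X)
    total = 0.0
    for A in As:
        S = np.linalg.inv(np.real(sqrtm(A))) @ L
        lam = np.linalg.eigvalsh(S @ S.T)               # eigenvalues of an SPD matrix
        total += np.sum(np.log(lam) ** 2)
    return total / (2 * len(As))
```

Both routines return the same value; the second avoids forming a dense matrix logarithm, which mirrors the efficiency motivation of the slide's Matlab scheme.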
References I

[BI13] D. A. Bini and B. Iannazzo. Computing the Karcher mean of symmetric positive definite matrices. Linear Algebra and its Applications, 438(4), 2013.
[HGA15] W. Huang, K. A. Gallivan, and P.-A. Absil. A Broyden class of quasi-Newton methods for Riemannian optimization. SIAM Journal on Optimization, 25(3), 2015.
[JVV12] B. Jeuris, R. Vandebril, and B. Vandereycken. A survey and comparison of contemporary algorithms for computing the matrix geometric mean. Electronic Transactions on Numerical Analysis, 39, 2012.
[Kar77] H. Karcher. Riemannian center of mass and mollifier smoothing. Communications on Pure and Applied Mathematics, 1977.
References II

[LL11] Jimmie Lawson and Yongdo Lim. Monotonic properties of the least squares mean. Mathematische Annalen, 351(2), 2011.
[PFA06] Xavier Pennec, Pierre Fillard, and Nicholas Ayache. A Riemannian framework for tensor computing. International Journal of Computer Vision, 66(1):41-66, 2006.
[RA11] Q. Rentmeesters and P.-A. Absil. Algorithm comparison for Karcher mean computation of rotation matrices and diffusion tensors. In 19th European Signal Processing Conference, Aug. 2011.
[YHAG16] Xinru Yuan, Wen Huang, P.-A. Absil, and Kyle A. Gallivan. A Riemannian limited-memory BFGS algorithm for computing the matrix geometric mean. In ICCS 2016, 2016.
More information10-701/ Machine Learning, Fall 2005 Homework 3
10-701/15-781 Machne Learnng, Fall 2005 Homework 3 Out: 10/20/05 Due: begnnng of the class 11/01/05 Instructons Contact questons-10701@autonlaborg for queston Problem 1 Regresson and Cross-valdaton [40
More informationInvariant deformation parameters from GPS permanent networks using stochastic interpolation
Invarant deformaton parameters from GPS permanent networks usng stochastc nterpolaton Ludovco Bag, Poltecnco d Mlano, DIIAR Athanasos Dermans, Arstotle Unversty of Thessalonk Outlne Startng hypotheses
More informationEEL 6266 Power System Operation and Control. Chapter 3 Economic Dispatch Using Dynamic Programming
EEL 6266 Power System Operaton and Control Chapter 3 Economc Dspatch Usng Dynamc Programmng Pecewse Lnear Cost Functons Common practce many utltes prefer to represent ther generator cost functons as sngle-
More informationAPPROXIMATE PRICES OF BASKET AND ASIAN OPTIONS DUPONT OLIVIER. Premia 14
APPROXIMAE PRICES OF BASKE AND ASIAN OPIONS DUPON OLIVIER Prema 14 Contents Introducton 1 1. Framewor 1 1.1. Baset optons 1.. Asan optons. Computng the prce 3. Lower bound 3.1. Closed formula for the prce
More informationStatistical pattern recognition
Statstcal pattern recognton Bayes theorem Problem: decdng f a patent has a partcular condton based on a partcular test However, the test s mperfect Someone wth the condton may go undetected (false negatve
More informationP R. Lecture 4. Theory and Applications of Pattern Recognition. Dept. of Electrical and Computer Engineering /
Theory and Applcatons of Pattern Recognton 003, Rob Polkar, Rowan Unversty, Glassboro, NJ Lecture 4 Bayes Classfcaton Rule Dept. of Electrcal and Computer Engneerng 0909.40.0 / 0909.504.04 Theory & Applcatons
More informationSimilarities, Distances and Manifold Learning
Smlartes, Dstances and Manfold Learnng Prof. Rchard C. Wlson Dept. of Computer Scence Unversty of York Part I: Eucldean Space Poston, Smlarty and Dstance Manfold Learnng n Eucldean space Some famous technques
More informationNeural networks. Nuno Vasconcelos ECE Department, UCSD
Neural networs Nuno Vasconcelos ECE Department, UCSD Classfcaton a classfcaton problem has two types of varables e.g. X - vector of observatons (features) n the world Y - state (class) of the world x X
More informationLinear Approximation with Regularization and Moving Least Squares
Lnear Approxmaton wth Regularzaton and Movng Least Squares Igor Grešovn May 007 Revson 4.6 (Revson : March 004). 5 4 3 0.5 3 3.5 4 Contents: Lnear Fttng...4. Weghted Least Squares n Functon Approxmaton...
More informationSupplementary Material for: Geometric robustness of deep networks: analysis and improvement
Supplementary Materal for: Geometrc robustness of deep networks: analyss and mprovement 1 Samplng Random Transformatons Ths secton provdes a detaled examnaton on how the samplng of random transformatons
More informationGeorgia Tech PHYS 6124 Mathematical Methods of Physics I
Georga Tech PHYS 624 Mathematcal Methods of Physcs I Instructor: Predrag Cvtanovć Fall semester 202 Homework Set #7 due October 30 202 == show all your work for maxmum credt == put labels ttle legends
More informationAppendix B. The Finite Difference Scheme
140 APPENDIXES Appendx B. The Fnte Dfference Scheme In ths appendx we present numercal technques whch are used to approxmate solutons of system 3.1 3.3. A comprehensve treatment of theoretcal and mplementaton
More informationGeometric Registration for Deformable Shapes. 2.1 ICP + Tangent Space optimization for Rigid Motions
Geometrc Regstraton for Deformable Shapes 2.1 ICP + Tangent Space optmzaton for Rgd Motons Regstraton Problem Gven Two pont cloud data sets P (model) and Q (data) sampled from surfaces Φ P and Φ Q respectvely.
More informationPreconditioning techniques in Chebyshev collocation method for elliptic equations
Precondtonng technques n Chebyshev collocaton method for ellptc equatons Zh-We Fang Je Shen Ha-We Sun (n memory of late Professor Benyu Guo Abstract When one approxmates ellptc equatons by the spectral
More informationSupporting Information
Supportng Informaton The neural network f n Eq. 1 s gven by: f x l = ReLU W atom x l + b atom, 2 where ReLU s the element-wse rectfed lnear unt, 21.e., ReLUx = max0, x, W atom R d d s the weght matrx to
More informationMATH Homework #2
MATH609-601 Homework #2 September 27, 2012 1. Problems Ths contans a set of possble solutons to all problems of HW-2. Be vglant snce typos are possble (and nevtable). (1) Problem 1 (20 pts) For a matrx
More informationGeneralized Linear Methods
Generalzed Lnear Methods 1 Introducton In the Ensemble Methods the general dea s that usng a combnaton of several weak learner one could make a better learner. More formally, assume that we have a set
More informationGeneral viscosity iterative method for a sequence of quasi-nonexpansive mappings
Avalable onlne at www.tjnsa.com J. Nonlnear Sc. Appl. 9 (2016), 5672 5682 Research Artcle General vscosty teratve method for a sequence of quas-nonexpansve mappngs Cuje Zhang, Ynan Wang College of Scence,
More informationComputing Correlated Equilibria in Multi-Player Games
Computng Correlated Equlbra n Mult-Player Games Chrstos H. Papadmtrou Presented by Zhanxang Huang December 7th, 2005 1 The Author Dr. Chrstos H. Papadmtrou CS professor at UC Berkley (taught at Harvard,
More informationThe Exact Formulation of the Inverse of the Tridiagonal Matrix for Solving the 1D Poisson Equation with the Finite Difference Method
Journal of Electromagnetc Analyss and Applcatons, 04, 6, 0-08 Publshed Onlne September 04 n ScRes. http://www.scrp.org/journal/jemaa http://dx.do.org/0.46/jemaa.04.6000 The Exact Formulaton of the Inverse
More informationHongyi Miao, College of Science, Nanjing Forestry University, Nanjing ,China. (Received 20 June 2013, accepted 11 March 2014) I)ϕ (k)
ISSN 1749-3889 (prnt), 1749-3897 (onlne) Internatonal Journal of Nonlnear Scence Vol.17(2014) No.2,pp.188-192 Modfed Block Jacob-Davdson Method for Solvng Large Sparse Egenproblems Hongy Mao, College of
More informationBootstrap AMG for Markov Chain Computations
Bootstrap AMG for Markov Chan Computatons Karsten Kahl Bergsche Unverstät Wuppertal May 27, 200 Outlne Markov Chans Subspace Egenvalue Approxmaton Ingredents of Least Squares Interpolaton Egensolver Bootstrap
More informationThe lower and upper bounds on Perron root of nonnegative irreducible matrices
Journal of Computatonal Appled Mathematcs 217 (2008) 259 267 wwwelsevercom/locate/cam The lower upper bounds on Perron root of nonnegatve rreducble matrces Guang-Xn Huang a,, Feng Yn b,keguo a a College
More informationMore metrics on cartesian products
More metrcs on cartesan products If (X, d ) are metrc spaces for 1 n, then n Secton II4 of the lecture notes we defned three metrcs on X whose underlyng topologes are the product topology The purpose of
More informationGrover s Algorithm + Quantum Zeno Effect + Vaidman
Grover s Algorthm + Quantum Zeno Effect + Vadman CS 294-2 Bomb 10/12/04 Fall 2004 Lecture 11 Grover s algorthm Recall that Grover s algorthm for searchng over a space of sze wors as follows: consder the
More informationPHYS 705: Classical Mechanics. Calculus of Variations II
1 PHYS 705: Classcal Mechancs Calculus of Varatons II 2 Calculus of Varatons: Generalzaton (no constrant yet) Suppose now that F depends on several dependent varables : We need to fnd such that has a statonary
More information1. Introduction. We consider methods for the smooth unconstrained optimization
BEHAVIOR OF ACCELERATED GRADIENT METHODS NEAR CRITICAL POINTS OF NONCONVEX FUNCTIONS MICHAEL O NEILL AND STEPHEN J WRIGHT Abstract We examne the behavor of accelerated gradent methods n smooth nonconvex
More informationSection 8.3 Polar Form of Complex Numbers
80 Chapter 8 Secton 8 Polar Form of Complex Numbers From prevous classes, you may have encountered magnary numbers the square roots of negatve numbers and, more generally, complex numbers whch are the
More informationMATH 5630: Discrete Time-Space Model Hung Phan, UMass Lowell March 1, 2018
MATH 5630: Dscrete Tme-Space Model Hung Phan, UMass Lowell March, 08 Newton s Law of Coolng Consder the coolng of a well strred coffee so that the temperature does not depend on space Newton s law of collng
More informationPractical Newton s Method
Practcal Newton s Method Lecture- n Newton s Method n Pure Newton s method converges radly once t s close to. It may not converge rom the remote startng ont he search drecton to be a descent drecton rue
More informationMatrix Approximation via Sampling, Subspace Embedding. 1 Solving Linear Systems Using SVD
Matrx Approxmaton va Samplng, Subspace Embeddng Lecturer: Anup Rao Scrbe: Rashth Sharma, Peng Zhang 0/01/016 1 Solvng Lnear Systems Usng SVD Two applcatons of SVD have been covered so far. Today we loo
More informationAlgebraic Solutions to Multidimensional Minimax Location Problems with Chebyshev Distance
Recent Researches n Appled and Computatonal Mathematcs Algebrac Solutons to Multdmensonal Mnmax Locaton Problems wth Chebyshev Dstance NIKOLAI KRIVULIN Faculty of Mathematcs and Mechancs St Petersburg
More informationThe equation of motion of a dynamical system is given by a set of differential equations. That is (1)
Dynamcal Systems Many engneerng and natural systems are dynamcal systems. For example a pendulum s a dynamcal system. State l The state of the dynamcal system specfes t condtons. For a pendulum n the absence
More information1 Matrix representations of canonical matrices
1 Matrx representatons of canoncal matrces 2-d rotaton around the orgn: ( ) cos θ sn θ R 0 = sn θ cos θ 3-d rotaton around the x-axs: R x = 1 0 0 0 cos θ sn θ 0 sn θ cos θ 3-d rotaton around the y-axs:
More informationConsistency & Convergence
/9/007 CHE 374 Computatonal Methods n Engneerng Ordnary Dfferental Equatons Consstency, Convergence, Stablty, Stffness and Adaptve and Implct Methods ODE s n MATLAB, etc Consstency & Convergence Consstency
More information