Chapter 7 Iterative Techniques in Matrix Algebra: Vector Norms, the Cauchy-Bunyakovsky-Schwarz Inequality for Sums, Distances, Convergence
Vector Norms

Chapter 7 Iterative Techniques in Matrix Algebra
Per-Olof Persson, persson@berkeley.edu
Department of Mathematics, University of California, Berkeley
Math 128B Numerical Analysis

Definition. A vector norm on R^n is a function ||.|| from R^n into R with the properties:
(i) ||x|| >= 0 for all x in R^n
(ii) ||x|| = 0 if and only if x = 0
(iii) ||alpha x|| = |alpha| ||x|| for all alpha in R and x in R^n
(iv) ||x + y|| <= ||x|| + ||y|| for all x, y in R^n

Definition. The Euclidean norm l_2 and the infinity norm l_inf for the vector x = (x_1, x_2, ..., x_n)^t are defined by
    ||x||_2 = { sum_{i=1}^n x_i^2 }^{1/2}   and   ||x||_inf = max_{1<=i<=n} |x_i|

Cauchy-Bunyakovsky-Schwarz Inequality for Sums

For each x = (x_1, x_2, ..., x_n)^t and y = (y_1, y_2, ..., y_n)^t in R^n,
    x^t y = sum_{i=1}^n x_i y_i <= { sum_{i=1}^n x_i^2 }^{1/2} { sum_{i=1}^n y_i^2 }^{1/2} = ||x||_2 ||y||_2

Distances

Definition. The distance between two vectors x = (x_1, ..., x_n)^t and y = (y_1, ..., y_n)^t is the norm of the difference of the vectors. The l_2 and l_inf distances are
    ||x - y||_2 = { sum_{i=1}^n (x_i - y_i)^2 }^{1/2}   and   ||x - y||_inf = max_{1<=i<=n} |x_i - y_i|

Convergence

Definition. A sequence {x^(k)}_{k=1}^inf of vectors in R^n is said to converge to x with respect to the norm ||.|| if, given any eps > 0, there exists an integer N(eps) such that
    ||x^(k) - x|| < eps  for all k >= N(eps)

Theorem. The sequence {x^(k)} converges to x in R^n with respect to ||.||_inf if and only if lim_{k->inf} x_i^(k) = x_i for each i = 1, ..., n.

Theorem. For each x in R^n,
    ||x||_inf <= ||x||_2 <= sqrt(n) ||x||_inf

Matrix Norms

Definition. A matrix norm ||.|| on n x n matrices is a real-valued function satisfying
(i) ||A|| >= 0
(ii) ||A|| = 0 if and only if A = 0
(iii) ||alpha A|| = |alpha| ||A||
(iv) ||A + B|| <= ||A|| + ||B||
(v) ||AB|| <= ||A|| ||B||
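As a concrete illustration of these definitions, here is a minimal pure-Python sketch of the l_2 and l_inf norms (the helper names are ours, not from the slides):

```python
import math

def norm2(x):
    # ||x||_2 = (sum of x_i^2)^(1/2), the Euclidean norm
    return math.sqrt(sum(xi * xi for xi in x))

def norm_inf(x):
    # ||x||_inf = max |x_i|
    return max(abs(xi) for xi in x)

x = [3.0, -4.0, 0.0]
print(norm2(x))     # 5.0
print(norm_inf(x))  # 4.0
```

Note that on this example the norm-equivalence bound ||x||_inf <= ||x||_2 <= sqrt(n) ||x||_inf holds: 4 <= 5 <= sqrt(3) * 4.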
Natural Matrix Norms

Definition. If ||.|| is a vector norm, the natural (or induced) matrix norm is given by
    ||A|| = max_{||x||=1} ||Ax||

Corollary. For any vector z != 0, matrix A, and natural norm ||.||,
    ||Az|| <= ||A|| ||z||

Theorem. If A = (a_ij) is an n x n matrix, then
    ||A||_inf = max_{1<=i<=n} sum_{j=1}^n |a_ij|

Eigenvalues and Eigenvectors

Definition. The characteristic polynomial of a square matrix A is p(lambda) = det(A - lambda I).

Definition. The zeros lambda of the characteristic polynomial are eigenvalues of A; any x != 0 satisfying (A - lambda I)x = 0 is a corresponding eigenvector.

Definition. The spectral radius rho(A) of a matrix A is
    rho(A) = max |lambda|, over the eigenvalues lambda of A

Theorem. If A is an n x n matrix, then
(i) ||A||_2 = [rho(A^t A)]^{1/2}
(ii) rho(A) <= ||A|| for any natural norm ||.||

Convergent Matrices

Definition. An n x n matrix A is convergent if
    lim_{k->inf} (A^k)_{ij} = 0  for each i = 1, 2, ..., n and j = 1, 2, ..., n

Theorem. The following statements are equivalent:
(i) A is a convergent matrix
(ii) lim_{n->inf} ||A^n|| = 0 for some natural norm
(iii) lim_{n->inf} ||A^n|| = 0 for all natural norms
(iv) rho(A) < 1
(v) lim_{n->inf} A^n x = 0 for every x

Iterative Methods for Linear Systems

Direct methods for solving Ax = b, e.g. Gaussian elimination, compute an exact solution after a finite number of steps (in exact arithmetic). Iterative algorithms produce a sequence of approximations x^(1), x^(2), ... which hopefully converges to the solution, and
- may require less memory than direct methods
- may be faster than direct methods
- may handle special structures (such as sparsity) in a simpler way

The residual is r = b - Ax.

Two Classes of Iterative Methods

Stationary methods (or classical iterative methods) find a splitting A = M - K and iterate
    x^(k) = M^{-1}(K x^(k-1) + b) = T x^(k-1) + c
Examples: Jacobi, Gauss-Seidel, Successive Over-Relaxation (SOR).

Krylov subspace methods use only multiplication by A (and possibly by A^T) and find solutions in the Krylov subspace
    span{b, Ab, A^2 b, ..., A^(k-1) b}
Examples: Conjugate Gradient (CG), Generalized Minimal Residual (GMRES), BiConjugate Gradient (BiCG), etc.

Jacobi's Method

An iterative technique to solve Ax = b starts with an initial approximation x^(0) and generates a sequence of vectors {x^(k)}_{k=0}^inf that converges to x.
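The max-row-sum formula for ||A||_inf above translates directly to code. A short illustrative sketch (not from the slides), with the matrix stored as a list of rows:

```python
def matrix_norm_inf(A):
    # ||A||_inf = max over rows i of sum_j |a_ij|
    return max(sum(abs(a) for a in row) for row in A)

A = [[1.0, -2.0],
     [3.0,  4.0]]
print(matrix_norm_inf(A))  # 7.0 (second row: |3| + |4|)
```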
Jacobi's Method

Solve for x_i in the i-th equation of Ax = b:
    x_i = sum_{j=1, j!=i}^n ( - a_ij x_j / a_ii ) + b_i / a_ii,  for i = 1, 2, ..., n

This leads to the iteration
    x_i^(k) = [ - sum_{j=1, j!=i}^n a_ij x_j^(k-1) + b_i ] / a_ii,  for i = 1, 2, ..., n
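The componentwise Jacobi iteration above can be sketched in a few lines of pure Python (an illustrative implementation, not from the slides; the example system and iteration count are ours):

```python
def jacobi(A, b, x0, num_iters):
    # Each sweep computes every x_i^(k) from the previous iterate x^(k-1)
    # only, so the whole new vector is built before replacing x.
    n = len(b)
    x = list(x0)
    for _ in range(num_iters):
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

# Strictly diagonally dominant system, so Jacobi converges for any x^(0);
# the exact solution is (1/11, 7/11).
A = [[4.0, 1.0],
     [1.0, 3.0]]
b = [1.0, 2.0]
x = jacobi(A, b, [0.0, 0.0], 60)
```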
Matrix Form of Jacobi's Method

Convert Ax = b into an equivalent system x = Tx + c, select an initial vector x^(0), and iterate
    x^(k) = T x^(k-1) + c

For Jacobi's method, split A = D - L - U, where D is the diagonal of A, -L is the strictly lower triangular part, and -U is the strictly upper triangular part. This transforms Ax = (D - L - U)x = b into Dx = (L + U)x + b, and if D^{-1} exists, this leads to the Jacobi iteration:
    x^(k) = D^{-1}(L + U) x^(k-1) + D^{-1} b = T_j x^(k-1) + c_j
where T_j = D^{-1}(L + U) and c_j = D^{-1} b.

The Gauss-Seidel Method

Improve Jacobi's method by, for i > 1, using the already updated components x_1^(k), ..., x_{i-1}^(k) when computing x_i^(k):
    x_i^(k) = [ - sum_{j=1}^{i-1} a_ij x_j^(k) - sum_{j=i+1}^{n} a_ij x_j^(k-1) + b_i ] / a_ii

In matrix form, the method can be written
    (D - L) x^(k) = U x^(k-1) + b
and if (D - L)^{-1} exists, this leads to the Gauss-Seidel iteration
    x^(k) = (D - L)^{-1} U x^(k-1) + (D - L)^{-1} b = T_g x^(k-1) + c_g
where T_g = (D - L)^{-1} U and c_g = (D - L)^{-1} b.

General Iteration Methods

Lemma. If the spectral radius satisfies rho(T) < 1, then (I - T)^{-1} exists, and
    (I - T)^{-1} = I + T + T^2 + ... = sum_{j=0}^inf T^j

Theorem. For any x^(0) in R^n, the sequence x^(k) = T x^(k-1) + c converges to the unique solution of x = Tx + c if and only if rho(T) < 1.

Corollary. If ||T|| < 1 for any natural matrix norm, then x^(k) = T x^(k-1) + c converges for any x^(0) in R^n to a vector x in R^n such that x = Tx + c. The following error estimates hold:
(i) ||x - x^(k)|| <= ||T||^k ||x^(0) - x||
(ii) ||x - x^(k)|| <= ( ||T||^k / (1 - ||T||) ) ||x^(1) - x^(0)||

Theorem. If A is strictly diagonally dominant, then Jacobi and Gauss-Seidel converge for any x^(0).

Theorem (Stein-Rosenberg). If a_ii > 0 for all i and a_ij <= 0 for all i != j, then one and only one of the following holds:
(i) 0 <= rho(T_g) < rho(T_j) < 1
(ii) 1 < rho(T_j) < rho(T_g)
(iii) rho(T_j) = rho(T_g) = 0
(iv) rho(T_j) = rho(T_g) = 1

The Residual Vector

Definition. The residual vector for x~ in R^n with respect to the linear system Ax = b is r = b - A x~.
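The Gauss-Seidel update differs from Jacobi only in that each sweep overwrites x in place, so component i already sees the freshly updated components 1, ..., i-1. A minimal pure-Python sketch (illustrative, not from the slides; the example system is ours):

```python
def gauss_seidel(A, b, x0, num_iters):
    # Overwrite x in place: x[j] for j < i already holds the k-th sweep's
    # value, while x[j] for j > i still holds the (k-1)-th sweep's value.
    n = len(b)
    x = list(x0)
    for _ in range(num_iters):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x

# Strictly diagonally dominant, so Gauss-Seidel converges for any x^(0);
# exact solution (1/11, 7/11), reached faster than with Jacobi.
A = [[4.0, 1.0],
     [1.0, 3.0]]
b = [1.0, 2.0]
x = gauss_seidel(A, b, [0.0, 0.0], 30)
```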
Consider the approximate solution vector in Gauss-Seidel
    x_i^(k) = (x_1^(k), x_2^(k), ..., x_{i-1}^(k), x_i^(k-1), ..., x_n^(k-1))^t
with residual vector r_i^(k) = (r_{1i}^(k), r_{2i}^(k), ..., r_{ni}^(k))^t. The Gauss-Seidel method
    x_i^(k) = [ b_i - sum_{j=1}^{i-1} a_ij x_j^(k) - sum_{j=i+1}^{n} a_ij x_j^(k-1) ] / a_ii
can then be written as
    x_i^(k) = x_i^(k-1) + r_{ii}^(k) / a_ii

Successive Over-Relaxation

Relaxation methods use an iteration of the form
    x_i^(k) = x_i^(k-1) + omega r_{ii}^(k) / a_ii
for some positive omega. With omega > 1, they can accelerate the convergence of the Gauss-Seidel method, and are called successive over-relaxation (SOR) methods. Write the SOR method as
    x_i^(k) = (1 - omega) x_i^(k-1) + (omega / a_ii) [ b_i - sum_{j=1}^{i-1} a_ij x_j^(k) - sum_{j=i+1}^{n} a_ij x_j^(k-1) ]
which can be written in the matrix form
    x^(k) = T_omega x^(k-1) + c_omega
where T_omega = (D - omega L)^{-1} [(1 - omega) D + omega U] and c_omega = omega (D - omega L)^{-1} b.
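The SOR sweep is the Gauss-Seidel sweep blended with the previous iterate through the relaxation parameter omega. A pure-Python sketch (illustrative, not from the slides; the example system and omega value are ours):

```python
def sor(A, b, x0, omega, num_iters):
    # x_i^(k) = (1 - omega) * x_i^(k-1) + omega * (Gauss-Seidel update);
    # omega = 1 recovers Gauss-Seidel exactly.
    n = len(b)
    x = list(x0)
    for _ in range(num_iters):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (1.0 - omega) * x[i] + omega * (b[i] - s) / A[i][i]
    return x

# A is symmetric positive definite and 0 < omega < 2, so SOR converges
# (Ostrowski-Reich); exact solution (1/11, 7/11).
A = [[4.0, 1.0],
     [1.0, 3.0]]
b = [1.0, 2.0]
x = sor(A, b, [0.0, 0.0], 1.1, 40)
```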
Convergence of the SOR Method

Theorem (Kahan). If a_ii != 0 for all i, then rho(T_omega) >= |omega - 1|, and the SOR method can converge only if 0 < omega < 2.

Theorem (Ostrowski-Reich). If A is positive definite and 0 < omega < 2, then SOR converges for any x^(0).

Theorem. If A is positive definite and tridiagonal, then rho(T_g) = [rho(T_j)]^2 < 1, and the optimal omega for SOR is
    omega = 2 / ( 1 + sqrt(1 - [rho(T_j)]^2) )
which gives rho(T_omega) = omega - 1.

Error Bounds

Theorem. Suppose Ax = b, A is nonsingular, x~ approximates x, and r = b - A x~. Then for any natural norm,
    ||x - x~|| <= ||r|| ||A^{-1}||
and if x != 0 and b != 0,
    ||x - x~|| / ||x|| <= ||A|| ||A^{-1}|| ||r|| / ||b||

Definition. The condition number of a nonsingular matrix A in the norm ||.|| is
    K(A) = ||A|| ||A^{-1}||

In terms of K(A), the error bounds can be written:
    ||x - x~|| <= K(A) ||r|| / ||A||,    ||x - x~|| / ||x|| <= K(A) ||r|| / ||b||

Iterative Refinement

Errors may appear in both the matrix and the right-hand side.

Algorithm: Iterative Refinement
    Solve Ax^(1) = b
    r^(k) = b - A x^(k)        (residual; compute accurately!)
    Solve Ay^(k) = r^(k)       (solve for correction)
    x^(k+1) = x^(k) + y^(k)    (improve solution)

This allows for errors in the solution of the linear systems, provided the residual r is computed accurately.

Theorem. Suppose A is nonsingular and
    ||delta A|| < 1 / ||A^{-1}||
The solution x~ to (A + delta A) x~ = b + delta b approximates the solution x of Ax = b with the error estimate
    ||x - x~|| / ||x|| <= [ K(A) / (1 - K(A) ||delta A|| / ||A||) ] ( ||delta b|| / ||b|| + ||delta A|| / ||A|| )

Inner Products

Definition. The inner product for n-dimensional vectors x, y is <x, y> = x^t y. For any vectors x, y, z and real number alpha:
(a) <x, y> = <y, x>
(b) <alpha x, y> = <x, alpha y> = alpha <x, y>
(c) <x + z, y> = <x, y> + <z, y>
(d) <x, x> >= 0
(e) <x, x> = 0 if and only if x = 0

Krylov Subspace Algorithms

Create a sequence of Krylov subspaces for Ax = b:
    K_k = span{b, Ab, ..., A^(k-1) b}
and find approximate solutions x_k in K_k. Only matrix-vector products are involved. For SPD matrices, the most popular algorithm is the Conjugate Gradients method [Hestenes/Stiefel, 1952]:
- Finds the best solution x_k in K_k in the norm ||x||_A = sqrt(x^t A x)
- Only requires storage of 4 vectors (not all the k vectors in K_k)
- Remarkably simple, with excellent convergence properties
- Originally invented as a direct algorithm! (converges after n steps in exact arithmetic)
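The Conjugate Gradients method just introduced can be sketched in pure Python (an illustrative implementation with helper names of our choosing, not code from the slides):

```python
def matvec(A, v):
    # Dense matrix-vector product; in practice A would be sparse.
    return [sum(aij * vj for aij, vj in zip(row, v)) for row in A]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def conjugate_gradients(A, b, num_iters):
    # CG for SPD A with x_0 = 0, r_0 = b, p_0 = r_0.
    x = [0.0] * len(b)
    r = list(b)
    p = list(r)
    rs_old = dot(r, r)
    for _ in range(num_iters):
        Ap = matvec(A, p)
        alpha = rs_old / dot(p, Ap)                       # step length
        x = [xi + alpha * pi for xi, pi in zip(x, p)]     # approximate solution
        r = [ri - alpha * ai for ri, ai in zip(r, Ap)]    # residual
        rs_new = dot(r, r)
        beta = rs_new / rs_old                            # improvement this step
        p = [ri + beta * pi for ri, pi in zip(r, p)]      # search direction
        rs_old = rs_new
    return x

# In exact arithmetic CG converges in at most n steps; here n = 2 and the
# exact solution is (1/11, 7/11).
A = [[4.0, 1.0],
     [1.0, 3.0]]
b = [1.0, 2.0]
x = conjugate_gradients(A, b, 2)
```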
The Conjugate Gradients Method

Algorithm: Conjugate Gradients Method
    x_0 = 0, r_0 = b, p_0 = r_0
    for k = 1, 2, ...:
        alpha_k = (r_{k-1}^t r_{k-1}) / (p_{k-1}^t A p_{k-1})    (step length)
        x_k = x_{k-1} + alpha_k p_{k-1}                          (approximate solution)
        r_k = r_{k-1} - alpha_k A p_{k-1}                        (residual)
        beta_k = (r_k^t r_k) / (r_{k-1}^t r_{k-1})               (improvement this step)
        p_k = r_k + beta_k p_{k-1}                               (search direction)

Only one matrix-vector product A p_{k-1} per iteration. Operation count O(n) per iteration (excluding the matrix-vector product).

Properties of Conjugate Gradients Vectors

The spaces spanned by the solutions, the search directions, and the residuals are all equal to the Krylov subspaces:
    K_k = span{x_1, x_2, ..., x_k} = span{p_0, p_1, ..., p_{k-1}}
        = span{r_0, r_1, ..., r_{k-1}} = span{b, Ab, ..., A^(k-1) b}

The residuals are orthogonal: r_k^t r_j = 0 (j < k).
The search directions are A-conjugate: p_k^t A p_j = 0 (j < k).

Optimality of Conjugate Gradients

Theorem. The errors e_k = x - x_k are minimized in the A-norm.

Proof. For any other point x~ = x_k - Delta x in K_k, the error is
    ||e_k + Delta x||_A^2 = (e_k + Delta x)^t A (e_k + Delta x)
        = e_k^t A e_k + (Delta x)^t A (Delta x) + 2 e_k^t A (Delta x)
But e_k^t A (Delta x) = r_k^t (Delta x) = 0, since r_k is orthogonal to K_k, so Delta x = 0 minimizes ||e||_A.

Theorem. Convergence is monotonic, ||e_k||_A <= ||e_{k-1}||_A, and e_k = 0 for some k <= m (the dimension of the system). Proof.
Follows from K_k being contained in K_{k+1}, and K_k != R^m unless converged.

Optimization in CG

CG can be interpreted as a minimization algorithm. We know it minimizes ||e||_A, but this cannot be evaluated. CG also minimizes the quadratic function phi(x) = (1/2) x^t A x - x^t b:
    ||e_k||_A^2 = e_k^t A e_k = (x - x_k)^t A (x - x_k)
        = x_k^t A x_k - 2 x_k^t A x + x^t A x
        = x_k^t A x_k - 2 x_k^t b + x^t b = 2 phi(x_k) + constant
(using Ax = b in the last step). At each step, alpha_k is chosen to minimize phi over x_k = x_{k-1} + alpha_k p_{k-1}. The conjugated search directions p_k give minimization over all of K_k.

Optimization by Conjugate Gradients

We know that solving Ax = b is equivalent to minimizing the quadratic function phi(x) = (1/2) x^t A x - x^t b. The minimization can be done by line searches, where phi(x_k) is minimized along a search direction p_k. The alpha_{k+1} that minimizes phi(x_k + alpha_{k+1} p_k) is
    alpha_{k+1} = (p_k^t r_k) / (p_k^t A p_k)
with the residual r_k = b - A x_k.

The Method of Steepest Descent

Very simple approach: set the search direction p_k to the negative gradient, i.e. the residual r_k. This corresponds to moving in the direction in which phi(x) changes the most, since the residual is minus the gradient of phi(x_k):
    -grad phi(x_k) = b - A x_k = r_k

Algorithm: Steepest Descent
    x_0 = 0, r_0 = b
    for k = 1, 2, ...:
        alpha_k = (r_{k-1}^t r_{k-1}) / (r_{k-1}^t A r_{k-1})    (step length)
        x_k = x_{k-1} + alpha_k r_{k-1}                          (approximate solution)
        r_k = r_{k-1} - alpha_k A r_{k-1}                        (residual)

Poor convergence; it tends to move along previous search directions.
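The steepest descent iteration can be sketched in pure Python (illustrative, not from the slides; the example system and iteration count are ours). Note how it mirrors CG except that the residual itself is the search direction:

```python
def matvec(A, v):
    return [sum(aij * vj for aij, vj in zip(row, v)) for row in A]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def steepest_descent(A, b, num_iters):
    # Line search along the residual r = b - A x, the negative gradient
    # of phi(x) = 1/2 x^t A x - x^t b, starting from x_0 = 0.
    x = [0.0] * len(b)
    r = list(b)
    for _ in range(num_iters):
        Ar = matvec(A, r)
        alpha = dot(r, r) / dot(r, Ar)                    # step length
        x = [xi + alpha * ri for xi, ri in zip(x, r)]     # approximate solution
        r = [ri - alpha * ai for ri, ai in zip(r, Ar)]    # residual
    return x

# Converges, but more slowly than CG on the same SPD system;
# exact solution (1/11, 7/11).
A = [[4.0, 1.0],
     [1.0, 3.0]]
b = [1.0, 2.0]
x = steepest_descent(A, b, 100)
```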
The Method of Conjugate Directions

The optimization can be improved by better search directions. Let the search directions be A-conjugate, or p_j^t A p_k = 0 for j != k. Then the algorithm will converge in at most n steps, since the initial error can be decomposed along the p_k's:
    e_0 = sum_{k=0}^{n-1} delta_k p_k,  with  delta_k = (p_k^t A e_0) / (p_k^t A p_k)
But this is exactly the alpha we choose at step k:
    alpha_{k+1} = (p_k^t r_k) / (p_k^t A p_k) = (p_k^t A e_k) / (p_k^t A p_k) = (p_k^t A e_0) / (p_k^t A p_k)
since the error e_k is the initial e_0 plus a combination of p_0, ..., p_{k-1}, which are all A-conjugate to p_k. Each component delta_k is then subtracted out at step k, and the method converges after n steps.

Choosing A-conjugate Search Directions

One method to choose p_k A-conjugate to the previous search vectors is by Gram-Schmidt:
    p_k = p_k^0 - sum_{j=0}^{k-1} beta_kj p_j,  with  beta_kj = (p_k^0)^t A p_j / (p_j^t A p_j)
The initial vectors p_k^0 should be linearly independent, for example column k+1 of the identity matrix. Drawback: all previous search vectors p_j must be stored.

Conjugate Gradients is simply Conjugate Directions with a particular initial vector in Gram-Schmidt: p_k^0 = r_k. This gives orthogonal residuals r_k^t r_j = 0 for j != k, and beta_kj = 0 for k > j + 1.

Preconditioners for Linear Systems

Main idea: instead of solving Ax = b, solve
    M^{-1} A x = M^{-1} b
using a nonsingular n x n preconditioner M; this system has the same solution x, but the convergence properties are based on M^{-1} A instead of A. There is a trade-off between the cost of applying M^{-1} and the improvement of the convergence properties.
Extreme cases:
- M = A: perfect conditioning of M^{-1} A = I, but applying M^{-1} is expensive
- M = I: applying M^{-1} = I costs nothing, but there is no improvement of M^{-1} A = A

To keep symmetry, solve (C^{-1} A C^{-t}) C^t x = C^{-1} b with C C^t = M. This can be written in terms of M^{-1} only, without reference to C:

Algorithm: Preconditioned Conjugate Gradients Method
    x_0 = 0, r_0 = b, p_0 = M^{-1} r_0, z_0 = p_0
    for k = 1, 2, ...:
        alpha_k = (r_{k-1}^T z_{k-1}) / (p_{k-1}^T A p_{k-1})    (step length)
        x_k = x_{k-1} + alpha_k p_{k-1}                          (approximate solution)
        r_k = r_{k-1} - alpha_k A p_{k-1}                        (residual)
        z_k = M^{-1} r_k                                         (preconditioning)
        beta_k = (r_k^T z_k) / (r_{k-1}^T z_{k-1})               (improvement this step)
        p_k = z_k + beta_k p_{k-1}                               (search direction)

Commonly Used Preconditioners

A preconditioner should approximately solve the problem Ax = b.
- Jacobi preconditioning: M = diag(A). Very simple and cheap; might improve certain problems but is usually insufficient.
- Block-Jacobi preconditioning: use a block-diagonal M instead of a diagonal one. Another variant uses several diagonals (e.g. tridiagonal).
- Classical iterative methods: precondition by applying one step of Jacobi, Gauss-Seidel, SOR, or SSOR.
- Incomplete factorizations: perform Gaussian elimination but ignore fill; this results in approximate factors A ~ LU or A ~ R^T R (more later).
- Coarse-grid approximations: for a PDE discretized on a grid, a preconditioner can be formed by transferring the solution to a coarser grid, solving a smaller problem, then transferring back (multigrid).
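With the Jacobi preconditioner M = diag(A), applying M^{-1} is just a componentwise division by the diagonal, which makes the preconditioned CG algorithm above easy to sketch in pure Python (illustrative only; helper names and the example system are ours):

```python
def matvec(A, v):
    return [sum(aij * vj for aij, vj in zip(row, v)) for row in A]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def preconditioned_cg(A, b, num_iters):
    # PCG with M = diag(A): z = M^{-1} r is elementwise r_i / a_ii.
    n = len(b)
    diag = [A[i][i] for i in range(n)]
    x = [0.0] * n
    r = list(b)
    z = [ri / di for ri, di in zip(r, diag)]              # z_0 = M^{-1} r_0
    p = list(z)
    rz_old = dot(r, z)
    for _ in range(num_iters):
        Ap = matvec(A, p)
        alpha = rz_old / dot(p, Ap)                       # step length
        x = [xi + alpha * pi for xi, pi in zip(x, p)]     # approximate solution
        r = [ri - alpha * ai for ri, ai in zip(r, Ap)]    # residual
        z = [ri / di for ri, di in zip(r, diag)]          # preconditioning
        rz_new = dot(r, z)
        beta = rz_new / rz_old                            # improvement this step
        p = [zi + beta * pi for zi, pi in zip(z, p)]      # search direction
        rz_old = rz_new
    return x

# Like CG, PCG converges in at most n steps in exact arithmetic;
# exact solution (1/11, 7/11).
A = [[4.0, 1.0],
     [1.0, 3.0]]
b = [1.0, 2.0]
x = preconditioned_cg(A, b, 2)
```

For a 2 x 2 system the Jacobi preconditioner buys little; the point of the sketch is only to show where M^{-1} enters the iteration.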
Lnear Algebra and ts Applcatons 393 (2004) 91 105 www.elsever.com/locate/laa Overlappng addtve and multplcatve Schwarz teratons for H -matrces Rafael Bru a,1, Francsco Pedroche a, Danel B. Szyld b,,2 a
More informationMATH 829: Introduction to Data Mining and Analysis The EM algorithm (part 2)
1/16 MATH 829: Introducton to Data Mnng and Analyss The EM algorthm (part 2) Domnque Gullot Departments of Mathematcal Scences Unversty of Delaware Aprl 20, 2016 Recall 2/16 We are gven ndependent observatons
More informationBézier curves. Michael S. Floater. September 10, These notes provide an introduction to Bézier curves. i=0
Bézer curves Mchael S. Floater September 1, 215 These notes provde an ntroducton to Bézer curves. 1 Bernsten polynomals Recall that a real polynomal of a real varable x R, wth degree n, s a functon of
More informationCSCE 790S Background Results
CSCE 790S Background Results Stephen A. Fenner September 8, 011 Abstract These results are background to the course CSCE 790S/CSCE 790B, Quantum Computaton and Informaton (Sprng 007 and Fall 011). Each
More informationCollege of Computer & Information Science Fall 2009 Northeastern University 20 October 2009
College of Computer & Informaton Scence Fall 2009 Northeastern Unversty 20 October 2009 CS7880: Algorthmc Power Tools Scrbe: Jan Wen and Laura Poplawsk Lecture Outlne: Prmal-dual schema Network Desgn:
More informationHomework Notes Week 7
Homework Notes Week 7 Math 4 Sprng 4 #4 (a Complete the proof n example 5 that s an nner product (the Frobenus nner product on M n n (F In the example propertes (a and (d have already been verfed so we
More informationLecture 10: May 6, 2013
TTIC/CMSC 31150 Mathematcal Toolkt Sprng 013 Madhur Tulsan Lecture 10: May 6, 013 Scrbe: Wenje Luo In today s lecture, we manly talked about random walk on graphs and ntroduce the concept of graph expander,
More informationPattern Classification
Pattern Classfcaton All materals n these sldes ere taken from Pattern Classfcaton (nd ed) by R. O. Duda, P. E. Hart and D. G. Stork, John Wley & Sons, 000 th the permsson of the authors and the publsher
More informationQuantum Mechanics I - Session 4
Quantum Mechancs I - Sesson 4 Aprl 3, 05 Contents Operators Change of Bass 4 3 Egenvectors and Egenvalues 5 3. Denton....................................... 5 3. Rotaton n D....................................
More informationFormal solvers of the RT equation
Formal solvers of the RT equaton Formal RT solvers Runge- Kutta (reference solver) Pskunov N.: 979, Master Thess Long characterstcs (Feautrer scheme) Cannon C.J.: 970, ApJ 6, 55 Short characterstcs (Hermtan
More informationEEE 241: Linear Systems
EEE : Lnear Systems Summary #: Backpropagaton BACKPROPAGATION The perceptron rule as well as the Wdrow Hoff learnng were desgned to tran sngle layer networks. They suffer from the same dsadvantage: they
More informationThe lower and upper bounds on Perron root of nonnegative irreducible matrices
Journal of Computatonal Appled Mathematcs 217 (2008) 259 267 wwwelsevercom/locate/cam The lower upper bounds on Perron root of nonnegatve rreducble matrces Guang-Xn Huang a,, Feng Yn b,keguo a a College
More informationRobust Norm Equivalencies and Preconditioning
Robust Norm Equvalences and Precondtonng Karl Scherer Insttut für Angewandte Mathematk, Unversty of Bonn, Wegelerstr. 6, 53115 Bonn, Germany Summary. In ths contrbuton we report on work done n contnuaton
More informationEstimation: Part 2. Chapter GREG estimation
Chapter 9 Estmaton: Part 2 9. GREG estmaton In Chapter 8, we have seen that the regresson estmator s an effcent estmator when there s a lnear relatonshp between y and x. In ths chapter, we generalzed the
More informationFall 2012 Analysis of Experimental Measurements B. Eisenstein/rev. S. Errede
Fall 0 Analyss of Expermental easurements B. Esensten/rev. S. Errede We now reformulate the lnear Least Squares ethod n more general terms, sutable for (eventually extendng to the non-lnear case, and also
More information3.1 Expectation of Functions of Several Random Variables. )' be a k-dimensional discrete or continuous random vector, with joint PMF p (, E X E X1 E X
Statstcs 1: Probablty Theory II 37 3 EPECTATION OF SEVERAL RANDOM VARIABLES As n Probablty Theory I, the nterest n most stuatons les not on the actual dstrbuton of a random vector, but rather on a number
More informationMATH 241B FUNCTIONAL ANALYSIS - NOTES EXAMPLES OF C ALGEBRAS
MATH 241B FUNCTIONAL ANALYSIS - NOTES EXAMPLES OF C ALGEBRAS These are nformal notes whch cover some of the materal whch s not n the course book. The man purpose s to gve a number of nontrval examples
More informationSection 8.3 Polar Form of Complex Numbers
80 Chapter 8 Secton 8 Polar Form of Complex Numbers From prevous classes, you may have encountered magnary numbers the square roots of negatve numbers and, more generally, complex numbers whch are the
More informationLecture 20: Lift and Project, SDP Duality. Today we will study the Lift and Project method. Then we will prove the SDP duality theorem.
prnceton u. sp 02 cos 598B: algorthms and complexty Lecture 20: Lft and Project, SDP Dualty Lecturer: Sanjeev Arora Scrbe:Yury Makarychev Today we wll study the Lft and Project method. Then we wll prove
More informationLeast-Squares Solutions of Generalized Sylvester Equation with Xi Satisfies Different Linear Constraint
Advances n Lnear Algebra & Matrx heory 6 6 59-7 Publshed Onlne June 6 n ScRes. http://www.scrp.org/journal/alamt http://dx.do.org/.6/alamt.6.68 Least-Squares Solutons of Generalzed Sylvester Equaton wth
More informationSupplement: Proofs and Technical Details for The Solution Path of the Generalized Lasso
Supplement: Proofs and Techncal Detals for The Soluton Path of the Generalzed Lasso Ryan J. Tbshran Jonathan Taylor In ths document we gve supplementary detals to the paper The Soluton Path of the Generalzed
More informationTHE Hadamard product of two nonnegative matrices and
IAENG Internatonal Journal of Appled Mathematcs 46:3 IJAM_46_3_5 Some New Bounds for the Hadamard Product of a Nonsngular M-matrx and Its Inverse Zhengge Huang Lgong Wang and Zhong Xu Abstract Some new
More informationDIFFERENTIAL FORMS BRIAN OSSERMAN
DIFFERENTIAL FORMS BRIAN OSSERMAN Dfferentals are an mportant topc n algebrac geometry, allowng the use of some classcal geometrc arguments n the context of varetes over any feld. We wll use them to defne
More informationCIS526: Machine Learning Lecture 3 (Sept 16, 2003) Linear Regression. Preparation help: Xiaoying Huang. x 1 θ 1 output... θ M x M
CIS56: achne Learnng Lecture 3 (Sept 6, 003) Preparaton help: Xaoyng Huang Lnear Regresson Lnear regresson can be represented by a functonal form: f(; θ) = θ 0 0 +θ + + θ = θ = 0 ote: 0 s a dummy attrbute
More informationON A DETERMINATION OF THE INITIAL FUNCTIONS FROM THE OBSERVED VALUES OF THE BOUNDARY FUNCTIONS FOR THE SECOND-ORDER HYPERBOLIC EQUATION
Advanced Mathematcal Models & Applcatons Vol.3, No.3, 2018, pp.215-222 ON A DETERMINATION OF THE INITIAL FUNCTIONS FROM THE OBSERVED VALUES OF THE BOUNDARY FUNCTIONS FOR THE SECOND-ORDER HYPERBOLIC EUATION
More information6.854J / J Advanced Algorithms Fall 2008
MIT OpenCourseWare http://ocw.mt.edu 6.854J / 18.415J Advanced Algorthms Fall 2008 For nformaton about ctng these materals or our Terms of Use, vst: http://ocw.mt.edu/terms. 18.415/6.854 Advanced Algorthms
More informationBallot Paths Avoiding Depth Zero Patterns
Ballot Paths Avodng Depth Zero Patterns Henrch Nederhausen and Shaun Sullvan Florda Atlantc Unversty, Boca Raton, Florda nederha@fauedu, ssull21@fauedu 1 Introducton In a paper by Sapounaks, Tasoulas,
More information