Norms and Perturbation theory for linear systems
- Gervais Chase
CHAPTER 7
Norms and Perturbation Theory for Linear Systems

Exercise 7.7: Consistency of sum norm?

Observe that the sum norm is a matrix norm. This follows since it is equal to the l1-norm of the vector v = vec(A) obtained by stacking the columns of a matrix A on top of each other. Let A = (a_ij) and B = (b_ij) be matrices for which the product AB is defined. Then

  ‖AB‖_S = Σ_{i,j} | Σ_k a_ik b_kj | ≤ Σ_{i,j,k} |a_ik b_kj| ≤ Σ_{i,j,k,l} |a_ik| |b_lj| = ( Σ_{i,k} |a_ik| ) ( Σ_{l,j} |b_lj| ) = ‖A‖_S ‖B‖_S,

where the first inequality follows from the triangle inequality and the multiplicative property of the absolute value. Since A and B were arbitrary, this proves that the sum norm is consistent.

Exercise 7.8: Consistency of max norm?

Observe that the max norm is a matrix norm. This follows since it is equal to the l∞-norm of the vector v = vec(A) obtained by stacking the columns of a matrix A on top of each other. To show that the max norm is not consistent we use a counterexample. Let A = B = [1 1; 1 1]. Then AB = [2 2; 2 2], so that

  ‖AB‖_M = 2 > 1 = ‖A‖_M ‖B‖_M,

contradicting ‖AB‖_M ≤ ‖A‖_M ‖B‖_M.

Exercise 7.9: Consistency of modified max norm?

Exercise 7.8 shows that the max norm is not consistent. In this Exercise we show that the max norm can be modified so as to define a consistent matrix norm.

(a) Let A ∈ C^{m,n} and define ‖A‖ := √(mn) ‖A‖_M as in the Exercise. To show that ‖·‖ defines a consistent matrix norm we have to show that it fulfills the three matrix norm properties and that it is submultiplicative. Let A, B ∈ C^{m,n} be any matrices and α any scalar.

(1) Positivity. Clearly ‖A‖ = √(mn) ‖A‖_M ≥ 0. Moreover, ‖A‖ = 0 ⟺ a_ij = 0 for all i, j ⟺ A = 0.

(2) Homogeneity. One has ‖αA‖ = √(mn) ‖αA‖_M = |α| √(mn) ‖A‖_M = |α| ‖A‖.
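Both results above lend themselves to a quick numerical check. The following plain-Python sketch is not part of the original solution, and the helper names (`matmul`, `sum_norm`, `max_norm`) are ours. It confirms consistency of the sum norm on a sample pair and reproduces the counterexample for the max norm:

```python
def matmul(A, B):
    # Plain-Python product of two matrices given as lists of rows.
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def sum_norm(A):
    # ||A||_S: the l1-norm of vec(A).
    return sum(abs(a) for row in A for a in row)

def max_norm(A):
    # ||A||_M: the l-infinity norm of vec(A).
    return max(abs(a) for row in A for a in row)

# Consistency of the sum norm on a sample pair:
A = [[1.0, -2.0], [3.0, 0.5]]
B = [[-1.0, 4.0], [2.0, -0.5]]
assert sum_norm(matmul(A, B)) <= sum_norm(A) * sum_norm(B)

# The counterexample of Exercise 7.8: A = B = ones(2, 2) gives
# ||AB||_M = 2 > 1 = ||A||_M * ||B||_M, so the max norm is not consistent.
ones = [[1.0, 1.0], [1.0, 1.0]]
assert max_norm(matmul(ones, ones)) == 2.0
assert max_norm(ones) * max_norm(ones) == 1.0
```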
(3) Subadditivity. One has ‖A + B‖ = √(mn) ‖A + B‖_M ≤ √(mn) ( ‖A‖_M + ‖B‖_M ) = ‖A‖ + ‖B‖.

(4) Submultiplicativity. For A ∈ C^{m,q} and B ∈ C^{q,n} one has

  ‖AB‖ = √(mn) max_{i,j} | Σ_{k=1}^{q} a_ik b_kj | ≤ √(mn) max_{i,j} Σ_{k=1}^{q} |a_ik| |b_kj| ≤ √(mn) q ( max_{i,k} |a_ik| ) ( max_{k,j} |b_kj| ) = ( √(mq) ‖A‖_M ) ( √(qn) ‖B‖_M ) = ‖A‖ ‖B‖,

where the last equality uses √(mq) √(qn) = q √(mn).

(b) For any A ∈ C^{m,n}, let

  ‖A‖^(1) := m ‖A‖_M  and  ‖A‖^(2) := n ‖A‖_M.

Comparing with the solution of part (a) we see that positivity, homogeneity and subadditivity are fulfilled here as well, making ‖·‖^(1) and ‖·‖^(2) valid matrix norms. Furthermore, for any A ∈ C^{m,q}, B ∈ C^{q,n},

  ‖AB‖^(1) = m max_{i,j} | Σ_{k=1}^{q} a_ik b_kj | ≤ ( m max_{i,k} |a_ik| ) ( q max_{k,j} |b_kj| ) = ‖A‖^(1) ‖B‖^(1),

  ‖AB‖^(2) = n max_{i,j} | Σ_{k=1}^{q} a_ik b_kj | ≤ ( q max_{i,k} |a_ik| ) ( n max_{k,j} |b_kj| ) = ‖A‖^(2) ‖B‖^(2),

which proves the submultiplicativity of both norms.

Exercise 7.10: The sum norm is subordinate to?

For any matrix A = (a_ij) ∈ C^{m,n} and column vector x = (x_j) ∈ C^n, one has

  ‖Ax‖_1 = Σ_i | Σ_j a_ij x_j | ≤ Σ_{i,j} |a_ij| |x_j| ≤ ( Σ_{i,j} |a_ij| ) ( Σ_k |x_k| ) = ‖A‖_S ‖x‖_1,

which shows that the matrix norm ‖·‖_S is subordinate to the vector norm ‖·‖_1.
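As a sanity check (ours, not from the text), the scaling by √(mn) removes the violation exhibited in Exercise 7.8: for A = B = ones(2, 2) both sides of the submultiplicativity inequality now equal 4.

```python
def max_norm(A):
    # ||A||_M: largest entry in absolute value.
    return max(abs(a) for row in A for a in row)

def mod_norm(A):
    # Modified norm ||A|| := sqrt(m*n) * ||A||_M for an m-by-n matrix.
    m, n = len(A), len(A[0])
    return (m * n) ** 0.5 * max_norm(A)

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# The pair that broke the plain max norm no longer violates consistency:
ones = [[1.0, 1.0], [1.0, 1.0]]
lhs = mod_norm(matmul(ones, ones))     # sqrt(4) * 2 = 4
rhs = mod_norm(ones) * mod_norm(ones)  # (sqrt(4) * 1)^2 = 4
assert lhs <= rhs
```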
Exercise 7.11: The max norm is subordinate to?

Let A = (a_ij) ∈ C^{m,n} be a matrix and x = (x_j) ∈ C^n a column vector.

(a) One has

  ‖Ax‖_∞ = max_{i=1,…,m} | Σ_{j=1}^{n} a_ij x_j | ≤ max_{i=1,…,m} Σ_{j=1}^{n} |a_ij| |x_j| ≤ ( max_{i,j} |a_ij| ) Σ_{j=1}^{n} |x_j| = ‖A‖_M ‖x‖_1.

(b) Assume that the maximum in the definition of ‖A‖_M is attained in column l, implying that ‖A‖_M = |a_kl| for some k. Let e_l be the l-th standard basis vector. Then ‖e_l‖_1 = 1 and

  ‖A e_l‖_∞ = max_{i=1,…,m} |a_il| = |a_kl| = ‖A‖_M = ‖A‖_M ‖e_l‖_1,

which is what needed to be shown.

(c) By (a), ‖A‖_M ≥ ‖Ax‖_∞ / ‖x‖_1 for all nonzero vectors x, implying that ‖A‖_M ≥ max_{x≠0} ‖Ax‖_∞ / ‖x‖_1. By (b), equality is attained for any standard basis vector e_l for which there exists a k such that ‖A‖_M = |a_kl|. We conclude that

  ‖A‖_M = max_{x≠0} ‖Ax‖_∞ / ‖x‖_1,

which means that ‖·‖_M is the (1, ∞)-operator norm (see Definition 7.13).

Exercise 7.19: Spectral norm

Let A = UΣV* be a singular value decomposition of A, and write σ1 := ‖A‖_2 for the biggest singular value of A. Since the unitary matrices U and V leave the Euclidean norm invariant, the substitution x̃ = V*x, ỹ = U*y runs over all unit vectors as x, y do, and

  max_{‖x‖_2 = 1 = ‖y‖_2} |y* A x| = max_{‖x‖_2 = 1 = ‖y‖_2} |y* U Σ V* x| = max_{‖x̃‖_2 = 1 = ‖ỹ‖_2} | Σ_i σ_i x̃_i ỹ_i | ≤ σ1 max_{‖x̃‖_2 = 1 = ‖ỹ‖_2} Σ_i |x̃_i| |ỹ_i| ≤ σ1 max_{‖x̃‖_2 = 1 = ‖ỹ‖_2} ‖x̃‖_2 ‖ỹ‖_2 = σ1.

Moreover, this maximum is achieved for x = v1 and y = u1, and we conclude

  ‖A‖_2 = σ1 = max_{‖x‖_2 = 1 = ‖y‖_2} |y* A x|.

Exercise 7.20: Spectral norm of the inverse

Since A is nonsingular we find

  ‖A^{-1}‖_2 = max_{x≠0} ‖A^{-1} x‖_2 / ‖x‖_2 = max_{x≠0} ‖x‖_2 / ‖Ax‖_2 = 1 / min_{‖x‖_2 = 1} ‖Ax‖_2,

where in the middle step we substituted x → Ax, which runs over all nonzero vectors as x does. This proves the second claim. For the first claim, let σ1 ≥ … ≥ σn be the singular values of A. Again since A is nonsingular, σn must be nonzero, and Equation (7.17)
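Parts (a)-(c) say that the maximum of ‖Ax‖_∞ / ‖x‖_1 is attained at a standard basis vector, namely one picking out the column that contains the entry of largest modulus. A small illustrative check (the helper names are ours, not from the text):

```python
def max_norm(A):
    # ||A||_M: largest entry in absolute value.
    return max(abs(a) for row in A for a in row)

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def inf_norm(v):
    return max(abs(t) for t in v)

A = [[1.0, -3.0], [2.0, 0.5]]
n = len(A[0])

# For each standard basis vector e_l (with ||e_l||_1 = 1), ||A e_l||_inf
# is the largest modulus in column l; the best column attains ||A||_M.
ratios = []
for l in range(n):
    e = [0.0] * n
    e[l] = 1.0
    ratios.append(inf_norm(matvec(A, e)))
assert max(ratios) == max_norm(A)
```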
states that ‖A^{-1}‖_2 = 1/σn. From this and what we just proved we have that ‖x‖_2 / ‖Ax‖_2 ≤ 1/σn for any x with ‖x‖_2 = 1, so that also ‖Ax‖_2 ≥ σn for such x.

Exercise 7.21: p-norm example

We have

  A = [2 -1; -1 2],   A^{-1} = (1/3) [2 1; 1 2].

Using Theorem 7.15, one finds ‖A‖_1 = ‖A‖_∞ = 3 and ‖A^{-1}‖_1 = ‖A^{-1}‖_∞ = 1. The singular values of A are the square roots of the zeros of

  0 = det(A^T A − λI) = (5 − λ)^2 − 16 = λ^2 − 10λ + 9 = (λ − 9)(λ − 1).

Using Theorem 7.17, we find ‖A‖_2 = √9 = 3 and ‖A^{-1}‖_2 = 1/√1 = 1. Alternatively, since A is symmetric positive definite, we know from (7.18) that ‖A‖_2 = λmax and ‖A^{-1}‖_2 = 1/λmin, where λmax = 3 is the biggest eigenvalue of A and λmin = 1 is the smallest.

Exercise 7.22: Unitary invariance of the spectral norm

Suppose V is a rectangular matrix satisfying V*V = I. Then

  ‖VA‖_2^2 = max_{‖x‖_2 = 1} ‖VAx‖_2^2 = max_{‖x‖_2 = 1} x* A* V* V A x = max_{‖x‖_2 = 1} x* A* A x = max_{‖x‖_2 = 1} ‖Ax‖_2^2 = ‖A‖_2^2.

The result follows by taking square roots.

Exercise 7.25: ‖AU‖_2 rectangular A

Let U = [u1, u2]^T be any matrix satisfying 1 = U^T U. Then AU is a 2×1-matrix, and clearly the operator 2-norm of a 2×1-matrix equals its Euclidean norm (when viewed as a vector):

  [a1; a2] x = [a1 x; a2 x],  ‖[a1 x; a2 x]‖_2 = |x| ‖a‖_2.

In order for ‖AU‖_2 < ‖A‖_2 to hold, we need to find a vector v with ‖v‖_2 = 1 so that ‖AU‖_2 < ‖Av‖_2. In other words, we need to pick a matrix A that scales more in the direction v than in the direction U. For instance, if

  A = [2 0; 0 1],  U = [0; 1],  v = [1; 0],

then ‖A‖_2 = max_{‖x‖_2 = 1} ‖Ax‖_2 ≥ ‖Av‖_2 = 2 > 1 = ‖AU‖_2.
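The numbers in the p-norm example are easy to reproduce. The sketch below (ours, not from the text) recomputes the 1- and ∞-norms of A = [2 -1; -1 2] and the eigenvalues 9 and 1 of A^T A via the quadratic formula:

```python
import math

A = [[2.0, -1.0], [-1.0, 2.0]]

# 1-norm: max absolute column sum; inf-norm: max absolute row sum.
one_norm = max(sum(abs(A[i][j]) for i in range(2)) for j in range(2))
inf_norm = max(sum(abs(A[i][j]) for j in range(2)) for i in range(2))
assert one_norm == 3.0 and inf_norm == 3.0

# A^T A = [[5, -4], [-4, 5]]; its characteristic polynomial is
# lambda^2 - 10*lambda + 9 = (lambda - 9)(lambda - 1).
AtA = [[5.0, -4.0], [-4.0, 5.0]]
tr = AtA[0][0] + AtA[1][1]
det = AtA[0][0] * AtA[1][1] - AtA[0][1] * AtA[1][0]
disc = math.sqrt(tr * tr - 4 * det)
lam_max, lam_min = (tr + disc) / 2, (tr - disc) / 2
assert (lam_max, lam_min) == (9.0, 1.0)

# Hence ||A||_2 = 3 and ||A^{-1}||_2 = 1.
assert math.sqrt(lam_max) == 3.0
assert 1 / math.sqrt(lam_min) == 1.0
```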
Exercise 7.26: p-norm of diagonal matrix

The eigenpairs of the matrix A = diag(λ1, …, λn) are (λ1, e1), …, (λn, en). For ρ(A) = max{|λ1|, …, |λn|}, one has

  ‖A‖_p = max_{(x1,…,xn)≠0} ( |λ1 x1|^p + … + |λn xn|^p )^{1/p} / ( |x1|^p + … + |xn|^p )^{1/p} ≤ max_{(x1,…,xn)≠0} ( ρ(A)^p |x1|^p + … + ρ(A)^p |xn|^p )^{1/p} / ( |x1|^p + … + |xn|^p )^{1/p} = ρ(A).

On the other hand, for j such that ρ(A) = |λj|, one finds

  ‖A‖_p = max_{x≠0} ‖Ax‖_p / ‖x‖_p ≥ ‖A e_j‖_p / ‖e_j‖_p = ρ(A).

Together, the above two statements imply that ‖A‖_p = ρ(A) for any diagonal matrix A and any p satisfying 1 ≤ p ≤ ∞.

Exercise 7.27: Spectral norm of a column vector

We write A ∈ C^{m,1} for the matrix corresponding to the column vector a ∈ C^m. Write ‖A‖_p for the operator p-norm of A and ‖a‖_p for the vector p-norm of a. In particular ‖A‖_2 is the spectral norm of A and ‖a‖_2 is the Euclidean norm of a. Then, since x is a scalar,

  ‖A‖_p = max_{x≠0} ‖Ax‖_p / |x| = max_{x≠0} |x| ‖a‖_p / |x| = ‖a‖_p,

proving (b). Note that (a) follows as the special case p = 2.

Exercise 7.28: Norm of absolute value matrix

(a) One finds |A| from A by replacing each entry by its absolute value; in particular the entries 1 + i and 1 − i of the given matrix are replaced by |1 + i| = |1 − i| = √2.

(b) Let b_ij denote the entries of |A|. Observe that b_ij = |a_ij| = |b_ij|. Together with Theorem 7.15, these relations yield

  ‖A‖_F = ( Σ_{i,j} |a_ij|^2 )^{1/2} = ( Σ_{i,j} |b_ij|^2 )^{1/2} = ‖ |A| ‖_F,
  ‖A‖_1 = max_{1≤j≤n} Σ_{i=1}^{m} |a_ij| = max_{1≤j≤n} Σ_{i=1}^{m} |b_ij| = ‖ |A| ‖_1,
  ‖A‖_∞ = max_{1≤i≤m} Σ_{j=1}^{n} |a_ij| = max_{1≤i≤m} Σ_{j=1}^{n} |b_ij| = ‖ |A| ‖_∞,

which is what needed to be shown.

(c) To show this relation between the 2-norms of A and |A|, we first examine the connection between the l2-norms of Ax and |A| |x|, where x = (x1, …, xn) and |x| = (|x1|, …, |xn|). Since |Σ_j a_ij x_j| ≤ Σ_j |a_ij| |x_j| for each i, we find

  ‖Ax‖_2 = ( Σ_i | Σ_j a_ij x_j |^2 )^{1/2} ≤ ( Σ_i ( Σ_j |a_ij| |x_j| )^2 )^{1/2} = ‖ |A| |x| ‖_2.
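The identity ‖A‖_p = ρ(A) for diagonal A can be spot-checked numerically; the sketch below (ours, not part of the solution) verifies the inequality on a few sample vectors and the equality at the basis vector of the dominant eigenvalue. It is a sanity check, not a proof:

```python
def p_norm(v, p):
    return sum(abs(t) ** p for t in v) ** (1 / p)

lam = [0.5, -3.0, 2.0]          # A = diag(0.5, -3, 2)
rho = max(abs(t) for t in lam)  # spectral radius, here 3

for p in (1, 2, 3, 7):
    # Ratio ||Ax||_p / ||x||_p never exceeds rho on sample vectors ...
    for x in ([1.0, 1.0, 1.0], [2.0, -1.0, 0.5], [0.0, 1.0, -1.0]):
        Ax = [l * t for l, t in zip(lam, x)]
        assert p_norm(Ax, p) <= rho * p_norm(x, p) + 1e-12
    # ... and e_2, the eigenvector of the dominant eigenvalue, attains it.
    e = [0.0, 1.0, 0.0]
    Ae = [l * t for l, t in zip(lam, e)]
    assert abs(p_norm(Ae, p) - rho * p_norm(e, p)) < 1e-12
```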
Now let x* with ‖x*‖_2 = 1 be a vector for which ‖A‖_2 = ‖Ax*‖_2. That is, let x* be a unit vector for which the maximum in the definition of the 2-norm is attained. Observe that |x*| is then a unit vector as well, ‖ |x*| ‖_2 = 1. Then, by the above estimate of l2-norms and the definition of the 2-norm,

  ‖A‖_2 = ‖Ax*‖_2 ≤ ‖ |A| |x*| ‖_2 ≤ ‖ |A| ‖_2.

(d) By Theorem 7.15, we can solve this exercise by finding a matrix A for which A and |A| have different largest singular values. As A is real and symmetric, there exist a, b, c ∈ R such that

  A = [a b; b c],  |A| = [|a| |b|; |b| |c|],
  A^T A = [a^2 + b^2, ab + bc; ab + bc, b^2 + c^2],  |A|^T |A| = [a^2 + b^2, |ab| + |bc|; |ab| + |bc|, b^2 + c^2].

To simplify these equations we first try the case a + c = 0. Eliminating c we get

  A^T A = [a^2 + b^2, 0; 0, a^2 + b^2],  |A|^T |A| = [a^2 + b^2, 2|ab|; 2|ab|, a^2 + b^2].

To get different norms we have to choose a, b in such a way that the maximal eigenvalues of A^T A and |A|^T |A| are different. Clearly A^T A has a unique eigenvalue λ := a^2 + b^2, and putting the characteristic polynomial π(μ) = (a^2 + b^2 − μ)^2 − 4a^2 b^2 of |A|^T |A| to zero yields eigenvalues μ± := a^2 + b^2 ± 2|ab|. Hence |A|^T |A| has maximal eigenvalue μ+ = a^2 + b^2 + 2|ab| = λ + 2|ab|. The spectral norms of A and |A| therefore differ whenever both a and b are nonzero. For example, when a = b = 1 and c = −1 we find

  A = [1 1; 1 -1],  ‖A‖_2 = √2,  ‖ |A| ‖_2 = 2.

Exercise 7.35: Sharpness of perturbation bounds

Suppose Ax = b and Ay = b + e. Let K = K(A) = ‖A‖ ‖A^{-1}‖ be the condition number of A. Let y_A and y_{A^{-1}} be unit vectors for which the maxima in the definition of the operator norms of A and A^{-1} are attained. That is, ‖y_A‖ = 1 = ‖y_{A^{-1}}‖, ‖A‖ = ‖A y_A‖, and ‖A^{-1}‖ = ‖A^{-1} y_{A^{-1}}‖. If b = A y_A and e = y_{A^{-1}}, then

  ‖y − x‖ / ‖x‖ = ‖A^{-1} e‖ / ‖A^{-1} b‖ = ‖A^{-1} y_{A^{-1}}‖ / ‖y_A‖ = ‖A^{-1}‖ = ‖A‖ ‖A^{-1}‖ ‖y_{A^{-1}}‖ / ‖A y_A‖ = K ‖e‖ / ‖b‖,

showing that the upper bound is sharp. If b = y_{A^{-1}} and e = A y_A, then

  ‖y − x‖ / ‖x‖ = ‖A^{-1} e‖ / ‖A^{-1} b‖ = ‖y_A‖ / ‖A^{-1} y_{A^{-1}}‖ = 1 / ‖A^{-1}‖ = ‖A y_A‖ / ( ‖A‖ ‖A^{-1}‖ ‖y_{A^{-1}}‖ ) = (1/K) ‖e‖ / ‖b‖,

showing that the lower bound is sharp.

Exercise 7.36: Condition number of 2nd derivative matrix

Recall that T = tridiag(−1, 2, −1) and, by Exercise .6, T^{-1} is given by

  (T^{-1})_ij = (T^{-1})_ji = (1 − ih) j > 0,  j ≤ i ≤ m,  h = 1/(m + 1).
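The final example of Exercise 7.28 can be verified from the closed-form eigenvalues of a symmetric 2-by-2 matrix (the helper below is ours, not from the text):

```python
import math

def two_norm_sym2(M):
    # Spectral norm of a symmetric 2x2 matrix [a b; b c]: the largest
    # absolute eigenvalue, ((a+c) +/- sqrt((a-c)^2 + 4 b^2)) / 2.
    a, b, c = M[0][0], M[0][1], M[1][1]
    d = math.sqrt((a - c) ** 2 + 4 * b * b)
    return max(abs((a + c + d) / 2), abs((a + c - d) / 2))

A = [[1.0, 1.0], [1.0, -1.0]]
absA = [[abs(x) for x in row] for row in A]

# ||A||_2 = sqrt(2) while || |A| ||_2 = 2, so the spectral norms differ.
assert abs(two_norm_sym2(A) - math.sqrt(2)) < 1e-12
assert abs(two_norm_sym2(absA) - 2.0) < 1e-12
```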
From Theorems 7.15 and 7.17, we have the following explicit expressions for the 1-, 2- and ∞-norms,

  ‖A‖_1 = max_{1≤j≤n} Σ_{i=1}^{m} |a_ij|,  ‖A‖_2 = σ1,  ‖A^{-1}‖_2 = 1/σm,  ‖A‖_∞ = max_{1≤i≤m} Σ_{j=1}^{n} |a_ij|,

for any matrix A ∈ C^{m,n}, where σ1 is the largest singular value of A, σm the smallest singular value of A, and we assumed A to be nonsingular in the third equation.

a) For the matrix T this gives ‖T‖_1 = ‖T‖_∞ = m + 1 for m = 1, 2, and ‖T‖_1 = ‖T‖_∞ = 4 for m ≥ 3. For the inverse we get ‖T^{-1}‖_1 = ‖T^{-1}‖_∞ = 1/2 = 1/(8h^2) for m = 1, and ‖T^{-1}‖_1 = ‖T^{-1}‖_∞ = 1 for m = 2. For m > 2, one obtains

  ‖T^{-1}‖_∞ = max_{1≤j≤m} ( Σ_{i=1}^{j−1} (1 − jh) i + Σ_{i=j}^{m} (1 − ih) j ).

It is easy to commit an error when simplifying this sum. It can be computed symbolically using the Symbolic Math Toolbox using the following code:

syms i j m
simplify(symsum((1-j/(m+1))*i,i,1,j-1) + symsum((1-i/(m+1))*j,i,j,m))

Here we have substituted h = 1/(m+1). This produces the result j(m + 1 − j)/2. To arrive at this ourselves, we can first rewrite the expression as

  Σ_{i=1}^{j−1} (1 − jh) i + Σ_{i=1}^{m} (1 − ih) j − Σ_{i=1}^{j−1} (1 − ih) j.

The first sum here equals (1 − jh)(j − 1)j/2. The second sum equals

  Σ_{i=1}^{m} (1 − ih) j = mj − jh Σ_{i=1}^{m} i = mj − jh m(m+1)/2 = mj − mj/2 = mj/2.

The third sum equals

  Σ_{i=1}^{j−1} (1 − ih) j = j(j − 1) − hj Σ_{i=1}^{j−1} i = j(j − 1) − hj (j − 1)j/2 = j(j − 1)(2 − hj)/2.

Combining the three sums we get

  (j/2) ( (j − 1)(1 − jh) + m − (j − 1)(2 − hj) ) = (j/2) (m + 1 − j),

which is the result we also arrived at above. This can also be written as (j/2)(1/h − j), which is a quadratic function in j that attains its maximum at j = 1/(2h) = (m+1)/2. For odd m > 2, this function takes its maximum at an integral j, yielding ‖T^{-1}‖_∞ = 1/(8h^2). For even m > 2, on the other hand, the maximum over all integral j is attained at j = m/2 = (1 − h)/(2h) or j = (m+2)/2 = (1 + h)/(2h), which both give ‖T^{-1}‖_∞ = (1 − h^2)/(8h^2).
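As an alternative to the symbolic computation, the explicit inverse and the row-sum formula j(m+1−j)/2 can be verified numerically. The plain-Python sketch below (helper names are ours) checks that the formula really produces T^{-1} and that the maximal row sum equals 1/(8h^2) for an odd m:

```python
def build_T(m):
    # T = tridiag(-1, 2, -1) of order m.
    return [[2.0 if i == j else -1.0 if abs(i - j) == 1 else 0.0
             for j in range(m)] for i in range(m)]

def build_Tinv(m):
    # Explicit inverse: (T^{-1})_{ij} = min(i,j) * (m + 1 - max(i,j)) * h
    # with h = 1/(m+1) and 1-based indices; this is (1 - i*h)*j for j <= i.
    h = 1.0 / (m + 1)
    return [[min(i, j) * (m + 1 - max(i, j)) * h
             for j in range(1, m + 1)] for i in range(1, m + 1)]

m = 5
T, Tinv = build_T(m), build_Tinv(m)

# T * T^{-1} should be the identity.
for i in range(m):
    for j in range(m):
        s = sum(T[i][k] * Tinv[k][j] for k in range(m))
        assert abs(s - (1.0 if i == j else 0.0)) < 1e-12

# Row sums of T^{-1} match j*(m+1-j)/2; for odd m the maximum is (m+1)^2/8,
# i.e. 1/(8 h^2).
for j in range(1, m + 1):
    assert abs(sum(Tinv[j - 1]) - j * (m + 1 - j) / 2) < 1e-12
assert abs(max(map(sum, Tinv)) - (m + 1) ** 2 / 8) < 1e-12
```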
Since T is symmetric, the row sums equal the column sums, so that ‖T^{-1}‖_1 = ‖T^{-1}‖_∞. We conclude that the 1- and ∞-condition numbers of T are

  cond_1(T) = cond_∞(T) = { 1, m = 1;  3, m = 2;  1/(2h^2), m odd, m > 2;  (1 − h^2)/(2h^2), m even, m > 2. }

b) Since the matrix T is symmetric, T^T T = T^2 and the eigenvalues of T^T T are the squares of the eigenvalues λ1, …, λm of T. As all eigenvalues of T are positive, each singular value of T is equal to an eigenvalue. Using that λi = 2 − 2 cos(iπh), we find

  σ1 = λm = 2 − 2 cos(mπh) = 2 + 2 cos(πh),  σm = λ1 = 2 − 2 cos(πh).

It follows that

  cond_2(T) = σ1/σm = ( 2 + 2 cos(πh) ) / ( 2 − 2 cos(πh) ) = cot^2(πh/2).

c) From tan x > x we obtain cot x = 1/tan x < 1/x. Using this and cot x > 1/x − x/3 we find

  4/(π^2 h^2) − 2/3 < cond_2(T) < 4/(π^2 h^2).

d) For p = 2, substitute h = 1/(m+1) in c) and use that 4/π^2 < 1/2. For p = 1, ∞, we need to show, due to a), that

  4/(π^2 h^2) − 2/3 < 1/(2h^2) ≤ 1/(2h^2)  when m is odd,

and that

  4/(π^2 h^2) − 2/3 < (1 − h^2)/(2h^2) ≤ 1/(2h^2)  when m is even.

The right hand sides in these inequalities are obvious. The left inequality for m odd is also obvious, since 4/π^2 < 1/2. The left inequality for m even follows as well, since (1 − h^2)/(2h^2) = 1/(2h^2) − 1/2 and 4/π^2 < 1/2, 1/2 < 2/3.

Exercise 7.47: When is a complex norm an inner product norm?

As in the Exercise, we let

  ⟨x, y⟩ = s(x, y) + i s(x, iy),  s(x, y) = ( ‖x + y‖^2 − ‖x − y‖^2 ) / 4.

We need to verify the three properties that define an inner product. Let x, y, z be arbitrary vectors in C^m and a ∈ C an arbitrary scalar.

(1) Positive-definiteness. One has s(x, x) = ‖x‖^2 ≥ 0 and

  s(x, ix) = ( ‖x + ix‖^2 − ‖x − ix‖^2 ) / 4 = ( |1 + i|^2 − |1 − i|^2 ) ‖x‖^2 / 4 = 0,

so that ⟨x, x⟩ = ‖x‖^2 ≥ 0, with equality holding precisely when x = 0.
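The identity in b) and the bounds in c) can be spot-checked numerically for a few sizes m (this sketch is ours, not part of the original solution):

```python
import math

def cond2(m):
    # cond_2(T) = lambda_max / lambda_min with lambda_i = 2 - 2 cos(i*pi*h).
    h = 1.0 / (m + 1)
    lams = [2 - 2 * math.cos(i * math.pi * h) for i in range(1, m + 1)]
    return max(lams) / min(lams)

for m in (3, 10, 50):
    h = 1.0 / (m + 1)
    x = math.pi * h / 2
    cot2 = (math.cos(x) / math.sin(x)) ** 2
    # cond_2(T) equals cot^2(pi*h/2) ...
    assert abs(cond2(m) - cot2) < 1e-8 * cot2
    # ... and sits strictly between 4/(pi^2 h^2) - 2/3 and 4/(pi^2 h^2).
    upper = 4 / (math.pi * h) ** 2
    assert upper - 2 / 3 < cond2(m) < upper
```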
(2) Conjugate symmetry. Since s(x, y) is real, and since s(x, y) = s(y, x), s(ax, ay) = |a|^2 s(x, y), and s(−x, y) = −s(x, y), one has s(ix, y) = s(ix, i(−iy)) = s(x, −iy) = −s(x, iy), and therefore

  conj(⟨y, x⟩) = s(y, x) − i s(y, ix) = s(x, y) − i s(ix, y) = s(x, y) + i s(x, iy) = ⟨x, y⟩.

(3) Linearity in the first argument. Applying the parallelogram identity with u = z ± (x + y)/2 and v = (x − y)/2 gives

  ‖x + z‖^2 + ‖y + z‖^2 = 2 ‖z + (x + y)/2‖^2 + 2 ‖(x − y)/2‖^2,
  ‖z − x‖^2 + ‖z − y‖^2 = 2 ‖z − (x + y)/2‖^2 + 2 ‖(x − y)/2‖^2.

Subtracting the second identity from the first and dividing by 4 yields

  s(x, z) + s(y, z) = 2 s( (x + y)/2, z ).

Taking y = 0 here shows s(x, z) = 2 s(x/2, z), and replacing x by x + y then gives s(x + y, z) = 2 s((x + y)/2, z) = s(x, z) + s(y, z). It follows that

  ⟨x + y, z⟩ = s(x + y, z) + i s(x + y, iz) = s(x, z) + s(y, z) + i s(x, iz) + i s(y, iz) = ⟨x, z⟩ + ⟨y, z⟩.

That ⟨ax, y⟩ = a ⟨x, y⟩ follows, mutatis mutandis, from the proof of Theorem 7.5.

Exercise 7.48: p-norm for p = 1 and p = ∞

We need to verify the three properties that define a norm. Consider arbitrary vectors x = [x1, …, xn]^T and y = [y1, …, yn]^T in R^n and a scalar a ∈ R.

First we verify that ‖·‖_1 is a norm.

(1) Positivity. Clearly ‖x‖_1 = |x1| + … + |xn| ≥ 0, with equality holding precisely when x1 = … = xn = 0, which happens if and only if x is the zero vector.

(2) Homogeneity. One has ‖ax‖_1 = |ax1| + … + |axn| = |a| ( |x1| + … + |xn| ) = |a| ‖x‖_1.

(3) Subadditivity. Using the triangle inequality for the absolute value,

  ‖x + y‖_1 = |x1 + y1| + … + |xn + yn| ≤ |x1| + |y1| + … + |xn| + |yn| = ‖x‖_1 + ‖y‖_1.

Next we verify that ‖·‖_∞ is a norm.

(1) Positivity. Clearly ‖x‖_∞ = max{|x1|, …, |xn|} ≥ 0, with equality holding precisely when x1 = … = xn = 0, which happens if and only if x is the zero vector.

(2) Homogeneity. One has ‖ax‖_∞ = max{|a||x1|, …, |a||xn|} = |a| max{|x1|, …, |xn|} = |a| ‖x‖_∞.
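For the Euclidean norm on C^m, which certainly is an inner product norm, the recipe ⟨x, y⟩ = s(x, y) + i·s(x, iy) should reproduce the standard inner product Σ_i x_i · conj(y_i), which is conjugate-linear in the second argument, matching the linearity in the first argument proved above. A quick check (ours, not part of the solution):

```python
def norm(x):
    # Euclidean norm on C^m (an inner-product norm).
    return sum(abs(t) ** 2 for t in x) ** 0.5

def s(x, y):
    # s(x, y) = (||x + y||^2 - ||x - y||^2) / 4
    plus = [a + b for a, b in zip(x, y)]
    minus = [a - b for a, b in zip(x, y)]
    return (norm(plus) ** 2 - norm(minus) ** 2) / 4

def inner(x, y):
    # <x, y> = s(x, y) + i * s(x, i*y)
    return s(x, y) + 1j * s(x, [1j * t for t in y])

x = [1 + 2j, -1j, 0.5]
y = [2 - 1j, 3 + 1j, -1.0]
direct = sum(a * b.conjugate() for a, b in zip(x, y))
assert abs(inner(x, y) - direct) < 1e-12
```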
(3) Subadditivity. Using the triangle inequality for the absolute value,

  ‖x + y‖_∞ = max{|x1 + y1|, …, |xn + yn|} ≤ max{|x1| + |y1|, …, |xn| + |yn|} ≤ max{|x1|, …, |xn|} + max{|y1|, …, |yn|} = ‖x‖_∞ + ‖y‖_∞.

Exercise 7.49: The p-norm unit sphere

In the plane, the unit spheres for the 1-norm, 2-norm, and ∞-norm are, respectively, a square with vertices (±1, 0), (0, ±1), the unit circle, and the axis-aligned square with vertices (±1, ±1).

Exercise 7.50: Sharpness of p-norm inequality

Let 1 ≤ p ≤ ∞. The vector x_l = [1, 0, …, 0]^T ∈ R^n satisfies

  ‖x_l‖_p = ( 1^p + 0^p + … + 0^p )^{1/p} = 1 = max{1, 0, …, 0} = ‖x_l‖_∞,

and the vector x_u = [1, 1, …, 1]^T ∈ R^n satisfies

  ‖x_u‖_p = ( 1^p + … + 1^p )^{1/p} = n^{1/p} = n^{1/p} max{1, …, 1} = n^{1/p} ‖x_u‖_∞.

Exercise 7.51: p-norm inequalities for arbitrary p

Let p and q be integers satisfying 1 ≤ q ≤ p, and let x = [x1, …, xn]^T ∈ C^n. Since p/q ≥ 1, the function f(z) = z^{p/q} is convex on [0, ∞). For any z1, …, zn ∈ [0, ∞) and λ1, …, λn ≥ 0 satisfying λ1 + … + λn = 1, Jensen's inequality gives

  ( Σ_i λ_i z_i )^{p/q} = f( Σ_i λ_i z_i ) ≤ Σ_i λ_i f(z_i) = Σ_i λ_i z_i^{p/q}.

In particular, for z_i = |x_i|^q and λ1 = … = λn = 1/n,

  ( n^{-1} Σ_i |x_i|^q )^{p/q} ≤ n^{-1} Σ_i |x_i|^p.

Since the function x ↦ x^{1/p} is monotone, we obtain

  n^{-1/q} ‖x‖_q = ( n^{-1} Σ_i |x_i|^q )^{1/q} ≤ ( n^{-1} Σ_i |x_i|^p )^{1/p} = n^{-1/p} ‖x‖_p,

from which the right inequality in the exercise follows. The left inequality clearly holds for x = 0, so assume x ≠ 0. Without loss of generality we can then assume ‖x‖_∞ = 1, since ‖ax‖_p ≤ ‖ax‖_q if and only if ‖x‖_p ≤ ‖x‖_q for any nonzero scalar a. Then, for any i = 1, …, n, one has |x_i| ≤ 1, implying
that |x_i|^p ≤ |x_i|^q. Moreover, since |x_i| = 1 for some i, one has |x1|^q + … + |xn|^q ≥ 1, so that

  ‖x‖_p = ( Σ_i |x_i|^p )^{1/p} ≤ ( Σ_i |x_i|^q )^{1/p} ≤ ( Σ_i |x_i|^q )^{1/q} = ‖x‖_q.

Finally we consider the case p = ∞. The statement is obvious for q = p, so assume that q is an integer. Then

  ‖x‖_q = ( Σ_i |x_i|^q )^{1/q} ≤ ( n ‖x‖_∞^q )^{1/q} = n^{1/q} ‖x‖_∞,

proving the right inequality. Using that the map x ↦ x^{1/q} is monotone, the left inequality follows from

  ‖x‖_∞^q = ( max_i |x_i| )^q ≤ Σ_i |x_i|^q = ‖x‖_q^q.
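Both chains of inequalities from Exercise 7.51 can be spot-checked on a sample vector (the helper `p_norm` is ours, not from the text):

```python
def p_norm(x, p):
    if p == float("inf"):
        return max(abs(t) for t in x)
    return sum(abs(t) ** p for t in x) ** (1 / p)

x = [3.0, -1.0, 2.0, 0.5]
n = len(x)

# ||x||_p <= ||x||_q <= n^(1/q - 1/p) * ||x||_p for q <= p ...
for q, p in [(1, 2), (2, 3), (1, 7), (2, 7)]:
    assert p_norm(x, p) <= p_norm(x, q) + 1e-12
    assert p_norm(x, q) <= n ** (1 / q - 1 / p) * p_norm(x, p) + 1e-12

# ... and the p = inf case: ||x||_inf <= ||x||_q <= n^(1/q) * ||x||_inf.
inf = float("inf")
for q in (1, 2, 5):
    assert p_norm(x, inf) <= p_norm(x, q) + 1e-12
    assert p_norm(x, q) <= n ** (1 / q) * p_norm(x, inf) + 1e-12
```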
More informationEE/ACM Applications of Convex Optimization in Signal Processing and Communications Lecture 2
EE/ACM 150 - Applications of Convex Optimization in Signal Processing and Communications Lecture 2 Andre Tkacenko Signal Processing Research Group Jet Propulsion Laboratory April 5, 2012 Andre Tkacenko
More informationEIGENVALUES IN LINEAR ALGEBRA *
EIGENVALUES IN LINEAR ALGEBRA * P. F. Leung National University of Singapore. General discussion In this talk, we shall present an elementary study of eigenvalues in linear algebra. Very often in various
More informationLECTURE NOTES ELEMENTARY NUMERICAL METHODS. Eusebius Doedel
LECTURE NOTES on ELEMENTARY NUMERICAL METHODS Eusebius Doedel TABLE OF CONTENTS Vector and Matrix Norms 1 Banach Lemma 20 The Numerical Solution of Linear Systems 25 Gauss Elimination 25 Operation Count
More informationlinearly indepedent eigenvectors as the multiplicity of the root, but in general there may be no more than one. For further discussion, assume matrice
3. Eigenvalues and Eigenvectors, Spectral Representation 3.. Eigenvalues and Eigenvectors A vector ' is eigenvector of a matrix K, if K' is parallel to ' and ' 6, i.e., K' k' k is the eigenvalue. If is
More informationFundamentals of Engineering Analysis (650163)
Philadelphia University Faculty of Engineering Communications and Electronics Engineering Fundamentals of Engineering Analysis (6563) Part Dr. Omar R Daoud Matrices: Introduction DEFINITION A matrix is
More informationPHYS 705: Classical Mechanics. Rigid Body Motion Introduction + Math Review
1 PHYS 705: Classical Mechanics Rigid Body Motion Introduction + Math Review 2 How to describe a rigid body? Rigid Body - a system of point particles fixed in space i r ij j subject to a holonomic constraint:
More informationReview of Basic Concepts in Linear Algebra
Review of Basic Concepts in Linear Algebra Grady B Wright Department of Mathematics Boise State University September 7, 2017 Math 565 Linear Algebra Review September 7, 2017 1 / 40 Numerical Linear Algebra
More informationAnalytic Number Theory Solutions
Analytic Number Theory Solutions Sean Li Cornell University sxl6@cornell.edu Jan. 03 Introduction This document is a work-in-progress solution manual for Tom Apostol s Introduction to Analytic Number Theory.
More informationIntroduction to Numerical Linear Algebra II
Introduction to Numerical Linear Algebra II Petros Drineas These slides were prepared by Ilse Ipsen for the 2015 Gene Golub SIAM Summer School on RandNLA 1 / 49 Overview We will cover this material in
More information[ Here 21 is the dot product of (3, 1, 2, 5) with (2, 3, 1, 2), and 31 is the dot product of
. Matrices A matrix is any rectangular array of numbers. For example 3 5 6 4 8 3 3 is 3 4 matrix, i.e. a rectangular array of numbers with three rows four columns. We usually use capital letters for matrices,
More informationChapter 4 Euclid Space
Chapter 4 Euclid Space Inner Product Spaces Definition.. Let V be a real vector space over IR. A real inner product on V is a real valued function on V V, denoted by (, ), which satisfies () (x, y) = (y,
More informationFinal Project # 5 The Cartan matrix of a Root System
8.099 Final Project # 5 The Cartan matrix of a Root System Thomas R. Covert July 30, 00 A root system in a Euclidean space V with a symmetric positive definite inner product, is a finite set of elements
More informationIr O D = D = ( ) Section 2.6 Example 1. (Bottom of page 119) dim(v ) = dim(l(v, W )) = dim(v ) dim(f ) = dim(v )
Section 3.2 Theorem 3.6. Let A be an m n matrix of rank r. Then r m, r n, and, by means of a finite number of elementary row and column operations, A can be transformed into the matrix ( ) Ir O D = 1 O
More informationMath Camp Lecture 4: Linear Algebra. Xiao Yu Wang. Aug 2010 MIT. Xiao Yu Wang (MIT) Math Camp /10 1 / 88
Math Camp 2010 Lecture 4: Linear Algebra Xiao Yu Wang MIT Aug 2010 Xiao Yu Wang (MIT) Math Camp 2010 08/10 1 / 88 Linear Algebra Game Plan Vector Spaces Linear Transformations and Matrices Determinant
More informationMath 102, Winter Final Exam Review. Chapter 1. Matrices and Gaussian Elimination
Math 0, Winter 07 Final Exam Review Chapter. Matrices and Gaussian Elimination { x + x =,. Different forms of a system of linear equations. Example: The x + 4x = 4. [ ] [ ] [ ] vector form (or the column
More informationAlgebra C Numerical Linear Algebra Sample Exam Problems
Algebra C Numerical Linear Algebra Sample Exam Problems Notation. Denote by V a finite-dimensional Hilbert space with inner product (, ) and corresponding norm. The abbreviation SPD is used for symmetric
More informationQuantum Computing Lecture 2. Review of Linear Algebra
Quantum Computing Lecture 2 Review of Linear Algebra Maris Ozols Linear algebra States of a quantum system form a vector space and their transformations are described by linear operators Vector spaces
More informationInner Product and Orthogonality
Inner Product and Orthogonality P. Sam Johnson October 3, 2014 P. Sam Johnson (NITK) Inner Product and Orthogonality October 3, 2014 1 / 37 Overview In the Euclidean space R 2 and R 3 there are two concepts,
More informationSTAT 309: MATHEMATICAL COMPUTATIONS I FALL 2017 LECTURE 5
STAT 39: MATHEMATICAL COMPUTATIONS I FALL 17 LECTURE 5 1 existence of svd Theorem 1 (Existence of SVD) Every matrix has a singular value decomposition (condensed version) Proof Let A C m n and for simplicity
More informationThe Threshold Algorithm
Chapter The Threshold Algorithm. Greedy and Quasi Greedy Bases We start with the Threshold Algorithm: Definition... Let be a separable Banach space with a normalized M- basis (e i,e i ):i2n ; we mean by
More informationFunctional Analysis Review
Functional Analysis Review Lorenzo Rosasco slides courtesy of Andre Wibisono 9.520: Statistical Learning Theory and Applications September 9, 2013 1 2 3 4 Vector Space A vector space is a set V with binary
More informationFunctions of Several Variables
Functions of Several Variables The Unconstrained Minimization Problem where In n dimensions the unconstrained problem is stated as f() x variables. minimize f()x x, is a scalar objective function of vector
More informationNORMS ON SPACE OF MATRICES
NORMS ON SPACE OF MATRICES. Operator Norms on Space of linear maps Let A be an n n real matrix and x 0 be a vector in R n. We would like to use the Picard iteration method to solve for the following system
More informationMATHEMATICS 217 NOTES
MATHEMATICS 27 NOTES PART I THE JORDAN CANONICAL FORM The characteristic polynomial of an n n matrix A is the polynomial χ A (λ) = det(λi A), a monic polynomial of degree n; a monic polynomial in the variable
More informationLinear Algebra March 16, 2019
Linear Algebra March 16, 2019 2 Contents 0.1 Notation................................ 4 1 Systems of linear equations, and matrices 5 1.1 Systems of linear equations..................... 5 1.2 Augmented
More informationLinear Algebra: Matrix Eigenvalue Problems
CHAPTER8 Linear Algebra: Matrix Eigenvalue Problems Chapter 8 p1 A matrix eigenvalue problem considers the vector equation (1) Ax = λx. 8.0 Linear Algebra: Matrix Eigenvalue Problems Here A is a given
More informationLecture 2: Linear Algebra Review
EE 227A: Convex Optimization and Applications January 19 Lecture 2: Linear Algebra Review Lecturer: Mert Pilanci Reading assignment: Appendix C of BV. Sections 2-6 of the web textbook 1 2.1 Vectors 2.1.1
More informationNotes on Linear Algebra and Matrix Theory
Massimo Franceschet featuring Enrico Bozzo Scalar product The scalar product (a.k.a. dot product or inner product) of two real vectors x = (x 1,..., x n ) and y = (y 1,..., y n ) is not a vector but a
More informationUniversity of Colorado at Denver Mathematics Department Applied Linear Algebra Preliminary Exam With Solutions 16 January 2009, 10:00 am 2:00 pm
University of Colorado at Denver Mathematics Department Applied Linear Algebra Preliminary Exam With Solutions 16 January 2009, 10:00 am 2:00 pm Name: The proctor will let you read the following conditions
More informationChapter 3 Transformations
Chapter 3 Transformations An Introduction to Optimization Spring, 2014 Wei-Ta Chu 1 Linear Transformations A function is called a linear transformation if 1. for every and 2. for every If we fix the bases
More informationTHE PERTURBATION BOUND FOR THE SPECTRAL RADIUS OF A NON-NEGATIVE TENSOR
THE PERTURBATION BOUND FOR THE SPECTRAL RADIUS OF A NON-NEGATIVE TENSOR WEN LI AND MICHAEL K. NG Abstract. In this paper, we study the perturbation bound for the spectral radius of an m th - order n-dimensional
More informationThe Singular Value Decomposition
The Singular Value Decomposition Philippe B. Laval KSU Fall 2015 Philippe B. Laval (KSU) SVD Fall 2015 1 / 13 Review of Key Concepts We review some key definitions and results about matrices that will
More informationLinear Algebra Review. Vectors
Linear Algebra Review 9/4/7 Linear Algebra Review By Tim K. Marks UCSD Borrows heavily from: Jana Kosecka http://cs.gmu.edu/~kosecka/cs682.html Virginia de Sa (UCSD) Cogsci 8F Linear Algebra review Vectors
More informationProperties of Matrices and Operations on Matrices
Properties of Matrices and Operations on Matrices A common data structure for statistical analysis is a rectangular array or matris. Rows represent individual observational units, or just observations,
More informationConjugate Gradient (CG) Method
Conjugate Gradient (CG) Method by K. Ozawa 1 Introduction In the series of this lecture, I will introduce the conjugate gradient method, which solves efficiently large scale sparse linear simultaneous
More information