Lectures on Dynamic Systems and Control
Mohammed Dahleh, Munther A. Dahleh, George Verghese
Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology
Chapter: Matrix Norms and Singular Value Decomposition

Introduction

In this lecture, we introduce the notion of a norm for matrices. The singular value decomposition or SVD of a matrix is then presented. The SVD exposes the 2-norm of a matrix, but its value to us goes much further: it enables the solution of a class of matrix perturbation problems that form the basis for the stability robustness concepts introduced later; it solves the so-called total least squares problem, which is a generalization of the least squares estimation problem considered earlier; and it allows us to clarify the notion of conditioning, in the context of matrix inversion. These applications of the SVD are presented at greater length in the next lecture.

Example. To provide some immediate motivation for the study and application of matrix norms, we begin with an example that clearly brings out the issue of matrix conditioning with respect to inversion. The question of interest is how sensitive the inverse of a matrix is to perturbations of the matrix. Consider inverting the matrix

$$A = \begin{bmatrix} 100 & 100 \\ 100.2 & 100 \end{bmatrix} .$$

A quick calculation shows that

$$A^{-1} = \begin{bmatrix} -5 & 5 \\ 5.01 & -5 \end{bmatrix} .$$

Now suppose we invert the perturbed matrix

$$A + \Delta A = \begin{bmatrix} 100 & 100 \\ 100.1 & 100 \end{bmatrix} .$$
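Both inversions are easy to reproduce numerically. Below is a minimal NumPy sketch (NumPy standing in for Matlab, which these notes use elsewhere), using a nearly singular 2 x 2 matrix of the kind considered here, with one entry perturbed by about 0.1%:

```python
import numpy as np

# A nearly singular 2x2 matrix, and a copy with one entry
# perturbed by roughly 0.1% (100.2 -> 100.1).
A = np.array([[100.0, 100.0],
              [100.2, 100.0]])
A_pert = np.array([[100.0, 100.0],
                   [100.1, 100.0]])

A_inv = np.linalg.inv(A)
A_pert_inv = np.linalg.inv(A_pert)

b = np.array([1.0, -1.0])
x = A_inv @ b            # solution before the perturbation
x_pert = A_pert_inv @ b  # solution after the perturbation

# The entries of the inverse (and of the solution) roughly double:
# a ~100% change caused by a ~0.1% change in the data.
print(A_inv)
print(A_pert_inv)
print(x, x_pert)
```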
The result now is

$$(A + \Delta A)^{-1} = A^{-1} + \Delta(A^{-1}) = \begin{bmatrix} -10 & 10 \\ 10.01 & -10 \end{bmatrix} .$$

Here $\Delta A$ denotes the perturbation in $A$, and $\Delta(A^{-1})$ denotes the resulting perturbation in $A^{-1}$. Evidently a 0.1% change in one entry of $A$ has resulted in a 100% change in the entries of $A^{-1}$. If we want to solve the problem $Ax = b$ where $b = [\,1 \ \ {-1}\,]^T$, then $x = A^{-1}b = [\,-10 \ \ 10.01\,]^T$, while after perturbation of $A$ we get $x + \Delta x = (A + \Delta A)^{-1} b = [\,-20 \ \ 20.01\,]^T$. Again, we see a 100% change in the entries of the solution with only a 0.1% change in the starting data.

The situation seen in the above example is much worse than what can ever arise in the scalar case. If $a$ is a scalar, then $d(a^{-1})/(a^{-1}) = -\,da/a$, so the fractional change in the inverse of $a$ has the same magnitude as the fractional change in $a$ itself. What is seen in the above example, therefore, is a purely matrix phenomenon. It would seem to be related to the fact that $A$ is nearly singular, in the sense that its columns are nearly dependent, its determinant is much smaller than its largest element, and so on. In what follows (see next lecture), we shall develop a sound way to measure nearness to singularity, and show how this measure relates to sensitivity under inversion.

Before understanding such sensitivity to perturbations in more detail, we need ways to measure the "magnitudes" of vectors and matrices. We have already introduced the notion of vector norms in an earlier lecture, so we now turn to the definition of matrix norms.

Matrix Norms

An $m \times n$ complex matrix may be viewed as an operator on the (finite dimensional) normed vector space $\mathbb{C}^n$:

$$A_{m \times n} : \big( \mathbb{C}^n, \|\cdot\|_2 \big) \longrightarrow \big( \mathbb{C}^m, \|\cdot\|_2 \big) ,$$

where the norm here is taken to be the standard Euclidean norm. Define the induced 2-norm of $A$ as follows:

$$\|A\|_2 = \sup_{x \neq 0} \frac{\|Ax\|_2}{\|x\|_2} = \max_{\|x\|_2 = 1} \|Ax\|_2 .$$

The term "induced" refers to the fact that the definition of a norm for vectors such as $Ax$ and $x$ is what enables the above definition of a matrix norm. From this definition, it follows that the induced norm measures the amount of "amplification" the matrix $A$ provides to vectors on the unit sphere in $\mathbb{C}^n$, i.e. it measures the "gain" of the matrix.

Rather than measuring the vectors $x$ and $Ax$ using the 2-norm, we could use any $p$-norm, the interesting cases being $p = 1, 2, \infty$. Our notation for this is

$$\|A\|_p = \max_{\|x\|_p = 1} \|Ax\|_p .$$
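The "gain" interpretation of the induced norm can be checked numerically. The sketch below (an illustration added here, not part of the notes' own Matlab) compares NumPy's induced p-norms with the largest amplification found over many random unit vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))

for p in (1, 2, np.inf):
    norm_A = np.linalg.norm(A, p)        # induced p-norm of A
    # Sample random vectors on the unit p-sphere and measure the gain.
    xs = rng.standard_normal((3, 5000))
    xs /= np.linalg.norm(xs, ord=p, axis=0)
    gains = np.linalg.norm(A @ xs, ord=p, axis=0)
    # No sampled gain exceeds ||A||_p, and the best sample comes close.
    assert gains.max() <= norm_A + 1e-12
    print(p, norm_A, gains.max())
```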
An important question to consider is whether or not the induced norm is actually a norm, in the sense defined for vectors earlier. Recall the three conditions that define a norm:

1. $\|x\| \geq 0$, and $\|x\| = 0 \Longleftrightarrow x = 0$;
2. $\|\alpha x\| = |\alpha| \, \|x\|$;
3. $\|x + y\| \leq \|x\| + \|y\|$.

Now let us verify that $\|A\|_p$ is a norm on $\mathbb{C}^{m \times n}$, using the preceding definition:

1. $\|A\|_p \geq 0$, since $\|Ax\|_p \geq 0$ for any $x$. Furthermore, $\|A\|_p = 0 \Longleftrightarrow A = 0$, since $\|A\|_p$ is calculated from the maximum of $\|Ax\|_p$ evaluated on the unit sphere.
2. $\|\alpha A\|_p = |\alpha| \, \|A\|_p$ follows from $\|\alpha y\|_p = |\alpha| \, \|y\|_p$ (for any $y$).
3. The triangle inequality holds since

$$\|A + B\|_p = \max_{\|x\|_p = 1} \|(A + B)x\|_p \leq \max_{\|x\|_p = 1} \big( \|Ax\|_p + \|Bx\|_p \big) \leq \|A\|_p + \|B\|_p .$$

Induced norms have two additional properties that are very important:

- $\|Ax\|_p \leq \|A\|_p \, \|x\|_p$, which is a direct consequence of the definition of an induced norm;
- for $A \in \mathbb{C}^{m \times n}$, $B \in \mathbb{C}^{n \times r}$,

$$\|AB\|_p \leq \|A\|_p \, \|B\|_p ,$$

which is called the submultiplicative property. This also follows directly from the definition:

$$\|ABx\|_p \leq \|A\|_p \, \|Bx\|_p \leq \|A\|_p \, \|B\|_p \, \|x\|_p \quad \text{for any } x .$$

Dividing by $\|x\|_p$,

$$\frac{\|ABx\|_p}{\|x\|_p} \leq \|A\|_p \, \|B\|_p ,$$

from which the result follows.

Before we turn to a more detailed study of ideas surrounding the induced 2-norm, which will be the focus of this lecture and the next, we make some remarks about the other induced norms of practical interest, namely the induced 1-norm and induced $\infty$-norm. We shall also say something about an important matrix norm that is not an induced norm, namely the Frobenius norm.

It is a fairly simple exercise to prove that

$$\|A\|_1 = \max_{1 \leq j \leq n} \sum_{i=1}^{m} |a_{ij}| \quad \text{(max of absolute column sums of } A\text{)}$$

and

$$\|A\|_\infty = \max_{1 \leq i \leq m} \sum_{j=1}^{n} |a_{ij}| \quad \text{(max of absolute row sums of } A\text{)} .$$

(Note that these definitions reduce to the familiar ones for the 1-norm and $\infty$-norm of column vectors in the case $n = 1$.)

The proof for the induced $\infty$-norm involves two stages, namely:

1. Prove that the quantity above, call it $\gamma$, provides an upper bound:

$$\|Ax\|_\infty \leq \gamma \, \|x\|_\infty \quad \forall x ;$$

2. Show that this bound is achievable for some $x = \hat{x}$:

$$\|A\hat{x}\|_\infty = \gamma \, \|\hat{x}\|_\infty \quad \text{for some } \hat{x} .$$

In order to show how these steps can be implemented, we give the details for the $\infty$-norm case. Let $x \in \mathbb{C}^n$ and consider

$$\|Ax\|_\infty = \max_{1 \leq i \leq m} \Big| \sum_{j=1}^{n} a_{ij} x_j \Big| \leq \max_{1 \leq i \leq m} \sum_{j=1}^{n} |a_{ij}| \, |x_j| \leq \Big( \max_{1 \leq i \leq m} \sum_{j=1}^{n} |a_{ij}| \Big) \max_{1 \leq j \leq n} |x_j| = \Big( \max_{1 \leq i \leq m} \sum_{j=1}^{n} |a_{ij}| \Big) \|x\|_\infty .$$

The above inequalities show that an upper bound is given by

$$\max_{\|x\|_\infty = 1} \|Ax\|_\infty \leq \gamma = \max_{1 \leq i \leq m} \sum_{j=1}^{n} |a_{ij}| .$$
Now, in order to show that this upper bound is achieved by some vector $\hat{x}$, let $\bar{i}$ be an index at which the expression for $\gamma$ achieves its maximum, that is, $\gamma = \sum_{j=1}^{n} |a_{\bar{i}j}|$. Define the vector $\hat{x}$ as

$$\hat{x} = \begin{bmatrix} \mathrm{sgn}(a_{\bar{i}1}) \\ \mathrm{sgn}(a_{\bar{i}2}) \\ \vdots \\ \mathrm{sgn}(a_{\bar{i}n}) \end{bmatrix} .$$

Clearly $\|\hat{x}\|_\infty = 1$, and

$$\|A\hat{x}\|_\infty = \sum_{j=1}^{n} |a_{\bar{i}j}| = \gamma .$$

The proof for the 1-norm proceeds in exactly the same way, and is left to the reader.

There are matrix norms, i.e. functions that satisfy the three defining conditions stated earlier, that are not induced norms. The most important example of this for us is the Frobenius norm:

$$\|A\|_F = \Big( \sum_{j=1}^{n} \sum_{i=1}^{m} |a_{ij}|^2 \Big)^{1/2} = \big( \mathrm{trace}(A^* A) \big)^{1/2} \quad \text{(verify)} .$$

In other words, the Frobenius norm is defined as the root sum of squares of the entries, i.e. the usual Euclidean 2-norm of the matrix when it is regarded simply as a vector in $\mathbb{C}^{mn}$. Although it can be shown that it is not an induced matrix norm, the Frobenius norm still has the submultiplicative property that was noted for induced norms. Yet other matrix norms may be defined (some of them without the submultiplicative property), but the ones above are the only ones of interest to us.

Singular Value Decomposition

Before we discuss the singular value decomposition of matrices, we begin with some matrix facts and definitions.

Some Matrix Facts:

- A matrix $U \in \mathbb{C}^{n \times n}$ is unitary if $U^*U = UU^* = I$. Here, as in Matlab, the superscript $*$ denotes the (entry-by-entry) complex conjugate of the transpose, which is also called the Hermitian transpose or conjugate transpose.
- A matrix $U \in \mathbb{R}^{n \times n}$ is orthogonal if $U^T U = U U^T = I$, where the superscript $T$ denotes the transpose.
- Property: If $U$ is unitary, then $\|Ux\|_2 = \|x\|_2$.
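The column-sum and row-sum formulas, the sign-pattern vector $\hat{x}$ that achieves the $\infty$-norm bound, and the trace form of the Frobenius norm can all be verified numerically. A sketch (real matrices, so the Hermitian transpose is just the transpose):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 5))

col_sums = np.abs(A).sum(axis=0)   # sum_i |a_ij| for each column j
row_sums = np.abs(A).sum(axis=1)   # sum_j |a_ij| for each row i

# Induced 1-norm = max absolute column sum;
# induced infinity-norm = max absolute row sum.
assert np.isclose(np.linalg.norm(A, 1), col_sums.max())
assert np.isclose(np.linalg.norm(A, np.inf), row_sums.max())

# The infinity-norm bound is achieved by x_hat = signs of the worst row.
i_bar = row_sums.argmax()
x_hat = np.sign(A[i_bar])
assert np.isclose(np.abs(A @ x_hat).max(), row_sums.max())

# Frobenius norm: root sum of squares of entries = sqrt(trace(A^T A)).
assert np.isclose(np.linalg.norm(A, 'fro'), np.sqrt(np.trace(A.T @ A)))
```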
- If $S = S^*$ (i.e. $S$ equals its Hermitian transpose, in which case we say $S$ is Hermitian), then there exists a unitary matrix $U$ such that $U^* S U = [\text{diagonal matrix}]$. (One cannot always diagonalize an arbitrary matrix in this way; cf. the Jordan form.)
- For any matrix $A$, both $A^*A$ and $AA^*$ are Hermitian, and thus can always be diagonalized by unitary matrices.
- For any matrix $A$, the eigenvalues of $A^*A$ and $AA^*$ are always real and non-negative (proved easily by contradiction).

Theorem (Singular Value Decomposition, or SVD). Given any matrix $A \in \mathbb{C}^{m \times n}$, $A$ can be written as

$$A = \underbrace{U}_{m \times m} \ \underbrace{\Sigma}_{m \times n} \ \underbrace{V^*}_{n \times n} ,$$

where $U^*U = I$, $V^*V = I$,

$$\Sigma = \begin{bmatrix} \sigma_1 & & & \\ & \ddots & & \\ & & \sigma_r & \\ & & & 0 \end{bmatrix} ,$$

and $\sigma_i$ is the square root of the $i$th nonzero eigenvalue of $A^*A$. The $\sigma_i$ are termed the singular values of $A$, and are arranged in order of descending magnitude, i.e.

$$\sigma_1 \geq \sigma_2 \geq \cdots \geq \sigma_r > 0 .$$

Proof: We will prove this theorem for the case $\mathrm{rank}(A) = m$; the general case involves very little more than what is required for this case. The matrix $AA^*$ is Hermitian, and it can therefore be diagonalized by a unitary matrix $U \in \mathbb{C}^{m \times m}$, so that

$$U \Lambda U^* = A A^* .$$

Note that $\Lambda = \mathrm{diag}(\lambda_1, \ldots, \lambda_m)$ has real positive diagonal entries $\lambda_i$, due to the fact that $AA^*$ is positive definite. We can write

$$\Sigma_1 = \Lambda^{1/2} = \mathrm{diag}\big(\sqrt{\lambda_1}, \ldots, \sqrt{\lambda_m}\big) .$$

Define $V_1 \in \mathbb{C}^{m \times n}$ by $V_1 = \Sigma_1^{-1} U^* A$. $V_1$ has orthonormal rows, as can be seen from the following calculation:

$$V_1 V_1^* = \Sigma_1^{-1} U^* A A^* U \Sigma_1^{-1} = \Sigma_1^{-1} \Lambda \, \Sigma_1^{-1} = I .$$

Choose the matrix $V_2$ in such a way that

$$V^* = \begin{bmatrix} V_1 \\ V_2 \end{bmatrix}$$

is in $\mathbb{C}^{n \times n}$ and unitary. Define the $m \times n$ matrix $\Sigma = [\,\Sigma_1 \mid 0\,]$. This implies that

$$\Sigma V^* = \Sigma_1 V_1 = U^* A .$$

In other words, we have $A = U \Sigma V^*$.
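A quick numerical illustration of the theorem (a sketch using numpy.linalg.svd, which returns $U$, the singular values, and $V^*$ directly):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))

U, s, Vh = np.linalg.svd(A)          # A = U @ Sigma @ Vh
Sigma = np.zeros((4, 3))
Sigma[:3, :3] = np.diag(s)
assert np.allclose(U @ Sigma @ Vh, A)

# U and V are unitary ...
assert np.allclose(U.conj().T @ U, np.eye(4))
assert np.allclose(Vh @ Vh.conj().T, np.eye(3))

# ... the singular values are non-negative and descending ...
assert np.all(s >= 0) and np.all(s[:-1] >= s[1:])

# ... and sigma_i^2 are the eigenvalues of A* A.
eigs = np.sort(np.linalg.eigvalsh(A.conj().T @ A))[::-1]
assert np.allclose(s**2, eigs)
```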
Example. For the matrix $A$ given at the beginning of this lecture, the SVD, computed easily in Matlab by writing [u, s, v] = svd(A), is approximately

$$A = \begin{bmatrix} 0.7068 & 0.7075 \\ 0.7075 & -0.7068 \end{bmatrix} \begin{bmatrix} 200.1 & 0 \\ 0 & 0.1 \end{bmatrix} \begin{bmatrix} 0.7075 & 0.7068 \\ -0.7068 & 0.7075 \end{bmatrix} .$$

Observations:

i)
$$AA^* = U \Sigma V^* V \Sigma^T U^* = U \Sigma \Sigma^T U^* = U \begin{bmatrix} \sigma_1^2 & & & \\ & \ddots & & \\ & & \sigma_r^2 & \\ & & & 0 \end{bmatrix} U^* ,$$
which tells us $U$ diagonalizes $AA^*$;

ii)
$$A^*A = V \Sigma^T U^* U \Sigma V^* = V \Sigma^T \Sigma V^* = V \begin{bmatrix} \sigma_1^2 & & & \\ & \ddots & & \\ & & \sigma_r^2 & \\ & & & 0 \end{bmatrix} V^* ,$$
which tells us $V$ diagonalizes $A^*A$;

iii) if $U$ and $V$ are expressed in terms of their columns, i.e.

$$U = [\,u_1 \ u_2 \ \cdots \ u_m\,] \quad \text{and} \quad V = [\,v_1 \ v_2 \ \cdots \ v_n\,] ,$$

then

$$A = \sum_{i=1}^{r} \sigma_i \, u_i v_i^* ,$$

which is another way to write the SVD. The $u_i$ are termed the left singular vectors of $A$, and the $v_i$ are its right singular vectors. From this we see that we can alternatively interpret $Ax$ as

$$Ax = \sum_{i=1}^{r} \sigma_i \, u_i \underbrace{(v_i^* x)}_{\text{projection}} ,$$

which is a weighted sum of the $u_i$, where the weights are the products of the singular values and the projections of $x$ onto the $v_i$.

Observation (iii) tells us that $\mathcal{R}(A) = \mathrm{span}\{u_1, \ldots, u_r\}$ (because $Ax = \sum_{i=1}^{r} c_i u_i$, where the $c_i$ are scalar weights). Since the columns of $U$ are independent, $\dim \mathcal{R}(A) = r = \mathrm{rank}(A)$, and $\{u_1, \ldots, u_r\}$ constitute a basis for the range space of $A$. The null space of $A$ is given by $\mathrm{span}\{v_{r+1}, \ldots, v_n\}$. To see this:

$$U \Sigma V^* x = 0 \iff \Sigma V^* x = 0 \iff \begin{bmatrix} \sigma_1 v_1^* x \\ \vdots \\ \sigma_r v_r^* x \end{bmatrix} = 0 \iff v_i^* x = 0, \ i = 1, \ldots, r \iff x \in \mathrm{span}\{v_{r+1}, \ldots, v_n\} .$$

Example. One application of the singular value decomposition is to the solution of a system of algebraic equations. Suppose $A$ is an $m \times n$ complex matrix and $b$ is a vector in $\mathbb{C}^m$. Assume that the rank of $A$ is equal to $k$, with $k < m$. We are looking for a solution of the linear system $Ax = b$. By applying the singular value decomposition procedure to $A$, we get

$$A = U \begin{bmatrix} \Sigma_1 & 0 \\ 0 & 0 \end{bmatrix} V^* ,$$

where $\Sigma_1$ is a $k \times k$ non-singular diagonal matrix. We will express the unitary matrices $U$ and $V$ columnwise as

$$U = [\,u_1 \ u_2 \ \cdots \ u_m\,] , \quad V = [\,v_1 \ v_2 \ \cdots \ v_n\,] .$$

A necessary and sufficient condition for the solvability of this system of equations is that $u_i^* b = 0$ for all $i$ satisfying $k < i \leq m$. Otherwise, the system of equations is inconsistent. This condition means that the vector $b$ must be orthogonal to the
last $m - k$ columns of $U$. Therefore the system of linear equations can be written as

$$\begin{bmatrix} \Sigma_1 & 0 \\ 0 & 0 \end{bmatrix} V^* x = U^* b = \begin{bmatrix} u_1^* b \\ \vdots \\ u_k^* b \\ 0 \\ \vdots \\ 0 \end{bmatrix} .$$

Using the above equation and the invertibility of $\Sigma_1$, we can rewrite the system of equations as

$$\begin{bmatrix} v_1^* \\ v_2^* \\ \vdots \\ v_k^* \end{bmatrix} x = \begin{bmatrix} \frac{1}{\sigma_1} u_1^* b \\ \vdots \\ \frac{1}{\sigma_k} u_k^* b \end{bmatrix} .$$

By using the fact that

$$\begin{bmatrix} v_1^* \\ v_2^* \\ \vdots \\ v_k^* \end{bmatrix} \begin{bmatrix} v_1 & v_2 & \cdots & v_k \end{bmatrix} = I ,$$

we obtain a solution of the form

$$x = \sum_{i=1}^{k} \frac{1}{\sigma_i} (u_i^* b) \, v_i .$$

From the observations that were made earlier, we know that the vectors $v_{k+1}, v_{k+2}, \ldots, v_n$ span the kernel of $A$, and therefore a general solution of the system of linear equations is given by

$$x = \sum_{i=1}^{k} \frac{1}{\sigma_i} (u_i^* b) \, v_i + \sum_{i=k+1}^{n} \alpha_i v_i ,$$

where the coefficients $\alpha_i$, with $i$ in the interval $k+1 \leq i \leq n$, are arbitrary complex numbers.
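The procedure above can be sketched in NumPy for a small rank-deficient system (illustrative dimensions; the rank $k$ is 2 here):

```python
import numpy as np

rng = np.random.default_rng(3)
k = 2
# A 4x3 matrix of rank k = 2.
A = rng.standard_normal((4, k)) @ rng.standard_normal((k, 3))
U, s, Vh = np.linalg.svd(A)

# Take b in the range of A so that the system Ax = b is solvable.
b = A @ rng.standard_normal(3)

# Solvability condition: u_i^* b = 0 for i > k,
# i.e. b is orthogonal to the last m - k columns of U.
assert np.allclose(U[:, k:].T @ b, 0)

# Particular solution: x = sum_{i<=k} (u_i^* b / sigma_i) v_i.
x = sum((U[:, i] @ b) / s[i] * Vh[i, :] for i in range(k))
assert np.allclose(A @ x, b)

# General solution: add any combination of v_{k+1}, ..., v_n,
# which span the null space of A.
x_general = x + 3.7 * Vh[k, :]
assert np.allclose(A @ x_general, b)
```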
Relationship to Matrix Norms

The singular value decomposition can be used to compute the induced 2-norm of a matrix $A$.

Theorem.

$$\|A\|_2 = \sup_{x \neq 0} \frac{\|Ax\|_2}{\|x\|_2} = \sigma_1 = \sigma_{\max}(A) ,$$

which tells us that the maximum amplification is given by the maximum singular value.

Proof:

$$\sup_{x \neq 0} \frac{\|Ax\|_2}{\|x\|_2} = \sup_{x \neq 0} \frac{\|U \Sigma V^* x\|_2}{\|x\|_2} = \sup_{x \neq 0} \frac{\|\Sigma V^* x\|_2}{\|V^* x\|_2} = \sup_{y \neq 0} \frac{\|\Sigma y\|_2}{\|y\|_2} = \sup_{y \neq 0} \frac{\big( \sum_{i=1}^{r} \sigma_i^2 |y_i|^2 \big)^{1/2}}{\big( \sum_{i} |y_i|^2 \big)^{1/2}} = \sigma_1 .$$

For $y = [\,1 \ 0 \ \cdots \ 0\,]^T$, $\|y\|_2 = 1$, and the supremum is attained. (Notice that this corresponds to $x = v_1$. Hence, $A v_1 = \sigma_1 u_1$.)

Another application of the singular value decomposition is in computing the minimal amplification a full rank matrix exerts on elements with 2-norm equal to 1.

Theorem. Given $A \in \mathbb{C}^{m \times n}$, suppose $\mathrm{rank}(A) = n$. Then

$$\min_{\|x\|_2 = 1} \|Ax\|_2 = \sigma_n(A) .$$

Note that if $\mathrm{rank}(A) < n$, then there is an $x$ such that the minimum is zero (rewrite $A$ in terms of its SVD to see this).

Proof: For any $\|x\|_2 = 1$,

$$\|Ax\|_2 = \|U \Sigma V^* x\|_2 = \|\Sigma V^* x\|_2 \quad \text{(invariance under multiplication by unitary matrices)} = \|\Sigma y\|_2$$

for $y = V^* x$. Now

$$\|\Sigma y\|_2 = \Big( \sum_{i=1}^{n} \sigma_i^2 |y_i|^2 \Big)^{1/2} \geq \sigma_n ,$$

since $\|y\|_2 = 1$. Note that the minimum is achieved for $y = [\,0 \ \cdots \ 0 \ 1\,]^T$; thus the proof is complete.

[Figure: Graphical depiction of the mapping involving $A$. Note that $A v_1 = \sigma_1 u_1$ and that $A v_n = \sigma_n u_n$.]

The Frobenius norm can also be expressed quite simply in terms of the singular values. It follows that for any matrix $A$, not necessarily square (we leave you to verify this),

$$\|A\|_F = \Big( \sum_{j=1}^{n} \sum_{i=1}^{m} |a_{ij}|^2 \Big)^{1/2} = \big( \mathrm{trace}(A^*A) \big)^{1/2} = \Big( \sum_{i=1}^{r} \sigma_i^2 \Big)^{1/2} .$$

Example (Matrix Inequality). We say $A \geq B$, for two square matrices, if $x^* A x \geq x^* B x$ for all $x \neq 0$. Then

$$\|A\|_2 \leq 1 \iff A^* A \leq I .$$
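The identities relating norms to singular values (maximum gain $\sigma_1$, minimum gain $\sigma_n$ for a full-column-rank matrix, and the Frobenius norm as the root sum of squared singular values) can be checked numerically. A sketch:

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((5, 3))   # full column rank with probability 1
s = np.linalg.svd(A, compute_uv=False)

# ||A||_2 is the largest singular value.
assert np.isclose(np.linalg.norm(A, 2), s[0])

# ||A||_F is the root sum of squared singular values.
assert np.isclose(np.linalg.norm(A, 'fro'), np.sqrt((s**2).sum()))

# Over random unit vectors, the gain stays within [sigma_n, sigma_1].
xs = rng.standard_normal((3, 5000))
xs /= np.linalg.norm(xs, axis=0)
gains = np.linalg.norm(A @ xs, axis=0)
assert s[-1] - 1e-12 <= gains.min() and gains.max() <= s[0] + 1e-12
```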
Exercises

Exercise 1. Verify that for any $A$, an $m \times n$ matrix, the following holds:

$$\frac{1}{\sqrt{n}} \, \|A\|_\infty \leq \|A\|_2 \leq \sqrt{m} \, \|A\|_\infty .$$

Exercise 2. Suppose $A^* = A$. Find the exact relation between the eigenvalues and singular values of $A$. Does this hold if $A$ is not conjugate symmetric?

Exercise 3. Show that if $\mathrm{rank}(A) = 1$, then $\|A\|_F = \|A\|_2$.

Exercise 4. This problem leads you through the argument for the existence of the SVD, using an iterative construction. Showing that $A = U \Sigma V^*$, where $U$ and $V$ are unitary matrices, is equivalent to showing that $U^* A V = \Sigma$.

a) Argue from the definition of $\|A\|_2$ that there exist unit vectors (measured in the 2-norm) $x \in \mathbb{C}^n$ and $y \in \mathbb{C}^m$ such that $Ax = \sigma y$, where $\sigma = \|A\|_2$.

b) We can extend both $x$ and $y$ above to orthonormal bases, i.e. we can find unitary matrices $V_1$ and $U_1$ whose first columns are $x$ and $y$ respectively:

$$V_1 = [\,x \ \ \tilde{V}_1\,] , \quad U_1 = [\,y \ \ \tilde{U}_1\,] .$$

Show that one way to do this is via Householder transformations, as follows:

$$V_1 = I - 2\,\frac{h h^*}{h^* h} , \quad h = x - [\,1 \ 0 \ \cdots \ 0\,]^T ,$$

and likewise for $U_1$.

c) Now define $A_1 = U_1^* A V_1$. Why is $\|A_1\|_2 = \|A\|_2$?

d) Note that

$$A_1 = \begin{bmatrix} y^* A x & y^* A \tilde{V}_1 \\ \tilde{U}_1^* A x & \tilde{U}_1^* A \tilde{V}_1 \end{bmatrix} = \begin{bmatrix} \sigma & w^* \\ 0 & B \end{bmatrix} .$$

What is the justification for claiming that the lower left block in the above matrix is $0$?

e) Now show that

$$\Big\| A_1 \begin{bmatrix} \sigma \\ w \end{bmatrix} \Big\|_2^2 \geq \big( \sigma^2 + w^* w \big)^2 ,$$

and combine this with the fact that $\|A_1\|_2 = \|A\|_2 = \sigma$ to deduce that $w = 0$, so

$$A_1 = \begin{bmatrix} \sigma & 0 \\ 0 & B \end{bmatrix} .$$
At the next iteration, we apply the above procedure to $B$, and so on. When the iterations terminate, we have the SVD. [The reason that this is only an existence proof and not an algorithm is that it begins by invoking the existence of $x$ and $y$, but does not show how to compute them. Very good algorithms do exist for computing the SVD; see Golub and Van Loan's classic, Matrix Computations, Johns Hopkins Press, 1989. The SVD is a cornerstone of numerical computations in a host of applications.]

Exercise 5. Suppose the $m \times n$ matrix $A$ is decomposed in the form

$$A = U \begin{bmatrix} \Sigma & 0 \\ 0 & 0 \end{bmatrix} V^* ,$$

where $U$ and $V$ are unitary matrices, and $\Sigma$ is an invertible $r \times r$ matrix (the SVD could be used to produce such a decomposition). Then the "Moore-Penrose inverse", or pseudo-inverse, of $A$, denoted by $A^+$, can be defined as the $n \times m$ matrix

$$A^+ = V \begin{bmatrix} \Sigma^{-1} & 0 \\ 0 & 0 \end{bmatrix} U^* .$$

(You can invoke it in Matlab with pinv(A).)

a) Show that $A^+ A$ and $A A^+$ are symmetric, and that $A A^+ A = A$ and $A^+ A A^+ = A^+$. (These four conditions actually constitute an alternative definition of the pseudo-inverse.)

b) Show that when $A$ has full column rank then $A^+ = (A^* A)^{-1} A^*$, and that when $A$ has full row rank then $A^+ = A^* (A A^*)^{-1}$.

c) Show that, of all $x$ that minimize $\|y - Ax\|_2$ (and there will be many, if $A$ does not have full column rank), the one with smallest length $\|x\|_2$ is given by $\hat{x} = A^+ y$.

Exercise 6. All the matrices in this problem are real. Suppose

$$A = Q \begin{bmatrix} R \\ 0 \end{bmatrix} ,$$

with $Q$ being an $m \times m$ orthogonal matrix and $R$ an $n \times n$ invertible matrix. (Recall that such a decomposition exists for any matrix $A$ that has full column rank.) Also let $Y$ be an $m \times p$ matrix of the form

$$Y = Q \begin{bmatrix} Y_1 \\ Y_2 \end{bmatrix} ,$$

where the partitioning in the expression for $Y$ is conformable with the partitioning for $A$.

(a) What choice $\hat{X}$ of the $n \times p$ matrix $X$ minimizes the Frobenius norm, or equivalently the squared Frobenius norm, of $Y - AX$? In other words, find

$$\hat{X} = \arg\min_X \|Y - AX\|_F .$$

Also determine the value of $\|Y - A\hat{X}\|_F$. (Your answers should be expressed in terms of the matrices $Q$, $R$, $Y_1$ and $Y_2$.)

(b) Can your $\hat{X}$ in (a) also be written as $(A^T A)^{-1} A^T Y$? Can it be written as $A^+ Y$, where $A^+$ denotes the (Moore-Penrose) pseudo-inverse of $A$?

(c) Now obtain an expression for the choice $\hat{X}$ of $X$ that minimizes

$$\|Y - AX\|_F^2 + \|Z - BX\|_F^2 ,$$

where $Z$ and $B$ are given matrices of appropriate dimensions. (Your answer can be expressed in terms of $A$, $B$, $Y$, and $Z$.)

Exercise 7 (Structured Singular Values). Given a complex square matrix $A$, define the structured singular value function as follows:

$$\mu_{\mathbf{\Delta}}(A) = \frac{1}{\min_{\Delta \in \mathbf{\Delta}} \{ \sigma_{\max}(\Delta) \mid \det(I - \Delta A) = 0 \}} ,$$

where $\mathbf{\Delta}$ is some set of matrices.

a) If $\mathbf{\Delta} = \{ \delta I : \delta \in \mathbb{C} \}$, show that $\mu_{\mathbf{\Delta}}(A) = \rho(A)$, where $\rho$ is the spectral radius, defined as $\rho(A) = \max_i |\lambda_i|$, and the $\lambda_i$'s are the eigenvalues of $A$.

b) If $\mathbf{\Delta} = \{ \Delta \mid \Delta \in \mathbb{C}^{n \times n} \}$, show that $\mu_{\mathbf{\Delta}}(A) = \sigma_{\max}(A)$.

c) If $\mathbf{\Delta} = \{ \mathrm{diag}(\delta_1, \ldots, \delta_n) \mid \delta_i \in \mathbb{C} \}$, show that

$$\rho(A) \leq \mu_{\mathbf{\Delta}}(A) \leq \inf_{D \in \mathcal{D}} \sigma_{\max}(D^{-1} A D) ,$$

where $\mathcal{D} = \{ \mathrm{diag}(d_1, \ldots, d_n) \mid d_i > 0 \}$.

Exercise 8. Consider again the structured singular value function of a complex square matrix $A$ defined in the preceding problem. If $A$ has more structure, it is sometimes possible to compute $\mu_{\mathbf{\Delta}}(A)$ exactly. In this problem, assume $A$ is a rank-one matrix, so that we can write $A = u v^*$, where $u$, $v$ are complex vectors of dimension $n$. To simplify the computation, minimize the Frobenius norm of $\Delta$ in the definition of $\mu_{\mathbf{\Delta}}(A)$. Compute $\mu_{\mathbf{\Delta}}(A)$ when

(a) $\mathbf{\Delta} = \{ \mathrm{diag}(\delta_1, \ldots, \delta_n) \mid \delta_i \in \mathbb{C} \}$;

(b) $\mathbf{\Delta} = \{ \mathrm{diag}(\delta_1, \ldots, \delta_n) \mid \delta_i \in \mathbb{R} \}$.
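As a numerical companion to Exercise 5 (a probe, not the requested proof), the four Moore-Penrose conditions and the minimum-norm property of $A^+ y$ can be checked with numpy.linalg.pinv:

```python
import numpy as np

rng = np.random.default_rng(5)
# A 5x3 matrix of rank 2, so the least-squares minimizer is not unique.
A = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 3))
A_plus = np.linalg.pinv(A)

# The four conditions from Exercise 5(a):
assert np.allclose(A @ A_plus @ A, A)
assert np.allclose(A_plus @ A @ A_plus, A_plus)
assert np.allclose((A @ A_plus).T, A @ A_plus)
assert np.allclose((A_plus @ A).T, A_plus @ A)

# Exercise 5(c): x_hat = A^+ y minimizes ||y - Ax||_2, and moving along
# the null space of A keeps the residual while increasing ||x||_2.
y = rng.standard_normal(5)
x_hat = A_plus @ y
v_null = np.linalg.svd(A)[2][2, :]   # third right singular vector
x_other = x_hat + 0.5 * v_null
assert np.isclose(np.linalg.norm(y - A @ x_other),
                  np.linalg.norm(y - A @ x_hat))
assert np.linalg.norm(x_other) > np.linalg.norm(x_hat)
```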
More informationMODULE 8 Topics: Null space, range, column space, row space and rank of a matrix
MODULE 8 Topics: Null space, range, column space, row space and rank of a matrix Definition: Let L : V 1 V 2 be a linear operator. The null space N (L) of L is the subspace of V 1 defined by N (L) = {x
More informationLinear Algebra Formulas. Ben Lee
Linear Algebra Formulas Ben Lee January 27, 2016 Definitions and Terms Diagonal: Diagonal of matrix A is a collection of entries A ij where i = j. Diagonal Matrix: A matrix (usually square), where entries
More information4.3 - Linear Combinations and Independence of Vectors
- Linear Combinations and Independence of Vectors De nitions, Theorems, and Examples De nition 1 A vector v in a vector space V is called a linear combination of the vectors u 1, u,,u k in V if v can be
More informationA Recursive Block Incomplete Factorization. Preconditioner for Adaptive Filtering Problem
Alied Mathematical Sciences, Vol. 7, 03, no. 63, 3-3 HIKARI Ltd, www.m-hiari.com A Recursive Bloc Incomlete Factorization Preconditioner for Adative Filtering Problem Shazia Javed School of Mathematical
More informationDISCRIMINANTS IN TOWERS
DISCRIMINANTS IN TOWERS JOSEPH RABINOFF Let A be a Dedekind domain with fraction field F, let K/F be a finite searable extension field, and let B be the integral closure of A in K. In this note, we will
More informationNumerical Linear Algebra Homework Assignment - Week 2
Numerical Linear Algebra Homework Assignment - Week 2 Đoàn Trần Nguyên Tùng Student ID: 1411352 8th October 2016 Exercise 2.1: Show that if a matrix A is both triangular and unitary, then it is diagonal.
More informationRank, Trace, Determinant, Transpose an Inverse of a Matrix Let A be an n n square matrix: A = a11 a1 a1n a1 a an a n1 a n a nn nn where is the jth col
Review of Linear Algebra { E18 Hanout Vectors an Their Inner Proucts Let X an Y be two vectors: an Their inner prouct is ene as X =[x1; ;x n ] T Y =[y1; ;y n ] T (X; Y ) = X T Y = x k y k k=1 where T an
More informationChap 3. Linear Algebra
Chap 3. Linear Algebra Outlines 1. Introduction 2. Basis, Representation, and Orthonormalization 3. Linear Algebraic Equations 4. Similarity Transformation 5. Diagonal Form and Jordan Form 6. Functions
More informationESTIMATION OF ERROR. r = b Abx a quantity called the residual for bx. Then
ESTIMATION OF ERROR Let bx denote an approximate solution for Ax = b; perhaps bx is obtained by Gaussian elimination. Let x denote the exact solution. Then introduce r = b Abx a quantity called the residual
More informationNORMS ON SPACE OF MATRICES
NORMS ON SPACE OF MATRICES. Operator Norms on Space of linear maps Let A be an n n real matrix and x 0 be a vector in R n. We would like to use the Picard iteration method to solve for the following system
More informationVector Spaces, Orthogonality, and Linear Least Squares
Week Vector Spaces, Orthogonality, and Linear Least Squares. Opening Remarks.. Visualizing Planes, Lines, and Solutions Consider the following system of linear equations from the opener for Week 9: χ χ
More information14 Singular Value Decomposition
14 Singular Value Decomposition For any high-dimensional data analysis, one s first thought should often be: can I use an SVD? The singular value decomposition is an invaluable analysis tool for dealing
More informationp-adic Measures and Bernoulli Numbers
-Adic Measures and Bernoulli Numbers Adam Bowers Introduction The constants B k in the Taylor series exansion t e t = t k B k k! k=0 are known as the Bernoulli numbers. The first few are,, 6, 0, 30, 0,
More informationSome inequalities for sum and product of positive semide nite matrices
Linear Algebra and its Applications 293 (1999) 39±49 www.elsevier.com/locate/laa Some inequalities for sum and product of positive semide nite matrices Bo-Ying Wang a,1,2, Bo-Yan Xi a, Fuzhen Zhang b,
More informationNew weighing matrices and orthogonal designs constructed using two sequences with zero autocorrelation function - a review
University of Wollongong Research Online Faculty of Informatics - Paers (Archive) Faculty of Engineering and Information Sciences 1999 New weighing matrices and orthogonal designs constructed using two
More informationNotes on singular value decomposition for Math 54. Recall that if A is a symmetric n n matrix, then A has real eigenvalues A = P DP 1 A = P DP T.
Notes on singular value decomposition for Math 54 Recall that if A is a symmetric n n matrix, then A has real eigenvalues λ 1,, λ n (possibly repeated), and R n has an orthonormal basis v 1,, v n, where
More informationExtension of Minimax to Infinite Matrices
Extension of Minimax to Infinite Matrices Chris Calabro June 21, 2004 Abstract Von Neumann s minimax theorem is tyically alied to a finite ayoff matrix A R m n. Here we show that (i) if m, n are both inite,
More informationTranspose of the Weighted Mean Matrix on Weighted Sequence Spaces
Transose of the Weighted Mean Matri on Weighted Sequence Saces Rahmatollah Lashkariour Deartment of Mathematics, Faculty of Sciences, Sistan and Baluchestan University, Zahedan, Iran Lashkari@hamoon.usb.ac.ir,
More informationDS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra.
DS-GA 1002 Lecture notes 0 Fall 2016 Linear Algebra These notes provide a review of basic concepts in linear algebra. 1 Vector spaces You are no doubt familiar with vectors in R 2 or R 3, i.e. [ ] 1.1
More informationσ 11 σ 22 σ pp 0 with p = min(n, m) The σ ii s are the singular values. Notation change σ ii A 1 σ 2
HE SINGULAR VALUE DECOMPOSIION he SVD existence - properties. Pseudo-inverses and the SVD Use of SVD for least-squares problems Applications of the SVD he Singular Value Decomposition (SVD) heorem For
More informationCSE 599d - Quantum Computing When Quantum Computers Fall Apart
CSE 599d - Quantum Comuting When Quantum Comuters Fall Aart Dave Bacon Deartment of Comuter Science & Engineering, University of Washington In this lecture we are going to begin discussing what haens to
More informationAdditive results for the generalized Drazin inverse in a Banach algebra
Additive results for the generalized Drazin inverse in a Banach algebra Dragana S. Cvetković-Ilić Dragan S. Djordjević and Yimin Wei* Abstract In this aer we investigate additive roerties of the generalized
More informationLecture 2: Linear Algebra Review
EE 227A: Convex Optimization and Applications January 19 Lecture 2: Linear Algebra Review Lecturer: Mert Pilanci Reading assignment: Appendix C of BV. Sections 2-6 of the web textbook 1 2.1 Vectors 2.1.1
More informationInner products and Norms. Inner product of 2 vectors. Inner product of 2 vectors x and y in R n : x 1 y 1 + x 2 y x n y n in R n
Inner products and Norms Inner product of 2 vectors Inner product of 2 vectors x and y in R n : x 1 y 1 + x 2 y 2 + + x n y n in R n Notation: (x, y) or y T x For complex vectors (x, y) = x 1 ȳ 1 + x 2
More information4 Linear Algebra Review
Linear Algebra Review For this topic we quickly review many key aspects of linear algebra that will be necessary for the remainder of the text 1 Vectors and Matrices For the context of data analysis, the
More informationON THE NORM OF AN IDEMPOTENT SCHUR MULTIPLIER ON THE SCHATTEN CLASS
PROCEEDINGS OF THE AMERICAN MATHEMATICAL SOCIETY Volume 00, Number 0, Pages 000 000 S 000-9939XX)0000-0 ON THE NORM OF AN IDEMPOTENT SCHUR MULTIPLIER ON THE SCHATTEN CLASS WILLIAM D. BANKS AND ASMA HARCHARRAS
More informationLinGloss. A glossary of linear algebra
LinGloss A glossary of linear algebra Contents: Decompositions Types of Matrices Theorems Other objects? Quasi-triangular A matrix A is quasi-triangular iff it is a triangular matrix except its diagonal
More informationSection 0.10: Complex Numbers from Precalculus Prerequisites a.k.a. Chapter 0 by Carl Stitz, PhD, and Jeff Zeager, PhD, is available under a Creative
Section 0.0: Comlex Numbers from Precalculus Prerequisites a.k.a. Chater 0 by Carl Stitz, PhD, and Jeff Zeager, PhD, is available under a Creative Commons Attribution-NonCommercial-ShareAlike.0 license.
More informationMath Introduction to Numerical Analysis - Class Notes. Fernando Guevara Vasquez. Version Date: January 17, 2012.
Math 5620 - Introduction to Numerical Analysis - Class Notes Fernando Guevara Vasquez Version 1990. Date: January 17, 2012. 3 Contents 1. Disclaimer 4 Chapter 1. Iterative methods for solving linear systems
More informationLecture notes on Quantum Computing. Chapter 1 Mathematical Background
Lecture notes on Quantum Computing Chapter 1 Mathematical Background Vector states of a quantum system with n physical states are represented by unique vectors in C n, the set of n 1 column vectors 1 For
More informationMA3H1 TOPICS IN NUMBER THEORY PART III
MA3H1 TOPICS IN NUMBER THEORY PART III SAMIR SIKSEK 1. Congruences Modulo m In quadratic recirocity we studied congruences of the form x 2 a (mod ). We now turn our attention to situations where is relaced
More informationFoundations of Matrix Analysis
1 Foundations of Matrix Analysis In this chapter we recall the basic elements of linear algebra which will be employed in the remainder of the text For most of the proofs as well as for the details, the
More informationt s (p). An Introduction
Notes 6. Quadratic Gauss Sums Definition. Let a, b Z. Then we denote a b if a divides b. Definition. Let a and b be elements of Z. Then c Z s.t. a, b c, where c gcda, b max{x Z x a and x b }. 5, Chater1
More informationCMSC 425: Lecture 4 Geometry and Geometric Programming
CMSC 425: Lecture 4 Geometry and Geometric Programming Geometry for Game Programming and Grahics: For the next few lectures, we will discuss some of the basic elements of geometry. There are many areas
More informationSection 3.9. Matrix Norm
3.9. Matrix Norm 1 Section 3.9. Matrix Norm Note. We define several matrix norms, some similar to vector norms and some reflecting how multiplication by a matrix affects the norm of a vector. We use matrix
More informationChapter Robust Performance and Introduction to the Structured Singular Value Function Introduction As discussed in Lecture 0, a process is better desc
Lectures on Dynamic Systems and Control Mohammed Dahleh Munther A Dahleh George Verghese Department of Electrical Engineering and Computer Science Massachuasetts Institute of Technology c Chapter Robust
More informationMATH 20F: LINEAR ALGEBRA LECTURE B00 (T. KEMP)
MATH 20F: LINEAR ALGEBRA LECTURE B00 (T KEMP) Definition 01 If T (x) = Ax is a linear transformation from R n to R m then Nul (T ) = {x R n : T (x) = 0} = Nul (A) Ran (T ) = {Ax R m : x R n } = {b R m
More informationOn Wald-Type Optimal Stopping for Brownian Motion
J Al Probab Vol 34, No 1, 1997, (66-73) Prerint Ser No 1, 1994, Math Inst Aarhus On Wald-Tye Otimal Stoing for Brownian Motion S RAVRSN and PSKIR The solution is resented to all otimal stoing roblems of
More informationAnalysis of some entrance probabilities for killed birth-death processes
Analysis of some entrance robabilities for killed birth-death rocesses Master s Thesis O.J.G. van der Velde Suervisor: Dr. F.M. Sieksma July 5, 207 Mathematical Institute, Leiden University Contents Introduction
More informationMath 104B: Number Theory II (Winter 2012)
Math 104B: Number Theory II (Winter 01) Alina Bucur Contents 1 Review 11 Prime numbers 1 Euclidean algorithm 13 Multilicative functions 14 Linear diohantine equations 3 15 Congruences 3 Primes as sums
More informationMaths for Signals and Systems Linear Algebra in Engineering
Maths for Signals and Systems Linear Algebra in Engineering Lectures 13 15, Tuesday 8 th and Friday 11 th November 016 DR TANIA STATHAKI READER (ASSOCIATE PROFFESOR) IN SIGNAL PROCESSING IMPERIAL COLLEGE
More informationLinear Algebra: Characteristic Value Problem
Linear Algebra: Characteristic Value Problem . The Characteristic Value Problem Let < be the set of real numbers and { be the set of complex numbers. Given an n n real matrix A; does there exist a number
More informationKnowledge Discovery and Data Mining 1 (VO) ( )
Knowledge Discovery and Data Mining 1 (VO) (707.003) Review of Linear Algebra Denis Helic KTI, TU Graz Oct 9, 2014 Denis Helic (KTI, TU Graz) KDDM1 Oct 9, 2014 1 / 74 Big picture: KDDM Probability Theory
More informationLecture 1: Review of linear algebra
Lecture 1: Review of linear algebra Linear functions and linearization Inverse matrix, least-squares and least-norm solutions Subspaces, basis, and dimension Change of basis and similarity transformations
More information