
ON A VARIATIONAL FORMULATION OF THE GENERALIZED SINGULAR VALUE DECOMPOSITION

MOODY T. CHU, ROBERT E. FUNDERLIC, AND GENE H. GOLUB

(Department of Mathematics, North Carolina State University, Raleigh, NC; this research was supported in part by the National Science Foundation under grants DMS and DMS. Department of Computer Science, North Carolina State University, Raleigh, NC. Department of Computer Science, Stanford University, Stanford, CA; this research was supported in part by the National Science Foundation under grant CCR.)

Abstract. A variational formulation for the generalized singular value decomposition (GSVD) of a pair of matrices $A \in R^{m \times n}$ and $B \in R^{p \times n}$ is presented. In particular, a duality theory analogous to that of the SVD provides new understanding of left and right generalized singular vectors. It is shown that the intersection of row spaces of $A$ and $B$ plays a key role in the GSVD duality theory. The main result that characterizes left GSVD vectors involves a generalized singular value deflation process.

Key words. Generalized eigenvalue and eigenvector, generalized singular value and singular vector, stationary value and stationary point, deflation, duality.

AMS(MOS) subject classifications. 65F15, 65H.

1. Introduction. The singular value decomposition (SVD) of a given matrix $A \in R^{m \times n}$ is

(1)  $U^T A V = S = \mathrm{diag}\{\sigma_1, \ldots, \sigma_q\}, \quad q = \min\{m, n\},$

where $U \in R^{m \times m}$ and $V \in R^{n \times n}$ are orthogonal matrices and $S \in R^{m \times n}$ is zero except for the real nonnegative elements $\sigma_1 \ge \cdots \ge \sigma_r > \sigma_{r+1} = \cdots = \sigma_q = 0$ on the leading diagonal, with $r = \mathrm{rank}(A)$. The $\sigma_i$, $i = 1, \ldots, q$, are the singular values of $A$. Of the many ways to characterize the singular values of $A$, the following variational property is of particular interest [4].

THEOREM 1.1. Consider the optimization problem

(2)  $\max_{x \ne 0} \frac{\|Ax\|}{\|x\|},$

where $\|\cdot\|$ denotes the 2-norm of a vector. Then the singular values of $A$ are precisely the stationary values, i.e., the functional evaluations at the stationary points, of the objective function $\|Ax\|/\|x\|$ with respect to $x \ne 0$.

We note that the stationary points $x \in R^n$ in the problem (2) are the right singular vectors of $A$. At each such point, it follows from the usual duality theory that there exists a vector $y \in R^m$ of unit Euclidean length such that $y^T A x$ is equal to the corresponding stationary value. This $y$ is the corresponding left singular vector of $A$.

The main purpose of this paper is to delineate a similar variational principle that leads to the generalized singular value decomposition (GSVD) of a pair of matrices $A \in R^{m \times n}$ and $B \in R^{p \times n}$.
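Before turning to the GSVD, Theorem 1.1 is easy to check numerically. The sketch below is our own illustration, not part of the paper; it uses numpy with an arbitrary random test matrix and verifies the first-order condition $A^T A x = \sigma^2 x$ at each right singular vector, along with the dual relation $y^T A x = \sigma$.

```python
# A small numerical check of Theorem 1.1 (ours, not from the paper).
# The matrix and its dimensions are arbitrary test data.
import numpy as np

rng = np.random.default_rng(0)
m, n = 7, 5
A = rng.standard_normal((m, n))

U, sigma, Vt = np.linalg.svd(A)
for i in range(n):
    v = Vt[i]                                  # right singular vector (unit norm)
    # Stationary condition of ||Ax||/||x||: A^T A v = sigma_i^2 v.
    assert np.allclose(A.T @ (A @ v), sigma[i] ** 2 * v)
    # The stationary value is the singular value itself ...
    assert np.isclose(np.linalg.norm(A @ v), sigma[i])
    # ... and the unit dual vector y with y^T A v = sigma_i is the
    # corresponding left singular vector.
    y = U[:, i]
    assert np.isclose(y @ A @ v, sigma[i])
```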

While the variational formula analogous to (2) for the GSVD is well known, the corresponding duality theory has apparently not been developed. (See [1] for a related treatise.) The purpose of this note is to fill the duality theory gap for the GSVD problem.

Let $R(M)$ and $N(M)$ denote, respectively, the range space and the null space of any given matrix $M$. We will see that the intersection of row spaces of $A$ and $B$,

$R(A^T) \cap R(B^T) = \{ z \in R^n \mid z = A^T x = B^T y \ \text{for some} \ x \in R^m \ \text{and} \ y \in R^p \},$

plays a fundamental role in the duality theory of the associated GSVD. The equivalence

$C \begin{bmatrix} x \\ -y \end{bmatrix} = 0 \iff A^T x = B^T y,$

where $(x; -y)$ denotes the column vector in $R^{m+p}$ obtained by stacking $x$ on top of $-y$, suggests that the null space of the matrix $C := [A^T, B^T]$,

(3)  $N(C) = \{ (x; -y) \in R^{m+p} \mid C (x; -y) = 0 \},$

may be interpreted as a "representation" of $R(A^T) \cap R(B^T)$. But this representation is not unique, in that different values of $(x; -y) \in N(C)$ may give rise to the same $z \in R(A^T) \cap R(B^T)$. In particular, all points in the subspace

(4)  $S := \{ (g; -h) \in R^{m+p} \mid A^T g = B^T h = 0 \}$

collapse into the zero vector in $R(A^T) \cap R(B^T)$. For a reason to be discussed in the sequel (see (16) and the argument thereafter), the subspace $S$ should be taken out of consideration. More precisely, define the homomorphism $H : N(C) \longrightarrow R(A^T) \cap R(B^T)$ by

$z = H((x; -y)) \iff z = A^T x = B^T y,$

and define, for every $(x; -y) \in N(C)$, the quotient map $\pi((x; -y))$ to be the coset of $S$ containing $(x; -y)$, i.e.,

(5)  $\pi((x; -y)) := (x; -y) + S.$

Then the first homomorphism theorem for vector spaces (see, for example, [5, Theorem 4.a]) states that $R(A^T) \cap R(B^T)$ is isomorphic to the quotient space $N(C)/S$, where

(6)  $N(C)/S := \{ \pi((x; -y)) \mid (x; -y) \in N(C) \}.$
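To make the representation (3) concrete, the following sketch (ours, not from the paper) builds $C = [A^T, B^T]$ for random test matrices and checks that every null-space vector $(x; -y)$ encodes an identity $A^T x = B^T y$, i.e., an element of $R(A^T) \cap R(B^T)$. The helper scipy.linalg.null_space returns an orthonormal basis of the kernel.

```python
# An illustrative computation (ours): elements (x; -y) of N(C), with
# C = [A^T, B^T], represent vectors z = A^T x = B^T y in the intersection
# of the row spaces of A and B.
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(1)
m, p, n = 5, 4, 3
A = rng.standard_normal((m, n))
B = rng.standard_normal((p, n))

C = np.hstack([A.T, B.T])              # n x (m+p); kernel vectors are (x; -y)
W = null_space(C)                      # orthonormal basis of N(C)
for w in W.T:
    x, y = w[:m], -w[m:]
    # Each kernel vector encodes one identity A^T x = B^T y, i.e. a vector
    # in R(A^T) ∩ R(B^T); different kernel vectors may encode the same z.
    assert np.allclose(A.T @ x, B.T @ y)
```

Distinct basis vectors of $N(C)$ may still map to the same $z$ whenever components from $S$ are present, which is exactly the non-uniqueness the quotient construction above removes.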

It is in this quotient space that we establish the duality theory. Recall that linearly independent vectors in $N(C)$ that are not in $S$ naturally generate linearly independent vectors in the quotient space $N(C)/S$ through the quotient map. Thus the simplest way to represent $N(C)/S$ is through the orthogonal complement $S^\perp$ of $S$ in $N(C)$. Define $N(A^T)$ and $N(B^T)$ to be matrices whose columns span, respectively, the null spaces $N(A^T)$ and $N(B^T)$. Define

(7)  $Z := \begin{bmatrix} A^T & B^T \\ N(A^T)^T & 0 \\ 0 & N(B^T)^T \end{bmatrix}.$

Then $N(C)/S$ can be uniquely represented by the subspace

(8)  $N(Z) = \{ (x; -y) \in R^{m+p} \mid Z (x; -y) = 0 \}.$

We shall have the dimension counted carefully in §2.

Our discussion is based upon the following formulation of the GSVD for $A$ and $B$ by Paige and Saunders [6] (or the QSVD in [3]) that generalizes the original concept in [9]:

DEFINITION 1.1. Assume $\mathrm{rank}(C) = k$. Then there exist orthogonal $U \in R^{m \times m}$ and $V \in R^{p \times p}$ and an invertible $X \in R^{n \times n}$ such that

(9)  $\begin{bmatrix} U^T & 0 \\ 0 & V^T \end{bmatrix} \begin{bmatrix} A \\ B \end{bmatrix} X = \begin{bmatrix} \Sigma_A & 0 \\ \Sigma_B & 0 \end{bmatrix},$

with $\Sigma_A \in R^{m \times k}$ and $\Sigma_B \in R^{p \times k}$ given by

$\Sigma_A = \begin{bmatrix} I_A & & \\ & S_A & \\ & & O_A \end{bmatrix}$ (block column sizes $r$, $s$, $k-r-s$; block row sizes $r$, $s$, $m-r-s$),

$\Sigma_B = \begin{bmatrix} O_B & & \\ & S_B & \\ & & I_B \end{bmatrix}$ (block column sizes $r$, $s$, $k-r-s$; block row sizes $p-k+r$, $s$, $k-r-s$),

where $I_A$ and $I_B$ are identity matrices, $O_A$ and $O_B$ are zero matrices with possibly no rows or no columns, and $S_A = \mathrm{diag}\{\omega_A^{(1)}, \ldots, \omega_A^{(s)}\}$ and $S_B = \mathrm{diag}\{\omega_B^{(1)}, \ldots, \omega_B^{(s)}\}$ satisfy

$1 > \omega_A^{(1)} \ge \cdots \ge \omega_A^{(s)} > 0, \quad 0 < \omega_B^{(1)} \le \cdots \le \omega_B^{(s)} < 1, \quad (\omega_A^{(i)})^2 + (\omega_B^{(i)})^2 = 1.$

The quotients

$\sigma_i := \frac{\omega_A^{(i)}}{\omega_B^{(i)}}, \quad i = 1, \ldots, s,$

are called the generalized singular values of $(A, B)$, for which we make use of the notation $\Sigma := \mathrm{diag}\{\sigma_1, \ldots, \sigma_s\}$.
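Assembling $Z$ of (7) numerically is routine. The sketch below is our own (with arbitrary sizes, using scipy's null_space for the bases $N(A^T)$ and $N(B^T)$); it also anticipates the dimension count $\dim N(Z) = \dim N(C) - \dim S = s$ carried out in §2.

```python
# A sketch (ours, under the paper's definitions) of the matrix Z of (7)
# and the dimension count dim N(Z) = dim N(C) - dim S = s.
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(2)
m, p, n = 5, 4, 3
A = rng.standard_normal((m, n))
B = rng.standard_normal((p, n))

NA = null_space(A.T)                    # columns span N(A^T)
NB = null_space(B.T)                    # columns span N(B^T)
C = np.hstack([A.T, B.T])

Z = np.vstack([
    C,
    np.hstack([NA.T, np.zeros((NA.shape[1], p))]),
    np.hstack([np.zeros((NB.shape[1], m)), NB.T]),
])

dim_NC = m + p - np.linalg.matrix_rank(C)
dim_S = NA.shape[1] + NB.shape[1]
dim_NZ = (m + p) - np.linalg.matrix_rank(Z)
assert dim_NZ == dim_NC - dim_S         # = s, the number of generalized singular values
```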

The values of $r$ and $s$ are defined internally by the matrices $A$ and $B$.

Suppose we partition $X$ into four blocks of columns, $X = [X_1, X_2, X_3, X_4]$, with column sizes $r$, $s$, $k-r-s$, and $n-k$, respectively. Correspondingly, suppose we partition $U$ into $U = [U_1, U_2, U_3]$ with column sizes $r$, $s$, $m-r-s$, and $V$ into $V = [V_1, V_2, V_3]$ with column sizes $p-k+r$, $s$, $k-r-s$, respectively. Observe that

$X^T A^T A X = \mathrm{diag}\{ I_A, \ S_A^2, \ O_A^T O_A, \ 0, \ldots, 0 \},$
$X^T B^T B X = \mathrm{diag}\{ O_B^T O_B, \ S_B^2, \ I_B, \ 0, \ldots, 0 \},$

where, for simplicity, we have used "0" to denote various zero matrices with appropriate sizes. Upon examining the second column block, we notice that

$A^T A X_2 = B^T B X_2 \Sigma^2.$

That is, $\{\sigma_i^2 \mid i = 1, \ldots, s\}$ is a subset of the eigenvalues of the symmetric pencil

(10)  $A^T A - \lambda B^T B.$

Similarly, we point out the following remarks to include all other cases [3, 6]:

1. If $k < n$, then $A^T A X_4 = B^T B X_4 = 0$ implies that every complex number is an eigenvalue of (10). This is the case that is considered of little interest. We will refer to eigenvalues of this type as defective.
2. Since $A^T A X_3 = 0$ and $B^T B X_3 \ne 0$, the pencil (10) has $0$ as an eigenvalue with multiplicity $k - r - s$.
3. Since $B^T B X_1 = 0$ and $A^T A X_1 \ne 0$, we may regard that the pencil (10) has $\infty$ as an eigenvalue with multiplicity $r$.

We view the relationships

$U_2^T A X_2 = S_A, \quad V_2^T B X_2 = S_B$

as the fundamental and most important components of (9). We refer to the corresponding columns of $U$ and $V$ as the left generalized singular vectors of $A$ and $B$, respectively. Note that there are two such left vectors for each generalized singular value, one for $A$ and one for $B$.
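The claim that the $\sigma_i^2$ appear among the eigenvalues of the pencil (10) can be observed directly. This sketch is our illustration, not the paper's; it assumes $B$ has full column rank so that $B^T B$ is positive definite and scipy's generalized symmetric eigensolver applies.

```python
# Our numerical illustration of the remark above: with B of full column
# rank, the eigenvalues of the pencil A^T A - lambda B^T B are exactly the
# squares of the generalized singular values of (A, B).
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(3)
m, p, n = 6, 5, 4                      # p >= n so B^T B is positive definite
A = rng.standard_normal((m, n))
B = rng.standard_normal((p, n))

lam, X = eigh(A.T @ A, B.T @ B)        # pencil eigenvalues, ascending
gsv = np.sqrt(lam)                     # sigma_i = omega_A / omega_B

# Cross-check against the quotient definition sigma_i = ||A x_i|| / ||B x_i||
# at the pencil eigenvectors (the columns of X).
for i in range(n):
    xi = X[:, i]
    assert np.isclose(np.linalg.norm(A @ xi) / np.linalg.norm(B @ xi), gsv[i])
```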

Similar to Theorem 1.1, we have the following variational formulation.

THEOREM 1.2. Consider the optimization problem

(11)  $\max_{Bx \ne 0} \frac{\|Ax\|}{\|Bx\|}.$

Then the generalized singular values $\sigma_1, \ldots, \sigma_s$ of $(A, B)$ are precisely the nonzero finite stationary values of the objective function in (11).

Proof. The stationary values of $\|Ax\|/\|Bx\|$ are square roots of those of the function

$f(x) := \frac{\langle A^T A x, x \rangle}{\langle B^T B x, x \rangle},$

where $\langle \cdot, \cdot \rangle$ is the standard Euclidean inner product. It is not difficult to see that the gradient of $f$ at $x$ where $Bx \ne 0$ is given by

$\nabla f(x) = \frac{2 \left( A^T A x - f(x) B^T B x \right)}{\langle B^T B x, x \rangle}.$

The theorem follows from comparing the first order condition $\nabla f(x) = 0$ with (10).

Obviously, the corresponding stationary points $x \in R^n$ for the problem (11) are related to columns of the matrix $X$ (up to scalar multiplication), which are also eigenvectors of the pencil (10). What is not clear are the roles that $U$ and $V$ play in terms of the variational formula (11). In this note we present some new insights in this regard.

In the usual SVD duality theory the left singular vectors can be obtained from the optimization problem

(12)  $\max_{y \ne 0} \frac{\|y^T A\|}{\|y\|},$

a formula similar to (2). Thus one might first guess that the dual formulation analogous to (11) would be the problem

$\max_{y^T B \ne 0} \frac{\|y^T A\|}{\|y^T B\|}.$

However, this is certainly not a correct form, as a single row vector $y^T$ is not compatible for left multiplication on both $A$ and $B$. We will see correct dual forms for the GSVD in (18) and (23).
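As a sanity check on the gradient formula in the proof of Theorem 1.2 above, the analytic gradient of $f$ can be compared against a finite-difference approximation. The snippet is our illustration with arbitrary random test matrices.

```python
# A direct check (ours) of the gradient formula in the proof of Theorem 1.2:
# grad f(x) = 2 (A^T A x - f(x) B^T B x) / <B^T B x, x>, compared against
# central finite differences at a random point with Bx != 0.
import numpy as np

rng = np.random.default_rng(4)
m, p, n = 6, 5, 4
A = rng.standard_normal((m, n))
B = rng.standard_normal((p, n))

f = lambda x: (x @ A.T @ A @ x) / (x @ B.T @ B @ x)
x = rng.standard_normal(n)

grad = 2 * (A.T @ A @ x - f(x) * (B.T @ B @ x)) / (x @ B.T @ B @ x)
eps = 1e-6
fd = np.array([(f(x + eps * e) - f(x - eps * e)) / (2 * eps) for e in np.eye(n)])
assert np.allclose(grad, fd, atol=1e-5)
```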

2. Duality Theory. For convenience, we denote

$U_2 = [u_1^{(2)}, \ldots, u_s^{(2)}], \quad V_2 = [v_1^{(2)}, \ldots, v_s^{(2)}].$

It follows from

(13)  $U_2^T A X = S_A S_B^{-1} V_2^T B X = \Sigma V_2^T B X$

that $U_2^T A = \Sigma V_2^T B$, or equivalently,

(14)  $C \begin{bmatrix} U_2 \\ -V_2 \Sigma \end{bmatrix} = 0.$

Note that the columns of both $U_2$ and $V_2$ are unit vectors. Given any $(x; -y)$ in the null space of $C$ with $\|x\| \ne 0$ and $\|y\| \ne 0$, we observe that

(15)  $C \begin{bmatrix} \frac{x}{\|x\|} \\[2pt] -\frac{y}{\|y\|} \frac{\|y\|}{\|x\|} \end{bmatrix} = \frac{1}{\|x\|} C \begin{bmatrix} x \\ -y \end{bmatrix} = 0,$

where $x/\|x\|$ and $y/\|y\|$ are also unit vectors. Comparing (15) with the relationship (14), we are motivated to consider the role that each generalized singular value $\sigma_i$ plays in the optimization problem

(16)  $\max_{C(x; -y) = 0, \ x \ne 0} \frac{\|y\|}{\|x\|}.$

However, we need to hastily point out a subtle flaw in the formulation of (16). Consider a given point $(g; -h) \in S$ with $\|g\| \ne 0$ and $\|h\| \ne 0$. Then $(g; -\tau h) \in S$ for arbitrary $\tau \in R$. In this case, the optimization subproblem

(17)  $\max_{A^T x = B^T y = 0, \ x \ne 0} \frac{\|y\|}{\|x\|}$

becomes the problem

$\max_{\tau \in R, \ \tau \ne 0} \frac{|\tau| \, \|h\|}{\|g\|},$

which obviously has no stationary point at all and has maximum infinity. The trouble persists so long as $(x; -y)$ contains components from $S$. It is for this reason that the subspace $S$ should be taken out of consideration. We should, instead of (16), consider the modified optimization problem (see (7) and (8))

(18)  $\max_{(x; -y) \in N(Z), \ x \ne 0} \frac{\|y\|}{\|x\|}.$

We will prove that each $\sigma_i$ corresponds to a stationary value for the problem (18). But first it is worthwhile to point out some interesting remarks:

1. The optimization problem (18) is consistent with the ordinary singular value problem, where $B = I$. In this case,

$Z = \begin{bmatrix} A^T & I \\ N(A^T)^T & 0 \end{bmatrix}.$

Thus $(x; -y) \in N(Z)$ implies that $y = A^T x \ne 0$. The forbidden situation $A^T x = y = 0$ in (18) is not a concern in this case because of the homogeneity in $x$, and only implies that $0$ is a stationary value (or, equivalently, that $A$ has a zero singular value). Thus in the case of the ordinary SVD the problem (18) reduces to (12).

2. It is clear that $\dim(N(C)) = m + p - k$, since we assume $\mathrm{rank}(C) = k$. The structure involved in (9) implies that for $S$ defined in (4) it must be

$S = R(U_3) \times R(V_1).$

That is, the sizes of $N(A^T)$ and $N(B^T)$ should be $m \times (m - r - s)$ and $p \times (p - k + r)$, respectively. It follows that $\dim(S) = m + p - k - s$. The space we are interested in is the quotient space $N(C)/S$. It is known from the homomorphism theorem that $\dim(N(C)/S) = \dim(N(C)) - \dim(S)$ [5, Lemma 4.8]. Thus $\dim(N(C)/S) = s$. We will see below that this dimension count agrees with our assumption that there are $s$ generalized singular values.

The following theorem is critical to the study of stationary values and stationary points of the optimization problem (18).

THEOREM 2.1. Let the columns of the matrix $\begin{bmatrix} \Gamma \\ -\Delta \end{bmatrix}$, with $\Gamma \in R^{m \times s}$ and $\Delta \in R^{p \times s}$, be a basis for the subspace $N(Z)$. Then the non-defective finite nonzero eigenvalues of the symmetric pencil (10), $A^T A - \lambda B^T B$, are the same as those of the pencil

(19)  $\Delta^T \Delta - \lambda \Gamma^T \Gamma.$

Proof. Suppose $A^T A x = \lambda B^T B x$. Since $\lambda$ is non-defective and nonzero, $A^T (Ax) = \lambda B^T (Bx) \ne 0$. That is, $(Ax; -\lambda Bx)$ represents a nonzero element in $N(C)/S$. Thus there exist vectors $v \in R^s$, $v \ne 0$, $\gamma_A \in R^{m-r-s}$, and $\gamma_B \in R^{p-k+r}$ such that

$Ax = \Gamma v + N(A^T) \gamma_A,$
$\lambda Bx = \Delta v + N(B^T) \gamma_B.$

It follows that

$(\Delta^T \Delta - \lambda \Gamma^T \Gamma) v = \Delta^T \left( \lambda Bx - N(B^T) \gamma_B \right) - \lambda \Gamma^T \left( Ax - N(A^T) \gamma_A \right) = \lambda \left( \Delta^T B x - \Gamma^T A x \right) = 0.$

In the above, we have used the facts that $N(A^T)^T \Gamma = N(B^T)^T \Delta = 0$ and that $\Gamma^T A x = (A^T \Gamma)^T x = (B^T \Delta)^T x = \Delta^T B x$, both of which follow from $Z (\Gamma; -\Delta) = 0$. This shows that $\lambda$ is an eigenvalue of (19) with $v$ as the corresponding eigenvector.

To complete the eigenvalue (generalized singular value) set equality, suppose now that $(\Delta^T \Delta - \lambda \Gamma^T \Gamma) v = 0$ with $\lambda \ne 0, \infty$ and $v \ne 0$. We want to show that the equation

(20)  $\begin{bmatrix} A \\ B \end{bmatrix} x = \begin{bmatrix} \Gamma v \\ \lambda^{-1} \Delta v \end{bmatrix}$

has a solution $x$. If this can be done, then since $[A^T, B^T] (\Gamma v; -\Delta v) = 0$, it follows that $x$ is an eigenvector of the pencil $A^T A - \lambda B^T B$ with eigenvalue $\lambda$. To show (20) means to show that the vector $(\Gamma v; \lambda^{-1} \Delta v)$ is in the column space of the matrix $\begin{bmatrix} A \\ B \end{bmatrix}$. It suffices to show that

(21)  $\begin{bmatrix} \Gamma v \\ \lambda^{-1} \Delta v \end{bmatrix} \perp \begin{bmatrix} y \\ -z \end{bmatrix}$

whenever

(22)  $[y^T, -z^T] \begin{bmatrix} A \\ B \end{bmatrix} = 0.$

Rewrite (22) as $A^T y = B^T z$, showing that $(y; -z) \in N(C)$. We therefore must have

$y = \Gamma w + N(A^T) \gamma_A,$
$z = \Delta w + N(B^T) \gamma_B,$

for some vectors $w$, $\gamma_A$, and $\gamma_B$ of appropriate sizes. Substituting $y$ and $z$ into (21) implies

$\begin{bmatrix} y \\ -z \end{bmatrix}^T \begin{bmatrix} \Gamma v \\ \lambda^{-1} \Delta v \end{bmatrix} = w^T \left( \Gamma^T \Gamma v - \lambda^{-1} \Delta^T \Delta v \right) + \left( \gamma_A^T N(A^T)^T \Gamma v - \lambda^{-1} \gamma_B^T N(B^T)^T \Delta v \right) = 0.$

The assertion is therefore proved.

COROLLARY 2.2. The generalized singular values $\sigma_i$, $i = 1, \ldots, s$, are the stationary values associated with the optimization problem (18).

Proof. We have already seen in Theorem 1.2 how the generalized singular values of $(A, B)$ are related to the pencil $A^T A - \lambda B^T B$, which is now related to the pencil $\Delta^T \Delta - \lambda \Gamma^T \Gamma$. By Theorem 1.2 again, we conclude that the generalized singular values of $(A, B)$ can be found from the stationary values associated with the optimization problem

(23)  $\max_{v \ne 0} \frac{\|\Delta v\|}{\|\Gamma v\|},$

which is equivalent to (18).
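Theorem 2.1 and Corollary 2.2 suggest an immediate numerical experiment, sketched below in our own code (not the authors'): extract a basis $[\Gamma; -\Delta]$ of $N(Z)$ and compare the eigenvalues of the $s \times s$ pencil (19) with those of the $n \times n$ pencil (10). The sketch treats the generic case $k = n$, where all eigenvalues of (10) are finite, nonzero, and non-defective, and recovers the generalized singular values as square roots per (23).

```python
import numpy as np
from scipy.linalg import null_space, eigh

rng = np.random.default_rng(5)
m, p, n = 6, 5, 4                      # generic case: k = n and s = n
A = rng.standard_normal((m, n))
B = rng.standard_normal((p, n))

NA, NB = null_space(A.T), null_space(B.T)
Z = np.vstack([
    np.hstack([A.T, B.T]),
    np.hstack([NA.T, np.zeros((NA.shape[1], p))]),
    np.hstack([np.zeros((NB.shape[1], m)), NB.T]),
])
W = null_space(Z)                      # columns are basis vectors (x; -y) of N(Z)
Gamma, Delta = W[:m], -W[m:]           # so that the basis reads [Gamma; -Delta]

# Eigenvalues of the small pencil (19) versus the large pencil (10).
lam_small = eigh(Delta.T @ Delta, Gamma.T @ Gamma, eigvals_only=True)
lam_large = eigh(A.T @ A, B.T @ B, eigvals_only=True)
assert np.allclose(np.sort(lam_small), np.sort(lam_large))

# Per Corollary 2.2 and (23), the generalized singular values are the
# square roots of these common eigenvalues.
print(np.sqrt(np.sort(lam_small)))
```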

We now characterize the stationary points of (18). In particular, we prove the following result, which completes our duality theory. Aside from the fundamental connection between the GSVD and its duality theory, the eigenvalue deflation of the proof should be of special interest in its own right.

THEOREM 2.3. Suppose $(x_1; -y_1), \ldots, (x_s; -y_s)$ are stationary points for the problem (18) with corresponding stationary values $\sigma_1, \ldots, \sigma_s$. Define

(24)  $u_i := \frac{x_i}{\|x_i\|},$
(25)  $v_i := \frac{y_i}{\|y_i\|}.$

Then the columns of the matrices $\tilde{U} := [u_1, \ldots, u_s]$ and $\tilde{V} := [v_1, \ldots, v_s]$ are the left generalized singular vectors of $A$ and $B$, respectively.

Proof. Suppose $(x_1; -y_1)$ is an associated stationary point of (18) with the stationary value $\sigma_1$. (The ordering in which the stationary values are found is immaterial in the following discussion. We assume $\sigma_1$ is found first.) Taking this vector to be the first basis vector in $N(Z)$, we may write

$\Gamma = [x_1, \hat{\Gamma}], \quad \Delta = [y_1, \hat{\Delta}],$

where $\hat{\Gamma} \in R^{m \times (s-1)}$ and $\hat{\Delta} \in R^{p \times (s-1)}$ are to be defined below. Consider the stacked matrix

$\hat{Z} := \begin{bmatrix} A^T & B^T \\ N(A^T)^T & 0 \\ 0 & N(B^T)^T \\ x_1^T & 0 \end{bmatrix}.$

Note that, due to the last row in $\hat{Z}$, the null space of $\hat{Z}$ is a proper subspace of that of $Z$ with one less dimension. We may therefore use a basis of the null space of $\hat{Z}$ to form the columns of the matrix $(\hat{\Gamma}; -\hat{\Delta})$. In this way, we attain the additional property that

$\hat{\Gamma}^T x_1 = 0.$

Note that the eigenvector of (19) corresponding to the eigenvalue $\sigma_1^2$ is the same as the stationary point for the problem (23) with stationary value $\sigma_1$. Since (23) is simply a coordinate representation of (18), and we already assume that $(x_1; -y_1)$ is a stationary point associated with (18), the eigenvector of (19) corresponding to $\sigma_1^2$ must be the unit vector $e_1 \in R^s$. It follows that

$\hat{\Delta}^T y_1 = 0,$

and hence

$\Delta^T \Delta - \lambda \Gamma^T \Gamma = \begin{bmatrix} y_1^T y_1 - \lambda x_1^T x_1 & 0 \\ 0 & \hat{\Delta}^T \hat{\Delta} - \lambda \hat{\Gamma}^T \hat{\Gamma} \end{bmatrix}.$

We have thus shown that the eigenvalues of the pencil $\hat{\Delta}^T \hat{\Delta} - \lambda \hat{\Gamma}^T \hat{\Gamma}$ are exactly those of the pencil $\Delta^T \Delta - \lambda \Gamma^T \Gamma$ with $\sigma_1^2$ excluded. Note that the submatrix $(\hat{\Gamma}; -\hat{\Delta})$ spans a null subspace of $Z$ that is complementary to the vector $(x_1; -y_1)$. After the first stationary point is found, we may therefore deflate (18) to the problem

(26)  $\max_{\hat{Z}(x; -y) = 0, \ x \ne 0} \frac{\|y\|}{\|x\|}.$

A stationary point of (26) will also be a stationary point of (18), since it gives the same stationary value in both problems. This deflation procedure may be continued until all nonzero stationary values are found. Then, by construction, $\tilde{U}^T \tilde{U} = I$ and $\tilde{V}^T \tilde{V} = I$. Furthermore, we have

$\tilde{U}^T A = \Sigma \tilde{V}^T B,$

which completes the proof.

That is, we have derived two matrices $\tilde{U}$ and $\tilde{V}$ that play the same role as that of $U_2$ and $V_2$ in (9), respectively.
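The deflation argument above translates directly into a procedure. The sketch below is our reading of it, not the authors' implementation: at each step the dominant stationary point of (18) is computed from the pencil (19), the row $[x_i^T, 0]$ is appended to $Z$ as in $\hat{Z}$, and the normalized vectors (24)-(25) are collected; the final assertions verify $\tilde{U}^T \tilde{U} = \tilde{V}^T \tilde{V} = I$ and $\tilde{U}^T A = \Sigma \tilde{V}^T B$.

```python
import numpy as np
from scipy.linalg import null_space, eigh

rng = np.random.default_rng(6)
m, p, n = 6, 5, 4                          # generic case: s = n
A = rng.standard_normal((m, n))
B = rng.standard_normal((p, n))

NA, NB = null_space(A.T), null_space(B.T)
Z = np.vstack([
    np.hstack([A.T, B.T]),
    np.hstack([NA.T, np.zeros((NA.shape[1], p))]),
    np.hstack([np.zeros((NB.shape[1], m)), NB.T]),
])

s = (m + p) - np.linalg.matrix_rank(Z)     # dim N(Z), the number of sigma_i
U_cols, V_cols, sigmas = [], [], []
for _ in range(s):
    W = null_space(Z)                      # basis (Gamma; -Delta) of current N(Z)
    Gamma, Delta = W[:m], -W[m:]
    lam, Vec = eigh(Delta.T @ Delta, Gamma.T @ Gamma)
    v = Vec[:, -1]                         # eigenvector of the largest eigenvalue
    x, y = Gamma @ v, Delta @ v
    sigmas.append(np.linalg.norm(y) / np.linalg.norm(x))
    U_cols.append(x / np.linalg.norm(x))   # u_i as in (24)
    V_cols.append(y / np.linalg.norm(y))   # v_i as in (25)
    # Deflate: the appended row [x^T, 0] forces later x's to be orthogonal to x.
    Z = np.vstack([Z, np.hstack([x, np.zeros(p)])])

U_t, V_t = np.column_stack(U_cols), np.column_stack(V_cols)
Sigma = np.diag(sigmas)
assert np.allclose(U_t.T @ U_t, np.eye(s), atol=1e-8)
assert np.allclose(V_t.T @ V_t, np.eye(s), atol=1e-8)
assert np.allclose(U_t.T @ A, Sigma @ (V_t.T @ B), atol=1e-8)
```

This is, of course, only a dense proof-of-concept; as noted in the summary below, updating techniques [4] would carry out the deflation efficiently, particularly when $C$ is sparse.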

3. Summary. We have discussed a variational formulation for the GSVD of a pair of matrices. In particular, we characterize the role of the left generalized singular vectors in this formulation. We summarize the analogies between the SVD and the GSVD in Table 1. The stationary values in any of the variational formulations in the table give rise to the corresponding singular values. There is a close correspondence between the (generalized) eigenvalue problem and the (generalized) singular value problem, as is indicated in Theorem 1.1 and Theorem 1.2. The results in Theorem 2.1 and Theorem 2.3 apparently are new and shed light on the understanding of the left singular vectors.

Some of the available numerical methods and approaches for computing the GSVD are given in [2, 7, 8, 10]. The deflation process used in the characterization of the left singular vectors can be carried out effectively by updating techniques [4]. We anticipate that the discussion here might lead to a new numerical algorithm, especially when only a few singular values are required and the matrix C is sparse.

Acknowledgment. We want to thank an anonymous referee for the many valuable suggestions that significantly improved this paper.

                           Regular Problem                        Generalized Problem
  Decomposition            U^T A = S V^T                          U_2^T A X = Sigma V_2^T B X
                           see formula (1)                        see formula (13)
  Right singular vector    V                                      X
  Variational formula      max_{x != 0} ||Ax|| / ||x||            max_{Bx != 0} ||Ax|| / ||Bx||
  (including zero          see formula (2)                        see formula (11)
  sigma_i's)
  Left singular vector     U                                      (U_2, V_2)
  Variational formula      max ||y|| / ||x|| subject to           max ||y|| / ||x|| subject to
  (only positive             [A^T  I; N(A^T)^T  0](x; -y) = 0,      Z (x; -y) = 0, x != 0
  sigma_i's)                 x != 0
                           (= max_{x != 0} ||A^T x|| / ||x||)
                           see formula (12)                       see formula (18)

Table 1. Comparison of variational formulations between the SVD and the GSVD.

REFERENCES

[1] B. Kågström, The generalized singular value decomposition and the general (A - λB)-problem, BIT, 24 (1984), pp. 568-583.
[2] Z. Bai and J. W. Demmel, Computing the generalized singular value decomposition, SIAM J. Sci. Comput., 14 (1993), pp. 1464-1486.
[3] B. De Moor and G. H. Golub, Generalized singular value decompositions: A proposal for a standardized nomenclature, Manuscript NA-89-05, Stanford University, 1989.
[4] G. H. Golub and C. F. Van Loan, Matrix Computations, 2nd ed., The Johns Hopkins University Press, Baltimore, MD, 1989.
[5] I. N. Herstein, Topics in Algebra, 2nd ed., Xerox College Publishing, Lexington, MA, 1975.
[6] C. C. Paige and M. A. Saunders, Towards a generalized singular value decomposition, SIAM J. Numer. Anal., 18 (1981), pp. 398-405.
[7] C. C. Paige, Computing the generalized singular value decomposition, SIAM J. Sci. Stat. Comput., 7 (1986), pp. 1126-1146.
[8] G. W. Stewart, Computing the CS-decomposition of a partitioned orthonormal matrix, Numer. Math., 40 (1982), pp. 297-306.
[9] C. F. Van Loan, Generalizing the singular value decomposition, SIAM J. Numer. Anal., 13 (1976), pp. 76-83.
[10] C. F. Van Loan, Computing the CS and the generalized singular value decomposition, Numer. Math., 46 (1985), pp. 479-491.
