Throughout these notes we assume V, W are finite dimensional inner product spaces over C.


1. Upper Triangular Representation

Proposition: Let $T \in L(V)$. There exists an orthonormal basis $B = \{u_1, u_2, \dots, u_n\}$ of $V$ such that $[T]_B$ is upper triangular. In particular, if $A \in M_{n \times n}(\mathbb{C})$ then there exist an upper triangular matrix $\Gamma$ and a unitary matrix $U$ such that $A = U \Gamma U^*$.

Proof: Let $T \in L(V)$. We prove this proposition by induction on $\dim V$. The base case is trivial, because the matrix of $T$ with respect to any basis is upper triangular if $V$ is 1-dimensional. Let $n > 1$ be an integer. Assume that if $\dim W = n-1$ then for any $S \in L(W)$ there exists an orthonormal basis $C$ of $W$ such that $[S]_C$ is upper triangular. Let $n = \dim V$. Let $\lambda$ be an eigenvalue of $T$ and $v$ a corresponding eigenvector with $\|v\| = 1$. Let $W = (\operatorname{span}(v))^\perp$. Then $\dim W = n-1$. Consider that for any orthonormal basis $C$ of $W$, $B = \{v\} \cup C$ is an orthonormal basis of $V$. In such a basis $B$ we have
$$[T]_B = \begin{pmatrix} \lambda & * & \cdots & * \\ 0 & b_{11} & \cdots & b_{1(n-1)} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & b_{(n-1)1} & \cdots & b_{(n-1)(n-1)} \end{pmatrix}.$$
In particular, there exists $S \in L(W)$ such that
$$[S]_C = \begin{pmatrix} b_{11} & b_{12} & \cdots & b_{1(n-1)} \\ b_{21} & b_{22} & \cdots & b_{2(n-1)} \\ \vdots & \vdots & \ddots & \vdots \\ b_{(n-1)1} & b_{(n-1)2} & \cdots & b_{(n-1)(n-1)} \end{pmatrix}.$$
By induction we can pick an orthonormal basis $C$ of $W$ such that this matrix is upper triangular. Then $B$ is an orthonormal basis of $V$ and $[T]_B$ is upper triangular.

Now let $A \in M_{n \times n}(\mathbb{C})$. Then there exists $T \in L(\mathbb{C}^n)$ such that its standard matrix is $A$. Then there exists an orthonormal basis $B = \{v_1, v_2, \dots, v_n\}$ of $\mathbb{C}^n$ such that $[T]_B$ is upper triangular. In particular, using $S$ to denote the standard basis,
$$\Gamma = [T]_B = [I_n]_{BS}\, A\, [I_n]_{SB} = [I_n]_{SB}^{-1}\, A\, [I_n]_{SB}$$
is upper triangular. Let $U = [I_n]_{SB}$. Clearly, $U = [v_1\ v_2\ \cdots\ v_n]$ is unitary, hence $U^{-1} = U^*$ and $A = U \Gamma U^*$.

Remark: This argument does not hold over $\mathbb{R}$, since a real matrix may have complex eigenvalues and eigenvectors. Indeed, we know that the entries on the diagonal of $\Gamma$ are the eigenvalues!

Corollary: Let $X$ be a real inner product space. Let $T \in L(X)$ have only real eigenvalues. Then there exists an orthonormal basis $B = \{u_1, u_2, \dots, u_n\}$ of $X$ such that $[T]_B$ is upper triangular. In particular, if $A \in M_{n \times n}(\mathbb{R})$ only has real eigenvalues, then there exist an upper triangular matrix $\Gamma$ and a unitary (orthogonal) matrix $U$ such that $A = U \Gamma U^* = U \Gamma U^T$.

Proof: Begin by copying the proof of the previous proposition with $\mathbb{R}$ in place of $\mathbb{C}$. It only remains to show that the eigenvalues of $S \in L(W)$ are real. Notice that
$$\det([T]_B - zI_n) = (\lambda - z)\det([S]_C - zI_{n-1}).$$
This means that the eigenvalues of $S$ are eigenvalues of $T$, and hence are real, which completes the proof by induction.
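
The factorization $A = U \Gamma U^*$ is exactly what numerical libraries call the (complex) Schur decomposition, so it can be sanity-checked numerically. A minimal sketch, assuming NumPy and SciPy are available; the random test matrix and seed are arbitrary choices, not part of the notes:

```python
# Numerical check of the proposition: A = U Gamma U* with U unitary and
# Gamma upper triangular (the complex Schur decomposition).
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

# schur with output='complex' returns Gamma (upper triangular) and U (unitary).
Gamma, U = schur(A, output='complex')

assert np.allclose(A, U @ Gamma @ U.conj().T)      # A = U Gamma U*
assert np.allclose(U.conj().T @ U, np.eye(4))      # U is unitary
assert np.allclose(Gamma, np.triu(Gamma))          # Gamma is upper triangular
# As in the remark: the diagonal of Gamma consists of the eigenvalues of A.
assert np.allclose(np.sort_complex(np.diag(Gamma)),
                   np.sort_complex(np.linalg.eigvals(A)))
```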

2. Spectral Theorem for Self-Adjoint and Normal Operators

$T \in L(V)$ is called self-adjoint if $T^* = T$. A matrix $A \in M_{n \times n}(\mathbb{C})$ is called self-adjoint or Hermitian if $A^* = A$.

Proposition: Let $T \in L(V)$ be self-adjoint. Then the eigenvalues of $T$ are real and there exists an orthonormal basis of eigenvectors. In particular, if $A \in M_{n \times n}(\mathbb{C})$ is Hermitian, then there exist a unitary matrix $U$ and a real diagonal matrix $D$ such that $A = UDU^*$.

Proof: Let $T \in L(V)$ be self-adjoint. Pick an orthonormal basis $B$ of $V$ such that $A = [T]_B$ is upper triangular. $A$ is Hermitian, i.e. self-adjoint (as $T$ is). Then the entries above the main diagonal must be zero, as the entries below the main diagonal are zero. Moreover, the entries on the diagonal are real since they satisfy the equation $z = \bar{z}$, which requires that their imaginary parts are 0. Thus $A$ is a real diagonal matrix, so the eigenvalues of $T$ are real. Furthermore, the basis $B$ consists of orthonormal eigenvectors!

Now let $A \in M_{n \times n}(\mathbb{C})$ be Hermitian. There exists a unitary matrix $U$ such that $A = UDU^*$ and $D$ is upper triangular. Then $D = U^* A U$ and
$$D^* = (U^* A U)^* = U^* A^* (U^*)^* = U^* A U = D.$$
Thus $D$ is real and diagonal.

As a consequence of this proposition, eigenvectors of a self-adjoint operator corresponding to different eigenvalues must be orthogonal:

Corollary: Let $T \in L(V)$ be self-adjoint and $\lambda, \mu \in \sigma(T)$ (eigenvalues) corresponding to eigenvectors $u, v$ respectively. If $\lambda \neq \mu$ then $u \perp v$.
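
A quick numerical illustration of the matrix form of this spectral theorem, assuming NumPy; numpy.linalg.eigh is the standard routine for Hermitian matrices, and the test matrix is an arbitrary choice:

```python
# Spectral theorem for a Hermitian matrix: A = U D U* with U unitary
# and D real diagonal.
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = B + B.conj().T                       # A* = A, so A is Hermitian

# eigh returns real eigenvalues and an orthonormal basis of eigenvectors
# (the columns of U).
eigvals, U = np.linalg.eigh(A)

assert np.isrealobj(eigvals)                               # eigenvalues are real
assert np.allclose(U.conj().T @ U, np.eye(4))              # columns orthonormal
assert np.allclose(A, U @ np.diag(eigvals) @ U.conj().T)   # A = U D U*
```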

An operator $T \in L(V)$ (or matrix $T \in M_{n \times n}(\mathbb{C})$) is called normal if $T^* T = T T^*$. If $T \in L(V)$ is self-adjoint ($T^* = T$) or unitary ($T^* T = I = T T^*$) then $T$ is normal.

Proposition: Let $T \in L(V)$ be normal. Then $T$ has an orthonormal basis of eigenvectors. In particular, if $A \in M_{n \times n}(\mathbb{C})$ is normal, then there exist a unitary matrix $U$ and a diagonal matrix $D$ such that $A = UDU^*$.

Proof: Let $T \in L(V)$ be normal. Pick an orthonormal basis $B$ of $V$ such that $A = [T]_B$ is upper triangular. $A$ is normal. It suffices to show that $A$ is diagonal. We will do this by induction on the size of $A$. Clearly, if $A$ is a $1 \times 1$ matrix, then it is diagonal and $T$ has an orthonormal basis of eigenvectors. Let $n > 1$ be an integer. Assume any $(n-1) \times (n-1)$ matrix that is normal and upper triangular must be diagonal. Let $A \in M_{n \times n}$ be normal and upper triangular. Then
$$A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ 0 & & & \\ \vdots & & B & \\ 0 & & & \end{pmatrix},$$
where $B \in M_{(n-1) \times (n-1)}$ is upper triangular. By direct calculation,
$$(A^* A)_{11} = \bar{a}_{11} a_{11} = |a_{11}|^2 \quad \text{and} \quad (A A^*)_{11} = \sum_{j=1}^{n} |a_{1j}|^2.$$
Since $A$ is normal, this implies $a_{12} = \cdots = a_{1n} = 0$. Hence
$$A = \begin{pmatrix} a_{11} & 0 \\ 0 & B \end{pmatrix}.$$

Then
$$A^* A = \begin{pmatrix} |a_{11}|^2 & 0 \\ 0 & B^* B \end{pmatrix} = A A^* = \begin{pmatrix} |a_{11}|^2 & 0 \\ 0 & B B^* \end{pmatrix}$$
implies $B^* B = B B^*$. So $B$ is normal and, by induction, diagonal. This means $A$ is diagonal.

Proposition: $T \in L(V)$ is normal iff for all $x \in V$, $\|Tx\| = \|T^* x\|$.

Proof: Let $T \in L(V)$. Assume $T$ is normal. Let $x \in V$. Then
$$\|Tx\|^2 = (Tx, Tx) = (x, T^* T x) = (x, T T^* x) = (T^* x, T^* x) = \|T^* x\|^2,$$
so $\|Tx\| = \|T^* x\|$. Now assume for all $x \in V$, $\|Tx\| = \|T^* x\|$. Let $x, y \in V$. Using polarization identities,
$$(T^* T x, y) = (Tx, Ty) = \frac{1}{4} \sum_{\alpha = \pm 1, \pm i} \alpha \|Tx + \alpha Ty\|^2 = \frac{1}{4} \sum_{\alpha = \pm 1, \pm i} \alpha \|T(x + \alpha y)\|^2$$
$$= \frac{1}{4} \sum_{\alpha = \pm 1, \pm i} \alpha \|T^*(x + \alpha y)\|^2 = \frac{1}{4} \sum_{\alpha = \pm 1, \pm i} \alpha \|T^* x + \alpha T^* y\|^2 = (T^* x, T^* y) = (T T^* x, y).$$
Hence $T^* T = T T^*$, so $T$ is normal.

Suggested Homework: 2.1, 2.3, 2.4, and 2.10
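
As a numerical aside, assuming NumPy and SciPy: for a normal matrix the triangular Schur factor comes out diagonal, and $\|Ax\| = \|A^* x\|$ holds for every $x$. The unitary test matrix below is manufactured arbitrarily via a QR factorization:

```python
# A unitary matrix is normal; its Schur triangular factor should be diagonal.
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(2)
A, _ = np.linalg.qr(rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4)))

assert np.allclose(A.conj().T @ A, A @ A.conj().T)   # A is normal

D, U = schur(A, output='complex')
assert np.allclose(D, np.diag(np.diag(D)))           # triangular factor is diagonal

x = rng.standard_normal(4)
assert np.isclose(np.linalg.norm(A @ x), np.linalg.norm(A.conj().T @ x))
```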

3. Polar and Singular Value Decompositions

A self-adjoint operator $T \in L(V)$ is positive definite if for all non-zero $x \in V$, $(Tx, x) > 0$, and positive semidefinite if for all $x \in V$, $(Tx, x) \geq 0$. We use the notations $T > 0$ and $T \geq 0$ for positive definite and semidefinite respectively.

Proposition: Let $T \in L(V)$ be self-adjoint. Then
(1) $T > 0$ iff the eigenvalues of $T$ are positive.
(2) $T \geq 0$ iff the eigenvalues of $T$ are non-negative.

Proof: Let $T \in L(V)$ be self-adjoint. There exists an orthonormal basis $B = \{v_1, v_2, \dots, v_n\}$ of $V$ of eigenvectors of $T$ such that $D = [T]_B = \operatorname{diag}\{\lambda_1, \lambda_2, \dots, \lambda_n\}$ is diagonal, where $\lambda_1, \lambda_2, \dots, \lambda_n$ are the corresponding real eigenvalues of $T$ (counting multiplicity). Assume $T > 0$. Then $(Tv_i, v_i) = \lambda_i > 0$ for $i = 1, 2, \dots, n$. Now assume $\lambda_i > 0$ for $i = 1, 2, \dots, n$. Let $v \in V$ be non-zero. There exist $c_1, c_2, \dots, c_n \in \mathbb{C}$, not all zero, such that $v = \sum_{i=1}^{n} c_i v_i$. Then $(Tv, v) = \sum_{i=1}^{n} |c_i|^2 \lambda_i > 0$. Thus $T > 0$. This proves (1). We get a proof of (2) by replacing all $>$ with $\geq$.

Proposition: Let $T \in L(V)$ be a positive semidefinite, self-adjoint operator. Then there exists a unique positive semidefinite, self-adjoint operator $S \in L(V)$ such that $S^2 = T$. $S$ is denoted $\sqrt{T}$ and is called the (positive) square root of $T$.

Proof: Let $T \in L(V)$ be a positive semidefinite, self-adjoint operator. Pick an orthonormal basis $B = \{v_1, v_2, \dots, v_n\}$ of $V$ of eigenvectors of $T$ corresponding to the eigenvalues $\lambda_1, \lambda_2, \dots, \lambda_n$ (counting multiplicity). Since $T \geq 0$, the eigenvalues are non-negative. Let $S \in L(V)$ be given by $[S]_B = \operatorname{diag}\{\sqrt{\lambda_1}, \sqrt{\lambda_2}, \dots, \sqrt{\lambda_n}\}$. The eigenvalues of $S$ are $\sqrt{\lambda_i} \geq 0$ for $i = 1, 2, \dots, n$, so $S$ is a positive semidefinite, self-adjoint operator. Clearly $S^2 = T$ since $[S]_B^2 = [T]_B$.

It only remains to show that $S$ is unique. Suppose $R \in L(V)$ is a positive semidefinite, self-adjoint operator satisfying $R^2 = T$. Let $\mu_1, \mu_2, \dots, \mu_n$ be the eigenvalues of $R$ corresponding to an orthonormal basis $C = \{u_1, u_2, \dots, u_n\}$ of $V$ consisting of eigenvectors of $R$. Then $[T]_C = [R^2]_C = \operatorname{diag}\{\mu_1^2, \mu_2^2, \dots, \mu_n^2\}$. So (up to a re-ordering) the eigenvalues $\lambda_i$ of $T$ satisfy $\lambda_i = \mu_i^2$ for $i = 1, 2, \dots, n$. Hence, since $R$ is positive semidefinite, $\mu_i = \sqrt{\lambda_i}$ for $i = 1, 2, \dots, n$. So $R = S$. Thus $S$ is unique.

Consider an operator $T \in L(V, W)$. Its Hermitian square is the operator $T^* T \in L(V)$. The Hermitian square is a positive semidefinite, self-adjoint operator:
$$(T^* T)^* = T^* (T^*)^* = T^* T \quad \text{and, for all } x \in V, \quad (T^* T x, x) = (Tx, Tx) = \|Tx\|^2 \geq 0.$$

Let $T \in L(V, W)$. Let $R = \sqrt{T^* T} \in L(V)$, the square root of the Hermitian square. This (positive semidefinite, self-adjoint) operator $R$ is called the modulus of the operator $T$, and is denoted $|T|$.

Proposition: Let $T \in L(V, W)$. For all $x \in V$, $\||T|x\| = \|Tx\|$.

Proof: Let $T \in L(V, W)$ and $x \in V$. Then
$$\||T|x\|^2 = (|T|x, |T|x) = (|T|^* |T| x, x) = (|T|^2 x, x) = (T^* T x, x) = (Tx, Tx) = \|Tx\|^2.$$
Thus $\||T|x\| = \|Tx\|$. As a consequence:

Corollary: Let $T \in L(V, W)$. Then $\operatorname{Ker} |T| = \operatorname{Ker} T = (\operatorname{range} |T|)^\perp$.

The first equality follows immediately from the previous proposition. The second equality makes use of the fact that $\operatorname{Ker} S = (\operatorname{range} S^*)^\perp$ for $S \in L(V)$, and $|T|^* = |T|$.
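
A sketch of computing the modulus $|T| = \sqrt{T^* T}$ through the spectral decomposition of the Hermitian square, assuming NumPy; the clipping guard and the real test matrix are my choices, not part of the notes:

```python
# Build |T| = V sqrt(D) V* from the eigendecomposition of T*T and verify
# |T|^2 = T*T and || |T|x || = ||Tx||.
import numpy as np

rng = np.random.default_rng(3)
T = rng.standard_normal((3, 3))

H = T.T @ T                              # Hermitian square (real case: T* = T^T)
lam, V = np.linalg.eigh(H)               # lam >= 0 since H is positive semidefinite
lam = np.clip(lam, 0, None)              # guard against tiny negative round-off
modT = V @ np.diag(np.sqrt(lam)) @ V.T   # |T| = V sqrt(D) V*

assert np.allclose(modT @ modT, H)       # |T|^2 = T*T
x = rng.standard_normal(3)
assert np.isclose(np.linalg.norm(modT @ x), np.linalg.norm(T @ x))
```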

Proposition (Polar Decomposition): Let $T \in L(V)$. Then $T = U|T|$ for some unitary operator $U \in L(V)$.

Proof: Let $T \in L(V)$. Let $x \in \operatorname{range} |T|$. Then there exists $v \in V$ such that $x = |T|v$. Define $U_0 : \operatorname{range} |T| \to V$ by $U_0 x = Tv$. We must make sure $U_0$ is well-defined (has a unique output). Suppose $v_1 \in V$ satisfies $x = |T|v_1$. But then $v - v_1 \in \operatorname{Ker} |T| = \operatorname{Ker} T$, so $Tv = Tv_1$ as required. It is easy to check that $U_0$ is a linear transformation. By construction $T = U_0 |T|$ and
$$\|U_0 x\| = \|Tv\| = \||T|v\| = \|x\|,$$
so $U_0$ is an isometry. Also $(\operatorname{range} |T|)^\perp = \operatorname{Ker} |T|^* = \operatorname{Ker} |T| = \operatorname{Ker} T$. Pick a unitary operator $U_1 : \operatorname{Ker} T \to (\operatorname{range} T)^\perp = \operatorname{Ker} T^*$, which can be done since $\dim(\operatorname{Ker} T) = \dim(\operatorname{Ker} T^*)$ (Rank Theorem for square matrices!). Define $U \in L(V)$ by $U = U_0 P_{\operatorname{range} |T|} + U_1 P_{\operatorname{Ker} T}$, where $P_X$ denotes orthogonal projection onto the subspace $X$. (ELFY) $U$ is a unitary transformation and $T = U|T|$.

Let $T \in L(V, W)$. The eigenvalues of $|T| = \sqrt{T^* T}$ are called the singular values of $T$. In particular, if $\lambda$ is an eigenvalue of $T^* T$ then $\sqrt{\lambda}$ is a singular value of $T$.

Let $T \in L(V, W)$. Let $\sigma_1, \sigma_2, \dots, \sigma_n$ be the singular values of $T$ (counting multiplicity). Assume $\sigma_1, \sigma_2, \dots, \sigma_r$ are the non-zero singular values, so $\sigma_k = 0$ for $k = r+1, \dots, n$. By definition, $\sigma_1^2, \sigma_2^2, \dots, \sigma_n^2$ are the eigenvalues of $T^* T \in L(V)$. Let $v_1, v_2, \dots, v_n$ be an orthonormal basis of $V$ consisting of eigenvectors of $T^* T$ corresponding to the eigenvalues $\sigma_1^2, \sigma_2^2, \dots, \sigma_n^2$. Such a basis can be selected since $T^* T$ is self-adjoint. So $T^* T v_i = \sigma_i^2 v_i$ for $i = 1, 2, \dots, n$.
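
A numerical illustration of the polar decomposition, assuming SciPy is available; scipy.linalg.polar returns a unitary factor and a positive semidefinite self-adjoint factor, matching the proposition:

```python
# Polar decomposition T = U |T| of a square matrix.
import numpy as np
from scipy.linalg import polar

rng = np.random.default_rng(4)
T = rng.standard_normal((4, 4))

U, P = polar(T)                                   # T = U @ P, with P = |T|
assert np.allclose(T, U @ P)
assert np.allclose(U.T @ U, np.eye(4))            # U is unitary (orthogonal here)
assert np.all(np.linalg.eigvalsh(P) >= -1e-12)    # P is positive semidefinite
assert np.allclose(P @ P, T.T @ T)                # P^2 = T*T, so P = |T|
```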

Let $w_i = \frac{1}{\sigma_i} T v_i$ for $i = 1, 2, \dots, r$. Then the vectors $w_1, w_2, \dots, w_r$ are orthonormal: for $i, j \in \{1, 2, \dots, r\}$,
$$(w_i, w_j) = \left( \tfrac{1}{\sigma_i} T v_i, \tfrac{1}{\sigma_j} T v_j \right) = \tfrac{1}{\sigma_i \sigma_j} (T^* T v_i, v_j) = \tfrac{1}{\sigma_i \sigma_j} (\sigma_i^2 v_i, v_j) = \tfrac{\sigma_i}{\sigma_j} (v_i, v_j) = \delta_{ij},$$
where $\delta_{ij}$ is 1 when $i = j$ and 0 otherwise (the Kronecker delta).

Claim: The operator $T$ can be represented by
$$T = \sum_{k=1}^{r} \sigma_k\, w_k\, v_k^*.$$
To justify the claim, it suffices to check that these operators coincide on a basis of $V$. For $i = 1, 2, \dots, r$,
$$\left( \sum_{k=1}^{r} \sigma_k\, w_k\, v_k^* \right) v_i = \sum_{k=1}^{r} \sigma_k w_k (v_i, v_k) = \sigma_i w_i = T v_i.$$
For $i = r+1, \dots, n$, both $\left( \sum_{k=1}^{r} \sigma_k\, w_k\, v_k^* \right) v_i = 0$ and $T v_i = 0$, since $\sigma_i = 0$ implies
$$\|T v_i\|^2 = (T v_i, T v_i) = (T^* T v_i, v_i) = (0, v_i) = 0.$$
So since the operators agree on an (orthonormal) basis of $V$, they must be equal.

The representation $T = \sum_{k=1}^{r} \sigma_k\, w_k\, v_k^*$ is called a singular value decomposition of the operator $T$.
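
A sketch of this construction, assuming NumPy: diagonalize $T^* T$, form $w_i = \frac{1}{\sigma_i} T v_i$ for the non-zero singular values, and check that the rank-one sum reproduces $T$:

```python
# Build an SVD of T from the eigenvectors of T*T.
import numpy as np

rng = np.random.default_rng(5)
T = rng.standard_normal((4, 3))

lam, V = np.linalg.eigh(T.T @ T)         # columns of V: eigenvectors v_i
order = np.argsort(lam)[::-1]            # non-increasing eigenvalues
lam, V = lam[order], V[:, order]
sigma = np.sqrt(np.clip(lam, 0, None))   # singular values

r = int(np.sum(sigma > 1e-12))
W = T @ V[:, :r] / sigma[:r]             # w_i = (1/sigma_i) T v_i, as columns
assert np.allclose(W.T @ W, np.eye(r))   # the w_i are orthonormal

T_rebuilt = sum(sigma[k] * np.outer(W[:, k], V[:, k]) for k in range(r))
assert np.allclose(T, T_rebuilt)         # T = sum_k sigma_k w_k v_k*
```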

Proposition: Let $T \in L(V, W)$. Suppose $T$ can be represented by
$$T = \sum_{k=1}^{r} \sigma_k\, w_k\, v_k^*,$$
where $\sigma_i > 0$ for $i = 1, 2, \dots, r$, the vectors $v_1, v_2, \dots, v_r$ are orthonormal in $V$, and $w_1, w_2, \dots, w_r$ are orthonormal in $W$. Then this representation is a singular value decomposition of $T$.

Proof: Assume $T \in L(V, W)$ has the representation described in the proposition. It suffices to show that $T^* T v_i = \sigma_i^2 v_i$ for $i = 1, 2, \dots, r$ and that this accounts for all non-zero eigenvalues of $T^* T$. Let $i \in \{1, 2, \dots, r\}$. First notice that
$$T^* = \sum_{k=1}^{r} \sigma_k\, v_k\, w_k^* \quad \text{and} \quad T v_i = \sum_{k=1}^{r} \sigma_k w_k (v_i, v_k) = \sigma_i w_i.$$
Thus
$$T^* T v_i = \sigma_i \sum_{k=1}^{r} \sigma_k v_k (w_i, w_k) = \sigma_i^2 v_i.$$
Extend $v_1, v_2, \dots, v_r$ to an orthonormal basis $v_1, v_2, \dots, v_n$ of eigenvectors of $T^* T$. Let $\lambda$ be an eigenvalue of $T^* T$ not in $\{\sigma_i^2 : i = 1, 2, \dots, r\}$. Then $T^* T v_i = \lambda v_i$ for some $i = r+1, \dots, n$. However, $T v_i = \sum_{k=1}^{r} \sigma_k w_k (v_i, v_k) = 0$, so $T^* T v_i = 0$ and thus $\lambda = 0$.
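
And a check in the other direction, assuming NumPy: assemble $T$ from chosen orthonormal vectors and positive coefficients, then confirm those coefficients are exactly the non-zero singular values of $T$:

```python
# The converse: any rank-one sum with orthonormal v's and w's and positive
# coefficients is a singular value decomposition.
import numpy as np

rng = np.random.default_rng(13)
Q1, _ = np.linalg.qr(rng.standard_normal((4, 2)))   # columns w_1, w_2 in W
Q2, _ = np.linalg.qr(rng.standard_normal((3, 2)))   # columns v_1, v_2 in V
sigma = np.array([3.0, 1.5])                        # chosen positive sigma_i

T = sum(sigma[k] * np.outer(Q1[:, k], Q2[:, k]) for k in range(2))

s = np.linalg.svd(T, compute_uv=False)              # singular values of T
assert np.allclose(s[:2], sigma)                    # sigma_1, sigma_2 recovered
assert np.allclose(s[2:], 0)                        # the rest are zero
```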

Matrix representation of a singular value decomposition: By fixing orthonormal bases for $V$ and $W$ and working in these coordinates, we may assume $T \in L(\mathbb{F}^n, \mathbb{F}^m)$. Let
$$T = \sum_{k=1}^{r} \sigma_k\, w_k\, v_k^*$$
be a singular value decomposition of $T$. This can be rewritten in matrix form as $T = \tilde{B} D \tilde{A}^*$, where $D = \operatorname{diag}\{\sigma_1, \sigma_2, \dots, \sigma_r\} \in M_{r \times r}$, $\tilde{B} \in M_{m \times r}$ has columns $w_k$ ($k = 1, 2, \dots, r$), and $\tilde{A} \in M_{n \times r}$ has columns $v_k$ ($k = 1, 2, \dots, r$). In particular, $\tilde{A}$ and $\tilde{B}$ are isometries, as their columns are orthonormal. In the case when $m = n = r$, $\tilde{A}$ and $\tilde{B}$ are unitary.

By extending $v_1, v_2, \dots, v_r$ and $w_1, w_2, \dots, w_r$ to orthonormal bases $v_1, v_2, \dots, v_n$ and $w_1, w_2, \dots, w_m$ of $V, W$ respectively, we get unitary matrices $A = [v_1\ v_2\ \cdots\ v_n] \in M_{n \times n}$ and $B = [w_1\ w_2\ \cdots\ w_m] \in M_{m \times m}$ such that $T = B \Sigma A^*$, where $\Sigma \in M_{m \times n}$ has $D$ in the upper left $r \times r$ block and zeroes elsewhere. This matrix form is also often called a singular value decomposition of $T$.

(ELFY) Write down a random $3 \times 2$ matrix and perform the singular value decomposition (in matrix form); a numerical version is sketched below.
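
One way to do that exercise numerically, assuming NumPy; np.linalg.svd with full matrices returns precisely the unitary $B$, the singular values, and $A^*$:

```python
# SVD of a random 3x2 matrix in matrix form: T = B Sigma A*.
import numpy as np

rng = np.random.default_rng(6)
T = rng.standard_normal((3, 2))

B, s, A_star = np.linalg.svd(T)          # full SVD: B is 3x3, A_star is 2x2
Sigma = np.zeros((3, 2))
Sigma[:2, :2] = np.diag(s)               # singular values in the upper-left block

assert np.allclose(T, B @ Sigma @ A_star)           # T = B Sigma A*
assert np.allclose(B.T @ B, np.eye(3))              # B unitary (orthogonal)
assert np.allclose(A_star @ A_star.T, np.eye(2))    # A unitary (orthogonal)
```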

From a singular value decomposition to a polar decomposition: Let $T = B \Sigma A^*$ be a singular value decomposition of $T \in L(V)$ in matrix form. Then set $U = BA^*$. $U$ is unitary since
$$U^{-1} = (BA^*)^{-1} = (A^*)^{-1} B^{-1} = A B^* = (BA^*)^* = U^*.$$
Notice that
$$T^* T = A \Sigma^* B^* B \Sigma A^* = A \Sigma^2 A^* = (A \Sigma A^*)^2,$$
so $|T| = A \Sigma A^*$. Then
$$U |T| = (BA^*)(A \Sigma A^*) = B \Sigma A^* = T$$
is a polar decomposition.

Suggested Homework: 3.1, 3.3 (first and third matrices only), 3.5(a), 3.6(a)(b), and 3.9
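
As a numerical aside, a sketch of this SVD-to-polar recipe, assuming NumPy; the variable names mirror the notation above:

```python
# From T = B Sigma A*: U = B A* is unitary, |T| = A Sigma A*, and T = U |T|.
import numpy as np

rng = np.random.default_rng(7)
T = rng.standard_normal((4, 4))

Bm, s, A_star = np.linalg.svd(T)
Sigma = np.diag(s)
A = A_star.T                             # real case: A* is just the transpose

U = Bm @ A_star                          # U = B A*
modT = A @ Sigma @ A_star                # |T| = A Sigma A*

assert np.allclose(U.T @ U, np.eye(4))   # U is unitary
assert np.allclose(modT @ modT, T.T @ T) # |T|^2 = T*T
assert np.allclose(T, U @ modT)          # polar decomposition T = U |T|
```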

4. What Do Singular Values Describe?

Essentially a singular value decomposition is a diagonalization of $T \in L(V, W)$ with respect to a pair of orthonormal bases (one for $V$, one for $W$). So what does it tell us? To gain the appropriate intuition, consider the case where $T \in L(\mathbb{R}^n, \mathbb{R}^m)$. Let $\mathbb{B} = \{x \in \mathbb{R}^n : \|x\| \leq 1\}$ (the closed unit ball). Let $\sigma_1, \sigma_2, \dots, \sigma_n$ be the singular values of $T$ (counting multiplicity, with 0's at the end of the list). Let $B, C$ be the corresponding orthonormal bases of $V, W$ from the singular value decomposition. Then the matrix of $T$ with respect to these bases, $[T]_{CB}$, has the diagonal matrix $D = \operatorname{diag}\{\sigma_1, \sigma_2, \dots, \sigma_r\}$ (corresponding to the non-zero singular values $\sigma_1, \sigma_2, \dots, \sigma_r$) as its upper left $r \times r$ block and zeros everywhere else.

Let $v \in \mathbb{B}$. Set $x = [v]_B = [I_V]_{BS}\, v$, where $S$ denotes the standard basis. Since the columns of $[I_V]_{SB} = [I_V]_{BS}^{-1}$ are the orthonormal basis vectors of $B$, this matrix is unitary (an isometry), and hence $x \in \mathbb{B}$. Then $x = (x_1\ x_2\ \cdots\ x_n)^T$ satisfies $\sum_{i=1}^{n} x_i^2 \leq 1$. Let $y = [T]_{CB}\, x$. Clearly $y_i = \sigma_i x_i$ for $i = 1, 2, \dots, r$, and $y_j = 0$ for $j = r+1, \dots, m$. Thus $y$ satisfies
$$\sum_{i=1}^{r} \frac{y_i^2}{\sigma_i^2} \leq 1.$$
So in the first $r$ orthonormal basis directions in the basis $C$ of $\mathbb{R}^m$ (corresponding to the non-zero singular values), $y$ is on or within the ellipsoid given by $\sum_{i=1}^{r} y_i^2 / \sigma_i^2 = 1$. The last $n - r$ orthonormal basis directions in the domain are killed (sent to 0).

Furthermore, when $x$ is on the unit sphere (the boundary of the unit ball) and $r = n$, $y$ will be on the ellipsoid above; in general $T(\mathbb{B})$ is the closed ellipsoidal ball with axes of length $2\sigma_1, 2\sigma_2, \dots, 2\sigma_r$ with respect to the first $r$ coordinates given by $C$.
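
A quick numerical check of this picture, assuming NumPy; for a random square matrix all singular values are non-zero (generically), so a point on the unit sphere should land exactly on the ellipsoid:

```python
# The unit sphere maps onto the ellipsoid with semi-axes sigma_1, ..., sigma_n
# in the coordinates given by the basis C (the columns of B).
import numpy as np

rng = np.random.default_rng(8)
T = rng.standard_normal((3, 3))
Bm, s, A_star = np.linalg.svd(T)         # T = B Sigma A*, generically all s_i > 0

x = rng.standard_normal(3)
x /= np.linalg.norm(x)                   # a point on the unit sphere

y = Bm.T @ (T @ x)                       # coordinates of Tx in the basis C
assert np.isclose(np.sum(y**2 / s**2), 1.0)   # sum_i y_i^2 / sigma_i^2 = 1
```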

(ELFY) Let $B = \{v_i : i = 1, 2, \dots, n\}$ be an orthonormal basis of $V$. Then $\|x\| = \|[x]_B\|$.

Now let's discuss the notion of operator norm. Let $T \in L(V, W)$. The operator norm of $T$ is given by
$$\|T\| = \max\{\|Tx\| : x \in V,\ \|x\| \leq 1\}.$$

First we justify that the largest singular value is the operator norm. Let $x \in V$ with $\|x\| \leq 1$. Suppose $T = \sum_{k=1}^{r} \sigma_k\, w_k\, v_k^*$ is a singular value decomposition with respect to two orthonormal bases $B = \{v_i : i = 1, 2, \dots, n\}$ and $C = \{w_i : i = 1, 2, \dots, m\}$ of $V$ and $W$ respectively. Without loss of generality assume that the singular values are arranged in non-increasing order. Let $[x]_B = (c_1\ c_2\ \cdots\ c_n)^T$. (ELFY) $\sum_{i=1}^{n} |c_i|^2 \leq 1$. Then $[Tx]_C = [T]_{CB} [x]_B = (\sigma_1 c_1\ \sigma_2 c_2\ \cdots\ \sigma_n c_n)^T$ (padded with zeros if $m > n$), and
$$\|Tx\|^2 = \|[T]_{CB} [x]_B\|^2 = \sum_{i=1}^{n} \sigma_i^2 |c_i|^2 \leq \sigma_1^2 \sum_{i=1}^{n} |c_i|^2 \leq \sigma_1^2,$$
so $\|Tx\| \leq \sigma_1$. Moreover $\|Tv_1\| = \|[T]_{CB} [v_1]_B\| = \sigma_1$. Therefore $\|T\| = \sigma_1$.

(ELFY) Show that the operator norm satisfies the four fundamental properties of a norm:
(1) Homogeneity: For all $T \in L(V, W)$ and for all $c \in \mathbb{F}$, $\|cT\| = |c| \|T\|$.
(2) Triangle Inequality: For all $T, S \in L(V, W)$, $\|T + S\| \leq \|T\| + \|S\|$.
(3) Positivity (aka non-negativity): For all $T \in L(V, W)$, $\|T\| \geq 0$.
(4) Definiteness (aka non-degeneracy): Let $T \in L(V, W)$. Then $\|T\| = 0$ iff $T = 0_{L(V, W)}$.
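
Numerically, assuming NumPy, the induced 2-norm np.linalg.norm(A, 2) is computed as the largest singular value, so the identity $\|T\| = \sigma_1$ can be checked directly:

```python
# Operator norm equals the largest singular value.
import numpy as np

rng = np.random.default_rng(9)
A = rng.standard_normal((4, 3))

sigma = np.linalg.svd(A, compute_uv=False)        # non-increasing singular values
assert np.isclose(np.linalg.norm(A, 2), sigma[0]) # ||A|| = sigma_1

# The bound ||Ax|| <= sigma_1 ||x|| on a random sample of vectors:
X = rng.standard_normal((3, 1000))
assert np.all(np.linalg.norm(A @ X, axis=0)
              <= sigma[0] * np.linalg.norm(X, axis=0) + 1e-12)
```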

An important and immediate consequence of the homogeneity of the operator norm: Let $x \in V$ be non-zero and $c = \|x\|$. Note $c \in \mathbb{R}$ and $c > 0$. Set $y = \frac{1}{c} x$. Then $\|y\| = 1$, so $\|Ty\| \leq \|T\|$ by the definition of the operator norm. That implies
$$\|Tx\| = \|T(cy)\| = c \|Ty\| \leq c \|T\| = \|T\| \|x\|.$$

The so-called Frobenius norm on $L(V, W)$ is given by
$$\|T\|_{\mathrm{Frob}} = \sqrt{\operatorname{trace}(A^* A)},$$
where $A = [T]_{CB}$ for some orthonormal bases $B, C$ of $V, W$ respectively. This does not depend on the choice of orthonormal bases (so it's well-defined), because a change of basis matrix between orthonormal bases is unitary, and for an $m \times n$ matrix $D$, $(UDV)^* (UDV) = V^* D^* D V$ has the same characteristic polynomial (and thus the same trace) as $D^* D$, for all unitary matrices $U \in M_{m \times m}$ and all unitary matrices $V \in M_{n \times n}$.

How does this one compare to the operator norm? Again, suppose $T = \sum_{k=1}^{r} \sigma_k\, w_k\, v_k^*$ is a singular value decomposition with respect to two orthonormal bases $B$ and $C$ of $V$ and $W$ respectively, with the singular values arranged in non-increasing order. Then $[T]_{CB}$ has zeros everywhere except that its upper left $r \times r$ block is the diagonal matrix $D = \operatorname{diag}\{\sigma_1, \sigma_2, \dots, \sigma_r\}$. So
$$\|T\|_{\mathrm{Frob}}^2 = \operatorname{trace}([T]_{CB}^* [T]_{CB}) = \sum_{i=1}^{r} \sigma_i^2 = \|T\|^2 + \sum_{i=2}^{r} \sigma_i^2 \geq \|T\|^2,$$
hence $\|T\| \leq \|T\|_{\mathrm{Frob}}$. [We could also just recall that the trace of $A^* A$ is the sum of its eigenvalues.]
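
A small comparison of the two norms, assuming NumPy:

```python
# Frobenius norm: sqrt(trace(A*A)) = l2 norm of the singular values,
# and it dominates the operator norm.
import numpy as np

rng = np.random.default_rng(10)
A = rng.standard_normal((4, 3))

sigma = np.linalg.svd(A, compute_uv=False)
frob = np.linalg.norm(A, 'fro')

assert np.isclose(frob, np.sqrt(np.trace(A.T @ A)))
assert np.isclose(frob, np.sqrt(np.sum(sigma**2)))   # ||A||_Frob^2 = sum sigma_i^2
assert np.linalg.norm(A, 2) <= frob                  # ||A|| <= ||A||_Frob
```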

Suppose we have an invertible matrix $A \in M_{n \times n}$ and we want to solve the equation $Ax = b$, but we only have an estimate of $b$. Let $\Delta b$ denote a small perturbation accounting for the error, so instead of solving $Ax = b$ for an exact solution $x$ (because $b$ is not known exactly) we solve $Ay = b + \Delta b$ for an approximate solution $y$. Let $\Delta x = y - x$. So
$$A(x + \Delta x) = b + \Delta b \implies A \Delta x = \Delta b \implies \Delta x = A^{-1} \Delta b.$$
Then, using $\|\Delta x\| = \|A^{-1} \Delta b\| \leq \|A^{-1}\| \|\Delta b\|$ and $\|b\| = \|Ax\| \leq \|A\| \|x\|$,
$$(*) \qquad \frac{\|\Delta x\|}{\|x\|} \leq \frac{\|A^{-1}\| \|\Delta b\|}{\|x\|} = \|A^{-1}\| \frac{\|b\|}{\|x\|} \cdot \frac{\|\Delta b\|}{\|b\|} \leq \|A^{-1}\| \|A\| \frac{\|\Delta b\|}{\|b\|}.$$
Note that $\|A\|$ and $\|A^{-1}\|$ denote operator norms ($A$, $A^{-1}$ are matrices of operators in $L(\mathbb{R}^n)$ with respect to the standard (orthonormal) basis of $\mathbb{R}^n$).

So $\frac{\|\Delta x\|}{\|x\|}$, which is the relative error in the solution, is bounded by the product of the (real) number $C = \|A^{-1}\| \|A\|$ and $\frac{\|\Delta b\|}{\|b\|}$, which is the relative error in the estimate of $b$. This motivates the following definition: Let $A \in M_{n \times n}$ be invertible. The condition number of $A$ is $C = \|A^{-1}\| \|A\|$. The condition number tells you by how much the relative error in the estimate of $b$ is amplified into the relative error in the approximate solution.

How does the condition number relate to singular values? Let $A \in M_{n \times n}$ be invertible. Let $\sigma_1, \sigma_2, \dots, \sigma_n$ be the singular values of $A$ in non-increasing order, so $\sigma_1$ is the largest and $\sigma_n$ is the smallest. We already know that $\|A\| = \sigma_1$. In particular, $\sigma_1^2, \sigma_2^2, \dots, \sigma_n^2$ are the eigenvalues of $A^* A$. Let $B = \{v_1, v_2, \dots, v_n\}$ be an orthonormal basis of $\mathbb{F}^n$ consisting of eigenvectors of $A^* A$. (ELFY) $A^* A$ is invertible, $(A^*)^{-1} = (A^{-1})^*$, and $(A^* A)^{-1} = A^{-1} (A^{-1})^*$. Since $A^* A$ is invertible, none of the singular values can be 0 (recall: a square matrix is invertible iff 0 is not an eigenvalue).
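
A numerical illustration of the bound $(*)$, assuming NumPy; the perturbation size is an arbitrary choice:

```python
# Relative error in x is at most ||A^{-1}|| ||A|| times the relative error in b.
import numpy as np

rng = np.random.default_rng(11)
A = rng.standard_normal((4, 4))
b = rng.standard_normal(4)
db = 1e-6 * rng.standard_normal(4)       # a small perturbation of b

x = np.linalg.solve(A, b)
dx = np.linalg.solve(A, db)              # A dx = db

C = np.linalg.norm(A, 2) * np.linalg.norm(np.linalg.inv(A), 2)
rel_x = np.linalg.norm(dx) / np.linalg.norm(x)
rel_b = np.linalg.norm(db) / np.linalg.norm(b)
assert rel_x <= C * rel_b + 1e-12        # the bound (*) holds
```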

Now let $C = \{w_i = \frac{1}{\sigma_i} A v_i : i = 1, 2, \dots, n\}$. Of course these bases $B, C$ are the orthonormal bases in a singular value decomposition of $A$. Then for $i = 1, 2, \dots, n$ we have $A v_i = \sigma_i w_i$, hence $A^{-1} w_i = \frac{1}{\sigma_i} v_i$. Also, since $A^* A v_i = \sigma_i^2 v_i$, we get $A^* w_i = \frac{1}{\sigma_i} A^* A v_i = \sigma_i v_i$, so $(A^{-1})^* v_i = (A^*)^{-1} v_i = \frac{1}{\sigma_i} w_i$. Thus for $i = 1, 2, \dots, n$,
$$(A^{-1})^* A^{-1} w_i = \frac{1}{\sigma_i} (A^{-1})^* v_i = \frac{1}{\sigma_i^2} w_i,$$
and thereby the singular values of $A^{-1}$ are $\frac{1}{\sigma_1}, \frac{1}{\sigma_2}, \dots, \frac{1}{\sigma_n}$. So $\|A^{-1}\| = \frac{1}{\sigma_n}$, as that is the largest singular value of $A^{-1}$.

In conclusion, the condition number of an invertible matrix $A \in M_{n \times n}$ is
$$C = \|A^{-1}\| \|A\| = \frac{\sigma_1}{\sigma_n},$$
where $\sigma_1$ is the largest singular value of $A$ and $\sigma_n$ is the smallest singular value of $A$. In other words, it is the ratio of the largest to the smallest singular value.

(ELFY) If we pick $b = w_1$ and $\Delta b = c\, w_n$ for $c \in \mathbb{F}$, then the inequality $(*)$ above becomes an equality.

Suggested Homework: 4.1. We will not cover the last two sections of this chapter.
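
Finally, as a numerical aside assuming NumPy, we can check both the formula $C = \sigma_1/\sigma_n$ (this is what np.linalg.cond computes for the 2-norm) and the sharpness claim in the ELFY above:

```python
# Condition number as sigma_1 / sigma_n, and equality in (*) for
# b = w_1, db = c * w_n.
import numpy as np

rng = np.random.default_rng(12)
A = rng.standard_normal((4, 4))

B, s, A_star = np.linalg.svd(A)          # columns of B are w_1, ..., w_n
C = s[0] / s[-1]
assert np.isclose(np.linalg.cond(A, 2), C)

b, db = B[:, 0], 1e-6 * B[:, -1]         # b = w_1, db = c * w_n
x = np.linalg.solve(A, b)
dx = np.linalg.solve(A, db)
rel_x = np.linalg.norm(dx) / np.linalg.norm(x)
rel_b = np.linalg.norm(db) / np.linalg.norm(b)
assert np.isclose(rel_x, C * rel_b)      # equality in (*)
```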
