
LINEAR ALGEBRA Fall 2013
The final exam. Almost all of the problems solved.

Exercise 1. Let $(V, \|\cdot\|)$ be a normed vector space. Prove $\big|\,\|x\| - \|y\|\,\big| \le \|x - y\|$ for all $x, y \in V$.

Everybody knows how to do this!

Exercise 2. If $V$ is a vector space, two norms $\|\cdot\|_1$, $\|\cdot\|_2$ are said to be equivalent iff there exist constants $a, b$, $0 < a \le b$, such that $a\|x\|_1 \le \|x\|_2 \le b\|x\|_1$ for all $x \in V$. We saw that for finite dimensional normed vector spaces all norms are equivalent. Verify this directly for the following norms of $\mathbb{R}^n$, defined, if $x = (x_1, \dots, x_n) \in \mathbb{R}^n$, by
$$\|x\|_p = \Big(\sum_{j=1}^n |x_j|^p\Big)^{1/p} \text{ if } 1 \le p < \infty, \qquad \|x\|_\infty = \max_{1 \le j \le n} |x_j|.$$
That is, prove all these norms are equivalent without appealing to the theorem about equivalence of norms. In addition, prove or disprove: $\lim_{p\to\infty} \|x\|_p = \|x\|_\infty$ for all $x \in \mathbb{R}^n$.

At first I assumed that the equivalence of these norms was too trivial for me to add a proof, but a certain amount of confusion was visible in the answers. It was permissible to assume that equivalence of norms is an equivalence relation, the proof of this fact being quite immediate. Therefore one can select one of the norms (not two or three) and simply show all the others are equivalent to it. The easiest one is $\|\cdot\|_\infty$. Then one can say: let $1 \le p < \infty$. That $\|x\|_\infty \le \|x\|_p$ is obvious; on the other hand,
$$\|x\|_p^p = \sum_{i=1}^n |x_i|^p \le \sum_{i=1}^n \|x\|_\infty^p = n\|x\|_\infty^p,$$
so that $\|x\|_p \le n^{1/p}\|x\|_\infty$. And we are done!

Concerning the last statement, here is a proof. Let $x = (x_1, \dots, x_n) \in \mathbb{R}^n$ and let $\xi = \|x\|_\infty = \max_{1\le j\le n}|x_j|$. If $\xi = 0$ then $x = 0$ and there is nothing to prove, so assume $\xi > 0$. Let $k$ be the number of indices for which $|x_i| = \xi$, so $1 \le k \le n$. Then, if $1 \le p < \infty$,
$$\|x\|_p = \xi\, k^{1/p} (1 + \epsilon_p)^{1/p}, \qquad \epsilon_p = \frac{1}{k} \sum\Big\{ \Big|\frac{x_i}{\xi}\Big|^p : |x_i| < \xi \Big\} \to 0 \text{ as } p \to \infty.$$
A trivial application of the popularly named squeeze theorem proves $\lim_{p\to\infty} k^{1/p}(1+\epsilon_p)^{1/p} = 1$, hence $\lim_{p\to\infty}\|x\|_p = \xi = \|x\|_\infty$.

Exercise 3. Let $V$ be a finite dimensional normed vector space and let $\{e_1, \dots, e_n\}$ be a basis. Let $\{x_m\}$ be a sequence of elements of $V$ and write $x_m = \sum_{j=1}^n \xi_{mj} e_j$.
Prove that the sequence $\{x_m\}$ converges to $x = \sum_{j=1}^n \xi_j e_j$ in $V$, that is, $\lim_{m\to\infty} \|x_m - x\| = 0$, if and only if each of the $n$ sequences of (real or complex) numbers $\{\xi_{mj}\}_{m=1}^\infty$ converges; more specifically, if and only if $\lim_{m\to\infty} \xi_{mj} = \xi_j$ for $j = 1, \dots, n$. You may (and almost certainly should) use that in a finite dimensional vector space all norms are equivalent.

I assumed everybody would know how to do this, but I wasn't quite right. My sentence in the statement of the problem, "You may (and almost certainly should) use that in a finite dimensional vector space all norms are equivalent," should perhaps have been more assertive, say: "You may use (and cannot possibly avoid using) that in a finite dimensional vector space all norms are equivalent." It is not a consequence of linear independence that in a general vector space $\|\sum_{j=1}^n \xi_j e_j\|$ small (whatever "small" may mean) implies that each $|\xi_j|$ is small. It is the Heine-Borel theorem that does the trick. The equivalence of all norms!
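The limit proved in Exercise 2 is easy to watch numerically. A minimal sketch (numpy; the vector is an arbitrary example of mine with $\|x\|_\infty = 3$ attained at $k = 2$ indices, not data from the exam):

```python
import numpy as np

# Example vector: ||x||_inf = 3, attained at k = 2 indices
x = np.array([3.0, -1.0, 2.0, -3.0])
n, xinf = x.size, np.max(np.abs(x))

for p in [1, 2, 4, 8, 16, 32, 64]:
    xp = np.sum(np.abs(x) ** p) ** (1.0 / p)
    # the sandwich from the proof: ||x||_inf <= ||x||_p <= n^(1/p) ||x||_inf
    assert xinf <= xp <= n ** (1.0 / p) * xinf + 1e-12
# as p grows, ||x||_p = xi * k^(1/p) * (1 + eps_p)^(1/p) approaches xi = 3
```

By $p = 64$ the value is within about $3\cdot(2^{1/64}-1) \approx 0.03$ of the sup norm, exactly the $\xi k^{1/p}$ factor in the proof.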

Exercise 4. Let $V, W$ be finite dimensional normed vector spaces; if $T : V \to W$ is a linear operator define
$$\|T\| = \sup\{\|Tx\|_W : x \in V, \|x\|_V \le 1\}. \tag{1}$$

1. Prove $\|T\| < \infty$ for all $T \in L(V, W)$ (this may require assuming all norms in a finite dimensional vector space are equivalent).

Let $\{e_1, \dots, e_n\}$ be a basis of $V$. In $V$ we will also consider the norm $\|x\|_{V,1} = \sum_{j=1}^n |\xi_j|$ if $x = \sum_{j=1}^n \xi_j e_j$. By the equivalence of all norms in finite dimensional vector spaces, there exists $c > 0$ such that $\|x\|_{V,1} \le c\|x\|_V$ for all $x \in V$. Now let $b = \max\{\|Te_i\|_W : i = 1, \dots, n\}$ and let $x = \sum_{j=1}^n \xi_j e_j \in V$. Then
$$\|Tx\|_W = \Big\|\sum_{j=1}^n \xi_j Te_j\Big\|_W \le \sum_{j=1}^n |\xi_j|\,\|Te_j\|_W \le b\sum_{j=1}^n |\xi_j| = b\|x\|_{V,1} \le bc\|x\|_V.$$
It follows that $\|T\| \le bc < \infty$.

2. Prove that with $\|T\|$ as defined for $T \in L(V, W)$, $L(V, W)$ becomes a normed vector space. This follows from straightforward, essentially obvious, computations.

3. Assume $V = W$. Prove that this norm in $L(V)$ also satisfies $\|TS\| \le \|T\|\,\|S\|$ and $\|I\| = 1$.

Let $T, S \in L(V)$ and let $x \in V$. Then $\|TSx\| \le \|T\|\,\|Sx\| \le \|T\|\,\|S\|\,\|x\|$, and $\|TS\| \le \|T\|\,\|S\|$ follows. Concerning $I$, the statement that $\|I\| = 1$ is only 99.99% true. Clearly $Ix = x$ for all $x$, and that implies $\|I\| \le 1$. But to get $\|I\| \ge 1$ we need to have $x \in V$, $\|x\| \le 1$, such that $\|Ix\| = 1$. Any $x \in V$ with $\|x\| = 1$ will do, assuming there is such an $x$! The statement is thus true as long as $V \ne \{0\}$; it fails if $V$ is the pathetic trivial space.

4. Let $V = \mathbb{C}^n$ with the Euclidean norm, which is usually denoted by the same symbol as the absolute value; that is, if $z = (z_1, \dots, z_n) \in \mathbb{C}^n$ we set
$$|z| = \|z\|_2 = (z \cdot \bar z)^{1/2} = \Big(\sum_{j=1}^n |z_j|^2\Big)^{1/2}.$$
As usual, we identify operators in $L(\mathbb{C}^n)$ with their matrices with respect to the canonical basis of $\mathbb{C}^n$; write $T = (t_{jk})$ if $T \in L(\mathbb{C}^n)$. In brief, we identify $L(\mathbb{C}^n)$ with $M_n(\mathbb{C})$. Now $M_n(\mathbb{C})$ can be identified with $\mathbb{C}^{n^2}$, so it makes sense to define for an operator $T \in L(\mathbb{C}^n)$ the norm $\|T\|_p$ for $p \in [1, \infty]$. We only need this for $p = 2$, so just in case,
$$\|T\|_2 = \Big(\sum_{j,k=1}^n |t_{jk}|^2\Big)^{1/2}.$$
Another norm that one can define on $M_n(\mathbb{C})$ is
$$\|T\|_{2,\infty} = \max_{1 \le k \le n} \Big(\sum_{j=1}^n |t_{jk}|^2\Big)^{1/2}.$$

(a) With $\|T\|$ being the operator norm defined by (1), prove $\|T\|_{2,\infty} \le \|T\| \le \|T\|_2$ for all $T \in L(\mathbb{C}^n)$. (There was originally a part (b) here, which I removed.)

This is a simple computation, but most everybody got it wrong. First of all recall that if $e_1 = (1, 0, \dots, 0), \dots, e_n = (0, \dots, 0, 1)$ is the standard basis of $\mathbb{C}^n$, then $Te_k = (t_{1k}, \dots, t_{nk})$ for $k = 1, \dots, n$. Since $|e_k| = 1$ for all $k$, we have $|Te_k| \le \|T\|$ for all $k$, hence
$$\|T\| \ge \max_{1\le k\le n} |Te_k| = \max_{1\le k\le n} \Big(\sum_{j=1}^n |t_{jk}|^2\Big)^{1/2} = \|T\|_{2,\infty}.$$
Conversely, we will use that if $x = (x_1, \dots, x_n) \in \mathbb{C}^n$, then, by Cauchy-Schwarz,
$$\Big|\sum_{k=1}^n t_{jk} x_k\Big|^2 \le \sum_{k=1}^n |t_{jk}|^2 \sum_{k=1}^n |x_k|^2 = |x|^2 \sum_{k=1}^n |t_{jk}|^2,$$
to get
$$|Tx|^2 = \Big|\Big(\sum_{k=1}^n t_{1k}x_k, \dots, \sum_{k=1}^n t_{nk}x_k\Big)\Big|^2 = \sum_{j=1}^n \Big|\sum_{k=1}^n t_{jk}x_k\Big|^2 \le |x|^2 \sum_{j,k=1}^n |t_{jk}|^2 = \|T\|_2^2\, |x|^2,$$
proving $\|T\| \le \|T\|_2$.

Exercise 5. Let $V$ be a finite dimensional normed vector space, let $\{T_m\}$, $\{S_m\}$ be sequences of operators in $L(V)$, converging (in the operator norm (1)) to $T, S$ respectively.

1. Prove that $\{T_m + S_m\}$ converges to $T + S$ and that $\{T_m S_m\}$ converges to $TS$. The proof of this is essentially the same as the one that works for sequences in $\mathbb{R}$.

2. Prove: If $S \in L(V)$ and $\|S\| < 1$, then $I - S$ is invertible.

As a finite dimensional normed vector space, $L(V)$ is complete. Let $S \in L(V)$ and assume $\|S\| < 1$. It is immediately verifiable that the sequence $\{T_n = I + S + \dots + S^n\}$ is a Cauchy sequence in the operator norm, thus converges to some operator $T$. Now
$$T_n(I - S) = I + S + \dots + S^n - (S + \dots + S^{n+1}) = I - S^{n+1},$$
thus $\|T_n(I - S) - I\| = \|S^{n+1}\| \le \|S\|^{n+1} \to 0$ as $n \to \infty$. Thus $\{T_n(I - S)\}$ converges to $I$. But by part 1, it converges to $T(I - S)$. Thus $T(I - S) = I$, proving $I - S$ invertible, with $(I - S)^{-1} = T = \sum_{n=0}^\infty S^n$.

3. Let $G(V)$ be the group of invertible operators in $L(V)$. Prove it is an open subset of $L(V)$.

Let $T \in G(V)$. Let $\epsilon = \|T^{-1}\|^{-1}$. Assume $\|S - T\| < \epsilon$. Then $S = T - (T - S) = T(I - T^{-1}(T - S))$. Now $T$ is invertible, and since
$$\|T^{-1}(T - S)\| \le \|T^{-1}\|\,\|T - S\| < \epsilon\|T^{-1}\| = 1,$$
$I - T^{-1}(T - S)$ is also invertible. As the product of two invertible operators, so is $S$. The result follows.
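Both the chain of norms in Exercise 4(a) and the geometric series of Exercise 5.2 can be spot-checked numerically. A sketch (numpy; the matrix is a random example, and the operator norm (1) for the Euclidean norm is computed as the largest singular value):

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

op = np.linalg.norm(T, 2)                 # operator norm (1): largest singular value
frob = np.linalg.norm(T, 'fro')           # ||T||_2 in the notation of Exercise 4
col = np.max(np.linalg.norm(T, axis=0))   # ||T||_{2,inf}: largest column 2-norm
assert col <= op + 1e-12 <= frob + 1e-12  # ||T||_{2,inf} <= ||T|| <= ||T||_2

# Exercise 5.2: if ||S|| < 1 then (I - S)^{-1} = sum_n S^n
S = 0.4 * T / op                          # rescaled so that ||S|| = 0.4 < 1
inv = np.linalg.inv(np.eye(4) - S)
series = sum(np.linalg.matrix_power(S, n) for n in range(60))
assert np.allclose(inv, series)
```

Since $\|S\| = 0.4$, the tail of the series is bounded by $0.4^{60}$, far below floating-point noise, so the truncated sum already matches the inverse.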

Exercise 6. Let $V$ be a normed vector space of finite dimension $n$. Let $T \in L(V)$.

1. Prove that the sequence of operators $\{I + T + \frac{1}{2!}T^2 + \dots + \frac{1}{m!}T^m\} = \{\sum_{k=0}^m \frac{1}{k!}T^k\}$ converges in the normed space $L(V)$, so that it makes sense to define
$$e^T = \sum_{k=0}^\infty \frac{1}{k!} T^k.$$
Since $\sum_{k=0}^\infty \frac{1}{k!}\|T\|^k < \infty$, it is immediate that $\{\sum_{k=0}^m \frac{1}{k!}T^k\}$ is a Cauchy sequence in $L(V)$, thus converges.

2. Let $V = \mathbb{R}^3$ and consider the operator (matrix)
$$T = \begin{pmatrix} 1 & 3 & 5 \\ 1 & -1 & -1 \\ 4 & 4 & 2 \end{pmatrix}.$$
Evaluate $e^T$ explicitly.

We find the $S, N$ decomposition of the matrix. The characteristic polynomial of this matrix is $p(\lambda) = \lambda^3 - 2\lambda^2 - 20\lambda - 24 = (\lambda + 2)^2(\lambda - 6)$. Solving for eigenvectors, for $\lambda = -2$ we have to solve
$$3x + 3y + 5z = 0, \qquad x + y - z = 0, \qquad 4x + 4y + 4z = 0.$$
This solves to $x = -y$, $z = 0$, giving a one-dimensional eigenspace spanned, for example, by $(1, -1, 0)$. The generalized eigenspace, however, has dimension 2. We can now square $T + 2I$; the square will have 0 as an eigenvalue; one eigenvector will be the one we found, and a second linearly independent one will complete the basis of the generalized eigenspace corresponding to $-2$. See the notes on Jordan forms for more details. I get $(1, 0, -1)$ as a possible second generalized eigenvector. Finally, one finds that $(1, 0, 1)$ is an eigenvector corresponding to the eigenvalue 6. Let
$$U = \begin{pmatrix} 1 & 1 & 1 \\ -1 & 0 & 0 \\ 0 & -1 & 1 \end{pmatrix}; \qquad \text{then} \qquad U^{-1} = \begin{pmatrix} 0 & -1 & 0 \\ 1/2 & 1/2 & -1/2 \\ 1/2 & 1/2 & 1/2 \end{pmatrix}$$
and
$$U^{-1}TU = \begin{pmatrix} -2 & -2 & 0 \\ 0 & -2 & 0 \\ 0 & 0 & 6 \end{pmatrix} = S + N, \qquad S = \begin{pmatrix} -2 & 0 & 0 \\ 0 & -2 & 0 \\ 0 & 0 & 6 \end{pmatrix}, \qquad N = \begin{pmatrix} 0 & -2 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix},$$
which is the $S, N$ decomposition.
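Since this kind of arithmetic is easy to get wrong, here is a numerical check of the decomposition, using the matrices just computed; it also confirms, looking ahead, that the power series defining $e^T$ agrees with $U e^S (I + N) U^{-1}$ (a sketch in numpy):

```python
import numpy as np

T = np.array([[1., 3., 5.], [1., -1., -1.], [4., 4., 2.]])
U = np.array([[1., 1., 1.], [-1., 0., 0.], [0., -1., 1.]])
S = np.diag([-2., -2., 6.])
N = np.array([[0., -2., 0.], [0., 0., 0.], [0., 0., 0.]])

Uinv = np.linalg.inv(U)
assert np.allclose(Uinv @ T @ U, S + N)   # the S, N decomposition
assert np.allclose(S @ N, N @ S)          # S and N commute
assert np.allclose(N @ N, 0)              # N is nilpotent, N^2 = 0

# e^T via the defining power series, truncated far past convergence
E, term = np.eye(3), np.eye(3)
for k in range(1, 80):
    term = term @ T / k
    E += term
closed = U @ (np.diag(np.exp([-2., -2., 6.])) @ (np.eye(3) + N)) @ Uinv
assert np.allclose(E, closed)             # matches U e^S (I + N) U^{-1}
```

The truncation at 80 terms is overkill: with $\|T\|$ of order 10 the tail is astronomically small by then.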

We see $SN = NS$ (as it should be) and, since $N^2 = 0$, we get
$$e^{S+N} = \sum_{m=0}^\infty \frac{1}{m!}(S+N)^m = \sum_{m=0}^\infty \frac{1}{m!}\big(S^m + mS^{m-1}N\big) = e^S(I + N) = \begin{pmatrix} e^{-2} & -2e^{-2} & 0 \\ 0 & e^{-2} & 0 \\ 0 & 0 & e^6 \end{pmatrix}.$$
Finally,
$$e^T = U e^{S+N} U^{-1} = \begin{pmatrix} \frac{1}{2}(e^6 - e^{-2}) & \frac{1}{2}(e^6 - 3e^{-2}) & \frac{1}{2}(e^{-2} + e^6) \\ e^{-2} & 2e^{-2} & -e^{-2} \\ \frac{1}{2}(e^6 - e^{-2}) & \frac{1}{2}(e^6 - e^{-2}) & \frac{1}{2}(e^{-2} + e^6) \end{pmatrix}.$$

I usually write up the solution before starting to grade; after grading the problem in question I might add some comments, or even change what I have written. Here I am adding a comment. If $A$ is a square $n \times n$ matrix, a lot of authors consider it as an operator in $F^n$ by defining $x \mapsto xA$. In this case the Jordan form is the transpose of what we called the Jordan form. It is interesting that all of you who used the word Jordan used a Jordan form with the 1's above the main diagonal. It seems that the only one who actually took the trouble to go through the main steps to calculate the exponential was I.

Exercise 7. Let $f : V \to K$ be linear ($K$ the scalar field), $y \in V$, and assume that $f(x) = (x, y)$ for all $x \in V$. Then $\|f\|$ can be defined by (1); that is, $\|f\| = \sup\{|f(x)| : x \in V, \|x\| \le 1\}$. Prove $\|f\| = \|y\|$.

Since $|f(x)| = |(x, y)| \le \|x\|\,\|y\| = \|y\|\,\|x\|$, we get $\|f\| \le \|y\|$. If $y = 0$, then $f = 0$ and $\|f\| = 0 = \|y\|$, so assume $y \ne 0$. Then $\|y/\|y\|\| = 1$ and $|f(y/\|y\|)| \le \|f\|$; i.e., $\|f\| \ge (y/\|y\|, y) = \|y\|$.

Exercise 8. Let $A \in L(V)$. Prove $\|A^*\| = \|A\|$.

Notice first that $A^{**} := (A^*)^* = A$. Now let $x \in V$. Then
$$\|A^*x\|^2 = (A^*x, A^*x) = (AA^*x, x) \le \|A\|\,\|A^*x\|\,\|x\|,$$
thus $\|A^*\|^2 \le \|A\|\,\|A^*\|$, proving $\|A^*\| \le \|A\|$. But then
$$\|A\| = \|(A^*)^*\| \le \|A^*\|,$$
proving the converse inequality.

Exercise 9. Textbook, Exercise 4, §70, page 137.

To get $CA = B$, the obvious choice is $C = BA^{-1}$. But $A$ might not be invertible! Thinking a bit more carefully, we see that $C$, applied to anything of the form $Ax$, should be of the form $Bx$. So we make the obvious definition of $C : R(A) \to V$: $C(Ax) = Bx$. Is this well defined? The answer is yes; if $Ax = Ay$, then $x - y \in \ker(A) \subseteq \ker(B)$, thus $Bx = By$. Because it is well defined, it is also easy to see that $C$ is linear. The orthogonal complement of $R(A)$ is $\ker(A^*) = \ker(A)$; the tricky part is how to define $C$ on $\ker(A)$ so as to assure that $C^* = C$.
Playing around with inner products, etc., one reaches the conclusion that the definition has to satisfy: if $x \in \ker(A)$, then $ACx = B^*x$, and $(Cx, y) = (x, Cy)$

for all $x, y \in \ker(A)$. Assuming for a moment we have achieved this, let $x, y \in V$. Write $x = Au + x_2$, $y = Av + y_2$, where $u, v \in V$ and $x_2, y_2 \in \ker A$. Now, using where convenient the self-adjointness of $A$ and of $AB$,
$$(Cx, y) = (Bu + Cx_2, Av + y_2) = (Bu + Cx_2, Av) + (Bu + Cx_2, y_2) = (ABu, v) + (ACx_2, v) + (Bu, y_2) + (Cx_2, y_2),$$
$$(x, Cy) = (Au + x_2, Bv + Cy_2) = (u, ABv + ACy_2) + (x_2, Bv + Cy_2) = (ABu, v) + (u, ACy_2) + (x_2, Bv) + (x_2, Cy_2).$$
We see $(Cx, y) = (x, Cy)$. In fact, $(u, ABv) = (ABu, v)$ since $AB$ is self-adjoint; $(u, ACy_2) = (u, B^*y_2) = (Bu, y_2)$ by the assumption about $AC$, since $y_2 \in \ker A$; similarly $(x_2, Bv) = (B^*x_2, v) = (ACx_2, v)$. Finally, $(x_2, Cy_2) = (Cx_2, y_2)$ since $x_2, y_2 \in \ker A$.

Let us prove now that we can achieve to have $C$ defined on $\ker A$ so the desired properties hold. Let $\{e_1, \dots, e_m\}$ be a basis of $\ker A$. Since $\ker(A) \subseteq \ker(B) = R(B^*)^\perp$, we have $R(A) = \ker(A^*)^\perp = \ker(A)^\perp \supseteq R(B^*)$; thus there exist $y_1, \dots, y_m \in V$ such that $Ay_j = B^*e_j$ for $j = 1, \dots, m$. Writing $y_j = y_j' + y_j''$ with $y_j' \in R(A)$, $y_j'' \in \ker(A)$, we have $Ay_j' = Ay_j = B^*e_j$. Replacing the $y_j$'s by the $y_j'$'s, we may assume $y_j \in R(A)$ for $j = 1, \dots, m$. Define $C$ on $\ker(A)$ by $Ce_j = y_j$, $j = 1, \dots, m$. Then $ACe_j = B^*e_j$ and it follows that $AC = B^*$ on $\ker(A)$. Let $x, y \in \ker(A)$, say $x = \sum_{j=1}^m c_j e_j$, $y = \sum_{j=1}^m d_j e_j$. Then
$$(Cx, y) = \sum_{j,k} c_j \bar d_k (y_j, e_k) = 0$$
since $y_j \in R(A)$ and $e_k \in \ker(A) = R(A)^\perp$ for all $j, k$. Similarly $(x, Cy) = 0$ for $x, y \in \ker A$. Thus $(Cx, y) = 0 = (x, Cy)$ for $x, y \in \ker A$.

Only one person started this problem. I did not quite understand this person's solution. If right, it is shorter than what I wrote here.

Exercise 10. Textbook, Exercise 6, §70, page 137. (Hint: No and yes; prove: If $A$ is skew, $A^2$ is self-adjoint. Should take about half a second.)

If $A = 0$, then $A$ is both skew and symmetric, and the same is true for all of its powers. Assume thus $A \ne 0$. Assume $A$ is skew, $A^* = -A$.
Then $(A^2)^* = (A^*)^2 = (-A)^2 = A^2$, so $A^2$ is symmetric. One could ask whether $A^2 = 0$ is possible, but for a skew matrix this implies that $A = 0$. Similarly, $(A^3)^* = (-A)^3 = -A^3$, so $A^3$ is skew.

Exercise 11. Textbook, Exercise 10, §70, page 137.

(a) $\det(A) = \det(A^*) = \det(-A) = (-1)^n \det(A)$, so that if $\dim V$ is odd, we get $\det A = -\det A$, hence $\det A = 0$.

(b) Let $A$ be skew-adjoint in $V$. Write $V = \ker(A) \oplus \ker(A)^\perp$; because $A$ is skew-adjoint it is easy (trivial?) to see that $\ker(A)^\perp$ is $A$-invariant; as an operator in $\ker(A)^\perp$, $A$ is still skew-adjoint. It is also invertible, thus by part (a) we must have $\dim(\ker(A)^\perp)$ even. Since
$$\dim(R(A)) = \dim V - \dim(\ker(A)) = \dim(\ker(A)^\perp),$$
we are done.

Exercise 12. Textbook, Exercise 7, §74, page 145.

(a) If $x \in V$ and $(A - I)x = 0$, then $Ax = x$ and
$$\|x\|^2 = (x, x) = (Ax, x) = -(x, Ax) = -(x, x) = -\|x\|^2,$$
so $x = 0$. It follows that $A - I$ is invertible.
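The determinant fact in Exercise 11(a), the invertibility of $A - I$ in Exercise 12(a), and (looking ahead to part (b)) the orthogonality of the Cayley transform can all be spot-checked with a random real skew-symmetric matrix (a sketch in numpy; the matrix is a random example):

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((5, 5))
A = M - M.T                                # real skew-symmetric, odd dimension

# Exercise 11(a): det A = 0 in odd dimension
assert abs(np.linalg.det(A)) < 1e-8

# Exercise 12(a): A - I is invertible (1 is never an eigenvalue of a skew matrix)
assert abs(np.linalg.det(A - np.eye(5))) > 1e-6

# Exercise 12(b): the Cayley transform U = (A + I)(A - I)^{-1} is an isometry
U = (A + np.eye(5)) @ np.linalg.inv(A - np.eye(5))
assert np.allclose(U.T @ U, np.eye(5))
```

In fact $|\det(A - I)| = \prod (1 + \beta_j^2) \ge 1$, where $\pm i\beta_j$ are the nonzero eigenvalues of $A$, so the second assertion has plenty of margin.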

(b) Let $U = (A + I)(A - I)^{-1}$, where $A$ is skew. Let $x \in V$. Then, setting $y = (A - I)^{-1}x$, we have
$$\|Ux\|^2 = \|(A+I)y\|^2 = \|Ay\|^2 + 2\,\mathrm{Re}\,(Ay, y) + \|y\|^2 = \|Ay\|^2 - 2\,\mathrm{Re}\,(y, Ay) + \|y\|^2.$$
Since $\mathrm{Re}\,\bar z = \mathrm{Re}\, z$, we have that
$$\|Ux\|^2 = \|Ay\|^2 - 2\,\mathrm{Re}\,(Ay, y) + \|y\|^2 = \|(A - I)y\|^2 = \|x\|^2.$$
It follows that $U$ is an isometry.

(c) If $U = (A + I)(A - I)^{-1}$ where $A$ is skew-adjoint, then $U - I$ is invertible. To see this, assume $x \in V$ and $Ux = x$. Applying $A - I$ to both sides, we get $(A + I)x = (A - I)x$, whence $x = -x$, hence $x = 0$.

(d) Assume $U$ is an isometry and $U - I$ is invertible. Let $A = (U + I)(U - I)^{-1}$. Since $U$ is an isometry, $(Uz, Uw) = (z, w)$ for all $z, w \in V$. Then for $x, y \in V$, setting $z = (U - I)^{-1}x$ and $w = (U - I)^{-1}y$,
$$(Ax, y) = ((U + I)z, (U - I)w) = (Uz, Uw) - (Uz, w) + (z, Uw) - (z, w) = (z, Uw) - (Uz, w),$$
while
$$(x, Ay) = ((U - I)z, (U + I)w) = (Uz, Uw) + (Uz, w) - (z, Uw) - (z, w) = (Uz, w) - (z, Uw).$$
Thus $(Ax, y) = -(x, Ay)$, proving $A$ skew-adjoint.

Exercise 13. Let $V$ be an inner product space and let $A$ be a self-adjoint operator in $V$. Prove: Let $\lambda = \sup\{(Ax, x) : x \in V, \|x\| = 1\}$. Then $\lambda = \max \sigma(A)$. Moreover, if $(Ax, x) = \lambda$ and $\|x\| = 1$, then $Ax = \lambda x$.

The solution I had in mind was more or less as follows. There exists a sequence $\{x_m\}$ with $\|x_m\| = 1$ such that $\lambda = \lim_{m\to\infty} (Ax_m, x_m)$. Since we are in a finite dimensional vector space, Heine-Borel is valid; a bounded sequence has a convergent subsequence. Passing to this subsequence, we may as well assume $\{x_m\}$ converges; there is $x \in V$ such that $\lim_{m\to\infty} x_m = x$. It is easy to see that the map $y \mapsto \|y\| : V \to \mathbb{R}$ is continuous, implying $\|x\| = 1$, and the map $y \mapsto (Ay, y)$ is continuous, implying $(Ax, x) = \lambda$. It should be noted that we used implicitly that $A$ is self-adjoint (at least if the vector space is complex); $A$ self-adjoint guarantees that $(Ax, x)$ is real.

Suppose now $\|x\| = 1$ and $(Ax, x) = \lambda$. Let $y \in V$, $t \in \mathbb{R}$, and assume first $x + ty \ne 0$. Then $(x + ty)/\|x + ty\|$ is a vector of norm 1, hence, by the definition of $\lambda$,
$$\frac{(A(x + ty), x + ty)}{\|x + ty\|^2} = \Big( A\frac{x + ty}{\|x + ty\|}, \frac{x + ty}{\|x + ty\|} \Big) \le \lambda;$$
i.e.,
$$(A(x + ty), x + ty) \le \lambda \|x + ty\|^2. \tag{2}$$
Since (2) is trivially true if $x + ty = 0$, it holds for all $t \in \mathbb{R}$, $y \in V$. Keep $y$ fixed for a while.
Expanding the two sides of the inequality in (2), using that $(x, x) = \|x\|^2 = 1$ and $(Ax, x) = \lambda$, we get (and here we are using the fact that $A$ is self-adjoint to replace $(Ay, x)$ by $(y, Ax) = \overline{(Ax, y)}$):
$$\big(\lambda\|y\|^2 - (Ay, y)\big)t^2 - 2\,\mathrm{Re}\,(Ax - \lambda x, y)\,t \ge 0. \tag{3}$$
If in (3) we divide by $t > 0$ and let $t \to 0^+$, we conclude that $\mathrm{Re}\,(Ax - \lambda x, y) \le 0$. Since $y \in V$ was arbitrary, we can replace $y$ by $-y$ to conclude that $\mathrm{Re}\,(Ax - \lambda x, y) \ge 0$, hence $\mathrm{Re}\,(Ax - \lambda x, y) = 0$. If the space is complex, we can replace next $y$ by $iy$ and conclude that $\mathrm{Im}\,(Ax - \lambda x, y) = 0$. Thus $((A - \lambda)x, y) = 0$ for all $y \in V$, proving $Ax = \lambda x$.

Exercise 14. Let $V$ be a finite dimensional inner product space and let $A \in L(V)$. Then $\|A\|^2 = \max\{\lambda : \lambda \text{ is an eigenvalue of } A^*A\}$.
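Before the proof, the claim of Exercise 14 can be confirmed numerically (a sketch in numpy; the matrix is a random complex example):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

op2 = np.linalg.norm(A, 2) ** 2                   # ||A||^2: squared largest singular value
lam = np.max(np.linalg.eigvalsh(A.conj().T @ A))  # largest eigenvalue of A*A (Hermitian)
assert np.isclose(op2, lam)
```

This is of course just the statement that the singular values of $A$ are the square roots of the eigenvalues of $A^*A$.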

This exercise is a simple consequence of the previous one. Let $\lambda$ be the largest eigenvalue of the positive (thus self-adjoint) operator $A^*A$. If $x \in V$, then $\|Ax\|^2 = (A^*Ax, x) \le \lambda\|x\|^2$ by the previous exercise. Thus $\|A\|^2 \le \lambda$. On the other hand, there is $x \in V$, $\|x\| = 1$, such that $A^*Ax = \lambda x$; then $\|A\|^2 \ge \|Ax\|^2 = (A^*Ax, x) = \lambda$.

Exercise 15. Let $F$ be a field of characteristic 0. Consider the following two matrices in $M_n(F)$:
$$A = \begin{pmatrix} 1 & 1 & \cdots & 1 \\ 1 & 1 & \cdots & 1 \\ \vdots & & & \vdots \\ 1 & 1 & \cdots & 1 \end{pmatrix}, \qquad B = \begin{pmatrix} n & 0 & \cdots & 0 \\ 0 & 0 & \cdots & 0 \\ \vdots & & & \vdots \\ 0 & 0 & \cdots & 0 \end{pmatrix}.$$
Prove they are similar. As a hint, if they are, then $B$ would have to be the Jordan normal form of $A$.

A field of characteristic 0 contains $\mathbb{Q}$ as a subfield, and we may as well assume $F = \mathbb{C}$, at least for a while. Clearly $A$ is self-adjoint. Trying to find eigenvalues and eigenvectors, I think it is easiest to just look at the equation $Ax = \lambda x$; if $x = (x_1, \dots, x_n)$ this results in the $n$ equations
$$x_1 + \dots + x_n = \lambda x_i, \qquad i = 1, \dots, n.$$
If $\lambda \ne 0$, the only way all these equations can hold is if $x_1 = x_2 = \dots = x_n$. But then $\lambda x_1 = x_1 + \dots + x_n = n x_1$, so $\lambda = n$. If $\lambda = 0$, on the other hand, we can easily find $n - 1$ linearly independent solutions. That is, with a minimum of effort we find the following set of eigenvectors, the first one corresponding to the eigenvalue $n$, the rest to the eigenvalue 0:
$$(1, 1, \dots, 1), \quad (1, -1, 0, \dots, 0), \quad (1, 0, -1, 0, \dots, 0), \quad \dots, \quad (1, 0, \dots, 0, -1).$$
As it should be, the eigenspace for $n$ is orthogonal to the eigenspace for 0. In this basis the operator $A$ clearly has the matrix $B$, proving the two matrices to be similar. Finally, the matrix $U$ whose columns are these eigenvectors makes sense in $\mathbb{Q}$ (as does its inverse), so we are done.

Exercise 16. Let $V$ be a vector space of finite dimension $n$ over a field $F$ (any field, for now). Let $E$ be a projection operator; that is, $E \in L(V)$ and $E^2 = E$.

1. Show that the minimum polynomial $p$ of $E$ has one of the following forms: $p(\lambda) = \lambda$, $p(\lambda) = \lambda - 1$, $p(\lambda) = \lambda(\lambda - 1)$.

We have $E(E - I) = 0$. If $E = 0$, then $p(\lambda) = \lambda$ is the minimum polynomial. If $E = I$, then it is $p(\lambda) = \lambda - 1$. Otherwise let $p(\lambda) = \lambda(\lambda - 1)$.
Since p(e) = 0, the minimum polynomial must be a factor of p(λ). The only factors are λ and (λ ). Since E 0, E I 0, neither factor vanishes on E, thus p(λ) = λ(λ ) is minimum. 2. Show that σ(e) {0, }. This is immediate from part. Exercise 7 Show that an operator T is nilpotent if and only if σ(t ) = {0}. We have to assume here that all zeros of the characteristic polynomial are in the field!! That a nilpotent operator cannot have any eigenvalue other than 0 is clear. Conversely, assume there is x 0, T x = λx, λ 0. Then T k x = λ k x 0 for all k N, so T is not nilpotent. Alternative proof The only way that the spectrum of T can reduce to 0 is if the characteristic polynomial is p(λ) = λ n, which happens if and only if T n = 0.

Exercise 18. Prove: $A \in L(V)$ is normal if and only if $V$ has an orthonormal basis consisting of eigenvectors of $A$ (definitely assume $V$ is a complex, finite dimensional, inner product space). Prove that $A$ is self-adjoint if and only if it is normal and all its eigenvalues are real; $A$ is unitary if and only if it is normal and all its eigenvalues are complex numbers of absolute value 1.

If $A$ is normal, it has such a basis by the spectral theorem. Conversely, assume $A$ has such a basis and let $U$ be the matrix whose columns are the eigenvectors. One sees that $U^*U = I$, thus $U$ is unitary. Moreover, one also sees that $AU = UD$, where $D = \mathrm{diag}(\lambda_1, \dots, \lambda_n)$ is a diagonal matrix whose diagonal entries are the eigenvalues of $A$. Thus $A = UDU^*$, $A^* = U\bar D U^*$, hence $A^*A = U\bar D D U^* = U D \bar D U^* = AA^*$, since diagonal matrices commute. The rest of the exercise is sort of obvious (possibly all of it is), so I won't write out a solution.

Exercise 19. Textbook, §80, #5 (p. 162).

We assume $A$ is normal, so we can write $A = \sum_{j=1}^r \lambda_j E_j$, where $\lambda_1, \dots, \lambda_r$ are the distinct eigenvalues of $A$ and $E_1, \dots, E_r$ are mutually orthogonal orthogonal projections.

(a) If $A = A^2$, then we must have $\lambda_j^2 = \lambda_j$ for $j = 1, \dots, r$. Thus (the $\lambda_j$'s being distinct) we either have $r = 1$ and $\lambda_1 = 0$ or $\lambda_1 = 1$, or $r = 2$ and $\lambda_1 = 1$, $\lambda_2 = 0$. In all of these cases, $A$ is an orthogonal (in particular, self-adjoint) projection.

(b) If $A^k = 0$, then $\lambda_j^k = 0$ for $j = 1, \dots, r$, hence $\lambda_j = 0$ for $j = 1, \dots, r$, hence $A = 0$.

(c) If $A^3 = A^2$, then $\lambda_j^3 = \lambda_j^2$ for $j = 1, \dots, r$, hence either $\lambda_j = 0$ or $\lambda_j = 1$ for each $j$. It follows that $A$ is an orthogonal projection. The conclusion is not true without normality; for example, there are non-zero operators $A$ such that $A^2 = 0$; then $A^2 = A^3 = 0$, but $A \ne A^2$.

(d) Assume now $A^* = A$, thus $\lambda_j \in \mathbb{R}$ for all $j$. If $A^k = I$ for some $k \in \mathbb{N}$, then $\lambda_j^k = 1$ for $j = 1, \dots, r$; a real $k$-th root of 1 is $\pm 1$. This implies that either we have a single eigenvalue equal to 1 or to $-1$, or two eigenvalues, one equal to 1, the other one to $-1$. In all cases, $A^2 = I$.
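The unitary case of Exercise 18 can be illustrated numerically: a unitary matrix is normal and its eigenvalues have absolute value 1 (a sketch in numpy; the unitary factor of a QR decomposition of a random complex matrix serves as the example):

```python
import numpy as np

rng = np.random.default_rng(3)
Z = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
U, _ = np.linalg.qr(Z)                               # U is unitary

assert np.allclose(U @ U.conj().T, U.conj().T @ U)   # U is normal
assert np.allclose(np.abs(np.linalg.eigvals(U)), 1)  # eigenvalues of absolute value 1
```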
Exercise 20. Textbook, §80, #6 (p. 162).

Yes. For a normal transformation $A$, the kernels of $A$ and of $A^*$ are the same. Thus $AB = 0$ implies that $R(B) \subseteq \ker(A) = \ker(A^*)$, hence $A^*B = 0$, hence $B^*A = (A^*B)^* = 0$; since $B$ is self-adjoint it now follows that $BA = 0$.

Exercise 21. Textbook, §80, #7 (p. 162).

Here is where we can use to good effect Theorem 2 of §74; there exists an orthonormal basis $\{e_1, \dots, e_n\}$ of $V$ with respect to which the operator $A$ has a lower triangular matrix. The diagonal of that matrix consists of the eigenvalues of $A$ repeated according to their multiplicities. If we denote the entries of this matrix by $a_{ij}$, $1 \le i, j \le n$, then $a_{ii} = \lambda_i$ and $a_{ij} = 0$ if $j > i$. Now the matrix of $A^*A$ with respect to this basis will be $(\bar a_{ji})(a_{ij})$, hence
$$\mathrm{tr}(A^*A) = \sum_{i,j} |a_{ji}|^2 = \sum_{i=1}^n |a_{ii}|^2 + \sum_{(i,j)\,:\, i < j \le n} |a_{ji}|^2 \ge \sum_{i=1}^n |a_{ii}|^2 = \sum_{i=1}^n |\lambda_i|^2.$$
This proves the inequality. Now equality is achieved if and only if $\sum_{(i,j)\,:\, i < j \le n} |a_{ji}|^2 = 0$; i.e., if and only if $a_{ji} = 0$ for $j > i$; i.e., if the matrix of $A$ is diagonal. Having a diagonal matrix with respect to an orthonormal basis is equivalent to being normal.

Generalities

Exercise 22. Here is a cute one. Let $A, B$ be $2 \times 2$ matrices with integer entries, such that the matrices $A$, $A + B$, $A + 2B$, $A + 3B$ and $A + 4B$ are all invertible, and the inverses have integer entries. Show that $A + 5B$ is also invertible with integer entries.

This was once a Putnam problem. The thing to keep in mind is that if $A$ is a matrix with integer entries, then $\det(A) \in \mathbb{Z}$; if both $A$ and $A^{-1}$ have integer entries, then $1 = \det(A)\det(A^{-1})$, and since the only divisors of 1 are 1 and $-1$, we conclude $\det(A) = \pm 1$. Conversely, if $\det A = \pm 1$, then the construction of the inverse matrix using the adjugate matrix shows that $A^{-1}$ has integer entries. Thus an integer matrix has an inverse that is also an integer matrix if and only if $\det A = \pm 1$.

Having established this, let's turn to the problem. For $t \in \mathbb{R}$, let $p(t) = \det(A + tB)$. The hypothesis implies that $p(0), p(1), p(2), p(3), p(4) \in \{-1, 1\}$. This implies that $p$ assumes at least 3 times either the value 1 or the value $-1$. But $p$ is a polynomial of degree at most 2; if it assumes a value 3 times it must be constant, thus constantly equal to 1 or $-1$. Then $\det(A + 5B) = \pm 1$, hence $A + 5B$ is invertible and the inverse has integer entries.

Exercise 23. Let $V$ be a finite dimensional vector space over a field $F$ and let $\{e_1, \dots, e_n\}$ be a basis of $V$. Consider the following linear maps from $V$ to $V$. They are called elementary linear transformations.

I. For $c \in F$, $1 \le i, j \le n$, $E_{i+c(j)}$ is the unique linear map such that $E_{i+c(j)}e_k = e_k$ if $k \ne i$ (even if $k = j$), $E_{i+c(j)}e_i = e_i + ce_j$.

II. For $c \in F$, $c \ne 0$, $1 \le i \le n$, $E_{c(i)}$ is the unique linear map such that $E_{c(i)}e_j = e_j$ if $j \ne i$, $E_{c(i)}e_i = ce_i$.

III. For $1 \le i, j \le n$, $i \ne j$, $E_{ij}$ is the unique linear map such that $E_{ij}e_k = e_k$ if $k \ne i, j$, $E_{ij}e_i = e_j$, $E_{ij}e_j = e_i$.

Prove that a linear operator $T \in L(V)$ is invertible if and only if $T$ is a product of elementary transformations.

Hint and Comments. First the comments. Students in our rather elementary Matrix Theory course encounter the matrices of these transformations, called elementary matrices. They learn, with or without too much of a proof, that multiplying a matrix on the left by one of these elementary matrices acts on the rows of the matrix (on the right) by what is known as an elementary row operation.
They learn that elementary row operations can be used to take a matrix down to canonical form, and the canonical form of a square invertible matrix is the identity matrix. If a number of these transformations take the matrix to the identity, then the same transformations done to the identity matrix give you the inverse matrix. This is the row reduction method for inverting a matrix, probably the most efficient of the relatively straightforward inversion methods.

Now the hints. That all the elementary transformations are invertible is obvious. For the converse you need to show that if $\{f_1, \dots, f_n\}$ is another basis of $V$, then a sequence of elementary linear transformations can take $\{e_1, \dots, e_n\}$ to $\{f_1, \dots, f_n\}$. If you can show that you can replace one of the $e_i$'s by $f_1$, you are on the way. It may not be possible to replace $e_1$ by $f_1$ (for example, $f_1$ could be equal to $e_2$). Maybe your induction hypothesis should be: for $1 \le i \le n$, there is a sequence of elementary linear transformations taking the basis $\{e_1, \dots, e_n\}$ to a basis $\{f_1, \dots, f_i, e_{i+1}', \dots, e_n'\}$. By a transformation of type III you can then start by taking the basis to a permuted one in which $f_1$ depends on $e_1$, allowing you to replace $e_1$ by $f_1$. Etc.

I am tired. I won't grade this exercise.
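The row reduction method described above can be sketched in a few lines: apply type I and type II elementary row operations to $[A \mid I]$ until the left block becomes the identity; the right block is then $A^{-1}$. (A sketch in numpy; the example matrix is mine, and for simplicity the code assumes nonzero pivots, so no type III swaps are needed.)

```python
import numpy as np

def invert_by_row_reduction(A):
    """Gauss-Jordan: reduce [A | I] to [I | A^-1] using elementary row operations.
    Assumes every pivot is nonzero (no type III swaps in this sketch)."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])
    for i in range(n):
        M[i] /= M[i, i]                 # type II: scale row i by 1/pivot
        for j in range(n):
            if j != i:
                M[j] -= M[j, i] * M[i]  # type I: add a multiple of row i to row j
    return M[:, n:]

A = np.array([[2., 1.], [5., 3.]])      # det = 1, so the inverse has integer entries
Ainv = invert_by_row_reduction(A)
assert np.allclose(A @ Ainv, np.eye(2))
assert np.allclose(Ainv, [[3., -1.], [-5., 2.]])
```

The example also illustrates the unimodularity fact from Exercise 22: since $\det A = 1$, the inverse produced by row reduction is again an integer matrix.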


More information

Math 554 Qualifying Exam. You may use any theorems from the textbook. Any other claims must be proved in details.

Math 554 Qualifying Exam. You may use any theorems from the textbook. Any other claims must be proved in details. Math 554 Qualifying Exam January, 2019 You may use any theorems from the textbook. Any other claims must be proved in details. 1. Let F be a field and m and n be positive integers. Prove the following.

More information

Remarks on Definitions

Remarks on Definitions Remarks on Definitions 1. Bad Definitions Definitions are the foundation of mathematics. Linear algebra bulges with definitions and one of the biggest challenge for students is to master them. It is difficult.

More information

Math 102, Winter Final Exam Review. Chapter 1. Matrices and Gaussian Elimination

Math 102, Winter Final Exam Review. Chapter 1. Matrices and Gaussian Elimination Math 0, Winter 07 Final Exam Review Chapter. Matrices and Gaussian Elimination { x + x =,. Different forms of a system of linear equations. Example: The x + 4x = 4. [ ] [ ] [ ] vector form (or the column

More information

Mathematical Methods wk 2: Linear Operators

Mathematical Methods wk 2: Linear Operators John Magorrian, magog@thphysoxacuk These are work-in-progress notes for the second-year course on mathematical methods The most up-to-date version is available from http://www-thphysphysicsoxacuk/people/johnmagorrian/mm

More information

G1110 & 852G1 Numerical Linear Algebra

G1110 & 852G1 Numerical Linear Algebra The University of Sussex Department of Mathematics G & 85G Numerical Linear Algebra Lecture Notes Autumn Term Kerstin Hesse (w aw S w a w w (w aw H(wa = (w aw + w Figure : Geometric explanation of the

More information

Lecture Notes in Linear Algebra

Lecture Notes in Linear Algebra Lecture Notes in Linear Algebra Dr. Abdullah Al-Azemi Mathematics Department Kuwait University February 4, 2017 Contents 1 Linear Equations and Matrices 1 1.2 Matrices............................................

More information

MAT2342 : Introduction to Applied Linear Algebra Mike Newman, fall Projections. introduction

MAT2342 : Introduction to Applied Linear Algebra Mike Newman, fall Projections. introduction MAT4 : Introduction to Applied Linear Algebra Mike Newman fall 7 9. Projections introduction One reason to consider projections is to understand approximate solutions to linear systems. A common example

More information

Symmetric and self-adjoint matrices

Symmetric and self-adjoint matrices Symmetric and self-adjoint matrices A matrix A in M n (F) is called symmetric if A T = A, ie A ij = A ji for each i, j; and self-adjoint if A = A, ie A ij = A ji or each i, j Note for A in M n (R) that

More information

Linear Algebra 2 Final Exam, December 7, 2015 SOLUTIONS. a + 2b = x a + 3b = y. This solves to a = 3x 2y, b = y x. Thus

Linear Algebra 2 Final Exam, December 7, 2015 SOLUTIONS. a + 2b = x a + 3b = y. This solves to a = 3x 2y, b = y x. Thus Linear Algebra 2 Final Exam, December 7, 2015 SOLUTIONS 1. (5.5 points) Let T : R 2 R 4 be a linear mapping satisfying T (1, 1) = ( 1, 0, 2, 3), T (2, 3) = (2, 3, 0, 0). Determine T (x, y) for (x, y) R

More information

AN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES

AN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES AN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES JOEL A. TROPP Abstract. We present an elementary proof that the spectral radius of a matrix A may be obtained using the formula ρ(a) lim

More information

2.2. Show that U 0 is a vector space. For each α 0 in F, show by example that U α does not satisfy closure.

2.2. Show that U 0 is a vector space. For each α 0 in F, show by example that U α does not satisfy closure. Hints for Exercises 1.3. This diagram says that f α = β g. I will prove f injective g injective. You should show g injective f injective. Assume f is injective. Now suppose g(x) = g(y) for some x, y A.

More information

Econ Slides from Lecture 7

Econ Slides from Lecture 7 Econ 205 Sobel Econ 205 - Slides from Lecture 7 Joel Sobel August 31, 2010 Linear Algebra: Main Theory A linear combination of a collection of vectors {x 1,..., x k } is a vector of the form k λ ix i for

More information

Quantum Computing Lecture 2. Review of Linear Algebra

Quantum Computing Lecture 2. Review of Linear Algebra Quantum Computing Lecture 2 Review of Linear Algebra Maris Ozols Linear algebra States of a quantum system form a vector space and their transformations are described by linear operators Vector spaces

More information

ELEMENTARY LINEAR ALGEBRA WITH APPLICATIONS. 1. Linear Equations and Matrices

ELEMENTARY LINEAR ALGEBRA WITH APPLICATIONS. 1. Linear Equations and Matrices ELEMENTARY LINEAR ALGEBRA WITH APPLICATIONS KOLMAN & HILL NOTES BY OTTO MUTZBAUER 11 Systems of Linear Equations 1 Linear Equations and Matrices Numbers in our context are either real numbers or complex

More information

Contents. Preface for the Instructor. Preface for the Student. xvii. Acknowledgments. 1 Vector Spaces 1 1.A R n and C n 2

Contents. Preface for the Instructor. Preface for the Student. xvii. Acknowledgments. 1 Vector Spaces 1 1.A R n and C n 2 Contents Preface for the Instructor xi Preface for the Student xv Acknowledgments xvii 1 Vector Spaces 1 1.A R n and C n 2 Complex Numbers 2 Lists 5 F n 6 Digression on Fields 10 Exercises 1.A 11 1.B Definition

More information

Final Review Sheet. B = (1, 1 + 3x, 1 + x 2 ) then 2 + 3x + 6x 2

Final Review Sheet. B = (1, 1 + 3x, 1 + x 2 ) then 2 + 3x + 6x 2 Final Review Sheet The final will cover Sections Chapters 1,2,3 and 4, as well as sections 5.1-5.4, 6.1-6.2 and 7.1-7.3 from chapters 5,6 and 7. This is essentially all material covered this term. Watch

More information

Linear Algebra Highlights

Linear Algebra Highlights Linear Algebra Highlights Chapter 1 A linear equation in n variables is of the form a 1 x 1 + a 2 x 2 + + a n x n. We can have m equations in n variables, a system of linear equations, which we want to

More information

Announcements Wednesday, November 01

Announcements Wednesday, November 01 Announcements Wednesday, November 01 WeBWorK 3.1, 3.2 are due today at 11:59pm. The quiz on Friday covers 3.1, 3.2. My office is Skiles 244. Rabinoffice hours are Monday, 1 3pm and Tuesday, 9 11am. Section

More information

CS 246 Review of Linear Algebra 01/17/19

CS 246 Review of Linear Algebra 01/17/19 1 Linear algebra In this section we will discuss vectors and matrices. We denote the (i, j)th entry of a matrix A as A ij, and the ith entry of a vector as v i. 1.1 Vectors and vector operations A vector

More information

Diagonalization by a unitary similarity transformation

Diagonalization by a unitary similarity transformation Physics 116A Winter 2011 Diagonalization by a unitary similarity transformation In these notes, we will always assume that the vector space V is a complex n-dimensional space 1 Introduction A semi-simple

More information

Jordan normal form notes (version date: 11/21/07)

Jordan normal form notes (version date: 11/21/07) Jordan normal form notes (version date: /2/7) If A has an eigenbasis {u,, u n }, ie a basis made up of eigenvectors, so that Au j = λ j u j, then A is diagonal with respect to that basis To see this, let

More information

LINEAR ALGEBRA BOOT CAMP WEEK 2: LINEAR OPERATORS

LINEAR ALGEBRA BOOT CAMP WEEK 2: LINEAR OPERATORS LINEAR ALGEBRA BOOT CAMP WEEK 2: LINEAR OPERATORS Unless otherwise stated, all vector spaces in this worksheet are finite dimensional and the scalar field F has characteristic zero. The following are facts

More information

Duke University, Department of Electrical and Computer Engineering Optimization for Scientists and Engineers c Alex Bronstein, 2014

Duke University, Department of Electrical and Computer Engineering Optimization for Scientists and Engineers c Alex Bronstein, 2014 Duke University, Department of Electrical and Computer Engineering Optimization for Scientists and Engineers c Alex Bronstein, 2014 Linear Algebra A Brief Reminder Purpose. The purpose of this document

More information

Foundations of Matrix Analysis

Foundations of Matrix Analysis 1 Foundations of Matrix Analysis In this chapter we recall the basic elements of linear algebra which will be employed in the remainder of the text For most of the proofs as well as for the details, the

More information

Linear algebra and applications to graphs Part 1

Linear algebra and applications to graphs Part 1 Linear algebra and applications to graphs Part 1 Written up by Mikhail Belkin and Moon Duchin Instructor: Laszlo Babai June 17, 2001 1 Basic Linear Algebra Exercise 1.1 Let V and W be linear subspaces

More information

Math 4A Notes. Written by Victoria Kala Last updated June 11, 2017

Math 4A Notes. Written by Victoria Kala Last updated June 11, 2017 Math 4A Notes Written by Victoria Kala vtkala@math.ucsb.edu Last updated June 11, 2017 Systems of Linear Equations A linear equation is an equation that can be written in the form a 1 x 1 + a 2 x 2 +...

More information

First we introduce the sets that are going to serve as the generalizations of the scalars.

First we introduce the sets that are going to serve as the generalizations of the scalars. Contents 1 Fields...................................... 2 2 Vector spaces.................................. 4 3 Matrices..................................... 7 4 Linear systems and matrices..........................

More information

Throughout these notes we assume V, W are finite dimensional inner product spaces over C.

Throughout these notes we assume V, W are finite dimensional inner product spaces over C. Math 342 - Linear Algebra II Notes Throughout these notes we assume V, W are finite dimensional inner product spaces over C 1 Upper Triangular Representation Proposition: Let T L(V ) There exists an orthonormal

More information

1 Last time: least-squares problems

1 Last time: least-squares problems MATH Linear algebra (Fall 07) Lecture Last time: least-squares problems Definition. If A is an m n matrix and b R m, then a least-squares solution to the linear system Ax = b is a vector x R n such that

More information

MATRICES ARE SIMILAR TO TRIANGULAR MATRICES

MATRICES ARE SIMILAR TO TRIANGULAR MATRICES MATRICES ARE SIMILAR TO TRIANGULAR MATRICES 1 Complex matrices Recall that the complex numbers are given by a + ib where a and b are real and i is the imaginary unity, ie, i 2 = 1 In what we describe below,

More information

Linear Algebra- Final Exam Review

Linear Algebra- Final Exam Review Linear Algebra- Final Exam Review. Let A be invertible. Show that, if v, v, v 3 are linearly independent vectors, so are Av, Av, Av 3. NOTE: It should be clear from your answer that you know the definition.

More information

University of Colorado Denver Department of Mathematical and Statistical Sciences Applied Linear Algebra Ph.D. Preliminary Exam June 8, 2012

University of Colorado Denver Department of Mathematical and Statistical Sciences Applied Linear Algebra Ph.D. Preliminary Exam June 8, 2012 University of Colorado Denver Department of Mathematical and Statistical Sciences Applied Linear Algebra Ph.D. Preliminary Exam June 8, 2012 Name: Exam Rules: This is a closed book exam. Once the exam

More information

LINEAR ALGEBRA 1, 2012-I PARTIAL EXAM 3 SOLUTIONS TO PRACTICE PROBLEMS

LINEAR ALGEBRA 1, 2012-I PARTIAL EXAM 3 SOLUTIONS TO PRACTICE PROBLEMS LINEAR ALGEBRA, -I PARTIAL EXAM SOLUTIONS TO PRACTICE PROBLEMS Problem (a) For each of the two matrices below, (i) determine whether it is diagonalizable, (ii) determine whether it is orthogonally diagonalizable,

More information

Eigenvectors and Hermitian Operators

Eigenvectors and Hermitian Operators 7 71 Eigenvalues and Eigenvectors Basic Definitions Let L be a linear operator on some given vector space V A scalar λ and a nonzero vector v are referred to, respectively, as an eigenvalue and corresponding

More information

Lecture notes: Applied linear algebra Part 1. Version 2

Lecture notes: Applied linear algebra Part 1. Version 2 Lecture notes: Applied linear algebra Part 1. Version 2 Michael Karow Berlin University of Technology karow@math.tu-berlin.de October 2, 2008 1 Notation, basic notions and facts 1.1 Subspaces, range and

More information

Solution to Homework 1

Solution to Homework 1 Solution to Homework Sec 2 (a) Yes It is condition (VS 3) (b) No If x, y are both zero vectors Then by condition (VS 3) x = x + y = y (c) No Let e be the zero vector We have e = 2e (d) No It will be false

More information

x 3y 2z = 6 1.2) 2x 4y 3z = 8 3x + 6y + 8z = 5 x + 3y 2z + 5t = 4 1.5) 2x + 8y z + 9t = 9 3x + 5y 12z + 17t = 7

x 3y 2z = 6 1.2) 2x 4y 3z = 8 3x + 6y + 8z = 5 x + 3y 2z + 5t = 4 1.5) 2x + 8y z + 9t = 9 3x + 5y 12z + 17t = 7 Linear Algebra and its Applications-Lab 1 1) Use Gaussian elimination to solve the following systems x 1 + x 2 2x 3 + 4x 4 = 5 1.1) 2x 1 + 2x 2 3x 3 + x 4 = 3 3x 1 + 3x 2 4x 3 2x 4 = 1 x + y + 2z = 4 1.4)

More information

1 Matrices and Systems of Linear Equations. a 1n a 2n

1 Matrices and Systems of Linear Equations. a 1n a 2n March 31, 2013 16-1 16. Systems of Linear Equations 1 Matrices and Systems of Linear Equations An m n matrix is an array A = (a ij ) of the form a 11 a 21 a m1 a 1n a 2n... a mn where each a ij is a real

More information

OHSx XM511 Linear Algebra: Solutions to Online True/False Exercises

OHSx XM511 Linear Algebra: Solutions to Online True/False Exercises This document gives the solutions to all of the online exercises for OHSx XM511. The section ( ) numbers refer to the textbook. TYPE I are True/False. Answers are in square brackets [. Lecture 02 ( 1.1)

More information

Numerical Linear Algebra

Numerical Linear Algebra University of Alabama at Birmingham Department of Mathematics Numerical Linear Algebra Lecture Notes for MA 660 (1997 2014) Dr Nikolai Chernov April 2014 Chapter 0 Review of Linear Algebra 0.1 Matrices

More information

1 Matrices and Systems of Linear Equations

1 Matrices and Systems of Linear Equations March 3, 203 6-6. Systems of Linear Equations Matrices and Systems of Linear Equations An m n matrix is an array A = a ij of the form a a n a 2 a 2n... a m a mn where each a ij is a real or complex number.

More information

Math 113 Final Exam: Solutions

Math 113 Final Exam: Solutions Math 113 Final Exam: Solutions Thursday, June 11, 2013, 3.30-6.30pm. 1. (25 points total) Let P 2 (R) denote the real vector space of polynomials of degree 2. Consider the following inner product on P

More information

LECTURE 7. k=1 (, v k)u k. Moreover r

LECTURE 7. k=1 (, v k)u k. Moreover r LECTURE 7 Finite rank operators Definition. T is said to be of rank r (r < ) if dim T(H) = r. The class of operators of rank r is denoted by K r and K := r K r. Theorem 1. T K r iff T K r. Proof. Let T

More information

Elementary Linear Algebra

Elementary Linear Algebra Matrices J MUSCAT Elementary Linear Algebra Matrices Definition Dr J Muscat 2002 A matrix is a rectangular array of numbers, arranged in rows and columns a a 2 a 3 a n a 2 a 22 a 23 a 2n A = a m a mn We

More information

MAT Linear Algebra Collection of sample exams

MAT Linear Algebra Collection of sample exams MAT 342 - Linear Algebra Collection of sample exams A-x. (0 pts Give the precise definition of the row echelon form. 2. ( 0 pts After performing row reductions on the augmented matrix for a certain system

More information

MATH 23a, FALL 2002 THEORETICAL LINEAR ALGEBRA AND MULTIVARIABLE CALCULUS Solutions to Final Exam (in-class portion) January 22, 2003

MATH 23a, FALL 2002 THEORETICAL LINEAR ALGEBRA AND MULTIVARIABLE CALCULUS Solutions to Final Exam (in-class portion) January 22, 2003 MATH 23a, FALL 2002 THEORETICAL LINEAR ALGEBRA AND MULTIVARIABLE CALCULUS Solutions to Final Exam (in-class portion) January 22, 2003 1. True or False (28 points, 2 each) T or F If V is a vector space

More information

Chapter 4 Euclid Space

Chapter 4 Euclid Space Chapter 4 Euclid Space Inner Product Spaces Definition.. Let V be a real vector space over IR. A real inner product on V is a real valued function on V V, denoted by (, ), which satisfies () (x, y) = (y,

More information

Linear Algebra. Min Yan

Linear Algebra. Min Yan Linear Algebra Min Yan January 2, 2018 2 Contents 1 Vector Space 7 1.1 Definition................................. 7 1.1.1 Axioms of Vector Space..................... 7 1.1.2 Consequence of Axiom......................

More information

MATH 235. Final ANSWERS May 5, 2015

MATH 235. Final ANSWERS May 5, 2015 MATH 235 Final ANSWERS May 5, 25. ( points) Fix positive integers m, n and consider the vector space V of all m n matrices with entries in the real numbers R. (a) Find the dimension of V and prove your

More information

2 b 3 b 4. c c 2 c 3 c 4

2 b 3 b 4. c c 2 c 3 c 4 OHSx XM511 Linear Algebra: Multiple Choice Questions for Chapter 4 a a 2 a 3 a 4 b b 1. What is the determinant of 2 b 3 b 4 c c 2 c 3 c 4? d d 2 d 3 d 4 (a) abcd (b) abcd(a b)(b c)(c d)(d a) (c) abcd(a

More information

University of Colorado at Denver Mathematics Department Applied Linear Algebra Preliminary Exam With Solutions 16 January 2009, 10:00 am 2:00 pm

University of Colorado at Denver Mathematics Department Applied Linear Algebra Preliminary Exam With Solutions 16 January 2009, 10:00 am 2:00 pm University of Colorado at Denver Mathematics Department Applied Linear Algebra Preliminary Exam With Solutions 16 January 2009, 10:00 am 2:00 pm Name: The proctor will let you read the following conditions

More information

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 Instructions Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 The exam consists of four problems, each having multiple parts. You should attempt to solve all four problems. 1.

More information

Lecture 2: Linear operators

Lecture 2: Linear operators Lecture 2: Linear operators Rajat Mittal IIT Kanpur The mathematical formulation of Quantum computing requires vector spaces and linear operators So, we need to be comfortable with linear algebra to study

More information

2. Linear algebra. matrices and vectors. linear equations. range and nullspace of matrices. function of vectors, gradient and Hessian

2. Linear algebra. matrices and vectors. linear equations. range and nullspace of matrices. function of vectors, gradient and Hessian FE661 - Statistical Methods for Financial Engineering 2. Linear algebra Jitkomut Songsiri matrices and vectors linear equations range and nullspace of matrices function of vectors, gradient and Hessian

More information

Dot Products. K. Behrend. April 3, Abstract A short review of some basic facts on the dot product. Projections. The spectral theorem.

Dot Products. K. Behrend. April 3, Abstract A short review of some basic facts on the dot product. Projections. The spectral theorem. Dot Products K. Behrend April 3, 008 Abstract A short review of some basic facts on the dot product. Projections. The spectral theorem. Contents The dot product 3. Length of a vector........................

More information

Math 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces.

Math 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces. Math 350 Fall 2011 Notes about inner product spaces In this notes we state and prove some important properties of inner product spaces. First, recall the dot product on R n : if x, y R n, say x = (x 1,...,

More information

1 Linear transformations; the basics

1 Linear transformations; the basics Linear Algebra Fall 2013 Linear Transformations 1 Linear transformations; the basics Definition 1 Let V, W be vector spaces over the same field F. A linear transformation (also known as linear map, or

More information

Honors Linear Algebra, Spring Homework 8 solutions by Yifei Chen

Honors Linear Algebra, Spring Homework 8 solutions by Yifei Chen .. Honors Linear Algebra, Spring 7. Homework 8 solutions by Yifei Chen 8... { { W {v R 4 v α v β } v x, x, x, x 4 x x + x 4 x + x x + x 4 } Solve the linear system above, we get a basis of W is {v,,,,

More information

Chapter 6 Inner product spaces

Chapter 6 Inner product spaces Chapter 6 Inner product spaces 6.1 Inner products and norms Definition 1 Let V be a vector space over F. An inner product on V is a function, : V V F such that the following conditions hold. x+z,y = x,y

More information

Extra Problems for Math 2050 Linear Algebra I

Extra Problems for Math 2050 Linear Algebra I Extra Problems for Math 5 Linear Algebra I Find the vector AB and illustrate with a picture if A = (,) and B = (,4) Find B, given A = (,4) and [ AB = A = (,4) and [ AB = 8 If possible, express x = 7 as

More information

NOTES on LINEAR ALGEBRA 1

NOTES on LINEAR ALGEBRA 1 School of Economics, Management and Statistics University of Bologna Academic Year 207/8 NOTES on LINEAR ALGEBRA for the students of Stats and Maths This is a modified version of the notes by Prof Laura

More information

[Disclaimer: This is not a complete list of everything you need to know, just some of the topics that gave people difficulty.]

[Disclaimer: This is not a complete list of everything you need to know, just some of the topics that gave people difficulty.] Math 43 Review Notes [Disclaimer: This is not a complete list of everything you need to know, just some of the topics that gave people difficulty Dot Product If v (v, v, v 3 and w (w, w, w 3, then the

More information

Matrices and Linear Algebra

Matrices and Linear Algebra Contents Quantitative methods for Economics and Business University of Ferrara Academic year 2017-2018 Contents 1 Basics 2 3 4 5 Contents 1 Basics 2 3 4 5 Contents 1 Basics 2 3 4 5 Contents 1 Basics 2

More information

MATH 423 Linear Algebra II Lecture 33: Diagonalization of normal operators.

MATH 423 Linear Algebra II Lecture 33: Diagonalization of normal operators. MATH 423 Linear Algebra II Lecture 33: Diagonalization of normal operators. Adjoint operator and adjoint matrix Given a linear operator L on an inner product space V, the adjoint of L is a transformation

More information

SAMPLE OF THE STUDY MATERIAL PART OF CHAPTER 1 Introduction to Linear Algebra

SAMPLE OF THE STUDY MATERIAL PART OF CHAPTER 1 Introduction to Linear Algebra 1.1. Introduction SAMPLE OF THE STUDY MATERIAL PART OF CHAPTER 1 Introduction to Linear algebra is a specific branch of mathematics dealing with the study of vectors, vector spaces with functions that

More information

Chapter Two Elements of Linear Algebra

Chapter Two Elements of Linear Algebra Chapter Two Elements of Linear Algebra Previously, in chapter one, we have considered single first order differential equations involving a single unknown function. In the next chapter we will begin to

More information

The Spectral Theorem for normal linear maps

The Spectral Theorem for normal linear maps MAT067 University of California, Davis Winter 2007 The Spectral Theorem for normal linear maps Isaiah Lankham, Bruno Nachtergaele, Anne Schilling (March 14, 2007) In this section we come back to the question

More information

Linear Algebra Final Exam Solutions, December 13, 2008

Linear Algebra Final Exam Solutions, December 13, 2008 Linear Algebra Final Exam Solutions, December 13, 2008 Write clearly, with complete sentences, explaining your work. You will be graded on clarity, style, and brevity. If you add false statements to a

More information

Math Spring 2011 Final Exam

Math Spring 2011 Final Exam Math 471 - Spring 211 Final Exam Instructions The following exam consists of three problems, each with multiple parts. There are 15 points available on the exam. The highest possible score is 125. Your

More information

Notes on basis changes and matrix diagonalization

Notes on basis changes and matrix diagonalization Notes on basis changes and matrix diagonalization Howard E Haber Santa Cruz Institute for Particle Physics, University of California, Santa Cruz, CA 95064 April 17, 2017 1 Coordinates of vectors and matrix

More information

Math 25a Practice Final #1 Solutions

Math 25a Practice Final #1 Solutions Math 25a Practice Final #1 Solutions Problem 1. Suppose U and W are subspaces of V such that V = U W. Suppose also that u 1,..., u m is a basis of U and w 1,..., w n is a basis of W. Prove that is a basis

More information