Further Mathematical Methods (Linear Algebra) 2002


Solutions for Problem Sheet 10

In this Problem Sheet we calculated some left and right inverses and verified the theorems about them given in the lectures. We also calculated an SGI and proved one of the theorems that apply to such matrices.

1. To find the equations that u, v, w, x, y and z must satisfy so that

[1 −1 1; −1 2 1] [u v; w x; y z] = [1 0; 0 1],

we expand out this matrix equation to get

u − w + y = 1, −u + 2w + y = 0  and  v − x + z = 0, −v + 2x + z = 1,

and notice that we have two independent sets of equations. Hence, to find all of the right inverses of the matrix

A = [1 −1 1; −1 2 1],

i.e. all matrices R such that AR = I, we have to solve these sets of simultaneous equations. Now, as each of these sets consists of two equations in three variables, we select one of the variables in each set, say y and z, to be free parameters. Thus, mucking about with the equations, we find

u = 2 − 3y, w = 1 − 2y  and  v = 1 − 3z, x = 1 − 2z,

and so all of the right inverses of the matrix A are given by

R = [2−3y 1−3z; 1−2y 1−2z; y z],

where y, z ∈ R. (You might like to verify this by checking that AR = I, the identity matrix.)

We are also asked to verify that the matrix equation Ax = b has a solution for every vector b = [b₁, b₂]^t ∈ R². To do this, we note that for any such vector b ∈ R², the vector x = Rb is a solution to the matrix equation Ax = b, since

A(Rb) = (AR)b = Ib = b.

Consequently, for any one of the right inverses calculated above (i.e. any choice of y, z ∈ R) and any vector b = [b₁, b₂]^t ∈ R², the vector

x = Rb = [2b₁+b₂; b₁+b₂; 0] + (yb₁ + zb₂)[−3; −2; 1]

is a solution to the matrix equation Ax = b, as multiplying out gives Ax = b (the second term contributes nothing, since [−3, −2, 1]^t is in the null space of A).

Note: If you choose different variables to be the free parameters, you will get a general form for the right inverse that looks different to the one given here. However, they will be equivalent.
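To make the check concrete, here is a minimal numerical sketch in Python (using NumPy); the vector b below and the parameter values are arbitrary examples.

```python
import numpy as np

# Check of Question 1: every member of the family R found above is a right
# inverse of A, and each one solves Ax = b via x = Rb.
A = np.array([[1., -1., 1.],
              [-1., 2., 1.]])

def right_inverse(y, z):
    """The general right inverse found above, with free parameters y and z."""
    return np.array([[2 - 3*y, 1 - 3*z],
                     [1 - 2*y, 1 - 2*z],
                     [y,       z      ]], dtype=float)

b = np.array([4., -1.])
for y, z in [(0, 0), (1, -2), (3.5, 0.7)]:
    R = right_inverse(y, z)
    assert np.allclose(A @ R, np.eye(2))   # AR = I for every choice of y, z
    assert np.allclose(A @ (R @ b), b)     # and x = Rb solves Ax = b
```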

Thus we can see that the infinite number of different right inverses available to us give rise to an infinite number of different solutions to the matrix equation Ax = b. Indeed, the choice of right inverse (via the parameters y, z ∈ R) only affects the solutions through vectors in Lin{[−3, −2, 1]^t}, which is the null space of the matrix A. As such, the infinite number of different solutions only differ due to the relationship between the choice of right inverse and the corresponding vector in the null space of A.

2. To find the equations that u, v, w, x, y and z must satisfy so that

[u v w; x y z] [1 −1; −1 2; 1 1] = [1 0; 0 1],

we expand out this matrix equation to get

u − v + w = 1, −u + 2v + w = 0  and  x − y + z = 0, −x + 2y + z = 1,

and notice that we have two independent sets of equations. Hence, to find all of the left inverses of the matrix

A = [1 −1; −1 2; 1 1],

i.e. all matrices L such that LA = I, we have to solve these sets of simultaneous equations. Now, as each of these sets consists of two equations in three variables, we select one of the variables in each set, say w and z, to be free parameters. Thus, mucking about with the equations, we find

u = 2 − 3w, v = 1 − 2w  and  x = 1 − 3z, y = 1 − 2z,

and so all of the left inverses of the matrix A are given by

L = [2−3w 1−2w w; 1−3z 1−2z z],

where w, z ∈ R. (You might like to verify this by checking that LA = I, the identity matrix.)

Using this, we can find the solutions of the matrix equation Ax = b for an arbitrary vector b = [b₁, b₂, b₃]^t ∈ R³, since

Ax = b ⟹ LAx = Lb ⟹ Ix = Lb ⟹ x = Lb.

Therefore the desired solutions are given by

x = Lb = [2b₁ + b₂; b₁ + b₂] − (3b₁ + 2b₂ − b₃)[w; z],

and this appears to give us an infinite number of solutions (via w, z ∈ R), contrary to the second part of Question 10.

Note: If you choose different variables to be the free parameters, you will get a general form for the left inverse that looks different to the one given here. However, they will be equivalent.

Note: Having done Question 1, this should look familiar, since if A is an m×n matrix of rank m then it has a right inverse, i.e. there is an n×m matrix R such that AR = I (see Question 11). Now, taking the transpose of this, we have R^t A^t = I, i.e. the m×n matrix R^t is a left inverse of the n×m rank-m matrix A^t (see Question 10). Thus it should be clear that the desired left inverses here are given by R^t, where R was found in Question 1.
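Before resolving this apparent paradox, here is the matching Python/NumPy sketch for the left inverses just found; the vector b used is an example chosen to satisfy the consistency condition derived below.

```python
import numpy as np

# Check of Question 2: the family L found above consists of left inverses
# of A, and for a consistent b the solution Lb does not depend on w, z.
A = np.array([[1., -1.],
              [-1., 2.],
              [1., 1.]])

def left_inverse(w, z):
    """The general left inverse found above, with free parameters w and z."""
    return np.array([[2 - 3*w, 1 - 2*w, w],
                     [1 - 3*z, 1 - 2*z, z]], dtype=float)

for w, z in [(0, 0), (2, -1)]:
    assert np.allclose(left_inverse(w, z) @ A, np.eye(2))   # LA = I

# A consistent right-hand side: 3b1 + 2b2 - b3 = 0 here (3 + 4 - 7 = 0).
b = np.array([1., 2., 7.])
assert np.allclose(left_inverse(0, 0) @ b, left_inverse(2, -1) @ b)
```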

But we know that the matrix equation Ax = b is consistent (i.e. has solutions) if and only if b ∈ R(A). Hence, since ρ(A) = 2, we know that the range of A is a plane through the origin, given by

R(A) = { b = [b₁, b₂, b₃]^t : 3b₁ + 2b₂ − b₃ = 0 }.

That is, if the matrix equation Ax = b has a solution, then 3b₁ + 2b₂ − b₃ = 0, and as such this solution is given by

x = [2b₁ + b₂; b₁ + b₂],

which is unique (i.e. independent of the choice of left inverse via w, z ∈ R).

Alternatively, you may have noted that, given the above solutions to the matrix equation Ax = b, we can multiply through by A to get

Ax = [ b₁ − (3b₁+2b₂−b₃)(w − z); b₂ + (3b₁+2b₂−b₃)(w − 2z); (3b₁+2b₂) − (3b₁+2b₂−b₃)(w + z) ].

Thus it should be clear that the vector x will only be a solution to the system of equations if 3b₁ + 2b₂ − b₃ = 0, as this alone makes Ax = b. Consequently, when x represents a solution, it is given by x = [2b₁ + b₂; b₁ + b₂], which is unique.

3. To show that the matrix

A = [1 1 1; 1 −1 1]

has no left inverse, we have two methods available to us:

Method 1: À la Question 2, a left inverse exists if there are u, v, w, x, y and z that satisfy the matrix equation

[u v; w x; y z] [1 1 1; 1 −1 1] = [1 0 0; 0 1 0; 0 0 1].

Now, expanding out the top row and looking at the first and third equations, we require u and v to be such that

u + v = 1 and u + v = 0.

But these equations are inconsistent, and so there are no solutions for u and v, and hence no left inverse.

Method 2: From the lectures (or Question 10), we know that an m×n matrix has a left inverse only if its rank is n. Now, this is a 2×3 matrix, and so for a left inverse to exist we require that it has a rank of three. But its rank can be no more than min{2, 3} = 2, and so this matrix cannot have a left inverse.

Similarly, to show that the matrix has a right inverse, we use the analogue of Method 2: from the lectures (or Question 11), we know that an m×n matrix has a right inverse if and only if its rank is m.

Note: Or we could use the analogue of Method 1, but it seems pointless here, as we are just trying to show that the right inverse exists. (Note that using this method we would have to show that the six equations were consistent. But to do this we would probably end up solving them, and finding the general right inverse in the process, which is exactly what the question says we need not do!)
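Again, a small Python/NumPy sketch makes the rank criteria concrete; the explicit right inverse it checks against is the matrix R = A^t(AA^t)^{-1} computed next.

```python
import numpy as np

# Check of Question 3: A has no left inverse (rank 2 < 3) but does have a
# right inverse, and R = A^t (A A^t)^{-1} gives the one computed in the text.
A = np.array([[1., 1., 1.],
              [1., -1., 1.]])

m, n = A.shape
r = np.linalg.matrix_rank(A)
print("left inverse exists: ", r == n)     # False: rank <= min{2, 3} = 2 < 3
print("right inverse exists:", r == m)     # True: the rows are independent

R = A.T @ np.linalg.inv(A @ A.T)
assert np.allclose(R, np.array([[1., 1.],
                                [2., -2.],
                                [1., 1.]]) / 4)
assert np.allclose(A @ R, np.eye(2))       # AR = I, as it should be
```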

Now, this is a 2×3 matrix, and so for a right inverse to exist we require that it has a rank of two. In this case the matrix has two linearly independent rows (notice that the two row vectors are not scalar multiples of each other), and so it has the required rank, and hence a right inverse. Further, to find one of these right inverses we use the formula R = A^t(AA^t)^{-1} from the lectures (or Question 11). Thus, as

AA^t = [3 1; 1 3], we get (AA^t)^{-1} = (1/8)[3 −1; −1 3].

So, tidying up, we get

R = A^t(AA^t)^{-1} = [1 1; 1 −1; 1 1] (1/8)[3 −1; −1 3] = (1/4)[1 1; 2 −2; 1 1].

(Again, you can check that this is a right inverse of A by verifying that AR = I, the identity matrix.)

4. We are asked to find the strong generalised inverse of the matrix A given in the question. To do this, we note that if the column vectors of A are denoted by x₁, x₂, x₃ and x₄, then x₃ and x₄ are linear combinations of x₁ and x₂, where the first two column vectors of the matrix, i.e. x₁ and x₂, are linearly independent. Consequently, ρ(A) = 2, and so we use these two linearly independent column vectors to form a 4×2 matrix B of rank two. We then seek a 2×4 matrix C such that A = BC and ρ(C) = 2: its first two columns are those of the 2×2 identity matrix, and its last two columns contain the coefficients expressing x₃ and x₄ in terms of x₁ and x₂. (Notice that this matrix reflects how the column vectors of A are linearly dependent on the column vectors of B.) The strong generalised inverse of A, namely A^G, is then given by

A^G = C^t(CC^t)^{-1}(B^t B)^{-1}B^t,

as we saw in the lectures (or Question 13).
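As a sketch of how this recipe can be checked numerically, here is a Python/NumPy fragment; the matrices B and C below are illustrative placeholders of the right shapes and ranks, not the ones from this question.

```python
import numpy as np

# Strong generalised inverse via a rank factorisation A = BC with
# rank(B) = rank(C) = k.  Placeholder 4x2 and 2x4 factors of rank two.
B = np.array([[1., 0.],
              [0., 1.],
              [1., 1.],
              [1., 2.]])                   # 4x2, rank 2
C = np.array([[1., 0., 1., 1.],
              [0., 1., 1., 1.]])           # 2x4, rank 2
A = B @ C                                  # a 4x4 matrix of rank 2

AG = C.T @ np.linalg.inv(C @ C.T) @ np.linalg.inv(B.T @ B) @ B.T

assert np.allclose(A @ AG @ A, A)          # AA^G A = A
assert np.allclose(AG @ A @ AG, AG)        # A^G A A^G = A^G
assert np.allclose(AG, np.linalg.pinv(A))  # agrees with the Moore-Penrose inverse
```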

So we find, in turn, the 2×2 matrices CC^t and B^t B, their inverses (CC^t)^{-1} and (B^t B)^{-1}, and the products C^t(CC^t)^{-1} and (B^t B)^{-1}B^t. Multiplying these together then gives

A^G = C^t(CC^t)^{-1}(B^t B)^{-1}B^t,

the required strong generalised inverse. (You may like to check this by verifying that A^G A A^G = A^G, or indeed that AA^G A = A. But then again, you may have better things to do.)

Note: In Questions 1 (and 2) we found that a matrix can have many right (and left) inverses. However, the strong generalised inverse of a matrix is unique. (See Question 12.)

5. We are asked to show that:

If A has a right inverse, then the matrix A^t(AA^t)^{-1} is the strong generalised inverse of A.

To do this, we start by noting that, by Question 11, as A has a right inverse, this matrix exists. So, to establish the result, we need to show that this matrix satisfies the four conditions that need to hold for a matrix A^G to be a strong generalised inverse of the matrix A, as given in the definition from the lectures, i.e.

1. AA^G A = A.
2. A^G AA^G = A^G.
3. AA^G is an orthogonal projection of R^m onto R(A).
4. A^G A is an orthogonal projection of R^n parallel to N(A).

Now, one way to proceed would be to show that each of these conditions is satisfied by the matrix A^t(AA^t)^{-1}. (This would establish that it is a strong generalised inverse of A; the fact that it is the strong generalised inverse of A then follows by the uniqueness result proved in Question 12.) But, to save time, we shall adopt a different strategy here and utilise our knowledge of weak generalised inverses. That is, noting that all right inverses are weak generalised inverses (see the handout for Lecture 18), we know that they must have the following properties:

ARA = A.
AR is a projection of R^m onto R(A).
RA is a projection of R^n parallel to N(A).

Indeed, it is easy to see that RAR = R as well, since

RAR = R(AR) = RI = R.

Thus we only need to establish that the projections given above are orthogonal, i.e. we only need to show that the matrices AR and RA are symmetric. But, as we are only concerned with the right inverse given by R = A^t(AA^t)^{-1}, this just involves noting that:

Since AR = I, we have (AR)^t = I^t = I too, and so AR = (AR)^t.

(RA)^t = [A^t(AA^t)^{-1}A]^t = A^t[(AA^t)^{-1}]^t[A^t]^t = A^t[(AA^t)^t]^{-1}A = A^t(AA^t)^{-1}A = RA.

(Notice that this is the only part of the proof that requires the use of the special right inverse given by the matrix A^t(AA^t)^{-1}!)

Consequently, the right inverse of A given by the matrix A^t(AA^t)^{-1} is the strong generalised inverse of A (as required).

Further, we are asked to show that x = A^t(AA^t)^{-1}b is the solution of Ax = b nearest to the origin. To do this, we recall that if an m×n matrix A has a right inverse, then the matrix equation Ax = b has a solution for all vectors b ∈ R^m (see Questions 1 and 11). Indeed, following the analysis of Question 1, it should be clear that these solutions can be written as

x = A^t(AA^t)^{-1}b + u, where u ∈ N(A).

(Notice that, multiplying this expression by A, we get Ax = AA^t(AA^t)^{-1}b + Au = b + 0 = b, which confirms that these vectors are solutions.) Now, the vector A^t(AA^t)^{-1}b is in the range of the matrix A^t, and so, since R(A^t) = N(A)^⊥, the vectors A^t(AA^t)^{-1}b and u are orthogonal. Thus, using the Generalised Theorem of Pythagoras, we have

‖x‖² = ‖A^t(AA^t)^{-1}b + u‖² = ‖A^t(AA^t)^{-1}b‖² + ‖u‖² ⟹ ‖x‖ ≥ ‖A^t(AA^t)^{-1}b‖,

as ‖u‖ ≥ 0. That is, x = A^t(AA^t)^{-1}b is the solution of Ax = b nearest to the origin (as required). (I trust that it is obvious that the quantity ‖x‖ measures the distance of the solution x from the origin.)
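The following Python/NumPy sketch checks both halves of this question using the matrix of Question 3; the vector b and the multiples of the null space vector are arbitrary examples.

```python
import numpy as np

# Check of Question 5, with the matrix of Question 3: R = A^t (A A^t)^{-1}
# satisfies the four strong-generalised-inverse conditions, and x = Rb is
# the solution of Ax = b nearest to the origin.
A = np.array([[1., 1., 1.],
              [1., -1., 1.]])
R = A.T @ np.linalg.inv(A @ A.T)

assert np.allclose(A @ R @ A, A)           # 1. ARA = A
assert np.allclose(R @ A @ R, R)           # 2. RAR = R
assert np.allclose(A @ R, (A @ R).T)       # 3. AR symmetric (indeed AR = I)
assert np.allclose(R @ A, (R @ A).T)       # 4. RA symmetric

b = np.array([1., 5.])
x0 = R @ b
u = np.array([1., 0., -1.])                # N(A) is spanned by [1, 0, -1]^t
assert np.allclose(A @ u, 0)
for t in [1.0, -2.5, 10.0]:
    x = x0 + t * u                         # the other solutions of Ax = b
    assert np.allclose(A @ x, b)
    assert np.linalg.norm(x) > np.linalg.norm(x0)   # all further from 0
```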

Other Problems.

In the Other Problems we looked at a singular values decomposition and investigated how these can be used to calculate strong generalised inverses. We also look at some other results concerning such matrices.

6. To find the singular values decomposition of the matrix

A = [1 0; 0 1; 1 1],

we note that the matrix given by

A*A = [1 0 1; 0 1 1] [1 0; 0 1; 1 1] = [2 1; 1 2]

has eigenvalues given by

(2 − λ)² − 1 = 0 ⟹ λ = 2 ± 1,

and the corresponding eigenvectors are given by:

For λ = 1: The matrix equation (A*A − λI)x = 0 gives [1 1; 1 1][x; y] = [0; 0], and so we have x + y = 0. Thus, taking x to be the free parameter, we can see that the eigenvectors corresponding to this eigenvalue are of the form x[1, −1]^t.

For λ = 3: The matrix equation (A*A − λI)x = 0 gives [−1 1; 1 −1][x; y] = [0; 0], and so we have x − y = 0. Thus, taking x to be the free parameter, we can see that the eigenvectors corresponding to this eigenvalue are of the form x[1, 1]^t.

So an orthonormal set of eigenvectors corresponding to the positive eigenvalues λ = 1 and λ = 3 of the matrix A*A is {y₁, y₂}, where

y₁ = (1/√2)[1, −1]^t and y₂ = (1/√2)[1, 1]^t.

(Notice that the eigenvectors that we have found are already orthogonal, and so we just have to normalise them. You should have expected this, since the matrix A*A is Hermitian with two distinct eigenvalues.)

Thus, by Theorem 9.4, the positive eigenvalues of the matrix AA* are λ = 1 and λ = 3 too (note that, since AA* is a 3×3 matrix, it has three eigenvalues, and in this case the third eigenvalue must be zero), and the orthonormal set of eigenvectors corresponding to these eigenvalues is {x₁, x₂}, where, using the fact that xᵢ = (1/√λᵢ)Ayᵢ (as given by Theorem 9.4),

x₁ = (1/√2)[1, −1, 0]^t and x₂ = (1/√6)[1, 1, 2]^t.

Consequently, having found two appropriate sets of orthonormal eigenvectors corresponding to the positive eigenvalues of the matrices A*A and AA*, we can find the singular values decomposition of the matrix A as defined in Theorem 9.5, i.e.

A = √λ₁ x₁y₁^t + √λ₂ x₂y₂^t = (1/2)[1 −1; −1 1; 0 0] + (1/2)[1 1; 1 1; 2 2].

(Note: you can verify that this is correct by adding these two matrices together and checking that you do indeed get A.)
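This construction is easy to reproduce numerically; here is a Python/NumPy sketch of it (np.linalg.eigh supplies the eigenpairs of A*A, and the signs of the eigenvectors it returns are immaterial, since each term of the sum is sign-invariant).

```python
import numpy as np

# The SVD construction of Question 6 for A = [[1,0],[0,1],[1,1]]:
# eigenpairs of A^t A give the y_i, and x_i = (1/sqrt(lambda_i)) A y_i.
A = np.array([[1., 0.],
              [0., 1.],
              [1., 1.]])

lam, Y = np.linalg.eigh(A.T @ A)        # eigenvalues 1 and 3, eigenvectors as columns
X = (A @ Y) / np.sqrt(lam)              # columns x_i = A y_i / sqrt(lambda_i)

# A = sum_i sqrt(lambda_i) x_i y_i^t
A_rebuilt = sum(np.sqrt(l) * np.outer(x, y)
                for l, x, y in zip(lam, X.T, Y.T))
assert np.allclose(A_rebuilt, A)
print(np.round(lam, 6))                 # [1. 3.]
```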

Hence, to calculate the strong generalised inverse of A, we use the result given in Theorem 9.6 to get

A^G = (1/√λ₁) y₁x₁^t + (1/√λ₂) y₂x₂^t = (1/2)[1 −1 0; −1 1 0] + (1/6)[1 1 2; 1 1 2] = (1/3)[2 −1 1; −1 2 1].

Thus, using the relevant theorem from the lectures, the orthogonal projection of R³ onto R(A) is given by

AA^G = [1 0; 0 1; 1 1] (1/3)[2 −1 1; −1 2 1] = (1/3)[2 −1 1; −1 2 1; 1 1 2],

and the orthogonal projection of R² parallel to N(A) is given by

A^G A = (1/3)[2 −1 1; −1 2 1] [1 0; 0 1; 1 1] = (1/3)[3 0; 0 3] = I.

(Notice that both of these matrices are symmetric and idempotent.)
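A matching Python/NumPy sketch for this calculation follows; the comparison with np.linalg.pinv rests on the fact (see Question 12) that the strong generalised inverse is unique, and np.linalg.pinv computes the Moore-Penrose inverse, which satisfies the same four conditions.

```python
import numpy as np

# The strong generalised inverse of Question 6 from its singular value
# decomposition: A^G = sum_i (1/sqrt(lambda_i)) y_i x_i^t.
A = np.array([[1., 0.],
              [0., 1.],
              [1., 1.]])

lam, Y = np.linalg.eigh(A.T @ A)
X = (A @ Y) / np.sqrt(lam)

AG = sum((1.0 / np.sqrt(l)) * np.outer(y, x)
         for l, x, y in zip(lam, X.T, Y.T))

print(np.round(3 * AG))                     # [[2,-1,1],[-1,2,1]], i.e. 3 A^G
assert np.allclose(AG, np.linalg.pinv(A))
assert np.allclose(AG @ A, np.eye(2))       # A^G A = I here, as found above
P = A @ AG                                  # projection of R^3 onto R(A)
assert np.allclose(P @ P, P) and np.allclose(P, P.T)
```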

7. Given that the singular values decomposition of an m×n matrix A is

A = Σ_{i=1}^{k} √λᵢ xᵢyᵢ*,

we are asked to prove that the matrix given by

Σ_{i=1}^{k} (1/√λᵢ) yᵢxᵢ*

is the strong generalised inverse of A. To do this, we have to show that this expression gives a matrix which satisfies the four conditions that define a strong generalised inverse, i.e.

1. AA^G A = A.
2. A^G AA^G = A^G.
3. AA^G is an orthogonal projection of R^m onto R(A).
4. A^G A is an orthogonal projection of R^n parallel to N(A).

We shall proceed by showing that each of these conditions is satisfied by the matrix defined above. (This will establish that this matrix is a strong generalised inverse of A. The fact that it is the strong generalised inverse of A then follows by the uniqueness result proved in Question 12.)

1: It is fairly easy to see that the first condition is satisfied, since

AA^G A = [Σ_p √λ_p x_p y_p*][Σ_q (1/√λ_q) y_q x_q*][Σ_r √λ_r x_r y_r*]
       = Σ_p Σ_q Σ_r (√λ_p √λ_r / √λ_q) x_p (y_p* y_q)(x_q* x_r) y_r*
       = Σ_p √λ_p x_p y_p*     (as the xᵢ and yᵢ form orthonormal sets)
       = A,

as required.

2: It is also fairly easy to see that the second condition is satisfied, since

A^G AA^G = [Σ_p (1/√λ_p) y_p x_p*][Σ_q √λ_q x_q y_q*][Σ_r (1/√λ_r) y_r x_r*]
         = Σ_p Σ_q Σ_r (√λ_q / (√λ_p √λ_r)) y_p (x_p* x_q)(y_q* y_r) x_r*
         = Σ_p (1/√λ_p) y_p x_p*     (as the xᵢ and yᵢ form orthonormal sets)
         = A^G,

as required.

Remark: Throughout this course, when we have been asked to show that a matrix represents an orthogonal projection, we have shown that it is idempotent and symmetric. But we have only developed the theory of orthogonal projections for real matrices, and in this case our criterion suffices. However, we could ask when a complex matrix represents an orthogonal projection, and we may suspect that such matrices will be Hermitian. To avoid basing the rest of this proof on mere suspicions, we shall now restrict our attention to the case where the eigenvectors xᵢ and yᵢ (for i = 1, ..., k) of the matrices AA* and A*A are in R^m and R^n respectively. (Recall that, since the matrices AA* and A*A are Hermitian, we already know that their eigenvalues are real; we also adopted this position in Problem Sheet 8.) Under this assumption, we can take the matrices xᵢxᵢ* and yᵢyᵢ* (for i = 1, ..., k) to be xᵢxᵢ^t and yᵢyᵢ^t respectively in the proofs of conditions 3 and 4.
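Before proving conditions 3 and 4 in general, it may help to see the two sums they involve numerically; the following Python/NumPy sketch checks them for the matrix of Question 6.

```python
import numpy as np

# For A = sum_i sqrt(lambda_i) x_i y_i^t, conditions 3 and 4 reduce to the
# outer-product sums AA^G = sum_i x_i x_i^t and A^G A = sum_i y_i y_i^t.
A = np.array([[1., 0.],
              [0., 1.],
              [1., 1.]])

lam, Y = np.linalg.eigh(A.T @ A)
X = (A @ Y) / np.sqrt(lam)
AG = sum((1 / np.sqrt(l)) * np.outer(y, x) for l, x, y in zip(lam, X.T, Y.T))

P = sum(np.outer(x, x) for x in X.T)       # claimed form of AA^G
Q = sum(np.outer(y, y) for y in Y.T)       # claimed form of A^G A

assert np.allclose(A @ AG, P) and np.allclose(AG @ A, Q)
assert np.allclose(P @ P, P) and np.allclose(P, P.T)   # idempotent, symmetric
assert np.allclose(Q @ Q, Q) and np.allclose(Q, Q.T)
```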

3: To show that AA^G is an orthogonal projection of R^m onto R(A), we note that AA^G can be written as

AA^G = [Σᵢ √λᵢ xᵢyᵢ^t][Σⱼ (1/√λⱼ) yⱼxⱼ^t] = Σᵢ Σⱼ (√λᵢ/√λⱼ) xᵢ(yᵢ^t yⱼ)xⱼ^t = Σᵢ xᵢxᵢ^t,

since the yⱼ form an orthonormal set. (Notice that this gives us a particularly simple expression for an orthogonal projection of R^m onto R(A).) Thus AA^G is idempotent, as

(AA^G)² = [Σᵢ xᵢxᵢ^t][Σⱼ xⱼxⱼ^t] = Σᵢ Σⱼ xᵢ(xᵢ^t xⱼ)xⱼ^t = Σᵢ xᵢxᵢ^t = AA^G,

since the xⱼ form an orthonormal set, and it is symmetric, as

(AA^G)^t = [Σᵢ xᵢxᵢ^t]^t = Σᵢ (xᵢxᵢ^t)^t = Σᵢ xᵢxᵢ^t = AA^G,

assuming that the xᵢ are in R^m. Also, we can see that R(A) = R(AA^G), as:

If u ∈ R(A), then there is a vector v such that u = Av. Thus we can write u = Σᵢ √λᵢ xᵢyᵢ^t v, and multiplying both sides of this expression by AA^G we get

AA^G u = [Σⱼ xⱼxⱼ^t][Σᵢ √λᵢ xᵢyᵢ^t v] = Σᵢ √λᵢ xᵢyᵢ^t v = u,

i.e. u = AA^G u. (Alternatively: as u = Av, we have AA^G u = AA^G Av, and so by condition 1 this gives AA^G u = Av = u, as desired.) Thus u ∈ R(AA^G), and so R(A) ⊆ R(AA^G).

If u ∈ R(AA^G), then there is a vector v such that u = AA^G v. Thus, as we can write u = A(A^G v), we can see that u ∈ R(A), and so R(AA^G) ⊆ R(A).

Consequently, AA^G is an orthogonal projection of R^m onto R(A) (as required).

4: To show that A^G A is an orthogonal projection of R^n parallel to N(A), we note that A^G A can be written as

A^G A = [Σᵢ (1/√λᵢ) yᵢxᵢ^t][Σⱼ √λⱼ xⱼyⱼ^t] = Σᵢ Σⱼ (√λⱼ/√λᵢ) yᵢ(xᵢ^t xⱼ)yⱼ^t = Σᵢ yᵢyᵢ^t,

since the xⱼ form an orthonormal set. (Again, this gives us a particularly simple expression for an orthogonal projection of R^n parallel to N(A).) Thus A^G A is idempotent, as

(A^G A)² = [Σᵢ yᵢyᵢ^t][Σⱼ yⱼyⱼ^t] = Σᵢ yᵢ(yᵢ^t yⱼ)yⱼ^t = Σᵢ yᵢyᵢ^t = A^G A,

since the yⱼ form an orthonormal set, and it is symmetric, as

(A^G A)^t = [Σᵢ yᵢyᵢ^t]^t = Σᵢ (yᵢyᵢ^t)^t = Σᵢ yᵢyᵢ^t = A^G A,

assuming the yᵢ are in R^n. Also, we can see that N(A) = N(A^G A), as:

If u ∈ N(A), then Au = 0, and so clearly A^G Au = 0 too. Thus u ∈ N(A^G A), and so N(A) ⊆ N(A^G A).

If u ∈ N(A^G A), then A^G Au = 0. As such, we have AA^G Au = 0, and so by condition 1 we have Au = 0. Thus N(A^G A) ⊆ N(A).

Consequently, A^G A is an orthogonal projection of R^n parallel to N(A) (as required).

8. Suppose that the real matrices A and B are such that AB^t B = 0. We are asked to prove that AB^t = 0. To do this, we note that for any real matrix C with column vectors c₁, c₂, ..., c_m, the diagonal elements of the matrix product C^t C are given by c₁^t c₁, c₂^t c₂, ..., c_m^t c_m. So, if C^t C = 0, all of the elements of the matrix C^t C are zero, and in particular this means that each of the diagonal elements is zero. But then, for 1 ≤ i ≤ m, we have

cᵢ^t cᵢ = ‖cᵢ‖² = 0 ⟹ cᵢ = 0,

and so C^t C = 0 implies that C = 0, as all of the column vectors must be null. Thus, noting that

AB^t B = 0 ⟹ AB^t BA^t = 0 ⟹ (BA^t)^t BA^t = 0,

we can deduce that BA^t = 0. Consequently, taking the transpose of both sides of this expression, we have AB^t = 0 (as required).

9. Let A be an m×n matrix. We are asked to show that the general least squares solution of the matrix equation Ax = b is given by

x = A^G b + (I − A^G A)z,

where z is any vector in R^n. To do this, we note that a least squares solution to the matrix equation Ax = b is a vector x which minimises the quantity ‖Ax − b‖. Thus, as we have seen before, the vector Ax obtained from an orthogonal projection of b onto R(A) will give such a least squares solution. So, as AA^G is an orthogonal projection onto R(A), we have

Ax = AA^G b ⟹ x = A^G b is a particular least squares solution.

Consequently, the general form of a least squares solution to Ax = b will be given by x = A^G b + (I − A^G A)z for any vector z ∈ R^n (as required).

Note: This is indeed the general form of a least squares solution to Ax = b, since it satisfies the equation Ax = AA^G b given above. Not convinced? Using the fact that AA^G A = A, we have

Ax = A[A^G b + (I − A^G A)z] = AA^G b + (A − AA^G A)z = AA^G b + (A − A)z = AA^G b + 0 = AA^G b,

as expected.
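Here is a Python/NumPy sketch of this result; the rank-deficient matrix A and the inconsistent vector b below are illustrative placeholders, and np.linalg.pinv stands in for A^G (it satisfies the same four conditions).

```python
import numpy as np

# The general least squares solution x = A^G b + (I - A^G A) z: every
# choice of z gives the same, minimal, residual.
A = np.array([[1., 1.],
              [2., 2.],
              [0., 0.]])                  # rank 1, so N(A) is nontrivial
b = np.array([1., 0., 3.])                # not in R(A)

AG = np.linalg.pinv(A)
x0 = AG @ b                               # particular least squares solution
P = np.eye(2) - AG @ A                    # projection onto N(A)

rng = np.random.default_rng(1)
res0 = np.linalg.norm(A @ x0 - b)
for _ in range(3):
    x = x0 + P @ rng.standard_normal(2)
    assert np.isclose(np.linalg.norm(A @ x - b), res0)   # same residual

# And it is minimal: a few arbitrary vectors do no better.
for _ in range(3):
    x = rng.standard_normal(2)
    assert np.linalg.norm(A @ x - b) >= res0 - 1e-12
```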

Harder Problems.

Here are some results from the lectures on generalised inverses that you might have tried to prove.

Note: In the next two questions we are asked to show that three statements are equivalent. Now, saying that two statements p and q are equivalent is the same as saying p ⟺ q (i.e. p iff q). Thus, if we have three statements p₁, p₂ and p₃ and we are asked to show that they are equivalent, then we need to establish that p₁ ⟺ p₂, p₂ ⟺ p₃ and p₁ ⟺ p₃. Or, alternatively, we can show that p₁ ⟹ p₂ ⟹ p₃ ⟹ p₁. (Can you see why?)

10. We are asked to prove that the following statements about an m×n matrix A are equivalent:

1. A has a left inverse, i.e. there is a matrix L such that LA = I. (For example, (A^t A)^{-1}A^t.)
2. Ax = b has a unique solution when it has a solution.
3. A has rank n.

Proof: We prove this theorem using the method described in the note above.

1 ⟹ 2: Suppose that the matrix A has a left inverse L, and that the matrix equation Ax = b has a solution. By definition, L is such that LA = I, and so multiplying both sides of our matrix equation by L we get

LAx = Lb ⟹ Ix = Lb ⟹ x = Lb.

Thus a solution to the matrix equation Ax = b, when it has one, is Lb. However, a matrix A can have many left inverses, and so to establish the uniqueness of the solutions, consider two (possibly distinct) left inverses L and L'. Using these, the matrix equation Ax = b will have two (possibly distinct) solutions given by x = Lb and x' = L'b, and so to establish uniqueness we must show that their difference, i.e. x − x' = (L − L')b, is 0. To do this, we start by noting that

Ax − Ax' = A(L − L')b ⟹ b − b = A(L − L')b ⟹ 0 = A(L − L')b,

and so the vector (L − L')b ∈ N(A), i.e. the vectors x and x' can only differ by a vector in N(A). Then we can see that:

For any vector u ∈ N(A), we have Au = 0, and so LAu = 0 too. However, as LA = I, this means that u = 0, and so N(A) ⊆ {0}.

As N(A) is a vector space, 0 ∈ N(A), which means that {0} ⊆ N(A).

and so N(A) = {0}. Thus the vectors x and x' can only differ by the vector 0, i.e. we have x − x' = 0. Consequently, Lb is the unique solution to the matrix equation Ax = b when it has a solution (as required). (We should have expected N(A) = {0}, since, looking ahead to 3, we have ρ(A) = n. Thus, by the rank-nullity theorem, ρ(A) + η(A) = n ⟹ n + η(A) = n ⟹ η(A) = 0, and so the null space of A must be zero-dimensional, i.e. N(A) = {0}.)

2 ⟹ 3: The matrix equation Ax = 0 has a solution, namely x = 0, and so by 2 this solution is unique. Thus the null space of A is {0}, and so η(A) = 0. Hence, by the Dimension Theorem, the rank of A is given by

ρ(A) = n − η(A) = n − 0 = n,

as required.

3 ⟹ 1: Suppose that the rank of the m×n matrix A is n. Then the matrix A^t A is invertible (as ρ(A^t A) = ρ(A) = n and A^t A is n×n), and so the matrix given by (A^t A)^{-1}A^t exists. Further, this matrix is a left inverse, since

[(A^t A)^{-1}A^t]A = (A^t A)^{-1}[A^t A] = I,

as required. ∎

Remark: An immediate consequence of this theorem is that if A is an m×n matrix with m < n, then, as ρ(A) ≤ min{m, n} = m < n, A can have no left inverse. (Compare this with what we found in Question 3.) Further, when m = n, A is a square matrix, and so if it has full rank it will have a left inverse which is identical to the inverse A^{-1} that we normally consider. (Since, if A is a square matrix with full rank, then it has a left inverse L such that LA = I and an inverse A^{-1} such that A^{-1}A = I. Equating these two expressions we get LA = A^{-1}A, and so multiplying both sides by A^{-1} (say) we can see that L = A^{-1}, as expected.)

11. We are asked to prove that the following statements about an m×n matrix A are equivalent:

1. A has a right inverse, i.e. there is a matrix R such that AR = I. (For example, A^t(AA^t)^{-1}.)
2. Ax = b has a solution for every b.
3. A has rank m.

Proof: We prove this theorem using the method described in the note above.

1 ⟹ 2: Suppose that the matrix A has a right inverse R. Then, for any b ∈ R^m, we have

b = Ib = (AR)b = A(Rb),

as AR = I. Thus x = Rb is a solution to the matrix equation Ax = b for every b, as required.

2 ⟹ 3: Suppose that the matrix equation Ax = b has a solution for all b ∈ R^m. Recalling the fact that the matrix equation Ax = b has a solution iff b ∈ R(A), this implies that R(A) = R^m. Thus the rank of A is m, as required.

3 ⟹ 1: Suppose that the rank of the m×n matrix A is m. Then the matrix AA^t is invertible (as ρ(AA^t) = ρ(A) = m and AA^t is m×m), and so the matrix given by A^t(AA^t)^{-1} exists. Further, this matrix is a right inverse, since

A[A^t(AA^t)^{-1}] = [AA^t](AA^t)^{-1} = I,

as required. ∎

Remark: An immediate consequence of this theorem is that if A is an m×n matrix with n < m, then, as ρ(A) ≤ min{m, n} = n < m, A can have no right inverse. (Compare this with what we found in Question 3.) Further, when m = n, A is a square matrix, and so if it has full rank it will have a right inverse which is identical to the inverse A^{-1} that we normally consider. (Since, if A is such a square matrix with full rank, then it has a right inverse R such that AR = I and an inverse A^{-1} such that AA^{-1} = I. Equating these two expressions we get AR = AA^{-1}, and so multiplying both sides by A^{-1} (say) we can see that R = A^{-1}, as expected.)

12. We are asked to prove that a matrix A has exactly one strong generalised inverse. To do this, let us suppose that we have two matrices G and H which both have the properties attributed to a strong generalised inverse of A (there are four such properties, as listed in Question 5 above). Thus, working on the first two terms of G = GAG, we can see that

G = GAG                (Property 2)
  = (GA)^t G           (Property 4, as orthogonal projections are symmetric)
  = A^t G^t G
  = (AHA)^t G^t G      (Property 1, recall that H is an SGI too)
  = A^t H^t A^t G^t G
  = (HA)^t (GA)^t G
  = (HA)(GA)G          (Property 4 again, as orthogonal projections are symmetric)
  = H(AGA)G
  = HAG,               (Property 1)

whereas, working on the last two terms of H = HAH, we have

H = HAH                (Property 2)
  = H(AH)^t            (Property 3, as orthogonal projections are symmetric)
  = HH^t A^t
  = HH^t (AGA)^t       (Property 1, recall that G is an SGI too)
  = HH^t A^t G^t A^t
  = H(AH)^t (AG)^t
  = H(AH)(AG)          (Property 3 again, as orthogonal projections are symmetric)
  = H(AHA)G
  = HAG,               (Property 1)

and we have made prodigious use of the fact that, for any two matrices X and Y, (XY)^t = Y^t X^t. Consequently,

H = HAG = G,

and so the strong generalised inverse of a matrix is unique (as required).

13. We are asked to consider an m×n matrix A that has been decomposed into the product of an m×k matrix B and a k×n matrix C such that the ranks of B and C are both k. Under these circumstances, we need to show that the matrix

A^G = C^t(CC^t)^{-1}(B^t B)^{-1}B^t

is a strong generalised inverse of A. To do this, we have to establish that this matrix satisfies the four conditions that define a strong generalised inverse A^G of A, i.e.

1. AA^G A = A.
2. A^G AA^G = A^G.
3. AA^G is an orthogonal projection of R^m onto R(A).
4. A^G A is an orthogonal projection of R^n parallel to N(A).

We shall proceed by showing that each of these properties is satisfied by the matrix defined above.

1: It is easy to see that the first property is satisfied, since

AA^G A = BC[C^t(CC^t)^{-1}(B^t B)^{-1}B^t]BC = B[CC^t(CC^t)^{-1}][(B^t B)^{-1}B^t B]C = BC = A,

as required.

2: It is also easy to see that the second property is satisfied, since

A^G AA^G = [C^t(CC^t)^{-1}(B^t B)^{-1}B^t]BC[C^t(CC^t)^{-1}(B^t B)^{-1}B^t]
         = C^t(CC^t)^{-1}[(B^t B)^{-1}B^t B][CC^t(CC^t)^{-1}](B^t B)^{-1}B^t
         = C^t(CC^t)^{-1}(B^t B)^{-1}B^t = A^G,

as required.

For the last two parts, we recall that a matrix P is an orthogonal projection of R^m onto R(A) (parallel to N(A)) if:

a. P is idempotent (i.e. P² = P).
b. P is symmetric (i.e. P^t = P).
c. R(P) = R(A).

So, using this, we have:

3: To show that AA^G is an orthogonal projection of R^m onto R(A), we note that AA^G can be written as

AA^G = [BC][C^t(CC^t)^{-1}(B^t B)^{-1}B^t] = B[CC^t(CC^t)^{-1}](B^t B)^{-1}B^t = B(B^t B)^{-1}B^t.

Thus AA^G is idempotent, as

(AA^G)² = [B(B^t B)^{-1}B^t][B(B^t B)^{-1}B^t] = B(B^t B)^{-1}[B^t B(B^t B)^{-1}]B^t = B(B^t B)^{-1}B^t = AA^G,

and it is symmetric, as

(AA^G)^t = [B(B^t B)^{-1}B^t]^t = B[(B^t B)^{-1}]^t B^t = B(B^t B)^{-1}B^t = AA^G,

where we have used the facts that, for any matrices X and Y, (XY)^t = Y^t X^t, and, for any invertible square matrix X, (X^{-1})^t = (X^t)^{-1}. Also, we can see that R(AA^G) = R(A), as:

If x ∈ R(A), then there is a vector y ∈ R^n such that x = Ay. But from the first property we know that A = AA^G A, and so x = AA^G Ay = AA^G(Ay). Thus there exists a vector, namely z = Ay, such that x = AA^G z, and so x ∈ R(AA^G). Consequently, R(A) ⊆ R(AA^G).

If x ∈ R(AA^G), then there is a vector y ∈ R^m such that x = AA^G y = A(A^G y). Thus there exists a vector, namely z = A^G y, such that x = Az, and so x ∈ R(A). Consequently, R(AA^G) ⊆ R(A).

Thus AA^G is an orthogonal projection of R^m onto R(A), as required.

4: To show that A^G A is an orthogonal projection of R^n parallel to N(A), we note that A^G A can be written as

A^G A = [C^t(CC^t)^{-1}(B^t B)^{-1}B^t][BC] = C^t(CC^t)^{-1}[(B^t B)^{-1}B^t B]C = C^t(CC^t)^{-1}C.

Thus A^G A is idempotent, as

(A^G A)² = [C^t(CC^t)^{-1}C][C^t(CC^t)^{-1}C] = C^t(CC^t)^{-1}[CC^t(CC^t)^{-1}]C = C^t(CC^t)^{-1}C = A^G A,

and it is symmetric, as

(A^G A)^t = [C^t(CC^t)^{-1}C]^t = C^t[(CC^t)^{-1}]^t C = C^t(CC^t)^{-1}C = A^G A,

where we have again used the facts that (XY)^t = Y^t X^t and (X^{-1})^t = (X^t)^{-1}. Also, we can see that R(A^G A) = R(A^t), as:

If x ∈ R(A^t), then there is a vector y ∈ R^m such that x = A^t y. But from the first property we know that A = AA^G A, and so

x = (AA^G A)^t y = (A^G A)^t A^t y = A^G A A^t y = A^G A(A^t y)

(recall that (A^G A)^t = A^G A by the symmetry property established above). Thus there exists a vector, namely z = A^t y, such that x = A^G Az, and so x ∈ R(A^G A). Consequently, R(A^t) ⊆ R(A^G A).

If x ∈ R(A^G A), then there is a vector y ∈ R^n such that

x = A^G Ay = (A^G A)^t y = A^t(A^G)^t y = A^t[(A^G)^t y]

(notice that we have used the fact that (A^G A)^t = A^G A again). Thus there exists a vector, namely z = (A^G)^t y, such that x = A^t z, and so x ∈ R(A^t). Consequently, R(A^G A) ⊆ R(A^t).

Thus A^G A is an orthogonal projection of R^n onto R(A^t), i.e. parallel to R(A^t)^⊥. However, recall that R(A^t)^⊥ = N(A), and so we have established that A^G A is an orthogonal projection of R^n parallel to N(A), as required.
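The uniqueness proved in Question 12 can be seen in practice: the formula just verified and an SVD-based computation produce the same matrix. A short Python/NumPy sketch, with placeholder full-rank factors B and C:

```python
import numpy as np

# The rank-factorisation formula of Question 13 and NumPy's SVD-based
# pseudoinverse yield one and the same strong generalised inverse.
B = np.array([[1., 0.],
              [1., 1.],
              [0., 2.]])                   # 3x2, rank 2
C = np.array([[1., 0., 1.],
              [0., 1., 1.]])               # 2x3, rank 2
A = B @ C

AG = C.T @ np.linalg.inv(C @ C.T) @ np.linalg.inv(B.T @ B) @ B.T
assert np.allclose(AG, np.linalg.pinv(A))  # unique, however it is computed
```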

14. Let us suppose that Ax = b is an inconsistent set of equations. We are asked to show that x = A^G b is the least squares solution that is closest to the origin. To do this, we note that a least squares solution to the matrix equation Ax = b is given by

x = A^G b + (I − A^G A)z,

for any vector z (see Question 9), and we need to establish that, of all these solutions, the one given by x = A^G b is closest to the origin. To do this, we use the fact that A^G = A^G AA^G to see that

A^G b = A^G AA^G b = A^G A(A^G b),

and so A^G b ∈ N(A)^⊥. (Too quick? Spelling this out: there is a vector, namely y = A^G b, such that A^G Ay = A^G b, and so A^G b ∈ R(A^G A). But we know that A^G A is an orthogonal projection parallel to N(A), and so R(A^G A) = N(A)^⊥. Thus A^G b ∈ N(A)^⊥, as claimed.) Then, as A^G A is an orthogonal projection parallel to N(A), the matrix I − A^G A is an orthogonal projection onto N(A), and so we can see that (I − A^G A)z ∈ N(A) for all vectors z. Thus the vectors A^G b and (I − A^G A)z are orthogonal, and so, applying the Generalised Theorem of Pythagoras, we get

‖x‖² = ‖A^G b‖² + ‖(I − A^G A)z‖² ⟹ ‖x‖ ≥ ‖A^G b‖,

as ‖(I − A^G A)z‖ ≥ 0. Consequently, the least squares solution given by x = A^G b is the one which is closest to the origin.

Is it necessary that the matrix equation Ax = b is inconsistent? Well, clearly not! To see why, we note that if the matrix equation Ax = b is consistent, i.e. b ∈ R(A), it will have solutions given by

x = A^G b + u, where u ∈ N(A).

(We can see that these are the solutions since, multiplying both sides of this expression by the matrix A, we get Ax = AA^G b + Au = b + 0 = b, where Au = 0 as u ∈ N(A), and AA^G b = b as b ∈ R(A) and AA^G is an orthogonal projection onto R(A).) Then, since the vector A^G b can be written as A^G b = A^G AA^G b = A^G A(A^G b), we have A^G b ∈ N(A)^⊥ (as seen above), and so the vectors A^G b and u are orthogonal. Thus, applying the Generalised Theorem of Pythagoras, we get

‖x‖² = ‖A^G b‖² + ‖u‖² ⟹ ‖x‖ ≥ ‖A^G b‖,

as ‖u‖ ≥ 0. So, once again, the solution x = A^G b is the one which is closest to the origin.
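Finally, a Python/NumPy sketch of this minimal-norm property; the inconsistent system below is an illustrative placeholder, with np.linalg.pinv standing in for A^G.

```python
import numpy as np

# Among all least squares solutions x = A^G b + (I - A^G A) z, the choice
# z = 0, i.e. x = A^G b, has the smallest norm.
A = np.array([[1., 1.],
              [1., 1.],
              [1., 1.]])                  # rank 1, so N(A) is nontrivial
b = np.array([1., 2., 4.])                # not in R(A): the system is inconsistent

AG = np.linalg.pinv(A)
x0 = AG @ b
P = np.eye(2) - AG @ A                    # orthogonal projection onto N(A)

rng = np.random.default_rng(2)
for _ in range(5):
    x = x0 + P @ rng.standard_normal(2)   # another least squares solution
    assert np.linalg.norm(x) >= np.linalg.norm(x0) - 1e-12
```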


More information

Spring 2014 Math 272 Final Exam Review Sheet

Spring 2014 Math 272 Final Exam Review Sheet Spring 2014 Math 272 Final Exam Review Sheet You will not be allowed use of a calculator or any other device other than your pencil or pen and some scratch paper. Notes are also not allowed. In kindness

More information

Eigenvectors and Hermitian Operators

Eigenvectors and Hermitian Operators 7 71 Eigenvalues and Eigenvectors Basic Definitions Let L be a linear operator on some given vector space V A scalar λ and a nonzero vector v are referred to, respectively, as an eigenvalue and corresponding

More information

Singular Value Decomposition

Singular Value Decomposition Singular Value Decomposition Motivatation The diagonalization theorem play a part in many interesting applications. Unfortunately not all matrices can be factored as A = PDP However a factorization A =

More information

Linear Systems. Class 27. c 2008 Ron Buckmire. TITLE Projection Matrices and Orthogonal Diagonalization CURRENT READING Poole 5.4

Linear Systems. Class 27. c 2008 Ron Buckmire. TITLE Projection Matrices and Orthogonal Diagonalization CURRENT READING Poole 5.4 Linear Systems Math Spring 8 c 8 Ron Buckmire Fowler 9 MWF 9: am - :5 am http://faculty.oxy.edu/ron/math//8/ Class 7 TITLE Projection Matrices and Orthogonal Diagonalization CURRENT READING Poole 5. Summary

More information

Definition 2.3. We define addition and multiplication of matrices as follows.

Definition 2.3. We define addition and multiplication of matrices as follows. 14 Chapter 2 Matrices In this chapter, we review matrix algebra from Linear Algebra I, consider row and column operations on matrices, and define the rank of a matrix. Along the way prove that the row

More information

The Jordan Normal Form and its Applications

The Jordan Normal Form and its Applications The and its Applications Jeremy IMPACT Brigham Young University A square matrix A is a linear operator on {R, C} n. A is diagonalizable if and only if it has n linearly independent eigenvectors. What happens

More information

1 Matrices and vector spaces

1 Matrices and vector spaces Matrices and vector spaces. Which of the following statements about linear vector spaces are true? Where a statement is false, give a counter-example to demonstrate this. (a) Non-singular N N matrices

More information

1. What is the determinant of the following matrix? a 1 a 2 4a 3 2a 2 b 1 b 2 4b 3 2b c 1. = 4, then det

1. What is the determinant of the following matrix? a 1 a 2 4a 3 2a 2 b 1 b 2 4b 3 2b c 1. = 4, then det What is the determinant of the following matrix? 3 4 3 4 3 4 4 3 A 0 B 8 C 55 D 0 E 60 If det a a a 3 b b b 3 c c c 3 = 4, then det a a 4a 3 a b b 4b 3 b c c c 3 c = A 8 B 6 C 4 D E 3 Let A be an n n matrix

More information

JUST THE MATHS UNIT NUMBER 9.8. MATRICES 8 (Characteristic properties) & (Similarity transformations) A.J.Hobson

JUST THE MATHS UNIT NUMBER 9.8. MATRICES 8 (Characteristic properties) & (Similarity transformations) A.J.Hobson JUST THE MATHS UNIT NUMBER 9.8 MATRICES 8 (Characteristic properties) & (Similarity transformations) by A.J.Hobson 9.8. Properties of eigenvalues and eigenvectors 9.8. Similar matrices 9.8.3 Exercises

More information

Appendix A: Matrices

Appendix A: Matrices Appendix A: Matrices A matrix is a rectangular array of numbers Such arrays have rows and columns The numbers of rows and columns are referred to as the dimensions of a matrix A matrix with, say, 5 rows

More information

1. Let m 1 and n 1 be two natural numbers such that m > n. Which of the following is/are true?

1. Let m 1 and n 1 be two natural numbers such that m > n. Which of the following is/are true? . Let m and n be two natural numbers such that m > n. Which of the following is/are true? (i) A linear system of m equations in n variables is always consistent. (ii) A linear system of n equations in

More information

linearly indepedent eigenvectors as the multiplicity of the root, but in general there may be no more than one. For further discussion, assume matrice

linearly indepedent eigenvectors as the multiplicity of the root, but in general there may be no more than one. For further discussion, assume matrice 3. Eigenvalues and Eigenvectors, Spectral Representation 3.. Eigenvalues and Eigenvectors A vector ' is eigenvector of a matrix K, if K' is parallel to ' and ' 6, i.e., K' k' k is the eigenvalue. If is

More information

1 Last time: determinants

1 Last time: determinants 1 Last time: determinants Let n be a positive integer If A is an n n matrix, then its determinant is the number det A = Π(X, A)( 1) inv(x) X S n where S n is the set of n n permutation matrices Π(X, A)

More information

Rigid Geometric Transformations

Rigid Geometric Transformations Rigid Geometric Transformations Carlo Tomasi This note is a quick refresher of the geometry of rigid transformations in three-dimensional space, expressed in Cartesian coordinates. 1 Cartesian Coordinates

More information

Math 314H EXAM I. 1. (28 points) The row reduced echelon form of the augmented matrix for the system. is the matrix

Math 314H EXAM I. 1. (28 points) The row reduced echelon form of the augmented matrix for the system. is the matrix Math 34H EXAM I Do all of the problems below. Point values for each of the problems are adjacent to the problem number. Calculators may be used to check your answer but not to arrive at your answer. That

More information