Matrix Theory. A. Holst, V. Ufnarovski


5.5 Hints and answers

1. There are two different approaches. In the first one write A as a block of rows and note that in B = E_ij A all rows different from i are equal to zero. What is in the row i? For C = A E_ij one needs to consider the columns instead. In the second approach write A as A = Σ_{k,l} a_kl E_kl and use E_ij E_kl = δ_jk E_il. Note that for the equality B = C we need almost all elements in those matrices to be equal to zero (the element on the intersection of the row i and the column j is the only possible exception). Answer: for the equality one needs a_jl = 0 for l ≠ j, a_ki = 0 for k ≠ i and a_jj = a_ii.

2. It is sufficient to use X = E_ij and apply the previous exercise.

3. An example was in the section 7. If A were invertible then AB = O would give B = A^{-1}(AB) = O, the contradiction.

4. Use the previous exercise taking B − C as B.

5. The approach (A_1, …, A_n)B = (A_1B, …, A_nB) is wrong, because the product A_iB does not exist if n > 1. Consider instead the first column. Its i-th element is Σ_k a_ik b_k = Σ_k b_k a_ik, thus the column can be written as Σ_k b_k A_k, which is a linear combination of the columns A_k in A.

6. Study instead (A(A + B)^{-1}B)^{-1} and use (XYZ)^{-1} = Z^{-1}Y^{-1}X^{-1}.

7. One approach for the inverse can be found in the section 8. Another approach: write the inverse X in the triangular form with unknown variables, consider AX = I as a system of equations and show that it has a solution.

8. Try to find the inverse in the form I + dXY^T and use that Y^TX is a number that can be moved freely. Answer: d = −c/(1 + cY^TX) if 1 + cY^TX ≠ 0, otherwise the inverse does not exist (which is less trivial and is most simply proved using the determinant in the next section).

9. Write this as A(I + cA^{-1}XY^T) and use the answer from the previous exercise.

10. The diagonals in a product of triangular matrices multiply by themselves, so to become zero in some power the diagonal should be zero itself. Two different proofs that a triangular matrix with zero main diagonal is nilpotent can be found in the section 7.

11. Find the inverse in the form (A^{-1}, X; O, B^{-1}). Answer: X = −A^{-1}CB^{-1}.
12. Use that b is a number that can be moved freely; use b(Y^TA^{-1}X) = ba to simplify the calculations. For the (upper) triangular matrix use that Y = O and apply the induction.
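The rank-one inverse formula and the two triangular-matrix facts above are easy to check numerically. The following sketch uses NumPy (my choice, not part of the book); the random test data is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
X = rng.standard_normal((n, 1))
Y = rng.standard_normal((n, 1))
c = 0.7

# (I + c X Y^T)^{-1} = I + d X Y^T with d = -c / (1 + c Y^T X)
d = -c / (1 + c * (Y.T @ X).item())
M = np.eye(n) + c * X @ Y.T
assert np.allclose(M @ (np.eye(n) + d * X @ Y.T), np.eye(n))

# the diagonal of a product of triangular matrices is the product of the diagonals ...
A = np.triu(rng.standard_normal((n, n)))
B = np.triu(rng.standard_normal((n, n)))
assert np.allclose(np.diag(A @ B), np.diag(A) * np.diag(B))

# ... so a triangular matrix with zero main diagonal is nilpotent: N^n = 0
N = np.triu(rng.standard_normal((n, n)), k=1)
assert np.allclose(np.linalg.matrix_power(N, n), 0)
```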

1. Basic variables: x_1, x_3, x_5, thus r(A) = 3; free variables: x_2, x_4. No left or right inverse because r(A) ≠ 4, 5. If b_4 = 0 then x_5 = b_3, x_4 = t, x_3 = b − b_3, x_2 = t − 3b_3 + b, x_1 = s; otherwise there are no solutions.

2. Try to use a permutation matrix P that moves the zero row directly to the end (and it works here). One possible decomposition is P = …, L = …, U = … Basic variables: x_1, x_4, thus r(A) = 2; free variables: x_2, x_3, x_5. No left or right inverse because r(A) ≠ 4, 5. Because AX = O ⇔ PAX = O ⇔ LUX = O ⇔ UX = O we get the solutions: x_5 = t, x_4 = 3t, x_3 = s, x_2 = 3s − t, x_1 = u.

3. For example, x + y + z = 0, x + y + z = 4.

4. For example, P = …, L = …, U = …

5. Because we do not need P in the LU decomposition we get A = LU = …

6. In one direction: the product of invertible matrices is invertible itself. The opposite direction is more complicated. First show that a permutation matrix which permutes two rows only is an elementary matrix. Second prove that any permutation matrix is a product of such matrices. Third prove that an invertible diagonal matrix with a single element different from 1 is an elementary matrix. Prove that any invertible diagonal matrix is a product of such matrices and apply Exercise 8 to finish the proof.

7. Find an LU decomposition and find first all possible inverses to U. Answer: left inverse …

8. Start from the LU decomposition and continue in the following way, working with the columns in U. First multiply by a permutation matrix Q from the right to get all pivot elements on the main diagonal and all basic columns in the beginning. Second use elementary operations with the columns to clean the rest of the rows after the pivot elements. This can be done by multiplication with elementary matrices from the right-hand side. They create R.
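The decomposition PA = LU with row exchanges that these hints rely on can be sketched in a few lines. This is a minimal illustration (NumPy; the helper name and the partial-pivoting strategy are my own choices, not the book's):

```python
import numpy as np

def plu(A):
    """LU with row exchanges: returns P, L, U with P @ A = L @ U."""
    A = A.astype(float).copy()
    n = A.shape[0]
    P = np.eye(n)
    L = np.eye(n)
    for k in range(n - 1):
        # bring the largest entry of the current column to the pivot position
        p = k + np.argmax(np.abs(A[k:, k]))
        if p != k:
            A[[k, p]] = A[[p, k]]
            P[[k, p]] = P[[p, k]]
            L[[k, p], :k] = L[[p, k], :k]
        if A[k, k] == 0:
            continue                    # nothing below the pivot: move on
        m = A[k + 1:, k] / A[k, k]      # multipliers go into L ...
        L[k + 1:, k] = m
        A[k + 1:, k:] -= np.outer(m, A[k, k:])   # ... elimination into U
    return P, L, np.triu(A)

M = np.array([[2., 1., 1.], [4., 3., 3.], [8., 7., 9.]])
P, L, U = plu(M)
assert np.allclose(P @ M, L @ U)
```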

3.1. Follows directly from the formula for the determinant on the page 40. The permutation matrix P corresponding to the transposition (1, 2) is a possible counterexample for n > 2.

3.2. Split the first row of the matrix with the entries x_iy_j + δ_ij:

det(x_iy_j + δ_ij) = det(the same matrix with the first row replaced by (1, 0, …, 0)) + det(the same matrix with the first row replaced by (x_1y_1, x_1y_2, …, x_1y_n)).

In the first determinant use the expansion in the first row and induction; in the second carry out x_1 and use the remaining first row to clean the rest.

3.3. In one direction: use 1 = det A · det A^{-1}. In the opposite direction use theorem …

3.4. For odd n: use det(−A) = (−1)^n det A. For n = 4 use direct calculations to get

det(0, a, b, c; −a, 0, d, e; −b, −d, 0, f; −c, −e, −f, 0) = a²f² + 2adfc − 2aebf + b²e² − 2bedc + d²c² = (af + dc − be)².

3.5. For arbitrary n = 2k one can prove that det A = (Pf_k)², where the Pfaffian Pf_k is defined as the sum of the terms ± a_{i_1j_1} a_{i_2j_2} … a_{i_kj_k} with the sign of the permutation (i_1, j_1, i_2, j_2, …, i_k, j_k) of 1, 2, …, 2k, where the sum extends over all possible distinct terms subject to the restriction i_s < j_s ≤ 2k for s ≤ k.

3.6. Expand in the last column.

3.7. If C = O then this is theorem …

3.8. Try to reduce the general case to this one by the multiplication from the left with the block matrix (I, O; X, I), choosing the correct block X.

3.9. Use the previous exercise!
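The 4×4 skew-symmetric determinant above can be checked numerically; the sign pattern Pf = af + dc − be is my reading of the garbled original, so the check below (NumPy, random entries) doubles as a verification of it.

```python
import numpy as np

rng = np.random.default_rng(2)
a, b, c, d, e, f = rng.standard_normal(6)
A = np.array([[0, a, b, c],
              [-a, 0, d, e],
              [-b, -d, 0, f],
              [-c, -e, -f, 0]])
pf = a * f + d * c - b * e              # the Pfaffian of A
assert np.isclose(np.linalg.det(A), pf ** 2)

# odd size: det(-A) = -det A while det(-A) = det(A^T) = det A, so det A = 0
B = np.array([[0., 1., 2.], [-1., 0., 3.], [-2., -3., 0.]])
assert np.isclose(np.linalg.det(B), 0)
```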

4.1. A standard way to get the basis vectors X_i is to solve the system AX = O and to write X = Σ t_iX_i, where t_i are the free variables. But there exist other ways. Answer: only a. One possible basis: (…, …, 0)^T; (…, 0, …)^T.

4.2. Use the hint from the exercise 4.1 to get a basis. One possible basis: (0, …, 0, 3, …)^T; (0, 3, …, 0, 0)^T; (…, 0, 0, 0)^T.

4.3. To find a basis one needs to know from calculus that every solution can be written as y(t) = C_1e^t + C_2te^t + C_3t²e^t. Without such knowledge use the substitution y(t) = x(t)e^t and get an equation for x(t) which is easy to solve directly. Answer: one possible basis is e^t, te^t, t²e^t, thus the dimension is equal to 3.

4.4. Try to express the basis with the help of E_ij. Answer: a. The basis E_ij − E_ji with i < j gives the dimension n(n − 1)/2. b. The basis E_ij − E_in with j < n gives the dimension n(n − 1). c, d, e are not vector spaces.
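The recipe of solving AX = O and reading off one basis vector per free variable can be cross-checked against a numerically computed kernel. A small NumPy sketch (the kernel here comes from the SVD, which is one of the "other ways", not the book's elimination method):

```python
import numpy as np

A = np.array([[1., 2., 1.],
              [2., 4., 2.]])           # rank 1, so Ker A has dimension 2

# right singular vectors for the zero singular values span Ker A
_, s, Vt = np.linalg.svd(A)
rank = int(np.sum(s > 1e-10))
kernel = Vt[rank:].T                   # columns form a basis of Ker A

assert kernel.shape[1] == A.shape[1] - rank    # dim Ker A = n - r(A)
assert np.allclose(A @ kernel, 0)
```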

5.1. a is not linear. If the basis is e_1 = E_11, e_2 = E_12, e_3 = E_21, e_4 = E_22 then the answers are: b …, c …, d …

5.2. Use the Gauss elimination or the LU decomposition; use the hint from 4.1 for the kernels. Answer: r(A) = 2. Possible bases are: for Ker A: (…, 0, 0, 0, 0)^T, (0, 3, …, 0, 0)^T, (0, 3, 0, …, 3)^T; for Im A: (0, …, …, …)^T, (0, 3, 9, 3)^T; for Ker A^T: (…, 0, 0, 0)^T, (0, 5, …, …)^T; for Im A^T: (0, …, 3, 3)^T, (0, 0, 0, 3, …)^T. The second vector is not a row in A, but the second non-zero row in U. Try to understand why the non-zero rows in U can be used as a basis for Im A^T, though it does not work for the columns.

5.3. The basis 1, x, …, x^n shows that the dimension is equal to n + 1. In this basis the matrix for D: P_n → P_n has the entries 1, 2, …, n on the superdiagonal and zeros elsewhere. The rank is n and we have neither left nor right inverse. If we take away the last row we get the matrix for D: P_n → P_{n−1} and find that a right inverse should exist, and of course this is the integration.

5.4. The map has a nontrivial kernel and cannot be injective. This example shows that the situation for linear operators in the infinite-dimensional case is more complicated: injectivity does not follow from the surjectivity.

5.5. a …, b …, c 3.

5.6. See theorem …

5.7. See theorem … Try to imitate the dimension arguments used for the Lagrange interpolation. Later we will prove a more general case, see the Hermite interpolation.
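The differentiation matrix described above can be written out explicitly, and the integration really is a right inverse once the last row is dropped. A NumPy sketch (n = 4 is an arbitrary choice):

```python
import numpy as np

n = 4                                    # work in P_n, polynomials of degree <= n
dim = n + 1

# matrix of D in the basis 1, x, ..., x^n: D(x^k) = k x^{k-1}
D = np.zeros((dim, dim))
for k in range(1, dim):
    D[k - 1, k] = k

assert np.linalg.matrix_rank(D) == n     # rank n, not n + 1: no two-sided inverse

# dropping the last row gives D: P_n -> P_{n-1}; integration is a right inverse
D_small = D[:-1, :]                      # n x (n + 1)
Int = np.zeros((dim, n))                 # matrix of x^k -> x^{k+1}/(k+1)
for k in range(n):
    Int[k + 1, k] = 1.0 / (k + 1)
assert np.allclose(D_small @ Int, np.eye(n))
```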

6.1. See theorem 6.…

6.2. Use that p_A(0) = ±det A. Another approach: according to theorem 59 a square matrix is invertible if and only if Ker A = O, i.e. AX = 0·X has only trivial solutions.

6.3. For example the matrices E_… and E_… with λ = µ = 0.

6.4. As an example one can take a triangular matrix with ones on the main diagonal.

6.5. AX = λX ⇒ A²X = λ²X ⇒ (λ² − λ)X = 0, but X ≠ O, thus λ = 0, 1.

6.6. Use AX = λX ⇒ A^kX = λ^kX. Answer: λ = 0.

6.7. Use that det(−I) = (−1)³ det I = −1. Examples: iI and E_12 − E_21.

6.8. A^{-1}(AB)A = BA. Corollary 67.

6.9. Suppose that A is not invertible. Consider A_ε = A + εI for very small ε ≠ 0. According to Exercise 6.… the matrix A_ε is invertible and therefore A_εB and BA_ε have the same characteristic polynomial, p_{A_εB} = p_{BA_ε}. Let ε → 0 and get the result using the fact that the characteristic polynomial depends continuously on the matrix. The disadvantage of this solution is that it uses continuity arguments that do not work for other fields (e.g. finite fields). There exist two other solutions that work for an arbitrary field. In both of them it is sufficient to consider the case where both matrices are not invertible (otherwise AB and BA are similar). In the first approach we use Im A and Im B. Note that both these subspaces are different from K^n, where n is the size of the matrices. If one of them is inside the other (e.g. Im A ⊆ Im B) then we can choose a basis in the larger one and complete it arbitrarily. Then both images lie in the subspace generated by e_1, …, e_{n−1} and the matrices look as block matrices A = (A_1, A_2; O, 0), B = (B_1, B_2; O, 0), where A_1, B_1 have size (n − 1) × (n − 1) and A_2, B_2 are columns of size n − 1. Then AB = (A_1B_1, A_1B_2; O, 0) with p_{AB}(λ) = λp_{A_1B_1}(λ), and BA = (B_1A_1, B_1A_2; O, 0) with p_{BA}(λ) = λp_{B_1A_1}(λ), and we can use the induction. Otherwise dim(Im A ∩ Im B) ≤ n − 2 and we can choose the basis {e_i} starting from this intersection and complete it such that

Im A ∩ Im B ⊆ ⟨e_1, …, e_{n−2}⟩, Im A ⊆ ⟨e_1, …, e_{n−1}⟩ and Im B ⊆ ⟨e_1, …, e_{n−1}⟩. Then the matrices look as block matrices A = (O, 0; A_1, A_2), B = (B_1, B_2; 0, O), where A_1, B_1 have size (n − 1) × (n − 1) and A_2, B_2 are columns of size n − 1. Then AB = (0, O; …) with p_{AB}(λ) = λp_{A_1B_1}(λ), and BA = (B_1A_1, …; O, 0) with p_{BA}(λ) = λp_{B_1A_1}(λ), and we can use the induction. Another approach uses the kernels instead of the images; try to find it!

6.10. Example: A = E_12, B = E_11; in fact any A, B with AB = O, BA ≠ O.

6.11. Note that I − C = P_C(…).

6.12. Use Exercise 3.8 to get p_M(λ) = (λ − 1)^{n−2}((λ − 1)(λ − d) − C^TB). The eigenvalues are λ = 1 and λ = (d + 1 ± √((d − 1)² + 4C^TB))/2.
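The claim behind the long hint above, that AB and BA always have the same characteristic polynomial, is easy to test numerically even in the interesting case where both factors are singular. A NumPy sketch (random matrices, deliberately made non-invertible):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
A[:, -1] = 0          # make both matrices singular on purpose:
B[:, -1] = 0          # this is exactly the case the induction has to handle

# np.poly returns the coefficients of the characteristic polynomial
assert np.allclose(np.poly(A @ B), np.poly(B @ A))
assert np.isclose(np.trace(A @ B), np.trace(B @ A))
assert np.isclose(np.linalg.det(A @ B), np.linalg.det(B @ A))
```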

7.1. a. E_1, E_2, E_3, E_4, E_5, E_8; b. E_4, E_7, E_8; c. E_4, E_8; d. one can cut the rows and columns with indices 6, 7, 9 and get e. the chains O ← E_1 ← E_2 (under A − 4I); O ← E_3 (under A − I); O ← E_4 ← E_5 ← E_6, O ← E_7, O ← E_8 ← E_9 (under A).

7.2. In one direction it is the exercise 6.6. For the other direction: it is sufficient to prove this for the Jordan form. In fact much more is proved in theorem 7.…

7.3. Direct approach: use SJS^{-1} = J^T, S = S^{-1}, for S = … (the permutation matrix with ones on the antidiagonal). See corollary 79 for another approach.

7.4. According to Exercise 7.3 it is sufficient to prove that if A_i are similar to B_i then the block matrices diag(A_1, …, A_k) and diag(B_1, …, B_k) are similar as well.

7.5. Both matrices have distinct eigenvalues and are diagonalizable. Note that B has rank 1. Answer: …, … Yes, they have the same (diagonal) Jordan form.

7.6. For example, J = …, S = … for A; J = …, S = … for B; and for C we have J = …, S = …

7.8. Study the case n = 2 first. For the general case conjugate first with the permutation matrix S = P_σ for the permutation σ taking 1, 2, 3, 4, … to 1, n, 2, n − 1, …, to get a block matrix. Consider the cases with even and odd n separately. More details: S^{-1}AS = diag(A_1, …, A_k), where n = 2k or n = 2k − 1. If n = 2k then all blocks look as A_i = (0, a_i; a_{n−i+1}, 0), but if n = 2k − 1 then the last block A_k is different: it is the 1 × 1 block (a_k). For a block A_i of size 2 there are three possibilities. If a_i = a_{n−i+1} = 0 then the Jordan form is O. If exactly one of a_i, a_{n−i+1} is equal to zero, then the Jordan form is J_2(0), and if both are different from zero then the Jordan form is diag(√(a_ia_{n−i+1}), −√(a_ia_{n−i+1})).

7.9. The Jordan form is J = … The Jordan form should be idempotent as well, but J_n(λ)² = (λI + J_n)² = λ²I + 2λJ_n + J_n² cannot be equal to J_n(λ) if n > 1.

7.10. Use that the multiplication with D from the left multiplies the rows by λ_i, and from the right the columns. Compare the results outside the main diagonal.

7.11. Use the previous result.

8.1. a. x − …; b. (x − …)(x − …) … (x − …).

8.2. We have p_A(x) = (x − 0)^n = x^n, thus A^n = 0.

8.3. Use that the minimal polynomial divides both x^k and the characteristic polynomial if A^k = 0. Thus it has the form x^m and m ≤ n.

8.4. π_A(x) = (x − 1)² if ε ≠ 0 and π_A(x) = x − 1 if ε = 0, thus the minimal polynomial does not depend continuously on the matrix. On the other hand the characteristic polynomial has the same degree and (as the determinant) depends continuously on the matrix.

8.5. For example E_12 and the zero matrix O of the same size.

8.6. Study the Jordan form. Answer: for n ≤ 3. Counterexample for n = 4: …

8.7. X should be nilpotent, but X² = O according to exercise 8.3.

8.8. X is nilpotent, thus dim Ker X ≥ 1, and X cannot have rank n.

8.9. The matrix looks as A = (0, 0, …, 0, −a_0; 1, 0, …, 0, −a_1; 0, 1, …, 0, −a_2; …; 0, 0, …, 1, −a_{n−1}), the companion matrix. Note that A^iE_1 = E_{i+1} if i < n and A^nE_1 = A·A^{n−1}E_1 = AE_n = −a_0E_1 − a_1E_2 − … − a_{n−1}E_n, so

p(A)E_1 = A^nE_1 + a_{n−1}A^{n−1}E_1 + … + a_1AE_1 + a_0IE_1 = O,

which gives p(A)E_{i+1} = p(A)A^iE_1 = A^ip(A)E_1 = O as well, thus p(A) = O. On the other hand q(A) ≠ O if deg q < n, because q(A)E_1 ≠ O. This shows that p(x) is the minimal polynomial. Thus every polynomial x^n + a_{n−1}x^{n−1} + … + a_0 is the minimal polynomial for some matrix A.

8.10. Use ideas from the exercise 6.8. If AB = O but BA ≠ O this is already a good counterexample.
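The companion-matrix construction above can be watched in action for a cubic. A NumPy sketch (the coefficients are arbitrary sample values):

```python
import numpy as np

# companion matrix of p(x) = x^3 + a2 x^2 + a1 x + a0
a0, a1, a2 = 2.0, -3.0, 1.0
A = np.array([[0., 0., -a0],
              [1., 0., -a1],
              [0., 1., -a2]])

# A^i E_1 = E_{i+1} for i < n, as in the hint
E1 = np.array([1., 0., 0.])
assert np.allclose(A @ E1, [0., 1., 0.])
assert np.allclose(A @ (A @ E1), [0., 0., 1.])

# the characteristic polynomial is p itself, and p(A) = O
assert np.allclose(np.poly(A), [1., a2, a1, a0])
pA = np.linalg.matrix_power(A, 3) + a2 * np.linalg.matrix_power(A, 2) \
     + a1 * A + a0 * np.eye(3)
assert np.allclose(pA, 0)
```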

9.1. The map p(x) ↦ (p(…), p(…), p′(…), p′(…))^T has the matrix A = … Because A^{-1} = …, we get the interpolation basis polynomials L_0(x) = …, L_1(x) = …, L_2(x) = …, L_3(x) = … The polynomial corresponding to e^{tx} is p(x) = e^tL_0(x) + e^{−t}L_1(x) + te^tL_2(x) + t²e^tL_3(x) = …, and e^{tA} is the same combination of I, A, A², A³.

9.2. Note that P is a permutation matrix. Compute P², P³ and get that A = a_1I + a_2P + a_3P² + … + a_nP^{n−1}.

9.3. Because (1 + x)^α = Σ_{k≥0} C(α, k) x^k for |x| < 1, the function f(x) = √(1 + x) is defined on A if the eigenvalues of A satisfy |λ| < 1, and besides that f(A)² = I + A. This definitely works for A = N.

9.4. No, because f′(x) = 1/(2√x) is not defined for x = 0. Nevertheless the equation X² = N can have solutions for some nilpotent matrices N.

9.5. If λ_i are all the eigenvalues of A then e^{λ_i} are all the eigenvalues of e^A (which is evident for the Jordan form). It remains to apply theorem …

9.6. If all eigenvalues λ_i of tA satisfy |λ_i| < 1 then the function f(x) = ln(1 − x) = −Σ_{k≥1} x^k/k is defined on tA. Thus there exists B = f(tA) such that I − tA = e^B. It remains to use the Jacobi identity above for B.
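The observation that A = a_1I + a_2P + … + a_nP^{n−1} makes every circulant a polynomial f(P) in the shift P has a direct computational payoff: the eigenvalues are the values of f at the n-th roots of unity, i.e. the discrete Fourier transform of the first column. A NumPy sketch (the sample vector is arbitrary):

```python
import numpy as np

x = np.array([1.0, 2.0, 0.5, -1.0])   # first column of the circulant
n = len(x)

P = np.roll(np.eye(n), 1, axis=0)     # cyclic shift: P @ e_j = e_{j+1 mod n}
C = sum(x[k] * np.linalg.matrix_power(P, k) for k in range(n))
assert np.allclose(C[:, 0], x)        # C is the circulant with first column x

# eigenvalues of C = f(P) are f at the roots of unity, i.e. the DFT of x
eigs = np.fft.fft(x)
assert np.allclose(np.poly(C), np.poly(eigs))            # same spectrum
assert np.isclose(np.linalg.det(C), np.prod(eigs).real)  # det = product
```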

1. ∥X∥_1 = 3 + π, ∥X∥_2 = √(5 + π²), ∥X∥_∞ = π for the first vector; for the second one the three norms are …, 3, 4.

2. For the first matrix the computed norms are …, 30, 4, 6, 5 + …, 7; for the second one …, 56, 5, 5 + 3…, 8 + 8…, …

3. ∥A∥_(1) = … + α, ∥A∥_F = …, ∥A∥_(∞) = max(…, α), ∥A∥_1 = … + α, ∥A∥_2 = …, ∥A∥_∞ = … + α, κ(A) = …

4. If |x_k| = max{|x_i|} then ∥X∥_1 = Σ|x_i| ≤ n|x_k| = n∥X∥_∞ and ∥X∥_∞ = |x_k| ≤ Σ|x_i| = ∥X∥_1. Besides that ∥X∥_∞ = |x_k| = √(x_k²) ≤ √(Σx_i²) = ∥X∥_2. The obvious inequality Σx_i² ≤ (Σ|x_i|)² gives ∥X∥_2 ≤ ∥X∥_1. At last the inequality ∥X∥_1 = Σ|x_i| · 1 ≤ √(Σx_i²) · √n = √n ∥X∥_2 follows from the Cauchy-Schwarz inequality.

5. Use that ∥A∥_F² = tr(A^HA).

6. Use that ∥A∥_F² = Σλ_i, where λ_i are the eigenvalues of A^HA, to estimate the Frobenius norm; the exercise 4 explains how the root expressions appear. Use ∥A∥ ≥ ∥AE_i∥ ≥ |a_ij| to get ∥A∥_(1) ≤ ∥A∥ and … Use ∥A∥ ≤ ∥A∥_F and ∥A∥_F ≤ √(mn) ∥A∥_(1) (also following from the exercise 4) to get ∥A∥ ≤ √(mn) ∥A∥_(1). If ∥AX∥ = ∥A∥ with ∥X∥ = 1 then ∥AX∥ ≤ Σ|x_i| ∥AE_i∥ ≤ ∥X∥_1 ∥A∥_(1) ≤ √n ∥X∥ ∥A∥_(1) gives ∥A∥ ≤ √n ∥A∥_(1). The shortest way to the last pair of inequalities is to use that A and A^H have the same norm, because they have the same singular values, but this we will prove much later. Otherwise consider a vector Y with ∥Y∥_(1) = 1 such that ∥AY∥_(1) = ∥A∥_(1) and estimate ∥Y∥, ∥AY∥ and ∥AX∥_(1) using the exercise 4.

7. x_1 = 0, x_2 = …000 in the first case and x_1 = …, x_2 = 0 in the second. The relative error is ∥δx∥/∥X∥ = …, though ∥δb∥/∥b∥ = … is rather small.

8. See the section 4.

9. The second statement follows directly from the first because A + B = A(I + A^{-1}B). Use the spectral radius of A to show that no eigenvalue of I + A is equal to zero. Alternative approach: consider the geometric series I − A + A² − A³ + … and show that it converges and gives (I + A)^{-1}.
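The chains of norm inequalities above are easy to sanity-check on random vectors. A NumPy sketch (random trials, of course not a proof):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 6
for _ in range(200):
    x = rng.standard_normal(n)
    n1, n2, ninf = (np.linalg.norm(x, p) for p in (1, 2, np.inf))
    # ||x||_inf <= ||x||_1 <= n ||x||_inf
    assert ninf <= n1 + 1e-12 and n1 <= n * ninf + 1e-12
    # ||x||_inf <= ||x||_2 <= ||x||_1   (sum of squares vs square of the sum)
    assert ninf <= n2 + 1e-12 and n2 <= n1 + 1e-12
    # Cauchy-Schwarz: ||x||_1 <= sqrt(n) ||x||_2
    assert n1 <= np.sqrt(n) * n2 + 1e-12
```

Equality in the last inequality is reached exactly for vectors with equal entries, e.g. x = (1, …, 1)^T.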

1. See section 3.

2. All axioms can be checked directly, e.g. (v, v) = 0 ⇒ (Av, Av) = 0 ⇒ Av = 0 ⇒ v = 0, because A is invertible.

3. G = A^HA. The rest follows from theorem 6…

4. See theorem …

5. Q = …, R = … Two possible decompositions are A = … = …

6. A = cXX^H for some complex column vector X of length 1 and real constant c. Such a matrix is obviously Hermitian. To get that this is the only choice note that by theorem … A = UDU^H for a diagonal matrix D of rank 1. If c = d_ii is the only non-zero element in D then X = UE_i. Another, longer solution is to write A as A = XY^H. Consider first the case when ∥X∥ = ∥Y∥ = 1 and use that A = A^H = YX^H to get |x_i| = |y_i|. Put x_i = ε_iy_i and get more from A^H = A.

Nontrivial is that AA^H and A^HA can have different sizes. Use theorem 33.

If A = USV^H is the SV decomposition, then AA^H = US²U^H is also a SV decomposition. What about A^HA?

Eigenvalues: a. ±i√3; b. …; c. 0, …; d. 0, … Singular values: a, b: …; c: …; d: …

Use that the Frobenius norm does not change after the multiplication with a unitary matrix.

If A = USV^H is the SV decomposition then it is sufficient to show that A and S are the matrices of the same linear map in different orthonormal bases.

First, using the SV decomposition, reduce the problem to the case A = S; thus suppose that all singular values are on the main diagonal and the rest is zero. Second, we need to find a matrix X of rank k such that ∥A − X∥ = σ_{k+1}; this is easily done: replace all σ_j in S with j > k by zeros. The last and most difficult part is to show that for any matrix X of rank k we have ∥A − X∥ ≥ σ_{k+1}. Here we use the dimension arguments. Consider two subspaces: Ker X and U, consisting of the vectors having the last n − k − 1 coordinates equal to zero, i.e. of the form v = (v_1, …, v_{k+1}, 0, …, 0)^T. Because dim Ker X + dim U ≥ (n − k) + (k + 1) > n we should have a non-zero vector v in their intersection. For this vector we have ∥(A − X)v∥² = ∥Av∥² = Σ_{i=1}^{k+1} σ_i²v_i² ≥ σ_{k+1}² Σ_{i=1}^{k+1} v_i².

Let A = (a_0, …, a_n), b = … The eigenvalues are … (with the multiplicity n − …) and λ_{1,2} = (b ± √(b² − 4a_0…))/… The singular values are … (n − … times) and √λ_1, √λ_2. The condition number is …

Use that 1 + ε^k + (ε^k)² + … + (ε^k)^{n−1} = 0 (consider this as a geometric sum). This gives F^HF = I. If F = (F_1, …, F_n) is a column representation then PF_j = ε^{j−1}F_j, thus PF = F diag(1, ε, …, ε^{n−1}) = FD and F^{-1}PF = D. Note that Circ(x_1, …, x_n) = x_1I + x_2P + … + x_nP^{n−1} = f(P) is a polynomial in P. Because F diagonalizes any polynomial in P as well, we have F^{-1}Circ(x_1, …, x_n)F = diag(f(1), f(ε), …, f(ε^{n−1})). Thus f(ε^j) are the eigenvalues and their product is the determinant.

Combine Schur's lemma with theorems … and …

Use theorem … Non-zero nilpotent matrices are not diagonalizable and therefore are not normal.

The matrices
in a and b are Hermitian (and normal); their eigenvalues are real. The matrix in c is skew-Hermitian (and normal); its eigenvalues are purely imaginary. The matrices in d, e are normal (direct control of the definition), but their eigenvalues are neither real nor purely imaginary. The matrix in f is not normal.

No.

Because A is real its characteristic polynomial has real coefficients. Because a real polynomial of degree 3 has at least one real root we get that at least one eigenvalue is real. Because A is unitary we have |λ_i| = 1 for all eigenvalues. Thus if all eigenvalues are real they are equal to ±1 and two of them

are equal, which gives us ϕ = 0 or ϕ = π. Otherwise one of the eigenvalues is equal to ε = ±1 and two others are complex. Let λ = e^{iϕ} be one of them. AX = λX gives AX̄ = λ̄X̄, thus if X = U_1 + iU_2 then

AU_1 = A(X + X̄)/2 = (e^{iϕ}X + e^{−iϕ}X̄)/2 = ((cos ϕ + i sin ϕ)(U_1 + iU_2) + (cos ϕ − i sin ϕ)(U_1 − iU_2))/2 = cos ϕ·U_1 − sin ϕ·U_2.

AU_2 = sin ϕ·U_1 + cos ϕ·U_2 can be obtained in a similar way. What if U_2 = 0?

Use the previous exercise. Use that det A is the product of the eigenvalues and tr A is their sum.

Prove more: that the exponent of a skew-symmetric matrix is unitary. For this consider first matrices of size 1, then the diagonal matrices, and use that normal matrices are diagonalizable.

Use the hint from the previous exercise.

a, c: No. A common counterexample: A = …, B = … b: Yes: (AB)^H(AB) = B^HA^HAB = B^HB = I.
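The best-rank-k hint in this chapter can be watched numerically: zeroing the trailing singular values gives approximation error exactly σ_{k+1}, and random rank-k competitors never do better. A NumPy sketch (random trials stand in for the dimension argument, they do not replace it):

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((6, 5))
U, s, Vt = np.linalg.svd(A)
k = 2

# truncated SVD: keep the k largest singular values
A_k = (U[:, :k] * s[:k]) @ Vt[:k]
assert np.linalg.matrix_rank(A_k) == k
assert np.isclose(np.linalg.norm(A - A_k, 2), s[k])     # error = sigma_{k+1}

# any other rank-k matrix does at least as badly (spot checks)
for _ in range(50):
    X = rng.standard_normal((6, k)) @ rng.standard_normal((k, 5))
    assert np.linalg.norm(A - X, 2) >= s[k] - 1e-9
```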

4.1. a. No. For example, A = E_11 − E_22, B = E_12 + E_21. b. Yes. Consider first the case when A = C², where C is a diagonal matrix with positive elements on the diagonal. More details: we have that AB = C²B is similar to C^{-1}(C²B)C = CBC. But C = C^H, thus CBC = C^HBC is congruent to B (which is positive definite), therefore it is positive definite as well and therefore has positive eigenvalues. For the general case let U^HAU = D be a diagonal matrix for some unitary matrix U. Then U^HBU = B_1 is positive definite as well and DB_1 = U^HABU is similar to AB. But D = C² because A is positive definite and we can use the previous case.

4.2. In the Gaussian elimination we get d_1 = …, d_2 = …, d_3 = … All the numbers are positive and the answer is yes. Alternatively, Δ_1 = a_11 = … > 0, Δ_2 = … > 0 and Δ_3 = det A > 0, thus the matrix is positive definite.

4.3. Consider the determinant of U^HAU = D.

4.4. Multiply the matrices and get an equation for every block. Answer: B_1 = A_11, X = A_11^{-1}A_12, Y = (A_11^{-1}A_12)^H; B_2 = A_22 − A_21A_11^{-1}A_12. If A is Hermitian then A_11 is Hermitian as well, and A_12^H = A_21, which gives X = Y.

4.5. See Theorem 45.

4.6. A = GG^H, where G = …

4.7. Note that B need not be a square matrix!
Let f(x = X H B H ABX be a corresponding quadratic form If BX = O with X O then f(x = 0 thus the form cannot be positive definite Otherwise, let Ker B = O Put Y = BX Because A is positive definite we have that f(x = Y H AY 0 and we have the equality only when Y = O, thus X = O 48 Because U is Hermitian all eigenvalues are real thus Re λ = λ Let V = 3U What we need to know is how many eigenvalues of V are between 0 and 3 Using Gaussian elimination we find that two eigenvalues of V are positive and the same is true for V 3 I Thus there are no eigenvalues in the given interval 49 Consider C(λ = A λb and study p(λ = det C(λ Show that if S exists then p(λ should have real roots Check that this is not the case for given matrices More details If S H AS = diag(a, b and S H BS = diag(c, d then ( det(s H C(λS = det diag(a λc, b λd = bc λ a ( λ d c b and we see that p(λ has real roots On the other hand p(λ = λ λ = λ 40 To make the calculations easier note that A H A = A = abs(a and we can use Lagrange interpolation

Answer: U = …, P = …

Consider the vector space of polynomials of degree less than n and check that (f(x), g(x)) = ∫_0^1 f(x)g(x)dx is a scalar product here. Check that in the basis 1, x, …, x^{n−1} we have (x^i, x^j) = a_ij. The corresponding quadratic form (f, f) has the matrix A in this basis and is positive definite, because it corresponds to a scalar product.

Consider a vector space V consisting of all functions v(x) that are linear combinations of the functions e^{ikt} for k = 0, …, n. It is sufficient to show that (u(x), v(x)) = ∫_{−π}^{π} u(x)v̄(x)f(x)dx is a scalar product here.

Consider the matrix B = A − λ_nI. It is positive semidefinite and X^HBX ≥ 0. Take X = E_i and X = E_i ± E_j and see what this means for A.

We need only to check the number of positive eigenvalues for A, A − I and A − 5I, which can be done with the help of the Gaussian elimination.

5.1. Use theorem 50 and the fact that B is arbitrary. More details: theorem 50 gives XB = A^+B + Y and AXB = AA^+B for any column B. Therefore AX = AA^+ by theorem … and it remains to use the properties of A^+.

5.2. A^+ = …, B^+ = …

5.3. X = …(0, 4, 7)^T.

5.4. x = Pv + (v − Pv). Note that P(v − Pv) = Pv − P²v = 0, thus w = v − Pv ∈ Ker P. Moreover, we have V = Im P ⊕ Ker P, because if x ∈ Im P ∩ Ker P then x = Py = P²y = Px = 0. We do not need the condition P^H = P to prove all this.

5.5. Let P be the corresponding matrix. If P^H = P then P is Hermitian and diagonalizable. Then P² = P iff the eigenvalues λ are real and λ² = λ, i.e. λ = 0 or λ = 1. In particular, P^+ = P.

5.6. If AXA = A then P² = AXAX = AX = P. The opposite is less trivial. Change the basis and use that Im AX = Im A. More details: according to the exercise 5.5 we can restrict ourselves to the case AX = (I, O; O, O). Let A = (A_1, A_2; A_3, A_4). Because Im A ⊆ Im AX we get A_3 = O, A_4 = O, and then (AX)A = (I, O; O, O)(A_1, A_2; O, O) = (A_1, A_2; O, O) = A.
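The projector facts above, the decomposition v = Pv + (v − Pv), the eigenvalues {0, 1}, and P^+ = P, can all be checked directly for an orthogonal projector. A NumPy sketch (the projected subspace is random):

```python
import numpy as np

rng = np.random.default_rng(6)
# orthogonal projector onto a random 2-dimensional subspace of R^4
Q, _ = np.linalg.qr(rng.standard_normal((4, 2)))
P = Q @ Q.T

assert np.allclose(P @ P, P)             # idempotent
assert np.allclose(P.T, P)               # Hermitian (real symmetric here)
# eigenvalues are only 0 and 1, and the pseudoinverse of P is P itself
assert np.allclose(np.sort(np.linalg.eigvalsh(P)), [0, 0, 1, 1])
assert np.allclose(np.linalg.pinv(P), P)

# v = Pv + (v - Pv) splits v into its Im P and Ker P components
v = rng.standard_normal(4)
w = v - P @ v
assert np.allclose(P @ w, 0)
```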


More information

Introduction to Linear Algebra, Second Edition, Serge Lange

Introduction to Linear Algebra, Second Edition, Serge Lange Introduction to Linear Algebra, Second Edition, Serge Lange Chapter I: Vectors R n defined. Addition and scalar multiplication in R n. Two geometric interpretations for a vector: point and displacement.

More information

Linear Algebra. Matrices Operations. Consider, for example, a system of equations such as x + 2y z + 4w = 0, 3x 4y + 2z 6w = 0, x 3y 2z + w = 0.

Linear Algebra. Matrices Operations. Consider, for example, a system of equations such as x + 2y z + 4w = 0, 3x 4y + 2z 6w = 0, x 3y 2z + w = 0. Matrices Operations Linear Algebra Consider, for example, a system of equations such as x + 2y z + 4w = 0, 3x 4y + 2z 6w = 0, x 3y 2z + w = 0 The rectangular array 1 2 1 4 3 4 2 6 1 3 2 1 in which the

More information

Chapter 6 Inner product spaces

Chapter 6 Inner product spaces Chapter 6 Inner product spaces 6.1 Inner products and norms Definition 1 Let V be a vector space over F. An inner product on V is a function, : V V F such that the following conditions hold. x+z,y = x,y

More information

Linear Algebra M1 - FIB. Contents: 5. Matrices, systems of linear equations and determinants 6. Vector space 7. Linear maps 8.

Linear Algebra M1 - FIB. Contents: 5. Matrices, systems of linear equations and determinants 6. Vector space 7. Linear maps 8. Linear Algebra M1 - FIB Contents: 5 Matrices, systems of linear equations and determinants 6 Vector space 7 Linear maps 8 Diagonalization Anna de Mier Montserrat Maureso Dept Matemàtica Aplicada II Translation:

More information

(f + g)(s) = f(s) + g(s) for f, g V, s S (cf)(s) = cf(s) for c F, f V, s S

(f + g)(s) = f(s) + g(s) for f, g V, s S (cf)(s) = cf(s) for c F, f V, s S 1 Vector spaces 1.1 Definition (Vector space) Let V be a set with a binary operation +, F a field, and (c, v) cv be a mapping from F V into V. Then V is called a vector space over F (or a linear space

More information

DEN: Linear algebra numerical view (GEM: Gauss elimination method for reducing a full rank matrix to upper-triangular

DEN: Linear algebra numerical view (GEM: Gauss elimination method for reducing a full rank matrix to upper-triangular form) Given: matrix C = (c i,j ) n,m i,j=1 ODE and num math: Linear algebra (N) [lectures] c phabala 2016 DEN: Linear algebra numerical view (GEM: Gauss elimination method for reducing a full rank matrix

More information

MATH 240 Spring, Chapter 1: Linear Equations and Matrices

MATH 240 Spring, Chapter 1: Linear Equations and Matrices MATH 240 Spring, 2006 Chapter Summaries for Kolman / Hill, Elementary Linear Algebra, 8th Ed. Sections 1.1 1.6, 2.1 2.2, 3.2 3.8, 4.3 4.5, 5.1 5.3, 5.5, 6.1 6.5, 7.1 7.2, 7.4 DEFINITIONS Chapter 1: Linear

More information

Linear Equations and Matrix

Linear Equations and Matrix 1/60 Chia-Ping Chen Professor Department of Computer Science and Engineering National Sun Yat-sen University Linear Algebra Gaussian Elimination 2/60 Alpha Go Linear algebra begins with a system of linear

More information

5.6. PSEUDOINVERSES 101. A H w.

5.6. PSEUDOINVERSES 101. A H w. 5.6. PSEUDOINVERSES 0 Corollary 5.6.4. If A is a matrix such that A H A is invertible, then the least-squares solution to Av = w is v = A H A ) A H w. The matrix A H A ) A H is the left inverse of A and

More information

ELEMENTARY LINEAR ALGEBRA

ELEMENTARY LINEAR ALGEBRA ELEMENTARY LINEAR ALGEBRA K R MATTHEWS DEPARTMENT OF MATHEMATICS UNIVERSITY OF QUEENSLAND First Printing, 99 Chapter LINEAR EQUATIONS Introduction to linear equations A linear equation in n unknowns x,

More information

MAT 2037 LINEAR ALGEBRA I web:

MAT 2037 LINEAR ALGEBRA I web: MAT 237 LINEAR ALGEBRA I 2625 Dokuz Eylül University, Faculty of Science, Department of Mathematics web: Instructor: Engin Mermut http://kisideuedutr/enginmermut/ HOMEWORK 2 MATRIX ALGEBRA Textbook: Linear

More information

Lecture Notes in Linear Algebra

Lecture Notes in Linear Algebra Lecture Notes in Linear Algebra Dr. Abdullah Al-Azemi Mathematics Department Kuwait University February 4, 2017 Contents 1 Linear Equations and Matrices 1 1.2 Matrices............................................

More information

Linear Systems and Matrices

Linear Systems and Matrices Department of Mathematics The Chinese University of Hong Kong 1 System of m linear equations in n unknowns (linear system) a 11 x 1 + a 12 x 2 + + a 1n x n = b 1 a 21 x 1 + a 22 x 2 + + a 2n x n = b 2.......

More information

G1110 & 852G1 Numerical Linear Algebra

G1110 & 852G1 Numerical Linear Algebra The University of Sussex Department of Mathematics G & 85G Numerical Linear Algebra Lecture Notes Autumn Term Kerstin Hesse (w aw S w a w w (w aw H(wa = (w aw + w Figure : Geometric explanation of the

More information

MATH 532: Linear Algebra

MATH 532: Linear Algebra MATH 532: Linear Algebra Chapter 5: Norms, Inner Products and Orthogonality Greg Fasshauer Department of Applied Mathematics Illinois Institute of Technology Spring 2015 fasshauer@iit.edu MATH 532 1 Outline

More information

Part IA. Vectors and Matrices. Year

Part IA. Vectors and Matrices. Year Part IA Vectors and Matrices Year 2018 2017 2016 2015 2014 2013 2012 2011 2010 2009 2008 2018 Paper 1, Section I 1C Vectors and Matrices For z, w C define the principal value of z w. State de Moivre s

More information

ELEMENTARY LINEAR ALGEBRA WITH APPLICATIONS. 1. Linear Equations and Matrices

ELEMENTARY LINEAR ALGEBRA WITH APPLICATIONS. 1. Linear Equations and Matrices ELEMENTARY LINEAR ALGEBRA WITH APPLICATIONS KOLMAN & HILL NOTES BY OTTO MUTZBAUER 11 Systems of Linear Equations 1 Linear Equations and Matrices Numbers in our context are either real numbers or complex

More information

LECTURE NOTES ELEMENTARY NUMERICAL METHODS. Eusebius Doedel

LECTURE NOTES ELEMENTARY NUMERICAL METHODS. Eusebius Doedel LECTURE NOTES on ELEMENTARY NUMERICAL METHODS Eusebius Doedel TABLE OF CONTENTS Vector and Matrix Norms 1 Banach Lemma 20 The Numerical Solution of Linear Systems 25 Gauss Elimination 25 Operation Count

More information

Foundations of Matrix Analysis

Foundations of Matrix Analysis 1 Foundations of Matrix Analysis In this chapter we recall the basic elements of linear algebra which will be employed in the remainder of the text For most of the proofs as well as for the details, the

More information

Fundamentals of Engineering Analysis (650163)

Fundamentals of Engineering Analysis (650163) Philadelphia University Faculty of Engineering Communications and Electronics Engineering Fundamentals of Engineering Analysis (6563) Part Dr. Omar R Daoud Matrices: Introduction DEFINITION A matrix is

More information

Notes on Mathematics

Notes on Mathematics Notes on Mathematics - 12 1 Peeyush Chandra, A. K. Lal, V. Raghavendra, G. Santhanam 1 Supported by a grant from MHRD 2 Contents I Linear Algebra 7 1 Matrices 9 1.1 Definition of a Matrix......................................

More information

There are six more problems on the next two pages

There are six more problems on the next two pages Math 435 bg & bu: Topics in linear algebra Summer 25 Final exam Wed., 8/3/5. Justify all your work to receive full credit. Name:. Let A 3 2 5 Find a permutation matrix P, a lower triangular matrix L with

More information

IMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET

IMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET IMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET This is a (not quite comprehensive) list of definitions and theorems given in Math 1553. Pay particular attention to the ones in red. Study Tip For each

More information

IMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET

IMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET IMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET This is a (not quite comprehensive) list of definitions and theorems given in Math 1553. Pay particular attention to the ones in red. Study Tip For each

More information

Matrices and systems of linear equations

Matrices and systems of linear equations Matrices and systems of linear equations Samy Tindel Purdue University Differential equations and linear algebra - MA 262 Taken from Differential equations and linear algebra by Goode and Annin Samy T.

More information

Ir O D = D = ( ) Section 2.6 Example 1. (Bottom of page 119) dim(v ) = dim(l(v, W )) = dim(v ) dim(f ) = dim(v )

Ir O D = D = ( ) Section 2.6 Example 1. (Bottom of page 119) dim(v ) = dim(l(v, W )) = dim(v ) dim(f ) = dim(v ) Section 3.2 Theorem 3.6. Let A be an m n matrix of rank r. Then r m, r n, and, by means of a finite number of elementary row and column operations, A can be transformed into the matrix ( ) Ir O D = 1 O

More information

Steven J. Leon University of Massachusetts, Dartmouth

Steven J. Leon University of Massachusetts, Dartmouth INSTRUCTOR S SOLUTIONS MANUAL LINEAR ALGEBRA WITH APPLICATIONS NINTH EDITION Steven J. Leon University of Massachusetts, Dartmouth Boston Columbus Indianapolis New York San Francisco Amsterdam Cape Town

More information

Lecture Notes for Inf-Mat 3350/4350, Tom Lyche

Lecture Notes for Inf-Mat 3350/4350, Tom Lyche Lecture Notes for Inf-Mat 3350/4350, 2007 Tom Lyche August 5, 2007 2 Contents Preface vii I A Review of Linear Algebra 1 1 Introduction 3 1.1 Notation............................... 3 2 Vectors 5 2.1 Vector

More information

forms Christopher Engström November 14, 2014 MAA704: Matrix factorization and canonical forms Matrix properties Matrix factorization Canonical forms

forms Christopher Engström November 14, 2014 MAA704: Matrix factorization and canonical forms Matrix properties Matrix factorization Canonical forms Christopher Engström November 14, 2014 Hermitian LU QR echelon Contents of todays lecture Some interesting / useful / important of matrices Hermitian LU QR echelon Rewriting a as a product of several matrices.

More information

LinGloss. A glossary of linear algebra

LinGloss. A glossary of linear algebra LinGloss A glossary of linear algebra Contents: Decompositions Types of Matrices Theorems Other objects? Quasi-triangular A matrix A is quasi-triangular iff it is a triangular matrix except its diagonal

More information

Numerical Methods I Solving Square Linear Systems: GEM and LU factorization

Numerical Methods I Solving Square Linear Systems: GEM and LU factorization Numerical Methods I Solving Square Linear Systems: GEM and LU factorization Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 MATH-GA 2011.003 / CSCI-GA 2945.003, Fall 2014 September 18th,

More information

Diagonalizing Matrices

Diagonalizing Matrices Diagonalizing Matrices Massoud Malek A A Let A = A k be an n n non-singular matrix and let B = A = [B, B,, B k,, B n ] Then A n A B = A A 0 0 A k [B, B,, B k,, B n ] = 0 0 = I n 0 A n Notice that A i B

More information

Numerical Methods - Numerical Linear Algebra

Numerical Methods - Numerical Linear Algebra Numerical Methods - Numerical Linear Algebra Y. K. Goh Universiti Tunku Abdul Rahman 2013 Y. K. Goh (UTAR) Numerical Methods - Numerical Linear Algebra I 2013 1 / 62 Outline 1 Motivation 2 Solving Linear

More information

Math 3108: Linear Algebra

Math 3108: Linear Algebra Math 3108: Linear Algebra Instructor: Jason Murphy Department of Mathematics and Statistics Missouri University of Science and Technology 1 / 323 Contents. Chapter 1. Slides 3 70 Chapter 2. Slides 71 118

More information

0.1 Rational Canonical Forms

0.1 Rational Canonical Forms We have already seen that it is useful and simpler to study linear systems using matrices. But matrices are themselves cumbersome, as they are stuffed with many entries, and it turns out that it s best

More information

Problem Set (T) If A is an m n matrix, B is an n p matrix and D is a p s matrix, then show

Problem Set (T) If A is an m n matrix, B is an n p matrix and D is a p s matrix, then show MTH 0: Linear Algebra Department of Mathematics and Statistics Indian Institute of Technology - Kanpur Problem Set Problems marked (T) are for discussions in Tutorial sessions (T) If A is an m n matrix,

More information

Linear Algebra Lecture Notes-II

Linear Algebra Lecture Notes-II Linear Algebra Lecture Notes-II Vikas Bist Department of Mathematics Panjab University, Chandigarh-64 email: bistvikas@gmail.com Last revised on March 5, 8 This text is based on the lectures delivered

More information

Jordan Normal Form. Chapter Minimal Polynomials

Jordan Normal Form. Chapter Minimal Polynomials Chapter 8 Jordan Normal Form 81 Minimal Polynomials Recall p A (x) =det(xi A) is called the characteristic polynomial of the matrix A Theorem 811 Let A M n Then there exists a unique monic polynomial q

More information

Scientific Computing: Dense Linear Systems

Scientific Computing: Dense Linear Systems Scientific Computing: Dense Linear Systems Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 Course MATH-GA.2043 or CSCI-GA.2112, Spring 2012 February 9th, 2012 A. Donev (Courant Institute)

More information

Solution to Homework 1

Solution to Homework 1 Solution to Homework Sec 2 (a) Yes It is condition (VS 3) (b) No If x, y are both zero vectors Then by condition (VS 3) x = x + y = y (c) No Let e be the zero vector We have e = 2e (d) No It will be false

More information

1 9/5 Matrices, vectors, and their applications

1 9/5 Matrices, vectors, and their applications 1 9/5 Matrices, vectors, and their applications Algebra: study of objects and operations on them. Linear algebra: object: matrices and vectors. operations: addition, multiplication etc. Algorithms/Geometric

More information

Linear Algebra. Workbook

Linear Algebra. Workbook Linear Algebra Workbook Paul Yiu Department of Mathematics Florida Atlantic University Last Update: November 21 Student: Fall 2011 Checklist Name: A B C D E F F G H I J 1 2 3 4 5 6 7 8 9 10 xxx xxx xxx

More information

ELEMENTARY LINEAR ALGEBRA

ELEMENTARY LINEAR ALGEBRA ELEMENTARY LINEAR ALGEBRA K R MATTHEWS DEPARTMENT OF MATHEMATICS UNIVERSITY OF QUEENSLAND Second Online Version, December 998 Comments to the author at krm@mathsuqeduau All contents copyright c 99 Keith

More information

A Brief Outline of Math 355

A Brief Outline of Math 355 A Brief Outline of Math 355 Lecture 1 The geometry of linear equations; elimination with matrices A system of m linear equations with n unknowns can be thought of geometrically as m hyperplanes intersecting

More information

Math Introduction to Numerical Analysis - Class Notes. Fernando Guevara Vasquez. Version Date: January 17, 2012.

Math Introduction to Numerical Analysis - Class Notes. Fernando Guevara Vasquez. Version Date: January 17, 2012. Math 5620 - Introduction to Numerical Analysis - Class Notes Fernando Guevara Vasquez Version 1990. Date: January 17, 2012. 3 Contents 1. Disclaimer 4 Chapter 1. Iterative methods for solving linear systems

More information

Linear Algebra March 16, 2019

Linear Algebra March 16, 2019 Linear Algebra March 16, 2019 2 Contents 0.1 Notation................................ 4 1 Systems of linear equations, and matrices 5 1.1 Systems of linear equations..................... 5 1.2 Augmented

More information

a 11 x 1 + a 12 x a 1n x n = b 1 a 21 x 1 + a 22 x a 2n x n = b 2.

a 11 x 1 + a 12 x a 1n x n = b 1 a 21 x 1 + a 22 x a 2n x n = b 2. Chapter 1 LINEAR EQUATIONS 11 Introduction to linear equations A linear equation in n unknowns x 1, x,, x n is an equation of the form a 1 x 1 + a x + + a n x n = b, where a 1, a,, a n, b are given real

More information

Linear Algebra Massoud Malek

Linear Algebra Massoud Malek CSUEB Linear Algebra Massoud Malek Inner Product and Normed Space In all that follows, the n n identity matrix is denoted by I n, the n n zero matrix by Z n, and the zero vector by θ n An inner product

More information

2. Linear algebra. matrices and vectors. linear equations. range and nullspace of matrices. function of vectors, gradient and Hessian

2. Linear algebra. matrices and vectors. linear equations. range and nullspace of matrices. function of vectors, gradient and Hessian FE661 - Statistical Methods for Financial Engineering 2. Linear algebra Jitkomut Songsiri matrices and vectors linear equations range and nullspace of matrices function of vectors, gradient and Hessian

More information

Linear Algebra in Actuarial Science: Slides to the lecture

Linear Algebra in Actuarial Science: Slides to the lecture Linear Algebra in Actuarial Science: Slides to the lecture Fall Semester 2010/2011 Linear Algebra is a Tool-Box Linear Equation Systems Discretization of differential equations: solving linear equations

More information

Numerical Linear Algebra Homework Assignment - Week 2

Numerical Linear Algebra Homework Assignment - Week 2 Numerical Linear Algebra Homework Assignment - Week 2 Đoàn Trần Nguyên Tùng Student ID: 1411352 8th October 2016 Exercise 2.1: Show that if a matrix A is both triangular and unitary, then it is diagonal.

More information

Homework 2 Foundations of Computational Math 2 Spring 2019

Homework 2 Foundations of Computational Math 2 Spring 2019 Homework 2 Foundations of Computational Math 2 Spring 2019 Problem 2.1 (2.1.a) Suppose (v 1,λ 1 )and(v 2,λ 2 ) are eigenpairs for a matrix A C n n. Show that if λ 1 λ 2 then v 1 and v 2 are linearly independent.

More information

Lecture notes on Quantum Computing. Chapter 1 Mathematical Background

Lecture notes on Quantum Computing. Chapter 1 Mathematical Background Lecture notes on Quantum Computing Chapter 1 Mathematical Background Vector states of a quantum system with n physical states are represented by unique vectors in C n, the set of n 1 column vectors 1 For

More information

STAT200C: Review of Linear Algebra

STAT200C: Review of Linear Algebra Stat200C Instructor: Zhaoxia Yu STAT200C: Review of Linear Algebra 1 Review of Linear Algebra 1.1 Vector Spaces, Rank, Trace, and Linear Equations 1.1.1 Rank and Vector Spaces Definition A vector whose

More information

3 (Maths) Linear Algebra

3 (Maths) Linear Algebra 3 (Maths) Linear Algebra References: Simon and Blume, chapters 6 to 11, 16 and 23; Pemberton and Rau, chapters 11 to 13 and 25; Sundaram, sections 1.3 and 1.5. The methods and concepts of linear algebra

More information

ELEMENTARY LINEAR ALGEBRA

ELEMENTARY LINEAR ALGEBRA ELEMENTARY LINEAR ALGEBRA K. R. MATTHEWS DEPARTMENT OF MATHEMATICS UNIVERSITY OF QUEENSLAND Second Online Version, December 1998 Comments to the author at krm@maths.uq.edu.au Contents 1 LINEAR EQUATIONS

More information

Numerical Linear Algebra

Numerical Linear Algebra University of Alabama at Birmingham Department of Mathematics Numerical Linear Algebra Lecture Notes for MA 660 (1997 2014) Dr Nikolai Chernov April 2014 Chapter 0 Review of Linear Algebra 0.1 Matrices

More information

Math 113 Final Exam: Solutions

Math 113 Final Exam: Solutions Math 113 Final Exam: Solutions Thursday, June 11, 2013, 3.30-6.30pm. 1. (25 points total) Let P 2 (R) denote the real vector space of polynomials of degree 2. Consider the following inner product on P

More information

REPRESENTATION THEORY WEEK 7

REPRESENTATION THEORY WEEK 7 REPRESENTATION THEORY WEEK 7 1. Characters of L k and S n A character of an irreducible representation of L k is a polynomial function constant on every conjugacy class. Since the set of diagonalizable

More information

Elementary Linear Algebra

Elementary Linear Algebra Matrices J MUSCAT Elementary Linear Algebra Matrices Definition Dr J Muscat 2002 A matrix is a rectangular array of numbers, arranged in rows and columns a a 2 a 3 a n a 2 a 22 a 23 a 2n A = a m a mn We

More information

EE/ACM Applications of Convex Optimization in Signal Processing and Communications Lecture 2

EE/ACM Applications of Convex Optimization in Signal Processing and Communications Lecture 2 EE/ACM 150 - Applications of Convex Optimization in Signal Processing and Communications Lecture 2 Andre Tkacenko Signal Processing Research Group Jet Propulsion Laboratory April 5, 2012 Andre Tkacenko

More information

Lecture Summaries for Linear Algebra M51A

Lecture Summaries for Linear Algebra M51A These lecture summaries may also be viewed online by clicking the L icon at the top right of any lecture screen. Lecture Summaries for Linear Algebra M51A refers to the section in the textbook. Lecture

More information

Matrix Factorization and Analysis

Matrix Factorization and Analysis Chapter 7 Matrix Factorization and Analysis Matrix factorizations are an important part of the practice and analysis of signal processing. They are at the heart of many signal-processing algorithms. Their

More information

Lecture notes: Applied linear algebra Part 1. Version 2

Lecture notes: Applied linear algebra Part 1. Version 2 Lecture notes: Applied linear algebra Part 1. Version 2 Michael Karow Berlin University of Technology karow@math.tu-berlin.de October 2, 2008 1 Notation, basic notions and facts 1.1 Subspaces, range and

More information

Chapter 7. Linear Algebra: Matrices, Vectors,

Chapter 7. Linear Algebra: Matrices, Vectors, Chapter 7. Linear Algebra: Matrices, Vectors, Determinants. Linear Systems Linear algebra includes the theory and application of linear systems of equations, linear transformations, and eigenvalue problems.

More information

1. What is the determinant of the following matrix? a 1 a 2 4a 3 2a 2 b 1 b 2 4b 3 2b c 1. = 4, then det

1. What is the determinant of the following matrix? a 1 a 2 4a 3 2a 2 b 1 b 2 4b 3 2b c 1. = 4, then det What is the determinant of the following matrix? 3 4 3 4 3 4 4 3 A 0 B 8 C 55 D 0 E 60 If det a a a 3 b b b 3 c c c 3 = 4, then det a a 4a 3 a b b 4b 3 b c c c 3 c = A 8 B 6 C 4 D E 3 Let A be an n n matrix

More information

Math 489AB Exercises for Chapter 1 Fall Section 1.0

Math 489AB Exercises for Chapter 1 Fall Section 1.0 Math 489AB Exercises for Chapter 1 Fall 2008 Section 1.0 1.0.2 We want to maximize x T Ax subject to the condition x T x = 1. We use the method of Lagrange multipliers. Let f(x) = x T Ax and g(x) = x T

More information

Calculating determinants for larger matrices

Calculating determinants for larger matrices Day 26 Calculating determinants for larger matrices We now proceed to define det A for n n matrices A As before, we are looking for a function of A that satisfies the product formula det(ab) = det A det

More information