Mathematical Methods wk 2: Linear Operators


John Magorrian, magog@thphys.ox.ac.uk. These are work-in-progress notes for the second-year course on mathematical methods. The most up-to-date version is available from http://www-thphys.physics.ox.ac.uk/people/johnmagorrian/mm.

3 Linear operators

Throughout this section I assume that $V$ is an $n$-dimensional inner-product space and that $|e_1\rangle,\dots,|e_n\rangle$ are an orthonormal basis for this space.

Matrix elements

Recall that linear operators are mappings from a vector space $V$ to itself that satisfy the linearity conditions
$$A(|a\rangle + |b\rangle) = A|a\rangle + A|b\rangle,\qquad A(\alpha|a\rangle) = \alpha A|a\rangle. \tag{3.1}$$
Let $A$ be a linear operator and suppose that
$$|b\rangle = A|a\rangle. \tag{3.2}$$
We have already seen how this is equivalent to the matrix equation $b = Aa$, where $a$ and $b$ are the column vectors that represent $|a\rangle$ and $|b\rangle$ respectively. Armed with our inner product we can now obtain an explicit expression for the elements of the matrix representing the operator $A$. Expanding $|a\rangle = \sum_{k=1}^n a_k|e_k\rangle$ and $|b\rangle = \sum_{i=1}^n b_i|e_i\rangle$, equation (3.2) becomes
$$\sum_i b_i|e_i\rangle = \sum_k a_k A|e_k\rangle. \tag{3.3}$$
Now choose any basis bra $\langle e_j|$ and apply it to both sides:
$$\sum_i b_i\langle e_j|e_i\rangle = \sum_k a_k\langle e_j|A|e_k\rangle.$$
Since $\langle e_j|e_i\rangle = \delta_{ji}$, equation (3.2) can therefore be represented by the matrix equation
$$b_j = \sum_k A_{jk}a_k, \tag{3.4}$$
where $A_{jk} \equiv \langle e_j|A|e_k\rangle$ are the matrix elements of the operator $A$ in the $|e_1\rangle,\dots,|e_n\rangle$ basis.

The identity operator

The identity operator $I$, defined through $I|v\rangle = |v\rangle$ for all $|v\rangle \in V$, is clearly a linear operator. Less obviously, it can be written as
$$I = \sum_i |e_i\rangle\langle e_i|. \tag{3.5}$$
This is sometimes known as the resolution of the identity.

Proof: Any $|v\rangle \in V$ can be expressed as $|v\rangle = \sum_{i=1}^n \alpha_i|e_i\rangle$. Using the expression (3.5) for $I$, we have
$$I|v\rangle = \sum_i |e_i\rangle\langle e_i|\sum_j \alpha_j|e_j\rangle = \sum_{ij}|e_i\rangle\,\alpha_j\langle e_i|e_j\rangle = \sum_{ij}|e_i\rangle\,\alpha_j\delta_{ij} = \sum_i \alpha_i|e_i\rangle = |v\rangle. \tag{3.6}$$

The individual terms $|e_i\rangle\langle e_i|$ appearing in the sum (3.5) are known as projection operators: if we apply $P_i = |e_i\rangle\langle e_i|$ to a vector $|a\rangle = \sum_j a_j|e_j\rangle$ the result is $P_i|a\rangle = a_i|e_i\rangle$. Similarly, $\langle a|P_i = a_i^\star\langle e_i|$. We have already seen a use of projection operators in the Gram–Schmidt procedure earlier.

Combining operators

The composition of two linear operators $A$ and $B$ is another linear operator. In case it is not obvious how to show this, let us write $C = AB$ for the result of applying $B$ first, then $A$. Now notice that $C$ is a mapping from $V$ to $V$ and that the conditions (3.1) hold: for any $|a\rangle, |b\rangle \in V$ and $\alpha \in F$,
$$C(|a\rangle + |b\rangle) = A(B|a\rangle + B|b\rangle) = AB|a\rangle + AB|b\rangle = C|a\rangle + C|b\rangle,$$
$$C(\alpha|a\rangle) = A(B\alpha|a\rangle) = A(\alpha B|a\rangle) = \alpha AB|a\rangle = \alpha C|a\rangle. \tag{3.7}$$

Exercise: Show that the matrix representing $C$ is identical to the matrix obtained by multiplying the matrix representations of the operators $A$ and $B$.

Exercise: Show that the sum of two linear operators is another linear operator.

In general $AB \ne BA$: the order of composition matters. The difference is another linear operator,
$$AB - BA \equiv [A,B], \tag{3.8}$$
known as the commutator of $A$ and $B$.

Functions of operators

We can construct new linear operators by composition and addition. For example, given a linear operator $A$, let us define $\exp A$ in the obvious way as
$$\exp A \equiv \lim_{N\to\infty}\left(I + \frac{A}{N}\right)^N = I + A + \frac{1}{2!}A^2 + \frac{1}{3!}A^3 + \cdots + \frac{1}{m!}A^m + \cdots, \tag{3.9}$$
in which $I$ is the identity operator, $A^0 \equiv I$, $A^1 \equiv A$, $A^2 \equiv AA$, $A^3 \equiv AAA$ and so on. It might not be obvious that this $\exp A$ is a linear operator, but notice that it is a mapping from $V$ to itself and that for any $|a\rangle, |b\rangle \in V$ and $\alpha \in F$ we have that
$$\exp A\,(|a\rangle + |b\rangle) = \exp A\,|a\rangle + \exp A\,|b\rangle,\qquad \exp A\,(\alpha|a\rangle) = \alpha\exp A\,|a\rangle. \tag{3.10}$$
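The matrix-element formula $A_{jk} = \langle e_j|A|e_k\rangle$ is easy to check numerically. Here is a minimal numpy sketch; the operator and vector below are made up purely for illustration:

```python
import numpy as np

# A linear operator on R^3, represented in the standard orthonormal basis.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

# Orthonormal basis kets e_1, e_2, e_3 (columns of the identity).
basis = np.eye(3)

# Matrix elements A_jk = <e_j| A |e_k>, assembled one inner product at a time.
A_elements = np.array([[basis[:, j] @ (A @ basis[:, k]) for k in range(3)]
                       for j in range(3)])
assert np.allclose(A_elements, A)

# They reproduce the action of the operator: b_j = sum_k A_jk a_k.
a = np.array([1.0, -2.0, 0.5])
b = A @ a
assert np.allclose([basis[:, j] @ b for j in range(3)], A_elements @ a)
```

For a complex space the bra is the conjugate transpose, so the inner product would instead be `np.vdot(basis[:, j], A @ basis[:, k])`.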

Example: what is $\exp(\alpha G)$, where
$$G = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}?$$
First note that $G^2 = -I$. Therefore $G^{2m} = (-1)^m I$ and $G^{2m+1} = (-1)^m G$. We can use this to split the expansion of the exponential into a sum of even and odd terms:
$$\exp(\alpha G) = \sum_{k=0}^\infty \frac{1}{k!}(\alpha G)^k = \sum_{m=0}^\infty \frac{1}{(2m)!}\alpha^{2m}G^{2m} + \sum_{m=0}^\infty \frac{1}{(2m+1)!}\alpha^{2m+1}G^{2m+1}$$
$$= \sum_{m=0}^\infty \frac{(-1)^m}{(2m)!}\alpha^{2m}\, I + \sum_{m=0}^\infty \frac{(-1)^m}{(2m+1)!}\alpha^{2m+1}\, G = \cos\alpha\, I + \sin\alpha\, G = \begin{pmatrix} \cos\alpha & -\sin\alpha \\ \sin\alpha & \cos\alpha \end{pmatrix}.$$
For this reason $G$ is known as the generator of two-dimensional rotations: for $\epsilon \ll 1$ the operator $I + \epsilon G$ is a rotation by an angle $\epsilon$; from the definition (3.9) the operator $\exp(\alpha G)$ is obtained by chaining together many such infinitesimal rotations.

Dual of an operator

Given an operator $A$, let $|b\rangle = A|a\rangle$ be the result of applying $A$ to an arbitrary ket $|a\rangle$. How are the corresponding dual vectors $\langle a|$ and $\langle b|$ related? Expanding $|b\rangle = A|a\rangle$ in terms of components we have that
$$\sum_i b_i|e_i\rangle = \sum_{ij} A_{ij}a_j|e_i\rangle. \tag{3.14}$$
Taking the dual of each side of this equation gives
$$\sum_i b_i^\star\langle e_i| = \sum_{ij} A_{ij}^\star a_j^\star\langle e_i| = \sum_{ij} a_j^\star (A^\dagger)_{ji}\langle e_i|. \tag{3.15}$$
The LHS is clearly just $\langle b|$. The RHS is $\langle a|A^\dagger$. To see this, apply the RHS to an arbitrary basis ket $|e_k\rangle$: the result is $\sum_j a_j^\star (A^\dagger)_{jk}$. The result of operating on the same ket with $\langle a|A^\dagger$ is
$$\langle a|A^\dagger|e_k\rangle = \sum_j a_j^\star\langle e_j|A^\dagger|e_k\rangle = \sum_j a_j^\star (A^\dagger)_{jk}. \tag{3.16}$$
Therefore, if $|b\rangle = A|a\rangle$ then the corresponding dual vectors are related via $\langle b| = \langle a|A^\dagger$: the dual to the operator $A$ is its Hermitian conjugate $A^\dagger$.

Exercise: Use the properties of the inner product to show that $\langle a|A^\dagger|b\rangle = \langle b|A|a\rangle^\star$.

Special types of matrix

Let $A$ be an $n\times n$ matrix with elements $A_{ij}$. The transpose $A^T$ is obtained from $A$ by swapping rows and columns: it has elements $(A^T)_{ij} = A_{ji}$. The Hermitian conjugate $A^\dagger$ is obtained by swapping rows and columns and taking the complex conjugate: it has elements $(A^\dagger)_{ij} = A_{ji}^\star$.

Exercise: Show that $(AB)^T = B^T A^T$ and $(AB)^\dagger = B^\dagger A^\dagger$.

The following table lists the conditions that are satisfied by members of some important families of matrices:

    Symmetric:   $A = A^T$
    Hermitian:   $A = A^\dagger$
    Orthogonal:  $AA^T = I$
    Unitary:     $AA^\dagger = I$
    [Normal:     $A^\dagger A = AA^\dagger$]

Hermitian matrices are sometimes also known as self-adjoint matrices. Recall that orthogonal matrices represent reflections and rotations. It turns out (see the change-of-basis results below) that the conditions for a matrix to be Hermitian or unitary are independent of orthonormal basis: if the matrix representing a particular linear operator is Hermitian (unitary) in one basis then it is Hermitian (unitary) in all bases, and one can sensibly say that the underlying operator itself is Hermitian (unitary). For the special case of real vector spaces this implies that if a matrix is symmetric (orthogonal) in one basis then it is symmetric (orthogonal) in all bases.

Exercise: Show that the product of two unitary matrices is another unitary matrix. Does the same result hold for Hermitian matrices? If not, find a condition under which it does hold.

Change of basis

Bases are not unique. Sometimes it is convenient to do parts of a calculation in one basis and the rest in another. Therefore it is important to know how to transform vector coordinates and matrix elements from one basis to another. Suppose we have two different orthonormal basis sets, $\{|e_1\rangle,\dots,|e_n\rangle\}$ and $\{|e'_1\rangle,\dots,|e'_n\rangle\}$, with
$$|e'_i\rangle = \sum_j U_{ji}|e_j\rangle. \tag{3.17}$$
That is, the $i^{\rm th}$ column of the matrix $U$ gives the representation of $|e'_i\rangle$ in the unprimed basis. The corresponding relationship for the basis bras is
$$\langle e'_i| = \sum_j U_{ji}^\star\langle e_j|. \tag{3.18}$$
The orthonormality of the bases places constraints on the transformation matrix $U$:
$$\delta_{ij} = \langle e'_i|e'_j\rangle = \sum_{kl} U_{ki}^\star U_{lj}\langle e_k|e_l\rangle = \sum_k U_{ki}^\star U_{kj} = \sum_k (U^\dagger)_{ik}U_{kj}, \tag{3.19}$$
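The identity $\exp(\alpha G) = \cos\alpha\,I + \sin\alpha\,G$ can be verified by summing the power series directly. A sketch (the truncation order is an ad-hoc choice, ample for moderate $\alpha$):

```python
import numpy as np

# Generator of two-dimensional rotations; satisfies G @ G == -I.
G = np.array([[0.0, -1.0],
              [1.0,  0.0]])

def exp_series(M, terms=30):
    """Truncated power series exp(M) = sum_m M^m / m!."""
    out = np.zeros_like(M)
    term = np.eye(M.shape[0])      # M^0 / 0! = I
    for m in range(terms):
        out = out + term
        term = term @ M / (m + 1)  # next term: M^(m+1) / (m+1)!
    return out

alpha = 0.7
R = exp_series(alpha * G)
expected = np.array([[np.cos(alpha), -np.sin(alpha)],
                     [np.sin(alpha),  np.cos(alpha)]])
assert np.allclose(R, expected)
```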

or, in matrix form, $I = U^\dagger U$: the matrix $U$ describing the coordinate transformation must be unitary. Unitary matrices are the generalization of real orthogonal transformations (i.e., rotations and reflections) to complex vector spaces.

Transformation of vector components

Taking an arbitrary vector $|a\rangle = \sum_j a_j|e_j\rangle$ and using $\langle e'_i|$ to project $|a\rangle$ along the $i^{\rm th}$ primed basis vector gives
$$a'_i = \langle e'_i|a\rangle = \sum_{jk} U_{ki}^\star a_j\langle e_k|e_j\rangle = \sum_j U_{ji}^\star a_j = \sum_j (U^\dagger)_{ij}a_j. \tag{3.20}$$
In matrix form, the components in the primed basis are given by $a' = U^\dagger a$.

Transformation of matrix elements

In the primed basis, the operator $A$ has matrix elements
$$A'_{ij} = \langle e'_i|A|e'_j\rangle = \sum_{kl} U_{ki}^\star\langle e_k|A|e_l\rangle U_{lj} = \sum_{kl}(U^\dagger)_{ik}A_{kl}U_{lj}, \tag{3.21}$$
so that $A' = U^\dagger A U$. What does this mean? When applying the matrix $A$ to a vector whose coordinates $a'$ are given with respect to the primed basis, think of $U$ as transforming from $a'$ to $a$. Then we apply the matrix $A$ to the result before transforming back to the primed basis with $U^\dagger$.

Exercise: Derive the change-of-basis formulae (3.20) and (3.21) by resolving the identity (3.5). That is, write
$$a'_i = \langle e'_i|a\rangle = \langle e'_i|I|a\rangle,\qquad A'_{ij} = \langle e'_i|A|e'_j\rangle = \langle e'_i|IAI|e'_j\rangle \tag{3.22}$$
and use the fact that $I$ can be expressed as $I = \sum_k|e_k\rangle\langle e_k| = \sum_l|e'_l\rangle\langle e'_l|$. You should find that
$$a'_i = \sum_j\langle e'_i|e_j\rangle a_j,\qquad A'_{ij} = \sum_{kl}\langle e'_i|e_k\rangle\langle e_k|A|e_l\rangle\langle e_l|e'_j\rangle, \tag{3.23}$$
where $\langle e'_i|e_j\rangle$ is the projection of $|e_j\rangle$ onto the primed basis, that is, a matrix whose $j^{\rm th}$ column expresses $|e_j\rangle$ in the primed basis. How is this matrix related to the $U$ introduced above?

Exercise: Justify the statement at the end of the section on special types of matrix: show that if a matrix is Hermitian (unitary) in one orthonormal basis then it is Hermitian (unitary) in any other orthonormal basis.

Example: two-dimensional rotations. Suppose that the $|e'_1\rangle, |e'_2\rangle$ basis is related to the $|e_1\rangle, |e_2\rangle$ basis by a rotation through an angle $\alpha$. The basis vectors are then related through
$$|e'_1\rangle = \cos\alpha\,|e_1\rangle + \sin\alpha\,|e_2\rangle,\qquad |e'_2\rangle = -\sin\alpha\,|e_1\rangle + \cos\alpha\,|e_2\rangle, \tag{3.24}$$
so that
$$U = \begin{pmatrix} \cos\alpha & -\sin\alpha \\ \sin\alpha & \cos\alpha \end{pmatrix},\qquad U^T = \begin{pmatrix} \cos\alpha & \sin\alpha \\ -\sin\alpha & \cos\alpha \end{pmatrix}. \tag{3.25}$$
The $i^{\rm th}$ column of $U$ expresses $|e'_i\rangle$ in the $|e_j\rangle$ basis: $U_{ji} = \langle e_j|e'_i\rangle$. Clearly the coordinates $a$, $a'$ of a vector $|a\rangle$ in the two bases are related through $a' = U^T a$.

3.6 Rank

The range of a linear operator is the set of all possible output vectors:
$$\operatorname{range} A \equiv \{A|v\rangle : |v\rangle \in V\}. \tag{3.26}$$
The range is clearly the subspace of $V$ spanned by the vectors $A|e_1\rangle, A|e_2\rangle, \dots, A|e_n\rangle$. The rank of an operator is the dimension of its range.

The rank of a matrix is unchanged under any of the following elementary row operations: (i) swap two rows; (ii) multiply one row by a non-zero constant; (iii) add a multiple of one row to another. Each such operation produces a new matrix whose rows are linear combinations of the rows of the original matrix. A simple way of calculating the rank of a matrix is to use elementary row operations to reduce it to an upper triangular matrix and then count the number of linearly independent columns in the result.

Every elementary row operation can be expressed as an elementary matrix. For example, starting with a $3\times 3$ matrix $A$ and adding $\alpha$ times row 3 to row 1, then swapping the first two rows, gives a new matrix $E_2E_1A$, where the elementary matrices $E_1$ and $E_2$ are
$$E_1 = \begin{pmatrix} 1 & 0 & \alpha \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix},\qquad E_2 = \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}.$$

Aside: linear equations. To solve the linear equation $Ax = b$, apply elementary row operations $E_1, E_2, \dots$ to $A$ to reduce it to upper triangular form. That is,
$$Ax = b \quad\longrightarrow\quad E_m\cdots E_2E_1Ax = E_m\cdots E_2E_1 b,$$
$$\begin{pmatrix} A'_{11} & A'_{12} & A'_{13} & \cdots & A'_{1n} \\ 0 & A'_{22} & A'_{23} & \cdots & A'_{2n} \\ 0 & 0 & A'_{33} & \cdots & A'_{3n} \\ & & & \ddots & \vdots \\ 0 & 0 & \cdots & 0 & A'_{nn} \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ \vdots \\ x_n \end{pmatrix} = \begin{pmatrix} b'_1 \\ b'_2 \\ b'_3 \\ \vdots \\ b'_n \end{pmatrix},$$
where $A'_{ij} = (E_m\cdots E_2E_1A)_{ij}$ and $b'_i = (E_m\cdots E_2E_1b)_i$. Then the $x_i$ can be found by back-substitution.

Aside: matrix inverse. To find the inverse (assuming that it exists) of a square matrix $A$, apply elementary row operations to reduce $A$ to the identity,
$$A \to E_1A \to E_2E_1A \to \cdots \to E_m\cdots E_2E_1A = I,$$
while simultaneously applying the same operations to the identity $I$:
$$I \to E_1I \to E_2E_1I \to \cdots \to E_m\cdots E_2E_1I.$$
Then $E_m\cdots E_1A = I$, so that $A^{-1} = E_m\cdots E_1I$.
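Reduction to upper triangular form followed by back-substitution can be sketched in a few lines of numpy. This is an illustrative implementation, not a production solver (it adds partial pivoting, a row-swap at each step, for numerical safety; the test system is made up):

```python
import numpy as np

def gauss_solve(A, b):
    """Solve Ax = b by elementary row operations (reduction to upper
    triangular form) followed by back-substitution."""
    A = A.astype(float)
    b = b.astype(float)
    n = len(b)
    for k in range(n):
        # Elementary operation: swap rows to bring the largest pivot up.
        p = k + np.argmax(np.abs(A[k:, k]))
        A[[k, p]], b[[k, p]] = A[[p, k]], b[[p, k]]
        # Elementary operation: add multiples of row k to rows below it.
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    # Back-substitution on the upper triangular system A'x = b'.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[2.0, 1.0, -1.0],
              [-3.0, -1.0, 2.0],
              [-2.0, 1.0, 2.0]])
b = np.array([8.0, -11.0, -3.0])
x = gauss_solve(A, b)
assert np.allclose(A @ x, b)   # solution is (2, 3, -1)
```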

3.7 Determinant

Permutations

A permutation of the list $1, 2, \dots, m$ is another list that contains each of the numbers $1, \dots, m$ exactly once. In other words, it is a straightforward shuffling of the order of the elements. There are $m!$ permutations of an $m$-element list. Given a permutation $P$ we write $P1$ for the first element in the shuffled list, $P2$ for the second, etc. Then $P$ can be written as $P1, P2, \dots, Pm$. An alternative notation is
$$P = \begin{pmatrix} 1 & 2 & \cdots & m \\ P1 & P2 & \cdots & Pm \end{pmatrix}, \tag{3.30}$$
which emphasises that $P$ is a mapping from the set $1, \dots, m$ (top row) to itself (values given on bottom row). From any two permutation mappings $P$ and $Q$ we can compose a new one $PQ$ defined through $(PQ)i = P(Qi)$. There is an identity mapping $1$ for which $1i = i$, and every $P$ has an inverse
$$P^{-1} = \begin{pmatrix} P1 & P2 & \cdots & Pm \\ 1 & 2 & \cdots & m \end{pmatrix}, \tag{3.31}$$
which is well defined because each number $1, 2, \dots, m$ appears exactly once in the top row of (3.31).

Any permutation $P1, \dots, Pm$ can be constructed from $1, \dots, m$ by a sequence of pairwise element exchanges. Even (odd) permutations require an even (odd) number of exchanges. The sign of a permutation is defined as
$$\operatorname{sgn} P = \begin{cases} +1, & \text{if $P$ is an even permutation of } 1,\dots,m, \\ -1, & \text{if $P$ is an odd permutation of } 1,\dots,m. \end{cases} \tag{3.32}$$
Given two permutations $P$, $Q$, we have $\operatorname{sgn}(PQ) = \operatorname{sgn}P\,\operatorname{sgn}Q$. The identity permutation is even. Therefore $+1 = \operatorname{sgn}(PP^{-1}) = \operatorname{sgn}P\,\operatorname{sgn}P^{-1}$, showing that $\operatorname{sgn}P^{-1} = \operatorname{sgn}P$. The following table shows all $3! = 6$ permutations of the 3-element list $1, 2, 3$:

    $P1,P2,P3$:     123   213   231   321   312   132
    $\operatorname{sgn}P$:   $+1$   $-1$   $+1$   $-1$   $+1$   $-1$

Determinant

The determinant of an $n\times n$ matrix $A$ is defined through
$$\det A \equiv \sum_P \operatorname{sgn}P\, A_{1,P1}A_{2,P2}\cdots A_{n,Pn}, \tag{3.33}$$
where the sum is over all permutations $P$ of $1, 2, 3, \dots, n$. For example, for the case $n = 3$,
$$\det A = A_{11}A_{22}A_{33} - A_{12}A_{21}A_{33} + A_{12}A_{23}A_{31} - A_{13}A_{22}A_{31} + A_{13}A_{21}A_{32} - A_{11}A_{23}A_{32}, \tag{3.34}$$
where in the sum (3.33) I have taken the permutations in the order given in the table above.

Exercise: Confirm that this expression agrees with the result obtained by the more familiar minor-expansion method.

Properties of the determinant

In the following $A$ and $B$ are any two square matrices.
(i) If two rows of $A$ are equal to one another then $\det A = 0$.
(ii) If $B$ can be obtained from $A$ by multiplying one row by a factor $\alpha$ then $\det B = \alpha\det A$.
(iii) If $B$ can be obtained from $A$ by adding a multiple of one row to another then $\det B = \det A$.
(iv) If $B$ can be obtained from $A$ by swapping two rows then $\det B = -\det A$.
(v) $\det A^T = \det A$.
(vi) $\det(AB) = \det A\,\det B$.

Proof of (i): Suppose that the first two rows of $A$ are equal, $A_{1j} = A_{2j}$, and consider the factor $F = \operatorname{sgn}P\,A_{1,P1}A_{2,P2}$ in each term of the sum
$$\det A = \sum_P \operatorname{sgn}P\, A_{1,P1}A_{2,P2}\cdots A_{n,Pn}. \tag{3.35}$$
Since $A_{2j} = A_{1j}$ we have that $F = \operatorname{sgn}P\,A_{1,P1}A_{1,P2}$. For each permutation $P$ there is another, $P'$, that swaps only $P1$ and $P2$, leaving the other elements unchanged. This new permutation has
$$F' = \operatorname{sgn}P'\,A_{1,P2}A_{1,P1} = -\operatorname{sgn}P\,A_{1,P1}A_{1,P2} = -F.$$
So, the term involving $P'$ cancels out the contribution of the term involving $P$ to the sum (3.35). Summing over all $P$, the result is $\det A = 0$.

Proof of (ii): Without loss of generality, suppose that
$$B = \begin{pmatrix} \alpha A_{11} & \alpha A_{12} & \cdots & \alpha A_{1n} \\ A_{21} & A_{22} & \cdots & A_{2n} \\ \vdots & & & \vdots \\ A_{n1} & A_{n2} & \cdots & A_{nn} \end{pmatrix}. \tag{3.36}$$
Then
$$\det B = \sum_P \operatorname{sgn}P\, B_{1,P1}B_{2,P2}\cdots B_{n,Pn} = \alpha\sum_P \operatorname{sgn}P\, A_{1,P1}A_{2,P2}\cdots A_{n,Pn} = \alpha\det A.$$

Proof of (iii): Suppose that
$$B = \begin{pmatrix} A_{11} + \alpha A_{21} & A_{12} + \alpha A_{22} & \cdots & A_{1n} + \alpha A_{2n} \\ A_{21} & A_{22} & \cdots & A_{2n} \\ \vdots & & & \vdots \\ A_{n1} & A_{n2} & \cdots & A_{nn} \end{pmatrix}.$$
Then
$$\det B = \sum_P \operatorname{sgn}P\,[A_{1,P1} + \alpha A_{2,P1}]A_{2,P2}\cdots A_{n,Pn} = \det A + \alpha\sum_P \operatorname{sgn}P\,A_{2,P1}A_{2,P2}\cdots A_{n,Pn} = \det A + 0. \tag{3.39}$$
The second term is zero by property (i) above.

Proof of (iv): Suppose, for example, that $B$ is obtained from $A$ by swapping the first two rows. Then
$$\det B = \sum_P \operatorname{sgn}P\, B_{1,P1}B_{2,P2}\cdots B_{n,Pn} = \sum_P \operatorname{sgn}P\, A_{1,PQ1}A_{2,PQ2}\cdots A_{n,PQn}, \tag{3.40}$$
where $Q$ is a permutation that unswaps the first two rows and leaves the rest unchanged: $Q1 = 2$, $Q2 = 1$ and $Qi = i$ for $i > 2$. The sum over $P$ can be replaced by another sum over $P' = PQ$, with $\operatorname{sgn}P' = \operatorname{sgn}P\,\operatorname{sgn}Q = -\operatorname{sgn}P$. Therefore
$$\det B = -\sum_{P'} \operatorname{sgn}P'\, A_{1,P'1}A_{2,P'2}\cdots A_{n,P'n} = -\det A. \tag{3.41}$$

Proof of (v):
$$\det A^T = \sum_P \operatorname{sgn}P\,(A^T)_{1,P1}(A^T)_{2,P2}\cdots(A^T)_{n,Pn} = \sum_P \operatorname{sgn}P\, A_{P1,1}A_{P2,2}\cdots A_{Pn,n} = \sum_P \operatorname{sgn}P\prod_{i=1}^n A_{Pi,i}. \tag{3.42}$$
Let $Q$ be the inverse permutation to $P$: if $Pi = j$, then $Qj = i$. Then $\prod_i A_{Pi,i} = \prod_j A_{j,Qj}$. Noting also that $\operatorname{sgn}P = \operatorname{sgn}Q$ and replacing the sum over $P$ by an equivalent sum over $Q$, we obtain
$$\det A^T = \sum_Q \operatorname{sgn}Q\prod_{j=1}^n A_{j,Qj} = \det A. \tag{3.43}$$

Proof of (vi): homework exercise!

Exercise: Use the properties above together with the change-of-basis results to show that the determinant is independent of basis.

3.8 Trace

The trace $\operatorname{tr}A$ of an $n\times n$ matrix $A$ is the sum of its diagonal elements:
$$\operatorname{tr}A \equiv \sum_{i=1}^n A_{ii}. \tag{3.44}$$
The trace satisfies
$$\operatorname{tr}AB = \operatorname{tr}BA \tag{3.45}$$
because
$$\operatorname{tr}AB = \sum_i (AB)_{ii} = \sum_{ij} A_{ij}B_{ji} = \sum_{ji} B_{ji}A_{ij} = \sum_j (BA)_{jj} = \operatorname{tr}BA. \tag{3.46}$$
Taking $A \to A_1$, $B \to A_2A_3\cdots A_m$ we see that $\operatorname{tr}(A_1A_2\cdots A_m) = \operatorname{tr}(A_2\cdots A_mA_1)$: the trace is invariant under cyclic permutations.

Exercise: Show that the trace is independent of basis. Explain then how, given a $3\times3$ rotation matrix, one can find the rotation angle directly from the trace.

3.9 Eigenvectors and diagonalisation

Recall that $|v\rangle \ne 0$ is an eigenvector of an operator $A$ with eigenvalue $\lambda$ if it satisfies the eigenvalue equation
$$A|v\rangle = \lambda|v\rangle. \tag{3.47}$$
To find $|v\rangle$ we first find $\lambda$ by rewriting the equation above as
$$(A - \lambda I)|v\rangle = 0. \tag{3.48}$$
Clearly $A - \lambda I$ can't be invertible: if it were, then we could operate on $|0\rangle$ with $(A - \lambda I)^{-1}$ to get $|v\rangle = 0$. So, we must have that
$$\det(A - \lambda I) = 0, \tag{3.49}$$
which is known as the characteristic equation for $A$. The characteristic equation is an $n^{\rm th}$-order polynomial in $\lambda$ which can always be written in the form
$$(\lambda - \lambda_1)(\lambda - \lambda_2)\cdots(\lambda - \lambda_n) = 0,$$
where the $n$ roots (i.e., eigenvalues) $\lambda_1, \dots, \lambda_n$ are in general complex and not necessarily distinct. If a particular value of $\lambda$ appears $k > 1$ times then that eigenvalue is said to be $k$-fold degenerate. The integer $k$ is also sometimes called the multiplicity of the eigenvalue.

Exercise: Let $A$ be a linear operator with eigenvector $|v\rangle$ and corresponding eigenvalue $\lambda$. Show that $|v\rangle$ is also an eigenvector of the linear operator $\exp(\alpha A)$, but with eigenvalue $\exp(\alpha\lambda)$.

Example: find the eigenvalues and eigenvectors of
$$A = \begin{pmatrix} 1 & 0 & 1 \\ 0 & 2 & 0 \\ 1 & 0 & 1 \end{pmatrix}.$$
The characteristic equation is $\det(A - \lambda I) = (2-\lambda)\left[(1-\lambda)^2 - 1\right] = 0$, i.e., $\lambda^3 - 4\lambda^2 + 4\lambda = 0$, which, when factorised, is $\lambda(\lambda - 2)^2 = 0$. Therefore the eigenvalues are $\lambda = 0, 2, 2$: the eigenvalue $\lambda = 2$ is doubly degenerate.

To find the eigenvector $v_1$ corresponding to the first eigenvalue $\lambda_1 = 0$, take $v_1 = (x_1, x_2, x_3)^T$ and substitute into the eigenvalue equation (3.47) to find $x_1 = -x_3$ and $x_2 = 0$. Any vector $v_1$ that satisfies these constraints is an eigenvector of $A$ with eigenvalue $0$. For example, we could take $v_1 = (1, 0, -1)^T$ or $(\pi, 0, -\pi)^T$ or even $(i, 0, -i)^T$ (assuming we have a complex vector space). It is usually most convenient, however, to choose eigenvectors to be normalized. Therefore we choose $v_1 = \frac{1}{\sqrt2}(1, 0, -1)^T$.

Taking $\lambda_2 = \lambda_3 = 2$ and substituting $v = (x_1, x_2, x_3)^T$ into the eigenvalue equation yields the constraints $-x_1 + x_3 = 0$ and $x_1 - x_3 = 0$. So, we must set $x_1 = x_3$, but we are free to choose anything for $x_2$. For $v_2$ let us take $x_1 = x_3 = 1/\sqrt2$ and $x_2 = 0$, while for $v_3$ we choose $x_1 = x_3 = 0$ and $x_2 = 1$. To summarise, we have found the following eigenvalues and eigenvectors for the matrix $A$:
$$\lambda_1 = 0,\ v_1 = \frac{1}{\sqrt2}\begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix};\qquad \lambda_2 = 2,\ v_2 = \frac{1}{\sqrt2}\begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix};\qquad \lambda_3 = 2,\ v_3 = \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}.$$

Diagonalisation

Notice that for the $3\times3$ matrix $A$ above we were able to find three eigenvectors $v_1$, $v_2$, $v_3$. These three eigenvectors turn out to be orthogonal. Therefore they can be used as a basis for the 3-dimensional vector space on which $A$ operates. In this eigenbasis the matrix representing the operator $A$ takes on a particularly simple form: it is diagonal, with matrix elements
$$\langle v_i|A|v_j\rangle = \lambda_i\delta_{ij}. \tag{3.52}$$
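A worked example of this kind is easy to check numerically. The matrix below is an assumption on my part, a real symmetric matrix chosen to have eigenvalues $0, 2, 2$ with eigenvectors of the form found above; `numpy.linalg.eigh` (the routine for Hermitian/symmetric matrices) recovers them:

```python
import numpy as np

# Assumed example matrix: real symmetric, with a doubly degenerate eigenvalue.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 2.0, 0.0],
              [1.0, 0.0, 1.0]])

evals, evecs = np.linalg.eigh(A)   # eigenvalues in ascending order
assert np.allclose(evals, [0.0, 2.0, 2.0])

# Each column of evecs satisfies A v = lambda v, and the columns are
# orthonormal (eigh always returns an orthonormal eigenbasis).
for lam, v in zip(evals, evecs.T):
    assert np.allclose(A @ v, lam * v)
assert np.allclose(evecs.T @ evecs, np.eye(3))
```

Within the degenerate $\lambda = 2$ eigenspace `eigh` is free to return any orthonormal pair, which mirrors the freedom in the choice of $v_2$ and $v_3$ above.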

3.10 The complex spectral theorem

A fundamental problem of linear algebra is to find the conditions under which a given matrix $A$ can be diagonalised, i.e., whether there exists an invertible coordinate transformation for which $A$ becomes diagonal. For our purposes, we need only accept that a sufficient condition is that $A$ be Hermitian (see next section). If you don't like to accept such things, let $A$ be an operator on a complex inner-product space $V$. It is relatively easy to show that $V$ has an orthonormal basis consisting of eigenvectors of $A$ if and only if $A$ is normal. Recall that normal matrices satisfy $AA^\dagger = A^\dagger A$: Hermitian and unitary matrices are special cases of normal matrices.

Here is an outline sketch of the proof. Let $|v_1\rangle, \dots, |v_n\rangle$ be the eigenvectors of $A$ and $\lambda_1, \dots, \lambda_n$ the corresponding eigenvalues. First suppose that $V$ has an orthonormal basis consisting of the eigenvectors $|v_i\rangle$. In this basis the matrix representing $A$ is diagonal with matrix elements $A_{ij} = \langle v_i|A|v_j\rangle$. The matrix representing $A^\dagger$ has elements $(A^\dagger)_{ij} = \langle v_i|A^\dagger|v_j\rangle = \langle v_j|A|v_i\rangle^\star$, which is also diagonal. Diagonal matrices commute. Therefore $AA^\dagger = A^\dagger A$: $A$ is normal.

Now the converse: we need to show that if $A$ is normal then $V$ has an orthonormal basis consisting of eigenvectors of $A$. Any matrix can be reduced to upper triangular form by applying an appropriate set of elementary row operations (see §3.6). In particular, there is an orthonormal basis $|e_1\rangle, \dots, |e_n\rangle$ in which the matrix representing $A$ becomes
$$\langle e_i|A|e_j\rangle = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ 0 & a_{22} & \cdots & a_{2n} \\ & & \ddots & \vdots \\ 0 & \cdots & 0 & a_{nn} \end{pmatrix}. \tag{3.53}$$
From (3.53) we see that
$$\|A|e_1\rangle\|^2 = |a_{11}|^2 \quad\text{and}\quad \|A^\dagger|e_1\rangle\|^2 = |a_{11}|^2 + |a_{12}|^2 + \cdots + |a_{1n}|^2. \tag{3.54}$$
But $A$ is normal, so $\|A|e_1\rangle\| = \|A^\dagger|e_1\rangle\|$. This means that all entries in the first row of the matrix on the RHS of (3.53) vanish, except possibly for the first. Similarly, from equation (3.53) we have that $\|A|e_2\rangle\|^2 = |a_{12}|^2 + |a_{22}|^2 = |a_{22}|^2$ and $\|A^\dagger|e_2\rangle\|^2 = |a_{22}|^2 + |a_{23}|^2 + \cdots + |a_{2n}|^2$, using the result that $a_{12} = 0$ from the previous paragraph. Because $A$ is normal we have that $\|A|e_2\rangle\| = \|A^\dagger|e_2\rangle\|$: the whole second row must equal zero, except possibly for the diagonal element $a_{22}$. Repeating this procedure for $A|e_3\rangle$ to $A|e_n\rangle$ shows that all of the off-diagonal elements in (3.53) must be zero. Therefore the orthonormal basis vectors $|e_1\rangle, \dots, |e_n\rangle$ are eigenvectors of $A$.

The corresponding result for real vector spaces is harder to prove. It states that a real vector space $V$ has an orthonormal basis consisting of eigenvectors of $A$ if and only if $A$ is symmetric (which is equivalent to Hermitian for the special case of a real vector space).

3.11 Hermitian operators

Hermitian operators are particularly important. Here are two central results. If $A$ is Hermitian then:
1. its eigenvalues are real;
2. its eigenvectors are orthogonal and therefore form a basis of $V$.

Proof of 1: the eigenvalues of a Hermitian operator are real. Let $|v\rangle$ be an eigenvector of $A$ with eigenvalue $\lambda$. Then the eigenvalue equation (3.47) and its dual are
$$A|v\rangle = \lambda|v\rangle,\qquad \langle v|A^\dagger = \lambda^\star\langle v|. \tag{3.55}$$
As $A$ is Hermitian we have $A^\dagger = A$. Operate on the first of (3.55) with $\langle v|$ and use the second to operate on $|v\rangle$. Subtracting, the result is
$$0 = (\lambda - \lambda^\star)\langle v|v\rangle. \tag{3.57}$$
But $\langle v|v\rangle > 0$. Therefore $\lambda = \lambda^\star$: the eigenvalues are real.

Proof of 2: the eigenvectors of a Hermitian operator are orthogonal. Let $|v_1\rangle$ and $|v_2\rangle$ be two eigenvectors with corresponding eigenvalues $\lambda_1$, $\lambda_2$. The eigenvalue equations are
$$A|v_1\rangle = \lambda_1|v_1\rangle,\qquad A|v_2\rangle = \lambda_2|v_2\rangle. \tag{3.58}$$
For simplicity, let us first consider the case $\lambda_1 \ne \lambda_2$. Operating on the first of (3.58) with $\langle v_2|$ and on the second with $\langle v_1|$ results in
$$\langle v_2|A|v_1\rangle = \lambda_1\langle v_2|v_1\rangle,\qquad \langle v_1|A|v_2\rangle = \lambda_2\langle v_1|v_2\rangle. \tag{3.59}$$
Taking the complex conjugate of the second of these gives
$$\langle v_1|A|v_2\rangle^\star = \lambda_2\langle v_2|v_1\rangle, \tag{3.60}$$
since $\lambda_2^\star = \lambda_2$ and $\langle v_1|A|v_2\rangle^\star = \langle v_2|A^\dagger|v_1\rangle = \langle v_2|A|v_1\rangle$. Now subtract (3.60) from the first of (3.59):
$$0 = (\lambda_1 - \lambda_2)\langle v_2|v_1\rangle. \tag{3.61}$$
Under our assumption that $\lambda_1 \ne \lambda_2$ we must have $\langle v_2|v_1\rangle = 0$: the eigenvectors are orthogonal. If all $n$ eigenvalues are distinct, then it is clear that the eigenvectors span $V$ and therefore form a basis. If $\lambda_1 = \lambda_2$ then we can use the Gram–Schmidt procedure to construct an orthonormal pair from $|v_1\rangle$ and $|v_2\rangle$: the complex spectral theorem of §3.10 guarantees that a Hermitian operator on an $n$-dimensional space always has $n$ orthogonal eigenvectors, and so there must exist appropriate linear combinations of $|v_1\rangle$ and $|v_2\rangle$ that are orthogonal.

Exercise: Show that (i) the eigenvalues of a unitary operator are complex numbers of unit modulus and (ii) the eigenvectors of a unitary operator are mutually orthogonal.

How to diagonalise a Hermitian operator

Let $A$ be a Hermitian operator with normalised eigenvectors $|v_1\rangle, \dots, |v_n\rangle$. When the matrix representing $A$ is expressed in terms of its eigenbasis then the matrix elements are
$$\langle v_i|A|v_j\rangle = \lambda_i\delta_{ij}, \tag{3.62}$$
where $\lambda_i$ is the eigenvalue corresponding to $|v_i\rangle$. In our standard $|e_1\rangle, \dots, |e_n\rangle$ basis, we have that the matrix elements of $A$ are given by
$$\langle e_i|A|e_j\rangle = \langle e_i|IAI|e_j\rangle = \sum_{kl}\langle e_i|v_k\rangle\langle v_k|A|v_l\rangle\langle v_l|e_j\rangle, \tag{3.63}$$
resolving the identity (3.5) through $I = \sum_k|v_k\rangle\langle v_k| = \sum_l|v_l\rangle\langle v_l|$. Written as a matrix equation, this is
$$A = U\operatorname{diag}(\lambda_1, \dots, \lambda_n)\,U^\dagger, \tag{3.64}$$
where $U_{ji} = \langle e_j|v_i\rangle$, so that the $i^{\rm th}$ column of $U$ expresses $|v_i\rangle$ in terms of the $|e_1\rangle, \dots, |e_n\rangle$ basis.
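The recipe $A = U\operatorname{diag}(\lambda_1,\dots,\lambda_n)U^\dagger$ can be checked directly with numpy; the Hermitian matrix below is made up for illustration:

```python
import numpy as np

# A Hermitian matrix (invented for the demonstration).
A = np.array([[2.0,        1.0 + 1.0j, 0.0 ],
              [1.0 - 1.0j, 3.0,        0.5j],
              [0.0,       -0.5j,       1.0 ]])
assert np.allclose(A, A.conj().T)        # confirm A is Hermitian

evals, U = np.linalg.eigh(A)             # columns of U are the eigenvectors
# U is unitary, and U^dagger A U is diagonal with the (real) eigenvalues.
assert np.allclose(U.conj().T @ U, np.eye(3))
assert np.allclose(U.conj().T @ A @ U, np.diag(evals))
# Reassembling gives A back: A = U diag(lambda) U^dagger.
assert np.allclose(U @ np.diag(evals) @ U.conj().T, A)
# Trace and determinant follow from the eigenvalues.
assert np.isclose(np.trace(A).real, evals.sum())
assert np.isclose(np.linalg.det(A).real, evals.prod())
```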

Exercise: Using $UU^\dagger = U^\dagger U = I$, show the following:
$$U^\dagger AU = \operatorname{diag}(\lambda_1, \dots, \lambda_n),\qquad A^m = U\operatorname{diag}(\lambda_1^m, \dots, \lambda_n^m)\,U^\dagger,\qquad \operatorname{tr}A = \sum_{i=1}^n\lambda_i,\qquad \det A = \prod_{i=1}^n\lambda_i. \tag{3.65}$$

Simultaneous diagonalisation of two Hermitian matrices

Let $A$ and $B$ be two Hermitian operators. There exists a basis in which $A$ and $B$ are both diagonal if and only if $[A,B] = 0$. Some comments before proving this:
1. Because the eigenvectors of Hermitian operators are orthogonal, any basis in which such an operator is diagonal must be an eigenbasis.
2. An equivalent statement is therefore that $A$ and $B$ both have the same eigenvectors if and only if $[A,B] = 0$.
3. In this eigenbasis the only difference between $A$ and $B$ is the values of their diagonal elements, i.e., their eigenvalues.

Proof: We first show that if there is a basis in which $A$ and $B$ are both diagonal then $[A,B] = 0$. This is obvious: diagonal matrices commute.

The converse is that if $[A,B] = 0$ then there is a basis in which both $A$ and $B$ are diagonal. To prove this, note that because $A$ is Hermitian we can find a basis in which $A = \operatorname{diag}(a_1, \dots, a_n)$, where the $a_i$ are the eigenvalues of $A$. In this basis $B$ will be represented by some matrix
$$B = \begin{pmatrix} B_{11} & B_{12} & \cdots & B_{1n} \\ B_{21} & B_{22} & \cdots & B_{2n} \\ \vdots & & & \vdots \\ B_{n1} & B_{n2} & \cdots & B_{nn} \end{pmatrix}. \tag{3.66}$$
The commutator is
$$AB - BA = \begin{pmatrix} 0 & (a_1 - a_2)B_{12} & (a_1 - a_3)B_{13} & \cdots \\ (a_2 - a_1)B_{21} & 0 & (a_2 - a_3)B_{23} & \cdots \\ (a_3 - a_1)B_{31} & (a_3 - a_2)B_{32} & 0 & \cdots \\ \vdots & & & \ddots \end{pmatrix}. \tag{3.67}$$
By assumption $[A,B] = 0$. From the matrix (3.67) we see that this means that $B_{ij} = B_{ji} = 0$ for all indices $i, j$ for which $a_i \ne a_j$: if all of $A$'s eigenvalues are distinct then $B$ must be diagonal.

If some of the $\{a_i\}$ aren't distinct we have just a little more work to do. Take, for example, the case $a_1 = a_2$. Then we have that
$$B = \begin{pmatrix} \bar B & 0 \\ 0 & \bar B_n \end{pmatrix} \quad\text{with}\quad \bar B = \begin{pmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{pmatrix}, \tag{3.68}$$
using $\bar B_n = \operatorname{diag}(B_{33}, \dots, B_{nn})$ to denote the remaining (diagonal) block submatrix of $B$. Now introduce a unitary change-of-basis matrix
$$U = \begin{pmatrix} \bar U & 0 \\ 0 & \bar I_{n-2} \end{pmatrix} \quad\text{with}\quad \bar U = \begin{pmatrix} U_{11} & U_{12} \\ U_{21} & U_{22} \end{pmatrix}. \tag{3.69}$$
Then in this new basis $B$ will be represented by the matrix
$$U^\dagger BU = \begin{pmatrix} \bar U^\dagger\bar B\bar U & 0 \\ 0 & \bar B_n \end{pmatrix}. \tag{3.70}$$
Notice that $U$ changes only the upper-left corner of $B$. We can always choose $\bar U$ to make $\bar U^\dagger\bar B\bar U = \operatorname{diag}(b_1, b_2)$, so that the matrix $U^\dagger BU$ becomes diagonal. The basis change $U$ has no effect on the matrix representing $A$ because $\bar A = \operatorname{diag}(a_1, a_2) = a_1\operatorname{diag}(1, 1)$ is proportional to the identity: we have that
$$U^\dagger AU = \begin{pmatrix} \bar U^\dagger\bar A\bar U & 0 \\ 0 & \bar A_n \end{pmatrix} = \begin{pmatrix} \bar A & 0 \\ 0 & \bar A_n \end{pmatrix} = A. \tag{3.71}$$

3.12 Odds and ends

Quadratic forms

Expressions such as $ax^2 + bxy + cy^2$, or, more generally,
$$\sum_i A_{ii}x_i^2 + 2\sum_{j<i} A_{ij}x_ix_j = \sum_{ij} A_{ij}x_ix_j, \tag{3.72}$$
are known as homogeneous quadratic forms. They can be written in matrix form as $x^TAx$, where $A$ is a symmetric matrix with elements $A_{ij} = A_{ji}$. For example,
$$4x^2 + 6xy + 7y^2 = \begin{pmatrix} x & y \end{pmatrix}\begin{pmatrix} 4 & 3 \\ 3 & 7 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix}.$$

Exercise: Explain why under a certain change of basis $x \to x'$ the quadratic form (3.72) can be expressed as $\lambda_1 x_1'^2 + \cdots + \lambda_n x_n'^2$, where the $\lambda_i$ are the eigenvalues of $A$. What is the relationship between $x$ and $x'$? Do you need to make any assumptions about the elements $A_{ij}$ or $x_i$?

Lorentz transformations

Our definition of scalar product includes a condition that scalar products are positive unless one of the vectors involved is zero. This condition is relaxed in the special theory of relativity, where the scalar product of two four-vectors $x = (ct, x, y, z)$ and $\bar x = (c\bar t, \bar x, \bar y, \bar z)$ is defined to be
$$x\cdot\bar x = c^2t\bar t - x\bar x - y\bar y - z\bar z = x^T\eta\bar x, \tag{3.74}$$
where the metric $\eta = \operatorname{diag}(1, -1, -1, -1)$. The square of the length of a spacetime interval $dx = (c\,dt, dx, dy, dz)$ is then
$$ds^2 = c^2dt^2 - dx^2 - dy^2 - dz^2, \tag{3.75}$$
which can be positive, negative or zero, depending on whether the interval is time-like, space-like or light-like. A Lorentz transformation is a change of basis $x \to x'$, $\bar x \to \bar x'$ that preserves the scalar product (3.74). An example familiar from the first-year course is $x' = \Lambda x$ with
$$\Lambda = \begin{pmatrix} \gamma & -\beta\gamma \\ -\beta\gamma & \gamma \end{pmatrix}$$
acting on the $(ct, x)$ components, where $\beta = v/c$ and $\gamma = 1/\sqrt{1 - \beta^2}$. It is easy to confirm that
$$\Lambda^T\eta\Lambda = \eta. \tag{3.76}$$
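The defining property $\Lambda^T\eta\Lambda = \eta$, and the resulting invariance of the scalar product, can be confirmed numerically for a boost (here extended to the full $4\times4$ form, with the made-up value $\beta = 0.6$):

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])   # Minkowski metric

def boost(beta):
    """Lorentz boost along x with velocity v = beta * c."""
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    L = np.eye(4)
    L[0, 0] = L[1, 1] = gamma
    L[0, 1] = L[1, 0] = -beta * gamma
    return L

L = boost(0.6)
# The defining property of a Lorentz transformation: it preserves eta.
assert np.allclose(L.T @ eta @ L, eta)
# Hence the scalar product x^T eta y is invariant under x -> L x.
x = np.array([2.0, 1.0, 0.5, -0.3])
y = np.array([1.0, -1.0, 0.2, 0.7])
assert np.isclose((L @ x) @ eta @ (L @ y), x @ eta @ y)
```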

Tensor product

Let $V$ and $W$ be $n$- and $m$-dimensional vector spaces respectively. The tensor product $V\otimes W$ is a vector space of dimension $n\times m$. A basis for $V\otimes W$ is $|e^V_i\rangle\otimes|e^W_j\rangle$, where $|e^V_1\rangle, \dots, |e^V_n\rangle$ is a basis of $V$ and $|e^W_1\rangle, \dots, |e^W_m\rangle$ is a basis of $W$. For any $|v_i\rangle \in V$, $|w_i\rangle \in W$ and scalar $\alpha$, the tensor product satisfies
$$(|v_1\rangle + |v_2\rangle)\otimes|w\rangle = |v_1\rangle\otimes|w\rangle + |v_2\rangle\otimes|w\rangle,$$
$$|v\rangle\otimes(|w_1\rangle + |w_2\rangle) = |v\rangle\otimes|w_1\rangle + |v\rangle\otimes|w_2\rangle,$$
$$(\alpha|v\rangle)\otimes|w\rangle = |v\rangle\otimes(\alpha|w\rangle) = \alpha(|v\rangle\otimes|w\rangle). \tag{3.77}$$
Given linear operators $A : V \to V$ and $B : W \to W$ we can define a new linear operator $A\otimes B : V\otimes W \to V\otimes W$ through
$$(A\otimes B)(|v\rangle\otimes|w\rangle) = (A|v\rangle)\otimes(B|w\rangle). \tag{3.78}$$
If $V$ and $W$ are inner-product spaces then there is a natural inner product between two vectors $|v\rangle\otimes|w\rangle$ and $|v'\rangle\otimes|w'\rangle \in V\otimes W$:
$$\big(|v\rangle\otimes|w\rangle,\ |v'\rangle\otimes|w'\rangle\big) = \langle v|v'\rangle\langle w|w'\rangle. \tag{3.79}$$
Further reading: see RHB §8, DR §II.

Jordan normal form

Not all matrices can be diagonalised, but it turns out that for every matrix $A$ there is an invertible transformation for which $A$ takes on the block-diagonal form
$$A = \begin{pmatrix} A_1 & & & \\ & A_2 & & \\ & & \ddots & \\ & & & A_N \end{pmatrix}, \tag{3.80}$$
where the blocks are
$$A_i = \begin{pmatrix} \lambda_i & 1 & & \\ & \lambda_i & 1 & \\ & & \ddots & 1 \\ & & & \lambda_i \end{pmatrix}, \tag{3.81}$$
having ones immediately above the main diagonal. Each $\lambda_i$ here is an eigenvalue of $A$ with multiplicity given by the number of diagonal elements in the corresponding $A_i$.

Singular value decomposition

A generalisation of the eigenvector decomposition we have been applying to square matrices holds for any $m\times n$ matrix $A$: any such $A$ can be factorized as
$$A = UDV^\dagger, \tag{3.82}$$
where $U$ is an $m\times m$ unitary matrix, $D$ is an $m\times n$ diagonal matrix consisting of the so-called singular values of $A$, and $V$ is an $n\times n$ unitary matrix.


More information

MATH 423 Linear Algebra II Lecture 33: Diagonalization of normal operators.

MATH 423 Linear Algebra II Lecture 33: Diagonalization of normal operators. MATH 423 Linear Algebra II Lecture 33: Diagonalization of normal operators. Adjoint operator and adjoint matrix Given a linear operator L on an inner product space V, the adjoint of L is a transformation

More information

Math Linear Algebra II. 1. Inner Products and Norms

Math Linear Algebra II. 1. Inner Products and Norms Math 342 - Linear Algebra II Notes 1. Inner Products and Norms One knows from a basic introduction to vectors in R n Math 254 at OSU) that the length of a vector x = x 1 x 2... x n ) T R n, denoted x,

More information

4. Determinants.

4. Determinants. 4. Determinants 4.1. Determinants; Cofactor Expansion Determinants of 2 2 and 3 3 Matrices 2 2 determinant 4.1. Determinants; Cofactor Expansion Determinants of 2 2 and 3 3 Matrices 3 3 determinant 4.1.

More information

Lecture 1 Review: Linear models have the form (in matrix notation) Y = Xβ + ε,

Lecture 1 Review: Linear models have the form (in matrix notation) Y = Xβ + ε, 2. REVIEW OF LINEAR ALGEBRA 1 Lecture 1 Review: Linear models have the form (in matrix notation) Y = Xβ + ε, where Y n 1 response vector and X n p is the model matrix (or design matrix ) with one row for

More information

33AH, WINTER 2018: STUDY GUIDE FOR FINAL EXAM

33AH, WINTER 2018: STUDY GUIDE FOR FINAL EXAM 33AH, WINTER 2018: STUDY GUIDE FOR FINAL EXAM (UPDATED MARCH 17, 2018) The final exam will be cumulative, with a bit more weight on more recent material. This outline covers the what we ve done since the

More information

Introduction to Matrix Algebra

Introduction to Matrix Algebra Introduction to Matrix Algebra August 18, 2010 1 Vectors 1.1 Notations A p-dimensional vector is p numbers put together. Written as x 1 x =. x p. When p = 1, this represents a point in the line. When p

More information

The following definition is fundamental.

The following definition is fundamental. 1. Some Basics from Linear Algebra With these notes, I will try and clarify certain topics that I only quickly mention in class. First and foremost, I will assume that you are familiar with many basic

More information

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 1 x 2. x n 8 (4) 3 4 2

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 1 x 2. x n 8 (4) 3 4 2 MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS SYSTEMS OF EQUATIONS AND MATRICES Representation of a linear system The general system of m equations in n unknowns can be written a x + a 2 x 2 + + a n x n b a

More information

Matrices A brief introduction

Matrices A brief introduction Matrices A brief introduction Basilio Bona DAUIN Politecnico di Torino Semester 1, 2014-15 B. Bona (DAUIN) Matrices Semester 1, 2014-15 1 / 41 Definitions Definition A matrix is a set of N real or complex

More information

Matrices and Linear Algebra

Matrices and Linear Algebra Contents Quantitative methods for Economics and Business University of Ferrara Academic year 2017-2018 Contents 1 Basics 2 3 4 5 Contents 1 Basics 2 3 4 5 Contents 1 Basics 2 3 4 5 Contents 1 Basics 2

More information

Matrix Representation

Matrix Representation Matrix Representation Matrix Rep. Same basics as introduced already. Convenient method of working with vectors. Superposition Complete set of vectors can be used to express any other vector. Complete set

More information

Queens College, CUNY, Department of Computer Science Numerical Methods CSCI 361 / 761 Spring 2018 Instructor: Dr. Sateesh Mane.

Queens College, CUNY, Department of Computer Science Numerical Methods CSCI 361 / 761 Spring 2018 Instructor: Dr. Sateesh Mane. Queens College, CUNY, Department of Computer Science Numerical Methods CSCI 361 / 761 Spring 2018 Instructor: Dr. Sateesh Mane c Sateesh R. Mane 2018 8 Lecture 8 8.1 Matrices July 22, 2018 We shall study

More information

MATH 235. Final ANSWERS May 5, 2015

MATH 235. Final ANSWERS May 5, 2015 MATH 235 Final ANSWERS May 5, 25. ( points) Fix positive integers m, n and consider the vector space V of all m n matrices with entries in the real numbers R. (a) Find the dimension of V and prove your

More information

Math Bootcamp An p-dimensional vector is p numbers put together. Written as. x 1 x =. x p

Math Bootcamp An p-dimensional vector is p numbers put together. Written as. x 1 x =. x p Math Bootcamp 2012 1 Review of matrix algebra 1.1 Vectors and rules of operations An p-dimensional vector is p numbers put together. Written as x 1 x =. x p. When p = 1, this represents a point in the

More information

Linear algebra and applications to graphs Part 1

Linear algebra and applications to graphs Part 1 Linear algebra and applications to graphs Part 1 Written up by Mikhail Belkin and Moon Duchin Instructor: Laszlo Babai June 17, 2001 1 Basic Linear Algebra Exercise 1.1 Let V and W be linear subspaces

More information

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra.

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra. DS-GA 1002 Lecture notes 0 Fall 2016 Linear Algebra These notes provide a review of basic concepts in linear algebra. 1 Vector spaces You are no doubt familiar with vectors in R 2 or R 3, i.e. [ ] 1.1

More information

SAMPLE OF THE STUDY MATERIAL PART OF CHAPTER 1 Introduction to Linear Algebra

SAMPLE OF THE STUDY MATERIAL PART OF CHAPTER 1 Introduction to Linear Algebra 1.1. Introduction SAMPLE OF THE STUDY MATERIAL PART OF CHAPTER 1 Introduction to Linear algebra is a specific branch of mathematics dealing with the study of vectors, vector spaces with functions that

More information

Eigenvectors and Hermitian Operators

Eigenvectors and Hermitian Operators 7 71 Eigenvalues and Eigenvectors Basic Definitions Let L be a linear operator on some given vector space V A scalar λ and a nonzero vector v are referred to, respectively, as an eigenvalue and corresponding

More information

2. Linear algebra. matrices and vectors. linear equations. range and nullspace of matrices. function of vectors, gradient and Hessian

2. Linear algebra. matrices and vectors. linear equations. range and nullspace of matrices. function of vectors, gradient and Hessian FE661 - Statistical Methods for Financial Engineering 2. Linear algebra Jitkomut Songsiri matrices and vectors linear equations range and nullspace of matrices function of vectors, gradient and Hessian

More information

Matrices A brief introduction

Matrices A brief introduction Matrices A brief introduction Basilio Bona DAUIN Politecnico di Torino Semester 1, 2014-15 B. Bona (DAUIN) Matrices Semester 1, 2014-15 1 / 44 Definitions Definition A matrix is a set of N real or complex

More information

Symmetric and anti symmetric matrices

Symmetric and anti symmetric matrices Symmetric and anti symmetric matrices In linear algebra, a symmetric matrix is a square matrix that is equal to its transpose. Formally, matrix A is symmetric if. A = A Because equal matrices have equal

More information

CS 246 Review of Linear Algebra 01/17/19

CS 246 Review of Linear Algebra 01/17/19 1 Linear algebra In this section we will discuss vectors and matrices. We denote the (i, j)th entry of a matrix A as A ij, and the ith entry of a vector as v i. 1.1 Vectors and vector operations A vector

More information

3 (Maths) Linear Algebra

3 (Maths) Linear Algebra 3 (Maths) Linear Algebra References: Simon and Blume, chapters 6 to 11, 16 and 23; Pemberton and Rau, chapters 11 to 13 and 25; Sundaram, sections 1.3 and 1.5. The methods and concepts of linear algebra

More information

Linear Algebra using Dirac Notation: Pt. 2

Linear Algebra using Dirac Notation: Pt. 2 Linear Algebra using Dirac Notation: Pt. 2 PHYS 476Q - Southern Illinois University February 6, 2018 PHYS 476Q - Southern Illinois University Linear Algebra using Dirac Notation: Pt. 2 February 6, 2018

More information

Linear Algebra Review. Vectors

Linear Algebra Review. Vectors Linear Algebra Review 9/4/7 Linear Algebra Review By Tim K. Marks UCSD Borrows heavily from: Jana Kosecka http://cs.gmu.edu/~kosecka/cs682.html Virginia de Sa (UCSD) Cogsci 8F Linear Algebra review Vectors

More information

1 = I I I II 1 1 II 2 = normalization constant III 1 1 III 2 2 III 3 = normalization constant...

1 = I I I II 1 1 II 2 = normalization constant III 1 1 III 2 2 III 3 = normalization constant... Here is a review of some (but not all) of the topics you should know for the midterm. These are things I think are important to know. I haven t seen the test, so there are probably some things on it that

More information

1 Linear Algebra Problems

1 Linear Algebra Problems Linear Algebra Problems. Let A be the conjugate transpose of the complex matrix A; i.e., A = A t : A is said to be Hermitian if A = A; real symmetric if A is real and A t = A; skew-hermitian if A = A and

More information

j=1 x j p, if 1 p <, x i ξ : x i < ξ} 0 as p.

j=1 x j p, if 1 p <, x i ξ : x i < ξ} 0 as p. LINEAR ALGEBRA Fall 203 The final exam Almost all of the problems solved Exercise Let (V, ) be a normed vector space. Prove x y x y for all x, y V. Everybody knows how to do this! Exercise 2 If V is a

More information

SAMPLE OF THE STUDY MATERIAL PART OF CHAPTER 1 Introduction to Linear Algebra

SAMPLE OF THE STUDY MATERIAL PART OF CHAPTER 1 Introduction to Linear Algebra SAMPLE OF THE STUDY MATERIAL PART OF CHAPTER 1 Introduction to 1.1. Introduction Linear algebra is a specific branch of mathematics dealing with the study of vectors, vector spaces with functions that

More information

Throughout these notes we assume V, W are finite dimensional inner product spaces over C.

Throughout these notes we assume V, W are finite dimensional inner product spaces over C. Math 342 - Linear Algebra II Notes Throughout these notes we assume V, W are finite dimensional inner product spaces over C 1 Upper Triangular Representation Proposition: Let T L(V ) There exists an orthonormal

More information

Duke University, Department of Electrical and Computer Engineering Optimization for Scientists and Engineers c Alex Bronstein, 2014

Duke University, Department of Electrical and Computer Engineering Optimization for Scientists and Engineers c Alex Bronstein, 2014 Duke University, Department of Electrical and Computer Engineering Optimization for Scientists and Engineers c Alex Bronstein, 2014 Linear Algebra A Brief Reminder Purpose. The purpose of this document

More information

OHSx XM511 Linear Algebra: Solutions to Online True/False Exercises

OHSx XM511 Linear Algebra: Solutions to Online True/False Exercises This document gives the solutions to all of the online exercises for OHSx XM511. The section ( ) numbers refer to the textbook. TYPE I are True/False. Answers are in square brackets [. Lecture 02 ( 1.1)

More information

Real symmetric matrices/1. 1 Eigenvalues and eigenvectors

Real symmetric matrices/1. 1 Eigenvalues and eigenvectors Real symmetric matrices 1 Eigenvalues and eigenvectors We use the convention that vectors are row vectors and matrices act on the right. Let A be a square matrix with entries in a field F; suppose that

More information

Algebra Workshops 10 and 11

Algebra Workshops 10 and 11 Algebra Workshops 1 and 11 Suggestion: For Workshop 1 please do questions 2,3 and 14. For the other questions, it s best to wait till the material is covered in lectures. Bilinear and Quadratic Forms on

More information

Equality: Two matrices A and B are equal, i.e., A = B if A and B have the same order and the entries of A and B are the same.

Equality: Two matrices A and B are equal, i.e., A = B if A and B have the same order and the entries of A and B are the same. Introduction Matrix Operations Matrix: An m n matrix A is an m-by-n array of scalars from a field (for example real numbers) of the form a a a n a a a n A a m a m a mn The order (or size) of A is m n (read

More information

Linear Algebra Primer

Linear Algebra Primer Linear Algebra Primer David Doria daviddoria@gmail.com Wednesday 3 rd December, 2008 Contents Why is it called Linear Algebra? 4 2 What is a Matrix? 4 2. Input and Output.....................................

More information

Linear Algebra review Powers of a diagonalizable matrix Spectral decomposition

Linear Algebra review Powers of a diagonalizable matrix Spectral decomposition Linear Algebra review Powers of a diagonalizable matrix Spectral decomposition Prof. Tesler Math 283 Fall 2016 Also see the separate version of this with Matlab and R commands. Prof. Tesler Diagonalizing

More information

Chapter 5 Eigenvalues and Eigenvectors

Chapter 5 Eigenvalues and Eigenvectors Chapter 5 Eigenvalues and Eigenvectors Outline 5.1 Eigenvalues and Eigenvectors 5.2 Diagonalization 5.3 Complex Vector Spaces 2 5.1 Eigenvalues and Eigenvectors Eigenvalue and Eigenvector If A is a n n

More information

1 Last time: least-squares problems

1 Last time: least-squares problems MATH Linear algebra (Fall 07) Lecture Last time: least-squares problems Definition. If A is an m n matrix and b R m, then a least-squares solution to the linear system Ax = b is a vector x R n such that

More information

Linear Algebra. Workbook

Linear Algebra. Workbook Linear Algebra Workbook Paul Yiu Department of Mathematics Florida Atlantic University Last Update: November 21 Student: Fall 2011 Checklist Name: A B C D E F F G H I J 1 2 3 4 5 6 7 8 9 10 xxx xxx xxx

More information

Properties of Linear Transformations from R n to R m

Properties of Linear Transformations from R n to R m Properties of Linear Transformations from R n to R m MATH 322, Linear Algebra I J. Robert Buchanan Department of Mathematics Spring 2015 Topic Overview Relationship between the properties of a matrix transformation

More information

Differential Topology Final Exam With Solutions

Differential Topology Final Exam With Solutions Differential Topology Final Exam With Solutions Instructor: W. D. Gillam Date: Friday, May 20, 2016, 13:00 (1) Let X be a subset of R n, Y a subset of R m. Give the definitions of... (a) smooth function

More information

(a) If A is a 3 by 4 matrix, what does this tell us about its nullspace? Solution: dim N(A) 1, since rank(a) 3. Ax =

(a) If A is a 3 by 4 matrix, what does this tell us about its nullspace? Solution: dim N(A) 1, since rank(a) 3. Ax = . (5 points) (a) If A is a 3 by 4 matrix, what does this tell us about its nullspace? dim N(A), since rank(a) 3. (b) If we also know that Ax = has no solution, what do we know about the rank of A? C(A)

More information

Linear Algebra March 16, 2019

Linear Algebra March 16, 2019 Linear Algebra March 16, 2019 2 Contents 0.1 Notation................................ 4 1 Systems of linear equations, and matrices 5 1.1 Systems of linear equations..................... 5 1.2 Augmented

More information

LINEAR ALGEBRA REVIEW

LINEAR ALGEBRA REVIEW LINEAR ALGEBRA REVIEW JC Stuff you should know for the exam. 1. Basics on vector spaces (1) F n is the set of all n-tuples (a 1,... a n ) with a i F. It forms a VS with the operations of + and scalar multiplication

More information

SYLLABUS. 1 Linear maps and matrices

SYLLABUS. 1 Linear maps and matrices Dr. K. Bellová Mathematics 2 (10-PHY-BIPMA2) SYLLABUS 1 Linear maps and matrices Operations with linear maps. Prop 1.1.1: 1) sum, scalar multiple, composition of linear maps are linear maps; 2) L(U, V

More information

Econ Slides from Lecture 7

Econ Slides from Lecture 7 Econ 205 Sobel Econ 205 - Slides from Lecture 7 Joel Sobel August 31, 2010 Linear Algebra: Main Theory A linear combination of a collection of vectors {x 1,..., x k } is a vector of the form k λ ix i for

More information

Final Review Sheet. B = (1, 1 + 3x, 1 + x 2 ) then 2 + 3x + 6x 2

Final Review Sheet. B = (1, 1 + 3x, 1 + x 2 ) then 2 + 3x + 6x 2 Final Review Sheet The final will cover Sections Chapters 1,2,3 and 4, as well as sections 5.1-5.4, 6.1-6.2 and 7.1-7.3 from chapters 5,6 and 7. This is essentially all material covered this term. Watch

More information

Math 291-2: Lecture Notes Northwestern University, Winter 2016

Math 291-2: Lecture Notes Northwestern University, Winter 2016 Math 291-2: Lecture Notes Northwestern University, Winter 2016 Written by Santiago Cañez These are lecture notes for Math 291-2, the second quarter of MENU: Intensive Linear Algebra and Multivariable Calculus,

More information

CS 143 Linear Algebra Review

CS 143 Linear Algebra Review CS 143 Linear Algebra Review Stefan Roth September 29, 2003 Introductory Remarks This review does not aim at mathematical rigor very much, but instead at ease of understanding and conciseness. Please see

More information

Problem Set (T) If A is an m n matrix, B is an n p matrix and D is a p s matrix, then show

Problem Set (T) If A is an m n matrix, B is an n p matrix and D is a p s matrix, then show MTH 0: Linear Algebra Department of Mathematics and Statistics Indian Institute of Technology - Kanpur Problem Set Problems marked (T) are for discussions in Tutorial sessions (T) If A is an m n matrix,

More information

Linear Algebra Primer

Linear Algebra Primer Introduction Linear Algebra Primer Daniel S. Stutts, Ph.D. Original Edition: 2/99 Current Edition: 4//4 This primer was written to provide a brief overview of the main concepts and methods in elementary

More information

SUMMARY OF MATH 1600

SUMMARY OF MATH 1600 SUMMARY OF MATH 1600 Note: The following list is intended as a study guide for the final exam. It is a continuation of the study guide for the midterm. It does not claim to be a comprehensive list. You

More information

Linear Algebra Review

Linear Algebra Review Chapter 1 Linear Algebra Review It is assumed that you have had a course in linear algebra, and are familiar with matrix multiplication, eigenvectors, etc. I will review some of these terms here, but quite

More information

MA 1B ANALYTIC - HOMEWORK SET 7 SOLUTIONS

MA 1B ANALYTIC - HOMEWORK SET 7 SOLUTIONS MA 1B ANALYTIC - HOMEWORK SET 7 SOLUTIONS 1. (7 pts)[apostol IV.8., 13, 14] (.) Let A be an n n matrix with characteristic polynomial f(λ). Prove (by induction) that the coefficient of λ n 1 in f(λ) is

More information

Properties of Matrices and Operations on Matrices

Properties of Matrices and Operations on Matrices Properties of Matrices and Operations on Matrices A common data structure for statistical analysis is a rectangular array or matris. Rows represent individual observational units, or just observations,

More information

The Singular Value Decomposition

The Singular Value Decomposition The Singular Value Decomposition Philippe B. Laval KSU Fall 2015 Philippe B. Laval (KSU) SVD Fall 2015 1 / 13 Review of Key Concepts We review some key definitions and results about matrices that will

More information

Linear Algebra. Matrices Operations. Consider, for example, a system of equations such as x + 2y z + 4w = 0, 3x 4y + 2z 6w = 0, x 3y 2z + w = 0.

Linear Algebra. Matrices Operations. Consider, for example, a system of equations such as x + 2y z + 4w = 0, 3x 4y + 2z 6w = 0, x 3y 2z + w = 0. Matrices Operations Linear Algebra Consider, for example, a system of equations such as x + 2y z + 4w = 0, 3x 4y + 2z 6w = 0, x 3y 2z + w = 0 The rectangular array 1 2 1 4 3 4 2 6 1 3 2 1 in which the

More information

Chapter 1. Matrix Calculus

Chapter 1. Matrix Calculus Chapter 1 Matrix Calculus 11 Definitions and Notation We assume that the reader is familiar with some basic terms in linear algebra such as vector spaces, linearly dependent vectors, matrix addition and

More information

A matrix is a rectangular array of. objects arranged in rows and columns. The objects are called the entries. is called the size of the matrix, and

A matrix is a rectangular array of. objects arranged in rows and columns. The objects are called the entries. is called the size of the matrix, and Section 5.5. Matrices and Vectors A matrix is a rectangular array of objects arranged in rows and columns. The objects are called the entries. A matrix with m rows and n columns is called an m n matrix.

More information

Chapter 4 - MATRIX ALGEBRA. ... a 2j... a 2n. a i1 a i2... a ij... a in

Chapter 4 - MATRIX ALGEBRA. ... a 2j... a 2n. a i1 a i2... a ij... a in Chapter 4 - MATRIX ALGEBRA 4.1. Matrix Operations A a 11 a 12... a 1j... a 1n a 21. a 22.... a 2j... a 2n. a i1 a i2... a ij... a in... a m1 a m2... a mj... a mn The entry in the ith row and the jth column

More information

GQE ALGEBRA PROBLEMS

GQE ALGEBRA PROBLEMS GQE ALGEBRA PROBLEMS JAKOB STREIPEL Contents. Eigenthings 2. Norms, Inner Products, Orthogonality, and Such 6 3. Determinants, Inverses, and Linear (In)dependence 4. (Invariant) Subspaces 3 Throughout

More information

Matrix Factorization and Analysis

Matrix Factorization and Analysis Chapter 7 Matrix Factorization and Analysis Matrix factorizations are an important part of the practice and analysis of signal processing. They are at the heart of many signal-processing algorithms. Their

More information

A matrix is a rectangular array of. objects arranged in rows and columns. The objects are called the entries. is called the size of the matrix, and

A matrix is a rectangular array of. objects arranged in rows and columns. The objects are called the entries. is called the size of the matrix, and Section 5.5. Matrices and Vectors A matrix is a rectangular array of objects arranged in rows and columns. The objects are called the entries. A matrix with m rows and n columns is called an m n matrix.

More information

G1110 & 852G1 Numerical Linear Algebra

G1110 & 852G1 Numerical Linear Algebra The University of Sussex Department of Mathematics G & 85G Numerical Linear Algebra Lecture Notes Autumn Term Kerstin Hesse (w aw S w a w w (w aw H(wa = (w aw + w Figure : Geometric explanation of the

More information

ELE/MCE 503 Linear Algebra Facts Fall 2018

ELE/MCE 503 Linear Algebra Facts Fall 2018 ELE/MCE 503 Linear Algebra Facts Fall 2018 Fact N.1 A set of vectors is linearly independent if and only if none of the vectors in the set can be written as a linear combination of the others. Fact N.2

More information

18.06 Problem Set 8 - Solutions Due Wednesday, 14 November 2007 at 4 pm in

18.06 Problem Set 8 - Solutions Due Wednesday, 14 November 2007 at 4 pm in 806 Problem Set 8 - Solutions Due Wednesday, 4 November 2007 at 4 pm in 2-06 08 03 Problem : 205+5+5+5 Consider the matrix A 02 07 a Check that A is a positive Markov matrix, and find its steady state

More information

22.3. Repeated Eigenvalues and Symmetric Matrices. Introduction. Prerequisites. Learning Outcomes

22.3. Repeated Eigenvalues and Symmetric Matrices. Introduction. Prerequisites. Learning Outcomes Repeated Eigenvalues and Symmetric Matrices. Introduction In this Section we further develop the theory of eigenvalues and eigenvectors in two distinct directions. Firstly we look at matrices where one

More information

5.6. PSEUDOINVERSES 101. A H w.

5.6. PSEUDOINVERSES 101. A H w. 5.6. PSEUDOINVERSES 0 Corollary 5.6.4. If A is a matrix such that A H A is invertible, then the least-squares solution to Av = w is v = A H A ) A H w. The matrix A H A ) A H is the left inverse of A and

More information

1 9/5 Matrices, vectors, and their applications

1 9/5 Matrices, vectors, and their applications 1 9/5 Matrices, vectors, and their applications Algebra: study of objects and operations on them. Linear algebra: object: matrices and vectors. operations: addition, multiplication etc. Algorithms/Geometric

More information

Eigenvalues and Eigenvectors

Eigenvalues and Eigenvectors /88 Chia-Ping Chen Department of Computer Science and Engineering National Sun Yat-sen University Linear Algebra Eigenvalue Problem /88 Eigenvalue Equation By definition, the eigenvalue equation for matrix

More information

Assignment 11 (C + C ) = (C + C ) = (C + C) i(c C ) ] = i(c C) (AB) = (AB) = B A = BA 0 = [A, B] = [A, B] = (AB BA) = (AB) AB

Assignment 11 (C + C ) = (C + C ) = (C + C) i(c C ) ] = i(c C) (AB) = (AB) = B A = BA 0 = [A, B] = [A, B] = (AB BA) = (AB) AB Arfken 3.4.6 Matrix C is not Hermition. But which is Hermitian. Likewise, Assignment 11 (C + C ) = (C + C ) = (C + C) [ i(c C ) ] = i(c C ) = i(c C) = i ( C C ) Arfken 3.4.9 The matrices A and B are both

More information

Topics in linear algebra

Topics in linear algebra Chapter 6 Topics in linear algebra 6.1 Change of basis I want to remind you of one of the basic ideas in linear algebra: change of basis. Let F be a field, V and W be finite dimensional vector spaces over

More information

MATHEMATICS 217 NOTES

MATHEMATICS 217 NOTES MATHEMATICS 27 NOTES PART I THE JORDAN CANONICAL FORM The characteristic polynomial of an n n matrix A is the polynomial χ A (λ) = det(λi A), a monic polynomial of degree n; a monic polynomial in the variable

More information

Trace Class Operators and Lidskii s Theorem

Trace Class Operators and Lidskii s Theorem Trace Class Operators and Lidskii s Theorem Tom Phelan Semester 2 2009 1 Introduction The purpose of this paper is to provide the reader with a self-contained derivation of the celebrated Lidskii Trace

More information

Lecture 10: Eigenvectors and eigenvalues (Numerical Recipes, Chapter 11)

Lecture 10: Eigenvectors and eigenvalues (Numerical Recipes, Chapter 11) Lecture 1: Eigenvectors and eigenvalues (Numerical Recipes, Chapter 11) The eigenvalue problem, Ax= λ x, occurs in many, many contexts: classical mechanics, quantum mechanics, optics 22 Eigenvectors and

More information

Fundamentals of Engineering Analysis (650163)

Fundamentals of Engineering Analysis (650163) Philadelphia University Faculty of Engineering Communications and Electronics Engineering Fundamentals of Engineering Analysis (6563) Part Dr. Omar R Daoud Matrices: Introduction DEFINITION A matrix is

More information