Benjamin McKay. Abstract Linear Algebra


Benjamin McKay
Abstract Linear Algebra
October 19, 2016

This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

Contents

I Basic Definitions
  1 Vector Spaces
  2 Fields
  3 Direct Sums of Subspaces

II Jordan Normal Form
  4 Jordan Normal Form
  5 Decomposition and Minimal Polynomial
  6 Matrix Functions of a Matrix Variable
  7 Symmetric Functions of Eigenvalues
  8 The Pfaffian

III Factorizations
  9 Dual Spaces
  10 Singular Value Factorization
  11 Factorizations

IV Tensors
  12 Quadratic Forms
  13 Tensors and Indices
  14 Tensors
  15 Exterior Forms

Hints
Bibliography
List of notation
Index


Basic Definitions


Chapter 1
Vector Spaces

The ideas of linear algebra apply more widely, in more abstract spaces than R^n.

To avoid rewriting everything twice, once for real numbers and once for complex numbers, let K stand for either R or C.

Definition 1.1. A vector space V over K is a set (whose elements are called vectors) equipped with two operations, addition (written +) and scaling (written ·), so that

a. Addition laws:
   1. u + v is in V
   2. (u + v) + w = u + (v + w)
   3. u + v = v + u
   for any vectors u, v, w in V.

b. Zero laws:
   1. There is a vector 0 in V so that 0 + v = v for any vector v in V.
   2. For each vector v in V, there is a vector w in V for which v + w = 0.

c. Scaling laws:
   1. av is in V
   2. 1v = v
   3. a(bv) = (ab)v
   4. (a + b)v = av + bv
   5. a(u + v) = au + av
   for any numbers a, b in K and any vectors u and v in V.

Because (u + v) + w = u + (v + w), we never need parentheses in adding up vectors.

K^n is a vector space, with the usual addition and scaling.

The set V of all real-valued functions of a real variable is a vector space: we can add functions, (f + g)(x) = f(x) + g(x), and scale functions, (cf)(x) = c f(x). This example is the main motivation for developing an abstract theory of vector spaces.

Take some region inside R^n, like a box, or a ball, or several boxes and balls glued together. Let V be the set of all real-valued functions of that region. Unlike R^n, which comes equipped with the standard basis, there is no standard basis of V. By this, we mean that there is no collection of functions f_i we know how to write down so that every function f is a unique linear combination of the f_i. Even still, we can generalize a lot of ideas about linear algebra to various spaces like V instead of just R^n. Practically speaking, there are only two types of vector spaces that we ever encounter: R^n (and its subspaces) and the space V of real-valued functions defined on some region in R^n (and its subspaces).

The set K^{p × q} of p × q matrices is a vector space, with the usual matrix addition and scaling.

1.1 If V is a vector space, prove that
a. 0v = 0 for any vector v, and
b. a0 = 0 for any scalar a.

1.2 Let V be the set of real-valued polynomial functions of a real variable. Prove that V is a vector space, with the usual addition and scaling.

1.3 Prove that there is a unique vector w for which v + w = 0. (Let's always call that vector −v.) Prove also that −v = (−1)v.

We will write u − v for u + (−v) from now on.

We define linear relations, linear independence, bases, subspaces, bases of subspaces, and dimension using exactly the same definitions as for R^n.

Remark 1.2. Thinking as much as possible in terms of abstract vector spaces saves a lot of hard work. We will see many reasons why, but the first is that every subspace of any vector space is itself a vector space.
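To make the function-space example concrete, here is a minimal sketch (code added here for illustration, not from the text) of the pointwise operations (f + g)(x) = f(x) + g(x) and (cf)(x) = c f(x):

```python
# Pointwise addition and scaling of real-valued functions of a real variable.

def add(f, g):
    """The function f + g, defined by (f + g)(x) = f(x) + g(x)."""
    return lambda x: f(x) + g(x)

def scale(c, f):
    """The function c*f, defined by (c*f)(x) = c * f(x)."""
    return lambda x: c * f(x)

# Example vectors in this space: f(x) = x^2 and g(x) = 3x.
f = lambda x: x**2
g = lambda x: 3 * x

h = add(f, scale(2, g))   # h(x) = x^2 + 6x
```

The closure returned by `add` is itself a function of a real variable, so the set is closed under both operations, which is the first vector-space law to check.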

Review problems

1.4 Prove that if u + v = u + w then v = w.

1.5 Imagine that the population p_j at year j is governed (at least roughly) by some equation p_{j+1} = a p_j + b p_{j−1} + c p_{j−2}. Prove that for fixed a, b, c, the set of all sequences ..., p_1, p_2, ... which satisfy this law is a vector space.

1.6 Give examples of subsets of the plane
a. invariant under scaling of vectors (sending u to au for any number a), but not under addition of vectors. (In other words, if you scale vectors from your subset, they have to stay inside the subset, but if you add some vectors from your subset, you don't always get a vector from your subset.)
b. invariant under addition but not under scaling or subtraction.
c. invariant under addition and subtraction but not scaling.

1.7 Take positive real numbers and add by the law u ⊕ v = uv and scale by a ⊙ u = u^a. Prove that the positive numbers form a vector space with these funny laws for addition and scaling.

1.8 Which of the following sets are vector spaces (with the usual addition and scalar multiplication for real-valued functions)? Justify your answer.
a. The set of all continuous functions of a real variable.
b. The set of all nonnegative functions of a real variable.
c. The set of all polynomial functions of degree exactly 3.
d. The set of all symmetric matrices A, i.e. A = A^t.

Bases

We define linear combinations, linear relations, linear independence, bases and the span of a set of vectors identically.

1.9 Find bases for the following vector spaces:
a. The set of polynomial functions of degree 3 or less.
b. The set of 3 × 2 matrices.
c. The set of n × n upper triangular matrices.
d. The set of polynomial functions p(x) of degree 3 or less which vanish at the origin x = 0.

Lemma 1.3. The number of elements in a linearly independent set is never more than the number of elements in a spanning set: if v_1, v_2, ..., v_p in V is a linearly independent set of vectors and w_1, w_2, ..., w_q in V is a spanning set of vectors in the same vector space, then p ≤ q. Moreover, p = q just when v_1, v_2, ..., v_p is a basis. In particular, any two bases have the same number of elements.

Proof. If v_1 = 0 then 1 v_1 + 0 v_2 + ... + 0 v_p = 0, a linear relation. So v_1 ≠ 0. We can write v_1 as a linear combination, say v_1 = b_1 w_1 + b_2 w_2 + ... + b_q w_q. Not all of the b_1, b_2, ..., b_q coefficients can vanish, since v_1 ≠ 0. If we relabel the subscripts, we can arrange that b_1 ≠ 0. Solve for w_1:

w_1 = (1/b_1) v_1 − (b_2/b_1) w_2 − ... − (b_q/b_1) w_q.

Therefore we can write each of w_1, w_2, w_3, ..., w_q as a linear combination of v_1, w_2, w_3, ..., w_q. So v_1, w_2, w_3, ..., w_q is a spanning set. Next replace v_1 in this argument by v_2, and then by v_3, etc. We can always replace one of the vectors w_1, w_2, ..., w_q by each of the vectors v_1, v_2, ..., v_p. If p ≥ q, we can keep going like this until we replace all of the vectors w_1, w_2, ..., w_q by the vectors v_1, v_2, ..., v_q: v_1, v_2, ..., v_q is a spanning set. If p = q, we find that v_1, v_2, ..., v_p span, so form a basis. If p > q, v_{q+1} is a linear combination v_{q+1} = b_1 v_1 + b_2 v_2 + ... + b_q v_q, a linear relation, a contradiction.

Definition 1.4. The dimension of a vector space V is n if V has a basis consisting of n vectors. If there is no such value of n, then we say that V has infinite dimension.

Remark 1.5. We can include the possibility that n = 0 by defining K^0 to consist in just a single vector 0, a zero dimensional vector space.

1.10 Let V be the set of polynomials of degree at most p in n variables. Find the dimension of V.

Subspaces

The definition of a subspace is identical to that for R^n.

Let V be the set of real-valued functions of a real variable.
The set P of continuous real-valued functions of a real variable is a subspace of V.

Let V be the set of all infinite sequences of real numbers. We add a sequence x_1, x_2, x_3, ... to a sequence y_1, y_2, y_3, ... to make the sequence x_1 + y_1, x_2 + y_2, x_3 + y_3, .... We scale a sequence by scaling each entry. The set of convergent infinite sequences of real numbers is a subspace of V.

In these last two examples, we see that a large part of analysis is encoded into subspaces of infinite dimensional vector spaces.

1.11 Describe some subspaces of the space of all real-valued functions of a real variable.

Review problems

1.12 Which of the following are subspaces of the space of real-valued functions of a real variable?
a. The set of everywhere positive functions.
b. The set of nowhere positive functions.
c. The set of functions which are positive somewhere.
d. The set of polynomials which vanish at the origin.
e. The set of increasing functions.
f. The set of functions f(x) for which f(−x) = f(x).
g. The set of functions f(x) each of which is bounded from above and below by some constant functions.

1.13 Which of the following are subspaces of the vector space of all 3 × 3 matrices?
a. The invertible matrices.
b. The noninvertible matrices.
c. The matrices with positive entries.
d. The upper triangular matrices.
e. The symmetric matrices.
f. The orthogonal matrices.

1.14 Prove that for any subspace U of a finite dimensional vector space V, there is a basis for V

u_1, u_2, ..., u_p, v_1, v_2, ..., v_q

so that u_1, u_2, ..., u_p form a basis for U.

1.15
a. Let H be an n × n matrix. Let P be the set of all matrices A for which AH = HA. Prove that P is a subspace of the space V of all n × n matrices.
b. Describe this subspace P for H = ( ).

Sums and Direct Sums

Suppose that U, W ⊆ V are two subspaces of a vector space. Then the intersection U ∩ W ⊆ V is also a subspace. Let U + W be the set of all sums u + w for any u in U and w in W. Then U + W ⊆ V is a subspace.

1.16 Prove that U + W is a subspace, and that dim(U + W) = dim U + dim W − dim(U ∩ W).

If U and W are any vector spaces (not necessarily subspaces of any particular vector space V), the direct sum U ⊕ W is the set of all pairs (u, w) for any u in U and w in W. We add pairs and scale pairs in the obvious way: (u_1, w_1) + (u_2, w_2) = (u_1 + u_2, w_1 + w_2) and c(u, w) = (cu, cw). If u_1, u_2, ..., u_p is a basis for U and w_1, w_2, ..., w_q is a basis for W, then

(u_1, 0), (u_2, 0), ..., (u_p, 0), (0, w_1), (0, w_2), ..., (0, w_q)

is a basis for U ⊕ W. In particular, dim(U ⊕ W) = dim U + dim W.
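The direct sum of pairs can be sketched in a few lines (an illustration with U = R^2 and W = R^3 chosen here, not taken from the text); the basis of pairs below has dim U + dim W = 5 elements:

```python
# U ⊕ W as pairs (u, w), with u in R^2 and w in R^3 stored as tuples.

def pair_add(p, q):
    """(u1, w1) + (u2, w2) = (u1 + u2, w1 + w2), componentwise."""
    (u1, w1), (u2, w2) = p, q
    return (tuple(a + b for a, b in zip(u1, u2)),
            tuple(a + b for a, b in zip(w1, w2)))

def pair_scale(c, p):
    """c (u, w) = (cu, cw)."""
    u, w = p
    return (tuple(c * a for a in u), tuple(c * a for a in w))

# Standard basis vector e_i of R^n.
e = lambda i, n: tuple(1 if j == i else 0 for j in range(n))

# Basis of U ⊕ W: (u_i, 0) for basis vectors of U, then (0, w_j) for W.
basis = [(e(i, 2), (0, 0, 0)) for i in range(2)] + \
        [((0, 0), e(j, 3)) for j in range(3)]
```

Counting the list confirms dim(U ⊕ W) = 2 + 3.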

Linear Maps

Definition 1.6. A linear map T between vector spaces U and V is a rule which associates to each vector x from U a vector T x from V so that
a. T(x_0 + x_1) = T x_0 + T x_1
b. T(ax) = a T x
for any vectors x_0, x_1 and x in U and real number a. We will write T : U → V to mean that T is a linear map from U to V.

Let U be the vector space of all real-valued functions of a real variable. Imagine 16 scientists standing one at each kilometer along a riverbank, each measuring the height of the river at the same time. The height at that time is a function h of how far you are along the bank. The 16 measurements of the function, say h(1), h(2), ..., h(16), sit as the entries of a vector in R^16. So we have a map T : U → R^16, given by sampling values of functions h(x) at various points x = 1, x = 2, ..., x = 16:

T h = (h(1), h(2), ..., h(16)).

This T is a linear map.

Any p × q matrix A determines a linear map T : R^q → R^p, by the equation T x = Ax. Conversely, given a linear map T : R^q → R^p, define a p × q matrix A by letting the j-th column of A be T e_j. Then T x = Ax. We say that A is the matrix associated to T. In this way we can identify the space of linear maps T : R^q → R^p with the space of p × q matrices. It is convenient to write T = A to mean that T has associated matrix A.

There is an obvious linear map I : V → V given by I v = v for any vector v in V, called the identity map.

Definition 1.7. If S : U → V and T : V → W are linear maps, then T S : U → W is their composition.
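The sampling map from the river example can be sketched as code (an illustration added here, not from the text), and linearity checked numerically:

```python
# T takes a function h to the vector of its values at x = 1, 2, ..., 16.

def sample(h, points=range(1, 17)):
    """T h = (h(1), h(2), ..., h(16))."""
    return [h(x) for x in points]

# Two sample functions to test linearity: T(f + g) = T f + T g.
f = lambda x: x * x
g = lambda x: 2 * x + 1

Tf, Tg = sample(f), sample(g)
T_sum = sample(lambda x: f(x) + g(x))
```

Comparing `T_sum` with `Tf` plus `Tg` entrywise verifies the addition law; scaling works the same way.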

If U, W ⊆ V are subspaces, then there is an obvious linear map T : U ⊕ W → U + W, T(u, w) = u + w. This map is a bijection just when U ∩ W = {0}, clearly, in which case we use this map to identify U ⊕ W with U + W, and say U + W is a direct sum of subspaces.

1.17 Prove that if A is the matrix associated to a linear map S : R^p → R^q and B the matrix associated to T : R^q → R^r, then BA is the matrix associated to their composition.

Remark 1.8. From now on, we won't distinguish a linear map T : R^q → R^p from its associated matrix, which we will also write as T. Once again, deliberate ambiguity has many advantages.

Remark 1.9. A linear map between abstract vector spaces doesn't have an associated matrix; this idea only makes sense for maps T : R^q → R^p.

Let U and V be two vector spaces. The set W of all linear maps T : U → V is a vector space: we add linear maps by (T_1 + T_2)(u) = T_1(u) + T_2(u), and scale by (cT)(u) = c T(u).

Definition. The kernel of a linear map T : U → V is the set of vectors u in U for which T u = 0. The image is the set of vectors v in V of the form v = T u for some u in U.

Definition. A linear map T : U → V is an isomorphism if
a. T x = T y just when x = y (one-to-one) for any x and y in U, and
b. for any z in V, there is some x in U for which T x = z (onto).
Two vector spaces U and V are called isomorphic if there is an isomorphism between them. Being isomorphic means effectively being the same for purposes of linear algebra.

Remark. When working with an abstract vector space V, the role that has up to now been played by a change of basis matrix will henceforth be played by an isomorphism F : R^n → V. Equivalently, F e_1, F e_2, ..., F e_n is a basis of V.

Let V be the vector space of polynomials p(x) = a + bx + cx^2 of degree

at most 2. Let F : R^3 → V be the map

F(a, b, c) = a + bx + cx^2,

writing vectors in R^3 as (a, b, c). Clearly F is an isomorphism.

1.18 Prove that a linear map T : U → V is an isomorphism just when its kernel is 0 and its image is V.

1.19 Let V be a vector space. Prove that I : V → V is an isomorphism.

1.20 Prove that an isomorphism T : U → V has a unique inverse map T^{−1} : V → U so that T^{−1} T = 1 and T T^{−1} = 1, and that T^{−1} is linear.

1.21 Let V be the set of polynomials of degree at most 2, and map T : V → R^3 by, for any polynomial p,

T p = (p(0), p(1), p(2)).

Prove that T is an isomorphism.

Theorem. If v_1, v_2, ..., v_n is a basis for a vector space V, and w_1, w_2, ..., w_n are any vectors in a vector space W, then there is a unique linear map T : V → W so that T v_i = w_i.

Proof. If there were two such maps, say S and T, then S − T would vanish on v_1, v_2, ..., v_n, and therefore by linearity would vanish on any linear combination of v_1, v_2, ..., v_n, therefore on any vector, so S = T. To see that there is such a map, we know that each vector x in V can be written uniquely as x = x_1 v_1 + x_2 v_2 + ... + x_n v_n. So let's define T x = x_1 w_1 + x_2 w_2 + ... + x_n w_n. If we take two vectors, say x and y, and write them as linear combinations of basis vectors, say with

x = x_1 v_1 + x_2 v_2 + ... + x_n v_n,
y = y_1 v_1 + y_2 v_2 + ... + y_n v_n,

then

T(x + y) = (x_1 + y_1) w_1 + (x_2 + y_2) w_2 + ... + (x_n + y_n) w_n = T x + T y.

Similarly, if we scale a vector x by a number a, then

ax = a x_1 v_1 + a x_2 v_2 + ... + a x_n v_n,

so that

T(ax) = a x_1 w_1 + a x_2 w_2 + ... + a x_n w_n = a T x.

Therefore T is linear.

Corollary. A vector space V has dimension n just when it is isomorphic to K^n. To each basis v_1, v_2, ..., v_n we associate the unique linear isomorphism F : R^n → V so that F(e_1) = v_1, F(e_2) = v_2, ..., F(e_n) = v_n.

Suppose that T : V → W is a linear map between finite dimensional vector spaces, and we have a basis

v_1, v_2, ..., v_p of V

and a basis

w_1, w_2, ..., w_q of W.

Then we can write each element T v_j somehow in terms of these w_1, w_2, ..., w_q, say

T v_j = Σ_i A_{ij} w_i,

for some numbers A_{ij}. Let A be the matrix with entries A_{ij}; we say that A is the matrix of T in these bases. Let F : R^p → V be the isomorphism F(e_j) = v_j, and let G : R^q → W be the isomorphism G(e_i) = w_i. Then G^{−1} T F : R^p → R^q is precisely the linear map G^{−1} T F x = Ax, given by the matrix A. The proof: clearly

A e_j = j-th column of A = (A_{1j}, A_{2j}, ..., A_{qj}) = Σ_i A_{ij} e_i.

Therefore

G A e_j = Σ_i A_{ij} w_i = T v_j = T F e_j.

So GA = TF, or A = G^{−1} T F. We can now just take all of the theorems we have previously proven about matrices and prove them for linear maps between finite dimensional vector spaces, by just replacing the linear map by its matrix. For example,

Theorem. Let T : U → V be a linear transformation of finite dimensional vector spaces. Then

dim ker T + dim im T = dim U.

The proof is that the kernels and images are identified when we match up T and A using F and G as above.

Definition. If T : U → V is a linear map, and W is a subspace of U, the restriction, written T|_W : W → V, is the linear map defined by T|_W(w) = T w for w in W, only allowing vectors from W to map through T.

Review problems

1.22 Prove that if linear maps satisfy P S = T and P is an isomorphism, then S and T have the same kernel, and isomorphic images.

1.23 Prove that if linear maps satisfy S P = T, and P is an isomorphism, then S and T have the same image and isomorphic kernels.

1.24 Prove that dimension is invariant under isomorphism.

1.25 Prove that the space of all p × q matrices is isomorphic to R^{pq}.

Quotient Spaces

A subspace W of a vector space V doesn't usually have a natural choice of complementary subspace. For example, if V = R^2, and W is the vertical axis, then we might like to choose the horizontal axis as a complement to W. But this choice is not natural, because we could carry out a linear change of variables, fixing the vertical axis but not the horizontal axis (for example, a shear along the vertical direction). There is a natural choice of vector space which plays the role of a complement, called the quotient space.

Definition. If V is a vector space, W a subspace of V, and v a vector in V, the translate v + W of W is the set of vectors in V of the form v + w where w is in W.

The translates of the horizontal plane through 0 in R^3 are just the horizontal planes.

1.26 Prove that any subspace W will have w + W = W for any w from W.

Remark. If we take W to be the horizontal plane (x_3 = 0) in R^3, then two translates v + W and v' + W are the same just when v and v' have the same x_3 coordinate. This is the main idea behind translates: two vectors make the same translate just when their difference lies in the subspace.

Definition. If x + W and y + W are translates, we add them by (x + W) + (y + W) = (x + y) + W. If s is a number, let s(x + W) = sx + W.

1.27 Prove that addition and scaling of translates is well-defined, independent of the choice of x and y in a given translate.

Definition. The quotient space V/W of a vector space V by a subspace W is the set of all translates v + W of all vectors v in V.

Take V the plane, V = R^2, and W the vertical axis. The translates of W are the vertical lines in the plane. The quotient space V/W has the various vertical lines as its points. Each vertical line passes through the horizontal axis at a single point, uniquely determining the vertical line. So the translates are the points (x, 0) + W. The quotient space V/W is just identified with the horizontal axis, by taking (x, 0) + W to x.

Lemma. The quotient space V/W of a vector space by a subspace is a vector space. The map T : V → V/W given by the rule T x = x + W is an onto linear map.
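The plane example can be sketched in code (an illustration added here, not from the text): with V = R^2 and W the vertical axis, each translate v + W is determined by the first coordinate of v alone, so that number can stand for the translate.

```python
# V = R^2, W = the vertical axis. Encode the translate (x, y) + W by x.

def translate(v):
    """Send a vector v = (x, y) to its translate v + W, encoded by x."""
    x, y = v
    return x

# Representatives differing by an element of W give the same translate:
t1 = translate((3, 5))
t2 = translate((3, -2))   # (3, 5) - (3, -2) = (0, 7) lies in W
```

Because `translate` of a sum is the sum of the `translate`s, this encoding also exhibits T x = x + W as a linear map onto the horizontal axis.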

Remark. The concept of quotient space can be circumvented by using some complicated matrices, as can everything in linear algebra, so that one never really needs to use abstract vector spaces. But that approach is far more complicated and confusing, because it involves a choice of basis, and there is usually no natural choice to make. It is always easiest to carry out linear algebra as abstractly as possible, descending into choices of basis at the latest possible stage.

Proof. One has to check that (x + W) + (y + W) = (y + W) + (x + W), but this follows from x + y = y + x clearly. Similarly all of the laws of vector spaces hold. The 0 element of V/W is the translate 0 + W, i.e. W itself. To check that T is linear, consider scaling: T(sx) = sx + W = s(x + W), and addition: T(x + y) = x + y + W = (x + W) + (y + W).

Lemma. If U and W are subspaces of a vector space V, and V = U ⊕ W is a direct sum of subspaces, then the map T : V → V/W taking vectors v to v + W restricts to an isomorphism T|_U : U → V/W.

Remark. So, while there is no natural complement to W, every choice of complement is naturally identified with the quotient space.

Proof. The kernel of T|_U is clearly U ∩ W = 0. To see that T|_U is onto, take a vector v + W in V/W. Because V = U ⊕ W, we can somehow write v as a sum v = u + w with u from U and w from W. Therefore v + W = u + W = T|_U u lies in the image of T|_U.

Theorem. If V is a finite dimensional vector space and W a subspace of V, then dim V/W = dim V − dim W.

Definition. If T : U → V is a linear map, U_0 ⊆ U and V_0 ⊆ V are subspaces, and T(U_0) ⊆ V_0, we can define vector spaces U' = U/U_0, V' = V/V_0 and a linear map T' : U' → V' so that T'(u + U_0) = (T u) + V_0. It is easy to check that T' is a well-defined linear map.

Determinants

Definition. If T : V → V is a linear map taking a finite dimensional vector space to itself, define det T to be det T = det A, where F : R^n → V is an isomorphism, and A is the matrix associated to F^{−1} T F : R^n → R^n.
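The definition can be sketched concretely (an example chosen here for illustration, not one the text uses): take V the polynomials of degree at most 2, F : R^3 → V the isomorphism sending e_1, e_2, e_3 to 1, x, x^2, and T p(x) = p(x + 1). The matrix of F^{−1} T F has as its j-th column the coefficients of T applied to the j-th basis vector.

```python
# Polynomials a + bx + cx^2 stored as coefficient vectors [a, b, c].

def shift(p):
    """T p(x) = p(x + 1): a + b(x+1) + c(x+1)^2 = (a+b+c) + (b+2c)x + cx^2."""
    a, b, c = p
    return [a + b + c, b + 2 * c, c]

# Columns of A = matrix of F^{-1} T F: images of the basis 1, x, x^2.
cols = [shift(v) for v in ([1, 0, 0], [0, 1, 0], [0, 0, 1])]
A = [[cols[j][i] for j in range(3)] for i in range(3)]

def det3(M):
    """Cofactor expansion of a 3x3 determinant along the first row."""
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))
```

Here A is upper triangular with ones on the diagonal, so det T = det A = 1, and (as the text notes) the value does not depend on the choice of isomorphism F.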
Remark There is no definition of determinant for a linear map of an infinite dimensional vector space, and there is no general theory to handle such things, although there are many important examples.

Remark. A map T : U → V between different vector spaces doesn't have a determinant.

1.28 Prove that the value of the determinant is independent of the choice of isomorphism F.

1.29 Let V be the vector space of polynomials of degree at most 2, and let T : V → V be the linear map T p(x) = 2p(x − 1) (shifting a polynomial p(x) to 2p(x − 1)). For example, T 1 = 2, T x = 2(x − 1), T x^2 = 2(x − 1)^2.
a. Prove that T is a linear map.
b. Prove that T is an isomorphism.
c. Find det T.

Theorem. Suppose that S : V → V and T : V → V are diagonalizable linear maps, i.e. each has a basis of eigenvectors. Then ST = TS just when there is a basis which consists of eigenvectors simultaneously for both S and T.

This is hard to prove for matrices, but easy in the abstract setting of linear maps.

Proof. If there is a basis of simultaneous eigenvectors, then clearly the matrices of S and T are diagonal in that basis, so commute, so ST = TS. Now suppose that ST = TS. Clearly the result is true if dim V = 1. More generally, clearly the result is true if T = λI for any constant λ, because all vectors in V are then eigenvectors of T. More generally, for any eigenvalue λ of T, let V_λ be the λ-eigenspace of T. Because T is diagonalizable, the sum V = V_{λ_1} + V_{λ_2} + ... + V_{λ_p}, summed over the eigenvalues λ_1, λ_2, ..., λ_p of T, is a direct sum. We claim that each V_λ is S-invariant, for each eigenvalue λ of T. The proof: pick any vector v in V_λ. We want to prove that Sv lies in V_λ. Since V_λ is the λ-eigenspace of T, clearly T v = λv. But then we need to prove that T(Sv) = λ(Sv). This is easy:

T S v = S T v = S λ v = λ S v.

Restricting S to each eigenspace V_λ, the restriction is again diagonalizable, so each V_λ has a basis of eigenvectors of S; these vectors lie in V_λ, so they are also eigenvectors of T, and assembling them over all eigenvalues λ gives the required basis of simultaneous eigenvectors.

Review problems

1.30 Let T : V → V be the linear map T x = 2x. Suppose that V has dimension n. What is det T?

1.31 Let V be the vector space of all 2 × 2 matrices. Let A be a 2 × 2 matrix with two different eigenvalues, λ_1 and λ_2, and eigenvectors x_1 and x_2 corresponding to these eigenvalues. Consider the linear map T : V → V given by T B = AB (on the right hand side, the matrix product of A and B). What are the eigenvalues of T and what are the eigenvectors? (Warning: the eigenvectors are vectors from V, so they are matrices.) What is det T?

1.32 The same, but let T B = BA.

1.33 Let V be the vector space of polynomials of degree at most 2, and let T : V → V be defined by T q(x) = q(−x). What is the characteristic polynomial of T? What are the eigenspaces of T? Is T diagonalizable?

1.34 (Due to Peter Lax [4].) Consider the problem of finding a polynomial p(x) with specified average values on each of a dozen intervals on the x-axis. (Suppose that the intervals don't overlap.) Does this problem have a solution? Does it have many solutions? (All you need is a naive notion of average value, but you can consult a calculus book, for example [9], for a precise definition.)
(a) For each polynomial p of degree n, let T p be the vector whose entries are the averages. Suppose that the number of intervals is at least n. Show that T p = 0 only if p = 0.
(b) Suppose that the number of intervals is no more than n. Show that we can solve T p = b for any given vector b.

1.35 How many of the nutshell criteria for invertibility of a matrix can you translate into criteria for invertibility of a linear map T : U → V? How much more if we assume that U and V are finite dimensional? How much more if we assume as well that U = V?

Complex Vector Spaces

If we change the definition of a vector space, a linear map, etc. to use complex numbers instead of real numbers, we have a complex vector space, complex linear map, etc.
All of the examples so far in this chapter work just as well with complex numbers replacing real numbers. We will refer to a real vector space or a complex vector space to distinguish the sorts of numbers we are using to scale the vectors. Some examples of complex vector spaces:
a. C^n
b. The space of p × q matrices with complex entries.

c. The space of complex-valued functions of a real variable.
d. The space of infinite sequences of complex numbers.

Inner Product Spaces

Definition. An inner product on a real vector space V is a choice of a real number ⟨x, y⟩ for each pair of vectors x and y so that
a. ⟨x, y⟩ is a real-valued linear map in x for each fixed y,
b. ⟨x, y⟩ = ⟨y, x⟩,
c. ⟨x, x⟩ ≥ 0, and equal to 0 just when x = 0.
A real vector space equipped with an inner product is called an inner product space. A linear map between inner product spaces is called orthogonal if it preserves inner products.

Theorem. Every inner product space of dimension n is carried by some orthogonal isomorphism to R^n with its usual inner product.

Proof. Use the Gram–Schmidt process to construct an orthonormal basis, using the same formulas we have used before, say u_1, u_2, ..., u_n. Define a linear map F x = x_1 u_1 + ... + x_n u_n, for x in R^n. Clearly F is an orthogonal isomorphism.

Take A any symmetric n × n matrix with positive eigenvalues, and let ⟨x, y⟩_A = ⟨Ax, y⟩ (with the usual inner product on R^n appearing on the right hand side). Then the expression ⟨x, y⟩_A is an inner product. Therefore by the theorem, we can find a change of variables taking it to the usual inner product.

Definition. A linear map T : V → V from an inner product space to itself is symmetric if ⟨T v, w⟩ = ⟨v, T w⟩ for any vectors v and w.

Theorem 1.34 (Spectral Theorem). Given a symmetric linear map T on a finite dimensional inner product space V, there is an orthogonal isomorphism F : R^n → V for which F^{−1} T F is the linear map of a diagonal matrix.

Hermitian Inner Product Spaces

Definition. A Hermitian inner product on a complex vector space V is a choice of a complex number ⟨z, w⟩ for each pair of vectors z and w from V so that
a. ⟨z, w⟩ is a complex-valued linear map in z for each fixed w,

b. ⟨z, w⟩ is the complex conjugate of ⟨w, z⟩,
c. ⟨z, z⟩ ≥ 0, and equal to 0 just when z = 0.

Review problems

1.36 Let V be the vector space of complex-valued polynomials of a complex variable of degree at most 3. Prove that for any four distinct points z_0, z_1, z_2, z_3, the expression

⟨p(z), q(z)⟩ = p(z_0) \overline{q(z_0)} + p(z_1) \overline{q(z_1)} + p(z_2) \overline{q(z_2)} + p(z_3) \overline{q(z_3)}

(the bars denoting complex conjugation) is a Hermitian inner product.

1.37 Continuing the previous question, if the points z_0, z_1, z_2, z_3 are z_0 = 1, z_1 = −1, z_2 = i, z_3 = −i, prove that the map T : V → V given by T p(z) = p(−z) is unitary.

1.38 Continuing the previous two questions, unitarily diagonalize T.

1.39 State and prove a spectral theorem for normal complex linear maps T : V → V on a Hermitian inner product space, and define the terms adjoint, normal and unitary for complex linear maps V → V.
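The Gram–Schmidt construction used in the proof of the theorem above can be sketched numerically (an illustration added here, not from the text), using the inner product ⟨x, y⟩_A = ⟨Ax, y⟩ for the symmetric matrix A = [[2, 1], [1, 2]], whose eigenvalues 1 and 3 are positive:

```python
import math

# <x, y>_A = <Ax, y> on R^2, with the usual dot product on the right.
A = [[2, 1], [1, 2]]

def ip(x, y):
    Ax = [sum(A[i][k] * x[k] for k in range(2)) for i in range(2)]
    return sum(Ax[i] * y[i] for i in range(2))

def gram_schmidt(vectors):
    """Orthonormalize a list of vectors with respect to <.,.>_A."""
    basis = []
    for v in vectors:
        w = list(v)
        for u in basis:
            c = ip(w, u)                       # component of w along u
            w = [w[i] - c * u[i] for i in range(2)]
        n = math.sqrt(ip(w, w))                # length of w in <.,.>_A
        basis.append([w[i] / n for i in range(2)])
    return basis

u1, u2 = gram_schmidt([[1, 0], [0, 1]])
```

The resulting u_1, u_2 are orthonormal for ⟨.,.⟩_A, so the map F x = x_1 u_1 + x_2 u_2 is an orthogonal isomorphism carrying the usual inner product on R^2 to ⟨.,.⟩_A.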


Chapter 2
Fields

Instead of real or complex numbers, we can dream up wilder notions of numbers.

Definition 2.1. A field is a set F equipped with operations + and · so that

a. Addition laws:
   1. x + y is in F
   2. (x + y) + z = x + (y + z)
   3. x + y = y + x
   for any x, y and z from F.

b. Zero laws:
   1. There is an element 0 of F for which x + 0 = x for any x from F.
   2. For each x from F there is a y from F so that x + y = 0.

c. Multiplication laws:
   1. xy is in F
   2. x(yz) = (xy)z
   3. xy = yx
   for any x, y and z in F.

d. Identity laws:
   1. There is an element 1 in F for which x1 = 1x = x for any x in F.
   2. For each x ≠ 0 there is a y ≠ 0 for which xy = 1. (This y is called the reciprocal or inverse of x.)

e. Distributive law:
   1. x(y + z) = xy + xz for any x, y and z in F.

We will not ask the reader to check all of these laws in any of our examples, because there are just too many of them. We will only give some examples; for a proper introduction to fields, see Artin [1].
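For a finite candidate field the laws can be checked mechanically, by brute force over all elements. As a sketch (added here for illustration), take the set {0, 1, ..., 6} with addition and multiplication reduced modulo 7, the finite field that appears later in this chapter as F_7:

```python
# Brute-force check of two field laws for arithmetic modulo 7.

p = 7
F = range(p)

add = lambda x, y: (x + y) % p
mul = lambda x, y: (x * y) % p

# Distributive law: x(y + z) = xy + xz for all x, y, z.
distributive = all(mul(x, add(y, z)) == add(mul(x, y), mul(x, z))
                   for x in F for y in F for z in F)

# Identity law 2: every nonzero x has a reciprocal y with xy = 1.
has_reciprocals = all(any(mul(x, y) == 1 for y in F) for x in F if x != 0)
```

The same loops extend to the remaining laws; only the reciprocal law fails when the modulus is not prime.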

Of course, the set of real numbers R is a field (with the usual addition and multiplication), as is the set C of complex numbers and the set Q of rational numbers. The set Z of integers is not a field, because the integer 2 has no integer reciprocal.

Let F be the set of all rational functions p(x)/q(x), with p(x) and q(x) polynomials, and q(x) not the 0 polynomial. Clearly for any pair of rational functions, the sum

p_1(x)/q_1(x) + p_2(x)/q_2(x) = (p_1(x) q_2(x) + q_1(x) p_2(x)) / (q_1(x) q_2(x))

is also rational, as is the product, and the reciprocal.

2.1 Suppose that F is a field. Prove the uniqueness of 0, i.e. that there is only one element z in F which satisfies x + z = x for any element x.

2.2 Prove the uniqueness of 1.

2.3 Let x be an element of a field F. Prove the uniqueness of the element y for which x + y = 0. Henceforth, we write this y as −x.

2.4 Let x be an element of a field F. If x ≠ 0, prove the uniqueness of the reciprocal. Henceforth, we write the reciprocal of x as 1/x, and write x + (−y) as x − y.

Some Finite Fields

Let F be the set of numbers F = {0, 1}. Carry out multiplication by the usual rule, but when you add, x + y won't mean the usual addition, but instead will mean the usual addition except when x = y = 1, and then we set 1 + 1 = 0. F is a field called the field of Boolean numbers.

2.5 Prove that for Boolean numbers, −x = x and 1/x = x.

Suppose that p is a positive integer. Let F be the set of numbers F_p = {0, 1, 2, ..., p − 1}. Define addition and multiplication as usual for integers, but if the result is bigger than p − 1, then subtract multiples of p from the result until it lands in F_p, and let that be the definition

of addition and multiplication. F_2 is the field of Boolean numbers. We usually write x = y (mod p) to mean that x and y differ by a multiple of p. For example, if p = 7, we find

5 · 6 = 30 (mod 7) = 28 + 2 (mod 7) = 2 (mod 7).

This is arithmetic in F_7. It turns out that F_p is a field for any prime number p.

2.6 Prove that F_p is not a field if p is not prime.

The only trick in seeing that F_p is a field is to see why there is a reciprocal. It can't be the usual reciprocal as a number. For example, if p = 7,

6 · 6 = 36 (mod 7) = 35 + 1 (mod 7) = 1 (mod 7)

(because 35 is a multiple of 7). So 6 has reciprocal 6 in F_7.

The Euclidean Algorithm

To compute reciprocals, we first need to find greatest common divisors, using the Euclidean algorithm. The basic idea: given two numbers, for example 12132 and 2304, divide the smaller into the larger, writing a quotient and remainder:

12132 = 5 · 2304 + 612.

Take the two last numbers in the equation (2304 and 612 in this example), and repeat the process on them, and so on:

2304 = 3 · 612 + 468
612 = 1 · 468 + 144
468 = 3 · 144 + 36
144 = 4 · 36 + 0.

Stop when you hit a remainder of 0. The greatest common divisor of the numbers you started with is the last nonzero remainder (36 in our example). Now that we can find the greatest common divisor, we will need to write the greatest common divisor as an integer linear combination of the original numbers. If we write the two numbers we started with as a and b, then our goal is to compute integers u and v for which

ua + bv = gcd(a, b).

To do this, let's go backwards. Start with the second to last equation, giving the greatest common divisor:

36 = 468 − 3 · 144.

Plug the previous equation into it:

36 = 468 − 3 · (612 − 1 · 468).

Simplify:

36 = 4 · 468 − 3 · 612.

Plug in the equation before that:

36 = 4 · (2304 − 3 · 612) − 3 · 612 = 4 · 2304 − 15 · 612,
36 = 4 · 2304 − 15 · (12132 − 5 · 2304) = (−15) · 12132 + 79 · 2304.

We have it: gcd(a, b) = ua + bv, in our case 36 = (−15)(12132) + (79)(2304).

What does this algorithm do? At each step downward, we are facing an equation like a − bq = r, so any number which divides into a and b must divide into r and b (the next a and b) and vice versa. The remainders r get smaller at each step, always smaller than either a or b. On the last line, b divides into a. Therefore b is the greatest common divisor of a and b on the last line, and so is the greatest common divisor of the original numbers. We express each remainder in terms of previous a and b numbers, so we can plug them in, cascading backwards until we express the greatest common divisor in terms of the original a and b. In the example, that gives (−15)(12132) + (79)(2304) = 36.

Let's compute a reciprocal modulo an integer, say 17^{−1} modulo 1009. Take a = 1009, and b = 17:

1009 = 59 · 17 + 6
17 = 2 · 6 + 5
6 = 1 · 5 + 1
5 = 5 · 1 + 0.

Going backwards:

1 = 6 − 1 · 5
  = 6 − 1 · (17 − 2 · 6) = 3 · 6 − 1 · 17
  = 3 · (1009 − 59 · 17) − 1 · 17 = 3 · 1009 − 178 · 17.

So finally, modulo 1009, (−178) · 17 = 1. So 17⁻¹ = −178 = 1009 − 178 = 831 (mod 1009). This is how we can compute reciprocals in F_p: we take a = p, and b the number to reciprocate, and apply the process. If p is prime, the resulting greatest common divisor is 1, and so we get up + vb = 1, and so vb = 1 (mod p), so v is the reciprocal of b.

2.7 Compute 15⁻¹ in F.

2.8 Solve the linear equation 3x + 1 = 0 in F.

2.9 Prove that F_p is a field whenever p is a prime number.

Matrices

Matrices with entries from any field F are added, subtracted, and multiplied by the same rules. We can still carry out forward elimination and back substitution, and calculate inverses, determinants, characteristic polynomials, eigenvectors and eigenvalues, using the same steps.

2.10 Let F be the Boolean numbers, and A the matrix A = 1 0 1, thought of as having entries from F. Is A invertible? If so, find A⁻¹.

All of the ideas of linear algebra worked out for the real and complex numbers have obvious analogues over any field, except for the concept of inner product, which is much more sophisticated. From now on, we will only state and prove results for real vector spaces, but those results which do not require inner products (or orthogonal or unitary matrices) continue to hold with identical proofs over any field.

2.11 If A is a matrix whose entries are rational functions of a variable t, prove that the rank of A is constant in t, except for finitely many values of t.

Polynomials

Consider the field F₂ = {0, 1}, and consider the polynomial

p(x) = x² + x.

Clearly p(0) = 0² + 0 = 0. Keeping in mind that 2 = 0 in F₂, clearly p(1) = 1² + 1 = 1 + 1 = 0. Therefore p(x) = 0 for any value of x in F₂. So p(x) is zero, as a function. But we will still want to say that p(x) is not zero as a polynomial, because it is x² + x, a sum of powers of x with nonzero coefficients. We should think of polynomials as abstract expressions, sums of constants times powers of a variable x, and distinguish them from polynomial functions. Think of x as just a symbol, abstractly, not representing any value. So p(x) is nonzero as a polynomial (because it has nonzero coefficients), but p(x) is zero as a polynomial function.

A rational function is a ratio p(x)/q(x) of polynomials, with q(x) not the zero polynomial. CAREFUL: it isn't really a function, and should probably be called something like a rational expression. We are stuck with the standard terminology here. We consider two such expressions to be the same after cancellation of any common factor from numerator and denominator. So 1/x is a rational function, in any field, and x/x² = 1/x in any field. Define addition, subtraction, multiplication and division of rational functions as you expect. For example,

1/x + 1/(x + 1) = (2x + 1)/(x(x + 1)),

over any field. CAREFUL: over the field F₂, we know that x² + x vanishes for every x. So the rational function f(x) = 1/(x² + x) is actually not defined, no matter what value of x you plug in, because the denominator vanishes. But we still consider it a perfectly well defined rational function, since it is made out of perfectly well defined polynomials. If x is an abstract variable (think of just a letter, not a value taken from any field), then we write F(x) for the set of all rational functions p(x)/q(x). Clearly F(x) is a field. For example, F₂(x) contains 0, 1, x, 1 + x, 1/x, 1/(x + 1), ....
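A two-line check (a sketch of my own) that x² + x, nonzero as a polynomial, is nonetheless the zero function on F₂:

```python
p = lambda x: (x * x + x) % 2   # the polynomial x^2 + x, evaluated in F_2
print([p(x) for x in (0, 1)])   # [0, 0]: zero as a function on F_2
```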
Subfields

If K is a field, a subfield F ⊂ K is a subset containing 0 and 1, so that if a and b are in F, then a + b, a − b and ab are in F, and, if b ≠ 0, then a/b is in F. In particular, F is itself a field. For example, Q ⊂ R, R ⊂ C, and Q ⊂ C are subfields. Another example: if F is any field, then F ⊂ F(x) is a subfield.

2.12 Find all of the subfields of F.

2.13 Find a subfield of C other than R or Q.

Example: Over the field R, the polynomial x² + 1 has no roots. A polynomial p(x) with coefficients in a field F splits if it is a product of linear factors. If F ⊂ K is a subfield, we say that a polynomial p(x) splits over K if it splits into a product of linear factors, allowing the factors to have coefficients from K. Example: x² + 1 splits over C: x² + 1 = (x − i)(x + i). If F ⊂ K is a subfield, then K is an F-vector space. For example, C is an R-vector space of dimension 2. The dimension of K as an F-vector space is called the degree of K over F.

Splitting fields

We won't prove the following theorem:

Theorem 2.2. If F is a field and p(x) is a polynomial over F, then there is a field K containing F as a subfield, over which p(x) splits into linear factors, and so that every element of K is expressible as a rational function of the roots of p(x) with coefficients from F. Moreover, K has finite degree over F. This field K is uniquely determined up to an isomorphism of fields, and is called the splitting field of p(x) over F.

For example, over F = R the polynomial p(x) = x² + 1 has splitting field C: x² + 1 = (x − i)(x + i).

Example of a splitting field

Consider the polynomial p(x) = x² + x + 1 over the finite field F₂. Let's look for roots of p(x). Try x = 0: p(0) = 0 + 0 + 1 = 1. No good. Try x = 1: p(1) = 1 + 1 + 1 = 1, since 1 + 1 = 0. No good. So p(x) has no roots in F₂. We know by theorem 2.2 that there is some splitting field K for p(x), containing F₂, so that p(x) splits into linear factors over K, say

p(x) = (x − α)(x − β),

for some α, β ∈ K.

What can we say about this mysterious field K? We know that it contains F₂, contains α, contains β, and that everything in it is made up of rational functions over F₂ with α and β plugged in for the variables. We also know that K has finite dimension over F₂. Otherwise, K is a total mystery: we don't know its dimension, or a basis of it over F₂, or its multiplication or addition rules, or anything.

We know that in F₂, 1 + 1 = 0. Since F₂ ⊂ K is a subfield, this holds in K as well. So in K, for any c ∈ K, c(1 + 1) = c · 0 = 0. Therefore c + c = 0 in K, for any element c ∈ K. Roughly speaking, the arithmetic rules in F₂ impose themselves in K as well.

A clever trick, which you probably wouldn't notice at first: it turns out that β = α + 1. Why? Clearly by definition, α is a root of p(x), i.e. α² + α + 1 = 0. So then let's try α + 1 and check that it is also a root:

(α + 1)² + (α + 1) + 1 = α² + 2α + 1 + α + 1 + 1,

but numbers in K cancel in pairs, c + c = 0, so

= α² + α + 1
= 0

since α is a root of p(x). So therefore elements of K can be written in terms of α purely.

The next fact about K: clearly

{0, 1, α, α + 1} ⊂ K.

We want to prove that

{0, 1, α, α + 1} = K.

How? First, let's try to make up an addition table for these 4 elements:

+       0       1       α       α+1
0       0       1       α       α+1
1       1       0       α+1     α
α       α       α+1     0       1
α+1     α+1     α       1       0

To make up a multiplication table, we need to note that 0 = α² + α + 1, so that

α² = −α − 1 = α + 1,

and

α(α + 1) = α² + α = α + 1 + α = 1.

Therefore

(α + 1)(α + 1) = α² + 2α + 1 = α² + 1 = α + 1 + 1 = α.

This gives the complete multiplication table:

·       0       1       α       α+1
0       0       0       0       0
1       0       1       α       α+1
α       0       α       α+1     1
α+1     0       α+1     1       α

Looking for reciprocals, we find that 0⁻¹ does not exist, 1⁻¹ = 1, α⁻¹ = α + 1, and (α + 1)⁻¹ = α. So {0, 1, α, α + 1} is a field, containing F₂, and p(x) splits over this field, and the field is generated by F₂ and α, so this field must be the splitting field of p(x): {0, 1, α, α + 1} = K. So K is the finite field with 4 elements, K = F₄.

2.14 Consider the polynomial p(x) = x³ + x² + 1 over the field F₂. Suppose that the splitting field K of p(x) contains a root α of p(x). Prove that α² and 1 + α + α² are the two other roots. Compute the addition table and the multiplication table of the 8 elements 0, 1, α, 1 + α, α², 1 + α², α + α², 1 + α + α². Use this to prove that

K = {0, 1, α, 1 + α, α², 1 + α², α + α², 1 + α + α²},

so K is the finite field F₈.
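The tables above can be generated mechanically. A Python sketch (the encoding and names are my own): represent b + cα as the pair (b, c), with coefficients taken mod 2 and α² replaced by α + 1:

```python
def add(x, y):
    return ((x[0] + y[0]) % 2, (x[1] + y[1]) % 2)

def mul(x, y):
    b, c = x
    d, e = y
    # (b + c a)(d + e a) = bd + (be + cd) a + ce a^2, and a^2 = a + 1
    return ((b * d + c * e) % 2, (b * e + c * d + c * e) % 2)

zero, one, alpha, alpha1 = (0, 0), (1, 0), (0, 1), (1, 1)
print(add(alpha, alpha) == zero)    # True: c + c = 0
print(mul(alpha, alpha) == alpha1)  # True: alpha^2 = alpha + 1
print(mul(alpha, alpha1) == one)    # True: alpha and alpha + 1 are reciprocals
```

Looping add and mul over all four elements reproduces both tables.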

Construction of splitting fields

Suppose that F is a field and p(x) is a polynomial over F. We say that p(x) is irreducible if p(x) does not factor into a product of polynomials of smaller degree over F. Basic fact: if p(x) is irreducible, and p(x) divides a product q(x)r(x), then p(x) must divide one of the factors q(x) or r(x).

Suppose that p(x) is a nonconstant irreducible polynomial. (Think for example of x² + 1 over F = R, to have some concrete example in mind.) We have no roots of p(x) in F, so can we construct a splitting field explicitly? Let V be the vector space of all polynomials over F in a variable x. Let W ⊂ V be the set of all polynomials divisible by p(x). Clearly if p(x) divides two polynomials, then it divides their sum, and their scalings, so W ⊂ V is a subspace. Let K = V/W. So K is a vector space. Every element of K is a translate, so has the form q(x) + W, for some polynomial q(x). Any two translates, say q(x) + W and r(x) + W, are equal just when q(x) − r(x) ∈ W, as in our general theory of quotient spaces. So this happens just when q(x) − r(x) is divisible by p(x). In other words, if you write down a translate q(x) + W ∈ K and I write down a translate r(x) + W ∈ K, then these will be the same translate exactly when r(x) = q(x) + p(x)s(x), for some polynomial s(x) over F.

So far K is only a vector space. Let's make K into a field. We already know how to add elements of K, since K is a vector space. How do we multiply elements? Take two elements, say q(x) + W and r(x) + W, and try to define their product to be q(x)r(x) + W. Is this well defined? If I write the same translates down differently, I could write them as q(x) + Q(x)p(x) + W and r(x) + R(x)p(x) + W, and my product would turn out to be

(q(x) + Q(x)p(x))(r(x) + R(x)p(x)) + W
= q(x)r(x) + p(x)(q(x)R(x) + Q(x)r(x) + Q(x)R(x)p(x)) + W
= q(x)r(x) + W,

the same translate, since your result and mine agree up to multiples of p(x), and so represent the same translate. So now we can multiply elements of K.
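The well-definedness just proved can be watched in action. A Python sketch (encoding mine): polynomials over F₂ stored as bitmasks (bit i is the coefficient of xⁱ), multiplied and then reduced modulo p(x) = x² + x + 1; changing a representative by a multiple of p(x) leaves the product's translate unchanged:

```python
def polymul(a, b):
    # Multiply two F_2[x] polynomials stored as bitmasks.
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def polymod(a, p):
    # Reduce a modulo p by repeatedly cancelling the top term.
    while a.bit_length() >= p.bit_length():
        a ^= p << (a.bit_length() - p.bit_length())
    return a

p = 0b111                      # p(x) = x^2 + x + 1
q1, q2 = 0b10, 0b10 ^ p        # x, and x + p(x): two names for one translate
r = 0b11                       # x + 1
prod1 = polymod(polymul(q1, r), p)
prod2 = polymod(polymul(q2, r), p)
print(prod1 == prod2 == 0b1)   # True: x(x + 1) = 1 in K, either way
```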

The next claim is that K is a finite dimensional vector space. This is not obvious, since K = V/W and both V and W are infinite dimensional vector spaces. Take any element of K, say q(x) + W, and divide q(x) by p(x), say q(x) = p(x)Q(x) + r(x), with a quotient Q(x) and remainder r(x). Clearly q(x) differs from r(x) by a multiple of p(x), so q(x) + W = r(x) + W. Therefore every element of K can be written as a translate r(x) + W with r(x) a polynomial of degree less than the degree of p(x). Clearly r(x) is unique, since the difference of two such remainders has degree less than the degree of p(x), so cannot be a nonzero multiple of p(x). Therefore K is identified, as a vector space, with the vector space of polynomials in x of degree less than the degree of p(x).

The notation is much nicer if we write x + W as, say, α. Then clearly x² + W = α², etc., so q(x) + W = q(α) for any polynomial q(x) over F. So we can say that α ∈ K is an element so that p(α) = 0, so p(x) has a root over K. Moreover, every element of K is a polynomial over F in the element α.

We need to check that K is a field. The hard bit is checking that every element of K has a reciprocal. Pick any element q(α) ∈ K. We want to prove that q(α) has a reciprocal, i.e. that there is some element r(α) so that q(α)r(α) = 1. Fix q(α) and consider the F-linear map

T : K → K, T(r(α)) = q(α)r(α).

If q(α) ≠ 0, then T(r(α)) = 0 just when q(α)r(α) = 0, i.e. just when q(x)r(x) is divisible by p(x). But since p(x) is irreducible, we know that p(x) divides q(x)r(x) just when p(x) divides one of q(x) or r(x), i.e. just when q(x) + W = 0 or r(x) + W = 0, i.e. just when q(α) = 0 or r(α) = 0. We know by hypothesis that q(α) ≠ 0, so r(α) = 0. In other words, the kernel of T is {0}. Since K is finite dimensional, T is therefore invertible: T is an isomorphism of F-vector spaces. In particular, T is onto. So there must exist some r(α) ∈ K so that

T(r(α)) = 1,

i.e.

q(α)r(α) = 1,

so q(α) has a reciprocal, r(α) = 1/q(α).
The remaining axioms of fields are easy to check, so K is a field, containing F, and containing a root α for p(x). Every element of K is a polynomial in α.
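To make the construction concrete, here is a Python sketch (the encoding is my own) of K for F = R (approximated by exact rationals) and p(x) = x² + 1: elements are pairs (b, c) standing for b + cα, with α² reduced to −1 whenever it appears, and the reciprocal exists exactly as just proved:

```python
from fractions import Fraction as Q

def mul(x, y):
    b, c = x
    d, e = y
    # (b + c a)(d + e a) = (bd - ce) + (be + cd) a, using a^2 = -1
    return (b * d - c * e, b * e + c * d)

def reciprocal(x):
    b, c = x
    n = b * b + c * c   # nonzero unless x = 0
    return (b / n, -c / n)

alpha = (Q(0), Q(1))
print(mul(alpha, alpha) == (-1, 0))      # True: alpha^2 = -1
x = (Q(1), Q(2))                         # the element 1 + 2 alpha
print(mul(x, reciprocal(x)) == (1, 0))   # True: a reciprocal, as promised
```

This K is, of course, the usual complex numbers in disguise.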

The dimension of K over F is finite. We still need to check that p(x) splits over K into a product of linear factors. Clearly we can split off one linear factor: x − α, since α is a root of p(x) over K. Inductively, if p(x) doesn't completely split into a product of linear factors, we can factor out as many linear factors as possible, and then repeat the whole process on any remaining nonlinear factors.

If you have to calculate in K, how do you do it? The elements of K look like q(α), where α is just some formal symbol, and you add and multiply as usual with polynomials. But we can always assume that q(α) is a polynomial in α of degree less than the degree of p(x), and then subtract off any p(x)-multiples when we multiply or add, since p(α) = 0. For example, if p(x) = x² + 1 and F = R, then the field K consists of expressions like q(α) = b + cα, where b and c are any real numbers. When we multiply, we just replace α² + 1 by 0, i.e. replace α² by −1. So we just get K being the usual complex numbers.

Transcendental numbers

Some boring examples: a number a ∈ C is algebraic if it is the solution of a polynomial equation p(a) = 0 where p(x) is a nonzero polynomial with rational coefficients. A number which is not algebraic is called transcendental. If x is an abstract variable, and F is a field, let F(x) be the set of all rational functions p(x)/q(x) in the variable x with p(x) and q(x) polynomials with coefficients from F.

Theorem 2.3. Take an abstract variable x. A number a ∈ C is transcendental if and only if the field

Q(a) = { p(a)/q(a) : p(x)/q(x) ∈ Q(x) and q(a) ≠ 0 }

is isomorphic to Q(x).

Proof. If a is transcendental, then the map

φ : p(x)/q(x) ∈ Q(x) ↦ p(a)/q(a) ∈ Q(a)

is clearly well defined, onto, and preserves all arithmetic operations. Is φ 1-1? Equivalently, does φ have trivial kernel? Suppose that p(x)/q(x) lies in the kernel of φ. Then p(a) = 0. Since a is transcendental, p(x) = 0. Therefore p(x)/q(x) = 0.
So the kernel is trivial, and so φ is a bijection preserving all arithmetic operations, so φ is an isomorphism of fields. On the other hand, take any complex number a and suppose that there is some isomorphism of fields ψ : Q(x) → Q(a).

Let b = ψ(x). Because ψ is a field isomorphism, all arithmetic operations carried out on x must then be matched up with arithmetic operations carried out on b, so

ψ(p(x)/q(x)) = p(b)/q(b).

Because ψ is an isomorphism, some element must map to a, say

ψ(p₀(x)/q₀(x)) = a.

So p₀(b)/q₀(b) = a, and Q(b) = Q(a). Any algebraic relation on a clearly gives one on b and vice versa. Therefore a is algebraic if and only if b is. Suppose that b is algebraic. Then q(b) = 0 for some nonzero polynomial q(x), and then ψ(1/q(x)) would have to be 1/q(b), which is not defined, a contradiction. So b is transcendental, and therefore so is a.


Chapter 3

Direct Sums of Subspaces

Subspaces have a kind of arithmetic.

Definition 3.1. The intersection of two subspaces is the collection of vectors which belong to both of the subspaces. We will write the intersection of subspaces U and V as U ∩ V.

The subspace U of R³ given by the vectors of the form

(x₁, x₂, x₁ − x₂)

intersects the subspace V consisting of the vectors of the form

(x₁, 0, x₃)

in the subspace U ∩ V, which consists of the vectors of the form

(x₁, 0, x₁).

Definition 3.2. If U and V are subspaces of a vector space W, write U + V for the set of vectors w of the form w = u + v for some u in U and v in V; call U + V the sum.

3.1 Prove that U + V is a subspace of W.

Definition 3.3. If U and V are two subspaces of a vector space W, we will write U + V as U ⊕ V (and say that U ⊕ V is a direct sum) to mean that every vector x of U + V can be written uniquely as a sum x = y + z with y in U and z in V. We will also say that U and V are complementary, or complements of one another.

R³ = U ⊕ V for U the subspace consisting of the vectors

x = (x₁, x₂, 0)

and V the subspace consisting of the vectors

x = (0, 0, x₃),

since we can write any vector x uniquely as

x = (x₁, x₂, 0) + (0, 0, x₃).

3.2 Give an example of two subspaces of R³ which are not complementary.

Theorem 3.4. U + V is a direct sum U ⊕ V just when U ∩ V consists of just the 0 vector.

Proof. If U + V is a direct sum, then we need to see that U ∩ V only contains the zero vector. If it contains some vector x, then we can write x as a sum of a vector from U and a vector from V in two ways: x = (1/2)x + (1/2)x and x = (1/3)x + (2/3)x. Uniqueness forces (1/2)x = (1/3)x, so x = 0. On the other hand, if there is more than one way to write x = y + z = Y + Z for some vectors y and Y from U and z and Z from V, then 0 = (y − Y) + (z − Z), so Y − y = z − Z, a nonzero vector from U ∩ V.

Lemma 3.5. If U ⊕ V is a direct sum of subspaces of a vector space W, then the dimension of U ⊕ V is the sum of the dimensions of U and V. Moreover, putting together any basis of U with any basis of V gives a basis of U ⊕ V.

Proof. Pick a basis for U, say u₁, u₂, ..., u_p, and a basis for V, say v₁, v₂, ..., v_q. Then consider the set of vectors given by throwing all of the u's and v's together. The u's and v's are linearly independent of one another, because any linear relation

0 = a₁u₁ + a₂u₂ + ··· + a_p u_p + b₁v₁ + b₂v₂ + ··· + b_q v_q

would allow us to write

a₁u₁ + a₂u₂ + ··· + a_p u_p = −(b₁v₁ + b₂v₂ + ··· + b_q v_q),

so that a vector from U (the left hand side) belongs to V (the right hand side), which is impossible unless that vector is zero, because U and V intersect only in the zero vector.
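The dimension count in lemma 3.5 can be checked numerically for the example above. A Python sketch (the rank helper is my own, using exact rational arithmetic): the rank of the stacked spanning sets witnesses that dim U + dim V equals the dimension of the sum, which forces U ∩ V = {0}:

```python
from fractions import Fraction

def rank(rows):
    # Gaussian elimination over the rationals.
    m = [[Fraction(e) for e in row] for row in rows]
    r = 0
    for c in range(len(m[0]) if m else 0):
        pivot = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

U = [(1, 0, 0), (0, 1, 0)]   # spans the vectors (x1, x2, 0)
V = [(0, 0, 1)]              # spans the vectors (0, 0, x3)
print(rank(U) + rank(V) == rank(U + V) == 3)   # True: the sum is direct
```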


More information

Stat 159/259: Linear Algebra Notes

Stat 159/259: Linear Algebra Notes Stat 159/259: Linear Algebra Notes Jarrod Millman November 16, 2015 Abstract These notes assume you ve taken a semester of undergraduate linear algebra. In particular, I assume you are familiar with the

More information

First we introduce the sets that are going to serve as the generalizations of the scalars.

First we introduce the sets that are going to serve as the generalizations of the scalars. Contents 1 Fields...................................... 2 2 Vector spaces.................................. 4 3 Matrices..................................... 7 4 Linear systems and matrices..........................

More information

MAT Linear Algebra Collection of sample exams

MAT Linear Algebra Collection of sample exams MAT 342 - Linear Algebra Collection of sample exams A-x. (0 pts Give the precise definition of the row echelon form. 2. ( 0 pts After performing row reductions on the augmented matrix for a certain system

More information

Problems in Linear Algebra and Representation Theory

Problems in Linear Algebra and Representation Theory Problems in Linear Algebra and Representation Theory (Most of these were provided by Victor Ginzburg) The problems appearing below have varying level of difficulty. They are not listed in any specific

More information

Modern Algebra Prof. Manindra Agrawal Department of Computer Science and Engineering Indian Institute of Technology, Kanpur

Modern Algebra Prof. Manindra Agrawal Department of Computer Science and Engineering Indian Institute of Technology, Kanpur Modern Algebra Prof. Manindra Agrawal Department of Computer Science and Engineering Indian Institute of Technology, Kanpur Lecture 02 Groups: Subgroups and homomorphism (Refer Slide Time: 00:13) We looked

More information

2a 2 4ac), provided there is an element r in our

2a 2 4ac), provided there is an element r in our MTH 310002 Test II Review Spring 2012 Absractions versus examples The purpose of abstraction is to reduce ideas to their essentials, uncluttered by the details of a specific situation Our lectures built

More information

Polynomial Rings. i=0. i=0. n+m. i=0. k=0

Polynomial Rings. i=0. i=0. n+m. i=0. k=0 Polynomial Rings 1. Definitions and Basic Properties For convenience, the ring will always be a commutative ring with identity. Basic Properties The polynomial ring R[x] in the indeterminate x with coefficients

More information

08a. Operators on Hilbert spaces. 1. Boundedness, continuity, operator norms

08a. Operators on Hilbert spaces. 1. Boundedness, continuity, operator norms (February 24, 2017) 08a. Operators on Hilbert spaces Paul Garrett garrett@math.umn.edu http://www.math.umn.edu/ garrett/ [This document is http://www.math.umn.edu/ garrett/m/real/notes 2016-17/08a-ops

More information

The value of a problem is not so much coming up with the answer as in the ideas and attempted ideas it forces on the would be solver I.N.

The value of a problem is not so much coming up with the answer as in the ideas and attempted ideas it forces on the would be solver I.N. Math 410 Homework Problems In the following pages you will find all of the homework problems for the semester. Homework should be written out neatly and stapled and turned in at the beginning of class

More information

EXAMPLES OF PROOFS BY INDUCTION

EXAMPLES OF PROOFS BY INDUCTION EXAMPLES OF PROOFS BY INDUCTION KEITH CONRAD 1. Introduction In this handout we illustrate proofs by induction from several areas of mathematics: linear algebra, polynomial algebra, and calculus. Becoming

More information

Math 25a Practice Final #1 Solutions

Math 25a Practice Final #1 Solutions Math 25a Practice Final #1 Solutions Problem 1. Suppose U and W are subspaces of V such that V = U W. Suppose also that u 1,..., u m is a basis of U and w 1,..., w n is a basis of W. Prove that is a basis

More information

Linear Algebra March 16, 2019

Linear Algebra March 16, 2019 Linear Algebra March 16, 2019 2 Contents 0.1 Notation................................ 4 1 Systems of linear equations, and matrices 5 1.1 Systems of linear equations..................... 5 1.2 Augmented

More information

Linear Algebra, Summer 2011, pt. 2

Linear Algebra, Summer 2011, pt. 2 Linear Algebra, Summer 2, pt. 2 June 8, 2 Contents Inverses. 2 Vector Spaces. 3 2. Examples of vector spaces..................... 3 2.2 The column space......................... 6 2.3 The null space...........................

More information

Linear Algebra Lecture Notes-II

Linear Algebra Lecture Notes-II Linear Algebra Lecture Notes-II Vikas Bist Department of Mathematics Panjab University, Chandigarh-64 email: bistvikas@gmail.com Last revised on March 5, 8 This text is based on the lectures delivered

More information

Math 121 Homework 5: Notes on Selected Problems

Math 121 Homework 5: Notes on Selected Problems Math 121 Homework 5: Notes on Selected Problems 12.1.2. Let M be a module over the integral domain R. (a) Assume that M has rank n and that x 1,..., x n is any maximal set of linearly independent elements

More information

Linear Algebra Review

Linear Algebra Review Chapter 1 Linear Algebra Review It is assumed that you have had a course in linear algebra, and are familiar with matrix multiplication, eigenvectors, etc. I will review some of these terms here, but quite

More information

Dot Products. K. Behrend. April 3, Abstract A short review of some basic facts on the dot product. Projections. The spectral theorem.

Dot Products. K. Behrend. April 3, Abstract A short review of some basic facts on the dot product. Projections. The spectral theorem. Dot Products K. Behrend April 3, 008 Abstract A short review of some basic facts on the dot product. Projections. The spectral theorem. Contents The dot product 3. Length of a vector........................

More information

Linear algebra and differential equations (Math 54): Lecture 10

Linear algebra and differential equations (Math 54): Lecture 10 Linear algebra and differential equations (Math 54): Lecture 10 Vivek Shende February 24, 2016 Hello and welcome to class! As you may have observed, your usual professor isn t here today. He ll be back

More information

MATH 115A: SAMPLE FINAL SOLUTIONS

MATH 115A: SAMPLE FINAL SOLUTIONS MATH A: SAMPLE FINAL SOLUTIONS JOE HUGHES. Let V be the set of all functions f : R R such that f( x) = f(x) for all x R. Show that V is a vector space over R under the usual addition and scalar multiplication

More information

0.2 Vector spaces. J.A.Beachy 1

0.2 Vector spaces. J.A.Beachy 1 J.A.Beachy 1 0.2 Vector spaces I m going to begin this section at a rather basic level, giving the definitions of a field and of a vector space in much that same detail as you would have met them in a

More information

x 3y 2z = 6 1.2) 2x 4y 3z = 8 3x + 6y + 8z = 5 x + 3y 2z + 5t = 4 1.5) 2x + 8y z + 9t = 9 3x + 5y 12z + 17t = 7

x 3y 2z = 6 1.2) 2x 4y 3z = 8 3x + 6y + 8z = 5 x + 3y 2z + 5t = 4 1.5) 2x + 8y z + 9t = 9 3x + 5y 12z + 17t = 7 Linear Algebra and its Applications-Lab 1 1) Use Gaussian elimination to solve the following systems x 1 + x 2 2x 3 + 4x 4 = 5 1.1) 2x 1 + 2x 2 3x 3 + x 4 = 3 3x 1 + 3x 2 4x 3 2x 4 = 1 x + y + 2z = 4 1.4)

More information

Commutative Rings and Fields

Commutative Rings and Fields Commutative Rings and Fields 1-22-2017 Different algebraic systems are used in linear algebra. The most important are commutative rings with identity and fields. Definition. A ring is a set R with two

More information

88 CHAPTER 3. SYMMETRIES

88 CHAPTER 3. SYMMETRIES 88 CHAPTER 3 SYMMETRIES 31 Linear Algebra Start with a field F (this will be the field of scalars) Definition: A vector space over F is a set V with a vector addition and scalar multiplication ( scalars

More information

GRE Subject test preparation Spring 2016 Topic: Abstract Algebra, Linear Algebra, Number Theory.

GRE Subject test preparation Spring 2016 Topic: Abstract Algebra, Linear Algebra, Number Theory. GRE Subject test preparation Spring 2016 Topic: Abstract Algebra, Linear Algebra, Number Theory. Linear Algebra Standard matrix manipulation to compute the kernel, intersection of subspaces, column spaces,

More information

Example: This theorem is the easiest way to test an ideal (or an element) is prime. Z[x] (x)

Example: This theorem is the easiest way to test an ideal (or an element) is prime. Z[x] (x) Math 4010/5530 Factorization Theory January 2016 Let R be an integral domain. Recall that s, t R are called associates if they differ by a unit (i.e. there is some c R such that s = ct). Let R be a commutative

More information

EXERCISE SET 5.1. = (kx + kx + k, ky + ky + k ) = (kx + kx + 1, ky + ky + 1) = ((k + )x + 1, (k + )y + 1)

EXERCISE SET 5.1. = (kx + kx + k, ky + ky + k ) = (kx + kx + 1, ky + ky + 1) = ((k + )x + 1, (k + )y + 1) EXERCISE SET 5. 6. The pair (, 2) is in the set but the pair ( )(, 2) = (, 2) is not because the first component is negative; hence Axiom 6 fails. Axiom 5 also fails. 8. Axioms, 2, 3, 6, 9, and are easily

More information

Lecture notes - Math 110 Lec 002, Summer The reference [LADR] stands for Axler s Linear Algebra Done Right, 3rd edition.

Lecture notes - Math 110 Lec 002, Summer The reference [LADR] stands for Axler s Linear Algebra Done Right, 3rd edition. Lecture notes - Math 110 Lec 002, Summer 2016 BW The reference [LADR] stands for Axler s Linear Algebra Done Right, 3rd edition. 1 Contents 1 Sets and fields - 6/20 5 1.1 Set notation.................................

More information

2 so Q[ 2] is closed under both additive and multiplicative inverses. a 2 2b 2 + b

2 so Q[ 2] is closed under both additive and multiplicative inverses. a 2 2b 2 + b . FINITE-DIMENSIONAL VECTOR SPACES.. Fields By now you ll have acquired a fair knowledge of matrices. These are a concrete embodiment of something rather more abstract. Sometimes it is easier to use matrices,

More information

Math 102, Winter Final Exam Review. Chapter 1. Matrices and Gaussian Elimination

Math 102, Winter Final Exam Review. Chapter 1. Matrices and Gaussian Elimination Math 0, Winter 07 Final Exam Review Chapter. Matrices and Gaussian Elimination { x + x =,. Different forms of a system of linear equations. Example: The x + 4x = 4. [ ] [ ] [ ] vector form (or the column

More information

CMU CS 462/662 (INTRO TO COMPUTER GRAPHICS) HOMEWORK 0.0 MATH REVIEW/PREVIEW LINEAR ALGEBRA

CMU CS 462/662 (INTRO TO COMPUTER GRAPHICS) HOMEWORK 0.0 MATH REVIEW/PREVIEW LINEAR ALGEBRA CMU CS 462/662 (INTRO TO COMPUTER GRAPHICS) HOMEWORK 0.0 MATH REVIEW/PREVIEW LINEAR ALGEBRA Andrew ID: ljelenak August 25, 2018 This assignment reviews basic mathematical tools you will use throughout

More information

MATH 112 QUADRATIC AND BILINEAR FORMS NOVEMBER 24, Bilinear forms

MATH 112 QUADRATIC AND BILINEAR FORMS NOVEMBER 24, Bilinear forms MATH 112 QUADRATIC AND BILINEAR FORMS NOVEMBER 24,2015 M. J. HOPKINS 1.1. Bilinear forms and matrices. 1. Bilinear forms Definition 1.1. Suppose that F is a field and V is a vector space over F. bilinear

More information

SPECTRAL THEORY EVAN JENKINS

SPECTRAL THEORY EVAN JENKINS SPECTRAL THEORY EVAN JENKINS Abstract. These are notes from two lectures given in MATH 27200, Basic Functional Analysis, at the University of Chicago in March 2010. The proof of the spectral theorem for

More information

REVIEW FOR EXAM III SIMILARITY AND DIAGONALIZATION

REVIEW FOR EXAM III SIMILARITY AND DIAGONALIZATION REVIEW FOR EXAM III The exam covers sections 4.4, the portions of 4. on systems of differential equations and on Markov chains, and..4. SIMILARITY AND DIAGONALIZATION. Two matrices A and B are similar

More information

Calculating determinants for larger matrices

Calculating determinants for larger matrices Day 26 Calculating determinants for larger matrices We now proceed to define det A for n n matrices A As before, we are looking for a function of A that satisfies the product formula det(ab) = det A det

More information

1 Invariant subspaces

1 Invariant subspaces MATH 2040 Linear Algebra II Lecture Notes by Martin Li Lecture 8 Eigenvalues, eigenvectors and invariant subspaces 1 In previous lectures we have studied linear maps T : V W from a vector space V to another

More information

ALGEBRA. 1. Some elementary number theory 1.1. Primes and divisibility. We denote the collection of integers

ALGEBRA. 1. Some elementary number theory 1.1. Primes and divisibility. We denote the collection of integers ALGEBRA CHRISTIAN REMLING 1. Some elementary number theory 1.1. Primes and divisibility. We denote the collection of integers by Z = {..., 2, 1, 0, 1,...}. Given a, b Z, we write a b if b = ac for some

More information

Lecture 2: Linear operators

Lecture 2: Linear operators Lecture 2: Linear operators Rajat Mittal IIT Kanpur The mathematical formulation of Quantum computing requires vector spaces and linear operators So, we need to be comfortable with linear algebra to study

More information

1. Introduction to commutative rings and fields

1. Introduction to commutative rings and fields 1. Introduction to commutative rings and fields Very informally speaking, a commutative ring is a set in which we can add, subtract and multiply elements so that the usual laws hold. A field is a commutative

More information

Polynomials, Ideals, and Gröbner Bases

Polynomials, Ideals, and Gröbner Bases Polynomials, Ideals, and Gröbner Bases Notes by Bernd Sturmfels for the lecture on April 10, 2018, in the IMPRS Ringvorlesung Introduction to Nonlinear Algebra We fix a field K. Some examples of fields

More information

Rings. Chapter 1. Definition 1.2. A commutative ring R is a ring in which multiplication is commutative. That is, ab = ba for all a, b R.

Rings. Chapter 1. Definition 1.2. A commutative ring R is a ring in which multiplication is commutative. That is, ab = ba for all a, b R. Chapter 1 Rings We have spent the term studying groups. A group is a set with a binary operation that satisfies certain properties. But many algebraic structures such as R, Z, and Z n come with two binary

More information

The eigenvalues are the roots of the characteristic polynomial, det(a λi). We can compute

The eigenvalues are the roots of the characteristic polynomial, det(a λi). We can compute A. [ 3. Let A = 5 5 ]. Find all (complex) eigenvalues and eigenvectors of The eigenvalues are the roots of the characteristic polynomial, det(a λi). We can compute 3 λ A λi =, 5 5 λ from which det(a λi)

More information

LINEAR ALGEBRA MICHAEL PENKAVA

LINEAR ALGEBRA MICHAEL PENKAVA LINEAR ALGEBRA MICHAEL PENKAVA 1. Linear Maps Definition 1.1. If V and W are vector spaces over the same field K, then a map λ : V W is called a linear map if it satisfies the two conditions below: (1)

More information

Linear Algebra Highlights

Linear Algebra Highlights Linear Algebra Highlights Chapter 1 A linear equation in n variables is of the form a 1 x 1 + a 2 x 2 + + a n x n. We can have m equations in n variables, a system of linear equations, which we want to

More information

MATH 240 Spring, Chapter 1: Linear Equations and Matrices

MATH 240 Spring, Chapter 1: Linear Equations and Matrices MATH 240 Spring, 2006 Chapter Summaries for Kolman / Hill, Elementary Linear Algebra, 8th Ed. Sections 1.1 1.6, 2.1 2.2, 3.2 3.8, 4.3 4.5, 5.1 5.3, 5.5, 6.1 6.5, 7.1 7.2, 7.4 DEFINITIONS Chapter 1: Linear

More information

Problem 1A. Suppose that f is a continuous real function on [0, 1]. Prove that

Problem 1A. Suppose that f is a continuous real function on [0, 1]. Prove that Problem 1A. Suppose that f is a continuous real function on [, 1]. Prove that lim α α + x α 1 f(x)dx = f(). Solution: This is obvious for f a constant, so by subtracting f() from both sides we can assume

More information

IMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET

IMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET IMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET This is a (not quite comprehensive) list of definitions and theorems given in Math 1553. Pay particular attention to the ones in red. Study Tip For each

More information

TOEPLITZ OPERATORS. Toeplitz studied infinite matrices with NW-SE diagonals constant. f e C :

TOEPLITZ OPERATORS. Toeplitz studied infinite matrices with NW-SE diagonals constant. f e C : TOEPLITZ OPERATORS EFTON PARK 1. Introduction to Toeplitz Operators Otto Toeplitz lived from 1881-1940 in Goettingen, and it was pretty rough there, so he eventually went to Palestine and eventually contracted

More information

Final Exam Practice Problems Answers Math 24 Winter 2012

Final Exam Practice Problems Answers Math 24 Winter 2012 Final Exam Practice Problems Answers Math 4 Winter 0 () The Jordan product of two n n matrices is defined as A B = (AB + BA), where the products inside the parentheses are standard matrix product. Is the

More information

BASIC NOTIONS. x + y = 1 3, 3x 5y + z = A + 3B,C + 2D, DC are not defined. A + C =

BASIC NOTIONS. x + y = 1 3, 3x 5y + z = A + 3B,C + 2D, DC are not defined. A + C = CHAPTER I BASIC NOTIONS (a) 8666 and 8833 (b) a =6,a =4 will work in the first case, but there are no possible such weightings to produce the second case, since Student and Student 3 have to end up with

More information

Eigenvectors and Hermitian Operators

Eigenvectors and Hermitian Operators 7 71 Eigenvalues and Eigenvectors Basic Definitions Let L be a linear operator on some given vector space V A scalar λ and a nonzero vector v are referred to, respectively, as an eigenvalue and corresponding

More information