Introduction to Linear Algebra, Second Edition, Serge Lang


Chapter I: Vectors

R^n defined. Addition and scalar multiplication in R^n. Two geometric interpretations for a vector: point and displacement. As a point: place a dot at the coordinates. As a displacement of a point: if the point is A and the displacement is B, the displaced point is A + B. Addition of displacements and scalar multiplication of displacements: algebraic definition, then geometric definition. Sum of displacements: interpret as the first displacement followed by the second. To form A − B, draw an arrow from the endpoint of B to the endpoint of A. Every point can be thought of as a displacement from the origin. Every pair of points A, B gives rise to a displacement from A to B, written AB, with coordinates B − A. Two displacements A and B are parallel if A = cB for some c ≠ 0. Reason: same slope. Same direction if c > 0, opposite directions if c < 0. The quadrilateral produced by two displacements is a parallelogram. We will refer to objects with coordinates as vectors. Norm of a vector: the square root of the sum of the squares of its coordinates. Produces the length of the vector in R^2 and R^3 by the Pythagorean Theorem. When are two displacements perpendicular in R^3? The Pythagorean Theorem implies a_1 b_1 + a_2 b_2 + a_3 b_3 = 0. The Law of Cosines yields a_1 b_1 + a_2 b_2 + a_3 b_3 = ||A|| ||B|| cos θ. Scalar product of vectors: A · B = a_1 b_1 + ... + a_n b_n. Properties: page 3.
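These operations are easy to experiment with numerically. A minimal numpy sketch (an illustration of ours, not part of the text):

```python
import numpy as np

A = np.array([1.0, 2.0, 2.0])
B = np.array([4.0, 3.0, 6.0])

print(A + B)                     # vector addition
print(3 * A)                     # scalar multiplication
print(np.linalg.norm(A))         # norm: sqrt(1 + 4 + 4) = 3
print(A @ B)                     # scalar product a_1 b_1 + a_2 b_2 + a_3 b_3
cos_theta = (A @ B) / (np.linalg.norm(A) * np.linalg.norm(B))
print(np.arccos(cos_theta))      # the angle theta from the Law of Cosines
```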

Two vectors are defined to be orthogonal if A · B = 0. Agrees with perpendicularity in low dimensions. The Law of Cosines yields A · B = ||A|| ||B|| cos θ. Distance between two points: the norm of the displacement between them. Circles, spheres, open and closed discs, open and closed balls. General Pythagorean Theorem: when two vectors are orthogonal, ||A + B||^2 = ||A||^2 + ||B||^2. Proof: you can either use coordinates or properties of the dot product. Orthogonal projection of A onto B, producing P: P = cB for some c. We require (A − cB) ⊥ B, hence (A − cB) · B = 0. This yields c = (A · B)/(B · B). c is called the component of A along B and is a number. Unit vectors: E_i. The component of A along E_i is a_i. Schwarz Inequality: in R^n, |A · B| ≤ ||A|| ||B||. Proof: apply the Pythagorean Theorem to A = (A − cB) + cB to derive ||A||^2 ≥ c^2 ||B||^2. Multiply through by ||B||^2 and simplify. Note: we knew this already in low dimensions using A · B = ||A|| ||B|| cos θ, but θ is not defined in high dimensions. But Schwarz implies that −1 ≤ (A · B)/(||A|| ||B||) ≤ 1, so this number is equal to cos θ for a unique θ ∈ [0, π], so we define θ in high dimensions using the formula θ = cos⁻¹((A · B)/(||A|| ||B||)). Triangle Inequality: ||A + B|| ≤ ||A|| + ||B||. Proof: compute the square norm of A + B using the dot product, then apply the Schwarz Inequality.
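A numerical sketch of the projection construction and of both inequalities (illustration only):

```python
import numpy as np

A = np.array([3.0, 4.0, 0.0])
B = np.array([1.0, 1.0, 1.0])

c = (A @ B) / (B @ B)                    # component of A along B
P = c * B                                # orthogonal projection of A onto B
print(np.isclose((A - P) @ B, 0.0))      # A - cB is orthogonal to B

# Schwarz and Triangle Inequalities:
print(abs(A @ B) <= np.linalg.norm(A) * np.linalg.norm(B))
print(np.linalg.norm(A + B) <= np.linalg.norm(A) + np.linalg.norm(B))
```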

Lines: the equation of a line is y = mx + b. The y-intercept is (0, b) and the slope is m, so every time you run 1 you rise m. Every time you run t you rise mt. This brings you to (t, mt + b). The point corresponding to t is P(t) = (t, mt + b). Geometric interpretation: the initial point is A = (0, b) and the displacement is B = (1, m). So P(t) = A + tB. Parametric equation of the line through A with displacement B: P(t) = A + tB. This yields equations for x(t) and y(t). Recovering the equation satisfied by the coordinates on a parametric line: (x, y) = (a_1, a_2) + t(b_1, b_2). Now solve for y in terms of x. Slope of P(t) = A + tB: the ratio b_2/b_1 of the coordinates of B. Equation of the line starting at A when t = 0 and ending at B when t = 1: P(t) = A + t(B − A). Also written P(t) = (1 − t)A + tB. Note: when the equation is written this way, the distance from A to P(t) is t ||B − A||. Since the distance between A and B is ||B − A||, t measures what fraction of the way you have traveled. Midpoint: use t = 1/2. One-third of the way there: use t = 1/3. Given x(t) and y(t), one can either write y in terms of x or write (x(t), y(t)) = A + tB and figure out the slope. Since the line passes through A, the equation is y − a_2 = m(x − a_1). Planes: a plane in R^3 is determined by 3 non-collinear points. Typical point in the plane through A, B, C: starting at A, one can move in the direction from A to B, in the direction from A to C, and in any combination of these. So the typical point is P(s, t) = A + s(B − A) + t(C − A). Example: if A = (1, 1, 1), B = (2, 3, 3), C = (5, 4, 7) then P(s, t) = (1, 1, 1) + s(1, 2, 2) + t(4, 3, 6) = (1 + s + 4t, 1 + 2s + 3t, 1 + 2s + 6t). We obtain the parametric equations x(s, t) = 1 + s + 4t, y(s, t) = 1 + 2s + 3t, z(s, t) = 1 + 2s + 6t. Getting an equation out of this: solve for s and t in terms of x and y, then express z in terms of x and y. This yields s = (1/5)(−3x + 4y − 1), t = −(1/5) + (2x)/5 − y/5, z = (1/5)(−3 + 6x + 2y). Normalized, the equation of the plane is 6x + 2y − 5z = 3.
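A quick check of this example (a sketch; the function name P is ours):

```python
import numpy as np

A = np.array([1, 1, 1])
B = np.array([2, 3, 3])
C = np.array([5, 4, 7])

def P(s, t):
    # typical point of the plane through A, B, C
    return A + s * (B - A) + t * (C - A)

# Every such point satisfies 6x + 2y - 5z = 3.
for s, t in [(0, 0), (1, 0), (0, 1), (2.5, -1.3)]:
    x, y, z = P(s, t)
    print(6 * x + 2 * y - 5 * z)     # prints 3 each time
```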

Using (1, 1, 1) as a solution we also get the equation 6(1) + 2(1) − 5(1) = 3. Subtracting, we obtain 6(x − 1) + 2(y − 1) − 5(z − 1) = 0. Generalizing this: the general equation of a plane is ax + by + cz = d. Assuming that it passes through the point (x_0, y_0, z_0), another equation is a(x − x_0) + b(y − y_0) + c(z − z_0) = 0. We call this the standard equation of the plane. Geometrically: let N = (a, b, c) and let Q = (x − x_0, y − y_0, z − z_0). Then N · Q = 0, so N and Q are perpendicular. The plane can be described as all points (x, y, z) such that (x − x_0, y − y_0, z − z_0) is perpendicular to (a, b, c). N is called the normal vector. Example: find the equation of the plane through (1, 2, 3) and perpendicular to (4, 5, 6). Solution: (4, 5, 6) · (x − 1, y − 2, z − 3) = 0. Finding a, b, c: consider the example A = (1, 1, 1), B = (2, 3, 3), C = (5, 4, 7) again. Two displacements in the plane are B − A = (1, 2, 2) and C − A = (4, 3, 6). So we want (a, b, c) · (1, 2, 2) = 0 and (a, b, c) · (4, 3, 6) = 0. Solving for a and b in terms of c we obtain a = −(6c)/5 and b = −(2c)/5. There are infinitely many choices for (a, b, c). Choosing the one where c = −5 we obtain (a, b, c) = (6, 2, −5). This is consistent with the previous method of solution, but faster. Angle between planes: defined to be the angle between the normal vectors. Planes are parallel when their normal vectors are parallel.
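The notes find (a, b, c) by solving the two dot-product equations by hand; as a numerical aside, the cross product (a tool not used in the text) produces the same normal direction in one step:

```python
import numpy as np

A = np.array([1, 1, 1])
B = np.array([2, 3, 3])
C = np.array([5, 4, 7])

N = np.cross(B - A, C - A)       # orthogonal to both displacements
print(N)                         # (6, 2, -5), as found above
print(N @ (B - A), N @ (C - A))  # both dot products are 0
print(N @ A)                     # d = 3, so the plane is 6x + 2y - 5z = 3
```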

Projection of the point Y onto the plane N · (X − X_0) = 0: we seek the vector X such that X is in the plane and Y − X is parallel to N. This yields the equations N · (X − X_0) = 0 and Y − X = αN. Substituting X = Y − αN in the first equation yields N · (Y − αN − X_0) = 0. Solving for α yields

α = (N · (Y − X_0)) / (N · N).

Therefore

X = Y − ((N · (Y − X_0)) / (N · N)) N.

Distance from the point Y to the plane N · (X − X_0) = 0: this is defined to be the distance between Y and the projection of Y onto the plane. Since Y − X = αN and α = (N · (Y − X_0))/(N · N), the distance is

|α| ||N|| = |N · (Y − X_0)| / ||N|| = ||Y − X_0|| |cos θ|,

where θ is the angle between Y − X_0 and N. Remark: let's call the projection of Y onto the plane the point P. We claim that P is the point in the plane closest to Y. Reason: let X be any point in the plane. Then Y − X = (Y − P) + (P − X). By construction, Y − P is parallel to N. Since N is orthogonal to any displacement in the plane and P − X is a displacement in the plane, Y − P is orthogonal to P − X. By the Pythagorean Theorem, this implies

||(Y − P) + (P − X)||^2 = ||Y − P||^2 + ||P − X||^2.

Hence ||Y − X||^2 ≥ ||Y − P||^2. In other words, the distance from Y to any arbitrary X in the plane is at least the distance from Y to P.
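The projection and distance formulas, checked numerically for the running example (the point Y is chosen arbitrarily):

```python
import numpy as np

N  = np.array([6.0, 2.0, -5.0])    # normal of 6x + 2y - 5z = 3
X0 = np.array([1.0, 1.0, 1.0])     # a point on the plane
Y  = np.array([4.0, -2.0, 7.0])

alpha = (N @ (Y - X0)) / (N @ N)
X = Y - alpha * N                   # projection of Y onto the plane
dist = abs(N @ (Y - X0)) / np.linalg.norm(N)

print(np.isclose(N @ (X - X0), 0.0))            # X lies on the plane
print(np.isclose(dist, np.linalg.norm(Y - X)))  # both distance formulas agree
```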

Chapter 2: Matrices and Linear Equations

Matrix: another kind of vector, since it has coordinates. So addition and scalar multiplication are defined. Matrix multiplication: the ij entry of AB is R_i · C_j, where the row-decomposition of A is (A_1, ..., A_m) and the column-decomposition of B is (B_1, ..., B_n). The number of coordinates in the rows of A must match the number of coordinates in the columns of B. Formula for c_ij given AB = C. Let A be a matrix and let X be a column vector. Then AX = x_1 A_1 + ... + x_n A_n. Transforming a system of equations into a matrix equation: see the example on page 49. Write as xA_1 + yA_2 + zA_3 = B and re-write as AX = B. Application: formula for rotation of a vector about the origin. Input vector (x, y), output vector (x_θ, y_θ). The relationship is (x_θ, y_θ)ᵀ = R(θ) (x, y)ᵀ. More matrix algebra:

1. Let the columns of B be (B_1, B_2, ..., B_n). Then AB = (AB_1, AB_2, ..., AB_n).
2. Elementary column vector: E_i. It satisfies AE_i = A_i where A_i is column i of A.
3. Identity matrix: I = (E_1, E_2, ..., E_n). It satisfies AI = A by #2.
4. Distributive and associative laws: see page 53. The distributive law follows from dot product properties. The associative law can be done using brute force.
5. A property not satisfied: commutativity. Just do an example.

Invertible matrix: A is invertible if it is square and there exists a square matrix B such that AB = BA = I. Notation: A⁻¹. Rotation matrices are invertible: first note that R(α)R(β)X = R(α + β)X for all column vectors X, in particular for X = E_i. So R(α)R(β) and R(α + β) have the same columns and are equal. This implies that R(α) has inverse R(−α). Solving an equation of the form AX = B is easy if we know that A is invertible: X = A⁻¹B. Not all square matrices are invertible: the zero matrix, for example. Homogeneous system of equations: a matrix equation of the form AX = 0 where X is a column vector.
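Returning to the rotation application above, a small numpy check (a sketch; R is our name for the rotation matrix):

```python
import numpy as np

def R(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s],
                     [s,  c]])

a, b = 0.7, 1.1
print(np.allclose(R(a) @ R(b), R(a + b)))    # R(alpha)R(beta) = R(alpha+beta)
print(np.allclose(R(a) @ R(-a), np.eye(2)))  # hence R(alpha)^{-1} = R(-alpha)
print(R(np.pi / 2) @ np.array([1.0, 0.0]))   # (1, 0) rotates to (0, 1)
```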

Theorem: when there are more variables than equations in a homogeneous system of equations, there are an infinite number of solutions. Proof: by induction on the number of equations. One equation: true. Now assume true for n homogeneous equations with more than n variables. Consider n + 1 equations and more than n + 1 variables. Take the first equation, express one of the variables in terms of the others, then substitute this into the remaining n equations. In the remaining n equations there are more than n variables, so they have an infinite number of solutions. Each is a solution to the first one. Corollary: more variables than equations in a homogeneous system guarantees at least one non-trivial solution. Application to vectors: say that vectors A_1, ..., A_k are linearly dependent if there is a non-trivial solution to x_1 A_1 + ... + x_k A_k = 0. Then any n + 1 vectors in R^n are linearly dependent. Reason: more variables than equations. Application to square matrices, treated as vectors: any n^2 + 1 n × n matrices are linearly dependent. Solving AX = B using Gauss elimination: first, represent the system in augmented matrix form. Second, use the following elementary transformations, which don't change the solution set: swap equations, multiply an equation by a nonzero number, add one equation to another. Most importantly, adding a multiple of a given row to another. Leading term in a row: the first non-zero coefficient. Pivoting on a leading term: adding multiples of the row it is in to other rows to get zeros in the column it is in. Iterate this procedure from top to bottom so that the surviving non-zero rows have different leading term columns (row echelon form). The variables fall into two categories: leading and slack. Slack variables can be assigned arbitrary values. Use back-substitution to get the leading variables expressed in terms of the slack variables. In a homogeneous system with more variables than equations, there will be at least one slack variable, so there will be an infinite number of solutions. Application to finding the inverse of a matrix: we wish to solve AB = I. In other words, (AB_1, AB_2, ..., AB_n) = (E_1, E_2, ..., E_n). Consider solving AB_1 = E_1. Compare to solving AB_2 = E_2. The coefficient matrix is the same; all that changes is the augmented column. Do these simultaneously. If there is an inverse, we should be able to continue until the coefficient matrix looks like I, in which case the augmented side can be read off as B.
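A minimal sketch of this inversion procedure (no pivot-size safeguards; it assumes the matrix is invertible and that no zero pivot is met):

```python
import numpy as np

def invert_via_row_reduction(A):
    n = len(A)
    M = np.hstack([A.astype(float), np.eye(n)])   # the augmented matrix [A | I]
    for i in range(n):
        M[i] = M[i] / M[i, i]                     # scale so the pivot is 1
        for j in range(n):
            if j != i:
                M[j] -= M[j, i] * M[i]            # clear the rest of column i
    return M[:, n:]                               # now [I | B]; return B

A = np.array([[2.0, 1.0],
              [5.0, 3.0]])
B = invert_via_row_reduction(A)
print(np.allclose(A @ B, np.eye(2)), np.allclose(B @ A, np.eye(2)))
```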

Matrix units: E_ab. Properties:

1. E_ab E_cd = 0 if b ≠ c, and E_ab E_bc = E_ac.
2. E_pp A zeros out all rows except row p in A.
3. E_pq A zeros out all rows except row q in A and moves it to row p.
4. A ↦ A − E_pp A − E_qq A + E_pq A + E_qp A = (I − E_pp − E_qq + E_pq + E_qp)A swaps rows p and q in A.
5. A ↦ A − E_pp A + xE_pp A = (I − E_pp + xE_pp)A multiplies row p of A by x.
6. A ↦ A + xE_pq A = (I + xE_pq)A adds x copies of row q in A to row p of A.

The matrices in 4, 5, 6 mimic elementary row operations. They are invertible, since elementary row operations can be undone. Theorem: if AB = I then BA = I. Proof: we have seen the procedure for finding B such that AB = I: perform elementary row operations on [A I] until it becomes [I B]. We can see that every operation applied to A is also applied to I. So if E_n E_{n−1} ... E_1 A = I then E_n E_{n−1} ... E_1 I = B. But this says BA = I. The last section can be read by students and skipped in lecture since we have covered the topics above.
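The elementary matrices of items 4–6, built from matrix units (a sketch; the helper E is ours and is 0-indexed):

```python
import numpy as np

def E(a, b, n=3):
    M = np.zeros((n, n))
    M[a, b] = 1.0                  # matrix unit E_ab
    return M

A = np.arange(9.0).reshape(3, 3)
I = np.eye(3)
p, q, x = 0, 2, 5.0

swap  = I - E(p, p) - E(q, q) + E(p, q) + E(q, p)
scale = I - E(p, p) + x * E(p, p)
shear = I + x * E(p, q)

print(swap @ A)     # rows p and q exchanged
print(scale @ A)    # row p multiplied by x
print(shear @ A)    # x copies of row q added to row p
```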

Chapter 3: Vector Spaces

Vector space: any set behaving like R^n. The required properties are listed on page 89. Examples: matrices, polynomials. Subspace of a vector space: a subset of a vector space which is closed with respect to vector addition and scalar multiplication. Examples: solutions to a homogeneous system of equations, upper-triangular matrices, polynomials with a given root. More examples: a line through the origin, a plane through the origin. The intersection and the sum of subspaces produce new subspaces. Skip Section 3, Convex Sets. Linear combinations. The set of linear combinations of v_1, ..., v_k produces a subspace W. (They span the subspace.) Linearly independent vectors: the opposite of linearly dependent vectors. Do examples of linearly independent vectors, including trig and exponential functions. Basis for a vector space: a set of linearly independent vectors that span the vector space. Basis for a subspace: same idea. Examples: a basis for R^n; a basis for the solution set of a homogeneous system of equations; a basis for the polynomials of degree ≤ 3 with 1 as a root. Theorem 4.2, p. 107: the coefficients in a linear combination of basis vectors are unique. Every spanning set yields a basis. Reason: if the spanning set is already linearly independent, you have a basis. But if there is a non-trivial linear combination of them that produces 0, you can discard one of them. Keep on going until what you have left is linearly independent. This is essentially Theorem 5.3. Definition: when a vector space has a basis of n vectors, we say that the dimension of the vector space is n. Problem with this definition: it seems to imply that every basis has the same number of vectors in it. Question: can a vector space have bases of different sizes? Answer: no. Proof: suppose that there is a basis (u_1, ..., u_m) and another basis (v_1, ..., v_n) where n > m. Express each v_i in terms of u_1, ..., u_m:

v_1 = a_11 u_1 + ... + a_1m u_m
v_2 = a_21 u_1 + ... + a_2m u_m
...
v_n = a_n1 u_1 + ... + a_nm u_m.

Consider the coordinate vectors (a_11, ..., a_1m), (a_21, ..., a_2m), ..., (a_n1, ..., a_nm) in R^m. There are n > m of them, so they must be linearly dependent, with a non-trivial way to combine them into (0, ..., 0) via (x_1, ..., x_n). This implies x_1 v_1 + ... + x_n v_n = 0.

This cannot happen because the v_i are linearly independent. So you cannot have bases of different sizes. This is Theorem 5.2. Note: if you look at this proof carefully you see that it says that any n > m vectors in an m-dimensional space are linearly dependent. This is Theorem 5.1. Every linearly independent set in a finite-dimensional vector space can be expanded to a basis. Reason: if the vectors already span the vector space, you have a basis. But if there is a vector outside the span, add it, and the larger set is still linearly independent. Keep on going. You must eventually arrive at a spanning set and a basis, because above we showed that there is an upper limit to the number of linearly independent vectors you can produce. This is Theorem 5.7. If V is a vector space of dimension n and W is a subspace, then W has dimension k ≤ n. Reason: find any non-zero vector in W. As before, keep on growing the list of linearly independent vectors. You can't outrun the dimension n, so the process has to stop. This is Theorem 5.8. If V has dimension n, any n linearly independent vectors in V form a basis. Reason: expand to a basis. This is Theorem 5.5. It also implies Theorem 5.6. Row rank of a matrix: the dimension of its row space. Column rank of a matrix: the dimension of its column space. How to compute these: every elementary row operation yields a matrix with the same row space. Reduce the matrix to reduced row echelon form and read off the dimension. Similarly, every elementary column operation yields a matrix with the same column space. Reduce the matrix to reduced column echelon form and read off the dimension.
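numpy reports the common value of these two ranks, and an elementary (invertible) factor does not change it (illustration only):

```python
import numpy as np

A = np.array([[1, 2, 2],
              [4, 3, 6],
              [5, 5, 8]])           # row 3 = row 1 + row 2

print(np.linalg.matrix_rank(A))     # column rank: 2
print(np.linalg.matrix_rank(A.T))   # row rank: the same number

E = np.array([[1, 0, 0],
              [-4, 1, 0],
              [0, 0, 1]])           # elementary: subtract 4*row1 from row2
print(np.linalg.matrix_rank(E @ A)) # rank unchanged by the row operation
```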

Theorem: let A be an m × n matrix. Then dim RS(A) = dim CS(A). Proof: if we don't worry about the impact on the row space and the column space of A, we can always perform a series of elementary row operations followed by a series of elementary column operations so that the resulting matrix A′ has the very simple form depicted in Theorem 6.2 on page 118 of the textbook. One can see that both the row space and the column space of A′ have the same dimension r. All we need to do is to prove that the row space dimension of A′ is the same as the row space dimension of A, and that the column space dimension of A′ is the same as the column space dimension of A. In class I proved that if a subset of columns of A forms a basis for the column space of A, then the corresponding columns of EA form a basis for the column space of EA, where E is an m × m elementary matrix representing an elementary row operation on A. Therefore the column space dimension of EA is the same as the column space dimension of A. The argument was

Σ_i α_i (EA)_i = 0  ⟹  E(Σ_i α_i A_i) = 0  ⟹  E⁻¹E(Σ_i α_i A_i) = 0  ⟹  Σ_i α_i A_i = 0  ⟹  α_1 = α_2 = ... = 0.

We also know that the row space dimension of EA is the same as the row space dimension of A, because EA has the same row space as A. Summary: row operations on a matrix preserve both the row space dimension and the column space dimension. Similarly, since elementary column operations on A can be expressed as AF, where F is an n × n elementary matrix representing a column operation on A, AF has the same row space dimension and the same column space dimension as A. So if A → A_1 → A_2 → ... → A′ is a sequence of elementary row operations followed by a sequence of elementary column operations, all the row space dimensions and the column space dimensions are unchanged from what they are in A. Since they are the same in A′, they must be the same in A. Note: if row or column swaps are involved, we must change the meaning of corresponding rows and columns accordingly.

Chapter 4: Linear Mappings

Linear mapping: a function T : V → W between two vector spaces that satisfies T(u + v) = T(u) + T(v) and T(cv) = cT(v). Terminology: domain, range, image, inverse image, kernel. A large source of examples: T(v) = Av. Includes rotation and reflection. Other examples: among polynomials, multiplication by x and differentiation.

Another example: reflection across the plane in R^3 through the origin with normal vector (1, 2, 3). Formula: given input v,

T(v) = v − (1/7)(v · (1, 2, 3))(1, 2, 3),

since 2/((1, 2, 3) · (1, 2, 3)) = 2/14 = 1/7. Verify directly that this is linear. Another example: projection onto the same plane:

v ↦ v − (1/14)(v · (1, 2, 3))(1, 2, 3).

A linear map T : V → W is completely determined by where it sends a basis of V. The range of T is the span of T(v_1), T(v_2), ..., T(v_k) where v_1, ..., v_k is a basis of V. Therefore im(T) is a subspace of W and dim(im(T)) ≤ dim(V). The kernel of T is a subspace of the domain. Use of the kernel: classifying all solutions to T(v) = b. The solution set is {v_0 + k : k ∈ kernel}. Example: solutions to an inhomogeneous system of equations. Example: solutions to the differential equation y′ = cos x. (The vector space is the set of differentiable functions, the linear transformation is differentiation, b = cos x, and the kernel consists of the constant functions.) See also Theorem 4.4, p. 148. (I have stated the more general result.) A map is injective if it satisfies T(v) = T(v′) ⟹ v = v′. Reflection across a plane is one-to-one. Projection onto the plane is not one-to-one. Criterion for an injective linear map: the kernel is trivial. Theorem 3.1, p. 139: when T : V → W is injective, T sends linearly independent vectors to linearly independent vectors. Exact relationship between dim(V) and dim(T(V)) in T : V → W: dim(V) = dim(kernel) + dim(image). (Theorem 3.2, p. 139.) Example: projection onto a plane. Proof: find a basis for V of the right size. First, choose a basis for the image: w_1, ..., w_i. Second, find v_1, ..., v_i with T(v_j) = w_j for each j. They must be linearly independent. Let u_1, u_2, ..., u_k be a basis for the kernel. If we can just show that v_1, ..., v_i, u_1, ..., u_k is a basis for V, we are done. Linearly independent: if a linear combination of them is zero, then a linear combination of their images is 0. So the coefficients of the images of v_1, ..., v_i are 0. That just leaves a linear combination of the kernel basis equal to zero, so all coefficients are 0. Span: choose any v ∈ V. Then T(v) ∈ span(w_1, ..., w_i), therefore T(v) = T(Σ c_j v_j), therefore v − Σ c_j v_j is in the kernel, so v is in the span of the v_j and the u_j vectors.
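Both running examples and the dimension formula, sketched numerically (the matrices are built column-by-column from the images of the standard basis):

```python
import numpy as np

N = np.array([1.0, 2.0, 3.0])

def reflect(v):
    return v - (2 * (v @ N) / (N @ N)) * N    # the 1/7 formula: 2/14 = 1/7

def project(v):
    return v - ((v @ N) / (N @ N)) * N        # the 1/14 formula

Refl = np.column_stack([reflect(e) for e in np.eye(3)])
Proj = np.column_stack([project(e) for e in np.eye(3)])

print(np.linalg.matrix_rank(Refl))   # 3 = dim(image); the kernel is trivial
print(np.linalg.matrix_rank(Proj))   # 2 = dim(image); 2 + dim(kernel) = 3
print(np.allclose(Proj @ N, 0))      # N spans the kernel of the projection
```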

Example: projection. Relation of the Rank-Nullity Theorem to matrices: let A be an m × n matrix. It gives rise to a linear mapping T : R^n → R^m via T(v) = Av. The image of T is the column space of A. Therefore dim(image) = r where r is the rank of A. The kernel of T is the solution set of Av = 0, and we know that row operations on A produce r linearly independent rows and that there are n − r slack variables. This implies dim(kernel) = n − r. So we can see that dim(image) + dim(kernel) = n = dim(V). Geometric interpretation of the kernel of T when T(v) = Av: the set of vectors perpendicular to every row of A, i.e. to every vector in the row space of A. So if A has m rows and n columns, RS(A) is a subspace of R^n and ker(T) consists of its orthogonal complement. It has dimension n − r. For example, the equation of a plane through the origin is ax + by + cz = 0, so the vectors in the plane belong to the kernel of the T defined by the matrix [a b c]. The rank is 1 and the nullity is 3 − 1 = 2. More generally, a hyperplane in R^n is the solution set of a_1 x_1 + ... + a_n x_n = 0. This corresponds to a 1 × n matrix, so the rank is 1 and the nullity is n − 1. In other words, a hyperplane has dimension n − 1. One can also try to compute the dimension of the intersection of m hyperplanes. This corresponds to the kernel of an m × n matrix. The dimension of the intersection is n − r. The matrix associated with a linear map: if T : R^n → R^m is defined by T(v) = Av, then the matrix is A. If the matrix is not given, we can find it as follows: suppose that T(E_1) = v_1, T(E_2) = v_2, ..., T(E_n) = v_n, vectors in R^m. Then by linearity

T((x_1, x_2, ..., x_n)ᵀ) = T(x_1 E_1 + ... + x_n E_n) = x_1 v_1 + ... + x_n v_n.

But this is exactly Av where the columns of A are v_1, ..., v_n. Hence T(v) = Av and the matrix is A. Example: projection onto a plane, reflection across a plane.

When T : V → W is a linear map but neither V nor W is in the form R^k, we can still find a matrix representation for T: choose a basis {v_1, ..., v_n} for V, choose a basis {w_1, ..., w_m} for W, and identify each v ∈ V with a vector in R^n whose entries come from the unique way the basis produces v. Do the same for vectors in W. You can now identify T with a map S : R^n → R^m, and it has a matrix representation A. This represents T also, but it is only valid for the particular choice of bases we made. Example: let V = P_3 and let W = P_2 (polynomial vector spaces). Let T : P_3 → P_2 be given by T(p(x)) = p′(x). A basis for P_3 is {1, x, x^2, x^3}. A basis for P_2 is {1, x, x^2}. Since T sends the polynomial a_0 + a_1 x + a_2 x^2 + a_3 x^3 to the polynomial a_1 + 2a_2 x + 3a_3 x^2, S sends the vector (a_0, a_1, a_2, a_3)ᵀ to the vector (a_1, 2a_2, 3a_3)ᵀ. The matrix representation is

A = [ 0 1 0 0 ]
    [ 0 0 2 0 ]
    [ 0 0 0 3 ].

Example: rotation through θ about a directed line through the origin in R^3. If the line has direction vector (0, 0, 1) then we rotate (x, y) through θ and send z to z. This is represented by

A = [ cos θ  −sin θ  0 ]
    [ sin θ   cos θ  0 ]
    [   0       0    1 ].

The vector (x, y, z)ᵀ is sent to the vector (x cos θ − y sin θ, x sin θ + y cos θ, z)ᵀ. But suppose instead we want to rotate about the line in the direction (1, 1, 1). We will find a new coordinate system in which (1, 1, 1) acts like the z-axis. The plane perpendicular to (1, 1, 1) through the origin is given by x + y + z = 0. The typical vector in the plane is (−a − b, a, b). We will find two perpendicular vectors in this plane. For the first one we choose (1, 0, −1). For the second one we choose a and b so that (1, 0, −1) · (−a − b, a, b) = 0. One choice is a = 2, b = −1. This yields (−1, 2, −1). We scale these down to length 1 (dividing by √6, √2, and √3) to obtain v_1 = (−1/√6, 2/√6, −1/√6), v_2 = (1/√2, 0, −1/√2), and v_3 = (1/√3, 1/√3, 1/√3). (We want to rotate counterclockwise from v_1 to v_2.) The three vectors v_1, v_2, v_3 form an alternative coordinate system (basis) for R^3.
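A quick check of the P_3 → P_2 differentiation example above (coefficient vectors in the monomial bases):

```python
import numpy as np

A = np.array([[0, 1, 0, 0],
              [0, 0, 2, 0],
              [0, 0, 0, 3]])      # matrix of d/dx from P_3 to P_2

p = np.array([5, 4, -2, 7])       # 5 + 4x - 2x^2 + 7x^3
print(A @ p)                      # [4, -4, 21], i.e. 4 - 4x + 21x^2
```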

Identifying the vector xv_1 + yv_2 + zv_3 with the vector (x, y, z)ᵀ, the matrix representing rotation about the line through (1, 1, 1) is

A = [ cos θ  −sin θ  0 ]
    [ sin θ   cos θ  0 ]
    [   0       0    1 ].

So the vector xv_1 + yv_2 + zv_3 is sent to the vector (x cos θ − y sin θ)v_1 + (x sin θ + y cos θ)v_2 + zv_3. We can express this map in matrix form using matrix algebra. Setting V = [v_1 v_2 v_3], this map can be described as

T(V (x, y, z)ᵀ) = V A (x, y, z)ᵀ.

Setting (X, Y, Z)ᵀ = V (x, y, z)ᵀ, we have

T((X, Y, Z)ᵀ) = V A V⁻¹ (X, Y, Z)ᵀ.

Note that it is very easy to compute the inverse of this V, because the columns have dot products which are all equal to 0 or 1: V⁻¹ = Vᵀ. When θ = π/2 we obtain

V A V⁻¹ = (1/3) [ 1        1 + √3   1 − √3 ]
                [ 1 − √3   1        1 + √3 ]
                [ 1 + √3   1 − √3   1      ].

One can check that this does send v_1 to v_2 and v_2 to −v_1 and v_3 to v_3. For a general angle θ, we have

V A V⁻¹ = (1/3) [ 1 + 2 cos θ             1 − cos θ + √3 sin θ    1 − cos θ − √3 sin θ ]
                [ 1 − cos θ − √3 sin θ    1 + 2 cos θ             1 − cos θ + √3 sin θ ]
                [ 1 − cos θ + √3 sin θ    1 − cos θ − √3 sin θ    1 + 2 cos θ          ].
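A numerical sketch of this change-of-basis computation (V built from the v_i above; V⁻¹ = Vᵀ since the columns are orthonormal):

```python
import numpy as np

v1 = np.array([-1, 2, -1]) / np.sqrt(6)
v2 = np.array([1, 0, -1]) / np.sqrt(2)
v3 = np.array([1, 1, 1]) / np.sqrt(3)
V = np.column_stack([v1, v2, v3])

theta = 0.9
c, s = np.cos(theta), np.sin(theta)
A = np.array([[c, -s, 0],
              [s,  c, 0],
              [0,  0, 1]])

R = V @ A @ V.T                          # rotation about the (1,1,1) line
print(np.allclose(R @ v3, v3))           # the axis is fixed
print(np.allclose(R.T @ R, np.eye(3)))   # R is orthogonal
print(np.isclose(np.linalg.det(R), 1.0)) # and a proper rotation
```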

Chapter 5: Composition of Linear Maps

Define composition. The composition of linear maps is linear, and composition is associative. The matrix of a composition is the product of the matrices. Associativity of composition implies associativity of matrix multiplication. Look at the Section 1 exercises. A linear map has an inverse if unique inputs produce unique outputs and every vector in the codomain is the image of a vector in the domain. Injectivity can be detected using the kernel. Surjectivity can be determined using a dimension argument. When the linear map is given by a matrix, the dimension of the kernel is the number of slack variables and the dimension of the image is the dimension of the column space, which by the Rank-Nullity Theorem is the number of columns minus the number of slack variables, i.e. the number of leading variables. See also Theorems 2.4 and 2.5. The inverse of a bijective linear map is linear, and its matrix representation is the inverse matrix, assuming the domain and codomain are Euclidean spaces of the same dimension. Look at the Section 2 exercises.

Chapter 6: Scalar Products and Orthogonality

Let V be a vector space over F = R or F = C, finite or infinite-dimensional. An inner product on V is a function ⟨·, ·⟩ : V × V → F which satisfies the following axioms:

1. Positive-definiteness: ⟨v, v⟩ ≥ 0 for all v ∈ V, and ⟨v, v⟩ = 0 if and only if v = 0_V.
2. Multilinearity: ⟨v + v′, w⟩ = ⟨v, w⟩ + ⟨v′, w⟩ and ⟨av, w⟩ = a⟨v, w⟩ for all v, v′, w ∈ V and a ∈ F.
3. Conjugate symmetry: ⟨w, v⟩ = ā where ā denotes the complex conjugate of a = ⟨v, w⟩, for all v, w ∈ V.

Inner-product space: a real or complex vector space V equipped with an inner product. Note that axioms 2 and 3 imply ⟨v, aw⟩ = ā ⟨v, w⟩ and ⟨v, w + w′⟩ = ⟨v, w⟩ + ⟨v, w′⟩ for all v, w, w′ ∈ V and a ∈ F. Examples: the usual dot product on R^n, the generalized dot product on C^n, and the inner product on P([a, b]) defined by ⟨f, g⟩ = ∫_a^b f(x)g(x) dx.
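A numerical sketch of the integral inner product (this assumes scipy is available for the quadrature):

```python
import numpy as np
from scipy.integrate import quad

def inner(f, g, a=-np.pi, b=np.pi):
    val, _ = quad(lambda x: f(x) * g(x), a, b)
    return val

print(inner(np.sin, np.cos))            # ~0: sin and cos are orthogonal here
print(np.sqrt(inner(np.sin, np.sin)))   # norm of sin on [-pi, pi]: sqrt(pi)
```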

Norm: ||v|| = √⟨v, v⟩. This satisfies ||av|| = |a| ||v||, where |a| = √(a ā) is the absolute value (if real) or the modulus (if complex). Orthogonal vectors: u_1, ..., u_n are mutually orthogonal iff ⟨u_i, u_j⟩ = 0 for all i ≠ j. Orthonormal vectors: u_1, ..., u_n are mutually orthonormal iff ⟨u_i, u_j⟩ = δ_ij for all i, j. In other words, they are mutually orthogonal and have length 1. Orthonormal projection: let u_1, ..., u_n be mutually orthonormal. Let U = span(u_1, ..., u_n). The linear operator P : V → U defined by Pv = Σ_i ⟨v, u_i⟩ u_i is called orthonormal projection onto U. Properties of orthogonal and orthonormal vectors:

1. Mutually orthogonal non-zero vectors u_1, ..., u_n are linearly independent. Proof: suppose Σ a_i u_i = 0_V. Taking the inner product with u_j we obtain 0 = ⟨0_V, u_j⟩ = ⟨Σ a_i u_i, u_j⟩ = Σ a_i ⟨u_i, u_j⟩ = a_j ||u_j||^2, hence a_j = 0.
2. Let u_1, ..., u_n be mutually orthogonal. Then ||Σ u_i||^2 = Σ ||u_i||^2. This is called the Pythagorean Theorem. Proof: ⟨Σ_i u_i, Σ_j u_j⟩ = Σ_{i,j} ⟨u_i, u_j⟩ = Σ_i ⟨u_i, u_i⟩.
3. Let u_1, ..., u_n be mutually orthonormal. Then ||Σ a_i u_i||^2 = Σ |a_i|^2. Proof: ⟨Σ_i a_i u_i, Σ_j a_j u_j⟩ = Σ_{i,j} a_i ā_j ⟨u_i, u_j⟩ = Σ_i a_i ā_i.
4. Let u_1, ..., u_n be mutually orthonormal. Let U = span(u_1, ..., u_n). Then for any u ∈ U, u = Σ ⟨u, u_i⟩ u_i. In other words, u = Pu where P is orthonormal projection onto U. This also implies P^2 = P. Proof: write u = Σ a_i u_i. Then ⟨u, u_j⟩ = ⟨Σ_i a_i u_i, u_j⟩ = Σ_i a_i ⟨u_i, u_j⟩ = a_j.

Properties of orthonormal projection:

1. Let u_1, ..., u_n be mutually orthonormal. Let U = span(u_1, ..., u_n). Then for any v ∈ V and for any u ∈ U, v − Pv and u are orthogonal to each other, where P is orthonormal projection onto U. Proof: for any j, ⟨Pv, u_j⟩ = ⟨Σ_i ⟨v, u_i⟩ u_i, u_j⟩ = Σ_i ⟨v, u_i⟩⟨u_i, u_j⟩ = ⟨v, u_j⟩. Subtracting, ⟨v − Pv, u_j⟩ = 0.
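Orthonormal projection in coordinates (a sketch; u1, u2 are an orthonormal basis of a plane U in R^3):

```python
import numpy as np

u1 = np.array([1.0, 0.0, -1.0]) / np.sqrt(2)
u2 = np.array([-1.0, 2.0, -1.0]) / np.sqrt(6)

def P(v):
    return (v @ u1) * u1 + (v @ u2) * u2    # P v = sum of <v, u_i> u_i

v = np.array([3.0, 1.0, 4.0])
print(np.isclose((v - P(v)) @ u1, 0.0))     # v - Pv is orthogonal to U
print(np.isclose((v - P(v)) @ u2, 0.0))
print(np.allclose(P(P(v)), P(v)))           # P^2 = P
```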

2. Let u_1, ..., u_n be mutually orthonormal. Let U = span(u_1, ..., u_n). Then for any v ∈ V, the unique vector u ∈ U that minimizes ||v − u|| is Pv. Proof: let u ∈ U be given. Then we know that v − Pv and Pv − u are orthogonal to each other. By the Pythagorean Theorem,

||v − u||^2 = ||v − Pv||^2 + ||Pv − u||^2 ≥ ||v − Pv||^2,

with equality iff Pv − u = 0 iff u = Pv. Theorem: every finite-dimensional subspace of an inner product space has an orthonormal basis. Proof: let V be the inner product space. Let U be a subspace of dimension n. We prove that U has an orthonormal basis by induction on n. Base case: n = 1. Let {u_1} be a basis for U. Then {u_1/||u_1||} is an orthonormal basis for U. Induction hypothesis: if U has dimension n then it has an orthonormal basis {u_1, ..., u_n}. Inductive step: let U be a subspace of dimension n + 1. Let {v_1, ..., v_{n+1}} be a basis for U. Write U_n = span(v_1, ..., v_n). By the induction hypothesis, U_n has an orthonormal basis {u_1, ..., u_n}. Let P be orthonormal projection onto U_n. Then the vectors u_1, ..., u_n, v_{n+1} − P v_{n+1} are mutually orthogonal and form a basis for U. Setting

u_{n+1} = (v_{n+1} − P v_{n+1}) / ||v_{n+1} − P v_{n+1}||,

the vectors u_1, ..., u_{n+1} form an orthonormal basis for U. Remark: the proof of this last theorem provides an algorithm (Gram-Schmidt) for producing an orthonormal basis for a finite-dimensional subspace U: start with any basis {v_1, ..., v_n}. Set u_1 = v_1/||v_1||. This is an orthonormal basis for span(v_1). Having found an orthonormal basis {u_1, ..., u_k} for span(v_1, ..., v_k), one can produce an orthonormal basis for span(v_1, ..., v_{k+1}) by appending the vector

u_{k+1} = (v_{k+1} − P v_{k+1}) / ||v_{k+1} − P v_{k+1}||,

where P is orthonormal projection onto span(u_1, ..., u_k).
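The Gram-Schmidt algorithm from this remark, as a short sketch:

```python
import numpy as np

def gram_schmidt(vectors):
    basis = []
    for v in vectors:
        w = v - sum((v @ u) * u for u in basis)   # w = v - Pv
        basis.append(w / np.linalg.norm(w))       # normalize and append
    return basis

vs = [np.array([1.0, 1.0, 0.0]),
      np.array([1.0, 0.0, 1.0]),
      np.array([0.0, 1.0, 1.0])]
us = gram_schmidt(vs)
gram = [[round(a @ b, 10) for b in us] for a in us]
print(gram)        # the identity Gram matrix: the u_i are orthonormal
```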

A Minimization Problem: consider the problem of finding the best polynomial approximation p(x) ∈ P_5([−π, π]) of sin x, where by best we mean that

∫_{−π}^{π} (sin x − p(x))^2 dx

is as small as possible. To place this in an inner-product setting, we consider P_5([−π, π]) to be a subspace of C([−π, π]), where the latter is the vector space of continuous functions from [−π, π] to R. Then C([−π, π]) has the inner product defined by ⟨f, g⟩ = ∫_{−π}^{π} f(x)g(x) dx. We are trying to minimize ||sin x − p(x)||^2. However, we know how to minimize ||sin x − p(x)||: p(x) = P(sin x) where P is orthogonal projection onto the finite-dimensional subspace P_5([−π, π]). The latter has basis {1, x, x^2, x^3, x^4, x^5}, and Gram-Schmidt can be applied to produce an orthonormal basis {u_0(x), u_1(x), u_2(x), u_3(x), u_4(x), u_5(x)}. Therefore the best polynomial approximation is Σ α_i u_i(x) where

α_i = ⟨sin x, u_i(x)⟩ = ∫_{−π}^{π} sin x · u_i(x) dx.

Compare the resulting approximation to sin x, given in the book on page 151, with the Taylor polynomial x − x^3/6 + x^5/120. Cauchy-Schwarz Inequality: |⟨u, v⟩| ≤ ||u|| ||v||. Proof: project u onto v, yielding p = λv. We have (u − p) ⊥ p, therefore

||u||^2 = ||(u − p) + p||^2 = ||u − p||^2 + ||p||^2 ≥ ||p||^2 = |λ|^2 ||v||^2.

Given that λ = ⟨u, v⟩/⟨v, v⟩, this yields ||u||^2 ||v||^4 ≥ |⟨u, v⟩|^2 ||v||^2, and this implies the Cauchy-Schwarz Inequality.
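Returning to the minimization problem above, a numerical sketch (it assumes scipy for the integrals; the default-argument style just freezes each u_i as it is built):

```python
import numpy as np
from scipy.integrate import quad

def inner(f, g):
    return quad(lambda x: f(x) * g(x), -np.pi, np.pi)[0]

# Gram-Schmidt on {1, x, ..., x^5} under the integral inner product.
basis = []
for k in range(6):
    v = lambda x, k=k: x ** k
    cs = [inner(v, u) for u in basis]
    w = lambda x, v=v, cs=cs, us=list(basis): v(x) - sum(c * u(x) for c, u in zip(cs, us))
    basis.append(lambda x, w=w, n=np.sqrt(inner(w, w)): w(x) / n)

alphas = [inner(np.sin, u) for u in basis]      # alpha_i = <sin, u_i>
p = lambda x: sum(a * u(x) for a, u in zip(alphas, basis))
print(p(1.0), np.sin(1.0))   # the L^2-best quintic is close to sin near x = 1
```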

Triangle Inequality: ||u + v|| ≤ ||u|| + ||v||. Proof: square both sides and subtract the left-hand side from the right-hand side. The result is

2 ||u|| ||v|| − ⟨u, v⟩ − ⟨v, u⟩ = 2 ||u|| ||v|| − 2 Re⟨u, v⟩ ≥ 2 ||u|| ||v|| − 2 |⟨u, v⟩| ≥ 0

by Cauchy-Schwarz. The Orthogonal Complement of a Subspace: let V be a finite-dimensional inner-product space and let U be a subspace. We define

U^⊥ = {v ∈ V : ⟨v, u⟩ = 0 for all u ∈ U}.

We can construct U^⊥ explicitly as follows: let {u_1, ..., u_k} be an orthonormal basis for U. Expand it to an orthonormal basis {u_1, ..., u_n} for V using Gram-Schmidt. The vectors in span(u_{k+1}, ..., u_n) are orthogonal to the vectors in U. Moreover, for any v ∈ U^⊥, the coefficients of v in terms of the orthonormal basis are the inner products of v with each basis vector, which places v ∈ span(u_{k+1}, ..., u_n). Therefore U^⊥ = span(u_{k+1}, ..., u_n). This immediately implies that (U^⊥)^⊥ = span(u_1, ..., u_k) = U. Note also that V = U ⊕ U^⊥. To decompose a vector in V into something in U plus something in U^⊥ we can use v = Pv + (v − Pv).

Chapter 7: Determinants

Prove directly that the 2 × 2 determinant has the following properties: det(I) = 1, and, as a function of the columns, det(A_1, A_2) is multilinear and skew-symmetric. In particular, the determinant of a matrix with repeated columns is 0. Moreover, det(AB) = det(A) det(B). Proof of the last statement: the fact that the determinant is skew-symmetric implies that the determinant is zero when there is a repeated column. AB = C has columns C_1 = b_11 A_1 + b_21 A_2 and C_2 = b_12 A_1 + b_22 A_2, therefore

det(AB) = det(b_11 A_1 + b_21 A_2, b_12 A_1 + b_22 A_2) = b_11 b_22 det(A_1, A_2) − b_12 b_21 det(A_1, A_2) = det(B) det(A).

Define the n × n determinant recursively and state that it also has the same properties as above.
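The 2 × 2 properties, spot-checked numerically (random matrices; illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 2))
B = rng.standard_normal((2, 2))

print(np.isclose(np.linalg.det(A @ B),
                 np.linalg.det(A) * np.linalg.det(B)))     # det(AB) = det A det B
print(np.linalg.det(np.column_stack([A[:, 0], A[:, 0]])))  # repeated column: 0
print(np.linalg.det(np.eye(2)))                            # det(I) = 1
```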

Theorem: when the columns of a matrix are linearly dependent, the determinant is 0. Proof: expand one column in terms of the others, compute the determinant, and note that all terms are zero. Theorem: when the columns of a matrix are linearly independent, the determinant is not 0. Proof: the linear map defined by the matrix is invertible, so the map has an inverse, so the matrix has an inverse. The determinant of the product is 1, so each determinant is non-zero. Cramer's Rule: suppose Ax = b. Then b = x_1 A_1 + ... + x_n A_n. The determinant of the matrix whose columns are A_1, ..., b, ..., A_n, where the replacement is in column i, is x_i det(A). So x_i is this determinant divided by det(A).

Chapter 8: Eigenvalues and Eigenvectors

Let A be an n × n matrix of real numbers. We say that a non-zero vector v is an eigenvector of A if there is a number λ such that Av = λv. How to find them: in matrix terms, we are solving Av − λIv = 0, i.e. (A − λI)v = 0. This says that the columns of A − λI are linearly dependent, which implies that det(A − λI) = 0. Expand this in terms of the unknown λ, then solve for λ, then go back and calculate v. Using the dot product as scalar product, the dot product of two column vectors x and y is xᵀy.
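The eigenvalue recipe above, sketched with numpy (which solves det(A − λI) = 0 numerically rather than symbolically):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

vals, vecs = np.linalg.eig(A)
lam, v = vals[0], vecs[:, 0]
print(np.allclose(A @ v, lam * v))                         # Av = lambda v
print(np.isclose(np.linalg.det(A - lam * np.eye(2)), 0.0)) # A - lambda I singular
```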

Real symmetric matrices have real eigenvalues. Proof: let Av = λv where λ ∈ C, v ∈ C^n, and v ≠ 0. Write λ = a + bi and v = x + iy. Comparing real and imaginary parts in A(x + iy) = (a + bi)(x + iy) we obtain

Ax = ax − by  and  Ay = ay + bx.

Therefore

xᵀAy = a xᵀy + b xᵀx  and  xᵀAy = (Ax)ᵀy = (a xᵀ − b yᵀ)y = a xᵀy − b yᵀy.

Comparing, b ||x||^2 = −b ||y||^2. If b ≠ 0 then ||x||^2 + ||y||^2 = 0, so x = y = 0, since they have zero length. This contradicts v ≠ 0. Hence b = 0 and λ is real. Let A be a square matrix with orthonormal columns. Then A⁻¹ = Aᵀ. Proof: multiply and look at the dot products. For any two matrices A and B, (AB)ᵀ = BᵀAᵀ. Let A and B be square matrices with orthonormal columns. Then AB has orthonormal columns. Proof: if B_1, ..., B_n are the columns of B then AB_1, ..., AB_n are the columns of AB. The dot product of columns i and j is (AB_i)ᵀ(AB_j) = B_iᵀAᵀAB_j = B_iᵀB_j = δ_ij. Let A be a real symmetric matrix. Then there is an orthonormal matrix C such that CᵀAC is diagonal with the eigenvalues as diagonal entries. Proof: by induction on the number of rows and columns. Trivial when n = 1. More generally, let v_1 be an eigenvector with eigenvalue λ_1. Find a basis incorporating v_1 and use Gram-Schmidt to produce an orthonormal basis v_1, ..., v_n. Then Av_1 = λ_1 v_1. Let C = [v_1 v_2 ... v_n]. Then AC = [Av_1 Av_2 ... Av_n] = [λ_1 v_1 Av_2 ... Av_n], so CᵀAC = [λ_1 Cᵀv_1 CᵀAv_2 ... CᵀAv_n] = a matrix with first row (λ_1, b_2, ..., b_n), first column (λ_1, 0, ..., 0), and lower right-hand submatrix B. See notes. Corollary: CᵀAC = diag(λ_1, ..., λ_n) implies AC = (λ_1 C_1, λ_2 C_2, ..., λ_n C_n). In other words, the columns of C are an orthonormal set of eigenvectors.
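numpy's symmetric eigensolver returns exactly such a C (a sketch):

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])          # real symmetric

vals, C = np.linalg.eigh(A)              # columns of C: orthonormal eigenvectors
print(np.allclose(C.T @ C, np.eye(3)))           # C^{-1} = C^T
print(np.allclose(C.T @ A @ C, np.diag(vals)))   # C^T A C is diagonal
```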

Finding them: first note that eigenvectors of a real symmetric matrix A corresponding to distinct eigenvalues are orthogonal. Suppose Aᵀ = A and Au = αu and Av = βv where α ≠ β. Then

β uᵀv = uᵀ(βv) = uᵀ(Av) = (uᵀA)v = (uᵀAᵀ)v = (Au)ᵀv = (αu)ᵀv = α(uᵀv).

Since α ≠ β, this forces uᵀv = 0. In other words, their dot product is 0. Find each eigenvalue using the characteristic polynomial, then find a basis for each eigenspace, then use Gram-Schmidt to find an orthonormal basis for each eigenspace. The union of the bases will be orthonormal, and there will be enough of them to form an orthonormal basis. These are the columns of C. Applications: (1) matrix powers and solutions to recurrence relations; (2) diagonalizing a binary quadratic form; (3) solving a system of differential equations. Example: graph ax^2 + bxy + cy^2 = 1. In matrix form, this reads

[x y] [ a    b/2 ] [x]  = 1.
      [ b/2   c  ] [y]

We have already proved that for each symmetric matrix A there is a rotation matrix C of eigenvectors of A such that A = CDCᵀ where D is a diagonal matrix. Making the substitution we obtain

[x y] C [ λ_1   0  ] Cᵀ [x]  = 1.
        [ 0    λ_2 ]    [y]

Writing

[X]  = Cᵀ [x]
[Y]       [y]

we obtain

[X Y] [ λ_1   0  ] [X]  = 1.
      [ 0    λ_2 ] [Y]

In other words, λ_1 X^2 + λ_2 Y^2 = 1. This is much easier to graph. Since

[x]  = C [X]
[y]      [Y]

and C is a rotation matrix, all we have to do is identify the angle of rotation θ and rotate the XY graph by θ to obtain the xy graph.
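A numerical sketch of diagonalizing a binary quadratic form (here 2x^2 + 2xy + 2y^2, an example of ours):

```python
import numpy as np

Q = np.array([[2.0, 1.0],
              [1.0, 2.0]])              # the form 2x^2 + 2xy + 2y^2

lam, C = np.linalg.eigh(Q)              # Q = C diag(lam) C^T
print(lam)                              # [1., 3.]: the form becomes X^2 + 3Y^2

xy = np.array([0.4, -0.1])
XY = C.T @ xy                           # change of variables
print(np.isclose(xy @ Q @ xy, lam[0] * XY[0]**2 + lam[1] * XY[1]**2))
```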

Example: graph x^2 + 3xy + 5y^2 = 1. The eigenvalues of

[ 1    3/2 ]
[ 3/2   5  ]

are λ_1 = 1/2 and λ_2 = 11/2. Eigenspace bases are {(3, −1)} and {(1, 3)}. This yields

C = [ 3/√10    1/√10 ]
    [ −1/√10   3/√10 ].

Identifying this with

R(θ) = [ cos θ  −sin θ ]
       [ sin θ   cos θ ]

yields cos θ = 3/√10, sin θ = −1/√10, tan θ = −1/3, θ = tan⁻¹(−1/3) ≈ −0.3217 radians, or about −18.43 degrees. So we graph (1/2)X^2 + (11/2)Y^2 = 1, then rotate by −18.43 degrees. For example, one solution to (1/2)X^2 + (11/2)Y^2 = 1 is X = √2, Y = 0. This yields the solution

[x]  = C [√2]  = [ 3√2/√10 ]  = [ 3/√5  ]
[y]      [0 ]    [ −√2/√10 ]    [ −1/√5 ].

We have

(3/√5)^2 + 3(3/√5)(−1/√5) + 5(−1/√5)^2 = 9/5 − 9/5 + 1 = 1.

[Two graphs omitted: the ellipse (1/2)X^2 + (11/2)Y^2 = 1 in the XY-plane, and its rotated image x^2 + 3xy + 5y^2 = 1 in the xy-plane, both drawn in the window −1.5 to 1.5 on each axis.]
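Checking this example numerically (a sketch):

```python
import numpy as np

Q = np.array([[1.0, 1.5],
              [1.5, 5.0]])
C = np.array([[3.0, 1.0],
              [-1.0, 3.0]]) / np.sqrt(10)     # rotation by arctan(-1/3)

print(np.allclose(C.T @ Q @ C, np.diag([0.5, 5.5])))   # the form diagonalizes

x, y = C @ np.array([np.sqrt(2.0), 0.0])                # image of (sqrt 2, 0)
print(x**2 + 3 * x * y + 5 * y**2)                      # 1.0: on the ellipse
```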

A related problem: find the maximum and minimum values of x^2 + 3xy + 5y^2 subject to x^2 + y^2 = 1. Given that (x, y) is related to (X, Y) by a rotation, x^2 + y^2 = 1 is equivalent to X^2 + Y^2 = 1. So equivalently we can find the maximum of (1/2)X^2 + (11/2)Y^2 subject to X^2 + Y^2 = 1. Writing Y^2 = 1 − X^2, we want the maximum of 11/2 − 5X^2 where |X| ≤ 1. The maximum is 11/2, using (X, Y) = (0, 1), i.e. (x, y) = (1/√10, 3/√10). The minimum is 1/2, using (X, Y) = (1, 0), i.e. (x, y) = (3/√10, −1/√10).
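A brute-force check over the unit circle (illustration only):

```python
import numpy as np

t = np.linspace(0.0, 2.0 * np.pi, 200001)
x, y = np.cos(t), np.sin(t)                 # points with x^2 + y^2 = 1
q = x**2 + 3 * x * y + 5 * y**2

print(q.max(), q.min())                     # ~5.5 and ~0.5, the two eigenvalues
print(x[q.argmax()], y[q.argmax()])         # ~(1/sqrt(10), 3/sqrt(10)), up to sign
```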


More information

LINEAR ALGEBRA REVIEW

LINEAR ALGEBRA REVIEW LINEAR ALGEBRA REVIEW SPENCER BECKER-KAHN Basic Definitions Domain and Codomain. Let f : X Y be any function. This notation means that X is the domain of f and Y is the codomain of f. This means that for

More information

Linear Algebra. Preliminary Lecture Notes

Linear Algebra. Preliminary Lecture Notes Linear Algebra Preliminary Lecture Notes Adolfo J. Rumbos c Draft date May 9, 29 2 Contents 1 Motivation for the course 5 2 Euclidean n dimensional Space 7 2.1 Definition of n Dimensional Euclidean Space...........

More information

LINEAR ALGEBRA W W L CHEN

LINEAR ALGEBRA W W L CHEN LINEAR ALGEBRA W W L CHEN c W W L Chen, 1997, 2008. This chapter is available free to all individuals, on the understanding that it is not to be used for financial gain, and may be downloaded and/or photocopied,

More information

Linear Algebra Lecture Notes-II

Linear Algebra Lecture Notes-II Linear Algebra Lecture Notes-II Vikas Bist Department of Mathematics Panjab University, Chandigarh-64 email: bistvikas@gmail.com Last revised on March 5, 8 This text is based on the lectures delivered

More information

ELEMENTS OF MATRIX ALGEBRA

ELEMENTS OF MATRIX ALGEBRA ELEMENTS OF MATRIX ALGEBRA CHUNG-MING KUAN Department of Finance National Taiwan University September 09, 2009 c Chung-Ming Kuan, 1996, 2001, 2009 E-mail: ckuan@ntuedutw; URL: homepagentuedutw/ ckuan CONTENTS

More information

Quizzes for Math 304

Quizzes for Math 304 Quizzes for Math 304 QUIZ. A system of linear equations has augmented matrix 2 4 4 A = 2 0 2 4 3 5 2 a) Write down this system of equations; b) Find the reduced row-echelon form of A; c) What are the pivot

More information

Glossary of Linear Algebra Terms. Prepared by Vince Zaccone For Campus Learning Assistance Services at UCSB

Glossary of Linear Algebra Terms. Prepared by Vince Zaccone For Campus Learning Assistance Services at UCSB Glossary of Linear Algebra Terms Basis (for a subspace) A linearly independent set of vectors that spans the space Basic Variable A variable in a linear system that corresponds to a pivot column in the

More information

Definition 1. A set V is a vector space over the scalar field F {R, C} iff. there are two operations defined on V, called vector addition

Definition 1. A set V is a vector space over the scalar field F {R, C} iff. there are two operations defined on V, called vector addition 6 Vector Spaces with Inned Product Basis and Dimension Section Objective(s): Vector Spaces and Subspaces Linear (In)dependence Basis and Dimension Inner Product 6 Vector Spaces and Subspaces Definition

More information

MTH 2032 SemesterII

MTH 2032 SemesterII MTH 202 SemesterII 2010-11 Linear Algebra Worked Examples Dr. Tony Yee Department of Mathematics and Information Technology The Hong Kong Institute of Education December 28, 2011 ii Contents Table of Contents

More information

HOMEWORK PROBLEMS FROM STRANG S LINEAR ALGEBRA AND ITS APPLICATIONS (4TH EDITION)

HOMEWORK PROBLEMS FROM STRANG S LINEAR ALGEBRA AND ITS APPLICATIONS (4TH EDITION) HOMEWORK PROBLEMS FROM STRANG S LINEAR ALGEBRA AND ITS APPLICATIONS (4TH EDITION) PROFESSOR STEVEN MILLER: BROWN UNIVERSITY: SPRING 2007 1. CHAPTER 1: MATRICES AND GAUSSIAN ELIMINATION Page 9, # 3: Describe

More information

Lecture 7: Positive Semidefinite Matrices

Lecture 7: Positive Semidefinite Matrices Lecture 7: Positive Semidefinite Matrices Rajat Mittal IIT Kanpur The main aim of this lecture note is to prepare your background for semidefinite programming. We have already seen some linear algebra.

More information

Linear Algebra. Preliminary Lecture Notes

Linear Algebra. Preliminary Lecture Notes Linear Algebra Preliminary Lecture Notes Adolfo J. Rumbos c Draft date April 29, 23 2 Contents Motivation for the course 5 2 Euclidean n dimensional Space 7 2. Definition of n Dimensional Euclidean Space...........

More information

The following definition is fundamental.

The following definition is fundamental. 1. Some Basics from Linear Algebra With these notes, I will try and clarify certain topics that I only quickly mention in class. First and foremost, I will assume that you are familiar with many basic

More information

Math 18, Linear Algebra, Lecture C00, Spring 2017 Review and Practice Problems for Final Exam

Math 18, Linear Algebra, Lecture C00, Spring 2017 Review and Practice Problems for Final Exam Math 8, Linear Algebra, Lecture C, Spring 7 Review and Practice Problems for Final Exam. The augmentedmatrix of a linear system has been transformed by row operations into 5 4 8. Determine if the system

More information

GQE ALGEBRA PROBLEMS

GQE ALGEBRA PROBLEMS GQE ALGEBRA PROBLEMS JAKOB STREIPEL Contents. Eigenthings 2. Norms, Inner Products, Orthogonality, and Such 6 3. Determinants, Inverses, and Linear (In)dependence 4. (Invariant) Subspaces 3 Throughout

More information

Math113: Linear Algebra. Beifang Chen

Math113: Linear Algebra. Beifang Chen Math3: Linear Algebra Beifang Chen Spring 26 Contents Systems of Linear Equations 3 Systems of Linear Equations 3 Linear Systems 3 2 Geometric Interpretation 3 3 Matrices of Linear Systems 4 4 Elementary

More information

MATH 213 Linear Algebra and ODEs Spring 2015 Study Sheet for Midterm Exam. Topics

MATH 213 Linear Algebra and ODEs Spring 2015 Study Sheet for Midterm Exam. Topics MATH 213 Linear Algebra and ODEs Spring 2015 Study Sheet for Midterm Exam This study sheet will not be allowed during the test Books and notes will not be allowed during the test Calculators and cell phones

More information

1. In this problem, if the statement is always true, circle T; otherwise, circle F.

1. In this problem, if the statement is always true, circle T; otherwise, circle F. Math 1553, Extra Practice for Midterm 3 (sections 45-65) Solutions 1 In this problem, if the statement is always true, circle T; otherwise, circle F a) T F If A is a square matrix and the homogeneous equation

More information

Fall 2016 MATH*1160 Final Exam

Fall 2016 MATH*1160 Final Exam Fall 2016 MATH*1160 Final Exam Last name: (PRINT) First name: Student #: Instructor: M. R. Garvie Dec 16, 2016 INSTRUCTIONS: 1. The exam is 2 hours long. Do NOT start until instructed. You may use blank

More information

Assignment 1 Math 5341 Linear Algebra Review. Give complete answers to each of the following questions. Show all of your work.

Assignment 1 Math 5341 Linear Algebra Review. Give complete answers to each of the following questions. Show all of your work. Assignment 1 Math 5341 Linear Algebra Review Give complete answers to each of the following questions Show all of your work Note: You might struggle with some of these questions, either because it has

More information

ELEMENTARY LINEAR ALGEBRA

ELEMENTARY LINEAR ALGEBRA ELEMENTARY LINEAR ALGEBRA K. R. MATTHEWS DEPARTMENT OF MATHEMATICS UNIVERSITY OF QUEENSLAND Second Online Version, December 1998 Comments to the author at krm@maths.uq.edu.au Contents 1 LINEAR EQUATIONS

More information

2.2. Show that U 0 is a vector space. For each α 0 in F, show by example that U α does not satisfy closure.

2.2. Show that U 0 is a vector space. For each α 0 in F, show by example that U α does not satisfy closure. Hints for Exercises 1.3. This diagram says that f α = β g. I will prove f injective g injective. You should show g injective f injective. Assume f is injective. Now suppose g(x) = g(y) for some x, y A.

More information

Third Midterm Exam Name: Practice Problems November 11, Find a basis for the subspace spanned by the following vectors.

Third Midterm Exam Name: Practice Problems November 11, Find a basis for the subspace spanned by the following vectors. Math 7 Treibergs Third Midterm Exam Name: Practice Problems November, Find a basis for the subspace spanned by the following vectors,,, We put the vectors in as columns Then row reduce and choose the pivot

More information

Contents. 1 Vectors, Lines and Planes 1. 2 Gaussian Elimination Matrices Vector Spaces and Subspaces 124

Contents. 1 Vectors, Lines and Planes 1. 2 Gaussian Elimination Matrices Vector Spaces and Subspaces 124 Matrices Math 220 Copyright 2016 Pinaki Das This document is freely redistributable under the terms of the GNU Free Documentation License For more information, visit http://wwwgnuorg/copyleft/fdlhtml Contents

More information

What is on this week. 1 Vector spaces (continued) 1.1 Null space and Column Space of a matrix

What is on this week. 1 Vector spaces (continued) 1.1 Null space and Column Space of a matrix Professor Joana Amorim, jamorim@bu.edu What is on this week Vector spaces (continued). Null space and Column Space of a matrix............................. Null Space...........................................2

More information

Linear algebra. S. Richard

Linear algebra. S. Richard Linear algebra S. Richard Fall Semester 2014 and Spring Semester 2015 2 Contents Introduction 5 0.1 Motivation.................................. 5 1 Geometric setting 7 1.1 The Euclidean space R n..........................

More information

Topic 2 Quiz 2. choice C implies B and B implies C. correct-choice C implies B, but B does not imply C

Topic 2 Quiz 2. choice C implies B and B implies C. correct-choice C implies B, but B does not imply C Topic 1 Quiz 1 text A reduced row-echelon form of a 3 by 4 matrix can have how many leading one s? choice must have 3 choice may have 1, 2, or 3 correct-choice may have 0, 1, 2, or 3 choice may have 0,

More information

(v, w) = arccos( < v, w >

(v, w) = arccos( < v, w > MA322 F all206 Notes on Inner Products Notes on Chapter 6 Inner product. Given a real vector space V, an inner product is defined to be a bilinear map F : V V R such that the following holds: Commutativity:

More information

NOTES on LINEAR ALGEBRA 1

NOTES on LINEAR ALGEBRA 1 School of Economics, Management and Statistics University of Bologna Academic Year 207/8 NOTES on LINEAR ALGEBRA for the students of Stats and Maths This is a modified version of the notes by Prof Laura

More information

MTH 464: Computational Linear Algebra

MTH 464: Computational Linear Algebra MTH 464: Computational Linear Algebra Lecture Outlines Exam 2 Material Prof. M. Beauregard Department of Mathematics & Statistics Stephen F. Austin State University March 2, 2018 Linear Algebra (MTH 464)

More information

2. Review of Linear Algebra

2. Review of Linear Algebra 2. Review of Linear Algebra ECE 83, Spring 217 In this course we will represent signals as vectors and operators (e.g., filters, transforms, etc) as matrices. This lecture reviews basic concepts from linear

More information

MATH 23a, FALL 2002 THEORETICAL LINEAR ALGEBRA AND MULTIVARIABLE CALCULUS Solutions to Final Exam (in-class portion) January 22, 2003

MATH 23a, FALL 2002 THEORETICAL LINEAR ALGEBRA AND MULTIVARIABLE CALCULUS Solutions to Final Exam (in-class portion) January 22, 2003 MATH 23a, FALL 2002 THEORETICAL LINEAR ALGEBRA AND MULTIVARIABLE CALCULUS Solutions to Final Exam (in-class portion) January 22, 2003 1. True or False (28 points, 2 each) T or F If V is a vector space

More information

Applied Linear Algebra in Geoscience Using MATLAB

Applied Linear Algebra in Geoscience Using MATLAB Applied Linear Algebra in Geoscience Using MATLAB Contents Getting Started Creating Arrays Mathematical Operations with Arrays Using Script Files and Managing Data Two-Dimensional Plots Programming in

More information