HINTS - BANK PEYAM RYAN TABRIZIAN


This document contains hints to various Math 54 homework problems I've compiled from Fall 2011! It contains hints to the following problems (note that the problems in bold are T/F questions):

[Table of sections and problem numbers, covering Sections 1.1 through 7.2 of the linear algebra text and Sections 4.1 through 10.7 of the differential equations text; each covered section appears as a heading in the body of this document.]

Date: Monday, June 20th.

Section 1.1: Systems of linear equations

All you have to do is row-reduce until it is easier to see whether the system has a solution or not. In particular, if one of the rows is of the form

    [ 0 0 ... 0 | b ]   with b ≠ 0,

then the system has no solution!

Solve the system as if h were a number! It might be useful to divide the second row by 2. Again, use the fact that if one of the rows is of the form [ 0 0 ... 0 | b ] with b ≠ 0, then the system has no solution! The answer is h = 2.

The answer is ad - bc ≠ 0. Solve the system as if a, b, c, d were fixed numbers (say 1, 2, 3, 4). We'll see later a much easier criterion for this problem, namely that the determinant of the coefficient matrix, which is ad - bc here, has to be nonzero!

Solution (by popular demand): Just write the system in matrix form and use row reductions to solve it:

    [ a  b | f ]     [ 1  b/a | f/a ]     [ 1  b/a        | f/a        ]
    [ c  d | g ]  ~  [ c  d   | g   ]  ~  [ 0  d - bc/a   | g - fc/a   ]

In particular, if you want the system to be consistent, there has to be no row of the form [ 0 0 | b ] with b ≠ 0. So in particular d - bc/a ≠ 0, so d ≠ bc/a, so ad ≠ bc, so ad - bc ≠ 0. Another thing you could do is divide the second row by d - bc/a, but you'll get the same result, because remember that you can't divide by 0.

Section 1.2: Row Reduction and Echelon Forms

In each of the problems, the following fact will help you solve the problem:

Fact: A system is consistent if and only if the row echelon form of the augmented matrix has no row of the form [ 0 0 ... 0 | b ] where b ≠ 0.

For 23, 24, 25, it'll help to draw a picture of what the matrix in question looks like. Again, draw a picture of the given matrix. Try out a concrete example to convince you of this! Can you solve for z? If yes, can you solve for y? Finally, can you solve for x? Underdetermined means fewer equations than unknowns. Find two equations in three unknowns which give you a contradiction, such as 0 = 1.
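If you want to double-check a row reduction, here is a minimal sympy sketch of the consistency test above (the augmented matrix is a made-up example, not one of the book problems):

```python
from sympy import Matrix

# Augmented matrix [A | b] of a made-up 3x3 system.
aug = Matrix([
    [1, 2, -1, 3],
    [2, 4, -2, 6],
    [0, 1,  1, 2],
])

R, pivots = aug.rref()  # reduced row echelon form and the pivot columns
print(R)

# Inconsistent exactly when the last (augmented) column is a pivot column,
# i.e. when some row of the echelon form looks like [0 0 ... 0 | b] with b != 0.
print("inconsistent" if (aug.cols - 1) in pivots else "consistent")
```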

4 4 PEYAM RYAN TABRIZIAN Section 1.3: Vector equations Here s a cool trick! Any vector in R 2 is a linear combination of two linearly independent vectors! So the answer is immediately yes In other words, find an inconsistent system of 3 equations! Think about why this is true! Careful! A set is not the same as the span of a set. In particular, b is not in {a 1, a 2, a 3 } because it is not equal to either of those vectors. However, it might be in the span of those 3 vectors! Also, for (c), remember that a 1 is always in the span of {a 1, a 2, a 3 }. Section 1.4: The matrix equation Ax = b , Row-reduce! Also, use Theorem 4(d) on page The easiest way to do this is find a matrix in row-echelon form that has this property, and then just interchange two rows! For example, the following matrix works: Ignore the word unique. Then Theorem 4(a) holds, and in particular Theorem 4(c) holds, which solved the problem! Section 1.5: Solution sets of linear systems The line that goes through 8 2 and with slope (a) F (nontrivial means not all entries are 0. For a counterexample, let A be the zero matrix!) (b) T (c) F (no, it means b = 0) (d) F (no, draw a picture, or see page 55) (e) T (see page 55) , , , Nontrivial means x 0. The best way to do this is to draw a picture of what the reduced-echelon form of the matrix looks like! Also, for (b), if one of the rows of A is a row of zeros, then the equation Ax = b has no solution! Section 1.6: Applications of linear systems Ignore this section if you want, it s more for your personal entertainment! :)

5 HINTS - BANK 5 Section 1.7: Linear independence Notice that v 2 = 3v 1, so in other words, for what values of h is v 3 a multiple of v 1. For (b), the set is always linearly dependent because v 2 = 3v 1 already , Row-reduce! , A set with more than 2 elements in R 2 is always linearly dependent. A set with the zero-vector is always linearly dependent (a) F (the equation Ax = 0 always has the trivial solution, no matter what the columns of A look like!) (b) F (for example, S = 0, 1, 2 doesn t satisfy this! The correct statement should be: there is some vector such that ) (c) T (in other words, 5 vectors in R 4 are linearly dependent) (d) T (otherwise the set would be linearly independent) (a) T (end of page 69) 1 2 (b) F (for example, S = 0, 0 is linearly dependent) 0 0 (c) T (z is a linear combination of x and y!) 1 2 (d) F (for example, S = 0, 0 is linearly dependent) , Remember that a set is linearly dependent if there s a relationship between the vectors in the set. Also, a set with the zero vector is always linearly dependent False (v 1 could be the zero vector!) False (choose v 1 = v 2 = v 4 and v 3 linearly independent from v 1! The point is for linear independence, you have to consider the set as a whole!) Section 1.8: Introduction to Linear Transformations 1.8.3, 1.8.9, Just solve the equation Ax = b, where in 1.8.9, b is the zero vector! Section 1.9: The matrix of a linear transformation For all of those questions, all you need to find is T (e 1 ), T (e 2 ), and group the terms in a matrix!
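To see this recipe concretely, here is a short sketch with a made-up linear map (not one of the book's transformations); numpy is assumed available:

```python
import numpy as np

# A made-up linear map T : R^2 -> R^3, T(x1, x2) = (x1 + 2*x2, 3*x2, x1).
def T(x):
    x1, x2 = x
    return np.array([x1 + 2 * x2, 3 * x2, x1])

e1, e2 = np.array([1, 0]), np.array([0, 1])

# The standard matrix of T has columns T(e1) and T(e2).
A = np.column_stack([T(e1), T(e2)])
print(A)

x = np.array([5, -2])
print(np.allclose(A @ x, T(x)))  # True: multiplying by A reproduces T
```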

(a) T (in other words, if you know T(e1), T(e2), ..., T(en), you know T)
(b) T (see Example 3)
(c) F (the composition of two linear transformations is a linear transformation, see Chapter 4)
(d) F (onto means every vector in the codomain is in the image of T)
(e) F (let A be the 3 x 2 matrix with rows (1, 0), (0, 1), (0, 0); then the columns of A are linearly independent, and hence T is one-to-one by Theorem 12(b))

(a) F (b) T (c) T (d) T (e) T :)

Section 1.10: Linear models in business, science, and engineering

You can ignore this section if you want, it's for your own personal entertainment.

Section 2.1: Matrix operations

Remember the rule (m x n)(n x p) = (m x p). Write out AB and BA and solve for k in AB = BA. D = 2I works! [Qr1 ... Qrp] = Q[r1 ... rp]. Think of this in terms of how you multiply matrices! Remember this for later on, it will be useful!

(a) F (oh, life would be awesome if this were true! But a1a2 doesn't even make sense!)
(b) F (it's the columns of A, using weights from the columns of B)
(c) T
(d) T
(e) F (it's in the reverse order: (AB)^T = B^T A^T)

This amounts to solving a bunch of equations! Label the entries in the first two columns of B as a, b, c, d, and solve for a, b, c, d using the fact that AB is given. Multiply the equation Ax = 0 by C. Db is a solution!

Section 2.2: The inverse of a matrix

2.2.1, Use Theorem 4.

All statements are true, except for (b), because (AB)^{-1} = B^{-1}A^{-1}.
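Both "reverse order" rules above are easy to sanity-check numerically (random made-up matrices, numpy assumed):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

# Transpose of a product reverses the order:
print(np.allclose((A @ B).T, B.T @ A.T))          # True

# Inverse of a product reverses the order too:
inv = np.linalg.inv
print(np.allclose(inv(A @ B), inv(B) @ inv(A)))   # True
```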

7 HINTS - BANK Muliply AX = B by A 1 and use A 1 A = I X = CB A In other words, you have to show that the only solution to Ax = 0 is x = 0 (think about this in terms of linear combinations of the columns of A). For that, multiply the equation Ax = 0 by A D = The best way to get D is by using the equation AD = I 2 and guessing! C cannot exist, because otherwise A would be invertible, and in particular its columns would be linearly independent, which is bogus! Section 2.3: Characterizations of invertible matrices For the first few problems, row-reduction is the key! Just look at theorem 8! If one of those statements holds, then all of them hold! Only if all the entries on the diagonal are nonzero! See theorem 8(c) No! See theorem 8(e) This is super cute!!! If A is invertible, then A 1 is invertible (because ( A 1 ) 1 = A). Hence theorem 8(e) holds for A 1 instead of A) Careful! The word some is key here! (if the statement held for all y, then it would be true). In this case G is not invertible, and hence by Theorem 8(h), the columns cannot span the whole space Yes, because L is invertibel, and hence Theorem 8(h) holds! It is invertible by theorem 8(f) and hence also onto by theorem 8(i). This is one of the special features of R n and about linear transformations! You cannot expect this result to be true in general! Section 2.5: Matrix Factorizations You should be able to do this section without any help! :) Section 2.8: Subspaces of R n Not closed under scalar multiplication Not closed under addition One way to do this is to group all the three vectors together in a matrix A, and solve Ax = 0.
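Here is a small sympy sketch of that last trick (group the vectors as the columns of A and solve Ax = 0); the three vectors are made up:

```python
from sympy import Matrix

# Columns of A are three made-up vectors in R^3.
A = Matrix([[1, 2, 3],
            [0, 1, 1],
            [1, 3, 4]])

null_basis = A.nullspace()  # basis for Nul(A), i.e. the solutions of Ax = 0
print(null_basis)

# A nontrivial solution exists exactly when the columns are linearly dependent.
print("dependent" if null_basis else "independent")
```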

8 8 PEYAM RYAN TABRIZIAN (a) 3 (b) Infinitely many of them! (but in a sense, you ll see that Col(A) is a 2 or 3 dimensional space). (c) Solve Ax = p Use the row-reduced form of A q is the number of pivots, q = n p = number of free variables All statements are True, EXCEPT (c) (should be R n )! Notice that in particular (a) is true! The book is just being anal about this, even though it omitted the word for each, the statement still remains true (the words for each here are implied) To find Nul(A), solve Ax = 0 using the row-echelon form. To find Col(A), notice that the first two columns of A are pivot columns. In particular, a basis for Col(A) is the set of the first two columns of the original matrix A. Section 2.9: Dimension and rank [x] B gives you the coefficients of the linear combination of x in the basis B Find the coefficients of x as a linear combination of b 1 and b , To find Nul(A), solve Ax = 0 using the row-echelon form. To find Col(A), locate the pivot columns of A. In particular, a basis for Col(A) is the set of the pivot columns of the original matrix A All statements are TRUE, EXCEPT for (b) (the line must go through the origin!) Use the fact that dim(nul(a)) + rank(a) = n For example: For example: :) A = A = Section 3.1: Introduction to determinants Always try to look for a row/column full of zeros! May Bomberman be with you , , , What you re asked to do is: compute the determinants of the first matrix and of the second matrix and compare them. Also, explain how to obtain the second matrix from the first using a row-operation!

9 HINTS - BANK The area of the parallelogram always equals to the determinant of [u v]. Use the formula: Area of parallelogram = base height. Section 3.2: Properties of determinants I think what they mean is: First calculate the determinant by expanding along the second column, and then evaluate the resulting sub-determinants using row-reduction! DO NOT EVALUATE THE DETERMINANT! Use row-reduction! In particular, notice that to obtain the determinant in the problem, all you have to do is multiply the second row of the original matrix by 2, and then add the first row to the second row! Hence, the answer should be 2 7 = A matrix A is invertible if and only if det(a) 0. Note: From now on, I m only giving the answer to the T/F questions! I leave it up to you to explain why the result is true or false (a) T? I think the book uses the term row-replacement to mean: add k times a row to another row. (b) F (not true for any echelon form, what about the reduced row-echelon form?) (c) T (d) F , , , All you need to use is the fact that det(ab) = det(a)det(b).
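A quick numerical check of this fact (made-up random matrices, numpy assumed):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

det = np.linalg.det
print(np.isclose(det(A @ B), det(A) * det(B)))          # True
# Two consequences that come up in the exercises:
print(np.isclose(det(A @ A), det(A) ** 2))              # det(A^2) = det(A)^2
print(np.isclose(det(np.linalg.inv(A)), 1 / det(A)))    # det(A^{-1}) = 1/det(A)
```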

10 10 PEYAM RYAN TABRIZIAN Section 3.3: Cramer s rule, volume, and linear transformations For all those problems, all you need to do is imitate the techniques presented in the book! This section is unlikely to be on the exam (except maybe the volumepart) , The system has a unique solution iff det(a) 0 (because that s equivalent to saying that A is invertible) The only thing that makes this difficult is that the parallelogram is not centered at (0, 0). To make it centered at (0, 0), just shift it to the right by one unit! The area stays the same anyway! If det(a) = 0, then the columns of A are linearly dependent, and hence A represents a degenerate (or flat) parallepiped, whose volume is This is similar to the ball-example I showed in section, except that it s even easier, because the problem gives you T (e 1 ) etc. Also, notice that the volume of S is (1 1) 1 = 1 6, because all of its three lengths are equal to 1. Section 4.1: Vector spaces and subspaces Remember the three techniques of showing whether something is a vector space or not! (1) Trick 1: Show it is not a vector space by finding an explicit property which does not hold (2) Trick 2: Show it is a subspace of a (known) vector space (3) Trick 3: Express it in the form Span of some vectors (a) T (this is important to remember!!! A vector isn t a list of numbers any more, it could be anything, even a function!) (b) T (c) T (of itself!) (d) F (e) T (again, the textbook might give you a different answer, but I agree that this is weirdly phrased! What they mean is: If u, v is in H, then u + v is in H) (a) 8 (b) 3 (c) 5 (d) 4

11 HINTS - BANK This is a bit tricky! Remember that H K is the set of vectors that is both in H and in K. Here s the proof that H K is closed under addition (hopefully that ll inspire you to do the rest): Suppose u and v are in H K. Then u and v are in H, so is u + v (since H is a subspace). Also, since u and v are in K, so is u + v (since K is a subspace). Hence u + v is both in H and K, hence u + v is in H K. As for the fact that the union of two subspaces is not a subspace, take H to be the x axis, and K to be the y axis. Section 4.2: Nullspaces, Column Spaces, and Linear Transformations This is very similar to what you ve been doing in sections 2.8 and (a) T (b) F (c) T (d) T (the book might say F, if it is pedantic about the fact that it didn t say for all b ) (e) T (f) T Section 4.3: Linearly independent sets, bases Remember that a basis is a linearly independent set which spans the whole space! Equivalently, as set is a basis if the corresponding matrix A is invertible The system the book talks about is: x + 2y + z = 0 y = y z = z IGNORE THIS PROBLEM, unless you enjoy being tortured :) (a) F (b) F (c) T (d) F (e) F This is the same as problem 3 in Quiz 2! Suppose ap 1 + bp 2 = 0, where 0 is the zero-polynomial. Then either identify coefficients (like in Math 1B), or differentiate, to find out that a = b = 0.

12 12 PEYAM RYAN TABRIZIAN Section 4.4: Coordinate Systems Remember: It s easier to figure out x once we know [x] B than the reverse. Also, remember that the change of coordinates matrix is just the matrix whose columns are the elements in B. It takes a code as its input and tells you which vector corresponds to that code. Its inverse matrix does what you usually want: It produces the coordinates of x Use the basis B = { 1, t, t 2, t 3}, and compute the coordinates of the 4 polynomials. Then the polynomials are linearly independent if and only if their corresponding vectors are linearly independent! Section 4.5: The dimension of a vector space 4.5.1, 4.5.3, 4.5.7, First express the subspace as the span of some vectors, and then use the following useful trick: Useful trick: To find a basis of a collection of vectors, form the matrix A whose columns are the vectors, and all you need to do is to find a basis for Col(A). In particular, the dimension of the subspace is the dimension of Col(A) (which is the number of pivots) I did this on Friday! Suppose B = {v 1, v 2, v n } is a basis for H. What two things can you say about B? Then use the Basis theorem (Theorem 12) Find an infinite linearly independent set in P. For example, { 1, x, x 2 } works! Section 4.6: The rank of a matrix Remember that the rank of A is just dim(col(a)). It is also equal to dim(row(a)) and to Rank(A T ) and to the number of pivots of A , 4.6.9, , , Use the equation dim(n ul(a))+rank(a) = n. Also, rank(a) is largest when Nul(A) is smallest (a) T (b) F (c) T (d) F (e) F This question is just meant to confuse you with words! All that it says is if dim(nul(a)) = 2 and n = 8, then what is Row(A)?
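Here is a minimal sympy sketch of that rank equation (the matrix is made up):

```python
from sympy import Matrix

# A made-up 3x5 matrix, so n = 5 columns.
A = Matrix([[1, 2, 0, 1, 3],
            [0, 0, 1, 4, 1],
            [1, 2, 1, 5, 4]])

rank = A.rank()
nullity = len(A.nullspace())     # dim Nul(A) = number of free variables
print(rank, nullity)             # 2 3
print(rank + nullity == A.cols)  # True: rank(A) + dim Nul(A) = n
```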

13 HINTS - BANK I urge you to do before, it makes this much easier! The point is that if A has rank 1, then all its columns are multiples of the first[ column. In ] particular, let v be the list of the coefficients. For example, if A =, then let v = 1 3, because the second column is 3 times the first one and the 4 third column is 4 times the first one. If the first column of A is zero, try the second column. If the second column is zero, try the third column. If neither of those hold, then A is the zero matrix, which does not have rank 1. Section 4.7: Change of basis Remember: To change coordinates from B to C, just express the vectors in B in terms of the vectors in C (ii), because P is just (a) F (b) T P W U, so P goes from U to V IGNORE! But if you do and , you ll have a completely different and awesome view of calculus and linear algebra! Section 5.1: Eigenvalues and eigenvectors Remember: To find the eigenvalues, calculate det(a λi) and find the zeros of the resulting polynomial. To find a basis for the eigenspaces, find Nul(A λi) for each eigenvalue λ that you found! Also, you should never get Nul(A λi) = {0} Calculate Av, where A is the given matrix and v is the given vector Remember that the determinant of an upper-triangular matrix is just the product on the entries of the diagonal! (a) F (x has to be nonzero) (b) T (c) T (d) T (depending on what you mean by easy and hard :) ) (e) F Section 5.2: The characteristic equation , Remember that the determinant of an upper/lower-triangular matrix is just the product on the entries of the diagonal!
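Here is a small sympy sketch of the Section 5.1 recipe above, with a made-up 2x2 matrix:

```python
from sympy import Matrix, symbols, eye

lam = symbols('lambda')

A = Matrix([[4, 1],
            [2, 3]])  # made-up example

# Eigenvalues: zeros of det(A - lambda*I).
char_poly = (A - lam * eye(2)).det()
print(char_poly.factor())  # factors as (lambda - 2)*(lambda - 5)

# Eigenspaces: a basis of Nul(A - lambda*I) for each eigenvalue.
for ev in A.eigenvals():
    print(ev, (A - ev * eye(2)).nullspace())
```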

(a) F (b) F (c) T (d) F (-5 is an eigenvalue)

Section 5.3: Diagonalization

5.3.1, If A = PDP^{-1}, then A^k = PD^kP^{-1}.

(a) T (b) T (c) F (d) F

Section 5.4: Eigenvectors and linear transformations

The matrix in question is A = [ [T(b1)]_D  [T(b2)]_D  [T(b3)]_D ]. Remember that e1 = (1, 0, 0), etc.

For (b), show T(p + q) = T(p) + T(q) and T(cp) = cT(p); for (c), calculate T(1), T(t) and T(t^2).

A = [ [T(b1)]_B  [T(b2)]_B ]. That is, calculate T(b1) and T(b2) and express your answer in terms of b1 and b2.

This problem is MUCH longer than you think it is! To show A is not diagonalizable, calculate the eigenvalues of A, and find a basis for each eigenspace. If the sum of the dimensions of the bases is not equal to 2, then A is not diagonalizable.

Section 5.5: Complex eigenvalues

5.5.1, 5.5.3, Just use the same technique you usually use to find eigenvalues and eigenvectors!

5.5.7, 5.5.9, First, calculate r = sqrt(det(A)) (which is also the length of the first row of A). Then factor out r from A and recognize the resulting matrix as a rotation matrix, i.e. find φ such that the remaining matrix equals

    [ cos(φ)  -sin(φ) ]
    [ sin(φ)   cos(φ) ]
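For that last hint (the scaling-rotation factorization), here is a numerical sketch with a made-up matrix of the form [[a, -b], [b, a]]; numpy assumed:

```python
import numpy as np

a, b = 3.0, 4.0
C = np.array([[a, -b],
              [b,  a]])

r = np.hypot(a, b)      # length of the first row; also sqrt(det(C)) = |a + bi|
phi = np.arctan2(b, a)  # the rotation angle

R = np.array([[np.cos(phi), -np.sin(phi)],
              [np.sin(phi),  np.cos(phi)]])
print(np.allclose(C, r * R))                     # True: C = r * (rotation by phi)
print(np.isclose(r, np.sqrt(np.linalg.det(C))))  # True
```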

First, find the eigenvalues of A, and pick one of them. Then the first ROW of C consists of the real and imaginary parts of the eigenvalue you picked. Then remember that the diagonal entries of C are the same, and the other entries are opposites of each other. Finally, to get P, find an eigenvector corresponding to the eigenvalue you picked; the columns of P are the real and imaginary parts of that eigenvector!

Section 6.1: Inner products, lengths, and orthogonality

Divide the vector by its length! What they mean by the transpose definition is u · v = u^T v. Use the facts that (A + B)^T = A^T + B^T and (cA)^T = cA^T. Use the fact that ||w||^2 = w · w for any w, and expand the left-hand side out using a distributive law similar to (a + b)(c + d) = ac + ad + bc + bd.

Section 6.2: Orthogonal sets

Remember: A set B is orthogonal if for every pair of distinct vectors u and v, u · v = 0. It is orthonormal if it is orthogonal and every vector has length 1. An orthogonal set can be made orthonormal by dividing every vector by its length.

If B = {u1, u2, u3} is an orthogonal basis, then

    [x]_B = ( (x · u1)/(u1 · u1), (x · u2)/(u2 · u2), (x · u3)/(u3 · u3) )

The formula for the orthogonal projection of y onto the line L spanned by u is:

    ŷ = ( (y · u)/(u · u) ) u

The distance between y and L is then ||y - ŷ||.

Section 6.3: Orthogonal projection

Here are all the basic facts that you'll need:

(1) If W = Span{u1, u2, ..., uk} with u1, ..., uk orthogonal, then the orthogonal projection of y onto W is:

    ŷ = ( (y · u1)/(u1 · u1) ) u1 + ( (y · u2)/(u2 · u2) ) u2 + ... + ( (y · uk)/(uk · uk) ) uk

(2) ŷ is in W, and y - ŷ is in W^⊥ (that is, orthogonal to W).
(3) y = ŷ + (y - ŷ), which decomposes y as a sum of two vectors, one in W and the other one orthogonal to W.
(4) ŷ is the closest point to y in W.
(5) ||y - ŷ|| is the smallest distance between y and W.
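A short numerical sketch of facts (1) through (5), with a made-up orthogonal pair u1, u2 spanning a plane W in R^3 (numpy assumed):

```python
import numpy as np

# Made-up orthogonal basis of a plane W in R^3 (check: u1 . u2 = 0).
u1 = np.array([1.0, 1.0, 0.0])
u2 = np.array([1.0, -1.0, 1.0])
y  = np.array([2.0, 3.0, 4.0])

def proj(y, u):
    return (y @ u) / (u @ u) * u

y_hat = proj(y, u1) + proj(y, u2)  # fact (1): projection of y onto W
z = y - y_hat                      # fact (2): the component orthogonal to W

print(y_hat)
print(np.isclose(z @ u1, 0.0), np.isclose(z @ u2, 0.0))  # True True
print(np.linalg.norm(z))  # fact (5): the distance from y to W
```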

16 16 PEYAM RYAN TABRIZIAN (a) T (b) T (c) F (d) T (e) T Section 6.4: The Gram-Schmidt process Use the formula given in Theorem 11. To get an orthonormal basis, just divide every vector at the end by its length. At every step, it s helpful to multiply your vector by a scalar to avoid fractions. This is ok, because you ll normalize them at the end anyway! Here s the trick: If A = QR, then Q T A = Q T QR = R, since Q T Q = I (because the columns of Q are orthonormal). Hence: R = Q T A Section 6.5: Least squares problems Here s the general procedure to solve least-squares problems: To solve Ax = b in the least-squares sense, multiply both sides by A T, and solve the (easier) equation A T Aˆx = A T b. Your solution ˆx is called the least-squares solution. The least squares error is Aˆx b No! Because Av b is smaller than Au b, so u cannot be a least-squares solution of Ax = b, by the definition of a least-squares solution! (beginning of section 6.5) Use the formula in theorem (a) T (b) T (c) F (d) T (e) T Section 6.6: Applications to linear models 6.6.1, Use equation (1) on page 360. It s basically just solving a leastsquares problem with a special matrix A.
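Here is a minimal numpy sketch of that least-squares recipe (the system is made up and inconsistent on purpose):

```python
import numpy as np

# Made-up inconsistent system Ax = b (more equations than unknowns).
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0],
              [1.0, 4.0]])
b = np.array([1.0, 2.0, 2.0, 5.0])

# Normal equations: A^T A x_hat = A^T b.
x_hat = np.linalg.solve(A.T @ A, A.T @ b)
print(x_hat)
print(np.linalg.norm(A @ x_hat - b))  # the least-squares error ||A x_hat - b||

# Cross-check against numpy's built-in least-squares solver.
print(np.allclose(x_hat, np.linalg.lstsq(A, b, rcond=None)[0]))  # True
```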

17 HINTS - BANK (a) x 1 x 2 1 x 2 x 2 2 X = x 3 x 2 3 x 4 x 2, β = 4 x 5 x 2 5 where (x 1, y 1 ) = (1, 1.8) etc. [ β1 β 2 y 1 ] y 2, y = y 3 y 4 y 5 (b) You can ignore this part if you want! (or use MATLAB) OH GOD! IGNORE, IGNORE, IGNORE!!! You have better things to do! :) Start with X T X ˆβ = X T y. Then, by definition of X, we get: That is: 1 x 1 y 1 [ ] x 2 [ ] [ ] ˆβ y 2 = x 1 x 2 x n.. ˆβ 1 x 1 x 2 x n. 1 x n y n ˆβ 0 + ˆβ 1 x 1 [ ] ˆβ 0 + ˆβ y 1 1 x 2 [ ] x 1 x 2 x n. = y 2 x 1 x 2 x n. ˆβ 1 x n y n NOW expand the left-hand-side and the right-hand-side out, and what you should get are the normal equations, using the notation x and y etc. introduced right before the exercise! Here x, y = 4u 1 v 1 + 5u 2 v 2 Section 6.7: Inner product spaces 6.7.3, 6.7.5, Here p, q = p( 1)q( 1) + p(0)q(0) + p(1)q(1). And p = p, p. Finally, remember that the formula for orthogonal projection remains the same, namely: ˆq = q, p p, p p

18 18 PEYAM RYAN TABRIZIAN Here p, q = p( 3)q( 3) + p( 1)q( 1) + p(1)q(1) + p(3)q(3) (a) ˆp 2 = p 2, p 0 p 0, p 0 p 0 + p 2, p 1 p 1, p 1 p 1 (b) q = p 2 ˆp 2! (as usual! :)). For the second part, all that is says is that multiply q by a constant, so that the new polynomial cq satisfies cq( 3) = 1, cq( 1) = 1 etc Here p, q = p( 2)q( 2) + p( 1)q( 1) + p(0)q(0) + p(1)q(1) + p(2)q(2). If we let p 3 = t 2, then we have: ˆp 3 = p 3, p 0 p 0, p 0 p 0 + p 3, p 1 p 1, p 1 p 1 + p 3, p 2 p 2, p 2 p Show that it satisfies axioms 1, 2, 3, 4 in the definition of an inner product space (page 368). You may want to use the fact that (A + B) T = A T + B T and (ca) T = ca T. Also, for 4, remember that w T w = w w 0 unless w = The Cauchy-Schwarz inequality says u v u v , f, g = 1 0 f(t)g(t)dt. Also, g = g, g Section 6.8: Applications of inner product spaces This is the same as fitting data to a line (section 6.6), except you multiply the equation Ax = y on the left by the diagonal matrix W of the weights to obtain W Ax = W y. For example, in the first problem, we have: W = Now solve the equation W Ax = W y by using least-squares! Should give you the same result, because solving W Ax = W y is the same as solving (2W )Ax = (2W )y Use the formula in example 2 (on page 380), except you have to add the term g,p3 p 3,p 3 p (a) Use the Gram-Schmidt process on the set B = { 1, t, t 2} with respect to the inner product p, q = p( 5)q( 5) + p( 3)q( 3) + p( 1)q( 1) + p(1)q(1) + p(3)q(3) + p(5)q(5) (b) Just imitate example 2.

Section 7.1: Diagonalization of Symmetric matrices

7.1.7, Remember orthogonal matrices have orthonormal columns!

First diagonalize the matrix as usual, and then apply Gram-Schmidt to each eigenspace!

Since P is orthogonal and square, P^{-1} = P^T. Solve for R and then calculate R^T.

Look at Example 4 on page 393.

Section 7.2: Quadratic forms

Remember to evenly distribute the cross-product terms! For example, the matrix of the quadratic form x1^2 + 6x1x2 + 2x2^2 is:

    A = [ 1  3 ]
        [ 3  2 ]

Find the matrix A of the quadratic form, then diagonalize A. The matrix P of eigenvectors is the matrix you're looking for. Finally, to write the new quadratic form, use the fact that x = Py and substitute each variable x1, x2, etc. with the corresponding linear combination of y1 and y2.

You don't need any linear algebra for this! Notice the following (using the constraint x1^2 + x2^2 = 1):

    5x1^2 + 8x2^2 = 5x1^2 + 5x2^2 + 3x2^2 = 5(x1^2 + x2^2) + 3x2^2 = 5 + 3x2^2

Now the largest value of x2^2 is 1, hence the largest value of the quadratic form is 5 + 3 = 8.
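Here is a numerical sketch of the "find A, then diagonalize" recipe for the example form above (numpy assumed; eigh is used because A is symmetric):

```python
import numpy as np

# Matrix of the quadratic form x1^2 + 6*x1*x2 + 2*x2^2 (cross term split evenly).
A = np.array([[1.0, 3.0],
              [3.0, 2.0]])

# Orthogonal diagonalization A = P D P^T; eigh returns orthonormal eigenvectors.
eigvals, P = np.linalg.eigh(A)
D = np.diag(eigvals)
print(np.allclose(A, P @ D @ P.T))  # True

# With the change of variables x = P y, the form becomes
# lambda_1*y1^2 + lambda_2*y2^2, with no cross term:
print(eigvals)
```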

20 20 PEYAM RYAN TABRIZIAN NOTE: In case you re stuck with the more standard differential equations questions (hom equations, undetermined coeffs, variation of parameters, etc.), make sure to look at my Differential equations - Handout. That should help clarify some things! Section 4.1: Introduction: The mass-spring oscillator NOTE: If you re running out of time, SKIP this section! I ll let you know if a problem from that section is graded! Plug in y(t) = sin(ωt) into the equation To find the maximum value, do the standard calculus optimization trick! Calculate y (t), set it equal to 0, and solve for t The limit is 0 by the squeeze theorem 4.1.8, This problem just asks you to use the method of undetermined coefficients (section 4.4) Wow, what a long problem! :O You can definitely SKIP this if you want! It s just a matter of plugging in solutions and using the method of undetermined coefficients. Section 4.2: Homogeneous linear equations: the general solution This problem looks scary, but it s not that scary! In each question, try to solve for c 1 and c 2. In (a), you ll be able to do that, in (b), there will be no solutions, and in (c) there will be infinitely many solutions! Here s the trick I showed in section. Consider the Wronskian: [ ] y1 (t) y W (t) = det 2 (t) y 1(t) y 2(t) You DON T have to calculate this determinant explicitly. Just pick a point t 0 between 0 and 1 (e π ) and calculate W (t 0 ). If W (t 0 ) 0, then your solutions are linearly independent. But if W (t 0 ) = 0, then you can t always conclude that the solutions are linearly dependent (I ll talk this some day), so usually that means pick another point that works.
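Here is a small sympy sketch of the Wronskian trick (the two solutions are made up, as if they came from some second-order equation):

```python
from sympy import symbols, exp, sin, cos, Matrix, simplify

t = symbols('t')

# Two made-up solutions of a homogeneous second-order equation.
y1 = exp(t) * cos(2 * t)
y2 = exp(t) * sin(2 * t)

W = Matrix([[y1,         y2        ],
            [y1.diff(t), y2.diff(t)]]).det()
print(simplify(W))   # 2*exp(2*t), never zero, so y1 and y2 are independent

# Evaluating at one convenient point t0 is enough:
print(W.subs(t, 0))  # 2
```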

21 HINTS - BANK 21 Section 4.3: Auxiliary equations with complex roots The problems should be pretty straightforward :) Ignore (c)! Section 4.4: The method of undetermined coefficients No! (in our problems, we always assumed the exponent of t to be apositive integer!) Yes! 3 t = e ln(3)t, remember that ln(3) is just a number No! Guess y p (t) = A y p (t) = Ae 2t y p (t) = A cos(3t) + B sin(3t) y p (t) = (Ax + B)e x y p (t) = At 2 e t (the double root coincides with the right-hand-side!) Yes Section 4.5: The superposition principle No (because of the 1 t term) Actually yes! Because sin 2 (t) = 1 cos(2t) 2 Section 4.6: Variation of parameters Note: You are allowed to use your calculator of Wolfram alpha to evaluate the given integrals! This is Math 54 after all, and not Math 1B :) The easiest way to do the problems in this section is to look at my differential equations handout! The formula is: Let y 1 (t) and y 2 (t) be the solutions to the homogeneous equation, and suppose y p (t) = v 1 (t)y 1 (t) + v 2 (t)y 2 (t). Let: [ ] y1 (t) y W (t) = 2 (t) y 1(t) y 2(t) And solve: [ ] [ ] v W (t) 1 (t) 0 v 2(t) = f(t) where f(t) is the inhomogeneous term.

22 22 PEYAM RYAN TABRIZIAN Don t freak out! Here we have y 1 (t) = cos(t) and y 2 (t) = sin(t). Just use the variation of parameters formula with f instead of the inhomogeneous term. At some point, you should get: and v 1(t) = sin(t)f(t) v 2(t) = cos(t)f(t) Then, to get v 1 and v 2, just integrate from 0 to t: v 1 (t) = v 2 (t) = t 0 t 0 sin(s)f(s)ds cos(s)f(s)ds Finally, use the fact that y p (t) = v 1 (t) cos(t) + v 2 (t) sin(t), and use the formula sin(t) cos(s) sin(s) cos(t). Section 4.7: Qualitative considerations for variable-coefficient and nonlinear equations The book gives a physical explanation (which doesn t make sense to me), so let me give you a more mathematical explanation! Again, the word qualitative is important, which means your answer doesn t have to be rigorous at all! Notice that y = 6y 2 0. Hence the function y is concave up everywhere! This says that y should look like the parabola y = x 2. Moreover, since y(0) = 1 and y (0) = 1, we know that y starts at 1 and then starts to decrease. However, because y is concave up and y is decreasing so quiclly, at some point y has to attain a minimum, and then increase without bounds! (because as y is large, y is large, so y increases faster and faster) Note that this agrees with the corresponding graph! Ughhhh, sorry, but this is waaay too much physics! :( This is just a matter of plugging in y 2 into Legendre s equation Just compare the graphs with figures 4.13, 4.16, 4.17

23 HINTS - BANK 23 Section 4.8: A closer look at free mechanical vibrations 4.8.1, 4.8.9, Use the equation my + by + ky = 0, where: (1) m is mass (2) b is damping (3) k is stiffness Also, beware that the problems give you initial conditions! m = 3, k = 48, b = 0. y(0) = 1 2, y (0) = 2. The period is 2π ω and the frequency is ω 2π, where ω is the term you find in the cos and sin terms in your solution (for example, if your solution involves cos(2t), then ω = 2). The amplitude is C C2 2, where C 1 and C 2 are the two constants in your solution. Finally, you need to find the first time t such that y(t) = Notice that your solution is different depending on whether 0 b < 8, b = 8, or b > m = 2, k = 40, b = 8 5, y(0) = 10, y (0) = 2. First find your solution, and then solve y (t) = 0 (this might involve tan 1, see example 3) m = 1, k = 100, b = 0.2, y(0) = 0, y (0) = 1. First find your solution, and then solve y (t) = 0 (this might involve tan 1, see example 3). Section 4.9: A closer look at forced mechanical vibrations Use your calculator! Careful! Here one of the roots of the auxiliary equation, r = 3i coincides with the term on the right-hand-side, 2 cos(3t), hence you have to guess y p (t) = At cos(3t) + Bt sin(3t) (hence the term resonance ) First divide the equation by m, then find the homogeneous solution, and then find a particular solution of the form y p (t) = A cos(γt)+b sin(γt) (no t factors because γ ω). The trigonometric identity the book talks about is: ( ) ( ) A + B A B cos(a) cos(b) = 2 sin sin 2 2 Finally, (c) looks long, but all you have to do is to sketch the curve in (b). Use your calculator!
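If you want to check your answer to the first of these, here is a sympy sketch of the m = 3, k = 48, b = 0 problem. I'm taking y'(0) = -2; that sign is an assumption, so match it to what the book actually says:

```python
from sympy import symbols, Function, dsolve, Eq, Rational

t = symbols('t')
y = Function('y')

m, b, k = 3, 0, 48  # mass, damping, stiffness, as in the hint above
ode = Eq(m * y(t).diff(t, 2) + b * y(t).diff(t) + k * y(t), 0)

# Initial conditions: y(0) = 1/2 and (assumed) y'(0) = -2.
sol = dsolve(ode, y(t), ics={y(0): Rational(1, 2),
                             y(t).diff(t).subs(t, 0): -2})
print(sol)  # y(t) = cos(4t)/2 - sin(4t)/2, so omega = 4 and the period is pi/2
```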

24 24 PEYAM RYAN TABRIZIAN This looks bad, but it s not as horrible as you might think! First of all, y(t) = y 0 (t) + y p (t) as usual, but here y 0 (t) is given by formulas (18) and (19) on page 509. Moreover, y p stays exactly the same! Section 6.1: Basic theory of linear differential equations 6.1.1, First of all, make sure that the coefficient of y is equal to 1. Then look at the domain of each term, including the inhomogeneous term (more precisely, the part of the domain which contains the initial condition 2 resp. 5). Then the answer is just the intersection of the domains you found! cos 2 (x) + sin 2 (x) = 1, so linearly dependent Use the Wronskian with x = Use the Wronskian with x = For example, for (a), we have: L [2y 1 y 2 ] = 2L [y 1 ] L [y 2 ] = 2x sin(x) (x 2 + 1) = 2x sin(x) x 2 1 So 2y 1 y 2 solves the equation for (a) Either you can use the Wronskian with x = 0 (the matrix becomes a diagonal matrix with (n, n)th term n), or use the following reasoning: If a 0 + a 1 x + a 2 x 2 + a n x n = 0 This means that for EVERY x, x is a zero of a 0 + a 1 x + a 2 x 2 + a n x n (by definition of the zero function). However, this polynomial is of degree n, hence cannot have more than n zeros unless a 1 = a 2 = = a n = 0, which we want! Section 6.2: Homogeneous linear equations with constant coefficients 6.2.1, 6.2.3, 6.2.5, 6.2.7, 6.2.9, , The following fact might be useful: Rational roots theorem: If a polynomial p has a zero of the form r = a b, then a divides the constant term of p and b divides the leading coefficient of p. This helps you guess a zero of p. Then use long division to factor out p.

25 HINTS - BANK , The reason this is written out in such a weird way is because the auxiliary polynomial is easy to figure out! For example, in , the auxiliary polynomial is (r 1) 2 (r + 3)(r 2 + 2r + 5) Suppose: a 0 e rx + a 1 xe rx + + a m 1 x m 1 e rx = 0 Now cancel out the e rx, and you get: a 0 + a 1 x + + a m 1 x m 1 = 0 But 1, x, x 2, x m 1 are linearly independent, so a 0 = a 1 = a m 1 = 0, which is what we wanted! Section 9.4: Linear systems in normal form 9.4.1, Those problems are easier to do than to explain. For example, for 9.4.1: A = [ ] [ ] 3 1 t 2, f = 1 2 e t Just beware of the following: If for example y (t) doesn t contain x(t), then the corresponding term in the matrix A is , Use the Wronskian! The good news is that the wronskian is very easy to calculate! Just ignore any constants and put all the three vectors in a matrix. For example, for , the (pre)-wronskian is: W (t) = e2t e 2t 0 0 e 2t e 3t 5e 2t e 2t 0 And as usual, pick your favorite point t 0, and evaluate det( W (t 0 )). If this is nonzero, your functions are linearly independent Just show L[x + y] = L[x] + L[y] and L[cx] = cl[x], where c is a constant. Use linearity of the derivative and of matrix multiplication Just use the formula x(t) = X(t)X 1 (t 0 )x 0 with t 0 = 0 and x 0 = x(0). Notice that you only have to calculate the inverse of X at 0, not in general!

26 26 PEYAM RYAN TABRIZIAN Section 9.5: Homogeneous linear systems with constant coefficients If you re lost about this, check out the handout Systems of differential equations on my website! Essentially all you have to do is to find the eigenvalues and eigenvectors of A. Also, to deal with the finding the eigenvalues part, remember the following theorem: Rational roots theorem: If a polynomial p has a zero of the form r = a b, then a divides the constant term of p and b divides the leading coefficient of p. This helps you guess a zero of p. Then use long division to factor out p First, draw two lines, one spanned by u 1 and the other one spanned by u 2. Then on the first line, draw arrows pointing away from the origin (because of the e 2t -term in the solution, points on that line move away from the origin). On the second line, draw arrows pointing towards the origin (because of the e 2t -term, solutions move towards the origin). Finally, for all the other points, all you have to do is to connect the arrows (think of it like drawing a force field or a velocity field). If you want a picture of how the answer looks like, google saddle phase portrait differential equations and under images, check out the second image you get! , The fundamental solution set is just the matrix whose columns are the solutions to your differential equation. Basically find the general solution to your differential equation, ignore the constants, and put everything else in a matrix! For (c), you don t need to derive the relations, just solve the following equation for u 2 : Au 2 = u 1. Section 9.6: Complex eigenvalues Again, for all those problems, look at the handout Systems of differential equations, where everything is discussed in more detail! First find all the eigenvectors u 1, u 2, u 3 of A. Then the general solution is x(t) = At λ1 u 1 +Bt λ2 u 2 +Ct λ3 u 3, where λ i is the eigenvalue corresponding to u i Use use equation (10) on page 600 with m 1 = m 2 = 1, k 1 = k 2 = k 3 = 2. Careful: Prof. Grunbaum s method doesn t work here as the k s are 1! Note: The trick where you let y 1 = x 1, y 2 = x 1, y 3 = x 2, y 4 = x 2 is important to remember! It allows you to convert second-order differential equations into a system of differential equations!
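Here is a numerical sketch of the eigenvalue method from Section 9.5, with a made-up 2x2 system x' = Ax (numpy and scipy assumed):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 2.0],
              [2.0, 1.0]])  # made-up coefficient matrix

# General solution: x(t) = c1*exp(l1*t)*v1 + c2*exp(l2*t)*v2.
eigvals, V = np.linalg.eig(A)  # eigenvalues 3 and -1, eigenvectors in the columns of V
print(eigvals)

# Match an initial condition x(0) = x0 and compare with e^{At} x0 at t = 1.
x0 = np.array([1.0, 0.0])
c = np.linalg.solve(V, x0)                  # coordinates of x0 in the eigenbasis
x1 = (V * np.exp(eigvals * 1.0)) @ c        # sum of c_i * exp(l_i) * v_i
print(np.allclose(x1, expm(A * 1.0) @ x0))  # True (matches Section 9.8)
```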

27 HINTS - BANK 27 Section 9.7: Nonhomogeneous linear equations Again, the handout Systems of differential equations goes through this in more detail! [ ] [ ] a1 b1 Note: In what follows, a = and b = are 2 vectors Guess f(t) = e t a a 2 b Guess f(t) = e 2t a Guess f(t) = e t (a + tb) , The formula is: ) ( W [ ] v (t) 1 v 2 = f where W (t) is the (pre)-wronskian, or fundamental matrix for your system (essentially the solutions but without the constants). NOTE: The book s answer is WRONG (thank you Raphaël for noticing this!) Either of the following answers is correct: [ ] [ ] Ae t 1 + Be t [ ] 5t e2t 5t e2t All that this says is that you have to find a particular solution to x (t) = Ax(t) + f 1 and x (t) = Ax(t) + f 2, where: For the first equation, guess x(t) = a. f 1 = 1 1, f 2 = 0 e t 0 2e t For the second equation, you have to be very careful because one of the eigenvalues ( 1) coincides with the right-hand-side of your equation. If you look at the previous problem , you see that you have to guess: x(t) = te t a + e t b If you plug this into the differential equations and equate the terms with e t and the terms with te t, you should get 6 equations in 6 unknowns!

28 28 PEYAM RYAN TABRIZIAN Note: You should get that there are two free variables! (because two of the equations are redundant) In this case, just pick a value for the free variables! Section 9.8: The matrix exponential functions Again, check out the Systems of differential equations -handout! The formula is e At = X(t)X 1 (0). Remember that you only have to find the inverse at 0, not in general! Or use e At = P e Dt P 1, where D is your matrix of eigenvalues, and P is your matrix of eigenvectors. Section 10.2: Method of separation of variables , , Just solve your equation the way you would usually do (for 5, use undetermined coefficients) and plug in the initial conditions. You may or may not find a contradiction! If you find 0 = 0, that usually means there are infinitely many solutions, depending on your constant A or B , , You have to split up your analysis into three cases: Case 1: λ > 0. Then let λ = ω 2, where ω > 0. This helps you get rid of square roots. Case 2: λ = 0. Case 3: λ < 0. Then λ = ω 2, where ω < 0. In each case, solve the equation and plug in your initial condition. You may or may not get a contradiction. Also, remember that y has to be nonzero! At some point, in the case λ > 0, you should get: ω cos(ωπ) + sin(ωπ) = 0 At this point, divide your equation by cos(ωπ) to get: That is: ω + sin(ωπ) cos(ωπ) = 0 ω + tan(ωπ) = 0 If you draw a picture of tan(πx), you might notice that there are infinitely many solutions of tan(πx) = x, hence the above equation has infinitely many solutions ω m, for each m = 1, 2,. That s how far you have to go to solve the problem!
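If you're curious what those ω_m actually are, here is a short numerical sketch (scipy assumed); there is one root of ω + tan(πω) = 0 between each pair of consecutive poles of tan(πω):

```python
import numpy as np
from scipy.optimize import brentq

# The condition from the hint: omega*cos(omega*pi) + sin(omega*pi) = 0,
# i.e. g(omega) = omega + tan(pi*omega) = 0 (away from the zeros of cos).
def g(w):
    return w + np.tan(np.pi * w)

eps = 1e-9
# One root in each interval (k - 1/2, k + 1/2) between poles of tan(pi*omega).
roots = [brentq(g, k - 0.5 + eps, k + 0.5 - eps) for k in range(1, 6)]
print(np.round(roots, 4))  # the first few omega_m; there are infinitely many
```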

29 HINTS - BANK You DON T have to solve the equation! Just use formula (24) on page 638. Then use formulas (25) and (26) on page 639. And equate coefficients that look alike. For example, the coefficient of sin(9x) in (26), which corresponds to n = 9, should equal to This is just separation of variables. Plug u(r, θ) = R(r)T (θ) into your equation, put all the terms with T on the left-hand-side, and all the terms with R on the right-hand-side, and use the fact that the LHS = RHS = λ, where λ doesn t depend on r or θ This is a strange problem! It s more a logic exercise than a math exercise. For the first part: u(r, 0) = 0 implies R(r)T (0) = 0, and provided R(r) 0 (which you can do because otherwise u(r, t) = R(r)T (θ) = 0), cancel out R(r) and get T (0) = 0. Similar for T (π). For the second part: u(r, θ) = R(r)T (θ). Notice that T doesn t depend on r, so what happens as r 0 + is solely determined by R(r). So if R(r) is bounded as r 0 +, then so is u, and if R(r) is unbounded as r 0 +, then so is u. Since u is bounded as r 0 +, we get that R has to be bounded as well as r 0 +. Section 10.3: Fourier series , , f is even if f( x) = f(x), f is odd if f( x) = f(x) Just calculate f g( x) = f( x)g( x) For what follows, use the following formulas: f(x) a0 2 + a n = 1 T b n = 1 T n=1 T { a n cos T T T f(x) cos f(x) sin Where T is such that f is defined on ( T, T ) Notice f is odd, so all the a n are 0. ( nπx ) + b n sin T ( nπx ) dx T ( nπx ) dx T ( nπx )} T To calculate the integral, split up the integral from

30 30 PEYAM RYAN TABRIZIAN , The Fourier series converges to f(x) if f is continuous at x, and converges to f(x+ )+f(x ) 2 if f is discontinuous at x. As for the endpoints T and T, the fourier series converges to the average of f at those endpoints Just show: for all m and n. 1 Use the following formulas: ( ) ( ) (2m 1)π (2n 1)π cos x sin x dx = ( ) ( ) (2m 1)π (2n 1)π cos x cos x dx = ( ) ( ) (2m 1)π (2n 1)π sin x sin x dx = cos(a) cos(b) = cos(a+b)+cos(a B), 2 sin(a) sin(b) = cos(a B) cos(a+b) as well as the fact that odd-ness (for the first one) Just calculate: 1 1 f(x)g(x)dx 1 1 g(x)2 dx for every function g(x) in (this follows from formula (20) on page 652) Just show: H 0 (x)h 1 (x)e x2 dx = 0 H 0 (x)h 2 (x)e x2 dx = 0 H 1 (x)h 2 (x)e x2 dx = 0 For the first one and the third one, use odd-ness. For the second one, split up the integral: 4 x 2 e x2 dx 2 e x2 dx And for the left integral, use integration by parts with du = xe x2 and v = x, which should equal to the right integral.
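Here is a small sympy sketch of the coefficient formulas from Section 10.3, for a made-up function f(x) = x on (-π, π) (so T = π):

```python
from sympy import symbols, integrate, pi, sin, cos, simplify

x = symbols('x')
n = symbols('n', positive=True, integer=True)

f = x  # a made-up (odd) function on (-pi, pi), so T = pi in the formulas above

a_n = integrate(f * cos(n * x), (x, -pi, pi)) / pi
b_n = integrate(f * sin(n * x), (x, -pi, pi)) / pi
print(simplify(a_n))  # 0, as expected, since f is odd
print(simplify(b_n))  # equals 2*(-1)**(n + 1)/n
```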

31 HINTS - BANK 31 Section 10.4: Fourier cosine and sine series IMPORTANT NOTE: The book uses the following trick A LOT: Namely, suppose that when you calculate your coefficients A m or B m, you get something like: A m = ( 1)m+1 +1 πm. Then notice the following: If m is even, then ( 1) m = 0, so A m = 0, and if m is odd, ( 1) m = 2, and A m = 2 πm. So at some point, you would like to say: f(x) = m=1,modd A m cos(mx) The way you do this is as follows: Since m is odd m = 2k 1, for k = 1, 2, 3, and so the sum becomes: f(x) = k=1 2 cos((2k 1)x) π(2k 1) , π-periodic extension just means repeat the graph of f. The even-2π periodic extension is just the function: { f( x) if π < x < 0 f e (x) = f(x) if0 < x < π The odd-2π periodic extension is just the function: f( x) if π < x < 0 f o (x) = 0 ifx = 0 f(x) if0 < x < π And repeat all those graphs! , , Use the formulas: where: ( πmx ) f(x) = A m cos T m=0 A m = 2 T A 0 = 1 T T 0 T 0 f(x)dx ( πmx ) f(x) cos dx T

32 32 PEYAM RYAN TABRIZIAN , Use the formulas: where: ( πmx ) f(x) = B m sin T m=0 B 0 = See next section! B m = 2 T T 0 ( πmx ) f(x) sin dx T Section 10.5: The heat equation The best advice I can give you is: Read the PDE handout, specifically the section about the heat equation! It outlines all the important steps you ll need! Also read the important note I wrote in the previous section! Don t worry about this for the exam, but basically imitate Example 2! Your solution is u(x, t) = v(x) + w(x, t) where v(x) = 5 + 5x π and w(x, t) solves the corresponding homogeneous equation with w(0, t) = 0, w(π, t) = 0 but with w(x, 0) = sin(3x) sin(5x) v(x) , Don t worry about this for the exam, but basically because we re dealing with an inhomogeneous solution, the general solution u(x, t) is of the following form: u(x, t) = v(x) + w(x, t) where v(x) is a particular solution to the differential equation, and w(x, t) is the general solution to the homogeneous equation (36), (37), (38) on page 671 (careful about the initial term, it s w(x, 0) = f(x) v(x), not w(x, 0) = f(x)) To find v use formula (35) on page 671, and to find w, solve equations (36), (37), (38). Section 10.6: The wave equation Read the PDE handout, specifically the section about the wave equation! outlines all the important steps you ll need! It

33 HINTS - BANK 33 Section 10.7: Laplace s equation Read the PDE handout, specifically the section about Laplace s equation! outlines all the important steps you ll need! It The most important thing to remember is that when you solve for Y (y), your solution might involve exponentials, i.e. Y (y) = Ae wy + Be wy for some constants A, B, w (which might depend on w). Do NOT use this form! Instead, use the fact that: and write: e w + e w 2 e w e w 2 = cosh(w) = sinh(w) Y (y) = A cosh(wy) + B sinh(wy) where A and B might be constants different from above (but still call them A and B). This will simplify your algebra by a LOT, trust me!


More information

Chapter 6: Orthogonality

Chapter 6: Orthogonality Chapter 6: Orthogonality (Last Updated: November 7, 7) These notes are derived primarily from Linear Algebra and its applications by David Lay (4ed). A few theorems have been moved around.. Inner products

More information

Practice Final Exam. Solutions.

Practice Final Exam. Solutions. MATH Applied Linear Algebra December 6, 8 Practice Final Exam Solutions Find the standard matrix f the linear transfmation T : R R such that T, T, T Solution: Easy to see that the transfmation T can be

More information

Table of contents. d 2 y dx 2, As the equation is linear, these quantities can only be involved in the following manner:

Table of contents. d 2 y dx 2, As the equation is linear, these quantities can only be involved in the following manner: M ath 0 1 E S 1 W inter 0 1 0 Last Updated: January, 01 0 Solving Second Order Linear ODEs Disclaimer: This lecture note tries to provide an alternative approach to the material in Sections 4. 4. 7 and

More information

4.3 - Linear Combinations and Independence of Vectors

4.3 - Linear Combinations and Independence of Vectors - Linear Combinations and Independence of Vectors De nitions, Theorems, and Examples De nition 1 A vector v in a vector space V is called a linear combination of the vectors u 1, u,,u k in V if v can be

More information

Elementary Linear Algebra Review for Exam 2 Exam is Monday, November 16th.

Elementary Linear Algebra Review for Exam 2 Exam is Monday, November 16th. Elementary Linear Algebra Review for Exam Exam is Monday, November 6th. The exam will cover sections:.4,..4, 5. 5., 7., the class notes on Markov Models. You must be able to do each of the following. Section.4

More information

MATH 240 Spring, Chapter 1: Linear Equations and Matrices

MATH 240 Spring, Chapter 1: Linear Equations and Matrices MATH 240 Spring, 2006 Chapter Summaries for Kolman / Hill, Elementary Linear Algebra, 8th Ed. Sections 1.1 1.6, 2.1 2.2, 3.2 3.8, 4.3 4.5, 5.1 5.3, 5.5, 6.1 6.5, 7.1 7.2, 7.4 DEFINITIONS Chapter 1: Linear

More information

Linear Algebra Final Exam Study Guide Solutions Fall 2012

Linear Algebra Final Exam Study Guide Solutions Fall 2012 . Let A = Given that v = 7 7 67 5 75 78 Linear Algebra Final Exam Study Guide Solutions Fall 5 explain why it is not possible to diagonalize A. is an eigenvector for A and λ = is an eigenvalue for A diagonalize

More information

Math 20F Practice Final Solutions. Jor-el Briones

Math 20F Practice Final Solutions. Jor-el Briones Math 2F Practice Final Solutions Jor-el Briones Jor-el Briones / Math 2F Practice Problems for Final Page 2 of 6 NOTE: For the solutions to these problems, I skip all the row reduction calculations. Please

More information

Review problems for MA 54, Fall 2004.

Review problems for MA 54, Fall 2004. Review problems for MA 54, Fall 2004. Below are the review problems for the final. They are mostly homework problems, or very similar. If you are comfortable doing these problems, you should be fine on

More information

Study Guide for Linear Algebra Exam 2

Study Guide for Linear Algebra Exam 2 Study Guide for Linear Algebra Exam 2 Term Vector Space Definition A Vector Space is a nonempty set V of objects, on which are defined two operations, called addition and multiplication by scalars (real

More information

1 9/5 Matrices, vectors, and their applications

1 9/5 Matrices, vectors, and their applications 1 9/5 Matrices, vectors, and their applications Algebra: study of objects and operations on them. Linear algebra: object: matrices and vectors. operations: addition, multiplication etc. Algorithms/Geometric

More information

Solving a system by back-substitution, checking consistency of a system (no rows of the form

Solving a system by back-substitution, checking consistency of a system (no rows of the form MATH 520 LEARNING OBJECTIVES SPRING 2017 BROWN UNIVERSITY SAMUEL S. WATSON Week 1 (23 Jan through 27 Jan) Definition of a system of linear equations, definition of a solution of a linear system, elementary

More information

Final Review Written by Victoria Kala SH 6432u Office Hours R 12:30 1:30pm Last Updated 11/30/2015

Final Review Written by Victoria Kala SH 6432u Office Hours R 12:30 1:30pm Last Updated 11/30/2015 Final Review Written by Victoria Kala vtkala@mathucsbedu SH 6432u Office Hours R 12:30 1:30pm Last Updated 11/30/2015 Summary This review contains notes on sections 44 47, 51 53, 61, 62, 65 For your final,

More information

Feb 10/ True or false: if A is a n n matrix, Ax = 0 has non-zero solution if and only if there are some b such that Ax = b has no solution.

Feb 10/ True or false: if A is a n n matrix, Ax = 0 has non-zero solution if and only if there are some b such that Ax = b has no solution. Feb 0/. True or false: if A is a n n matrix, Ax = 0 has non-zero solution if and only if there are some b such that Ax = b has no solution. Answer: This is true. Firstly we prove the if part: let b be

More information

1. Select the unique answer (choice) for each problem. Write only the answer.

1. Select the unique answer (choice) for each problem. Write only the answer. MATH 5 Practice Problem Set Spring 7. Select the unique answer (choice) for each problem. Write only the answer. () Determine all the values of a for which the system has infinitely many solutions: x +

More information

LINEAR ALGEBRA 1, 2012-I PARTIAL EXAM 3 SOLUTIONS TO PRACTICE PROBLEMS

LINEAR ALGEBRA 1, 2012-I PARTIAL EXAM 3 SOLUTIONS TO PRACTICE PROBLEMS LINEAR ALGEBRA, -I PARTIAL EXAM SOLUTIONS TO PRACTICE PROBLEMS Problem (a) For each of the two matrices below, (i) determine whether it is diagonalizable, (ii) determine whether it is orthogonally diagonalizable,

More information

Math Linear Algebra Final Exam Review Sheet

Math Linear Algebra Final Exam Review Sheet Math 15-1 Linear Algebra Final Exam Review Sheet Vector Operations Vector addition is a component-wise operation. Two vectors v and w may be added together as long as they contain the same number n of

More information

LINEAR ALGEBRA QUESTION BANK

LINEAR ALGEBRA QUESTION BANK LINEAR ALGEBRA QUESTION BANK () ( points total) Circle True or False: TRUE / FALSE: If A is any n n matrix, and I n is the n n identity matrix, then I n A = AI n = A. TRUE / FALSE: If A, B are n n matrices,

More information

ANSWERS (5 points) Let A be a 2 2 matrix such that A =. Compute A. 2

ANSWERS (5 points) Let A be a 2 2 matrix such that A =. Compute A. 2 MATH 7- Final Exam Sample Problems Spring 7 ANSWERS ) ) ). 5 points) Let A be a matrix such that A =. Compute A. ) A = A ) = ) = ). 5 points) State ) the definition of norm, ) the Cauchy-Schwartz inequality

More information

Math 250B Final Exam Review Session Spring 2015 SOLUTIONS

Math 250B Final Exam Review Session Spring 2015 SOLUTIONS Math 5B Final Exam Review Session Spring 5 SOLUTIONS Problem Solve x x + y + 54te 3t and y x + 4y + 9e 3t λ SOLUTION: We have det(a λi) if and only if if and 4 λ only if λ 3λ This means that the eigenvalues

More information

Remark 1 By definition, an eigenvector must be a nonzero vector, but eigenvalue could be zero.

Remark 1 By definition, an eigenvector must be a nonzero vector, but eigenvalue could be zero. Sec 5 Eigenvectors and Eigenvalues In this chapter, vector means column vector Definition An eigenvector of an n n matrix A is a nonzero vector x such that A x λ x for some scalar λ A scalar λ is called

More information

LINEAR ALGEBRA REVIEW

LINEAR ALGEBRA REVIEW LINEAR ALGEBRA REVIEW SPENCER BECKER-KAHN Basic Definitions Domain and Codomain. Let f : X Y be any function. This notation means that X is the domain of f and Y is the codomain of f. This means that for

More information

CHAPTER 5. Higher Order Linear ODE'S

CHAPTER 5. Higher Order Linear ODE'S A SERIES OF CLASS NOTES FOR 2005-2006 TO INTRODUCE LINEAR AND NONLINEAR PROBLEMS TO ENGINEERS, SCIENTISTS, AND APPLIED MATHEMATICIANS DE CLASS NOTES 2 A COLLECTION OF HANDOUTS ON SCALAR LINEAR ORDINARY

More information

Examples True or false: 3. Let A be a 3 3 matrix. Then there is a pattern in A with precisely 4 inversions.

Examples True or false: 3. Let A be a 3 3 matrix. Then there is a pattern in A with precisely 4 inversions. The exam will cover Sections 6.-6.2 and 7.-7.4: True/False 30% Definitions 0% Computational 60% Skip Minors and Laplace Expansion in Section 6.2 and p. 304 (trajectories and phase portraits) in Section

More information

APPLICATIONS The eigenvalues are λ = 5, 5. An orthonormal basis of eigenvectors consists of

APPLICATIONS The eigenvalues are λ = 5, 5. An orthonormal basis of eigenvectors consists of CHAPTER III APPLICATIONS The eigenvalues are λ =, An orthonormal basis of eigenvectors consists of, The eigenvalues are λ =, A basis of eigenvectors consists of, 4 which are not perpendicular However,

More information

APPM 2360 Section Exam 3 Wednesday November 19, 7:00pm 8:30pm, 2014

APPM 2360 Section Exam 3 Wednesday November 19, 7:00pm 8:30pm, 2014 APPM 2360 Section Exam 3 Wednesday November 9, 7:00pm 8:30pm, 204 ON THE FRONT OF YOUR BLUEBOOK write: () your name, (2) your student ID number, (3) lecture section, (4) your instructor s name, and (5)

More information

Solutions to Math 53 Math 53 Practice Final

Solutions to Math 53 Math 53 Practice Final Solutions to Math 5 Math 5 Practice Final 20 points Consider the initial value problem y t 4yt = te t with y 0 = and y0 = 0 a 8 points Find the Laplace transform of the solution of this IVP b 8 points

More information

2.3. VECTOR SPACES 25

2.3. VECTOR SPACES 25 2.3. VECTOR SPACES 25 2.3 Vector Spaces MATH 294 FALL 982 PRELIM # 3a 2.3. Let C[, ] denote the space of continuous functions defined on the interval [,] (i.e. f(x) is a member of C[, ] if f(x) is continuous

More information

x 3y 2z = 6 1.2) 2x 4y 3z = 8 3x + 6y + 8z = 5 x + 3y 2z + 5t = 4 1.5) 2x + 8y z + 9t = 9 3x + 5y 12z + 17t = 7

x 3y 2z = 6 1.2) 2x 4y 3z = 8 3x + 6y + 8z = 5 x + 3y 2z + 5t = 4 1.5) 2x + 8y z + 9t = 9 3x + 5y 12z + 17t = 7 Linear Algebra and its Applications-Lab 1 1) Use Gaussian elimination to solve the following systems x 1 + x 2 2x 3 + 4x 4 = 5 1.1) 2x 1 + 2x 2 3x 3 + x 4 = 3 3x 1 + 3x 2 4x 3 2x 4 = 1 x + y + 2z = 4 1.4)

More information

MATH 20F: LINEAR ALGEBRA LECTURE B00 (T. KEMP)

MATH 20F: LINEAR ALGEBRA LECTURE B00 (T. KEMP) MATH 20F: LINEAR ALGEBRA LECTURE B00 (T KEMP) Definition 01 If T (x) = Ax is a linear transformation from R n to R m then Nul (T ) = {x R n : T (x) = 0} = Nul (A) Ran (T ) = {Ax R m : x R n } = {b R m

More information

Math 520 Exam 2 Topic Outline Sections 1 3 (Xiao/Dumas/Liaw) Spring 2008

Math 520 Exam 2 Topic Outline Sections 1 3 (Xiao/Dumas/Liaw) Spring 2008 Math 520 Exam 2 Topic Outline Sections 1 3 (Xiao/Dumas/Liaw) Spring 2008 Exam 2 will be held on Tuesday, April 8, 7-8pm in 117 MacMillan What will be covered The exam will cover material from the lectures

More information

MATH 369 Linear Algebra

MATH 369 Linear Algebra Assignment # Problem # A father and his two sons are together 00 years old. The father is twice as old as his older son and 30 years older than his younger son. How old is each person? Problem # 2 Determine

More information

Introduction to Linear Algebra, Second Edition, Serge Lange

Introduction to Linear Algebra, Second Edition, Serge Lange Introduction to Linear Algebra, Second Edition, Serge Lange Chapter I: Vectors R n defined. Addition and scalar multiplication in R n. Two geometric interpretations for a vector: point and displacement.

More information

Math 369 Exam #2 Practice Problem Solutions

Math 369 Exam #2 Practice Problem Solutions Math 369 Exam #2 Practice Problem Solutions 2 5. Is { 2, 3, 8 } a basis for R 3? Answer: No, it is not. To show that it is not a basis, it suffices to show that this is not a linearly independent set.

More information

EXERCISE SET 5.1. = (kx + kx + k, ky + ky + k ) = (kx + kx + 1, ky + ky + 1) = ((k + )x + 1, (k + )y + 1)

EXERCISE SET 5.1. = (kx + kx + k, ky + ky + k ) = (kx + kx + 1, ky + ky + 1) = ((k + )x + 1, (k + )y + 1) EXERCISE SET 5. 6. The pair (, 2) is in the set but the pair ( )(, 2) = (, 2) is not because the first component is negative; hence Axiom 6 fails. Axiom 5 also fails. 8. Axioms, 2, 3, 6, 9, and are easily

More information

Second-Order Homogeneous Linear Equations with Constant Coefficients

Second-Order Homogeneous Linear Equations with Constant Coefficients 15 Second-Order Homogeneous Linear Equations with Constant Coefficients A very important class of second-order homogeneous linear equations consists of those with constant coefficients; that is, those

More information

Systems of Linear ODEs

Systems of Linear ODEs P a g e 1 Systems of Linear ODEs Systems of ordinary differential equations can be solved in much the same way as discrete dynamical systems if the differential equations are linear. We will focus here

More information

Linear Algebra, Summer 2011, pt. 3

Linear Algebra, Summer 2011, pt. 3 Linear Algebra, Summer 011, pt. 3 September 0, 011 Contents 1 Orthogonality. 1 1.1 The length of a vector....................... 1. Orthogonal vectors......................... 3 1.3 Orthogonal Subspaces.......................

More information

21 Linear State-Space Representations

21 Linear State-Space Representations ME 132, Spring 25, UC Berkeley, A Packard 187 21 Linear State-Space Representations First, let s describe the most general type of dynamic system that we will consider/encounter in this class Systems may

More information

REVIEW FOR EXAM III SIMILARITY AND DIAGONALIZATION

REVIEW FOR EXAM III SIMILARITY AND DIAGONALIZATION REVIEW FOR EXAM III The exam covers sections 4.4, the portions of 4. on systems of differential equations and on Markov chains, and..4. SIMILARITY AND DIAGONALIZATION. Two matrices A and B are similar

More information

Solutions to Final Practice Problems Written by Victoria Kala Last updated 12/5/2015

Solutions to Final Practice Problems Written by Victoria Kala Last updated 12/5/2015 Solutions to Final Practice Problems Written by Victoria Kala vtkala@math.ucsb.edu Last updated /5/05 Answers This page contains answers only. See the following pages for detailed solutions. (. (a x. See

More information

W2 ) = dim(w 1 )+ dim(w 2 ) for any two finite dimensional subspaces W 1, W 2 of V.

W2 ) = dim(w 1 )+ dim(w 2 ) for any two finite dimensional subspaces W 1, W 2 of V. MA322 Sathaye Final Preparations Spring 2017 The final MA 322 exams will be given as described in the course web site (following the Registrar s listing. You should check and verify that you do not have

More information

This is a closed everything exam, except for a 3x5 card with notes. Please put away all books, calculators and other portable electronic devices.

This is a closed everything exam, except for a 3x5 card with notes. Please put away all books, calculators and other portable electronic devices. Math 54 final, Spring 00, John Lott This is a closed everything exam, except for a x5 card with notes. Please put away all books, calculators and other portable electronic devices. You need to justify

More information

Final Exam Practice 3, May 8, 2018 Math 21b, Spring Name:

Final Exam Practice 3, May 8, 2018 Math 21b, Spring Name: Final Exam Practice 3, May 8, 8 Math b, Spring 8 Name: MWF 9 Oliver Knill MWF Jeremy Hahn MWF Hunter Spink MWF Matt Demers MWF Yu-Wen Hsu MWF Ben Knudsen MWF Sander Kupers MWF Hakim Walker TTH Ana Balibanu

More information

MATH 300, Second Exam REVIEW SOLUTIONS. NOTE: You may use a calculator for this exam- You only need something that will perform basic arithmetic.

MATH 300, Second Exam REVIEW SOLUTIONS. NOTE: You may use a calculator for this exam- You only need something that will perform basic arithmetic. MATH 300, Second Exam REVIEW SOLUTIONS NOTE: You may use a calculator for this exam- You only need something that will perform basic arithmetic. [ ] [ ] 2 2. Let u = and v =, Let S be the parallelegram

More information

MATH 221: SOLUTIONS TO SELECTED HOMEWORK PROBLEMS

MATH 221: SOLUTIONS TO SELECTED HOMEWORK PROBLEMS MATH 221: SOLUTIONS TO SELECTED HOMEWORK PROBLEMS 1. HW 1: Due September 4 1.1.21. Suppose v, w R n and c is a scalar. Prove that Span(v + cw, w) = Span(v, w). We must prove two things: that every element

More information

Calculus I Review Solutions

Calculus I Review Solutions Calculus I Review Solutions. Compare and contrast the three Value Theorems of the course. When you would typically use each. The three value theorems are the Intermediate, Mean and Extreme value theorems.

More information

Linear Algebra, Summer 2011, pt. 2

Linear Algebra, Summer 2011, pt. 2 Linear Algebra, Summer 2, pt. 2 June 8, 2 Contents Inverses. 2 Vector Spaces. 3 2. Examples of vector spaces..................... 3 2.2 The column space......................... 6 2.3 The null space...........................

More information

1. Diagonalize the matrix A if possible, that is, find an invertible matrix P and a diagonal

1. Diagonalize the matrix A if possible, that is, find an invertible matrix P and a diagonal . Diagonalize the matrix A if possible, that is, find an invertible matrix P and a diagonal 3 9 matrix D such that A = P DP, for A =. 3 4 3 (a) P = 4, D =. 3 (b) P = 4, D =. (c) P = 4 8 4, D =. 3 (d) P

More information

OHSx XM511 Linear Algebra: Solutions to Online True/False Exercises

OHSx XM511 Linear Algebra: Solutions to Online True/False Exercises This document gives the solutions to all of the online exercises for OHSx XM511. The section ( ) numbers refer to the textbook. TYPE I are True/False. Answers are in square brackets [. Lecture 02 ( 1.1)

More information

Dot Products. K. Behrend. April 3, Abstract A short review of some basic facts on the dot product. Projections. The spectral theorem.

Dot Products. K. Behrend. April 3, Abstract A short review of some basic facts on the dot product. Projections. The spectral theorem. Dot Products K. Behrend April 3, 008 Abstract A short review of some basic facts on the dot product. Projections. The spectral theorem. Contents The dot product 3. Length of a vector........................

More information

235 Final exam review questions

235 Final exam review questions 5 Final exam review questions Paul Hacking December 4, 0 () Let A be an n n matrix and T : R n R n, T (x) = Ax the linear transformation with matrix A. What does it mean to say that a vector v R n is an

More information

MATH 3330 INFORMATION SHEET FOR TEST 3 SPRING Test 3 will be in PKH 113 in class time, Tues April 21

MATH 3330 INFORMATION SHEET FOR TEST 3 SPRING Test 3 will be in PKH 113 in class time, Tues April 21 MATH INFORMATION SHEET FOR TEST SPRING Test will be in PKH in class time, Tues April See above for date, time and location of Test It will last 7 minutes and is worth % of your course grade The material

More information

Problem Set (T) If A is an m n matrix, B is an n p matrix and D is a p s matrix, then show

Problem Set (T) If A is an m n matrix, B is an n p matrix and D is a p s matrix, then show MTH 0: Linear Algebra Department of Mathematics and Statistics Indian Institute of Technology - Kanpur Problem Set Problems marked (T) are for discussions in Tutorial sessions (T) If A is an m n matrix,

More information

(v, w) = arccos( < v, w >

(v, w) = arccos( < v, w > MA322 F all206 Notes on Inner Products Notes on Chapter 6 Inner product. Given a real vector space V, an inner product is defined to be a bilinear map F : V V R such that the following holds: Commutativity:

More information