1.6: 16, 20, 24, 27, 28

16) If A is positive definite, then A^{-1} is positive definite.

The statement is easy to verify for the following 2x2 matrix:

    A = [a b; b c]

If that matrix is positive definite, then its upper-left determinants must be positive. These conditions are stated by the inequalities a > 0 and ac - b^2 > 0. The inverse of this 2x2 matrix is easy to find; it is given by

    A^{-1} = 1/(ac - b^2) [c -b; -b a]

For A^{-1} to be positive definite, its upper-left determinants must also be positive. The resulting inequalities can be simplified as shown, using the given inequality ac - b^2 > 0:

    c/(ac - b^2) > 0    =>    c > 0
    ca/(ac - b^2)^2 - b^2/(ac - b^2)^2 > 0    =>    (ac - b^2)/(ac - b^2)^2 > 0    =>    1/(ac - b^2) > 0

The inequality on the right is trivially satisfied, since ac - b^2 > 0. The inequality on the left, c > 0, is also automatically satisfied: if c <= 0, then ac <= 0 (using the fact that a > 0), and ac - b^2 > 0 would imply b^2 < ac <= 0, which is a contradiction because the square of a real number cannot be negative. Since both determinant tests for A^{-1} are satisfied automatically, just by the fact that A is positive definite, this shows that A^{-1} is also positive definite given that A is positive definite.

The case of a general matrix A can also be proven, using the eigendecomposition A = S Λ S^{-1}. If A is positive definite, then all of its eigenvalues must be positive, i.e. the diagonal entries of the diagonal matrix Λ are all positive. Since Λ is a diagonal matrix, its inverse Λ^{-1} is also a diagonal matrix whose entries are simply the reciprocals of the corresponding entries of Λ; mathematically, [Λ^{-1}]_{ii} = ([Λ]_{ii})^{-1}. This means that the diagonal entries of Λ^{-1} are also positive. The inverse of A is given by

    A^{-1} = (S Λ S^{-1})^{-1} = (S^{-1})^{-1} Λ^{-1} S^{-1} = S Λ^{-1} S^{-1}

Since the diagonal entries of Λ^{-1}, which are known to be positive, are the eigenvalues of A^{-1}, the eigenvalues of A^{-1} are all positive, and thus A^{-1} is positive definite.

20)

    A = [cos θ -sin θ; sin θ cos θ] [2 0; 0 5] [cos θ sin θ; -sin θ cos θ]

Looking at the above equation, a few things are clear.
First, the third matrix is the transpose of the first matrix. Second, the first matrix has orthonormal column vectors. This is easy to check: the inner product of the first column with itself gives cos^2 θ + sin^2 θ = 1; the inner product of the second column with itself gives sin^2 θ + cos^2 θ = 1; and the inner product of the first column with the second column gives -cos θ sin θ + sin θ cos θ = 0. Third, the second matrix is a diagonal matrix. Putting all these observations together, it is clear that the right-hand side is the eigendecomposition A = Q Λ Q^T, where

    Q = [cos θ -sin θ; sin θ cos θ]    Λ = [2 0; 0 5]

With A written in this form, the eigenvalues, eigenvectors, and determinant of A are easy to calculate.

9/29/ Page 1 of 10

(a) The determinant of A is the product of the eigenvalues: 2 * 5 = 10.

(b) The eigenvalues are on the diagonal of Λ: λ_1 = 2 and λ_2 = 5.

(c) The eigenvectors are the column vectors of Q: v_1 = [cos θ; sin θ] and v_2 = [-sin θ; cos θ].

(d) Since the eigenvalues of A, given in part (b), are all positive, the matrix A is positive definite.

24) We want to show that P(u) = (1/2) u^T K u - u^T f is equal to (1/2)(u - K^{-1} f)^T K (u - K^{-1} f) - (1/2) f^T K^{-1} f. We can start with the latter expression and simplify it to obtain the former. The simplification uses the assumption that K is a symmetric, positive definite matrix (so (K^{-1})^T = K^{-1}), and also the fact that u^T f = f^T u, which holds because the inner product of those two column vectors is a scalar, so the order of the inner product does not matter.

    (1/2)(u - K^{-1} f)^T K (u - K^{-1} f) - (1/2) f^T K^{-1} f
      = (1/2)[u^T K (u - K^{-1} f) - f^T (K^{-1})^T K (u - K^{-1} f)] - (1/2) f^T K^{-1} f
      = (1/2)[u^T K u - u^T f - f^T u + f^T K^{-1} f] - (1/2) f^T K^{-1} f
      = (1/2) u^T K u - (1/2)[u^T f + f^T u] + (1/2) f^T K^{-1} f - (1/2) f^T K^{-1} f
      = (1/2) u^T K u - u^T f
      = P(u)

Other things to notice about P(u) can be seen by studying the longer expression. The term (1/2)(u - K^{-1} f)^T K (u - K^{-1} f) can be written as (1/2) v^T K v by letting v = u - K^{-1} f. Since K is positive definite, by the definition of positive definiteness this term is positive for every v ≠ 0. When v does equal 0, it implies that u = K^{-1} f, and in that case P(u) simplifies to -(1/2) f^T K^{-1} f, which is the minimum energy P_min.

27) We are given that the matrices H (size m x m) and K (size n x n) are positive definite, and that the matrices M and N are defined in block notation by

    M = [H 0; 0 K]    N = [K K; K K]

If we denote the upper triangular Gaussian-eliminated forms of H and K by U_H and U_K respectively, then we can perform Gaussian elimination on M and N and get

    M --elimination--> [U_H 0; 0 U_K]    N --elimination--> [U_K U_K; 0 0]

So the pivots of M are composed of the pivots of H and the pivots of K. Since the pivots of both H and K are positive, the pivots of M are all positive, and thus M is positive definite.
The pivots of N are composed of the pivots of K and n zeros. Since N has positive and zero pivots, it is not positive definite but rather positive semidefinite.

The eigenvalues of M and N can also be connected to the eigenvalues of H and K. We define v^H_i and λ^H_i to be the m eigenvectors and corresponding eigenvalues of H, with i = 1, 2, ..., m. We also define v^K_i and λ^K_i to be the n eigenvectors and corresponding eigenvalues of K, with i = 1, 2, ..., n. Then the following observations can be made:

    M [v^H_i; 0] = [H v^H_i; 0] = λ^H_i [v^H_i; 0]
    M [0; v^K_i] = [0; K v^K_i] = λ^K_i [0; v^K_i]

So the eigenvalues of M are composed of the eigenvalues of H and the eigenvalues of K. Also, if we define e_i to be the column vector consisting of (i - 1) zeros followed by a one and then (n - i) zeros, we can use it to find the eigenvalues of N:

    N [e_i; -e_i] = [K e_i - K e_i; K e_i - K e_i] = 0
    N [v^K_i; v^K_i] = [2 K v^K_i; 2 K v^K_i] = 2 λ^K_i [v^K_i; v^K_i]

Since e_i is orthogonal to e_j for i ≠ j, it is clear that the vectors [e_i; -e_i] for i = 1, 2, ..., n are n linearly independent vectors. This means that zero is an eigenvalue of N with multiplicity n. The remaining eigenvalues come from the eigenvalues of K, but as seen above they are doubled. So the eigenvalues of N are 2 times the eigenvalues of K, together with the eigenvalue zero with multiplicity n.

Finally, we want to construct the Cholesky factor of M, chol(M), from chol(H) and chol(K). We let A = chol(H), so that H = A^T A, and let B = chol(K), so that K = B^T B. We then define a matrix C given in block notation by

    C = [A 0; 0 B] = [chol(H) 0; 0 chol(K)]

If we multiply the transpose of C with C, we find that it equals

    C^T C = [A^T A 0; 0 B^T B] = [H 0; 0 K] = M

So M = C^T C, which is the Cholesky factorization. Thus chol(M) = C, where C was defined above in terms of chol(H) and chol(K).

28)

    P = [w_1 w_2 u] [1 0 -1; 0 1 1; -1 1 2] [w_1; w_2; u] = w_1^2 + w_2^2 + 2u^2 - 2uw_1 + 2uw_2

The eigenvalues of the middle matrix, denoted A, are λ_1 = 1, λ_2 = 0, and λ_3 = 3, and their corresponding normalized eigenvectors are

    v_1 = [1/√2; 1/√2; 0]    v_2 = [1/√3; -1/√3; 1/√3]    v_3 = [1/√6; -1/√6; -2/√6]

A can be written in eigendecomposed form, A = Q Λ Q^T, where

    Q = [v_1 v_2 v_3]    Λ = diag(1, 0, 3)

So P = y A y^T, where y is defined to be y = [w_1 w_2 u]. Then P = y A y^T = y Q Λ Q^T y^T. Letting x = Q^T y^T = [x_1 x_2 x_3]^T, we get P = x^T Λ x = λ_1 x_1^2 + λ_2 x_2^2 + λ_3 x_3^2. Then x is found as follows:

    [x_1; x_2; x_3] = [1/√2 1/√2 0; 1/√3 -1/√3 1/√3; 1/√6 -1/√6 -2/√6] [w_1; w_2; u]
                    = [(w_1 + w_2)/√2; (w_1 - w_2 + u)/√3; (w_1 - w_2 - 2u)/√6]

So P = x^T Λ x = 1·((w_1 + w_2)/√2)^2 + 0·((w_1 - w_2 + u)/√3)^2 + 3·((w_1 - w_2 - 2u)/√6)^2. The expression on the right can be simplified considerably:

    P = (1/2)(w_1 + w_2)^2 + (1/2)(w_1 - w_2 - 2u)^2
      = (1/2)(w_1^2 + w_2^2 + 2w_1 w_2) + (1/2)[(w_1 - w_2)^2 - 4(w_1 - w_2)u + 4u^2]
      = (1/2)(w_1^2 + w_2^2 + 2w_1 w_2) + (1/2)(w_1^2 + w_2^2 - 2w_1 w_2) - 2w_1 u + 2w_2 u + 2u^2
      = w_1^2 + w_2^2 + 2u^2 - 2uw_1 + 2uw_2

This agrees with the original given equation.
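The claims in problems 16, 20, 24, 27, and 28 can be spot-checked numerically. The sketch below uses NumPy; the matrix sizes, the random seed, and the angle θ = 0.7 are illustrative choices, not values from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_spd(n):
    # A generic symmetric positive definite matrix for testing
    G = rng.standard_normal((n, n))
    return G @ G.T + n * np.eye(n)

# Problem 16: the inverse of a positive definite matrix is positive definite
S = random_spd(4)
assert np.all(np.linalg.eigvalsh(np.linalg.inv(S)) > 0)

# Problem 20: A = Q diag(2, 5) Q^T has determinant 10 and eigenvalues 2 and 5
theta = 0.7                                   # arbitrary angle
c, s = np.cos(theta), np.sin(theta)
Q = np.array([[c, -s], [s, c]])
A20 = Q @ np.diag([2.0, 5.0]) @ Q.T
assert np.isclose(np.linalg.det(A20), 10.0)
assert np.allclose(np.sort(np.linalg.eigvalsh(A20)), [2.0, 5.0])

# Problem 24: both expressions for the energy P(u) agree
K = random_spd(4)
u, f = rng.standard_normal(4), rng.standard_normal(4)
Kinv_f = np.linalg.solve(K, f)
v = u - Kinv_f
P = 0.5 * u @ K @ u - u @ f
assert np.isclose(P, 0.5 * v @ K @ v - 0.5 * f @ Kinv_f)

# Problem 27: eigenvalues and Cholesky factor of the block matrices M and N
H, Kn = random_spd(3), random_spd(2)
Z = np.zeros((3, 2))
M = np.block([[H, Z], [Z.T, Kn]])
N = np.block([[Kn, Kn], [Kn, Kn]])
eig = np.linalg.eigvalsh
assert np.allclose(np.sort(eig(M)), np.sort(np.concatenate([eig(H), eig(Kn)])))
assert np.allclose(np.sort(eig(N)), np.sort(np.concatenate([np.zeros(2), 2 * eig(Kn)])))
# numpy's cholesky returns a lower-triangular L with H = L L^T, so the upper
# factor in the text's convention (H = A^T A) is L^T
C = np.block([[np.linalg.cholesky(H).T, Z], [Z.T, np.linalg.cholesky(Kn).T]])
assert np.allclose(C.T @ C, M)

# Problem 28: the middle matrix has eigenvalues 1, 0, and 3
A28 = np.array([[1.0, 0.0, -1.0], [0.0, 1.0, 1.0], [-1.0, 1.0, 2.0]])
assert np.allclose(np.sort(eig(A28)), [0.0, 1.0, 3.0])
```

Each assertion corresponds to one boxed conclusion above.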

2.1: 5, 6, 7, 8

5) In the case of four identical springs connecting three identical masses together and to the fixed top and bottom, the matrix K relating the displacements of the masses, u, to the forces on the masses, f, through the equation K u = f is actually the 3x3 special matrix times the spring constant c. The force on each mass is identical, since it is simply equal to mg. Note that the convention here is to take both displacements and forces in the downward direction to be positive. From the derivation of K in the fixed-fixed case, it was found that K = A^T C A, where in this case C = cI, and A along with the other matrices are listed below:

    K = c [2 -1 0; -1 2 -1; 0 -1 2]
    A = [1 0 0; -1 1 0; 0 -1 1; 0 0 -1]
    C = diag(c, c, c, c)
    f = [mg; mg; mg]

The tension in the springs is given by w = C A u = C A (K^{-1} f):

    C A K^{-1} = (1/4) [3 2 1; -1 2 1; -1 -2 1; -1 -2 -3]
    w = C A K^{-1} f = (mg/4) [6; 2; -2; -6] = mg [3/2; 1/2; -1/2; -3/2]

So the reaction force at the top (where pointing downward is positive) is R_t = -w_1 = -(3/2) mg. The reaction force at the bottom (where again pointing downward is positive) is R_b = w_4 = -(3/2) mg. Both reaction forces are negative, meaning they both point upward to counteract the force of gravity pointing downward. Also, notice that the sum of the reaction forces is R_t + R_b = -3mg, which is to be expected because the reaction forces must exactly balance the force of gravity on the three masses.

6) Now, in the fixed-free case with three equal masses and three springs, the first and third springs have spring constants c_1 = c_3 = 1, but the second spring constant, c_2, differs. The matrix K in K u = f, with f = [1 1 1]^T, is given by

    K = A^T C A = [c_1 + c_2, -c_2, 0; -c_2, c_2 + c_3, -c_3; 0, -c_3, c_3]
                = [1 + c_2, -c_2, 0; -c_2, c_2 + 1, -1; 0, -1, 1]

For c_2 = 1:     K = [2 -1 0; -1 2 -1; 0 -1 1]       u = K^{-1} f = [3; 5; 6]
For c_2 = 10:    K = [11 -10 0; -10 11 -1; 0 -1 1]   u = K^{-1} f = [3; 3.2; 4.2]

7) In the fixed-fixed case with three equal masses and originally four equal springs with spring constant equal to 1, we now weaken spring 2 so that c_2 -> 0. The matrix K becomes

    K = [c_1 + c_2, -c_2, 0; -c_2, c_2 + c_3, -c_3; 0, -c_3, c_3 + c_4]  ->  [1 0 0; 0 1 -1; 0 -1 2]

This matrix is still invertible because its determinant is 1. Solving K u = f = [1 1 1]^T, we get u = [1 3 2]^T.
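The numbers in problems 5 through 7 can be reproduced with a short NumPy sketch. Here m = g = 1 (the text keeps them symbolic), and c_2 = 1 and c_2 = 10 are sample values for the middle spring:

```python
import numpy as np

# Problem 5: fixed-fixed chain of 3 masses and 4 springs, with m = g = c = 1
A = np.array([[1.0, 0, 0], [-1, 1, 0], [0, -1, 1], [0, 0, -1]])
C = np.eye(4)                          # C = cI with c = 1
K = A.T @ C @ A                        # stiffness matrix K = A^T C A
assert np.allclose(K, [[2, -1, 0], [-1, 2, -1], [0, -1, 2]])

f = np.ones(3)                         # force mg = 1 on each mass
w = C @ A @ np.linalg.solve(K, f)      # spring tensions w = C A K^{-1} f
assert np.allclose(w, [1.5, 0.5, -0.5, -1.5])
assert np.isclose(-w[0] + w[3], -3.0)  # reactions R_t + R_b balance gravity 3mg

# Problem 6: fixed-free chain with c1 = c3 = 1 and a variable middle spring c2
def K_fixed_free(c2):
    return np.array([[1 + c2, -c2, 0], [-c2, c2 + 1, -1], [0, -1, 1.0]])

assert np.allclose(np.linalg.solve(K_fixed_free(1.0), np.ones(3)), [3, 5, 6])
assert np.allclose(np.linalg.solve(K_fixed_free(10.0), np.ones(3)), [3, 3.2, 4.2])

# Problem 7: fixed-fixed chain with spring 2 weakened away (c2 -> 0)
K7 = np.array([[1.0, 0, 0], [0, 1, -1], [0, -1, 2]])
assert np.isclose(np.linalg.det(K7), 1.0)
assert np.allclose(np.linalg.solve(K7, np.ones(3)), [1, 3, 2])
```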
To explain this answer physically, it is important to realize that weakening spring 2 splits

the problem into two decoupled problems. One problem is mass 1 hanging freely off spring 1. In this problem we expect the displacement to be u_1 = 1, because there is a force of 1 on the mass, connected to a spring with spring constant equal to 1. The second problem is a free-fixed problem with two identical masses and two identical springs, but the problem is upside down compared to the typical fixed-free spring-mass problem. So, when u_2 = 3 and u_3 = 2, it means that spring 4 is compressed by 2 by the two masses above it, and spring 3 is compressed by 1 by the one mass above it, which makes sense physically.

8) With one free-free spring, the extension in the spring is e = u_2 - u_1. The tension is proportional to the extension, with the constant of proportionality c, which is the spring constant. So the tension is w = c [-1 1] [u_1; u_2]. The tension is related to the two forces at the ends of the spring, f^t_i for the force at the top and f^b_i for the force at the bottom, by f^t_i = -w and f^b_i = w. So

    [f^t_i; f^b_i] = [-w; w] = c [1 -1; -1 1] [u_1; u_2] = K_elem,i u    where K_elem,i = c_i [1 -1; -1 1]

The notation here is used to distinguish between the various forces when assembling multiple element matrices K_elem,i into a bigger matrix for the whole system. The superscript denotes whether the force is at the top of the spring, t, or at the bottom of the spring, b. The subscript is a number identifying which element matrix the forces are meant for. Also, as an example, if two springs, spring 1 and spring 2, are attached to the same mass with spring 1 above spring 2, then from a free-body diagram on the mass it can be seen that f^b_1 + f^t_2 = f_m, where f_m is the total force on the mass.

(a) The element matrices can be assembled to find K_free-free for the free-free three-mass, two-spring (spring 2 and spring 3) problem.
    [f^t_2; f^b_2] = c_2 [1 -1; -1 1] [u_1; u_2]
      --expanded-->  [f^t_2; f^b_2; 0] = [c_2 -c_2 0; -c_2 c_2 0; 0 0 0] [u_1; u_2; u_3]

    [f^t_3; f^b_3] = c_3 [1 -1; -1 1] [u_2; u_3]
      --expanded-->  [0; f^t_3; f^b_3] = [0 0 0; 0 c_3 -c_3; 0 -c_3 c_3] [u_1; u_2; u_3]

Summing the two equations:

    [f^t_2; f^b_2 + f^t_3; f^b_3] = [f_1; f_2; f_3] = [c_2, -c_2, 0; -c_2, (c_2 + c_3), -c_3; 0, -c_3, c_3] [u_1; u_2; u_3]    (1)

where the matrix on the right is K_free-free.

(b) Now, an element matrix for spring 1 can be adjusted and added to K_free-free to get K_fixed-free, the matrix for the fixed-free three-mass, three-spring problem. Spring 1 connects the fixed top (displacement 0) to mass 1:

    [f^t_1; f^b_1] = c_1 [1 -1; -1 1] [0; u_1]
      --simplified-->  f^b_1 = c_1 u_1
      --expanded-->  [f^b_1; 0; 0] = [c_1 0 0; 0 0 0; 0 0 0] [u_1; u_2; u_3]

Adding this equation to equation (1) gives

    [f^b_1 + f^t_2; f^b_2 + f^t_3; f^b_3] = [f_1; f_2; f_3] = [(c_1 + c_2), -c_2, 0; -c_2, (c_2 + c_3), -c_3; 0, -c_3, c_3] [u_1; u_2; u_3]    (2)

where the matrix on the right is K_fixed-free. This time f_1 has a different equation because adding spring 1 changes the free-body diagram of mass 1. Looking at that free-body diagram, it can be seen that f_1 = f^b_1 + f^t_2 is consistent with balancing the forces on mass 1.

(c) Finally, an element matrix for spring 4 can be adjusted and added to K_fixed-free to get K_fixed-fixed, the matrix for the fixed-fixed three-mass, four-spring problem. Spring 4 connects mass 3 to the fixed bottom (displacement 0):

    [f^t_4; f^b_4] = c_4 [1 -1; -1 1] [u_3; 0]
      --simplified-->  f^t_4 = c_4 u_3
      --expanded-->  [0; 0; f^t_4] = [0 0 0; 0 0 0; 0 0 c_4] [u_1; u_2; u_3]

Adding this equation to equation (2) gives

    [f^b_1 + f^t_2; f^b_2 + f^t_3; f^b_3 + f^t_4] = [f_1; f_2; f_3] = [(c_1 + c_2), -c_2, 0; -c_2, (c_2 + c_3), -c_3; 0, -c_3, (c_3 + c_4)] [u_1; u_2; u_3]    (3)

where the matrix on the right is K_fixed-fixed, and again f_3 has a different equation because adding spring 4 changes the free-body diagram of mass 3.
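The assembly in parts (a) through (c) can be sketched in code. The `add_element` helper below is a hypothetical illustration of the bookkeeping, not a routine from the text; a dof index of -1 stands for a fixed support, whose row and column of the element matrix are simply dropped:

```python
import numpy as np

def add_element(K, c, i, j):
    """Add the spring element c*[[1,-1],[-1,1]] coupling dofs i and j into the
    global stiffness matrix K in place; dof -1 marks a fixed support, so its
    row and column are dropped."""
    elem = c * np.array([[1.0, -1.0], [-1.0, 1.0]])
    for a, p in enumerate((i, j)):
        for b, q in enumerate((i, j)):
            if p >= 0 and q >= 0:
                K[p, q] += elem[a, b]

c1 = c2 = c3 = c4 = 1.0                    # equal spring constants for illustration
K = np.zeros((3, 3))
add_element(K, c2, 0, 1)                   # spring 2 between masses 1 and 2
add_element(K, c3, 1, 2)                   # spring 3 between masses 2 and 3: K_free-free
assert np.isclose(np.linalg.det(K), 0.0)   # the free-free matrix is singular
add_element(K, c1, -1, 0)                  # spring 1 to the fixed top: K_fixed-free
add_element(K, c4, 2, -1)                  # spring 4 to the fixed bottom: K_fixed-fixed
assert np.allclose(K, [[2, -1, 0], [-1, 2, -1], [0, -1, 2]])
```

Each `add_element` call reproduces one of the expanded equations above, and the final matrix matches equation (3) with all spring constants equal to 1.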

2.2: 5, 6, 8

5)

    du/dt = A u    where A = [0 c -b; -c 0 a; b -a 0]

(a) ||u(t)||^2 = u_1^2 + u_2^2 + u_3^2, and the derivative of ||u(t)||^2 with respect to time is 2u_1 u_1' + 2u_2 u_2' + 2u_3 u_3'. This can be simplified as follows:

    d/dt ||u(t)||^2 = 2u_1(c u_2 - b u_3) + 2u_2(a u_3 - c u_1) + 2u_3(b u_1 - a u_2)
                    = 2c u_1 u_2 - 2b u_1 u_3 + 2a u_2 u_3 - 2c u_1 u_2 + 2b u_1 u_3 - 2a u_2 u_3
                    = 0

Thus ||u(t)||^2 = ||u(0)||^2.

(b) Q = e^{At} is an orthogonal matrix, where A is the matrix above. It can be seen by studying the matrix that A is skew-symmetric, i.e. A^T = -A. It can be shown that Q^T = e^{-At}, and thus Q^T Q = e^{-At} e^{At} = I, satisfying the property for an orthogonal (actually orthonormal) matrix. The proof below makes use of the Taylor series expansion of e^{At} about t = 0:

    Q = e^{At} = I + At + (1/2!)(At)^2 + (1/3!)(At)^3 + ...
    Q^T = [I + At + (1/2!)(At)^2 + (1/3!)(At)^3 + ...]^T
        = I + A^T t + (1/2!)(A^T t)^2 + (1/3!)(A^T t)^3 + ...
        = I + (-At) + (1/2!)(-At)^2 + (1/3!)(-At)^3 + ...
        = e^{-At}

6) The trapezoidal rule for u' = Au is given by

    (I - (Δt/2) A) U_{n+1} = (I + (Δt/2) A) U_n

If A^T = -A, then the trapezoidal rule conserves the energy ||u||^2. This can be proven by showing that ||U_{n+1}||^2 = ||U_n||^2:

    (I - (Δt/2) A) U_{n+1} = (I + (Δt/2) A) U_n
    U_{n+1} - U_n = (Δt/2) A (U_{n+1} + U_n)
    (U_{n+1}^T + U_n^T)(U_{n+1} - U_n) = (U_{n+1}^T + U_n^T)(Δt/2) A (U_{n+1} + U_n)
    U_{n+1}^T U_{n+1} - U_{n+1}^T U_n + U_n^T U_{n+1} - U_n^T U_n = (Δt/2)(U_{n+1} + U_n)^T A (U_{n+1} + U_n)
    ||U_{n+1}||^2 - ||U_n||^2 = (Δt/2) v^T A v

where we let v = U_{n+1} + U_n, and U_{n+1}^T U_n was canceled by U_n^T U_{n+1} since they are equal scalars (the order of an inner product does not matter). Likewise, v^T A v is a scalar, and thus it is equal to its transpose (v^T A v)^T = v^T A^T v. However, since A^T = -A, this means that v^T A v = -v^T A v, and the only way that equality can be satisfied is if v^T A v = 0. Thus

    ||U_{n+1}||^2 - ||U_n||^2 = (Δt/2) v^T A v = 0    =>    ||U_{n+1}||^2 = ||U_n||^2
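The conservation result in problem 6 can be seen numerically (a sketch; the constants a, b, c, the step size, and the starting vector are arbitrary choices):

```python
import numpy as np

a, b, c = 0.3, -1.1, 0.7                  # arbitrary constants
A = np.array([[0.0,   c,  -b],
              [ -c, 0.0,   a],
              [  b,  -a, 0.0]])           # the skew-symmetric matrix of problem 5
assert np.allclose(A.T, -A)

dt = 0.05
I3 = np.eye(3)
U = np.array([1.0, 2.0, -0.5])
E0 = U @ U                                # initial energy ||U||^2
for _ in range(200):                      # trapezoidal steps
    U = np.linalg.solve(I3 - (dt / 2) * A, (I3 + (dt / 2) * A) @ U)
assert np.isclose(U @ U, E0)              # energy is conserved at every step
```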

8) Forward and Backward Euler for u' = v, v' = -u are given by

    Forward Euler:   U_{n+1} = U_n + h V_n        V_{n+1} = V_n - h U_n
    Backward Euler:  U_{n+1} = U_n + h V_{n+1}    V_{n+1} = V_n - h U_{n+1}

Forward Euler multiplies the energy by (1 + h^2) at every step:

    U_{n+1}^2 + V_{n+1}^2 = (U_n + h V_n)^2 + (V_n - h U_n)^2
                          = U_n^2 + 2h U_n V_n + h^2 V_n^2 + V_n^2 - 2h U_n V_n + h^2 U_n^2
                          = (1 + h^2) U_n^2 + (1 + h^2) V_n^2
                          = (1 + h^2)(U_n^2 + V_n^2)

Backward Euler divides the energy by (1 + h^2) at every step. First, rewrite Backward Euler as U_{n+1} - h V_{n+1} = U_n and V_{n+1} + h U_{n+1} = V_n. Then sum U_n^2 and V_n^2 to find

    U_n^2 + V_n^2 = (U_{n+1} - h V_{n+1})^2 + (V_{n+1} + h U_{n+1})^2
                  = U_{n+1}^2 - 2h U_{n+1} V_{n+1} + h^2 V_{n+1}^2 + V_{n+1}^2 + 2h U_{n+1} V_{n+1} + h^2 U_{n+1}^2
                  = (1 + h^2)(U_{n+1}^2 + V_{n+1}^2)

    =>  U_{n+1}^2 + V_{n+1}^2 = (U_n^2 + V_n^2) / (1 + h^2)

We can see what the gain in energy will be after 32 steps of Forward Euler, for example, with h = 2π/32: the factor is (1 + h^2)^32 ≈ 3.36. It is interesting to see whether Euler converges, so we take the limit of y = (1 + h^2)^{2π/h} as h -> 0:

    lim_{h->0} (1 + h^2)^{2π/h} = lim_{h->0} y = lim_{h->0} e^{ln y}
        = exp[ lim_{h->0} (2π/h) ln(1 + h^2) ]
        = exp[ 2π lim_{h->0} ln(1 + h^2) / h ]
        = exp[ 2π lim_{h->0} ( (d/dh) ln(1 + h^2) ) / ( (d/dh) h ) ]
        = exp[ 2π lim_{h->0} 2h / (1 + h^2) ]
        = e^0 = 1

Since the limit equals 1, it indicates that Euler does converge, albeit slowly.
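Both energy factors can be confirmed numerically, using 32 steps with h = 2π/32 as in the text:

```python
import numpy as np

h = 2 * np.pi / 32

U, V = 1.0, 0.0
for _ in range(32):                        # forward Euler for u' = v, v' = -u
    U, V = U + h * V, V - h * U
E_fwd = U**2 + V**2                        # energy grew by (1 + h^2) each step
assert np.isclose(E_fwd, (1 + h**2) ** 32)

U, V = 1.0, 0.0
for _ in range(32):                        # backward Euler, solved in closed form
    U, V = (U + h * V) / (1 + h**2), (V - h * U) / (1 + h**2)
E_bwd = U**2 + V**2                        # energy shrank by (1 + h^2) each step
assert np.isclose(E_bwd, (1 + h**2) ** -32)
```

The closed form used for Backward Euler comes from substituting V_{n+1} = V_n - h U_{n+1} into U_{n+1} = U_n + h V_{n+1} and solving the resulting 2x2 system.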

MATLAB Assignment

In this problem, 100 identical masses are connected by identical springs with spring constant c = 1. Two boundary conditions are considered: one is the fixed-fixed boundary condition, in which the top and bottom masses are attached with springs to fixed supports; the other is the fixed-free boundary condition, in which only the top mass is attached with a spring to a fixed support and the bottom mass hangs freely. The force on each mass is taken to have a magnitude of 0.1. MATLAB was used to generate the solutions to these problems and graph the displacements. The code used is displayed below in Listing 1. The plot of the displacements for the fixed-fixed case can be seen in Figure 1. The plot of the displacements for the fixed-free case can be seen in Figure 2.

Listing 1: MATLAB code to solve the spring-mass system for both boundary conditions and plot the displacements.

    % Prepare necessary matrices for the system under both boundary conditions
    e = ones(100,1);
    K = spdiags([-e 2*e -e], -1:1, 100, 100);  % Create sparse K matrix for fixed-fixed case
    H = K; H(100,100) = 1;                     % Create sparse H matrix for fixed-free case
    f = 0.1*ones(100,1);                       % Create force column vector

    % Solve the system Ku = f for both boundary conditions
    u1 = K\f;                                  % Solve displacements, u1, for fixed-fixed case
    u2 = H\f;                                  % Solve displacements, u2, for fixed-free case

    % Plot results
    figure(1);
    plot(u1,'+');                              % Plot results for fixed-fixed case
    xlabel('Mass Number'); ylabel('Displacement'); title('Fixed-Fixed Case');
    figure(2);
    plot(u2,'+');                              % Plot results for fixed-free case
    xlabel('Mass Number'); ylabel('Displacement'); title('Fixed-Free Case');
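For comparison, here is a NumPy version of Listing 1 (dense matrices instead of sparse for simplicity, plotting omitted); the checks at the end match the qualitative shapes of the two displacement curves:

```python
import numpy as np

n = 100
# Fixed-fixed stiffness matrix K (second-difference matrix, c = 1) and
# fixed-free matrix H, whose last diagonal entry is 1 instead of 2
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
H = K.copy()
H[-1, -1] = 1.0

f = 0.1 * np.ones(n)               # force of magnitude 0.1 on each mass
u1 = np.linalg.solve(K, f)         # displacements, fixed-fixed case
u2 = np.linalg.solve(H, f)         # displacements, fixed-free case

assert np.isclose(u1[0], u1[-1])   # fixed-fixed profile is symmetric about the middle
assert np.all(np.diff(u2) > 0)     # fixed-free displacements grow toward the free end
```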

Figure 1: Plot of mass displacement for the spring-mass system with a fixed-fixed boundary condition.

Figure 2: Plot of mass displacement for the spring-mass system with a fixed-free boundary condition.


More information

MAC Module 12 Eigenvalues and Eigenvectors. Learning Objectives. Upon completing this module, you should be able to:

MAC Module 12 Eigenvalues and Eigenvectors. Learning Objectives. Upon completing this module, you should be able to: MAC Module Eigenvalues and Eigenvectors Learning Objectives Upon completing this module, you should be able to: Solve the eigenvalue problem by finding the eigenvalues and the corresponding eigenvectors

More information

MAC Module 12 Eigenvalues and Eigenvectors

MAC Module 12 Eigenvalues and Eigenvectors MAC 23 Module 2 Eigenvalues and Eigenvectors Learning Objectives Upon completing this module, you should be able to:. Solve the eigenvalue problem by finding the eigenvalues and the corresponding eigenvectors

More information

Introduction to Mobile Robotics Compact Course on Linear Algebra. Wolfram Burgard, Cyrill Stachniss, Kai Arras, Maren Bennewitz

Introduction to Mobile Robotics Compact Course on Linear Algebra. Wolfram Burgard, Cyrill Stachniss, Kai Arras, Maren Bennewitz Introduction to Mobile Robotics Compact Course on Linear Algebra Wolfram Burgard, Cyrill Stachniss, Kai Arras, Maren Bennewitz Vectors Arrays of numbers Vectors represent a point in a n dimensional space

More information

Chapter 7. Optimization and Minimum Principles. 7.1 Two Fundamental Examples. Least Squares

Chapter 7. Optimization and Minimum Principles. 7.1 Two Fundamental Examples. Least Squares Chapter 7 Optimization and Minimum Principles 7 Two Fundamental Examples Within the universe of applied mathematics, optimization is often a world of its own There are occasional expeditions to other worlds

More information

Linear Algebra, part 2 Eigenvalues, eigenvectors and least squares solutions

Linear Algebra, part 2 Eigenvalues, eigenvectors and least squares solutions Linear Algebra, part 2 Eigenvalues, eigenvectors and least squares solutions Anna-Karin Tornberg Mathematical Models, Analysis and Simulation Fall semester, 2013 Main problem of linear algebra 2: Given

More information

Matrix Algebra for Engineers Jeffrey R. Chasnov

Matrix Algebra for Engineers Jeffrey R. Chasnov Matrix Algebra for Engineers Jeffrey R. Chasnov The Hong Kong University of Science and Technology The Hong Kong University of Science and Technology Department of Mathematics Clear Water Bay, Kowloon

More information

Elementary Linear Algebra

Elementary Linear Algebra Matrices J MUSCAT Elementary Linear Algebra Matrices Definition Dr J Muscat 2002 A matrix is a rectangular array of numbers, arranged in rows and columns a a 2 a 3 a n a 2 a 22 a 23 a 2n A = a m a mn We

More information

Final Review Written by Victoria Kala SH 6432u Office Hours R 12:30 1:30pm Last Updated 11/30/2015

Final Review Written by Victoria Kala SH 6432u Office Hours R 12:30 1:30pm Last Updated 11/30/2015 Final Review Written by Victoria Kala vtkala@mathucsbedu SH 6432u Office Hours R 12:30 1:30pm Last Updated 11/30/2015 Summary This review contains notes on sections 44 47, 51 53, 61, 62, 65 For your final,

More information

First of all, the notion of linearity does not depend on which coordinates are used. Recall that a map T : R n R m is linear if

First of all, the notion of linearity does not depend on which coordinates are used. Recall that a map T : R n R m is linear if 5 Matrices in Different Coordinates In this section we discuss finding matrices of linear maps in different coordinates Earlier in the class was the matrix that multiplied by x to give ( x) in standard

More information

Numerical Linear Algebra

Numerical Linear Algebra Numerical Linear Algebra Direct Methods Philippe B. Laval KSU Fall 2017 Philippe B. Laval (KSU) Linear Systems: Direct Solution Methods Fall 2017 1 / 14 Introduction The solution of linear systems is one

More information

Quiz ) Locate your 1 st order neighbors. 1) Simplify. Name Hometown. Name Hometown. Name Hometown.

Quiz ) Locate your 1 st order neighbors. 1) Simplify. Name Hometown. Name Hometown. Name Hometown. Quiz 1) Simplify 9999 999 9999 998 9999 998 2) Locate your 1 st order neighbors Name Hometown Me Name Hometown Name Hometown Name Hometown Solving Linear Algebraic Equa3ons Basic Concepts Here only real

More information

Glossary of Linear Algebra Terms. Prepared by Vince Zaccone For Campus Learning Assistance Services at UCSB

Glossary of Linear Algebra Terms. Prepared by Vince Zaccone For Campus Learning Assistance Services at UCSB Glossary of Linear Algebra Terms Basis (for a subspace) A linearly independent set of vectors that spans the space Basic Variable A variable in a linear system that corresponds to a pivot column in the

More information

Linear Algebra March 16, 2019

Linear Algebra March 16, 2019 Linear Algebra March 16, 2019 2 Contents 0.1 Notation................................ 4 1 Systems of linear equations, and matrices 5 1.1 Systems of linear equations..................... 5 1.2 Augmented

More information

SPRING 2006 PRELIMINARY EXAMINATION SOLUTIONS

SPRING 2006 PRELIMINARY EXAMINATION SOLUTIONS SPRING 006 PRELIMINARY EXAMINATION SOLUTIONS 1A. Let G be the subgroup of the free abelian group Z 4 consisting of all integer vectors (x, y, z, w) such that x + 3y + 5z + 7w = 0. (a) Determine a linearly

More information

MTH50 Spring 07 HW Assignment 7 {From [FIS0]}: Sec 44 #4a h 6; Sec 5 #ad ac 4ae 4 7 The due date for this assignment is 04/05/7 Sec 44 #4a h Evaluate the erminant of the following matrices by any legitimate

More information

j=1 u 1jv 1j. 1/ 2 Lemma 1. An orthogonal set of vectors must be linearly independent.

j=1 u 1jv 1j. 1/ 2 Lemma 1. An orthogonal set of vectors must be linearly independent. Lecture Notes: Orthogonal and Symmetric Matrices Yufei Tao Department of Computer Science and Engineering Chinese University of Hong Kong taoyf@cse.cuhk.edu.hk Orthogonal Matrix Definition. Let u = [u

More information

Matrix decompositions

Matrix decompositions Matrix decompositions Zdeněk Dvořák May 19, 2015 Lemma 1 (Schur decomposition). If A is a symmetric real matrix, then there exists an orthogonal matrix Q and a diagonal matrix D such that A = QDQ T. The

More information

x x2 2 + x3 3 x4 3. Use the divided-difference method to find a polynomial of least degree that fits the values shown: (b)

x x2 2 + x3 3 x4 3. Use the divided-difference method to find a polynomial of least degree that fits the values shown: (b) Numerical Methods - PROBLEMS. The Taylor series, about the origin, for log( + x) is x x2 2 + x3 3 x4 4 + Find an upper bound on the magnitude of the truncation error on the interval x.5 when log( + x)

More information

av 1 x 2 + 4y 2 + xy + 4z 2 = 16.

av 1 x 2 + 4y 2 + xy + 4z 2 = 16. 74 85 Eigenanalysis The subject of eigenanalysis seeks to find a coordinate system, in which the solution to an applied problem has a simple expression Therefore, eigenanalysis might be called the method

More information

LU Factorization. A m x n matrix A admits an LU factorization if it can be written in the form of A = LU

LU Factorization. A m x n matrix A admits an LU factorization if it can be written in the form of A = LU LU Factorization A m n matri A admits an LU factorization if it can be written in the form of Where, A = LU L : is a m m lower triangular matri with s on the diagonal. The matri L is invertible and is

More information

Practical Linear Algebra: A Geometry Toolbox

Practical Linear Algebra: A Geometry Toolbox Practical Linear Algebra: A Geometry Toolbox Third edition Chapter 12: Gauss for Linear Systems Gerald Farin & Dianne Hansford CRC Press, Taylor & Francis Group, An A K Peters Book www.farinhansford.com/books/pla

More information

MATH 3511 Lecture 1. Solving Linear Systems 1

MATH 3511 Lecture 1. Solving Linear Systems 1 MATH 3511 Lecture 1 Solving Linear Systems 1 Dmitriy Leykekhman Spring 2012 Goals Review of basic linear algebra Solution of simple linear systems Gaussian elimination D Leykekhman - MATH 3511 Introduction

More information

Lecture 11. Linear systems: Cholesky method. Eigensystems: Terminology. Jacobi transformations QR transformation

Lecture 11. Linear systems: Cholesky method. Eigensystems: Terminology. Jacobi transformations QR transformation Lecture Cholesky method QR decomposition Terminology Linear systems: Eigensystems: Jacobi transformations QR transformation Cholesky method: For a symmetric positive definite matrix, one can do an LU decomposition

More information

Beyond Vectors. Hung-yi Lee

Beyond Vectors. Hung-yi Lee Beyond Vectors Hung-yi Lee Introduction Many things can be considered as vectors. E.g. a function can be regarded as a vector We can apply the concept we learned on those vectors. Linear combination Span

More information

Mathematical foundations - linear algebra

Mathematical foundations - linear algebra Mathematical foundations - linear algebra Andrea Passerini passerini@disi.unitn.it Machine Learning Vector space Definition (over reals) A set X is called a vector space over IR if addition and scalar

More information

MTH Linear Algebra. Study Guide. Dr. Tony Yee Department of Mathematics and Information Technology The Hong Kong Institute of Education

MTH Linear Algebra. Study Guide. Dr. Tony Yee Department of Mathematics and Information Technology The Hong Kong Institute of Education MTH 3 Linear Algebra Study Guide Dr. Tony Yee Department of Mathematics and Information Technology The Hong Kong Institute of Education June 3, ii Contents Table of Contents iii Matrix Algebra. Real Life

More information

Today s class. Linear Algebraic Equations LU Decomposition. Numerical Methods, Fall 2011 Lecture 8. Prof. Jinbo Bi CSE, UConn

Today s class. Linear Algebraic Equations LU Decomposition. Numerical Methods, Fall 2011 Lecture 8. Prof. Jinbo Bi CSE, UConn Today s class Linear Algebraic Equations LU Decomposition 1 Linear Algebraic Equations Gaussian Elimination works well for solving linear systems of the form: AX = B What if you have to solve the linear

More information

Linear Equation: a 1 x 1 + a 2 x a n x n = b. x 1, x 2,..., x n : variables or unknowns

Linear Equation: a 1 x 1 + a 2 x a n x n = b. x 1, x 2,..., x n : variables or unknowns Linear Equation: a x + a 2 x 2 +... + a n x n = b. x, x 2,..., x n : variables or unknowns a, a 2,..., a n : coefficients b: constant term Examples: x + 4 2 y + (2 5)z = is linear. x 2 + y + yz = 2 is

More information

Linear Algebra, part 3. Going back to least squares. Mathematical Models, Analysis and Simulation = 0. a T 1 e. a T n e. Anna-Karin Tornberg

Linear Algebra, part 3. Going back to least squares. Mathematical Models, Analysis and Simulation = 0. a T 1 e. a T n e. Anna-Karin Tornberg Linear Algebra, part 3 Anna-Karin Tornberg Mathematical Models, Analysis and Simulation Fall semester, 2010 Going back to least squares (Sections 1.7 and 2.3 from Strang). We know from before: The vector

More information

EE5120 Linear Algebra: Tutorial 6, July-Dec Covers sec 4.2, 5.1, 5.2 of GS

EE5120 Linear Algebra: Tutorial 6, July-Dec Covers sec 4.2, 5.1, 5.2 of GS EE0 Linear Algebra: Tutorial 6, July-Dec 07-8 Covers sec 4.,.,. of GS. State True or False with proper explanation: (a) All vectors are eigenvectors of the Identity matrix. (b) Any matrix can be diagonalized.

More information

Linear Algebra Primer

Linear Algebra Primer Introduction Linear Algebra Primer Daniel S. Stutts, Ph.D. Original Edition: 2/99 Current Edition: 4//4 This primer was written to provide a brief overview of the main concepts and methods in elementary

More information

Class notes: Approximation

Class notes: Approximation Class notes: Approximation Introduction Vector spaces, linear independence, subspace The goal of Numerical Analysis is to compute approximations We want to approximate eg numbers in R or C vectors in R

More information

Algebra C Numerical Linear Algebra Sample Exam Problems

Algebra C Numerical Linear Algebra Sample Exam Problems Algebra C Numerical Linear Algebra Sample Exam Problems Notation. Denote by V a finite-dimensional Hilbert space with inner product (, ) and corresponding norm. The abbreviation SPD is used for symmetric

More information

Linear Algebra - Part II

Linear Algebra - Part II Linear Algebra - Part II Projection, Eigendecomposition, SVD (Adapted from Sargur Srihari s slides) Brief Review from Part 1 Symmetric Matrix: A = A T Orthogonal Matrix: A T A = AA T = I and A 1 = A T

More information

Linear Algebra Review (Course Notes for Math 308H - Spring 2016)

Linear Algebra Review (Course Notes for Math 308H - Spring 2016) Linear Algebra Review (Course Notes for Math 308H - Spring 2016) Dr. Michael S. Pilant February 12, 2016 1 Background: We begin with one of the most fundamental notions in R 2, distance. Letting (x 1,

More information

Review. Example 1. Elementary matrices in action: (a) a b c. d e f = g h i. d e f = a b c. a b c. (b) d e f. d e f.

Review. Example 1. Elementary matrices in action: (a) a b c. d e f = g h i. d e f = a b c. a b c. (b) d e f. d e f. Review Example. Elementary matrices in action: (a) 0 0 0 0 a b c d e f = g h i d e f 0 0 g h i a b c (b) 0 0 0 0 a b c d e f = a b c d e f 0 0 7 g h i 7g 7h 7i (c) 0 0 0 0 a b c a b c d e f = d e f 0 g

More information

Math 307 Learning Goals. March 23, 2010

Math 307 Learning Goals. March 23, 2010 Math 307 Learning Goals March 23, 2010 Course Description The course presents core concepts of linear algebra by focusing on applications in Science and Engineering. Examples of applications from recent

More information

Review of Linear Algebra Definitions, Change of Basis, Trace, Spectral Theorem

Review of Linear Algebra Definitions, Change of Basis, Trace, Spectral Theorem Review of Linear Algebra Definitions, Change of Basis, Trace, Spectral Theorem Steven J. Miller June 19, 2004 Abstract Matrices can be thought of as rectangular (often square) arrays of numbers, or as

More information

Lecture 6 Positive Definite Matrices

Lecture 6 Positive Definite Matrices Linear Algebra Lecture 6 Positive Definite Matrices Prof. Chun-Hung Liu Dept. of Electrical and Computer Engineering National Chiao Tung University Spring 2017 2017/6/8 Lecture 6: Positive Definite Matrices

More information

Index. book 2009/5/27 page 121. (Page numbers set in bold type indicate the definition of an entry.)

Index. book 2009/5/27 page 121. (Page numbers set in bold type indicate the definition of an entry.) page 121 Index (Page numbers set in bold type indicate the definition of an entry.) A absolute error...26 componentwise...31 in subtraction...27 normwise...31 angle in least squares problem...98,99 approximation

More information

Introduction to Mobile Robotics Compact Course on Linear Algebra. Wolfram Burgard, Bastian Steder

Introduction to Mobile Robotics Compact Course on Linear Algebra. Wolfram Burgard, Bastian Steder Introduction to Mobile Robotics Compact Course on Linear Algebra Wolfram Burgard, Bastian Steder Reference Book Thrun, Burgard, and Fox: Probabilistic Robotics Vectors Arrays of numbers Vectors represent

More information

Linear Algebra Lecture Notes

Linear Algebra Lecture Notes Linear Algebra Lecture Notes jongman@gmail.com January 19, 2015 This lecture note summarizes my takeaways from taking Gilbert Strang s Linear Algebra course online. 1 Solving Linear Systems 1.1 Interpreting

More information

AIMS Exercise Set # 1

AIMS Exercise Set # 1 AIMS Exercise Set #. Determine the form of the single precision floating point arithmetic used in the computers at AIMS. What is the largest number that can be accurately represented? What is the smallest

More information

Numerical Linear Algebra

Numerical Linear Algebra Chapter 3 Numerical Linear Algebra We review some techniques used to solve Ax = b where A is an n n matrix, and x and b are n 1 vectors (column vectors). We then review eigenvalues and eigenvectors and

More information

Introduction to Matrices

Introduction to Matrices POLS 704 Introduction to Matrices Introduction to Matrices. The Cast of Characters A matrix is a rectangular array (i.e., a table) of numbers. For example, 2 3 X 4 5 6 (4 3) 7 8 9 0 0 0 Thismatrix,with4rowsand3columns,isoforder

More information

Linear Algebra Highlights

Linear Algebra Highlights Linear Algebra Highlights Chapter 1 A linear equation in n variables is of the form a 1 x 1 + a 2 x 2 + + a n x n. We can have m equations in n variables, a system of linear equations, which we want to

More information

Mathematical Properties of Stiffness Matrices

Mathematical Properties of Stiffness Matrices Mathematical Properties of Stiffness Matrices CEE 4L. Matrix Structural Analysis Department of Civil and Environmental Engineering Duke University Henri P. Gavin Fall, 0 These notes describe some of the

More information

Spring 2014 Math 272 Final Exam Review Sheet

Spring 2014 Math 272 Final Exam Review Sheet Spring 2014 Math 272 Final Exam Review Sheet You will not be allowed use of a calculator or any other device other than your pencil or pen and some scratch paper. Notes are also not allowed. In kindness

More information

Linear Algebra Primer

Linear Algebra Primer Linear Algebra Primer D.S. Stutts November 8, 995 Introduction This primer was written to provide a brief overview of the main concepts and methods in elementary linear algebra. It was not intended to

More information

Linear Algebra review Powers of a diagonalizable matrix Spectral decomposition

Linear Algebra review Powers of a diagonalizable matrix Spectral decomposition Linear Algebra review Powers of a diagonalizable matrix Spectral decomposition Prof. Tesler Math 283 Fall 2018 Also see the separate version of this with Matlab and R commands. Prof. Tesler Diagonalizing

More information

Positive Definite Matrix

Positive Definite Matrix 1/29 Chia-Ping Chen Professor Department of Computer Science and Engineering National Sun Yat-sen University Linear Algebra Positive Definite, Negative Definite, Indefinite 2/29 Pure Quadratic Function

More information

Old Math 330 Exams. David M. McClendon. Department of Mathematics Ferris State University

Old Math 330 Exams. David M. McClendon. Department of Mathematics Ferris State University Old Math 330 Exams David M. McClendon Department of Mathematics Ferris State University Last updated to include exams from Fall 07 Contents Contents General information about these exams 3 Exams from Fall

More information