Solutions to the Exercises * on Linear Algebra


Laurenz Wiskott
Institut für Neuroinformatik, Ruhr-Universität Bochum, Germany, EU

4 February 2017

Contents

Vector spaces
- Definition
- Linear combinations
- Linear (in)dependence
  - Exercise: Linear independence
  - Exercise: Linear independence
  - Exercise: Linear independence
  - Exercise: Linear independence
  - Exercise: Linear independence
  - Exercise: Linear independence
  - Exercise: Linear independence
  - Exercise: Linear independence in C³
- Basis systems
  - Exercise: Vector space of the functions sin(x + φ)

© Laurenz Wiskott (homepage). This work (except for all figures from other sources, if present) is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License. To view a copy of this license, visit https://creativecommons.org/licenses/by-sa/4.0/. Figures from other sources have their own copyright, which is generally indicated. Do not distribute parts of these lecture notes showing figures with non-free copyrights (here usually figures I have the rights to publish but you don't, like my own published figures).

Several of my exercises (not necessarily on this topic) were inspired by papers and textbooks by other authors. Unfortunately, I did not document that well, because initially I did not intend to make the exercises publicly available, and now I cannot trace it back anymore. So I cannot give as much credit as I would like to. The concrete versions of the exercises are certainly my own work, though.

* These exercises complement my corresponding lecture notes available at Teaching/Material/, where you can also find other teaching material such as programming exercises. The table of contents of the lecture notes is reproduced here to give an orientation as to when the exercises can be reasonably solved. For best learning effect I recommend first seriously trying to solve the exercises yourself before looking into the solutions.

  - Exercise: Basis systems
  - Exercise: Dimension of a vector space
  - Exercise: Dimension of a vector space
- Representation wrt a basis
  - Exercise: Representation of vectors w.r.t. a basis
  - Exercise: Representation of vectors w.r.t. a basis

Euclidean vector spaces
- Inner product
  - Exercise: Inner product for functions
  - Exercise: Representation of an inner product
- Norm
  - Exercise: City-block metric
  - Exercise: Ellipse w.r.t. the city-block metric
  - Exercise: From norm to inner product
  - Exercise: From norm to inner product (concrete)
- Angle
  - Exercise: Angle with respect to an inner product
  - Exercise: Angle with respect to an inner product
  - Exercise: Angle with respect to an inner product

Orthonormal basis systems
- Definition
  - Exercise: Pythagoras theorem
  - Exercise: Linear independence of orthogonal vectors
  - Exercise: Product of matrices of basis vectors
- Representation wrt an orthonormal basis
  - Exercise: Writing vectors in terms of an orthonormal basis
- Inner product
  - Exercise: Norm of a vector
  - Exercise: Writing polynomials in terms of an orthonormal basis and simplified inner product
- Projection
  - Exercise: Projection
  - Exercise: Is P a projection matrix

  - Exercise: Symmetry of a projection matrix
- Change of basis
  - Exercise: Change of basis
  - Exercise: Change of basis
  - Exercise: Change of basis
- Schmidt orthogonalization process
  - Exercise: Gram-Schmidt orthonormalization
  - Exercise: Gram-Schmidt orthonormalization
  - Exercise: Gram-Schmidt orthonormalization of polynomials

Matrices
- Exercise: Matrix as a sum of a symmetric and an antisymmetric matrix
- Matrix multiplication
- Matrices as linear transformations
  - Exercise: Antisymmetric matrices yield orthogonal vectors
  - Exercise: Matrices that preserve the length of all vectors
  - Exercise: Derivative as a matrix operation
  - Exercise: Derivative as a matrix operation
  - Exercise: Derivative as a matrix operation
- Rank of a matrix
- Determinant
  - Exercise: Determinants
  - Exercise: Determinant
  - Exercise: Determinant
- Inversion
- Trace
  - Exercise: Trace and determinant of a symmetric matrix
- Orthogonal matrices
- Diagonal matrices
  - Exercise: Matrices as transformations
  - Exercise: Matrices as transformations
  - Exercise: Matrices with certain properties
- Eigenvalue equation for symmetric matrices
  - Exercise: Eigenvectors of a matrix

  - Exercise: Eigenvalue problem
  - Exercise: Eigenvectors of a matrix
  - Exercise: Eigenvectors of a matrix of type vᵢvᵢᵀ
  - Exercise: Eigenvectors of a symmetric matrix are orthogonal
- General eigenvectors
  - Exercise: Matrices with given eigenvectors and -values
  - Exercise: From eigenvalues to matrices
  - Exercise: Generalized eigenvalue problem
- Complex eigenvalues
  - Exercise: Complex eigenvalues
- Nonquadratic matrices
- Quadratic forms

1 Vector spaces

1.1 Definition

1.2 Linear combinations

1.3 Linear (in)dependence

1.3.1 Exercise: Linear independence

Are the following vectors linearly independent? Do they form a basis of the given vector space V?

(a) v1 = (…), v2 = (…), V = R³.

(b) f1(x) = x² + 3x + 1, f2(x) = 3x² + 6x, f3(x) = x + 1, V vector space of polynomials of degree 2.

1.3.2 Exercise: Linear independence

Are the following vectors linearly independent? Do they form a basis of the given vector space V?

(a) v1 = (…), v2 = (…), V = R⁴.

(b) f1(x) = x, f2(x) = 3 + 4x, V vector space of polynomials of degree 1.

1.3.3 Exercise: Linear independence

Are the following vectors linearly independent? Do they form a basis of the given vector space V?

(a) v1 = (…), v2 = (…), V = R².

Solution: v1 and v2 are obviously linearly independent, and then they are also a basis of V = R² for dimensionality reasons.

(b) f1(x) = x² − x − 3, f2(x) = x² + 3x − 5, f3(x) = x − 1, V vector space of polynomials of degree 2.

Solution: Linear independence of the three vectors can be shown by proving that no linear combination of the three vectors (besides the trivial solution: all coefficients equal 0) results in the zero vector.

0 = a(x² − x − 3) + b(x² + 3x − 5) + c(x − 1)    (1)
  = (a + b)x² + (−a + 3b + c)x + (−3a − 5b − c)    (2)
⇒ 0 = a + b    (3)
  0 = −a + 3b + c    (4)
  0 = −3a − 5b − c    (5)
⇒ 0 = −4a − 2b    (equations (4) + (5))    (6)
⇒ 0 = −2a    (with equation (3), i.e. b = −a)    (7)
⇒ a = 0    (8)
  b = 0    (since a = 0)    (9)
  c = 0    (since a = 0 and b = 0).    (10)

There is no other solution than the trivial one. Thus, f1 to f3 are linearly independent and form a basis of the vector space of polynomials of degree 2 for dimensionality reasons.

1.3.4 Exercise: Linear independence

Are the following vectors linearly independent? Do they form a basis of the given vector space V?

(a) v1 = (…), v2 = (…), V = R³.

Solution: v1 and v2 are obviously linearly independent, but they are not a basis for dimensionality reasons. One vector is missing to span the three-dimensional space V = R³.

(b) f1(x) = 3x², f2(x) = x, f3(x) = 1, V vector space of polynomials of degree 2.

Solution: f1 is linearly independent of the other two vectors, because it is the only one containing x². (Here it is essential to argue that f1 is linearly independent of the other two together and not only of each of the other two individually.) We thus can discard the first vector and only consider the other two vectors for linear dependence. But it is quite obvious that f2 and f3 are not linearly dependent. Thus all three vectors are linearly independent. Since the vector space of polynomials of degree 2 is 3-dimensional, f1 to f3 form a basis for dimensionality reasons.

1.3.5 Exercise: Linear independence

Are the following vectors linearly independent? Do they form a basis of the given vector space V?

(a) v1 = (…), v2 = (…), V = R³.

Solution: v1 and v2 are obviously linearly independent, but they are not a basis of V = R³ for dimensionality reasons.

(b) f1(x) = x², f2(x) = x + 3, f3(x) = x, V vector space of polynomials of degree 2.

Solution: f1 is linearly independent of the other two vectors, because it is the only one containing x². (Here it is essential to argue that f1 is linearly independent of the other two together and not only of each of the other two individually.) f2 is linearly independent of f3, because only f2 contains a constant. Since the vector space of polynomials of degree 2 is 3-dimensional, f1 to f3 form a basis for dimensionality reasons.

1.3.6 Exercise: Linear independence

Are the following vectors linearly independent? Do they form a basis of the given vector space V?

(a) v1 = (…), v2 = (…), V = R².

Solution: v1 and v2 are obviously linearly independent and they are a basis of V = R² for dimensionality reasons.

(b) f1(x) = 3x² + x, f2(x) = x − 3, f3(x) = x², f4(x) = 4x² + 5x + 3, V vector space of polynomials of degree 2.

Solution: The vector space of polynomials of degree 2 is only 3-dimensional; thus f1 to f4 cannot be linearly independent, and they are not a basis for dimensionality reasons.

1.3.7 Exercise: Linear independence

Are the following vectors linearly independent? Do they form a basis of the given vector space V?

(a) v1 = (…), v2 = (…), v3 = (…), V = R².

(b) f1(x) = sin(x), f2(x) = sin(x + π/4), f3(x) = sin(x + π/2), V = span{sin(αx), cos(αx) : α ∈ R}.

Hint: sin(x ± y) = sin(x) cos(y) ± cos(x) sin(y).

Solution: From the addition theorems for trigonometric functions we know

f2(x) = sin(x + π/4) = cos(π/4) sin(x) + sin(π/4) cos(x) = (1/√2) sin(x) + (1/√2) cos(x)

and

f3(x) = sin(x + π/2) = cos(π/2) sin(x) + sin(π/2) cos(x) = 0 · sin(x) + 1 · cos(x) = cos(x).

Thus, f1 to f3 are obviously not linearly independent, since f2 = (1/√2) f1 + (1/√2) f3, and they are therefore not a basis. What is the vector space V anyway?
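Linear independence of a finite set of coordinate vectors can also be checked mechanically via the rank of the matrix whose columns are the vectors. The following sketch uses the polynomials of exercise 1.3.3 (b) as reconstructed above (f1 = x² − x − 3, f2 = x² + 3x − 5, f3 = x − 1), written as coefficient vectors with respect to the monomial basis x², x, 1:

```python
import numpy as np

# Columns are the coefficient vectors (x^2, x, 1) of
# f1 = x^2 - x - 3, f2 = x^2 + 3x - 5, f3 = x - 1.
M = np.array([[ 1,  1,  0],
              [-1,  3,  1],
              [-3, -5, -1]], dtype=float)

rank = np.linalg.matrix_rank(M)
print(rank)  # 3 -> the three polynomials are linearly independent
```

Full rank (here 3) means only the trivial linear combination gives the zero vector, matching the hand computation above.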

1.3.8 Exercise: Linear independence in C³

Are the following vectors in C³ over the field C linearly independent? Do they form a basis?

r1 = (…), r2 = (…), r3 = (…).    (1)

Solution: The question is whether there exist three constants a, b, and c in C, which are not all zero, but for which ar1 + br2 + cr3 = 0 holds. With b =: b_r + i b_i and c =: c_r + i c_i we find

0 = ar1 + br2 + cr3    (2)
⇔ 0 = a + b + c    (3)
  0 = a + i b i c    (4)
⇔ 0 = a b c    (5)
  0 = b c + i b i c    (6)
⇐ 0 = b( + i) + c( i)    (7)
⇔ 0 = (b_r + i b_i)( + i) + (c_r + i c_i)( i)    (8)
⇔ 0 = b_r + i b_r i b_i b_i c_r i c_r i c_i + c_i    (9)
⇔ 0 = (b_r b_i c_r + c_i) + i (+b_r b_i c_r c_i)    (10)
⇔ 0 = b_r b_i c_r + c_i    (11)
  0 = (b_i c_r) + (b_r + c_i)    (12)
  0 = b_r b_i c_r c_i    (13)
  0 = (b_i c_r) (b_r + c_i)    (14)
⇔ 0 = b_i c_r    (15)
  0 = b_r c_i    (16)
⇐ a = i    (17)
  b = 1    (18)
  c = i,    (19)

and verify

ar1 + br2 + cr3 = 0.    (20)

This shows that the three vectors in C³ are linearly dependent. The "⇐" in line (7) is needed here, because we are searching for a solution of Equation (2), but we cannot derive a unique solution; i.e., the values of a, b, c found above satisfy (2), but (2) alone does not determine a, b, c uniquely. One could have suspected that right from the start, because the second and third components are copies of each other in all three vectors. This means that the three vectors can only be linearly independent if the vectors shortened by the last component are linearly independent. But since there are no three linearly independent vectors in C² over the field C, just as there are none in R², the three vectors r1 to r3 must be linearly dependent.
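The closing argument generalizes: any three vectors in C³ whose second and third components coincide effectively live in C², so their rank is at most 2. The exact printed entries of r1, r2, r3 are garbled in this transcription, so the following values are only an illustrative assumption with that structure:

```python
import numpy as np

# Illustrative vectors in C^3 whose second and third components coincide
# (the exact entries in the exercise are an assumption here).
r1 = np.array([1,  1,      1     ], dtype=complex)
r2 = np.array([1,  1 + 1j, 1 + 1j], dtype=complex)
r3 = np.array([1j, 1 - 1j, 1 - 1j], dtype=complex)

A = np.column_stack([r1, r2, r3])
rank = np.linalg.matrix_rank(A)
print(rank)  # 2 -> the three vectors are linearly dependent
```

Since two rows of A are identical, the rank can never reach 3, whatever the concrete entries are.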

1.4 Basis systems

1.4.1 Exercise: Vector space of the functions sin(x + φ)

Show that the set of functions V = {f(x) = A sin(x + φ) | A ∈ R, φ ∈ [0, 2π]} generates a vector space over the field of real numbers. Find a basis for V and determine its dimension.

Hint: The addition theorems for trigonometric functions are helpful for the solution, in particular sin(x + y) = sin(x) cos(y) + sin(y) cos(x).

Solution: From the addition theorems for trigonometric functions follows:

f(x) = A sin(x + φ) = A cos(φ) sin(x) + A sin(φ) cos(x).    (1)

Since A, sin(φ), and cos(φ) are simply real numbers, every f(x) can be written as a linear combination of sin(x) and cos(x). On the other hand, every linear combination of sin(x) and cos(x) can be written as A sin(x + φ) and is therefore an element of V, since A(cos(φ), sin(φ))ᵀ can realize any pair of two real numbers. Since the latter also holds for the pairs (1, 0)ᵀ and (0, 1)ᵀ, the two functions sin(x) and cos(x) are themselves elements of V. These three properties make sin(x) and cos(x) a basis of V, which implies that V is a two-dimensional vector space.

1.4.2 Exercise: Basis systems

1. Find two different basis systems for the vector space of the polynomials of degree 3.

Solution: The most obvious basis is 1, x, x², x³. A different basis can be derived from this by simply adding some of the previous ones, e.g., 1, x + 1, x² + x, x³ + x².

2. Find a basis for the vector space of symmetric 3×3 matrices.

Solution: For symmetric matrices the diagonal elements can be chosen independently, but opposing off-diagonal elements are coupled together. Thus, a basis for symmetric 3×3 matrices, for instance, is (rows separated by semicolons)

(1 0 0; 0 0 0; 0 0 0), (0 0 0; 0 1 0; 0 0 0), (0 0 0; 0 0 0; 0 0 1),    (1)
(0 1 0; 1 0 0; 0 0 0), (0 0 1; 0 0 0; 1 0 0), (0 0 0; 0 0 1; 0 1 0).    (2)

1.4.3 Exercise: Dimension of a vector space

Determine the dimension of the following vector spaces.

(a) Vector space of real symmetric n×n matrices (for a given n).

Solution: A symmetric n×n matrix has n² coefficients.
However, only the coefficients in the upper right triangle (including the diagonal) can be chosen freely; those in the lower left triangle (without the diagonal) follow from the symmetry condition. Thus, there are n(n + 1)/2 free parameters, and that is the dimension of the vector space of symmetric n×n matrices.
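The counting argument can be made concrete by enumerating the canonical basis of this space, one basis matrix per free position (i, j) with i ≤ j, as in the 3×3 example above:

```python
import numpy as np
from itertools import combinations_with_replacement

def symmetric_basis(n):
    """Canonical basis of the space of real symmetric n x n matrices:
    one matrix per index pair (i, j) with i <= j."""
    basis = []
    for i, j in combinations_with_replacement(range(n), 2):
        B = np.zeros((n, n))
        B[i, j] = B[j, i] = 1.0
        basis.append(B)
    return basis

n = 3
basis = symmetric_basis(n)
print(len(basis), n * (n + 1) // 2)  # both 6, the dimension n(n+1)/2
```

The number of pairs (i, j) with i ≤ j is exactly n(n + 1)/2, reproducing the dimension derived above.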

(b) Vector space of mixed polynomials in x and y (e.g. f(x, y) = x²y + x − 3y + 5) that have a maximal degree of n in x and y (i.e. for each term x^{n_x} y^{n_y} in the polynomial, n_x + n_y ≤ n must hold).

Solution: There are (n + 1) different monomials in x and (n + 1) different monomials in y with a maximal degree of n. Combining them to all possible mixed monomials leads to (n + 1)² terms, which can be arranged in a square matrix (see below for n = 2). Within this matrix the monomials in the lower right triangle (strictly below the anti-diagonal) are not permitted, because they have too high a degree. Thus, the dimension of the vector space of mixed polynomials in x and y with maximal degree n is (n + 1)(n + 2)/2.

1     x     x²
y     xy    x²y
y²    xy²   x²y²

1.4.4 Exercise: Dimension of a vector space

Determine the dimension of the following vector spaces.

1. Vector space of real antisymmetric n×n matrices (for a given n). A matrix M is antisymmetric if Mᵀ = −M. Mᵀ is the transpose of M.

Solution: The dimension of the vector space of all real n×n matrices (for a given n) is n². Since antisymmetric matrices have zeros on the diagonal, this reduces the dimensionality by n down to (n² − n) = n(n − 1); since the entries in the lower left triangle are the negative of the entries in the upper right triangle, this reduces the dimensionality by half to n(n − 1)/2.

2. Vector space of the series that converge to zero.

Solution: Consider the series (1, 0, 0, . . .), (0, 1, 0, 0, . . .), (0, 0, 1, 0, 0, . . .), etc. Each series converges to zero, they are all linearly independent of each other, and we can create infinitely many of them. Thus, the vector space is infinite-dimensional.

1.5 Representation wrt a basis

1.5.1 Exercise: Representation of vectors w.r.t. a basis

Write the vectors w.r.t. the given basis.

(a) Vectors: v1 = (…)_e, v2 = (…)_e, v3 = (3, −5)ᵀ_e; Basis: b1 = (1, 1)ᵀ_e, b2 = (1, −1)ᵀ_e. The subscript e indicates the canonical basis.

Solution: The first solutions are obvious: Since v1 = b1, we have v1 = (1, 0)ᵀ_b, and the representation of v2 can also be seen easily.
The solution for v3 is a bit more complex and can be derived by solving a linear system of equations.

(3, −5)ᵀ_e = a b1 + b b2    (1)
⇔ 3 = a + b    (2)
  −5 = a − b    (3)
⇒ −2 = 2a    (1st + 2nd equation)    (4)
⇒ a = −1    (5)
  b = 4    (since a = −1)    (6)
⇒ v3 = (−1, 4)ᵀ_b.    (7)
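The same linear system can be solved numerically by collecting the basis vectors as columns of a matrix B and solving B c = v for the coordinate vector c. The values below are those reconstructed above (b1 = (1, 1)ᵀ, b2 = (1, −1)ᵀ, v3 = (3, −5)ᵀ); the printed entries are partly garbled, so treat them as an assumption:

```python
import numpy as np

# Basis vectors as columns (reconstructed values, see the text above).
B = np.array([[1.0,  1.0],
              [1.0, -1.0]])
v = np.array([3.0, -5.0])

coeffs = np.linalg.solve(B, v)  # solve B @ coeffs = v
print(coeffs)                   # -> [-1.  4.], i.e. v = -1*b1 + 4*b2
assert np.allclose(B @ coeffs, v)
```

This reproduces the hand computation a = −1, b = 4.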

(b) Vector: f(x) = 3x² + 3 − x; Basis: x², x, 1.

Solution: This is very easy to solve, since the basis consists of the pure monomials. One simply stacks the coefficients of the polynomial, which yields

f(x) ≐ (3, −1, 3)ᵀ.    (9)

(c) Vector: g(x) = (x + 1)(x − 1); Basis: x³, x² + 1, x, 1.

Solution: This, too, is easy to solve, once the polynomial is multiplied out:

g(x) = (x + 1)(x − 1) = x² − 1 = (x² + 1) − 2 · 1, i.e. g ≐ (0, 1, 0, −2)ᵀ.    (10)

1.5.2 Exercise: Representation of vectors w.r.t. a basis

Write the vectors w.r.t. the given basis.

(a) Vector: v = (…)_e; Basis: b1 = (…)_e, b2 = (…)_e, b3 = (…)_e. The subscript e indicates the canonical basis.

Solution: This can be easily solved in a cascade. First one determines the contribution of b1 to get the second component, since b1 is the only basis vector that contributes there. Then one determines the contribution of b2 to get the first component, since b3 does not contribute there. Finally one determines the contribution of b3 to get the third component. This way one finds v = (…)_b.

(b) Vector: h(x) = 3x² − x − 3; Basis: x² − x + 1, x + 1, x² − 1.

Solution: This example is difficult to solve directly. Thus, one has to solve a system of linear equations.

3x² − x − 3 = a(x² − x + 1) + b(x + 1) + c(x² − 1)    (1)
  = (a + c)x² + (−a + b)x + (a + b − c)    (2)
⇒ 3 = a + c    (3)
  −1 = −a + b    (4)
  −3 = a + b − c    (5)
⇒ 2 = b + c    (1st + 2nd equation)    (6)
  −6 = b − 2c    (3rd − 1st equation)    (7)
⇒ −8 = −3c    (2nd − 1st of the previous two)    (8)
⇒ c = 8/3    (9)
  a = 1/3    (since a = 3 − c)    (10)
  b = −2/3    (since b = 2 − c)    (11)
⇒ h ≐ (1/3, −2/3, 8/3)ᵀ_b.    (12)

2 Euclidean vector spaces

2.1 Inner product

2.1.1 Exercise: Inner product for functions

Consider the space of real continuous functions defined on [−1, 1] for which ∫₋₁¹ [f(x)]² dx exists. Let the inner product be

(f, g) := ∫₋₁¹ w(x)f(x)g(x) dx,    (1)

with an arbitrary positive weighting function

0 < w(x) < ∞.    (2)

1. Prove that (1) is indeed an inner product.

Solution: We simply have to verify the axioms of the inner product. The first three are trivial.

(λf, g) = ∫ w(x)λf(x)g(x) dx = λ ∫ w(x)f(x)g(x) dx = λ(f, g).    (3-5)

(f + h, g) = ∫ w(x)(f(x) + h(x))g(x) dx = ∫ w(x)f(x)g(x) dx + ∫ w(x)h(x)g(x) dx = (f, g) + (h, g).    (6-8)

(f, g) = ∫ w(x)f(x)g(x) dx = ∫ w(x)g(x)f(x) dx = (g, f).    (9-11)

The fourth one is more subtle. We first show (f, f) ≥ 0:

0 ≤ f²(x) ⇒ 0 ≤ w(x)f²(x) ⇒ 0 ≤ ∫ w(x)f²(x) dx ⇔ 0 ≤ (f, f).    (12-15)

It is also easy to show that (f(x) ≡ 0) ⇒ ((f, f) = 0), since

f(x) ≡ 0 ⇒ w(x)f²(x) ≡ 0 ⇒ ∫ w(x)f²(x) dx = 0 ⇔ (f, f) = 0.    (16-19)

Really difficult is the other direction, ((f, f) = 0) ⇒ (f(x) ≡ 0). We can prove that by showing the inverse direction for the negation, i.e. (f(x) ≢ 0) ⇒ ((f, f) ≠ 0). (f(x) ≢ 0) means that there is an x₀ ∈ [−1, 1] for which f(x₀) ≠ 0. Because of the continuity of f there is then also a finite δ-neighborhood around x₀ for which f(x) ≠ 0, and because of w(x) > 0 follows ∫_{x₀−δ}^{x₀+δ} w(x)f²(x) dx > 0 (for simplicity I have assumed here that x₀ is an inner point, but something analogous holds for a point at the border). Furthermore, one can easily show that ∫_a^b w(x)f²(x) dx ≥ 0 for arbitrary a < b; compare the proof above for (f, f) ≥ 0. But that means that the positive contribution of the integral around x₀ cannot be compensated for by a negative contribution at another location. Therefore ∫₋₁¹ w(x)f²(x) dx = (f, f) > 0. Thus we have shown that ((f, f) = 0) ⇒ (f(x) ≡ 0), and since we have shown the other direction already farther above, we have proven

(f, f) = 0 ⇔ f(x) ≡ 0,    (20)

as required. (1) therefore is a scalar product.

2. Show whether (1) is an inner product also for non-continuous functions.

Solution: We show that (1) is no inner product by finding a counterexample. If, for instance, f(0) = 1 and f(x) = 0 otherwise, then f is obviously not the zero-function, but still the weighted integral over f²(x) vanishes, because f differs from zero only in a single point. Thus, the fourth axiom is violated and we don't have an inner product anymore.

3. Show whether (1) is an inner product for continuous functions even if the weighting function is positive only in the interior of the interval, i.e. if w(x) > 0 ∀x ∈ (−1, 1) but w(±1) = 0.

Solution: The first properties of the inner product are not critical. Only the fourth one requires further consideration. With the new weighting function, a difference to the inner product above could only arise at the border. If w(−1) = 0, then f(−1) could be non-zero without contributing to the integral. However, since f has to be continuous, f(x) ≠ 0 would then also hold in a small δ-neighborhood of −1, and in this neighborhood w(x) > 0 holds, so that there we would indeed get a positive contribution to the integral. This means the fourth axiom is still valid and (1) is an inner product.

2.1.2 Exercise: Representation of an inner product

Let V be an N-dimensional vector space over R and let {bᵢ} with i = 1, ..., N be a basis. Let x̃ = (x̃1, ..., x̃N)ᵀ_b and ỹ = (ỹ1, ..., ỹN)ᵀ_b be the representations of two vectors x, y ∈ V with respect to the basis {bᵢ}. Show that

(x, y) = x̃ᵀ A ỹ,    (1)

where A is an N×N matrix.

Solution: Since x̃ and ỹ are representations of x and y with respect to the basis bᵢ, we have x = Σᵢ x̃ᵢbᵢ and y = Σᵢ ỹᵢbᵢ. With this we get

(x, y) = (Σᵢ x̃ᵢbᵢ, Σⱼ ỹⱼbⱼ)    (2)
  = Σᵢⱼ x̃ᵢ (bᵢ, bⱼ) ỹⱼ    (3)
  = x̃ᵀ A ỹ    with Aᵢⱼ := (bᵢ, bⱼ).    (4)

2.2 Norm

2.2.1 Exercise: City-block metric

The norm of the city-block metric is defined as

‖x‖_CB := Σᵢ |xᵢ|.

Prove that this actually is a norm.

Solution: Properties 1 and 3 are trivially true. Property 2 holds because

‖x + y‖_CB = Σᵢ |xᵢ + yᵢ| ≤ Σᵢ (|xᵢ| + |yᵢ|) = Σᵢ |xᵢ| + Σⱼ |yⱼ| = ‖x‖_CB + ‖y‖_CB.    (1)

2.2.2 Exercise: Ellipse w.r.t. the city-block metric

What does an ellipse in a two-dimensional space with city-block metric look like?

An ellipse is the set of all points x whose sum of the distances ‖x − a‖ and ‖x − b‖ equals r, given the two focal points a and b and a radius r. Examine the following cases:

(a) a = (−1, 0)ᵀ, b = (1, 0)ᵀ, and r = 4.

Solution: The norm of the city-block metric is defined as ‖x‖_CB := Σᵢ |xᵢ|. We take an intuitive approach to finding the ellipse, see left figure below (there is also a tedious formal one by making a number of case distinctions).

Figure (left): (Wiskott group, 2017), copyright status unclear. Figure (right): (Wiskott group, 2017), copyright status unclear.

We start at an easy-to-find point, e.g. (0, 1), which has equal distance to a and b with a total distance of 4. If the point is shifted to the left, the distance to b grows but the distance to a is reduced by the same amount, so that the total distance to the focal points remains constant, as it should. This works also to the right, overall from (−1, 1) to (1, 1). As we move to the left beyond (−1, 1), the horizontal distance to a and b grows, thus we have to reduce the vertical distance correspondingly, which results in a movement to the lower left at an angle of 45° until the point reaches (−2, 0). One could proceed like that, but the shape of the complete ellipse already follows from the upper left part for symmetry reasons. The result is an angular ellipse, see right figure above.

(b) a = (…)ᵀ, b = (…)ᵀ, and r = 4.

Solution:

Figure (left): (Wiskott group, 2017), copyright status unclear. Figure (right): (Wiskott group, 2017), copyright status unclear.

One can find the solution in a similar way as described in (a), see left figure above. We start at an easy-to-find point. If you move from there to the right, you see that the distance to a grows, but the distance to b is reduced by the same amount, so that the total distance remains constant, as it should. However, beyond a certain point, the distance to b also grows. Therefore, the point has to be shifted upwards at the same rate as it moves to the right to keep the total distance to the focal points constant. The point thus moves at an angle of 45° to the abscissa until it reaches (3, …). One can proceed in this way, but the complete ellipse already follows from these two parts for symmetry reasons. The result is an octagon, see right figure above.

2.2.3 Exercise: From norm to inner product

Every inner product (·, ·) defines a norm by ‖x‖ := √(x, x). Show that a norm can also define an inner product over the field R (if it exists, which is the case if the parallelogram law ‖x + y‖² + ‖x − y‖² = 2(‖x‖² + ‖y‖²) holds).

Hint: Make the ansatz (x + y, x + y) = ... and derive a formula for the inner product given a norm.

Solution:

(x + y, x + y) = (x, x + y) + (y, x + y)    (1)
  = (x, x) + (x, y) + (y, x) + (y, y)    (2)
  = (x, x) + 2(x, y) + (y, y)    (3)
⇔ (x, y) = ½((x + y, x + y) − (x, x) − (y, y))    (4)
  = ½(‖x + y‖² − ‖x‖² − ‖y‖²).    (5)

Thus, given a norm ‖x‖, a corresponding inner product can be derived with

(x, y) := ½(‖x + y‖² − ‖x‖² − ‖y‖²).    (6)

Alternatively, one can also use

(x, y) := ¼(‖x + y‖² − ‖x − y‖²),    (7)

known as the polarization identity (D: Polarisationsidentität).

2.2.4 Exercise: From norm to inner product (concrete)

Given a norm ‖x‖, a corresponding inner product can be derived with

(x, y) := ½(‖x + y‖² − ‖x‖² − ‖y‖²).    (1)

Derive the corresponding inner product for the norm

‖g‖ := √(∫ w(x)g²(x) dx),    (2)

with w(x) being some arbitrary strictly positive function.

Solution:

(g, f) = ½(‖g + f‖² − ‖g‖² − ‖f‖²)
  = ½(∫ w(x)(g(x) + f(x))² dx − ∫ w(x)g²(x) dx − ∫ w(x)f²(x) dx)
  = ½ ∫ w(x)((g(x) + f(x))² − g²(x) − f²(x)) dx
  = ½ ∫ 2w(x)g(x)f(x) dx
  = ∫ w(x)g(x)f(x) dx.

2.3 Angle

2.3.1 Exercise: Angle with respect to an inner product

Draw the following vectors and calculate the angle between them with respect to the given inner product.

1. v1 = (…) and v2 = (…) with the standard Euclidean inner product.

2. f1(x) = x³ + 1 and f2(x) = 3x with the inner product (f, g) := ∫₋₁¹ f(x)g(x) dx.

Solution:

(f1, f2) = ∫₋₁¹ f1(x)f2(x) dx = ∫₋₁¹ (x³ + 1)3x dx = ∫₋₁¹ (3x⁴ + 3x) dx = ∫₋₁¹ 3x⁴ dx (since odd functions integrated over [−1, +1] vanish for symmetry reasons) = 3[x⁵/5]₋₁¹ = 6/5,

‖f1‖² = ∫₋₁¹ (x³ + 1)² dx = ∫₋₁¹ (x⁶ + 2x³ + 1) dx = ∫₋₁¹ (x⁶ + 1) dx (s.a.) = [x⁷/7]₋₁¹ + [x]₋₁¹ = 2/7 + 2 = 16/7,

‖f2‖² = ∫₋₁¹ (3x)² dx = ∫₋₁¹ 9x² dx = 9[x³/3]₋₁¹ = 6,

thus α = arccos((f1, f2)/(‖f1‖ ‖f2‖)) = arccos((6/5)/√(16/7 · 6)) = arccos((6/5)/√(96/7)).

Drawing not available.

2.3.2 Exercise: Angle with respect to an inner product

Draw the following vectors and calculate the angle between them with respect to the given inner product.

(a) v1 = (…) and v2 = (…) with the standard Euclidean inner product.

Solution: (v1, v2) = −5, ‖v1‖ = 5, ‖v2‖ = √2, thus α = arccos(−5/(5√2)) = arccos(−1/√2) = 3π/4. Drawing not available.

(b) f1(x) = arctan(x) and f2(x) = cos(x) with the inner product (f, g) := ∫ exp(−x²)f(x)g(x) dx.

Solution: f1 is odd, f2 and exp(−x²) are even. The product of the three functions is therefore odd. Thus, the integral vanishes and the two vectors are orthogonal. Drawing not available.
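The angle computation in 2.3.1 can be cross-checked numerically by approximating the integrals with a midpoint rule on a fine grid:

```python
import numpy as np

# Numerical cross-check of the angle between f1(x) = x^3 + 1 and
# f2(x) = 3x w.r.t. (f, g) = integral over [-1, 1] of f(x) g(x) dx.
N = 100_000
x = -1.0 + (np.arange(N) + 0.5) * (2.0 / N)   # midpoints of N subintervals
dx = 2.0 / N

def inner(f, g):
    return float(np.sum(f(x) * g(x)) * dx)

f1 = lambda t: t**3 + 1
f2 = lambda t: 3 * t

cos_alpha = inner(f1, f2) / np.sqrt(inner(f1, f1) * inner(f2, f2))
alpha_deg = np.degrees(np.arccos(cos_alpha))
print(round(alpha_deg, 1))  # approximately 71.1 degrees
```

The numerical values reproduce (f1, f2) = 6/5, ‖f1‖² = 16/7, and ‖f2‖² = 6 from the derivation above.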

2.3.3 Exercise: Angle with respect to an inner product

Draw the following vectors and calculate the angle between them with respect to the given inner product.

(a) v1 = (…) and v2 = (…) with the inner product (x, y) := xᵀ (…) y.

(b) f1(x) = 3x² and f2(x) = x with the inner product (f, g) := ∫₋₁¹ f(x)g(x) dx.

3 Orthonormal basis systems

3.1 Definition

3.1.1 Exercise: Pythagoras theorem

Prove the generalized Pythagoras theorem: Let vᵢ, i ∈ {1, ..., N} be pairwise orthogonal vectors. Then

‖Σᵢ vᵢ‖² = Σᵢ ‖vᵢ‖²

holds.

Solution: We can show directly

‖Σᵢ vᵢ‖² = (Σᵢ vᵢ, Σⱼ vⱼ)    (1)
  = Σᵢⱼ (vᵢ, vⱼ)    (2)
  = Σᵢ (vᵢ, vᵢ)    (because (vᵢ, vⱼ) = 0 if i ≠ j)    (3)
  = Σᵢ ‖vᵢ‖².    (4)

3.1.2 Exercise: Linear independence of orthogonal vectors

Show that N pairwise orthogonal vectors (not permitting the zero vector) are always linearly independent.

Solution: Proof by contradiction: We assume the pairwise orthogonal non-zero vectors vᵢ are linearly dependent. Then there exist factors aᵢ, of which at least one is not zero, so that Σᵢ aᵢvᵢ = 0. From this follows

0 = Σᵢ aᵢvᵢ    (1)
⇒ 0 = (Σᵢ aᵢvᵢ, vⱼ) ∀j    (2)
⇔ 0 = Σᵢ aᵢ(vᵢ, vⱼ) ∀j    (3)
⇔ 0 = aⱼ(vⱼ, vⱼ) ∀j    (since the vectors are orthogonal)    (4)
⇔ 0 = aⱼ ∀j    (since the vectors have non-zero norm),    (5)

which is a contradiction to the assumption. Thus, the assumption is not true and all pairwise orthogonal non-zero vectors are linearly independent.

Ekaterina Kuzminykh (SS 2017) came up with the following solution:

0 = Σᵢ aᵢvᵢ    (6)
⇒ 0 = ‖Σᵢ aᵢvᵢ‖²    (7)
  = (Σᵢ aᵢvᵢ, Σⱼ aⱼvⱼ)    (8)
  = Σᵢⱼ aᵢaⱼ(vᵢ, vⱼ)    (9)
  = Σᵢ aᵢ²(vᵢ, vᵢ)    (since (vᵢ, vⱼ) = 0 for i ≠ j)    (10)
⇒ aᵢ = 0 ∀i    (since the vectors have non-zero norm).    (11)

3.1.3 Exercise: Product of matrices of basis vectors

Let {bᵢ}, i = 1, ..., N, be an orthonormal basis and 1_N indicate the N-dimensional identity matrix.

1. Show that (b1, b2, ..., bN)ᵀ(b1, b2, ..., bN) = 1_N. Does the result also hold if one only takes the first N − 1 basis vectors? If not, try to interpret the resulting matrix.

Solution:

(b1, b2, ..., bN)ᵀ(b1, b2, ..., bN) = (b1ᵀ; b2ᵀ; ...; bNᵀ)(b1, b2, ..., bN) = 1_N    (1)

follows directly from the fact that bᵢᵀbⱼ = δᵢⱼ by definition of an orthonormal basis.

2. Show that (b1, b2, ..., bN)(b1, b2, ..., bN)ᵀ = 1_N. Does the result also hold if one only takes the first N − 1 basis vectors? If not, try to interpret the resulting matrix.

Solution: Writing an arbitrary vector v in terms of the basis bᵢ is done by

v_b = (v1_b, ..., vN_b)ᵀ = (b1ᵀv, ..., bNᵀv)ᵀ = (b1ᵀ; ...; bNᵀ) v.    (2)

Writing the vector v_b, which is given in terms of the basis bᵢ, in terms of the Euclidean basis again is done by

v = Σᵢ vᵢ_b bᵢ = (b1, b2, ..., bN) v_b.    (3)

Combining these two transformations results in

v = (b1, b2, ..., bN)(b1ᵀ; ...; bNᵀ) v.    (4)

Since this is true for any vector v, we conclude that

(b1, b2, ..., bN)(b1, b2, ..., bN)ᵀ = (b1, b2, ..., bN)(b1ᵀ; ...; bNᵀ) = 1_N.    (5)

3.2 Representation wrt an orthonormal basis

3.2.1 Exercise: Writing vectors in terms of an orthonormal basis

Given the orthonormal basis

b1 = (…), b2 = (…), b3 = (…).    (1)

1. Write the vectors v1 and v2 in terms of the orthonormal basis bᵢ:

v1 = (…), v2 = (…).    (2)

Solution: Since the bᵢ form an orthonormal basis, the coefficients of the vectors vⱼ in terms of this basis can simply be computed with the inner products (bᵢ, vⱼ). Thereby we get

v1_b = (b1ᵀv1, b2ᵀv1, b3ᵀv1)ᵀ = (…)_b,    (3)
v2_b = (b1ᵀv2, b2ᵀv2, b3ᵀv2)ᵀ = (…)_b.    (4)

2. What is the matrix with which you could transform any vector given in terms of the Euclidean basis into a representation in terms of the orthonormal basis bᵢ?

Solution: Since the coefficients can be computed with the inner products (bᵢ, vⱼ) = bᵢᵀvⱼ, the transformation matrix T simply consists of the basis vectors as rows:

T = (b1ᵀ; b2ᵀ; b3ᵀ).    (5)
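For an orthonormal basis no linear system has to be solved: stacking the basis vectors as columns of a matrix B, the coefficients are just Bᵀv. The printed entries of the bᵢ are garbled in this transcription, so the basis below is an illustrative assumption:

```python
import numpy as np

# Illustrative orthonormal basis of R^3 (an assumption, not the printed one).
b1 = np.array([1.0,  1.0, 0.0]) / np.sqrt(2)
b2 = np.array([1.0, -1.0, 0.0]) / np.sqrt(2)
b3 = np.array([0.0,  0.0, 1.0])
B = np.column_stack([b1, b2, b3])

v = np.array([2.0, 0.0, 3.0])
coeffs = B.T @ v            # inner products (b_i, v); no solve needed
print(coeffs)               # -> [sqrt(2), sqrt(2), 3]
assert np.allclose(B @ coeffs, v)
```

Note that B.T here plays exactly the role of the transformation matrix T of part 2.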

3.3 Inner product

3.3.1 Exercise: Norm of a vector

Let bᵢ, i = 1, ..., N, be an orthonormal basis. Then we have (bᵢ, bⱼ) = δᵢⱼ and

v = Σᵢ vᵢbᵢ with vᵢ := (v, bᵢ).    (1)

Show that

‖v‖² = Σᵢ vᵢ².    (2)

Solution: We can show directly that

‖v‖² = (v, v)    (3)
  = (Σᵢ vᵢbᵢ, Σⱼ vⱼbⱼ)    (4)
  = Σᵢⱼ vᵢvⱼ(bᵢ, bⱼ)    (5)
  = Σᵢ vᵢ²(bᵢ, bᵢ)    (since (bᵢ, bⱼ) = 0 for i ≠ j)    (6)
  = Σᵢ vᵢ²    (since the basis vectors are normalized to 1).    (7)

3.3.2 Exercise: Writing polynomials in terms of an orthonormal basis and simplified inner product

The normalized Legendre polynomials L0 = √(1/2), L1 = √(3/2) x, L2 = √(5/8)(−1 + 3x²) form an orthonormal basis of the vector space of polynomials of degree 2 with respect to the inner product (f, g) = ∫₋₁¹ f(x)g(x) dx.

1. Write the following polynomials in terms of the basis L0, L1, L2:

f1(x) = 1 + x², f2(x) = 3 − 2x².

Verify the result.

Solution: Since the Lᵢ form an orthonormal basis, the coefficients of the vectors fⱼ in terms of this basis can simply be computed with the inner products (fⱼ, Lᵢ). Note that if fⱼLᵢ is an odd function, the integral over the interval [−1, 1] vanishes for symmetry reasons. Thereby we get

(f1, L0) = ∫₋₁¹ (1 + x²)√(1/2) dx = ([x]₋₁¹ + [x³/3]₋₁¹)√(1/2) = (2 + 2/3)√(1/2) = (8/3)√(1/2) = (4/3)√2,    (1-2)
(f1, L1) = ∫₋₁¹ (1 + x²)√(3/2) x dx = 0    (for symmetry reasons),    (3)
(f1, L2) = ∫₋₁¹ (1 + x²)√(5/8)(−1 + 3x²) dx    (4)
  = ∫₋₁¹ (−1 + 2x² + 3x⁴)√(5/8) dx    (5)
  = (−[x] + 2[x³/3] + 3[x⁵/5])₋₁¹ √(5/8)    (6)
  = (−2 + 4/3 + 6/5)√(5/8) = (8/15)√(5/8) = (2/3)√(2/5),    (7)
⇒ f1 ≐ ((4/3)√2, 0, (2/3)√(2/5))ᵀ_L,    (8)

f1 = (4/3)√2 L0 + 0 · L1 + (2/3)√(2/5) L2    (9)
  = (4/3)√2 √(1/2) + (2/3)√(2/5) √(5/8)(−1 + 3x²)    (10)
  = 4/3 + (1/3)(−1 + 3x²) = 1 + x²,    (11)

and

(f2, L0) = ∫₋₁¹ (3 − 2x²)√(1/2) dx = (3[x] − 2[x³/3])₋₁¹ √(1/2) = (6 − 4/3)√(1/2) = (14/3)√(1/2) = (7/3)√2,    (12-13)
(f2, L1) = ∫₋₁¹ (3 − 2x²)√(3/2) x dx = 0    (for symmetry reasons),    (14)
(f2, L2) = ∫₋₁¹ (3 − 2x²)√(5/8)(−1 + 3x²) dx    (15)
  = ∫₋₁¹ (−3 + 11x² − 6x⁴)√(5/8) dx    (16)
  = (−3[x] + 11[x³/3] − 6[x⁵/5])₋₁¹ √(5/8)    (17)
  = (−6 + 22/3 − 12/5)√(5/8) = (−16/15)√(5/8) = −(4/3)√(2/5),    (18-19)
⇒ f2 ≐ ((7/3)√2, 0, −(4/3)√(2/5))ᵀ_L,    (20)

f2 = (7/3)√2 L0 − (4/3)√(2/5) L2    (21)
  = (7/3)√2 √(1/2) − (4/3)√(2/5) √(5/8)(−1 + 3x²)    (22)
  = 7/3 − (2/3)(−1 + 3x²) = 3 − 2x².    (23)

Equations (9-11) and (21-23) were added only to verify the result.
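The Legendre coefficients derived above can be checked numerically with a midpoint-rule approximation of the inner product:

```python
import numpy as np

# Check of the Legendre-basis coefficients of f1(x) = 1 + x^2 via
# (f, g) = integral over [-1, 1] of f(x) g(x) dx (midpoint rule).
N = 100_000
x = -1.0 + (np.arange(N) + 0.5) * (2.0 / N)
dx = 2.0 / N
ip = lambda f, g: float(np.sum(f * g) * dx)

L0 = np.sqrt(1 / 2) * np.ones_like(x)
L1 = np.sqrt(3 / 2) * x
L2 = np.sqrt(5 / 8) * (-1 + 3 * x**2)
f1 = 1 + x**2

coeffs = np.array([ip(f1, L0), ip(f1, L1), ip(f1, L2)])
print(coeffs)  # ~ [4*sqrt(2)/3, 0, (2/3)*sqrt(2/5)]

# Reconstructing f1 from the coefficients reproduces the polynomial:
f1_rec = coeffs[0] * L0 + coeffs[1] * L1 + coeffs[2] * L2
assert np.allclose(f1, f1_rec, atol=1e-6)
```

The reconstruction test at the end is exactly the "Verify the result" step of the exercise, done numerically.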

2. Calculate the inner product (f1, f2) first directly with the integral and then based on the coefficients of the vectors written in terms of the basis L0, L1, L2.

Solution: We calculate directly

(f1, f2) = ∫₋₁¹ (1 + x²)(3 − 2x²) dx = ∫₋₁¹ (3 + x² − 2x⁴) dx    (1)
  = (3[x] + [x³/3] − 2[x⁵/5])₋₁¹    (2)
  = 6 + 2/3 − 4/5 = 88/15,    (3)

and based on the coefficients

(f1, f2) = ((4/3)√2, 0, (2/3)√(2/5))_L · ((7/3)√2, 0, −(4/3)√(2/5))_L    (4)
  = (4/3)(7/3) · 2 − (2/3)(4/3)(2/5) = 56/9 − 16/45 = 264/45 = 88/15.    (5)

3.4 Projection

3.4.1 Exercise: Projection

1. Project the vector v = (…)ᵀ onto the space orthogonal to the vector b = (…)ᵀ.

Solution: The standard way would be to construct basis vectors b2 and b3 for the space orthogonal to b and then project onto these with v' = Σᵢ₌₂³ (v, bᵢ)bᵢ. Simpler, however, is it to subtract the projection onto the space spanned by b. For that we first normalize b to obtain

b̂ := b/‖b‖.    (1)

v' can then be calculated as

v' = v − (v, b̂)b̂ = (…).    (2-4)

We verify that v' is indeed orthogonal to b̂ and that it is shorter than v:

(v', b̂) = 0,    (5)
‖v‖² = 6,    (6)
‖v'‖² < 6 = ‖v‖².    (7)

2. Construct a 3×3 matrix P that realizes the projection onto the subspace orthogonal to b̂, so that v' = Pv for any vector v.

Solution: We start from what we have written above to calculate v' and rewrite it with a matrix:

v' = v − (v, b̂)b̂ = v − b̂(b̂, v) = v − b̂b̂ᵀv = (1 − b̂b̂ᵀ)v,    (8)

so that

P := 1 − b̂b̂ᵀ, i.e. Pᵢⱼ = δᵢⱼ − b̂ᵢb̂ⱼ.    (9-10)

We verify that we get the same vector v' for v as above but with matrix P:

v' = Pv = (…).    (11-12)

3. Calculate the product of P with itself, i.e. PP.

Solution: There are different ways to solve this problem.

- The intuitive way is to realize that after we have projected a vector onto a subspace, the projected vector lies within the subspace, and thus projecting it a second time does not make any difference. Thus, we expect PP = P.
- If we want to be more formal, we can show that

PP = (1 − b̂b̂ᵀ)(1 − b̂b̂ᵀ)    (13)
  = 1 − 2b̂b̂ᵀ + b̂ (b̂ᵀb̂) b̂ᵀ    (with b̂ᵀb̂ = 1)    (14)
  = 1 − 2b̂b̂ᵀ + b̂b̂ᵀ = 1 − b̂b̂ᵀ = P.    (15)

- Finally, one can do it the direct (hard) way by simply multiplying the matrices entry by entry, with the same result PP = P.    (16)

3.4.2 Exercise: Is P a projection matrix

Determine whether the matrix

P = (1/5) (…)    (1)

is a projection matrix or not.

Solution: The defining property of a projection matrix is that you get the same result if you apply it twice, i.e. PP = P. We verify by direct multiplication that PP = P indeed holds.    (2)
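The construction P = 1 − b̂b̂ᵀ and its properties are easy to check numerically. Since the printed entries of b are garbled, the vector below is an assumption for illustration:

```python
import numpy as np

# Sketch of P = I - b_hat b_hat^T for an assumed b = (1, 1, 1)^T.
b = np.array([1.0, 1.0, 1.0])
b_hat = b / np.linalg.norm(b)
P = np.eye(3) - np.outer(b_hat, b_hat)

assert np.allclose(P @ P, P)   # idempotent: projecting twice = projecting once
assert np.allclose(P @ b, 0)   # the direction b is annihilated
assert np.allclose(P, P.T)     # symmetric (cf. exercise 3.4.3 below)

v = np.array([1.0, 1.0, 2.0])
print(P @ v)                   # component of v orthogonal to b
```

All three assertions correspond to properties derived in exercises 3.4.1 to 3.4.3.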

3.4.3 Exercise: Symmetry of a projection matrix

Prove that the matrix P of an orthogonal projection is always symmetric.

Solution: If {b_i} is an orthonormal basis of the space onto which P projects, then P can be written as

P = sum_i b_i b_i^T.

With this it is easy to show that

P^T = (sum_i b_i b_i^T)^T = sum_i (b_i b_i^T)^T = sum_i (b_i^T)^T b_i^T = sum_i b_i b_i^T = P.

3.5 Change of basis

3.5.1 Exercise: Change of basis

Let {a_i} and {b_i} be two orthonormal bases in R^3. Determine the matrices B_{b<-a} and B_{a<-b} for the transformations from basis a to basis b and vice versa. Are there similarities between the two matrices? What happens if you multiply the two matrices?

Solution: If a vector v is given in terms of basis a, then v in terms of the Euclidean basis is given by

v = (a1, a2, a3) (v_a1, v_a2, v_a3)^T = sum_i v_ai a_i.    (1)

Vector v given in terms of basis b can then be computed with

v_b = (b1^T v, b2^T v, b3^T v)^T.    (2)

Combining these two transformations we have

B_{b<-a} = (b1^T ; b2^T ; b3^T) (a1, a2, a3) = [ b1^T a1  b1^T a2  b1^T a3 ; b2^T a1  b2^T a2  b2^T a3 ; b3^T a1  b3^T a2  b3^T a3 ].    (3)

For B_{a<-b} one gets analogously

B_{a<-b} = (a1^T ; a2^T ; a3^T) (b1, b2, b3) = B_{b<-a}^T.    (4)

It is intuitively clear that a back- and forth transformation between bases a and b should have no effect. We verify that the product of the two matrices indeed results in the identity matrix:

B_{a<-b} B_{b<-a} = (a1^T ; a2^T ; a3^T) (b1, b2, b3) (b1^T ; b2^T ; b3^T) (a1, a2, a3) = (a1^T ; a2^T ; a3^T) 1 (a1, a2, a3) = 1,    (5)

since (b1, b2, b3)(b1^T ; b2^T ; b3^T) = sum_i b_i b_i^T = 1 for an orthonormal basis {b_i}, and likewise for {a_i}.

For the concrete bases given above we find B_{b<-a} with (3) by computing the nine products b_i^T a_j, observe that B_{a<-b} = B_{b<-a}^T, and verify by explicit multiplication that B_{a<-b} B_{b<-a} = 1.

Extra question: What would change if the bases were not orthonormal?

Extra question: How can you generalize this concept of change of basis to vector spaces of polynomials of degree 2?

3.5.2 Exercise: Change of basis

Consider two sets of three vectors {a_i} and {b_i} in R^3.

1. Are {a_i} and {b_i} an orthonormal basis of R^3? If not, make them orthonormal.
2. Find the transformation matrix B_{b<-a}.
3. Find the inverse of matrix B_{b<-a}.

3.5.3 Exercise: Change of basis

Let {a_i} and {b_i} be two orthonormal bases in R^2, and let the vector v be given in terms of basis a. Write v in terms of basis b.

Solution: First we determine the vector in the Euclidean basis,

v = v_a1 a1 + v_a2 a2.

Then we write this vector with respect to basis b,

v_b = (b1^T v, b2^T v)^T.
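The recipe of (3) holds for any pair of orthonormal bases. A NumPy sketch with two arbitrary orthonormal bases obtained from QR decompositions (illustrative bases, not the concrete ones of the exercise):

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.linalg.qr(rng.standard_normal((3, 3)))[0]  # columns are a_1, a_2, a_3
B = np.linalg.qr(rng.standard_normal((3, 3)))[0]  # columns are b_1, b_2, b_3

B_ba = B.T @ A  # entry (i, j) is b_i^T a_j; maps a-coordinates to b-coordinates
B_ab = A.T @ B  # maps b-coordinates back to a-coordinates

assert np.allclose(B_ab, B_ba.T)            # the two matrices are transposes
assert np.allclose(B_ab @ B_ba, np.eye(3))  # back-and-forth has no effect
```

The two assertions are exactly the observations of the exercise: B_{a<-b} = B_{b<-a}^T, and their product is the identity.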

3.6 Schmidt orthogonalization process

3.6.1 Exercise: Gram-Schmidt orthonormalization

Construct an orthonormal basis for the space spanned by the vectors v1, v2, v3.

Solution: There is something suspicious here. Either the three vectors are linearly independent, in which case the Euclidean basis (1, 0, 0)^T, (0, 1, 0)^T, (0, 0, 1)^T would do, or they are linearly dependent, in which case one of the three vectors can be ignored. With some guessing one sees that v3 is a linear combination of v1 and v2, so the problem reduces to finding a basis for the first two vectors. We apply the Gram-Schmidt orthonormalization to obtain the basis vectors b1 and b2:

b1 := v1 / |v1|,
b2~ := v2 - (v2, b1) b1,
b2 := b2~ / |b2~|.

Now, if one has not guessed that v3 can be expressed as a linear combination of the other two vectors but proceeds with the Gram-Schmidt procedure, one gets the following:

b3~ = v3 - (v3, b1) b1 - (v3, b2) b2 = 0.

Thus, it becomes apparent that v3 is linearly dependent on v1 and v2 and is therefore redundant. So we are done and the basis is b1 and b2. It is easy to see that the two vectors are normalized and orthogonal.

We also verify that the original vectors are correctly reproduced within the new basis:

v1 = (v1, b1) b1 + (v1, b2) b2,
v2 = (v2, b1) b1 + (v2, b2) b2,
v3 = (v3, b1) b1 + (v3, b2) b2.

Thus b1 and b2 are indeed a basis for the space spanned by the v_i.

3.6.2 Exercise: Gram-Schmidt orthonormalization

Find an orthonormal basis for the spaces spanned by the following sets of vectors.

1. Three vectors v1, v2, v3.

Solution: The three vectors are obviously linearly independent and thus span the whole three-dimensional space. A simple basis of this space is

b1 := (1, 0, 0)^T,   b2 := (0, 1, 0)^T,   b3 := (0, 0, 1)^T.

2. Two vectors v1, v2.

Solution: We see that the two vectors are not linearly dependent and not orthogonal already, and

apply Gram-Schmidt orthonormalization to obtain the basis vectors b1 and b2:

b1 := v1 / |v1|,
b2~ := v2 - (v2, b1) b1,
b2 := b2~ / |b2~|.

3.6.3 Exercise: Gram-Schmidt orthonormalization of polynomials

Construct an orthonormal basis for the space of polynomials of degree 2 in R given the inner product

(g, h) := Integral_0^1 g(x) h(x) dx

and the norm induced by this inner product.

Solution: We apply Gram-Schmidt orthogonalization to the functions g1(x) := 1, g2(x) := x, and g3(x) := x^2.

b1(x) = g1(x)/|g1| = 1   (since |g1|^2 = Integral_0^1 1 dx = 1),

b2~(x) = g2(x) - (g2, b1) b1(x) = x - 1/2   (since (g2, b1) = Integral_0^1 x dx = 1/2),

|b2~|^2 = Integral_0^1 (x - 1/2)^2 dx = Integral_0^1 (x^2 - x + 1/4) dx = 1/3 - 1/2 + 1/4 = 1/12,

b2(x) = b2~(x)/|b2~| = (x - 1/2)/sqrt(1/12) = (2x - 1) sqrt(3),

b3~(x) = g3(x) - (g3, b1) b1(x) - (g3, b2) b2(x)
       = x^2 - Integral_0^1 x^2 dx - ( Integral_0^1 x^2 (2x - 1) sqrt(3) dx ) (2x - 1) sqrt(3)
       = x^2 - 1/3 - 3 (2/4 - 1/3)(2x - 1)
       = x^2 - 1/3 - (1/2)(2x - 1)
       = x^2 - x + 1/6,

|b3~|^2 = Integral_0^1 (x^2 - x + 1/6)^2 dx = Integral_0^1 (x^4 - 2x^3 + (4/3) x^2 - (1/3) x + 1/36) dx
        = 1/5 - 1/2 + 4/9 - 1/6 + 1/36 = (36 - 90 + 80 - 30 + 5)/180 = 1/180,

b3(x) = b3~(x)/|b3~| = (x^2 - x + 1/6)/sqrt(1/180) = sqrt(180)(x^2 - x) + sqrt(5) = sqrt(5)(6x^2 - 6x + 1).

Extra question: Would the result change if we used a different inner product, e.g. with an integral over the interval [-1, +1] instead of [0, 1]?

Extra question: Seeing this basis, what does it mean to project a polynomial of degree 2 onto the space of polynomials of degree 1?

4 Matrices

4.0.1 Exercise: Matrix as a sum of a symmetric and an antisymmetric matrix

Prove that any square matrix M can be written as a sum of a symmetric matrix M+ and an antisymmetric matrix M-, i.e. M = M+ + M- with (M+)^T = M+ and (M-)^T = -M-.

Hint: Construct a symmetric matrix and an antisymmetric matrix from M.

Solution: We can make M symmetric or antisymmetric by adding or subtracting its transpose, respectively.

If we also divide by 2, to get the normalization right, we have

M+ := (M + M^T)/2  =>  (M+)^T = (M^T + M)/2 = M+,
M- := (M - M^T)/2  =>  (M-)^T = (M^T - M)/2 = -M-,
M+ + M- = (M + M^T)/2 + (M - M^T)/2 = M.

4.1 Matrix multiplication

4.2 Matrices as linear transformations

4.2.1 Exercise: Antisymmetric matrices yield orthogonal vectors

1. Show that multiplying a vector v in R^N with an antisymmetric NxN-matrix A yields a vector orthogonal to v. In other words:

A^T = -A  =>  (v, Av) = 0 for all v in R^N.

Solution: If we write the inner product in matrix notation, we find that

(v, Av) = v^T A v
        = (v^T A v)^T        (because v^T A v is a scalar)
        = v^T A^T (v^T)^T    (because (AB)^T = B^T A^T for any A and B)
        = v^T A^T v
        = -v^T A v           (because A is antisymmetric)
        = -(v, Av)
=> (v, Av) = 0,

i.e. Av is orthogonal to v.

One can get an intuition for that by performing the product explicitly for a simple example but maintaining the matrix order (Phillip Freyer, SS 9):

v^T A v = (v1, v2, v3) [ a11 a12 a13 ; a21 a22 a23 ; a31 a32 a33 ] (v1, v2, v3)^T
        = v1 (a11 v1 + a12 v2 + a13 v3) + v2 (a21 v1 + a22 v2 + a23 v3) + v3 (a31 v1 + a32 v2 + a33 v3)
        = v1 a11 v1 + v1 a12 v2 + v1 a13 v3
        + v2 a21 v1 + v2 a22 v2 + v2 a23 v3
        + v3 a31 v1 + v3 a32 v2 + v3 a33 v3
        = 0 + v1 a12 v2 + v1 a13 v3
        - v2 a12 v1 + 0 + v2 a23 v3       (since A is antisymmetric)
        - v3 a13 v1 - v3 a23 v2 + 0.

Now one can see that the terms that are related by a transposition of the matrix cancel each other out, so that the sum is zero.

2. Show the converse: if a matrix A transforms any vector v such that it becomes orthogonal to v, then A is antisymmetric. In other words:

(v, Av) = 0 for all v in R^N  =>  A^T = -A.

Solution: We know the inner product (v, Av) is zero. If we write it explicitly in terms of the coefficients, and choose v to be either a Cartesian basis vector e_i or a sum of two such vectors, i.e. e_i + e_j, then we find

0 = e_i^T A e_i = A_ii for all i,
0 = (e_i + e_j)^T A (e_i + e_j) for all i, j
  = e_i^T A e_i + e_i^T A e_j + e_j^T A e_i + e_j^T A e_j
  = A_ii + A_ij + A_ji + A_jj
  = A_ij + A_ji      (because A_ii = A_jj = 0)
=> A_ii = 0 for all i and A_ij = -A_ji for all i, j, i.e. A^T = -A,

i.e. A is antisymmetric.

This proof was fairly direct. However, there is a more elegant proof (Oswin Krause, SS 9), which requires a bit more background knowledge, namely (i) any matrix M can be written as a sum of a symmetric matrix M+ and an antisymmetric matrix M-, and (ii) if a quadratic form x^T H x with a symmetric matrix H is zero for any vector x, then H must be the zero matrix:

0 = (v, Av) for all v
  = v^T A v
  = v^T (A+ + A-) v       (with (i))
  = v^T A+ v + v^T A- v
  = v^T A+ v              (since v^T A- v = 0 by part 1)
=> A+ = 0                 (with (ii))
=> A = A-, i.e. A^T = -A,

i.e. A is antisymmetric.

4.2.2 Exercise: Matrices that preserve the length of all vectors

Let A be a matrix that preserves the length of any vector under its transformation, i.e.

|Av| = |v| for all v in R^N.

Show that A must be an orthogonal matrix.

Hint: For a square matrix M we have: v^T M v = 0 for all v in R^N  =>  M = -M^T.

Solution: Length preservation for any vector v means

|Av| = |v| for all v
<=> v^T A^T A v = v^T v for all v
<=> v^T (A^T A - 1) v = 0 for all v
=> (A^T A - 1) = -(A^T A - 1)^T    (with the hint)
              = -(A^T A - 1)      (because (A^T A - 1) is symmetric)
=> (A^T A - 1) = 0
<=> A^T A = 1,

which means that A is orthogonal.

4.2.3 Exercise: Derivative as a matrix operation

Taking the derivative of a function is a linear operation. Find a matrix that realizes a derivative on the vector spaces spanned by the following function sets. Use the given functions as a basis with respect to which you represent the vectors. Determine the rank of each matrix.

(a) {sin(x), cos(x)}.

Solution: The functions of the basis written in terms of the basis look like Euclidean basis vectors:

sin(x) = (1, 0)^T,   cos(x) = (0, 1)^T.

The derivatives are correspondingly

(sin(x))' = cos(x) = (0, 1)^T,   (cos(x))' = -sin(x) = (-1, 0)^T.

The derivative matrix is simply the combination of the column vectors resulting from taking the derivatives of the basis functions, i.e.

D = [ 0 -1 ; 1 0 ].

Interpreted as a transformation the matrix performs a rotation by 90 degrees. The rank of the matrix is obviously 2.

We can verify that for a general function f(x) = a sin(x) + b cos(x), considered as a vector f within the given vector space, the derivative can actually be computed with D:

f(x) = a sin(x) + b cos(x) = (a, b)^T =: f,
D f = [ 0 -1 ; 1 0 ] (a, b)^T = (-b, a)^T = -b sin(x) + a cos(x) = a cos(x) - b sin(x) = f'(x).

(b) {1, x + 1, x^2}.

Solution: The functions of the basis written in terms of the basis look like Euclidean basis vectors:

1 = (1, 0, 0)^T,   x + 1 = (0, 1, 0)^T,   x^2 = (0, 0, 1)^T.

The derivatives are correspondingly

(1)' = 0 = (0, 0, 0)^T,   (x + 1)' = 1 = (1, 0, 0)^T,   (x^2)' = 2x = 2(x + 1) - 2 = (-2, 2, 0)^T.

The derivative matrix is simply the combination of the column vectors resulting from taking the derivatives of the basis functions, i.e.

D = [ 0 1 -2 ; 0 0 2 ; 0 0 0 ].

The rank of the matrix is obviously 2.

We can verify that for a general function f(x) = a + b x + c x^2:

f(x) = a + b x + c x^2 = (a - b) 1 + b (x + 1) + c x^2 = (a - b, b, c)^T =: f,
D f = [ 0 1 -2 ; 0 0 2 ; 0 0 0 ] (a - b, b, c)^T = (b - 2c, 2c, 0)^T
    = (b - 2c) 1 + 2c (x + 1) + 0 x^2 = b + 2c x = f'(x).

(c) {exp(x), exp(2x)}.

Solution: The functions of the basis written in terms of the basis look like Euclidean basis vectors:

exp(x) = (1, 0)^T,   exp(2x) = (0, 1)^T.

The derivatives are correspondingly

(exp(x))' = exp(x) = (1, 0)^T,   (exp(2x))' = 2 exp(2x) = (0, 2)^T.

The derivative matrix is simply the combination of the column vectors resulting from taking the derivatives of the basis functions, i.e.

D = [ 1 0 ; 0 2 ].

Interpreted as a transformation the matrix performs a stretching along the second axis by a factor of two. The rank of the matrix is obviously 2.

We can verify that for a general function f(x) = a exp(x) + b exp(2x):

f(x) = a exp(x) + b exp(2x) = (a, b)^T =: f,
D f = (a, 2b)^T = a exp(x) + 2b exp(2x) = f'(x).

4.2.4 Exercise: Derivative as a matrix operation

Taking the derivative of a function is a linear operation. Find a matrix that realizes a derivative on the vector spaces spanned by the following function sets. Use the given functions as a basis with respect to which you represent the vectors. Determine the rank of each matrix.

(a) {sin(x), cos(x)}.

Solution: This is the same set of functions as in part (a) of the previous exercise. The functions of the basis written in terms of the basis look like Euclidean basis vectors:

sin(x) = (1, 0)^T,   cos(x) = (0, 1)^T,

and the derivatives are correspondingly

(sin(x))' = cos(x) = (0, 1)^T,   (cos(x))' = -sin(x) = (-1, 0)^T.

The derivative matrix is simply the combination of the column vectors resulting from taking the derivatives of the basis functions, i.e.

D = [ 0 -1 ; 1 0 ].

Interpreted as a transformation the matrix performs a rotation by 90 degrees. The rank of the matrix is obviously 2. For a general function f(x) = a sin(x) + b cos(x) = (a, b)^T =: f we verify

D f = (-b, a)^T = a cos(x) - b sin(x) = f'(x).

(b) {1, x + 1, x^2}.

Solution: As in part (b) of the previous exercise:

1 = (1, 0, 0)^T,   x + 1 = (0, 1, 0)^T,   x^2 = (0, 0, 1)^T,
(1)' = 0 = (0, 0, 0)^T,   (x + 1)' = 1 = (1, 0, 0)^T,   (x^2)' = 2x = 2(x + 1) - 2 = (-2, 2, 0)^T,

D = [ 0 1 -2 ; 0 0 2 ; 0 0 0 ],

with rank 2, and for f(x) = a + b x + c x^2 = (a - b, b, c)^T =: f we verify

D f = (b - 2c, 2c, 0)^T = (b - 2c) 1 + 2c (x + 1) = b + 2c x = f'(x).
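The coefficient calculations above can be checked numerically. A NumPy sketch for the basis {1, x + 1, x^2} with arbitrary sample coefficients:

```python
import numpy as np

# Derivative matrix in the basis {1, x+1, x^2}
D = np.array([[0., 1., -2.],
              [0., 0.,  2.],
              [0., 0.,  0.]])

a, b, c = 3.0, 5.0, 7.0       # f(x) = a + b x + c x^2 (arbitrary sample values)
f = np.array([a - b, b, c])   # coordinates of f in the basis {1, x+1, x^2}
df = D @ f                    # coordinates of f'

# f'(x) = b + 2c x has basis coordinates (b - 2c, 2c, 0)
assert np.allclose(df, [b - 2*c, 2*c, 0.0])
assert np.linalg.matrix_rank(D) == 2
```

The rank deficiency reflects that differentiation annihilates the constant function, so D cannot be invertible on this space.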

4.2.5 Exercise: Derivative as a matrix operation

Taking the derivative of a function is a linear operation. Find a matrix that realizes a derivative on the vector space spanned by the function set {sin(x), cos(x), x sin(x), x cos(x)}. Use the given functions as a basis. Determine the rank of the matrix.

Solution: The functions of the basis written in terms of the basis look like Euclidean basis vectors:

sin(x) = (1, 0, 0, 0)^T,   cos(x) = (0, 1, 0, 0)^T,   x sin(x) = (0, 0, 1, 0)^T,   x cos(x) = (0, 0, 0, 1)^T.

The derivatives are correspondingly

(sin(x))' = cos(x) = (0, 1, 0, 0)^T,   (cos(x))' = -sin(x) = (-1, 0, 0, 0)^T,
(x sin(x))' = sin(x) + x cos(x) = (1, 0, 0, 1)^T,   (x cos(x))' = cos(x) - x sin(x) = (0, 1, -1, 0)^T.

The derivative matrix is simply the combination of the column vectors resulting from taking the derivatives of the basis functions, i.e.

D = [ 0 -1 1 0 ; 1 0 0 1 ; 0 0 0 -1 ; 0 0 1 0 ].

The rank of the matrix is obviously 4.

4.3 Rank of a matrix

4.4 Determinant

4.4.1 Exercise: Determinants

Calculate the determinants of the following matrices.

(a) M = [ cos(phi) -sin(phi) ; sin(phi) cos(phi) ].

Solution: The formula for the determinant of a 2x2-matrix yields

|M| = cos(phi) cos(phi) - (-sin(phi)) sin(phi) = 1.

This result is not surprising, since M is a rotation matrix, which obviously does not change the volume of the unit square under its transformation.
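Both results can be checked numerically. A NumPy sketch for the 4x4 derivative matrix of 4.2.5 and the determinant of the rotation matrix:

```python
import numpy as np

# Derivative matrix in the basis {sin x, cos x, x sin x, x cos x}
D = np.array([[0., -1., 1.,  0.],
              [1.,  0., 0.,  1.],
              [0.,  0., 0., -1.],
              [0.,  0., 1.,  0.]])
assert np.linalg.matrix_rank(D) == 4      # full rank, as stated above

phi = 0.7                                 # an arbitrary rotation angle
M = np.array([[np.cos(phi), -np.sin(phi)],
              [np.sin(phi),  np.cos(phi)]])
assert np.isclose(np.linalg.det(M), 1.0)  # rotations preserve volume
```

That D has full rank here, unlike in the polynomial examples, reflects that no nonzero function in this particular span has a vanishing derivative.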

Exercises * on Linear Algebra

Exercises * on Linear Algebra Exercises * on Linear Algebra Laurenz Wiskott Institut für Neuroinformatik Ruhr-Universität Bochum, Germany, EU 4 February 7 Contents Vector spaces 4. Definition...............................................

More information

Exercises * on Principal Component Analysis

Exercises * on Principal Component Analysis Exercises * on Principal Component Analysis Laurenz Wiskott Institut für Neuroinformatik Ruhr-Universität Bochum, Germany, EU 4 February 207 Contents Intuition 3. Problem statement..........................................

More information

Exercises * on Functions

Exercises * on Functions Exercises * on Functions Laurenz Wiskott Institut für Neuroinformatik Ruhr-Universität Bochum, Germany, EU 2 February 2017 Contents 1 Scalar functions in one variable 3 1.1 Elementary transformations.....................................

More information

Solutions to the Exercises * on Multiple Integrals

Solutions to the Exercises * on Multiple Integrals Solutions to the Exercises * on Multiple Integrals Laurenz Wiskott Institut für Neuroinformatik Ruhr-Universität Bochum, Germany, EU 4 February 27 Contents Introduction 2 2 Calculating multiple integrals

More information

Math Linear Algebra II. 1. Inner Products and Norms

Math Linear Algebra II. 1. Inner Products and Norms Math 342 - Linear Algebra II Notes 1. Inner Products and Norms One knows from a basic introduction to vectors in R n Math 254 at OSU) that the length of a vector x = x 1 x 2... x n ) T R n, denoted x,

More information

Linear Algebra. and

Linear Algebra. and Instructions Please answer the six problems on your own paper. These are essay questions: you should write in complete sentences. 1. Are the two matrices 1 2 2 1 3 5 2 7 and 1 1 1 4 4 2 5 5 2 row equivalent?

More information

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra.

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra. DS-GA 1002 Lecture notes 0 Fall 2016 Linear Algebra These notes provide a review of basic concepts in linear algebra. 1 Vector spaces You are no doubt familiar with vectors in R 2 or R 3, i.e. [ ] 1.1

More information

MAT Linear Algebra Collection of sample exams

MAT Linear Algebra Collection of sample exams MAT 342 - Linear Algebra Collection of sample exams A-x. (0 pts Give the precise definition of the row echelon form. 2. ( 0 pts After performing row reductions on the augmented matrix for a certain system

More information

Linear Algebra. Min Yan

Linear Algebra. Min Yan Linear Algebra Min Yan January 2, 2018 2 Contents 1 Vector Space 7 1.1 Definition................................. 7 1.1.1 Axioms of Vector Space..................... 7 1.1.2 Consequence of Axiom......................

More information

Principal Component Analysis

Principal Component Analysis Principal Component Analysis Laurenz Wiskott Institute for Theoretical Biology Humboldt-University Berlin Invalidenstraße 43 D-10115 Berlin, Germany 11 March 2004 1 Intuition Problem Statement Experimental

More information

MATH 304 Linear Algebra Lecture 20: The Gram-Schmidt process (continued). Eigenvalues and eigenvectors.

MATH 304 Linear Algebra Lecture 20: The Gram-Schmidt process (continued). Eigenvalues and eigenvectors. MATH 304 Linear Algebra Lecture 20: The Gram-Schmidt process (continued). Eigenvalues and eigenvectors. Orthogonal sets Let V be a vector space with an inner product. Definition. Nonzero vectors v 1,v

More information

Linear Algebra- Final Exam Review

Linear Algebra- Final Exam Review Linear Algebra- Final Exam Review. Let A be invertible. Show that, if v, v, v 3 are linearly independent vectors, so are Av, Av, Av 3. NOTE: It should be clear from your answer that you know the definition.

More information

Introduction to Matrix Algebra

Introduction to Matrix Algebra Introduction to Matrix Algebra August 18, 2010 1 Vectors 1.1 Notations A p-dimensional vector is p numbers put together. Written as x 1 x =. x p. When p = 1, this represents a point in the line. When p

More information

Math 520 Exam 2 Topic Outline Sections 1 3 (Xiao/Dumas/Liaw) Spring 2008

Math 520 Exam 2 Topic Outline Sections 1 3 (Xiao/Dumas/Liaw) Spring 2008 Math 520 Exam 2 Topic Outline Sections 1 3 (Xiao/Dumas/Liaw) Spring 2008 Exam 2 will be held on Tuesday, April 8, 7-8pm in 117 MacMillan What will be covered The exam will cover material from the lectures

More information

Linear algebra II Homework #1 due Thursday, Feb A =

Linear algebra II Homework #1 due Thursday, Feb A = Homework #1 due Thursday, Feb. 1 1. Find the eigenvalues and the eigenvectors of the matrix [ ] 3 2 A =. 1 6 2. Find the eigenvalues and the eigenvectors of the matrix 3 2 2 A = 2 3 2. 2 2 1 3. The following

More information

CMU CS 462/662 (INTRO TO COMPUTER GRAPHICS) HOMEWORK 0.0 MATH REVIEW/PREVIEW LINEAR ALGEBRA

CMU CS 462/662 (INTRO TO COMPUTER GRAPHICS) HOMEWORK 0.0 MATH REVIEW/PREVIEW LINEAR ALGEBRA CMU CS 462/662 (INTRO TO COMPUTER GRAPHICS) HOMEWORK 0.0 MATH REVIEW/PREVIEW LINEAR ALGEBRA Andrew ID: ljelenak August 25, 2018 This assignment reviews basic mathematical tools you will use throughout

More information

MATH 23a, FALL 2002 THEORETICAL LINEAR ALGEBRA AND MULTIVARIABLE CALCULUS Solutions to Final Exam (in-class portion) January 22, 2003

MATH 23a, FALL 2002 THEORETICAL LINEAR ALGEBRA AND MULTIVARIABLE CALCULUS Solutions to Final Exam (in-class portion) January 22, 2003 MATH 23a, FALL 2002 THEORETICAL LINEAR ALGEBRA AND MULTIVARIABLE CALCULUS Solutions to Final Exam (in-class portion) January 22, 2003 1. True or False (28 points, 2 each) T or F If V is a vector space

More information

Linear Algebra Massoud Malek

Linear Algebra Massoud Malek CSUEB Linear Algebra Massoud Malek Inner Product and Normed Space In all that follows, the n n identity matrix is denoted by I n, the n n zero matrix by Z n, and the zero vector by θ n An inner product

More information

Further Mathematical Methods (Linear Algebra) 2002

Further Mathematical Methods (Linear Algebra) 2002 Further Mathematical Methods (Linear Algebra) Solutions For Problem Sheet 9 In this problem sheet, we derived a new result about orthogonal projections and used them to find least squares approximations

More information

Further Mathematical Methods (Linear Algebra) 2002

Further Mathematical Methods (Linear Algebra) 2002 Further Mathematical Methods (Linear Algebra) 22 Solutions For Problem Sheet 3 In this Problem Sheet, we looked at some problems on real inner product spaces. In particular, we saw that many different

More information

Functional Analysis Exercise Class

Functional Analysis Exercise Class Functional Analysis Exercise Class Week: December 4 8 Deadline to hand in the homework: your exercise class on week January 5. Exercises with solutions ) Let H, K be Hilbert spaces, and A : H K be a linear

More information

x 3y 2z = 6 1.2) 2x 4y 3z = 8 3x + 6y + 8z = 5 x + 3y 2z + 5t = 4 1.5) 2x + 8y z + 9t = 9 3x + 5y 12z + 17t = 7

x 3y 2z = 6 1.2) 2x 4y 3z = 8 3x + 6y + 8z = 5 x + 3y 2z + 5t = 4 1.5) 2x + 8y z + 9t = 9 3x + 5y 12z + 17t = 7 Linear Algebra and its Applications-Lab 1 1) Use Gaussian elimination to solve the following systems x 1 + x 2 2x 3 + 4x 4 = 5 1.1) 2x 1 + 2x 2 3x 3 + x 4 = 3 3x 1 + 3x 2 4x 3 2x 4 = 1 x + y + 2z = 4 1.4)

More information

Contents. Appendix D (Inner Product Spaces) W-51. Index W-63

Contents. Appendix D (Inner Product Spaces) W-51. Index W-63 Contents Appendix D (Inner Product Spaces W-5 Index W-63 Inner city space W-49 W-5 Chapter : Appendix D Inner Product Spaces The inner product, taken of any two vectors in an arbitrary vector space, generalizes

More information

Contents. 1 Vectors, Lines and Planes 1. 2 Gaussian Elimination Matrices Vector Spaces and Subspaces 124

Contents. 1 Vectors, Lines and Planes 1. 2 Gaussian Elimination Matrices Vector Spaces and Subspaces 124 Matrices Math 220 Copyright 2016 Pinaki Das This document is freely redistributable under the terms of the GNU Free Documentation License For more information, visit http://wwwgnuorg/copyleft/fdlhtml Contents

More information

MIT Final Exam Solutions, Spring 2017

MIT Final Exam Solutions, Spring 2017 MIT 8.6 Final Exam Solutions, Spring 7 Problem : For some real matrix A, the following vectors form a basis for its column space and null space: C(A) = span,, N(A) = span,,. (a) What is the size m n of

More information

1. General Vector Spaces

1. General Vector Spaces 1.1. Vector space axioms. 1. General Vector Spaces Definition 1.1. Let V be a nonempty set of objects on which the operations of addition and scalar multiplication are defined. By addition we mean a rule

More information

MAT2342 : Introduction to Applied Linear Algebra Mike Newman, fall Projections. introduction

MAT2342 : Introduction to Applied Linear Algebra Mike Newman, fall Projections. introduction MAT4 : Introduction to Applied Linear Algebra Mike Newman fall 7 9. Projections introduction One reason to consider projections is to understand approximate solutions to linear systems. A common example

More information

Math 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces.

Math 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces. Math 350 Fall 2011 Notes about inner product spaces In this notes we state and prove some important properties of inner product spaces. First, recall the dot product on R n : if x, y R n, say x = (x 1,...,

More information

Review problems for MA 54, Fall 2004.

Review problems for MA 54, Fall 2004. Review problems for MA 54, Fall 2004. Below are the review problems for the final. They are mostly homework problems, or very similar. If you are comfortable doing these problems, you should be fine on

More information

Algebra II. Paulius Drungilas and Jonas Jankauskas

Algebra II. Paulius Drungilas and Jonas Jankauskas Algebra II Paulius Drungilas and Jonas Jankauskas Contents 1. Quadratic forms 3 What is quadratic form? 3 Change of variables. 3 Equivalence of quadratic forms. 4 Canonical form. 4 Normal form. 7 Positive

More information

Mathematics Department Stanford University Math 61CM/DM Inner products

Mathematics Department Stanford University Math 61CM/DM Inner products Mathematics Department Stanford University Math 61CM/DM Inner products Recall the definition of an inner product space; see Appendix A.8 of the textbook. Definition 1 An inner product space V is a vector

More information

,, rectilinear,, spherical,, cylindrical. (6.1)

,, rectilinear,, spherical,, cylindrical. (6.1) Lecture 6 Review of Vectors Physics in more than one dimension (See Chapter 3 in Boas, but we try to take a more general approach and in a slightly different order) Recall that in the previous two lectures

More information

Lecture 2: Linear Algebra

Lecture 2: Linear Algebra Lecture 2: Linear Algebra Rajat Mittal IIT Kanpur We will start with the basics of linear algebra that will be needed throughout this course That means, we will learn about vector spaces, linear independence,

More information

Linear Algebra. The Manga Guide. Supplemental Appendixes. Shin Takahashi, Iroha Inoue, and Trend-Pro Co., Ltd.

Linear Algebra. The Manga Guide. Supplemental Appendixes. Shin Takahashi, Iroha Inoue, and Trend-Pro Co., Ltd. The Manga Guide to Linear Algebra Supplemental Appendixes Shin Takahashi, Iroha Inoue, and Trend-Pro Co., Ltd. Copyright by Shin Takahashi and TREND-PRO Co., Ltd. ISBN-: 978--97--9 Contents A Workbook...

More information

Dot Products. K. Behrend. April 3, Abstract A short review of some basic facts on the dot product. Projections. The spectral theorem.

Dot Products. K. Behrend. April 3, Abstract A short review of some basic facts on the dot product. Projections. The spectral theorem. Dot Products K. Behrend April 3, 008 Abstract A short review of some basic facts on the dot product. Projections. The spectral theorem. Contents The dot product 3. Length of a vector........................

More information

Recall: Dot product on R 2 : u v = (u 1, u 2 ) (v 1, v 2 ) = u 1 v 1 + u 2 v 2, u u = u u 2 2 = u 2. Geometric Meaning:

Recall: Dot product on R 2 : u v = (u 1, u 2 ) (v 1, v 2 ) = u 1 v 1 + u 2 v 2, u u = u u 2 2 = u 2. Geometric Meaning: Recall: Dot product on R 2 : u v = (u 1, u 2 ) (v 1, v 2 ) = u 1 v 1 + u 2 v 2, u u = u 2 1 + u 2 2 = u 2. Geometric Meaning: u v = u v cos θ. u θ v 1 Reason: The opposite side is given by u v. u v 2 =

More information

The Hilbert Space of Random Variables

The Hilbert Space of Random Variables The Hilbert Space of Random Variables Electrical Engineering 126 (UC Berkeley) Spring 2018 1 Outline Fix a probability space and consider the set H := {X : X is a real-valued random variable with E[X 2

More information

Final Review Sheet. B = (1, 1 + 3x, 1 + x 2 ) then 2 + 3x + 6x 2

Final Review Sheet. B = (1, 1 + 3x, 1 + x 2 ) then 2 + 3x + 6x 2 Final Review Sheet The final will cover Sections Chapters 1,2,3 and 4, as well as sections 5.1-5.4, 6.1-6.2 and 7.1-7.3 from chapters 5,6 and 7. This is essentially all material covered this term. Watch

More information

Linear Algebra: Matrix Eigenvalue Problems

Linear Algebra: Matrix Eigenvalue Problems CHAPTER8 Linear Algebra: Matrix Eigenvalue Problems Chapter 8 p1 A matrix eigenvalue problem considers the vector equation (1) Ax = λx. 8.0 Linear Algebra: Matrix Eigenvalue Problems Here A is a given

More information

Hilbert Spaces. Hilbert space is a vector space with some extra structure. We start with formal (axiomatic) definition of a vector space.

Hilbert Spaces. Hilbert space is a vector space with some extra structure. We start with formal (axiomatic) definition of a vector space. Hilbert Spaces Hilbert space is a vector space with some extra structure. We start with formal (axiomatic) definition of a vector space. Vector Space. Vector space, ν, over the field of complex numbers,

More information

MATH 167: APPLIED LINEAR ALGEBRA Chapter 3

MATH 167: APPLIED LINEAR ALGEBRA Chapter 3 MATH 167: APPLIED LINEAR ALGEBRA Chapter 3 Jesús De Loera, UC Davis February 18, 2012 Orthogonal Vectors and Subspaces (3.1). In real life vector spaces come with additional METRIC properties!! We have

More information

LINEAR ALGEBRA 1, 2012-I PARTIAL EXAM 3 SOLUTIONS TO PRACTICE PROBLEMS

LINEAR ALGEBRA 1, 2012-I PARTIAL EXAM 3 SOLUTIONS TO PRACTICE PROBLEMS LINEAR ALGEBRA, -I PARTIAL EXAM SOLUTIONS TO PRACTICE PROBLEMS Problem (a) For each of the two matrices below, (i) determine whether it is diagonalizable, (ii) determine whether it is orthogonally diagonalizable,

More information

ENGINEERING MATH 1 Fall 2009 VECTOR SPACES

ENGINEERING MATH 1 Fall 2009 VECTOR SPACES ENGINEERING MATH 1 Fall 2009 VECTOR SPACES A vector space, more specifically, a real vector space (as opposed to a complex one or some even stranger ones) is any set that is closed under an operation of

More information

Algebra Workshops 10 and 11

Algebra Workshops 10 and 11 Algebra Workshops 1 and 11 Suggestion: For Workshop 1 please do questions 2,3 and 14. For the other questions, it s best to wait till the material is covered in lectures. Bilinear and Quadratic Forms on

More information

Typical Problem: Compute.

Typical Problem: Compute. Math 2040 Chapter 6 Orhtogonality and Least Squares 6.1 and some of 6.7: Inner Product, Length and Orthogonality. Definition: If x, y R n, then x y = x 1 y 1 +... + x n y n is the dot product of x and

More information

Linear Algebra. Session 12

Linear Algebra. Session 12 Linear Algebra. Session 12 Dr. Marco A Roque Sol 08/01/2017 Example 12.1 Find the constant function that is the least squares fit to the following data x 0 1 2 3 f(x) 1 0 1 2 Solution c = 1 c = 0 f (x)

More information

The Gram Schmidt Process

The Gram Schmidt Process u 2 u The Gram Schmidt Process Now we will present a procedure, based on orthogonal projection, that converts any linearly independent set of vectors into an orthogonal set. Let us begin with the simple

More information

The Gram Schmidt Process

The Gram Schmidt Process The Gram Schmidt Process Now we will present a procedure, based on orthogonal projection, that converts any linearly independent set of vectors into an orthogonal set. Let us begin with the simple case

More information

NOTES ON LINEAR ALGEBRA CLASS HANDOUT

NOTES ON LINEAR ALGEBRA CLASS HANDOUT NOTES ON LINEAR ALGEBRA CLASS HANDOUT ANTHONY S. MAIDA CONTENTS 1. Introduction 2 2. Basis Vectors 2 3. Linear Transformations 2 3.1. Example: Rotation Transformation 3 4. Matrix Multiplication and Function

More information

Linear Algebra March 16, 2019

Linear Algebra March 16, 2019 Linear Algebra March 16, 2019 2 Contents 0.1 Notation................................ 4 1 Systems of linear equations, and matrices 5 1.1 Systems of linear equations..................... 5 1.2 Augmented

More information

Inner Product Spaces

Inner Product Spaces Inner Product Spaces Introduction Recall in the lecture on vector spaces that geometric vectors (i.e. vectors in two and three-dimensional Cartesian space have the properties of addition, subtraction,

More information

OHSx XM511 Linear Algebra: Solutions to Online True/False Exercises

This document gives the solutions to all of the online exercises for OHSx XM511. The section numbers refer to the textbook. Type I exercises are True/False; answers are in square brackets. Lecture 02 (1.1) ...

Lecture 7: Positive Semidefinite Matrices

Rajat Mittal, IIT Kanpur. The main aim of this lecture note is to prepare your background for semidefinite programming. We have already seen some linear algebra.
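As background for the excerpt above: a symmetric matrix is positive semidefinite exactly when all of its eigenvalues are non-negative. A quick numerical check (the example matrix here is my own, not from the lecture note):

```python
import numpy as np

A = np.array([[2.0, -1.0],
              [-1.0, 2.0]])  # symmetric example matrix

eigvals = np.linalg.eigvalsh(A)  # eigenvalues of a symmetric matrix, ascending
is_psd = bool(np.all(eigvals >= 0))
print(eigvals, is_psd)
```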

Linear Algebra (part 1): Vector Spaces (by Evan Dummit, 2017, v. 1.07) 1.1 The Formal Definition of a Vector Space

Contents: 1. Vector Spaces; 1.1 The Formal Definition of a Vector Space; 1.2 Subspaces ...

MATH 167: APPLIED LINEAR ALGEBRA Least-Squares

October 30, 2014. Least Squares: We do a series of experiments, collecting data. We wish to see patterns!! We expect the output b to be a linear function of ...

Lecture Notes 1: Vector spaces

Optimization-based data analysis, Fall 2017. In this chapter we review certain basic concepts of linear algebra, highlighting their application to signal processing.

MODULE 6. Topics: Gram-Schmidt orthogonalization process

We begin by observing that if the vectors {x_j}, j = 1, ..., N, are mutually orthogonal in an inner product space V, then they are necessarily linearly independent. (The matrix A_ij = <x_j, x_i> arises when we compute the orthogonal projection P y of a vector y onto a subspace with an orthogonal basis.)
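When the basis vectors are mutually orthogonal, the projection reduces to a sum of one-dimensional projections, P y = sum_j (<y, x_j> / <x_j, x_j>) x_j. A generic numerical illustration of this formula (my own sketch, not code from the module):

```python
import numpy as np

def project(y, basis):
    """Orthogonal projection of y onto span(basis), assuming the basis
    vectors are mutually orthogonal: P y = sum_j (<y,x_j>/<x_j,x_j>) x_j."""
    y = np.asarray(y, dtype=float)
    p = np.zeros_like(y)
    for x in basis:
        x = np.asarray(x, dtype=float)
        p += (y @ x) / (x @ x) * x
    return p

p = project([1.0, 2.0, 3.0], [[1, 0, 0], [0, 1, 0]])
print(p)  # projection onto the x-y plane: [1. 2. 0.]
```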

Linear Algebra (part 2): Vector Spaces (by Evan Dummit, 2017, v. 2.50)

Contents. 2.1 Vectors in R n. Linear Algebra (part 2) : Vector Spaces (by Evan Dummit, 2017, v. 2.50) 2 Vector Spaces Linear Algebra (part 2) : Vector Spaces (by Evan Dummit, 2017, v 250) Contents 2 Vector Spaces 1 21 Vectors in R n 1 22 The Formal Denition of a Vector Space 4 23 Subspaces 6 24 Linear Combinations and

More information

Vector Spaces. Vector space, ν, over the field of complex numbers, C, is a set of elements a, b,..., satisfying the following axioms.

A vector space ν over the field of complex numbers C is a set of elements a, b, ..., satisfying the following axioms. For each two vectors a, b in ν there exists a summation procedure: a + ...

Math 43 Review Notes

[Disclaimer: This is not a complete list of everything you need to know, just some of the topics that gave people difficulty.] Dot Product: If v = (v_1, v_2, v_3) and w = (w_1, w_2, w_3), then the ...

Math 290-2: Linear Algebra & Multivariable Calculus Northwestern University, Lecture Notes

Written by Santiago Cañez. These are notes which provide a basic summary of each lecture for Math 290-2, the second ...

18.06 Professor Johnson Quiz 1 October 3, 2007

18.06, Professor Johnson, Quiz 1, October 3, 2007. SOLUTIONS. 1 (30 pts.) A given circuit network (directed graph) which has an m × n incidence matrix A (rows = edges, columns = nodes) and a conductance matrix C (diagonal ...

Linear Models Review

Vectors in R^n will be written as ordered n-tuples, which are understood to be column vectors, or n × 1 matrices. A vector variable will be indicated with bold face, and the prime sign ...

Linear Algebra: Characteristic Value Problem

1. The Characteristic Value Problem. Let R be the set of real numbers and C be the set of complex numbers. Given an n × n real matrix A, does there exist a number ...

1. Foundations of Numerics from Advanced Mathematics. Linear Algebra

Linear Algebra, October 23, 22. Mathematical Structures: a mathematical structure consists of one or several sets and one or ...

Methods of Mathematical Physics X1 Homework 2 - Solutions

Methods of Mathematical Physics - 556, X1, Homework 2 - Solutions. 1. Recall that we define the orthogonal complement as in class: If S is a vector space, and T is a subspace, then we define the orthogonal ...

MATH 1120 (LINEAR ALGEBRA 1), FINAL EXAM FALL 2011 SOLUTIONS TO PRACTICE VERSION

MATH 1120 (LINEAR ALGEBRA 1), FINAL EXAM FALL 2011 SOLUTIONS TO PRACTICE VERSION MATH (LINEAR ALGEBRA ) FINAL EXAM FALL SOLUTIONS TO PRACTICE VERSION Problem (a) For each matrix below (i) find a basis for its column space (ii) find a basis for its row space (iii) determine whether

More information

5.) For each of the given sets of vectors, determine whether or not the set spans R 3. Give reasons for your answers.

Linear Algebra - Test File - Spring. Test #. For problems, consider the following system of equations: x + y - z = ..., x + y + 4z = ..., x + y + 6z = ... . Solve the system without using your calculator. Find the ...

Inner products. Theorem (basic properties): Given vectors u, v, w in an inner product space V, and a scalar k, the following properties hold:

Definition: An inner product on a real vector space V is an operation (function) that assigns to each pair of vectors (u, v) in V a scalar <u, v> satisfying the following axioms: 1. <u, v> ...
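The axioms listed in this and the following excerpts (symmetry, linearity, positivity) can be spot-checked for the standard dot product on R^n, the canonical example of an inner product. This generic sketch is mine, not from the excerpted notes:

```python
import random

def ip(u, v):
    """Standard dot product on R^n, a canonical example of an inner product."""
    return sum(x * y for x, y in zip(u, v))

random.seed(0)
u, v, w = ([random.uniform(-1, 1) for _ in range(3)] for _ in range(3))
k = 2.5

assert abs(ip(u, v) - ip(v, u)) < 1e-12                       # symmetry
assert abs(ip([x + y for x, y in zip(u, v)], w)
           - (ip(u, w) + ip(v, w))) < 1e-12                   # additivity
assert abs(ip([k * x for x in u], v) - k * ip(u, v)) < 1e-12  # homogeneity
assert ip(u, u) >= 0                                          # positivity
```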

Lecture 23: 6.1 Inner Products

Wei-Ta Chu, 2008/12/17. Definition: An inner product on a real vector space V is a function that associates a real number <u, v> with each pair of vectors u and v in V in such ...

Linear Algebra Primer

David Doria, daviddoria@gmail.com, Wednesday 3rd December, 2008. Contents: 1. Why is it called Linear Algebra?; 2. What is a Matrix?; 2.1 Input and Output ...

REAL LINEAR ALGEBRA: PROBLEMS WITH SOLUTIONS

The problems listed below are intended as review problems to do before the final. They are organized in groups according to sections in my notes, but it is not ...

GQE ALGEBRA PROBLEMS

Jakob Streipel. Contents: 1. Eigenthings; 2. Norms, Inner Products, Orthogonality, and Such; 3. Determinants, Inverses, and Linear (In)dependence; 4. (Invariant) Subspaces. Throughout ...

Linear Algebra I. Ronald van Luijk, 2015

Ronald van Luijk, 2015, with many parts from Linear Algebra I by Michael Stoll, 2007. Contents: Dependencies among sections; Chapter 1. Euclidean space: lines and hyperplanes; 1.1. Definition ...

TIEA311 Tietokonegrafiikan perusteet kevät 2018

TIEA311 Tietokonegrafiikan perusteet, kevät 2018 (Principles of Computer Graphics, Spring 2018). Copyright and Fair Use Notice: The lecture videos of this course are made available for registered students ...

Tutorials in Optimization. Richard Socher

Richard Socher, July 20, 2008. Contents: 1. Linear Algebra: Bilinear Form - A Simple Optimization Problem; 1.1 Definitions; 1.2 ...

LINEAR ALGEBRA W W L CHEN

(c) W W L Chen, 1997, 2008. This chapter is available free to all individuals, on the understanding that it is not to be used for financial gain, and may be downloaded and/or photocopied, ...

Analysis-3 lecture schemes

(with Homeworks.) Csörgő István, November 2015. (These notes were prepared with the support of the 2015 lecture notes grant of the ELTE Faculty of Informatics.) Contents: 1. Lesson 1; 1.1. The Space ...

Lecture 1: Systems of linear equations and their solutions

Course overview. Topics to be covered this semester: systems of linear equations and Gaussian elimination; solving linear equations and applications ...

3 Algebraic Methods. we can differentiate both sides implicitly to obtain a differential equation involving x and y:

3 Algebraic Methods. we can differentiate both sides implicitly to obtain a differential equation involving x and y: 3 Algebraic Methods b The first appearance of the equation E Mc 2 in Einstein s handwritten notes. So far, the only general class of differential equations that we know how to solve are directly integrable

More information

Math 110, Spring 2015: Midterm Solutions

Math 110, Spring 2015: Midterm Solutions. These are not intended as "model answers"; in many cases far more explanation is provided than would be necessary to receive full credit. The goal here is to make ...

SSM: Linear Algebra, Chapter 6, Section 6.1

Section 6.1. 1. (1 2; 3 6): fails to be invertible, since det = 6 - 6 = 0. 3. (3 5; 7 11): invertible, since det = 33 - 35 = -2. 5. (2 5 7; 0 11 7; 0 0 5): invertible, since det = 2 · 11 · 5 + 0 + 0 - 0 - 0 - 0 ...

Math 224, Fall 2007 Exam 3 Thursday, December 6, 2007

You have 1 hour and 20 minutes. No notes, books, or other references. You are permitted to use Maple during this exam, but you must start with a blank ...

EIGENVALUES AND EIGENVECTORS 3

1. Motivation. 1.1. Diagonal matrices. Perhaps the simplest type of linear transformations are those whose matrix is diagonal (in some basis). Consider for example the matrices ...

Math 54 HW 4 solutions

Section 2.2. (a) False: Recall that performing a series of elementary row operations on A is equivalent to multiplying A by a series of elementary matrices. Suppose that E, ...

Mathematical Methods wk 1: Vectors

John Magorrian, magog@thphys.ox.ac.uk. These are work-in-progress notes for the second-year course on mathematical methods. The most up-to-date version is available from http://www-thphys.physics.ox.ac.uk/people/johnmagorrian/mm

Lecture notes: Applied linear algebra Part 1. Version 2

Michael Karow, Berlin University of Technology, karow@math.tu-berlin.de, October 2, 2008. 1. Notation, basic notions and facts; 1.1 Subspaces, range and ...

Linear Algebra Highlights

Chapter 1. A linear equation in n variables is of the form a_1 x_1 + a_2 x_2 + ... + a_n x_n = b. We can have m equations in n variables, a system of linear equations, which we want to ...

Review of Linear Algebra

Definitions: An m × n (read "m by n") matrix is a rectangular array of entries, where m is the number of rows and n the number of columns. Definitions (cont'd): A is square if m = n ...

Math 113 Final Exam: Solutions

Thursday, June 11, 2013, 3.30-6.30pm. 1. (25 points total) Let P_2(R) denote the real vector space of polynomials of degree at most 2. Consider the following inner product on P_2(R) ...

7: FOURIER SERIES STEVEN HEILMAN

Steven Heilman. Contents: 1. Review; 2. Introduction; 3. Periodic Functions; 4. Inner Products on Periodic Functions; 5. Trigonometric Polynomials; 6. Periodic Convolutions; 7. Fourier ...

MATH 31 - ADDITIONAL PRACTICE PROBLEMS FOR FINAL

Main topics for the final exam: 1. Vectors; dot product; cross product; geometric applications. 2. Row reduction; null space, column space, row space, left ...

Extreme Values and Positive/ Negative Definite Matrix Conditions

James K. Peterson, Department of Biological Sciences and Department of Mathematical Sciences, Clemson University, November 8, 2016. Outline: 1. ...

Linear algebra II Homework #1 solutions

1. Find the eigenvalues and the eigenvectors of the matrix A = (4 6; 2 5). Since tr A = 9 and det A = 8, the characteristic polynomial is f(λ) = λ² - (tr A)λ + det A = λ² - 9λ + 8 = (λ - 1)(λ - 8), so the eigenvalues are 1 and 8. This means that every eigenvector with eigenvalue λ = 1 must have the form ...
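Assuming the reconstruction A = [[4, 6], [2, 5]] (the lower-left entry 2 is inferred from the trace 9 and determinant 8 stated in the excerpt), the eigenvalues 1 and 8 can be verified numerically:

```python
import numpy as np

A = np.array([[4.0, 6.0],
              [2.0, 5.0]])  # lower-left entry 2 is an inferred reconstruction

tr, det = np.trace(A), np.linalg.det(A)
# Characteristic polynomial: f(lambda) = lambda^2 - tr*lambda + det
eigvals = sorted(np.linalg.eigvals(A).real)
print(tr, det, eigvals)  # trace 9, determinant 8, eigenvalues near 1 and 8
```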

Econ Slides from Lecture 7

Econ 205, Joel Sobel, August 31, 2010. Linear Algebra: Main Theory. A linear combination of a collection of vectors {x_1, ..., x_k} is a vector of the form Σ_{i=1}^k λ_i x_i for ...

Vector Space Basics. 1 Abstract Vector Spaces. 1. (commutativity of vector addition) u + v = v + u. 2. (associativity of vector addition)

(Remark: these notes are highly formal and may be a useful reference to some students; however, I am also posting Ray Heitmann's notes to Canvas for students interested in a direct computational ...

Homework 2. Solutions

Let {e_x, e_y, e_z} be an orthonormal basis in E. Consider the following ordered triples: a) {e_x, e_x + e_y, 5e_z}, b) {e_y, e_x, 5e_z}, c) {e_y, e_x, e_z}, d) {e_y, e_x, 5e_z}, e) { ...

COMP 558 lecture 18 Nov. 15, 2010

Least squares. We have seen several least squares problems thus far, and we will see more in the upcoming lectures. For this reason it is good to have a more general picture of these problems and how to ...

14 Singular Value Decomposition

For any high-dimensional data analysis, one's first thought should often be: can I use an SVD? The singular value decomposition is an invaluable analysis tool for dealing ...
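The excerpt's suggestion ("can I use an SVD?") can be tried immediately in numpy; the product U Σ Vᵀ recovers the original matrix. The example matrix here is my own choice, not from the text:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 3.0],
              [0.0, 2.0]])  # arbitrary example matrix

U, s, Vt = np.linalg.svd(A, full_matrices=False)  # thin SVD: A = U @ diag(s) @ Vt
reconstructed = U @ np.diag(s) @ Vt
print(np.allclose(A, reconstructed))  # True
```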