Solutions to the Exercises * on Linear Algebra

Laurenz Wiskott
Institut für Neuroinformatik, Ruhr-Universität Bochum, Germany, EU
4 February 2017

© 2017 Laurenz Wiskott (homepage https://www.ini.rub.de/people/wiskott/). This work (except for all figures from other sources, if present) is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License. To view a copy of this license, visit http://creativecommons.org/licenses/by-sa/4.0/. Figures from other sources have their own copyright, which is generally indicated. Do not distribute parts of these lecture notes showing figures with non-free copyrights (here usually figures I have the rights to publish but you don't, like my own published figures).

Several of my exercises (not necessarily on this topic) were inspired by papers and textbooks by other authors. Unfortunately, I did not document that well, because initially I did not intend to make the exercises publicly available, and now I cannot trace it back anymore. So I cannot give as much credit as I would like to. The concrete versions of the exercises are certainly my own work, though.

* These exercises complement my corresponding lecture notes available at https://www.ini.rub.de/people/wiskott/Teaching/Material/, where you can also find other teaching material such as programming exercises. The table of contents of the lecture notes is reproduced here to give an orientation when the exercises can be reasonably solved. For best learning effect I recommend first seriously trying to solve the exercises yourself before looking into the solutions.

Contents

1 Vector spaces
1.1 Definition
1.2 Linear combinations
1.3 Linear (in)dependence
1.3.1 Exercise: Linear independence
1.3.2 Exercise: Linear independence
1.3.3 Exercise: Linear independence
1.3.4 Exercise: Linear independence
1.3.5 Exercise: Linear independence
1.3.6 Exercise: Linear independence
1.3.7 Exercise: Linear independence
1.3.8 Exercise: Linear independence in C^3
1.4 Basis systems
1.4.1 Exercise: Vector space of the functions sin(x + φ)

1.4.2 Exercise: Basis systems
1.4.3 Exercise: Dimension of a vector space
1.4.4 Exercise: Dimension of a vector space
1.5 Representation w.r.t. a basis
1.5.1 Exercise: Representation of vectors w.r.t. a basis
1.5.2 Exercise: Representation of vectors w.r.t. a basis

2 Euclidean vector spaces
2.1 Inner product
2.1.1 Exercise: Inner product for functions
2.1.2 Exercise: Representation of an inner product
2.2 Norm
2.2.1 Exercise: City-block metric
2.2.2 Exercise: Ellipse w.r.t. the city-block metric
2.2.3 Exercise: From norm to inner product
2.2.4 Exercise: From norm to inner product (concrete)
2.3 Angle
2.3.1 Exercise: Angle with respect to an inner product
2.3.2 Exercise: Angle with respect to an inner product
2.3.3 Exercise: Angle with respect to an inner product

3 Orthonormal basis systems
3.1 Definition
3.1.1 Exercise: Pythagoras' theorem
3.1.2 Exercise: Linear independence of orthogonal vectors
3.1.3 Exercise: Product of matrices of basis vectors
3.2 Representation w.r.t. an orthonormal basis
3.2.1 Exercise: Writing vectors in terms of an orthonormal basis
3.3 Inner product
3.3.1 Exercise: Norm of a vector
3.3.2 Exercise: Writing polynomials in terms of an orthonormal basis and simplified inner product
3.4 Projection
3.4.1 Exercise: Projection
3.4.2 Exercise: Is P a projection matrix

3.4.3 Exercise: Symmetry of a projection matrix
3.5 Change of basis
3.5.1 Exercise: Change of basis
3.5.2 Exercise: Change of basis
3.5.3 Exercise: Change of basis
3.6 Schmidt orthogonalization process
3.6.1 Exercise: Gram-Schmidt orthonormalization
3.6.2 Exercise: Gram-Schmidt orthonormalization
3.6.3 Exercise: Gram-Schmidt orthonormalization of polynomials

4 Matrices
4.0.1 Exercise: Matrix as a sum of a symmetric and an antisymmetric matrix
4.1 Matrix multiplication
4.2 Matrices as linear transformations
4.2.1 Exercise: Antisymmetric matrices yield orthogonal vectors
4.2.2 Exercise: Matrices that preserve the length of all vectors
4.2.3 Exercise: Derivative as a matrix operation
4.2.4 Exercise: Derivative as a matrix operation
4.2.5 Exercise: Derivative as a matrix operation
4.3 Rank of a matrix
4.4 Determinant
4.4.1 Exercise: Determinants
4.4.2 Exercise: Determinant
4.4.3 Exercise: Determinant
4.5 Inversion +
4.6 Trace
4.6.1 Exercise: Trace and determinant of a symmetric matrix
4.7 Orthogonal matrices
4.8 Diagonal matrices
4.8.1 Exercise: Matrices as transformations
4.8.2 Exercise: Matrices as transformations
4.8.3 Exercise: Matrices with certain properties
4.9 Eigenvalue equation for symmetric matrices
4.9.1 Exercise: Eigenvectors of a matrix

4.9.2 Exercise: Eigenvalue problem
4.9.3 Exercise: Eigenvectors of a matrix
4.9.4 Exercise: Eigenvectors of a matrix of type Σ_i v_i v_i^T
4.9.5 Exercise: Eigenvectors of a symmetric matrix are orthogonal
4.10 General eigenvectors
4.10.1 Exercise: Matrices with given eigenvectors and -values
4.10.2 Exercise: From eigenvalues to matrices
4.10.3 Exercise: Generalized eigenvalue problem
4.11 Complex eigenvalues
4.11.1 Exercise: Complex eigenvalues
4.12 Nonquadratic matrices +
4.13 Quadratic forms +

1 Vector spaces

1.1 Definition

1.2 Linear combinations

1.3 Linear (in)dependence

1.3.1 Exercise: Linear independence

Are the following vectors linearly independent? Do they form a basis of the given vector space V?

(a) v_1 = (… 3 …)^T, v_2 = (… 8 …)^T, V = R^3.

(b) f_1(x) = x^2 + 3x + …, f_2(x) = 3x^2 + 6x, f_3(x) = x + …, V = vector space of polynomials of degree ≤ 2.

1.3.2 Exercise: Linear independence

Are the following vectors linearly independent? Do they form a basis of the given vector space V?

(a) v_1 = (… 3 …)^T, v_2 = (…)^T, V = R^4.

(b) f_1(x) = x, f_2(x) = 3 + 4x, V = vector space of polynomials of degree ≤ 1.

1.3.3 Exercise: Linear independence

Are the following vectors linearly independent? Do they form a basis of the given vector space V?

(a) v_1 = (…)^T, v_2 = (… 3 …)^T, V = R^2.

Solution: v_1 and v_2 are obviously linearly independent, and then they are also a basis of V = R^2 for dimensionality reasons.

(b) f_1(x) = x^2 − 2x − 3, f_2(x) = 2x^2 + 3x − 5, f_3(x) = x − 1, V = vector space of polynomials of degree ≤ 2.

Solution: Linear independence of the three vectors can be shown by proving that no linear combination of the three vectors (besides the trivial solution: all coefficients equal 0) results in the zero vector:

  0 = a(x^2 − 2x − 3) + b(2x^2 + 3x − 5) + c(x − 1)
    = (a + 2b)x^2 + (−2a + 3b + c)x + (−3a − 5b − c)

  ⇔ 0 = a + 2b,  0 = −2a + 3b + c,  0 = −3a − 5b − c
  ⇒ 0 = −4a   (sum of the three equations)
  ⇒ a = 0,  b = 0 (since a = 0),  c = 0 (since a = 0 and b = 0).

There is no other solution than the trivial one. Thus, f_1, …, f_3 are linearly independent and form a basis of the vector space of polynomials of degree ≤ 2 for dimensionality reasons.

1.3.4 Exercise: Linear independence

Are the following vectors linearly independent? Do they form a basis of the given vector space V?

(a) v_1 = (… 4 …)^T, v_2 = (… 3 …)^T, V = R^3.

Solution: v_1 and v_2 are obviously linearly independent, but they are not a basis for dimensionality reasons. One vector is missing to span the three-dimensional space V = R^3.

(b) f_1(x) = 3x^2, f_2(x) = x, f_3(x) = 1, V = vector space of polynomials of degree ≤ 2.

Solution: f_1 is linearly independent of the other two vectors, because it is the only one containing x^2. (Here it is essential to argue that f_1 is linearly independent of the other two together and not only of each of the other two individually.) We can thus discard the first vector and only consider the other two vectors for linear dependence. But it is quite obvious that f_2 and f_3 are not linearly dependent. Thus all three vectors are linearly independent. Since the vector space of polynomials of degree ≤ 2 is 3-dimensional, f_1, …, f_3 form a basis for dimensionality reasons.
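Independence checks like these are easy to automate: a set of vectors, or of polynomial coefficient vectors, is linearly independent exactly if the matrix holding them as columns has full column rank. The following sketch is not part of the original exercises; it uses Python/NumPy and the polynomials of exercise 1.3.3(b) as reconstructed above.

    import numpy as np

    # Polynomials from exercise 1.3.3(b) as coefficient vectors
    # w.r.t. the monomial basis (1, x, x^2).
    f1 = np.array([-3, -2, 1])   # x^2 - 2x - 3
    f2 = np.array([-5,  3, 2])   # 2x^2 + 3x - 5
    f3 = np.array([-1,  1, 0])   # x - 1

    A = np.column_stack([f1, f2, f3])
    rank = np.linalg.matrix_rank(A)

    # Full column rank <=> linear independence; rank equal to the space's
    # dimension (3 here) additionally makes the set a basis.
    print(rank == A.shape[1])   # True: linearly independent
    print(rank == 3)            # True: a basis of the polynomials of degree <= 2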

1.3.5 Exercise: Linear independence

Are the following vectors linearly independent? Do they form a basis of the given vector space V?

(a) v_1 = (… 3 …)^T, v_2 = (…)^T, V = R^3.

Solution: v_1 and v_2 are obviously linearly independent, but they are not a basis of V = R^3 for dimensionality reasons.

(b) f_1(x) = x^2, f_2(x) = x + 3, f_3(x) = x, V = vector space of polynomials of degree ≤ 2.

Solution: f_1 is linearly independent of the other two vectors, because it is the only one containing x^2. (Here it is essential to argue that f_1 is linearly independent of the other two together and not only of each of the other two individually.) f_2 is linearly independent of f_3, because only f_2 contains a constant. Since the vector space of polynomials of degree ≤ 2 is 3-dimensional, f_1, …, f_3 form a basis for dimensionality reasons.

1.3.6 Exercise: Linear independence

Are the following vectors linearly independent? Do they form a basis of the given vector space V?

(a) v_1 = (…)^T, v_2 = (…)^T, V = R^2.

Solution: v_1 and v_2 are obviously linearly independent, and they are a basis of V = R^2 for dimensionality reasons.

(b) f_1(x) = 3x^2 + x, f_2(x) = x − 3, f_3(x) = x^2, f_4(x) = 4x^2 + 5x + 3, V = vector space of polynomials of degree ≤ 2.

Solution: The vector space of polynomials of degree ≤ 2 is only 3-dimensional, thus f_1, …, f_4 cannot be linearly independent, and they are not a basis for dimensionality reasons.

1.3.7 Exercise: Linear independence

Are the following vectors linearly independent? Do they form a basis of the given vector space V?

(a) v_1 = (…)^T, v_2 = (… 4 …)^T, v_3 = (…)^T, V = R^2.

(b) f_1(x) = sin(x), f_2(x) = sin(x + π/4), f_3(x) = sin(x + π/2), V = L{sin(αx), cos(αx) : α ∈ R}.

Hint: sin(x ± y) = sin(x) cos(y) ± cos(x) sin(y).

Solution: From the addition theorems for trigonometric functions we know

  f_2(x) = sin(x + π/4) = cos(π/4) sin(x) + sin(π/4) cos(x) = (1/√2) sin(x) + (1/√2) cos(x)

and

  f_3(x) = sin(x + π/2) = cos(π/2) sin(x) + sin(π/2) cos(x) = cos(x).

Thus, f_1, …, f_3 are obviously not linearly independent, since f_2 = (1/√2) f_1 + (1/√2) f_3, and they are therefore not a basis. What is the vector space V anyway?

1.3.8 Exercise: Linear independence in C^3

Are the following vectors in C^3 over the field C linearly independent? Do they form a basis?

  r_1 = (…)^T, r_2 = (… 1+i …)^T, r_3 = (… i …)^T.

Solution: The question is whether there exist three constants a, b, and c in C, which are not all zero, but for which a r_1 + b r_2 + c r_3 = 0 holds. Writing this out componentwise gives one equation per component; since the second and third components coincide in all three vectors, only two distinct scalar equations remain. Using the first equation to express a in terms of b and c, the remaining equation takes the form

  b(1 + i) + c(1 − i) = 0.

With b = b_r + i b_i and c = c_r + i c_i we find

  0 = (b_r + i b_i)(1 + i) + (c_r + i c_i)(1 − i)
    = (b_r − b_i + c_r + c_i) + i (b_r + b_i − c_r + c_i),

and setting real and imaginary part to zero separately yields b_i = c_r and b_r = −c_i. Any such choice of b and c, not both zero, together with the value of a fixed by the first equation, is a nontrivial solution, which one then verifies by substituting it back into a r_1 + b r_2 + c r_3 = 0. This shows that the three vectors in C^3 are linearly dependent and therefore not a basis.

Note that the direction of the implications matters here: we are constructing one particular solution of the equation, not deriving a unique one. Our concrete choice of a, b, c makes the equation true, but the equation does not imply this particular choice.

One could have suspected the result right from the start, because the second and third components are copies of each other in all three vectors. This means that the three vectors can only be linearly independent if the vectors shortened by the last component are linearly independent. But since there are no three linearly independent vectors in C^2 over the field C, just as in R^2, the three vectors r_1 to r_3 must be linearly dependent.
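The same rank test as before works over C. A minimal numerical counterpart follows; the concrete entries here are illustrative stand-ins (the original entries are not fully legible), chosen only to share the structural property used above, namely equal second and third components.

    import numpy as np

    # Three vectors whose second and third components coincide, as in 1.3.8.
    r1 = np.array([1.0, 1 + 1j, 1 + 1j])
    r2 = np.array([1 + 1j, 1.0, 1.0])
    r3 = np.array([1j, 1 - 1j, 1 - 1j])

    A = np.column_stack([r1, r2, r3])
    print(np.linalg.matrix_rank(A))  # 2: the vectors are linearly dependent

    # Reason: dropping the redundant third component leaves three vectors in
    # C^2, and three vectors can never be linearly independent there.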

1.4 Basis systems

1.4.1 Exercise: Vector space of the functions sin(x + φ)

Show that the set of functions V = {f(x) = A sin(x + φ) | A ∈ R, φ ∈ [0, 2π]} generates a vector space over the field of real numbers. Find a basis for V and determine its dimension.

Hint: The addition theorems for trigonometric functions are helpful for the solution, in particular sin(x + y) = sin(x) cos(y) + sin(y) cos(x).

Solution: From the addition theorems for trigonometric functions follows

  f(x) = A sin(x + φ) = A cos(φ) sin(x) + A sin(φ) cos(x).

Since A, sin(φ), and cos(φ) are simply real numbers, every f(x) can be written as a linear combination of sin(x) and cos(x). On the other hand, every linear combination of sin(x) and cos(x) can be written as A sin(x + φ) and is therefore an element of V, since A(cos(φ), sin(φ))^T can realize any pair of two real numbers. Since the latter also holds for the pairs (1, 0)^T and (0, 1)^T, the two functions sin(x) and cos(x) are themselves elements of V. These three properties make sin(x) and cos(x) a basis of V, which implies that V is a two-dimensional vector space.

1.4.2 Exercise: Basis systems

1. Find two different basis systems for the vector space of the polynomials of degree ≤ 3.

Solution: The most obvious basis is 1, x, x^2, x^3. A different basis can be derived from this by simply adding some of the previous basis polynomials, e.g., 1, x + 1, x^2 + 1, x^3 + 1.

2. Find a basis for the vector space of symmetric 3×3 matrices.

Solution: For symmetric matrices the diagonal elements can be chosen independently, but opposing off-diagonal elements are coupled together. Thus, a basis for symmetric 3×3 matrices is, for instance, given by the three diagonal matrices

  (1 0 0; 0 0 0; 0 0 0), (0 0 0; 0 1 0; 0 0 0), (0 0 0; 0 0 0; 0 0 1)

together with the three symmetric off-diagonal pairs

  (0 1 0; 1 0 0; 0 0 0), (0 0 1; 0 0 0; 1 0 0), (0 0 0; 0 0 1; 0 1 0).
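This construction generalizes to any n: one unit matrix per diagonal entry plus one symmetric pair per off-diagonal entry. A short sketch (Python/NumPy, not from the original notes) builds this basis and confirms the count n(n + 1)/2 that is derived in the next exercise.

    import numpy as np

    def symmetric_basis(n):
        """Basis of the real symmetric n x n matrices: one unit matrix per
        diagonal entry, one symmetric pair per off-diagonal entry."""
        basis = []
        for i in range(n):
            for j in range(i, n):
                B = np.zeros((n, n))
                B[i, j] = B[j, i] = 1.0
                basis.append(B)
        return basis

    basis = symmetric_basis(3)
    print(len(basis))   # 6 == 3 * (3 + 1) / 2

    # The basis elements are linearly independent: flattened into vectors,
    # they form a matrix of full column rank.
    M = np.column_stack([B.ravel() for B in basis])
    print(np.linalg.matrix_rank(M))  # 6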

1.4.3 Exercise: Dimension of a vector space

Determine the dimension of the following vector spaces.

(a) Vector space of real symmetric n×n matrices (for a given n).

Solution: A symmetric n×n matrix has n^2 coefficients. However, only the coefficients in the upper right triangle (including the diagonal) can be chosen freely; those in the lower left triangle (without the diagonal) follow from the symmetry condition. Thus, there are n(n + 1)/2 free parameters, and that is the dimension of the vector space of symmetric n×n matrices.

(b) Vector space of mixed polynomials in x and y (e.g. f(x, y) = x^2 y + x − 3y + 5) that have a maximal degree of n (i.e. for each term x^{n_x} y^{n_y} in the polynomial, n_x + n_y ≤ n must hold).

Solution: There are (n + 1) different monomials in x and (n + 1) different monomials in y with a maximal degree of n. Combining them into all possible mixed monomials leads to (n + 1)^2 terms, which can be arranged in a square array (see below for n = 2). Within this array the monomials in the lower right triangle (without the anti-diagonal) are not permitted, because they have too high a degree. Thus, the dimension of the vector space of mixed polynomials in x and y with maximal degree n is (n + 1)(n + 2)/2.

  1      x      x^2
  y      xy     x^2 y
  y^2    x y^2  x^2 y^2

(For n = 2 the monomials x^2 y, x y^2, and x^2 y^2 below the anti-diagonal are excluded, leaving 9 − 3 = 6 = 3·4/2 monomials.)

1.4.4 Exercise: Dimension of a vector space

Determine the dimension of the following vector spaces.

1. Vector space of real antisymmetric n×n matrices (for a given n). A matrix M is antisymmetric if M^T = −M. M^T is the transpose of M.

Solution: The vector space of all real n×n matrices (for a given n) has dimension n^2. Since antisymmetric matrices have zeros on the diagonal, this reduces the dimensionality by n down to (n^2 − n) = n(n − 1); since the entries in the lower left triangle are the negatives of the entries in the upper right triangle, this reduces the dimensionality by half to n(n − 1)/2.

2. Vector space of the series that converge to zero.

Solution: Consider the series (1, 0, 0, ...), (0, 1, 0, 0, ...), (0, 0, 1, 0, 0, ...), etc. Each series converges to zero, they are all linearly independent of each other, and we can create infinitely many of them. Thus, the vector space is infinite-dimensional.

1.5 Representation w.r.t. a basis

1.5.1 Exercise: Representation of vectors w.r.t. a basis

Write the vectors w.r.t. the given basis.

(a) Vectors: v_1 = (1, 1)_e^T, v_2 = (1, −1)_e^T, v_3 = (3, −5)_e^T; Basis: b_1 = (1, 1)_e^T, b_2 = (1, −1)_e^T. The subscript e indicates the canonical basis.

Solution: The first solutions are obvious: since v_1 = b_1, we have v_1 = (1, 0)_b^T, and v_2 = (0, 1)_b^T can also be seen easily, since v_2 = b_2. The solution for v_3 is a bit more complex and can be derived by solving a linear system of equations:

  (3, −5)_e^T = a (1, 1)_e^T + b (1, −1)_e^T
  ⇔ 3 = a + b,  −5 = a − b
  ⇒ −2 = 2a   (1st + 2nd equation)
  ⇒ a = −1,  b = 4 (since a = −1)
  ⇒ v_3 = (−1, 4)_b^T.
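Finding a representation w.r.t. a non-orthogonal basis is exactly the linear system above. A minimal sketch, assuming the basis vectors as reconstructed for exercise 1.5.1(a):

    import numpy as np

    # Basis vectors as columns (values as reconstructed for exercise 1.5.1a).
    B = np.array([[1.0,  1.0],
                  [1.0, -1.0]])   # columns: b1, b2
    v3 = np.array([3.0, -5.0])

    # Solving B @ coeffs = v3 yields the representation of v3 w.r.t. the basis.
    coeffs = np.linalg.solve(B, v3)
    print(coeffs)   # [-1.  4.]  ->  v3 = -1*b1 + 4*b2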

(b) Vector: f(x) = 3x^2 + 3 − 2x; Basis: x^2, x, 1.

Solution: This is very easy to solve, since the basis consists of the pure monomials. One simply stacks the coefficients of the polynomial, which yields f = (3, −2, 3)_b^T.

(c) Vector: g(x) = (x + 1)(x − 1); Basis: x^3, x^2 + 1, x, 1.

Solution: This, too, is easy to solve once the polynomial is multiplied out: g(x) = (x + 1)(x − 1) = x^2 − 1 = (x^2 + 1) − 2·1, which yields g = (0, 1, 0, −2)_b^T.

1.5.2 Exercise: Representation of vectors w.r.t. a basis

Write the vectors w.r.t. the given basis.

(a) Vector: v = (…)^T; Basis: b_1 = (…)^T, b_2 = (… 3 …)^T, b_3 = (… 3 …)^T. The subscript e indicates the canonical basis.

Solution: This can be easily solved in a cascade. First one determines the contribution of b_1 needed to get the second component right, since b_1 is the only basis vector that contributes there. Then one determines the contribution of b_2 needed to get the first component right, since b_3 does not contribute there. Finally one determines the contribution of b_3 needed to get the third component right. This way one finds the coefficients of v w.r.t. b_1, b_2, b_3.

(b) Vector: h(x) = 3x^2 − 2x − 3; Basis: three polynomials q_1, q_2, q_3 of degree ≤ 2 (mixing quadratic, linear, and constant terms).

Solution: This example is difficult to solve directly. Thus, one has to solve a system of linear equations: writing h(x) = a q_1(x) + b q_2(x) + c q_3(x) and comparing the coefficients of x^2, x, and 1 on both sides yields three linear equations for a, b, and c, which are then solved by successive elimination, exactly as in exercise 1.5.1(a). The coefficient vector (a, b, c)_b^T is the desired representation of h.

2 Euclidean vector spaces

2.1 Inner product

2.1.1 Exercise: Inner product for functions

Consider the space of real continuous functions defined on [−1, 1] for which ∫_{−1}^{1} [f(x)]^2 dx exists. Let the inner product be

  (f, g) := ∫_{−1}^{1} w(x) f(x) g(x) dx,   (1)

with an arbitrary positive weighting function

  0 < w(x) < ∞.

1. Prove that (1) is indeed an inner product.

Solution: We simply have to verify the axioms of the inner product. The first three are trivial:

  (λf, g) = ∫ w(x) λ f(x) g(x) dx = λ ∫ w(x) f(x) g(x) dx = λ (f, g),

  (f + h, g) = ∫ w(x) (f(x) + h(x)) g(x) dx = ∫ w(x) f(x) g(x) dx + ∫ w(x) h(x) g(x) dx = (f, g) + (h, g),

  (f, g) = ∫ w(x) f(x) g(x) dx = ∫ w(x) g(x) f(x) dx = (g, f).

The fourth one is more subtle. We first show (f, f) ≥ 0:

  0 ≤ f^2(x)  ⇒  0 ≤ w(x) f^2(x)  ⇒  0 ≤ ∫ w(x) f^2(x) dx = (f, f).

It is also easy to show that (f ≡ 0) ⇒ ((f, f) = 0), since f ≡ 0 implies w(x) f^2(x) ≡ 0 and thus (f, f) = ∫ w(x) f^2(x) dx = 0.

Really difficult is the other direction, ((f, f) = 0) ⇒ (f ≡ 0). We can prove it by showing the contraposition, (f ≢ 0) ⇒ ((f, f) > 0). f ≢ 0 means that there is an x_0 ∈ [−1, 1] for which f(x_0) ≠ 0. Because of the continuity of f there is then also a finite δ-neighborhood around x_0 on which f(x) ≠ 0, and because of w(x) > 0 it follows that ∫_{x_0−δ}^{x_0+δ} w(x) f^2(x) dx > 0 (for simplicity I have assumed here that x_0 is an inner point, but something analogous holds for a point at the border). Furthermore, one can easily show that ∫_a^b w(x) f^2(x) dx ≥ 0 for arbitrary a < b; compare the proof above for (f, f) ≥ 0. But that means that the positive contribution of the integral around x_0 cannot be compensated by a negative contribution at another location. Therefore (f, f) = ∫ w(x) f^2(x) dx > 0. Thus we have shown ((f, f) = 0) ⇒ (f ≡ 0), and since we have shown the other direction already above, we have proven

  (f, f) = 0 ⇔ f ≡ 0,

as required. (1) therefore is an inner product.
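Numerically, such a weighted inner product is just a one-dimensional integral. A minimal sketch with SciPy follows; the particular weight is an arbitrary choice satisfying 0 < w(x) < ∞, not one from the exercise.

    import numpy as np
    from scipy.integrate import quad

    def inner(f, g, w=lambda x: 1.0 + 0.5 * x**2):
        """Weighted inner product (f, g) = int_{-1}^{1} w(x) f(x) g(x) dx."""
        value, _ = quad(lambda x: w(x) * f(x) * g(x), -1.0, 1.0)
        return value

    f = lambda x: x**3 + 1.0
    g = lambda x: 3.0 * x

    print(inner(f, g))          # some finite number
    print(inner(f, f) >= 0.0)   # True: positivity of the induced norm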

2. Show whether (1) is an inner product also for non-continuous functions.

Solution: We show that (1) is no inner product there by finding a counterexample. If, for instance, f(0) = 1 and f(x) = 0 otherwise, then f is obviously not the zero function, but still the weighted integral over f^2(x) vanishes, because f differs from zero only in a single point. Thus, the fourth axiom is violated and we do not have an inner product anymore.

3. Show whether (1) is an inner product for continuous functions even if the weighting function is positive only in the interior of the interval, i.e. if w(x) > 0 ∀x ∈ (−1, 1) but w(±1) = 0.

Solution: The first properties of the inner product are not critical. Only the fourth one requires further consideration. With the new weighting function, a difference to the inner product above could only arise at the border. If w(−1) = 0, then f(−1) could be non-zero without contributing to the integral. However, since f has to be continuous, f(x) ≠ 0 would then also hold in a small δ-neighborhood, and in the interior part of this neighborhood w(x) > 0 holds, so that there we would indeed get a positive contribution to the integral. This means the fourth axiom is still valid and (1) is an inner product.

2.1.2 Exercise: Representation of an inner product

Let V be an N-dimensional vector space over R and let {b_i}, i = 1, ..., N, be a basis. Let x̃ = (x̃_1, ..., x̃_N)_b^T and ỹ = (ỹ_1, ..., ỹ_N)_b^T be the representations of two vectors x, y ∈ V with respect to the basis {b_i}. Show that

  (x, y) = x̃^T A ỹ,

where A is an N×N matrix.

Solution: Since x̃ and ỹ are the representations of x and y with respect to the basis {b_i}, we have x = Σ_i x̃_i b_i and y = Σ_i ỹ_i b_i. With this we get

  (x, y) = ( Σ_i x̃_i b_i, Σ_j ỹ_j b_j ) = Σ_{i,j} x̃_i (b_i, b_j) ỹ_j = x̃^T A ỹ  with  A_{ij} := (b_i, b_j).

2.2 Norm

2.2.1 Exercise: City-block metric

The norm of the city-block metric is defined as

  ||x||_CB := Σ_i |x_i|.

Prove that this actually is a norm.

Solution: Properties 1 and 3 are trivially true. Property 2, the triangle inequality, holds because

  ||x + y||_CB = Σ_i |x_i + y_i| ≤ Σ_i (|x_i| + |y_i|) = Σ_i |x_i| + Σ_j |y_j| = ||x||_CB + ||y||_CB.

2.2.2 Exercise: Ellipse w.r.t. the city-block metric

What does an ellipse in a two-dimensional space with city-block metric look like?

An ellipse is the set of all points x whose distances ||x − a||_CB and ||x − b||_CB to two given focal points a and b sum to a given radius r. Examine the following cases:

(a) a = (−1, 0)^T, b = (1, 0)^T, and r = 4;

Solution: The norm of the city-block metric is ||x||_CB := Σ_i |x_i|. We take an intuitive approach to finding the ellipse, see the left figure below (there is also a tedious formal approach making a number of case distinctions).

[Figure (left): construction of the city-block ellipse; Figure (right): the resulting shape. © Wiskott group, license unclear.]

We start at an easy-to-find point, e.g. (0, 1), which has equal distance to a and b with a total distance of 4. If the point is shifted to the left, the distance to b grows but the distance to a is reduced by the same amount, so that the total distance to the focal points remains constant, as it should. This works also to the right, overall from (−1, 1) to (1, 1). As we move to the left beyond (−1, 1), the horizontal distance to both a and b grows, thus we have to reduce the vertical distance correspondingly, which results in a movement to the lower left at an angle of 45° until the point reaches (−2, 0). One could proceed like that, but the shape of the complete ellipse already follows from the upper left part for symmetry reasons. The result is an angular ellipse, see the right figure above.

(b) a = (…)^T, b = (…)^T (the two foci now differing in both coordinates), and r = 4.

Solution:

[Figure (left): construction of the city-block ellipse; Figure (right): the resulting octagon. © Wiskott group, license unclear.]

One can find the solution in a similar way as described in (a), see the left figure above. We again start at an easy-to-find point. Moving from there to the right, the distance to a grows, but the distance to b is reduced by the same amount, so that the total distance remains constant, as it should. Beyond the point where the horizontal coordinate of b is passed, however, the distance to b also grows. Therefore, the point has to be shifted upwards at the same rate as it moves to the right to keep the total distance to the focal points constant. The point thus moves at an angle of 45° to the abscissa until the next kink. One can proceed in this way, but the complete ellipse already follows from these two parts for symmetry reasons. The result is an octagon, see the right figure above.

2.2.3 Exercise: From norm to inner product

Every inner product (·, ·) defines a norm by ||x|| = √(x, x). Show that a norm can also define an inner product over the field R (if it exists, which is the case if the parallelogram law ||x + y||^2 + ||x − y||^2 = 2(||x||^2 + ||y||^2) holds).

Hint: Make the ansatz (x + y, x + y) = ... and derive a formula for the inner product given a norm.

Solution:

  (x + y, x + y) = (x, x + y) + (y, x + y)
                 = (x, x) + (x, y) + (y, x) + (y, y)
                 = (x, x) + 2(x, y) + (y, y)
  ⇔ (x, y) = 1/2 ((x + y, x + y) − (x, x) − (y, y))
           = 1/2 (||x + y||^2 − ||x||^2 − ||y||^2).

Thus, given a norm ||x||, a corresponding inner product can be derived with

  (x, y) := 1/2 (||x + y||^2 − ||x||^2 − ||y||^2).

Alternatively, one can also use

  (x, y) := 1/4 (||x + y||^2 − ||x − y||^2),

known as the polarization identity (D: Polarisationsidentität). See also http://en.wikipedia.org/wiki/Parallelogram_law.
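For the Euclidean norm the polarization identity recovers exactly the standard dot product, which is easy to check numerically. A small sketch, not part of the original exercises:

    import numpy as np

    def inner_from_norm(x, y, norm=np.linalg.norm):
        """Recover the inner product from the norm via the polarization
        identity (x, y) = (||x+y||^2 - ||x-y||^2) / 4."""
        return (norm(x + y)**2 - norm(x - y)**2) / 4.0

    x = np.array([1.0, 2.0, 1.0])
    y = np.array([2.0, 1.0, 2.0])
    print(inner_from_norm(x, y))   # 6.0, identical to np.dot(x, y)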

2.2.4 Exercise: From norm to inner product (concrete)

Given a norm ||x||, a corresponding inner product can be derived with

  (x, y) := 1/2 (||x + y||^2 − ||x||^2 − ||y||^2).

Derive the corresponding inner product for the norm

  ||g||^2 := ∫ w(x) g^2(x) dx,

with w(x) being some arbitrary strictly positive function.

Solution:

  (g, f) = 1/2 (||g + f||^2 − ||g||^2 − ||f||^2)
         = 1/2 ( ∫ w(x)(g(x) + f(x))^2 dx − ∫ w(x) g^2(x) dx − ∫ w(x) f^2(x) dx )
         = 1/2 ∫ w(x) ( (g(x) + f(x))^2 − g^2(x) − f^2(x) ) dx
         = 1/2 ∫ 2 w(x) g(x) f(x) dx
         = ∫ w(x) g(x) f(x) dx.

2.3 Angle

2.3.1 Exercise: Angle with respect to an inner product

Draw the following vectors and calculate the angle between them with respect to the given inner product.

1. v_1 = (… 4 …)^T and v_2 = (… 5 …)^T with the standard Euclidean inner product.

2. f_1(x) = x^3 + 1 and f_2(x) = 3x with the inner product (f, g) := ∫_{−1}^{1} f(x) g(x) dx.

Solution:

  (f_1, f_2) = ∫_{−1}^{1} (x^3 + 1) 3x dx = ∫_{−1}^{1} (3x^4 + 3x) dx
             = ∫_{−1}^{1} 3x^4 dx   (since odd functions integrated over [−1, +1] vanish for symmetry reasons)
             = 3 [x^5/5]_{−1}^{1} = 3/5 + 3/5 = 6/5,

  ||f_1||^2 = ∫_{−1}^{1} (x^3 + 1)^2 dx = ∫_{−1}^{1} (x^6 + 2x^3 + 1) dx = ∫_{−1}^{1} (x^6 + 1) dx   (s.a.)
            = [x^7/7]_{−1}^{1} + [x]_{−1}^{1} = 1/7 + 1/7 + 1 + 1 = 16/7,  so ||f_1|| = 4/√7,

  ||f_2||^2 = ∫_{−1}^{1} (3x)^2 dx = 9 [x^3/3]_{−1}^{1} = 3 + 3 = 6,  so ||f_2|| = √6,

thus

  α = arccos( (6/5) / ((4/√7) √6) ) ≈ 1.24 ≈ 71.09°.

Drawing not available.

2.3.2 Exercise: Angle with respect to an inner product

Draw the following vectors and calculate the angle between them with respect to the given inner product.

(a) v_1 = (…)^T and v_2 = (… 3 …)^T with the standard Euclidean inner product.

Solution: (v_1, v_2) = −5, ||v_1|| = 5, ||v_2|| = √2, thus α = arccos(−5/(5√2)) = arccos(−1/√2) = 3/4 π ≈ 2.356 ≈ 135°. Drawing not available.

(b) f_1(x) = arctan(x) and f_2(x) = cos(x) with the inner product (f, g) := ∫ exp(−x^2) f(x) g(x) dx.

Solution: f_1 is odd, f_2 and exp(−x^2) are even. The product of the three functions is therefore odd. Thus, the integral vanishes and the two vectors are orthogonal. Drawing not available.
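Both results are easy to reproduce numerically. The following sketch (Python/SciPy, not part of the original exercises) computes the angle for 2.3.1, part 2, and the vanishing inner product of 2.3.2(b); the infinite integration limits for the Gaussian weight are an assumption consistent with exp(−x^2).

    import numpy as np
    from scipy.integrate import quad

    def inner(f, g, w, a, b):
        value, _ = quad(lambda x: w(x) * f(x) * g(x), a, b)
        return value

    def angle(f, g, w, a, b):
        """Angle induced by the inner product (f, g) = int_a^b w f g dx."""
        c = inner(f, g, w, a, b) / np.sqrt(inner(f, f, w, a, b) * inner(g, g, w, a, b))
        return np.arccos(c)

    # Exercise 2.3.1, part 2: f1 = x^3 + 1, f2 = 3x on [-1, 1] with w = 1.
    one = lambda x: 1.0
    print(np.degrees(angle(lambda x: x**3 + 1, lambda x: 3 * x, one, -1, 1)))  # ~71.1

    # Exercise 2.3.2(b): arctan and cos are orthogonal under the Gaussian weight.
    w = lambda x: np.exp(-x**2)
    print(inner(np.arctan, np.cos, w, -np.inf, np.inf))  # ~0 (odd integrand)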

2.3.3 Exercise: Angle with respect to an inner product

Draw the following vectors and calculate the angle between them with respect to the given inner product.

(a) v_1 = (…)^T and v_2 = (…)^T with the inner product (x, y) := x^T M y for the 2×2 matrix M = (…).

(b) f_1(x) = 3x^2 and f_2(x) = x with the inner product (f, g) := ∫ f(x) g(x) dx.

3 Orthonormal basis systems

3.1 Definition

3.1.1 Exercise: Pythagoras' theorem

Prove the generalized Pythagoras theorem: Let v_i, i ∈ {1, ..., N}, be pairwise orthogonal vectors. Then

  || Σ_{i=1}^{N} v_i ||^2 = Σ_{i=1}^{N} ||v_i||^2

holds.

Solution: We can show directly

  || Σ_i v_i ||^2 = ( Σ_i v_i, Σ_j v_j ) = Σ_{i,j} (v_i, v_j)
                  = Σ_i (v_i, v_i)   (because (v_i, v_j) = 0 if i ≠ j)
                  = Σ_i ||v_i||^2.

3.1.2 Exercise: Linear independence of orthogonal vectors

Show that N pairwise orthogonal vectors (not permitting the zero vector) are always linearly independent.

Solution: Proof by contradiction: We assume the pairwise orthogonal non-zero vectors v_i are linearly dependent. Then there exist factors a_i, of which at least one is not zero, so that Σ_i a_i v_i = 0. From this

follows

  0 = Σ_i a_i v_i
  ⇒ 0 = ( Σ_i a_i v_i, v_j ) ∀j
      = Σ_i a_i (v_i, v_j)
      = a_j (v_j, v_j)   (since the vectors are pairwise orthogonal)
  ⇒ a_j = 0 ∀j   (since the vectors have finite non-zero norm),

which is a contradiction to the assumption. Thus, the assumption is not true, and the orthogonal vectors are linearly independent.

Ekaterina Kuzminykh (SS 7) came up with the following solution:

  0 = Σ_i a_i v_i
  ⇒ 0 = || Σ_i a_i v_i ||^2 = ( Σ_i a_i v_i, Σ_j a_j v_j ) = Σ_{i,j} a_i a_j (v_i, v_j)
      = Σ_i a_i^2 (v_i, v_i)   (since (v_i, v_j) = 0 for i ≠ j)
  ⇒ a_i = 0 ∀i   (since the vectors have finite non-zero norm).

3.1.3 Exercise: Product of matrices of basis vectors

Let {b_i}, i = 1, ..., N, be an orthonormal basis and 1_N indicate the N-dimensional identity matrix.

1. Show that (b_1, b_2, ..., b_N)^T (b_1, b_2, ..., b_N) = 1_N. Does the result also hold if one only takes the first M < N basis vectors? If not, try to interpret the resulting matrix.

Solution:

  (b_1, ..., b_N)^T (b_1, ..., b_N) = (b_1^T; b_2^T; ...; b_N^T)(b_1, ..., b_N) = 1_N

follows directly from the fact that b_i^T b_j = δ_ij by definition of an orthonormal basis. The same argument goes through for the first M < N basis vectors, then yielding the M-dimensional identity matrix 1_M.

2. Show that (b_1, b_2, ..., b_N)(b_1, b_2, ..., b_N)^T = 1_N. Does the result also hold if one only takes the first M < N basis vectors? If not, try to interpret the resulting matrix.

Solution: Writing an arbitrary vector v in terms of the basis {b_i} is done by

  v_b = (v_{1b}, ..., v_{Nb})^T with v_{ib} = b_i^T v,  i.e.  v_b = (b_1^T; b_2^T; ...; b_N^T) v.

Writing the vector v_b, which is given in terms of the basis {b_i}, in terms of the Euclidean basis again is done by

  v = Σ_i v_{ib} b_i = (b_1, b_2, ..., b_N) v_b.

Combining these two transformations results in

  v = (b_1, ..., b_N)(b_1^T; ...; b_N^T) v.

Since this is true for any vector v, we conclude that

  (b_1, ..., b_N)(b_1, ..., b_N)^T = 1_N.

For only the first M < N basis vectors this does not hold; the product Σ_{i=1}^{M} b_i b_i^T is then the matrix of the orthogonal projection onto the subspace spanned by b_1, ..., b_M.

3.2 Representation w.r.t. an orthonormal basis

3.2.1 Exercise: Writing vectors in terms of an orthonormal basis

Given the orthonormal basis b_1 = (…)^T, b_2 = (…)^T, b_3 = (…)^T.

1. Write the vectors v_1 = (…)^T and v_2 = (…)^T in terms of the orthonormal basis {b_i}.

Solution: Since the b_i form an orthonormal basis, the coefficients of the vectors v_j in terms of this basis can simply be computed with the inner products (b_i, v_j):

  v_{jb} = (b_1^T v_j, b_2^T v_j, b_3^T v_j)^T.

2. What is the matrix with which you could transform any vector given in terms of the Euclidean basis into a representation in terms of the orthonormal basis {b_i}?

Solution: Since the coefficients can be computed with the inner products (b_i, v) = b_i^T v, the transformation matrix T simply has the basis vectors as its rows:

  T = (b_1^T; b_2^T; b_3^T).
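The mechanics are independent of the concrete basis, so the following sketch (not from the original notes) generates some orthonormal basis via a QR decomposition and uses T = B^T to compute and invert the representation.

    import numpy as np

    # An arbitrary orthonormal basis of R^3, generated via QR decomposition.
    rng = np.random.default_rng(0)
    B, _ = np.linalg.qr(rng.normal(size=(3, 3)))   # columns of B: b1, b2, b3

    v = np.array([1.0, 2.0, 3.0])

    # Transformation matrix T has the basis vectors as rows, i.e. T = B^T.
    T = B.T
    v_b = T @ v                      # coefficients w.r.t. b1, b2, b3
    print(np.allclose(B @ v_b, v))   # True: the representation reconstructs v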

3.3 Inner product

3.3.1 Exercise: Norm of a vector

Let {b_i}, i = 1, ..., N, be an orthonormal basis. Then we have (b_i, b_j) = δ_ij and

  v = Σ_{i=1}^{N} v_i b_i with v_i := (v, b_i) ∀v.

Show that

  ||v||^2 = Σ_{i=1}^{N} v_i^2.

Solution: We can show directly

  ||v||^2 = (v, v) = ( Σ_i v_i b_i, Σ_j v_j b_j ) = Σ_{i,j} v_i v_j (b_i, b_j)
          = Σ_i v_i^2 (b_i, b_i)   (since (b_i, b_j) = 0 for i ≠ j)
          = Σ_i v_i^2   (since the basis vectors are normalized to 1).

3.3.2 Exercise: Writing polynomials in terms of an orthonormal basis and simplified inner product

The normalized Legendre polynomials L_0 = √(1/2), L_1 = √(3/2) x, L_2 = √(5/8) (−1 + 3x^2) form an orthonormal basis of the vector space of polynomials of degree ≤ 2 with respect to the inner product (f, g) = ∫_{−1}^{1} f(x) g(x) dx.

1. Write the following polynomials in terms of the basis L_0, L_1, L_2:

  f_1(x) = 1 + x^2,  f_2(x) = 3 − 2x^2.

Verify the result.

Solution: Since the L_i form an orthonormal basis, the coefficients of the vectors f_j in terms of this basis can simply be computed with the inner products (f_j, L_i). Note that if f_j L_i is an odd function,

the integral over the interval [−1, 1] vanishes for symmetry reasons. Thereby we get

  (f_1, L_0) = ∫_{−1}^{1} (1 + x^2) √(1/2) dx = ([x] + [x^3/3])_{−1}^{1} √(1/2)
             = (2 + 2/3) √(1/2) = 8/3 · √(1/2) = 4/3 · √2,

  (f_1, L_1) = ∫_{−1}^{1} (1 + x^2) √(3/2) x dx = 0   (for symmetry reasons),

  (f_1, L_2) = ∫_{−1}^{1} (1 + x^2) √(5/8) (−1 + 3x^2) dx = ∫_{−1}^{1} (−1 + 2x^2 + 3x^4) √(5/8) dx
             = (−[x] + 2[x^3/3] + 3[x^5/5])_{−1}^{1} √(5/8)
             = (−2 + 4/3 + 6/5) √(5/8) = 8/15 · √(5/8) = 1/3 · √(8/5),

so f_1 = 4/3 √2 · L_0 + 0 · L_1 + 1/3 √(8/5) · L_2, and indeed

  4/3 √2 · √(1/2) + 1/3 √(8/5) · √(5/8) (−1 + 3x^2) = 4/3 + (−1/3 + x^2) = 1 + x^2.

Likewise

  (f_2, L_0) = ∫_{−1}^{1} (3 − 2x^2) √(1/2) dx = (3[x] − 2[x^3/3])_{−1}^{1} √(1/2)
             = (6 − 4/3) √(1/2) = 14/3 · √(1/2) = 7/3 · √2,

  (f_2, L_1) = ∫_{−1}^{1} (3 − 2x^2) √(3/2) x dx = 0   (for symmetry reasons),

  (f_2, L_2) = ∫_{−1}^{1} (3 − 2x^2) √(5/8) (−1 + 3x^2) dx = ∫_{−1}^{1} (−3 + 11x^2 − 6x^4) √(5/8) dx
             = (−3[x] + 11[x^3/3] − 6[x^5/5])_{−1}^{1} √(5/8)
             = (−6 + 22/3 − 12/5) √(5/8) = −16/15 · √(5/8) = −2/3 · √(8/5),

so f_2 = 7/3 √2 · L_0 + 0 · L_1 − 2/3 √(8/5) · L_2, and indeed

  7/3 √2 · √(1/2) − 2/3 √(8/5) · √(5/8) (−1 + 3x^2) = 7/3 − 2/3 (−1 + 3x^2) = 3 − 2x^2.

The two verification lines were added only to check the result.

2. Calculate the inner product (f_1, f_2) first directly with the integral and then based on the coefficients of the vectors written in terms of the basis L_0, L_1, L_2.

Solution: We calculate directly

  (f_1, f_2) = ∫_{−1}^{1} (1 + x^2)(3 − 2x^2) dx = ∫_{−1}^{1} (3 + x^2 − 2x^4) dx
             = (3[x] + [x^3/3] − 2[x^5/5])_{−1}^{1} = 6 + 2/3 − 4/5 = 6 − 2/15 = 88/15,

and based on the coefficients

  (f_1, f_2) = (4/3 √2, 0, 1/3 √(8/5))_L · (7/3 √2, 0, −2/3 √(8/5))_L^T
             = (4·7)/(3·3) · 2 − 2/(3·3) · 8/5 = 56/9 − 16/45 = 264/45 = 88/15.
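The point of part 2, that the inner product reduces to a dot product of coefficient vectors w.r.t. an orthonormal basis, can be checked numerically. A sketch (Python/SciPy, not from the original notes), using the Legendre basis and the polynomials as given above:

    import numpy as np
    from scipy.integrate import quad

    # Normalized Legendre polynomials on [-1, 1] (as in exercise 3.3.2).
    L = [lambda x: np.sqrt(1/2) + 0*x,
         lambda x: np.sqrt(3/2) * x,
         lambda x: np.sqrt(5/8) * (-1 + 3*x**2)]

    def inner(f, g):
        return quad(lambda x: f(x) * g(x), -1, 1)[0]

    f1 = lambda x: 1 + x**2
    f2 = lambda x: 3 - 2*x**2

    c1 = np.array([inner(f1, Li) for Li in L])   # coefficients of f1
    c2 = np.array([inner(f2, Li) for Li in L])   # coefficients of f2

    print(np.allclose(inner(f1, f2), c1 @ c2))   # True: both equal 88/15
    print(inner(f1, f2), 88/15)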

3.4 Projection

3.4.1 Exercise: Projection

1. Project the vector v = (1, 2, 1)^T onto the space orthogonal to the vector b = (2, 1, 2)^T.

Solution: The standard way would be to construct basis vectors b_2 and b_3 for the space orthogonal to b and then project onto these with v' = Σ_{i=2,3} (v, b_i) b_i. Simpler, however, is to subtract from v its projection onto the space spanned by b. For that we first normalize b to obtain

  b_1 := b/||b|| = b/√(4 + 1 + 4) = 1/3 (2, 1, 2)^T.

v' can then be calculated as

  v' = v − (v, b_1) b_1
     = (1, 2, 1)^T − 1/3 (2 + 2 + 2) · 1/3 (2, 1, 2)^T
     = (1, 2, 1)^T − 2/3 (2, 1, 2)^T
     = 1/3 (−1, 4, −1)^T.

We verify that v' is indeed orthogonal to b_1 and that it is shorter than v:

  (v', b_1) = 1/9 (−2 + 4 − 2) = 0,
  ||v||^2 = (1 + 4 + 1) = 6,
  ||v'||^2 = 1/9 (1 + 16 + 1) = 18/9 = 2 < 6 = ||v||^2.

2. Construct a 3×3 matrix P that realizes the projection onto the subspace orthogonal to b, so that v' = Pv for any vector v.

Solution: We start from what we have written above to calculate v' and rewrite it with a matrix:

  v' = v − (v, b_1) b_1 = v − b_1 (b_1, v) = v − b_1 b_1^T v = (1_3 − b_1 b_1^T) v,

so P := 1_3 − b_1 b_1^T, with

  b_1 b_1^T = 1/9 (4 2 4; 2 1 2; 4 2 4),
  P = 1_3 − 1/9 (4 2 4; 2 1 2; 4 2 4) = 1/9 (5 −2 −4; −2 8 −2; −4 −2 5).

We verify that we get the same vector v' for v = (1, 2, 1)^T as above, but now with matrix P:

  v' = Pv = 1/9 (5 − 4 − 4, −2 + 16 − 2, −4 − 4 + 5)^T = 1/9 (−3, 12, −3)^T = 1/3 (−1, 4, −1)^T.

3. Calculate the product of P with itself, i.e. PP.

Solution: There are different ways to solve this problem.

- The intuitive way is to realize that after we have projected a vector onto a subspace, the projected vector lies within the subspace, and thus projecting it a second time does not make any difference. Thus, we expect PP = P.

- If we want to be more formal, we can show that

  PP = (1_3 − b_1 b_1^T)(1_3 − b_1 b_1^T) = 1_3 − 2 b_1 b_1^T + b_1 (b_1^T b_1) b_1^T = 1_3 − b_1 b_1^T = P,

  using b_1^T b_1 = 1.

- Finally, one can do it the direct (hard) way by simply multiplying the matrices:

  PP = 1/81 (5 −2 −4; −2 8 −2; −4 −2 5)(5 −2 −4; −2 8 −2; −4 −2 5) = ... = P.

3.4.2 Exercise: Is P a projection matrix

Determine whether the matrix

  P = 1/5 (1 2; 2 4)

is a projection matrix or not.

Solution: The defining property of a projection matrix is that you get the same result if you apply it twice, i.e. PP = P. We verify that

  PP = 1/25 (1 2; 2 4)(1 2; 2 4) = 1/25 (5 10; 10 20) = 1/5 (1 2; 2 4) = P,

so P is indeed a projection matrix.
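A short numerical counterpart (Python/NumPy, not from the original notes; v and b are those reconstructed for exercise 3.4.1):

    import numpy as np

    v = np.array([1.0, 2.0, 1.0])
    b = np.array([2.0, 1.0, 2.0])

    b1 = b / np.linalg.norm(b)
    P = np.eye(3) - np.outer(b1, b1)  # projection onto the complement of span{b}

    v_prime = P @ v
    print(v_prime)                        # [-1/3, 4/3, -1/3]
    print(np.isclose(v_prime @ b, 0.0))   # True: result is orthogonal to b
    print(np.allclose(P @ P, P))          # True: P is idempotent, PP = P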

3.4.3 Exercise: Symmetry of a projection matrix

Prove that the matrix P of an orthogonal projection is always symmetric.

Solution: If {b_i} is an orthonormal basis of the space onto which P projects, then P can be written as P = Σ_i b_i b_i^T. With this it is easy to show that

  P^T = ( Σ_i b_i b_i^T )^T = Σ_i (b_i b_i^T)^T = Σ_i (b_i^T)^T b_i^T = Σ_i b_i b_i^T = P.

3.5 Change of basis

3.5.1 Exercise: Change of basis

Let {a_i} and {b_i} be two orthonormal bases in R^3:

  a_1 = (…)^T, a_2 = (…)^T, a_3 = (…)^T;  b_1 = (…)^T, b_2 = (…)^T, b_3 = (…)^T.

Determine the matrices B_{b←a} and B_{a←b} for the transformations from basis a to basis b and vice versa. Are there similarities between the two matrices? What happens if you multiply the two matrices?

Solution: If a vector ṽ = (ṽ_1, ṽ_2, ṽ_3)_a^T is given in terms of basis a, then v in terms of the Euclidean basis is given by

  v = Σ_i ṽ_i a_i = (a_1, a_2, a_3) ṽ.

A vector v given in terms of the Euclidean basis is in turn written in terms of basis b as

  v_b = (b_1^T v, b_2^T v, b_3^T v)^T = (b_1^T; b_2^T; b_3^T) v.

Combining these two transformations we have

  B_{b←a} = (b_1^T; b_2^T; b_3^T)(a_1, a_2, a_3)
          = (b_1^T a_1, b_1^T a_2, b_1^T a_3; b_2^T a_1, b_2^T a_2, b_2^T a_3; b_3^T a_1, b_3^T a_2, b_3^T a_3).

For B_{a←b} one gets analogously

  B_{a←b} = (a_1^T; a_2^T; a_3^T)(b_1, b_2, b_3) = B_{b←a}^T.

It is intuitively clear that a back-and-forth transformation between bases a and b should have no effect. We verify that the product of the two matrices indeed results in the identity matrix:

  B_{a←b} B_{b←a} = (a_1^T; a_2^T; a_3^T)(b_1, b_2, b_3)(b_1^T; b_2^T; b_3^T)(a_1, a_2, a_3)
                  = (a_1^T; a_2^T; a_3^T)(a_1, a_2, a_3) = 1_3,

since (b_1, b_2, b_3)(b_1^T; b_2^T; b_3^T) = 1_3 (see exercise 3.1.3).

For the concrete bases given above one obtains a concrete numerical matrix B_{b←a} with B_{a←b} = B_{b←a}^T and verifies B_{a←b} B_{b←a} = 1_3 explicitly.

Extra question: What would change if the bases were not orthonormal?

Extra question: How can you generalize this concept of change of basis to vector spaces of polynomials of degree ≤ 2?

3.5.2 Exercise: Change of basis

Consider {a_i} and {b_i}, where

  a_1 = (…)^T, a_2 = (…)^T, a_3 = (…)^T,  b_1 = (…)^T, b_2 = (…)^T, b_3 = (… 3 …)^T.

1. Are {a_i} and {b_i} orthonormal bases of R^3? If not, make them orthonormal.

2. Find the transformation matrix B_{b←a}.

3. Find the inverse of the matrix B_{b←a}.

3.5.3 Exercise: Change of basis

Let {a_i} and {b_i} be two orthonormal bases in R^2:

  a_1 = 1/√2 (…)^T, a_2 = 1/√2 (…)^T,  b_1 = 1/√5 (…)^T, b_2 = 1/√5 (…)^T.

Write the vector v = (3, …)_a^T, which is given in terms of the basis a, in terms of basis b.

Solution: First we determine the vector in the Euclidean basis, v = 3 a_1 + … a_2. Then we write this vector w.r.t. basis b:

  v_b = (b_1^T v, b_2^T v)^T.
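A numerical sketch of the whole round trip follows (Python/NumPy, not from the original notes). The concrete entries are assumptions: the a-basis follows the 1/√2 normalization and the b-basis the 1/√5 normalization suggested by exercise 3.5.3, with illustrative signs.

    import numpy as np

    A = np.column_stack([np.array([1.0,  1.0]) / np.sqrt(2),
                         np.array([1.0, -1.0]) / np.sqrt(2)])
    B = np.column_stack([np.array([1.0,  2.0]) / np.sqrt(5),
                         np.array([2.0, -1.0]) / np.sqrt(5)])

    B_from_a = B.T @ A        # maps a-coordinates to b-coordinates
    A_from_b = A.T @ B        # the inverse map, equal to B_from_a.T

    print(np.allclose(A_from_b, B_from_a.T))            # True
    print(np.allclose(A_from_b @ B_from_a, np.eye(2)))  # True: back and forth is identity

    v_a = np.array([3.0, -1.0])    # some vector in a-coordinates
    v = A @ v_a                    # Euclidean coordinates
    print(np.allclose(B @ (B_from_a @ v_a), v))         # True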

3.6 Schmidt orthogonalization process

3.6.1 Exercise: Gram-Schmidt orthonormalization

Construct an orthonormal basis for the space spanned by the vectors

  v_1 = (1, 0, 2)^T,  v_2 = (2, 1, 0)^T,  v_3 = (1, 1, −2)^T.

Solution: There is something suspicious here. Either the three vectors are linearly independent, in which case the Euclidean basis (1, 0, 0)^T, (0, 1, 0)^T, (0, 0, 1)^T would do, or they are linearly dependent, in which case one of the three vectors can be ignored. With some guessing one sees that v_3 = v_2 − v_1, so the problem reduces to finding a basis for the first two vectors. We apply Gram-Schmidt orthonormalization to obtain the basis vectors b_1 and b_2:

  ||v_1|| = √(1 + 0 + 4) = √5,
  b_1 := v_1/||v_1|| = 1/√5 (1, 0, 2)^T,
  b_2' := v_2 − (v_2, b_1) b_1 = (2, 1, 0)^T − 1/5 (2 + 0 + 0)(1, 0, 2)^T = 1/5 (8, 5, −4)^T,
  ||b_2'|| = √(64 + 25 + 16)/5 = √105/5,
  b_2 := b_2'/||b_2'|| = 1/√105 (8, 5, −4)^T.

Now, if one has not guessed that v_3 can be expressed as a linear combination of the other two vectors but proceeds with the Gram-Schmidt procedure, one gets the following:

  b_3' = v_3 − (v_3, b_1) b_1 − (v_3, b_2) b_2
       = (1, 1, −2)^T − 1/5 (1 + 0 − 4)(1, 0, 2)^T − 1/105 (8 + 5 + 8)(8, 5, −4)^T
       = (1, 1, −2)^T + 3/5 (1, 0, 2)^T − 1/5 (8, 5, −4)^T
       = (0, 0, 0)^T.

Thus it becomes apparent that v_3 is linearly dependent on v_1 and v_2 and is therefore redundant. So we are done, and the basis is b_1 and b_2. It is easy to see that the two vectors are normalized and orthogonal.

We also verify v (v, b )b + (v, b )b () ( + + 4) + (8 + 8) 8 5 (3) 5 5 5 5 4 5 + 8 5, (4) 5 5 4 v (v, b )b + (v, b )b (5) ( + + ) + 8 (6 + 5 + ) 5 (6) 5 5 5 5 4 + 8 5 + 8 5 5, (7) 5 5 5 5 5 4 4 4 v 3 (v 3, b )b + (v 3, b )b (8) ( + 4) + (8 + 5 + 8) 8 5 (9) 5 5 5 5 4 3 + 8 5 3 + 8 5 5 5. () 5 5 5 5 5 4 6 4 Thus b and b are indeed a basis for the space spanned by the v i. 3.6. Exercise: Gram-Schmidt orthonormalization ind an orthonormal basis for the spaces spanned by the following sets of vectors.. v, v, v 3. (). Solution: The three vectors are obviously linearly independent and thus span the whole threedimensional space. A simple basis of this space is b :, b :, b 3 :. () v, v. (3) Solution: We see that the two vectors are not linearly dependent and not orthogonal already and 7

3.6.3 Exercise: Gram-Schmidt orthonormalization of polynomials

Construct an orthonormal basis for the space of polynomials of degree ≤ 2 in R, given the inner product

  (g, h) = ∫_0^1 g(x) h(x) dx

and the norm induced by this inner product.

Solution: We apply Gram-Schmidt orthonormalization to the functions g_0(x) = 1, g_1(x) = x, and g_2(x) = x^2:

  b_0(x) = g_0(x)/||g_0|| = 1,

  b_1'(x) = g_1(x) − (g_1, b_0) b_0(x) = x − 1/2,
  ||b_1'||^2 = ∫_0^1 (x − 1/2)^2 dx = ∫_0^1 (x^2 − x + 1/4) dx = 1/3 − 1/2 + 1/4 = 1/12,
  b_1(x) = b_1'(x)/||b_1'|| = (x − 1/2) √12 = (2x − 1) √3,

  b_2'(x) = g_2(x) − (g_2, b_0) b_0(x) − (g_2, b_1) b_1(x)
          = x^2 − ∫_0^1 x^2 dx − ( ∫_0^1 x^2 (2x − 1)√3 dx ) (2x − 1)√3
          = x^2 − 1/3 − 3 (1/2 − 1/3)(2x − 1)
          = x^2 − 1/3 − (x − 1/2) = x^2 − x + 1/6,
  ||b_2'||^2 = ∫_0^1 (x^2 − x + 1/6)^2 dx = ∫_0^1 (x^4 − 2x^3 + 4/3 x^2 − 1/3 x + 1/36) dx
             = 1/5 − 1/2 + 4/9 − 1/6 + 1/36 = (36 − 90 + 80 − 30 + 5)/180 = 1/180,
  b_2(x) = b_2'(x)/||b_2'|| = (x^2 − x + 1/6) √180 = (6x^2 − 6x + 1) √5.

Extra question: Would the result change if we used a different inner product, e.g. with an integral over the interval [−1, +1] instead of [0, 1]?

Extra question: Seeing this basis, what does it mean to project a polynomial of degree 2 onto the space of polynomials of degree 1?

4 Matrices

4.0.1 Exercise: Matrix as a sum of a symmetric and an antisymmetric matrix

Prove that any square matrix M can be written as a sum of a symmetric matrix M^+ and an antisymmetric matrix M^−, i.e. M = M^+ + M^− with (M^+)^T = M^+ and (M^−)^T = −M^−.

Hint: Construct a symmetric matrix and an antisymmetric matrix from M.

Solution: We can make M symmetric or antisymmetric by adding or subtracting its transpose, respectively.

If we also divide by 2, to get the normalization right, we have

  M^+ := (M + M^T)/2, with (M^+)^T = (M^T + M)/2 = M^+,
  M^− := (M − M^T)/2, with (M^−)^T = (M^T − M)/2 = −M^−,
  M^+ + M^− = (M + M^T)/2 + (M − M^T)/2 = M.

4.1 Matrix multiplication

4.2 Matrices as linear transformations

4.2.1 Exercise: Antisymmetric matrices yield orthogonal vectors

1. Show that multiplying a vector v ∈ R^N with an antisymmetric N×N matrix A yields a vector orthogonal to v. In other words,

  A^T = −A ⇒ (v, Av) = 0 ∀v ∈ R^N.

Solution: If we write the inner product in matrix notation, we find

  (v, Av) = v^T A v = (v^T A v)^T   (because v^T A v is a scalar)
          = v^T A^T (v^T)^T   (because (AB)^T = B^T A^T for any A and B)
          = v^T A^T v
          = −v^T A v   (because A is antisymmetric)
          = −(v, Av)
  ⇒ (v, Av) = 0,

i.e. Av is orthogonal to v.

One can get an intuition for this by performing the product explicitly for a simple example while maintaining the order of the factors (Phillip Freyer, SS 9):

  v^T A v = (v_1, v_2, v_3) (a_11 a_12 a_13; a_21 a_22 a_23; a_31 a_32 a_33) (v_1, v_2, v_3)^T
          =   v_1 a_11 v_1 + v_1 a_12 v_2 + v_1 a_13 v_3
            + v_2 a_21 v_1 + v_2 a_22 v_2 + v_2 a_23 v_3
            + v_3 a_31 v_1 + v_3 a_32 v_2 + v_3 a_33 v_3
          =   0 + v_1 a_12 v_2 + v_1 a_13 v_3
            − v_1 a_12 v_2 + 0 + v_2 a_23 v_3
            − v_1 a_13 v_3 − v_2 a_23 v_3 + 0   (since A is antisymmetric)
          = 0.

Now one can see that the terms that are related by a transposition of the matrix cancel each other, so that the sum is zero.

2. Show the converse: If a matrix A transforms any vector v such that Av becomes orthogonal to v, then A is antisymmetric. In other words,

  (v, Av) = 0 ∀v ∈ R^N ⇒ A^T = −A.

Solution: We know the inner product (v, Av) is zero for every v. If we write it explicitly in terms of the coefficients and choose v to be either a Cartesian basis vector e_i or a sum of two such vectors, e_i + e_j, then we find

  0 = e_i^T A e_i = A_ii ∀i,
  0 = (e_i + e_j)^T A (e_i + e_j) ∀i, j
    = e_i^T A e_i + e_i^T A e_j + e_j^T A e_i + e_j^T A e_j
    = A_ii + A_ij + A_ji + A_jj
    = A_ij + A_ji   (because A_ii = A_jj = 0)
  ⇒ A_ij = −A_ji ∀i, j
  ⇒ A^T = −A,

i.e. A is antisymmetric.

This proof was fairly direct. However, there is a more elegant proof (Oswin Krause, SS 9), which requires a bit more background knowledge, namely (i) any matrix M can be written as a sum of a symmetric matrix M^+ and an antisymmetric matrix M^−, and (ii) if a quadratic form x^T H x with a symmetric matrix H is zero for any vector x, then H must be the zero matrix:

  0 = (v, Av) ∀v
    = v^T A v
    = v^T (A^+ + A^−) v   (by (i))
    = v^T A^+ v + v^T A^− v
    = v^T A^+ v   (by part 1)
  ⇒ A^+ = 0   (by (ii))
  ⇒ A = A^− = −A^T,

i.e., A is antisymmetric.
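Both directions are quick to check numerically. A minimal sketch (Python/NumPy, not from the original notes) builds a random antisymmetric matrix via the decomposition from exercise 4.0.1:

    import numpy as np

    rng = np.random.default_rng(1)
    M = rng.normal(size=(4, 4))
    A = (M - M.T) / 2          # antisymmetric part of an arbitrary matrix

    v = rng.normal(size=4)
    print(np.isclose(v @ (A @ v), 0.0))   # True: Av is always orthogonal to v
    print(np.allclose(A.T, -A))           # True: A is antisymmetric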

4.2.2 Exercise: Matrices that preserve the length of all vectors

Let A be a matrix that preserves the length of any vector under its transformation, i.e.

  ||Av|| = ||v|| ∀v ∈ R^N.

Show that A must be an orthogonal matrix.

Hint: For a square matrix M we have (v^T M v = 0 ∀v ∈ R^N) ⇒ M = −M^T.

Solution: Length preservation for any vector v means

  ||Av||^2 = ||v||^2 ∀v
  ⇔ v^T A^T A v = v^T v ∀v
  ⇔ v^T (A^T A − 1) v = 0 ∀v
  ⇒ (A^T A − 1) = −(A^T A − 1)^T   (by the hint)
  ⇔ (A^T A − 1) = −(A^T A − 1)   (because (A^T A − 1) is symmetric)
  ⇔ A^T A = 1,

which means that A is orthogonal.

4.2.3 Exercise: Derivative as a matrix operation

Taking the derivative of a function is a linear operation. Find a matrix that realizes the derivative on the vector spaces spanned by the following function sets. Use the given functions as a basis with respect to which you represent the vectors. Determine the rank of each matrix.

(a) {sin(x), cos(x)}.

Solution: The functions of the basis, written in terms of the basis, look like Euclidean basis vectors:

  sin(x) ≙ (1, 0)^T, cos(x) ≙ (0, 1)^T.

The derivatives are correspondingly

  (sin(x))' = cos(x) ≙ (0, 1)^T, (cos(x))' = −sin(x) ≙ (−1, 0)^T.

The derivative matrix is simply the combination of the column vectors resulting from taking the derivatives of the basis functions, i.e.

  D = (0 −1; 1 0).

Interpreted as a transformation, the matrix performs a rotation by 90°. The rank of the matrix is obviously 2.

We can verify that for a general function f(x) = a sin(x) + b cos(x), considered as a vector f within the given vector space, the derivative can actually be computed with D:

  f(x) = a sin(x) + b cos(x) ≙ f = (a, b)^T,
  D f = (−b, a)^T ≙ −b sin(x) + a cos(x) = a cos(x) − b sin(x) = f'(x).

(b) {1, x + 1, x^2}.

Solution: The functions of the basis, written in terms of the basis, look like Euclidean basis vectors:

  1 ≙ (1, 0, 0)^T, x + 1 ≙ (0, 1, 0)^T, x^2 ≙ (0, 0, 1)^T.

The derivatives are correspondingly

  (1)' = 0 ≙ (0, 0, 0)^T, (x + 1)' = 1 ≙ (1, 0, 0)^T, (x^2)' = 2x = 2(x + 1) − 2 ≙ (−2, 2, 0)^T.

The derivative matrix is simply the combination of these column vectors, i.e.

  D = (0 1 −2; 0 0 2; 0 0 0).

The rank of the matrix is obviously 2.

We can verify this for a general function f(x) = a + bx + cx^2 = (a − b) + b(x + 1) + cx^2:

  f ≙ (a − b, b, c)^T,
  D f = (b − 2c, 2c, 0)^T ≙ (b − 2c) + 2c(x + 1) = b + 2cx = f'(x).

(c) {exp(x), exp(2x)}.

Solution: The functions of the basis, written in terms of the basis, look like Euclidean basis vectors:

  exp(x) ≙ (1, 0)^T, exp(2x) ≙ (0, 1)^T.

The derivatives are correspondingly

  (exp(x))' = exp(x) ≙ (1, 0)^T, (exp(2x))' = 2 exp(2x) ≙ (0, 2)^T.

The derivative matrix is thus

  D = (1 0; 0 2).

Interpreted as a transformation, the matrix performs a stretching along the second axis by a factor of two. The rank of the matrix is obviously 2.

We can verify that for a general function f(x) = a exp(x) + b exp(2x) ≙ f = (a, b)^T:

  D f = (a, 2b)^T ≙ a exp(x) + 2b exp(2x) = f'(x).
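The derivative matrices can be applied directly to coefficient vectors. A short sketch (Python/NumPy, not from the original notes) using the matrices derived in exercise 4.2.3:

    import numpy as np

    # Derivative matrices from exercise 4.2.3; columns are the derivatives
    # of the basis functions expressed in the same basis.
    D_trig = np.array([[0.0, -1.0],
                       [1.0,  0.0]])        # basis {sin, cos}
    D_poly = np.array([[0.0, 1.0, -2.0],
                       [0.0, 0.0,  2.0],
                       [0.0, 0.0,  0.0]])   # basis {1, x+1, x^2}

    print(np.linalg.matrix_rank(D_trig), np.linalg.matrix_rank(D_poly))  # 2 2

    # Differentiate f(x) = 4 + 3x + 5x^2, i.e. (a-b, b, c) = (1, 3, 5) in
    # the basis {1, x+1, x^2}; f'(x) = 3 + 10x = -7*1 + 10*(x+1).
    f = np.array([1.0, 3.0, 5.0])
    print(D_poly @ f)   # [-7. 10.  0.]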

4.2.4 Exercise: Derivative as a matrix operation

Taking the derivative of a function is a linear operation. Find a matrix that realizes the derivative on the vector spaces spanned by the following function sets. Use the given functions as a basis with respect to which you represent the vectors. Determine the rank of each matrix.

(a) {sin(x), cos(x)}.

Solution: As in exercise 4.2.3(a): the derivatives of the basis functions are (sin(x))' = cos(x) ≙ (0, 1)^T and (cos(x))' = −sin(x) ≙ (−1, 0)^T, so the derivative matrix is

  D = (0 −1; 1 0),

a rotation by 90° with rank 2, and for f(x) = a sin(x) + b cos(x) ≙ (a, b)^T we get D f = (−b, a)^T ≙ a cos(x) − b sin(x) = f'(x).

(b) {1, x + 1, x^2}.

Solution: As in exercise 4.2.3(b): the derivatives of the basis functions are (1)' = 0, (x + 1)' = 1 ≙ (1, 0, 0)^T, and (x^2)' = 2x = 2(x + 1) − 2 ≙ (−2, 2, 0)^T, so

  D = (0 1 −2; 0 0 2; 0 0 0)

with rank 2, and for f(x) = a + bx + cx^2 = (a − b) + b(x + 1) + cx^2 ≙ (a − b, b, c)^T we get D f = (b − 2c, 2c, 0)^T ≙ (b − 2c) + 2c(x + 1) = b + 2cx = f'(x).

4.2.5 Exercise: Derivative as a matrix operation

Taking the derivative of a function is a linear operation. Find a matrix that realizes the derivative on the vector space spanned by the function set {sin(x), cos(x), x sin(x), x cos(x)}. Use the given functions as a basis. Determine the rank of the matrix.

Solution: The functions of the basis, written in terms of the basis, look like Euclidean basis vectors:

  sin(x) ≙ (1, 0, 0, 0)^T, cos(x) ≙ (0, 1, 0, 0)^T, x sin(x) ≙ (0, 0, 1, 0)^T, x cos(x) ≙ (0, 0, 0, 1)^T.

The derivatives are correspondingly

  (sin(x))' = cos(x) ≙ (0, 1, 0, 0)^T,
  (cos(x))' = −sin(x) ≙ (−1, 0, 0, 0)^T,
  (x sin(x))' = sin(x) + x cos(x) ≙ (1, 0, 0, 1)^T,
  (x cos(x))' = cos(x) − x sin(x) ≙ (0, 1, −1, 0)^T.

The derivative matrix is simply the combination of the column vectors resulting from taking the derivatives of the basis functions, i.e.

  D = (0 −1 1 0; 1 0 0 1; 0 0 0 −1; 0 0 1 0).

The rank of the matrix is obviously 4.

4.3 Rank of a matrix

4.4 Determinant

4.4.1 Exercise: Determinants

Calculate the determinants of the following matrices.

(a) M = (cos(φ) −sin(φ); sin(φ) cos(φ)).

Solution: The formula for the determinant of a 2×2 matrix yields

  |M| = cos(φ) cos(φ) − (−sin(φ)) sin(φ) = cos^2(φ) + sin^2(φ) = 1.

This result is not surprising, since M is a rotation matrix, which obviously does not change the volume of the unit square under its transformation.