Lecture Notes Methods of Mathematical Physics MATH 535


Instructor: Ivan Avramidi
New Mexico Institute of Mining and Technology
Socorro, NM
September 19, 2013

Textbooks:
S. Hassani, Mathematical Physics (Springer, 1999)
L. Debnath and P. Mikusinski, Introduction to Hilbert Spaces with Applications (Academic Press, 1999)

Author: Ivan Avramidi; File: mathphyshass1.tex; Date: November 20, 2013; Time: 16:51

Contents

1 Preliminaries
    Preliminaries
2 Finite-Dimensional Vector Spaces
    2.1 Vectors and Linear Transformations
        Vector Spaces
        Inner Product and Norm
        Exercises
        Linear Transformations
        Algebras
    2.2 Operator Algebra
        Algebra of Operators on a Vector Space
        Derivatives of Functions of Operators
        Self-Adjoint and Unitary Operators
        Trace and Determinant
        Finite Difference Operators
        Projection Operators
        Exercises
    2.3 Matrix Representation of Operators
        Matrices
        Operation on Matrices
        Inverse Matrix
        Trace
        Determinant
        Exercises
    2.4 Spectral Decomposition
        Direct Sums
        Invariant Subspaces
        Eigenvalues and Eigenvectors
        Spectral Decomposition
        Functions of Operators
        Polar Decomposition
        Real Vector Spaces
        Heisenberg Algebra, Fock Space and Harmonic Oscillator
        Exercises
3 Banach Spaces
    Normed Spaces
    Banach Spaces
    Linear Mappings
    Banach Fixed Point Theorem
4 Hilbert Spaces
    Inner Product Spaces
    Norm in an Inner Product Space
    Hilbert Spaces
    Strong and Weak Convergence
    Orthogonal and Orthonormal Systems
    Properties of Orthonormal Systems
    Orthonormal Complements and Projection Theorem
    Separable Hilbert Spaces
    Trigonometric Fourier Series
    Linear Functionals and the Riesz Representation Theorem
    Homework
5 Operators on Hilbert Spaces
    Examples of Operators
    Homework
    Bilinear Functionals and Quadratic Forms
    Adjoint and Self-Adjoint Operators
    Normal, Isometric and Unitary Operators
    Positive Operators
    Projection Operators
    Compact Operators
    Eigenvalues and Eigenvectors
    Spectral Decomposition
    Unbounded Operators
    Homework
6 The Fourier Transform
    Fourier Transform in L1(R)
    Fourier Transform in L2(R)

Bibliography 180
Answers To Exercises 183
Notation 185


Chapter 1 Preliminaries

1.1 Preliminaries

Sets, subsets, empty set, proper subset, universal set, union, intersection, power set, complement, Cartesian product

Equivalence relations, equivalence classes, partitions, quotient set

Maps, domain, codomain, image, preimage, graph, range, injections, surjections, bijections, binary operations

Example. Metric spaces, sequences, convergence, Cauchy sequences, completeness

Cardinality, countably infinite sets, uncountable sets

Mathematical induction


Chapter 2 Finite-Dimensional Vector Spaces

2.1 Vectors and Linear Transformations

Vector Spaces

A vector space consists of a set E, whose elements are called vectors, and a field F (such as R or C), whose elements are called scalars. There are two operations on a vector space:

1. Vector addition, + : E × E → E, that assigns to two vectors u, v ∈ E another vector u + v, and

2. Multiplication by scalars, · : F × E → E, that assigns to a vector v ∈ E and a scalar a ∈ F a new vector av ∈ E.

Vector addition is an associative and commutative operation with an additive identity. It satisfies the following conditions:

1. u + v = v + u, for all u, v ∈ E
2. (u + v) + w = u + (v + w), for all u, v, w ∈ E
3. There is a vector 0 ∈ E, called the zero vector, such that for any v ∈ E there holds v + 0 = v.
4. For any vector v ∈ E there is a vector (−v) ∈ E, called the opposite of v, such that v + (−v) = 0.

The multiplication by scalars satisfies the following conditions:

1. a(bv) = (ab)v, for all v ∈ E and a, b ∈ F
2. (a + b)v = av + bv, for all v ∈ E and a, b ∈ F
3. a(u + v) = au + av, for all u, v ∈ E and a ∈ F
4. 1v = v, for all v ∈ E.

The zero vector is unique. For any u, v ∈ E there is a unique vector, denoted by w = v − u and called the difference of v and u, such that u + w = v. For any v ∈ E, 0v = 0 and (−1)v = −v.

Let E be a real vector space and A = {e_1, ..., e_k} be a finite collection of vectors from E. A linear combination of these vectors is a vector

a_1 e_1 + ··· + a_k e_k,

where a_1, ..., a_k are scalars.

A finite collection of vectors A = {e_1, ..., e_k} is linearly independent if

a_1 e_1 + ··· + a_k e_k = 0

implies a_1 = ··· = a_k = 0.

A collection A of vectors is linearly dependent if it is not linearly independent.

Two non-zero vectors u and v which are linearly dependent are also called parallel, denoted by u ∥ v.

An arbitrary collection A of vectors is linearly independent if no vector of A is a linear combination of a finite number of other vectors from A.

Let A be a subset of a vector space E. The span of A, denoted by span A, is the subset of E consisting of all finite linear combinations of vectors from A, i.e.

span A = {v ∈ E | v = a_1 e_1 + ··· + a_k e_k, e_i ∈ A, a_i ∈ R}.

We say that the subspace span A is spanned by A.
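As a numerical illustration (not part of the notes), linear independence of a finite collection can be tested by checking whether the matrix whose columns are the given vectors has full column rank; the vectors below are made up for the example.

```python
import numpy as np

# Hypothetical vectors in R^3 chosen for illustration.
e1 = np.array([1.0, 0.0, 0.0])
e2 = np.array([1.0, 1.0, 0.0])
e3 = np.array([2.0, 2.0, 0.0])   # parallel to e2, hence dependent

# A collection is linearly independent iff the matrix with the vectors
# as columns has rank equal to the number of vectors.
A = np.column_stack([e1, e2])
B = np.column_stack([e1, e2, e3])
print(np.linalg.matrix_rank(A))  # 2: {e1, e2} is independent
print(np.linalg.matrix_rank(B))  # 2 < 3: {e1, e2, e3} is dependent
```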

Theorem The span of any subset of a vector space is a vector space.

A vector subspace of a vector space E is a subset S ⊆ E which is itself a vector space.

Theorem A subset S of E is a vector subspace of E if and only if span S = S.

The span of A is the smallest subspace of E containing A.

A collection B of vectors of a vector space E is a basis of E if B is linearly independent and span B = E.

A vector space E is finite-dimensional if it has a finite basis.

Theorem If the vector space E is finite-dimensional, then the number of vectors in any basis is the same.

The dimension of a finite-dimensional real vector space E, denoted by dim E, is the number of vectors in a basis.

Theorem If {e_1, ..., e_n} is a basis in E, then for every vector v ∈ E there is a unique set of real numbers (v^i) = (v^1, ..., v^n) such that

v = Σ_{i=1}^n v^i e_i = v^1 e_1 + ··· + v^n e_n.

The real numbers v^i, i = 1, ..., n, are called the components of the vector v with respect to the basis {e_i}. It is customary to denote the components of vectors by superscripts, which should not be confused with powers of real numbers:

v^2 ≠ (v)^2 = vv, ..., v^n ≠ (v)^n.

Examples of Vector Subspaces

Zero subspace {0}.

Line with a tangent vector u:

S_1 = span {u} = {v ∈ E | v = tu, t ∈ R}.

Plane spanned by two nonparallel vectors u_1 and u_2:

S_2 = span {u_1, u_2} = {v ∈ E | v = t u_1 + s u_2, t, s ∈ R}.

More generally, a k-plane spanned by a linearly independent collection of k vectors {u_1, ..., u_k}:

S_k = span {u_1, ..., u_k} = {v ∈ E | v = t^1 u_1 + ··· + t^k u_k, t^1, ..., t^k ∈ R}.

An (n−1)-plane in an n-dimensional vector space is called a hyperplane.

Examples of vector spaces: P[t], P_n[t], M_{m×n}, C^k([a, b]), C^∞([a, b]).

Inner Product and Norm

A complex vector space E is called an inner product space if there is a function (·, ·) : E × E → C, called the inner product, that assigns to every two vectors u and v a complex number (u, v) and satisfies the conditions: for all u, v, w ∈ E and a ∈ C,

1. (v, v) ≥ 0
2. (v, v) = 0 if and only if v = 0
3. (u, v) = (v, u)*, where * denotes complex conjugation
4. (u + v, w) = (u, w) + (v, w)
5. (u, av) = a(u, v)

A finite-dimensional real inner product space is called a Euclidean space.

Examples. On C([a, b]):

(f, g) = ∫_a^b f(t) g(t) w(t) dt,

where w is a positive continuous real-valued function called the weight function.

The Euclidean norm is a function ‖·‖ : E → R that assigns to every vector v ∈ E a real number ‖v‖ defined by

‖v‖ = √(v, v).

The norm of a vector is also called its length. A vector with unit norm is called a unit vector.

The natural distance function (a metric) is defined by

d(u, v) = ‖u − v‖.

Example.

Theorem For any u, v ∈ E there holds

‖u + v‖² = ‖u‖² + 2 Re(u, v) + ‖v‖².

If the norm satisfies the parallelogram law

‖u + v‖² + ‖u − v‖² = 2‖u‖² + 2‖v‖²,

then the inner product can be defined by

(u, v) = (1/4){ ‖u + v‖² − ‖u − v‖² − i‖u + iv‖² + i‖u − iv‖² }.

Theorem A normed linear space is an inner product space if and only if the norm satisfies the parallelogram law.

Theorem Every finite-dimensional vector space can be turned into an inner product space.

Theorem (Cauchy-Schwarz Inequality) For any u, v ∈ E there holds

|(u, v)| ≤ ‖u‖ ‖v‖.

The equality |(u, v)| = ‖u‖ ‖v‖ holds if and only if u and v are parallel.

Corollary (Triangle Inequality) For any u, v ∈ E there holds

‖u + v‖ ≤ ‖u‖ + ‖v‖.
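These inequalities and the parallelogram law are easy to check numerically. A minimal sketch using NumPy (the vectors are made up for the example; `np.vdot` conjugates its first argument, matching the convention above):

```python
import numpy as np

# Two arbitrary vectors in C^3, chosen only for illustration.
u = np.array([1.0 + 2.0j, 0.5, -1.0])
v = np.array([0.0, 1.0j, 2.0])

inner = np.vdot(u, v)            # (u, v), conjugate-linear in the first slot
norm = np.linalg.norm

# Cauchy-Schwarz inequality: |(u, v)| <= ||u|| ||v||
assert abs(inner) <= norm(u) * norm(v)
# Triangle inequality: ||u + v|| <= ||u|| + ||v||
assert norm(u + v) <= norm(u) + norm(v)
# Parallelogram law: ||u+v||^2 + ||u-v||^2 = 2||u||^2 + 2||v||^2
assert np.isclose(norm(u + v)**2 + norm(u - v)**2,
                  2 * norm(u)**2 + 2 * norm(v)**2)
```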

In a real vector space the angle between two non-zero vectors u and v is defined by

cos θ = (u, v) / (‖u‖ ‖v‖), 0 ≤ θ ≤ π.

Then the inner product can be written in the form

(u, v) = ‖u‖ ‖v‖ cos θ.

Two non-zero vectors u, v ∈ E are orthogonal, denoted by u ⊥ v, if (u, v) = 0.

A basis {e_1, ..., e_n} is called orthonormal if each vector of the basis is a unit vector and any two distinct vectors are orthogonal to each other, that is,

(e_i, e_j) = 1 if i = j, and 0 if i ≠ j.

Theorem Every Euclidean space has an orthonormal basis.

Let S ⊆ E be a nonempty subset of E. We say that x ∈ E is orthogonal to S, denoted by x ⊥ S, if x is orthogonal to every vector of S. The set

S⊥ = {x ∈ E | x ⊥ S}

of all vectors orthogonal to S is called the orthogonal complement of S.

Theorem The orthogonal complement of any subset of a Euclidean space is a vector subspace.

Two subsets A and B of E are orthogonal, denoted by A ⊥ B, if every vector of A is orthogonal to every vector of B.

Let S be a subspace of E and S⊥ be its orthogonal complement. If every element of E can be uniquely represented as the sum of an element of S and an element of S⊥, then E is the direct sum of S and S⊥, which is denoted by

E = S ⊕ S⊥.

The union of a basis of S and a basis of S⊥ gives a basis of E.
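An orthonormal basis can be constructed from any linearly independent collection by the Gram-Schmidt process (see Exercise 6 below). A minimal sketch, assuming real vectors; this variant normalizes at each step, which is equivalent to first forming the orthogonal system and then normalizing:

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a linearly independent list of real vectors."""
    basis = []
    for u in vectors:
        # Subtract the projections onto the vectors already constructed.
        v = u - sum((e @ u) * e for e in basis)
        basis.append(v / np.linalg.norm(v))
    return basis

# Made-up linearly independent vectors in R^3.
G = [np.array([1.0, 1.0, 0.0]),
     np.array([1.0, 0.0, 1.0]),
     np.array([0.0, 1.0, 1.0])]
B = gram_schmidt(G)

# The Gram matrix of an orthonormal system is the identity.
Gram = np.array([[ei @ ej for ej in B] for ei in B])
assert np.allclose(Gram, np.eye(3))
```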

Exercises

1. Show that if λv = 0, then either v = 0 or λ = 0.
2. Prove that the span of a collection of vectors is a vector subspace.
3. Show that the Euclidean norm has the following properties:
(a) ‖v‖ ≥ 0 for all v ∈ E;
(b) ‖v‖ = 0 if and only if v = 0;
(c) ‖av‖ = |a| ‖v‖ for all v ∈ E, a ∈ R.
4. Parallelogram Law. Show that for any u, v ∈ E

‖u + v‖² + ‖u − v‖² = 2(‖u‖² + ‖v‖²).

5. Show that any orthogonal system in E is linearly independent.
6. Gram-Schmidt orthonormalization process. Let G = {u_1, ..., u_k} be a linearly independent collection of vectors. Let O = {v_1, ..., v_k} be a new collection of vectors defined recursively by

v_1 = u_1,
v_j = u_j − Σ_{i=1}^{j−1} v_i (v_i, u_j)/‖v_i‖², 2 ≤ j ≤ k,

and the collection B = {e_1, ..., e_k} be defined by

e_i = v_i/‖v_i‖.

Show that: a) O is an orthogonal system, and b) B is an orthonormal system.
7. Pythagorean Theorem. Show that if u ⊥ v, then

‖u + v‖² = ‖u‖² + ‖v‖².

8. Let B = {e_1, ..., e_n} be an orthonormal basis in E. Show that for any vector v ∈ E

v = Σ_{i=1}^n e_i (e_i, v)

and

‖v‖² = Σ_{i=1}^n |(e_i, v)|².

9. Prove that the orthogonal complement of a subset S of E is a vector subspace of E.
10. Let S be a subspace in E. Prove that: a) E⊥ = {0}, b) {0}⊥ = E, c) (S⊥)⊥ = S.
11. Show that the intersection of orthogonal subsets of a Euclidean space is either empty or consists of only the zero vector. That is, for two subsets A and B, if A ⊥ B, then A ∩ B = {0} or A ∩ B = ∅.

Linear Transformations

A linear transformation from a vector space V to a vector space W is a map T : V → W satisfying the condition

T(αu + βv) = αTu + βTv

for any u, v ∈ V and α, β ∈ C.

The zero transformation maps all vectors to the zero vector.

A linear transformation is called an endomorphism (or a linear operator) if V = W.

A linear transformation is called a linear functional if W = C.

A linear transformation is uniquely determined by its action on a basis.

The set of linear transformations from V to W is a vector space denoted by L(V, W). The set of endomorphisms (operators) on V is denoted by End(V) or L(V). The set of linear functionals on V is called the dual space and is denoted by V*.

Example.

The kernel (or null space) of a linear transformation T : V → W, denoted by Ker T, is the set of vectors in V that are mapped to zero.

Theorem The kernel of a linear transformation is a vector space.

The dimension of a finite-dimensional kernel is called the nullity of the linear transformation,

null T = dim Ker T.

Theorem The range of a linear transformation is a vector space.

The dimension of a finite-dimensional range is called the rank of the linear transformation,

rank T = dim Im T.

Theorem (Dimension Theorem) Let T : V → W be a linear transformation between finite-dimensional vector spaces. Then

dim Ker T + dim Im T = dim V.

Theorem A linear transformation is injective if and only if its kernel is zero.

An endomorphism of a finite-dimensional space is bijective if it is either injective or surjective.

Two vector spaces are isomorphic if they can be related by a bijective linear transformation (which is called an isomorphism). An isomorphism is called an automorphism if V = W. The set of all automorphisms of V is denoted by Aut(V) or GL(V).

A linear surjection is an isomorphism if and only if its nullity is zero.

Theorem An isomorphism maps linearly independent sets onto linearly independent sets.

Theorem Two finite-dimensional vector spaces are isomorphic if and only if they have the same dimension.

All n-dimensional complex vector spaces are isomorphic to C^n. All n-dimensional real vector spaces are isomorphic to R^n.
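The Dimension Theorem can be illustrated numerically: for a matrix representing T, the singular value decomposition yields both the rank and a basis of the kernel. A sketch with a made-up matrix:

```python
import numpy as np

# A made-up linear transformation T : R^4 -> R^3 with a dependent row.
T = np.array([[1., 2., 0., 1.],
              [0., 0., 1., 1.],
              [1., 2., 1., 2.]])   # third row = first + second

U, s, Vt = np.linalg.svd(T)
rank = int(np.sum(s > 1e-12))      # dim Im T
kernel_basis = Vt[rank:]           # rows of V^T spanning Ker T
for w in kernel_basis:
    assert np.allclose(T @ w, 0)   # each basis vector is mapped to zero
# dim Ker T + dim Im T = dim V
assert rank + len(kernel_basis) == T.shape[1]
print(rank, len(kernel_basis))     # 2 2
```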

The dual basis f^i in the dual space V* is defined by

f^i(e_j) = δ^i_j,

where e_j is the basis in V.

Theorem The dual space V* is isomorphic to V.

The dual (or the pullback) of a linear transformation T : V → W is the linear transformation T* : W* → V* defined for any g ∈ W* by

(T*g)v = g(Tv), v ∈ V.

If T is surjective then T* is injective. If T is injective then T* is surjective. If T is an isomorphism then T* is an isomorphism.

Algebras

An algebra A is a vector space together with a binary operation called multiplication satisfying the conditions

u(αv + βw) = αuv + βuw,
(αv + βw)u = αvu + βwu,

for any u, v, w ∈ A and α, β ∈ C.

Examples. Matrices, functions, operators.

The dimension of the algebra is the dimension of the vector space.

The algebra is associative if u(vw) = (uv)w and commutative if uv = vu.

An algebra with identity is an algebra with an identity element 1 satisfying

u1 = 1u = u

for any u ∈ A.

An element v is a left inverse of u if vu = 1 and a right inverse if uv = 1.

Example. Lie algebras.

An operator D : A → A on an algebra A is called a derivation if it satisfies

D(uv) = (Du)v + u(Dv).

Example. Let A = Mat(n) be the algebra of square matrices of dimension n with the binary operation being the commutator of matrices. It is easy to show that for any matrices A, B, C the following identity (the Jacobi identity) holds:

[A, [B, C]] + [B, [C, A]] + [C, [A, B]] = 0.

Let C be a fixed matrix. We define an operator Ad_C on the algebra by

Ad_C B = [C, B].

Then this operator is a derivation, since for any matrices A, B

Ad_C [A, B] = [Ad_C A, B] + [A, Ad_C B].

A linear transformation T : A → B from an algebra A to an algebra B is called an algebra homomorphism if

T(uv) = T(u)T(v)

for any u, v ∈ A.

An algebra homomorphism is called an algebra isomorphism if it is bijective.

Example. The isomorphism of the Lie algebra so(3) and R³ with the cross product. Let X_i, i = 1, 2, 3, be the antisymmetric matrices defined by

(X_i)^j_k = ε^j_{ik}.

They form an algebra with respect to the commutator:

[X_i, X_j] = ε^k_{ij} X_k.

We define a map T : R³ → so(3) as follows. Let v = v^i e_i be a vector in R³. Then

T(v) = v^i X_i.

Let R³ be equipped with the cross product. Then

T(v × u) = [T(v), T(u)].

Thus T is an isomorphism (a linear bijective algebra homomorphism).

Any finite-dimensional vector space can be converted into an algebra by defining the multiplication of the basis vectors by

e_i e_j = Σ_{k=1}^n C^k_{ij} e_k,

where the C^k_{ij} are scalars called the structure constants of the algebra.

Example. The Lie algebra su(2). The Pauli matrices are defined by

σ_1 = ( 0 1 ; 1 0 ), σ_2 = ( 0 −i ; i 0 ), σ_3 = ( 1 0 ; 0 −1 ). (2.1)

They are Hermitian traceless matrices satisfying

σ_i σ_j = δ_{ij} I + i ε_{ijk} σ_k. (2.2)

They satisfy the commutation relations

[σ_i, σ_j] = 2i ε_{ijk} σ_k (2.3)

and the anti-commutation relations

σ_i σ_j + σ_j σ_i = 2δ_{ij} I. (2.4)

Therefore, the Pauli matrices form a representation of the Clifford algebra in 2 dimensions.

The matrices

J_i = −(i/2) σ_i (2.5)

are the generators of the Lie algebra su(2) with the commutation relations

[J_i, J_j] = ε^k_{ij} J_k. (2.6)

An algebra homomorphism Λ : su(2) → so(3) is defined as follows. Let v = v^i J_i ∈ su(2). Then Λ(v) is the matrix defined by

Λ(v) = v^i X_i.

Example. Quaternions. The algebra of quaternions H is defined by (here i, j, k = 1, 2, 3)

e_0² = e_0, e_i² = −e_0, e_0 e_i = e_i e_0 = e_i, e_i e_j = ε^k_{ij} e_k for i ≠ j.

There is an algebra homomorphism ρ : H → su(2),

ρ(e_0) = I, ρ(e_j) = −iσ_j.

A subspace of an algebra is called a subalgebra if it is closed under the algebra multiplication.

A subset B of an algebra A is called a left ideal if AB ⊆ B, that is, for any u ∈ A and any v ∈ B, uv ∈ B.

A subset B of an algebra A is called a right ideal if BA ⊆ B, that is, for any u ∈ A and any v ∈ B, vu ∈ B.
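The Pauli-matrix identity (2.2), which packages both the commutation relations (2.3) and the anti-commutation relations (2.4), can be verified directly. A sketch (the closed-form Levi-Civita expression is a standard convenience, not from the notes):

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
sigma = [s1, s2, s3]
I2 = np.eye(2)

def eps(i, j, k):
    # Levi-Civita symbol for indices in {0, 1, 2}
    return (i - j) * (j - k) * (k - i) / 2

# Verify sigma_i sigma_j = delta_ij I + i eps_ijk sigma_k   (2.2)
for i in range(3):
    for j in range(3):
        rhs = (i == j) * I2 + 1j * sum(eps(i, j, k) * sigma[k] for k in range(3))
        assert np.allclose(sigma[i] @ sigma[j], rhs)
```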

A subset B of an algebra A is called a two-sided ideal if it is both a left and a right ideal, that is, if ABA ⊆ B, or, for any u, w ∈ A and any v ∈ B, uvw ∈ B.

Every ideal is a subalgebra.

A proper ideal of an algebra with identity cannot contain the identity element. A proper left ideal cannot contain an element that has a left inverse.

If an ideal does not contain any proper subideals, then it is a minimal ideal.

Examples. Let x be an element of an algebra A. Let Ax be the set defined by

Ax = {ux | u ∈ A}.

Then Ax is a left ideal. Similarly, xA is a right ideal and AxA is a two-sided ideal.

2.2 Operator Algebra

Algebra of Operators on a Vector Space

A linear operator on a vector space E is a mapping L : E → E satisfying the condition: for all u, v ∈ E and a ∈ R,

L(u + v) = L(u) + L(v) and L(av) = aL(v).

The identity operator I on E is defined by

Iv = v, for all v ∈ E.

The null operator 0 : E → E is defined by

0v = 0, for all v ∈ E.

The vector u = L(v) is the image of the vector v.

If S is a subset of E, then the set

L(S) = {u ∈ E | u = L(v) for some v ∈ S}

is the image of the set S, and the set

L^{−1}(S) = {v ∈ E | L(v) ∈ S}

is the inverse image of the set S.

The image of the whole space E under a linear operator L is the range (or the image) of L, denoted by

Im(L) = L(E) = {u ∈ E | u = L(v) for some v ∈ E}.

The kernel Ker(L) (or the null space) of an operator L is the set of all vectors in E which are mapped to zero, that is,

Ker(L) = L^{−1}({0}) = {v ∈ E | L(v) = 0}.

Theorem For any operator L the sets Im(L) and Ker(L) are vector subspaces.

The dimension of the kernel Ker(L) of an operator L,

null(L) = dim Ker(L),

is called the nullity of the operator L.

The dimension of the range Im(L) of an operator L,

rank(L) = dim Im(L),

is called the rank of the operator L.

Theorem For any operator L on an n-dimensional Euclidean space E,

rank(L) + null(L) = n.

The set L(E) of all linear operators on a vector space E is a vector space with the addition of operators and multiplication by scalars defined by

(L_1 + L_2)(x) = L_1(x) + L_2(x) and (aL)(x) = aL(x).

The product of the operators A and B is the composition of A and B. Since the product of operators is defined as a composition of linear mappings, it is automatically associative, which means that for any operators A, B and C there holds

(AB)C = A(BC).

The integer powers of an operator are defined as the multiple composition of the operator with itself, i.e.

A^0 = I, A^1 = A, A^2 = AA, ...

The operator A on E is invertible if there exists an operator A^{−1} on E, called the inverse of A, such that

A^{−1}A = AA^{−1} = I.

Theorem Let A and B be invertible operators. Then

(A^{−1})^{−1} = A, (AB)^{−1} = B^{−1}A^{−1}.

The operators A and B are commuting if AB = BA and anti-commuting if AB = −BA.

The operators A and B are said to be orthogonal to each other if

AB = BA = 0.

An operator A is involutive if A² = I, idempotent if A² = A, and nilpotent if A^k = 0 for some integer k.

Two operators A and B are equal if Au = Bu for any u ∈ V. If Ae_i = Be_i for all basis vectors in V, then A = B. Operators are uniquely determined by their action on a basis.

Theorem An operator A is equal to zero if and only if for any u, v ∈ V

(u, Av) = 0.

Theorem An operator A is equal to zero if and only if for any u

(u, Au) = 0.

Proof: Use (w, Aw) = 0 for w = au + bv with a = 1, b = i and a = i, b = 1.

Theorem 1. The inverse of an automorphism is unique.

2. The product of two automorphisms is an automorphism.
3. A linear transformation is an automorphism if and only if it maps a basis to another basis.

Polynomials of Operators.

P_n(T) = a_n T^n + ··· + a_1 T + a_0 I,

where I is the identity operator.

The commutator of two operators A and B is the operator [A, B] defined by

[A, B] = AB − BA.

Theorem Properties of commutators:

Anti-symmetry: [A, B] = −[B, A]

Linearity: [aA, bB] = ab[A, B], [A, B + C] = [A, B] + [A, C], [A + C, B] = [A, B] + [C, B]

Right derivation: [AB, C] = A[B, C] + [A, C]B

Left derivation: [A, BC] = [A, B]C + B[A, C]

Jacobi identity: [A, [B, C]] + [B, [C, A]] + [C, [A, B]] = 0

Consequences: [A, A^m] = 0, [A, A^{−1}] = 0, [A, f(A)] = 0.
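These commutator identities hold for arbitrary matrices, so they can be spot-checked on random ones. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
A, B, C = (rng.standard_normal((3, 3)) for _ in range(3))

def comm(X, Y):
    return X @ Y - Y @ X

# Anti-symmetry: [A, B] = -[B, A]
assert np.allclose(comm(A, B), -comm(B, A))
# Right derivation: [AB, C] = A[B, C] + [A, C]B
assert np.allclose(comm(A @ B, C), A @ comm(B, C) + comm(A, C) @ B)
# Jacobi identity
J = comm(A, comm(B, C)) + comm(B, comm(C, A)) + comm(C, comm(A, B))
assert np.allclose(J, 0)
```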

Functions of Operators.

Negative powers:

T^m = T ··· T (m factors), T^{−m} = (T^{−1})^m, T^m T^n = T^{m+n}, (T^m)^n = T^{mn}.

Let f be an analytic function given by

f(x) = Σ_{k=0}^∞ f^{(k)}(x_0)/k! (x − x_0)^k.

Then for an operator T

f(T) = Σ_{k=0}^∞ f^{(k)}(x_0)/k! (T − x_0 I)^k.

Exponential:

exp(T) = Σ_{k=0}^∞ T^k/k!.

Derivatives of Functions of Operators

A time-dependent operator is a map H : R → End(V). Note that, in general,

[H(t), H(t′)] ≠ 0.

Example. A : R² → R²,

A(x, y) = (−y, x).

Note that A² = −I, so

A^{2n} = (−1)^n I, A^{2n+1} = (−1)^n A.

Therefore

exp(tA) = cos t I + sin t A,

that is,

exp(tA)(x, y) = (x cos t − y sin t, x sin t + y cos t),

which is a rotation by the angle t. So A is a generator of rotations.

The derivative of a time-dependent operator is the operator defined by

dH/dt = lim_{h→0} [H(t + h) − H(t)]/h.

Rules of differentiation:

d(AB)/dt = (dA/dt)B + A(dB/dt),

d exp(tA)/dt = A exp(tA).

Example. Exponential of the adjoint. Let

X(t) = e^{tA} B e^{−tA}.

It satisfies the equation

dX/dt = [A, X]

with the initial condition X(0) = B. Let Ad_A be defined by

Ad_A B = [A, B].

Then

X(t) = exp(t Ad_A)B = Σ_{k=0}^∞ t^k/k! [A, [A, ..., [A, B] ...]] (k nested commutators).
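The identity exp(tA) = cos t I + sin t A for this generator of rotations can be checked numerically. A sketch; the Taylor-series `expm` below is a simple stand-in (adequate for small ‖tA‖), not a production matrix exponential:

```python
import math
import numpy as np

def expm(M, terms=30):
    """Matrix exponential via a truncated Taylor series (fine for small ||M||)."""
    result = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        result = result + term
    return result

A = np.array([[0., -1.],
              [1.,  0.]])        # A(x, y) = (-y, x), so A^2 = -I
t = 0.7
R = expm(t * A)
assert np.allclose(R, math.cos(t) * np.eye(2) + math.sin(t) * A)
# R rotates (1, 0) to (cos t, sin t)
assert np.allclose(R @ np.array([1., 0.]), [math.cos(t), math.sin(t)])
```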

Duhamel's Formula.

d/dt exp[H(t)] = exp[H(t)] ∫_0^1 exp[−sH(t)] (dH(t)/dt) exp[sH(t)] ds.

Proof. Let

Y(s) = e^{−sH} (d/dt) e^{sH}.

Then Y(0) = 0 and

d/dt exp[H] = exp[H] Y(1).

We compute

dY/ds = −HY + e^{−sH} (d/dt)(H e^{sH}) = e^{−sH} (dH/dt) e^{sH} = exp(−s Ad_H)(dH/dt).

Integrating from 0 to 1,

Y(1) = ∫_0^1 exp(−s Ad_H)(dH/dt) ds = ∫_0^1 e^{−sH} (dH/dt) e^{sH} ds.

By changing the variable s → (1 − s) this can also be written in the form

Y(1) = ∫_0^1 e^{−(1−s)H} (dH/dt) e^{(1−s)H} ds.

This gives the desired result.

Particular case. If dH/dt commutes with H, then

d/dt e^{H(t)} = e^{H(t)} dH/dt.

Campbell-Hausdorff Formula.

exp A exp B = exp[C(A, B)].

Consider

U(s) = e^A e^{sB} = e^{C(s)}.

Of course, C(0) = A. We have

dU/ds = UB.

On the other hand, by Duhamel's formula,

dU/ds = U F(Ad_C) dC/ds,

where

F(z) = ∫_0^1 e^{−τz} dτ = (1 − e^{−z})/z.

Therefore

F(Ad_C) dC/ds = B.

Now, let

Ψ(z) = 1/F(log z) = z log z/(z − 1).

Note that

e^{Ad_C} = Ad_{e^C} = Ad_{e^A e^{sB}} = e^{Ad_A} e^{s Ad_B},

so that

Ψ(e^{Ad_A} e^{s Ad_B}) F(Ad_C) = I.

Therefore, we get a differential equation

dC/ds = Ψ(e^{Ad_A} e^{s Ad_B}) B

with the initial condition C(0) = A. Therefore,

C(1) = A + ∫_0^1 Ψ(e^{Ad_A} e^{s Ad_B}) B ds.

This gives a power series in Ad_A and Ad_B.

Particular case. If [A, B] commutes with both A and B, then

e^A e^B = e^{A+B} exp( (1/2)[A, B] ).
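The particular case of the Campbell-Hausdorff formula can be verified exactly on nilpotent matrices, for which [A, B] is automatically central and the exponential series terminates. A sketch with Heisenberg-type matrices chosen for the example:

```python
import numpy as np

def expm(M, terms=20):
    # Taylor series; exact here because the matrices are nilpotent.
    result = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        result = result + term
    return result

def comm(X, Y):
    return X @ Y - Y @ X

# Nilpotent matrices whose commutator commutes with both of them.
A = np.array([[0., 1., 0.], [0., 0., 0.], [0., 0., 0.]])
B = np.array([[0., 0., 0.], [0., 0., 1.], [0., 0., 0.]])
C = comm(A, B)
assert np.allclose(comm(C, A), 0) and np.allclose(comm(C, B), 0)

# e^A e^B = e^{A+B} exp((1/2)[A, B])
lhs = expm(A) @ expm(B)
rhs = expm(A + B) @ expm(0.5 * C)
assert np.allclose(lhs, rhs)
```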

Self-Adjoint and Unitary Operators

The adjoint A* of an operator A is defined by

(Au, v) = (u, A*v), for all u, v ∈ E.

Theorem For any two operators A and B,

(A*)* = A, (AB)* = B*A*, (A + B)* = A* + B*, (aA)* = ā A*.

An operator A is self-adjoint (or Hermitian) if A* = A, and anti-selfadjoint if A* = −A.

Every operator A can be decomposed as the sum A = A_S + A_A of its selfadjoint part A_S and its anti-selfadjoint part A_A:

A_S = (1/2)(A + A*), A_A = (1/2)(A − A*).

Theorem An operator H is Hermitian if and only if (u, Hu) is real for any u.

An operator A on E is called positive, denoted by A ≥ 0, if it is selfadjoint and

(Av, v) ≥ 0 for all v ∈ E.

An operator H is positive definite (H > 0) if it is positive and (u, Hu) = 0 only for u = 0.

Example. H = A*A ≥ 0.

An operator A is called unitary if

AA* = A*A = I.

An operator U is isometric if for any v ∈ E

‖Uv‖ = ‖v‖.

Example. U = exp(A) with A* = −A.

Unitary operators preserve the inner product.

Theorem Let U be a unitary operator on a real vector space E. Then there exists an anti-selfadjoint operator A such that

U = exp A.

Recall that the operators U and A satisfy the equations U* = U^{−1} and A* = −A.

Trace and Determinant

The trace of an operator A is defined by

tr A = Σ_{i=1}^n (e_i, Ae_i).

The determinant of a positive operator on a finite-dimensional space is defined by

det A = exp(tr log A).

Properties:

tr AB = tr BA, det AB = det A det B,
tr(RAR^{−1}) = tr A, det(RAR^{−1}) = det A.

Theorem.

d/dt det(I + tA)|_{t=0} = tr A,

det(I + tA) = 1 + t tr A + O(t²),

d/dt det A = det A tr( A^{−1} dA/dt ).

Note that tr I = n and det I = 1.

Theorem Let A be a self-adjoint operator. Then

det exp A = e^{tr A}.

Let A be a positive definite operator, A > 0. The zeta-function of the operator A is defined by

ζ(s) = tr A^{−s} = 1/Γ(s) ∫_0^∞ dt t^{s−1} tr e^{−tA}.

Theorem The zeta-function has the properties

ζ(0) = n and ζ′(0) = −log det A.
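The identity det exp A = e^{tr A} and the first-order expansion of det(I + tA) can be checked on a random self-adjoint matrix. A sketch, computing exp(A) through the eigendecomposition:

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4))
A = (M + M.T) / 2                      # a random self-adjoint operator

# exp(A) via the eigendecomposition A = Q diag(w) Q^T
w, Q = np.linalg.eigh(A)
expA = Q @ np.diag(np.exp(w)) @ Q.T

# det exp A = e^{tr A}
assert np.isclose(np.linalg.det(expA), np.exp(np.trace(A)))
# det(I + tA) = 1 + t tr A + O(t^2) for small t
t = 1e-6
assert np.isclose(np.linalg.det(np.eye(4) + t * A), 1 + t * np.trace(A))
```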

Finite Difference Operators

Let e_i be an orthonormal basis. The shift operator E is defined by

E e_1 = 0, E e_i = e_{i−1}, i = 2, ..., n,

that is,

E f = Σ_{i=1}^{n−1} f_{i+1} e_i, or (E f)_i = f_{i+1}.

Let

∆ = E − I, ∇ = I − E^{−1}.

Next, define an operator D by

E = exp(hD),

that is,

D = (1/h) log E = (1/h) log(I + ∆) = −(1/h) log(I − ∇).

Also, define an operator J by

J = D^{−1}.

Then

∆f_i = f_{i+1} − f_i, ∇f_i = f_i − f_{i−1}.

Problem. Compute U(t) = exp[tD²].
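The shift and forward-difference operators are easy to realize as matrices. A sketch on a made-up sample sequence f_i = i²:

```python
import numpy as np

n = 5
# Shift operator on the standard basis: E e_1 = 0, E e_i = e_{i-1},
# so in components (E f)_i = f_{i+1} (and (E f)_n = 0).
E = np.diag(np.ones(n - 1), 1)

f = np.array([1., 4., 9., 16., 25.])   # f_i = i^2
Delta = E - np.eye(n)                   # forward difference
df = Delta @ f
# Delta f_i = f_{i+1} - f_i = (i+1)^2 - i^2 = 2i + 1;
# the last entry is a boundary artifact of E e_1 = 0.
assert np.allclose(df[:-1], [3., 5., 7., 9.])
```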

Projection Operators

A Hermitian operator P is a projection if

P² = P.

Two projections P_1, P_2 are orthogonal if

P_1 P_2 = P_2 P_1 = 0.

Let S be a subspace of E and E = S ⊕ S⊥. Then for any u ∈ E there exist unique v ∈ S and w ∈ S⊥ such that u = v + w.

The vector v is called the projection of u onto S.

The operator P on E defined by Pu = v is called the projection operator onto S. The operator P⊥ defined by P⊥u = w is the projection operator onto S⊥.

The operators P and P⊥ are called complementary projections. They have the properties:

P* = P, (P⊥)* = P⊥, P + P⊥ = I, P² = P, (P⊥)² = P⊥, P P⊥ = P⊥ P = 0.

More generally, a collection of projections {P_1, ..., P_k} is a complete orthogonal system of complementary projections if

P_i P_j = 0 for i ≠ j, and Σ_{i=1}^k P_i = P_1 + ··· + P_k = I.

The trace of a projection P onto a vector subspace S is equal to its rank, or the dimension of the vector subspace S:

tr P = rank P = dim S.

Theorem An operator P is a projection if and only if P is idempotent and self-adjoint.

Theorem The sum of projections is a projection if and only if they are orthogonal.

The projection onto a unit vector e has the form

Pv = e(e, v).

Let {e_i}_{i=1}^m be an orthonormal set and S = span{e_1, ..., e_m}. Then the operator

Pv = Σ_{i=1}^m e_i (e_i, v)

is the projection onto S. If {e_i} is an orthonormal basis, then the projections P_i v = e_i (e_i, v) form a complete orthogonal system. Examples.

Let u be a unit vector and P_u be the projection onto the one-dimensional subspace (line) S_u spanned by u, defined by

P_u v = u(u, v).

The orthogonal complement S_u⊥ is the hyperplane with the normal u.

The operator J_u defined by

J_u = I − 2P_u

is called the reflection operator with respect to the hyperplane S_u⊥.

The reflection operator is a self-adjoint involution, that is, it has the following properties:

J_u* = J_u, J_u² = I.
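The defining properties of projections and reflections can be verified concretely. A sketch with a made-up unit vector in R³:

```python
import numpy as np

# Projection onto the line spanned by a unit vector u in R^3.
u = np.array([1., 2., 2.]) / 3.0        # a made-up unit vector
P = np.outer(u, u)                      # P v = u (u, v)
P_perp = np.eye(3) - P                  # projection onto the hyperplane u-perp

assert np.allclose(P @ P, P)            # idempotent
assert np.allclose(P, P.T)              # self-adjoint
assert np.allclose(P @ P_perp, 0)       # complementary projections are orthogonal
assert np.isclose(np.trace(P), 1.0)     # tr P = dim S = 1

J = np.eye(3) - 2 * P                   # reflection operator J_u
assert np.allclose(J @ J, np.eye(3))    # involution: J_u^2 = I
assert np.allclose(J @ u, -u)           # eigenvalue -1 on S_u
```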

The reflection operator has the eigenvalue −1 with multiplicity 1 and eigenspace S_u, and the eigenvalue +1 with multiplicity (n−1) and eigenspace S_u⊥.

Let u_1 and u_2 be an orthonormal system of two vectors and P_{u_1,u_2} be the projection operator onto the two-dimensional space (plane) S_{u_1,u_2} spanned by u_1 and u_2:

P_{u_1,u_2} v = u_1(u_1, v) + u_2(u_2, v).

Let N_{u_1,u_2} be the operator defined by

N_{u_1,u_2} v = u_1(u_2, v) − u_2(u_1, v).

Then

N_{u_1,u_2} P_{u_1,u_2} = P_{u_1,u_2} N_{u_1,u_2} = N_{u_1,u_2}

and

N²_{u_1,u_2} = −P_{u_1,u_2}.

A rotation operator R_{u_1,u_2}(θ) with the angle θ in the plane S_{u_1,u_2} is defined by

R_{u_1,u_2}(θ) = I − P_{u_1,u_2} + cos θ P_{u_1,u_2} + sin θ N_{u_1,u_2}.

The rotation operator is unitary, that is, it satisfies the equation

R*_{u_1,u_2} R_{u_1,u_2} = I.

Exercises

1. Prove that the range and the kernel of any operator are vector spaces.
2. Show that

(aA + bB)* = aA* + bB* for a, b ∈ R, (A*)* = A, (AB)* = B*A*.

3. Show that for any operator A the operators AA* and A + A* are selfadjoint.
4. Show that the product of two selfadjoint operators is selfadjoint if and only if they commute.
5. Show that a polynomial p(A) of a selfadjoint operator A is a selfadjoint operator.

6. Prove that the inverse of an invertible operator is unique.
7. Prove that an operator A is invertible if and only if Ker A = {0}, that is, Av = 0 implies v = 0.
8. Prove that for an invertible operator A, Im(A) = E, that is, for any vector v ∈ E there is a vector u ∈ E such that v = Au.
9. Show that if an operator A is invertible, then (A^{−1})^{−1} = A.
10. Show that the product AB of two invertible operators A and B is invertible and (AB)^{−1} = B^{−1}A^{−1}.
11. Prove that the adjoint A* of any invertible operator A is invertible and (A*)^{−1} = (A^{−1})*.
12. Prove that the inverse A^{−1} of a selfadjoint invertible operator is selfadjoint.
13. An operator A on E is called isometric if ‖Av‖ = ‖v‖ for all v ∈ E. Prove that an operator is unitary if and only if it is isometric.
14. Prove that unitary operators preserve the inner product. That is, show that if A is a unitary operator, then for all u, v ∈ E

(Au, Av) = (u, v).

15. Show that for every unitary operator A both A^{−1} and A* are unitary.
16. Show that for any operator A the operators AA* and A*A are positive.
17. What subspaces do the null operator 0 and the identity operator I project onto?
18. Show that for any two projection operators P and Q, PQ = 0 if and only if QP = 0.
19. Prove the following properties of orthogonal projections:

P* = P, (P⊥)* = P⊥, P + P⊥ = I, P P⊥ = P⊥ P = 0.

20. Prove that an operator is a projection if and only if it is idempotent and self-adjoint.

21. Give an example of an idempotent operator in R^2 which is not a projection.

22. Show that any projection operator P is positive. Moreover, show that for all v in E, (Pv, v) = ||Pv||^2.

23. Prove that the sum P = P_1 + P_2 of two projections P_1 and P_2 is a projection operator if and only if P_1 and P_2 are orthogonal.

24. Prove that the product P = P_1 P_2 of two projections P_1 and P_2 is a projection operator if and only if P_1 and P_2 commute.

2.3 Matrix Representation of Operators

2.3.1 Matrices

C^n is the set of all ordered n-tuples of complex numbers, which can be assembled as columns or as rows.

Let v be a vector in an n-dimensional vector space V with a basis e_i. Then

    v = Σ_{i=1}^n v_i e_i,

where v_1, ..., v_n are complex numbers called the components of the vector v. The column-vector is the ordered n-tuple

    (v_1, v_2, ..., v_n)^T

written as a column. We say that the column-vector represents the vector v in the basis e_i.

Let ⟨v| = (|v⟩)* be the linear functional dual to the vector |v⟩, and let ⟨e_i| be the dual basis. Then

    ⟨v| = Σ_{i=1}^n v̄_i ⟨e_i|.

The row-vector (also called a covector)

    (v̄_1, v̄_2, ..., v̄_n)

represents the dual vector ⟨v| in the same basis.

A set of nm complex numbers A_{ij}, i = 1, ..., n; j = 1, ..., m, arranged in an array that has m columns and n rows,

    A = (A_{ij}),    i = 1, ..., n,    j = 1, ..., m,

is called a rectangular n × m complex matrix.

The set of all complex n × m matrices is denoted by Mat(n, m; C).

The number A_{ij} (also called an entry of the matrix) appears in the i-th row and the j-th column of the matrix A.

Remark. Notice that the first index indicates the row and the second index indicates the column of the matrix.

The matrix all of whose entries are equal to zero is called the zero matrix.

Finally, we define the multiplication of column-vectors by matrices from the left and the multiplication of row-vectors by matrices from the right as follows. Each matrix defines a natural left action on a column-vector and a right action on a row-vector.

For each column-vector v and a matrix A = (A_{ij}) the column-vector u = Av has the components

    u_i = Σ_{j=1}^n A_{ij} v_j = A_{i1} v_1 + A_{i2} v_2 + ... + A_{in} v_n.
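As an illustration of the left action u = Av (an arbitrary example, not from the notes), one can compare the explicit sum u_i = Σ_j A_ij v_j with the built-in matrix product:

```python
import numpy as np

# Illustrative example: components of u = A v computed once by the
# explicit sum u_i = sum_j A_ij v_j and once by the built-in product.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
v = np.array([5.0, 6.0])

u_explicit = np.array([sum(A[i, j] * v[j] for j in range(2))
                       for i in range(2)])

assert np.allclose(u_explicit, A @ v)
assert np.allclose(u_explicit, [17.0, 39.0])
```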

Similarly, for a row-vector v^T the components of the row-vector u^T = v^T A are defined by

    u_i = Σ_{j=1}^n v_j A_{ji} = v_1 A_{1i} + v_2 A_{2i} + ... + v_n A_{ni}.

Let W be an m-dimensional vector space with a basis f_i and A : V → W be a linear transformation. Such an operator defines an m × n matrix (A_{ij}) by

    A e_i = Σ_{j=1}^m A_{ji} f_j,

or

    A_{ji} = (f_j, A e_i).

Thus the linear transformation A is represented by the matrix A_{ij}. The components of the vector Av are obtained by acting on the column-vector (v_i) from the left by the matrix (A_{ij}), that is,

    (Av)_i = Σ_{j=1}^n A_{ij} v_j.

Proposition. The vector space L(V, W) of linear transformations A : V → W is isomorphic to the space Mat(m × n, C) of m × n matrices.

Proposition. The rank of a linear transformation is equal to the rank of its matrix.

2.3.2 Operations on Matrices

The addition of matrices is defined entrywise,

    (A + B)_{ij} = A_{ij} + B_{ij},

and the multiplication by scalars entrywise,

    (cA)_{ij} = c A_{ij}.

An n × m matrix is called a square matrix if n = m.

The numbers A_{ii} are called the diagonal entries. Of course, there are n diagonal entries. The set of diagonal entries is called the diagonal of the matrix A.

The numbers A_{ij} with i ≠ j are called off-diagonal entries; there are n(n - 1) off-diagonal entries.

The numbers A_{ij} with i < j are called the upper triangular entries. The set of upper triangular entries is called the upper triangular part of the matrix A.

The numbers A_{ij} with i > j are called the lower triangular entries. The set of lower triangular entries is called the lower triangular part of the matrix A.

The number of upper triangular entries and of lower triangular entries is the same and is equal to n(n - 1)/2.

A matrix whose only non-zero entries are on the diagonal is called a diagonal matrix. For a diagonal matrix, A_{ij} = 0 if i ≠ j. The diagonal matrix with diagonal entries λ_1, λ_2, ..., λ_n is denoted by

    A = diag(λ_1, λ_2, ..., λ_n).

A diagonal matrix all of whose diagonal entries are equal to 1,

    I = diag(1, 1, ..., 1),

is called the identity matrix. The elements of the identity matrix are

    δ_{ij} = 1 if i = j,    δ_{ij} = 0 if i ≠ j.

A matrix A whose lower triangular part is zero, that is,

    A_{ij} = 0 if i > j,

is called an upper triangular matrix.

A matrix A whose upper triangular part is zero, that is,

    A_{ij} = 0 if i < j,

is called a lower triangular matrix.

The transpose of a matrix A whose ij-th entry is A_{ij} is the matrix A^T whose ij-th entry is A_{ji}. That is, A^T is obtained from A by switching the roles of the rows and the columns of A:

    (A^T)_{ij} = A_{ji}.

The Hermitian conjugate of a matrix A = (A_{ij}) is the matrix A* defined by

    (A*)_{ij} = Ā_{ji}.

A matrix A is called symmetric if A^T = A and anti-symmetric if A^T = -A.

A matrix A is called Hermitian if A* = A and anti-Hermitian if A* = -A.

An anti-Hermitian matrix has the form A = iH, where H is Hermitian.

A Hermitian matrix has the form H = A + iB, where A is a real symmetric and B is a real anti-symmetric matrix.
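The decomposition H = A + iB can be checked numerically; the Hermitian matrix below is an arbitrary example, not taken from the notes:

```python
import numpy as np

# Illustrative check: a Hermitian matrix H splits as H = A + iB with
# A real symmetric and B real anti-symmetric.
H = np.array([[2.0, 1 + 3j],
              [1 - 3j, 5.0]])
assert np.allclose(H, H.conj().T)   # H is Hermitian

A = H.real
B = H.imag
assert np.allclose(A, A.T)          # A is real symmetric
assert np.allclose(B, -B.T)         # B is real anti-symmetric
assert np.allclose(H, A + 1j * B)
```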

The number of independent entries of an anti-symmetric matrix is n(n - 1)/2.

The number of independent entries of a symmetric matrix is n(n + 1)/2.

The diagonal entries of a Hermitian matrix are real.

The number of independent real parameters of a Hermitian matrix is n^2.

Every square matrix A can be uniquely decomposed as the sum of its diagonal part A_D, its lower triangular part A_L and its upper triangular part A_U,

    A = A_D + A_L + A_U.

For an anti-symmetric matrix, A_U^T = -A_L and A_D = 0. For a symmetric matrix, A_U^T = A_L.

Every square matrix A can be uniquely decomposed as the sum of its symmetric part A_S and its anti-symmetric part A_A,

    A = A_S + A_A,    where    A_S = (1/2)(A + A^T),    A_A = (1/2)(A - A^T).

The product of square matrices is defined as follows. The ij-th entry of the product C = AB of two matrices A and B is

    C_{ij} = Σ_{k=1}^n A_{ik} B_{kj} = A_{i1} B_{1j} + A_{i2} B_{2j} + ... + A_{in} B_{nj}.

That is, one multiplies the i-th row of the matrix A by the j-th column of the matrix B.

Theorem. The product of matrices is associative: for any matrices A, B, C,

    (AB)C = A(BC).

Theorem. For any two matrices A and B,

    (AB)^T = B^T A^T,    (AB)* = B* A*.
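A quick numerical check of (AB)^T = B^T A^T and of the symmetric/anti-symmetric split; the random matrices are purely illustrative:

```python
import numpy as np

# Illustrative check on random matrices: the transpose of a product
# reverses the factors, and A splits uniquely into symmetric and
# anti-symmetric parts.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

assert np.allclose((A @ B).T, B.T @ A.T)

A_S = 0.5 * (A + A.T)    # symmetric part
A_A = 0.5 * (A - A.T)    # anti-symmetric part
assert np.allclose(A, A_S + A_A)
assert np.allclose(A_S, A_S.T)
assert np.allclose(A_A, -A_A.T)
```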

2.3.3 Inverse Matrix

A matrix A is called invertible if there is another matrix A^{-1} such that

    A A^{-1} = A^{-1} A = I.

The matrix A^{-1} is called the inverse of A.

Theorem. For any two invertible matrices A and B,

    (AB)^{-1} = B^{-1} A^{-1},    and    (A^{-1})^T = (A^T)^{-1}.

A matrix A is called orthogonal if

    A^T A = A A^T = I,

which means A^T = A^{-1}.

A matrix A is called unitary if

    A* A = A A* = I,

which means A* = A^{-1}.

Every unitary matrix has the form

    U = exp(iH),

where H is Hermitian.

A similarity transformation of a matrix A is a map

    A → U A U^{-1},

where U is a given invertible matrix.

The similarity transformation of a function of a matrix is equal to the function of the similar matrix:

    U f(A) U^{-1} = f(U A U^{-1}).
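The representation U = exp(iH) can be illustrated by exponentiating a Hermitian matrix through its eigendecomposition H = V diag(w) V*; the matrices below are arbitrary examples, and only numpy is assumed:

```python
import numpy as np

# Illustrative check of U = exp(iH): exponentiate an arbitrary Hermitian
# matrix through its eigendecomposition and verify that U is unitary;
# also check (AB)^{-1} = B^{-1} A^{-1} on invertible matrices.
H = np.array([[1.0, 2 - 1j],
              [2 + 1j, 3.0]])
w, V = np.linalg.eigh(H)                      # w is real since H is Hermitian
U = V @ np.diag(np.exp(1j * w)) @ V.conj().T  # U = exp(iH)

assert np.allclose(U.conj().T @ U, np.eye(2))   # U is unitary
assert np.isclose(abs(np.linalg.det(U)), 1.0)

A = np.array([[2.0, 1.0], [1.0, 1.0]])
B = np.array([[1.0, 3.0], [0.0, 1.0]])
assert np.allclose(np.linalg.inv(A @ B),
                   np.linalg.inv(B) @ np.linalg.inv(A))
```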

2.3.4 Trace

The trace is a map tr : Mat(n, C) → C that assigns to each matrix A = (A_{ij}) a complex number tr A equal to the sum of the diagonal elements of the matrix,

    tr A = Σ_{k=1}^n A_{kk}.

Theorem. The trace has the properties

    tr(AB) = tr(BA),    tr A^T = tr A,    tr A* = (tr A)* (the complex conjugate).

Obviously, the trace of an anti-symmetric matrix is equal to zero.

The trace is invariant under a similarity transformation.

A natural inner product on the space of matrices is defined by

    (A, B) = tr(A* B).

2.3.5 Determinant

Consider the set Z_n = {1, 2, ..., n} of the first n integers. A permutation ϕ of the set Z_n is an ordered n-tuple (ϕ(1), ..., ϕ(n)) of these numbers. That is, a permutation is a bijective (one-to-one and onto) function ϕ : Z_n → Z_n that assigns to each number i from the set Z_n another number ϕ(i) from this set.

An elementary permutation is a permutation that exchanges the order of only two numbers.
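The trace identities, and the positivity of (A, A) = tr(A* A), can be verified on random complex matrices (illustrative only, not from the notes):

```python
import numpy as np

# Illustrative check of the trace identities on random complex matrices.
rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

assert np.isclose(np.trace(A @ B), np.trace(B @ A))        # tr(AB) = tr(BA)
assert np.isclose(np.trace(A.conj().T), np.conj(np.trace(A)))

# (A, A) = tr(A* A) is real and positive, as expected of an inner product
ip = np.trace(A.conj().T @ A)
assert np.isclose(ip.imag, 0.0) and ip.real > 0
```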

Every permutation can be realized as a product (or a composition) of elementary permutations. A permutation that can be realized by an even number of elementary permutations is called an even permutation. A permutation that can be realized by an odd number of elementary permutations is called an odd permutation.

Proposition. The parity of a permutation does not depend on its representation as a product of elementary permutations. That is, each representation of an even permutation has an even number of elementary permutations, and similarly for odd permutations.

The sign of a permutation ϕ, denoted by sign(ϕ) (or simply (-1)^ϕ), is defined by

    sign(ϕ) = (-1)^ϕ = +1 if ϕ is even,    -1 if ϕ is odd.

The set of all permutations of n numbers is denoted by S_n.

Theorem. The cardinality of this set, that is, the number of different permutations, is |S_n| = n!.

The determinant is a map det : Mat(n, C) → C that assigns to each matrix A = (A_{ij}) a complex number det A defined by

    det A = Σ_{ϕ ∈ S_n} sign(ϕ) A_{1ϕ(1)} ··· A_{nϕ(n)},

where the summation goes over all n! permutations.

The most important properties of the determinant are listed below:

Theorem.

1. The determinant of the product of matrices is equal to the product of the determinants:

    det(AB) = det A det B.

2. The determinants of a matrix A and of its transpose A^T are equal:

    det A^T = det A.
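The permutation formula for the determinant can be implemented directly. This sketch (illustrative, not part of the notes) computes the sign of a permutation by counting inversions, which gives the same parity as counting elementary transpositions, and compares the result with numpy:

```python
import itertools
import math

import numpy as np

def sign(perm):
    """Sign of a permutation, via the parity of its number of inversions."""
    inv = sum(1 for i in range(len(perm))
                for j in range(i + 1, len(perm))
                if perm[i] > perm[j])
    return -1 if inv % 2 else 1

def det_by_permutations(A):
    """det A = sum over phi in S_n of sign(phi) A[0,phi(0)] ... A[n-1,phi(n-1)]."""
    n = A.shape[0]
    return sum(sign(p) * math.prod(A[i, p[i]] for i in range(n))
               for p in itertools.permutations(range(n)))

A = np.array([[1.0, 2.0, 3.0],
              [0.0, 4.0, 5.0],
              [1.0, 0.0, 6.0]])
assert np.isclose(det_by_permutations(A), np.linalg.det(A))
```

The sum has n! terms, so this is only practical for small n; it is the definition, not an efficient algorithm.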

3. The determinant of the Hermitian conjugate matrix is

    det A* = (det A)* (the complex conjugate).

4. The determinant of the inverse A^{-1} of an invertible matrix A is equal to the inverse of the determinant of A:

    det A^{-1} = (det A)^{-1}.

5. A matrix is invertible if and only if its determinant is non-zero.

The determinant is invariant under a similarity transformation.

The set of complex invertible matrices (with non-zero determinant) is denoted by GL(n, C).

A matrix with unit determinant is called unimodular. The set of complex matrices with unit determinant is denoted by SL(n, C).

The set of complex unitary matrices is denoted by U(n). The set of complex unitary matrices with unit determinant is denoted by SU(n).

The set of real orthogonal matrices is denoted by O(n). An orthogonal matrix with unit determinant (a unimodular orthogonal matrix) is called a proper orthogonal matrix or just a rotation. The set of real orthogonal matrices with unit determinant is denoted by SO(n).

Theorem. The determinant of an orthogonal matrix is equal to either +1 or -1.

Theorem. The determinant of a unitary matrix is a complex number of modulus 1.

A set G of invertible matrices forms a group if it is closed under taking the inverse and under matrix multiplication, that is, if the inverse A^{-1} of any matrix A in G belongs to the set G and the product AB of any two matrices A and B in G belongs to G.
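Both determinant theorems can be checked numerically. Here the QR factorization is used only as a convenient way to manufacture orthogonal and unitary matrices from random ones (an illustration, not part of the notes):

```python
import numpy as np

# Illustrative check: an orthogonal matrix has determinant +1 or -1,
# and a unitary matrix has determinant of modulus 1.
rng = np.random.default_rng(2)

Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
assert np.allclose(Q.T @ Q, np.eye(4))          # Q is orthogonal
assert np.isclose(abs(np.linalg.det(Q)), 1.0)   # det Q = +1 or -1

M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
U, _ = np.linalg.qr(M)
assert np.allclose(U.conj().T @ U, np.eye(4))   # U is unitary
assert np.isclose(abs(np.linalg.det(U)), 1.0)   # |det U| = 1
```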

2.3.6 Exercises

1. Show that tr [A, B] = 0.

2. Show that the product of invertible matrices is an invertible matrix.

3. Show that the product of matrices with positive determinant is a matrix with positive determinant.

4. Show that the inverse of a matrix with positive determinant is a matrix with positive determinant.

5. Show that GL(n, R) forms a group (called the general linear group).

6. Show that GL+(n, R) is a group (called the proper general linear group).

7. Show that the inverse of a matrix with negative determinant is a matrix with negative determinant.

8. Show that: a) the product of an even number of matrices with negative determinant is a matrix with positive determinant, b) the product of an odd number of matrices with negative determinant is a matrix with negative determinant.

9. Show that the product of matrices with unit determinant is a matrix with unit determinant.

10. Show that the inverse of a matrix with unit determinant is a matrix with unit determinant.

11. Show that SL(n, R) forms a group (called the special linear group or the unimodular group).

12. Show that the product of orthogonal matrices is an orthogonal matrix.

13. Show that the inverse of an orthogonal matrix is an orthogonal matrix.

14. Show that O(n) forms a group (called the orthogonal group).

15. Show that orthogonal matrices have determinant equal to either +1 or -1.

16. Show that the product of orthogonal matrices with unit determinant is an orthogonal matrix with unit determinant.

17. Show that the inverse of an orthogonal matrix with unit determinant is an orthogonal matrix with unit determinant.

18. Show that SO(n) forms a group (called the proper orthogonal group or the rotation group).

2.4 Spectral Decomposition

2.4.1 Direct Sums

Let U and W be vector subspaces of a vector space V. Then the sum of the vector spaces U and W is the space of all sums of vectors from U and W, that is,

    U + W = {v ∈ V | v = u + w, u ∈ U, w ∈ W}.

If the only vector common to both U and W is the zero vector, then the sum U + W is called the direct sum and is denoted by U ⊕ W.

Theorem. Let U and W be subspaces of a vector space V. Then V = U ⊕ W if and only if every vector v ∈ V can be written uniquely as the sum v = u + w with u ∈ U, w ∈ W.

Proof.

Theorem. Let V = U ⊕ W. Then

    dim V = dim U + dim W.

Proof.

The direct sum can be naturally generalized to several subspaces, so that

    V = ⊕_{i=1}^r U_i.

To such a decomposition one naturally associates orthogonal complementary projections P_i onto the subspaces U_i such that

    P_i^2 = P_i,    P_i P_j = 0 if i ≠ j,    Σ_{i=1}^r P_i = I.

A complete orthogonal system of projections defines the orthogonal decomposition of the vector space

    V = U_1 ⊕ ··· ⊕ U_r,

where U_i is the subspace the projection P_i projects onto.
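A complete orthogonal system of projections can be illustrated in R^3 (a hypothetical example, not from the notes): P_1 projects onto span{e_1, e_2} and P_2 onto span{e_3}.

```python
import numpy as np

# Hypothetical example: a complete orthogonal system of two projections
# in R^3, decomposing the space as span{e1, e2} + span{e3}.
e1, e2, e3 = np.eye(3)
P1 = np.outer(e1, e1) + np.outer(e2, e2)
P2 = np.outer(e3, e3)

for P in (P1, P2):
    assert np.allclose(P @ P, P)        # idempotent: P_i^2 = P_i
    assert np.allclose(P, P.T)          # Hermitian (real symmetric here)
assert np.allclose(P1 @ P2, 0.0)        # mutually orthogonal: P_1 P_2 = 0
assert np.allclose(P1 + P2, np.eye(3))  # completeness: P_1 + P_2 = I

# dim U_i = rank P_i
assert np.linalg.matrix_rank(P1) == 2
assert np.linalg.matrix_rank(P2) == 1
```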

Theorem.

1. The dimensions of the subspaces U_i are equal to the ranks of the projections P_i:

    dim U_i = rank P_i.

2. The sum of the dimensions of the vector subspaces U_i equals the dimension of the vector space V:

    Σ_{i=1}^r dim U_i = dim U_1 + ··· + dim U_r = dim V.

Let M be a subspace of a vector space V. Then the orthogonal complement of M is the vector space M^⊥ of all vectors in V orthogonal to all vectors in M,

    M^⊥ = {v ∈ V | (v, u) = 0 for all u ∈ M}.

Exercise. Show that M^⊥ is a vector space.

Theorem. Every vector subspace M of V defines the orthogonal decomposition

    V = M ⊕ M^⊥

such that the corresponding projection operators P and P^⊥ are Hermitian.

Remark. The projections P_i are Hermitian only in inner product spaces when the subspaces U_i are mutually orthogonal.

2.4.2 Invariant Subspaces

Let V be a finite-dimensional vector space, M be its subspace and P be the projection onto the subspace M. The subspace M is an invariant subspace of an operator A if it is closed under the action of this operator, that is, A(M) ⊆ M. An invariant subspace M is called a proper invariant subspace if M ≠ V.

Theorem. Let v be a vector in an n-dimensional vector space V and A be an operator on V. Then

    M = span{v, Av, ..., A^{n-1} v}


More information

Linear Algebra Lecture Notes-II

Linear Algebra Lecture Notes-II Linear Algebra Lecture Notes-II Vikas Bist Department of Mathematics Panjab University, Chandigarh-64 email: bistvikas@gmail.com Last revised on March 5, 8 This text is based on the lectures delivered

More information

ELEMENTARY LINEAR ALGEBRA WITH APPLICATIONS. 1. Linear Equations and Matrices

ELEMENTARY LINEAR ALGEBRA WITH APPLICATIONS. 1. Linear Equations and Matrices ELEMENTARY LINEAR ALGEBRA WITH APPLICATIONS KOLMAN & HILL NOTES BY OTTO MUTZBAUER 11 Systems of Linear Equations 1 Linear Equations and Matrices Numbers in our context are either real numbers or complex

More information

Mathematical Methods wk 2: Linear Operators

Mathematical Methods wk 2: Linear Operators John Magorrian, magog@thphysoxacuk These are work-in-progress notes for the second-year course on mathematical methods The most up-to-date version is available from http://www-thphysphysicsoxacuk/people/johnmagorrian/mm

More information

Matrix Lie groups. and their Lie algebras. Mahmood Alaghmandan. A project in fulfillment of the requirement for the Lie algebra course

Matrix Lie groups. and their Lie algebras. Mahmood Alaghmandan. A project in fulfillment of the requirement for the Lie algebra course Matrix Lie groups and their Lie algebras Mahmood Alaghmandan A project in fulfillment of the requirement for the Lie algebra course Department of Mathematics and Statistics University of Saskatchewan March

More information

Linear algebra 2. Yoav Zemel. March 1, 2012

Linear algebra 2. Yoav Zemel. March 1, 2012 Linear algebra 2 Yoav Zemel March 1, 2012 These notes were written by Yoav Zemel. The lecturer, Shmuel Berger, should not be held responsible for any mistake. Any comments are welcome at zamsh7@gmail.com.

More information

MATH 235. Final ANSWERS May 5, 2015

MATH 235. Final ANSWERS May 5, 2015 MATH 235 Final ANSWERS May 5, 25. ( points) Fix positive integers m, n and consider the vector space V of all m n matrices with entries in the real numbers R. (a) Find the dimension of V and prove your

More information

1. Foundations of Numerics from Advanced Mathematics. Linear Algebra

1. Foundations of Numerics from Advanced Mathematics. Linear Algebra Foundations of Numerics from Advanced Mathematics Linear Algebra Linear Algebra, October 23, 22 Linear Algebra Mathematical Structures a mathematical structure consists of one or several sets and one or

More information

Linear Algebra 2 Spectral Notes

Linear Algebra 2 Spectral Notes Linear Algebra 2 Spectral Notes In what follows, V is an inner product vector space over F, where F = R or C. We will use results seen so far; in particular that every linear operator T L(V ) has a complex

More information

Representations of Matrix Lie Algebras

Representations of Matrix Lie Algebras Representations of Matrix Lie Algebras Alex Turzillo REU Apprentice Program, University of Chicago aturzillo@uchicago.edu August 00 Abstract Building upon the concepts of the matrix Lie group and the matrix

More information

LINEAR ALGEBRA SUMMARY SHEET.

LINEAR ALGEBRA SUMMARY SHEET. LINEAR ALGEBRA SUMMARY SHEET RADON ROSBOROUGH https://intuitiveexplanationscom/linear-algebra-summary-sheet/ This document is a concise collection of many of the important theorems of linear algebra, organized

More information

(f + g)(s) = f(s) + g(s) for f, g V, s S (cf)(s) = cf(s) for c F, f V, s S

(f + g)(s) = f(s) + g(s) for f, g V, s S (cf)(s) = cf(s) for c F, f V, s S 1 Vector spaces 1.1 Definition (Vector space) Let V be a set with a binary operation +, F a field, and (c, v) cv be a mapping from F V into V. Then V is called a vector space over F (or a linear space

More information

Linear Algebra March 16, 2019

Linear Algebra March 16, 2019 Linear Algebra March 16, 2019 2 Contents 0.1 Notation................................ 4 1 Systems of linear equations, and matrices 5 1.1 Systems of linear equations..................... 5 1.2 Augmented

More information

2.2. Show that U 0 is a vector space. For each α 0 in F, show by example that U α does not satisfy closure.

2.2. Show that U 0 is a vector space. For each α 0 in F, show by example that U α does not satisfy closure. Hints for Exercises 1.3. This diagram says that f α = β g. I will prove f injective g injective. You should show g injective f injective. Assume f is injective. Now suppose g(x) = g(y) for some x, y A.

More information

Linear Algebra Review

Linear Algebra Review Chapter 1 Linear Algebra Review It is assumed that you have had a course in linear algebra, and are familiar with matrix multiplication, eigenvectors, etc. I will review some of these terms here, but quite

More information

Matrix Theory. A.Holst, V.Ufnarovski

Matrix Theory. A.Holst, V.Ufnarovski Matrix Theory AHolst, VUfnarovski 55 HINTS AND ANSWERS 9 55 Hints and answers There are two different approaches In the first one write A as a block of rows and note that in B = E ij A all rows different

More information

Analysis-3 lecture schemes

Analysis-3 lecture schemes Analysis-3 lecture schemes (with Homeworks) 1 Csörgő István November, 2015 1 A jegyzet az ELTE Informatikai Kar 2015. évi Jegyzetpályázatának támogatásával készült Contents 1. Lesson 1 4 1.1. The Space

More information

Notes on Lie Algebras

Notes on Lie Algebras NEW MEXICO TECH (October 23, 2010) DRAFT Notes on Lie Algebras Ivan G. Avramidi Department of Mathematics New Mexico Institute of Mining and Technology Socorro, NM 87801, USA E-mail: iavramid@nmt.edu 1

More information

Part IA. Vectors and Matrices. Year

Part IA. Vectors and Matrices. Year Part IA Vectors and Matrices Year 2018 2017 2016 2015 2014 2013 2012 2011 2010 2009 2008 2018 Paper 1, Section I 1C Vectors and Matrices For z, w C define the principal value of z w. State de Moivre s

More information

Math 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces.

Math 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces. Math 350 Fall 2011 Notes about inner product spaces In this notes we state and prove some important properties of inner product spaces. First, recall the dot product on R n : if x, y R n, say x = (x 1,...,

More information

Review problems for MA 54, Fall 2004.

Review problems for MA 54, Fall 2004. Review problems for MA 54, Fall 2004. Below are the review problems for the final. They are mostly homework problems, or very similar. If you are comfortable doing these problems, you should be fine on

More information

Math Camp Lecture 4: Linear Algebra. Xiao Yu Wang. Aug 2010 MIT. Xiao Yu Wang (MIT) Math Camp /10 1 / 88

Math Camp Lecture 4: Linear Algebra. Xiao Yu Wang. Aug 2010 MIT. Xiao Yu Wang (MIT) Math Camp /10 1 / 88 Math Camp 2010 Lecture 4: Linear Algebra Xiao Yu Wang MIT Aug 2010 Xiao Yu Wang (MIT) Math Camp 2010 08/10 1 / 88 Linear Algebra Game Plan Vector Spaces Linear Transformations and Matrices Determinant

More information

(6) For any finite dimensional real inner product space V there is an involutive automorphism α : C (V) C (V) such that α(v) = v for all v V.

(6) For any finite dimensional real inner product space V there is an involutive automorphism α : C (V) C (V) such that α(v) = v for all v V. 1 Clifford algebras and Lie triple systems Section 1 Preliminaries Good general references for this section are [LM, chapter 1] and [FH, pp. 299-315]. We are especially indebted to D. Shapiro [S2] who

More information

Algebra II. Paulius Drungilas and Jonas Jankauskas

Algebra II. Paulius Drungilas and Jonas Jankauskas Algebra II Paulius Drungilas and Jonas Jankauskas Contents 1. Quadratic forms 3 What is quadratic form? 3 Change of variables. 3 Equivalence of quadratic forms. 4 Canonical form. 4 Normal form. 7 Positive

More information

Math 113 Solutions: Homework 8. November 28, 2007

Math 113 Solutions: Homework 8. November 28, 2007 Math 113 Solutions: Homework 8 November 28, 27 3) Define c j = ja j, d j = 1 j b j, for j = 1, 2,, n Let c = c 1 c 2 c n and d = vectors in R n Then the Cauchy-Schwarz inequality on Euclidean n-space gives

More information

A Brief Outline of Math 355

A Brief Outline of Math 355 A Brief Outline of Math 355 Lecture 1 The geometry of linear equations; elimination with matrices A system of m linear equations with n unknowns can be thought of geometrically as m hyperplanes intersecting

More information

REPRESENTATION THEORY WEEK 7

REPRESENTATION THEORY WEEK 7 REPRESENTATION THEORY WEEK 7 1. Characters of L k and S n A character of an irreducible representation of L k is a polynomial function constant on every conjugacy class. Since the set of diagonalizable

More information

Lecture 2: Linear Algebra Review

Lecture 2: Linear Algebra Review EE 227A: Convex Optimization and Applications January 19 Lecture 2: Linear Algebra Review Lecturer: Mert Pilanci Reading assignment: Appendix C of BV. Sections 2-6 of the web textbook 1 2.1 Vectors 2.1.1

More information

4.2. ORTHOGONALITY 161

4.2. ORTHOGONALITY 161 4.2. ORTHOGONALITY 161 Definition 4.2.9 An affine space (E, E ) is a Euclidean affine space iff its underlying vector space E is a Euclidean vector space. Given any two points a, b E, we define the distance

More information

Eigenvalues and Eigenvectors

Eigenvalues and Eigenvectors /88 Chia-Ping Chen Department of Computer Science and Engineering National Sun Yat-sen University Linear Algebra Eigenvalue Problem /88 Eigenvalue Equation By definition, the eigenvalue equation for matrix

More information

Differential Topology Final Exam With Solutions

Differential Topology Final Exam With Solutions Differential Topology Final Exam With Solutions Instructor: W. D. Gillam Date: Friday, May 20, 2016, 13:00 (1) Let X be a subset of R n, Y a subset of R m. Give the definitions of... (a) smooth function

More information

Functional Analysis. James Emery. Edit: 8/7/15

Functional Analysis. James Emery. Edit: 8/7/15 Functional Analysis James Emery Edit: 8/7/15 Contents 0.1 Green s functions in Ordinary Differential Equations...... 2 0.2 Integral Equations........................ 2 0.2.1 Fredholm Equations...................

More information

I. Multiple Choice Questions (Answer any eight)

I. Multiple Choice Questions (Answer any eight) Name of the student : Roll No : CS65: Linear Algebra and Random Processes Exam - Course Instructor : Prashanth L.A. Date : Sep-24, 27 Duration : 5 minutes INSTRUCTIONS: The test will be evaluated ONLY

More information

1 Linear Algebra Problems

1 Linear Algebra Problems Linear Algebra Problems. Let A be the conjugate transpose of the complex matrix A; i.e., A = A t : A is said to be Hermitian if A = A; real symmetric if A is real and A t = A; skew-hermitian if A = A and

More information

MATH 583A REVIEW SESSION #1

MATH 583A REVIEW SESSION #1 MATH 583A REVIEW SESSION #1 BOJAN DURICKOVIC 1. Vector Spaces Very quick review of the basic linear algebra concepts (see any linear algebra textbook): (finite dimensional) vector space (or linear space),

More information

Spectral Theory, with an Introduction to Operator Means. William L. Green

Spectral Theory, with an Introduction to Operator Means. William L. Green Spectral Theory, with an Introduction to Operator Means William L. Green January 30, 2008 Contents Introduction............................... 1 Hilbert Space.............................. 4 Linear Maps

More information

Recall that any inner product space V has an associated norm defined by

Recall that any inner product space V has an associated norm defined by Hilbert Spaces Recall that any inner product space V has an associated norm defined by v = v v. Thus an inner product space can be viewed as a special kind of normed vector space. In particular every inner

More information

1 Last time: least-squares problems

1 Last time: least-squares problems MATH Linear algebra (Fall 07) Lecture Last time: least-squares problems Definition. If A is an m n matrix and b R m, then a least-squares solution to the linear system Ax = b is a vector x R n such that

More information

Index. Banach space 630 Basic Jordan block 378, 420

Index. Banach space 630 Basic Jordan block 378, 420 Index Absolute convergence 710 Absolute value 15, 20 Accumulation point 622, 690, 700 Adjoint classsical 192 of a linear operator 493, 673 of a matrix 183, 384 Algebra 227 Algebraic number 16 Algebraically

More information

OHSx XM511 Linear Algebra: Solutions to Online True/False Exercises

OHSx XM511 Linear Algebra: Solutions to Online True/False Exercises This document gives the solutions to all of the online exercises for OHSx XM511. The section ( ) numbers refer to the textbook. TYPE I are True/False. Answers are in square brackets [. Lecture 02 ( 1.1)

More information

Solution to Homework 1

Solution to Homework 1 Solution to Homework Sec 2 (a) Yes It is condition (VS 3) (b) No If x, y are both zero vectors Then by condition (VS 3) x = x + y = y (c) No Let e be the zero vector We have e = 2e (d) No It will be false

More information

MATH 20F: LINEAR ALGEBRA LECTURE B00 (T. KEMP)

MATH 20F: LINEAR ALGEBRA LECTURE B00 (T. KEMP) MATH 20F: LINEAR ALGEBRA LECTURE B00 (T KEMP) Definition 01 If T (x) = Ax is a linear transformation from R n to R m then Nul (T ) = {x R n : T (x) = 0} = Nul (A) Ran (T ) = {Ax R m : x R n } = {b R m

More information

Then x 1,..., x n is a basis as desired. Indeed, it suffices to verify that it spans V, since n = dim(v ). We may write any v V as r

Then x 1,..., x n is a basis as desired. Indeed, it suffices to verify that it spans V, since n = dim(v ). We may write any v V as r Practice final solutions. I did not include definitions which you can find in Axler or in the course notes. These solutions are on the terse side, but would be acceptable in the final. However, if you

More information

Math 121 Homework 5: Notes on Selected Problems

Math 121 Homework 5: Notes on Selected Problems Math 121 Homework 5: Notes on Selected Problems 12.1.2. Let M be a module over the integral domain R. (a) Assume that M has rank n and that x 1,..., x n is any maximal set of linearly independent elements

More information

THE EULER CHARACTERISTIC OF A LIE GROUP

THE EULER CHARACTERISTIC OF A LIE GROUP THE EULER CHARACTERISTIC OF A LIE GROUP JAY TAYLOR 1 Examples of Lie Groups The following is adapted from [2] We begin with the basic definition and some core examples Definition A Lie group is a smooth

More information

2. Every linear system with the same number of equations as unknowns has a unique solution.

2. Every linear system with the same number of equations as unknowns has a unique solution. 1. For matrices A, B, C, A + B = A + C if and only if A = B. 2. Every linear system with the same number of equations as unknowns has a unique solution. 3. Every linear system with the same number of equations

More information

MATH 304 Linear Algebra Lecture 34: Review for Test 2.

MATH 304 Linear Algebra Lecture 34: Review for Test 2. MATH 304 Linear Algebra Lecture 34: Review for Test 2. Topics for Test 2 Linear transformations (Leon 4.1 4.3) Matrix transformations Matrix of a linear mapping Similar matrices Orthogonality (Leon 5.1

More information

1 9/5 Matrices, vectors, and their applications

1 9/5 Matrices, vectors, and their applications 1 9/5 Matrices, vectors, and their applications Algebra: study of objects and operations on them. Linear algebra: object: matrices and vectors. operations: addition, multiplication etc. Algorithms/Geometric

More information

Boolean Inner-Product Spaces and Boolean Matrices

Boolean Inner-Product Spaces and Boolean Matrices Boolean Inner-Product Spaces and Boolean Matrices Stan Gudder Department of Mathematics, University of Denver, Denver CO 80208 Frédéric Latrémolière Department of Mathematics, University of Denver, Denver

More information

Knowledge Discovery and Data Mining 1 (VO) ( )

Knowledge Discovery and Data Mining 1 (VO) ( ) Knowledge Discovery and Data Mining 1 (VO) (707.003) Review of Linear Algebra Denis Helic KTI, TU Graz Oct 9, 2014 Denis Helic (KTI, TU Graz) KDDM1 Oct 9, 2014 1 / 74 Big picture: KDDM Probability Theory

More information