Mathematical Methods wk 1: Vectors


John Magorrian, magog@thphys.ox.ac.uk

These are work-in-progress notes for the second-year course on mathematical methods. The most up-to-date version is available from http://www-thphys.physics.ox.ac.uk/people/johnmagorrian/mm.

1 Linear vector spaces

A linear vector space (or just vector space for short) consists of:
  a set V of vectors (the elements of which we'll usually denote by |a⟩, |b⟩, ..., or simply a, b, ...);
  a set F of scalars (denoted by α, β, a, b, ...);
  a rule for adding two vectors to produce another vector;
  a rule for multiplying vectors by scalars,
that together satisfy ten conditions. The four most interesting conditions are the following.

(i) The set V of vectors is closed under addition, i.e.,
      |a⟩ + |b⟩ ∈ V for all |a⟩, |b⟩ ∈ V.                                  (1.1)
(ii) V is also closed under multiplication by scalars, i.e.,
      α|a⟩ ∈ V for all |a⟩ ∈ V and α ∈ F.                                  (1.2)
(iii) V contains a special zero vector |0⟩ ∈ V, for which
      |a⟩ + |0⟩ = |a⟩ for all |a⟩ ∈ V.                                     (1.3)
(iv) Every vector has an additive inverse: for all |a⟩ ∈ V there is some |−a⟩ ∈ V for which
      |a⟩ + |−a⟩ = |0⟩.                                                    (1.4)

The other six conditions are more technical. The addition operation must be commutative and associative:
      |a⟩ + |b⟩ = |b⟩ + |a⟩,                                               (1.5)
      (|a⟩ + |b⟩) + |c⟩ = |a⟩ + (|b⟩ + |c⟩).                               (1.6)
The multiplication-by-scalar operation must be distributive with respect to vector and scalar addition, consistent with the operation of multiplying two scalars, and must satisfy the multiplicative identity:
      α(|a⟩ + |b⟩) = α|a⟩ + α|b⟩,                                          (1.7)
      (α + β)|a⟩ = α|a⟩ + β|a⟩,                                            (1.8)
      α(β|a⟩) = (αβ)|a⟩,                                                   (1.9)
      1|a⟩ = |a⟩.                                                          (1.10)

For our purposes the set F will usually be either the set R of all real numbers (in which case we have a real vector space) or the set C of all complex numbers (giving a complex vector space).

1.1 Basic ideas

In a raw vector space there is no notion of the length of a vector or the angle between two vectors. Nevertheless, there are many important ideas that follow by applying the basic rules (1.1–1.10) above to linear combinations of vectors, i.e., weighted sums such as
      α_1|v_1⟩ + α_2|v_2⟩ + ⋯.                                             (1.11)

A set of vectors {|v_1⟩, ..., |v_n⟩} is said to be linearly independent (abbreviated LI) if the only solution to the equation
      α_1|v_1⟩ + α_2|v_2⟩ + ⋯ + α_n|v_n⟩ = |0⟩                             (1.12)
is that all scalar coefficients α_i = 0. Otherwise the set is linearly dependent. The dimension of a vector space is the maximum number of LI vectors in the space. The span of a list of vectors |v_1⟩, ..., |v_m⟩ is the set of all possible linear combinations {α_1|v_1⟩ + ⋯ + α_m|v_m⟩ : α_1, ..., α_m ∈ F}.

A list |e_1⟩, |e_2⟩, ..., |e_n⟩ of vectors forms a basis for the space V if the elements of the list are LI and span V. Then any |a⟩ ∈ V can be expressed as
      |a⟩ = Σ_{i=1}^n a_i|e_i⟩,                                            (1.13)
and the coefficients (a_1, ..., a_n) for which (1.13) holds are known as the components or coordinates of |a⟩ with respect to the basis vectors |e_i⟩.

Claim: Given a basis |e_1⟩, ..., |e_n⟩, the coordinates a_i of |a⟩ are unique.
Proof: Suppose that there is another set of coordinates a′_i. Then we can express |a⟩ in two ways:
      |a⟩ = a_1|e_1⟩ + a_2|e_2⟩ + ⋯ + a_n|e_n⟩
          = a′_1|e_1⟩ + a′_2|e_2⟩ + ⋯ + a′_n|e_n⟩.                         (1.14)
Subtracting,
      |0⟩ = (a_1 − a′_1)|e_1⟩ + (a_2 − a′_2)|e_2⟩ + ⋯ + (a_n − a′_n)|e_n⟩. (1.15)
But the |e_i⟩ are LI. Therefore the only way of satisfying this equation is if all a_i − a′_i = 0. So a_i = a′_i: the coordinates are unique.

A subset W ⊆ V is a subspace of V if it satisfies conditions (1.1–1.4). That is: it must be closed under addition of vectors and multiplication by scalars; it must contain the zero vector; and the additive inverse of each element must be included. Conditions (1.5–1.10) are automatically satisfied because they depend only on the definition of the addition and multiplication operations.

1.2 Examples

Example: three-dimensional column vectors with real coefficients. The set of column vectors (x_1, x_2, x_3)^T with x_i ∈ R forms a real vector space under the usual rules of vector addition and multiplication by scalars. This space is usually known as R³. To confirm that this really is a vector space, let's check the conditions (1.1–1.10). The usual rules of vector algebra satisfy conditions (1.5–1.10). For the conditions (1.1–1.4) note that:
(i) For any a = (a_1, a_2, a_3)^T and b = (b_1, b_2, b_3)^T in R³, the sum a + b = (a_1 + b_1, a_2 + b_2, a_3 + b_3)^T ∈ R³.
(ii) Multiplying any vector a by a real scalar α gives αa = (αa_1, αa_2, αa_3)^T ∈ R³.

(iii) There is a zero element, (0, 0, 0)^T ∈ R³.
(iv) Each vector a = (a_1, a_2, a_3)^T has an additive inverse −a = (−a_1, −a_2, −a_3)^T ∈ R³.
So, all conditions (1.1–1.10) are satisfied. Here are two possible bases for this space:
      (1, 0, 0)^T, (0, 1, 0)^T, (0, 0, 1)^T,   or   (π, 0, 0)^T, (0, 2, 0)^T, (0, 6, 6)^T.   (1.16)
Each of these basis sets has three LI elements that span R³. Therefore the dimension of R³ is 3.

Exercise: The set of all 3-dimensional column vectors with real coefficients cannot form a complex vector space. Why not? (Which of the conditions is broken?)

Example: Rⁿ and Cⁿ. Similarly, the set of all n-dimensional column vectors with real (complex) elements forms a real (complex) vector space under the usual rules of vector addition and multiplication by scalars.

Example: arrows on a plane. The set of all arrows on a plane, with the obvious definitions of addition of arrows and multiplication by scalars, forms a real two-dimensional vector space. There is a natural, invertible mapping between elements x ∈ R² and arrows for which α_1 x_1 + α_2 x_2 maps to α_1 arrow_1 + α_2 arrow_2 whenever x_1 maps to arrow_1 and x_2 maps to arrow_2. An invertible mapping between two vector spaces that preserves the operations of addition and multiplication is known as an isomorphism.

Example: perverse arrows in the plane. Let us introduce a new vector space of arrows in the plane that uses the same definition of vector addition as in the previous example, but in which multiplication by a complex scalar α is defined by increasing the length of the arrow by a factor |α| and rotating it anticlockwise by an angle arg α. Any arrow in the plane can be represented as
      |r, φ⟩ = (r cos φ, r sin φ)^T.                                       (1.17)
Our new multiplication rule is then
      α|r, φ⟩ ≡ | |α|r, φ + arg α⟩,                                        (1.18)
which is another member of V for all choices of α, r, φ. The addition operation is unchanged, so this new definition satisfies the first four conditions (1.1–1.4). Let's check one of the more technical conditions:
      α(β|r, φ⟩) = α| |β|r, φ + arg β⟩
                 = | |α||β|r, φ + arg β + arg α⟩
                 = | |β||α|r, φ + arg α + arg β⟩
                 = β| |α|r, φ + arg α⟩
                 = β(α|r, φ⟩).                                             (1.19)
Since |αβ| = |α||β| and arg(αβ) = arg α + arg β, the middle line is just (αβ)|r, φ⟩; therefore our new multiplication rule satisfies condition (1.9).

Exercise: Show that all of the conditions (1.5–1.10) are satisfied. What is the dimension of this space? Find another vector space to which it is isomorphic.

Example: matrices. The set of all m × n matrices with complex coefficients forms a complex vector space with dimension mn. The most natural basis is the set of mn matrices that each have a single entry equal to 1 and all other entries 0:
      E^(11), E^(12), ..., E^(mn),   where (E^(pq))_ij = δ_ip δ_jq.        (1.20)

Example: n-th-order polynomials. The set of all n-th-order polynomials in a complex variable z forms an (n + 1)-dimensional complex vector space. A natural basis is the set of monomials {1, z, z², ..., zⁿ}.

Example: trigonometric polynomials. Given n distinct (mod 2π) complex constants λ_1, ..., λ_n, the set of all linear combinations of e^{iλ_1 z}, ..., e^{iλ_n z} forms an n-dimensional complex vector space.

Example: functions for which the integral ∫ |f|² exists. The set L²(a, b) of all complex-valued functions
      f : [a, b] → C                                                       (1.21)
for which the integral
      ∫_a^b dx |f(x)|²                                                     (1.22)
exists forms a complex vector space under the usual operations of addition of functions and multiplication of functions by scalars. This space has an infinite number of dimensions. We postpone the issue of identifying a suitable basis until §4.

1.3 Linear maps

A mapping A : V → W from one vector space V to another W is a linear map if it satisfies
      A(|v_1⟩ + |v_2⟩) = A|v_1⟩ + A|v_2⟩,   A(α|v⟩) = αA|v⟩,               (1.23)
for all |v_1⟩, |v_2⟩ ∈ V and scalars α ∈ F. A linear operator is the special case of a linear map from a vector space V to itself.

Now let n be the dimension of V and m the dimension of W. Choose any basis |e_1⟩, ..., |e_n⟩ for V and another, |e′_1⟩, ..., |e′_m⟩, for W. Any vector |v⟩ ∈ V can be expressed as |v⟩ = Σ_{j=1}^n a_j|e_j⟩. Using the properties (1.23) we have that the image of |v⟩ under the linear map A is
      A|v⟩ = Σ_{j=1}^n a_j (A|e_j⟩).                                       (1.24)
As this holds for any |v⟩ ∈ V, we see that the map A is completely determined by the images A|e_1⟩, ..., A|e_n⟩ of V's basis vectors. Each of these images A|e_j⟩ is a vector that lives in W and so can be expressed in terms of the basis |e′_1⟩, ..., |e′_m⟩ as
      A|e_j⟩ = Σ_{i=1}^m A_ij|e′_i⟩,                                       (1.25)
where A_ij is the i-th component in the |e′_1⟩, ..., |e′_m⟩ basis of the vector A|e_j⟩. Substituting this into (1.24),
      A|v⟩ = Σ_{i=1}^m Σ_{j=1}^n A_ij a_j |e′_i⟩.                          (1.26)
That is, a vector in V with components a_1, ..., a_n maps under A to another vector in W whose i-th component is given by Σ_{j=1}^n A_ij a_j. The values of the coefficients A_ij depend on the choice of basis for V and W.
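The fact that a linear map is completely determined by the images A|e_1⟩, ..., A|e_n⟩ of the basis vectors can be illustrated numerically. The following sketch (an addition of mine, not part of the notes; the particular map is hypothetical) stacks those images as the columns of the matrix A_ij, so that matrix–vector multiplication reproduces Σ_j A_ij a_j:

```python
import numpy as np

def matrix_of_map(images):
    """Build the matrix of a linear map V -> W from the images of V's
    basis vectors: column j holds A|e_j> expressed in W's basis."""
    return np.column_stack(images)

# Hypothetical example: the map R^2 -> R^3 sending
# e_1 -> (1, 0, 2) and e_2 -> (0, 1, -1).
A = matrix_of_map([np.array([1.0, 0.0, 2.0]),
                   np.array([0.0, 1.0, -1.0])])

v = np.array([3.0, 4.0])   # components a_1, ..., a_n of |v>
image = A @ v              # components sum_j A_ij a_j of A|v>
print(image)               # [3. 4. 2.]
```

Changing either basis changes the numbers A_ij, but not the map itself.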

1.4 Representation of vectors and linear maps by matrices

Given a basis |e_1⟩, ..., |e_n⟩ for an n-dimensional vector space, let us represent
      e_1 = (1, 0, ..., 0)^T,  e_2 = (0, 1, ..., 0)^T,  ...,  e_n = (0, 0, ..., 1)^T.  (1.27)
Then any vector |v⟩ = a_1|e_1⟩ + a_2|e_2⟩ + ⋯ + a_n|e_n⟩ can be expressed as the column vector
      v = (a_1, a_2, ..., a_n)^T.                                          (1.28)
Given two vector spaces, V with dimension n and W with dimension m, any linear map A from V to W can be represented as the m × n matrix
      A = ( A_11 A_12 ... A_1n
            A_21 A_22 ... A_2n
            ...
            A_m1 A_m2 ... A_mn ),                                          (1.29)
with
      A v = ( A_11 a_1 + A_12 a_2 + ⋯ + A_1n a_n
              A_21 a_1 + A_22 a_2 + ⋯ + A_2n a_n
              ...
              A_m1 a_1 + A_m2 a_2 + ⋯ + A_mn a_n )                         (1.30)
being given by the familiar rules of matrix multiplication. Linear operators (that is, linear maps of a vector space to itself) are represented by square n × n matrices.

Further reading: Linear vector spaces are introduced in RHB §8.1 and linear maps in RHB §8.2. DK §II is another good starting point. Most maths-for-physicists books introduce inner products (see §2 below) at the same time as vector spaces. Nevertheless, pausing to work out the consequences of the unadorned conditions (1.1–1.10) is a supremely useful introduction to mathematical reasoning: many of the statements that we take as self-evident from our experience in manipulating vectors and matrices are not easy to prove without some practice. For more on this see, e.g., Linear Algebra by Lang or similar books for mathematicians.

2 Inner-product spaces

The conditions (1.1–1.10) do not allow us to say whether two vectors are orthogonal, or even what the length of a vector is. To do these, we need to introduce some additional structure on the space, namely the idea of an inner product. This is a straightforward generalization of the familiar scalar product. In the following I use bra-ket notation ⟨a|b⟩ for the inner product of the vectors |a⟩ and |b⟩.

An inner product is a mapping V × V → F that takes two vectors and returns a scalar, and satisfies the following conditions for all |a⟩, |b⟩, |c⟩ ∈ V and α ∈ F:
      ⟨c|d⟩ = ⟨c|a⟩ + ⟨c|b⟩   if |d⟩ = |a⟩ + |b⟩;                          (2.1)
      ⟨c|d⟩ = α⟨c|a⟩          if |d⟩ = α|a⟩;                               (2.2)
      ⟨a|b⟩ = ⟨b|a⟩*;                                                      (2.3)
      ⟨a|a⟩ = 0 only if |a⟩ = |0⟩,  > 0 otherwise.                         (2.4)
Notice that the inner product is linear in the second argument, but not necessarily in the first. An inner-product space is simply a vector space V on which an inner product ⟨a|b⟩ has been defined.

Some definitions: The inner product of a vector with itself, ⟨a|a⟩, is real and non-negative. The length or norm of the vector |a⟩ is ‖a‖ ≡ +√⟨a|a⟩. The vectors |a⟩ and |b⟩ are orthogonal if ⟨a|b⟩ = 0. A set of vectors {|v_i⟩} of V is orthonormal if ⟨v_i|v_j⟩ = δ_ij.

The condition (2.3) is essential if we want lengths of vectors to be real numbers, but a consequence is that in general the inner product is not linear in both arguments.

Exercise: Use the properties (2.1–2.4) above to show that
      ⟨d|c⟩ = α*⟨a|c⟩ + β*⟨b|c⟩                                            (2.5)
for |d⟩ = α|a⟩ + β|b⟩. Some books use the term sesquilinear to describe this property. Under what conditions is the scalar product linear in both arguments?

Exercise: Show that if ⟨a|v⟩ = 0 for all |v⟩ ∈ V then |a⟩ = |0⟩.

Exercise: Show that any n orthonormal vectors in an n-dimensional inner-product space form a basis. The converse is not true: see §2.4 below.

2.1 Orthonormal bases

Let V be an n-dimensional inner-product space in which the vectors {|e_i⟩} form an orthonormal basis, so that ⟨e_i|e_j⟩ = δ_ij, and let
      |a⟩ = Σ_i a_i|e_i⟩   and   |b⟩ = Σ_i b_i|e_i⟩                        (2.6)
be any two vectors in V. Using properties (2.1) and (2.2) of the inner product together with the orthonormality of the basis vectors, we have that the projection of |a⟩ onto the j-th basis vector is
      ⟨e_j|a⟩ = Σ_i a_i⟨e_j|e_i⟩ = Σ_i a_i δ_ji = a_j,                     (2.7)

and similarly ⟨e_j|b⟩ = b_j. Therefore the inner product of |a⟩ and |b⟩ is
      ⟨a|b⟩ = Σ_i b_i⟨a|e_i⟩ = Σ_i b_i⟨e_i|a⟩* = Σ_i a_i* b_i.             (2.8)
This can be written in matrix form as
      ⟨a|b⟩ = (a_1*, ..., a_n*) (b_1, ..., b_n)^T = a† b,                  (2.9)
where a† is the Hermitian conjugate of the column vector a.

2.2 Duals: bras and kets

There is another way of looking at the inner product that serves as a useful reminder of its sesquilinearity and helps motivate the unfamiliar ⟨a|b⟩ notation. Consider the set V* of all linear maps from ket vectors |v⟩ ∈ V to scalars F. Applying any L ∈ V* to an element |v⟩ = α_1|e_1⟩ + ⋯ + α_n|e_n⟩ of V, we have, by the linearity of L, that
      L|v⟩ = L(Σ_{i=1}^n α_i|e_i⟩) = Σ_{i=1}^n α_i L|e_i⟩.                 (2.10)
So, given a basis {|e_1⟩, ..., |e_n⟩} for V, any L ∈ V* is completely defined by the n scalar values L|e_1⟩, ..., L|e_n⟩.

We can turn V* into a vector space. Given L_1, L_2 ∈ V*, define their sum (L_1 + L_2) as the new mapping
      (L_1 + L_2)|v⟩ ≡ L_1|v⟩ + L_2|v⟩,                                    (2.11)
and the result of multiplying L ∈ V* by a scalar α as the mapping (αL) defined through
      (αL)|v⟩ ≡ α L|v⟩.                                                    (2.12)
It is easy to confirm that the set V* with these operations satisfies the conditions (1.1–1.10). Inhabitants of the vector space V* are called bras and are more conventionally written ⟨a|, ⟨b|, etc., instead of the L_1, L_2 notation used above. V* has the same dimension as V and is known as the dual space (or adjoint space) of V. For every ket there is a corresponding dual (or adjoint) bra and vice versa. The addition and multiplication rules (2.11) and (2.12) above mean that if the kets |a⟩, |b⟩ have dual bras ⟨a|, ⟨b| respectively, then the ket
      |v⟩ = α|a⟩ + β|b⟩   has dual   ⟨v| = α*⟨a| + β*⟨b|.                  (2.13)

Given basis kets |e_1⟩, ..., |e_n⟩ ∈ V we may introduce corresponding basis bras ⟨e_1|, ..., ⟨e_n| ∈ V*, with each ⟨e_i| defined through
      ⟨e_i|(|e_j⟩) = δ_ij.                                                 (2.14)
Then, given |a⟩ = Σ_i a_i|e_i⟩ and |b⟩ = Σ_i b_i|e_i⟩, the dual to |a⟩ is ⟨a| = Σ_i a_i*⟨e_i| and we may define
      ⟨a|b⟩ ≡ (⟨a|)(|b⟩) = (Σ_i a_i*⟨e_i|)(Σ_j b_j|e_j⟩)
            = Σ_ij a_i* b_j ⟨e_i|(|e_j⟩) = Σ_ij a_i* b_j δ_ij = Σ_i a_i* b_i,  (2.15)
in agreement with equation (2.9) from the previous section. It is easy to confirm that this alternative definition of ⟨a|b⟩, as the result of operating on the ket vector |b⟩ by the bra vector ⟨a| ∈ V*, satisfies the conditions (2.1–2.4) for an inner product.

2.3 Representation of bras and kets

Here is a brief summary of the results in the last two sections. If we have an orthonormal basis in which we represent kets by column vectors (§1.4),
      |v⟩ = α_1|e_1⟩ + α_2|e_2⟩ + ⋯ + α_n|e_n⟩ → (α_1, α_2, ..., α_n)^T,   (2.16)
then the bra dual to |v⟩ is represented by the Hermitian conjugate of this column vector:
      ⟨v| = α_1*⟨e_1| + α_2*⟨e_2| + ⋯ + α_n*⟨e_n|
          = α_1*(1, 0, ..., 0) + α_2*(0, 1, ..., 0) + ⋯ + α_n*(0, 0, ..., 1)
          = (α_1*, α_2*, ..., α_n*).                                       (2.17)
The inner product ⟨a|b⟩ of the vectors a = (a_1, ..., a_n)^T and b = (b_1, ..., b_n)^T is obtained by premultiplying b by the dual vector to a under the usual rules of matrix multiplication:
      ⟨a|b⟩ = (a_1*, a_2*, ..., a_n*)(b_1, b_2, ..., b_n)^T = Σ_{i=1}^n a_i* b_i.  (2.18)

2.4 Gram–Schmidt procedure for constructing an orthonormal basis

In an n-dimensional inner-product space any list of n LI vectors |v_1⟩, ..., |v_n⟩ is a basis, but in general this basis is not orthonormal. There is a simple procedure for constructing an orthonormal basis from the list.
(1) Start with the first vector from the list, |v_1⟩. The first basis vector |e_1⟩ is defined via
      |ẽ_1⟩ = |v_1⟩,   |e_1⟩ = |ẽ_1⟩/‖ẽ_1‖.                                (2.19)
(2) Take the next vector |v_2⟩. Subtract any component that is parallel to the previously constructed basis vector |e_1⟩. Normalise the result to get |e_2⟩:
      |ẽ_2⟩ = |v_2⟩ − |e_1⟩⟨e_1|v_2⟩,   |e_2⟩ = |ẽ_2⟩/‖ẽ_2‖.               (2.20)
(i) Similarly, work along the remaining |v_i⟩, i = 3, ..., n, subtracting from each one any component that is parallel to any of the previously constructed basis vectors |e_1⟩, ..., |e_{i−1}⟩. That is,
      |ẽ_i⟩ = |v_i⟩ − Σ_{j=1}^{i−1} |e_j⟩⟨e_j|v_i⟩,   |e_i⟩ = |ẽ_i⟩/‖ẽ_i‖. (2.21)
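The steps of the procedure translate almost directly into code. Here is a minimal sketch (mine, not from the notes) using NumPy, with ⟨a|b⟩ = a†b so that the inner product is linear in its second argument; the four test vectors are hypothetical, chosen so that the last is a linear combination of the first three:

```python
import numpy as np

def gram_schmidt(vectors, tol=1e-12):
    """Orthonormalise `vectors` using <a|b> = a^dagger b.
    Vectors linearly dependent on their predecessors leave a zero
    remainder and are skipped, so only a basis for the span is returned."""
    basis = []
    for v in vectors:
        w = np.asarray(v, dtype=complex)
        for e in basis:
            w = w - e * np.vdot(e, v)        # subtract component along |e>
        norm = np.sqrt(np.vdot(w, w).real)
        if norm > tol:                        # nonzero remainder: v was LI
            basis.append(w / norm)
    return basis

# Hypothetical list in C^4; the fourth vector depends on the first three.
vs = [np.array([1, 1j, 1j, 1]),
      np.array([0, 2, 2, 0]),
      np.array([1, 1, 1, -1]),
      np.array([2, 1, 1, 0])]
es = gram_schmidt(vs)
print(len(es))   # 3: the four vectors span a three-dimensional subspace
```

Note that `np.vdot` conjugates its first argument, matching the sesquilinearity convention of these notes.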

It is easy to see that applying any ⟨e_k| with k < i to both sides of (2.21) yields ⟨e_k|ẽ_i⟩ = 0: by construction, each new |ẽ_i⟩ is orthogonal to all the preceding ones. The same procedure can be used to construct an orthonormal basis for the space spanned by a list of vectors |v_1⟩, ..., |v_m⟩ of any length, including cases where the list is not LI: if |v_i⟩ is linearly dependent on the preceding |v_1⟩, ..., |v_{i−1}⟩ then |ẽ_i⟩ = |0⟩, and so that particular |v_i⟩ does not produce a new basis vector.

Example: Consider the list v_1 = (1, i, i, 1)^T, v_2 = (0, 2, 2, 0)^T, v_3 = (1, 1, 1, −1)^T and v_4 = (2, 1, 1, 0)^T. From v_1 we immediately have that
      e_1 = ½(1, i, i, 1)^T.                                               (2.22)
The corresponding basis bra is the row vector
      ⟨e_1| = ½(1, −i, −i, 1).                                             (2.23)
The inner product ⟨e_1|v_2⟩ = −2i, so
      ẽ_2 = v_2 + 2i e_1 = (i, 1, 1, i)^T   ⇒   e_2 = ½(i, 1, 1, i)^T.     (2.24)
For v_3 the necessary inner products are ⟨e_1|v_3⟩ = −i and ⟨e_2|v_3⟩ = 1. Then
      ẽ_3 = v_3 + i e_1 − e_2 = (1, 0, 0, −1)^T   ⇒   e_3 = (1/√2)(1, 0, 0, −1)^T.  (2.25)
Finally, notice that ẽ_4 = 0 because v_4 = (1 − i)e_1 + (1 − i)e_2 + √2 e_3. Therefore the four vectors v_1, ..., v_4 span a three-dimensional subspace of the original four-dimensional space. The kets e_1, e_2 and e_3 constructed above are one possible orthonormal basis for this subspace.

2.5 Some important relations

Recall that ‖a‖² ≡ ⟨a|a⟩.
Pythagoras: if ⟨a|b⟩ = 0 then
      ‖a + b‖² = ‖a‖² + ‖b‖².                                              (2.26)
Parallelogram law:
      ‖a + b‖² + ‖a − b‖² = 2(‖a‖² + ‖b‖²).                                (2.27)
Triangle inequality:
      ‖a + b‖ ≤ ‖a‖ + ‖b‖.                                                 (2.28)
Cauchy–Schwarz inequality:
      |⟨a|b⟩|² ≤ ⟨a|a⟩⟨b|b⟩.                                               (2.29)
Proof of (2.29): Let |d⟩ = |a⟩ + c|b⟩, where c is a scalar whose value we choose later. Then ⟨d| = ⟨a| + c*⟨b|. By the properties of the inner product,
      0 ≤ ⟨d|d⟩ = ⟨a|a⟩ + c⟨a|b⟩ + c*⟨b|a⟩ + |c|²⟨b|b⟩.                    (2.30)
Now choose c = −⟨b|a⟩/⟨b|b⟩. Then c* = −⟨a|b⟩/⟨b|b⟩ and (2.30) becomes
      0 ≤ ⟨a|a⟩ − |⟨a|b⟩|²/⟨b|b⟩,                                          (2.31)
which on rearrangement gives the required result.

Further reading: Much of the material in this section is covered in §8.1 of RHB and §II of DK. For another introduction to the concept of dual vectors see §1.3 of Shankar's Principles of Quantum Mechanics. (The first chapter of Shankar gives a succinct summary of the first half of this course.) Beware that in most books written for mathematicians the inner product ⟨a, b⟩ is defined to be linear in the first argument.
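As a quick numerical sanity check (an addition of mine, not part of the notes), the parallelogram law, triangle inequality and Cauchy–Schwarz inequality can all be verified for random complex vectors with ⟨a|b⟩ = a†b:

```python
import numpy as np

# Spot-check the relations of this section on random vectors in C^4.
rng = np.random.default_rng(1)
cauchy_ok = triangle_ok = parallelogram_ok = True
for _ in range(500):
    a = rng.normal(size=4) + 1j * rng.normal(size=4)
    b = rng.normal(size=4) + 1j * rng.normal(size=4)
    na2, nb2 = np.vdot(a, a).real, np.vdot(b, b).real
    # Cauchy-Schwarz: |<a|b>|^2 <= <a|a><b|b>
    cauchy_ok &= abs(np.vdot(a, b))**2 <= na2 * nb2 + 1e-9
    # triangle inequality: ||a+b|| <= ||a|| + ||b||
    triangle_ok &= np.linalg.norm(a + b) <= np.sqrt(na2) + np.sqrt(nb2) + 1e-9
    # parallelogram law: ||a+b||^2 + ||a-b||^2 = 2(||a||^2 + ||b||^2)
    lhs = np.linalg.norm(a + b)**2 + np.linalg.norm(a - b)**2
    parallelogram_ok &= abs(lhs - 2 * (na2 + nb2)) < 1e-8
print(cauchy_ok, triangle_ok, parallelogram_ok)
```

The small tolerances merely absorb floating-point rounding; the exact relations are of course what the proofs above establish.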