Linear Algebra and the Basic Mathematical Tools for Finite-Dimensional Entanglement Theory


PHYS 476Q: An Introduction to Entanglement Theory (Spring 2018)
Eric Chitambar

Quantum mechanics describes nature in the language of linear algebra. In this lecture, we will review concepts from linear algebra that will be essential for our study of quantum entanglement. Our focus here is on finite-dimensional vector spaces, since all physical systems that we consider in this course are finite-dimensional.

Contents

1 Hilbert Space
  1.1 Vector Spaces, Kets and Bras
  1.2 Computational Basis and Matrix Representations
  1.3 Direct Sum Decompositions of Hilbert Space
2 Linear Operators
  2.1 Outer Products, Adjoints, and Eigenvalues
  2.2 Important Types of Operators
    2.2.1 Projections
    2.2.2 Normal Operators
    2.2.3 Unitary Operators and Isometric Mappings
    2.2.4 Hermitian Operators
    2.2.5 Positive Operators
  2.3 Functions of Normal Operators
  2.4 The Singular Value Decomposition
3 Tensor Products
  3.1 Tensor Product Spaces and Vectors
  3.2 The Isomorphism H_A ⊗ H_B ≅ L(H_B, H_A) and the Schmidt Decomposition
  3.3 The Trace and Partial Trace
4 Exercises

1 Hilbert Space

1.1 Vector Spaces, Kets and Bras

In 1939, P.A.M. Dirac published a short paper whose opening lines express the adage that good notation can be of great value in helping the development of a theory [Dir39]. In that paper,

Dirac introduced notation for linear algebra that is commonly referred to today as Dirac or bra-ket notation.

To set the stage for explaining Dirac notation, let us assume that we are dealing with a d-dimensional complex vector space V. Now V could be a set of anything - numbers, matrices, chairs, polarizations of a photon, etc. - so long as it satisfies the properties of a vector space.¹ But regardless of what these objects are, we can represent them by kets, which are symbols of the form |ψ⟩. Thus on an abstract level, we can treat V as just being a collection of kets {|ψ⟩, |ϕ⟩, |α⟩, |β⟩, …}. Since it is a vector space, if |ψ⟩ and |ϕ⟩ are two elements of V, then so is any complex linear combination a|ψ⟩ + b|ϕ⟩ ∈ V, where a, b ∈ ℂ.

For a vector space V, a collection of vectors {|α₁⟩, |α₂⟩, …, |α_r⟩} is called linearly independent if 0 = ∑_{i=1}^r c_i|α_i⟩ implies that c_i = 0 for all i. In other words, the zero vector cannot be written as a nontrivial linear combination of elements in the set; otherwise the set is called linearly dependent. The vector space is finite-dimensional with dimension d if every element in V can be written as a linear combination of vectors belonging to a linearly independent set of d vectors. Such a set is called a basis for V. In this course, we will be dealing exclusively with finite-dimensional vector spaces.

We will further assume that an inner product is defined on V. Recall that an inner product is a complex-valued function ω : V × V → ℂ that satisfies the following three properties for all |α⟩, |β⟩, |γ⟩ ∈ V and a, b ∈ ℂ:

1. Conjugate symmetry: ω(|α⟩, |β⟩) = ω(|β⟩, |α⟩)*;
2. Linearity in the second argument: ω(|γ⟩, a|α⟩ + b|β⟩) = a ω(|γ⟩, |α⟩) + b ω(|γ⟩, |β⟩);
3. Positive-definiteness: ω(|α⟩, |α⟩) ≥ 0, with equality holding if and only if |α⟩ = 0.

It is a simple exercise to show that properties one and two can be combined to show conjugate linearity in the first argument. That is,

    ω(a|α⟩ + b|β⟩, |γ⟩) = a* ω(|α⟩, |γ⟩) + b* ω(|β⟩, |γ⟩).   (1)

Two vectors |α⟩ and |β⟩ are called orthogonal with respect to the inner product if ω(|α⟩, |β⟩) = 0. As we will see in this course, the concept of orthogonality corresponds to certain physical properties in quantum information processing.

The second property above says that any inner product ω : V × V → ℂ can be converted into a linear function by fixing a vector in the first argument of ω. That is, for any fixed |γ⟩ ∈ V and inner product ω, define the linear function ω(|γ⟩, ·) : V → ℂ that maps any ket |α⟩ to its inner product with |γ⟩. Functions like ω(|γ⟩, ·) are used so frequently in quantum mechanics that we denote them simply as ⟨γ|, and their action on an arbitrary |α⟩ ∈ V is written as

    ⟨γ|α⟩ := ⟨γ|(|α⟩) = ω(|γ⟩, |α⟩).   (2)

The function ⟨γ| is called the bra or dual vector of the ket |γ⟩. Thus, we think of bras as linear functions that act on kets and map them to the set of complex numbers.

¹ A complex vector space is a set V that is closed under two operations, vector addition + : V × V → V and scalar multiplication ℂ × V → V, such that (i) vector addition is associative and commutative, (ii) there exists a zero vector 0 ∈ V such that |α⟩ + 0 = |α⟩ for any |α⟩ ∈ V, (iii) there exists an additive inverse vector −|α⟩ ∈ V for any |α⟩ ∈ V such that |α⟩ + (−|α⟩) = 0, (iv) for any |α⟩, |β⟩ ∈ V and a, b ∈ ℂ it holds that a(b|α⟩) = (ab)|α⟩, 1|α⟩ = |α⟩, a(|α⟩ + |β⟩) = a|α⟩ + a|β⟩, and (a + b)|α⟩ = a|α⟩ + b|α⟩.

The action of a bra on a ket
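The inner-product axioms above can be checked numerically for the standard inner product on ℂ^d, ω(|α⟩, |β⟩) = ∑_i α_i* β_i. A minimal NumPy sketch (the helper name `inner` is ours, not from the text):

```python
import numpy as np

def inner(alpha, beta):
    # Standard inner product on C^d; np.vdot conjugates its first argument.
    return np.vdot(alpha, beta)

rng = np.random.default_rng(0)
alpha = rng.standard_normal(3) + 1j * rng.standard_normal(3)
beta  = rng.standard_normal(3) + 1j * rng.standard_normal(3)
gamma = rng.standard_normal(3) + 1j * rng.standard_normal(3)
a, b = 0.5 - 1j, 2.0 + 3j

# 1. Conjugate symmetry
assert np.isclose(inner(alpha, beta), np.conj(inner(beta, alpha)))
# 2. Linearity in the second argument
assert np.isclose(inner(gamma, a * alpha + b * beta),
                  a * inner(gamma, alpha) + b * inner(gamma, beta))
# 3. Positive-definiteness (real and positive for a nonzero vector)
assert inner(alpha, alpha).real > 0
# Conjugate linearity in the first argument, Eq. (1)
assert np.isclose(inner(a * alpha + b * beta, gamma),
                  np.conj(a) * inner(alpha, gamma) + np.conj(b) * inner(beta, gamma))
```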

is sometimes called a contraction because it maps a d-dimensional vector space (V) down to a one-dimensional vector space (ℂ). The collection of all bras acting on V itself forms a vector space, called the dual space of V. If ⟨α| and ⟨β| are two bras, then any linear combination a⟨α| + b⟨β| forms a new bra whose action on kets is defined by

    (a⟨α| + b⟨β|)|γ⟩ := a⟨α|γ⟩ + b⟨β|γ⟩.   (3)

This is the bra generated by a linear combination of other bras, but what about the bra corresponding to a linear combination of kets? Equation (1) shows how to determine the latter. For example, if |ψ⟩ = a|α⟩ + b|β⟩ and |γ⟩ is any other vector, then the bra of |ψ⟩ satisfies

    ⟨ψ|γ⟩ = ω(|ψ⟩, |γ⟩) = ω(a|α⟩ + b|β⟩, |γ⟩) = a* ω(|α⟩, |γ⟩) + b* ω(|β⟩, |γ⟩) = a* ⟨α|γ⟩ + b* ⟨β|γ⟩ = (a* ⟨α| + b* ⟨β|)|γ⟩.   (4)

This shows that the bra of |ψ⟩ = a|α⟩ + b|β⟩ is ⟨ψ| = a* ⟨α| + b* ⟨β|. In general, if {|μ_i⟩}_{i=1}^n is any collection of vectors in V and {a_i}_{i=1}^n are arbitrary complex coefficients, then the mapping between bras and kets is given by

    ket: ∑_{i=1}^n a_i|μ_i⟩  ⟺  bra: ∑_{i=1}^n a_i* ⟨μ_i|.   (5)

In particular, this implies that if {|ν_j⟩}_{j=1}^{n′} is another set of vectors and {b_j}_{j=1}^{n′} another set of complex coefficients, then the inner product between vectors |ψ⟩ = ∑_{i=1}^n a_i|μ_i⟩ and |ϕ⟩ = ∑_{j=1}^{n′} b_j|ν_j⟩ is given by

    ⟨ψ|ϕ⟩ = (∑_{i=1}^n a_i* ⟨μ_i|)(∑_{j=1}^{n′} b_j|ν_j⟩) = ∑_{i=1}^n ∑_{j=1}^{n′} a_i* b_j ⟨μ_i|ν_j⟩.   (6)

A complex vector space with a defined inner product is called an inner product space. In finite dimensions, inner product spaces are also known as Hilbert spaces. When working with infinite-dimensional vector spaces, Hilbert spaces require more structure than just being an inner product space, but we will not worry about such details in this course since we only consider finite dimensions, where the two types of spaces are equivalent. As is common practice, we will often denote the Hilbert space we are working in as H (in place of V).

In a Hilbert space, the norm of a vector |ψ⟩ is defined as

    ‖ |ψ⟩ ‖ := √⟨ψ|ψ⟩.   (7)

A nonzero vector |ψ⟩ can always be multiplied by a complex scalar so that its norm is one; such a vector is called normalized. Notice that this scalar multiplier is not unique. Specifically, if |ψ⟩ is an arbitrary nonzero vector, then e^{iθ}|ψ⟩/√⟨ψ|ψ⟩ is a normalized vector for any θ ∈ ℝ. The factor e^{iθ} is called an overall phase, but as we will see shortly, overall phase factors have no physical meaning in quantum theory. Therefore, when dividing |ψ⟩ by its norm to obtain a normalized vector, without loss of generality we can always choose the overall phase to be +1 (i.e. take θ = 0).
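As a quick numerical illustration of Eq. (7), normalization, and the overall phase (a sketch; the variable names are ours):

```python
import numpy as np

psi = np.array([3.0, 4.0j])                 # an unnormalized vector in C^2
norm = np.sqrt(np.vdot(psi, psi).real)      # Eq. (7): ||psi|| = sqrt(<psi|psi>)
psi_hat = psi / norm                        # normalized, with overall phase +1

theta = 0.7
psi_phase = np.exp(1j * theta) * psi_hat    # e^{i theta}|psi_hat> is also normalized

assert np.isclose(norm, 5.0)
assert np.isclose(np.vdot(psi_hat, psi_hat).real, 1.0)
assert np.isclose(np.vdot(psi_phase, psi_phase).real, 1.0)
```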

1.2 Computational Basis and Matrix Representations

Every d-dimensional Hilbert space possesses an orthonormal basis {|ɛ₁⟩, |ɛ₂⟩, …, |ɛ_d⟩}, which is a set of vectors satisfying ⟨ɛ_i|ɛ_j⟩ = δ_ij. The existence of an orthonormal basis for any vector space V follows from the so-called Gram-Schmidt orthogonalization process, which we will not review here. Any vector |ϕ⟩ ∈ H can be expressed as a linear combination of the basis vectors, |ϕ⟩ = ∑_{k=1}^d b_k|ɛ_k⟩. By contracting both sides of this equality with ⟨ɛ_j|, we see that

    ⟨ɛ_j|ϕ⟩ = ⟨ɛ_j|(∑_{k=1}^d b_k|ɛ_k⟩) = ∑_{k=1}^d b_k ⟨ɛ_j|ɛ_k⟩ = ∑_{k=1}^d b_k δ_jk = b_j.

Therefore, for an orthonormal basis {|ɛ_k⟩}_{k=1}^d, an arbitrary vector |ϕ⟩ in H is a linear combination of the form

    |ϕ⟩ = ∑_{k=1}^d ⟨ɛ_k|ϕ⟩ |ɛ_k⟩.   (8)

There are an infinite number of orthonormal bases for a vector space. However, when performing calculations, it is customary to choose one particular basis and relabel the kets simply as {|1⟩, |2⟩, …, |d⟩}. This basis is then often referred to as the computational basis. It should be emphasized that the computational basis is not something intrinsic to the vector space; rather, it is chosen by the individual for mathematical convenience or because the elements of a particular basis correspond to physically important objects. Once a computational basis is chosen, the entire vector space can be represented using column matrices. Namely, one first makes the column vector identifications

    |1⟩ ≐ (1, 0, …, 0)ᵀ,  |2⟩ ≐ (0, 1, …, 0)ᵀ,  …,  |d⟩ ≐ (0, 0, …, 1)ᵀ,

and then extends by linearity:

    ∑_{k=1}^d b_k|k⟩ ≐ (b₁, b₂, …, b_d)ᵀ.   (9)

Throughout this book we will use the notation ≐ to identify abstract elements in an arbitrary vector space H with their coordinate representations in ℂ^d, as in Eqns. (9) and (10). Customarily, the notation changes slightly when the Hilbert space has dimension d = 2. Two-dimensional Hilbert spaces play an important role in quantum information theory as they represent quantum systems known as qubits. We will learn quite a bit more about qubits in the upcoming chapters. But for now, we just remark that when dealing with two-dimensional spaces, the computational basis vectors are denoted by |0⟩ and |1⟩ (instead of |1⟩ and |2⟩), with corresponding column vector representations

    |0⟩ ≐ (1, 0)ᵀ,   |1⟩ ≐ (0, 1)ᵀ.   (10)

The computational basis is an orthonormal basis, meaning that the inner product between any two basis vectors satisfies ⟨l|k⟩ = δ_lk. For two vectors |ψ⟩ = ∑_{k=1}^d a_k|k⟩ and |ϕ⟩ = ∑_{k=1}^d b_k|k⟩, their inner product is computed as in Eq. (6):

    ⟨ψ|ϕ⟩ = (∑_{l=1}^d a_l* ⟨l|)(∑_{k=1}^d b_k|k⟩) = ∑_{l,k} a_l* b_k ⟨l|k⟩ = ∑_{l,k} a_l* b_k δ_lk = ∑_{k=1}^d a_k* b_k.   (11)
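In the computational-basis representation, Eq. (11) is just a conjugated dot product. A small sketch for the qubit basis of Eq. (10):

```python
import numpy as np

# Qubit computational basis, Eq. (10)
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

a = np.array([1 / np.sqrt(2), 1j / np.sqrt(2)])  # |psi> = (|0> + i|1>)/sqrt(2)
b = ket0                                         # |phi> = |0>

# <psi|phi> = sum_k a_k^* b_k, Eq. (11)
ip = np.sum(np.conj(a) * b)
assert np.isclose(ip, 1 / np.sqrt(2))

# Orthonormality of the basis: <l|k> = delta_lk
assert np.isclose(np.vdot(ket0, ket1), 0.0)
assert np.isclose(np.vdot(ket0, ket0), 1.0)
```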

Figure 1: A direct sum decomposition of H into four orthogonal subspaces. The only vector common to the subspaces is the zero vector 0. Any |ψ⟩ ∈ H can be uniquely expressed as a linear combination of vectors |φ_i⟩ ∈ V_i.

This equation shows that if bras are represented by row vectors, then the inner product can be computed by standard matrix multiplication. Namely, for a ket |ψ⟩ = ∑_k a_k|k⟩, one obtains the row vector representation of its bra according to ⟨ψ| ≐ (a₁*, a₂*, …, a_d*). Then

    ⟨ψ|ϕ⟩ = (a₁* a₂* ⋯ a_d*)(b₁, b₂, …, b_d)ᵀ = ∑_{k=1}^d a_k* b_k.   (12)

1.3 Direct Sum Decompositions of Hilbert Space

The notion of vector orthogonality can be extended to subspaces. We say that subspace V₁ is orthogonal to subspace V₂ if ⟨α₁|α₂⟩ = 0 for every |α₁⟩ ∈ V₁ and |α₂⟩ ∈ V₂. An orthogonal direct sum decomposition of a finite-dimensional Hilbert space H is a collection of mutually orthogonal subspaces {V_i}_{i=1}^n that together span all of H (see Fig. 1).² When {V_i}_{i=1}^n provides a direct sum decomposition of H, we write

    H = V₁ ⊕ V₂ ⊕ ⋯ ⊕ V_n.   (13)

Since H = ⊕_{i=1}^n V_i, every |ψ⟩ ∈ H can be uniquely written as |ψ⟩ = ∑_i a_i|ϕ_i⟩ with |ϕ_i⟩ ∈ V_i. For each subspace V_i an orthonormal basis {|e_{i,k}⟩}_{k=1}^{d_i} can be chosen, and the full set ∪_{i=1}^n {|e_{i,k}⟩}_{k=1}^{d_i} constitutes an orthonormal basis for H. Here the dimension of V_i is d_i, and obviously ∑_i d_i = d, where d is the dimension of H.

For any subspace V ⊆ H, its orthogonal complement, denoted by V⊥, is the collection of all |ϕ⟩ such that ⟨ψ|ϕ⟩ = 0 for every |ψ⟩ ∈ V. It is easy to verify that V⊥ is a vector space; indeed, if ⟨ψ|ϕ⟩ = ⟨ψ|ϕ′⟩ = 0 for every |ψ⟩ ∈ V, then ⟨ψ|(|ϕ⟩ + |ϕ′⟩) = 0. A natural direct sum decomposition of H is then given by H = V ⊕ V⊥. From the definitions of vector space dimension and direct sum decomposition, we obtain the following proposition.

Proposition 1. Let V ⊆ H be any subspace of a d-dimensional space H. Then

    dim[H] = dim[V] + dim[V⊥],   (14)

where dim[V] is the dimension of V and likewise for dim[V⊥].

² A subspace of a Hilbert space H is any subset V ⊆ H that is itself a vector space.
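Proposition 1 can be checked numerically: given an orthonormal basis for a subspace V of ℂ^d, the projectors onto V and V⊥ have ranks that sum to d. A sketch (the helper construction via QR is ours):

```python
import numpy as np

d = 4
rng = np.random.default_rng(1)
# Orthonormal basis for a 2-dimensional subspace V of C^4, built via QR
A = rng.standard_normal((d, 2)) + 1j * rng.standard_normal((d, 2))
Q, _ = np.linalg.qr(A)              # columns of Q: orthonormal basis of V

P_V = Q @ Q.conj().T                # projector onto V
P_perp = np.eye(d) - P_V            # projector onto the orthogonal complement V_perp

dim_V = round(np.trace(P_V).real)   # rank of a projector equals its trace
dim_perp = round(np.trace(P_perp).real)
assert dim_V + dim_perp == d        # Eq. (14)

# Every vector splits into orthogonal components: P_V|psi> is orthogonal to P_perp|psi>
psi = rng.standard_normal(d) + 1j * rng.standard_normal(d)
assert np.isclose(np.vdot(P_V @ psi, P_perp @ psi), 0.0)
```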

2 Linear Operators

2.1 Outer Products, Adjoints, and Eigenvalues

For two Hilbert spaces H₁ and H₂, a linear operator R is simply a linear function that maps vectors in H₁ to H₂. That is, for any |α⟩, |β⟩ ∈ H₁ and any a, b ∈ ℂ, the operator R has action

    R(a|α⟩ + b|β⟩) = aR|α⟩ + bR|β⟩ ∈ H₂.   (15)

We denote the set of all linear operators mapping H₁ to H₂ by L(H₁, H₂). For example, the vector space of bras acting on H is precisely the set L(H, ℂ). When H₁ = H₂ = H, we denote L(H₁, H₂) simply by L(H).

Equation (15) implies that every linear operator is defined by its action on a basis. Specifically, if H₁ is a d₁-dimensional space with computational basis {|i⟩}_{i=1}^{d₁}, then an operator R ∈ L(H₁, H₂) acts on this basis according to R|i⟩ = ∑_{j=1}^{d₂} R_ji|j⟩ for i = 1, …, d₁, where {|j⟩}_{j=1}^{d₂} is the computational basis in H₂. By contracting both the LHS and RHS with ⟨j|, we see that

    R_ji = ⟨j|R|i⟩.   (16)

Bra-ket notation provides a convenient way to write any linear operator. With respect to bases {|i⟩}_{i=1}^{d₁} for H₁ and {|j⟩}_{j=1}^{d₂} for H₂, any linear operator R can be written as

    R = ∑_{i=1}^{d₁} ∑_{j=1}^{d₂} R_ji |j⟩⟨i|.   (17)

When R acts on a state |ψ⟩, the transformed state is

    R|ψ⟩ = ∑_{i=1}^{d₁} ∑_{j=1}^{d₂} R_ji |j⟩⟨i|ψ⟩ = ∑_{i=1}^{d₁} ∑_{j=1}^{d₂} R_ji ⟨i|ψ⟩ |j⟩.   (18)

In general, the ket-bra |β⟩⟨α| is called the outer product of kets |α⟩ ∈ H₁ and |β⟩ ∈ H₂. It is the linear operator in L(H₁, H₂) that acts on an arbitrary ket |ψ⟩ by outputting the ket |β⟩ multiplied by the bra-ket (i.e. the inner product) ⟨α|ψ⟩:

    |ψ⟩ → (|β⟩⟨α|)|ψ⟩ = ⟨α|ψ⟩ |β⟩.   (19)

Eqns. (16) and (17) show how a linear operator can be represented as a matrix. Using bases {|i⟩}_{i=1}^{d₁} and {|j⟩}_{j=1}^{d₂} for H₁ and H₂ respectively, an operator R has the matrix representation

    R ≐ [ ⟨1|R|1⟩   ⟨1|R|2⟩   ⋯  ⟨1|R|d₁⟩
          ⟨2|R|1⟩   ⟨2|R|2⟩   ⋯  ⟨2|R|d₁⟩
            ⋮          ⋮             ⋮
          ⟨d₂|R|1⟩  ⟨d₂|R|2⟩  ⋯  ⟨d₂|R|d₁⟩ ].   (20)
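Eqns. (16)-(18) can be verified directly in NumPy: the matrix of R has entries R_ji = ⟨j|R|i⟩, and R is rebuilt from the outer products |j⟩⟨i|. A sketch (the helper `ket` is ours):

```python
import numpy as np

d1, d2 = 3, 2
rng = np.random.default_rng(2)
R = rng.standard_normal((d2, d1)) + 1j * rng.standard_normal((d2, d1))

def ket(k, d):
    # Computational basis vector |k> in C^d (0-indexed here)
    e = np.zeros(d, dtype=complex)
    e[k] = 1.0
    return e

# R_ji = <j|R|i>, Eq. (16)
for i in range(d1):
    for j in range(d2):
        assert np.isclose(R[j, i], np.vdot(ket(j, d2), R @ ket(i, d1)))

# R = sum_{ij} R_ji |j><i|, Eq. (17); |b><a| is np.outer(b, a.conj())
R_rebuilt = sum(R[j, i] * np.outer(ket(j, d2), ket(i, d1).conj())
                for i in range(d1) for j in range(d2))
assert np.allclose(R, R_rebuilt)
```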

An alternative way of forming this matrix can be obtained by taking a linear combination of outer products. For kets |α⟩ = ∑_{i=1}^{d₁} a_i|i⟩ and |β⟩ = ∑_{j=1}^{d₂} b_j|j⟩, their outer product is computed in matrix form as

    |β⟩⟨α| ≐ (b₁, b₂, …, b_{d₂})ᵀ (a₁*, a₂*, …, a_{d₁}*),   (21)

i.e. the d₂ × d₁ matrix whose (j, i) entry is b_j a_i*. In particular, the outer product |j⟩⟨i| is simply the d₂ × d₁ matrix that has zeros everywhere except a 1 in the j-th row and i-th column. From Eq. (17), the matrix for R can be formed as a linear combination of the |j⟩⟨i| with the coefficients being R_ji.

For every operator R ∈ L(H₁, H₂), there exists a unique operator R† ∈ L(H₂, H₁), called the adjoint of R, that is defined by the condition

    ⟨β|R†|α⟩ = ⟨α|R|β⟩*  for all |β⟩ ∈ H₁, |α⟩ ∈ H₂.   (22)

From this equation we can easily determine the bra for the vector R|β⟩. Indeed, let |γ⟩ = R|β⟩. Then Eq. (22) says that ⟨α|γ⟩* = ⟨β|R†|α⟩. But we also know that ⟨γ|α⟩ = ⟨α|γ⟩*, and so ⟨γ|α⟩ = ⟨β|R†|α⟩. Since this holds for any |α⟩, we obtain the general rule

    ket: R|β⟩  ⟺  bra: ⟨β|R†.   (23)

In terms of its matrix representation, we can write

    ⟨j|R†|i⟩ = ⟨i|R|j⟩* = R_ij*   (24)

for the orthonormal bases {|j⟩}_{j=1}^{d₁} and {|i⟩}_{i=1}^{d₂}. We therefore see that

    R† = ∑_{j=1}^{d₁} ∑_{i=1}^{d₂} R_ij* |j⟩⟨i|.   (25)

The matrix representation for R† is obtained by taking the complex conjugate of each element in R and then taking the matrix transpose.

Finally, we recall that an eigenvalue of a linear operator R ∈ L(H) is a complex number λ such that R|ψ⟩ = λ|ψ⟩ for some nonzero vector |ψ⟩. Note that |ψ⟩ will not be the unique vector satisfying this equation. In fact, it is easy to see that for a fixed λ, the set V_λ = {|ϕ⟩ : R|ϕ⟩ = λ|ϕ⟩} forms a vector space. This vector space V_λ is called the eigenspace associated with λ (or the λ-eigenspace), and any nonzero vector from this space is an eigenvector of R with eigenvalue λ. The kernel (or null space) of an operator R is the eigenspace associated with the eigenvalue 0, and it is denoted by ker(R).

Lemma 1. For a nonempty Hilbert space H, every operator R ∈ L(H) has at least one eigenvalue.

Proof. The proof of this lemma is borrowed from Axler's textbook [Axl97], and it does not rely on determinants. Suppose H is d-dimensional, and let |ϕ⟩ be any nonzero element of H. Consider the set of d + 1 vectors {|ϕ⟩, R|ϕ⟩, R²|ϕ⟩, …, R^d|ϕ⟩}. As the dimension of H is d, these vectors must be linearly dependent. Hence there exist constants c_k, not all equal to zero, such that

    0 = (∑_{k=0}^d c_k R^k)|ϕ⟩.

Now consider the polynomial p(z) = ∑_{k=0}^d c_k z^k for z ∈ ℂ. If instead of a complex number z we substitute the operator R and multiply additive constants by the identity, then we obtain p(R) := ∑_{k=0}^d c_k R^k, which is precisely the term in parentheses in the above equation. The fundamental theorem of algebra says that every degree-d polynomial has d roots r₁, r₂, …, r_d (including multiplicities), and so p(z) can be factored as p(z) = c ∏_{k=1}^d (z − r_k). Applying the same factorization to p(R), we obtain

    0 = p(R)|ϕ⟩ = c ∏_{k=1}^d (R − r_k I)|ϕ⟩.   (26)

This implies that there must exist some r_k and some nonzero vector |ϕ_k⟩ such that 0 = (R − r_k I)|ϕ_k⟩; that is, r_k is an eigenvalue of R. ∎

We finally describe an important relationship between the kernel of R and its range. The range (or image) of a linear operator R ∈ L(H₁, H₂) is the vector space defined by

    Rng(R) = { |ϕ⟩ ∈ H₂ : |ϕ⟩ = R|ψ⟩ for some |ψ⟩ ∈ H₁ }.

The image of an operator is itself a vector space, a fact that can be easily verified. The dimension of Rng(R) is called the rank of R (denoted by rk(R)), and the following fundamental theorem relates rk(R) and dim[ker(R)].

Theorem 1 (Rank-Nullity Theorem). For any operator R ∈ L(H₁, H₂),

    dim[H₁] = rk(R) + dim[ker(R)].   (27)

Proof. By Prop. 1, we must show that rk(R) = dim[ker(R)⊥]. Expand R in the computational basis as R = ∑_{i=1}^{d₁} ∑_{j=1}^{d₂} R_ji|j⟩⟨i| = ∑_{j=1}^{d₂} |j⟩⟨φ_j|, where ⟨φ_j| = ∑_{i=1}^{d₁} R_ji⟨i|. From this we see that |ψ⟩ ∈ ker(R) iff ⟨φ_j|ψ⟩ = 0 for all j. In other words, the vectors {|φ_j⟩}_{j=1}^{d₂} belong to ker(R)⊥. Let us take {|ɛ_k⟩}_{k=1}^r as an orthonormal basis for ker(R)⊥ so that we can write ⟨φ_j| = ∑_{k=1}^r c_jk⟨ɛ_k| for every ⟨φ_j|. Then

    R = ∑_{j=1}^{d₂} |j⟩⟨φ_j| = ∑_{j=1}^{d₂} ∑_{k=1}^r c_jk |j⟩⟨ɛ_k| = ∑_{k=1}^r |ν_k⟩⟨ɛ_k|,   (28)

where |ν_k⟩ = ∑_{j=1}^{d₂} c_jk|j⟩. The {|ν_k⟩}_{k=1}^r must be linearly independent. Indeed, suppose that ∑_k a_k|ν_k⟩ = 0 for a_k not all zero. Then |ψ⟩ = ∑_{k=1}^r a_k|ɛ_k⟩ satisfies R|ψ⟩ = 0, which is impossible since |ψ⟩ is a nonzero vector belonging to ker(R)⊥. Hence, the {|ν_k⟩}_{k=1}^r are linearly independent, and they clearly span Rng(R). This proves that r = dim[ker(R)⊥] = dim[Rng(R)]. ∎

Typically, the rank of an operator R is defined as the dimension of the column space (or row space) when R is represented as a matrix. Eq. (28) shows that this is equivalent to the definition given here of rk(R) = dim[Rng(R)]. Indeed, the {|ν_k⟩}_{k=1}^r are precisely the column vectors when expressing R in an {|ɛ_k⟩} basis of H₁.
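The Rank-Nullity Theorem is easy to check numerically; a sketch using the SVD to obtain the rank and a basis for the kernel (construction and names are ours):

```python
import numpy as np

d1, d2 = 5, 3
rng = np.random.default_rng(3)
# A rank-deficient operator R : H1 -> H2 built from two outer products
u = rng.standard_normal((d2, 2)) + 1j * rng.standard_normal((d2, 2))
v = rng.standard_normal((d1, 2)) + 1j * rng.standard_normal((d1, 2))
R = u @ v.conj().T                          # a d2 x d1 matrix of rank (generically) 2

rank = np.linalg.matrix_rank(R)
U, s, Vh = np.linalg.svd(R)
# Rows of Vh past the rank span ker(R); transpose-conjugate them into columns
kernel = Vh[np.sum(s > 1e-10):].conj().T
assert np.allclose(R @ kernel, 0)           # these vectors really are annihilated
assert rank == 2
assert d1 == rank + kernel.shape[1]         # Eq. (27): dim H1 = rk(R) + dim ker(R)
```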

Figure 2: A rough schematic depicting the general relationships between the operator classes described in this section. In particular, there exist operators that are both unitary and hermitian, but the identity I is the only positive operator that is also unitary (and also a projection).

2.2 Important Types of Operators

In quantum mechanics, certain types of operators are vitally important to the theory, as they correspond to certain physical processes or objects. This will be discussed in the next chapter. For now we stay on the level of mathematics and describe five different classes of linear operators. Their inclusion structure is depicted in Fig. 2.

2.2.1 Projections

We begin with projectors since, as the Spectral Theorem below shows (Thm. 2), they are the basic building blocks for all normal operators. A projection on a d-dimensional space H is any operator of the form

    P_V = ∑_{i=1}^s |ɛ_i⟩⟨ɛ_i|,   (29)

where {|ɛ_i⟩}_{i=1}^s is an orthonormal set of vectors for some integer 1 ≤ s ≤ d. These vectors form a basis for some s-dimensional subspace V ⊆ H, and P_V is called the subspace projector onto V. To understand why it has this name, first note that P_V|ψ⟩ = |ψ⟩ for any |ψ⟩ ∈ V. This equality can be verified by writing |ψ⟩ = ∑_i ⟨ɛ_i|ψ⟩|ɛ_i⟩ (which is possible since |ψ⟩ ∈ V) and then applying P_V to both sides:

    P_V|ψ⟩ = (∑_{i=1}^s |ɛ_i⟩⟨ɛ_i|)|ψ⟩ = ∑_{i=1}^s ⟨ɛ_i|ψ⟩|ɛ_i⟩ = |ψ⟩.

Next, let V⊥ be the orthogonal complement of V and consider the direct sum decomposition H = V ⊕ V⊥. Any |ψ⟩ ∈ H can be uniquely written as |ψ⟩ = a|ϕ₁⟩ + b|ϕ₂⟩ with |ϕ₁⟩ ∈ V and |ϕ₂⟩ ∈ V⊥. Then since P_V|ϕ₁⟩ = |ϕ₁⟩ and P_V|ϕ₂⟩ = 0, we have P_V|ψ⟩ = a|ϕ₁⟩. In other words, when P_V acts on an arbitrary vector, it projects the vector onto its component that lies in V.

Notice that if V₁ and V₂ are orthogonal subspaces of H, then it is easy to verify that

    P_{V_i} P_{V_j} = δ_ij P_{V_i}  for i, j ∈ {1, 2}.   (30)

In particular, every projector satisfies P_V² = P_V for any subspace V. Operators having the property A² = A are often called idempotent.
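A quick numerical check that subspace projectors are idempotent and that P_V extracts the V-component of a vector (a sketch; the QR construction is ours):

```python
import numpy as np

rng = np.random.default_rng(4)
d, s = 4, 2
A = rng.standard_normal((d, s)) + 1j * rng.standard_normal((d, s))
Q, _ = np.linalg.qr(A)               # columns: orthonormal {|eps_1>, ..., |eps_s>} for V
P = Q @ Q.conj().T                   # P_V = sum_i |eps_i><eps_i|, Eq. (29)

assert np.allclose(P @ P, P)         # idempotent: P_V^2 = P_V
assert np.allclose(P.conj().T, P)    # a subspace projector is also hermitian

v_in = Q[:, 0]                       # a vector inside V
psi = rng.standard_normal(d) + 1j * rng.standard_normal(d)
assert np.allclose(P @ v_in, v_in)             # P_V|psi> = |psi> for |psi> in V
assert np.allclose(P @ (P @ psi), P @ psi)     # projecting twice changes nothing
```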

For s > 1, there are an infinite number of orthonormal bases for a given s-dimensional subspace V, and each of these provides a different way to represent the subspace projector. In other words, if {|ɛ_i⟩}_{i=1}^s and {|δ_i⟩}_{i=1}^s are both orthonormal bases for V, then ∑_{i=1}^s |ɛ_i⟩⟨ɛ_i| = ∑_{i=1}^s |δ_i⟩⟨δ_i|. An extreme case is when V is the entire vector space H. In this case, the subspace projector is simply the identity operator I, i.e. the unique operator in L(H) satisfying I|ψ⟩ = |ψ⟩ for all |ψ⟩ ∈ H. Then if {|ɛ_i⟩}_{i=1}^d is any orthonormal basis for H, we can express the identity as I = ∑_{i=1}^d |ɛ_i⟩⟨ɛ_i|.

Throughout this course we will work heavily with the eigenspaces of various operators. For an operator R with eigenvalue λ, we define its λ-eigenprojector to be the projector P_λ that projects onto the λ-eigenspace V_λ.

2.2.2 Normal Operators

An operator N ∈ L(H) is called normal if NN† = N†N. Note that each normal operator is represented by a square matrix, since it maps a given vector space H onto itself. Every normal operator has very nice mathematical properties, two of which are the following.

Proposition 2. If N ∈ L(H) is normal with eigenvalue λ and eigenspace V_λ, then λ* is an eigenvalue of N† with the same eigenspace V_λ.

Proof. Let |ψ⟩ be any element in V_λ. By assumption we have that (N − λI)|ψ⟩ = 0. It is easy to see that if N is normal, then so is N − λI. Therefore,

    0 = ⟨ψ|(N − λI)†(N − λI)|ψ⟩ = ⟨ψ|(N − λI)(N − λI)†|ψ⟩ = ‖(N† − λ*I)|ψ⟩‖².   (31)

The last equality implies that (N† − λ*I)|ψ⟩ = 0, which is what we set out to prove. ∎

Proposition 3. If N ∈ L(H) is normal with distinct eigenvalues λ₁ and λ₂, then the associated eigenspaces V_{λ₁} and V_{λ₂} are orthogonal. Furthermore, H = ⊕_{λ_i} V_{λ_i}, where the direct sum is taken over all distinct eigenvalues of N.

Proof. Let |ψ₁⟩ ∈ V_{λ₁} and |ψ₂⟩ ∈ V_{λ₂} be arbitrary. Then (i) N|ψ₁⟩ = λ₁|ψ₁⟩, and (ii) N†|ψ₂⟩ = λ₂*|ψ₂⟩. Hence

    ⟨ψ₂|N|ψ₁⟩ = ⟨ψ₂|(N|ψ₁⟩) = λ₁⟨ψ₂|ψ₁⟩,
    ⟨ψ₂|N|ψ₁⟩ = (⟨ψ₂|N)|ψ₁⟩ = λ₂⟨ψ₂|ψ₁⟩,   (32)

where the second line follows from (ii). Subtracting these gives (λ₁ − λ₂)⟨ψ₂|ψ₁⟩ = 0, and since λ₁ ≠ λ₂, we conclude ⟨ψ₂|ψ₁⟩ = 0.

To prove the second claim of the proposition, let V = ⊕_{λ_i} V_{λ_i}. We want to show that V⊥ = {0}, i.e. that V = H. To this end, let N̂ be the operator N with domain restricted to V⊥; that is, N̂ : V⊥ → H with action defined by N̂|β⟩ = N|β⟩ for all |β⟩ ∈ V⊥. Notice that for any |α⟩ ∈ V and |β⟩ ∈ V⊥ we have

    ⟨α|N̂|β⟩ = ⟨α|N|β⟩ = (⟨β|N†|α⟩)* = 0,

where we have used Prop. 2 to conclude that N†|α⟩ ∈ V. This shows that N̂ maps V⊥ into itself; i.e. N̂ ∈ L(V⊥). Therefore, we can use Lem. 1, which says that if V⊥ is nontrivial, then N̂ has at least one eigenvalue and eigenvector. But this would be a contradiction, since any eigenvector of N̂ is also an eigenvector of N, and all eigenspaces of N are contained in V. Hence V⊥ must be the trivial subspace {0}. ∎

Proposition 3 shows that every normal operator N ∈ L(H) is uniquely associated with a direct sum decomposition

    H = V_{λ₁} ⊕ V_{λ₂} ⊕ ⋯ ⊕ V_{λ_n},   (33)

where the V_{λ_i} are the eigenspaces of N. This directly leads to the spectral decomposition of N, which is one of the most important mathematical theorems in quantum mechanics.

Theorem 2 (Spectral Decomposition). Every normal operator N ∈ L(H) can be uniquely written as

    N = ∑_{i=1}^n λ_i P_{λ_i},   (34)

where the {λ_i}_{i=1}^n are the distinct eigenvalues of N and the {P_{λ_i}}_{i=1}^n are the corresponding eigenspace projectors.

Proof. By the direct sum decomposition of Eq. (33), any vector |ψ⟩ ∈ H can be written as |ψ⟩ = ∑_{i=1}^n |α_i⟩ with |α_i⟩ ∈ V_{λ_i}. Thus,

    (N − ∑_i λ_i P_{λ_i})|ψ⟩ = ∑_i N|α_i⟩ − ∑_{i,j} λ_i P_{λ_i}|α_j⟩ = ∑_i λ_i|α_i⟩ − ∑_i λ_i|α_i⟩ = 0.

Since this holds for all |ψ⟩ ∈ H, we must have N − ∑_{i=1}^n λ_i P_{λ_i} = 0, or N = ∑_{i=1}^n λ_i P_{λ_i}. ∎

If a normal operator has a zero eigenvalue, then the associated eigenspace is ker(N). The vector space

    supp(N) := ker(N)⊥ = ⊕_{λ_k ≠ 0} V_{λ_k}   (35)

is called the support of N. From the spectral decomposition, we can immediately see that the support of N is equal to its range; i.e. supp(N) = Rng(N).

Since every projector is a normal operator, it also has a spectral decomposition; in fact, this is precisely what is given in Eq. (29). Then if {|λ_{i,j}⟩}_{j=1}^{d_i} is an orthonormal basis for the eigenspace V_{λ_i} of a normal operator N, we can write the spectral decomposition of N as

    N = ∑_{i=1}^n λ_i P_{λ_i} = ∑_{i=1}^n ∑_{j=1}^{d_i} λ_i |λ_{i,j}⟩⟨λ_{i,j}| = ∑_{λ_i ≠ 0} ∑_{j=1}^{d_i} λ_i |λ_{i,j}⟩⟨λ_{i,j}|.   (36)

Customarily, we combine the two sums over (i, j) into a single sum in which the same eigenvalue is counted multiple times. More precisely, the eigenvalue λ_i is counted dim[V_{λ_i}] times. Since ∑_{i=1}^n d_i = d = dim[H], we can write

    N = ∑_{k=1}^d λ_k |λ_k⟩⟨λ_k|.   (37)

As a matter of convention, whenever the λ_k are enumerated from k = 1, …, d, we will assume that the set {λ_k}_{k=1}^d may contain repeated eigenvalues. The full collection {|λ_k⟩}_{k=1}^d spans all of H, and it is sometimes called a complete eigenbasis.
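For hermitian (hence normal) operators, the complete eigenbasis of Eq. (37) is what `numpy.linalg.eigh` computes; a sketch rebuilding N from its spectral decomposition:

```python
import numpy as np

rng = np.random.default_rng(5)
d = 4
X = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
N = X + X.conj().T                   # a hermitian (and therefore normal) operator

evals, evecs = np.linalg.eigh(N)     # columns of evecs: complete eigenbasis |lambda_k>

# N = sum_k lambda_k |lambda_k><lambda_k|, Eq. (37)
N_rebuilt = sum(evals[k] * np.outer(evecs[:, k], evecs[:, k].conj())
                for k in range(d))
assert np.allclose(N, N_rebuilt)

# The eigenbasis is orthonormal: <lambda_j|lambda_k> = delta_jk
assert np.allclose(evecs.conj().T @ evecs, np.eye(d))
```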

2.2.3 Unitary Operators and Isometric Mappings

An element U ∈ L(H) is called unitary if U†U = UU† = I. In fact, it is easy to see that if U†U = I, then we must also have UU† = I. Indeed, suppose that U†U = I, and let {|ɛ_i⟩}_{i=1}^d be an orthonormal basis for H. Clearly {U|ɛ_i⟩}_{i=1}^d is also an orthonormal basis. But then U†U = I implies that (UU† − I)U|ɛ_i⟩ = 0 for all i, which is only possible if UU† − I = 0.

An important property of unitary operators is that they preserve inner products when transforming vectors: ⟨α|β⟩ = ⟨α|U†U|β⟩ for all |α⟩, |β⟩ ∈ H. In particular, if {|ɛ_i⟩}_{i=1}^d is an orthonormal basis for H, then so is {U|ɛ_i⟩}_{i=1}^d for any unitary operator U. If we denote |δ_i⟩ = U|ɛ_i⟩, then the unitary U can be written as

    U = ∑_{i=1}^d |δ_i⟩⟨ɛ_i|.   (38)

Conversely, for any two orthonormal bases, there always exists a unitary operator like Eq. (38) that transforms one basis to the other.

An important class of unitary operators are permutations, which are operators that act invariantly on the set of computational basis vectors {|1⟩, |2⟩, …, |d⟩}. Any permutation has the form

    Π = ∑_{i=1}^d |π(i)⟩⟨i|,   (39)

where π is a bijection acting on the set of integers {1, …, d}. It is easy to verify that Π†Π = ΠΠ† = I.

Using unitary operators, the spectral decomposition can be expressed in an alternative form. Let {|λ_k⟩}_{k=1}^d be a complete eigenbasis of some normal operator N. Define the unitary operator U = ∑_{k=1}^d |λ_k⟩⟨k|, which rotates the computational basis {|k⟩}_{k=1}^d into the eigenbasis of N. Then the spectral decomposition of N can be written as

    N = UΛU†,   (40)

where Λ is a d × d matrix diagonal in the computational basis,

    Λ = diag(λ₁, λ₂, …, λ_d),

with λ_k the eigenvalue associated with eigenvector |λ_k⟩. When applying a function f to the matrix N, we have

    f̂(N) = U f(Λ) U†,   (41)

where f(Λ) is the diagonal matrix obtained by applying f to each of the eigenvalues of N on the diagonal.

Unitary operators are defined as operators that map a Hilbert space H onto itself. A more general class of operators consists of isometries. An isometry between two Hilbert spaces H₁ and H₂ is a linear operator U ∈ L(H₁, H₂) that preserves inner products; that is, ⟨ψ|U†U|φ⟩ = ⟨ψ|φ⟩ for all |ψ⟩, |φ⟩ ∈ H₁. If dim(H₁) = dim(H₂), then this mapping is necessarily invertible, and the spaces are called isomorphic, a relationship we denote by H₁ ≅ H₂. For example, Eq. (9) shows that every d-dimensional Hilbert space is isomorphic to ℂ^d. Consequently, any two Hilbert spaces of the same dimension are isomorphic. Mathematicians like to think of two isomorphic vector spaces as being equivalent, since any property or result derived in one space can be equivalently mapped to a property or result in the other.

We finally note the useful fact that if H₁ ⊆ H₂, then any isometry U ∈ L(H₁, H₂) can be extended to a unitary transformation U′ ∈ L(H₂). By an extension of U, it is meant that U′|ψ⟩ = U|ψ⟩ for every |ψ⟩ ∈ H₁.

Lemma 2. If U ∈ L(H₁, H₂) is an isometry with H₁ ⊆ H₂, then there exists a unitary operator U′ ∈ L(H₂) that is an extension of U.

Proof. With U being an isometry, we can write U = ∑_{i=1}^{d₁} |ɛ_i⟩⟨i|, where {|i⟩}_{i=1}^{d₁} is an orthonormal basis of H₁ and {|ɛ_i⟩}_{i=1}^{d₁} is an orthonormal basis for some subspace S of H₂. Thus dim(H₁⊥) = dim(S⊥) = d₂ − d₁, where d₂ is the dimension of H₂. We can therefore identify an orthonormal basis {|i⟩}_{i=d₁+1}^{d₂} for H₁⊥ and an orthonormal basis {|ɛ_i⟩}_{i=d₁+1}^{d₂} for S⊥, so that the map U′ = ∑_{i=1}^{d₂} |ɛ_i⟩⟨i| is a unitary extension of U. ∎

2.2.4 Hermitian Operators

A special type of normal operator are those satisfying A = A†. Such operators are called hermitian, and we will let Herm(H) ⊆ L(H) denote the space of all hermitian operators acting on H. Proposition 2 says that λ* is an eigenvalue of A† with eigenspace V_λ whenever λ is an eigenvalue of A with the same eigenspace V_λ. In other words, if |ψ⟩ ∈ V_λ is a nonzero vector, then A|ψ⟩ = λ|ψ⟩ and A†|ψ⟩ = λ*|ψ⟩. Since A = A† for hermitian operators, this immediately implies that λ = λ*. We have thus proven an important fact.

Proposition 4. Every hermitian operator has real eigenvalues.

Less frequently encountered objects in quantum mechanics are anti-hermitian operators. These are normal
operators satisfying A = −A†. In contrast to hermitian operators, all the eigenvalues of an anti-hermitian operator are purely imaginary.

2.2.5 Positive Operators

A normal operator A is called positive (or positive semi-definite) if all its eigenvalues are nonnegative. It is called positive definite if all its eigenvalues are positive. An alternative characterization of positive operators is the following.

Proposition 5. A normal operator A is positive if and only if ⟨ψ|A|ψ⟩ ≥ 0 for all |ψ⟩ ∈ H.

If R ∈ L(H₁, H₂) is an arbitrary operator, then R†R ∈ L(H₁) and RR† ∈ L(H₂) are both positive, since ⟨ψ|R†R|ψ⟩ = ‖R|ψ⟩‖² ≥ 0 for all |ψ⟩ (and likewise for RR†). We will let Pos(H) ⊆ L(H) denote the set of all positive operators acting on H.

2.3 Functions of Normal Operators

Let f : X → ℂ be a complex-valued function whose domain is X ⊆ ℂ. We would like to extend this function to be a mapping of normal operators. If N has eigenvalues lying in X, then we can

define a new operator

    f̂(N) = Σ_i f(λ_i) P_{λ_i},   (42)

where N = Σ_i λ_i P_{λ_i} is the spectral decomposition of N. For example, if f(x) = x², then f̂(N) = Σ_i λ_i² P_{λ_i}. Furthermore, since

    N² = (Σ_i λ_i P_{λ_i})(Σ_j λ_j P_{λ_j}) = Σ_i λ_i² P_{λ_i},   (43)

we see that f̂(N) = N². In other words, f̂ has the same functional form as f, except applied to normal matrices having eigenvalues in the domain of f.

In many cases we will be interested in functions whose domain does not include zero. In this case, we can still consider the operator obtained by applying the function to the nonzero eigenvalues of N; this is often described as restricting the function to the support of N. For example, the logarithm f(x) = log(x) is only defined on the interval (0, +∞). For a positive operator X = Σ_i λ_i P_{λ_i}, we define

    log(X) := Σ_{λ_i ≠ 0} log(λ_i) P_{λ_i}.   (44)

Likewise, as a function on ℂ, the multiplicative inverse is given by f(z) = z⁻¹ = 1/z. For any normal operator N = Σ_{λ_i ≠ 0} λ_i P_{λ_i}, we can define

    N⁻¹ := Σ_{λ_i ≠ 0} λ_i⁻¹ P_{λ_i}.   (45)

Note that

    N⁻¹N = NN⁻¹ = (Σ_{λ_i ≠ 0} λ_i P_{λ_i})(Σ_{λ_j ≠ 0} λ_j⁻¹ P_{λ_j}) = Σ_{λ_i ≠ 0} P_{λ_i} = P_{supp(N)}.   (46)

Hence, when N has a zero eigenvalue, N⁻¹N is not the identity on H but rather the projector onto supp(N) ⊂ H.

2.4 The Singular Value Decomposition

In this section we derive a decomposition theorem for a general linear operator R ∈ L(H_1, H_2). As observed above, the operators R†R and RR† are always positive. What may not be obvious is that these two operators have the same nonzero eigenvalues; this will be one consequence of the Singular Value Decomposition that we prove below. For a spectral decomposition R†R = Σ_{k=1}^d λ_k |λ_k⟩⟨λ_k| with eigenvalues λ_k, the non-negative numbers

    σ_k := √λ_k,   k = 1, …, d   (47)

are called the singular values of R. With multiplicity of values included, the number r of nonzero singular values equals the rank of R. Note that the set of vectors {|λ_k⟩}_{λ_k ≠ 0} spans ker(R)^⊥.

Theorem 3 (Singular Value Decomposition). Any R ∈ L(H_1, H_2) can be decomposed as

    R = U Λ_σ V†,   (48)
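Theorem 3 is easy to check numerically. The following sketch (illustrative only, not part of the original notes; it assumes NumPy is available, whose np.linalg.svd returns U, the singular values, and V† in that order) draws a random R ∈ L(ℂ³, ℂ²) and verifies Eq. (48) together with the rank statement above:

```python
import numpy as np

rng = np.random.default_rng(0)

# A random operator R in L(H1, H2) with dim(H1) = 3 and dim(H2) = 2.
R = rng.standard_normal((2, 3)) + 1j * rng.standard_normal((2, 3))

# numpy returns U (2x2), the singular values, and V-dagger (3x3).
U, s, Vdag = np.linalg.svd(R)

# Lambda_sigma of Eq. (49): a 2x3 matrix, singular values on the diagonal.
Lam = np.zeros((2, 3))
np.fill_diagonal(Lam, s)

assert np.allclose(U @ Lam @ Vdag, R)                      # Eq. (48)
assert np.allclose(U.conj().T @ U, np.eye(2))              # U unitary
assert np.allclose(Vdag @ Vdag.conj().T, np.eye(3))        # V unitary
# The number of nonzero singular values equals the rank of R.
assert np.count_nonzero(s > 1e-12) == np.linalg.matrix_rank(R)
```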

where U ∈ L(H_2) and V ∈ L(H_1) are both unitary operators, and Λ_σ ∈ L(H_1, H_2) is the operator whose dim(H_2) × dim(H_1) matrix representation has the r nonzero singular values of R listed along the diagonal and 0's everywhere else:

    Λ_σ = ⎡ σ_1            ⎤
          ⎢      ⋱         ⎥
          ⎢        σ_r     ⎥   (49)
          ⎣             0  ⎦

Proof. By Thm. 1, the dimensions of ker(R)^⊥ and rng(R) are equal, and hence there exists an isomorphic mapping T : rng(R) → ker(R)^⊥. Define the operator R_0 = TR. With Σ_{k=1}^d λ_k |λ_k⟩⟨λ_k| being the spectral decomposition of R†R, note that R_0†R_0 = Σ_{σ_k ≠ 0} σ_k² |λ_k⟩⟨λ_k|. We can therefore decompose R_0 = U_0 √(R_0†R_0), where

    U_0 = Σ_{σ_k ≠ 0} σ_k⁻¹ R_0 |λ_k⟩⟨λ_k|   (50)

is a unitary operator when acting on ker(R)^⊥. Indeed, it is easy to see that U_0†U_0 = P_{ker(R)^⊥}. Putting it all together, we therefore have

    R = T† U_0 Σ_{σ_k ≠ 0} σ_k |λ_k⟩⟨λ_k| = Σ_{σ_k ≠ 0} σ_k |λ'_k⟩⟨λ_k|,   (51)

where {|λ'_k⟩}_{σ_k ≠ 0} forms an orthonormal basis for rng(R). Let U then be any unitary acting on H_2 that rotates the |λ'_k⟩ into the first r computational basis vectors of H_2, and likewise let V be any unitary acting on H_1 transforming the |λ_k⟩ into the first r computational basis vectors of H_1. ∎

From the Singular Value Decomposition we can now easily verify the fact that R†R and RR† have equal nonzero eigenvalues. One just directly computes

    R†R = V Λ_σ†Λ_σ V†,   RR† = U Λ_σΛ_σ† U†,   (52)

and both Λ_σ†Λ_σ and Λ_σΛ_σ† are diagonal with the σ_k² as their nonzero entries.

We note one further relationship involving matrices satisfying AA† = BB†. Its proof makes use of the following simple proposition.

Proposition 6. Let A and A′ be hermitian operators having the same eigenspaces, and suppose W is some full-rank operator satisfying AW = WA′. Then A = A′, and W acts invariantly on the eigenspaces of A. That is, ⟨λ_i|W|λ_j⟩ = 0 if |λ_i⟩ and |λ_j⟩ belong to distinct eigenspaces of A.

Proof. Let λ_k denote the eigenvalues of A and λ′_k the eigenvalues of A′. From the equality AW = WA′ and the assumption that A and A′ have the same eigenspaces, it follows that λ_i ⟨λ_i|W|λ_j⟩ = λ′_j ⟨λ_i|W|λ_j⟩. Hence, since W is full rank, we must have λ_i = λ′_i (and thus A = A′) by considering the case i = j in this equation. It likewise follows that ⟨λ_i|W|λ_j⟩ = 0 whenever λ_i ≠ λ_j. ∎
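The equality of the nonzero eigenvalues of R†R and RR†, computed in Eq. (52), can likewise be confirmed numerically. The sketch below (illustrative, assuming NumPy) uses a generic operator from ℂ⁴ to ℂ², so that R†R carries two extra zero eigenvalues while its nonzero spectrum matches that of RR†:

```python
import numpy as np

rng = np.random.default_rng(1)
R = rng.standard_normal((2, 4)) + 1j * rng.standard_normal((2, 4))

s = np.linalg.svd(R, compute_uv=False)       # singular values of R

ev_RdR = np.linalg.eigvalsh(R.conj().T @ R)  # 4 eigenvalues (two are ~0)
ev_RRd = np.linalg.eigvalsh(R @ R.conj().T)  # 2 eigenvalues

# The nonzero eigenvalues of both products are the sigma_k^2.
nonzero_RdR = np.sort(ev_RdR[ev_RdR > 1e-10])
assert np.allclose(nonzero_RdR, np.sort(s**2))
assert np.allclose(np.sort(ev_RRd), np.sort(s**2))
```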

Lemma 3. For A ∈ L(H_2, H_1) and B ∈ L(H_3, H_1), if AA† = BB† then there exists a unitary U ∈ L(H_1) such that A = UΛV_1† and B = UΛV_2† are Singular Value Decompositions. A similar statement holds whenever A†A = B†B for A ∈ L(H_1, H_2) and B ∈ L(H_1, H_3).

Proof. Let A = U_A Λ_A V_A† and B = U_B Λ_B V_B† be Singular Value Decompositions of A and B. The assumption AA† = BB† then implies that Λ_A² W = W Λ_B², where W = U_A† U_B. From the previous proposition, we have Λ_A = Λ_B =: Λ, and W applies block-unitary transformations on the blocks of Λ having the same singular value. Hence we have the commutation relation WΛ = ΛW, and consequently

    B = U_B Λ V_B† = U_A W Λ V_B† = U_A Λ (V_B W†)†,   (53)

which is what we wanted to prove, with U = U_A, V_1 = V_A, and V_2 = V_B W†. ∎

3 Tensor Products

3.1 Tensor Product Spaces and Vectors

The tensor product is a construction that combines different Hilbert spaces to form one larger Hilbert space. As we will see, tensor products are used to describe quantum systems composed of multiple subsystems. In anticipation of this application, we will henceforth label different Hilbert spaces by superscript capital letters (i.e. H^A, H^B, etc.) instead of by subscript numbers (i.e. H_1, H_2, etc.). The reason is that later we will associate Hilbert space H^A with Alice, H^B with Bob, etc.

If H^A is a Hilbert space with computational basis {|i⟩^A}_{i=1}^{d_A} and H^B is a Hilbert space with computational basis {|j⟩^B}_{j=1}^{d_B}, then their tensor product space is a Hilbert space H^A ⊗ H^B with orthonormal basis {|i⟩^A ⊗ |j⟩^B}_{i,j=1}^{d_A,d_B}, called a tensor product basis. Every vector |ψ⟩^{AB} ∈ H^A ⊗ H^B can be written as a linear combination of the basis vectors:

    |ψ⟩^{AB} = Σ_{i,j} c_{ij} |i⟩^A ⊗ |j⟩^B.   (54)

In order to equip the tensor product space with more structure, we need a precise way to relate elements in H^A and H^B to elements in H^A ⊗ H^B. The tensor product is a map ⊗ : H^A × H^B → H^A ⊗ H^B that generates the tensor product vector |ψ⟩^A ⊗ |φ⟩^B ∈ H^A ⊗ H^B for |ψ⟩^A ∈ H^A and |φ⟩^B ∈ H^B, such that

1. (c|ψ⟩^A) ⊗ |φ⟩^B = |ψ⟩^A ⊗ (c|φ⟩^B) = c(|ψ⟩^A ⊗ |φ⟩^B) for all |ψ⟩^A ∈ H^A, |φ⟩^B ∈ H^B, c ∈ ℂ;
2. (|ψ⟩^A + |ω⟩^A) ⊗ |φ⟩^B = |ψ⟩^A ⊗ |φ⟩^B + |ω⟩^A ⊗ |φ⟩^B for all |ψ⟩^A, |ω⟩^A ∈ H^A, |φ⟩^B ∈ H^B;
3. |ψ⟩^A ⊗ (|φ⟩^B + |ω⟩^B) = |ψ⟩^A ⊗ |φ⟩^B + |ψ⟩^A ⊗ |ω⟩^B for all |ψ⟩^A ∈ H^A, |φ⟩^B, |ω⟩^B ∈ H^B.

These three properties are known as bilinearity. Hence, the tensor product is a bilinear map from H^A × H^B to H^A ⊗ H^B. One important consequence of bilinearity is that it allows us to form a tensor product basis for H^A ⊗ H^B using any pair of bases for H^A and H^B. Indeed, suppose that {|i′⟩^A}_{i=1}^{d_A} and {|j′⟩^B}_{j=1}^{d_B} are

arbitrary orthonormal bases for H^A and H^B respectively. Then we can expand the computational basis vectors in these new bases as

    |i⟩^A = Σ_{j=1}^{d_A} a_{ij} |j′⟩^A   for i = 1, …, d_A,
    |i⟩^B = Σ_{j=1}^{d_B} b_{ij} |j′⟩^B   for i = 1, …, d_B.

Substituting these into Eq. (54) and using bilinearity gives

    |ψ⟩ = Σ_{i,j} c′_{ij} |i′⟩^A ⊗ |j′⟩^B,   (55)

where c′_{ij} = Σ_{k=1}^{d_A} Σ_{l=1}^{d_B} a_{ki} b_{lj} c_{kl}. Hence, {|i′⟩^A ⊗ |j′⟩^B}_{i,j=1}^{d_A,d_B} also provides a basis for H^A ⊗ H^B.

For notational simplicity, we will drop the superscript labels A and B on the kets when the associated Hilbert spaces are clear. Also, the tensor product of two vectors |ψ⟩ ⊗ |φ⟩ is often written as |ψ⟩|φ⟩, or even just |ψφ⟩.

It should be stressed that the tensor product space H^A ⊗ H^B is much larger than the set of tensor product vectors. The latter is given by S = {|α⟩ ⊗ |β⟩ : |α⟩ ∈ H^A, |β⟩ ∈ H^B}, while the former consists of all linear combinations of vectors in S. We have S ⊂ H^A ⊗ H^B, but S ≠ H^A ⊗ H^B since not every |ψ⟩ ∈ H^A ⊗ H^B is a tensor product of two vectors. That is, |ψ⟩ ≠ |α⟩ ⊗ |β⟩ for most |ψ⟩ ∈ H^A ⊗ H^B. We will later see that such vectors |ψ⟩ correspond to entangled quantum states.

We next discuss the structure of linear operators acting on H^A ⊗ H^B. If A ∈ L(H^A, H^{A′}) and B ∈ L(H^B, H^{B′}), then their tensor product A ⊗ B is a linear operator H^A ⊗ H^B → H^{A′} ⊗ H^{B′} with action defined by

    (A ⊗ B) Σ_{i,j} c_{ij} |i⟩ ⊗ |j⟩ = Σ_{i,j} c_{ij} A|i⟩ ⊗ B|j⟩.   (56)

This action is linear, so that if A, C ∈ L(H^A, H^{A′}) and B, D ∈ L(H^B, H^{B′}), then A ⊗ B + C ⊗ D ∈ L(H^A ⊗ H^B, H^{A′} ⊗ H^{B′}) with action

    (A ⊗ B + C ⊗ D)|ψ⟩ = (A ⊗ B)|ψ⟩ + (C ⊗ D)|ψ⟩   (57)

for all |ψ⟩ ∈ H^A ⊗ H^B. Furthermore, from Eq. (56) we see that the product of tensor product operators satisfies

    (A ⊗ B)(C ⊗ D) = AC ⊗ BD.   (58)

One important example of Eq. (56) is known as a partial contraction. If H^{A′} = ℂ, then A ∈ L(H^A, ℂ) is a bra acting on H^A; i.e. A = ⟨φ| for some |φ⟩ ∈ H^A. Letting H^B = H^{B′}, the partial contraction of |φ⟩ on system A is the operator ⟨φ|^A ⊗ I^B ∈ L(H^A ⊗ H^B, H^B) such that for any |ψ⟩^{AB} = Σ_{i,j} c_{ij} |i⟩^A |j⟩^B,

    (⟨φ|^A ⊗ I^B)(|ψ⟩^{AB}) = (⟨φ|^A ⊗ I^B) Σ_{i,j} c_{ij} |i⟩^A |j⟩^B = Σ_{i,j} c_{ij} ⟨φ|i⟩ |j⟩^B.   (59)
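The partial contraction of Eq. (59) amounts to simple index arithmetic, as the following sketch shows (illustrative only, assuming NumPy; here np.kron(phi.conj(), np.eye(d_B)) serves as the matrix representation of ⟨φ|^A ⊗ I^B):

```python
import numpy as np

d_A, d_B = 2, 3
rng = np.random.default_rng(2)

# |psi> in H^A ⊗ H^B, stored as the coefficient array c_ij of Eq. (54).
c = rng.standard_normal((d_A, d_B)) + 1j * rng.standard_normal((d_A, d_B))
psi = c.reshape(d_A * d_B)          # vector in the basis |i>|j>

phi = rng.standard_normal(d_A) + 1j * rng.standard_normal(d_A)

# <phi|^A ⊗ I^B as a (d_B x d_A*d_B) matrix, applied to |psi>.
contraction = np.kron(phi.conj(), np.eye(d_B))
out = contraction @ psi

# Eq. (59): the result is sum_ij c_ij <phi|i> |j>^B.
expected = sum(c[i] * phi[i].conj() for i in range(d_A))
assert np.allclose(out, expected)
```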

This is called a partial contraction since it essentially contracts the two spaces H^A and H^B down to the single space H^B. Also note that if |γ⟩^{AB} = Σ_{i,j} a_{ij} |i⟩ ⊗ |j⟩ and |ω⟩^{AB} = Σ_{i,j} b_{ij} |i⟩ ⊗ |j⟩ are two vectors in H^A ⊗ H^B, then their inner product represents a full contraction:

    ⟨γ|ω⟩ = (Σ_{i,j} ā_{ij} ⟨i|^A ⊗ ⟨j|^B)(Σ_{k,l} b_{kl} |k⟩^A ⊗ |l⟩^B) = Σ_{i,j,k,l} ā_{ij} b_{kl} ⟨i|k⟩^A ⟨j|l⟩^B = Σ_{i,j} ā_{ij} b_{ij}.   (60)

Next, recall that any operator A ∈ L(H^A, H^{A′}) and any operator B ∈ L(H^B, H^{B′}) can be expressed as

    A = Σ_{i=1}^{d_{A′}} Σ_{k=1}^{d_A} a_{ik} |i⟩⟨k|,   B = Σ_{j=1}^{d_{B′}} Σ_{l=1}^{d_B} b_{jl} |j⟩⟨l|.

Likewise, any operator K ∈ L(H^A ⊗ H^B, H^{A′} ⊗ H^{B′}) can be expressed as

    K = Σ_{i=1}^{d_{A′}} Σ_{k=1}^{d_A} Σ_{j=1}^{d_{B′}} Σ_{l=1}^{d_B} c_{ij,kl} |i⟩⟨k| ⊗ |j⟩⟨l|.   (61)

Analogous to the case of vectors in H^A ⊗ H^B, not every K ∈ L(H^A ⊗ H^B, H^{A′} ⊗ H^{B′}) is a tensor product of operators from L(H^A, H^{A′}) and L(H^B, H^{B′}).

The matrix representation of a tensor product is given by the Kronecker product of matrices. That is, if

    A = ⎡ a_{11}      …  a_{1d_A}      ⎤       B = ⎡ b_{11}      …  b_{1d_B}      ⎤
        ⎢   ⋮               ⋮          ⎥,          ⎢   ⋮               ⋮          ⎥   (62)
        ⎣ a_{d_{A′}1} …  a_{d_{A′}d_A} ⎦           ⎣ b_{d_{B′}1} …  b_{d_{B′}d_B} ⎦

are matrix representations for A ∈ L(H^A, H^{A′}) and B ∈ L(H^B, H^{B′}), then A ⊗ B is obtained by replacing each entry a_{ij} of A with the block a_{ij}B (Eq. (63)). An easy way to remember this construction is in terms of block matrices:

    A ⊗ B = ⎡ a_{11}B      a_{12}B  …  a_{1d_A}B      ⎤
            ⎢ a_{21}B      a_{22}B  …  a_{2d_A}B      ⎥   (64)
            ⎢   ⋮                         ⋮           ⎥
            ⎣ a_{d_{A′}1}B          …  a_{d_{A′}d_A}B ⎦

Here, each element a_{ij}B is a d_{B′} × d_B submatrix in which a_{ij} multiplies every element of B. For example, if

    σ_x = ⎡ 0  1 ⎤ ,   σ_z = ⎡ 1   0 ⎤ ,   (65)
          ⎣ 1  0 ⎦          ⎣ 0  −1 ⎦

then

    σ_x ⊗ σ_z = ⎡ 0  0  1  0 ⎤        σ_z ⊗ σ_x = ⎡ 0  1  0  0 ⎤
                ⎢ 0  0  0 −1 ⎥ ,                   ⎢ 1  0  0  0 ⎥ .   (66)
                ⎢ 1  0  0  0 ⎥                     ⎢ 0  0  0 −1 ⎥
                ⎣ 0 −1  0  0 ⎦                     ⎣ 0  0 −1  0 ⎦

This way of representing tensor products also applies to kets and bras. For example,

    |α⟩ = (a_1, a_2, …, a_{d_A})^T,  |β⟩ = (b_1, b_2, …, b_{d_B})^T  ⇒  |α⟩ ⊗ |β⟩ = (a_1b_1, a_1b_2, …, a_1b_{d_B}, a_2b_1, …, a_{d_A}b_{d_B})^T.   (67)

For a general vector |ψ⟩ that is not necessarily a tensor product of two vectors, we have

    |ψ⟩ = Σ_{i,j} c_{ij} |i⟩|j⟩  ⇒  |ψ⟩ = (c_{11}, c_{12}, …, c_{1d_B}, c_{21}, …, c_{d_A d_B})^T.   (68)

The case of H^A ⊗ H^B = ℂ² ⊗ ℂ² is so important that we describe it explicitly here. From Eq. (10), we see that the tensor product basis vectors have the matrix representations

    |00⟩ = (1,0,0,0)^T,  |01⟩ = (0,1,0,0)^T,  |10⟩ = (0,0,1,0)^T,  |11⟩ = (0,0,0,1)^T.   (69)

Then

    |ψ⟩ = a|00⟩ + b|01⟩ + c|10⟩ + d|11⟩  ⇒  |ψ⟩ = (a, b, c, d)^T.   (70)

3.2 The Isomorphism H^A ⊗ H^B ≅ L(H^B, H^A) and the Schmidt Decomposition

Recall that an isomorphism is just a linear invertible mapping between two vector spaces that preserves inner products. The first isomorphism theorem says that L(H^B, H^A) is isomorphic to H^A ⊗ H^B for any two Hilbert spaces H^A and H^B. To see this, we must describe an invertible linear map from H^A ⊗ H^B to L(H^B, H^A) that preserves inner products. But how should we define an inner product between two operators in L(H^B, H^A)? Instead of answering this question directly,

let us first describe an invertible mapping from H^A ⊗ H^B to L(H^B, H^A), and then in the next section we will use this map to define an inner product on L(H^B, H^A) so that the two spaces become isomorphic.

Let {|i⟩}_{i=1}^{d_A} be the computational basis specified for H^A and {|j⟩}_{j=1}^{d_B} the computational basis for H^B. Then we introduce the following one-to-one association between elements in H^A ⊗ H^B and elements in L(H^B, H^A):

    |ψ⟩ = Σ_{i,j} m_{ij} |i⟩|j⟩  ↔  Σ_{i,j} m_{ij} |i⟩⟨j| =: M_{|ψ⟩}.   (71)

Using our established notation, we denote the equivalence between |ψ⟩ and M_{|ψ⟩} as |ψ⟩ ≅ M_{|ψ⟩}. Thus, any vector |ψ⟩ in the tensor product space H^A ⊗ H^B can be uniquely identified with a linear operator M_{|ψ⟩} ∈ L(H^B, H^A) and vice-versa. In practice this equivalence is very easy to use, since one just needs to remember the rule

    |i⟩|j⟩ ↔ |i⟩⟨j|.   (72)

However, there are a few words of caution here. First, this mapping is basis-dependent, since it is defined in terms of the computational bases for H^A and H^B. For example, suppose we had chosen some other bases {|i′⟩}_{i=1}^{d_A} and {|j′⟩}_{j=1}^{d_B} and defined the mapping between H^A ⊗ H^B and L(H^B, H^A) according to the rule |i′⟩|j′⟩ ↔ |i′⟩⟨j′|. Then we would end up associating a different operator with |ψ⟩ than we would under the original mapping |i⟩|j⟩ ↔ |i⟩⟨j|. The reason is that in the |i′⟩|j′⟩ basis, |ψ⟩ will have different components m′_{ij}, and therefore it will be identified with the operator Σ_{i,j} m′_{ij} |i′⟩⟨j′|, which is different from Σ_{i,j} m_{ij} |i⟩⟨j|. Typically the subscript |ψ⟩ on M_{|ψ⟩} will be dropped when the associated vector in H^A ⊗ H^B is clear.

Another potential source of confusion arises when comparing Eq. (71) to the conversion between kets and bras. Let |α⟩ = Σ_{i=1}^{d_A} a_i |i⟩ and |β⟩ = Σ_{j=1}^{d_B} b_j |j⟩ be any two vectors expanded in the computational bases. Then according to the mapping of Eq. (71), the operator associated with the vector |α⟩|β⟩ is

    M_{|α⟩|β⟩} = Σ_{i,j} a_i b_j |i⟩⟨j| ≠ |α⟩⟨β|.   (73)

The reason for this inequality is that the bra of |β⟩ is given by ⟨β| = Σ_{j=1}^{d_B} b̄_j ⟨j|. Hence, if we define the state |β*⟩ := Σ_{j=1}^{d_B} b̄_j |j⟩, then we have

    M_{|α⟩|β⟩} = |α⟩⟨β*|.   (74)

The mapping of Eq. (71) can also be interpreted in operational terms. For a general d-dimensional Hilbert space H with computational basis {|j⟩}_{j=1}^{d}, introduce the vector

    |φ⁺_d⟩ = Σ_{j=1}^{d} |jj⟩,   (75)

which is an element of H ⊗ H. Here, H ⊗ H represents two copies of the same Hilbert space H. Then for any |ψ⟩ ∈ H^A ⊗ H^B with associated operator M = M_{|ψ⟩} ∈ L(H^B, H^A), we can write

    |ψ⟩ = (M ⊗ I)|φ⁺_{d_B}⟩.   (76)
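Eq. (76) can be tested directly. In the sketch below (illustrative, assuming NumPy), the correspondence |ψ⟩ ↔ M_{|ψ⟩} of Eq. (71) is nothing but a reshape of the coefficient array m_ij:

```python
import numpy as np

d_A, d_B = 2, 3
rng = np.random.default_rng(3)

# The coefficients m_ij define both |psi> and M_|psi>; converting between
# them (Eq. (71)) is literally a reshape of the same array.
m = rng.standard_normal((d_A, d_B)) + 1j * rng.standard_normal((d_A, d_B))
psi = m.reshape(d_A * d_B)   # |psi> = sum_ij m_ij |i>|j>
M = m                        # M_|psi> = sum_ij m_ij |i><j|

# Unnormalized vector |phi+_{d_B}> = sum_j |jj> in H^B ⊗ H^B.
phi_plus_B = np.eye(d_B).reshape(d_B * d_B)

# Eq. (76): |psi> = (M ⊗ I)|phi+_{d_B}>.
assert np.allclose(np.kron(M, np.eye(d_B)) @ phi_plus_B, psi)
```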

This equality can be verified by noting that the action of M_{|ψ⟩} on a basis vector |j⟩ is given by

    M|j⟩^B = Σ_{i=1}^{d_A} m_{ij} |i⟩^A.   (77)

Then

    (M ⊗ I)|φ⁺_{d_B}⟩ = Σ_j (M|j⟩) ⊗ |j⟩ = Σ_j (Σ_i m_{ij} |i⟩^A) ⊗ |j⟩^B = Σ_{i,j} m_{ij} |i⟩^A |j⟩^B = |ψ⟩.   (78)

Thus, we can think of M_{|ψ⟩} as the operator that, when acting on the left side of |φ⁺_{d_B}⟩, generates the state |ψ⟩. It is also possible to switch the order of summations in Eq. (78). That is, we have

    |ψ⟩ = Σ_{i,j} m_{ij} |i⟩^A |j⟩^B = Σ_i |i⟩^A ⊗ (Σ_j m_{ij} |j⟩^B).   (79)

The term in parentheses can be written as Σ_{j=1}^{d_B} m_{ij} |j⟩^B = M^T |i⟩^A, where M^T ∈ L(H^A, H^B) is the operator obtained by taking the transpose of M in the computational basis. Thus, we can write

    |ψ⟩ = (I ⊗ M^T)|φ⁺_{d_A}⟩.   (80)

Equations (76) and (80) are so useful in general that we state them in the following lemma, known as the ricochet property.

Lemma 4 (Ricochet Property). An arbitrary A ∈ L(H^A, H^B) satisfies

    (I ⊗ A)|φ⁺_{d_A}⟩ = (A^T ⊗ I)|φ⁺_{d_B}⟩,   (81)

where A^T ∈ L(H^B, H^A) is the transpose of A taken in the computational bases, which are the same bases used to define |φ⁺_{d_A}⟩ and |φ⁺_{d_B}⟩.

Our next task is to translate the Singular Value Decomposition of an operator in L(H^B, H^A) into a decomposition of vectors in H^A ⊗ H^B. This is known as the Schmidt Decomposition, and it is one of the most important tools in the study of quantum entanglement.

Theorem 4 (Schmidt Decomposition). Every vector |ψ⟩ ∈ H^A ⊗ H^B can be written as

    |ψ⟩ = Σ_{i=1}^{r} σ_i |α_i⟩ ⊗ |β_i⟩,   (82)

where {|α_i⟩}_{i=1}^{d_A} and {|β_i⟩}_{i=1}^{d_B} are orthonormal bases, called Schmidt bases, for H^A and H^B respectively, the σ_i are positive numbers known as the Schmidt coefficients of |ψ⟩, and r is known as the Schmidt rank of |ψ⟩. Equivalently, the σ_i are the nonzero singular values of the operator M_{|ψ⟩}.

Proof. The proof is almost trivial given the ricochet property. Let M_{|ψ⟩} = UΛ_σV† be the singular value decomposition of M_{|ψ⟩}. Then

    |ψ⟩ = (M_{|ψ⟩} ⊗ I)|φ⁺_{d_B}⟩ = (UΛ_σV† ⊗ I)|φ⁺_{d_B}⟩ = (UΛ_σ ⊗ I)(I ⊗ V̄)|φ⁺_{d_B}⟩ = Σ_{i=1}^{r} σ_i U|i⟩ ⊗ V̄|i⟩ = Σ_{i=1}^{r} σ_i |α_i⟩ ⊗ |β_i⟩,   (83)

where |α_i⟩ = U|i⟩ and |β_i⟩ = V̄|i⟩, and in the third equality the ricochet property moves V† to the second factor via (V† ⊗ I)|φ⁺_{d_B}⟩ = (I ⊗ V̄)|φ⁺_{d_B}⟩. ∎
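A Schmidt decomposition is therefore just one SVD away. The following sketch (illustrative, assuming NumPy) computes the Schmidt data of a random state and rebuilds it from Eq. (82); note that the rows of the V† factor returned by np.linalg.svd serve directly as the vectors |β_i⟩ = V̄|i⟩:

```python
import numpy as np

d_A, d_B = 2, 3
rng = np.random.default_rng(4)

# A random normalized |psi> in H^A ⊗ H^B and its operator M_|psi>.
psi = rng.standard_normal(d_A * d_B) + 1j * rng.standard_normal(d_A * d_B)
psi /= np.linalg.norm(psi)
M = psi.reshape(d_A, d_B)

# Schmidt coefficients = singular values; |alpha_i> = U|i> are the columns
# of U, and |beta_i> = conj(V)|i> are the rows of the returned V-dagger.
U, s, Vdag = np.linalg.svd(M, full_matrices=False)
assert np.allclose(Vdag @ Vdag.conj().T, np.eye(len(s)))  # beta orthonormal

# Rebuild |psi> = sum_i sigma_i |alpha_i> ⊗ |beta_i>  (Eq. (82)).
rebuilt = sum(s[i] * np.kron(U[:, i], Vdag[i, :]) for i in range(len(s)))
assert np.allclose(rebuilt, psi)

# Normalization: sum_i sigma_i^2 = <psi|psi> = 1.
assert np.isclose(np.sum(s**2), 1.0)
```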

3.3 The Trace and Partial Trace

Let us now return to the question of establishing a true isomorphism between H^A ⊗ H^B and L(H^B, H^A). Given our one-to-one linear mapping in Eq. (71), what remains is to identify an inner product on L(H^B, H^A) that is preserved under this mapping. To be clear, we need a function ω : L(H^B, H^A) × L(H^B, H^A) → ℂ satisfying the axioms of the inner product and such that

    ω(Σ_{i,j} n_{ij} |i⟩⟨j|, Σ_{i,j} m_{ij} |i⟩⟨j|) = ⟨φ|ψ⟩ = Σ_{i,j} n̄_{ij} m_{ij},   (84)

for any |ψ⟩ = Σ_{i,j} m_{ij} |i⟩|j⟩ and |φ⟩ = Σ_{i,j} n_{ij} |i⟩|j⟩. This equality can be achieved using a function known as the operator trace. For any X ∈ L(H), its trace is defined as

    Tr[X] = Σ_{i=1}^{d} ⟨ε_i|X|ε_i⟩,   (85)

where {|ε_i⟩}_{i=1}^{d} is any orthonormal basis for H. Note that Tr : L(H) → ℂ is a linear mapping. Then we can introduce an inner product on L(H^B, H^A), known as the Hilbert-Schmidt inner product, given by

    ω_HS(N, M) := Tr[N†M].   (86)

One can easily verify that the Hilbert-Schmidt inner product satisfies Eq. (84), thereby establishing the isomorphism H^A ⊗ H^B ≅ L(H^B, H^A).

The trace of an operator is used frequently in quantum mechanics. One important property of the trace is known as cyclicity. This says that

    Tr[XYZ] = Tr[ZXY] = Tr[YZX].   (87)

This can be easily verified from the definition. Since Σ_{i=1}^{d} |ε_i⟩⟨ε_i| = I, we have, for instance,

    Tr[XYZ] = Σ_i ⟨ε_i|XYZ|ε_i⟩ = Σ_{i,j,k} ⟨ε_i|X|ε_j⟩⟨ε_j|Y|ε_k⟩⟨ε_k|Z|ε_i⟩ = Σ_{i,j,k} ⟨ε_k|Z|ε_i⟩⟨ε_i|X|ε_j⟩⟨ε_j|Y|ε_k⟩ = Tr[ZXY].   (88)

For a rank-one operator K = |ψ⟩⟨φ|, its trace is easily found to be

    Tr[|ψ⟩⟨φ|] = Σ_i ⟨ε_i|ψ⟩⟨φ|ε_i⟩ = Σ_i ⟨φ|ε_i⟩⟨ε_i|ψ⟩ = ⟨φ|ψ⟩.   (89)

On tensor product spaces, one can also define a partial trace. For any K ∈ L(H^A ⊗ H^B) written as a sum of tensor products,

    K = Σ_{i=1}^{s} c_i A_i ⊗ B_i   (90)

with A_i ∈ L(H^A) and B_i ∈ L(H^B), the partial trace of K over H^A is the operator in L(H^B) given by

    Tr_A[K] := Σ_{i=1}^{s} c_i Tr[A_i] B_i.   (91)
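The definition (91) can be checked against a direct computation on the matrix of K. In the sketch below (illustrative, assuming NumPy), the partial trace over H^A is performed by viewing K as a four-index tensor ⟨ij|K|kl⟩ and summing over the A indices:

```python
import numpy as np

d_A, d_B = 2, 3
rng = np.random.default_rng(5)

def rand_op(d):
    return rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))

# K = sum_i c_i A_i ⊗ B_i as in Eq. (90).
c = [0.7, -1.3]
As = [rand_op(d_A), rand_op(d_A)]
Bs = [rand_op(d_B), rand_op(d_B)]
K = sum(ci * np.kron(Ai, Bi) for ci, Ai, Bi in zip(c, As, Bs))

# Tr_A by the definition (91): replace each A_i by its trace.
trA_def = sum(ci * np.trace(Ai) * Bi for ci, Ai, Bi in zip(c, As, Bs))

# Tr_A computed directly from the matrix of K, viewed as the tensor
# K[i,j,k,l] = <ij|K|kl>, summing over the A indices (i = k).
K4 = K.reshape(d_A, d_B, d_A, d_B)
trA_direct = np.einsum('ijil->jl', K4)
assert np.allclose(trA_def, trA_direct)
```

That the two computations agree regardless of how K is split into tensor products is exactly the well-definedness guaranteed by linearity.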

Likewise, the partial trace of K over H^B is

    Tr_B[K] := Σ_{i=1}^{s} c_i Tr[B_i] A_i.   (92)

Linearity of the partial trace assures that Tr_A[K] and Tr_B[K] are well-defined. For a rank-one positive operator K = |ψ⟩⟨ψ| ∈ L(H^A ⊗ H^B), the partial traces are closely connected to the Schmidt decomposition of |ψ⟩ ∈ H^A ⊗ H^B and its operator representation M_{|ψ⟩} ∈ L(H^B, H^A). Specifically, let |ψ⟩ = Σ_{i=1}^{r} σ_i |α_i⟩ ⊗ |β_i⟩ be a Schmidt decomposition of |ψ⟩. Then

    |ψ⟩⟨ψ| = Σ_{i,j=1}^{r} σ_i σ_j |α_i⟩⟨α_j| ⊗ |β_i⟩⟨β_j|.   (93)

Since the Schmidt bases are orthonormal, we have

    Tr_A[|ψ⟩⟨ψ|] = Σ_{i,j=1}^{r} σ_i σ_j Tr[|α_i⟩⟨α_j|] |β_i⟩⟨β_j| = Σ_{i=1}^{r} σ_i² |β_i⟩⟨β_i| = M_{|ψ⟩}^T M̄_{|ψ⟩},   (94)

where M̄ denotes the entrywise conjugate taken in the computational basis. Likewise,

    Tr_B[|ψ⟩⟨ψ|] = Σ_{i=1}^{r} σ_i² |α_i⟩⟨α_i| = M_{|ψ⟩} M_{|ψ⟩}†.   (95)

These relationships imply that if two vectors have the same Schmidt coefficients and the same Schmidt basis for either H^A or H^B, then the corresponding partial traces will be the same. Using Lem. 3, we can show that the converse of this is also true.

Corollary 1. Suppose that |ψ⟩^{AB} and |φ⟩^{AB} are two vectors such that Tr_B[|ψ⟩⟨ψ|] = Tr_B[|φ⟩⟨φ|]. Then |ψ⟩^{AB} and |φ⟩^{AB} have the same Schmidt coefficients, and there exists a common Schmidt basis {|α_i⟩}_{i=1}^{r} of H^A for both states. That is, there exist Schmidt decompositions

    |ψ⟩ = Σ_{i=1}^{r} σ_i |α_i⟩ ⊗ |β_i⟩,   |φ⟩ = Σ_{i=1}^{r} σ_i |α_i⟩ ⊗ |β′_i⟩.   (96)

A similar statement holds when Tr_A[|ψ⟩⟨ψ|] = Tr_A[|φ⟩⟨φ|].

Proof. This follows immediately from Lem. 3 and the ricochet property. ∎

4 Exercises

Exercise 1. Suppose that H = ℂ³ and let V be the one-dimensional subspace spanned by |ψ⟩ = √(1/3)(|0⟩ + e^{iπ/3}|1⟩ − |2⟩). Provide an orthonormal basis for V^⊥.

Exercise 2. Compute the eigenvalues and eigenvectors of the three matrices:

    σ_x = ⎡ 0  1 ⎤ ,   σ_y = ⎡ 0  −i ⎤ ,   σ_z = ⎡ 1   0 ⎤ .   (97)
          ⎣ 1  0 ⎦          ⎣ i   0 ⎦          ⎣ 0  −1 ⎦

Exercise 3. For a d-dimensional Hilbert space, compute the eigenvalues and the dimensions of the associated eigenspaces for the operators R_1 = |β⟩⟨α| and R_2 = I − |β⟩⟨α| when (a) |α⟩ = |β⟩; (b) |α⟩ and |β⟩ are orthogonal. In both cases assume that |α⟩ and |β⟩ are normalized.

Exercise 4. Suppose that {|ε_i⟩}_{i=1}^{s} and {|δ_i⟩}_{i=1}^{s} are both orthonormal bases for a subspace V ⊂ H. Show that Σ_{i=1}^{s} |ε_i⟩⟨ε_i| = Σ_{i=1}^{s} |δ_i⟩⟨δ_i| = P_V.

Exercise 5. (a) Give an example of a normal operator that is not hermitian. (b) Give an example of a unitary operator that is not hermitian. (c) Give an example of a hermitian operator that is not positive.

Exercise 6. Prove Prop. 5. That is, show that a normal operator A is positive iff ⟨ψ|A|ψ⟩ ≥ 0 for all |ψ⟩ ∈ H.

Exercise 7. Consider the operator R : ℝ² → ℝ² given by R = |0⟩⟨0| + |1⟩⟨+|, where |+⟩ = √(1/2)(|0⟩ + |1⟩). Find the singular value decomposition of R.

Exercise 8. For H^A ⊗ H^B = ℂ² ⊗ ℂ², consider the operator Y ∈ L(H^A ⊗ H^B) given by Y = I ⊗ σ_y − σ_y ⊗ I, where σ_y is defined in Exercise 2.


More information

MATH 20F: LINEAR ALGEBRA LECTURE B00 (T. KEMP)

MATH 20F: LINEAR ALGEBRA LECTURE B00 (T. KEMP) MATH 20F: LINEAR ALGEBRA LECTURE B00 (T KEMP) Definition 01 If T (x) = Ax is a linear transformation from R n to R m then Nul (T ) = {x R n : T (x) = 0} = Nul (A) Ran (T ) = {Ax R m : x R n } = {b R m

More information

MATHEMATICS 217 NOTES

MATHEMATICS 217 NOTES MATHEMATICS 27 NOTES PART I THE JORDAN CANONICAL FORM The characteristic polynomial of an n n matrix A is the polynomial χ A (λ) = det(λi A), a monic polynomial of degree n; a monic polynomial in the variable

More information

Quantum Information & Quantum Computing

Quantum Information & Quantum Computing Math 478, Phys 478, CS4803, February 9, 006 1 Georgia Tech Math, Physics & Computing Math 478, Phys 478, CS4803 Quantum Information & Quantum Computing Problems Set 1 Due February 9, 006 Part I : 1. Read

More information

Ph 219/CS 219. Exercises Due: Friday 20 October 2006

Ph 219/CS 219. Exercises Due: Friday 20 October 2006 1 Ph 219/CS 219 Exercises Due: Friday 20 October 2006 1.1 How far apart are two quantum states? Consider two quantum states described by density operators ρ and ρ in an N-dimensional Hilbert space, and

More information

Mic ael Flohr Representation theory of semi-simple Lie algebras: Example su(3) 6. and 20. June 2003

Mic ael Flohr Representation theory of semi-simple Lie algebras: Example su(3) 6. and 20. June 2003 Handout V for the course GROUP THEORY IN PHYSICS Mic ael Flohr Representation theory of semi-simple Lie algebras: Example su(3) 6. and 20. June 2003 GENERALIZING THE HIGHEST WEIGHT PROCEDURE FROM su(2)

More information

Math 443 Differential Geometry Spring Handout 3: Bilinear and Quadratic Forms This handout should be read just before Chapter 4 of the textbook.

Math 443 Differential Geometry Spring Handout 3: Bilinear and Quadratic Forms This handout should be read just before Chapter 4 of the textbook. Math 443 Differential Geometry Spring 2013 Handout 3: Bilinear and Quadratic Forms This handout should be read just before Chapter 4 of the textbook. Endomorphisms of a Vector Space This handout discusses

More information

Glossary of Linear Algebra Terms. Prepared by Vince Zaccone For Campus Learning Assistance Services at UCSB

Glossary of Linear Algebra Terms. Prepared by Vince Zaccone For Campus Learning Assistance Services at UCSB Glossary of Linear Algebra Terms Basis (for a subspace) A linearly independent set of vectors that spans the space Basic Variable A variable in a linear system that corresponds to a pivot column in the

More information

NOTES on LINEAR ALGEBRA 1

NOTES on LINEAR ALGEBRA 1 School of Economics, Management and Statistics University of Bologna Academic Year 207/8 NOTES on LINEAR ALGEBRA for the students of Stats and Maths This is a modified version of the notes by Prof Laura

More information

Page 52. Lecture 3: Inner Product Spaces Dual Spaces, Dirac Notation, and Adjoints Date Revised: 2008/10/03 Date Given: 2008/10/03

Page 52. Lecture 3: Inner Product Spaces Dual Spaces, Dirac Notation, and Adjoints Date Revised: 2008/10/03 Date Given: 2008/10/03 Page 5 Lecture : Inner Product Spaces Dual Spaces, Dirac Notation, and Adjoints Date Revised: 008/10/0 Date Given: 008/10/0 Inner Product Spaces: Definitions Section. Mathematical Preliminaries: Inner

More information

Throughout these notes we assume V, W are finite dimensional inner product spaces over C.

Throughout these notes we assume V, W are finite dimensional inner product spaces over C. Math 342 - Linear Algebra II Notes Throughout these notes we assume V, W are finite dimensional inner product spaces over C 1 Upper Triangular Representation Proposition: Let T L(V ) There exists an orthonormal

More information

Lecture Summaries for Linear Algebra M51A

Lecture Summaries for Linear Algebra M51A These lecture summaries may also be viewed online by clicking the L icon at the top right of any lecture screen. Lecture Summaries for Linear Algebra M51A refers to the section in the textbook. Lecture

More information

Linear Algebra Review. Vectors

Linear Algebra Review. Vectors Linear Algebra Review 9/4/7 Linear Algebra Review By Tim K. Marks UCSD Borrows heavily from: Jana Kosecka http://cs.gmu.edu/~kosecka/cs682.html Virginia de Sa (UCSD) Cogsci 8F Linear Algebra review Vectors

More information

Boolean Inner-Product Spaces and Boolean Matrices

Boolean Inner-Product Spaces and Boolean Matrices Boolean Inner-Product Spaces and Boolean Matrices Stan Gudder Department of Mathematics, University of Denver, Denver CO 80208 Frédéric Latrémolière Department of Mathematics, University of Denver, Denver

More information

6 Inner Product Spaces

6 Inner Product Spaces Lectures 16,17,18 6 Inner Product Spaces 6.1 Basic Definition Parallelogram law, the ability to measure angle between two vectors and in particular, the concept of perpendicularity make the euclidean space

More information

Mathematical Foundations of Quantum Mechanics

Mathematical Foundations of Quantum Mechanics Mathematical Foundations of Quantum Mechanics 2016-17 Dr Judith A. McGovern Maths of Vector Spaces This section is designed to be read in conjunction with chapter 1 of Shankar s Principles of Quantum Mechanics,

More information

Contents. Preface for the Instructor. Preface for the Student. xvii. Acknowledgments. 1 Vector Spaces 1 1.A R n and C n 2

Contents. Preface for the Instructor. Preface for the Student. xvii. Acknowledgments. 1 Vector Spaces 1 1.A R n and C n 2 Contents Preface for the Instructor xi Preface for the Student xv Acknowledgments xvii 1 Vector Spaces 1 1.A R n and C n 2 Complex Numbers 2 Lists 5 F n 6 Digression on Fields 10 Exercises 1.A 11 1.B Definition

More information

Quantum Computation. Alessandra Di Pierro Computational models (Circuits, QTM) Algorithms (QFT, Quantum search)

Quantum Computation. Alessandra Di Pierro Computational models (Circuits, QTM) Algorithms (QFT, Quantum search) Quantum Computation Alessandra Di Pierro alessandra.dipierro@univr.it 21 Info + Programme Info: http://profs.sci.univr.it/~dipierro/infquant/ InfQuant1.html Preliminary Programme: Introduction and Background

More information

Linear Algebra Highlights

Linear Algebra Highlights Linear Algebra Highlights Chapter 1 A linear equation in n variables is of the form a 1 x 1 + a 2 x 2 + + a n x n. We can have m equations in n variables, a system of linear equations, which we want to

More information

Math 113 Final Exam: Solutions

Math 113 Final Exam: Solutions Math 113 Final Exam: Solutions Thursday, June 11, 2013, 3.30-6.30pm. 1. (25 points total) Let P 2 (R) denote the real vector space of polynomials of degree 2. Consider the following inner product on P

More information

Real symmetric matrices/1. 1 Eigenvalues and eigenvectors

Real symmetric matrices/1. 1 Eigenvalues and eigenvectors Real symmetric matrices 1 Eigenvalues and eigenvectors We use the convention that vectors are row vectors and matrices act on the right. Let A be a square matrix with entries in a field F; suppose that

More information

Review problems for MA 54, Fall 2004.

Review problems for MA 54, Fall 2004. Review problems for MA 54, Fall 2004. Below are the review problems for the final. They are mostly homework problems, or very similar. If you are comfortable doing these problems, you should be fine on

More information

The Principles of Quantum Mechanics: Pt. 1

The Principles of Quantum Mechanics: Pt. 1 The Principles of Quantum Mechanics: Pt. 1 PHYS 476Q - Southern Illinois University February 15, 2018 PHYS 476Q - Southern Illinois University The Principles of Quantum Mechanics: Pt. 1 February 15, 2018

More information

2. Introduction to quantum mechanics

2. Introduction to quantum mechanics 2. Introduction to quantum mechanics 2.1 Linear algebra Dirac notation Complex conjugate Vector/ket Dual vector/bra Inner product/bracket Tensor product Complex conj. matrix Transpose of matrix Hermitian

More information

EE/ACM Applications of Convex Optimization in Signal Processing and Communications Lecture 2

EE/ACM Applications of Convex Optimization in Signal Processing and Communications Lecture 2 EE/ACM 150 - Applications of Convex Optimization in Signal Processing and Communications Lecture 2 Andre Tkacenko Signal Processing Research Group Jet Propulsion Laboratory April 5, 2012 Andre Tkacenko

More information

1 Matrices and Systems of Linear Equations

1 Matrices and Systems of Linear Equations March 3, 203 6-6. Systems of Linear Equations Matrices and Systems of Linear Equations An m n matrix is an array A = a ij of the form a a n a 2 a 2n... a m a mn where each a ij is a real or complex number.

More information

Numerical Linear Algebra

Numerical Linear Algebra University of Alabama at Birmingham Department of Mathematics Numerical Linear Algebra Lecture Notes for MA 660 (1997 2014) Dr Nikolai Chernov April 2014 Chapter 0 Review of Linear Algebra 0.1 Matrices

More information

MATH 423 Linear Algebra II Lecture 33: Diagonalization of normal operators.

MATH 423 Linear Algebra II Lecture 33: Diagonalization of normal operators. MATH 423 Linear Algebra II Lecture 33: Diagonalization of normal operators. Adjoint operator and adjoint matrix Given a linear operator L on an inner product space V, the adjoint of L is a transformation

More information

Notes on basis changes and matrix diagonalization

Notes on basis changes and matrix diagonalization Notes on basis changes and matrix diagonalization Howard E Haber Santa Cruz Institute for Particle Physics, University of California, Santa Cruz, CA 95064 April 17, 2017 1 Coordinates of vectors and matrix

More information

Definitions for Quizzes

Definitions for Quizzes Definitions for Quizzes Italicized text (or something close to it) will be given to you. Plain text is (an example of) what you should write as a definition. [Bracketed text will not be given, nor does

More information

Math Linear Algebra II. 1. Inner Products and Norms

Math Linear Algebra II. 1. Inner Products and Norms Math 342 - Linear Algebra II Notes 1. Inner Products and Norms One knows from a basic introduction to vectors in R n Math 254 at OSU) that the length of a vector x = x 1 x 2... x n ) T R n, denoted x,

More information

Duke University, Department of Electrical and Computer Engineering Optimization for Scientists and Engineers c Alex Bronstein, 2014

Duke University, Department of Electrical and Computer Engineering Optimization for Scientists and Engineers c Alex Bronstein, 2014 Duke University, Department of Electrical and Computer Engineering Optimization for Scientists and Engineers c Alex Bronstein, 2014 Linear Algebra A Brief Reminder Purpose. The purpose of this document

More information

Linear Algebra. Workbook

Linear Algebra. Workbook Linear Algebra Workbook Paul Yiu Department of Mathematics Florida Atlantic University Last Update: November 21 Student: Fall 2011 Checklist Name: A B C D E F F G H I J 1 2 3 4 5 6 7 8 9 10 xxx xxx xxx

More information

Supplementary Notes on Linear Algebra

Supplementary Notes on Linear Algebra Supplementary Notes on Linear Algebra Mariusz Wodzicki May 3, 2015 1 Vector spaces 1.1 Coordinatization of a vector space 1.1.1 Given a basis B = {b 1,..., b n } in a vector space V, any vector v V can

More information

Spectral Theorems in Euclidean and Hermitian Spaces

Spectral Theorems in Euclidean and Hermitian Spaces Chapter 9 Spectral Theorems in Euclidean and Hermitian Spaces 9.1 Normal Linear Maps Let E be a real Euclidean space (or a complex Hermitian space) with inner product u, v u, v. In the real Euclidean case,

More information

LINEAR ALGEBRA BOOT CAMP WEEK 1: THE BASICS

LINEAR ALGEBRA BOOT CAMP WEEK 1: THE BASICS LINEAR ALGEBRA BOOT CAMP WEEK 1: THE BASICS Unless otherwise stated, all vector spaces in this worksheet are finite dimensional and the scalar field F has characteristic zero. The following are facts (in

More information

SPECTRAL THEORY EVAN JENKINS

SPECTRAL THEORY EVAN JENKINS SPECTRAL THEORY EVAN JENKINS Abstract. These are notes from two lectures given in MATH 27200, Basic Functional Analysis, at the University of Chicago in March 2010. The proof of the spectral theorem for

More information

ECE 275A Homework #3 Solutions

ECE 275A Homework #3 Solutions ECE 75A Homework #3 Solutions. Proof of (a). Obviously Ax = 0 y, Ax = 0 for all y. To show sufficiency, note that if y, Ax = 0 for all y, then it must certainly be true for the particular value of y =

More information

Dimension. Eigenvalue and eigenvector

Dimension. Eigenvalue and eigenvector Dimension. Eigenvalue and eigenvector Math 112, week 9 Goals: Bases, dimension, rank-nullity theorem. Eigenvalue and eigenvector. Suggested Textbook Readings: Sections 4.5, 4.6, 5.1, 5.2 Week 9: Dimension,

More information

Chapter 6: Orthogonality

Chapter 6: Orthogonality Chapter 6: Orthogonality (Last Updated: November 7, 7) These notes are derived primarily from Linear Algebra and its applications by David Lay (4ed). A few theorems have been moved around.. Inner products

More information

Math 4A Notes. Written by Victoria Kala Last updated June 11, 2017

Math 4A Notes. Written by Victoria Kala Last updated June 11, 2017 Math 4A Notes Written by Victoria Kala vtkala@math.ucsb.edu Last updated June 11, 2017 Systems of Linear Equations A linear equation is an equation that can be written in the form a 1 x 1 + a 2 x 2 +...

More information

REAL ANALYSIS II HOMEWORK 3. Conway, Page 49

REAL ANALYSIS II HOMEWORK 3. Conway, Page 49 REAL ANALYSIS II HOMEWORK 3 CİHAN BAHRAN Conway, Page 49 3. Let K and k be as in Proposition 4.7 and suppose that k(x, y) k(y, x). Show that K is self-adjoint and if {µ n } are the eigenvalues of K, each

More information

GQE ALGEBRA PROBLEMS

GQE ALGEBRA PROBLEMS GQE ALGEBRA PROBLEMS JAKOB STREIPEL Contents. Eigenthings 2. Norms, Inner Products, Orthogonality, and Such 6 3. Determinants, Inverses, and Linear (In)dependence 4. (Invariant) Subspaces 3 Throughout

More information

Chapter 3 Transformations

Chapter 3 Transformations Chapter 3 Transformations An Introduction to Optimization Spring, 2014 Wei-Ta Chu 1 Linear Transformations A function is called a linear transformation if 1. for every and 2. for every If we fix the bases

More information

. Here we are using the standard inner-product over C k to define orthogonality. Recall that the inner-product of two vectors φ = i α i.

. Here we are using the standard inner-product over C k to define orthogonality. Recall that the inner-product of two vectors φ = i α i. CS 94- Hilbert Spaces, Tensor Products, Quantum Gates, Bell States 1//07 Spring 007 Lecture 01 Hilbert Spaces Consider a discrete quantum system that has k distinguishable states (eg k distinct energy

More information

Chem 3502/4502 Physical Chemistry II (Quantum Mechanics) 3 Credits Fall Semester 2006 Christopher J. Cramer. Lecture 5, January 27, 2006

Chem 3502/4502 Physical Chemistry II (Quantum Mechanics) 3 Credits Fall Semester 2006 Christopher J. Cramer. Lecture 5, January 27, 2006 Chem 3502/4502 Physical Chemistry II (Quantum Mechanics) 3 Credits Fall Semester 2006 Christopher J. Cramer Lecture 5, January 27, 2006 Solved Homework (Homework for grading is also due today) We are told

More information

Introduction to Matrix Algebra

Introduction to Matrix Algebra Introduction to Matrix Algebra August 18, 2010 1 Vectors 1.1 Notations A p-dimensional vector is p numbers put together. Written as x 1 x =. x p. When p = 1, this represents a point in the line. When p

More information

1 Mathematical preliminaries

1 Mathematical preliminaries 1 Mathematical preliminaries The mathematical language of quantum mechanics is that of vector spaces and linear algebra. In this preliminary section, we will collect the various definitions and mathematical

More information

Unitary Dynamics and Quantum Circuits

Unitary Dynamics and Quantum Circuits qitd323 Unitary Dynamics and Quantum Circuits Robert B. Griffiths Version of 20 January 2014 Contents 1 Unitary Dynamics 1 1.1 Time development operator T.................................... 1 1.2 Particular

More information

Quantum Data Compression

Quantum Data Compression PHYS 476Q: An Introduction to Entanglement Theory (Spring 2018) Eric Chitambar Quantum Data Compression With the basic foundation of quantum mechanics in hand, we can now explore different applications.

More information

MAT 445/ INTRODUCTION TO REPRESENTATION THEORY

MAT 445/ INTRODUCTION TO REPRESENTATION THEORY MAT 445/1196 - INTRODUCTION TO REPRESENTATION THEORY CHAPTER 1 Representation Theory of Groups - Algebraic Foundations 1.1 Basic definitions, Schur s Lemma 1.2 Tensor products 1.3 Unitary representations

More information

ALGEBRA QUALIFYING EXAM PROBLEMS LINEAR ALGEBRA

ALGEBRA QUALIFYING EXAM PROBLEMS LINEAR ALGEBRA ALGEBRA QUALIFYING EXAM PROBLEMS LINEAR ALGEBRA Kent State University Department of Mathematical Sciences Compiled and Maintained by Donald L. White Version: August 29, 2017 CONTENTS LINEAR ALGEBRA AND

More information

A Brief Outline of Math 355

A Brief Outline of Math 355 A Brief Outline of Math 355 Lecture 1 The geometry of linear equations; elimination with matrices A system of m linear equations with n unknowns can be thought of geometrically as m hyperplanes intersecting

More information

homogeneous 71 hyperplane 10 hyperplane 34 hyperplane 69 identity map 171 identity map 186 identity map 206 identity matrix 110 identity matrix 45

homogeneous 71 hyperplane 10 hyperplane 34 hyperplane 69 identity map 171 identity map 186 identity map 206 identity matrix 110 identity matrix 45 address 12 adjoint matrix 118 alternating 112 alternating 203 angle 159 angle 33 angle 60 area 120 associative 180 augmented matrix 11 axes 5 Axiom of Choice 153 basis 178 basis 210 basis 74 basis test

More information

Lecture 19: Polar and singular value decompositions; generalized eigenspaces; the decomposition theorem (1)

Lecture 19: Polar and singular value decompositions; generalized eigenspaces; the decomposition theorem (1) Lecture 19: Polar and singular value decompositions; generalized eigenspaces; the decomposition theorem (1) Travis Schedler Thurs, Nov 17, 2011 (version: Thurs, Nov 17, 1:00 PM) Goals (2) Polar decomposition

More information