Assignment 11

Arfken 3.4.6

Matrix $C$ is not Hermitian, but $C + C^\dagger$ is Hermitian, since
$$\left(C + C^\dagger\right)^\dagger = C^\dagger + \left(C^\dagger\right)^\dagger = C + C^\dagger.$$
Likewise,
$$\left[\,i\left(C - C^\dagger\right)\right]^\dagger = -i\left(C^\dagger - C\right) = i\left(C - C^\dagger\right),$$
so $i(C - C^\dagger)$ is Hermitian as well.

Arfken 3.4.9

The matrices $A$ and $B$ are both Hermitian: $A^\dagger = A$ and $B^\dagger = B$. The adjoint of their product is
$$(AB)^\dagger = B^\dagger A^\dagger = BA.$$
For the product to be Hermitian, we must have $AB = (AB)^\dagger = BA$, i.e. $A$ and $B$ must commute. This shows that if $AB = (AB)^\dagger$, then $[A, B] = 0$. To go the other way, assume $[A, B] = 0$ and show that $AB = (AB)^\dagger$, i.e. that $AB$ is Hermitian:
$$0 = [A, B] = [A, B]^\dagger = (AB - BA)^\dagger = (AB)^\dagger - A^\dagger B^\dagger = (AB)^\dagger - AB,$$
so $AB = (AB)^\dagger$ and the product is indeed Hermitian.
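
As a quick numerical cross-check of 3.4.6 and 3.4.9 (not part of the original solution), the short NumPy sketch below builds an arbitrary non-Hermitian $C$ and a pair of commuting Hermitian matrices; the specific matrices are illustrative choices only.

    import numpy as np

    rng = np.random.default_rng(0)
    C = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))   # a generic non-Hermitian matrix

    H1 = C + C.conj().T          # C + C^dagger
    H2 = 1j * (C - C.conj().T)   # i(C - C^dagger)
    print(np.allclose(H1, H1.conj().T), np.allclose(H2, H2.conj().T))   # True True

    # Two commuting Hermitian matrices (both are powers of H1), whose
    # product should therefore be Hermitian as well.
    A = H1 @ H1
    B = H1 @ H1 @ H1
    AB = A @ B
    print(np.allclose(AB, AB.conj().T))   # True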

Arfken 3.4.12

(a) Two matrices $U$ and $H$ are related by
$$U = e^{iaH} \equiv 1 + iaH + \frac{(ia)^2}{2!}H^2 + \frac{(ia)^3}{3!}H^3 + \cdots$$
First assume $H = H^\dagger$ and take the adjoint of the above relation:
$$U^\dagger = 1 - iaH^\dagger + \frac{(-ia)^2}{2!}\left(H^2\right)^\dagger + \frac{(-ia)^3}{3!}\left(H^3\right)^\dagger + \cdots$$
It should be clear that we need to show $(H^n)^\dagger = H^n$. Briefly, for $n = 1$ and $n = 2$ this is straightforward to show; by induction we can then demonstrate the general case. Taking it as a result, we have
$$U^\dagger = 1 - iaH + \frac{(ia)^2}{2!}H^2 - \frac{(ia)^3}{3!}H^3 + \cdots = e^{-iaH}.$$
By now multiplying on the left by $U = e^{iaH}$, we see that $UU^\dagger = 1$, so $U^\dagger$ must equal $U^{-1}$ and therefore $U$ is unitary.

(b) Now assume $U^\dagger = U^{-1}$. We know that $U = e^{iaH}$. Its inverse is $U^{-1}$, which we might guess is $e^{-iaH}$. But we need to show this:
$$U\,U^{-1} = e^{iaH}\,e^{-iaH} = e^{iaH - iaH} = 1,$$
and we have verified it. Being unitary, we have
$$U^{-1} = e^{-iaH} = U^\dagger = \left(e^{iaH}\right)^\dagger = e^{-iaH^\dagger},$$
where we have two (matrix-valued) Taylor series which are equal: $e^{-iaH} = e^{-iaH^\dagger}$. If the Taylor series are to be equal, we must have, in general, $H^n = \left(H^n\right)^\dagger = \left(H^\dagger\right)^n$. This will be true provided $H = H^\dagger$, i.e. $H$ is Hermitian.

Arfken 3.5.4

Assume the matrix $A$ is not symmetric but that it can be diagonalized by an orthogonal similarity transformation, $A' = RAR^{T}$, where $R$ is the appropriate orthogonal matrix. We then have
$$A'_{il} = R_{ij} A_{jk} \left(R^{T}\right)_{kl} = R_{ij} A_{jk} R_{lk}.$$
Since $A'$ is diagonal, it is symmetric: $A'_{il} = A'_{li}$. This implies
$$A'_{il} = R_{ik} A_{jk} R_{lj} = R_{ik} A_{jk} \left(R^{T}\right)_{jl}.$$
Relabeling indices so that $k \to j$ and $j \to k$ above, it becomes clear that this can only equal the first line if $A_{jk} = A_{kj}$, i.e. if $A$ is symmetric. This is a contradiction, and we conclude that a non-symmetric matrix cannot be diagonalized by an orthogonal similarity transformation.
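
A small numerical illustration of 3.4.12(a), assuming SciPy is available for the matrix exponential (scipy.linalg.expm); the Hermitian $H$ and the value $a = 0.7$ below are arbitrary choices for the sketch.

    import numpy as np
    from scipy.linalg import expm

    a = 0.7
    H = np.array([[2.0, 1.0 - 1.0j],
                  [1.0 + 1.0j, -1.0]])        # Hermitian by construction
    U = expm(1j * a * H)                      # U = e^{iaH}

    print(np.allclose(U.conj().T @ U, np.eye(2)))       # True: U is unitary
    print(np.allclose(U.conj().T, expm(-1j * a * H)))   # True: U^dagger = e^{-iaH}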

Arfken 3.5.8

Two matrices, $A$ and $B$, are diagonalized by the same transformation:
$$A' = RAR^{T}, \qquad B' = RBR^{T}.$$
These two diagonal matrices now commute:
$$0 = A'B' - B'A' = RAR^{T}RBR^{T} - RBR^{T}RAR^{T} = RABR^{T} - RBAR^{T} = R(AB - BA)R^{T},$$
which will be true if and only if $AB = BA$.

Arfken 3.5.12

(a) For a rigid body defined by $m_1 = 1$ at $(1, 1, -2)$, $m_2 = 2$ at $(-1, -1, 0)$, and $m_3 = 1$ at $(1, 1, 2)$, the components of the inertia matrix are
$$I_{xx} = \sum_{i=1}^{3} m_i\left(r_i^2 - x_i^2\right) = m_1\left(r_1^2 - x_1^2\right) + m_2\left(r_2^2 - x_2^2\right) + m_3\left(r_3^2 - x_3^2\right) = 1\,(6 - 1) + 2\,(2 - 1) + 1\,(6 - 1) = 12,$$
with similar calculations leading to $I_{yy} = 12$, $I_{zz} = 8$, $I_{xy} = -4$, and $I_{xz} = I_{yz} = 0$. Putting it together,
$$I = \begin{pmatrix} 12 & -4 & 0 \\ -4 & 12 & 0 \\ 0 & 0 & 8 \end{pmatrix}.$$

(b) Getting the eigenvalues and eigenvectors requires the secular equation
$$0 = \det(I - \lambda 1) = (8 - \lambda)\left[(12 - \lambda)^2 - 16\right] = (8 - \lambda)^2(16 - \lambda).$$
Solving the eigenvalue equations for $\lambda = 16$ gives the equations $x = -y$ and $z = 0$, so we pick a normalized eigenvector $(1, -1, 0)/\sqrt{2}$. The degenerate eigenvalue $\lambda = 8$ gives the equation $x = y$, with $z$ anything. So one eigenvector associated with $\lambda = 8$ is $(1, 1, 1)/\sqrt{3}$. Another eigenvector which would go with the $\lambda = 8$ eigenvalue is $(-1, -1, 2)/\sqrt{6}$ which, one can readily check, is orthogonal to both the other eigenvectors.
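
To double-check the 3.5.12 numbers, the NumPy sketch below (illustrative only) diagonalizes the inertia matrix; numpy.linalg.eigh returns the eigenvalues of a symmetric matrix in ascending order, with orthonormal eigenvectors as columns.

    import numpy as np

    I = np.array([[12., -4., 0.],
                  [-4., 12., 0.],
                  [ 0.,  0., 8.]])
    vals, vecs = np.linalg.eigh(I)
    print(vals)        # [ 8.  8. 16.]
    print(vecs[:, 2])  # proportional to (1, -1, 0)/sqrt(2), up to an overall sign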

Arfken 3.5.20

Diagonalize
$$A = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 1 \\ 0 & 1 & 1 \end{pmatrix}.$$
The secular equation is
$$0 = \det(A - \lambda 1) = (1 - \lambda)\left[(1 - \lambda)^2 - 1\right] = -\lambda(\lambda - 1)(\lambda - 2),$$
with eigenvalues $\lambda = 0, 1, 2$. The eigenvector associated with the first eigenvalue can be found from the equations $x = 0$ and $y + z = 0$; it is $(0, 1, -1)/\sqrt{2}$. For the second eigenvalue, the eigenvector can be determined from the equations $z = 0$ and $y = 0$, with $x$ anything. The second eigenvector is thus $(1, 0, 0)$. For $\lambda = 2$, the equations for the eigenvector are $x = 0$ and $y = z$. Thus we have $(0, 1, 1)/\sqrt{2}$.

Arfken 3.5.27

Diagonalize
$$A = \begin{pmatrix} 5 & 0 & 2 \\ 0 & 1 & 0 \\ 2 & 0 & 2 \end{pmatrix}.$$
The secular equation is
$$0 = \det(A - \lambda 1) = (5 - \lambda)(1 - \lambda)(2 - \lambda) - 4(1 - \lambda) = (1 - \lambda)\left[\lambda^2 - 7\lambda + 6\right] = -(\lambda - 1)^2(\lambda - 6),$$
with eigenvalues $\lambda = 1, 1, 6$. The eigenvector associated with the last eigenvalue, $\lambda = 6$, can be found from the equations $y = 0$ and $x = 2z$. It is $(2, 0, 1)/\sqrt{5}$. For the degenerate eigenvalue, the eigenvectors can be determined from the equations $2x = -z$, with $y$ anything. Thus one eigenvector is $(1, 0, -2)/\sqrt{5}$. To get another, just notice that $(0, 1, 0)$ satisfies the equations and is orthogonal to the other two eigenvectors.

Arfken 3.6.3

The secular equation for the matrix
$$A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$$
is
$$0 = \det(A - \lambda 1) = (a - \lambda)(d - \lambda) - bc = \lambda^2 - \lambda(a + d) + ad - bc = \lambda^2 - \lambda\,\mathrm{tr}\,A + \det A.$$
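
The 3.5.27 spectrum and the 3.6.3 characteristic polynomial are easy to confirm numerically; the hedged NumPy sketch below uses an arbitrary 2x2 matrix $B$ for the second check, with numpy.poly returning the characteristic-polynomial coefficients of a square matrix.

    import numpy as np

    A = np.array([[5., 0., 2.],
                  [0., 1., 0.],
                  [2., 0., 2.]])
    vals, vecs = np.linalg.eigh(A)
    print(vals)                     # [1. 1. 6.]
    print(vecs[:, 2] * np.sqrt(5))  # +/-(2, 0, 1)

    # 3.6.3: for a 2x2 matrix the characteristic polynomial is
    # lambda^2 - (tr B) lambda + det B.
    B = np.array([[1., 2.],
                  [3., 4.]])
    print(np.poly(B))                           # [ 1. -5. -2.]
    print(1.0, -np.trace(B), np.linalg.det(B))  # 1.0 -5.0 -2.0 (up to rounding)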

Arfken 3.6.7

In bra-ket notation,
$$A\,|r_i\rangle = \lambda_i\,|r_i\rangle \qquad (1)$$
$$A\,|r_j\rangle = \lambda_j\,|r_j\rangle \qquad (2)$$
Taking the adjoint of Eq. (2), we get
$$\left(A\,|r_j\rangle\right)^\dagger = \left(\lambda_j\,|r_j\rangle\right)^\dagger \quad\Longrightarrow\quad \langle r_j|\,A^\dagger = \lambda_j^*\,\langle r_j| \qquad (3)$$
Now multiply Eq. (1) by $\langle r_j|$ on the left and Eq. (3) by $|r_i\rangle$ on the right. Finally, subtract the two:
$$\langle r_j|\left(A - A^\dagger\right)|r_i\rangle = \left(\lambda_i - \lambda_j^*\right)\langle r_j | r_i\rangle.$$
The left-hand side is zero for a Hermitian matrix ($A = A^\dagger$). For $i \neq j$ (and no degeneracy) the eigenvectors are orthogonal. For $i = j$, the eigenvalues must be real: $\lambda_i = \lambda_i^*$.

Now take Eq. (1) and multiply by $A^{-1}$:
$$A^{-1}A\,|r_i\rangle = \lambda_i\,A^{-1}|r_i\rangle,$$
which can be rewritten
$$A^{-1}|r_i\rangle = \frac{1}{\lambda_i}\,|r_i\rangle.$$
Multiply this by $\langle r_j|$ on the left and subtract from it Eq. (3) applied to $|r_i\rangle$:
$$\langle r_j|\left(A^{-1} - A^\dagger\right)|r_i\rangle = \left(\frac{1}{\lambda_i} - \lambda_j^*\right)\langle r_j | r_i\rangle.$$
For a unitary matrix ($A^\dagger = A^{-1}$), the left-hand side is zero, and for $i = j$ we must have
$$\lambda_i^*\lambda_i = 1.$$
Thus if a matrix is both Hermitian and unitary, $\lambda_i = \lambda_i^*$ and $\lambda_i^*\lambda_i = 1$, and the eigenvalues can only be $\pm 1$.

Arfken 3.6.14

(a) We have
$$A = \frac{1}{\sqrt{5}}\begin{pmatrix} 2 & 2 \\ 1 & -4 \end{pmatrix}.$$
The transpose, $\tilde{A}$, together with $A\tilde{A}$ and $\tilde{A}A$, are
$$\tilde{A} = \frac{1}{\sqrt{5}}\begin{pmatrix} 2 & 1 \\ 2 & -4 \end{pmatrix}, \qquad A\tilde{A} = \frac{1}{5}\begin{pmatrix} 8 & -6 \\ -6 & 17 \end{pmatrix}, \qquad \tilde{A}A = \frac{1}{5}\begin{pmatrix} 5 & 0 \\ 0 & 20 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 4 \end{pmatrix}.$$
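
Assuming the matrices as reconstructed above (with $A = M/\sqrt{5}$), a short NumPy check (illustrative) reproduces the part (a) products; the Pauli $\sigma_x$ matrix is added as an example of the 3.6.7 statement that a matrix which is both Hermitian and unitary has eigenvalues $\pm 1$.

    import numpy as np

    M = np.array([[2., 2.],
                  [1., -4.]])       # A = M / sqrt(5)
    print(M @ M.T)   # [[ 8. -6.] [-6. 17.]], so A A~ = (1/5) times this
    print(M.T @ M)   # [[ 5.  0.] [ 0. 20.]], so A~ A = [[1, 0], [0, 4]]

    # A matrix that is both Hermitian and unitary (the Pauli x matrix)
    # has eigenvalues +/-1, illustrating 3.6.7:
    sigma_x = np.array([[0., 1.],
                        [1., 0.]])
    print(np.linalg.eigvalsh(sigma_x))   # [-1.  1.]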

(b) The eigenvalues of $A\tilde{A}$ come out of the secular equation:
$$0 = \det\left(A\tilde{A} - \lambda 1\right) = \left(\tfrac{8}{5} - \lambda\right)\left(\tfrac{17}{5} - \lambda\right) - \tfrac{36}{25} = (\lambda - 4)(\lambda - 1).$$
Thus, $\lambda_n^2 = 1, 4$. The eigenvectors, $g_n$, associated with these are $(2, 1)/\sqrt{5}$ and $(1, -2)/\sqrt{5}$, respectively.

(c) The eigenvalues of $\tilde{A}A$ are simple since it is a diagonal matrix. They are as before: $\lambda_n^2 = 1, 4$. However, the eigenvectors, $f_n$, are $(1, 0)$ and $(0, 1)$.

(d) Note that, with $\lambda_1 = 1$ and $\lambda_2 = 2$,
$$A f_1 = \frac{1}{\sqrt{5}}\begin{pmatrix} 2 & 2 \\ 1 & -4 \end{pmatrix}\begin{pmatrix} 1 \\ 0 \end{pmatrix} = \frac{1}{\sqrt{5}}\begin{pmatrix} 2 \\ 1 \end{pmatrix} = \lambda_1\,g_1, \qquad A f_2 = \frac{1}{\sqrt{5}}\begin{pmatrix} 2 & 2 \\ 1 & -4 \end{pmatrix}\begin{pmatrix} 0 \\ 1 \end{pmatrix} = \frac{1}{\sqrt{5}}\begin{pmatrix} 2 \\ -4 \end{pmatrix} = \lambda_2\,g_2,$$
$$\tilde{A} g_1 = \frac{1}{\sqrt{5}}\begin{pmatrix} 2 & 1 \\ 2 & -4 \end{pmatrix}\frac{1}{\sqrt{5}}\begin{pmatrix} 2 \\ 1 \end{pmatrix} = \frac{1}{5}\begin{pmatrix} 5 \\ 0 \end{pmatrix} = \lambda_1\,f_1, \qquad \tilde{A} g_2 = \frac{1}{\sqrt{5}}\begin{pmatrix} 2 & 1 \\ 2 & -4 \end{pmatrix}\frac{1}{\sqrt{5}}\begin{pmatrix} 1 \\ -2 \end{pmatrix} = \frac{1}{5}\begin{pmatrix} 0 \\ 10 \end{pmatrix} = \lambda_2\,f_2.$$

(e) By construction, we find
$$A = \sum_n \lambda_n\,g_n f_n^{T} = \lambda_1\,g_1 f_1^{T} + \lambda_2\,g_2 f_2^{T} = 1\cdot\frac{1}{\sqrt{5}}\begin{pmatrix} 2 \\ 1 \end{pmatrix}\begin{pmatrix} 1 & 0 \end{pmatrix} + 2\cdot\frac{1}{\sqrt{5}}\begin{pmatrix} 1 \\ -2 \end{pmatrix}\begin{pmatrix} 0 & 1 \end{pmatrix} = \frac{1}{\sqrt{5}}\begin{pmatrix} 2 & 2 \\ 1 & -4 \end{pmatrix}.$$
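
Part (e) is the singular-value decomposition of $A$ written out by hand. The hedged NumPy sketch below, using the vectors as reconstructed above, rebuilds $A$ from the two outer products and compares the $\lambda_n$ with the singular values NumPy reports.

    import numpy as np

    A = np.array([[2., 2.],
                  [1., -4.]]) / np.sqrt(5)
    f1, f2 = np.array([1., 0.]), np.array([0., 1.])
    g1 = np.array([2., 1.]) / np.sqrt(5)
    g2 = np.array([1., -2.]) / np.sqrt(5)

    A_rebuilt = 1.0 * np.outer(g1, f1) + 2.0 * np.outer(g2, f2)
    print(np.allclose(A, A_rebuilt))           # True
    print(np.linalg.svd(A, compute_uv=False))  # [2. 1.], the lambda_n in descending order (up to rounding)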