Assignment 10 (Arfken). Show that Stirling's formula is an asymptotic expansion. The remainder term is $\dfrac{B_{2n}}{2n(2n-1)}\,x^{1-2n}$.

Arfken 5.. Show that Stirling's formula is an asymptotic expansion. The remainder term is
\[
R_N(x) = \sum_{n=N+1}^{\infty} \frac{B_{2n}}{2n(2n-1)\,x^{2n-1}}
\]
for some $N$. The condition for an asymptotic series,
\[
\lim_{x\to\infty} x^N R_N(x) = \lim_{x\to\infty} \sum_{n=N+1}^{\infty} \frac{B_{2n}}{2n(2n-1)}\, x^{N-2n+1} = 0,
\]
is thus met, since every term in the tail carries a negative power of $x$. We should also check that the series formally diverges. We can do that using $\lim_{N\to\infty} x^N R_N(x)$, or just use the ratio test on the series (and using the representation of the Bernoulli numbers given in equation 5.5),
\[
B_{2n} = (-1)^{n-1}\,\frac{2\,(2n)!}{(2\pi)^{2n}}\,\zeta(2n).
\]
The ratio of successive terms is
\[
\left|\frac{a_{n+1}}{a_n}\right|
= \left|\frac{B_{2n+2}\,x^{-2n-1}}{(2n+2)(2n+1)}\cdot\frac{2n(2n-1)}{B_{2n}\,x^{1-2n}}\right|
= \frac{\zeta(2n+2)}{\zeta(2n)}\,\frac{2n(2n-1)}{(2\pi x)^2},
\]
which obviously $\to\infty$ as $n\to\infty$ for fixed $x$ (note that $\lim_{n\to\infty}\zeta(2n)=1$). Thus the Stirling series is an asymptotic expansion.

Arfken 5.. Let's do both Fresnel integrals together. But first note that the infinite versions of both of these integrals are
\[
\int_0^\infty \cos\!\left(\tfrac{\pi u^2}{2}\right) du = \int_0^\infty \sin\!\left(\tfrac{\pi u^2}{2}\right) du = \tfrac12
\]
(getting this from a table). So use this to define an asymptotic series:
\[
C(x) + iS(x) = \int_0^x \cos\!\left(\tfrac{\pi u^2}{2}\right) du + i\int_0^x \sin\!\left(\tfrac{\pi u^2}{2}\right) du
= \int_0^x e^{i\pi u^2/2}\,du
= \frac{1+i}{2} - \int_x^\infty e^{i\pi u^2/2}\,du.
\]
Substituting $z = \pi u^2/2$, so that $du = dz/\sqrt{2\pi z}$,
\[
\int_x^\infty e^{i\pi u^2/2}\,du = \frac{1}{\sqrt{2\pi}}\int_{\pi x^2/2}^{\infty} \frac{e^{iz}}{z^{1/2}}\,dz.
\]
Integrating by parts repeatedly (integrating $e^{iz}$ and differentiating the power of $z$ at each step), with $a = \pi x^2/2$,
\[
\int_a^\infty \frac{e^{iz}}{z^{1/2}}\,dz
= \frac{i\,e^{ia}}{a^{1/2}} + \frac{1}{2i}\int_a^\infty \frac{e^{iz}}{z^{3/2}}\,dz
= e^{ia}\left[\frac{i}{a^{1/2}} + \frac{1}{2\,a^{3/2}} - \frac{3i}{4\,a^{5/2}} - \frac{15}{8\,a^{7/2}} + \cdots\right].
\]
Converting back to $x$ (each factor of $a^{-1/2}$ contributes $\sqrt{2/\pi}\,x^{-1}$),
\[
\int_x^\infty e^{i\pi u^2/2}\,du
= e^{i\pi x^2/2}\left[\frac{i}{\pi x} + \frac{1}{\pi^2 x^3} - \frac{3i}{\pi^3 x^5} - \frac{15}{\pi^4 x^7} + \cdots\right].
\]
Therefore
\[
C(x) + iS(x) = \frac{1+i}{2}
- \left(\cos\tfrac{\pi x^2}{2} + i\sin\tfrac{\pi x^2}{2}\right)
\left[\frac{i}{\pi x} + \frac{1}{\pi^2 x^3} - \frac{3i}{\pi^3 x^5} - \frac{15}{\pi^4 x^7} + \cdots\right],
\]
and, separating real and imaginary parts,
\[
C(x) = \frac12 + \frac{\sin(\pi x^2/2)}{\pi x}\left[1 - \frac{3}{\pi^2 x^4} + \cdots\right]
- \frac{\cos(\pi x^2/2)}{\pi^2 x^3}\left[1 - \frac{15}{\pi^2 x^4} + \cdots\right],
\]
\[
S(x) = \frac12 - \frac{\cos(\pi x^2/2)}{\pi x}\left[1 - \frac{3}{\pi^2 x^4} + \cdots\right]
- \frac{\sin(\pi x^2/2)}{\pi^2 x^3}\left[1 - \frac{15}{\pi^2 x^4} + \cdots\right].
\]

Thus the real part of this is $C(x)$ and the imaginary part is $S(x)$.

Arfken 5..5 For the series
\[
P_\nu(z) = 1 + \sum_{n=1}^{\infty} (-1)^n \frac{\prod_{s=1}^{2n}\left[4\nu^2-(2s-1)^2\right]}{(2n)!\,(8z)^{2n}},
\qquad
Q_\nu(z) = \sum_{n=1}^{\infty} (-1)^{n+1} \frac{\prod_{s=1}^{2n-1}\left[4\nu^2-(2s-1)^2\right]}{(2n-1)!\,(8z)^{2n-1}},
\]
the remainder terms are, respectively,
\[
R_n^P(z) = \sum_{k=n+1}^{\infty} (-1)^k \frac{\prod_{s=1}^{2k}\left[4\nu^2-(2s-1)^2\right]}{(2k)!\,(8z)^{2k}},
\qquad
R_n^Q(z) = \sum_{k=n+1}^{\infty} (-1)^{k+1} \frac{\prod_{s=1}^{2k-1}\left[4\nu^2-(2s-1)^2\right]}{(2k-1)!\,(8z)^{2k-1}},
\]
and both quantities $z^{2n} R_n(z)$ approach zero as $z\to\infty$, since the $z$ dependence of the terms goes like $z^{2n-2k}$ and $z^{2n-2k+1}$ with $k > n$, i.e. strictly negative powers of $z$. To demonstrate that both are formally divergent series, use the ratio test for $P_\nu(z)$:
\[
\left|\frac{a_{n+1}}{a_n}\right|
= \frac{\prod_{s=1}^{2n+2}\left|4\nu^2-(2s-1)^2\right|}{(2n+2)!\,(8z)^{2n+2}}
\cdot \frac{(2n)!\,(8z)^{2n}}{\prod_{s=1}^{2n}\left|4\nu^2-(2s-1)^2\right|}
= \frac{\left|4\nu^2-(4n+1)^2\right|\,\left|4\nu^2-(4n+3)^2\right|}{(2n+1)(2n+2)\,(8z)^2},
\]
which, for a fixed $z$, goes to $\infty$ as $n\to\infty$. A similar calculation follows for $Q_\nu(z)$.

Arfken 5..8 We want to expand the integral
\[
\int_0^\infty \frac{e^{-xv}}{1+v^2}\,dv
= \frac{1}{x}\int_0^\infty \frac{e^{-u}}{1+u^2/x^2}\,du \qquad (u = xv)
\]
\[
= \frac{1}{x}\int_0^\infty e^{-u} \sum_{k=0}^{\infty} (-1)^k \frac{u^{2k}}{x^{2k}}\,du
= \sum_{k=0}^{\infty} \frac{(-1)^k}{x^{2k+1}} \int_0^\infty e^{-u}\,u^{2k}\,du
= \sum_{k=0}^{\infty} \frac{(-1)^k\,(2k)!}{x^{2k+1}}.
\]
Note that the book would seem to have an error. My guess is that the exponent of the integrand in the book was misprinted.
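As a quick numerical sanity check of the last expansion (a sketch of my own, not part of the assignment; the helper names and the quadrature settings are arbitrary choices), the partial sums of $\sum_k (-1)^k (2k)!/x^{2k+1}$ can be compared against a brute-force evaluation of the integral at a moderately large $x$:

```python
import math

def integral_direct(x, n=200000, vmax=40.0):
    # Trapezoidal rule for the integral of e^{-x v}/(1 + v^2) over [0, vmax];
    # the tail beyond vmax is exponentially small for x of order 1 or larger.
    h = vmax / n
    total = 0.5 * (1.0 + math.exp(-x * vmax) / (1.0 + vmax * vmax))
    for i in range(1, n):
        v = i * h
        total += math.exp(-x * v) / (1.0 + v * v)
    return h * total

def asymptotic_partial_sum(x, terms):
    # Partial sum of sum_k (-1)^k (2k)! / x^(2k+1).
    return sum((-1) ** k * math.factorial(2 * k) / x ** (2 * k + 1)
               for k in range(terms))

x = 10.0
exact = integral_direct(x)
errs = [abs(asymptotic_partial_sum(x, t) - exact) for t in (1, 2, 3, 4)]
print(errs)  # the first few added terms shrink the error, until the divergent tail takes over
```

At fixed $x$ the improvement stops once the terms start growing again, which is exactly the asymptotic (rather than convergent) behavior derived above.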

Arfken.. (a), (b), (c); Arfken..6a; Arfken..4: direct evaluation of the given matrices and determinants (the numerical entries of these computations did not survive transcription).

(a) If complex numbers can be represented by matrices, then we should be able to reproduce the basic arithmetic operations in a consistent manner. For instance, addition of two complex numbers, $(a+ib) + (c+id)$, becomes
\[
\begin{pmatrix} a & b \\ -b & a \end{pmatrix} + \begin{pmatrix} c & d \\ -d & c \end{pmatrix}
= \begin{pmatrix} a+c & b+d \\ -(b+d) & a+c \end{pmatrix},
\]
which, translated back to complex numbers, is $(a+c) + i(b+d)$, as we would expect for an isomorphic representation. Likewise, multiplication of two complex numbers, $(a+ib)(c+id)$, becomes
\[
\begin{pmatrix} a & b \\ -b & a \end{pmatrix}\begin{pmatrix} c & d \\ -d & c \end{pmatrix}
= \begin{pmatrix} ac-bd & ad+bc \\ -(ad+bc) & ac-bd \end{pmatrix},
\]

which, translated back to complex numbers, is $(ac-bd) + i(ad+bc)$, and which is complex multiplication. Thus the algebra of complex numbers is isomorphic to the algebra of $2\times 2$ matrices which have the form
\[
\begin{pmatrix} a & b \\ -b & a \end{pmatrix}.
\]
(b) The matrix corresponding to $(a+ib)^{-1}$ is just the inverse of the standard matrix. Finding the inverse, we get
\[
\frac{1}{a^2+b^2}\begin{pmatrix} a & -b \\ b & a \end{pmatrix},
\]
which in complex notation is $\dfrac{a-ib}{a^2+b^2}$, as expected.

Arfken..6a Starting with a general matrix
\[
A = \begin{pmatrix} x & y \\ z & w \end{pmatrix},
\]
the demand that $A^2 = 0$ leads to four equations:
\[
x^2 + yz = 0, \qquad y(x+w) = 0, \qquad z(x+w) = 0, \qquad w^2 + yz = 0.
\]
One solution is the trivial solution $x = y = z = w = 0$, which we discard. The other solution is $w = -x$ and $y = -x^2/z$, with $x$ and $z$ arbitrary. The matrix becomes
\[
A = \begin{pmatrix} x & -x^2/z \\ z & -x \end{pmatrix},
\]
and if we make the redefinitions $x = ab$ and $z = -a^2$, we get the form in the text.

Arfken..4 From the previous problem, we have the anti-commutation relations, and can deduce the commutation relations, between the Pauli matrices:
\[
\sigma_i\sigma_j + \sigma_j\sigma_i = 2\,\delta_{ij}\,\mathbf{1}, \qquad
\sigma_i\sigma_j - \sigma_j\sigma_i = 2i\,\varepsilon_{ijk}\,\sigma_k.
\]
Adding these equations and dividing by 2, we get
\[
\sigma_i\sigma_j = \delta_{ij}\,\mathbf{1} + i\,\varepsilon_{ijk}\,\sigma_k.
\]
Since $\vec\sigma$ is a matrix-valued vector (this is the same language as when we speak of a real-valued function), we can dot ordinary (i.e. real-valued) vectors into it:
\[
(\sigma_i a_i)(\sigma_j b_j) = a_i b_j\,\delta_{ij}\,\mathbf{1} + i\,\varepsilon_{kij}\,\sigma_k\,a_i b_j,
\]
or, in traditional vector notation,
\[
(\vec\sigma\cdot\vec a)(\vec\sigma\cdot\vec b) = \vec a\cdot\vec b\;\mathbf{1} + i\,\vec\sigma\cdot(\vec a\times\vec b).
\]
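The last identity is easy to spot-check numerically. The sketch below (my own aside, not part of the solutions; the helper names `mmul`, `madd`, `scale` and the test vectors are arbitrary) builds $\vec\sigma\cdot\vec a$ and $\vec\sigma\cdot\vec b$ as explicit $2\times 2$ complex matrices and compares both sides:

```python
# Spot-check of (sigma.a)(sigma.b) = (a.b) 1 + i sigma.(a x b) for one pair of vectors.

def mmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def madd(*Ms):
    return [[sum(M[i][j] for M in Ms) for j in range(2)] for i in range(2)]

def scale(c, M):
    return [[c * M[i][j] for j in range(2)] for i in range(2)]

I2 = [[1, 0], [0, 1]]
sigma = [[[0, 1], [1, 0]],        # sigma_x
         [[0, -1j], [1j, 0]],     # sigma_y
         [[1, 0], [0, -1]]]       # sigma_z

a = [1.0, 2.0, 3.0]
b = [-2.0, 0.5, 1.0]
dot = sum(ai * bi for ai, bi in zip(a, b))
cross = [a[1] * b[2] - a[2] * b[1],
         a[2] * b[0] - a[0] * b[2],
         a[0] * b[1] - a[1] * b[0]]

sa = madd(*[scale(a[i], sigma[i]) for i in range(3)])   # sigma . a
sb = madd(*[scale(b[i], sigma[i]) for i in range(3)])   # sigma . b
lhs = mmul(sa, sb)
rhs = madd(scale(dot, I2), *[scale(1j * cross[i], sigma[i]) for i in range(3)])

ok = all(abs(lhs[i][j] - rhs[i][j]) < 1e-12 for i in range(2) for j in range(2))
print(ok)  # True
```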

Arfken..6
(a) With
\[
M_x = \frac{1}{\sqrt 2}\begin{pmatrix} 0&1&0\\ 1&0&1\\ 0&1&0 \end{pmatrix},\qquad
M_y = \frac{1}{\sqrt 2}\begin{pmatrix} 0&-i&0\\ i&0&-i\\ 0&i&0 \end{pmatrix},\qquad
M_z = \begin{pmatrix} 1&0&0\\ 0&0&0\\ 0&0&-1 \end{pmatrix},
\]
direct multiplication gives
\[
[M_x, M_y] = \frac12\begin{pmatrix} i&0&-i\\ 0&0&0\\ i&0&-i \end{pmatrix}
- \frac12\begin{pmatrix} -i&0&-i\\ 0&0&0\\ i&0&i \end{pmatrix}
= \begin{pmatrix} i&0&0\\ 0&0&0\\ 0&0&-i \end{pmatrix} = i\,M_z,
\]
with similar calculations for $[M_y, M_z] = i M_x$ and $[M_z, M_x] = i M_y$.
(b)
\[
M^2 = M_x^2 + M_y^2 + M_z^2
= \frac12\begin{pmatrix} 1&0&1\\ 0&2&0\\ 1&0&1 \end{pmatrix}
+ \frac12\begin{pmatrix} 1&0&-1\\ 0&2&0\\ -1&0&1 \end{pmatrix}
+ \begin{pmatrix} 1&0&0\\ 0&0&0\\ 0&0&1 \end{pmatrix}
= 2\cdot\mathbf{1}.
\]
(c) Using the result of (b), we get
\[
[M^2, M_i] = [2\cdot\mathbf{1}, M_i] = 0,
\]
since the identity matrix commutes with everything. Next,
\[
[M_z, L^+] = [M_z, M_x + iM_y] = [M_z, M_x] + i[M_z, M_y] = iM_y + i(-iM_x) = M_x + iM_y = L^+.
\]
Lastly,
\[
[L^+, L^-] = [M_x + iM_y,\, M_x - iM_y]
= [M_x,M_x] - i[M_x,M_y] + i[M_y,M_x] + [M_y,M_y]
= -i(iM_z) + i(-iM_z) = 2M_z.
\]
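The matrix arithmetic in (a) and (b) is mechanical, so here is a short numerical verification (an aside of my own, not part of the solutions; the names `mm`, `ok_comm`, `ok_sq` are arbitrary) using the explicit $3\times 3$ matrices above:

```python
import math

s = 1 / math.sqrt(2)
Mx = [[0, s, 0], [s, 0, s], [0, s, 0]]
My = [[0, -1j * s, 0], [1j * s, 0, -1j * s], [0, 1j * s, 0]]
Mz = [[1, 0, 0], [0, 0, 0], [0, 0, -1]]

def mm(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

# [Mx, My] should equal i*Mz.
comm = [[mm(Mx, My)[i][j] - mm(My, Mx)[i][j] for j in range(3)] for i in range(3)]
ok_comm = all(abs(comm[i][j] - 1j * Mz[i][j]) < 1e-12
              for i in range(3) for j in range(3))

# M^2 = Mx^2 + My^2 + Mz^2 should equal twice the identity.
M2 = [[mm(Mx, Mx)[i][j] + mm(My, My)[i][j] + mm(Mz, Mz)[i][j] for j in range(3)]
      for i in range(3)]
ok_sq = all(abs(M2[i][j] - (2 if i == j else 0)) < 1e-12
            for i in range(3) for j in range(3))
print(ok_comm, ok_sq)  # True True
```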

Arfken.. The matrix A is diagonal and commutes with the matrix B. Show B is diagonal. The commutator in index notation is
\[
0 = A_{ij}B_{jk} - B_{ij}A_{jk} = A_{ii'}B_{i'k} - B_{ik'}A_{k'k},
\]
where our notation is a bit strange here. We are summing over $i'$ and $k'$; however, since $A$ is diagonal, there is only a single term in each of these sums. The primed notation is to emphasize the fact that $i' = i$ and $k' = k$, again because $A$ is diagonal. Therefore, since $i' = i$ and $k' = k$, we have $B_{i'k} = B_{ik'} = B_{ik}$ and can write
\[
0 = B_{ik}\,(A_{ii} - A_{kk}),
\]
which, if the diagonal elements of $A$ are distinct (assumed), implies that what is in parentheses is nonzero for $i \ne k$; hence $B_{ik} = 0$ for $i \ne k$ and $B$ is diagonal.

Arfken..8 The matrices A and B satisfy $A^2 = B^2 = \mathbf{1}$ and $\{A, B\} = AB + BA = 0$. Multiply the above anti-commutation relation by $A$ and take the trace:
\[
0 = \operatorname{tr}(A^2B + ABA) = \operatorname{tr}(B) + \operatorname{tr}(A^2B) = 2\operatorname{tr}(B),
\]
where we have used $A^2 = \mathbf{1}$ and the cyclic property of the trace; thus $\operatorname{tr}(B) = 0$. A virtually identical calculation (multiplying by $B$ instead) shows the same thing for $\operatorname{tr}(A)$.

Arfken.. The matrices A and B are orthogonal: $A^T A = \mathbf{1}$ and $B^T B = \mathbf{1}$. Consider the transpose of their product in index notation:
\[
\left[(AB)^T\right]_{ij} = (AB)_{ji} = A_{jk}B_{ki} = (B^T)_{ik}(A^T)_{kj} = (B^T A^T)_{ij}.
\]
Thus we can now write
\[
(AB)^T = B^T A^T = B^{-1} A^{-1}.
\]
If we multiply this by $AB$ from the left, we get the identity, which establishes $(AB)^T$ as the inverse of $AB$ and hence $AB$ as an orthogonal matrix.

Arfken..8 In index notation, we can write our symmetric matrix as $(S)_{ij} = s_{ij} = s_{ji}$ and the anti-symmetric matrix as $(A)_{ij} = a_{ij} = -a_{ji}$. The trace is then
\[
\operatorname{tr}(SA) = s_{ij}\,a_{ji} = (s_{ji})(-a_{ij}) = -s_{ji}\,a_{ij} = -\operatorname{tr}(SA),
\]
and we have the trace equal to its negative; thus it is zero.
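The last result is easy to illustrate numerically (a sketch of my own, not part of the solutions; the size $n=4$ and the use of the symmetric/antisymmetric parts of a random matrix are arbitrary choices):

```python
import random

# tr(SA) vanishes when S is symmetric and A is antisymmetric; check it for the
# symmetric and antisymmetric parts of a random matrix.

n = 4
random.seed(0)
M = [[random.random() for _ in range(n)] for _ in range(n)]
S = [[(M[i][j] + M[j][i]) / 2 for j in range(n)] for i in range(n)]  # S^T = S
A = [[(M[i][j] - M[j][i]) / 2 for j in range(n)] for i in range(n)]  # A^T = -A

# tr(SA) = sum_{i,j} S_ij A_ji
trace_SA = sum(S[i][j] * A[j][i] for i in range(n) for j in range(n))
print(abs(trace_SA) < 1e-12)  # True
```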

Arfken..9 For a similarity transformation, we have $A' = B^{-1} A B$. Taking the trace of this, we have
\[
\operatorname{tr} A' = \operatorname{tr}\!\left(B^{-1} A B\right) = \operatorname{tr}\!\left(B B^{-1} A\right) = \operatorname{tr}(A),
\]
and the trace of a matrix is invariant under similarity transformations.

Arfken.. For a similarity transformation, we have $A' = B^{-1} A B$. Taking the determinant of this, we have
\[
\det A' = \det\!\left(B^{-1} A B\right) = \left(\det B^{-1}\right)(\det A)(\det B) = (\det B)^{-1}(\det B)(\det A) = \det A,
\]
and the determinant of a matrix is invariant under similarity transformations.
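Both invariances can be confirmed numerically. The sketch below (my own aside, not part of the solutions; the perturbed-identity choice of $B$ and the helper names are arbitrary) conjugates a random $3\times 3$ matrix and compares trace and determinant before and after:

```python
import random

n = 3
random.seed(1)
A = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
# B = identity plus a small perturbation, so it is safely invertible.
B = [[(1.0 if i == j else 0.0) + 0.1 * random.uniform(-1, 1) for j in range(n)]
     for i in range(n)]

def mm(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def det3(M):
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

def inv3(M):
    # Inverse via the adjugate: signed cofactors from cyclic indices, transposed.
    d = det3(M)
    cof = [[M[(i + 1) % 3][(j + 1) % 3] * M[(i + 2) % 3][(j + 2) % 3]
          - M[(i + 1) % 3][(j + 2) % 3] * M[(i + 2) % 3][(j + 1) % 3]
            for j in range(3)] for i in range(3)]
    return [[cof[j][i] / d for j in range(3)] for i in range(3)]

Aprime = mm(mm(inv3(B), A), B)          # similarity transform A' = B^{-1} A B
trace = lambda M: sum(M[i][i] for i in range(n))
same_trace = abs(trace(Aprime) - trace(A)) < 1e-9
same_det = abs(det3(Aprime) - det3(A)) < 1e-9
print(same_trace, same_det)  # True True
```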