
Chem 3502/4502 Physical Chemistry II (Quantum Mechanics) 3 Credits Spring Semester 2006 Christopher J. Cramer Lecture 5, January 27, 2006

Solved Homework (Homework for grading is also due today)

We are told that the probability of a random variable taking on a value between x and x + dx is

    P(x) = N e^{-a x^2}

Such a function is called a "gaussian" function, incidentally, and we will see this function many, many times during this course. Since we want P to be normalized, we must have

    1 = \int_{-\infty}^{\infty} N e^{-a x^2} \, dx = N \int_{-\infty}^{\infty} e^{-a x^2} \, dx = N \sqrt{\pi / a}

in which case we demonstrate that N must be \sqrt{a / \pi} to normalize the function (the solution to the integral comes from looking it up in a table). A sketch for a = 0.3, just as an example:

[Figure: plot of the normalized P(x) for a = 0.3; P rises from near 0 at x = -6 to a maximum of about 0.31 at x = 0 and falls symmetrically back to near 0 at x = 6.]
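As a quick sanity check, the normalization constant can be confirmed by numerical quadrature; the following is a minimal sketch, assuming NumPy and SciPy are available.

    # Numerically verify that N = sqrt(a/pi) normalizes P(x) = N exp(-a x^2).
    import numpy as np
    from scipy.integrate import quad

    a = 0.3                          # same value as the sketch above
    N = np.sqrt(a / np.pi)           # normalization constant derived above

    total, _ = quad(lambda x: N * np.exp(-a * x**2), -np.inf, np.inf)
    print(total)                     # ~1.0, i.e., P is normalized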

The points of inflection occur where the second derivative is equal to zero:

    \frac{\partial^2 P(x)}{\partial x^2} = \frac{\partial}{\partial x} \left( -2 a x N e^{-a x^2} \right) = 4 a^2 x^2 N e^{-a x^2} - 2 a N e^{-a x^2}

Setting the r.h.s. equal to zero requires

    0 = 2 a x^2 - 1

so that

    x = \pm \frac{1}{\sqrt{2a}}

For my graph above, where a = 0.3, that is x = ±1.29. (Note that these points define one standard deviation for a random variable having a normal distribution.)

Now, to compute the mean value of x, we must solve

    \langle x \rangle = \sqrt{\frac{a}{\pi}} \int_{-\infty}^{\infty} x \, e^{-a x^2} \, dx

While one could go through the math to determine the value of this integral, it is simpler to employ a bit of common sense. From the shape of the probability function, the probability of getting any value x is exactly equal to the probability of getting -x. In such a case, the average value of x will necessarily be zero. (Later in the course, we will see that parity arguments can also be used to come to this conclusion without explicitly evaluating the integral.)

The mean value of x^2, on the other hand, is computed from

    \langle x^2 \rangle = \sqrt{\frac{a}{\pi}} \int_{-\infty}^{\infty} x^2 \, e^{-a x^2} \, dx

Integral tables provide

    \int_{-\infty}^{\infty} x^{2n} e^{-a x^2} \, dx = \frac{1 \cdot 3 \cdot 5 \cdots (2n - 1)}{2^n a^n} \left( \frac{\pi}{a} \right)^{1/2}

In our case, n = 1, the square roots cancel one another, and we are left with simply ⟨x^2⟩ = (2a)^{-1}. In probability, ⟨x^2⟩ is called σ^2, and we indeed see that its square root is one standard deviation, consistent with our work thus far.

Finally, with respect to the question of why ⟨x^2⟩ is not equal to ⟨x⟩^2, that is fairly obvious. The former expectation value measures distance from the mean without respect to direction (which is to say, deviation to either left or right is weighted positive by virtue of squaring), while the latter is simply the square of the mean, not the deviation from it. In statistics, the difference between the two is referred to as the dispersion of the data.
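The same quadrature approach confirms the moments just derived; again a minimal sketch, assuming NumPy and SciPy.

    # Check <x> = 0, <x^2> = 1/(2a), and that the inflection points
    # x = ±1/sqrt(2a) sit one standard deviation from the mean.
    import numpy as np
    from scipy.integrate import quad

    a = 0.3
    N = np.sqrt(a / np.pi)
    P = lambda x: N * np.exp(-a * x**2)

    mean_x, _  = quad(lambda x: x * P(x), -np.inf, np.inf)
    mean_x2, _ = quad(lambda x: x**2 * P(x), -np.inf, np.inf)

    print(mean_x)                                # ~0: P is symmetric about 0
    print(mean_x2, 1 / (2 * a))                  # both ~1.667
    print(np.sqrt(mean_x2), 1 / np.sqrt(2 * a))  # both ~1.29 for a = 0.3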

Necessary Mathematical Tools

Complex numbers. Wave functions and operators can take on complex values, even though expectation values, i.e., the results of physical measurements, are always real numbers. Every complex number c can be represented as

    c = a + bi     (5-1)

where a and b are real numbers and i is the square root of -1 (the unit for the so-called imaginary part of complex numbers). We refer to the complex conjugate of c, which is written c*, as

    c^* = a - bi     (5-2)

The square modulus of the complex number c is

    c^* c = |c|^2 = (a - bi)(a + bi) = a^2 + b^2     (5-3)

Note that one way to view complex numbers is as vectors in a 2-dimensional cartesian coordinate system. The x axis represents the real component of c (written Re[c]) and the y axis the imaginary component of c (written Im[c]). In that case, it becomes clear that the square modulus of c is simply the square of the length of the corresponding vector in the complex plane. This is illustrated below.

[Figure: the complex plane, with x = Re[c] and y = Im[c]; the point (a, b) corresponds to c = a + bi, and the vector from the origin to (a, b) has length (a^2 + b^2)^{1/2}.]
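Python's built-in complex type makes eq. 5-3 easy to verify directly; a sketch, using c = 3 + 4i as an arbitrary example:

    c = 3 + 4j                    # a = 3, b = 4
    print(c.conjugate() * c)      # (25+0j): c* c
    print(c.real**2 + c.imag**2)  # 25.0:    a^2 + b^2
    print(abs(c)**2)              # 25.0:    squared length in the complex plane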

Vectors of complex numbers. Imagine that we have a vector whose elements are complex numbers. Dirac notation in quantum mechanics provides a shorthand connection between vectors and matrix algebra. We refer to a column vector of complex numbers as a ket, which is written as |τ⟩ and defined as

    |\tau\rangle \equiv (c_1, c_2, c_3, \ldots) \equiv \begin{bmatrix} c_1 \\ c_2 \\ c_3 \\ \vdots \end{bmatrix}     (5-4)

The term in parentheses is written like a vector specification in an n-dimensional coordinate system (n may be infinite) and the term in brackets is a matrix of n rows and 1 column (also called a column vector).

We may define the analog of the square modulus for such a column vector. In general, the square of the length of a vector may be computed as the product of the transpose of the vector with itself. The transpose of a column vector is defined as the row vector having the same elements (in order) as the column vector. We might write

    |\tau\rangle^{\mathrm{T}} = [\, c_1 \;\; c_2 \;\; c_3 \;\; \cdots \,]     (5-5)

The transpose definition is true in reverse as well (i.e., the column vector is the transpose of the row vector; in general, the transpose of a matrix with an arbitrary number of rows and columns is defined to be the matrix formed by converting every i,j-th element of the original matrix to the j,i-th element of a new matrix). The rules of matrix multiplication define that when an m × n matrix (where m defines the number of rows and n the number of columns) multiplies an n × l matrix (n must be the same for both matrices), the result is an m × l matrix. So, for a row vector (1 row, n columns) multiplying its transpose column vector (n rows, 1 column), one obtains a 1 × 1 matrix, which is to say a single number, and this defines the square modulus of the vector.
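A sketch of these shape rules with NumPy arrays (the 3-element real vector is an arbitrary example):

    import numpy as np

    v = np.array([[1.0], [2.0], [3.0]])  # a 3 x 1 column vector (a ket)
    print(v.T @ v)                       # 1 x 1 matrix [[14.]] = 1^2 + 2^2 + 3^2

    M = np.ones((2, 3))                  # a 2 x 3 matrix
    print((M @ v).shape)                 # (2, 1): an (m x n)(n x 1) product is m x 1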

Note that, as discussed so far, we would have

    |\tau\rangle^{\mathrm{T}} \, |\tau\rangle = c_1^2 + c_2^2 + c_3^2 + \cdots     (5-6)

This is fine if all the values c are real numbers, but if they are complex numbers, the square of a complex number is still generally complex. To get a real number, we need to use square moduli instead of simple squares. So, for arbitrary kets, we define the bra, written ⟨τ|, as the complex-conjugate transpose, or adjoint, of the ket, i.e.,

    \langle\tau| \equiv [\, c_1^* \;\; c_2^* \;\; c_3^* \;\; \cdots \,]     (5-7)

in which case the square modulus of tau is well defined as

    \langle\tau|\tau\rangle = c_1^* c_1 + c_2^* c_2 + c_3^* c_3 + \cdots     (5-8)

When ⟨τ|τ⟩ = 1, the vector τ is said to be normalized. Note that the general notation for the adjoint of an arbitrary matrix A is A†.

In general, we use this notation because it is convenient for wave functions. Recall that a generic solution to the time-dependent Schrödinger equation is

    \Psi(x, y, z, t) = \sum_{n=1}^{\infty} c_n \, \psi_n(x, y, z) \, e^{-i E_n t / \hbar}     (5-9)

It is much quicker to write this as |Ψ⟩, where it is implicit in the notation that each i-th element of the vector is the i-th element of the sum. To indicate why this is particularly useful, we need to digress for a bit to discuss the properties of quantum mechanical operators.
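To make eqs. 5-6 through 5-8 concrete before moving on, a sketch with NumPy, using an arbitrary two-element complex ket; note how the plain transpose gives a complex number, while the adjoint gives the real square modulus:

    import numpy as np

    ket = np.array([[1 + 1j], [2 - 1j]])     # column vector |tau>
    bra = ket.conj().T                       # row vector <tau|, the adjoint

    print((ket.T @ ket).item())              # (3-2j): eq. 5-6, still complex
    print((bra @ ket).item())                # (7+0j): eq. 5-8, a real square modulus

    ket_n = ket / np.sqrt((bra @ ket).item().real)
    print((ket_n.conj().T @ ket_n).item())   # (1+0j): the normalized ket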

Hermitian Operators

Recall that an operator A acts on a wave function to deliver the value of an observable a according to

    A \Psi = a \Psi     (5-10)

where Ψ is called an eigenfunction of A and a is its associated eigenvalue. Let us now multiply both sides of eq. 5-10 from the left by Ψ* and integrate to obtain

    \int \Psi^* (A \Psi) \, dr = \int \Psi^* (a \Psi) \, dr = a \int \Psi^* \Psi \, dr = a     (5-11)

where dr is the volume element of a space of arbitrary dimensions. Since a is just a number (indeed, a real number, since expectation values of observables must be real numbers) it comes outside the integral, and normalization of the wave function provides the final result. Indeed, this operationally is how one computes a (see last lecture's section on expectation values).

Now, let us consider the special case where it is also true that

    A^* \Psi^* = a \Psi^*     (5-12)

An operator satisfying both eqs. 5-10 and 5-12 is said to be Hermitian. It is trivial to show, then, that the following holds true for a Hermitian operator:

    \int \Psi^* (A \Psi) \, dr = \int \Psi (A \Psi)^* \, dr = \left[ \int \Psi^* (A \Psi) \, dr \right]^*     (5-13)

Let us consider the case where Ψ is the sum of two arbitrary functions,

    \Psi = \Psi_1 + \Psi_2     (5-14)

If we plug this expression into the central equality in eq. 5-13, we have

    \int (\Psi_1 + \Psi_2)^* \, [A (\Psi_1 + \Psi_2)] \, dr = \int (\Psi_1 + \Psi_2) \, [A (\Psi_1 + \Psi_2)]^* \, dr     (5-15)

which may be expanded to

    \int \Psi_1^* A \Psi_1 \, dr + \int \Psi_1^* A \Psi_2 \, dr + \int \Psi_2^* A \Psi_1 \, dr + \int \Psi_2^* A \Psi_2 \, dr
        = \int \Psi_1 (A \Psi_1)^* \, dr + \int \Psi_1 (A \Psi_2)^* \, dr + \int \Psi_2 (A \Psi_1)^* \, dr + \int \Psi_2 (A \Psi_2)^* \, dr     (5-16)

From the central equality of eq. 5-13, we may eliminate the first and last terms on both sides, as they are equal to one another, leaving us with

    \int \Psi_1^* A \Psi_2 \, dr + \int \Psi_2^* A \Psi_1 \, dr = \int \Psi_1 (A \Psi_2)^* \, dr + \int \Psi_2 (A \Psi_1)^* \, dr     (5-17)

We can now do exactly the same thing, but with a different definition of Ψ, namely

    \Psi = \Psi_1 + i \Psi_2     (5-18)

In this case, we will have

    \Psi^* = (\Psi_1 + i \Psi_2)^* = \Psi_1^* - i \Psi_2^*     (5-19)

If we follow the same approach used in eq. 5-16, we have

    \int \Psi_1^* A \Psi_1 \, dr + i \int \Psi_1^* A \Psi_2 \, dr - i \int \Psi_2^* A \Psi_1 \, dr + \int \Psi_2^* A \Psi_2 \, dr
        = \int \Psi_1 (A \Psi_1)^* \, dr - i \int \Psi_1 (A \Psi_2)^* \, dr + i \int \Psi_2 (A \Psi_1)^* \, dr + \int \Psi_2 (A \Psi_2)^* \, dr     (5-20)

where it is quite critical to be careful in keeping track of complex-conjugate sign changes. Again, we can eliminate the first and last terms on each side based on the central equality of eq. 5-13. If we also divide all remaining terms by i, we are left with

    \int \Psi_1^* A \Psi_2 \, dr - \int \Psi_2^* A \Psi_1 \, dr = -\int \Psi_1 (A \Psi_2)^* \, dr + \int \Psi_2 (A \Psi_1)^* \, dr     (5-21)

Now, if we add eq. 5-17 to eq. 5-21, we lose two terms by cancellation and, after eliminating a common factor of 2, we are left with

    \int \Psi_1^* A \Psi_2 \, dr = \int \Psi_2 (A \Psi_1)^* \, dr     (5-22)

Finally, since the integrand is just a product of functions, we may reorder it and note that (AΨ_1)* = A*Ψ_1* to arrive at the turnover rule, namely

    \int \Psi_1^* A \Psi_2 \, dr = \int (A \Psi_1)^* \Psi_2 \, dr = \int \Psi_2 \, A^* \Psi_1^* \, dr     (5-23)

The turnover rule for Hermitian operators is tremendously useful in various manipulations of integral quantities in quantum mechanics.
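If we replace the integral over dr with a sum over the components of finite vectors (anticipating the matrix representation of operators introduced below), the turnover rule can be checked numerically; a minimal sketch, assuming NumPy, with a randomly generated Hermitian matrix:

    # Check the turnover rule <psi1|A psi2> = <A psi1|psi2> for Hermitian A.
    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
    A = A + A.conj().T                 # force A to be Hermitian (A = A†)

    psi1 = rng.normal(size=4) + 1j * rng.normal(size=4)
    psi2 = rng.normal(size=4) + 1j * rng.normal(size=4)

    lhs = np.vdot(psi1, A @ psi2)      # sum_i psi1_i* (A psi2)_i
    rhs = np.vdot(A @ psi1, psi2)      # sum_i (A psi1)_i* psi2_i
    print(np.allclose(lhs, rhs))       # True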

Eigenvalues and Eigenvectors of Hermitian Operators

Let us apply the turnover rule to a single eigenfunction Ψ of A. Note that

    \int \Psi^* A \Psi \, dr = \int (A \Psi)^* \Psi \, dr
    \int \Psi^* (a \Psi) \, dr = \int (a \Psi)^* \Psi \, dr
    a \int \Psi^* \Psi \, dr = a^* \int \Psi^* \Psi \, dr
    a = a^*

For a to be equal to a*, a must be real. This is no surprise, of course, as we defined Hermitian operators to have real numbers as eigenvalues, but it is important to emphasize again that this is exactly what we want for quantum mechanical operators: that they produce only real expectation values. Hence, we have a fundamental postulate that all quantum mechanical observables have associated with them a Hermitian operator.

Let us consider another property of Hermitian operators. Suppose our operator A has two different eigenfunctions with associated eigenvalues according to

    A \Psi_i = a_i \Psi_i \quad \text{and} \quad A \Psi_j = a_j \Psi_j     (5-24)

Now, consider multiplication on the left side of the first equality by Ψ_j* followed by integration, that is,

    \int \Psi_j^* A \Psi_i \, dr = a_i \int \Psi_j^* \Psi_i \, dr     (5-25)

And, by the turnover rule (using the fact that a_j is real),

    \int \Psi_j^* A \Psi_i \, dr = \int (A \Psi_j)^* \Psi_i \, dr = a_j \int \Psi_j^* \Psi_i \, dr     (5-26)

So, if we subtract eq. 5-25 from eq. 5-26, we have

    (a_j - a_i) \int \Psi_j^* \Psi_i \, dr = 0     (5-27)

For this equality to be true, either the two eigenvalues must be equal to one another (that is, the eigenvalues are "degenerate"), or the integral must be zero. When the integral is zero, we say that the two functions are "orthogonal". If they are also normalized (remember that it is trivial to normalize a wave function), we say the functions are "orthonormal" and we write

    \int \Psi_j^* \Psi_i \, dr = \delta_{ij}     (5-28)

where δ is the "Kronecker delta", which is defined as

    \delta_{ij} = \begin{cases} 1, & i = j \\ 0, & i \neq j \end{cases}     (5-29)
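Both results (real eigenvalues and orthonormal eigenfunctions) have exact finite-dimensional analogs, which can be checked with NumPy's Hermitian eigensolver; a sketch:

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
    A = A + A.conj().T                    # a random Hermitian matrix

    vals, vecs = np.linalg.eigh(A)        # eigensolver for Hermitian matrices
    print(vals)                           # eigenvalues come out real
    overlaps = vecs.conj().T @ vecs       # matrix of <psi_j|psi_i> values
    print(np.allclose(overlaps, np.eye(4)))  # True: the Kronecker delta of eq. 5-29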

Note that if eq. 5-27 is satisfied because the eigenvalues are degenerate, we can still construct wave functions using Ψ_i and Ψ_j that will be eigenfunctions of A and yet be orthogonal (this is the homework for next class). Thus, we can state as a proven principle that the eigenfunctions of a Hermitian operator are orthonormal. This property will be very useful for simplifying integrals later on.

We may take advantage of this immediately in a notational sense. Consider the case where a wave function is expressed as a linear combination of orthonormal functions. That is,

    \Psi = \sum_{i=1}^{n} c_i \psi_i     (5-30)

In that case, we would have

    \int \Psi^* \Psi \, dr = \int \left( \sum_{i=1}^{n} c_i \psi_i \right)^* \left( \sum_{j=1}^{n} c_j \psi_j \right) dr
        = \sum_{i,j=1}^{n} c_i^* c_j \int \psi_i^* \psi_j \, dr
        = \sum_{i,j=1}^{n} c_i^* c_j \, \delta_{ij}
        = |c_1|^2 + |c_2|^2 + \cdots + |c_n|^2     (5-31)
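A numerical sketch of eq. 5-31, assuming NumPy: expand a vector in an orthonormal basis (here the columns of a unitary matrix from a QR factorization, with arbitrary example coefficients) and compare ⟨Ψ|Ψ⟩ to the sum of squared coefficient moduli.

    import numpy as np

    rng = np.random.default_rng(2)
    Q, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))
    # the columns of Q form an orthonormal basis {psi_i}

    c = np.array([0.5 + 0.1j, -0.3j, 0.8])   # expansion coefficients c_i
    psi = Q @ c                               # Psi = sum_i c_i psi_i

    print(np.vdot(psi, psi).real)             # <Psi|Psi>
    print(np.sum(np.abs(c)**2))               # |c_1|^2 + |c_2|^2 + |c_3|^2: identical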

Notice that the final result in eq. 5-31 is exactly the same as for multiplication of a column vector by its adjoint row vector when the elements of the vector are the coefficients of the wave function. This is the key connection between our wave function notation thus far and the matrix notation we discussed at the beginning. Thus, in Dirac notation, we simplify our notation by defining

    \int \Psi^* \Psi \, dr = \langle \Psi | \Psi \rangle     (5-32)

In addition, when there is an operator involved, we may consider the matter from the context of matrix algebra. Our final integral is just a number, so we still want the result of the operator acting on the column vector to be a column vector times a scalar (the scalar is the eigenvalue, see eq. 5-11). This implies that the operator must be an n × n matrix, since an n × n matrix multiplies an n × 1 column vector to give an n × 1 column vector. This idea of a matrix representation of an operator will prove useful much later in the class, but for now is simply a point of general interest. More practically for the moment, let us define the Dirac notation

    \int \Psi^* A \Psi \, dr = \langle \Psi | A | \Psi \rangle     (5-33)

While we've written eqs. 5-32 and 5-33 with a bra that is the specific adjoint of the ket, that need not generally be the case. For instance, we could write eq. 5-28 as

    \langle \psi_j | \psi_i \rangle = \delta_{ij}     (5-34)

It is critical to remember always that Dirac notation represents integrals, but those integrals are just numbers (zero, one, π/8, 0.31456..., whatever).

Commutators

Notice that when two operators are applied in sequence, the order of the sequence may matter. For instance, if we consider the operators

    A \equiv x \quad \text{and} \quad B \equiv \frac{\partial}{\partial x}     (5-35)

and their operation on the function x^3, we find

    A[B(x^3)] = A[3x^2] = 3x^3     (5-36)

and

    B[A(x^3)] = B[x^4] = 4x^3     (5-37)

These results are not equal. We define the commutator of two operators as

    [A, B] = AB - BA     (5-38)

The commutator is itself an operator, of course. Indeed, from eqs. 5-36 and 5-37, we have

    [A, B] \, x^3 = -x^3     (5-39)

or, [A,B] = -1 (a rather simple operator, which changes the sign of its operand; it is trivial to prove that this is true for any function, not just x^3). From eq. 5-38, it is obvious that [B,A] = -[A,B]. When [A,B] = 0, the operators A and B are said to commute.

Commuting operators have several important properties; here is a critical executive summary: If Hermitian operators A and B commute, then

    \langle \Psi_i | B | \Psi_j \rangle = b_i \, \delta_{ij}     (5-40)

where the elements of {Ψ} are a necessarily existing set of orthonormal eigenfunctions that are common to both A and B, and the elements of {b} are the associated eigenvalues of B (the full set of eigenvalues is called the spectrum of B).
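The claim that [A,B] = -1 holds for any function, not just x^3, can be checked symbolically; a sketch assuming SymPy, applied to a generic f(x):

    # Commutator of A = x (multiply by x) and B = d/dx, eqs. 5-35 to 5-39.
    import sympy as sp

    x = sp.symbols('x')
    f = sp.Function('f')(x)

    AB = x * sp.diff(f, x)         # A B f = x f'
    BA = sp.diff(x * f, x)         # B A f = (x f)' = f + x f'
    print(sp.simplify(AB - BA))    # -f(x), i.e., [A,B] = -1 for any f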

Homework

To be solved in class: Given eqs. 5-10 and 5-12, where a is a real number, prove the equalities in eq. 5-13.

To be turned in for possible grading Feb. 3: Consider a pair of degenerate, normalized eigenfunctions, φ_1 and φ_2, of a Hermitian operator A with common eigenvalue a. Show that the two new functions defined as u_1 = φ_1 and u_2 = φ_2 - Sφ_1 are orthogonal, provided that S is properly chosen (i.e., determine what value of S is required to enforce orthogonality). Show that u_1 and u_2 remain degenerate with common eigenvalue a.