
Random Matrix Theory (Sethna, "Entropy, Order Parameters, and Complexity", ex. 1.6, developed with Piet Brouwer) 2017, James Sethna, all rights reserved.

This is an IPython notebook. This hints file is unusually detailed, but only covers the computational parts (a & f) of the exercise: don't neglect the analytical portions! If you are familiar with Mathematica, feel free to skip parts you don't need, or start your own notebooks. Lectures about Python, useful both for beginners and experts, can be found at http://scipy-lectures.github.io.

I recommend installing the Anaconda distribution (https://store.continuum.io/cshop/academicanaconda):
Go to https://www.continuum.io/downloads
Pick your operating system (Windows, Mac OS X, Linux)
Follow the instructions for Python 3.5.

To make it run much faster (probably necessary for some assignments), add the MKL accelerations. Be sure not to pay for them! Get an academic subscription, using your Cornell (.edu) e-mail, from https://www.continuum.io/anaconda-academic-subscriptions-available:
Set up an account, using your Cornell e-mail.
Go to My Settings, under your name in the upper right-hand side of the Anaconda Cloud window.
Go to Add Ons, the lowest menu entry in the left-hand column.
Download the licenses, and follow the installation-instructions link just above the MKL license button (or go to http://docs.continuum.io/advanced-installation).

Open the notebook by (1) copying this file into a directory, (2) typing 'ipython notebook' in that directory, and (3) selecting the notebook. In later exercises, if you do animations, you may choose not to have the plots inline. Like Mathematica, you type in commands and then hit Shift-Return to execute them.

a = 3.
print(a)

Python is not a complete scientific environment: functions like log, sin, and exp are provided by external packages. The plotting package pylab supports many of these; '%pylab inline' puts the graphs into the notebook. We usually want more functionality than that, so we import a package called 'scipy' (an extension of 'numpy'). We'll also be using some of its linear algebra routines, which are in a sub-package 'scipy.linalg'.

%pylab inline
from scipy import *

One of the most active and unusual applications of ensembles is random matrix theory, used to describe phenomena in nuclear physics, mesoscopic quantum mechanics, and wave phenomena. Random matrix theory was invented in a bold attempt to describe the statistics of energy level spectra in nuclei. In many cases, the statistical behavior of systems exhibiting complex wave phenomena -- almost any correlations involving eigenvalues and eigenstates -- can be quantitatively modeled using ensembles of matrices with completely random, uncorrelated entries! The most commonly explored ensemble of matrices is the Gaussian orthogonal ensemble (GOE). Generating a member takes two steps:
Generate an NxN matrix whose elements are independent Gaussian random variables of zero mean and standard deviation one.
Add the matrix to its transpose to symmetrize it.

Start by finding out how Python generates random numbers. Type ?random to find out about scipy's random number generators. Try typing 'random.random()' a few times. Try calling it with an integer argument. Use 'hist' (really pylab.hist) to make a histogram of 1000 numbers generated by random.random. Is the distribution Gaussian, uniform, or exponential? The flag 'normed' tells hist not just to count the number of occurrences in each bin, but to divide by the width of the bin and the total number of counts (to approximate a normalized probability distribution).

?random
print(random.random())
print(random.random(6))
hist(random.random(1000), normed=True);

Use random.standard_normal to generate a Gaussian distribution of one thousand random numbers. Check that it is a standard Gaussian by using 'hist' and comparing against the Gaussian density:

print(random.standard_normal(...))
hist(random.standard_normal(...), normed=...);
x = arange(-3,3,0.01)
plot(x,(1./sqrt(2*pi))*exp(-x**2/2),'r')
show()

One final matrix operation: to generate a symmetric matrix of the GOE form, we'll want to add a matrix to its transpose. Generate a 2x3 matrix of normally distributed random numbers, and print it and its transpose.

m = random.standard_normal((2,3))
print(m)
print(transpose(m))

Now define a function GOE(N) (using 'def') which generates a GOE NxN matrix.
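As a side note, current NumPy and Matplotlib have changed some of these interfaces: hist's 'normed' flag has been replaced by 'density', and NumPy now recommends an explicit Generator object over the global random functions. A minimal sketch of the same checks in the newer style (using np.histogram to verify the normalization numerically, rather than plotting):

```python
import numpy as np

# Modern NumPy: an explicit, seedable Generator instead of the global 'random' functions.
rng = np.random.default_rng(0)

uniform_draws = rng.random(1000)          # uniform on [0, 1), like random.random(1000)
normal_draws = rng.standard_normal(1000)  # standard Gaussian, like random.standard_normal(1000)

# density=True divides counts by bin width and total count, approximating a
# normalized probability density (what 'normed=True' did in older Matplotlib).
density, edges = np.histogram(uniform_draws, bins=20, density=True)
print(np.sum(density * np.diff(edges)))   # total probability, ~1
```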

def GOE(N):
    """Creates an NxN element of the Gaussian Orthogonal Ensemble, by creating
    an NxN matrix of Gaussian random variables (using the random array function
    random.standard_normal([shape]) with [shape] = (N,N)) and adding it to its
    transpose (applying transpose to the matrix).

    Typing GOE(4) a few times, check that it's symmetric and random."""
    m = random....((...,...))
    m = m + transpose(m)
    return m

Test it by generating a 4x4 matrix. Is it symmetric?

GOE(4)

Now write a function GOE_Ensemble(num, N) generating an ensemble (a long list with 'num' elements) of GOE NxN matrices. Call GOE(N) to fill the list.

def GOE_Ensemble(num, N):
    """Creates a list "ensemble" of length num of NxN GOE matrices.

    You can start with an empty list and fill it with 'append':
        ensemble = []
        for n in range(num):
            ensemble.append(GOE(N))
        return ensemble
    or you can use the convenient 'list comprehension':
        ensemble = [GOE(N) for n in range(num)]

    Check that GOE_Ensemble(3,2) gives 3 2x2 symmetric matrices."""
    ensemble = ...
    return ensemble
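For reference, one possible completion of these two skeletons (a sketch using the numpy namespace explicitly, rather than the notebook's '%pylab inline' names):

```python
import numpy as np

def GOE(N):
    """One NxN member of the Gaussian Orthogonal Ensemble: an NxN matrix of
    independent standard Gaussians, added to its own transpose."""
    m = np.random.standard_normal((N, N))
    return m + m.T

def GOE_Ensemble(num, N):
    """A list of num independent NxN GOE matrices."""
    return [GOE(N) for n in range(num)]

sample = GOE(4)
print(np.allclose(sample, sample.T))    # True: symmetric by construction
ensemble = GOE_Ensemble(3, 2)
print(len(ensemble), ensemble[0].shape) # 3 (2, 2)
```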

Here you may want to generate an ensemble, and check whether the diagonal elements and the off-diagonal elements have different standard deviations, as you calculated in part (c). (This difference is part of the definition of the GOE ensemble, and is needed to make the ensemble rotation invariant.)

One of the most striking properties that large random matrices share is the distribution of level splittings. To look at eigenvalues of our symmetric matrices, we use 'eigvalsh'. We'll want them sorted, and we'll want to find the difference between the two middle-most eigenvalues. Notice that in Python arrays are indexed lambda = [lambda[0], ..., lambda[N-1]], so the middle two are at indices N//2 and N//2 - 1. Note that in the new version of Python dividing integers gives a floating point number (4/2 = 2.0), so we need to use N//2 to get an integer. (This is a good thing: having 2/10 = 0 causes lots of problems when you write routines that don't know whether their inputs are floats or ints.)

N = 4
midpoint = N//2
mat = GOE(N)
print(mat)
eig = sort(eigvalsh(mat))
print(eig)
print(eig[N//2] - eig[N//2-1])

def CenterEigenvalueDifferences(ensemble):
    """For each symmetric matrix in the ensemble, calculates the difference
    between the two center eigenvalues, and returns the differences as an array.

    Given an ensemble of symmetric matrices:
    - find their size N, using the first member ensemble[0]
      (len(m) gives the number of rows in m)
    - start with an empty list of differences
    - for each m in ensemble:
        find the eigenvalues (eigvalsh(m) gives the eigenvalues of symmetric m)
        sort them (sort(e) sorts a list from smallest to largest)
        append eigenvalue[N//2] - eigenvalue[N//2-1] to the list
    (Note: Python lists go from 0 ... N-1, so for N=2 this gives the difference
    between the only two eigenvalues.)

    Check that
        ensemble = GOE_Ensemble(3,2)
        CenterEigenvalueDifferences(ensemble)
    gives three positive numbers that look like eigenvalue differences of the
    2x2 matrices."""
    # Size of matrix
    N = len(...)
    diffs = []
    # Note that you can 'iterate' not only over integers, but over elements
    # of a list (for mat in ensemble: ...)
    for mat in ensemble:
        eigenvalues = sort(...)
        diffs.append(...)
    return diffs
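Filled in, the routine might look like the following (a sketch; GOE and GOE_Ensemble are repeated so the block stands alone):

```python
import numpy as np

def GOE(N):
    m = np.random.standard_normal((N, N))
    return m + m.T

def GOE_Ensemble(num, N):
    return [GOE(N) for n in range(num)]

def CenterEigenvalueDifferences(ensemble):
    """For each symmetric matrix in the ensemble, return the difference
    between its two middle eigenvalues."""
    N = len(ensemble[0])          # number of rows of the first member
    diffs = []
    for mat in ensemble:
        # eigvalsh handles symmetric (Hermitian) matrices; sort smallest to largest
        eigenvalues = np.sort(np.linalg.eigvalsh(mat))
        diffs.append(eigenvalues[N // 2] - eigenvalues[N // 2 - 1])
    return diffs

diffs = CenterEigenvalueDifferences(GOE_Ensemble(3, 2))
print(diffs)                      # three positive eigenvalue gaps
```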

Check that
    ensemble = GOE_Ensemble(3,2)
    CenterEigenvalueDifferences(ensemble)
gives three positive numbers that look like eigenvalue differences of 2x2 matrices, and check that
    hist(CenterEigenvalueDifferences(ensemble), bins=30, normed=True);
gives three spikes at the three eigenvalue differences.

ensemble = GOE_Ensemble(...)
print(CenterEigenvalueDifferences(...))
hist(CenterEigenvalueDifferences(...), bins=50, normed=True);

Now try it with 10000 or more. Notice the smooth distribution, with a linear tail near zero.

M = 10000
N = 2
ensemble = GOE_Ensemble(M, N)
diffs = CenterEigenvalueDifferences(ensemble)
hist(diffs, bins=50, normed=True);

What is this dip in the eigenvalue splitting probability near zero? It is called level repulsion. Wigner's surmise (meaning 'guess') was that this distribution was universal, that is, independent of the ensemble of matrices. Clearly another ensemble, with all our matrix elements multiplied by two, will have eigenvalue differences twice as big. Wigner surmised that the eigenvalue differences, divided by the mean of the differences, would be universal. Plot these normalized centered differences against your analytical prediction for 2x2 matrices (see text), which should be rho(s) = (pi s / 2) exp(-pi s^2 / 4).

hist(diffs/mean(diffs), bins=50, normed=True);
s = arange(0.,3.,0.01)
rhoWigner = ...
plot(s, rhoWigner, 'r');
title('2x2 matrix eig diffs')

Try N=4 and N=10; do they look similar?

M = 10000
N = 4
ensemble = ...
diffs = ...
hist(...);
plot(...);
title(...)

M = 10000
N = 10
ensemble = ...
diffs = ...
hist(...);
plot(...);
title(...)

Note that we're typing the same commands several times. This is a bad programming practice (a 'bad smell'): whenever you find yourself writing the same code more than a couple of lines long, write it as a subroutine. That way, when you fix an error, it gets fixed everywhere you use it.

def CompareEnsembleWigner(ensemble):
    diffs = ...
    hist(...);
    plot(...)

CompareEnsembleWigner(ensemble)
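The comparison can also be checked numerically rather than by eye. The sketch below assumes the 2x2 GOE form of the Wigner surmise from the text, rho(s) = (pi s/2) exp(-pi s^2/4), and verifies that it is a normalized distribution with unit mean, matching the mean-normalized spacings:

```python
import numpy as np

def GOE(N):
    m = np.random.standard_normal((N, N))
    return m + m.T

def CenterEigenvalueDifferences(ensemble):
    N = len(ensemble[0])
    eigs = [np.sort(np.linalg.eigvalsh(m)) for m in ensemble]
    return np.array([e[N // 2] - e[N // 2 - 1] for e in eigs])

def rhoWigner(s):
    """Wigner's surmise for the GOE level-spacing distribution (unit mean)."""
    return (np.pi * s / 2.0) * np.exp(-np.pi * s**2 / 4.0)

np.random.seed(0)
diffs = CenterEigenvalueDifferences([GOE(2) for _ in range(10000)])
s = diffs / np.mean(diffs)     # normalized spacings have mean 1 by construction
print(np.mean(s))              # ~1.0

# Riemann-sum checks of the surmise itself:
ds = 0.001
grid = np.arange(ds, 10.0, ds)
print(np.sum(rhoWigner(grid)) * ds)          # ~1: total probability
print(np.sum(grid * rhoWigner(grid)) * ds)   # ~1: mean spacing
```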

Your 2x2 formula is pretty good, but it turns out to be up to 2% off for larger matrices; Wigner was wrong about that. But he was correct that the eigenvalue splitting is universal, if you consider large matrices. We can test this by trying a different ensemble. For example, let's try random symmetric matrices filled with +-1. In the past, we've created these matrices the hard way. But let's just take the signs of the matrix elements generated by the GOE ensemble!

sign(GOE_Ensemble(4,3))

Define a PM1 ensemble (Plus or Minus 1):

def PM1_Ensemble(num, N):
    """Generates a +-1 ensemble, as for GOE_Ensemble above.

    Use with CompareEnsembleWigner to test that N=2 looks different from the
    Wigner surmise, but that larger N looks close to the GOE ensemble. This is
    a powerful truth: the eigenvalue differences of very general classes of
    symmetric NxN matrices all have the same probability distribution as N
    goes to infinity. This is what we call universality."""
    return sign(...)

Try comparing the splittings for 2x2 matrices with the prediction.

CompareEnsembleWigner(PM1_Ensemble(10000,2))

Is it well approximated by the Wigner surmise? Why not? (Hint: how many different symmetric 2x2 matrices of +-1s are there?)

Try it with 4x4 and 10x10 matrices.

CompareEnsembleWigner(...)

10x10 should work better, except for a funny bump at zero.
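One way to fill this in (a sketch, taking entrywise signs of GOE members as suggested; sign preserves the symmetry because the GOE member is already symmetric):

```python
import numpy as np

def GOE(N):
    m = np.random.standard_normal((N, N))
    return m + m.T

def PM1_Ensemble(num, N):
    """An ensemble of num symmetric NxN matrices with entries +-1,
    built from the entrywise signs of GOE members."""
    return [np.sign(GOE(N)) for n in range(num)]

ensemble = PM1_Ensemble(3, 2)
print(ensemble[0])   # symmetric, entries +1 or -1
```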

CompareEnsembleWigner(...)

The GOE ensemble has some nice statistical properties (see text). This is our first example of an emergent symmetry. Many different ensembles of symmetric matrices, as the size N goes to infinity, have eigenvalue and eigenvector distributions that are invariant under orthogonal transformations, even though the original matrix ensemble did not have this symmetry. Similarly, rotational symmetry emerges in random walks on the square lattice as the number of steps goes to infinity, and also emerges on long length scales for Ising models at their critical temperatures.