Constructing and (some) classification of integer matrices with integer eigenvalues

Constructing and (some) classification of integer matrices with integer eigenvalues
Chris Towse* and Eric Campbell (Scripps College / Pomona College)
January 6, 2017

The question: An example

Solve a linear system: x'(t) = Ax.

Answer: x = C_1 v_1 e^{λ_1 t} + C_2 v_2 e^{λ_2 t} + ... + C_n v_n e^{λ_n t} ... or something similar.

An example:

A = [ 2 2 1 ]
    [ 1 3 1 ]
    [ 1 2 2 ]

Eigenvalues are λ = 1, 1, 5, so the characteristic polynomial is χ_A(x) = x^3 − 7x^2 + 11x − 5.

Reasonable?

Set-up

GOAL: Construct a matrix A with relatively small integer entries and with relatively small integer eigenvalues. (IMIE) Such an A will be a good example.

Note: Martin and Wong, "Almost all integer matrices have no integer eigenvalues."

Recall/notation: A = PDP^{-1}, where

P = [ u_1 u_2 ... u_n ]

has columns which form a basis of eigenvectors for A, and D = diag(λ_1, λ_2, ..., λ_n) is the diagonal matrix with the corresponding eigenvalues on the diagonal.

Our example: with

A = [ 2 2 1 ]
    [ 1 3 1 ]
    [ 1 2 2 ]

take D = diag(1, 1, 5) and

P = [  2  1  1 ]
    [ -1  0  1 ]
    [  0 -1  1 ]

Then A = PDP^{-1}.
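A quick sanity check of the diagonalization above, written as a small SymPy sketch. The signs in P are reconstructed from the surrounding slides, so treat this as illustrative rather than the authors' own code.

```python
# Verify A = P D P^{-1} for the running example (exact arithmetic via SymPy).
from sympy import Matrix, diag

A = Matrix([[2, 2, 1], [1, 3, 1], [1, 2, 2]])
P = Matrix([[2, 1, 1], [-1, 0, 1], [0, -1, 1]])   # columns: eigenvectors of A
D = diag(1, 1, 5)

assert P * D * P.inv() == A
print(A.eigenvals())   # {1: 2, 5: 1}
```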

First approach: Failure

Idea: With the same P, try PDP^{-1} for D = diag(1, 3, 5). Then

P diag(1, 3, 5) P^{-1} = [ 5/2  3  -1/2 ]
                         [  1   3    1  ]
                         [ 1/2  1   7/2 ]

The problem is that

P^{-1} = (1/4) [ 1 -2  1 ]  =  (1/det P) P^adj
               [ 1  2 -3 ]
               [ 1  2  1 ]

Or, really, that det P = 4.

Solutions:

1. Choose P with det P = ±1. (New problem!)
2. If k = det P, choose eigenvalues that are all multiples of k. (But probably avoid 0.)

But actually...

First approach: Improvement, modulo k

Proposition (Eigenvalues congruent to b modulo k). Let P be an n × n invertible integer matrix with k = det P ≠ 0. Suppose every λ_i ≡ b (mod k). (So all the λ_i's are congruent to each other.) Then A = P diag(λ_1, ..., λ_n) P^{-1} is integral.

For example,

P diag(1, -3, 5) P^{-1} = [ 1 0  4 ]
                          [ 1 3  1 ]
                          [ 2 4 -1 ]

which has χ(x) = x^3 − 3x^2 − 13x + 15 = (x − 1)(x + 3)(x − 5).

Good?
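The proposition is easy to turn into a small construction routine. The sketch below (SymPy; the helper name build_imie is my own, not from the talk) builds P diag(λ_1, ..., λ_n) P^{-1} after checking that the chosen eigenvalues agree modulo det P.

```python
# "Eigenvalues congruent to b mod k" construction: any invertible integer P
# plus eigenvalues that agree mod det(P) yields an integer matrix.
from sympy import Matrix, diag

def build_imie(P, lams):
    k = P.det()
    assert k != 0 and len({l % k for l in lams}) == 1, "eigenvalues must agree mod det(P)"
    return P * diag(*lams) * P.inv()

P = Matrix([[2, 1, 1], [-1, 0, 1], [0, -1, 1]])   # det P = 4
A = build_imie(P, [1, -3, 5])                     # 1 == -3 == 5 (mod 4)
print(A)              # an integer matrix
print(A.eigenvals())  # {1: 1, -3: 1, 5: 1}
```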

Eric Campbell (Pomona) created a web app: http://ericthewry.github.io/integer matrices/

Input: eigenvectors (really P). Select good eigenvalues. Output: an IMIE and its characteristic polynomial.

A different approach

A useful property

Proposition. If X is invertible then XY and YX have the same characteristic polynomial.

Idea [Renaud, 1983]: Take X, Y integer matrices. If YX is upper (or lower) triangular then its eigenvalues are its diagonal entries, hence integers. So XY will be a good example.

If X and Y are n × n, we are imposing n(n−1)/2 orthogonality conditions on the rows of Y and the columns of X.

Let X = [ 1 2 ]   and   Y = [ 1 2 ]
        [-3 2 ]             [ 3 1 ]

Then

XY = [ 7  4 ]   has the same eigenvalues as   YX = [ -5 6 ]
     [ 3 -4 ]                                      [  0 8 ]

namely λ = −5, 8.

Starting with X, the only condition on Y is that the second row of Y must be orthogonal to the first column of X.
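A minimal SymPy check of the XY / YX trick; the sign placement in X and Y is one consistent reconstruction of the slide's example (the transcript dropped the minus signs), so take the specific numbers as an assumption.

```python
# XY and YX share a characteristic polynomial when X is invertible;
# choosing Y so that YX is triangular forces integer eigenvalues.
from sympy import Matrix, symbols

x = symbols('x')
X = Matrix([[1, 2], [-3, 2]])
Y = Matrix([[1, 2], [3, 1]])
print(Y * X)                          # upper triangular, diagonal -5, 8
print((X * Y).charpoly(x).as_expr())  # x**2 - 3*x - 40
print((Y * X).charpoly(x).as_expr())  # same polynomial
```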

A better version

Better fact:

Theorem (Folk Theorem). Let U be an r × s matrix and V be an s × r matrix, with r ≥ s. Then χ_UV(x) = x^{r−s} χ_VU(x).

Proof idea (due to Horn and Johnson): the block matrices

[ UV 0 ]    and    [ 0  0  ]
[ V  0 ]           [ V  VU ]

are similar via

[ I_r  U  ]
[ 0   I_s ]

In other words, UV and VU have nearly the same characteristic polynomial and nearly the same eigenvalues. The only difference: 0 is an eigenvalue of the larger matrix, repeated as necessary. For now, let us take U to be n × (n−1) and V to be (n−1) × n.

An example. Begin with

U = [ 1  2  3 ]        V = [  1  2  1  1 ]
    [ 0 -1  2 ]            [ -1  1  1  1 ]
    [ 1  2 -1 ]            [ -1  2  1 -1 ]
    [ 0 -2 -2 ]

Then

VU = [ 2  0  4 ]
     [ 0 -3 -4 ]
     [ 0  0  2 ]

with eigenvalues λ = 2, −3, 2, and

UV = [ -4 10  6  0 ]
     [ -1  3  1 -3 ]
     [  0  2  2  4 ]
     [  4 -6 -4  0 ]

So UV is an integer matrix with eigenvalues λ = 2, −3, 2 and λ = 0.
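A SymPy verification of this rectangular example. The entries of U and V carry one consistent choice of signs recovered from the slides (the transcript lost the minus signs), so the specific matrices are an assumption; the structural claim is what the code checks.

```python
# Folk Theorem in action: VU is triangular, so UV is an integer matrix whose
# eigenvalues are the diagonal of VU together with an extra 0.
from sympy import Matrix, symbols, expand

x = symbols('x')
U = Matrix([[1, 2, 3], [0, -1, 2], [1, 2, -1], [0, -2, -2]])   # 4 x 3
V = Matrix([[1, 2, 1, 1], [-1, 1, 1, 1], [-1, 2, 1, -1]])      # 3 x 4
print(V * U)                   # upper triangular, diagonal 2, -3, 2
print((U * V).eigenvals())     # {2: 2, -3: 1, 0: 1}
lhs = (U * V).charpoly(x).as_expr()
rhs = expand(x * (V * U).charpoly(x).as_expr())
print(lhs == rhs)              # True: chi_UV = x^(4-3) * chi_VU
```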

Writing U = [ u_1 u_2 u_3 ] (columns) and letting v_1, v_2, v_3 be the rows of V, we needed

v_2 · u_1 = v_3 · u_1 = v_3 · u_2 = 0.

Then the eigenvalues of UV are the three dot products v_i · u_i, together with 0.

Note: we now only need an (n−1) × (n−1) matrix to be triangular. There are now (n−1)(n−2)/2 orthogonality conditions. This means there are no conditions to be checked in order to construct a 2 × 2 example.

Any

[ u_1 ] [ v_1 v_2 ] = [ u_1 v_1  u_1 v_2 ]
[ u_2 ]               [ u_2 v_1  u_2 v_2 ]

is an integer matrix with eigenvalues 0 and u_1 v_1 + u_2 v_2. E.g.

[ 2 ] [ 1 -4 ] = [ 2 -8 ]        λ = -2, 0
[ 1 ]            [ 1 -4 ]
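A tiny SymPy sketch of the 2 × 2 case; the signs in the example (u = (2, 1), v = (1, −4)) are a reconstruction of the slide's numbers, since only the magnitudes survive in the transcript.

```python
# The 2 x 2 case needs no orthogonality conditions at all: any outer product
# u v^T is an IMIE with eigenvalues 0 and u . v.
from sympy import Matrix

u = Matrix([2, 1])            # column vector
v = Matrix([[1, -4]])         # row vector
print(u * v)                  # [[2, -8], [1, -4]]
print((u * v).eigenvals())    # {-2: 1, 0: 1}
```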

A different approach A better version Disadvantages: We always get 0 as an eigenvalue. What are the eigenvectors? Advantage: This is (nearly) a complete characterization for these matrices.

A shift by b

Great tool No. 2:

Proposition (Shift Proposition). If A has eigenvalues λ_1, ..., λ_n, then A + bI_n has eigenvalues b + λ_1, ..., b + λ_n.

Why does this work? Note that A and A + bI have the same eigenvectors.

Example:

VU = [ 2  0  4 ]        so        UV = [ -4 10  6  0 ]
     [ 0 -3 -4 ]                       [ -1  3  1 -3 ]
     [ 0  0  2 ]                       [  0  2  2  4 ]
                                       [  4 -6 -4  0 ]

has eigenvalues 2, −3, 2, 0. So

A = UV − I = [ -5 10  6  0 ]
             [ -1  2  1 -3 ]
             [  0  2  1  4 ]
             [  4 -6 -4 -1 ]

has eigenvalues 1, −4, 1, −1.

Earlier we had

UV = [ 2 ] [ 1 -4 ] = [ 2 -8 ]        with λ = -2, 0
     [ 1 ]            [ 1 -4 ]

So

UV + I = [ 3 -8 ]
         [ 1 -3 ]

has eigenvalues λ = 1, −1.
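A short SymPy check of the Shift Proposition on this 2 × 2 example (the UV below uses the reconstructed signs from above; the shift by 7 is just an extra illustration, not from the talk).

```python
# Shift Proposition: adding b*I shifts every eigenvalue by b while keeping
# the matrix, and its eigenvectors, unchanged in structure.
from sympy import Matrix, eye

UV = Matrix([[2, -8], [1, -4]])        # eigenvalues -2, 0
print((UV + eye(2)).eigenvals())       # {1: 1, -1: 1}
print((UV + 7 * eye(2)).eigenvals())   # {5: 1, 7: 1} -- any shift b works
```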

Note: This gives a quick proof of our Eigenvalues Congruent to b modulo k Proposition.

Classification Results

This is actually a classification of all IMIEs.

Theorem (Renaud). Every n × n integer matrix with integer eigenvalues can be written as UV + bI_n where VU is a triangular (n−1) × (n−1) matrix.

Note: U is n × (n−1) and V is (n−1) × n.

Further refinements

Is there any way to take further advantage of the Folk Theorem? What if U is n × r and V is r × n? In particular, if U is n × 1 and V is 1 × n, then VU is trivially triangular.

Take U = [ 1 ]    and    V = [ 1 2 1 ].
         [ 1 ]
         [ 1 ]

Then VU = (4) and UV must be a 3 × 3 integer matrix with eigenvalues 4, 0, 0. In fact

UV + I_3 = [ 2 2 1 ] = A,
           [ 1 3 1 ]
           [ 1 2 2 ]

our original example.
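This rank-one case is easy to check directly; here is a SymPy sketch that rebuilds the original example from the column U, the row V, and a shift by b = 1.

```python
# Rank-one version: U a column, V a row, so VU is 1 x 1 (trivially triangular).
from sympy import Matrix, eye

U = Matrix([1, 1, 1])          # 3 x 1
V = Matrix([[1, 2, 1]])        # 1 x 3
print(V * U)                   # [[4]]
A = U * V + eye(3)             # shift by b = 1
print(A)                       # [[2, 2, 1], [1, 3, 1], [1, 2, 2]]
print(A.eigenvals())           # {1: 2, 5: 1}
```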


What about the eigenvectors of UV? Suppose VU is not just triangular, but diagonal. Then the (n−1) columns of U are eigenvectors corresponding to λ_1, ..., λ_{n−1}. Any vector w in the null space of V is an eigenvector corresponding to the remaining eigenvalue (0 for UV, and hence b for UV + bI_n).

Example. U = [ 3 2 ]    and    V = [ 1 2 1 ].
             [ 1 0 ]               [ 0 1 1 ]
             [ 1 2 ]

Then VU = [ 4 0 ], which is diagonal. So the columns of U are eigenvectors
          [ 0 2 ]
(corresponding to λ = 4, 2, respectively) of

UV = [ 3 2 1 ]
     [ 1 2 1 ]
     [ 1 2 3 ]

And w = [ 1 ] has Vw = 0, so it is an eigenvector for UV corresponding to λ = 0.
        [ 1 ]
        [ 1 ]
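The same principle, checked in SymPy. The transcript of the example above lost its minus signs, so the numbers below are my own small illustration of the diagonal-VU situation (they are not the slide's matrices): the columns of U come out as eigenvectors of UV, and a null vector of V gives the eigenvalue 0.

```python
# Diagonal VU: columns of U are eigenvectors of UV; null(V) gives eigenvalue 0.
from sympy import Matrix

U = Matrix([[1, 0], [1, 1], [0, 1]])
V = Matrix([[3, 1, -1], [1, -1, 3]])
print(V * U)                                                   # diag(4, 2)
M = U * V
print(M * U[:, 0] == 4 * U[:, 0], M * U[:, 1] == 2 * U[:, 1])  # True True
w = Matrix([1, -5, -2])          # spans the null space of V
print(V * w, M * w)              # both zero vectors
```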


Recall the A = PDP^{-1} approach. Idea 1: Make sure P has determinant ±1. How do we construct such a P?

One technique [Ortega, 1984] uses: Take u, v ∈ R^n. Then P = I_n + u v^T has det P = 1 + u · v.

So choose u ⟂ v. Then P = I_n + u v^T has det P = 1 and P^{-1} = I_n − u v^T.

This is just a special case of what we have been doing with the Folk Theorem and the Shift Proposition.

Example. Try u = [ 1 ]   and v = [ -2 ].   (Check: u · v = 0.)
                 [ 2 ]           [ -1 ]
                 [ 4 ]           [  1 ]

Then

u v^T = [ -2 -1  1 ]
        [ -4 -2  2 ]
        [ -8 -4  4 ]

Get

P = I + u v^T = [ -1 -1  1 ]        and        P^{-1} = I − u v^T = [ 3  1 -1 ]
                [ -4 -1  2 ]                                        [ 4  3 -2 ]
                [ -8 -4  5 ]                                        [ 8  4 -3 ]
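Ortega's rank-one trick, checked in SymPy. The vectors u and v use one consistent choice of signs for the example above (the transcript dropped the minus signs), so treat the specific entries as an assumption.

```python
# P = I + u v^T with u . v = 0 has det P = 1 and inverse I - u v^T.
from sympy import Matrix, eye

u = Matrix([1, 2, 4])
v = Matrix([-2, -1, 1])
assert u.dot(v) == 0
P = eye(3) + u * v.T
print(P.det())                      # 1
print(P.inv() == eye(3) - u * v.T)  # True
```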

So Ortega uses this to say that any A = PDP^{-1} will be an IMIE.

But in fact, P itself is an IMIE. It has the single eigenvalue 1, and P is clearly non-diagonalizable:

[ -1 -1  1 ]   is similar to the Jordan form   [ 1 0 0 ]
[ -4 -1  2 ]                                   [ 0 1 1 ]
[ -8 -4  5 ]                                   [ 0 0 1 ]

Ortega's full result is that if u · v = β then P = I_n + u v^T has det P = 1 + β and (if β ≠ −1) P^{-1} = I_n − (1/(1+β)) u v^T. The case β = −2 is interesting.

Earlier we had

UV = [ 2 ] [ 1 -4 ] = [ 2 -8 ]        with λ = -2, 0
     [ 1 ]            [ 1 -4 ]

So

Q = UV + I = [ 3 -8 ]
             [ 1 -3 ]

has eigenvalues λ = 1, −1. In fact, this Q is coming from Ortega's construction (with β = u · v = −2). So Q^{-1} = Q.
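A last SymPy check of the β = −2 case (signs of u and v reconstructed as above): Q = I + u v^T with u · v = −2 is its own inverse, so its eigenvalues are ±1.

```python
# beta = -2: P = I + u v^T with u . v = -2 is an involution.
from sympy import Matrix, eye

u = Matrix([2, 1])
v = Matrix([1, -4])          # u . v = -2
Q = eye(2) + u * v.T
print(Q)                     # [[3, -8], [1, -3]]
print(Q * Q == eye(2))       # True
print(Q.eigenvals())         # {1: 1, -1: 1}
```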


References

Christopher Towse and Eric Campbell. Constructing integer matrices with integer eigenvalues. The Math. Scientist, 41(2): 45-52, 2016.

W. P. Galvin. The Teaching of Mathematics: Matrices with Custom-Built Eigenspaces. Amer. Math. Monthly, 91(5): 308-309, 1984.

Greg Martin and Erick Wong. Almost all integer matrices have no integer eigenvalues. Amer. Math. Monthly, 116(7): 588-597, 2009.

James M. Ortega. Comment on: Matrices with integer entries and integer eigenvalues. Amer. Math. Monthly, 92(7): 526, 1985.

J.-C. Renaud. Matrices with integer entries and integer eigenvalues. Amer. Math. Monthly, 90(3): 202-203, 1983.