How Euler Did It
Orthogonal matrices
August 2006
by Ed Sandifer

Jeff Miller's excellent site [M] Earliest Known Uses of Some of the Words of Mathematics reports:

The term MATRIX was coined in 1850 by James Joseph Sylvester (1814-1897):

[...] For this purpose we must commence, not with a square, but with an oblong arrangement of terms consisting, suppose, of m lines and n columns. This will not in itself represent a determinant, but is, as it were, a Matrix out of which we may form various systems of determinants by fixing upon a number p, and selecting at will p lines and p columns, the squares corresponding of pth order.

The citation above is from "Additions to the Articles On a new class of theorems, and On Pascal's theorem," Philosophical Magazine, pp. 363-370, 1850. Reprinted in Sylvester's Collected Mathematical Papers, vol. 1, pp. 145-151, Cambridge (At the University Press), 1904, page 150.

On the subject of orthogonal matrices, he writes:

The term ORTHOGONAL MATRIX was used in 1854 by Charles Hermite (1822-1901) in the Cambridge and Dublin Mathematical Journal, although it was not until 1878 that the formal definition of an orthogonal matrix was published by Frobenius (Kline, page 809).

Imagine my surprise when I was browsing Series I Volume 6 of Euler's Opera Omnia, trying to answer a question for Rob Bradley, President of The Euler Society. This is the volume Ad theoriam aequationum pertinentes, pertaining to the theory of equations. In the middle of that volume I found an article [E407] with the unremarkable title "Algebraic problems that are memorable because of their special properties," Problema algebraicum ob affectiones prorsus singulares memorabile.

Usually, it seems, when Euler calls a problem memorable, I don't agree. This was an exception.

Euler asks us to find nine numbers A, B, C, D, E, F, G, H, I that satisfy twelve conditions:

 1. A² + D² + G² = 1        7. A² + B² + C² = 1
 2. B² + E² + H² = 1        8. D² + E² + F² = 1
 3. C² + F² + I² = 1        9. G² + H² + I² = 1
 4. AB + DE + GH = 0       10. AD + BE + CF = 0
 5. AC + DF + GI = 0       11. AG + BH + CI = 0
 6. BC + EF + HI = 0       12. DG + EH + FI = 0

If we regard the nine numbers as a 3x3 matrix

        A  B  C
    M = D  E  F
        G  H  I

then the first six conditions are exactly the conditions that make M^T M = I. In modern terms, this makes M an orthogonal matrix. Note that the other six conditions make M M^T = I, another characterization of orthogonal matrices.
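
To make the dictionary between Euler's twelve conditions and the two matrix equations concrete, here is a minimal check. It is not part of the original column; it assumes NumPy, and the sample 3x3 matrix is an arbitrary orthogonal matrix chosen only for illustration, not one of Euler's.

```python
# Checking that Euler's conditions 1-6 are the entries of M^T M = I and
# conditions 7-12 are the entries of M M^T = I.  Sample matrix is illustrative.
import numpy as np

M = np.array([[ 2.0, -2.0,  1.0],
              [ 2.0,  1.0, -2.0],
              [ 1.0,  2.0,  2.0]]) / 3.0   # a 3x3 orthogonal matrix

(A, B, C), (D, E, F), (G, H, I) = M

# Conditions 1-6: columns have length 1 and are mutually perpendicular.
first_six = [A*A + D*D + G*G, B*B + E*E + H*H, C*C + F*F + I*I,   # should be 1
             A*B + D*E + G*H, A*C + D*F + G*I, B*C + E*F + H*I]   # should be 0

# Conditions 7-12: rows have length 1 and are mutually perpendicular.
last_six = [A*A + B*B + C*C, D*D + E*E + F*F, G*G + H*H + I*I,    # should be 1
            A*D + B*E + C*F, A*G + B*H + C*I, D*G + E*H + F*I]    # should be 0

print(np.allclose(first_six, [1, 1, 1, 0, 0, 0]))   # True  <=>  M^T M = I
print(np.allclose(last_six,  [1, 1, 1, 0, 0, 0]))   # True  <=>  M M^T = I
print(np.allclose(M.T @ M, np.eye(3)), np.allclose(M @ M.T, np.eye(3)))
```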

Of course, Euler doesn't know these are orthogonal matrices. When he wrote this paper in 1770, people didn't use matrices to do their linear algebra. As Jeff Miller's site suggests, Euler was doing this 80 years before these objects joined the mathematical consciousness. What, then, was Euler thinking when he formulated this problem? We can only speculate. It was probably something more than that he admired the pretty patterns, but something less than that he sensed the profound power of operations on such arrays of numbers.

Let's look at Euler's paper. First he notes that we have 12 equations, but only 9 unknowns, so there is a chance that the problem has no solution. Rather than showing that the system of equations has a solution by giving an example of a solution (say A = E = I = 1, all the rest of the unknowns equaling zero), he offers a theorem:

Theorem: If nine numbers satisfy the first six conditions given above, then they also satisfy the other six.

Euler himself describes the proof of this theorem as calculos vehementer intricatos, vehemently intricate calculations, and we won't describe them in any significant detail. He does do an interesting step at the beginning, though. Euler re-writes conditions 4, 5 and 6 as

    4. AB = -DE - GH
    5. AC = -DF - GI
    6. BC = -EF - HI

Then he asks us to multiply equation 4 by equation 5, then divide by equation 6 to get, as he writes it,

    4·5/6:    AABC/BC = AA = -(DE + GH)(DF + GI)/(EF + HI).

The notation 4·5/6 is not an arithmetic operation, but an ad hoc notation for an algebraic operation on equations 4, 5 and 6. Euler does this one or two times in other papers, but this is the only time he uses such a notation in this paper. After three pages of such calculations, Euler eventually derives all of the conditions 7 to 12 from conditions 1 to 6, thus proving his theorem.
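
Both the theorem and the 4·5/6 step are easy to sanity-check numerically. The sketch below is mine, not Euler's argument, and was not in the original column; it assumes NumPy, and the sample matrices are chosen only for illustration.

```python
# Numerical sanity check of the theorem and of the 4·5/6 step.
import numpy as np

# The theorem: if the columns of a square matrix are orthonormal (conditions
# 1-6), then its rows are orthonormal too (conditions 7-12).
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))   # Q^T Q = I by construction
assert np.allclose(Q @ Q.T, np.eye(3))             # the other six conditions hold

# Euler's opening step, checked on a concrete orthogonal matrix with no zero
# entries: multiplying equation 4 by equation 5 and dividing by 6 isolates AA.
M = np.array([[2.0,  3.0,  6.0],
              [3.0, -6.0,  2.0],
              [6.0,  2.0, -3.0]]) / 7.0
(A, B, C), (D, E, F), (G, H, I) = M
assert np.isclose(A * A, -(D*E + G*H) * (D*F + G*I) / (E*F + H*I))
print("theorem and 4·5/6 step verified numerically")
```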

Now Euler turns to the solution of the problem that was proposed at the beginning. Condition 1 (or 7) guarantees that A is between -1 and 1, so it has to be the cosine of something. Let A = cos.ζ. (Note that Euler still uses cos. as an abbreviation for cosine, hence the period. Now that we've mentioned it, we'll write it the modern way. We'll also write sin ζ where Euler wrote sin.ζ.) Then, from conditions 1 and 7, we get

    DD + GG = 1 - AA = 1 - cos²ζ = sin²ζ    and similarly    BB + CC = sin²ζ.

These equations will be satisfied by taking

    B = sin ζ cos η
    C = sin ζ sin η
    D = sin ζ cos θ
    G = sin ζ sin θ.

Let's check the score. We have six equations, nine unknowns, and we've made three arbitrary decisions by choosing ζ, η and θ. Euler knows enough linear algebra (see, for example, [S 2004]) to suspect that this will determine a unique solution. Indeed, the choices already determine A, B, C, D and G, and with half a page of calculations he first finds that

    E = sin η sin θ - cos ζ cos η cos θ    and    I = cos η cos θ - cos ζ sin η sin θ

and then that

    F = -cos η sin θ - cos ζ sin η cos θ    and    H = -sin η cos θ - cos ζ cos η sin θ.

This solves the problem. In typical Eulerian fashion, though, he doesn't stop there. Instead, he develops some slightly more efficient techniques and goes on to solve analogous problems for 4x4 and 5x5.

Almost as if Euler did not want us to believe that he was actually doing modern linear algebra, Euler's last problem is to find a 4x4 array of integers

    A  B  C  D
    E  F  G  H
    I  K  L  M
    N  O  P  Q

(he skipped J on purpose) satisfying the 12 orthogonality equations

    AE + BF + CG + DH = 0
    AI + BK + CL + DM = 0
    etc.

and the additional condition on the sums of the squares of the numbers on the two diagonals,

    A² + F² + L² + Q² = D² + G² + K² + N².

He demonstrates how to find two different solutions. One is

     68  -29   41  -37
    -17   31   79   32
     59   28  -23   61
    -11  -77    8   49

where the dot product of any row (column) with any other row (column) is zero, and the sum of the squares of 68, 31, -23 and 49 equals the sum of the squares of -37, 79, 28 and -11. They both equal 8515.
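
As a quick check of the angle formulas and of this first 4x4 solution, here is a short sketch. It is mine, not Euler's, and was not in the original column; it assumes NumPy, and the angle values are arbitrary test inputs.

```python
# Check that the parametrization above yields an orthogonal matrix, and that
# Euler's 4x4 integer example behaves as described in the text.
import numpy as np

z, e, t = 0.7, 1.2, -0.4                       # zeta, eta, theta (test values)
cz, sz = np.cos(z), np.sin(z)
ce, se = np.cos(e), np.sin(e)
ct, st = np.cos(t), np.sin(t)

M = np.array([[cz,     sz*ce,              sz*se            ],
              [sz*ct,  se*st - cz*ce*ct,  -ce*st - cz*se*ct ],
              [sz*st, -se*ct - cz*ce*st,   ce*ct - cz*se*st ]])
assert np.allclose(M.T @ M, np.eye(3))         # all twelve conditions hold

N = np.array([[ 68, -29,  41, -37],
              [-17,  31,  79,  32],
              [ 59,  28, -23,  61],
              [-11, -77,   8,  49]])
assert np.allclose(N @ N.T, 8515 * np.eye(4))  # rows pairwise orthogonal
assert np.allclose(N.T @ N, 8515 * np.eye(4))  # columns pairwise orthogonal
# Sums of squares along the two diagonals agree:
print(int(np.sum(np.diag(N)**2)), int(np.sum(np.diag(np.fliplr(N))**2)))   # 8515 8515
```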

His other solution is

     73   85   65   11
     53   31  107   41
     89   67    1   67
     29   65   35  103

where the sums of the squares on the diagonals are 16,900. Euler's methods are typical of his work in Diophantine equations, and would allow us to generate as many solutions as we want.

References:

[E407] Euler, Leonhard, Problema algebraicum ob affectiones prorsus singulares memorabile, Novi Commentarii academiae scientiarum Petropolitanae 15, (1770) 1771, pp. 75-101. Reprinted in Opera Omnia Series 1, Volume 6, pp. 287-315. Available through The Euler Archive at www.eulerarchive.org.

[M] Miller, Jeff, Earliest Uses of Symbols in Number Theory, http://members.aol.com/jeff570/nth.html, consulted June 13, 2006.

[S 2004] Sandifer, Ed, Cramer's Paradox, How Euler Did It, MAA Online, August 2004.

Ed Sandifer (SandiferE@wcsu.edu) is Professor of Mathematics at Western Connecticut State University in Danbury, CT. He is an avid marathon runner, with 34 Boston Marathons on his shoes, and he is Secretary of The Euler Society (www.eulersociety.org). How Euler Did It is updated each month.

Copyright 2006 Ed Sandifer