Matrix Powers and Applications


Assistant Professor of Mathematics 3/7/18

Motivation
Let A = [1 -3; 1 -2]. Suppose we'd like to find the 99th power of A,
A^99 = A·A···A (99 times).
Would you believe that A^99 = [1 0; 0 1]?


Setup
In general, let A = [a b; c d]. Suppose we'd like to find the nth power of A,
A^n = A·A···A (n times).
Matrix multiplication is a costly operation. Can we find a formula for A^n?


Setup
Yes! (With some caveats.) Suppose we already know the eigenvalues of A.
Definition. We say a matrix A has eigenvalue α if Ax = αx for some nonzero (eigen)vector x.


Setup
Definition. The complete homogeneous symmetric polynomial of degree k in two variables is
h_k(x, y) = Σ_{i+j=k, i,j≥0} x^i y^j.
Some examples:
h_0(x, y) = 1
h_1(x, y) = x + y
h_2(x, y) = x^2 + xy + y^2
h_3(x, y) = x^3 + x^2 y + x y^2 + y^3

2×2 Theorem
Theorem (2015). Suppose a 2×2 matrix A has eigenvalues α and β. Then, for any integer n ≥ 2,
A^n = h_{n-1}(α, β) A - αβ h_{n-2}(α, β) I,
where I = [1 0; 0 1].
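As a quick numerical sanity check (my own sketch, not part of the talk), the theorem can be verified with NumPy; the helper h is an assumed name for the complete homogeneous symmetric polynomial:

```python
import itertools
import numpy as np

def h(k, xs):
    """Complete homogeneous symmetric polynomial of degree k:
    the sum of all monomials of total degree k in the variables xs."""
    if k == 0:
        return 1.0
    return sum(np.prod(c) for c in itertools.combinations_with_replacement(xs, k))

A = np.array([[1.0, -3.0], [1.0, -2.0]])   # the slides' example matrix
alpha, beta = np.linalg.eigvals(A)          # a complex-conjugate pair

n = 5
An = h(n - 1, [alpha, beta]) * A - alpha * beta * h(n - 2, [alpha, beta]) * np.eye(2)
assert np.allclose(An, np.linalg.matrix_power(A, n))
```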

An Example
Example. Suppose A = [1 -3; 1 -2]. Let's find A^3. A has eigenvalues α = (-1 + i√3)/2 and β = (-1 - i√3)/2.
A^3 = h_2(α, β) A - αβ h_1(α, β) I
    = (α^2 + αβ + β^2) A - αβ(α + β) I
    = 0·A - (1)(-1) I
    = I.


Back to the First Slide
Notice that this matrix is the same A from the first slide. Since A^3 = I, it follows that A^99 = (A^3)^33 = I^33 = I. This behavior appears to be easiest to describe if we find the smallest n with A^n = I.


Projective Order
Definition. We say A has projective order n if n is the smallest positive integer so that A^n = λI, λ ≠ 0.
In the last example we discovered that A^3 = I. This is because A has projective order 3. It just so happens that in this case, λ = 1.
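A brute-force check of the definition is straightforward; projective_order below is my own helper (not code from the talk) that simply scans successive powers:

```python
import numpy as np

def projective_order(M, max_n=50):
    """Return (n, λ) for the smallest n ≥ 1 with M^n = λI, λ ≠ 0, else None."""
    P = np.array(M, dtype=float)
    for n in range(1, max_n + 1):
        lam = P[0, 0]
        if lam != 0 and np.allclose(P, lam * np.eye(P.shape[0])):
            return n, float(lam)
        P = P @ np.array(M, dtype=float)
    return None

A = np.array([[1, -3], [1, -2]])
print(projective_order(A))  # (3, 1.0)
```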


Motivation Part 2
Suppose we represent a function f with the matrix A. If A has projective order n and we compose f with itself n times,
f(f(f(···f(x)···))) (n times),
this composition behaves like a multiple of the identity!


Linear Fractional Transformation
Definition. Let M = [a b; c d] with ad - bc ≠ 0. The linear fractional transformation is defined as:
M·x = (ax + b)/(cx + d)
Note that if λ ≠ 0, (λM)·x = M·x.
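With exact rational arithmetic the transformation and its scale-invariance are easy to play with; this lft helper is my own sketch, not code from the talk:

```python
from fractions import Fraction

def lft(M, x):
    """Apply the linear fractional transformation M·x = (ax + b)/(cx + d)."""
    (a, b), (c, d) = M
    return Fraction(a * x + b, c * x + d)

A = ((1, -3), (1, -2))            # the earlier matrix with projective order 3
x = Fraction(5)
print(lft(A, lft(A, lft(A, x))))  # 5 -- three compositions act as the identity

# Scaling the matrix leaves the transformation unchanged: (λM)·x = M·x.
lam = 7
A7 = tuple(tuple(lam * entry for entry in row) for row in A)
assert lft(A7, x) == lft(A, x)
```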


Interpretation
Suppose M = [a b; c d] has projective order n. Then n is the smallest positive integer such that
M^n·x = (λI)·x = (λx + 0)/(0x + λ) = x.


A Particular Application
If c = 0, M·x acts like a (boring) linear function. If c ≠ 0, we may essentially force c = 1, since λM and M are equivalent:
M = [a b; c d] = c·[a' b'; 1 d']
where a' = a/c, b' = b/c, d' = d/c. We will focus on this second case.


The Problem
If M = [a b; 1 d] has projective order n, what must its entries look like?

The Problem
Notice that M appears to have projective order 2 if a + d = 0. In this case,
M = [a b; 1 -a]
and
M^2 = [a^2 + b, 0; 0, a^2 + b].
To make sure that M^2 isn't the zero matrix, we should also require that b ≠ -a^2. (This is exactly the condition ad - bc ≠ 0 from the definition of the linear fractional transformation.)


The Problem
Theorem (2013). If M ≠ λI has entries from Q, only these projective orders are possible:
order 2: M = λ[a, b; 1, -a], b ≠ -a^2
order 3: M = λ[a, -(a^2 + ad + d^2); 1, d], d ≠ -a
order 4: M = λ[a, -(1/2)(a^2 + d^2); 1, d], d ≠ -a
order 6: M = λ[a, -(1/3)(a^2 - ad + d^2); 1, d], d ≠ -a
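The order-3 and order-6 families can be spot-checked with exact rational arithmetic; the helpers below are my own sketch, and the particular values of a and d are arbitrary test picks:

```python
from fractions import Fraction

def mat_mul(X, Y):
    """2x2 matrix product over the rationals."""
    return tuple(
        tuple(sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2))
        for i in range(2)
    )

def projective_order(M, max_n=12):
    """Smallest n ≥ 1 with M^n a nonzero scalar multiple of I, else None."""
    P = M
    for n in range(1, max_n + 1):
        lam = P[0][0]
        if lam != 0 and P == ((lam, 0), (0, lam)):
            return n
        P = mat_mul(P, M)
    return None

a, d = Fraction(2), Fraction(1)

M3 = ((a, -(a * a + a * d + d * d)), (Fraction(1), d))      # order-3 family
assert projective_order(M3) == 3

M6 = ((a, -(a * a - a * d + d * d) / 3), (Fraction(1), d))  # order-6 family
assert projective_order(M6) == 6
```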


Example for Order 4
Example. Let λ = 1, a = 2, and d = -4. Then, if b = -(1/2)(a^2 + d^2) = -10, we have
M = [2 -10; 1 -4]
and
M^2 = [-6 20; -2 6]
M^3 = [8 -20; 2 -4]
M^4 = [-4 0; 0 -4] = -4I


What About the Eigenvalues?
We can come up with the previous theorem in at least two ways:
By simply multiplying n copies of a generic M together and solving the resulting nonlinear system of equations.
By using M^n = h_{n-1}(α, β) M - αβ h_{n-2}(α, β) I and searching for a pattern in the eigenvalues.


Further Explorations
With some generalization, we may...
investigate cases besides just the rational numbers.
M = [1 1; 1 2]
has projective order 5 over the integers modulo 11.
generalize to higher dimensions.
M = [1 3 3; 3 1 3; 2 2 0]
has projective order 3 over the integers modulo 7.
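Both modular claims can be checked mechanically; projective_order_mod below is my own sketch using NumPy integer arithmetic:

```python
import numpy as np

def projective_order_mod(M, p, max_n=50):
    """Smallest n with M^n ≡ λI (mod p) for some λ ≠ 0, else None."""
    M = np.array(M, dtype=np.int64) % p
    k = M.shape[0]
    P = M.copy()
    for n in range(1, max_n + 1):
        lam = P[0, 0]
        if lam != 0 and np.array_equal(P, (lam * np.eye(k, dtype=np.int64)) % p):
            return n
        P = (P @ M) % p
    return None

print(projective_order_mod([[1, 1], [1, 2]], 11))                   # 5
print(projective_order_mod([[1, 3, 3], [3, 1, 3], [2, 2, 0]], 7))   # 3
```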


Continued Research
In order to move these concepts into higher dimensions, we need a similar formula for A^n when A is 3×3, 4×4, and beyond. My research over the last year has been consumed with this pursuit.


Higher Dimensions
Definition. The complete homogeneous symmetric polynomial of degree k in n variables is
h_k(x_1, x_2, ..., x_n) = Σ_{i_1+i_2+···+i_n=k, i_j≥0} x_1^{i_1} x_2^{i_2} ··· x_n^{i_n}.
Some examples:
h_0(x, y, z) = 1
h_1(x, y, z) = x + y + z
h_3(x, y, z) = x^3 + x^2 y + x^2 z + x y^2 + x z^2 + xyz + y^3 + y^2 z + y z^2 + z^3

The 3×3 Case
Theorem (2017). Suppose A is a 3×3 matrix with eigenvalues α, β, γ. Then, for any integer n ≥ 2,
A^n = h_n(α, β, γ) I - h_{n-1}(α, β, γ) [(α + β + γ)I - A] + h_{n-2}(α, β, γ) [(αβ + αγ + βγ)I - (α + β + γ)A + A^2],
where I is the 3×3 identity matrix.
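The 3×3 formula can likewise be verified numerically; this is my own check on an arbitrarily chosen matrix, with the helper h as before:

```python
import itertools
import numpy as np

def h(k, xs):
    """Complete homogeneous symmetric polynomial of degree k evaluated at xs."""
    if k == 0:
        return 1.0
    return sum(np.prod(c) for c in itertools.combinations_with_replacement(xs, k))

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, -1.0],
              [1.0, 0.0, 1.0]])
ev = np.linalg.eigvals(A)
e1 = ev.sum()
e2 = ev[0] * ev[1] + ev[0] * ev[2] + ev[1] * ev[2]

n, I = 6, np.eye(3)
An = (h(n, ev) * I
      - h(n - 1, ev) * (e1 * I - A)
      + h(n - 2, ev) * (e2 * I - e1 * A + A @ A))
assert np.allclose(An, np.linalg.matrix_power(A, n))
```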

The 4×4 Case
Theorem (2018). Suppose A is a 4×4 matrix with eigenvalues α, β, γ, δ. Then, for any integer n ≥ 3,
A^n = h_n I - h_{n-1}[e_1 I - A] + h_{n-2}[e_2 I - e_1 A + A^2] - h_{n-3}[e_3 I - e_2 A + e_1 A^2 - A^3],
where I is the 4×4 identity matrix, h_k = h_k(α, β, γ, δ), e_1 = α + β + γ + δ, e_2 = αβ + αγ + αδ + βγ + βδ + γδ, and e_3 = αβγ + αβδ + αγδ + βγδ.

Noticing a Pattern
Definition. The elementary symmetric polynomial of degree k in n variables is
e_k(x_1, x_2, ..., x_n) = Σ_{1≤i_1<i_2<···<i_k≤n} x_{i_1} x_{i_2} ··· x_{i_k}
if k ≤ n, and 0 otherwise. Some examples:
e_1(x, y, z) = x + y + z
e_2(x, y, z) = xy + xz + yz
e_3(a, b, c, d) = abc + abd + acd + bcd
e_k(x, y, z, 0) = e_k(x, y, z)


The Conjecture
Conjecture (2018). Suppose A is a k×k matrix with known eigenvalues. Let h_i and e_i refer to those polynomials evaluated at the eigenvalues. Then, for any integer n ≥ k - 1,
A^n = Σ_{i=0}^{k-1} (-1)^i h_{n-i} [ Σ_{j=0}^{i} (-1)^j e_{i-j} A^j ],
where we allow the convention A^0 = I.
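The conjectured formula is easy to implement for any k and compare against repeated multiplication; this is my own sketch (the helper names h, e, and power_via_conjecture are assumptions, not from the talk):

```python
import itertools
import numpy as np

def h(k, xs):
    """Complete homogeneous symmetric polynomial h_k (h_0 = 1)."""
    if k == 0:
        return 1.0
    return sum(np.prod(c) for c in itertools.combinations_with_replacement(list(xs), k))

def e(k, xs):
    """Elementary symmetric polynomial e_k (e_0 = 1, e_k = 0 for k > len(xs))."""
    if k == 0:
        return 1.0
    return sum(np.prod(c) for c in itertools.combinations(list(xs), k))

def power_via_conjecture(A, n):
    """A^n = sum_i (-1)^i h_{n-i} sum_j (-1)^j e_{i-j} A^j, for n >= k - 1."""
    k = A.shape[0]
    ev = np.linalg.eigvals(A)
    Aj = [np.linalg.matrix_power(A, j).astype(complex) for j in range(k)]
    total = np.zeros((k, k), dtype=complex)
    for i in range(k):
        inner = sum((-1) ** j * e(i - j, ev) * Aj[j] for j in range(i + 1))
        total += (-1) ** i * h(n - i, ev) * inner
    return total

A = np.array([[1.0, 2.0, 0.0], [0.0, 1.0, 1.0], [1.0, 0.0, 2.0]])
assert np.allclose(power_via_conjecture(A, 7), np.linalg.matrix_power(A, 7))
```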

One Last Note
Finding the eigenvalues of a matrix is a time-consuming job, and it appears to be only the first step to using these formulas. In general, eigenvalue decomposition allows you to (usually) write a square matrix A as A = QΛQ^{-1}, where Q is a square matrix whose columns are the eigenvectors of A, and Λ is a diagonal matrix whose diagonal elements are the corresponding eigenvalues. Typically, this is the route taken to find large powers of A, since
A^n = QΛQ^{-1}·QΛQ^{-1}···QΛQ^{-1} = QΛ^n Q^{-1}.
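For comparison, the diagonalization route from this slide looks like the following in NumPy (a sketch, applied to the opening example):

```python
import numpy as np

A = np.array([[1.0, -3.0], [1.0, -2.0]])
evals, Q = np.linalg.eig(A)           # columns of Q are eigenvectors of A
Lam_n = np.diag(evals ** 99)          # Λ^99: just power the diagonal entries
A99 = Q @ Lam_n @ np.linalg.inv(Q)    # A^99 = Q Λ^99 Q^{-1}
assert np.allclose(A99, np.eye(2))    # matches the opening slide: A^99 = I
```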


One Last Note
Does this mean our formula is strictly worse than eigenvalue decomposition? No! If we're careful, we may actually write these symmetric polynomials entirely in terms of the entries of A, without doing any eigen-stuff at all: the e_i are (up to sign) the coefficients of the characteristic polynomial of A, and every h_k is a polynomial in the e_i. Additionally, our formula works even if the matrix isn't diagonalizable.
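To illustrate the last point, here is a quick check (my own example, not from the slides) on a non-diagonalizable Jordan block, where the 2×2 theorem still applies with α = β = 1, so h_{n-1}(1, 1) = n and h_{n-2}(1, 1) = n - 1:

```python
import numpy as np

# A is a Jordan block: defective (only one independent eigenvector), so it
# cannot be diagonalized, yet A^n = h_{n-1}·A - αβ·h_{n-2}·I still works.
# With α = β = 1 it reduces to A^n = n·A - (n - 1)·I.
A = np.array([[1.0, 1.0], [0.0, 1.0]])
n = 10
An = n * A - (n - 1) * np.eye(2)
assert np.array_equal(An, np.linalg.matrix_power(A, n))  # [[1, 10], [0, 1]]
```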


Future Research
Special matrices (stochastic, symmetric, etc.)
Geometric interpretation
Further study of symmetric polynomials
Computation!


References
J. Boone, Higher-order Lucas sequences and Dickson polynomials, Ph.D. Dissertation (2013).
R. Fitzgerald and J. Yucas, A generalization of Dickson polynomials via linear fractional transformations, International Journal of Mathematics and Computer Science 1 (2006), 391-416.
S. Friedberg, A. Insel, and L. Spence, Linear Algebra, Prentice Hall (2003).
I.G. Macdonald, Symmetric Functions and Hall Polynomials, Oxford: Clarendon Press (1995).

Questions/Comments?
Thank you for your kind attention! Feel free to contact me with any comments, concerns, ideas, or questions:
Hamilton Math & Science Building, Office 341
joshua.boone@lmunet.edu
joshuaboone.wordpress.com