Ortho-Skew and Ortho-Sym Matrix Trigonometry


Daniele Mortari
Department of Aerospace Engineering, Texas A&M University, College Station, TX 77843-3141

Work previously presented as paper AAS 03-65 of the John L. Junkins Astrodynamics Symposium, College Station, TX, May 3-4, 2003.
Associate Professor, Department of Aerospace Engineering, 70 H.R. Bright Building, Room 74A, Texas A&M University, 3141 TAMU, College Station, Texas 77843-3141. Tel. (979) 845-0734, Fax (979) 845-605. AIAA and AAS Member.

Abstract

This paper introduces some properties of two families of matrices: the Ortho-Skew matrices, which are simultaneously orthogonal (unitary) and skew-Hermitian, and the real Ortho-Sym matrices, which are orthogonal and symmetric. These properties consist of compact closed-form expressions for trigonometric and hyperbolic functions, which show that scalar multiples of these matrices can be interpreted as angles. The analogies with the scalar trigonometric and hyperbolic functions, such as periodicity, are all demonstrated. Additional expressions are derived for other matrix functions, such as the logarithm, exponential, inverse, and power functions. All these relationships show that the Ortho-Skew and the Ortho-Sym matrices can be considered matrix extensions of the imaginary and the real units, respectively.

Introduction

The computation of functions of square matrices, real or complex, is an important topic in many engineering fields and in applied mathematics. Evaluation of the exponential, logarithm, and powers, as well as of trigonometric, hyperbolic, and many other functions, is often needed. Unfortunately, in general this problem does not admit closed-form solutions. However, as will be shown in this paper, it is possible to develop

compact closed-form solutions for several different functions of two families of matrices: the Ortho-Skew and the Ortho-Sym matrices.

There are several methods to evaluate functions of a matrix. In particular, the books [1], [2], [3], and [4] dedicate full chapters to functions of a matrix. The function of a matrix, f(M), can be introduced and defined in several ways that are all equivalent [5]. For instance, there is the line integral definition [6]

    f(M) = (1/(2πi)) ∮_Γ f(z) (zI − M)^{-1} dz                                                       (1)

which is computed on a closed contour Γ that encircles the eigenvalues λ of M. As noted in [3], this definition is the matrix version of the Cauchy integral formula. In fact, writing f_{k,j} and g_{k,j}(z) for the (k, j) entries of f(M) and (zI − M)^{-1}, respectively, Eq. (1) can be written in the scalar form

    f_{k,j} = (1/(2πi)) ∮_Γ f(z) g_{k,j}(z) dz                                                        (2)

Other existing methods to define f(M) use the Jordan decomposition M = W J W^{-1} [7, 8], where W and J are, respectively, the eigenvector matrix and the Jordan normal form of M, or the Schur decomposition M = H T H^T, where H and T are, respectively, orthogonal and upper triangular matrices [9]. Using either decomposition, we can define f(M) as

    f(M) = W f(J) W^{-1}     and     f(M) = H f(T) H^T                                                (3)

In the Jordan decomposition, f(J) is a block-diagonal matrix whose upper triangular blocks f_i(J) are associated with the blocks J_i of J according to

    J_i = [ λ_i  1    0    ...  0          f_i(J) = [ f_i  f_i^(1)/1!  f_i^(2)/2!  ...  f_i^(m_i-1)/(m_i-1)!
            0    λ_i  1    ...  0                     0    f_i         f_i^(1)/1!  ...  f_i^(m_i-2)/(m_i-2)!
            ...       ...  ...  1                     ...                          ...  f_i^(1)/1!
            0    0    0    ...  λ_i ]                 0    0           0           ...  f_i              ]     (4)

where λ_i is the eigenvalue of the m_i × m_i block J_i, f_i := f(λ_i), and f_i^(k) is the k-th derivative of f(x) evaluated at λ_i. Note also that, when M is diagonalizable, the computation of W in Eq. (3) is unnecessary because of the Sylvester (or Lagrange-Sylvester) formula

[2], which requires only the computation of the n eigenvalues of M:

    f(M) = Σ_{j=1}^{n} f(λ_j) Z_j,   where   Z_j = Π_{i=1, i≠j}^{n} (M − λ_i I)/(λ_j − λ_i)           (5)

and the eigenvalues of M (counted with multiplicity) are λ_1, ..., λ_n. The Jordan decomposition given in Eq. (3) is a specific case of a more general relationship between two similar matrices A and B: if A = C B C^{-1}, then

    f(A) = C f(B) C^{-1}                                                                               (6)

It is to be noted that there are some computational difficulties with the Jordan decomposition approach, particularly when M is defective (non-diagonalizable) or has ill-conditioned eigenvectors [3]. For the Schur decomposition, once H and T have been evaluated, the computation of f(M) is reduced to the computation of f(T), where T is upper triangular. Explicit expressions for f(T) easily become very complicated; to overcome this difficulty, Parlett [9] introduced a recursive method that solves the problem completely. The Jordan and Schur decomposition methods are general in the sense that an arbitrary function of a given square matrix can be computed with these algorithms.

Another general method to define any f(M), which will be used extensively throughout this paper, comes from the power series representation of functions, in other words from the Taylor series. This implies that, if f(x) = Σ_{k=0}^{∞} a_k x^k, then

    f(M) = Σ_{k=0}^{∞} a_k M^k                                                                         (7)

Equation (7) holds if f is analytic in a neighborhood of 0 and its radius of convergence contains all the eigenvalues λ_i of M. In particular, if

    f(x) = Σ_{k=0}^{∞} (1/k!) (d^k f/dx^k)|_{x=0} x^k,   then   f(M) = Σ_{k=0}^{∞} (1/k!) (d^k f/dx^k)|_{x=0} M^k        (8)

The use of Eq. (8) clearly implies some additional numerical approximation work, and for this reason its use is rather limited. (The rational Padé approximation [3] is commonly used for the specific and important case of the matrix exponential.) However, this paper shows that the Maclaurin series expansion nevertheless yields compact closed-form solutions for several matrix functions in the special case of two important matrix families: the Ortho-Skew and the Ortho-Sym matrices. In particular, for trigonometric and hyperbolic matrix functions, the similarity with the scalar formulations is so close that one can justifiably speak of matrix trigonometry.
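
As an illustration of Eq. (5), the following Python sketch (not part of the original paper) evaluates f(M) through the Sylvester formula for a small diagonalizable matrix with distinct eigenvalues, and cross-checks the result against SciPy's general-purpose funm routine; the function name, the test matrix, and the choice of f are illustrative only.

    import numpy as np
    from scipy.linalg import funm

    def sylvester_funm(M, f):
        """Evaluate f(M) = sum_j f(lambda_j) Z_j as in Eq. (5); assumes distinct eigenvalues."""
        lam = np.linalg.eigvals(M)
        n = len(lam)
        F = np.zeros((n, n), dtype=complex)
        for j in range(n):
            Z = np.eye(n, dtype=complex)
            for i in range(n):
                if i != j:
                    Z = Z @ (M - lam[i] * np.eye(n)) / (lam[j] - lam[i])
            F += f(lam[j]) * Z
        return F

    M = np.array([[2.0, 1.0],
                  [0.0, 3.0]])            # distinct eigenvalues 2 and 3
    print(np.allclose(sylvester_funm(M, np.exp), funm(M, np.exp)))   # expected: True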

The Ortho-Skew and Ortho-Sym Matrix Sets

The Ortho-Skew matrices 𝕀 ∈ C^{n×n} are simultaneously orthogonal (unitary) and skew-Hermitian,

    𝕀^* 𝕀 = I   and   𝕀^{-1} = 𝕀^* = −𝕀                                                              (9)

where ^* denotes the conjugate transpose; in particular, 𝕀^2 = −I. Reference [10] has shown that the Ortho-Skew matrix set presents very interesting properties, some of which are summarized in the following. The general expression of these matrices, whose eigenvalues are always ±i, is

    𝕀 = i Σ_{k=1}^{p} c_k c_k^*  −  i Σ_{k=p+1}^{n} c_k c_k^*,   where   c_i^* c_j = δ_ij            (10)

and p indicates the number of positive eigenvalues +i (0 ≤ p ≤ n). Note that, if p = n, then 𝕀 = iI, and if p = 0, then 𝕀 = −iI. Equation (9) then clearly implies that the powers of an Ortho-Skew matrix obey the simple rule

    𝕀^k = [i^k + (−i)^k]/2 · I + [i^k − (−i)^k]/(2i) · 𝕀 = cos(kπ/2) I + sin(kπ/2) 𝕀                 (11)

In general, an Ortho-Skew matrix is complex. However, in even-dimensional spaces it is possible to build real 𝕀 matrices as follows

    𝕀 = Σ_{k=1}^{n/2} P_k Ĩ P_k^T,   where   P_k = [ c_{2k−1}  c_{2k} ]   and   Ĩ = [ 0  1 ; −1  0 ]   (12)

is the symplectic matrix (here the c_k are real orthonormal vectors). Similarly, the real Ortho-Sym matrices R ∈ R^{n×n}, which have been introduced in Ref. [11], are simultaneously orthogonal and symmetric,

    R^T R = I   and   R^{-1} = R^T = R,   so that   R^2 = I                                            (13)

The powers of these matrices clearly obey the simple rule

    R^k = [1 + (−1)^k]/2 · I + [1 + (−1)^{k+1}]/2 · R = [1 + cos(kπ)]/2 · I + [1 − cos(kπ)]/2 · R      (14)
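
A quick numerical illustration of Eqs. (9)-(11), given here as an editorial Python/NumPy sketch rather than as part of the paper: an Ortho-Skew matrix is built from a random orthonormal set of complex vectors via Eq. (10), and the defining properties and the power rule are verified. The dimension n, the value of p, and the random seed are arbitrary choices.

    import numpy as np

    rng = np.random.default_rng(1)
    n, p = 4, 2
    A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    C, _ = np.linalg.qr(A)                               # orthonormal columns c_k
    Iskew = 1j * sum(np.outer(C[:, k], C[:, k].conj()) for k in range(p)) \
          - 1j * sum(np.outer(C[:, k], C[:, k].conj()) for k in range(p, n))

    print(np.allclose(Iskew.conj().T @ Iskew, np.eye(n)))   # unitary, Eq. (9)
    print(np.allclose(Iskew.conj().T, -Iskew))              # skew-Hermitian, Eq. (9)
    # Power rule of Eq. (11): Iskew^k = cos(k*pi/2) I + sin(k*pi/2) Iskew
    for k in range(5):
        lhs = np.linalg.matrix_power(Iskew, k)
        rhs = np.cos(k * np.pi / 2) * np.eye(n) + np.sin(k * np.pi / 2) * Iskew
        print(k, np.allclose(lhs, rhs))                      # expected: True for every k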

The general expression of the Ortho-Sym matrices, whose eigenvalues are always ±1, is

    R = Σ_{k=1}^{p} r_k r_k^T  −  Σ_{k=p+1}^{n} r_k r_k^T,   where   r_i^T r_j = δ_ij                 (15)

and p (0 ≤ p ≤ n) is the number of positive eigenvalues +1. Since the r_k are orthonormal, if p = n then R = I, whereas if p = 0 then R = −I.

Equations (11) and (14) then allow us to give compact closed-form expressions for certain infinite series expansions of the Ortho-Skew and the Ortho-Sym matrices in the following way. Given any function f : C → C analytic in a disc of radius > 1 centered at the origin, with Maclaurin expansion f(x) = Σ_{k=0}^{∞} c_k x^k, Eq. (11) implies

    f(𝕀) = Σ_{k=0}^{∞} c_k 𝕀^k                                                                        (16)
         = [ Σ_{k=0}^{∞} c_k (i^k + (−i)^k)/2 ] I + [ Σ_{k=0}^{∞} c_k (i^k − (−i)^k)/(2i) ] 𝕀         (17)
         = [f(i) + f(−i)]/2 · I + [f(i) − f(−i)]/(2i) · 𝕀                                             (18)

and Eq. (14) implies

    f(R) = Σ_{k=0}^{∞} c_k R^k                                                                         (19)
         = [ Σ_{k=0}^{∞} c_k (1 + (−1)^k)/2 ] I + [ Σ_{k=0}^{∞} c_k (1 + (−1)^{k+1})/2 ] R             (20)
         = [f(1) + f(−1)]/2 · I + [f(1) − f(−1)]/2 · R                                                 (21)

Exponential Functions

Equation (18), with f(x) = e^{αx}, yields

    e^{𝕀α} = I cos α + 𝕀 sin α                                                                        (22)

which gives a matrix extension of the fundamental DeMoivre formula

    e^{iα} = cos α + i sin α                                                                           (23)
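
A minimal numerical check of Eq. (22), as an illustrative Python/SciPy sketch (not from the paper), using the simplest real Ortho-Skew matrix, the 2 × 2 symplectic matrix of Eq. (12):

    import numpy as np
    from scipy.linalg import expm

    J = np.array([[0.0, 1.0],
                  [-1.0, 0.0]])                 # real 2x2 Ortho-Skew (symplectic) matrix
    alpha = 0.7
    lhs = expm(alpha * J)                                        # e^{I alpha}
    rhs = np.cos(alpha) * np.eye(2) + np.sin(alpha) * J          # Eq. (22)
    print(np.allclose(lhs, rhs))                                 # expected: True

For this particular choice of 𝕀 the result is, of course, the familiar 2 × 2 rotation matrix.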

In this light, we can see 𝕀 as a matrix generalization of the imaginary unit (equivalently, the imaginary unit i is a 1 × 1 Ortho-Skew matrix). Similarly, Eq. (21) allows us to express e^{Rα} as

    e^{Rα} = I cosh α + R sinh α                                                                       (24)

which yields a matrix extension of the fundamental formula

    e^{α} = cosh α + sinh α                                                                            (25)

In this case, the real unit can be considered a 1 × 1 Ortho-Sym matrix. Equations (22) and (24) yield the trigonometric identities

    I cos α = (e^{𝕀α} + e^{−𝕀α})/2   and   𝕀 sin α = (e^{𝕀α} − e^{−𝕀α})/2                           (26)

and the hyperbolic identities

    I cosh α = (e^{Rα} + e^{−Rα})/2   and   R sinh α = (e^{Rα} − e^{−Rα})/2                            (27)

Trigonometric and Hyperbolic Functions

Equation (18), with f(x) = sin(αx) or f(x) = cos(αx), yields

    sin(α𝕀) = 𝕀 sinh α   and   cos(α𝕀) = I cosh α                                                    (28)

From Eq. (28) it is possible to evaluate

    tan(α𝕀) = 𝕀 tanh α   and   cot(α𝕀) = −𝕀 coth α                                                   (29)

Similarly, for the Ortho-Sym matrices we have the relationships

    sin(αR) = R sin α   and   cos(αR) = I cos α                                                        (30)

which allow us to evaluate the tangent and cotangent functions

    tan(αR) = R tan α   and   cot(αR) = R cot α                                                        (31)

In the same way, we can also derive the hyperbolic functions

    sinh(α𝕀) = 𝕀 sin α   and   cosh(α𝕀) = I cos α                                                    (32)
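
The closed forms of Eqs. (24), (28), and (30) are easy to spot-check numerically. The following Python/SciPy sketch (an editorial illustration, not the paper's own material) uses SciPy's general matrix functions on the simplest real representatives of the two families: the 2 × 2 symplectic matrix for 𝕀 and a 2 × 2 reflection for R.

    import numpy as np
    from scipy.linalg import expm, sinm, cosm

    J = np.array([[0.0, 1.0], [-1.0, 0.0]])      # real Ortho-Skew
    R = np.array([[0.0, 1.0], [1.0, 0.0]])       # Ortho-Sym (orthogonal and symmetric)
    alpha, I2 = 0.4, np.eye(2)

    print(np.allclose(expm(alpha * R), np.cosh(alpha) * I2 + np.sinh(alpha) * R))   # Eq. (24)
    print(np.allclose(sinm(alpha * J), np.sinh(alpha) * J))                         # Eq. (28), sine
    print(np.allclose(cosm(alpha * J), np.cosh(alpha) * I2))                        # Eq. (28), cosine
    print(np.allclose(sinm(alpha * R), np.sin(alpha) * R))                          # Eq. (30), sine
    print(np.allclose(cosm(alpha * R), np.cos(alpha) * I2))                         # Eq. (30), cosine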

From Eq. (32), we can also evaluate the expressions

    tanh(α𝕀) = 𝕀 tan α   and   coth(α𝕀) = −𝕀 cot α                                                   (33)

Similarly, the hyperbolic functions of Ortho-Sym matrices are

    sinh(αR) = R sinh α   and   cosh(αR) = I cosh α                                                    (34)

which allow us to evaluate the hyperbolic tangent and cotangent functions

    tanh(αR) = R tanh α   and   coth(αR) = R coth α                                                    (35)

The effect of the cosine and the hyperbolic cosine applied to α𝕀 and αR is interesting, for the result in both cases is a multiple of the identity that depends on the constant α only. Put another way, the cosine and the hyperbolic cosine of 𝕀 and R completely delete all the eigenvalue and eigenvector information contained in these matrices. It appears that the cosine functions act as a sort of projection aligned with all the eigenvalues and eigenvectors of these two matrix families. In fact, for any Ortho-Skew matrices 𝕀_1, 𝕀_2 and any Ortho-Sym matrices R_1, R_2, we can write

    cos(α𝕀_1) = cos(α𝕀_2) = cosh(αR_1) = cosh(αR_2) = I cosh α
    cosh(α𝕀_1) = cosh(α𝕀_2) = cos(αR_1) = cos(αR_2) = I cos α                                         (36)

From Eqs. (28) and (32), we can derive that

    cos^2(α𝕀_1) + sin^2(α𝕀_2) = I
    cosh^2(α𝕀_1) − sinh^2(α𝕀_2) = I        (for any Ortho-Skew 𝕀_1, 𝕀_2)                             (37)

while, from Eqs. (30) and (34), we obtain

    cos^2(αR_1) + sin^2(αR_2) = I
    cosh^2(αR_1) − sinh^2(αR_2) = I        (for any Ortho-Sym R_1, R_2)                                (38)

It is to be noted that the relationships given in Eqs. (37) and (38) are quite different from the identities

    cos^2(M) + sin^2(M) = I   and   cosh^2(M) − sinh^2(M) = I                                          (39)

which, in turn, hold for just a single (but arbitrary) matrix M. Equations (28), (30), (32), and (34) allow us to derive the general power expressions for the sines. When k is even,

    sin^k(α𝕀) = (−1)^{k/2} I sinh^k α        sin^k(αR) = I sin^k α
    sinh^k(α𝕀) = (−1)^{k/2} I sin^k α        sinh^k(αR) = I sinh^k α                                  (40)

while, when k is odd, we have

    sin^k(α𝕀) = (−1)^{(k−1)/2} 𝕀 sinh^k α        sin^k(αR) = R sin^k α
    sinh^k(α𝕀) = (−1)^{(k−1)/2} 𝕀 sin^k α        sinh^k(αR) = R sinh^k α                             (41)

as well as, for the cosines (any k),

    cos^k(α𝕀) = I cosh^k α = cosh^k(αR)   and   cosh^k(α𝕀) = I cos^k α = cos^k(αR)                   (42)

Inverse Trigonometric and Hyperbolic Functions

As for the inverse functions, they can be derived straightforwardly from the preceding expressions. Let us, as an example, derive one of them. From the identity sin(β𝕀) = 𝕀 sinh β given in Eq. (28), we obtain

    sin^{-1}[sin(β𝕀)] = β𝕀 = sin^{-1}(𝕀 sinh β)                                                      (43)

Now setting α = sinh β, that is β = sinh^{-1} α, Eq. (43) becomes sin^{-1}(α𝕀) = 𝕀 sinh^{-1} α. By analogous procedures we derive the following inverse trigonometric functions

    sin^{-1}(α𝕀) = 𝕀 sinh^{-1} α         sin^{-1}(αR) = R sin^{-1} α
    tan^{-1}(α𝕀) = 𝕀 tanh^{-1} α         tan^{-1}(αR) = R tan^{-1} α
    cot^{-1}(α𝕀) = −𝕀 coth^{-1} α        cot^{-1}(αR) = R cot^{-1} α                                  (44)

while, for the inverse hyperbolic functions, we obtain

    sinh^{-1}(α𝕀) = 𝕀 sin^{-1} α         sinh^{-1}(αR) = R sinh^{-1} α
    tanh^{-1}(α𝕀) = 𝕀 tan^{-1} α         tanh^{-1}(αR) = R tanh^{-1} α
    coth^{-1}(α𝕀) = −𝕀 cot^{-1} α        coth^{-1}(αR) = R coth^{-1} α                                (45)

The inverse trigonometric cosine functions have the expressions

    cos^{-1}(α𝕀) = (π/2) I − 𝕀 sinh^{-1} α   and   cos^{-1}(αR) = (π/2) I − R sin^{-1} α             (46)

derived from the trigonometric identity

    cos^{-1} α = π/2 − sin^{-1} α                                                                      (47)

and from Eq. (44). The expressions of the inverse hyperbolic cosines are

    cosh^{-1}(α𝕀) = (π/2) 𝕀 + I sinh^{-1} α   and   cosh^{-1}(αR) = i(π/2) I − i R sin^{-1} α        (48)
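
The inverse relations in Eq. (44) can be checked by a simple round trip: applying the matrix sine or tangent to the claimed right-hand sides must recover the original argument. A short Python/SciPy sketch (an editorial illustration only, again using the simplest 2 × 2 representatives of the two families):

    import numpy as np
    from scipy.linalg import sinm, tanm

    J = np.array([[0.0, 1.0], [-1.0, 0.0]])      # real Ortho-Skew
    R = np.array([[0.0, 1.0], [1.0, 0.0]])       # Ortho-Sym
    alpha = 0.3

    print(np.allclose(sinm(np.arcsin(alpha) * R), alpha * R))     # sin(R sin^-1 a) = a R
    print(np.allclose(sinm(np.arcsinh(alpha) * J), alpha * J))    # sin(I sinh^-1 a) = a I
    print(np.allclose(tanm(np.arctan(alpha) * R), alpha * R))     # tan(R tan^-1 a) = a R
    print(np.allclose(tanm(np.arctanh(alpha) * J), alpha * J))    # tan(I tanh^-1 a) = a I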

Equation (48) can be derived from the scalar hyperbolic identity

    cosh^{-1}(iα) = i π/2 + sinh^{-1} α                                                                (49)

Equations (46) and (48) allow us to write the identities

    cosh^{-1}(α𝕀) = 𝕀 cos^{-1}(α𝕀)   and   cosh^{-1}(αR) = i cos^{-1}(αR)                            (50)

Trigonometric and Hyperbolic Function Periodicity

An immediate consequence of the preceding formulae is that the Ortho-Skew and the Ortho-Sym matrices fully respect the symmetry and periodicity of the trigonometric and hyperbolic functions. For instance, we have cos(−α𝕀) = cos(α𝕀), sin(−α𝕀) = −sin(α𝕀), tan(−α𝕀) = −tan(α𝕀), cot(−α𝕀) = −cot(α𝕀), and also cos(−αR) = cos(αR), sin(−αR) = −sin(αR), tan(−αR) = −tan(αR), cot(−αR) = −cot(αR). As for the 2kπ and kπ periodicity, simple analogues of Eqs. (18) and (21) easily yield the formulae

    sin(α𝕀) = sin(α𝕀 + 2kπI)        sin(α𝕀 ± πI) = −sin(α𝕀)
    cos(α𝕀) = cos(α𝕀 + 2kπI)        cos(α𝕀 ± πI) = −cos(α𝕀)
    tan(α𝕀) = tan(α𝕀 + kπI)         cot(α𝕀) = cot(α𝕀 + kπI)

and

    sin(αR) = sin(αR + 2kπI)        sin(αR ± πI) = −sin(αR)
    cos(αR) = cos(αR + 2kπI)        cos(αR ± πI) = −cos(αR)
    tan(αR) = tan(αR + kπI)         cot(αR) = cot(αR + kπI)                                            (51)

while the obvious analogous formulae hold for shifts by (π/2) I,

    cos(α𝕀) = sin(α𝕀 + (π/2) I)        cos(αR) = sin(αR + (π/2) I)
    tan(α𝕀) = cot(−α𝕀 + (π/2) I)       tan(αR) = cot(−αR + (π/2) I)                                  (52)

All the relationships given in Eqs. (51) and (52) can be demonstrated through spectral decomposition and series expansions; a quick numerical spot-check is given below, followed by an analytical example.
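
The following Python/SciPy sketch (an editorial illustration with arbitrary 2 × 2 representatives and arbitrary values of α and k) verifies two of the relations in Eq. (51):

    import numpy as np
    from scipy.linalg import sinm, cosm

    J = np.array([[0.0, 1.0], [-1.0, 0.0]])      # real Ortho-Skew
    R = np.array([[0.0, 1.0], [1.0, 0.0]])       # Ortho-Sym
    alpha, k, I2 = 0.9, 3, np.eye(2)

    print(np.allclose(cosm(alpha * R + 2 * k * np.pi * I2), cosm(alpha * R)))   # 2k*pi*I shift leaves cos unchanged
    print(np.allclose(sinm(alpha * J + np.pi * I2), -sinm(alpha * J)))          # pi*I shift flips the sign of sin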

As an analytical example, let us demonstrate the identity cos(αR + 2kπI) = cos(αR) = I cos α. Let R = CΛC^T be the spectral decomposition of R. Then we can write αR + 2kπI = C(αΛ)C^T + C(2kπI)C^T = C(αΛ + 2kπI)C^T, while the series expansion of the cosine allows us to write

    cos(αR + 2kπI) = C [ Σ_{n=0}^{∞} (−1)^n (αΛ + 2kπI)^{2n} / (2n)! ] C^T                             (53)

First, we notice that the (αΛ + 2kπI)^{2n} are powers of diagonal matrices, which can be replaced by powers of their scalar diagonal elements. Second, Λ contains only the elements λ = ±1, and therefore these scalars can all be written in the form 2kπ ± α, which means that the series for each diagonal element (the only nonzero elements) converges either to cos(2kπ + α) or to cos(2kπ − α), both equal to cos α. Therefore, Eq. (53) can be rewritten as

    cos(αR + 2kπI) = C (I cos α) C^T = I cos α = cos(αR)                                               (54)

which, thanks to Eq. (36), allows us to write

    cos(αR_1 + 2mπI)   = cos(αR_2 + 2nπI)   = I cos α
    cosh(α𝕀_1 + 2mπiI) = cosh(α𝕀_2 + 2nπiI) = I cos α
    cos(α𝕀_1 + 2mπI)   = cos(α𝕀_2 + 2nπI)   = I cosh α
    cosh(αR_1 + 2mπiI) = cosh(αR_2 + 2nπiI) = I cosh α                                                 (55)

for any values of the integers m and n, for any Ortho-Sym R_1, R_2, and for any Ortho-Skew 𝕀_1, 𝕀_2. The demonstration of the other formulae follows similar paths.

Power Function

As for the power function, it is always possible to write

    (α𝕀 + βI)^n = γ_1 I + γ_2 𝕀   and   (αR + βI)^n = γ_3 I + γ_4 R                                  (56)

For even values of n, the coefficients of Eq. (56) have the closed-form expressions

    γ_1 = Σ_{j=0}^{n/2}   (−1)^j n!/[(n−2j)!(2j)!] α^{2j} β^{n−2j}
    γ_2 = Σ_{j=0}^{n/2−1} (−1)^j n!/[(n−2j−1)!(2j+1)!] α^{2j+1} β^{n−2j−1}
    γ_3 = Σ_{j=0}^{n/2}   n!/[(n−2j)!(2j)!] α^{2j} β^{n−2j}
    γ_4 = Σ_{j=0}^{n/2−1} n!/[(n−2j−1)!(2j+1)!] α^{2j+1} β^{n−2j−1}                                    (57)

while, for odd values of n, the coefficients become

    γ_1 = Σ_{j=0}^{(n−1)/2} (−1)^j n!/[(n−2j)!(2j)!] α^{2j} β^{n−2j}
    γ_2 = Σ_{j=0}^{(n−1)/2} (−1)^j n!/[(n−2j−1)!(2j+1)!] α^{2j+1} β^{n−2j−1}
    γ_3 = Σ_{j=0}^{(n−1)/2} n!/[(n−2j)!(2j)!] α^{2j} β^{n−2j}
    γ_4 = Σ_{j=0}^{(n−1)/2} n!/[(n−2j−1)!(2j+1)!] α^{2j+1} β^{n−2j−1}                                  (58)

Logarithmic Function

The logarithmic function ln(I + M) has the series expansion [12]

    ln(I + M) = Σ_{k=0}^{∞} (−1)^k M^{k+1}/(k+1)                                                       (59)

Therefore, we have

    ln(I + α𝕀) = (I/2) Σ_{k=0}^{∞} (−1)^k α^{2(k+1)}/(k+1)  +  𝕀 Σ_{k=0}^{∞} (−1)^k α^{2k+1}/(2k+1)   (60)

The first (alternating) series in Eq. (60) converges to ln(1 + α^2), while the second is the inverse tangent series expansion. Therefore, we obtain the solution

    ln(I + α𝕀) = (1/2) I ln(1 + α^2) + 𝕀 tan^{-1} α        (α ≠ ±i)                                   (61)

The logarithmic series for the Ortho-Sym matrices is

    ln(I + αR) = −(I/2) Σ_{k=0}^{∞} α^{2(k+1)}/(k+1)  +  R Σ_{k=0}^{∞} α^{2k+1}/(2k+1)                 (62)

The first sum converges to −ln(1 − α^2), while the second coincides with the inverse hyperbolic tangent series expansion. Therefore, Eq. (62) becomes

    ln(I + αR) = (1/2) I ln(1 − α^2) + R tanh^{-1} α        (α ≠ ±1)                                   (63)

Inverse Function

Applying the geometric series identity 1/(1 − x) = Σ_{k=0}^{∞} x^k to the Ortho-Skew matrices results in

    (I − α𝕀)^{-1} = I Σ_{k=0}^{∞} (−1)^k α^{2k} + 𝕀 Σ_{k=0}^{∞} (−1)^k α^{2k+1} = (I + α𝕀)/(1 + α^2)        (α ≠ ±i)     (64)

while, for the Ortho-Sym matrices, it gives

    (I − αR)^{-1} = I Σ_{k=0}^{∞} α^{2k} + R Σ_{k=0}^{∞} α^{2k+1} = (I + αR)/(1 − α^2)        (α ≠ ±1)                     (65)

Equations (64) and (65) allow us to write

    (I + 𝕀)(I − 𝕀)^{-1} = (I − 𝕀)^{-1}(I + 𝕀) = 𝕀
    (I + iR)(I − iR)^{-1} = (I − iR)^{-1}(I + iR) = iR                                                 (66)

which are isomorphic Cayley transforms, since they map the 𝕀 and the iR matrices onto themselves, respectively. The first of Eq. (66) can be easily understood since the Cayley function f(z) = (1 + z)/(1 − z) maps z = ±i into f(z) = ±i [13], which are, in turn, the eigenvalues of 𝕀. The second of Eq. (66) involves the purely imaginary matrix iR which, in turn, has eigenvalues λ = ±i only. This is the main reason why the Cayley transform becomes isomorphic for this matrix.
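
The closed forms in Eqs. (61) and (63)-(66) can all be verified numerically. A short Python/SciPy sketch (an editorial illustration with arbitrary 2 × 2 representatives, not part of the paper):

    import numpy as np
    from scipy.linalg import logm

    J = np.array([[0.0, 1.0], [-1.0, 0.0]])      # real Ortho-Skew
    R = np.array([[0.0, 1.0], [1.0, 0.0]])       # Ortho-Sym
    alpha, I2 = 0.5, np.eye(2)

    # Eq. (61): ln(I + a*J) = 0.5*ln(1 + a^2)*I + arctan(a)*J
    print(np.allclose(logm(I2 + alpha * J), 0.5 * np.log(1 + alpha**2) * I2 + np.arctan(alpha) * J))
    # Eq. (63): ln(I + a*R) = 0.5*ln(1 - a^2)*I + arctanh(a)*R
    print(np.allclose(logm(I2 + alpha * R), 0.5 * np.log(1 - alpha**2) * I2 + np.arctanh(alpha) * R))
    # Eqs. (64)-(65): closed-form inverses
    print(np.allclose(np.linalg.inv(I2 - alpha * J), (I2 + alpha * J) / (1 + alpha**2)))
    print(np.allclose(np.linalg.inv(I2 - alpha * R), (I2 + alpha * R) / (1 - alpha**2)))
    # Eq. (66), first line: the Cayley transform maps J onto itself
    print(np.allclose((I2 + J) @ np.linalg.inv(I2 - J), J))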

Conclusion

This paper presents some interesting compact expressions for matrix functions of two families of matrices: the Ortho-Skew and the Ortho-Sym matrices. The Ortho-Skew matrices are complex matrices (which can be real in even-dimensional spaces) that are simultaneously unitary and skew-Hermitian, while the Ortho-Sym matrices are real matrices that are orthogonal and symmetric. Based mainly on the simplicity of their powers, simple analytical expressions can be derived from the series expansions of many matrix functions of these two kinds of matrices. In particular, the simplicity of the presented formulae and their analogies with the scalar trigonometric and hyperbolic functions clearly allow one to speak of a matrix trigonometry of Ortho-Skew and Ortho-Sym matrices. Additionally, other matrix functions, such as the multiplicative inverse, powers, and the logarithm, have been considered, and compact general expressions have been found for them.

Acknowledgements

This paper is my personal gift to Dr. John L. Junkins on the occasion of his sixtieth orbit about the Sun. I want to thank John for sharing part of this most recent orbit with me, for his great friendship, for his modesty, and for being a living example that working and dreaming may coincide! I am strongly indebted to my colleague Dr. Maurice J. Rojas for the generous help given in revising this manuscript and in making it clearer, more compact, and simpler. I also want to thank two anonymous referees for their helpful suggestions.

References

[1] S. Barnett, Matrices: Methods and Applications, Clarendon Press, Oxford University Press, Oxford, UK, 1990.
[2] F. R. Gantmacher, The Theory of Matrices, Chelsea Publishing Company, New York, N.Y., 1990.
[3] G. H. Golub and C. F. Van Loan, Matrix Computations, The Johns Hopkins University Press, Baltimore, MD, 1983.
[4] R. A. Horn and C. R. Johnson, Topics in Matrix Analysis, Cambridge University Press, Cambridge, UK, 1991.
[5] R. F. Rinehart, The Equivalence of Definitions of a Matrix Function, American Math. Monthly, Vol. 62, 1955, pp. 395-414.

[6] N. Dunford and J. Schwartz, Linear Operators, Part I, Interscience, New York, N.Y., 1958.
[7] J. S. Frame, Matrix Functions and Applications, Part II, IEEE Spectrum, April 1964, pp. 102-108.
[8] J. S. Frame, Matrix Functions and Applications, Part IV, IEEE Spectrum, June 1964, pp. 123-131.
[9] B. N. Parlett, A Recurrence among the Elements of Functions of Triangular Matrices, Linear Algebra and its Applications, Vol. 14, 1976, pp. 117-121.
[10] D. Mortari, On the Rigid Rotation Concept in n-Dimensional Spaces, Journal of the Astronautical Sciences, Vol. 49, No. 3, July-September 2001, pp. 401-420.
[11] A. K. Sanyal, Geometrical Transformations in Higher Dimensional Euclidean Spaces, Master's thesis, Department of Aerospace Engineering, Texas A&M University, College Station, TX, May 2001.
[12] G. A. Korn and T. M. Korn, Mathematical Handbook for Scientists and Engineers, McGraw-Hill Book Company, New York, N.Y., Second ed., 1968.
[13] D. Mortari, Conformal Mapping among Orthogonal, Symmetric, and Skew-Symmetric Matrices, Advances in the Astronautical Sciences, AAS Paper 03-190 of the AAS/AIAA Space Flight Mechanics Meeting, Ponce, Puerto Rico, February 9-13, 2003.