Some Formulas for the Principal Matrix pth Root


Int. J. Contemp. Math. Sciences, Vol. 9, 2014, no. 3, 141-152
HIKARI Ltd, www.m-hikari.com
http://dx.doi.org/10.12988/ijcms.2014.4110

Some Formulas for the Principal Matrix pth Root

R. Ben Taher, Y. El Khatabi and M. Rachidi

Equip of DEFA - Department of Mathematics and Informatics, Faculty of Sciences, University of My Ismail, B.P. 4010, Beni M'hamed, Meknes - Morocco

Copyright (c) 2014 R. Ben Taher, Y. El Khatabi and M. Rachidi. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution and reproduction in any medium, provided the original work is properly cited.

Abstract

In this paper we develop two methods for computing the principal matrix pth root. Our approach makes use of the notions of primary matrix function and minimal polynomial. Compact formulas for the principal matrix pth root are established and significant cases are explored.

Mathematics Subject Classification: Primary 15A24, 15A99, 65H10 - Secondary 15A18

Keywords: Principal matrix pth root, matrix function

1 Introduction

Recently the study of the matrix pth root has attracted much attention because of its applications in various fields of mathematics, engineering and applied science - for example, in systems theory, owing to its close relation with the matrix sector function [16]. Matrix pth roots are also deployed in the computation of the matrix logarithm through a well-known relation (see for example [7]). Other applications arise in areas such as matrix differential equations, nonlinear matrix equations, finance and health care [11]. Notably, the matrix pth root was the key tool in the proof of the Floquet

theorem for the difference equations. Various theoretical and numerical methods have therefore been provided for evaluating this matrix function (see [6], [10], [14], [15], [16]). Sadeghi et al. [15] propose a method for computing the principal pth root of matrices with repeated eigenvalues, based on the properties of constituent matrices. Guo [10] gives new convergence results for Newton's and Halley's methods. Iannazzo [14] provides a Newton iteration and derives a general algorithm for computing the matrix pth root; he demonstrates convergence and stability of this algorithm in a specific convergence region. Theoretical results and some algorithms have been presented by Bini et al. [6], and a Schur method was proposed by Smith [16]. As with the matrix logarithm function (see for example [11], [12], [1], [9]), the matrix pth root and the principal matrix pth root must be defined with some care; we give here the basic tool for our study. Let A be a real or complex matrix of order r and let p ≥ 2 be an integer. Any matrix X such that X^p = A is called a pth root of A. When A has no eigenvalues on R^- (the closed negative real axis), there exists a unique matrix X such that X^p = A whose eigenvalues belong to the set {z : -π/p < arg(z) < π/p}, where arg(z) denotes the argument of the complex number z. In this case the unique matrix X is called the principal pth root of the matrix A; it is a primary matrix function of A (see [11] and [12, Ch. 6]) and is denoted by X = A^{1/p}. For details on the theory of matrices we refer the reader to the books [11] and [12]. In this paper we deal with two methods for computing the principal matrix pth root. The methods involve nothing more than the notions of primary matrix function, minimal polynomial and the Lagrange-Sylvester interpolation polynomial (see [2], [3], [4], [5], [8], [9], [13]).
More precisely, if M_A(z) = ∏_{i=1}^{s} (z - λ_i)^{m_i} (with ∑_{i=1}^{s} m_i = m ≤ r) is the minimal polynomial of A, we are interested in two decompositions of the form A^{1/p} = ∑_{k=0}^{m-1} Ω_k^{(p)} w_k(A), where w_k(z) = z^k or w_k is built from the products ∏_{j=1, j≠k}^{s} (z - λ_j)^{m_j}, and where the scalars Ω_k^{(p)} satisfy a linear system of m equations. Thereby we establish some formulas for the principal matrix pth root of A (p ≥ 2). For reasons of clarity some new particular cases are examined, and others existing in the literature are recovered. Moreover, significant examples are presented to illustrate our various results. We emphasize that our formulas are not current in the literature. The material of this paper is organized as follows. In Section 2 we study some explicit formulas for the principal matrix pth root obtained from the decomposition A^{1/p} = ∑_{k=0}^{m-1} Ω_k^{(p)} A^k. Section 3 is also devoted to the computation of the principal matrix pth root; there our approach rests on the decomposition A^{1/p} = ∑_{k=1}^{s} [∑_{τ=0}^{m_k-1} Ω_{kτ}^{(p)} (A - λ_k I_r)^τ] ∏_{j=1, j≠k}^{s} (A - λ_j I_r)^{m_j}. Furthermore, the case p = 2 for 3 × 3 matrices is explored in light of our results. Finally, some concluding remarks and perspectives are given in Section 4.
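As a small numerical illustration of the defining property above (our own sketch, not part of the paper): for a complex number z off the closed negative real axis, the principal pth root is the unique pth root whose argument lies in (-π/p, π/p).

```python
import cmath

def principal_root(z: complex, p: int) -> complex:
    """Principal pth root of z, assuming z is not on the closed negative real axis."""
    r, theta = cmath.polar(z)          # z = r * exp(i*theta), with theta in (-pi, pi]
    return r ** (1.0 / p) * cmath.exp(1j * theta / p)

z = complex(-1.0, 1.0)                 # off the negative real axis
x = principal_root(z, 3)
assert abs(x ** 3 - z) < 1e-12                          # x is a cube root of z
assert -cmath.pi / 3 < cmath.phase(x) < cmath.pi / 3    # argument in (-pi/3, pi/3)
```

The same windowing of the argument is what singles out the principal branch of f(z) = z^{1/p} used throughout the paper.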

Throughout this paper, A^{1/p} represents the principal matrix pth root, and Sp(A) designates the set of all eigenvalues of A.

2 Polynomial expression of A^{1/p}

Let A ∈ M_r(C) be such that Sp(A) ⊂ C \ (R^- ∪ {0}), and let M_A(z) = ∏_{i=1}^{s} (z - μ_i)^{m_i} (∑_{i=1}^{s} m_i = m ≤ r) be its minimal polynomial. There exists an invertible matrix Z satisfying A = Z J_A Z^{-1}, where J_A denotes the Jordan canonical form of the matrix A. Then the principal matrix pth root of A is A^{1/p} = Z diag(M_1^{1/p}, M_2^{1/p}, ..., M_s^{1/p}) Z^{-1}, where the blocks M_i associated with the eigenvalues μ_i are of sizes r_i × r_i with r_1 + r_2 + ... + r_s = r. For each i = 1, ..., s we can write M_i = J_{i1} ⊕ J_{i2} ⊕ ... ⊕ J_{iq_i}, where J_{i1} = J_{m_i}(μ_i) is a Jordan block of order m_i and the other blocks are of orders ≤ m_i. The blocks [J_n(μ_i)]^{1/p} (n ≤ m_i) are the upper triangular Toeplitz matrices

[J_n(μ_i)]^{1/p} =
[ f(μ_i)   f'(μ_i)   ...   f^{(n-1)}(μ_i)/(n-1)! ]
[   0      f(μ_i)    ...          ...            ]
[  ...                ...       f'(μ_i)          ]
[   0       ...        0        f(μ_i)           ]

where f(z) = z^{1/p} is the pth root function considered on its principal branch. Notice that f is defined on the spectrum of A; in other words, the values f(μ_k), f'(μ_k), ..., f^{(m_k-1)}(μ_k) exist for each k = 1, ..., s. We formulate the main result of this section as follows.

Theorem 2.1 Let p ≥ 2 be an integer and A ∈ M_r(C) such that Sp(A) ⊂ C \ (R^- ∪ {0}), with minimal polynomial M_A(z) = ∏_{i=1}^{s} (z - μ_i)^{m_i} (∑_{i=1}^{s} m_i = m ≤ r). Then we have A^{1/p} = ∑_{k=0}^{m-1} Ω_k^{(p)} A^k, where the scalars Ω_k^{(p)} satisfy the following system of m equations:

∑_{k=0}^{m-1} Ω_k^{(p)} μ_i^k = μ_i^{1/p},   i = 1, ..., s,
∑_{k=j}^{m-1} (k choose j) Ω_k^{(p)} μ_i^{k-j} = (1/j!) ∏_{l=0}^{j-1} (1/p - l) μ_i^{1/p-j},   i = 1, ..., s, j = 1, ..., m_i - 1.

Proof. Consider the complex function f(z) = z^{1/p} defined on its principal branch. Since f(A) = A^{1/p} is a primary matrix function of A, there exist a polynomial q of degree ≤ m - 1 and m unique scalars Ω_0^{(p)}, Ω_1^{(p)}, ..., Ω_{m-1}^{(p)} such that A^{1/p} = q(A) = ∑_{k=0}^{m-1} Ω_k^{(p)} A^k (for more details see [9, 11, 12, 13]). Besides, the Jordan canonical form of A is J_A = J_{m_1}(μ_1) ⊕ ... ⊕ J_{m_s}(μ_s) ⊕ J~_A,

where J~_A is the direct sum of the Jordan blocks corresponding to the eigenvalues μ_i whose orders are < m_i (if there are any). Since J_A^{1/p} = Z^{-1} A^{1/p} Z, we get J_A^{1/p} = ∑_{k=0}^{m-1} Ω_k^{(p)} J_A^k, which implies (by equality of blocks) [J_{m_i}(μ_i)]^{1/p} = ∑_{k=0}^{m-1} Ω_k^{(p)} [J_{m_i}(μ_i)]^k for every i = 1, ..., s, and [J~_A]^{1/p} = ∑_{k=0}^{m-1} Ω_k^{(p)} [J~_A]^k. A direct computation shows that [J_{m_i}(μ_i)]^k = (c_{lj}^{(k)})_{1≤l,j≤m_i} with c_{lj}^{(k)} = (k choose j-l) μ_i^{k-j+l} (see [8]); we thereby obtain the system above. Note that [J~_A]^{1/p} = ∑_{k=0}^{m-1} Ω_k^{(p)} [J~_A]^k does not produce any new equations. As a matter of fact, the previous system of m equations in m unknowns admits a unique solution.

Expressions similar to A^{1/p} = ∑_{k=0}^{m-1} Ω_k^{(p)} A^k have been explored for e^{tA} in [3], [4], [8] and for the principal matrix logarithm Log(I - tA) in [1]. We recall that such an expression is called the polynomial decomposition in [1], [3], [4], [8]. Moreover, assume that A, B ∈ M_r(C) are two similar matrices; that is, there exists a non-singular matrix Z such that B = Z^{-1} A Z. If A satisfies the conditions of Theorem 2.1, we get A^{1/p} = ∑_{k=0}^{m-1} Ω_k^{(p)} A^k, and it follows that B^{1/p} = Z^{-1} A^{1/p} Z = Z^{-1} (∑_{k=0}^{m-1} Ω_k^{(p)} A^k) Z. Hence we have B^{1/p} = ∑_{k=0}^{m-1} Ω_k^{(p)} B^k. The computation of the principal matrix square root of a given matrix can be derived easily from Theorem 2.1 as follows.

Corollary 2.2 Under the conditions of Theorem 2.1, the principal square root of a matrix A ∈ M_r(C) is A^{1/2} = ∑_{k=0}^{m-1} Ω_k^{(2)} A^k, where the scalars Ω_k^{(2)} (0 ≤ k ≤ m-1) are the solutions of the linear system of equations

∑_{k=0}^{m-1} Ω_k^{(2)} μ_i^k = μ_i^{1/2},   i = 1, ..., s,
∑_{k=j}^{m-1} (k choose j) Ω_k^{(2)} μ_i^{k-j} = (1/j!) ∏_{l=0}^{j-1} (1/2 - l) μ_i^{1/2-j},   i = 1, ..., s, j = 1, ..., m_i - 1.

In the following example we apply Corollary 2.2 to compute the square root of a 3 × 3 matrix with two distinct eigenvalues.
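Before turning to that example, the mechanics of Corollary 2.2 can be checked on an even smaller case of our own choosing (a sketch, not the paper's example): for A = [[4, 1], [0, 4]] with M_A(z) = (z - 4)^2, the system reduces to Ω_0^{(2)} + 4 Ω_1^{(2)} = 4^{1/2} and Ω_1^{(2)} = (1/2) 4^{-1/2}.

```python
# Corollary 2.2 for A = [[4, 1], [0, 4]], minimal polynomial (z - 4)^2:
A = [[4.0, 1.0], [0.0, 4.0]]
omega1 = 0.5 * 4.0 ** -0.5           # derivative condition at mu = 4
omega0 = 4.0 ** 0.5 - 4.0 * omega1   # value condition at mu = 4

# A^(1/2) = omega0*I + omega1*A
X = [[omega0 + omega1 * A[0][0], omega1 * A[0][1]],
     [omega1 * A[1][0], omega0 + omega1 * A[1][1]]]

# check X*X == A entrywise
X2 = [[X[0][0]*X[0][0] + X[0][1]*X[1][0], X[0][0]*X[0][1] + X[0][1]*X[1][1]],
      [X[1][0]*X[0][0] + X[1][1]*X[1][0], X[1][0]*X[0][1] + X[1][1]*X[1][1]]]
for i in range(2):
    for j in range(2):
        assert abs(X2[i][j] - A[i][j]) < 1e-12
```

Here omega0 = 1 and omega1 = 1/4, so X = [[2, 1/4], [0, 2]], whose square is indeed A.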

Example 2.3 Consider the matrix

A =
[ 18   33   12 ]
[ -5   -8   -4 ]
[ -3   -7    1 ]

We have M_A(z) = (z - 3)^2 (z - 5). It turns out that A^{1/2} = ∑_{k=0}^{2} Ω_k^{(2)} A^k, where the real scalars Ω_0^{(2)}, Ω_1^{(2)} and Ω_2^{(2)} satisfy the linear system

Ω_0^{(2)} + 3 Ω_1^{(2)} + 9 Ω_2^{(2)} = √3
Ω_0^{(2)} + 5 Ω_1^{(2)} + 25 Ω_2^{(2)} = √5
Ω_1^{(2)} + 6 Ω_2^{(2)} = 1/(2√3).

Thereafter we get Ω_0^{(2)} = (9√5)/4 - (5√3)/2, Ω_1^{(2)} = (13√3)/6 - (3√5)/2 and Ω_2^{(2)} = √5/4 - √3/3. Hence we obtain

A^{1/2} = ((9√5)/4 - (5√3)/2) I_3 + ((13√3)/6 - (3√5)/2) A + (√5/4 - √3/3) A^2 =
[ 6√5 - (9/2)√3      12√5 - (21/2)√3     6(√5 - √3)   ]
[ -2√5 + (11/6)√3    (9/2)√3 - 4√5       -2(√5 - √3)  ]
[ (5/6)√3 - √5       (3/2)√3 - 2√5       √5 - (2/3)√3 ]

3 Another approach for computing A^{1/p}

3.1 The general setting

In this subsection we are concerned with the computation of the principal matrix pth root A^{1/p} (p ≥ 2) with the aid of the Lagrange-Sylvester polynomial approach. That is, for a matrix A in M_r(C) with minimal polynomial M_A(z) = ∏_{i=1}^{s} (z - λ_i)^{m_i} and a function f defined on the spectrum of A, there exists a unique polynomial r of degree less than m such that f(A) = r(A) (see [9, Ch. 5]). In the literature the polynomial r(A) is known as the Lagrange-Sylvester interpolation polynomial of f on the spectrum of A. By incorporating the properties of the Jordan canonical form of the matrix A, we manage to bring out the main result of this section.

Theorem 3.1 Let p ≥ 2 be an integer and A ∈ M_r(C) with Sp(A) ⊂ C \ (R^- ∪ {0}) and M_A(z) = ∏_{k=1}^{s} (z - λ_k)^{m_k} (∑_{k=1}^{s} m_k = m ≤ r). Then we have

A^{1/p} = ∑_{k=1}^{s} [ ∑_{τ=0}^{m_k-1} Ω_{kτ}^{(p)} (A - λ_k I_r)^τ ] ∏_{j=1, j≠k}^{s} (A - λ_j I_r)^{m_j}   (1)

where the scalars Ω_{k0}^{(p)}, Ω_{k1}^{(p)}, ..., Ω_{k,m_k-1}^{(p)} satisfy the linear system

Ω_{k0}^{(p)} ∏_{j=1, j≠k}^{s} (λ_k - λ_j)^{m_j} = λ_k^{1/p},   k = 1, ..., s,
∑_{τ=0}^{i} Ω_{kτ}^{(p)} Δ_{iτs}^{(k)} = (1/i!) ∏_{l=0}^{i-1} (1/p - l) λ_k^{1/p-i},   k = 1, ..., s, i = 1, ..., m_k - 1,   (2)

where Δ_{iτs}^{(k)} = ∑_{a ∈ H_s^{(k)}(i-τ)} ∏_{j=1, j≠k}^{s} (m_j choose a_j) (λ_k - λ_j)^{m_j - a_j} and H_s^{(k)}(π) = {(a_1, a_2, ..., a_s) ∈ H_s(π) : a_k = 0}, with H_s(π) = {(a_1, a_2, ..., a_s) ∈ N^s : ∑_{i=1}^{s} a_i = π}, for all π ≥ 0 and 1 ≤ k ≤ s.

Proof. Consider the analytic function f : C \ (R^- ∪ {0}) → D given by f(z) = z^{1/p}, where D = {z : -π/p < arg(z) < π/p}. Recall that f is well defined on the spectrum of A; applying the Lagrange-Sylvester interpolation polynomial to f, we have

A^{1/p} = ∑_{k=1}^{s} [ ∑_{τ=0}^{m_k-1} Ω_{kτ}^{(p)} (A - λ_k I_r)^τ ] ∏_{j=1, j≠k}^{s} (A - λ_j I_r)^{m_j}.   (3)

Without loss of generality we may suppose that each eigenvalue appears in a single Jordan block, so that the Jordan canonical form of A takes the form J_A = J_{m_1}(λ_1) ⊕ ... ⊕ J_{m_s}(λ_s). Moreover, we have (J_A - λ_k I_r)^τ = [J_{m_1}(λ_1 - λ_k)]^τ ⊕ ... ⊕ [J_{m_k}(0)]^τ ⊕ ... ⊕ [J_{m_s}(λ_s - λ_k)]^τ. For each k = 1, ..., s the entries of the matrix ∏_{j=1, j≠k}^{s} (J_A - λ_j I_r)^{m_j} are equal to 0 except for the kth diagonal block. The equality of blocks applied to (3) leads to [J_{m_k}(λ_k)]^{1/p} = ∑_{τ=0}^{m_k-1} Ω_{kτ}^{(p)} [J_{m_k}(0)]^τ ∏_{j=1, j≠k}^{s} [J_{m_k}(λ_k - λ_j)]^{m_j} for k = 1, ..., s. The matrix ∏_{j=1, j≠k}^{s} [J_{m_k}(λ_k - λ_j)]^{m_j} is upper triangular, and the entries on its ith upper diagonal are all equal to ∑_{a ∈ H_s^{(k)}(i)} ∏_{j=1, j≠k}^{s} (m_j choose a_j) (λ_k - λ_j)^{m_j - a_j}, i = 0, 1, ..., m_k - 1 (for more details see [8]). A direct computation then shows that the entries on the ith upper diagonal of the matrix [J_{m_k}(0)]^τ ∏_{j=1, j≠k}^{s} [J_{m_k}(λ_k - λ_j)]^{m_j} are all equal to ∑_{a ∈ H_s^{(k)}(i-τ)} ∏_{j=1, j≠k}^{s} (m_j choose a_j) (λ_k - λ_j)^{m_j - a_j}

for i = τ, ..., m_k - 1. Therefore we obtain the system of linear equations (2), as desired.

For reasons of clarity, suppose that A has two distinct eigenvalues λ and μ lying in C \ (R^- ∪ {0}), with M_A(z) = (z - λ)^2 (z - μ)^2. Theorem 3.1 implies that

A^{1/p} = [Ω_{10}^{(p)} I_4 + Ω_{11}^{(p)} (A - μI_4)] (A - λI_4)^2 + [Ω_{20}^{(p)} I_4 + Ω_{21}^{(p)} (A - λI_4)] (A - μI_4)^2,

where the scalars Ω_{10}^{(p)}, Ω_{11}^{(p)}, Ω_{20}^{(p)} and Ω_{21}^{(p)} satisfy the linear system

Ω_{10}^{(p)} (μ - λ)^2 = μ^{1/p}
Ω_{10}^{(p)} [2(μ - λ)] + Ω_{11}^{(p)} (μ - λ)^2 = (1/p) μ^{1/p-1}
Ω_{20}^{(p)} (λ - μ)^2 = λ^{1/p}
Ω_{20}^{(p)} [2(λ - μ)] + Ω_{21}^{(p)} (λ - μ)^2 = (1/p) λ^{1/p-1}.

As a matter of fact, we have

Ω_{10}^{(p)} = μ^{1/p}/(μ - λ)^2,   Ω_{11}^{(p)} = (1/(μ - λ)^2) [ (1/p) μ^{1/p-1} - 2μ^{1/p}/(μ - λ) ],
Ω_{20}^{(p)} = λ^{1/p}/(λ - μ)^2,   Ω_{21}^{(p)} = (1/(λ - μ)^2) [ (1/p) λ^{1/p-1} - 2λ^{1/p}/(λ - μ) ].

For illustrating this case we consider the following example.

Example 3.2 Let p = 4 and let A ∈ M_4(C) be a matrix with minimal polynomial M_A(z) = (z - 3)^2 (z - 2)^2, so that λ = 3 and μ = 2. Then we have Ω_{10}^{(4)} = 2^{1/4}, Ω_{11}^{(4)} = (17/8) 2^{1/4}, Ω_{20}^{(4)} = 3^{1/4} and Ω_{21}^{(4)} = -(23/12) 3^{1/4}. Therefore the principal 4th root of A is

A^{1/4} = [2^{1/4} I_4 + (17/8) 2^{1/4} (A - 2I_4)] (A - 3I_4)^2 + [3^{1/4} I_4 - (23/12) 3^{1/4} (A - 3I_4)] (A - 2I_4)^2.
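The Ω^{(4)} coefficients of Example 3.2 can be sanity-checked numerically. As a sketch we use an illustrative matrix of our own with the same minimal polynomial (z - 3)^2 (z - 2)^2, namely the direct sum of the Jordan blocks J_2(3) and J_2(2); the coefficients depend only on the spectrum, so the check carries over.

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def madd(A, B):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def smul(c, A):
    return [[c * x for x in row] for row in A]

def eye(n):
    return [[float(i == j) for j in range(n)] for i in range(n)]

# J_2(3) (+) J_2(2): minimal polynomial (z-3)^2 (z-2)^2, as in Example 3.2
A = [[3.0, 1.0, 0.0, 0.0],
     [0.0, 3.0, 0.0, 0.0],
     [0.0, 0.0, 2.0, 1.0],
     [0.0, 0.0, 0.0, 2.0]]
I4 = eye(4)

# coefficients from Example 3.2 (lambda = 3, mu = 2, p = 4)
w10, w11 = 2.0 ** 0.25, (17.0 / 8.0) * 2.0 ** 0.25
w20, w21 = 3.0 ** 0.25, -(23.0 / 12.0) * 3.0 ** 0.25

Am2 = madd(A, smul(-2.0, I4))   # A - 2I
Am3 = madd(A, smul(-3.0, I4))   # A - 3I
X = madd(matmul(madd(smul(w10, I4), smul(w11, Am2)), matmul(Am3, Am3)),
         matmul(madd(smul(w20, I4), smul(w21, Am3)), matmul(Am2, Am2)))

X4 = matmul(matmul(X, X), matmul(X, X))   # X^4 should recover A
assert all(abs(X4[i][j] - A[i][j]) < 1e-9 for i in range(4) for j in range(4))
```

The diagonal of X carries 3^{1/4} on the first block and 2^{1/4} on the second, as the principal branch requires.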

Under the conditions of Theorem 3.1, suppose that A has a single eigenvalue λ ∈ C \ (R^- ∪ {0}) of multiplicity m ≤ r. In this case Expression (1) takes the form A^{1/p} = ∑_{τ=0}^{m-1} Ω_{1τ}^{(p)} (A - λI_r)^τ. For all τ = 0, 1, ..., i, the only nonempty set H_1^{(1)}(i - τ) is obtained for τ = i, which corresponds to a_1 = 0. Hence the system (2) permits us to get

Ω_{10}^{(p)} = λ^{1/p},   Ω_{1i}^{(p)} = (1/i!) ∏_{l=0}^{i-1} (1/p - l) λ^{1/p-i},   i = 1, ..., m - 1.

Thus we can formulate the following corollary.

Corollary 3.3 If A has a single eigenvalue λ ∈ C \ (R^- ∪ {0}) of multiplicity m ≤ r, then

A^{1/p} = λ^{1/p} I_r + ∑_{k=1}^{m-1} (1/k!) ∏_{l=0}^{k-1} (1/p - l) λ^{1/p-k} (A - λI_r)^k.

Using the notion of primary matrix functions and the previous formula, we have A^{1/p} = ∑_{k=0}^{m-1} f_k^{(p)} A^k, where

f_0^{(p)} = [ 1 + ∑_{k=1}^{m-1} ((-1)^k/k!) ∏_{l=0}^{k-1} (1/p - l) ] λ^{1/p},
f_k^{(p)} = λ^{1/p-k} ∑_{i=k}^{m-1} ((-1)^{i-k}/(k!(i-k)!)) ∏_{l=0}^{i-1} (1/p - l),   k = 1, ..., m - 1.

It turns out that a similar formula may be inferred with the aid of Theorem 2.1. Now let us suppose that A has m distinct eigenvalues. A direct application of Theorem 3.1 permits us to recover the following Lagrange interpolation property.

Corollary 3.4 (Lagrange Interpolation). If A has m distinct eigenvalues λ_1, λ_2, ..., λ_m in C \ (R^- ∪ {0}), then M_A(z) = ∏_{i=1}^{m} (z - λ_i) and

A^{1/p} = ∑_{k=1}^{m} λ_k^{1/p} ∏_{j=1, j≠k}^{m} (A - λ_j I_r)/(λ_k - λ_j).   (4)

Our results are valid for square matrices of large size; for reasons of clarity we will examine the case of 3 × 3 matrices.
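Formula (4) is easy to exercise numerically. The following sketch (with an illustrative matrix of our own, not from the paper) applies it to a 3 × 3 matrix with distinct eigenvalues 1, 4 and 9, and checks that the result squares back to A.

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def eye(n):
    return [[float(i == j) for j in range(n)] for i in range(n)]

def lagrange_root(A, eigvals, p):
    """Formula (4): A^(1/p) = sum_k eig_k^(1/p) * prod_{j != k} (A - eig_j I)/(eig_k - eig_j)."""
    n = len(A)
    X = [[0.0] * n for _ in range(n)]
    for k, lk in enumerate(eigvals):
        term = eye(n)
        for j, lj in enumerate(eigvals):
            if j != k:
                factor = [[(A[r][c] - lj * (r == c)) / (lk - lj) for c in range(n)]
                          for r in range(n)]
                term = matmul(term, factor)
        X = [[X[r][c] + lk ** (1.0 / p) * term[r][c] for c in range(n)] for r in range(n)]
    return X

A = [[1.0, 1.0, 0.0],
     [0.0, 4.0, 1.0],
     [0.0, 0.0, 9.0]]          # distinct eigenvalues 1, 4, 9
X = lagrange_root(A, [1.0, 4.0, 9.0], 2)
X2 = matmul(X, X)
assert all(abs(X2[i][j] - A[i][j]) < 1e-9 for i in range(3) for j in range(3))
```

Since X is a polynomial in A it is again upper triangular here, with diagonal 1, 2, 3, i.e. the principal square roots of the eigenvalues.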

3.2 Study of the principal matrix square root of 3 × 3 matrices

This subsection is devoted to exploring the principal square root of 3 × 3 matrices using the results established in Subsection 3.1. Various expressions are provided, depending on the minimal polynomial. Let A ∈ M_3(C); throughout this subsection we will assume that A satisfies the conditions of Theorem 3.1. We begin by supposing that A admits a single eigenvalue λ; from Corollary 3.3 it follows that

A^{1/2} = λ^{1/2} I_3 if M_A(z) = z - λ,
A^{1/2} = λ^{1/2} I_3 + (1/2) λ^{-1/2} (A - λI_3) if M_A(z) = (z - λ)^2, and
A^{1/2} = λ^{1/2} I_3 + (1/2) λ^{-1/2} (A - λI_3) - (1/8) λ^{-3/2} (A - λI_3)^2 if M_A(z) = (z - λ)^3.

Example 3.5 Consider

A =
[ 13   -4/3   -1 ]
[  9    20     3 ]
[ -3   -4/3   15 ]

Since M_A(z) = (z - 16)^2, the computation of the principal square root of this matrix can be derived easily from the corresponding formula. A direct computation allows us to have

A^{1/2} = 4 I_3 + (1/8)(A - 16I_3) =
[ 29/8   -1/6   -1/8 ]
[  9/8    9/2    3/8 ]
[ -3/8   -1/6   31/8 ]

Now suppose that A ∈ M_3(C) admits two distinct eigenvalues λ and μ. Then we have two cases to discuss: M_A(z) = (z - λ)(z - μ) or M_A(z) = (z - λ)^2 (z - μ). In the first case the Lagrange interpolation of Corollary 3.4 implies that

A^{1/2} = λ^{1/2} (A - μI_3)/(λ - μ) + μ^{1/2} (A - λI_3)/(μ - λ).   (5)

For the second case, it follows from Theorem 3.1 that

A^{1/2} = [Ω_{λ0}^{(2)} I_3 + Ω_{λ1}^{(2)} (A - λI_3)] (A - μI_3) + Ω_{μ0}^{(2)} (A - λI_3)^2,
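The computation in Example 3.5 can be verified directly: since (A - 16I_3)^2 = 0, the matrix X = 4I_3 + (1/8)(A - 16I_3) satisfies X^2 = 16I_3 + (A - 16I_3) = A. A minimal machine check, with the entries as read from the example and kept exact via rationals:

```python
from fractions import Fraction as F

A = [[F(13), F(-4, 3), F(-1)],
     [F(9),  F(20),    F(3)],
     [F(-3), F(-4, 3), F(15)]]

def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

I3 = [[F(int(i == j)) for j in range(3)] for i in range(3)]
N16 = [[A[i][j] - 16 * I3[i][j] for j in range(3)] for i in range(3)]
assert matmul(N16, N16) == [[F(0)] * 3 for _ in range(3)]   # (A - 16I)^2 = 0

X = [[4 * I3[i][j] + N16[i][j] / 8 for j in range(3)] for i in range(3)]
assert matmul(X, X) == A                                     # X^2 recovers A exactly
assert X[0][0] == F(29, 8) and X[2][2] == F(31, 8)           # matches the printed entries
```

Because the arithmetic is exact, the equalities hold with no floating-point tolerance.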

where Ω_{λ0}^{(2)}, Ω_{λ1}^{(2)}, Ω_{μ0}^{(2)} satisfy the system

Ω_{λ0}^{(2)} (λ - μ) = λ^{1/2},   Ω_{μ0}^{(2)} (μ - λ)^2 = μ^{1/2},   Ω_{λ0}^{(2)} + Ω_{λ1}^{(2)} (λ - μ) = (1/2) λ^{-1/2}.

Therefrom we obtain

A^{1/2} = [a I_3 + b (A - λI_3)] (A - μI_3) + c (A - λI_3)^2,   (6)

where a = λ^{1/2}/(λ - μ), b = -(1/2)(λ + μ) λ^{-1/2}/(λ - μ)^2 and c = μ^{1/2}/(μ - λ)^2.

Example 3.6 Let us compute the principal square root of a matrix A ∈ M_3(C) whose minimal polynomial is M_A(z) = (z - e^{iπ/3})(z - 4i). From Formula (5) we derive

A^{1/2} = e^{iπ/6} (A - 4i I_3)/(e^{iπ/3} - 4i) + 2e^{iπ/4} (A - e^{iπ/3} I_3)/(4i - e^{iπ/3}),

with e^{iπ/3} - 4i = 1/2 + (√3/2 - 4)i and 4i - e^{iπ/3} = -1/2 + (4 - √3/2)i. The numerical entries of A^{1/2} can be computed easily from the preceding expression.

Finally, suppose that A ∈ M_3(C) owns three distinct eigenvalues λ, μ and ν. A direct application of Corollary 3.4 (with p = 2 and m = 3) yields

A^{1/2} = λ^{1/2} (A - μI_3)(A - νI_3)/((λ - μ)(λ - ν)) + μ^{1/2} (A - λI_3)(A - νI_3)/((μ - λ)(μ - ν)) + ν^{1/2} (A - λI_3)(A - μI_3)/((ν - λ)(ν - μ)).

4 Concluding remarks and perspectives

In the preceding sections we have developed methods for calculating the principal matrix pth root, in which the notions of primary matrix function, Jordan canonical form and Lagrange-Sylvester interpolation play a central role. In each case the resolution of a linear system is required to obtain the explicit formulas for the principal matrix pth root. As far as we know, most of our results are not current in the literature.

References

[1] J. Abderraman Marrero, R. Ben Taher and M. Rachidi, On explicit formulas for the principal matrix logarithm, Applied Mathematics and Computation, 220 (2013), 142-148.

Principal matrix pth root 151 [] R. Ben Taher and M. Rachidi On the matrix powers and exponential by r-generalized Fibonacci sequences methods: the companion matrix case Linear Algebra and Its Applications 370 : 341 353 003. [3] R. Ben Taher and M. Rachidi Some explicit formulas for the polynomial decomposition of the matrix exponential and applications Linear Algebra and Its Applications 350 : 171-184 00. [4] R. Ben Taher and M. Rachidi Linear recurrence relations in the algebra matrices and applications Linear Algebra and Its Applications 330 : 15 4 001. [5] R. Ben Taher M. Mouline and M. Rachidi Fibonacci-Horner decomposition of the matrix exponential and the fundamental solution Electronic Linear Algebra 15 : 178 190 006. [6] D. A. Bini N. J. Higham and B. Meini Algorithms for the matrix p th root Numerical Algorithms 39 : 349 378 005. [7] P. J. Davis and P. Rabinowitz Methods of Numerical Integration nd ed. Academic Press London 1984. [8] H.W. Cheng and S.S.-T. Yau On more explicit formulas for matrix exponential Linear Algebra and Its Appl. 6 : 131 163 1997. [9] F. R. Gantmacher Theory of Matrices. Chelsea Publishing Company New Yor 1959. [10] C. H. Guo On Newton s method and Halley s method for the principal pth root of a matrix Linear Algebra Appl 43(11) : 98 930 009. [11] N. J. Higham Functions of Matrices: Theory and Computation Society for Industrial and Applied Mathematics Philadelphia PA USA 008. [1] R. A. Horn and C. R. Johnson Topics in Matrix Analysis. Cambridge Univ. Press Cambridge UK 1994. [13] Roger A. Horn and Gregory G. Piepmeyer Two applications of the theory of primary matrix functions. Linear Algebra Appl. 361 : 99 106 003. [14] B. Iannazzo On the Newton method for the matrix p th root SIAM J. Matrix Anal. Appl. Vol. 8 No. : 503 53 006. [15] A. Sadeghi A. Izani Md. Ismail and A. Ahmad Computing the pth Roots of a Matrix with Repeated Eigenvalues. Applied Mathematical Sciences Vol. 5 No. 53 : 645 661 011.

[16] M. I. Smith, A Schur algorithm for computing matrix pth roots, SIAM J. Matrix Anal. Appl., 24 (2003), no. 4, 971-989.

Received: January 5, 2014