ETNA Kent State University

Similar documents
ON LINEAR COMBINATIONS OF

Hermite Interpolation and Sobolev Orthogonality

On some tridiagonal k Toeplitz matrices: algebraic and analytical aspects. Applications

INTEGER POWERS OF ANTI-BIDIAGONAL HANKEL MATRICES

Diagonalizing Matrices

Zeros and ratio asymptotics for matrix orthogonal polynomials

REAL ANALYSIS II HOMEWORK 3. Conway, Page 49

Katholieke Universiteit Leuven Department of Computer Science

Review of some mathematical tools

On some tridiagonal k-toeplitz matrices: Algebraic and analytical aspects. Applications

LINEAR ALGEBRA SUMMARY SHEET.

1. Introduction and notation

Constrained Leja points and the numerical solution of the constrained energy problem

ETNA Kent State University

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012

Math 102, Winter Final Exam Review. Chapter 1. Matrices and Gaussian Elimination

MATH 31 - ADDITIONAL PRACTICE PROBLEMS FOR FINAL

Multiple Orthogonal Polynomials

Spectral Theory of Orthogonal Polynomials

SPECTRAL THEOREM FOR COMPACT SELF-ADJOINT OPERATORS

Orthogonal Polynomials and Gaussian Quadrature

Szegő-Lobatto quadrature rules

ETNA Kent State University

Determinantal point processes and random matrix theory in a nutshell

Generating Function: Multiple Orthogonal Polynomials

New series expansions of the Gauss hypergeometric function

A NOTE ON RATIONAL OPERATOR MONOTONE FUNCTIONS. Masaru Nagisa. Received May 19, 2014 ; revised April 10, (Ax, x) ≥ 0 for all x ∈ C^n.

INVARIANT SUBSPACES FOR CERTAIN FINITE-RANK PERTURBATIONS OF DIAGONAL OPERATORS. Quanlei Fang and Jingbo Xia

Orthogonal polynomials

Elementary linear algebra

GENERALIZED STIELTJES POLYNOMIALS AND RATIONAL GAUSS-KRONROD QUADRATURE

Positive definite preserving linear transformations on symmetric matrix spaces

APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2.

Weighted Sums of Orthogonal Polynomials Related to Birth-Death Processes with Killing

TRUNCATED TOEPLITZ OPERATORS ON FINITE DIMENSIONAL SPACES

SZEGÖ ASYMPTOTICS OF EXTREMAL POLYNOMIALS ON THE SEGMENT [−1, +1]: THE CASE OF A MEASURE WITH FINITE DISCRETE PART

Linear Algebra: Matrix Eigenvalue Problems

Matrix methods for quadrature formulas on the unit circle. A survey

Gegenbauer Matrix Polynomials and Second Order Matrix. differential equations.

1 Last time: least-squares problems

Lecture 11. Linear systems: Cholesky method. Eigensystems: Terminology. Jacobi transformations QR transformation

Simultaneous Gaussian quadrature for Angelesco systems

Math Linear Algebra II. 1. Inner Products and Norms

Department of Applied Mathematics Faculty of EEMCS. University of Twente. Memorandum No Birth-death processes with killing

MAT Linear Algebra Collection of sample exams

Quadratures and integral transforms arising from generating functions

An Arnoldi Gram-Schmidt process and Hessenberg matrices for Orthonormal Polynomials

Chapter 3 Transformations

Finite-dimensional spaces. C n is the space of n-tuples x = (x 1,..., x n ) of complex numbers. It is a Hilbert space with the inner product

OPSF, Random Matrices and Riemann-Hilbert problems

ETNA Kent State University

The Finite Spectrum of Sturm-Liouville Operator With δ-interactions

ON SUM OF SQUARES DECOMPOSITION FOR A BIQUADRATIC MATRIX FUNCTION

The inverse of a tridiagonal matrix

OPUC, CMV MATRICES AND PERTURBATIONS OF MEASURES SUPPORTED ON THE UNIT CIRCLE

A Brief Outline of Math 355

ORTHOGONAL POLYNOMIALS WITH EXPONENTIALLY DECAYING RECURSION COEFFICIENTS

Applied Linear Algebra

Orthogonal matrix polynomials satisfying first order differential equations: a collection of instructive examples

October 25, 2013 INNER PRODUCT SPACES

Problems Session. Nikos Stylianopoulos University of Cyprus

Math 443 Differential Geometry Spring Handout 3: Bilinear and Quadratic Forms This handout should be read just before Chapter 4 of the textbook.

MATH 423 Linear Algebra II Lecture 33: Diagonalization of normal operators.

A new linear spectral transformation associated with derivatives of Dirac linear functionals

An Inverse Problem for the Matrix Schrödinger Equation

Notation. For any Lie group G, we set G 0 to be the connected component of the identity.

CHAPTER 6. Representations of compact groups

POLYNOMIAL EQUATIONS OVER MATRICES. Robert Lee Wilson. Here are two well-known facts about polynomial equations over the complex numbers

Iyad T. Abu-Jeib and Thomas S. Shores

Recurrence Relations and Fast Algorithms

MATH 240 Spring, Chapter 1: Linear Equations and Matrices

Numerical Linear Algebra Homework Assignment - Week 2

Control Systems. Linear Algebra topics. L. Lanari

Orthogonal Polynomial Ensembles

Quantum Computing Lecture 2. Review of Linear Algebra

Problem 1A. Find the volume of the solid given by x 2 + z 2 1, y 2 + z 2 1. (Hint: 1. Solution: The volume is 1. Problem 2A.

SPECTRAL THEOREM FOR SYMMETRIC OPERATORS WITH COMPACT RESOLVENT

Asymptotics of Orthogonal Polynomials on a System of Complex Arcs and Curves: The Case of a Measure with Denumerable Set of Mass Points off the System

Clifford Algebras and Spin Groups

Kaczmarz algorithm in Hilbert space

HOMEWORK PROBLEMS FROM STRANG S LINEAR ALGEBRA AND ITS APPLICATIONS (4TH EDITION)

Boolean Inner-Product Spaces and Boolean Matrices

forms Christopher Engström November 14, 2014 MAA704: Matrix factorization and canonical forms Matrix properties Matrix factorization Canonical forms

Linear Algebra and its Applications

Some notes about signals, orthogonal polynomials and linear algebra

1. General Vector Spaces

290 J.M. Carnicer, J.M. Pe~na basis (u 1 ; : : : ; u n ) consisting of minimally supported elements, yet also has a basis (v 1 ; : : : ; v n ) which f

Numerical Linear Algebra

33AH, WINTER 2018: STUDY GUIDE FOR FINAL EXAM

The Lanczos and conjugate gradient algorithms

G1110 & 852G1 Numerical Linear Algebra

Math 307 Learning Goals

Interpolation and Cubature at Geronimus Nodes Generated by Different Geronimus Polynomials

Problem 1A. Suppose that f is a continuous real function on [0, 1]. Prove that

Linear Algebra using Dirac Notation: Pt. 2

AMS526: Numerical Analysis I (Numerical Linear Algebra)

Linear Algebra Massoud Malek

Computation of Rational Szegő-Lobatto Quadrature Formulas

Lecture 7. Econ August 18

Exercise Sheet 1.

Transcription:

Electronic Transactions on Numerical Analysis. Volume 14, pp. 127-141, 2002. Copyright 2002, Kent State University. ISSN 1068-9613. ETNA (etna@mcs.kent.edu).

RECENT TRENDS ON ANALYTIC PROPERTIES OF MATRIX ORTHONORMAL POLYNOMIALS

F. MARCELLÁN AND H. O. YAKHLEF

The first author was supported by Ministerio de Ciencia y Tecnología (Dirección General de Investigación) of Spain under grant BHA2-26-C4-1. Received December 12, 2001. Accepted for publication May 21, 2002. Communicated by Sven Ehrich. Departamento de Matemáticas, Universidad Carlos III de Madrid, Spain (pacomarc@ing.uc3m.es). Departamento de Matemática Aplicada, Universidad de Granada, Spain (houlad@goliat.ugr.es).

Abstract. In this paper we give an overview of recent results on analytic properties of matrix orthonormal polynomials. We focus our attention on the distribution of their zeros as well as on the asymptotic behavior of such polynomials under some restrictions on the measure of orthogonality.

Key words. matrix orthogonal polynomials, zeros, asymptotic behavior

AMS subject classifications. 42C05, 15A15, 15A23

1. Introduction. Consider a $p \times p$ positive definite matrix of measures $W(x) = (W_{i,j}(x))_{i,j=1}^{p}$ supported on $\Omega$ ($\Omega = \mathbb{R}$ or $\Omega = \mathbb{T}$, the unit circle), i.e., for every Borel set $A \subset \Omega$ the numerical matrix $W(A) = (W_{i,j}(A))_{i,j=1}^{p}$ is positive semi-definite. Notice that the diagonal entries of $W$ are positive measures and the non-diagonal entries are complex measures with $W_{i,j} = \overline{W_{j,i}}$. For a positive definite matrix of measures $W$, the support of $W$ is the support of the trace measure $\tau(W) = \sum_{i=1}^{p} W_{i,i}$.

A matrix polynomial of degree $m$ is a mapping $P : \mathbb{C} \to \mathbb{C}^{(p,p)}$ such that $P(x) = M_m x^m + M_{m-1} x^{m-1} + \cdots + M_1 x + M_0$, where $(M_k)_{k=0}^{m} \subset \mathbb{C}^{(p,p)}$ and $M_m$ is different from the zero matrix. We denote $P^*(x) = M_m^* x^m + M_{m-1}^* x^{m-1} + \cdots + M_1^* x + M_0^*$.

Assuming that $\int_\Omega P(x)\, dW(x)\, P^*(x)$ is nonsingular for every matrix polynomial $P$ with nonsingular leading coefficient, we introduce an inner product in the linear space $\mathbb{C}^{(p,p)}[x]$ of matrix polynomials in the following way:

(1.1) $\langle P, Q \rangle_L \stackrel{\mathrm{def}}{=} \int_\Omega P(x)\, dW(x)\, Q^*(x).$

Using the Gram-Schmidt orthonormalization process for the canonical sequence $\{x^n I_p\}_{n=0}^{\infty}$, we obtain many sequences of matrix polynomials which are orthonormal with respect to (1.1). Indeed, if $(P_n)$ is a sequence of matrix polynomials such that

(1.2) $\langle P_n, P_m \rangle_L = \delta_{n,m} I_p,$

then for every sequence $(U_n)$ of unitary matrices the sequence $(R_n)$ with $R_n = U_n P_n$ satisfies $\langle R_n, R_m \rangle_L = \delta_{n,m} I_p$. Such orthonormal polynomials are interesting not only from a theoretical point of view but also because of their applications in many scientific domains.
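The Gram-Schmidt construction just described is easy to experiment with numerically. The sketch below is only an illustration, not part of the paper: it assumes a discrete matrix measure (finitely many nodes $x_k$ with positive definite weight matrices $W_k$, so that the integral in (1.1) becomes a finite sum), and the function names and the discretization are our own choices.

```python
import numpy as np

def eval_poly(coeffs, x):
    # P(x) = sum_j coeffs[j] * x**j, each coeffs[j] a p x p matrix
    return sum(c * x**j for j, c in enumerate(coeffs))

def inner(P, Q, nodes, weights):
    # discrete analogue of (1.1): <P, Q>_L = sum_k P(x_k) W_k Q(x_k)^*
    return sum(eval_poly(P, x) @ W @ eval_poly(Q, x).conj().T
               for x, W in zip(nodes, weights))

def matrix_gram_schmidt(nodes, weights, N, p):
    """Orthonormalize x^n I_p, n = 0..N, against the discrete matrix measure."""
    basis = []
    for n in range(N + 1):
        # start from the monomial x^n I_p
        V = [np.zeros((p, p), dtype=complex) for _ in range(n + 1)]
        V[n] = np.eye(p, dtype=complex)
        for Pj in basis:
            C = inner(V, Pj, nodes, weights)           # <x^n I_p, P_j>_L
            for j, cj in enumerate(Pj):
                V[j] = V[j] - C @ cj                   # subtract (C P_j)(x)
        L = np.linalg.cholesky(inner(V, V, nodes, weights))
        basis.append([np.linalg.solve(L, v) for v in V])   # P_n = L^{-1} V
    return basis

# quick check: <P_n, P_m>_L should be delta_{n,m} I_p
p, N = 2, 4
nodes = np.linspace(-1.0, 1.0, 40)
weights = [np.eye(p) + 0.3 * np.array([[0.0, 1.0], [1.0, 0.0]]) for _ in nodes]
B = matrix_gram_schmidt(nodes, weights, N, p)
print(np.round(inner(B[2], B[2], nodes, weights), 6))
print(np.round(inner(B[2], B[1], nodes, weights), 6))
```

The normalization step uses the Cholesky factor of $\langle V, V\rangle_L$, so that $P_n = L^{-1}V$ satisfies $\langle P_n, P_n\rangle_L = I_p$; any unitary left factor would do equally well, which is exactly the non-uniqueness described above.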

Orthogonal matrix polynomials on the real line appear in the Lanczos method for block matrices [11, 12], in the spectral theory of doubly infinite Jacobi matrices [18], in the analysis of sequences of polynomials satisfying higher order recurrence relations [9], and in rational approximation and system theory [10]. Orthogonal matrix polynomials on the unit circle are used in the inversion of finite block Toeplitz matrices, which arise naturally in linear estimation theory; the matrix to be inverted is the covariance matrix of a multivariate stationary stochastic process [14]. Furthermore, they appear in the analysis of sequences of polynomials orthogonal with respect to a scalar measure supported on equipotential curves in the complex plane [13]. Finally, another application, in time series analysis, is the frequency estimation of a stationary harmonic process $(X_n)$, i.e.,
$$X_n = \sum_{k=1}^{n} \left[ A_k \cos n w_k + B_k \sin n w_k \right] + Z_n,$$
where $(A_k)$, $(B_k)$ are matrices of dimension $p$ and $Z_n$ is a white noise. The frequencies $(w_k)_{k=1}^{n}$ are unknown and must be estimated from the data. They can be given in terms of the zeros of matrix orthogonal polynomials associated with some purely discrete measure supported on the unit circle [18].

The aim of the present contribution is to give a framework for the subject and to summarize some recent contributions, focused on two aspects:
1. The asymptotic behavior of sequences of matrix orthonormal polynomials in several cases (real line and unit circle, respectively).
2. The distribution of the zeros of such polynomials, as well as their connection with matrix quadrature formulas.

These questions have attracted during the last decade the interest of several research groups. A big effort was made on the analytic theory by A. J. Durán and coworkers at the Universidad de Sevilla, and by W. Van Assche at the Katholieke Universiteit Leuven, among others. We hope that our work will be a useful approach for beginners, following the nice surveys [15, 18].

The structure of the paper is the following. In Section 2 we introduce matrix orthogonal polynomials on the real line, and we consider the three-term recurrence relation which characterizes them. In Section 3 we give some basic results about the zeros of such polynomials, and we explain the analog of the Gaussian quadrature formulas in the matrix case. In Section 4 the matrix Nevai class is studied, and relative asymptotics for the corresponding sequences of matrix orthonormal polynomials are discussed. Furthermore, the analysis of perturbations in the Nevai class by the addition of a discrete measure supported on a singleton is presented. In Section 5 we analyze matrix orthonormal polynomials on the unit circle. We focus our attention on the study of their zeros and, as an application, we find quadrature formulas extending the well known results of the scalar case. Finally, in Section 6 we present the connection between matrices of measures supported on the interval $[-1, 1]$ and matrices of measures supported on the unit circle.

2. Orthogonal matrix polynomials on the real line. Let $W$ be a matrix of measures supported on the real line. As in the scalar case, the shift operator $H : \mathbb{C}^{(p,p)}[x] \to \mathbb{C}^{(p,p)}[x]$, $H[P](x) = x P(x)$, is symmetric with respect to the inner product (1.1). Thus, for every sequence $(P_n)$ of matrix orthonormal polynomials with respect to (1.1), we get a three-term recurrence relation

(2.1) $z\, P_n(z; W) = D_{n+1}(W)\, P_{n+1}(z; W) + E_n(W)\, P_n(z; W) + D_n^*(W)\, P_{n-1}(z; W), \qquad n \geq 0,$

where $E_n$ is a Hermitian matrix, and
$$E_n = \langle x P_n, P_n \rangle_L = \langle P_n, x P_n \rangle_L = E_n^*, \quad n \geq 0, \qquad D_n = \langle x P_{n-1}, P_n \rangle_L, \quad n \geq 1.$$
Notice that if the leading coefficient $A_n$ of $P_n$ is nonsingular, then $D_n = A_{n-1} A_n^{-1}$, i.e., $D_n$ is a nonsingular matrix. On the other hand, since $(U_n P_n)$ is another sequence of matrix orthonormal polynomials when the $(U_n)$ are unitary matrices, in such a case the corresponding coefficients in the three-term recurrence relation are $\widetilde{E}_n = U_n E_n U_n^*$ and $\widetilde{D}_n = U_{n-1} D_n U_n^*$.

Conversely, given two sequences of matrices $(D_n)$ and $(E_n)$ of dimension $p$, such that the $(D_n)$ are nonsingular matrices and the $(E_n)$ are Hermitian matrices, there exists a positive definite matrix of measures $W$ such that the matrix polynomials defined by the recurrence relation

(2.2) $x\, Y_n = D_{n+1}\, Y_{n+1} + E_n\, Y_n + D_n^*\, Y_{n-1}, \qquad n \geq 0,$

with the initial conditions $Y_{-1} = 0$ and $Y_0 = I_p$, constitute a sequence of matrix polynomials orthonormal with respect to the inner product (1.1) associated with $W$. In fact, $W$ is related to the spectral resolution of the identity for the operator $H$ defined as above (cf. [1]). This result constitutes the matrix analog of Favard's theorem in the scalar case (cf. [2, 4]).

A second polynomial solution of (2.2) is associated with the initial conditions $Y_{-1} = D_1^{-1}$ and $Y_0 = 0$. If we denote it by $(Q_n)$, we get $\deg Q_n = n - 1$. In fact,

(2.3) $Q_n(z; W) = \int_{\mathbb{R}} \frac{P_n(z; W) - P_n(s; W)}{z - s}\, dW(s).$

Such a sequence of matrix polynomials is called the sequence of matrix orthonormal polynomials of the second kind with respect to the matrix of measures $W$, where we assume $\int_{\mathbb{R}} dW(s) = I_p$. From (2.3) we get

(2.4) $Q_n(z; W) = P_n(z; W) \int_{\mathbb{R}} \frac{dW(s)}{z - s} - \int_{\mathbb{R}} \frac{P_n(s; W)}{z - s}\, dW(s).$

If the operator $H$ is bounded, then the function
$$F(z; W) \stackrel{\mathrm{def}}{=} \int_{\mathbb{R}} \frac{dW(s)}{z - s}$$
is analytic outside the spectrum of $H$. In a neighborhood of infinity we get $F(z; W) = \sum_{k=0}^{\infty} \frac{S_k}{z^{k+1}}$, where $S_k = \int_{\mathbb{R}} s^k\, dW(s)$ are the moments of the matrix of measures $W$. Thus (2.4) yields
$$P_n(z; W)\, F(z; W) - Q_n(z; W) = \frac{A_{n,1}}{z^{n+1}} + \cdots$$
The regular rational matrix function $\pi_n(z) = P_n^{-1}(z; W)\, Q_n(z; W)$ is said to be the $n$th Padé fraction for $F(z; W)$. This constitutes one of the main applications of matrix orthonormal polynomials in approximation theory. The connection with rational matrix approximation and matrix continued fractions follows immediately [1].

As in the scalar case, we introduce the $n$th kernel polynomial associated with the matrix of measures $W$.

DEFINITION 2.1. The matrix polynomial $K_n(x, y; W) \stackrel{\mathrm{def}}{=} \sum_{j=0}^{n} P_j^*(y; W)\, P_j(x; W)$ is said to be the $n$th kernel polynomial associated with $W$.

PROPOSITION 2.2 (Reproducing property). $\langle Q(x), K_n(x, y; W) \rangle_L = Q(y)$ for every matrix polynomial $Q$ of degree less than or equal to $n$.

Notice that the $n$th kernel is the same for every sequence of matrix orthonormal polynomials associated with $W$. In fact, if $R_n = U_n P_n$ with $(U_n)$ unitary matrices, then
$$K_n(x, y; W) = \sum_{j=0}^{n} R_j^*(y; W)\, R_j(x; W) = \sum_{j=0}^{n} P_j^*(y; W)\, P_j(x; W).$$

3. Zeros and quadrature formulas. A point $x_0$ is said to be a zero of the matrix polynomial $P(x)$ if $\det P(x_0) = 0$. If $P(x)$ is a $p \times p$ matrix polynomial and $x_0$ is a zero of $P(x)$, we write
$$\mathcal{N}(x_0, P) = \left\{ v \in \mathbb{C}^{(p,1)} : P(x_0)\, v = 0 \right\},$$
i.e., the null space of the singular matrix $P(x_0)$.

LEMMA 3.1 ([7]). If $\dim \mathcal{N}(x_0, P) = n_0$, then $[\mathrm{Adj}\, P(x)]^{(l)}(x_0) = 0$ for $l = 0, 1, \ldots, n_0 - 2$, and $x_0$ is a zero of $P(x)$ of multiplicity at least $n_0$. Here $\mathrm{Adj}\, P(x)$ is the matrix such that $P(x)\, \mathrm{Adj}\, P(x) = \mathrm{Adj}\, P(x)\, P(x) = (\det P(x))\, I_p$.

LEMMA 3.2 ([7]). For $n \in \mathbb{N}$, the zeros of the matrix polynomial $P_n(x)$ are the same as those of the polynomial $\det(x I_{np} - J_{np})$ (with the same multiplicity), where $J_{np}$ is the Jacobi matrix of dimension $np$, i.e., they are the eigenvalues of the matrix
$$J_{np} = \begin{pmatrix} E_0 & D_1 & & & \\ D_1^* & E_1 & D_2 & & \\ & \ddots & \ddots & \ddots & \\ & & D_{n-1}^* & E_{n-1} \end{pmatrix} \in \mathbb{C}^{(np,np)}.$$

Notice that if $v$ is an eigenvector corresponding to the eigenvalue $x_0$, writing $v$ as a block column vector
$$v = \begin{pmatrix} v_0 \\ v_1 \\ \vdots \\ v_{n-1} \end{pmatrix} \in \mathbb{C}^{(np,1)},$$
where $v_i \in \mathbb{C}^{(p,1)}$, the equation $J_{np}\, v = x_0\, v$ reads
$$v_1 = P_1(x_0)\, v_0, \quad v_2 = P_2(x_0)\, v_0, \quad \ldots, \quad v_{n-1} = P_{n-1}(x_0)\, v_0, \quad 0 = P_n(x_0)\, v_0,$$

or, equivalently,
$$v = \begin{pmatrix} P_0(x_0) \\ P_1(x_0) \\ \vdots \\ P_{n-1}(x_0) \end{pmatrix} v_0.$$
In other words, there exists a bijection between $\mathcal{N}(x_0, P_n)$ and the subspace of eigenvectors of the matrix $J_{np}$ associated with the eigenvalue $x_0$.

THEOREM 3.3 ([7]).
1. The zeros of $P_n$ have multiplicity less than or equal to $p$. All the zeros are real.
2. If $x_0$ is a zero of $P_n$ with multiplicity $m$, then $\mathrm{rank}\, P_n(x_0) = p - m$.
3. If we write $x_{n,k}$ ($k = 1, \ldots, np$) for the zeros of $P_n$ in increasing order, taking into account their multiplicities, then the following interlacing property holds: $x_{n+1,k} \leq x_{n,k} \leq x_{n+1,k+p}$ for $k = 1, 2, \ldots, np$.
4. If $x_0$ is both a zero of $P_n$ and of $P_{n+1}$, then $\mathcal{N}(x_0, P_n) \cap \mathcal{N}(x_0, P_{n+1}) = \{0\}$.
5. If $x_{n,k}$ is a zero of $P_n$ with multiplicity $p$, then it cannot be a zero of $P_{n+1}$.

We will denote by $Z_n(W) \stackrel{\mathrm{def}}{=} \{x_{n,k} : k = 1, \ldots, np,\ \det P_n(x_{n,k}; W) = 0\}$ the set of zeros of the orthonormal matrix polynomial $P_n(\cdot; W)$. Let $M_N = \bigcup_{n \geq N} Z_n(W)$ and $\Gamma = \bigcap_{N \geq 1} \overline{M_N}$; then $\mathrm{supp}(dW) \subset \Gamma$. We will denote by $\hat{\Gamma}$ the smallest closed interval which contains the support of $dW$.

As in the scalar case, we can deduce quadrature formulas for matrix polynomials. The next theorem shows how to compute the quadrature coefficients (the matrix Christoffel constants) by means of the eigensystem of the Jacobi matrix $J_{np}$. Let $v_{i,j}$ ($j = 1, 2, \ldots, m_i$) be the eigenvectors of the matrix $J_{np}$ associated with the eigenvalue $x_i$ ($i = 1, \ldots, k$) of multiplicity $m_i$. Let
$$\Lambda_i = v_i^{(0)}\, M_i^{-1}\, v_i^{(0)*} \in \mathbb{C}^{(p,p)}, \qquad v_i^{(0)} = \left( v_{i,1}^{(0)}\ v_{i,2}^{(0)}\ \cdots\ v_{i,m_i}^{(0)} \right),$$
where $v_{i,s}^{(0)} \in \mathbb{C}^{(p,1)}$ ($s = 1, \ldots, m_i$) is the vector consisting of the first $p$ components of $v_{i,s}$, and $M_i = v_i^{(0)*}\, K_{n-1}(x_i, x_i)\, v_i^{(0)}$. Then

THEOREM 3.4 ([17]). The quadrature formula
$$I_n(P, Q) \stackrel{\mathrm{def}}{=} \int_a^b P(x)\, dW(x)\, Q^*(x) = \sum_{i=1}^{k} P(x_i)\, \Lambda_i\, Q^*(x_i)$$
is exact for matrix polynomials $P$ and $Q$ with $\deg P + \deg Q \leq 2n - 1$. Here $k$ denotes the number of different zeros of $P_n$.
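Lemma 3.2 and Theorem 3.4 suggest a simple numerical procedure: assemble the block Jacobi matrix $J_{np}$ from the recurrence coefficients and read the zeros of $P_n$ (and, via the eigenvectors, the Christoffel blocks) off its eigensystem. The following sketch, our own illustration and not part of the paper, assumes the coefficients $D_k$, $E_k$ are available as NumPy arrays and computes only the zeros.

```python
import numpy as np

def block_jacobi(D, E):
    """Assemble J_{np} of Lemma 3.2: E_0..E_{n-1} on the block diagonal,
    D_1..D_{n-1} on the superdiagonal and their adjoints on the subdiagonal.
    D is the list [D_1, ..., D_{n-1}], E the list [E_0, ..., E_{n-1}]."""
    n, p = len(E), E[0].shape[0]
    J = np.zeros((n * p, n * p), dtype=complex)
    for k in range(n):
        J[k*p:(k+1)*p, k*p:(k+1)*p] = E[k]
    for k, Dk in enumerate(D):
        J[k*p:(k+1)*p, (k+1)*p:(k+2)*p] = Dk
        J[(k+1)*p:(k+2)*p, k*p:(k+1)*p] = Dk.conj().T
    return J

def zeros_of_Pn(D, E):
    # zeros of P_n = eigenvalues of J_{np} (Lemma 3.2); real by Theorem 3.3
    return np.linalg.eigvalsh(block_jacobi(D, E))

# example: constant coefficients D_k = 0.5 I_2, E_k = 0 (the constant-coefficient
# case studied in Section 4 below); each zero cos(k*pi/(n+1)) appears with
# multiplicity p = 2, as in the scalar Chebyshev case
p, n = 2, 5
D = [0.5 * np.eye(p) for _ in range(n - 1)]
E = [np.zeros((p, p)) for _ in range(n)]
xs = zeros_of_Pn(D, E)
print(np.round(xs, 4))
print(np.round(np.sort(np.cos(np.arange(1, n + 1) * np.pi / (n + 1))), 4))
```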

The quadrature formula of Theorem 3.4 yields, as in the scalar case,
$$I_n(P, I_p) = \sum_{i=1}^{k} P(x_i)\, \Lambda_i, \qquad \deg P \leq 2n - 1.$$
An alternative approach is given in the following sense.

THEOREM 3.5 ([5]). Let $x_{n,i}$ ($i = 1, \ldots, k$) be the different zeros of the matrix polynomial $P_n$, with multiplicities $m_i$, respectively. Let
$$\Gamma_{n,i} \stackrel{\mathrm{def}}{=} \frac{1}{[\det P_n(x)]^{(m_i)}(x_{n,i})}\, [\mathrm{Adj}\, P_n(x)]^{(m_i - 1)}(x_{n,i})\, Q_n(x_{n,i}).$$
Then
1. For every matrix polynomial $P$ with $\deg P \leq 2n - 1$ we get $\int_{\mathbb{R}} P(x)\, dW(x) = \sum_{i=1}^{k} P(x_{n,i})\, \Gamma_{n,i}$.
2. $\{\Gamma_{n,i}\}_{i=1}^{k}$ are positive semi-definite matrices of rank $m_i$.

Using the above quadrature formula, we get the following matrix analog of Markov's theorem.

THEOREM 3.6 ([5]). Assume the positive definite matrix of measures $W$ is determinate, i.e., no other positive definite matrix of measures has the same moments as those of $W$. Then
$$\lim_{n \to \infty} P_n^{-1}(z; W)\, Q_n(z; W) = \int_{\mathbb{R}} \frac{dW(t)}{z - t} \stackrel{\mathrm{def}}{=} F(z; W)$$
locally uniformly in $\mathbb{C} \setminus \Gamma$.

$F(z; W)$ is called the Stieltjes (Markov) function associated with the matrix of measures $W$. The above result means that, if $W$ is determinate, the $n$th Padé fraction of $F(z; W)$ converges locally uniformly to $F(z; W)$ in $\mathbb{C} \setminus \Gamma$.

4. The Nevai class. We will introduce an analog of the so-called Nevai class for matrix orthonormal polynomials.

DEFINITION 4.1. Given two matrices $D$ and $E$, where $E$ is Hermitian, a sequence of matrix orthonormal polynomials $(P_n)$ satisfying (2.1) belongs to the matrix Nevai class $M(D, E)$ if $\lim_{n\to\infty} D_n(W) = D$ and $\lim_{n\to\infty} E_n(W) = E$, respectively. A positive definite matrix of measures $W$ belongs to the Nevai class $M(D, E)$ if some of the corresponding sequences of matrix orthonormal polynomials belongs to $M(D, E)$.

Notice that a positive definite matrix of measures can belong to several Nevai classes because of the non-uniqueness of the corresponding sequences of matrix orthonormal polynomials.

If $D$ is a nonsingular matrix, we can introduce the sequence of matrix polynomials $\{U_n(z; D, E)\}$ defined by the recurrence formula
$$z\, U_n(z) = D\, U_{n+1}(z) + E\, U_n(z) + D^*\, U_{n-1}(z), \qquad n \geq 0,$$
with the initial conditions $U_0(z) = I_p$, $U_{-1}(z) = 0$. According to Favard's theorem, this sequence is orthonormal with respect to a positive definite matrix of measures $W_{D,E}$. Notice that these are the matrix analogs of the Chebyshev polynomials of the second kind.
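For constant coefficients the Stieltjes function of $W_{D,E}$ can be computed numerically. Theorem 4.3 below states that $F(z; D, E)$ solves the quadratic matrix equation $D^* X D X + (E - zI_p)X + I_p = 0$, which can be rewritten as $X = [(zI_p - E) - D^* X D]^{-1}$ and so suggests a fixed-point iteration. The sketch below is only an illustration under assumptions of ours (convergence is simply taken for granted for $z$ well away from the support; the residual check and the scalar comparison are our own).

```python
import numpy as np

def stieltjes_F(z, D, E, iters=200):
    """Fixed-point iteration for the quadratic matrix equation of Theorem 4.3,
    rewritten as X = [(z I - E) - D^* X D]^{-1}; start from X = 0."""
    p = D.shape[0]
    X = np.zeros((p, p), dtype=complex)
    for _ in range(iters):
        X = np.linalg.inv(z * np.eye(p) - E - D.conj().T @ X @ D)
    return X

p = 2
D = 0.5 * np.eye(p)           # the "matrix Chebyshev of the second kind" case
E = np.zeros((p, p))
z = 2.0 + 1.0j
F = stieltjes_F(z, D, E)

# residual of D^* F D F + (E - z I) F + I = 0
res = D.conj().T @ F @ D @ F + (E - z * np.eye(p)) @ F + np.eye(p)
print(np.linalg.norm(res))                        # should be ~ 0
# scalar check: for D = I/2, E = 0 one expects F = 2 (z - sqrt(z^2 - 1)) I
print(F[0, 0], 2 * (z - np.sqrt(z**2 - 1)))
```

The iteration selects the solution that behaves like $I_p/z$ at infinity, which is the branch relevant for a Stieltjes transform.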

REMARK 4.2. The Jacobi matrix associated with a sequence of matrix orthonormal polynomials in the Nevai class is a compact perturbation of the Jacobi matrix
$$J_0 \stackrel{\mathrm{def}}{=} \begin{pmatrix} E & D & & \\ D^* & E & D & \\ & D^* & E & \ddots \\ & & \ddots & \ddots \end{pmatrix}.$$

If $F(z; D, E)$ is the Stieltjes function associated with the matrix of measures $W_{D,E}$ and $D$ is a nonsingular matrix, we get

THEOREM 4.3 ([6]). Let $(P_n) \in M(D, E)$ with $D$ a nonsingular matrix. Then
$$\lim_{n\to\infty} P_{n-1}(z)\, P_n^{-1}(z)\, D_n^{-1} = F(z; D, E)$$
locally uniformly in $\mathbb{C} \setminus \Gamma$. Furthermore, $F(z; D, E)$ is a solution of the quadratic matrix equation
$$D^* X D X + (E - z I_p) X + I_p = 0.$$

In particular, if $D$ is a positive definite matrix, we can give an explicit expression for $F(z; D, E)$. Let $S(z) = \frac{1}{2} D^{-1/2} (z I_p - E) D^{-1/2}$; then
$$F(z; D, E) = D^{-1/2} \left[ S(z) - \left( S^2(z) - I_p \right)^{1/2} \right] D^{-1/2}.$$
Furthermore, the matrix $S(z)$ is diagonalizable up to a finite set of complex numbers $z$, and so is $D^{1/2} F(z; D, E) D^{1/2}$. If $a$ is an eigenvalue of $S(z)$, then $a - (a^2 - 1)^{1/2}$ is an eigenvalue of $D^{1/2} F(z; D, E) D^{1/2}$, assuming that $|a - (a^2 - 1)^{1/2}| < 1$ for $a \in \mathbb{C} \setminus [-1, 1]$, which guarantees the existence of an appropriate square root.

Since, for $x \in \mathbb{R}$, $I_p - S^2(x)$ is Hermitian, we can write $I_p - S^2(x) = U(x)\, N(x)\, U^*(x)$, where $N(x)$ is a diagonal matrix with entries $\{d_{i,i}\}_{i=1}^{p}$ and $U(x)$ is a unitary matrix. Then the matrix weight $W_{D,E}(x)$, $x \in \mathbb{R}$, is
$$dW_{D,E}(x) = \frac{1}{\pi}\, D^{-1/2}\, U(x) \left[ N^+(x) \right]^{1/2} U^*(x)\, D^{-1/2}\, dx,$$
where $N^+(x)$ is the diagonal matrix with entries $d^+_{i,i}(x) \stackrel{\mathrm{def}}{=} \max\{d_{i,i}(x), 0\}$. The support of $W_{D,E}$ is then the set of real numbers $\{y \in \mathbb{R} : S(y)\ \text{has an eigenvalue in}\ [-1, 1]\}$. In fact, $W_{D,E}$ is absolutely continuous with respect to the Lebesgue measure multiplied by the identity matrix, and the support is the finite union of at most $p$ disjoint and bounded intervals.

If $D$ is Hermitian, an example is presented in [6] where $W_{D,E}$ is absolutely continuous with respect to the Lebesgue measure times the identity matrix but has an unbounded Radon-Nikodym derivative. In the general case of a nonsingular $D$, nothing is known about the support of $W_{D,E}$. Furthermore, nothing is known about the absolute continuity of the

entries with respect to the Lebesgue measure, and Dirac deltas can appear. This last case is one of the reasons for the analysis of perturbations of matrices of measures in the Nevai class by the addition of Dirac deltas.

Let $W$ be a matrix of measures supported on the real line, $M$ a positive definite matrix of dimension $p$, and $c \in \mathbb{R} \setminus \Gamma$. Consider the matrix of measures $\widetilde{W}$ such that
$$d\widetilde{W}(x) = dW(x) + M\, \delta(x - c).$$
If $(P_n(x; W))$ and $(P_n(x; \widetilde{W}))$ are two sequences of matrix orthonormal polynomials with respect to $W$ and $\widetilde{W}$, respectively, satisfying a three-term recurrence relation such that $D = \lim_{n\to\infty} D_n(W)$ is nonsingular, then

THEOREM 4.4 ([22], [21]). There exists a sequence of matrix orthonormal polynomials $(P_n(x; \widetilde{W}))$ such that
1. $\displaystyle\lim_{n\to\infty} \left[A_n(\widetilde{W})\, A_n(W)^{-1}\right]^* \left[A_n(\widetilde{W})\, A_n(W)^{-1}\right] = I_p + F(c; D, E)\, \left[F'(c; D, E)\right]^{-1} F(c; D, E).$
2. If $\Lambda(c)\,\Lambda^*(c)$ is the Cholesky factorization of the positive definite matrix given in the right hand side of the above expression, then
$$\lim_{n\to\infty} P_n(x, \widetilde{W})\, P_n^{-1}(x, W) = \left\{ \Lambda^{-1}(c) + \frac{1}{c - x}\left[\Lambda^*(c) - \Lambda^{-1}(c)\right] \right\} \left\{ F'(c; D, E)\, F^{-1}(c; D, E) \right\}$$
locally uniformly in $\mathbb{R} \setminus \{\hat{\Gamma} \cup \{c\}\}$.
3. $(P_n(x; \widetilde{W}))$ belongs to the matrix Nevai class $M(\widetilde{D}, \widetilde{E})$ with
$$\widetilde{D} = \Lambda^*(c)\, D\, \Lambda^*(c),$$
$$\widetilde{E} = \Lambda^*(c)\, E\, \Lambda^*(c) + \Lambda^*(c)\left\{ D^*\left[\Lambda^*(c)\Lambda^{-1}(c) - I_p\right] D\, F(c; D, E) - \left[\Lambda^*(c)\Lambda^{-1}(c) - I_p\right] D^*\, F(c; D, E)\, D \right\}\Lambda^*(c).$$

When $D$ is Hermitian and nonsingular, we associate with the Nevai class $M(D, E)$ the sequence $\{T_n(z; D, E)\}$ of matrix orthonormal polynomials defined by the recurrence formula
$$z\, T_n(z) = D\, T_{n+1}(z) + E\, T_n(z) + D\, T_{n-1}(z), \qquad n \geq 2,$$
$$z\, T_1(z) = D\, T_2(z) + E\, T_1(z) + \sqrt{2}\, D\, T_0(z),$$
with the initial conditions $T_0(z) = I_p$, $T_1(z) = (1/\sqrt{2})\, D^{-1}(z I_p - E)$. Notice that $\{T_n(z; D, E)\}$ is orthonormal with respect to a positive definite matrix of measures that we denote $V_{D,E}$. These are the matrix analogues of the orthonormal Chebyshev polynomials of the first kind and, in fact, as in the scalar case, the sequence of associated polynomials of the first kind for our sequence $(T_n(z))$ is $(1/\sqrt{2})\, U_{n-1}(z; D, E)\, D^{-1}$, $n \geq 1$ (cf. [8]).

If we denote
$$\sigma_n(x) = \frac{1}{np} \sum_{j=1}^{k} m_j\, \delta(x - x_{n,j}),$$

where $x_{n,j}$ ($j = 1, \ldots, k$) denotes, as in Theorem 3.3, the set of zeros of a sequence $(P_n)$ of matrix orthonormal polynomials with respect to a matrix of measures $W$, and $m_j$ is the multiplicity of $x_{n,j}$, then the Nevai class $M(D, E)$ has the following zero asymptotic behavior.

THEOREM 4.5 ([8]). Let $(P_n)$ be a sequence of matrix orthonormal polynomials in the Nevai class $M(D, E)$. Then there exists a positive definite matrix of measures $\mu$ such that the sequence of discrete matrices of measures $(\mu_n)$,
$$\mu_n(x) = \frac{1}{np} \sum_{j=1}^{k} \left( \sum_{i=0}^{n-1} P_i(x_{n,j})\, \Gamma_{n,j}\, P_i^*(x_{n,j}) \right) \delta(x - x_{n,j}),$$
converges in the $*$-weak topology to $\mu$. Furthermore, $\sigma_n$ converges to $\tau(\mu)$ in the same topology.

Notice that if $D$ is a Hermitian and nonsingular matrix, then $\mu$ can be given explicitly (cf. [8]) by $\mu = \frac{1}{p}\, V_{D,E}$.

5. Orthogonal matrix polynomials on the unit circle. Let $W$ be a matrix of measures supported on the unit circle $\mathbb{T}$. As in the scalar case, the shift operator $H : \mathbb{C}^{(p,p)}[x] \to \mathbb{C}^{(p,p)}[x]$, $H[P](x) = x P(x)$, is a unitary operator with respect to the inner product

(5.1) $\langle P, Q \rangle_L \stackrel{\mathrm{def}}{=} \frac{1}{2\pi} \int_0^{2\pi} P(e^{i\theta})\, dW(\theta)\, Q^*(e^{i\theta}).$

Let $(\Phi_n(\cdot; W))$ be a sequence of matrix orthonormal polynomials with respect to (5.1), i.e., $\langle \Phi_n, \Phi_m \rangle_L = \delta_{n,m} I_p$. Notice that $(U_n \Phi_n(\cdot; W))$ is a sequence of matrix polynomials orthonormal with respect to (5.1) whenever $(U_n)$ is a sequence of unitary matrices. Furthermore, taking into account the polar decomposition of the leading coefficient of $\Phi_n$, we can assume that such a matrix coefficient is a positive definite matrix, and thus we can choose this normalization in order to have uniqueness for our sequence of matrix orthonormal polynomials [1, page 333]. For the sake of simplicity we will assume such a condition. On the other hand, in (5.1) we consider the measure $\frac{1}{2\pi}\, dW(\theta)$ so as to have a probability measure, i.e., $\frac{1}{2\pi}\int_0^{2\pi} dW(\theta) = I_p$.

We also introduce the reversed polynomial $\widetilde{P}(z) = z^n\, P^*(1/\bar{z})$ for every polynomial $P \in \mathbb{C}^{(p,p)}[x]$ with $\deg P = n$. This means that, for $P(z) = \sum_{k=0}^{n} D_{n,k} z^k$, $\widetilde{P}(z) = \sum_{k=0}^{n} D_{n,n-k}^* z^k$. Reversed polynomials will play an important role in our presentation.

As in the case of the real line (cf. Section 2, Definition 2.1), we can consider the sequence of matrix polynomials $(K_n)$,
$$K_n(x, y; W) = \sum_{j=0}^{n} \Phi_j^*(y; W)\, \Phi_j(x; W),$$
the so-called $n$th kernel polynomial associated with the matrix of measures $W$.

Next, we introduce a sequence $(\Psi_n(\cdot; W))$ of matrix polynomials such that
$$\frac{1}{2\pi} \int_0^{2\pi} \Psi_n^*(e^{i\theta})\, dW(\theta)\, \Psi_m(e^{i\theta}) = \delta_{n,m} I_p.$$
The sequence $(\Psi_n(\cdot; W))$ is said to be a right orthonormal sequence of matrix polynomials with respect to $W$. Notice that in the real case, right orthonormal polynomials are related to the left, or standard, sequence via the transposed coefficients.
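The reversed polynomial is a purely coefficient-level operation, which the following small sketch (our own illustration, with an ad hoc numerical check) makes explicit: reverse the order of the matrix coefficients and take adjoints, so that on the unit circle $\widetilde{P}(z) = z^n\,[P(z)]^*$.

```python
import numpy as np

def reversed_poly(coeffs):
    # P(z) = sum_k D_k z^k  ->  P~(z) = sum_k D_{n-k}^* z^k
    return [c.conj().T for c in coeffs[::-1]]

def eval_poly(coeffs, z):
    return sum(c * z**k for k, c in enumerate(coeffs))

rng = np.random.default_rng(0)
P = [rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2)) for _ in range(4)]  # degree 3
z = np.exp(1j * 0.7)                                   # a point on the unit circle
lhs = eval_poly(reversed_poly(P), z)
rhs = z**3 * eval_poly(P, z).conj().T
print(np.allclose(lhs, rhs))                           # True
```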

As an analog of the backward and forward recurrence relations of the scalar case [19], we can deduce two mixed recurrence relations in which left and right matrix orthonormal polynomials are involved. Let

(5.2) $\Phi_n(z) = \sum_{j=0}^{n} A_{n,j}\, z^j, \qquad \Psi_n(z) = \sum_{j=0}^{n} B_{n,j}\, z^j.$

Since

(5.3) $\displaystyle\frac{1}{2\pi}\int_0^{2\pi} \widetilde{\Psi}_n(z)\, dW(\theta)\, \Phi_n^*(z) = \frac{1}{2\pi}\int_0^{2\pi} \Psi_n^*(z)\, dW(\theta)\, \widetilde{\Phi}_n(z) = \sum_{j,k=0}^{n} B_{n,n-j}^*\, C_{j-k}\, A_{n,k}^*,$

where $z = e^{i\theta}$ and $C_l = \frac{1}{2\pi}\int_0^{2\pi} e^{il\theta}\, dW(\theta)$, then, taking into account that the leading coefficients of the polynomials $\Phi_n$ and $\Psi_n$ are nonsingular matrices, as well as the orthonormality conditions, we get
$$B_{n,n}^{-1}\, A_{n,0}^* = B_{n,0}^*\, A_{n,n}^{-1}.$$
We can therefore introduce the reflection coefficients $H_n$ in such a way that
$$H_n = A_{n,n}^{-1}\, B_{n,0} = A_{n,0}\, B_{n,n}^{-1},$$
$$(I_p - H_n^* H_n)^{1/2} = B_{n,n}^{-1}\, B_{n-1,n-1} = B_{n-1,n-1}\, B_{n,n}^{-1}, \qquad (I_p - H_n H_n^*)^{1/2} = A_{n,n}^{-1}\, A_{n-1,n-1} = A_{n-1,n-1}\, A_{n,n}^{-1}.$$
Since we have assumed that the leading coefficients are positive definite matrices, we have $\|H_n\|_2 < 1$. Thus, the recurrence relations can be written using the matrices $H_n$ (as in the scalar case):

(5.4a) $\Phi_n(z; W) = (I_p - H_n H_n^*)^{1/2}\, z\, \Phi_{n-1}(z; W) + H_n\, \widetilde{\Psi}_n(z; W),$
(5.4b) $\Psi_n(z; W) = z\, \Psi_{n-1}(z; W)\, (I_p - H_n^* H_n)^{1/2} + \widetilde{\Phi}_n(z; W)\, H_n$

(the backward recurrence relations). From (5.4b) we get
$$\widetilde{\Psi}_n(z; W) = (I_p - H_n^* H_n)^{1/2}\, \widetilde{\Psi}_{n-1}(z; W) + H_n^*\, \Phi_n(z; W),$$
and from (5.4a) we deduce that
$$\widetilde{\Phi}_n(z; W) = \widetilde{\Phi}_{n-1}(z; W)\, (I_p - H_n H_n^*)^{1/2} + \Psi_n(z; W)\, H_n^*.$$
Finally, by substitution in (5.4) we obtain the so-called forward recurrence relations

(5.5a) $(I_p - H_n H_n^*)^{1/2}\, \Phi_n(z; W) = z\, \Phi_{n-1}(z; W) + H_n\, \widetilde{\Psi}_{n-1}(z; W),$
(5.5b) $\Psi_n(z; W)\, (I_p - H_n^* H_n)^{1/2} = z\, \Psi_{n-1}(z; W) + \widetilde{\Phi}_{n-1}(z; W)\, H_n.$
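Assuming the form of the forward relations (5.5) as reconstructed above, the polynomials $\Phi_n$ and $\Psi_n$ can be generated directly from the reflection coefficients $H_n$ by working with their coefficient arrays, computing the reversed polynomials by coefficient reversal. The sketch below is our own (function names and the starting values $\Phi_0 = \Psi_0 = I_p$ are assumptions, consistent with the normalization $\frac{1}{2\pi}\int dW = I_p$), not part of the paper.

```python
import numpy as np

def inv_sqrt(A):
    # (Hermitian positive definite A)^{-1/2} via its spectral decomposition
    w, U = np.linalg.eigh(A)
    return U @ np.diag(1.0 / np.sqrt(w)) @ U.conj().T

def reversed_poly(coeffs):
    return [c.conj().T for c in coeffs[::-1]]

def szego_forward(H_list, p):
    """Coefficients of Phi_n (left) and Psi_n (right) generated from the
    reflection coefficients H_1, H_2, ... via (5.5a)-(5.5b)."""
    Phi, Psi = [np.eye(p, dtype=complex)], [np.eye(p, dtype=complex)]
    for H in H_list:
        L = inv_sqrt(np.eye(p) - H @ H.conj().T)      # (I - H H*)^{-1/2}
        R = inv_sqrt(np.eye(p) - H.conj().T @ H)      # (I - H* H)^{-1/2}
        Psit, Phit = reversed_poly(Psi), reversed_poly(Phi)
        zPhi = [np.zeros((p, p), dtype=complex)] + Phi    # coefficients of z*Phi_{n-1}
        zPsi = [np.zeros((p, p), dtype=complex)] + Psi
        pad = [np.zeros((p, p), dtype=complex)]
        Phi = [L @ (a + H @ b) for a, b in zip(zPhi, Psit + pad)]   # (5.5a)
        Psi = [(a + b @ H) @ R for a, b in zip(zPsi, Phit + pad)]   # (5.5b)
    return Phi, Psi

# example with p = 2 and two equal Hermitian reflection coefficients, ||H||_2 < 1
H = 0.4 * np.array([[1.0, 0.2], [0.2, -0.5]])
Phi2, Psi2 = szego_forward([H, H], p=2)
print(len(Phi2) - 1)           # degree 2
print(np.round(Phi2[-1], 4))   # leading coefficient of Phi_2
```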

THEOREM 5.1 (Christoffel-Darboux formula, [3]).

(5.6) $(1 - \bar{y} z) \displaystyle\sum_{j=0}^{n} \Phi_j^*(y)\, \Phi_j(z) = \widetilde{\Psi}_n^*(y)\, \widetilde{\Psi}_n(z) - \bar{y} z\, \Phi_n^*(y)\, \Phi_n(z).$

As a consequence, setting $y = z$ in (5.6), we get

(5.7) $|z|^2\, \Phi_n^*(z)\, \Phi_n(z) = \widetilde{\Psi}_n^*(z)\, \widetilde{\Psi}_n(z) + (|z|^2 - 1) \displaystyle\sum_{j=0}^{n} \Phi_j^*(z)\, \Phi_j(z).$

Since the right hand side of (5.7) is a positive definite matrix for $|z| > 1$, the matrix $\Phi_n(z)$ is nonsingular for $|z| > 1$. Assume that $\Phi_n(z_0)$ is a singular matrix for some $|z_0| = 1$, and let $u \neq 0$ be such that $\Phi_n(z_0)\, u = 0$. Then from (5.7) it follows that $\widetilde{\Psi}_n(z_0)\, u = 0$. Thus from (5.4a) we get $\Phi_{n-1}(z_0)\, u = 0$. By induction, $\Phi_0(z_0)$ is a singular matrix, in contradiction with the nonsingularity of the matrix $\Phi_0(z_0) = A_{0,0}$.

COROLLARY 5.2. The zeros of $\Phi_n$ belong to the open unit disk.

In order to compute such zeros, setting $y = 0$ in (5.6), we get $\widetilde{\Psi}_n^*(0)\, \widetilde{\Psi}_n(z) = K_n(z, 0)$. But $\widetilde{\Psi}_n^*(0) = B_{n,n}$, and so $\widetilde{\Psi}_n(z) = B_{n,n}^{-1}\, K_n(z, 0)$. From (5.5a),
$$z\, \Phi_{n-1}(z) = (I_p - H_n H_n^*)^{1/2}\, \Phi_n(z) - H_n\, B_{n-1,n-1}^{-1} \sum_{k=0}^{n-1} \Phi_k^*(0)\, \Phi_k(z).$$
Taking into account that $\Phi_j^*(0) = \widetilde{\Psi}_j^*(0)\, H_j^* = B_{j,j}\, H_j^*$, we get
$$z\, \Phi_{n-1}(z) = (I_p - H_n H_n^*)^{1/2}\, \Phi_n(z) - H_n \sum_{k=0}^{n-1} \left( \prod_{j=k+1}^{n-1} (I_p - H_j^* H_j)^{1/2} \right) H_k^*\, \Phi_k(z) = \sum_{k=0}^{n} M_{n-1,k}\, \Phi_k(z).$$
Thus, if we denote
$$M_n = \begin{pmatrix} M_{0,0} & M_{0,1} & & & \\ M_{1,0} & M_{1,1} & M_{1,2} & & \\ \vdots & & \ddots & \ddots & \\ M_{n-2,0} & M_{n-2,1} & \cdots & M_{n-2,n-2} & M_{n-2,n-1} \\ M_{n-1,0} & M_{n-1,1} & \cdots & M_{n-1,n-2} & M_{n-1,n-1} \end{pmatrix},$$
then
$$z \begin{pmatrix} \Phi_0(z) \\ \Phi_1(z) \\ \vdots \\ \Phi_{n-1}(z) \end{pmatrix} = M_n \begin{pmatrix} \Phi_0(z) \\ \Phi_1(z) \\ \vdots \\ \Phi_{n-1}(z) \end{pmatrix} + \begin{pmatrix} 0 \\ \vdots \\ 0 \\ (I_p - H_n H_n^*)^{1/2}\, \Phi_n(z) \end{pmatrix}.$$

THEOREM 5.3. For $n \in \mathbb{N}$, the zeros of the matrix polynomial $\Phi_n(z)$ are the eigenvalues of the block Hessenberg matrix $M_n \in \mathbb{C}^{(np,np)}$, with the same multiplicity.

Notice that if $v$ is an eigenvector corresponding to the eigenvalue $z_0$ of $M_n$, writing $v$ as a block column vector
$$v = \begin{pmatrix} v_0 \\ v_1 \\ \vdots \\ v_{n-1} \end{pmatrix} \in \mathbb{C}^{(np,1)},$$
where $v_i \in \mathbb{C}^{(p,1)}$, then the equation $M_n\, v = z_0\, v$ becomes
$$v_1 = \Phi_1(z_0)\, v_0, \quad v_2 = \Phi_2(z_0)\, v_0, \quad \ldots, \quad v_{n-1} = \Phi_{n-1}(z_0)\, v_0, \quad 0 = \Phi_n(z_0)\, v_0.$$
In other words, there exists a bijection between $\mathcal{N}(z_0, \Phi_n)$ and the subspace of eigenvectors of the matrix $M_n$ associated with the eigenvalue $z_0$.

From (5.7), if $|z| = 1$, then $\Phi_n^*(z)\, \Phi_n(z) = \widetilde{\Psi}_n^*(z)\, \widetilde{\Psi}_n(z)$ is a nonsingular matrix.

As in the scalar case, we will analyze two kinds of quadrature formulas. Let us now define the weight matrix function $W_n(\theta) = \left[ \widetilde{\Phi}_n^*(e^{i\theta})\, \widetilde{\Phi}_n(e^{i\theta}) \right]^{-1}$, as well as $\Omega_n(\theta) = \int_0^{\theta} W_n(s)\, ds$. Because the rational matrix function $[\widetilde{\Phi}_n(z)]^{-1}$ is analytic in the closed unit disk, one has

THEOREM 5.4 ([3]). For $0 \leq j, k \leq n$,
$$\frac{1}{2\pi} \int_0^{2\pi} \Phi_j(e^{i\theta})\, d\Omega_n(\theta)\, \Phi_k^*(e^{i\theta}) = \delta_{j,k}\, I_p.$$

This means that the sequences of matrix orthonormal polynomials corresponding to the matrices of measures $\Omega_n$ and $W$ have the same first $n + 1$ elements. Furthermore, if $P$ and $Q$ are matrix polynomials of degree less than or equal to $n$, we get

(5.8) $\displaystyle\frac{1}{2\pi} \int_0^{2\pi} P(e^{i\theta})\, dW(\theta)\, Q^*(e^{i\theta}) = \frac{1}{2\pi} \int_0^{2\pi} P(e^{i\theta})\, d\Omega_n(\theta)\, Q^*(e^{i\theta}).$

From the above result a straightforward proof of Favard's theorem on the unit circle follows ([3]). In fact, given a sequence of matrices $(H_n)$ of dimension $p$ with $\|H_n\|_2 < 1$, there exists a unique matrix of measures $W$ such that the sequence of matrix polynomials given by (5.5) is orthonormal with respect to $W$.

Notice that the zeros of $\Phi_n$, which lie in the open unit disk, are involved in (5.8). In order to obtain a quadrature formula with nodes on the unit circle, we need to introduce the concept of para-orthogonality.

DEFINITION 5.5 ([16], [18]). Let $U_n$ be a unitary matrix. The matrix polynomial $B_n(z; U_n) = \Phi_n(z) + U_n\, \widetilde{\Psi}_n(z)$ is said to be para-orthogonal with respect to the matrix of measures $W$.

THEOREM 5.6 ([16]).

1. The zeros of $B_n(z; U_n)$ are the eigenvalues of a unitary block Hessenberg matrix $N_n$, and their multiplicities are less than or equal to $p$. The blocks of the matrix $N_n$ are the same as those of $M_n$ up to those corresponding to the last row. In fact,
$$N_{n-1,k} = M_{n-1,k} - (I_p - H_n H_n^*)^{1/2}\, (U_n + H_n)^{-1} \prod_{j=k+1}^{n} (I_p - H_j^* H_j)^{1/2}\, H_k^*.$$
2. If $(m_i)_{i=1}^{k}$ are the multiplicities of the eigenvalues $(z_i)_{i=1}^{k}$ and $v_{i,j}$ ($j = 1, 2, \ldots, m_i$) the corresponding eigenvectors, then
$$\langle P, Q \rangle_L = \sum_{i=1}^{k} P(z_i)\, \Lambda_i\, Q^*(z_i),$$
where $P$, $Q$ are Laurent matrix polynomials, $P \in \mathcal{L}_{-s,t}$, $Q \in \mathcal{L}_{-(n-1-t),(n-1-s)}$ (here $\mathcal{L}_{r,s} = \{ \sum_{k=r}^{s} A_k z^k : A_k \in \mathbb{C}^{(p,p)},\ r \leq s \}$),
$$\Lambda_i = \left( v_{i,1}^{(0)}\ v_{i,2}^{(0)}\ \cdots\ v_{i,m_i}^{(0)} \right) G_i^{-1} \begin{pmatrix} v_{i,1}^{(0)*} \\ \vdots \\ v_{i,m_i}^{(0)*} \end{pmatrix},$$
$v_{i,s}^{(0)} \in \mathbb{C}^{(p,1)}$ ($s = 1, \ldots, m_i$) is the vector consisting of the first $p$ components of $v_{i,s}$, and $(G_i)_{k,l} = v_{i,k}^*\, v_{i,l}$.

6. Orthogonality on the unit circle from orthogonality on $[-1, 1]$. Let $W$ be a matrix of measures supported on $[-1, 1]$. We can introduce a matrix of measures $\widetilde{W}$ on the unit circle $\mathbb{T}$ in the following way:

(6.1) $\widetilde{W}(\theta) = \begin{cases} W(\cos\theta), & 0 \leq \theta \leq \pi, \\ -W(\cos\theta), & \pi \leq \theta \leq 2\pi. \end{cases}$

Taking into account the symmetry of the above measure, the matrix coefficients in (5.2) are related in the following way: $B_{n,j} = A_{n,j}^*$, $j = 0, 1, \ldots, n$. In this case, it is easy to prove that the reflection matrix parameters $(H_n)$ associated with the matrix of measures given in (6.1) are Hermitian. The first question to solve is the connection between these reflection parameters and the parameters of some sequence of matrix orthonormal polynomials with respect to $W$.

PROPOSITION 6.1 ([20]). Let $(\Phi_n)$ and $(\Psi_n)$ be the sequences of left and right matrix orthonormal polynomials on the unit circle with respect to $\widetilde{W}(\theta)$, with positive definite matrices as leading coefficients. The sequence of matrix polynomials
$$P_n(x; W) = \frac{1}{\sqrt{2}}\, (I_p + H_{2n})^{-1/2} \left[ \Phi_{2n}(z; \widetilde{W}) + \widetilde{\Psi}_{2n}(z; \widetilde{W}) \right] z^{-n}, \qquad x = \frac{1}{2}\left(z + \frac{1}{z}\right),$$
is orthonormal with respect to $W$. The sequence of matrix orthonormal polynomials $(P_n)$ satisfies (2.1) with

(6.2) $\begin{aligned} D_n(W) &= \tfrac{1}{2}\, (I_p + H_{2n-2})^{1/2}\, (I_p - H_{2n-1}^2)^{1/2}\, (I_p - H_{2n})^{1/2}, \\ E_n(W) &= \tfrac{1}{2}\, (I_p - H_{2n})^{1/2}\, H_{2n-1}\, (I_p - H_{2n})^{1/2} - \tfrac{1}{2}\, (I_p + H_{2n})^{1/2}\, H_{2n+1}\, (I_p + H_{2n})^{1/2} \end{aligned}$

for $n \geq 1$. Notice that if $H_n = H$ for every $n$, we get $D = D_n = \frac{1}{2}(I_p - H^2) > 0$ and $E = E_n = -H^2$. Thus, the sequence $(P_n)$ is a sequence of the type analyzed in Section 4, i.e., it belongs to the matrix Nevai class $M(D, E)$. In such a case, the corresponding matrix of measures $W_{D,E}$ is given by
$$dW_{D,E} = \frac{1}{\pi}\, D^{-1} \left( I_p - \left[ (I_p - H^2)^{-1} (x I_p + H^2) \right]^2 \right)^{1/2} dx.$$
On the other hand, the support of the matrix of measures $W_{D,E}$ lies in a finite union of at most $p$ disjoint, bounded, non-degenerate intervals whose end points are some zeros of the scalar polynomial
$$\det\left[ (I_p - H^2)^2 - (x I_p + H^2)^2 \right] = 0,$$
i.e.,
$$\det\left[ I_p + x I_p \right] = 0 \qquad \text{or} \qquad \det\left[ I_p - 2H^2 - x I_p \right] = 0.$$
This means that the set of end points is contained in $\{-1\} \cup \{1 - 2\lambda^2 : \lambda\ \text{an eigenvalue of}\ H\}$. Since $H^2 < I_p$, the above set is contained in $[-1, 1]$.

REFERENCES

[1] A. I. APTEKAREV AND E. M. NIKISHIN, The scattering problem for a discrete Sturm-Liouville operator, Math. USSR Sb., 49 (1984), pp. 325-355.
[2] T. S. CHIHARA, An Introduction to Orthogonal Polynomials, Gordon and Breach, New York, 1978.
[3] P. DELSARTE, Y. V. GENIN, AND Y. G. KAMP, Orthogonal polynomial matrices on the unit circle, IEEE Trans. Circuits and Systems, 25 (1978), pp. 149-160.
[4] A. J. DURÁN, On orthogonal polynomials with respect to a positive definite matrix of measures, Can. J. Math., 47 (1995), pp. 88-112.
[5] A. J. DURÁN, Markov's theorem for orthogonal matrix polynomials, Can. J. Math., 48 (1996), pp. 1180-1195.
[6] A. J. DURÁN, Ratio asymptotics for orthogonal matrix polynomials, J. Approx. Theory, 100 (1999), pp. 304-344.
[7] A. J. DURÁN AND P. LÓPEZ-RODRÍGUEZ, Orthogonal matrix polynomials: zeros and Blumenthal's theorem, J. Approx. Theory, 84 (1996), pp. 96-118.
[8] A. J. DURÁN, P. LÓPEZ-RODRÍGUEZ, AND E. B. SAFF, Zero asymptotic behaviour for orthogonal matrix polynomials, J. Anal. Math., 78 (1999), pp. 37-60.
[9] A. J. DURÁN AND W. VAN ASSCHE, Orthogonal matrix polynomials and higher order recurrence relations, Linear Algebra Appl., 219 (1995), pp. 261-280.
[10] P. A. FUHRMANN, Orthogonal matrix polynomials and system theory, Rend. Sem. Mat. Università Politecnica Torino, (1988), pp. 68-124.
[11] G. GOLUB AND C. VAN LOAN, Matrix Computations, The Johns Hopkins University Press, Baltimore, second ed., 1989.
[12] G. GOLUB AND R. UNDERWOOD, The block Lanczos method for computing eigenvalues, in Mathematical Software III, J. R. Rice, ed., Academic Press, New York, 1977, pp. 364-377.
[13] F. MARCELLÁN AND I. R. GONZÁLEZ, A class of matrix orthogonal polynomials on the unit circle, Linear Algebra Appl., 121 (1989), pp. 233-241.
[14] M. P. MIGNOLET, Matrix polynomials orthogonal on the unit circle and accuracy of autoregressive models, J. Comput. Appl. Math., 62 (1995), pp. 229-238.
[15] L. RODMAN, Orthogonal matrix polynomials, in Orthogonal Polynomials: Theory and Practice, P. Nevai, ed., vol. 294 of NATO ASI Series C, Kluwer, Dordrecht, 1990, pp. 345-362.
[16] A. SINAP, Gaussian quadrature for matrix valued functions on the unit circle, Electron. Trans. Numer. Anal., 3 (1995), pp. 95-115.

[17] A. SINAP AND W. VAN ASSCHE, Polynomial interpolation and Gaussian quadrature for matrix-valued functions, Linear Algebra Appl., 207 (1994), pp. 71-114.
[18] A. SINAP AND W. VAN ASSCHE, Orthogonal matrix polynomials and applications, J. Comput. Appl. Math., 66 (1996), pp. 27-52.
[19] G. SZEGÖ, Orthogonal Polynomials, vol. 23 of Amer. Math. Soc. Colloq. Publ., Amer. Math. Soc., Providence, RI, fourth ed., 1975.
[20] H. O. YAKHLEF AND F. MARCELLÁN, Orthogonal matrix polynomials, connection between recurrences on the unit circle and on a finite interval, in Proceedings of the 5th International Conference on Approximation and Optimization, Springer Verlag, 2000, pp. 373-386.
[21] H. O. YAKHLEF, F. MARCELLÁN, AND M. PIÑAR, Perturbations in the Nevai matrix class of orthogonal matrix polynomials, Linear Algebra Appl., 336 (2001), pp. 231-254.
[22] H. O. YAKHLEF, F. MARCELLÁN, AND M. PIÑAR, Relative asymptotics for orthogonal matrix polynomials with convergent recurrence coefficients, J. Approx. Theory, 111 (2001), pp. 1-30.