Orthogonal polynomials

1 Orthogonal polynomials

Gérard MEURANT

October 2008

2 Outline

1. Definition
2. Moments
3. Existence
4. Three-term recurrences
5. Jacobi matrices
6. Christoffel-Darboux relation
7. Examples of orthogonal polynomials
8. Variable-signed weight functions
9. Matrix orthogonal polynomials

3 Definition

Let [a, b] be a finite or infinite interval of the real line.

Definition. The Riemann-Stieltjes integral of a real-valued function f of a real variable with respect to a real function α is denoted by
$$\int_a^b f(\lambda)\, d\alpha(\lambda) \qquad (1)$$
and is defined as the limit (if it exists), as the mesh size of the partition $\pi = \{\lambda_i\}$ of the interval [a, b] goes to zero, of the sums
$$\sum_{\{\lambda_i\} \in \pi} f(c_i)\,\bigl(\alpha(\lambda_{i+1}) - \alpha(\lambda_i)\bigr), \qquad c_i \in [\lambda_i, \lambda_{i+1}].$$

4

If f is continuous and α is of bounded variation on [a, b], then the integral exists. The function α is of bounded variation if it is the difference of two nondecreasing functions; in particular, the integral exists if f is continuous and α is nondecreasing.

In many cases Riemann-Stieltjes integrals are written directly as
$$\int_a^b f(\lambda)\, w(\lambda)\, d\lambda,$$
where w is called the weight function.

5 Moments and inner product

Let α be a nondecreasing function on the interval (a, b) having finite limits at ±∞ if a = -∞ and/or b = +∞.

Definition. The numbers
$$\mu_i = \int_a^b \lambda^i\, d\alpha(\lambda), \quad i = 0, 1, \dots \qquad (2)$$
are called the moments related to the measure α.

Definition. Let P be the space of real polynomials. We define an inner product (related to the measure α) of two polynomials p, q ∈ P as
$$\langle p, q \rangle = \int_a^b p(\lambda)\, q(\lambda)\, d\alpha(\lambda). \qquad (3)$$

6

The norm of p is defined as
$$\|p\| = \left( \int_a^b p(\lambda)^2\, d\alpha(\lambda) \right)^{1/2}. \qquad (4)$$

We will also consider discrete inner products
$$\langle p, q \rangle = \sum_{j=1}^{m} p(t_j)\, q(t_j)\, w_j^2. \qquad (5)$$
The values t_j are referred to as points or nodes, and the values w_j^2 are the weights.

7

We will use the fact that the sum in equation (5) can be seen as an approximation of the integral (3). Conversely, it can be written as a Riemann-Stieltjes integral for a measure α which is piecewise constant and has jumps at the nodes t_j (assumed distinct for simplicity); see Atkinson; Dahlquist, Eisenstat and Golub; Dahlquist, Golub and Nash:
$$\alpha(\lambda) = \begin{cases} 0 & \text{if } \lambda < t_1, \\[2pt] \sum_{j=1}^{i} w_j^2 & \text{if } t_i \le \lambda < t_{i+1},\ i = 1, \dots, m-1, \\[2pt] \sum_{j=1}^{m} w_j^2 & \text{if } t_m \le \lambda. \end{cases}$$
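
To make the link between the discrete inner product (5) and the integral (3) concrete, here is a minimal Python sketch (NumPy assumed; the function name `discrete_inner_product` and the choice of Gauss-Legendre points as the nodes t_j are ours, purely for illustration):

```python
import numpy as np

def discrete_inner_product(p, q, nodes, weights):
    """Discrete inner product <p, q> = sum_j p(t_j) q(t_j) w_j^2 of equation (5).

    p and q are callables, nodes are the t_j, weights are the w_j."""
    t = np.asarray(nodes)
    w = np.asarray(weights)
    return np.sum(p(t) * q(t) * w**2)

# Gauss-Legendre nodes/weights make the discrete inner product reproduce the
# integral inner product (3) with d(alpha) = d(lambda) on [-1, 1] exactly for
# polynomials of low enough degree.
t, w2 = np.polynomial.legendre.leggauss(5)            # 5 nodes, weights w_j^2
p = lambda x: x**2
q = lambda x: np.ones_like(x)
print(discrete_inner_product(p, q, t, np.sqrt(w2)))    # ~ 2/3 = integral of x^2 on [-1, 1]
```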

8

There are different ways to normalize polynomials. A polynomial p of exact degree k is said to be monic if the coefficient of the monomial of highest degree is 1, that is,
$$p(\lambda) = \lambda^k + c_{k-1}\lambda^{k-1} + \cdots$$

Definition. The polynomials p and q are said to be orthogonal with respect to the inner products (3) or (5) if ⟨p, q⟩ = 0. The polynomials in a set of polynomials are orthonormal if they are mutually orthogonal and if ⟨p, p⟩ = 1. Polynomials in a set are said to be monic orthogonal polynomials if they are orthogonal, monic, and their norms are strictly positive.

9

The inner product ⟨·,·⟩ is said to be positive definite if ‖p‖ > 0 for all nonzero p ∈ P. A necessary and sufficient condition for a positive definite inner product is that the determinants of the Hankel moment matrices are positive,
$$\det \begin{pmatrix} \mu_0 & \mu_1 & \cdots & \mu_{k-1} \\ \mu_1 & \mu_2 & \cdots & \mu_k \\ \vdots & & & \vdots \\ \mu_{k-1} & \mu_k & \cdots & \mu_{2k-2} \end{pmatrix} > 0, \quad k = 1, 2, \dots$$
where the µ_i are the moments of definition (2).
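
As a quick numerical illustration of this criterion, the following sketch (NumPy assumed; the helper name is ours) forms the Hankel moment matrices for the measure dα(λ) = dλ on [-1, 1], whose moments are µ_i = 2/(i+1) for even i and 0 for odd i, and checks that their determinants are positive:

```python
import numpy as np

def hankel_determinants(moments, kmax):
    """Determinants of the k-by-k Hankel moment matrices H_k = (mu_{i+j}), k = 1..kmax."""
    dets = []
    for k in range(1, kmax + 1):
        H = np.array([[moments[i + j] for j in range(k)] for i in range(k)])
        dets.append(np.linalg.det(H))
    return dets

# Moments of d(alpha) = d(lambda) on [-1, 1]: mu_i = 2/(i+1) if i is even, 0 otherwise
mu = [2.0 / (i + 1) if i % 2 == 0 else 0.0 for i in range(12)]
print(hankel_determinants(mu, 6))   # all determinants are positive
```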

10 Existence of orthogonal polynomials

Theorem. If the inner product ⟨·,·⟩ is positive definite on P, there exists a unique infinite sequence of monic orthogonal polynomials related to the measure α.

See Gautschi.

11 Minimization properties

Theorem. If q_k is a monic polynomial of degree k, then the minimum
$$\min_{q_k} \int_a^b q_k^2(\lambda)\, d\alpha(\lambda)$$
is attained if and only if q_k is a constant times the orthogonal polynomial p_k related to α.

See Szegö.

We have defined orthogonality relative to an inner product given by a Riemann-Stieltjes integral, but, more generally, orthogonal polynomials can be defined relative to a linear functional L such that L(λ^k) = µ_k. Two polynomials p and q are said to be orthogonal if L(pq) = 0. One obtains the same kind of existence result; see the book by Brezinski.

12 Three-term recurrences

The main ingredient is the following property of the inner product: ⟨λp, q⟩ = ⟨p, λq⟩.

Theorem. For monic orthogonal polynomials, there exist sequences of coefficients α_k, k = 1, 2, ... and γ_k, k = 1, 2, ... such that
$$p_{k+1}(\lambda) = (\lambda - \alpha_{k+1})\, p_k(\lambda) - \gamma_k\, p_{k-1}(\lambda), \quad k = 0, 1, \dots \qquad (6)$$
$$p_{-1}(\lambda) \equiv 0, \quad p_0(\lambda) \equiv 1,$$
where
$$\alpha_{k+1} = \frac{\langle \lambda p_k, p_k \rangle}{\langle p_k, p_k \rangle}, \quad k = 0, 1, \dots \qquad
\gamma_k = \frac{\langle p_k, p_k \rangle}{\langle p_{k-1}, p_{k-1} \rangle}, \quad k = 1, 2, \dots$$

13

Proof. A set of monic orthogonal polynomials p_j is linearly independent. Any polynomial p of degree k can be written as
$$p = \sum_{j=0}^{k} \omega_j\, p_j$$
for some real numbers ω_j. The polynomial p_{k+1} - λ p_k is of degree k, so
$$p_{k+1} - \lambda p_k = -\alpha_{k+1}\, p_k - \gamma_k\, p_{k-1} + \sum_{j=0}^{k-2} \delta_j\, p_j. \qquad (7)$$
Taking the inner product of equation (7) with p_k,
$$\langle \lambda p_k, p_k \rangle = \alpha_{k+1}\, \langle p_k, p_k \rangle.$$

14

Taking the inner product of equation (7) with p_{k-1},
$$\langle \lambda p_k, p_{k-1} \rangle = \gamma_k\, \langle p_{k-1}, p_{k-1} \rangle.$$
But, using equation (7) for the degree k-1,
$$\langle \lambda p_k, p_{k-1} \rangle = \langle p_k, \lambda p_{k-1} \rangle = \langle p_k, p_k \rangle.$$
Finally, taking the inner product of equation (7) with p_j, j < k-1,
$$\langle \lambda p_k, p_j \rangle = -\delta_j\, \langle p_j, p_j \rangle.$$
The left-hand side of the last equation vanishes; for this, the property ⟨λp_k, p_j⟩ = ⟨p_k, λp_j⟩ is crucial. Since λp_j is of degree < k, the left-hand side is 0, and this implies δ_j = 0, j = 0, ..., k-2. □
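
The formulas of the theorem can be applied directly with the discrete inner product (5) to compute the coefficients α_k and γ_k; this is the classical Stieltjes procedure. A minimal sketch follows (NumPy assumed; the function name and the use of Gauss-Legendre nodes and weights are ours, for illustration only):

```python
import numpy as np

def stieltjes_monic(nodes, weights, n):
    """Recurrence coefficients alpha_k, gamma_k of the monic orthogonal polynomials
    for the discrete inner product (5), using the formulas of the theorem."""
    t = np.asarray(nodes, dtype=float)
    w2 = np.asarray(weights, dtype=float) ** 2
    ip = lambda f, g: np.sum(f * g * w2)          # discrete inner product (5)
    p_prev = np.zeros_like(t)                     # p_{-1} = 0
    p_curr = np.ones_like(t)                      # p_0 = 1
    alpha, gamma = [], []
    for k in range(n):
        a = ip(t * p_curr, p_curr) / ip(p_curr, p_curr)
        alpha.append(a)
        if k > 0:
            gamma.append(ip(p_curr, p_curr) / ip(p_prev, p_prev))
            g = gamma[-1]
        else:
            g = 0.0
        p_next = (t - a) * p_curr - g * p_prev    # three-term recurrence (6)
        p_prev, p_curr = p_curr, p_next
    return np.array(alpha), np.array(gamma)

# Example with Gauss-Legendre nodes: alpha_k = 0 by symmetry of the measure,
# and gamma_k matches the Legendre value k^2 / (4k^2 - 1).
t, w2 = np.polynomial.legendre.leggauss(10)
alpha, gamma = stieltjes_monic(t, np.sqrt(w2), 5)
print(alpha)   # close to zero
print(gamma)   # ~ [1/3, 4/15, 9/35, 16/63]
```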

15

There is a converse to this theorem. It is attributed to J. Favard, whose paper was published in 1935, although this result had also been obtained by J. Shohat at about the same time, and it was known earlier to Stieltjes.

Theorem. If a sequence of monic polynomials p_k, k = 0, 1, ... satisfies a three-term recurrence relation such as equation (6) with real coefficients and γ_k > 0, then there exists a positive measure α such that the sequence p_k is orthogonal with respect to an inner product defined by a Riemann-Stieltjes integral for the measure α.

16 Orthonormal polynomials

Theorem. For orthonormal polynomials, there exist sequences of coefficients α_k, k = 1, 2, ... and β_k, k = 1, 2, ... such that
$$\sqrt{\beta_{k+1}}\, p_{k+1}(\lambda) = (\lambda - \alpha_{k+1})\, p_k(\lambda) - \sqrt{\beta_k}\, p_{k-1}(\lambda), \quad k = 0, 1, \dots \qquad (8)$$
where
$$p_{-1}(\lambda) \equiv 0, \quad p_0(\lambda) \equiv 1/\sqrt{\beta_0}, \quad \beta_0 = \int_a^b d\alpha,$$
$$\alpha_{k+1} = \langle \lambda p_k, p_k \rangle, \quad k = 0, 1, \dots$$
and β_k is computed such that ‖p_k‖ = 1.

17 Relations between monic and orthonormal polynomials

Assume that we have a system of monic polynomials p_k satisfying a three-term recurrence (6); then we can obtain orthonormal polynomials \hat p_k by normalization,
$$\hat p_k(\lambda) = \frac{p_k(\lambda)}{\langle p_k, p_k \rangle^{1/2}}.$$
Using equation (6),
$$\frac{\|p_{k+1}\|}{\|p_k\|}\, \hat p_{k+1} = \left( \lambda - \frac{\langle \lambda p_k, p_k \rangle}{\|p_k\|^2} \right) \hat p_k - \frac{\|p_k\|}{\|p_{k-1}\|}\, \hat p_{k-1}.$$
After some manipulations,
$$\frac{\|p_{k+1}\|}{\|p_k\|}\, \hat p_{k+1} = \bigl(\lambda - \langle \lambda \hat p_k, \hat p_k \rangle\bigr)\, \hat p_k - \frac{\|p_k\|}{\|p_{k-1}\|}\, \hat p_{k-1}.$$

18

Note that
$$\langle \lambda \hat p_k, \hat p_k \rangle = \frac{\langle \lambda p_k, p_k \rangle}{\|p_k\|^2} \quad \text{and} \quad \sqrt{\beta_{k+1}} = \frac{\|p_{k+1}\|}{\|p_k\|}.$$
Therefore the coefficients α_k are the same and β_k = γ_k. If we have the coefficients of the monic orthogonal polynomials, we just have to take the square root of γ_k to obtain the coefficients of the corresponding orthonormal polynomials.

19 Jacobi matrices

If the orthonormal polynomials exist for all k, there is an infinite symmetric tridiagonal matrix J associated with them,
$$J = \begin{pmatrix} \alpha_1 & \sqrt{\beta_1} & & \\ \sqrt{\beta_1} & \alpha_2 & \sqrt{\beta_2} & \\ & \sqrt{\beta_2} & \alpha_3 & \sqrt{\beta_3} \\ & & \ddots & \ddots \end{pmatrix}$$
Since it has positive subdiagonal elements, the matrix J is called an infinite Jacobi matrix. Its leading principal submatrix of order k is denoted J_k. Orthogonal polynomials are fully described by their Jacobi matrices.
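
A small sketch of how J_k can be assembled from monic recurrence coefficients (since β_k = γ_k, the off-diagonal entries are √γ_k); NumPy assumed, the function name is ours, and the Legendre values α_k = 0, γ_k = k²/(4k² - 1) are used only as an example:

```python
import numpy as np

def jacobi_matrix(alpha, gamma):
    """Build the k-by-k Jacobi matrix J_k: diagonal alpha_1..alpha_k,
    off-diagonals sqrt(beta_j) = sqrt(gamma_j), j = 1..k-1."""
    k = len(alpha)
    J = np.diag(np.asarray(alpha, dtype=float))
    off = np.sqrt(np.asarray(gamma, dtype=float)[:k - 1])
    return J + np.diag(off, 1) + np.diag(off, -1)

k = 5
alpha = np.zeros(k)                                              # Legendre: alpha_j = 0
gamma = np.array([j**2 / (4.0 * j**2 - 1.0) for j in range(1, k)])  # Legendre gamma_j
print(jacobi_matrix(alpha, gamma))
```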

20 Christoffel-Darboux relation

Theorem. Let p_k, k = 0, 1, ... be orthonormal polynomials; then
$$\sum_{i=0}^{k} p_i(\lambda)\, p_i(\mu) = \sqrt{\beta_{k+1}}\, \frac{p_{k+1}(\lambda)\, p_k(\mu) - p_k(\lambda)\, p_{k+1}(\mu)}{\lambda - \mu}, \quad \text{if } \lambda \ne \mu,$$
$$\sum_{i=0}^{k} p_i^2(\lambda) = \sqrt{\beta_{k+1}}\, \bigl[ p'_{k+1}(\lambda)\, p_k(\lambda) - p'_k(\lambda)\, p_{k+1}(\lambda) \bigr].$$

Corollary. For monic orthogonal polynomials we have
$$\sum_{i=0}^{k} \gamma_k \gamma_{k-1} \cdots \gamma_{i+1}\, p_i(\lambda)\, p_i(\mu) = \frac{p_{k+1}(\lambda)\, p_k(\mu) - p_k(\lambda)\, p_{k+1}(\mu)}{\lambda - \mu}, \quad \text{if } \lambda \ne \mu. \qquad (9)$$
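
A numerical sanity check of the first identity, assuming the standard orthonormal Chebyshev polynomials of the first kind (p_0 = 1/√π, p_i = √(2/π) C_i for i ≥ 1, with √β_1 = 1/√2 and √β_i = 1/2 for i ≥ 2); these specific values are our assumption for the example, not part of the slides:

```python
import numpy as np

def p_hat(i, x):
    """Orthonormal Chebyshev polynomials (first kind) for the weight (1 - x^2)^(-1/2)."""
    return 1.0 / np.sqrt(np.pi) if i == 0 else np.sqrt(2.0 / np.pi) * np.cos(i * np.arccos(x))

sqrt_beta = lambda i: 1.0 / np.sqrt(2.0) if i == 1 else 0.5   # sqrt(beta_i), i >= 1

k, lam, mu = 4, 0.3, -0.7
lhs = sum(p_hat(i, lam) * p_hat(i, mu) for i in range(k + 1))
rhs = sqrt_beta(k + 1) * (p_hat(k + 1, lam) * p_hat(k, mu)
                          - p_hat(k, lam) * p_hat(k + 1, mu)) / (lam - mu)
print(np.isclose(lhs, rhs))   # True
```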

21 Properties of zeros

Let
$$P_k(\lambda) = \begin{pmatrix} p_0(\lambda) & p_1(\lambda) & \cdots & p_{k-1}(\lambda) \end{pmatrix}^T.$$
In matrix form, the three-term recurrence is written as
$$\lambda P_k(\lambda) = J_k P_k(\lambda) + \eta_k\, p_k(\lambda)\, e_k, \qquad (10)$$
where J_k is the Jacobi matrix of order k and e_k is the last column of the identity matrix (η_k = √β_k).

Theorem. The zeros θ_j^{(k)} of the orthonormal polynomial p_k are the eigenvalues of the Jacobi matrix J_k.

22

Proof. If θ is a zero of p_k, from equation (10) we have
$$\theta P_k(\theta) = J_k P_k(\theta).$$
This shows that θ is an eigenvalue of J_k and that P_k(θ) is a corresponding (unnormalized) eigenvector. J_k being a symmetric tridiagonal matrix with nonzero subdiagonal entries, its eigenvalues (the zeros of the orthogonal polynomial p_k) are real and distinct. □

Theorem. The zeros of the orthogonal polynomial p_k associated with the measure α on [a, b] are real, distinct, and located in the interior of [a, b].

See Szegö.
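
A numerical check of the theorem (NumPy assumed): using the monic Chebyshev coefficients α_k = 0, γ_1 = 1/2, γ_k = 1/4 for k ≥ 2 (our assumed example), the eigenvalues of J_5 reproduce the closed-form zeros cos((2j+1)π/(2k)) given on the next slide:

```python
import numpy as np

k = 5
# Monic Chebyshev (first kind): alpha_j = 0, gamma_1 = 1/2, gamma_j = 1/4 for j >= 2
off = np.sqrt(np.array([0.5] + [0.25] * (k - 2)))
Jk = np.diag(off, 1) + np.diag(off, -1)              # zero diagonal, sqrt(gamma_j) off-diagonal

theta = np.sort(np.linalg.eigvalsh(Jk))              # eigenvalues of J_k
zeros = np.sort(np.cos((2 * np.arange(k) + 1) * np.pi / (2 * k)))  # zeros of C_k
print(np.allclose(theta, zeros))                     # True
```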

23 Examples of orthogonal polynomials

Jacobi polynomials: dα(λ) = w(λ) dλ with
$$a = -1, \quad b = 1, \quad w(\lambda) = (1 - \lambda)^\delta (1 + \lambda)^\beta, \quad \delta, \beta > -1.$$

Special cases:

Chebyshev polynomials of the first kind: δ = β = -1/2,
$$C_k(\lambda) = \cos(k \arccos \lambda).$$
They satisfy
$$C_0(\lambda) \equiv 1, \quad C_1(\lambda) \equiv \lambda, \quad C_{k+1}(\lambda) = 2\lambda\, C_k(\lambda) - C_{k-1}(\lambda).$$

24

The zeros of C_k are
$$\lambda_{j+1} = \cos\!\left( \frac{(2j + 1)\pi}{2k} \right), \quad j = 0, 1, \dots, k - 1.$$
The polynomial C_k has k + 1 extrema in [-1, 1],
$$\lambda_j = \cos\!\left( \frac{j\pi}{k} \right), \quad j = 0, 1, \dots, k,$$
and C_k(λ_j) = (-1)^j. For k ≥ 1, C_k has leading coefficient 2^{k-1}, and
$$\langle C_i, C_j \rangle_\alpha = \begin{cases} 0 & i \ne j, \\ \pi/2 & i = j \ne 0, \\ \pi & i = j = 0. \end{cases}$$
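
A short check that the three-term recurrence reproduces cos(k arccos λ) (NumPy assumed; the helper name cheb_C is ours):

```python
import numpy as np

def cheb_C(k, x):
    """Evaluate C_k(x) by the recurrence C_{k+1} = 2x C_k - C_{k-1}, C_0 = 1, C_1 = x."""
    c_prev, c_curr = np.ones_like(x), np.asarray(x, dtype=float)
    if k == 0:
        return c_prev
    for _ in range(k - 1):
        c_prev, c_curr = c_curr, 2 * x * c_curr - c_prev
    return c_curr

x = np.linspace(-1, 1, 101)
print(np.allclose(cheb_C(6, x), np.cos(6 * np.arccos(x))))   # True
```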

25 [Figure] Chebyshev polynomials of the first kind C_k, k = 1, ..., 7, on [-1.1, 1.1]

26

Let
$$\pi_n^1 = \{\text{polynomials of degree } n \text{ in } \lambda \text{ whose value is } 1 \text{ for } \lambda = 0\}.$$
Chebyshev polynomials provide the solution of the minimization problem
$$\min_{q_n \in \pi_n^1}\ \max_{\lambda \in [a,b]} |q_n(\lambda)|.$$
The solution is written as
$$\min_{q_n \in \pi_n^1}\ \max_{\lambda \in [a,b]} |q_n(\lambda)| = \max_{\lambda \in [a,b]} \left| \frac{C_n\!\left( \frac{2\lambda - (a+b)}{b-a} \right)}{C_n\!\left( \frac{a+b}{b-a} \right)} \right| = \frac{1}{\left| C_n\!\left( \frac{a+b}{b-a} \right) \right|},$$
see Dahlquist and Björck.

27

Chebyshev polynomials of the second kind: δ = β = 1/2,
$$U_k(\lambda) = \frac{\sin((k+1)\theta)}{\sin \theta}, \quad \lambda = \cos \theta.$$
They satisfy the same three-term recurrence as the Chebyshev polynomials of the first kind, but with initial conditions U_0 ≡ 1, U_1 ≡ 2λ.

Of all monic polynomials q_k, 2^{-k} U_k gives the smallest L¹ norm
$$\|q_k\|_1 = \int_{-1}^{1} |q_k(\lambda)|\, d\lambda.$$

28 [Figure] Chebyshev polynomials of the second kind U_k, k = 1, ..., 7, on [-1.1, 1.1]

29

Legendre polynomials: δ = β = 0,
$$(k+1)\, P_{k+1}(\lambda) = (2k+1)\, \lambda\, P_k(\lambda) - k\, P_{k-1}(\lambda), \quad P_0(\lambda) \equiv 1, \quad P_1(\lambda) \equiv \lambda.$$
The Legendre polynomial P_k is bounded by 1 in absolute value on [-1, 1].
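
A small sketch evaluating P_k by this recurrence and checking the bound (NumPy assumed; the helper name legendre_P is ours, and numpy.polynomial.legendre.legval is used only as an independent reference):

```python
import numpy as np

def legendre_P(k, x):
    """Evaluate P_k(x) with (j+1) P_{j+1} = (2j+1) x P_j - j P_{j-1}, P_0 = 1, P_1 = x."""
    p_prev, p_curr = np.ones_like(x), np.asarray(x, dtype=float)
    if k == 0:
        return p_prev
    for j in range(1, k):
        p_prev, p_curr = p_curr, ((2 * j + 1) * x * p_curr - j * p_prev) / (j + 1)
    return p_curr

x = np.linspace(-1, 1, 201)
print(np.allclose(legendre_P(7, x), np.polynomial.legendre.legval(x, [0] * 7 + [1])))  # True
print(np.max(np.abs(legendre_P(7, x))) <= 1.0)   # |P_7| is bounded by 1 on [-1, 1]
```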

30 [Figure] Legendre polynomials P_k, k = 1, ..., 7, on [-1.1, 1.1]

31 Laguerre polynomials

The interval is [0, ∞) and the weight function is e^{-λ}. The recurrence relation is
$$(k+1)\, L_{k+1} = (2k + 1 - \lambda)\, L_k - k\, L_{k-1}, \quad L_0 \equiv 1, \quad L_1 \equiv 1 - \lambda.$$

32 [Figure] Laguerre polynomials L_k, k = 1, ..., 7, on [-2, 20]

33 Hermite polynomials

The interval is (-∞, ∞) and the weight function is e^{-λ²}. The recurrence relation is
$$H_{k+1} = 2\lambda\, H_k - 2k\, H_{k-1}, \quad H_0 \equiv 1, \quad H_1 \equiv 2\lambda.$$

34 [Figure] Hermite polynomials H_k, k = 1, ..., 7, on [-10, 10]

35 Variable-signed weight functions

What happens if the measure α is not positive?

Theorem. Assume that all the moments exist and are finite. For any k > 0, there exists a polynomial p_k of degree at most k such that p_k is orthogonal to all polynomials of degree at most k - 1 with respect to w.

See G.W. Struble.

The important words in this result are "of degree at most k": in some cases the polynomial p_k can be of degree less than k.

36

Let C(k) be the set of polynomials of degree at most k orthogonal to all polynomials of degree at most k - 1. C(k) is called degenerate if it contains polynomials of degree less than k. If C(k) is non-degenerate, it contains one unique polynomial (up to a multiplicative constant).

Theorem. Let C(k) be non-degenerate with a polynomial p_k, and assume that C(k + n), n > 0, is the next non-degenerate set. Then p_k is the unique (up to a multiplicative constant) polynomial of lowest degree in C(k + m), m = 1, ..., n - 1.

37

$$p_k(\lambda) = \Bigl( \alpha_k \lambda^{d_k - d_{k-1}} + \sum_{i=0}^{d_k - d_{k-1} - 1} \beta_{k,i}\, \lambda^i \Bigr)\, p_{k-1}(\lambda) - \gamma_{k-1}\, p_{k-2}(\lambda), \quad k = 2, \dots$$
$$p_0(\lambda) \equiv 1, \quad p_1(\lambda) = \Bigl( \alpha_1 \lambda^{d_1} + \sum_{i=0}^{d_1 - 1} \beta_{1,i}\, \lambda^i \Bigr)\, p_0(\lambda). \qquad (11)$$
The coefficient of p_{k-1} contains powers of λ depending on the difference of the degrees d_k of the polynomials in the non-degenerate cases. The coefficients α_k and γ_{k-1} have to be nonzero.

38 Matrix orthogonal polynomials

We would like to have matrices as coefficients of the polynomials. For our purposes we just need 2 × 2 matrices.

Definition. For λ real, a matrix polynomial p_i(λ), which is a 2 × 2 matrix, is defined as
$$p_i(\lambda) = \sum_{j=0}^{i} \lambda^j\, C_j^{(i)},$$
where the coefficients C_j^{(i)} are given 2 × 2 real matrices. If the leading coefficient is the identity matrix, the matrix polynomial is said to be monic.

The measure α(λ) is a matrix of order 2 that we suppose to be symmetric and positive semidefinite.

39

We assume that the (matrix) moments
$$M_k = \int_a^b \lambda^k\, d\alpha(\lambda) \qquad (12)$$
exist for all k. The inner product of two matrix polynomials p and q is defined as
$$\langle p, q \rangle = \int_a^b p(\lambda)\, d\alpha(\lambda)\, q(\lambda)^T. \qquad (13)$$

40

The matrix polynomials in a sequence p_k, k = 0, 1, ... are said to be orthonormal if
$$\langle p_i, p_j \rangle = \delta_{i,j}\, I_2, \qquad (14)$$
where δ_{i,j} is the Kronecker symbol and I_2 the identity matrix of order 2.

Theorem. Sequences of matrix orthogonal polynomials satisfy a block three-term recurrence
$$p_j(\lambda)\, \Gamma_j = \lambda\, p_{j-1}(\lambda) - p_{j-1}(\lambda)\, \Omega_j - p_{j-2}(\lambda)\, \Gamma_{j-1}^T, \qquad (15)$$
$$p_0(\lambda) \equiv I_2, \quad p_{-1}(\lambda) \equiv 0,$$
where Γ_j, Ω_j are 2 × 2 matrices and the matrices Ω_j are symmetric.

41

The block three-term recurrence can be written in matrix form as
$$\lambda\, [p_0(\lambda), \dots, p_{k-1}(\lambda)] = [p_0(\lambda), \dots, p_{k-1}(\lambda)]\, J_k + [0, \dots, 0, p_k(\lambda)\, \Gamma_k], \qquad (16)$$
where
$$J_k = \begin{pmatrix} \Omega_1 & \Gamma_1^T & & & \\ \Gamma_1 & \Omega_2 & \Gamma_2^T & & \\ & \ddots & \ddots & \ddots & \\ & & \Gamma_{k-2} & \Omega_{k-1} & \Gamma_{k-1}^T \\ & & & \Gamma_{k-1} & \Omega_k \end{pmatrix}$$
is a block tridiagonal matrix of order 2k with 2 × 2 blocks.
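
A minimal sketch assembling such a block tridiagonal matrix (NumPy assumed; the function name block_jacobi is ours, and the random blocks serve only to illustrate the layout, not an actual measure):

```python
import numpy as np

def block_jacobi(omegas, gammas):
    """Assemble the block tridiagonal matrix J_k of equation (16) from the
    diagonal blocks Omega_1..Omega_k and the subdiagonal blocks Gamma_1..Gamma_{k-1}."""
    k, m = len(omegas), omegas[0].shape[0]
    J = np.zeros((k * m, k * m))
    for j in range(k):
        J[j*m:(j+1)*m, j*m:(j+1)*m] = omegas[j]
    for j in range(k - 1):
        J[(j+1)*m:(j+2)*m, j*m:(j+1)*m] = gammas[j]        # Gamma_{j+1}
        J[j*m:(j+1)*m, (j+1)*m:(j+2)*m] = gammas[j].T       # Gamma_{j+1}^T
    return J

# Illustration with arbitrary symmetric Omega_j and arbitrary Gamma_j (2 x 2 blocks)
rng = np.random.default_rng(0)
omegas = [(lambda A: (A + A.T) / 2)(rng.standard_normal((2, 2))) for _ in range(3)]
gammas = [rng.standard_normal((2, 2)) for _ in range(2)]
print(block_jacobi(omegas, gammas))   # a 6 x 6 block tridiagonal matrix
```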

42

Let P(λ) = [p_0(λ), ..., p_{k-1}(λ)]^T; then
$$J_k\, P(\lambda) = \lambda\, P(\lambda) - [0, \dots, 0, p_k(\lambda)\, \Gamma_k]^T.$$

Theorem. For λ and µ real, we have the matrix analog of the Christoffel-Darboux identity,
$$(\lambda - \mu) \sum_{j=0}^{k-1} p_j(\mu)\, p_j^T(\lambda) = p_{k-1}(\mu)\, \Gamma_k^T\, p_k^T(\lambda) - p_k(\mu)\, \Gamma_k\, p_{k-1}^T(\lambda). \qquad (17)$$

43

F.V. Atkinson, Discrete and continuous boundary problems, Academic Press, 1964.

C. Brezinski, Biorthogonality and its applications to numerical analysis, Marcel Dekker, 1992.

T.S. Chihara, An introduction to orthogonal polynomials, Gordon and Breach, 1978.

G. Dahlquist and A. Björck, Numerical methods in scientific computing, volume I, SIAM, 2008.

G. Dahlquist, S.C. Eisenstat and G.H. Golub, Bounds for the error of linear systems of equations using the theory of moments, J. Math. Anal. Appl., v. 37, 1972.

G. Dahlquist, G.H. Golub and S.G. Nash, Bounds for the error in linear systems, in Proc. of the Workshop on Semi-Infinite Programming, R. Hettich Ed., Springer, 1978.

44

W. Gautschi, Orthogonal polynomials: computation and approximation, Oxford University Press, 2004.

G.W. Struble, Orthogonal polynomials: variable-signed weight functions, Numer. Math., v. 5, 1963.

G. Szegö, Orthogonal polynomials, Third Edition, American Mathematical Society, 1974.
