Differentiation matrices in polynomial bases


A. Amiraslani
aamirasl@hawaii.edu
STEM Department, University of Hawaii Maui College, W. Kaahumanu Ave., Kahului, HI, USA

© The Author(s). This article is published with open access at Springerlink.com.

Abstract  Explicit differentiation matrices in various polynomial bases are presented in this work. The idea is to avoid any change of basis in the process of polynomial differentiation. This article concerns both degree-graded polynomial bases, such as orthogonal bases, and non-degree-graded polynomial bases, including the Lagrange and Bernstein bases.

Keywords  Polynomial interpolation · Polynomial bases · Differentiation

Introduction

Consider a scalar (complex) polynomial P(x) of degree n and a basis given by \{B_0(x), B_1(x), \ldots, B_{n-1}(x), B_n(x)\}. This basis determines the representation P(x) = \sum_{j=0}^{n} a_j B_j(x), where a_j \in \mathbb{C} and, regardless of the basis, the coefficient of x^n in the expression is nonzero. Alternatively, this polynomial can be written as

P(x) = \begin{bmatrix} a_0 & a_1 & \cdots & a_{n-1} & a_n \end{bmatrix} I \begin{bmatrix} B_0(x) \\ B_1(x) \\ \vdots \\ B_{n-1}(x) \\ B_n(x) \end{bmatrix},   (1)

where I is the identity matrix of size n + 1. We want to find a matrix D, the differentiation matrix, of size n + 1 such that the kth order derivative of P(x), denoted \frac{d^k P(x)}{dx^k} or P^{(k)}(x), can be written as

P^{(k)}(x) = \begin{bmatrix} a_0 & a_1 & \cdots & a_{n-1} & a_n \end{bmatrix} D^k \begin{bmatrix} B_0(x) \\ B_1(x) \\ \vdots \\ B_{n-1}(x) \\ B_n(x) \end{bmatrix}.   (2)

For a polynomial of degree n, D has to be a nilpotent matrix of degree n + 1. D has a well-known structure and can be easily found for the monomial basis.
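As a concrete illustration of (1) and (2), the following short Python sketch (not part of the original article; it assumes NumPy and uses the monomial basis with n = 5) builds the monomial differentiation matrix discussed below and checks that a^T D B(x) reproduces P'(x) and that D is nilpotent of degree n + 1:

import numpy as np

# A minimal sketch for the monomial basis with n = 5; variable names are ours.
n = 5
a = np.array([3.0, -1.0, 2.0, 0.5, -4.0, 1.0])    # coefficients of P in 1, x, ..., x^5

# Differentiation matrix in the monomial basis: (x^j)' = j * x^(j-1).
D = np.zeros((n + 1, n + 1))
for j in range(1, n + 1):
    D[j, j - 1] = j

x = 0.7
B = np.array([x**j for j in range(n + 1)])        # basis vector [1, x, ..., x^5]

deriv_matrix = a @ D @ B                          # P'(x) via the differentiation matrix
deriv_direct = sum(j * a[j] * x**(j - 1) for j in range(1, n + 1))
print(np.isclose(deriv_matrix, deriv_direct))     # True

# D is nilpotent of degree n + 1: D^(n+1) = 0.
print(np.allclose(np.linalg.matrix_power(D, n + 1), 0))   # True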

For convenience, let us assume n = 5; the generalizations for all positive n will be clear. If

P(x) = \begin{bmatrix} a_0 & a_1 & a_2 & a_3 & a_4 & a_5 \end{bmatrix} I \begin{bmatrix} 1 \\ x \\ x^2 \\ x^3 \\ x^4 \\ x^5 \end{bmatrix},   (3)

then

D = \begin{bmatrix} 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 2 & 0 & 0 & 0 & 0 \\ 0 & 0 & 3 & 0 & 0 & 0 \\ 0 & 0 & 0 & 4 & 0 & 0 \\ 0 & 0 & 0 & 0 & 5 & 0 \end{bmatrix}.   (4)

Differentiation in bases other than the monomial basis has been studied only occasionally. One of the most important applications of polynomial differentiation in other bases is in spectral methods such as collocation methods [13, 14]. Differentiation matrices for Chebyshev and Jacobi polynomials have been computed, and the differentiation of Jacobi polynomials through the Bernstein basis was studied in [12]. Chirikalov [2] computed the differentiation matrix for the Hermite basis.

In this paper, we present explicit formulas for D in different polynomial bases. Constructing D is fairly straightforward and, having D, we can easily find derivatives of higher order by raising D to the corresponding power. Another important advantage of having a formula for D in a given basis is that we do not need to convert to the monomial basis in order to differentiate P(x). Conversion between bases has been studied exhaustively in [6], but it can be unstable [8].

Section "Degree-graded bases" considers degree-graded bases and finds D for them in general; orthogonal bases are all among the degree-graded bases. Section "Special degree-graded bases" then discusses other important special cases such as the monomial and Newton bases as well as the Hermite basis. Sections "Bernstein basis" and "Lagrange basis" concern the Bernstein and Lagrange bases, respectively, and find D for them. The Bernstein (Bézier) basis and the Lagrange basis are most useful in computer-aided geometric design (see [5], for example). For some problems in partial differential equations with symmetries in the boundary conditions, Legendre polynomials are the most natural choice and can be used successfully. Finally, in approximation theory, Chebyshev polynomials have a special place due to their minimum-norm property (see, e.g., [11]).

Degree-graded bases

Real polynomials \{\phi_n(x)\}_{n=0}^{\infty}, with \phi_n(x) of degree n, which are orthonormal on an interval of the real line (with respect to some nonnegative weight function) necessarily satisfy a three-term recurrence relation (see, e.g., [7]). These relations can be written in the form

x \phi_j(x) = a_j \phi_{j+1}(x) + b_j \phi_j(x) + c_j \phi_{j-1}(x), \quad j = 0, 1, 2, \ldots,   (5)

where the a_j, b_j, c_j are real, a_j \neq 0, \phi_{-1}(x) \equiv 0, and \phi_0(x) is a nonzero constant. The choices of coefficients a_j, b_j, c_j defining three well-known sets of orthogonal polynomials (associated with the names of Chebyshev and Legendre) are summarized in Table 1. Orthogonal polynomials have well-established significance in mathematical physics and numerical analysis (see, e.g., [7]).

More generally, any sequence of polynomials \{\phi_j(x)\}_{j=0}^{\infty}, with \phi_j(x) of degree j, is said to be degree-graded; such a sequence obviously forms a linearly independent set, but it is not necessarily orthogonal. A scalar polynomial of degree n can now be written in terms of a set of degree-graded polynomials as P(x) = \sum_{j=0}^{n} a_j \phi_j(x), where a_j \in \mathbb{C} and a_n \neq 0. We can then write

P(x) = \begin{bmatrix} a_0 & a_1 & \cdots & a_{n-1} & a_n \end{bmatrix} I \begin{bmatrix} \phi_0(x) \\ \phi_1(x) \\ \vdots \\ \phi_{n-1}(x) \\ \phi_n(x) \end{bmatrix}.   (6)

Table 1  Three well-known orthogonal polynomials

Polynomial               T_n(x)                        P_n(x)                   U_n(x)
Name of polynomial       Chebyshev (1st kind)          Legendre (spherical)     Chebyshev (2nd kind)
Weight function          (1 - x^2)^{-1/2}              1                        (1 - x^2)^{1/2}
Orthogonality interval   [-1, 1]                       [-1, 1]                  [-1, 1]
Leading coefficient k_n  2^{n-1}  (n >= 1)             (2n)! / (2^n (n!)^2)     2^n
a_n                      1 for n = 0, 1/2 otherwise    (n+1)/(2n+1)             1/2
b_n                      0                             0                        0
c_n                      1/2                           n/(2n+1)                 1/2
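The recurrence (5), together with the coefficients of Table 1, already determines the basis itself. As an illustrative aside (not from the original article; the helper name eval_degree_graded is ours and NumPy is assumed), the following Python sketch evaluates a degree-graded family from (5) and checks it against the closed form T_j(x) = cos(j arccos x) for the first-kind Chebyshev polynomials:

import numpy as np

def eval_degree_graded(a, b, c, n, x):
    """Evaluate phi_0(x), ..., phi_n(x) from the three-term recurrence (5):
    x*phi_j = a[j]*phi_{j+1} + b[j]*phi_j + c[j]*phi_{j-1}, with phi_{-1} = 0, phi_0 = 1."""
    phi = np.empty(n + 1)
    phi[0] = 1.0
    if n >= 1:
        phi[1] = (x - b[0]) * phi[0] / a[0]
    for j in range(1, n):
        phi[j + 1] = ((x - b[j]) * phi[j] - c[j] * phi[j - 1]) / a[j]
    return phi

# Chebyshev, first kind (Table 1): a_0 = 1, a_j = 1/2 (j >= 1), b_j = 0, c_j = 1/2.
n = 5
a = np.array([1.0] + [0.5] * n)
b = np.zeros(n + 1)
c = np.full(n + 1, 0.5)

x = 0.3
phi = eval_degree_graded(a, b, c, n, x)
exact = np.cos(np.arange(n + 1) * np.arccos(x))   # T_j(x) = cos(j*arccos(x)) on [-1, 1]
print(np.allclose(phi, exact))                    # True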

Lemma 1  If P(x) is given by (6), then

P^{(k)}(x) = \begin{bmatrix} a_0 & a_1 & \cdots & a_{n-1} & a_n \end{bmatrix} D^k \begin{bmatrix} \phi_0(x) \\ \phi_1(x) \\ \vdots \\ \phi_{n-1}(x) \\ \phi_n(x) \end{bmatrix},   (7)

where

D = \left[ \begin{array}{c|c} 0_{1 \times n} & 0 \\ \hline Q & 0_{n \times 1} \end{array} \right],   (8)

that is, the first row and the last column of D are zero, and Q is a size-n lower triangular matrix that has the following structure for i, j = 1, \ldots, n:

q_{ij} = \begin{cases} 0, & i < j, \\[1mm] \dfrac{i}{a_{i-1}}, & i = j, \\[1mm] \dfrac{1}{a_{i-1}} \Big( (b_{j-1} - b_{i-1})\, q_{i-1,j} + a_{j-2}\, q_{i-1,j-1} + c_j\, q_{i-1,j+1} - c_{i-1}\, q_{i-2,j} \Big), & i > j. \end{cases}   (9)

Any entry q with a negative or zero index is set to 0 in the above formula.

Proof  We start by differentiating (5) to get

\phi_j(x) = a_j\, \phi_{j+1}'(x) + (b_j - x)\, \phi_j'(x) + c_j\, \phi_{j-1}'(x), \quad j = 0, 1, 2, \ldots.   (10)

We can write this equation in matrix-vector form. Without loss of generality, we assume n = 4:

\begin{bmatrix} \phi_0(x) \\ \phi_1(x) \\ \phi_2(x) \\ \phi_3(x) \end{bmatrix} =
\begin{bmatrix}
c_0 & b_0 - x & a_0 & 0 & 0 & 0 \\
0 & c_1 & b_1 - x & a_1 & 0 & 0 \\
0 & 0 & c_2 & b_2 - x & a_2 & 0 \\
0 & 0 & 0 & c_3 & b_3 - x & a_3
\end{bmatrix}
\begin{bmatrix} \phi_{-1}'(x) \\ \phi_0'(x) \\ \phi_1'(x) \\ \phi_2'(x) \\ \phi_3'(x) \\ \phi_4'(x) \end{bmatrix}.   (11)

Given that \phi_0'(x) = 0 and \phi_{-1}(x) does not appear in the system, we can eliminate them and rewrite the above system as

\begin{bmatrix} \phi_0(x) \\ \phi_1(x) \\ \phi_2(x) \\ \phi_3(x) \end{bmatrix} = H \begin{bmatrix} \phi_1'(x) \\ \phi_2'(x) \\ \phi_3'(x) \\ \phi_4'(x) \end{bmatrix},   (12)

where

H = \begin{bmatrix} a_0 & 0 & 0 & 0 \\ b_1 - x & a_1 & 0 & 0 \\ c_2 & b_2 - x & a_2 & 0 \\ 0 & c_3 & b_3 - x & a_3 \end{bmatrix}.   (13)

Since a_i \neq 0, H^{-1} exists and is a lower triangular matrix that has the following row structure in general, for i = 1, \ldots, n:

h^{(-1)}_{ij} = \begin{cases} 0, & j > i, \\[1mm] \dfrac{1}{a_{i-1}}, & j = i, \\[1mm] \dfrac{(x - b_j)\, h^{(-1)}_{i,j+1} - c_{j+1}\, h^{(-1)}_{i,j+2}}{a_{j-1}}, & j = i-1, i-2, \ldots, 1. \end{cases}   (14)

Now

\begin{bmatrix} \phi_1'(x) \\ \phi_2'(x) \\ \phi_3'(x) \\ \phi_4'(x) \end{bmatrix} = H^{-1} \begin{bmatrix} \phi_0(x) \\ \phi_1(x) \\ \phi_2(x) \\ \phi_3(x) \end{bmatrix}.   (15)

It is obvious from (14) that the variable x appears in the entries of H^{-1}. Through a fairly straightforward, but rather tedious, process, and using (5) to eliminate x from H^{-1}, we get Q as given by (9), and we have

\begin{bmatrix} \phi_1'(x) \\ \phi_2'(x) \\ \phi_3'(x) \\ \phi_4'(x) \end{bmatrix} = Q \begin{bmatrix} \phi_0(x) \\ \phi_1(x) \\ \phi_2(x) \\ \phi_3(x) \end{bmatrix}.   (16)

From there, adding a zero row at the top and a zero column on the right, we find D and have

\begin{bmatrix} \phi_0'(x) \\ \phi_1'(x) \\ \phi_2'(x) \\ \phi_3'(x) \\ \phi_4'(x) \end{bmatrix} =
\left[ \begin{array}{c|c} 0_{1 \times 4} & 0 \\ \hline Q & 0_{4 \times 1} \end{array} \right]
\begin{bmatrix} \phi_0(x) \\ \phi_1(x) \\ \phi_2(x) \\ \phi_3(x) \\ \phi_4(x) \end{bmatrix},   (17)

which is (7) for k = 1; higher-order derivatives follow by raising D to higher powers. □
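As an illustrative aside (this listing is not part of the original article; the function name degree_graded_diff_matrix is ours and NumPy is assumed), the recursion (9) can be implemented directly and checked against the well-known relations T_1' = T_0, T_2' = 4 T_1, T_3' = 3 T_0 + 6 T_2 and T_4' = 8 T_1 + 8 T_3:

import numpy as np

def degree_graded_diff_matrix(a, b, c, n):
    """Build the (n+1) x (n+1) differentiation matrix D of Lemma 1 from the
    three-term recurrence coefficients a_j, b_j, c_j of a degree-graded basis.
    Indices follow (9); entries with nonpositive indices are treated as zero."""
    q = np.zeros((n + 2, n + 2))          # 1-based storage for q_{ij}, i, j = 1..n
    for i in range(1, n + 1):
        q[i, i] = i / a[i - 1]
        for j in range(i - 1, 0, -1):
            term = (b[j - 1] - b[i - 1]) * q[i - 1, j]
            if j >= 2:
                term += a[j - 2] * q[i - 1, j - 1]
            term += c[j] * q[i - 1, j + 1]
            term -= c[i - 1] * q[i - 2, j]
            q[i, j] = term / a[i - 1]
    D = np.zeros((n + 1, n + 1))
    D[1:, :n] = q[1:n + 1, 1:n + 1]       # first row and last column of D are zero
    return D

# Chebyshev, first kind: a_0 = 1, a_j = 1/2 (j >= 1), b_j = 0, c_j = 1/2.
n = 4
a = np.array([1.0] + [0.5] * n)
b = np.zeros(n + 1)
c = np.full(n + 1, 0.5)
D = degree_graded_diff_matrix(a, b, c, n)
print(D.astype(int))
# Rows read: T_1' = T_0, T_2' = 4 T_1, T_3' = 3 T_0 + 6 T_2, T_4' = 8 T_1 + 8 T_3.
print(np.allclose(np.linalg.matrix_power(D, n + 1), 0))   # D is nilpotent of degree n + 1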

Using these results, we can write the first derivative of a generic first-kind Chebyshev polynomial of degree 4 (see Table 1) as

P'(x) = \begin{bmatrix} a_0 & a_1 & a_2 & a_3 & a_4 \end{bmatrix}
\begin{bmatrix}
0 & 0 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & 0 \\
0 & 4 & 0 & 0 & 0 \\
3 & 0 & 6 & 0 & 0 \\
0 & 8 & 0 & 8 & 0
\end{bmatrix}
\begin{bmatrix} T_0(x) \\ T_1(x) \\ T_2(x) \\ T_3(x) \\ T_4(x) \end{bmatrix}.   (18)

Similarly, for a second-kind Chebyshev polynomial of degree 4, we can write the first derivative as

P'(x) = \begin{bmatrix} a_0 & a_1 & a_2 & a_3 & a_4 \end{bmatrix}
\begin{bmatrix}
0 & 0 & 0 & 0 & 0 \\
2 & 0 & 0 & 0 & 0 \\
0 & 4 & 0 & 0 & 0 \\
2 & 0 & 6 & 0 & 0 \\
0 & 4 & 0 & 8 & 0
\end{bmatrix}
\begin{bmatrix} U_0(x) \\ U_1(x) \\ U_2(x) \\ U_3(x) \\ U_4(x) \end{bmatrix},   (19)

and, for a Legendre polynomial of degree 4,

P'(x) = \begin{bmatrix} a_0 & a_1 & a_2 & a_3 & a_4 \end{bmatrix}
\begin{bmatrix}
0 & 0 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & 0 \\
0 & 3 & 0 & 0 & 0 \\
1 & 0 & 5 & 0 & 0 \\
0 & 3 & 0 & 7 & 0
\end{bmatrix}
\begin{bmatrix} P_0(x) \\ P_1(x) \\ P_2(x) \\ P_3(x) \\ P_4(x) \end{bmatrix}.   (20)

Special degree-graded bases

As mentioned above, the family of degree-graded polynomials with recurrence relations of the form (5) includes all the orthogonal bases, but is not limited to them. Here, we discuss some famous non-orthogonal bases of this kind and find the corresponding formulas for the differentiation matrix D. In particular, if in (5) we let a_j = 1 and b_j = c_j = 0, it becomes the monomial basis. Using (8) and (9), we can easily verify that in this case D has the form (4).

Another important basis of this kind is the Newton basis. Let a polynomial P(x) be specified by the data \{(z_j, P_j)\}_{j=0}^{n}, where the z_j are distinct. If the Newton polynomials are defined by setting N_0(x) = 1 and, for k = 1, \ldots, n,

N_k(x) = \prod_{j=0}^{k-1} (x - z_j),   (21)

then

P(x) = \begin{bmatrix} a_0 & a_1 & \cdots & a_{n-1} & a_n \end{bmatrix} I \begin{bmatrix} N_0(x) \\ N_1(x) \\ \vdots \\ N_{n-1}(x) \\ N_n(x) \end{bmatrix}.   (22)

For j = 0, \ldots, n, the a_j can be found by divided differences as follows:

a_j = [P_0\, P_1\, \cdots\, P_j],   (23)

where [P_j] = P_j and

[P_i\, \cdots\, P_{i+j}] = \frac{[P_{i+1}\, \cdots\, P_{i+j}] - [P_i\, \cdots\, P_{i+j-1}]}{z_{i+j} - z_i}.   (24)

If in (5) we let a_j = 1, b_j = z_j and c_j = 0, it becomes the Newton basis. For n = 4, D, as given by (8) and (9), has the following form:

D = \begin{bmatrix}
0 & 0 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & 0 \\
z_0 - z_1 & 2 & 0 & 0 & 0 \\
(z_0 - z_1)(z_0 - z_2) & z_0 + z_1 - 2 z_2 & 3 & 0 & 0 \\
(z_0 - z_1)(z_0 - z_2)(z_0 - z_3) & (z_1 - z_3)(z_0 + z_1 - 2 z_2) + (z_0 - z_1)(z_0 - z_2) & z_0 + z_1 + z_2 - 3 z_3 & 4 & 0
\end{bmatrix}.   (25)
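Since the Newton basis is the special case a_j = 1, b_j = z_j, c_j = 0 of (5), the same recursion applies unchanged. The following check (again our own illustration, not from the original article; it assumes the function degree_graded_diff_matrix from the previous listing is in scope) compares the last row of the computed D with the last row of (25) for random nodes:

import numpy as np

# Newton basis as a degree-graded basis: a_j = 1, b_j = z_j, c_j = 0 in (5).
n = 4
rng = np.random.default_rng(0)
z = rng.uniform(-1.0, 1.0, n + 1)

a = np.ones(n + 1)
b = z.copy()
c = np.zeros(n + 1)
D = degree_graded_diff_matrix(a, b, c, n)

expected_last_row = np.array([
    (z[0] - z[1]) * (z[0] - z[2]) * (z[0] - z[3]),
    (z[1] - z[3]) * (z[0] + z[1] - 2 * z[2]) + (z[0] - z[1]) * (z[0] - z[2]),
    z[0] + z[1] + z[2] - 3 * z[3],
    4.0,
    0.0,
])
print(np.allclose(D[4], expected_last_row))   # True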

The confluent case

Suppose that a polynomial P(x) of degree n, as well as its derivatives, is sampled at k nodes, i.e., distinct (finite) points z_1, z_2, \ldots, z_k. We write P_j = P(z_j), P_j' = P'(z_j), \ldots, P_j^{(s_j)} = P^{(s_j)}(z_j), for j = 1, \ldots, k. Here s = (s_1, s_2, \ldots, s_k) shows the confluencies (i.e., the orders of the derivatives) associated with the nodes, and we have \sum_{i=1}^{k} s_i = n + 1 - k. If, for j = 1, \ldots, k, all s_j = 0, then k = n + 1 and we have the Lagrange interpolation. This is an interesting polynomial interpolation problem that deserves closer consideration: the Hermite interpolation (see, e.g., [3, 10]). It is basically similar to the Lagrange interpolation, but at each node we have the value of P(x) as well as its derivatives up to a certain order.

Now, we assume that at each node z_j we have the value and the derivatives of P(x) up to the s_j-th order. The nodes at which the derivatives are given are treated as extra nodes. In fact, we pretend that we have s_j + 1 nodes z_j at which the value is P_j, and remember that \sum_{i=1}^{k} s_i = n + 1 - k. In this view, the first s_1 + 1 nodes are z_1, the next s_2 + 1 nodes are z_2, and so on. Using the divided-differences technique, as given by (24), to find the a_j, whenever we get [P_j\, P_j\, \cdots\, P_j], where P_j is repeated m times, we have

[P_j\, P_j\, \cdots\, P_j] = \frac{P_j^{(m-1)}}{(m-1)!},   (26)

and all the values P_j to P_j^{(s_j)}, for j = 1, \ldots, k, are given. For more details see, e.g., [3]. The bottom line is that the Hermite basis can be seen as a special case of the Newton basis, and thus a degree-graded basis. For the Hermite basis, as for the Newton basis, a_j = 1, b_j = z_j, and c_j = 0, but some of the b_j are repeated. Other than that, the differentiation matrix D can be found for the Hermite basis in the same way. For a data set such as \{(z_0, P(z_0)), (z_0, P'(z_0)), (z_1, P(z_1)), (z_2, P(z_2)), (z_3, P(z_3))\}, we have b_0 = z_0, b_1 = z_0, b_2 = z_1, b_3 = z_2, and b_4 = z_3. In this case, (25) becomes

D = \begin{bmatrix}
0 & 0 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & 0 \\
0 & 2 & 0 & 0 & 0 \\
0 & 2(z_0 - z_1) & 3 & 0 & 0 \\
0 & 2(z_0 - z_1)(z_0 - z_2) & 2 z_0 + z_1 - 3 z_2 & 4 & 0
\end{bmatrix}.   (27)

Bernstein basis

A Bernstein polynomial (also called a Bézier polynomial) defined over the interval [a, b] has the form

b_j(x) = \binom{n}{j} \frac{(x - a)^j (b - x)^{n-j}}{(b - a)^n}, \quad j = 0, \ldots, n.   (28)

This is not a typical scaling of the Bernstein polynomials; however, this scaling makes the matrix notation related to this basis slightly easier to write. The Bernstein polynomials are nonnegative on [a, b], i.e., b_j(x) \geq 0 for all x \in [a, b], j = 0, \ldots, n. Bernstein polynomials are widely used in computer-aided geometric design (see, e.g., [4]). A polynomial P(x) written in the Bernstein basis is of the form

P(x) = \begin{bmatrix} a_0 & a_1 & \cdots & a_{n-1} & a_n \end{bmatrix} I \begin{bmatrix} b_0(x) \\ b_1(x) \\ \vdots \\ b_{n-1}(x) \\ b_n(x) \end{bmatrix},   (29)

where the a_j are sometimes called the Bézier coefficients, j = 0, \ldots, n.

Lemma 2  If P(x) is given by (29), then

P^{(k)}(x) = \begin{bmatrix} a_0 & a_1 & \cdots & a_{n-1} & a_n \end{bmatrix} D^k \begin{bmatrix} b_0(x) \\ b_1(x) \\ \vdots \\ b_{n-1}(x) \\ b_n(x) \end{bmatrix},   (30)

where D is a size-(n+1) tridiagonal matrix that has the following structure for i, j = 1, \ldots, n+1:

d_{ij} = \frac{1}{b - a} \begin{cases} n - i + 2, & i = j + 1, \\ 2(i - 1) - n, & i = j, \\ -i, & i = j - 1, \\ 0, & \text{otherwise.} \end{cases}   (31)

Proof  A little computation using (28) shows that

b_k'(x) = \frac{1}{b - a} \Big[ (n - k + 1)\, b_{k-1}(x) + (2k - n)\, b_k(x) - (k + 1)\, b_{k+1}(x) \Big]   (32)

for k = 0, \ldots, n. Any b_i(x) with an index that is negative or larger than n is set to 0 in (32). This is why D is tridiagonal, and from here it is easy to derive (31) for D. □

For n = 4, the differentiation matrix is as follows:

D = \frac{1}{b - a} \begin{bmatrix}
-4 & -1 & 0 & 0 & 0 \\
4 & -2 & -2 & 0 & 0 \\
0 & 3 & 0 & -3 & 0 \\
0 & 0 & 2 & 2 & -4 \\
0 & 0 & 0 & 1 & 4
\end{bmatrix}.   (33)
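As an illustration of Lemma 2 (this listing is not part of the original article; the helper names are ours and NumPy is assumed), the tridiagonal matrix (31) can be assembled directly and checked against a central-difference approximation of P'(x) for a polynomial given by its Bézier coefficients:

import math
import numpy as np

def bernstein_diff_matrix(n, a, b):
    """Tridiagonal differentiation matrix of Lemma 2, equation (31), for the
    Bernstein basis b_j(x) = C(n,j) (x-a)^j (b-x)^(n-j) / (b-a)^n."""
    D = np.zeros((n + 1, n + 1))
    for i in range(1, n + 2):             # 1-based row index as in (31)
        D[i - 1, i - 1] = 2 * (i - 1) - n
        if i >= 2:
            D[i - 1, i - 2] = n - i + 2
        if i <= n:
            D[i - 1, i] = -i
    return D / (b - a)

def bernstein_basis(n, a, b, x):
    return np.array([math.comb(n, j) * (x - a)**j * (b - x)**(n - j) / (b - a)**n
                     for j in range(n + 1)])

n, a, b = 4, -1.0, 2.0
coeffs = np.array([0.3, -1.2, 2.0, 0.7, -0.5])    # Bezier coefficients of P
D = bernstein_diff_matrix(n, a, b)

x, h = 0.4, 1e-6
p = lambda t: coeffs @ bernstein_basis(n, a, b, t)
deriv_matrix = coeffs @ D @ bernstein_basis(n, a, b, x)
deriv_fd = (p(x + h) - p(x - h)) / (2 * h)        # central-difference check
print(np.isclose(deriv_matrix, deriv_fd, rtol=1e-5))   # True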

Lagrange basis

Lagrange polynomial interpolation is traditionally viewed as a tool for theoretical analysis; however, recent work reveals several advantages to computing with new polynomial interpolation techniques in the Lagrange basis (see, e.g., [1, 9]). Suppose that a polynomial P(x) of degree n is sampled at n + 1 distinct points z_0, z_1, \ldots, z_n, and write p_j = P(z_j). The Lagrange polynomials are defined by

L_j(x) = \frac{\ell(x)\, w_j}{x - z_j}, \quad j = 0, \ldots, n,   (34)

where the weights w_j are

w_j = \frac{1}{\prod_{m=0,\, m \neq j}^{n} (z_j - z_m)}   (35)

and

\ell(x) = \prod_{m=0}^{n} (x - z_m).   (36)

Then P(x) can be expressed in terms of its samples in the following form:

P(x) = \begin{bmatrix} p_0 & p_1 & \cdots & p_{n-1} & p_n \end{bmatrix} I \begin{bmatrix} L_0(x) \\ L_1(x) \\ \vdots \\ L_{n-1}(x) \\ L_n(x) \end{bmatrix}.   (37)

Lemma 3  If P(x) is given by (37), then

P^{(k)}(x) = \begin{bmatrix} p_0 & p_1 & \cdots & p_{n-1} & p_n \end{bmatrix} D^k \begin{bmatrix} L_0(x) \\ L_1(x) \\ \vdots \\ L_{n-1}(x) \\ L_n(x) \end{bmatrix},   (38)

where D is a size-(n+1) matrix that has the following structure for i, j = 0, \ldots, n:

d_{ij} = \begin{cases} \displaystyle\sum_{k=0,\, k \neq i}^{n} \frac{1}{z_i - z_k}, & i = j, \\[2mm] \dfrac{w_i}{w_j\, (z_j - z_i)}, & i \neq j. \end{cases}   (39)

Proof  A little computation using (34) shows that

L_i'(x) = \sum_{j=0,\, j \neq i}^{n} \frac{w_i}{w_j\, (z_j - z_i)}\, L_j(x) + \Bigg( \sum_{k=0,\, k \neq i}^{n} \frac{1}{z_i - z_k} \Bigg) L_i(x)   (40)

for i = 0, \ldots, n. From here, it is easy to derive (39) for D. □

For n = 3, the differentiation matrix is as follows:

D = \begin{bmatrix}
\frac{1}{z_0 - z_1} + \frac{1}{z_0 - z_2} + \frac{1}{z_0 - z_3} & \frac{w_0}{w_1 (z_1 - z_0)} & \frac{w_0}{w_2 (z_2 - z_0)} & \frac{w_0}{w_3 (z_3 - z_0)} \\[1mm]
\frac{w_1}{w_0 (z_0 - z_1)} & \frac{1}{z_1 - z_0} + \frac{1}{z_1 - z_2} + \frac{1}{z_1 - z_3} & \frac{w_1}{w_2 (z_2 - z_1)} & \frac{w_1}{w_3 (z_3 - z_1)} \\[1mm]
\frac{w_2}{w_0 (z_0 - z_2)} & \frac{w_2}{w_1 (z_1 - z_2)} & \frac{1}{z_2 - z_0} + \frac{1}{z_2 - z_1} + \frac{1}{z_2 - z_3} & \frac{w_2}{w_3 (z_3 - z_2)} \\[1mm]
\frac{w_3}{w_0 (z_0 - z_3)} & \frac{w_3}{w_1 (z_1 - z_3)} & \frac{w_3}{w_2 (z_2 - z_3)} & \frac{1}{z_3 - z_0} + \frac{1}{z_3 - z_1} + \frac{1}{z_3 - z_2}
\end{bmatrix}.   (41)
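As an illustration of Lemma 3 (again not part of the original article; the helper name lagrange_diff_matrix is ours and NumPy is assumed), the matrix (39) can be built from the barycentric weights and checked at the nodes, where the Lagrange basis reduces to the unit vectors:

import numpy as np

def lagrange_diff_matrix(z):
    """Differentiation matrix of Lemma 3, equation (39), for the Lagrange basis
    on distinct nodes z_0, ..., z_n, built from the barycentric weights w_j."""
    z = np.asarray(z, dtype=float)
    m = len(z)
    w = np.array([1.0 / np.prod([z[j] - z[k] for k in range(m) if k != j])
                  for j in range(m)])
    D = np.zeros((m, m))
    for i in range(m):
        for j in range(m):
            if j != i:
                D[i, j] = w[i] / (w[j] * (z[j] - z[i]))
        D[i, i] = sum(1.0 / (z[i] - z[k]) for k in range(m) if k != i)
    return D

# Check: sample P(x) = x^3 - 2x + 1 at four nodes (n = 3), so p_j = P(z_j).
z = np.array([-1.0, -0.2, 0.5, 1.3])
p = z**3 - 2 * z + 1
D = lagrange_diff_matrix(z)

# At x = z_m the Lagrange basis vector is the m-th unit vector, so the values of
# P'(z_m) are simply the entries of the row vector p^T D.
deriv_at_nodes = p @ D
print(np.allclose(deriv_at_nodes, 3 * z**2 - 2))   # True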

Concluding remarks

A differentiation matrix D of size n + 1 in a given basis is a nilpotent matrix of degree n + 1. As such, all of its eigenvalues are zero. Assuming n = 5 (the generalizations for all positive n are clear), we can fairly easily verify that the Jordan form of D in any basis is

J = \begin{bmatrix}
0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 \\
0 & 0 & 0 & 0 & 0 & 0
\end{bmatrix}.   (42)

We can write J = T^{-1} D T. The jth column of the transformation matrix T is the first column of D^{n+1-j}, for j = 1, \ldots, n+1. For example, for the D given by (4), we have

T = \begin{bmatrix}
0 & 0 & 0 & 0 & 0 & 1 \\
0 & 0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 2 & 0 & 0 \\
0 & 0 & 6 & 0 & 0 & 0 \\
0 & 24 & 0 & 0 & 0 & 0 \\
120 & 0 & 0 & 0 & 0 & 0
\end{bmatrix}.   (43)

Now, if we consider two different polynomial bases for which the differentiation matrices are D_1 and D_2, then we can write J = T_1^{-1} D_1 T_1 and J = T_2^{-1} D_2 T_2. From here, it is easy to check that

D_2 = (T_1 T_2^{-1})^{-1} D_1 (T_1 T_2^{-1}),   (44)

which shows, as expected, that the differentiation matrices in different bases are similar.

In this paper, we have found explicit formulas for the differentiation matrix D in various polynomial bases. The most important advantage of having D explicitly is that there is no need to go from one basis to another (normally the monomial basis) in order to differentiate a polynomial given in a particular basis. Moreover, having D, we can easily find higher-order derivatives of any polynomial in its original basis. One may hope that new and more efficient polynomial-related algorithms, such as root-finding methods, can be developed using D. These results can be easily extended to matrix polynomials in different bases.

Open Access  This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

References

1. Berrut, J.-P., Trefethen, L.N.: Barycentric Lagrange interpolation. SIAM Rev. 46(3), 501–517 (2004)
2. Chirikalov, V.I.: Computation of the differentiation matrix for the Hermite interpolating polynomials. J. Math. Sci.
3. Davis, P.J.: Interpolation and Approximation. Blaisdell, New York (1963)
4. Farin, G.: Curves and Surfaces for Computer-Aided Geometric Design. Academic Press, San Diego
5. Farouki, R.T., Goodman, T.N.T., Sauer, T.: Construction of orthogonal bases for polynomials in Bernstein form on triangular and simplex domains. Comput. Aided Geom. Design
6. Gander, W.: Change of basis in polynomial interpolation. Numer. Linear Algebra Appl.
7. Gautschi, W.: Orthogonal Polynomials: Computation and Approximation. Clarendon Press, Oxford (2004)
8. Hermann, T.: On the stability of polynomial transformations between Taylor, Bézier and Hermite forms. Numer. Algorithms
9. Higham, N.J.: The numerical stability of barycentric Lagrange interpolation. IMA J. Numer. Anal. 24, 547–556 (2004)
10. Lorentz, G.G., Jetter, K., Riemenschneider, S.D.: Birkhoff Interpolation. Addison-Wesley, Boston (1983)
11. Rivlin, T.J.: Chebyshev Polynomials. Wiley, New York (1990)
12. Rababah, A., Al-Refai, M., Al-Jarrah, R.: Computing derivatives of Jacobi polynomials using Bernstein transformation and differentiation matrix. Numer. Funct. Anal. Optim.
13. Shen, J., Tang, T., Wang, L.-L.: Spectral Methods: Algorithms, Analysis and Applications. Springer Series in Computational Mathematics. Springer, New York (2011)
14. Wang, L.-L., Samson, M.D., Zhao, X.: A well-conditioned collocation method using a pseudospectral integration matrix. SIAM J. Sci. Comput. (2014)
