An Arnoldi Gram-Schmidt process and Hessenberg matrices for Orthonormal Polynomials


[ 1 ] An Arnoldi Gram-Schmidt process and Hessenberg matrices for Orthonormal Polynomials

Nikos Stylianopoulos, University of Cyprus

New Perspectives in Univariate and Multivariate Orthogonal Polynomials, Banff International Research Station, October 2010

Outline: Construction, Asymptotics, Matrices

[ 2 ] Bergman polynomials $\{p_n\}$ on an archipelago

(Figure: the islands $G_1, G_2, \dots, G_N$ with boundaries $\Gamma_1, \Gamma_2, \dots, \Gamma_N$.)

$\Gamma_j$, $j = 1, \dots, N$, is a system of disjoint and mutually exterior Jordan curves in $\mathbb{C}$; $G_j := \operatorname{int}(\Gamma_j)$, $\Gamma := \cup_{j=1}^N \Gamma_j$, $G := \cup_{j=1}^N G_j$.

$\langle f, g \rangle := \int_G f(z)\,\overline{g(z)}\, dA(z)$, $\quad \|f\|_{L^2(G)} := \langle f, f \rangle^{1/2}$.

The Bergman polynomials $\{p_n\}_{n=0}^\infty$ of $G$ are the orthonormal polynomials with respect to the area measure on $G$:
$\langle p_m, p_n \rangle = \int_G p_m(z)\,\overline{p_n(z)}\, dA(z) = \delta_{m,n}$,
with $p_n(z) = \lambda_n z^n + \cdots$, $\lambda_n > 0$, $n = 0, 1, 2, \dots$.

[ 3 ] Construction of the $p_n$'s

Algorithm: Conventional Gram-Schmidt (GS). Apply the Gram-Schmidt process to the monomials $1, z, z^2, z^3, \dots$

Main ingredient: the moments
$\mu_{m,k} := \langle z^m, z^k \rangle = \int_G z^m \overline{z^k}\, dA(z)$, $\quad m, k = 0, 1, \dots$.

The above algorithm was suggested by pioneers of Numerical Conformal Mapping (such as P. Davis and D. Gaier) as the standard procedure for constructing Bergman polynomials. It was subsequently used by researchers in this area (e.g. Burbea, Kokkinos, Papamichael, Sideridis and Warby), and it has even been employed in the numerical conformal mapping FORTRAN package BKMPACK of Warby.
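A minimal sketch of this moment-based construction, assuming the unit disk as the domain so that the moments are available in closed form; for a general $G$ the moments would have to come from numerical area integration. The GS step is realized here through a Cholesky factorization of the moment (Gram) matrix:

```python
import numpy as np

def bergman_coeffs_disk(n):
    """Conventional GS on the monomials 1, z, ..., z^n for the unit disk.

    For the disk the moments are known in closed form:
    mu_{m,k} = <z^m, z^k> = delta_{m,k} * pi/(m+1).
    """
    m = np.arange(n + 1)
    gram = np.diag(np.pi / (m + 1.0))     # moment (Gram) matrix
    L = np.linalg.cholesky(gram)          # gram = L @ L.T
    # Rows of inv(L) hold the monomial coefficients of p_0, ..., p_n:
    # with C := inv(L) one has C @ gram @ C.T = I, i.e. the p_j are orthonormal.
    return np.linalg.inv(L)

C = bergman_coeffs_disk(4)
lead = np.diag(C)   # leading coefficients lambda_n
```

For the disk this reproduces $p_n(z) = \sqrt{(n+1)/\pi}\, z^n$; the point of the slides is that for other domains the Gram matrix becomes severely ill-conditioned.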

[ 4 ] Instability Indicator

The GS method is notorious for its instability. To measure it, when orthonormalizing a system $S_n := \{u_0, u_1, \dots, u_n\}$ of functions, the following instability indicator was proposed by J. M. Taylor (Proc. R. S. Edin., 1978):

$I_n := \dfrac{\|u_n\|_{L^2(G)}^2}{\min_{u \in \mathrm{span}(S_{n-1})} \|u_n - u\|_{L^2(G)}^2}$, $\quad n \in \mathbb{N}$.

Note that when $S_n$ is an orthonormal system, then $I_n = 1$, while when $S_n$ is linearly dependent, $I_n = \infty$. Also, if $G_n := [\langle u_m, u_k \rangle]_{m,k=0}^{n}$ denotes the Gram matrix associated with $S_n$, then $I_n \le \kappa_2(G_n)$, where $\kappa_2(G_n)$ is the spectral condition number of $G_n$.
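To see how quickly the Gram matrix of the monomials degenerates, here is a quick illustration in the simpler setting of $L^2[0,1]$ (an assumption made only to have closed-form moments): there the Gram matrix is the Hilbert matrix, whose spectral condition number, the quantity dominating $I_n$, explodes with $n$:

```python
import numpy as np

def monomial_gram_cond(n):
    """kappa_2 of the Gram matrix of 1, x, ..., x^n in L2[0,1].

    The moments are <x^m, x^k> = 1/(m+k+1), so the Gram matrix is the
    (n+1) x (n+1) Hilbert matrix, a textbook example of ill-conditioning.
    """
    i, j = np.indices((n + 1, n + 1))
    return np.linalg.cond(1.0 / (i + j + 1.0))

conds = [monomial_gram_cond(n) for n in (2, 5, 9)]
```

Already for degree 9 the condition number exceeds $10^{10}$, so conventional GS on the monomials loses all accuracy in double precision well before $n = 25$.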

[ 5 ] Instability of the Conventional GS

In the single-component case $N = 1$, consider the monomials $u_j := z^j$, $j = 0, 1, \dots, n$. Then, for the conventional GS we have the following result:

Theorem (N. Papamichael and M. Warby, Numer. Math., 1986). Assume that the curve $\Gamma$ is piecewise analytic without cusps and let $L := \|z\|_{L^\infty(\Gamma)} / \mathrm{cap}(\Gamma)$ $(\ge 1)$, where $\mathrm{cap}(\Gamma)$ denotes the capacity of $\Gamma$. Then,
$c_1(\Gamma)\, L^{2n} \le I_n \le c_2(\Gamma)\, L^{2n}$.

Note that $L = 1$ iff $G$ is a disk centered at the origin, and that $I_n$ is sensitive to the relative position of $G$ with respect to the origin. When $G$ is the $8 \times 2$ rectangle centered at the origin, then $L = 3/\dots$ In this case $I_n$ becomes enormous and the method breaks down in MATLAB for $n = 25$.

[ 6 ] The Arnoldi algorithm in Numerical Linear Algebra

Let $A \in \mathbb{C}^{m \times m}$, $b \in \mathbb{C}^m$, and consider the Krylov subspace
$K_k := \mathrm{span}\{b, Ab, A^2 b, \dots, A^{k-1} b\}$.

The Arnoldi algorithm (W. Arnoldi, Quart. Appl. Math., 1951) produces an orthonormal basis $\{v_1, v_2, \dots, v_k\}$ of $K_k$ as follows: at the $n$-th step, apply GS to orthonormalize the vector $A v_{n-1}$ (instead of $A^{n-1} b$) against the already computed orthonormal vectors $\{v_1, v_2, \dots, v_{n-1}\}$.
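A minimal sketch of the iteration (modified Gram-Schmidt is used for the orthogonalization, a standard stabilized variant; the demo matrix below is an arbitrary choice):

```python
import numpy as np

def arnoldi(A, b, k):
    """Orthonormal basis V of K_k(A, b) plus the (k+1) x k upper
    Hessenberg matrix H satisfying A @ V[:, :k] = V @ H."""
    m = A.shape[0]
    V = np.zeros((m, k + 1), dtype=complex)
    H = np.zeros((k + 1, k), dtype=complex)
    V[:, 0] = b / np.linalg.norm(b)
    for n in range(k):
        w = A @ V[:, n]                  # orthonormalize A v_n, not A^n b
        for j in range(n + 1):           # modified Gram-Schmidt sweep
            H[j, n] = np.vdot(V[:, j], w)
            w -= H[j, n] * V[:, j]
        H[n + 1, n] = np.linalg.norm(w)  # = 0 would signal breakdown
        V[:, n + 1] = w / H[n + 1, n]
    return V, H

# tiny demo: an 8 x 8 bidiagonal matrix and a generic starting vector
A = np.diag(np.arange(1.0, 9.0)) + np.diag(np.ones(7), 1)
b = np.ones(8)
V, H = arnoldi(A, b, 5)
```

The columns of `V` are orthonormal and the Arnoldi relation $A V_k = V_{k+1} \widetilde{H}_k$ holds, which is exactly the structure exploited below for orthonormal polynomials.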

[ 7 ] The Arnoldi algorithm for orthogonal polynomials

Let $\mu$ be a (non-trivial) finite Borel measure with compact support $\Sigma := \mathrm{supp}(\mu)$ in $\mathbb{C}$, and consider the sequence of orthonormal polynomials
$p_n(z, \mu) := \lambda_n(\mu)\, z^n + \cdots$, $\lambda_n(\mu) > 0$, $n = 0, 1, 2, \dots$,
generated by the inner product $\langle f, g \rangle_\mu = \int f(z)\,\overline{g(z)}\, d\mu(z)$.

Arnoldi GS for orthonormal polynomials: at the $n$-th step, apply GS to orthonormalize the polynomial $z p_{n-1}$ (instead of $z^n$) against the already computed orthonormal polynomials $\{p_0, p_1, \dots, p_{n-1}\}$.

Used in W. B. Gragg & L. Reichel, Linear Algebra Appl. (1987), for the construction of Szegő polynomials.
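The same loop, written for polynomials, can be sketched for a discrete measure; the nodes-and-weights representation below is an illustrative assumption. The Lebesgue measure on the unit circle is discretized at roots of unity, for which the orthonormal polynomials are known to be $z^n$:

```python
import numpy as np

def arnoldi_op(nodes, weights, k):
    """Arnoldi GS for the orthonormal polynomials of the discrete measure
    sum_j weights[j] * delta_{nodes[j]}; a polynomial is stored by its
    values at the nodes.  Step n orthonormalizes z * p_{n-1}, not z^n."""
    ip = lambda f, g: np.sum(weights * f * np.conj(g))
    p0 = np.ones_like(nodes)
    P = [p0 / np.sqrt(ip(p0, p0).real)]
    for n in range(1, k + 1):
        q = nodes * P[-1]                  # multiply the last polynomial by z
        for p in P:                        # GS against p_0, ..., p_{n-1}
            q = q - ip(q, p) * p
        P.append(q / np.sqrt(ip(q, q).real))
    return P

# normalized Lebesgue measure on the unit circle, sampled at 32 roots of unity
nodes = np.exp(2j * np.pi * np.arange(32) / 32)
P = arnoldi_op(nodes, np.full(32, 1.0 / 32), 4)
```

For this measure the process reproduces $p_n(z) = z^n$ (for $n$ below the number of nodes); the identical loop applied to an area measure yields the Bergman polynomials.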

[ 8 ] Stability of the Arnoldi GS

In the case of the Arnoldi GS, the instability indicator is given by:
$I_n := \dfrac{\|z p_{n-1}\|_{L^2(G)}^2}{\min_{p \in \mathbb{P}_{n-1}} \|z p_{n-1} - p\|_{L^2(G)}^2}$, $\quad n \in \mathbb{N}$.

Theorem. It holds that
$1 \le I_n \le \|z\|_{L^\infty(\Sigma)}^2\, \dfrac{\lambda_{n-1}^2(\mu)}{\lambda_n^2(\mu)}$, $\quad n \in \mathbb{N}$.

Typically: when $d\mu = |dz|$ (Szegő polynomials) or $d\mu = dA$ (Bergman polynomials), then
$c_1(\Gamma) \le \dfrac{\lambda_{n-1}(\mu)}{\lambda_n(\mu)} \le c_2(\Gamma)$, $\quad n \in \mathbb{N}$,
and when $d\mu = w(x)\,dx$ on $[a, b] \subset \mathbb{R}$, this ratio tends to a constant.

[ 9 ] Zeros of Bergman polynomials: Three Disks

(Figure: zeros of the Bergman polynomials $p_n$, $n = 140$, $150$ and $160$, for a three-disk archipelago.)

Theory in: Gustafsson, Putinar, Saff & St, Adv. Math., 2009.

[ 10 ] Single-component case $N = 1$

(Figure: the exterior conformal map $\Phi$ from $\Omega$ onto $\Delta := \{w : |w| > 1\}$.)

$\Omega := \overline{\mathbb{C}} \setminus \overline{G}$,
$\quad \Phi(z) = \gamma z + \gamma_0 + \dfrac{\gamma_1}{z} + \dfrac{\gamma_2}{z^2} + \cdots$,
$\quad \mathrm{cap}(\Gamma) = 1/\gamma$.

The Bergman polynomials of $G$: $p_n(z) = \lambda_n z^n + \cdots$, $\lambda_n > 0$, $n = 0, 1, 2, \dots$.

[ 11 ] Strong asymptotics when $\Gamma$ is analytic

(Figure: the level curve $L_\rho$, $\rho < 1$, inside $G$.)

T. Carleman, Ark. Mat. Astr. Fys. (1922). If $\rho < 1$ is the smallest index for which $\Phi$ is conformal in the exterior of $L_\rho$, then
$\dfrac{n+1}{\pi}\, \dfrac{\gamma^{2(n+1)}}{\lambda_n^2} = 1 - \alpha_n$, where $0 \le \alpha_n \le c_1(\Gamma)\, \rho^{2n}$,
and
$p_n(z) = \sqrt{\dfrac{n+1}{\pi}}\, \Phi^n(z)\, \Phi'(z)\, \{1 + A_n(z)\}$, $\quad n \in \mathbb{N}$,
where $|A_n(z)| \le c_2(\Gamma)\, \sqrt{n}\, \rho^n$, $z \in \overline{\Omega}$.

[ 12 ] Strong asymptotics when $\Gamma$ is smooth

We say that $\Gamma \in C(p, \alpha)$, for some $p \in \mathbb{N}$ and $0 < \alpha < 1$, if $\Gamma$ is given by $z = g(s)$, where $s$ is the arclength, with $g^{(p)} \in \mathrm{Lip}\,\alpha$. Then both $\Phi$ and $\Psi := \Phi^{-1}$ are $p$ times continuously differentiable in $\overline{\Omega} \setminus \{\infty\}$ and $\overline{\Delta} \setminus \{\infty\}$ respectively, with $\Phi^{(p)}$ and $\Psi^{(p)} \in \mathrm{Lip}\,\alpha$.

P. K. Suetin, Proc. Steklov Inst. Math., AMS (1974). Assume that $\Gamma \in C(p+1, \alpha)$, with $p + \alpha > 1/2$. Then
$\dfrac{n+1}{\pi}\, \dfrac{\gamma^{2(n+1)}}{\lambda_n^2} = 1 - \alpha_n$, where $0 \le \alpha_n \le \dfrac{c_1(\Gamma)}{n^{2(p+\alpha)}}$,
and
$p_n(z) = \sqrt{\dfrac{n+1}{\pi}}\, \Phi^n(z)\, \Phi'(z)\, \{1 + A_n(z)\}$, $\quad n \in \mathbb{N}$,
where $|A_n(z)| \le c_2(\Gamma)\, \dfrac{\log n}{n^{p+\alpha}}$, $z \in \overline{\Omega}$.

[ 13 ] Strong asymptotics for $\Gamma$ non-smooth

Theorem (St, C. R. Acad. Sci. Paris, 2010). Assume that $\Gamma$ is piecewise analytic without cusps. Then,
$\dfrac{n+1}{\pi}\, \dfrac{\gamma^{2(n+1)}}{\lambda_n^2} = 1 - \alpha_n$, where $0 \le \alpha_n \le c(\Gamma)\, \dfrac{1}{n}$, $n \in \mathbb{N}$,
and for any $z \in \Omega$,
$p_n(z) = \sqrt{\dfrac{n+1}{\pi}}\, \Phi^n(z)\, \Phi'(z)\, \{1 + A_n(z)\}$,
where
$|A_n(z)| \le \dfrac{c_1(\Gamma)}{\mathrm{dist}(z, \Gamma)\, |\Phi'(z)|}\, \dfrac{1}{\sqrt{n}} + c_2(\Gamma)\, \dfrac{1}{n}$, $\quad n \in \mathbb{N}$.

[ 14 ] Ratio asymptotics for $\lambda_n$

Corollary (St, C. R. Acad. Sci. Paris, 2010).
$\sqrt{\dfrac{n+1}{n}}\, \dfrac{\lambda_{n-1}}{\lambda_n} = \mathrm{cap}(\Gamma) + \xi_n$, where $|\xi_n| \le c(\Gamma)\, \dfrac{1}{n}$, $n \in \mathbb{N}$.

The above relation provides the means for computing approximations to the capacity of $\Gamma$, using only the leading coefficients of the Bergman polynomials. In addition, it yields:

Corollary. $c_1(\Gamma) \le I_n \le c_2(\Gamma)$, $n \in \mathbb{N}$.

Hence, under the assumptions of the previous theorem, the Arnoldi GS for Bergman polynomials in the single-component case is stable.
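The capacity recipe can be sketched as follows, tested on the unit disk, where $\lambda_n = \sqrt{(n+1)/\pi}$ and $\mathrm{cap}(\Gamma) = 1$ are known exactly:

```python
import numpy as np

def cap_from_leading_coeffs(lam, n):
    """Approximate cap(Gamma) via sqrt((n+1)/n) * lambda_{n-1}/lambda_n."""
    return np.sqrt((n + 1.0) / n) * lam[n - 1] / lam[n]

# unit disk: lambda_n = sqrt((n+1)/pi) exactly, and cap(Gamma) = 1
lam = np.sqrt((np.arange(41) + 1.0) / np.pi)
est = cap_from_leading_coeffs(lam, 40)
```

In practice the $\lambda_n$ would come from the Arnoldi GS run itself, so the capacity estimate is a free by-product of constructing the polynomials.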

[ 15 ] Leading coefficients on an archipelago

Theorem (Gustafsson, Putinar, Saff & St, Adv. Math., 2009). Assume that every $\Gamma_j$ is analytic, $j = 1, 2, \dots, N$. Then,
$c_1(\Gamma)\, \sqrt{\dfrac{n+1}{\pi}}\, \dfrac{1}{\mathrm{cap}(\Gamma)^{n+1}} \le \lambda_n \le c_2(\Gamma)\, \sqrt{\dfrac{n+1}{\pi}}\, \dfrac{1}{\mathrm{cap}(\Gamma)^{n+1}}$, $\quad n \in \mathbb{N}$.

Corollary. $c_3(\Gamma) \le I_n \le c_4(\Gamma)$, $n \in \mathbb{N}$.

Hence, the Arnoldi GS for Bergman polynomials on an archipelago is stable.

[ 16 ] The inverse conformal map $\Psi$

(Figure: $\Psi := \Phi^{-1}$ maps $\Delta$ onto $\Omega$.)

Recall that $\Phi(z) = \gamma z + \gamma_0 + \dfrac{\gamma_1}{z} + \dfrac{\gamma_2}{z^2} + \cdots$, and let
$\Psi := \Phi^{-1} : \{w : |w| > 1\} \to \Omega$
denote the inverse conformal map. Then
$\Psi(w) = b\, w + b_0 + \dfrac{b_1}{w} + \dfrac{b_2}{w^2} + \cdots$, $\quad |w| > 1$,
where $b = \mathrm{cap}(\Gamma) = 1/\gamma$.

[ 17 ] Faber polynomials of $G$

The Faber polynomial $F_n(z)$ ($n \in \mathbb{N}$) of $G$ is the polynomial part of the Laurent series expansion of $\Phi^n(z)$ at $\infty$:
$F_n(z) = \Phi^n(z) + O\!\left(\dfrac{1}{z}\right)$, $\quad z \to \infty$.

The Faber polynomial of the 2nd kind $G_n(z)$ is the polynomial part of the Laurent series expansion of $\Phi^n(z)\, \Phi'(z)$ at $\infty$:
$G_n(z) = \Phi^n(z)\, \Phi'(z) + O\!\left(\dfrac{1}{z}\right)$, $\quad z \to \infty$.

Note: $G_n(z) = \dfrac{F_{n+1}'(z)}{n+1}$.

[ 18 ] The Faber matrix $G$

The Faber polynomials of the 2nd kind satisfy the recurrence relation
$z\, G_n(z) = b\, G_{n+1}(z) + \sum_{k=0}^{n} b_k\, G_{n-k}(z)$, $\quad n = 0, 1, \dots$,
and induce the upper Hessenberg Toeplitz matrix

$G = \begin{pmatrix} b_0 & b_1 & b_2 & b_3 & b_4 & \cdots \\ b & b_0 & b_1 & b_2 & b_3 & \cdots \\ 0 & b & b_0 & b_1 & b_2 & \cdots \\ 0 & 0 & b & b_0 & b_1 & \cdots \\ 0 & 0 & 0 & b & b_0 & \cdots \\ \vdots & & & \ddots & \ddots & \ddots \end{pmatrix}.$
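A sketch of a builder for finite sections of this matrix; the coefficients $b, b_0, b_1, \dots$ are inputs, and the demo values ($\Psi(w) = w$, i.e. the unit disk) are just the simplest test case:

```python
import numpy as np

def faber_matrix(b, b_coef, n):
    """n x n section of the Hessenberg Toeplitz matrix induced by
    z G_k(z) = b G_{k+1}(z) + sum_{j=0}^{k} b_j G_{k-j}(z)."""
    G = np.zeros((n, n), dtype=complex)
    G += np.diag(np.full(n - 1, b), -1)      # constant subdiagonal b
    for j, bj in enumerate(b_coef[:n]):      # b_0 on the main diagonal,
        G += np.diag(np.full(n - j, bj), j)  # b_1, b_2, ... above it
    return G

# Psi(w) = w (unit disk): b = 1, all b_j = 0, so G is the pure shift
G5 = faber_matrix(1.0, [], 5)
```

For the disk the matrix is the down-shift, consistent with $G_n(z) = z^n$ having all its zeros at the origin.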

[ 19 ] The upper Hessenberg matrix $M$

The Bergman polynomials satisfy the recurrence relation
$z\, p_n(z) = \sum_{k=0}^{n+1} a_{k,n}\, p_k(z)$, $\quad n = 0, 1, \dots$,
where the $a_{k,n}$ are Fourier coefficients: $a_{k,n} = \langle z p_n, p_k \rangle$, and induce the (infinite) upper Hessenberg matrix

$M = \begin{pmatrix} a_{00} & a_{01} & a_{02} & a_{03} & \cdots \\ a_{10} & a_{11} & a_{12} & a_{13} & \cdots \\ 0 & a_{21} & a_{22} & a_{23} & \cdots \\ 0 & 0 & a_{32} & a_{33} & \cdots \\ \vdots & & \ddots & \ddots & \end{pmatrix}.$

[ 20 ] Eigenvalues = Zeros

Note: the eigenvalues of the finite $n \times n$ Faber matrix are the zeros of the Faber polynomial $G_n(z)$.

Note: the eigenvalues of the finite $n \times n$ Hessenberg matrix are the zeros of the Bergman polynomial $p_n(z)$.
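As a sanity check of the eigenvalues-equal-zeros principle, here is the classical real-line case, chosen only because the zeros are known in closed form; there the Hessenberg matrix reduces to the tridiagonal Jacobi matrix, and the eigenvalues of its $3 \times 3$ section are the zeros $0, \pm\sqrt{3/5}$ of the degree-3 Legendre polynomial:

```python
import numpy as np

def legendre_jacobi(n):
    """n x n Jacobi (tridiagonal Hessenberg) matrix of the Legendre
    measure dx on [-1, 1]: zero diagonal, off-diagonal k/sqrt(4k^2 - 1)."""
    k = np.arange(1.0, n)
    off = k / np.sqrt(4.0 * k**2 - 1.0)
    return np.diag(off, 1) + np.diag(off, -1)

zeros = np.sort(np.linalg.eigvalsh(legendre_jacobi(3)))
```

For Bergman polynomials the same computation applies to the (non-symmetric) finite sections of $M$, with `eigvalsh` replaced by a general eigenvalue solver such as `numpy.linalg.eigvals`.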

[ 21 ] Main subdiagonal

Consider the main subdiagonal of the Hessenberg matrix:
$a_{n+1,n} = \langle z p_n, p_{n+1} \rangle = \langle \lambda_n z^{n+1} + \cdots,\; p_{n+1} \rangle = \dfrac{\lambda_n}{\lambda_{n+1}}\, \langle p_{n+1}, p_{n+1} \rangle = \dfrac{\lambda_n}{\lambda_{n+1}}$.

Since $\mathrm{cap}(\Gamma) = b$, it follows from the ratio asymptotics for $\lambda_n$ that:

Lemma.
$\sqrt{\dfrac{n+2}{n+1}}\; a_{n+1,n} = b + O\!\left(\dfrac{1}{n}\right)$, $\quad n \in \mathbb{N}$.

That is, the main subdiagonal of the Hessenberg matrix tends to the main subdiagonal of the Faber matrix.

[ 22 ] $M \to G$

More generally, using the theory of strong asymptotics for non-smooth curves we have:

Theorem (Saff & St). Assume that $\Gamma$ is piecewise analytic without cusps. Then, for any fixed $k \in \mathbb{N} \cup \{0\}$,
$\sqrt{\dfrac{n}{n+k+1}}\; a_{n,n+k} = b_k + O\!\left(\dfrac{1}{\sqrt{n}}\right)$, $\quad n \to \infty$.

That is, the $k$-th diagonal of the Hessenberg matrix tends to the $k$-th diagonal of the Faber matrix.

[ 23 ] Ratio asymptotics for $p_n(z)$

Theorem (St, C. R. Acad. Sci. Paris, 2010). Assume that $\Gamma$ is piecewise analytic without cusps. Then, for any $z \in \Omega$ and sufficiently large $n \in \mathbb{N}$,
$\sqrt{\dfrac{n+1}{n+2}}\; \dfrac{p_{n+1}(z)}{p_n(z)} = \Phi(z)\, \{1 + B_n(z)\}$,
where
$|B_n(z)| \le \dfrac{c_1(\Gamma)}{\mathrm{dist}(z, \Gamma)\, |\Phi'(z)|}\, \dfrac{1}{\sqrt{n}} + c_2(\Gamma)\, \dfrac{1}{n}$.

[ 24 ] Only ellipses carry finite-term recurrences for $\{p_n\}$

Definition. We say that the polynomials $\{p_n\}_{n=0}^\infty$ satisfy an $(M+1)$-term recurrence relation if, for any $n \ge M - 1$,
$z\, p_n(z) = a_{n+1,n}\, p_{n+1}(z) + a_{n,n}\, p_n(z) + \cdots + a_{n-M+1,n}\, p_{n-M+1}(z)$.

Theorem (St, C. R. Acad. Sci. Paris, 2010). Assume that:
$\Gamma = \partial G$ is piecewise analytic without cusps;
the Bergman polynomials $\{p_n\}_{n=0}^\infty$ satisfy an $(M+1)$-term recurrence relation, with some $M \ge 2$.
Then $M = 2$ and $\Gamma$ is an ellipse.

The above theorem refines results of Putinar & St (CAOT, 2007) and Khavinson & St (Springer, 2010).

[ 25 ] Banded Hessenberg matrices are tridiagonal

Corollary. If the Hessenberg matrix is banded with constant bandwidth, then it is tridiagonal (i.e., the bandwidth is 3).

This result should put an end to the long search in Numerical Linear Algebra for practical polynomial iteration methods based on short-term recurrence relations of orthogonal polynomials.


HADAMARD GAP THEOREM AND OVERCONVERGENCE FOR FABER-EROKHIN EXPANSIONS. PATRICE LASSÈRE & NGUYEN THANH VAN HADAMARD GAP THEOREM AND OVERCONVERGENCE FOR FABER-EROKHIN EXPANSIONS. PATRICE LASSÈRE & NGUYEN THANH VAN Résumé. Principally, we extend the Hadamard-Fabry gap theorem for power series to Faber-Erokhin

More information

A complex geometric proof of Tian-Yau-Zelditch expansion

A complex geometric proof of Tian-Yau-Zelditch expansion A complex geometric proof of Tian-Yau-Zelditch expansion Zhiqin Lu Department of Mathematics, UC Irvine, Irvine CA 92697 October 21, 2010 Zhiqin Lu, Dept. Math, UCI A complex geometric proof of TYZ expansion

More information

Compact symetric bilinear forms

Compact symetric bilinear forms Compact symetric bilinear forms Mihai Mathematics Department UC Santa Barbara IWOTA 2006 IWOTA 2006 Compact forms [1] joint work with: J. Danciger (Stanford) S. Garcia (Pomona

More information

1 Last time: least-squares problems

1 Last time: least-squares problems MATH Linear algebra (Fall 07) Lecture Last time: least-squares problems Definition. If A is an m n matrix and b R m, then a least-squares solution to the linear system Ax = b is a vector x R n such that

More information

Chap 3. Linear Algebra

Chap 3. Linear Algebra Chap 3. Linear Algebra Outlines 1. Introduction 2. Basis, Representation, and Orthonormalization 3. Linear Algebraic Equations 4. Similarity Transformation 5. Diagonal Form and Jordan Form 6. Functions

More information

ELE/MCE 503 Linear Algebra Facts Fall 2018

ELE/MCE 503 Linear Algebra Facts Fall 2018 ELE/MCE 503 Linear Algebra Facts Fall 2018 Fact N.1 A set of vectors is linearly independent if and only if none of the vectors in the set can be written as a linear combination of the others. Fact N.2

More information

Krylov subspace projection methods

Krylov subspace projection methods I.1.(a) Krylov subspace projection methods Orthogonal projection technique : framework Let A be an n n complex matrix and K be an m-dimensional subspace of C n. An orthogonal projection technique seeks

More information

Model reduction of large-scale dynamical systems

Model reduction of large-scale dynamical systems Model reduction of large-scale dynamical systems Lecture III: Krylov approximation and rational interpolation Thanos Antoulas Rice University and Jacobs University email: aca@rice.edu URL: www.ece.rice.edu/

More information

The restarted QR-algorithm for eigenvalue computation of structured matrices

The restarted QR-algorithm for eigenvalue computation of structured matrices Journal of Computational and Applied Mathematics 149 (2002) 415 422 www.elsevier.com/locate/cam The restarted QR-algorithm for eigenvalue computation of structured matrices Daniela Calvetti a; 1, Sun-Mi

More information

1. Diagonalize the matrix A if possible, that is, find an invertible matrix P and a diagonal

1. Diagonalize the matrix A if possible, that is, find an invertible matrix P and a diagonal . Diagonalize the matrix A if possible, that is, find an invertible matrix P and a diagonal 3 9 matrix D such that A = P DP, for A =. 3 4 3 (a) P = 4, D =. 3 (b) P = 4, D =. (c) P = 4 8 4, D =. 3 (d) P

More information

Krylov Subspaces. The order-n Krylov subspace of A generated by x is

Krylov Subspaces. The order-n Krylov subspace of A generated by x is Lab 1 Krylov Subspaces Lab Objective: matrices. Use Krylov subspaces to find eigenvalues of extremely large One of the biggest difficulties in computational linear algebra is the amount of memory needed

More information

Simple iteration procedure

Simple iteration procedure Simple iteration procedure Solve Known approximate solution Preconditionning: Jacobi Gauss-Seidel Lower triangle residue use of pre-conditionner correction residue use of pre-conditionner Convergence Spectral

More information

MATH 423 Linear Algebra II Lecture 33: Diagonalization of normal operators.

MATH 423 Linear Algebra II Lecture 33: Diagonalization of normal operators. MATH 423 Linear Algebra II Lecture 33: Diagonalization of normal operators. Adjoint operator and adjoint matrix Given a linear operator L on an inner product space V, the adjoint of L is a transformation

More information

Augmented GMRES-type methods

Augmented GMRES-type methods Augmented GMRES-type methods James Baglama 1 and Lothar Reichel 2, 1 Department of Mathematics, University of Rhode Island, Kingston, RI 02881. E-mail: jbaglama@math.uri.edu. Home page: http://hypatia.math.uri.edu/

More information

On multivariable Fejér inequalities

On multivariable Fejér inequalities On multivariable Fejér inequalities Linda J. Patton Mathematics Department, Cal Poly, San Luis Obispo, CA 93407 Mihai Putinar Mathematics Department, University of California, Santa Barbara, 93106 September

More information

FEM and sparse linear system solving

FEM and sparse linear system solving FEM & sparse linear system solving, Lecture 9, Nov 19, 2017 1/36 Lecture 9, Nov 17, 2017: Krylov space methods http://people.inf.ethz.ch/arbenz/fem17 Peter Arbenz Computer Science Department, ETH Zürich

More information

Spurious Poles in Padé Approximation of Algebraic Functions

Spurious Poles in Padé Approximation of Algebraic Functions Spurious Poles in Padé Approximation of Algebraic Functions Maxim L. Yattselev University of Oregon Department of Mathematical Sciences Colloquium Indiana University - Purdue University Fort Wayne Transcendental

More information

ADJOINT OPERATOR OF BERGMAN PROJECTION AND BESOV SPACE B 1

ADJOINT OPERATOR OF BERGMAN PROJECTION AND BESOV SPACE B 1 AJOINT OPERATOR OF BERGMAN PROJECTION AN BESOV SPACE B 1 AVI KALAJ and JORJIJE VUJAINOVIĆ The main result of this paper is related to finding two-sided bounds of norm for the adjoint operator P of the

More information

On the influence of eigenvalues on Bi-CG residual norms

On the influence of eigenvalues on Bi-CG residual norms On the influence of eigenvalues on Bi-CG residual norms Jurjen Duintjer Tebbens Institute of Computer Science Academy of Sciences of the Czech Republic duintjertebbens@cs.cas.cz Gérard Meurant 30, rue

More information

Matrix Algebra: Summary

Matrix Algebra: Summary May, 27 Appendix E Matrix Algebra: Summary ontents E. Vectors and Matrtices.......................... 2 E.. Notation.................................. 2 E..2 Special Types of Vectors.........................

More information

The topology of random lemniscates

The topology of random lemniscates The topology of random lemniscates Erik Lundberg, Florida Atlantic University joint work (Proc. London Math. Soc., 2016) with Antonio Lerario and joint work (in preparation) with Koushik Ramachandran elundber@fau.edu

More information

Numerische Mathematik

Numerische Mathematik Numer. Math. 2002) 91: 503 542 Digital Object Identifier DOI) 10.1007/s002110100224 Numerische Mathematik L 2 -Approximations of power and logarithmic functions with applications to numerical conformal

More information

TRUNCATED TOEPLITZ OPERATORS ON FINITE DIMENSIONAL SPACES

TRUNCATED TOEPLITZ OPERATORS ON FINITE DIMENSIONAL SPACES TRUNCATED TOEPLITZ OPERATORS ON FINITE DIMENSIONAL SPACES JOSEPH A. CIMA, WILLIAM T. ROSS, AND WARREN R. WOGEN Abstract. In this paper, we study the matrix representations of compressions of Toeplitz operators

More information

Numerical Methods in Matrix Computations

Numerical Methods in Matrix Computations Ake Bjorck Numerical Methods in Matrix Computations Springer Contents 1 Direct Methods for Linear Systems 1 1.1 Elements of Matrix Theory 1 1.1.1 Matrix Algebra 2 1.1.2 Vector Spaces 6 1.1.3 Submatrices

More information

Lecture 17 Methods for System of Linear Equations: Part 2. Songting Luo. Department of Mathematics Iowa State University

Lecture 17 Methods for System of Linear Equations: Part 2. Songting Luo. Department of Mathematics Iowa State University Lecture 17 Methods for System of Linear Equations: Part 2 Songting Luo Department of Mathematics Iowa State University MATH 481 Numerical Methods for Differential Equations Songting Luo ( Department of

More information

Linear Algebra problems

Linear Algebra problems Linear Algebra problems 1. Show that the set F = ({1, 0}, +,.) is a field where + and. are defined as 1+1=0, 0+0=0, 0+1=1+0=1, 0.0=0.1=1.0=0, 1.1=1.. Let X be a non-empty set and F be any field. Let X

More information

ON LINEAR COMBINATIONS OF

ON LINEAR COMBINATIONS OF Física Teórica, Julio Abad, 1 7 (2008) ON LINEAR COMBINATIONS OF ORTHOGONAL POLYNOMIALS Manuel Alfaro, Ana Peña and María Luisa Rezola Departamento de Matemáticas and IUMA, Facultad de Ciencias, Universidad

More information

LINEAR ALGEBRA 1, 2012-I PARTIAL EXAM 3 SOLUTIONS TO PRACTICE PROBLEMS

LINEAR ALGEBRA 1, 2012-I PARTIAL EXAM 3 SOLUTIONS TO PRACTICE PROBLEMS LINEAR ALGEBRA, -I PARTIAL EXAM SOLUTIONS TO PRACTICE PROBLEMS Problem (a) For each of the two matrices below, (i) determine whether it is diagonalizable, (ii) determine whether it is orthogonally diagonalizable,

More information

Computational Linear Algebra

Computational Linear Algebra Computational Linear Algebra PD Dr. rer. nat. habil. Ralf-Peter Mundani Computation in Engineering / BGU Scientific Computing in Computer Science / INF Winter Term 2018/19 Part 4: Iterative Methods PD

More information

Large-scale eigenvalue problems

Large-scale eigenvalue problems ELE 538B: Mathematics of High-Dimensional Data Large-scale eigenvalue problems Yuxin Chen Princeton University, Fall 208 Outline Power method Lanczos algorithm Eigenvalue problems 4-2 Eigendecomposition

More information

Linear Algebra. Session 12

Linear Algebra. Session 12 Linear Algebra. Session 12 Dr. Marco A Roque Sol 08/01/2017 Example 12.1 Find the constant function that is the least squares fit to the following data x 0 1 2 3 f(x) 1 0 1 2 Solution c = 1 c = 0 f (x)

More information

Is Every Matrix Similar to a Toeplitz Matrix?

Is Every Matrix Similar to a Toeplitz Matrix? Is Every Matrix Similar to a Toeplitz Matrix? D. Steven Mackey Niloufer Mackey Srdjan Petrovic 21 February 1999 Submitted to Linear Algebra & its Applications Abstract We show that every n n complex nonderogatory

More information

Math 102, Winter Final Exam Review. Chapter 1. Matrices and Gaussian Elimination

Math 102, Winter Final Exam Review. Chapter 1. Matrices and Gaussian Elimination Math 0, Winter 07 Final Exam Review Chapter. Matrices and Gaussian Elimination { x + x =,. Different forms of a system of linear equations. Example: The x + 4x = 4. [ ] [ ] [ ] vector form (or the column

More information

Boundary behaviour of optimal polynomial approximants

Boundary behaviour of optimal polynomial approximants Boundary behaviour of optimal polynomial approximants University of South Florida Laval University, in honour of Tom Ransford, May 2018 This talk is based on some recent and upcoming papers with various

More information

APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2.

APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2. APPENDIX A Background Mathematics A. Linear Algebra A.. Vector algebra Let x denote the n-dimensional column vector with components 0 x x 2 B C @. A x n Definition 6 (scalar product). The scalar product

More information

This ODE arises in many physical systems that we shall investigate. + ( + 1)u = 0. (λ + s)x λ + s + ( + 1) a λ. (s + 1)(s + 2) a 0

This ODE arises in many physical systems that we shall investigate. + ( + 1)u = 0. (λ + s)x λ + s + ( + 1) a λ. (s + 1)(s + 2) a 0 Legendre equation This ODE arises in many physical systems that we shall investigate We choose We then have Substitution gives ( x 2 ) d 2 u du 2x 2 dx dx + ( + )u u x s a λ x λ a du dx λ a λ (λ + s)x

More information

Linear Independence. Stephen Boyd. EE103 Stanford University. October 9, 2017

Linear Independence. Stephen Boyd. EE103 Stanford University. October 9, 2017 Linear Independence Stephen Boyd EE103 Stanford University October 9, 2017 Outline Linear independence Basis Orthonormal vectors Gram-Schmidt algorithm Linear independence 2 Linear dependence set of n-vectors

More information

Linear Algebra using Dirac Notation: Pt. 2

Linear Algebra using Dirac Notation: Pt. 2 Linear Algebra using Dirac Notation: Pt. 2 PHYS 476Q - Southern Illinois University February 6, 2018 PHYS 476Q - Southern Illinois University Linear Algebra using Dirac Notation: Pt. 2 February 6, 2018

More information