Orthogonal Polynomials and Gaussian Quadrature

1. Orthogonal polynomials

Given a bounded, nonnegative, nondecreasing function $w(x)$ on an interval $I$ of the real line, we consider the Hilbert space $L^2(I, dw)$. That is, we consider a Borel measure on a subinterval of the real line. Typically, we restrict to the case of an integrable weight function, i.e., an absolutely continuous measure, or to a discrete measure supported on a countable subset of $\mathbb{R}$. The polynomial functions $\{1, x, x^2, \dots, x^n, \dots\}$ are defined on $I$. Form the real inner product space with

$$\langle f, g \rangle = \int_I f(x)\,g(x)\,dw(x).$$

Now apply Gram-Schmidt to get an ONS, $\{\psi_0, \psi_1, \dots\}$, where $\deg \psi_n = n$ for $n \ge 0$. In particular, with $w(I) = \int_I 1\,dw$ we have $\psi_0 = 1/\sqrt{w(I)}$. We proceed to consider some of the main features of a sequence of orthogonal polynomials.

We focus on two specific sequences of orthogonal polynomials for a given $dw$. The sequence $\{\psi_n\}$ will denote the ONS. The sequence $\{\phi_n\}$ will denote the monic polynomials, i.e., those with leading coefficient equal to 1. In particular, $\phi_0 = 1$ in every case. Finally, denote by

$$\gamma_n = \|\phi_n\|^2 = \int_I \phi_n(x)^2\,dw(x)$$

the sequence of squared norms.
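To make the construction concrete, here is a minimal numerical sketch (not part of the notes): it builds the first few orthonormal polynomials for a given weight by applying Gram-Schmidt to the monomials, with the inner product evaluated by `scipy.integrate.quad`. The weight `w`, interval endpoints, and helper names are illustrative choices.

```python
# Sketch: orthonormalize the monomials 1, x, x^2, ... in L^2([a,b], w dx).
import numpy as np
from scipy.integrate import quad

def inner(f, g, w, a, b):
    """Inner product <f, g> = integral_a^b f(x) g(x) w(x) dx."""
    return quad(lambda x: f(x) * g(x) * w(x), a, b)[0]

def orthonormal_polys(m, w, a, b):
    """Return callables psi_0, ..., psi_m, orthonormal in L^2([a,b], w dx)."""
    psis = []
    for n in range(m + 1):
        p = lambda x, n=n: x**n                   # start from the monomial x^n
        for q in psis:                            # subtract projections onto earlier psi's
            c = inner(p, q, w, a, b)
            p = lambda x, p=p, q=q, c=c: p(x) - c * q(x)
        norm = np.sqrt(inner(p, p, w, a, b))      # normalize
        psis.append(lambda x, p=p, norm=norm: p(x) / norm)
    return psis

# Example: Legendre case, w = 1 on [-1, 1]; the Gram matrix should be the identity.
psis = orthonormal_polys(3, lambda x: 1.0, -1.0, 1.0)
print([round(inner(psis[i], psis[j], lambda x: 1.0, -1, 1), 10)
       for i in range(4) for j in range(4) if i <= j])
```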

1.1. Three-term recurrences.

Proposition 1.1. There exist sequences $\{a_k\}_{k \ge 0}$ and $\{b_k\}_{k \ge 0}$ such that the polynomials $\{\phi_k\}_{k \ge 0}$ satisfy, for $k \ge 0$, the recurrence

$$x\phi_k = \phi_{k+1} + a_k \phi_k + b_k \phi_{k-1}$$

with $b_0 = 0$, eliminating the third term for $k = 0$.

Proof. Since $x\phi_k$ is a monic polynomial of degree $k+1$, we have an expansion

$$x\phi_k = \phi_{k+1} + a_k \phi_k + b_k \phi_{k-1} + \sum_{j < k-1} c_j \phi_j.$$

We will check that all of the coefficients $c_j$, $j < k-1$, are zero. We have

$$\langle x\phi_k, \phi_j \rangle = \langle \phi_k, x\phi_j \rangle = 0 = c_j \gamma_j$$

by orthogonality, if $j < k-1$, since then $\deg x\phi_j < k$. Since $\gamma_j \ne 0$, we conclude that $c_j = 0$.

First, taking inner products with $\phi_k$ in the recurrence relation gives $\langle x\phi_k, \phi_k \rangle = a_k \gamma_k$. Now, observe that by the recurrence relation $\langle x\phi_k, \phi_{k-1} \rangle = b_k \gamma_{k-1}$ and, dropping $k$ to $k-1$, for $k \ge 1$,

$$\langle x\phi_{k-1}, \phi_k \rangle = \langle \phi_k, \phi_k \rangle = \gamma_k.$$

Since $\langle x\phi_k, \phi_{k-1} \rangle = \langle \phi_k, x\phi_{k-1} \rangle$, we get $b_k = \gamma_k/\gamma_{k-1}$ for $k \ge 1$. We thus have, with $\gamma_0 = w(I)$,

$$\gamma_k = \Big(\prod_{i=1}^{k} b_i\Big)\gamma_0.$$

Now, the ONS $\{\psi_n\}$ satisfies

$$\phi_k = \psi_k \sqrt{\gamma_k} = \psi_k \sqrt{\gamma_0\, b_1 b_2 \cdots b_k}.$$

Substituting into the recurrence relation yields

$$x\psi_k \sqrt{\gamma_k} = \sqrt{b_{k+1}}\sqrt{\gamma_k}\,\psi_{k+1} + a_k \sqrt{\gamma_k}\,\psi_k + b_k \sqrt{\gamma_k/b_k}\,\psi_{k-1}.$$

Clearing the common factor $\sqrt{\gamma_k}$ we have:

Proposition 1.2. The orthonormal polynomials satisfy, for $k \ge 0$, the recurrence

$$x\psi_k = \sqrt{b_{k+1}}\,\psi_{k+1} + a_k \psi_k + \sqrt{b_k}\,\psi_{k-1}$$

if the monic polynomials satisfy the recurrence of Proposition 1.1.
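Proposition 1.1 also yields an algorithm, often called the Stieltjes procedure: alternate between computing $a_k = \langle x\phi_k, \phi_k\rangle/\gamma_k$ and $b_k = \gamma_k/\gamma_{k-1}$ and advancing the recurrence. The following is a hedged sketch, not taken from the notes; function and variable names are illustrative, and the interval endpoints are passed as `a_end`, `b_end` to avoid clashing with the coefficient arrays.

```python
# Sketch of the Stieltjes procedure implied by Proposition 1.1.
import numpy as np
from scipy.integrate import quad

def monic_recurrence(m, w, a_end, b_end):
    """Return arrays (a, b, gamma) of coefficients for phi_0, ..., phi_m."""
    phi_prev = lambda x: 0.0
    phi = lambda x: 1.0                       # phi_0 = 1
    a, b, gamma = [], [0.0], []               # b_0 = 0 by convention
    for k in range(m + 1):
        gk = quad(lambda x: phi(x)**2 * w(x), a_end, b_end)[0]      # gamma_k
        ak = quad(lambda x: x * phi(x)**2 * w(x), a_end, b_end)[0] / gk
        gamma.append(gk)
        a.append(ak)
        if k > 0:
            b.append(gamma[k] / gamma[k - 1])                       # b_k
        # advance the recurrence: phi_{k+1} = (x - a_k) phi_k - b_k phi_{k-1}
        phi, phi_prev = (lambda x, p=phi, q=phi_prev, ak=ak, bk=b[k]:
                         (x - ak) * p(x) - bk * q(x)), phi
    return np.array(a), np.array(b), np.array(gamma)

# Legendre weight w = 1 on [-1, 1]: expect a_k = 0 and b_k = k^2/(4k^2 - 1).
a, b, g = monic_recurrence(4, lambda x: 1.0, -1.0, 1.0)
print(np.round(a, 10), np.round(b, 10))
```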

An interesting feature of this latter recurrence is that it can be written in matrix form in terms of a symmetric tridiagonal matrix. Namely, for $n \ge 1$,

$$x \begin{pmatrix} \psi_0 \\ \psi_1 \\ \psi_2 \\ \vdots \\ \psi_{n-1} \end{pmatrix} = \begin{pmatrix} a_0 & \sqrt{b_1} & & & \\ \sqrt{b_1} & a_1 & \sqrt{b_2} & & \\ & \sqrt{b_2} & a_2 & \sqrt{b_3} & \\ & & \ddots & \ddots & \ddots \\ & & & \sqrt{b_{n-1}} & a_{n-1} \end{pmatrix} \begin{pmatrix} \psi_0 \\ \psi_1 \\ \psi_2 \\ \vdots \\ \psi_{n-1} \end{pmatrix} + \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ \sqrt{b_n}\,\psi_n \end{pmatrix} \quad (1)$$

1.2. Zeros. A sequence of polynomials orthogonal on an interval $I$ has some special properties. Here $\{\phi_n\}$ denotes a sequence of polynomials forming an orthogonal system in $L^2(I, dw)$ without any further assumptions, i.e., we do not require them to be monic nor normalized.

Proposition 1.3. For each $n > 0$, the polynomial $\phi_n$ has $n$ real roots lying in the interval $I$.

Proof. Let $\lambda$ be a real root of $\phi_n$ lying outside of $I$. Then, for $x \in I$, $x - \lambda$ does not change sign. If it is negative, we replace it by $\lambda - x$; in any case $\phi_n(x)/|x - \lambda|$ is a polynomial of degree $n-1$ and $\phi_n(x)\big(\phi_n(x)/|x - \lambda|\big) \ge 0$ on $I$. By orthogonality,

$$\int_I \phi_n(x)\big(\phi_n(x)/|x - \lambda|\big)\,dw = 0,$$

so that $\phi_n(x)^2/|x - \lambda| = 0$ on $I$. But then $\phi_n$ would be identically zero. Similarly, if there are complex conjugate roots, they yield a quadratic factor $x^2 + ax + b$ that does not change sign on $I$. And $\phi_n(x)/(x^2 + ax + b)$ is a polynomial of degree $n-2$ orthogonal to $\phi_n$, again leading to the contradiction that $\phi_n$ vanishes identically. A similar argument applies to a root $\lambda \in I$ with multiplicity greater than 1: form $\phi_n(x)/(x - \lambda)^2$, which would be a polynomial orthogonal to $\phi_n$, leading to a contradiction as before.
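As a numerical illustration of Proposition 1.3 and equation (1), the sketch below (not from the notes) checks that the eigenvalues of the truncated tridiagonal matrix are real, lie in $I = [-1, 1]$, and coincide with the Gauss-Legendre nodes, i.e., the roots of the degree-$n$ Legendre polynomial. The Legendre data $a_k = 0$, $b_k = k^2/(4k^2 - 1)$ are standard values assumed for the test; the identification of roots with eigenvalues is developed in Section 2.1.

```python
# Sketch: eigenvalues of the tridiagonal matrix in equation (1) vs. roots of P_n.
import numpy as np

n = 6
k = np.arange(1, n)
b = k**2 / (4.0 * k**2 - 1.0)                         # b_1, ..., b_{n-1} (Legendre)
J = np.diag(np.sqrt(b), 1) + np.diag(np.sqrt(b), -1)  # a_k = 0 on the diagonal
eig = np.linalg.eigvalsh(J)                           # real, ascending

x, _ = np.polynomial.legendre.leggauss(n)             # nodes = roots of P_n
print(np.allclose(eig, x), bool(np.all(np.abs(eig) < 1)))
```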

2. Gaussian quadrature

A quadrature rule is a linear functional on $C(I)$ designed to approximate $\int_I f\,dw$. For convenience we will simply write $\int f$. The quadrature rule has the form

$$\int f \approx \sum_{k=1}^{n} w_k f(x_k)$$

with $x_k \in I$, $1 \le k \le n$, called the nodes. The coefficients $\{w_k\}$ are the weights. Let $\{\phi_n\}$ be a sequence of polynomials orthogonal on $I$ with respect to the measure $dw$, with $\deg \phi_n = n$ for $n \ge 0$. Unless otherwise stated, we will assume that $dw$ is normalized so that $\int_I 1\,dw = 1$, with $\phi_0 = 1$. If the quadrature formula holds for polynomials up to degree $n$, then we have

$$1 = \int 1 = \sum_k w_k \quad\text{and}\quad \int \phi_i = \sum_k w_k \phi_i(x_k) = 0, \quad\text{for } 0 < i < n.$$

Fix a degree/order $n > 1$. If $f$ is a polynomial of degree $\deg f \ge n$, we can use the division algorithm, dividing $f$ by $\phi_n$, and write

$$f = q\phi_n + r$$

with quotient $q$ and remainder $r$ satisfying $\deg r < n$. If $\deg f < 2n$, then $\deg q < n$ as well and we have, by orthogonality,

$$\int f = \int q\phi_n + \int r = \int r,$$

whereas by the quadrature rule, assuming it holds for polynomials of degree at most $n$, hence for $r$,

$$\sum_k w_k f(x_k) = \sum_k w_k \big(q(x_k)\phi_n(x_k) + r(x_k)\big) = \sum_k w_k q(x_k)\phi_n(x_k) + \int r.$$

That is, the rule reproduces $\int f$ exactly when

$$\sum_k w_k q(x_k)\phi_n(x_k) = 0$$

for polynomials $q$ of degree less than $n$. By choosing the nodes to be the ($n$ distinct) zeros of $\phi_n$, the quadrature rule becomes exact for polynomials of degree less than $2n$. We will henceforth assume the nodes to be so chosen. This, then, is Gaussian quadrature.
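A quick check of this exactness claim (not from the notes), using numpy's built-in Gauss-Legendre nodes and weights for the unnormalized weight $w = 1$ on $[-1, 1]$: an $n$-point rule integrates $x^m$ exactly for every $m < 2n$.

```python
# Sketch: n-point Gauss-Legendre quadrature is exact for all degrees < 2n.
import numpy as np

n = 5
x, w = np.polynomial.legendre.leggauss(n)      # nodes and weights
for m in range(2 * n):
    exact = 0.0 if m % 2 else 2.0 / (m + 1)    # integral of x^m over [-1, 1]
    assert np.isclose(w @ x**m, exact)
print("exact for all degrees <", 2 * n)
```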

The quadrature rule is now exact for polynomials of degree less than $2n$. So take polynomials $\phi_i$ and $\phi_j$, with $0 \le i, j < n$. Then $\phi_i \phi_j$ is a polynomial of degree less than $2n$ and we have

$$\int \phi_i \phi_j = \gamma_i \delta_{ij} = \sum_k w_k \phi_i(x_k)\phi_j(x_k) \quad (2)$$

with $\{\gamma_i\}$ the squared norms. It is convenient to introduce matrices to express these and subsequent useful relations. Let

$$\Phi_{ij} = \big(\phi_{i-1}(x_j)\big)_{1 \le i, j \le n}, \quad W = \mathrm{diag}(w_1, \dots, w_n), \quad \Gamma = \mathrm{diag}(\gamma_0, \dots, \gamma_{n-1}).$$

Then equation (2) takes the form

$$\Phi W \Phi^\top = \Gamma \quad (3)$$

where the $\top$ indicates transpose.

We can also see at this point that the weights $w_k$ are positive. Let

$$\ell_j(x) = c\,\phi_n(x)/(x - x_j), \quad (4)$$

a polynomial vanishing at all nodes except $x_j$, with $c$ chosen so that $\ell_j(x_j) = 1$. Since $\deg \ell_j = n - 1 < n$, we have $\deg \ell_j^2 < 2n$, so that the quadrature rule holds for $\ell_j^2$. Thus, since this non-zero polynomial vanishes at all nodes except $x_j$, we have

$$0 < \int \ell_j^2 = \sum_k w_k \ell_j(x_k)^2 = w_j \quad (5)$$

as required.

Note that we have a formula for the weights from this discussion. Find $c$ by letting $x \to x_j$ in equation (4):

$$\ell_j(x_j) = 1 = c \lim_{x \to x_j} \frac{\phi_n(x) - \phi_n(x_j)}{x - x_j} = c\,\phi_n'(x_j)$$

by definition of the derivative of $\phi_n$ at $x = x_j$, where we use the fact that $\phi_n(x_j) = 0$. So $c = 1/\phi_n'(x_j)$. Thus, using the equalities in equation (5) with $\ell_j$ instead of $\ell_j^2$, we have

$$w_j = \int \frac{\phi_n(x)}{\phi_n'(x_j)(x - x_j)}\,dw. \quad (6)$$
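A sketch verifying equation (3) and the positivity in (5) for the Gauss-Legendre rule. It assumes two standard Legendre facts not derived in the notes: the leading coefficient of $P_k$ is $\binom{2k}{k}/2^k$, and $\|P_k\|^2 = 2/(2k+1)$ with respect to $dx$ on $[-1, 1]$; these give the monic $\phi_k$ and their squared norms $\gamma_k$.

```python
# Sketch: Phi W Phi^T = Gamma (equation (3)) and w_k > 0 (equation (5)).
import numpy as np
from math import comb
from numpy.polynomial import legendre as L

n = 5
x, w = L.leggauss(n)
W = np.diag(w)
lead = np.array([comb(2*k, k) / 2.0**k for k in range(n)])      # leading coeff of P_k
Phi = np.array([L.Legendre.basis(i)(x) / lead[i] for i in range(n)])  # phi_{i-1}(x_j)
gamma = (2.0 / (2*np.arange(n) + 1)) / lead**2                  # ||phi_k||^2
print(np.allclose(Phi @ W @ Phi.T, np.diag(gamma)))             # equation (3)
print(bool((w > 0).all()))                                      # equation (5)
```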

Remark. With the assumption $\int 1 = 1$, the sum in the quadrature rule is a convex combination of the values $\{f(x_k)\}_{1 \le k \le n}$, which we can form into a column vector $\mathbf{f}$. Alternatively, define $F = \mathrm{diag}(f(x_1), \dots, f(x_n))$. Then the rule takes the form

$$\int f \approx \mathrm{tr}\,FW.$$

Thus, the weights define a discrete probability distribution.

Taking determinants in equation (3),

$$(\det \Phi)^2 \det W = \det \Gamma > 0,$$

so that $\Phi$ and $W$ are invertible, with $\det W > 0$, as we have seen from positivity of the weights. Now we can define

$$V = \Gamma^{-1/2} \Phi W^{1/2} \quad\text{with}\quad \Phi = \Gamma^{1/2} V W^{-1/2}. \quad (7)$$

Then equation (3) reads

$$VV^\top = I.$$

That is, $V$ is an orthogonal matrix. And we have the dual relation

$$V^\top V = I \implies \Phi^\top \Gamma^{-1} \Phi = W^{-1}. \quad (8)$$

2.1. Recurrence formula. Nodes and weights. In this discussion we will use the orthonormal polynomials $\{\psi_n\}$. The nodes are the same, and we set

$$\Psi_{ij} = \big(\psi_{i-1}(x_j)\big)_{1 \le i, j \le n} \quad\text{and}\quad \Gamma = I,$$

with corresponding weights $W = (\Psi^\top \Psi)^{-1}$.

Recall the matrix form of the recurrence formula, equation (1). Replacing $x$ successively by the roots $x_j$ of $\psi_n$, the vector involving $\psi_n$ will vanish.

Setting

$$A = \begin{pmatrix} a_0 & \sqrt{b_1} & & & \\ \sqrt{b_1} & a_1 & \sqrt{b_2} & & \\ & \sqrt{b_2} & a_2 & \sqrt{b_3} & \\ & & \ddots & \ddots & \ddots \\ & & & \sqrt{b_{n-1}} & a_{n-1} \end{pmatrix},$$

for $x = x_j$ we have $\vec{\psi}_j = (\psi_0(x_j), \dots, \psi_{n-1}(x_j))^\top$, as a column vector, satisfying

$$A\vec{\psi}_j = x_j \vec{\psi}_j.$$

Since there are $n$ distinct values $\{x_j\}$, these are precisely the eigenvalues of $A$, with corresponding eigenvectors $\{\vec{\psi}_j\}$. Now construct $\Psi$ from the columns $\vec{\psi}_j$, and the recurrence formula takes the matrix form

$$A\Psi = \Psi\Lambda$$

with $\Lambda = \mathrm{diag}(x_1, \dots, x_n)$, the diagonal matrix of eigenvalues. Since the eigenvalues of $A$ are distinct, the matrix $\Psi$ has orthogonal columns, being the eigenvectors of a symmetric matrix corresponding to distinct eigenvalues.

For orthonormal polynomials, equation (3) has the form

$$\Psi W \Psi^\top = I,$$

so that $\Psi W^{1/2}$ is an orthogonal matrix satisfying

$$A(\Psi W^{1/2}) = \Psi\Lambda W^{1/2} = (\Psi W^{1/2})\Lambda.$$

That is, $U = \Psi W^{1/2}$ is an orthogonal matrix diagonalizing $A$. Since the eigenvalues are distinct, adjusting signs so the top row is all positive, this matrix is unique up to permutation of its columns, corresponding to sorting the eigenvalues of $A$.

Since $\Gamma$ has diagonal entries equal to the squared norms of the orthogonal system, we see that $\Psi = \Gamma^{-1/2}\Phi$. Thus,

$$U = \Psi W^{1/2} = \Gamma^{-1/2}\Phi W^{1/2}. \quad (9)$$

Comparing with equation (7), we identify $U = V$. Note that the recurrence relation $A\Psi = \Psi\Lambda$ can be written

$$A\Gamma^{-1/2}\Phi = \Gamma^{-1/2}\Phi\Lambda \implies \Gamma^{1/2}A\Gamma^{-1/2}\Phi = \Phi\Lambda.$$

Taking for $\{\phi\}$ the monic polynomials, call this last matrix of coefficients $A_{\mathrm{monic}}$. Thus, we have

$$A = A_{\mathrm{orthonormal}} = \Gamma^{-1/2} A_{\mathrm{monic}}\, \Gamma^{1/2}.$$

So one could start directly from the original recurrence relation for the monic polynomials, construct $A_{\mathrm{monic}}$ from the coefficients, calculate the values $\gamma_n$ via the coefficients $b_n$, and continue from there.

Remark. When the eigenvectors of $A$ are found by computation, form $\tilde{U}$ taking as columns a basis of eigenvectors. Then $\tilde{U}^\top \tilde{U} = D$, a diagonal matrix. Thus $U = \tilde{U}D^{-1/2}$ will be an orthogonal matrix diagonalizing $A$. If necessary, adjust the signs so that the top row is all positive. Then, sorting the eigenvalues $x_1 < x_2 < \cdots < x_n$ produces a unique matrix $U$, which must then equal $\Psi W^{1/2}$ after suitable arrangement of its columns. The top row of $U$ is

$$U_{1j} = \psi_0(x_j)\sqrt{w_j}.$$

With $\gamma_0 = w(I)$, $\psi_0 = 1/\sqrt{\gamma_0}$, and solving for the weights, we have

$$w_j = \gamma_0\,U_{1j}^2 \quad (10)$$

directly from the top row of $U$. Any of the formulas (6), (8), or (10) can be used to evaluate the weights $\{w_j\}$.
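The Remark above is essentially the Golub-Welsch algorithm of [1]. Here is a minimal sketch of it (not the notes' own code): the nodes are the eigenvalues of $A$ and the weights come from the squared top row of the orthogonal eigenvector matrix, formula (10). The Legendre data ($\gamma_0 = 2$, $a_k = 0$, $b_k = k^2/(4k^2 - 1)$) are standard values assumed for the test.

```python
# Sketch: nodes and weights from the eigendecomposition of the Jacobi matrix A.
import numpy as np

def golub_welsch(a, sqrt_b, gamma0):
    """a: diagonal a_0..a_{n-1}; sqrt_b: off-diagonal sqrt(b_1)..sqrt(b_{n-1})."""
    A = np.diag(a) + np.diag(sqrt_b, 1) + np.diag(sqrt_b, -1)
    nodes, U = np.linalg.eigh(A)        # eigh: orthonormal columns, sorted eigenvalues
    weights = gamma0 * U[0, :]**2       # formula (10); sign ambiguity is squared away
    return nodes, weights

n = 7
k = np.arange(1, n)
nodes, weights = golub_welsch(np.zeros(n), np.sqrt(k**2 / (4.0*k**2 - 1.0)), 2.0)
x, w = np.polynomial.legendre.leggauss(n)     # reference values
print(np.allclose(nodes, x), np.allclose(weights, w))
```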

2.2. Christoffel-Darboux identity. Weights revisited.

Proposition 2.1. For monic orthogonal polynomials $\{\phi_n\}$, we have

$$\sum_{k=0}^{n} \frac{\phi_k(x)\phi_k(y)}{\gamma_k} = \frac{1}{\gamma_n}\,\frac{1}{x - y} \begin{vmatrix} \phi_{n+1}(x) & \phi_{n+1}(y) \\ \phi_n(x) & \phi_n(y) \end{vmatrix},$$

the vertical delimiters denoting determinant.

Proof. For induction, start with $n = 0$. We have, using the recurrence formula at $k = 0$,

$$\frac{\phi_0(x)\phi_0(y)}{\gamma_0} \overset{?}{=} \frac{1}{\gamma_0}\,\frac{1}{x - y} \begin{vmatrix} \phi_1(x) & \phi_1(y) \\ \phi_0(x) & \phi_0(y) \end{vmatrix} = \frac{1}{\gamma_0}\,\frac{1}{x - y} \begin{vmatrix} x - a_0 & y - a_0 \\ 1 & 1 \end{vmatrix} = \frac{1}{\gamma_0},$$

since $\phi_0(x) = \phi_0(y) = 1$. The inductive step is similar:

$$\frac{1}{\gamma_n}\,\frac{1}{x - y} \begin{vmatrix} \phi_{n+1}(x) & \phi_{n+1}(y) \\ \phi_n(x) & \phi_n(y) \end{vmatrix} = \frac{1}{\gamma_n}\,\frac{1}{x - y} \begin{vmatrix} (x - a_n)\phi_n(x) - b_n\phi_{n-1}(x) & (y - a_n)\phi_n(y) - b_n\phi_{n-1}(y) \\ \phi_n(x) & \phi_n(y) \end{vmatrix}$$

$$= \frac{1}{\gamma_n}\,\frac{1}{x - y} \begin{vmatrix} x\phi_n(x) & y\phi_n(y) \\ \phi_n(x) & \phi_n(y) \end{vmatrix} + \frac{b_n}{\gamma_n(x - y)} \begin{vmatrix} \phi_n(x) & \phi_n(y) \\ \phi_{n-1}(x) & \phi_{n-1}(y) \end{vmatrix} \quad\text{(switching rows)}$$

$$= \frac{\phi_n(x)\phi_n(y)}{\gamma_n} + \frac{1}{\gamma_{n-1}}\,\frac{1}{x - y} \begin{vmatrix} \phi_n(x) & \phi_n(y) \\ \phi_{n-1}(x) & \phi_{n-1}(y) \end{vmatrix}$$

$$= \frac{\phi_n(x)\phi_n(y)}{\gamma_n} + \sum_{k=0}^{n-1} \frac{\phi_k(x)\phi_k(y)}{\gamma_k} \quad\text{(by induction hypothesis)}$$

$$= \sum_{k=0}^{n} \frac{\phi_k(x)\phi_k(y)}{\gamma_k},$$

as required. Here the $a_n$ terms drop out, since they produce a determinant with proportional rows, and we used $b_n/\gamma_n = 1/\gamma_{n-1}$.

Now we have another formula for the Gaussian quadrature weights.

Proposition 2.2. For monic polynomials $\{\phi_n\}$, we have

$$w_j = \frac{\gamma_{n-1}}{\phi_n'(x_j)\,\phi_{n-1}(x_j)}.$$

Proof. Letting $y = x_j$, the $j$th root of $\phi_n$, in the Christoffel-Darboux formula, with $n - 1$ replacing $n$, yields

$$\sum_{k=0}^{n-1} \frac{\phi_k(x)\phi_k(x_j)}{\gamma_k} = \frac{1}{\gamma_{n-1}}\,\frac{\phi_n(x)\phi_{n-1}(x_j) - \phi_n(x_j)\phi_{n-1}(x)}{x - x_j},$$

with $\phi_n(x_j)$ vanishing. Now let $x = x_i$. If $i \ne j$, then the factor $\phi_n(x_i) = 0$, yielding zero. If $i = j$, use L'Hôpital's rule to get

$$\sum_{k=0}^{n-1} \frac{\phi_k(x_j)^2}{\gamma_k} = \frac{\phi_{n-1}(x_j)\,\phi_n'(x_j)}{\gamma_{n-1}}.$$

Now the left side reads $(\Phi^\top \Gamma^{-1} \Phi)_{jj}$. By equation (8), we get

$$\frac{1}{w_j} = \frac{\phi_{n-1}(x_j)\,\phi_n'(x_j)}{\gamma_{n-1}}, \quad (11)$$

which immediately gives the result.
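A quick numerical check of Proposition 2.2 (a sketch under the same assumed Legendre facts as before: leading coefficient $\binom{2k}{k}/2^k$ and $\|P_{n-1}\|^2 = 2/(2n-1)$):

```python
# Sketch: w_j = gamma_{n-1} / (phi_n'(x_j) * phi_{n-1}(x_j)) for monic Legendre.
import numpy as np
from math import comb
from numpy.polynomial import legendre as L

n = 6
x, w = L.leggauss(n)
lead = lambda k: comb(2*k, k) / 2.0**k                   # leading coeff of P_k
phi = lambda k: L.Legendre.basis(k) * (1.0 / lead(k))    # monic phi_k
gamma_nm1 = (2.0 / (2*(n - 1) + 1)) / lead(n - 1)**2     # ||phi_{n-1}||^2
w_formula = gamma_nm1 / (phi(n).deriv()(x) * phi(n - 1)(x))
print(np.allclose(w_formula, w))
```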

Corollary 2.3. Let $\{p_n\}$ be an OPS. Let $p_k$ have leading coefficient $\alpha_k$, $k \ge 0$. Then the weights are given by

$$w_j = \frac{\alpha_n}{\alpha_{n-1}}\,\frac{\gamma_{n-1}}{p_n'(x_j)\,p_{n-1}(x_j)},$$

where now the squared norms are given by $\gamma_{n-1} = \|p_{n-1}\|^2$.

Proof. Let $\phi_n = p_n/\alpha_n$ denote the corresponding monic polynomials. Then apply the Proposition, noting that $\|\phi_{n-1}\|^2 = \|p_{n-1}\|^2/\alpha_{n-1}^2$, with $\gamma_{n-1}$ now referring to $\|p_{n-1}\|^2$.

Problem. Consider the Chebyshev polynomials of the first kind on $[-1, 1]$ with measure $\dfrac{dx}{\pi\sqrt{1 - x^2}}$.

1. Use the recurrence formula to show that the leading coefficient of $T_n$ is $2^{n-1}$, for $n \ge 1$.
2. Verify that $\|T_n\|^2 = 1/2$ for $n \ge 1$, while $\|T_0\|^2 = 1$.
3. Check that the zeros of $T_n(x)$ are given by $x_j = \cos\dfrac{(2j-1)\pi}{2n}$, $1 \le j \le n$.
4. Show that $T_n'(x) = n\,U_{n-1}(x)$, with $U_{n-1}$ the Chebyshev polynomials of the second kind.
5. For $n \ge 2$, use the formula in Corollary 2.3 to verify that the weights $w_j$ all equal $1/n$.

A numerical check of parts 3 and 5 follows.
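This sketch (illustrative, not part of the problem set) verifies parts 3 and 5 numerically, using the closed forms $T_{n-1}(\cos\theta) = \cos((n-1)\theta)$ and $U_{n-1}(\cos\theta) = \sin(n\theta)/\sin\theta$, together with $\alpha_n/\alpha_{n-1} = 2$ and $\gamma_{n-1} = 1/2$ from parts 1 and 2.

```python
# Sketch: for the Chebyshev weight, Corollary 2.3 gives w_j = 1/n at the nodes.
import numpy as np

n = 8
j = np.arange(1, n + 1)
theta = (2*j - 1) * np.pi / (2*n)
x = np.cos(theta)                                # zeros of T_n (part 3)
T_nm1 = np.cos((n - 1) * theta)                  # T_{n-1}(x_j)
U_nm1 = np.sin(n * theta) / np.sin(theta)        # U_{n-1}(x_j), so T_n' = n U_{n-1}
w = 2.0 * 0.5 / (n * U_nm1 * T_nm1)              # Corollary 2.3 with alpha ratio 2
print(np.allclose(w, 1.0 / n))                   # True: all weights are 1/n (part 5)
```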

2.3. Convergence. Now we check that as we increase $n$, the quadrature estimates indeed converge to $\int f$. In this section we take $I$ to be a closed, bounded interval. Start by denoting the nodes and weights at order $n$ by $\{x_{nk}\}$ and $\{w_{nk}\}$ respectively. Then the quadrature rule at stage $n$ is

$$P_n(f) = \sum_{k=1}^{n} w_{nk}\, f(x_{nk}).$$

Since the quadrature rule is exact for polynomials of degree less than $2n$, we have $P_n(x^m) = \int x^m$ for $n > m$. Hence

$$\lim_{n \to \infty} P_n(x^m) = \int x^m$$

for all $m \ge 0$, hence, by linearity, for all polynomials $p(x)$. Let $f \in C(I)$. Then by the Weierstrass Approximation Theorem, we can find a polynomial $p$ such that $\|f - p\| < \varepsilon$, with $\|\cdot\|$ denoting the sup norm on $I$. So, using the fact that $P_n(1) = \gamma_0$,

$$\Big|P_n(f) - \int f\Big| \le |P_n(f) - P_n(p)| + \Big|P_n(p) - \int p\Big| + \Big|\int p - \int f\Big| \le \|f - p\|\,\gamma_0 + 0 + \|f - p\|\,\gamma_0 \le 2\varepsilon\gamma_0$$

whenever $n$ exceeds $\deg p$. So the error is of the same order of magnitude as that in the Weierstrass approximation.
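A small illustration of this convergence (a sketch, not from the notes), with $f(x) = e^x$ and the unnormalized Legendre weight on $[-1, 1]$, so the target value is $e - e^{-1}$. In practice the convergence for analytic $f$ is far faster than the Weierstrass bound suggests.

```python
# Sketch: P_n(f) -> integral of f as n grows.
import numpy as np

target = np.exp(1) - np.exp(-1)                  # integral of e^x over [-1, 1]
for n in (2, 4, 8, 16):
    x, w = np.polynomial.legendre.leggauss(n)
    Pn = w @ np.exp(x)                           # P_n(f) = sum_k w_k f(x_k)
    print(n, abs(Pn - target))                   # error decays rapidly in n
```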

3. Interpolation

We are given a function $f$ on $I$. We can interpolate its values at the nodes $\{x_j\}$, producing a polynomial which agrees with $f$ at those points. Start by defining the column vector $\mathbf{f} = (f(x_k))_{1 \le k \le n}$. If $f$ has an expansion in the OPS $\{\phi\}$ up to order $n-1$,

$$f(x) = \sum_{k=0}^{n-1} c_k \phi_k(x), \quad (12)$$

then, with $\mathbf{c}$ the vector with components $(c_0, \dots, c_{n-1})$, we have

$$f(x_j) = \sum_{k=0}^{n-1} c_k \phi_k(x_j), \quad\text{or}\quad \Phi^\top \mathbf{c} = \mathbf{f}.$$

From equation (3), we have

$$\Phi W \Phi^\top = \Gamma \implies (\Phi^\top)^{-1} = \Gamma^{-1}\Phi W.$$

Thus,

$$\mathbf{c} = \Gamma^{-1}\Phi W \mathbf{f} \quad (13)$$

gives the coefficients in the expansion of $f$. With the usual assumption $\phi_0 = 1$, the integral of $f$ is simply $c_0 \gamma_0$. Writing out the matrix relation yields the formula

$$c_k = \frac{1}{\gamma_k} \sum_{j=1}^{n} \phi_k(x_j)\,w_j\,f(x_j) \quad (14)$$

for $0 \le k < n$. By orthogonality, multiplying by $\phi_k(x)$ and integrating in equation (12),

$$c_k = \frac{1}{\gamma_k} \int_I f(x)\,\phi_k(x)\,dw,$$

and the quadrature formula recovers equation (14) for polynomials of degree less than $n$. This shows that the orthogonal expansion, equation (12), agrees with the interpolation expansion. For general continuous, or piecewise continuous bounded, functions these formulas give the truncated orthogonal expansion in $L^2(I, dw)$.

Remark. When doing computations by machine, the eigenvalue problem for $A$ yields the nodes as eigenvalues and the weights via the top row of the orthogonal matrix diagonalizing $A$. If we have the entire matrix $U$ at hand, we can use it to do interpolation directly via equations (13) and (9):

$$\mathbf{c} = \Gamma^{-1/2}\,U\,W^{1/2}\,\mathbf{f},$$

and for the orthonormal expansion we have

$$f(x) = \sum_{k=0}^{n-1} \alpha_k \psi_k(x), \quad \boldsymbol{\alpha} = U\,W^{1/2}\,\mathbf{f},$$

as $\Gamma$ is simply the identity matrix.

Problem. Do Chebyshev interpolation on $[-1, 1]$ with the weight function $(1 - x^2)^{-1/2}/\pi$. For each function, find the interpolating polynomial as an expansion in Chebyshev polynomials; then plot the interpolant and the function on $[-1, 1]$. Do this for $n = 5$, $n = 10$, and $n = 20$, for each of the functions (i) $1/(1 + x^2)$ and (ii) $\cos x$. A sketch for part (i) follows.
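A sketch of the interpolation problem for part (i), assuming the Chebyshev data established above: nodes $x_j = \cos((2j-1)\pi/(2n))$, weights $w_j = 1/n$, and norms $\gamma_0 = 1$, $\gamma_k = 1/2$ for $k \ge 1$. Formula (14) then gives the coefficients of the degree-$(n-1)$ interpolant in the basis $\{T_k\}$; the function names are illustrative.

```python
# Sketch: Chebyshev interpolation via equation (14).
import numpy as np

def chebyshev_interpolant(f, n):
    j = np.arange(1, n + 1)
    theta = (2*j - 1) * np.pi / (2*n)
    x = np.cos(theta)                            # Gauss-Chebyshev nodes
    fx = f(x)
    k = np.arange(n)[:, None]
    Phi = np.cos(k * theta)                      # Phi[k, j] = T_k(x_j) = cos(k theta_j)
    gamma = np.where(np.arange(n) == 0, 1.0, 0.5)
    c = (Phi @ fx) / (n * gamma)                 # equation (14) with w_j = 1/n
    return lambda t: np.polynomial.chebyshev.chebval(t, c)

f = lambda x: 1.0 / (1.0 + x**2)
p = chebyshev_interpolant(f, 10)
t = np.linspace(-1, 1, 201)
print(np.max(np.abs(p(t) - f(t))))               # small uniform error; plot p and f to compare
```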

Notes on References. The matrix approach to finding the nodes and weights is thoroughly discussed in [1]. [2] presents the matrix approach along with MATLAB code examples; code is available at the web site. [3] presents an overview with various formulas for the weights. See the material on Gaussian quadrature in [4], where formula (10) is one of the exercises.

References

[1] Gene H. Golub and John H. Welsch, Calculation of Gauss Quadrature Rules, Mathematics of Computation, Vol. 23, No. 106 (Apr. 1969), pp. 221-230 and s1-s10, American Mathematical Society.
[2] John A. Gubner.
[3] See
[4] Herbert S. Wilf, Mathematics for the Physical Sciences, wilf/website/Mathematics for the Physical Sciences.html
