Algebra C Numerical Linear Algebra Sample Exam Problems


Notation. Denote by $V$ a finite-dimensional Hilbert space with inner product $(\cdot,\cdot)$ and corresponding norm $\|\cdot\|$. The abbreviation SPD is used for symmetric positive definite. If a linear operator $A : V \to V$ is SPD, then $(x,y)_A := (Ax, y)$ is an inner product and the corresponding norm is denoted by $\|\cdot\|_A$. For a square matrix $A$, $\lambda_j(A)$, $1 \le j \le n$, denote the eigenvalues of $A$. A generic linear iterative method for the solution of $Ax = b$, $A \in \mathbb{R}^{n \times n}$, is as follows: given an initial guess $x_0$, for $k = 0, 1, \ldots$ define $x_{k+1}$ as
$$x_{k+1} = x_k + B(b - Ax_k).$$
Different choices of $B$ result in the Jacobi, Gauss-Seidel, SOR, Richardson, etc. methods.

1. Let $A \in \mathbb{R}^{n \times n}$, and let $A_k = \{a_{ij}\}_{i,j=1}^{k}$, $k = 1, 2, \ldots, n$.

a) Prove that if all of the $A_k$ are nonsingular, then there exists a unique decomposition $A = LU$, where $L$ is a unit lower triangular matrix and $U$ is an upper triangular matrix.

b) Prove or disprove: if $A$ is nonsingular (but nothing is known about the invertibility of the $A_k$), then there exist a permutation matrix $P$ and a unique pair consisting of a unit lower triangular matrix $L$ and an upper triangular matrix $U$ such that $PA = LU$.

2. Consider the conjugate gradient method for the minimization of
$$F(u) := \tfrac{1}{2}(Au, u) - (b, u),$$
where $A \in \mathbb{R}^{n \times n}$ is SPD. Starting with $u_0 = 0$, $r_0 = b$ and $p_0 = r_0$, the successive approximations to the minimizer are computed by
$$u_{k+1} = u_k + \alpha_k p_k, \qquad r_{k+1} = r_k - \alpha_k A p_k, \qquad p_{k+1} = r_{k+1} - \beta_k p_k,$$
where $\alpha_k = (r_k, p_k)/\|p_k\|_A^2$ and $\beta_k = (r_{k+1}, p_k)_A/\|p_k\|_A^2$.

a) Show that for $k = 0, 1, 2, \ldots$ the following relations hold:
$$\operatorname{span}\{p_0, p_1, \ldots, p_k\} = \operatorname{span}\{r_0, r_1, \ldots, r_k\} = \operatorname{span}\{r_0, Ar_0, \ldots, A^k r_0\}.$$
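The recursion in Problem 2 can be sketched directly in code. The following is a minimal illustration using plain Python lists in place of a linear algebra library; the $2 \times 2$ SPD system is a made-up example, not part of the exam:

```python
def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def matvec(A, x):
    return [dot(row, x) for row in A]

def cg(A, b, steps):
    """Conjugate gradients with the update formulas of Problem 2:
    u_0 = 0, r_0 = b, p_0 = r_0."""
    n = len(b)
    u = [0.0] * n
    r = b[:]
    p = r[:]
    for _ in range(steps):
        Ap = matvec(A, p)
        pAp = dot(p, Ap)                 # ||p_k||_A^2
        if pAp == 0.0:
            break                        # r_k = 0: exact minimizer reached
        alpha = dot(r, p) / pAp          # alpha_k = (r_k, p_k) / ||p_k||_A^2
        u = [ui + alpha * pi for ui, pi in zip(u, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        beta = dot(r, Ap) / pAp          # beta_k = (r_{k+1}, p_k)_A / ||p_k||_A^2
        p = [ri - beta * pi for ri, pi in zip(r, p)]
    return u

# 2x2 SPD example; by Problem 2b the iteration finishes in at most n = 2 steps
A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
u = cg(A, b, 2)   # exact solution (1/11, 7/11)
```

Note the sign convention matches the exam's: with $\beta_k = (r_{k+1}, p_k)_A/\|p_k\|_A^2$ and $p_{k+1} = r_{k+1} - \beta_k p_k$, one gets $(p_{k+1}, p_k)_A = 0$.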

b) Show that $r_m = 0$ for some $m \le n$.

c) Prove that if $u$ is the unique minimizer of $F(\cdot)$, then
$$\|u - u_k\|_A = \inf_{P \in \mathcal{P}_k,\, P(0) = 1} \|P(A)(u - u_0)\|_A, \tag{1}$$
where $\mathcal{P}_k$ is the space of polynomials of a real variable of degree at most $k$.

d) Bound the right side of (1) in terms of the condition number of $A$.

3. Let $A : V \to V$ be a nonsingular linear transformation. Let $u$ be the solution to $Au = f$, and let $\hat{u}$ be an approximation to it satisfying $A\hat{u} = \hat{f}$. Show that
$$\frac{\|u - \hat{u}\|_V}{\|u\|_V} \le \|A\|\,\|A^{-1}\|\,\frac{\|f - \hat{f}\|_V}{\|f\|_V}.$$

4. Let $A \in \mathbb{R}^{n \times n}$ be SPD.

a) Prove that the SOR iterative method (with parameter $0 < \omega < 2$) is convergent.

b) Take $n = 3$ and define $A = I + \sigma v v^T$, $\sigma > 0$ ($A$ is then SPD). Find a vector $v \in \mathbb{R}^3$, a value of $\sigma$, and an initial guess $x_0$ for which the Jacobi iterative method results in a bounded but not convergent sequence of iterates.

5. Let $A : V \to V$ be a linear transformation. Let $W \subset V$ be any nontrivial subspace of $V$. Assume that for every $x \in W$, $x \ne 0$, there exists $y \in W$ such that
$$(Ax, y) \ge \gamma \|x\|\,\|y\|,$$
where $\gamma > 0$ is independent of $x$ and $W$.

a) Prove that $A$ is invertible on $W$, that is, for any $g \in V$ there exists a unique $x_W \in W$ such that
$$(Ax_W, y_W) = (g, y_W) \quad \text{for all } y_W \in W. \tag{2}$$

b) Prove that if $x_V$ and $x_U$ are the solutions to (2) with $W = V$ and $W = U \subset V$, respectively, then the following estimate holds:
$$\|x_V - x_U\| \le K \inf_{y \in U} \|x_V - y\|,$$
where $K$ is a constant which depends only on $A$ and $\gamma$.
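As a companion to Problem 4: the generic iteration from the Notation, $x_{k+1} = x_k + B(b - Ax_k)$, becomes the Jacobi method for $B = D^{-1}$, with $D$ the diagonal of $A$. A minimal sketch on a made-up strictly diagonally dominant system, where convergence is guaranteed (cf. Problems 6c and 15):

```python
def jacobi(A, b, x0, steps):
    """Generic iteration x_{k+1} = x_k + B(b - A x_k) with B = D^{-1} (Jacobi)."""
    n = len(b)
    x = x0[:]
    for _ in range(steps):
        # residual r_k = b - A x_k
        r = [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        # x_{k+1} = x_k + D^{-1} r_k
        x = [x[i] + r[i] / A[i][i] for i in range(n)]
    return x

# strictly diagonally dominant 3x3 system with exact solution (1, 1, 1)
A = [[4.0, 1.0, 1.0],
     [1.0, 5.0, 2.0],
     [0.0, 1.0, 3.0]]
b = [6.0, 8.0, 4.0]
x = jacobi(A, b, [0.0, 0.0, 0.0], 100)
```

For this matrix the Jacobi iteration matrix has $\infty$-norm at most $0.6$, so 100 iterations are far more than enough for the iterates to settle at the solution.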

6. Let $A \in \mathbb{R}^{n \times n}$ be irreducible. Assume also that the following inequalities hold for all $1 \le i \le n$ and $1 \le j \le n$:
$$a_{ii} > 0, \qquad a_{ij} \le 0 \ \ (i \ne j), \qquad a_{ii} + \sum_{j=1,\, j \ne i}^{n} a_{ij} \ge 0,$$
with strict inequality in the last condition for at least one $i$.

a) Prove that if $f \in \mathbb{R}^n$ is such that $f_j \ge 0$ (resp. $\le 0$) for all $1 \le j \le n$, then the solution $u$ to $Au = f$ satisfies $u_j \ge 0$ (resp. $\le 0$) for all $1 \le j \le n$.

b) Prove that $A$ is invertible.

c) Prove that both the Gauss-Seidel and Jacobi iterative methods are convergent for this type of matrix.

7. Suppose the entries of $A(t) \in \mathbb{R}^{n \times n}$ are continuously differentiable functions of the scalar $t$. Assume that $A(0)$ is nonsingular and all its principal submatrices are nonsingular. Show that for sufficiently small $t$ there exists an LU factorization $A(t) = L(t)U(t)$, and that the factors $L(t)$ and $U(t)$ are also continuously differentiable.

8. If $A$ is a real $m \times n$ matrix and $U$, $V$ are orthogonal matrices such that $U^T A V = \operatorname{diag}(\sigma_1, \ldots, \sigma_p)$, $p = \min\{m, n\}$ (the singular value decomposition of $A$), then
$$\min_{\operatorname{rank}(B) = k} \|A - B\|_{\ell_2} = \|A - A_k\|_{\ell_2} = \sigma_{k+1}, \qquad \text{for } k < r = \operatorname{rank}(A).$$
Here $A_k = \sum_{i=1}^{k} \sigma_i u_i v_i^T$, and $u_i$ and $v_i$ are the columns of $U$ and $V$, respectively.

9. Let $A \in \mathbb{R}^{n \times n}$ and $B \in \mathbb{R}^{n \times n}$. Find
$$\frac{d}{dt} \det(A + tB)\Big|_{t=0}.$$

10. Let $A \in \mathbb{R}^{m \times n}$, $m \ge n$, have rank $n$, and let $b \in \mathbb{R}^m$.

a) Show that there exists a unique $x \in \mathbb{R}^n$ minimizing $\|Ax - b\|_{\ell_2}^2$.

b) Show that the matrix $A^T A$ is invertible and $x = (A^T A)^{-1} A^T b$.

c) Given the QR factorization of $A$, write down the solution of the least squares problem in terms of the components of the factorization.
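The formula of Problem 10b can be checked numerically. Below is a minimal sketch for the case $n = 2$, where the inverse of $A^T A$ can be written out explicitly; the $3 \times 2$ system is an illustrative choice, not from the exam:

```python
def transpose(A):
    return [list(col) for col in zip(*A)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def lstsq_normal(A, b):
    """Solve min ||Ax - b||_2 via the normal equations x = (A^T A)^{-1} A^T b
    (Problem 10b); assumes A is m x 2 of rank 2, so the 2x2 inverse is explicit."""
    At = transpose(A)
    M = matmul(At, A)                                           # A^T A (2 x 2)
    rhs = [sum(r * bi for r, bi in zip(row, b)) for row in At]  # A^T b
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]                 # > 0 since rank(A) = 2
    return [(M[1][1] * rhs[0] - M[0][1] * rhs[1]) / det,
            (-M[1][0] * rhs[0] + M[0][0] * rhs[1]) / det]

# overdetermined 3x2 system; b = A(1, 2)^T is consistent, so the residual is zero
A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
b = [1.0, 2.0, 3.0]
x = lstsq_normal(A, b)   # (1, 2)
```

In practice one would use the QR route of part c) instead, since forming $A^T A$ squares the condition number.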

11. Let $A = (a_{ij})$ be an SPD matrix of order $n$. After one step of Gaussian elimination, $A$ is converted to a matrix of the form
$$\begin{pmatrix} a_{11} & a_1^T \\ 0 & \tilde{A} \end{pmatrix}.$$
Show that the $(n-1) \times (n-1)$ matrix $\tilde{A}$ is SPD.

12. Let $P \in \mathbb{R}^{n \times m}$, $Q \in \mathbb{R}^{m \times n}$, and $A \in \mathbb{R}^{n \times n}$. Assuming that $A$ and $(I + QA^{-1}P)$ are invertible, prove that $(A + PQ)$ is invertible and
$$(A + PQ)^{-1} = A^{-1} - A^{-1}P(I + QA^{-1}P)^{-1}QA^{-1}.$$

13. Let $A \in \mathbb{C}^{n \times n}$, and define
$$\rho(A) = \max_{1 \le j \le n} |\lambda_j(A)|, \qquad \alpha(A) = \max_{1 \le j \le n} \operatorname{Re} \lambda_j(A).$$
Prove that:

a) There exist a unitary matrix $Q$ and an upper triangular matrix $T$ such that $A = QTQ^*$.

b) $\lim_{k \to \infty} A^k = 0$ iff $\rho(A) < 1$.

c) $\lim_{t \to \infty} \exp(tA) = 0$ iff $\alpha(A) < 0$.

14. Let $x \in \mathbb{R}^n$ and $y \in \mathbb{R}^n$ be such that $\|x\|_{\ell_2} = \|y\|_{\ell_2}$.

a) Prove that there exists a symmetric orthogonal matrix $H$ such that $Hx = y$.

b) Show that for a given $A \in \mathbb{R}^{m \times n}$ one can construct symmetric orthogonal matrices (reflections) $H^{(k)}$, $k = 1, \ldots, l$, such that
$$A^{(l+1)} = H^{(l)} H^{(l-1)} \cdots H^{(1)} A, \qquad l \le \min(m-1, n),$$
has row echelon structure, that is, $A^{(l+1)}_{kj} = 0$ for $j < k$.

15. Let $A \in \mathbb{R}^{n \times n}$ be a strictly diagonally dominant square matrix and let $C_L$ be the lower triangle of $A$ (including the diagonal). Prove that
$$\rho(I - C_L^{-1}A) \le \max_{1 \le k \le n} \delta_k, \qquad \text{where} \qquad \delta_k = \left( \sum_{j=k+1}^{n} |a_{kj}| \right) \Big/ \left( |a_{kk}| - \sum_{j=1}^{k-1} |a_{kj}| \right).$$
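The identity of Problem 12 can be verified numerically. A minimal sketch for the rank-one case $m = 1$ with $A$ diagonal, so both inverses are trivial to apply; the specific numbers are made up for illustration:

```python
def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def woodbury_solve(Adiag, p, q, b):
    """Apply (A + p q^T)^{-1} to b via the formula of Problem 12 with m = 1:
    (A + pq^T)^{-1} b = A^{-1}b - A^{-1}p (1 + q^T A^{-1} p)^{-1} q^T A^{-1} b.
    Here A is diagonal, so A^{-1} is applied entrywise."""
    Ainv_b = [bi / d for bi, d in zip(b, Adiag)]
    Ainv_p = [pi / d for pi, d in zip(p, Adiag)]
    s = 1.0 + sum(qi * api for qi, api in zip(q, Ainv_p))   # I + Q A^{-1} P (scalar)
    c = sum(qi * abi for qi, abi in zip(q, Ainv_b)) / s     # (I + QA^{-1}P)^{-1} Q A^{-1} b
    return [abi - api * c for abi, api in zip(Ainv_b, Ainv_p)]

# check: (A + p q^T) x should reproduce b
Adiag = [2.0, 5.0]
p, q = [1.0, 3.0], [4.0, 1.0]
b = [1.0, 2.0]
x = woodbury_solve(Adiag, p, q, b)
M = [[2.0 + 4.0, 1.0], [12.0, 5.0 + 3.0]]   # A + p q^T, assembled explicitly
residual = [abs(mb - bi) for mb, bi in zip(matvec(M, x), b)]
```

The point of the identity is that when $m \ll n$ it replaces an $n \times n$ solve by one solve with $A$ plus a small $m \times m$ solve.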

16. Let $A$ and $B$ be symmetric real matrices.

a) Prove that all tridiagonal symmetric Toeplitz matrices commute with each other.

b) Let $A$ and $B$ be SPD matrices. Prove that all the eigenvalues of $AB$ are positive.

17. Consider the $LDL^t$ factorization of an $n \times n$ tridiagonal positive definite Toeplitz matrix $A$:
$$A = LDL^t, \qquad L = \begin{pmatrix} 1 & 0 & \cdots & 0 & 0 \\ l_{21} & 1 & \cdots & 0 & 0 \\ \vdots & & \ddots & & \vdots \\ 0 & 0 & \cdots & l_{n,n-1} & 1 \end{pmatrix}, \qquad D = \begin{pmatrix} d_1 & 0 & \cdots & 0 \\ 0 & d_2 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & d_n \end{pmatrix}.$$
Show that $d_n$ and $l_{n,n-1}$ converge as $n \to \infty$.
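For Problem 17, equating entries in $A = LDL^t$ gives a scalar recursion: if the matrix has diagonal entry $a$ and off-diagonal entry $c$, then $d_1 = a$, $l_{k+1,k} = c/d_k$, and $d_{k+1} = a - c^2/d_k$. A quick numerical sketch of this recursion; the choice $a = 2$, $c = -1$ (the 1D discrete Laplacian, which is SPD) is illustrative, not from the exam:

```python
def ldl_tridiag_toeplitz(a, c, n):
    """Scalar recursion for the LDL^t factors of the n x n tridiagonal
    Toeplitz matrix with diagonal a and off-diagonals c:
    d_1 = a,  l_{k+1,k} = c / d_k,  d_{k+1} = a - c**2 / d_k."""
    d = [float(a)]
    l = []
    for _ in range(n - 1):
        l.append(c / d[-1])
        d.append(a - c * c / d[-1])
    return d, l

# a = 2, c = -1: one can check by induction that d_k = (k + 1) / k -> 1,
# so l_{k+1,k} = -k / (k + 1) -> -1, illustrating the convergence claim
d, l = ldl_tridiag_toeplitz(2.0, -1.0, 50)
```

For this example $d_{50} = 51/50 = 1.02$ and $l_{50,49} = -49/50 = -0.98$, consistent with the limits $d_n \to 1$ and $l_{n,n-1} \to -1$.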