UNIVERSITY of LIMERICK
OLLSCOIL LUIMNIGH
Faculty of Science & Engineering
Department of Mathematics & Statistics

END OF SEMESTER ASSESSMENT PAPER

MODULE CODE: MS4105
SEMESTER: Autumn 2015
MODULE TITLE: Linear Algebra 2
DURATION OF EXAMINATION: 2 1/2 hours
LECTURER: Dr. J. Kinsella
PERCENTAGE OF TOTAL MARKS: 70%
EXTERNAL EXAMINER: Prof. J. King

INSTRUCTIONS TO CANDIDATES: Answer four questions correctly for full marks (70%).

1 (a) Prove the following Lemma: Let V be a finite-dimensional vector space. Let L = {l_1, ..., l_n} be a linearly independent set in V and let S = {s_1, ..., s_m} be a second subset of V which spans V. Then n ≤ m. (10 marks)

Hint: Write each of the l_i in L as a linear combination of the spanning set S and use the fact that a homogeneous linear system A^T c = 0 with more unknowns than equations has non-trivial solutions, where A is the coefficient matrix for expressing the vectors in L in terms of the vectors in S.

(b) Show that this result leads to a definition for the dimension of a vector space. (2 marks)

(c) Let S and T be subsets of a vector space V such that S ⊆ T.

(i) Prove that if S is linearly dependent then so is T. (2 marks)
(ii) Prove that (i) is equivalent to the statement that if T is linearly independent, so is S. (1 mark)

(d) The Adding/Removing Theorem states that:

Theorem 0.1 Let S be a non-empty set of vectors in a vector space V. Then:
(i) if S is a linearly independent set and if v ∈ V is outside span(S) (i.e. v cannot be expressed as a linear combination of the vectors in S) then the set S ∪ {v} is still linearly independent (i.e. adding v to the list of vectors in S does not affect the linear independence of S);
(ii) if v ∈ S is a vector that is expressible as a linear combination of other vectors in S, and if S\v means S with the vector v removed, then S and S\v span the same space, i.e. span(S) = span(S\v).

Use the Adding/Removing Theorem to prove one of the following: (10 marks)

Theorem 0.2 If V is an n-dimensional vector space and if S is a set in V with exactly n elements then S is a basis for V if either
(i) S spans V,
(ii) or S is linearly independent.

(You may assume that all bases for a given vector space have the same number of elements.)
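
A numerical aside (illustration, not part of the paper): the fact used in the hint, that a homogeneous system with more unknowns than equations has a non-trivial solution, can be checked with numpy's SVD. The 2x3 coefficient matrix below is an arbitrary choice for illustration.

    import numpy as np

    # An underdetermined homogeneous system A^T c = 0: here A^T is 2x3,
    # i.e. 2 equations in 3 unknowns, so a non-trivial solution must exist.
    At = np.array([[1.0, 2.0, 3.0],
                   [4.0, 5.0, 6.0]])

    # The right-singular vectors belonging to zero singular values of A^T
    # span its null space; the last row of Vh is such a vector here.
    _, s, Vh = np.linalg.svd(At)
    c = Vh[-1]

    print(c)        # a non-zero vector
    print(At @ c)   # numerically zero: c solves A^T c = 0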

2 (a) Show that for any x ∈ R^n,

(i) ‖x‖_1 ≤ √n ‖x‖_2 (6 marks)
(ii) ‖x‖_2 ≤ ‖x‖_1. (2 marks)

(b) Given a choice of norms on C^m and C^n, the induced matrix norm of the m×n matrix A is defined as

‖A‖ ≡ sup_{‖x‖=1} ‖Ax‖ = sup_{x≠0} ‖Ax‖ / ‖x‖.

(i) Explain carefully why the two conditions
‖Ax‖ ≤ B, for all unit vectors x,
‖Ax_0‖ = B, for some unit vector x_0
imply that ‖A‖ = B. (3 marks)
(ii) Show, using the definition of an induced matrix norm and the results in Q.2(a), that for any m×n matrix A, ‖A‖_1 ≤ √m ‖A‖_2. (3 marks)

(c) Show that for any m×n matrix A, the eigenvalues of A^*A are real and non-negative. (2 marks)

(d) Using the two conditions in part (b)(i), show that the induced 2-norm ‖A‖_2 of an m×n matrix A can be calculated as the square root of the largest eigenvalue of A^*A:

‖A‖_2 = √(λ_max(A^*A)). (6 marks)

(e) Use the result found in part (d) to find the induced 2-norm of the 3×2 matrix

A = [  3  4 ]
    [ -1  7 ]
    [  1  3 ]

(i) First show that the eigenvalues of A^*A are 75 and 10. (Just check that λ_1 = 75 and λ_2 = 10 are roots of the characteristic polynomial of A^*A.) (2 marks)
(ii) Why does this imply that ‖A‖_2 = √75? (1 mark)
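
A numerical aside (illustration, not part of the paper): the claims in (a), (b)(ii) and (d) can be sanity-checked in numpy, using the matrix from part (e); numpy's norm(A, 1) and norm(A, 2) compute the induced 1- and 2-norms.

    import numpy as np

    # The matrix from part (e).
    A = np.array([[3.0, 4.0],
                  [-1.0, 7.0],
                  [1.0, 3.0]])
    m, n = A.shape

    # (a): for x in R^n, ||x||_2 <= ||x||_1 <= sqrt(n) ||x||_2.
    x = np.array([1.0, -2.0])
    assert np.linalg.norm(x, 2) <= np.linalg.norm(x, 1) \
           <= np.sqrt(n) * np.linalg.norm(x, 2)

    # (d): ||A||_2 is the square root of the largest eigenvalue of A^*A.
    lam = np.linalg.eigvalsh(A.T @ A)    # eigenvalues, ascending: [10, 75]
    print(np.sqrt(lam[-1]), np.linalg.norm(A, 2))   # both sqrt(75) ~ 8.6603

    # (b)(ii): the induced-norm inequality ||A||_1 <= sqrt(m) ||A||_2.
    assert np.linalg.norm(A, 1) <= np.sqrt(m) * np.linalg.norm(A, 2)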

3 (a) Show that for every m×n complex matrix A we can write

U^*AV = [ σ  0 ]        (1)
        [ 0  B ]

where σ = ‖A‖, B is an (m−1)×(n−1) matrix, U = [y_0  U_1] and V = [x_0  V_1]; the unit vector x_0 satisfies ‖Ax_0‖ = ‖A‖ and y_0 = Ax_0/σ. The matrices U_1 and V_1 are chosen so that U and V are unitary, m×m and n×n respectively. (12 marks)

(The vector and induced matrix 2-norm is used throughout this question. You may assume that ‖OA‖ = ‖A‖ for any unitary matrix O.)

(b) Using part (a), prove by induction on the size of A that every m×n complex matrix A has a Singular Value Decomposition (SVD) A = UΣV^*, where U is m×m unitary, V is n×n unitary and Σ is an m×n diagonal matrix of singular values. (6 marks)

(c) Use the SVD to show that for any m×n matrix A the singular values σ_i (the diagonal elements of Σ) may be found by computing the eigenvalues λ_i of A^*A and taking square roots, i.e. σ_i = √λ_i, and that the unitary n×n matrix V has the eigenvectors of A^*A as its columns. (2 marks)

(d) The reduced SVD expresses A as A = ÛΣ̂V^*, where Û consists of the first n columns of U and Σ̂ of the first n rows of Σ. Derive the equation AV = ÛΣ̂ and explain how it may be used to find the reduced matrix Û. (2 marks)

(e) Given the matrix

A = [ 0  7 ]
    [ 4  0 ]
    [ 0  5 ]

find a reduced SVD of A using any method you wish. (Remember that the singular values are in non-increasing order: σ_1 ≥ σ_2 ≥ ... ≥ σ_n.) (3 marks)
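
A numerical aside (illustration, not part of the paper): numpy's svd with full_matrices=False returns exactly the reduced factors Û, Σ̂, V^* of part (d), so it can be used to check a hand computation for (e).

    import numpy as np

    A = np.array([[0.0, 7.0],
                  [4.0, 0.0],
                  [0.0, 5.0]])

    # Reduced SVD: U_hat is 3x2, S holds the singular values in
    # non-increasing order, Vh is the 2x2 matrix V^*.
    U_hat, S, Vh = np.linalg.svd(A, full_matrices=False)

    print(S)                                      # [sqrt(74), 4], non-increasing
    print(np.sqrt(np.linalg.eigvalsh(A.T @ A)))   # same values, ascending order
    print(np.allclose(A, U_hat @ np.diag(S) @ Vh))           # True
    print(np.allclose(A @ Vh.conj().T, U_hat @ np.diag(S)))  # AV = U_hat S_hat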

4 Recall that a complex m×m matrix P is a projection operator if P^2 = P.

(a) Show that if λ is an eigenvalue of a projection operator P then either λ = 0 or λ = 1. (1 mark)

(b) Given an m×n complex matrix A with m ≥ n, show that A^*A is invertible if and only if A has full rank (remember that a square matrix is invertible if and only if Ax = 0 implies x = 0). (6 marks)

(c) Given a linearly independent set of vectors {a_1, ..., a_n} in C^m, let A be the m×n matrix whose j-th column is a_j. Use the result in (b) to show that P, the orthogonal projection operator onto the range of A, is given by the formula (6 marks)

P = A(A^*A)^{-1}A^*.

(d) Let A be an m×n complex matrix with m ≥ n and let b ∈ C^m be given. For any x ∈ R^n, define the residual r by r ≡ Ax − b. Show that the following four conditions are equivalent (show that the first implies the second, etc.): (8 marks)

r ⊥ range(A)        (2)
A^*r = 0            (3)
A^*Ax = A^*b        (4)
Pb = Ax             (5)

where P ∈ C^{m×m} is the orthogonal projection operator onto the range of A found in part (c).

(e) Use the results in (d) to show that the vector x ∈ R^n that minimises ‖Ax − b‖_2^2 is just the vector x satisfying Pb = Ax, where P is the projection operator defined in (c) and referred to in (d). (4 marks)
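
A numerical aside (illustration, not part of the paper): the chain (2)-(5) is the basis of the normal-equations method for least squares. A small sketch, with randomly generated data of my own choosing:

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((6, 3))   # full-rank 6x3 matrix (m > n)
    b = rng.standard_normal(6)

    # Orthogonal projector onto range(A), as in part (c): P = A (A^*A)^{-1} A^*.
    P = A @ np.linalg.inv(A.T @ A) @ A.T

    # Solve the normal equations (4): A^*A x = A^*b.
    x = np.linalg.solve(A.T @ A, A.T @ b)

    r = A @ x - b                      # the residual of part (d)
    print(np.allclose(A.T @ r, 0))     # (3): A^*r = 0, i.e. r is orthogonal to range(A)
    print(np.allclose(P @ b, A @ x))   # (5): Pb = Ax
    print(np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0]))  # matches lstsq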

5 (a) For any vector v ∈ C^k, let the matrix H = I − 2P_v, where P_v = vv^*/(v^*v).

(i) Show with a sketch that the effect on an arbitrary vector x ∈ C^k of left-multiplying x by H is to reflect x in (I − P_v)x, the component of x normal to v in the x-v plane. (2 marks)
(ii) Find the choices of vector v that make Hx, the Householder reflection of x, return a multiple of e_1, where e_1 ∈ C^k is a vector of zeroes with one in the first position. (10 marks)
(iii) Which of the two choices found should be used and why? (1 mark)

(b) The following algorithm (Alg. 0.1) takes as its input an arbitrary m×n complex matrix A. Explain the effect of line 5 and relate it to the Householder reflection in part (a). (3 marks)

Algorithm 0.1 Householder QR Factorisation
(1) for k = 1 to n
(2)     x = A_{k:m,k}
(3)     v_k = x + sign(x_1) ‖x‖ e_1
(4)     v_k = v_k / ‖v_k‖
(5)     A_{k:m,k:n} = A_{k:m,k:n} − 2 v_k (v_k^* A_{k:m,k:n})
(6) end

(c) The work done in Alg. 0.1 is dominated by the (implicit) inner loop j = k:n over the columns of the submatrix A_{k:m,k:n} in line 5. Show that the total operation count W for the algorithm is W = 2mn^2 − (2/3)n^3 to leading order. (Just show that the coefficient of n^3 in W is −2/3 and that the coefficient of mn^2 is 2.) (9 marks)
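
A numerical aside (illustration, not part of the paper): a direct transcription of Alg. 0.1 into numpy, assuming a real input matrix for simplicity; the function name householder_qr and the sign(0) = +1 convention are choices of this sketch, not of the paper.

    import numpy as np

    def householder_qr(A):
        """Alg. 0.1 for a real m x n matrix A (m >= n): returns the list of
        Householder vectors v_k and the resulting upper-triangular R."""
        R = A.astype(float)          # astype copies, so A itself is untouched
        m, n = R.shape
        vs = []
        for k in range(n):                               # line (1)
            x = R[k:, k].copy()                          # line (2)
            sign = 1.0 if x[0] >= 0 else -1.0            # convention: sign(0) = +1
            v = x
            v[0] = v[0] + sign * np.linalg.norm(x)       # line (3)
            v = v / np.linalg.norm(v)                    # line (4)
            R[k:, k:] -= 2.0 * np.outer(v, v @ R[k:, k:])  # line (5): reflect columns
            vs.append(v)
        return vs, np.triu(R[:n, :])

    A = np.array([[3.0, 4.0], [-1.0, 7.0], [1.0, 3.0]])
    vs, R = householder_qr(A)
    # R agrees with numpy's QR factor up to the sign of each row.
    print(np.allclose(np.abs(R), np.abs(np.linalg.qr(A)[1])))   # True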

6 For any m×m matrix A, Gauss Elimination without pivoting consists of:

Algorithm 0.2 Gauss Elimination Without Pivoting, in words
(1) for k = 1 to m−1
(2)     Add suitable multiples of row k to the rows beneath
(3)     to introduce zeroes below the main diagonal in column k.
(4) end

(a) Show that each iteration of the above algorithm can be effected by left-multiplying A by a matrix L_k = I − l_k e_k^*, where l_k is the vector of multipliers for the k-th column of A (the first k entries of l_k are 0) and e_k is a vector in C^m with one in the k-th position and zeroes elsewhere. Give a simple formula for the non-zero entries of l_k. (5 marks)

(b) Show that for each k, L_k^{-1} = I + l_k e_k^*. (2 marks)

(c) Show that the matrix L = L_1^{-1} L_2^{-1} ... L_{m-1}^{-1} is just I + l_1 e_1^* + ... + l_{m-1} e_{m-1}^*. (2 marks)

(d) Explain briefly why the result in (c) means that L is lower triangular. (1 mark)

(e) What special structure does A have after the algorithm has completed? (1 mark)

(f) When partial pivoting is applied, we have

L_{m-1} P_{m-1} L_{m-2} P_{m-2} ... L_2 P_2 L_1 P_1 A = U,

where each P_j swaps row j with one of the rows j+1, ..., m (if necessary) to make the absolute value of the pivot A_{jj} as large as possible. Defining

Π_j = P_{m-1} P_{m-2} ... P_j,
L̃_j = Π_{j+1} L_j Π_{j+1}^{-1}  (with Π_m = I),

show that (6 marks)

L_{m-1} P_{m-1} L_{m-2} P_{m-2} ... L_2 P_2 L_1 P_1 = L̃_{m-1} L̃_{m-2} ... L̃_2 L̃_1 Π_1.

(g) Show that the LU factorisation A = LU (without pivoting) is now replaced by PA = LU (with pivoting), where P = Π_1 and L = L̃_1^{-1} L̃_2^{-1} ... L̃_{m-1}^{-1}. (2 marks)

(h) Finally, show that the matrix L is lower triangular, as it was in the no-pivoting case. (6 marks)
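
A numerical aside (illustration, not part of the paper): a sketch of the factorisation of Q.6 in numpy, with the row swaps of (f) and the multipliers l_k of (a) made explicit; it checks that PA = LU with L unit lower triangular. The test matrix is an arbitrary choice.

    import numpy as np

    def lu_partial_pivoting(A):
        """Return P, L, U with PA = LU and L unit lower triangular."""
        U = A.astype(float)          # astype copies, so A itself is untouched
        m = U.shape[0]
        L = np.eye(m)
        P = np.eye(m)
        for k in range(m - 1):
            # P_k: swap row k with the row holding the largest pivot candidate.
            p = k + np.argmax(np.abs(U[k:, k]))
            U[[k, p], k:] = U[[p, k], k:]
            L[[k, p], :k] = L[[p, k], :k]   # multipliers move with their rows
            P[[k, p], :] = P[[p, k], :]
            # L_k = I - l_k e_k^*: store the multipliers, eliminate below the pivot.
            L[k+1:, k] = U[k+1:, k] / U[k, k]
            U[k+1:, k:] -= np.outer(L[k+1:, k], U[k, k:])
        return P, L, U

    A = np.array([[2.0, 1.0, 1.0],
                  [4.0, 3.0, 3.0],
                  [8.0, 7.0, 9.0]])
    P, L, U = lu_partial_pivoting(A)
    print(np.allclose(P @ A, L @ U))                               # True
    print(np.allclose(np.tril(L), L), np.allclose(np.triu(U), U))  # True True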