MAE 9A / FALL 3, Maurício de Oliveira

MIDTERM

Instructions: You have 75 minutes. This exam is open notes and books. No computers, calculators, phones, etc. There are 3 questions for a total of 45 points and a bonus question with 5 bonus points. Good luck!

Questions:

1. [20 points] Consider the matrix and vectors A, b, x. Answer the following questions:

a) [2 points] Verify that x solves the equation Ax = b.

Multiply A by x to verify that Ax = b. (+2 points)

b) [2 points] Compute the LU decomposition A = LU or explain why one does not exist.

The LU decomposition does not exist (+1 point) because the first pivot, the a_11 entry, is zero. (+1 point)

c) [8 points] Compute the LU decomposition PA = LU with partial pivoting. Use the computed decomposition to solve the system of equations Ax = b.

First iteration: P_1 swaps the rows so that the largest first-column entry, 4, becomes the pivot; the remaining first-column entries are already zero, so M_1 = I and M_1 P_1 A = P_1 A. (+1.5 points)

Second iteration: P_2 = I and M_2 eliminates the (3,2) entry with the multiplier 1/4, so that U = M_2 P_2 M_1 P_1 A is upper triangular. (+1.5 points)

Extracting the factors: P = P_2 P_1 and L is unit lower triangular, carrying the multiplier 1/4 in its (3,2) entry. (+1 point)

To solve Ax = b we note that PAx = LUx = Pb and solve Ly = Pb, then Ux = y.

Computing y by forward substitution on Ly = Pb (+1 point): y_1 and y_2 are read off directly, and y_3 = (Pb)_3 - (1/4) y_2 = 1/2. (+1 point)

Then computing x by back substitution on Ux = y (+1 point): x_3 = 4(1/2) = 2, then x_2 = (y_2 - u_23 x_3)/4, and x_1 from the first row. (+1 point)
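A quick numerical check of parts b) and c) can be scripted. The exam's actual entries did not survive transcription, so the sketch below uses a hypothetical 3 x 3 matrix with a zero (1,1) entry (so plain LU fails at the very first pivot) and SciPy's partial-pivoting LU routines:

```python
# Sketch (not the exam's data): LU with partial pivoting via SciPy.
# A is a hypothetical matrix with a zero (1,1) entry, so A = LU
# without row exchanges breaks down at the first pivot.
import numpy as np
from scipy.linalg import lu, lu_factor, lu_solve

A = np.array([[0.0, 1.0, 2.0],
              [4.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])   # hypothetical entries
b = np.array([5.0, 6.0, 3.0])     # hypothetical right-hand side

# Part b): the first pivot a_11 is zero.
print("a11 =", A[0, 0])

# Part c): partial pivoting. SciPy returns A = P @ L @ U,
# i.e. P.T @ A = L @ U with L unit lower triangular.
P, L, U = lu(A)
print(np.allclose(P.T @ A, L @ U))  # True

# Solve Ax = b through the factorization (forward then back substitution).
lu_piv = lu_factor(A)               # combined LU factors with pivot indices
x = lu_solve(lu_piv, b)
print(np.allclose(A @ x, b))        # True
```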

d) [8 points] Compute the LU decomposition PAQ = LU with complete pivoting. Use the computed decomposition to solve the system of equations Ax = b.

First iteration: P_1 and Q_1 swap rows and columns so that the entry of largest magnitude, 4, becomes the pivot; M_1 then eliminates the entries below it, introducing the multiplier 1/4. (+1.5 points)

Second iteration: P_2 = Q_2 = M_2 = I, so U = M_2 P_2 M_1 P_1 A Q_1 Q_2 is already upper triangular. (+1 point)

Extracting the factors: P = P_2 P_1, Q = Q_1 Q_2, and L is unit lower triangular, carrying the multiplier 1/4 below the diagonal. (+1 point)

To solve Ax = b we note that PAQ(Q^T x) = LU(Q^T x) = Pb, so we solve Ly = Pb, then Uz = y, and finally recover x = Qz.

Computing y by forward substitution on Ly = Pb (+1 point): y_1 and y_2 are read off directly, and y_3 = (Pb)_3 - (1/4) y_2 = 1/2. (+1 point)

Then computing z by back substitution on Uz = y (+1 point): z_3 = 4(1/2) = 2, and z_2, z_1 follow from the remaining rows. (+1 point)

Finally, x = Qz undoes the column permutation. (+0.5 point)
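SciPy does not ship a complete-pivoting routine, so the sketch below hand-rolls PAQ = LU as described above. The matrix is again hypothetical, since the original entries were lost:

```python
# Sketch: LU with complete pivoting, P @ A @ Q = L @ U.
# At each step the pivot is the largest-magnitude entry of the
# remaining submatrix; P records row swaps, Q records column swaps.
import numpy as np
from scipy.linalg import solve_triangular

def lu_complete(A):
    A = A.astype(float).copy()
    n = A.shape[0]
    P, Q, L = np.eye(n), np.eye(n), np.eye(n)
    for k in range(n - 1):
        # locate the largest remaining entry
        sub = np.abs(A[k:, k:])
        i, j = np.unravel_index(np.argmax(sub), sub.shape)
        i += k; j += k
        # swap rows and columns of A, with bookkeeping in P, Q, L
        A[[k, i], :] = A[[i, k], :]; P[[k, i], :] = P[[i, k], :]
        A[:, [k, j]] = A[:, [j, k]]; Q[:, [k, j]] = Q[:, [j, k]]
        L[[k, i], :k] = L[[i, k], :k]   # keep earlier multipliers aligned
        # eliminate below the pivot
        m = A[k+1:, k] / A[k, k]
        L[k+1:, k] = m
        A[k+1:, k:] -= np.outer(m, A[k, k:])
    return P, Q, L, np.triu(A)

A = np.array([[0.0, 1.0, 2.0],
              [4.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])   # hypothetical entries
b = np.array([5.0, 6.0, 3.0])

P, Q, L, U = lu_complete(A)
print(np.allclose(P @ A @ Q, L @ U))   # True

# Solve Ax = b: Ly = Pb, Uz = y, then x = Qz.
y = solve_triangular(L, P @ b, lower=True)
z = solve_triangular(U, y)
x = Q @ z
print(np.allclose(A @ x, b))           # True
```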

2. [10 points] Consider the 2 x 2 matrices A and B, whose entries depend on a parameter ρ. Answer the following questions:

a) [5 points] Using as matrix norm the ∞-induced norm, compute κ(A) and κ(B) when 0 < ρ is small. Hint: use the formula for the inverse of a 2 x 2 matrix. Recall that κ(X) = ||X|| ||X^{-1}||.

The ∞-induced norm is the maximum absolute row sum, which gives ||A|| and ||B||, both of order one. (+1 point) Using the formula provided, A^{-1} and B^{-1} follow directly (+1 point), and their maximum absolute row sums give ||A^{-1}|| = O(1/ρ) and ||B^{-1}|| = O(1). (+1 point) Therefore

κ(A) = ||A|| ||A^{-1}|| = O(1/ρ), (+1 point)

κ(B) = ||B|| ||B^{-1}|| = O(1). (+1 point)
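The calculation can be illustrated numerically. The exam's matrices were lost in transcription, so the sketch below uses a hypothetical pair with the same character: A(ρ) nearly singular, with κ(A) = O(1/ρ), and B(ρ) well conditioned, with κ(B) = O(1):

```python
# Sketch: infinity-norm condition numbers for a hypothetical
# ill-conditioned / well-conditioned pair of 2 x 2 matrices.
import numpy as np

def kappa_inf(X):
    # kappa(X) = ||X|| * ||X^-1|| in the infinity-induced norm
    return np.linalg.norm(X, np.inf) * np.linalg.norm(np.linalg.inv(X), np.inf)

for rho in [1e-1, 1e-3, 1e-6]:
    A = np.array([[1.0, 1.0 + rho],
                  [1.0, 1.0]])       # nearly singular: kappa = (2+rho)^2/rho
    B = np.array([[1.0, rho],
                  [rho, 1.0]])       # kappa = (1+rho)/(1-rho), about 1
    print(f"rho={rho:.0e}  kappa(A)={kappa_inf(A):.2e}  kappa(B)={kappa_inf(B):.2e}")
```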

b) [5 points] Provide a short explanation of the implications of the above calculations for the accuracy of solutions to the matrix equations Ax = b and Bx = b computed by a backward stable linear algebra algorithm when ρ > 0 is small. Hint: Recall that a backward stable algorithm is one which calculates the exact solution of the perturbed problem (A + δA) y = b + δb, where ||δA|| ≤ ε ||A||, ||δb|| ≤ ε ||b||.

Based on sensitivity analysis we know the calculated solution y will have a relative error

||y - x|| / ||x|| = O(ε κ(X)), (+1 point)

where x is the solution to the linear equations Xx = b with X = A or X = B.

For X = A we have ||y - x|| / ||x|| = O(ε κ(A)) = O(ε/ρ) (+1 point); therefore we should expect large relative errors when ρ is small, since ε/ρ is much larger than ε for fixed ε. (+1 point)

For X = B we have ||y - x|| / ||x|| = O(ε κ(B)) = O(ε) (+1 point); therefore we should expect small relative errors even when ρ is small. (+1 point)
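A small experiment makes the point concrete: solve Xx = b, then solve the problem with b perturbed at relative size ε (which is what a backward stable solver effectively delivers) and compare the observed relative error against the εκ(X) prediction. The matrices are the same hypothetical pair used in the sketch above:

```python
# Sketch: relative error of solving X x = b under a relative
# perturbation of size eps in b, compared against eps * kappa(X).
import numpy as np

rng = np.random.default_rng(0)
eps, rho = 1e-10, 1e-6
b = np.array([1.0, -1.0])

for name, X in [("A", np.array([[1.0, 1.0 + rho], [1.0, 1.0]])),
                ("B", np.array([[1.0, rho], [rho, 1.0]]))]:
    x = np.linalg.solve(X, b)
    db = eps * np.linalg.norm(b, np.inf) * rng.standard_normal(2)
    y = np.linalg.solve(X, b + db)            # exact solve of perturbed problem
    rel_err = np.linalg.norm(y - x, np.inf) / np.linalg.norm(x, np.inf)
    bound = eps * np.linalg.cond(X, np.inf)   # O(eps * kappa) prediction
    print(f"{name}: relative error {rel_err:.1e}  vs  eps*kappa {bound:.1e}")
```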

3. [15 points] The matrix

P(v) = I - v v^T / (v^T v)

is called a projector.

a) [2 points] Show that y = P(v) x = x - α v, where α = v^T x / (v^T v).

y = P(v) x = (I - v v^T / (v^T v)) x = x - (v^T x / (v^T v)) v = x - α v. (+2 points)

b) [2 points] Show that P(v) v = 0.

With x = v we have α = (v^T v) / (v^T v) = 1 (+1 point) and therefore P(v) v = v - α v = 0. (+1 point)

c) [2 points] Show that if y = P(v) x then v ⊥ y.

Calculate the inner product: v^T y = v^T P(v) x = v^T x - (v^T x / (v^T v)) v^T v = 0. (+2 points)

d) [2 points] Show that if v ⊥ x then P(v) x = x.

When v ⊥ x then v^T x = 0, so α = 0 (+1 point) and therefore P(v) x = x - α v = x. (+1 point)

e) [2 points] Prove that P(v) is not full rank unless v = 0.

For any v ≠ 0, item b) gives P(v) v = 0, so P(v) has a nontrivial null space and hence is not full rank. (+1 point) With v = 0 we have P(v) = I, which is full rank. (+1 point)
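Properties a) through e) are easy to confirm numerically. A minimal sketch with NumPy, using random test vectors rather than anything from the exam:

```python
# Sketch: numerical check of the projector properties proved above,
# P(v) = I - v v^T / (v^T v), on random test vectors.
import numpy as np

def P(v):
    v = np.asarray(v, dtype=float)
    return np.eye(len(v)) - np.outer(v, v) / (v @ v)

rng = np.random.default_rng(1)
v = rng.standard_normal(4)
x = rng.standard_normal(4)

y = P(v) @ x
alpha = (v @ x) / (v @ v)
print(np.allclose(y, x - alpha * v))   # a) y = x - alpha v
print(np.allclose(P(v) @ v, 0))        # b) P(v) v = 0
print(np.isclose(v @ y, 0))            # c) v is orthogonal to y
u = x - alpha * v                      # u is orthogonal to v by c)
print(np.allclose(P(v) @ u, u))        # d) v ⊥ u  =>  P(v) u = u
print(np.linalg.matrix_rank(P(v)))     # e) rank 3 < 4: not full rank
```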

f) [5 points] For the given vectors v, x_1, and x_2 = (3, 4), compute P(v), y_1 = P(v) x_1, and y_2 = P(v) x_2. Draw a Cartesian plane representing the vectors v, x_1, x_2, y_1 and y_2. What is the geometric interpretation of the operation P(v) x?

Calculating

P(v) = I - v v^T / (v^T v), (+1 point)

y_1 = P(v) x_1, (+1 point)

y_2 = P(v) x_2. (+1 point)

[Figure: the vectors v, x_1, x_2, y_1 and y_2 drawn in the Cartesian plane.]

The picture reveals that P(v) x is the orthogonal projection of x onto the line through the origin perpendicular to v, here the y axis. (+2 points)
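The original values of v and x_1 did not survive transcription, so the sketch below picks the hypothetical v = (1, 0), together with x_2 = (3, 4) from the statement, to reproduce the geometric picture of an orthogonal projection onto the y axis:

```python
# Sketch: 2-D picture of part f) with a hypothetical v = (1, 0);
# x2 = (3, 4) is from the statement. P(v) removes the component
# of x along v, projecting orthogonally onto the y axis here.
import numpy as np

v = np.array([1.0, 0.0])                 # hypothetical choice
x2 = np.array([3.0, 4.0])
Pv = np.eye(2) - np.outer(v, v) / (v @ v)
print(Pv)                                # [[0, 0], [0, 1]]
print(Pv @ x2)                           # [0, 4]: x2 projected onto the y axis
```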

4. [5 bonus points] Every complex matrix A ∈ C^{m×n}, m ≥ n, admits the complex QR decomposition A = QR, where Q ∈ C^{m×n} is such that Q^* Q = I and R ∈ C^{n×n} is upper triangular.

a) [2 bonus points] Show that the matrices Q̂ = Q Θ, R̂ = Θ^* R, where Θ ∈ C^{n×n} is a diagonal matrix with diagonal entries Θ_ii = e^{j θ_i}, θ_i ∈ [0, 2π), i = 1, ..., n, and j = √-1, also constitute a QR decomposition of A.

Note that Θ Θ^* is a diagonal n × n matrix with diagonal entries

Θ_ii (Θ_ii)^* = e^{j θ_i} e^{-j θ_i} = 1, i = 1, ..., n.

That is, Θ Θ^* = Θ^* Θ = I, so Θ is unitary. (+1 point) Therefore

Q̂ R̂ = Q Θ Θ^* R = Q R = A, Q̂^* Q̂ = Θ^* Q^* Q Θ = Θ^* Θ = I, (+1 point)

and this is a QR decomposition because R̂ = Θ^* R is still upper triangular.

b) [2 bonus points] Explain how item a) can be used to construct a matrix R̂ which has real and positive diagonal entries, such as the one produced by the Gram-Schmidt algorithm.

In general r_ii will be a complex number. Represent r_ii in polar form, r_ii = |r_ii| e^{j ∠r_ii}. With the choice Θ_ii = e^{j ∠r_ii} (+1 point), the product Θ^* R is such that the diagonal entries of R̂ are

r̂_ii = e^{-j ∠r_ii} |r_ii| e^{j ∠r_ii} = |r_ii| ≥ 0. (+1 point)

c) [1 bonus point] What are the possible values Θ can assume when A, Q, R, Q̂ and R̂ are real matrices?

In order for Q and Q Θ to be real we need Θ to be real. This is only possible if Θ_ii = e^{j θ_i} with θ_i ∈ {0, π}, i = 1, ..., n, in which case each Θ_ii is equal to either 1 or -1. (+1 point)
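Items a) and b) translate directly into code: numpy.linalg.qr makes no promise about the phases of diag(R), and the rescaling fixes exactly that. A minimal sketch with a random complex matrix (not from the exam):

```python
# Sketch: rescale a complex QR factorization so that R has a real,
# positive diagonal, as in items a) and b) above.
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 3)) + 1j * rng.standard_normal((5, 3))

Q, R = np.linalg.qr(A)             # reduced QR: Q is 5x3, R is 3x3
theta = np.exp(1j * np.angle(np.diag(R)))   # Theta_ii = e^{j angle(r_ii)}
Theta = np.diag(theta)

Q_hat = Q @ Theta                  # Q_hat = Q Theta
R_hat = Theta.conj().T @ R         # R_hat = Theta^* R

print(np.allclose(Q_hat @ R_hat, A))                   # still factors A
print(np.allclose(Q_hat.conj().T @ Q_hat, np.eye(3)))  # Q_hat^* Q_hat = I
print(np.allclose(np.diag(R_hat).imag, 0))             # diagonal is real...
print(np.all(np.diag(R_hat).real > 0))                 # ...and positive
```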