CS 323: Numerical Analysis and Computing

MIDTERM #1

Instructions: This is an open-notes exam, i.e., you are allowed to consult any textbook, your class notes, homeworks, or any of the handouts from us. You are not permitted to use laptop computers, cell phones, tablets, or any other hand-held electronic devices.

Name: ____________________    NetID: ____________________

Part #1 ____  Part #2 ____  Part #3 ____  Part #4 ____  Part #5 ____  TOTAL ____

1. [24% = 4 questions × 6% each] MULTIPLE CHOICE SECTION. Circle or underline the correct answer (or answers). No justification is required for your answer(s).

(a) Which of the following statements regarding the cost of methods for solving an n × n linear system Ax = b are true?
    i. The cost of computing the LU factorization is generally proportional to n^2.
    ii. The cost of backward substitution on a dense upper triangular matrix is generally proportional to n^2.
    iii. If a matrix A has no more than 3 non-zero entries per row, the cost of each iteration of the Jacobi method is proportional to n.

(b) If an n × n matrix A is poorly conditioned (i.e., it has a very large condition number), then
    i. Solving Ax = b would be difficult with LU decomposition or Gaussian elimination, but iterative methods (Jacobi, Gauss-Seidel) would not have a problem.
    ii. Solving Ax = b accurately with iterative methods (Jacobi, Gauss-Seidel) would be difficult, but LU decomposition with pivoting would not have a problem.
    iii. Solving Ax = b accurately will be challenging regardless of the method we use.

(c) Consider the rectangular m × n matrix A (with m > n) and the vector b ∈ R^m. If x is the least squares solution to Ax ≈ b, can we say that x is an actual solution to Ax = b?
    i. Yes; in fact Ax = b has many solutions, and the least squares solution is the one with the smallest L2-norm of the residual vector, ||r||_2.
    ii. No; the system Ax = b will generally not have a solution. What we call the least squares solution is the vector x with the smallest L2-norm of the error vector, ||x - x_exact||_2.
    iii. No; the system Ax = b will generally not have a solution. What we call the least squares solution is the vector x with the smallest L2-norm of the residual vector, ||b - Ax||_2.

(d) Which of the following methods can be used for solving the system Ax = b, where A is a symmetric, diagonally dominant, square n × n matrix?
    i. LU factorization with full pivoting.
    ii. System of normal equations.
    iii. Gauss-Seidel method.
    iv. Jacobi method.
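For reference (not part of the exam), the operation counts behind choice (a) can be checked with a short sketch; the helper names here are my own, not anything from the course.

```python
import numpy as np

# Illustration only: back substitution does ~n^2/2 multiply-adds,
# while one Jacobi sweep does work proportional to the number of
# nonzero entries of A (so O(n) for a matrix with O(1) nonzeros per row).

def back_substitution(U, b):
    """Solve Ux = b for an upper triangular U, last row first."""
    n = len(b)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):        # inner dot product has n-1-i terms
        x[i] = (b[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

def jacobi_step(A, b, x):
    """One Jacobi update: x_new = D^{-1} (b - (A - D) x)."""
    D = np.diag(A)
    return (b - A @ x + D * x) / D

rng = np.random.default_rng(0)
U = np.triu(rng.standard_normal((5, 5))) + 5 * np.eye(5)
b = rng.standard_normal(5)
x_tri = back_substitution(U, b)
print(np.allclose(U @ x_tri, b))          # True

A = np.array([[4.0, 1, 0], [1, 4, 1], [0, 1, 4]])  # diagonally dominant
x_it = np.zeros(3)
for _ in range(60):
    x_it = jacobi_step(A, np.ones(3), x_it)
print(np.allclose(A @ x_it, np.ones(3), atol=1e-10))  # True
```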

2. [18% = 3 questions × 6% each] SHORT ANSWER SECTION. Answer each of the following questions in no more than 2-3 sentences.

(a) Consider the following matrix A whose LU factorization we wish to compute using Gaussian elimination:

        [ 4   8   1 ]
    A = [ 6   5   7 ]
        [ 0  10   3 ]

What will be the initial pivot element if (no explanation required):
    No pivoting is used?
    Partial pivoting is used?
    Full pivoting is used?

(b) State one defining property of a singular matrix A. Suppose that the linear system Ax = b has two distinct solutions x and y. Use the property you gave to prove that A must be singular.

(c) Mention one advantage of the Gauss-Seidel algorithm over the Jacobi algorithm and one disadvantage.
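For reference, the three pivot-selection rules in part (a) can be phrased concretely. This sketch deliberately uses a different matrix than the one above, so it does not give the answer away.

```python
import numpy as np

# Pivot selection for the FIRST elimination step, on an example matrix
# (not the exam's A).
B = np.array([[2.0, 9.0, 4.0],
              [7.0, 5.0, 3.0],
              [6.0, 1.0, 8.0]])

no_pivot = B[0, 0]                       # no pivoting: take a11 as-is

row = np.argmax(np.abs(B[:, 0]))         # partial: largest |entry| in column 1
partial_pivot = B[row, 0]

i, j = np.unravel_index(np.argmax(np.abs(B)), B.shape)
full_pivot = B[i, j]                     # full: largest |entry| anywhere

print(no_pivot, partial_pivot, full_pivot)  # 2.0 7.0 9.0
```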

3. [14%] Consider the five points:

    (x_1, y_1) = (-3, 1)    (x_2, y_2) = (-2, 1)    (x_3, y_3) = (0, 2)    (x_4, y_4) = (1, 3)    (x_5, y_5) = (3, 2)

(a) We want to determine a straight line y = c_1 x + c_0 that approximates these points as closely as possible, in the least squares sense. Write a least squares system Ax ≈ b which can be used to determine the coefficients c_1 and c_0.

(b) Solve this least squares system using the method of normal equations.
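As a reminder of the mechanics (shown on a different data set, so as not to give away part (b)), the normal-equations method forms A^T A c = A^T y and agrees with a library least-squares solver:

```python
import numpy as np

# Normal-equations sketch for fitting y = c1*x + c0; sample data only,
# not the exam's points.
xs = np.array([-1.0, 0.0, 2.0, 4.0])
ys = np.array([0.5, 1.0, 2.5, 4.0])

A = np.column_stack([xs, np.ones_like(xs)])    # rows [x_i, 1], unknowns (c1, c0)
c_normal = np.linalg.solve(A.T @ A, A.T @ ys)  # solve (A^T A) c = A^T y
c_lstsq, *_ = np.linalg.lstsq(A, ys, rcond=None)
print(np.allclose(c_normal, c_lstsq))          # True
```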


4. [18%] The general form of an iterative method for solving the system Ax = b is

    x^(k) = T x^(k-1) + c,

where the matrix T and the vector c are such that the equation x = Tx + c is equivalent to the original system Ax = b.

(a) If x̄ is the exact solution of the system Ax = b, show that

    x^(k) - x̄ = T (x^(k-1) - x̄).

(b) If r^(k) = b - A x^(k) is the residual vector after the k-th iteration of the method, show that

    r^(k) = A T A^(-1) r^(k-1).

Hint: Use the identity r^(k) = -A e^(k), or equivalently e^(k) = -A^(-1) r^(k). Here, e^(k) = x^(k) - x̄ is the error vector after the k-th iteration.

(c) Show that

    r^(k) = A T^k A^(-1) r^(0).
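The identities in parts (b) and (c) can be sanity-checked numerically. This sketch uses the Jacobi splitting T = I - D^(-1)A, c = D^(-1)b as one concrete choice of T and c; it illustrates the claims rather than proving them.

```python
import numpy as np

# Check r^(k) = A T A^{-1} r^(k-1) and r^(k) = A T^k A^{-1} r^(0)
# for the Jacobi iteration on a small diagonally dominant system.
A = np.array([[5.0, 1, 1], [1, 6, 2], [1, 2, 7]])
b = np.array([1.0, 2.0, 3.0])
D_inv = np.diag(1.0 / np.diag(A))
T = np.eye(3) - D_inv @ A
c = D_inv @ b

x = np.zeros(3)
r0 = b - A @ x                 # residual at iteration 0
x = T @ x + c
r1 = b - A @ x                 # residual at iteration 1
print(np.allclose(r1, A @ T @ np.linalg.inv(A) @ r0))  # part (b): True

for _ in range(3):             # three more sweeps, so k = 4 total
    x = T @ x + c
r4 = b - A @ x
Tk = np.linalg.matrix_power(T, 4)
print(np.allclose(r4, A @ Tk @ np.linalg.inv(A) @ r0))  # part (c): True
```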


5. [26%] Consider the elimination matrix M_k = I - m_k e_k^T and its inverse L_k = I + m_k e_k^T used in the LU decomposition process, where m_k = (0, ..., 0, m^(k)_(k+1), ..., m^(k)_n)^T and e_k is the k-th column of the identity matrix. Let P^(ij) be the permutation matrix that results from swapping the i-th and j-th rows (or columns) of the identity matrix.

(a) [6%] Show that if i, j > k, then

    L_k P^(ij) = P^(ij) (I + P^(ij) m_k e_k^T).

(b) [10%] Recall that the matrix L resulting from performing Gaussian elimination with partial pivoting is given by

    L = P_1 L_1 ... P_(n-1) L_(n-1),

where the permutation matrix P_i permutes row i with some row i' where i' > i. Show that L can be rewritten as

    L = P_1 ... P_(n-1) L^(P_1) ... L^(P_(n-1)),

where L^(P_k) = I + (P_(n-1) ... P_(k+1) m_k) e_k^T.

(c) [10%] Show that L^(P_1) ... L^(P_(n-1)) is lower triangular.
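Before attempting the proofs, the identities in question 5 can be verified numerically on a small example; this sketch checks that L_k inverts M_k and that the swap identity of part (a) holds for one choice of i, j > k (0-based indices here, which differ from the exam's 1-based notation).

```python
import numpy as np

# Sanity check with n = 4 and k = 1 (0-based, i.e., the second column),
# using a random multiplier vector m_k.
n, k = 4, 1
rng = np.random.default_rng(1)
m = np.zeros(n)
m[k + 1:] = rng.standard_normal(n - k - 1)   # m_k is zero through position k
e = np.zeros(n)
e[k] = 1.0

M = np.eye(n) - np.outer(m, e)               # M_k = I - m_k e_k^T
L = np.eye(n) + np.outer(m, e)               # L_k = I + m_k e_k^T
print(np.allclose(M @ L, np.eye(n)))         # True: L_k = M_k^{-1}

i, j = 2, 3                                  # both > k, as in part (a)
P = np.eye(n)
P[[i, j]] = P[[j, i]]                        # P^(ij): swap rows i and j of I
lhs = L @ P
rhs = P @ (np.eye(n) + np.outer(P @ m, e))
print(np.allclose(lhs, rhs))                 # True: part (a) identity
```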
