Preliminary Examination, Numerical Analysis, August 2016


Instructions: This exam is closed book and closed notes. The time allowed is three hours, and you must work any three of questions 1-4 and any two of questions 5-7. All questions have equal weight; the passing score will be determined after all the exams are graded. Indicate clearly the work that you wish to be graded. Note: in problems 5-7, the notations $k = \Delta t$ and $h = \Delta x$ are used.

1. Singular Value Decomposition (SVD):

a) Prove the following statement (Singular Value Decomposition): any matrix $A \in \mathbb{C}^{m \times n}$ can be factored as $A = U \Sigma V^*$, where $U \in \mathbb{C}^{m \times m}$ and $V \in \mathbb{C}^{n \times n}$ are unitary and $\Sigma \in \mathbb{R}^{m \times n}$ is a rectangular matrix whose only nonzero entries are non-negative entries on its diagonal.

b) Use the SVD to prove that any matrix in $\mathbb{C}^{n \times n}$ is the limit of a sequence of matrices of full rank.

2. Linear Least Squares: the linear least squares problem for an $m \times n$ real matrix $A$ and $b \in \mathbb{R}^m$ is: find $x \in \mathbb{R}^n$ such that $\|Ax - b\|_2$ is minimized.

a) Suppose that you have data $\{(t_i, y_i)\}$, $i = 1, 2, \dots, m$, that you wish to approximate by an expansion

    $p(t) = \sum_{k=1}^{n} x_k \phi_k(t)$.

Here the functions $\phi_k(t)$ are given. Which norm on the difference between the approximating function $p$ and the data gives rise to a linear least squares problem for the unknown expansion coefficients $x_k$? What is the matrix $A$ in this case, and what is the vector $b$?

b) Suppose that $A$ is a real $m \times n$ matrix of full rank and let $b \in \mathbb{R}^m$. What are the normal equations for the least squares problem, and how can they be used to solve it? What is the QR factorization of $A$, and how can it be used to solve the least squares problem? Compare and contrast these approaches for numerically solving the least squares problem.
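As an illustrative sketch (not part of the exam, and not its solution key), the two solution approaches asked about in 2b) can be compared numerically. The basis choice $\phi_k(t) = t^{k-1}$ below is a hypothetical example of the expansion in 2a):

```python
import numpy as np

# Sketch: solve min ||Ax - b||_2 via the normal equations and via QR.
rng = np.random.default_rng(0)
m, n = 20, 4
t = np.linspace(0.0, 1.0, m)
A = np.vander(t, n, increasing=True)   # columns phi_k(t) = t^(k-1)
b = np.cos(t) + 1e-3 * rng.standard_normal(m)

# Normal equations: A^T A x = A^T b (squares the condition number of A)
x_ne = np.linalg.solve(A.T @ A, A.T @ b)

# QR: A = QR with orthonormal Q; solve the triangular system R x = Q^T b
Q, R = np.linalg.qr(A)                 # reduced QR, Q is m-by-n
x_qr = np.linalg.solve(R, Q.T @ b)

# For this well-conditioned A the two answers agree closely; for nearly
# rank-deficient A the normal equations lose roughly twice as many digits.
print(np.allclose(x_ne, x_qr, atol=1e-8))
```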

3. Sensitivity: consider a $6 \times 6$ symmetric positive definite matrix $A$ with singular values $\sigma_1 = 1000$, $\sigma_2 = 500$, $\sigma_3 = 300$, $\sigma_4 = 20$, $\sigma_5 = 1$, $\sigma_6 = 0.01$.

a) Suppose you use a Cholesky factorization package on a computer with machine epsilon $10^{-14}$ to solve the system $Ax = b$ for some nonzero vector $b$. How many digits of accuracy do you expect in the computed solution? Justify your answer in terms of condition number and stability. You may assume that the entries of $A$ and $b$ are exactly represented in the computer's floating-point system.

b) Suppose that instead you use an iterative method to find an approximate solution to $Ax = b$, stopping and accepting the iterate $x^{(k)}$ when the residual $r^{(k)} = A x^{(k)} - b$ has 2-norm less than $10^{-9}$. Give an estimate of the maximum size of the relative error in the final iterate. Justify your answer.

4. Interpolation and Integration:

a) Consider equally spaced points $x_j = a + jh$, $j = 0, \dots, n$, on the interval $[a, b]$, where $nh = b - a$. Let $f(x)$ be a smooth function defined on $[a, b]$. Show that there is a unique polynomial $p(x)$ of degree at most $n$ which interpolates $f$ at all of the points $x_j$. Derive the formula for the interpolation error at an arbitrary point $x$ in the interval $[a, b]$:

    $f(x) - p(x) = E(x) = \frac{1}{(n+1)!} (x - x_0)(x - x_1) \cdots (x - x_n)\, f^{(n+1)}(\eta)$

for some $\eta \in [a, b]$.

b) Let $I_n(f)$ denote the result of using the composite trapezoidal rule to approximate $I(f) \equiv \int_a^b f(x)\,dx$ using $n$ equally sized subintervals of length $h = (b - a)/n$. It can be shown that the integration error $E_n(f) \equiv I(f) - I_n(f)$ satisfies

    $E_n(f) = d_2 h^2 + d_4 h^4 + d_6 h^6 + \cdots$

where $d_2, d_4, d_6, \dots$ are known numbers that depend only on the values of $f$ and its derivatives at $a$ and $b$. Suppose you have a black-box program that, given $f$, $a$, $b$, and $n$, calculates $I_n(f)$. Show how to use this program to obtain an $O(h^4)$ approximation and an $O(h^6)$ approximation to $I(f)$.
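A sketch of the construction 4b) asks for (my own illustration, not the exam's answer key): because the trapezoidal error expansion contains only even powers of $h$, halving $h$ divides the $h^2$ term by 4, so suitable combinations of $I_n$, $I_{2n}$, $I_{4n}$ cancel successive terms (Richardson extrapolation). The test integrand $e^x$ on $[0,1]$ is a hypothetical example:

```python
import numpy as np

def trap(f, a, b, n):
    """Composite trapezoidal rule with n subintervals (the 'black box')."""
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    return h * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

def richardson(f, a, b, n):
    I1, I2, I4 = trap(f, a, b, n), trap(f, a, b, 2 * n), trap(f, a, b, 4 * n)
    # Halving h multiplies the d2 h^2 term by 1/4, so (4*I2 - I1)/3
    # cancels it, leaving an O(h^4) result (this is Simpson's rule):
    S1 = (4.0 * I2 - I1) / 3.0
    S2 = (4.0 * I4 - I2) / 3.0
    # One more level cancels the h^4 term: (16*S2 - S1)/15 is O(h^6).
    return (16.0 * S2 - S1) / 15.0

approx = richardson(np.exp, 0.0, 1.0, 8)
exact = np.e - 1.0
print(abs(approx - exact) < 1e-10)
```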

5. Elliptic Problems: for the one-dimensional Poisson problem for $v(x)$,

    $-v''(x) + \alpha v(x) = f(x)$,

where $\alpha \ge 0$ is constant, along with Dirichlet boundary conditions on the interval $[0, 1]$, consider the scheme

    $L_h U_j \equiv \alpha U_j + \frac{1}{h^2} \left( -U_{j-1} + 2 U_j - U_{j+1} \right) = f_j$ for $j = 1, 2, \dots, N-1$,

where $Nh = 1$, $f_j \equiv f(jh)$, and $U_0 = U_N = 0$. The approximate solution satisfies a linear system $AU = b$, where $U = (U_1, U_2, \dots, U_{N-1})^T$ and $b = h^2 (f_1, f_2, \dots, f_{N-1})^T$.

a) State and prove the maximum principle for any grid function $V = \{V_j\}$, with values for $j = 0, 1, \dots, N$, that satisfies $L_h V_j \le 0$.

b) Derive the matrix $A$ and show that it is symmetric and positive definite.

c) Use the maximum principle to show that the global error $e_j \equiv v(x_j) - U_j$ satisfies $e_j = O(h^2)$ as the space step $h \to 0$.

6. Numerical Methods for ODEs: consider the linear multistep method

    $y_{n+2} - \frac{4}{3} y_{n+1} + \frac{1}{3} y_n = \frac{2}{3} k f_{n+2}$

for solving an initial value problem $y' = f(y, x)$, $y(0) = \eta$. You may assume that $f$ is Lipschitz continuous with respect to $y$, uniformly for all $x$.

a) Analyze the consistency, stability, accuracy, and convergence properties of this method.

b) Sketch a graph of the solution to the following initial value problem:

    $y' = -10^8 \left[ y - \cos(x) \right] - \sin(x), \quad y(0) = 2$.

Would it be more reasonable to use this method or the forward Euler method for this problem? What would you consider in choosing a timestep $k$ for each of the methods? Justify your answer.
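As a numerical sketch of problem 5 (my illustration; the choices $\alpha = 1$ and exact solution $v(x) = \sin(\pi x)$ are assumptions, not part of the exam): assembling $A = \mathrm{tridiag}(-1,\, 2 + \alpha h^2,\, -1)$ with right-hand side $h^2 f_j$ and refining the grid exhibits the $O(h^2)$ convergence that part c) asks you to prove.

```python
import numpy as np

# Manufactured problem: -v'' + v = (pi^2 + 1) sin(pi x), v(0) = v(1) = 0,
# whose exact solution is v(x) = sin(pi x).
alpha = 1.0

def solve_poisson(N):
    h = 1.0 / N
    x = np.linspace(0.0, 1.0, N + 1)
    f = (np.pi**2 + alpha) * np.sin(np.pi * x[1:-1])
    # A = tridiag(-1, 2 + alpha h^2, -1), right-hand side b = h^2 f
    main = (2.0 + alpha * h * h) * np.ones(N - 1)
    off = -np.ones(N - 2)
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    U = np.linalg.solve(A, h * h * f)
    return np.max(np.abs(np.sin(np.pi * x[1:-1]) - U))

e1, e2 = solve_poisson(40), solve_poisson(80)
ratio = e1 / e2
print(3.5 < ratio < 4.5)   # error drops by ~4 when h is halved: O(h^2)
```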

7. Heat Equation Stability:

a) Consider the initial value problem for the constant-coefficient diffusion equation (with $\beta > 0$)

    $v_t = \beta v_{xx}, \quad t > 0$,

with initial data $v(x, 0) = f(x)$. A scheme for this problem is:

    $\frac{u_j^{n+1} - u_j^n}{k} = \frac{\beta}{h^2} \left\{ u_{j-1}^{n+1} - 2 u_j^{n+1} + u_{j+1}^{n+1} \right\}$.

Analyze the 2-norm stability of this scheme. For which values of $k > 0$ and $h > 0$ is the scheme stable? (Note that there are no boundary conditions here.)

b) Consider the variable-coefficient diffusion equation with Dirichlet boundary conditions

    $v_t = (\beta(x) v_x)_x, \quad 0 < x < 1, \; t > 0$,
    $v(0, t) = 0, \quad v(1, t) = 0$,

and initial data $v(x, 0) = f(x)$. Assume that $\beta(x) \ge \beta_0 > 0$ and that $\beta(x)$ is smooth. Let $\beta_{j+1/2} = \beta(x_{j+1/2})$. A scheme for this problem is:

    $\frac{u_j^{n+1} - u_j^n}{k} = \frac{1}{h^2} \left\{ \beta_{j-1/2}\, u_{j-1}^{n+1} - (\beta_{j-1/2} + \beta_{j+1/2})\, u_j^{n+1} + \beta_{j+1/2}\, u_{j+1}^{n+1} \right\}$.

Analyze the 2-norm stability of this scheme for solving this initial boundary value problem. DO NOT NEGLECT THE FACT THAT THERE ARE BOUNDARY CONDITIONS!
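A quick check of the von Neumann analysis behind part a) (my own sketch, not the expected written proof): the scheme is backward Euler in time, so substituting the mode $u_j^n = g^n e^{i \xi j h}$ gives the amplification factor $g = 1 / \bigl(1 + 4 (\beta k / h^2) \sin^2(\xi h / 2)\bigr)$, which satisfies $|g| \le 1$ for every $k, h > 0$. The values of $\beta$, $k$, $h$ below are arbitrary test choices:

```python
import numpy as np

beta, k, h = 1.0, 10.0, 0.01           # deliberately huge time step
r = beta * k / h**2                    # r = 10^5, far beyond any explicit limit
xi_h = np.linspace(0.0, np.pi, 1000)   # sample the mode frequencies xi*h
g = 1.0 / (1.0 + 4.0 * r * np.sin(xi_h / 2.0) ** 2)
# Every mode is damped (or preserved, for xi = 0): unconditional stability.
print(np.all(np.abs(g) <= 1.0))
```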

Fact 1: A real symmetric $n \times n$ matrix $A$ can be diagonalized by an orthogonal similarity transformation, and $A$'s eigenvalues are real.

Fact 2: The $(N-1) \times (N-1)$ matrix $M$ defined by

    [ -2  1  0  0 ...  0  0  0 ]
    [  1 -2  1  0 ...  0  0  0 ]
    [  0  1 -2  1 ...  0  0  0 ]
    [  .  .  .  .      .  .  . ]
    [  0  0  0  0 ...  1 -2  1 ]
    [  0  0  0  0 ...  0  1 -2 ]

has eigenvalues $\mu_l = -4 \sin^2\!\left(\frac{\pi l}{2N}\right)$, $l = 1, 2, \dots, N-1$.

Fact 3: The $(N+1) \times (N+1)$ matrix

    [ -1  1  0  0 ...  0  0  0 ]
    [  1 -2  1  0 ...  0  0  0 ]
    [  0  1 -2  1 ...  0  0  0 ]
    [  .  .  .  .      .  .  . ]
    [  0  0  0  0 ...  1 -2  1 ]
    [  0  0  0  0 ...  0  1 -1 ]

has eigenvalues $\mu_l = -4 \sin^2\!\left(\frac{\pi l}{2(N+1)}\right)$, $l = 0, 1, \dots, N$.

Fact 4: For a real symmetric $n \times n$ matrix $A$, the Rayleigh quotient of a vector $x \in \mathbb{R}^n$ is the scalar

    $r(x) = \frac{x^T A x}{x^T x}$.

The gradient of $r(x)$ is

    $\nabla r(x) = \frac{2}{x^T x} \left( A x - r(x)\, x \right)$.

If $x$ is an eigenvector of $A$, then $r(x)$ is the corresponding eigenvalue and $\nabla r(x) = 0$.
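Facts 2 and 4 can be verified numerically; this sketch (the choice $N = 10$ and the test matrix $A = M^T M$ are my assumptions for illustration) checks the eigenvalue formula for $M$ and confirms that the Rayleigh quotient gradient vanishes at an eigenvector:

```python
import numpy as np

# Fact 2: tridiagonal matrix with -2 on the diagonal and 1 off-diagonal.
N = 10
M = (np.diag(-2.0 * np.ones(N - 1))
     + np.diag(np.ones(N - 2), 1)
     + np.diag(np.ones(N - 2), -1))
mu = np.sort(np.linalg.eigvalsh(M))
l = np.arange(1, N)
formula = np.sort(-4.0 * np.sin(np.pi * l / (2 * N)) ** 2)
print(np.allclose(mu, formula))

# Fact 4: at an eigenvector, r(x) is the eigenvalue and grad r(x) = 0.
A = M.T @ M                        # a convenient real symmetric matrix
w, V = np.linalg.eigh(A)
x = V[:, 0]                        # eigenvector for the smallest eigenvalue
r = (x @ A @ x) / (x @ x)
grad = (2.0 / (x @ x)) * (A @ x - r * x)
print(np.allclose(grad, 0.0) and np.isclose(r, w[0]))
```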