Numerical Analysis Comprehensive Exam Questions

1. Let f(x) = (x - α)^m g(x), where m ≥ 2 is an integer, g(x) ∈ C^∞(R), and g(α) ≠ 0. Write down Newton's method for finding the root α of f(x), and study the order of convergence of the method. Can you propose a method that converges faster?

Solution: Newton's method is

    x_{n+1} = x_n - f(x_n)/f'(x_n) =: h(x_n).

For the fixed-point iteration x_{n+1} = h(x_n) we investigate h'(α). If 0 < |h'(α)| < 1, the iteration converges linearly with rate tending to |h'(α)|; if h'(α) = 0, it converges at least quadratically. In fact,

    h'(x) = f(x) f''(x) / [f'(x)]^2 → (m-1)/m   as x → α,

and 0 < (m-1)/m < 1 for m ≥ 2. Thus, provided x_0 is chosen close enough to α, Newton's method converges, but only linearly. To improve the order of convergence, one can use

    x_{n+1} = x_n - m f(x_n)/f'(x_n) =: H(x_n).

It is easy to show that H'(x) → 0 as x → α, so the modified method converges at least quadratically.
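To illustrate the difference in convergence rates, here is a minimal Python sketch (not part of the original exam). It applies plain Newton and the multiplicity-corrected iteration to the illustrative choice f(x) = (x - 1)^3 e^x, i.e. α = 1 and m = 3; the test function, starting point, and iteration count are assumptions made only for this example.

```python
# Compare plain Newton with the multiplicity-corrected iteration
# x_{n+1} = x_n - m f(x_n)/f'(x_n) on f(x) = (x - 1)^3 exp(x): alpha = 1, m = 3.
import math

def f(x):
    return (x - 1.0) ** 3 * math.exp(x)

def fprime(x):
    # d/dx [(x-1)^3 e^x] = (x-1)^2 (3 + (x-1)) e^x
    return (x - 1.0) ** 2 * (3.0 + (x - 1.0)) * math.exp(x)

def iterate(step, x0, iters=8):
    x, errs = x0, []
    for _ in range(iters):
        if fprime(x) == 0.0:          # landed exactly on the root; stop
            break
        x = step(x)
        errs.append(abs(x - 1.0))
    return errs

newton   = lambda x: x - f(x) / fprime(x)        # converges linearly, rate (m-1)/m = 2/3
modified = lambda x: x - 3.0 * f(x) / fprime(x)  # converges at least quadratically

print("Newton:  ", iterate(newton, 1.5))
print("Modified:", iterate(modified, 1.5))
```

The printed error sequences shrink by a factor of about 2/3 per step for plain Newton and roughly square at each step for the corrected iteration, consistent with the analysis above.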

2. Let x_0, x_1, ..., x_n be distinct real numbers and let l_k(x) be the Lagrange basis functions. Let Ψ_n(x) = Π_{k=0}^n (x - x_k). Prove that

(1) for any polynomial p(x) of degree n+1,

    p(x) - Σ_{k=0}^n p(x_k) l_k(x) = 1/(n+1)! p^{(n+1)}(x) Ψ_n(x);

(2) if, in addition, x_0, ..., x_n are the Gauss-Legendre points in the interval [-1, 1], then ∫_{-1}^1 l_i(x) l_j(x) dx = 0 for i ≠ j.

Solution: (1) Note that p(x) - Σ_{k=0}^n p(x_k) l_k(x) has zeros at x = x_0, ..., x_n; therefore p(x) - Σ_{k=0}^n p(x_k) l_k(x) = C Ψ_n(x) for some constant C. Since Σ_{k=0}^n p(x_k) l_k(x) is of degree ≤ n and p(x) is of degree n+1, C must be the leading coefficient of p(x), thus C = 1/(n+1)! p^{(n+1)}(x).

(2) Since x_0, ..., x_n are the Gauss-Legendre points, the quadrature rule

    ∫_{-1}^1 p(x) dx = Σ_{k=0}^n w_k p(x_k)

is exact for every polynomial p of degree ≤ 2n+1. The product l_i(x) l_j(x) has degree 2n, so

    ∫_{-1}^1 l_i(x) l_j(x) dx = Σ_{k=0}^n w_k l_i(x_k) l_j(x_k) = 0   for i ≠ j.
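As a numerical sanity check of part (2), the following sketch (assuming NumPy is available) builds the Lagrange basis at the (n+1) Gauss-Legendre nodes returned by numpy.polynomial.legendre.leggauss and verifies that the off-diagonal entries of the Gram matrix ∫_{-1}^1 l_i l_j dx vanish to rounding error; the choice n = 4 and the finer quadrature used to evaluate the integrals are assumptions made for the example.

```python
# At the (n+1)-point Gauss-Legendre nodes, the Lagrange basis functions
# are orthogonal on [-1, 1].
import numpy as np
from numpy.polynomial.legendre import leggauss

n = 4                                   # degree; n+1 = 5 nodes
nodes, _ = leggauss(n + 1)              # Gauss-Legendre points on [-1, 1]

def lagrange_basis(k, x):
    """Evaluate l_k(x) for the node set `nodes`."""
    terms = [(x - nodes[m]) / (nodes[k] - nodes[m]) for m in range(n + 1) if m != k]
    return np.prod(terms, axis=0)

# Integrate l_i * l_j with a finer rule (exact for polynomials of degree 2n).
fine_x, fine_w = leggauss(3 * n)
gram = np.array([[np.sum(fine_w * lagrange_basis(i, fine_x) * lagrange_basis(j, fine_x))
                  for j in range(n + 1)] for i in range(n + 1)])

print(np.max(np.abs(gram - np.diag(np.diag(gram)))))   # off-diagonal entries ~ 1e-16
```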

3. For the initial value problem y'(t) = f(t, y), y(0) = y_0, t ≥ 0, consider the θ-method

    y_{n+1} = y_n + h[θ f(t_n, y_n) + (1-θ) f(t_{n+1}, y_{n+1})],

where time has been discretized such that t_n = nh, and y_n is the numerical approximation of y(t_n).

(a) Is this method consistent? What is its order? Is it zero-stable? How do the results differ for different θ?
(b) What is the region of absolute stability? What is the region when θ = 0, 1/2, or 1? For which θ values is the method A-stable?
(c) Does this method have stiff decay? Show why or why not.

Solution: (a) Consider the local truncation error around t_n:

    y + hy' + (h²/2!)y'' + ... - y - h[θy' + (1-θ)(y' + hy'' + (h²/2!)y''' + ...)]
      = h²(1/2 - (1-θ))y'' + h³(θ/2 - 1/3)y''' + ...

The local truncation error is at least second order, i.e. the method is globally at least first order, so the method is consistent. The method becomes second order if θ = 1/2; otherwise it is a first-order method. The first characteristic polynomial is ρ(z) = z - 1, whose only root is z = 1, so the method is zero-stable for any θ.

(b) Consider y' = λy. Then

    y_{n+1} = y_n + h[θλ y_n + (1-θ)λ y_{n+1}],   so   y_{n+1} = R(hλ) y_n   with   R(hλ) = (1 + hθλ)/(1 - h(1-θ)λ),

and |R(hλ)| ≤ 1 gives the stability region. When θ = 0, the region is the outside of the unit circle centered at z = 1 in the complex plane; when θ = 1/2, it is the left half-plane Re(hλ) ≤ 0; and when θ = 1, it is the inside of the unit circle centered at z = -1. The method is A-stable for θ ≤ 1/2, since then the stability region includes the left half-plane.

(c) To have stiff decay, one must show lim_{hλ → -∞} R(hλ) = 0. Here

    lim_{hλ → -∞} (1 + hθλ)/(1 - h(1-θ)λ) = -θ/(1-θ),

which equals 0 only when θ = 0. So only for θ = 0 (which is the backward Euler method) does this method have stiff decay.
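A minimal sketch of the θ-method applied to the linear test equation y' = λy, for which the implicit stage can be solved in closed form via the stability function R(hλ); the values λ = -2, T = 1, and the step sizes are illustrative assumptions. Halving h should roughly halve the error for θ = 0 and θ = 1 and quarter it for θ = 1/2, consistent with part (a).

```python
# theta-method for y' = lam*y, y(0) = 1, on [0, T]; exact solution exp(lam*T).
# For this linear problem, y_{n+1} = R(h*lam) * y_n with
# R(h*lam) = (1 + h*theta*lam) / (1 - h*(1 - theta)*lam).
import math

def theta_method(lam, T, h, theta, y0=1.0):
    R = (1.0 + h * theta * lam) / (1.0 - h * (1.0 - theta) * lam)
    y = y0
    for _ in range(round(T / h)):
        y *= R
    return y

lam, T = -2.0, 1.0
exact = math.exp(lam * T)
for theta in (0.0, 0.5, 1.0):   # backward Euler, trapezoidal, forward Euler (in this exam's convention)
    errs = [abs(theta_method(lam, T, h, theta) - exact) for h in (0.1, 0.05)]
    print(f"theta = {theta}: error ratio when h is halved = {errs[0] / errs[1]:.2f}")
```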

4. Consider the initial boundary value problem

    u_t = u_xx,   (x, t) ∈ (0, 1) × (0, T] = D,
    u(0, t) = g_1(t),   u(1, t) = g_2(t),   t ∈ [0, T],
    u(x, 0) = f(x),   x ∈ [0, 1].

Let (x_i, t_n) be a grid point in a uniform rectangular grid, with x_i = iΔx and t_n = nΔt for i = 0, 1, ..., I and n = 0, 1, ..., N, where IΔx = 1 and NΔt = T, and let U_i^n be a numerical approximation of u(x_i, t_n). Assuming the exact solution is sufficiently smooth, show that the scheme

    (U_i^{n+1} - U_i^n)/Δt = (U_{i+1}^{n+1} - 2U_i^{n+1} + U_{i-1}^{n+1})/Δx²

is unconditionally stable and that U_i^n - u(x_i, t_n) = O(Δt + Δx²) as Δt, Δx → 0.

Solution: Given perturbations of f(x), g_1(t), and g_2(t), the difference ρ_i^n between the perturbed solution and U_i^n also satisfies

    (ρ_i^{n+1} - ρ_i^n)/Δt = (ρ_{i+1}^{n+1} - 2ρ_i^{n+1} + ρ_{i-1}^{n+1})/Δx².

Let λ = Δt/Δx². Then

    ρ_i^{n+1} = λ/(1+2λ) ρ_{i+1}^{n+1} + 1/(1+2λ) ρ_i^n + λ/(1+2λ) ρ_{i-1}^{n+1},

so ρ_i^{n+1} is a strict convex combination of ρ_{i+1}^{n+1}, ρ_i^n, and ρ_{i-1}^{n+1} and satisfies the maximum principle for any λ > 0, which proves unconditional stability.

Note that the exact solution u satisfies

    (u_i^{n+1} - u_i^n)/Δt = (u_{i+1}^{n+1} - 2u_i^{n+1} + u_{i-1}^{n+1})/Δx² + O(Δt + Δx²).

Let e_i^n = u_i^n - U_i^n. Subtracting the scheme from this relation gives

    (1+2λ) e_i^{n+1} = λ e_{i+1}^{n+1} + e_i^n + λ e_{i-1}^{n+1} + O(Δt² + ΔtΔx²).

Taking the maximum over i,

    (1+2λ) ||e^{n+1}||_∞ ≤ 2λ ||e^{n+1}||_∞ + ||e^n||_∞ + C(Δt² + ΔtΔx²),

so ||e^{n+1}||_∞ ≤ ||e^n||_∞ + C(Δt² + ΔtΔx²). Summing over the time steps,

    ||e^n||_∞ ≤ ||e^0||_∞ + Cn(Δt² + ΔtΔx²) = CnΔt(Δt + Δx²) ≤ C(Δt + Δx²),

since e^0 = 0 and nΔt ≤ T.
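A minimal sketch of the implicit scheme of question 4 (assuming NumPy), tested on the manufactured solution u(x, t) = e^{-π²t} sin(πx), so that g_1 = g_2 = 0 and f(x) = sin(πx); the test problem, grid sizes, and final time are assumptions made only for this check.

```python
# Backward Euler in time, centered in space, for u_t = u_xx on (0,1) with
# homogeneous Dirichlet data and exact solution exp(-pi^2 t) sin(pi x).
import numpy as np

def solve_heat(I, N, T=0.1):
    dx, dt = 1.0 / I, T / N
    lam = dt / dx**2
    x = np.linspace(0.0, 1.0, I + 1)
    U = np.sin(np.pi * x)                      # initial data f(x)

    # (1 + 2*lam) U_i^{n+1} - lam U_{i+1}^{n+1} - lam U_{i-1}^{n+1} = U_i^n
    A = ((1.0 + 2.0 * lam) * np.eye(I - 1)
         - lam * np.eye(I - 1, k=1)
         - lam * np.eye(I - 1, k=-1))
    for _ in range(N):
        U[1:-1] = np.linalg.solve(A, U[1:-1])  # boundary values stay 0
    return x, U

for I, N in [(20, 20), (40, 80)]:              # halve dx, quarter dt
    x, U = solve_heat(I, N)
    err = np.max(np.abs(U - np.exp(-np.pi**2 * 0.1) * np.sin(np.pi * x)))
    print(f"I={I:3d}, N={N:3d}: max error = {err:.2e}")
    # error drops roughly 4x between the two runs, consistent with O(dt + dx^2)
```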

5. Consider the boundary value problem

    -u_xx + u = f(x),   x ∈ (0, 1),
    u(0) = a,   u_x(1) = b.

Given a partition 0 = x_0 < x_1 < ... < x_n = 1, formulate a piecewise linear continuous finite element method to solve the problem, and show that your method has a unique solution.

Solution: Let φ_i(x), i = 0, ..., n, be the continuous piecewise linear functions on [0, 1] such that φ_i(x_j) = δ_ij for j = 0, ..., n. Let

    V_h = span{φ_i(x) : i = 1, 2, ..., n}   and   W_h = aφ_0 + V_h.

For any φ ∈ V_h (so that φ(0) = 0), integration by parts gives

    ∫_0^1 (-u_xx + u)φ dx = -(u_x φ)|_0^1 + ∫_0^1 u_x φ_x dx + ∫_0^1 uφ dx = ∫_0^1 fφ dx,

where (u_x φ)|_0^1 = u_x(1)φ(1) = bφ(1). The finite element method is: find U ∈ W_h such that

    ∫_0^1 U_x φ_x dx + ∫_0^1 Uφ dx = ∫_0^1 fφ dx + bφ(1)   for all φ ∈ V_h.

This is a square linear system, so it has a unique solution if and only if the associated homogeneous system (obtained with a = 0, b = 0, and f = 0)

    ∫_0^1 U_x φ_x dx + ∫_0^1 Uφ dx = 0   for all φ ∈ V_h

has only the trivial solution. Indeed, taking φ = U gives ∫_0^1 U_x² dx + ∫_0^1 U² dx = 0, hence U = 0.
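A minimal sketch of the piecewise linear finite element method of question 5 on a uniform mesh (assuming NumPy), tested on the manufactured solution u(x) = cos x, so that f(x) = 2cos x, a = 1, and b = -sin 1; the uniform mesh and the trapezoidal rule used for the load vector are simplifying assumptions.

```python
# Piecewise linear FEM for -u'' + u = f on (0,1), u(0) = a, u'(1) = b.
import numpy as np

def fem_1d(n):
    x = np.linspace(0.0, 1.0, n + 1)
    h = x[1] - x[0]
    f = lambda s: 2.0 * np.cos(s)
    a, b = 1.0, -np.sin(1.0)

    A = np.zeros((n + 1, n + 1))        # stiffness + mass matrix
    F = np.zeros(n + 1)                 # load vector
    Ke = np.array([[1.0, -1.0], [-1.0, 1.0]]) / h      # local stiffness
    Me = h / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])  # local mass
    for k in range(n):                  # assemble over elements [x_k, x_{k+1}]
        idx = [k, k + 1]
        A[np.ix_(idx, idx)] += Ke + Me
        F[idx] += h / 2.0 * f(x[idx])   # trapezoidal rule for the load
    F[n] += b                           # Neumann term b * phi_n(1)

    # impose u(0) = a by eliminating the first unknown
    F[1:] -= A[1:, 0] * a
    U = np.empty(n + 1)
    U[0] = a
    U[1:] = np.linalg.solve(A[1:, 1:], F[1:])
    return x, U

x, U = fem_1d(40)
print(np.max(np.abs(U - np.cos(x))))    # O(h^2) error, roughly 1e-4 for n = 40
```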

6. Consider the following matrix A and solving the linear system Ax = b by iterative methods, where

    A = [ 1  α  β
          α  1  γ
          β  γ  1 ].

(a) What are conditions on α, β, and γ under which the Jacobi method and the Gauss-Seidel method converge?
(b) Describe the Jacobi method and the Gauss-Seidel method.
(c) Find a set of values (if any exist) of α, β, and γ for which the Jacobi method converges but Gauss-Seidel does not, and vice versa.

Solution: (a) A sufficient condition is that A be strictly diagonally dominant: |α| + |β| < 1, |α| + |γ| < 1, and |β| + |γ| < 1.

(b) Write A = D - L - U, where D is the diagonal part of A and L and U are strictly lower and strictly upper triangular, respectively. Then

    (D - L - U)x = b  ⟹  Dx = (L + U)x + b  ⟹  x^{n+1} = D^{-1}(L + U)x^n + D^{-1}b,

which is the Jacobi iteration. For Gauss-Seidel,

    (D - L)x = Ux + b  ⟹  x^{n+1} = (D - L)^{-1}U x^n + (D - L)^{-1}b.

Algorithm: with any initial guess x^0, iterate x^{n+1} = D^{-1}(L + U)x^n + D^{-1}b (Jacobi) or x^{n+1} = (D - L)^{-1}U x^n + (D - L)^{-1}b (Gauss-Seidel) until convergence.

(c) (Outline of solution) The iteration converges for every initial guess if and only if the spectral radius of the iteration matrix is less than 1. Compute the eigenvalues of D^{-1}(L + U), call them λ_J, and the eigenvalues of (D - L)^{-1}U, call them λ_G; then find conditions on α, β, and γ for which max|λ_J| < 1 while max|λ_G| > 1, and vice versa.
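A minimal sketch (assuming NumPy) of the Jacobi and Gauss-Seidel iterations in the splitting form A = D - L - U used above, together with the spectral-radius check from part (c); the sample values α = β = γ = 0.4, a strictly diagonally dominant case in which both methods converge, are an assumption made for the example.

```python
import numpy as np

def splitting(A):
    """Return D, L, U with the sign convention A = D - L - U."""
    D = np.diag(np.diag(A))
    L = -np.tril(A, k=-1)
    U = -np.triu(A, k=1)
    return D, L, U

def spectral_radius(T):
    return np.max(np.abs(np.linalg.eigvals(T)))

def iterate(A, b, method="jacobi", iters=200):
    D, L, U = splitting(A)
    M, N = (D, L + U) if method == "jacobi" else (D - L, U)
    x = np.zeros_like(b)
    for _ in range(iters):
        x = np.linalg.solve(M, N @ x + b)    # x^{n+1} = M^{-1} (N x^n + b)
    return x

alpha, beta, gamma = 0.4, 0.4, 0.4           # strictly diagonally dominant example
A = np.array([[1.0, alpha, beta], [alpha, 1.0, gamma], [beta, gamma, 1.0]])
b = np.array([1.0, 2.0, 3.0])

D, L, U = splitting(A)
print("rho(Jacobi)        =", spectral_radius(np.linalg.solve(D, L + U)))
print("rho(Gauss-Seidel)  =", spectral_radius(np.linalg.solve(D - L, U)))
print("Jacobi error       =", np.linalg.norm(iterate(A, b, "jacobi") - np.linalg.solve(A, b)))
print("Gauss-Seidel error =", np.linalg.norm(iterate(A, b, "gauss-seidel") - np.linalg.solve(A, b)))
```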

7. Describe an algorithm to compute the least squares solution by the singular value decomposition. Prove that the solution obtained is the least squares solution, and estimate the leading computational cost of your algorithm. Assume the least squares system is Ax = b with A ∈ R^{m×n}, b ∈ R^m, and x ∈ R^n (and assume A has full column rank, so that Σ below is invertible).

Solution: Algorithm:
(a) Compute the reduced SVD: A = UΣV^T.
(b) Compute the vector y = U^T b.
(c) Solve the diagonal system Σw = y.
(d) Set x = Vw.

Proof: The least squares solution satisfies the normal equations:

    A^T A x = A^T b
    ⟺ VΣ^T U^T UΣV^T x = VΣ^T U^T b
    ⟺ Σ^T ΣV^T x = Σ^T U^T b
    ⟺ ΣV^T x = U^T b = y
    ⟺ Σw = y,   with w = V^T x.

Leading cost: dominated by the SVD, approximately 2mn² + 11n³ flops.

8. Describe the conjugate gradient iteration for solving a linear system Ax = b (with A symmetric positive definite). Prove that the residuals are orthogonal.

Solution: Algorithm:
(a) x_0 = 0, r_0 = b, p_0 = r_0.
(b) For n = 1, 2, 3, ...
    i.   α_n = (r_{n-1}^T r_{n-1}) / (p_{n-1}^T A p_{n-1})
    ii.  x_n = x_{n-1} + α_n p_{n-1}
    iii. r_n = r_{n-1} - α_n A p_{n-1}
    iv.  β_n = (r_n^T r_n) / (r_{n-1}^T r_{n-1})
    v.   p_n = r_n + β_n p_{n-1}
end.

Proof (outline) that r_n^T r_j = 0 for j < n: by induction on n,

    r_n^T r_j = r_{n-1}^T r_j - α_n p_{n-1}^T A r_j.

If j < n-1, both terms vanish by the induction hypothesis (using the A-conjugacy of the search directions). If j = n-1,

    r_n^T r_{n-1} = r_{n-1}^T r_{n-1} - α_n p_{n-1}^T A r_{n-1} = 0,

since p_{n-1}^T A r_{n-1} = p_{n-1}^T A p_{n-1} (because p_{n-1} = r_{n-1} + β_{n-1} p_{n-2} and p_{n-2}^T A p_{n-1} = 0) and α_n = (r_{n-1}^T r_{n-1}) / (p_{n-1}^T A p_{n-1}).
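Minimal sketches of the two algorithms above: least squares via the reduced SVD (question 7) and the conjugate gradient iteration in the form stated in question 8, each checked against NumPy's built-in solvers; the random test matrices, sizes, and tolerance are assumptions made for the example.

```python
import numpy as np

def svd_least_squares(A, b):
    """x = V Σ^{-1} U^T b, assuming A has full column rank."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)   # reduced SVD: A = U diag(s) Vt
    y = U.T @ b                                        # step (b)
    w = y / s                                          # step (c): solve the diagonal system
    return Vt.T @ w                                    # step (d): x = V w

def conjugate_gradient(A, b, iters=None):
    """Conjugate gradient for symmetric positive definite A, with x_0 = 0."""
    n = len(b)
    iters = n if iters is None else iters
    x, r = np.zeros(n), b.copy()
    p = r.copy()
    for _ in range(iters):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < 1e-14:              # converged
            return x
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 5)); b = rng.standard_normal(30)
print(np.linalg.norm(svd_least_squares(A, b) - np.linalg.lstsq(A, b, rcond=None)[0]))

S = None
M = rng.standard_normal((8, 8)); S = M @ M.T + 8 * np.eye(8)   # SPD test matrix
c = rng.standard_normal(8)
print(np.linalg.norm(conjugate_gradient(S, c) - np.linalg.solve(S, c)))
```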