Numerical Analysis Comprehensive Exam Questions

1. Let $f(x) = (x-\alpha)^m g(x)$ where $m \ge 2$ is an integer and $g(x) \in C^\infty(\mathbb{R})$, $g(\alpha) \neq 0$. Write down Newton's method for finding the root $\alpha$ of $f(x)$, and study the order of convergence of the method. Can you propose a method that converges faster?

Solution: Newton's method is
$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)} =: h(x_n).$$
For the fixed-point iteration $x_{n+1} = h(x_n)$, we investigate $h'(\alpha)$. If $0 < |h'(\alpha)| < 1$, the iteration converges linearly with the rate of convergence tending to $|h'(\alpha)|$; if $h'(\alpha) = 0$, it converges at least quadratically. In fact,
$$h'(x) = \frac{f(x)f''(x)}{[f'(x)]^2} \to \frac{m-1}{m} \quad \text{as } x \to \alpha,$$
and $0 < \frac{m-1}{m} < 1$ if $m \ge 2$. Hence Newton's method only converges linearly (provided $x_0$ is chosen close enough to $\alpha$). To improve the order of convergence, one could use
$$x_{n+1} = x_n - m\,\frac{f(x_n)}{f'(x_n)} =: H(x_n).$$
It is easy to show that $H'(x) \to 0$ as $x \to \alpha$. Therefore, the new method converges at least quadratically.

2. Let $x_0, x_1, \ldots, x_n$ be distinct real numbers and $l_k(x)$ be the Lagrange basis functions. Let $\Psi_n(x) = \prod_{k=0}^n (x - x_k)$. Prove that

(1) For any polynomial $p(x)$ of degree $n+1$,
$$p(x) - \sum_{k=0}^n p(x_k)\, l_k(x) = \frac{1}{(n+1)!}\, p^{(n+1)}(x)\, \Psi_n(x).$$

(2) If further $x_0, \ldots, x_n$ are Gauss-Legendre points in the interval $[-1,1]$, then $\int_{-1}^1 l_i(x)\, l_j(x)\,dx = 0$ for $i \neq j$.
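As a numerical illustration of Problem 1 (a sketch with a hypothetical test function, not part of the exam): taking $f(x) = (x-1)^2 e^x$, so $\alpha = 1$ and $m = 2$, plain Newton shrinks the error by roughly the factor $(m-1)/m = 1/2$ per step, while the modified iteration with the factor $m$ converges quadratically.

```python
import math

# Hypothetical test problem: f(x) = (x - 1)^2 * exp(x), so alpha = 1, m = 2.
def f(x):
    return (x - 1.0) ** 2 * math.exp(x)

def fp(x):
    # f'(x) = 2(x-1)e^x + (x-1)^2 e^x = (x-1)(x+1)e^x
    return (x - 1.0) * (x + 1.0) * math.exp(x)

def iterate(step, x0, n):
    """Run n steps of the fixed-point map `step` and record |x_k - alpha|."""
    errs, x = [], x0
    for _ in range(n):
        x = step(x)
        errs.append(abs(x - 1.0))
    return errs

newton = lambda x: x - f(x) / fp(x)          # plain Newton
modified = lambda x: x - 2.0 * f(x) / fp(x)  # m * f/f' with m = 2

e_newton = iterate(newton, 1.5, 5)
e_mod = iterate(modified, 1.5, 5)
# Plain Newton: error ratio tends to (m-1)/m = 1/2 (linear convergence);
# modified Newton: error drops quadratically to rounding level.
print(e_newton)
print(e_mod)
```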
Solution: (1) Note that $p(x) - \sum_{k=0}^n p(x_k)\, l_k(x)$ has zeros at $x = x_0, \ldots, x_n$; therefore $p(x) - \sum_{k=0}^n p(x_k)\, l_k(x) = C\,\Psi_n(x)$ for some constant $C$. Since $\sum_{k=0}^n p(x_k)\, l_k(x)$ is of degree $\le n$ and $p(x)$ is of degree $n+1$, $C$ must be the leading coefficient of $p(x)$, thus $C = \frac{1}{(n+1)!}\, p^{(n+1)}(x)$ (which is constant since $p$ has degree $n+1$).

(2) Since $x_0, \ldots, x_n$ are Gauss-Legendre points,
$$\int_{-1}^1 p(x)\,dx = \sum_{k=0}^n w_k\, p(x_k)$$
for any polynomial $p$ of degree $\le 2n+1$. The product $l_i(x)\, l_j(x)$ has degree $2n$, therefore
$$\int_{-1}^1 l_i(x)\, l_j(x)\,dx = \sum_{k=0}^n w_k\, l_i(x_k)\, l_j(x_k) = 0, \quad \text{if } i \neq j.$$

3. For the initial value problem $y'(t) = f(t,y)$, $y(0) = y_0$, $t \ge 0$, consider the $\theta$-method
$$y_{n+1} = y_n + h\big[\theta f(t_n, y_n) + (1-\theta) f(t_{n+1}, y_{n+1})\big],$$
where time has been discretized such that $t_n = nh$, and $y_n$ is the numerical approximation of $y(t_n)$.

(a) Is this method consistent? What is the order? Is this method zero-stable? How does the result differ for different $\theta$?

(b) What is the region of absolute stability? What is the region when $\theta = 0$, $1$, or $\frac{1}{2}$? For what $\theta$ values is this method A-stable?

(c) Does this method have stiff decay? Show why or why not.

Solution: (a) Consider the local truncation error around $t_n$ (all derivatives evaluated at $t_n$):
$$y + hy' + \frac{h^2}{2!}y'' + \cdots - y - h\Big(\theta y' + (1-\theta)\big(y' + hy'' + \frac{h^2}{2!}y''' + \cdots\big)\Big) = h^2\Big(\frac{1}{2} - (1-\theta)\Big)y'' + h^3\Big(\frac{1}{6} - \frac{1-\theta}{2}\Big)y''' + \cdots$$
The local truncation error is at least second order, i.e. globally this is at least a first order method, so the method is consistent. The method becomes second order if $\theta = \frac{1}{2}$; otherwise it is a first order method. The first characteristic polynomial is $\rho(z) = z - 1$, whose only root is $z = 1$, so this method is zero-stable for any $\theta$.

(b) Consider $y' = \lambda y$; then
$$y_{n+1} = y_n + h[\theta \lambda y_n + (1-\theta)\lambda y_{n+1}], \qquad y_{n+1} = \frac{1 + h\theta\lambda}{1 - h(1-\theta)\lambda}\, y_n,$$
and
$$|R(h\lambda)| := \left|\frac{1 + h\theta\lambda}{1 - h(1-\theta)\lambda}\right| \le 1$$
gives the stability region.
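Before continuing, the orthogonality claim in Problem 2(2) is easy to confirm numerically. A minimal sketch (the choice $n = 4$ is arbitrary): build the Lagrange basis at the $n+1$ Gauss-Legendre nodes and integrate the products $l_i l_j$ with an independent, higher-order Gauss rule.

```python
import numpy as np

n = 4  # illustrative choice: n + 1 = 5 Gauss-Legendre nodes
nodes, _ = np.polynomial.legendre.leggauss(n + 1)

def lagrange(k, x):
    """Evaluate the Lagrange basis function l_k for the Gauss-Legendre nodes."""
    out = np.ones_like(x)
    for j, xj in enumerate(nodes):
        if j != k:
            out = out * (x - xj) / (nodes[k] - xj)
    return out

# Integrate l_i * l_j (degree 2n) with an independent 2(n+1)-point Gauss rule,
# which is exact for polynomials of degree up to 4n + 3.
xq, wq = np.polynomial.legendre.leggauss(2 * (n + 1))
gram = np.array([[np.dot(wq, lagrange(i, xq) * lagrange(j, xq))
                  for j in range(n + 1)] for i in range(n + 1)])
off_diag = np.abs(gram - np.diag(np.diag(gram))).max()
print(off_diag)  # the off-diagonal entries vanish up to rounding
```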
When $\theta = 0$, the region is the outside of the unit circle centered at $z = 1$ in the complex plane; when $\theta = \frac{1}{2}$, it is the left half of the complex plane; and when $\theta = 1$, it is the inside of the unit circle centered at $z = -1$. Therefore, this method is A-stable for $\theta \le \frac{1}{2}$, since for those $\theta$ the stability region includes the left half plane.

(c) To have stiff decay, one must show $\lim_{h\lambda \to -\infty} R(h\lambda) = 0$. We have
$$\lim_{h\lambda \to -\infty} R(h\lambda) = \lim_{h\lambda \to -\infty} \frac{1 + h\theta\lambda}{1 - h(1-\theta)\lambda} = -\frac{\theta}{1-\theta},$$
which equals $0$ only when $\theta = 0$. Only when $\theta = 0$ (which is the backward Euler method) does this method have stiff decay.

4. Consider the initial boundary value problem
$$u_t = u_{xx}, \quad (x,t) \in (0,1) \times (0,T] = D,$$
$$u(0,t) = g_1(t), \quad u(1,t) = g_2(t), \quad t \in [0,T],$$
$$u(x,0) = f(x), \quad x \in [0,1].$$
Let $(x_i, t_n)$ be a grid point in a uniform rectangular grid, s.t. $x_i = i\,\Delta x$, $t_n = n\,\Delta t$ for $i = 0,1,\ldots,I$ and $n = 0,1,\ldots,N$, where $I\,\Delta x = 1$ and $N\,\Delta t = T$, and let $U_i^n$ be a numerical approximation of $u(x_i, t_n)$. Assuming the exact solution is sufficiently smooth, show that the scheme
$$\frac{U_i^{n+1} - U_i^n}{\Delta t} = \frac{U_{i+1}^{n+1} - 2U_i^{n+1} + U_{i-1}^{n+1}}{\Delta x^2}$$
is unconditionally stable and $U_i^n - u(x_i, t_n) = O(\Delta t + \Delta x^2)$ as $\Delta t, \Delta x \to 0$.

Solution: Given some perturbation of $f(x)$, $g_1(t)$, and $g_2(t)$, the difference between the perturbed solution and $U_i^n$, say $\rho_i^n$, will also satisfy
$$\frac{\rho_i^{n+1} - \rho_i^n}{\Delta t} = \frac{\rho_{i+1}^{n+1} - 2\rho_i^{n+1} + \rho_{i-1}^{n+1}}{\Delta x^2}.$$
Let $\lambda = \frac{\Delta t}{\Delta x^2}$; then
$$\rho_i^{n+1} = \frac{\lambda}{1+2\lambda}\,\rho_{i+1}^{n+1} + \frac{1}{1+2\lambda}\,\rho_i^n + \frac{\lambda}{1+2\lambda}\,\rho_{i-1}^{n+1}.$$
Therefore $\rho_i^{n+1}$ is a strict convex combination of $\rho_{i+1}^{n+1}$, $\rho_i^n$, and $\rho_{i-1}^{n+1}$ and satisfies the maximum principle, which proves the stability. Note that the exact solution $u$ satisfies
$$\frac{u_i^{n+1} - u_i^n}{\Delta t} = \frac{u_{i+1}^{n+1} - 2u_i^{n+1} + u_{i-1}^{n+1}}{\Delta x^2} + O(\Delta t + \Delta x^2).$$
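The conclusions of Problem 3(b)-(c) can be read off directly from the amplification factor. A quick sketch (the value $z = h\lambda = -10^6$ is an arbitrary stand-in for the stiff limit $h\lambda \to -\infty$):

```python
# Amplification factor of the theta-method applied to y' = lambda*y,
# with theta weighting the explicit term as in Problem 3:
# R(z) = (1 + theta*z) / (1 - (1 - theta)*z), z = h*lambda.
def R(theta, z):
    return (1 + theta * z) / (1 - (1 - theta) * z)

z = -1e6  # a very stiff step, standing in for h*lambda -> -infinity
# theta = 0   (backward Euler): R -> 0, stiff decay.
# theta = 1/2 (trapezoidal):    R -> -1, A-stable but no stiff decay.
# theta = 1   (forward Euler):  |R| huge, far outside the stability region.
print(R(0.0, z), R(0.5, z), R(1.0, z))
```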
Let $e_i^n = u_i^n - U_i^n$. Subtracting the scheme from the truncation relation and solving for $e_i^{n+1}$,
$$e_i^{n+1} = \frac{\lambda}{1+2\lambda}\, e_{i+1}^{n+1} + \frac{1}{1+2\lambda}\, e_i^n + \frac{\lambda}{1+2\lambda}\, e_{i-1}^{n+1} + \frac{O(\Delta t^2 + \Delta t\,\Delta x^2)}{1+2\lambda}.$$
Let $\|e^n\|_\infty = \max_i |e_i^n|$. Taking absolute values and maximizing over $i$,
$$\|e^{n+1}\|_\infty \le \frac{2\lambda}{1+2\lambda}\,\|e^{n+1}\|_\infty + \frac{1}{1+2\lambda}\,\|e^n\|_\infty + \frac{C(\Delta t^2 + \Delta t\,\Delta x^2)}{1+2\lambda},$$
hence
$$\|e^{n+1}\|_\infty \le \|e^n\|_\infty + C(\Delta t^2 + \Delta t\,\Delta x^2).$$
Since $e^0 = 0$ and $n\,\Delta t \le T$, this implies
$$\|e^n\|_\infty \le \|e^0\|_\infty + Cn(\Delta t^2 + \Delta t\,\Delta x^2) \le CT(\Delta t + \Delta x^2).$$

5. Consider the boundary-value problem
$$-u_{xx} + u = f(x), \quad x \in (0,1), \qquad u(0) = a, \quad u_x(1) = b.$$
Given a partition $0 = x_0 < x_1 < \cdots < x_n = 1$, please formulate a piecewise linear continuous finite element method to solve the problem. Show that your method has a unique solution.

Solution: Let $\phi_i(x)$, $i = 0,\ldots,n$, be the continuous piecewise linear functions defined on $[0,1]$ s.t. $\phi_i(x_j) = \delta_{ij}$ for $j = 0,\ldots,n$. Let
$$V_h = \operatorname{span}\{\phi_i(x) : i = 1,2,\ldots,n\} \quad \text{and} \quad W_h = a\phi_0 + V_h.$$
For any $\phi \in V_h$,
$$\int_0^1 (-u_{xx} + u)\,\phi\,dx = -(u_x\phi)\big|_0^1 + \int_0^1 u_x\phi_x\,dx + \int_0^1 u\phi\,dx = \int_0^1 f\phi\,dx,$$
where $(u_x\phi)\big|_0^1 = u_x(1)\phi(1) = b\,\phi(1)$ since $\phi(0) = 0$. Thus
$$\int_0^1 u_x\phi_x\,dx + \int_0^1 u\phi\,dx = \int_0^1 f\phi\,dx + b\,\phi(1).$$
The FEM is to find $U \in W_h$ s.t.
$$\int_0^1 U_x\phi_x\,dx + \int_0^1 U\phi\,dx = \int_0^1 f\phi\,dx + b\,\phi(1)$$
for any $\phi \in V_h$. This system has a unique solution if and only if the associated homogeneous system
$$\int_0^1 U_x\phi_x\,dx + \int_0^1 U\phi\,dx = 0 \quad \text{for all } \phi \in V_h$$
has only the trivial solution $U = 0$ (the homogeneous system corresponds to $a = 0$, $b = 0$, $f = 0$). In fact, let $\phi = U$; then
$$\int_0^1 U_x^2\,dx + \int_0^1 U^2\,dx = 0 \implies U \equiv 0.$$

6. Consider the following matrix $A$ and solving the linear system $A\mathbf{x} = \mathbf{b}$ by iterative methods,
$$A = \begin{pmatrix} 1 & \alpha & \beta \\ \alpha & 1 & \gamma \\ \beta & \gamma & 1 \end{pmatrix}.$$

(a) What are the conditions on the variables $\alpha$, $\beta$, and $\gamma$ for Jacobi's method and the Gauss-Seidel method to converge?

(b) Describe Jacobi's method and the Gauss-Seidel method.

(c) Find a set of values (if any exist) of $\alpha$, $\beta$, and $\gamma$ for which the Jacobi method converges but Gauss-Seidel does not, and vice versa.

Solution: (a) A sufficient condition is that $A$ be strictly diagonally dominant: $|\alpha| + |\beta| < 1$, $|\alpha| + |\gamma| < 1$, and $|\beta| + |\gamma| < 1$.

(b) Write $A = D - L - U$, where $D$ is the diagonal part, $L$ the strictly lower triangular part, and $U$ the strictly upper triangular part. Then
$$(D - L - U)\mathbf{x} = \mathbf{b} \iff D\mathbf{x} = (L + U)\mathbf{x} + \mathbf{b},$$
giving the Jacobi iteration
$$\mathbf{x}_{n+1} = D^{-1}(L + U)\mathbf{x}_n + D^{-1}\mathbf{b}.$$
For Gauss-Seidel,
$$(D - L)\mathbf{x} = U\mathbf{x} + \mathbf{b}, \qquad \mathbf{x}_{n+1} = (D - L)^{-1}U\mathbf{x}_n + (D - L)^{-1}\mathbf{b}.$$
Algorithm: with any initial guess $\mathbf{x}_0$, iterate $\mathbf{x}_{n+1} = D^{-1}(L + U)\mathbf{x}_n + D^{-1}\mathbf{b}$ (Jacobi) or $\mathbf{x}_{n+1} = (D - L)^{-1}U\mathbf{x}_n + (D - L)^{-1}\mathbf{b}$ (Gauss-Seidel) until convergence.

(c) (Outline of solution) Each method converges for every initial guess if and only if the spectral radius of its iteration matrix is less than 1. Compute the eigenvalues of $D^{-1}(L + U)$, call them $\lambda_J$, and the eigenvalues of $(D - L)^{-1}U$, call them $\lambda_G$; then find conditions on $\alpha$, $\beta$, and $\gamma$ for which $\max|\lambda_J| < 1$ but $\max|\lambda_G| > 1$, and vice versa.
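A small sketch of Problem 6(a)-(b): build the two iteration matrices for a strictly diagonally dominant choice of $(\alpha, \beta, \gamma)$ (the values 0.4, 0.3, 0.2 are arbitrary) and confirm that both spectral radii are below 1.

```python
import numpy as np

def jacobi_matrix(A):
    """Iteration matrix D^{-1}(L + U), with A = D - L - U."""
    D = np.diag(np.diag(A))
    return np.linalg.solve(D, D - A)

def gauss_seidel_matrix(A):
    """Iteration matrix (D - L)^{-1} U."""
    DL = np.tril(A)               # D - L: lower triangle including diagonal
    return np.linalg.solve(DL, DL - A)

def spectral_radius(M):
    return np.max(np.abs(np.linalg.eigvals(M)))

# Strictly diagonally dominant: |a|+|b|, |a|+|g|, |b|+|g| all below 1.
a, b, g = 0.4, 0.3, 0.2
A = np.array([[1.0, a, b], [a, 1.0, g], [b, g, 1.0]])
print(spectral_radius(jacobi_matrix(A)),
      spectral_radius(gauss_seidel_matrix(A)))
# Both radii are below 1, so both iterations converge for this A.
```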
7. Describe an algorithm to compute the least squares solution by the singular value decomposition. Prove that the solution obtained is the least squares solution, and estimate the leading computational cost of your algorithm. Assume the least squares system is $A\mathbf{x} = \mathbf{b}$, $A \in \mathbb{R}^{m \times n}$ of full rank, $m \ge n$, $\mathbf{b} \in \mathbb{R}^m$, and $\mathbf{x} \in \mathbb{R}^n$.

Solution: Algorithm:
(a) Compute the reduced SVD: $A = U\Sigma V^T$.
(b) Compute the vector $\mathbf{y} = U^T\mathbf{b}$.
(c) Solve the diagonal system $\Sigma\mathbf{w} = \mathbf{y}$.
(d) Set $\mathbf{x} = V\mathbf{w}$.

Proof: The least squares solution is characterized by the normal equations $A^T A\mathbf{x} = A^T\mathbf{b}$. Substituting $A = U\Sigma V^T$ and using $U^T U = I$,
$$V\Sigma^T U^T U\Sigma V^T\mathbf{x} = V\Sigma^T U^T\mathbf{b} \iff \Sigma^T\Sigma V^T\mathbf{x} = \Sigma^T U^T\mathbf{b} \iff \Sigma V^T\mathbf{x} = \mathbf{y} \iff \Sigma\mathbf{w} = \mathbf{y},$$
so the $\mathbf{x}$ computed above satisfies the normal equations. Leading cost: dominated by the SVD, approximately $2mn^2 + 11n^3$ flops.

8. Describe the conjugate gradient iteration for solving a linear system $A\mathbf{x} = \mathbf{b}$ with $A$ symmetric positive definite. Prove that the residuals are orthogonal.

Solution: Algorithm:
(a) $\mathbf{x}_0 = 0$, $\mathbf{r}_0 = \mathbf{b}$, $\mathbf{p}_0 = \mathbf{r}_0$.
(b) For $n = 1, 2, 3, \ldots$:
  i. $\alpha_n = (\mathbf{r}_{n-1}^T\mathbf{r}_{n-1})/(\mathbf{p}_{n-1}^T A\mathbf{p}_{n-1})$
  ii. $\mathbf{x}_n = \mathbf{x}_{n-1} + \alpha_n\mathbf{p}_{n-1}$
  iii. $\mathbf{r}_n = \mathbf{r}_{n-1} - \alpha_n A\mathbf{p}_{n-1}$
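Steps (a)-(d) of Problem 7 translate directly into code. A sketch with random data (the sizes $m = 8$, $n = 3$ and the seed are arbitrary); `numpy.linalg.svd` with `full_matrices=False` returns exactly the reduced SVD used above.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 8, 3
A = rng.standard_normal((m, n))  # full rank with probability 1
b = rng.standard_normal(m)

U, s, Vt = np.linalg.svd(A, full_matrices=False)  # (a) reduced SVD
y = U.T @ b                                       # (b) y = U^T b
w = y / s                                         # (c) solve Sigma w = y
x = Vt.T @ w                                      # (d) x = V w

# x satisfies the normal equations A^T A x = A^T b:
print(np.allclose(A.T @ A @ x, A.T @ b))  # prints True
```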
  iv. $\beta_n = (\mathbf{r}_n^T\mathbf{r}_n)/(\mathbf{r}_{n-1}^T\mathbf{r}_{n-1})$
  v. $\mathbf{p}_n = \mathbf{r}_n + \beta_n\mathbf{p}_{n-1}$
End.

Proof that $\mathbf{r}_n^T\mathbf{r}_j = 0$ for $j < n$: by induction on $n$ (using also the $A$-conjugacy of the search directions, $\mathbf{p}_i^T A\mathbf{p}_k = 0$ for $i \neq k$). From step iii,
$$\mathbf{r}_n^T\mathbf{r}_j = \mathbf{r}_{n-1}^T\mathbf{r}_j - \alpha_n\,\mathbf{p}_{n-1}^T A\mathbf{r}_j.$$
If $j < n-1$, both terms vanish: the first by the induction hypothesis, the second by writing $\mathbf{r}_j = \mathbf{p}_j - \beta_j\mathbf{p}_{j-1}$ and using $A$-conjugacy. If $j = n-1$, then since $\mathbf{r}_{n-1} = \mathbf{p}_{n-1} - \beta_{n-1}\mathbf{p}_{n-2}$ and $\mathbf{p}_{n-1}^T A\mathbf{p}_{n-2} = 0$, we have $\mathbf{p}_{n-1}^T A\mathbf{r}_{n-1} = \mathbf{p}_{n-1}^T A\mathbf{p}_{n-1}$, so
$$\mathbf{r}_n^T\mathbf{r}_{n-1} = \mathbf{r}_{n-1}^T\mathbf{r}_{n-1} - \alpha_n\,\mathbf{p}_{n-1}^T A\mathbf{p}_{n-1} = 0$$
by the definition of $\alpha_n$.
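The orthogonality of the residuals in Problem 8 can be checked numerically. A sketch with a hypothetical SPD test matrix (size 6 and the seed are arbitrary): run the iteration exactly as stated and form the Gram matrix of all residuals.

```python
import numpy as np

def cg(A, b, steps):
    """Conjugate gradient iteration exactly as in Problem 8,
    returning the iterate and the list of all residuals."""
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    residuals = [r.copy()]
    for _ in range(steps):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
        residuals.append(r.copy())
    return x, residuals

rng = np.random.default_rng(1)
M = rng.standard_normal((6, 6))
A = M @ M.T + 6 * np.eye(6)   # symmetric positive definite test matrix
b = rng.standard_normal(6)
x, res = cg(A, b, 6)          # 6 steps solve a 6x6 system in exact arithmetic
# Pairwise inner products r_n^T r_j (j != n) vanish up to rounding:
G = np.array([[ri @ rj for rj in res] for ri in res])
print(np.abs(G - np.diag(np.diag(G))).max())
```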