Numerical Methods for CSE. Examination - Solutions


R. Hiptmair, A. Moiola — Fall term 2010, Numerical Methods for CSE, ETH Zürich, D-MATH.
Examination - Solutions, February 10th, 2011. Total points: 65 = 8+14+16+16+11.
(These are not master solutions but just a reference!)

Problem 1. Advantages of conjugate gradient method [8 pts]

In case the system matrix is large and sparse; in case a matrix-vector product routine is available; in case a good initial guess is available; in case only an approximate solution is requested; ...

Problem 2. Cholesky and QR [14 pts]

(2a) $A^TA$ has to be s.p.d.:
- symmetric: $(A^TA)^T = A^T(A^T)^T = A^TA$;
- positive definite: $\mathrm{rank}(A) = n$ means $A$ is injective, so for every $v \in \mathbb{R}^n$ with $v \neq 0$ it holds $Av \neq 0$, hence $v^T A^T A v = \|Av\|_2^2 > 0$.

(2b) Three things have to be verified:
- $R \in \mathbb{R}^{n,n}$ is upper triangular, from the definition of the Cholesky decomposition $A^TA = R^TR$;
- $Q = (R^{-T}A^T)^T = AR^{-1} \in \mathbb{R}^{m,n}$; from $R^TR = A^TA$ it follows $R^{-T}A^TAR^{-1} = \mathrm{Id}$, so $Q^TQ = R^{-T}A^TAR^{-1} = \mathrm{Id}$, i.e. $Q$ has orthonormal columns;
- $QR = (R^{-T}A^T)^TR = AR^{-1}R = A$.

(2c) $A$ has rank 2, but
$$A^TA = \begin{pmatrix} 1+\tfrac{1}{4}\epsilon & 1 \\ 1 & 1+\tfrac{1}{4}\epsilon \end{pmatrix} \approx \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}$$
has rank one in machine arithmetic. So the computed matrix is symmetric but not positive definite: the hypotheses needed for the Cholesky decomposition are not satisfied.

NumCSE Exam solution Page 1 of 6 Feb. 10th, 2011
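As a cross-check of (2a)-(2c), here is a small Python/NumPy sketch (NumPy standing in for the MATLAB used elsewhere in these solutions). It builds the QR factorization of a full-rank tall matrix from the Cholesky factor of $A^TA$, then reproduces the breakdown mechanism of (2c): an analytically s.p.d. Gram matrix that rounds to a rank-one matrix in floating point. The matrix `B` below is an illustrative choice, not the matrix from the exam sheet.

```python
import numpy as np

def qr_via_cholesky(A):
    """QR factorization of a full-rank, tall A via Cholesky of A^T A (cf. (2a)-(2b)).

    R is the upper-triangular Cholesky factor of A^T A (A^T A = R^T R),
    and Q = A R^{-1} has orthonormal columns.
    """
    R = np.linalg.cholesky(A.T @ A).T       # np.linalg.cholesky returns the lower factor
    Q = np.linalg.solve(R.T, A.T).T         # Q = A R^{-1}, without forming the inverse
    return Q, R

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 4))
Q, R = qr_via_cholesky(A)
print(np.linalg.norm(Q.T @ Q - np.eye(4)))  # tiny: columns of Q are orthonormal
print(np.linalg.norm(Q @ R - A))            # tiny: Q*R reproduces A

# (2c)-style failure: B has rank 2, but B^T B rounds to the rank-one
# all-ones matrix, so Cholesky must break down.
eps = np.finfo(float).eps
B = np.array([[1.0, 1.0],
              [np.sqrt(eps) / 4, 0.0],
              [0.0, np.sqrt(eps) / 4]])
chol_failed = False
try:
    np.linalg.cholesky(B.T @ B)             # not positive definite in machine arithmetic
except np.linalg.LinAlgError:
    chol_failed = True
print("Cholesky on B^T B failed:", chol_failed)
```

The diagonal perturbation `eps/16` of `B.T @ B` is below half an ulp of 1, so the computed Gram matrix is exactly `[[1, 1], [1, 1]]`.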

Problem 3. Quadrature [16 pts]

(3a) Routine:

function GaussConv(f_hd)
if nargin < 1; f_hd = @(t) sinh(t); end;
I_exact = quad(@(t) asin(t).*f_hd(t), -1, 1, eps)

n_max = 50;  nn = 1:n_max;
err = zeros(size(nn));
for j = 1:n_max
    [x, w] = gaussquad(nn(j));
    I = dot(w, asin(x).*f_hd(x));
    % I = GaussArcSin(f_hd, nn(j));   % using pcode
    err(j) = abs(I - I_exact);
end

close all; figure;
loglog(nn, err, [1, n_max], [1, n_max].^(-3), '--', 'linewidth', 2);
title('Convergence of Gauss quadrature');
xlabel('n = # of evaluation points'); ylabel('error');
legend('Gauss quad.', 'O(n^{-3})');
print -depsc2 GaussConv.eps;

[Figure: log-log plot "Convergence of Gauss quadrature", error vs. n = # of evaluation points; the error decays algebraically, roughly parallel to the O(n^{-3}) reference line.]

(3b) Algebraic convergence. (Not requested: approximately $O(n^{-3})$, expected $O(n^{-2.7})$.)

(3c) With the change of variable $t = \sin(x)$, $dt = \cos(x)\,dx$:
$$I = \int_{-1}^{1} \arcsin(t)\,f(t)\,dt = \int_{-\pi/2}^{\pi/2} x\,f(\sin(x))\cos(x)\,dx.$$
(The change of variable has to provide a smooth integrand on the integration interval.)

(3d) Routine:

function GaussConvCV(f_hd)
if nargin < 1; f_hd = @(t) sinh(t); end;
g = @(t) t.*f_hd(sin(t)).*cos(t);
I_exact = quad(@(t) asin(t).*f_hd(t), -1, 1, eps)

% I_exact = quad(@(t) g(t), -pi/2, pi/2, eps)

n_max = 50;  nn = 1:n_max;
err = zeros(size(nn));
for j = 1:n_max
    [x, w] = gaussquad(nn(j));
    I = pi/2 * dot(w, g(x*pi/2));
    % I = GaussArcSinCV(f_hd, nn(j));   % using pcode
    err(j) = abs(I - I_exact);
end

close all; figure;
semilogy(nn, err, [1, n_max], [eps, eps], '--', 'linewidth', 2);
title('Convergence of Gauss quadrature for rephrased problem');
xlabel('n = # of evaluation points'); ylabel('error');
legend('Gauss quad.', 'eps');
print -depsc2 GaussConvCV.eps;

[Figure: semilogarithmic plot "Convergence of Gauss quadrature for rephrased problem"; the error decays exponentially in n and reaches machine precision (eps).]

(3e) The convergence is now exponential. The integrand of the original problem belongs to $C^0([-1,1])$ but not to $C^1([-1,1])$, because the derivative of the arcsin function blows up at $\pm 1$. The change of variable provides a smooth integrand: $x\cos(x)\sinh(\sin x) \in C^\infty(\mathbb{R})$. Gauss quadrature ensures exponential convergence only if the integrand is smooth ($C^\infty$). This explains the algebraic and the exponential convergence, respectively.

Problem 4. Exponential integrator [16 pts]

(4a) Given $y(0)$, a step of size $t$ of the method gives
$$y_1 = y(0) + t\,\varphi\big(t\,Df(y(0))\big)\,f(y(0))
     = y(0) + t\,\varphi(tA)\,A\,y(0)
     = y(0) + t\,\big(\exp(At) - \mathrm{Id}\big)(tA)^{-1}A\,y(0)
     = y(0) + \exp(At)\,y(0) - y(0)
     = \exp(At)\,y(0) = y(t).$$

(4b) Function:

function y1 = ExpEulStep(y0, f, df, h)
y1 = y0(:) + h*phim(h*df(y0(:)))*f(y0(:));
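The helper `phim` used in `ExpEulStep` is provided with the exam; assuming it evaluates $\varphi(Z) = (\exp(Z) - \mathrm{Id})Z^{-1}$, the computation in (4a) can be checked numerically. The following Python sketch restricts itself to symmetric matrices so that the matrix functions can be evaluated via an eigendecomposition; `phim_sym` and `exp_eul_step` are hypothetical stand-ins for the exam's `phim` and `ExpEulStep`, and the matrix `A` is an arbitrary test case. It verifies that one exponential Euler step of any size reproduces the exact solution of a linear ODE $\dot{y} = Ay$.

```python
import numpy as np

def expm_sym(Z):
    """exp(Z) for symmetric Z via eigendecomposition (a simplifying assumption)."""
    lam, V = np.linalg.eigh(Z)
    return (V * np.exp(lam)) @ V.T

def phim_sym(Z):
    """phi(Z) = (exp(Z) - Id) Z^{-1} for symmetric nonsingular Z.

    The scalar phi(z) = expm1(z)/z is applied to the eigenvalues; expm1
    keeps the evaluation stable also for small z.
    """
    lam, V = np.linalg.eigh(Z)
    return (V * (np.expm1(lam) / lam)) @ V.T

def exp_eul_step(y0, f, df, h):
    """One exponential Euler step y1 = y0 + h*phi(h*Df(y0))*f(y0), cf. ExpEulStep."""
    return y0 + h * phim_sym(h * df(y0)) @ f(y0)

# (4a): for a linear ODE y' = A*y, one step of ANY size h is exact.
A = np.array([[-2.0, 1.0],
              [1.0, -3.0]])          # symmetric, nonsingular test matrix
f = lambda y: A @ y
df = lambda y: A
y0 = np.array([1.0, 0.5])
h = 0.7
y1 = exp_eul_step(y0, f, df, h)
y_exact = expm_sym(h * A) @ y0       # y(h) = exp(Ah) y0
print(np.linalg.norm(y1 - y_exact))  # tiny: the step is exact, up to round-off
```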

(4c) Error = $O(h^2)$.

function ExpIntOrder
% estimate order of exponential integrator on scalar logistic eq.
t0 = 0;
T = 1;
y0 = 0.1;

% logistic f, derivative and exact solution:
f = @(y) y.*(1-y);
df = @(y) 1 - 2*y;
exacty = @(t, y0) y0./(y0 + (1-y0).*exp(-t));
exactyt = exacty(T-t0, y0);

kk = 2.^(1:10);   % different numbers of steps
err = zeros(size(kk));
for j = 1:length(kk)
    k = kk(j);
    h = (T-t0)/k;
    y = y0;
    for st = 1:k   % timestepping
        y = y + h*phim(h*df(y))*f(y);
    end
    err(j) = abs(y - exactyt);
end
hh = (T-t0)./kk;
close all; figure;
loglog(hh, err, hh, hh.^2/100, 'r--', 'linewidth', 2);
legend('expon. integrator error', 'O(h^2)', 'location', 'best');
xlabel('stepsize h'); ylabel('error at final time');
print -depsc2 ExpIntOrder.eps;

[Figure: log-log plot of the error at final time vs. stepsize h; the exponential integrator error runs parallel to the O(h^2) reference line.]

(4d) Function:

function yout = ExpIntSys(n, N, T)
% exponential integrator for the system y' = -A*y + y.^3
if nargin < 1
    n = 5;

    N = 100;
    T = 1;
end
d = n^2;   % system dimension
t0 = 0;
y0 = (1:d)/d;   % initial value

% build ODE
A = gallery('poisson', n);
% % Octave version:
% B = spdiags([-ones(n,1), 2*ones(n,1), -ones(n,1)], [-1, 0, 1], n, n);
% A = kron(B, speye(n)) + kron(speye(n), B);
f = @(y) -A*y(:) + y(:).^3;
df = @(y) -A + diag(3*y.^2);

% exponential integrator
h = (T-t0)/N;
% tout = linspace(t0, T, N+1);
yout = zeros(N+1, d);
yout(1,:) = y0(:).';
for st = 1:N   % timestepping
    yout(st+1,:) = ExpEulStep(yout(st,:), f, df, h);
end
close all;
plot((1:d), yout(end,:), 'o', 'linewidth', 2);
xlabel('direction e_j'); ylabel('value at endtime y_j(T)');
print -depsc2 ExpIntSys.eps;

[Figure: components y_j(T) of the solution at the end time, plotted over the index j = 1, ..., d; values increase from about 0 to 0.9.]

(4e) Routine:

function ExpIntErr
n = 5;
d = n^2;   % dimension
t0 = 0;
T = 1;
y0 = (1:d)/d;   % initial value
N = 100;   % number of expon. integrator steps

% build ODE
A = gallery('poisson', n);
% % Octave version:
% B = spdiags([-ones(n,1), 2*ones(n,1), -ones(n,1)], [-1, 0, 1], n, n);
% A = kron(B, speye(n)) + kron(speye(n), B);
f = @(y) -A*y(:) + y(:).^3;
% solve with ode45 and exponential Euler integrator:
[t45, y45] = ode45(@(t, y) f(y), [t0, T], y0);
yout = ExpIntSys(n, N, T);
% relative error in 2-norm at time T
RELERR = norm(y45(end,:) - yout(end,:))/norm(y45(end,:))

% % extra: plot, lines = exponential Euler, circles = ode45
% figure; plot((T-t0)*(0:N)/N + t0, yout); hold on; plot(t45, y45, 'o');

Problem 5. Matrix least squares in Frobenius norm [11 pts]

(5a) We call $m \in \mathbb{R}^{n^2}$ the vector obtained by the concatenation of the rows of $M$. Then the functional to be minimized is just the 2-norm of $m$, i.e., $A = \mathrm{Id}_{n^2}$ and $b = 0$. The constraint $Mz = g$ can be written as $Cm = d$, choosing $d = g$ and the $n \times n^2$ matrix $C$ as the Kronecker product of the $n \times n$ identity matrix and $z^T = (z_1, \ldots, z_n)$:
$$C = \mathrm{Id}_n \otimes z^T = \begin{pmatrix} z^T & 0 & \cdots & 0 \\ 0 & z^T & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0 \\ 0 & \cdots & 0 & z^T \end{pmatrix} \in \mathbb{R}^{n \times n^2}.$$
Then it is possible to obtain $M$ from the solution of the extended normal equations linear system:
$$\begin{pmatrix} \mathrm{Id}_{n^2} & C^T \\ C & 0 \end{pmatrix} \begin{pmatrix} m \\ p \end{pmatrix} = \begin{pmatrix} 0 \\ g \end{pmatrix}.$$

(5b) Function:

function M = MinFrob(z, g)
n = size(z, 1);

C = kron(eye(n), z.');
x = [eye(n^2), C.'; C, zeros(n, n)] \ [zeros(n^2, 1); g];
m = x(1:n^2);
M = reshape(m, n, n).';   % m stacks the rows of M; MATLAB's reshape is column-wise

% test, n = 10  ->  norm(M,'fro') = 0.1611... :
% z = (1:n)'; g = ones(n,1); M = MinFrob(z, g);
% [norm(M*z - g), norm(M,'fro'), norm(g)/norm(z), norm(M - g*z'/norm(z)^2)]

(From MATLAB: $M = g z^T/\|z\|_2^2$, $\|M\|_F = \|g\|/\|z\|$.)
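The extended normal equations of (5a) can also be checked in Python/NumPy; `min_frob` below is a hypothetical re-implementation of `MinFrob`. NumPy's row-major `reshape` matches the row-concatenation convention of (5a) directly, so no transpose is needed, and the result is compared against the closed-form minimizer $M = g z^T/\|z\|_2^2$.

```python
import numpy as np

def min_frob(z, g):
    """Solve  min ||M||_F  s.t.  M z = g  via the extended normal equations (5a).

    m stacks the rows of M and C = Id_n (Kronecker) z^T; we solve
        [ Id_{n^2}  C^T ] [m]   [0]
        [ C         0   ] [p] = [g].
    """
    n = z.size
    C = np.kron(np.eye(n), z)                 # n x n^2: one z^T block per row of M
    K = np.block([[np.eye(n**2), C.T],
                  [C, np.zeros((n, n))]])
    x = np.linalg.solve(K, np.concatenate([np.zeros(n**2), g]))
    return x[:n**2].reshape(n, n)             # row-major reshape = row concatenation

n = 10
z = np.arange(1.0, n + 1)
g = np.ones(n)
M = min_frob(z, g)
print(np.linalg.norm(M @ z - g))                        # tiny: constraint holds
print(np.linalg.norm(M, 'fro'))                         # = ||g||/||z|| = 0.1611...
print(np.linalg.norm(M - np.outer(g, z) / (z @ z)))     # tiny: matches g z^T/||z||^2
```

With z = (1, ..., 10) and g the all-ones vector, the Frobenius norm is sqrt(10/385) = 0.1611..., matching the value quoted in the MATLAB comment above.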