Scientific Computing II


Technische Universität München, SS 2008
Institut für Informatik
Dr. Miriam Mehl

Scientific Computing II, Final Exam, July 2008

1) Iterative Solvers (31 pts + 4 extra pts, 60 min)

a) Steepest Descent and Conjugate Gradients (13 pts), 24 min

(i) The steepest descent method is an iterative method to find the minimum of a given function. However, we can also use it to solve systems of linear equations. Briefly describe and justify the step from a system of linear equations to a problem to which we can apply the steepest descent method (no formulas required, approx. two sentences, 2 pts).

We form a quadratic functional whose gradient is (up to the sign) the residual of the linear system to be solved and minimise this functional. As the gradient of the functional vanishes exactly where the residual of the linear system vanishes, the minimum of the functional is the root of the residual, that is, the solution of the linear system.

(ii) What happens in one iteration of the steepest descent method? (one sentence, no details!, 1 pt)

In one iteration of steepest descent, a one-dimensional minimisation is performed, namely the minimisation of the functional in the direction of its steepest descent.

(iii) Name the three steps of one iteration of the steepest descent method in their correct algorithmic order (3 pts).

1) Determine the direction of steepest descent (negative gradient).
2) Determine the step size that reaches the minimum in this direction.
3) Update the current approximation of the solution / of the minimum.

(iv) Name the main difference between the steepest descent and the conjugate gradient method (one sentence, no formulas, 1 pt).

The conjugate gradient method does not use the negative gradient / the direction of steepest descent as search direction but modifies these directions such that each search direction is conjugate, i.e. A-orthogonal, to all previous ones.
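
As an illustration of the three steps listed in (iii), a minimal sketch of one steepest-descent iteration for a symmetric positive definite system Au = f could look as follows (Python with NumPy; the function and variable names are chosen freely and are not part of the exam):

    import numpy as np

    def steepest_descent_iteration(A, f, u):
        # one steepest-descent step for the SPD system A u = f
        r = f - A @ u                    # 1) direction of steepest descent (negative gradient = residual)
        alpha = (r @ r) / (r @ (A @ r))  # 2) step size minimising the functional along r
        return u + alpha * r             # 3) update the current approximation

Repeating this step drives the residual f - Au towards zero and hence u towards the solution of the linear system.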

(v) Assume you have to solve the system Au = f of linear equations. Write pseudo code for one iteration of the conjugate gradient method (including the declaration of the respective function with all output and input parameters, 6 pts). You may use the operations v1^T v2 for the scalar product of two vectors v1 and v2 and Bv for the multiplication of a matrix B with a vector v.

function [u, r, p] = cg_iteration(A, b, u, r, p)
    a = r^T r;            % squared norm of the old residual
    c = p^T (Ap);         % curvature along the search direction
    u = u + (a/c) * p;    % update the approximation
    r = b - Au;           % new residual
    c = r^T r;            % squared norm of the new residual
    p = r + (c/a) * p;    % new (A-orthogonal) search direction

b) Relaxation Methods and Multigrid (12 pts + 4 extra pts), 23 min

We have to solve the two-dimensional Poisson equation

    u_xx(x, y) + u_yy(x, y) = f(x, y)   in ]0;1[^2

with boundary conditions u(x, y) = 1 on the boundary of ]0;1[^2. We discretise the equation on a regular Cartesian grid with N unknowns per coordinate direction and use the well-known five-point stencil

    1/h^2 [ 0   1   0 ]
          [ 1  -4   1 ]
          [ 0   1   0 ]

to discretise the Laplace operator.

(i) Write down pseudo code for a function performing one Jacobi iteration for the resulting system of linear equations (including the declaration of the respective function with all output and input parameters, 4 pts).

function Jacobi_iteration(N, f, u) {
    h = 1/(N+1);
    for i = 1, ..., N
        for j = 1, ..., N
            v[i, j] = (u[i-1, j] + u[i+1, j] + u[i, j-1] + u[i, j+1]) / 4 - h^2/4 * f[i, j];
    for i = 1, ..., N
        for j = 1, ..., N
            u[i, j] = v[i, j];
}
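
A runnable counterpart of the Jacobi pseudo code above might look as follows (Python with NumPy; it assumes, purely for this example, that the unknowns and the right-hand side are stored in (N+2) x (N+2) arrays whose outermost rows and columns hold the boundary values):

    import numpy as np

    def jacobi_iteration(N, f, u):
        # one Jacobi sweep for the five-point discretisation of u_xx + u_yy = f;
        # u and f are (N+2) x (N+2) arrays, the outermost entries of u hold the
        # boundary values and are never modified
        h = 1.0 / (N + 1)
        v = u.copy()
        v[1:N+1, 1:N+1] = ((u[0:N, 1:N+1] + u[2:N+2, 1:N+1]
                            + u[1:N+1, 0:N] + u[1:N+1, 2:N+2]) / 4.0
                           - (h * h / 4.0) * f[1:N+1, 1:N+1])
        return v

    N = 31
    u = np.ones((N + 2, N + 2))        # boundary value 1 and initial guess 1
    f = np.zeros((N + 2, N + 2))       # example right-hand side
    u = jacobi_iteration(N, f, u)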

(ii) We know from the lecture the eigenvectors and the respective eigenvalues

    q_{m,n} = ( sin(πmih) · sin(πnjh) )_{i,j = 1, ..., N},
    λ_{m,n} = 1/2 ( cos(πmh) + cos(πnh) )

of the iteration matrix of the Jacobi method, where N = 1/h - 1 is the number of unknowns per coordinate direction. Is the Jacobi method a good smoother? Briefly justify your answer (one or two sentences, 2 pts).

The Jacobi method is not a good smoother. High-frequency error components (m, n large) are multiplied by a factor λ_{m,n} whose absolute value is close to 1 for m, n close to 1/h. Thus, the absolute value of a high-frequency error component is hardly reduced.

(iii) Name the seven main steps of a multigrid V-cycle in their correct algorithmic order (5 pts).

1) presmoothing
2) computation of the residual
3) restriction
4) solution of the coarse-grid equation (recursive call)
5) interpolation
6) correction
7) postsmoothing

(iv) Why do we need good smoothers for a good multigrid method (give one reason, 1 pt)?

As only low frequencies can be represented on the coarse grid, the high frequencies already have to be eliminated on the fine grid.

(v) We introduce the damped Jacobi method. That is, we multiply the correction term, by which we change the local values of the unknown variable in each iteration, with a damping factor ω. In this case, we still have the same eigenvectors of the iteration matrix as for the original Jacobi method, but the modified eigenvalues

    λ_{m,n} = 1 - ω + ω/2 ( cos(mπh) + cos(nπh) ).

Determine

    min_{m,n} λ_{m,n}

and a damping factor for which the damped Jacobi method would be a good smoother (4 pts). With this question, you can earn extra points (over 100%); it is not necessary to solve it to get the full score in the exam.
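
The smoothing behaviour can also be checked numerically; a short sketch (Python with NumPy; the grid size N = 127 is an arbitrary example) evaluates the largest |λ_{m,n}| over the high-frequency modes N/2 <= m, n <= N for plain Jacobi (ω = 1) and for one damped variant, using the eigenvalue formula from (v). The worked answer below derives the corresponding minimum analytically.

    import numpy as np

    def max_high_frequency_factor(N, omega):
        # largest |lambda_{m,n}| of the damped Jacobi iteration matrix over the
        # high-frequency modes N/2 <= m, n <= N (omega = 1 is the plain Jacobi method)
        h = 1.0 / (N + 1)
        c = np.cos(np.pi * np.arange(N // 2, N + 1) * h)
        lam = 1.0 - omega + 0.5 * omega * (c[:, None] + c[None, :])
        return np.abs(lam).max()

    for omega in (1.0, 0.5):
        print(omega, max_high_frequency_factor(127, omega))
    # prints a value close to 1 for omega = 1 and a value close to 0.5 for omega = 0.5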

min_{m,n} λ_{m,n} = min_{m,n} ( 1 - ω + ω/2 (cos(mπh) + cos(nπh)) )
                  = 1 - ω + ω/2 ( min_m cos(mπh) + min_n cos(nπh) )
                  = 1 - ω + ω cos(π(1 - h)).

For ω = 1/2, this minimum is close to zero / tends to zero for h tending to zero. As the minimum is attained for high frequencies (maximal values of m and n), the damped Jacobi method with damping factor ω = 1/2 is, thus, a good smoother.

c) Convergence of Iterative Solvers (6 pts)

In the following table, you see iteration numbers for different iterative solvers for the two-dimensional Poisson equation described in b). Complete the table with the iteration numbers you would expect in the empty fields (6 pts).

    1/h                    32      64     128     256
    Jacobi               2,500
    Gauss-Seidel         1,250
    SOR                     90
    Multigrid                5
    steepest descent     2,700
    conjugate gradients     80

2) Molecular Dynamics (9 pts, 30 min)

a) General Overview (4 pts), ?? min

A general task in scientific computing is to simulate a physical problem. There are usually different steps to be done to get from a physical scenario to the simulation. You will have to describe the different steps involved in the case of a molecular dynamics simulation with n molecules.

(i) Which kind of problems are tackled by molecular dynamics? Give two examples where a molecular dynamics simulation could be useful. (2 pts)

Molecular dynamics tackles problems on the nanoscale for which real experiments are not feasible and for which other simulation techniques are, on the one side, not accurate enough (CFD, ...) or, on the other side, too detailed and too costly (quantum mechanics). Examples are the study of nucleation processes, the simulation of proteins, nanoflows, ...

(ii) The first step is to build a physical model of the real-world problem. One of these models is the Lennard-Jones 12-6 potential

    U_LJ(r_ij) = 4ε ( (σ/r_ij)^12 - (σ/r_ij)^6 ),

which is composed of two parts, one of them responsible for attraction and the other one for repulsion. Briefly describe and justify which part has which effect on the two involved molecules. Give two examples of forces on molecules which cannot be modelled using the Lennard-Jones 12-6 potential. (3 pts)

The force acting on the two involved molecules is calculated according to the formula F = -∇U. The (σ/r_ij)^12 term dominates at short distances and results in a repulsive force, whereas the (σ/r_ij)^6 term dominates at larger distances and, due to its negative sign, results in an attractive force. The potential cannot be used to model, e.g., dipoles, gravity, three-body interactions, ...

(iii) In molecular dynamics simulations, the positions of all molecules are calculated for successive time steps. Briefly explain (no details, no mathematical derivations) how to get from the Lennard-Jones potential to a formula which allows the calculation of the molecules' positions at a new time step. (3 pts)

The second derivative of the position is the acceleration, which is related to the force via Newton's second law (F = m·a). The force equals the negative gradient of the potential (here: the Lennard-Jones potential). As it is a pair potential, we have to evaluate it for each pair of particles and therewith calculate the total force on each molecule. Then, a time integration method has to be used to solve Newton's equation of motion for each molecule (e.g. Störmer-Verlet).

(iv) During the numerical calculation of the new positions, errors occur (discretisation errors, round-off errors). Now assume you are performing a simulation with 100,000 time steps. How strongly will the calculated positions of the particles differ from the real positions? Is this error critical for the usability of the simulation? (2 pts)

MD is an ill-conditioned, chaotic problem. After 100,000 time steps, the error will be of the order of the domain size, so the calculated positions no longer have any correlation to the real positions. But this is not critical, as we are not interested in the particles' individual positions but in macroscopic quantities.
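
To make the attraction/repulsion argument from (ii) concrete, a small sketch (Python; setting ε = σ = 1 purely for illustration) evaluates the Lennard-Jones potential and the resulting radial force F(r) = -dU/dr: the force is positive (repulsive) for small distances and negative (attractive) for larger ones.

    def lj_potential(r, eps=1.0, sigma=1.0):
        # Lennard-Jones 12-6 potential U(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6)
        sr6 = (sigma / r) ** 6
        return 4.0 * eps * (sr6 * sr6 - sr6)

    def lj_radial_force(r, eps=1.0, sigma=1.0):
        # radial force F(r) = -dU/dr; positive values push the two molecules apart
        sr6 = (sigma / r) ** 6
        return 24.0 * eps * (2.0 * sr6 * sr6 - sr6) / r

    for r in (0.95, 1.2, 2.0):
        print(r, lj_potential(r), lj_radial_force(r))
    # the force changes sign at r = 2**(1/6) * sigma, the minimum of the potential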

(v) In (iii), you have described the necessary steps to get a formula for the new positions of all n molecules (interacting via LJ potentials). Now pick one of those molecules at an arbitrary time step. What are the costs (consider all necessary operations and use the O()-notation) for calculating the new position of this single molecule? Rely on the most efficient algorithm you know and justify your answer! (3 pts)

The most expensive part is the calculation of the force. As the Lennard-Jones potential is short-ranged, only a small region around the selected molecule has to be considered. This region only contains a constant number of molecules. Using, e.g., linked cells, all these molecules can be found in constant time. The costs of each force calculation and of the evaluation of the integration method are also constant, so in total the costs are constant (O(1)).

b) Discretisation (5 pts)

Assume that in a molecular dynamics program, the following discretisation scheme is used to calculate new positions for the molecules:

    x(t + Δt) = x(t) + Δt·v(t) + Δt^2·a(t)    (1)

(i) The scheme is missing a method for the calculation of the velocity. Construct a formula for v(t + Δt) in such a way that the discretisation scheme is time reversible. You have to prove the time reversibility for the position equation, but not for the velocity equation.

First we perform one time step Δt to get x(t + Δt), and then, starting from x(t + Δt), we perform a time step -Δt to get x*(t):

    x*(t) = x(t + Δt) - Δt·v(t + Δt) + Δt^2·a(t + Δt)    (2)

Inserting (1) into (2):

    x*(t) = x(t) + Δt·v(t) + Δt^2·a(t) - Δt·v(t + Δt) + Δt^2·a(t + Δt)

As x(t) and x*(t) have to be equal:

    Δt·v(t + Δt) = Δt·v(t) + Δt^2·a(t) + Δt^2·a(t + Δt)
    v(t + Δt) = v(t) + Δt·( a(t) + a(t + Δt) )

This formula for v(t + Δt) ensures that x(t) and x*(t) are equal and that therefore the position equation is time reversible.

(ii) Is the discretisation scheme (1) a good discretisation scheme? Briefly justify your answer!

It is not a good discretisation scheme. Using (Δt^2/2)·a(t) instead of Δt^2·a(t) gives a higher order without any disadvantages.
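
A small numerical check of the derived velocity formula (Python sketch; the acceleration a(x) = -x of a harmonic oscillator is an arbitrary example, not part of the exam): performing one step of size Δt followed by one step of size -Δt reproduces the initial position and velocity up to round-off, confirming the time reversibility.

    def step(x, v, dt, acc):
        # scheme (1) plus the derived velocity update:
        # x_new = x + dt*v + dt^2*a(x),  v_new = v + dt*(a(x) + a(x_new))
        a_old = acc(x)
        x_new = x + dt * v + dt * dt * a_old
        v_new = v + dt * (a_old + acc(x_new))
        return x_new, v_new

    acc = lambda x: -x                     # example force field (harmonic oscillator)
    x0, v0, dt = 1.0, 0.3, 0.01
    x1, v1 = step(x0, v0, dt, acc)         # step forward
    x2, v2 = step(x1, v1, -dt, acc)        # step backward
    print(x2 - x0, v2 - v0)                # both differences vanish up to round-off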