Introduction to Scientific Computing
Benson Muite
benson.muite@ut.ee
http://kodu.ut.ee/~benson
https://courses.cs.ut.ee/2018/isc/spring
26 March 2018
Course Aims
- General introduction to numerical and computational mathematics
- Review programming methods
- Learn about some numerical algorithms
- Understand how these methods are used in real-world situations
Course Overview
- Lectures: Monday, J. Liivi 2-207, 10.15-11.45, Benson Muite (benson dot muite at ut dot ee)
- Practical: Monday, J. Liivi 2-205, 12.15-13.45, Benson Muite (benson dot muite at ut dot ee)
- Homework is typically due once a week until the project; you are expected to start it in the labs.
- The exam/final project presentation will be scheduled at the end of the course.
- Grading: Homework 50%, Exam 30%, Active participation 10%, Tests 10%
- Course texts: Solomon, Numerical Algorithms: Methods for Computer Vision, Machine Learning, and Graphics; Pitt-Francis and Whiteley, Guide to Scientific Computing in C++
Lecture Topics
1) 12 February: Graphing, differentiation and integration in one dimension
2) 19 February: Programming, recursion, arithmetic operations
3) 26 February: Linear algebra
4) 5 March: Floating point numbers, errors and ordinary differential equations
5) 12 March: Image analysis using statistics
6) 19 March: Image analysis using differential equations - lecture by Gul Wali Shah
7) 26 March: Case study: application of machine learning to analyse literary corpora
8) 2 April: Case study: DNA simulation using molecular dynamics
Lab Topics
1) 12 February: Mathematical functions, differentiation and integration, plotting; reading: https://doi.org/10.1080/10586458.2017.1279092
2) 19 February: Sequences, series summation, convergence and divergence, Fibonacci numbers and the Collatz conjecture or similar experimental mathematics
3) 26 February: Monte Carlo integration, introduction to parallel computing, matrix multiplication, LU decomposition
4) 5 March: Finite difference method, error analysis, interval analysis, image segmentation by matrix differences; reading: https://doi.org/10.1080/10586458.2016.1270858
Lab Topics
5) 12 March: Eigenvalue computations, singular value decomposition, use of matrix operations in statistics - eigenfaces
6) 19 March: Fixed point iteration, image segmentation - solve a partial differential equation arising from a finite difference discretization with implicit timestepping (Mumford-Shah model); compare iterative and direct solvers
7) 26 March: Optimization algorithms, deep learning; reading: https://arxiv.org/abs/1801.05894
8) 2 April: Molecular dynamics simulation using Gromacs
Reading
1) 12 February: Solomon chapter 1 and https://doi.org/10.1080/10586458.2017.1279092
2) 19 February: Solomon chapters 2 and 3
3) 26 February: Monte Carlo integration, introduction to parallel computing; Solomon chapters 4 and 5
4) 5 March: Solomon chapters 14 and 15, interval analysis; reading: Differential Equations and Exact Solutions in the Moving Sofa Problem, https://doi.org/10.1080/10586458.2016.1270858
Reading
5) 12 March: Matrix multiplication, eigenvalue computations, singular value decomposition, use of matrix operations in statistics; Solomon chapters 6 and 7
6) 19 March: Solomon chapters 11, 13 and 16
7) 26 March: Word count, clustering algorithms, optimization algorithms, deep learning; reading: https://arxiv.org/abs/1801.05894 and Solomon chapters 8, 9 and 12
8) 2 April: Molecular dynamics simulation, http://www.bevanlab.biochem.vt.edu/pages/personal/justin/gmxtutorials/lysozyme/index.html
Root finding for polynomials - Fixed point iteration

Theorem. Let $s$ be a solution of $x = g(x)$ and suppose $g$ has a continuous derivative in some interval $I$ containing $s$. Then if $|g'(x)| \leq K < 1$ in $I$, the iteration $x_{n+1} = g(x_n)$ converges for any initial value $x_0$ in $I$. If $g'(s) \neq 0$ the convergence rate is linear, and if $g'(s) = 0$ with $g''(s) \neq 0$ the convergence rate is quadratic.
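A minimal Python sketch of the iteration (the function name fixed_point and the example map cos are illustrative choices, not from the slides):

```python
import math

def fixed_point(g, x0, tol=1e-12, max_iter=100):
    """Iterate x_{n+1} = g(x_n) until successive iterates agree to tol."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: solve x = cos(x); |g'(x)| = |sin(x)| < 1 near the root,
# so the theorem guarantees (linear) convergence.
print(fixed_point(math.cos, 1.0))  # approx 0.7390851332
```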
Root finding for polynomials - Newton-Raphson iteration

Define $f(x) = x - g(x)$ and iterate
$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}$$
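A Python sketch (the function name newton and the test problem are illustrative):

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton-Raphson iteration x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Example: f(x) = x^2 - 2, whose positive root is sqrt(2);
# since f'(sqrt(2)) != 0, convergence is quadratic.
print(newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0))
```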
Root finding for polynomials - Secant method

Approximate the derivative by a finite difference:
$$x_{n+1} = x_n - \frac{f(x_n)(x_n - x_{n-1})}{f(x_n) - f(x_{n-1})}$$
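A Python sketch (the function name secant and the test problem are illustrative):

```python
def secant(f, x0, x1, tol=1e-12, max_iter=50):
    """Secant iteration: Newton's method with the derivative replaced
    by the finite difference (f(x1) - f(x0)) / (x1 - x0)."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, x1 = x1, x2
    return x1

print(secant(lambda x: x * x - 2.0, 1.0, 2.0))  # approx 1.4142135624
```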
Root finding for polynomials - Systems of equations

$$f_1(x_1, \ldots, x_n) = 0$$
$$f_2(x_1, \ldots, x_n) = 0$$
$$\vdots$$
$$f_n(x_1, \ldots, x_n) = 0$$
or $\mathbf{f}(\mathbf{x}) = \mathbf{0}$. Let $J = \left[ \frac{\partial f_i}{\partial x_j} \right]$; then
$$\mathbf{x}_{n+1} = \mathbf{x}_n - J_n^{-1} \mathbf{f}(\mathbf{x}_n)$$
Root finding for polynomials - Systems of equations

- It is hard to find $J$ analytically, so finite differences are often used.
- It can be expensive to compute $J_n$ at every iteration, so sometimes just the initial value $J_0$ is used (see the sketch below).
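A Python sketch of the Newton update with a forward-difference Jacobian (the function name newton_system, the step h and the example system are illustrative choices, not from the slides):

```python
import numpy as np

def newton_system(f, x0, h=1e-7, tol=1e-10, max_iter=50):
    """Newton's method for f(x) = 0 with a forward-difference Jacobian."""
    x = np.asarray(x0, dtype=float)
    n = x.size
    for _ in range(max_iter):
        fx = f(x)
        J = np.empty((n, n))
        for j in range(n):               # column j approximates df/dx_j
            xh = x.copy()
            xh[j] += h
            J[:, j] = (f(xh) - fx) / h
        dx = np.linalg.solve(J, -fx)     # solve J dx = -f rather than forming J^{-1}
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# Example: intersection of the circle x^2 + y^2 = 4 with the line y = x.
f = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[1] - v[0]])
print(newton_system(f, [1.0, 0.5]))      # approx [sqrt(2), sqrt(2)]
```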
Steepest Descent for Linear Systems of Equations

Solving $Ax = b$ is equivalent to minimizing $\frac{1}{2} x^T A x - x^T b$ for a symmetric positive definite matrix $A$. The algorithm is:

choose $x_0$ and set $r_0 = Ax_0 - b$
FOR $n = 0, 1, 2, \ldots$
    $\alpha_n = \dfrac{r_n^T r_n}{r_n^T A r_n}$
    $x_{n+1} = x_n - \alpha_n r_n$
    $r_{n+1} = A x_{n+1} - b$
ENDFOR
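A direct Python transcription of the loop (the test matrix and tolerances are illustrative):

```python
import numpy as np

def steepest_descent(A, b, x0, tol=1e-10, max_iter=1000):
    """Minimize (1/2) x^T A x - x^T b for symmetric positive definite A."""
    x = np.asarray(x0, dtype=float)
    r = A @ x - b                        # residual; also the gradient
    for _ in range(max_iter):
        if np.linalg.norm(r) < tol:
            break
        alpha = (r @ r) / (r @ (A @ r))  # exact line search along -r
        x = x - alpha * r
        r = A @ x - b
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])   # symmetric positive definite
b = np.array([1.0, 2.0])
print(steepest_descent(A, b, np.zeros(2)))  # compare np.linalg.solve(A, b)
```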
Optimization

Find $x^*$ such that $f(x^*) \leq f(x)$ for all feasible $x$ (minimization), or find $x^*$ such that $f(x^*) \geq f(x)$ for all feasible $x$ (maximization).

- Constrained vs. unconstrained
- Continuous vs. discrete
- Differentiable vs. non-differentiable
Optimization: Hooke and Jeeves method

a) Choose a step size $h$.
b) Sequentially check each coordinate direction: if $f(x \pm h e_i) < f(x)$, update $x$.
c) After checking all coordinate directions, check whether $x$ was updated. If it was not: if the step size $h < \epsilon$, stop; otherwise decrease $h$ by a factor of 2.
d) If $x$ was updated: if the step size $h < \epsilon$, stop. Otherwise try the pattern move $2 x_{\mathrm{new}} - x_{\mathrm{old}}$: if $f(2 x_{\mathrm{new}} - x_{\mathrm{old}}) < f(x_{\mathrm{new}})$, start searching from $2 x_{\mathrm{new}} - x_{\mathrm{old}}$; if not, search from $x_{\mathrm{new}}$.
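A simplified Python sketch of these steps (the function name hooke_jeeves and the quadratic test function are illustrative; production code would cache function values rather than re-evaluating f):

```python
import numpy as np

def hooke_jeeves(f, x0, h=0.5, eps=1e-8):
    """Pattern search: exploratory moves along coordinate directions,
    then a pattern move 2*x_new - x_old when the exploration succeeds."""
    x = np.asarray(x0, dtype=float)
    while h >= eps:
        x_new = x.copy()
        for i in range(x.size):              # step b): exploratory search
            for step in (h, -h):
                trial = x_new.copy()
                trial[i] += step
                if f(trial) < f(x_new):
                    x_new = trial
                    break
        if f(x_new) < f(x):                  # step d): pattern move
            pattern = 2.0 * x_new - x
            x = pattern if f(pattern) < f(x_new) else x_new
        else:                                # step c): refine the step size
            h /= 2.0
    return x

f = lambda v: (v[0] - 1.0)**2 + (v[1] - 2.0)**2
print(hooke_jeeves(f, np.zeros(2)))          # approx [1, 2]
```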
Optimization: Nelder and Mead method

a) Generate a simplex of $n + 1$ points in $\mathbb{R}^n$.
b) Remove the vertex with the worst function value and replace it with a new point. Choose the point by reflecting, expanding or contracting the simplex along the line joining the worst vertex with the centroid of the remaining vertices. If this does not give a better value, keep the best vertex and shrink all other vertices towards the best one.
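Nelder-Mead is derivative-free, so it is a common first choice for non-smooth objectives. SciPy ships an implementation; a minimal usage example on the (illustrative) Rosenbrock test function:

```python
from scipy.optimize import minimize

# Rosenbrock function: a classic test problem with minimum f(1, 1) = 0
# at the bottom of a narrow curved valley.
rosen = lambda v: (1.0 - v[0])**2 + 100.0 * (v[1] - v[0]**2)**2

result = minimize(rosen, x0=[-1.2, 1.0], method="Nelder-Mead",
                  options={"xatol": 1e-8, "fatol": 1e-8})
print(result.x)  # approx [1, 1]
```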
Optimization: Descent methods

For a differentiable function, iterate
$$x_{k+1} = x_k + \alpha_k d_k$$
a) Newton's method: $d_k = -H^{-1}(x_k) \nabla f(x_k)$, where $H$ is the Hessian of $f$
b) Approximate Newton: $d_k = -B^{-1}(x_k) \nabla f(x_k)$, where $B$ approximates the Hessian of $f$
c) Steepest descent: $d_k = -\nabla f(x_k)$
d) Conjugate gradient: $d_k = -\nabla f(x_k) + \beta_k d_{k-1}$
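A Python sketch of case c), steepest descent with a simple backtracking (Armijo) line search to choose $\alpha_k$ (the test function and the constants 1.0 and 1e-4 are illustrative):

```python
import numpy as np

def gradient_descent(f, grad, x0, tol=1e-8, max_iter=500):
    """Steepest descent d_k = -grad f(x_k), with backtracking for alpha_k."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        d = -grad(x)
        if np.linalg.norm(d) < tol:
            break
        alpha = 1.0
        # Armijo condition: f(x + alpha d) <= f(x) - c * alpha * ||d||^2
        while f(x + alpha * d) > f(x) - 1e-4 * alpha * (d @ d):
            alpha /= 2.0
        x = x + alpha * d
    return x

f = lambda v: (v[0] - 1.0)**2 + 10.0 * (v[1] + 2.0)**2
grad = lambda v: np.array([2.0 * (v[0] - 1.0), 20.0 * (v[1] + 2.0)])
print(gradient_descent(f, grad, np.zeros(2)))  # approx [1, -2]
```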
Constrained Optimization

Given $f : \mathbb{R}^n \to \mathbb{R}$, maximize $f(x)$ subject to
$$g_i(x) \leq b_i, \quad i = 1, \ldots, l$$
$$g_i(x) \geq b_i, \quad i = l + 1, \ldots, k$$
$$g_i(x) = b_i, \quad i = k + 1, \ldots, m$$
$$x_{i-m} \geq 0, \quad i = m + 1, \ldots, n$$
Typically one defines a Lagrangian
$$L = f(x) + \sum_{i=1}^{m} \lambda_i \left[ b_i - g_i(x) \right] - \sum_{i=m+1}^{n} \lambda_i x_{i-m}$$
and uses the Kuhn-Tucker conditions to check for constrained extrema.
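A usage sketch: SciPy's SLSQP solver handles problems of this form (it is a sequential quadratic programming method built around the Kuhn-Tucker conditions). The toy problem below, maximizing $xy$ subject to $x + y \leq 4$ and $x, y \geq 0$, is illustrative:

```python
from scipy.optimize import minimize

# Maximize f(x, y) = x*y subject to x + y <= 4, x >= 0, y >= 0.
# SciPy minimizes, so we minimize -f; inequality constraints follow
# the convention fun(x) >= 0.
objective = lambda v: -(v[0] * v[1])
constraints = [{"type": "ineq", "fun": lambda v: 4.0 - v[0] - v[1]}]
bounds = [(0.0, None), (0.0, None)]

result = minimize(objective, x0=[1.0, 1.0], method="SLSQP",
                  bounds=bounds, constraints=constraints)
print(result.x)  # approx [2, 2], as the Lagrangian conditions predict
```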
References
- Krasny, Numerical Methods Lecture Notes, http://www.math.lsa.umich.edu/~krasny/math471.html
- Griffin, Numerical Optimization: Penn State Math 555 Lecture Notes, http://www.personal.psu.edu/cxg286/math555.pdf
- Chi Wei Cliburn Chan, Practical Optimization Routines, https://people.duke.edu/~ccc14/sta-663/BlackBoxOptimization.html
- Matott, Leung and Sim, Application of MATLAB and Python optimizers to two case studies involving groundwater flow and contaminant transport modeling, https://doi.org/10.1016/j.cageo.2011.03.017
References
- Quarteroni, Sacco and Saleri, Numerical Mathematics, chapters 1, 7, 9 and 10
- Boyd and Vandenberghe, Convex Optimization, Cambridge (2004)
- Heath, Scientific Computing: An Introductory Survey, McGraw Hill (2002)
- Greenbaum and Chartier, Numerical Methods, Princeton (2012)
- Greenbaum, Iterative Methods for Solving Linear Systems, SIAM (1997), http://dx.doi.org/10.1137/1.9781611970937
- Epperson, An Introduction to Numerical Methods and Analysis, Wiley (2007)