Introduction to Scientific Computing


Benson Muite, benson.muite@ut.ee
http://kodu.ut.ee/~benson
https://courses.cs.ut.ee/2018/isc/spring
26 March 2018

Course Aims
- General introduction to numerical and computational mathematics
- Review programming methods
- Learn about some numerical algorithms
- Understand how these methods are used in real-world situations

Course Overview
Lectures: Monday, J. Liivi 2-207, 10.15-11.45, Benson Muite (benson dot muite at ut dot ee)
Practical: Monday, J. Liivi 2-205, 12.15-13.45, Benson Muite (benson dot muite at ut dot ee)
Homework is typically due once a week until the project starts; you are expected to begin it in the labs.
The exam/final project presentation will be scheduled at the end of the course.
Grading: Homework 50%, Exam 30%, Active participation 10%, Tests 10%
Course Texts:
- Solomon, Numerical Algorithms: Methods for Computer Vision, Machine Learning, and Graphics
- Pitt-Francis and Whiteley, Guide to Scientific Computing in C++

Lecture Topics
1) 12 February: Graphing, differentiation and integration in one dimension
2) 19 February: Programming, recursion, arithmetic operations
3) 26 February: Linear algebra
4) 5 March: Floating point numbers, errors and ordinary differential equations
5) 12 March: Image analysis using statistics
6) 19 March: Image analysis using differential equations (lecture by Gul Wali Shah)
7) 26 March: Case study: application of machine learning to analyse literary corpora
8) 2 April: Case study: DNA simulation using molecular dynamics

Lab Topics
1) 12 February: Mathematical functions, differentiation and integration, plotting. Reading: https://doi.org/10.1080/10586458.2017.1279092
2) 19 February: Sequences, series summation, convergence and divergence, Fibonacci numbers and the Collatz conjecture or similar experimental mathematics
3) 26 February: Monte Carlo integration, introduction to parallel computing, matrix multiplication, LU decomposition
4) 5 March: Finite difference method, error analysis, interval analysis, image segmentation by matrix differences. Reading: https://doi.org/10.1080/10586458.2016.1270858

Lab Topics
5) 12 March: Eigenvalue computations, singular value decomposition, use of matrix operations in statistics (eigenfaces)
6) 19 March: Fixed point iteration, image segmentation: solve a partial differential equation from a finite difference discretization with implicit timestepping (Mumford-Shah model), compare iterative and direct solvers
7) 26 March: Optimization algorithms, deep learning. Reading: https://arxiv.org/abs/1801.05894
8) 2 April: Molecular dynamics simulation using Gromacs

Reading
1) 12 February: Solomon chapter 1 and https://doi.org/10.1080/10586458.2017.1279092
2) 19 February: Solomon chapters 2 and 3
3) 26 February: Monte Carlo integration, introduction to parallel computing; Solomon chapters 4 and 5
4) 5 March: Solomon chapters 14 and 15, interval analysis; Differential Equations and Exact Solutions in the Moving Sofa Problem, https://doi.org/10.1080/10586458.2016.1270858

Reading
5) 12 March: Matrix multiplication, eigenvalue computations, singular value decomposition, use of matrix operations in statistics; Solomon chapters 6 and 7
6) 19 March: Solomon chapters 11, 13 and 16
7) 26 March: Word count, clustering algorithms, optimization algorithms, deep learning; https://arxiv.org/abs/1801.05894; Solomon chapters 8, 9 and 12
8) 2 April: Molecular dynamics simulation, http://www.bevanlab.biochem.vt.edu/pages/personal/justin/gmxtutorials/lysozyme/index.html

Root finding for polynomials - Fixed point iteration
Theorem: Let $s$ be a solution of $x = g(x)$ and suppose $g$ has a continuous derivative in some interval $I$ containing $s$. Then if $|g'(x)| \le K < 1$ in $I$, the iteration $x_{n+1} = g(x_n)$ converges for any initial value $x_0$ in $I$. If $g'(s) \ne 0$ the convergence rate is linear, and if $g'(s) = 0$ with $g''(s) \ne 0$ the convergence rate is quadratic.
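To make the iteration concrete, here is a minimal Python sketch of fixed point iteration; the function g, the starting value, and the tolerance are illustrative choices, not part of the original slides.

```python
import math

def fixed_point(g, x0, tol=1e-10, max_iter=100):
    """Iterate x_{n+1} = g(x_n) until successive values agree to tol."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: solve x = cos(x); the fixed point is approximately 0.739085
print(fixed_point(math.cos, 1.0))
```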

Root finding for polynomials - Newton-Raphson iteration
Define $f(x) = x - g(x)$ and iterate
$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}$$
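A minimal Python sketch of the Newton-Raphson iteration; the example function, its derivative, and the starting point are assumptions made for illustration.

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton-Raphson: x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Example: root of f(x) = x^2 - 2, i.e. sqrt(2)
print(newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0))
```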

Root finding for polynomials - Secant method
Approximate the derivative by a finite difference:
$$x_{n+1} = x_n - \frac{f(x_n)\,(x_n - x_{n-1})}{f(x_n) - f(x_{n-1})}$$
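A short Python sketch of the secant iteration; the two starting guesses and the test function are illustrative assumptions.

```python
def secant(f, x0, x1, tol=1e-12, max_iter=50):
    """Secant method: replace f'(x_n) in Newton's method by a finite difference."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, x1 = x1, x2
    return x1

# Example: root of f(x) = x^3 - x - 2 near x = 1.5
print(secant(lambda x: x**3 - x - 2, 1.0, 2.0))
```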

Root finding for polynomials - Systems of equations
$$f_1(x_1, \dots, x_n) = 0$$
$$f_2(x_1, \dots, x_n) = 0$$
$$\vdots$$
$$f_n(x_1, \dots, x_n) = 0$$
or, in vector form, $\mathbf{f}(\mathbf{x}) = \mathbf{0}$. Let $J = \left[\dfrac{\partial f_i}{\partial x_j}\right]$ be the Jacobian. Then Newton's iteration is
$$\mathbf{x}_{n+1} = \mathbf{x}_n - J_n^{-1}\,\mathbf{f}(\mathbf{x}_n)$$

Root finding for polynomials - Systems of equations
It can be hard to find $J$ analytically, so finite differences are often used instead.
It can also be expensive to recompute $J_n$ at every iteration, so sometimes just the initial value $J_0$ is reused.
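A minimal Python/NumPy sketch of Newton's method for a system, using a finite-difference Jacobian as described above; the test system and the step size h are illustrative assumptions.

```python
import numpy as np

def fd_jacobian(f, x, h=1e-7):
    """Approximate the Jacobian J_ij = df_i/dx_j by forward differences."""
    n = len(x)
    fx = f(x)
    J = np.empty((n, n))
    for j in range(n):
        xh = x.copy()
        xh[j] += h
        J[:, j] = (f(xh) - fx) / h
    return J

def newton_system(f, x0, tol=1e-10, max_iter=50):
    """Newton iteration x_{n+1} = x_n - J_n^{-1} f(x_n), solving rather than inverting."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        step = np.linalg.solve(fd_jacobian(f, x), f(x))
        x -= step
        if np.linalg.norm(step) < tol:
            break
    return x

# Example: intersect the circle x^2 + y^2 = 1 with the line y = x
f = lambda v: np.array([v[0]**2 + v[1]**2 - 1, v[1] - v[0]])
print(newton_system(f, [1.0, 0.0]))
```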

Steepest Descent for Linear Systems of Equations
For a symmetric positive definite matrix $A$, solving $Ax = b$ is equivalent to minimizing $\frac{1}{2}x^T A x - x^T b$. The algorithm is:
Choose $x_0$ and set $r_0 = Ax_0 - b$
FOR $n = 0, 1, 2, \dots$
  $\alpha_n = \dfrac{r_n^T r_n}{r_n^T A r_n}$
  $x_{n+1} = x_n - \alpha_n r_n$
  $r_{n+1} = Ax_{n+1} - b$
ENDFOR
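A small NumPy sketch of this steepest descent iteration; the example matrix, right-hand side, and tolerance are illustrative assumptions.

```python
import numpy as np

def steepest_descent(A, b, x0, tol=1e-10, max_iter=1000):
    """Minimize 0.5*x^T A x - x^T b for symmetric positive definite A."""
    x = np.asarray(x0, dtype=float)
    r = A @ x - b                      # gradient of the quadratic at x
    for _ in range(max_iter):
        if np.linalg.norm(r) < tol:
            break
        alpha = (r @ r) / (r @ (A @ r))
        x = x - alpha * r
        r = A @ x - b
    return x

# Example: a small SPD system; compare with np.linalg.solve(A, b)
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(steepest_descent(A, b, np.zeros(2)))
```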

Optimization
Find $x^*$ such that $f(x^*) \le f(x)$ for all feasible $x$ (minimization), or find $x^*$ such that $f(x^*) \ge f(x)$ for all feasible $x$ (maximization).
Constrained vs. unconstrained
Continuous vs. discrete
Differentiable vs. non-differentiable

Optimization: Hooke and Jeeves method
a) Choose a step size $h$.
b) Sequentially check whether $f(x \pm h e_i) < f(x)$ for each coordinate direction $e_i$, and update $x$ if so.
c) After checking all coordinate directions, check whether $x$ was updated. If it was not: stop if the step size $h < \epsilon$, otherwise decrease $h$ by a factor of 2 and repeat.
d) If $x$ was updated: stop if the step size $h < \epsilon$. Otherwise make a pattern move, searching from $2x_{\mathrm{new}} - x_{\mathrm{old}}$ if $f(2x_{\mathrm{new}} - x_{\mathrm{old}}) < f(x_{\mathrm{new}})$, and from $x_{\mathrm{new}}$ if not.
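A compact Python sketch of this coordinate-search-with-pattern-move idea, following the steps above; the test function, starting point, step size, and tolerance are illustrative assumptions.

```python
import numpy as np

def hooke_jeeves(f, x0, h=0.5, eps=1e-6, max_iter=10000):
    """Hooke-Jeeves: exploratory moves along coordinates, then a pattern move."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_old = x.copy()
        # Exploratory search along each coordinate direction
        for i in range(len(x)):
            for step in (h, -h):
                trial = x.copy()
                trial[i] += step
                if f(trial) < f(x):
                    x = trial
                    break
        if np.allclose(x, x_old):
            if h < eps:              # no improvement and step already small: stop
                break
            h /= 2                   # otherwise shrink the step size
        else:
            if h < eps:
                break
            pattern = 2 * x - x_old  # pattern move through the new point
            if f(pattern) < f(x):
                x = pattern
    return x

# Example: minimize a simple quadratic with minimum at (1, -2)
quad = lambda v: (v[0] - 1)**2 + (v[1] + 2)**2
print(hooke_jeeves(quad, [0.0, 0.0]))
```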

Optimization: Nelder and Mead method
a) Generate a simplex of $n + 1$ points in $\mathbb{R}^n$.
b) Remove the vertex with the worst function value and replace it with a new point, chosen by reflecting, expanding or contracting the simplex along the line joining the worst vertex to the centroid of the remaining vertices. If this does not give a better value, keep the best vertex and shrink all other vertices towards it.
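In practice one rarely hand-codes this; here is a short Python example using SciPy's implementation of the Nelder-Mead method, where the objective function and starting point are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Example objective: the Rosenbrock function, with minimum at (1, 1)
def rosen(v):
    return (1 - v[0])**2 + 100 * (v[1] - v[0]**2)**2

result = minimize(rosen, x0=np.array([-1.2, 1.0]), method='Nelder-Mead',
                  options={'xatol': 1e-8, 'fatol': 1e-8})
print(result.x, result.fun)
```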

Optimization: Descent methods
For a differentiable function, iterate $x_{k+1} = x_k + \alpha_k d_k$ with
a) Newton's method: $d_k = -H^{-1}(x_k)\,\nabla f(x_k)$, where $H$ is the Hessian of $f$
b) Approximate Newton: $d_k = -B^{-1}(x_k)\,\nabla f(x_k)$, where $B$ approximates the Hessian of $f$
c) Steepest descent: $d_k = -\nabla f(x_k)$
d) Conjugate gradient: $d_k = -\nabla f(x_k) + \beta_k d_{k-1}$
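A minimal sketch of the steepest descent variant with a fixed step size; the gradient function, step size, and test problem are illustrative assumptions, and in practice $\alpha_k$ is usually chosen by a line search.

```python
import numpy as np

def gradient_descent(grad_f, x0, alpha=0.1, tol=1e-8, max_iter=10000):
    """Steepest descent x_{k+1} = x_k - alpha * grad f(x_k) with a fixed step size."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        d = -grad_f(x)
        if np.linalg.norm(d) < tol:
            break
        x = x + alpha * d
    return x

# Example: f(x, y) = (x - 3)^2 + 2*(y + 1)^2, with the gradient given analytically
grad = lambda v: np.array([2 * (v[0] - 3), 4 * (v[1] + 1)])
print(gradient_descent(grad, [0.0, 0.0]))   # approaches (3, -1)
```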

Constrained Optimization
Given $f : \mathbb{R}^n \to \mathbb{R}$, maximize $f(x)$ subject to
$$g_i(x) \le b_i, \quad i = 1, \dots, l$$
$$g_i(x) \ge b_i, \quad i = l+1, \dots, k$$
$$g_i(x) = b_i, \quad i = k+1, \dots, m$$
$$x_{i-m} \ge 0, \quad i = m+1, \dots, n$$
Typically one defines a Lagrangian
$$L = f(x) + \sum_{i=1}^{m} \lambda_i \left[ b_i - g_i(x) \right] - \sum_{i=m+1}^{n} \lambda_i x_{i-m}$$
and uses the Kuhn-Tucker conditions to check for constrained extrema.
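As a practical illustration (not the slide's notation), here is a small SciPy example solving a constrained maximization problem with an inequality constraint and nonnegativity bounds; the specific objective and constraint are assumptions chosen for demonstration.

```python
import numpy as np
from scipy.optimize import minimize

# Maximize f(x, y) = x*y subject to x + y <= 4 and x, y >= 0.
# SciPy minimizes, so we minimize -f; 'ineq' constraints require fun(x) >= 0.
objective = lambda v: -(v[0] * v[1])
constraints = [{'type': 'ineq', 'fun': lambda v: 4 - v[0] - v[1]}]
bounds = [(0, None), (0, None)]

result = minimize(objective, x0=np.array([1.0, 1.0]), method='SLSQP',
                  bounds=bounds, constraints=constraints)
print(result.x, -result.fun)   # expect x = y = 2 with f = 4
```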

References
- Krasny, Numerical Methods lecture notes, http://www.math.lsa.umich.edu/~krasny/math471.html
- Griffin, Numerical Optimization: Penn State Math 555 Lecture Notes, http://www.personal.psu.edu/cxg286/math555.pdf
- Chi Wei Cliburn Chan, Practical Optimization Routines, https://people.duke.edu/~ccc14/sta-663/BlackBoxOptimization.html
- Matott, Leung and Sim, Application of MATLAB and Python optimizers to two case studies involving groundwater flow and contaminant transport modeling, https://doi.org/10.1016/j.cageo.2011.03.017

References
- Quarteroni, Sacco and Saleri, chapters 1, 7, 9 and 10
- Boyd and Vandenberghe, Convex Optimization, Cambridge (2004)
- Heath, Scientific Computing: An Introductory Survey, McGraw Hill (2002)
- Greenbaum and Chartier, Numerical Methods, Princeton (2012)
- Greenbaum, Iterative Methods for Solving Linear Systems, SIAM (1997), http://dx.doi.org/10.1137/1.9781611970937
- Epperson, An Introduction to Numerical Methods and Analysis, Wiley (2007)