Review for Exam 2
Ben Wang and Mark Styczynski

This is a rough approximation of what we went over in the review session. In portions it is actually more detailed than what we covered. Also, please note that this has not been fully proofread, so it may have typos or other small errors, but it is a good guide to what you should know. Finally, be familiar with what is in your book, so that if there is something on the exam that you don't know well (e.g., LU decomposition from the last exam), you at least know where to look for it.

Chapter 4: IVPs

Objective: develop iterative rules for updating a trajectory, with the goal of making the numerical solution equal the integral of the exact solution.

Know how to convert higher-order differential equations into systems of coupled first-order differential equations.

What is quadrature? Numerical integration. It will involve integrating an interpolating polynomial, which is why polynomial interpolation is important.

Polynomial interpolation:
Polynomial interpolation is accomplished by defining N+1 support points x_0, ..., x_N at which the polynomial must equal the function. We define the polynomial

p(x) = a_0 + a_1 x + a_2 x^2 + ... + a_N x^N

such that for j = 0, 1, ..., N

p(x_j) = a_0 + a_1 x_j + a_2 x_j^2 + ... + a_N x_j^N = f(x_j)

You can solve this like way back in the day, as a linear system. But let's find a better way of doing this.

Lagrange interpolation:
For N support points, Lagrange interpolation is the sum of N Lagrange polynomials, each of degree N-1, constructed so that the sum matches the function exactly at each support point.

Now that we have a sum of polynomials, i.e., a single polynomial, we can integrate it numerically using the trapezoid rule, Simpson's rule, etc.

Integration: Newton-Cotes formulas (uniformly spaced points).
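A minimal sketch of these two pieces together in Matlab (the function and support points are made up for illustration): build the interpolating polynomial by solving the linear system, then integrate it numerically.

    % Fit a polynomial through N+1 support points by solving the
    % (Vandermonde) linear system, then integrate it numerically.
    f  = @(x) exp(-x).*sin(2*x);      % hypothetical function to interpolate
    xj = linspace(0, 2, 5)';          % N+1 = 5 support points
    A  = xj.^(0:4);                   % Vandermonde matrix: columns 1, x, x^2, ...
    a  = A \ f(xj);                   % coefficients a_0 ... a_N
    p  = @(x) (x(:).^(0:4))*a;        % the interpolating polynomial p(x)
    xf = linspace(0, 2, 401)';        % fine grid for the composite trapezoid rule
    [trapz(xf, p(xf)), quad(f, 0, 2)] % integral of p vs. adaptive quadrature of f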

Different methods: trapezoid rule, Simpson's rule, with their errors:

∫_a^b f(x) dx ≈ ((b-a)/2) [f_0 + f_1], error O((b-a)^3)

∫_a^b f(x) dx ≈ ((b-a)/6) [f_0 + 4 f_1 + f_2], error O((b-a)^5)

How to get more accurate? Break the interval into smaller segments before adding too many more points.

How to integrate in two dimensions? Make a rectangle containing all points, and multiply the function you are integrating by an indicator function that says whether you are in the area to be integrated (indicator = 1) or not (indicator = 0).

This all relates to time integrals.

Linear ODE systems and dynamic stability:
Begin with a known ODE problem, x_dot = A x. You can expand the (matrix) exponential solution to analyze the results. Looking at a linear system gives rise to a discussion of the stability and behavior of the solution.

Nonlinear ODE systems:
Same type of analysis, but now we use a Taylor expansion, invoking the Jacobian.

Time-marching ODE-IVP solvers:

Explicit methods: generate the new state by taking a time step based on the function value at the old time step.

Explicit Euler method: uses a Taylor approximation, and gives first-order error globally (second-order local error, but O(1/Δt) time steps, so O(Δt) global error):

x[k+1] = x[k] + Δt f(x[k])

Runge-Kutta methods: still explicit, but use midpoints. Second-order RK (midpoint method):

k1 = Δt f(x[k])
k2 = Δt f(x[k] + k1/2)
x[k+1] = x[k] + k2 + O(Δt^3)

where the global error is O(Δt^2). You can get higher-order ones; 4th-order RK is the basis of ode45. With 4th-order RK you just have to know the value of the function at the current state and a time step, and you can generate the new state.

How does ode45 get an estimate of the error? From its function evaluations it forms both a 5th-order-accurate estimate and a 4th-order-accurate estimate, then compares the two to see how accurate it is; from that it knows whether to make Δt smaller or bigger.
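A minimal sketch comparing the two explicit methods above on a made-up test problem (x' = -x, x(0) = 1, marched to t = 1, where the exact answer is exp(-1)):

    % Explicit Euler vs. second-order (midpoint) Runge-Kutta on x' = -x
    f  = @(x) -x;                 % right-hand side of the test ODE
    dt = 0.01;
    xE = 1;  xRK = 1;             % initial conditions for both methods
    for k = 1:round(1/dt)
        xE  = xE + dt*f(xE);      % Euler: O(dt) global error
        k1  = dt*f(xRK);
        k2  = dt*f(xRK + k1/2);   % midpoint evaluation
        xRK = xRK + k2;           % midpoint RK: O(dt^2) global error
    end
    [abs(xE - exp(-1)), abs(xRK - exp(-1))]   % the RK error is far smaller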

Global vs. local error.

Implicit methods: the update rule depends on the value of x at the new time point, not just on values in the immediate past. Use of interpolation to approximate an update.

Why is it difficult to use x[k+1] to get your derivative? Because the update equation becomes nonlinear in x[k+1], which requires a difficult (iterative) solution.

Is this more accurate than explicit in all cases? No: explicit Euler and implicit Euler have the same order of accuracy.

Are these methods used? Yes, quite frequently. Why? Because they are stable at larger time steps; for stiff equations they won't blow up.

Basic implicit Euler:

x[k+1] = x[k] + Δt f(x[k+1])

Others, like Crank-Nicolson:

x[k+1] = x[k] + (Δt/2) [f(x[k]) + f(x[k+1])]

Predictor/corrector method (a sketch appears after this section):
- Use an explicit method to generate an initial guess.
- Then use Newton's method to iteratively find the real solution to the implicit equation.
Ought the predictor/corrector method to converge well for Newton? Yes, because the guess should be relatively near the correct value.

DAE systems: What are they?

y_dot = F(y, z)
0 = G(y, z)

where y are the variables appearing in ODEs and z are the other (algebraic) variables.

Mass matrix: indicates where the differential parts are. Is the mass matrix singular? If it's a DAE, yes. Otherwise it's just an ODE system.

What conditions do we need to solve this using an ODE solver? The DAE system must be of index one: the determinant of the Jacobian of the nonlinear equalities with respect to z must be nonzero. Differentiating the algebraic equations in time leaves us with

(∂G/∂y) y_dot + (∂G/∂z) z_dot = 0

which we can solve for z_dot when ∂G/∂z is nonsingular.

Stability of a system's steady states (see the sketch below):
- Take the Jacobian.
- Evaluate it at the steady state.
- Find the eigenvalues.
- If the real parts of all eigenvalues are < 0: stable.
- If all are <= 0: quasi-stable (indifferent to perturbations in some direction).
- If any is > 0: unstable.

Don't be confused by the stability of integrators vs. the stability of a system's steady state. The integrator creates a system based on its solution method, and we evaluate that system's stability.
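A minimal sketch of that steady-state stability check, for a hypothetical two-variable system (the Jacobian and steady state below are made up):

    % Stability of a steady state: eigenvalues of the Jacobian at x_ss
    J   = @(x) [-2 + x(2), x(1); -1, -3];  % hypothetical Jacobian, J(i,j) = dF_i/dx_j
    xss = [0; 0];                          % hypothetical steady state, F(xss) = 0
    lam = eig(J(xss));                     % eigenvalues at the steady state
    if all(real(lam) < 0)
        disp('stable')
    elseif any(real(lam) > 0)
        disp('unstable')
    else
        disp('quasi-stable: some Re(lambda) = 0')
    end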

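And here is the promised predictor/corrector sketch: implicit Euler with an explicit-Euler predictor and a Newton corrector, on a made-up scalar ODE x' = -x^3 (so f'(x) = -3x^2 is available analytically):

    % Implicit Euler: solve g(xnew) = xnew - x - dt*f(xnew) = 0 at each step
    f  = @(x) -x.^3;        % right-hand side
    fp = @(x) -3*x.^2;      % its derivative, needed for Newton
    dt = 0.1;  x = 1;
    for k = 1:50
        xnew = x + dt*f(x);                 % predictor: explicit Euler guess
        for it = 1:10                       % corrector: Newton iteration
            g    = xnew - x - dt*f(xnew);
            dg   = 1 - dt*fp(xnew);
            xnew = xnew - g/dg;
            if abs(g) < 1e-12, break, end
        end
        x = xnew;
    end
    x   % decays toward 0, the stable steady state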
Which ODE integration routines are more stable: explicit RK or implicit Euler? Which will have less error in a non-stiff system: explicit RK or implicit Euler?

Stiff/not-stiff systems:
- Look at the eigenvalues of the Jacobian.
- Know the definition, and the conceptual picture (widely different time scales).
- Condition number (the ratio of the largest to smallest eigenvalue magnitudes): an indication of the number of steps needed to integrate properly.

When to use ode15s vs. ode45? Condition number >> 1 or not.
When will you know things will be stiff/not stiff? A single ODE equation: not stiff. Discretized PDEs: stiff.
What to do if not sure? Try ode45; if it fails, use ode15s.
Ways to speed things up in Matlab? Return the Jacobian of the system of ODEs at the supplied point.

Chapter 5: Optimization

Use the gradient of the cost function to find the downhill direction you may want to look in. Use the Hessian (the Jacobian of the gradient) to figure out how close to that gradient direction you want to look, and exactly how far you would want to go if local conditions stayed constant globally. This is (essentially) using Newton's method in multiple dimensions to find the optimum; note the analogy to solving nonlinear algebraic systems.

Modifying the Hessian matrix, H -> B: add to the main diagonal to make sure the matrix is positive definite (an application of Gershgorin's theorem). What does this do for us? It ensures that we are at least going in a descent direction.

Then we figure out just how far in the final direction we want to go: halve the step length (alpha -> alpha/2) until we get a decrease in the function value.

Gradient methods: Newton line-search methods to find better updates for the search direction p and the step lengths (see the first sketch at the end of this section).

What about constraints? Use the method of Lagrange multipliers: require not that we find a local minimum of the unconstrained cost, but that we find the place where the gradient of the cost function is parallel to the gradient of the constraint function.
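A minimal sketch of one such damped Newton step, assuming a made-up cost function with analytical gradient and Hessian (the diagonal shift here is sized from the smallest eigenvalue; Gershgorin's circles give a cheaper bound):

    % Damped (modified) Newton iteration with a backtracking line search
    fcost = @(x) x(1)^4 + x(1)*x(2) + (1 + x(2))^2;      % hypothetical cost
    grad  = @(x) [4*x(1)^3 + x(2); x(1) + 2*(1 + x(2))];
    hess  = @(x) [12*x(1)^2, 1; 1, 2];
    x = [0.75; -1.25];
    for it = 1:20
        H   = hess(x);
        tau = max(0, 1e-3 - min(eig(H)));  % shift the diagonal until positive definite
        p   = -(H + tau*eye(2)) \ grad(x); % modified Newton search direction
        alpha = 1;
        while fcost(x + alpha*p) >= fcost(x)   % halve alpha until the cost decreases
            alpha = alpha/2;
            if alpha < 1e-12, break, end
        end
        x = x + alpha*p;
    end
    x   % approximate local minimizer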

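For constrained problems in practice, Matlab's fmincon does this kind of work for you. A minimal sketch on a made-up problem, minimizing x1^2 + x2^2 subject to x1 + x2 = 1 (at the answer, [0.5; 0.5], the cost gradient is parallel to the constraint gradient, which is exactly the Lagrange condition above):

    % Equality-constrained minimization with fmincon
    fcost = @(x) x(1)^2 + x(2)^2;
    Aeq = [1, 1];  beq = 1;          % linear equality constraint x1 + x2 = 1
    x0  = [0; 0];                    % starting guess
    x   = fmincon(fcost, x0, [], [], Aeq, beq)   % returns [0.5; 0.5]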
Why not just use a penalty function? As the penalty weight gets very big, the penalty becomes even more important than the actual minimization, so it becomes a problem that is numerically difficult to solve. The Lagrangian method instead converts the problem into a sequence of unconstrained minimizations, or relaxations.

Inequality constraints: use slack variables to turn them into equality constraints.

For a minimum, the Hessian should be positive definite.

Lagrangian methods:
- Penalty method, Lagrange multipliers
- Equality constraints
- Inequality constraints

Chapter 6: PDE-BVPs

Finite difference method: for simple problems this gives us a linear system of equations, but in 2-D or 3-D, fill-in makes Gaussian elimination an unacceptable option.

For symmetric matrices, you can turn the linear solve into a minimization problem. If you don't use the Hessian, just the identity matrix, you get the method of steepest descent. But that may be slow to converge at times, so you can add a term that keeps the search from going directly downhill but makes sure it doesn't double back on itself: the conjugate gradient method. In N dimensions it solves the system in N iterations or less, and it relies on the fact that when Ax = b, F(x) = (1/2) x^T A x - b^T x is minimized. If the matrix is not symmetric, you can do a similar but slightly more general thing: the generalized minimum residual method (GMRES). A sketch of a finite-difference system solved by conjugate gradients follows at the end of this section.

10.50-ish stuff:
- Dirichlet BCs (e.g., C_A = constant)
- Neumann BCs (e.g., dC_A/dx = constant)

Uneven gridpoint spacing:
- How to do it (Lagrange interpolation)
- Why to do it (nonlinear action, or action in a small portion of the domain)

Danckwerts BCs:

v_z [C_j(0) - C_j0] - D (dC_j/dz)|_(z=0) = 0

What it does: allows for both Neumann and Dirichlet boundary conditions to be possible and/or balanced.

Outlet at the end of the reactor: what BC? If you know nothing better, assume the reaction is done (dC_A/dz = 0).

Convection:
- Central difference scheme: better for accuracy
- Upwind difference scheme: better for stability

Variable stacking: put things that will be interacting near each other in the matrix.
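Here is that sketch: a 1-D finite-difference discretization of -d2C/dx2 = 1 with Dirichlet BCs C(0) = C(1) = 0, solved with Matlab's conjugate-gradient routine (the grid size and source term are made up; the exact solution is C(x) = x(1-x)/2):

    % Tridiagonal finite-difference matrix for -d2C/dx2 on a uniform grid
    N = 99;  h = 1/(N+1);  x = h*(1:N)';
    e = ones(N,1);
    A = spdiags([-e, 2*e, -e], -1:1, N, N) / h^2;  % symmetric positive definite
    b = ones(N,1);                                 % constant source term
    C = pcg(A, b, 1e-10, 500);                     % conjugate gradient solve
    max(abs(C - x.*(1 - x)/2))   % small: central differences are exact for this problem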

Things to note:
- How to reduce the order of a differential equation so that you can solve a system of first-order ODEs
- Euler's formula
- How to calculate a Jacobian
- How to calculate eigenvalues
- Keep in mind Gershgorin's theorem
- Symmetric matrices
- Positive definite or negative definite properties of matrices

Useful Matlab functions:
** interp1
** trapz
** quad
** ode45
** ode15s
** fmincon
** fminunc
** optimset('JacobPattern', matrix)
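A minimal sketch of several of these calls on a standard made-up stiff problem, the van der Pol oscillator with mu = 1000 (here the analytical Jacobian is passed through odeset's documented 'Jacobian' option, which spares ode15s from finite-difference Jacobians, the same idea as the JacobPattern option for the optimizers):

    % Stiff van der Pol oscillator: x1' = x2, x2' = mu*(1 - x1^2)*x2 - x1
    mu = 1000;
    f  = @(t,x) [x(2); mu*(1 - x(1)^2)*x(2) - x(1)];
    J  = @(t,x) [0, 1; -2*mu*x(1)*x(2) - 1, mu*(1 - x(1)^2)];  % analytical Jacobian
    opts   = odeset('Jacobian', J);
    [t, x] = ode15s(f, [0 3000], [2; 0], opts);   % stiff solver
    q  = trapz(t, x(:,1));              % trapezoid-rule integral of x1(t)
    xi = interp1(t, x(:,1), 1500);      % interpolate the stored solution at t = 1500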