Exercise set 7: programming exercises


May 13

1 Part 1(d)

We start by defining some functions that will be useful: the functions defining the system, and the Hamiltonian and angular momentum.

In [1]:
%matplotlib inline
import matplotlib
import numpy as np
import numpy.linalg as la
import matplotlib.pyplot as plt

# In the following, think of p and x as NumPy arrays of length 2.
def f1(p,x):
    return p

def f2(p,x):
    r = la.norm(x, ord=2)
    return -x/r**3

def H(p,x):
    pn = la.norm(p, ord=2)
    r = la.norm(x, ord=2)
    return 0.5*pn**2 - 1./r

def L(p,x):
    return np.cross(x,p)

We then implement a function for one time step of the symplectic Euler method, and a function for doing a full simulation; the scheme is summarized below for reference.
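For reference, the system encoded by f1, f2, H and L above is the two-body (Kepler) problem in normalized units (gravitational parameter scaled to 1):

$$\dot{x} = f_1(p,x) = p, \qquad \dot{p} = f_2(p,x) = -\frac{x}{\|x\|^3},$$

with Hamiltonian and (scalar, two-dimensional) angular momentum

$$H(p,x) = \tfrac{1}{2}\|p\|^2 - \frac{1}{\|x\|}, \qquad L(p,x) = x_1 p_2 - x_2 p_1.$$

One step of the symplectic Euler method with step size $h$ then reads

$$p_{n+1} = p_n + h\, f_2(p_n, x_n), \qquad x_{n+1} = x_n + h\, f_1(p_{n+1}, x_n) = x_n + h\, p_{n+1},$$

which is what the function sympeuler below implements.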

In [2]:
# Symplectic Euler:
def sympeuler(pn, xn, h):
    p_next = pn + h * f2(pn,xn)
    # Notice that we use the updated value p_next when updating x
    x_next = xn + h * f1(p_next,xn)
    return p_next, x_next

def simulate_2bp(x0, p0, T, N):
    # Initialize results:
    x = np.zeros( (N,2), dtype=float)
    x[0] = x0
    p = np.zeros( (N,2), dtype=float)
    p[0] = p0
    # Set h:
    h = T/N
    # Do time steps:
    for i in range(N-1):
        p[i+1], x[i+1] = sympeuler(p[i], x[i], h)
    return p, x

We define similar functions for the forward Euler method:

In [3]:
def fwdeuler(pn, xn, h):
    p_next = pn + h*f2(pn,xn)
    x_next = xn + h*f1(pn, xn)
    return p_next, x_next

def simulate_2bp_fe(x0, p0, T, N):
    # Initialize results:
    x = np.zeros( (N,2), dtype=float)
    x[0] = x0
    p = np.zeros( (N,2), dtype=float)
    p[0] = p0
    # Set h:
    h = T/N
    # Do time steps:
    for i in range(N-1):
        p[i+1], x[i+1] = fwdeuler(p[i], x[i], h)
    return p, x

Finally, we set the parameter values and do the simulations:

In [4]:
# Set some parameters:
T = 200.
N = 4000
e = 0.5
# Initial conditions for a Kepler orbit of eccentricity e, starting at perihelion:
x0 = np.array( [1-e, 0.] )
p0 = np.array( [0., np.sqrt((1+e)/(1-e))] )

# Do the simulation:
p, x = simulate_2bp(x0, p0, T, N)
pe, xe = simulate_2bp_fe(x0, p0, T, 100*N)

The post-processing and plotting is fairly straightforward:

In [5]:
# Calculate H and L:
Eh = np.zeros(N, dtype=float)
Lh = np.zeros(N, dtype=float)
for i in range(N):
    Lh[i] = L(p[i], x[i])
    Eh[i] = H(p[i], x[i])

t = np.linspace(0., T, N, endpoint=False)

# Initialize figure:
fig = plt.figure()

# Plot symplectic Euler solution:
ax = fig.add_subplot(2,2,1)
ax.set_title("Symplectic Euler")
# I set a small linewidth to keep the plot from getting too cluttered.
# (the 'k-' argument just says that the plotter should draw a black line)
ax.plot(x[:,0], x[:,1], 'k-', linewidth=0.2)

# Plot forward Euler:
ax = fig.add_subplot(2,2,2)
ax.set_title("Forward Euler")
ax.plot(xe[:,0], xe[:,1], 'k-', linewidth=0.2)

# Plot numerical energy:
ax = fig.add_subplot(2,2,3)
ax.set_title("$H(p,x)$")
ax.plot(t, Eh)

# Plot angular momentum:
ax = fig.add_subplot(2,2,4)
ax.set_title("$L(p,x)$")
ax.plot(t, Lh)

# Show plot:
plt.show()
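As a complement to the plots, the energy drift and the angular-momentum error along the symplectic Euler solution can also be printed directly; a minimal sketch reusing the arrays Eh and Lh computed above:

# Maximum deviation of the numerical energy and angular momentum from their
# initial values (bounded but nonzero for H, close to machine precision for L):
print("max |H - H_0| =", np.max(np.abs(Eh - Eh[0])))
print("max |L - L_0| =", np.max(np.abs(Lh - Lh[0])))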

From these plots we see that the symplectic Euler method is much better than the forward Euler method at keeping the numerical solution in orbit, but there is some precession going on. As we already saw in the mandatory assignment, the symplectic Euler method keeps the numerical energy bounded (although not strictly conserved). Finally, the angular momentum is exactly conserved, as predicted.

2 Part 2 (a)-(b)

In this part we are to solve some nonlinear ODEs numerically using backward Euler's method. In these notes, we'll solve the set of nonlinear equations at each time step using Newton's method, since I hope this approach will be the most instructive.

Recall that with Newton's method, we aim to find solutions of the equation $g(x) = 0$, where $x \in \mathbb{R}^d$ and $g : \mathbb{R}^d \to \mathbb{R}^d$. Given some iterate $x_i$, we get the next iterate by solving the linearized equation

$$g(x_i) + \nabla g(x_i)(x - x_i) = 0.$$

Solving for $x$, we get the next iterate as

$$x_{i+1} = x_i - \bigl(\nabla g(x_i)\bigr)^{-1} g(x_i).$$

In backward Euler, with the previous time step $y_n \in \mathbb{R}^d$ given, the next time step $y_{n+1}$ is given by

$$y_{n+1} = y_n + h\, f(y_{n+1}),$$

which we may rewrite as

$$y_{n+1} - y_n - h\, f(y_{n+1}) = 0.$$

So we set $g(x) = x - y_n - h\, f(x)$ in Newton's method as written above. Note then that $\nabla g(x) = I - h\, \nabla f(x)$. Taking one step of forward Euler as the initial guess, the iterative method reads

$$y_{n+1}^{(0)} = y_n + h\, f(y_n),$$

$$y_{n+1}^{(i+1)} = y_{n+1}^{(i)} - \bigl(I - h\, \nabla f(y_{n+1}^{(i)})\bigr)^{-1} \bigl(y_{n+1}^{(i)} - y_n - h\, f(y_{n+1}^{(i)})\bigr),$$

which we stop when the relative error satisfies

$$\frac{\bigl\|y_{n+1}^{(i+1)} - y_{n+1}^{(i)}\bigr\|}{\bigl\|y_{n+1}^{(i)}\bigr\|} < \epsilon.$$

In the following implementation, we divide up the work a little bit. First, we implement a function for doing Newton's method, and then we use this function to do one time step of backward Euler.

In [6]:
def newtons_method(x0, f, df, eps=1e-9, maxiter=1000):
    """Newton iteration for a system of equations.

    INPUT:
    x0:      NumPy array, initial guess.
    f, df:   Function, and function derivative (Jacobian).
    eps:     Error tolerance (defaults to 1E-9).
    maxiter: Maximum number of iterations before giving up (defaults to 1000).

    OUTPUT:
    Returns the last step of the Newton iterations (whether converged or not).
    """
    x = x0
    # Initialize the relative error:
    gamma = 1.
    # Iteration counter:
    counter = 0
    while (gamma > eps) and (counter < maxiter):
        # Update x:
        x_prev = x
        x = x - la.solve(df(x), f(x))
        # Calculate gamma:
        gamma = la.norm(x-x_prev)/(la.norm(x_prev)+1e-12)  # +1E-12 is just to avoid division by zero
        # Update counter:
        counter += 1
    return x

def bwdeuler(x, f, df, h):
    """Do a time step of backward Euler.

    INPUT:
    x:     Current time step.
    f, df: System function, and derivative (Jacobian)."""
    # Initial guess:
    x0 = x + h*f(x)
    # Set identity matrix:
    I = np.eye(len(x))
    # Define functions for the Newton iteration:
    phi = lambda y: y - x - h*f(y)
    dphi = lambda y: I - h*df(y)
    # Do the Newton iteration:
    return newtons_method(x0, phi, dphi)
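As a minimal usage sketch (not part of the exercise itself), newtons_method can be tried on the scalar equation $x^2 - 2 = 0$; because of la.solve, the function and its derivative must return a 1D array and a 2D (Jacobian) array, respectively:

# Solve x^2 - 2 = 0 starting from x = 1; the result should approach sqrt(2).
g  = lambda x: np.array([ x[0]**2 - 2. ])
dg = lambda x: np.array([[ 2.*x[0] ]])
print(newtons_method(np.array([1.]), g, dg))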

We can now make a simulation function:

In [7]:
def simulate_bwdeuler(x0, f, df, T, N):
    """Simulation function.

    INPUT:
    x0:    Initial value (NumPy array).
    f, df: System function and derivative.
    T:     End time.
    N:     Number of time steps.

    OUTPUT:
    Numerical solution at each time step."""
    h = T/N
    d = len(x0)
    x = np.zeros( (N, d), dtype=float)
    x[0] = x0
    # Do time steps:
    for i in range(N-1):
        x[i+1] = bwdeuler(x[i], f, df, h)
    return x

2.1 Part 2 (a)

We now use the previously implemented functions to solve the modified logistic equation $\dot{x} = -x(1-x)(1-2x)$.

Disclaimer: Because of the use of la.solve(...) in the above Newton's method, all values of x, f(x), and df(x) should be NumPy arrays. In the case of a 1D ODE (as this is), this doesn't make the most sense, but I opted for generality in the code instead of having to make two almost identical Newton methods.

In [8]:
# Define f(...) and df(...):
def f(x):
    return np.array( [-x[0]*(1.-x[0])*(1-2*x[0])] )

def df(x):
    return np.array( [[ -(1.-x[0])*(1.-2*x[0]) + x[0]*(1.-2*x[0]) + 2*x[0]*(1.-x[0]) ]] )

# Set other parameters:
T = 10.

N = 100
t = np.linspace(0., T, N, endpoint=False)

# Initialize plot:
fig = plt.figure()
ax = fig.add_subplot(1,1,1)

# Set initial conditions (each row is a separate initial value):
x0s = np.linspace(0., 1., 21).reshape( (21,1) )

for x0 in x0s:
    # Simulate:
    x = simulate_bwdeuler(x0, f, df, T, N)
    # Plot:
    ax.plot(t, x[:,0], 'k-')

ax.set_xlabel("$t$")
ax.set_ylabel("$x(t)$")
plt.show()

From these simulations, it looks like x = 0 and x = 1 are stable equilibria, and x = 1/2 is unstable. You are more than welcome to verify this.
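One way to verify this, as a small added check reusing the df defined above, is to look at the sign of $f'(x)$ at each equilibrium: for a scalar ODE $\dot{x} = f(x)$, an equilibrium $x^*$ is asymptotically stable if $f'(x^*) < 0$ and unstable if $f'(x^*) > 0$.

# Sign of f'(x) at the three equilibria of the modified logistic equation:
for xeq in (0., 0.5, 1.):
    print("x* = %.1f :  f'(x*) = %+.2f" % (xeq, df(np.array([xeq]))[0, 0]))

This gives $f'(0) = f'(1) = -1$ and $f'(1/2) = 1/2$, consistent with what the plot suggests.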

2.2 Part 2 (b)

The code for this part is very similar to the last part; here we solve the system $\dot{x} = x(y-2)$, $\dot{y} = y(1-x)$.

In [9]:
# Set f(x) and df(x):
def f(x):
    return np.array([ x[0]*(x[1]-2.), x[1]*(1.-x[0]) ])

def df(x):
    return np.array( [[ x[1]-2., x[0] ],
                      [ -x[1], 1.-x[0] ]] )

# Set some parameters:
T = 10.
N = 200
x0 = np.linspace(0., 1., 11)
x0s = np.vstack( (x0, 2*x0) ).T

# Initialize figure:
fig = plt.figure()
ax = fig.add_subplot(1,1,1)

for x0 in x0s:
    # Simulate:
    x = simulate_bwdeuler(x0, f, df, T, N)
    ax.plot(x[:,0], x[:,1], 'k-', linewidth=0.5)

ax.set_xlabel("$x$")
ax.set_ylabel("$y$")
plt.show()
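In the same spirit as in part (a), one could also inspect the equilibria of this system numerically; a small added sketch using the df above (setting $f(x) = 0$ gives the equilibria $(0,0)$ and $(1,2)$):

# Eigenvalues of the Jacobian at the two equilibria:
for xeq in (np.array([0., 0.]), np.array([1., 2.])):
    print("equilibrium", xeq, ": eigenvalues", la.eigvals(df(xeq)))

The origin is a saddle (eigenvalues $-2$ and $1$), while the Jacobian at $(1, 2)$ has purely imaginary eigenvalues $\pm i\sqrt{2}$.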

