Final year project. Methods for solving differential equations

Final year project
Methods for solving differential equations
Author: Tej Shah
Supervisor: Dr. Milan Mihajlovic
Date: 5th May 2010
School of Computer Science: BSc in Computer Science and Mathematics

Abstract

Methods for solving differential equations
Author: Tej B Shah
Supervisor: Dr Milan Mihajlovic

Ordinary differential equations are rigorously studied in a wide variety of fields, and being able to compute approximate numerical solutions is vital because exact analytical solutions are often difficult to obtain. In this report, a linear boundary value problem has been numerically approximated using the finite difference method and Gaussian elimination. The accuracy of the finite difference method has been tested over various gridsizes, leading to the conclusion that refining the grid (increasing the number of points) increases the accuracy of the results. Finally, a non-linear boundary value problem has been numerically approximated using Picard's method. This first order method is implemented and the effect of varying the gridsize is again analysed, giving the same conclusions as before.

Acknowledgements

Foremost, I would like to thank my supervisor for his continuous support and guidance throughout the year. I would also like to thank Andrew Hazel for always helping me, no matter the question asked. Thanks also go to Anthony Chiu for helping me understand how to write this report in LaTeX, and Chris Heesome for all the help I received in my other modules throughout the duration of this project.

Contents

1 Introduction
1.1 A brief history of differential equations
1.2 Types of differential equations
1.3 Previous Work
1.4 Aims and Objectives
1.4.1 Tasks
1.4.2 Milestones
1.5 Overview of Report
2 Background
2.1 More on Differential Equations
2.1.1 Linear Ordinary Differential Equations
2.1.2 Non-linear Ordinary Differential Equations
2.1.3 Examples of ODEs
2.1.4 1-dimensional Boundary Value Problem
2.2 Finite Difference Method
2.2.1 Domain discretization
3 The Finite Difference Method
3.1 The Taylor Series
3.2 Deriving finite difference approximations
3.2.1 A first derivative approximation
3.2.2 Other first derivative approximations
3.2.3 A second derivative approximation
3.2.4 Other second derivative approximations
3.3 Software - MATLAB
4 Linear ODE problems
4.1 −u″ = 100 sin(10x)
4.1.1 Gaussian Elimination
4.1.2 Errors
4.2 −u″ + 10u′ = 6x
4.3 −u″ + 10u′ = 6x using forward and backward approximation schemes
4.4 −u″ = 6x using the five point scheme
5 Non-linear ODE problems
5.1 −u″ + uu′ = −2 + 2x³
5.1.1 Picard's method
5.2 −u″ + uu′ = sin(x) + sin(x)cos(x)
6 Analysis of results
6.1 Linear ODEs
6.1.1 Centred Derivative Approximations
6.1.2 Alternative Derivative Approximations
6.2 Non-linear ODEs
7 Conclusion
7.1 Areas of further work

List of Figures

1 Discretizing the domain.[3]
2 Solutions to (4.1) with 10 points
3 Solutions to (4.1) with 1000 points
4 Solutions to (4.1) with points
5 Errors associated with solution to (4.1)
6 Solutions to (4.2) with 10 points
7 Solutions to (4.2) with 1000 points
8 Solutions to (4.2) with points
9 Errors associated with solution to (4.2)
10 Solutions to (4.3) - 10 points
11 Solutions to (4.3) - points
12 Solutions to (4.3) - points
13 Global error associated with solution to (4.3)
14 Five point scheme inaccuracy
15 Five point scheme, 10 points
16 Five point scheme, 1000 points
17 Five point scheme, points
18 Gridsize 10 points: log(R_n/R_0) against the number of iterations
19 Gridsize 1000 points: log(R_n/R_0) against the number of iterations
20 Gridsize 1000 points: log(R_n/R_0) against the number of iterations
21 Global error associated with solutions to (5.1)
22 Global error after every iteration
23 All gridsizes: log(R_n/R_0) against the number of iterations
24 Global error associated with solutions to (5.2)

1 Introduction

1.1 A brief history of differential equations

Since their conception in the 17th century by Isaac Newton and Gottfried Leibniz, differential equations have been rigorously studied and examined by mathematicians due to their underlying importance to so much of what is around us. The study of differential equations has progressed further than just the fields of pure and applied mathematics. During World War II, differential equations were essential in calculating the trajectories of ballistics, and being able to compute numerical approximations to differential equations was vital. Nowadays differential equations are fundamental in disciplines such as engineering, biology, physics and finance, where they play differing roles such as modelling population growth in biology, describing the distribution of heat in physics and pricing options in finance. While differential equations have been studied thoroughly over the past 400 years, they remain an area of continuous investigation, with new links to mathematics and other fields being found constantly.

1.2 Types of differential equations

Differential equations can generally be classified into two groups: partial differential equations (PDEs) and ordinary differential equations (ODEs). These can be further categorised as either linear or non-linear differential equations. In my project I shall be focusing on both linear and non-linear ordinary differential equations. An ordinary differential equation is a relation involving a function, u(x), of one variable and its derivatives with respect to that variable. An ordinary differential equation is linear unless it contains non-linear terms involving the function u(x) and its derivatives. Solving a linear ODE is significantly easier than solving a non-linear ODE, but both require dividing the domain into a uniform grid and using the finite difference method to create discrete numerical approximations to our ODEs.
Using the finite difference method, one can produce a system of linear equations that, when solved, gives an approximate solution to our ODE. Obtaining an exact analytical solution to a linear ODE is not always possible, and exact analytical solutions to non-linear ODEs are very rarely possible; when they are, they may contain integral terms, which then need to be approximated by an infinite sum. Therefore, numerical solutions and their accuracy will be the focus of this project.

1.3 Previous Work

There has been various previous work on the finite difference method and difference approximations across a wide range of subjects. This previous work examines the convergence of the solutions produced when finite difference approximations are used to discretize continuous differential equations. Different approaches have been taken in an attempt to increase the accuracy and reduce the computational cost. In practice, the accuracy of a numerical solution of a differential equation is often not required to be greater than some preset accuracy, so it is the number of steps in which we can reach this accuracy that is pivotal. Attempts have also been made to solve non-linear differential equations using higher order methods to reduce computational costs. Examples include the Runge-Kutta methods, which are of order two [1] and greater [2].

1.4 Aims and Objectives

The aim of this project is to solve linear and non-linear ODEs by implementing finite difference approximations to discretize the ODE, then implementing various methods (both direct and iterative), using the MathWorks program MATLAB, to solve the resulting system of equations. Once an approximate solution to the ODE is obtained, the error in the results will be worked out, and then the accuracy of the results will be examined and commented on. Any anomalous results will be looked at further and an explanation of the phenomenon behind them will be attempted.

1.4.1 Tasks

The first task will be to derive the finite difference approximations to both the first derivative and the second derivative, as this is fundamental in getting approximate solutions to ODEs. To do this we will need the Taylor series expansion. Secondly, we will solve some linear ODEs by replacing first and second derivative terms with their derived approximations and solving the resulting system of linear equations using Gaussian elimination. This will produce a numerical approximate solution to our ODE.
Errors will be calculated and the mesh refined to see how this affects the accuracy of the approximate solution. Next we will apply the first and second derivative approximations to non-linear ODEs to create a system of equations. Then Picard's

method of successive approximations will be applied to get an accurate approximate solution. Comments will be made on the order of this method and what this means. Finally, variations of the second derivative approximation will be observed and implemented, along with two other schemes for the first derivative approximation: the upwind and downwind schemes. After doing this, comments will be made on the accuracy of the results obtained.

1.4.2 Milestones

- Complete project plan and poster.
- Give seminar.
- Derive finite difference approximations.
- Implement finite difference scheme on linear differential equations.
- Implement finite difference scheme on non-linear differential equations.
- Give demonstration.

1.5 Overview of Report

The subsequent chapters will cover the following:

Chapter 2 - Background: Background information on both differential equations and the finite difference method.
Chapter 3 - The Finite Difference Method: Focuses on the derivation of the finite difference approximations using the Taylor series, and covers my choice of software.
Chapter 4 - Linear ODE problems: Using the derived approximations to solve linear ordinary differential equations.
Chapter 5 - Non-linear ODE problems: Solving non-linear ordinary differential equations using Picard's method.
Chapter 6 - Analysis of results: Comments on the accuracy of the methods used to solve the ordinary differential equations.

Chapter 7 - Conclusion: A summary of the achievements of the project and future paths along which this project can be furthered.

2 Background

2.1 More on Differential Equations

As mentioned in the previous section, there are two types of ordinary differential equations: linear and non-linear.

2.1.1 Linear Ordinary Differential Equations

A linear differential equation has no multiplications among dependent variables and their derivatives [5]. They are of the form:

A d²u/dx² + B du/dx + Cu = q(x)    (2.1)

where A, B and C are constants.

2.1.2 Non-linear Ordinary Differential Equations

A non-linear differential equation contains non-linear terms involving the function u(x) and its derivatives, and can be written as:

d²u/dx² + u du/dx = q(x)    (2.2)

The non-linear term in the ODE above is u du/dx. Both equations (2.1) and (2.2) are ODEs of second order.

Definition: The order of a differential equation is the order of the highest derivative that appears in the differential equation.[5]

Second order differential equations will be the focus of this project.

2.1.3 Examples of ODEs

dy/dx = x²y    (linear)
d²y/dx² − y = 0    (linear)
d²y/dx² − y dy/dx = 4    (non-linear)

2.1.4 1-dimensional Boundary Value Problem

The problems we will be looking at in this report are 1D boundary value problems (BVPs). First, here are some important definitions needed to understand the concept of 1D BVPs.

Definition: A boundary value problem is a differential equation with constraints known as boundary conditions.

Definition: Boundary conditions are imposed at the end points of the interval over which a boundary value problem is posed.

Definition: A solution to a differential equation is a sufficiently smooth function that satisfies the equation.

A solution to a BVP is a function that satisfies both the differential equation and the imposed boundary values. Without boundary values, the ODE could have infinitely many solutions that simply differ by a constant. However, boundary conditions do not guarantee unique solutions: a boundary value problem may have zero, one or multiple solutions. If a solution exists to our ODE, we will attempt to get an approximate numerical solution using the finite difference method.

2.2 Finite Difference Method

Finding an analytic formula for the solution of our differential equation is not an easy process and, more often than not, it is not possible to do so. Hence we try to find numerical approximations to the solution. The job of the finite difference method is to replace the derivatives in the differential equation by finite difference approximations. The finite difference method turns a continuous problem into a discrete problem, where the solution of the discrete problem is an approximation to the solution of the continuous problem. This produces a system of equations, which is simply a finite number of equations. The number of equations to be solved depends on the manner in which the function's domain is discretized: the finer the grading of the mesh, the more equations there are to approximate the values at the interior points. Fortunately, this is something very elementary for a computer to solve.

2.2.1 Domain discretization

Discretizing the domain simply means creating a finite number of uniformly spaced interior points between the boundary points, as shown in figure 1.

Figure 1: Discretizing the domain.[3]

From the figure above, it is clear that the domain of interest lies between the two boundary points, x_0 and x_6. It has been discretized to create 5 interior points, x_1 to x_5. In general, the domain is split into n intervals with n+1 uniformly spaced nodes, x_0, x_1, ..., x_n. The distance between adjacent nodes is h, calculated using the equation:

h = (x_n − x_0) / n

This is often known as the stepsize. It is at each of these discrete points x_i, i = 0, 1, ..., n, that we will compute approximate numerical values of the solution to our differential equation.
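As a concrete illustration, the uniform grid and stepsize described above can be computed as follows. This is a Python sketch of what the report does in MATLAB; the function name `discretize` is illustrative, not from the report.

```python
# Discretize [x0, xn] into n uniform intervals, giving n+1 nodes.
# A Python sketch of the report's MATLAB step; `discretize` is an
# illustrative name.
def discretize(x0, xn, n):
    h = (xn - x0) / n                       # the stepsize
    x = [x0 + i * h for i in range(n + 1)]  # nodes x_0, x_1, ..., x_n
    return x, h

x, h = discretize(0.0, 8.0, 8)
print(h)        # 1.0
print(len(x))   # 9 nodes: x_0 .. x_8
```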

3 The Finite Difference Method

3.1 The Taylor Series

Before we can derive the finite difference approximations to derivatives, we must look at the Taylor series, as it is from this that the approximations are derived. The Taylor series represents a sufficiently smooth function u at a point x in terms of an infinite sum of its derivatives at a single nearby point a:

u(x) = u(a) + u′(a)(x − a) + (u″(a)/2!)(x − a)² + (u⁽³⁾(a)/3!)(x − a)³ + ... + (u⁽ⁿ⁾(a)/n!)(x − a)ⁿ + ...

Therefore, if only a finite number of the function's derivatives are used, the Taylor series expansion becomes an approximation to the function. This will be important when analysing errors in the results.

3.2 Deriving finite difference approximations

To derive derivative approximations we expand u(x + h) and u(x − h) in the Taylor series about the point x_i.

3.2.1 A first derivative approximation

To approximate u′ at x_i we will keep error terms of order h². It is necessary to note that x_i + h is equivalent to x_{i+1}, as two adjacent nodes are separated by a distance of h (this can easily be seen in figure 1 in chapter 2). This gives the following equations:

u(x_i + h) = u(x_{i+1}) = u(x_i) + h u′(x_i) + O(h²)    (3.1)
u(x_i − h) = u(x_{i−1}) = u(x_i) − h u′(x_i) + O(h²)    (3.2)

Subtracting equation (3.2) from equation (3.1) gives us:

u(x_{i+1}) − u(x_{i−1}) = 2h u′(x_i) + O(h²)

Finally, rearranging so that u′(x_i) is the subject of the equation and ignoring the error term leads to the finite difference approximation to the first derivative:

u′(x_i) ≈ (u(x_{i+1}) − u(x_{i−1})) / 2h

As u(x_i) is the exact solution at the point x_i, we denote our approximation to it by u_i. This gives us:

u′(x_i) ≈ (u_{i+1} − u_{i−1}) / 2h    (3.3)

This equation is known as the centred difference approximation to the first derivative.

3.2.2 Other first derivative approximations

Subtracting u(x_i) from both sides of equation (3.1), again neglecting the error term and rearranging to make u′(x_i) the subject, generates:

u′(x_i) ≈ (u_{i+1} − u_i) / h    (3.4)

This equation is known as the forward difference approximation to the first derivative. Similarly, rearranging equation (3.2), ignoring the error term and making u′(x_i) the subject, yields:

u′(x_i) ≈ (u_i − u_{i−1}) / h    (3.5)

This equation is known as the backward difference approximation to the first derivative. All three equations can be used to replace first derivative terms in our ODEs; however, the main focus will be on the centred difference approximation to the first derivative.
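The three first derivative approximations can be checked numerically. A minimal Python sketch (the report itself works in MATLAB), using u(x) = sin(x), whose derivative cos(x) is known exactly:

```python
import math

# Centred (3.3), forward (3.4) and backward (3.5) difference
# approximations to u'(x), checked against u = sin, u' = cos.
def centred(u, x, h):  return (u(x + h) - u(x - h)) / (2 * h)
def forward(u, x, h):  return (u(x + h) - u(x)) / h
def backward(u, x, h): return (u(x) - u(x - h)) / h

x, h = 1.0, 1e-4
exact = math.cos(x)
err_c = abs(centred(math.sin, x, h) - exact)
err_f = abs(forward(math.sin, x, h) - exact)
print(err_c < err_f)   # True: the centred scheme is more accurate
```

The centred error shrinks like h² while the one-sided errors shrink like h, which is why the centred scheme is preferred when it can be used.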

3.2.3 A second derivative approximation

As when deriving the first derivative approximation, we use the Taylor series expansions of u(x + h) and u(x − h) about the point x_i; however, as we are approximating u″ at x_i, we keep error terms of order h³. This leads to the following Taylor series expansions:

u(x_i + h) = u(x_{i+1}) = u(x_i) + h u′(x_i) + (h²/2) u″(x_i) + O(h³)    (3.6)
u(x_i − h) = u(x_{i−1}) = u(x_i) − h u′(x_i) + (h²/2) u″(x_i) + O(h³)    (3.7)

This time, adding equations (3.6) and (3.7) together leads to the following equation:

u(x_{i+1}) + u(x_{i−1}) = 2u(x_i) + h² u″(x_i) + O(h³)

Again, just as before, rearranging the equation to make u″(x_i) the subject and neglecting the error term, we get an equation to approximate the second derivative:

u″(x_i) ≈ (u(x_{i+1}) − 2u(x_i) + u(x_{i−1})) / h²

and then, writing u_i for the approximation to u(x_i), this gives us:

u″(x_i) ≈ (u_{i+1} − 2u_i + u_{i−1}) / h²    (3.8)

This equation is known as the centred difference approximation to the second derivative.
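Equation (3.8) can be verified in the same spirit. A Python sketch using u = sin, for which u″ = −sin:

```python
import math

# Centred difference approximation (3.8) to the second derivative.
def second_centred(u, x, h):
    return (u(x + h) - 2 * u(x) + u(x - h)) / h**2

x, h = 0.3, 1e-3
approx = second_centred(math.sin, x, h)
print(abs(approx - (-math.sin(x))))   # small: error shrinks like h^2
```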

3.2.4 Other second derivative approximations

One final second derivative approximation we will look at uses the value of the function u at the points x_i, x_{i−2} and x_{i+2} to approximate the value of u″ at x_i. This is very easy to derive by expanding u(x + 2h) and u(x − 2h) using the Taylor series and manipulating the two resulting expansions. We end up with:

u″(x_i) ≈ (u_{i+2} − 2u_i + u_{i−2}) / 4h²    (3.9)

This will now be referred to as a five point scheme. To implement the finite difference method and the matrices it creates, we require suitable and reliable software.

3.3 Software - MATLAB

All of the implementation for this project will be done in Matrix Laboratory, also known as MATLAB. MATLAB is principally intended for numerical computing and offers three features that will be extremely useful during the project: plotting of data and graphs, fast matrix manipulation, and simple implementation of algorithms, along with several other characteristics that make it widely used across industry and the academic world. Because matrices of varying dimensions are needed (from 10 × 10 upwards) and the matrices contain only three diagonals of information, a lot of memory would otherwise be wasted; however, MATLAB allows the creation of sparse matrices. This means only the elements of the matrix with a non-zero value are stored, along with their position in the matrix. This feature is imperative if this project is to be extended further, as it leads to fewer calculations and less memory used, hence a lower computational cost. [4]
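The storage saving can be illustrated outside MATLAB too. A Python sketch of the idea behind sparse storage for the tridiagonal matrices in this report: keep only the three diagonals as vectors and multiply without ever forming the full matrix (the function name is illustrative):

```python
# Multiply a tridiagonal matrix by a vector, storing only its
# sub-, main and super-diagonals -- the idea behind using MATLAB's
# sparse matrices for the systems in this report.  Illustrative sketch.
def tridiag_matvec(sub, main, sup, v):
    n = len(v)
    out = [main[i] * v[i] for i in range(n)]
    for i in range(n - 1):
        out[i] += sup[i] * v[i + 1]      # super-diagonal contribution
        out[i + 1] += sub[i] * v[i]      # sub-diagonal contribution
    return out

n = 5
main = [2.0] * n; sub = [-1.0] * (n - 1); sup = [-1.0] * (n - 1)
print(tridiag_matvec(sub, main, sup, [1.0] * n))  # [1.0, 0.0, 0.0, 0.0, 1.0]
```

Only 3n − 2 numbers are stored instead of n², which is what makes the larger grids in later chapters affordable.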

4 Linear ODE problems

As mentioned earlier, to find an approximate solution to a linear ODE we will use the finite difference approximations to the first and second derivatives to create a system of equations which, when solved, gives numerical values at all the points of our discretized domain. As the problem is linear, the resulting system of equations will also be linear. The three linear problems we shall be looking at are:

−u″ = 100 sin(10x) with boundary conditions u(0) = 0 and u(8) = sin(80)
−u″ + 10u′ = 6x with boundary conditions u(0) = 1 and u(1) = 0
−u″ = 6x with boundary conditions u(0) = 0 and a prescribed value at x = 1

4.1 −u″ = 100 sin(10x)

The first step towards a numerical approximation is to discretize the domain (the x-axis). This is done between the boundary points, which are at x = 0 and x = 8. The discretization of the domain is programmed so that it is easy to increase or decrease the number of nodes. Once the domain has been discretized, we approximate the value of u at each discrete x value. This is done using the finite difference method. Using our centred difference approximation to the second derivative we get the following set of equations:

At x_0: −(u_{−1} − 2u_0 + u_1)/h² = 100 sin(10x_0)
At x_1: −(u_0 − 2u_1 + u_2)/h² = 100 sin(10x_1)
...
At x_n: −(u_{n−1} − 2u_n + u_{n+1})/h² = 100 sin(10x_n)

The next step is simply to express all these equations with matrices and vectors, by putting the equations into the form Au = f. Doing this gives us:

A = (1/h²) ·
[  2  −1               ]
[ −1   2  −1           ]
[      ⋱   ⋱   ⋱       ]
[         −1   2  −1   ]
[             −1   2   ]

u = [u_0, u_1, ..., u_{n−1}, u_n]ᵀ,  f = [100 sin(10x_0), 100 sin(10x_1), ..., 100 sin(10x_{n−1}), 100 sin(10x_n)]ᵀ

The next step is to impose the given boundary conditions to ensure that there are not infinitely many solutions differing by a constant. To do this, we wipe out the top row and the bottom row of the matrix A and of the vector f. We then put in the trivial equations for the boundary values by placing the value 1 in the top-left and bottom-right elements of A and the boundary values in the top and bottom elements of f (as shown below). If we were to multiply out Au = f, the first equation would be the first boundary condition, then would come all the equations from the finite difference method, and finally the last equation would be the second boundary condition.

A =
[  1                          ]
[ −1/h²   2/h²  −1/h²         ]
[         ⋱      ⋱     ⋱      ]
[      −1/h²   2/h²  −1/h²    ]
[                       1     ]

Once the boundary conditions have been imposed, we can solve the system numerically by working out the inverse of A and multiplying both sides of Au = f by it, giving the solution u = A⁻¹f. MATLAB has its own operator that computes this solution: u = A \ f. The backslash operator in MATLAB solves the system of equations using Gaussian elimination.
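The whole pipeline for (4.1) — assemble the tridiagonal A, overwrite the boundary rows, solve — can be sketched in Python with NumPy, where `numpy.linalg.solve` plays the role of MATLAB's backslash. The boundary values here are taken from the exact solution sin(10x):

```python
import numpy as np

n = 800
x = np.linspace(0.0, 8.0, n + 1)
h = x[1] - x[0]
A = np.zeros((n + 1, n + 1))
f = 100 * np.sin(10 * x)
for i in range(1, n):        # interior rows: -(u_{i-1} - 2u_i + u_{i+1})/h^2 = f_i
    A[i, i - 1] = A[i, i + 1] = -1.0 / h**2
    A[i, i] = 2.0 / h**2
A[0, 0] = A[n, n] = 1.0      # boundary rows
f[0], f[n] = np.sin(0.0), np.sin(80.0)   # boundary values of the exact solution
u = np.linalg.solve(A, f)    # MATLAB: u = A \ f
err = np.max(np.abs(u - np.sin(10 * x)))
print(err < 1e-2)            # True: close to the exact solution sin(10x)
```

A dense matrix is used here for brevity; as noted in section 3.3, sparse storage is what makes much finer grids practical.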

4.1.1 Gaussian Elimination

Gaussian elimination, named after Carl Friedrich Gauss, is an example of a direct method for solving a system of linear equations. Direct methods solve the system of equations in a finite number of steps. The Gaussian elimination algorithm works in two steps. Given the augmented matrix for the system of linear equations, the algorithm uses elementary row operations to convert the coefficient matrix (which in our example is the matrix A) into row echelon form. This means that only the main diagonal of the matrix and the elements above it will contain values. The main diagonal of the transformed coefficient matrix must not contain any zeros; otherwise the matrix is referred to as singular, meaning that the system of equations has either an infinite number of solutions or no solutions. The second and final step of the Gaussian elimination algorithm is back substitution, which recovers the solution to the system. More on this method can be found in [6].

Now that we have a numerical approximation to the solution of the differential equation, we can plot it against the actual solution. The actual solution is found analytically by integrating the right-hand side twice, subject to the boundary values; this gives the function sin(10x) as the actual solution. Below are plots of our solution against the actual solution with various different gradings of the mesh.
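The two steps just described — forward elimination to row echelon form, then back substitution — can be sketched in Python with NumPy. This is a minimal version without pivoting, so it assumes elimination never produces a zero on the main diagonal (which holds for the tridiagonal systems in this report):

```python
import numpy as np

# Minimal Gaussian elimination with back substitution: the direct
# method MATLAB's backslash applies to these systems.  No pivoting.
def gauss_solve(A, f):
    A = A.astype(float).copy(); f = f.astype(float).copy()
    n = len(f)
    for k in range(n - 1):                 # forward elimination
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            f[i] -= m * f[k]
    u = np.zeros(n)
    for i in range(n - 1, -1, -1):         # back substitution
        u[i] = (f[i] - A[i, i + 1:] @ u[i + 1:]) / A[i, i]
    return u

A = np.array([[2.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 2.0]])
f = np.array([1.0, 0.0, 1.0])
print(gauss_solve(A, f))   # [1. 1. 1.]
```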

Figure 2: Solutions to (4.1) with 10 points.

Figure 3: Solutions to (4.1) with 1000 points.

Figure 4: Solutions to (4.1) with points.

4.1.2 Errors

We shall also tabulate the largest residual and the greatest absolute global error for this problem at different mesh sizes.

Mesh size      Residual    Global Error
10 points
1000 points
points

Figure 5: Errors associated with solution to (4.1)

Definition: The residual, R_i, is the amount by which the continuous solution fails to satisfy the discrete formula; in other words, the value we get when we insert the exact solution into the finite difference approximation. For equation (4.1) this is:

R_i = −(u(x_{i+1}) − 2u(x_i) + u(x_{i−1}))/h² − f(x_i)
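The residual just defined can be computed directly. A Python sketch for (4.1), inserting the exact solution sin(10x) into the centred scheme at the interior points:

```python
import numpy as np

# Residual of the exact solution in the discrete formula for
# -u'' = 100 sin(10x):
#   R_i = -(u(x_{i+1}) - 2u(x_i) + u(x_{i-1}))/h^2 - f(x_i)
def residual(u_exact, f, x, h):
    ui = u_exact(x)
    return -(ui[2:] - 2 * ui[1:-1] + ui[:-2]) / h**2 - f(x[1:-1])

x = np.linspace(0.0, 8.0, 1001)
h = x[1] - x[0]
R = residual(lambda t: np.sin(10 * t), lambda t: 100 * np.sin(10 * t), x, h)
print(np.max(np.abs(R)) < 0.1)   # True; it shrinks like h^2 on refinement
```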

Definition: The local truncation error, T_i, is the error introduced by the finite difference approximation at each step. As the value u_i is an approximation, it is then used to obtain an approximate value for u_{i+1}, and so on; this carried-forward error, together with the error of the finite approximation itself, causes a local truncation error at each point [7]. For the centred first derivative approximation this can be written as:

T_i = f′(x_i) − (f(x_{i+1}) − f(x_{i−1}))/2h

Definition: The global error, e, is the value obtained when our approximate solution is subtracted from the actual solution [8]:

e = max_{1≤i≤n} |u(x_i) − u_i|

where u(x_i) is the exact solution and u_i is our approximate solution.

4.2 −u″ + 10u′ = 6x

Now we shall look at the second linear differential equation. To work out an approximate numerical solution to this equation, we take the same steps as in the previous example, but also include the first derivative finite difference approximation. Using our finite difference approximations, we get the following set of equations:

At x_0: −(u_{−1} − 2u_0 + u_1)/h² + 10(u_1 − u_{−1})/2h = 6x_0
At x_1: −(u_0 − 2u_1 + u_2)/h² + 10(u_2 − u_0)/2h = 6x_1
...
At x_n: −(u_{n−1} − 2u_n + u_{n+1})/h² + 10(u_{n+1} − u_{n−1})/2h = 6x_n

If we now rearrange the above equations to collect the u_{i−1}, u_i and u_{i+1} terms, we get:

At x_0: (−5/h − 1/h²)u_{−1} + (2/h²)u_0 + (5/h − 1/h²)u_1 = 6x_0
At x_1:

(−5/h − 1/h²)u_0 + (2/h²)u_1 + (5/h − 1/h²)u_2 = 6x_1
...
At x_n: (−5/h − 1/h²)u_{n−1} + (2/h²)u_n + (5/h − 1/h²)u_{n+1} = 6x_n

The remaining steps are the same as in the previous example. We put this into the form Au = f, using the coefficients of u_{i−1}, u_i and u_{i+1} to populate the tridiagonal matrix A. This gives us:

A =
[ (2/h²)         (5/h − 1/h²)                            ]
[ (−5/h − 1/h²)  (2/h²)         (5/h − 1/h²)             ]
[        ⋱              ⋱               ⋱                 ]
[        (−5/h − 1/h²)  (2/h²)         (5/h − 1/h²)      ]
[                       (−5/h − 1/h²)  (2/h²)            ]

u = [u_0, u_1, ..., u_{n−1}, u_n]ᵀ,  f = [6x_0, 6x_1, ..., 6x_{n−1}, 6x_n]ᵀ

Then we impose the boundary conditions u(0) = 1 and u(1) = 0. Following this, we use the backslash operator in MATLAB to perform the Gaussian elimination algorithm. This gives an approximate solution to our linear differential equation, for which we then plot graphs at various mesh gradings and work out both the residuals and the global errors.

Figure 6: Solutions to (4.2) with 10 points.

Figure 7: Solutions to (4.2) with 1000 points.

Figure 8: Solutions to (4.2) with points.

Mesh size      Residual    Global Error
10 points
1000 points
points

Figure 9: Errors associated with solution to (4.2)

It is clear from both figure 5 and figure 9 that as the number of nodes increases, both the residual and the global error decrease.

4.3 −u″ + 10u′ = 6x using forward and backward approximation schemes

Using the forward and backward first derivative approximations requires the same steps: we populate the matrix A with the correct coefficients before using Gaussian elimination to solve the system of linear equations. Below are graphs of our numerical approximations using all three first derivative approximations, and a table comparing the global error produced by each at varying mesh gradings.
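The trend in figure 9 can be quantified: for a second order scheme, halving h should divide the global error by about four. A Python sketch for (4.2), using the coefficients derived above; the exact solution 0.3x² + 0.06x + c₁ + c₂e^{10x} comes from solving the ODE analytically, with the constants fixed by the boundary conditions:

```python
import numpy as np

# Solve -u'' + 10u' = 6x, u(0)=1, u(1)=0, with the centred-difference
# coefficients, and estimate the convergence order by halving h.
c2 = -1.36 / (np.exp(10.0) - 1.0)
c1 = 1.0 - c2
exact = lambda t: 0.3 * t**2 + 0.06 * t + c1 + c2 * np.exp(10 * t)

def global_error(n):
    x = np.linspace(0.0, 1.0, n + 1); h = x[1] - x[0]
    A = np.zeros((n + 1, n + 1)); f = 6 * x
    for i in range(1, n):
        A[i, i - 1] = -5.0 / h - 1.0 / h**2
        A[i, i]     =  2.0 / h**2
        A[i, i + 1] =  5.0 / h - 1.0 / h**2
    A[0, 0] = A[n, n] = 1.0
    f[0], f[n] = 1.0, 0.0                  # boundary conditions
    u = np.linalg.solve(A, f)
    return np.max(np.abs(u - exact(x)))

order = np.log2(global_error(200) / global_error(400))
print(order)   # close to 2: the centred scheme is second order
```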

Figure 10: Solutions to (4.3) with 10 points.

Figure 11: Solutions to (4.3) with points.

Figure 12: Solutions to (4.3) with points.

Gridsize     Centred    Backward    Forward
10 points
points
points

Figure 13: Global error associated with solution to (4.3)

4.4 −u″ = 6x using the five point scheme

Solving this differential equation with the five point scheme is programmed in a similar way to our first example in section 4.1: we populate the coefficient matrix and use Gaussian elimination. However, we must take care with the second and penultimate equations:

u″(x_1) ≈ (u_3 − 2u_1 + u_{−1})/4h²  and  u″(x_{n−1}) ≈ (u_{n+1} − 2u_{n−1} + u_{n−3})/4h²

This is because these contain the terms u_{−1} and u_{n+1}, which have no values and are not approximated. If these terms are not accounted for, we get an approximate solution that is accurate only at every other point, as shown below.

Figure 14: Five point scheme inaccuracy

To account for this error, we use a different finite difference approximation to get values at these specific points, x_1 and x_{n−1}: the centred finite difference approximation to the second derivative. This is appropriate because both the five point scheme and the centred approximation to the second derivative are second order methods. Below are the graphs for different mesh gradings.

Figure 15: Five point scheme, 10 points

Figure 16: Five point scheme, 1000 points

Figure 17: Five point scheme, points
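Putting the pieces of section 4.4 together, here is a Python sketch: five-point rows in the interior, centred three-point rows at x_1 and x_{n−1}, boundary rows at the ends. The boundary values u(0) = 0, u(1) = −1 are illustrative, chosen so that the exact solution of −u″ = 6x is −x³; both schemes are exact for cubics, so the discrete solution should reproduce it to rounding error:

```python
import numpy as np

n = 40
x = np.linspace(0.0, 1.0, n + 1); h = x[1] - x[0]
A = np.zeros((n + 1, n + 1)); f = 6 * x
for i in range(2, n - 1):                  # five-point interior rows
    A[i, i - 2] = A[i, i + 2] = -1.0 / (4 * h**2)
    A[i, i] = 2.0 / (4 * h**2)
for i in (1, n - 1):                       # three-point rows next to the boundary
    A[i, i - 1] = A[i, i + 1] = -1.0 / h**2
    A[i, i] = 2.0 / h**2
A[0, 0] = A[n, n] = 1.0                    # boundary rows
f[0], f[n] = 0.0, -1.0                     # illustrative boundary values
u = np.linalg.solve(A, f)
err = np.max(np.abs(u - (-x**3)))
print(err < 1e-8)   # True: both schemes are exact for cubic solutions
```

Without the three-point rows, u_{−1} and u_{n+1} would appear and the even- and odd-indexed unknowns would decouple, reproducing the every-other-point inaccuracy of figure 14.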

5 Non-linear ODE problems

For non-linear ordinary differential equations, we again use our finite difference approximations to the first and second derivatives to convert the differential equation into a system of equations; this time the system is non-linear, so we cannot solve it using a direct method such as the Gaussian elimination algorithm. Instead, we try to get an approximate numerical solution using an iterative method. The non-linear differential equations we are trying to solve are:

−u″ + uu′ = −2 + 2x³ with boundary conditions u(−1) = 1 and u(1) = 1
−u″ + uu′ = sin(x) + sin(x)cos(x) with boundary conditions u(−1) = sin(−1) and u(1) = sin(1)

It is easy to see that the non-linear term is uu′.

5.1 −u″ + uu′ = −2 + 2x³

The steps for solving a non-linear differential equation are very similar to those for solving a linear one. First, we discretize the domain of interest between the given boundary points (x = −1 and x = 1), so that we have a finite number of points at which to make numerical approximations to the solution. Then we use our finite difference approximations and the discretized domain to express the continuous differential equation as a finite number of equations at discrete points. This gives us the following set of equations:

At x_0: −(u_{−1} − 2u_0 + u_1)/h² + u_0(u_1 − u_{−1})/2h = −2 + 2x_0³
At x_1: −(u_0 − 2u_1 + u_2)/h² + u_1(u_2 − u_0)/2h = −2 + 2x_1³
...
At x_n: −(u_{n−1} − 2u_n + u_{n+1})/h² + u_n(u_{n+1} − u_{n−1})/2h = −2 + 2x_n³

Again we rearrange the above equations to get:

At x_0:

(−u_0/2h − 1/h²)u_{−1} + (2/h²)u_0 + (u_0/2h − 1/h²)u_1 = −2 + 2x_0³
At x_1: (−u_1/2h − 1/h²)u_0 + (2/h²)u_1 + (u_1/2h − 1/h²)u_2 = −2 + 2x_1³
...
At x_n: (−u_n/2h − 1/h²)u_{n−1} + (2/h²)u_n + (u_n/2h − 1/h²)u_{n+1} = −2 + 2x_n³

As the equations above show, we now have a system of non-linear equations: the coefficients themselves depend on u. Just as we did in example (4.2), we put these equations into the form Au = f. We use the backslash operator to solve the resulting linear systems, and Picard's method to produce approximate solutions of increasing accuracy.

5.1.1 Picard's method

Picard's method is a first order iterative method [9] for getting an approximate solution to our differential equation. An iterative method attempts to solve the problem using successive approximations, each more accurate than the previous one. An iterative method requires an initial guess, which gets progressively better after each iteration. Such a method is normally terminated once the solution meets a certain criterion or reaches a certain tolerance; if allowed to continue forever, an exact solution would never be reached, only ever an approximate one. In our example (5.1), Picard's method takes our initial guess, the function u = 0, and produces a new approximation. This new approximation is substituted for the u in the coefficients of our system, and the system of equations is solved to produce the next approximation. Writing A(u) for the matrix assembled from the current approximation, each iteration computes (in MATLAB notation):

u^{k+1} = A(u^k) \ f

This continues to loop until our set tolerance has been reached. We use a while loop to keep computing new approximations until the residual, R_n, of the numerical approximation is less than our tolerance. Once we have populated the matrix A with the coefficients of u_{i−1}, u_i and u_{i+1} and used Gaussian elimination to get our first approximation, we work out its residual.
If it is greater than our tolerance, the solution will be used to get our next approximation. This will continue until

the residual is less than the set tolerance. Below are graphs of log(R_n/R_0) against the number of iterations for different gridsizes, and a graph and table of how the error changes with every iteration for the different gridsizes.

Figure 18: Gridsize 10 points: log(R_n/R_0) against the number of iterations.
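The loop just described — assemble A from the previous approximation, solve, then test the residual against the tolerance — can be sketched as follows. This is a Python sketch rather than the project's MATLAB code; the boundary values u(-1) = u(1) = -1, the exact solution u = -x^2 used to check it, and the use of the tridiagonal (Thomas) form of Gaussian elimination in place of MATLAB's backslash solve are all assumptions.

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system by Gaussian elimination without pivoting.
    a: sub-diagonal (a[0] unused), b: diagonal, c: super-diagonal (c[-1] unused),
    d: right-hand side."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    u = [0.0] * n
    u[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        u[i] = dp[i] - cp[i] * u[i + 1]
    return u

def picard(n=80, tol=1e-10, max_iter=100):
    """Picard iteration for -u'' + u u' = 2 + 2x^3 with u(-1) = u(1) = -1
    (assumed data), on n interior nodes of a uniform grid over [-1, 1]."""
    h = 2.0 / (n + 1)
    x = [-1.0 + (i + 1) * h for i in range(n)]          # interior nodes
    f = [2.0 + 2.0 * xi ** 3 for xi in x]
    ul = ur = -1.0                                      # boundary values
    u = [0.0] * n                                       # initial guess u = 0
    for k in range(max_iter):
        # coefficients frozen at the previous iterate (the Picard linearisation)
        a = [-1.0 / h ** 2 - ui / (2 * h) for ui in u]
        b = [2.0 / h ** 2] * n
        c = [-1.0 / h ** 2 + ui / (2 * h) for ui in u]
        d = list(f)
        d[0] -= a[0] * ul                               # fold boundary values into rhs
        d[-1] -= c[-1] * ur
        u = thomas(a, b, c, d)
        # non-linear residual of the new approximation
        v = [ul] + u + [ur]
        res = max(abs(-(v[i - 1] - 2 * v[i] + v[i + 1]) / h ** 2
                      + v[i] * (v[i + 1] - v[i - 1]) / (2 * h) - f[i - 1])
                  for i in range(1, n + 1))
        if res < tol:
            break
    return x, u, k + 1
```

Because each linearised matrix is tridiagonal, the elimination step costs only O(n) per Picard iteration.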

Figure 19: Gridsize 100 points: log(R_n/R_0) against the number of iterations.

Figure 20: Gridsize 1000 points: log(R_n/R_0) against the number of iterations.

Figure 21: Global error associated with solutions to (5.1), tabulated against the number of iterations for the 10, 100 and 1000 point meshes.

Figure 22: Global error after every iteration.

5.2 -u'' + uu' = sin(x) + sin(x)cos(x)

This final non-linear differential equation example is programmed in exactly the same way as the previous one. Due to the complexity of the equation, we will be able to see the effect that changing the mesh grading has on

the errors. For this example, we have not set a tolerance, but rather a minimum iteration count. Below are the results in graphs and tables.

Figure 23: All gridsizes: log(R_n/R_0) against the number of iterations.

Figure 24: Global error associated with solutions to (5.2), tabulated against the number of iterations for the 10, 100 and 1000 point meshes.
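As a consistency check on this discretization (a Python sketch; reading the equation as -u'' + uu' = sin(x) + sin(x)cos(x) with exact solution u = sin(x) is an assumption based on the right-hand side), we can substitute the exact solution into the centred-difference equations and watch the truncation error fall like h^2:

```python
import math

def truncation_error(n):
    """Max residual of the centred-difference form of -u'' + u u' = sin x + sin x cos x
    on [-1, 1], evaluated at the assumed exact solution u = sin x."""
    h = 2.0 / (n + 1)
    x = [-1.0 + i * h for i in range(n + 2)]
    u = [math.sin(xi) for xi in x]
    f = [math.sin(xi) + math.sin(xi) * math.cos(xi) for xi in x]
    worst = 0.0
    for i in range(1, n + 1):
        d2 = (u[i - 1] - 2 * u[i] + u[i + 1]) / h ** 2   # centred u''(x_i)
        d1 = (u[i + 1] - u[i - 1]) / (2 * h)             # centred u'(x_i)
        worst = max(worst, abs(-d2 + u[i] * d1 - f[i]))
    return worst

e1, e2 = truncation_error(49), truncation_error(99)   # h, then h/2
# halving h should cut the truncation error by roughly a factor of 4
```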

6 Analysis of results

Before we begin to analyse the results obtained for both linear and non-linear ordinary differential equations, it is necessary to introduce some terms that will be used to explain the results.

Definition: The order of an approximation scheme or method is defined in terms of the absolute error, also known as the global error. If the error of the scheme or method is proportional to h, then it is a first order approximation; if the error is proportional to h^2, then it is second order.

Definition: A method is said to be convergent if the numerical approximation obtained tends towards the exact solution as the mesh is refined [11]. A convergent method's global error therefore tends to zero as the grading of the mesh is increased.

Definition: A scheme is said to be consistent if the local truncation error tends to zero as the gridsize h approaches zero:

lim_{h -> 0} T_i = 0

A scheme is consistent if its order is at least one.

Now that we have defined these terms, we shall look at the methods used to solve the differential equations.

6.1 Linear ODEs

To solve our linear ODEs, we used the finite difference approximations to both the first and second derivative. We shall look at the order of the finite difference method, and also check whether the solution we obtain converges to the exact solution and whether the method is consistent.

6.1.1 Centred Derivative Approximations

The differential equation we looked at in section 4.1 uses only a second derivative approximation to compute an approximate solution, so we can easily work out the order of the second derivative approximation using the Taylor series.

Just as when we derived the approximation, we expand about u(x_i + h) and u(x_i - h) to give us:

u(x_i + h) = u(x_i) + h u'(x_i) + (h^2/2) u''(x_i) + (h^3/6) u^(3)(x_i) + (h^4/24) u^(4)(x_i) + ...   (6.1)

u(x_i - h) = u(x_i) - h u'(x_i) + (h^2/2) u''(x_i) - (h^3/6) u^(3)(x_i) + (h^4/24) u^(4)(x_i) - ...   (6.2)

Again we add the two expansions together, recalling that u(x_i + h) = u(x_{i+1}) and u(x_i - h) = u(x_{i-1}), to get:

u(x_{i+1}) + u(x_{i-1}) = 2u(x_i) + h^2 u''(x_i) + (h^4/12) u^(4)(x_i) + ...   (6.3)

Now if we divide (6.3) by h^2 and subtract our approximation to u''(x_i), we are left with the error terms, i.e. the difference between the exact value and the approximation, which are (h^2/12) u^(4)(x_i) + O(h^4). The error is proportional to h^2, so we have shown that the approximation is second order accurate and thus also consistent. Finally, using our results in figure 5, we can see that our method is also convergent, as increasing the gridsize causes our global error to tend towards zero.

If we look at the order of accuracy of the centred first derivative approximation, we will be able to make a judgement on the convergence of our numerical approximation for the example in section 4.2. If the order of this approximation is greater than or equal to 2, then our method for solving -u'' + 10u' = 6x will be second order, as the O(h^2) error from our approximation to u''(x) will remain dominant; otherwise our method's order will be the same as that of our first derivative approximation. To find the order of the centred first derivative approximation, we subtract (6.2) from (6.1) to give us:

u(x_{i+1}) - u(x_{i-1}) = 2h u'(x_i) + (h^3/3) u^(3)(x_i) + ...   (6.4)

If we divide (6.4) by 2h and then subtract our approximation to u'(x_i), we are again left with only the error terms, which are (h^2/6) u^(3)(x_i) + O(h^4).
Therefore the order of the centred first derivative approximation is 2 and our method is consistent; using the error results in figure 9, we can see that as we increase our gridsize, our error tends towards zero, making our method convergent.
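The second order accuracy and convergence just discussed can also be observed numerically. The sketch below solves a stand-in model problem, -u'' = pi^2 sin(pi x) with u(0) = u(1) = 0 and exact solution u = sin(pi x) (an assumed example, not one of the report's problems), on successively finer grids; halving h should cut the global error by a factor of about four:

```python
import math

def solve_poisson(n):
    """Centred-difference solution of -u'' = pi^2 sin(pi x), u(0) = u(1) = 0,
    on n interior nodes; returns the global (max-norm) error against sin(pi x)."""
    h = 1.0 / (n + 1)
    x = [(i + 1) * h for i in range(n)]
    d = [math.pi ** 2 * math.sin(math.pi * xi) for xi in x]
    b = [2.0 / h ** 2] * n        # diagonal of the (-1, 2, -1)/h^2 matrix
    off = -1.0 / h ** 2           # constant sub- and super-diagonal
    # forward elimination, then back substitution (Gaussian elimination)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = off / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - off * cp[i - 1]
        cp[i] = off / m
        dp[i] = (d[i] - off * dp[i - 1]) / m
    u = [0.0] * n
    u[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        u[i] = dp[i] - cp[i] * u[i + 1]
    return max(abs(u[i] - math.sin(math.pi * x[i])) for i in range(n))

errs = [solve_poisson(n) for n in (9, 19, 39, 79)]   # h halves each time
ratios = [errs[i] / errs[i + 1] for i in range(3)]   # each should be roughly 4
```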

6.1.2 Alternative Derivative Approximations

The order of our forward and backward first derivative approximations can be worked out using equations (6.1) and (6.2). Manipulation of the two equations leads to error terms of (h/2) u''(x_i) + O(h^2) and -(h/2) u''(x_i) + O(h^2) for the forward and backward first derivative approximations respectively. These two schemes are therefore both first order, so increasing the gridsize does not have the same effect as it does with the centred finite difference approximation: to reduce the error by a factor of 100, a first order scheme needs roughly 100 times as many grid points, whereas the centred (second order) scheme needs only about 10 times as many. Using the centred scheme therefore has a much lower computational cost when finding the numerical approximation of a differential equation to a set accuracy.

To work out the order of our alternative five point second derivative scheme, the process is very similar to that for all our other derivative approximations. We must use the Taylor series expansions of both u(x + 2h) and u(x - 2h) to rederive the approximation and include the error term. This gives us:

u''(x_i) = u(x_{i-2})/(4h^2) - u(x_i)/(2h^2) + u(x_{i+2})/(4h^2) - (h^2/3) u^(4)(x_i) + O(h^4)

As we can see from the equation above, the error term of our approximation in (3.9) is (h^2/3) u^(4)(x_i) + O(h^4). The error term is proportional to h^2, therefore, from our definitions above, the five point scheme is a second order approximation.

6.2 Non-linear ODEs

When discussing the order of an iterative method, we refer to the order of convergence of the method. This gives an idea of the speed at which the method converges to a solution with each iteration. An iterative method is said to converge when successive approximations are sufficiently close [10]. A first order iterative method converges to a solution at a linear rate, a second order iterative method converges at a quadratic rate, and so on.
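The difference between a linear and a quadratic rate can be illustrated on a small scalar root-finding example (an illustration only, not one of the project's problems): solving cos(x) = x by plain fixed-point iteration, which converges linearly, and by Newton's method, which converges quadratically:

```python
import math

def fixed_point(tol=1e-12):
    """x_{k+1} = cos(x_k): the error shrinks by a roughly constant factor per step
    (linear, i.e. first order, convergence)."""
    x, steps = 1.0, 0
    while abs(math.cos(x) - x) > tol:
        x = math.cos(x)
        steps += 1
    return x, steps

def newton(tol=1e-12):
    """Newton's method for f(x) = cos(x) - x: the error is roughly squared each
    step (quadratic, i.e. second order, convergence)."""
    x, steps = 1.0, 0
    while abs(math.cos(x) - x) > tol:
        x = x - (math.cos(x) - x) / (-math.sin(x) - 1.0)
        steps += 1
    return x, steps

x1, n1 = fixed_point()
x2, n2 = newton()
# both reach the same root, but the quadratic method needs far fewer iterations
```

For the same tolerance of 1e-12, the fixed-point iteration takes dozens of steps while Newton's method takes only a handful.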
Therefore Picard's method converges to a solution at a linear rate, as it is a first order iterative method. This can be seen in our graphs in section 5. Order is a very important concept for iterative methods: when a differential equation needs to be solved to a certain error tolerance, it does not matter how far we could improve the accuracy of the solution, as the iteration will be stopped

at the set tolerance. What matters is the number of steps needed to reach the set tolerance: the fewer steps taken, the cheaper solving the differential equation is computationally. So if we want the error of our solution to the differential equation to be less than 10^{-4}, a second order iterative method would require fewer steps than a first order method.

The final aspect of our iterative method for solving non-linear differential equations that we shall look at is how changing the mesh grading affects the accuracy of the results. From figure 24, we can see that increasing our gridsize leads to an increase in the accuracy of our results. This is because we are using the centred finite difference approximations to discretize our equation and, as discussed earlier, these are second order accurate. Therefore the smaller the distance h between our nodes, the greater the accuracy of our results. However, it is necessary to note that this can only be seen on the more complex differential equation, as in our first non-linear ODE example there was no change in accuracy when increasing the mesh grading.
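The second order behaviour of the centred approximations (against the first order behaviour of the one-sided ones) can be measured directly, by halving h and estimating the observed order from the error ratio; f(x) = sin(x) here is an arbitrary test function, not one of the report's problems:

```python
import math

def observed_order(diff, exact, h):
    """Estimate the order p of a difference approximation from errors at h and h/2:
    error ~ C h^p  =>  p ~ log2(err(h) / err(h/2))."""
    e1 = abs(diff(h) - exact)
    e2 = abs(diff(h / 2) - exact)
    return math.log(e1 / e2, 2)

x0 = 0.5
forward = lambda h: (math.sin(x0 + h) - math.sin(x0)) / h
centred = lambda h: (math.sin(x0 + h) - math.sin(x0 - h)) / (2 * h)

p_fwd = observed_order(forward, math.cos(x0), 0.01)   # about 1
p_cen = observed_order(centred, math.cos(x0), 0.01)   # about 2
```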

7 Conclusion

During the course of this project there were very few problems encountered. The derivation and implementation of the finite difference method was done without any hassle up to a certain point: problems were encountered when the size of the matrix was extended beyond a certain limit, due to the speed of the processor used and its memory capacity. An out of memory message would appear when the limit had been reached; however, the matrix sizes we were able to use gave a fair understanding of the relationship between gridsize and accuracy.

Picard's iterative method was implemented to solve non-linear differential equations and gave us results that matched what was expected. The results showed Picard's method to be a first order method and that increasing the gridsize gave us more accurate results.

All the tasks stated in the first chapter were completed, and all milestones and project deadlines were met. Attached at the end of this report is a Gantt chart showing the outline timing for the various aspects of the project.

7.1 Areas of further work

This project has many directions in which it can be extended. If we break the project up into two sections, the finite difference method and iterative methods for solving non-linear ordinary differential equations, both paths could be investigated further than has been done in this project. The finite difference method could be looked at in higher dimensions, or even used to solve partial differential equations. Higher derivatives could be approximated and hence numerical solutions to higher order differential equations could be examined. It is also possible to derive approximations that use more points and have a greater order of accuracy. In this report we have only looked at a uniformly discretized grid when applying the finite difference method; a non-uniform grid could provide us with more accurate or more efficient results.
When trying to solve non-linear differential equations with iterative methods, we could attempt to implement higher order methods. This would allow us to reach a set error tolerance in fewer steps and therefore at a lower computational cost. Additionally, other direct methods for solving a linear system of equations could be explored to improve computational cost.

References

[1] The 2nd Order Runge-Kutta Method.

[2] Runge-Kutta 4th Order Method for ODE, 2003.

[3] Finite difference method, discretized domain picture.

[4] MATLAB tutorial, 2009.

[5] General terms of ordinary differential equations, 2010.

[6] Gaussian elimination, February 2009.

[7] Uri M. Ascher, Robert M. M. Mattheij and Robert D. Russell, Numerical Solution of Boundary Value Problems for Ordinary Differential Equations, Chapter 2, 1988.

[8] Dr Catherine Powell, Partial Differential Equations and Vector Calculus, 2009.

[9] Picard's method.

[10] On the order of convergence of iterative methods, February 2006.

[11] Mitchell and Griffiths, The Finite Difference Method in Partial Differential Equations.


More information

Homework 1.1 and 1.2 WITH SOLUTIONS

Homework 1.1 and 1.2 WITH SOLUTIONS Math 220 Linear Algebra (Spring 2018) Homework 1.1 and 1.2 WITH SOLUTIONS Due Thursday January 25 These will be graded in detail and will count as two (TA graded) homeworks. Be sure to start each of these

More information

Finite difference models: one dimension

Finite difference models: one dimension Chapter 6 Finite difference models: one dimension 6.1 overview Our goal in building numerical models is to represent differential equations in a computationally manageable way. A large class of numerical

More information

1 Systems of Linear Equations

1 Systems of Linear Equations 1 Systems of Linear Equations Many problems that occur naturally involve finding solutions that satisfy systems of linear equations of the form a 11 x 1 + a 12 x 2 + + a 1n x n = b 1 a 21 x 1 + a 22 x

More information

Lecture 12: Solving Systems of Linear Equations by Gaussian Elimination

Lecture 12: Solving Systems of Linear Equations by Gaussian Elimination Lecture 12: Solving Systems of Linear Equations by Gaussian Elimination Winfried Just, Ohio University September 22, 2017 Review: The coefficient matrix Consider a system of m linear equations in n variables.

More information

Mon Jan Improved acceleration models: linear and quadratic drag forces. Announcements: Warm-up Exercise:

Mon Jan Improved acceleration models: linear and quadratic drag forces. Announcements: Warm-up Exercise: Math 2250-004 Week 4 notes We will not necessarily finish the material from a given day's notes on that day. We may also add or subtract some material as the week progresses, but these notes represent

More information

Introduction to Partial Differential Equations

Introduction to Partial Differential Equations Introduction to Partial Differential Equations Philippe B. Laval KSU Current Semester Philippe B. Laval (KSU) Key Concepts Current Semester 1 / 25 Introduction The purpose of this section is to define

More information

Ordinary Differential Equations

Ordinary Differential Equations CHAPTER 8 Ordinary Differential Equations 8.1. Introduction My section 8.1 will cover the material in sections 8.1 and 8.2 in the book. Read the book sections on your own. I don t like the order of things

More information

Lecture 3: Gaussian Elimination, continued. Lecture 3: Gaussian Elimination, continued

Lecture 3: Gaussian Elimination, continued. Lecture 3: Gaussian Elimination, continued Definition The process of solving a system of linear equations by converting the system to an augmented matrix is called Gaussian Elimination. The general strategy is as follows: Convert the system of

More information

Elementary Matrices. MATH 322, Linear Algebra I. J. Robert Buchanan. Spring Department of Mathematics

Elementary Matrices. MATH 322, Linear Algebra I. J. Robert Buchanan. Spring Department of Mathematics Elementary Matrices MATH 322, Linear Algebra I J. Robert Buchanan Department of Mathematics Spring 2015 Outline Today s discussion will focus on: elementary matrices and their properties, using elementary

More information

MTH Linear Algebra. Study Guide. Dr. Tony Yee Department of Mathematics and Information Technology The Hong Kong Institute of Education

MTH Linear Algebra. Study Guide. Dr. Tony Yee Department of Mathematics and Information Technology The Hong Kong Institute of Education MTH 3 Linear Algebra Study Guide Dr. Tony Yee Department of Mathematics and Information Technology The Hong Kong Institute of Education June 3, ii Contents Table of Contents iii Matrix Algebra. Real Life

More information

12/1/2015 LINEAR ALGEBRA PRE-MID ASSIGNMENT ASSIGNED BY: PROF. SULEMAN SUBMITTED BY: M. REHAN ASGHAR BSSE 4 ROLL NO: 15126

12/1/2015 LINEAR ALGEBRA PRE-MID ASSIGNMENT ASSIGNED BY: PROF. SULEMAN SUBMITTED BY: M. REHAN ASGHAR BSSE 4 ROLL NO: 15126 12/1/2015 LINEAR ALGEBRA PRE-MID ASSIGNMENT ASSIGNED BY: PROF. SULEMAN SUBMITTED BY: M. REHAN ASGHAR Cramer s Rule Solving a physical system of linear equation by using Cramer s rule Cramer s rule is really

More information

MATHEMATICAL METHODS INTERPOLATION

MATHEMATICAL METHODS INTERPOLATION MATHEMATICAL METHODS INTERPOLATION I YEAR BTech By Mr Y Prabhaker Reddy Asst Professor of Mathematics Guru Nanak Engineering College Ibrahimpatnam, Hyderabad SYLLABUS OF MATHEMATICAL METHODS (as per JNTU

More information

Row Reduced Echelon Form

Row Reduced Echelon Form Math 40 Row Reduced Echelon Form Solving systems of linear equations lies at the heart of linear algebra. In high school we learn to solve systems in or variables using elimination and substitution of

More information

Introduction to Mathematical Programming

Introduction to Mathematical Programming Introduction to Mathematical Programming Ming Zhong Lecture 6 September 12, 2018 Ming Zhong (JHU) AMS Fall 2018 1 / 20 Table of Contents 1 Ming Zhong (JHU) AMS Fall 2018 2 / 20 Solving Linear Systems A

More information

Chapter 2. Solving Systems of Equations. 2.1 Gaussian elimination

Chapter 2. Solving Systems of Equations. 2.1 Gaussian elimination Chapter 2 Solving Systems of Equations A large number of real life applications which are resolved through mathematical modeling will end up taking the form of the following very simple looking matrix

More information

Chapter 1. Root Finding Methods. 1.1 Bisection method

Chapter 1. Root Finding Methods. 1.1 Bisection method Chapter 1 Root Finding Methods We begin by considering numerical solutions to the problem f(x) = 0 (1.1) Although the problem above is simple to state it is not always easy to solve analytically. This

More information

Linear Algebra, Summer 2011, pt. 2

Linear Algebra, Summer 2011, pt. 2 Linear Algebra, Summer 2, pt. 2 June 8, 2 Contents Inverses. 2 Vector Spaces. 3 2. Examples of vector spaces..................... 3 2.2 The column space......................... 6 2.3 The null space...........................

More information

CHAPTER 8: MATRICES and DETERMINANTS

CHAPTER 8: MATRICES and DETERMINANTS (Section 8.1: Matrices and Determinants) 8.01 CHAPTER 8: MATRICES and DETERMINANTS The material in this chapter will be covered in your Linear Algebra class (Math 254 at Mesa). SECTION 8.1: MATRICES and

More information

CS 450 Numerical Analysis. Chapter 8: Numerical Integration and Differentiation

CS 450 Numerical Analysis. Chapter 8: Numerical Integration and Differentiation Lecture slides based on the textbook Scientific Computing: An Introductory Survey by Michael T. Heath, copyright c 2018 by the Society for Industrial and Applied Mathematics. http://www.siam.org/books/cl80

More information

Consider the following example of a linear system:

Consider the following example of a linear system: LINEAR SYSTEMS Consider the following example of a linear system: Its unique solution is x + 2x 2 + 3x 3 = 5 x + x 3 = 3 3x + x 2 + 3x 3 = 3 x =, x 2 = 0, x 3 = 2 In general we want to solve n equations

More information

MTH 2032 Semester II

MTH 2032 Semester II MTH 232 Semester II 2-2 Linear Algebra Reference Notes Dr. Tony Yee Department of Mathematics and Information Technology The Hong Kong Institute of Education December 28, 2 ii Contents Table of Contents

More information

NUMERICAL SOLUTION OF ODE IVPs. Overview

NUMERICAL SOLUTION OF ODE IVPs. Overview NUMERICAL SOLUTION OF ODE IVPs 1 Quick review of direction fields Overview 2 A reminder about and 3 Important test: Is the ODE initial value problem? 4 Fundamental concepts: Euler s Method 5 Fundamental

More information

Lecture 10: Powers of Matrices, Difference Equations

Lecture 10: Powers of Matrices, Difference Equations Lecture 10: Powers of Matrices, Difference Equations Difference Equations A difference equation, also sometimes called a recurrence equation is an equation that defines a sequence recursively, i.e. each

More information

Linear Algebra and Matrix Inversion

Linear Algebra and Matrix Inversion Jim Lambers MAT 46/56 Spring Semester 29- Lecture 2 Notes These notes correspond to Section 63 in the text Linear Algebra and Matrix Inversion Vector Spaces and Linear Transformations Matrices are much

More information

Review for Exam Find all a for which the following linear system has no solutions, one solution, and infinitely many solutions.

Review for Exam Find all a for which the following linear system has no solutions, one solution, and infinitely many solutions. Review for Exam. Find all a for which the following linear system has no solutions, one solution, and infinitely many solutions. x + y z = 2 x + 2y + z = 3 x + y + (a 2 5)z = a 2 The augmented matrix for

More information

1. (7pts) Find the points of intersection, if any, of the following planes. 3x + 9y + 6z = 3 2x 6y 4z = 2 x + 3y + 2z = 1

1. (7pts) Find the points of intersection, if any, of the following planes. 3x + 9y + 6z = 3 2x 6y 4z = 2 x + 3y + 2z = 1 Math 125 Exam 1 Version 1 February 20, 2006 1. (a) (7pts) Find the points of intersection, if any, of the following planes. Solution: augmented R 1 R 3 3x + 9y + 6z = 3 2x 6y 4z = 2 x + 3y + 2z = 1 3 9

More information

Chapter 4. Solving Systems of Equations. Chapter 4

Chapter 4. Solving Systems of Equations. Chapter 4 Solving Systems of Equations 3 Scenarios for Solutions There are three general situations we may find ourselves in when attempting to solve systems of equations: 1 The system could have one unique solution.

More information

Review Questions REVIEW QUESTIONS 71

Review Questions REVIEW QUESTIONS 71 REVIEW QUESTIONS 71 MATLAB, is [42]. For a comprehensive treatment of error analysis and perturbation theory for linear systems and many other problems in linear algebra, see [126, 241]. An overview of

More information

Linear Algebra. The analysis of many models in the social sciences reduces to the study of systems of equations.

Linear Algebra. The analysis of many models in the social sciences reduces to the study of systems of equations. POLI 7 - Mathematical and Statistical Foundations Prof S Saiegh Fall Lecture Notes - Class 4 October 4, Linear Algebra The analysis of many models in the social sciences reduces to the study of systems

More information

THE NULLSPACE OF A: SOLVING AX = 0 3.2

THE NULLSPACE OF A: SOLVING AX = 0 3.2 32 The Nullspace of A: Solving Ax = 0 11 THE NULLSPACE OF A: SOLVING AX = 0 32 This section is about the space of solutions to Ax = 0 The matrix A can be square or rectangular One immediate solution is

More information

A Brief Introduction to Numerical Methods for Differential Equations

A Brief Introduction to Numerical Methods for Differential Equations A Brief Introduction to Numerical Methods for Differential Equations January 10, 2011 This tutorial introduces some basic numerical computation techniques that are useful for the simulation and analysis

More information

Linear Equations and Matrix

Linear Equations and Matrix 1/60 Chia-Ping Chen Professor Department of Computer Science and Engineering National Sun Yat-sen University Linear Algebra Gaussian Elimination 2/60 Alpha Go Linear algebra begins with a system of linear

More information

Making the grade: Part II

Making the grade: Part II 1997 2009, Millennium Mathematics Project, University of Cambridge. Permission is granted to print and copy this page on paper for non commercial use. For other uses, including electronic redistribution,

More information

1.Chapter Objectives

1.Chapter Objectives LU Factorization INDEX 1.Chapter objectives 2.Overview of LU factorization 2.1GAUSS ELIMINATION AS LU FACTORIZATION 2.2LU Factorization with Pivoting 2.3 MATLAB Function: lu 3. CHOLESKY FACTORIZATION 3.1

More information

Euler s Method, Taylor Series Method, Runge Kutta Methods, Multi-Step Methods and Stability.

Euler s Method, Taylor Series Method, Runge Kutta Methods, Multi-Step Methods and Stability. Euler s Method, Taylor Series Method, Runge Kutta Methods, Multi-Step Methods and Stability. REVIEW: We start with the differential equation dy(t) dt = f (t, y(t)) (1.1) y(0) = y 0 This equation can be

More information

AM 205: lecture 14. Last time: Boundary value problems Today: Numerical solution of PDEs

AM 205: lecture 14. Last time: Boundary value problems Today: Numerical solution of PDEs AM 205: lecture 14 Last time: Boundary value problems Today: Numerical solution of PDEs ODE BVPs A more general approach is to formulate a coupled system of equations for the BVP based on a finite difference

More information

5. Hand in the entire exam booklet and your computer score sheet.

5. Hand in the entire exam booklet and your computer score sheet. WINTER 2016 MATH*2130 Final Exam Last name: (PRINT) First name: Student #: Instructor: M. R. Garvie 19 April, 2016 INSTRUCTIONS: 1. This is a closed book examination, but a calculator is allowed. The test

More information

Lecture Notes in Mathematics. Arkansas Tech University Department of Mathematics. The Basics of Linear Algebra

Lecture Notes in Mathematics. Arkansas Tech University Department of Mathematics. The Basics of Linear Algebra Lecture Notes in Mathematics Arkansas Tech University Department of Mathematics The Basics of Linear Algebra Marcel B. Finan c All Rights Reserved Last Updated November 30, 2015 2 Preface Linear algebra

More information