
Donnish Journal of Educational Research and Reviews. Vol 2(1), January 2015.
Copyright 2015 Donnish Journals.

Original Research Paper

Solution of Linear Systems with MAXIMA

Niyazi Ari and Savaş Tuylu*
Computer Science, Nigerian Turkish Nile University, Nigeria.
*Corresponding Author: savastyludf@hotmail.com
Accepted 9th January, 2015.

The methods of calculus lie at the heart of the physical sciences and engineering. Maxima can help you make faster progress if you are just learning calculus. The examples in this research paper offer an opportunity to see some Maxima tools in the context of simple examples, but you will likely be thinking about much harder problems you want to solve as you see these tools used here. This research paper covers the solution of linear systems with MAXIMA.

Keywords: Linear system, Gauss elimination, LU Decomposition, Gauss-Jordan elimination, Jacobi iteration method, Gauss-Seidel method, Conjugate Gradient method, Maxima.

1. INTRODUCTION

Maxima is a system for the manipulation of symbolic and numerical expressions, including differentiation, integration, Taylor series, Laplace transforms, ordinary differential equations, systems of linear equations, polynomials, sets, lists, vectors, matrices, tensors, and more. Maxima yields high precision numeric results by using exact fractions, arbitrary precision integers, and variable precision floating point numbers. Maxima can plot functions and data in two and three dimensions.

Maxima source code can be compiled on many computer operating systems, including Windows, Linux, and MacOS X. The source code for all systems and precompiled binaries for Windows and Linux are available at the SourceForge file manager.

Maxima is a descendant of Macsyma, the legendary computer algebra system developed in the late 1960s at the Massachusetts Institute of Technology. It is the only system based on that effort still publicly available and with an active user community, thanks to its open source nature. Macsyma was revolutionary in its day, and many later systems, such as Maple and Mathematica, were inspired by it. The Maxima branch of Macsyma was maintained by William Schelter from 1982 until he passed away in 2001. In 1998 he obtained permission to release the source code under the GNU General Public License (GPL). It was his efforts and skills that made the survival of Maxima possible. Maxima has been constantly updated and is used by researchers and engineers as well as by students.

2. SOLUTION OF LINEAR SYSTEMS

2.1. Introduction

The linear system has the form

    Ax = b

where A is the coefficient matrix, x is the unknown vector, and b is the known vector.
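As a quick illustration of how such a system is entered and handled in a Maxima session (this snippet is not part of the original article; the numbers are arbitrary, and matrix, linsolve and invert are standard Maxima functions):

    /* A small linear system A.x = b entered in Maxima (illustrative values only). */
    A : matrix([2, 1],
               [1, 3])$          /* coefficient matrix A                 */
    b : matrix([5], [10])$       /* known vector b                       */

    linsolve([2*x + y = 5, x + 3*y = 10], [x, y]);
                                 /* -> [x = 1, y = 3]                    */
    invert(A) . b;               /* the same solution as a column vector */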

There are two kinds of methods used to solve systems of linear algebraic equations. They are:

2.1.1. Direct methods: Gauss elimination, LU Decomposition, Gauss-Jordan elimination.
2.1.2. Iterative methods (indirect methods): Jacobi iteration method, Gauss-Seidel method, Conjugate Gradient method.

2.2. Gauss Elimination Method

Gauss elimination is the most familiar method, based on a row by row elimination. The function of the elimination is to transform the equations into the form Ux = c, where U is an upper triangular matrix. The rule of elimination: multiply one equation (the pivot equation) by a constant and subtract it from another equation. Then the equations are solved by using back substitution.

Example 2.1.

    x1 - 2x2 + 5x3 = 12                                  (2.1)
    -2x1 + 5x2 + x3 = 11                                 (2.2)
    4x1 - x2 + x3 = 5                                    (2.3)

Find x1, x2, x3 by using the Gauss Elimination Method.

Take Eq.(2.1) as the pivot equation; x1 is eliminated by subtracting -2 x Eq.(2.1) from Eq.(2.2):

    Eq.(2.2):         -2x1 + 5x2 + x3 = 11
    -2 x Eq.(2.1):    -2x1 + 4x2 - 10x3 = -24

Therefore:

    x2 + 11x3 = 35                                       (2.4)

Subtract 4 x Eq.(2.1) from Eq.(2.3):

    Eq.(2.3):         4x1 - x2 + x3 = 5
    4 x Eq.(2.1):     4x1 - 8x2 + 20x3 = 48

Therefore:

    7x2 - 19x3 = -43                                     (2.5)

The same procedure is applied to Eqs.(2.4) and (2.5): taking Eq.(2.4) as the pivot equation, x2 is eliminated by subtracting 7 x Eq.(2.4) from Eq.(2.5):

    Eq.(2.5):         7x2 - 19x3 = -43
    7 x Eq.(2.4):     7x2 + 77x3 = 245

Therefore:

    -96x3 = -288                                         (2.6)
    x3 = 3

Putting x3 into Eq.(2.4) gives x2 + 11(3) = 35, so x2 = 2. Then, using x2 and x3 in Eq.(2.1), x1 - 2(2) + 5(3) = 12 gives x1 = 1.

Writing the back substitution in the form Ux = c, using Eqs.(2.1), (2.4) and (2.6):

    [ 1  -2    5 ] [x1]   [   12 ]
    [ 0   1   11 ] [x2] = [   35 ]                       (2.7)
    [ 0   0  -96 ] [x3]   [ -288 ]

which gives x1 = 1, x2 = 2, x3 = 3. The determinant of the coefficient matrix A is the same as the determinant of the upper triangular matrix U.

Pivoting

If a diagonal component of the coefficient matrix is zero, or becomes zero during the elimination, there will be a problem. We can solve this problem by interchanging rows. The algorithm of pivoting provides the following:
- Leading diagonal element
- Largest coefficient in the rows
- Moving it into the leading diagonal position
- Factorization (elimination)
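The elimination of Example 2.1 can be reproduced with Maxima's built-in row-reduction commands. The sketch below is not part of the original article; triangularize, echelon, determinant and linsolve are standard Maxima functions, and the triangular form returned by triangularize may differ from (2.7) by row scaling.

    /* Example 2.1 checked with Maxima's built-in row reduction. */
    A  : matrix([ 1, -2, 5],
                [-2,  5, 1],
                [ 4, -1, 1])$
    b  : matrix([12], [11], [5])$
    Ab : addcol(A, b)$            /* augmented matrix [A | b]          */

    triangularize(Ab);            /* upper-triangular form, cf. (2.7)  */
    echelon(Ab);                  /* the same with unit pivots         */
    determinant(A);               /* -> -96, equal to det(U)           */

    linsolve([x1 - 2*x2 + 5*x3 = 12,
              -2*x1 + 5*x2 + x3 = 11,
              4*x1 - x2 + x3 = 5], [x1, x2, x3]);
                                  /* -> [x1 = 1, x2 = 2, x3 = 3]       */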

Example 2.2.

    x1 - 2x2 + 5x3 = 12                                  (2.8)
    -2x1 + 5x2 + x3 = 11                                 (2.9)
    4x1 - x2 + x3 = 5                                    (2.10)

Find x1, x2, x3 by using pivoting.

Examine the values of the first column: 1, -2 and 4. The largest absolute value is 4, so the third equation is taken first:

    4x1 - x2 + x3 = 5                                    (2.11)
    -2x1 + 5x2 + x3 = 11                                 (2.12)
    x1 - 2x2 + 5x3 = 12                                  (2.13)

Eliminate x1 from Eqs.(2.12) and (2.13) using Eq.(2.11) as the pivot equation:

    Eq.(2.12):            -2x1 + 5x2 + x3 = 11
    (-1/2) x Eq.(2.11):   -2x1 + 0.5x2 - 0.5x3 = -2.5

    Eq.(2.13):            x1 - 2x2 + 5x3 = 12
    (1/4) x Eq.(2.11):    x1 - 0.25x2 + 0.25x3 = 1.25

Therefore:

    4.5x2 + 1.5x3 = 13.5                                 (2.14)
    -1.75x2 + 4.75x3 = 10.75                             (2.15)

Eliminate x2 from Eq.(2.15) using Eq.(2.14) as the pivot equation:

    Eq.(2.15):               -1.75x2 + 4.75x3 = 10.75
    (-1.75/4.5) x Eq.(2.14): -1.75x2 - 0.583x3 = -5.25

Therefore:

    (16/3)x3 = 16                                        (2.16)
    x3 = 3

Putting x3 into Eq.(2.15) gives -1.75x2 + 4.75(3) = 10.75, so x2 = 2. Using x2 and x3 in Eq.(2.12), -2x1 + 5(2) + 3 = 11 gives x1 = 1.

Find the result: x1 = 1, x2 = 2, x3 = 3.

2.3. LU Decomposition

LU decomposition, a modified form of Gaussian elimination, is a decomposition of a square matrix A into the product of a lower triangular matrix L and an upper triangular matrix U,

    A = LU                                               (2.17)

where L has zeros above the diagonal and U has zeros below the diagonal. For a 3 x 3 matrix A,

    [ a11 a12 a13 ]   [ l11  0   0  ] [ u11 u12 u13 ]
    [ a21 a22 a23 ] = [ l21 l22  0  ] [  0  u22 u23 ]    (2.18)
    [ a31 a32 a33 ]   [ l31 l32 l33 ] [  0   0  u33 ]

Either the diagonal of L or the diagonal of U can be taken as unity, for Doolittle's decomposition and Crout's decomposition respectively. For Doolittle's decomposition,

    [ a11 a12 a13 ]   [ u11      u12               u13                     ]
    [ a21 a22 a23 ] = [ l21*u11  l21*u12 + u22     l21*u13 + u23           ]    (2.19)
    [ a31 a32 a33 ]   [ l31*u11  l31*u12 + l32*u22 l31*u13 + l32*u23 + u33 ]

For the linear system Ax = b we have L(Ux) = b; with forward substitution in terms of Lc = b for c, and back substitution in terms of Ux = c for x, the linear system is solved.
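In Maxima an LU factorization is available through the linearalgebra package; the sketch below is not from the original article and assumes the package's lu_factor and get_lu_factors functions. Because lu_factor may pivot rows, the factors it returns can differ from a hand-computed Doolittle factorization by a row permutation.

    /* LU factorization of the coefficient matrix used in the examples. */
    load("linearalgebra")$

    A : matrix([ 1, -2, 5],
               [-2,  5, 1],
               [ 4, -1, 1])$

    fact : lu_factor(A)$               /* packed LU factorization              */
    [P, L, U] : get_lu_factors(fact)$  /* permutation, lower and upper factors */
    P . L . U;                         /* should reproduce A                   */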

Example 2.3.

    x1 - 2x2 + 5x3 = 12                                  (2.20)
    -2x1 + 5x2 + x3 = 11                                 (2.21)
    4x1 - x2 + x3 = 5                                    (2.22)

Find x1, x2, x3 by using LU (Doolittle's) decomposition.

The coefficient matrix is factorized into lower and upper triangular matrices:

    [  1 -2  5 ]   [ 1    0   0 ] [ u11 u12 u13 ]
    [ -2  5  1 ] = [ l21  1   0 ] [  0  u22 u23 ]        (2.23)
    [  4 -1  1 ]   [ l31 l32  1 ] [  0   0  u33 ]

The upper and lower triangular matrices are solved:

    1*u11 = 1    ->  u11 = 1
    1*u12 = -2   ->  u12 = -2                            (2.24)
    1*u13 = 5    ->  u13 = 5

    l21*u11 = -2          ->  l21 = -2
    l21*u12 + u22 = 5     ->  u22 = 1                    (2.25)
    l21*u13 + u23 = 1     ->  u23 = 11

    l31*u11 = 4                   ->  l31 = 4
    l31*u12 + l32*u22 = -1        ->  l32 = 7            (2.26)
    l31*u13 + l32*u23 + u33 = 1   ->  u33 = -96

Therefore

    L = [  1  0  0 ]                                     (2.27)
        [ -2  1  0 ]
        [  4  7  1 ]

    U = [ 1  -2    5 ]                                   (2.28)
        [ 0   1   11 ]
        [ 0   0  -96 ]

Forward substitution, Lc = b:

    [  1  0  0 ] [c1]   [ 12 ]
    [ -2  1  0 ] [c2] = [ 11 ]                           (2.29)
    [  4  7  1 ] [c3]   [  5 ]

    c1 = 12,  c2 = 35,  c3 = -288                        (2.30)

Back substitution, Ux = c:

    [ 1  -2    5 ] [x1]   [   12 ]
    [ 0   1   11 ] [x2] = [   35 ]                       (2.32)
    [ 0   0  -96 ] [x3]   [ -288 ]

Find the solution: x1 = 1, x2 = 2, x3 = 3.

2.4. Gauss-Jordan Elimination Method

The Gauss-Jordan method is an elimination method which puts zeros above and below each pivot element. It is generally used for matrix inverse calculation. The steps of the Gauss-Jordan elimination are: the augmented matrix [A b] is used; with the help of row operations (interchanging, scaling, replacement), the identity matrix is obtained on the left.

Example 2.4.

    x1 - 2x2 + 5x3 = 12                                  (2.33)
    -2x1 + 5x2 + x3 = 11                                 (2.34)
    4x1 - x2 + x3 = 5                                    (2.35)

Find x1, x2, x3 by using the Gauss-Jordan Elimination Method.

The augmented matrix for the above example is

    [  1 -2  5 | 12 ]
    [ -2  5  1 | 11 ]                                    (2.36)
    [  4 -1  1 |  5 ]

The third row is moved to the first row considering pivoting:

    [  4 -1  1 |  5 ]
    [  1 -2  5 | 12 ]                                    (2.37)
    [ -2  5  1 | 11 ]

Multiply the first row by -1/4 and add to the second row:

    [  4 -1     1    |  5    ]
    [  0 -1.75  4.75 | 10.75 ]                           (2.38)
    [ -2  5     1    | 11    ]

Multiply the first row by 1/2 and add to the third row:

    [  4 -1     1    |  5    ]
    [  0 -1.75  4.75 | 10.75 ]                           (2.39)
    [  0  4.5   1.5  | 13.5  ]

The third row is moved to the second row considering pivoting:

    [  4 -1     1    |  5    ]
    [  0  4.5   1.5  | 13.5  ]                           (2.40)
    [  0 -1.75  4.75 | 10.75 ]

Multiply the second row by 2/9 and add to the first row:

    [  4  0     4/3  |  8    ]
    [  0  4.5   1.5  | 13.5  ]                           (2.41)
    [  0 -1.75  4.75 | 10.75 ]

Multiply the second row by 7/18 and add to the third row:

    [  4  0    4/3  |  8    ]
    [  0  4.5  1.5  | 13.5  ]                            (2.42)
    [  0  0    16/3 | 16    ]

Multiply the third row by -1/4 and add to the first row:

    [  4  0    0    |  4    ]
    [  0  4.5  1.5  | 13.5  ]                            (2.43)
    [  0  0    16/3 | 16    ]

Multiply the third row by -9/32 and add to the second row:

    [  4  0    0    |  4    ]
    [  0  4.5  0    |  9    ]                            (2.44)
    [  0  0    16/3 | 16    ]

Multiply the first, second and third rows by 1/4, 2/9 and 3/16 respectively:

    [ 1  0  0 | 1 ]
    [ 0  1  0 | 2 ]                                      (2.45)
    [ 0  0  1 | 3 ]

The last column gives the solution vector: x1 = 1, x2 = 2, x3 = 3.

2.5. Jacobi Iteration Method

The Jacobi method is an iterative method, which means guessing the solution and then improving it iteratively. The necessary condition to use this method is that the diagonal has no zeros. The coefficient matrix A can be written as a sum of its diagonal, lower triangular and upper triangular parts. It is important to note that these triangular matrices have no relation to the matrices L and U used in the previous sections (direct methods); they are only additive components of A. After dividing each equation by its diagonal term, the solution of the linear equations can be written as

    Ax = b                                               (2.46)
    (I + L + U)x = b                                     (2.47)
    x = b - (L + U)x                                     (2.48)

As a result, for the i-th equation of the system,

    Σ_{j=1..n} a_ij x_j = b_i                            (2.49)

and, considering a_ii ≠ 0, the (k+1)-th iterate of the solution can be written as

    x_i^(k+1) = ( b_i - Σ_{j≠i} a_ij x_j^(k) ) / a_ii    (2.50)

The steps of the Jacobi iteration method are:
- Guess an initial solution x^(0).
- Divide each equation by its diagonal term.
- Find S_i = Σ_{j≠i} a_ij x_j^(k).
- Find the iterated solution x_i = (b_i - S_i) / a_ii.
- Repeat for the n equations and find x_i for i = 1, 2, ..., n.
- Go to the third step for the next iteration, using the new iterated solution.
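The Jacobi sweep (2.50) is easy to prototype directly in Maxima. The sketch below is not part of the original article; the test matrix is a hypothetical diagonally dominant system (chosen so that the iteration is guaranteed to converge), not the system of Example 2.5.

    /* One Jacobi sweep: x_i <- (b_i - sum of a_ij*x_j over j # i) / a_ii */
    jacobi_sweep(A, b, x) := block([n : length(b), xnew : copylist(x), s],
      for i : 1 thru n do (
        s : 0,
        for j : 1 thru n do
          if j # i then s : s + A[i, j] * x[j],
        xnew[i] : (b[i] - s) / A[i, i]
      ),
      xnew)$

    /* Hypothetical diagonally dominant test system; exact solution [1, 2, -1]. */
    A : matrix([10, -1,  2],
               [-1, 11, -1],
               [ 2, -1, 10])$
    b : [6, 22, -10]$

    x : [0, 0, 0]$                    /* initial guess x(0)            */
    for k : 1 thru 20 do x : jacobi_sweep(A, b, x)$
    float(x);                         /* -> close to [1.0, 2.0, -1.0]  */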

Example 2.5.

    x1 - 2x2 + 5x3 = 12                                  (2.51)
    -2x1 + 5x2 + x3 = 11                                 (2.52)
    4x1 - x2 + x3 = 5                                    (2.53)

Find x1, x2, x3 by using the Jacobi Iteration Method.

Divide all coefficients by the diagonal terms:

    x1 - 2x2 + 5x3 = 12                                  (2.54)
    -(2/5)x1 + x2 + (1/5)x3 = 11/5                       (2.55)
    4x1 - x2 + x3 = 5                                    (2.56)

then find L and U:

    L = [  0    0  0 ]                                   (2.57)
        [ -2/5  0  0 ]
        [  4   -1  0 ]

    U = [ 0  -2   5  ]                                   (2.58)
        [ 0   0  1/5 ]
        [ 0   0   0  ]

The iteration process equation from step k to step (k+1) is

    x1^(k+1) = 12 + 2 x2^(k) - 5 x3^(k)
    x2^(k+1) = 11/5 + (2/5) x1^(k) - (1/5) x3^(k)        (2.59)
    x3^(k+1) = 5 - 4 x1^(k) + x2^(k)

Calculation: guess x1 = 0, x2 = 0, x3 = 0 to start. The results of the iteration are shown in Table 2.1.

Final result: x1 = 1, x2 = 2, x3 = 3.

2.6. Gauss-Seidel Method

For the Jacobi Iteration Method, all components of x are updated only for the next iteration. For the Gauss-Seidel Method, each new component of x is used immediately, before all of the linear equations have been processed. As before, the condition that the diagonal has no zeros is assumed. The coefficient matrix A can be written as A = I + L + U in terms of its diagonal, lower triangular and upper triangular parts. So the solution of the linear equations can be written as

    Ax = b                                               (2.60)
    (I + L + U)x = b                                     (2.61)
    x = (I + L)^(-1) (b - Ux)                            (2.62)

As a result, considering a_ii ≠ 0, the (k+1)-th iterate of the solution can be written as

    x_i^(k+1) = ( b_i - Σ_{j<i} a_ij x_j^(k+1) - Σ_{j>i} a_ij x_j^(k) ) / a_ii    (2.63)

The steps of the Gauss-Seidel Method are:
- Guess an initial solution x^(0).
- Divide each equation by its diagonal term.
- Find S_i1 = Σ_{j<i} a_ij x_j^(k+1), using the already-updated components.
- Find S_i2 = Σ_{j>i} a_ij x_j^(k).
- Find S_i = S_i1 + S_i2.
- Find the iterated solution x_i = (b_i - S_i) / a_ii.
- Repeat for the n equations and find x_i for i = 1, 2, ..., n.
- Go to the third step for the next iteration, using the new iterated solution.

Example 2.6.

    x1 - 2x2 + 5x3 = 12                                  (2.64)
    -2x1 + 5x2 + x3 = 11                                 (2.65)
    4x1 - x2 + x3 = 5                                    (2.66)

Find x1, x2, x3 by using the Gauss-Seidel Method.

Divide all coefficients by the diagonal terms:

    x1 - 2x2 + 5x3 = 12                                  (2.67)
    -(2/5)x1 + x2 + (1/5)x3 = 11/5                       (2.68)
    4x1 - x2 + x3 = 5                                    (2.69)

Table 2.1: Results of iteration, obtained after each step (columns k, x1, x2, x3).

With

    L = [  0    0  0 ]        U = [ 0  -2   5  ]
        [ -2/5  0  0 ]            [ 0   0  1/5 ]         (2.70)
        [  4   -1  0 ]            [ 0   0   0  ]

the iteration from step k to step (k+1), using each new component as soon as it is available, is

    x1^(k+1) = 12 + 2 x2^(k) - 5 x3^(k)
    x2^(k+1) = 11/5 + (2/5) x1^(k+1) - (1/5) x3^(k)      (2.71)
    x3^(k+1) = 5 - 4 x1^(k+1) + x2^(k+1)

Iteration process: guess x1 = 0, x2 = 0, x3 = 0 to start and use forward substitution. The results of the iteration, obtained after each step, are shown in Table 2.2.

Find result: x1 = 1, x2 = 2, x3 = 3.
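For comparison with the Jacobi sketch given after Section 2.5, the Gauss-Seidel sweep differs only in that each newly computed component is used immediately, as in (2.63). Again, this sketch is not part of the original article and uses the same hypothetical diagonally dominant test system.

    /* One Gauss-Seidel sweep: new components are used as soon as available. */
    gauss_seidel_sweep(A, b, x) := block([n : length(b), xx : copylist(x), s],
      for i : 1 thru n do (
        s : 0,
        for j : 1 thru n do
          if j # i then s : s + A[i, j] * xx[j],   /* xx[j] is already updated for j < i */
        xx[i] : (b[i] - s) / A[i, i]
      ),
      xx)$

    A : matrix([10, -1,  2],
               [-1, 11, -1],
               [ 2, -1, 10])$          /* hypothetical test system               */
    b : [6, 22, -10]$                  /* exact solution [1, 2, -1]              */

    x : [0, 0, 0]$
    for k : 1 thru 10 do x : gauss_seidel_sweep(A, b, x)$
    float(x);                          /* typically converges faster than Jacobi */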

Table 2.2: Results of iteration, obtained after each step (columns k, x1, x2, x3).

2.7. Conjugate Gradient Method

If the coefficient matrix is symmetric (A = Aᵀ) and positive definite (xᵀAx > 0), the minimization of the quadratic function of the form

    f(x) = (1/2) xᵀ A x - bᵀ x

can be considered for the solution. The steps of the Conjugate Gradient method are as follows:
- Guess an initial solution x^(0).
- Find the initial residual r^(0) = b - A x^(0).
- The initial search direction (descent vector) is taken to be the same as the initial residual, s^(0) = r^(0).
- Find the iterated solution x^(k+1) = x^(k) + α_k s^(k), with α_k = (s^(k)ᵀ r^(k)) / (s^(k)ᵀ A s^(k)).
- Find the iterated residual r^(k+1) = r^(k) - α_k A s^(k).
- Find the iterated search direction s^(k+1) = r^(k+1) + β_k s^(k), with β_k = (r^(k+1)ᵀ r^(k+1)) / (r^(k)ᵀ r^(k)).
- Go to the fourth step for the next iteration, using the new iterated solution.

Example 2.7.

    x1 - x2 + 2x3 = 9                                    (2.72)
    -x1 + 4x2 - 3x3 = -2                                 (2.73)
    2x1 - 3x2 + 5x3 = 1                                  (2.74)

Find x1, x2, x3 by using the Conjugate Gradient Method.

    A = [  1 -1  2 ]        b = [  9 ]
        [ -1  4 -3 ]            [ -2 ]
        [  2 -3  5 ]            [  1 ]

Guess x^(0) = [0, 0, 0]ᵀ to start. The initial residual is r^(0) = b - A x^(0) = [9, -2, 1]ᵀ, and the initial descent vector is s^(0) = [9, -2, 1]ᵀ. The iteration results are shown in Table 2.3 below; the last row gives the solution.
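The conjugate gradient steps listed above translate almost line by line into Maxima. The sketch below is not part of the original article; it represents column vectors as one-column matrices and is demonstrated on a small hypothetical symmetric positive definite system rather than on Example 2.7. In exact rational arithmetic the loop terminates in at most n iterations.

    /* Conjugate Gradient: alpha = (s.r)/(s.As), r <- r - alpha*As,        */
    /* beta = (r_new.r_new)/(r_old.r_old), s <- r + beta*s.                */
    cg_solve(A, b, x0) := block([x : x0, r, s, As, sAs, alpha, beta, rr_old, rr_new],
      r : b - A . x,
      s : r,
      rr_old : (transpose(r) . r)[1, 1],
      for k : 1 thru length(b) do
        if rr_old # 0 then (
          As     : A . s,
          sAs    : (transpose(s) . As)[1, 1],
          alpha  : rr_old / sAs,
          x      : x + alpha * s,
          r      : r - alpha * As,
          rr_new : (transpose(r) . r)[1, 1],
          beta   : rr_new / rr_old,
          s      : r + beta * s,
          rr_old : rr_new
        ),
      x)$

    /* Hypothetical symmetric positive definite test system. */
    A : matrix([4, 1],
               [1, 3])$
    b : matrix([1], [2])$
    cg_solve(A, b, matrix([0], [0]));  /* -> matrix([1/11], [7/11]) */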

Table 2.3: Results of iteration, obtained after each step (columns k, x1, x2, x3, r, s). The residual r and search direction s start at [9, -2, 1] and reach [0.00, 0.00, 0.00] at the final step.

2.8. Stabilized Bi-conjugate Gradient Method

If the coefficient matrix is not symmetric, the Stabilized Bi-conjugate Gradient Method can be considered. The steps of this method can be summarized as follows:
- Guess an initial solution x^(0).
- Find the initial residual r^(0) = b - A x^(0).
- Take the initial values v^(0) = p^(0) = 0 and c_0 = (r^(0))ᵀ r^(0).
- Find β_k = (c_k / c_{k-1})(α_{k-1} / ω_{k-1}) and p^(k) = r^(k-1) + β_k (p^(k-1) - ω_{k-1} v^(k-1)).
- Find v^(k) = A p^(k) and α_k = c_k / ((r^(0))ᵀ v^(k)).
- Find s = r^(k-1) - α_k v^(k).
- Find t = A s.
- Find ω_k = (tᵀ s) / (tᵀ t).
- Find c_{k+1} = -ω_k (r^(0))ᵀ t.
- Find the iterated solution x^(k) = x^(k-1) + α_k p^(k) + ω_k s.
- Find the iterated residual r^(k) = s - ω_k t.
- Go to the fourth step for the next iteration, using the new iterated solution.

Example 2.8.

Consider the same system as in Example 2.7, Eqs. (2.72)-(2.74). Find x1, x2, x3 by using the Stabilized Bi-conjugate Gradient Method. Guess x^(0) = [0, 0, 0]ᵀ to start. The iteration results are shown in Table 2.4 below; the last row gives the solution.

3. MAXIMA Application

3.1. Introduction

The solution of simultaneous equations occurs frequently in applications of engineering mathematics. Nonlinear equations arise in a number of instances, such as optimization, eigenvalue calculations, simulations, etc.

3.2. Examples
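As an illustration of this section (not taken from the original article), the system of Eqs. (2.1)-(2.3) can be handled in a single Maxima session with standard commands:

    /* The system of Eqs. (2.1)-(2.3) in one Maxima session. */
    A : matrix([ 1, -2, 5],
               [-2,  5, 1],
               [ 4, -1, 1])$
    b : matrix([12], [11], [5])$

    xvec : invert(A) . b;            /* -> matrix([1], [2], [3])            */
    A . xvec - b;                    /* -> zero column, confirming A.x = b  */
    float(xvec);                     /* the same solution in floating point */

    solve([x1 - 2*x2 + 5*x3 = 12,
           -2*x1 + 5*x2 + x3 = 11,
           4*x1 - x2 + x3 = 5], [x1, x2, x3]);
                                     /* -> [[x1 = 1, x2 = 2, x3 = 3]]       */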

Table 2.4: Results of iteration, obtained after each step (columns k, x1, x2, x3).

CONCLUSION

This research paper works through each part of the solution of linear systems; it supports applications in both the physical sciences and engineering, helps the reader make faster progress, and helps the reader understand the solution of linear systems more quickly. The paper particularly helps with parts of linear algebra and is going to be extended to other parts of linear algebra.

4. ACKNOWLEDGEMENTS

We would like to thank the Maxima developers and the Nigerian Turkish Nile University (NTNU) for their friendly help.

REFERENCES

[1] R. H. Rand, Introduction to Maxima.
[2] R. Dodier, Minimal Maxima, 2005.
[4] N. Ari, G. Apaydin, Symbolic Computation Techniques for Electromagnetics with MAXIMA and MAPLE, Lambert Academic Publishing.
[5] F. Szabo, Linear Algebra: An Introduction Using MATHEMATICA, Academic Press, 2000.
[6] S. Lipschutz, M. Lipson, Linear Algebra, Schaum's Outlines, McGraw-Hill, 2009.
[7] S. Lipschutz, Linear Algebra, Schaum's Solved Problems Series, McGraw-Hill, 1989.
[8] M. Van Nguyen, Linear Algebra with MAXIMA.
[9] N. Ari, Engineering Mathematics with MAXIMA, SDU, Almaty, 2012.
[10] N. Ari, Lecture notes, University of Technology, Zurich, Switzerland.
[11] N. Ari, Symbolic Computation of Electromagnetics with Maxima, 2013.
