
Donnish Journals

Donnish Journal of Educational Research and Reviews Vol. 2(1) pp. 008-017, January 2015. http:///djerr
Copyright 2015 Donnish Journals

Original Research Paper

Solution of Linear Systems with MAXIMA

Niyazi Ari and Savaş Tuylu*

Computer Science, Nigerian Turkish Nile University, Nigeria.

Accepted 9th January, 2015.

The methods of calculus lie at the heart of the physical sciences and engineering. Maxima can help you make faster progress if you are just learning calculus. The examples in this research paper offer an opportunity to see some Maxima tools in the context of simple examples, but you will likely be thinking about much harder problems you want to solve as you see these tools used here. This research paper covers the solution of linear systems with MAXIMA.

Keywords: Linear system, Gauss elimination, LU decomposition, Gauss-Jordan elimination, Jacobi iteration method, Gauss-Seidel method, Conjugate Gradient method, Maxima.

1. INTRODUCTION

Maxima is a system for the manipulation of symbolic and numerical expressions, including differentiation, integration, Taylor series, Laplace transforms, ordinary differential equations, systems of linear equations, polynomials, sets, lists, vectors, matrices, tensors, and more. Maxima yields high-precision numeric results by using exact fractions, arbitrary-precision integers, and variable-precision floating-point numbers. Maxima can plot functions and data in two and three dimensions.

Maxima source code can be compiled on many computer operating systems, including Windows, Linux, and MacOS X. The source code for all systems and precompiled binaries for Windows and Linux are available at the SourceForge file manager.

Maxima is a descendant of Macsyma, the legendary computer algebra system developed in the late 1960s at the Massachusetts Institute of Technology. It is the only system based on that effort still publicly available and with an active user community, thanks to its open-source nature. Macsyma was revolutionary in its day, and many later systems, such as Maple and Mathematica, were inspired by it.

The Maxima branch of Macsyma was maintained by William Schelter from 1982 until he passed away in 2001. In 1998 he obtained permission to release the source code under the GNU General Public License (GPL). It was his efforts and skills that made the survival of Maxima possible. MAXIMA has been constantly updated and is used by researchers and engineers as well as by students.

2. SOLUTION OF LINEAR SYSTEMS

2.1. Introduction

The linear system has the form

Ax = b

where A is the coefficient matrix, x is the unknown vector, and b is the known vector.

*Corresponding Author: savastyludf@hotmail.com
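To make the notation concrete, the system Ax = b and the matrix-vector product can be written directly in code. This is an illustrative Python sketch (not the paper's Maxima code); the 3 x 3 system with solution (1, 2, 3) is the one used in the worked examples below.

```python
# Illustrative sketch: representing A x = b in plain Python.
# A is the coefficient matrix, x the unknown vector, b the known vector.

def matvec(A, x):
    """Multiply matrix A (a list of rows) by vector x."""
    return [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) for row in A]

A = [[1.0, -2.0, 5.0],
     [-2.0, 5.0, 1.0],
     [4.0, -1.0, 1.0]]
b = [12.0, 11.0, 5.0]

# A candidate x solves the system exactly when A x reproduces b.
x = [1.0, 2.0, 3.0]
print(matvec(A, x))  # → [12.0, 11.0, 5.0]
```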

There are two classes of methods used to solve systems of linear algebraic equations. They are:

2.1.1. Direct methods: Gauss elimination, LU decomposition, Gauss-Jordan elimination.

2.1.2. Iterative methods (indirect methods): Jacobi iteration method, Gauss-Seidel method, Conjugate Gradient method.

2.2. Gauss Elimination Method

Gauss elimination is the most familiar method, based on a row-by-row elimination. The function of the elimination is to transform the equations into the form Ux = c, where U is an upper triangular matrix. The rule of elimination: multiply one equation (the pivot equation) by a constant and subtract it from another equation. Then the equations are solved by using back substitution.

Example 2.1.

x1 - 2x2 + 5x3 = 12    (2.1)
-2x1 + 5x2 + x3 = 11   (2.2)
4x1 - x2 + x3 = 5      (2.3)

Find x1, x2, x3 by using the Gauss elimination method.

Take Eq. (2.1) as the pivot equation; x1 is eliminated by subtracting -2 x Eq. (2.1) from Eq. (2.2):

Eq. (2.2):        -2x1 + 5x2 + x3 = 11
-2 x Eq. (2.1):   -2x1 + 4x2 - 10x3 = -24

x2 + 11x3 = 35    (2.4)

Subtract 4 x Eq. (2.1) from Eq. (2.3):

Eq. (2.3):        4x1 - x2 + x3 = 5
4 x Eq. (2.1):    4x1 - 8x2 + 20x3 = 48

7x2 - 19x3 = -43    (2.5)

The same procedure is applied to Eqs. (2.4) and (2.5): taking Eq. (2.4) as the pivot equation, x2 is eliminated by subtracting 7 x Eq. (2.4) from Eq. (2.5):

Eq. (2.5):        7x2 - 19x3 = -43
7 x Eq. (2.4):    7x2 + 77x3 = 245

Therefore

-96x3 = -288    (2.6)

x3 = -288 / -96 = 3.

Putting x3 into Eq. (2.4) gives

x2 = 35 - 11 x 3 = 2.

Then, using x2 and x3 in Eq. (2.1) gives

x1 = 12 + 2x2 - 5x3 = 12 + 4 - 15 = 1.

Writing the back substitution in terms of Ux = c by using Eqs. (2.1), (2.4), (2.6):

| 1 -2   5 | | x1 |   |  12  |
| 0  1  11 | | x2 | = |  35  |
| 0  0 -96 | | x3 |   | -288 |

x1 = 1, x2 = 2, x3 = 3    (2.7)

The determinant of the coefficient matrix

    |  1 -2  5 |
A = | -2  5  1 |
    |  4 -1  1 |

is the same as the determinant of the upper triangular matrix U: det A = det U = (1)(1)(-96) = -96.

2.2.1. Pivoting

If the diagonal component of the coefficient matrix is zero, or becomes zero during the elimination, there will be a problem. We can solve this problem by interchanging rows. The algorithm of pivoting provides the following:

For each leading diagonal element, find the largest coefficient (in absolute value) in the remaining rows of that column.
Move it into the leading diagonal position by interchanging rows.
Proceed with the factorization (elimination).
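The elimination and pivoting steps above can be sketched in Python (an illustrative sketch, not the paper's Maxima code), tested on the system of Example 2.1:

```python
# Gauss elimination with partial pivoting -- a minimal sketch of
# Section 2.2 (illustrative Python, not the paper's Maxima code).

def gauss_solve(A, b):
    """Solve A x = b by reducing to U x = c, then back substitution."""
    n = len(A)
    A = [row[:] for row in A]  # work on copies
    c = b[:]
    for k in range(n - 1):
        # Partial pivoting: bring the largest pivot candidate to row k.
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        c[k], c[p] = c[p], c[k]
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]          # multiplier for the pivot row
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            c[i] -= m * c[k]
    # Back substitution on the upper triangular system U x = c.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (c[i] - s) / A[i][i]
    return x

A = [[1.0, -2.0, 5.0], [-2.0, 5.0, 1.0], [4.0, -1.0, 1.0]]
b = [12.0, 11.0, 5.0]
print(gauss_solve(A, b))  # ≈ [1.0, 2.0, 3.0]
```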

Example 2.2.

x1 - 2x2 + 5x3 = 12    (2.8)
-2x1 + 5x2 + x3 = 11   (2.9)
4x1 - x2 + x3 = 5      (2.10)

Find x1, x2, x3 by using pivoting.

Examine the values of the first column: 1, -2 and 4. The largest absolute value is 4, so the equations are reordered as

4x1 - x2 + x3 = 5      (2.11)
-2x1 + 5x2 + x3 = 11   (2.12)
x1 - 2x2 + 5x3 = 12    (2.13)

Eliminate x1:

Eq. (2.12):              -2x1 + 5x2 + x3 = 11
(-2/4) x Eq. (2.11):     -2x1 + 0.5x2 - 0.5x3 = -2.5
Subtracting:             4.5x2 + 1.5x3 = 13.5

Eq. (2.13):              x1 - 2x2 + 5x3 = 12
(1/4) x Eq. (2.11):      x1 - 0.25x2 + 0.25x3 = 1.25
Subtracting:             -1.75x2 + 4.75x3 = 10.75

Therefore

4.5x2 + 1.5x3 = 13.5        (2.14)
-1.75x2 + 4.75x3 = 10.75    (2.15)

Eliminate x2, taking Eq. (2.14) as the pivot equation since |4.5| > |-1.75|:

Eq. (2.15):                 -1.75x2 + 4.75x3 = 10.75
(-1.75/4.5) x Eq. (2.14):   -1.75x2 - (7/12)x3 = -5.25
Subtracting:                (16/3)x3 = 16

Therefore

(16/3)x3 = 16    (2.16)

x3 = 3.

Putting x3 into Eq. (2.15) gives

x2 = (10.75 - 4.75 x 3) / (-1.75) = 2.

By using x2 and x3 in Eq. (2.12),

-2x1 = 11 - 5 x 2 - 3 = -2, so x1 = 1.

Find the result: x1 = 1, x2 = 2, x3 = 3.

2.3. LU Decomposition

LU decomposition, a modified form of Gaussian elimination, is a matrix decomposition of a square matrix A into the product of a lower triangular matrix (L) and an upper triangular matrix (U) of the form

A = LU    (2.17)

where L has zeros above the diagonal and U has zeros below the diagonal. For a 3 x 3 matrix A,

| a11 a12 a13 |   | l11  0    0  | | u11 u12 u13 |
| a21 a22 a23 | = | l21 l22   0  | |  0  u22 u23 |    (2.18)
| a31 a32 a33 |   | l31 l32  l33 | |  0   0  u33 |

Either the diagonal of L or the diagonal of U can be taken as unity, for Doolittle's decomposition and Crout's decomposition respectively. For Doolittle's decomposition (l11 = l22 = l33 = 1),

| a11 a12 a13 |   | u11      u12                u13                      |
| a21 a22 a23 | = | l21 u11  l21 u12 + u22      l21 u13 + u23            |    (2.19)
| a31 a32 a33 |   | l31 u11  l31 u12 + l32 u22  l31 u13 + l32 u23 + u33  |

For the linear system Ax = b, write L(Ux) = b; with forward substitution in terms of Lc = b for c, and back substitution in terms of Ux = c for x, the linear system is solved.

Example 2.3.

x1 - 2x2 + 5x3 = 12    (2.20)
-2x1 + 5x2 + x3 = 11   (2.21)
4x1 - x2 + x3 = 5      (2.22)

Find x1, x2, x3 by using the LU Doolittle decomposition. The coefficient matrix is factorized into lower and upper triangular matrices:

|  1 -2  5 |   |  1   0   0 | | u11 u12 u13 |
| -2  5  1 | = | l21  1   0 | |  0  u22 u23 |    (2.23)
|  4 -1  1 |   | l31 l32  1 | |  0   0  u33 |

The upper and lower triangular matrices are solved:

1 x u11 = 1   →  u11 = 1
1 x u12 = -2  →  u12 = -2    (2.24)
1 x u13 = 5   →  u13 = 5

l21 u11 = -2        →  l21 = -2
l21 u12 + u22 = 5   →  u22 = 5 - 4 = 1    (2.25)
l21 u13 + u23 = 1   →  u23 = 1 + 10 = 11

l31 u11 = 4                   →  l31 = 4
l31 u12 + l32 u22 = -1        →  l32 = -1 + 8 = 7    (2.26)
l31 u13 + l32 u23 + u33 = 1   →  u33 = 1 - 20 - 77 = -96

    | 1  0  0 |
L = | -2 1  0 |    (2.27)
    | 4  7  1 |

    | 1 -2   5 |
U = | 0  1  11 |    (2.28)
    | 0  0 -96 |

Forward substitution:

Lc = b    (2.29)

| 1  0  0 | | c1 |   | 12 |          |  12  |
| -2 1  0 | | c2 | = | 11 |  →  c =  |  35  |    (2.30)
| 4  7  1 | | c3 |   |  5 |          | -288 |

Back substitution:

Ux = c    (2.31)

| 1 -2   5 | | x1 |   |  12  |
| 0  1  11 | | x2 | = |  35  |    (2.32)
| 0  0 -96 | | x3 |   | -288 |

Find the solution x1 = 1, x2 = 2, x3 = 3.

2.4. Gauss-Jordan Elimination Method

The Gauss-Jordan method is an elimination method that puts zeros above and below each pivot element. It is generally used for matrix inverse calculation. The steps of the Gauss-Jordan elimination are: the augmented matrix [A b] is used; with the help of row operations (interchanging, scaling, replacement), the identity matrix is obtained on the left.

Example 2.4.

x1 - 2x2 + 5x3 = 12    (2.33)
-2x1 + 5x2 + x3 = 11   (2.34)
4x1 - x2 + x3 = 5      (2.35)

Find x1, x2, x3 by using the Gauss-Jordan elimination method.

The augmented matrix for the above example is

|  1 -2  5 | 12 |
| -2  5  1 | 11 |    (2.36)
|  4 -1  1 |  5 |

The third row is moved to the first row, considering pivoting:

|  4 -1  1 |  5 |
|  1 -2  5 | 12 |    (2.37)
| -2  5  1 | 11 |

Multiply the first row by -1/4 and add to the second row:

|  4 -1     1    |  5    |
|  0 -1.75  4.75 | 10.75 |    (2.38)
| -2  5     1    | 11    |

Multiply the first row by 1/2 and add to the third row:

|  4 -1     1    |  5    |
|  0 -1.75  4.75 | 10.75 |    (2.39)
|  0  4.5   1.5  | 13.5  |

The third row is moved to the second row, considering pivoting:

|  4 -1     1    |  5    |
|  0  4.5   1.5  | 13.5  |    (2.40)
|  0 -1.75  4.75 | 10.75 |

Multiply the second row by 2/9 and add to the first row:

|  4  0     1.33 |  8    |
|  0  4.5   1.5  | 13.5  |    (2.41)
|  0 -1.75  4.75 | 10.75 |

Multiply the second row by 7/18 and add to the third row:

|  4  0    1.33 |  8   |
|  0  4.5  1.5  | 13.5 |    (2.42)
|  0  0    5.33 | 16   |

Multiply the third row by -1/4 and add to the first row:

|  4  0    0    |  4   |
|  0  4.5  1.5  | 13.5 |    (2.43)
|  0  0    5.33 | 16   |

Multiply the third row by -9/32 and add to the second row:

|  4  0    0    |  4 |
|  0  4.5  0    |  9 |    (2.44)
|  0  0    5.33 | 16 |

Multiply the first, second and third rows by 1/4, 2/9, and 3/16 respectively:

|  1  0  0 | 1 |
|  0  1  0 | 2 |    (2.45)
|  0  0  1 | 3 |

x1 = 1, x2 = 2, x3 = 3.
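The Doolittle factorization of Section 2.3 can be sketched in Python (an illustrative sketch without pivoting, not the paper's Maxima code), reproducing the worked example's L, U, and solution:

```python
# Doolittle LU decomposition (unit diagonal on L), as in Section 2.3,
# followed by forward substitution L c = b and back substitution U x = c.
# Minimal sketch without pivoting; illustrative Python, not Maxima.

def lu_doolittle(A):
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        L[i][i] = 1.0
        for j in range(i, n):        # row i of U
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):    # column i of L
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

def lu_solve(L, U, b):
    n = len(b)
    c = [0.0] * n
    for i in range(n):               # forward substitution: L c = b
        c[i] = b[i] - sum(L[i][k] * c[k] for k in range(i))
    x = [0.0] * n
    for i in range(n - 1, -1, -1):   # back substitution: U x = c
        x[i] = (c[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))) / U[i][i]
    return x

A = [[1.0, -2.0, 5.0], [-2.0, 5.0, 1.0], [4.0, -1.0, 1.0]]
L, U = lu_doolittle(A)
print(U[2][2])                             # → -96.0, as in the worked example
print(lu_solve(L, U, [12.0, 11.0, 5.0]))   # ≈ [1.0, 2.0, 3.0]
```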

The last column gives the solution vector.

2.5. Jacobi Iteration Method

The Jacobi method is an iterative method, which means guessing the solution and then improving the solution iteratively. The necessary condition to use this method is that the diagonal has no zeros. After each equation is divided by its diagonal element, the coefficient matrix can be written as a summation of identity, lower and upper triangular parts. It is important to note that these triangular matrices have no relation to the matrices L and U used in the previous sections (direct methods); they are only additive components of A. So, the solution of the linear equations can be shown as

Ax = b                (2.46)
(I + L + U)x = b      (2.47)
x = b - (L + U)x      (2.48)

As a result, for the i-th equation of the system,

Σ_{j=1..n} a_ij x_j = b_i    (2.49)

and, assuming a_ii ≠ 0, the (k+1)-th iteration solution can be written

x_i^(k+1) = (b_i - Σ_{j≠i} a_ij x_j^(k)) / a_ii    (2.50)

The steps of the Jacobi iteration method are:

Guess an initial solution x^(0).
Divide each equation by its diagonal term.
Find S_i = Σ_{j≠i} a_ij x_j^(k).
Find the iterated solution x_i = (b_i - S_i) / a_ii.
Repeat for the n equations and find x_i for i = 1, 2, ..., n.
Go to the third step for the next iteration using the new iterated solution.

Example 2.5.

x1 - 2x2 + 5x3 = 12    (2.51)
-2x1 + 5x2 + x3 = 11   (2.52)
4x1 - x2 + x3 = 5      (2.53)

Find x1, x2, x3 by using the Jacobi iteration method. The equations are first reordered so that the largest coefficients lie on the diagonal; then divide all coefficients by the diagonal term and find L and U:

x1 - (1/4)x2 + (1/4)x3 = 5/4      (2.54)

-(2/5)x1 + x2 + (1/5)x3 = 11/5    (2.55)

(1/5)x1 - (2/5)x2 + x3 = 12/5     (2.56)

    |  0    0    0 |
L = | -2/5  0    0 |    (2.57)
    |  1/5 -2/5  0 |

and

    |  0  -1/4  1/4 |
U = |  0   0    1/5 |    (2.58)
    |  0   0    0   |

The iteration process from step k to step (k+1) is

| x1 |(k+1)   |  5/4 |   |  0   -1/4  1/4 | | x1 |(k)
| x2 |      = | 11/5 | - | -2/5  0    1/5 | | x2 |      (2.59)
| x3 |        | 12/5 |   |  1/5 -2/5  0   | | x3 |

Calculation: guess x1 = 0, x2 = 0, x3 = 0 to start. The results of the iteration are shown in Table 2.1.

Final result: x1 = 1, x2 = 2, x3 = 3.

2.6. Gauss-Seidel Method

In the Jacobi iteration method, all components of x are modified only for the next iteration. In the Gauss-Seidel method, each new component of x is used immediately, before the sweep through all of the linear equations is completed. As before, the condition that the diagonal has no zeros is assumed. The coefficient matrix (after dividing by the diagonal) can be written as A = I + L + U in terms of its diagonal, lower and upper triangular parts. So, the solution of the linear equations can be shown as

Ax = b                      (2.60)
(I + L + U)x = b            (2.61)
x = (I + L)^(-1) (b - Ux)   (2.62)

As a result, assuming a_ii ≠ 0, the (k+1)-th iteration solution can be written

x_i^(k+1) = (b_i - Σ_{j<i} a_ij x_j^(k+1) - Σ_{j>i} a_ij x_j^(k)) / a_ii    (2.63)

The steps of the Gauss-Seidel method are:

Guess an initial solution x^(0).
Divide each equation by its diagonal term.
Find S_i1 = Σ_{j<i} a_ij x_j^(k+1) (components already updated in this sweep).
Find S_i2 = Σ_{j>i} a_ij x_j^(k).
Find S_i = S_i1 + S_i2.
Find the iterated solution x_i = (b_i - S_i) / a_ii.
Repeat for the n equations and find x_i for i = 1, 2, ..., n.
Go to the third step for the next iteration using the new iterated solution.

Example 2.6.

x1 - 2x2 + 5x3 = 12    (2.64)
-2x1 + 5x2 + x3 = 11   (2.65)
4x1 - x2 + x3 = 5      (2.66)

Find x1, x2, x3 by using the Gauss-Seidel method. Reorder the equations and divide all coefficients by the diagonal term:

x1 - (1/4)x2 + (1/4)x3 = 5/4      (2.67)

-(2/5)x1 + x2 + (1/5)x3 = 11/5    (2.68)

(1/5)x1 - (2/5)x2 + x3 = 12/5     (2.69)
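The Jacobi sweep of Section 2.5 can be sketched in Python (an illustrative sketch, not the paper's Maxima code); the first sweep from the zero vector gives 1.25, 2.2, 2.4, matching the worked example.

```python
# Jacobi iteration (Section 2.5): every component of x is updated from
# the previous iterate only.  Illustrative Python sketch, not Maxima.

def jacobi(A, b, x0, sweeps):
    n = len(b)
    x = x0[:]
    for _ in range(sweeps):
        x_new = []
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x_new.append((b[i] - s) / A[i][i])   # requires A[i][i] != 0
        x = x_new                                # update all at once
    return x

# Rows ordered so the large coefficients sit on the diagonal
# (diagonal dominance), as in the worked example.
A = [[4.0, -1.0, 1.0], [-2.0, 5.0, 1.0], [1.0, -2.0, 5.0]]
b = [5.0, 11.0, 12.0]
x = jacobi(A, b, [0.0, 0.0, 0.0], 30)
print([round(v, 4) for v in x])  # ≈ [1.0, 2.0, 3.0]
```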

Table 2.1: Results of the iteration, obtained after each step

k    x1      x2      x3
1    0.0000  0.0000  0.0000
2    1.2500  2.2000  2.4000
3    1.2000  2.2200  3.0300
4    1.0475  2.0740  3.0480
5    1.0065  2.0094  3.0120
6    0.9973  1.9986  3.0025
7    0.9990  1.9984  3.0000
8    0.9996  1.9996  2.9996
9    1.0000  1.9999  2.9999
10   1.0000  2.0000  3.0000

For the Gauss-Seidel iteration, L and U are the same as before:

    |  0    0    0 |              |  0  -1/4  1/4 |
L = | -2/5  0    0 |    and   U = |  0   0    1/5 |
    |  1/5 -2/5  0 |              |  0   0    0   |

(I + L) x^(k+1) = b - U x^(k)    (2.70)

|  1    0   0 | | x1 |(k+1)   |  5/4 |   | 0 -1/4 1/4 | | x1 |(k)
| -2/5  1   0 | | x2 |      = | 11/5 | - | 0  0   1/5 | | x2 |      (2.71)
|  1/5 -2/5 1 | | x3 |        | 12/5 |   | 0  0   0   | | x3 |

Iteration process: guess x1 = 0, x2 = 0, x3 = 0 to start, and use forward substitution at each step. The results of the iteration, obtained after each step, are shown in Table 2.2.

Find the result: x1 = 1, x2 = 2, x3 = 3.

2.7. Conjugate Gradient Method

If the coefficient matrix is symmetric (A^T = A) and positive definite (x^T Ax > 0 for all x ≠ 0), the minimization of the quadratic function of the form

f(x) = (1/2) x^T A x - b^T x

can be considered for the solution. The steps of the conjugate gradient method are as follows:

Guess an initial solution x^(0).
Find the initial residual r^(0) = b - A x^(0).
The initial search direction (descent vector) is taken equal to the initial residual: s^(0) = r^(0).
Find the iterated solution x^(k+1) = x^(k) + α_k s^(k), where α_k = (r^(k)T s^(k)) / (s^(k)T A s^(k)).
Find the iterated residual r^(k+1) = r^(k) - α_k A s^(k).

Table 2.2: Results of the iteration, obtained after each step

k   x1      x2      x3
1   0.0000  0.0000  0.0000
2   1.2500  2.7000  3.2300
3   1.1175  2.0010  2.9769
4   1.0060  2.0070  3.0016
5   1.0014  2.0002  2.9998
6   1.0001  2.0001  3.0000
7   1.0000  2.0000  3.0000

Find the iterated search direction

s^(k+1) = r^(k+1) + β_k s^(k),   β_k = (r^(k+1)T r^(k+1)) / (r^(k)T r^(k)).

Go to the fourth step for the next iteration using the new iterated solution.

Example 2.7.

5x1 - x2 + 2x3 = 9      (2.72)
-x1 + 4x2 - 3x3 = -2    (2.73)
2x1 - 3x2 + 5x3 = 11    (2.74)

Find x1, x2, x3 by using the conjugate gradient method.

    |  5 -1  2 |              |  9 |
A = | -1  4 -3 |    and   b = | -2 |
    |  2 -3  5 |              | 11 |

Guess x^(0) = [0 0 0]^T to start.

The initial residual r^(0) = b - Ax^(0) = [9, -2, 11]^T.

The initial descent vector s^(0) = [9, -2, 11]^T.

The iteration results are shown in Table 2.3 below.

Find the result: x1 = 1, x2 = 2, x3 = 3.

2.7.1. Stabilized Bi-conjugate Gradient Method

If the coefficient matrix is not symmetric, the stabilized bi-conjugate gradient method can be considered. The steps of this method can be summarized as follows:

Guess an initial solution x^(0).
Find the initial residual r^(0) = b - Ax^(0).
Set the initial values v^(0) = p^(0) = 0, c_1 = r^(0)T r^(0), and c_0 = α_0 = ω_0 = 1.
Find β_k = (c_k / c_(k-1)) (α_(k-1) / ω_(k-1)) and p^(k) = r^(k-1) + β_k (p^(k-1) - ω_(k-1) v^(k-1)).
Find v^(k) = A p^(k) and α_k = c_k / (r^(0)T v^(k)).
Find s = r^(k-1) - α_k v^(k).
Find t = As.
Find ω_k = (t^T s) / (t^T t).
Find c_(k+1) = -ω_k (r^(0)T t).
Find the iterated solution x^(k) = x^(k-1) + α_k p^(k) + ω_k s.

Table 2.3: Results of the iteration, obtained after each step

k   x1       x2        x3       r                      s
1   0.0000   0.0000    0.0000   [9, -2, 11]            [9, -2, 11]
2   1.1660   -0.2591   1.4251   [0.06, 4.48, 0.77]     [0.96, 4.28, 1.87]
3   1.595    1.645     2.257    [-1.84, -0.23, 1.47]   [-1.58, 0.93, 1.97]
4   1.0000   2.000     3.000    [0.00, 0.00, 0.00]     [0.00, 0.00, 0.00]

Find the iterated residual r^(k) = s - ω_k t.
Go to the fourth step for the next iteration using the new iterated solution.

Example 2.8.

Consider again the system of Example 2.7, Eqs. (2.72)-(2.74):

5x1 - x2 + 2x3 = 9
-x1 + 4x2 - 3x3 = -2
2x1 - 3x2 + 5x3 = 11

Find x1, x2, x3 by using the stabilized bi-conjugate gradient method. Guess x^(0) = [0 0 0]^T to start. The iteration results are shown in Table 2.4 below.

Find the result: x1 = 1, x2 = 2, x3 = 3.

2.8. MAXIMA Application

2.8.1. Introduction

The solution of simultaneous equations occurs frequently in applications of engineering mathematics. Nonlinear equations arise in a number of instances, such as optimization, eigenvalue calculations, simulations, etc.

2.8.2. Examples

[Examples 1-9: Maxima worked sessions applying the methods above.]
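As a stand-in illustration for the iterative solvers of Sections 2.6 and 2.7 (the paper's own examples here are Maxima sessions, not reproduced), the Gauss-Seidel and conjugate gradient iterations can be sketched in Python and applied to the same two example systems:

```python
# Illustrative Python stand-ins for the iterative methods of
# Sections 2.6-2.7; not the paper's Maxima code.

def gauss_seidel(A, b, x0, sweeps):
    """Each new component of x is used immediately within the sweep."""
    n = len(b)
    x = x0[:]
    for _ in range(sweeps):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x

def conjugate_gradient(A, b, x0, steps):
    """CG for a symmetric positive definite A (Section 2.7)."""
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    matvec = lambda M, v: [dot(row, v) for row in M]
    x = x0[:]
    r = [bi - axi for bi, axi in zip(b, matvec(A, x))]   # residual
    s = r[:]                                             # search direction
    for _ in range(steps):
        As = matvec(A, s)
        alpha = dot(r, s) / dot(s, As)
        x = [xi + alpha * si for xi, si in zip(x, s)]
        r_new = [ri - alpha * asi for ri, asi in zip(r, As)]
        beta = dot(r_new, r_new) / dot(r, r)
        s = [ri + beta * si for ri, si in zip(r_new, s)]
        r = r_new
    return x

# Gauss-Seidel on the reordered (diagonally dominant) system of Example 2.6:
A1 = [[4.0, -1.0, 1.0], [-2.0, 5.0, 1.0], [1.0, -2.0, 5.0]]
print([round(v, 4) for v in gauss_seidel(A1, [5.0, 11.0, 12.0], [0.0] * 3, 20)])

# CG on the symmetric positive definite system of Example 2.7:
A2 = [[5.0, -1.0, 2.0], [-1.0, 4.0, -3.0], [2.0, -3.0, 5.0]]
print([round(v, 4) for v in conjugate_gradient(A2, [9.0, -2.0, 11.0], [0.0] * 3, 3)])
```

Both calls converge to the exact solution (1, 2, 3); CG needs at most three steps for a 3 x 3 system, consistent with Table 2.3.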

Table 2.4: Results of the iteration, obtained after each step

k   x1      x2      x3
1   0.0000  0.0000  0.0000
2   1.807   2.2039  3.035
3   1.078   2.020   3.005
4   1.0000  2.0000  3.0000

3. CONCLUSION

This research paper covers each part of the solution of linear systems, supports applications in both the physical sciences and engineering, and helps the reader make faster progress in understanding the solution of linear systems. The paper particularly helps in understanding this part of linear algebra, and the work is going to be extended to other parts of linear algebra.

4. ACKNOWLEDGEMENTS

We would like to thank the Maxima developers and the Nigerian Turkish Nile University (NTNU) for their friendly help.