Solving System of Linear Equations in Parallel using Multithread


K. Rajalakshmi
Sri Parasakthi College for Women, Courtallam-627 802, Tirunelveli Dist., Tamil Nadu, India

Abstract- This paper deals with efficient parallel algorithms for solving systems of linear equations using multithreading. Since each thread represents one processor, multithreading minimizes execution time. We compare the time complexity and processor complexity of the sequential and parallel approaches, and conclude that the parallel algorithm performs better than the sequential one.

Key Words - Parallel, Multithread, Crout's Reduction, Sequential, Algorithm

I. INTRODUCTION

Solving systems of linear equations is probably one of the most visible applications of linear algebra. Systems of linear equations appear in almost every branch of science and engineering, and many scientific and engineering problems can be cast in this form. Applications include digital signal processing, geometry, networks, temperature and heat distribution, chemistry, linear programming, games, estimation, weather forecasting, economics, image processing, video conferencing, oceanography and many kinds of statistical analysis (for example in econometrics, biostatistics and experimental design). Specialized applications such as transportation problems and traffic-flow problems require special techniques for setting up the system of equations. Linear equations can also be used to find the best way of maximizing profits. Other applications include the design and computer analysis of circuits, power system networks, chemical engineering processes, macroeconomic models, queuing systems, polynomial curve fitting and medicine.

A parallel algorithm, as opposed to a traditional sequential algorithm, is one that can be executed a piece at a time on many different processing devices, with the pieces put back together at the end to obtain the correct result.
Java is a popular language with support for parallel execution integrated into the language, so it is interesting to investigate its usefulness for executing scientific programs in parallel. This paper describes one method for solving linear equations in parallel using Java's thread mechanism. Java lets programs interleave multiple program steps through the use of threads; for example, one thread can control an animation while another performs a computation. On a multiprocessor or multi-core system, different threads can run literally simultaneously on different processors or cores. Threads and processes differ from one operating system to another, but in general a thread is contained inside a process, and different threads in the same process share some resources while different processes do not. We consider implementations of a linear-equation method and compare their performance; the platform used is a Windows XP system.

The rest of the paper is organized as follows: Section II solves linear equations by Crout's Reduction Method, both sequentially and in parallel. Section III compares the results of sequential and parallel execution of the programs. Section IV concludes.

II. LITERATURE REVIEW

An efficient parallel algorithm for the solution of a tridiagonal linear system of equations was given by H. S. Stone (1973), who introduced a new class of iterative methods for the numerical solution of linear simultaneous equations applicable to parallel computers. Cetin K. Koc (1990) formed a parallel algorithm for the exact solution of linear equations via a congruence technique. A new technique for the efficient parallel solution of very large linear systems of equations on a SIMD processor, implemented on a parallel computer, was presented by Okon Hanson Akpan in 1994; his result required N > P,
where N is the problem size and P is the number of processors. Solving linear equation systems using parallel processing was attempted by Jorge C. Pais and Raimundo M. Delgado, who used 2 to 16 processors for the parallel execution, since the work was developed on a parallel computer with 16 IMS T800-20 transputers, each with 2 megabytes of RAM, and who also compared the efficiency.

Volume 9 Issue 1 Oct 2017 51 ISSN: 2319-1058

Scalable parallel algorithms for solving sparse systems of linear equations were developed by Anshul Gupta (1998). A class of parallel algorithms for solving large sparse linear systems on multiprocessors was presented by Xiaoge Wang, Richard M. M. Chen, Xue Wu and Xinghua An (2000), who implemented the algorithms on a cluster of PCs. A threaded sparse LU factorization utilizing hierarchical parallelism and data layouts was developed by Joshua Dennis Booth, Sivasankaran Rajamanickam and Heidi K. Thornquist (2012). Solving systems of interval linear equations in parallel using a multi-threaded model and the interval extended zero method was done by Mariusz Pilarek and Roman Wyrzykowski (2012).

A. Crout's Reduction Method

Let the system of equations to be solved be

    AX = B    (1)

In this method we decompose the coefficient matrix A as LU, where L is a general lower triangular matrix and U is a unit upper triangular matrix, i.e. each of the leading diagonal elements of U equals unity:

\[
L = \begin{bmatrix}
  l_{11} & 0      & 0      & \cdots & 0 \\
  l_{21} & l_{22} & 0      & \cdots & 0 \\
  l_{31} & l_{32} & l_{33} & \cdots & 0 \\
  \vdots & \vdots & \vdots & \ddots & \vdots \\
  l_{n1} & l_{n2} & l_{n3} & \cdots & l_{nn}
\end{bmatrix},
\qquad
U = \begin{bmatrix}
  1 & u_{12} & u_{13} & \cdots & u_{1n} \\
  0 & 1      & u_{23} & \cdots & u_{2n} \\
  0 & 0      & 1      & \cdots & u_{3n} \\
  \vdots & \vdots & \vdots & \ddots & \vdots \\
  0 & 0      & 0      & \cdots & 1
\end{bmatrix}.
\]

Using A = LU in (1), we get

    LUX = B
    L^{-1}LUX = L^{-1}B    (2)
    UX = C, where C = L^{-1}B    (3)

System (3) can easily be solved for X by back substitution. Thus Crout's Reduction Method expresses A in the form LU and solves the equivalent system UX = C instead of the given system AX = B; the same operation of pre-multiplying by L^{-1} that reduces A to U also transforms B to C.

To decompose A into LU, the elements of L and U are obtained by equating the product LU entrywise with A = [a_{ij}]. The order in which the elements of the triangular matrices are computed is as follows: the first column elements of L, the first row elements of U, the second column elements of L, the second row elements of U, and so on.
B. Implementation of Sequential Algorithm

Compute the elements of the first column of L (the lower triangular matrix), after appending the right-hand side B to A as column n+1:

    For i = 1 to n
        a_{i,n+1} = b_i
        l_{i1} = a_{i1}
    Next i

Compute the elements of the first row of U (the upper triangular matrix), including the augmented column:

    For j = 2 to n+1
        u_{1j} = a_{1j} / l_{11}
    Next j
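The loop bounds used throughout this section implement the recurrences obtained by equating A = LU entrywise; stated as a reference derivation (a standard result, written here with the convention u_{mm} = 1):

```latex
% Crout recurrences, obtained by equating A = LU entrywise
% (U unit upper triangular, so u_{mm} = 1)
\begin{align*}
  l_{i1} &= a_{i1}, & i &= 1,\dots,n, \\
  u_{1j} &= a_{1j}/l_{11}, & j &= 2,\dots,n, \\
  l_{im} &= a_{im} - \sum_{k=1}^{m-1} l_{ik}\,u_{km}, & i &= m,\dots,n, \\
  u_{mj} &= \frac{1}{l_{mm}}\Bigl(a_{mj} - \sum_{k=1}^{m-1} l_{mk}\,u_{kj}\Bigr), & j &= m+1,\dots,n,
\end{align*}
```

with m running from 2 to n. Applying the same formulas to the augmented column (j = n+1) produces C = L^{-1}B alongside U.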

Compute the elements of the successive columns of L (the lower triangular matrix) and of the successive rows of U (the upper triangular matrix) alternately:

    For m = 2 to n
        For i = m to n
            For k = 1 to m-1
                a_{im} = a_{im} - l_{ik} * u_{km}
            Next k
            l_{im} = a_{im}
        Next i
        For j = m+1 to n+1
            For k = 1 to m-1
                a_{mj} = a_{mj} - l_{mk} * u_{kj}
            Next k
            u_{mj} = a_{mj} / l_{mm}
        Next j
    Next m

Compute the values of the unknowns x_n, x_{n-1}, ..., x_1 by backward substitution:

    For k = 1 to n
        i = n - k + 1
        x_i = u_{i,n+1}
        For j = i+1 to n
            x_i = x_i - u_{ij} * x_j
        Next j
    Next k

Sequential Algorithm (CRM)
Input : order of the system n, matrix a[1:n, 1:n+1] with a[1:n, 1:n] = A, right-hand side b[1:n]
Output : x[1:n]
1.  for i = 1 to n
2.      a[i, n+1] = b[i]
3.      l[i, 1] = a[i, 1]
4.  next i
5.  for j = 2 to n+1
6.      u[1, j] = a[1, j] / l[1, 1]
7.  next j
8.  for m = 2 to n
9.      for i = m to n
10.         for k = 1 to m-1
11.             a[i, m] = a[i, m] - l[i, k] * u[k, m]
12.         next k
13.         l[i, m] = a[i, m]
14.     next i
15.     for j = m+1 to n+1
16.         for k = 1 to m-1
17.             a[m, j] = a[m, j] - l[m, k] * u[k, j]
18.         next k
19.         u[m, j] = a[m, j] / l[m, m]
20.     next j
21. next m
22. for k = 1 to n
23.     i = n - k + 1
24.     x[i] = u[i, n+1]
25.     for j = i+1 to n
26.         x[i] = x[i] - u[i, j] * x[j]
27.     next j
28. next k
29. end

C. Implementation of Parallel Algorithm

Compute the elements of the first column of the lower triangular matrix and the first row of the upper triangular matrix. Then compute the elements of the successive columns of the lower triangular matrix and the successive rows of the upper triangular matrix, where the entries of each column and of each row are calculated in parallel. Finally, compute the values of the unknowns by back substitution.

Parallel Algorithm (CRM)
Input : order of the system n, augmented matrix a[1:n, 1:n+1]
Output : x[1:n]
1.  for i = 1 to n
2.      l[i, 1] = a[i, 1]
3.  next i
4.  for j = 2 to n+1
5.      u[1, j] = a[1, j] / l[1, 1]
6.  next j
7.  for m = 2 to n
8.      for i = m to n do in parallel
9.          for k = 1 to m-1
10.             a[i, m] = a[i, m] - l[i, k] * u[k, m]
11.         next k
12.         l[i, m] = a[i, m]
13.     end parallel
14.     for j = m+1 to n+1 do in parallel
15.         for k = 1 to m-1
16.             a[m, j] = a[m, j] - l[m, k] * u[k, j]
17.         next k
18.         u[m, j] = a[m, j] / l[m, m]
19.     end parallel
20. next m
21. for k = 1 to n
22.     i = n - k + 1
23.     x[i] = u[i, n+1]
24.     for j = i+1 to n
25.         x[i] = x[i] - u[i, j] * x[j]
26.     next j
27. next k
28. end

III. RESULTS

The analysis of the algorithms takes the following issues into consideration: the time complexity of the sequential algorithm and the time complexity of the parallel algorithm.
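As one concrete rendering of the sequential algorithm above, the following Java sketch stores U in an n-by-(n+1) array whose last column holds C = L^{-1}B; the class and method names (CroutSequential, solve) are illustrative choices, not taken from the paper's listing:

```java
// Sketch of sequential Crout's reduction for AX = B (0-indexed arrays).
public class CroutSequential {
    static double[] solve(double[][] a, double[] b) {
        int n = b.length;
        double[][] l = new double[n][n];
        double[][] u = new double[n][n + 1]; // column n holds C = L^-1 B

        for (int i = 0; i < n; i++) {
            l[i][0] = a[i][0];                // first column of L
        }
        u[0][0] = 1.0;                        // U is unit upper triangular
        for (int j = 1; j <= n; j++) {        // first row of U, including C
            u[0][j] = (j < n ? a[0][j] : b[0]) / l[0][0];
        }

        for (int m = 1; m < n; m++) {
            for (int i = m; i < n; i++) {     // column m of L
                double s = a[i][m];
                for (int k = 0; k < m; k++) s -= l[i][k] * u[k][m];
                l[i][m] = s;
            }
            u[m][m] = 1.0;
            for (int j = m + 1; j <= n; j++) { // row m of U, including C
                double s = (j < n ? a[m][j] : b[m]);
                for (int k = 0; k < m; k++) s -= l[m][k] * u[k][j];
                u[m][j] = s / l[m][m];
            }
        }

        double[] x = new double[n];
        for (int i = n - 1; i >= 0; i--) {    // back substitution on UX = C
            x[i] = u[i][n];
            for (int j = i + 1; j < n; j++) x[i] -= u[i][j] * x[j];
        }
        return x;
    }

    public static void main(String[] args) {
        // 2x2 example: 2x + y = 5, x + 3y = 10  ->  x = 1, y = 3
        double[] x = solve(new double[][]{{2, 1}, {1, 3}}, new double[]{5, 10});
        System.out.println(x[0] + " " + x[1]); // prints "1.0 3.0"
    }
}
```

For the 2-by-2 system 2x + y = 5, x + 3y = 10 the sketch prints 1.0 3.0, i.e. x = 1 and y = 3.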
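The parallel variant can be sketched along the same lines. The paper spawns one Java thread per matrix row (N = P); as a simplification, this hypothetical sketch uses parallel IntStream ranges over the mutually independent entries of each column of L and each row of U, while the stages m remain sequential because each stage depends on the results of the previous ones:

```java
import java.util.stream.IntStream;

// Sketch of parallel Crout's reduction: within each stage m, the entries of
// column m of L (and then of row m of U) are computed concurrently.
public class CroutParallel {
    static double[] solve(double[][] a, double[] b) {
        int n = b.length;
        double[][] l = new double[n][n];
        double[][] u = new double[n][n + 1]; // column n holds C = L^-1 B

        IntStream.range(0, n).parallel().forEach(i -> l[i][0] = a[i][0]);
        u[0][0] = 1.0;
        IntStream.rangeClosed(1, n).parallel()
                 .forEach(j -> u[0][j] = (j < n ? a[0][j] : b[0]) / l[0][0]);

        for (int m = 1; m < n; m++) {         // stages stay sequential
            final int s = m;
            // entries of column m of L are independent of one another
            IntStream.range(s, n).parallel().forEach(i -> {
                double t = a[i][s];
                for (int k = 0; k < s; k++) t -= l[i][k] * u[k][s];
                l[i][s] = t;
            });
            u[s][s] = 1.0;
            // entries of row m of U are independent of one another,
            // but need l[m][m] from the column step just completed
            IntStream.rangeClosed(s + 1, n).parallel().forEach(j -> {
                double t = (j < n ? a[s][j] : b[s]);
                for (int k = 0; k < s; k++) t -= l[s][k] * u[k][j];
                u[s][j] = t / l[s][s];
            });
        }

        double[] x = new double[n];
        for (int i = n - 1; i >= 0; i--) {    // back substitution on UX = C
            x[i] = u[i][n];
            for (int j = i + 1; j < n; j++) x[i] -= u[i][j] * x[j];
        }
        return x;
    }

    public static void main(String[] args) {
        double[] x = solve(new double[][]{{2, 1}, {1, 3}}, new double[]{5, 10});
        System.out.println(x[0] + " " + x[1]); // same solution as the serial version
    }
}
```

Raw Thread objects with join(), as the paper uses, would express the same dependency structure; the stream form merely delegates worker management to the common fork-join pool.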

Figure 4. (a) Serial execution (CRM); (b) parallel execution (CRM)

IV. CONCLUSION

From the above results and observations, we come to the following conclusions. The Crout's Reduction Method developed in this study is a very effective tool for the efficient parallel solution of linear systems. The system size (the number of equations) N is equal to the number of processors, that is, N = P, where P is the number of processors. The performance of the parallel execution of the system is more effective than that of its serial execution.