An Efficient FETI Implementation on Distributed Shared Memory Machines with Independent Numbers of Subdomains and Processors


Contemporary Mathematics, Volume 218, 1998

Michel Lesoinne and Kendall Pierson

1. Introduction

Until now, many implementations of the FETI method have been designed either as sequential codes on a single CPU or as parallel implementations with a one-subdomain-per-processor approach. This approach has been particularly typical of implementations on distributed memory architectures such as the IBM SP2. In the last couple of years, several computer manufacturers have introduced machines with a Distributed Shared Memory (DSM) programming model, e.g. the SGI Origin 2000 or the HP Exemplar. In such architectures, the physical memory is distributed among the processors or CPU boards, but any memory location can be accessed logically by any CPU, independently of where the particular memory page being accessed has physically been allocated. As more and more machines of this type become available with a relatively small number of processors, interest has grown in implementing FETI with a number of subdomains independent of the number of processors. We report on such an implementation of FETI and highlight the benefits of this feature. We have found that medium-size to large problems can be solved even on a sequential machine with time and memory requirements that are one to two orders of magnitude better than those of a direct solver.

2. Objectives

When writing our new FETI code, the main objectives were:

- efficient data structures for distributed shared memory;
- a number of subdomains independent of the number of processors.

The second requirement was the most important and, when taken to the extreme of a single processor, naturally leads to being able to run the same code sequentially.

1991 Mathematics Subject Classification. Primary 65Y05; Secondary 65N55, 65Y10.
The first author acknowledges partial support by ANSYS, Inc.
© 1998 American Mathematical Society

3. Overview of FETI

In order to keep this paper as self-contained as possible, we begin with an overview of the original FETI method [6, 1, 2, 4, 5]. The problem to be solved is

(1)   K u = F

where K is an n x n symmetric positive semi-definite sparse matrix arising from the finite element discretization of a second- or fourth-order elastostatic (or elastodynamic) problem defined over a region Ω, and F is an n-long right-hand side vector representing some generalized forces. If Ω is partitioned into a set of N_s disconnected substructures Ω^(s), the FETI method consists in replacing Eq. (1) with the equivalent system of substructure equations

(2)   K^(s) u^(s) = F^(s) - B^(s)T λ,   s = 1, ..., N_s
      Σ_{s=1..N_s} B^(s) u^(s) = 0

where K^(s) and F^(s) are the unassembled restrictions of K and F to substructure Ω^(s), λ is a vector of Lagrange multipliers introduced for enforcing the constraint Σ_s B^(s) u^(s) = 0 on the substructure interface boundary Γ_I^(s), and B^(s) is a signed Boolean matrix that describes the interconnectivity of the substructures. A more elaborate derivation of (2) can be found in [6, 3].

In general, a mesh partition may contain N_f ≤ N_s floating substructures, that is, substructures without enough essential boundary conditions to prevent the substructure matrices K^(s) from being singular, in which case N_f of the local Neumann problems

(3)   K^(s) u^(s) = F^(s) - B^(s)T λ,   s = 1, ..., N_f

are ill-posed. To guarantee the solvability of these problems, we require that

(4)   (F^(s) - B^(s)T λ) ⊥ Ker(K^(s)),   s = 1, ..., N_f

and compute the solution of Eq. (3) as

(5)   u^(s) = K^(s)+ (F^(s) - B^(s)T λ) + R^(s) α^(s)

where K^(s)+ is a generalized inverse of K^(s) that need not be explicitly computed [4], R^(s) = Ker(K^(s)) is the null space of K^(s), and α^(s) is a vector of six or fewer constants. The introduction of the additional unknowns α^(s) is compensated by the additional equations resulting from (4)

(6)   R^(s)T (F^(s) - B^(s)T λ) = 0,   s = 1, ..., N_f

Substituting Eq. (5) into the second of Eqs. (2) and using Eq. (6) leads, after some algebraic manipulations, to the following FETI interface problem

(7)   [ F_I    -G_I ] [ λ ]   [  d ]
      [ -G_I^T   0  ] [ α ] = [ -e ]

where

(8)   F_I = Σ_{s=1..N_s} B^(s) K^(s)+ B^(s)T ;   G_I = [ B^(1) R^(1)  ...  B^(N_f) R^(N_f) ] ;
      α^T = [ α^(1)T  ...  α^(N_f)T ] ;
      d   = Σ_{s=1..N_s} B^(s) K^(s)+ F^(s) ;    e^T = [ F^(1)T R^(1)  ...  F^(N_f)T R^(N_f) ]

(9)   K^(s)+ = K^(s)-1                          if Ω^(s) is not a floating substructure
      K^(s)+ = a generalized inverse of K^(s)   if Ω^(s) is a floating substructure

For structural mechanics and structural dynamics problems, F_I is symmetric because the substructure matrices K^(s) are symmetric. The objective is to solve the interface problem (7) by a PCG algorithm instead of the original problem (1). The PCG algorithm is modified with a projection enforcing that the iterates λ^k satisfy (6). Defining the projector P by

(10)  P = I - G_I (G_I^T G_I)^{-1} G_I^T

the algorithm can be written as:

(11)  1. Initialize
         λ^0 = G_I (G_I^T G_I)^{-1} e
         r^0 = d - F_I λ^0
      2. Iterate k = 1, 2, ... until convergence
         w^{k-1} = P^T r^{k-1}
         z^{k-1} = F̄_I^{-1} w^{k-1}
         y^{k-1} = P z^{k-1}
         ζ^k = y^{k-1 T} w^{k-1} / y^{k-2 T} w^{k-2}    (ζ^1 = 0)
         p^k = y^{k-1} + ζ^k p^{k-1}                    (p^1 = y^0)
         ν^k = y^{k-1 T} w^{k-1} / p^{k T} F_I p^k
         λ^k = λ^{k-1} + ν^k p^k
         r^k = r^{k-1} - ν^k F_I p^k

where F̄_I^{-1} denotes the FETI preconditioner applied to the projected residual.
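To make the structure of iteration (11) concrete, the following Python/NumPy sketch implements the projected PCG loop for a small dense test case. It is an illustration only, not the authors' implementation: a real FETI code never assembles F_I, applies it subdomain by subdomain, and uses a mechanically meaningful preconditioner, whereas the function below takes explicit matrices and defaults to no preconditioning. All names are assumptions made for the example.

```python
import numpy as np

def feti_projected_pcg(FI, GI, d, e, precond=None, tol=1e-8, max_iter=200):
    """Projected PCG on the FETI interface problem (7), following (10)-(11).

    FI      : interface operator F_I (dense here for illustration only)
    GI      : traces of the rigid body modes, G_I
    d, e    : right-hand sides of the interface problem
    precond : callable w -> z approximating the preconditioner; identity if None
    """
    if precond is None:
        precond = lambda w: w                           # no preconditioning

    GtG = GI.T @ GI                                     # coarse matrix G_I^T G_I
    coarse = lambda b: np.linalg.solve(GtG, b)
    P = lambda v: v - GI @ coarse(GI.T @ v)             # projector (10); P is symmetric, so P^T = P

    lam = GI @ coarse(e)                                # lambda^0 = G_I (G_I^T G_I)^{-1} e
    r = d - FI @ lam                                    # r^0 = d - F_I lambda^0

    p, yw_prev = None, None
    for _ in range(max_iter):
        w = P(r)                                        # w = P^T r
        if np.linalg.norm(w) < tol:                     # convergence test on the projected residual
            break
        z = precond(w)                                  # z = preconditioned projected residual
        y = P(z)                                        # y = P z
        yw = y @ w
        p = y if p is None else y + (yw / yw_prev) * p  # zeta = yw / yw_prev, p = y + zeta * p_old
        Fp = FI @ p
        nu = yw / (p @ Fp)
        lam = lam + nu * p                              # lambda update
        r = r - nu * Fp                                 # residual update
        yw_prev = yw
    return lam      # alpha can be recovered afterwards from the coarse problem (not shown)
```

A direct dense solve of the small coarse matrix G_I^T G_I is used in this sketch; Section 6 below discusses why a direct coarse solver is also the method of choice on DSM machines.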

4. Data organization on DSM computer architectures

To organize data efficiently for the FETI solver, we need to examine how the operating system distributes memory among the physical memory units and how this distribution affects the cost of accessing that memory. Simple observations about the computer architecture give us guidelines for organizing the data structures involved in the FETI solver.

The single shared address space of DSM machines simplifies the writing of parallel codes. However, programmers must be aware that the cost of accessing memory pages is not uniform and depends on the actual hardware location of the page being accessed. On the SGI Origin 2000, the operating system maps memory pages onto physical pages by a first-touch rule: whenever possible, a memory page is allocated in the local memory of the first CPU that touches it.

Fortunately, the FETI method, because it is decomposition based, lends itself in a natural way to a distribution of data across CPUs that guarantees that most of the memory is always accessed by the same CPU. To achieve this, one simply applies a distributed memory programming style on a shared memory architecture. This means that all operations relative to a subdomain s are always executed by the same CPU. In this way, objects such as the local stiffness matrix K^(s) are created, factored and used for the resolution of linear systems always on the same CPU.

5. Parallel programming paradigm

The operations involved in the FETI method are mostly matrix and vector operations that can be performed subdomain by subdomain. Quantities such as the residual vector or the search direction vectors can be thought of as global quantities assembled from subvectors contributed by each subdomain. Operations on such vectors, such as sums or linear combinations, can be performed on a subdomain-by-subdomain basis. On the other hand, coefficients such as ν^k and ζ^k are scalar values that are global to the problem. These coefficients ensue mostly from dot products, which can be computed by having each subdomain evaluate the dot product of its subparts of the vectors in parallel, and then summing all the contributions globally.

These remarks have led us to write the program with a single thread executing the main PCG loop. In that way, only one CPU allocates and computes the global variables such as ν^k and ζ^k. To perform parallel operations, this single thread creates logical tasks, and these tasks are distributed to as many parallel threads as CPUs being used. Examples of tasks are the assembly or factorization of K^(s), or the update of the subpart of a vector for subdomain s. Because the number of threads is arbitrary and independent of the number of tasks to be performed at a given step (several tasks can be assigned to the same thread), independence between the number of subdomains and the number of CPUs is trivially achieved. In the extreme case, all tasks are executed by a single thread, the main thread, and the program can run on a sequential machine. A sketch of this task-to-thread mapping is given below.
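The following Python sketch is only meant to illustrate the mapping of per-subdomain tasks onto a fixed pool of worker threads driven by a single main thread. The names, the data layout, and the use of Python threads are assumptions made for the example; the real implementation additionally keeps each subdomain on the CPU that first touched its data, which a generic thread pool does not guarantee.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def run_subdomain_tasks(task, subdomains, num_cpus):
    """Run one logical task per subdomain on a fixed pool of num_cpus workers.

    The number of workers is independent of len(subdomains): with num_cpus == 1
    everything runs sequentially, and with more CPUs several subdomain tasks
    may still be assigned to the same thread.
    """
    with ThreadPoolExecutor(max_workers=num_cpus) as pool:
        return list(pool.map(task, subdomains))

def local_dot(sub):
    # each subdomain computes the dot product of its own subvectors
    return float(sub["y"] @ sub["w"])

def global_dot(subdomains, num_cpus):
    # the main thread sums the per-subdomain contributions into one global
    # scalar, as done for the PCG coefficients nu^k and zeta^k
    return sum(run_subdomain_tasks(local_dot, subdomains, num_cpus))

# Example: 128 subdomains processed by 4 worker threads.
subs = [{"y": np.random.rand(50), "w": np.random.rand(50)} for _ in range(128)]
print(global_dot(subs, num_cpus=4))
```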

6. Implementation of the projector P

The application of the projector P is often referred to as the coarse problem because it couples all subdomains together. The application of the projector to a vector z can be seen as a three-step process:

- compute γ = G^T z;
- solve (G^T G) α = γ;
- compute y = z - G α.

The first and last operations can be performed in parallel, subdomain by subdomain, since the columns of G (the traces of the rigid body modes) are nonzero only on one subdomain interface. The second operation, however, is a global operation that couples all subdomains together. Past implementations of the projector at the University of Colorado have relied on an iterative solution of the second equation of Eqs. (6). Though such an approach was justified by the fact that these implementations were targeted at distributed memory machines, an experimental study of the problem has revealed that on a DSM machine it is more economical to solve this system with a direct solver. This direct solver can only be parallelized over a small number of CPUs before losing performance. Consequently, it is the least parallelizable part of the FETI solver and sets an upper limit on the performance that can be attained by adding CPUs.
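As a minimal sketch of this three-step application, assuming G is available as a dense matrix with full column rank (so that G^T G is symmetric positive definite), the coarse matrix can be factored once by a direct Cholesky solver and reused at every iteration. The class name and the dense storage are assumptions for illustration; in the actual code the products G^T z and G α are accumulated subdomain by subdomain.

```python
from scipy.linalg import cho_factor, cho_solve

class CoarseProjector:
    """Apply P = I - G (G^T G)^{-1} G^T by the three-step process of Section 6."""

    def __init__(self, G):
        self.G = G
        self.coarse = cho_factor(G.T @ G)      # factor the coarse matrix once (direct solver)

    def apply(self, z):
        gamma = self.G.T @ z                   # step 1: gamma = G^T z        (parallel, per subdomain)
        alpha = cho_solve(self.coarse, gamma)  # step 2: (G^T G) alpha = gamma (global coarse solve)
        return z - self.G @ alpha              # step 3: y = z - G alpha      (parallel, per subdomain)

# Usage: P = CoarseProjector(G); y = P.apply(z)
```

The factor-once, solve-many pattern is what makes a direct coarse solver attractive here: the factorization cost is amortized over all PCG iterations, while only the small triangular solves are repeated.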

Figure 1. Finite element models of a lens (left) and a wheel carrier (right).

7. Numerical experiments

We have run our code on two significant example problems. For each problem we made runs with a varying number of subdomains and recorded both the timing of the solution and the memory required by the solver.

7.1. Lens problem. The first problem, shown in Fig. 1 (left), has 40329 nodes, 35328 brick elements and 120987 degrees of freedom. Solving this problem sequentially with a skyline solver, with the equations renumbered by RCM, requires 2.2 GB of memory and 10,000 seconds of CPU time. By contrast, on the same machine, running FETI sequentially with 128 subdomains requires 379 MB of memory and 183.1 seconds of CPU time. This result dramatically highlights that FETI is an excellent solution method, even on sequential machines.

Figure 2. Number of iterations, solution time and memory requirements vs. number of subdomains (lens problem, 1 CPU).

As can be seen on the left of Fig. 2, the number of iterations remains stable as the number of subdomains increases. The right part of the same figure shows the variation of timings and memory usage with the number of subdomains.

It can be noticed that as the number of subdomains increases, the memory required to store all the local stiffness matrices K^(s) decreases. This is mainly due to the reduction of the bandwidth of the local problems. Our experiments show that the optimum decomposition for CPU time has 128 subdomains. Such a number would make it impractical for most users to use FETI with an implementation that requires one CPU per subdomain.

After determining the optimum number of subdomains, we ran the same test case with an increasing number of processors. The resulting timings show good scalability (Fig. 3).

Figure 3. Solution time vs. number of processors (lens problem, 128 subdomains).

7.2. Wheel carrier problem. The second problem is a wheel carrier problem (see Fig. 1, right) with 67768 elements, 17541 nodes and 52623 degrees of freedom. The skyline solver requires 576 MB of memory and 800 seconds of CPU time. Fig. 4 shows a single-CPU FETI solution time of 26.7 s with 80 subdomains. On this problem, the memory requirement beyond 40 subdomains is stable around 88 MB but tends to increase slightly as the number of subdomains increases. This is explained by the fact that as the number of subdomains increases, the size of the interface increases, and the memory required to store larger interface vectors offsets the reduction in memory required by the local stiffness matrices K^(s).

Figure 4. Solution time and memory requirements vs. number of subdomains (wheel carrier problem, 1 CPU).

8. Conclusions

We have implemented the FETI method on Distributed Shared Memory machines and have achieved independence of the number of subdomains with respect to the number of CPUs. This independence has allowed us to explore the use of a large number of subdomains for various problems. We have seen from these experiments that using a relatively large number of subdomains (around 100) can be very beneficial both in solution time and in memory usage. With such a high number of subdomains, the FETI method was shown to require CPU times and memory usage that are almost two orders of magnitude lower than those of a direct skyline solver. This strongly suggests that the FETI method is a viable alternative to direct solvers on medium-size to very large scale problems.

References

1. C. Farhat, A Lagrange multiplier based divide and conquer finite element algorithm, J. Comput. Sys. Engrg. 2 (1991), 149-156.
2. C. Farhat, A saddle-point principle domain decomposition method for the solution of solid mechanics problems, Proc. Fifth SIAM Conference on Domain Decomposition Methods for Partial Differential Equations (D. E. Keyes, T. F. Chan, G. A. Meurant, J. S. Scroggs, and R. G. Voigt, eds.), SIAM, 1991, pp. 271-292.
3. C. Farhat, J. Mandel, and F. X. Roux, Optimal convergence properties of the FETI domain decomposition method, Comput. Meths. Appl. Mech. Engrg. 115 (1994), 367-388.
4. C. Farhat and F. X. Roux, A method of finite element tearing and interconnecting and its parallel solution algorithm, Internat. J. Numer. Meths. Engrg. 32 (1991), 1205-1227.
5. C. Farhat and F. X. Roux, An unconventional domain decomposition method for an efficient parallel solution of large-scale finite element systems, SIAM J. Sci. Stat. Comput. 13 (1992), 379-396.
6. C. Farhat and F. X. Roux, Implicit parallel processing in structural mechanics, Computational Mechanics Advances 2 (1994), 1-124.

Department of Aerospace Engineering and Sciences and Center for Aerospace Structures, University of Colorado at Boulder, Boulder, CO 80309-0429, U.S.A.