Projected Schur Complement Method for Solving Non-Symmetric Saddle-Point Systems (Arising from Fictitious Domain Approaches)

1 Projected Schur Complement Method for Solving Non-Symmetric Saddle-Point Systems (Arising from Fictitious Domain Approaches)
Jaroslav Haslinger, Charles University, Prague
Tomáš Kozubek, VŠB-TU Ostrava
Radek Kučera, VŠB-TU Ostrava
SNA '07, January 22, Ostrava

2 OUTLINE
Preliminaries on saddle-point systems
Projected Schur complement method
Projected BiCGSTAB algorithm
Hierarchical multigrid
Poisson-like solver for singular matrices
Numerical experiments

4 Formulation: non-symmetric saddle-point system
$$\begin{pmatrix} A & B_1^T \\ B_2 & 0 \end{pmatrix} \begin{pmatrix} u \\ \lambda \end{pmatrix} = \begin{pmatrix} f \\ g \end{pmatrix}$$
Assumptions:
$A$ ... non-symmetric $(n \times n)$ matrix
... singular with $p = \dim\operatorname{Ker}A$, as $V_h \subset H^1_{\mathrm{per}}(\Omega)$
... actions of $A$ (and of $A^\dagger$) can be computed by an FFT-based Poisson-like solver
$B_1$, $B_2$ ... full-rank $(m \times n)$ matrices, $m \ll n$
... sparse, their actions are cheap
... $B_1 \neq B_2$, as $\Gamma \neq \gamma$ or mixed boundary conditions are prescribed
Algorithms based on Schur complement reductions:
Case 1: symmetric non-singular
Case 2: non-symmetric non-singular
Case 3: symmetric singular
Case 4: non-symmetric singular

5 Case 1: symmetric non-singular (positive definite A)
$$\begin{pmatrix} A & B^T \\ B & 0 \end{pmatrix} \begin{pmatrix} u \\ \lambda \end{pmatrix} = \begin{pmatrix} f \\ g \end{pmatrix} \;\Rightarrow\; \begin{cases} u = A^{-1}(f - B^T\lambda), \\ B A^{-1} B^T \lambda = B A^{-1} f - g. \end{cases}$$
Algorithm
Step 1: Assemble $d := B A^{-1} f - g$.
Step 2: Solve $S\lambda = d$ by the CGM with $S := B A^{-1} B^T$.
Step 3: Assemble $u := A^{-1}(f - B^T \lambda)$.
Matrix-vector products $S\mu$ are computed by
$$S\mu := B\bigl(A^{-1}(B^T\mu)\bigr).$$
The algorithm requires $O((m+2)\,n_A)$ flops in the worst case, where $n_A$ denotes the cost of one action of $A^{-1}$.
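
As an illustration of this reduction, here is a minimal Python/SciPy sketch (not from the slides): it assumes a dense SPD matrix A and a full-rank array B, factorizes A once, and applies the Schur complement S = B A^{-1} B^T only through matrix-vector products inside CG.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve
from scipy.sparse.linalg import LinearOperator, cg

def schur_cg(A, B, f, g):
    """Case 1 sketch: A symmetric positive definite (n x n), B full rank (m x n)."""
    m, _ = B.shape
    chol = cho_factor(A)                                   # factorize A once
    apply_Ainv = lambda x: cho_solve(chol, x)              # action of A^{-1}

    d = B @ apply_Ainv(f) - g                              # Step 1: d := B A^{-1} f - g
    S = LinearOperator((m, m), matvec=lambda mu: B @ apply_Ainv(B.T @ mu))
    lam, info = cg(S, d)                                   # Step 2: solve S lam = d by CG
    u = apply_Ainv(f - B.T @ lam)                          # Step 3: u := A^{-1}(f - B^T lam)
    return u, lam
```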

6 Case 2: non-symmetric non-singular
$$\begin{pmatrix} A & B_1^T \\ B_2 & 0 \end{pmatrix} \begin{pmatrix} u \\ \lambda \end{pmatrix} = \begin{pmatrix} f \\ g \end{pmatrix} \;\Rightarrow\; \begin{cases} u = A^{-1}(f - B_1^T\lambda), \\ B_2 A^{-1} B_1^T \lambda = B_2 A^{-1} f - g. \end{cases}$$
The algorithm is analogous, but the Schur complement $S := B_2 A^{-1} B_1^T$ is non-symmetric, so a Krylov method for non-symmetric matrices is required (GMRES, BiCG, ...).
$$\mathcal{A} := \begin{pmatrix} A & B_1^T \\ B_2 & 0 \end{pmatrix} = \begin{pmatrix} I & 0 \\ B_2 A^{-1} & I \end{pmatrix} \begin{pmatrix} A & B_1^T \\ 0 & -S \end{pmatrix}$$
Theorem. Let $A$ be non-singular. Then $\mathcal{A}$ is invertible iff $S$ is invertible.
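
The same reduction with a non-symmetric A, sketched with GMRES in place of CG (an assumption; the slides only ask for some non-symmetric Krylov method such as GMRES or BiCG):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve
from scipy.sparse.linalg import LinearOperator, gmres

def schur_gmres(A, B1, B2, f, g):
    """Case 2 sketch: A non-symmetric and non-singular, B1, B2 full rank (m x n)."""
    m, _ = B1.shape
    lu = lu_factor(A)
    apply_Ainv = lambda x: lu_solve(lu, x)

    d = B2 @ apply_Ainv(f) - g                             # d := B2 A^{-1} f - g
    S = LinearOperator((m, m), matvec=lambda mu: B2 @ apply_Ainv(B1.T @ mu)))
    lam, info = gmres(S, d)                                # non-symmetric Schur complement
    u = apply_Ainv(f - B1.T @ lam)
    return u, lam
```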

7 Case 3: symmetric singular (positive semidefinite A, FETI)
- a generalized inverse $A^\dagger$ satisfying $A = A A^\dagger A$
- an $(n \times p)$ matrix $N$ whose columns span $\operatorname{Ker}A$
$Au + B^T\lambda = f$ with $f - B^T\lambda \in \operatorname{Im}A = (\operatorname{Ker}A)^\perp$ gives
$$u = A^\dagger(f - B^T\lambda) + N\alpha, \qquad N^T(f - B^T\lambda) = 0,$$
and $Bu = g$ gives
$$B A^\dagger B^T\lambda - BN\alpha = B A^\dagger f - g.$$
Reduced system:
$$\begin{pmatrix} B A^\dagger B^T & -BN \\ N^T B^T & 0 \end{pmatrix} \begin{pmatrix} \lambda \\ \alpha \end{pmatrix} = \begin{pmatrix} B A^\dagger f - g \\ N^T f \end{pmatrix}$$
an orthogonal projector + the projected CG

8 Case 4: non-symmetric and singular
- a generalized inverse $A^\dagger$ satisfying $A = A A^\dagger A$
- $(n \times p)$ matrices $N$, $M$ whose columns span $\operatorname{Ker}A$ and $\operatorname{Ker}A^T$, respectively
$Au + B_1^T\lambda = f$ with $f - B_1^T\lambda \in \operatorname{Im}A = (\operatorname{Ker}A^T)^\perp$ gives
$$u = A^\dagger(f - B_1^T\lambda) + N\alpha, \qquad M^T(f - B_1^T\lambda) = 0,$$
and $B_2 u = g$ gives
$$B_2 A^\dagger B_1^T\lambda - B_2 N\alpha = B_2 A^\dagger f - g.$$
Reduced system:
$$\begin{pmatrix} B_2 A^\dagger B_1^T & -B_2 N \\ M^T B_1^T & 0 \end{pmatrix} \begin{pmatrix} \lambda \\ \alpha \end{pmatrix} = \begin{pmatrix} B_2 A^\dagger f - g \\ M^T f \end{pmatrix}$$
two orthogonal projectors + the projected BiCG

9 Theorem 1. The saddle-point matrix $\mathcal{A} := \begin{pmatrix} A & B_1^T \\ B_2 & 0 \end{pmatrix}$ is invertible iff $B_1$, $B_2$ have full rank and the necessary and sufficient condition
$$\operatorname{Ker}A \cap \operatorname{Ker}B_2 = \{0\}, \qquad A\operatorname{Ker}B_2 \cap \operatorname{Im}B_1^T = \{0\} \tag{NSC}$$
holds.
The generalized Schur complement:
$$S := \begin{pmatrix} B_2 A^\dagger B_1^T & -B_2 N \\ M^T B_1^T & 0 \end{pmatrix}$$
Theorem 2. The following three statements are equivalent:
- The necessary and sufficient condition (NSC) holds.
- $\mathcal{A}$ is invertible.
- $S$ is invertible.

10 OUTLINE
Preliminaries on saddle-point systems
Projected Schur complement method
Projected BiCGSTAB algorithm
Hierarchical multigrid
Poisson-like solver for singular matrices
Numerical experiments

11 New notation in the reduced system:
$$F := B_2 A^\dagger B_1^T, \quad G_1 := N^T B_2^T, \quad G_2 := M^T B_1^T, \quad d := B_2 A^\dagger f - g, \quad e := M^T f,$$
$$\begin{pmatrix} F & -G_1^T \\ G_2 & 0 \end{pmatrix} \begin{pmatrix} \lambda \\ \alpha \end{pmatrix} = \begin{pmatrix} d \\ e \end{pmatrix}.$$
Two orthogonal projectors:
$$P_1 : \mathbb{R}^m \to \operatorname{Ker}G_1, \quad P_1 = I - G_1^T(G_1 G_1^T)^{-1}G_1,$$
$$P_2 : \mathbb{R}^m \to \operatorname{Ker}G_2, \quad P_2 = I - G_2^T(G_2 G_2^T)^{-1}G_2.$$
$P_1$ splits the saddle-point structure of the system:
$$P_1 F\lambda = P_1 d, \qquad G_2\lambda = e.$$
$P_2$ decomposes $\lambda = \lambda_{Im} + \lambda_{Ker}$ with $\lambda_{Im} \in \operatorname{Im}G_2^T$, $\lambda_{Ker} \in \operatorname{Ker}G_2$:
at first $\lambda_{Im} = G_2^T(G_2 G_2^T)^{-1}e$, at second $P_1 F\lambda_{Ker} = P_1(d - F\lambda_{Im})$ on $\operatorname{Ker}G_2$.
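
A small numerical illustration (not from the slides) of the projector formula used above, for a matrix G with full row rank:

```python
import numpy as np

def projector_onto_kernel(G):
    """Orthogonal projector P = I - G^T (G G^T)^{-1} G onto Ker G
    (G is assumed to have full row rank)."""
    m = G.shape[1]
    H = np.linalg.inv(G @ G.T)
    return np.eye(m) - G.T @ H @ G

# sanity check: P is an orthogonal projector and G P = 0
G = np.random.rand(3, 10)
P = projector_onto_kernel(G)
assert np.allclose(P @ P, P) and np.allclose(P, P.T) and np.allclose(G @ P, 0)
```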

12 Theorem 3. The linear operator $P_1 F : \operatorname{Ker}G_2 \to \operatorname{Ker}G_1$ is invertible.
Proof. As both null-spaces $\operatorname{Ker}G_1$ and $\operatorname{Ker}G_2$ have the same dimension, it is enough to prove that $P_1 F$ is injective. Let $\mu \in \operatorname{Ker}G_2$ be such that $P_1 F\mu = 0$. Then $F\mu \in \operatorname{Ker}P_1 = \operatorname{Im}G_1^T$ and, therefore, there is $\beta \in \mathbb{R}^p$ such that $F\mu = G_1^T\beta$ and $G_2\mu = 0$. Letting $y = (\mu^T, \beta^T)^T$, we get $Sy = 0$, implying $\mu = 0$ as $S$ is invertible.

13 Algorithm PSCM
Step 1.a: Assemble $G_1 := N^T B_2^T$, $G_2 := M^T B_1^T$.
Step 1.b: Assemble $d := B_2 A^\dagger f - g$, $e := M^T f$.
Step 1.c: Assemble $H_1 := (G_1 G_1^T)^{-1}$, $H_2 := (G_2 G_2^T)^{-1}$.
Step 1.d: Assemble $\lambda_{Im} := G_2^T H_2 e$, $\tilde d := P_1(d - F\lambda_{Im})$.
Step 1.e: Solve $P_1 F\lambda_{Ker} = \tilde d$ on $\operatorname{Ker}G_2$.
Step 1.f: Assemble $\lambda := \lambda_{Im} + \lambda_{Ker}$.
Step 2: Assemble $\alpha := H_1 G_1(F\lambda - d)$.
Step 3: Assemble $u := A^\dagger(f - B_1^T\lambda) + N\alpha$.
An iterative projected Krylov subspace method for non-symmetric operators can be used in Step 1.e. Matrix-vector products $F\mu$ and $P_k\mu$, $k = 1, 2$, are computed by
$$F\mu := B_2\bigl(A^\dagger(B_1^T\mu)\bigr), \qquad P_k\mu := \mu - G_k^T\bigl(H_k(G_k\mu)\bigr).$$
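
The following dense NumPy sketch walks through Steps 1-3 on small test matrices; it uses pinv and explicit null-space bases instead of the matrix-free FFT machinery of the later slides, and a least-squares solve as a stand-in for the projected Krylov iteration of Step 1.e. The sign convention matches the reduced system as written above; everything here is illustrative, not the fast solver.

```python
import numpy as np
from scipy.linalg import null_space

def pscm_dense(A, B1, B2, f, g):
    """Illustrative dense version of Algorithm PSCM (a sketch, not the fast solver)."""
    Adag = np.linalg.pinv(A)                    # generalized inverse A^dagger
    N, M = null_space(A), null_space(A.T)       # bases of Ker A and Ker A^T

    G1, G2 = N.T @ B2.T, M.T @ B1.T             # Step 1.a
    d, e = B2 @ (Adag @ f) - g, M.T @ f         # Step 1.b
    H1, H2 = np.linalg.inv(G1 @ G1.T), np.linalg.inv(G2 @ G2.T)  # Step 1.c

    F = B2 @ Adag @ B1.T
    m = len(d)
    P1 = np.eye(m) - G1.T @ H1 @ G1

    lam_im = G2.T @ (H2 @ e)                    # Step 1.d
    d_tilde = P1 @ (d - F @ lam_im)

    Z2 = null_space(G2)                         # Step 1.e: solve P1 F lam_ker = d_tilde on Ker G2
    y = np.linalg.lstsq(P1 @ F @ Z2, d_tilde, rcond=None)[0]
    lam_ker = Z2 @ y

    lam = lam_im + lam_ker                      # Step 1.f
    alpha = H1 @ (G1 @ (F @ lam - d))           # Step 2
    u = Adag @ (f - B1.T @ lam) + N @ alpha     # Step 3
    return u, lam
```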

14 OUTLINE
Preliminaries on saddle-point systems
Projected Schur complement method
Projected BiCGSTAB algorithm
Hierarchical multigrid
Poisson-like solver for singular matrices
Numerical experiments

15 $F\lambda = d$ on $\mathbb{R}^m$
Algorithm BiCGSTAB $[\epsilon, \lambda^0, F, d] \to \lambda$
Initialize: $r^0 := d - F\lambda^0$, $p^0 := r^0$, $\tilde r^0$ arbitrary, $k := 0$
While $\|r^k\| > \epsilon$:
1. $\hat p^k := F p^k$
2. $\alpha_k := (r^k)^T\tilde r^0 / (\hat p^k)^T\tilde r^0$
3. $s^k := r^k - \alpha_k \hat p^k$
4. $\hat s^k := F s^k$
5. $\omega_k := (\hat s^k)^T s^k / (\hat s^k)^T\hat s^k$
6. $\lambda^{k+1} := \lambda^k + \alpha_k p^k + \omega_k s^k$
7. $r^{k+1} := s^k - \omega_k \hat s^k$
8. $\beta_{k+1} := (\alpha_k/\omega_k)\,(r^{k+1})^T\tilde r^0 / (r^k)^T\tilde r^0$
9. $p^{k+1} := r^{k+1} + \beta_{k+1}(p^k - \omega_k \hat p^k)$
10. $k := k + 1$
end
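
A direct NumPy transcription of the loop above (a sketch; `F` may be any object supporting `@` with a vector, and the shadow residual r̃⁰ is simply taken equal to r⁰):

```python
import numpy as np

def bicgstab(F, d, lam0, eps=1e-8, max_iter=1000):
    """Unpreconditioned BiCGSTAB in the notation of the slide."""
    lam = lam0.copy()
    r = d - F @ lam
    p = r.copy()
    r_sh = r.copy()                                   # shadow residual r~^0 (here r^0)
    k = 0
    while np.linalg.norm(r) > eps and k < max_iter:
        p_hat = F @ p                                 # 1
        alpha = (r @ r_sh) / (p_hat @ r_sh)           # 2
        s = r - alpha * p_hat                         # 3
        s_hat = F @ s                                 # 4
        omega = (s_hat @ s) / (s_hat @ s_hat)         # 5
        lam = lam + alpha * p + omega * s             # 6
        r_new = s - omega * s_hat                     # 7
        beta = (alpha / omega) * (r_new @ r_sh) / (r @ r_sh)  # 8
        p = r_new + beta * (p - omega * p_hat)        # 9
        r, k = r_new, k + 1                           # 10
    return lam, k
```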

16 $P_1 F\lambda = \tilde d$ on $\operatorname{Ker}G_2$
Algorithm ProjBiCGSTAB $[\epsilon, \lambda^0, F, P_1, P_2, \tilde d] \to \lambda$
Initialize: $\lambda^0 \in \operatorname{Ker}G_2$, $r^0 := \tilde d - P_1 F\lambda^0$, $p^0 := r^0$, $\tilde r^0$ arbitrary, $k := 0$
While $\|r^k\| > \epsilon$:
1. $\hat p^k := P_1 F p^k$
2. $\alpha_k := (r^k)^T\tilde r^0 / (\hat p^k)^T\tilde r^0$
3. $s^k := r^k - \alpha_k \hat p^k$
4. $\hat s^k := P_1 F s^k$
5. $\omega_k := (\hat s^k)^T s^k / (\hat s^k)^T\hat s^k$
6. $\lambda^{k+1} := \lambda^k + \alpha_k P_2 p^k + \omega_k P_2 s^k$
7. $r^{k+1} := s^k - \omega_k \hat s^k$
8. $\beta_{k+1} := (\alpha_k/\omega_k)\,(r^{k+1})^T\tilde r^0 / (r^k)^T\tilde r^0$
9. $p^{k+1} := r^{k+1} + \beta_{k+1}(p^k - \omega_k \hat p^k)$
10. $k := k + 1$
end

17 $P_1 F\lambda = \tilde d$ on $\operatorname{Ker}G_2$
Algorithm ProjBiCGSTAB $[\epsilon, \lambda^0, F, P_1, P_2, \tilde d] \to \lambda$
Initialize: $\lambda^0 \in \operatorname{Ker}G_2$, $r^0 := P_2(\tilde d - P_1 F\lambda^0)$, $p^0 := r^0$, $\tilde r^0$ arbitrary, $k := 0$
While $\|r^k\| > \epsilon$:
1. $\hat p^k := P_2 P_1 F p^k$
2. $\alpha_k := (r^k)^T\tilde r^0 / (\hat p^k)^T\tilde r^0$
3. $s^k := r^k - \alpha_k \hat p^k$
4. $\hat s^k := P_2 P_1 F s^k$
5. $\omega_k := (\hat s^k)^T s^k / (\hat s^k)^T\hat s^k$
6. $\lambda^{k+1} := \lambda^k + \alpha_k p^k + \omega_k s^k$
7. $r^{k+1} := s^k - \omega_k \hat s^k$
8. $\beta_{k+1} := (\alpha_k/\omega_k)\,(r^{k+1})^T\tilde r^0 / (r^k)^T\tilde r^0$
9. $p^{k+1} := r^{k+1} + \beta_{k+1}(p^k - \omega_k \hat p^k)$
10. $k := k + 1$
end
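
Because this second variant applies the fixed operator P₂P₁F to every direction and keeps the residual in Im P₂, it can also be run through an off-the-shelf BiCGSTAB. A sketch with SciPy, assuming dense arrays F, P1, P2 and a starting guess lam0 already in Ker G₂ (not the slides' implementation, just an equivalent reformulation):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, bicgstab

def proj_bicgstab(F, P1, P2, d_tilde, lam0):
    """Solve P1 F lam = d_tilde on Ker G2 by BiCGSTAB applied to P2 P1 F."""
    m = len(d_tilde)
    op = LinearOperator((m, m), matvec=lambda mu: P2 @ (P1 @ (F @ mu)))
    rhs = P2 @ (d_tilde - P1 @ (F @ lam0))      # projected initial residual
    dlam, info = bicgstab(op, rhs)              # the correction stays in Ker G2
    return lam0 + dlam, info
```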

18 OUTLINE
Preliminaries on saddle-point systems
Projected Schur complement method
Projected BiCGSTAB algorithm
Hierarchical multigrid
Poisson-like solver for singular matrices
Numerical experiments

19 Consider a family of nested partitions of the fictitious domain $\Omega$ with stepsizes $h_j$, $0 \le j \le J$:
- the first iterate on each level is determined by the result from the nearest lower level,
- the terminating tolerance $\epsilon$ on each level is $\epsilon := \nu h_j^p$.
Algorithm: Hierarchical Multigrid Scheme
Initialize: Let $\lambda_{Ker}^{0,(0)} \in \operatorname{Ker}(G_2^{(0)})$ be given.
ProjBiCGSTAB$[\nu h_0^p, \lambda_{Ker}^{0,(0)}, F^{(0)}, P_1^{(0)}, P_2^{(0)}, \tilde d^{(0)}] \to \lambda_{Ker}^{(0)}$.
For $j = 1, \dots, J$:
1. prolongate $\lambda_{Ker}^{(j-1)} \to \lambda_{Ker}^{0,(j)}$
2. project $\lambda_{Ker}^{0,(j)} := P_2^{(j)} \lambda_{Ker}^{0,(j)}$
3. ProjBiCGSTAB$[\nu h_j^p, \lambda_{Ker}^{0,(j)}, F^{(j)}, P_1^{(j)}, P_2^{(j)}, \tilde d^{(j)}] \to \lambda_{Ker}^{(j)}$
end
Return: $\lambda_{Ker} := \lambda_{Ker}^{(J)}$.
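
A sketch of this nested-iteration driver in Python. All names here are placeholders for illustration: `levels` is a hypothetical list of per-level data (each entry carrying h, F, P1, P2, d_tilde), `prolongate(lam, j)` a user-supplied interpolation to level j, and `proj_solver(F, P1, P2, d_tilde, lam0, eps)` any projected BiCGSTAB routine.

```python
import numpy as np

def hierarchical_scheme(levels, prolongate, proj_solver, nu, p):
    """Nested iteration over levels j = 0..J with tolerance eps_j = nu * h_j^p."""
    lvl = levels[0]
    lam = np.zeros_like(lvl.d_tilde)                       # initial guess on the coarsest level
    lam, _ = proj_solver(lvl.F, lvl.P1, lvl.P2, lvl.d_tilde, lam, eps=nu * lvl.h**p)
    for j in range(1, len(levels)):
        lvl = levels[j]
        lam = prolongate(lam, j)                           # 1: prolongate from level j-1
        lam = lvl.P2 @ lam                                 # 2: project back onto Ker G2^(j)
        lam, _ = proj_solver(lvl.F, lvl.P1, lvl.P2, lvl.d_tilde, lam,
                             eps=nu * lvl.h**p)            # 3: solve on level j
    return lam
```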

20 OUTLINE
Preliminaries on saddle-point systems
Projected Schur complement method
Projected BiCGSTAB algorithm
Hierarchical multigrid
Poisson-like solver for singular matrices
Numerical experiments

21 Circulant matrices and the Fourier transform
$$A = \begin{pmatrix} a_1 & a_n & \dots & a_2 \\ a_2 & a_1 & \dots & a_3 \\ a_3 & a_2 & \dots & a_4 \\ \vdots & \vdots & & \vdots \\ a_n & a_{n-1} & \dots & a_1 \end{pmatrix} = \bigl(a,\; Ta,\; T^2a,\; \dots,\; T^{n-1}a\bigr)$$
Shift theorem: $\widehat{T_k f}(\omega) = \int_{\mathbb{R}} f(x-k)\,e^{ix\omega}\,dx = e^{ik\omega}\hat f(\omega)$
$$XA = (Dx^0,\; Dx^1,\; Dx^2,\; \dots,\; Dx^{n-1}) = DX, \quad x^k \text{ being the } k\text{-th column of } X.$$
Let $A$ be circulant. Then $A = X^{-1}DX$, where $X$ is the DFT matrix and $D = \operatorname{diag}(\hat a)$, $\hat a = Xa$, $a = A(:,1)$.
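
A quick NumPy check of this diagonalization (not from the slides): build a circulant matrix from its first column and compare A v with the FFT formula.

```python
import numpy as np

n = 8
a = np.random.rand(n)
A = np.column_stack([np.roll(a, k) for k in range(n)])   # columns a, Ta, T^2 a, ..., T^{n-1} a
d = np.fft.fft(a)                                        # hat a = X a, the diagonal of D

v = np.random.rand(n)
Av_fft = np.fft.ifft(d * np.fft.fft(v)).real             # X^{-1} (D (X v))
assert np.allclose(A @ v, Av_fft)
```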

22 Multiplying procedure $A^\dagger v = X^{-1}\bigl(D^\dagger(Xv)\bigr)$ ... Moore-Penrose pseudoinverse
0. $d := \mathrm{fft}(a)$
1. $v := \mathrm{fft}(v)$
2. $v := v \mathbin{.\!*} d^\dagger$ (componentwise; zero entries of $d$ are inverted as zero)
3. $A^\dagger v := \mathrm{ifft}(v)$
Cost: $O(2n\log_2 n)$
Multiplying procedures $N\alpha$, $N^Tv$ and $M\alpha$, $M^Tv$:
$$I - DD^\dagger = \operatorname{diag}(1, 1, 1, 0, \dots, 0), \qquad X^{-1} = (N, Y),$$
i.e., the columns of $X^{-1}$ associated with the zero entries of $d$ span $\operatorname{Ker}A$. Therefore we can define the operation $\operatorname{ind}(\alpha) := (\alpha^T, 0, \dots, 0)^T \in \mathbb{R}^n$ and compute
$N\alpha$: 1. $v_\alpha := \operatorname{ind}(\alpha)$; 2. $N\alpha := \mathrm{ifft}(v_\alpha)$
$N^Tv$: 1. $\tilde v := \mathrm{ifft}(v)$; 2. $N^Tv := \operatorname{ind}^{-1}(\tilde v)$
Cost: $O(n\log_2 n)$
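
A NumPy sketch of these procedures for a circulant A given by its first column a; the relative tolerance used to detect zero eigenvalues is an implementation choice, not specified on the slides, and the kernel-mode construction matches the slides' N only up to ordering and normalization of the Fourier basis.

```python
import numpy as np

def apply_pinv_circulant(a, v, tol=1e-12):
    """Moore-Penrose action A^dagger v of a circulant A with first column a, in O(n log n)."""
    d = np.fft.fft(a)
    d_pinv = np.zeros_like(d)
    nz = np.abs(d) > tol * np.abs(d).max()
    d_pinv[nz] = 1.0 / d[nz]                        # componentwise pseudoinverse of D
    return np.fft.ifft(d_pinv * np.fft.fft(v)).real

def apply_N(a, alpha, tol=1e-12):
    """N @ alpha, where the columns of N are the inverse-DFT modes belonging to the
    zero eigenvalues of A (they span Ker A); the result may be complex in general."""
    d = np.fft.fft(a)
    ker = np.flatnonzero(np.abs(d) <= tol * np.abs(d).max())
    v = np.zeros(len(a), dtype=complex)
    v[ker] = alpha                                   # ind(alpha): embed into the kernel frequencies
    return np.fft.ifft(v)
```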

23 Kronecker product of matrices: for $A_x \in \mathbb{R}^{n_x \times n_x}$, $A_y \in \mathbb{R}^{n_y \times n_y}$,
$$A_x \otimes A_y = \begin{pmatrix} a^y_{11}A_x & \dots & a^y_{1n_y}A_x \\ \vdots & & \vdots \\ a^y_{n_y1}A_x & \dots & a^y_{n_yn_y}A_x \end{pmatrix}.$$
Lemma 1: $(A_x \otimes A_y)(B_x \otimes B_y) = A_xB_x \otimes A_yB_y$, $(A_x \otimes A_y)^\dagger = A_x^\dagger \otimes A_y^\dagger$, $N = N_x \otimes N_y$.
Lemma 2: $(A_x \otimes A_y)v = \operatorname{vec}(A_x V A_y^T)$, where $V = \operatorname{vec}^{-1}(v)$,
$$V = (v_1, \dots, v_{n_y}) \in \mathbb{R}^{n_x \times n_y}, \qquad \operatorname{vec}(V) = \begin{pmatrix} v_1 \\ \vdots \\ v_{n_y} \end{pmatrix} \in \mathbb{R}^{n_x n_y}.$$
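
Lemma 2 can be checked numerically; note that with the ordering used on this slide, A_x ⊗ A_y corresponds to `np.kron(Ay, Ax)` in NumPy's standard convention (a small sketch, not from the slides):

```python
import numpy as np

nx, ny = 3, 4
Ax, Ay = np.random.rand(nx, nx), np.random.rand(ny, ny)
V = np.random.rand(nx, ny)
v = V.reshape(-1, order="F")                 # vec(V): stack the columns v_1, ..., v_ny

lhs = np.kron(Ay, Ax) @ v                    # (A_x kron A_y) v in the slide's ordering
rhs = (Ax @ V @ Ay.T).reshape(-1, order="F") # vec(A_x V A_y^T)
assert np.allclose(lhs, rhs)
```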

24 Kronecker product and circulant matrices: let $A_x$, $A_y$ be circulant; then
$$A = A_x \otimes I_y + I_x \otimes A_y = X_x^{-1}D_xX_x \otimes X_y^{-1}X_y + X_x^{-1}X_x \otimes X_y^{-1}D_yX_y = (X_x^{-1} \otimes X_y^{-1})(D_x \otimes I_y + I_x \otimes D_y)(X_x \otimes X_y) = X^{-1}DX$$
with
$X = X_x \otimes X_y$ (the 2D DFT matrix),
$D = D_x \otimes I_y + I_x \otimes D_y$ (a diagonal matrix),
where $X_x$, $X_y$ are the DFT matrices, $D_x = \operatorname{diag}(X_x a_x)$, $D_y = \operatorname{diag}(X_y a_y)$, and $a_x = A_x(:,1)$, $a_y = A_y(:,1)$, respectively.

25 Multiplying procedure $A^\dagger v = X^{-1}\bigl(D^\dagger(Xv)\bigr)$:
0. $d_x := \mathrm{fft}(a_x)$, $d_y := \mathrm{fft}(a_y)$, $V := \operatorname{vec}^{-1}(v)$
1. $V := \mathrm{fft}(V)$ (column-wise FFT)
2. $V := \mathrm{fft}(V^T)^T$ (row-wise FFT)
3. $V := \operatorname{vec}^{-1}\bigl(D^\dagger\operatorname{vec}(V)\bigr)$
4. $V := \mathrm{ifft}(V)$
5. $V := \mathrm{ifft}(V^T)^T$
$A^\dagger v := \operatorname{vec}(V)$
Number of arithmetic operations: $O\bigl(2n(\log_2 n_x + \log_2 n_y) + n\bigr) \approx O(n\log_2 n)$, $n = n_x n_y$.
Multiplying procedures $N\alpha$, $N^Tv$, $M\alpha$, $M^Tv$ ... analogous.
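
A NumPy sketch of this 2D procedure for A = A_x ⊗ I_y + I_x ⊗ A_y with circulant factors, using fft2/ifft2 for steps 1-2 and 4-5; under the slide's conventions the eigenvalues of A on the 2D frequency grid are d_x(i) + d_y(j).

```python
import numpy as np

def apply_pinv_2d(ax, ay, v, tol=1e-12):
    """A^dagger v for A = A_x kron I_y + I_x kron A_y, with A_x, A_y circulant
    (first columns ax, ay) and v = vec(V) in column-major (Fortran) ordering."""
    nx, ny = len(ax), len(ay)
    dx, dy = np.fft.fft(ax), np.fft.fft(ay)
    D = dx[:, None] + dy[None, :]                  # eigenvalues on the n_x x n_y grid
    Dp = np.zeros_like(D)
    nz = np.abs(D) > tol * np.abs(D).max()
    Dp[nz] = 1.0 / D[nz]                           # componentwise pseudoinverse

    V = v.reshape(nx, ny, order="F")               # vec^{-1}(v)
    W = np.fft.ifft2(Dp * np.fft.fft2(V))          # X^{-1} (D^dagger (X v))
    return W.real.reshape(-1, order="F")
```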

26 OUTLINE
Preliminaries on saddle-point systems
Projected Schur complement method
Projected BiCGSTAB algorithm
Hierarchical multigrid
Poisson-like solver for singular matrices
Numerical experiments

27 Model problem: $\Omega = (0,1) \times (0,1)$, $\omega = \{(x,y) \in \mathbb{R}^2 : (x-0.5)^2 + (y-0.5)^2 \le \dots\}$,
$$-\Delta u = f \text{ in } \omega, \qquad u = g \text{ on } \gamma,$$
where the right-hand sides $f$, $g$ are chosen according to the exact solution $\hat u_{ex}(x,y) = 100\bigl((x-0.5)^3 - (y-0.5)^3\bigr)x^2$. The auxiliary boundary $\Gamma$ is obtained by shifting $\gamma$ in the normal direction with $\delta = 8h$.
Figure 1: Geometry of $\omega$. Figure 2: Right-hand side $f$. Figure 3: Exact solution $\hat u_{ex}$ on $\omega$.

28 Classical FDM / New FDM with PSCM / New FDM with PSCM + Multigrid: tables of mesh step $h$, problem sizes $n/m$, iteration counts, computational times [s], the errors $\mathrm{Err}_{L^2(\omega)}$, $\mathrm{Err}_{H^1(\omega)}$, $\mathrm{Err}_{L^2(\partial\omega)}$, and the resulting convergence rates.

29 New FDM: BiCGSTAB iterations and discretization error — table comparing, for each mesh step $h$, the iteration counts and $\mathrm{Err}_{H^1(\omega)}$ of the classical FDM, the new FDM, and the new FDM + multigrid, together with the convergence rates.
Figure 1: $H^1(\omega)$-error sensitivity on $\delta$. Figure 2: $\operatorname{cond}\bigl(P_1F|_{\operatorname{Ker}G_2}\bigr)$ sensitivity on $\delta$.

30 Conclusions
- Efficient implementation of the FDM with an auxiliary boundary.
- The saddle-point system is solved by the PSCM (a non-symmetric analogue of FETI).
- A fast Poisson-like solver for singular matrices is used.
- The fast implementation is matrix-free.

31 References
R. Glowinski, T. Pan, J. Periaux (1994): A fictitious domain method for Dirichlet problem and applications. Comput. Meth. Appl. Mech. Engrg. 111.
C. Farhat, J. Mandel, F.-X. Roux (1994): Optimal convergence properties of the FETI domain decomposition method. Comput. Methods Appl. Mech. Engrg. 115.
R. Kučera (2005): Complexity of an algorithm for solving saddle-point systems with singular blocks arising in wavelet-Galerkin discretizations. Appl. Math. 50.
J. Haslinger, T. Kozubek, R. Kučera, G. Peichl (2007): Projected Schur complement method for solving non-symmetric systems arising from a smooth fictitious domain approach. In preparation.
