Krylov methods for the solution of parameterized linear systems in the simulation of structures and vibrations: theory, applications and challenges


1 Krylov methods for the solution of parameterized linear systems in the simulation of structures and vibrations: theory, applications and challenges Karl Meerbergen, K.U. Leuven. Autumn School on Model Order Reduction, September 21-25, 2009

2 Outline 1 Motivation; 2 Overview of methods (modal truncation, vector-Padé approximation, frequency sweeping, input/output MOR); 3 Lanczos method; 4 Rayleigh damping; 5 Nonproportional damping; 6 Multiple right-hand sides; 7 Software; 8 Future work; 9 Conclusions. Karl Meerbergen (K.U. Leuven) Parameterized linear systems MOR - September / 109

3 Collaborators Zhaojun Bai, Yao Yue, Maryam Saadvandi, Jeroen De Vlieger


5 Examples of vibrating systems Car tyres; windscreens: structural damping, choice of the connection (glue) to the car

6 Examples of vibrating systems Planes; a bridge over the Thames vibrating under footsteps and wind

7 Examples of vibrating systems Maxwell equations: electrical circuits; micro-gyroscopes for navigation systems

8 Finite element analysis Numerical simulation of vibration problems. Spatial (finite element) discretization: Mẍ(t) + Cẋ(t) + Kx(t) = f(t), with initial values x(0) and ẋ(0). f and x: vectors of length n; K, C and M: n × n sparse matrices. In real applications n varies from 10³ to over

9 Fourier analysis If f(t) = f e^{iωt}, then (under certain conditions) for t → ∞, x(t) = x e^{iωt}, where (K + iωC − ω²M)x = f. The engineer is usually interested in the periodic regime solution, i.e. after a long integration time. Material properties are often frequency dependent.

10 Fourier analysis (K + iωC − ω²M)x = f. x is called the frequency response function. Compute x for ω = ω_1, ..., ω_p.
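The naive frequency sweep can be sketched directly: one linear solve per frequency. This is a minimal dense example with hypothetical 2-DOF matrices and light proportional damping (an assumption for illustration); real models are large and sparse.

```python
import numpy as np

# Hypothetical 2-DOF system (illustrative values, not from the slides)
K = np.array([[2.0, -1.0], [-1.0, 2.0]])   # stiffness
M = np.eye(2)                              # mass
C = 0.05 * K                               # light proportional damping (assumption)
f = np.array([1.0, 0.0])                   # point load

def frf(omega):
    """One frequency response sample: solve (K + i*w*C - w^2*M) x = f."""
    return np.linalg.solve(K + 1j * omega * C - omega**2 * M, f)

omegas = np.linspace(0.0, 2.0, 5)
X = np.array([frf(w) for w in omegas])     # one full solve per frequency
```

Each sample costs a full (sparse, in practice) solve; the rest of the talk is about amortizing that cost over all frequencies.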

11 Acoustic industrial applications: vibro-acoustics Vibrating structure (modelled by structural modes); acoustic domain (finite elements); acoustic radiation towards infinity (infinite elements). The structure is modelled by modes (eigenfunctions).

12 Acoustic industrial applications: vibro-acoustics The linear system has three parts: fluid (finite elements), radiation to infinity (infinite elements), structure (modes). [Diagram: block matrix with modes, FE and IFE blocks]

13 Acoustic industrial applications: aero-acoustics Airplane nacelle; turning engine modelled by rotating modes; acoustic domain with modeling of the flow (finite elements); acoustic radiation towards infinity (infinite elements). Usually a few frequencies only: MOR not required. Often large linear systems: 1M dofs or more.

14 Infinite elements Acoustic radiation towards infinity: no finite elements, but infinite elements. Index-1 differential algebraic equation (DAE) Mẍ(t) + Cẋ(t) + Kx(t) = f(t) with M = [M_1 0; 0 0], C = [C_1 C_{1,2}; C_{2,1} C_2] and K = [K_1 K_{1,2}; K_{2,1} K_2]. Some models are unstable (zero blocks become nonzero), but those correspond to high-frequency unphysical modes. Unsuitable for time integration!

15 Traditional frequency response computation 1. For ω = ω_1, ..., ω_p: 1.1. Solve the linear system (K + iωC − ω²M)x = f for x. For each frequency, a large system of algebraic equations needs to be solved. This requires a linear solver for a large sparse matrix. For a direct solver (based on LU factorization): a sparse matrix factorization LU = K − ω²M + iωC (expensive) and a backward solve LUx = f (relatively cheap). Note: no output. The goal is to reduce the number of matrix factorizations.

16 Linear system solvers The discretization error depends on the largest frequency: a larger frequency means a finer mesh. Direct linear system solver: up to 1M dofs, no problem. For a complex-valued system from a 3D volume discretization with 100,000 dofs, the direct solution time is of the order of 10 seconds. Iterative linear system solver: over the last ten years, effective preconditioners for the Helmholtz equation have been developed. Iterative methods can be seen as validation of the model. AMLS: automated multilevel substructuring

17 Damping models No damping: C = 0. Most general situation: C is frequency dependent (e.g. porous materials, foams): properties are measured for different frequencies; large amount of uncertainty about the damping parameters; stochastic methods may be appropriate; often a constant C is fine for a large frequency range as long as the tendency is right. Proportional damping: eigenvectors do not change with the damping; simple model, often used for a zero-order analysis; is often a result of measurements; valid for small damping; only valid for specific materials (glass, concrete, steel, ...)


19 Overview of methods Consider (K − ω²M)x = f with K and M large sparse, real symmetric matrices, M positive definite, and f independent of ω (typically point loads). Three basic methods: modal truncation; Padé approximation; mixed direct-iterative procedure

20 Modal truncation Consider the eigendecomposition Ku_j = λ_j Mu_j. The solution of (K − ω²M)x = f is x = Σ_{j=1}^n u_j (u_j^T f)/(λ_j − ω²), a rational function with poles λ_j.
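The modal formula above can be sketched on a small dense pencil. For simplicity this sketch assumes M = I, so the M-orthonormal eigenvectors are plain orthonormal ones; a general mass matrix would need a generalized eigensolver (e.g. scipy.linalg.eigh(K, M)).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
Q = rng.standard_normal((n, n))
K = Q @ Q.T + n * np.eye(n)        # symmetric positive definite stiffness
M = np.eye(n)                      # identity mass (simplifying assumption)
f = rng.standard_normal(n)

lam, U = np.linalg.eigh(K)         # K u_j = lambda_j u_j since M = I

def modal_solution(omega2, k):
    """Sum the first k modal terms u_j (u_j^T f) / (lambda_j - omega^2)."""
    c = U[:, :k].T @ f
    return U[:, :k] @ (c / (lam[:k] - omega2))

x_trunc = modal_solution(0.5, 3)   # k-term truncation
x_full = modal_solution(0.5, n)    # all n terms: the exact solution
```

With all n terms the sum reproduces (K − ω²M)^{-1} f exactly; truncation keeps only the modes whose poles lie near the frequency range of interest.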

21 Modal superposition, cont. x = Σ_{j=1}^n u_j (u_j^T f)/(λ_j − ω²) ≈ Σ_{j=1}^k u_j (u_j^T f)/(λ_j − ω²) [Plot: exact response ("undamped") versus modal truncation with k = 10 and k = 7]

22 Vector-Padé approximation Approximation of x = (K − αM)^{-1} f by x̃ = (x_0 + αx_1 + ··· + α^{k−1}x_{k−1}) / ((α − λ_1) ··· (α − λ_k)). This is a rational function with k poles. Determine the coefficients so that the first k derivatives at σ match.

23 Frequency sweeping For each ω, precondition (K − ω²M)x = f into (K − σM)^{-1}(K − ω²M)x = (K − σM)^{-1} f and solve by an iterative method. Use a linear system solver for applying (K − σM)^{-1}. For the AMLS method, K − σM is a diagonal matrix.

24 Input-output system SISO: (K − ω²M)x = b, y = d^T x. Compute y accurately and fast; use MOR as a fast solver. Often many outputs (100s or 1000s); two-sided methods (MOR) are not often used in this case.

25 Summary Modal truncation: x ≈ Σ_{j=1}^k u_j (u_j^T f)/(λ_j − α). Padé approximation: x̃ = (x_0 + αx_1 + ··· + α^{k−1}x_{k−1}) / ((α − μ_1) ··· (α − μ_k)). Frequency sweeping: solve (K − ω²M)x = f by an iterative method. MOR: find a reduced model for the linear system (K − ω²M)x = b, y = d^T x.

26 Notation Assume σ = 0 and define α = ω², A = (K − σM)^{-1}M and b = (K − σM)^{-1}f; then we solve (K − αM)x = f, or (I − αA)x = b. Eigenvalue problem: Ku_j = λ_j Mu_j. Assume A symmetric.


28 Lanczos method Krylov space: span{b, Ab, ..., A^{k−1}b}. The Lanczos method builds an orthogonal basis V_k = [v_1, ..., v_k] with Range(V_k) = span{b, Ab, ..., A^{k−1}b}, and a tridiagonal matrix T_k = V_k^T A V_k. Major cost: k matrix-vector products with A: w = Av; small cost when k is small. Also called the Ritz vector technique (mechanical engineering); recall Rixen's talk yesterday.
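The three-term Lanczos process can be sketched in a few lines. This is a dense NumPy sketch with full reorthogonalization for numerical safety; production codes use sparse matrix-vector products and selective reorthogonalization.

```python
import numpy as np

def lanczos(A, b, k):
    """Build V_k with orthonormal columns spanning the Krylov space of (A, b)
    and the tridiagonal T_k = V_k^T A V_k."""
    n = len(b)
    V = np.zeros((n, k))
    alpha = np.zeros(k)
    beta = np.zeros(k - 1)
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        w = A @ V[:, j]                       # the only large-matrix cost
        alpha[j] = V[:, j] @ w
        w = w - alpha[j] * V[:, j]
        if j > 0:
            w = w - beta[j - 1] * V[:, j - 1]
        # full reorthogonalization against all previous vectors
        w = w - V[:, :j + 1] @ (V[:, :j + 1].T @ w)
        if j < k - 1:
            beta[j] = np.linalg.norm(w)
            V[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    return V, T

# Small symmetric demo
rng = np.random.default_rng(5)
A = rng.standard_normal((20, 20)); A = A + A.T
b = rng.standard_normal(20)
V, T = lanczos(A, b, 8)
```

The demo builds an 8-dimensional basis of a 20-dimensional space; the k × k tridiagonal T is the small-matrix stand-in for A used throughout the talk.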

29 Lanczos method Transform a large-size matrix into a small-size matrix: T_k = V_k^T A V_k

30 Shift-invariance property The Krylov spaces {v, Av, A²v, ...} and {v, (A + αI)v, (A + αI)²v, ...} are equal, since (A + αI)v = Av + αv. Applying the Lanczos method to A applies it for free to A + αI for all α. Similarly, the Lanczos method applied to A produces the same Krylov space as the Lanczos method applied to I − αA, provided α ≠ 0.
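Shift invariance is easy to check numerically on a small example: stacking the Krylov basis of A next to that of A + αI does not increase the rank, so the two subspaces coincide.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6))
A = A + A.T                       # symmetric test matrix
v = rng.standard_normal(6)
alpha, k = 2.5, 3

def krylov(B):
    """Columns v, Bv, ..., B^{k-1} v."""
    return np.column_stack([np.linalg.matrix_power(B, j) @ v for j in range(k)])

K1 = krylov(A)
K2 = krylov(A + alpha * np.eye(6))
# span(K1) == span(K2): the stacked matrix has the same rank as each factor
```

The same check works for I − αA with α ≠ 0, which is why one Lanczos run serves all shifts.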

31 Shifted or parameterized linear systems Analyzed in the context of model reduction methods (connection with rational approximation): [Gallivan, Grimme, Van Dooren 1994], [Feldman, Freund 1995], [Gallivan, Grimme, Van Dooren 1996], [Grimme, Sorensen, Van Dooren 1996], [Ruhe & Skoogh 1998], [Bai & Freund 2000], [Bai & Freund 2001], [Bai & Su 2006]; and in the context of parameterized linear systems: [Freund 1993], [Frommer & Glässner, 1993], [Simoncini & Gallopoulos 1998], [Simoncini, 1999], [Simoncini & Perotti 2002], [M. 2003], [Edema, Vuik 2008]

32 Undamped vibration problem When A = K^{-1}M, A is non-symmetric. However, x^T MAy = y^T MAx for all x, y, so A is self-adjoint in the M inner product. Use the Lanczos method with M-orthogonalization: V_k^T M V_k = I. Matrix-vector products with A: one matrix factorization K = LDL^T and k solves of the form LDL^T w = Mv

33 Iterative solver connection For the solution of (I − αA)x = b, build a Krylov space of dimension k with matrix A and starting vector b (i.e. independent of α). Compute a solution of the form x̃ = V_k z = Σ_{j=1}^k v_j ζ_j so that the residual is orthogonal to the Krylov space: V_k^T (b − (I − αA)x̃) = 0 ⇔ V_k^T (v_1 ‖b‖ − (I − αA)V_k z) = 0 ⇔ e_1 ‖b‖ − (I − αT_k)z = 0, i.e. (I − αT_k)z = e_1 ‖b‖. Conjugate gradients or Lanczos.
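The projection above can be sketched compactly. For brevity this sketch orthonormalizes the Krylov matrix with QR instead of running the Lanczos recurrence; the subspace, and hence the Galerkin solution for every shift α, is the same.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 12, 6
A = rng.standard_normal((n, n))
A = 0.1 * (A + A.T)               # symmetric, modest norm
b = rng.standard_normal(n)

# One Krylov basis serves every shift alpha.
Kmat = np.column_stack([np.linalg.matrix_power(A, j) @ b for j in range(k)])
V, _ = np.linalg.qr(Kmat)
T = V.T @ A @ V                   # tridiagonal for a Lanczos basis
Vb = V.T @ b                      # equals e_1 * ||b|| for a Lanczos basis

def solve_shifted(alpha):
    """Galerkin solution of (I - alpha*A)x = b from the shared basis:
    a k x k solve per shift instead of an n x n solve."""
    z = np.linalg.solve(np.eye(k) - alpha * T, Vb)
    return V @ z
```

Each new α costs only a k × k solve; the expensive part (building the subspace) is done once.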

34 Lanczos convergence Let Ku_j = λ_j Mu_j. The eigenvalues of K^{-1}(K − ω²M) are θ_j = (λ_j − ω²)/λ_j, clustered around one. When there are no eigenvalues λ between 0 and ω², we have a positive definite linear system. Fast convergence when most eigenvalues are clustered around one: ω close to 0

35 MINRES versus Lanczos Lanczos: x̃ = Σ_j y_j/(α − μ_j): vertical asymptotes. MINRES: x̃ = Σ_j y_j(α)/(α − μ_j(α)): the denominator is never zero, so no vertical asymptotes

36 Example [Plot: frequency response computed by Lanczos and by MINRES]

37 Padé connection Recall (I − αA)x = b. The solution computed by the Lanczos method can be written as x̃ = (x_0 + αx_1 + ··· + α^{k−1}x_{k−1}) / ((α − μ_1) ··· (α − μ_k)), where x̃^{(j)}(0) = x^{(j)}(0) for j = 0, ..., k − 1

38 Eigenvalue connection The Lanczos method produces eigenvalue estimates in a similar way as the linear solves. Let Au = θu. Then choose ũ = V_k z so that the residual is orthogonal to the Krylov space: V_k^T (Aũ − θ̃ũ) = 0 ⇔ V_k^T (AV_k z − θ̃V_k z) = 0 ⇔ T_k z = θ̃z. For the Ku = λMu problem: ‖Kũ − λ̃Mũ‖ is small for the small λ's.

39 Eigenvalue connection As for modal truncation, we can project K, M and f on the Ritz vectors. We can show that the Lanczos method computes x̃ = Σ_{j=1}^k ũ_j (w_j^T f)/(λ̃_j − α), where ũ_j is a Ritz vector. There are k terms, so we can only compute k vertical asymptotes in the function: the number of eigenvalues in the frequency range should be smaller than k.

40 Eigenvalue connection: example Hard problem: more than 10,000 eigenvalues. Easy problem: fewer than 20 eigenvalues

41 Numerical example: BMW Windscreen Glaverbel-BMW windscreen. Grid: 3 layers of HEX08 elements (n = 22,692); unit point force at one of the corners; wanted: displacement for ω ∈ [0.5 Hz, 200 Hz].

42 Numerical example: BMW Windscreen [Plots: error versus frequency for k = 10 and k = 20 Lanczos vectors]

43 Industrial example with NASTRAN Traditional computation: for each frequency, perform a factorization of K − ω²M and solve. Lanczos computation: one matrix factorization of K − σM and k backward solves.


45 Damping Damping often introduces a C term in the equation: (K − ω²M + iωC)x = f. If damping is global, i.e. in the fluid or the structure itself, we often have Rayleigh damping, e.g. structural damping in a windscreen in order to reduce vibrations. We make the damping ω-dependent: (K − ω²M + D(ω))x = f. Rayleigh damping: D = γK + δM. f is independent of ω

46 No damping D(ω) ≡ 0. Linear system: (K − ω²M)x = f. Corresponding eigenvalue problem: Ku = λMu

47 Structural Rayleigh damping D(ω) = iγK. Linear system: ((1 + iγ)K − ω²M)x = f. Corresponding eigenvalue problem: (1 + iγ)Ku = λMu

48 Fluid Rayleigh damping D(ω) = iω(α_0 M + α_1 K). Linear system: (K + iω(α_0 M + α_1 K) − ω²M)x = f. Corresponding eigenvalue problem: (K + iλ(α_0 M + α_1 K) − λ²M)u = 0

49 Modal superposition Define U = [u_1, ..., u_n] and Λ = diag(λ_1, ..., λ_n), so KU = MUΛ. With D = βK + γM: U^T MU = I, U^T KU = Λ, U^T DU = βΛ + γI

50 Modal superposition, cont. Without damping: the solution of (K − ω²M)x = f is x = Σ_{j=1}^n u_j (u_j^T f)/(λ_j − ω²). With damping: simultaneous diagonalization of K, M, and D; the solution of (K − ω²M + D)x = f is x = Σ_{j=1}^n u_j (u_j^T f)/(λ_j − ω² + ζ_j(ω))

51 Modal superposition, cont. x = Σ_{j=1}^n u_j (u_j^T f)/(λ_j − ω² + ζ_j) ≈ Σ_{j=1}^k u_j (u_j^T f)/(λ_j − ω² + ζ_j) [Plot: exact damped response ("damped") versus modal truncation with k = 10 and k = 7]

52 Lanczos method For similar reasons, we can use the Lanczos method [M. 2008]. Compute V_k, T_k for A = K^{-1}M with starting vector b = K^{-1}f (real arithmetic). For each frequency, solve the k × k tridiagonal system (complex arithmetic) V_k^T MK^{-1}(K − ω²M + D(ω))V_k z = e_1 ‖b‖

53 Numerical example: BMW Windscreen Glaverbel-BMW windscreen with 10% structural damping. Direct method: 2653 seconds (complex arithmetic). Lanczos method: 14 seconds (mostly real arithmetic). [Plot: frequency response]


55 Nonproportional damping (K + iωC − ω²M)x = f. Damping is not proportional: K, C and M cannot be diagonalized simultaneously. Linearization: define matrices A = [K iC; 0 I] and B = [0 M; I 0] so that (A − ωB)(x; ωx) = (f; 0). This is called a linearization, a similar trick as for the solution of second-order ODEs.
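A quick numerical check that a 2n × 2n linearization reproduces the quadratic solve. This is a small dense sketch; the block layout A = [K iC; 0 I], B = [0 M; I 0] is one standard companion-type choice, easily verified by multiplying out the block rows.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
Q = rng.standard_normal((n, n))
K = Q @ Q.T + n * np.eye(n)        # symmetric positive definite
M = np.eye(n)
C = rng.standard_normal((n, n)); C = 0.1 * (C + C.T)
f = rng.standard_normal(n)
omega = 1.3

Z = np.zeros((n, n))
A = np.block([[K, 1j * C], [Z, np.eye(n)]])
B = np.block([[Z, M], [np.eye(n), Z]])

# Solve the linearized system (A - omega*B) y = (f; 0)
y = np.linalg.solve(A - omega * B, np.concatenate([f, np.zeros(n)]))
x_lin = y[:n]                      # first block is x, second block is omega*x

# Direct quadratic solve for comparison
x_quad = np.linalg.solve(K + 1j * omega * C - omega**2 * M, f)
```

Row 1 of (A − ωB) gives Kx + iωCx − ω²Mx = f and row 2 enforces the second block equal to ωx, so both routes agree.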

56 Linearizations Linearizations have been studied for the solution of the quadratic eigenvalue problem (K + λC + λ²M)u = 0 [Gohberg, Lancaster, Rodman, 1982] [Tisseur, M. 2001]. Suppose that K, C and M are symmetric; then we can choose, e.g., A = [−K 0; 0 M] and B = [C M; M 0]: both symmetric, but both indefinite, so the (symmetric) Lanczos method cannot be used.

57 Methods [Parlett & Chen 1990]: pseudo-Lanczos method (pretends B is positive definite). [Simoncini et al., 2005]: similar. [Freund, 2005]: analysis of Krylov spaces. [Bai & Su, 2005]: SOAR, based on Arnoldi's method

58 Structure preserving? Krylov spaces for linearizations of the two-parameter family (weights η_1, η_2) of [Mackey, Mackey, Mehl, Mehrmann, 2006]. The Arnoldi method for this linearization produces a Krylov space with [K^{-1}C K^{-1}M; I 0]. Structure preservation is not so easy from the point of view of the Krylov method; it is possible on the level of the projection (SOAR)


60 Multiple right-hand sides (K − ω²M)[x_1, ..., x_s] = [f_1, ..., f_s] for ω ∈ Ω = [ω_min, ω_max]. Methods: use the Lanczos method for each f_j separately (low memory cost, but the cost is proportional to s); use the block-Lanczos method for all f_j together (fast method, high memory cost); recycle Ritz vectors in Krylov methods [Giraud, Ruiz & Touhami, 2006] [Kilmer & de Sturler 2006] [Darnell, Morgan, Wilcox 2007] [Stathopoulos & Orginos, 2009] [Bai & M. 2008]

61 Frequency sweeping For ω_j, solve (K − ω_j²M)x_j = f iteratively. Speed up by preconditioning into (K − σM)^{-1}(K − ω_j²M)x_j = (K − σM)^{-1}f and by using x_{j−1} as starting vector. Assume that a number of eigenvectors is given: speed up the iterative process (see Daniel Rixen yesterday)

62 Frequency sweeping with modal acceleration The solution of (K − ω²M)x = f (1) for ω ∈ Ω is split into two parts. Let U_p = [u_1, ..., u_p] be the eigenvectors corresponding to the eigenvalues in Ω². Compute x_p = Σ_{j=1}^p u_j (u_j^T f)/(λ_j − ω²). Solve (1) iteratively using starting vector x_p, i.e. x = x_p + y with y the solution of (K − ω²M)y = f − (K − ω²M)x_p = (I − MU_p U_p^T)f

63 Preconditioning Preconditioner for the remainder system: (K − σM)^{-1}. The preconditioned system is Ay = b with b = (I − U_p U_p^T M)(K − σM)^{-1}f and A = (K − σM)^{-1}(K − ω²M). y is also the solution of By = b with B = (I − U_p U_p^T M)(K − σM)^{-1}(K − ω²M)(I − U_p U_p^T M)

64 Spectral analysis of the Lanczos method [Diagram: eigenvalues of Kx = λMx in [ω²_min, ω²_max]; eigenvalues of A, clustered near 1] Most eigenvalues of A lie near one. The number of required iterations is the number of isolated eigenvalues of A away from one. Convergence for all ω² ∈ Ω² requires the number of iterations, k, to be at least the number of eigenvalues in Ω².

65 Spectral analysis of the deflated iterative method [Diagram: eigenvalues of Kx = λMx; eigenvalues of A, with the deflated ones marked] Let B be the restriction of A to the M-orthogonal complement of span{U_p}. Black eigenvalues only: B is positive definite; the eigenvalues of B are clustered around 1; the spectral radius of I − B is smaller than one

66 Inexact deflation Be careful with deflation with Ritz vectors [Darnell, Morgan, Wilcox 2007]. The reason is that the system's residual need not be small, and the direction may depend on ω: r(ω) = R_p z_p(ω) + v_{k+1} ζ_k(ω), with R_p the residual vectors of the deflated Ritz vectors and v_{k+1} the (k+1)st Lanczos vector.

67 Eigenvalue solver Use Ritz pairs of the (spectral transformation) Lanczos method [Grimes, Lewis, Simon 94]. No exact eigenpairs, but interesting properties, as we now see:

68 Convergence of Ritz vectors Eigenvalues in Ω² are computed fairly accurately. If ‖Ax̂ − b‖_M ≤ γ‖x̂‖_M for all ω ∈ Ω, then ρ_j = ‖Aû_j − θ̂_j û_j‖_M with ρ_j ≤ γ|λ_j − σ|

69 Padé via Lanczos Lanczos method: x̂ = Σ_{j=1}^k û_j (û_j^T f)/(λ̂_j − ω²); the first k moments of x̂ and x match. With deflation of simple eigenvalues: x̂ = Σ_{j=1}^p u_j (u_j^T f)/(λ_j − ω²) + Σ_{j=p+1}^k û_j (û_j^T f)/(λ̂_j − ω²); the first k − p moments of x̂ and x match, with interpolation in the p deflated eigenvalues.

70 Applications AMLS frequency sweeping. Multiple right-hand sides: parameterized Lanczos for right-hand side 1; keep the Ritz vectors; recycle the Ritz vectors for subsequent right-hand sides. Changing σ: recycle Ritz vectors for the new pole.

71 Windscreen Glaverbel-BMW windscreen. Grid: 3 layers of HEX08 elements (n = 22,692); Ω = [0, 100]. First run: unit point force at one of the corners. Use the Lanczos method with k = 20 vectors. We keep the Ritz values in [0, ]: p = 14

72 Windscreen Second run with other right-hand side Perform 6 additional Lanczos steps The largest κ(B) is Six iterations reduce the error in the M ˆB norm by

73 Acoustic cavity n = 48,158; frequency range: [0, 10000]; 202 right-hand sides. Matrix factorization: 8 seconds. Lanczos method with 40 vectors: 6 seconds

74 Acoustic cavity (cont.) 2nd right-hand side: keep the 31 Ritz values in [0, ]. 9 additional Lanczos iterations recycling 31 Ritz vectors: 2 seconds. [Plots: exact versus recycling; exact versus Lanczos k = 50; exact versus Lanczos k = 19]

75 Acoustic cavity n = 140,228; frequency range: [0, 10000]; 202 right-hand sides. Matrix factorization: 13 seconds. Lanczos method with 50 vectors: 15 seconds. Recycling 36 vectors: only 4 seconds. For 201 right-hand sides: 800 instead of 3000 seconds.

76 Multiple eigenvalues 3D Laplacian on a cube. 30 Lanczos iterations with the first right-hand side; recycle 22 Ritz pairs; run 8 iterations with the second right-hand side. [Plots: Lanczos for f_1; recycling for f_2; Lanczos k = 8 for f_2]

77 Conclusions Solving parameterized linear systems with multiple right-hand sides can benefit from recycling Ritz vectors. Does not work well when eigenvalues are multiple. Also works for Rayleigh damping. The extension to block methods [Robbé & Sadkane] is straightforward


79 Software Software developed for FFT.Actran and MSC.Nastran: shifting of K, C and M for faster convergence; multiple right-hand sides; Arnoldi and Lanczos; possibility of out-of-core storage of the iteration vectors; error estimation

80 Software x is computed for a discrete number of ω's. We assume that ω_1 < ω_2 < ··· < ω_m. 1. Build a Krylov subspace of dimension k for pole σ = ω_1. 2. For i = 1, ..., m: 2.1. Compute x(ω_i) from the Krylov subspace. 2.2. If the solution is not accurate enough, pick a new σ and build a new Krylov subspace. 3. The choice of k depends on the ratio of the cost of the sparse matrix factorization to the cost of the backward solves
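The sweep-with-restart loop above can be sketched abstractly. All function names here (build_subspace, evaluate, residual) are illustrative placeholders, not the actual FFT.Actran or MSC.Nastran interfaces; the point is the control flow: one factorization per pole, cheap evaluations in between.

```python
import numpy as np

def sweep(build_subspace, evaluate, residual, omegas, k, tol=1e-8):
    """Reuse one Krylov subspace across frequencies; rebuild it at a new
    pole sigma only when the accuracy check fails."""
    sigma = omegas[0]
    space = build_subspace(sigma, k)        # expensive: one factorization
    X = []
    for w in omegas:
        x = evaluate(space, w)              # cheap: k x k solve
        if residual(x, w) > tol:            # not accurate -> new pole
            sigma = w
            space = build_subspace(sigma, k)
            x = evaluate(space, w)
        X.append(x)
    return X

# Toy demo with an exact stand-in solver (purely illustrative)
A = np.array([[2.0, 0.0], [0.0, 3.0]])
f = np.array([1.0, 1.0])
build = lambda sigma, k: sigma
ev = lambda space, w: np.linalg.solve(A - w * np.eye(2), f)
res = lambda x, w: np.linalg.norm((A - w * np.eye(2)) @ x - f)
X = sweep(build, ev, res, [0.5, 1.0], k=5)
```

In the real solver the residual check drives the adaptive pole selection discussed on the next slides.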

81 Pole selection First pole: σ = ω_1. If x(ω_l) did not converge, pick a new pole, not too close to ω_l and not too far: σ_j = ω_l + τ_σ(ω_{l−1} − σ_{j−1}) [Diagram: σ_{j−1} < ω_{l−1} < ω_l < σ_j]

82 More on pole selection Similar to eigenvalue computations by Krylov methods: a pole close to an eigenvalue: one eigenvalue converges; a pole away from the eigenvalues: slow but steady convergence. The eigenvalues that matter are computed.

83 Example on pole selection Mushroom model. Comparison between poles for 100 iterations. [Plots: residual for σ = 300 and for a complex pole σ]

84 Storage Evaluation of x(ω). Storage of the iteration vectors v_j: k is fixed (determined by the available memory and execution time); k iteration vectors need to be stored; the ω_i can be selected in a flexible way. Storage of x(ω_i): update x(ω_i) for i = 1, ..., m at each iteration of the Krylov method; the number of Krylov steps need not be selected beforehand; all x(ω_i) need to be stored, sometimes additional vectors too.

85 Reorthogonalization The Lanczos process builds an M-orthogonal basis in exact arithmetic. Influence of reorthogonalization: [Plot: error for "exact", "noreorth" and "reorth"] With reorthogonalization, we can solve for more frequencies

86 Finite precision Similar to eigenvalue computations. Solution of linear systems: the choice of σ; not close to an eigenvalue, which blows up the error. Lanczos method: orthogonalization. Error estimation of the solution: on iteration j, (K − σM)w_j = Mv_j + f_j with ‖f_j‖_2 ≤ u(‖Mv_j‖ + ‖K − σM‖ ‖w_j‖). For the recurrence relation this implies (K − σM)^{-1}MV_k − V_{k+1}T_k = E_k with E_k e_j = (K − σM)^{-1}f_j, so ‖E_k‖_2 ≲ u κ(K − σM).

87 Example The figure shows the error norm. [Plot: curves "exact", "error-10", "ERR4-10", "ERR330-10"]

88 Acoustic box ( m m 0.55m) with walls covered with carpet. K, C and M have order n = 13,623. Wanted frequencies: ω ∈ {600, 605, 610, ..., 1500}; k = 40. [Table: number of factorizations and time for Arnoldi, Lanczos and the direct method] Speed-up of Arnoldi is 14. Loss of orthogonality of the Lanczos vectors: reorthogonalization

89 Example from vibro-acoustics Cube filled with air (source: Free Field Technologies) with a steel plate inside. The faces have infinite elements for radiation to infinity. Point load on the plate. Dimension is n = 36,816.

90 For each frequency: (K + iωC − ω²M)x = f. The solution for 250 frequencies by the direct method costs 186 min. With Arnoldi's method, we only need 4.6 min.

91 3D head phone model Coupling of structure and acoustics. Problem of dimension 63, iteration vectors in Arnoldi's method with an out-of-core linear solver. For ω = 10, ..., 335 Hz. [Table: factorizations: Arnoldi 1, Lanczos (breakdown), direct 326; time (min.)]


93 Future work Increasing model sizes: use of iterative linear system solvers; substructuring. No Krylov methods: POD-type methods; other MOR methods needed. Many (nonlinear) parameters. Uncertainties

94 Many non-linear parameters Increase of computing power: larger models, but also more model parameters that need to be determined. Can be an optimization loop, fully automatic or tuned by hand. Bottom line: model reduction helps to work with these models. Making a reduced model for the entire parameter space is very unlikely to be possible. Therefore: currently the reduced model is fed to a post-processing algorithm. With many parameters, post-processing and model reduction will be mixed; this may lead to new post-processing algorithms (related to the talk by Yao Yue).

95 Parameter uncertainties Currently a hot topic in mechanical engineering, but with many computational challenges. An optimal choice of parameters does not necessarily give information about how good the optimal model is; a sensitivity analysis around the optimum may be wanted. New is that the perturbations can be large (so computing derivatives is not sufficient). When there are many parameters, practical approaches reduce the number of parameters. One idea is to compute the worst-case scenario.

96 Uncertainties
Traditional approach: fix all parameters and perform a deterministic analysis.
Variability (irreducible uncertainty): probability distribution; stochastic methods compute the probability of the result; performed when the design is finished and ready for production.
Uncertainty (reducible uncertainty): early in the design stage; interval analysis or fuzzy numbers; possibilistic methods for evaluating the impact of the uncertainty.
Example: thickness of a plate.

97 Uncertain parameters: worst case Consider a vector of uncertain parameters x ∈ R^m with a core value x₀. We want to compute the output y = f(x) for x ∈ [x]_α, where
[x]_α = { x : ‖x − x₀‖_∞ ≤ α },
for increasing values of α; [x]_α is a hypercube. Define
y⁻_α = min_{x ∈ [x]_α} f(x),   y⁺_α = max_{x ∈ [x]_α} f(x).
This is an analysis that takes into account order-α perturbations of all parameters: a worst case scenario.
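As an illustration of these bounds (not from the slides), here is a minimal Python sketch that brute-forces y⁻_α and y⁺_α on a regular grid over the hypercube [x]_α for a hypothetical scalar output function f:

```python
import itertools
import numpy as np

def worst_case_bounds(f, x0, alpha, points_per_axis=21):
    """Brute-force estimates of y-_alpha and y+_alpha over the hypercube
    [x]_alpha = {x : ||x - x0||_inf <= alpha}, scanned on a regular grid."""
    axes = [np.linspace(c - alpha, c + alpha, points_per_axis) for c in x0]
    values = [f(np.array(p)) for p in itertools.product(*axes)]
    return min(values), max(values)

# Hypothetical output function y = f(x).
f = lambda x: x[0] ** 2 + 2.0 * x[1]
lo, hi = worst_case_bounds(f, x0=[1.0, 0.0], alpha=0.5)
```

Grid scanning is only feasible for very small m; the slides' point is precisely that smarter strategies are needed.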

98 Computing bounds Three approaches:
- Interval arithmetic: use interval arithmetic in all operations (e.g. eigenvalue computation); usually produces an overestimation (too large intervals).
- Design of experiments (DOE): Monte Carlo approach; usually produces an underestimation (too small intervals) [Donders, 2008].
- Optimization problem: a global optimization method for
y⁻_α = min_{x ∈ [x]_α} f(x),   y⁺_α = max_{x ∈ [x]_α} f(x).
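The DOE underestimation is easy to observe numerically: random interior samples of a convex output function (almost surely) never reach the true corner maximum. A small Python sketch, with all data invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical convex output function on a 2-parameter hypercube.
f = lambda x: x[0] ** 2 + x[1] ** 2
x0, alpha = np.zeros(2), 1.0

# DOE / Monte Carlo: sample interior points uniformly.
samples = rng.uniform(x0 - alpha, x0 + alpha, size=(1000, 2))
mc_max = max(f(s) for s in samples)

# The true maximum sits at a corner, f((1, 1)) = 2; the sampled
# maximum stays strictly below it: an underestimate.
true_max = 2.0
```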

99 Computing bounds

100 Vertex methods If f is monotone in every x_j, then the minimum and maximum of f are attained at opposite corners of the α-cut.
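A sketch of the vertex method in Python, assuming a hypothetical f that is monotone in each coordinate; the extrema are then found by evaluating f at the 2^m corners only:

```python
import itertools
import numpy as np

def vertex_bounds(f, x0, alpha):
    """Vertex method: for f monotone in every coordinate, the extrema
    over the alpha-cut are attained at corners of the hypercube."""
    corners = itertools.product(*[(c - alpha, c + alpha) for c in x0])
    values = [f(np.array(v)) for v in corners]
    return min(values), max(values)

# Hypothetical f, monotone (increasing or decreasing) in each coordinate.
f = lambda x: 3.0 * x[0] - x[1]
lo, hi = vertex_bounds(f, x0=[0.0, 0.0], alpha=1.0)   # -> (-4.0, 4.0)
```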

101 Perturbed eigenvalues Let B and E_j, j = 1, ..., m, be real symmetric matrices, and let x ∈ R^m. Define
A(x) = B + Σ_{j=1}^m x_j E_j.
Assume the eigenvalues are ordered such that λ₁(x) ≥ λ₂(x) ≥ ⋯ ≥ λₙ(x) for all x. Denote by v_j, ‖v_j‖₂ = 1, an eigenvector associated with λ_j(x₀).
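A concrete (hypothetical) instance of this setup in Python; numpy's eigvalsh returns eigenvalues in ascending order, so the result is flipped to make lams[0] the largest eigenvalue λ₁:

```python
import numpy as np

def A(x, B, Es):
    """Parameterized symmetric matrix A(x) = B + sum_j x_j E_j."""
    return B + sum(xj * Ej for xj, Ej in zip(x, Es))

# Hypothetical data: one parameter, 2x2 symmetric matrices.
B = np.diag([3.0, 1.0])
Es = [np.array([[0.0, 1.0], [1.0, 0.0]])]

# eigvalsh is ascending; flip so lams[0] is the largest eigenvalue.
lams = np.linalg.eigvalsh(A([0.5], B, Es))[::-1]
```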

102 Smallest (or largest) eigenvalue

103 Derivatives
∂λ_i(x)/∂x_j = v_iᵀ E_j v_i
∂²λ_i(x)/∂x_j² = −2 (∂v_i/∂x_j)ᵀ (A(x) − λ_i I) (∂v_i/∂x_j)
∂²λ₁(x)/∂x_j² = −2 (∂v₁/∂x_j)ᵀ (A(x) − λ₁ I) (∂v₁/∂x_j) ≥ 0
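The first-derivative formula ∂λ₁/∂x_j = v₁ᵀ E_j v₁ can be checked against central finite differences on a small hypothetical one-parameter example (λ₁ kept simple so the derivative exists):

```python
import numpy as np

# Hypothetical symmetric data: A(t) = B + t E, lambda_1 simple at t0.
B = np.diag([5.0, 1.0, 0.0])
E = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 2.0]])

def lam1(t):
    """Largest eigenvalue of A(t)."""
    return np.linalg.eigvalsh(B + t * E)[-1]

t0 = 0.3
w, V = np.linalg.eigh(B + t0 * E)
v1 = V[:, -1]                      # unit eigenvector of lambda_1(t0)
analytic = v1 @ E @ v1             # formula: dlambda_1/dt = v_1^T E v_1
numeric = (lam1(t0 + 1e-6) - lam1(t0 - 1e-6)) / 2e-6
```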

104 Tangent plane as lower bound
t_{x₀}(x) = λ₁(x₀) + Σ_{j=1}^m (x_j − (x₀)_j) vᵀ E_j v,
where v, ‖v‖₂ = 1, is an eigenvector associated with λ₁(x₀).
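A quick numerical check on hypothetical data that the tangent plane of the largest eigenvalue at x₀ never exceeds λ₁(x), which is what its convexity predicts:

```python
import numpy as np

# Hypothetical one-parameter family A(t) = B + t E, B and E symmetric.
B = np.diag([2.0, -1.0])
E = np.array([[0.0, 1.0], [1.0, 0.0]])

def lam1(t):
    """Largest eigenvalue of A(t), a convex function of t."""
    return np.linalg.eigvalsh(B + t * E)[-1]

t0 = 0.0
w, V = np.linalg.eigh(B + t0 * E)
v = V[:, -1]                       # unit eigenvector of lambda_1(t0)
grad = v @ E @ v                   # slope of the tangent at t0

def tangent(t):
    return lam1(t0) + (t - t0) * grad

# The tangent line should stay below lambda_1 everywhere.
checks = [tangent(t) <= lam1(t) + 1e-12 for t in np.linspace(-2.0, 2.0, 9)]
```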

105 Optimization problem Since λ₁ is a convex function of x and x lives on a hypercube domain, the maximum of λ₁ is attained at one of the corner points of [x]_α, and there is one and only one local minimum on each edge (face) of [x]_α. Therefore, we compute the eigenvalues for x at the 2^m corners (for the maximum) and solve a convex optimization problem (for the minimum).
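A sketch of this two-part strategy on an invented two-parameter example: enumerate the 2^m corners for the upper bound, and run a bounded smooth minimization for the lower bound (λ₁ stays simple here, so the problem is smooth; scipy is assumed available):

```python
import itertools
import numpy as np
from scipy.optimize import minimize

# Hypothetical data: A(x) = B + x_1 E_1 + x_2 E_2, all symmetric.
B = np.diag([1.0, -1.0])
Es = [np.array([[0.0, 1.0], [1.0, 0.0]]), np.diag([1.0, 0.0])]

def lam1(x):
    """Largest eigenvalue of A(x)."""
    return np.linalg.eigvalsh(B + x[0] * Es[0] + x[1] * Es[1])[-1]

alpha = 1.0
# Maximum: lambda_1 is convex, so the max sits at a corner of [x]_alpha.
corners = itertools.product([-alpha, alpha], repeat=2)
upper = max(lam1(np.array(c)) for c in corners)
# Minimum: a convex optimization problem over the hypercube.
res = minimize(lam1, x0=np.zeros(2), bounds=[(-alpha, alpha)] * 2)
lower = res.fun
```

For this data the corner scan gives upper = (1 + √13)/2 and the bounded minimization converges to lower = 0 at x = (0, −1).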

106 Local minimization problem When λ₁ is a multiple eigenvalue, the optimization problem is non-smooth. For finding the local minimum, we can use line search methods where the search direction is based on the tangent plane, since the gradient might not exist.

107 Outline 1 Motivation 2 Overview of methods Modal truncation Vector-Padé approximation Frequency sweeping Input/output MOR 3 Lanczos method 4 Rayleigh damping 5 Nonproportional damping 6 Multiple right-hand sides 7 Software 8 Future work 9 Conclusions

108 Conclusions Krylov methods usually work well for acoustic simulation. Recycling Ritz vectors is a reliable and efficient method for the solution with multiple right-hand sides. Parameterized models with many parameters are a current challenge.

109 Bibliography
Z. Bai and K. Meerbergen. The Lanczos method for parameterized symmetric linear systems with multiple right-hand sides. Technical Report TW527, Department of Computer Science, KU Leuven, Celestijnenlaan 200A, 3001 Heverlee, Belgium.
J. De Vlieger and K. Meerbergen. Analysis and computation of eigenvalues of symmetric fuzzy matrices. In T. Simos, editor, Proceedings of the ICNAAM09 Conference.
K. Meerbergen. The solution of parametrized symmetric linear systems. SIAM J. Matrix Anal. Appl., 24(4).
K. Meerbergen. Fast frequency response computation for Rayleigh damping. International Journal for Numerical Methods in Engineering, 73(1):96-106.
K. Meerbergen. The Quadratic Arnoldi method for the solution of the quadratic eigenvalue problem. SIAM Journal on Matrix Analysis and Applications, 30(4).
K. Meerbergen and J.-P. Coyette. Connection and comparison between frequency shift time integration and a spectral transformation preconditioner. Numerical Linear Algebra with Applications, 16:1-17.
F. Tisseur and K. Meerbergen. The quadratic eigenvalue problem. SIAM Review, 43(2).


More information

Iterative methods for symmetric eigenvalue problems

Iterative methods for symmetric eigenvalue problems s Iterative s for symmetric eigenvalue problems, PhD McMaster University School of Computational Engineering and Science February 11, 2008 s 1 The power and its variants Inverse power Rayleigh quotient

More information

SOLVING MESH EIGENPROBLEMS WITH MULTIGRID EFFICIENCY

SOLVING MESH EIGENPROBLEMS WITH MULTIGRID EFFICIENCY SOLVING MESH EIGENPROBLEMS WITH MULTIGRID EFFICIENCY KLAUS NEYMEYR ABSTRACT. Multigrid techniques can successfully be applied to mesh eigenvalue problems for elliptic differential operators. They allow

More information

Iterative solvers for linear equations

Iterative solvers for linear equations Spectral Graph Theory Lecture 23 Iterative solvers for linear equations Daniel A. Spielman November 26, 2018 23.1 Overview In this and the next lecture, I will discuss iterative algorithms for solving

More information

A Domain Decomposition Based Jacobi-Davidson Algorithm for Quantum Dot Simulation

A Domain Decomposition Based Jacobi-Davidson Algorithm for Quantum Dot Simulation A Domain Decomposition Based Jacobi-Davidson Algorithm for Quantum Dot Simulation Tao Zhao 1, Feng-Nan Hwang 2 and Xiao-Chuan Cai 3 Abstract In this paper, we develop an overlapping domain decomposition

More information

AA242B: MECHANICAL VIBRATIONS

AA242B: MECHANICAL VIBRATIONS AA242B: MECHANICAL VIBRATIONS 1 / 50 AA242B: MECHANICAL VIBRATIONS Undamped Vibrations of n-dof Systems These slides are based on the recommended textbook: M. Géradin and D. Rixen, Mechanical Vibrations:

More information

Block Krylov Space Solvers: a Survey

Block Krylov Space Solvers: a Survey Seminar for Applied Mathematics ETH Zurich Nagoya University 8 Dec. 2005 Partly joint work with Thomas Schmelzer, Oxford University Systems with multiple RHSs Given is a nonsingular linear system with

More information

Course Notes: Week 1

Course Notes: Week 1 Course Notes: Week 1 Math 270C: Applied Numerical Linear Algebra 1 Lecture 1: Introduction (3/28/11) We will focus on iterative methods for solving linear systems of equations (and some discussion of eigenvalues

More information

CLASSICAL ITERATIVE METHODS

CLASSICAL ITERATIVE METHODS CLASSICAL ITERATIVE METHODS LONG CHEN In this notes we discuss classic iterative methods on solving the linear operator equation (1) Au = f, posed on a finite dimensional Hilbert space V = R N equipped

More information

PERTURBED ARNOLDI FOR COMPUTING MULTIPLE EIGENVALUES

PERTURBED ARNOLDI FOR COMPUTING MULTIPLE EIGENVALUES 1 PERTURBED ARNOLDI FOR COMPUTING MULTIPLE EIGENVALUES MARK EMBREE, THOMAS H. GIBSON, KEVIN MENDOZA, AND RONALD B. MORGAN Abstract. fill in abstract Key words. eigenvalues, multiple eigenvalues, Arnoldi,

More information

A reflection on the implicitly restarted Arnoldi method for computing eigenvalues near a vertical line

A reflection on the implicitly restarted Arnoldi method for computing eigenvalues near a vertical line A reflection on the implicitly restarted Arnoldi method for computing eigenvalues near a vertical line Karl Meerbergen en Raf Vandebril Karl Meerbergen Department of Computer Science KU Leuven, Belgium

More information

Parallel Numerics, WT 2016/ Iterative Methods for Sparse Linear Systems of Equations. page 1 of 1

Parallel Numerics, WT 2016/ Iterative Methods for Sparse Linear Systems of Equations. page 1 of 1 Parallel Numerics, WT 2016/2017 5 Iterative Methods for Sparse Linear Systems of Equations page 1 of 1 Contents 1 Introduction 1.1 Computer Science Aspects 1.2 Numerical Problems 1.3 Graphs 1.4 Loop Manipulations

More information

FEM and sparse linear system solving

FEM and sparse linear system solving FEM & sparse linear system solving, Lecture 9, Nov 19, 2017 1/36 Lecture 9, Nov 17, 2017: Krylov space methods http://people.inf.ethz.ch/arbenz/fem17 Peter Arbenz Computer Science Department, ETH Zürich

More information

Sensitivity of Gauss-Christoffel quadrature and sensitivity of Jacobi matrices to small changes of spectral data

Sensitivity of Gauss-Christoffel quadrature and sensitivity of Jacobi matrices to small changes of spectral data Sensitivity of Gauss-Christoffel quadrature and sensitivity of Jacobi matrices to small changes of spectral data Zdeněk Strakoš Academy of Sciences and Charles University, Prague http://www.cs.cas.cz/

More information

Lecture 18 Classical Iterative Methods

Lecture 18 Classical Iterative Methods Lecture 18 Classical Iterative Methods MIT 18.335J / 6.337J Introduction to Numerical Methods Per-Olof Persson November 14, 2006 1 Iterative Methods for Linear Systems Direct methods for solving Ax = b,

More information

Lecture 11: CMSC 878R/AMSC698R. Iterative Methods An introduction. Outline. Inverse, LU decomposition, Cholesky, SVD, etc.

Lecture 11: CMSC 878R/AMSC698R. Iterative Methods An introduction. Outline. Inverse, LU decomposition, Cholesky, SVD, etc. Lecture 11: CMSC 878R/AMSC698R Iterative Methods An introduction Outline Direct Solution of Linear Systems Inverse, LU decomposition, Cholesky, SVD, etc. Iterative methods for linear systems Why? Matrix

More information

Is there life after the Lanczos method? What is LOBPCG?

Is there life after the Lanczos method? What is LOBPCG? 1 Is there life after the Lanczos method? What is LOBPCG? Andrew V. Knyazev Department of Mathematics and Center for Computational Mathematics University of Colorado at Denver SIAM ALA Meeting, July 17,

More information

An Arnoldi method with structured starting vectors for the delay eigenvalue problem

An Arnoldi method with structured starting vectors for the delay eigenvalue problem An Arnoldi method with structured starting vectors for the delay eigenvalue problem Elias Jarlebring, Karl Meerbergen, Wim Michiels Department of Computer Science, K.U. Leuven, Celestijnenlaan 200 A, 3001

More information

Adaptive preconditioners for nonlinear systems of equations

Adaptive preconditioners for nonlinear systems of equations Adaptive preconditioners for nonlinear systems of equations L. Loghin D. Ruiz A. Touhami CERFACS Technical Report TR/PA/04/42 Also ENSEEIHT-IRIT Technical Report RT/TLSE/04/02 Abstract The use of preconditioned

More information

Deflation for inversion with multiple right-hand sides in QCD

Deflation for inversion with multiple right-hand sides in QCD Deflation for inversion with multiple right-hand sides in QCD A Stathopoulos 1, A M Abdel-Rehim 1 and K Orginos 2 1 Department of Computer Science, College of William and Mary, Williamsburg, VA 23187 2

More information

Nonlinear Eigenvalue Problems: An Introduction

Nonlinear Eigenvalue Problems: An Introduction Nonlinear Eigenvalue Problems: An Introduction Cedric Effenberger Seminar for Applied Mathematics ETH Zurich Pro*Doc Workshop Disentis, August 18 21, 2010 Cedric Effenberger (SAM, ETHZ) NLEVPs: An Introduction

More information

Alternative correction equations in the Jacobi-Davidson method

Alternative correction equations in the Jacobi-Davidson method Chapter 2 Alternative correction equations in the Jacobi-Davidson method Menno Genseberger and Gerard Sleijpen Abstract The correction equation in the Jacobi-Davidson method is effective in a subspace

More information

1 Conjugate gradients

1 Conjugate gradients Notes for 2016-11-18 1 Conjugate gradients We now turn to the method of conjugate gradients (CG), perhaps the best known of the Krylov subspace solvers. The CG iteration can be characterized as the iteration

More information