A RESTARTED KRYLOV SUBSPACE METHOD FOR THE EVALUATION OF MATRIX FUNCTIONS

MICHAEL EIERMANN AND OLIVER G. ERNST

Abstract. We show how the Arnoldi algorithm for approximating a function of a matrix times a vector can be restarted in a manner analogous to restarted Krylov subspace methods for solving linear systems of equations. The resulting restarted algorithm reduces to other known algorithms for the reciprocal and the exponential functions. We further show that the restarted algorithm inherits the superlinear convergence property of its unrestarted counterpart for entire functions, and we present the results of numerical experiments.

Key words. matrix function, Krylov subspace approximation, Krylov projection method, restarted Krylov subspace method, linear system of equations, initial value problem

AMS subject classifications. 65F10, 65F99, 65M20

1. Introduction. The evaluation of

\[ f(A)b, \quad \text{where } A \in \mathbb{C}^{n\times n},\; b \in \mathbb{C}^n, \tag{1.1} \]

and $f : \mathbb{C} \supseteq D \to \mathbb{C}$ is a function for which $f(A)$ is defined, is a common computational task. Besides the solution of linear systems of equations, which involves the reciprocal function $f(\lambda) = 1/\lambda$, by far the most important application is the time evolution of a system under a linear operator, in which case $f(\lambda) = f_t(\lambda) = e^{t\lambda}$ and time acts as a parameter $t$. Other applications involving differential equations require the evaluation of (1.1) for the square root and trigonometric functions (see [8, 1]). Further applications include identification problems for semigroups involving the logarithm (see, e.g., [29]) and lattice quantum chromodynamics simulations requiring the evaluation of the matrix sign function (see [34] and the references therein). In many of the applications mentioned above the matrix $A$ is large and sparse or structured, typically resulting from the discretization of an infinite-dimensional operator. In this case evaluating (1.1) by first computing $f(A)$ is usually infeasible, so that most of the algorithms for the latter task (see, e.g., [18, 5]) cannot be used.
The standard approach for approximating (1.1) directly is based on a Krylov subspace of $A$ with initial vector $b$ [8, 9, 27, 14, 17, 4, 19]. The advantage of this approach is that it requires $A$ only for computing matrix-vector products and that, for smooth functions such as the exponential, it converges superlinearly [8, 27, 31, 17]. One shortcoming of the Krylov subspace approximation, however, lies in the fact that computing an approximation of (1.1) from a Krylov subspace $\mathcal{K}_m(A,b)$ of dimension $m$ involves all the basis vectors of $\mathcal{K}_m(A,b)$, and hence these need to be stored. Memory constraints therefore often limit the size of the problem that can be solved, which is an issue especially when $A$ is the discrete representation of a partial differential operator in three space dimensions. When $A$ is Hermitian, the Hermitian Lanczos process allows the basis of $\mathcal{K}_m(A,b)$ to be constructed by a three-term recurrence. When solving linear systems of equations, this recurrence for the basis vectors

Institut für Numerische Mathematik und Optimierung, Technische Universität Bergakademie Freiberg, Freiberg, Germany (eiermann@math.tu-freiberg.de, ernst@math.tu-freiberg.de). Revision June 7,

immediately translates to efficient update formulas for the approximation. This, however, is a consequence of the simple form of the reciprocal function, and such update formulas are not available for general (non-rational) functions. When solving non-Hermitian linear systems of equations by Krylov subspace approximation, a common remedy is to limit storage requirements by restarting the algorithm each time the Krylov space has reached a certain maximal dimension [28, 10]. The subject of this work is the extension of this restarting approach to general functions. How such a generalization may be accomplished is not immediately obvious, since the restarting approach for linear systems is based on solving successive residual equations to obtain corrections to the most recent approximation. The availability of a residual, however, is another property specific to problems where $f(A)b$ solves an (algebraic or differential) equation.

The remainder of this paper is organized as follows: Section 2 recalls the definition and properties of matrix functions and their approximation in Krylov spaces, emphasizing the role of Hermite interpolation, and closes with an error representation formula for Krylov subspace approximations. In Section 3 we introduce a new restarted Krylov subspace algorithm for the approximation of (1.1) for general functions $f$. We derive two mathematically equivalent formulations of the restarted algorithm, the second of which, while slightly more expensive, was found to be more stable in the presence of rounding errors. In Section 4 we show that, for the reciprocal and exponential functions, our restarted method reduces to the restarted full orthogonalization method (FOM, see [26]) and is closely related to an algorithm of Celledoni and Moret [4], respectively.
We further establish that, for entire functions of order one (such as the exponential function), the superlinear convergence property of the Arnoldi/Lanczos approximation of (1.1) is retained by our restarted method. In Section 5 we demonstrate the performance of the restarted method for several test problems.

2. Matrix functions and their Krylov subspace approximation. In this section we fix notation, provide some background material on functions of matrices and their approximation using Krylov subspaces, highlight the connection with Hermite interpolation, and derive a new representation formula for the error of Krylov subspace approximations of $f(A)b$.

2.1. Functions of matrices. We recall the definition of functions of matrices (as given, e.g., in Gantmacher [15, Chapter 5]): Let $\Lambda(A) = \{\lambda_1, \lambda_2, \ldots, \lambda_k\}$ denote the $k$ distinct eigenvalues of $A \in \mathbb{C}^{n\times n}$ and let the minimal polynomial of $A$ be given by

\[ m_A(\lambda) = \prod_{j=1}^{k} (\lambda - \lambda_j)^{n_j} \in \mathcal{P}_K, \quad \text{where } K = \sum_{j=1}^{k} n_j. \]

Given a complex-valued function $f$, the matrix $f(A)$ is defined if $f^{(r)}(\lambda_j)$ exists for $r = 0, 1, \ldots, n_j - 1$; $j = 1, 2, \ldots, k$. In this case $f(A) := q_{f,A}(A)$, where $q_{f,A} \in \mathcal{P}_{K-1}$ denotes the unique polynomial of degree at most $K-1$ which satisfies the $K$ Hermite interpolation conditions

\[ q_{f,A}^{(r)}(\lambda_j) = f^{(r)}(\lambda_j), \quad r = 0, 1, \ldots, n_j - 1;\; j = 1, 2, \ldots, k. \tag{2.1} \]

In the remainder of the paper, we denote by $I_p f \in \mathcal{P}_{K-1}$ the unique polynomial which interpolates $f$ in the Hermite sense at a set of nodes $\{\vartheta_j\}_{j=1}^{k}$ with multiplicities $n_j$,

$K = \sum_j n_j$, where $p \in \mathcal{P}_K$ is a (not necessarily monic) nodal polynomial with zeros $\vartheta_j$ of multiplicities $n_j$. In this notation, equation (2.1) reads $q_{f,A} = I_{m_A} f$.

Our objective is the evaluation of $f(A)b$ rather than $f(A)$, and this can possibly be achieved with polynomials of lower degree than $q_{f,A}$. To this end, let the minimal polynomial of $b \in \mathbb{C}^n$ with respect to $A$ be given by

\[ m_{A,b}(\lambda) = \prod_{j=1}^{l} (\lambda - \lambda_j)^{m_j} \in \mathcal{P}_L, \quad \text{where } L = L(A,b) = \sum_{j=1}^{l} m_j. \tag{2.2} \]

Proposition 2.1. Given a function $f$, a matrix $A \in \mathbb{C}^{n\times n}$ such that $f(A)$ is defined, and a vector $b \in \mathbb{C}^n$ whose minimal polynomial with respect to $A$ is given by (2.2), there holds $f(A)b = q_{f,A,b}(A)b$, where $q_{f,A,b} := I_{m_{A,b}} f \in \mathcal{P}_{L-1}$ denotes the unique Hermite interpolating polynomial determined by the conditions

\[ q_{f,A,b}^{(r)}(\lambda_j) = f^{(r)}(\lambda_j), \quad r = 0, 1, \ldots, m_j - 1;\; j = 1, 2, \ldots, l. \]

2.2. Krylov subspace approximations. We recall the definition of the $m$-th Krylov subspace of $A \in \mathbb{C}^{n\times n}$ and $0 \neq b \in \mathbb{C}^n$ given by

\[ \mathcal{K}_m(A,b) := \operatorname{span}\{b, Ab, \ldots, A^{m-1}b\} = \{q(A)b : q \in \mathcal{P}_{m-1}\}. \]

By Proposition 2.1, $f(A)b$ lies in $\mathcal{K}_L(A,b)$. The index $L = L(A,b) \in \mathbb{N}$ (cf. (2.2)) is the smallest number for which $\mathcal{K}_L(A,b) = \mathcal{K}_{L+1}(A,b)$. Note that for certain functions such as $f(\lambda) = 1/\lambda$ we have $f(A)b \in \mathcal{K}_L(A,b) \setminus \mathcal{K}_{L-1}(A,b)$; in general, however, $f(A)b$ may lie in a space $\mathcal{K}_m(A,b)$ with $m < L$.

In what follows, we consider a sequence of approximations $f_m := q(A)b \in \mathcal{K}_m(A,b)$ to $f(A)b$ with polynomials $q \in \mathcal{P}_{m-1}$ which in some sense approximate $f$. The most popular of these approaches (see [27, 14, 17]), to which we shall refer as the Arnoldi approximation, is based on the Arnoldi decomposition of $\mathcal{K}_m(A,b)$,

\[ A V_m = V_{m+1} \widetilde{H}_m = V_m H_m + \eta_{m+1,m}\, v_{m+1} e_m^\top. \tag{2.3} \]

Here the columns of $V_m = [v_1, v_2, \ldots, v_m]$ form an orthonormal basis of $\mathcal{K}_m(A,b)$ with $v_1 = b/\|b\|$, $\widetilde{H}_m = [\eta_{j,l}] \in \mathbb{C}^{(m+1)\times m}$ as well as $H_m := [I_m, 0]\,\widetilde{H}_m \in \mathbb{C}^{m\times m}$ are unreduced upper Hessenberg matrices, and $e_m \in \mathbb{R}^m$ denotes the $m$-th unit coordinate vector. The Arnoldi approximation to $f(A)b$ is then defined by

\[ f_m := \beta\, V_m f(H_m) e_1, \quad \text{where } \beta = \|b\|. \]
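As a concrete illustration of this definition, the Arnoldi approximation $f_m = \beta V_m f(H_m)e_1$ can be sketched in a few lines of Python. This is a minimal sketch using NumPy/SciPy with modified Gram–Schmidt for the Arnoldi process and `scipy.linalg.expm` standing in for $f$; the function names are ours, not the paper's.

```python
import numpy as np
from scipy.linalg import expm


def arnoldi(A, b, m):
    """Modified Gram-Schmidt Arnoldi: A V_m = V_{m+1} Htilde_m, cf. (2.3)."""
    n = len(b)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V, H


def arnoldi_fAb(A, b, m, f=expm):
    """Arnoldi approximation f_m = beta * V_m f(H_m) e_1 of f(A) b."""
    V, H = arnoldi(A, b, m)
    return np.linalg.norm(b) * V[:, :m] @ f(H[:m, :])[:, 0]
```

Note that only the small $m \times m$ matrix $H_m$ enters the evaluation of $f$; the large matrix $A$ is touched through matrix-vector products alone.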
The rationale behind this approximation is that $H_m$ represents the compression of $A$ onto $\mathcal{K}_m(A,b)$ with respect to the basis $V_m$ and that $b = \beta V_m e_1$. The non-Hermitian (or two-sided) Lanczos algorithm is another procedure for generating a decomposition of the form (2.3). In that case the columns of $V_m$ still form a basis of $\mathcal{K}_m(A,b)$, albeit one that is, in general, not orthogonal, and the

[Footnote: For the exponential function it was shown in [27, Theorem 3.6] that $e^{tA}b \in \mathcal{K}_m(A,b)$ for all $t \in \mathbb{R}$ if and only if $m \geq L$.]

upper Hessenberg matrices $H_m$ are tridiagonal (or block tridiagonal if a look-ahead technique is employed). The associated approximation to $f(A)b$ is again defined by $f_m := \beta V_m f(H_m) e_1$ (see, e.g., [14, 27, 17]).

Both approximations $f_m$, based either on the Arnoldi or the Lanczos decomposition, result from an interpolation procedure: if $q \in \mathcal{P}_{m-1}$ denotes the polynomial which interpolates $f$ in the Hermite sense on the spectrum of $H_m$ (counting multiplicities), then

\[ f_m = \beta V_m f(H_m) e_1 = \beta V_m q(H_m) e_1 = q(A)b, \quad m = 1, 2, \ldots, L \]

(see [27, Theorem 3.3]). For later applications, we next show that similar results hold true for more general decompositions of $\mathcal{K}_m(A,b)$. To this end, we introduce a sequence of ascending (not necessarily orthonormal) basis vectors $\{w_m\}_{m=1}^{L}$ such that

\[ \mathcal{K}_m(A,b) = \operatorname{span}\{w_1, w_2, \ldots, w_m\}, \quad m = 1, 2, \ldots, L. \tag{2.4} \]

As is well known, there exists a unique unreduced upper Hessenberg matrix $H = [\eta_{j,l}] \in \mathbb{C}^{L\times L}$ such that, with $W := [w_1, w_2, \ldots, w_L] \in \mathbb{C}^{n\times L}$, there holds $AW = WH$ and, for $m = 1, 2, \ldots, L-1$, we have

\[ A W_m = W_{m+1} \widetilde{H}_m = W_m H_m + \eta_{m+1,m}\, w_{m+1} e_m^\top, \tag{2.5} \]

where $\widetilde{H}_m$ is the $(m+1)\times m$ principal submatrix of $H$, $H_m := [I_m, 0]\,\widetilde{H}_m$, and $W_m = [w_1, w_2, \ldots, w_m]$. We shall refer to (2.5) as an Arnoldi-like decomposition to distinguish it from a proper Arnoldi decomposition (2.3). We shall require the following lemma, which is a simple generalization of the corresponding result for (proper) Arnoldi decompositions (cf. [27, 23]).

Lemma 2.2. For any polynomial $q(\lambda) = \alpha_m \lambda^m + \cdots + \alpha_1 \lambda + \alpha_0 \in \mathcal{P}_m$, the vector $q(A)b$ may be represented in terms of the Arnoldi-like decomposition (2.5) as

\[ q(A)b = \begin{cases} \beta \left[ W_m q(H_m) e_1 + \alpha_m \gamma_m w_{m+1} \right], & m < L, \\ \beta\, W_L q(H_L) e_1, & m \geq L, \end{cases} \tag{2.6} \]

where $\gamma_m := \prod_{j=1}^{m} \eta_{j+1,j}$ and $\beta w_1 = b$. In particular, for any $q \in \mathcal{P}_{m-1}$ there holds $q(A)b = \beta W_m q(H_m) e_1$.

The proof follows by verifying the assertion for monomials, taking account of the sparsity pattern of powers of a Hessenberg matrix (see, e.g., [13]). We next introduce polynomial notation to describe Krylov subspaces.
To each vector $w_m$ of the nested basis (2.4) there corresponds a unique polynomial $w_{m-1} \in \mathcal{P}_{m-1}$ such that $w_m = w_{m-1}(A)b$. Via this correspondence, the Arnoldi-like recurrence (2.5) becomes

\[ \lambda\,[w_0(\lambda), w_1(\lambda), \ldots, w_{m-1}(\lambda)] = [w_0(\lambda), w_1(\lambda), \ldots, w_{m-1}(\lambda)]\, H_m + \eta_{m+1,m}\,[0, 0, \ldots, 0, w_m(\lambda)]. \tag{2.7} \]

From this equation it is evident that each zero of $w_m$ is an eigenvalue of $H_m$. Moreover, by differentiating (2.7), one observes that zeros of multiplicity $l$ are eigenvalues of $H_m$

[Footnote: We mention that the related term Krylov decomposition introduced by Stewart in [32] refers to a decomposition of the form (2.5) without the restriction that the basis be ascending and, consequently, a matrix $H$ which is not necessarily Hessenberg.]

with Jordan blocks of dimension $l$. Since $H_m$ is an unreduced Hessenberg matrix and hence nonderogatory, we conclude that the zeros of $w_m$ coincide with the eigenvalues of $H_m$ counting multiplicity.

Lemma 2.3. Let $H_m$ be the unreduced upper Hessenberg matrix in (2.5) and (2.7), and let $f$ be a function such that $f(H_m)$ is defined. Then a polynomial $q_{m-1} \in \mathcal{P}_{m-1}$ satisfies $q_{m-1}(H_m) = f(H_m)$ if and only if $q_{m-1} = I_{w_m} f$, i.e., if $q_{m-1}$ interpolates $f$ in the Hermite sense at the eigenvalues of $H_m$.

Proof. The proof follows directly from the definition of $f(H_m)$ and the fact that the zeros of $w_m$ are the eigenvalues of $H_m$ counting multiplicity.

We summarize the contents of Lemmata 2.2 and 2.3 as follows:

Theorem 2.4. Given the Arnoldi-like decomposition (2.5) and a function $f$ such that $f(A)$ as well as $f(H_m)$ are defined, we denote by $q \in \mathcal{P}_{m-1}$ the unique polynomial which interpolates $f$ at the eigenvalues of $H_m$. Then there holds

\[ f_m := \beta\, W_m f(H_m) e_1 = \beta\, W_m q(H_m) e_1 = q(A)b. \tag{2.8} \]

We shall refer to (2.8) as the Krylov subspace approximation of $f(A)b$ associated with the Arnoldi-like decomposition (2.5). Note that (2.8) is merely a computational device for generating the Krylov subspace approximation of $f(A)b$ without explicitly carrying out the interpolation process. This is an advantage whenever $f(H_m)e_1$ for $m \ll n$ can be evaluated efficiently.

Remark 2.5. We also point out the following somewhat academic detail regarding finite termination: while Krylov subspace approximations $q(A)b$ are defined for polynomials $q$ of any degree, Arnoldi-like decompositions, and hence (2.8), are only available for $1 \leq m \leq L = L(A,b)$. At index $m = L$, the characteristic polynomial of $H_L = H$ coincides with the minimal polynomial $m_{A,b}$ of $b$ with respect to $A$ (see (2.2)). In view of (2.8) and Proposition 2.1, we then have $f_L = f(A)b$. In this sense, Krylov subspace approximations of the form (2.8) respect the spectral distribution of $A$ relevant for $b$ and, in exact arithmetic, possess the finite termination property.
This is in contrast to other approaches such as those based on Chebyshev or Faber expansions (see below).

Besides those generated by the Arnoldi or Lanczos processes, any ascending basis $\{w_m\}_{m=1}^{L}$ of $\mathcal{K}_L(A,b)$ or, equivalently, any sequence of polynomials $\{w_{m-1}\}_{m=1}^{L}$ of exact degree $m-1$ may be used in the Arnoldi-like decomposition (2.5) or its polynomial counterpart (2.7), provided a means for obtaining the matrix $H_L$ of recurrence coefficients is available. One such example is the sequence of kernel/quasi-kernel polynomials associated with the Arnoldi/Lanczos decomposition (see [12]), where the corresponding Hessenberg matrix is easily constructed from that of the original decomposition. Approximations based on quasi-kernel polynomials are discussed in [20]. Yet another approach, one which emphasizes the interpolation aspect of the Krylov subspace approximation, fixes a sequence of nodes

\[ \vartheta_1^{(1)};\quad \vartheta_1^{(2)}, \vartheta_2^{(2)};\quad \vartheta_1^{(3)}, \vartheta_2^{(3)}, \vartheta_3^{(3)};\quad \ldots \]

and chooses the basis vectors $w_m = w_{m-1}(A)b$ as the associated nodal polynomials

\[ w_{m-1}(\lambda) = \omega_{m-1}\,(\lambda - \vartheta_1^{(m)})(\lambda - \vartheta_2^{(m)}) \cdots (\lambda - \vartheta_{m-1}^{(m)}), \quad \omega_{m-1} \neq 0. \]

One possible choice of such a node sequence is the zeros of Chebyshev polynomials, in which case the nested basis vectors correspond to Chebyshev polynomials. Other choices of node sequences are explored in [19, 22, 20]. Note that, in view of Remark 2.5, such a basis choice, which is independent of $A$ and $b$, will generally destroy the finite termination property.

We also point out that, when $f(A)$ is defined, this need not be so for $f(H_m)$ for $m < L$. For the Arnoldi approximation, a sufficient condition ensuring this is that $f$, as a scalar function, be analytic in a neighborhood of the field of values of $A$. As a case in point, consider the full orthogonalization method (FOM) for solving a nonsingular system of linear equations $Ax = b$. The solution is $f(A)b$ with $f(\lambda) = 1/\lambda$ and, if the initial approximation is $x_0 = 0$, the $m$-th FOM iterate is simply the Arnoldi approximation $f_m = \beta V_m H_m^{-1} e_1$. There are well-known examples [3] in which $f(A)$, i.e., $A^{-1}$, is defined but for which $H_m$ is singular for one or more of the indices $m = 1, \ldots, L-1$, a phenomenon sometimes called a Galerkin breakdown.

2.3. An error representation. We conclude this section with a representation of the error of the Krylov subspace approximation of $f(A)b$ based on any Arnoldi-like decomposition or, equivalently, any interpolatory approximation. We shall need the following notation: given a function $f$ and a set of nodes $\vartheta_1, \ldots, \vartheta_m$ with associated nodal polynomial

\[ p(\lambda) = (\lambda - \vartheta_1)(\lambda - \vartheta_2) \cdots (\lambda - \vartheta_m), \tag{2.9} \]

we denote the $m$-th order divided difference of $f$ with respect to the nodes $\{\vartheta_j\}_{j=1}^{m}$ by

\[ \Delta_p f := \frac{f - I_p f}{p}. \tag{2.10} \]

Theorem 2.6. Given $A \in \mathbb{C}^{n\times n}$, $b \in \mathbb{C}^n$ and a function $f$, let (2.5) be an Arnoldi-like decomposition of $\mathcal{K}_m(A,b)$ and let $w_m \in \mathcal{P}_m$ be the associated polynomial, cf. (2.7). Then there holds

\[ f(A)b - \beta\, W_m f(H_m) e_1 = \beta \gamma_m\, [\Delta_{w_m} f](A)\, w_{m+1}, \tag{2.11} \]

with $\gamma_m$ as in Lemma 2.2.

Proof.
We consider first an arbitrary set of nodes $\vartheta_1, \ldots, \vartheta_m$ with associated nodal polynomial $p$ as in (2.9). From the definition (2.10), there holds

\[ f(\lambda) = [I_p f](\lambda) + [\Delta_p f](\lambda)\, p(\lambda). \]

Inserting $A$ for $\lambda$ in this identity and multiplying by $b$, we obtain

\[ f(A)b = [I_p f](A)b + [\Delta_p f](A)\, p(A)b. \]

Since $I_p f \in \mathcal{P}_{m-1}$, Lemma 2.2 yields $[I_p f](A)b = \beta W_m [I_p f](H_m) e_1$ and, since $p \in \mathcal{P}_m$ is monic, $p(A)b = \beta W_m p(H_m) e_1 + \beta \gamma_m w_{m+1}$, giving

\[ f(A)b - \beta\, W_m [I_p f](H_m) e_1 = \beta\, [\Delta_p f](A) \left( W_m p(H_m) e_1 + \gamma_m w_{m+1} \right). \]

[Footnote: The source of and justification for this notation can be found in [7].]

Choosing $p$ as the characteristic polynomial $w_m$ of $H_m$, it follows that $w_m(H_m) = O$ by the Cayley-Hamilton theorem and, since $I_{w_m} f$ interpolates $f$ at the eigenvalues of $H_m$, there also holds $[I_{w_m} f](H_m) = f(H_m)$ by Lemma 2.3.

We interpret (2.11) as follows: further improvement of a Krylov approximation $f_m = \beta V_m f(H_m) e_1$ could be achieved by approximating the error term

\[ f(A)b - f_m = \tilde{f}(A)\,\tilde{b}, \quad \text{with } \tilde{f}(\lambda) := [\Delta_{w_m} f](\lambda) \text{ and } \tilde{b} := \beta \gamma_m\, w_{m+1}. \]

Note that the modified function $\tilde{f}$ has the same domain of analyticity as $f$ and that the vector $\tilde{b}$ points in the direction of the last vector in the Arnoldi-like decomposition.

3. A restarted Arnoldi approximation. For the remainder of the paper we shall restrict the discussion to the Arnoldi approximation of $f(A)b$. To set this apart from a general Krylov subspace approximation (2.8), we denote the (orthonormal) Arnoldi basis vectors by $v_1, v_2, \ldots, v_L$ and the Arnoldi decomposition by

\[ A V_m = V_m H_m + \eta_{m+1,m}\, v_{m+1} e_m^\top, \quad V_m = [v_1, v_2, \ldots, v_m] \]

(cf. (2.3)). Our results apply to other Krylov subspace approximations with obvious modifications, some of which we shall point out.

3.1. Short recurrences are not enough. Besides the evaluation of $f(H_m)$, the computation of the $m$-th Arnoldi approximation $f_m = \beta V_m f(H_m) e_1$ requires the Arnoldi basis $V_m$, which consists of $m$ vectors of size $n$. As a consequence, even if the evaluation of $f(H_m)$ can be accomplished inexpensively, the work and storage requirements incurred by $V_m$ make this method impractical for moderate to large values of $m$. For $f(\lambda) = 1/\lambda$, i.e., when solving linear systems of equations, one can take advantage of the fact that the Arnoldi process reduces to the Hermitian Lanczos process when $A$ is Hermitian. In this case the matrices $H_m$ are Hermitian, hence tridiagonal, and three-term recurrence formulas can be derived for their characteristic polynomials $w_m$. (The same is true even in the non-Hermitian case when employing the non-Hermitian Lanczos process, possibly with look-ahead techniques.)
If we interpolate $f(\lambda) = 1/\lambda$ at the zeros of the $m$-th basis polynomial $w_m$, the resulting interpolating polynomial $q_{m-1} = I_{w_m} f$ satisfies

\[ q_{m-1}(\lambda) = \frac{w_m(0) - w_m(\lambda)}{\lambda\, w_m(0)}, \tag{3.1} \]

and therefore $q_{m-1}$, and hence also the approximation $f_m$, obeys a similar three-term recurrence. The relation (3.1) between the nodal and the interpolation polynomials can therefore be viewed as the basis for the efficiency of the conjugate gradient method and of other polynomial acceleration methods such as Chebyshev iteration for solving linear systems of equations.

A relation analogous to (3.1) fails to hold for more complicated (non-rational) functions $f$ such as the exponential function, and therefore short recurrences for the nodal polynomials do not translate into short recurrences for the interpolation polynomials. The computation of $f_m$ therefore necessitates storing the full Arnoldi basis $V_m$ also when $A$ is Hermitian. It is therefore of interest to modify the Arnoldi approximation in a way that allows the construction of successively better approximations

of $f(A)b$ based on a sequence of Krylov spaces of small dimension. Such restarted Krylov subspace methods are well known for the solution of linear systems of equations; see [28, 10].

3.2. Krylov approximation after Arnoldi restart. Consider two Krylov spaces of order $m$ with Arnoldi decompositions

\[ A V_m = V_m H_m + \eta_{m+1,m}\, v_{m+1} e_m^\top, \tag{3.2a} \]
\[ A V_m^{(2)} = V_m^{(2)} H_m^{(2)} + \eta_{m+1,m}^{(2)}\, v_{m+1}^{(2)} e_m^\top, \tag{3.2b} \]

where $v_1 = b/\beta$ and $v_1^{(2)} = v_{m+1}$, i.e., obtained from two cycles of the Arnoldi process applied to $A$, beginning with initial vector $b$ and restarted after $m$ steps with the last Arnoldi basis vector $v_{m+1}$ from the first cycle. We note that the columns of $W_{2m} := [V_m, V_m^{(2)}]$ form a basis of $\mathcal{K}_{2m}(A,b)$, albeit not an orthonormal one, and we may combine the two proper Arnoldi decompositions (3.2) to the Arnoldi-like decomposition

\[ A W_{2m} = W_{2m} H_{2m} + \eta_{m+1,m}^{(2)}\, v_{m+1}^{(2)} e_{2m}^\top, \tag{3.3} \]

where $H_{2m}$ is the Hessenberg matrix

\[ H_{2m} := \begin{bmatrix} H_m & O \\ \eta_{m+1,m}\, e_1 e_m^\top & H_m^{(2)} \end{bmatrix}. \tag{3.4} \]

Remark 3.1. We restart the Arnoldi process with $v_{m+1}$, which is the most natural choice. We could, however, restart with any vector of the form

\[ \tilde{v}_{m+1} = V_m y + y_{m+1} v_{m+1} \in \mathcal{K}_{m+1}(A,b) \setminus \mathcal{K}_m(A,b) \]

with a coefficient vector $y = [y_1, y_2, \ldots, y_m]^\top \in \mathbb{C}^m$. In this case we must replace $H_m$ in (3.4) by the rank-one modification $H_m - (\eta_{m+1,m}/y_{m+1})\, y e_m^\top$ and $\eta_{m+1,m}$ by $\eta_{m+1,m}/y_{m+1}$. It is conceivable that this could be used to emphasize certain directions such as Ritz approximations of certain eigenvectors, as is done in popular restarting techniques for linear systems of equations [21] and eigenvalue calculations [30], but we shall not pursue this here.

Our objective is to compute the Krylov subspace approximation associated with (3.3) without reference to $V_m$. The former is defined as

\[ f_{2m} = [I_{w_{2m}} f](A)b = \beta\, W_{2m} [I_{w_{2m}} f](H_{2m}) e_1 = \beta\, W_{2m} f(H_{2m}) e_1, \tag{3.5} \]

where $w_{2m}$ is the nodal polynomial with zeros $\Lambda(H_m) \cup \Lambda(H_m^{(2)})$ counting multiplicity.
To evaluate the approximation (3.5), we note that $f(H_{2m})$ is of the form

\[ f(H_{2m}) = \begin{bmatrix} f(H_m) & O \\ X_{2m,1} & f(H_m^{(2)}) \end{bmatrix}, \quad X_{2m,1} \in \mathbb{C}^{m\times m}, \tag{3.6} \]

[Footnote: Another remedy, well known from Lanczos-based eigenvalue computations (see [24, Chapter 13]), is to discard the basis vectors no longer needed in the recurrence and either recompute them or retrieve them from secondary storage when forming the approximation.]

a consequence of the block triangular structure of $H_{2m}$, whereby (3.5) becomes

\[ f_{2m} = \beta\, V_m f(H_m) e_1 + \beta\, V_m^{(2)} X_{2m,1} e_1. \tag{3.7} \]

The first term on the right is the Arnoldi approximation with respect to $\mathcal{K}_m(A,b)$. If $X_{2m,1} e_1$ were computable, one could discard the basis vectors $V_m$ and use (3.7) to update the Arnoldi approximation, thus yielding the basis for a restarting scheme. One conceivable approach is to observe that $X_{2m,1}$ satisfies the Sylvester equation

\[ H_m^{(2)} X_{2m,1} - X_{2m,1} H_m = \eta_{m+1,m} \left[ f(H_m^{(2)})\, e_1 e_m^\top - e_1 e_m^\top f(H_m) \right], \tag{3.8} \]

which follows from comparing the (2,1) blocks of the identity $H_{2m} f(H_{2m}) = f(H_{2m}) H_{2m}$, and one could therefore proceed by solving (3.8). This approach, however, suffers from the shortcoming that the Sylvester equation (3.8) is only well conditioned if the spectra of $H_m$ and $H_m^{(2)}$ are well separated (cf. [16, Section 15.3]). Since $H_m$ and $H_m^{(2)}$ are both compressions of the same matrix $A$, it is to be expected that at least some of their eigenvalues match very closely. We shall instead derive a computable expression for $X_{2m,1} e_1$ directly by way of interpolation.

Lemma 3.2. Given two successive Arnoldi decompositions as in (3.2), let $w_m$, $w_m^{(2)}$ and $w_{2m}$ denote the monic nodal polynomials associated with $\Lambda(H_m)$, $\Lambda(H_m^{(2)})$ and $\Lambda(H_{2m}) = \Lambda(H_m) \cup \Lambda(H_m^{(2)})$, respectively, with $H_{2m}$ the upper Hessenberg matrix of the combined Arnoldi-like decomposition (3.3). Then there holds

\[ [I_{w_{2m}} f](H_{2m})\, e_1 = \begin{bmatrix} [I_{w_m} f](H_m)\, e_1 \\ \gamma_m\, [I_{w_m^{(2)}} (\Delta_{w_m} f)](H_m^{(2)})\, e_1 \end{bmatrix}, \tag{3.9} \]

where $\gamma_m = \prod_{j=1}^{m} \eta_{j+1,j}$ (cf. Lemma 2.2).

Proof. Due to the block triangular structure of $H_{2m}$ as given in (3.6), there holds

\[ [I_{w_{2m}} f]\left( \begin{bmatrix} H_m & O \\ \eta_{m+1,m}\, e_1 e_m^\top & H_m^{(2)} \end{bmatrix} \right) = \begin{bmatrix} [I_{w_{2m}} f](H_m) & O \\ X_{2m,1} & [I_{w_{2m}} f](H_m^{(2)}) \end{bmatrix} \tag{3.10} \]

with $X_{2m,1}$ as in (3.6). Next, we establish the polynomial identity

\[ I_{w_{2m}} f = I_{w_m} f + \left[ I_{w_m^{(2)}} (\Delta_{w_m} f) \right] w_m, \tag{3.11} \]

which can be seen by noting that both polynomials have the same degree $2m-1$ and interpolate $f$ in the Hermite sense at the nodes $\Lambda(H_m) \cup \Lambda(H_m^{(2)})$.
For nodes $\vartheta \in \Lambda(H_m)$ this is so because $w_m(\vartheta) = 0$ and therefore

\[ [I_{w_m} f](\vartheta) + [I_{w_m^{(2)}} (\Delta_{w_m} f)](\vartheta)\, w_m(\vartheta) = [I_{w_m} f](\vartheta) = f(\vartheta) = [I_{w_{2m}} f](\vartheta). \]

For nodes $\vartheta \in \Lambda(H_m^{(2)})$ we have

\[ [I_{w_m} f](\vartheta) + [I_{w_m^{(2)}} (\Delta_{w_m} f)](\vartheta)\, w_m(\vartheta) = [I_{w_m} f](\vartheta) + [\Delta_{w_m} f](\vartheta)\, w_m(\vartheta) = f(\vartheta) = [I_{w_{2m}} f](\vartheta), \]

with the second equality following from the definition (2.10), and (3.11) is established. The assertion on the first block of the vector (3.9) is now verified by inserting the

matrix $H_m$ into the polynomials on either side of (3.11), noting that $w_m(H_m) = O$, and multiplying both sides of (3.10) by $e_1$.

To verify the second block of (3.9), we use identity (3.11) to write

\[ [I_{w_{2m}} f](H_{2m}) = M^{(1)} + M^{(2)} M^{(3)}, \]

where

\[ M^{(1)} := [I_{w_m} f](H_{2m}), \quad M^{(2)} := [I_{w_m^{(2)}} (\Delta_{w_m} f)](H_{2m}), \quad M^{(3)} := w_m(H_{2m}). \]

The block lower triangular structure of $H_{2m}$ carries over to functions of $H_{2m}$, giving

\[ M^{(i)} = \begin{bmatrix} M_{1,1}^{(i)} & O \\ M_{2,1}^{(i)} & M_{2,2}^{(i)} \end{bmatrix}, \quad i = 1, 2, 3, \]

where in addition $M_{1,1}^{(3)} = w_m(H_m) = O$. In this notation the second block of (3.9) is given by

\[ X_{2m,1}\, e_1 = M_{2,1}^{(1)} e_1 + M_{2,2}^{(2)} M_{2,1}^{(3)} e_1. \tag{3.12} \]

For the first term on the right, we have $M_{2,1}^{(1)} e_1 = 0$ because $M_{2,1}^{(1)}$, as the (2,1) block of $M^{(1)} = [I_{w_m} f](H_{2m})$, a polynomial of degree $m-1$ in the Hessenberg matrix $H_{2m}$, has a zero first column. Next, again by the block lower triangular structure of $H_{2m}$, there holds $M_{2,2}^{(2)} = [I_{w_m^{(2)}} (\Delta_{w_m} f)](H_m^{(2)})$. Finally, we note that $M_{2,1}^{(3)} e_1 = \gamma_m e_1$. This follows in a similar way as the evaluation of $M_{2,1}^{(1)} e_1$, but here $M^{(3)} = w_m(H_{2m})$ is a polynomial of degree $m$ in the $2m \times 2m$ upper Hessenberg matrix $H_{2m}$. Again by the sparsity structure of powers of Hessenberg matrices, the first column of $M_{2,1}^{(3)}$ is a multiple of $e_1$. Comparing coefficients reveals this multiple to be $\gamma_m$. Inserting these quantities in (3.12) establishes the second block of identity (3.9), and the proof is complete.

Remark 3.3. We note that the same proof applies when the two Krylov spaces are of different dimensions $m_1$ and $m_2$.

Comparing coefficients in (3.7) and (3.9) reveals that

\[ X_{2m,1}\, e_1 = \gamma_m\, [\Delta_{w_m} f](H_m^{(2)})\, e_1, \]

and we summarize the resulting basic restart step in the following theorem.

Theorem 3.4. The Krylov subspace approximation (3.5) based on the Arnoldi-like decomposition (3.3) is given by

\[ f_{2m} = \beta\, V_m f(H_m)\, e_1 + \beta \gamma_m\, V_m^{(2)} [\Delta_{w_m} f](H_m^{(2)})\, e_1. \tag{3.13} \]

Proof.
The proof follows immediately from (3.5) upon inserting the representation for $[I_{w_{2m}} f](H_{2m})$ given in Lemma 3.2: starting with (3.5), we obtain

\[ f_{2m} = \beta\, W_{2m} f(H_{2m})\, e_1 = \beta \left( V_m [I_{w_m} f](H_m)\, e_1 + \gamma_m V_m^{(2)} [I_{w_m^{(2)}} (\Delta_{w_m} f)](H_m^{(2)})\, e_1 \right) = \beta\, V_m f(H_m)\, e_1 + \beta \gamma_m\, V_m^{(2)} [\Delta_{w_m} f](H_m^{(2)})\, e_1, \]

where the last equality follows from the interpolation properties of $I_{w_m}$ and $I_{w_m^{(2)}}$.
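To make the restart mechanism concrete, here is a small Python sketch of the iterated restart step. It is our own minimal illustration, not the authors' implementation: it uses NumPy/SciPy with $f = \exp$ via `expm`, a basic modified Gram–Schmidt Arnoldi routine, and, rather than forming the divided differences $\Delta_{w_m} f$ explicitly, it evaluates $f$ on the accumulated block Hessenberg matrix (3.4), whose block triangular structure (3.6) yields exactly the correction of Theorem 3.4 (this is the idea behind the second variant, Algorithm 2, discussed below in the paper).

```python
import numpy as np
from scipy.linalg import expm


def arnoldi(A, v, m):
    """Modified Gram-Schmidt Arnoldi of K_m(A, v)."""
    n = len(v)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V, H


def restarted_arnoldi(A, b, m, cycles, f=expm):
    """Restarted Arnoldi approximation of f(A) b with cycle length m."""
    beta = np.linalg.norm(b)
    fk = np.zeros(len(b))
    Hk = np.zeros((0, 0))   # accumulated block Hessenberg matrix, cf. (3.4)
    v = b
    for _ in range(cycles):
        V, H = arnoldi(A, v, m)
        km = Hk.shape[0]
        Hnew = np.zeros((km + m, km + m))
        Hnew[:km, :km] = Hk
        Hnew[km:, km:] = H[:m, :]
        if km > 0:
            Hnew[km, km - 1] = eta_prev   # coupling entry eta_{m+1,m} e_1 e_km^T
        Hk = Hnew
        eta_prev = H[m, m - 1]
        # last m components of f(H_k) e_1 give the correction coefficients
        fk = fk + beta * V[:, :m] @ f(Hk)[km:, 0]
        v = V[:, m]          # restart with the last Arnoldi basis vector
    return fk
```

Only the small matrix $H_k$ and the current basis $V^{(k)}$ are kept; the bases of earlier cycles are discarded, which is the point of the restart.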

3.3. The restarting algorithm. Theorem 3.4 suggests the following scheme for a Krylov approximation of $f(A)b$ based on the restarted Arnoldi process with cycle length $m$: the first approximation $f_m^{(1)}$ is simply the usual Arnoldi approximation with respect to the first Krylov space $\mathcal{K}_m(A,b)$, i.e., $f_m^{(1)} = \beta V_m f(H_m) e_1$. The next Krylov space is generated with the initial vector $v_{m+1}$ and, according to (3.13), the correction to $f_m^{(1)}$ required to obtain the Krylov subspace approximation of $f(A)b$ with respect to the Arnoldi-like decomposition (3.3) is given by

\[ f_m^{(2)} = f_m^{(1)} + \beta \gamma_m\, V_m^{(2)} [\Delta_{w_m} f](H_m^{(2)})\, e_1. \]

The effect of restarting is seen to be a modification of the function $f$ to $\Delta_{w_m} f$ and a replacement of the vector $b$ by $\beta \gamma_m v_{m+1}$. Note that this is in line with the error representation (2.11) in that, after restarting, we are in fact approximating the error term and using this approximation as a correction. The computation of this update requires storing a representation of $\Delta_{w_m} f$ as well as the current approximation $f_m^{(1)}$, but the Arnoldi basis $V_m$ can be discarded. Proceeding in this fashion, we arrive at the restarting scheme given in Algorithm 1.

Algorithm 1: Restarted Arnoldi approximation for $f(A)b$.
Given: $A$, $b$, $f$
$f^{(0)} := f$, $f_m^{(0)} := 0$, $b^{(0)} := b$, $\gamma^{(0)} := \|b\|$.
for $k = 1, 2, \ldots$ until convergence do
    Compute the Arnoldi decomposition $A V^{(k)} = V^{(k)} H^{(k)} + \eta_{m+1,m}^{(k)} b^{(k)} e_m^\top$ of $\mathcal{K}_m(A, b^{(k-1)})$.
    Update the approximation $f_m^{(k)} := f_m^{(k-1)} + \gamma^{(k-1)} V^{(k)} f^{(k-1)}(H^{(k)}) e_1$.
    $\gamma^{(k)} := \gamma^{(k-1)} \prod_{j=1}^{m} \eta_{j+1,j}^{(k)}$
    $f^{(k)} := \Delta_{w^{(k)}} f^{(k-1)}$, where $w^{(k)}$ is the characteristic polynomial of $H^{(k)}$.

Remark 3.5. Algorithm 1 is formulated for Krylov spaces of constant dimension $m$ in each restart cycle, but this dimension can vary from cycle to cycle.

Although Algorithm 1 appears very attractive from a computational point of view, numerical experiments with a Matlab implementation have revealed it to be afflicted with severe stability problems.
The cause of this seems to be the difficulty of numerically computing interpolation polynomials of high degree (see also [33]). We therefore turn to a slightly less efficient variant of our restarting scheme, which our numerical tests indicate to be free from these stability problems.

The generic step of this second variant of the restarted Arnoldi algorithm proceeds as follows: after $k-1$ cycles of the algorithm, we may collect the entirety of Arnoldi decompositions in the $(k-1)$-fold Arnoldi-like decomposition

\[ A W_{(k-1)m} = W_{(k-1)m} H_{(k-1)m} + \eta_{m+1,m}^{(k-1)}\, v_{m+1}^{(k-1)} e_{(k-1)m}^\top \]

with $W_{(k-1)m} = [V^{(1)}, V^{(2)}, \ldots, V^{(k-1)}]$. Combining this with the Arnoldi decomposition

\[ A V^{(k)} = V^{(k)} H^{(k)} + \eta_{m+1,m}^{(k)}\, v_{m+1}^{(k)} e_m^\top \]

of the next Krylov space $\mathcal{K}_m(A, v_{m+1}^{(k-1)})$, we obtain the next Arnoldi-like decomposition

\[ A W_{km} = W_{km} H_{km} + \eta_{m+1,m}^{(k)}\, v_{m+1}^{(k)} e_{km}^\top \]

with $W_{km} = [W_{(k-1)m}, V^{(k)}]$ and

\[ H_{km} = \begin{bmatrix} H_{(k-1)m} & O \\ \eta_{m+1,m}^{(k-1)}\, e_1 e_{(k-1)m}^\top & H^{(k)} \end{bmatrix}. \]

Denoting by $w_{km}$ the characteristic polynomial of $H_{km}$, formula (2.8) for the Krylov subspace approximation with respect to an Arnoldi-like decomposition gives

\[ f_m^{(k)} = \beta\, W_{km} f(H_{km})\, e_1 = f_m^{(k-1)} + \beta\, V^{(k)} \left[ f(H_{km})\, e_1 \right]_{(k-1)m+1:km}, \tag{3.14} \]

where the subscript on the last term refers to the vector consisting of the last $m$ components of $f(H_{km}) e_1$. Formula (3.14) provides an alternative update formula for the restarted Arnoldi approximation. It is somewhat less efficient than that given in Algorithm 1 in that it requires storing $H_{km}$ and the evaluation of $f(H_{km})$, but we have found it to be much more stable than the former. The second variant is summarized in Algorithm 2.

Algorithm 2: Restarted Arnoldi approximation for $f(A)b$ (variant 2).
Given: $A$, $b$, $f$
$f_m^{(0)} := 0$, $b^{(0)} := b$, $\beta := \|b\|$.
for $k = 1, 2, \ldots$ until convergence do
    Compute the Arnoldi decomposition $A V^{(k)} = V^{(k)} H^{(k)} + \eta_{m+1,m}^{(k)} b^{(k)} e_m^\top$ of $\mathcal{K}_m(A, b^{(k-1)})$.
    if $k = 1$ then
        $H_{km} := H^{(1)}$
    else
        $H_{km} := \begin{bmatrix} H_{(k-1)m} & O \\ \eta_{m+1,m}^{(k-1)}\, e_1 e_{(k-1)m}^\top & H^{(k)} \end{bmatrix}$.
    Update the approximation $f_m^{(k)} := f_m^{(k-1)} + \beta\, V^{(k)} \left[ f(H_{km})\, e_1 \right]_{(k-1)m+1:km}$.

4. Properties of the restarted Arnoldi algorithm.

4.1. Special cases. In this section we recover some known algorithms as special cases of the restarted Arnoldi approximation.

4.1.1. Linear systems of equations. We begin by showing that for $f(\lambda) = 1/\lambda$ we recover the well-known restarted full orthogonalization method (FOM) for solving linear systems of equations [26]. With $I_{w_m} f$ for this case given in (3.1), there results

\[ [\Delta_{w_m} f](\lambda) = \frac{1}{\lambda\, w_m(0)}, \]

so that the representation (2.11) becomes

\[ A^{-1} b = \beta\, V_m H_m^{-1} e_1 + \frac{\beta \gamma_m}{w_m(0)}\, A^{-1} v_{m+1} = f_m + \frac{\beta \gamma_m}{w_m(0)}\, A^{-1} v_{m+1}, \]

where $f_m$ denotes the $m$-th FOM iterate. The associated residual is therefore

\[ r_m = b - A f_m = \frac{\beta \gamma_m}{w_m(0)}\, v_{m+1}, \tag{4.1} \]

which leads to

\[ A^{-1} b = f_m + A^{-1} r_m. \]

We conclude that in this case the exact correction $c_m$ to the Arnoldi approximation $f_m$ is the solution of the residual equation $A c_m = r_m$, leading to the problem of approximating $f(A) r_m$, which in restarted FOM is carried out using a new Krylov space with initial vector $r_m$. As an aside, we observe that (4.1) implies that the FOM residual norm can be expressed as

\[ \|r_m\| = \frac{\beta \gamma_m}{|w_m(0)|} = \beta\, \frac{\prod_{j=1}^{m} \eta_{j+1,j}}{|\det H_m|}, \]

an expression first given in [4].

4.1.2. Initial value problems. We consider the initial value problem

\[ y'(t) = A\, y(t), \quad y(0) = b, \tag{4.2} \]

with $A \in \mathbb{C}^{n\times n}$, $b \in \mathbb{C}^n$ (independent of $t$), with solution

\[ y(t) = f_t(A)\, b, \quad f_t(\lambda) = e^{t\lambda}. \tag{4.3} \]

The Arnoldi approximation of (4.3) with respect to (2.3) is given by

\[ y_m(t) = V_m u_m(t), \quad u_m(t) = \beta\, e^{t H_m} e_1, \quad \beta = \|b\|. \tag{4.4} \]

As is easily verified, the associated approximation error $d_m(t) := y(t) - y_m(t)$ as a function of $t$ satisfies the initial value problem

\[ \left( \tfrac{d}{dt} - A \right) d_m(t) = r_m(t), \quad d_m(0) = 0, \tag{4.5} \]

in which the forcing term $r_m(t)$, which plays the role of a residual, is given by

\[ r_m(t) := \eta_{m+1,m} \left[ e_m^\top u_m(t) \right] v_{m+1} = \beta\, \eta_{m+1,m} \left[ e_m^\top e^{t H_m} e_1 \right] v_{m+1} =: \rho_m(t)\, v_{m+1}. \tag{4.6} \]

In [4] (see also [19]) Celledoni and Moret propose a restarted Krylov subspace scheme for solving (4.2) based on the variation-of-constants formula

\[ d_m(t) = F_t(A)\, v_{m+1}, \quad F_t(\lambda) := \int_0^t e^{(t-s)\lambda}\, \rho_m(s)\, ds, \tag{4.7} \]

for the solution of the residual equation (4.5), using repeated Arnoldi approximations of $F_t(A) v_{m+1}$ in a manner similar to Algorithm 1. We note that, in contrast to Algorithm 1, their method requires a time-stepping scheme in addition to the Krylov approximation.

As the approximate solution (4.4) of (4.2) is an Arnoldi approximation, the error representation (4.7) must coincide with that given in (2.11). To provide more insight on the restarted Arnoldi approximation for solving initial value problems, we proceed to show explicitly that the two error representations are the same. The key is the proper treatment of the parameter $t$.
Denoting the error representation (2.11) with f = f_t by

    d̃_m(t) = β γ_m [Δ_{w_m} f_t](A) v_{m+1},    (4.8)

we prove the following result.

Theorem 4.1. The error representation (4.8) for the Arnoldi approximation of (4.3), as a function of t, solves the initial value problem (4.5).

Proof. The initial condition d̃_m(0) = 0 follows from the fact that f_0 ≡ 1 and, since this function is interpolated without error, the associated divided difference is zero. To verify that d̃_m solves the differential equation, note first that differentiating the interpolant of f_t with respect to the parameter t results in

    ∂_t [I_{w_m} f_t] = I_{w_m} (∂_t f_t).    (4.9)

This can be seen by writing the interpolant as

    [I_{w_m} f_t](λ) = Σ_{j=1}^{k} Σ_{l=0}^{n_j−1} f_t^{(l)}(ϑ_j) q_{j,l}(λ),  Σ_{j=1}^{k} n_j = m,

in terms of the Hermite basis polynomials q_{j,l} ∈ P_{m−1}, characterized by

    q_{j,l}^{(p)}(ϑ_q) = δ_{j,q} δ_{l,p},  j, q = 1, 2, ..., k;  l, p = 0, 1, ..., n_j − 1,

and exchanging the order of differentiation. As a consequence of (4.9) and the fact that (∂_t f_t)(λ) = λ f_t(λ), we also have

    ∂_t [Δ_{w_m} f_t] = Δ_{w_m} (∂_t f_t) = Δ_{w_m} (g f_t),  where g(λ) = λ.

The product formula for divided differences (see e.g. [25, Theorem 1.3.3]) now yields

    ∂_t [Δ_{w_m} f_t](λ) = λ [Δ_{w_m} f_t](λ) + π_{m−1}(t),    (4.10)

where π_{m−1}(t) is the leading coefficient of I_{w_m} f_t. Inserting A for λ in the scalar equation (4.10) and multiplying by v_{m+1} now gives us

    (∂_t − A) d̃_m(t) = β γ_m ( A [Δ_{w_m} f_t](A) + π_{m−1}(t) I − A [Δ_{w_m} f_t](A) ) v_{m+1} = β γ_m π_{m−1}(t) v_{m+1}.

A comparison with (4.6) reveals that what remains to be shown is

    (γ_m / η_{m+1,m}) π_{m−1}(t) = e_m^T e^{t H_m} e_1.

The term on the right is the entry at the (m, 1) position of the matrix e^{t H_m} = [I_{w_m} f_t](H_m). Due to the sparsity pattern of powers of an upper Hessenberg matrix, this entry is given by

    ∏_{j=1}^{m−1} η_{j+1,j} · π_{m−1}(t) = (γ_m / η_{m+1,m}) π_{m−1}(t),

and the proof is complete.

The uniqueness of the solution of (4.5) together with Theorem 4.1 now implies once more that d̃_m(t) = d_m(t). We emphasize again that our restarted Arnoldi method approximates these error terms directly, without recourse to a time-stepping scheme.
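The residual identity (4.6) is a direct consequence of the Arnoldi relation A V_m = V_m H_m + η_{m+1,m} v_{m+1} e_m^T: since y_m'(t) = V_m H_m u_m(t), one gets A y_m(t) − y_m'(t) = r_m(t). A Python/NumPy sketch with random test data (our construction, not from the paper) confirms this numerically:

```python
import numpy as np
from scipy.linalg import expm, norm


def arnoldi(A, v, m):
    """m Arnoldi steps (modified Gram-Schmidt)."""
    n = v.shape[0]
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v / norm(v)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V, H


rng = np.random.default_rng(1)
n, m, t = 30, 8, 0.7
A = rng.standard_normal((n, n)) / np.sqrt(n)
b = rng.standard_normal(n)
beta = norm(b)

V, H = arnoldi(A, b, m)
Hm, eta = H[:m, :m], H[m, m - 1]
u = beta * expm(t * Hm)[:, 0]          # u_m(t) = beta e^{t H_m} e_1
ym = V[:, :m] @ u                      # Arnoldi approximation y_m(t)
ym_dot = V[:, :m] @ (Hm @ u)           # y_m'(t) = V_m H_m u_m(t)
rm = eta * u[m - 1] * V[:, m]          # r_m(t) = eta_{m+1,m} [e_m^T u_m(t)] v_{m+1}
defect = A @ ym - ym_dot               # should equal r_m(t) exactly
err = norm(defect - rm)
```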

4.2. Convergence. The full Arnoldi approximation is known to converge superlinearly for the exponential function, as shown e.g. in [17, 8]. For the case of solving linear systems of equations, i.e., the Arnoldi approximation for the function f(λ) = 1/λ, it is known that restarting the process can degrade or even destroy convergence. In this section we show that, for sufficiently smooth functions, restarting the Arnoldi approximation preserves its superlinear convergence rate. We state the following result for entire functions of order one (cf. [2, Section 2.1]), a class which includes the exponential function, and note that the result generalizes to other orders with minor modifications.

Theorem 4.2. Given A ∈ C^{n×n} and an entire function f of order one, let {ϑ_j^{(m)}}_{j=1}^{m}, m ≥ 1, denote an arbitrary node sequence contained in the field of values W(A) of A, with associated nodal polynomials w_m ∈ P_m. Then there exist constants C and γ, which are independent of m, such that

    ||f(A)b − [I_{w_m} f](A)b|| ≤ C (γ^{m−1} / (m−1)!) ||b||  for all m.    (4.11)

Proof. We recall the well-known Hermite representation theorem for the interpolation error (cf. [6, Theorem 3.6.1]): Let Γ ⊂ C be a contour which contains W(A), and hence also the interpolation nodes, in its interior, which we denote by Ω. Then for all λ ∈ Ω we have

    f(λ) − [I_{w_m} f](λ) = (1/(2πi)) ∫_Γ ( f(t) / (t − λ) ) ( w_m(λ) / w_m(t) ) dt.    (4.12)

By replacing f with f − p in (4.12), we obtain for arbitrary polynomials p ∈ P_{m−1} the identity

    f(λ) − [I_{w_m} f](λ) = (1/(2πi)) ∫_Γ ( (f(t) − p(t)) / (t − λ) ) ( w_m(λ) / w_m(t) ) dt.

Inserting A for λ on both sides, multiplying with b and taking norms gives

    ||f(A)b − [I_{w_m} f](A)b|| ≤ (1/(2π)) ∫_Γ |f(t) − p(t)| ||(tI − A)^{-1}|| ( ||w_m(A)|| / |w_m(t)| ) |dt| ||b||.    (4.13)

We now bound each factor of the integrand. For any unit vector u ∈ C^n we have u^H A u ∈ W(A), and thus, for all t ∈ Γ,

    dist(Γ, W(A)) ≤ |t − u^H A u| = |u^H (tI − A) u| ≤ ||(tI − A) u||.

For arbitrary v ∈ C^n it follows that ||(tI − A) v|| ≥ dist(Γ, W(A)) ||v|| and therefore

    ||(tI − A)^{-1}|| ≤ 1 / dist(Γ, W(A)).    (4.14)

Similarly, since the nodes ϑ_j^{(m)} are contained in W(A) by assumption, we have

    |w_m(t)| = |t − ϑ_1^{(m)}| |t − ϑ_2^{(m)}| ··· |t − ϑ_m^{(m)}| ≥ dist(Γ, W(A))^m,  t ∈ Γ.    (4.15)

Moreover, with r(A) := max{|λ| : λ ∈ W(A)} denoting the numerical radius of A, we may bound ||w_m(A)|| by

    ||w_m(A)|| ≤ ∏_{j=1}^{m} ||A − ϑ_j I|| ≤ ∏_{j=1}^{m} ( ||A|| + |ϑ_j| ) ≤ [3 r(A)]^m,    (4.16)

which follows from the well-known inequality ||A|| ≤ 2 r(A) and since ϑ_j^{(m)} ∈ W(A). Thus, from (4.13), (4.14), (4.15), (4.16) and the fact that p ∈ P_{m−1} was arbitrary, we obtain the bound

    ||f(A)b − [I_{w_m} f](A)b|| ≤ (l(Γ) / (2π)) inf_{p ∈ P_{m−1}} ||f − p||_{∞,Ω} ( [3 r(A)]^m / dist(Γ, W(A))^{m+1} ) ||b||,

where l(Γ) denotes the length of the contour Γ and ||·||_{∞,Ω} denotes the supremum norm on Ω. The assertion now follows from the convergence rate of best uniform approximation of entire functions of order one by polynomials. In particular, it is known (see [11]) that there exist constants C and γ such that

    inf_{p ∈ P_{m−1}} ||f − p||_{∞,Ω} ≤ C γ^{m−1} / (m−1)!.

Corollary 4.3. The restarted Arnoldi approximation converges superlinearly for entire functions of order one.

Proof. This follows from Theorem 4.2 by noting that, for the Arnoldi approximation, the interpolation nodes of each restart cycle are Ritz values of A and therefore contained in W(A).

5. Numerical experiments. In this section we demonstrate the behavior of the restarted Arnoldi approximation for the exponential function using several examples from the literature. All computations were carried out in Matlab version 7.0 (R14) on a 1.6 GHz Power Mac G5 computer with 1.5 GB of RAM.

5.1. 3D heat equation. Our first numerical experiment is based on one from [14]: consider the initial-boundary value problem

    u_t − Δu = 0  on (0,1)^3 × (0,T),    (5.1a)
    u(x,t) = 0  on ∂(0,1)^3 for all t ∈ [0,T],    (5.1b)
    u(x,0) = u_0(x),  x ∈ (0,1)^3.    (5.1c)

When the Laplacian is discretized by the usual seven-point stencil on a uniform grid involving n interior grid points in each Cartesian direction, problem (5.1) reduces to the initial value problem

    u'(t) = A u(t),  t ∈ (0,T),  u(0) = u_0,

with an N × N matrix A (N = n^3) and an initial vector u_0 consisting of the values of u_0(x) at the grid points x, the solution of which is given by

    u(t) = f_t(A) u_0 = e^{tA} u_0.    (5.2)

As in [14], we give the initial vector in terms of its expansion in eigenfunctions of the discrete Laplacian as

    u_0^{i,j,k} = Σ_{i',j',k'} (1 / (i' + j' + k')) sin(i i' πh) sin(j j' πh) sin(k k' πh).
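The seven-point discretization underlying this example has Kronecker-sum structure A = B ⊕ B ⊕ B, with B the 1D second-difference matrix. A small Python/NumPy sketch (ours, with n = 8 rather than n = 35 to keep it small) assembles A and checks its extreme eigenvalues against the known formulas for B:

```python
import numpy as np


def laplacian_3d(n):
    """Seven-point Laplacian on the unit cube, homogeneous Dirichlet data,
    n interior points per direction, mesh size h = 1/(n+1)."""
    h = 1.0 / (n + 1)
    B = (np.diag(-2.0 * np.ones(n))
         + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1)) / h**2
    I = np.eye(n)
    # Kronecker sum B ⊕ B ⊕ B
    return (np.kron(np.kron(B, I), I)
            + np.kron(np.kron(I, B), I)
            + np.kron(np.kron(I, I), B))


n = 8
h = 1.0 / (n + 1)
A = laplacian_3d(n)                     # N x N with N = n^3 = 512
lam = np.linalg.eigvalsh(A)
# 1D eigenvalues are (2/h^2)(cos(k*pi*h) - 1), k = 1..n; 3D values are sums of three
lam_max_exact = 3 * (2 / h**2) * (np.cos(np.pi * h) - 1)
lam_min_exact = 3 * (2 / h**2) * (np.cos(n * np.pi * h) - 1)
```

For the actual experiment (n = 35, N = 42875) one would of course store A in sparse format rather than densely.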

Table 5.1
The full Arnoldi approximation applied to the 3D heat equation with h = 1/36 and t = 0.1 = n_steps · Δt. The dimension m of the Krylov spaces is chosen as the smallest to result in an error ||e_m||_2 below the prescribed tolerance at t = 0.1. (Columns: Δt, n_steps, m, time [s], ||e_m||_2; the tabulated values are not recoverable from this transcription.)

Table 5.2
The restarted Arnoldi approximation applied to the 3D heat equation with h = 1/36 and t = 0.1. The dimension m of the Krylov spaces is chosen to coincide with the runs in Table 5.1, and now the number of restarts k is chosen as the smallest to result in an error ||e||_2 below the prescribed tolerance at t = 0.1. (Columns: m, k, time [s], ||e||_2; the tabulated values are not recoverable from this transcription.)

Here h = 1/(n + 1) is the mesh size and the triple indexing is relative to the lexicographic ordering of the mesh points in the unit cube. We first consider the case n = 35 and repeat a calculation from [14], where (5.2) is approximated at t = 0.1 using the unrestarted Arnoldi approximation. Writing the solution in the form u(t) = (e^{Δt A})^k u_0, where k Δt = t, one can compute the solution using k applications of the Arnoldi approximation involving the matrix Δt A, which has a smaller spectral interval than A and hence results in faster convergence. There is thus a tradeoff between using Krylov spaces of small dimension and having to take a small number of time steps of length Δt. The results in Table 5.1 show the execution times which result from fixing the time step Δt and using the smallest Krylov subspace dimension m which results in a Euclidean norm below the prescribed tolerance for the error vector e_m of the unrestarted Arnoldi approximation. We observe that using smaller time steps does allow one to use smaller Krylov spaces, however at a higher cost in terms of execution time. We next consider the same problem, but instead of taking several time steps with the full Arnoldi approximation, we reduce the size of the Krylov spaces by restarting after every m steps. The results are given in Table 5.2. The dimension of the Krylov spaces is chosen to coincide with the corresponding runs from Table 5.1, but now the number of restarts k is chosen as the smallest to result in an error below the tolerance. Again there is a tradeoff between the size of the Krylov space and the number

of restarts required until convergence. In contrast to Table 5.1, however, we note that the total execution times decrease rather than increase when smaller Krylov spaces are employed, in spite of the fact that this requires more restart cycles. Moreover, the longest execution time of the restarted variant is less than half of the shortest execution time of any of the full Arnoldi approximation runs.

Fig. 5.1. Error norm histories for the restarted Arnoldi approximation applied to the 3D heat equation with n = 50, i.e., N = 125,000, for several restart lengths (m = ∞, 20, 10, 6; error versus matrix-vector products).

Table 5.3
Execution times for the runs depicted in Figure 5.1: m denotes the restart length and k the number of restart cycles. (Columns: m, k, time [s]; the tabulated values are not recoverable from this transcription.)

Finally, we consider the same problem for a finer discretization with n = 50, resulting in a matrix of dimension N = 125,000. We apply the restarted Arnoldi approximation with restart lengths m = 6, 10 and 20, using the full Arnoldi approximation (m = ∞) as a reference. Each iteration is run until the accuracy no longer improves. The resulting error curves are shown in Figure 5.1, and the corresponding execution times in Table 5.3. We observe here that the method requires successively more restart cycles to converge as the restart length is decreased. Convergence, however, is merely delayed and is maintained down to the smallest restart length m = 6. In terms of execution time, there appears to be a point of diminishing returns using shorter and shorter restart lengths, as the shortest execution time was obtained for m = 10.

5.2. Skew-symmetric problem. Our next example is taken from [17]. We consider a matrix A with 1001 equidistant eigenvalues in [−20i, 20i]. In contrast to [17], we choose A to be block diagonal and real (and not diagonal and complex) in

order to avoid complex arithmetic:

    A = blockdiag(B_0, B_1, ..., B_500) ∈ R^{1001×1001},  B_0 = 0,  B_j = (j/25) [ 0 1 ; −1 0 ],  j = 1, 2, ..., 500.    (5.3)

The vector b is a random vector of unit norm.¹ The error curve of the full Arnoldi approximation (m = ∞) as well as those of the restarted Arnoldi approximation with restart lengths m = 2, 4 and 6 are shown in Figure 5.2.

Fig. 5.2. Error norm histories for the skew-symmetric problem of dimension n = 1001 (m = ∞, 6, 4, 2; error versus matrix-vector products).

We observe that the errors associated with the restarted Arnoldi approximations initially increase before tending to zero. We also observe that the final accuracy of the approximation deteriorates with decreasing restart length. This indicates that the restart length is too small to resolve the spectral interval of A. For an explanation, recall that the Arnoldi approximation f_m of exp(A)b can be viewed as the result of an interpolation process: f_m = q_{m−1}(A)b, where q_{m−1} is an interpolating polynomial for the exponential function. For the unrestarted Arnoldi method, the interpolation nodes are the Ritz values of A with respect to K_m(A, b), which are approximately uniformly distributed over [−20i, 20i] (cf. Figure 5.3, where the imaginary parts of the Ritz values are shown²). For the restarted Arnoldi method (with restart length m), however, the interpolation nodes are the collection of the Ritz values of A with respect to several Krylov spaces K_m(A, b^{(j)}), j = 0, 1, ..., k − 1 (after k restarts). These are far from uniformly distributed in [−20i, 20i], but rather tend to accumulate at discrete points (see Figure 5.3). In the extreme case of restart length one, all interpolation nodes equal ϑ = 0 (at least in exact arithmetic) and the interpolating polynomial

    q_{k−1}(λ) = Σ_{j=0}^{k−1} λ^j / j!

is simply the truncated Taylor expansion of exp(λ). It is well known that, for λ ≪ 0, intermediate partial sums are much larger than the final limit. An analogous statement holds for Hermite interpolating polynomials of the exponential function at too few nodes.

¹ The vector b is generated by the Matlab syntax: randn('state',0); b = randn(1001,1).
² Note that all Ritz values are purely imaginary because A is skew-symmetric and b is real.

Fig. 5.3. Interpolation nodes for the skew-symmetric problem of dimension n = 1001: imaginary parts of the Ritz values (m = ∞) and of the eigenvalues of H_km for m = 6, 4 and 2, plotted against the dimension of the Krylov space.

The phenomenon described above becomes more pronounced if we increase the spectral interval of A: Again, we consider the matrix A of (5.3), but now of dimension 10001 with equidistant eigenvalues in [−200i, 200i]. The resulting error curves for the restart lengths m = 5, 10, 20 and 40 are shown in Figure 5.4. Table 5.4 shows the number of matrix-vector products and the execution times which were required to reach the final accuracy for the different restart lengths. We also list the largest intermediate error (and after how many matrix-vector multiplications it is observed). Note that for every m the quotient of the final accuracy and this largest error approximately equals the machine precision of 2e−16.
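The remark about truncated Taylor expansions is easy to illustrate for an argument of the size occurring in this example: for λ = 20i the limit satisfies |e^λ| = 1, while the partial sums q_{k−1}(λ) pass through intermediate values that are orders of magnitude larger (a Python sketch, our construction):

```python
import numpy as np

lam = 20j                             # extreme eigenvalue of the first example
terms = [1.0 + 0j]                    # entry j holds lam^j / j!
for j in range(1, 120):
    terms.append(terms[-1] * lam / j)
partial_sums = np.cumsum(terms)       # q_{k-1}(lam) for k = 1, ..., 120
largest = np.abs(partial_sums).max()  # large transient, terms peak near j = 20
final = partial_sums[-1]              # the limit e^{20i}
err = abs(final - np.exp(lam))
```

This mirrors the transient error growth visible in Figure 5.2: in floating point arithmetic the large intermediate values limit the attainable final accuracy.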

Fig. 5.4. Error norm histories for the skew-symmetric problem of dimension n = 10001 (m = ∞, 40, 20, 10, 5; error versus matrix-vector products).

Table 5.4
The restarted Arnoldi approximation applied to the skew-symmetric problem of dimension 10001, for several restart lengths m (cf. Figure 5.4). (Columns: m, matrix-vector products, final accuracy, largest error [with the number of matrix-vector products at which it is observed], quotient of largest error and final accuracy, time [s]; the tabulated values are only partially recoverable from this transcription.)

5.3. Convection-diffusion problem. Our final example is taken from [19, Example 6.1]: we consider the initial-boundary value problem

    u_t − Δu + τ_1 u_{x_1} + τ_2 u_{x_2} = 0  on (0,1)^3 × (0,T),
    u(x,t) = 0  on ∂(0,1)^3 for all t ∈ [0,T],
    u(x,0) = u_0(x),  x ∈ (0,1)^3.

Discretizing the Laplacian by the usual seven-point stencil and the first-order derivatives u_{x_1} and u_{x_2} by central differences on a uniform grid with step size h = 1/(n + 1) leads, as in Section 5.1, to an ordinary initial value problem

    u'(t) = A u(t),  t ∈ (0,T),  u(0) = u_0,

with the matrix of dimension N = n^3 given by

    A = I_n ⊗ [I_n ⊗ C_1] + [B ⊗ I_n + I_n ⊗ C_2] ⊗ I_n,
    B = (1/h^2) tridiag(1, −2, 1),  C_j = (1/h^2) tridiag(1 + µ_j, −2, 1 − µ_j),  j = 1, 2,

where µ_j = τ_j h/2. The nonsymmetric matrix A is a popular test matrix because its eigenvalues are explicitly known: If µ_j > 1 (for at least one j), they are complex; more precisely (cf. [19]),

    Λ(A) ⊂ (1/h^2) [ −6 − 2 cos(πh) Re(θ), −6 + 2 cos(πh) Re(θ) ] + (i/h^2) [ −2 cos(πh) Im(θ), 2 cos(πh) Im(θ) ]

with θ = 1 + √(1 − µ_1^2) + √(1 − µ_2^2). As in [19], we choose h = 1/16 and τ_1 = 96, τ_2 = 128 (τ_1 = τ_2 = 320), which leads to µ_1 = 3, µ_2 = 4 (µ_1 = µ_2 = 10), and approximate e^{tA} b, where t = h^2 and b = [1, 1, ..., 1]^T.

Fig. 5.5. Error norm histories for the restarted Arnoldi approximation applied to the convection-diffusion problem with n = 15, i.e., N = 3375, for several restart lengths (m = ∞, 6, 4, 2). As in [19], we chose µ_1 = 3, µ_2 = 4 (top) and µ_1 = µ_2 = 10 (bottom).

The resulting error norm histories are shown in Figure 5.5. In the second example we again observe transient error growth. We attribute this, as in the skew-symmetric example, to the sufficiently large imaginary parts of the eigenvalues of A.
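The spectral inclusion above (our reconstruction of a garbled formula, with θ = 1 + √(1 − µ_1²) + √(1 − µ_2²)) can be sanity-checked numerically. The Python/NumPy sketch below uses one consistent Kronecker arrangement of the definition of A (any arrangement yields the same spectrum) and a small n:

```python
import numpy as np


def tridiag(n, sub, diag, sup):
    """Dense n x n tridiagonal Toeplitz matrix tridiag(sub, diag, sup)."""
    return (np.diag(diag * np.ones(n))
            + np.diag(sup * np.ones(n - 1), 1)
            + np.diag(sub * np.ones(n - 1), -1))


n = 5                                  # small sketch; the text uses n = 15
h = 1.0 / (n + 1)
mu1, mu2 = 3.0, 4.0
B = tridiag(n, 1.0, -2.0, 1.0) / h**2
C1 = tridiag(n, 1.0 + mu1, -2.0, 1.0 - mu1) / h**2
C2 = tridiag(n, 1.0 + mu2, -2.0, 1.0 - mu2) / h**2
I = np.eye(n)
A = (np.kron(np.kron(B, I), I)
     + np.kron(np.kron(I, C2), I)
     + np.kron(np.kron(I, I), C1))
lam = np.linalg.eigvals(A)

im_theta = np.sqrt(mu1**2 - 1) + np.sqrt(mu2**2 - 1)   # Im(theta); Re(theta) = 1 here
re_lo = (-6 - 2 * np.cos(np.pi * h)) / h**2
re_hi = (-6 + 2 * np.cos(np.pi * h)) / h**2
im_max = 2 * np.cos(np.pi * h) * im_theta / h**2
tol = 1e-8 / h**2
```

The check works because the eigenvalues of tridiag(a, b, c) are b + 2√(ac) cos(kπh), so for µ_j > 1 the C_j contribute purely imaginary parts ±2√(µ_j² − 1) cos(kπh)/h² while B contributes the real interval.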

Chapter 6 1-D Continuous Groups

Chapter 6 1-D Continuous Groups Chapter 6 1-D Continuous Groups Continuous groups consist of group eleents labelled by one or ore continuous variables, say a 1, a 2,, a r, where each variable has a well- defined range. This chapter explores:

More information

RESTARTED FULL ORTHOGONALIZATION METHOD FOR SHIFTED LINEAR SYSTEMS

RESTARTED FULL ORTHOGONALIZATION METHOD FOR SHIFTED LINEAR SYSTEMS BIT Nuerical Matheatics 43: 459 466, 2003. 2003 Kluwer Acadeic Publishers. Printed in The Netherlands 459 RESTARTED FULL ORTHOGONALIZATION METHOD FOR SHIFTED LINEAR SYSTEMS V. SIMONCINI Dipartiento di

More information

Feature Extraction Techniques

Feature Extraction Techniques Feature Extraction Techniques Unsupervised Learning II Feature Extraction Unsupervised ethods can also be used to find features which can be useful for categorization. There are unsupervised ethods that

More information

Physics 215 Winter The Density Matrix

Physics 215 Winter The Density Matrix Physics 215 Winter 2018 The Density Matrix The quantu space of states is a Hilbert space H. Any state vector ψ H is a pure state. Since any linear cobination of eleents of H are also an eleent of H, it

More information

c 2010 Society for Industrial and Applied Mathematics

c 2010 Society for Industrial and Applied Mathematics SIAM J MATRIX ANAL APPL Vol 31, No 4, pp 1848 1872 c 21 Society for Industrial and Applied Matheatics STOCHASTIC GALERKIN MATRICES OLIVER G ERNST AND ELISABETH ULLMANN Abstract We investigate the structural,

More information

Model Fitting. CURM Background Material, Fall 2014 Dr. Doreen De Leon

Model Fitting. CURM Background Material, Fall 2014 Dr. Doreen De Leon Model Fitting CURM Background Material, Fall 014 Dr. Doreen De Leon 1 Introduction Given a set of data points, we often want to fit a selected odel or type to the data (e.g., we suspect an exponential

More information

Lecture 13 Eigenvalue Problems

Lecture 13 Eigenvalue Problems Lecture 13 Eigenvalue Probles MIT 18.335J / 6.337J Introduction to Nuerical Methods Per-Olof Persson October 24, 2006 1 The Eigenvalue Decoposition Eigenvalue proble for atrix A: Ax = λx with eigenvalues

More information

NORMAL MATRIX POLYNOMIALS WITH NONSINGULAR LEADING COEFFICIENTS

NORMAL MATRIX POLYNOMIALS WITH NONSINGULAR LEADING COEFFICIENTS NORMAL MATRIX POLYNOMIALS WITH NONSINGULAR LEADING COEFFICIENTS NIKOLAOS PAPATHANASIOU AND PANAYIOTIS PSARRAKOS Abstract. In this paper, we introduce the notions of weakly noral and noral atrix polynoials,

More information

ON THE TWO-LEVEL PRECONDITIONING IN LEAST SQUARES METHOD

ON THE TWO-LEVEL PRECONDITIONING IN LEAST SQUARES METHOD PROCEEDINGS OF THE YEREVAN STATE UNIVERSITY Physical and Matheatical Sciences 04,, p. 7 5 ON THE TWO-LEVEL PRECONDITIONING IN LEAST SQUARES METHOD M a t h e a t i c s Yu. A. HAKOPIAN, R. Z. HOVHANNISYAN

More information

A note on the multiplication of sparse matrices

A note on the multiplication of sparse matrices Cent. Eur. J. Cop. Sci. 41) 2014 1-11 DOI: 10.2478/s13537-014-0201-x Central European Journal of Coputer Science A note on the ultiplication of sparse atrices Research Article Keivan Borna 12, Sohrab Aboozarkhani

More information

EE5900 Spring Lecture 4 IC interconnect modeling methods Zhuo Feng

EE5900 Spring Lecture 4 IC interconnect modeling methods Zhuo Feng EE59 Spring Parallel LSI AD Algoriths Lecture I interconnect odeling ethods Zhuo Feng. Z. Feng MTU EE59 So far we ve considered only tie doain analyses We ll soon see that it is soeties preferable to odel

More information

Physics 139B Solutions to Homework Set 3 Fall 2009

Physics 139B Solutions to Homework Set 3 Fall 2009 Physics 139B Solutions to Hoework Set 3 Fall 009 1. Consider a particle of ass attached to a rigid assless rod of fixed length R whose other end is fixed at the origin. The rod is free to rotate about

More information

Sharp Time Data Tradeoffs for Linear Inverse Problems

Sharp Time Data Tradeoffs for Linear Inverse Problems Sharp Tie Data Tradeoffs for Linear Inverse Probles Saet Oyak Benjain Recht Mahdi Soltanolkotabi January 016 Abstract In this paper we characterize sharp tie-data tradeoffs for optiization probles used

More information

A new type of lower bound for the largest eigenvalue of a symmetric matrix

A new type of lower bound for the largest eigenvalue of a symmetric matrix Linear Algebra and its Applications 47 7 9 9 www.elsevier.co/locate/laa A new type of lower bound for the largest eigenvalue of a syetric atrix Piet Van Mieghe Delft University of Technology, P.O. Box

More information

Generalized eigenfunctions and a Borel Theorem on the Sierpinski Gasket.

Generalized eigenfunctions and a Borel Theorem on the Sierpinski Gasket. Generalized eigenfunctions and a Borel Theore on the Sierpinski Gasket. Kasso A. Okoudjou, Luke G. Rogers, and Robert S. Strichartz May 26, 2006 1 Introduction There is a well developed theory (see [5,

More information

Multi-Scale/Multi-Resolution: Wavelet Transform

Multi-Scale/Multi-Resolution: Wavelet Transform Multi-Scale/Multi-Resolution: Wavelet Transfor Proble with Fourier Fourier analysis -- breaks down a signal into constituent sinusoids of different frequencies. A serious drawback in transforing to the

More information

3.8 Three Types of Convergence

3.8 Three Types of Convergence 3.8 Three Types of Convergence 3.8 Three Types of Convergence 93 Suppose that we are given a sequence functions {f k } k N on a set X and another function f on X. What does it ean for f k to converge to

More information

Finding Rightmost Eigenvalues of Large Sparse. Non-symmetric Parameterized Eigenvalue Problems. Abstract. Introduction

Finding Rightmost Eigenvalues of Large Sparse. Non-symmetric Parameterized Eigenvalue Problems. Abstract. Introduction Finding Rightost Eigenvalues of Large Sparse Non-syetric Paraeterized Eigenvalue Probles Applied Matheatics and Scientific Coputation Progra Departent of Matheatics University of Maryland, College Par,

More information

Iterative Linear Solvers and Jacobian-free Newton-Krylov Methods

Iterative Linear Solvers and Jacobian-free Newton-Krylov Methods Eric de Sturler Iterative Linear Solvers and Jacobian-free Newton-Krylov Methods Eric de Sturler Departent of Matheatics, Virginia Tech www.ath.vt.edu/people/sturler/index.htl sturler@vt.edu Efficient

More information

13.2 Fully Polynomial Randomized Approximation Scheme for Permanent of Random 0-1 Matrices

13.2 Fully Polynomial Randomized Approximation Scheme for Permanent of Random 0-1 Matrices CS71 Randoness & Coputation Spring 018 Instructor: Alistair Sinclair Lecture 13: February 7 Disclaier: These notes have not been subjected to the usual scrutiny accorded to foral publications. They ay

More information

lecture 36: Linear Multistep Mehods: Zero Stability

lecture 36: Linear Multistep Mehods: Zero Stability 95 lecture 36: Linear Multistep Mehods: Zero Stability 5.6 Linear ultistep ethods: zero stability Does consistency iply convergence for linear ultistep ethods? This is always the case for one-step ethods,

More information

Konrad-Zuse-Zentrum für Informationstechnik Berlin Heilbronner Str. 10, D Berlin - Wilmersdorf

Konrad-Zuse-Zentrum für Informationstechnik Berlin Heilbronner Str. 10, D Berlin - Wilmersdorf Konrad-Zuse-Zentru für Inforationstechnik Berlin Heilbronner Str. 10, D-10711 Berlin - Wilersdorf Folkar A. Borneann On the Convergence of Cascadic Iterations for Elliptic Probles SC 94-8 (Marz 1994) 1

More information

Ch 12: Variations on Backpropagation

Ch 12: Variations on Backpropagation Ch 2: Variations on Backpropagation The basic backpropagation algorith is too slow for ost practical applications. It ay take days or weeks of coputer tie. We deonstrate why the backpropagation algorith

More information

New upper bound for the B-spline basis condition number II. K. Scherer. Institut fur Angewandte Mathematik, Universitat Bonn, Bonn, Germany.

New upper bound for the B-spline basis condition number II. K. Scherer. Institut fur Angewandte Mathematik, Universitat Bonn, Bonn, Germany. New upper bound for the B-spline basis condition nuber II. A proof of de Boor's 2 -conjecture K. Scherer Institut fur Angewandte Matheati, Universitat Bonn, 535 Bonn, Gerany and A. Yu. Shadrin Coputing

More information

Fixed-Point Iterations, Krylov Spaces, and Krylov Methods

Fixed-Point Iterations, Krylov Spaces, and Krylov Methods Fixed-Point Iterations, Krylov Spaces, and Krylov Methods Fixed-Point Iterations Solve nonsingular linear syste: Ax = b (solution ˆx = A b) Solve an approxiate, but sipler syste: Mx = b x = M b Iprove

More information

Quantum algorithms (CO 781, Winter 2008) Prof. Andrew Childs, University of Waterloo LECTURE 15: Unstructured search and spatial search

Quantum algorithms (CO 781, Winter 2008) Prof. Andrew Childs, University of Waterloo LECTURE 15: Unstructured search and spatial search Quantu algoriths (CO 781, Winter 2008) Prof Andrew Childs, University of Waterloo LECTURE 15: Unstructured search and spatial search ow we begin to discuss applications of quantu walks to search algoriths

More information

Fast Montgomery-like Square Root Computation over GF(2 m ) for All Trinomials

Fast Montgomery-like Square Root Computation over GF(2 m ) for All Trinomials Fast Montgoery-like Square Root Coputation over GF( ) for All Trinoials Yin Li a, Yu Zhang a, a Departent of Coputer Science and Technology, Xinyang Noral University, Henan, P.R.China Abstract This letter

More information

Block designs and statistics

Block designs and statistics Bloc designs and statistics Notes for Math 447 May 3, 2011 The ain paraeters of a bloc design are nuber of varieties v, bloc size, nuber of blocs b. A design is built on a set of v eleents. Each eleent

More information

Algebraic Montgomery-Yang problem: the log del Pezzo surface case

Algebraic Montgomery-Yang problem: the log del Pezzo surface case c 2014 The Matheatical Society of Japan J. Math. Soc. Japan Vol. 66, No. 4 (2014) pp. 1073 1089 doi: 10.2969/jsj/06641073 Algebraic Montgoery-Yang proble: the log del Pezzo surface case By DongSeon Hwang

More information

3.3 Variational Characterization of Singular Values

3.3 Variational Characterization of Singular Values 3.3. Variational Characterization of Singular Values 61 3.3 Variational Characterization of Singular Values Since the singular values are square roots of the eigenvalues of the Heritian atrices A A and

More information

List Scheduling and LPT Oliver Braun (09/05/2017)

List Scheduling and LPT Oliver Braun (09/05/2017) List Scheduling and LPT Oliver Braun (09/05/207) We investigate the classical scheduling proble P ax where a set of n independent jobs has to be processed on 2 parallel and identical processors (achines)

More information

Least Squares Fitting of Data

Least Squares Fitting of Data Least Squares Fitting of Data David Eberly, Geoetric Tools, Redond WA 98052 https://www.geoetrictools.co/ This work is licensed under the Creative Coons Attribution 4.0 International License. To view a

More information

RECOVERY OF A DENSITY FROM THE EIGENVALUES OF A NONHOMOGENEOUS MEMBRANE

RECOVERY OF A DENSITY FROM THE EIGENVALUES OF A NONHOMOGENEOUS MEMBRANE Proceedings of ICIPE rd International Conference on Inverse Probles in Engineering: Theory and Practice June -8, 999, Port Ludlow, Washington, USA : RECOVERY OF A DENSITY FROM THE EIGENVALUES OF A NONHOMOGENEOUS

More information

Last Time. Social Network Graphs Betweenness. Graph Laplacian. Girvan-Newman Algorithm. Spectral Bisection

Last Time. Social Network Graphs Betweenness. Graph Laplacian. Girvan-Newman Algorithm. Spectral Bisection Eigenvalue Problems Last Time Social Network Graphs Betweenness Girvan-Newman Algorithm Graph Laplacian Spectral Bisection λ 2, w 2 Today Small deviation into eigenvalue problems Formulation Standard eigenvalue

More information

Non-Parametric Non-Line-of-Sight Identification 1

Non-Parametric Non-Line-of-Sight Identification 1 Non-Paraetric Non-Line-of-Sight Identification Sinan Gezici, Hisashi Kobayashi and H. Vincent Poor Departent of Electrical Engineering School of Engineering and Applied Science Princeton University, Princeton,

More information

Support Vector Machine Classification of Uncertain and Imbalanced data using Robust Optimization

Support Vector Machine Classification of Uncertain and Imbalanced data using Robust Optimization Recent Researches in Coputer Science Support Vector Machine Classification of Uncertain and Ibalanced data using Robust Optiization RAGHAV PAT, THEODORE B. TRAFALIS, KASH BARKER School of Industrial Engineering

More information

Distributed Subgradient Methods for Multi-agent Optimization

Distributed Subgradient Methods for Multi-agent Optimization 1 Distributed Subgradient Methods for Multi-agent Optiization Angelia Nedić and Asuan Ozdaglar October 29, 2007 Abstract We study a distributed coputation odel for optiizing a su of convex objective functions

More information

Generalized AOR Method for Solving System of Linear Equations. Davod Khojasteh Salkuyeh. Department of Mathematics, University of Mohaghegh Ardabili,

Generalized AOR Method for Solving System of Linear Equations. Davod Khojasteh Salkuyeh. Department of Mathematics, University of Mohaghegh Ardabili, Australian Journal of Basic and Applied Sciences, 5(3): 35-358, 20 ISSN 99-878 Generalized AOR Method for Solving Syste of Linear Equations Davod Khojasteh Salkuyeh Departent of Matheatics, University

More information

ALGEBRA REVIEW. MULTINOMIAL An algebraic expression consisting of more than one term.

ALGEBRA REVIEW. MULTINOMIAL An algebraic expression consisting of more than one term. Page 1 of 6 ALGEBRAIC EXPRESSION A cobination of ordinary nubers, letter sybols, variables, grouping sybols and operation sybols. Nubers reain fixed in value and are referred to as constants. Letter sybols

More information

Polygonal Designs: Existence and Construction

Polygonal Designs: Existence and Construction Polygonal Designs: Existence and Construction John Hegean Departent of Matheatics, Stanford University, Stanford, CA 9405 Jeff Langford Departent of Matheatics, Drake University, Des Moines, IA 5011 G

More information

Fourier Series Summary (From Salivahanan et al, 2002)

Fourier Series Summary (From Salivahanan et al, 2002) Fourier Series Suary (Fro Salivahanan et al, ) A periodic continuous signal f(t), - < t

More information

Comparison of Stability of Selected Numerical Methods for Solving Stiff Semi- Linear Differential Equations

Comparison of Stability of Selected Numerical Methods for Solving Stiff Semi- Linear Differential Equations International Journal of Applied Science and Technology Vol. 7, No. 3, Septeber 217 Coparison of Stability of Selected Nuerical Methods for Solving Stiff Sei- Linear Differential Equations Kwaku Darkwah

More information

The Weierstrass Approximation Theorem

The Weierstrass Approximation Theorem 36 The Weierstrass Approxiation Theore Recall that the fundaental idea underlying the construction of the real nubers is approxiation by the sipler rational nubers. Firstly, nubers are often deterined

More information

E0 370 Statistical Learning Theory Lecture 6 (Aug 30, 2011) Margin Analysis

E0 370 Statistical Learning Theory Lecture 6 (Aug 30, 2011) Margin Analysis E0 370 tatistical Learning Theory Lecture 6 (Aug 30, 20) Margin Analysis Lecturer: hivani Agarwal cribe: Narasihan R Introduction In the last few lectures we have seen how to obtain high confidence bounds

More information

Lecture 9 November 23, 2015

Lecture 9 November 23, 2015 CSC244: Discrepancy Theory in Coputer Science Fall 25 Aleksandar Nikolov Lecture 9 Noveber 23, 25 Scribe: Nick Spooner Properties of γ 2 Recall that γ 2 (A) is defined for A R n as follows: γ 2 (A) = in{r(u)

More information

A Low-Complexity Congestion Control and Scheduling Algorithm for Multihop Wireless Networks with Order-Optimal Per-Flow Delay

A Low-Complexity Congestion Control and Scheduling Algorithm for Multihop Wireless Networks with Order-Optimal Per-Flow Delay A Low-Coplexity Congestion Control and Scheduling Algorith for Multihop Wireless Networks with Order-Optial Per-Flow Delay Po-Kai Huang, Xiaojun Lin, and Chih-Chun Wang School of Electrical and Coputer


Exact tensor completion with sum-of-squares

Proceedings of Machine Learning Research, vol. 65:1–54, 2017. 30th Annual Conference on Learning Theory. Aaron Potechin, Institute for Advanced Study, Princeton; David


arxiv: v1 [math.nt] 14 Sep 2014

ROTATION REMAINDERS. P. Jameson Graber, Washington and Lee University. arXiv:1409.411v1 [math.NT] 14 Sep 2014. Abstract. We study properties of an array of numbers, called the triangle, in which each row


Ph 20.3 Numerical Solution of Ordinary Differential Equations

Due: Week 5 (v20170314). This Assignment. So far, your assignments have tried to familiarize you with the hardware and software in the Physics Computing


Dedicated to Richard S. Varga on the occasion of his 80th birthday.

MATRICES, MOMENTS, AND RATIONAL QUADRATURE. G. López Lagomasino, L. Reichel, and L. Wunderlich. Dedicated to Richard S. Varga on the occasion of his 80th birthday. Abstract. Many problems in science and engineering


On the Use of A Priori Information for Sparse Signal Approximations

ITS TECHNICAL REPORT NO. 3/4. Oscar Divorra Escoda, Lorenzo Granai and Pierre Vandergheynst, Signal Processing Institute (ITS), Ecole Polytechnique


CSE525: Randomized Algorithms and Probabilistic Analysis May 16, Lecture 13

Lecturer: Anna Karlin. Scribes: Noah Siegel, Jonathan Shi. Random walks and Markov chains. This lecture discusses Markov chains, which capture
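A toy illustration of the objects this lecture studies (the transition matrix below is my own example, not from the notes): repeatedly applying the distributional update d ← dP drives an irreducible aperiodic two-state chain to its stationary distribution π, the solution of π = πP.

```python
def step(dist, P):
    """One Markov-chain update: d_{t+1}[j] = sum_i d_t[i] * P[i][j]."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

# Row-stochastic transition matrix of a lazy two-state chain.
P = [[0.9, 0.1],
     [0.5, 0.5]]

dist = [1.0, 0.0]            # start deterministically in state 0
for _ in range(200):
    dist = step(dist, P)
# Stationary distribution solves pi = pi P: here pi = (5/6, 1/6).
```

The second eigenvalue of P is 0.4, so the distance to π shrinks by a factor 0.4 per step and 200 steps reach machine precision.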


Algebraic group actions and quotients

G G G G G. Spec k G. G Spec k G G. G G m G. G Spec k. Spec k 12 VICTORIA HOSKINS 3. Algebraic group actions and quotients In this section we consider group actions on algebraic varieties and also describe what type of quotients we would like to have for such group


THE POLYNOMIAL REPRESENTATION OF THE TYPE A_(n−1) RATIONAL CHEREDNIK ALGEBRA IN CHARACTERISTIC p | n

SHEELA DEVADAS AND YI SUN. Abstract. We study the polynomial representation of the rational Cherednik algebra


Bulletin of the. Iranian Mathematical Society

Bulletin of the. Iranian Mathematical Society ISSN: 1017-060X (Print) ISSN: 1735-8515 (Online) Bulletin of the Iranian Matheatical Society Vol. 41 (2015), No. 4, pp. 981 1001. Title: Convergence analysis of the global FOM and GMRES ethods for solving


The univariate situation. It is well-known for a long time that denominators of Padé approximants can be considered as orthogonal polynomials with respect

PROPERTIES OF MULTIVARIATE HOMOGENEOUS ORTHOGONAL POLYNOMIALS. Brahim Benouahmane, Annie Cuyt. Keywords. Abstract. It is well-known that the denominators of Padé approximants can be considered as orthogonal


The Simplex Method is Strongly Polynomial for the Markov Decision Problem with a Fixed Discount Rate

Yinyu Ye, April 20, 2010. Abstract. In this note we prove that the classic simplex method with the most-negative-reduced-cost


Using a De-Convolution Window for Operating Modal Analysis

Brian Schwarz, Vibrant Technology, Inc., Scotts Valley, CA. Mark Richardson, Vibrant Technology, Inc., Scotts Valley, CA. Abstract. Operating Modal Analysis


Algebra / Trig Final Exam Study Guide (Fall Semester)

Algebra / Trig Final Exam Study Guide (Fall Semester). Moncada/Dunphy. Information About the Final Exam: The final exam is cumulative, covering Appendix A (A.1–A.5) and Chapter 1. All problems will be multiple choice


Chaotic Coupled Map Lattices

Author: Dustin Keys. Advisors: Dr. Robert Indik, Dr. Kevin Lin. 1 Introduction. When a system of chaotic maps is coupled in a way that allows them to share information about each


The Hilbert Schmidt version of the commutator theorem for zero trace matrices

The Hilbert Schmidt version of the commutator theorem for zero trace matrices The Hilbert Schidt version of the coutator theore for zero trace atrices Oer Angel Gideon Schechtan March 205 Abstract Let A be a coplex atrix with zero trace. Then there are atrices B and C such that


Lecture 21. Interior Point Methods Setup and Algorithm

Lecture 21. Interior Point Methods Setup and Algorithm Lecture 21 Interior Point Methods In 1984, Kararkar introduced a new weakly polynoial tie algorith for solving LPs [Kar84a], [Kar84b]. His algorith was theoretically faster than the ellipsoid ethod and


Some Perspective. Forces and Newton's Laws

Some Perspective. Forces and Newton s Laws Soe Perspective The language of Kineatics provides us with an efficient ethod for describing the otion of aterial objects, and we ll continue to ake refineents to it as we introduce additional types of


A remark on a success rate model for DPA and CPA

A remark on a success rate model for DPA and CPA A reark on a success rate odel for DPA and CPA A. Wieers, BSI Version 0.5 andreas.wieers@bsi.bund.de Septeber 5, 2018 Abstract The success rate is the ost coon evaluation etric for easuring the perforance


arxiv: v1 [cs.ds] 29 Jan 2012

arxiv: v1 [cs.ds] 29 Jan 2012 A parallel approxiation algorith for ixed packing covering seidefinite progras arxiv:1201.6090v1 [cs.ds] 29 Jan 2012 Rahul Jain National U. Singapore January 28, 2012 Abstract Penghui Yao National U. Singapore


ASSUME a source over an alphabet size m, from which a sequence of n independent samples are drawn. The classical

IEEE TRANSACTIONS ON INFORMATION THEORY. Large Alphabet Source Coding using Independent Component Analysis. Amichai Painsky, Member, IEEE, Saharon Rosset and Meir Feder, Fellow, IEEE. arXiv:67.7v [cs.IT] Jul


Bipartite subgraphs and the smallest eigenvalue

Noga Alon, Benny Sudakov. Abstract. Two results dealing with the relation between the smallest eigenvalue of a graph and its bipartite subgraphs are obtained.


Solutions of some selected problems of Homework 4

Sangchul Lee, May 7, 2018. Problem 1 (Let there be light). A professor has two light bulbs in his garage. When both are burned out, they are replaced, and the next


A BLOCK MONOTONE DOMAIN DECOMPOSITION ALGORITHM FOR A NONLINEAR SINGULARLY PERTURBED PARABOLIC PROBLEM

INTERNATIONAL JOURNAL OF NUMERICAL ANALYSIS AND MODELING, Volume 3, Number 2, Pages 211–231. © 2006 Institute for Scientific Computing and Information.


Numerical issues in the implementation of high order polynomial multidomain penalty spectral Galerkin methods for hyperbolic conservation laws

Sigal Gottlieb and Jae-Hun Jung, Department of Mathematics, University


Four-vector, Dirac spinor representation and Lorentz Transformations

Available online at www.pelagiaresearchlibrary.com. Advances in Applied Science Research, 2012, 3(2):749–756. S. B. Khasare, J. N. Ramteke


Solving initial value problems by residual power series method

Theoretical Mathematics & Applications, vol. 3, no. 1, 2013, 199–210. ISSN: 1792-9687 (print), 1792-9709 (online). Scienpress Ltd, 2013. Mohammed H. Al-Smadi


Optical Properties of Plasmas of High-Z Elements

Forschungszentrum Karlsruhe, Technik und Umwelt, Wissenschaftliche Berichte FZK. Optical Properties of Plasmas of High-Z Elements. V. Tolmach, G. Miloshevsky, H. Würz. Project Kernfusion. Heat and Mass Transfer


Bernoulli Wavelet Based Numerical Method for Solving Fredholm Integral Equations of the Second Kind

ISSN 1746-7659, England, UK. Journal of Information and Computing Science, Vol., No., 6, pp. –9. Bernoulli Wavelet Based Numerical Method for Solving Fredholm Integral Equations of the Second Kind. S. C. Shiralashetti*,


A Bernstein-Markov Theorem for Normed Spaces

Lawrence A. Harris, Department of Mathematics, University of Kentucky, Lexington, Kentucky 40506-0027. Abstract. Let X and Y be real normed linear spaces and let φ :


Data-Driven Imaging in Anisotropic Media

18th World Conference on Non-destructive Testing, 16–20 April 2012, Durban, South Africa. Arno Volker and Alan Hunter, TNO, Stieltjesweg 1, Delft, The Netherlands


ESTIMATE OF THE TRUNCATION ERROR OF FINITE VOLUME DISCRETISATION OF THE NAVIER-STOKES EQUATIONS ON COLOCATED GRIDS.

Alexandros Syrakos, Apostolos Goulas, Department of Mechanical Engineering, Aristotle University


LEADING COEFFICIENTS AND THE MULTIPLICITY OF KNOWN ROOTS

GREGORY J. CLARK AND JOSHUA N. COOPER. Abstract. We show that a monic univariate polynomial over a field of characteristic zero, with k distinct non-zero


Extension of CSRSM for the Parametric Study of the Face Stability of Pressurized Tunnels

Guilhem Mollon, Daniel Dias, and Abdul-Hamid Soubra, M.ASCE. LGCIE, INSA Lyon, Université de Lyon, Domaine scientifique


The Fundamental Basis Theorem of Geometry from an algebraic point of view

Journal of Physics: Conference Series. PAPER. OPEN ACCESS. To cite this article: U Bekbaev 2017 J. Phys.: Conf. Ser. 819 012013. View the article


A Generalized Permanent Estimator and its Application in Computing Multi- Homogeneous Bézout Number

Research Journal of Applied Sciences, Engineering and Technology 4(23): 5206–5211, 2012. ISSN: 2040-7467. Maxwell Scientific Organization, 2012. Submitted: April 25, 2012. Accepted: May 3, 2012. Published: December


A Better Algorithm For an Ancient Scheduling Problem. David R. Karger Steven J. Phillips Eric Torng. Department of Computer Science

David R. Karger, Steven J. Phillips, Eric Torng. Department of Computer Science, Stanford University, Stanford, CA. Abstract. One of the oldest and simplest


In this chapter, we consider several graph-theoretic and probabilistic models

THREE: GRAPH-THEORETIC AND STATISTICAL MODELS. 3.1 INTRODUCTION. In this chapter, we consider several graph-theoretic and probabilistic models for a social network, which we do under different assumptions


arxiv: v1 [math.na] 10 Oct 2016

GREEDY GAUSS-NEWTON ALGORITHM FOR FINDING SPARSE SOLUTIONS TO NONLINEAR UNDERDETERMINED SYSTEMS OF EQUATIONS. MÅRTEN GULLIKSSON AND ANNA OLEYNIK. arXiv:6.395v [math.NA] 10 Oct 2016. Abstract. We consider the problem


Interactive Markov Models of Evolutionary Algorithms

Cleveland State University, EngagedScholarship@CSU. Electrical Engineering & Computer Science Faculty Publications, Electrical Engineering & Computer Science Department, 2015. Interactive Markov Models of Evolutionary


1 Bounding the Margin

1 Bounding the Margin COS 511: Theoretical Machine Learning Lecturer: Rob Schapire Lecture #12 Scribe: Jian Min Si March 14, 2013 1 Bounding the Margin We are continuing the proof of a bound on the generalization error of AdaBoost


Reed-Muller Codes. m r inductive definition. Later, we shall explain how to construct Reed-Muller codes using the Kronecker product.

Coding Theory, Massoud Malek. Reed-Muller Codes. An important class of linear block codes rich in algebraic and geometric structure is the class of Reed-Muller codes, which includes the Extended Hamming code.
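The Kronecker-product construction mentioned in the title can be sketched as follows, under one standard convention (index-selection details may differ from Malek's notes): the rows of the m-fold Kronecker power of F = [[1,1],[0,1]] whose index has binary weight at most r generate RM(r, m). For r = 1, m = 3 this gives the [8, 4, 4] extended Hamming code referred to in the excerpt.

```python
from itertools import product

def kron(A, B):
    """Kronecker product of 0/1 matrices stored as nested lists."""
    return [[a * b for a in ra for b in rb] for ra in A for rb in B]

F = [[1, 1], [0, 1]]
G = F
for _ in range(2):            # G = F (x) F (x) F, an 8x8 matrix
    G = kron(G, F)

# RM(1, 3): keep rows whose index has at most one 1-bit (indices 0, 1, 2, 4).
rows = [G[i] for i in range(8) if bin(i).count("1") <= 1]

# Enumerate all GF(2) linear combinations of the generator rows.
codewords = {
    tuple(sum(c * r[j] for c, r in zip(coeffs, rows)) % 2 for j in range(8))
    for coeffs in product([0, 1], repeat=len(rows))
}
d = min(sum(w) for w in codewords if any(w))   # minimum distance
```

The 16 codewords of length 8 with minimum distance 4 are exactly the extended Hamming code, matching the claim in the notes.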


Design of Spatially Coupled LDPC Codes over GF(q) for Windowed Decoding

IEEE TRANSACTIONS ON INFORMATION THEORY (SUBMITTED PAPER). Lai Wei, Student Member, IEEE, David G. M. Mitchell, Member, IEEE, Thomas


An Approximate Model for the Theoretical Prediction of the Velocity Increase in the Intermediate Ballistics Period

Central European Journal of Energetic Materials, 2015, 12(1), 77–88. ISSN 2353-1843.


1. Introduction. This paper is concerned with the study of convergence of Krylov subspace methods for solving linear systems of equations,

ANALYSIS OF SOME KRYLOV SUBSPACE METHODS FOR NORMAL MATRICES VIA APPROXIMATION THEORY AND CONVEX OPTIMIZATION. M. BELLALIJ, Y. SAAD, AND H. SADOK. Dedicated to Gerard Meurant on the occasion of his 60th birthday


Multi-Dimensional Hegselmann-Krause Dynamics

A. Nedić, Industrial and Enterprise Systems Engineering Dept., University of Illinois, Urbana, IL 61801, angelia@illinois.edu. B. Touri, Coordinated Science Laboratory


e-companion ONLY AVAILABLE IN ELECTRONIC FORM

OPERATIONS RESEARCH, doi 10.1287/opre.1070.0427ec, pp. ec1–ec5. e-companion ONLY AVAILABLE IN ELECTRONIC FORM. © 2007 INFORMS. Electronic Companion: A Learning Approach for Interactive Marketing to a Customer


Symbolic Analysis as Universal Tool for Deriving Properties of Non-linear Algorithms Case study of EM Algorithm

Acta Polytechnica Hungarica, Vol., No., 2014. Vladimir Mladenović, Miroslav Lutovac, Dana Porrat


(t, m, s)-nets and Maximized Minimum Distance, Part II

Leonhard Grünschloß and Alexander Keller. Abstract. The quality parameter t of (t, m, s)-nets controls extensive stratification properties of the generated


A Note on Scheduling Tall/Small Multiprocessor Tasks with Unit Processing Time to Minimize Maximum Tardiness

Philippe Baptiste and Baruch Schieber, IBM T.J. Watson Research Center, P.O. Box 218, Yorktown Heights,


Linear Transformations

Hopfield Network Questions. A recurrent layer with initial condition a(0) = p evolves as a(t+1) = satlins(W a(t) + b). The network output is repeatedly
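Assuming satlins is the symmetric saturating linear transfer function clamp(x, −1, 1), as in MATLAB's Neural Network Toolbox, the recurrence can be sketched like this; the weights, bias, and initial condition are toy values of mine, chosen so the iteration contracts to a fixed point.

```python
def satlins(x):
    """Symmetric saturating linear transfer function: clamp to [-1, 1]."""
    return max(-1.0, min(1.0, x))

def recurrent_step(a, W, b):
    """One recurrent-layer update: a(t+1) = satlins(W a(t) + b)."""
    return [satlins(sum(wij * aj for wij, aj in zip(row, a)) + bi)
            for row, bi in zip(W, b)]

W = [[0.5, 0.0],
     [0.0, 0.5]]             # contractive weights, so iterates converge
b = [0.2, -0.2]

a = [1.0, 1.0]               # initial condition a(0) = p
for _ in range(60):
    a = recurrent_step(a, W, b)
# The fixed point solves a* = 0.5 a* + b, i.e. a* = (0.4, -0.4),
# which lies inside [-1, 1], so the saturation is inactive at equilibrium.
```

Because the weight matrix has spectral radius 0.5, the error halves each iteration and 60 updates land on the fixed point to machine precision.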


A model reduction approach to numerical inversion for a parabolic partial differential equation

Inverse Problems 30 (2014) 125011 (33pp). doi:10.1088/0266-5611/30/12/125011. Liliana Borcea, Vladimir Druskin


Supplementary Material for Fast and Provable Algorithms for Spectrally Sparse Signal Reconstruction via Low-Rank Hankel Matrix Completion

Jian-Feng Cai, Tianming Wang, Ke Wei. March 1, 2017. Abstract. We establish


Explicit solution of the polynomial least-squares approximation problem on Chebyshev extrema nodes

Alfredo Eisinberg, Giuseppe Fedele. Dipartimento di Elettronica Informatica e Sistemistica, Università degli Studi


The Wilson Model of Cortical Neurons Richard B. Wells

I. Refinements on the Hodgkin-Huxley Model. The years since Hodgkin's and Huxley's pioneering work have produced a number of derivative Hodgkin-Huxley-like
