Analysis of Wave Propagation in 1D Inhomogeneous Media



Numerical Functional Analysis and Optimization, Copyright Taylor & Francis Group, LLC

ANALYSIS OF WAVE PROPAGATION IN 1D INHOMOGENEOUS MEDIA

Patrick Guidotti and Knut Solna, Department of Mathematics, University of California at Irvine, Irvine, California, USA

James V. Lambers, Department of Petroleum Engineering, Stanford University, Stanford, California, USA

In this paper, we consider the one-dimensional inhomogeneous wave equation with particular focus on its spectral asymptotic properties and its numerical resolution. In the first part of the paper, we analyze the asymptotic nodal point distribution of high-frequency eigenfunctions, which, in turn, gives further information about the asymptotic behavior of eigenvalues and eigenfunctions. We then turn to the behavior of eigenfunctions in the high- and low-frequency limits. In the latter case, we derive a homogenization limit, whereas in the former we show that a sort of self-homogenization occurs at high frequencies. We also remark on the structure of the solution operator and its relation to desired properties of any numerical approximation. We subsequently shift our focus to the latter and present a Galerkin scheme based on a spectral integral representation of the propagator in combination with Gaussian quadrature in the spectral variable with a frequency-dependent measure. The proposed scheme yields accurate resolution of both high- and low-frequency components of the solution and as a result proves to be more accurate than available schemes at large time steps for both smooth and nonsmooth speeds of propagation.

Address correspondence to James V. Lambers, Department of Petroleum Engineering, Stanford University, Stanford, CA, USA; lambers@stanford.edu

1. INTRODUCTION

In this paper, we consider the one-dimensional inhomogeneous wave equation

    u_tt − c^2(x) u_xx = 0   in (0, 1),
    u + α ∂_x u = 0          on {0, 1},

with α = 0 and c ∈ L^∞(0, 1) strictly positive. Our results remain valid for any α but, for the sake of brevity, we shall present them for the case α = 0 only.

After some remarks on the structure of the solution operator and on the implications for its numerical approximability, we turn to the main focus of the paper: spectral asymptotics and numerical resolution of the problem above. As for the asymptotic spectral properties of the generator, the general result of [, Theorem ..] would readily imply that

    λ_k ≈ ( kπ / ∫_0^1 dx/c(x) )^2,   k large,

for the eigenvalues of the generator −c^2(x) ∂_xx. In this particular case, however, led by the physical meaning of the coefficient c, we are able to obtain information about the asymptotic behavior of the nodal points of high-frequency eigenfunctions and, from that, infer their asymptotic shape. The analysis is based on a shooting method for the computation of the eigenvalue/eigenfunction pairs and on the self-similar nature of the problem, in combination with the use of a canonical transformation. In particular, we observe that a sort of self-homogenization occurs at high frequencies (cf. Section 2.3). It turns out that the same ideas can be profitably employed to obtain homogenization results for rapidly varying coefficients. These are similar to the results derived in, for instance, [] for the self-adjoint case using a variational approach. Here, however, our focus is an asymptotic approximation for the spectrum, and we present the approach in Section 2.4. Then, in Section 3, we integrate some of the ideas developed into a numerical approach to high-order/large-time-step resolution of the problem. This approach employs Krylov subspace spectral methods, first introduced in []. These methods are Galerkin methods in which each component of the solution in the chosen basis of trial functions is computed using an approximation of the propagator belonging to a low-dimensional Krylov subspace of the operator. Each approximation is based on the use of Gaussian quadrature to evaluate Riemann–Stieltjes integrals over the spectral domain, as described in []. Because the Krylov subspace approximation of the propagator is constructed using Gaussian rules that are tailored

to each component, all components can be resolved more accurately than with traditional spectral methods.

Based on the encouraging results for the one-dimensional case, we intend to pursue the possibility of adapting the techniques used in this paper to perform a similar analysis in the higher-dimensional case.

2. ANALYTIC STRUCTURE OF THE PROPAGATOR

In this section, we derive a spectral representation formula for the solution of the inhomogeneous wave equation in a bounded one-dimensional interval as given in Section 1. In order to do so, we need to analyze the spectral properties of the non-self-adjoint boundary value problem given by the differential expression

    −c^2(x) ∂_xx

together with the boundary operators B_j u = γ_j u, j = 0, 1, where γ_j denotes the trace operator at x = j. This is done in Section 2.1. The analytic structure of the solution makes the relation between the conservation and reversibility properties of the equation apparent (Section 2.2). In particular, they can be concisely formulated in terms of a functional relation satisfied by the propagator (evolution operator).

2.1. Properties of the Generator

We start by collecting some information about the spectral properties of the generator, that is, of the boundary value problem just introduced. We therefore study

    −c^2(x) u_xx = λ u,
    u(j) = 0,   j = 0, 1.

Lemma. All eigenvalues of this problem are strictly positive, real, and simple. The eigenfunction corresponding to the first (smallest) eigenvalue can be chosen to be positive.

Proof. Assume that λ is an eigenvalue and u an associated eigenfunction; then

    λ ∫_0^1 |u(x)|^2 / c^2(x) dx = ∫_0^1 |∂_x u(x)|^2 dx,

which implies the positivity of the eigenvalue. Moreover, an eigenvalue occurs precisely when the boundary conditions become linearly dependent on the solution space of the second-order equation, and therefore the kernel always has dimension at most one, which gives

simplicity of the eigenvalues. Finally, because the operator has empty kernel and compact resolvent, the spectrum is a pure point spectrum, which concludes the proof.

Borrowing from the self-adjoint terminology, we call the first eigenvalue the principal eigenvalue. Next, we show that it is a strictly monotone function of the size of the domain.

Lemma. For x_0 ∈ (0, 1], let λ_1(x_0) denote the principal eigenvalue of the Dirichlet problem for −c^2(x) ∂_xx on [0, x_0]. Then

    λ_1(x_0) > λ_1(x_1)   whenever 0 < x_0 < x_1 ≤ 1.

Proof. Normalizing eigenfunctions by the requirement ∂_x u(0) = 1, we can look for them by considering the initial value problem

    −c^2(x) u_xx = λ u,   x ∈ [0, 1],
    u(0) = 0,   ∂_x u(0) = 1.

For λ = 0, no nontrivial solution satisfying the second boundary condition can be found but, by increasing the magnitude of λ, the value of the solution at x = 1 can be reduced until it becomes zero for the first time. This gives the principal eigenvalue and eigenfunction for [0, 1]. It is therefore also obvious that λ needs to be further increased to obtain a zero at x_0 < 1, which, in its turn, determines the principal eigenvalue and eigenfunction for the interval [0, x_0].

It turns out that we can determine all other eigenvalues and order them according to their size or, equivalently, according to the number of their zeros.

Lemma. For every n ≥ 1 there is exactly one simple eigenvalue λ_n > 0 of the Dirichlet problem for −c^2(x) ∂_xx on [0, 1] such that the associated eigenfunction φ_n has exactly n + 1 zeros (counting the boundary points).

Proof. By using exactly the same arguments as in the proof of the preceding lemma, one can obtain all eigenfunctions as solutions of the initial value problem above by gradually increasing λ in order to produce, one by one, new zeros in the interval [0, 1]. They can therefore be numbered by using their zeros.
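To make the shooting argument concrete, the following is a minimal numerical sketch (my own illustration, not code from the paper): it integrates the normalized initial value problem u″ = −(λ/c²(x)) u, u(0) = 0, u′(0) = 1, and locates the value of λ at which u(1; λ) crosses zero for the n-th time. The example speed c(x), the bracketing grid, and the helper names (shoot, dirichlet_eigenvalue) are all assumptions of this sketch.

```python
# Shooting sketch for the Dirichlet eigenvalues of -c(x)^2 u'' = lambda u on (0, 1).
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def c(x):
    return 1.0 + 0.5 * np.sin(2.0 * np.pi * x)   # example speed, bounded away from 0

def shoot(lam):
    """Value u(1) of the solution of u'' = -(lam/c^2) u, u(0)=0, u'(0)=1."""
    rhs = lambda x, y: [y[1], -lam / c(x) ** 2 * y[0]]
    sol = solve_ivp(rhs, (0.0, 1.0), [0.0, 1.0], rtol=1e-8, atol=1e-10)
    return sol.y[0, -1]

def dirichlet_eigenvalue(n, lam_max=200.0, samples=800):
    """Approximate the n-th eigenvalue (n = 1, 2, ...) by bracketing the n-th
    sign change of lam -> u(1; lam) on a coarse grid and refining with brentq."""
    lams = np.linspace(1e-6, lam_max, samples)
    vals = np.array([shoot(l) for l in lams])
    crossings = np.where(np.sign(vals[:-1]) != np.sign(vals[1:]))[0]
    a, b = lams[crossings[n - 1]], lams[crossings[n - 1] + 1]
    return brentq(shoot, a, b)

if __name__ == "__main__":
    for n in range(1, 4):
        print(n, dirichlet_eigenvalue(n))
```

Increasing λ adds zeros to u(·; λ) one at a time, so the n-th sign change of u(1; λ) corresponds to the eigenvalue whose eigenfunction has n + 1 zeros, exactly as in the lemmas above.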

Next we introduce a functional setting that allows us to obtain a spectral representation of the operator. Let L^2(0, 1) be the Lebesgue space of square-integrable functions. Denote by A the L^2(0, 1)-realization of −c^2(x) ∂_xx with domain of definition given by dom(A) = H^2(0, 1) ∩ H^1_0(0, 1), the space of H^2 functions that vanish on the boundary. Because A has compact resolvent, it allows for a spectral calculus.

Lemma. The operator A can be represented by

    A = Σ_{n≥1} λ_n ⟨φ*_n, ·⟩ φ_n,

where (φ_n)_{n≥1} and (φ*_n)_{n≥1} are the eigenfunctions of A and A*, respectively, to the eigenvalue λ_n. Here A* is given by the L^2-realization of −∂_xx ( c^2(x) · ) with Dirichlet boundary conditions.

Proof. Because all eigenvalues are simple and λ = 0 is not one of them, the operator A does not contain any nontrivial Jordan blocks, nor does it contain a quasi-nilpotent operator. It follows that the operator A allows for the claimed spectral representation; see [] for more details. One needs only to observe that the spectral projection E_n is given by ⟨φ*_n, ·⟩ φ_n, where φ*_n can be defined through φ*_n ⊥ span{φ_k : k ≠ n} and ⟨φ*_n, φ_n⟩ = 1, and can easily be verified to be an eigenfunction of A* to the eigenvalue λ_n.

Remark. In general, the functions (φ_n)_{n≥1} are not an orthogonal system. They are, however, asymptotically orthogonal for smooth c and almost orthogonal for small perturbations of a constant c, as we shall see in the next sections. It should be observed that the operator A becomes self-adjoint with respect to the weighted scalar product (u | v) = ∫_0^1 u(x) v(x) / c^2(x) dx. This provides a different point of view but produces the same spectral resolution of A.

2.2. Structure of the Solution

The spectral representation of the generator A allows us to obtain a representation of the solution operator (propagator) in terms of the

sine and cosine families generated by A by a simple functional calculus. Introduce

    R_1(t) = A^{−1/2} sin(t√A) := Σ_{n≥1} [ sin(t√λ_n) / √λ_n ] ⟨φ*_n, ·⟩ φ_n,
    R_0(t) = cos(t√A)          := Σ_{n≥1} cos(t√λ_n) ⟨φ*_n, ·⟩ φ_n,

where taking the square root of the operator poses no problem even though the operator is not self-adjoint. Then the propagator of the wave equation can be written as

    P(t) = [  R_0(t)      R_1(t) ]
           [ −A R_1(t)    R_0(t) ].

Remark. The fact that the wave equation is reversible can be seen through the identities

    R_0(t)^2 + A R_1(t)^2 = id_{L^2(0,1)},   R_0(t) R_1(t) = R_1(t) R_0(t),   t ≥ 0,

which imply

    P(t) P(−t) = P(−t) P(t) = [ id_{L^2(0,1)}        0        ]
                              [        0        id_{L^2(0,1)} ].

We observe that our ultimate goal is an efficient numerical scheme for the solution of the wave equation. We are in particular interested in nondissipative and nondispersive schemes. The functional relations above make apparent the constraints that such a scheme should satisfy.

Next we introduce an appropriate energy norm ‖·‖_A and show that it is conserved along solutions of the wave equation. This is done by means of the basis development in terms of the eigenfunctions (φ_n)_{n≥1}. Let u ∈ L^2(0, 1); then we can write

    u = Σ_{n≥1} ⟨φ*_n, u⟩ φ_n =: Σ_{n≥1} u_n φ_n.

Then, taking (u, v) ∈ H^1_0(0, 1) × L^2(0, 1), we define

    (u, v)_A = Σ_{n≥1} ( λ_n u_n^2 + v_n^2 )

whenever the right-hand side is finite.
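The representation u(t) = R_0(t) f + R_1(t) g, the reversibility relation P(t)P(−t) = id, and the conservation of the quantity just defined can be illustrated on a small matrix model. The sketch below is my own assumption-laden illustration, not the scheme proposed later in the paper: it discretizes A = −c²(x)∂_xx by standard finite differences, symmetrizes it with the weight 1/c² mentioned in the remark of Section 2.1, and applies R_0(t) = cos(t√A) and R_1(t) = A^{−1/2} sin(t√A) through an eigendecomposition. All names (cx, propagate, energy) are mine.

```python
import numpy as np

N = 200
x = np.linspace(0.0, 1.0, N + 2)[1:-1]
h = x[1] - x[0]
cx = 1.0 + 0.5 * np.sin(2.0 * np.pi * x)                  # example speed
D2 = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
      + np.diag(np.ones(N - 1), -1)) / h ** 2
A = -np.diag(cx ** 2) @ D2                                # A u = -c^2 u_xx, Dirichlet

# A is self-adjoint w.r.t. (u|v) = int u v / c^2 dx, so S = W A W^{-1} with
# W = diag(1/c) is symmetric (up to rounding) and has the same eigenvalues.
W, Winv = np.diag(1.0 / cx), np.diag(cx)
T = W @ A @ Winv
S = 0.5 * (T + T.T)
lam, Q = np.linalg.eigh(S)                                # lam > 0

def coeffs(u, v):
    """Coefficients of (u, v) in the eigenbasis of A (via the symmetrized S)."""
    return Q.T @ (W @ u), Q.T @ (W @ v)

def propagate(f, g, t):
    """Return (u(t), u_t(t)) for u_tt + A u = 0, u(0) = f, u_t(0) = g."""
    fh, gh = coeffs(f, g)
    s = np.sqrt(lam)
    uh = np.cos(t * s) * fh + np.sin(t * s) / s * gh      # R_0(t) f + R_1(t) g
    vh = -s * np.sin(t * s) * fh + np.cos(t * s) * gh     # -A R_1(t) f + R_0(t) g
    return Winv @ (Q @ uh), Winv @ (Q @ vh)

def energy(u, v):
    """Discrete analogue of (u, v)_A = sum_n lambda_n u_n^2 + v_n^2."""
    uh, vh = coeffs(u, v)
    return np.sum(lam * uh ** 2 + vh ** 2)

f, g = np.sin(np.pi * x), np.zeros(N)
u1, v1 = propagate(f, g, 0.7)
u0, v0 = propagate(u1, v1, -0.7)                          # reversibility: P(t)P(-t) = id
print(np.max(np.abs(u0 - f)), energy(f, g), energy(u1, v1))
```

Running the snippet shows the backward-then-forward propagation recovering the initial data to rounding accuracy and the discrete energy staying constant, which is exactly the pair of structural properties a nondissipative, nondispersive scheme should respect.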

Remark. It is not a priori clear that the expression above does define a norm that is equivalent to the standard norm of H^1_0(0, 1) × L^2(0, 1). This follows from the fact that, in the described setting, φ*_n = φ_n / c^2(x), and the fact that the speed of propagation is bounded above and below. The relation between the eigenfunctions is a manifestation of the remark on the weighted scalar product in Section 2.1. This also means that (φ_n)_{n≥1} is a frame and (φ*_n)_{n≥1} its dual frame. For a definition and characterization of frames, we refer to [].

Lemma. For any solution of the wave equation with α = 0, one has

    (u(t), ∂_t u(t))_A = (u(0), ∂_t u(0))_A,   t ≥ 0.

Proof. Denote the initial value (u(0), ∂_t u(0)) by (u^0, u^1). Then, by developing in the basis of eigenfunctions, we can write the solution as

    (u(t), ∂_t u(t)) = ( Σ_{n≥1} [ cos(t√λ_n) u^0_n + sin(t√λ_n)/√λ_n u^1_n ] φ_n,
                         Σ_{n≥1} [ −√λ_n sin(t√λ_n) u^0_n + cos(t√λ_n) u^1_n ] φ_n ).

A simple computation then shows that

    Σ_{n≥1} { λ_n [ cos(t√λ_n) u^0_n + sin(t√λ_n)/√λ_n u^1_n ]^2
            + [ −√λ_n sin(t√λ_n) u^0_n + cos(t√λ_n) u^1_n ]^2 }
    = Σ_{n≥1} ( λ_n (u^0_n)^2 + (u^1_n)^2 ).

2.3. High-Frequency Spectral Asymptotics

In this section, we show that a sort of self-homogenization occurs at high frequency, which makes the asymptotic behavior of the eigenvalues and eigenfunctions of A very simple. We begin with the following lemma concerning small perturbations of the constant coefficient case.

Lemma. Assume that c ∈ C^1([0, 1]) is almost constant, that is, that its derivative is uniformly small. Then the spectrum of A is a small perturbation of that of the operator A_0 given by

    dom(A_0) = H^2(0, 1) ∩ H^1_0(0, 1),   A_0 u = −c̄^2 u_xx,   u ∈ dom(A_0),

for c̄ = ( ∫_0^1 dx/c(x) )^{−1}.

Proof. Introducing the change of variables given by

    y = Φ(x) = c̄ ∫_0^x dξ/c(ξ),

which leaves the interval invariant, the operator A in the new variables takes on the form

    c^2(x) ∂_xx = c̄^2 ∂_yy − c̄ c′(Φ^{−1}(y)) ∂_y.

The result then follows from the continuous dependence of the operator on its coefficient functions.

Remark. It should be observed that the coefficient c can be thought of as the speed of propagation through the medium in the interval [0, 1]. Then the integral ∫_0^1 dx/c(x) can be interpreted as the time it takes to travel from one end of the medium to the other. Thus the averaged coefficient c̄ actually measures the effective size of the interval.

It turns out that this kind of averaging is always taking place regardless of the size and shape of the coefficient c, at least at high frequencies. The next proposition makes this precise and also gives an approximation for the high-frequency eigenfunctions.

Proposition. For large n, the asymptotic behavior of the eigenvalues of A is given by

    λ_n ≈ ( nπ / ∫_0^1 dξ/c(ξ) )^2.

Moreover, the eigenfunctions φ_n have the following asymptotic shape:

    φ_n(x) ≈ sin( π (x − x_{j−1}) / (x_j − x_{j−1}) ),   x ∈ [x_{j−1}, x_j],

where 0 = x_0 < x_1 < ... < x_n = 1 have to be chosen such that

    ∫_{x_{j−1}}^{x_j} dξ/c(ξ) = (1/n) ∫_0^1 dξ/c(ξ),   j = 1, ..., n.

Proof. We know from the lemmas of Section 2.1 that the nth eigenfunction φ_n has n + 1 zeros in [0, 1]. Denote them by 0 = x_0 < x_1 < ... < x_n = 1. If λ_n is the associated eigenvalue, then it is also the principal eigenvalue λ^j_n of the problems

    −c^2(x) u_xx = λ u   in [x_{j−1}, x_j],
    u(x_{j−1}) = u(x_j) = 0,

for j = 1, ..., n. So, in particular, one has λ^j_n = λ_n, j = 1, ..., n. By blowing up the intervals [x_{j−1}, x_j] to the fixed interval [0, 1] by means of

    x = x_{j−1} + y (x_j − x_{j−1}),   y ∈ [0, 1],

we obtain

    −c̃^2(y) / (x_j − x_{j−1})^2  ũ_yy = λ ũ   in [0, 1],
    ũ(0) = ũ(1) = 0,

where now c̃(y) = c(x_{j−1} + y (x_j − x_{j−1})) is a slowly varying coefficient provided n is large. The lemma on almost constant coefficients therefore gives

    λ^j_n ≈ ( π / ∫_{x_{j−1}}^{x_j} dξ/c(ξ) )^2

and, because we know already that λ_n = λ^1_n = ... = λ^n_n and the travel times over the subintervals sum to ∫_0^1 dξ/c(ξ), subsequently that

    ∫_{x_{j−1}}^{x_j} dξ/c(ξ) = (1/n) ∫_0^1 dξ/c(ξ),   j = 1, ..., n.

We conclude that the subintervals are uniquely determined. The same lemma also entails that the eigenfunctions on the subintervals all have approximately the form

    φ^j_n(x) = sin( π (x − x_{j−1}) / (x_j − x_{j−1}) ),   x ∈ [x_{j−1}, x_j].

Remark. It is interesting to observe that, to first order, the asymptotic behavior of the eigenvalues only contains average information concerning the coefficient, whereas the asymptotic behavior of the eigenfunctions reflects local averages taken at the scale determined by the number of their zeros.
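A quick consistency check of the eigenvalue asymptotics is easy to set up numerically. The snippet below is only an illustration; it reuses c() and dirichlet_eigenvalue() from the shooting sketch above, which are assumptions of that sketch rather than the paper's code.

```python
# Compare shooting eigenvalues with lambda_n ~ (n*pi / int_0^1 dxi/c(xi))^2.
import numpy as np
from scipy.integrate import quad

travel_time, _ = quad(lambda s: 1.0 / c(s), 0.0, 1.0)     # int_0^1 dxi / c(xi)

for n in (5, 10, 20):
    lam_asy = (n * np.pi / travel_time) ** 2
    lam_num = dirichlet_eigenvalue(n, lam_max=1.5 * lam_asy)
    print(n, lam_num, lam_asy, abs(lam_num - lam_asy) / lam_asy)
```

The relative error decreases as n grows, in line with the self-homogenization described above.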

11 Guidotti et al... Low Frequency Spectral Asymptotics In this section, we consider the asymptotic behavior of the low-frequency part of the spectrum. We describe it in the regime where the length x of the medium is large. That is, we consider the problem { c (x) xx u = u, x [, x ], (.) u() =, u(x ) = in the limit x. Observe that we allow for large () fluctuations in the local speed c. Ifweset = /x and make the change of variables y = x, this problem becomes ( ) y c yy u = u = u, u () =, u () =. y [, ], (.) This is a homogenization scaling. The self-adjoint case when is periodic is discussed in [], for instance, and the case when is random and varies on a microscale is discussed in []. Wave propagation in the quasistatic limit correpsonding to a scaling of the above type is discussed in, for instance, [, ] where the group velocity in this limit is derived from the homogenized equations. Here we consider the leading part of the spectrum of the non-self-adjoint problem with rapidly varying coefficients. It can be characterized by the following proposition. Proposition.. Let x and ( n (x ), n (x; x ) be the nth pair of eigenvalue and function of the Dirichlet problem (.). For f C assume that y ( ) s y c f (s)ds = c f (s)ds( + ( p )) (.) with Then c as x. = lim c (s/)ds, < c < c(x) < c <, p >. n (x ) (n) c /x (.) n (x; x ) sin(nx/x ) (.) x
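Before turning to the proof, here is a rough numerical illustration of the statement as I read it: on a long interval (0, x_0) with O(1), rapidly varying fluctuations in the speed, the leading Dirichlet eigenvalues of −c(x)²∂_xx approach (nπ c̄_*/x_0)², with c̄_* the harmonic-mean (travel-time) average of the speed. The finite-difference discretization, the particular oscillatory speed, and the use of a sparse shift-invert eigensolver are all assumptions of this sketch, not part of the paper.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigs

x0, N = 100.0, 20000
x = np.linspace(0.0, x0, N + 2)[1:-1]
h = x[1] - x[0]
cx = 1.0 + 0.4 * np.sin(2.0 * np.pi * x)          # O(1) fluctuations on the unit scale
cstar = 1.0 / np.mean(1.0 / cx)                   # harmonic-mean (travel-time) average
D2 = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(N, N), format="csc") / h ** 2
A = -sp.diags(cx ** 2).tocsc() @ D2               # Dirichlet discretization of -c^2 d^2/dx^2
lam = np.sort(eigs(A, k=3, sigma=0.0, return_eigenvectors=False).real)
print(lam)                                        # computed lambda_1, lambda_2, lambda_3
print([(n * np.pi * cstar / x0) ** 2 for n in (1, 2, 3)])
```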

12 Wave Propagation in D Inhomogeneous Media Proof. As in the proof of Proposition., we use a shooting argument to solve the eigenvalue problem. It involves normalizing the eigenfunction by requiring y () = and writing (.) as: ( ) y yy = c, y [, ], (.) () =, y () =. Again, for = no nontrivial solution can be found. By increasing, the value of can be reduced until it becomes zero for the first time. This value gives the first eigenvalue and the corresponding leading eigenfunction. In order to describe these for x large we introduce v = (v, v ) T = (,,y )T and obtain the initial value problem [ ] v y = c v, v() = (, ) T. Then, we construct an approximating sequence v n by letting and [ ] v y = c v, [ ] v n y = c v n, n, (.) with v n () = (, ). The increments v n = v n v (n ) solve the same equation (.) and we find v n y (y) ( + ( c/c) ) v (n ) (s)ds. Observe next that v [ ] (y) = y (c c )v (s)ds < p c y, and v n (y) (c y) n p, n, n!

13 Guidotti et al. where here and below c j are constants independent of. Thus, v n form a Cauchy sequence and sup v (y) p e c. y (,) We next establish that v is close to the principal eigenfunction associated with the constant speed c. This follows because explicitly v (y) = sin( /c y), /c moreover, v () p exp(c ) and y = is the first zero of, which is a positive function, thus > such that : (c ) p c sup sin(y) (y) p c y (,) Finally, upon a normalization and a change of argument, we arrive at (.) for n =. Next, we consider the case with general n that follows by induction. Let n be the nth eigenfunction associated with (.), which can be constructed as above via a shooting procedure where is successively increased. The eigenvalues are again identified with those values for that give a new zero in the interval [, ], because the additional zero only can enter at y = because yy = only for =. Now assume that (.) hold for the first n eigenfunctions. Then, > so that for y n () > /, it follows that c (n) > such that n+ n c (n). By an argument as above, c (n) such that sup n+ (y) sin( n+/c y) y (,) n+ /c p e c (n) and we find c (n) > such that sin ( n+/c y) has exactly n + zeros in [, + p c (n)]. Now, by proceeding as above, (.) follows for n + and therefore for general n. Remark.. The condition (.) is satisfied with p = if for instance c (x) = c ( + (x))

14 Wave Propagation in D Inhomogeneous Media with (x) being a bounded function. More generally for we find y with c ( s c (x) = c ( + (x)) ) ( y ( f (s)ds = c f (s)ds + Y Y (x) = x ( ) y y f (y) Y (s)ds. Thus, if Y (x) = (x p ) with p >, then (.) is satisfied.. KRYLOV SUBSPACE SPECTRAL METHODS ( ) )) s f (s)ds In this section, we apply Krylov subspace spectral methods developed in [] to the problem (.) with = and the initial conditions u(x,) = f (x), u t (x,) = g (x), x (, ). (.).. Symmetrization We first apply two transformations to the differential operator = c (x) xx defined in (.). As in previous discussion, we focus on the operator A that is the L (, )-realization of defined on dom(a) = H (, ) H (, ). First, we apply the change of variables (.) to obtain where, we recall, and A = c xx cc ( (x)) x, (.) (x) = c ( ) c = c() d (.) x Next, we define the transformation V by d, x (, ). (.) c() Vf (x) = (x)f (x), (.)

15 Guidotti et al. where which yields [ c x ] (x) = exp c ( ())d, (.) A s = V AVf = V c [Vf ] xx c(c )[Vf ] x = c [f + f + f ] c(c )[f + f ] [ ( ) ] [( ) ( = c f + c c(c ) f + c + [(( ) c = c f c + + c(c ) ) ] c(c ) f ) c + ( c c ) ] f (.) It is easy to see that these transformations have the property that they symmetrize the operator, and they also respect the boundary conditions; that is, if f dom(a) = H (, ) H (, ), then V (f ) dom(a), and conversely... Krylov Subspace Spectral Methods for IBVP Once we have preconditioned the operator, to obtain a new selfadjoint operator, we can use Krylov subspace methods developed in [] to compute an approximate solution. These methods are Galerkin methods that use an approximation for each coefficient of the solution in the chosen basis that is, in some sense, optimal.... Reduction to Quadratic Forms Using a standard Galerkin approach, we begin with an orthonormal set of N trial functions (x) = sin(x), < N, (.) that satisfy the boundary conditions. We seek an approximate solution ũ(x, t) = N ũ (t) (x), (.) = that lies in the space spanned by the trial functions, where each coefficient ũ, =,, N, is an approximation of the quantity u (t) =, u(, t). (.)

16 Wave Propagation in D Inhomogeneous Media Because the exact solution u(x, t) is given by u(x, t) = R (t)f (x) + R (t)g (x), (.) where R (t) and R (t) are defined in (.), (.), we can obtain ũ by approximating each of the quadratic forms where is a nonzero constant, because u (t) = c + (t) c (t) c + (t) = + f, R (t)[ + f ] (.) c (t) = f, R (t)[ f ] (.) s + (t) = + g, R (t)[ + g ] (.) s (t) = g, R (t)[ g ], (.) + s+ (t) s (t). (.) Similarly, we can obtain the coefficients ṽ of an approximation of u t (x, t) by approximating the quadratic forms c + (t) = + f, AR (t)[ + f ] (.) c (t) = f, AR (t)[ f ] (.) s + (t) = + g, R (t)[ + g ] (.) s (t) = g, R (t)[ g ]. (.) As noted in [], this approximation to u t (x, t) does not introduce any error due to differentiation of our approximation of u(x, t) with respect to t the latter approximation can be differentiated analytically. It follows from the preceding discussion that we can compute an approximate solution ũ(x, t) at a given time T using the following algorithm. Algorithm. (Krylov Subspace Spectral Method for IBVP). Given functions c(x), f (x), and g (x) defined on the interval (, ), a final time T, and an orthonormal set of functions,, N that satisfy the boundary conditions, the following algorithm computes a function ũ(x, t) of the form (.) that approximately solves the problem (.), (.) from t = to t = T. t = Choose a nonzero constant while t < T do Select a time step t f (x) =ũ(x, t) g (x) =ũ t (x, t)

17 Guidotti et al. for = to N do Compute the quantities c + (t), c (t), s+ (t), s (t), c + (t), c (t), s + (t), and s (t) ũ (t) = (c + (t) c (t)) + (s+ (t) s (t)) ṽ (t) = (c + (t) c (t) ) + (s+ (t) s (t) ) end ũ(x, t + t) = N = (x)ũ (t) ũ t (x, t + t) = N = (x)ṽ (t) t = t + t end... Computation of the Quadratic Forms We now discuss the approximation of quantities of the form I [f ]= v N, f (A)v N (.) where A is the L (, )-realization of a self-adjoint differential operator defined on dom(a) = H (, ) H (, ), f is a given analytic function, and v N (x) is a function of the form v N (x) = N u (x). (.) Given this representation of v N, we can approximate this quantity by = I N [f ]=v T N f (A N )v N (.) where v N =[v N (x ) v N (x N )] T, and A N is an N N symmetric matrix that approximates the operator A on the space spanned by,, N. For example, we may choose [A N ] ij = N k (x i ) k, A l l (x j ), (.) k,l= or use a finite-difference approximation that takes the boundary conditions into account. In particular, if we use a three-point stencil, then A N is a tridiagonal matrix. We can compute this quadratic form using techniques described in []. Let A N have eigenvalues a = N = b, (.)

with corresponding eigenvectors q_1, ..., q_N. Then

    I_N[f] = v_N^T f(A_N) v_N = Σ_{j=1}^N f(λ_j) (q_j^T v_N)^2 = ∫_a^b f(λ) dα(λ),

where α(λ) is the piecewise constant measure

    α(λ) = 0,                            λ < a,
    α(λ) = Σ_{j=1}^{i} (q_j^T v_N)^2,    λ_i ≤ λ < λ_{i+1},   i = 1, ..., N − 1,
    α(λ) = Σ_{j=1}^{N} (q_j^T v_N)^2,    b ≤ λ.

We can approximate the value of this Riemann–Stieltjes integral using Gaussian quadrature. Applying the symmetric Lanczos algorithm to A_N with initial vector v_N, we can construct a sequence of polynomials p_0, ..., p_K that are orthogonal with respect to the measure α(λ). These polynomials satisfy a three-term recurrence relation

    β_{j+1} p_{j+1}(λ) = (λ − α_{j+1}) p_j(λ) − β_j p_{j−1}(λ),
    p_{−1}(λ) ≡ 0,   p_0(λ) ≡ 1/‖v_N‖_2,

which can be written in matrix-vector notation as

    λ p(λ) = J_K p(λ) + β_K p_K(λ) e_K,

where p(λ) = ( p_0(λ), ..., p_{K−1}(λ) )^T, e_K is the Kth standard basis vector, and J_K is the K × K symmetric tridiagonal (Jacobi) matrix with diagonal entries α_1, ..., α_K and off-diagonal entries β_1, ..., β_{K−1}. It follows that the eigenvalues of J_K are the zeros of p_K(λ), which are the nodes for Gaussian quadrature. It can be shown (see []) that the K

19 Guidotti et al. corresponding weights are equal to the squares of the first components of the normalized eigenvectors of J K.... Accuracy of the Approximate Solution We now state and prove a result concerning the accuracy of each component of the approximate solution. We first use the following result from []. Lemma.. LetAbeanN N symmetric positive definite matrix. Let u and v be fixed vectors, and define u = u + v. For j a positive integer, let g j () be defined by g j () = et T j e u, (.) where T is the K K Jacobi matrix produced by the K iterations of the symmetric Lanczos algorithm applied to A with starting vector u. Then, for some satisfying <<, g j () g j ( ) j K = u T A j v + k=k [ j K + k=k e T e T [ T k X T X T A k ] re T K T j k e u T u [ ] r T k X T X T Ak e T K T j k e u T ] u = (.) Proof. See []. Corollary.. for j < K. Under the assumptions of the lemma, g j () g j ( ) = u T A j v, (.) We can now describe the local truncation error of each component of the computed solution. Theorem.. Assume that c(x),f(x), and g (x) belong to span,, N, and let u(x, t) be the exact solution of (.), (.) at (x, t), and let ũ(x, t) be the approximate solution computed by Algorithm.. Then, u(, t) ũ(, t) = O(t K ) (.) where K is the number of quadrature nodes used in Algorithm..

20 Wave Propagation in D Inhomogeneous Media Proof. Let g () be the function from Lemma. with A = A NK, where N K = K N, u = and v = f, where f = f (x ) f (x NK )] T. Furthermore, denote the entries of T by () () () () () T = (.) K () K () K () K () K () Finally, let () = u and K () = r, and let c = [c + (t) c (t)] =, R (t)f. (.) Then, by Lemma. and Corollary.,, R (t)f c { t j = ( ) j, A j f g } j() g j ( ) (j)! = = = j= { t j j K ( ) j, A j f c T (j)! Aj N K f + j= t K (K )! et d d K t d (K )! et d k=k e T re T K T j k e } + O(t K ) d [ ] T k d X T X T Ak = N K [ ] T K X T X T AK = N K re T K T K e + O(t K ) [ K T j e K r T j= AK j N K ] = re T K T K e + O(t K ) K t d [ ] = (K )! et T K e K r T re T K d T K e + O(t K ) = = t K d [ ] r e T (K )! d T K e K + O(t = K ) = t K d ( () K () ) (K )! d + O(t K ) = = O(t K ). (.) A similar result holds for s =, R (t)g.
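The quadrature machinery used throughout this section — K steps of the symmetric Lanczos algorithm on A_N with starting vector v_N, followed by an eigendecomposition of the resulting Jacobi matrix J_K, whose eigenvalues give the Gauss nodes and whose squared first eigenvector components give the weights — can be sketched as follows. This is a generic illustration of the Golub–Welsch/Golub–Meurant construction under my own naming, not the authors' implementation; the demo matrix and vector are arbitrary stand-ins.

```python
import numpy as np

def lanczos(A, v, K):
    """Return the K x K Jacobi (tridiagonal) matrix of the symmetric Lanczos recursion."""
    n = len(v)
    alpha, beta = np.zeros(K), np.zeros(max(K - 1, 0))
    q_prev, q = np.zeros(n), v / np.linalg.norm(v)
    for j in range(K):
        w = A @ q
        alpha[j] = q @ w
        w -= alpha[j] * q + (beta[j - 1] * q_prev if j > 0 else 0.0)
        if j < K - 1:
            beta[j] = np.linalg.norm(w)
            q_prev, q = q, w / beta[j]
    return np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)

def gauss_quadratic_form(A, v, f, K):
    """K-node Gaussian-quadrature approximation of v^T f(A) v."""
    J = lanczos(A, v, K)
    theta, Y = np.linalg.eigh(J)          # nodes = eigenvalues of J_K
    weights = Y[0, :] ** 2                # squared first eigenvector components
    return (v @ v) * np.sum(weights * f(theta))

# usage: approximate v^T cos(t*sqrt(A)) v for a random SPD matrix
rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
A_demo = M @ M.T + 50.0 * np.eye(50)
v_demo = rng.standard_normal(50)
t = 0.1
ev, Q = np.linalg.eigh(A_demo)
exact = v_demo @ Q @ np.diag(np.cos(t * np.sqrt(ev))) @ Q.T @ v_demo
print(gauss_quadratic_form(A_demo, v_demo, lambda lam: np.cos(t * np.sqrt(lam)), K=4), exact)
```

Even with K as small as 2 to 4, the quadrature error is governed by high derivatives of f on the spectral interval, which is what allows the large time steps reported in the experiments.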

21 Guidotti et al. Note that the proof assumes that Algorithm. uses a discretization of A on an N K -point grid, where N K = K N. This grid refinement is used to avoid loss of information that would be incurred on an N -point grid when multiplying gridfunctions. In practice, this refinement is seen to be unnecessary when the coefficients are reasonably smooth. When it is needed to ensure sufficient accuracy, its effect on the efficiency of Algorithm. is minimized by the fact that K is typically chosen to be small (say, K = ork = ). Implementation details discussed in [] also mitigate this concern.... Non-Orthogonal Basis Functions Ideally, we would like our trial functions to be approximate eigenfunctions of the symmetrized operator obtained previously. Although the eigenfunctions of this operator are orthogonal with respect to the inner product,, we cannot assume that any basis of approximate eigenfunctions is necessarily orthogonal as well. Suppose that we refine our initial (orthogonal) basis of approximate eigenfunctions of the form (.) to obtain a new basis (x) N = so that each function (x) is a sparse combination of functions of the form (.); that is, = C (.) where the matrices and are defined by ij = j (x i ), ij = j (x i ), x i = ix, < i < N, (.) and the matrix C is sparse. Then, if we define the vector u(t) to be the values of our approximate solution ũ(x, t) at time t and the gridpoints x i, i =,, N, then we can efficiently obtain u(t + t) by computing where the vector ũ(t + t) is defined by u(t + t) = C(C T C) ũ(t + t) (.) [ũ(t + t)] =, R (t)ũ(, t) + R (t)ũ t (, t), <<N. (.) Generalizing, if we obtain the matrix representing the values of approximate eigenfunctions by a sequence of transformations C,, C k where each C j, j =,, k, has O() bandwidth, and the integer k is small, then we can still compute the solution in O(N ) time per time step... Numerical Experiments To test our algorithm, we solve the problem (.), (.) from t = to t =.
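Before describing the test cases, here is a hedged sketch of how one time step of a KSS-type update for u_tt + A u = 0 can be assembled from the perturbed quadratic forms described earlier, reusing gauss_quadratic_form() from the sketch above. The 1/(4δ) normalization is my reconstruction from the bilinearity of the forms and the symmetry of f(A); the names and the handling of the basis are assumptions, not the paper's code.

```python
import numpy as np

def kss_step(A, basis, f_vals, g_vals, dt, K=2, delta=1e-4):
    """One sketch time step: coefficients of u(dt) and u_t(dt) in the rows of
    `basis` (assumed orthonormal as vectors; grid-measure scaling is ignored)."""
    Nb = basis.shape[0]
    u_new, v_new = np.zeros(Nb), np.zeros(Nb)
    R0 = lambda lam: np.cos(dt * np.sqrt(lam))                   # cos(t sqrt(A))
    R1 = lambda lam: np.sin(dt * np.sqrt(lam)) / np.sqrt(lam)    # A^{-1/2} sin(t sqrt(A))
    dR0 = lambda lam: -np.sqrt(lam) * np.sin(dt * np.sqrt(lam))  # d/dt of cos(t sqrt(A))
    for w in range(Nb):
        phi = basis[w]
        for func, data, target in ((R0, f_vals, u_new), (R1, g_vals, u_new),
                                   (dR0, f_vals, v_new), (R0, g_vals, v_new)):
            plus = gauss_quadratic_form(A, phi + delta * data, func, K)
            minus = gauss_quadratic_form(A, phi - delta * data, func, K)
            target[w] += (plus - minus) / (4.0 * delta)          # ~ <phi, func(A) data>
    return u_new, v_new
```

Each coefficient is recovered from two quadratic forms, so the per-step cost is a small, fixed number of short Lanczos runs per trial function, independent of the time step.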

Construction of Test Cases

In many of the following experiments, it is necessary to construct functions of a given smoothness. To that end, we rely on the following result (see []):

Theorem. Let f(x) be a periodic function and assume that its pth derivative is a piecewise C^1 function. Then

    |f̂(ω)| ≤ constant / ( |ω|^{p+1} + 1 ).

Based on this result, the construction of a C^{p+1} function f(x) proceeds as follows:

1. For each ω = 1, ..., N/2 − 1, choose the discrete Fourier coefficient f̂(ω) by setting f̂(ω) = (u + iv) / ( |ω|^{p+1} + 1 ), where u and v are random numbers uniformly distributed on the interval (0, 1).
2. For each ω = 1, ..., N/2 − 1, set f̂(−ω) = conj( f̂(ω) ).
3. Set f̂(0) equal to any real number.
4. Set f(x) = Σ_{|ω|<N/2} f̂(ω) e^{2πiωx}.

In the following test cases, coefficients and initial data are constructed so that their third derivatives are piecewise C^1, unless otherwise noted.

We will now introduce some functions that will be used in the experiments described in this section. As these functions and operators are randomly generated, we will denote by R_1, R_2, ... the sequence of random numbers obtained using MATLAB's random number generator rand after setting the generator to its initial state. These numbers are uniformly distributed on the interval (0, 1).

We will make frequent use of a two-parameter family of functions defined on the interval [0, 1]. First, we define

    f_{j,k}(x) = Re{ Σ_{0<ω<N/2} f̂_j(ω) ( |ω| + 1 )^{−(k+1)} e^{2πiωx} },   j, k = 0, 1, ...,

where, for 0 < ω < N/2, the coefficients f̂_j(ω) = u + iv are drawn from consecutive entries of the sequence R_1, R_2, ..., with the offset into the sequence determined by j and ω. The parameter j indicates how many functions have been generated in this fashion since setting MATLAB's random number generator to its initial state, and the parameter k indicates how smooth the function is.
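A small sketch of this construction, with an assumed grid, FFT ordering, and normalization (the function name random_smooth_function and the choice of generator are mine):

```python
# Draw random Fourier coefficients, damp them like 1/(|omega|^{p+1} + 1), enforce
# conjugate symmetry, and synthesize a real periodic function on an equispaced grid.
import numpy as np

def random_smooth_function(N, p, seed=0):
    rng = np.random.default_rng(seed)
    fhat = np.zeros(N, dtype=complex)             # FFT ordering: index N - omega <-> -omega
    for omega in range(1, N // 2):
        u, v = rng.random(2)
        fhat[omega] = (u + 1j * v) / (omega ** (p + 1) + 1.0)
        fhat[N - omega] = np.conj(fhat[omega])    # fhat(-omega) = conj(fhat(omega))
    fhat[0] = rng.random()                        # any real number
    return N * np.real(np.fft.ifft(fhat))         # values of f at x_j = j/N

x = np.linspace(0.0, 1.0, 128, endpoint=False)
f3 = random_smooth_function(128, p=3)             # third derivative piecewise C^1
```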

23 Guidotti et al. In many cases, it is necessary to ensure that a function is positive or negative, so we define the translation operators E + and E by E + f (x) = f (x) min f (x) +, x [,] (.) E f (x) = f (x) max f (x). x [,] (.) It is also necessary to ensure that a periodic function vanishes on the boundary, so we define the translation operator E by E f (x) = f (x) f (). (.)... Discretization and Error Estimation The problem is solved using the following methods: A finite difference scheme presented by Kreiss et al. (see []). The Krylov subspace spectral method with K = Gaussian quadrature nodes and the basis (.). The Krylov subspace spectral method with K = Gaussian quadrature nodes and a basis obtained by applying two iterations of inverse iteration to each function in the basis (.). In all cases, the operator = c(x) xx is preconditioned using the transformations described in Section to obtain a self-adjoint operator. Then, the L (, )-realization of defined on H (, ) H (, ) is discretized using a matrix of the form (.) that operates on the space of gridfunctions defined on a grid consisting of N equally spaced points x j = jx, where x = /(N + ), for various values of N. The approximate solution is then computed using time steps t k = k, k =,,, so that we can analyze the temporal convergence behavior. Let u (k) (x, t), k =,,, be the approximate solution computed using time step t k. For k =,,, the relative error E k in u (k) (x, t) at t = is estimated as follows: We use the same method to solve the backward problem for (.) with end conditions u(x,) = u (k) (x, t), u t (x,) = u (k) t (x, t), x (, ). (.) Let v (k) (x, t) be the approximation solution of the inverse problem, for k =,,. Then we approximate the relative difference between v (k) (x,) and u (k ) (x,) = f (x) in the L -norm; that is, E k u(k) (,) v (k) (,) u (k) (,), (.)

24 where Wave Propagation in D Inhomogeneous Media [u (k) ] j = u (k) (x j,), [v (k) ] j = v (k) (x j,), x j = jx. (.)... Results We first solve the problem (.), (.) with smooth data c(x) = f, (x), f (x) = f, (x), g (x) = f, (x). (.) The functions c(x), f (x), and g (x) are plotted in Figs., respectively. The temporal convergence is illustrated in Fig., where N = gridpoints are used in all cases. The finite difference method of Kreiss et al. converges quadratically, whereas approximately th-order convergence is attained using the Krylov subspace spectral method. Note that the use of inverse iteration does not improve the convergence rate, but it does yield a more accurate approximation for larger time steps. In Fig., all three methods are used to solve (.), (.) with time steps t k = k and mesh sizes x k = (k+), for k =,,. The finitedifference method converges quadratically, while the Krylov subspace spectral method without inverse iteration exhibits quintic convergence. Using inverse iteration, the convergence is only superquadratic, but this is FIGURE Smooth wave speed c(x) = f, (x). F-F F F

25 Guidotti et al. FIGURE Smooth initial data u(x,) = f (x) = f, (x). FIGURE Smooth initial data u t (x,) = g (x) = f, (x).

26 Wave Propagation in D Inhomogeneous Media FIGURE Estimates of relative error in approximate solutions of the problem (.), (.) with data (.) computed using finite differencing and Krylov subspace spectral methods, with time steps t k = k, k =,,. FIGURE Estimates of relative error in approximate solutions of the problem (.), (.) with data (.) computed using finite differencing and Krylov subspace spectral methods, with time steps t k = k and mesh sizes x k = (k+), for k =,,.

27 Guidotti et al. due to the fact that the accuracy is so high at t = that the machine precision prevents attaining a faster convergence rate for smaller time steps. Both experiments are repeated with data that is not as smooth. Specifically, we use c(x) = f, (x), f (x) = f, (x), g (x) = f, (x). (.) The functions c(x), f (x), and g (x) are plotted in Figs., respectively. The results corresponding to Figs. and are illustrated in Figs. and, respectively. As expected, the accuracy and the convergence rate are impaired to some extent. This can be alleviated by refining the spatial grid during the Lanczos iteration, as described in [], in order to obtain more accurate inner products of functions; we do not do this here... Gaussian Quadrature in the Spectral Domain Consider the computation of the quadratic form, f (Ã) where (x) is defined in (.) and à is defined in (.). Figures and illustrate the relationship between the eigenvalues of à and the Gaussian quadrature nodes obtained by the symmetric Lanczos algorithm that is employed by Krylov subspace spectral methods. In Fig., the speed c(x) is defined to be c, (x), which is shown in Fig.. Because the speed is FIGURE Non-smooth wave speed c(x) = f, (x). F-F F-F F

28 Wave Propagation in D Inhomogeneous Media FIGURE Nonsmooth initial data u(x,) = f (x) = f, (x). FIGURE Nonsmooth initial data u t (x,) = g (x) = f, (x).

29 Guidotti et al. FIGURE Estimates of relative error in approximate solutions of the problem (.), (.) with data (.) computed using finite differencing and Krylov subspace spectral methods, with time steps t k = k, k =,,. FIGURE Estimates of relative error in approximate solutions of the problem (.), (.) with data (.) computed using finite differencing and Krylov subspace spectral methods, with time steps t k = k and mesh sizes x k = (k+), for k =,,.

30 Wave Propagation in D Inhomogeneous Media FIGURE Approximate eigenvalues of the operator = c, (x) xx, and Gaussian quadratures nodes of a -point rule used to approximate, f (Ã) where à is defined in (.), plotted against the wave number. For each eigenvalue, the wave number is determined by the dominant frequency of the corresponding approximate eigenfunction. smooth, (x) is an approximate eigenfunction of Ã, and it follows that the nodes are clustered around the corresponding approximate eigenvalue. In Fig., the speed is c(x) = + cos(x). Because of this oscillatory F perturbation, the eigenvalues do not define a smooth curve, as seen in the top plot. Note that the sharp oscillations in the curve traced by the eigenvalues correspond to sharp changes in the placement of the two quadrature nodes.. CONCLUSIONS We have considered wave propagation in one dimension in the case of heterogeneous and complicated coefficients. Our point of view has been to consider the analytical structure of the solution operator in order to derive asymptotic properties for the spectrum. In particular, we considered nonself-adjoint problems with small fluctuations in the coefficients and problems where the fluctuations are large and rapid, respectively. We then derived techniques for efficient and accurate numerical wave propagation that are based on using low-dimensional Krylov subspace approximations of the solution operator to obtain components of the solution in a basis of trial functions in a Galerkin-type scheme. We demonstrated that this

31 Guidotti et al. FIGURE Approximate eigenvalues of the operator = ( + cos(x) xx, and Gaussian quadratures nodes of a -point rule used to approximate, f (Ã) where à is defined in (.), plotted against the wave number. For each eigenvalue, the wave number is determined by the dominant frequency of the corresponding approximate eigenfunction. approach gives a high-order approximation that converges faster than competing methods in the problems that we have considered. Developing theory and numerical procedures for wave propagation in rough and multiscale media is important in a number of applications, such as analysis and design of algorithms for solving inverse problems related to propagation in the ocean, the atmosphere, or in the heterogeneous earth, for instance. Such applications require us, however, to consider propagation in several spatial dimensions, and our main aim is to generalize and integrate further our approach to deal with multiple spatial dimensions. ACKNOWLEDGMENT P.G. was partially supported by NSF CMS. R.S. was supported by NSF OMS and OMS. REFERENCES. A. Bensoussan, J. L. Lions, and G. Papanicolaou (). Asymptotic Analysis of Periodic Structures. North Holland, Amsterdam.. P. G. Casazza (). The art of frame theory. Taiwanese J. Math. ():.. N. Dunford and J. T. Schwarz (). Linear Operators, Part III. Wiley, New York.

. G. Dahlquist, S. C. Eisenstat, and G. H. Golub (). Bounds for the error of linear systems of equations using the theory of moments. J. Math. Anal. Appl. :.. G. H. Golub (). Bounds for matrix moments. Rocky Mt. J. Math. :.. G. H. Golub (). Some modified matrix eigenvalue problems. SIAM Rev. :.. G. H. Golub and C. Meurant (). Matrices, Moments, and Quadrature. Stanford University Technical Report SCCM.. G. H. Golub and J. Welsch (). Calculation of Gauss quadrature rules. Math. Comp. :.. B. Gustafsson, H.-O. Kreiss, and J. Oliger (). Time-Dependent Problems and Difference Methods. Wiley, New York.. J. B. Keller, D. McLaughlin, and G. Papanicolaou (). Surveys in Applied Mathematics. Plenum Press, New York.. H.-O. Kreiss, N. A. Petersson, and J. Yström (). Difference approximations for the second order wave equation. SIAM J. Numer. Anal. :.. J. V. Lambers (). Krylov subspace methods for variable-coefficient initial-boundary value problems. Ph.D. Thesis, Stanford University SCCM Program.. J. V. Lambers. Practical Implementation of Krylov Subspace Spectral Methods (in preparation).. G. Milton (). The Theory of Composites.. K. Sølna and G. Milton (). Can mixing materials make electromagnetic signals travel faster? SIAM J. Appl. Math. ():.. G. Szegő (). Orthogonal Polynomials, 3rd ed. American Mathematical Society, Providence, RI.. Yu. Safarov and D. Vassiliev (). The Asymptotic Distribution of Eigenvalues of Partial Differential Operators. Translations of Mathematical Monographs, AMS.


More information

Qualifying Examination

Qualifying Examination Summer 24 Day. Monday, September 5, 24 You have three hours to complete this exam. Work all problems. Start each problem on a All problems are 2 points. Please email any electronic files in support of

More information

Polynomial Approximation: The Fourier System

Polynomial Approximation: The Fourier System Polynomial Approximation: The Fourier System Charles B. I. Chilaka CASA Seminar 17th October, 2007 Outline 1 Introduction and problem formulation 2 The continuous Fourier expansion 3 The discrete Fourier

More information

Iterative Methods for Solving A x = b

Iterative Methods for Solving A x = b Iterative Methods for Solving A x = b A good (free) online source for iterative methods for solving A x = b is given in the description of a set of iterative solvers called templates found at netlib: http

More information

Riesz bases of Floquet modes in semi-infinite periodic waveguides and implications

Riesz bases of Floquet modes in semi-infinite periodic waveguides and implications Riesz bases of Floquet modes in semi-infinite periodic waveguides and implications Thorsten Hohage joint work with Sofiane Soussi Institut für Numerische und Angewandte Mathematik Georg-August-Universität

More information

ACM/CMS 107 Linear Analysis & Applications Fall 2017 Assignment 2: PDEs and Finite Element Methods Due: 7th November 2017

ACM/CMS 107 Linear Analysis & Applications Fall 2017 Assignment 2: PDEs and Finite Element Methods Due: 7th November 2017 ACM/CMS 17 Linear Analysis & Applications Fall 217 Assignment 2: PDEs and Finite Element Methods Due: 7th November 217 For this assignment the following MATLAB code will be required: Introduction http://wwwmdunloporg/cms17/assignment2zip

More information

Math 504 (Fall 2011) 1. (*) Consider the matrices

Math 504 (Fall 2011) 1. (*) Consider the matrices Math 504 (Fall 2011) Instructor: Emre Mengi Study Guide for Weeks 11-14 This homework concerns the following topics. Basic definitions and facts about eigenvalues and eigenvectors (Trefethen&Bau, Lecture

More information

be the set of complex valued 2π-periodic functions f on R such that

be the set of complex valued 2π-periodic functions f on R such that . Fourier series. Definition.. Given a real number P, we say a complex valued function f on R is P -periodic if f(x + P ) f(x) for all x R. We let be the set of complex valued -periodic functions f on

More information

Notes for CS542G (Iterative Solvers for Linear Systems)

Notes for CS542G (Iterative Solvers for Linear Systems) Notes for CS542G (Iterative Solvers for Linear Systems) Robert Bridson November 20, 2007 1 The Basics We re now looking at efficient ways to solve the linear system of equations Ax = b where in this course,

More information

Review of Linear Algebra Definitions, Change of Basis, Trace, Spectral Theorem

Review of Linear Algebra Definitions, Change of Basis, Trace, Spectral Theorem Review of Linear Algebra Definitions, Change of Basis, Trace, Spectral Theorem Steven J. Miller June 19, 2004 Abstract Matrices can be thought of as rectangular (often square) arrays of numbers, or as

More information

ON INTERPOLATION AND INTEGRATION IN FINITE-DIMENSIONAL SPACES OF BOUNDED FUNCTIONS

ON INTERPOLATION AND INTEGRATION IN FINITE-DIMENSIONAL SPACES OF BOUNDED FUNCTIONS COMM. APP. MATH. AND COMP. SCI. Vol., No., ON INTERPOLATION AND INTEGRATION IN FINITE-DIMENSIONAL SPACES OF BOUNDED FUNCTIONS PER-GUNNAR MARTINSSON, VLADIMIR ROKHLIN AND MARK TYGERT We observe that, under

More information

A fast randomized algorithm for overdetermined linear least-squares regression

A fast randomized algorithm for overdetermined linear least-squares regression A fast randomized algorithm for overdetermined linear least-squares regression Vladimir Rokhlin and Mark Tygert Technical Report YALEU/DCS/TR-1403 April 28, 2008 Abstract We introduce a randomized algorithm

More information

CAAM 336 DIFFERENTIAL EQUATIONS IN SCI AND ENG Examination 1

CAAM 336 DIFFERENTIAL EQUATIONS IN SCI AND ENG Examination 1 CAAM 6 DIFFERENTIAL EQUATIONS IN SCI AND ENG Examination Instructions: Time limit: uninterrupted hours There are four questions worth a total of 5 points Please do not look at the questions until you begin

More information

Superconvergence of discontinuous Galerkin methods for 1-D linear hyperbolic equations with degenerate variable coefficients

Superconvergence of discontinuous Galerkin methods for 1-D linear hyperbolic equations with degenerate variable coefficients Superconvergence of discontinuous Galerkin methods for -D linear hyperbolic equations with degenerate variable coefficients Waixiang Cao Chi-Wang Shu Zhimin Zhang Abstract In this paper, we study the superconvergence

More information

Stochastic Volatility and Correction to the Heat Equation

Stochastic Volatility and Correction to the Heat Equation Stochastic Volatility and Correction to the Heat Equation Jean-Pierre Fouque, George Papanicolaou and Ronnie Sircar Abstract. From a probabilist s point of view the Twentieth Century has been a century

More information

Solving discrete ill posed problems with Tikhonov regularization and generalized cross validation

Solving discrete ill posed problems with Tikhonov regularization and generalized cross validation Solving discrete ill posed problems with Tikhonov regularization and generalized cross validation Gérard MEURANT November 2010 1 Introduction to ill posed problems 2 Examples of ill-posed problems 3 Tikhonov

More information

IMPROVED LEAST-SQUARES ERROR ESTIMATES FOR SCALAR HYPERBOLIC PROBLEMS, 1

IMPROVED LEAST-SQUARES ERROR ESTIMATES FOR SCALAR HYPERBOLIC PROBLEMS, 1 Computational Methods in Applied Mathematics Vol. 1, No. 1(2001) 1 8 c Institute of Mathematics IMPROVED LEAST-SQUARES ERROR ESTIMATES FOR SCALAR HYPERBOLIC PROBLEMS, 1 P.B. BOCHEV E-mail: bochev@uta.edu

More information

Applied Mathematics 205. Unit V: Eigenvalue Problems. Lecturer: Dr. David Knezevic

Applied Mathematics 205. Unit V: Eigenvalue Problems. Lecturer: Dr. David Knezevic Applied Mathematics 205 Unit V: Eigenvalue Problems Lecturer: Dr. David Knezevic Unit V: Eigenvalue Problems Chapter V.4: Krylov Subspace Methods 2 / 51 Krylov Subspace Methods In this chapter we give

More information

MATHEMATICAL METHODS AND APPLIED COMPUTING

MATHEMATICAL METHODS AND APPLIED COMPUTING Numerical Approximation to Multivariate Functions Using Fluctuationlessness Theorem with a Trigonometric Basis Function to Deal with Highly Oscillatory Functions N.A. BAYKARA Marmara University Department

More information

2.3. VECTOR SPACES 25

2.3. VECTOR SPACES 25 2.3. VECTOR SPACES 25 2.3 Vector Spaces MATH 294 FALL 982 PRELIM # 3a 2.3. Let C[, ] denote the space of continuous functions defined on the interval [,] (i.e. f(x) is a member of C[, ] if f(x) is continuous

More information

The method of lines (MOL) for the diffusion equation

The method of lines (MOL) for the diffusion equation Chapter 1 The method of lines (MOL) for the diffusion equation The method of lines refers to an approximation of one or more partial differential equations with ordinary differential equations in just

More information

AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 23: GMRES and Other Krylov Subspace Methods; Preconditioning

AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 23: GMRES and Other Krylov Subspace Methods; Preconditioning AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 23: GMRES and Other Krylov Subspace Methods; Preconditioning Xiangmin Jiao SUNY Stony Brook Xiangmin Jiao Numerical Analysis I 1 / 18 Outline

More information

Boundary Value Problems - Solving 3-D Finite-Difference problems Jacob White

Boundary Value Problems - Solving 3-D Finite-Difference problems Jacob White Introduction to Simulation - Lecture 2 Boundary Value Problems - Solving 3-D Finite-Difference problems Jacob White Thanks to Deepak Ramaswamy, Michal Rewienski, and Karen Veroy Outline Reminder about

More information

Ir O D = D = ( ) Section 2.6 Example 1. (Bottom of page 119) dim(v ) = dim(l(v, W )) = dim(v ) dim(f ) = dim(v )

Ir O D = D = ( ) Section 2.6 Example 1. (Bottom of page 119) dim(v ) = dim(l(v, W )) = dim(v ) dim(f ) = dim(v ) Section 3.2 Theorem 3.6. Let A be an m n matrix of rank r. Then r m, r n, and, by means of a finite number of elementary row and column operations, A can be transformed into the matrix ( ) Ir O D = 1 O

More information

This is a closed book exam. No notes or calculators are permitted. We will drop your lowest scoring question for you.

This is a closed book exam. No notes or calculators are permitted. We will drop your lowest scoring question for you. Math 54 Fall 2017 Practice Final Exam Exam date: 12/14/17 Time Limit: 170 Minutes Name: Student ID: GSI or Section: This exam contains 9 pages (including this cover page) and 10 problems. Problems are

More information

ON A CLASS OF NONSMOOTH COMPOSITE FUNCTIONS

ON A CLASS OF NONSMOOTH COMPOSITE FUNCTIONS MATHEMATICS OF OPERATIONS RESEARCH Vol. 28, No. 4, November 2003, pp. 677 692 Printed in U.S.A. ON A CLASS OF NONSMOOTH COMPOSITE FUNCTIONS ALEXANDER SHAPIRO We discuss in this paper a class of nonsmooth

More information

Math 108b: Notes on the Spectral Theorem

Math 108b: Notes on the Spectral Theorem Math 108b: Notes on the Spectral Theorem From section 6.3, we know that every linear operator T on a finite dimensional inner product space V has an adjoint. (T is defined as the unique linear operator

More information

Course Notes: Week 1

Course Notes: Week 1 Course Notes: Week 1 Math 270C: Applied Numerical Linear Algebra 1 Lecture 1: Introduction (3/28/11) We will focus on iterative methods for solving linear systems of equations (and some discussion of eigenvalues

More information

MATHEMATICS 217 NOTES

MATHEMATICS 217 NOTES MATHEMATICS 27 NOTES PART I THE JORDAN CANONICAL FORM The characteristic polynomial of an n n matrix A is the polynomial χ A (λ) = det(λi A), a monic polynomial of degree n; a monic polynomial in the variable

More information

Calculus in Gauss Space

Calculus in Gauss Space Calculus in Gauss Space 1. The Gradient Operator The -dimensional Lebesgue space is the measurable space (E (E )) where E =[0 1) or E = R endowed with the Lebesgue measure, and the calculus of functions

More information

arxiv: v5 [math.na] 16 Nov 2017

arxiv: v5 [math.na] 16 Nov 2017 RANDOM PERTURBATION OF LOW RANK MATRICES: IMPROVING CLASSICAL BOUNDS arxiv:3.657v5 [math.na] 6 Nov 07 SEAN O ROURKE, VAN VU, AND KE WANG Abstract. Matrix perturbation inequalities, such as Weyl s theorem

More information

Spectral Theorem for Self-adjoint Linear Operators

Spectral Theorem for Self-adjoint Linear Operators Notes for the undergraduate lecture by David Adams. (These are the notes I would write if I was teaching a course on this topic. I have included more material than I will cover in the 45 minute lecture;

More information

Matrix Theory, Math6304 Lecture Notes from October 25, 2012

Matrix Theory, Math6304 Lecture Notes from October 25, 2012 Matrix Theory, Math6304 Lecture Notes from October 25, 2012 taken by John Haas Last Time (10/23/12) Example of Low Rank Perturbation Relationship Between Eigenvalues and Principal Submatrices: We started

More information

10 The Finite Element Method for a Parabolic Problem

10 The Finite Element Method for a Parabolic Problem 1 The Finite Element Method for a Parabolic Problem In this chapter we consider the approximation of solutions of the model heat equation in two space dimensions by means of Galerkin s method, using piecewise

More information

Here is an example of a block diagonal matrix with Jordan Blocks on the diagonal: J

Here is an example of a block diagonal matrix with Jordan Blocks on the diagonal: J Class Notes 4: THE SPECTRAL RADIUS, NORM CONVERGENCE AND SOR. Math 639d Due Date: Feb. 7 (updated: February 5, 2018) In the first part of this week s reading, we will prove Theorem 2 of the previous class.

More information

ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH

ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH V. FABER, J. LIESEN, AND P. TICHÝ Abstract. Numerous algorithms in numerical linear algebra are based on the reduction of a given matrix

More information

MATH 590: Meshfree Methods

MATH 590: Meshfree Methods MATH 590: Meshfree Methods Chapter 14: The Power Function and Native Space Error Estimates Greg Fasshauer Department of Applied Mathematics Illinois Institute of Technology Fall 2010 fasshauer@iit.edu

More information

4.8 Arnoldi Iteration, Krylov Subspaces and GMRES

4.8 Arnoldi Iteration, Krylov Subspaces and GMRES 48 Arnoldi Iteration, Krylov Subspaces and GMRES We start with the problem of using a similarity transformation to convert an n n matrix A to upper Hessenberg form H, ie, A = QHQ, (30) with an appropriate

More information

1. Nonlinear Equations. This lecture note excerpted parts from Michael Heath and Max Gunzburger. f(x) = 0

1. Nonlinear Equations. This lecture note excerpted parts from Michael Heath and Max Gunzburger. f(x) = 0 Numerical Analysis 1 1. Nonlinear Equations This lecture note excerpted parts from Michael Heath and Max Gunzburger. Given function f, we seek value x for which where f : D R n R n is nonlinear. f(x) =

More information

Lecture 4: Numerical solution of ordinary differential equations

Lecture 4: Numerical solution of ordinary differential equations Lecture 4: Numerical solution of ordinary differential equations Department of Mathematics, ETH Zürich General explicit one-step method: Consistency; Stability; Convergence. High-order methods: Taylor

More information

Spectral theory of first order elliptic systems

Spectral theory of first order elliptic systems Spectral theory of first order elliptic systems Dmitri Vassiliev (University College London) 24 May 2013 Conference Complex Analysis & Dynamical Systems VI Nahariya, Israel 1 Typical problem in my subject

More information

Part IB - Easter Term 2003 Numerical Analysis I

Part IB - Easter Term 2003 Numerical Analysis I Part IB - Easter Term 2003 Numerical Analysis I 1. Course description Here is an approximative content of the course 1. LU factorization Introduction. Gaussian elimination. LU factorization. Pivoting.

More information

Chapter 7. Extremal Problems. 7.1 Extrema and Local Extrema

Chapter 7. Extremal Problems. 7.1 Extrema and Local Extrema Chapter 7 Extremal Problems No matter in theoretical context or in applications many problems can be formulated as problems of finding the maximum or minimum of a function. Whenever this is the case, advanced

More information

ON WEAKLY NONLINEAR BACKWARD PARABOLIC PROBLEM

ON WEAKLY NONLINEAR BACKWARD PARABOLIC PROBLEM ON WEAKLY NONLINEAR BACKWARD PARABOLIC PROBLEM OLEG ZUBELEVICH DEPARTMENT OF MATHEMATICS THE BUDGET AND TREASURY ACADEMY OF THE MINISTRY OF FINANCE OF THE RUSSIAN FEDERATION 7, ZLATOUSTINSKY MALIY PER.,

More information

Explicit high-order time stepping based on componentwise application of asymptotic block Lanczos iteration

Explicit high-order time stepping based on componentwise application of asymptotic block Lanczos iteration NUMERICAL LINEAR ALGEBRA WITH APPLICATIONS Numer. Linear Algebra Appl. 2012; 19:970 991 Published online 19 March 2012 in Wiley Online Library (wileyonlinelibrary.com)..1831 Explicit high-order time stepping

More information