A model reduction approach to numerical inversion for a parabolic partial differential equation

Size: px
Start display at page:

Download "A model reduction approach to numerical inversion for a parabolic partial differential equation"

Transcription

1 Inverse Probles Inverse Probles 30 (204) 250 (33pp) doi:0.088/ /30/2/250 A odel reduction approach to nuerical inversion for a parabolic partial differential equation Liliana Borcea, Vladiir Drusin 2, Alexander V Maonov 3 and Mihail Zaslavsy 2 Departent of Matheatics, University of Michigan, 2074 East Hall, 530 Church Street, Ann Arbor, MI , USA 2 Schluberger-Doll Research Center, Hapshire St., Cabridge, MA , USA 3 Schluberger, 3750 Briarpar Dr., Houston, X 77042, USA E-ail: borcea@uich.edu, drusin@slb.co, aaonov@slb.co and zaslavsy@slb.co Received 9 June 204, revised 26 Septeber 204 Accepted for publication 8 October 204 Published 9 Noveber 204 Abstract We propose a novel nuerical inversion algorith for the coefficients of parabolic partial differential equations, based on odel reduction. he study is otivated by the application of controlled source electroagnetic exploration, where the unnown is the subsurface electrical resistivity and the data are tie resolved surface easureents of the agnetic field. he algorith presented in this paper considers inversion in one and two diensions. he reduced odel is obtained with rational interpolation in the frequency (Laplace) doain and a rational Krylov subspace proection ethod. It aounts to a nonlinear apping fro the function space of the unnown resistivity to the sall diensional space of the paraeters of the reduced odel. We use this apping as a nonlinear preconditioner for the Gauss Newton iterative solution of the inverse proble. he advantage of the inversion algorith is twofold. First, the nonlinear preconditioner resolves ost of the nonlinearity of the proble. hus the iterations are less liely to get stuc in local inia and the convergence is fast. Second, the inversion is coputationally efficient because it avoids repeated accurate siulations of the tie-doain response. We study the stability of the inversion algorith for various rational Krylov subspaces, and assess its perforance with nuerical experients. Keywords: inverse proble, parabolic equation, odel reduction, rational Krylov subspace proection, CSEM /4/250+33$ IOP Publishing Ltd Printed in the UK

2 Inverse Probles 30 (204) 250 (Soe figures ay appear in colour only in the online ournal). Introduction Inverse probles for parabolic partial differential equations arise in applications such as groundwater flow, solute transport and controlled source electroagnetic oil and gas exploration. We consider the latter proble, where the unnown r () x is the electrical resistivity, the coefficient in the diffusion Maxwell syste satisfied by the agnetic field H(, t x) = H(, t x) [ r( x) H( t, x)], (.) t for tie t > 0 and x in soe spatial doain. he data fro which r ()is x to be deterined are the tie resolved easureents of H(, t x) at receivers located on the boundary of the doain. Deterining r () x fro the boundary easureents is challenging especially because the proble is ill-posed and thus sensitive to noise. A typical nuerical approach is to iniize a functional given by the least squares data isfit and a regularization ter, using Gauss Newton or nonlinear conugate gradient ethods [22, 23, 25]. here are two ain drawbacs. First, the functional to be iniized is not convex and the optiization algoriths can get stuc in local inia. he lac of convexity can be overcoe to soe extent by adding ore regularization at the cost of artifacts in the solution. Nevertheless, convergence ay be very slow [23]. Second, evaluations of the functional and its derivatives are coputationally expensive, because they involve ultiple nuerical solutions of the forward proble. In applications the coputational doains ay be large with eshes refined near sources, receivers and regions of strong heterogeneity. his results in a large nuber of unnowns in the forward proble, and tie stepping with such large systes is expensive over long tie intervals. We propose a nuerical inversion approach that arises when considering the inverse proble in the odel reduction fraewor. We consider one and two-diensional ( and 2D) edia, and denote the spatial variable by x = (, x z), with z and x Ω, a siply connected doain in n with boundary, for n =, 2. he setting is illustrated in figure. he resistivity is r = r( x), and assuing that H = ut (, x) e, (.2) we obtain fro (.) the parabolic equation z = ut (, x) [ rx () ut (, x)], (.3) t for t > 0 and x Ω. he boundary consists of an accessible part A, which supports the receivers at which we ae the easureents and the initial excitation u(0, x) = u ( x), supp{ u }, (.4) o o A and an inaccessible part I = A, where we set ut (, x) = 0, x. (.5) he boundary condition at A is I n( x) u( t, x) = 0, x, (.6) A 2

3 Inverse Probles 30 (204) 250 Figure. Exaples of spatial doains for equation (.) in 2 (left) and 3 (right). he ediu is constant in the direction z, which is transversal to Ω. he accessible boundary A = Ω is ared with. where n is the outer noral. We choose the boundary conditions (.5) (.6) to siplify the presentation. Other (possibly inhoogeneous) conditions can be taen into account and aount to inor odifications of the reduced odels described in this paper. he sources and receivers, are located on the accessible boundary A, and provide nowledge of the operator ( u ) = u ( t, x ), t > 0, (.7) o x A for every u o such that supp{ u o } A. Here ut (, x)is the solution of (.3) with the initial condition (.4). he inverse proble is to deterine the resistivity r(x) for x Ω fro. Note that the operator is highly nonlinear in r, but it is linear in u o. his iplies that in one diension where Ω is an interval, say Ω = (0, ), and the accessible boundary is a single point A = { x = 0}, is copletely defined by uo () x = δ () x. All the inforation about r (x) is contained in a single function of tie y ( t) = ( δ ( x)) = u( t, 0), t > 0. (.8) o ease the presentation, we begin by describing in detail the odel reduction inversion ethod for the D case. hen we show how to extend it to the 2D case. he reduced odel is obtained with a rational Krylov subspace proection ethod. Such ethods have been applied to forward probles for parabolic equations in [7, 0, 3], and to inversion in one diension in [] and ultiple diensions in [20]. he reduced odels in [] are 2 rational interpolants of the transfer function defined by the Laplace transfor of y(t) fro (.8). In this paper we build on the results in [] to study in ore detail and iprove the inversion ethod in one diension, and to extend it to two diensions. he reduced order odels allow us to replace the solution of the full scale parabolic proble by its low order proection, thus resolving the high coputational cost inversion challenge entioned above. Conventionally, the reduced order odel (the rational approxiant of the transfer function) is paraetrized in ters of its poles and residues. We paraetrize it instead in ters of the coefficients of its continued fraction expansion. he rational approxiant can be viewed as the transfer function of a finite difference discretization of (.3) on a special grid with rather few points, nown in the literature as the optial or spectrally atched grid, or as a finite-difference Gaussian rule [8]. he continued fraction coefficients are the entries in the finite difference operator, which are related to the discrete resistivities. o itigate the other inversion challenge entioned above, we introduce a nonlinear apping fro the function space of the unnown resistivity r to the low-diensional space of the discrete resistivities. his ap appears to resolve the nonlinearity of the proble and 3

4 Inverse Probles 30 (204) 250 we use it in a preconditioned Gauss Newton iteration that is less liely to get stuc in local inia and converges quicly, even when the initial guess is far fro the true resistivity. o precondition the proble, we ap the easured data to the discrete resistivities. he inverse proble is ill-posed, so we liit the nuber of discrete resistivities (i.e. the size of the reduced order odel) coputed fro the data. his nuber depends on the noise level and it is typically uch less than the diension of odels used in conventional algoriths, where it is deterined by the accuracy of the forward proble solution. his represents another significant advantage of our approach. he paper is organized as follows: we begin in section 2 with a detailed description of our ethod in one diension. he inversion in 2D edia is described in section 3. Nuerical results in one and two diensions are presented in section 4. We conclude with a suary in section 5. he technical details of the coputation of the nonlinear preconditioner and its Jacobian are given in the appendix. 2. Model order reduction for inversion in D edia In this section we study in detail the use of reduced order odels for inversion in one diension. Many of the ideas presented here are used in section 3 for the two-diensional inversion. We begin in section 2. by introducing a sei-discrete analogue of the continuu inverse proble, where the differential operator in x in equation (.3) is replaced by a atrix. his is done in any nuerical inversion, and we use it fro the start to adhere to the conventional setting of odel order reduction, which is rooted in linear algebra. he proection-based reduced order odels are described in section 2.2. hey can be paraetrized in ters of the coefficients of a certain reduced order finite difference schee, used in section 2.3 to define a pair of nonlinear appings. hey define the obective function for the nonlinearly preconditioned optiization proble, as explained in section 2.4. o give soe intuition to the preconditioning effect, we relate the appings to so-called optial grids in section 2.5. he stability of the inversion is addressed in section 2.6, and our regularization schee is given in section 2.7. he detailed description of the D inversion algorith is in section Sei-discrete inverse proble In one diension the doain is an interval, which we scale to Ω = (0, ), with accessible and inaccessible boundaries A = {0} and I = {} respectively. o adhere to the conventional setting of odel order reduction, we consider the sei-discretized equation (.3) where u() t = A( ru ) ( t), (2.) t A() r = D diag() r D, (2.2) is a syetric and negative definite atrix, the discretization of x[ r( x) x ]. he vector r N N + contains the discrete values of r(x) and the atrix D N arises in the finite difference discretization of the derivative in x, for boundary conditions (.6) (.5). he discretization is on a very fine unifor grid with N points in the interval [0, ], and spacing h = ( N + ). Note that our results do not depend on the diension N, and all the derivations can be carried out for a continuu differential operator. We let A be a atrix to avoid unnecessary technicalities. he vector u( t) N is the discretization of ut (, x), and the 4

5 Inverse Probles 30 (204) 250 initial condition is u(0) = e, (2.3) h the approxiation of a point source excitation at A. he tie-doain response of the seidiscrete dynaical syste (2.) is given by y ( t; r) = e u( t), (2.4) where e = (, 0, 0,, 0). his is a direct analogue of (.8), i.e. it corresponds to easuring the solution of (2.) at the left end point A = {0} of the interval Ω. We ephasize in the notation that the response depends on the vector r of discrete resistivities. We use (2.4) to define the forward ap N + that taes the vector r : C (0, + ) (2.5) N + to the tie doain response () r = y ( ; r). (2.6) he easured tie doain data is denoted by ( ) true d ( t) = y t; r + ( t), t > 0, (2.7) where r true is the true (unnown) resistivity vector and ( t) is the contribution of the easureents noise and the discretization errors. he inverse proble is: given data d(t) for t [0, ), recover the resistivity vector Proection-based odel order reduction In order to apply the theory of odel order reduction we treat (2.) (2.4) as a dynaical syste with the tie doain response A() r t e A() r t y ( t; r) = e e = b e b (2.8) h written in a syetrized for using the source/easureent vector b = e h. (2.9) he transfer function of the dynaical syste is the Laplace transfor of (2.8) + = st Y s r y t r t = b si A r (; ) (; )e d ( ()) b, s > 0. (2.0) 0 Since A( r) is negative definite, all the poles of the transfer function are negative and the dynaical syste is stable. In odel order reduction we obtain a reduced odel ( A, b) so that its transfer function ( ) Y () s = b si A b (2.) is a good approxiation of Y (; s r)as a function of s in soe nor. Here I is the identity atrix, A, b and N. Note that while the atrix A( r) given by (2.2) is sparse, the reduced order atrix A is typically dense. hus, it has no straightforward interpretation as a discretization of the differential operator x[ r( x) x] on soe coarse grid with points. 5

6 Inverse Probles 30 (204) 250 Proection-based ethods search for reduced odels of the for A = VAV, b = Vb, VV = I, (2.2) N where the coluns of V for an orthonoral basis of an -diensional subspace of N on which the syste is proected. he choice of V depends on the atching conditions for Y and Y. hey prescribe the sense in which Y approxiates Y. Here we consider oent atching at interpolation nodes s [0, + ) that ay be distinct or coinciding, Y s = Y s s= s s= s, = 0,, 2M, =,, l. (2.3) he ultiplicity of a node s is denoted by M, so at non-coinciding nodes M =. he reduced order transfer function Y atches Y and its derivatives up to the order 2M, and the size l of the reduced odel is given by = = M. Note fro (2.) that Y (s) is a rational function of s, with partial fraction representation c Y ( s) =, c > 0, θ > 0. (2.4) s + θ = Its poles θ are the eigenvalues of A and the residues c are defined in ters of the noralized eigenvectors z, ( ) 2 c = b z, A z + θ z = 0, z =, =,,. (2.5) hus, (2.3) is a rational interpolation proble. It is nown [6] to be equivalent to the proection (2.2) when the coluns of V for an orthogonal basis of the rational Krylov subspace ( s) = span{ ( si A) b =,, l; =,, M}. (2.6) he interpolation nodes, obviously, should be chosen in the resolvent set of A( r). Moreover, since in reality we need to solve a liiting continuu proble, the nodes should be in the closure of the intersection of the resolvent sets of any sequence of finite-difference operators that converge to the continuu proble. his set includes (, 0) for probles on bounded doains. In our coputations the interpolation nodes lie on the positive real axis, since they correspond to woring with the Laplace transfor of the tie doain response. Ideally, in the solution of the inverse proble we would lie to iniize the tie (or frequency) doain data isfit in a quadratic nor weighted in accordance with the statistical distribution of the easureent error. When considering the reduced order odel, it is natural to choose the interpolation nodes that give the ost accurate approxiation in that nor. Such interpolation is nown in control theory as 2 (Hardy space) optial, and in any cases the optial interpolation nodes can be found nuerically [7]. Moreover, it was shown in [], that the solution of the inverse proble using reduced order odels with such interpolation nodes also iniizes the isfit functional (in the absence of easureent errors). When such nodes are not available, we can select soe reasonable interpolation nodes chosen based on a priori error estiates, for exaple, the so-called Zolotarev points or their approxiations obtained with the help of potential theory [0, 24]. In ost cases such choices lead to interpolation nodes that are distributed geoetrically. 6

7 Inverse Probles 30 (204) Finite difference paraetrization of reduced order odels As we entioned above, even though the reduced order odel ( A, b) coes fro a finite difference operator A, it does not retain its structure. In particular, A is a dense atrix. he odel can be uniquely paraetrized by 2 nubers, for exaple the poles θ and residues c in the representation (2.4). Here we show how to reparaetrize it so that the resulting 2 paraeters have a eaning of finite difference coefficients. A classical result of Stieltes says that any rational function of the for (2.4) with negative poles and positive residues adits a representation as a Stieltes continued fraction with positive coefficients Y () s = κ s + κ + κ s + κ s + κ. (2.7) Obviously, this is true in our case, since θ are the Ritz values of a negative definite operator A and the residues are given as squares in (2.5). Furtherore, it is nown fro [8] that (2.7) is a boundary response w () s (Neuann-to- Dirichlet ap) of a second order finite difference schee with three point stencil and boundary conditions w w w w κ κ κ + sw = 0, = 2,,, (2.8) w 2 w sw + = 0, w+ = 0. (2.9) κ κ κ hese equations closely reseble the Laplace transfor of (2.), except that the discretization is at nodes, which is uch saller than the diension N of the fine grid. It is convenient to wor henceforth with the logariths log κ and log κ of the finite difference coefficients. We now introduce the two appings and that play the crucial role in our inversion ethod. We refer to the first apping 2 : C (0, + ) as data fitting. It taes the tie-dependent data d(t) to the 2 logariths of reduced order odel paraeters ( d ( )) = {( log κ, log κ ) } (2.20) = via the following chain of appings (a) (b) (c) : d( t) Y( s) Y( s) { ( c, θ ) } = (d) (e) { ( κ, κ ) } { ( log κ, log κ ) }. (2.2) = = Here step (a) is a Laplace transfor of the easured data d(t), step (b) requires solving the rational interpolation proble (2.3), which is converted to a partial fraction for in step (c), which in turn is transfored to a Stieltes continued fraction for in step (d) with a variant of Lanczos iteration, as explained in the appendix. 7

8 Inverse Probles 30 (204) 250 Note step (b) is the only ill-conditioned coputation in the chain. he ill-posedness is inherited fro the instability of the parabolic inverse proble and we itigate it by liiting. he instability ay be understood intuitively by noting that step (b) is related to analytic continuation. We give ore details in section (2.6). In practice we choose so that the resulting κ and κ are positive. his deterines the axiu nuber of degrees of freedo that we can extract fro the data at the present noise level. N he apping 2 : + is the nonlinear preconditioner. It taes the vector of discrete resistivities r N + to the sae output as. In siplest ters, it is a coposition of and the forward ap (2.6) = given by () r = ( y ( ; r)). (2.22) Unlie the data fitting, the coputation of can be done in a stable anner using a chain of appings (a) (b) (c) (d) : r A( r) V A {( c, θ ) } = (e) (f) { ( κ, κ ) } { ( log κ, log κ ) }. (2.23) = = Here step (a) is ust the definition of A, in (b) we copute the orthonoral basis V for the rational Krylov subspace (2.6) on which A is proected in step (c) to obtain A and b. hen, the poles and residues (2.5) are coputed. he last two steps are the sae as in the coputation of Nonlinearly preconditioned optiization Now that we defined the data fitting and the nonlinear preconditioner appings, we can forulate our ethod for solving the sei-discrete inverse proble as an optiization. Given the data d(t) for t [0, + ) (recall (2.7)), we estiate the true resistivity r true by r, the solution of the nonlinear optiization proble r = arg in ( d ( )) ( r)) 2. (2.24) + 2 N r his is different than the typical optiization-based inversion, which iniizes the L 2 nor of the (possibly weighted) isfit between the easured data d ( )and the odel (). r Such an approach is nown to have any issues. In particular, the functional is often nonconvex with any local inia, which presents a challenge for derivative-based ethods (steepest descent, nonlinear conugate gradients, Gauss Newton). he convergence is often slow and soe for of regularization is required. In our approach we ai to convexify the obective functional by constructing the nonlinear preconditioner. o explain the ap, let us give a physical interpretation of the reduced odel paraeters {( κ, κ )} = by introducing the change of coordinates x ξ, so that 2 r x = ξ. he equation for U (, s ξ), the Laplace transfor of ut (, x( ξ)), is r 2 r2 U su = 0 (2.25) ( ) ξ ξ and (2.8) is its discretization on a staggered grid. he coefficients κ and κ are the increents in the discretization of the differential operators r 2 ξ and r 2 ξ, so they are proportional to the local values of r 2 and r 2, respectively. Since =, an ideal choice of would be an approxiate inverse of for resistivity functions that can be approxiated well on the discretization grid. It would interpolate in soe anner the grid 8 2

9 Inverse Probles 30 (204) 250 values of r(x), which are defined by { κ 2, κ 2 } =, up to soe scaling factors that define the grid spacing. Not all grids give good results, as explained in ore detail in the next section. We can factor out the unnown grid spacings by woring with the logarith of the coefficients { κ 2, κ 2 } = instead of the resistivity r. Although it is possible to calculate a good grid, we do not require it in our inversion ethod. However, the grids can be useful for udging the quality of different atching conditions and for visualizing the behavior of as shown in section 2.5. he ill-posedness of the inverse proble is localized in the coputation of the data fitting ter ( d ( )) that is coputed once. he instability of this coputation can be controlled by reducing the size of the reduced order odel proection subspace. A good strategy to follow in practice is to choose the largest such that all the coefficients κ, κ, =,, coputed by are positive. Conventional optiization-based ethods regularize the inversion by adding a penalty ter to the obective function. Our approach is different. We acnowledge that there is a resolution versus stability trade-off by reducing the size of the reduced odel, and view regularization only as a eans of adding prior inforation about the unnown resistivity. If such inforation is available, we can incorporate it at each iteration of the Gauss Newton ethod via a correction in the null space of the Jacobian. he regularization schee is discussed in detail in section 2.7. Inversion via odel order reduction is a recent idea that was introduced in [, 2, 20]. In particular, the approach in [] uses aps c and θ c to the spectral paraeters of the θ reduced order odel θ and c, =,,. Unlie the continued fraction coefficients κ and κ, the poles θ and residues c do not have a physical eaning of resistivity and the apping c does not behave lie an approxiate identity, as does. Our definition of the apping θ is based on the ideas fro [5]. It allows us to iprove the results of []. he iproveent becoes especially pronounced in cases of high resistivity contrast, as shown in the nuerical coparison in section 4. he idea of convexification of the nonlinear inverse proble has been pursued before for hyperbolic equations in [3, 2] and global convergence results were obtained in [2]. However, it is not clear if these approaches apply to parabolic equations. Although both hyperbolic and parabolic equations can be transfored to the frequency doain via Fourier and Laplace transfors, their data is apped to different parts of the coplex plane where the spectral paraeter lies. It is nown that the transforations fro the hyperbolic to the parabolic data are unstable [9], so one cannot directly apply the ethods fro [2, 3, 2] to the parabolic proble. he odel reduction approach in this paper gives a specially designed discrete proble of sall size which can be inverted stably Connection to optial grids We explain here that the ap relates to an interpolant of the resistivity r on a special, socalled optial grid. Although our inversion ethod does not involve directly such grids, it is beneficial to study the because they provide insight into the behavior of the nonlinear preconditioner. We give a brief description of the grids, and show that they depend strongly on the interpolation nodes s in the atching conditions (2.3). We use this dependence to conclude that not all rational approxiants of the transfer function Y (; s r)are useful in inversion, as shown in section hen, we use in section the optial grids and the continued fraction coefficients κ and κ to visualize the action of on r. his allows us to display how the nonlinear preconditioner approxiates the identity. 9

10 Inverse Probles 30 (204) Optial grids. he optial grids have been introduced in [8, 9, 8] to obtain exponential convergence of approxiations of the Dirichlet-to-Neuann ap. hey were used and analyzed in the context of inverse spectral probles in [5] and in the inverse proble of electrical ipedance toography in [4, 6]. he grids are defined by the coefficients of the continued fraction representation of the rational response function corresponding to a reduced odel of a ediu with constant resistivity r (0). In principle we can define grids for other reference resistivities, not necessarily constant, but the results in [4 6] show that the grids change very little with respect to r(x). his is why they are very useful in inversion. Let then r (0) () x so that the change of coordinates in (2.25) is trivial (ξ = x) and obtain fro (2.8) (2.9) that {( κ (0), κ (0) )} = are the optial grid steps in a finite difference schee for the equation 2w sw = 0. x2 he priary and dual grid points are (0) (0) (0) (0) = = x = κ, x = κ, =,,, (2.26) with boundary nodes x0 (0) = x 0 (0) = 0. In the nuerical experients we observe that the grid (0) (0) is staggered, i.e. the priary nodes x and the dual nodes x obey the interlacing conditions 0 (0) 0 (0) (0) (0) 2 (0) 2 (0) (0) (0) (0) 0 = x = x < x < x < x < x < < x < x < x. (2.27) We do not prove (2.27) here, although it is possible to do so at least in soe settings. he optial grids are closely connected with the sensitivity functions, which for the seidiscrete proble are given by the rows of the Jacobian 2 defined N as ( ), log κ = log κ, if,, if + 2, =,, N. he studies in [4, 6] show that the sensitivity function corresponding to κ is localized around (0) (0) the corresponding grid cell ( x, x + ), and its axiu is near x (0). he sae holds for κ, after interchanging the priary and dual grid nodes. Moreover, the coluns of the pseudoinverse ( ) have siilar localization behavior. he Gauss Newton update is the linear cobination of the coluns of ( ), and therefore optial grids are useful for inversion. hey localize well features of the resistivity that are recoverable fro the easureents. A good grid should have two properties. First, it should be refined near the point of easureent to capture correctly the loss of resolution away fro A. Second, the nodes should not all be clustered near A because when the nodes get too close, the corresponding rows of becoe alost linearly dependent, and the Jacobian is ill-conditioned Matching conditions. We study here the grids for three choices of atching conditions (2.3). he first corresponds to l =, s = 0 and M = (siple Padé) and yields the rational Krylov subspace 0

11 Inverse Probles 30 (204) 250 (0) = span { A b, A 2b,, A b}. his approxiant has the best accuracy near the interpolation point (s = 0), and is obviously inferior for global approxiation in s when copared to ultipoint Padé aproxiants. he other two choices atch Y (; s r) and its first derivative at nodes = s ( s ) = that are distributed geoetrically = s 2 s s, =,,. (2.28) s We use henceforth the tilde to distinguish these nodes fro those obtained with the change of variables s s = (0, ], (2.29) s intended to iprove the conditioning of the interpolation. he athching conditions at s yield the rational Krylov proection subspace = ( s ) span {( si A) b,, ( si A) b}, and the two choices of interpolants differ in the rate of growth of s. he second interpolant uses the rate of growth s 2 s = + 2 in (2.28) and s = 2, so that 2 s = 2 +, =,,. (2.30) his choice approxiates the Zolotarev nodes [8] which arise in the optial rational approxiation of the transfer function Y (; s r)over a real positive and bounded interval of s. he third interpolant uses a faster rate of growth of s and gives worse results, as illustrated in the nuerical experient below. We show the optial grids for all three choices of atching conditions in figure 2 for reduced odels of sizes = 5, 0. We observe that the nodes of the grids corresponding to fast growing s are clustered too close to the easureent point x = 0. hus, inversion results are expected to have poor resolution throughout the rest of the doain away fro the origin. In addition, the clustering of the grid nodes leads to poor conditioning of the Jacobian. his is illustrated in figure 3, where the condition nubers are plotted against the size of the reduced odel. We observe that the condition nuber of the Jacobian for the reduced odel with fast growing s increases exponentially. he condition nubers of the Jacobians for the other two choices of atching conditions grow very slowly. We can explain why the case of fast growing s is undesirable in inversion by looing at the liiting case of approxiation at infinity. he siple Padé approxiant at infinity corresponds to the Krylov subspace ( + ) = span { b, Ab,, A b}, which is unsuitable for inversion in our setting. o see this, recall fro (2.2) that A( r) is tridiagonal and b is a scalar ultiple e. hus, for any + only the first + coponents of the vector A b are non-zero. When A( r) is proected on K ( + ) in (2.2), the reduced odel atrix A is aware only of the upper left ( + ) ( + ) bloc of A( r), which depends only on the first + entries of r. his is unsuitable for inversion where we want A to capture the behavior of the resistivity in the whole interval, even for sall. he

12 Inverse Probles 30 (204) 250 Figure 2. Priary x (0) and dual x (0) optial grid nodes for =,, ( = 5,0) and different choices of atching conditions. Moent atching at s = 0: priary, dual. Interpolation at geoetrically distributed interpolation nodes: priary and dual for slowly growing s; priary and dual for fast growing s. he nuber of fine grid steps in the sei-discretized odel is N = 999. Figure 3. Dependence of the condition nuber of on the size of the reduced odel for different atching conditions. Moent atching at s = 0 is, interpolation at geoetrically distributed nodes: slowly growing s is, fast growing s is. he nuber of fine grid steps in the sei-discretized odel is N = 999. corresponding optial grid steps will siply coincide with the first grid steps h of the fine grid discretization in (2.2). When the interpolation nodes grow too quicly, we are near the liiting case of siple Padé approxiant at infinity, and the first optial grid steps are x (0) x (0) h. N N κ κ Consequently, the rows and of are alost collinear and the = = Jacobian is poorly conditioned, as shown in figure 3. 2

13 Inverse Probles 30 (204) 250 Figure 4. Action of the nonlinear preconditioner on the resistivities r Q, r L and r J (0) (solid blac line) defined in (4.7) and (4.9). he priary ratios ( x, ζ) are blue, (0) the dual ratios ( x, ζ ) are red for =,, with = 0. he geoetric averages (0) ( x, ζ ) are green Action of the nonlinear preconditioner on the resistivity. he optial grids can be used to obtain resistivity reconstructions directly, without optiization, as was done in [5] for the inverse spectral proble and in [4, 6] for electrical ipedance toography. We do not use this approach, but we show here such reconstructions to display the behavior of the nonlinear preconditioner when acting on r. Recall fro equation (2.25) and the explanation in section 2.4 that κ and κ are proportional to the values of r 2 () x and r 2 () x around the corresponding optial grid points. If we tae as the proportionality coefficients the values κ (0) and κ (0) then we expect the ratios ζ 2 κ κ = ζ = = κ, κ,,,, (2.3) (0) (0) 2 to behave roughly as r ( x (0) ) and r ( x (0) ). In practice, a ore accurate estiate of the resistivity can be obtained by taing the geoetric average of ζ and ζ (0) κ κ ζ = ζζ =, =,,. (2.32) (0) κκ Since building a direct inversion algorith is not our focus, we only show (2.32) for coparison purposes. (0) In figure 4 we display the ratios (2.3) plotted at the nodes of the optial grid ( x, ζ), (0) ( x, ζ ), =,,. We tae the sae resistivities r Q, r L and r J that are used in the nuerical experients in section 4. hey are defined in (4.8) and (4.9). We observe that the (0) curve defined by the linear interpolation of the priary points ( x, ζ) overestiates the (0) true resistivity, while the dual curve passing through ( x, ζ ) underestiates it. Both curves capture the shape of the resistivity quite well, so when taing the geoetric average (2.32) the reconstruction falls right on top of the true resistivity. his confirs that resolves ost of the nonlinearity of the proble and thus acts on the resistivity as an approxiate identity. We can also illustrate how well resolves the nonlinearity of the proble by considering an exaple of high contrast resistivity. In figure 5 we plot the sae quantities as 3

14 Inverse Probles 30 (204) 250 Figure 5. Action of the nonlinear preconditioner on a piecewise constant resistivity of contrast 20. Sae setting as in figure 4. in figure 4 in the case of piecewise constant resistivity of contrast 20. he contrast is captured quite well by ζ, while the shape of the inclusion is shrun. his is one of the reasons why we use as a preconditioner for optiization and not a reconstruction apping. Optiization allows us to recover resistivity features on scales that are saller than those captured in ζ and ζ. Moreover, since resolves ost of the nonlinearity of the inverse proble, the optiization avoids the pitfalls of traditional data isfit iniization approaches, such as sensitivity to the initial guess, nuerous local inia and slow convergence Data fitting via the rational interpolation Unlie () r = ( y ( ; r))coputed using the chain of appings (2.23) with all stable steps, the coputation of the data fitting ter ( d ( )) requires solving an osculatory rational interpolation proble (2.3) in step (b) of (2.2) to obtain the rational interpolant Y (s) of the transfer function Y(s). his involves the solution of a linear syste of equations with an illconditioned atrix, or coputing the singular value decoposition of such atrix. We use the condition nuber of the atrix to assess the instability of the proble. he condition nuber grows exponentially with, but the rate of growth depends on the atching conditions used in the rational interpolation. We show this for the two choices of atching conditions: interpolation of Y(s) and its first derivatives at distinct nodes s distributed as in (2.30) (ultipoint Padé), and atching of oents of Y(s) at s = 0 (siple Padé). We describe first both Padé interpolation schees and then copare their stability nuerically. 4

15 Inverse Probles 30 (204) 250 Multipoint Padé: Let us rewrite the reduced order transfer function (2.) in the for f () s Y () s = = gs () f + f s + + f s 0 g0 + gs + + gs, (2.33) where f(s) and g(s) are polynoials defined up to a coon factor. We use this redundancy later to choose a unique solution of an underdeterined proble. he atching conditions (2.3) of Y(s) and Y ()at s the distinct interpolation nodes 0 < s < s < < s 2 are ( ) ( ) ( ) f s Y s g s = 0, f ( s) Y ( s) g( s) Y( s) g ( s) = 0, =,,. (2.34) Next, we recall the change of variables (2.29) and define the Vanderonde-lie ( + ) atrices and the diagonal atrices s s 2 s = s2 s2 2 s2, 2 s s s 0 2s s = 0 2s2 s2 s 0 2s s ( ( ) ( ) ) ( ( ) ( ) ), (2.35) = diag Y s,, Y s, = diag Y s,, Y s. (2.36) his allows us to write equations (2.34) in atrix-vector for as an underdeterined proble with and u = 0, u 2 +, (2.37) :, : = 2 (2+ ), (2.38) :, : = f s u, = 0,,, + = g s u, = 0,,. (2.39) + + he proble is underdeterined because of the redundancy in (2.33). We eliinate it by the additional condition u 2 =, which aes it possible to solve (2.37) via the singular value decoposition. If we let U be the atrix of right singular vectors of, then u. (2.40) = U:(2 + ), 2+ Once the polynoials f and g are deterined fro (2.39), we can copute the partial fraction expansion (2.4). he poles θ are the roots of g(s), and the residues c are given by f ( θ ) c =, =,,, g θ θ = ( ) assuing that θ are distinct. Finally, κ and κ are obtained fro θ and c via a Lanczos iteration, as explained in the appendix. 5

16 Inverse Probles 30 (204) 250 able. Condition nubers of (ultipoint Padé approxiant) and (siple Padé approxiant). he nuber of fine grid steps in the sei-discretized odel is N = cond( ) cond( ) Siple Padé: When l =, s = 0 and M =, we have a siple Padé approxiant which atches the first 2 oents of yt (; r) in the tie doain, because Y s = Y s s= 0 s= 0 + = ( ) yt ( ; r) t d t, = 0,,, 2. 0 A robust algorith for siple Padé approxiation is proposed in [5]. It is also based on the singular value decoposition. If Y(s) has the aylor expansion at s = 0 Y( s) = τ + τ s + τ s + + τ s +, (2.4) then the algorith in [5] perfors a singular value decoposition of the oeplitz atrix τ τ τ τ0 τ+ τ τ2 τ = τ τ τ τ ( + ). (2.42) + + If U ( ) ( ) is the atrix of right singular vectors of, then the coefficients in (2.33) satisfy and g0 g g f f f 0 = :( + ), + U, (2.43) τ g0 τ τ = g. (2.44) τ τ τ 0 g 2 0 We refer the reader to [5] for detailed explanations. Coparison: o copare the perforance of the two interpolation procedures, we give in table the condition nubers of atrices and. hey are coputed for the reference resistivity r (0), for several values of the size of the reduced odel. We observe that while both condition nubers grow exponentially, the growth rate is slower for the ultipoint Padé approxiant. hus, we conclude that it is the best of the choices of atching conditions considered in this section. It allows a ore stable coputation of ( d ( )), it gives a good distribution of the optial grid points and a well-conditioned Jacobian. 6

17 Inverse Probles 30 (204) Regularization of Gauss Newton iteration Our ethod of solving the optiization proble (2.24) uses a Gauss Newton iteration with regularization siilar to that in [4]. We outline it below, and refer to the next section for the precise forulation of the inversion algorith. Recall that aps the vectors r N + of resistivity values on the fine grid to 2 reduced order odel paraeters {(log κ, log κ )} =. hus, the Jacobian has diensions 2 N. Since the reduced order odel is uch coarser than the fine grid discretization 2 N, the Jacobian ()has r a large null space. At each iteration the Gauss Newton update to r is in the 2 diensional range of the pseudoinverse () r of the Jacobian. his leads to low resolution in practice, because is ept sall to itigate the sensitivity of the inverse proble to noise. If we have prior inforation about the unnown r true, we can use it to iprove its estiate r. We incorporate the prior inforation about the true resistivity into a penalty functional ( r). For exaple, ( r) ay be the total variation nor of r if r true is nown to be piecewise constant, or the square of the l 2 nor of r or of its derivative if r true is expected to be sooth. In our inversion ethod we separate the iniization of the nor of the residual ( d ( )) ( r) and the penalty functional ( r). At each iteration we copute the standard Gauss Newton solution r and then we add a correction to obtain a regularized iterate ρ. he correction ρ r is in the null space of, so that the residual reains unchanged. We define it as the iniizer of the constrained optiization proble iniize ( ρ), (2.45) s.t.[ ]( r ρ) = 0 which we can copute explicitly in the case of a weighted discrete H seinor regularization, assued henceforth, Here the atrix 2 2 () r = W D r 2. (2.46) 2 D is a truncation of D defined as D = D ( N ) N, :( N ), : N N N and W ( ) ( ) is a diagonal atrix of weights. We specify it below depending on the prior inforation on the true resistivity. With the choice of penalty in the for (2.46) the optiization proble (2.45) is quadratic with linear constraints, and thus ρ can be calculated fro the first order optiality conditions given by the linear syste D WD ρ + [ ] λ = 0, (2.47) [ ] ρ = [ ] r, (2.48) 2 where λ is a vector of Lagrange ultipliers. In the nuerical results presented in section 4 we consider sooth and piecewise constant resistivities, and choose W in (2.46) as follows. For sooth resistivities we siply tae W = I, so that (2.46) is a regular discrete H seinor. For discontinuous resistivities we would lie to iniize the total variation of the resistivity. his does not allow an explicit coputation of ρ, so we ae a coproise and use the weights introduced in []. he atrix W is diagonal with entries 7

18 Inverse Probles 30 (204) 250 = + ϕ = 2 2 w D r () r,,, N, (2.49) where ϕ ()is r proportional to the isfit for the current iterate ϕ () r = Cϕ ( d( )) () r 2, (2.50) and C ϕ is soe constant, set to 2 (2 ) in the nuerical exaples in section 4. o ensure that A( r) corresponds to a discretization of an elliptic operator, we need positive entries in r. his can be done with a logarithic change of coordinates, which transfors the optiization proble to an unconstrained one. However, in our nuerical experients we observed that if is sufficiently sall so that for the given data d(t) all the entries of 2 ( d ( )) are positive, then the Gauss Newton updates of r reain positive as well. hus, the logarithic change of coordinates for r in not needed in our coputations he inversion algorith for D edia Here we present the suary of the inversion algorith. he details of the coputation of ()and r its Jacobian can be found in the appendix. he inputs of the inversion algorith are the easured data d(t) and a guess value of. his reflects the expected resolution of the reconstruction, and ay need to be decreased depending on the noise level. o copute the estiate r of r true, perfor the following steps: () Define the interpolation nodes s via (2.30). Using the ultipoint Padé schee fro section 2.6 copute ( κ, κ ) = using the data d(t). (2) If for soe either κ 0 or κ 0, decrease to and return to step. If all ( κ, κ ) = are positive, fix and continue to step 3. (3) Define the vector of logariths l = (log κ,, log κ, log κ,, log κ ). (4) Choose an initial guess () N r + and the axiu nuber n GN of Gauss Newton iterations. (5) For p =,, n perfor: GN (5.) For the current iterate r ( p) copute the apping ( r p ( ) ( ) ( ) ) = {( log κ, logκ )} and its Jacobian ( p) = ( r( p) ) as explained in the appendix. (5.2) Define the vector of logariths p p = p ( ( ) ( ) ( ) ( ) ) l( p) = log κ,, log κ, log κ,, log κ. p p p (5.3) Copute the step ( ) ( ) ρ ( p) = ( p) l( p) l. 8

19 Inverse Probles 30 (204) 250 (5.4) Choose the step length α ( p) and copute the Gauss Newton update rgn = r ( p ) + ζ( p ) ρ( p ). (5.5) Copute the weight W using (2.49) with r GN or W = I. + (5.6) Solve for the next iterate r ( p ) fro the linear syste (6) he estiate is ( ) ( p) + D WD p r p 0 λ( p) ( ) ( ) n r = r GN +. ( ) = 0 ( p ) r ( ) GN. (2.5) Let us rear that since the nonlinear preconditioner is an approxiation of the identity in the sense explained in section 2.4, ost of the nonlinearity of the proble is resolved by the rational interpolation in step. hus, we ay start with a poor initial guess in step 4 and still obtain good reconstructions. Moreover, the nuber n GN of Gauss Newton iterations ay be ept sall. In the nuerical results presented in section 4 we tae the nuber of iterations n = 5 for ediu contrast resistivities and GN n = 0 for the high contrast case. In general, GN any of the standard stopping criteria could be used, such as the stagnation of the residual. In order to siplify our ipleentation we set the step length ( p) α = in step 5.4. However, choosing α ( p) adaptively with a line search procedure ay be beneficial, especially for high contrast resistivities. While the Jacobian ( p) is well-conditioned, the syste (2.5) ay not be. o alleviate this proble, instead of solving (2.5) directly, we ay use a truncated singular value decoposition to obtain a regularized solution. ypically it is enough to discard ust one coponent corresponding to the sallest singular value as we do in the nuerical experients. 3. wo diensional inversion Unlie the D case, the inverse proble in two diensions is forally overdeterined. he unnown is the resistivity r(x) defined on Ω 2, and the data are three diensional. One diension corresponds to tie and the other two coe fro the source (initial condition) and receiver locations on A. he odel reduction inversion fraewor described in section 2.8 applies to a forally deterined proble. We extend it to two diensions by constructing separately reduced odels for certain data subsets. Each odel defines a apping that is siilar to in one diension, where =,, Nd is the index of the data set. he aps are coupled by their dependence on the resistivity function r(x), and are all taen into account in inversion as explained below. Let uo () ( x) be the initial condition for a source that is copactly supported on a segent (interval) of A. We odel it for siplicity with the indicator function of and write o () ( ) ( ) u ( x) = x δ x. (3.) Here x = ( x, x ), with x the arclength on A and x the noral coordinate to the boundary, which we suppose is sooth. he seidiscretized version of equation (.3) on a grid with N 9

20 Inverse Probles 30 (204) 250 points is () u () t = A( ru ) () ( t), (3.2) t with r N the vector of discrete saples of the resistivity. Equation (3.2) odels a dynaical syste with response atrix y (, t r) defined by the restriction of the solution u ( t) = e r u (3.3) () A( ) t () o to the support of the th receiver. We tae for siplicity the sae odel of the sources and receivers, with support on the disoint boundary segents of the accessible boundary. hus, if we let b () N be the easureent vector corresponding to the th source or receiver, we can write the tie doain response as ( ) A( r) t ( ) y ( t; r) = b e b. (3.4) he diagonal of this atrix is the high diensional extension of (2.8). he atrix valued transfer function is the Laplace transfor of (3.4), + = st ( ) Y (; s r) y (; t r)e d t = b ( si A()) r ( ) b, s > 0. (3.5) 0 In odel order reduction we obtain a reduced odel with rational transfer function Y, (; s r) that approxiates (3.5). he reduced odel is constructed separately for each receiver source pair (, ). It is defined by an syetric and negative definite atrix A (, ) ( () r with N, and easureent vectors b ) and b (). he transfer function ( ) ( ) (, ) c Y, ( s; r) = b si A b () =, (3.6) s + θ (, ) l= ( l, ) (, has poles at the eigenvalues θ ) (, l of A ) (,, for eigenvectots z ) l, and residues l (, ) ( ) ( l, ) ( ) ( l, ) c b z b z. (3.7) = ( )( ) We are interested in the continued fraction representation of Y, (; s r), in particular its { ( )} coefficients κ, κ l l (, ) (,) l= l that define the preconditioner apping in our inversion approach. hese coefficients are guaranteed to be positive as long as the residues in (3.7) satisfy c (, ) 0. his is guaranteed to hold for the diagonal of (3.6), because l l (, ) () (,) l 2 c b z. (3.8) = ( ) hus, we construct the reduced odels for the diagonal of the atrix valued transfer function, (, ) (,) and copute the odel paraeters κ, κ as in one diension. Such { ( )} l l l= easureent setting has analogues in other types of inverse probles. For exaple, a siilar setting in wave inversion is the bacscattering proble, where the scattered wave field is easured in the sae direction as the incoing wave. If we have N d boundary segents at A, we define ( ) ( ) (,) l (,) l l= ( r) = y ( ; r) = log κ, log κ, =,, N, (3.9) as in one diension, using the chain of appings (2.23). he resistivity is estiated by the solution of the optiization proble 20 d

21 Inverse Probles 30 (204) 250 r Nd = arg in d ( ) () r 2, (3.0) 2 N r + = ( ) ) where d (t) is the data easured at the receiver supported on, for the th source excitation. Note that the su in (3.0) couples all the sources/receivers together in a single obective functional. Another question that we need to address is the choice of atching conditions. For siplicity we atch the oents of Y (s) at a single interpolation node s. his yields the atching conditions 2 Y s, = Y s s= s s= s, = 0,,, 2, (3.) and rational Krylov subspaces () ( ) {( ) ( ) } () () = s span si A b,, si A b, =,, N. (3.2) hen the only paraeter to deterine is the node s > 0. Siilarly to the D case we use the condition nuber of the Jacobian to deterine the optial choice of s (2 N ) N. Here the total Jacobian d is a atrix of N d individual Jacobians N 2 staced together. It appears that for a fixed the sensitivity functions (, for log κ ) (, l and log κ ) l closely reseble the spherical waves propagating fro the iddle of. As the index l increases, the wave propagates further away fro. his behavior is illustrated quantitatively in section 4.2 for an optial choice of s. he speed of propagation of these sensitivity waves decreases as s grows. Obviously, to get the resolution throughout the whole doain we would lie all of Ω to be covered by the sensitivity waves, which eans higher propagation speed (saller s ) is needed. On the other hand, if the sensitivity waves propagate too far they reflect fro the boundary which leads to poor conditioning of the Jacobian. he balance between these two requireents leads to an optial choice of s, which we deterine experientally in the nuerical exaple in section 4.2. d 4. Nuerical results We assess the perforance of the inversion algorith with nuerical experients. o avoid coitting the inverse crie we use different grids for generating the data and for the solution of the inverse proble. We describe the setup of the nuerical siulations and present the inversion results in section 4. for D edia, and in section 4.2 for 2D edia. 4.. Nuerical experients in one diension We use a fine grid with N f = 299 uniforly spaced nodes to siulate the data d(t), and a coarser grid with N = 99 uniforly spaced nodes in the inversion. true he first ter yt (; r ) in (2.7) is approxiated by solving the sei-discrete forward proble (2.) with an explicit forward Euler tie stepping on a finite tie interval [0, ], where = 00. he tie step is h = 0 5. We denote by y the vector of length N = h, with entries given by the nuerical approxiation of yt (; r true ) at the tie saples t = h. Since even in the absence of noise there is a systeatic error () s () t coing fro the nuerical approxiation of the solution of (2.), we write 2

22 Inverse Probles 30 (204) 250 able 2. Reduced odel sizes used for various noise levels ϵ. ϵ (noiseless) true () s ( ) ( ) ( ) N y = y,, y, y = y t ; r + t, =,, N. We also define the vector ultiplicative odel ( n) n ( ) N that siulates easureent noise using the = ϵ diag ( χ,, χ ) y, (4.) N where ϵ is the noise level, and χ are independent rando variables distributed norally with N zero ean and unit standard deviation. he data vector d is ( n) d = y +, (4.2) and we denote its coponents by d, for =,, N. Such noise odel allows for a siple estiate for a signal-to-noise ratio d 2. (4.3) ( n) ϵ 2 he inversion algorith described in section 2.8 deterines at step 2 the size of the reduced odel for different levels of noise. he larger ϵ, the saller. he values of ϵ and used to obtain the results presented here are given in table 2. he transfer function and its derivative at the interpolation points are approxiated by taing the discrete Laplace transfor of the siulated data ( ) N Y s h d e st, (4.4) ( ) = N Y s h d t e st. (4.5) = o quantify the error of the reconstructions r we use the ratio of discrete l 2 nors = r rtrue 2. (4.6) r true 2 While this easure of error is ost appropriate for sooth resistivities, it ay be overly strict for the reconstructions in the discontinuous case due to the large contribution of discontinuities. However, even under such an unfavorable easure the inversion procedure deonstrates good perforance. We show first the estiates of three resistivity functions of contrast two. We consider two sooth resistivities 2 rtrue () x = rq (): x = 2 4 x, (4.7) 2 rtrue ( x) = r ( x) : = 0.8e 00( x 0.2) 2 + x +, (4.8) L 22

23 Inverse Probles 30 (204) 250 Figure 6. Reconstructions of r(x) (blac solid line) after one (blue ) and five (red ) iterations. rue coefficient by colun: left r Q, iddle r L, right r J. Reduced odel size fro top row to botto row = 3, 4, 5, 6. Noise levels are fro table 2. he relative error is printed at the botto of the plots. 23

24 Inverse Probles 30 (204) 250 and the piecewise constant, for x < 0.2, rtrue () x = rj (): x = 2, for 0.2 x 0.6,.5, for x > 0.6. (4.9) he results are displayed in figure 6, for various reduced odel sizes and levels of noise, as listed in table 2. Each reconstruction uses its own realization of noise. We use five Gauss Newton iterations n = 5, and we display the solution both after one iteration and after all GN five. he initial guess is r () () x. It is far fro the true resistivity r true () x, and yet the inversion procedure converges quicly. he features of true r () x are captured well after the first iteration, but there are soe spurious oscillations, corresponding to the peas of the sensitivity functions. A few ore iterations of the inversion algorith reove these oscillations and iprove the quality of the estiate of the resistivity. he relative error (4.6) is indicated in each plot in figure 6. It is sall, of a few percent in all cases. We also observe in figure 6 that the inversion ethod regularized with the nonlinear weight (2.49) perfors well for the piecewise constant resistivity r J. Without the regularization, the estiates have Gibbs-lie oscillations near the discontinuities of true r () x. hese oscillations are suppressed by the weighted discrete H regularization. In figure 7 we are coparing our inversion procedure to an inversion approach lie in [] that fits the poles and residues ( θ, c) = instead of the continued fraction coefficients. he coparison is done for the case of piecewise constant resistivity of higher contrast, for x < 0.2, rtrue () x = rh (): x = 5, for 0.2 x 0.6, 3, for x > 0.6. (4.0) A higher contrast case is chosen since for the low contrast the difference in perforance between c and is less pronounced. θ As expected, the reconstructions are better when we use the apping. In fact, the algorith based on c diverges for = 4 and = 6. hus, for = 4, 6 we plot the first and θ second iterates only. he inversion based on the apping converges in all three cases and aintains the relative error well below 0% for = 4, 5 and around % for = 6. he reconstruction plots in figure 7 are copliented with the plots of the relative error versus the Gauss Newton iteration nuber p. Note that even when the iteration with c converges θ ( = 5) the reconstruction with has saller error (8.5% versus 8.2%). Aside fro providing a solution of higher quality our ethod is also ore coputationally efficient since it does not require the solution of the forward proble (2.) in tie. While constructing the orthonoral basis for the Krylov subspace (2.6) requires a few linear solves with the shifted atrix A, the nuber of such solves is sall and thus it is cheaper than the tie stepping for (2.). For exaple, the explicit tie stepping to generate the data d for the nuerical experients above taes 275 s, whereas all the five Gauss Newton iterations of our inversion algorith taes less than a second on the sae achine Nuerical experients in two diensions We consider a 2D exaple in a rectangular doain Ω = [0, 3] [0, ]. he fine grid to siulate the data has the diension nodes, while the coarse grid used in inversion is nodes. We use N d = 8 sources/receivers with disoint supports uniforly distributed on the accessible boundary interval A = {( x, x2) x (, 2), x2 = 0}. For each 24

25 Inverse Probles 30 (204) 250 Figure 7. Coparison between two preconditioners for high contrast piecewise constant resistivity r H (x) (blac solid line) after one (blue and ) and n GN (red and ) iterations. op row: reconstructions using ( ngn = 0). Middle row: reconstructions using c θ ( ngn = 2 for = 4, 6 and ngn = 0 for = 5). Botto row: relative error versus the iteration nuber p for reconstructions with (solid line with ) and c θ (dashed line with ). he relative error is printed at the botto of the reconstruction plots. diagonal entry of the easured data atrix y (t), =,, Nd a reduced order odel with = 5 is constructed. As entioned in section 3, the interpolation node s for the atching conditions (3.) is chosen so that the sensitivity waves reach the boundary without reflecting fro it. his is shown in figure 8 where the sensitivities for one particular source/receiver are plotted. For this Ω the interpolation node that gives the desired behavior is s = 60. We cannot tae a saller s (4, 4) since the sensitivity function for κ 5 already touches the boundary x2 =. On the other 25

26 Inverse Probles 30 (204) 250 Figure 8. Sensitivity functions (rows of the Jacobian indexed by l) for the 2D unifor ediu r ( x) in the rectangular doain Ω = [0, 3] [0, ]. Source/ receiver index is = 4 out of a total of N d = 8 (id-points of sources/receivers ared as blac ), = 5. hand, increasing s will shrin the region covered by the sensitivity functions and thus will reduce the resolution away fro A. We solve the optiization proble (3.0) for the 2D edia using a regularized preconditioned Gauss Newton inversion algorith fro section 2.8 adapted for the obective functional of (3.0). In two diensions the preconditioner appears to be even ore efficient with high quality reconstructions obtained after a single iteration. Subsequent iterations iprove the reconstruction arginally, so in figure 9 we show the solutions after a single Gauss Newton iteration starting fro a unifor initial guess r () x. All three exaples in figure 9 are piecewise constant so the inversion is regularized with a discrete H seinor (W = I ). In the first two exaples there are two rectangular inclusions in each with the contrast fro in r( x) = 0.66 to ax r( x) =.5 on a unit bacground. he x Ω inclusions touch each other at a corner and a side respectively. his deonstrates that the ethod handles well the sharp interfaces. In the third exaple there is a single tilted inclusion x Ω 26

27 Inverse Probles 30 (204) 250 Figure 9. Reconstructions in 2D in the rectangular doain Ω = [0, 3] [0, ]. Left colun: true coefficient r(x); right colun: reconstruction after a single Gauss Newton iteration. Mid-points of sources/receivers are ared with blac, =,, Nd, N d = 8. of contrast 2 on a unit bacground. It is used to show the gradual loss of resolution away fro A. All three exaples are narrow aperture, eaning that the horizontal extent of the inclusions is equal to the width of A. Overall the reconstruction quality is good with the contrast captured fully by the first Gauss Newton iteration. One can iterate further to iprove the reconstruction, but an adaptive choice of the step length α ( p) is required for convergence. 5. Suary We introduced a nuerical inversion algorith for linear parabolic partial differential equations. he proble arises in the application of controlled source electroagnetic inversion, where the unnown is the subsurface electrical resistivity r(x) in the earth. We study the inversion ethod in one and 2D edia, but extensions to three diensions are possible. o otivate the inversion algorith we place the inverse proble in a odel reduction fraewor. We seidiscretize in x the parabolic partial differential equation on a grid with N points, and obtain a dynaical syste with transfer function Y (; s r), the Laplace transfor of the tie easureents. In two diensions the transfer function is atrix valued, and we construct reduced odels separately, for each entry on its diagonal. Each odel reduction construction is as in the D case. he reduced odels are dynaical systes of uch saller size N, with transfer function Y () s Y(; s r). Because Y (s) is a rational function of s, we solve a rational approxiation proble. We study various such approxiants to deterine which are best suited for inversion. We end up with a ultipoint Padé approxiant Y (s), which interpolates Y and its first derivatives at nodes distributed geoetrically in +. he inversion algorith is a Gauss Newton iteration for an optiization proble preconditioned with nonlinear appings and. hese appings are the essential ingredients in the inversion. 27

Block designs and statistics

Block designs and statistics Bloc designs and statistics Notes for Math 447 May 3, 2011 The ain paraeters of a bloc design are nuber of varieties v, bloc size, nuber of blocs b. A design is built on a set of v eleents. Each eleent

More information

Feature Extraction Techniques

Feature Extraction Techniques Feature Extraction Techniques Unsupervised Learning II Feature Extraction Unsupervised ethods can also be used to find features which can be useful for categorization. There are unsupervised ethods that

More information

Sharp Time Data Tradeoffs for Linear Inverse Problems

Sharp Time Data Tradeoffs for Linear Inverse Problems Sharp Tie Data Tradeoffs for Linear Inverse Probles Saet Oyak Benjain Recht Mahdi Soltanolkotabi January 016 Abstract In this paper we characterize sharp tie-data tradeoffs for optiization probles used

More information

Ch 12: Variations on Backpropagation

Ch 12: Variations on Backpropagation Ch 2: Variations on Backpropagation The basic backpropagation algorith is too slow for ost practical applications. It ay take days or weeks of coputer tie. We deonstrate why the backpropagation algorith

More information

ON THE TWO-LEVEL PRECONDITIONING IN LEAST SQUARES METHOD

ON THE TWO-LEVEL PRECONDITIONING IN LEAST SQUARES METHOD PROCEEDINGS OF THE YEREVAN STATE UNIVERSITY Physical and Matheatical Sciences 04,, p. 7 5 ON THE TWO-LEVEL PRECONDITIONING IN LEAST SQUARES METHOD M a t h e a t i c s Yu. A. HAKOPIAN, R. Z. HOVHANNISYAN

More information

Chapter 6 1-D Continuous Groups

Chapter 6 1-D Continuous Groups Chapter 6 1-D Continuous Groups Continuous groups consist of group eleents labelled by one or ore continuous variables, say a 1, a 2,, a r, where each variable has a well- defined range. This chapter explores:

More information

Support Vector Machine Classification of Uncertain and Imbalanced data using Robust Optimization

Support Vector Machine Classification of Uncertain and Imbalanced data using Robust Optimization Recent Researches in Coputer Science Support Vector Machine Classification of Uncertain and Ibalanced data using Robust Optiization RAGHAV PAT, THEODORE B. TRAFALIS, KASH BARKER School of Industrial Engineering

More information

Kernel Methods and Support Vector Machines

Kernel Methods and Support Vector Machines Intelligent Systes: Reasoning and Recognition Jaes L. Crowley ENSIAG 2 / osig 1 Second Seester 2012/2013 Lesson 20 2 ay 2013 Kernel ethods and Support Vector achines Contents Kernel Functions...2 Quadratic

More information

EE5900 Spring Lecture 4 IC interconnect modeling methods Zhuo Feng

EE5900 Spring Lecture 4 IC interconnect modeling methods Zhuo Feng EE59 Spring Parallel LSI AD Algoriths Lecture I interconnect odeling ethods Zhuo Feng. Z. Feng MTU EE59 So far we ve considered only tie doain analyses We ll soon see that it is soeties preferable to odel

More information

Data-Driven Imaging in Anisotropic Media

Data-Driven Imaging in Anisotropic Media 18 th World Conference on Non destructive Testing, 16- April 1, Durban, South Africa Data-Driven Iaging in Anisotropic Media Arno VOLKER 1 and Alan HUNTER 1 TNO Stieltjesweg 1, 6 AD, Delft, The Netherlands

More information

Model Fitting. CURM Background Material, Fall 2014 Dr. Doreen De Leon

Model Fitting. CURM Background Material, Fall 2014 Dr. Doreen De Leon Model Fitting CURM Background Material, Fall 014 Dr. Doreen De Leon 1 Introduction Given a set of data points, we often want to fit a selected odel or type to the data (e.g., we suspect an exponential

More information

A note on the multiplication of sparse matrices

A note on the multiplication of sparse matrices Cent. Eur. J. Cop. Sci. 41) 2014 1-11 DOI: 10.2478/s13537-014-0201-x Central European Journal of Coputer Science A note on the ultiplication of sparse atrices Research Article Keivan Borna 12, Sohrab Aboozarkhani

More information

An Improved Particle Filter with Applications in Ballistic Target Tracking

An Improved Particle Filter with Applications in Ballistic Target Tracking Sensors & ransducers Vol. 72 Issue 6 June 204 pp. 96-20 Sensors & ransducers 204 by IFSA Publishing S. L. http://www.sensorsportal.co An Iproved Particle Filter with Applications in Ballistic arget racing

More information

Hybrid System Identification: An SDP Approach

Hybrid System Identification: An SDP Approach 49th IEEE Conference on Decision and Control Deceber 15-17, 2010 Hilton Atlanta Hotel, Atlanta, GA, USA Hybrid Syste Identification: An SDP Approach C Feng, C M Lagoa, N Ozay and M Sznaier Abstract The

More information

Physics 215 Winter The Density Matrix

Physics 215 Winter The Density Matrix Physics 215 Winter 2018 The Density Matrix The quantu space of states is a Hilbert space H. Any state vector ψ H is a pure state. Since any linear cobination of eleents of H are also an eleent of H, it

More information

Solving initial value problems by residual power series method

Solving initial value problems by residual power series method Theoretical Matheatics & Applications, vol.3, no.1, 13, 199-1 ISSN: 179-9687 (print), 179-979 (online) Scienpress Ltd, 13 Solving initial value probles by residual power series ethod Mohaed H. Al-Sadi

More information

REDUCTION OF FINITE ELEMENT MODELS BY PARAMETER IDENTIFICATION

REDUCTION OF FINITE ELEMENT MODELS BY PARAMETER IDENTIFICATION ISSN 139 14X INFORMATION TECHNOLOGY AND CONTROL, 008, Vol.37, No.3 REDUCTION OF FINITE ELEMENT MODELS BY PARAMETER IDENTIFICATION Riantas Barauskas, Vidantas Riavičius Departent of Syste Analysis, Kaunas

More information

Intelligent Systems: Reasoning and Recognition. Perceptrons and Support Vector Machines

Intelligent Systems: Reasoning and Recognition. Perceptrons and Support Vector Machines Intelligent Systes: Reasoning and Recognition Jaes L. Crowley osig 1 Winter Seester 2018 Lesson 6 27 February 2018 Outline Perceptrons and Support Vector achines Notation...2 Linear odels...3 Lines, Planes

More information

On Constant Power Water-filling

On Constant Power Water-filling On Constant Power Water-filling Wei Yu and John M. Cioffi Electrical Engineering Departent Stanford University, Stanford, CA94305, U.S.A. eails: {weiyu,cioffi}@stanford.edu Abstract This paper derives

More information

Proc. of the IEEE/OES Seventh Working Conference on Current Measurement Technology UNCERTAINTIES IN SEASONDE CURRENT VELOCITIES

Proc. of the IEEE/OES Seventh Working Conference on Current Measurement Technology UNCERTAINTIES IN SEASONDE CURRENT VELOCITIES Proc. of the IEEE/OES Seventh Working Conference on Current Measureent Technology UNCERTAINTIES IN SEASONDE CURRENT VELOCITIES Belinda Lipa Codar Ocean Sensors 15 La Sandra Way, Portola Valley, CA 98 blipa@pogo.co

More information

A Simple Regression Problem

A Simple Regression Problem A Siple Regression Proble R. M. Castro March 23, 2 In this brief note a siple regression proble will be introduced, illustrating clearly the bias-variance tradeoff. Let Y i f(x i ) + W i, i,..., n, where

More information

Multi-Scale/Multi-Resolution: Wavelet Transform

Multi-Scale/Multi-Resolution: Wavelet Transform Multi-Scale/Multi-Resolution: Wavelet Transfor Proble with Fourier Fourier analysis -- breaks down a signal into constituent sinusoids of different frequencies. A serious drawback in transforing to the

More information

RESTARTED FULL ORTHOGONALIZATION METHOD FOR SHIFTED LINEAR SYSTEMS

RESTARTED FULL ORTHOGONALIZATION METHOD FOR SHIFTED LINEAR SYSTEMS BIT Nuerical Matheatics 43: 459 466, 2003. 2003 Kluwer Acadeic Publishers. Printed in The Netherlands 459 RESTARTED FULL ORTHOGONALIZATION METHOD FOR SHIFTED LINEAR SYSTEMS V. SIMONCINI Dipartiento di

More information

Numerical Studies of a Nonlinear Heat Equation with Square Root Reaction Term

Numerical Studies of a Nonlinear Heat Equation with Square Root Reaction Term Nuerical Studies of a Nonlinear Heat Equation with Square Root Reaction Ter Ron Bucire, 1 Karl McMurtry, 1 Ronald E. Micens 2 1 Matheatics Departent, Occidental College, Los Angeles, California 90041 2

More information

CS Lecture 13. More Maximum Likelihood

CS Lecture 13. More Maximum Likelihood CS 6347 Lecture 13 More Maxiu Likelihood Recap Last tie: Introduction to axiu likelihood estiation MLE for Bayesian networks Optial CPTs correspond to epirical counts Today: MLE for CRFs 2 Maxiu Likelihood

More information

Supplementary Material for Fast and Provable Algorithms for Spectrally Sparse Signal Reconstruction via Low-Rank Hankel Matrix Completion

Supplementary Material for Fast and Provable Algorithms for Spectrally Sparse Signal Reconstruction via Low-Rank Hankel Matrix Completion Suppleentary Material for Fast and Provable Algoriths for Spectrally Sparse Signal Reconstruction via Low-Ran Hanel Matrix Copletion Jian-Feng Cai Tianing Wang Ke Wei March 1, 017 Abstract We establish

More information

Least Squares Fitting of Data

Least Squares Fitting of Data Least Squares Fitting of Data David Eberly, Geoetric Tools, Redond WA 98052 https://www.geoetrictools.co/ This work is licensed under the Creative Coons Attribution 4.0 International License. To view a

More information

Topic 5a Introduction to Curve Fitting & Linear Regression

Topic 5a Introduction to Curve Fitting & Linear Regression /7/08 Course Instructor Dr. Rayond C. Rup Oice: A 337 Phone: (95) 747 6958 E ail: rcrup@utep.edu opic 5a Introduction to Curve Fitting & Linear Regression EE 4386/530 Coputational ethods in EE Outline

More information

Chaotic Coupled Map Lattices

Chaotic Coupled Map Lattices Chaotic Coupled Map Lattices Author: Dustin Keys Advisors: Dr. Robert Indik, Dr. Kevin Lin 1 Introduction When a syste of chaotic aps is coupled in a way that allows the to share inforation about each

More information

arxiv: v1 [math.na] 10 Oct 2016

arxiv: v1 [math.na] 10 Oct 2016 GREEDY GAUSS-NEWTON ALGORITHM FOR FINDING SPARSE SOLUTIONS TO NONLINEAR UNDERDETERMINED SYSTEMS OF EQUATIONS MÅRTEN GULLIKSSON AND ANNA OLEYNIK arxiv:6.395v [ath.na] Oct 26 Abstract. We consider the proble

More information

Quantum algorithms (CO 781, Winter 2008) Prof. Andrew Childs, University of Waterloo LECTURE 15: Unstructured search and spatial search

Quantum algorithms (CO 781, Winter 2008) Prof. Andrew Childs, University of Waterloo LECTURE 15: Unstructured search and spatial search Quantu algoriths (CO 781, Winter 2008) Prof Andrew Childs, University of Waterloo LECTURE 15: Unstructured search and spatial search ow we begin to discuss applications of quantu walks to search algoriths

More information

Distributed Subgradient Methods for Multi-agent Optimization

Distributed Subgradient Methods for Multi-agent Optimization 1 Distributed Subgradient Methods for Multi-agent Optiization Angelia Nedić and Asuan Ozdaglar October 29, 2007 Abstract We study a distributed coputation odel for optiizing a su of convex objective functions

More information

Recovering Data from Underdetermined Quadratic Measurements (CS 229a Project: Final Writeup)

Recovering Data from Underdetermined Quadratic Measurements (CS 229a Project: Final Writeup) Recovering Data fro Underdeterined Quadratic Measureents (CS 229a Project: Final Writeup) Mahdi Soltanolkotabi Deceber 16, 2011 1 Introduction Data that arises fro engineering applications often contains

More information

lecture 36: Linear Multistep Mehods: Zero Stability

lecture 36: Linear Multistep Mehods: Zero Stability 95 lecture 36: Linear Multistep Mehods: Zero Stability 5.6 Linear ultistep ethods: zero stability Does consistency iply convergence for linear ultistep ethods? This is always the case for one-step ethods,

More information

Multivariate Methods. Matlab Example. Principal Components Analysis -- PCA

Multivariate Methods. Matlab Example. Principal Components Analysis -- PCA Multivariate Methos Xiaoun Qi Principal Coponents Analysis -- PCA he PCA etho generates a new set of variables, calle principal coponents Each principal coponent is a linear cobination of the original

More information

Generalized AOR Method for Solving System of Linear Equations. Davod Khojasteh Salkuyeh. Department of Mathematics, University of Mohaghegh Ardabili,

Generalized AOR Method for Solving System of Linear Equations. Davod Khojasteh Salkuyeh. Department of Mathematics, University of Mohaghegh Ardabili, Australian Journal of Basic and Applied Sciences, 5(3): 35-358, 20 ISSN 99-878 Generalized AOR Method for Solving Syste of Linear Equations Davod Khojasteh Salkuyeh Departent of Matheatics, University

More information

Finding Rightmost Eigenvalues of Large Sparse. Non-symmetric Parameterized Eigenvalue Problems. Abstract. Introduction

Finding Rightmost Eigenvalues of Large Sparse. Non-symmetric Parameterized Eigenvalue Problems. Abstract. Introduction Finding Rightost Eigenvalues of Large Sparse Non-syetric Paraeterized Eigenvalue Probles Applied Matheatics and Scientific Coputation Progra Departent of Matheatics University of Maryland, College Par,

More information

13.2 Fully Polynomial Randomized Approximation Scheme for Permanent of Random 0-1 Matrices

13.2 Fully Polynomial Randomized Approximation Scheme for Permanent of Random 0-1 Matrices CS71 Randoness & Coputation Spring 018 Instructor: Alistair Sinclair Lecture 13: February 7 Disclaier: These notes have not been subjected to the usual scrutiny accorded to foral publications. They ay

More information

Using a De-Convolution Window for Operating Modal Analysis

Using a De-Convolution Window for Operating Modal Analysis Using a De-Convolution Window for Operating Modal Analysis Brian Schwarz Vibrant Technology, Inc. Scotts Valley, CA Mark Richardson Vibrant Technology, Inc. Scotts Valley, CA Abstract Operating Modal Analysis

More information

INNER CONSTRAINTS FOR A 3-D SURVEY NETWORK

INNER CONSTRAINTS FOR A 3-D SURVEY NETWORK eospatial Science INNER CONSRAINS FOR A 3-D SURVEY NEWORK hese notes follow closely the developent of inner constraint equations by Dr Willie an, Departent of Building, School of Design and Environent,

More information

Extension of CSRSM for the Parametric Study of the Face Stability of Pressurized Tunnels

Extension of CSRSM for the Parametric Study of the Face Stability of Pressurized Tunnels Extension of CSRSM for the Paraetric Study of the Face Stability of Pressurized Tunnels Guilhe Mollon 1, Daniel Dias 2, and Abdul-Haid Soubra 3, M.ASCE 1 LGCIE, INSA Lyon, Université de Lyon, Doaine scientifique

More information

Inspection; structural health monitoring; reliability; Bayesian analysis; updating; decision analysis; value of information

Inspection; structural health monitoring; reliability; Bayesian analysis; updating; decision analysis; value of information Cite as: Straub D. (2014). Value of inforation analysis with structural reliability ethods. Structural Safety, 49: 75-86. Value of Inforation Analysis with Structural Reliability Methods Daniel Straub

More information

A remark on a success rate model for DPA and CPA

A remark on a success rate model for DPA and CPA A reark on a success rate odel for DPA and CPA A. Wieers, BSI Version 0.5 andreas.wieers@bsi.bund.de Septeber 5, 2018 Abstract The success rate is the ost coon evaluation etric for easuring the perforance

More information

This model assumes that the probability of a gap has size i is proportional to 1/i. i.e., i log m e. j=1. E[gap size] = i P r(i) = N f t.

This model assumes that the probability of a gap has size i is proportional to 1/i. i.e., i log m e. j=1. E[gap size] = i P r(i) = N f t. CS 493: Algoriths for Massive Data Sets Feb 2, 2002 Local Models, Bloo Filter Scribe: Qin Lv Local Models In global odels, every inverted file entry is copressed with the sae odel. This work wells when

More information

Ştefan ŞTEFĂNESCU * is the minimum global value for the function h (x)

Ştefan ŞTEFĂNESCU * is the minimum global value for the function h (x) 7Applying Nelder Mead s Optiization Algorith APPLYING NELDER MEAD S OPTIMIZATION ALGORITHM FOR MULTIPLE GLOBAL MINIMA Abstract Ştefan ŞTEFĂNESCU * The iterative deterinistic optiization ethod could not

More information

RECOVERY OF A DENSITY FROM THE EIGENVALUES OF A NONHOMOGENEOUS MEMBRANE

RECOVERY OF A DENSITY FROM THE EIGENVALUES OF A NONHOMOGENEOUS MEMBRANE Proceedings of ICIPE rd International Conference on Inverse Probles in Engineering: Theory and Practice June -8, 999, Port Ludlow, Washington, USA : RECOVERY OF A DENSITY FROM THE EIGENVALUES OF A NONHOMOGENEOUS

More information

STOPPING SIMULATED PATHS EARLY

STOPPING SIMULATED PATHS EARLY Proceedings of the 2 Winter Siulation Conference B.A.Peters,J.S.Sith,D.J.Medeiros,andM.W.Rohrer,eds. STOPPING SIMULATED PATHS EARLY Paul Glasseran Graduate School of Business Colubia University New Yor,

More information

Physics 139B Solutions to Homework Set 3 Fall 2009

Physics 139B Solutions to Homework Set 3 Fall 2009 Physics 139B Solutions to Hoework Set 3 Fall 009 1. Consider a particle of ass attached to a rigid assless rod of fixed length R whose other end is fixed at the origin. The rod is free to rotate about

More information

Comparison of Stability of Selected Numerical Methods for Solving Stiff Semi- Linear Differential Equations

Comparison of Stability of Selected Numerical Methods for Solving Stiff Semi- Linear Differential Equations International Journal of Applied Science and Technology Vol. 7, No. 3, Septeber 217 Coparison of Stability of Selected Nuerical Methods for Solving Stiff Sei- Linear Differential Equations Kwaku Darkwah

More information

P016 Toward Gauss-Newton and Exact Newton Optimization for Full Waveform Inversion

P016 Toward Gauss-Newton and Exact Newton Optimization for Full Waveform Inversion P016 Toward Gauss-Newton and Exact Newton Optiization for Full Wavefor Inversion L. Métivier* ISTerre, R. Brossier ISTerre, J. Virieux ISTerre & S. Operto Géoazur SUMMARY Full Wavefor Inversion FWI applications

More information

A BLOCK MONOTONE DOMAIN DECOMPOSITION ALGORITHM FOR A NONLINEAR SINGULARLY PERTURBED PARABOLIC PROBLEM

A BLOCK MONOTONE DOMAIN DECOMPOSITION ALGORITHM FOR A NONLINEAR SINGULARLY PERTURBED PARABOLIC PROBLEM INTERNATIONAL JOURNAL OF NUMERICAL ANALYSIS AND MODELING Volue 3, Nuber 2, Pages 211 231 c 2006 Institute for Scientific Coputing and Inforation A BLOCK MONOTONE DOMAIN DECOMPOSITION ALGORITHM FOR A NONLINEAR

More information

HIGH RESOLUTION NEAR-FIELD MULTIPLE TARGET DETECTION AND LOCALIZATION USING SUPPORT VECTOR MACHINES

HIGH RESOLUTION NEAR-FIELD MULTIPLE TARGET DETECTION AND LOCALIZATION USING SUPPORT VECTOR MACHINES ICONIC 2007 St. Louis, O, USA June 27-29, 2007 HIGH RESOLUTION NEAR-FIELD ULTIPLE TARGET DETECTION AND LOCALIZATION USING SUPPORT VECTOR ACHINES A. Randazzo,. A. Abou-Khousa 2,.Pastorino, and R. Zoughi

More information

Principal Components Analysis

Principal Components Analysis Principal Coponents Analysis Cheng Li, Bingyu Wang Noveber 3, 204 What s PCA Principal coponent analysis (PCA) is a statistical procedure that uses an orthogonal transforation to convert a set of observations

More information

Experimental Design For Model Discrimination And Precise Parameter Estimation In WDS Analysis

Experimental Design For Model Discrimination And Precise Parameter Estimation In WDS Analysis City University of New York (CUNY) CUNY Acadeic Works International Conference on Hydroinforatics 8-1-2014 Experiental Design For Model Discriination And Precise Paraeter Estiation In WDS Analysis Giovanna

More information

Ph 20.3 Numerical Solution of Ordinary Differential Equations

Ph 20.3 Numerical Solution of Ordinary Differential Equations Ph 20.3 Nuerical Solution of Ordinary Differential Equations Due: Week 5 -v20170314- This Assignent So far, your assignents have tried to failiarize you with the hardware and software in the Physics Coputing

More information

Scattering and bound states

Scattering and bound states Chapter Scattering and bound states In this chapter we give a review of quantu-echanical scattering theory. We focus on the relation between the scattering aplitude of a potential and its bound states

More information

Optical Properties of Plasmas of High-Z Elements

Optical Properties of Plasmas of High-Z Elements Forschungszentru Karlsruhe Techni und Uwelt Wissenschaftlishe Berichte FZK Optical Properties of Plasas of High-Z Eleents V.Tolach 1, G.Miloshevsy 1, H.Würz Project Kernfusion 1 Heat and Mass Transfer

More information

COS 424: Interacting with Data. Written Exercises

COS 424: Interacting with Data. Written Exercises COS 424: Interacting with Data Hoework #4 Spring 2007 Regression Due: Wednesday, April 18 Written Exercises See the course website for iportant inforation about collaboration and late policies, as well

More information

The Methods of Solution for Constrained Nonlinear Programming

The Methods of Solution for Constrained Nonlinear Programming Research Inventy: International Journal Of Engineering And Science Vol.4, Issue 3(March 2014), PP 01-06 Issn (e): 2278-4721, Issn (p):2319-6483, www.researchinventy.co The Methods of Solution for Constrained

More information

On the approximation of Feynman-Kac path integrals

On the approximation of Feynman-Kac path integrals On the approxiation of Feynan-Kac path integrals Stephen D. Bond, Brian B. Laird, and Benedict J. Leikuhler University of California, San Diego, Departents of Matheatics and Cheistry, La Jolla, CA 993,

More information

A new type of lower bound for the largest eigenvalue of a symmetric matrix

A new type of lower bound for the largest eigenvalue of a symmetric matrix Linear Algebra and its Applications 47 7 9 9 www.elsevier.co/locate/laa A new type of lower bound for the largest eigenvalue of a syetric atrix Piet Van Mieghe Delft University of Technology, P.O. Box

More information

Stability Analysis of the Matrix-Free Linearly Implicit 2 Euler Method 3 UNCORRECTED PROOF

Stability Analysis of the Matrix-Free Linearly Implicit 2 Euler Method 3 UNCORRECTED PROOF 1 Stability Analysis of the Matrix-Free Linearly Iplicit 2 Euler Method 3 Adrian Sandu 1 andaikst-cyr 2 4 1 Coputational Science Laboratory, Departent of Coputer Science, Virginia 5 Polytechnic Institute,

More information

SOLUTIONS. PROBLEM 1. The Hamiltonian of the particle in the gravitational field can be written as, x 0, + U(x), U(x) =

SOLUTIONS. PROBLEM 1. The Hamiltonian of the particle in the gravitational field can be written as, x 0, + U(x), U(x) = SOLUTIONS PROBLEM 1. The Hailtonian of the particle in the gravitational field can be written as { Ĥ = ˆp2, x 0, + U(x), U(x) = (1) 2 gx, x > 0. The siplest estiate coes fro the uncertainty relation. If

More information

DEPARTMENT OF ECONOMETRICS AND BUSINESS STATISTICS

DEPARTMENT OF ECONOMETRICS AND BUSINESS STATISTICS ISSN 1440-771X AUSTRALIA DEPARTMENT OF ECONOMETRICS AND BUSINESS STATISTICS An Iproved Method for Bandwidth Selection When Estiating ROC Curves Peter G Hall and Rob J Hyndan Working Paper 11/00 An iproved

More information

Course Notes for EE227C (Spring 2018): Convex Optimization and Approximation

Course Notes for EE227C (Spring 2018): Convex Optimization and Approximation Course Notes for EE227C (Spring 2018): Convex Optiization and Approxiation Instructor: Moritz Hardt Eail: hardt+ee227c@berkeley.edu Graduate Instructor: Max Sichowitz Eail: sichow+ee227c@berkeley.edu October

More information

Introduction to Machine Learning. Recitation 11

Introduction to Machine Learning. Recitation 11 Introduction to Machine Learning Lecturer: Regev Schweiger Recitation Fall Seester Scribe: Regev Schweiger. Kernel Ridge Regression We now take on the task of kernel-izing ridge regression. Let x,...,

More information

Testing equality of variances for multiple univariate normal populations

Testing equality of variances for multiple univariate normal populations University of Wollongong Research Online Centre for Statistical & Survey Methodology Working Paper Series Faculty of Engineering and Inforation Sciences 0 esting equality of variances for ultiple univariate

More information

The Transactional Nature of Quantum Information

The Transactional Nature of Quantum Information The Transactional Nature of Quantu Inforation Subhash Kak Departent of Coputer Science Oklahoa State University Stillwater, OK 7478 ABSTRACT Inforation, in its counications sense, is a transactional property.

More information

Reducing Vibration and Providing Robustness with Multi-Input Shapers

Reducing Vibration and Providing Robustness with Multi-Input Shapers 29 Aerican Control Conference Hyatt Regency Riverfront, St. Louis, MO, USA June -2, 29 WeA6.4 Reducing Vibration and Providing Robustness with Multi-Input Shapers Joshua Vaughan and Willia Singhose Abstract

More information

NUMERICAL MODELLING OF THE TYRE/ROAD CONTACT

NUMERICAL MODELLING OF THE TYRE/ROAD CONTACT NUMERICAL MODELLING OF THE TYRE/ROAD CONTACT PACS REFERENCE: 43.5.LJ Krister Larsson Departent of Applied Acoustics Chalers University of Technology SE-412 96 Sweden Tel: +46 ()31 772 22 Fax: +46 ()31

More information

Non-Parametric Non-Line-of-Sight Identification 1

Non-Parametric Non-Line-of-Sight Identification 1 Non-Paraetric Non-Line-of-Sight Identification Sinan Gezici, Hisashi Kobayashi and H. Vincent Poor Departent of Electrical Engineering School of Engineering and Applied Science Princeton University, Princeton,

More information

Qualitative Modelling of Time Series Using Self-Organizing Maps: Application to Animal Science

Qualitative Modelling of Time Series Using Self-Organizing Maps: Application to Animal Science Proceedings of the 6th WSEAS International Conference on Applied Coputer Science, Tenerife, Canary Islands, Spain, Deceber 16-18, 2006 183 Qualitative Modelling of Tie Series Using Self-Organizing Maps:

More information

Constrained Consensus and Optimization in Multi-Agent Networks arxiv: v2 [math.oc] 17 Dec 2008

Constrained Consensus and Optimization in Multi-Agent Networks arxiv: v2 [math.oc] 17 Dec 2008 LIDS Report 2779 1 Constrained Consensus and Optiization in Multi-Agent Networks arxiv:0802.3922v2 [ath.oc] 17 Dec 2008 Angelia Nedić, Asuan Ozdaglar, and Pablo A. Parrilo February 15, 2013 Abstract We

More information

E0 370 Statistical Learning Theory Lecture 6 (Aug 30, 2011) Margin Analysis

E0 370 Statistical Learning Theory Lecture 6 (Aug 30, 2011) Margin Analysis E0 370 tatistical Learning Theory Lecture 6 (Aug 30, 20) Margin Analysis Lecturer: hivani Agarwal cribe: Narasihan R Introduction In the last few lectures we have seen how to obtain high confidence bounds

More information

ASSUME a source over an alphabet size m, from which a sequence of n independent samples are drawn. The classical

ASSUME a source over an alphabet size m, from which a sequence of n independent samples are drawn. The classical IEEE TRANSACTIONS ON INFORMATION THEORY Large Alphabet Source Coding using Independent Coponent Analysis Aichai Painsky, Meber, IEEE, Saharon Rosset and Meir Feder, Fellow, IEEE arxiv:67.7v [cs.it] Jul

More information

Donald Fussell. October 28, Computer Science Department The University of Texas at Austin. Point Masses and Force Fields.

Donald Fussell. October 28, Computer Science Department The University of Texas at Austin. Point Masses and Force Fields. s Vector Moving s and Coputer Science Departent The University of Texas at Austin October 28, 2014 s Vector Moving s Siple classical dynaics - point asses oved by forces Point asses can odel particles

More information

arxiv: v1 [cs.ds] 3 Feb 2014

arxiv: v1 [cs.ds] 3 Feb 2014 arxiv:40.043v [cs.ds] 3 Feb 04 A Bound on the Expected Optiality of Rando Feasible Solutions to Cobinatorial Optiization Probles Evan A. Sultani The Johns Hopins University APL evan@sultani.co http://www.sultani.co/

More information

ANALYSIS OF HALL-EFFECT THRUSTERS AND ION ENGINES FOR EARTH-TO-MOON TRANSFER

ANALYSIS OF HALL-EFFECT THRUSTERS AND ION ENGINES FOR EARTH-TO-MOON TRANSFER IEPC 003-0034 ANALYSIS OF HALL-EFFECT THRUSTERS AND ION ENGINES FOR EARTH-TO-MOON TRANSFER A. Bober, M. Guelan Asher Space Research Institute, Technion-Israel Institute of Technology, 3000 Haifa, Israel

More information

Fairness via priority scheduling

Fairness via priority scheduling Fairness via priority scheduling Veeraruna Kavitha, N Heachandra and Debayan Das IEOR, IIT Bobay, Mubai, 400076, India vavitha,nh,debayan}@iitbacin Abstract In the context of ulti-agent resource allocation

More information

Structured signal recovery from quadratic measurements: Breaking sample complexity barriers via nonconvex optimization

Structured signal recovery from quadratic measurements: Breaking sample complexity barriers via nonconvex optimization Structured signal recovery fro quadratic easureents: Breaking saple coplexity barriers via nonconvex optiization Mahdi Soltanolkotabi Ming Hsieh Departent of Electrical Engineering University of Southern

More information

e-companion ONLY AVAILABLE IN ELECTRONIC FORM

e-companion ONLY AVAILABLE IN ELECTRONIC FORM OPERATIONS RESEARCH doi 10.1287/opre.1070.0427ec pp. ec1 ec5 e-copanion ONLY AVAILABLE IN ELECTRONIC FORM infors 07 INFORMS Electronic Copanion A Learning Approach for Interactive Marketing to a Custoer

More information

Lecture 13 Eigenvalue Problems

Lecture 13 Eigenvalue Problems Lecture 13 Eigenvalue Probles MIT 18.335J / 6.337J Introduction to Nuerical Methods Per-Olof Persson October 24, 2006 1 The Eigenvalue Decoposition Eigenvalue proble for atrix A: Ax = λx with eigenvalues

More information

Probability Distributions

Probability Distributions Probability Distributions In Chapter, we ephasized the central role played by probability theory in the solution of pattern recognition probles. We turn now to an exploration of soe particular exaples

More information

Multi-Dimensional Hegselmann-Krause Dynamics

Multi-Dimensional Hegselmann-Krause Dynamics Multi-Diensional Hegselann-Krause Dynaics A. Nedić Industrial and Enterprise Systes Engineering Dept. University of Illinois Urbana, IL 680 angelia@illinois.edu B. Touri Coordinated Science Laboratory

More information

2 Q 10. Likewise, in case of multiple particles, the corresponding density in 2 must be averaged over all

2 Q 10. Likewise, in case of multiple particles, the corresponding density in 2 must be averaged over all Lecture 6 Introduction to kinetic theory of plasa waves Introduction to kinetic theory So far we have been odeling plasa dynaics using fluid equations. The assuption has been that the pressure can be either

More information

Spine Fin Efficiency A Three Sided Pyramidal Fin of Equilateral Triangular Cross-Sectional Area

Spine Fin Efficiency A Three Sided Pyramidal Fin of Equilateral Triangular Cross-Sectional Area Proceedings of the 006 WSEAS/IASME International Conference on Heat and Mass Transfer, Miai, Florida, USA, January 18-0, 006 (pp13-18) Spine Fin Efficiency A Three Sided Pyraidal Fin of Equilateral Triangular

More information

Randomized Accuracy-Aware Program Transformations For Efficient Approximate Computations

Randomized Accuracy-Aware Program Transformations For Efficient Approximate Computations Randoized Accuracy-Aware Progra Transforations For Efficient Approxiate Coputations Zeyuan Allen Zhu Sasa Misailovic Jonathan A. Kelner Martin Rinard MIT CSAIL zeyuan@csail.it.edu isailo@it.edu kelner@it.edu

More information

The linear sampling method and the MUSIC algorithm

The linear sampling method and the MUSIC algorithm INSTITUTE OF PHYSICS PUBLISHING INVERSE PROBLEMS Inverse Probles 17 (2001) 591 595 www.iop.org/journals/ip PII: S0266-5611(01)16989-3 The linear sapling ethod and the MUSIC algorith Margaret Cheney Departent

More information

Four-vector, Dirac spinor representation and Lorentz Transformations

Four-vector, Dirac spinor representation and Lorentz Transformations Available online at www.pelagiaresearchlibrary.co Advances in Applied Science Research, 2012, 3 (2):749-756 Four-vector, Dirac spinor representation and Lorentz Transforations S. B. Khasare 1, J. N. Rateke

More information

Pattern Recognition and Machine Learning. Artificial Neural networks

Pattern Recognition and Machine Learning. Artificial Neural networks Pattern Recognition and Machine Learning Jaes L. Crowley ENSIMAG 3 - MMIS Fall Seester 2016 Lessons 7 14 Dec 2016 Outline Artificial Neural networks Notation...2 1. Introduction...3... 3 The Artificial

More information

Weighted- 1 minimization with multiple weighting sets

Weighted- 1 minimization with multiple weighting sets Weighted- 1 iniization with ultiple weighting sets Hassan Mansour a,b and Özgür Yılaza a Matheatics Departent, University of British Colubia, Vancouver - BC, Canada; b Coputer Science Departent, University

More information

The proofs of Theorem 1-3 are along the lines of Wied and Galeano (2013).

The proofs of Theorem 1-3 are along the lines of Wied and Galeano (2013). A Appendix: Proofs The proofs of Theore 1-3 are along the lines of Wied and Galeano (2013) Proof of Theore 1 Let D[d 1, d 2 ] be the space of càdlàg functions on the interval [d 1, d 2 ] equipped with

More information

A Self-Organizing Model for Logical Regression Jerry Farlow 1 University of Maine. (1900 words)

A Self-Organizing Model for Logical Regression Jerry Farlow 1 University of Maine. (1900 words) 1 A Self-Organizing Model for Logical Regression Jerry Farlow 1 University of Maine (1900 words) Contact: Jerry Farlow Dept of Matheatics Univeristy of Maine Orono, ME 04469 Tel (07) 866-3540 Eail: farlow@ath.uaine.edu

More information

On Conditions for Linearity of Optimal Estimation

On Conditions for Linearity of Optimal Estimation On Conditions for Linearity of Optial Estiation Erah Akyol, Kuar Viswanatha and Kenneth Rose {eakyol, kuar, rose}@ece.ucsb.edu Departent of Electrical and Coputer Engineering University of California at

More information

ESTIMATING AND FORMING CONFIDENCE INTERVALS FOR EXTREMA OF RANDOM POLYNOMIALS. A Thesis. Presented to. The Faculty of the Department of Mathematics

ESTIMATING AND FORMING CONFIDENCE INTERVALS FOR EXTREMA OF RANDOM POLYNOMIALS. A Thesis. Presented to. The Faculty of the Department of Mathematics ESTIMATING AND FORMING CONFIDENCE INTERVALS FOR EXTREMA OF RANDOM POLYNOMIALS A Thesis Presented to The Faculty of the Departent of Matheatics San Jose State University In Partial Fulfillent of the Requireents

More information

PROXSCAL. Notation. W n n matrix with weights for source k. E n s matrix with raw independent variables F n p matrix with fixed coordinates

PROXSCAL. Notation. W n n matrix with weights for source k. E n s matrix with raw independent variables F n p matrix with fixed coordinates PROXSCAL PROXSCAL perfors ultidiensional scaling of proxiity data to find a leastsquares representation of the obects in a low-diensional space. Individual differences odels can be specified for ultiple

More information

CONTROL SYSTEMS, ROBOTICS, AND AUTOMATION Vol. IX Uncertainty Models For Robustness Analysis - A. Garulli, A. Tesi and A. Vicino

CONTROL SYSTEMS, ROBOTICS, AND AUTOMATION Vol. IX Uncertainty Models For Robustness Analysis - A. Garulli, A. Tesi and A. Vicino UNCERTAINTY MODELS FOR ROBUSTNESS ANALYSIS A. Garulli Dipartiento di Ingegneria dell Inforazione, Università di Siena, Italy A. Tesi Dipartiento di Sistei e Inforatica, Università di Firenze, Italy A.

More information

Supplementary Information for Design of Bending Multi-Layer Electroactive Polymer Actuators

Supplementary Information for Design of Bending Multi-Layer Electroactive Polymer Actuators Suppleentary Inforation for Design of Bending Multi-Layer Electroactive Polyer Actuators Bavani Balakrisnan, Alek Nacev, and Elisabeth Sela University of Maryland, College Park, Maryland 074 1 Analytical

More information

A Simplified Analytical Approach for Efficiency Evaluation of the Weaving Machines with Automatic Filling Repair

A Simplified Analytical Approach for Efficiency Evaluation of the Weaving Machines with Automatic Filling Repair Proceedings of the 6th SEAS International Conference on Siulation, Modelling and Optiization, Lisbon, Portugal, Septeber -4, 006 0 A Siplified Analytical Approach for Efficiency Evaluation of the eaving

More information

Stochastic Subgradient Methods

Stochastic Subgradient Methods Stochastic Subgradient Methods Lingjie Weng Yutian Chen Bren School of Inforation and Coputer Science University of California, Irvine {wengl, yutianc}@ics.uci.edu Abstract Stochastic subgradient ethods

More information