Contributors and Resources


Introduction to Numerical Methods for d-bar Problems
Jennifer Mueller, presented by Andreas Stahel
Colorado State University / Bern University of Applied Sciences, Switzerland
Lexington, Kentucky, May 17, 2014

Contributors and Resources

Contributors:
Jennifer Mueller, Colorado State University
Melody Dodd, Colorado State University

Resources:
Jennifer Mueller, Samuli Siltanen: Linear and Nonlinear Inverse Problems with Practical Applications, SIAM, 2012
Melody Dodd, Jennifer Mueller: A Real-time D-bar Algorithm for 2-D Electrical Impedance Tomography Data, 2014, to appear
K. Knudsen, J. Mueller, S. Siltanen: Numerical Solution Method for the d-bar Equation in the Plane, Journal of Computational Physics 198, 2004

Electrical Impedance Tomography (EIT)
Measure in the lab or hospital to gain knowledge about the interior of the chest, e.g. two lung sections and a heart. [Two pictures]

EIT, the equations I
The main goal: for a conductivity σ on a bounded domain Ω ⊂ R² consider the PDE
∇·(σ ∇u) = 0 in Ω ⊂ R²   (1)
We assume that the conductivity σ is strictly positive and identical to 1 in a neighborhood of the boundary ∂Ω. The current density is given by Ohm's law J = σ ∇u.

EIT, the equations II
Apply a voltage u on the boundary and measure the resulting normal current density J,
J(z) = σ(z) ∂u(z)/∂n for z ∈ ∂Ω
to obtain the Dirichlet-to-Neumann (DN) map
Λ_σ : u ↦ σ ∂u/∂n on ∂Ω   (2)
also called the voltage-to-current-density map. Standard elliptic PDE theory implies Λ_σ ∈ L(H^{1/2}(∂Ω), H^{−1/2}(∂Ω)).

EIT, the equations III
Apply a current density J on the boundary and measure the resulting voltage u. For a static situation the total current into Ω has to be zero, i.e.
∫_{∂Ω} J(s) ds = ∫_{∂Ω} σ ∂u/∂n ds = 0
We use subspaces with this property when required. This yields the Neumann-to-Dirichlet (ND) map
R_σ : σ ∂u/∂n ↦ u on ∂Ω   (3)
also called the current-density-to-voltage map. Standard elliptic PDE theory implies R_σ ∈ L(H^{−1/2}(∂Ω), H^{+1/2}(∂Ω)).

EIT, the equations IV
The ND map is smoothing and compact, while the DN map is clearly not smoothing. In the lab it is better to measure the ND map R_σ and then determine Λ_σ from it; this is based on the susceptibility to noise. We have
Λ_σ ∘ R_σ = I on H^{−1/2}(∂Ω)
R_σ ∘ Λ_σ = I on H^{+1/2}(∂Ω)
The ND and DN maps are linear with respect to the voltage u and the current density J, but nonlinear with respect to the conductivity σ. We assume throughout that the conductivity is identical to 1 in a neighborhood of the boundary ∂Ω of the bounded, smooth domain Ω ⊂ R².

EIT: the idea
The goal of EIT is to recover the conductivity σ from Λ_σ.
Good news: in 1980 Alberto Calderón proved that σ is completely determined by Λ_σ in the linearized case.
Good news: in 1996 Adrian Nachman gave a constructive proof for the general case.
Bad news: the problem is extremely ill-posed, i.e. large variations in the conductivity σ might have only a minimal influence on Λ_σ. There is a simple analytical example by Alessandrini.
Good news: the problem can be overcome by using scattering transforms, based on CGO (Complex Geometrical Optics) solutions.

CGO solutions and scattering transform I
Using the transformation
q = (Δ√σ)/√σ and ũ = √σ u
one verifies
∇·(σ ∇u) = 0 ⟺ Δũ = q ũ
with the Schrödinger potential q. We use the operators
∂ = ∂_z = ½ (∂_x − i ∂_y)
∂̄ = ∂_z̄ = ½ (∂_x + i ∂_y)

CGO solutions and scattering transform II
Introduce a complex parameter k ∈ C ≅ R². A CGO solution ψ(·, k) solves the PDE
Δψ(·, k) = q(·) ψ(·, k)
with the asymptotic condition
e^{−i k z} ψ(z, k) − 1 ∈ W^{1,p}(R²) for any 2 < p < ∞.
You may read this as a growth condition on ψ(·, k). Examine the function
e^{i k z} = exp(i (k₁ + i k₂)(x + i y)) = exp(i (k₁ x − k₂ y)) exp(−(k₂ x + k₁ y))
i.e. exponential growth in the direction −(k₂, k₁) and periodic behavior in the orthogonal direction (k₁, k₂).

CGO solutions and scattering transform III
Thus the function m(z, k) = e^{−i k z} ψ(z, k) is bounded. Use the operator identity Δ = 4 ∂ ∂̄ to verify
q(z) ψ(z, k) = Δψ(z, k) = Δ(e^{+i k z} m(z, k)) = … = e^{+i k z} (Δ + 4 i k ∂̄) m(z, k)
This implies
(Δ + 4 i k ∂̄ − q(z)) m(z, k) = 0   (4)
i.e. m is the solution of a PDE involving the operator ∂̄.

CGO solutions and scattering transform IV
If we have a fundamental solution g_k of
(−Δ − 4 i k ∂̄) g_k(z) = δ(z)
then m is a solution of a Lippmann-Schwinger type integral equation, since m − 1 ∈ W^{1,p}(R²). To verify this apply a convolution with g_k to both sides of
(−Δ − 4 i k ∂̄)(m(·, k) − 1) = −q m(·, k)
to conclude
m − 1 = −g_k ∗ (q m), i.e. m(z, k) = 1 − ∫_{R²} g_k(z − w) q(w) m(w, k) dw
This is a Fredholm integral equation of the second kind. The Faddeev Green's function g_k can be computed using the exponential-integral function Ei(z).

CGO solutions and scattering transform V
Using ψ(z, k) = e^{+i k z} m(z, k) define the scattering transform
t(k) := ∫_{R²} e^{i k̄ z̄} q(z) ψ(z, k) dx dy
     = ∫_{R²} e^{i (k z + k̄ z̄)} q(z) m(z, k) dx dy
     = ∫_{R²} e^{i 2 (k₁ x − k₂ y)} q(z) m(z, k) dx dy
Since m is asymptotically close to 1, this scattering transform is approximately the Fourier transform of q, evaluated at (−2 k₁, 2 k₂):
t(k) ≈ ∫_{R²} e^{i 2 (k₁ x − k₂ y)} q(z) dx dy

The D-bar method to solve EIT
The scattering transform approach to the EIT problem can now be formulated on a single line,
Λ_σ → ψ(·, k)|_{∂Ω} → t(k) → m(z, k) → σ
but obviously the steps have to be spelled out.
Algorithm 1: D-bar equation to solve the EIT problem
1. Use the DN map Λ_σ to construct the CGO solutions ψ(z, k).
2. Use the CGO solutions ψ to construct the scattering transform t(k).
3. Solve the D-bar equation to obtain m(z, k).
4. Determine σ(z) by using m(z, k).

Step 1: Λ_σ → ψ(·, k)|_{∂Ω}, I
Use the DN map Λ_σ to construct the CGO solutions ψ(z, k). For all k ∈ C \ {0} solve the boundary integral equation
ψ(·, k)|_{∂Ω} = e^{+i k z}|_{∂Ω} − S_k (Λ_σ − Λ_1) ψ(·, k)   (5)
where
(S_k (Λ_σ − Λ_1) ψ)(z, k) = ∫_{∂Ω} G_k(z − ξ) (Λ_σ − Λ_1) ψ(ξ, k) ds(ξ)

Step 1: Λ_σ → ψ(·, k)|_{∂Ω}, II
To verify that the solutions of the boundary integral equation are the CGO solutions we use the DN map Λ_q of the Schrödinger equation. For a given f let v be the unique solution of the Schrödinger problem Δv = q v in Ω with v = f on the boundary ∂Ω. Then
Λ_q f := ∂v/∂n |_{∂Ω}
The DN map for the EIT problem is closely related to Λ_q. Let u be the unique solution of the conductivity problem ∇·(σ ∇u) = 0 in Ω with u = f on the boundary ∂Ω. Then Λ_q = Λ_σ. To verify this use the relations q = (Δ√σ)/√σ, v = √σ u and σ ≡ 1 near the boundary ∂Ω:
Λ_q f = ∂v/∂n |_{∂Ω} = (∂√σ/∂n) u |_{∂Ω} + √σ ∂u/∂n |_{∂Ω} = Λ_σ f

Step 1: Λ_σ → ψ(·, k)|_{∂Ω}, III
Alessandrini's identity [1988]: for any two smooth (H¹(Ω)) solutions u_l of Δu_l = q_l u_l in Ω with the boundary condition u_l = f on ∂Ω we have
∫_Ω (q₁ − q₂) u₁ u₂ dA = ∫_{∂Ω} u₁ (Λ_{q₁} − Λ_{q₂}) u₂ ds   (6)
The proof is based on a clever application of Green's theorem. Observe that Alessandrini's identity does not use the conductivity-based DN map Λ_σ, but the abstract operator Λ_q, based on the Schrödinger equation.

Step 1: Λ_σ → ψ(·, k)|_{∂Ω}, IV
Now examine the integral equation solved by m(z, k), using the Faddeev Green's function g_k(z). Use the notation G_k(z) := e^{+i k z} g_k(z).
m(·, k) = 1 − g_k ∗ (q m(·, k))
m(z, k) = 1 − ∫_{R²} g_k(z − ξ) q(ξ) m(ξ, k) dξ
e^{+i k z} m(z, k) = e^{+i k z} − ∫_{R²} e^{+i k (z − ξ)} g_k(z − ξ) q(ξ) e^{+i k ξ} m(ξ, k) dξ
ψ(z, k) = e^{+i k z} − G_k ∗ (q ψ(·, k))

Step 1: Λ_σ → ψ(·, k)|_{∂Ω}, V
Now use Alessandrini's identity (6) with u₂(·) = ψ(·, k), q₂ = q = (Δ√σ)/√σ, q₁ = 0 and u₁(ξ) = G_k(z − ξ) for some z ∉ Ω̄. Observe that q vanishes outside of the domain Ω.
∫_Ω (q₁ − q₂) u₁ u₂ dA = ∫_{∂Ω} u₁ (Λ_{q₁} − Λ_{q₂}) u₂ ds
−∫_Ω q(ξ) G_k(z − ξ) ψ(ξ, k) dA = ∫_{∂Ω} G_k(z − ξ) (Λ_0 − Λ_q) ψ(ξ, k) ds(ξ)
ψ(z, k) − e^{+i k z} = −∫_{∂Ω} G_k(z − ξ) (Λ_q − Λ_0) ψ(ξ, k) ds(ξ)
Nachman [1996] showed that the above is correct for z ∈ ∂Ω. This is the desired boundary integral equation (5) for ψ.

Step 2: ψ(·, k)|_{∂Ω} → t(k), I
Evaluate the scattering transform t(k) by using Alessandrini's identity (6) with q₂ = q, u₂ = ψ(·, k), q₁ = 0 and u₁ = e^{+i k̄ z̄}:
t(k) = ∫_Ω q ψ(·, k) e^{+i k̄ z̄} dA
     = ∫_{∂Ω} e^{+i k̄ z̄} (Λ_q − Λ_{q=0}) ψ(·, k) ds
     = ∫_{∂Ω} e^{+i k̄ z̄} (Λ_σ − Λ_{σ=1}) ψ(·, k) ds
The scattering transform is related to the DN data through an equation that requires knowledge of ψ on the boundary of Ω. Thus t(k) depends on the conductivity σ through the DN map Λ_σ and through ψ.

Step 2: ψ(·, k)|_{∂Ω} → t(k), II
As a regularization method the scattering transform t(k) is restricted to a compact support in the disk B(0, R). This is somewhat similar to a low-pass filter, as t(k) is related to the Fourier transform. The optimal cut-off radius R has to be chosen experimentally. It depends on the quality of the measured data, the desired resolution, and the acceptable run times for the algorithm. For very low noise levels, and aiming for high resolution, choose R larger. For high noise levels work with smaller values of R, suppressing noise but giving up resolution.

Step 2: ψ(·, k)|_{∂Ω} → t(k), III
For a faster implementation we may replace the scattering transform by a simpler formula, replacing ψ(z, k) with its asymptotic behaviour e^{+i k z}:
t_exp(k) = ∫_{∂Ω} e^{+i k̄ z̄} (Λ_σ − Λ_1) e^{+i k z} ds   (7)
This approximation was first introduced in 2000 [Siltanen, Mueller, Isaacson] and was later studied in [Knudsen, Lassas, Mueller, Siltanen 2007], where it was shown that the D-bar equation with t(k) replaced by t_exp(k), truncated to a disk B(0, R) in the k-plane, has a unique solution, which is smooth with respect to z, and the reconstruction is smooth and stable. Further, it was shown that no systematic artifacts are introduced when the method with t_exp is applied to piecewise continuous conductivities.

Step 2: ψ(·, k)|_{∂Ω} → t(k), IV
If you are interested in difference images from a reference frame, replace the scattering transform by
t_exp,diff(k) = ∫_{∂Ω} e^{+i k̄ z̄} (Λ_σ − Λ_{σ_ref}) e^{+i k z} ds   (8)
If σ ≈ σ_ref then the values of the scattering transform will be small.

Step 3: t(k) → m(z, k), I
For each z ∈ Ω solve the D-bar equation, with m(z, ·) − 1 ∈ L^{p₀} ∩ L^∞,
∂m(z, k)/∂k̄ = t(k)/(4 π k̄) e_{−z}(k) m̄(z, k)   (9)
The above uses the notation
e_z(k) := exp(i (k z + k̄ z̄)) = exp(i 2 (k₁ x − k₂ y))
The D-bar equation (9) is generated by differentiating the Lippmann-Schwinger equation m = 1 − g_k ∗ (q m) with respect to k̄, as found by Nachman [1996]. Nachman verified that (9) has a unique solution with m(·, z) − 1 ∈ L^{p₀} ∩ L^∞(C) for some p₀ > 2.

Step 3: t(k) → m(z, k), II
The function 1/(π k) is a fundamental solution for ∂/∂k̄, and thus one can replace (9) by an integral equation with the help of a convolution:
m(z, k) = 1 + 1/(π k) ∗ ( t(k)/(4 π k̄) e_{−z}(k) m̄(z, k) )
        = 1 + 1/(4 π²) ∫_{R²} t(k') / ((k − k') k̄') e_{−z}(k') m̄(z, k') dk'
        = 1 + 1/(4 π²) ∫_{R²} t(k') / ((k − k') k̄') e^{−i 2 (k'₁ x − k'₂ y)} m̄(z, k') dk'
This Fredholm integral equation will be examined numerically in the second part of the presentation.

Step 4: m(z, k) → σ, I
Examine k → 0 in equation (4) and use q = (Δ√σ)/√σ:
Δ m(z, k) + 4 i k ∂̄ m(z, k) = q(z) m(z, k)
to conclude
Δ m(z, 0) = (Δ√σ)/√σ m(z, 0)
The rigorous proof by Nachman takes the logarithmic singularity in the Faddeev Green's function g_k into account.

Step 4: m(z, k) → σ, II
Thus m(·, 0) and √σ solve the same Schrödinger equation with f(z) = Δm(z, 0)/m(z, 0):
Δ m(·, 0) = f m(·, 0) and Δ√σ = f √σ
The decay conditions m − 1 ∈ W^{1,p₀}(R²) and σ ≡ 1 on R² \ Ω now imply m(·, 0) = √σ. Recover the conductivity by
σ(z) = lim_{k→0} m(z, k)² = m(z, 0)²
One can verify σ(z) = m(z, 0)² directly, without the above limit.

The Fredholm equation on R², I
For each z ∈ Ω we solve
m(z, k) = 1 + 1/(4 π²) ∫_{R²} t(k') / ((k − k') k̄') e^{−i 2 (k'₁ x − k'₂ y)} m̄(z, k') dk'
for a scattering transform t with compact support supp(t) ⊂ B(0, R). Thus we may actually examine the equation
m(z, k) = 1 + 1/(4 π²) ∫_{B(0,R)} t(k') / ((k − k') k̄') e^{−i 2 (k'₁ x − k'₂ y)} m̄(z, k') dk'   (10)
Key observation: if we know m(z, ·) on B(0, R), then we can compute it on all of R² by evaluating the RHS.

The Fredholm equation on R², II
This is a Fredholm integral equation of the second kind. It has a unique solution with m − 1 ∈ L^{p₀} for any given p₀ > 2 [Nachman 1996].
The term t(k')/k̄' points towards a singularity, but one can show that |t(k')| ≤ c |k'|², and thus the expression is bounded.
The asymptotic behavior m − 1 ∈ L^{p₀} is contained in the equation, represented by the 1 on the RHS.
This is not a complex linear equation, but only real linear, caused by the conjugate m̄(z, k').

The periodic equation, I
To be able to use FFTs, equation (10) has to be set up as a periodic equation. This is possible since the support B(0, R) of t(k) is bounded. Use a smooth cutoff function η(k) on a domain Q = [−2R − ε, +2R + ε]², as illustrated in the figure for the 1-D situation; then extend periodically with period 4R + 2ε. Then define a new, periodic Green's function
β(k) := η(k) · 1/(π k), extended periodically
[Figure: graph of the cutoff function η(k), shown between R and 2R]

The periodic equation, II
Since the scattering transform t is supported in B(0, R) we can extend the contribution in (10) periodically. Define
φ_z(k) := 1/(4 π) · t(k)/k̄ · e^{−i 2 (k₁ x − k₂ y)}
and examine
m̃(z, k) = 1 + ∫_Q β(k − k') φ_z(k') m̃̄(z, k') dk'   (11)
One can prove [Mueller, Siltanen 2012] that (11) has a unique, periodic solution m̃, and on B(0, R) it coincides with the solution m of (10). The figure on the next slide is a visual argument for the identical evaluation of the convolution integrals.

The periodic equation, III
[Figure: domains of integration leading to the non-modified (left) and modified (center, right) results]
green circle: domain in which the convolution results coincide
green disk: support of the shifted scattering transform t(k')
blue circle: domain in which the kernel 1/(π k) is not modified

The periodic equation, IV
To compute the convolution integral in the periodic setting (11) at a point k we have to shift the support B(0, R) of φ_z (green circle) by k, leading to the green disk. Caused by the periodic setting there are infinitely many green disks, offset by multiples of 4R + 2ε. Then multiply with the other expressions and integrate. Inside of B(0, 2R) (blue circle) the Green's function is not modified by the cutoff function.
If |k| ≤ R (visualized in the left part of the figure): the green disk is completely inside B(0, 2R), and thus the expression to be integrated coincides with the original setting in (10). The other green disks are not in the domain of integration.
If |k| > R (visualized in the right part of the figure): the green disk has a section outside of B(0, 2R), and thus the expression to be integrated is modified by the cutoff function.

The periodic equation, V
Some of the other green disks are now in the domain of integration and will thus modify the result. For k ∉ B(0, R) the results m(z, k) of (10) and m̃(z, k) of (11) differ.
Now we have all the tools to verify that m̃(z, k) = m(z, k) for k ∈ B(0, R). Use the solution m̃ of the periodic problem (11) and the convolution in (10) to generate a second solution m₂ of (9) on all of R²; compare with the unique solution m of (10), resp. (9).
m₂(z, k) := 1 + 1/(4 π²) ∫_{B(0,R)} t(k') / ((k − k') k̄') e^{−i 2 (k'₁ x − k'₂ y)} m̃̄(z, k') dk'

The periodic equation, VI
The function m(z, k) solves
m(z, k) = 1 + 1/(4 π²) ∫_{B(0,R)} t(k') / ((k − k') k̄') e^{−i 2 (k'₁ x − k'₂ y)} m̄(z, k') dk'
and thus the D-bar equation (9) on C, i.e.
∂m(z, k)/∂k̄ = t(k)/(4 π k̄) e_{−z}(k) m̄(z, k)
For k ∈ B(0, R) (or k ∈ Q = [−2R − ε, +2R + ε]²) we have
m̃(z, k) = m₂(z, k) = 1 + 1/(4 π²) ∫_{B(0,R)} t(k') / ((k − k') k̄') e^{−i 2 (k'₁ x − k'₂ y)} m̄₂(z, k') dk'
Since the support of t is B(0, R), the above integral equation for m₂ is correct for all k ∈ C, and m₂ is a second solution to the D-bar equation.

The periodic equation, VII
The uniqueness of the solution of the D-bar equation (9) on C now implies m(z, k) = m₂(z, k). On B(0, R) we have m₂(z, k) = m̃(z, k), and thus the desired result.
Observe that the above result m̃(z, k) = m(z, k) is only possible with the compact support B(0, R) of the scattering transform. For many other Fredholm equations the support is not compact, but we have a decay condition for the Green's function. In these cases one can only hope for m̃ ≈ m for R large enough.

Discretization and convolution, I
To compute the 2D convolution integral we have to discretize the square domain Q = [−2R − ε, 2R + ε] × [−2R − ε, 2R + ε] with an N × N grid, N preferably a power of two. In the figure a discretization with 8 × 8 grid points is shown. Evaluating the double integral turns into multiplications of the two factors and then a summation.
[Figure: 8 × 8 grid on Q, with inner points m_i and outer points m_o marked]

Discretization and convolution, II
Now examine the evaluation of the convolution integral. We have N² discretized values of the functions to be integrated. For each of the N² possible offsets k = (k₁, k₂) this amounts to a sum involving N² products. The total operation count is N⁴ for a naive implementation of the convolution! Using the FFT, this can be reduced considerably.

Using the inner points only, I
The vector m containing all discretized values of the function m(z, ·) can be split into two parts:
m_i contains the inner points, the ones inside of B(0, R). At most 1/4 of the points are inside, i.e. (N/2)².
m_o contains the outer points, the ones outside of B(0, R). The values at these outer points will be multiplied by 0, since the support of the scattering transform is in B(0, R).
We only need the values of m on the support of the scattering transform t, thus the outer points are of no use or interest! For easy programming use an inner square with half the side length, i.e. the number of unknowns is divided by four.

Using the inner points only, II
Evaluating the convolution integral approximately at all the spectral grid points k can be written as a matrix multiplication T m, and the system to be solved is I m − T m = 1. We know that the outer points are multiplied by 0, and thus we have a block structure where two blocks contain zeros:
( m_i )   [ T_ii  0 ] ( m_i )   ( 1 )
( m_o ) − [ T_oi  0 ] ( m_o ) = ( 1 )
Consider this as two smaller systems,
(I − T_ii) m_i = 1 and m_o = T_oi m_i + 1
and obviously only the inner system
A m_i = (I − T_ii) m_i = 1
has to be solved. This is good news, as we only need the inner values!
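The block reduction above can be checked on a small random system; a minimal NumPy sketch, where the sizes ni, no and the matrix T are arbitrary stand-ins for the discretized D-bar operator, not the actual discretization:

```python
import numpy as np

rng = np.random.default_rng(1)
ni, no = 4, 12            # inner and outer points; ni is about N^2/4 in the real setting
n = ni + no

# T has zero columns for the outer points: T = [[T_ii, 0], [T_oi, 0]]
T = np.zeros((n, n))
T[:ni, :ni] = 0.1 * rng.standard_normal((ni, ni))   # T_ii
T[ni:, :ni] = 0.1 * rng.standard_normal((no, ni))   # T_oi
b = np.ones(n)

# full solve of (I - T) m = 1
m_full = np.linalg.solve(np.eye(n) - T, b)

# reduced solve: (I - T_ii) m_i = 1, then m_o = T_oi m_i + 1
m_i = np.linalg.solve(np.eye(ni) - T[:ni, :ni], np.ones(ni))
m_o = T[ni:, :ni] @ m_i + np.ones(no)

print(np.allclose(m_full, np.concatenate([m_i, m_o])))
```

Only the ni × ni inner system is actually solved; the outer values follow by one matrix-vector product, exactly as the block structure promises.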

Why FFT, I
We illustrate the efficiency of the FFT for computing convolution integrals by a 1-D example. The key is the convolution theorem F(f ∗ g) = F(f) · F(g), combined with the computational effort for an FFT, i.e. N log N for a 1-D FFT.
Assume a 2π-periodic function f is discretized, leading to a vector f ∈ C^N. Its Fourier coefficients are denoted by F ∈ C^N. Similarly for a function g and its Fourier coefficients G ∈ C^N. The periodic function h is defined by periodic convolution over the interval [−π, π], i.e.
h(t) = (f ∗ g)(t) = ∫_{−π}^{+π} f(t − τ) g(τ) dτ
or the discretized version
h_k = Σ_{j=1}^{N} f_{k−j} g_j

Why FFT, II
Since there are N components to be computed and each sum requires N multiplications, this leads to a computational effort of N² operations. The convolution theorem implies that the Fourier coefficients H_j of h are given by a pointwise multiplication of the Fourier coefficients F_j and G_j, i.e.
h = f ∗ g ⟹ H_j = F_j · G_j
Thus we can use the inverse FFT to determine h. In short we may write
h = IFFT(H) = c · IFFT(F .* G) = c · IFFT(FFT(f) .* FFT(g))
where .* is an element-wise multiplication. The computational effort is given by N log N operations.
Computational effort: N² → N log N
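The identity above can be checked numerically. A minimal Python/NumPy sketch (the talk's computations used Matlab; NumPy's 0-based indexing and unnormalized FFT conventions are assumed here, and N = 8 is an arbitrary grid size):

```python
import numpy as np

N = 8
rng = np.random.default_rng(0)
f = rng.standard_normal(N)
g = rng.standard_normal(N)

# direct periodic (circular) convolution h_k = sum_j f_{k-j} g_j: N^2 operations
h_direct = np.array([sum(f[(k - j) % N] * g[j] for j in range(N)) for k in range(N)])

# FFT-based convolution h = IFFT(FFT(f) .* FFT(g)): N log N operations
h_fft = np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)).real

print(np.allclose(h_direct, h_fft))
```

With NumPy's normalization the constant c in the slide equals 1; other FFT conventions introduce a factor of N or 1/N.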

Application to the D-bar equation, I
We have a 2D convolution expression to be computed, and consequently
Computational effort: N⁴ → N² log N
and we have to use 2D implementations of FFT and IFFT. The following run times for a conductivity reconstruction using the D-bar method confirm the result. For each of the 359 frames, 368 z values were examined by Melody Dodd (2014).
run time: 68 s, 161 s, 578 s, 2355 s (grid size doubling from entry to entry)
Observe that we have almost a factor 4 in run time when doubling the grid size, at least for N large.

Application to the D-bar equation, II
The algorithm to evaluate one discretization of
∫_Q β(k − k') φ_z(k') m̄(z, k') dk'
for one value of z is given by:
Discretize β(k), leading to an N × N matrix M₁. This evaluation will not change as we examine different values of z.
Discretize φ_z(k) m̄(z, k), leading to an N × N matrix M₂. Since
φ_z(k) = 1/(4 π) · t(k)/k̄ · e^{−i 2 (k₁ x − k₂ y)}
this evaluation will change as we examine different values of z.
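In 2D the convolution theorem applies with fft2/ifft2. A small NumPy check of the circular matrix convolution against the direct N⁴ double sum (M₁ and M₂ are random stand-ins for the discretized β and φ_z · m̄, and N = 8 is arbitrary):

```python
import numpy as np

N = 8
rng = np.random.default_rng(2)
M1 = rng.standard_normal((N, N))   # stands in for the discretized beta
M2 = rng.standard_normal((N, N))   # stands in for the discretized phi_z * conj(m)

# naive circular 2D convolution: N^2 output entries, N^2 products each -> N^4 operations
C = np.zeros((N, N))
for k1 in range(N):
    for k2 in range(N):
        for j1 in range(N):
            for j2 in range(N):
                C[k1, k2] += M1[(k1 - j1) % N, (k2 - j2) % N] * M2[j1, j2]

# FFT version: N^2 log N operations
C_fft = np.fft.ifft2(np.fft.fft2(M1) * np.fft.fft2(M2)).real

print(np.allclose(C, C_fft))
```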

Application to the D-bar equation, III
A numerical evaluation of the above 2D periodic convolution integral is now represented by M₁ ⊛ M₂, a (circular) convolution of matrices. The discretized values of the convolution are given by
IFFT2( FFT2(M₁) .* FFT2(M₂) )
where .* stands for an element-wise multiplication. With the above we can evaluate the integral in the periodic D-bar equation
m(z, k) = 1 + ∫_Q β(k − k') φ_z(k') m̄(z, k') dk'
but not solve the system. The equation is now in the form A m = b.

Application to the D-bar equation, IV
Using the points inside of B(0, R) only, we can reduce the size of the complex matrix to a (N/2)² × (N/2)² matrix. The above allows us to evaluate A m without explicitly constructing the matrix A. To generate all entries of the matrix, we would have to perform ¼ N² matrix applications.
Observe that the resulting system is not complex linear, but real linear. Thus the complex system actually turns into a ½N² × ½N² real, linear system of non-homogeneous equations of the form A m = 1. The matrix A is neither sparse nor symmetric. Using a standard direct solver (e.g. LU factorization) would require ⅓ (½N²)³ = 1/24 N⁶ operations.

Using an iterative solver
Solving the D-bar equation leads to a linear system of the form A u = b, and the above is an algorithm to perform one matrix multiplication without actually generating the matrix. Based on these observations we use an iterative solver for the non-symmetric, dense matrix A. We use the GMRES algorithm, which internally uses an Arnoldi iteration.

Arnoldi iteration, I
The Arnoldi algorithm generates a sequence of orthonormal vectors q_n. It constructs orthogonal matrices Q_n and an upper Hessenberg matrix H_n:
Q_n = [q₁, q₂, q₃, ..., q_n] ∈ M_{N,n} with Q_nᵀ Q_n = I_n
      ⎡ h₁,₁  h₁,₂  ...  ...   h₁,n   ⎤
      ⎢ h₂,₁  h₂,₂  h₂,₃ ...   h₂,n   ⎥
H_n = ⎢  0    h₃,₂  h₃,₃ ...   h₃,n   ⎥ ∈ M_{n+1,n}
      ⎢  ...        ...  ...          ⎥
      ⎢  0    ...   0  h_{n,n−1} h_{n,n} ⎥
      ⎣  0    ...   0    0   h_{n+1,n} ⎦
The key properties are
A Q_n = Q_{n+1} H_n ∈ M_{N,n} and Q_nᵀ Q_n = I_n

Arnoldi iteration, II
The last column of the above matrix equation A Q_n = Q_{n+1} H_n reads as
A q_n = Σ_{j=1}^{n+1} h_{j,n} q_j
and thus
q_{n+1} = 1/h_{n+1,n} ( A q_n − Σ_{j=1}^{n} h_{j,n} q_j )
Based on this construction the vectors b = q₁, q₂, ..., q_n span a Krylov space K_n:
K_n = span{ b, A b, A² b, A³ b, ..., A^{n−1} b }

Arnoldi iteration, III
As starting vector q₁ we may choose any vector of length 1. The orthogonality is satisfied if
0 = ⟨q_{n+1}, q_k⟩ = 1/h_{n+1,n} ( ⟨A q_n, q_k⟩ − Σ_{j=1}^{n} h_{j,n} ⟨q_j, q_k⟩ ) = 1/h_{n+1,n} ( ⟨A q_n, q_k⟩ − h_{k,n} · 1 )
and thus
h_{k,n} = ⟨A q_n, q_k⟩ for k = 1, 2, 3, ..., n
The condition ‖q_{n+1}‖ = 1 then determines the value of h_{n+1,n}.

Arnoldi iteration, IV
Algorithm 2: Arnoldi iteration
Choose q₁ with ‖q₁‖ = 1
for n = 1, 2, 3, ... do
    q_{n+1} = A q_n
    for k = 1, 2, 3, ..., n do
        h_{k,n} = ⟨q_{n+1}, q_k⟩
        q_{n+1} = q_{n+1} − h_{k,n} q_k
    end
    h_{n+1,n} = ‖q_{n+1}‖ and then q_{n+1} = 1/h_{n+1,n} q_{n+1}
end

Arnoldi iteration, V
The computational effort for one step is given by one matrix multiplication, n scalar products for vectors of length N, and the summation of n vectors of length N to generate q_{n+1}. To generate all of Q_{n+1} we need n matrix multiplications and n² N additional operations.
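Algorithm 2 translates almost line by line into NumPy; a sketch with a random 20 × 20 test matrix and n = 5 steps (both choices arbitrary) that also checks the key properties A Q_n = Q_{n+1} H_n and Q_nᵀ Q_n = I_n:

```python
import numpy as np

def arnoldi(A, q1, n):
    """n steps of the Arnoldi iteration (modified Gram-Schmidt).
    Returns Q of size N x (n+1) and the Hessenberg matrix H of size (n+1) x n."""
    N = len(q1)
    Q = np.zeros((N, n + 1))
    H = np.zeros((n + 1, n))
    Q[:, 0] = q1 / np.linalg.norm(q1)
    for j in range(n):
        v = A @ Q[:, j]
        for k in range(j + 1):              # orthogonalize against q_1 .. q_{j+1}
            H[k, j] = v @ Q[:, k]
            v = v - H[k, j] * Q[:, k]
        H[j + 1, j] = np.linalg.norm(v)
        Q[:, j + 1] = v / H[j + 1, j]
    return Q, H

rng = np.random.default_rng(3)
A = rng.standard_normal((20, 20))
Q, H = arnoldi(A, rng.standard_normal(20), 5)
print(np.allclose(A @ Q[:, :5], Q @ H))     # A Q_n = Q_{n+1} H_n
print(np.allclose(Q.T @ Q, np.eye(6)))      # orthonormal columns
```

The inner loop updates v in place, i.e. this is the modified Gram-Schmidt variant, which keeps the orthogonality loss smaller than the classical one.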

GMRES, I
The basic idea of the GMRES (Generalized Minimum RESiduals) algorithm is to solve a least squares problem at each step of the Arnoldi iteration. We assume that the matrix A is of size N × N. Examine an n-dimensional Krylov space K_n,
K_n = span{ b₀, A b₀, A² b₀, A³ b₀, ..., A^{n−1} b₀ }
with an orthonormal base Q_n = [q₁, q₂, q₃, ..., q_n], generated by the Arnoldi iteration. For a starting vector x₀ we seek a solution in the affine subspace x₀ + K_n. The exact solution x₀ + x = A⁻¹ b is approximated by a vector x₀ + x_n, such that
min_{x ∈ K_n} ‖A(x₀ + x) − b‖ = ‖r_n‖ = ‖A x_n + A x₀ − b‖ = ‖A x_n − b₀‖
The vector b₀ = b − A x₀ is the first residual.

GMRES, II
Using the matrix Q_n the above can be rephrased (use Q_n y = x) as
min_{y ∈ R^n} ‖A Q_n y − b₀‖ = ‖r_n‖ = ‖A x_n − b₀‖
If we use the initial vector q₁ = b₀/‖b₀‖ for the Arnoldi iteration, we know that ⟨q_k, b₀⟩ = 0 for all k ≥ 2:
b₀ = ‖b₀‖ q₁ = ‖b₀‖ Q_{n+1} e₁
With the Arnoldi iteration we have A Q_n = Q_{n+1} H_n ∈ M_{N,n}, and thus we have a smaller least squares problem:
min_{y ∈ R^n} ‖Q_{n+1} H_n y − ‖b₀‖ Q_{n+1} e₁‖ = min_{y ∈ R^n} ‖H_n y − ‖b₀‖ e₁‖

GMRES, III
The large least squares problem with the matrix A of size N × N is replaced with an equivalent least squares problem with the Hessenberg matrix H_n of size (n+1) × n. This least squares problem can be solved by the algorithm of your choice, e.g. a QR factorization. Using Givens transformations, the QR algorithm is very efficient for Hessenberg matrices: n² operations. The algorithm can be stopped if the desired accuracy is achieved, e.g. if ‖A x_n − b₀‖/‖b‖ is small enough.

GMRES, IV
Algorithm 3: GMRES
Choose x₀ and compute b₀ = b − A x₀
Q₁ = q₁ = b₀/‖b₀‖
for n = 1, 2, 3, ... do
    Arnoldi step n to determine q_{n+1} and H_n
    minimize ‖H_n y − ‖b₀‖ e₁‖
    x_n = Q_n y
end
x ≈ x₀ + x_n
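Algorithm 3 can be sketched on top of the Arnoldi step. A minimal, unrestarted GMRES in NumPy, where the small least squares problem is solved with lstsq rather than the Givens-based QR mentioned above, and the test matrix is a random stand-in with eigenvalues clustered near 1 (so convergence is fast, as discussed below):

```python
import numpy as np

def gmres(A, b, x0, nmax, tol=1e-10):
    """Basic (unrestarted) GMRES: minimize ||A x - b|| over x in x0 + K_n."""
    b0 = b - A @ x0                         # first residual
    beta = np.linalg.norm(b0)
    N = len(b)
    Q = np.zeros((N, nmax + 1))
    H = np.zeros((nmax + 1, nmax))
    Q[:, 0] = b0 / beta
    x = x0
    for n in range(1, nmax + 1):
        v = A @ Q[:, n - 1]                 # Arnoldi step n (modified Gram-Schmidt)
        for k in range(n):
            H[k, n - 1] = v @ Q[:, k]
            v = v - H[k, n - 1] * Q[:, k]
        H[n, n - 1] = np.linalg.norm(v)
        Q[:, n] = v / H[n, n - 1]
        e1 = np.zeros(n + 1); e1[0] = beta  # small problem: min ||H_n y - beta e_1||
        y, *_ = np.linalg.lstsq(H[:n + 1, :n], e1, rcond=None)
        x = x0 + Q[:, :n] @ y
        if np.linalg.norm(A @ x - b) / np.linalg.norm(b) < tol:
            break
    return x

rng = np.random.default_rng(4)
N = 30
A = np.eye(N) + 0.1 * rng.standard_normal((N, N))
b = rng.standard_normal(N)
x = gmres(A, b, np.zeros(N), nmax=N)
print(np.allclose(x, np.linalg.solve(A, b)))
```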

GMRES, V
The basic GMRES algorithm usually performs very well, but has a few key disadvantages if many iterations are asked for:
The size of the matrices Q_n and H_n increases with n, and with them the computational effort.
The orthogonality of the columns of Q_n will not be perfectly maintained, caused by arithmetic errors.
To control both of these problems the algorithm is usually restarted every m steps. This leads to the GMRES(m) algorithm, with an outer and an inner loop. Picking the optimal value for m is an art.

GMRES, VI
Algorithm 4: GMRES(m)
Choose restart parameter m
Choose x₀
for k = 0, 1, 2, 3, ... do
    b₀ = b − A x₀
    Q₁ = q₁ = b₀/‖b₀‖
    for n = 1, 2, 3, ..., m do
        Arnoldi step n to determine q_{n+1} and H_n
        minimize ‖H_n y − ‖b₀‖ e₁‖
        x_n = Q_n y
    end
    x₀ = x₀ + x_m
end

Convergence of GMRES
The norm of the residuals r_n = A x_n − b is decreasing, ‖r_{n+1}‖ ≤ ‖r_n‖. This is obvious since the Krylov subspace is enlarged. In principle GMRES will generate the exact solution after N steps, but this result is useless: the goal is to use n ≪ N iterations, and arithmetic errors will prevent us from using GMRES as a direct solver. One can construct cases where GMRES(m) does stagnate and not converge at all. For fast convergence the eigenvalues of A should be clustered at a point away from the origin. This is different from the conjugate gradient algorithm, where the condition number determines the rate of convergence. The convergence can be improved by preconditioners.

GMRES applied to the D-bar equation, I
Since GMRES is an iterative algorithm to solve the linear system, one has to decide when to stop. The wish for high accuracy for the linear system has to be balanced with the effect of the noise inherent in measured data and the computational cost. Keep in mind the different sources of errors in the conductivity:
systematic errors in the measured data
random noise errors in the measured data
effect of restricting the support of the scattering transform
error caused by solving the linear system by GMRES
The influence of the GMRES-caused error has to be smaller than the others.

GMRES applied to the D-bar equation, II
One of the key points for short run time is a good starting value x₀. This can be obtained by a good initial guess of the actual conductivity σ. Often one tries to analyze multiple frames sequentially; then the final result of the previous frame may serve as an initial guess for the current frame. When generating difference images, the scattering transform t_exp,diff(k) generates very small values, and the resulting matrix A is rather close to the identity matrix. This leads to fast convergence. On a grid of k values it often takes 1 (yes: one) iteration of GMRES to achieve the desired accuracy (Mueller, Dodd 2014).

Parallelized evaluation, I
On the CPUs and GPUs of today many operations can be performed in parallel, so we take advantage of this feature. Most of the runtime is used for solving the D-bar equation. For each value of z on the chosen grid the D-bar equation
m(z, k) = 1 + ∫_Q β(k − k') φ_z(k') m̄(z, k') dk'
has to be solved. The computationally expensive part is the computation of FFT and IFFT.

Parallelized evaluation, II
There are multiple options to parallelize the task:
1. Use the multiple CPUs for the FFT and process the different z values sequentially.
2. Hand each CPU one value of z.
3. If you have multiple frames to be analyzed, hand each CPU one frame.
In recent computations J. Mueller and M. Dodd used Matlab and its parallel computing toolbox on computers with 12 or 64 CPUs:
1. Very little gain by multiple CPUs.
2. Works, but very small gain if more than 7 CPUs are used.
3. Best result for parallel computation, up to 60 CPUs.

That's all folks
Thank you for your attention.


More information

Computational inversion based on complex geometrical optics solutions

Computational inversion based on complex geometrical optics solutions Computational inversion based on complex geometrical optics solutions Samuli Siltanen Department of Mathematics and Statistics University of Helsinki, Finland samuli.siltanen@helsinki.fi http://www.siltanen-research.net

More information

An inverse elliptic problem of medical optics with experimental data

An inverse elliptic problem of medical optics with experimental data An inverse elliptic problem of medical optics with experimental data Jianzhong Su, Michael V. Klibanov, Yueming Liu, Zhijin Lin Natee Pantong and Hanli Liu Abstract A numerical method for an inverse problem

More information

Fast shape-reconstruction in electrical impedance tomography

Fast shape-reconstruction in electrical impedance tomography Fast shape-reconstruction in electrical impedance tomography Bastian von Harrach bastian.harrach@uni-wuerzburg.de (joint work with Marcel Ullrich) Institut für Mathematik - IX, Universität Würzburg The

More information

Reconstructing inclusions from Electrostatic Data

Reconstructing inclusions from Electrostatic Data Reconstructing inclusions from Electrostatic Data Isaac Harris Texas A&M University, Department of Mathematics College Station, Texas 77843-3368 iharris@math.tamu.edu Joint work with: W. Rundell Purdue

More information

The amount of work to construct each new guess from the previous one should be a small multiple of the number of nonzeros in A.

The amount of work to construct each new guess from the previous one should be a small multiple of the number of nonzeros in A. AMSC/CMSC 661 Scientific Computing II Spring 2005 Solution of Sparse Linear Systems Part 2: Iterative methods Dianne P. O Leary c 2005 Solving Sparse Linear Systems: Iterative methods The plan: Iterative

More information

Last Time. Social Network Graphs Betweenness. Graph Laplacian. Girvan-Newman Algorithm. Spectral Bisection

Last Time. Social Network Graphs Betweenness. Graph Laplacian. Girvan-Newman Algorithm. Spectral Bisection Eigenvalue Problems Last Time Social Network Graphs Betweenness Girvan-Newman Algorithm Graph Laplacian Spectral Bisection λ 2, w 2 Today Small deviation into eigenvalue problems Formulation Standard eigenvalue

More information

A CALDERÓN PROBLEM WITH FREQUENCY-DIFFERENTIAL DATA IN DISPERSIVE MEDIA. has a unique solution. The Dirichlet-to-Neumann map

A CALDERÓN PROBLEM WITH FREQUENCY-DIFFERENTIAL DATA IN DISPERSIVE MEDIA. has a unique solution. The Dirichlet-to-Neumann map A CALDERÓN PROBLEM WITH FREQUENCY-DIFFERENTIAL DATA IN DISPERSIVE MEDIA SUNGWHAN KIM AND ALEXANDRU TAMASAN ABSTRACT. We consider the problem of identifying a complex valued coefficient γ(x, ω) in the conductivity

More information

Iterative Methods for Solving A x = b

Iterative Methods for Solving A x = b Iterative Methods for Solving A x = b A good (free) online source for iterative methods for solving A x = b is given in the description of a set of iterative solvers called templates found at netlib: http

More information

PART IV Spectral Methods

PART IV Spectral Methods PART IV Spectral Methods Additional References: R. Peyret, Spectral methods for incompressible viscous flow, Springer (2002), B. Mercier, An introduction to the numerical analysis of spectral methods,

More information

Solving Symmetric Indefinite Systems with Symmetric Positive Definite Preconditioners

Solving Symmetric Indefinite Systems with Symmetric Positive Definite Preconditioners Solving Symmetric Indefinite Systems with Symmetric Positive Definite Preconditioners Eugene Vecharynski 1 Andrew Knyazev 2 1 Department of Computer Science and Engineering University of Minnesota 2 Department

More information

Conjugate Gradient Method

Conjugate Gradient Method Conjugate Gradient Method direct and indirect methods positive definite linear systems Krylov sequence spectral analysis of Krylov sequence preconditioning Prof. S. Boyd, EE364b, Stanford University Three

More information

Name: INSERT YOUR NAME HERE. Due to dropbox by 6pm PDT, Wednesday, December 14, 2011

Name: INSERT YOUR NAME HERE. Due to dropbox by 6pm PDT, Wednesday, December 14, 2011 AMath 584 Name: INSERT YOUR NAME HERE Take-home Final UWNetID: INSERT YOUR NETID Due to dropbox by 6pm PDT, Wednesday, December 14, 2011 The main part of the assignment (Problems 1 3) is worth 80 points.

More information

Basic Aspects of Discretization

Basic Aspects of Discretization Basic Aspects of Discretization Solution Methods Singularity Methods Panel method and VLM Simple, very powerful, can be used on PC Nonlinear flow effects were excluded Direct numerical Methods (Field Methods)

More information

Stability and instability in inverse problems

Stability and instability in inverse problems Stability and instability in inverse problems Mikhail I. Isaev supervisor: Roman G. Novikov Centre de Mathématiques Appliquées, École Polytechnique November 27, 2013. Plan of the presentation The Gel fand

More information

Krylov Space Methods. Nonstationary sounds good. Radu Trîmbiţaş ( Babeş-Bolyai University) Krylov Space Methods 1 / 17

Krylov Space Methods. Nonstationary sounds good. Radu Trîmbiţaş ( Babeş-Bolyai University) Krylov Space Methods 1 / 17 Krylov Space Methods Nonstationary sounds good Radu Trîmbiţaş Babeş-Bolyai University Radu Trîmbiţaş ( Babeş-Bolyai University) Krylov Space Methods 1 / 17 Introduction These methods are used both to solve

More information

Lecture 18 Classical Iterative Methods

Lecture 18 Classical Iterative Methods Lecture 18 Classical Iterative Methods MIT 18.335J / 6.337J Introduction to Numerical Methods Per-Olof Persson November 14, 2006 1 Iterative Methods for Linear Systems Direct methods for solving Ax = b,

More information

On uniqueness in the inverse conductivity problem with local data

On uniqueness in the inverse conductivity problem with local data On uniqueness in the inverse conductivity problem with local data Victor Isakov June 21, 2006 1 Introduction The inverse condictivity problem with many boundary measurements consists of recovery of conductivity

More information

Statistical Geometry Processing Winter Semester 2011/2012

Statistical Geometry Processing Winter Semester 2011/2012 Statistical Geometry Processing Winter Semester 2011/2012 Linear Algebra, Function Spaces & Inverse Problems Vector and Function Spaces 3 Vectors vectors are arrows in space classically: 2 or 3 dim. Euclidian

More information

Kasetsart University Workshop. Multigrid methods: An introduction

Kasetsart University Workshop. Multigrid methods: An introduction Kasetsart University Workshop Multigrid methods: An introduction Dr. Anand Pardhanani Mathematics Department Earlham College Richmond, Indiana USA pardhan@earlham.edu A copy of these slides is available

More information

Increasing stability in an inverse problem for the acoustic equation

Increasing stability in an inverse problem for the acoustic equation Increasing stability in an inverse problem for the acoustic equation Sei Nagayasu Gunther Uhlmann Jenn-Nan Wang Abstract In this work we study the inverse boundary value problem of determining the refractive

More information

The double layer potential

The double layer potential The double layer potential In this project, our goal is to explain how the Dirichlet problem for a linear elliptic partial differential equation can be converted into an integral equation by representing

More information

Math 115 ( ) Yum-Tong Siu 1. Derivation of the Poisson Kernel by Fourier Series and Convolution

Math 115 ( ) Yum-Tong Siu 1. Derivation of the Poisson Kernel by Fourier Series and Convolution Math 5 (006-007 Yum-Tong Siu. Derivation of the Poisson Kernel by Fourier Series and Convolution We are going to give a second derivation of the Poisson kernel by using Fourier series and convolution.

More information

MATH 590: Meshfree Methods

MATH 590: Meshfree Methods MATH 590: Meshfree Methods Chapter 34: Improving the Condition Number of the Interpolation Matrix Greg Fasshauer Department of Applied Mathematics Illinois Institute of Technology Fall 2010 fasshauer@iit.edu

More information

Iterative methods for Linear System of Equations. Joint Advanced Student School (JASS-2009)

Iterative methods for Linear System of Equations. Joint Advanced Student School (JASS-2009) Iterative methods for Linear System of Equations Joint Advanced Student School (JASS-2009) Course #2: Numerical Simulation - from Models to Software Introduction In numerical simulation, Partial Differential

More information

Multigrid absolute value preconditioning

Multigrid absolute value preconditioning Multigrid absolute value preconditioning Eugene Vecharynski 1 Andrew Knyazev 2 (speaker) 1 Department of Computer Science and Engineering University of Minnesota 2 Department of Mathematical and Statistical

More information

The Factorization Method for Inverse Scattering Problems Part I

The Factorization Method for Inverse Scattering Problems Part I The Factorization Method for Inverse Scattering Problems Part I Andreas Kirsch Madrid 2011 Department of Mathematics KIT University of the State of Baden-Württemberg and National Large-scale Research Center

More information

Geometric Modeling Summer Semester 2010 Mathematical Tools (1)

Geometric Modeling Summer Semester 2010 Mathematical Tools (1) Geometric Modeling Summer Semester 2010 Mathematical Tools (1) Recap: Linear Algebra Today... Topics: Mathematical Background Linear algebra Analysis & differential geometry Numerical techniques Geometric

More information

The quadratic eigenvalue problem (QEP) is to find scalars λ and nonzero vectors u satisfying

The quadratic eigenvalue problem (QEP) is to find scalars λ and nonzero vectors u satisfying I.2 Quadratic Eigenvalue Problems 1 Introduction The quadratic eigenvalue problem QEP is to find scalars λ and nonzero vectors u satisfying where Qλx = 0, 1.1 Qλ = λ 2 M + λd + K, M, D and K are given

More information

Numerical tensor methods and their applications

Numerical tensor methods and their applications Numerical tensor methods and their applications 8 May 2013 All lectures 4 lectures, 2 May, 08:00-10:00: Introduction: ideas, matrix results, history. 7 May, 08:00-10:00: Novel tensor formats (TT, HT, QTT).

More information

Iterative Methods. Splitting Methods

Iterative Methods. Splitting Methods Iterative Methods Splitting Methods 1 Direct Methods Solving Ax = b using direct methods. Gaussian elimination (using LU decomposition) Variants of LU, including Crout and Doolittle Other decomposition

More information

Lecture 4: Numerical solution of ordinary differential equations

Lecture 4: Numerical solution of ordinary differential equations Lecture 4: Numerical solution of ordinary differential equations Department of Mathematics, ETH Zürich General explicit one-step method: Consistency; Stability; Convergence. High-order methods: Taylor

More information

University of North Carolina at Charlotte Charlotte, USA Norwegian Institute of Science and Technology Trondheim, Norway

University of North Carolina at Charlotte Charlotte, USA Norwegian Institute of Science and Technology Trondheim, Norway 1 A globally convergent numerical method for some coefficient inverse problems Michael V. Klibanov, Larisa Beilina University of North Carolina at Charlotte Charlotte, USA Norwegian Institute of Science

More information

Lecture Notes 5: Multiresolution Analysis

Lecture Notes 5: Multiresolution Analysis Optimization-based data analysis Fall 2017 Lecture Notes 5: Multiresolution Analysis 1 Frames A frame is a generalization of an orthonormal basis. The inner products between the vectors in a frame and

More information

A local estimate from Radon transform and stability of Inverse EIT with partial data

A local estimate from Radon transform and stability of Inverse EIT with partial data A local estimate from Radon transform and stability of Inverse EIT with partial data Alberto Ruiz Universidad Autónoma de Madrid U. California, Irvine.June 2012 w/ P. Caro (U. Helsinki) and D. Dos Santos

More information

Fast Multipole BEM for Structural Acoustics Simulation

Fast Multipole BEM for Structural Acoustics Simulation Fast Boundary Element Methods in Industrial Applications Fast Multipole BEM for Structural Acoustics Simulation Matthias Fischer and Lothar Gaul Institut A für Mechanik, Universität Stuttgart, Germany

More information

Stochastic Spectral Approaches to Bayesian Inference

Stochastic Spectral Approaches to Bayesian Inference Stochastic Spectral Approaches to Bayesian Inference Prof. Nathan L. Gibson Department of Mathematics Applied Mathematics and Computation Seminar March 4, 2011 Prof. Gibson (OSU) Spectral Approaches to

More information

Parallel Algorithms for Four-Dimensional Variational Data Assimilation

Parallel Algorithms for Four-Dimensional Variational Data Assimilation Parallel Algorithms for Four-Dimensional Variational Data Assimilation Mie Fisher ECMWF October 24, 2011 Mie Fisher (ECMWF) Parallel 4D-Var October 24, 2011 1 / 37 Brief Introduction to 4D-Var Four-Dimensional

More information

Weighted Regularization of Maxwell Equations Computations in Curvilinear Polygons

Weighted Regularization of Maxwell Equations Computations in Curvilinear Polygons Weighted Regularization of Maxwell Equations Computations in Curvilinear Polygons Martin Costabel, Monique Dauge, Daniel Martin and Gregory Vial IRMAR, Université de Rennes, Campus de Beaulieu, Rennes,

More information

Recovery of anisotropic metrics from travel times

Recovery of anisotropic metrics from travel times Purdue University The Lens Rigidity and the Boundary Rigidity Problems Let M be a bounded domain with boundary. Let g be a Riemannian metric on M. Define the scattering relation σ and the length (travel

More information

DIRECT ELECTRICAL IMPEDANCE TOMOGRAPHY FOR NONSMOOTH CONDUCTIVITIES

DIRECT ELECTRICAL IMPEDANCE TOMOGRAPHY FOR NONSMOOTH CONDUCTIVITIES DIRECT ELECTRICAL IMPEDANCE TOMOGRAPHY FOR NONSMOOTH CONDUCTIVITIES K ASTALA, J L MUELLER, A PERÄMÄKI, L PÄIVÄRINTA AND S SILTANEN Abstract. A new reconstruction algorithm is presented for eit in dimension

More information

Short note on compact operators - Monday 24 th March, Sylvester Eriksson-Bique

Short note on compact operators - Monday 24 th March, Sylvester Eriksson-Bique Short note on compact operators - Monday 24 th March, 2014 Sylvester Eriksson-Bique 1 Introduction In this note I will give a short outline about the structure theory of compact operators. I restrict attention

More information

arxiv: v1 [math.ap] 25 Sep 2018

arxiv: v1 [math.ap] 25 Sep 2018 arxiv:1809.09272v1 [math.ap] 25 Sep 2018 NACHMAN S RECONSTRUCTION METHO FOR THE CALERÓN PROBLEM WITH ISCONTINUOUS CONUCTIVITIES GEORGE LYTLE, PETER PERRY, AN SAMULI SILTANEN Abstract. We show that Nachman

More information

Electrostatic Imaging via Conformal Mapping. R. Kress. joint work with I. Akduman, Istanbul and H. Haddar, Paris

Electrostatic Imaging via Conformal Mapping. R. Kress. joint work with I. Akduman, Istanbul and H. Haddar, Paris Electrostatic Imaging via Conformal Mapping R. Kress Göttingen joint work with I. Akduman, Istanbul and H. Haddar, Paris Or: A new solution method for inverse boundary value problems for the Laplace equation

More information

A Domain Decomposition Based Jacobi-Davidson Algorithm for Quantum Dot Simulation

A Domain Decomposition Based Jacobi-Davidson Algorithm for Quantum Dot Simulation A Domain Decomposition Based Jacobi-Davidson Algorithm for Quantum Dot Simulation Tao Zhao 1, Feng-Nan Hwang 2 and Xiao-Chuan Cai 3 Abstract In this paper, we develop an overlapping domain decomposition

More information

Math 471 (Numerical methods) Chapter 3 (second half). System of equations

Math 471 (Numerical methods) Chapter 3 (second half). System of equations Math 47 (Numerical methods) Chapter 3 (second half). System of equations Overlap 3.5 3.8 of Bradie 3.5 LU factorization w/o pivoting. Motivation: ( ) A I Gaussian Elimination (U L ) where U is upper triangular

More information

Lecture 9: Krylov Subspace Methods. 2 Derivation of the Conjugate Gradient Algorithm

Lecture 9: Krylov Subspace Methods. 2 Derivation of the Conjugate Gradient Algorithm CS 622 Data-Sparse Matrix Computations September 19, 217 Lecture 9: Krylov Subspace Methods Lecturer: Anil Damle Scribes: David Eriksson, Marc Aurele Gilles, Ariah Klages-Mundt, Sophia Novitzky 1 Introduction

More information

Geometric Modeling Summer Semester 2012 Linear Algebra & Function Spaces

Geometric Modeling Summer Semester 2012 Linear Algebra & Function Spaces Geometric Modeling Summer Semester 2012 Linear Algebra & Function Spaces (Recap) Announcement Room change: On Thursday, April 26th, room 024 is occupied. The lecture will be moved to room 021, E1 4 (the

More information

Notes for CS542G (Iterative Solvers for Linear Systems)

Notes for CS542G (Iterative Solvers for Linear Systems) Notes for CS542G (Iterative Solvers for Linear Systems) Robert Bridson November 20, 2007 1 The Basics We re now looking at efficient ways to solve the linear system of equations Ax = b where in this course,

More information

Chapter One. The Calderón-Zygmund Theory I: Ellipticity

Chapter One. The Calderón-Zygmund Theory I: Ellipticity Chapter One The Calderón-Zygmund Theory I: Ellipticity Our story begins with a classical situation: convolution with homogeneous, Calderón- Zygmund ( kernels on R n. Let S n 1 R n denote the unit sphere

More information

PDEs in Image Processing, Tutorials

PDEs in Image Processing, Tutorials PDEs in Image Processing, Tutorials Markus Grasmair Vienna, Winter Term 2010 2011 Direct Methods Let X be a topological space and R: X R {+ } some functional. following definitions: The mapping R is lower

More information

Physics 202 Laboratory 5. Linear Algebra 1. Laboratory 5. Physics 202 Laboratory

Physics 202 Laboratory 5. Linear Algebra 1. Laboratory 5. Physics 202 Laboratory Physics 202 Laboratory 5 Linear Algebra Laboratory 5 Physics 202 Laboratory We close our whirlwind tour of numerical methods by advertising some elements of (numerical) linear algebra. There are three

More information

TOEPLITZ OPERATORS. Toeplitz studied infinite matrices with NW-SE diagonals constant. f e C :

TOEPLITZ OPERATORS. Toeplitz studied infinite matrices with NW-SE diagonals constant. f e C : TOEPLITZ OPERATORS EFTON PARK 1. Introduction to Toeplitz Operators Otto Toeplitz lived from 1881-1940 in Goettingen, and it was pretty rough there, so he eventually went to Palestine and eventually contracted

More information

A Multigrid Method for Two Dimensional Maxwell Interface Problems

A Multigrid Method for Two Dimensional Maxwell Interface Problems A Multigrid Method for Two Dimensional Maxwell Interface Problems Susanne C. Brenner Department of Mathematics and Center for Computation & Technology Louisiana State University USA JSA 2013 Outline A

More information

Parallel Numerics, WT 2016/ Iterative Methods for Sparse Linear Systems of Equations. page 1 of 1

Parallel Numerics, WT 2016/ Iterative Methods for Sparse Linear Systems of Equations. page 1 of 1 Parallel Numerics, WT 2016/2017 5 Iterative Methods for Sparse Linear Systems of Equations page 1 of 1 Contents 1 Introduction 1.1 Computer Science Aspects 1.2 Numerical Problems 1.3 Graphs 1.4 Loop Manipulations

More information

Discontinuous Galerkin methods for fractional diffusion problems

Discontinuous Galerkin methods for fractional diffusion problems Discontinuous Galerkin methods for fractional diffusion problems Bill McLean Kassem Mustapha School of Maths and Stats, University of NSW KFUPM, Dhahran Leipzig, 7 October, 2010 Outline Sub-diffusion Equation

More information

Math 577 Assignment 7

Math 577 Assignment 7 Math 577 Assignment 7 Thanks for Yu Cao 1. Solution. The linear system being solved is Ax = 0, where A is a (n 1 (n 1 matrix such that 2 1 1 2 1 A =......... 1 2 1 1 2 and x = (U 1, U 2,, U n 1. By the

More information

Solving PDEs with Multigrid Methods p.1

Solving PDEs with Multigrid Methods p.1 Solving PDEs with Multigrid Methods Scott MacLachlan maclachl@colorado.edu Department of Applied Mathematics, University of Colorado at Boulder Solving PDEs with Multigrid Methods p.1 Support and Collaboration

More information

Introduction to Mobile Robotics Compact Course on Linear Algebra. Wolfram Burgard, Bastian Steder

Introduction to Mobile Robotics Compact Course on Linear Algebra. Wolfram Burgard, Bastian Steder Introduction to Mobile Robotics Compact Course on Linear Algebra Wolfram Burgard, Bastian Steder Reference Book Thrun, Burgard, and Fox: Probabilistic Robotics Vectors Arrays of numbers Vectors represent

More information

M.A. Botchev. September 5, 2014

M.A. Botchev. September 5, 2014 Rome-Moscow school of Matrix Methods and Applied Linear Algebra 2014 A short introduction to Krylov subspaces for linear systems, matrix functions and inexact Newton methods. Plan and exercises. M.A. Botchev

More information

Applied Mathematics 205. Unit II: Numerical Linear Algebra. Lecturer: Dr. David Knezevic

Applied Mathematics 205. Unit II: Numerical Linear Algebra. Lecturer: Dr. David Knezevic Applied Mathematics 205 Unit II: Numerical Linear Algebra Lecturer: Dr. David Knezevic Unit II: Numerical Linear Algebra Chapter II.3: QR Factorization, SVD 2 / 66 QR Factorization 3 / 66 QR Factorization

More information

The Linear Sampling Method and the MUSIC Algorithm

The Linear Sampling Method and the MUSIC Algorithm CODEN:LUTEDX/(TEAT-7089)/1-6/(2000) The Linear Sampling Method and the MUSIC Algorithm Margaret Cheney Department of Electroscience Electromagnetic Theory Lund Institute of Technology Sweden Margaret Cheney

More information

Notes on Householder QR Factorization

Notes on Householder QR Factorization Notes on Householder QR Factorization Robert A van de Geijn Department of Computer Science he University of exas at Austin Austin, X 7872 rvdg@csutexasedu September 2, 24 Motivation A fundamental problem

More information

Computation of eigenvalues and singular values Recall that your solutions to these questions will not be collected or evaluated.

Computation of eigenvalues and singular values Recall that your solutions to these questions will not be collected or evaluated. Math 504, Homework 5 Computation of eigenvalues and singular values Recall that your solutions to these questions will not be collected or evaluated 1 Find the eigenvalues and the associated eigenspaces

More information

Applied Linear Algebra

Applied Linear Algebra Applied Linear Algebra Peter J. Olver School of Mathematics University of Minnesota Minneapolis, MN 55455 olver@math.umn.edu http://www.math.umn.edu/ olver Chehrzad Shakiban Department of Mathematics University

More information

256 Summary. D n f(x j ) = f j+n f j n 2n x. j n=1. α m n = 2( 1) n (m!) 2 (m n)!(m + n)!. PPW = 2π k x 2 N + 1. i=0?d i,j. N/2} N + 1-dim.

256 Summary. D n f(x j ) = f j+n f j n 2n x. j n=1. α m n = 2( 1) n (m!) 2 (m n)!(m + n)!. PPW = 2π k x 2 N + 1. i=0?d i,j. N/2} N + 1-dim. 56 Summary High order FD Finite-order finite differences: Points per Wavelength: Number of passes: D n f(x j ) = f j+n f j n n x df xj = m α m dx n D n f j j n= α m n = ( ) n (m!) (m n)!(m + n)!. PPW =

More information

Stabilization and Acceleration of Algebraic Multigrid Method

Stabilization and Acceleration of Algebraic Multigrid Method Stabilization and Acceleration of Algebraic Multigrid Method Recursive Projection Algorithm A. Jemcov J.P. Maruszewski Fluent Inc. October 24, 2006 Outline 1 Need for Algorithm Stabilization and Acceleration

More information

Notes by Maksim Maydanskiy.

Notes by Maksim Maydanskiy. SPECTRAL FLOW IN MORSE THEORY. 1 Introduction Notes by Maksim Maydanskiy. Spectral flow is a general formula or computing the Fredholm index of an operator d ds +A(s) : L1,2 (R, H) L 2 (R, H) for a family

More information

Alternative correction equations in the Jacobi-Davidson method

Alternative correction equations in the Jacobi-Davidson method Chapter 2 Alternative correction equations in the Jacobi-Davidson method Menno Genseberger and Gerard Sleijpen Abstract The correction equation in the Jacobi-Davidson method is effective in a subspace

More information

Polynomial Approximation: The Fourier System

Polynomial Approximation: The Fourier System Polynomial Approximation: The Fourier System Charles B. I. Chilaka CASA Seminar 17th October, 2007 Outline 1 Introduction and problem formulation 2 The continuous Fourier expansion 3 The discrete Fourier

More information

Last Update: April 7, 201 0

Last Update: April 7, 201 0 M ath E S W inter Last Update: April 7, Introduction to Partial Differential Equations Disclaimer: his lecture note tries to provide an alternative approach to the material in Sections.. 5 in the textbook.

More information

We denote the space of distributions on Ω by D ( Ω) 2.

We denote the space of distributions on Ω by D ( Ω) 2. Sep. 1 0, 008 Distributions Distributions are generalized functions. Some familiarity with the theory of distributions helps understanding of various function spaces which play important roles in the study

More information

6.4 Krylov Subspaces and Conjugate Gradients

6.4 Krylov Subspaces and Conjugate Gradients 6.4 Krylov Subspaces and Conjugate Gradients Our original equation is Ax = b. The preconditioned equation is P Ax = P b. When we write P, we never intend that an inverse will be explicitly computed. P

More information

Krylov Subspaces. Lab 1. The Arnoldi Iteration

Krylov Subspaces. Lab 1. The Arnoldi Iteration Lab 1 Krylov Subspaces Lab Objective: Discuss simple Krylov Subspace Methods for finding eigenvalues and show some interesting applications. One of the biggest difficulties in computational linear algebra

More information

MATH 205C: STATIONARY PHASE LEMMA

MATH 205C: STATIONARY PHASE LEMMA MATH 205C: STATIONARY PHASE LEMMA For ω, consider an integral of the form I(ω) = e iωf(x) u(x) dx, where u Cc (R n ) complex valued, with support in a compact set K, and f C (R n ) real valued. Thus, I(ω)

More information

1 Extrapolation: A Hint of Things to Come

1 Extrapolation: A Hint of Things to Come Notes for 2017-03-24 1 Extrapolation: A Hint of Things to Come Stationary iterations are simple. Methods like Jacobi or Gauss-Seidel are easy to program, and it s (relatively) easy to analyze their convergence.

More information

Linear Algebra in Hilbert Space

Linear Algebra in Hilbert Space Physics 342 Lecture 16 Linear Algebra in Hilbert Space Lecture 16 Physics 342 Quantum Mechanics I Monday, March 1st, 2010 We have seen the importance of the plane wave solutions to the potentialfree Schrödinger

More information

arxiv: v1 [math.na] 19 Sep 2018

arxiv: v1 [math.na] 19 Sep 2018 SPECTRAL APPROACHES TO D-BAR PROBLEMS ARISING IN ELECTRICAL IMPEDANCE TOMOGRAPHY arxiv:89.8294v [math.na] 9 Sep 28 CHRISTIAN KLEIN AND NIKOLA STOILOV Abstract. In this work we present spectral approaches

More information

Computational Methods. Eigenvalues and Singular Values

Computational Methods. Eigenvalues and Singular Values Computational Methods Eigenvalues and Singular Values Manfred Huber 2010 1 Eigenvalues and Singular Values Eigenvalues and singular values describe important aspects of transformations and of data relations

More information

Introduction to Mobile Robotics Compact Course on Linear Algebra. Wolfram Burgard, Cyrill Stachniss, Maren Bennewitz, Diego Tipaldi, Luciano Spinello

Introduction to Mobile Robotics Compact Course on Linear Algebra. Wolfram Burgard, Cyrill Stachniss, Maren Bennewitz, Diego Tipaldi, Luciano Spinello Introduction to Mobile Robotics Compact Course on Linear Algebra Wolfram Burgard, Cyrill Stachniss, Maren Bennewitz, Diego Tipaldi, Luciano Spinello Vectors Arrays of numbers Vectors represent a point

More information

Applied Mathematics 205. Unit V: Eigenvalue Problems. Lecturer: Dr. David Knezevic

Applied Mathematics 205. Unit V: Eigenvalue Problems. Lecturer: Dr. David Knezevic Applied Mathematics 205 Unit V: Eigenvalue Problems Lecturer: Dr. David Knezevic Unit V: Eigenvalue Problems Chapter V.4: Krylov Subspace Methods 2 / 51 Krylov Subspace Methods In this chapter we give

More information

Chapter 7 Iterative Techniques in Matrix Algebra

Chapter 7 Iterative Techniques in Matrix Algebra Chapter 7 Iterative Techniques in Matrix Algebra Per-Olof Persson persson@berkeley.edu Department of Mathematics University of California, Berkeley Math 128B Numerical Analysis Vector Norms Definition

More information

Solving Ill-Posed Cauchy Problems in Three Space Dimensions using Krylov Methods

Solving Ill-Posed Cauchy Problems in Three Space Dimensions using Krylov Methods Solving Ill-Posed Cauchy Problems in Three Space Dimensions using Krylov Methods Lars Eldén Department of Mathematics Linköping University, Sweden Joint work with Valeria Simoncini February 21 Lars Eldén

More information

Iterative solvers for linear equations

Iterative solvers for linear equations Spectral Graph Theory Lecture 17 Iterative solvers for linear equations Daniel A. Spielman October 31, 2012 17.1 About these notes These notes are not necessarily an accurate representation of what happened

More information

Numerical Methods for Large-Scale Nonlinear Systems

Numerical Methods for Large-Scale Nonlinear Systems Numerical Methods for Large-Scale Nonlinear Systems Handouts by Ronald H.W. Hoppe following the monograph P. Deuflhard Newton Methods for Nonlinear Problems Springer, Berlin-Heidelberg-New York, 2004 Num.

More information