ANALYSIS OF THE PARALLEL SCHWARZ METHOD FOR GROWING CHAINS OF FIXED-SIZED SUBDOMAINS: PART I


ANALYSIS OF THE PARALLEL SCHWARZ METHOD FOR GROWING CHAINS OF FIXED-SIZED SUBDOMAINS: PART I

G. CIARAMELLA AND M. J. GANDER

Abstract. In implicit solvation models, the electrostatic contribution to the solvation energy can be estimated by solving a system of elliptic partial differential equations modeling the reaction potential. The domain of definition of such elliptic equations is the union of the van der Waals cavities corresponding to the atoms of the solute molecule. Therefore, the computations can naturally be performed using Schwarz methods, where each atom of the molecule corresponds to a subdomain, see [, 0, ]. In contrast to classical Schwarz theory, it was observed numerically that the convergence of the Schwarz method in this case does not depend on the number of subdomains, even without coarse correction. We prove this observation by analyzing the Schwarz iteration matrices in Fourier space and evaluating the corresponding norms in a simplified setting. In order to obtain our contraction results, we had to choose a specific iteration formulation, and we show that other formulations of the same algorithm can generate Schwarz iteration matrices with much larger norms, leading to the failure of norm arguments, even though the spectral radii are identical. By introducing a new optimality concept for Schwarz iteration operators with respect to error estimation, we finally show how to find Schwarz iteration matrix formulations which permit such small norm estimates.

Key words. Domain decomposition methods; Schwarz methods; chain of atoms; elliptic PDE; COSMO solvation model.

AMS subject classifications. 65N55, 65F0, 65N, 70-08, 35J57.

1. Introduction. Recent developments in physical and chemical applications are creating a large demand for numerical methods for the solution of complicated systems of equations, which are often used before rigorous numerical analysis results are available.
As a particular example we study here the new methodology that was recently presented in [] and supported by [0, ]. Based on a physical model approximation of solvation phenomena, called COSMO [, 6, 6], the authors introduced the so-called ddCOSMO, a new formulation of the Schwarz methods for the solution of solvation problems in which large molecular systems, given by chains of atoms, are involved, and each atom corresponds to a subdomain. The Schwarz methods are written in a boundary element form, see for example [] for a review of the application of boundary element methods, and no theoretical analysis of the algorithm is performed in [, 0, ]. The authors observe however in their numerical experiments an unusual convergence behavior of the Schwarz methods used: the convergence of the iterative procedure without coarse correction is in many cases independent of the number of atoms and thus subdomains, see for example [, Figure 0]. Schwarz methods are a mature field, see, e.g., [5, 8] and references therein, and it is well known that the convergence of Schwarz methods without a coarse space component depends in general, for elliptic problems, on the number of subdomains. We prove in what follows that in the specific case of [, 0, ], for an approximate geometrical setting, the convergence does indeed not depend on the number of atoms in the molecule, and is thus independent of the number of subdomains. This is our first main result, which we prove for an approximate 2-dimensional model that describes a chain of atoms whose domains are approximated by rectangles.

We acknowledge the financial support of the Swiss National Science Foundation (SNSF) funding the projects PASC-DIAPHANE and /. Section de Mathématiques, University of Geneva, Switzerland (gabriele.ciaramella@unige.ch). Section de Mathématiques, University of Geneva, Switzerland (martin.gander@unige.ch).

Our analysis leads us to estimate the spectral radius of iteration operators. This

[Fig. 1. Grid of points used to define the geometry of the problem and our rectangular model Ω_1, ..., Ω_N for the chain of atoms.]

is in general a difficult task, and it is often easier to estimate norms. For a given method, different iteration operator formulations are possible that have the same spectral radius, but the iteration operator norms are in general different. This fact can strongly influence the convergence analysis, which can in the worst case even be inconclusive. To the best of our knowledge, there has been no attempt so far to characterize, describe and compare different such iteration operator formulations in terms of error estimation. Our second main contribution is a precise characterization of the best Schwarz iteration operator formulation with respect to error estimation. Based on a new optimality concept for error estimation, this formulation allowed us to obtain our convergence result independent of the number of subdomains. Our paper is organized as follows: in Section 2, we introduce the approximate model that describes a chain of atoms whose 2-dimensional circular domains are approximated by rectangular domains, and define the parallel Schwarz method for the solution to this approximate model. Section 3 is devoted to constructing Schwarz iteration matrices in Fourier space. In Section 4, we prove convergence of the parallel Schwarz method independently of the number of subdomains by estimating norms of Schwarz iteration matrices. In Section 5, we show that different iteration formulations are possible and that they can lead to Schwarz iteration matrices having very different norms, even though the spectral radii are the same. We then introduce a new concept of optimality of transfer operators with respect to error estimation, and we prove optimality of the Schwarz iteration operator we used in Section 3.
We illustrate our analysis with numerical experiments in Section 6, and present our conclusions in Section 7.

2. Parallel Schwarz method for a solvation model. We now define our simplified solvation model consisting of a chain of N rectangular atoms, and study a parallel one-level Schwarz method for computing its state. To define the domain of each atom, let L ∈ R_+ and δ ∈ R_+, and define the grid points a_j = (j−1)L − δ for j = 1, ..., N+1, and b_j = jL + δ for j = 0, ..., N, as shown in Figure 1. The domain of the j-th atom of the chain is the rectangle Ω_j := (a_j, b_j) × (0, L̂). In addition, some forces f_j and g_j act in the interior and on the boundary of the j-th atom. The chain of atoms is also shown in Figure 1. The state u_j(x, y) of the j-th atom is governed by the boundary value problem

(1)  Δu_j = f_j in Ω_j,
     u_j(·, 0) = g_j(·, 0),  u_j(·, L̂) = g_j(·, L̂),
     u_j = u_{j−1} in [a_j, b_{j−1}] × (0, L̂),
     u_j = u_{j+1} in [a_{j+1}, b_j] × (0, L̂),

where the last two conditions describe the interaction of the j-th atom with atom j−1 and atom j+1, and the function f_j is equal to f_{j+1} or f_{j−1} in the corresponding overlap. We note that in the original model considered in [0] f_j = 0, but we consider here the more general case of possibly nonzero f_j. Notice that problem (1) is defined for j = 2, ..., N−1. The states of the first and the last atom are governed by

(2)  Δu_1 = f_1 in Ω_1,
     u_1(·, 0) = g_1(·, 0),  u_1(·, L̂) = g_1(·, L̂),
     u_1(a_1, ·) = g_1(a_1, ·),
     u_1 = u_2 in [a_2, b_1] × (0, L̂),

and

(3)  Δu_N = f_N in Ω_N,
     u_N(·, 0) = g_N(·, 0),  u_N(·, L̂) = g_N(·, L̂),
     u_N(b_N, ·) = g_N(b_N, ·),
     u_N = u_{N−1} in [a_N, b_{N−1}] × (0, L̂).

For a large number of atoms N, it is natural to apply a domain-decomposition method to solve (1)-(3). In particular, we focus on the parallel Schwarz method,

     Δu_j^n = f_j in Ω_j,
     u_j^n(·, 0) = g_j(·, 0),  u_j^n(·, L̂) = g_j(·, L̂),
     u_j^n(a_j, ·) = u_{j−1}^{n−1}(a_j, ·),  u_j^n(b_j, ·) = u_{j+1}^{n−1}(b_j, ·),

for j = 2, ..., N−1, and

     Δu_1^n = f_1 in Ω_1,
     u_1^n(·, 0) = g_1(·, 0),  u_1^n(·, L̂) = g_1(·, L̂),
     u_1^n(a_1, ·) = g_1(a_1, ·),  u_1^n(b_1, ·) = u_2^{n−1}(b_1, ·),

and

     Δu_N^n = f_N in Ω_N,
     u_N^n(·, 0) = g_N(·, 0),  u_N^n(·, L̂) = g_N(·, L̂),
     u_N^n(a_N, ·) = u_{N−1}^{n−1}(a_N, ·),  u_N^n(b_N, ·) = g_N(b_N, ·).

Notice that δ controls the overlap between two consecutive subdomains (the overlap region [a_{j+1}, b_j] has width 2δ), and δ can assume values in the interval (0, L/2). We remark that the Schwarz method above corresponds to the Jacobi version of the Schwarz method used in [], but applied to a chain of atoms defined on rectangular domains.

In order to analyze the convergence of the parallel Schwarz method, we denote by e_j^n the error between the exact solution u_j and the approximate solution computed at iteration n, that is e_j^n := u_j − u_j^n. By linearity, the parallel Schwarz method for the errors is

(4)  Δe_j^n = 0 in Ω_j,
     e_j^n(·, 0) = 0,  e_j^n(·, L̂) = 0,
     e_j^n(a_j, ·) = e_{j−1}^{n−1}(a_j, ·),  e_j^n(b_j, ·) = e_{j+1}^{n−1}(b_j, ·),

for j = 2, ..., N−1, and

(5)  Δe_1^n = 0 in Ω_1,
     e_1^n(·, 0) = 0,  e_1^n(·, L̂) = 0,
     e_1^n(a_1, ·) = 0,  e_1^n(b_1, ·) = e_2^{n−1}(b_1, ·),

and

     Δe_N^n = 0 in Ω_N,
     e_N^n(·, 0) = 0,  e_N^n(·, L̂) = 0,
     e_N^n(a_N, ·) = e_{N−1}^{n−1}(a_N, ·),  e_N^n(b_N, ·) = 0.

In what follows, we study the convergence of the parallel Schwarz method, that is, e_j^n → 0 as n → ∞. Our convergence analysis is performed using a Fourier sine

series to solve the elliptic problems (4)-(5). This technique allows us to study the convergence of the parallel Schwarz method for each coefficient of the Fourier series. In particular, for each mode we can construct a Schwarz iteration matrix T that is used to generate a sequence of errors in the Fourier coefficients v^n as v^n = T v^{n−2}. Notice that the step from n−2 to n is considered in order to analyze the decay of the error corresponding to the j-th atom. This fact will become clear in the next section, and is common for the analysis of Schwarz type methods; see, e.g., [7].

3. Constructing the Schwarz iteration matrix. To construct the Schwarz iteration matrix corresponding to the parallel Schwarz method (4)-(5), we use a sine series expansion,

(6)  e_j^n(x, y) = Σ_{m=1}^∞ v_j^n(x, k) sin(ky),  k = πm/L̂,

where the Fourier coefficients v_j^n(x, k) are given by

(7)  v_j^n(x, k) = c_j(k, δ)e^{kx} + d_j(k, δ)e^{−kx},

and c_j(k, δ) and d_j(k, δ) are computed using the conditions v_j^n(a_j, k) = v_{j−1}^{n−1}(a_j, k) and v_j^n(b_j, k) = v_{j+1}^{n−1}(b_j, k), which are obtained by transforming the transmission conditions. Notice that k is parametrized by m, hence one should formally write k_m, but we drop the subscript m for simplicity. We obtain that

(8)  v_j^n(x, k) = e^{kx} e^{−jkL} [g_A(k, δ) v_{j+1}^{n−1}(b_j, k) − ḡ_A(k, δ) v_{j−1}^{n−1}(a_j, k)]
              + e^{−kx} e^{jkL} [g_B(k, δ) v_{j−1}^{n−1}(a_j, k) − ḡ_B(k, δ) v_{j+1}^{n−1}(b_j, k)],

with

(9)  g_A(k, δ) := e^{3kδ+2kL}/(e^{4kδ+2kL} − 1),  ḡ_A(k, δ) := e^{kδ+kL}/(e^{4kδ+2kL} − 1),
     g_B(k, δ) := e^{3kδ+kL}/(e^{4kδ+2kL} − 1),  ḡ_B(k, δ) := e^{kδ}/(e^{4kδ+2kL} − 1).

In a similar fashion we solve the problems (5) on the left and right and obtain

     v_1^n(x, k) = w_1(x, k; δ) v_2^{n−1}(b_1, k)

with

     w_1(x, k; δ) := e^{kδ+kL} [e^{2kδ} e^{kx} − e^{−kx}]/(e^{4kδ+2kL} − 1),

and

     v_N^n(x, k) = z_N(x, k; δ) v_{N−1}^{n−1}(a_N, k)

with

     z_N(x, k; δ) := e^{kδ+kL} [e^{2kδ} e^{kNL} e^{−kx} − e^{−kNL} e^{kx}]/(e^{4kδ+2kL} − 1).
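The mode-wise subdomain solves above can be turned into a small numerical experiment. The following sketch (our own illustration, not the authors' code; function and variable names are ours) iterates the parallel Schwarz method for a single Fourier mode k on the chain geometry a_j = (j−1)L − δ, b_j = jL + δ, using the equivalent sinh interpolation formula for the exact solution of v'' = k²v on each subdomain:

```python
import math

# Illustrative sketch: run the parallel Schwarz method for a single Fourier mode
# k on the chain a_j = (j-1)L - delta, b_j = jL + delta.  Each Fourier
# coefficient solves v'' = k^2 v on (a_j, b_j), so the subdomain solve is the
# explicit sinh interpolation between the two interface values inherited from
# the neighbors at the previous iteration.

def schwarz_mode_rate(N, L=1.0, Lhat=1.0, delta=0.1, m=1, iters=60):
    k = math.pi * m / Lhat                       # mode of the sine expansion
    a = [(j - 1) * L - delta for j in range(1, N + 1)]
    b = [j * L + delta for j in range(1, N + 1)]
    sh = math.sinh(k * (L + 2 * delta))          # sinh(k (b_j - a_j)), same for all j
    # interface traces: left[j] ~ v_{j+1}(b_j), right[j] ~ v_{j+1}(a_{j+2})
    left, right = [1.0] * N, [1.0] * N
    errs = []
    for _ in range(iters):
        nl, nr = [0.0] * N, [0.0] * N
        for j in range(N):
            A = right[j - 1] if j > 0 else 0.0     # value taken from the left neighbor
            B = left[j + 1] if j < N - 1 else 0.0  # value taken from the right neighbor
            def v(x, A=A, B=B, aj=a[j], bj=b[j]):
                # exact solution of v'' = k^2 v with v(aj) = A, v(bj) = B
                return (A * math.sinh(k * (bj - x)) + B * math.sinh(k * (x - aj))) / sh
            if j > 0:
                nl[j] = v(b[j - 1])                # new trace read by the left neighbor
            if j < N - 1:
                nr[j] = v(a[j + 1])                # new trace read by the right neighbor
        left, right = nl, nr
        errs.append(max(map(abs, left + right)))
    # observed contraction factor per iteration (geometric mean of the last 5 steps)
    return (errs[-1] / errs[-6]) ** 0.2
```

For L = L̂ = 1, δ = 0.1 and m = 1, the observed contraction factor is essentially identical for N = 10 and N = 40 atoms, in line with the N-independent contraction proved in Section 4.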
Our convergence analysis focuses on the Fourier coefficients v_j^n(x, k), and we thus rewrite (8) in the form

(10)  v_j^n(x, k) = w_j(x, k; δ) v_{j+1}^{n−1}(b_j, k) + z_j(x, k; δ) v_{j−1}^{n−1}(a_j, k),

where

(11)  w_j(x, k; δ) := e^{kx} e^{−jkL} g_A(k, δ) − e^{−kx} e^{jkL} ḡ_B(k, δ)

and

(12)  z_j(x, k; δ) := e^{−kx} e^{jkL} g_B(k, δ) − e^{kx} e^{−jkL} ḡ_A(k, δ).

By applying (10) recursively, we obtain

(13)  v_j^n(x, k) = w_j(x, k; δ) w_{j+1}(b_j, k; δ) v_{j+2}^{n−2}(b_{j+1}, k)
                + w_j(x, k; δ) z_{j+1}(b_j, k; δ) v_j^{n−2}(a_{j+1}, k)
                + z_j(x, k; δ) w_{j−1}(a_j, k; δ) v_j^{n−2}(b_{j−1}, k)
                + z_j(x, k; δ) z_{j−1}(a_j, k; δ) v_{j−2}^{n−2}(a_{j−1}, k).

Evaluating (13) at x = a_j (for atom j−1) and at x = b_j (for atom j+1) yields

(14)  v_{j−1}^n(a_j, k) = w_a(k, δ) w_b(k, δ) v_{j+1}^{n−2}(b_j, k) + w_a(k, δ) z_b(k, δ) v_{j−1}^{n−2}(a_j, k)
                      + z_a(k, δ) w_a(k, δ) v_{j−1}^{n−2}(b_{j−2}, k) + z_a(k, δ) z_a(k, δ) v_{j−3}^{n−2}(a_{j−2}, k),
      v_{j+1}^n(b_j, k) = w_b(k, δ) w_b(k, δ) v_{j+3}^{n−2}(b_{j+2}, k) + w_b(k, δ) z_b(k, δ) v_{j+1}^{n−2}(a_{j+2}, k)
                      + z_b(k, δ) w_a(k, δ) v_{j+1}^{n−2}(b_j, k) + z_b(k, δ) z_a(k, δ) v_{j−1}^{n−2}(a_j, k),

where we used the fact that

(15)  w_a(k, δ) := w_{j−1}(a_j, k; δ) = w_j(a_{j+1}, k; δ),  w_b(k, δ) := w_{j+1}(b_j, k; δ) = w_j(b_{j−1}, k; δ),
      z_a(k, δ) := z_{j−1}(a_j, k; δ) = z_j(a_{j+1}, k; δ),  z_b(k, δ) := z_{j+1}(b_j, k; δ) = z_j(b_{j−1}, k; δ),

for j = 2, ..., N−1. Similarly, since w_1(a_2, k; δ) = w_a(k, δ) and z_N(b_{N−1}, k; δ) = z_b(k, δ), for the first and the last atom we get

(16)  v_1^n(a_2, k) = w_a(k, δ) w_b(k, δ) v_3^{n−2}(b_2, k) + w_a(k, δ) z_b(k, δ) v_1^{n−2}(a_2, k),
      v_N^n(b_{N−1}, k) = z_b(k, δ) w_a(k, δ) v_N^{n−2}(b_{N−1}, k) + z_b(k, δ) z_a(k, δ) v_{N−2}^{n−2}(a_{N−1}, k).

For the atoms j = 2 and j = N−1 we have

      v_2^n(a_3, k) = w_a(k, δ) z_a(k, δ) v_2^{n−2}(b_1, k) + w_a(k, δ) z_b(k, δ) v_2^{n−2}(a_3, k)
                  + w_a(k, δ) w_b(k, δ) v_4^{n−2}(b_3, k),
      v_2^n(b_1, k) = w_a(k, δ) z_b(k, δ) v_2^{n−2}(b_1, k) + w_b(k, δ) z_b(k, δ) v_2^{n−2}(a_3, k)
                  + w_b(k, δ) w_b(k, δ) v_4^{n−2}(b_3, k),

and

      v_{N−1}^n(a_N, k) = w_a(k, δ) z_b(k, δ) v_{N−1}^{n−2}(a_N, k) + z_a(k, δ) w_a(k, δ) v_{N−1}^{n−2}(b_{N−2}, k)
                      + z_a(k, δ) z_a(k, δ) v_{N−3}^{n−2}(a_{N−2}, k),
      v_{N−1}^n(b_{N−2}, k) = w_b(k, δ) z_b(k, δ) v_{N−1}^{n−2}(a_N, k) + z_b(k, δ) w_a(k, δ) v_{N−1}^{n−2}(b_{N−2}, k)
                          + z_b(k, δ) z_a(k, δ) v_{N−3}^{n−2}(a_{N−2}, k).
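The coefficients and interface quantities just introduced can be sanity-checked numerically. The sketch below (our own check, using the exponents exactly as written in (9), (11), (12) and (15) above, which are a reconstruction) verifies that v = w_j·B + z_j·A interpolates the prescribed interface values A at a_j and B at b_j, and that w_a = z_b and w_b = z_a:

```python
import math

# Numerical check of the coefficients g_A, gbar_A, g_B, gbar_B in (9) and of the
# interface quantities w_a, w_b, z_a, z_b in (15), for one mode k and one atom j.
# Exponents are taken from the reconstructed formulas above.

L, Lhat, delta, m, j = 1.0, 1.0, 0.15, 1, 3
k = math.pi * m / Lhat
den = math.exp(4*k*delta + 2*k*L) - 1.0
gA  = math.exp(3*k*delta + 2*k*L) / den   # g_A
gAb = math.exp(k*delta + k*L) / den       # gbar_A
gB  = math.exp(3*k*delta + k*L) / den     # g_B
gBb = math.exp(k*delta) / den             # gbar_B

a = lambda i: (i - 1) * L - delta         # grid points, 1-based atom index
b = lambda i: i * L + delta

def w(j, x):   # w_j(x, k; delta) as in (11)
    return math.exp(k*x) * math.exp(-j*k*L) * gA - math.exp(-k*x) * math.exp(j*k*L) * gBb

def z(j, x):   # z_j(x, k; delta) as in (12)
    return math.exp(-k*x) * math.exp(j*k*L) * gB - math.exp(k*x) * math.exp(-j*k*L) * gAb

# v(x) = w_j(x)*B + z_j(x)*A must interpolate A at a_j and B at b_j
A, B = 0.7, -1.3
v = lambda x: w(j, x) * B + z(j, x) * A
assert abs(v(a(j)) - A) < 1e-9 and abs(v(b(j)) - B) < 1e-9

# interface quantities (15) and the identities of Lemma 1
w_a, w_b = w(j, a(j + 1)), w(j + 1, b(j))
z_a, z_b = z(j, a(j + 1)), z(j + 1, b(j))
assert abs(w_a - z_b) < 1e-12 and abs(w_b - z_a) < 1e-12
# closed forms of Lemma 1 (see (17) below)
assert abs(w_a - (math.exp(2*k*delta + 2*k*L) - math.exp(2*k*delta)) / den) < 1e-12
assert abs(w_b - (math.exp(4*k*delta + k*L) - math.exp(k*L)) / den) < 1e-12
```

The same check passes for other admissible values of δ ∈ (0, L/2), m and j, which is a useful safeguard given how easily the exponents in (9) can be miscopied.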
Before we present the Schwarz iteration matrix, we prove the following lemma which is useful to obtain a simpler representation for it.

Lemma 1. For any (k, δ) ∈ (0, ∞) × [0, L], the quantities defined in (15) satisfy w_a(k, δ) ≥ 0 and w_b(k, δ) ≥ 0. Moreover w_a(k, δ) = z_b(k, δ) and w_b(k, δ) = z_a(k, δ).

Proof. A direct calculation using (11) and (9) shows that

(17)  w_a(k, δ) = (e^{2kδ+2kL} − e^{2kδ})/(e^{4kδ+2kL} − 1)  and  w_b(k, δ) = (e^{4kδ+kL} − e^{kL})/(e^{4kδ+2kL} − 1).

The first statement follows now from (17). Then, a direct calculation involving (12) and (9) allows us to compute that

      z_b(k, δ) = (e^{2kδ+2kL} − e^{2kδ})/(e^{4kδ+2kL} − 1)  and  z_a(k, δ) = (e^{4kδ+kL} − e^{kL})/(e^{4kδ+2kL} − 1).

Comparing these with (17), the second statement follows.

We are now ready to complete the construction of the Schwarz iteration matrix. By defining v^n(k) ∈ R^{2N} as

(18)  v^n(k) := (0, v_2^n(b_1, k), v_1^n(a_2, k), v_3^n(b_2, k), ..., v_{j−1}^n(a_j, k), v_{j+1}^n(b_j, k), ..., v_{N−2}^n(a_{N−1}, k), v_N^n(b_{N−1}, k), v_{N−1}^n(a_N, k), 0),

equations (14) and (16) can be written in the form v^n(k) = T(k, δ) v^{n−2}(k), where (using Lemma 1) the Schwarz iteration matrix T(k, δ) ∈ R^{2N×2N} is the banded matrix whose rows collect the coefficients appearing in (14) and (16); after the substitutions w_a = z_b and z_a = w_b of Lemma 1, each of its nonzero entries is one of the products z_b², z_b w_b or w_b². Notice that we added a first and a last zero entry in the vector v^n(k), leading to first and last zero rows and columns in the Schwarz iteration matrix T(k, δ), because this reveals a block structure corresponding to the atoms in the chain that will be useful later.

4. Convergence analysis of the parallel Schwarz method. We now prove that the parallel Schwarz method (4)-(5) converges independently of the number of atoms N. We start by proving essential properties of the Schwarz iteration matrix T(k, δ), using that k > 0, which always holds because of (6), i.e.
the Dirichlet boundary conditions on the boundary of the atoms.

Lemma 2. The following statements hold:
(a) For any δ > 0, the map k ∈ (0, ∞) ↦ (z_b + w_b)(k, δ) ∈ R is strictly monotonically decreasing.

(b) For any k > 0, the map δ ∈ (0, L/2) ↦ (z_b + w_b)(k, δ) ∈ R is strictly monotonically decreasing.
(c) For any k > 0, we have (z_b + w_b)(k, 0) = 1, ∂(z_b + w_b)/∂k (k, 0) = 0 and ∂(z_b + w_b)/∂δ (k, 0) < 0.

Proof. From Lemma 1 and using (17), we obtain

(19)  (z_b + w_b)(k, δ) = (e^{2kδ+2kL} − e^{2kδ} + e^{4kδ+kL} − e^{kL})/(e^{4kδ+2kL} − 1)
                      = ((e^{2kδ+kL} − 1)e^{2kδ} + (e^{2kδ+kL} − 1)e^{kL})/((e^{2kδ+kL} − 1)(e^{2kδ+kL} + 1))
                      = (e^{2kδ} + e^{kL})/(e^{2kδ+kL} + 1).

Differentiating (19) with respect to k we get

(20)  ∂(z_b + w_b)/∂k (k, δ) = −(L e^{4kδ+kL} + 2δ e^{2kδ+2kL} − 2δ e^{2kδ} − L e^{kL})/(e^{2kδ+kL} + 1)².

Notice that e^{4kδ+kL} − e^{kL} > 0 and e^{2kδ+2kL} − e^{2kδ} > 0 for any k > 0 and δ > 0. Hence, we have that ∂(z_b + w_b)/∂k (k, δ) < 0, and the statement (a) follows. The claim (b) can be proved in a similar way: we have that

(21)  ∂(z_b + w_b)/∂δ (k, δ) = −2k(e^{2kδ+2kL} − e^{2kδ})/(e^{2kδ+kL} + 1)².

Notice that e^{2kδ+2kL} − e^{2kδ} > 0 for any k > 0 and δ ∈ (0, L/2), which implies ∂(z_b + w_b)/∂δ (k, δ) < 0, and the statement (b) follows. The claim (c) follows by continuity of (19), (20) and (21) with respect to δ and passing to the limit for δ → 0.

We can now prove that the parallel Schwarz method for the solution of (4)-(5) converges independently of the number of atoms N, by proving that the spectral radius of the Schwarz iteration matrix T(k, δ) is bounded by a function of k and δ that does not depend on N.

Theorem 3. For any (k, δ) ∈ (0, ∞) × (0, L) we have the bound

      ρ(T(k, δ)) ≤ ‖T(k, δ)‖_∞ ≤ λ(k, δ) < 1,

where ρ(T(k, δ)) is the spectral radius of T(k, δ) and

      λ(k, δ) := ((e^{2kδ} + e^{kL})/(e^{2kδ+kL} + 1))²,

which is independent of the number N of atoms. Moreover, for N ≥ 5 it holds that ‖T(k, δ)‖_∞ = λ(k, δ).

Proof. By Lemma 1, all the entries of T(k, δ) are positive. The sum of the entries of the j-th row, for the rows not affected by the first and the last atoms, is

      z_b w_b + z_b z_b + z_b w_b + w_b w_b = (z_b + w_b)²,

where we omitted the dependence on k and δ for simplicity. Moreover, it follows from Lemma 1 that

(22)  Σ_l (T(k, δ))_{j,l} ≤ (z_b(k, δ) + w_b(k, δ))²  for j = 1, ..., 2N.

Hence we obtain for the infinity norm of the Schwarz iteration matrix

(23)  ‖T(k, δ)‖_∞ ≤ (z_b(k, δ) + w_b(k, δ))²,

and equality holds if N ≥ 5. Now using (19) and Lemma 2 we get

(24)  z_b(k, δ) + w_b(k, δ) = (e^{2kδ} + e^{kL})/(e^{2kδ+kL} + 1) < 1,

and combining (23) with (24) concludes the proof.

We now use Theorem 3 to obtain convergence in the L² norm using Parseval's identity:

Corollary 4. Under the assumptions of Theorem 3 we have that

(25)  ‖T(k, δ)‖₂ ≤ ‖T(k, δ)‖_∞.

In addition, with c := max_k ‖T(k, δ)‖_∞, the following inequality holds,

      Σ_j (‖e_j^n(a_j, ·)‖²_{L²} + ‖e_j^n(b_j, ·)‖²_{L²}) ≤ c² Σ_j (‖e_j^{n−2}(a_j, ·)‖²_{L²} + ‖e_j^{n−2}(b_j, ·)‖²_{L²}).

Proof. Since the Schwarz iteration matrix satisfies T(k, δ)ᵀ = T(k, δ), (25) follows by applying the standard estimate ‖T(k, δ)‖₂² ≤ ‖T(k, δ)‖₁ ‖T(k, δ)‖_∞. The second statement can be shown using Parseval's identity.

Next, our aim is to obtain a sharper estimate of the spectral radius. According to the estimate (22), the effect of the inner atoms dominates the effect of the first and the last atom of the chain. We first notice that T(k, δ) can be written as a sum, T(k, δ) = T₁(k, δ) + T₂(k, δ), where T₁(k, δ) is a symmetric banded matrix whose nonzero entries are z_b² and z_b w_b,

and T₂(k, δ) is a matrix whose only nonzero entries are equal to w_b². Under the action of an appropriate permutation matrix P, the matrix T₁(k, δ) can be transformed into T̂₁(k, δ) = P T₁(k, δ) Pᵀ with block-diagonal structure,

      T̂₁(k, δ) = ( T̂₁¹(k, δ)  0 ; 0  T̂₁²(k, δ) ),

where T̂₁¹(k, δ) and T̂₁²(k, δ) are tridiagonal matrices with diagonal entries z_b² and off-diagonal entries w_b z_b. The block structure of T̂₁ corresponds to the specific ordering of the vector v^n defined in (18). In particular, v^n = (v̄_1^n, v̄_2^n, ..., v̄_N^n), where v̄_j^n = (v_{j−1}^n(a_j, k), v_{j+1}^n(b_j, k)), with v̄_1^n = (0, v_2^n(b_1, k)) and v̄_N^n = (v_{N−1}^n(a_N, k), 0). In order to obtain this particular block/tridiagonal structure, the permutation matrix P is such that

      P v^n = (v̄_1^n, v̄_3^n, v̄_5^n, ..., v̄_2^n, v̄_4^n, v̄_6^n, ...).

This transformation allows us to identify the two Schwarz subsequences {v̄_1^n, v̄_3^n, v̄_5^n, ...} and {v̄_2^n, v̄_4^n, v̄_6^n, ...}, as pointed out in [9]. These subsequences correspond to a well known red-black ordering of the subdomains, see for example [5, page 8]. Since T₂(k, δ)ᵀ T₂(k, δ) is a diagonal matrix having entries equal to w_b⁴ and zero, and denoting by µ the maximum between the dimension of the non-zero block of T̂₁¹ and the dimension of T̂₁², we have

(26)  ‖T₁(k, δ)‖₂ = max_{1 ≤ j ≤ µ} { z_b² + 2 w_b z_b cos(jπ/(µ + 1)) },  ‖T₂(k, δ)‖₂ = w_b².
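These norm and spectral-radius statements can be checked numerically. In the following sketch (our own illustration under the same chain geometry; the matrix T is built by applying the one-step interface solves twice to unit vectors rather than from its entries, and λ is written out under our reconstruction of Theorem 3), we compute ‖T(k, δ)‖_∞ and an approximation of ρ(T(k, δ)) by power iteration for two different values of N:

```python
import math

# Build the one-step interface map S for one Fourier mode column by column, form
# the two-step Schwarz iteration matrix T = S^2, and compare its infinity norm
# and spectral radius with lambda(k, delta) from Theorem 3.

def step(state, N, k, L, delta):
    left, right = state[:N], state[N:]      # left[j] ~ v_{j+1}(b_j), right[j] ~ v_{j+1}(a_{j+2})
    sh = math.sinh(k * (L + 2 * delta))
    a = lambda i: (i - 1) * L - delta       # 1-based grid points
    b = lambda i: i * L + delta
    nl, nr = [0.0] * N, [0.0] * N
    for j in range(1, N + 1):               # atoms numbered 1..N
        A = right[j - 2] if j > 1 else 0.0  # v_{j-1}(a_j)
        B = left[j] if j < N else 0.0       # v_{j+1}(b_j)
        def v(x, A=A, B=B, aj=a(j), bj=b(j)):
            return (A * math.sinh(k * (bj - x)) + B * math.sinh(k * (x - aj))) / sh
        if j > 1:
            nl[j - 1] = v(b(j - 1))
        if j < N:
            nr[j - 1] = v(a(j + 1))
    return nl + nr

def schwarz_matrix_norms(N, k, L, delta):
    n = 2 * N
    unit = lambda c: [1.0 if i == c else 0.0 for i in range(n)]
    S = [step(unit(c), N, k, L, delta) for c in range(n)]   # columns of S
    apply_cols = lambda M, x: [sum(M[c][r] * x[c] for c in range(n)) for r in range(n)]
    T = [apply_cols(S, col) for col in S]                   # columns of T = S^2
    norm_inf = max(sum(abs(T[c][r]) for c in range(n)) for r in range(n))
    x = [1.0] * n                                           # power iteration on the
    for _ in range(300):                                    # nonnegative matrix T
        x = apply_cols(T, x)
        mx = max(abs(t) for t in x) or 1.0
        x = [t / mx for t in x]
    rho = max(abs(t) for t in apply_cols(T, x))
    return norm_inf, rho

L, delta, k = 1.0, 0.1, math.pi
lam = ((math.exp(2*k*delta) + math.exp(k*L)) / (math.exp(2*k*delta + k*L) + 1.0)) ** 2
```

With these parameters, the computed ‖T‖_∞ coincides with λ(k, δ) for every N ≥ 5, and ρ(T) ≤ ‖T‖_∞ as in the chain of estimates (28) below, independently of N.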

Theorem 5. Let N ≥ 4. Then, for any k > 0 and δ > 0, the estimate

      ρ(T(k, δ)) ≤ γ(k, δ, µ) ≤ ‖T(k, δ)‖_∞ < 1

holds with

(27)  γ(k, δ, µ) := max_{1 ≤ j ≤ µ} { z_b²(k, δ) + 2 w_b(k, δ) z_b(k, δ) cos(jπ/(µ + 1)) } + w_b²(k, δ),

where µ is the maximum between the dimension of the non-zero block of T̂₁¹ and the dimension of T̂₁².

Proof. Using the triangle inequality and (26), we get

      ‖T(k, δ)‖₂ ≤ ‖T₁(k, δ)‖₂ + ‖T₂(k, δ)‖₂ = γ(k, δ, µ).

The claim follows by noticing that

      γ(k, δ, µ) ≤ z_b² + 2 w_b z_b + w_b² = (z_b + w_b)² = ‖T(k, δ)‖_∞,

and using Theorem 3.

Using Theorems 3 and 5 and Corollary 4, we thus obtain the estimate

(28)  ρ(T) ≤ ‖T‖₂ ≤ γ(k, δ, µ) ≤ ‖T‖_∞ = λ(k, δ) < 1.

Example. Consider a chain of N = 5 atoms. The corresponding matrix is T(k, δ) ∈ R^{10×10}. Denoting by P_{j,k} ∈ R^{5×5} the matrix performing the permutation of the j-th row with the k-th row, and P := (P_{3,4} P_{4,5} P_{2,3}) ⊗ I, where ⊗ denotes the Kronecker product and I is the identity, we get

      P(v̄_1^n, v̄_2^n, v̄_3^n, v̄_4^n, v̄_5^n) = (v̄_1^n, v̄_3^n, v̄_5^n, v̄_2^n, v̄_4^n),

and the matrix T̂₁(k, δ) is obtained by T̂₁(k, δ) = P T₁(k, δ) Pᵀ. An intuitive and pictorial representation of the decomposition of T can be obtained with the help of graph theory [3, 5]. In particular, we can associate to T an adjacency matrix A_d defined by

      (A_d)_{jk} := 1 if (T)_{jk} ≠ 0 and j ≠ k,  and (A_d)_{jk} := 0 otherwise.

A well-known result says that the graph corresponding to an adjacency matrix A_d ∈ R^{m×m} is disconnected if and only if the matrix Y := A_d + A_d² + ... + A_d^m has at least one zero entry; see, e.g., [5, Corollary B page 6]. A simple calculation shows that in our example the matrix Y has zero entries, which means that the corresponding graph is not connected. Moreover, as shown in Figure 2, a simple plot shows that the graph corresponding to A_d has only two connected components. These components correspond to T̂₁¹ and T̂₁², respectively.

Fig. 2. Graphs corresponding to the adjacency matrices obtained from T (left) and from T̂ = P T Pᵀ (right).
A comparison between the two pictures shows that the action of the matrix P corresponds only to a relabeling of the nodes, revealing the red-black ordering.
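The graph-theoretic criterion used in the example is easy to reproduce. The sketch below is a generic illustration (a plain path graph stands in for the one-step interaction pattern, not the actual matrices of the example): squaring the one-step pattern disconnects the index set into the two red-black components, which the criterion Y := A_d + A_d² + ... + A_d^m detects through its zero entries.

```python
# Generic illustration of the connectivity criterion: the two-step interaction
# pattern of a path-like one-step pattern splits into two components (the
# red-black ordering).

n = 10
A = [[1 if abs(i - j) == 1 else 0 for j in range(n)] for i in range(n)]  # path graph

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

A2 = matmul(A, A)  # two-step interactions
Ad = [[1 if A2[i][j] and i != j else 0 for j in range(n)] for i in range(n)]  # adjacency

# Y := Ad + Ad^2 + ... + Ad^n has a zero entry iff the graph is disconnected
P, Y = [row[:] for row in Ad], [row[:] for row in Ad]
for _ in range(n - 1):
    P = matmul(P, Ad)
    Y = [[Y[i][j] + P[i][j] for j in range(n)] for i in range(n)]

disconnected = any(Y[i][j] == 0 for i in range(n) for j in range(n))

# connected components via graph traversal: expect the even and the odd indices
def component(start):
    seen, stack = {start}, [start]
    while stack:
        u = stack.pop()
        for w in range(n):
            if Ad[u][w] and w not in seen:
                seen.add(w)
                stack.append(w)
    return seen

comps = {frozenset(component(i)) for i in range(n)}
```

Here `disconnected` is True and `comps` contains exactly the two parity classes, mirroring the two blocks T̂₁¹ and T̂₁² of the example.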

5. Optimality of the Schwarz iteration matrix for error estimation. The Schwarz iteration matrix T presented in Section 3, and also used for example in [0, 9], is not the only possible one. It is often easier to construct the Schwarz iteration matrix that acts on the constants of the formal solution of the differential equation considered, see for example [4, 6]. Doing this in our case, we obtain from (7)-(8) that a formal solution to (4) can be written as

(29)  v_j^n(x, k) = e^{k(x−jL)} c_j^n(k, δ) + e^{−k(x−jL)} d_j^n(k, δ).

By using the boundary conditions v_j^n(a_j, k) = v_{j−1}^{n−1}(a_j, k) and v_j^n(b_j, k) = v_{j+1}^{n−1}(b_j, k), we obtain the system of equations

(30)  D (c_j^n, d_j^n)ᵀ(k, δ) = L (c_{j−1}^{n−1}, d_{j−1}^{n−1})ᵀ(k, δ) + R (c_{j+1}^{n−1}, d_{j+1}^{n−1})ᵀ(k, δ),

where

      D := ( e^{−k(L+δ)}  e^{k(L+δ)} ; e^{kδ}  e^{−kδ} ),  L := ( e^{−kδ}  e^{kδ} ; 0  0 ),  R := ( 0  0 ; e^{k(δ−L)}  e^{−k(δ−L)} ).

Equation (30) for all the subdomains reads

(31)  diag(D, ..., D) w^n = tridiag(L, 0, R) w^{n−1},

where w^n := (..., c_{j−1}^n, d_{j−1}^n, c_j^n, d_j^n, c_{j+1}^n, d_{j+1}^n, ...). By defining the matrices

(32)  T̃₁ := D⁻¹L,  T̃₂ := D⁻¹R,  T̃₃ := T̃₁T̃₂ + T̃₂T̃₁,

we obtain the new Schwarz iteration matrix

(33)  T̃ := the block-pentadiagonal matrix with block rows (T̃₁², 0, T̃₃, 0, T̃₂²),

and (31) can be written in compact form as

(34)  w^n = T̃ w^{n−2}.

As in Section 4, one can compute from (33) the infinity norm ‖T̃(k, δ)‖_∞ in closed form. For δ = 0 this norm equals 1, i.e. ‖T̃(k, 0)‖_∞ = 1. Moreover, by fixing L = 1, L̂ = 1, m = 1 (hence k = π) and δ = 0.05, we obtain ‖T̃‖_∞ > 1, which shows that the infinity norm of T̃ is not always bounded by 1. This shows that different Schwarz iteration matrices can have different norms, and only our first Schwarz iteration matrix

allowed us to get the convergence estimate. Notice also that, as in the case of the matrix T (see Theorem 3), the norm ‖T̃‖_∞ is not affected by the components of the first and last subdomains for N ≥ 5. For this reason, in what follows we exclude from our analysis the atoms j = 1 and j = N, and we remove from the Schwarz iteration matrices the corresponding rows and columns. Hence, we work with the truncated Schwarz iteration matrices T_N and T̃_N, where

(35)  T_N is the banded matrix with entries z_b², w_b z_b and w_b² obtained from T(k, δ) by deleting the rows and columns corresponding to the first and last atoms, and T̃_N is the corresponding block-pentadiagonal matrix with block rows (T̃₁², 0, T̃₃, 0, T̃₂²).

We now construct a matrix G that allows us to transform T_N into the Schwarz iteration matrix T̃_N. To do so, we evaluate (29) at x = a_j and x = b_j and obtain

      (v_j^n(a_j, k), v_j^n(b_j, k))ᵀ = D (c_j^n(k, δ), d_j^n(k, δ))ᵀ.

Similarly, we evaluate (8) at x = a_j and x = b_j and get

      (v_j^n(a_j, k), v_j^n(b_j, k))ᵀ = D ( −ḡ_A(k, δ)  g_A(k, δ) ; g_B(k, δ)  −ḡ_B(k, δ) ) (v_{j−1}^{n−1}(a_j, k), v_{j+1}^{n−1}(b_j, k))ᵀ.

By combining the two previous equalities and using the fact that D is invertible for k > 0, we obtain that

      (c_j^n(k, δ), d_j^n(k, δ))ᵀ = ( −ḡ_A(k, δ)  g_A(k, δ) ; g_B(k, δ)  −ḡ_B(k, δ) ) (v_{j−1}^{n−1}(a_j, k), v_{j+1}^{n−1}(b_j, k))ᵀ.

Defining

      G := blockdiag(Ḡ, ..., Ḡ),  Ḡ := ( −ḡ_A(k, δ)  g_A(k, δ) ; g_B(k, δ)  −ḡ_B(k, δ) ),

it follows from (34) that G v^n(k) = T̃_N G v^{n−2}(k), which implies

(36)  v^n(k) = G⁻¹ T̃_N G v^{n−2}(k),

and from (34) and (36) we obtain

(37)  T_N = G⁻¹ T̃_N G.

Since (37) implies that ρ(T_N) = ρ(T̃_N), the convergence of the parallel Schwarz method does not depend on the choice of T_N or T̃_N, but because T_N and T̃_N have very different norms, the convergence analysis of the Schwarz method is strongly affected by the choice of the Schwarz iteration matrix. This implies that a transform map of the type T_N = G⁻¹ T̃_N G is useful to generalize the results presented in Section 4, where we proved convergence of the sequence (13) defined on the interfaces, that is for v_j^n(a_j, k) and v_j^n(b_j, k). Using (10), we can construct an invertible matrix Ĝ such that the matrix T̂_N := Ĝ⁻¹ T̃_N Ĝ describes the convergence of the parallel Schwarz method in two arbitrary distinct points belonging to (a_j, b_j). In particular, by noticing that ρ(T̂_N) = ρ(T̃_N) = ρ(T_N), we observe that the convergence behavior is the same for any point x in (a_j, b_j). Now, since the convergence analysis of Schwarz methods is often done by norm estimates, and we have seen that different formulations give different norm estimates, which is then the best Schwarz iteration matrix to get the sharpest possible error estimate?

5.1. Determining Schwarz iteration matrices for optimal error estimation. We introduce the following new concept of optimality of Schwarz iteration matrices with respect to error estimation:

Definition 6. Let T̃ ∈ R^{m×m} be a Schwarz iteration matrix. The Schwarz iteration matrix T ∈ R^{m×m} is said to be optimal in the norm ‖·‖ if it solves the optimization problem

(38)  min_{T ∈ T(T̃)} ‖T‖,

where T(T̃) denotes the set of all admissible Schwarz iteration matrices,

      T(T̃) := { V ∈ R^{m×m} : ∃ G ∈ R^{m×m} invertible, such that V = G⁻¹ T̃ G }.

Definition 7. A stationary point of (38) is called a candidate to be an optimal Schwarz iteration matrix in the norm ‖·‖.
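The role of the similarity orbit in Definition 6 can be illustrated with a tiny example (generic, not tied to the Schwarz matrices of this paper): every representative G⁻¹T̃G has the same spectral radius as T̃, but its norm can be made arbitrarily large by a bad choice of G, which is precisely why the minimization (38) is of interest.

```python
# All matrices in the similarity orbit share the spectral radius of T~, but
# their norms can be arbitrarily large; here T~ is diagonal and G is a shear.

def inf_norm(M):
    return max(sum(abs(x) for x in row) for row in M)

Tt = [[0.5, 0.0], [0.0, 0.25]]          # spectral radius 0.5 (diagonal matrix)

def conjugate(g):
    # G = [[1, g], [0, 1]], G^{-1} = [[1, -g], [0, 1]]; the product
    # G^{-1} @ Tt @ G works out in closed form for this 2x2 case:
    return [[0.5, 0.25 * g], [0.0, 0.25]]

small = conjugate(0.0)                   # the diagonal representative: norm 0.5
large = conjugate(1000.0)                # same eigenvalues, huge infinity norm
```

Both `small` and `large` are upper triangular with diagonal (0.5, 0.25), so their spectral radius is 0.5 in each case, while `inf_norm(large)` is several hundred; a norm-based convergence argument succeeds for `small` and fails for `large`.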
Problem (38) can admit more than one solution. In fact, in general the set T(T̃) is not convex and the norm is not strictly convex. Moreover, the standard techniques used to prove the existence of a solution for such a problem require closedness of the constraint set T(T̃) and, as shown in the following example, this does not hold in general.

Example. Consider the matrix T̃ = ( 0  1 ; 0  0 ). Now define Q_n := ( n  0 ; 0  1 ), having the inverse Q_n⁻¹ = ( 1/n  0 ; 0  1 ). Define the sequence {T_n}_n as T_n := Q_n⁻¹ T̃ Q_n = ( 0  1/n ; 0  0 ). This sequence converges to the zero matrix, T_n → 0. Since the zero matrix is only similar to itself, 0 ∉ T(T̃). This shows that T(T̃) is not closed in R^{2×2}.

Fortunately, the set T(T̃) has already been studied in the field of control problems on manifolds, where it is usually called a similarity orbit. A short discussion on the closedness of T(T̃) is provided in [4, Example. page 67], where it is stated

that a similarity orbit is closed if and only if the matrix T̃ is diagonalizable. However, no proof is provided and it seems difficult to find a reference. Recalling that closedness implies the existence of a minimizer of (38), we prove now the sufficiency of diagonalizability.

Proposition. If T̃ ∈ R^{q×q} is diagonalizable, then the similarity orbit T(T̃) is closed in R^{q×q}.

Proof. For a given sequence {T_n}_n in T(T̃) such that T_n → T̂ ∈ R^{q×q}, we have to show that T̂ ∈ T(T̃). Every element T_n in the sequence is diagonalizable, since T̃ is, and similarity between T_n and T̃ implies that they have the same eigenvalues λ_l with the same geometric multiplicities q_l, and diagonalizability implies that Σ_l q_l = q. Now continuity of the determinant implies that det(T_n − λ_l I) → det(T̂ − λ_l I) with det(T̂ − λ_l I) = 0, which shows that T̂ has the same eigenvalues as T_n, and hence as T̃. If the eigenvalues of T̃ are distinct, then also the ones of T̂ are distinct, and thus T̂ is diagonalizable and the claim easily follows. If the eigenvalues are not distinct, then since T_n is diagonalizable, we have that (T_n − λ_l I) = (S_n D S_n⁻¹ − λ_l I) = S_n (D − λ_l I) S_n⁻¹, which implies that ker(T_n − λ_l I) and ker(D − λ_l I) have the same dimension. This means that the geometric multiplicity of each eigenvalue λ_l is constant in the sequence {T_n}_n, and hence nullity(T_n − λ_l I) = q_l for any n, where nullity : R^{q×q} → N maps a matrix to the dimension of the corresponding kernel. Now, we can use the fact that the nullity map is upper-semicontinuous (see [7, Example .6]) to write

      q_l = lim sup_{n→∞} nullity(T_n − λ_l I) ≤ nullity(T̂ − λ_l I),

and this holds for any eigenvalue λ_l. Hence the condition Σ_l q_l = q holds also for the limit T̂, which means that T̂ is diagonalizable and the proof is complete.

In order to characterize minimizers of (38), it is suitable to consider the first-order optimality system, which is derived in the following theorem.

Theorem 8. Let T̃ ∈ R^{m×m} be a given Schwarz iteration matrix. If a Schwarz iteration matrix T is a local minimizer for (38), then there exists a pair of matrices (G, Λ) ∈ R^{m×m} × R^{m×m}, with G invertible, such that (T, T̃, G, Λ) satisfy the first-order optimality system

(39a)  T = G⁻¹ T̃ G,
(39b)  [Λ, Tᵀ] = 0,
(39c)  trace(Λᵀ T) = ‖T‖,
(39d)  ‖Λ‖_* ≤ 1,

where ‖·‖_* is the dual norm of ‖·‖ and [·, ·] is the commutator operator, [P, Q] := PQ − QP.

Proof. To derive the optimality system, we use the Frobenius scalar product ⟨·, ·⟩ : R^{m×m} × R^{m×m} → R defined as ⟨A, B⟩ := trace(AᵀB), and rewrite (38) in the equivalent form

(40)  min ‖M‖  s.t.  M − X T̃ Y = 0,  XY − I = 0,

where M, X, Y ∈ R^{m×m}. The Lagrange function corresponding to (40) is

(41)  L(M, X, Y, Λ, Υ) := ‖M‖ + ⟨Λ, M − X T̃ Y⟩ + ⟨Υ, XY − I⟩.

15 44 45 PARALLEL SCHWARZ METHOD FOR CHAIN OF ATOMS 5 Using [5] and, e.g., [4], a necessary optimality condition is (4) 0 L(M, X, Y, Λ, Υ), 46 where L(M, X, Y, Λ, Υ) denotes the subdifferential of L at (M, X, Y, Λ, Υ). Notice 47 that the Lagrange function defined in (4) is Lipschitz continuous with respect to 48 M and differentiable with respect to X, Y, Λ and Υ, since the trace of matrices is 49 differentiable; see, e.g., [3] and references therein. Denoting by U := ( R m m) 5 40 and defining x := (M, X, Y, Λ, Υ) U, we notice 4 that the elements in L( x) are elements of the dual space U, i.e., linear functionals 4 acting on U, see [3]. Hence, the condition (4) is equivalent to the existence of a 43 S( x) L( x), defined as S( x) : U R, such that S( x)(δ x) = 0 for all δ x U. 44 Define h ( x) := M and h ( x) := Λ, M X T Y + Υ, XY I. Then the 45 Lagrange function can be written as L( x) = h ( x) + h ( x). Since h is convex and 46 Lipschitz continuous and h is smooth, we obtain (see [3]) L( x) = h ( x) + {h ( x)}, where h ( x) is the directional (Gâteaux) derivative of h at x. Moreover, h ( x) is the subdifferential of h at x and coincides with the subdifferential in the sense of convex analysis [3], and therefore every element S( x) in L( x) is of the form S( x) = S( x) + h ( x), where S( x) h ( x), and the condition 0 L( x) means S( x)(δ x) = 0 for all δ x U, and equivalently (43) S( x)(δ x) + h ( x)(δ x) = 0 δ x U, where the elements δ x are of the form (δm, δx, δy, δλ, δυ). Now h depends only on M, and hence h ( x) = M and the action of S( x) on δ x is given by S( x)(δ x) = S, δm, where S M. Notice that the convexity of the norm guarantees that the subdifferential M is non-empty; see, e.g., [3]. Next, we compute h ( x)(δ x). 
Since we are in finite dimensions and h₂ is differentiable, we have

h₂′(x̄)(δx̄) = ⟨(h₂)_M, δM⟩ + ⟨(h₂)_X, δX⟩ + ⟨(h₂)_Y, δY⟩ + ⟨(h₂)_Λ, δΛ⟩ + ⟨(h₂)_Υ, δΥ⟩,

where (h₂)_M, (h₂)_X, (h₂)_Y, (h₂)_Λ and (h₂)_Υ are the partial derivatives of h₂ at x̄ with respect to M, X, Y, Λ and Υ. It is straightforward to obtain

⟨(h₂)_M, δM⟩ = ⟨Λ, δM⟩,  ⟨(h₂)_Λ, δΛ⟩ = ⟨M − X T̃ Y, δΛ⟩,  ⟨(h₂)_Υ, δΥ⟩ = ⟨XY − I, δΥ⟩.

Next, we compute the directional derivatives with respect to X along δX and Y along δY, and obtain

lim_{α→0} (1/α) [ h₂(M, X + α δX, Y, Λ, Υ) − h₂(M, X, Y, Λ, Υ) ]
  = lim_{α→0} (1/α) [ ⟨Λ, M − (X + α δX) T̃ Y⟩ + ⟨Υ, (X + α δX) Y − I⟩ − ⟨Λ, M − X T̃ Y⟩ − ⟨Υ, XY − I⟩ ]
  = −⟨Λ, δX T̃ Y⟩ + ⟨Υ, δX Y⟩
  = −trace(Λ^⊤ δX T̃ Y) + trace(Υ^⊤ δX Y)
  = −trace(T̃ Y Λ^⊤ δX) + trace(Y Υ^⊤ δX)
  = −⟨Λ Y^⊤ T̃^⊤, δX⟩ + ⟨Υ Y^⊤, δX⟩ = ⟨Υ Y^⊤ − Λ Y^⊤ T̃^⊤, δX⟩.
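The directional-derivative formula just derived can be sanity-checked numerically. The following sketch uses random placeholder matrices (an assumption of this illustration, not the actual Schwarz iteration matrices) and compares a difference quotient of h₂ in the direction δX with the closed-form expression ⟨Υ Y^⊤ − Λ Y^⊤ T̃^⊤, δX⟩; since h₂ is affine in X, the two agree up to round-off.

```python
import numpy as np

# Numerical check (illustration only, with random placeholder matrices):
# the directional derivative of h2 with respect to X derived above,
#   d/dα h2(M, X + α·δX, Y, Λ, Υ) |_{α=0} = <Υ Yᵀ − Λ Yᵀ T̃ᵀ, δX>,
# where <A, B> := trace(Aᵀ B) is the Frobenius scalar product.
rng = np.random.default_rng(0)
m = 4
M, X, Y, Lam, Ups, Tt, dX = (rng.standard_normal((m, m)) for _ in range(7))

def inner(A, B):
    return np.trace(A.T @ B)

def h2(Xv):
    return inner(Lam, M - Xv @ Tt @ Y) + inner(Ups, Xv @ Y - np.eye(m))

alpha = 1e-6
fd = (h2(X + alpha * dX) - h2(X)) / alpha        # difference quotient
exact = inner(Ups @ Y.T - Lam @ Y.T @ Tt.T, dX)  # closed-form derivative
print(abs(fd - exact))                           # small (round-off only)
```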

Similarly, for Y we have

lim_{α→0} (1/α) [ h₂(M, X, Y + α δY, Λ, Υ) − h₂(M, X, Y, Λ, Υ) ]
  = lim_{α→0} (1/α) [ ⟨Λ, M − X T̃ (Y + α δY)⟩ + ⟨Υ, X (Y + α δY) − I⟩ − ⟨Λ, M − X T̃ Y⟩ − ⟨Υ, XY − I⟩ ]
  = −⟨Λ, X T̃ δY⟩ + ⟨Υ, X δY⟩ = ⟨X^⊤ Υ − T̃^⊤ X^⊤ Λ, δY⟩.

In summary, we thus obtain

(44)  S₁(x̄)(δx̄) + h₂′(x̄)(δx̄) = ⟨S + Λ, δM⟩ + ⟨Υ Y^⊤ − Λ Y^⊤ T̃^⊤, δX⟩ + ⟨X^⊤ Υ − T̃^⊤ X^⊤ Λ, δY⟩ + ⟨M − X T̃ Y, δΛ⟩ + ⟨XY − I, δΥ⟩.

Since (43) implies that (44) has to vanish for all (δM, δX, δY, δΛ, δΥ), we must have

(45a)  −Λ ∈ ∂‖M‖,
(45b)  Υ Y^⊤ − Λ Y^⊤ T̃^⊤ = 0,
(45c)  X^⊤ Υ − T̃^⊤ X^⊤ Λ = 0,
(45d)  M − X T̃ Y = 0,
(45e)  XY − I = 0.

Condition (45e) implies that there exists an invertible matrix G such that Y = G^{-1} and X = G, and thus (45d) becomes

(46)  M = G T̃ G^{-1}.

Furthermore, conditions (45b) and (45c) become

(47)  Υ G^{-⊤} = Λ G^{-⊤} T̃^⊤,
(48)  G^⊤ Υ = T̃^⊤ G^⊤ Λ.

Multiplying (47) on the right by G^⊤, (48) on the left by G^{-⊤}, and subtracting the two equalities obtained, we get 0 = Λ (G T̃ G^{-1})^⊤ − (G T̃ G^{-1})^⊤ Λ, and using (46) leads to

(49)  0 = [Λ, M^⊤].

To conclude the proof, we recall from [7] that (45a) is equivalent to

(50)  trace(−Λ^⊤ M) = ‖M‖  with  ‖Λ‖_* ≤ 1,

where ‖·‖_* is the dual norm of ‖·‖. Hence (46), (49) and (50) are exactly (39), denoting M = T. ∎

In the particular case of the Frobenius norm, the optimality system corresponding to (38) is simpler than (39). In fact, one can consider in (38) the square of the Frobenius norm, which is differentiable; see, e.g., [3]. Then the optimality system can be derived similarly as in Theorem 8, and we obtain

Corollary 9. Let T̃ ∈ R^{m×m} be a given Schwarz iteration matrix, and consider the Frobenius norm ‖·‖_F. If a Schwarz iteration matrix T is a global minimizer for (38) with ‖·‖ = ‖·‖_F, then there exists a pair of matrices (G, Λ) ∈ R^{m×m} × R^{m×m}, with G invertible, such that (T, T̃, G, Λ) satisfy the system

(51a)  T = G T̃ G^{-1},
(51b)  [Λ, T^⊤] = 0,
(51c)  T + Λ = 0.

Conditions (51b) and (51c) are equivalent to requiring that the matrix T be a normal matrix, a result which is obtained in [4] with different theoretical arguments. For a complete discussion regarding normal matrices we refer to [ ], which provides a very large number of results characterizing normal matrices.

A necessary condition for the solvability of the optimality systems (39) and (51) is related to the kernel of the operator ad_T(·) := [·, T^⊤]. Notice that, if ker ad_T = {0} and T ≠ 0, then (39) is not solvable, because condition (39c) is not satisfied for Λ = 0. Similarly, if Λ = 0 then (51c) is not satisfied, and (51) is likewise not solvable. For this reason, we prove the following result.

Proposition. Let T ≠ 0. Then ker ad_T ≠ {0}.

Proof. Defining A := −T^⊤ and B := T^⊤, the equation ad_T(X) = 0 becomes

(52)  AX + XB = 0,

which is known as a Sylvester equation; see, e.g., [8] and references therein. It is clear that X = 0 is a solution to (52). Now, equation (52) has a unique solution X if and only if A and −B have no eigenvalues in common; see [8]. Since A = −T^⊤ = −B, this condition is not satisfied; hence X = 0 is not the only element of ker ad_T, and the claim follows. ∎

5.2. Optimality of the Schwarz iteration matrix of the atom chain. We now want to show that the Schwarz iteration matrix T_N given by (35) is optimal with respect to error estimation. To do so, we need to study the optimization problem (38):

(53)  min_{T_N ∈ T(T̃_N)} ‖T_N‖.

Recalling that T_N = (I_N ⊗ G)^{-1} T̃_N (I_N ⊗ G), the optimization problem (53) can be written as

(54)  min_G ‖(I_N ⊗ G)^{-1} T̃_N (I_N ⊗ G)‖.
Now, the structure of T_N and T̃_N (see also Theorem 3) allows us to write

‖T_N‖ = ‖T_M‖ for N, M ≥ 5,  and  ‖T̃_N‖ = ‖T̃_M‖ for N, M ≥ 7.

Hence, minimizing ‖(I_N ⊗ G)^{-1} T̃_N (I_N ⊗ G)‖ over G is equivalent to minimizing ‖(I_5 ⊗ G)^{-1} T̃_7 (I_5 ⊗ G)‖, and problem (54) becomes

(55)  min_G ‖(I_5 ⊗ G)^{-1} T̃_7 (I_5 ⊗ G)‖.
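The reduction above rests on the fact that a transform V ↦ (I ⊗ G)^{-1} V (I ⊗ G) is a similarity: it preserves the spectrum, and in particular the spectral radius, while it can change the norm, which is exactly the quantity that (55) minimizes over G. A small numerical illustration, with a random stand-in for the iteration matrix (an assumption of this sketch):

```python
import numpy as np

# Illustration with a random stand-in V for the iteration matrix:
# the transform V ↦ (I ⊗ G) V (I ⊗ G)⁻¹ leaves the spectral radius
# invariant, but changes the norm, which (55) then minimizes over G.
rng = np.random.default_rng(1)
V = rng.standard_normal((10, 10)) / 10     # stand-in, 10 = 5 · 2 blocks
G = np.array([[1.0, 0.0], [0.0, 5.0]])     # a sample invertible 2×2 block
P = np.kron(np.eye(5), G)                  # I₅ ⊗ G
W = P @ V @ np.linalg.inv(P)               # similar to V

rho = lambda A: max(abs(np.linalg.eigvals(A)))
print(rho(V) - rho(W))                     # ≈ 0: identical spectral radii
print(np.linalg.norm(V, np.inf),
      np.linalg.norm(W, np.inf))           # the ∞-norms differ
```

Minimizing the norm over the block G therefore tightens the norm bound on the fixed spectral radius, which is the optimality notion studied in this section.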

Therefore, to show that T_N is optimal it is sufficient to study the case N = 7. We start with a lemma which shows that T_N for N = 7 has distinct eigenvalues. This fact guarantees, according to the Proposition, that problem (38) is well-posed.

Lemma 10. The matrix T_N for N = 7 has distinct eigenvalues.

Proof. Define c := z_b/w_b. By the lemma defining w_b and z_b, we have that w_b > 0 if k > 0 and δ ∈ (0, L/2), and we also obtain

w_b(k, δ)/z_b(k, δ) = (e^{4kδ+kL} − e^{kL}) / (e^{2kδ+2kL} − e^{2kδ}).

Direct calculations show that w_b(k, L/2)/z_b(k, L/2) = 1 for any k and that ∂/∂δ [w_b(k, δ)/z_b(k, δ)] > 0 for any (k, δ). Therefore we have c ≥ 1, and in particular c > 1 for any k > 0 and δ ∈ (0, L/2). Next, for N = 7 the matrix T_N can be written as T_N = w_b² T̄, where T̄ = P^⊤ diag(B₁, B₂) P for a permutation matrix P, with B₁ ∈ R^{4×4} and B₂ ∈ R^{6×6} matrices whose entries are polynomials in c. Therefore, it suffices to study the eigenvalues of B₁ and B₂. Direct calculations show that the four eigenvalues λ₁(c), λ₂(c), λ₃(c), λ₄(c) of B₁ are available in closed form, and it is straightforward to see that

λ₄(c) > λ₂(c) > λ₃(c) > λ₁(c)  for any c > 1.

Next, we compute the characteristic polynomial of B₂,

det(B₂ − λ I₆) = p₁(λ; c) p₂(λ; c),

where p₁(λ; c) and p₂(λ; c) are cubic polynomials in λ, with coefficients depending on c, that share the same leading term λ³. Cumbersome calculations would allow us to compute explicitly the roots of p₁(λ; c) and p₂(λ; c). These roots have very complicated expressions, and we only need to show that they are distinct. For this reason we proceed as follows: if we calculate the intersections of the two polynomials and show that at all intersections their values are non-zero, then the two polynomials have no common root. This is

easy to obtain, because in the difference p₃(λ; c) := p₁(λ; c) − p₂(λ; c) the cubic term λ³ cancels, so p₃ is a quadratic polynomial in λ, whose roots we denote by λ₊ and λ₋. Notice that λ₊ and λ₋ are the only two points where p₁(λ; c) and p₂(λ; c) can intersect. Since a direct calculation shows that p₁(λ_±; c) ≠ 0 and p₂(λ_±; c) ≠ 0 for any c > 1, it follows that p₁(λ; c) and p₂(λ; c) have distinct roots. To conclude the proof, it suffices to see by a direct evaluation that p₁(λ_j; c) ≠ 0 and p₂(λ_j; c) ≠ 0 for any c > 1 and j = 1, 2, 3, 4. ∎

We now present the optimality characterization of the Schwarz iteration matrix T_N for error estimation.

Theorem 11. Consider the Schwarz iteration matrices T_N and T̃_N given by (35) and (5). Assume that N ≥ 7, and that the overlap satisfies δ ≤ L/2. Then T_N ∈ T(T̃_N) is a candidate to be an optimal matrix for error estimation with respect to ‖·‖_∞. Furthermore, it holds that ‖T_N‖_∞ < ‖T̃_N‖_∞.

Proof. By (37), T_N = G^{-1} T̃_N G, and since G is a block-diagonal matrix, using (33) the Schwarz iteration matrix T_N can be written blockwise as

(56)  T_N = (I ⊗ G)^{-1} T̃_N (I ⊗ G),

that is, as the block matrix obtained from T̃_N in (33) by replacing each block T_i by G^{-1} T_i G. Denoting by I the identity, the set

S(T̃_N) := { V : there exists an invertible A ∈ R^{2×2}, G = I ⊗ A, such that V = G^{-1} T̃_N G }

is a subset of T(T̃_N), and it thus suffices to prove that T_N given by (56) is a stationary point for (38) over S(T̃_N). Furthermore, as we have discussed at the beginning of this section, for N ≥ 7 the specific structure of the elements of S(T̃_N) shows that the map V_N ∈ S(T̃_N) ↦ ‖V_N‖ is constant with respect to the dimension N. It is therefore sufficient to work with N = 7 (over S(T̃₇)), and to show that T_N is a stationary point for (55). Lemma 10 ensures for N = 7 that T_N has distinct eigenvalues and hence is diagonalizable. Since T_N is similar to T̃_N, T̃_N is diagonalizable as well.
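The common-root test used in the proof of Lemma 10 is easy to reproduce numerically. In the sketch below, p1 and p2 are hypothetical cubics sharing the same leading term (stand-ins for p₁(λ; c) and p₂(λ; c), not the actual polynomials): any common root must be a root of the quadratic difference p3 = p1 − p2, so it suffices to check that p1 does not vanish at the roots of p3.

```python
import numpy as np

# Common-root argument with two hypothetical cubics sharing the λ³ term:
# a common root of p1 and p2 is necessarily a root of p3 := p1 − p2,
# which is only quadratic, so it has at most two candidate points.
p1 = np.array([1.0, -3.0, 2.0, 5.0])     # λ³ − 3λ² + 2λ + 5  (placeholder)
p2 = np.array([1.0, -1.0, -4.0, 2.0])    # λ³ − λ² − 4λ + 2   (placeholder)
p3 = np.polysub(p1, p2)                  # cubic terms cancel: a quadratic
lams = np.roots(p3)                      # the only possible common roots
vals = np.polyval(p1, lams)
print(vals)                              # nonzero ⇒ p1, p2 share no root
```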
Hence, the Proposition guarantees that the set T(T̃_N) is closed, and thus the existence of a solution to (38) follows by standard optimization arguments. Next, we recall that for N = 7 the matrix T_N is given explicitly by (35): its nonzero entries are z_b², z_b w_b and w_b², and its maximal row sum gives

‖T_N‖_∞ = (z_b + w_b)².

Next, our aim is to construct a matrix Λ such that (T_N, T̃_N, G = I, Λ) solves the optimality system (39). To do this, one can compute

the kernel of ad_{T_N} and construct an element Λ ∈ ker ad_{T_N}, with entries given by explicit expressions in w_b and z_b, which satisfies trace(−Λ^⊤ T_N) = ‖T_N‖_∞ and whose dual norm ‖Λ‖_* is the maximum of two explicit rational expressions in w_b and z_b, each of which is bounded by 1 whenever w_b ≤ z_b.

Let k be a real number; similarly as in Lemma 10, we have that w_b(k, L/2)/z_b(k, L/2) = 1 for any k and ∂/∂δ [w_b(k, δ)/z_b(k, δ)] > 0 for any (k, δ). Hence w_b(k, δ)/z_b(k, δ) ≤ 1 for any k and δ ≤ L/2. This means that for δ ≤ L/2 we have ‖Λ‖_* ≤ 1, and therefore (T_N, T̃_N, G = I, Λ) solves the optimality system (39), i.e., T_N is a candidate to be an optimal Schwarz iteration matrix with respect to ‖·‖_∞.

Next, we show that ‖T̃_N‖_∞ > ‖T_N‖_∞. A straightforward calculation leads to

(57)  ‖T̃_N‖_∞ = [ (e^{6δk} + e^{4δk} − e^{2δk}) e^{4kL} + (e^{8δk} − e^{4δk}) e^{2kL} + e^{4δk} ] / (e^{2kL+4δk} − 1)².

According to Theorem 3 we have

(58)  ‖T_N‖_∞ = [ (e^{2kδ} + e^{kL}) / (e^{2kδ+kL} + 1) ]².

Subtracting (58) from (57), we obtain

(59)  ‖T̃_N‖_∞ − ‖T_N‖_∞
      = [ (e^{6δk} − e^{2δk}) e^{4kL} − (e^{6δk} − e^{2δk}) e^{3kL} − (e^{4δk} − 1) e^{2kL} + (e^{6δk} − e^{2δk}) e^{kL} ] / (e^{2kL+4δk} − 1)²
      ≥ [ (e^{6δk} − e^{2δk}) (e^{4kL} − e^{3kL} − e^{2kL} + e^{kL}) ] / (e^{2kL+4δk} − 1)²,

where the inequality uses e^{6δk} − e^{2δk} = e^{2δk}(e^{4δk} − 1) ≥ e^{4δk} − 1. Letting y(k) := e^{kL}, so that y(k)ⁿ = e^{nkL}, and c(k, δ) := (e^{6δk} − e^{2δk}) / (e^{2kL+4δk} − 1)², and noticing that c(k, δ) > 0 for any δ > 0 and k > 0, we deduce from (59) that

‖T̃_N‖_∞ − ‖T_N‖_∞ ≥ c(k, δ) ( y(k)³ − y(k)² − y(k) + 1 ) y(k),

which is positive, since y(k) = e^{kL} > 1 for any k > 0 gives y³ − y² − y + 1 = (y − 1)²(y + 1) > 0. ∎

The same arguments used to prove Theorem 11 can be applied to study the optimality of T_N with respect to the 1-norm, which leads to

Corollary 12. Consider the Schwarz iteration matrices T_N and T̃_N given by (37) and (33), and assume that N ≥ 7 and that the overlap satisfies δ ≤ L/2. Then T_N is a candidate to be an optimal Schwarz iteration matrix with respect to ‖·‖₁ in T(T̃_N), and ‖T_N‖₁ < ‖T̃_N‖₁.

We now show that T_N is not optimal with respect to ‖·‖_F, but that T_N provides a better estimate of the spectral radius also in the Frobenius norm, that is, ‖T_N‖_F < ‖T̃_N‖_F.

Theorem 13. For the Schwarz iteration matrices T_N and T̃_N given by (37) and (33), T_N is not a candidate to be an optimal Schwarz iteration matrix with respect to ‖·‖_F in T(T̃_N). However, for N ≥ 7 we have ‖T_N‖_F < ‖T̃_N‖_F, with

(60)  ‖T_N‖²_F = 4 (z_b⁴ + z_b² w_b² + w_b⁴) + 4 (z_b⁴ + z_b² w_b²) + (N − 6)(z_b⁴ + z_b² w_b² + w_b⁴),

where w_b and z_b are as above, and

(61)  ‖T̃_N‖²_F = trace(T₃^⊤ T₃ + T₂^⊤ T₂) + trace(T₃^⊤ T₃ + T₁^⊤ T₁) + (N − 6) trace(T₃^⊤ T₃ + T₂^⊤ T₂ + T₁^⊤ T₁),

where T₁, T₂ and T₃ are defined in (31).

Proof. The first claim follows by noticing that T_N does not commute with its transpose T_N^⊤. To see this, we decompose T_N = H + S, where H = (T_N + T_N^⊤)/2 is symmetric and S = (T_N − T_N^⊤)/2 is skew-symmetric; both are banded matrices with entries built from z_b², z_b w_b and w_b². We then obtain

[T_N, T_N^⊤] = [H + S, H − S] = [H, H] − [H, S] + [S, H] − [S, S] = 2 [S, H],

and since the (1, 6) entry of [T_N, T_N^⊤] = 2[S, H] is a nonzero multiple of w_b⁴, with w_b > 0, we conclude that T_N does not commute with its transpose.
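The algebraic identity used in the normality argument above holds for any square matrix and is easy to verify numerically: with H = (T + T^⊤)/2 symmetric and S = (T − T^⊤)/2 skew-symmetric, one has [T, T^⊤] = 2[S, H], so T is normal exactly when S and H commute. A sketch with a random sample matrix:

```python
import numpy as np

# Verify the identity [T, Tᵀ] = 2[S, H] for T = H + S, where
# H = (T + Tᵀ)/2 is symmetric and S = (T − Tᵀ)/2 is skew-symmetric;
# hence T is normal iff [S, H] = 0. T here is a random sample matrix.
rng = np.random.default_rng(2)
T = rng.standard_normal((6, 6))
H = (T + T.T) / 2
S = (T - T.T) / 2

comm = lambda A, B: A @ B - B @ A
lhs = comm(T, T.T)        # [T, Tᵀ]
rhs = 2 * comm(S, H)      # 2[S, H]
print(np.max(np.abs(lhs - rhs)))  # ≈ 0 up to round-off
```

In the proof this reduces checking normality of T_N to exhibiting a single nonzero entry of 2[S, H].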

Next, we show that ‖T_N‖_F < ‖T̃_N‖_F. The expressions (60) and (61) can be proved by induction, working on the structures presented in Section 4 and in (33). Using (60) and (61), a direct calculation shows that

(62)  ‖T̃_N‖²_F − ‖T_N‖²_F = φ(k, δ)(N − 6) + ϕ(k, δ),

where

φ(k, δ) = (e^{12kδ} − 2 e^{8kδ} + e^{4kδ})(e^{8kL} − 3 e^{6kL} + 8 e^{4kL} − 3 e^{2kL} + 1) / (e^{2kL+4kδ} − 1)⁴,

and ϕ(k, δ) is a ratio of the same form, with the same first factor and denominator and a second factor which is again a polynomial in e^{2kL} that is positive for any k > 0. Since for any positive δ and k we have

e^{12kδ} − 2 e^{8kδ} + e^{4kδ} = e^{4kδ} (e^{4kδ} − 1)² > 0  and  e^{8kL} − 3 e^{6kL} + 8 e^{4kL} − 3 e^{2kL} + 1 > 0,

we obtain φ(k, δ) > 0 and ϕ(k, δ) > 0, and then (62) yields ‖T_N‖_F < ‖T̃_N‖_F. ∎

5.3. Further remarks. In this section, we study the Schwarz iteration matrices T_N and T̃_N for N = 4. In particular, in Example 3 we show that T̃_N does not satisfy the optimality system (39). In Example 4, we provide an example that shows the validity of Theorem 11 also for a case with N < 7; that is, T_N for N = 4 satisfies (39).

Example 3. To show that T̃_N with N = 4 and G the identity does not satisfy the optimality system (39), we use (31) and (33) to obtain the 4 × 4 matrix T̃_N in (63), whose nonzero entries are the three positive quantities

(63)  f̃ := e^{4δk} (e^{2kL} − 1)(e^{2kL} + 1) / [ (e^{2kL+2δk} − 1)(e^{2kL+2δk} + 1) ],
      ã := [ (e^{6δk} − e^{2δk}) e^{4kL} + (e^{2δk} − e^{6δk}) e^{2kL} ] / [ (e^{2kL+2δk} − 1)(e^{2kL+2δk} + 1) ],
      b̃ := [ (e^{6δk} − e^{2δk}) e^{2kL} + (e^{2δk} − e^{6δk}) ] / [ (e^{2kL+2δk} − 1)(e^{2kL+2δk} + 1) ].

Since f̃, ã and b̃ are positive, ‖T̃_N‖_∞ = max{ f̃ + ã, f̃ + b̃ }. Next, we define H := T̃ ⊗ I₄ − I₄ ⊗ T̃^⊤, where I₄ is the 4 × 4 identity, and obtain after a lengthy calculation a basis of ker H. The columns of this basis are the vectorizations of matrices H̃_j ∈ R^{4×4}, j = 1, …, 8, belonging to the kernel of ad_{T̃}. Now, we look for a linear combination Λ = Σ_j c_j H̃_j that satisfies the optimality system (39), that is, trace(−Λ^⊤ T̃) = ‖T̃‖_∞ and ‖Λ‖_* ≤ 1, and compute

( trace(−H̃₁^⊤ T̃), …, trace(−H̃₈^⊤ T̃) ) = ( ã, f̃, 0, 0, 0, 0, ã, f̃ ).

Fig. 3. Behavior of ρ(T), λ and γ as a function of the number N of atoms.

This says that the only linear combinations Λ that can satisfy trace(−Λ^⊤ T̃) = ‖T̃‖_∞ = max{ f̃ + ã, f̃ + b̃ } are combinations of the H̃_j with nonzero trace pairing alone, and such a combination must in addition satisfy ‖Λ‖_* ≤ 1, which is only possible if ã = b̃. Hence T̃ with G = I₄ satisfies the optimality system only for ã = b̃, and this condition is in general not satisfied, as one sees from (63).

Example 4. In this example, we discuss the result obtained in Theorem 11 for the case N = 4. The matrix T_N for N = 4 is a 4 × 4 matrix with nonzero entries z_b² and w_b z_b, and ‖T_N‖_∞ = z_b² + z_b w_b. One can define an explicit constant matrix Λ, with Λ ∈ ker ad_{T_N}, such that trace(−Λ^⊤ T_N) = ‖T_N‖_∞. Furthermore, ‖Λ‖_* ≤ 1, and hence (T_N, T̃_N, G = I, Λ) solves the optimality system (39).

6. Numerical experiments. We first compute numerically the spectral radius of T and compare the results with the functions λ and γ from the spectral bounds proved in Theorems 3 and 5. Figure 3 shows the behavior of ρ(T), λ and γ for an increasing number of atoms N, with L, δ and k fixed. We clearly see that λ does not depend on the number of atoms and is a global bound for ρ(T) and γ. Notice that this figure shows exactly the estimate (8). The parallel Schwarz method therefore converges independently of the number of subdomains. In Figure 4 we study the decay of ρ(T), λ and γ as functions of the overlap δ and of the Fourier mode k, for N = 10, with k fixed in the left panel and δ = 0.05 in the right panel. It is clear that the spectral radius is bounded as ρ(T) ≤ γ ≤ λ < 1, in agreement with (8). In the next experiment, we solve numerically problem (1)-(2)-(3) with f_j = 0 and g_j = 0 by applying the parallel Schwarz method. We generate the error sequence {e_j^n}_n by solving (4)-(5), starting with an initial error e_j^0 = 1 in Ω_j. We choose


Matrix Theory. A.Holst, V.Ufnarovski Matrix Theory AHolst, VUfnarovski 55 HINTS AND ANSWERS 9 55 Hints and answers There are two different approaches In the first one write A as a block of rows and note that in B = E ij A all rows different

More information

Jim Lambers MAT 610 Summer Session Lecture 2 Notes

Jim Lambers MAT 610 Summer Session Lecture 2 Notes Jim Lambers MAT 610 Summer Session 2009-10 Lecture 2 Notes These notes correspond to Sections 2.2-2.4 in the text. Vector Norms Given vectors x and y of length one, which are simply scalars x and y, the

More information

Stability Analysis and Synthesis for Scalar Linear Systems With a Quantized Feedback

Stability Analysis and Synthesis for Scalar Linear Systems With a Quantized Feedback IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL 48, NO 9, SEPTEMBER 2003 1569 Stability Analysis and Synthesis for Scalar Linear Systems With a Quantized Feedback Fabio Fagnani and Sandro Zampieri Abstract

More information

Matrices. Chapter Definitions and Notations

Matrices. Chapter Definitions and Notations Chapter 3 Matrices 3. Definitions and Notations Matrices are yet another mathematical object. Learning about matrices means learning what they are, how they are represented, the types of operations which

More information

Convex Optimization Notes

Convex Optimization Notes Convex Optimization Notes Jonathan Siegel January 2017 1 Convex Analysis This section is devoted to the study of convex functions f : B R {+ } and convex sets U B, for B a Banach space. The case of B =

More information

8. Prime Factorization and Primary Decompositions

8. Prime Factorization and Primary Decompositions 70 Andreas Gathmann 8. Prime Factorization and Primary Decompositions 13 When it comes to actual computations, Euclidean domains (or more generally principal ideal domains) are probably the nicest rings

More information

Tangent spaces, normals and extrema

Tangent spaces, normals and extrema Chapter 3 Tangent spaces, normals and extrema If S is a surface in 3-space, with a point a S where S looks smooth, i.e., without any fold or cusp or self-crossing, we can intuitively define the tangent

More information

LINEAR ALGEBRA MICHAEL PENKAVA

LINEAR ALGEBRA MICHAEL PENKAVA LINEAR ALGEBRA MICHAEL PENKAVA 1. Linear Maps Definition 1.1. If V and W are vector spaces over the same field K, then a map λ : V W is called a linear map if it satisfies the two conditions below: (1)

More information

Implicit Functions, Curves and Surfaces

Implicit Functions, Curves and Surfaces Chapter 11 Implicit Functions, Curves and Surfaces 11.1 Implicit Function Theorem Motivation. In many problems, objects or quantities of interest can only be described indirectly or implicitly. It is then

More information

Linear Algebra Highlights

Linear Algebra Highlights Linear Algebra Highlights Chapter 1 A linear equation in n variables is of the form a 1 x 1 + a 2 x 2 + + a n x n. We can have m equations in n variables, a system of linear equations, which we want to

More information

Course 311: Michaelmas Term 2005 Part III: Topics in Commutative Algebra

Course 311: Michaelmas Term 2005 Part III: Topics in Commutative Algebra Course 311: Michaelmas Term 2005 Part III: Topics in Commutative Algebra D. R. Wilkins Contents 3 Topics in Commutative Algebra 2 3.1 Rings and Fields......................... 2 3.2 Ideals...............................

More information

MATH 583A REVIEW SESSION #1

MATH 583A REVIEW SESSION #1 MATH 583A REVIEW SESSION #1 BOJAN DURICKOVIC 1. Vector Spaces Very quick review of the basic linear algebra concepts (see any linear algebra textbook): (finite dimensional) vector space (or linear space),

More information

Numerical Methods for Partial Differential Equations

Numerical Methods for Partial Differential Equations Numerical Methods for Partial Differential Equations Steffen Börm Compiled July 12, 2018, 12:01 All rights reserved. Contents 1. Introduction 5 2. Finite difference methods 7 2.1. Potential equation.............................

More information

Elements of Convex Optimization Theory

Elements of Convex Optimization Theory Elements of Convex Optimization Theory Costis Skiadas August 2015 This is a revised and extended version of Appendix A of Skiadas (2009), providing a self-contained overview of elements of convex optimization

More information

Short note on compact operators - Monday 24 th March, Sylvester Eriksson-Bique

Short note on compact operators - Monday 24 th March, Sylvester Eriksson-Bique Short note on compact operators - Monday 24 th March, 2014 Sylvester Eriksson-Bique 1 Introduction In this note I will give a short outline about the structure theory of compact operators. I restrict attention

More information

Math 320: Real Analysis MWF 1pm, Campion Hall 302 Homework 8 Solutions Please write neatly, and in complete sentences when possible.

Math 320: Real Analysis MWF 1pm, Campion Hall 302 Homework 8 Solutions Please write neatly, and in complete sentences when possible. Math 320: Real Analysis MWF pm, Campion Hall 302 Homework 8 Solutions Please write neatly, and in complete sentences when possible. Do the following problems from the book: 4.3.5, 4.3.7, 4.3.8, 4.3.9,

More information

Duke University, Department of Electrical and Computer Engineering Optimization for Scientists and Engineers c Alex Bronstein, 2014

Duke University, Department of Electrical and Computer Engineering Optimization for Scientists and Engineers c Alex Bronstein, 2014 Duke University, Department of Electrical and Computer Engineering Optimization for Scientists and Engineers c Alex Bronstein, 2014 Linear Algebra A Brief Reminder Purpose. The purpose of this document

More information

5 Quiver Representations

5 Quiver Representations 5 Quiver Representations 5. Problems Problem 5.. Field embeddings. Recall that k(y,..., y m ) denotes the field of rational functions of y,..., y m over a field k. Let f : k[x,..., x n ] k(y,..., y m )

More information

PURE MATHEMATICS AM 27

PURE MATHEMATICS AM 27 AM SYLLABUS (2020) PURE MATHEMATICS AM 27 SYLLABUS 1 Pure Mathematics AM 27 (Available in September ) Syllabus Paper I(3hrs)+Paper II(3hrs) 1. AIMS To prepare students for further studies in Mathematics

More information

Lecture Summaries for Linear Algebra M51A

Lecture Summaries for Linear Algebra M51A These lecture summaries may also be viewed online by clicking the L icon at the top right of any lecture screen. Lecture Summaries for Linear Algebra M51A refers to the section in the textbook. Lecture

More information

Hilbert spaces. 1. Cauchy-Schwarz-Bunyakowsky inequality

Hilbert spaces. 1. Cauchy-Schwarz-Bunyakowsky inequality (October 29, 2016) Hilbert spaces Paul Garrett garrett@math.umn.edu http://www.math.umn.edu/ garrett/ [This document is http://www.math.umn.edu/ garrett/m/fun/notes 2016-17/03 hsp.pdf] Hilbert spaces are

More information

Quivers of Period 2. Mariya Sardarli Max Wimberley Heyi Zhu. November 26, 2014

Quivers of Period 2. Mariya Sardarli Max Wimberley Heyi Zhu. November 26, 2014 Quivers of Period 2 Mariya Sardarli Max Wimberley Heyi Zhu ovember 26, 2014 Abstract A quiver with vertices labeled from 1,..., n is said to have period 2 if the quiver obtained by mutating at 1 and then

More information

Matrices and Vectors. Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A =

Matrices and Vectors. Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A = 30 MATHEMATICS REVIEW G A.1.1 Matrices and Vectors Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A = a 11 a 12... a 1N a 21 a 22... a 2N...... a M1 a M2... a MN A matrix can

More information

Domain Decomposition Preconditioners for Spectral Nédélec Elements in Two and Three Dimensions

Domain Decomposition Preconditioners for Spectral Nédélec Elements in Two and Three Dimensions Domain Decomposition Preconditioners for Spectral Nédélec Elements in Two and Three Dimensions Bernhard Hientzsch Courant Institute of Mathematical Sciences, New York University, 51 Mercer Street, New

More information

VISCOSITY SOLUTIONS. We follow Han and Lin, Elliptic Partial Differential Equations, 5.

VISCOSITY SOLUTIONS. We follow Han and Lin, Elliptic Partial Differential Equations, 5. VISCOSITY SOLUTIONS PETER HINTZ We follow Han and Lin, Elliptic Partial Differential Equations, 5. 1. Motivation Throughout, we will assume that Ω R n is a bounded and connected domain and that a ij C(Ω)

More information

Structural and Multidisciplinary Optimization. P. Duysinx and P. Tossings

Structural and Multidisciplinary Optimization. P. Duysinx and P. Tossings Structural and Multidisciplinary Optimization P. Duysinx and P. Tossings 2018-2019 CONTACTS Pierre Duysinx Institut de Mécanique et du Génie Civil (B52/3) Phone number: 04/366.91.94 Email: P.Duysinx@uliege.be

More information

22.3. Repeated Eigenvalues and Symmetric Matrices. Introduction. Prerequisites. Learning Outcomes

22.3. Repeated Eigenvalues and Symmetric Matrices. Introduction. Prerequisites. Learning Outcomes Repeated Eigenvalues and Symmetric Matrices. Introduction In this Section we further develop the theory of eigenvalues and eigenvectors in two distinct directions. Firstly we look at matrices where one

More information

SPECTRAL THEOREM FOR SYMMETRIC OPERATORS WITH COMPACT RESOLVENT

SPECTRAL THEOREM FOR SYMMETRIC OPERATORS WITH COMPACT RESOLVENT SPECTRAL THEOREM FOR SYMMETRIC OPERATORS WITH COMPACT RESOLVENT Abstract. These are the letcure notes prepared for the workshop on Functional Analysis and Operator Algebras to be held at NIT-Karnataka,

More information

APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2.

APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2. APPENDIX A Background Mathematics A. Linear Algebra A.. Vector algebra Let x denote the n-dimensional column vector with components 0 x x 2 B C @. A x n Definition 6 (scalar product). The scalar product

More information

Multiple integrals: Sufficient conditions for a local minimum, Jacobi and Weierstrass-type conditions

Multiple integrals: Sufficient conditions for a local minimum, Jacobi and Weierstrass-type conditions Multiple integrals: Sufficient conditions for a local minimum, Jacobi and Weierstrass-type conditions March 6, 2013 Contents 1 Wea second variation 2 1.1 Formulas for variation........................

More information

Numerical Methods for Differential Equations Mathematical and Computational Tools

Numerical Methods for Differential Equations Mathematical and Computational Tools Numerical Methods for Differential Equations Mathematical and Computational Tools Gustaf Söderlind Numerical Analysis, Lund University Contents V4.16 Part 1. Vector norms, matrix norms and logarithmic

More information

Homework set 4 - Solutions

Homework set 4 - Solutions Homework set 4 - Solutions Math 407 Renato Feres 1. Exercise 4.1, page 49 of notes. Let W := T0 m V and denote by GLW the general linear group of W, defined as the group of all linear isomorphisms of W

More information

Eigenvalues and Eigenvectors

Eigenvalues and Eigenvectors Contents Eigenvalues and Eigenvectors. Basic Concepts. Applications of Eigenvalues and Eigenvectors 8.3 Repeated Eigenvalues and Symmetric Matrices 3.4 Numerical Determination of Eigenvalues and Eigenvectors

More information

Topological properties

Topological properties CHAPTER 4 Topological properties 1. Connectedness Definitions and examples Basic properties Connected components Connected versus path connected, again 2. Compactness Definition and first examples Topological

More information

Metric Spaces and Topology

Metric Spaces and Topology Chapter 2 Metric Spaces and Topology From an engineering perspective, the most important way to construct a topology on a set is to define the topology in terms of a metric on the set. This approach underlies

More information

Variational Formulations

Variational Formulations Chapter 2 Variational Formulations In this chapter we will derive a variational (or weak) formulation of the elliptic boundary value problem (1.4). We will discuss all fundamental theoretical results that

More information

The oblique derivative problem for general elliptic systems in Lipschitz domains

The oblique derivative problem for general elliptic systems in Lipschitz domains M. MITREA The oblique derivative problem for general elliptic systems in Lipschitz domains Let M be a smooth, oriented, connected, compact, boundaryless manifold of real dimension m, and let T M and T

More information

Elementary linear algebra

Elementary linear algebra Chapter 1 Elementary linear algebra 1.1 Vector spaces Vector spaces owe their importance to the fact that so many models arising in the solutions of specific problems turn out to be vector spaces. The

More information

A matrix over a field F is a rectangular array of elements from F. The symbol

A matrix over a field F is a rectangular array of elements from F. The symbol Chapter MATRICES Matrix arithmetic A matrix over a field F is a rectangular array of elements from F The symbol M m n (F ) denotes the collection of all m n matrices over F Matrices will usually be denoted

More information

Dirichlet-Neumann and Neumann-Neumann Methods

Dirichlet-Neumann and Neumann-Neumann Methods Dirichlet-Neumann and Neumann-Neumann Methods Felix Kwok Hong Kong Baptist University Introductory Domain Decomposition Short Course DD25, Memorial University of Newfoundland July 22, 2018 Outline Methods

More information

Definitions for Quizzes

Definitions for Quizzes Definitions for Quizzes Italicized text (or something close to it) will be given to you. Plain text is (an example of) what you should write as a definition. [Bracketed text will not be given, nor does

More information

Bare-bones outline of eigenvalue theory and the Jordan canonical form

Bare-bones outline of eigenvalue theory and the Jordan canonical form Bare-bones outline of eigenvalue theory and the Jordan canonical form April 3, 2007 N.B.: You should also consult the text/class notes for worked examples. Let F be a field, let V be a finite-dimensional

More information

Notes on Mathematics

Notes on Mathematics Notes on Mathematics - 12 1 Peeyush Chandra, A. K. Lal, V. Raghavendra, G. Santhanam 1 Supported by a grant from MHRD 2 Contents I Linear Algebra 7 1 Matrices 9 1.1 Definition of a Matrix......................................

More information

A Brief Outline of Math 355

A Brief Outline of Math 355 A Brief Outline of Math 355 Lecture 1 The geometry of linear equations; elimination with matrices A system of m linear equations with n unknowns can be thought of geometrically as m hyperplanes intersecting

More information

Algebraic Varieties. Chapter Algebraic Varieties

Algebraic Varieties. Chapter Algebraic Varieties Chapter 12 Algebraic Varieties 12.1 Algebraic Varieties Let K be a field, n 1 a natural number, and let f 1,..., f m K[X 1,..., X n ] be polynomials with coefficients in K. Then V = {(a 1,..., a n ) :

More information

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 1 x 2. x n 8 (4) 3 4 2

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 1 x 2. x n 8 (4) 3 4 2 MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS SYSTEMS OF EQUATIONS AND MATRICES Representation of a linear system The general system of m equations in n unknowns can be written a x + a 2 x 2 + + a n x n b a

More information

In these chapter 2A notes write vectors in boldface to reduce the ambiguity of the notation.

In these chapter 2A notes write vectors in boldface to reduce the ambiguity of the notation. 1 2 Linear Systems In these chapter 2A notes write vectors in boldface to reduce the ambiguity of the notation 21 Matrix ODEs Let and is a scalar A linear function satisfies Linear superposition ) Linear

More information