On the spectra of periodic waves for infinite-dimensional Hamiltonian systems

Mariana Hǎrǎguş
Laboratoire de Mathématiques, Université de Franche-Comté, 16 route de Gray, Besançon cedex, France
mharagus@univ-fcomte.fr

Todd Kapitula
Department of Mathematics and Statistics, Calvin College, Grand Rapids, MI 49546, USA
tmk5@calvin.edu

March 19, 2008

Abstract. We consider the problem of determining the spectrum for the linearization of an infinite-dimensional Hamiltonian system about a spatially periodic traveling wave. By using a Bloch-wave decomposition, we recast the problem as determining the point spectra of a family of operators J_γ L_γ, where J_γ is skew-symmetric with bounded inverse and L_γ is symmetric with compact inverse. Our main result relates the number of unstable eigenvalues of the operator J_γ L_γ to the number of negative eigenvalues of the symmetric operator L_γ. The compactness of the resolvent operators allows us to greatly simplify the proofs, as compared with those where similar results are obtained for linearizations about localized waves. The theoretical results are general, and apply to a larger class of problems than those considered herein. The theory is applied to a study of the spectra associated with periodic and quasi-periodic solutions to the nonlinear Schrödinger equation, as well as periodic solutions to the generalized Korteweg-de Vries equation with power nonlinearity.

Contents
1. Introduction
2. General theory
   2.1. Symmetries
   2.2. Canonical form
   2.3. Refinements
3. Eigenvalue problems of NLS-type
   3.1. NLS equation with periodic potential
   3.2. Example: Bose-Einstein condensates
   3.3. Quasi-periodic solutions to the defocusing NLS equation
4. Eigenvalue problems of KdV-type
   4.1. Periodic waves to the gKdV equation
Appendix: Proof of the SCS Basis Lemma
References

1. Introduction

We are interested in certain spectral-stability aspects of spatially periodic waves in Hamiltonian nonlinear partial differential equations. Upon linearizing about a periodic wave one is left with studying an eigenvalue problem of the form

JLu = λu,  (1.1)

where L is a symmetric operator with spatially periodic coefficients, and J is a skew-symmetric operator. The main purpose of this paper is to obtain a general result relating the spectra of the operators L and JL, in the spirit of the results in [6, 20, 21, 27] for localized waves, in which the total number of negative point eigenvalues of L is related to the total number of unstable eigenvalues of JL.

As is well known, the spectrum associated with periodic waves depends strongly upon the choice of the function space. While for spaces of periodic functions, e.g., H ≡ L²_per([−L, L]; Cⁿ), the spectrum consists of isolated eigenvalues of finite multiplicity, for spaces of functions defined on the whole real line, e.g., H ≡ L²(R; Cⁿ), the spectrum is continuous, and in particular there are no isolated eigenvalues of finite multiplicity. Here, we are interested in the latter situation, which corresponds to studying the spectral stability of periodic waves with respect to localized perturbations. In this case, the fact that the spectrum of the operator is continuous is a serious impediment to implementing the results of [20, 21], which concern isolated eigenvalues of L and JL only. However, this difficulty can be overcome by considering a Bloch-wave decomposition (see Section 3). Upon performing such a decomposition the eigenvalue problem (1.1) is replaced by a family of eigenvalue problems

J_γ L_γ u = λu,  −π < γ ≤ π,  (1.2)

where for each γ the operator L_γ is symmetric and now has a compact resolvent, while the operator J_γ is skew-symmetric. As a consequence, J_γ L_γ has point spectrum only, so that one expects that the results and ideas presented in [20, 21] will be immediately applicable. However, one runs into another problem which must be overcome here: Im(J_γ L_γ) ≠ 0, in contrast to the situation in [20, 21] where Im(JL) = 0.

There is one crucial difference between the cases Im(J_γ L_γ) = 0 and Im(J_γ L_γ) ≠ 0. In the former case the spectrum of J_γ L_γ is symmetric with respect to both the real and imaginary axes of the complex plane, i.e., one has the eigenvalue quartets {±λ, ±λ̄}, whereas in the latter case the symmetry is generically with respect to the imaginary axis only, i.e., one has the eigenvalue pairs {λ, −λ̄}. Consequently, this changes the form of the general statement in Theorem 2.13 below as compared to that of [21, Theorem 1]. Nevertheless, under certain symmetry assumptions one recovers the symmetry with respect to the real axis, and hence the expected form of the result (see Section 2.1 and Section 2.2).

The results presented in [20, 21] rely heavily upon the previous work of [14, 15], in which it is assumed that the operator L has both point and continuous spectra. As will be seen in Section 2, in our case the absence of continuous spectrum (due to the compactness of the resolvent) allows us to recover these results via more elementary means. The proofs are in the spirit of those presented in [6, 27] for localized waves in that they involve a detailed analysis of the indefinite quadratic form ⟨L_γ ·, ·⟩; however, the proofs presented herein allow us to remove the assumption that the point eigenvalues of J_γ L_γ are semi-simple (see Remark 2.14(b)).

The paper is organized as follows.
In Section 2 we consider the general eigenvalue problems given in equation (1.2). Under the general Assumption 2.1 we obtain a relationship between the number of unstable eigenvalues of the operator J_γ L_γ and the number of negative eigenvalues of the symmetric operator L_γ (see Theorem 2.13). In particular, if Im(J_γ L_γ) = 0, we recover the results presented in [20, 21], although the proof is different. Some particular cases are discussed in Section 2.1 and Section 2.2. The proofs presented in this section are self-contained. A crucial ingredient of the proof, and a result which is of independent interest, is the SCS Basis Lemma (Lemma 2.3). Therein it is shown that, under general assumptions, the eigenvectors and generalized eigenvectors of the non-self-adjoint operator J_γ L_γ form a basis. In Section 3 we apply these general results to a class of eigenvalue problems of NLS-type in which J = J_γ is a bounded operator. In particular, we study the spectra for periodic and quasi-periodic waves to nonlinear Schrödinger equations. In Section 4 we apply the results to eigenvalue problems of KdV-type in which J_γ is now an unbounded operator with compact resolvent. As an example, we study the spectra of periodic waves to the generalized Korteweg-de Vries equation (gKdV).

Finally, in the Appendix we give the proof of the SCS Basis Lemma.

Acknowledgments. This work was partially supported by the Ministère de la Recherche through grant ACI JC 1039 (M.H.).

2. General theory

We consider the general eigenvalue problem

JLu = λu  (2.1)

for a symmetric operator L and a skew-symmetric, invertible operator J. The problem will be considered on a complex Hilbert space H with inner product ⟨·, ·⟩. We make the following assumptions:

Assumption 2.1. The operators J and L in equation (2.1) have the following properties:

(a) J is skew-adjoint (J* = −J) and invertible with bounded inverse;

(b) L is self-adjoint (L* = L) and invertible with compact inverse;

(c) σ(L) ∩ R⁻, where σ(L) denotes the spectrum of L, is a finite set;

(d) there exists a skew-adjoint operator A_0 with compact resolvent such that:

(i) the operator B ≡ JL − A_0 is A_0-compact and satisfies, for some positive constants a, b, and r ∈ [0, 1),

‖Bu‖ ≤ a‖u‖ + b‖|A_0|^r u‖,  u ∈ D(JL);  (2.2)

(ii) the sequence of purely imaginary eigenvalues {iω_n}_{n∈Z} of A_0, where each eigenvalue is repeated as many times as its multiplicity and ω_n ≤ ω_{n+1}, satisfies, for some s ≥ 1,

∑_{n=−∞}^{+∞} |ω_n|^{−s} < ∞;

(iii) there exist a subsequence of eigenvalues {iω_{n_k}}_{k∈Z} and positive constants c > 0 and r' > r such that

ω_{n_k} > 0 and ω_{n_k+1} − ω_{n_k} ≥ c ω_{n_k+1}^{r'} for k ≥ 0,
ω_{n_k} < 0 and ω_{n_k} − ω_{n_k−1} ≥ c |ω_{n_k−1}|^{r'} for k < 0.  (2.3)

Remark 2.2.

(a) Let {e_k}_{k∈Z} be an orthonormal basis of H consisting of eigenvectors of A_0. Then the operator |A_0|^r can be defined through

|A_0|^r u = ∑_{k∈Z} |ω_k(A_0)|^r u_k e_k,  u = ∑_{k∈Z} u_k e_k.

(b) An immediate consequence of Assumption 2.1(b) is that the spectrum of L is a countable set, with no accumulation point in C, consisting of real eigenvalues with finite multiplicity. Furthermore, the set of its eigenvectors forms an orthonormal basis for H.

(c) Similarly, since A_0 is skew-adjoint with compact resolvent, by Assumption 2.1(d), its spectrum is a countable set, with no accumulation point in C, consisting of purely imaginary eigenvalues with finite multiplicity, and the set of its eigenvectors forms an orthonormal basis for H. As we shall see in the next lemma, similar properties also hold for JL.

(d) If s = 1 in Assumption 2.1(d)(ii), then the operator A_0^{−1} is of trace class.

(e) Assumption 2.1(d) is easily verified in the examples considered in Section 3 and Section 4, in which A_0 is an operator with constant coefficients. The eigenvalues of A_0 can then be explicitly computed via Fourier analysis; in particular, for NLS we have that B is actually bounded, so that equation (2.2) holds with r = 0, and in equation (2.3) we find r' = 1/2. For gKdV, we obtain r = 1/3 and r' = 2/3.

(f) Assumption 2.1(d)(ii) is required to show that the set of eigenfunctions and generalized eigenfunctions of JL is complete in H. The assumptions in equation (2.2) and equation (2.3) are needed to show that the set actually forms a basis of H.

This section will be devoted to comparing the spectra of the operators L and JL. We start by collecting a number of properties of the operator JL which follow from Assumption 2.1. We give the proof of the following lemma in the Appendix (the acronym follows from Skew-symmetric Composed with Self-adjoint).

Lemma 2.3 (SCS Basis Lemma). Assume that Assumption 2.1 holds. Then the operator JL has the following properties:

(a) JL is invertible and has a compact inverse L^{−1}J^{−1};

(b) the spectrum σ(JL) of JL is a countable set with no accumulation point in C consisting of eigenvalues with finite algebraic multiplicity;

(c) the set of eigenfunctions and generalized eigenfunctions of JL forms a basis of H;

(d) given ε > 0, the eigenvalues of (JL)^{−1}, with the possible exception of a finite number, belong to the sector {ρe^{iθ} : ρ ∈ R⁺, |θ − π/2| < ε or |θ + π/2| < ε}.

Remark 2.4. An important consequence of Lemma 2.3(d) is that σ(JL) ∩ R is a finite set.

In addition to the properties in the previous lemma, a key ingredient in the next arguments is the following symmetry property of the spectrum of JL.

Proposition 2.5. If λ ∈ σ(JL), then −λ̄ ∈ σ(JL). Furthermore, λ and −λ̄ have the same algebraic multiplicity, m_a(λ) = m_a(−λ̄).

Proof: Suppose that λ ∈ σ(JL) is an eigenvalue of JL with associated eigenvector v, so that JLv = λv. Since J has a bounded inverse we have Lv = λJ^{−1}v, and then

(JL)*(J^{−1}v) = −LJ(J^{−1}v) = −Lv = −λJ^{−1}v.

Consequently −λ ∈ σ((JL)*), so that −λ̄ ∈ σ(JL). The equality of the algebraic multiplicities follows from the Fredholm alternative.

As a consequence of Proposition 2.5 the eigenvalues of JL appear in pairs {λ, −λ̄}, that is, the spectrum of JL is symmetric with respect to the imaginary axis. In addition, if JL is a real operator, Im(JL) = 0, then its spectrum is also symmetric with respect to the real axis, so that we have quartets of eigenvalues {±λ, ±λ̄}. However, this property is typically not true in our applications, in which Im(JL) ≠ 0.

Notation 2.6. For an eigenvalue λ ∈ σ(JL), denote by E_λ the corresponding spectral subspace, and by I_λ the invariant subspace corresponding to the eigenvalue pair {λ, −λ̄}, I_λ = E_λ ⊕ E_{−λ̄}.

The usual approach for comparing the spectra of JL and L relies upon the study of the quadratic form ⟨L·, ·⟩ associated to L in the basis of H given by the eigenfunctions and generalized eigenfunctions of JL. The following properties of the quadratic form ⟨L·, ·⟩ are similar to the ones in [23, Lemmas 2-4]. The main difference is that here they concern pairs of eigenvalues {λ, −λ̄}, while in [23] they were obtained for quartets of eigenvalues {±λ, ±λ̄}.

Lemma 2.7. If v ∈ E_λ and w ∈ E_σ with λ + σ̄ ≠ 0, then ⟨Lv, w⟩ = 0.

Proof: First suppose that v and w are not generalized eigenfunctions, so that JLv = λv and JLw = σw. Then

⟨Lv, w⟩ = λ⟨J^{−1}v, w⟩,  ⟨v, Lw⟩ = σ̄⟨v, J^{−1}w⟩.

Since L is symmetric and J^{−1} is skew-symmetric this implies that (λ + σ̄)⟨J^{−1}v, w⟩ = 0, from which one gets the desired result.

Now suppose that either one or both of v and w are generalized eigenfunctions. In this case the proof will follow by induction. One first has the Jordan chains

JLv_{k+1} = λv_{k+1} + v_k  (v_0 = 0),  k = 0, ..., k̄,
JLw_{l+1} = σw_{l+1} + w_l  (w_0 = 0),  l = 0, ..., l̄.

Arguing as above, we find

⟨Lv_{k+1}, w_{l+1}⟩ = λ⟨J^{−1}v_{k+1}, w_{l+1}⟩ + ⟨J^{−1}v_k, w_{l+1}⟩ = σ̄⟨v_{k+1}, J^{−1}w_{l+1}⟩ + ⟨v_{k+1}, J^{−1}w_l⟩,  (2.4)

so that

(λ + σ̄)⟨J^{−1}v_{k+1}, w_{l+1}⟩ + ⟨J^{−1}v_k, w_{l+1}⟩ + ⟨J^{−1}v_{k+1}, w_l⟩ = 0.  (2.5)

We now show that

(λ + σ̄)⟨J^{−1}v_{k+1}, w_{l+1}⟩ = 0,  k = 0, ..., k̄,  l = 0, ..., l̄.  (2.6)

The result of equation (2.6) is known to be true for (k, l) = (0, 0). First suppose that for some 0 ≤ n ≤ k̄ − 1 equation (2.6) holds true for l = 0 and k = 0, ..., n. Upon using equation (2.5) and the fact that w_0 = 0, one immediately gets that (2.6) holds for k = n + 1. Thus, the result holds for l = 0 and k = 0, ..., k̄. Similarly, equation (2.6) holds for k = 0 and l = 0, ..., l̄. Now consider equation (2.5) with (k, l) = (1, 1). Since equation (2.6) holds for (k, l) = (1, 0) and (k, l) = (0, 1), we conclude that equation (2.6) holds for (k, l) = (1, 1). Another induction argument gives the result for l = 1 and k = 0, ..., k̄, as well as for k = 1 and l = 0, ..., l̄. Continuing in this fashion leads to the desired result. Finally, since λ + σ̄ ≠ 0, equation (2.6) gives

⟨J^{−1}v_{k+1}, w_{l+1}⟩ = 0,  k = 0, ..., k̄,  l = 0, ..., l̄.

Substituting the above equalities into equation (2.4) completes the proof.

An immediate consequence of this lemma is the following property.

Proposition 2.8. Suppose that w = ∑_j v_j, where v_j ∈ I_{λ_j} for distinct λ_j. Then ⟨Lw, w⟩ = ∑_j ⟨Lv_j, v_j⟩.

Finally, following the argument presented for [23, Lemma 4] we obtain:

Proposition 2.9. Assume that λ ∈ σ(JL). Then the restriction ⟨L|_{I_λ}·, ·⟩ of the quadratic form ⟨L·, ·⟩ to I_λ is non-singular.

Proof: Recall that λ ≠ 0, because JL has a bounded inverse. Then if ⟨L|_{I_λ}·, ·⟩ is singular, there exists a nonzero v ∈ I_λ such that ⟨Lw, v⟩ = 0 for all w ∈ I_λ. Together with Lemma 2.7 this implies that ⟨Lw, v⟩ = 0 for any eigenvector or generalized eigenvector w of JL. By Lemma 2.3(c) this yields that v = 0, which is a contradiction.

In other words, Proposition 2.8 and Proposition 2.9 state that the quadratic form associated to L, when expanded over the eigenfunctions and generalized eigenfunctions of JL, is the sum over the individual invariant subspaces I_λ, and each term in the sum is a nonsingular quadratic form. Thus, in order to study ⟨L·, ·⟩ on H, it is enough to consider the restrictions ⟨L|_{I_λ}·, ·⟩ for each λ ∈ σ(JL).
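The orthogonality in Lemma 2.7 is easy to observe numerically in finite dimensions. The following sketch (random matrices, dimension, seed, and tolerance are all illustrative choices) checks that eigenvectors of JL belonging to different pairs {λ, −λ̄} are ⟨L·, ·⟩-orthogonal.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6  # even, so that a generic skew-symmetric J is invertible

# Finite-dimensional stand-ins: L symmetric and invertible, J skew-symmetric and invertible.
A = rng.standard_normal((n, n))
L = A + A.T + 0.1 * np.eye(n)
B = rng.standard_normal((n, n))
J = B - B.T

lam, V = np.linalg.eig(J @ L)   # spectrum of JL, generically simple eigenvalues

# Lemma 2.7: if v lies in E_lambda, w lies in E_sigma, and lambda + conj(sigma) != 0,
# then <Lv, w> = 0.  Here <u, w> is the standard Hermitian inner product w^H u.
worst = 0.0
for i in range(n):
    for j in range(n):
        if abs(lam[i] + np.conj(lam[j])) > 1e-8:   # not a pair of the form {lambda, -conj(lambda)}
            worst = max(worst, abs(np.vdot(V[:, j], L @ V[:, i])))
print("largest |<L v_i, v_j>| over admissible pairs:", worst)   # round-off size, per Lemma 2.7
```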

Lemma 2.10. Assume that λ ∈ σ(JL). Let L(λ) represent the Hermitian matrix associated with the quadratic form ⟨L|_{I_λ}·, ·⟩ on I_λ, and denote by n(L(λ)) and p(L(λ)) the number of its negative and positive eigenvalues counted with algebraic multiplicities, respectively. If Re λ ≠ 0, then L(λ) ∈ C^{2m_a(λ)×2m_a(λ)}, and

n(L(λ)) = p(L(λ)) = m_a(λ).

If Re λ = 0, then L(λ) ∈ C^{m_a(λ)×m_a(λ)} and

n(L(λ)) + p(L(λ)) = m_a(λ).

Proof: The properties on the dimension of the matrix L(λ) follow from the fact that dim(I_λ) = 2m_a(λ) if Re λ ≠ 0, whereas dim(I_λ) = m_a(λ) if Re λ = 0. Set k ≡ m_a(λ). First assume that Re λ ≠ 0. Let {u_1, ..., u_k} be a basis for E_λ, and let {v_1, ..., v_k} be a basis for E_{−λ̄}. One has that

L(λ) = [ L_1  L_c ; L_c*  L_2 ],  (2.7)

where (L_1)_{ij} = ⟨Lu_i, u_j⟩, (L_2)_{ij} = ⟨Lv_i, v_j⟩, (L_c)_{ij} = ⟨Lu_i, v_j⟩. As a consequence of Lemma 2.7 one has that L_1 = L_2 = 0. Since µ² ∈ σ(L_c* L_c) if and only if ±µ ∈ σ(L(λ)), and since L_c* L_c is symmetric, positive semi-definite, and nonsingular, the first result now follows. The result for Re λ = 0 follows immediately as a consequence of Proposition 2.9.

Definition 2.11. Consider the spectral subsets

σ_r ≡ {λ ∈ σ(JL) : Re λ > 0, Im λ = 0},
σ_c ≡ {λ ∈ σ(JL) : Re λ > 0, Im λ ≠ 0},
σ_i ≡ {λ ∈ σ(JL) : Re λ = 0}.

For an eigenvalue λ ∈ σ(JL) with Re λ ≥ 0, we define the following quantities:

k_r(λ) ≡ m_a(λ) if λ ∈ σ_r,  k_c(λ) ≡ m_a(λ) if λ ∈ σ_c,  k_i^−(λ) ≡ n(L(λ)) if λ ∈ σ_i,

and set

k_r ≡ ∑_{λ∈σ_r} k_r(λ),  k_c ≡ ∑_{λ∈σ_c} k_c(λ),  k_i^− ≡ ∑_{λ∈σ_i} k_i^−(λ).  (2.8)

Remark 2.12.

(a) The union σ_r ∪ σ_c ∪ σ_i is the set of eigenvalues of JL with Re λ ≥ 0. Since the spectrum of JL is symmetric with respect to the imaginary axis we have σ(JL) = {λ, −λ̄ : λ ∈ σ_r ∪ σ_c ∪ σ_i}.

(b) The numbers k_r, k_c, and k_i^− have been used in [20, 21] with a slightly different meaning. While in [20, 21] they were associated to quartets of eigenvalues {±λ, ±λ̄}, here they are associated to the pairs {λ, −λ̄}. Of course this is due to the absence of the symmetry of the spectrum with respect to the real axis. However, if in addition the spectrum of JL is also symmetric with respect to the real axis (for example if Im(JL) = 0), so that we also have quartets of eigenvalues {±λ, ±λ̄}, then k_r has the same significance, whereas k_c and k_i^− are even numbers, and are precisely twice the numbers considered in [20, 21].

The following general result connecting the number of unstable eigenvalues of JL and the number of negative eigenvalues of L can now be proven.

Theorem 2.13. Consider the eigenvalue problem of equation (2.1) under Assumption 2.1. One has that

k_r + k_c + k_i^− = n(L),

where k_r, k_c, k_i^− are the numbers introduced in Definition 2.11, and n(L) denotes the number of negative eigenvalues (including multiplicities) of L. Furthermore, if Im(JL) = 0, then the spectrum of JL is symmetric with respect to the real axis and the numbers k_c and k_i^− are even.

Proof: Consider the quadratic form ⟨L·, ·⟩, and define the cone C(L) by

C(L) ≡ {u ∈ H_1 : ⟨Lu, u⟩ < 0} ∪ {0},

where H_1 is the domain of ⟨L·, ·⟩, D(L) ⊂ H_1 ⊂ H. Denote by dim[C(L)] the dimension of the maximal linear subspace E ⊂ C(L). Since L is a symmetric operator with point spectrum only, upon using an orthogonal basis for H consisting of eigenvectors of L we find that dim[C(L)] = n(L). Indeed, the linear subspace spanned by the eigenvectors of L associated to the negative eigenvalues of L belongs to C(L), so that dim[C(L)] ≥ n(L). Next, assuming that C(L) contains a linear subspace of dimension at least n(L) + 1, this subspace necessarily contains a vector v = ∑_j α_j v_j, with v_j eigenvectors associated to the positive eigenvalues of L. Then ⟨Lv, v⟩ = ∑_j |α_j|² ⟨Lv_j, v_j⟩ > 0, which is in contradiction with v ∈ C(L). This shows that dim[C(L)] = n(L).

On the other hand, by Lemma 2.10, for any eigenvalue λ of JL with Re λ ≥ 0 there is a linear subspace H_λ ⊂ I_λ ∩ C(L) with dimension

dim[H_λ] = k_r(λ) if Re λ > 0, Im λ = 0;  k_c(λ) if Re λ > 0, Im λ ≠ 0;  k_i^−(λ) if Re λ = 0.

By Lemma 2.7, the direct sum of these subspaces is contained in C(L), so that dim[C(L)] ≥ k_r + k_c + k_i^−. Finally, by arguing as above, using the fact that the set of eigenvectors and generalized eigenvectors of JL is a basis, and Lemma 2.7 again, we conclude that dim[C(L)] = k_r + k_c + k_i^−. Together with the equality dim[C(L)] = n(L), this completes the proof.

Remark 2.14.

(a) It would be an interesting question to prove Theorem 2.13 by combining the results of [18, Appendix A] on Pontrjagin spaces and the SCS Basis Lemma. In particular, upon noticing that the operator iJL is symmetric on H equipped with the indefinite, sesquilinear, Hermitian scalar product {u, v} ≡ ⟨u, Lv⟩, one is within the framework presented therein. A major hurdle to overcome in order to apply these ideas seems to be that completeness of the eigenfunctions of JL in H does not necessarily imply completeness in the space supplied with the inner product given above.

(b) The result of Theorem 2.13 is in the same vein as that presented in [20, 21] for localized waves. The primary difference is that here L is assumed to be invertible with compact inverse. If desired, the assumption that L is nonsingular can be removed, and a result which takes into account that L has a nontrivial null-space can be formulated. However, since in our applications L is typically invertible, we restrict ourselves to this case only. The fact that L has compact inverse allows us to give a self-contained proof which no longer relies upon the results of [14, 15].

(c) Recently [6, 27, 30] have initiated a program in which they prove, for localized waves, a result similar to that given in Theorem 2.13 without using any of the results in [14, 15]. In doing so, however, they typically assume that all of the eigenvalues of JL are semi-simple [27], and that the operators J and L have the special form

J = [ 0  id ; −id  0 ],  L = [ L_+  0 ; 0  L_− ].  (2.9)

In [14, 15] equation (2.9) was also assumed; however, in [21] it was shown how this assumption could be circumvented.
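In finite dimensions every hypothesis of Theorem 2.13 is automatic, and the count can be verified directly. The sketch below (again with illustrative random matrices, seed, and tolerances) classifies the eigenvalues of JL as in Definition 2.11, using the sign of ⟨Lv, v⟩ to evaluate k_i^−(λ) for simple purely imaginary eigenvalues, and compares the total with n(L).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8  # even, so that a generic real skew-symmetric J is invertible

A = rng.standard_normal((n, n))
L = A + A.T                     # real symmetric, generically invertible
B = rng.standard_normal((n, n))
J = B - B.T                     # real skew-symmetric, generically invertible

n_L = int(np.sum(np.linalg.eigvalsh(L) < 0))       # n(L)

lam, V = np.linalg.eig(J @ L)
tol = 1e-8
k_r = k_c = k_i_minus = 0
for mu, v in zip(lam, V.T):
    if mu.real > tol and abs(mu.imag) < tol:        # lambda in sigma_r
        k_r += 1
    elif mu.real > tol:                             # lambda in sigma_c
        k_c += 1
    elif abs(mu.real) < tol:                        # lambda in sigma_i
        # For a simple purely imaginary eigenvalue the matrix L(lambda) is 1x1, so
        # n(L(lambda)) = 1 exactly when the (real) value <Lv, v> is negative.
        if np.real(np.vdot(v, L @ v)) < 0:
            k_i_minus += 1

# Theorem 2.13 predicts equality of the two printed numbers; since JL is real here,
# k_c and k_i^- also come out even.
print("n(L) =", n_L, "   k_r + k_c + k_i^- =", k_r + k_c + k_i_minus)
```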

8 M. Hǎrǎguş and T. Kapitula Symmetries The next goal is to refine the result of Theorem 2.13 under the assumption of some underlying symmetries. We will consider two different scenarios. In both cases we find the usual quadruplets {±λ, ±λ}. The second symmetry will be considered in more detail the next subsection. Assumption a) Assume that Im J = 0 and that there exists a bounded unitary operator U 1 such that U 1 J = JU 1 and U 1 L = LU 1. b) Assume that there exists a bounded unitary operator U 2 such that U 2 J = JU 2 and U 2 L = LU 2. Proposition Under Assumption 2.15a), if u H is a generalized) eigenfunction of JL associated with the eigenvalue λ, then U 1 u is a generalized) eigenfunction of JL associated with the eigenvalue λ. Under Assumption 2.15b), if u H is a generalized) eigenfunction of JL associated with the eigenvalue λ, then U 2 u is a generalized) eigenfunction of JL associated with the eigenvalue λ. Proof: Consider the implication of Assumption 2.15a). The proof of Assumption 2.15b) is similar to that of a), and is left for the interested reader.) Suppose that u is an eigenfunction with associated eigenvalue λ, so that JLu = λu. Upon taking the complex-conjugate of both sides one has that JLu = λu. Applying U 1 to both sides then yields the desired result. The case when u is a generalized eigenfunction is proved in the same way. As a consequence of Proposition 2.16 one has that the spectrum of JL is also symmetric with respect to the real axis. First assume the symmetry of Assumption 2.15a). For each invariant subspace I λ with Im λ 0 there is an associated invariant subspace I λ such that dimi λ ) = dimi λ ). Furthermore, the subspaces have the same Jordan block structure. Now, if Re λ 0, then the result of Lemma 2.10 immediately shows that k c λ) = k c λ). Now suppose that Re λ = 0, and let {u 1,..., u k } be a basis for I λ. Following the proof of Lemma 2.10 one has that Lλ)) ij = Lu i, u j. 2.10) As a consequence of Proposition 2.16 one has that {U 1 u 1,..., U 1 u k } is a basis for I λ. Upon using Assumption 2.15a) one has that Lλ)) ij = LU 1 u i, U 1 u j = U 1 Lu i, U 1 u j = Lu i, u j = Lu i, u j = Lu j, u i. Upon comparing with equation 2.10) one then sees that Lλ) = Lλ), which in particular implies that ki λ) = k i λ). A similar argument holds when considering Assumption 2.15b). The primary difference is that for λ ir, Lλ) = L λ). The result of Theorem 2.13 can now be modified in the following way. Theorem Consider the eigenvalue problem 2.1) under Assumption 2.1 and either Assumption 2.15a) or Assumption 2.15b). Then the spectrum of JL is symmetric with respect to both the imaginary and the real axis, and k r + k c + ki = nl), with k c and k i even numbers Canonical form We discuss in this section the particular case of the canonical form given in equation 2.9), when H = H H, and ) ) 0 id L+ 0 J =, L =, 2.11) id 0 0 L

9 Spectra of periodic waves 8 with L ± symmetric operators on H such that J and L satisfy Assumption 2.1. One recovers the result of Theorem 2.17, since JL satisfies Assumption 2.15b) with ) id 0 U 2 =. 2.12) 0 id The goal is to refine this result and state it in terms of the individual operators L ± see also [14, 15, 20, 27]). In order to do so, we review the basis result in Lemma 2.3, the orthogonality result in Lemma 2.7, and the structure of the matrix Lλ) in Lemma 2.10 in this case. Notation For an operator A we set σ + A) {λ σa) : Im λ > 0} {λ σa) : Im λ = 0 and Re λ > 0}. Proposition Consider the eigenvalue problem of equation 2.1) under the Assumption 2.1 with J and L given by equation 2.11). Suppose that λ is an eigenvalue of JL with λ σ + JL). Let {u λ 1,..., uλ k λ } be a basis for E λ consisting of generalized eigenvectors of JL, with u λ j = u λ j, v λ j )T. Then the following properties hold: a) The set {U 2 u λ 1,..., U 2u λ k λ } is a basis for E λ. Furthermore, U 2 u λ j = u λ j, v λ j )T. b) The set {u λ j, U 2u λ j : j = 1,..., k λ, λ σ + JL)} is a basis for H = H H. c) The sets {u λ j : j = 1,..., k λ, λ σ + JL)} and {v λ j : j = 1,..., k λ, λ σ + JL)} are basis for H. Proof: The property a) is an immediate consequence of the Assumption 2.15b) in Section 2.1 and of equation 2.12). Part b) follows from a) and the basis result in Lemma 2.3c), and c) is a consequence of a) and b). The following result is a refinement of Lemma 2.7. Lemma Let u λ u λ, v λ ) T E λ and u σ u σ, v σ ) T E σ. If λ ± σ 0, then L + u λ, u σ = L v λ, v σ = 0. Proof: Suppose that λ + σ 0, and further suppose that the eigenvectors u λ, u σ eigenvectors. From equation 2.11) one has that are not generalized L + u λ = λv λ, L v λ = λu λ ; L + u σ = σv σ, L v σ = σu σ, which upon using the fact that the operators L ± are symmetric yields L + u λ, u σ = λ v λ, u σ = σ u λ, v σ L v λ, v σ = λ u λ, v σ = σ v λ, u σ. By Lemma 2.7 one has that so that L + u λ, u σ + L v λ, v σ = 0, λ σ) u λ, v σ = λ σ) v λ, u σ = 0. If λ σ 0, then one gets the desired result. The proof for u λ, u σ being generalized eigenvectors follows that presented in Lemma 2.7 and is left for the interested reader. Finally, we come back to the structure of the matrix Lλ) in Lemma 2.10 in this case. We distinguish between Im λ = 0 and Im λ 0. Lemma Suppose that λ is an eigenvalue of JL with Im λ = 0. Let {u 1,..., u k } be a basis for E λ consisting of generalized eigenvectors of JL, with u j = u j, v j ) T. Then the following properties hold:

10 M. Hǎrǎguş and T. Kapitula 9 a) The Hermitian matrix Lλ) associated with the quadratic form L Iλ, on I λ = E λ E λ is of the form Lλ) = L 1 +λ) + L 1 λ), with where id id L 1 +λ) = L + λ) id id ), L 1 λ) = L λ) id id ) id, id L + λ)) ij L + u i, u j, L λ)) ij L v i, v j. 2.13) b) We have that L + λ) = L λ). Proof: The property a) is the same as Proposition 2.19a). Next, using Proposition 2.19a) and equation 2.11) a straightforward calculation gives the equalities in part b). Finally, part c) follows from the proof of Lemma 2.7 which shows that the terms on the diagonal of Lλ) vanish, so that L + λ)+l λ) = 0. Lemma Suppose that λ is an eigenvalue of JL with Im λ 0. Let {u 1,..., u k } be a basis for E λ consisting of generalized eigenvectors of JL, with u j = u j, v j ) T. Then the Hermitian matrix Lλ) associated with the quadratic form L Iλ, on I λ has the following properties: a) If Re λ = 0, then I λ = E λ and with Lλ) = L + λ) + L λ), L + λ) = L λ), L + λ)) ij L + u i, u j, L λ)) ij L v i, v j. b) If Re λ 0, let {x 1,..., x k } be a basis for E λ, with x j = x j, y j ) T. Then Lλ) = L + λ) + L λ), with L + λ) = L λ) = 0 Lc L c 0 ), L c ) ij L + u i, x j = L v i, y j. Proof: In both cases the equality Lλ) = L + λ) + L λ) is an immediate consequence of equation 2.11). a) Suppose that Re λ = 0, so that λ = λ and I λ = E λ. Let {u j,k, v j,k ) T : j = 1,..., m, k = 1,..., a j }, m a j = m a λ), j=1 be a basis that represents a Jordan block decomposition of E λ consisting of eigenvectors and generalized eigenvectors of JL such that L + u j,k = λv j,k v j,k 1, v j,0 = 0 L v j,k = λu j,k + u j,k 1, u j,0 = ) First note that upon using equation 2.14) and the fact that L + is symmetric, L + u j,k, u j,k = λ v j,k, u j,k v j,k 1, u j,k = λ u j,k, v j,k u j,k, v j,k ) As a consequence of the last equality in equation 2.15) one has that Similarly one sees that λ[ u j,k, v j,k + v j,k, u j,k ] = u j,k, v j,k 1 v j,k 1, u j,k. 2.16) L v j,k, v j,k = λ u j,k, v j,k + u j,k 1, v j,k = λ v j,k, u j,k + v j,k, u j,k 1, 2.17)

11 Spectra of periodic waves 10 which implies that λ[ u j,k, v j,k + v j,k, u j,k ] = v j,k, u j,k 1 u j,k 1, v j,k. 2.18) Combining equation 2.16) and equation 2.18) yields the identity 2λ[ u j,k, v j,k + v j,k, u j,k ] = u j,k, v j,k 1 + v j,k, u j,k 1 [ u j,k 1, v j,k + v j,k 1, u j,k ]. 2.19) Our aim is to prove that L + u j,k, u j,k = L v j,k, v j,k, which in view of equation 2.15) and equation 2.17) is equivalent to proving that λ[ u j,k, v j,k + v j,k, u j,k ] = u j,k, v j,k 1 + v j,k, u j,k 1. We claim that u j,k, v j,k + v j,k, u j,k = 0, 2.20) for any j, j, k, and k, which then gives the required equality. Indeed, the equality clearly holds when either k = 0 or k = 0, i.e., u j,0, v j,k + v j,0, u j,k = 0, k = 0,..., a j, and u j,k, v j,0 + v j,k, u j,0 = 0, k = 0,..., a j. Then using the equality 2.19), an induction argument for k = 0,..., a j and k = 0,..., a j, gives equation 2.20). b) Now suppose that Re λ 0; in particular, λ ± λ 0. In addition to the basis already given for E λ, following the discussion leading to equation 2.14), let be a basis for E λ such that {x j,k, y j,k ) T : j = 1,..., m, k = 1,..., a j } L + x j,k = λy j,k y j,k 1, y j,0 = 0 L y j,k = λx j,k + x j,k 1, x j,0 = ) First consider the restriction L ± λ) Eλ. The analogue to equation 2.15) is L + u j,k, u j,k = λ v j,k, u j,k v j,k 1, u j,k = λ u j,k, v j,k u j,k, v j,k 1, 2.22) and the analogue to equation 2.17) is L v j,k, v j,k = λ u j,k, v j,k + u j,k 1, v j,k = λ v j,k, u j,k + v j,k, u j,k ) As a consequence of Lemma 2.7 one has that L + u j,k, u j,k + L v j,k, v j,k = 0, which by equation 2.22) and equation 2.23) implies that where λ λ 0. Now λ λ) u j,k, v j,k = u j,k, v j,k 1 u j,k 1, v j,k, 2.24) u j,0, v j,k = u j,k, v j,0 = 0, k = 0,..., a j, k = 0,..., a j,

12 M. Hǎrǎguş and T. Kapitula 11 so that by an induction argument from equation 2.24) we conclude that for all j, j, k, and k, u j,k, v j,k = 0. As a consequence of equation 2.22) and equation 2.23) this yields L ± λ) Eλ L ± λ) E λ = 0. Finally, consider L + u j,k, x j,k = λ v j,k, x j,k v j,k 1, x j,k = λ u j,k, y j,k u j,k, y j,k 1. As a consequence of equation 2.25) one necessarily has that = 0. Similarly, one has that 2.25) Similarly one sees that which implies that λ[ u j,k, y j,k + v j,k, x j,k ] = u j,k, y j,k 1 v j,k 1, x j,k. 2.26) L v j,k, y j,k = λ u j,k, y j,k + u j,k 1, y j,k = λ v j,k, x j,k + v j,k, x j,k 1, 2.27) λ[ u j,k, y j,k + v j,k, x j,k ] = v j,k, x j,k 1 u j,k 1, y j,k. 2.28) Note the similarity of equation 2.26) with equation 2.16), as well as equation 2.28) with equation 2.18). We can then follow the induction argument leading to equation 2.20) to conclude that u j,k, y j,k + v j,k, x j,k = 0, for all j, j, k, and k. Together with equation 2.25) and equation 2.27) this gives L + u j,k, x j,k = L v j,k, y j,k, which completes the proof. Remark In [30] the unfolding of a purely imaginary eigenvalue with m g λ) = 1 and m a λ) = k was considered. Therein they showed that 2nL + λ)) {k 1, k + 1} if k is odd, and 2nL + λ)) = k if k is even. Notation For a Hermitian matrix S C define the cone CS) by For self-adjoint operators S define the cone CS) by CS) {c C : Sc, c < 0} {0}. CS) {u H : Su, u < 0} {0}. We can now refine the result in Theorem 2.17 in the following way. The result concerning k r strengthens that presented in [15, Theorem 1.3] also see [19] and [14, Theorem 2.6]). The result concerning k c + k i appears to be new. Theorem Consider the eigenvalue problem of equation 2.1) under the Assumption 2.1 with J and L given by equation 2.11). Then the spectrum of JL is symmetric with respect to both the imaginary and the real axis and k r + k c + k i = nl + ) + nl ), with k c and ki even numbers. Furthermore, let {u j, U 2 u j : j Z} be a basis consisting of generalized eigenvectors for H = H H, as in Proposition 2.19b), with u j = u j, v j ) T, and consider the Hermitian matrices L ± C defined as L + ) ij L + u i, u j, L ) ij L v i, v j. Then and k c + k i = 2 dim[cl + ) CL )], k r = nl + ) nl ) + 2 min{nl + ), nl )} dim[cl + ) CL )]) nl + ) nl ).

13 Spectra of periodic waves 12 Proof: The first part follows immediately from equation 2.11) and Theorem Now consider the second part, and first notice that by Lemma 2.20 L ± = L ± λ), λ σ + JL) where L ± λ) are the matrices defined in Lemma 2.21 and Lemma If Im λ > 0, then one has as a consequence of Lemma 2.22 that In addition, if Re λ 0 one has that and similarly, if Re λ = 0 CL + λ)) = CL λ)). k c λ) = nl + λ)) = dim[cl + λ))] = dim[cl + λ)) CL λ))], k i λ) = dim[cl +λ)) CL λ))]. Next, if Im λ = 0, then by Lemma 2.21c) one can conclude that CL λ)) CL + λ)) =. Thus, upon summing over the eigenvalues λ σ + JL) and following the argument at the end of the proof of Theorem 2.13 one can conclude that dim[cl + ) CL )] = k c λ) + ki λ). λ σ c σ + JL) λ σ i σ + JL) Consequently, upon using the equalities k c λ) = k c λ) and ki λ) = k i λ), one has that k c + k i = 2 dim[cl + ) CL )]. For the last property, assume without loss of generality that nl + ) nl ). Upon using the above result one has that k r = nl + ) + nl ) k c + k i ) = nl + ) nl ) + 2 nl ) dim[cl + ) CL )]). Furthermore, due to the basis result in Proposition 2.19c) one can conclude that dim[cl )] = nl ), and since dim[cl + ) CL )] dim[cl )] one gets the desired inequality Refinements The previous results can be refined so that they are not dependent upon the basis formed by the generalized eigenvectors. As a consequence of equation 2.14) one can write ) v j,k = λl 1 u j,k + 1 λ u j,k 1, u j,0 = 0; 2.29) hence, L v j,k, v j,k = λ 2 L 1 u j,k + 1 ) λ u j,k 1, u j,k + 1 ) λ u j,k ) In particular, this implies that if m a λ) = m g λ) for some λ σjl), then L λ) = λ 2 L 1 λ), where the latter matrix is formed in the obvious manner. The above results then are valid with L being replaced by L 1. Furthermore, since a cone is independent of the basis, one can replace CL + ) CL ) with CL + ) CL 1 ). This observation leads to the following result. The condition that λ σjl) R satisfy m a λ) = m g λ) appears to be necessary. This issue will be further discussed after the proof.

14 M. Hǎrǎguş and T. Kapitula 13 Corollary Consider the eigenvalue problem of equation 2.1) under the Assumption 2.1 with J and L given by equation 2.11). If all λ σjl) R + satisfy m a λ) = m g λ), then the result of Theorem 2.25 can be reformulated as k c + ki = 2 dim[cl + ) CL 1 )], and k r = nl + ) nl ) + 2 min{nl+ ), nl )} dim[cl + ) CL 1 )] ) nl + ) nl ). Proof: Suppose that Im λ > 0. Suppose that each Jordan block has length a j for j = 1,..., m g λ). Recall equation 2.29) and equation 2.30) for j = 1,..., m g λ) and k = 1,..., a j. Now order the eigenfunctions as {u 1,1,..., u 1,a1, u 2,1,..., u 2,a2,..., u mg λ),1,..., u mg λ),a mgλ) }, and define the nilpotent matrices N l C l l via 1, i = 1,..., l 1, j = i + 1 N l ) ij = 0, otherwise. If one sets Sλ) = diag S 1 λ),..., S mg λ)λ) ), S j λ) = id + 1 λ N a j, then upon substituting the expression of equation 2.29) into the quadratic form L λ)c, c yields that if Re λ = 0, then L λ) = λ 2 S λ)l 1 λ)sλ), 2.31) where the notation consistent with Lemma 2.22a) yields L 1 λ)) ij = L 1 u i, u j. Note that Sλ) = id +N, where N is a nilpotent matrix; hence, S is nonsingular. Re λ > 0, then the matrix L c λ) given in Lemma 2.22 is given by On the other hand, if L c λ) = λ 2 S λ)l 1 λ)sλ) where the notation consistent with Lemma 2.22b) yields L 1 λ)) ij = L 1 u i, x j. First suppose that λ ir with nl + λ)) = 0. The codimension of such eigenvalues is nl + ) + nl ). As a consequence of the above discussion and Lemma 2.22 one has that CL + λ)) = CL λ)) = {0}; consequently, by equation 2.31) one has that CL 1 λ)) = {0}. Now consider an eigenvalue for which nl + λ)) 1, and initially assume that all such eigenvalues are semisimple, so that Sλ) = id for each such eigenvalue. If Im λ > 0, then as a consequence of Lemma 2.22 one has In addition, if Re λ 0 one has that and similarly, if Re λ = 0 CL + λ)) = CL 1 λ)). k c λ) = nl + λ)) = dim[cl + λ))] = dim[cl + λ)) CL 1 λ))], k i λ) = dim[cl +λ)) CL 1 λ))]. Next, if Im λ = 0, then by Lemma 2.21c) one can conclude that CL + λ)) CL 1 λ)) = {0}.

15 Spectra of periodic waves 14 Thus, upon summing over the eigenvalues λ σ + JL) and following the argument at the end of the proof of Theorem 2.13 one can conclude that dim[cl + ) CL 1 )] = k c λ) + ki λ). λ σ c σ + JL) λ σ i σ + JL) Consequently, upon using the equalities k c λ) = k c λ) and ki λ) = k i λ), one has that k c + k i = 2 dim[cl + ) CL 1 )]. Recall that the eigenvectors and generalized eigenvectors form a basis. Since cones are independent of a given basis, and since CL + ) corresponds to taking a cone with a particular basis, one finally has that k c + k i = 2 dim[cl + ) CL 1 )]. All that is left to do is to remove the assumption that the finite number of eigenvalues with Im λ R + having nl + λ)) 1 are semisimple. It is important to keep in mind that there are only a finite number of such eigenvalues. Eigenvalues having a nontrivial Jordan block are nongeneric; hence, a generic perturbation will cause these eigenvalues to unfold to semisimple eigenvalues. Let L ϸ ± correspond to a relatively compact symmetric perturbation of the operators L ±. If the eigenvalue is originally purely imaginary, then it will unfold into eigenvalues which are either purely imaginary or have nonzero real part e.g., see [30]). If the eigenvalue is originally complex with nonzero real part, then it will unfold into eigenvalues having the same property. In either case, for the operators L ϸ ± one recovers the desired result since the eigenvalues are now all semisimple. Since cones are open sets which depend continuously upon ϸ, one one can conclude that the result holds in the limit. Remark If the assumption that all λ σjl) R + satisfy m a λ) = m g λ) in Corollary 2.26 is dropped, then all that can be said is that k c + ki 2 dim[cl + ) CL 1 )]. The above perturbation argument fails in the case of initially real-valued eigenvalues, for in this case the perturbed eigenvalues will either be purely real no cone intersection) or complex with nonzero imaginary part cones coincide). Hence, the limiting behavior is unclear. It is natural to wonder if a different argument will allow one to remove the condition that all λ σjl) R satisfy m a λ) = m g λ). Consider the following example, which suggests that it is indeed necessary. Let J R 4 4 with ) ) 2a a + b 1 0 L + =, L a + b 2b =. 0 1 It is not difficult to check that σjl) = {± b a}, and that each eigenvalue λ satisfies m g λ) = 1 and m a λ) = 2. Upon noting that L 1 = L one has that Note that CL + ) = {x, y) : ax 2 + a + b)xy + by 2 < 0}, CL + ) = {x, y) R 2 : y = x and y = a b x}, CL 1 ) = {x, y) : x 2 y 2 < 0}. CL 1 ) = {x, y) R 2 : y = ±x}. Assuming that b > a, so that σjl) R, it is not difficult to check that if a, b R +, then dim[cl + ) CL 1 )] = 0; otherwise, dim[cl + ) CL 1 )] = 1. Thus, it is not possible to use the dimension of the intersection of the cones in order to say anything definitive in this case. On the other hand, if b < a, so that σjl) ir, then one has dim[cl + ) CL 1 )] = 1. In practice it is difficult to precisely calculate dim[cl + ) CL 1 )]. However, one can derive a computable lower bound for this quantity. Let N + N ) CL + ) CL 1 )) be maximal subspaces, and suppose that an orthonormal basis for each is given by {φ ± 1,..., φ± n ± }, where n ± = nl ± ). It is clear that dimn + N ) dim[cl + ) CL 1 )].

16 M. Hǎrǎguş and T. Kapitula 15 If u N + N, then there exists vectors c ± C n ± such that n + n u = c + j φ + j = c j φ j. j=1 Upon using the orthonormality of the basis functions one sees that the above is equivalent to The above is equivalent to the system Thus, which yields j=1 c + = Nc, c = N c + ; N ij φ i, φ+ j. 2.32) c + = NN c +, c = N Nc. dimn + N ) = dim[kernn id)] = dim[kern N id)], dim[kernn id)] dim[cl + ) CL 1 )]. 2.33) As a consequence of equation 2.33) one now has the following result. Corollary Under the nondegeneracy assumption given in Corollary 2.26 one has that k c + k i 2 dim[kernn id)], where N is defined in equation 2.32). For a final theoretical result, we are now in position to refine the result of Theorem As is seen in [21], the eigenvalue problem of equation 2.1) is equivalent to that of equation 2.11) upon setting L + L, L JLJ. 2.34) The primary effect of looking at the original system in this fashion is that for equation 2.34) there is now an artificial symmetry with respect to the real axis. In particular, if λ; u λ ) is an eigenvalue/eigenfunction pair for equation 2.1), then for equation 2.34) one has the pairs λ; u λ, J 1 u λ ) T ) and λ; u λ, J 1 u λ ) T ). The major implication of introducing this artificial symmetry is that each of the quantities k r, k c, ki in Theorem 2.13 is now doubled for equation 2.34). The Jordan block structure of the eigenvalues are not effected, however. Taking into account the doubling of k c and ki, under the appropriate nondegeneracy condition one can now use Corollary 2.26 to state that for equation 2.1), k c + k i = dim[cl) C J 1 L 1 J 1 )]. Let us now derive a lower bound similar to that given in Corollary Let N CL) be a maximal subspace, and suppose that an orthonormal basis for N is given by {φ 1,..., φ n }, where n = nl). It is a straightforward calculation to show that an orthonormal basis for the maximal subspace of C J 1 L 1 J 1 ) is given by {Jφ 1,..., Jφ n }. Consequently, the matrix N derived in equation 2.32) satisfies N ij = ψ i, Jψ j. Since N is skew-symmetric, one has that NN = N 2, so that dim[kernn id)] = dim[kerin id)] + dim[kerin + id)]. Upon applying the above discussion to Corollary 2.28, we have now proved the following. Corollary Suppose that for the eigenvalue problem of equation 2.1) that all λ σjl) R + satisfy m a λ) = m g λ). Then one can refine Theorem 2.13 by further stating that k c + k i = dim[cl) C J 1 L 1 J 1 )] dim[kerin id)] + dim[kerin + id)], where N C nl) nl) is skew-symmetric and is given by N ij = ψ i, Jψ j.
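The lower bound of Corollary 2.29 is computable; the following sketch evaluates it for an illustrative finite-dimensional system in the canonical form (2.11) in which the negative directions of L_+ and L_− are aligned, so that the skew-symmetric matrix N has eigenvalues ±i and the bound is nontrivial. All matrix entries are illustrative choices.

```python
import numpy as np

# Canonical form (2.11) with the negative directions of L_+ and L_- aligned.
Lp = np.diag([-1.0, 2.0, 3.0])
Lm = np.diag([-2.0, 5.0, 7.0])
m = Lp.shape[0]
Z = np.zeros((m, m))
L = np.block([[Lp, Z], [Z, Lm]])
J = np.block([[Z, np.eye(m)], [-np.eye(m), Z]])

# Orthonormal basis {psi_i} of the negative subspace of L, and N_ij = <psi_i, J psi_j>.
evals, evecs = np.linalg.eigh(L)
Psi = evecs[:, evals < 0]                # columns psi_1, ..., psi_{n(L)}
N = Psi.T @ J @ Psi                      # real skew-symmetric

# dim ker(iN - id) + dim ker(iN + id): eigenvalues of the Hermitian matrix iN equal to +-1.
mu = np.linalg.eigvalsh(1j * N)
bound = int(np.sum(np.abs(np.abs(mu) - 1.0) < 1e-10))

# Brute-force k_c + k_i^- from the spectrum of JL (all eigenvalues are simple here).
lam, V = np.linalg.eig(J @ L)
tol = 1e-8
k_c = k_i_minus = 0
for lam_j, v in zip(lam, V.T):
    if lam_j.real > tol and abs(lam_j.imag) > tol:
        k_c += 1
    elif abs(lam_j.real) < tol and np.real(np.vdot(v, L @ v)) < 0:
        k_i_minus += 1                   # negative Krein signature

print("lower bound from N:", bound, "   k_c + k_i^- =", k_c + k_i_minus)
```

Both printed numbers equal 2 for this choice; if the negative directions of L_+ and L_− are not parallel, the eigenvalues of N move off ±i and the lower bound degenerates to zero, so it is precisely this alignment that makes the estimate informative.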

3. Eigenvalue problems of NLS-type

In this section we apply the general results of Section 2 to the eigenvalue problem

JLu = λu,  (3.1)

where J ∈ R^{n×n} is a skew-symmetric, invertible matrix, and

L ≡ −D_2 ∂_x² + D_1 ∂_x + Q(x).  (3.2)

Here D_2 ∈ R^{n×n} is a positive definite, symmetric matrix, D_1 ∈ R^{n×n} a skew-symmetric matrix, and Q : R → R^{n×n} a smooth map with Q(x) symmetric for all x ∈ R. The form of the operator is motivated by the intended application, i.e., the study of spectra of periodic waves in equations of NLS type (e.g., see [2-4, 11] and the references therein). Therefore, unlike in [20], we will assume here that the map Q is 2L-periodic for some L > 0, i.e., Q(x + 2L) = Q(x).

The operators L and JL are considered in L²(R; Rⁿ). Consequently, all of the spectra are continuous, and there are no isolated point eigenvalues with finite multiplicity. However, upon using a Bloch-wave decomposition, i.e., Floquet theory, one has that solving equation (3.1) is equivalent to solving for each γ ∈ [−π, π) the eigenvalue problem

JL_γ u = λu,  u(−L) = u(L),  (3.3)

where the operator L_γ is formed by replacing ∂_x by ∂_x + iγ/(2L), so that

L_γ ≡ −D_2 (∂_x + iγ/(2L))² + D_1 (∂_x + iγ/(2L)) + Q(x).  (3.4)

More precisely, one has that

σ(JL) = ⋃_{γ∈[−π,π)} σ(JL_γ),  (3.5)

where the operator JL_γ is taken in the Hilbert space H ≡ L²_per([−L, L]; Cⁿ) equipped with the standard inner product (e.g., see [28]). In contrast to JL, the operator JL_γ has compact resolvent, since its domain H²_per([−L, L]; Cⁿ) is compactly embedded in H, and it satisfies Assumption 2.1 provided 0 ∉ σ(L_γ). Indeed, Assumption 2.1(a) is clearly satisfied, and Assumption 2.1(b) holds since L_γ is self-adjoint, with compact resolvent, and invertible. Moreover, L_γ is a relatively compact perturbation of the positive operator −D_2 (∂_x + iγ/(2L))², which implies that its spectrum is bounded from below, σ(L_γ) ⊂ [−M, ∞), for some M > 0. Together with the fact that σ(L_γ) is a discrete set, this shows that L_γ has only a finite number of negative eigenvalues, which implies Assumption 2.1(c). Next, JL_γ is a relatively compact perturbation of the operator with constant coefficients

A_0^γ ≡ −J D_2 (∂_x + iγ/(2L))² + J D_1 (∂_x + iγ/(2L)),

and the difference B ≡ JL_γ − A_0^γ = JQ(x) is bounded. Consequently, equation (2.2) holds with r = 0. A standard Fourier analysis shows that the spectrum of A_0^γ consists of eigenvalues λ_{k,j} = iω_{k,j}, k ∈ Z, j = 1, ..., n, with (λ_{k,j})_{j=1,...,n} the eigenvalues of the skew-symmetric matrix

J D_2 (k + γ/(2L))² + i J D_1 (k + γ/(2L)),  for k ∈ Z.

Analyzing the location of these eigenvalues on the imaginary axis, one finds that |λ_{k,j}| = O(k²) as |k| → ∞, and concludes that Assumption 2.1(d)(ii)-(iii) holds for s = 1 and r' = 1/2. This shows that J and L_γ satisfy Assumption 2.1.
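To illustrate how Assumption 2.1(d) is verified here, the following sketch computes the purely imaginary eigenvalues of the Bloch symbol of A_0^γ for an illustrative NLS-type choice (n = 2, D_2 = id, D_1 = 0; the half-period L, the Bloch parameter γ, and the Fourier truncation are arbitrary), and estimates the constant c in the gap condition (2.3) with r' = 1/2.

```python
import numpy as np

# Illustrative NLS-type data: n = 2, D_2 = id, D_1 = 0, standard symplectic J.
J = np.array([[0.0, 1.0], [-1.0, 0.0]])
D2 = np.eye(2)
L_half, gamma = np.pi, 0.7          # arbitrary half-period and Bloch parameter

omegas = []
for k in range(-40, 41):
    a = k + gamma / (2 * L_half)
    symbol = (J @ D2) * a**2        # the i*J*D_1*a term is absent since D_1 = 0
    omegas.extend(np.linalg.eigvals(symbol).imag)   # eigenvalues are +- i a^2

omegas = np.sort(np.array(omegas))
gaps = np.diff(omegas)

# |omega| grows like k^2, so sum |omega_n|^{-s} converges already for s = 1, and the
# gaps between consecutive omegas grow like |omega|^{1/2}, i.e. r' = 1/2 in (2.3).
print("largest |omega| in the truncation:", np.abs(omegas).max())
print("estimated gap constant c:", (gaps / np.sqrt(np.abs(omegas[1:]))).min())
```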

We point out that, as a consequence of equation (3.4), one has that

L̄_γ = L_{−γ},  (3.6)

so that if λ ∈ σ(JL_γ) with associated eigenfunction v, then λ̄ ∈ σ(JL_{−γ}) with associated eigenfunction v̄. Thus, one can conclude that for equation (3.1) one has the standard Hamiltonian quartet {±λ, ±λ̄} ⊂ σ(JL). However, this quartet is recovered only by considering σ(JL_γ) ∪ σ(JL_{−γ}) for 0 ≤ γ ≤ π.

We can now apply the results in Section 2, more precisely Theorem 2.13, which together with equation (3.6) gives the following.

Theorem 3.1. Consider the eigenvalue problem (3.3). Let −π ≤ γ < π be such that dim[ker(L_γ)] = 0. One has that

k_r(γ) + k_c(γ) + k_i^−(γ) = n(L_γ).

Furthermore, σ(JL_{−γ}) is the complex conjugate of σ(JL_γ), and k_r(−γ) = k_r(γ), k_c(−γ) = k_c(γ), k_i^−(−γ) = k_i^−(γ).

Remark 3.2.

(a) As a consequence of the last part of Theorem 3.1, we may restrict from now on to γ ∈ [0, π].

(b) In stability problems, JL represents the linearization about the periodic wave, and if the problem possesses a translation invariance in space (which is often the case) then the derivative of the periodic wave belongs to the kernel of JL_0, so that the condition dim[ker(L_γ)] = 0 does not hold for γ = 0. However, for γ ≠ 0 this zero eigenvalue of JL_0 typically moves off the origin, so that this condition is satisfied. One expects in such problems that this assumption will only be violated for a finite number of values of γ. As we shall see, in our examples here the condition dim[ker(L_γ)] = 0 is only violated for γ = 0. Notice that, since the full spectrum of JL is continuous, this restriction does not qualitatively affect the final result.

We consider now the particular case D_1 = 0 and Q(−x) = Q(x). Set

U_1[u(x)] ≡ u(−x),  (3.7)

so that U_1 is a unitary operator on H. Then it is easy to verify that U_1 L = LU_1 and U_1 J = JU_1. For the Bloch operator JL_γ it is not difficult to check that U_1 L_γ = L_{−γ} U_1, and then U_1 L̄_γ = L_γ U_1. Consequently, U_1, J, and L_γ satisfy Assumption 2.15(a). Also note that in this case σ(JL_{−γ}) = σ(JL_γ). By applying the results of Theorem 2.17 we obtain the following.

Theorem 3.3. Under the assumptions in Theorem 3.1, further suppose that D_1 = 0 and Q(−x) = Q(x). Then the spectrum of JL_γ is symmetric with respect to both the imaginary and the real axis, and the result in Theorem 3.1 holds with k_c(γ) and k_i^−(γ) even numbers.

Finally, consider the canonical form

J = [ 0  id ; −id  0 ],  L = diag(L_+, L_−),  (3.8)

which is achieved by taking

D_2 = diag(D_2^+, D_2^−),  D_1 = 0,  Q(x) = diag(Q_+(x), Q_−(x)).

Then in (3.4) one finds L_γ = diag(L_+(γ), L_−(γ)). As in the previous case, one has σ(JL_{−γ}) = σ(JL_γ), and the result of Theorem 2.25 gives:

Theorem 3.4. Assume that J and L are as in equation (3.8). Let −π ≤ γ < π be such that dim[ker(L_±(γ))] = 0. Then the result of Theorem 2.25 holds for J and L_γ.
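The index count of Theorem 3.4 can be explored numerically by truncating the Bloch operators of (3.8) in Fourier space. In the sketch below the potentials Q_±, the half-period, the Bloch parameter γ, and the truncation size are illustrative placeholders rather than data coming from an actual periodic wave of (3.9); for the truncated matrices the finite-dimensional analogue of Theorem 2.25 guarantees k_r + k_c ≤ n(L_+(γ)) + n(L_−(γ)) provided the truncated operators are nonsingular, and that inequality is what is checked.

```python
import numpy as np

def bloch_hill(q_hat, gamma, L, K):
    """Fourier-truncated Bloch operator L(gamma) = -(d/dx + i*gamma/(2L))^2 + Q(x)
    on 2L-periodic functions, using modes exp(i*pi*k*x/L) with |k| <= K.
    q_hat maps an integer Fourier index to the corresponding coefficient of Q."""
    ks = np.arange(-K, K + 1)
    M = np.diag(((np.pi * ks / L) + gamma / (2 * L)) ** 2).astype(complex)
    for i, k in enumerate(ks):
        for j, m in enumerate(ks):
            M[i, j] += q_hat(k - m)      # convolution with the potential
    return M

# Illustrative 2L-periodic potentials (NOT from an actual periodic wave of (3.9)):
# Q_+(x) = -3 + 2*cos(pi*x/L) and Q_-(x) = -1 + cos(pi*x/L).
L_half, K, gamma = np.pi, 12, 0.6
qp = lambda k: {-1: 1.0, 0: -3.0, 1: 1.0}.get(k, 0.0)
qm = lambda k: {-1: 0.5, 0: -1.0, 1: 0.5}.get(k, 0.0)

Lp = bloch_hill(qp, gamma, L_half, K)
Lm = bloch_hill(qm, gamma, L_half, K)
n_plus = int(np.sum(np.linalg.eigvalsh(Lp) < 0))
n_minus = int(np.sum(np.linalg.eigvalsh(Lm) < 0))

# J L_gamma in the canonical form (3.8): J = [[0, id], [-id, 0]], L_gamma = diag(L_+, L_-).
Z = np.zeros_like(Lp)
JL = np.block([[Z, Lm], [-Lp, Z]])
unstable = int(np.sum(np.linalg.eigvals(JL).real > 1e-8))    # k_r + k_c

print("n(L_+(gamma)) + n(L_-(gamma)) =", n_plus + n_minus)
print("unstable eigenvalues k_r + k_c =", unstable)          # bounded by the line above
```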

3.1. NLS equation with periodic potential

Consider the nonlinear Schrödinger equation

iq_t + q_xx + ωq + f(x, |q|²)q = 0,  (3.9)

where f is a smooth, real-valued map, and f(x + 2L, ·) = f(x, ·). Here q(x, t) ∈ C and (x, t) ∈ R × R⁺. A physically relevant example is

f(x, |q|²) = g|q|² + V_0 sn²(x, k),  (3.10)

where g = ±1 and sn(x, k) is the Jacobian elliptic sine function with 0 ≤ k < 1 (e.g., see [3, 4]). In this case, if g = +1 then equation (3.9) is a model equation for an attractive Bose-Einstein condensate (BEC), whereas if g = −1 equation (3.9) is a model equation for a repulsive BEC. The form of the potential allows one to explicitly solve the steady-state problem associated with equation (3.9) via the use of elliptic functions. We discuss this particular case in the next section.

Suppose that Q(x) is a real-valued, periodic steady solution to equation (3.9) which satisfies Q(x + 2L) = ±Q(x). Note that this implies that Q²(x) is 2L-periodic. Upon linearizing about this solution one finds the eigenvalue problem given by equation (3.1) with J and L as in (3.8), where

L_± ≡ −∂_xx + Q_±(x),  Q_±(x + 2L) = Q_±(x),  (3.11)

with

Q_+(x) ≡ −f(x, Q²(x)) − 2D_2f(x, Q²(x))Q²(x) − ω,  Q_−(x) ≡ −f(x, Q²(x)) − ω,

where D_2f denotes the derivative of f with respect to its second argument. Note that L_± are operators of Hill type (see [24]). Their spectra in L²(R) consist of unions of bands with nontrivial interior,

[µ_0, µ_1'] ∪ [µ_2', µ_1] ∪ [µ_2, µ_3'] ∪ ···,

where the solution of the eigenvalue problem corresponding to the band edge µ_j is periodic with period 2L (the periodic eigenvalue), and the solution corresponding to the band edge µ_j' is periodic with period 4L (the antiperiodic eigenvalue). In terms of Floquet theory the eigenvalue µ_j has a Floquet exponent of γ = 0, while the eigenvalue µ_j' has a Floquet exponent of γ = π. In terms of the associated Bloch operators L_±(γ), the band edge µ_j corresponds to an eigenvalue for γ = 0, while the band edge µ_j' corresponds to an eigenvalue for γ = π. If µ_{2j−1} = µ_{2j} or µ_{2j−1}' = µ_{2j}', then this corresponds to a closed gap, and such a point is referred to as a double point. For each γ ∈ (0, π) there is precisely one value of µ(γ) in the interior of each band such that γ is the Floquet exponent for that value of µ(γ); furthermore, µ(γ) depends continuously upon γ and µ(γ) = µ(−γ). At the band edge, in addition to the periodic or antiperiodic solution there is a solution which grows linearly, unless the band edge is a double point, in which case there are two linearly independent periodic or antiperiodic solutions.

We are now in position to present another proof of the instability result in [2, Theorem 3.3], as well as its natural generalization.

Theorem 3.5. The periodic wave Q(x) is spectrally unstable in each of the following two cases:

(a) if µ = 0 is in the interior of a band of σ(L_+);

(b) if L_+ and L_− have a different number of bands contained in R⁻ ∪ {0}.

In both cases, k_r(γ) ≥ 1 for some γ ∈ (0, π).

Proof: We use the results of Theorem 3.4. Since L_−Q = 0, and Q(x) satisfies Q(x + 2L) = ±Q(x), one has that µ = 0 is a band edge for L_−. The set σ(L_−) ∩ (R⁻ ∪ {0}) will then contain a finite number of bands, so that for each γ ∈ (0, π) one will have n(L_−(γ)) = k for some fixed k ∈ N_0 which is independent of γ. Now consider the operator L_+ under the assumption that µ = 0 is in the interior of a band, say (without loss of generality) [µ_j, µ_{j+1}], for some j ∈ N_0. Let γ_c ∈ (0, π) be such that µ(γ_c) = 0. One then has that

n(L_+(γ)) = j + 1 for 0 < γ < γ_c,  and  n(L_+(γ)) = j for γ_c < γ < π.


More information

Mathematical foundations - linear algebra

Mathematical foundations - linear algebra Mathematical foundations - linear algebra Andrea Passerini passerini@disi.unitn.it Machine Learning Vector space Definition (over reals) A set X is called a vector space over IR if addition and scalar

More information

Numerical Linear Algebra Homework Assignment - Week 2

Numerical Linear Algebra Homework Assignment - Week 2 Numerical Linear Algebra Homework Assignment - Week 2 Đoàn Trần Nguyên Tùng Student ID: 1411352 8th October 2016 Exercise 2.1: Show that if a matrix A is both triangular and unitary, then it is diagonal.

More information

SPECTRAL PROPERTIES OF THE LAPLACIAN ON BOUNDED DOMAINS

SPECTRAL PROPERTIES OF THE LAPLACIAN ON BOUNDED DOMAINS SPECTRAL PROPERTIES OF THE LAPLACIAN ON BOUNDED DOMAINS TSOGTGEREL GANTUMUR Abstract. After establishing discrete spectra for a large class of elliptic operators, we present some fundamental spectral properties

More information

The Dirichlet-to-Neumann operator

The Dirichlet-to-Neumann operator Lecture 8 The Dirichlet-to-Neumann operator The Dirichlet-to-Neumann operator plays an important role in the theory of inverse problems. In fact, from measurements of electrical currents at the surface

More information

Stability of an abstract wave equation with delay and a Kelvin Voigt damping

Stability of an abstract wave equation with delay and a Kelvin Voigt damping Stability of an abstract wave equation with delay and a Kelvin Voigt damping University of Monastir/UPSAY/LMV-UVSQ Joint work with Serge Nicaise and Cristina Pignotti Outline 1 Problem The idea Stability

More information

Review and problem list for Applied Math I

Review and problem list for Applied Math I Review and problem list for Applied Math I (This is a first version of a serious review sheet; it may contain errors and it certainly omits a number of topic which were covered in the course. Let me know

More information

MATH JORDAN FORM

MATH JORDAN FORM MATH 53 JORDAN FORM Let A,, A k be square matrices of size n,, n k, respectively with entries in a field F We define the matrix A A k of size n = n + + n k as the block matrix A 0 0 0 0 A 0 0 0 0 A k It

More information

ACM 104. Homework Set 5 Solutions. February 21, 2001

ACM 104. Homework Set 5 Solutions. February 21, 2001 ACM 04 Homework Set 5 Solutions February, 00 Franklin Chapter 4, Problem 4, page 0 Let A be an n n non-hermitian matrix Suppose that A has distinct eigenvalues λ,, λ n Show that A has the eigenvalues λ,,

More information

Lecture 1. 1 Conic programming. MA 796S: Convex Optimization and Interior Point Methods October 8, Consider the conic program. min.

Lecture 1. 1 Conic programming. MA 796S: Convex Optimization and Interior Point Methods October 8, Consider the conic program. min. MA 796S: Convex Optimization and Interior Point Methods October 8, 2007 Lecture 1 Lecturer: Kartik Sivaramakrishnan Scribe: Kartik Sivaramakrishnan 1 Conic programming Consider the conic program min s.t.

More information

Transverse spectral stability of small periodic traveling waves for the KP equation

Transverse spectral stability of small periodic traveling waves for the KP equation Transverse spectral stability of small periodic traveling waves for the KP equation Mariana Haragus To cite this version: Mariana Haragus. Transverse spectral stability of small periodic traveling waves

More information

4.2. ORTHOGONALITY 161

4.2. ORTHOGONALITY 161 4.2. ORTHOGONALITY 161 Definition 4.2.9 An affine space (E, E ) is a Euclidean affine space iff its underlying vector space E is a Euclidean vector space. Given any two points a, b E, we define the distance

More information

Eigenvalues and Eigenvectors

Eigenvalues and Eigenvectors /88 Chia-Ping Chen Department of Computer Science and Engineering National Sun Yat-sen University Linear Algebra Eigenvalue Problem /88 Eigenvalue Equation By definition, the eigenvalue equation for matrix

More information

Math 443 Differential Geometry Spring Handout 3: Bilinear and Quadratic Forms This handout should be read just before Chapter 4 of the textbook.

Math 443 Differential Geometry Spring Handout 3: Bilinear and Quadratic Forms This handout should be read just before Chapter 4 of the textbook. Math 443 Differential Geometry Spring 2013 Handout 3: Bilinear and Quadratic Forms This handout should be read just before Chapter 4 of the textbook. Endomorphisms of a Vector Space This handout discusses

More information

CHAPTER 6. Representations of compact groups

CHAPTER 6. Representations of compact groups CHAPTER 6 Representations of compact groups Throughout this chapter, denotes a compact group. 6.1. Examples of compact groups A standard theorem in elementary analysis says that a subset of C m (m a positive

More information

j=1 x j p, if 1 p <, x i ξ : x i < ξ} 0 as p.

j=1 x j p, if 1 p <, x i ξ : x i < ξ} 0 as p. LINEAR ALGEBRA Fall 203 The final exam Almost all of the problems solved Exercise Let (V, ) be a normed vector space. Prove x y x y for all x, y V. Everybody knows how to do this! Exercise 2 If V is a

More information

Throughout these notes we assume V, W are finite dimensional inner product spaces over C.

Throughout these notes we assume V, W are finite dimensional inner product spaces over C. Math 342 - Linear Algebra II Notes Throughout these notes we assume V, W are finite dimensional inner product spaces over C 1 Upper Triangular Representation Proposition: Let T L(V ) There exists an orthonormal

More information

COMMON COMPLEMENTS OF TWO SUBSPACES OF A HILBERT SPACE

COMMON COMPLEMENTS OF TWO SUBSPACES OF A HILBERT SPACE COMMON COMPLEMENTS OF TWO SUBSPACES OF A HILBERT SPACE MICHAEL LAUZON AND SERGEI TREIL Abstract. In this paper we find a necessary and sufficient condition for two closed subspaces, X and Y, of a Hilbert

More information

Diagonalization by a unitary similarity transformation

Diagonalization by a unitary similarity transformation Physics 116A Winter 2011 Diagonalization by a unitary similarity transformation In these notes, we will always assume that the vector space V is a complex n-dimensional space 1 Introduction A semi-simple

More information

6 Inner Product Spaces

6 Inner Product Spaces Lectures 16,17,18 6 Inner Product Spaces 6.1 Basic Definition Parallelogram law, the ability to measure angle between two vectors and in particular, the concept of perpendicularity make the euclidean space

More information

MATH 583A REVIEW SESSION #1

MATH 583A REVIEW SESSION #1 MATH 583A REVIEW SESSION #1 BOJAN DURICKOVIC 1. Vector Spaces Very quick review of the basic linear algebra concepts (see any linear algebra textbook): (finite dimensional) vector space (or linear space),

More information

Math 102, Winter Final Exam Review. Chapter 1. Matrices and Gaussian Elimination

Math 102, Winter Final Exam Review. Chapter 1. Matrices and Gaussian Elimination Math 0, Winter 07 Final Exam Review Chapter. Matrices and Gaussian Elimination { x + x =,. Different forms of a system of linear equations. Example: The x + 4x = 4. [ ] [ ] [ ] vector form (or the column

More information

Boolean Inner-Product Spaces and Boolean Matrices

Boolean Inner-Product Spaces and Boolean Matrices Boolean Inner-Product Spaces and Boolean Matrices Stan Gudder Department of Mathematics, University of Denver, Denver CO 80208 Frédéric Latrémolière Department of Mathematics, University of Denver, Denver

More information

MATH 240 Spring, Chapter 1: Linear Equations and Matrices

MATH 240 Spring, Chapter 1: Linear Equations and Matrices MATH 240 Spring, 2006 Chapter Summaries for Kolman / Hill, Elementary Linear Algebra, 8th Ed. Sections 1.1 1.6, 2.1 2.2, 3.2 3.8, 4.3 4.5, 5.1 5.3, 5.5, 6.1 6.5, 7.1 7.2, 7.4 DEFINITIONS Chapter 1: Linear

More information

JORDAN NORMAL FORM. Contents Introduction 1 Jordan Normal Form 1 Conclusion 5 References 5

JORDAN NORMAL FORM. Contents Introduction 1 Jordan Normal Form 1 Conclusion 5 References 5 JORDAN NORMAL FORM KATAYUN KAMDIN Abstract. This paper outlines a proof of the Jordan Normal Form Theorem. First we show that a complex, finite dimensional vector space can be decomposed into a direct

More information

Dmitry Pelinovsky Department of Mathematics, McMaster University, Canada

Dmitry Pelinovsky Department of Mathematics, McMaster University, Canada Spectrum of the linearized NLS problem Dmitry Pelinovsky Department of Mathematics, McMaster University, Canada Collaboration: Marina Chugunova (McMaster, Canada) Scipio Cuccagna (Modena and Reggio Emilia,

More information

Linear Algebra Massoud Malek

Linear Algebra Massoud Malek CSUEB Linear Algebra Massoud Malek Inner Product and Normed Space In all that follows, the n n identity matrix is denoted by I n, the n n zero matrix by Z n, and the zero vector by θ n An inner product

More information

Lecture notes: Applied linear algebra Part 1. Version 2

Lecture notes: Applied linear algebra Part 1. Version 2 Lecture notes: Applied linear algebra Part 1. Version 2 Michael Karow Berlin University of Technology karow@math.tu-berlin.de October 2, 2008 1 Notation, basic notions and facts 1.1 Subspaces, range and

More information

REPRESENTATION THEORY WEEK 7

REPRESENTATION THEORY WEEK 7 REPRESENTATION THEORY WEEK 7 1. Characters of L k and S n A character of an irreducible representation of L k is a polynomial function constant on every conjugacy class. Since the set of diagonalizable

More information

Perturbation theory for eigenvalues of Hermitian pencils. Christian Mehl Institut für Mathematik TU Berlin, Germany. 9th Elgersburg Workshop

Perturbation theory for eigenvalues of Hermitian pencils. Christian Mehl Institut für Mathematik TU Berlin, Germany. 9th Elgersburg Workshop Perturbation theory for eigenvalues of Hermitian pencils Christian Mehl Institut für Mathematik TU Berlin, Germany 9th Elgersburg Workshop Elgersburg, March 3, 2014 joint work with Shreemayee Bora, Michael

More information

Finite-dimensional spaces. C n is the space of n-tuples x = (x 1,..., x n ) of complex numbers. It is a Hilbert space with the inner product

Finite-dimensional spaces. C n is the space of n-tuples x = (x 1,..., x n ) of complex numbers. It is a Hilbert space with the inner product Chapter 4 Hilbert Spaces 4.1 Inner Product Spaces Inner Product Space. A complex vector space E is called an inner product space (or a pre-hilbert space, or a unitary space) if there is a mapping (, )

More information

Jordan Normal Form. Chapter Minimal Polynomials

Jordan Normal Form. Chapter Minimal Polynomials Chapter 8 Jordan Normal Form 81 Minimal Polynomials Recall p A (x) =det(xi A) is called the characteristic polynomial of the matrix A Theorem 811 Let A M n Then there exists a unique monic polynomial q

More information

BERNARD RUSSO. University of California, Irvine

BERNARD RUSSO. University of California, Irvine A HOLOMORPHIC CHARACTERIZATION OF OPERATOR ALGEBRAS (JOINT WORK WITH MATT NEAL) BERNARD RUSSO University of California, Irvine UNIVERSITY OF CALIFORNIA, IRVINE ANALYSIS SEMINAR OCTOBER 16, 2012 KEYWORDS

More information

Page 404. Lecture 22: Simple Harmonic Oscillator: Energy Basis Date Given: 2008/11/19 Date Revised: 2008/11/19

Page 404. Lecture 22: Simple Harmonic Oscillator: Energy Basis Date Given: 2008/11/19 Date Revised: 2008/11/19 Page 404 Lecture : Simple Harmonic Oscillator: Energy Basis Date Given: 008/11/19 Date Revised: 008/11/19 Coordinate Basis Section 6. The One-Dimensional Simple Harmonic Oscillator: Coordinate Basis Page

More information

Analysis Preliminary Exam Workshop: Hilbert Spaces

Analysis Preliminary Exam Workshop: Hilbert Spaces Analysis Preliminary Exam Workshop: Hilbert Spaces 1. Hilbert spaces A Hilbert space H is a complete real or complex inner product space. Consider complex Hilbert spaces for definiteness. If (, ) : H H

More information

2. Review of Linear Algebra

2. Review of Linear Algebra 2. Review of Linear Algebra ECE 83, Spring 217 In this course we will represent signals as vectors and operators (e.g., filters, transforms, etc) as matrices. This lecture reviews basic concepts from linear

More information

Representation theory and quantum mechanics tutorial Spin and the hydrogen atom

Representation theory and quantum mechanics tutorial Spin and the hydrogen atom Representation theory and quantum mechanics tutorial Spin and the hydrogen atom Justin Campbell August 3, 2017 1 Representations of SU 2 and SO 3 (R) 1.1 The following observation is long overdue. Proposition

More information

Chapter 1. Measure Spaces. 1.1 Algebras and σ algebras of sets Notation and preliminaries

Chapter 1. Measure Spaces. 1.1 Algebras and σ algebras of sets Notation and preliminaries Chapter 1 Measure Spaces 1.1 Algebras and σ algebras of sets 1.1.1 Notation and preliminaries We shall denote by X a nonempty set, by P(X) the set of all parts (i.e., subsets) of X, and by the empty set.

More information

Diagonalizing Matrices

Diagonalizing Matrices Diagonalizing Matrices Massoud Malek A A Let A = A k be an n n non-singular matrix and let B = A = [B, B,, B k,, B n ] Then A n A B = A A 0 0 A k [B, B,, B k,, B n ] = 0 0 = I n 0 A n Notice that A i B

More information

ALGEBRA QUALIFYING EXAM PROBLEMS LINEAR ALGEBRA

ALGEBRA QUALIFYING EXAM PROBLEMS LINEAR ALGEBRA ALGEBRA QUALIFYING EXAM PROBLEMS LINEAR ALGEBRA Kent State University Department of Mathematical Sciences Compiled and Maintained by Donald L. White Version: August 29, 2017 CONTENTS LINEAR ALGEBRA AND

More information

Determinant lines and determinant line bundles

Determinant lines and determinant line bundles CHAPTER Determinant lines and determinant line bundles This appendix is an exposition of G. Segal s work sketched in [?] on determinant line bundles over the moduli spaces of Riemann surfaces with parametrized

More information

Perturbation Theory for Self-Adjoint Operators in Krein spaces

Perturbation Theory for Self-Adjoint Operators in Krein spaces Perturbation Theory for Self-Adjoint Operators in Krein spaces Carsten Trunk Institut für Mathematik, Technische Universität Ilmenau, Postfach 10 05 65, 98684 Ilmenau, Germany E-mail: carsten.trunk@tu-ilmenau.de

More information

Spectral Properties of Elliptic Operators In previous work we have replaced the strong version of an elliptic boundary value problem

Spectral Properties of Elliptic Operators In previous work we have replaced the strong version of an elliptic boundary value problem Spectral Properties of Elliptic Operators In previous work we have replaced the strong version of an elliptic boundary value problem L u x f x BC u x g x with the weak problem find u V such that B u,v

More information

Functional Analysis. Franck Sueur Metric spaces Definitions Completeness Compactness Separability...

Functional Analysis. Franck Sueur Metric spaces Definitions Completeness Compactness Separability... Functional Analysis Franck Sueur 2018-2019 Contents 1 Metric spaces 1 1.1 Definitions........................................ 1 1.2 Completeness...................................... 3 1.3 Compactness......................................

More information

Ir O D = D = ( ) Section 2.6 Example 1. (Bottom of page 119) dim(v ) = dim(l(v, W )) = dim(v ) dim(f ) = dim(v )

Ir O D = D = ( ) Section 2.6 Example 1. (Bottom of page 119) dim(v ) = dim(l(v, W )) = dim(v ) dim(f ) = dim(v ) Section 3.2 Theorem 3.6. Let A be an m n matrix of rank r. Then r m, r n, and, by means of a finite number of elementary row and column operations, A can be transformed into the matrix ( ) Ir O D = 1 O

More information

Markov Chains and Stochastic Sampling

Markov Chains and Stochastic Sampling Part I Markov Chains and Stochastic Sampling 1 Markov Chains and Random Walks on Graphs 1.1 Structure of Finite Markov Chains We shall only consider Markov chains with a finite, but usually very large,

More information

Chapter 7. Canonical Forms. 7.1 Eigenvalues and Eigenvectors

Chapter 7. Canonical Forms. 7.1 Eigenvalues and Eigenvectors Chapter 7 Canonical Forms 7.1 Eigenvalues and Eigenvectors Definition 7.1.1. Let V be a vector space over the field F and let T be a linear operator on V. An eigenvalue of T is a scalar λ F such that there

More information

Contents. Preface for the Instructor. Preface for the Student. xvii. Acknowledgments. 1 Vector Spaces 1 1.A R n and C n 2

Contents. Preface for the Instructor. Preface for the Student. xvii. Acknowledgments. 1 Vector Spaces 1 1.A R n and C n 2 Contents Preface for the Instructor xi Preface for the Student xv Acknowledgments xvii 1 Vector Spaces 1 1.A R n and C n 2 Complex Numbers 2 Lists 5 F n 6 Digression on Fields 10 Exercises 1.A 11 1.B Definition

More information

Math 249B. Nilpotence of connected solvable groups

Math 249B. Nilpotence of connected solvable groups Math 249B. Nilpotence of connected solvable groups 1. Motivation and examples In abstract group theory, the descending central series {C i (G)} of a group G is defined recursively by C 0 (G) = G and C

More information

Mathematics Department Stanford University Math 61CM/DM Inner products

Mathematics Department Stanford University Math 61CM/DM Inner products Mathematics Department Stanford University Math 61CM/DM Inner products Recall the definition of an inner product space; see Appendix A.8 of the textbook. Definition 1 An inner product space V is a vector

More information

NONCOMMUTATIVE POLYNOMIAL EQUATIONS. Edward S. Letzter. Introduction

NONCOMMUTATIVE POLYNOMIAL EQUATIONS. Edward S. Letzter. Introduction NONCOMMUTATIVE POLYNOMIAL EQUATIONS Edward S Letzter Introduction My aim in these notes is twofold: First, to briefly review some linear algebra Second, to provide you with some new tools and techniques

More information

Chapter 2: Linear Independence and Bases

Chapter 2: Linear Independence and Bases MATH20300: Linear Algebra 2 (2016 Chapter 2: Linear Independence and Bases 1 Linear Combinations and Spans Example 11 Consider the vector v (1, 1 R 2 What is the smallest subspace of (the real vector space

More information

SPECTRAL THEOREM FOR COMPACT SELF-ADJOINT OPERATORS

SPECTRAL THEOREM FOR COMPACT SELF-ADJOINT OPERATORS SPECTRAL THEOREM FOR COMPACT SELF-ADJOINT OPERATORS G. RAMESH Contents Introduction 1 1. Bounded Operators 1 1.3. Examples 3 2. Compact Operators 5 2.1. Properties 6 3. The Spectral Theorem 9 3.3. Self-adjoint

More information

October 25, 2013 INNER PRODUCT SPACES

October 25, 2013 INNER PRODUCT SPACES October 25, 2013 INNER PRODUCT SPACES RODICA D. COSTIN Contents 1. Inner product 2 1.1. Inner product 2 1.2. Inner product spaces 4 2. Orthogonal bases 5 2.1. Existence of an orthogonal basis 7 2.2. Orthogonal

More information

COMPACT OPERATORS. 1. Definitions

COMPACT OPERATORS. 1. Definitions COMPACT OPERATORS. Definitions S:defi An operator M : X Y, X, Y Banach, is compact if M(B X (0, )) is relatively compact, i.e. it has compact closure. We denote { E:kk (.) K(X, Y ) = M L(X, Y ), M compact

More information

Lecture Notes for Inf-Mat 3350/4350, Tom Lyche

Lecture Notes for Inf-Mat 3350/4350, Tom Lyche Lecture Notes for Inf-Mat 3350/4350, 2007 Tom Lyche August 5, 2007 2 Contents Preface vii I A Review of Linear Algebra 1 1 Introduction 3 1.1 Notation............................... 3 2 Vectors 5 2.1 Vector

More information

Algebra I Fall 2007

Algebra I Fall 2007 MIT OpenCourseWare http://ocw.mit.edu 18.701 Algebra I Fall 007 For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms. 18.701 007 Geometry of the Special Unitary

More information

Elements of linear algebra

Elements of linear algebra Elements of linear algebra Elements of linear algebra A vector space S is a set (numbers, vectors, functions) which has addition and scalar multiplication defined, so that the linear combination c 1 v

More information

1 Linear Algebra Problems

1 Linear Algebra Problems Linear Algebra Problems. Let A be the conjugate transpose of the complex matrix A; i.e., A = A t : A is said to be Hermitian if A = A; real symmetric if A is real and A t = A; skew-hermitian if A = A and

More information

Eigenvalues and Eigenvectors

Eigenvalues and Eigenvectors Chapter 1 Eigenvalues and Eigenvectors Among problems in numerical linear algebra, the determination of the eigenvalues and eigenvectors of matrices is second in importance only to the solution of linear

More information

QUASI-UNIFORMLY POSITIVE OPERATORS IN KREIN SPACE. Denitizable operators in Krein spaces have spectral properties similar to those

QUASI-UNIFORMLY POSITIVE OPERATORS IN KREIN SPACE. Denitizable operators in Krein spaces have spectral properties similar to those QUASI-UNIFORMLY POSITIVE OPERATORS IN KREIN SPACE BRANKO CURGUS and BRANKO NAJMAN Denitizable operators in Krein spaces have spectral properties similar to those of selfadjoint operators in Hilbert spaces.

More information

18.06 Problem Set 8 - Solutions Due Wednesday, 14 November 2007 at 4 pm in

18.06 Problem Set 8 - Solutions Due Wednesday, 14 November 2007 at 4 pm in 806 Problem Set 8 - Solutions Due Wednesday, 4 November 2007 at 4 pm in 2-06 08 03 Problem : 205+5+5+5 Consider the matrix A 02 07 a Check that A is a positive Markov matrix, and find its steady state

More information

Krein-Rutman Theorem and the Principal Eigenvalue

Krein-Rutman Theorem and the Principal Eigenvalue Chapter 1 Krein-Rutman Theorem and the Principal Eigenvalue The Krein-Rutman theorem plays a very important role in nonlinear partial differential equations, as it provides the abstract basis for the proof

More information

Tangent spaces, normals and extrema

Tangent spaces, normals and extrema Chapter 3 Tangent spaces, normals and extrema If S is a surface in 3-space, with a point a S where S looks smooth, i.e., without any fold or cusp or self-crossing, we can intuitively define the tangent

More information

University of Colorado at Denver Mathematics Department Applied Linear Algebra Preliminary Exam With Solutions 16 January 2009, 10:00 am 2:00 pm

University of Colorado at Denver Mathematics Department Applied Linear Algebra Preliminary Exam With Solutions 16 January 2009, 10:00 am 2:00 pm University of Colorado at Denver Mathematics Department Applied Linear Algebra Preliminary Exam With Solutions 16 January 2009, 10:00 am 2:00 pm Name: The proctor will let you read the following conditions

More information

LIE ALGEBRAS: LECTURE 7 11 May 2010

LIE ALGEBRAS: LECTURE 7 11 May 2010 LIE ALGEBRAS: LECTURE 7 11 May 2010 CRYSTAL HOYT 1. sl n (F) is simple Let F be an algebraically closed field with characteristic zero. Let V be a finite dimensional vector space. Recall, that f End(V

More information

Discreteness of Transmission Eigenvalues via Upper Triangular Compact Operators

Discreteness of Transmission Eigenvalues via Upper Triangular Compact Operators Discreteness of Transmission Eigenvalues via Upper Triangular Compact Operators John Sylvester Department of Mathematics University of Washington Seattle, Washington 98195 U.S.A. June 3, 2011 This research

More information

A Perron-type theorem on the principal eigenvalue of nonsymmetric elliptic operators

A Perron-type theorem on the principal eigenvalue of nonsymmetric elliptic operators A Perron-type theorem on the principal eigenvalue of nonsymmetric elliptic operators Lei Ni And I cherish more than anything else the Analogies, my most trustworthy masters. They know all the secrets of

More information

SPECTRAL THEORY EVAN JENKINS

SPECTRAL THEORY EVAN JENKINS SPECTRAL THEORY EVAN JENKINS Abstract. These are notes from two lectures given in MATH 27200, Basic Functional Analysis, at the University of Chicago in March 2010. The proof of the spectral theorem for

More information

BERNARD RUSSO. University of California, Irvine

BERNARD RUSSO. University of California, Irvine A HOLOMORPHIC CHARACTERIZATION OF OPERATOR ALGEBRAS (JOINT WORK WITH MATT NEAL) BERNARD RUSSO University of California, Irvine UNIVERSITY OF HOUSTON GREAT PLAINS OPERATOR THEORY SYMPOSIUM MAY 30-JUNE 2,

More information

Mathematical Optimisation, Chpt 2: Linear Equations and inequalities

Mathematical Optimisation, Chpt 2: Linear Equations and inequalities Mathematical Optimisation, Chpt 2: Linear Equations and inequalities Peter J.C. Dickinson p.j.c.dickinson@utwente.nl http://dickinson.website version: 12/02/18 Monday 5th February 2018 Peter J.C. Dickinson

More information

5 Compact linear operators

5 Compact linear operators 5 Compact linear operators One of the most important results of Linear Algebra is that for every selfadjoint linear map A on a finite-dimensional space, there exists a basis consisting of eigenvectors.

More information

Linear Algebra. Paul Yiu. Department of Mathematics Florida Atlantic University. Fall A: Inner products

Linear Algebra. Paul Yiu. Department of Mathematics Florida Atlantic University. Fall A: Inner products Linear Algebra Paul Yiu Department of Mathematics Florida Atlantic University Fall 2011 6A: Inner products In this chapter, the field F = R or C. We regard F equipped with a conjugation χ : F F. If F =

More information

SYMPLECTIC GEOMETRY: LECTURE 5

SYMPLECTIC GEOMETRY: LECTURE 5 SYMPLECTIC GEOMETRY: LECTURE 5 LIAT KESSLER Let (M, ω) be a connected compact symplectic manifold, T a torus, T M M a Hamiltonian action of T on M, and Φ: M t the assoaciated moment map. Theorem 0.1 (The

More information