Radboud University Nijmegen


Radboud University Nijmegen
Faculty of Science

The Matrix Moment Problem

Author: Luud Slagter
Student number:
Study: Master Mathematics
Supervisor: Prof. dr. H.T. Koelink
Second reader: Dr. P.M. Román

March 20, 2017


Contents

1 The Hamburger moment problem
   1.1 Introduction
      Moment sequences and positive definiteness
      Determinacy of the moment problem
      Example due to Stieltjes
   1.2 Orthonormal polynomials
      Construction and properties of orthonormal polynomials
      The kernel polynomial
   1.3 Proof of Theorem 1.18
2 The matrix moment problem
   2.1 Matrix measures
   Formulation of the matrix moment problem
   Matrix inner products and orthonormal matrix polynomials
      Matrix polynomials
      Properties of orthonormal matrix polynomials
      Matrix polynomials of the first and second kind
      The kernel polynomial
   Some difficulties that arise while generalizing Theorem 1.18
3 The operator approach to the moment problem
   The Jacobi operator
   The indices of deficiency
   Relation between the deficiency indices and determinacy of the matrix moment problem
   The deficiency indices in terms of orthonormal matrix polynomials
   Examples of the matrix moment problem
      Examples with diagonalizable weights
      Example arising from a doubly infinite Jacobi operator
4 Criteria for indeterminacy
   Conditions for completely indeterminate moment problems
   Generalizations from Akhiezer
A Complex measures
   A.1 Complex and positive measures
   A.2 Radon-Nikodym Theorem

B Spectral theory
   B.1 Bounded operators
      B.1.1 Hilbert spaces and operators
      B.1.2 The spectral theorem for bounded self-adjoint operators
   B.2 Unbounded operators
      B.2.1 The spectral theorem for unbounded self-adjoint operators
Bibliography

Introduction

The term "moment problem" originates from 1894, when Stieltjes introduced it as the following problem: it is required to find the distribution of positive mass on the interval $[0,\infty)$, given the moments of order $n$ ($n = 0, 1, 2, \dots$) of the distribution. Thus, in Stieltjes' problem, a certain sequence of numbers $(s_n)_{n\geq 0}$ is given and a non-decreasing function $\sigma(x)$ ($x \geq 0$) is sought such that
$$\int_0^\infty x^n \, d\sigma(x) = s_n \quad \text{for all } n \geq 0.$$
Note that we will use the terminology of measures instead of distributions in this thesis. In the statement of Stieltjes' problem, the carrier of the mass is the semi-infinite interval $[0,\infty)$, but one can instead consider the moment problem on any given interval, or on any other point set contained in $[0,\infty)$, by requiring that a certain part of the semi-axis $[0,\infty)$ be free of mass. Here, however, we study the extended moment problem on $(-\infty,\infty)$, which is known as the Hamburger moment problem.

The moment problem has been researched extensively in works like [1], and both orthogonal polynomials and spectral theory have turned out to be major tools in discussing this problem. Since the papers [20] and [21] of Krein, there is a general theory of matrix-valued orthogonal polynomials. Using this theory, among others, the matrix-valued analogue of the moment problem, called the matrix moment problem, has been studied.

The original purpose of this thesis was to generalize the main result of [6], which states that the scalar moment problem is determinate (i.e. has a unique solution) if and only if the smallest eigenvalues of the Hankel matrices $H_N$ tend to zero as $N \to \infty$. In Chapter 1 the classical moment problem is introduced, and the determinacy of the measures involved in the moment problem is discussed. Moreover, the corresponding orthonormal polynomials are defined and some of their properties are studied. These concepts are necessary in order to understand the proof given in [6], which is the content of Section 1.3.
Since we wanted to generalize [6], Chapter 2 is dedicated to the matrix moment problem. A brief overview of matrix measures is given before discussing the matrix moment problem itself and the associated orthonormal matrix polynomials, following a structure similar to that of the first chapter. Some attempts have been made to give a generalization, but to no avail. A few of these attempts are collected in Section 2.4, along with the problems one encounters while trying to find such a generalization.

Afterwards, the matrix moment problem is treated from a functional-analytic perspective in Chapter 3. For this purpose a summary of spectral theory is given. In this chapter a connection is established between the orthonormal polynomials and the so-called Jacobi operator, which forms the starting point of the operator approach to the moment problem. Chapter 3 concludes with some examples that illustrate, for instance, the difficulty of finding explicit expressions for the orthonormal polynomials in non-trivial cases, or the difficulty of even generating an example from a given operator with explicit expressions for the measure.

In Chapter 4 some conditions for the indeterminacy of the matrix moment problem are collected, either results taken from the literature or generalizations from [1].

Chapter 1: The Hamburger moment problem

This chapter is mostly based on [5], [6] and [19]. See Appendix A for a brief overview of complex measures.

1.1 Introduction

The Hamburger moment problem consists of the following two questions:

1. Given a sequence $(s_n)_{n\geq 0}$ in $\mathbb{R}$, does there exist a positive Borel measure $\mu$ on $\mathbb{R}$ such that $s_n = \int_{\mathbb{R}} x^n \, d\mu(x)$ for every $n \geq 0$?
2. If such a measure exists, is it unique?

Assumption 1.1. In this chapter we assume that $\operatorname{supp}(\mu)$ is infinite, and that $\mu$ is not a finite discrete measure (see also Section 1.2).

Assumption 1.2. Without loss of generality we will always assume that $s_0 = 1$. Note that this can be achieved by normalizing the involved measures to be probability measures.

If $\mu$ is a positive measure on $\mathbb{R}$ such that $s_n = \int_{\mathbb{R}} x^n \, d\mu(x)$ for every $n \geq 0$, we say that $\mu$ is a solution to the moment problem of $(s_n)_{n\geq 0}$. If $\mu$ is unique, we speak of a determinate moment problem; otherwise the moment problem is called indeterminate. Observe that in the indeterminate case there exists a convex set of probability measures on $\mathbb{R}$ solving the moment problem. The first question formulated above will be answered in Theorem 1.14, while Theorem 1.18 solves the second one.

Moment sequences and positive definiteness

Now we will describe the moment problem in some more detail by introducing some notation and definitions.

Definition 1.3. Let $\mu$ be a positive Borel measure on $\mathbb{R}$ with infinite support and finite moments of every order,
$$s_n := s_n(\mu) = \int_{\mathbb{R}} x^n \, d\mu(x). \tag{1.1}$$
We call $(s_n)_{n\geq 0}$ a moment sequence.

Definition 1.4. For a real sequence $(s_n)_{n\geq 0}$ we form, for $N \geq 0$, the so-called Hankel matrices, which are matrices of size $(N+1) \times (N+1)$:
$$H_N = (s_{i+j})_{0 \leq i,j \leq N}. \tag{1.2}$$
Written out we get
$$H_N = \begin{pmatrix} s_0 & s_1 & s_2 & \cdots & s_N \\ s_1 & s_2 & s_3 & \cdots & s_{N+1} \\ s_2 & s_3 & s_4 & \cdots & s_{N+2} \\ \vdots & & & & \vdots \\ s_N & s_{N+1} & s_{N+2} & \cdots & s_{2N} \end{pmatrix}. \tag{1.3}$$
Moreover we define the infinite Hankel matrix to be
$$H = (s_{i+j})_{i,j \geq 0}. \tag{1.4}$$

Notation 1.5. We denote by $v \in \mathbb{C}^k$ a column vector with entries $v_i$, where $0 \leq i \leq k-1$. Moreover $v^*$ is the row vector obtained by conjugating and transposing $v$. With this notation, the inner product on $\mathbb{C}^k$ is defined as
$$\langle v, w \rangle = \sum_{i=0}^{k-1} v_i \overline{w_i} = w^* v \tag{1.5}$$
for $v, w \in \mathbb{C}^k$. Observe that the inner product defined above is indeed linear in the first slot and antilinear in the second, which agrees with Definition B.1. The associated norm is defined by $\|v\| = \sqrt{\langle v, v \rangle} = \sqrt{v^* v}$. Similar notation is used for matrices: given $A \in \mathbb{C}^{k \times k}$ we denote by $A^*$ the matrix obtained by conjugating and transposing $A$. Moreover we denote the zero matrix by $\theta$ and the identity matrix by $1_k$.

For future purposes, we now define a norm on $\mathbb{C}^{k \times k}$, given the vector norm on $\mathbb{C}^k$.

Definition 1.6. Given the vector norm from Notation 1.5, we define
$$\|A\| = \max_{v \neq 0} \frac{\|Av\|}{\|v\|} = \max_{\|v\| = 1} \|Av\|$$
for $A \in \mathbb{C}^{k \times k}$. This matrix norm is said to be induced by the vector norm, and is alternatively called the operator norm. It satisfies $\|Av\| \leq \|A\| \, \|v\|$ for all $A \in \mathbb{C}^{k \times k}$ and $v \in \mathbb{C}^k$, and $\|1_k\| = 1$ (see [16], Section 5.6, for an extensive treatment of matrix norms, and in particular for the proofs of these statements).

Before being able to give a characterization of moment sequences, we need the concept of positive hermitian matrices.
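As a small numerical sketch (not part of the thesis), the Hankel matrix $H_N$ of Definition 1.4 can be assembled directly from a moment sequence. Here the moments of the standard Gaussian measure, $s_{2n} = (2n-1)!!$ and $s_{2n+1} = 0$, serve as an assumed example; the helper names are illustrative only.

```python
import numpy as np

def double_factorial(n):
    # n!! with the convention (-1)!! = 1
    result = 1
    while n > 1:
        result *= n
        n -= 2
    return result

def gaussian_moments(max_order):
    # Moments of the standard normal: s_{2n} = (2n-1)!!, odd moments vanish.
    return [double_factorial(n - 1) if n % 2 == 0 else 0 for n in range(max_order + 1)]

def hankel(moments, N):
    # H_N = (s_{i+j}) for 0 <= i, j <= N, a matrix of size (N+1) x (N+1); cf. (1.2).
    return np.array([[moments[i + j] for j in range(N + 1)] for i in range(N + 1)])

s = gaussian_moments(8)   # moments s_0, ..., s_8, enough for H_4
H4 = hankel(s, 4)
print(H4[0])              # prints [1 0 1 0 3], i.e. s_0, ..., s_4
```

Note that $H_N$ needs moments up to order $2N$, which is why $H_4$ requires $s_0, \dots, s_8$.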

Definition 1.7. A matrix $A \in \mathbb{C}^{k \times k}$, say $A = (a_{ij})$, is called positive hermitian if it is hermitian (i.e. $A = A^*$) and positive definite, i.e. if
$$\langle Av, v \rangle = v^* A v = \sum_{i,j=0}^{k-1} a_{ij} \overline{v_i} v_j > 0 \quad \text{for all } 0 \neq v \in \mathbb{C}^k. \tag{1.6}$$

In fact (1.6) implies that $A$ is hermitian (see the proof below), so the former condition is not actually necessary in the above definition.

Proof. Assume that $A$ satisfies (1.6). Write $A = B + iC$, where
$$B = \frac{A + A^*}{2} \quad \text{and} \quad C = \frac{A - A^*}{2i}.$$
Then $B$ and $C$ are clearly hermitian. Now for all $0 \neq v \in \mathbb{C}^k$ we have
$$0 < \langle Av, v \rangle = \langle Bv, v \rangle + i \langle Cv, v \rangle, \tag{1.7}$$
where both inner products on the right-hand side are real. Therefore $\langle Cv, v \rangle$ must equal $0$ in order for the right-hand side to be real. Since $v \in \mathbb{C}^k \setminus \{0\}$ was chosen arbitrarily, $C = 0$. We conclude that $A = B$ is hermitian.

On the set of hermitian matrices we can define a strict partial ordering (i.e. a relation that is irreflexive, transitive and antisymmetric) as follows. For two hermitian $K \times K$ matrices $A$ and $B$ we write $B \prec A$ if $A - B$ is a positive hermitian matrix, i.e. if
$$\langle Bv, v \rangle < \langle Av, v \rangle \quad \text{for all } 0 \neq v \in \mathbb{C}^K, \tag{1.8}$$
according to (1.6). Then the relation $\prec$ is a strict partial ordering on the set of hermitian matrices.

Remark 1.8. The positive hermitian matrices are exactly the matrices $A$ for which $\theta \prec A$.

In the following lemmas some properties of positive hermitian matrices are collected.

Lemma 1.9. The eigenvalues of a positive hermitian matrix are positive.

Proof. Let $A \in \mathbb{C}^{k \times k}$ be a positive hermitian matrix and suppose that $\lambda$ is an eigenvalue of $A$ with corresponding eigenvector $v \in \mathbb{C}^k$. Then by (1.6),
$$0 < \langle Av, v \rangle = \langle \lambda v, v \rangle = \lambda \langle v, v \rangle = \lambda \|v\|^2,$$
so that
$$\lambda = \frac{\langle Av, v \rangle}{\|v\|^2} > 0,$$
as it is the ratio of two positive numbers.

Lemma 1.10. Let $A \in \mathbb{C}^{k \times k}$ be a positive hermitian matrix. Then $A \prec \operatorname{tr}(A) 1_k$, where $1_k$ denotes the $k \times k$ identity matrix.

Proof. Since $A$ is hermitian, it is diagonalizable¹ by a unitary matrix; let $\{v_i : 1 \leq i \leq k\}$ be an orthonormal basis of $\mathbb{C}^k$ consisting of eigenvectors of $A$, with corresponding eigenvalues $\lambda_i$, all of which are positive by Lemma 1.9. Given $0 \neq v \in \mathbb{C}^k$, write $v = \sum_{i=1}^k c_i v_i$. Then
$$\langle Av, v \rangle = \sum_{i=1}^k \lambda_i |c_i|^2 < \Big( \sum_{j=1}^k \lambda_j \Big) \sum_{i=1}^k |c_i|^2 = \operatorname{tr}(A) \langle v, v \rangle = \langle \operatorname{tr}(A) 1_k v, v \rangle,$$
so that $A \prec \operatorname{tr}(A) 1_k$.

Lemma 1.11. Let $A \in \mathbb{C}^{k \times k}$ be a regular (i.e. invertible) positive hermitian matrix. Then $A^{-1}$ is also positive hermitian.

Proof. Given $0 \neq v \in \mathbb{C}^k$, define $w = A^{-1} v \neq 0$. Then
$$\langle A^{-1} v, v \rangle = v^* A^{-1} v = (Aw)^* A^{-1} (Aw) = w^* A^* A^{-1} A w = w^* A^* w = w^* A w > 0,$$
where we used $A^* = A$; hence $A^{-1}$ is positive hermitian.

Lemma 1.12. Let $A \in \mathbb{C}^{k \times k}$ be a positive (semi-definite) hermitian matrix. Then for all $v \in \mathbb{C}^k$:
$$Av = 0 \iff v^* A v = 0.$$

Proof. If $Av = 0$, then obviously $v^* A v = 0$. For the other implication, define $\langle v, w \rangle_A := \langle Av, w \rangle = w^* A v$. Then by the Cauchy–Schwarz inequality (see Appendix B.1.1) we have
$$|\langle v, w \rangle_A|^2 \leq \langle v, v \rangle_A \langle w, w \rangle_A.$$
If $0 = v^* A v = \langle v, v \rangle_A$, then $\langle v, w \rangle_A = 0$ for all $w \in \mathbb{C}^k$, hence $Av = 0$.

We will now give an answer to the first question formulated at the beginning of this chapter. For this reason we introduce the following definition.

Definition 1.13. A real sequence $(s_n)_{n\geq 0}$ is called positive definite if its Hankel matrix $H_N$ is positive hermitian for every $N$. Since $H_N$ is a real and symmetric matrix, $(s_n)_{n\geq 0}$ is positive definite if
$$\langle H_N v, v \rangle = \sum_{i,j=0}^N s_{i+j} \overline{v_i} v_j > 0 \quad \text{for all } N \geq 0 \text{ and } 0 \neq v \in \mathbb{C}^{N+1}. \tag{1.9}$$

¹ See [16].
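The lemmas above can be checked on a concrete matrix. The following sketch (an illustration under assumed data, not from the thesis) builds a random positive hermitian matrix as $A = M^* M + 1_k$, which satisfies $v^* A v = \|Mv\|^2 + \|v\|^2 > 0$ for $v \neq 0$, and verifies Lemmas 1.9, 1.10 and 1.11 numerically:

```python
import numpy as np

rng = np.random.default_rng(0)

# A random positive hermitian matrix: A = M*M + 1_k.
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = M.conj().T @ M + np.eye(4)

assert np.allclose(A, A.conj().T)          # hermitian, cf. Definition 1.7
eigs = np.linalg.eigvalsh(A)               # eigenvalues in ascending order
assert np.all(eigs > 0)                    # Lemma 1.9: eigenvalues are positive
assert np.all(np.linalg.eigvalsh(np.linalg.inv(A)) > 0)   # Lemma 1.11
# Lemma 1.10: tr(A)·1_k − A is again positive hermitian, i.e. A ≺ tr(A)·1_k.
assert np.all(np.linalg.eigvalsh(np.trace(A).real * np.eye(4) - A) > 0)
print("all checks passed")
```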

Theorem 1.14 (Hamburger). A sequence $(s_n)_{n\geq 0}$ is a moment sequence if and only if it is a positive definite sequence.

Proof. Suppose $(s_n)_{n\geq 0}$ is a moment sequence, i.e.
$$s_n = \int_{\mathbb{R}} x^n \, d\mu(x) \tag{1.10}$$
for some positive measure $\mu$ on $\mathbb{R}$. Then for any $N \geq 0$ and $v \in \mathbb{C}^{N+1} \setminus \{0\}$,
$$\sum_{i,j=0}^N s_{i+j} \overline{v_i} v_j = \sum_{i,j=0}^N \int_{\mathbb{R}} x^{i+j} \, d\mu(x) \, \overline{v_i} v_j = \int_{\mathbb{R}} \overline{\Big( \sum_{i=0}^N v_i x^i \Big)} \Big( \sum_{j=0}^N v_j x^j \Big) d\mu(x) = \int_{\mathbb{R}} \Big| \sum_{i=0}^N v_i x^i \Big|^2 d\mu(x) > 0, \tag{1.11}$$
where the last integral is strictly positive because $\operatorname{supp}(\mu)$ is infinite, so that no non-zero polynomial can vanish on all of it. Hence $(s_n)_{n\geq 0}$ is a positive definite sequence.

The other implication is more complicated; a full proof can be found in [1]. Here we only sketch the main idea. Given the positive definite sequence $(s_n)_{n\geq 0}$, the so-called truncated moment problem of order $2k-1$ is considered for every $k \geq 0$: the problem of finding a positive measure $\mu$ such that (1.10) holds for every $0 \leq n \leq 2k-1$. A sequence of positive measures $(\mu_k)_k$ is constructed, each of which solves the truncated moment problem, in other words
$$s_n = \int_{\mathbb{R}} x^n \, d\mu_k(x) \quad \text{for } 0 \leq n \leq 2k-1.$$
Then, according to Helly's theorem², there exists a subsequence $(\mu_{k_i})_i$ of $(\mu_k)_k$ that converges to a measure $\mu$ which in turn is a solution of the complete moment problem, i.e. $\mu$ solves (1.10) for all $n \geq 0$. It is thus shown that $(s_n)_{n\geq 0}$ is a moment sequence.

Determinacy of the moment problem

In order to answer the second question formulated at the beginning of this chapter, we consider the eigenvalues of the Hankel matrices. Given a positive measure $\mu$, the associated Hankel matrices $H_N$ are positive hermitian, hence all eigenvalues of $H_N$ are positive (see Lemma 1.9). Denote the smallest eigenvalue of $H_N$ by $\lambda_N$. It can be obtained via the classical Rayleigh quotient, whose definition is given below.

Definition 1.15. For a given hermitian matrix $A \in \mathbb{C}^{k \times k}$ and a non-zero vector $v \in \mathbb{C}^k$, the Rayleigh quotient $R(A, v)$ is defined as
$$R(A, v) = \frac{\langle Av, v \rangle}{\langle v, v \rangle} = \frac{v^* A v}{v^* v}. \tag{1.12}$$

² This theorem is formulated in [11], Chapter II, Theorem 2.2 as follows: Let $(\varphi_n)_n$ be a uniformly bounded sequence of non-decreasing functions defined on $(-\infty, \infty)$.
Then $(\varphi_n)_n$ has a subsequence which converges on $(-\infty, \infty)$ to a bounded, non-decreasing function.

Lemma 1.16. The Rayleigh quotient attains its minimum value $\lambda_{\min}$ (i.e. the smallest eigenvalue of $A$) when $v = v_{\min}$ is a corresponding eigenvector.

Proof. Since $A$ is hermitian, there is a unitary $U \in \mathbb{C}^{k \times k}$ such that $A = U \Lambda U^*$, where $\Lambda = \operatorname{diag}(\lambda_1, \lambda_2, \dots, \lambda_k)$ and $\lambda_{\min} = \lambda_1 \leq \lambda_2 \leq \dots \leq \lambda_k = \lambda_{\max}$ are the eigenvalues of $A$ (note that these are all real, hence such an ordering is possible). Then for all $v \in \mathbb{C}^k$,
$$v^* A v = v^* U \Lambda U^* v = (U^* v)^* \Lambda (U^* v) = \sum_{i=1}^k \lambda_i |(U^* v)_i|^2,$$
and thus, since $U$ is unitary,
$$\lambda_{\min} \, v^* v = \lambda_{\min} \sum_{i=1}^k |v_i|^2 = \lambda_{\min} \sum_{i=1}^k |(U^* v)_i|^2 \leq \sum_{i=1}^k \lambda_i |(U^* v)_i|^2 = v^* A v.$$
It follows that $\lambda_{\min} \leq \frac{v^* A v}{v^* v}$ for all $v \in \mathbb{C}^k \setminus \{0\}$. Equality holds when $v = v_{\min}$, since in that case $v^* A v = \lambda_{\min} v^* v$. We conclude that
$$\lambda_{\min} = \min_{0 \neq v \in \mathbb{C}^k} \frac{v^* A v}{v^* v}.$$
Likewise it can be shown that the Rayleigh quotient attains its maximum value $\lambda_{\max}$ for $v = v_{\max}$.

We thus obtain the following expression³ for $\lambda_N$:
$$\lambda_N = \min_{0 \neq v \in \mathbb{C}^{N+1}} \frac{v^* H_N v}{v^* v} = \min_{v \in \mathbb{C}^{N+1},\, \|v\| = 1} v^* H_N v = \min \Big\{ \sum_{i=0}^N \sum_{j=0}^N s_{i+j} \overline{v_i} v_j : \sum_{i=0}^N |v_i|^2 = 1, \ v_i \in \mathbb{C} \text{ for } 0 \leq i \leq N \Big\}. \tag{1.13}$$

From (1.13) it follows that $\lambda_N$ is a decreasing function of $N$. Indeed, suppose that $v \in \mathbb{C}^{N+1}$, with $\sum_{i=0}^N |v_i|^2 = 1$, minimizes $\sum_{i,j=0}^N s_{i+j} \overline{v_i} v_j$. Construct $w \in \mathbb{C}^{N+2}$ by putting $w_i = v_i$ for $0 \leq i \leq N$ and $w_{N+1} = 0$. Then clearly $\sum_{i=0}^{N+1} |w_i|^2 = \sum_{i=0}^N |v_i|^2 = 1$, so that $w$ lies in the set over which the minimum is taken while computing $\lambda_{N+1}$, and $w^* H_{N+1} w = v^* H_N v = \lambda_N$. Then $w$ could either minimize $\sum_{i,j=0}^{N+1} s_{i+j} \overline{w_i} w_j$, or there could be another vector in $\mathbb{C}^{N+2}$ for which this quantity is smaller. We conclude that $\lambda_{N+1} \leq \lambda_N$.

³ As $H_N$ is a real and symmetric matrix, it is actually sufficient to take the minimum over real vectors, i.e.
$$\lambda_N = \min \Big\{ \sum_{i=0}^N \sum_{j=0}^N s_{i+j} v_i v_j : \sum_{i=0}^N v_i^2 = 1, \ v_i \in \mathbb{R} \text{ for } 0 \leq i \leq N \Big\}.$$
In order to give the proof of Theorem 1.18, however, it is convenient to use the more general expression given in (1.13).
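The monotone decrease of $\lambda_N$ can be observed numerically. The sketch below (an illustration under the assumption of standard Gaussian moments $s_{2n} = (2n-1)!!$; not an example from the thesis) computes $\lambda_N$ for a range of $N$ and checks that the sequence is positive and decreasing:

```python
import numpy as np

def gaussian_moments(max_order):
    # s_{2n} = (2n-1)!! for the standard normal, via s_{2n} = (2n-1) s_{2n-2}.
    s = [1.0]
    for n in range(1, max_order + 1):
        s.append(0.0 if n % 2 else s[n - 2] * (n - 1))
    return s

def smallest_hankel_eigenvalue(s, N):
    H = np.array([[s[i + j] for j in range(N + 1)] for i in range(N + 1)])
    return np.linalg.eigvalsh(H)[0]   # eigvalsh returns eigenvalues in ascending order

s = gaussian_moments(12)
lams = [smallest_hankel_eigenvalue(s, N) for N in range(7)]
assert all(lams[k + 1] <= lams[k] for k in range(6))   # λ_N is decreasing in N
assert all(l > 0 for l in lams)                         # each H_N is positive definite
```

Note that computing $\lambda_N$ requires the moments up to order $2N$, which is why twelve moments suffice for $N \leq 6$.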

Remark 1.17. If $\lambda_N = 0$ for some $N \geq 0$, then also $\lambda_n = 0$ for all $n \geq N$, and in that case $\mu$ is a finite sum of point masses. This case implies determinacy, since any measure with compact support is automatically determinate. We, however, exclude this case. Whenever $\mu$ has infinite support, it holds that $\lambda_N > 0$ for all $N \geq 0$, which agrees with the remark made earlier that all eigenvalues of $H_N$ are positive.

The following theorem is the main result of [6], and gives a condition for the determinacy of the moment problem.

Theorem 1.18. The moment problem associated with the moments given by (1.1) is determinate if and only if $\lim_{N \to \infty} \lambda_N = 0$.

We will postpone the proof of this theorem until we have developed some more machinery (see Section 1.3). In the proof, the reciprocal of $\lambda_N$ will turn out to play a role. Note that it can be written as
$$\frac{1}{\lambda_N} = \max_{0 \neq v \in \mathbb{C}^{N+1}} \frac{v^* v}{v^* H_N v} = \max_{v \in \mathbb{C}^{N+1},\, \langle H_N v, v \rangle = 1} v^* v = \max \Big\{ \sum_{i=0}^N |v_i|^2 : \sum_{i=0}^N \sum_{j=0}^N s_{i+j} \overline{v_i} v_j = 1, \ v_i \in \mathbb{C} \text{ for } 0 \leq i \leq N \Big\}. \tag{1.14}$$

Example due to Stieltjes

Here we work out the moment problem for an explicit measure, originally studied by Stieltjes. Consider the measure $\mu$ on $\mathbb{R}$ given by
$$d\mu(x) = C_{\alpha,\gamma} \, e^{-\gamma |x|^\alpha} dx, \tag{1.15}$$
where it is assumed that $\alpha, \gamma > 0$ (note that Stieltjes considered the case $\gamma = 1$), and $C_{\alpha,\gamma}$ is a constant depending on these two parameters. The corresponding moments are thus given by
$$s_n = \int_{\mathbb{R}} x^n \, d\mu(x) = C_{\alpha,\gamma} \int_{\mathbb{R}} x^n e^{-\gamma |x|^\alpha} dx. \tag{1.16}$$
Both the moments and the value of $C_{\alpha,\gamma}$ can be determined by exploiting the equality
$$\int_0^\infty x^{c-1} e^{-bx} dx = b^{-c} \Gamma(c), \tag{1.17}$$
which holds true⁴ for $\operatorname{Re}(b) > 0$ and $c > 0$. Here $\Gamma$ is the well-known Gamma function, defined for all $z \in \mathbb{C}$ with $\operatorname{Re}(z) > 0$ by
$$\Gamma(z) = \int_0^\infty t^{z-1} e^{-t} dt.$$
Then
$$1 = s_0 = C_{\alpha,\gamma} \int_{\mathbb{R}} e^{-\gamma |x|^\alpha} dx = 2 C_{\alpha,\gamma} \int_0^\infty e^{-\gamma x^\alpha} dx,$$

⁴ Indeed, making use of the substitution $bx = y$, we obtain
$$\int_0^\infty x^{c-1} e^{-bx} dx = \int_0^\infty \Big( \frac{y}{b} \Big)^{c-1} e^{-y} \frac{dy}{b} = b^{-c} \int_0^\infty y^{c-1} e^{-y} dy = b^{-c} \Gamma(c).$$

where, substituting $y = x^\alpha$ (so that $dy = \alpha x^{\alpha-1} dx$) and applying (1.17) with $c = \frac{1}{\alpha}$ and $b = \gamma$,
$$\int_0^\infty e^{-\gamma x^\alpha} dx = \frac{1}{\alpha} \int_0^\infty y^{\frac{1}{\alpha} - 1} e^{-\gamma y} dy = \frac{1}{\alpha} \gamma^{-\frac{1}{\alpha}} \Gamma\Big( \frac{1}{\alpha} \Big).$$
We conclude that
$$C_{\alpha,\gamma} = \frac{\alpha}{2} \, \gamma^{\frac{1}{\alpha}} \, \Gamma\Big( \frac{1}{\alpha} \Big)^{-1}.$$
In a similar way the moments $s_n$ can be calculated. First note that all odd moments vanish, since the integral
$$s_{2n+1} = C_{\alpha,\gamma} \int_{\mathbb{R}} x^{2n+1} e^{-\gamma |x|^\alpha} dx$$
has an odd integrand for every $n \in \mathbb{N}$. The even moments can be found by applying (1.17) with $c = \frac{2n+1}{\alpha}$ and $b = \gamma$:
$$s_{2n} = C_{\alpha,\gamma} \int_{\mathbb{R}} x^{2n} e^{-\gamma |x|^\alpha} dx = 2 C_{\alpha,\gamma} \int_0^\infty x^{2n} e^{-\gamma x^\alpha} dx = \frac{2}{\alpha} C_{\alpha,\gamma} \int_0^\infty y^{\frac{2n+1}{\alpha} - 1} e^{-\gamma y} dy = \gamma^{-\frac{2n}{\alpha}} \, \frac{\Gamma\big( \frac{2n+1}{\alpha} \big)}{\Gamma\big( \frac{1}{\alpha} \big)}.$$
We now claim that the moment problem under consideration in this example is indeterminate for $0 < \alpha < 1$ and determinate for $\alpha \geq 1$.

First we prove the indeterminacy whenever $0 < \alpha < 1$. Rewriting (1.17) with $x = y^\alpha$ and $c = \frac{2n+1}{\alpha}$ we obtain
$$b^{-\frac{2n+1}{\alpha}} \Gamma\Big( \frac{2n+1}{\alpha} \Big) = \int_0^\infty x^{\frac{2n+1}{\alpha} - 1} e^{-bx} dx = \alpha \int_0^\infty y^{2n} e^{-b y^\alpha} dy. \tag{1.18}$$
As
$$b^{-\frac{2n+1}{\alpha}} = |b|^{-\frac{2n+1}{\alpha}} e^{-i \arg(b) \frac{2n+1}{\alpha}} \quad \text{and} \quad e^{-b y^\alpha} = e^{-(\operatorname{Re}(b) + i \operatorname{Im}(b)) y^\alpha},$$
it follows by taking real parts on both sides of (1.18) that
$$\alpha \int_0^\infty y^{2n} e^{-\operatorname{Re}(b) y^\alpha} \cos\big( \operatorname{Im}(b) \, y^\alpha \big) dy = |b|^{-\frac{2n+1}{\alpha}} \cos\Big( \arg(b) \, \frac{2n+1}{\alpha} \Big) \Gamma\Big( \frac{2n+1}{\alpha} \Big).$$
Thus the right-hand side is equal to $0$ for all $n \in \mathbb{N}$ whenever $\arg(b) = \frac{1}{2} \alpha \pi$. Since $\operatorname{Re}(b) > 0$ is required, such a choice of $b$ is possible for $\alpha < 1$, and because we assumed $\alpha > 0$, it is in fact possible precisely for $0 < \alpha < 1$. But every such value of $b$ gives rise to a measure $\mu_{\alpha,b}$ different from $\mu$ that has the same moments, namely
$$d\mu_{\alpha,b}(x) = C_{\alpha,\gamma} \Big( e^{-\gamma |x|^\alpha} + e^{-\operatorname{Re}(b) |x|^\alpha} \cos\big( \operatorname{Im}(b) \, |x|^\alpha \big) \Big) dx,$$
where $b \in \mathbb{C}$ is chosen such that $\arg(b) = \frac{1}{2} \alpha \pi$ (choosing $|b|$ large enough that $\operatorname{Re}(b) \geq \gamma$ ensures that the density is non-negative). We thus conclude that the moment problem is indeterminate whenever $0 < \alpha < 1$.
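The closed form $s_{2n} = \gamma^{-2n/\alpha}\,\Gamma\big(\tfrac{2n+1}{\alpha}\big)/\Gamma\big(\tfrac{1}{\alpha}\big)$ can be sanity-checked against direct quadrature of (1.16). This is a numerical sketch with assumed parameter values, not part of the thesis:

```python
import math
import numpy as np

def s2n_closed(n, alpha, gamma):
    # Even moments from (1.17): s_2n = γ^{-2n/α} Γ((2n+1)/α) / Γ(1/α).
    return gamma ** (-2 * n / alpha) * math.gamma((2 * n + 1) / alpha) / math.gamma(1 / alpha)

def s2n_numeric(n, alpha, gamma):
    # Direct trapezoidal quadrature of (1.16), with C_{α,γ} = (α/2) γ^{1/α} / Γ(1/α);
    # the integrand is negligible beyond x = 60 for these parameters.
    C = 0.5 * alpha * gamma ** (1 / alpha) / math.gamma(1 / alpha)
    x = np.linspace(0.0, 60.0, 400_000)
    y = x ** (2 * n) * np.exp(-gamma * x ** alpha)
    return 2 * C * np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

# α = 2, γ = 1/2 is the standard Gaussian, for which s_2 = 1.
assert abs(s2n_closed(1, 2.0, 0.5) - 1.0) < 1e-12
for n in range(3):
    assert abs(s2n_closed(n, 1.5, 1.0) - s2n_numeric(n, 1.5, 1.0)) < 1e-5
```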

Now we consider the case $\alpha \geq 1$. It is known that moment problems whose even moments satisfy
$$\sum_{n=1}^\infty s_{2n}^{-\frac{1}{2n}} = \infty \tag{1.19}$$
are determinate (this result is due to Carleman; see [1], Chapter 2, Problem 11). Here this is the case, since
$$s_{2n}^{-\frac{1}{2n}} = \gamma^{\frac{1}{\alpha}} \Big( \frac{\Gamma\big( \frac{2n+1}{\alpha} \big)}{\Gamma\big( \frac{1}{\alpha} \big)} \Big)^{-\frac{1}{2n}}. \tag{1.20}$$
This follows from the following observations. According to [28], Section 12.33, the logarithm of the Gamma function can be approximated as
$$\log \Gamma(z) \sim \Big( z - \frac{1}{2} \Big) \log z - z + \frac{1}{2} \log(2\pi)$$
for $|z| \to \infty$ and $|\arg(z)| < \pi$. Under these conditions we then also have
$$\Gamma(z) \sim \sqrt{2\pi} \, z^{z - \frac{1}{2}} e^{-z},$$
which is known as Stirling's formula. Applying this with $z = \frac{2n+1}{\alpha}$, we find for large enough $n$ that
$$\Gamma\Big( \frac{2n+1}{\alpha} \Big)^{\frac{1}{2n}} \sim (2\pi)^{\frac{1}{4n}} \Big( \frac{2n+1}{\alpha} \Big)^{\frac{2n+1}{2n\alpha} - \frac{1}{4n}} e^{-\frac{2n+1}{2n\alpha}} \sim \Big( \frac{2n}{\alpha e} \Big)^{\frac{1}{\alpha}},$$
so that by (1.20), for some constant $c > 0$ and large enough $n$,
$$s_{2n}^{-\frac{1}{2n}} \geq \frac{c}{n^{1/\alpha}} \geq \frac{c}{n},$$
where the last inequality uses $\alpha \geq 1$. Since $\sum_n \frac{1}{n}$ diverges, (1.19) follows. Thus the determinacy of the moment problem with $\alpha \geq 1$ has been proved.

1.2 Orthonormal polynomials

As said before, we need some more machinery in order to give a proof of Theorem 1.18. In this section we therefore introduce orthonormal polynomials, which will play a central role in the treatment of the moment problem.

Construction and properties of orthonormal polynomials

Consider $L^2(\mu)$, the space of square-integrable functions on $\mathbb{R}$ with respect to $\mu$, i.e. $f \in L^2(\mu)$ if and only if
$$\int_{\mathbb{R}} |f(x)|^2 \, d\mu(x) < \infty.$$
Note that $L^2(\mu)$ is a Hilbert space (see Definition B.2) after identifying two functions $f$ and $g$ for which $\int_{\mathbb{R}} |f(x) - g(x)|^2 d\mu(x) = 0$,⁵ with respect to the inner product
$$\langle f, g \rangle_{L^2(\mu)} := \langle f, g \rangle = \int_{\mathbb{R}} f(x) \overline{g(x)} \, d\mu(x).$$
Assume that all moments of $\mu$ exist, so that all polynomials are integrable. In applying the Gram–Schmidt orthogonalization process to the sequence $\{1, x, x^2, x^3, \dots\}$ we may end up in one of the following two situations:

1. The polynomials are linearly dependent in $L^2(\mu)$. Then there is a non-zero polynomial $p$ such that $\int_{\mathbb{R}} |p(x)|^2 d\mu(x) = 0$. This implies that $\mu$ is a finite sum of Dirac measures at the zeros of $p$. We will exclude this case.
2. The polynomials are linearly independent. We then end up with a set of orthonormal polynomials as in the definition below. Note that these polynomials form a basis of the vector space $\mathbb{C}[x]$.

Observe that the polynomials $p_n$ are real-valued for $x \in \mathbb{R}$, so that their coefficients are real. Moreover, it follows from the Gram–Schmidt process that the leading coefficients are positive.

Definition 1.19. A sequence of polynomials $(p_n)_{n\geq 0}$ with $\deg(p_n) = n$ for every $n \in \mathbb{N}$ is a set of orthonormal polynomials with respect to $\mu$ if
$$\langle p_n, p_m \rangle = \int_{\mathbb{R}} p_n(x) \overline{p_m(x)} \, d\mu(x) = \int_{\mathbb{R}} p_n(x) p_m(x) \, d\mu(x) = \delta_{n,m}. \tag{2.27}$$
From (2.27) it follows that
$$\|p_n\|_{L^2(\mu)}^2 = \langle p_n, p_n \rangle = \int_{\mathbb{R}} |p_n(x)|^2 \, d\mu(x) = 1.$$

Remark 1.20. The orthonormal polynomials $(p_n)_{n\geq 0}$ for $\mu$ are uniquely determined if we require the polynomials to satisfy (2.27) and each $p_n$ to be a polynomial of degree $n$ with positive leading coefficient.

⁵ To be more precise, we let $\mathcal{L}^2(\mu)$ be the space of square-integrable functions, i.e.
$$\mathcal{L}^2(\mu) = \Big\{ f : \int_{\mathbb{R}} |f(x)|^2 \, d\mu(x) < \infty \Big\},$$
and then define $L^2(\mu) = \mathcal{L}^2(\mu) / \sim$, where $f \sim g$ if and only if $\int_{\mathbb{R}} |f(x) - g(x)|^2 d\mu(x) = 0$.
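Because $\langle x^i, x^j \rangle_{L^2(\mu)} = s_{i+j}$, the Gram–Schmidt process can be run purely on the moment sequence, without ever evaluating an integral. The following sketch (an illustration with assumed Gaussian moments, not from the thesis) does exactly this; for the standard normal the output should match the normalized probabilists' Hermite polynomials, e.g. $p_2(x) = (x^2-1)/\sqrt{2}$ and $p_3(x) = (x^3-3x)/\sqrt{6}$:

```python
import numpy as np

def gram_schmidt_polys(s, n_max):
    # Orthonormalize 1, x, x^2, ... in L^2(μ) using only the moments:
    # <x^i, x^j> = s_{i+j}.  Polynomials are coefficient lists in increasing degree.
    def inner(p, q):
        return sum(p[i] * q[j] * s[i + j] for i in range(len(p)) for j in range(len(q)))
    basis = []
    for n in range(n_max + 1):
        p = [0.0] * n + [1.0]                       # the monomial x^n
        for q in basis:                             # subtract projections onto earlier p_k
            c = inner(p, q)
            p = [pi - c * (q[i] if i < len(q) else 0.0) for i, pi in enumerate(p)]
        basis.append([pi / np.sqrt(inner(p, p)) for pi in p])
    return basis

# Gaussian moments s_{2n} = (2n-1)!!, odd moments zero.
s = [1.0]
for n in range(1, 13):
    s.append(0.0 if n % 2 else s[n - 2] * (n - 1))

polys = gram_schmidt_polys(s, 3)
assert np.allclose(polys[2], [-1 / np.sqrt(2), 0.0, 1 / np.sqrt(2)])   # (x²−1)/√2
assert np.allclose(polys[3], [0.0, -3 / np.sqrt(6), 0.0, 1 / np.sqrt(6)])  # (x³−3x)/√6
```

The leading coefficients produced this way are automatically positive, matching the normalization of Remark 1.20.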

The following theorem describes a fundamental property of the orthonormal polynomials.

Theorem 1.21 (Three-term recurrence relation). Let $(p_n)_{n\geq 0}$ be a set of orthonormal polynomials in $L^2(\mu)$. Then there exist sequences $(a_n)_{n\geq 1}$, $(b_n)_{n\geq 0}$ with $a_n > 0$ and $b_n \in \mathbb{R}$ for every $n$, such that
$$x p_n(x) = a_{n+1} p_{n+1}(x) + b_n p_n(x) + a_n p_{n-1}(x) \quad \text{for } n \geq 1, \tag{1.25}$$
$$x p_0(x) = a_1 p_1(x) + b_0 p_0(x). \tag{1.26}$$
Moreover, if $\mu$ has compact support, then the coefficients $a_n$ and $b_n$ are bounded.

Proof. Since the polynomial $x p_n(x)$ has degree $n+1$, it can be written as
$$x p_n(x) = \sum_{i=0}^{n+1} c_i p_i(x) \tag{1.27}$$
for certain constants $c_i$. By the orthonormality relation (2.27),
$$\int_{\mathbb{R}} p_i(x) \, x p_n(x) \, d\mu(x) = \int_{\mathbb{R}} p_i(x) \sum_{j=0}^{n+1} c_j p_j(x) \, d\mu(x) = \sum_{j=0}^{n+1} c_j \int_{\mathbb{R}} p_i(x) p_j(x) \, d\mu(x) = \sum_{j=0}^{n+1} c_j \delta_{i,j} = c_i, \tag{1.28}$$
hence
$$c_i = \int_{\mathbb{R}} x p_i(x) p_n(x) \, d\mu(x). \tag{1.29}$$
The polynomial $x p_i(x)$ has degree $i+1$, so the above equation implies that $c_i = 0$ for $i + 1 < n$, i.e. for $i < n - 1$. Thus (1.27) can be rewritten as
$$x p_n(x) = c_{n+1} p_{n+1}(x) + c_n p_n(x) + c_{n-1} p_{n-1}(x). \tag{1.30}$$
Now define
$$a_n = \int_{\mathbb{R}} p_{n-1}(x) \, x p_n(x) \, d\mu(x), \qquad b_n = \int_{\mathbb{R}} x \, p_n(x)^2 \, d\mu(x). \tag{1.31}$$
Then $a_n = c_{n-1}$, $a_{n+1} = \int_{\mathbb{R}} x p_{n+1}(x) p_n(x) \, d\mu(x) = c_{n+1}$ and $b_n = c_n$. Moreover it is clear that $b_n \in \mathbb{R}$. Denote the leading coefficient of $p_n(x)$ by $l_n$, which is positive for all $n$. Then comparing the terms of order $n+1$ in (1.25) yields $l_n = a_{n+1} l_{n+1}$, hence $a_{n+1} = \frac{l_n}{l_{n+1}}$ is positive.

Now suppose that $\operatorname{supp}(\mu)$ is compact. Then
$$|a_n| = \Big| \int_{\mathbb{R}} x p_{n-1}(x) p_n(x) \, d\mu(x) \Big| \leq \sup_{x \in \operatorname{supp}(\mu)} |x| \int_{\mathbb{R}} |p_{n-1}(x)| \, |p_n(x)| \, d\mu(x) \leq \sup_{x \in \operatorname{supp}(\mu)} |x| \, \|p_{n-1}\|_{L^2(\mu)} \|p_n\|_{L^2(\mu)} = \sup_{x \in \operatorname{supp}(\mu)} |x| < \infty, \tag{1.33}$$

where we made use of the fact that $\|p_n\|_{L^2(\mu)} = 1$ by orthonormality; in the second inequality we applied the Cauchy–Schwarz inequality (see Appendix B.1). Likewise,
$$|b_n| = \Big| \int_{\mathbb{R}} x \, p_n(x)^2 \, d\mu(x) \Big| \leq \sup_{x \in \operatorname{supp}(\mu)} |x| \int_{\mathbb{R}} p_n(x)^2 \, d\mu(x) = \sup_{x \in \operatorname{supp}(\mu)} |x| \, \|p_n\|_{L^2(\mu)}^2 = \sup_{x \in \operatorname{supp}(\mu)} |x| < \infty.$$
Thus, if $\mu$ is compactly supported, then the coefficients $a_n$ and $b_n$ are bounded.

Note that (1.25) and (1.26), together with the initial condition $p_0(x) = 1$, completely determine the polynomials $p_n$ for all $n \in \mathbb{N}$.

The orthonormal polynomials are also useful to characterize the determinacy of the associated moment problem, as the following remark illustrates.

Remark 1.22. The moment problem is indeterminate if and only if there exists a non-real number $z_0$ such that
$$\sum_{n=0}^\infty |p_n(z_0)|^2 < \infty. \tag{1.35}$$
In the indeterminate case the series (1.35) actually converges for all $z_0 \in \mathbb{C}$, uniformly on compact sets. In the determinate case the series in (1.35) diverges for all non-real $z_0$, and also for all real numbers except the at most countably many points where $\mu$ has positive mass. The proof of these statements will be omitted. In Chapter 3 we will look at the general (i.e. matrix-valued) case, from which the above statement can be derived.

The kernel polynomial

We will now introduce the kernel polynomial, which is defined in terms of the orthonormal polynomials discussed in the previous subsection. Moreover we will see the connection between the kernel polynomial and the Hankel matrices. We won't make explicit use of the results stated in this section; in Chapter 2, however, we will generalize the concept of the kernel polynomial to the matrix-valued case, where it turns out to be a handy tool in discussing the matrix moment problem.

Definition 1.23. The reproducing kernel for the polynomials of degree $\leq N$ is defined as
$$K_N(x, y) = \sum_{k=0}^N p_k(x) p_k(y) \tag{1.36}$$
and is called the kernel polynomial.

Remark 1.24. Note that
$$\int_{\mathbb{R}} p(x) K_N(x, y) \, d\mu(x) = p(y) \tag{1.37}$$

for any polynomial $p$ of degree $\leq N$. This is an immediate consequence of the fact that
$$\int_{\mathbb{R}} p_k(x) K_N(x, y) \, d\mu(x) = \int_{\mathbb{R}} p_k(x) \sum_{l=0}^N p_l(x) p_l(y) \, d\mu(x) = \sum_{l=0}^N \Big( \int_{\mathbb{R}} p_k(x) p_l(x) \, d\mu(x) \Big) p_l(y) = \sum_{l=0}^N \delta_{k,l} \, p_l(y) = p_k(y) \tag{1.38}$$
for $0 \leq k \leq N$.

Observe that the kernel polynomial can alternatively be written as
$$K_N(x, y) = \sum_{i=0}^N \sum_{j=0}^N a^{(N)}_{ij} x^i y^j, \tag{1.39}$$
where the numbers $a^{(N)}_{ij}$ are uniquely determined and satisfy $a^{(N)}_{ij} = a^{(N)}_{ji}$. This can be shown by writing $p_k(x) = \sum_{i=0}^k b^{(k)}_i x^i$ for certain $b^{(k)}_i \in \mathbb{C}$. Then
$$K_N(x, y) = \sum_{k=0}^N \Big( \sum_{i=0}^k b^{(k)}_i x^i \Big) \Big( \sum_{j=0}^k b^{(k)}_j y^j \Big) = \sum_{i=0}^N \sum_{j=0}^N a^{(N)}_{ij} x^i y^j, \tag{1.40}$$
where $a^{(N)}_{ij}$ satisfies
$$a^{(N)}_{ij} = \sum_{k=\max(i,j)}^N b^{(k)}_i b^{(k)}_j, \tag{1.41}$$
so that $a^{(N)}_{ij} = a^{(N)}_{ji}$ clearly holds.

Now define the matrix $A_N \in \mathbb{C}^{(N+1) \times (N+1)}$ by $A_N = \big( a^{(N)}_{ij} \big)_{0 \leq i,j \leq N}$. Then $A_N$ is the inverse of the Hankel matrix $H_N$, as the following theorem shows (see also [4], Theorem 2.1).

Theorem 1.25. The matrix $A_N$ is the inverse of $H_N$, i.e.
$$A_N H_N = 1_{N+1} = H_N A_N, \tag{1.42}$$
where $1_{N+1}$ is the $(N+1) \times (N+1)$ unit matrix.

Proof. For $0 \leq k \leq N$ it follows that
$$\int_{\mathbb{R}} x^k K_N(x, y) \, d\mu(x) = y^k \tag{1.43}$$

by the reproducing property of the kernel polynomial. On the other hand we have
$$\int_{\mathbb{R}} x^k K_N(x, y) \, d\mu(x) = \int_{\mathbb{R}} x^k \sum_{i=0}^N \sum_{j=0}^N a^{(N)}_{ij} x^i y^j \, d\mu(x) = \sum_{j=0}^N \Big( \sum_{i=0}^N a^{(N)}_{ij} \int_{\mathbb{R}} x^{k+i} \, d\mu(x) \Big) y^j = \sum_{j=0}^N \Big( \sum_{i=0}^N s_{k+i} \, a^{(N)}_{ij} \Big) y^j. \tag{1.44}$$
Combining (1.43) and (1.44), we see that
$$\sum_{j=0}^N \Big( \sum_{i=0}^N s_{k+i} \, a^{(N)}_{ij} \Big) y^j = y^k, \tag{1.45}$$
so that
$$\sum_{i=0}^N s_{k+i} \, a^{(N)}_{ij} = \delta_{k,j}. \tag{1.46}$$
We thus have shown that $H_N A_N = 1_{N+1}$. By uniqueness of the inverse, the claim follows.

The kernel polynomial can also be written directly in terms of the moments and the Hankel matrix, as the following lemma shows.

Lemma 1.26. The kernel polynomial is equal to
$$K_N(x, y) = -\det(H_N)^{-1} \det \begin{pmatrix} s_0 & s_1 & \cdots & s_N & 1 \\ s_1 & s_2 & \cdots & s_{N+1} & x \\ \vdots & & & & \vdots \\ s_N & s_{N+1} & \cdots & s_{2N} & x^N \\ 1 & y & \cdots & y^N & 0 \end{pmatrix}. \tag{1.47}$$
Note that $\det(H_N) > 0$ as $H_N$ is positive hermitian, so the above expression is well-defined.

Proof. We know that the kernel polynomial satisfies the reproducing property (and is uniquely determined by it). Hence the claim follows once the same property holds for the right-hand side of (1.47).

Note that, by linearity of the determinant in its last column,
$$\int_{\mathbb{R}} x^k \det \begin{pmatrix} s_0 & s_1 & \cdots & s_N & 1 \\ s_1 & s_2 & \cdots & s_{N+1} & x \\ \vdots & & & & \vdots \\ s_N & s_{N+1} & \cdots & s_{2N} & x^N \\ 1 & y & \cdots & y^N & 0 \end{pmatrix} d\mu(x) = \det \begin{pmatrix} s_0 & s_1 & \cdots & s_N & s_k \\ s_1 & s_2 & \cdots & s_{N+1} & s_{k+1} \\ \vdots & & & & \vdots \\ s_N & s_{N+1} & \cdots & s_{2N} & s_{k+N} \\ 1 & y & \cdots & y^N & 0 \end{pmatrix}.$$
The last column of the resulting matrix agrees with column $k+1$ except in its bottom entry; subtracting column $k+1$ from the last column therefore gives
$$\det \begin{pmatrix} s_0 & s_1 & \cdots & s_N & 0 \\ s_1 & s_2 & \cdots & s_{N+1} & 0 \\ \vdots & & & & \vdots \\ s_N & s_{N+1} & \cdots & s_{2N} & 0 \\ 1 & y & \cdots & y^N & -y^k \end{pmatrix} = -y^k \det(H_N),$$
by expanding along the last column. Dividing both sides by $-\det(H_N)$ shows that the right-hand side of (1.47) satisfies
$$\int_{\mathbb{R}} x^k \Big( -\det(H_N)^{-1} \det(\cdots) \Big) d\mu(x) = y^k \quad \text{for } 0 \leq k \leq N,$$
which is the reproducing property on the monomials, and hence on all polynomials of degree $\leq N$ by linearity. This proves the claim.

1.3 Proof of Theorem 1.18

By using the machinery of Section 1.2 we are able to give a proof of Theorem 1.18 (see also Sections 1 and 2 of [6]), which states that the scalar moment problem is determinate if and only if the smallest eigenvalues of the Hankel matrices tend to zero as $N \to \infty$.

Proof. Define
$$\pi_N(x) = \sum_{j=0}^N v_j x^j, \tag{1.48}$$

where $v_j \in \mathbb{C}$. Then
$$\int_{\mathbb{R}} |\pi_N(x)|^2 \, d\mu(x) = \int_{\mathbb{R}} \pi_N(x) \overline{\pi_N(x)} \, d\mu(x) = \sum_{j=0}^N \sum_{k=0}^N \int_{\mathbb{R}} x^{j+k} \, d\mu(x) \, v_j \overline{v_k} = \sum_{j=0}^N \sum_{k=0}^N s_{j+k} \, v_j \overline{v_k} \tag{1.49}$$
and
$$\int_0^{2\pi} |\pi_N(e^{i\theta})|^2 \frac{d\theta}{2\pi} = \sum_{j=0}^N \sum_{k=0}^N v_j \overline{v_k} \int_0^{2\pi} e^{i\theta(j-k)} \frac{d\theta}{2\pi} = \sum_{j=0}^N \sum_{k=0}^N v_j \overline{v_k} \, \delta_{j,k} = \sum_{k=0}^N |v_k|^2. \tag{1.50}$$
By (1.13), (1.49) and (1.50) it follows that the smallest eigenvalue $\lambda_N$ of $H_N$ is determined by
$$\lambda_N = \min_{\pi_N} \Big\{ \int_{\mathbb{R}} |\pi_N(x)|^2 \, d\mu(x) : \int_0^{2\pi} |\pi_N(e^{i\theta})|^2 \frac{d\theta}{2\pi} = 1 \Big\}. \tag{1.51}$$
The reciprocal of $\lambda_N$ then equals
$$\frac{1}{\lambda_N} = \max_{\pi_N} \Big\{ \int_0^{2\pi} |\pi_N(e^{i\theta})|^2 \frac{d\theta}{2\pi} : \int_{\mathbb{R}} |\pi_N(x)|^2 \, d\mu(x) = 1 \Big\}. \tag{1.52}$$
Let $(p_n)_{n\geq 0}$ denote the orthonormal polynomials with respect to $\mu$, so that (2.27) is satisfied; moreover each $p_n$ has a positive leading coefficient. As $(p_n)_{n\geq 0}$ forms a basis, we can write $\pi_N(x)$ as a linear combination of the orthonormal polynomials $p_k$, say
$$\pi_N(x) = \sum_{j=0}^N c_j p_j(x), \tag{1.53}$$
where $c_j \in \mathbb{C}$. We will now rewrite the integrals that appear in both (1.51) and (1.52) by using

(1.53). We obtain
$$\int_0^{2\pi} |\pi_N(e^{i\theta})|^2 \frac{d\theta}{2\pi} = \int_0^{2\pi} \Big( \sum_{j=0}^N c_j p_j(e^{i\theta}) \Big) \overline{\Big( \sum_{k=0}^N c_k p_k(e^{i\theta}) \Big)} \frac{d\theta}{2\pi} = \sum_{j=0}^N \sum_{k=0}^N K_{jk} \, c_j \overline{c_k}, \tag{1.54}$$
where we have defined
$$K_{jk} = \int_0^{2\pi} p_j(e^{i\theta}) \overline{p_k(e^{i\theta})} \frac{d\theta}{2\pi}. \tag{1.55}$$
Similarly,
$$\int_{\mathbb{R}} |\pi_N(x)|^2 \, d\mu(x) = \sum_{j=0}^N \sum_{k=0}^N c_j \overline{c_k} \int_{\mathbb{R}} p_j(x) p_k(x) \, d\mu(x) = \sum_{j=0}^N \sum_{k=0}^N c_j \overline{c_k} \, \delta_{j,k} = \sum_{j=0}^N |c_j|^2. \tag{1.56}$$
Thus (1.52) can be rewritten as
$$\frac{1}{\lambda_N} = \max \Big\{ \sum_{j=0}^N \sum_{k=0}^N K_{jk} \, c_j \overline{c_k} : \sum_{j=0}^N |c_j|^2 = 1 \Big\}. \tag{1.57}$$
Since the matrix $K^{(N)} := (K_{jk})_{0 \leq j,k \leq N}$ is positive definite⁶, all its eigenvalues are positive, and the sum of these eigenvalues equals the trace of the matrix. Note that (1.57) implies that $\frac{1}{\lambda_N}$ is the largest eigenvalue of $K^{(N)}$. Hence we obtain the inequality
$$\frac{1}{\lambda_N} \leq \operatorname{tr} K^{(N)} = \sum_{k=0}^N K_{kk} = \sum_{k=0}^N \int_0^{2\pi} |p_k(e^{i\theta})|^2 \frac{d\theta}{2\pi}. \tag{1.58}$$
According to Remark 1.22 it holds that
$$\sum_{k=0}^\infty |p_k(e^{i\theta})|^2 < \infty \tag{1.59}$$
whenever the moment problem is indeterminate. Thus, in the case of indeterminacy, it follows from (1.58) and (1.59) that
$$\frac{1}{\lambda_N} \leq \int_0^{2\pi} \sum_{k=0}^N |p_k(e^{i\theta})|^2 \frac{d\theta}{2\pi} \leq \int_0^{2\pi} \sum_{k=0}^\infty |p_k(e^{i\theta})|^2 \frac{d\theta}{2\pi} < \infty. \tag{1.60}$$

⁶ This immediately follows from (1.54), since the left-hand side is an integral of $|\pi_N(e^{i\theta})|^2$, which is strictly positive for $(c_j) \neq 0$ because a non-zero polynomial has only finitely many zeros on the unit circle.

This shows that
$$\lambda_N \geq \Big( \int_0^{2\pi} \sum_{k=0}^\infty |p_k(e^{i\theta})|^2 \frac{d\theta}{2\pi} \Big)^{-1} > 0. \tag{1.61}$$
We thus have established that in the indeterminate case the smallest eigenvalue $\lambda_N$ is bounded from below. Put differently, if $\lim_{N\to\infty} \lambda_N = 0$, then the moment problem is determinate.

Conversely, assume that $\lambda_N \geq \gamma$ for all $N$, where $\gamma > 0$. We will show that in this case the moment problem is indeterminate. Since $\frac{1}{\lambda_N} \leq \frac{1}{\gamma}$ for all $N$, and $\frac{1}{\lambda_N}$ is the largest eigenvalue of the positive definite matrix $K^{(N)}$, it follows from the Rayleigh quotient that
$$\sum_{j=0}^N \sum_{k=0}^N K_{jk} \, c_j \overline{c_k} \leq \frac{1}{\gamma} \sum_{j=0}^N |c_j|^2 \tag{1.62}$$
for all $c = (c_0, c_1, \dots, c_N) \in \mathbb{C}^{N+1}$. Now let $p$ be an arbitrary complex polynomial of degree $\leq N$, say
$$p(x) = \sum_{k=0}^N c_k p_k(x). \tag{1.63}$$
Then (1.62) can be reformulated as
$$\int_0^{2\pi} |p(e^{i\theta})|^2 \frac{d\theta}{2\pi} = \sum_{j=0}^N \sum_{k=0}^N K_{jk} \, c_j \overline{c_k} \leq \frac{1}{\gamma} \sum_{j=0}^N |c_j|^2 = \frac{1}{\gamma} \int_{\mathbb{R}} |p(x)|^2 \, d\mu(x). \tag{1.64}$$
Let $z_0$ be an arbitrary non-real number in the open unit disc, i.e. $|z_0| < 1$. Then it follows from the Cauchy integral formula that
$$p(z_0) = \frac{1}{2\pi} \int_0^{2\pi} \frac{p(e^{i\theta})}{e^{i\theta} - z_0} \, e^{i\theta} \, d\theta, \tag{1.65}$$

hence
$$|p(z_0)|^2 \leq \Big( \frac{1}{2\pi} \int_0^{2\pi} \frac{|p(e^{i\theta})|}{|e^{i\theta} - z_0|} \, d\theta \Big)^2 \leq \int_0^{2\pi} |p(e^{i\theta})|^2 \frac{d\theta}{2\pi} \cdot \int_0^{2\pi} \frac{1}{|e^{i\theta} - z_0|^2} \frac{d\theta}{2\pi} \leq \frac{1}{(1 - |z_0|)^2} \int_0^{2\pi} |p(e^{i\theta})|^2 \frac{d\theta}{2\pi}. \tag{1.66}$$
At the second inequality we made use of the Cauchy–Schwarz inequality for integrals⁷, while the third inequality follows from $|e^{i\theta} - z_0| \geq |e^{i\theta}| - |z_0| = 1 - |z_0|$.

We define $\kappa = \frac{1}{\gamma (1 - |z_0|)^2}$. Then combining (1.64) and (1.66) yields the inequality
$$|p(z_0)|^2 \leq \kappa \int_{\mathbb{R}} |p(x)|^2 \, d\mu(x), \tag{1.67}$$
which holds for every complex polynomial $p$ of degree $\leq N$. Lastly we define the particular complex polynomial
$$p(x) = \sum_{k=0}^N \overline{p_k(z_0)} \, p_k(x).$$
For this polynomial we have
$$|p(z_0)|^2 = p(z_0) \overline{p(z_0)} = \Big( \sum_{k=0}^N \overline{p_k(z_0)} \, p_k(z_0) \Big) \overline{\Big( \sum_{l=0}^N \overline{p_l(z_0)} \, p_l(z_0) \Big)} = \Big( \sum_{k=0}^N |p_k(z_0)|^2 \Big)^2 \tag{1.68}$$
and
$$\int_{\mathbb{R}} |p(x)|^2 \, d\mu(x) = \sum_{k=0}^N \sum_{l=0}^N \overline{p_k(z_0)} \, p_l(z_0) \int_{\mathbb{R}} p_k(x) p_l(x) \, d\mu(x) = \sum_{k=0}^N \sum_{l=0}^N \overline{p_k(z_0)} \, p_l(z_0) \, \delta_{k,l} = \sum_{k=0}^N |p_k(z_0)|^2. \tag{1.69}$$

⁷ This inequality comprises the following statement: for integrable $f, g : [a, b] \to \mathbb{C}$,
$$\Big| \int_a^b f(x) g(x) \, dx \Big|^2 \leq \int_a^b |f(x)|^2 \, dx \int_a^b |g(x)|^2 \, dx.$$

From (1.68), (1.69) and (1.67), successively, we thus obtain
$$\Big( \sum_{k=0}^N |p_k(z_0)|^2 \Big)^2 = |p(z_0)|^2 \leq \kappa \int_{\mathbb{R}} |p(x)|^2 \, d\mu(x) = \kappa \sum_{k=0}^N |p_k(z_0)|^2. \tag{1.70}$$
Dividing both sides by $\sum_{k=0}^N |p_k(z_0)|^2$ yields
$$\sum_{k=0}^N |p_k(z_0)|^2 \leq \kappa.$$
Since $N$ is arbitrary, it follows that
$$\sum_{k=0}^\infty |p_k(z_0)|^2 \leq \kappa < \infty, \tag{1.71}$$
in other words the moment problem is indeterminate by Remark 1.22.

As stated before, the original purpose of this thesis was to generalize the above proof to the matrix-valued case. We therefore turn our attention to the matrix-valued moment problem in the next chapter, and discuss some of the difficulties that arise while attempting to give this generalization.
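Both halves of Theorem 1.18 can be glimpsed numerically for the standard Gaussian measure, which is determinate. The sketch below (an illustration under assumed data, not part of the thesis) uses the known recurrence coefficients $a_n = \sqrt{n}$, $b_n = 0$ of the normalized probabilists' Hermite polynomials to evaluate $p_n(z_0)$ at a non-real point: the partial sums of $\sum_n |p_n(z_0)|^2$ keep growing (divergence, consistent with Remark 1.22), while the smallest Hankel eigenvalues $\lambda_N$ decrease, consistent with $\lambda_N \to 0$:

```python
import numpy as np

def hermite_orthonormal_values(z, n_max):
    # Three-term recurrence (1.25) with a_n = sqrt(n), b_n = 0: p_0 = 1, p_1(z) = z,
    # p_{n+1}(z) = (z p_n(z) - sqrt(n) p_{n-1}(z)) / sqrt(n+1).
    vals = [1.0 + 0j, z]
    for n in range(1, n_max):
        vals.append((z * vals[n] - np.sqrt(n) * vals[n - 1]) / np.sqrt(n + 1))
    return vals

z0 = 0.5j
partial = np.cumsum([abs(v) ** 2 for v in hermite_orthonormal_values(z0, 60)])
assert partial[-1] > 2 * partial[29]    # the series (1.35) diverges at non-real z_0

# Smallest Hankel eigenvalues from the Gaussian moments s_{2n} = (2n-1)!!.
s = [1.0]
for n in range(1, 17):
    s.append(0.0 if n % 2 else s[n - 2] * (n - 1))
lams = [np.linalg.eigvalsh([[s[i + j] for j in range(N + 1)]
                            for i in range(N + 1)])[0] for N in range(9)]
assert lams[-1] < lams[0]               # λ_N decreases, consistent with determinacy
```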

Chapter 2

The matrix moment problem

In this chapter we generalize the moment problem introduced in Chapter 1 to the matrix-valued setting. Before formulating the matrix moment problem, we take a closer look at matrix measures. Afterwards orthonormal matrix polynomials are treated, and their connection to the moment problem is discussed, as was done briefly in Chapter 1 for the classical moment problem. The chapter is concluded by describing some of the difficulties in generalizing the theorem proved in Section 1.3. The treatment of the matrix moment problem and orthonormal polynomials in this thesis is mostly based on [5].

2.1 Matrix measures

We will generalize the notion of complex measures to matrix measures. See Appendix A for a brief overview of complex measures.

Definition 2.1. A matrix measure $\mu$ on a measurable space $(X, \mathcal{E})$ is a function $\mu : \mathcal{E} \to \mathbb{C}^{K \times K}$ such that
\[
\mu\Big( \bigcup_{n=1}^{\infty} E_n \Big) = \sum_{n=1}^{\infty} \mu(E_n) \tag{2.1}
\]
for any sequence $(E_n)_{n \geq 1}$ of pairwise disjoint sets from $\mathcal{E}$.

Note that a matrix measure $\mu = (\mu_{ij})_{0 \leq i,j \leq K-1}$ can be considered as a matrix of $K^2$ complex measures.

Definition 2.2. A matrix measure $\mu$ is positive if all values of the measure are positive hermitian matrices, i.e. $\mu(E)$ is a positive hermitian matrix for all $E \in \mathcal{E}$. We denote the set of positive matrix measures on $(X, \mathcal{E})$ by $M_K(X)$.

In what follows, we will always assume that $\mu$ is a positive matrix measure unless stated otherwise.

Lemma 2.3. A positive matrix measure $\mu$ is

(i) increasing, which means that $E \subseteq F$ implies $\mu(E) \leq \mu(F)$ for all $E, F \in \mathcal{E}$;

(ii) countably subadditive, i.e. for every sequence $(E_n)_{n \geq 1}$ in $\mathcal{E}$ we have
\[
\mu\Big( \bigcup_{n=1}^{\infty} E_n \Big) \leq \sum_{n=1}^{\infty} \mu(E_n), \tag{2.2}
\]

where the inequality holds if the right-hand side converges.

Proof.
(i) Let $E \subseteq F$. Then $\mu(F) = \mu(E) + \mu(F \setminus E)$, and since $\mu(F) - \mu(E) = \mu(F \setminus E)$ is positive hermitian, it follows that $\mu(E) \leq \mu(F)$.

(ii) Define the sequence $(F_n)_{n \geq 1}$ by $F_1 := E_1$ and $F_n := E_n \setminus (E_1 \cup \ldots \cup E_{n-1})$ for $n > 1$. Note that the $F_n$ are pairwise disjoint sets by construction. Moreover $\bigcup_n E_n = \bigcup_n F_n$, from which it follows that
\[
\mu\Big( \bigcup_{n=1}^{\infty} E_n \Big) = \mu\Big( \bigcup_{n=1}^{\infty} F_n \Big) = \sum_{n=1}^{\infty} \mu(F_n) = \lim_{k \to \infty} \sum_{n=1}^{k} \mu(F_n). \tag{2.3}
\]
As $F_n \subseteq E_n$ for every $n \geq 1$, invoking (i) yields $\mu(F_n) \leq \mu(E_n)$, and thus
\[
\sum_{n=1}^{k} \mu(F_n) \leq \sum_{n=1}^{k} \mu(E_n). \tag{2.4}
\]
We conclude that
\[
\mu\Big( \bigcup_{n=1}^{\infty} E_n \Big) = \lim_{k \to \infty} \sum_{n=1}^{k} \mu(F_n) \leq \lim_{k \to \infty} \sum_{n=1}^{k} \mu(E_n) = \sum_{n=1}^{\infty} \mu(E_n), \tag{2.5}
\]
as was to be shown.

In the remainder of this section (see Theorem 2.6) we will show that every positive matrix measure $\mu$ can be written as
\[
\mu(dx) = W(x)\, d\tau_\mu(x), \tag{2.6}
\]
where $W(x)$ is a positive hermitian matrix and $\tau_\mu$ is the so-called trace measure, which is defined below.

Definition 2.4. Let $\mu$ be a positive matrix measure. Then the diagonal measures $\mu_{ii}$ are positive finite measures and so is
\[
\tau_\mu := \operatorname{tr} \mu = \mu_{00} + \ldots + \mu_{K-1,K-1}. \tag{2.7}
\]

Lemma 2.5. For any $\mu \in M_K(X)$ we have
\[
|\mu_{ij}(E)| \leq |\mu_{ij}|(E) \leq \tau_\mu(E) \tag{2.8}
\]
for all $E \in \mathcal{E}$. In this lemma $|\mu_{ij}|$ denotes the variation of $\mu_{ij}$ (see Definition A.5).

Proof. Let $E \in \mathcal{E}$. The first inequality is obvious by construction, see Appendix A. In order to prove the second one, we introduce the notation $a_{ij} = \mu_{ij}(E)$ and $A = (a_{ij})$. For $i \neq j$, the $2 \times 2$ matrix
\[
\begin{pmatrix} a_{ii} & a_{ij} \\ a_{ji} & a_{jj} \end{pmatrix}
\]

is positive hermitian. In particular it has a non-negative determinant, so that $a_{ii} a_{jj} - a_{ij} a_{ji} \geq 0$, and thus $|a_{ij}|^2 \leq a_{ii} a_{jj}$. Hence
\[
|a_{ij}| \leq \sqrt{a_{ii} a_{jj}} \leq \frac{a_{ii} + a_{jj}}{2} \leq a_{ii} + a_{jj} \leq \operatorname{tr} A, \tag{2.9}
\]
where the second inequality follows since $0 \leq (\sqrt{a_{ii}} - \sqrt{a_{jj}})^2 = a_{ii} - 2\sqrt{a_{ii} a_{jj}} + a_{jj}$, i.e. $a_{ii} + a_{jj} \geq 2\sqrt{a_{ii} a_{jj}}$. We conclude from (2.9) that $|\mu_{ij}(E)| \leq \tau_\mu(E)$. Since $|\mu_{ij}|$ is the smallest positive measure satisfying these inequalities (see Appendix A), we get $|\mu_{ij}|(E) \leq \tau_\mu(E)$ for all $E \in \mathcal{E}$.

Before we are able to give a proof of (2.6), we need the Radon–Nikodym Theorem, which is formulated in Theorem A.8. Observe that $\mu_{ij} \ll \tau_\mu$ for all $0 \leq i, j \leq K-1$. Indeed, suppose that $\tau_\mu(E) = 0$. Then clearly $\mu_{ii}(E) = 0$ for every $i$, and by (2.9), $\mu_{ij}(E) = 0$ for all $0 \leq i, j \leq K-1$. Thus by Theorem A.8 there exist measurable functions $f_{ij}$ such that
\[
\mu_{ij}(E) = \int_E f_{ij}\, d\tau_\mu \tag{2.10}
\]
for all $E \in \mathcal{E}$. Hence $|\mu_{ij}|(E) = \int_E |f_{ij}|\, d\tau_\mu$ by Proposition A.9. Thus, according to Lemma 2.5, $|f_{ij}(x)| \leq 1$ for $\tau_\mu$-almost all $x \in X$, since
\[
\int_E |f_{ij}|\, d\tau_\mu = |\mu_{ij}|(E) \leq \tau_\mu(E) = \int_E 1\, d\tau_\mu.
\]
It follows that $\int_X |f_{ij}|\, d\tau_\mu < \infty$ as $\tau_\mu$ is a finite measure, in other words $f_{ij} \in L^1(\tau_\mu)$. Now we can finally state the desired result.

Theorem 2.6. Let $\mu$ be a positive matrix measure with Radon–Nikodym derivatives $f_{ij} \in L^1(\tau_\mu)$ such that
\[
\mu_{ij}(E) = \int_E f_{ij}(x)\, d\tau_\mu(x) \tag{2.11}
\]
for $E \in \mathcal{E}$. Then the matrix $W(x) := (f_{ij}(x))_{0 \leq i,j \leq K-1}$ is positive hermitian for $\tau_\mu$-almost all $x \in X$.

Proof. Since $\mu$ is a positive matrix measure, $\mu(E)$ is positive hermitian for any $E \in \mathcal{E}$. Let $v \in \mathbb{C}^K \setminus \{0\}$. Then
\[
\int_E F(x, v)\, d\tau_\mu(x) := \int_E \langle W(x) v, v \rangle\, d\tau_\mu(x) = \Big\langle \int_E W(x)\, d\tau_\mu(x)\, v, v \Big\rangle = \langle \mu(E) v, v \rangle \geq 0, \tag{2.12}
\]
where we have defined $F(x, v) = \langle W(x) v, v \rangle$ and made use of the fact that (2.11) implies $\mu(E) = \int_E W(x)\, d\tau_\mu(x)$. Thus the function $x \mapsto F(x, v)$ has a non-negative integral over all sets $E \in \mathcal{E}$ w.r.t. the positive measure $\tau_\mu$. Hence $F(x, v)$ is real-valued and non-negative for $\tau_\mu$-almost all $x \in X$. It follows that there exists a set $\Omega_v$ with $\tau_\mu(\Omega_v) = 0$ such that $F(x, v) \geq 0$ for all $x \in X \setminus \Omega_v$. Now define $\Omega = \bigcup_{v \in D} \Omega_v$ for some countable dense subset $D$ of $\mathbb{C}^K$. Then
\[
\tau_\mu(\Omega) = \tau_\mu\Big( \bigcup_{v \in D} \Omega_v \Big) \leq \sum_{v \in D} \tau_\mu(\Omega_v) = 0,
\]
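For a discrete positive matrix measure the decomposition of Theorem 2.6 can be written down explicitly: the trace measure assigns each atom the weight $\operatorname{tr} A_i$, and the density is $W(x_i) = A_i / \operatorname{tr} A_i$. The sketch below is our own numerical illustration with an arbitrarily chosen two-atom measure; it also checks the bound of Lemma 2.5 on the atoms.

```python
import numpy as np

# A discrete positive matrix measure with two atoms: mu({x_i}) = A_i,
# where each A_i is a (2x2) positive hermitian matrix.
atoms = {-1.0: np.array([[2.0, 1.0], [1.0, 1.0]]),
          2.0: np.array([[1.0, -0.5], [-0.5, 1.0]])}

# Trace measure on an atom: tau_mu({x_i}) = tr(A_i);
# Radon-Nikodym density there: W(x_i) = A_i / tr(A_i).
tau = {x: float(np.trace(A).real) for x, A in atoms.items()}
W = {x: A / tau[x] for x, A in atoms.items()}

for x, A in atoms.items():
    assert np.allclose(A, A.conj().T)                               # weights hermitian
    assert np.linalg.eigvalsh(W[x]).min() >= -1e-12                 # W(x) >= 0 (Thm 2.6)
    assert np.linalg.eigvalsh(np.eye(2) - W[x]).min() >= -1e-12     # W(x) <= 1_K (Rem 2.7)
    assert np.abs(A).max() <= tau[x] + 1e-12                        # |mu_ij| <= tau_mu (Lem 2.5)
```

On an atom the density $W(x_i)$ always has trace one, mirroring the normalization $|f_{ij}| \leq 1$ in the continuous case.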

so that $\tau_\mu(\Omega) = 0$. We also have $F(x, v) \geq 0$ for all $x \in X \setminus \Omega$ and $v \in D$. For fixed $x \in X$ the function $v \mapsto F(x, v)$ is continuous. Moreover it is non-negative on the dense set $D$ whenever $x \in X \setminus \Omega$, and hence it is non-negative on all of $\mathbb{C}^K$. But then $W(x)$ is positive hermitian for $\tau_\mu$-almost all $x \in X$, namely for all $x \in X \setminus \Omega$.

Remark 2.7. Moreover it can be shown that $W(x) \leq 1_K$ for $\tau_\mu$-almost all $x \in X$. Indeed, invoking Lemma 1.10 we see that
\[
\int_E \langle (1_K - W(x)) v, v \rangle\, d\tau_\mu(x) = \int_E \langle v, v \rangle\, d\tau_\mu(x) - \int_E \langle W(x) v, v \rangle\, d\tau_\mu(x) = \tau_\mu(E) \langle v, v \rangle - \langle \mu(E) v, v \rangle \geq 0
\]
for all $v \in \mathbb{C}^K \setminus \{0\}$. Following the reasoning in the above proof, we see that $1_K - W(x)$ is positive hermitian for $\tau_\mu$-almost all $x \in X$, and from this the claim follows.

In the following we always assume that the functions $f_{ij}$ are chosen such that $W(x)$ is positive hermitian and $|f_{ij}(x)| \leq 1$ for all $x \in X$.

2.2 Formulation of the matrix moment problem

In this section we generalize the scalar moment problem to the matrix-valued case. Suppose $K \geq 1$. Let $\mu = W \tau_\mu$ be a positive matrix measure, supported on the real line, and with moments of any order. Here $W(x) \in \mathbb{C}^{K \times K}$ is positive hermitian, i.e. $W(x) = (f_{ij}(x))_{0 \leq i,j \leq K-1}$, where the functions $f_{ij}$ are chosen as in Theorem 2.6.

Definition 2.8. Denote the $n$th moment of the measure $\mu$ by
\[
S_n := S_n(\mu) = \int_{\mathbb{R}} x^n\, d\mu(x) = \int_{\mathbb{R}} x^n W(x)\, d\tau_\mu(x) \tag{2.13}
\]
for $n \in \mathbb{N}$. We call $(S_n)_{n \geq 0}$ a matrix moment sequence. The integration in (2.13) has to be taken entrywise, which means that the $(i,j)$ entry of the matrix $S_n \in \mathbb{C}^{K \times K}$ is given by
\[
(S_n)_{ij} = \int_{\mathbb{R}} x^n f_{ij}(x)\, d\tau_\mu(x). \tag{2.14}
\]
Note that $S_n^* = S_n$ as $W(x)$ is hermitian. Under certain conditions it is possible to assume $S_0 = 1_K$ without loss of generality. This normalization, however, is not always convenient in explicit examples.

Notation 2.9. We denote the set of positive matrix measures on $\mathbb{R}$ with moments of any order by $M_K = M_K(\mathbb{R})$. For $\mu \in M_K$ we denote the set of all $\nu \in M_K$ with the same moments as $\mu$ by $[\mu]$, i.e.
\[
[\mu] = \{ \nu \in M_K : S_n(\nu) = S_n(\mu) \text{ for all } n \geq 0 \}.
\]

The matrix moment problem consists of the following two questions:

1. Which sequences $(S_n)_{n \geq 0}$ are matrix moment sequences?

2. To which extent is $\mu \in M_K$ determined by its moment sequence?

The answer to the latter question is given in terms of the determinacy of the measure $\mu$:

Definition 2.10. Let $\mu \in M_K$. Then $\mu$ (or the corresponding moment sequence $(S_n)_{n \geq 0}$) is called determinate if $[\mu] = \{\mu\}$, and indeterminate otherwise.

If $\mu$ is indeterminate, then $[\mu]$ is a convex set with at least two elements, and thus infinite.

In the previous section we have seen that the trace measure $\tau_\mu$ is useful to give another characterization of positive matrix measures, namely via (2.6) with $W(x) = (f_{ij}(x))_{0 \leq i,j \leq K-1}$. As we shall show next, this particular measure also gives a sufficient condition for determinacy of its associated matrix measure $\mu$. To show this, we first need to state a result regarding the determinacy of matrix measures and their components.

Theorem 2.11. Let $\mu, \nu$ be positive matrix measures with moments of any order and assume that they have the same moments.

(i) If $\mu_{ii}$ is determinate for some $i \in \{0, 1, \ldots, K-1\}$, then $\mu_{ij} = \nu_{ij}$ for $j \in \{0, 1, \ldots, K-1\}$.

(ii) If $\mu_{ii}$ is determinate for all $i \in \{0, 1, \ldots, K-1\}$, then $\mu = \nu$, so that $\mu$ is determinate.

The proof is omitted and can be found in [5], Theorem 3.6. Note that (ii) immediately follows from (i).

Corollary 2.12. Let $\mu$ be a positive matrix measure with moments of any order. If $\tau_\mu$ is determinate, then $\mu$ is determinate.

Proof. Fix $i$ and note that $\mu_{ii} \leq \tau_\mu$. Then it follows from Lemma A.4 that $\mu_{ii}$ is determinate, since $\tau_\mu$ is assumed to be determinate. But then $\mu$ is determinate according to Theorem 2.11.

The above corollary thus relates determinacy of the matrix moment problem and the classical moment problem. Completely similar to Section 1.1, we now form the Hankel block matrices corresponding to a sequence $(S_n)_{n \geq 0}$ of hermitian $K \times K$ matrices, namely
\[
H_N = (S_{i+j})_{0 \leq i,j \leq N} \tag{2.15}
\]
for $N \geq 0$. Observe that the Hankel matrices are of size $K(N+1) \times K(N+1)$.
Written out we thus get
\[
H_N = \begin{pmatrix}
S_0 & S_1 & S_2 & \cdots & S_N \\
S_1 & S_2 & S_3 & \cdots & S_{N+1} \\
S_2 & S_3 & S_4 & \cdots & S_{N+2} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
S_N & S_{N+1} & S_{N+2} & \cdots & S_{2N}
\end{pmatrix}. \tag{2.16}
\]
Moreover we define the infinite Hankel matrix to be
\[
H = (S_{i+j})_{i,j \geq 0}. \tag{2.17}
\]
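For a measure with finitely many atoms the entrywise moment integrals $(S_n)_{ij} = \int x^n f_{ij}\, d\tau_\mu$ reduce to finite sums, which makes the definitions easy to sanity-check numerically. Below is our own sketch with an arbitrarily chosen three-atom example (atoms and weights are ours, not from the thesis):

```python
import numpy as np

# Discrete positive matrix measure: atoms x_i with positive hermitian 2x2 weights A_i.
xs = [-1.0, 0.5, 2.0]
As = [np.array([[1.0, 0.3], [0.3, 0.5]]),
      np.array([[0.7, 0.0], [0.0, 0.7]]),
      np.array([[0.4, 0.2j], [-0.2j, 0.9]])]

def moment(n):
    """S_n = integral of x^n dmu(x), computed entrywise (here a finite sum)."""
    return sum((x ** n) * A for x, A in zip(xs, As))

S = [moment(n) for n in range(5)]
# Each S_n is hermitian, since x^n is real and every weight A_i is hermitian.
assert all(np.allclose(Sn, Sn.conj().T) for Sn in S)
```

Note that here $S_0 = \sum_i A_i \neq 1_K$, illustrating why the normalization $S_0 = 1_K$ is not always convenient in explicit examples.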

Definition 2.13. A matrix sequence $(S_n)_{n \geq 0}$ is called positive definite if its Hankel matrix $H_N$ is positive hermitian for every $N$, which is equivalent to
\[
\langle H_N v, v \rangle = \sum_{i,j=0}^{N} v_i^* S_{i+j} v_j > 0 \quad \text{for all } N \geq 0 \text{ and } 0 \neq v = (v_0, \ldots, v_N) \in \mathbb{C}^{K(N+1)}. \tag{2.18}
\]

The generalized form of Hamburger's theorem (i.e. Theorem 1.14) is the following:

Theorem 2.14 (Krein). A sequence $(S_n)_{n \geq 0}$ is a matrix moment sequence if and only if it is a positive definite matrix sequence.

Proof. Let $(S_n)_{n \geq 0}$ be a matrix moment sequence, i.e. $S_n = \int_{\mathbb{R}} x^n W(x)\, d\tau_\mu(x)$ for $W(x)$ as in Theorem 2.6. Then for any $N \geq 0$ and $0 \neq v \in \mathbb{C}^{K(N+1)}$ we have
\[
\sum_{i,j=0}^{N} v_i^* S_{i+j} v_j = \sum_{i,j=0}^{N} v_i^* \Big( \int_{\mathbb{R}} x^{i+j} W(x)\, d\tau_\mu(x) \Big) v_j = \int_{\mathbb{R}} \Big( \sum_{i=0}^{N} v_i x^i \Big)^{\!*} W(x) \Big( \sum_{j=0}^{N} v_j x^j \Big)\, d\tau_\mu(x) = \int_{\mathbb{R}} \langle W(x) u(x), u(x) \rangle\, d\tau_\mu(x) > 0,
\]
where we have defined $u(x) = \sum_{i=0}^{N} v_i x^i$ and made use of (2.12). We conclude that $(S_n)_{n \geq 0}$ is a positive definite matrix sequence.

The other implication is more involved, as was the case in Theorem 1.14, and can be given in a similar fashion by using a generalized form of Helly's theorem. Another proof can be found in full detail in [5], Theorem 3.2, and is based on [20]. Here we only present a sketch of that proof; a large part of it relies on spectral theory, which will also be discussed in Chapter 3 and Appendix B.

Given a positive definite sequence $(S_n)_{n \geq 0}$, define the positive hermitian form $\langle \cdot, \cdot \rangle$ on the set of vector polynomials $L = \{ g(x) = \sum_i c_i x^i : c_i \in \mathbb{C}^K \}$ by
\[
\Big\langle \sum_j d_j x^j, \sum_i c_i x^i \Big\rangle = \sum_{i,j} c_i^* S_{i+j} d_j,
\]
with associated seminorm $\| \sum_i c_i x^i \|^2 = \sum_{i,j} c_i^* S_{i+j} c_j$. Consider the multiplication operator in $L$, denoted by $A_0$, i.e. $(A_0 g)(x) = x g(x)$. It can then be shown that $A_0$ induces an operator $A$ in the quotient space $L / L_0$, where $L_0 = \{ g \in L : \|g\| = 0 \}$, and that the form defined above defines an inner product on $L / L_0$. Now take $H$ to be the Hilbert space completion of $L / L_0$ with respect to that inner product, and let $\tilde{H}$ be a Hilbert space which contains $H$ as a closed subspace. It is known that one can find a self-adjoint extension of $A$ in $\tilde{H}$, say $T$.
Applying the Spectral Theorem, $T$ can be written in the form $T = \int_{\mathbb{R}} x\, dW(x)$ for some spectral measure $W(x)$ on the Borel sets of $\mathbb{R}$. Finally a particular positive measure $\mu$ is defined in terms of $W(x)$, similar as in (3.9), and again invoking the Spectral Theorem one sees that $(S_n)_{ij} = \int_{\mathbb{R}} x^n\, d\mu_{ij}(x)$, where $\mu_{ij}$ is a component of the matrix measure $\mu$. Hence $(S_n)_{n \geq 0}$ is a matrix moment sequence.
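Krein's criterion can be probed numerically: for a discrete measure with positive definite weights at more atoms than the degree $N$, a vector polynomial of degree at most $N$ cannot vanish at all atoms, so the block Hankel matrix $H_N$ must be positive definite. The construction below is our own sketch (atoms, weights and random seed are arbitrary choices of ours):

```python
import numpy as np

rng = np.random.default_rng(0)
K, N = 2, 2

# Four distinct atoms (more than N) with random positive definite K x K weights.
xs = [-2.0, -0.5, 1.0, 3.0]
As = []
for _ in xs:
    B = rng.standard_normal((K, K)) + 1j * rng.standard_normal((K, K))
    As.append(B @ B.conj().T + 0.1 * np.eye(K))   # B B* + 0.1 I is positive definite

# Matrix moments S_n = sum_i x_i^n A_i, for n = 0, ..., 2N.
S = [sum((x ** n) * A for x, A in zip(xs, As)) for n in range(2 * N + 1)]

# Block Hankel matrix H_N = (S_{i+j})_{0<=i,j<=N}, of size K(N+1) x K(N+1).
H = np.block([[S[i + j] for j in range(N + 1)] for i in range(N + 1)])
assert H.shape == (K * (N + 1), K * (N + 1))
assert np.allclose(H, H.conj().T)
assert np.linalg.eigvalsh(H).min() > 0     # positive definite matrix sequence
```

Conversely, dropping atoms until fewer than $N + 1$ remain makes the smallest eigenvalue of $H_N$ collapse to zero, matching the degeneracy discussion that follows.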

2.3 Matrix inner products and orthonormal matrix polynomials

2.3.1 Matrix polynomials

The orthonormal polynomials with respect to some positive measure turned out to have a direct connection with the associated classical moment problem; see Section 1.2. Here we proceed completely analogously by introducing orthonormal matrix polynomials. First we turn our attention to matrix polynomials.

Definition 2.15. A matrix polynomial $P$ is a polynomial in one complex variable $x$ which has $K \times K$ matrices as coefficients, i.e.
\[
P(x) = \sum_{k=0}^{n} A_k x^k = \sum_{k=0}^{n} x^k A_k \quad \text{with } A_k \in \mathbb{C}^{K \times K} \text{ for every } 0 \leq k \leq n. \tag{2.19}
\]
The degree of $P$ is the highest power $k$ of $x$ for which $A_k \neq 0$. The set of matrix polynomials with coefficients in $\mathbb{C}^{K \times K}$ is denoted by $\mathbb{C}^{K \times K}[x]$. If a matrix polynomial of degree $n$ equals the zero matrix for more than $n$ values of the variable $x$, then all matrix coefficients are equal to the zero matrix.

Notation 2.16. If $P$ is a polynomial of degree $n$, then we denote its leading coefficient by $\operatorname{lc}(P) = A_n$.

Remark 2.17. We consider the set $\mathbb{C}^{K \times K}[x]$ of matrix polynomials as a module over the matrix ring $\mathbb{C}^{K \times K}$. This is possible since $\mathbb{C}^{K \times K}[x]$ is a complex vector space in which left and right multiplication by matrices is possible.

Notation 2.18. Let $P(x) = \sum_{k=0}^{n} A_k x^k$ be a matrix polynomial. Then we denote by $P^*$ the matrix polynomial
\[
P^*(x) = \sum_{k=0}^{n} A_k^* x^k. \tag{2.20}
\]
Note that $P^*(x) = P(\bar{x})^*$ for $x \in \mathbb{C}$.

In the scalar case the orthonormal polynomials $(p_n)_{n \geq 0}$ satisfy $\deg(p_n) = n$ and $\operatorname{lc}(p_n) > 0$. For the orthonormal polynomials in the matrix-valued case, similar properties hold. For this reason we introduce the following terminology:

Definition 2.19. A sequence of matrix polynomials $(P_n)_{n \geq 0}$ is called simple if

(i) $P_n$ has degree $n$;

(ii) the leading coefficient of $P_n$ is regular.

Proposition 2.20. Let $(P_n)_{n \geq 0}$ be a simple sequence of matrix polynomials. Then every matrix polynomial $P$ of degree $n$ can be uniquely expressed as
\[
P(x) = \sum_{k=0}^{n} A_k P_k(x), \quad \text{where } A_k \in \mathbb{C}^{K \times K} \text{ for every } 0 \leq k \leq n. \tag{2.21}
\]

Put differently, we thus see that a simple sequence of matrix polynomials forms a basis of $\mathbb{C}^{K \times K}[x]$, as a left module over $\mathbb{C}^{K \times K}$.

Proof. Let $P$ be a polynomial of degree $n$, and let $(P_n)_{n \geq 0}$ be a simple sequence of matrix polynomials wherein $P_n$ has leading coefficient $L_n$. Then $P$ can obviously be written as a linear combination of the polynomials $P_n$. Assume that
\[
P(x) = \sum_{k=0}^{n} A_k P_k(x) = \sum_{k=0}^{n} B_k P_k(x) \tag{2.22}
\]
are two different ways to write $P$. Then the leading coefficient of $P$ equals $\operatorname{lc}(P) = A_n \operatorname{lc}(P_n) = A_n L_n$ and likewise $\operatorname{lc}(P) = B_n L_n$. Regularity of $L_n$ then implies that $A_n = B_n$. It is then inductively clear that also $A_{n-1} = B_{n-1}, \ldots, A_0 = B_0$. Hence $P$ can be uniquely expressed as a linear combination of the polynomials $P_n$.

Before we are able to give a definition of orthonormal matrix polynomials, we need to consider matrix inner products.

Definition 2.21. A matrix inner product on $\mathbb{C}^{K \times K}[x]$ is a mapping
\[
\langle \cdot, \cdot \rangle : \mathbb{C}^{K \times K}[x] \times \mathbb{C}^{K \times K}[x] \to \mathbb{C}^{K \times K} \tag{2.23}
\]
such that

(i) $\langle P, Q \rangle = \langle Q, P \rangle^*$;

(ii) $\langle A_1 P_1 + A_2 P_2, Q \rangle = A_1 \langle P_1, Q \rangle + A_2 \langle P_2, Q \rangle$, where $A_i \in \mathbb{C}^{K \times K}$ for $i = 1, 2$;

(iii) $\theta \leq \langle P, P \rangle$.

The matrix inner product is called non-degenerate if it also satisfies

(iv) for all $P \in \mathbb{C}^{K \times K}[x]$: if $\langle P, P \rangle = \theta$, then $P = \theta$.

The matrix inner product is called degenerate if there exists some non-zero matrix polynomial $P$ for which $\langle P, P \rangle = \theta$.

Remark 2.22. From (i) it follows that $\langle P, P \rangle$ is always hermitian, while (iii) implies that it is positive. Moreover it follows from (i) and (ii) that
\[
\langle P, A_1 Q_1 + A_2 Q_2 \rangle = \langle A_1 Q_1 + A_2 Q_2, P \rangle^* = \big( A_1 \langle Q_1, P \rangle + A_2 \langle Q_2, P \rangle \big)^* = \langle Q_1, P \rangle^* A_1^* + \langle Q_2, P \rangle^* A_2^* = \langle P, Q_1 \rangle A_1^* + \langle P, Q_2 \rangle A_2^*.
\]

Lemma 2.23. A matrix inner product $\langle \cdot, \cdot \rangle$ is non-degenerate if and only if for all $n \geq 0$ and for all $v_0, \ldots, v_n \in \mathbb{C}^K$ the following condition holds:
\[
\sum_{i,j=0}^{n} v_i^* \langle x^i 1_K, x^j 1_K \rangle v_j = 0 \implies v_0 = \ldots = v_n = 0. \tag{2.24}
\]
Note that the reverse implication in (2.24) is trivial.

Proof. Let $P$ be a matrix polynomial, say $P(x) = \sum_{i=0}^{n} A_i x^i$. Then
\[
\langle P, P \rangle = \sum_{i,j=0}^{n} A_i \langle x^i 1_K, x^j 1_K \rangle A_j^*.
\]
Suppose that $\langle \cdot, \cdot \rangle$ is non-degenerate and let $v_0, \ldots, v_n \in \mathbb{C}^K$ satisfy
\[
\sum_{i,j=0}^{n} v_i^* \langle x^i 1_K, x^j 1_K \rangle v_j = 0.
\]
Now let $A_i$ be the matrix whose zeroth row is equal to $v_i^*$, while all other rows are zero. Then the $(k,l)$ entry of $A_i \langle x^i 1_K, x^j 1_K \rangle A_j^*$ is zero, unless $k = l = 0$, in which case the entry equals $v_i^* \langle x^i 1_K, x^j 1_K \rangle v_j$. Taking the sum over all $0 \leq i, j \leq n$, it follows that $\langle P, P \rangle = \theta$, and thus $P = \theta$ by non-degeneracy of the inner product. But then $A_i = \theta$ for every $i$, from which it follows that $v_i = 0$. This proves the first implication.

Conversely assume that (2.24) holds. Moreover let $P$ be a matrix polynomial that satisfies $\langle P, P \rangle = \theta$, say $P(x) = \sum_{i=0}^{n} A_i x^i$. Note that
\[
0 = v^* \langle P, P \rangle v = \sum_{i,j=0}^{n} (A_i^* v)^* \langle x^i 1_K, x^j 1_K \rangle (A_j^* v)
\]
for any $v \in \mathbb{C}^K$, so that $A_i^* v = 0$ for every $0 \leq i \leq n$ by assumption. Since $v$ was chosen arbitrarily, we have $A_i = \theta$ for all $i$. But then $P = \theta$, hence the inner product is non-degenerate.

Having introduced the concept of matrix inner products, we are now able to give a specific inner product with respect to a given matrix measure $\mu \in M_K$.

Definition 2.24. Let $\mu \in M_K$. Then we define the matrix inner product with respect to $\mu$ by
\[
\langle P, Q \rangle_\mu := \int_{\mathbb{R}} P(x)\, d\mu(x)\, Q^*(x) = \int_{\mathbb{R}} P(x) W(x) Q^*(x)\, d\tau_\mu(x). \tag{2.25}
\]
We will often just write $\langle \cdot, \cdot \rangle$ for the above inner product if it is clear with respect to which measure $\mu$ the inner product has to be taken.

Definition 2.25. A matrix measure $\mu$ is called non-degenerate if $\langle \cdot, \cdot \rangle_\mu$ is non-degenerate.

Corollary 2.26. A positive matrix measure $\mu$ is non-degenerate if and only if for all $n \geq 0$ and for all $v_0, \ldots, v_n \in \mathbb{C}^K$ the following condition holds:
\[
\sum_{i,j=0}^{n} v_i^* S_{i+j} v_j = 0 \implies v_0 = \ldots = v_n = 0. \tag{2.26}
\]

Proof. This is an immediate corollary to Lemma 2.23, since $\langle x^i 1_K, x^j 1_K \rangle_\mu = \int_{\mathbb{R}} x^{i+j}\, d\mu(x) = S_{i+j}$.
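The matrix inner product $\langle P, Q \rangle_\mu = \int P\, d\mu\, Q^*$ defined above can likewise be implemented for a discrete matrix measure. The code below is our own sketch (atoms, weights and polynomial coefficients are arbitrary choices of ours); it checks hermitian symmetry (property (i)) and positivity (property (iii)) on an example, with real atoms so that $Q^*(x) = Q(x)^*$.

```python
import numpy as np

# Discrete positive matrix measure: two real atoms with 2x2 positive definite weights.
xs = [0.0, 1.5]
As = [np.array([[1.0, 0.2], [0.2, 0.8]]), np.array([[0.5, 0.0], [0.0, 1.2]])]

def polyval(coeffs, x):
    """Evaluate P(x) = sum_k A_k x^k for matrix coefficients A_k."""
    return sum(A * (x ** k) for k, A in enumerate(coeffs))

def inner(P, Q):
    """<P, Q>_mu = integral of P(x) dmu(x) Q*(x); here a finite sum over the atoms."""
    return sum(polyval(P, x) @ A @ polyval(Q, x).conj().T for x, A in zip(xs, As))

P = [np.array([[1.0, 1j], [0.0, 2.0]]), np.eye(2)]   # P(x) = A_0 + A_1 x
Q = [np.array([[0.0, 1.0], [1.0, 0.0]])]             # Q(x) = A_0 (constant)

G = inner(P, Q)
assert np.allclose(G, inner(Q, P).conj().T)                  # property (i)
assert np.linalg.eigvalsh(inner(P, P)).min() >= -1e-12       # property (iii)
```

With only two atoms this inner product is degenerate on polynomials of high degree (any vector polynomial vanishing at both atoms has norm zero), which is exactly the situation Lemma 2.23 and Corollary 2.26 describe.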


More information

AN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES

AN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES AN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES JOEL A. TROPP Abstract. We present an elementary proof that the spectral radius of a matrix A may be obtained using the formula ρ(a) lim

More information

Chapter 8 Integral Operators

Chapter 8 Integral Operators Chapter 8 Integral Operators In our development of metrics, norms, inner products, and operator theory in Chapters 1 7 we only tangentially considered topics that involved the use of Lebesgue measure,

More information

Topological vectorspaces

Topological vectorspaces (July 25, 2011) Topological vectorspaces Paul Garrett garrett@math.umn.edu http://www.math.umn.edu/ garrett/ Natural non-fréchet spaces Topological vector spaces Quotients and linear maps More topological

More information

Chapter 6 Inner product spaces

Chapter 6 Inner product spaces Chapter 6 Inner product spaces 6.1 Inner products and norms Definition 1 Let V be a vector space over F. An inner product on V is a function, : V V F such that the following conditions hold. x+z,y = x,y

More information

MATH 583A REVIEW SESSION #1

MATH 583A REVIEW SESSION #1 MATH 583A REVIEW SESSION #1 BOJAN DURICKOVIC 1. Vector Spaces Very quick review of the basic linear algebra concepts (see any linear algebra textbook): (finite dimensional) vector space (or linear space),

More information

Inequalities in Hilbert Spaces

Inequalities in Hilbert Spaces Inequalities in Hilbert Spaces Jan Wigestrand Master of Science in Mathematics Submission date: March 8 Supervisor: Eugenia Malinnikova, MATH Norwegian University of Science and Technology Department of

More information

Functional Analysis Exercise Class

Functional Analysis Exercise Class Functional Analysis Exercise Class Week: December 4 8 Deadline to hand in the homework: your exercise class on week January 5. Exercises with solutions ) Let H, K be Hilbert spaces, and A : H K be a linear

More information

Mathematical Methods wk 2: Linear Operators

Mathematical Methods wk 2: Linear Operators John Magorrian, magog@thphysoxacuk These are work-in-progress notes for the second-year course on mathematical methods The most up-to-date version is available from http://www-thphysphysicsoxacuk/people/johnmagorrian/mm

More information

j=1 x j p, if 1 p <, x i ξ : x i < ξ} 0 as p.

j=1 x j p, if 1 p <, x i ξ : x i < ξ} 0 as p. LINEAR ALGEBRA Fall 203 The final exam Almost all of the problems solved Exercise Let (V, ) be a normed vector space. Prove x y x y for all x, y V. Everybody knows how to do this! Exercise 2 If V is a

More information

1 Math 241A-B Homework Problem List for F2015 and W2016

1 Math 241A-B Homework Problem List for F2015 and W2016 1 Math 241A-B Homework Problem List for F2015 W2016 1.1 Homework 1. Due Wednesday, October 7, 2015 Notation 1.1 Let U be any set, g be a positive function on U, Y be a normed space. For any f : U Y let

More information

B. Appendix B. Topological vector spaces

B. Appendix B. Topological vector spaces B.1 B. Appendix B. Topological vector spaces B.1. Fréchet spaces. In this appendix we go through the definition of Fréchet spaces and their inductive limits, such as they are used for definitions of function

More information

EE/ACM Applications of Convex Optimization in Signal Processing and Communications Lecture 2

EE/ACM Applications of Convex Optimization in Signal Processing and Communications Lecture 2 EE/ACM 150 - Applications of Convex Optimization in Signal Processing and Communications Lecture 2 Andre Tkacenko Signal Processing Research Group Jet Propulsion Laboratory April 5, 2012 Andre Tkacenko

More information

NORMS ON SPACE OF MATRICES

NORMS ON SPACE OF MATRICES NORMS ON SPACE OF MATRICES. Operator Norms on Space of linear maps Let A be an n n real matrix and x 0 be a vector in R n. We would like to use the Picard iteration method to solve for the following system

More information

Math Linear Algebra II. 1. Inner Products and Norms

Math Linear Algebra II. 1. Inner Products and Norms Math 342 - Linear Algebra II Notes 1. Inner Products and Norms One knows from a basic introduction to vectors in R n Math 254 at OSU) that the length of a vector x = x 1 x 2... x n ) T R n, denoted x,

More information

MAT Linear Algebra Collection of sample exams

MAT Linear Algebra Collection of sample exams MAT 342 - Linear Algebra Collection of sample exams A-x. (0 pts Give the precise definition of the row echelon form. 2. ( 0 pts After performing row reductions on the augmented matrix for a certain system

More information

Recall: Dot product on R 2 : u v = (u 1, u 2 ) (v 1, v 2 ) = u 1 v 1 + u 2 v 2, u u = u u 2 2 = u 2. Geometric Meaning:

Recall: Dot product on R 2 : u v = (u 1, u 2 ) (v 1, v 2 ) = u 1 v 1 + u 2 v 2, u u = u u 2 2 = u 2. Geometric Meaning: Recall: Dot product on R 2 : u v = (u 1, u 2 ) (v 1, v 2 ) = u 1 v 1 + u 2 v 2, u u = u 2 1 + u 2 2 = u 2. Geometric Meaning: u v = u v cos θ. u θ v 1 Reason: The opposite side is given by u v. u v 2 =

More information

Review of Linear Algebra Definitions, Change of Basis, Trace, Spectral Theorem

Review of Linear Algebra Definitions, Change of Basis, Trace, Spectral Theorem Review of Linear Algebra Definitions, Change of Basis, Trace, Spectral Theorem Steven J. Miller June 19, 2004 Abstract Matrices can be thought of as rectangular (often square) arrays of numbers, or as

More information

1 Functional Analysis

1 Functional Analysis 1 Functional Analysis 1 1.1 Banach spaces Remark 1.1. In classical mechanics, the state of some physical system is characterized as a point x in phase space (generalized position and momentum coordinates).

More information

Your first day at work MATH 806 (Fall 2015)

Your first day at work MATH 806 (Fall 2015) Your first day at work MATH 806 (Fall 2015) 1. Let X be a set (with no particular algebraic structure). A function d : X X R is called a metric on X (and then X is called a metric space) when d satisfies

More information

Inner product spaces. Layers of structure:

Inner product spaces. Layers of structure: Inner product spaces Layers of structure: vector space normed linear space inner product space The abstract definition of an inner product, which we will see very shortly, is simple (and by itself is pretty

More information

Eigenvalues and Eigenfunctions of the Laplacian

Eigenvalues and Eigenfunctions of the Laplacian The Waterloo Mathematics Review 23 Eigenvalues and Eigenfunctions of the Laplacian Mihai Nica University of Waterloo mcnica@uwaterloo.ca Abstract: The problem of determining the eigenvalues and eigenvectors

More information

October 25, 2013 INNER PRODUCT SPACES

October 25, 2013 INNER PRODUCT SPACES October 25, 2013 INNER PRODUCT SPACES RODICA D. COSTIN Contents 1. Inner product 2 1.1. Inner product 2 1.2. Inner product spaces 4 2. Orthogonal bases 5 2.1. Existence of an orthogonal basis 7 2.2. Orthogonal

More information

Eigenvectors and Hermitian Operators

Eigenvectors and Hermitian Operators 7 71 Eigenvalues and Eigenvectors Basic Definitions Let L be a linear operator on some given vector space V A scalar λ and a nonzero vector v are referred to, respectively, as an eigenvalue and corresponding

More information

Linear Algebra (part 1) : Vector Spaces (by Evan Dummit, 2017, v. 1.07) 1.1 The Formal Denition of a Vector Space

Linear Algebra (part 1) : Vector Spaces (by Evan Dummit, 2017, v. 1.07) 1.1 The Formal Denition of a Vector Space Linear Algebra (part 1) : Vector Spaces (by Evan Dummit, 2017, v. 1.07) Contents 1 Vector Spaces 1 1.1 The Formal Denition of a Vector Space.................................. 1 1.2 Subspaces...................................................

More information

David Hilbert was old and partly deaf in the nineteen thirties. Yet being a diligent

David Hilbert was old and partly deaf in the nineteen thirties. Yet being a diligent Chapter 5 ddddd dddddd dddddddd ddddddd dddddddd ddddddd Hilbert Space The Euclidean norm is special among all norms defined in R n for being induced by the Euclidean inner product (the dot product). A

More information

Inner Product Spaces An inner product on a complex linear space X is a function x y from X X C such that. (1) (2) (3) x x > 0 for x 0.

Inner Product Spaces An inner product on a complex linear space X is a function x y from X X C such that. (1) (2) (3) x x > 0 for x 0. Inner Product Spaces An inner product on a complex linear space X is a function x y from X X C such that (1) () () (4) x 1 + x y = x 1 y + x y y x = x y x αy = α x y x x > 0 for x 0 Consequently, (5) (6)

More information

Integral Jensen inequality

Integral Jensen inequality Integral Jensen inequality Let us consider a convex set R d, and a convex function f : (, + ]. For any x,..., x n and λ,..., λ n with n λ i =, we have () f( n λ ix i ) n λ if(x i ). For a R d, let δ a

More information

Hilbert spaces. 1. Cauchy-Schwarz-Bunyakowsky inequality

Hilbert spaces. 1. Cauchy-Schwarz-Bunyakowsky inequality (October 29, 2016) Hilbert spaces Paul Garrett garrett@math.umn.edu http://www.math.umn.edu/ garrett/ [This document is http://www.math.umn.edu/ garrett/m/fun/notes 2016-17/03 hsp.pdf] Hilbert spaces are

More information

Spectral Theory, with an Introduction to Operator Means. William L. Green

Spectral Theory, with an Introduction to Operator Means. William L. Green Spectral Theory, with an Introduction to Operator Means William L. Green January 30, 2008 Contents Introduction............................... 1 Hilbert Space.............................. 4 Linear Maps

More information

APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2.

APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2. APPENDIX A Background Mathematics A. Linear Algebra A.. Vector algebra Let x denote the n-dimensional column vector with components 0 x x 2 B C @. A x n Definition 6 (scalar product). The scalar product

More information

Measure Theory on Topological Spaces. Course: Prof. Tony Dorlas 2010 Typset: Cathal Ormond

Measure Theory on Topological Spaces. Course: Prof. Tony Dorlas 2010 Typset: Cathal Ormond Measure Theory on Topological Spaces Course: Prof. Tony Dorlas 2010 Typset: Cathal Ormond May 22, 2011 Contents 1 Introduction 2 1.1 The Riemann Integral........................................ 2 1.2 Measurable..............................................

More information

Math 108b: Notes on the Spectral Theorem

Math 108b: Notes on the Spectral Theorem Math 108b: Notes on the Spectral Theorem From section 6.3, we know that every linear operator T on a finite dimensional inner product space V has an adjoint. (T is defined as the unique linear operator

More information

Measures. Chapter Some prerequisites. 1.2 Introduction

Measures. Chapter Some prerequisites. 1.2 Introduction Lecture notes Course Analysis for PhD students Uppsala University, Spring 2018 Rostyslav Kozhan Chapter 1 Measures 1.1 Some prerequisites I will follow closely the textbook Real analysis: Modern Techniques

More information

Mathematical Methods wk 1: Vectors

Mathematical Methods wk 1: Vectors Mathematical Methods wk : Vectors John Magorrian, magog@thphysoxacuk These are work-in-progress notes for the second-year course on mathematical methods The most up-to-date version is available from http://www-thphysphysicsoxacuk/people/johnmagorrian/mm

More information

Mathematical Methods wk 1: Vectors

Mathematical Methods wk 1: Vectors Mathematical Methods wk : Vectors John Magorrian, magog@thphysoxacuk These are work-in-progress notes for the second-year course on mathematical methods The most up-to-date version is available from http://www-thphysphysicsoxacuk/people/johnmagorrian/mm

More information

MATHS 730 FC Lecture Notes March 5, Introduction

MATHS 730 FC Lecture Notes March 5, Introduction 1 INTRODUCTION MATHS 730 FC Lecture Notes March 5, 2014 1 Introduction Definition. If A, B are sets and there exists a bijection A B, they have the same cardinality, which we write as A, #A. If there exists

More information

Ir O D = D = ( ) Section 2.6 Example 1. (Bottom of page 119) dim(v ) = dim(l(v, W )) = dim(v ) dim(f ) = dim(v )

Ir O D = D = ( ) Section 2.6 Example 1. (Bottom of page 119) dim(v ) = dim(l(v, W )) = dim(v ) dim(f ) = dim(v ) Section 3.2 Theorem 3.6. Let A be an m n matrix of rank r. Then r m, r n, and, by means of a finite number of elementary row and column operations, A can be transformed into the matrix ( ) Ir O D = 1 O

More information

C.6 Adjoints for Operators on Hilbert Spaces

C.6 Adjoints for Operators on Hilbert Spaces C.6 Adjoints for Operators on Hilbert Spaces 317 Additional Problems C.11. Let E R be measurable. Given 1 p and a measurable weight function w: E (0, ), the weighted L p space L p s (R) consists of all

More information

Diagonalization by a unitary similarity transformation

Diagonalization by a unitary similarity transformation Physics 116A Winter 2011 Diagonalization by a unitary similarity transformation In these notes, we will always assume that the vector space V is a complex n-dimensional space 1 Introduction A semi-simple

More information

Problem 1A. Suppose that f is a continuous real function on [0, 1]. Prove that

Problem 1A. Suppose that f is a continuous real function on [0, 1]. Prove that Problem 1A. Suppose that f is a continuous real function on [, 1]. Prove that lim α α + x α 1 f(x)dx = f(). Solution: This is obvious for f a constant, so by subtracting f() from both sides we can assume

More information

2. Review of Linear Algebra

2. Review of Linear Algebra 2. Review of Linear Algebra ECE 83, Spring 217 In this course we will represent signals as vectors and operators (e.g., filters, transforms, etc) as matrices. This lecture reviews basic concepts from linear

More information

Chapter 4 Euclid Space

Chapter 4 Euclid Space Chapter 4 Euclid Space Inner Product Spaces Definition.. Let V be a real vector space over IR. A real inner product on V is a real valued function on V V, denoted by (, ), which satisfies () (x, y) = (y,

More information

Notions such as convergent sequence and Cauchy sequence make sense for any metric space. Convergent Sequences are Cauchy

Notions such as convergent sequence and Cauchy sequence make sense for any metric space. Convergent Sequences are Cauchy Banach Spaces These notes provide an introduction to Banach spaces, which are complete normed vector spaces. For the purposes of these notes, all vector spaces are assumed to be over the real numbers.

More information

Newtonian Mechanics. Chapter Classical space-time

Newtonian Mechanics. Chapter Classical space-time Chapter 1 Newtonian Mechanics In these notes classical mechanics will be viewed as a mathematical model for the description of physical systems consisting of a certain (generally finite) number of particles

More information

CHAPTER 8. Smoothing operators

CHAPTER 8. Smoothing operators CHAPTER 8 Smoothing operators Lecture 8: 13 October, 2005 Now I am heading towards the Atiyah-Singer index theorem. Most of the results proved in the process untimately reduce to properties of smoothing

More information

Further Mathematical Methods (Linear Algebra)

Further Mathematical Methods (Linear Algebra) Further Mathematical Methods (Linear Algebra) Solutions For The Examination Question (a) To be an inner product on the real vector space V a function x y which maps vectors x y V to R must be such that:

More information

0.1 Rational Canonical Forms

0.1 Rational Canonical Forms We have already seen that it is useful and simpler to study linear systems using matrices. But matrices are themselves cumbersome, as they are stuffed with many entries, and it turns out that it s best

More information

Problems in Linear Algebra and Representation Theory

Problems in Linear Algebra and Representation Theory Problems in Linear Algebra and Representation Theory (Most of these were provided by Victor Ginzburg) The problems appearing below have varying level of difficulty. They are not listed in any specific

More information

Exercise Solutions to Functional Analysis

Exercise Solutions to Functional Analysis Exercise Solutions to Functional Analysis Note: References refer to M. Schechter, Principles of Functional Analysis Exersize that. Let φ,..., φ n be an orthonormal set in a Hilbert space H. Show n f n

More information

Review and problem list for Applied Math I

Review and problem list for Applied Math I Review and problem list for Applied Math I (This is a first version of a serious review sheet; it may contain errors and it certainly omits a number of topic which were covered in the course. Let me know

More information