Fredholm Determinants


1 Fredholm Determinants Estelle Basor American Institute of Mathematics Palo Alto, California The 6th TIMS-OCAMI-WASEDA Joint International Workshop on Integrable Systems and Mathematical Physics March 2014

2 Preliminaries The theory of Fredholm determinants for integral operators has roots in the theory of ordinary determinants for the familiar $n \times n$ matrices and the systems of equations associated to them. So let us begin with a little history of the determinants. We suppose that K(x, y) is continuous, and we define the operator T on $L^2(a, b)$ which takes the function f to $T(f)(x) = \int_a^b K(x, y)\, f(y)\, dy$. The function K(x, y) is called the kernel of T.

3 Let us also for a moment suppose we are trying to find a function f such that $g = f - \lambda T(f)$ for some given g. One thing we might do is discretize this problem: break the interval into small segments and try to approximate a solution. Let $\delta = \frac{b-a}{N}$ and choose points $x_p$ in a partition of [a, b] of mesh size δ. Then, using finite sums as an approximation, we can write
$$g(x_p) \approx f(x_p) - \lambda\delta \sum_{q=1}^{N} K(x_p, x_q)\, f(x_q)$$
for p = 1, 2, ..., N.

4 This system will have a solution if λ is not a zero of the determinant $D_N(\lambda)$ defined by:
$$D_N(\lambda) = \det\begin{pmatrix}
1-\lambda\delta K(x_1,x_1) & -\lambda\delta K(x_1,x_2) & \cdots & -\lambda\delta K(x_1,x_N)\\
-\lambda\delta K(x_2,x_1) & 1-\lambda\delta K(x_2,x_2) & \cdots & -\lambda\delta K(x_2,x_N)\\
\vdots & \vdots & \ddots & \vdots\\
-\lambda\delta K(x_N,x_1) & -\lambda\delta K(x_N,x_2) & \cdots & 1-\lambda\delta K(x_N,x_N)
\end{pmatrix}$$

5 Set
$$K\begin{pmatrix} x_1, \dots, x_N \\ y_1, \dots, y_N \end{pmatrix} = \det\big(K(x_p, y_q)\big), \qquad p, q = 1, \dots, N.$$
Then, grouping terms by powers of λ, we have
$$D_N(\lambda) = 1 - \lambda\delta \sum_p K\begin{pmatrix} x_p \\ x_p \end{pmatrix} + \frac{(\lambda\delta)^2}{2!} \sum_{p_1, p_2} K\begin{pmatrix} x_{p_1}, x_{p_2} \\ x_{p_1}, x_{p_2} \end{pmatrix} - \cdots$$
Now let δ → 0 and we formally have that D(λ) is
$$1 - \lambda \int K\begin{pmatrix} x_1 \\ x_1 \end{pmatrix} dx_1 + \frac{\lambda^2}{2!} \iint K\begin{pmatrix} x_1, x_2 \\ x_1, x_2 \end{pmatrix} dx_1\, dx_2 - \cdots$$

6 What about convergence? To show that this series defines an entire function (at least for a finite interval) we let M be a bound for |K(x, y)|. Note that if a matrix A has columns $A_1, \dots, A_n$, then $|\det(A)| \le \|A_1\| \cdots \|A_n\|$ (Hadamard's inequality). This means that each integrand $\det\big(K(x_i, x_j)\big)$ has absolute value at most $M^n n^{n/2}$, and that each term of the series is at most
$$\frac{|\lambda|^n (b-a)^n M^n n^{n/2}}{n!},$$
and thus the series converges for all λ.

7 Thus D(λ) is an entire function, and it can be shown, using the very same techniques just employed, that if λ is a zero of D(λ) then $1/\lambda$ is an eigenvalue of the operator T. The set of non-zero eigenvalues is discrete and either constitutes a finite set or the eigenvalues have a limit point of zero. Fredholm did more than this in his original paper. He produced the kernel of the inverse of $I - \lambda T$.

8 We suppose D(λ) is not zero, and we define D(x, y, λ) by
$$D(x, y, \lambda) = \sum_{n \ge 0} \frac{(-\lambda)^n}{n!} \int\cdots\int K\begin{pmatrix} x, x_1, \dots, x_n \\ y, x_1, \dots, x_n \end{pmatrix} dx_1 \cdots dx_n.$$
If S is the operator with kernel
$$\frac{D(x, y, \lambda)}{D(\lambda)},$$
then $I - \lambda T$ and $I + \lambda S$ are inverses of one another.

9 In addition to the determinant, we can also define the trace of the integral operator as
$$\int_a^b K(x, x)\, dx.$$
This is motivated by the fact that if $K(x, y) = \sum_i \lambda_i\, \varphi_i(x)\, \overline{\varphi_i(y)}$ for an orthonormal basis $\{\varphi_i\}$, then the trace is the sum of the eigenvalues, as expected.
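As a quick hedged illustration (my addition, with an assumed smooth kernel), the matrix trace of the discretized operator approximates $\int_a^b K(x, x)\,dx$, and for any matrix the trace equals the sum of the eigenvalues:

```python
import numpy as np

# Assumed kernel K(x, y) = cos(x - y) on [0, pi]; here
# int_0^pi K(x, x) dx = int_0^pi cos(0) dx = pi.
a, b, N = 0.0, np.pi, 1000
delta = (b - a) / N
x = a + delta * (np.arange(N) + 0.5)
K = np.cos(np.subtract.outer(x, x))
T = delta * K                          # Nystrom approximation of the operator
tr = np.trace(T)                       # ≈ pi
eig_sum = np.sum(np.linalg.eigvals(T)).real
print(tr, eig_sum)
```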

10 Other basic properties of T: (a) If $\iint |K(x, y)|^2\, dx\, dy < \infty$, then T is a bounded operator on $L^2$. (b) If the kernel of $T_1$ is $K_1(x, y)$ and the kernel of $T_2$ is $K_2(x, y)$, then the kernel of $T_1 T_2$ is given by $\int_a^b K_1(x, z)\, K_2(z, y)\, dz$. (c) If T has kernel K(x, y), then $T^*$ has kernel $\overline{K(y, x)}$.

11 Trace-class operators There is also an abstract definition of an operator determinant, defined on a general Hilbert space for operators of the form I + T where T is a trace class operator, so next we turn to these.

12 We say that T is trace class if
$$\|T\|_1 = \sum_{n=1}^\infty \big\langle (T^* T)^{1/2} e_n, e_n \big\rangle < \infty$$
for any orthonormal basis $\{e_n\}$ of our Hilbert space. It is not always convenient to check whether or not an operator is trace class from the definition.

13 It is fairly straightforward to check whether an operator is Hilbert-Schmidt, and it is well known that a product of two Hilbert-Schmidt operators is trace class. The Hilbert-Schmidt class is the set of operators S such that the sum
$$\sum_{i,j} \big|\langle S e_i, e_j \rangle\big|^2$$
is finite for some choice of orthonormal basis. If it is, then the above sum is independent of the choice of basis, and its square root is called the Hilbert-Schmidt norm of the operator, denoted by $\|S\|_2$.

14 Properties of trace class operators: (a) Trace class operators form an ideal in the set of all bounded operators and are closed in the topology defined by the trace norm. (b) Hilbert-Schmidt operators likewise form an ideal in the set of all bounded operators and are closed in the topology defined by the Hilbert-Schmidt norm. (c) The product of two Hilbert-Schmidt operators is trace class.

15 (d) If T is trace class, then T is a compact operator. (e) If T is trace class, then $\sum |\lambda_i| < \infty$, where the $\lambda_i$ are the eigenvalues of T. (f) Hence the product $\prod (1 + \lambda_i)$ is always defined and finite, and we denote it by det(I + T). (g) If T is trace class, then $\det P_\alpha (I + T) P_\alpha \to \det(I + T)$ for orthogonal projections $P_\alpha$ that tend strongly (pointwise) to the identity. (For the first determinant we think of $P_\alpha (I + T) P_\alpha$ as the operator defined on the image of $P_\alpha$.)

16 (h) If $A_n \to A$, $B_n \to B$ strongly (pointwise in the Hilbert space) and if T is trace class, then $A_n T B_n \to A T B$ in the trace norm. (i) The functions defined by $\operatorname{tr} T = \sum \lambda_i$ and det(I + T) are continuous on the set of trace class operators with respect to the trace norm. (j) If $T_1 T_2$ and $T_2 T_1$ are trace class, then $\operatorname{tr}(T_1 T_2) = \operatorname{tr}(T_2 T_1)$ and $\det(I + T_1 T_2) = \det(I + T_2 T_1)$.

17 If T is trace class then we may define $\det(I - \lambda T)$ for all complex λ. So the natural question is: if T is an integral operator with kernel K(x, y), does this operator determinant equal the Fredholm series D(λ)? First we should verify that the integral operator is trace class, or even Hilbert-Schmidt. This is not always the case and requires an additional assumption.

18 It is not hard to check that if K(x, y) is continuous on $[a, b] \times [a, b]$, then T is Hilbert-Schmidt and
$$\|T\|_2^2 = \int_a^b \!\! \int_a^b |K(x, y)|^2\, dx\, dy.$$
The operator T is trace class if, in addition, $\partial K / \partial y$ exists and has finite Hilbert-Schmidt norm, and then
$$\|T\|_1 \le \|T\|_2 + (b - a)\, \|\partial K / \partial y\|_2.$$
If the above condition holds then $\det(I - \lambda T) = D(\lambda)$, since they are both entire functions and agree on a countable set, at all the points $1/\lambda_i$ corresponding to the eigenvalues of the integral operator.

19 There are other conditions which guarantee that an integral operator is trace class. And whenever that is the case, the same argument can be used to show the definitions of the determinants agree. The next few slides show why this is true for certain convolution operators.

20 The truncated Wiener-Hopf operator $W_\alpha(\sigma)$ is defined by
$$f(x) \mapsto g(x) = f(x) + \int_0^\alpha k(x - y)\, f(y)\, dy,$$
where k is given by
$$k(x) = \frac{1}{2\pi} \int (\sigma(\xi) - 1)\, e^{ix\xi}\, d\xi.$$
The symbol σ is defined on the real line, and it is assumed that $\sigma - 1 \in L^1(\mathbb{R})$. If $\alpha = \infty$, then $W_\alpha(\sigma)$ is the usual Wiener-Hopf operator and will be denoted by W(σ).

21 This last condition assures us that $W_\alpha(\sigma) - I$ is trace class. To see this, think of ξ as fixed and
$$e^{i(x-y)\xi}(\sigma(\xi) - 1) = e^{ix\xi}\, e^{-iy\xi}\, (\sigma(\xi) - 1)$$
as the kernel of a rank one operator. The kernel in question is an $L^1$ limit of sums of such kernels, and since trace class operators are closed in the trace norm, the operator $W_\alpha(\sigma) - I$ is trace class.

22 Properties of W(σ): Suppose $\sigma_+ - 1 \in H^\infty$ and $\sigma_- - 1 \in \overline{H^\infty}$. (Recall $H^\infty$ is the set of all φ in $L^\infty$ such that the Fourier transform of φ vanishes on the negative real axis.) Then
$$W(\sigma_-)\, W(\sigma)\, W(\sigma_+) = W(\sigma_- \sigma \sigma_+).$$
In particular, if $\sigma_+^{-1} - 1$ is also in $H^\infty$, then $W(\sigma_+^{-1})\, W(\sigma_+) = I$. In general, for appropriate f,
$$f(W(\sigma_+)) = W(f(\sigma_+)).$$
The analogue holds for $\sigma_-$.

23 The other important property of the operator W(σ) is that
$$W(\sigma)\, W(\varphi) = W(\sigma\varphi) - H(\sigma)\, H(\tilde\varphi),$$
where H(σ) has kernel $(\sigma - 1)_{x+y}$, $H(\tilde\varphi)$ has kernel $(\varphi - 1)_{-x-y}$, and where $\psi_x$ denotes the Fourier transform of ψ at x. We also note that H(σ) is Hilbert-Schmidt on $L^2(0, \infty)$ if
$$\iint \big|(\sigma - 1)_{x+y}\big|^2\, dx\, dy < \infty, \quad\text{or}\quad \int |x|\, \big|(\sigma - 1)_x\big|^2\, dx < \infty.$$

24 Our first task will be to compute the Fredholm determinant $\det W_\alpha(\sigma)$ exactly, and then from the exact form, compute the asymptotics. We will require that not only $\sigma - 1$ be in $L^1$, but that k is as well, and that
$$\int (1 + |x|)\, |k(x)|^2\, dx < \infty.$$
Then if σ is bounded away from zero and has index zero, we can factor $\sigma = \sigma_- \sigma_+$ where $\sigma_+ - 1$ and $\sigma_- - 1$ are in $H^\infty$ and $\overline{H^\infty}$, respectively.

25 The conditions on k (or equivalently σ) are closed Banach algebra conditions, and thus if σ has a continuous logarithm, then the logarithm also satisfies these conditions, as does any $H^2$ projection of the logarithm. This means we can factor $\sigma = \sigma_- \sigma_+$ and the factors will satisfy the conditions as well. In the next slide $Q_\alpha$ is the orthogonal projection of $L^2(0, \infty)$ onto $L^2(\alpha, \infty)$.

26 Borodin-Okounkov-Geronimo-Case identity This is given by
$$\det(W_\alpha(\sigma)) = G(\sigma)^\alpha\, E(\sigma)\, \det(I - Q_\alpha L(\sigma) Q_\alpha)$$
where
$$G(\sigma) := \exp\left(\frac{1}{2\pi} \int \log \sigma(x)\, dx\right), \qquad E(\sigma) = \exp \int_0^\infty x\, s(x)\, s(-x)\, dx,$$
and $s(x) = (\log \sigma)_x$. L is a trace class operator acting on $L^2(\alpha, \infty)$ with kernel
$$L(x, y) = \int_0^\infty \left(\frac{\sigma_-}{\sigma_+} - 1\right)_{x+z} \left(\frac{\sigma_+}{\sigma_-} - 1\right)_{-z-y} dz.$$

27 We begin with the observation that $W_\alpha(\sigma) = P_\alpha W(\sigma) P_\alpha$, where $P_\alpha$ is the orthogonal projection of $L^2(0, \infty)$ onto $L^2(0, \alpha)$, and
$$P_\alpha W(\sigma_+) = P_\alpha W(\sigma_+) P_\alpha, \qquad W(\sigma_-) P_\alpha = P_\alpha W(\sigma_-) P_\alpha.$$
Then
$$P_\alpha W(\sigma) P_\alpha = P_\alpha W(\sigma_+)\, W(\sigma_+^{-1})\, W(\sigma)\, W(\sigma_-^{-1})\, W(\sigma_-)\, P_\alpha = P_\alpha W(\sigma_+) P_\alpha \cdot W(\sigma_+^{-1})\, W(\sigma)\, W(\sigma_-^{-1}) \cdot P_\alpha W(\sigma_-) P_\alpha.$$
We compute the determinant of each term above.

28 The determinants can be easily seen to be
$$\det P_\alpha W(\sigma_\pm) P_\alpha = \exp\left(\frac{\alpha}{2\pi} \int \log \sigma_\pm\, dx\right)$$
using the formula $\det(\exp A) = \exp(\operatorname{tr} A)$. Thus
$$\det P_\alpha W(\sigma_+) P_\alpha \cdot \det P_\alpha W(\sigma_-) P_\alpha = \exp\left(\frac{\alpha}{2\pi} \int (\log \sigma_+ + \log \sigma_-)\, dx\right) = \exp\left(\frac{\alpha}{2\pi} \int \log \sigma\, dx\right).$$

29 To address the middle term $W(\sigma_+^{-1})\, W(\sigma)\, W(\sigma_-^{-1})$, we denote it by A. From our conditions on σ this is an invertible operator. We now use Jacobi's identity
$$\det P A P = (\det A)\, (\det Q A^{-1} Q)$$
where P + Q = I, and we are left with the computation of two more terms.

30 The determinant
$$\det W(\sigma_+^{-1})\, W(\sigma)\, W(\sigma_-^{-1}) = \det W(\sigma_+^{-1})\, W(\sigma_-)\, W(\sigma_+)\, W(\sigma_-^{-1})$$
exists by our assumptions on σ. Since it is of the form $e^{-B} e^{C} e^{B} e^{-C}$ with the commutator of B and C trace class, we can write the determinant as $\exp(\operatorname{tr}[B, C])$. Computed, it becomes
$$\exp \int_0^\infty x\, s(x)\, s(-x)\, dx.$$
(Note: The determinant can also be expressed as $\det W(\sigma^{-1})\, W(\sigma)$, an answer that works for matrix-valued symbols as well.)

31 We still need to compute $\det Q A^{-1} Q$. Since
$$A = W(\sigma_+^{-1})\, W(\sigma_-)\, W(\sigma_+)\, W(\sigma_-^{-1}),$$
we have
$$A^{-1} = W(\sigma_-)\, W(\sigma_+^{-1})\, W(\sigma_-^{-1})\, W(\sigma_+) = W(\sigma_- \sigma_+^{-1})\, W(\sigma_-^{-1} \sigma_+),$$
and the result follows. Notice the kernel that corresponds to this last operator is
$$L(x, y) = \int_0^\infty \left(\frac{\sigma_-}{\sigma_+} - 1\right)_{x+z} \left(\frac{\sigma_+}{\sigma_-} - 1\right)_{-z-y} dz,$$
as desired.

32 To recap:
$$\det P_\alpha W(\sigma_-) P_\alpha \cdot \det P_\alpha W(\sigma_+) P_\alpha = G(\sigma)^\alpha$$
$$\det A = E(\sigma)$$
$$\det Q A^{-1} Q = \det(I - Q_\alpha L(\sigma) Q_\alpha)$$

33 Since $Q_\alpha$ tends strongly to zero, we have the asymptotics (the Kac formula):
$$\det(W_\alpha(\sigma)) \sim G(\sigma)^\alpha\, E(\sigma) = G(\sigma)^\alpha\, \det W(\sigma^{-1})\, W(\sigma).$$
I. Fredholm, Sur une classe d'équations fonctionnelles, Acta Math. 27 (1903). I.C. Gohberg, M.G. Krein, Introduction to the Theory of Linear Nonselfadjoint Operators, Vol. 18, Translations of Mathematical Monographs, Amer. Math. Soc., Rhode Island. A. Böttcher, B. Silbermann, Analysis of Toeplitz Operators, Akademie-Verlag, Berlin, 1990.

34 A. Borodin, A. Okounkov, Fredholm determinant formula for Toeplitz determinants Int. Eqns. Operator Th. 37 (2000), J. Geronimo, K. Case. Scattering theory and polynomials orthogonal on the unit circle, J. Math. Phys. 20 (1979), no. 2, E. Basor, H Widom, On a Toeplitz Identity of Borodin and Okounkov, Int. Eqns. Operator Th 37 (2002), no A. Böttcher, On the determinant formulas by Borodin, Okounkov, Baik, Deift, and Rains, Oper. Th.: Adv. and Appl. 135 (2002), E. Basor, Y. Chen, A note on Wiener-Hopf determinants and the Borodin-Okounkov identity, Int. Eqns. Operator Th., 45 (2003),
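The Kac formula above can be sanity-checked numerically. The sketch below is my own hedged illustration with an assumed kernel $k(x) = c\,e^{-|x|}$, for which $\sigma(\xi) = 1 + 2c/(1+\xi^2)$ and $\frac{1}{2\pi}\int \log\sigma\,d\xi = \sqrt{1+2c}-1$ in closed form; comparing two values of α cancels the constant E(σ) and isolates log G(σ) per unit length:

```python
import numpy as np

# Assumed kernel k(x) = c*exp(-|x|): sigma(xi) = (xi^2 + 1 + 2c)/(xi^2 + 1),
# and the exact value of (1/2pi) int log sigma dxi is sqrt(1 + 2c) - 1.
c = 0.5
logG = np.sqrt(1 + 2 * c) - 1

def logdet_W(alpha, h=0.01):
    # midpoint Nystrom discretization of I + k(x - y) on L^2(0, alpha)
    n = int(alpha / h)
    x = h * (np.arange(n) + 0.5)
    W = np.eye(n) + h * c * np.exp(-np.abs(np.subtract.outer(x, x)))
    return np.linalg.slogdet(W)[1]

# E(sigma) cancels in the ratio, leaving log G(sigma) per unit length
est = (logdet_W(12.0) - logdet_W(10.0)) / 2.0
print(est, logG)
```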

35 At times, the B-O-G-C formula can be used to find another term of the expansion. Here is one example, without the details. The kernel is
$$\frac{g\, \sin \pi(x - y)}{\pi\, \sinh g(x - y)}, \qquad g > 0.$$
Then as $\alpha \to \infty$,
$$\det(I - Q_\alpha L(\sigma) Q_\alpha) \sim 1 - C(g)\, e^{-2g\alpha(1 - \theta/\pi)},$$
where C(g) is a completely determined constant and $\cos\theta = e^{-\pi^2/g}$, $0 < \theta < \pi/2$.

36 The proof given to find the asymptotics of the determinants of the truncated Wiener-Hopf operators can be easily modified to apply to many other situations. One easy example is to perturb the Wiener-Hopf operator by a trace class integral operator T on $L^2(0, \infty)$:
$$\det P_\alpha (W(\sigma) + T) P_\alpha \sim G(\sigma)^\alpha\, \det\big(W(\sigma^{-1})(W(\sigma) + T)\big).$$
The other classic example is for finite Toeplitz matrices, and many of the ideas for the proof given above were first realized for those matrices. However, there are others, often considered because of problems in random matrix theory.

37 The operator $W_\alpha(\varphi)$ is unitarily equivalent to $P_\alpha F^{-1} M_\varphi F P_\alpha$, where F is the Fourier transform and $M_\varphi$ is multiplication by the function φ. For Laguerre ensembles in random matrix theory one might study an analogous operator, the Bessel operator $B_\alpha(\varphi)$ defined on $L^2(0, \alpha)$ by $P_\alpha H_\nu M_\varphi H_\nu P_\alpha$, where $H_\nu$ is the Hankel transform of order ν given by
$$H_\nu(f)(x) = \int_0^\infty \sqrt{tx}\, J_\nu(tx)\, f(t)\, dt,$$
and $J_\nu$ is the Bessel function of order ν.

38 In more familiar kernel form, we have the operator with kernel
$$\varphi(y)\, \frac{J_\nu(\sqrt{x})\, \sqrt{y}\, J_\nu'(\sqrt{y}) - \sqrt{x}\, J_\nu'(\sqrt{x})\, J_\nu(\sqrt{y})}{2(x - y)}.$$
Using much the same ideas for the proof as we did for the Wiener-Hopf case, one can show that if $\varphi = e^b - 1$, then
$$\det(I + B_\alpha(\varphi)) \sim \exp\left(\frac{\alpha}{2\pi} \int b(x)\, dx - \frac{\nu}{2}\, \hat b(0) + \frac{1}{2} \int_0^\infty x\, \big(\hat b(x)\big)^2\, dx\right).$$

39 If we scale in Gaussian Unitary Ensembles at the edge of the spectrum, linear statistics problems reduce to the study of the Airy operators: integral operators on $L^2(0, \alpha)$ with kernel
$$A_\alpha(f)(x, y) = f(x/\alpha) \int_0^\infty A(x + z)\, A(z + y)\, dz,$$
where A(x) is the Airy function. (This operator also has an equivalent definition in terms of multiplication and an Airy transform.)

40 The asymptotic formula reads
$$\det(I + A_\alpha(f)) \sim \exp\left(c_1 \alpha^{3/2} + c_2\right)$$
where
$$c_1 = \frac{1}{\pi} \int_0^\infty \sqrt{x}\, \log(1 + f(-x))\, dx,$$
$$c_2 = \frac{1}{2} \int_0^\infty x\, |G(x)|^2\, dx, \qquad G(x) = \frac{1}{2\pi} \int e^{-ixy}\, \log(1 + f(-y^2))\, dy.$$

41 All that we have done so far requires a nicely behaved symbol σ. So what happens for more general σ? For example, the familiar sine kernel
$$\frac{\sin \pi(x - y)}{\pi(x - y)}$$
has inverse Fourier transform that is a characteristic function of [−1, 1], and thus does not satisfy the conditions that are assumed for the Kac formula. To be able to handle this case, in at least some situations, we turn to Toeplitz determinants.

42 The analogue of what we have just done, in the Toeplitz case, is the strong Szegő limit theorem. It states that if the symbol ϕ defined on the unit circle has a sufficiently well-behaved logarithm, then the determinant of the Toeplitz matrix
$$T_n(\varphi) = (\varphi_{j-k})_{j,k=0,\dots,n-1}$$
has the asymptotic behavior
$$D_n(\varphi) = \det T_n(\varphi) \sim G(\varphi)^n\, E(\varphi) \qquad \text{as } n \to \infty,$$
where
$$G(\varphi) = e^{(\log\varphi)_0}, \qquad E(\varphi) = \exp\left(\sum_{k=1}^\infty k\, (\log\varphi)_k\, (\log\varphi)_{-k}\right).$$
Here subscripts denote Fourier coefficients.
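The strong Szegő limit theorem can be illustrated numerically. The sketch below (my own hedged example) uses the assumed symbol $\varphi(e^{i\theta}) = e^{a\cos\theta}$, whose logarithm has Fourier coefficients $(\log\varphi)_{\pm 1} = a/2$ and nothing else, so $G(\varphi) = 1$ and $E(\varphi) = e^{a^2/4}$:

```python
import numpy as np

# Assumed symbol phi = exp(a*cos(theta)): G(phi) = 1, E(phi) = exp(a^2/4),
# so det T_n(phi) -> exp(a^2/4) as n grows.
a = 1.0
M = 512                                    # quadrature points on the circle
theta = 2 * np.pi * np.arange(M) / M
phi = np.exp(a * np.cos(theta))
coeffs = np.fft.fft(phi) / M               # Fourier coefficients phi_j (mod M)

n = 30
T = np.array([[coeffs[(j - k) % M] for k in range(n)] for j in range(n)]).real
D_n = np.linalg.det(T)
print(D_n, np.exp(a**2 / 4))
```

For this entire symbol the convergence is extremely fast, so even modest n reproduces the Szegő constant essentially to machine precision.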

43 In 1968, Fisher and Hartwig raised a conjecture about a certain class of Toeplitz matrices with singular symbols. If
$$\varphi_{\gamma,\beta}(e^{i\theta}) = (2 - 2\cos\theta)^{(\gamma+\beta)/2}\, e^{i(\theta - \pi)(\gamma - \beta)/2}, \qquad 0 < \theta < 2\pi$$
(this symbol is said to have a pure Fisher-Hartwig singularity), then their symbols had the form
$$\psi(z) = \varphi(z) \prod_{j=1}^N \varphi_{\gamma_j,\beta_j}(z/z_j)$$
where φ satisfies the assumptions of Szegő's theorem and $z_1, \dots, z_N$ are distinct points on the unit circle.

44 They conjectured that for some range of the parameters the asymptotics had the form
$$\det T_n(\psi) \sim G(\psi)^n\, n^{\sum_j \gamma_j \beta_j}\, E(\varphi, \gamma_j, \beta_j, z_j)$$
where $E(\varphi, \gamma_j, \beta_j, z_j)$ is a constant (whose value they did not conjecture). The conjecture has a long history, and due to the work of many mathematicians the constant $E(\varphi, \gamma_j, \beta_j, z_j)$ was determined.

45 The conjecture has now been proved in great generality, provided that $\mathrm{Re}(\gamma_j \pm \beta_j) < 1$. The basic technique is to first show that the quotient $D_n(\varphi\psi)/\big(D_n(\varphi)\, D_n(\psi)\big)$ tends to a limit if the singularities of the symbols do not overlap (a localization theorem), and then to compute the determinant exactly for the pure singularity $\varphi_{\gamma,\beta}$, which fortunately can be done.

46 It is natural to ask what happens for non-smooth symbols in the Wiener-Hopf case as well. We take as the analogue of the pure Fisher-Hartwig singularity the symbol
$$\sigma_{\gamma,\beta}(\xi) = \left(\frac{\xi - 0i}{\xi - i}\right)^\gamma \left(\frac{\xi + 0i}{\xi + i}\right)^\beta.$$
(We specify the arguments of $\xi \pm 0i$ and $\xi \pm i$ to be zero or close to it when ξ is large and positive.) This has the behavior
$$\sigma_{\gamma,\beta}(\xi) \sim |\xi|^{\gamma+\beta}\, e^{\frac{1}{2} i\pi(\gamma-\beta)\, \mathrm{sgn}\,\xi}$$
as $\xi \to 0$.

47 The general symbol is then of the form
$$\sigma_0(\xi) \prod_r \sigma_{\gamma_r,\beta_r}(\xi - \xi_r)$$
where $\sigma_0$ is a symbol for which the strong Szegő theorem holds. For an attempted proof, it seems reasonable to try to do what was done in the Toeplitz case, that is, (1) devise the proper localization techniques and (2) then try to evaluate the determinants exactly in the case of the pure singularity.

48 While the first step has been accomplished, the second step, except in some special cases, has never been done. So something else is needed to overcome this obstacle. Now it turns out the key is to use an identity for our smooth symbols, the Borodin-Okounkov-Geronimo-Case identity. Note that for Toeplitz determinants the analogue of the identity follows just as before.

49 Before we show how these identities link the Toeplitz and Wiener-Hopf determinants, there is an extra complication: for Wiener-Hopf symbols of the form $\sigma_{\gamma,\beta}$ the Wiener-Hopf operator is not of the form I + T with T trace class, but only with T Hilbert-Schmidt. Thus we need a regularized determinant version of these identities. This determinant, for operators of the form I + T where T is Hilbert-Schmidt with eigenvalues $\lambda_i$, is defined by:
$$\det{}_2(I + T) = \prod_i (1 + \lambda_i)\, e^{-\lambda_i}.$$

50 This is quite easy to do, and the corresponding statements are: For the regularized determinant we have
$$\det{}_2 T_n(\varphi) = G_2(\varphi)^n\, E(\varphi)\, \det(I - K_n)$$
where
$$G_2(\varphi) = \exp\big((\log\varphi)_0 - \varphi_0 + 1\big).$$
For Toeplitz determinants this follows immediately, since
$$\det{}_2 A = \det A\; e^{-\operatorname{tr}(A - I)}$$
holds for any finite matrix A.

51 The Wiener-Hopf analogue is
$$\det{}_2 W_\alpha(\sigma) = G_2(\sigma)^\alpha\, E(\sigma)\, \det(I - Q_\alpha L Q_\alpha)$$
where
$$G_2(\sigma) = \exp\left(\frac{1}{2\pi} \int \big(\log \sigma(\zeta) - \sigma(\zeta) + 1\big)\, d\zeta\right).$$

52 We can state the main result: If $\mathrm{Re}(\gamma \pm \beta) < 1$, then
$$\det{}_2 W_\alpha(\sigma_{\gamma,\beta})\, /\, G_2(\sigma_{\gamma,\beta})^\alpha \sim \det{}_2 T_n(\varphi_{\gamma,\beta})\, /\, G_2(\varphi_{\gamma,\beta})^n$$
when $\alpha = 2n \to \infty$.

53 The asymptotics of $\det_2 T_n(\varphi_{\gamma,\beta})/G_2(\varphi_{\gamma,\beta})^n$ are well known and given by the formula
$$n^{\gamma\beta}\, \frac{G(1+\gamma)\, G(1+\beta)}{G(1+\gamma+\beta)}$$
where G(1 + z) is the Barnes G-function. G(1 + z) is an entire function satisfying $G(1+z) = \Gamma(z)\, G(z)$, with the infinite product representation
$$G(1+\lambda)\, G(1-\lambda) = e^{-(1+\gamma)\lambda^2} \prod_{n=1}^\infty \left(1 + \frac{\lambda^2}{n^2}\right)^n e^{-\lambda^2/n}$$
where γ here is Euler's constant.

54 Thus in the Wiener-Hopf case
$$\frac{\det{}_2 W_\alpha(\sigma_{\gamma,\beta})}{G_2(\sigma_{\gamma,\beta})^\alpha} \sim (\alpha/2)^{\gamma\beta}\, \frac{G(1+\gamma)\, G(1+\beta)}{G(1+\gamma+\beta)}.$$
When β = γ the Wiener-Hopf operators have a well-defined determinant, and thus we have as a corollary: If $\mathrm{Re}(\gamma) < 1/2$, then
$$\det W_\alpha(\sigma_{\gamma,\gamma}) \sim e^{\alpha\gamma}\, \det T_n(\varphi_{\gamma,\gamma})$$
when $\alpha = 2n \to \infty$.

55 Here is the idea of the proof: The function
$$\varphi_{\gamma,\beta} = (1 - z)^\gamma\, (1 - 1/z)^\beta, \qquad z = e^{i\theta}.$$
Define
$$\varphi_{r,\gamma,\beta} = (1 - rz)^\gamma\, (1 - r/z)^\beta, \qquad z = e^{i\theta}.$$
Then
$$D_n(\varphi_{\gamma,\beta}) = \lim_{r \to 1} D_n(\varphi_{r,\gamma,\beta}).$$
Apply the identity to the term $D_n(\varphi_{r,\gamma,\beta})$ and take the limit as $r \to 1$. Do the same for $W_\alpha(\sigma_{\gamma,\beta})$ and then compare the answers.

56 To get a complete answer one needs to paste together the asymptotics of the pieces and the quotient terms. This has been done in some cases, but not all. For certain piecewise continuous symbols here is the complete answer. Suppose that σ is bounded away from zero, piecewise $C^2$ with a finite number of jump discontinuities at the points $x_1, \dots, x_R$, has an appropriately defined argument which vanishes at $\pm\infty$, and assume that $\sigma' \in L^1$ and $(1 + x^2)\, \sigma'(x) \in L^2$.

57 Then
$$\det(W_\alpha(\sigma)) \sim G(\sigma)^\alpha\, \alpha^{\sum_r \lambda_r^2}\, E(\sigma) \prod_r g(\lambda_r),$$
where
$$G(\sigma) = \exp\left(\frac{1}{2\pi} \int \log \sigma(x)\, dx\right), \qquad \lambda_r = \frac{1}{2\pi} \log\big(\sigma(x_r+)/\sigma(x_r-)\big),$$
$$g(\lambda) = G(1+\lambda)\, G(1-\lambda), \qquad E(\sigma) = \exp \int_0^\infty \left(x\, (\log\sigma)_x\, (\log\sigma)_{-x} - \frac{1 - e^{-x}}{x} \sum_r \lambda_r^2\right) dx.$$

58 M. E. Fisher, R. E. Hartwig, Toeplitz determinants: some applications, theorems, and conjecture, Adv. Chem. Phys. 15 (1968) T. Ehrhardt, A status report on the asymptotic behavior of Toeplitz determinants with Fisher-Hartwig singularities, Oper. Theory Adv. Appl. 124, (2001), E. Basor, H. Widom, Wiener-Hopf determinants with Fisher-Hartwig symbols, Operator Th: Advances and App. 147 (2004) P. Deift, A. Its, I. Krasovsky, Asymptotics of Toeplitz, Hankel, and Toeplitz+Hankel determinants with Fisher-Hartwig singularities, Ann. of Math (2) 174 (2011), no. 2,

59 Some applications to random matrix theory In Random Matrix Theory (RMT) there are three important invariant ensembles. 1. Unitary Ensembles (UE), which are $n \times n$ Hermitian matrices together with a distribution that is invariant under unitary conjugation. In other words, if U is any unitary matrix, the measure of a set S of matrices is the same as that of $U S U^*$. 2. Orthogonal Ensembles (OE), which are $n \times n$ symmetric matrices together with a distribution that is invariant under conjugation by an orthogonal matrix.

60 3. Symplectic Ensembles (SE), which are $2n \times 2n$ Hermitian self-dual matrices together with a distribution that is invariant under unitary-symplectic conjugation; that is, matrices M that satisfy
$$M^* = M = J M^t J^t$$
and distributions invariant under the mapping $M \mapsto U M U^*$ with U satisfying $U U^* = I$ and $U J U^t = J$, where
$$J = \mathrm{diag}(J_2, \dots, J_2), \qquad J_2 = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}.$$

61 Each has an associated probability distribution of the form
$$P_n(M)\, dM = \frac{1}{Z_n}\, e^{-\operatorname{tr} Q(M)}\, dM$$
where dM is Lebesgue measure on the algebraically independent entries of M, Q is a real-valued function, generally an even degree polynomial, and $Z_n$ is a normalization constant. In the case that $Q(x) = x^2$ these measures are equivalent to having, as much as is algebraically possible, matrices whose entries are independent normal or Gaussian random variables.

62 These ensembles are probably the most studied and are indicated with an extra Gaussian adjective as the Gaussian Unitary Ensemble (GUE), the Gaussian Orthogonal Ensemble (GOE), and the Gaussian Symplectic Ensemble (GSE). In each of the above cases one can compute the induced distribution on the space of eigenvalues. For example, in the GUE case one can diagonalize a Hermitian matrix as $U D U^*$, where U is unitary and D is a diagonal matrix, and then make a change of variables $M \mapsto (U, D)$. After integrating out the unitary part (which is equivalent to computing a Jacobian) one arrives at an induced distribution on the space of eigenvalues.

63 A similar computation can be made for all three ensembles, and the resulting probability densities have the form
$$c_n\, e^{-\frac{\beta}{2} \sum_{i=1}^n Q(x_i)}\, |\Delta(x_i)|^\beta$$
where $\Delta(x_i)$ is the Vandermonde determinant
$$\Delta(x_i) = \det\big(x_j^{\,i-1}\big)_{1 \le i,j \le n} = \prod_{j < i} (x_i - x_j)$$
and $c_n$ is the normalizing constant.
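The Vandermonde identity used here can be checked directly in a few lines (a hedged numerical illustration of mine, with made-up sample points):

```python
import numpy as np
from itertools import combinations

# Hypothetical sample points x_1, ..., x_4
x = np.array([0.3, 1.1, 2.0, 3.7])
n = len(x)
V = np.vander(x, increasing=True).T            # entry (i, j) = x_j^(i-1)
det_V = np.linalg.det(V)
prod_V = np.prod([x[i] - x[j] for j, i in combinations(range(n), 2)])
print(det_V, prod_V)
```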

64 Here, more precisely, we mean that if f is any symmetric function of n real variables, then the expected value of f is
$$c_n \int_{\mathbb{R}^n} f(x_1, x_2, \dots, x_n)\, e^{-\frac{\beta}{2} \sum_{i=1}^n Q(x_i)}\, |\Delta(x_i)|^\beta\, dx_1\, dx_2 \cdots dx_n.$$
For UE β = 2, for OE β = 1, and for SE β = 4, and thus the three ensembles are often referred to as the β = 1, 2, or 4 ensembles. (It can be shown for SE that eigenvalues occur in pairs, and thus all three densities are defined as functions of n variables.)

65 As soon as the densities are known, one can try to compute some statistical information about the eigenvalues. Since it is easier to describe, we illustrate the ideas with GUE. We can first factor the exponential terms into the Vandermonde determinant, so that the entry $x_j^{\,i-1}$ is replaced by $e^{-x_j^2/2}\, x_j^{\,i-1}$, and then using elementary row operations we can replace each row by any polynomial of the right degree. That is, replace $e^{-x_j^2/2}\, x_j^{\,i-1}$ by $e^{-x_j^2/2}\, p_{i-1}(x_j)$, only changing the determinant by a constant factor.

66 So we choose to replace them by the normalized Hermite polynomials $h_k(x)$, which satisfy
$$\int h_k(x)\, h_j(x)\, e^{-x^2}\, dx = \delta_{jk}.$$
From this it follows, after identifying the constant, that the density on the space of eigenvalues for GUE is
$$c_n\, e^{-\sum_{i=1}^n x_i^2}\, \Delta(x_i)^2 = \frac{1}{n!} \det K_n(x_i, x_j)$$
where
$$K_n(x_i, x_j) = \sum_{k=0}^{n-1} \varphi_k(x_i)\, \varphi_k(x_j), \qquad \varphi_k(x) = h_k(x)\, e^{-\frac{x^2}{2}}.$$

67 The Christoffel-Darboux formula allows one to analyze $K_n(x, y)$ in a more concise form, since it says that for $x \neq y$,
$$K_n(x, y) = \sqrt{\frac{n}{2}}\; \frac{\varphi_n(x)\, \varphi_{n-1}(y) - \varphi_n(y)\, \varphi_{n-1}(x)}{x - y},$$
and for x = y, $K_n(x, y)$ is the limit of this expression as $x \to y$. All the information we need is somehow contained in the function $K_n(x, y)$. It is clear that for large n this information is intimately related to knowledge about the asymptotics of the Hermite polynomials.
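The Christoffel-Darboux identity is easy to verify numerically. The sketch below (my own illustration, building the orthonormal Hermite functions from numpy's physicists' Hermite polynomials) compares the two sides at a pair of test points:

```python
import numpy as np
from numpy.polynomial.hermite import hermval
from math import factorial, pi, sqrt

def phi(k, x):
    # orthonormal Hermite function h_k(x) * exp(-x^2/2)
    c = np.zeros(k + 1); c[k] = 1.0
    return hermval(x, c) * np.exp(-x**2 / 2) / sqrt(sqrt(pi) * 2**k * factorial(k))

n, x, y = 8, 0.4, -1.3          # assumed test values
lhs = sum(phi(k, x) * phi(k, y) for k in range(n))
rhs = sqrt(n / 2) * (phi(n, x) * phi(n - 1, y) - phi(n, y) * phi(n - 1, x)) / (x - y)
print(lhs, rhs)
```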

68 For example, the density of eigenvalues, $\rho_n(x)$, defined to be the limit of the expected number of eigenvalues in an interval around x divided by its length, as the length tends to zero, is exactly $K_n(x, x)$. Using the known asymptotics for the Hermite polynomials one can show that
$$\lim_{n\to\infty} \sqrt{\frac{2}{n}}\; \rho_n\big(\sqrt{2n}\, x\big) = \begin{cases} \frac{2}{\pi} \sqrt{1 - x^2} & \text{if } |x| < 1 \\ 0 & \text{if } |x| > 1 \end{cases}$$
holds uniformly on compact sets of $|x| < 1$ and $|x| > 1$. This result, one of the first successes of Wigner's program, is called the Wigner semi-circle law.
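A Monte Carlo illustration of the semicircle law (hedged: this sketch of mine uses the common normalization in which the GUE density is proportional to $e^{-\operatorname{tr} H^2/2}$, so the spectrum edge sits at $\pm 2\sqrt{n}$ rather than $\pm\sqrt{2n}$): sample one large GUE matrix and compare the fraction of rescaled eigenvalues in |x| < 1/2 with the semicircle prediction $\frac{2}{\pi}\int_{-1/2}^{1/2}\sqrt{1-x^2}\,dx \approx 0.609$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1500
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = (A + A.conj().T) / 2                       # GUE, density ∝ exp(-tr H^2 / 2)
x = np.linalg.eigvalsh(H) / (2 * np.sqrt(n))   # rescale so the edge is ±1
frac = np.mean(np.abs(x) < 0.5)
# closed form: antiderivative of sqrt(1-x^2) is (x*sqrt(1-x^2) + arcsin x)/2
pred = (2 / np.pi) * (0.5 * np.sqrt(0.75) + np.arcsin(0.5))
print(frac, pred)
```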

69 The semi-circle law tells us how we should rescale so that we can obtain meaningful answers. Thus we replace $K_n(x, y)$ with
$$\frac{1}{\sqrt{2n}}\, K_n\!\left(\frac{x}{\sqrt{2n}}, \frac{y}{\sqrt{2n}}\right).$$
From the theory of Hermite polynomials one can show that as $n \to \infty$,
$$\frac{1}{\sqrt{2n}}\, K_n\!\left(\frac{x}{\sqrt{2n}}, \frac{y}{\sqrt{2n}}\right) \to \frac{\sin(x - y)}{\pi(x - y)}.$$

70 Now consider a random variable of the form
$$\sum_{i=1}^n f\big(x_i \sqrt{2n}\big).$$
This is called a linear statistic. A fundamental formula from probability theory shows that if we call the probability distribution function $\varphi_n$, then its inverse Fourier transform is given by
$$\int e^{ik \sum_{j=1}^n f(x_j \sqrt{2n})}\, K_n(x_1, \dots, x_n)\, dx_1 \cdots dx_n.$$

71 The transform is then
$$\int \prod_{j=1}^n \Big(1 + \big(e^{ik f(x_j \sqrt{2n})} - 1\big)\Big)\, K_n(x_1, \dots, x_n)\, dx_1 \cdots dx_n.$$
We expand and collect terms in k using the helpful property that
$$\frac{n!}{(n - m)!} \int K_n(x_1, \dots, x_m, x_{m+1}, \dots, x_n)\, dx_{m+1} \cdots dx_n = \det\big(K_n(x_i, x_j)\big)_{i,j=1}^m.$$

72 The expansion immediately shows that this is the Fredholm determinant for the kernel
$$\big(e^{ik f(x\sqrt{2n})} - 1\big)\, K_n(x, y).$$
If we change variables and let $n \to \infty$, then the limit is the Fredholm determinant det(I + T) where T has kernel
$$K(x, y) = \big(e^{ik f(x)} - 1\big)\, \frac{\sin(x - y)}{\pi(x - y)}.$$
This has the same Fredholm determinant as the operator with
$$K(x, y) = \frac{\sin(x - y)}{\pi(x - y)}\, \big(e^{ik f(y)} - 1\big).$$

73 We recall that $\sin x / \pi x$ is the Fourier transform of the characteristic function of the interval (−1, 1). Thus we can write our kernel as
$$K(x, y) = \frac{1}{2\pi} \int \chi(\xi)\, e^{i(x-y)\xi}\, d\xi\; \big(e^{ik f(y)} - 1\big)$$
where χ(ξ) is the characteristic function of the interval. But this corresponds to the operator that sends g(y) to
$$\frac{1}{2\pi} \int_{-1}^1 \int e^{i\xi x}\, e^{-iy\xi}\, \big(e^{ik f(y)} - 1\big)\, g(y)\, dy\, d\xi.$$

74 However, the map
$$g \mapsto \frac{1}{2\pi} \int_{-1}^1 \int e^{i\xi x}\, e^{-iy\xi}\, \big(e^{ik f(y)} - 1\big)\, g(y)\, dy\, d\xi$$
is the same as $F P F^{-1} M_\varphi$, and thus has the same determinant as $P F^{-1} M_\varphi F P$, which is a Wiener-Hopf operator with symbol $e^{ik f(y)}$ restricted to the interval (−1, 1).

75 Now we take f(x) to be of the form f(x/α) with α a parameter. This is equivalent to changing the interval to (−α, α). Using our limit theorem we have that
$$F(\varphi)(k) \sim \exp\left\{\frac{ik\alpha}{\pi} \int f(x)\, dx - \frac{k^2}{\pi} \int_0^\infty x\, \big|F^{-1}(f)(x)\big|^2\, dx\right\}.$$
This tells us that asymptotically the distribution function is Gaussian, and it identifies the mean and variance.

76 But what happens if f is not smooth? Then we need to use Fisher-Hartwig symbols. Suppose we let f be the function that counts the number of eigenvalues in a large interval, $f(x/\alpha) = \chi_{(-\alpha,\alpha)}(x)$. This yields a piecewise continuous symbol for $e^{ik f(x)}$.

77 To begin, let us compute the mean of the linear statistic before scaling.
$$\mu = \int \sum_i f\big(x_i \sqrt{2n}/\alpha\big)\, K_n(x_1, \dots, x_n)\, dx = n \int f\big(x_1 \sqrt{2n}/\alpha\big)\, K_n(x_1, \dots, x_n)\, dx = \int f\big(x_1 \sqrt{2n}/\alpha\big)\, K_n(x_1, x_1)\, dx_1.$$
Changing variables, in the limit we have
$$\mu = \frac{2\alpha}{\pi}.$$

78 We can do a similar computation and find that the variance is asymptotically
$$\frac{1}{\pi^2} \log 2\alpha + \frac{1}{\pi^2}\big(1 + \gamma + \log 2\big),$$
or
$$\frac{1}{\pi^2} \log 2\alpha + O(1).$$

79 Recall our formula
$$\det(W_\alpha(\sigma)) \sim G(\sigma)^\alpha\, \alpha^{\sum_r \lambda_r^2}\, E(\sigma) \prod_r g(\lambda_r),$$
where
$$G(\sigma) = \exp\left(\frac{1}{2\pi} \int \log \sigma(x)\, dx\right), \qquad \lambda_r = \frac{1}{2\pi} \log\big(\sigma(x_r+)/\sigma(x_r-)\big),$$
$$g(\lambda) = G(1+\lambda)\, G(1-\lambda), \qquad E(\sigma) = \exp \int_0^\infty \left(x\, (\log\sigma)_x\, (\log\sigma)_{-x} - \frac{1 - e^{-x}}{x} \sum_r \lambda_r^2\right) dx.$$

80 The symbol of interest is
$$\sigma(\xi) = \begin{cases} 1 & \text{if } |\xi| > 1, \\ e^{ik} & \text{if } |\xi| < 1, \end{cases} \qquad G(\sigma) = \exp\left(\frac{1}{2\pi} \int \log \sigma(x)\, dx\right) = e^{ik/\pi}.$$
We will have two jumps with parameters $ik/2\pi$ and $-ik/2\pi$, and α is replaced by 2α. In this case R = 2 and $\lambda_1 = ik/2\pi$, $\lambda_2 = -ik/2\pi$. Thus,
$$(2\alpha)^{\sum_r \lambda_r^2} = \exp\left(-\frac{k^2}{2\pi^2} \log 2\alpha\right).$$

81 Notice that both of these terms do have the property that their logarithms are quadratic in k. This holds true for E(σ) as well. This term is given by
$$\exp\left\{-\frac{k^2}{\pi^2} \int_0^\infty \left(\frac{\sin^2 x}{x} - \frac{1 - e^{-x}}{2x}\right) dx\right\} = \exp\left(-\frac{k^2 \log 2}{2\pi^2}\right).$$

82 Thus the entire expansion is given by the formula
$$\exp\left\{\frac{2ki\alpha}{\pi} - \frac{k^2}{2\pi^2} \log \alpha - \frac{k^2}{\pi^2} \log 2\right\}\, g(ik/2\pi)^2.$$
The presence of this last term involving the function g clearly shows that this expansion does not have a logarithm quadratic in k. Thus, at first it seems the Gaussian nature of the distribution is missing. However, the natural rescaling of the variable,
$$\frac{\pi}{\sqrt{\log 2\alpha}} \left(\sum_i f(x_i) - \mu\right),$$
yields $\exp(-k^2/2)$ as expected.

83 These techniques can be applied to other ensembles as well. We mention only one here. One can follow a similar path to find analogous asymptotic formulas for the truncated Wiener-Hopf + Hankel operators W α (σ) + H α (σ) in a case where σ is the characteristic function of an interval and so has two jump discontinuities.

84 The corresponding kernel is
$$\big(e^{2\pi i k} - 1\big) \left(\frac{\sin(x - y)}{\pi(x - y)} + \frac{\sin(x + y)}{\pi(x + y)}\right).$$
With σ as above we have, for $\mathrm{Re}\, k < 1/2$,
$$\det\big(W_\alpha(\sigma) + H_\alpha(\sigma)\big) \sim e^{2i\alpha k}\, \alpha^{-3k^2/2}\, 2^{-4k^2}\, G(1 - 2k)\, G(1 + 2k),$$
where G is the Barnes G-function.

85 Gap probabilities Another important statistic in RMT is the gap probability, which is the probability that no eigenvalues are in the interval (a, b). Using algebraic properties of $K_n$ as before, one can show that in the limit the gap probability is given by a Fredholm determinant on $L^2(a, b)$ of the form det(I − λT) with kernel
$$K(x, y) = \frac{\sin(x - y)}{\pi(x - y)}.$$
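This Fredholm determinant can be evaluated to high accuracy by a Gauss-Legendre Nyström discretization (a hedged sketch of mine in the spirit of Bornemann's numerical method; the sine kernel and a symmetric interval (−s, s) are assumed, with λ = 1):

```python
import numpy as np

def gap_probability(s, m=60):
    # det(I - K) on L^2(-s, s), with K(x, y) = sin(x - y) / (pi (x - y))
    t, w = np.polynomial.legendre.leggauss(m)
    x, w = s * t, s * w                        # nodes and weights on (-s, s)
    d = np.subtract.outer(x, x)
    K = np.where(d == 0, 1 / np.pi, np.sin(d) / (np.pi * np.where(d == 0, 1.0, d)))
    sw = np.sqrt(w)
    # symmetrized discretization sqrt(w_i) K(x_i, x_j) sqrt(w_j)
    return np.linalg.det(np.eye(m) - (sw[:, None] * K * sw[None, :]))

print(gap_probability(0.5), gap_probability(1.0))
```

The determinant decreases from 1 as the interval grows, matching the intuition that a large gap becomes increasingly unlikely.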

86 If we are interested in applying the limit theorems as before, we must restrict ourselves to small λ, since the symbol will vanish on an interval. In a wonderful connection to differential equations, it is known by the work of Jimbo, Miwa, Môri, and Sato that if we define
$$\sigma(s) = \frac{d}{ds} \log \det(I - K)$$
then σ satisfies a second-order nonlinear differential equation of Painlevé type. The theory of Painlevé equations and the theory of integrable systems then yield information about the asymptotics of the probability distribution. Our goal now is to describe the operator approach to the equations.

87 In the following slides we consider det(I − λK), where K is the sine kernel, and we think of the operator as defined on $L^2(-s, s)$. In other words, we want information about the probability of finding no eigenvalues in the interval (−s, s). It is useful not to try to compute the determinant directly, but rather the log of the determinant. This is because we have the formula
$$\log \det(I - \lambda K) = \operatorname{trace} \log(I - \lambda K)$$
at our disposal. We will eventually derive a nonlinear differential equation that has a connection to the above function (thought of as a function of s).

88 Suppose {K(s)} is a family of operators such that K(s) is trace class and $K'(s) = dK(s)/ds$ is defined. Then, whenever this makes sense,
$$\frac{d}{ds} \log \det(I - K(s)) = -\operatorname{trace}\left((I - K(s))^{-1}\, K'(s)\right).$$
This is easy to verify, since the trace is linear and $\operatorname{tr} AB = \operatorname{tr} BA$.
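This trace formula can be checked by finite differences on a simple matrix family K(s) = sA (a hedged toy illustration of mine with an assumed random 5×5 matrix, not the sine kernel):

```python
import numpy as np

# Check d/ds log det(I - K(s)) = -tr((I - K(s))^{-1} K'(s)) for K(s) = s*A.
rng = np.random.default_rng(1)
A = 0.1 * rng.normal(size=(5, 5))          # small entries keep I - sA invertible

def logdet(s):
    return np.linalg.slogdet(np.eye(5) - s * A)[1]

s, h = 0.7, 1e-6
fd = (logdet(s + h) - logdet(s - h)) / (2 * h)   # central finite difference
exact = -np.trace(np.linalg.solve(np.eye(5) - s * A, A))
print(fd, exact)
```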

89 Now let J be the interval (−s, s) and consider the operator K(s) with kernel $K(x, y)\, \chi_J(y)$, where K(x, y) is any continuous kernel. This operator sends f to the function
$$\int_{-s}^{s} K(x, y)\, f(y)\, dy.$$
To find K′(s) we use the Fundamental Theorem of Calculus.

90 The operator K′(s) sends
$$f \mapsto K(x, s)\, f(s) + K(x, -s)\, f(-s).$$
Note that the image of f is finite rank, spanned by the two functions K(x, s) and K(x, −s). In terms of kernels we have
$$K(x, s)\, \delta(y - s) + K(x, -s)\, \delta(y + s).$$
This operator does not make sense for all functions in $L^2$, but only for functions that are continuous. We will ignore this fact for the time being.

91 We now define R(x, y) to be the resolvent kernel, that is, the kernel that corresponds to the operator
$$(I - K)^{-1} K.$$
Since
$$(I - K)^{-1} = I + (I - K)^{-1} K,$$
the operator $(I - K)^{-1}$ has kernel
$$\rho(x, y) = \delta(x - y) + R(x, y).$$

92 The kernel of $(I - K)^{-1} K'(s)$ is given by
$$\int \rho(x, z)\, \big\{K(z, s)\, \delta(y - s) + K(z, -s)\, \delta(y + s)\big\}\, dz,$$
which is the same as
$$R(x, s)\, \delta(y - s) + R(x, -s)\, \delta(y + s).$$
This has rank two, just like the image of K′(s). We can also easily compute its trace to see that it is
$$R(s, s) + R(-s, -s).$$

93 To summarize the list of kernels so far:

$K'(s)$ : $K(x,s)\,\delta(y-s) + K(x,-s)\,\delta(y+s)$
$(I-K)^{-1}$ : $\rho(x,y) = \delta(x-y) + R(x,y)$
$(I-K)^{-1}K'(s)$ : $R(x,s)\,\delta(y-s) + R(x,-s)\,\delta(y+s)$
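Combining the derivative formula with the trace just computed gives $\frac{d}{ds}\log\det(I-K(s)) = -\big(R(s,s)+R(-s,-s)\big)$, and this is checkable numerically. The sketch below (my own, not from the talk) discretizes the sine kernel by Gauss-Legendre quadrature and evaluates the resolvent at the endpoints through $R = K + K(I-K)^{-1}K$; the helper names are hypothetical.

```python
import numpy as np

def gl_nodes(s, n=60):
    t, w = np.polynomial.legendre.leggauss(n)
    return s * t, s * w

def sine_k(u, v):
    # sin(u - v) / (pi * (u - v)), equal to 1/pi on the diagonal
    return np.sinc((u - v) / np.pi) / np.pi

def logdet(s, n=60):
    x, w = gl_nodes(s, n)
    sw = np.sqrt(w)
    M = np.eye(n) - sw[:, None] * sine_k(x[:, None], x[None, :]) * sw[None, :]
    return np.linalg.slogdet(M)[1]

def R_at(s, pt, n=60):
    """Resolvent kernel R(pt, pt) on (-s, s), via R = K + K (I-K)^{-1} K (Nystrom)."""
    x, w = gl_nodes(s, n)
    Kmat = sine_k(x[:, None], x[None, :])
    v = sine_k(x, pt)                                   # K(x_j, pt) at the nodes
    h = np.linalg.solve(np.eye(n) - Kmat * w[None, :], v)
    return sine_k(pt, pt) + (w * v) @ h

s0, h0 = 1.0, 1e-4
lhs = -(logdet(s0 + h0) - logdet(s0 - h0)) / (2 * h0)   # -d/ds log det(I - K(s))
rhs = R_at(s0, s0) + R_at(s0, -s0)                      # R(s,s) + R(-s,-s)
```

The two sides agree to finite-difference accuracy, and the two endpoint values coincide, as the symmetry discussed later predicts.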

94 We now compute the kernels of some additional operators. The operator $\frac{d}{ds}(I-K)^{-1}$ has kernel $$M(x,y) = R(x,s)\,\rho(s,y) + R(x,-s)\,\rho(-s,y).$$ This is because $$\frac{d}{ds}(I-K)^{-1} = (I-K)^{-1}\,\frac{dK}{ds}\,(I-K)^{-1}.$$ The right-hand side has kernel $$\iint \rho(x,z)\,\{K(z,s)\,\delta(w-s)+K(z,-s)\,\delta(w+s)\}\,\rho(w,u)\,dz\,dw = R(x,s)\,\rho(s,u) + R(x,-s)\,\rho(-s,u),$$ as desired.

95 Up to this point we have used no information about the sine kernel in particular, but from now on we will use its properties. The commutator of two operators A and B is $AB - BA$, denoted $[A,B]$, and we let D be the differentiation operator $d/dx$. The operator $[D,(I-K)^{-1}]$ has kernel $$-R(x,s)\,\rho(s,y) + R(x,-s)\,\rho(-s,y).$$ To see this, note that $$\big[D,(I-K)^{-1}\big] = (I-K)^{-1}\,[D,K]\,(I-K)^{-1}.$$ Now DK has kernel $\frac{\partial K}{\partial x}(x,y)$, but KD is more complicated.

96 The operator KD operates on a function f by $$KDf = \int_{-s}^{s}K(x,y)f'(y)\,dy = K(x,y)f(y)\Big|_{y=-s}^{y=s} - \int_{-s}^{s}\frac{\partial K}{\partial y}(x,y)f(y)\,dy,$$ so KD has kernel $$-\frac{\partial K}{\partial y}(x,y) + K(x,s)\,\delta(y-s) - K(x,-s)\,\delta(y+s).$$ Since $K(x,y)$ is a function of $x-y$, we have $\frac{\partial K}{\partial x} = -\frac{\partial K}{\partial y}$, and thus the kernel of $[D,K]$ is $$-K(x,s)\,\delta(y-s) + K(x,-s)\,\delta(y+s).$$

97 To find the kernel of $[D,(I-K)^{-1}]$ we compute $$\iint \rho(x,y)\,\big[-K(y,s)\,\delta(z-s) + K(y,-s)\,\delta(z+s)\big]\,\rho(z,u)\,dy\,dz = \int\big(-R(x,s)\,\delta(z-s) + R(x,-s)\,\delta(z+s)\big)\,\rho(z,u)\,dz = -R(x,s)\,\rho(s,u) + R(x,-s)\,\rho(-s,u).$$

98 To summarize again the list of kernels:

$K'(s)$ : $K(x,s)\,\delta(y-s) + K(x,-s)\,\delta(y+s)$
$(I-K)^{-1}$ : $\rho(x,y) = \delta(x-y) + R(x,y)$
$(I-K)^{-1}K'(s)$ : $R(x,s)\,\delta(y-s) + R(x,-s)\,\delta(y+s)$
$\frac{d}{ds}(I-K)^{-1}$ : $R(x,s)\,\rho(s,y) + R(x,-s)\,\rho(-s,y)$
$[D,(I-K)^{-1}]$ : $-R(x,s)\,\rho(s,y) + R(x,-s)\,\rho(-s,y)$

99 We will consider a slightly more general interval in what follows, since it is not any harder to do so. Let I be the union of the intervals $(a_{2i-1}, a_{2i})$. We write our kernel as $$\lambda K(x,y) = \frac{\lambda\sin(x-y)}{\pi(x-y)} = \frac{A(x)A'(y) - A(y)A'(x)}{x-y}, \qquad A(x) = \sqrt{\frac{\lambda}{\pi}}\,\sin x.$$ We define the functions $$Q(x,\hat a) = \big((I-K)^{-1}A\big)(x), \qquad P(x,\hat a) = \big((I-K)^{-1}A'\big)(x),$$ where $\hat a$ is a vector containing each $a_i$. Define the operator $M_x$ (multiplication by x) to be $M_x f = xf$.
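The factorization of the kernel is elementary: $A(x)A'(y)-A(y)A'(x) = \frac{\lambda}{\pi}(\sin x\cos y - \sin y\cos x) = \frac{\lambda}{\pi}\sin(x-y)$. A numerical spot-check (my own illustration, with an arbitrary test value of λ):

```python
import numpy as np

lam = 0.7                                          # arbitrary test value of lambda
A  = lambda x: np.sqrt(lam / np.pi) * np.sin(x)    # A(x)
Ap = lambda x: np.sqrt(lam / np.pi) * np.cos(x)    # A'(x)

rng = np.random.default_rng(1)
x = rng.uniform(-3, 3, 50)
y = rng.uniform(-3, 3, 50)
kernel     = lam * np.sin(x - y) / (np.pi * (x - y))
factorized = (A(x) * Ap(y) - A(y) * Ap(x)) / (x - y)
err = np.max(np.abs(kernel - factorized))
```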

100 More kernels! The operator $[M_x,(I-K)^{-1}]$ has kernel $$Q(x,\hat a)\,\big((I-K^t)^{-1}A'\big)(y) - P(x,\hat a)\,\big((I-K^t)^{-1}A\big)(y),$$ where $K^t$ has kernel $K(y,x)$. To see this, once again, $$\big[M_x,(I-K)^{-1}\big] = (I-K)^{-1}\,[M_x,K]\,(I-K)^{-1}.$$

101 Now $[M_x,K]$ has kernel $A(x)A'(y) - A'(x)A(y)$. One half of the kernel of $[M_x,(I-K)^{-1}]$ is given by $$\iint_{I\times I}\rho(w,x)\,A(x)A'(y)\,\rho(y,z)\,dx\,dy = \int_I \rho(w,x)A(x)\,dx \int_I A'(y)\rho(y,z)\,dy = Q(w,\hat a)\,\big((I-K^t)^{-1}A'\big)(z),$$

102 while similarly, for the other half, $$\iint_{I\times I}\rho(w,x)\,A'(x)A(y)\,\rho(y,z)\,dx\,dy = P(w,\hat a)\,\big((I-K^t)^{-1}A\big)(z).$$ And thus the kernel of $[M_x,(I-K)^{-1}]$ is the difference of these: $$Q(w,\hat a)\,\big((I-K^t)^{-1}A'\big)(z) - P(w,\hat a)\,\big((I-K^t)^{-1}A\big)(z).$$

103 Since $[M_x,(I-K)^{-1}]$ also has kernel $(x-y)R(x,y)$, the function $R(x,y)$ can now be written in terms of P and Q: $$R(x,y) = \frac{Q(x,\hat a)\big((I-K^t)^{-1}A'\big)(y) - P(x,\hat a)\big((I-K^t)^{-1}A\big)(y)}{x-y}.$$ But clearly for our sine kernel the transpose $K^t$ is the same as K. So we have thus proved that for x and y both in I, $$R(x,y) = \frac{Q(x,\hat a)P(y,\hat a) - P(x,\hat a)Q(y,\hat a)}{x-y}.$$
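This representation can be tested against a direct quadrature discretization: on the Gauss-Legendre nodes, the resolvent kernel values form the matrix $(I-KW)^{-1}K$, while Q and P solve $(I-KW)Q = A$ and $(I-KW)P = A'$. The sketch below is my own check, not part of the talk; parameter values are arbitrary.

```python
import numpy as np

lam, s, n = 0.9, 1.0, 40
t, gw = np.polynomial.legendre.leggauss(n)
x, w = s * t, s * gw
Kmat = lam * np.sinc((x[:, None] - x[None, :]) / np.pi) / np.pi
M = np.eye(n) - Kmat * w[None, :]            # I - K in the Nystrom discretization

Avec  = np.sqrt(lam / np.pi) * np.sin(x)     # A at the nodes
Apvec = np.sqrt(lam / np.pi) * np.cos(x)     # A' at the nodes
Q = np.linalg.solve(M, Avec)                 # Q = (I-K)^{-1} A
P = np.linalg.solve(M, Apvec)                # P = (I-K)^{-1} A'

Rmat  = np.linalg.solve(M, Kmat)             # values R(x_i, x_j) of the resolvent kernel
denom = x[:, None] - x[None, :] + np.eye(n)  # shift the diagonal to avoid 0/0
pred  = (Q[:, None] * P[None, :] - P[:, None] * Q[None, :]) / denom
off = ~np.eye(n, dtype=bool)                 # compare off-diagonal entries only
err = np.max(np.abs(Rmat - pred)[off])
```

The commutator identity in fact holds exactly at the discrete level, so the agreement is at the level of floating-point roundoff.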

104 Notice the resolvent kernel has the same form as the sine kernel, and we have now reduced our problem to finding information about P and Q. In order to determine $R(x,x)$, consider $P(y,\hat a)$ as y approaches a fixed value x. We can expand $P(y,\hat a)$ in a Taylor series about the point x as $$P(y,\hat a) = P(x,\hat a) + \frac{d}{dx}P(x,\hat a)\,(y-x) + O\big((y-x)^2\big).$$ Similarly for Q, $$Q(y,\hat a) = Q(x,\hat a) + \frac{d}{dx}Q(x,\hat a)\,(y-x) + O\big((y-x)^2\big).$$

105 Substituting this into the expression for $R(x,y)$, $$R(x,y) = Q'(x,\hat a)P(x,\hat a) - P'(x,\hat a)Q(x,\hat a) + O(x-y).$$ Taking the limit as $y \to x$, $$R(x,x) = Q'(x,\hat a)P(x,\hat a) - P'(x,\hat a)Q(x,\hat a).$$ Now let's examine how R behaves at the endpoints of the intervals in I.

106 We define $q_j = Q(x,\hat a)\big|_{x=a_j}$ and $p_j = P(x,\hat a)\big|_{x=a_j}$. For $a_j \neq a_k$, $$R(a_j,a_k) = \frac{q_j p_k - p_j q_k}{a_j - a_k}.$$ We can generalize our previous computation to see that $[D,(I-K)^{-1}]$ has kernel $$-\sum_{k=1}^{2m}(-1)^k R(x,a_k)\,\rho(a_k,y).$$

107 Using our kernel representations, it follows that $$\frac{\partial Q}{\partial x}(x,\hat a)\Big|_{x=a_j} = p_j - \sum_{k=1}^{2m}(-1)^k R(a_j,a_k)\,q_k.$$ Similarly for $P(x,\hat a)$, $$\frac{\partial P}{\partial x}(x,\hat a)\Big|_{x=a_j} = -q_j - \sum_{k=1}^{2m}(-1)^k R(a_j,a_k)\,p_k.$$

108 Placing both of these expressions into the formula for $R(x,x)$ yields $$R(a_j,a_j) = p_j\Big(p_j - \sum_{k=1}^{2m}(-1)^k R(a_j,a_k)q_k\Big) - q_j\Big({-q_j} - \sum_{k=1}^{2m}(-1)^k R(a_j,a_k)p_k\Big) = p_j^2 + q_j^2 - \sum_{k=1}^{2m}(-1)^k R(a_j,a_k)\,(p_j q_k - q_j p_k) = p_j^2 + q_j^2 - \sum_{k=1}^{2m}(-1)^k R(a_j,a_k)R(a_k,a_j)\,(a_k - a_j).$$

109 The expression for $\partial q_j/\partial a_j$ can be found to be $$\frac{\partial q_j}{\partial a_j} = p_j - \sum_{\substack{k=1 \\ k\neq j}}^{2m}(-1)^k R(a_j,a_k)\,q_k.$$ In a completely similar fashion, $$\frac{\partial p_j}{\partial a_j} = -q_j - \sum_{\substack{k=1 \\ k\neq j}}^{2m}(-1)^k R(a_j,a_k)\,p_k.$$

110 Now let's specialize to the case where $K(x,y) = \frac{\sin(x-y)}{\pi(x-y)}$ and I is $(-s,s)$. For this interval K is symmetric with respect to interchange of variables and satisfies $K(x,y) = K(-x,-y)$. This same property holds for the kernel ρ. To see this, define a flip operator J that sends the function $f(x)$ to $f(-x)$. It is easy to check that K and J commute. Now $(I-K)^{-1}J = J(I-K)^{-1}$, since J commutes with $I-K$ and hence with its inverse.

111 This also tells us that $R(x,y) = R(-x,-y)$, and thus $R(s,s) = R(-s,-s)$. It is also the case that $q_1 = -q_2$: since A is odd and $\rho(x,y) = \rho(-x,-y)$, $$q_1 = \lim_{x\to -s^+} Q(x,\hat a) = \int_{-s}^{s}\rho(-s,y)A(y)\,dy = \int_{-s}^{s}\rho(s,-y)A(y)\,dy = -\int_{-s}^{s}\rho(s,y)A(y)\,dy = -q_2.$$ A similar argument (using that A′ is even) leads to $p_1 = p_2$.

112 We are finally very close to finding our differential equation. Define $$a(s) = sR(s,s), \qquad b(s) = sR(-s,s).$$ First notice that, writing $p = p_2$ and $q = q_2$ so that $b = pq$, $$\frac{db}{ds} = \frac{d}{ds}(pq) = \big({-q} + 2R(-s,s)\,p\big)q + p\big(p - 2R(-s,s)\,q\big) = p^2 - q^2.$$

113 Using the same sort of computation one can also show that $$\frac{da}{ds} = \frac{d}{ds}\big(sR(s,s)\big) = p^2 + q^2.$$ Squaring the right-hand sides of the last two equations, it is clear (since $(p^2+q^2)^2 = (p^2-q^2)^2 + 4(pq)^2$) that $$\Big(\frac{da}{ds}\Big)^2 = \Big(\frac{db}{ds}\Big)^2 + 4b^2.$$
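Both derivative formulas, and the identity $(a')^2 = (b')^2 + 4b^2$ that follows from them, can be checked numerically with the quadrature discretization used earlier. The sketch below is my own verification (hypothetical helper names, central differences in s):

```python
import numpy as np

def R_val(s, xa, yb, n=50):
    """Resolvent kernel R(xa, yb) for the sine kernel on (-s, s), via R = K + K(I-K)^{-1}K."""
    t, gw = np.polynomial.legendre.leggauss(n)
    x, w = s * t, s * gw
    k = lambda u, v: np.sinc((u - v) / np.pi) / np.pi
    h = np.linalg.solve(np.eye(n) - k(x[:, None], x[None, :]) * w[None, :], k(x, yb))
    return k(xa, yb) + (w * k(xa, x)) @ h

a = lambda s: s * R_val(s, s, s)      # a(s) = s R(s, s)
b = lambda s: s * R_val(s, -s, s)     # b(s) = s R(-s, s)

s0, h0 = 0.8, 1e-4
da = (a(s0 + h0) - a(s0 - h0)) / (2 * h0)   # a'(s0) by central difference
db = (b(s0 + h0) - b(s0 - h0)) / (2 * h0)   # b'(s0) by central difference
```

Note that da is positive, as it must be since $da/ds = p^2 + q^2$.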

114 It is also the case, from the fundamental formulas for a and b, that $$s\frac{da}{ds} - a = 2b^2.$$ Differentiating this last equation we see that $sa'' = 4bb'$. Inserting this into the previous equation we arrive at the following.

115 Theorem. The function a satisfies the following differential equation: $$s^2(a'')^2 = 8\big(sa'-a\big)\Big({-2}\big(sa'-a\big) + (a')^2\Big).$$ We know that if λ is small, then the asymptotics are of the form $$\log\det(I-\lambda K) = c_1 s + c_2\log s + c_3 + o(1)$$ from our previous results.
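The theorem lends itself to a numerical check: compute $a(s) = sR(s,s)$ from the quadrature discretization, form $a'$ and $a''$ by central differences, and compare the two sides. This sketch is my own verification, not part of the talk; the step size and tolerance are ad hoc.

```python
import numpy as np

def a_of_s(s, n=50):
    """a(s) = s * R(s, s) for the sine kernel on (-s, s)."""
    t, gw = np.polynomial.legendre.leggauss(n)
    x, w = s * t, s * gw
    k = lambda u, v: np.sinc((u - v) / np.pi) / np.pi
    h = np.linalg.solve(np.eye(n) - k(x[:, None], x[None, :]) * w[None, :], k(x, s))
    return s * (1.0 / np.pi + (w * k(s, x)) @ h)

s0, h0 = 1.0, 1e-3
a0 = a_of_s(s0)
a1 = (a_of_s(s0 + h0) - a_of_s(s0 - h0)) / (2 * h0)          # a'
a2 = (a_of_s(s0 + h0) - 2 * a0 + a_of_s(s0 - h0)) / h0**2    # a''
lhs = (s0 * a2) ** 2
rhs = 8 * (s0 * a1 - a0) * (a1**2 - 2 * (s0 * a1 - a0))
```

Since $sa' - a = 2b^2$, the first factor on the right should also come out positive.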

116 For λ = 1 the asymptotic formula reads: $$\log\det(I-K) = -\frac{s^2}{2} - \frac{\log s}{4} + \frac{\log 2}{12} + 3\zeta'(-1) + o(1).$$ This was first conjectured by Freeman Dyson. The first two terms were proved in 1995 by Harold Widom, and the constant term was computed independently by Torsten Ehrhardt in 2007 and also by Deift, Its, Krasovsky, and Zhou. Ehrhardt's techniques involved operator theory methods, while the other approach used the Riemann-Hilbert method.
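The Widom-Dyson constant $c_0 = \frac{1}{12}\log 2 + 3\zeta'(-1) \approx -0.4385$ can be seen numerically: for moderately large s, $\log\det(I-K) + s^2/2 + \frac{1}{4}\log s$ is already close to it. A rough check (my own; the numerical value of $\zeta'(-1)$ is hard-coded, and the tolerance is deliberately generous since the o(1) error is only known to decay like a power of 1/s):

```python
import numpy as np

def logdet_sine(s, n=200):
    """log det(I - K) for the sine kernel on (-s, s), Gauss-Legendre discretization."""
    t, w = np.polynomial.legendre.leggauss(n)
    x, w = s * t, s * w
    sw = np.sqrt(w)
    Kmat = np.sinc((x[:, None] - x[None, :]) / np.pi) / np.pi
    return np.linalg.slogdet(np.eye(n) - sw[:, None] * Kmat * sw[None, :])[1]

zeta_prime_minus1 = -0.1654211437004509        # numerical value of zeta'(-1)
c0 = np.log(2.0) / 12 + 3 * zeta_prime_minus1  # Widom-Dyson constant, about -0.4385

s = 10.0
c_emp = logdet_sine(s) + s**2 / 2 + np.log(s) / 4
```

Pushing s much larger in double precision eventually fails, because the smallest eigenvalues of $I - K$ fall below machine roundoff.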

117 E. W. Barnes, The theory of the G-function, Quart. J. Pure and Appl. Math. 31 (1900). E. L. Basor, Distribution functions for random variables for ensembles of positive Hermitian matrices, Comm. Math. Phys. 188 (1997). E. L. Basor, T. Ehrhardt, H. Widom, On the determinant of a certain Wiener-Hopf + Hankel operator, Integral Equations and Operator Theory 47 (2003). E. L. Basor, H. Widom, Determinants of Airy operators and applications to random matrices, J. Statist. Phys. 96 (1999), no. 1-2. A. Böttcher, B. Silbermann, Introduction to Large Truncated Toeplitz Matrices, Springer-Verlag, Berlin, 1998.

118 T. Ehrhardt, Dyson's constants in the asymptotics of the determinants of Wiener-Hopf-Hankel operators with the sine kernel, Comm. Math. Phys. 272 (2007), no. 3, 683-698. P. Deift, A. Its, I. Krasovsky, X. Zhou, The Widom-Dyson constant for the gap probability in random matrix theory, J. Comput. Appl. Math. 202 (2007), no. 1, 26-47. C. Hughes, J. P. Keating, N. O'Connell, On the characteristic polynomial of a random unitary matrix, Comm. Math. Phys. 220 (2001). M. Jimbo, T. Miwa, Y. Môri, M. Sato, Density matrix of an impenetrable Bose gas and the fifth Painlevé transcendent, Physica D 1 (1980). M. Kac, Toeplitz matrices, translation kernels, and a related problem in probability theory, Duke Math. J. 21 (1954).

119 M. L. Mehta, Random Matrices, rev. and enlarged 2nd ed., Academic Press, San Diego. C. A. Tracy, H. Widom, Level-spacing distributions and the Airy kernel, Comm. Math. Phys. 159 (1994). C. A. Tracy, H. Widom, Introduction to random matrices, in Proc. 8th Scheveningen Conf., Springer Lecture Notes in Physics. C. A. Tracy, H. Widom, Level spacing distributions and the Bessel kernel, Comm. Math. Phys. 161 (1994). H. Widom, Asymptotic behavior of block Toeplitz matrices and determinants. II, Adv. in Math. 21:1 (1976), 1-29.


Recitation 1 (Sep. 15, 2017) Lecture 1 8.321 Quantum Theory I, Fall 2017 1 Recitation 1 (Sep. 15, 2017) 1.1 Simultaneous Diagonalization In the last lecture, we discussed the situations in which two operators can be simultaneously

More information

Primes, partitions and permutations. Paul-Olivier Dehaye ETH Zürich, October 31 st

Primes, partitions and permutations. Paul-Olivier Dehaye ETH Zürich, October 31 st Primes, Paul-Olivier Dehaye pdehaye@math.ethz.ch ETH Zürich, October 31 st Outline Review of Bump & Gamburd s method A theorem of Moments of derivatives of characteristic polynomials Hypergeometric functions

More information

MATH 583A REVIEW SESSION #1

MATH 583A REVIEW SESSION #1 MATH 583A REVIEW SESSION #1 BOJAN DURICKOVIC 1. Vector Spaces Very quick review of the basic linear algebra concepts (see any linear algebra textbook): (finite dimensional) vector space (or linear space),

More information

Orthogonal Polynomials, Perturbed Hankel Determinants. and. Random Matrix Models

Orthogonal Polynomials, Perturbed Hankel Determinants. and. Random Matrix Models Orthogonal Polynomials, Perturbed Hankel Determinants and Random Matrix Models A thesis presented for the degree of Doctor of Philosophy of Imperial College London and the Diploma of Imperial College by

More information

OPSF, Random Matrices and Riemann-Hilbert problems

OPSF, Random Matrices and Riemann-Hilbert problems OPSF, Random Matrices and Riemann-Hilbert problems School on Orthogonal Polynomials in Approximation Theory and Mathematical Physics, ICMAT 23 27 October, 207 Plan of the course lecture : Orthogonal Polynomials

More information

Uniform individual asymptotics for the eigenvalues and eigenvectors of large Toeplitz matrices

Uniform individual asymptotics for the eigenvalues and eigenvectors of large Toeplitz matrices Uniform individual asymptotics for the eigenvalues and eigenvectors of large Toeplitz matrices Sergei Grudsky CINVESTAV, Mexico City, Mexico The International Workshop WIENER-HOPF METHOD, TOEPLITZ OPERATORS,

More information

Matrix Lie groups. and their Lie algebras. Mahmood Alaghmandan. A project in fulfillment of the requirement for the Lie algebra course

Matrix Lie groups. and their Lie algebras. Mahmood Alaghmandan. A project in fulfillment of the requirement for the Lie algebra course Matrix Lie groups and their Lie algebras Mahmood Alaghmandan A project in fulfillment of the requirement for the Lie algebra course Department of Mathematics and Statistics University of Saskatchewan March

More information

A Brief Outline of Math 355

A Brief Outline of Math 355 A Brief Outline of Math 355 Lecture 1 The geometry of linear equations; elimination with matrices A system of m linear equations with n unknowns can be thought of geometrically as m hyperplanes intersecting

More information

Starting from Heat Equation

Starting from Heat Equation Department of Applied Mathematics National Chiao Tung University Hsin-Chu 30010, TAIWAN 20th August 2009 Analytical Theory of Heat The differential equations of the propagation of heat express the most

More information

FOURIER SERIES, HAAR WAVELETS AND FAST FOURIER TRANSFORM

FOURIER SERIES, HAAR WAVELETS AND FAST FOURIER TRANSFORM FOURIER SERIES, HAAR WAVELETS AD FAST FOURIER TRASFORM VESA KAARIOJA, JESSE RAILO AD SAMULI SILTAE Abstract. This handout is for the course Applications of matrix computations at the University of Helsinki

More information

1 Intro to RMT (Gene)

1 Intro to RMT (Gene) M705 Spring 2013 Summary for Week 2 1 Intro to RMT (Gene) (Also see the Anderson - Guionnet - Zeitouni book, pp.6-11(?) ) We start with two independent families of R.V.s, {Z i,j } 1 i

More information

1.5 Approximate Identities

1.5 Approximate Identities 38 1 The Fourier Transform on L 1 (R) which are dense subspaces of L p (R). On these domains, P : D P L p (R) and M : D M L p (R). Show, however, that P and M are unbounded even when restricted to these

More information

MS 3011 Exercises. December 11, 2013

MS 3011 Exercises. December 11, 2013 MS 3011 Exercises December 11, 2013 The exercises are divided into (A) easy (B) medium and (C) hard. If you are particularly interested I also have some projects at the end which will deepen your understanding

More information

MORE NOTES FOR MATH 823, FALL 2007

MORE NOTES FOR MATH 823, FALL 2007 MORE NOTES FOR MATH 83, FALL 007 Prop 1.1 Prop 1. Lemma 1.3 1. The Siegel upper half space 1.1. The Siegel upper half space and its Bergman kernel. The Siegel upper half space is the domain { U n+1 z C

More information

MATH 315 Linear Algebra Homework #1 Assigned: August 20, 2018

MATH 315 Linear Algebra Homework #1 Assigned: August 20, 2018 Homework #1 Assigned: August 20, 2018 Review the following subjects involving systems of equations and matrices from Calculus II. Linear systems of equations Converting systems to matrix form Pivot entry

More information

be the set of complex valued 2π-periodic functions f on R such that

be the set of complex valued 2π-periodic functions f on R such that . Fourier series. Definition.. Given a real number P, we say a complex valued function f on R is P -periodic if f(x + P ) f(x) for all x R. We let be the set of complex valued -periodic functions f on

More information

Hilbert Spaces. Hilbert space is a vector space with some extra structure. We start with formal (axiomatic) definition of a vector space.

Hilbert Spaces. Hilbert space is a vector space with some extra structure. We start with formal (axiomatic) definition of a vector space. Hilbert Spaces Hilbert space is a vector space with some extra structure. We start with formal (axiomatic) definition of a vector space. Vector Space. Vector space, ν, over the field of complex numbers,

More information

1 Math 241A-B Homework Problem List for F2015 and W2016

1 Math 241A-B Homework Problem List for F2015 and W2016 1 Math 241A-B Homework Problem List for F2015 W2016 1.1 Homework 1. Due Wednesday, October 7, 2015 Notation 1.1 Let U be any set, g be a positive function on U, Y be a normed space. For any f : U Y let

More information

Introduction to Group Theory

Introduction to Group Theory Chapter 10 Introduction to Group Theory Since symmetries described by groups play such an important role in modern physics, we will take a little time to introduce the basic structure (as seen by a physicist)

More information

08a. Operators on Hilbert spaces. 1. Boundedness, continuity, operator norms

08a. Operators on Hilbert spaces. 1. Boundedness, continuity, operator norms (February 24, 2017) 08a. Operators on Hilbert spaces Paul Garrett garrett@math.umn.edu http://www.math.umn.edu/ garrett/ [This document is http://www.math.umn.edu/ garrett/m/real/notes 2016-17/08a-ops

More information

1 Review of di erential calculus

1 Review of di erential calculus Review of di erential calculus This chapter presents the main elements of di erential calculus needed in probability theory. Often, students taking a course on probability theory have problems with concepts

More information

QM and Angular Momentum

QM and Angular Momentum Chapter 5 QM and Angular Momentum 5. Angular Momentum Operators In your Introductory Quantum Mechanics (QM) course you learned about the basic properties of low spin systems. Here we want to review that

More information

An Inverse Problem for the Matrix Schrödinger Equation

An Inverse Problem for the Matrix Schrödinger Equation Journal of Mathematical Analysis and Applications 267, 564 575 (22) doi:1.16/jmaa.21.7792, available online at http://www.idealibrary.com on An Inverse Problem for the Matrix Schrödinger Equation Robert

More information

This ODE arises in many physical systems that we shall investigate. + ( + 1)u = 0. (λ + s)x λ + s + ( + 1) a λ. (s + 1)(s + 2) a 0

This ODE arises in many physical systems that we shall investigate. + ( + 1)u = 0. (λ + s)x λ + s + ( + 1) a λ. (s + 1)(s + 2) a 0 Legendre equation This ODE arises in many physical systems that we shall investigate We choose We then have Substitution gives ( x 2 ) d 2 u du 2x 2 dx dx + ( + )u u x s a λ x λ a du dx λ a λ (λ + s)x

More information

Properties of Transformations

Properties of Transformations 6. - 6.4 Properties of Transformations P. Danziger Transformations from R n R m. General Transformations A general transformation maps vectors in R n to vectors in R m. We write T : R n R m to indicate

More information

Complex symmetric operators

Complex symmetric operators Complex symmetric operators Stephan Ramon Garcia 1 Complex symmetric operators This section is a brief introduction to complex symmetric operators, a certain class of Hilbert space operators which arise

More information

The Fisher-Hartwig Conjecture and Toeplitz Eigenvalues

The Fisher-Hartwig Conjecture and Toeplitz Eigenvalues The Fisher-Hartwig Conjecture and Toeplitz Eigenvalues Estelle L. Basor Kent E. Morrison Department of Mathematics California Polytechnic State University San Luis Obispo, CA 93407 6 September 99 Abstract

More information

ORTHOGONAL POLYNOMIALS

ORTHOGONAL POLYNOMIALS ORTHOGONAL POLYNOMIALS 1. PRELUDE: THE VAN DER MONDE DETERMINANT The link between random matrix theory and the classical theory of orthogonal polynomials is van der Monde s determinant: 1 1 1 (1) n :=

More information