THE CONDITIONAL DISTRIBUTION OF THE EIGENVALUES OF THE GINIBRE ENSEMBLE
HWANWOO KIM AND EUGENE SIEGEL

Abstract. It is known that the limiting distribution of the normalized eigenvalues of an n × n matrix with independent complex Gaussian entries is uniform in the unit disk D ⊂ C as n tends to infinity. We are interested in the conditional distribution of the eigenvalues in the presence of obstacles restricting their location, that is, requiring the eigenvalues to lie outside an open subset G of D. In the limit, the optimal distribution consists of a singular component, supported on the boundary of the obstacle G, and a uniform component, supported on D \ G. In special cases (e.g. circular or elliptical regions) it is possible to find an analytic solution; in general, however, this is difficult and one has to rely on numerical methods. We employ the balayage method in order to find the optimal distribution via the solution of a suitable Dirichlet problem. Several examples are provided to illustrate the use of the method.

Contents
1. Introduction
2. Joint Distribution of the eigenvalues
3. A large deviation principle
3.1. The Equilibrium Measure
4. The Balayage Method and the invariance property of J
4.1. Invariance property of the functional J
5. Examples of the Balayage Method
5.1. Disk of radius r
5.2. Ellipse
6. Acknowledgements
References

1. Introduction

Originally introduced by Wishart in 1928 [Wis28], random matrices are used in many areas of science, including mathematics, statistics, and physics [BKS+06, Meh04, AGZ05]. For example, a large variety of problems can be stated in terms of an n-dimensional system of linear ordinary differential equations of the form

(1)  dx/dt = Mx,

where x ∈ C^n and the coefficient matrix M ∈ M_C^{n×n} has complex random entries. Since the properties of solutions to the system in (1) are affected by the location of the eigenvalues of M in the complex plane, it becomes necessary to study the random matrix M.
For a real symmetric random matrix with independent entries, it is known that the limiting distribution of the eigenvalues is given by the semicircle law [Meh04, TV08]. For a non-symmetric random matrix with independent complex entries, it has been shown that the limiting distribution

[Footnote: E. Siegel was supported by U.S. National Science Foundation Grant DMS-0459.]
of the eigenvalues is given by the circular law [TV08]. However, it is not yet known how the limiting distribution of the eigenvalues changes if there are obstacles or restrictions on the locations of the eigenvalues. In this paper, we focus on random matrices with standard complex normal entries, also known as the Ginibre ensemble [Gin65], and examine how the limiting distribution of the eigenvalues of such a matrix changes based on the restrictions imposed on the locations of the eigenvalues.

The structure of this report is as follows: in Section 2 we derive the joint probability distribution of the eigenvalues of an n × n matrix with complex normal entries. Section 3 includes a discussion of the reduction that allows for a description of the probability of certain constraints on the eigenvalues in terms of continuous measures. Section 4 deals with the balayage method for finding the distribution of the eigenvalues in terms of the solution of a suitable Dirichlet problem. Lastly, in Section 5, we finish this paper by discussing some examples of the balayage method corresponding to different restrictions on the location of the eigenvalues of M.

2. Joint Distribution of the eigenvalues

In this section, we derive the joint probability density function (p.d.f.) of the eigenvalues of a random matrix M whose elements M_ij are independent and identically distributed (i.i.d.) complex normal random variables. We call this random matrix M the standard complex random matrix, where a standard complex normal random variable is defined as follows:

Definition 2.1. A random variable Q is said to be a standard complex normal random variable if its density is given by (1/π) e^{−|q|²} dA(q), where A is the area measure on C.

The joint distribution of n independent standard complex normal random variables is

(2)  (1/π^n) e^{−Σ_{i=1}^n |q_i|²} Π_{i=1}^n dA(q_i).

Now let G ⊂ C be open, and define the set of matrices
A_G = {M ∈ M_C^{n×n} : ∀i, λ_i ∉ G},

where {λ_i}_{i=1}^n is the set of (random) eigenvalues of the matrix M, and M_C^{n×n} denotes the set of all n × n random matrices with complex normal random variables as entries. Given G, our goal is to compute the probability P(M ∈ A_G) of choosing, among all random matrices of size n × n, a standard complex random matrix whose eigenvalues lie outside G. From (2):

P(M ∈ A_G) = (1/π^{n²}) ∫_{A_G} e^{−Σ_{i,j} |m_ij|²} Π_{i,j} dA(m_ij).

Before we start to find the joint distribution of the eigenvalues, recall that a (generally non-symmetric) square matrix M can be decomposed as

(3)  M = V(Z + T)V*,

where V is a unitary matrix, i.e. V*V = I, with V* the conjugate transpose of V; Z is a diagonal matrix; and T is a strictly upper triangular matrix. This is known as the Schur decomposition. See [AR] for further details.
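Before turning to the joint density, the setup can be made concrete numerically. The following sketch (an illustration of ours, not part of the paper; `numpy` assumed available) samples a standard complex random matrix and computes its eigenvalues; the rescaling by √n anticipates the circular-law behavior discussed below.

```python
import numpy as np

def ginibre_eigenvalues(n, rng):
    """Sample the eigenvalues of an n x n standard complex random matrix.

    Entries are i.i.d. standard complex normal: real and imaginary parts
    are independent N(0, 1/2), so that E|M_ij|^2 = 1.
    """
    m = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    return np.linalg.eigvals(m)

rng = np.random.default_rng(0)
n = 500
lam = ginibre_eigenvalues(n, rng)

# After rescaling by sqrt(n), the eigenvalues fill the unit disk.
w = lam / np.sqrt(n)
frac_near_disk = float(np.mean(np.abs(w) <= 1.1))
frac_half = float(np.mean(np.abs(w) <= 0.5))   # uniform limit: area fraction 1/4
print(frac_near_disk, frac_half)
```

A disk of radius 1/2 has a quarter of the area of the unit disk, so roughly a quarter of the rescaled eigenvalues should land in it, consistent with the uniform limiting law.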
Theorem 2.1 ([HKPV09, Section 6.3]). Given an n × n standard complex random matrix M and Z = (z_1, z_2, ..., z_n), the n-tuple of eigenvalues of M in uniform random order, the joint distribution of Z is

f_Z(z) = (1/C_n) e^{−Σ_{j=1}^n |z_j|²} Π_{i<j} |z_i − z_j|² dA(z),

where C_n is a normalizing constant and A is the area measure on C^n.

Our approach is to use the change of variables method to re-express (2) in terms of the eigenvalues instead of the entries of the random matrix M. We proceed as follows.

Step 1: The term e^{−Σ_{i,j}|m_ij|²} can be rewritten as an expression involving the eigenvalues of the matrix M. Note that Σ_{i,j} |m_ij|² = tr(M*M), where M* is the conjugate transpose of M. From (3), one can write M as V(Z + T)V*. Note that this decomposition is not unique. In order to make it unique, we can, for example, require that the diagonal elements of V be positive; we may assume that these diagonal values are non-zero. Therefore,

tr(M*M) = tr(V(Z + T)*V* V(Z + T)V*)
        = tr(V*V(Z + T)* V*V(Z + T))      (tr(AB) = tr(BA))
        = tr((Z + T)*(Z + T))             (V is a unitary matrix)
        = tr(Z*Z) + tr(T*T),

where the last line holds because Z is a diagonal matrix and T is a strictly upper triangular matrix. Therefore we have

e^{−Σ_{i,j}|m_ij|²} = e^{−(Σ_{j=1}^n |z_j|² + Σ_{i<j} |t_ij|²)}.

Step 2: From the Schur decomposition and the matrix differentiation rules, we have

dM = dV(Z + T)V* + V(dZ + dT)V* + V(Z + T)dV*.

Since V*V = I, it follows that V*dV is skew-Hermitian, which leads to

dM = V(V*dV(Z + T) − (Z + T)V*dV + dZ + dT)V*.

Therefore,

V*dMV = V*dV(Z + T) − (Z + T)V*dV + dZ + dT.

Now put V*dMV = Λ = (λ_ij), Z + T = S = (s_ij), and V*dV = Ω = (ω_ij); then

(4)  Λ = ΩS − SΩ + dS.

Since Λ is the image of dM under a unitary transformation,

Π_{i,j} (dm̄_ij ∧ dm_ij) = Π_{i,j} (λ̄_ij ∧ λ_ij), which we abbreviate as Π_{i,j} λ_ij.
We express Π_{i,j} λ_ij in terms of the eigenvalues in the following way. From (4),

λ_ij = Σ_{k=1}^{j} ω_ik s_kj − Σ_{k=i}^{n} s_ik ω_kj + ds_ij
     = ω_ij (s_jj − s_ii) + Σ_{k=1}^{j−1} ω_ik s_kj − Σ_{k=i+1}^{n} s_ik ω_kj,                    i > j,
     = ds_ij + (ω_ii − ω_jj) s_ij + Σ_{k=1, k≠i}^{j} ω_ik s_kj − Σ_{k=i, k≠j}^{n} s_ik ω_kj,      i ≤ j.
We evaluate Π_{i,j} λ_ij in the order

λ̄_{n1}, λ_{n1}, λ̄_{n2}, λ_{n2}, ..., λ̄_{nn}, λ_{nn}, λ̄_{(n−1)1}, λ_{(n−1)1}, λ̄_{(n−1)2}, λ_{(n−1)2}, ..., λ̄_{11}, λ_{11},

which allows for the cancellations in Π_{i>j} ω_ij. Then, using the above expression and the properties of the wedge product, we get

Π_{i,j} λ_ij = Π_{i>j} |z_i − z_j|² Π_i dz_i Π_{i>j} ω_ij Π_{i<j} dt_ij + terms involving t_ij (ω_ii − ω_jj).

Recall that Ω = V*dV is skew-Hermitian and that the dimension of the corresponding space of forms is n² − n. In addition, note that for all k, ω_kk ∧ (Π_{i>j} ω_ij) is an (n² − n + 1)-form, and ω_kk ∧ (Π_{i>j} ω_ij) = 0 (it is an (n² − n + 1)-form on an (n² − n)-dimensional manifold). Finally, we get

Π_{i,j} (dm̄_ij ∧ dm_ij) = Π_{i,j} λ_ij = Π_{i>j} |z_i − z_j|² Π_i dz_i Π_{i>j} ω_ij Π_{i<j} dt_ij.

Step 3: Combining Steps 1 and 2, we can re-express the joint distribution of the entries of the matrix as

(1/π^{n²}) e^{−(Σ_{j=1}^n |z_j|² + Σ_{i<j} |t_ij|²)} Π_{i>j} |z_i − z_j|² Π_i dz_i Π_{i>j} ω_ij Π_{i<j} dt_ij.

Integrating out the ω_ij and t_ij terms, we get

f_Z(z) = (1/C_n) e^{−Σ_{j=1}^n |z_j|²} Π_{i>j} |z_i − z_j|² dA(z),

where C_n is a normalizing constant that depends on n.

Remark 2.1 ([HKPV09, p. 60]). One can explicitly compute the normalizing constant C_n in the last expression by evaluating the integral

C_n = ∫ e^{−Σ_{j=1}^n |z_j|²} Π_{i>j} |z_i − z_j|² dA(z).

One can show that C_n = π^n Π_{k=1}^n k!.

The eigenvalues tend to be uniformly distributed on the disk of radius √n; see Figure 1 for the case n = 1000. After rescaling the eigenvalues by √n, the eigenvalues tend to lie inside the unit disk centered at the origin. Substituting z_j = √n w_j, the joint distribution of W = (w_1, w_2, ..., w_n) is given by

f_W(w) = (1/C_n) exp(−n² [(1/n) Σ_{i=1}^n |w_i|² − (1/n²) Σ_{i≠j} log|w_i − w_j|]) dA(w),

where A(w) is the area measure on C^n and C_n absorbs the constant factors.

3. A large deviation principle

In this section, using the joint distribution of the eigenvalues derived in the previous section, we introduce a large deviation principle to find the optimal limiting distribution of the eigenvalues in the presence of obstacles. The whole derivation closely follows [PH98]. We first list some notation.
Let G ⊂ D = {w : |w| ≤ 1} be the union of finitely many simply connected domains, and define
Figure 1. The distribution of eigenvalues without normalization.

A_G = C \ G. Let A_G^n be the n-fold Cartesian product of A_G. Then the probability that no eigenvalues lie in G is

P_n(∀i, w_i ∉ G) = ∫_{A_G^n} f_W(w) dA(w) = (1/C_n) ∫_{A_G^n} exp(−n² [(1/n) Σ_i |w_i|² − (1/n²) Σ_{i≠j} log|w_i − w_j|]) dA(w).

Note that the major contribution to the integral comes from the points w ∈ A_G^n that make the expression inside the square brackets smallest. Therefore, after taking the logarithm on both sides, we expect the integral to be asymptotically approximated as

(5)  log P_n(∀i, w_i ∉ G) ≈ −n² inf_{w∈A_G^n} [(1/n) Σ_i |w_i|² − (1/n²) Σ_{i≠j} log|w_i − w_j|] − log C_n.

The above expression can be rewritten in terms of the empirical measure of the eigenvalues.

Definition 3.1. The empirical measure µ_n is defined as

µ_n = (1/n) Σ_{j=1}^n δ_{w_j},

where δ_{w_j} is the Dirac delta measure at w_j. Recall that if f is a continuous function on C, then ∫ f dµ_n = (1/n) Σ_{j=1}^n f(w_j).

Using the definition of the empirical measure, (5) can be rewritten as

log P_n(µ_n ∈ B_G) ≈ −n² inf_{µ_n∈B_G} [∫ |w|² dµ_n(w) − ∫∫_{z≠w} log|z − w| dµ_n(z) dµ_n(w)] − log C_n,
where B_G = {µ : µ(G) = 0}. Now we introduce the functional

(6)  J(µ) = ∫_C |w|² dµ(w) − ∫∫_{C×C} log|z − w| dµ(z) dµ(w).

Theorem 3.1. If µ_n is the sequence of empirical measures of the eigenvalues of the Ginibre ensemble, then

lim_{n→∞} (1/n²) log P_n(µ_n ∈ B_G) = −(inf_{µ∈B_G} J(µ) − 3/4).

A rough strategy of the proof of the above theorem is to find an appropriate upper bound for lim sup_n (1/n²) log P(µ_n ∈ B_G) and a lower bound for lim inf_n (1/n²) log P(µ_n ∈ B_G); after finding the upper and lower bounds, we use the squeeze theorem to get the value of the limit. In this paper, we demonstrate how the upper bound is achieved. For the lower bound, see [AKR6]. One can also show that the conditional limiting measure is the minimizer of the functional J(µ) over µ ∈ B_G.

To begin the proof of the upper bound, we must first introduce some standard definitions and state relevant theorems. See [DZ09] for further explanations.

Definition 3.2. A Polish space is a separable, completely metrizable topological space.

Definition 3.3. Let τ be a Polish space. The weak topology on M_1(τ), the space of probability measures on τ, is the topology generated by the sets

U_{φ,x,δ} = {ν ∈ M_1(τ) : |∫_τ φ dν − x| < δ},

where δ > 0, x ∈ R, and φ ∈ C(τ), the set of continuous functions on τ.

Definition 3.4. A probability measure µ on τ is tight if for each η > 0 there exists a compact set K_η ⊂ τ such that µ(K_η^c) < η.

Remark 3.1. Every probability measure on a Polish space is tight.

Definition 3.5. A sequence {µ_n}_{n∈N} of probability measures on a Polish space X satisfies a large deviation principle (LDP) with speed a_n → ∞ and rate function I if:
(1) I : X → [0, ∞] is lower semi-continuous, that is, lim inf_{x→x_0} I(x) ≥ I(x_0);
(2) for any open set O ⊂ X, −inf_{x∈O} I(x) ≤ lim inf_n (1/a_n) log µ_n(O);
(3) for any closed set F ⊂ X, lim sup_n (1/a_n) log µ_n(F) ≤ −inf_{x∈F} I(x).

Remark 3.2. A sequence {µ_n}_{n∈N} of probability measures on a Polish space X satisfies a weak large deviation principle if:
(1) the rate function I is non-negative and lower semi-continuous;
(2) the lower bound holds for open sets, exactly as in condition (2) of Definition 3.5;
(3) the upper bound holds for compact sets; this is condition (3) of Definition 3.5 with closed sets replaced by compact sets, resulting in a weaker principle.

Definition 3.6. A sequence {µ_n} of probability measures on a Polish space X is exponentially tight if there exists a sequence {K_L}_{L∈N} of compact sets such that

lim_{L→∞} lim sup_{n→∞} (1/a_n) log µ_n(K_L^c) = −∞.

Definition 3.7. A rate function I is good if the level sets {x ∈ X : I(x) ≤ M} are compact for all M ∈ R.
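As a toy illustration of Definition 3.5 (an example of ours, not from this paper), consider the sample mean of n i.i.d. standard Gaussians: Cramér's theorem gives an LDP with speed a_n = n and rate function I(x) = x²/2. Since the sample mean is exactly N(0, 1/n), the tail probability is available in closed form and the convergence of −(1/n) log P(S_n/n ≥ x) to I(x) can be checked directly:

```python
import math

def gaussian_tail(t):
    """P(Z >= t) for Z ~ N(0,1), via the complementary error function."""
    return 0.5 * math.erfc(t / math.sqrt(2.0))

x = 1.0
for n in (10, 100, 1000):
    # Sample mean of n standard Gaussians is N(0, 1/n), so
    # P(mean >= x) = P(Z >= x * sqrt(n)).
    rate_estimate = -math.log(gaussian_tail(x * math.sqrt(n))) / n
    print(n, rate_estimate)  # approaches I(x) = x**2 / 2 = 0.5
```

The polynomial prefactor of the Gaussian tail contributes a (log n)/n correction, which is exactly the kind of subexponential term the LDP framework ignores.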
Theorem 3.2 ([DZ09, Lemma 1.2.18]).
(a) If {µ_n} satisfies the weak LDP and is exponentially tight, then it satisfies the full LDP and the rate function I is good.
(b) If {µ_n} satisfies the upper bound of the LDP with a good rate function I, then it is exponentially tight.

To prove the upper bound in Theorem 3.1, recall the definition of J(µ):

J(µ) = ∫_C |w|² dµ(w) − ∫∫_{C×C} log|z − w| dµ(z) dµ(w).

In our case, the rate function is J(µ) and a_n = n². One can show that the level sets {µ ∈ M_1(C) : J(µ) ≤ M} are compact; thus J(µ) is a good rate function. We first show that the functional J(µ) is lower semi-continuous, and strictly convex for measures with compact support. Consider µ ∈ M_1(C), where M_1(C) is the set of all probability measures on C, and let

Σ(µ) = ∫∫_C log|x − y| dµ(x) dµ(y),

which is also known as the logarithmic energy of the measure µ. Notice that

Σ(µ) = inf_{ε>0} ∫∫ log(ε + |x − y|) dµ(x) dµ(y).

Σ is an upper semi-continuous functional of µ, since for each ε > 0 the expression inside the infimum is a continuous functional of µ. Finally, since

(7)  J(µ) = ∫_C |z|² dµ(z) − Σ(µ),

one can observe that J is a lower semi-continuous functional, because the first term is linear and continuous. One can show this in an alternative way by introducing

F(x, y) = −log|x − y| + (1/2)(|x|² + |y|²),  x, y ∈ C,
F_α(x, y) = min{F(x, y), α},  α > 0.

Note that F is bounded from below by some constant, and F_α is bounded above and below and continuous. Therefore the integral ∫∫_{C×C} F_α(x, y) dµ(x) dµ(y) is a continuous functional. Note that

sup_{α>0} ∫∫_C F_α(x, y) dµ(x) dµ(y) = ∫∫_C F(x, y) dµ(x) dµ(y) = ∫_C |x|² dµ(x) − ∫∫_C log|x − y| dµ(x) dµ(y) = J(µ).

The above expression is lower semi-continuous, since it is the supremum of continuous functionals. To show that J is a strictly convex functional for measures with compact support, we first introduce the following lemma.

Lemma 3.1 ([ST97, Lemma 1.8]). Let ν = µ_1 − µ_2, where µ_1, µ_2 are probability measures with compact support and finite logarithmic energy.
Then Σ(ν) ≤ 0, and Σ(ν) = 0 if and only if ν = 0.

Now suppose that µ_1 and µ_2 satisfy the conditions of the above lemma. In order to prove the convexity of J, we show the concavity of Σ, and it is sufficient to prove mid-point concavity. In other words, we wish to show that

(8)  Σ((µ_1 + µ_2)/2) ≥ (1/2)(Σ(µ_1) + Σ(µ_2)).
By definition,

(9)  Σ((µ_1 + µ_2)/2) = ∫∫_C log|x − y| d((µ_1 + µ_2)/2)(x) d((µ_1 + µ_2)/2)(y)
                      = (1/4)Σ(µ_1) + (1/4)Σ(µ_2) + (1/2) ∫∫_C log|x − y| dµ_1(x) dµ_2(y).

By Lemma 3.1, we also have

(10)  0 ≥ Σ((µ_1 − µ_2)/2) = (1/4)Σ(µ_1) + (1/4)Σ(µ_2) − (1/2) ∫∫_C log|x − y| dµ_1(x) dµ_2(y).

Taking the sum of (9) and (10), we get (8). In addition, Σ is strictly concave, since the equality in (8) holds only when µ_1 = µ_2. Hence, from (7), J(µ) is strictly convex for measures with compact support.

Now we introduce some preliminary lemmas needed to prove the upper bound of Theorem 3.1.

Lemma 3.2 ([PH98, Lemma 10]). Let C_n be the normalizing constant,

C_n = ∫_{C^n} exp(−n Σ_{j=1}^n |z_j|²) Π_{j<k} |z_j − z_k|² dA(z).

Then

lim sup_{n→∞} (1/n²) log C_n ≤ −inf_{µ∈M_1(C)} J(µ).

Remark 3.3. We will show later that inf_{µ∈M_1(C)} J(µ) = 3/4.

Proof. We use the following simple fact:

Σ_{j<k} (a_j + a_k) = (n − 1) Σ_{j=1}^n a_j.

Using this with a_j = |z_j|², we can rewrite C_n as

C_n = ∫_{C^n} exp(−Σ_{j=1}^n |z_j|²) exp(Σ_{j<k} (2 log|z_j − z_k| − |z_j|² − |z_k|²)) dA(z).

The above expression can be rewritten as

C_n = ∫_{C^n} exp(−Σ_{j=1}^n |z_j|²) exp(−2 Σ_{j<k} F(z_j, z_k)) dA(z)
    = ∫_{C^n} exp(−Σ_{j=1}^n |z_j|²) exp(−n² ∫∫_{x≠y} F(x, y) dµ_z^n(x) dµ_z^n(y)) dA(z),

where µ_z^n = (1/n) Σ_{j=1}^n δ_{z_j}. We can bound the above integral by

exp(−n² inf_{µ∈M_1(C)} ∫∫_{C×C} F(x, y) dµ(x) dµ(y)) ∫_{C^n} exp(−Σ_{j=1}^n |z_j|²) dA(z).

Rewriting the integral over C^n, we have

C_n ≤ exp(−n² inf_{µ∈M_1(C)} ∫∫_{C×C} F(x, y) dµ(x) dµ(y)) (∫_C e^{−|z|²} dA(z))^n.
After taking the logarithm on both sides, dividing by n², and taking the upper limit, we get the inequality

lim sup_{n→∞} (1/n²) log C_n ≤ −inf_{µ∈M_1(C)} ∫∫_{C×C} F(x, y) dµ(x) dµ(y). ∎

Definition 3.8. For µ ∈ M_1(C) and G ⊂ M_1(C), set

G̃ = {z ∈ C^n : µ_z^n ∈ G},

where µ_z^n is the empirical measure of z.

Lemma 3.3 ([PH98, Lemma 11]). For all µ ∈ M_1(C),

(11)  inf_{G : µ∈G} [lim sup_{n→∞} (1/n²) log P_n(G̃)] ≤ −J(µ) − lim inf_{n→∞} (1/n²) log C_n,

where the infimum is over open neighborhoods G of µ.

Proof. Since F_α = min{F, α} ≤ F, and the diagonal terms contribute at most nα, it follows that

n² ∫∫_C F_α(x, y) dµ_z^n(x) dµ_z^n(y) ≤ Σ_{j≠k} F(z_j, z_k) + nα.

Using this fact, it follows that

P_n(G̃) ≤ (1/C_n) ∫_{G̃} exp(−Σ_{j=1}^n |z_j|²) exp(−n² ∫∫_C F_α(x, y) dµ_z^n(x) dµ_z^n(y) + nα) dA(z).

This, in turn, gives

P_n(G̃) ≤ (1/C_n) (∫_C e^{−|z|²} dA(z))^n exp(−n² inf_{µ'∈G} ∫∫_C F_α(x, y) dµ'(x) dµ'(y) + nα).

Thus we get

lim sup_{n→∞} (1/n²) log P_n(G̃) ≤ −inf_{µ'∈G} ∫∫_C F_α(x, y) dµ'(x) dµ'(y) − lim inf_{n→∞} (1/n²) log C_n.

Taking the infimum over all open neighborhoods G of µ and using the continuity and boundedness of F_α, we are left with

inf_{G : µ∈G} [lim sup_{n→∞} (1/n²) log P_n(G̃)] ≤ −∫∫_C F_α(x, y) dµ(x) dµ(y) − lim inf_{n→∞} (1/n²) log C_n.

The above inequality holds for all α; taking the limit as α → ∞, we get (11) by the monotone convergence theorem. ∎

Next we show that P_n is exponentially tight. To show this, for any positive α we must find a compact set K_α ⊂ M_1(C) such that

lim sup_{α→∞} lim sup_{n→∞} (1/n²) log P_n(K̃_α^c) = −∞.

Lemma 3.4 ([PH98, Lemma 14]). P_n is exponentially tight.

Proof. For α > 0, let K_α = {µ ∈ M_1(C) : ∫_C |z|² dµ(z) ≤ α}. It can be proven that K_α is closed; see [PH98] for more details. Therefore, if we show that K_α is tight, then K_α is compact. Consider µ ∈ K_α. We know that

α ≥ ∫_C |z|² dµ(z) ≥ ∫_{|z|>r} |z|² dµ(z) ≥ r² µ({|z| > r}) = r² µ(D_r^c),
where D_r = {|z| ≤ r}. Note that the uniform estimate µ(D_r^c) ≤ α/r² implies that the family K_α forms a tight family.

Now we find an upper bound for P_n(K̃_α^c). Note that

P_n(K̃_α^c) = (1/C_n) ∫_{K̃_α^c} exp(−n Σ_{j=1}^n |z_j|²) Π_{j<k} |z_j − z_k|² dA(z).

Using the fact that (1/n) Σ_j |z_j|² > α on K̃_α^c, we know that

P_n(K̃_α^c) ≤ (1/C_n) exp(−αn²/2) I_n,

where

I_n = ∫_{C^n} exp(−(n/2) Σ_{j=1}^n |z_j|²) Π_{j<k} |z_j − z_k|² dA(z).

Similarly to Lemma 3.2 (with |z|² replaced by |z|²/2),

I_n ≤ exp(−n² (inf_{µ∈M_1(C)} [∫_C (|z|²/2) dµ(z) − Σ(µ)] + o(1)))

holds for large enough n. We will later show that the infimum is attained. It remains to find a lower bound for C_n. Recall that

C_n = ∫_{C^n} exp(−n Σ_{j=1}^n |z_j|²) Π_{j<k} |z_j − z_k|² dA(z).

To find a lower bound, we integrate over the compact subset

B_n = {z = (z_1, z_2, ..., z_n) ∈ C^n : |z_k − w_k| ≤ 1/n²},

where w_k = exp(2πik/n), k ∈ {1, 2, ..., n}, are the nth roots of unity. If z ∈ B_n,

exp(−n Σ_j |z_j|²) ≥ exp(−n · n (1 + 1/n²)²) ≥ exp(−n² − 3).

We know that

Π_{j<k} |z_j − z_k|² = Π_{j≠k} |z_j − z_k|.

Using the triangle inequality and the fact that each eigenvalue is within 1/n² of its respective root of unity, we get

Π_{j≠k} |z_j − z_k| ≥ Π_{j≠k} (|w_j − w_k| − |z_j − w_j| − |z_k − w_k|) ≥ Π_{j≠k} (|w_j − w_k| − 2/n²).

For large n, two distinct roots w_j and w_k are at least of order 1/n apart, leading to

Π_{j≠k} |z_j − z_k| ≥ (1 − c/n)^{n(n−1)} Π_{j≠k} |w_j − w_k|

for some constant c. In order to proceed with our argument, we make the following claim.

Claim 3.1. |D_n(w_1, w_2, ..., w_n)| = (√n)^n, where D_n is the determinant of the n × n Vandermonde matrix.
Proof. Recall that D_n(w_1, ..., w_n) is the determinant of the matrix whose jth row is

v_j = (1, w_j, w_j², ..., w_j^{n−1}) ∈ C^n,  j ∈ {1, 2, ..., n}.

Note that the w_j, j ∈ {1, ..., n}, are the nth roots of unity. Therefore ∥v_j∥² = Σ_{l=0}^{n−1} 1 = n. In addition, the fact that v_j ⊥ v_k for j ≠ k can be seen from the geometric series

⟨v_j, v_k⟩ = Σ_{l=0}^{n−1} w_j^l (w̄_k)^l = Σ_{l=0}^{n−1} exp(2πil(j−k)/n) = (1 − exp(2πi(j−k))) / (1 − exp(2πi(j−k)/n)) = 0.

The modulus of the determinant of the Vandermonde matrix is the volume of the n-dimensional parallelepiped spanned by the rows; for n mutually orthogonal vectors of length √n this gives |D_n(w_1, ..., w_n)| = (√n)^n. ∎

In addition to the claim, notice that

|D_n(w_1, ..., w_n)| = Π_{j<k} |w_j − w_k| = (√n)^n,  and hence  Π_{j≠k} |w_j − w_k| = n^n.

Combining the estimates,

C_n ≥ ∫_{B_n} exp(−n Σ_{j=1}^n |z_j|²) Π_{j<k} |z_j − z_k|² dA(z) ≥ exp(−Cn² + n log n) ∫_{B_n} dA(z).

The integral on the right-hand side equals (π/n⁴)^n, which is bounded below by exp(−Cn log n). Finally, we get

C_n ≥ exp(−Cn² − Cn log n).

Recall that

P_n(K̃_α^c) = (1/C_n) ∫_{K̃_α^c} exp(−n Σ_j |z_j|²) Π_{j<k} |z_j − z_k|² dA(z);

using our previous estimates, we get

lim sup_{n→∞} (1/n²) log P_n(K̃_α^c) ≤ −α/2 + C

for some constant C. Taking the limit of the right-hand side as α → ∞, we get lim sup_{n,α→∞} (1/n²) log P_n(K̃_α^c) = −∞, and this implies the exponential tightness of the sequence µ_n (or P_n). ∎

Since P_n satisfies exponential tightness, we obtain the upper bound for the left-hand side in Theorem 3.1. One can also prove the lower bound for the aforementioned term in a manner similar to Lemma 3.3 (see [PH98]). In addition, a lower bound can be found in [AKR6].
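Claim 3.1 is easy to confirm numerically (a quick check of ours, not part of the paper; `numpy` assumed): the rows of the Vandermonde matrix at the nth roots of unity are mutually orthogonal vectors of length √n, so the Gram matrix is nI and the determinant has modulus n^{n/2}.

```python
import numpy as np

n = 8
w = np.exp(2j * np.pi * np.arange(n) / n)   # the n-th roots of unity
V = np.vander(w, increasing=True)           # row j is (1, w_j, ..., w_j^{n-1})

# Orthogonality of the rows: the Gram matrix V V* equals n * I.
G = V @ V.conj().T
gram_is_n_identity = bool(np.allclose(G, n * np.eye(n)))

# Hence |det V| = n^{n/2}; for n = 8 this is 8**4 = 4096.
det_modulus = abs(np.linalg.det(V))
print(gram_is_n_identity, det_modulus, n ** (n / 2))
```

The same quantity also equals the product Π_{j<k}|w_j − w_k| used in the lower bound for C_n above.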
3.1. The Equilibrium Measure. In this section, we find the unique global minimizer of the functional J, also known as the equilibrium measure. We first evaluate the functional J for the uniform probability measure on the unit disk,

ν_eq = (1/π) 1_D dA,

where D is the unit disk centered at the origin. We will then show that the global minimizer of the functional J is in fact this measure. To show this result, we introduce the notion of the logarithmic potential.

Definition 3.9. The logarithmic potential of a probability measure µ ∈ M_1(C) is given by

U^µ(z) = ∫_C log(1/|z − w|) dµ(w).

Now consider the functional J applied to ν_eq:

(12)  J(ν_eq) = ∫_C |z|² dν_eq(z) − ∫∫_{C×C} log|z − w| dν_eq(z) dν_eq(w).

To compute the first integral, we change to polar coordinates and arrive at

∫_C |z|² dν_eq(z) = (1/π) ∫_0^{2π} ∫_0^1 r² · r dr dθ = 1/2.

To compute the second integral in (12), we first find the logarithmic potential of the measure ν_eq. Working in polar coordinates, we get

U^{ν_eq}(z) = −(1/π) ∫_0^1 ∫_0^{2π} log|z − re^{iθ}| dθ r dr.

Next, we claim the following result.

Claim 3.2. U^{ν_eq}(z) = (1 − |z|²)/2 for |z| ≤ 1, and U^{ν_eq}(z) = log(1/|z|) for |z| > 1.

Proof. By a well-known formula [Lan3, p. 34], for α, β ∈ C (not both 0),

(1/2π) ∫_0^{2π} log|αe^{iθ} − β| dθ = log max{|α|, |β|}.

One can proceed by considering two cases. When |z| > 1, we get

U^{ν_eq}(z) = −2 ∫_0^1 log max{|z|, r} · r dr = −2 log|z| ∫_0^1 r dr = log(1/|z|).

On the other hand, for |z| ≤ 1,

U^{ν_eq}(z) = −2 ∫_0^{|z|} log|z| · r dr − 2 ∫_{|z|}^1 log r · r dr
            = −|z|² log|z| − 2 [(r²/2) log r − r²/4]_{r=|z|}^{r=1}
            = −|z|² log|z| + 1/2 + |z|² log|z| − |z|²/2 = (1 − |z|²)/2. ∎
By Claim 3.2, the second integral in (12) becomes

−∫∫ log|z − w| dν_eq(z) dν_eq(w) = ∫ U^{ν_eq}(z) dν_eq(z) = ∫ ((1 − |z|²)/2) dν_eq(z) = (1/2) ∫ dν_eq(z) − (1/2) ∫ |z|² dν_eq(z).

Recall that we already computed ∫ |z|² dν_eq = 1/2; the remaining integral is simply the integral of a probability measure over its support, giving the value 1. Thus

J(ν_eq) = 1/2 + (1/2 − 1/4) = 3/4.

We will use this result later in Section 4. Now we prove that the uniform probability measure is the global minimizer of the functional J. To prove this statement, we introduce the notion of a pseudo inner product of probability measures.

Definition 3.10. Let µ_1, µ_2 be measures with finite logarithmic energy and compact support. Then the pseudo inner product of µ_1 and µ_2 is

⟨µ_1, µ_2⟩ = ∫∫_C log(1/|z − w|) dµ_1(z) dµ_2(w),

and the pseudo norm associated with this pseudo inner product is ∥µ∥ = √⟨µ, µ⟩.

By taking ν = µ_1 − µ_2 in Lemma 3.1, we get the following corollary.

Corollary 3.1. If µ_1, µ_2 ∈ M_1(C) have compact support and finite logarithmic energy, then

∥µ_1 − µ_2∥² = −Σ(µ_1 − µ_2) ≥ 0,

where the equality holds if and only if µ_1 = µ_2.

Proposition 3.1. Let µ be a measure supported inside D with finite logarithmic energy. Then

∥µ − ν_eq∥² = J(µ) − J(ν_eq).

Proof. Note that

∥µ − ν_eq∥² = ⟨µ, µ⟩ − ⟨µ, ν_eq⟩ − ⟨ν_eq, µ⟩ + ⟨ν_eq, ν_eq⟩,

which is equivalent to

−∫∫_C log|z − w| dµ(z) dµ(w) + 2 ∫∫_C log|z − w| dν_eq(w) dµ(z) − ∫∫_C log|z − w| dν_eq(z) dν_eq(w).

By Claim 3.2,

∥µ − ν_eq∥² = −∫∫_C log|z − w| dµ(z) dµ(w) + ∫_C (|z|² − 1) dµ(z) + 1/4
            = ∫_C |z|² dµ(z) − ∫∫_C log|z − w| dµ(z) dµ(w) − 3/4
            = J(µ) − J(ν_eq). ∎

Remark 3.4. If the support of µ is not contained in D, then ∥µ − ν_eq∥² ≤ J(µ) − J(ν_eq).

Corollary 3.2. The global minimizer of the functional J(µ) is ν_eq, the uniform probability measure on the unit disk.

Proof. From Corollary 3.1 and Proposition 3.1, we know that J(µ) − J(ν_eq) ≥ 0 for measures with support in D. Additionally, one can see from Corollary 3.1 that this global minimizer is unique, since J(µ) = J(ν_eq) holds only when µ = ν_eq. ∎
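The value J(ν_eq) = 3/4 can also be checked by Monte Carlo (an illustrative sketch of ours, not from the paper; `numpy` assumed): sampling points uniformly from the unit disk gives ∫|z|² dν_eq ≈ 1/2 and Σ(ν_eq) ≈ −1/4.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 400_000

def uniform_disk(size):
    """Sample points uniformly from the unit disk (sqrt of radius for area uniformity)."""
    r = np.sqrt(rng.random(size))
    theta = 2 * np.pi * rng.random(size)
    return r * np.exp(1j * theta)

z, w = uniform_disk(N), uniform_disk(N)
second_moment = float(np.mean(np.abs(z) ** 2))      # estimates 1/2
log_energy = float(np.mean(np.log(np.abs(z - w))))  # estimates Sigma(nu_eq) = -1/4
J = second_moment - log_energy                      # estimates J(nu_eq) = 3/4
print(second_moment, log_energy, J)
```

The logarithmic singularity at z = w is integrable, so the pairwise estimator of Σ(ν_eq) has finite variance and converges at the usual 1/√N rate.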
We see that minimizing J(µ) is equivalent to minimizing ∥µ − ν_eq∥². In addition, the measure that minimizes ∥µ − ν_eq∥² out of all measures satisfying µ(G) = 0 is known as the balayage of µ to D \ G. In the following section, we discuss the balayage method in more detail.

4. The Balayage Method and the invariance property of J

Let G ⊂ D be an open set with smooth boundary. In the previous section, we mentioned the notion of the balayage of µ to D \ G as the measure that minimizes ∥µ − ν_eq∥² over all measures such that µ(G) = 0, where G ⊂ D = {|w| ≤ 1}. In this section, we list some properties of balayage and introduce the balayage method, which can be used to find the optimal limiting distribution for a given set G. Lastly, we end this section by using properties of balayage to prove the translation invariance property of J.

Let the balayage of µ to D \ G be denoted by µ_G. It can be expressed as a sum of two parts: (1) a singular component µ_G^s, supported on the boundary of the obstacle G, and (2) a uniform component µ_G^u, supported on the complement D \ G. In addition, the solution satisfies

U^{µ_G}(z) = (1 − |z|²)/2 for z ∈ D \ G,  and  U^{µ_G}(z) = log(1/|z|) for z ∈ C \ D,

where µ_G = µ_G^s + µ_G^u [ST97, Section II.4].

Now we introduce the balayage method, which allows us to find the singular part of the optimal limiting distribution for a nice domain G ⊂ D. The balayage method gives the singular part in terms of the solution of a Dirichlet problem defined on the domain G. Specifically, by solving a Dirichlet problem, we will find a function w such that: (1) w is harmonic in G; (2) w agrees with the logarithmic potential U^{ν_eq} on D \ G, where ν_eq is the equilibrium measure. After solving for w, we can recover the measure from the function w using the following theorem.

Theorem 4.1 ([ST97, Section II]).
The singular part of the optimal limiting measure µ_G of the distribution of the eigenvalues along ∂G is given by

µ_G^s = −(1/2π) (∂w/∂n_+ + ∂w/∂n_−) ds,

where the function w is the solution of the Dirichlet problem on G that agrees with the logarithmic potential U^{ν_eq} on D \ G; ∂w/∂n_+ and ∂w/∂n_− are, respectively, the outward and inward normal derivatives; and ds is the arc length measure on ∂G.

Therefore, as long as we can solve the Dirichlet problem on G, we can find the optimal limiting measure for the eigenvalues. Additionally, note from Theorem 4.1 that µ_G^s is invariant under translations of G within D. This is because the equilibrium measure is translation invariant in C. In this paper, we provide some examples of nice domains where the Dirichlet problem can be easily solved. However, in general, solving a Dirichlet problem analytically turns out to be a very difficult problem. In Section 5, we provide examples of solutions of the Dirichlet problem for some nice domains.

4.1. Invariance property of the functional J. In addition to minimizing J, using Theorem 4.1, one can investigate the dependence of J on G. Now we show the translation invariance property of the functional J. Before we show the property, recall that J(µ_G) is given by

J(µ_G) = ∫_C |z|² dµ_G(z) − ∫∫_{C×C} log|z − w| dµ_G(z) dµ_G(w) = ∫_C (|z|² + U^{µ_G}(z)) dµ_G(z)
by the definition of the logarithmic potential. Since µ_G is supported only on D \ G, the logarithmic potential U^{µ_G}(z) takes the value (1 − |z|²)/2 on the support of µ_G. Using this fact, we find

J(µ_G) = ∫ (|z|² + (1 − |z|²)/2) dµ_G(z) = ∫ (|z|²/2) dµ_G(z) + 1/2.

Notice that on the support of µ_G the integrand satisfies |z|²/2 = 1/2 − U^{ν_eq}(z), and we can rewrite the last expression as

J(µ_G) = ∫∫ log|z − w| dν_eq(w) dµ_G(z) + 1.

Changing the order of integration, we get

J(µ_G) = −∫ U^{µ_G}(z) dν_eq(z) + 1.

Thus

J(µ_G) − J(ν_eq) = −∫ U^{µ_G}(z) dν_eq(z) + ∫ U^{ν_eq}(z) dν_eq(z) = −∫_G (U^{µ_G}(z) − U^{ν_eq}(z)) dν_eq(z),

since U^{µ_G} = U^{ν_eq} on D \ G, so the integrand vanishes outside G. To show that this value is invariant with respect to shifts of the domain G, we wish to show that J(µ_G) = J(µ_{G_a}), where G_a = {z ∈ D : z = z' + a, z' ∈ G} and G_a ⊂ D. Note that it suffices to show that

U^{µ_G}(z) − U^{ν_eq}(z) = U^{µ_{G_a}}(z + a) − U^{ν_eq}(z + a),  z ∈ G.

By the definition of the logarithmic potential, since µ_G − ν_eq = µ_G^s − (1/π) 1_G dA,

U^{µ_G}(z) − U^{ν_eq}(z) = −∫_{∂G} log|z − w| dµ_G^s(w) + (1/π) ∫_G log|z − w| dA(w),

and similarly

U^{µ_{G_a}}(z + a) − U^{ν_eq}(z + a) = −∫_{∂G_a} log|z + a − w| dµ_{G_a}^s(w) + (1/π) ∫_{G_a} log|z + a − w| dA(w).

By the change of variables w = w' + a,

(1/π) ∫_{G_a} log|z + a − w| dA(w) = (1/π) ∫_G log|z − w'| dA(w'),

so the uniform parts agree. Thus all we need to show is

∫_{∂G} log|z − w| dµ_G^s(w) = ∫_{∂G_a} log|z + a − w| dµ_{G_a}^s(w).

Recall that the singular part of µ_G is the balayage of the uniform measure, and is therefore equivariant under translations: µ_{G_a}^s is the translate of µ_G^s by a. Hence the equality holds, and the functional J is translation invariant.

5. Examples of the Balayage Method

For certain domains, it is possible to solve the Dirichlet problem analytically. Here, we provide two examples where we use the balayage method to find the optimal boundary distribution.
5.1. Disk of radius r. Suppose the domain G is a disk of radius r centered at (x_0, 0). Without loss of generality, we assume x_0 to be positive. The first step of the balayage method is to solve the following Dirichlet problem:

(1) ΔW(z) = 0, z ∈ G;
(2) W(z) = U^{ν_eq}(z), z ∈ ∂G.

Note that U^{ν_eq}(z) = (1 − |z|²)/2 on D, where ν_eq is the equilibrium measure. Consider the function

W(z) = (1 − r² + x_0² − 2x_0 x)/2,  where z = x + iy ∈ C.

The function W clearly satisfies both conditions of the Dirichlet problem above: on ∂G we have x² + y² = r² − x_0² + 2x_0 x, and W is linear in x, hence harmonic. This function can be obtained either by deriving it directly from the equation for ∂G or by using the Poisson integral formula. We demonstrate the balayage method for the aforementioned function and use the Poisson integral formula to check the answer. The full potential is

w(z) = (1 − r² + x_0² − 2x_0 x)/2 for z ∈ G,  (1 − |z|²)/2 for z ∈ D \ G,  log(1/|z|) for z ∈ C \ D.

The normal derivatives ∂w/∂n_+ and ∂w/∂n_− are given by

∂w/∂n_+ = (−x, −y) · ((x − x_0)/r, y/r) = (x_0 x − x² − y²)/r,
∂w/∂n_− = (−x_0, 0) · ((x_0 − x)/r, −y/r) = (x_0 x − x_0²)/r,

and on ∂G their sum is

∂w/∂n_+ + ∂w/∂n_− = (2x_0 x − x² − y² − x_0²)/r = −((x − x_0)² + y²)/r = −r.

Therefore, the singular part µ_G^s along the boundary is given by

µ_G^s = −(1/2π)(∂w/∂n_+ + ∂w/∂n_−) ds = (r/2π) ds,

where ds is the arc length measure. Hence, for a disk, as expected, the optimal distribution along the boundary is a constant density depending only on r; its total mass, (r/2π) · 2πr = r², equals the uniform mass displaced from G. One can see this in Figure 2.

Figure 2. The optimal distribution plot for the circle.

The reason for the small oscillation in the graph is Mathematica's parameterization of the circle together with its numerical approximation of the solution of the Dirichlet problem. Mathematica parameterizes the circle by ordering the points of the circle by angle from the center. This leads to a parameterization that is not quite the circle, but instead a close polygonal approximation.
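The disk computation can be cross-checked with a short numerical sketch (ours, not the paper's Mathematica code; `numpy` assumed): evaluating the normal derivatives of the potential on the boundary recovers a constant density r/(2π) with total mass r².

```python
import numpy as np

r, x0 = 0.3, 0.2
t = np.linspace(0.0, 2 * np.pi, 1000, endpoint=False)
x, y = x0 + r * np.cos(t), r * np.sin(t)       # points on the boundary of G

# Interior Dirichlet solution: linear in x, hence harmonic, and it matches
# the boundary data (1 - |z|^2)/2 on the circle.
W = (1 - r**2 + x0**2 - 2 * x0 * x) / 2
assert np.allclose(W, (1 - x**2 - y**2) / 2)

nx, ny = (x - x0) / r, y / r                   # outward unit normal
d_out = -x * nx - y * ny                       # grad((1-|z|^2)/2) . n_plus
d_in = -x0 * (-nx)                             # grad(W) . n_minus, grad W = (-x0, 0)
density = -(d_out + d_in) / (2 * np.pi)        # -(1/2pi)(dw/dn+ + dw/dn-)

print(float(density[0]), r / (2 * np.pi))      # constant density r/(2 pi)
print(float(density[0]) * 2 * np.pi * r, r**2) # total mass = area(G)/pi = r^2
```

Any r and x_0 with the disk inside D give the same constant density up to the value of r, matching the translation invariance noted in Section 4.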
Here, we used 534 points for the circle. The solution W(z) of the Dirichlet problem is a surface, and because Mathematica makes a mesh of this surface using many triangles, treats the circle as a polygon (discretization error), and uses Lagrange interpolation to smooth out the surface from a function defined only on a set of nodes, the derivative of W(z) with respect to the outer and inner normal vectors varies along the boundary. Additionally, Mathematica treats all terms numerically in solving the Dirichlet problem (round-off error), and W(z) is not differentiable on the boundary, giving rise to yet another source of error. Hence the slight oscillation. If Mathematica used more, smaller triangles to approximate the solution, the oscillation would decrease to the value of µ_G^s which we obtained above analytically.

Figure 3. The optimal distribution for the circle plotted as a heat map (panels (a) and (b)).

From Figure 3, heuristically, we can see that the distribution of the eigenvalues is roughly constant (with a margin of error of ±0.0004). Recall Figure 2, which shows the distribution plot for the circle: regardless of θ, the dependent variable was roughly constant with minor oscillation. In the case of Figure 3, that value is represented by the color (the heat map value); since the value is constant, the resulting color is also constant.

5.2. Ellipse. Now suppose our domain G is an ellipse with center (x_0, 0). More formally, one can write this as

G = {(x, y) : (x − x_0)²/a² + y²/b² ≤ 1}.

It is known that the solution of the corresponding Dirichlet problem is a polynomial of degree 2 in x, y.
The solution that satisfies both conditions of the Dirichlet problem is

W(z) = (a² − b²)/(2(a² + b²)) (x² − y²) + 2b²x₀/(a² + b²) x + b²(a² − x₀²)/(a² + b²) − 1/2,  z ∈ G,
W(z) = (|z|² − 1)/2,  z ∈ D \ G,
W(z) = log |z|,  z ∈ ℂ \ D.

Note that n₊ = C⁻¹ (b(x − x₀)/a, ay/b) and n₋ = C⁻¹ (b(x₀ − x)/a, −ay/b), where C = √(b²(x − x₀)²/a² + a²y²/b²). Therefore we can recover the optimal measure µ_s along the boundary,

µ_s = (1/2π)(∂W/∂n₊ + ∂W/∂n₋) ds = (1/2πC) ((x, y) − ∇W|_G) · (b(x − x₀)/a, ay/b) ds,

where ∇W|_G = ((a² − b²)/(a² + b²) x + 2b²x₀/(a² + b²), −(a² − b²)/(a² + b²) y) is the gradient of the solution inside G. Since (x, y) − ∇W|_G = (2b²(x − x₀)/(a² + b²), 2a²y/(a² + b²)), this simplifies to

µ_s = 1/(π(a² + b²)C) (b³(x − x₀)²/a + a³y²/b) ds.

For a = b = r this reduces to the constant density r/(2π) obtained for the disk.
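The elliptical case can be sanity-checked numerically as well. The Python sketch below (independent of the Mathematica appendix; the semi-axes a = 2/3, b = 1/3 match the ellipse used in the figures, x₀ = 0 is illustrative, and the convention U_{ν_eq}(z) = (|z|² − 1)/2 is assumed) verifies the boundary matching, checks that the total mass of µ_s equals the swept mass ν_eq(G) = area(G)/π = ab, and confirms the density is largest at the top and bottom:

```python
import numpy as np

a, b, x0 = 2.0 / 3.0, 1.0 / 3.0, 0.0
t = np.linspace(0.0, 2.0 * np.pi, 4000, endpoint=False)
x, y = x0 + a * np.cos(t), b * np.sin(t)        # boundary of the ellipse G

# Quadratic harmonic solution inside G (coefficients as derived above) ...
alpha = (a**2 - b**2) / (2.0 * (a**2 + b**2))
beta = 2.0 * b**2 * x0 / (a**2 + b**2)
gamma = b**2 * (a**2 - x0**2) / (a**2 + b**2) - 0.5
W_in = alpha * (x**2 - y**2) + beta * x + gamma
# ... must match the potential (|z|^2 - 1)/2 on the boundary of G.
assert np.allclose(W_in, (x**2 + y**2 - 1.0) / 2.0)

# Density of the singular part:
# mu_s = (b^3 (x - x0)^2 / a + a^3 y^2 / b) / (pi (a^2 + b^2) C) ds.
C = np.sqrt(b**2 * (x - x0)**2 / a**2 + a**2 * y**2 / b**2)
mu = (b**3 * (x - x0)**2 / a + a**3 * y**2 / b) / (np.pi * (a**2 + b**2) * C)

# Total mass of mu_s must equal nu_eq(G) = area(G)/pi = a*b.
ds = np.sqrt(a**2 * np.sin(t)**2 + b**2 * np.cos(t)**2)  # speed |(x'(t), y'(t))|
total = np.sum(mu * ds) * (2.0 * np.pi / t.size)
assert abs(total - a * b) < 1e-3

# Since a > b, the density is largest at theta = pi/2 (top of the ellipse).
assert mu[t.size // 4] > mu[0]
```

In fact µ ds = ab (b² cos²t + a² sin²t)/(π(a² + b²)) dt along this parameterization, so the total-mass identity ∮ µ ds = ab holds exactly, not just numerically.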
Figure 4. The optimal distribution plot for the ellipse

One can see that, unlike the case of the disk, the optimal limiting distribution density along the boundary varies with the location. Figure 4 shows how the density changes along the ellipse 9x²/4 + 9y² = 1. Figure 4 is obtained from the numerical solution of the Dirichlet problem, without solving for µ_s analytically as was done above. As in the case of the circle, Mathematica parameterizes the ellipse by ordering the points by angle from the center, again constructing a discrete approximation of the ellipse with 500 boundary points. Mathematica solves the Dirichlet problem by approximating the solution surface W(z) with a mesh of many triangles and using Lagrange interpolation to achieve the best smooth surface. Because of these two computational simplifications, the inner and outer normals of W(z) are approximated as a consequence. The scale in Figure 4 is larger than that of Figure 2, so the oscillation does not appear to be as large, but some oscillatory features are still noticeable at 0, π/2, π, and 3π/2.

(a) (b)
Figure 5. The optimal distribution for the ellipse, plotted using a heat map

In Figure 5, the limiting distribution oscillates along the boundary of the ellipse, with the most eigenvalues falling on the top or the bottom of the ellipse. The density of these eigenvalues tapers off to a minimum on the right and left sides of the ellipse. Recall in Figure 4 that the distribution is at a maximum when θ = π/2, 3π/2 and at a minimum when θ = 0, π. This is exactly the case in Figure 5, since this figure is merely the visual representation of this distribution plotted along the
boundary ellipse. From these figures, we can also observe that the boundary points closest to the center of the ellipse have the highest density values.

6. Acknowledgements

We first thank our two mentors, Dr. Diego Ayala and Dr. Alon Nishry, for their persistent guidance. We also express our gratitude to the University of Michigan Department of Mathematics and the Research Experience for Undergraduates committee for this valuable opportunity.

References

[AGZ05] Greg Anderson, Alice Guionnet, and Ofer Zeitouni, An introduction to random matrices, Cambridge University Press, Cambridge, 2005.
[AKR6] Kartic Adhikari and Nanda Kishore Reddy, Hole probabilities for finite and infinite Ginibre ensembles, 2016, on arXiv.
[AR] Roger A. Horn and Charles R. Johnson, Matrix analysis, Cambridge University Press, 2012.
[BKS+06] Édouard Brézin, Vladimir Kazakov, Didina Serban, Paul Wiegmann, and Anton Zabrodin, Applications of random matrices in physics, Vol. 221, Springer Science & Business Media, 2006.
[DZ09] Amir Dembo and Ofer Zeitouni, Large deviations techniques and applications, Vol. 38, Springer Science & Business Media, 2009.
[Gin65] Jean Ginibre, Statistical ensembles of complex, quaternion, and real matrices, Journal of Mathematical Physics 6 (1965), no. 3, 440–449.
[HKPV09] John Ben Hough, Manjunath Krishnapur, Yuval Peres, and Bálint Virág, Zeros of Gaussian analytic functions and determinantal point processes, Vol. 51, American Mathematical Society, Providence, RI, 2009.
[Lan3] Serge Lang, Complex analysis, Vol. 103, Springer Science & Business Media, 2013.
[Meh04] Madan Lal Mehta, Random matrices, Vol. 142, Academic Press, 2004.
[PH98] Dénes Petz and Fumio Hiai, Logarithmic energy as an entropy functional, Contemporary Mathematics 217 (1998), 205.
[ST97] Edward B. Saff and Vilmos Totik, Logarithmic potentials with external fields, Vol. 316, Springer-Verlag, Berlin, 1997.
[TV08] Terence Tao and Van Vu, Random matrices: the circular law, Communications in Contemporary Mathematics 10 (2008), no. 2, 261–307.
[Wis8] John Wishart, The generalised product moment distribution in samples from a normal multivariate population, Biometrika 20A (1928), 32–52.
Figure 1 code

(1) m = RandomReal[NormalDistribution[0, 1/Sqrt[2]], {1000, 1000}] + I*RandomReal[NormalDistribution[0, 1/Sqrt[2]], {1000, 1000}]; // m is a 1000 x 1000 matrix of i.i.d. complex Gaussian entries
(2) e = Eigenvalues[m]; // e is the set of eigenvalues of m
(3) e = e/Sqrt[1000]; // normalizes e so that the majority of eigenvalues have modulus less than 1
(4) Show[{ListPlot[{{Re[#], Im[#]} & /@ e}, AspectRatio -> 1], ContourPlot[x^2 + y^2 == 1, {x, -1, 1}, {y, -1, 1}]}] // this plots e, the set of eigenvalues, within the unit circle

Figure 2 code

(1) reg = Disk[{0, 0}, 1/2]; // initializes a circle with radius 1/2
(2) discreg = BoundaryDiscretizeRegion[reg, AccuracyGoal -> 5];
sol = NDSolveValue[{Laplacian[u[x, y], {x, y}] == 0, DirichletCondition[u[x, y] == (x^2 + y^2)/2 - 1/2, True]}, u, {x, y} \[Element] discreg, AccuracyGoal -> 10, PrecisionGoal -> 35]; // discretizes the region "reg" and solves the Dirichlet problem on it
(3) centerx = 0; centery = 0;
bdryPts = MeshCoordinates[BoundaryDiscretizeRegion[reg, AccuracyGoal -> 5]];
bdryPts = Sort[bdryPts, ArcTan[#1[[1]] - centerx, #1[[2]] - centery] < ArcTan[#2[[1]] - centerx, #2[[2]] - centery] &];
bdryPtsX = Array[{(# - 1)/(Length[bdryPts] - 1), bdryPts[[#, 1]]} &, Length[bdryPts]];
bdryPtsY = Array[{(# - 1)/(Length[bdryPts] - 1), bdryPts[[#, 2]]} &, Length[bdryPts]];
paramX = Interpolation[bdryPtsX, InterpolationOrder -> 1];
paramY = Interpolation[bdryPtsY, InterpolationOrder -> 1];
pp[x_] = {paramX[x], paramY[x]};
ppt[x_] = {D[paramX[x], x], D[paramY[x], x]};
ppn[x_] = {D[paramY[x], x], -D[paramX[x], x]};
ppnn[x_] = Normalize[ppn[x]]; // this body of code gives us the 500 or so boundary points as well as the inner and outer normals of "reg" after discretization
(4) solgrad[x_, y_] = {D[sol[x, y], x], D[sol[x, y], y]}; outgrad[x_, y_] = {x, y};
(5) dens[x_] = -solgrad[pp[x][[1]], pp[x][[2]]].ppnn[x] + outgrad[pp[x][[1]], pp[x][[2]]].ppnn[x]; // this gives us the density function that is plotted with a constant in Figures 2 & 4.
(6) ListPlot[Sort[bdryPts, ArcTan[#1[[1]], #1[[2]]] < ArcTan[#2[[1]], #2[[2]]] &]]
(7) Plot[(1/(2*Pi)) dens[u], {u, 0, 1}, AxesStyle -> Thickness[0.01], AxesLabel -> {Style[x, Bold, FontSize -> 20], Style[y, Bold, FontSize -> 20]}, TicksStyle -> Directive[Black, Thick, Bold, 20], PlotRange -> {{0, 1}, {3/50, 5/50}}, PlotStyle -> Thick] // This plots the distribution used in Figures 2 & 4.

Figure 3 code

Part A
// much of the code is the same, except for the numbers that are changed to adjust for the boundary shape.
(1) reg = Disk[{0, 0}, 1/2]; // initializes a circle with radius 1/2
(2) discreg = BoundaryDiscretizeRegion[reg, AccuracyGoal -> 5];
sol = NDSolveValue[{Laplacian[u[x, y], {x, y}] == 0, DirichletCondition[u[x, y] == (x^2 + y^2)/2 - 1/2, True]}, u, {x, y} \[Element] discreg, AccuracyGoal -> 10, PrecisionGoal -> 35]; // discretizes the region "reg" and solves the Dirichlet problem on it
(3) centerx = 0; centery = 0;
bdryPts = MeshCoordinates[BoundaryDiscretizeRegion[reg, AccuracyGoal -> 5]];
bdryPts = Sort[bdryPts, ArcTan[#1[[1]] - centerx, #1[[2]] - centery] < ArcTan[#2[[1]] - centerx, #2[[2]] - centery] &];
bdryPtsX = Array[{(# - 1)/(Length[bdryPts] - 1), bdryPts[[#, 1]]} &, Length[bdryPts]];
bdryPtsY = Array[{(# - 1)/(Length[bdryPts] - 1), bdryPts[[#, 2]]} &, Length[bdryPts]];
paramX = Interpolation[bdryPtsX, InterpolationOrder -> 1];
paramY = Interpolation[bdryPtsY, InterpolationOrder -> 1];
pp[x_] = {paramX[x], paramY[x]};
ppt[x_] = {D[paramX[x], x], D[paramY[x], x]};
ppn[x_] = {D[paramY[x], x], -D[paramX[x], x]};
ppnn[x_] = Normalize[ppn[x]]; // this body of code gives us the 500 or so boundary points as well as the inner and outer normals of "reg" after discretization
(4) solgrad[x_, y_] = {D[sol[x, y], x], D[sol[x, y], y]}; outgrad[x_, y_] = {x, y};
(5) dens[x_] = -solgrad[pp[x][[1]], pp[x][[2]]].ppnn[x] + outgrad[pp[x][[1]], pp[x][[2]]].ppnn[x]; // this gives us the density function of the eigenvalues.
(6) ListPlot[Sort[bdryPts, ArcTan[#1[[1]], #1[[2]]] < ArcTan[#2[[1]], #2[[2]]] &]]
(7) plt = ParametricPlot[pp[u], {u, 0, 1}, PlotStyle -> Thickness[0.05],
ColorFunction -> Function[{x, y, u}, ColorData["DarkRainbow"][dens[u]]]];
Show[ParametricPlot[{Cos[2*Pi*x], Sin[2*Pi*x]}, {x, 0, 1}, PlotStyle -> Thickness[0.01], AxesStyle -> Thickness[0.01], AxesLabel -> {Style[x, Bold, FontSize -> 54], Style[y, Bold, FontSize -> 54]}, TicksStyle -> Directive[Black, Thick, Bold, 60]], ParametricPlot[pp[u], {u, 0, 1}, PlotStyle -> Thickness[0.05], ColorFunction -> Function[{x, y, u}, ColorData["DarkRainbow"][dens[u]]]]] // This plots the distribution and the circle, coloring the circle to describe the distribution of eigenvalues.

Part B
// much of the code is the same, except for the numbers that are changed to adjust for the boundary shape.
(1) reg = Disk[{1/3, 0}, 1/2]; // a circle with radius 1/2 centered at (1/3, 0)
(2) discreg = BoundaryDiscretizeRegion[reg, AccuracyGoal -> 5];
sol = NDSolveValue[{Laplacian[u[x, y], {x, y}] == 0, DirichletCondition[u[x, y] == (x^2 + y^2)/2 - 1/2, True]}, u, {x, y} \[Element] discreg, AccuracyGoal -> 10, PrecisionGoal -> 35]; // discretizes the region "reg" and solves the Dirichlet problem on it
(3) centerx = 0; centery = 0;
bdryPts = MeshCoordinates[BoundaryDiscretizeRegion[reg, AccuracyGoal -> 5]];
bdryPts = Sort[bdryPts, ArcTan[#1[[1]] - centerx, #1[[2]] - centery] < ArcTan[#2[[1]] - centerx, #2[[2]] - centery] &];
bdryPtsX = Array[{(# - 1)/(Length[bdryPts] - 1), bdryPts[[#, 1]]} &, Length[bdryPts]];
bdryPtsY = Array[{(# - 1)/(Length[bdryPts] - 1), bdryPts[[#, 2]]} &, Length[bdryPts]];
paramX = Interpolation[bdryPtsX, InterpolationOrder -> 1];
paramY = Interpolation[bdryPtsY, InterpolationOrder -> 1];
pp[x_] = {paramX[x], paramY[x]};
ppt[x_] = {D[paramX[x], x], D[paramY[x], x]};
ppn[x_] = {D[paramY[x], x], -D[paramX[x], x]};
ppnn[x_] = Normalize[ppn[x]]; // this body of code gives us the 500 or so boundary points as well as the inner and outer normals of "reg" after discretization
(4) solgrad[x_, y_] = {D[sol[x, y], x], D[sol[x, y], y]}; outgrad[x_, y_] = {x, y};
(5) dens[x_] = -solgrad[pp[x][[1]], pp[x][[2]]].ppnn[x] + outgrad[pp[x][[1]], pp[x][[2]]].ppnn[x]; // this gives us the density function of the eigenvalues.
(6) ListPlot[Sort[bdryPts, ArcTan[#1[[1]], #1[[2]]] < ArcTan[#2[[1]], #2[[2]]] &]]
(7) plt = ParametricPlot[pp[u], {u, 0, 1}, PlotStyle -> Thickness[0.05], ColorFunction -> Function[{x, y, u}, ColorData["DarkRainbow"][dens[u]]]];
Show[ParametricPlot[{Cos[2*Pi*x], Sin[2*Pi*x]}, {x, 0, 1}, PlotStyle -> Thickness[0.01], AxesStyle -> Thickness[0.01], AxesLabel -> {Style[x, Bold, FontSize -> 54], Style[y, Bold, FontSize -> 54]}, TicksStyle -> Directive[Black, Thick, Bold, 60]], ParametricPlot[pp[u], {u, 0, 1}, PlotStyle -> Thickness[0.05], ColorFunction -> Function[{x, y, u}, ColorData["DarkRainbow"][dens[u]]]]] // This plots the distribution and the circle, coloring the circle to describe the distribution of eigenvalues.

Figure 4 code
// much of the code is the same, except for the numbers that are changed to adjust for the boundary shape.
(1) reg = Disk[{0, 0}, {2/3, 1/3}]; // an ellipse with semi-axes 2/3 (horizontal) and 1/3 (vertical)
(2) discreg = BoundaryDiscretizeRegion[reg, AccuracyGoal -> 5];
sol = NDSolveValue[{Laplacian[u[x, y], {x, y}] == 0, DirichletCondition[u[x, y] == (x^2 + y^2)/2 - 1/2, True]}, u, {x, y} \[Element] discreg, AccuracyGoal -> 10, PrecisionGoal -> 35]; // discretizes the region "reg" and solves the Dirichlet problem on it
(3) centerx = 0; centery = 0;
bdryPts = MeshCoordinates[BoundaryDiscretizeRegion[reg, AccuracyGoal -> 5]];
bdryPts = Sort[bdryPts, ArcTan[#1[[1]] - centerx, #1[[2]] - centery] < ArcTan[#2[[1]] - centerx, #2[[2]] - centery] &];
bdryPtsX = Array[{(# - 1)/(Length[bdryPts] - 1), bdryPts[[#, 1]]} &, Length[bdryPts]];
bdryPtsY = Array[{(# - 1)/(Length[bdryPts] - 1), bdryPts[[#, 2]]} &, Length[bdryPts]];
paramX = Interpolation[bdryPtsX, InterpolationOrder -> 1];
paramY = Interpolation[bdryPtsY, InterpolationOrder -> 1];
pp[x_] = {paramX[x], paramY[x]};
ppt[x_] = {D[paramX[x], x], D[paramY[x], x]};
ppn[x_] = {D[paramY[x], x], -D[paramX[x], x]};
ppnn[x_] = Normalize[ppn[x]]; // this body of code gives us the 500 or so boundary points as well as the inner and outer normals of "reg" after discretization
(4) solgrad[x_, y_] = {D[sol[x, y], x], D[sol[x, y], y]}; outgrad[x_, y_] = {x, y};
(5) dens[x_] = -solgrad[pp[x][[1]], pp[x][[2]]].ppnn[x] + outgrad[pp[x][[1]], pp[x][[2]]].ppnn[x]; // this gives us the density function of the eigenvalues.
(6) ListPlot[Sort[bdryPts, ArcTan[#1[[1]], #1[[2]]] < ArcTan[#2[[1]], #2[[2]]] &]]
(7) Plot[(1/(2*Pi)) dens[u], {u, 0, 1}, AxesStyle -> Thickness[0.01], AxesLabel -> {Style[x, Bold, FontSize -> 20], Style[y, Bold, FontSize -> 20]}, TicksStyle -> Directive[Black, Thick, Bold, 20], PlotRange -> {{0, 1}, {1/25, 3/25}}, PlotStyle -> Thick] // This plots the distribution of eigenvalues of the ellipse.

Figure 5 code

Part A
// much of the code is the same, except for the numbers that are changed to adjust for the boundary shape.
(1) reg = Disk[{0, 0}, {2/3, 1/3}]; // an ellipse with semi-axes 2/3 (horizontal) and 1/3 (vertical)
(2) discreg = BoundaryDiscretizeRegion[reg, AccuracyGoal -> 5];
sol = NDSolveValue[{Laplacian[u[x, y], {x, y}] == 0, DirichletCondition[u[x, y] == (x^2 + y^2)/2 - 1/2, True]}, u, {x, y} \[Element] discreg, AccuracyGoal -> 10, PrecisionGoal -> 35]; // discretizes the region "reg" and solves the Dirichlet problem on it
(3) centerx = 0; centery = 0;
bdryPts = MeshCoordinates[BoundaryDiscretizeRegion[reg, AccuracyGoal -> 5]];
bdryPts = Sort[bdryPts, ArcTan[#1[[1]] - centerx, #1[[2]] - centery] < ArcTan[#2[[1]] - centerx, #2[[2]] - centery] &];
bdryPtsX = Array[{(# - 1)/(Length[bdryPts] - 1), bdryPts[[#, 1]]} &, Length[bdryPts]];
bdryPtsY = Array[{(# - 1)/(Length[bdryPts] - 1), bdryPts[[#, 2]]} &, Length[bdryPts]];
paramX = Interpolation[bdryPtsX, InterpolationOrder -> 1];
paramY = Interpolation[bdryPtsY, InterpolationOrder -> 1];
pp[x_] = {paramX[x], paramY[x]};
ppt[x_] = {D[paramX[x], x], D[paramY[x], x]};
ppn[x_] = {D[paramY[x], x], -D[paramX[x], x]};
ppnn[x_] = Normalize[ppn[x]]; // this body of code gives us the 500 or so boundary points as well as the inner and outer normals of "reg" after discretization
(4) solgrad[x_, y_] = {D[sol[x, y], x], D[sol[x, y], y]}; outgrad[x_, y_] = {x, y};
(5) dens[x_] = -solgrad[pp[x][[1]], pp[x][[2]]].ppnn[x] + outgrad[pp[x][[1]], pp[x][[2]]].ppnn[x]; // this gives us the density function of the eigenvalues.
(6) ListPlot[Sort[bdryPts, ArcTan[#1[[1]], #1[[2]]] < ArcTan[#2[[1]], #2[[2]]] &]]
(7) plt = ParametricPlot[pp[u], {u, 0, 1}, PlotStyle -> Thickness[0.05], ColorFunction -> Function[{x, y, u}, ColorData["Rainbow"][dens[u]]]];
Show[ParametricPlot[{Cos[2*Pi*x], Sin[2*Pi*x]}, {x, 0, 1}, PlotStyle -> Thickness[0.01], AxesStyle -> Thickness[0.01], AxesLabel -> {Style[x, Bold, FontSize -> 54], Style[y, Bold, FontSize -> 54]}, TicksStyle -> Directive[Black, Thick, Bold, 60]], ParametricPlot[pp[u], {u, 0, 1}, PlotStyle -> Thickness[0.05], ColorFunction -> Function[{x, y, u}, ColorData["Rainbow"][dens[u]]]]] // This plots the distribution and the ellipse, coloring the ellipse to describe the distribution of eigenvalues.

Part B
// much of the code is the same, except for the numbers that are changed to adjust for the boundary shape.
(1) reg = Disk[{1/3 - 0.06, 0}, {2/3, 1/3}]; // an ellipse with semi-axes 2/3 and 1/3, shifted to the right by approximately 1/3
(2) discreg = BoundaryDiscretizeRegion[reg, AccuracyGoal -> 5];
sol = NDSolveValue[{Laplacian[u[x, y], {x, y}] == 0, DirichletCondition[u[x, y] == (x^2 + y^2)/2 - 1/2, True]}, u, {x, y} \[Element] discreg, AccuracyGoal -> 10, PrecisionGoal -> 35]; // discretizes the region "reg" and solves the Dirichlet problem on it
(3) centerx = 0; centery = 0;
bdryPts = MeshCoordinates[BoundaryDiscretizeRegion[reg, AccuracyGoal -> 5]];
bdryPts = Sort[bdryPts, ArcTan[#1[[1]] - centerx, #1[[2]] - centery] < ArcTan[#2[[1]] - centerx, #2[[2]] - centery] &];
bdryPtsX = Array[{(# - 1)/(Length[bdryPts] - 1), bdryPts[[#, 1]]} &, Length[bdryPts]];
bdryPtsY = Array[{(# - 1)/(Length[bdryPts] - 1), bdryPts[[#, 2]]} &, Length[bdryPts]];
paramX = Interpolation[bdryPtsX, InterpolationOrder -> 1];
paramY = Interpolation[bdryPtsY, InterpolationOrder -> 1];
pp[x_] = {paramX[x], paramY[x]};
ppt[x_] = {D[paramX[x], x], D[paramY[x], x]};
ppn[x_] = {D[paramY[x], x], -D[paramX[x], x]};
ppnn[x_] = Normalize[ppn[x]];
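The pipeline above can also be reproduced without a finite element solver. Section 5.1 notes that the Dirichlet solution can be obtained from the Poisson integral formula; the Python sketch below (an independent check, with illustrative values r = 1/2, x₀ = 1/3 as in Figure 3, Part B) solves the Dirichlet problem on an off-center disk by Fourier series (the spectral form of the Poisson formula), forms the density (1/2π)(∂W/∂n₊ + ∂W/∂n₋), and recovers the constant r/(2π) without the mesh-induced oscillation discussed in the text:

```python
import numpy as np

r, x0 = 0.5, 1.0 / 3.0
M = 1024
t = np.linspace(0.0, 2.0 * np.pi, M, endpoint=False)
xb, yb = x0 + r * np.cos(t), r * np.sin(t)      # boundary of G

# Dirichlet data: the equilibrium potential (|z|^2 - 1)/2 on the boundary.
f = (xb**2 + yb**2 - 1.0) / 2.0

# Harmonic extension W(rho, theta) = sum_k c_k (rho/r)^|k| e^{ik theta};
# its outward radial derivative on the boundary is sum_k c_k (|k|/r) e^{ik theta}.
c = np.fft.fft(f)
k = np.fft.fftfreq(M, d=1.0 / M)                # integer frequencies
dW_drho = np.real(np.fft.ifft(c * np.abs(k) / r))

# Inner normal points toward the center of G, so dW/dn- = -dW/drho.
inner = -dW_drho
# Outer side: gradient of (|z|^2 - 1)/2 is (x, y); n+ = ((x - x0)/r, y/r).
outer = xb * (xb - x0) / r + yb**2 / r

density = (outer + inner) / (2.0 * np.pi)
print(density.std())                            # essentially zero
assert np.allclose(density, r / (2.0 * np.pi), atol=1e-8)
```

Because the boundary data on a circle centered at (x₀, 0) contains only the constant and cos θ Fourier modes, the spectral solve is exact to machine precision here, which isolates the oscillation seen in Figures 2 and 4 as a discretization artifact of the mesh-based approach.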
More informationOrthogonal polynomials with respect to generalized Jacobi measures. Tivadar Danka
Orthogonal polynomials with respect to generalized Jacobi measures Tivadar Danka A thesis submitted for the degree of Doctor of Philosophy Supervisor: Vilmos Totik Doctoral School in Mathematics and Computer
More informationPCA with random noise. Van Ha Vu. Department of Mathematics Yale University
PCA with random noise Van Ha Vu Department of Mathematics Yale University An important problem that appears in various areas of applied mathematics (in particular statistics, computer science and numerical
More informationMath The Laplacian. 1 Green s Identities, Fundamental Solution
Math. 209 The Laplacian Green s Identities, Fundamental Solution Let be a bounded open set in R n, n 2, with smooth boundary. The fact that the boundary is smooth means that at each point x the external
More informationBOUNDARY VALUE PROBLEMS ON A HALF SIERPINSKI GASKET
BOUNDARY VALUE PROBLEMS ON A HALF SIERPINSKI GASKET WEILIN LI AND ROBERT S. STRICHARTZ Abstract. We study boundary value problems for the Laplacian on a domain Ω consisting of the left half of the Sierpinski
More informationLecture I: Asymptotics for large GUE random matrices
Lecture I: Asymptotics for large GUE random matrices Steen Thorbjørnsen, University of Aarhus andom Matrices Definition. Let (Ω, F, P) be a probability space and let n be a positive integer. Then a random
More informationElementary linear algebra
Chapter 1 Elementary linear algebra 1.1 Vector spaces Vector spaces owe their importance to the fact that so many models arising in the solutions of specific problems turn out to be vector spaces. The
More information1 Tridiagonal matrices
Lecture Notes: β-ensembles Bálint Virág Notes with Diane Holcomb 1 Tridiagonal matrices Definition 1. Suppose you have a symmetric matrix A, we can define its spectral measure (at the first coordinate
More informationA Minimal Uncertainty Product for One Dimensional Semiclassical Wave Packets
A Minimal Uncertainty Product for One Dimensional Semiclassical Wave Packets George A. Hagedorn Happy 60 th birthday, Mr. Fritz! Abstract. Although real, normalized Gaussian wave packets minimize the product
More informationOn rational approximation of algebraic functions. Julius Borcea. Rikard Bøgvad & Boris Shapiro
On rational approximation of algebraic functions http://arxiv.org/abs/math.ca/0409353 Julius Borcea joint work with Rikard Bøgvad & Boris Shapiro 1. Padé approximation: short overview 2. A scheme of rational
More informationDiagonalization by a unitary similarity transformation
Physics 116A Winter 2011 Diagonalization by a unitary similarity transformation In these notes, we will always assume that the vector space V is a complex n-dimensional space 1 Introduction A semi-simple
More informationDuality of multiparameter Hardy spaces H p on spaces of homogeneous type
Duality of multiparameter Hardy spaces H p on spaces of homogeneous type Yongsheng Han, Ji Li, and Guozhen Lu Department of Mathematics Vanderbilt University Nashville, TN Internet Analysis Seminar 2012
More informationVARIATIONAL PRINCIPLE FOR THE ENTROPY
VARIATIONAL PRINCIPLE FOR THE ENTROPY LUCIAN RADU. Metric entropy Let (X, B, µ a measure space and I a countable family of indices. Definition. We say that ξ = {C i : i I} B is a measurable partition if:
More informationFunctional Analysis I
Functional Analysis I Course Notes by Stefan Richter Transcribed and Annotated by Gregory Zitelli Polar Decomposition Definition. An operator W B(H) is called a partial isometry if W x = X for all x (ker
More informationMATRICES ARE SIMILAR TO TRIANGULAR MATRICES
MATRICES ARE SIMILAR TO TRIANGULAR MATRICES 1 Complex matrices Recall that the complex numbers are given by a + ib where a and b are real and i is the imaginary unity, ie, i 2 = 1 In what we describe below,
More informationConvex Analysis and Economic Theory Winter 2018
Division of the Humanities and Social Sciences Ec 181 KC Border Convex Analysis and Economic Theory Winter 2018 Supplement A: Mathematical background A.1 Extended real numbers The extended real number
More informationNotions such as convergent sequence and Cauchy sequence make sense for any metric space. Convergent Sequences are Cauchy
Banach Spaces These notes provide an introduction to Banach spaces, which are complete normed vector spaces. For the purposes of these notes, all vector spaces are assumed to be over the real numbers.
More informationRatio Asymptotics for General Orthogonal Polynomials
Ratio Asymptotics for General Orthogonal Polynomials Brian Simanek 1 (Caltech, USA) Arizona Spring School of Analysis and Mathematical Physics Tucson, AZ March 13, 2012 1 This material is based upon work
More information5 Measure theory II. (or. lim. Prove the proposition. 5. For fixed F A and φ M define the restriction of φ on F by writing.
5 Measure theory II 1. Charges (signed measures). Let (Ω, A) be a σ -algebra. A map φ: A R is called a charge, (or signed measure or σ -additive set function) if φ = φ(a j ) (5.1) A j for any disjoint
More informationKaczmarz algorithm in Hilbert space
STUDIA MATHEMATICA 169 (2) (2005) Kaczmarz algorithm in Hilbert space by Rainis Haller (Tartu) and Ryszard Szwarc (Wrocław) Abstract The aim of the Kaczmarz algorithm is to reconstruct an element in a
More informationCMS winter meeting 2008, Ottawa. The heat kernel on connected sums
CMS winter meeting 2008, Ottawa The heat kernel on connected sums Laurent Saloff-Coste (Cornell University) December 8 2008 p(t, x, y) = The heat kernel ) 1 x y 2 exp ( (4πt) n/2 4t The heat kernel is:
More informationNOTES ON LINEAR ODES
NOTES ON LINEAR ODES JONATHAN LUK We can now use all the discussions we had on linear algebra to study linear ODEs Most of this material appears in the textbook in 21, 22, 23, 26 As always, this is a preliminary
More informationHamburger Beiträge zur Angewandten Mathematik
Hamburger Beiträge zur Angewandten Mathematik Numerical analysis of a control and state constrained elliptic control problem with piecewise constant control approximations Klaus Deckelnick and Michael
More informationRandom regular digraphs: singularity and spectrum
Random regular digraphs: singularity and spectrum Nick Cook, UCLA Probability Seminar, Stanford University November 2, 2015 Universality Circular law Singularity probability Talk outline 1 Universality
More informationw T 1 w T 2. w T n 0 if i j 1 if i = j
Lyapunov Operator Let A F n n be given, and define a linear operator L A : C n n C n n as L A (X) := A X + XA Suppose A is diagonalizable (what follows can be generalized even if this is not possible -
More informationINSTANTON MODULI AND COMPACTIFICATION MATTHEW MAHOWALD
INSTANTON MODULI AND COMPACTIFICATION MATTHEW MAHOWALD () Instanton (definition) (2) ADHM construction (3) Compactification. Instantons.. Notation. Throughout this talk, we will use the following notation:
More informationCHAPTER 6. Differentiation
CHPTER 6 Differentiation The generalization from elementary calculus of differentiation in measure theory is less obvious than that of integration, and the methods of treating it are somewhat involved.
More informationNear-Potential Games: Geometry and Dynamics
Near-Potential Games: Geometry and Dynamics Ozan Candogan, Asuman Ozdaglar and Pablo A. Parrilo January 29, 2012 Abstract Potential games are a special class of games for which many adaptive user dynamics
More informationGeneral Theory of Large Deviations
Chapter 30 General Theory of Large Deviations A family of random variables follows the large deviations principle if the probability of the variables falling into bad sets, representing large deviations
More information+ 2x sin x. f(b i ) f(a i ) < ɛ. i=1. i=1
Appendix To understand weak derivatives and distributional derivatives in the simplest context of functions of a single variable, we describe without proof some results from real analysis (see [7] and
More informationOperator norm convergence for sequence of matrices and application to QIT
Operator norm convergence for sequence of matrices and application to QIT Benoît Collins University of Ottawa & AIMR, Tohoku University Cambridge, INI, October 15, 2013 Overview Overview Plan: 1. Norm
More informationLaplace s Equation. Chapter Mean Value Formulas
Chapter 1 Laplace s Equation Let be an open set in R n. A function u C 2 () is called harmonic in if it satisfies Laplace s equation n (1.1) u := D ii u = 0 in. i=1 A function u C 2 () is called subharmonic
More informationLecture Notes March 11, 2015
18.156 Lecture Notes March 11, 2015 trans. Jane Wang Recall that last class we considered two different PDEs. The first was u ± u 2 = 0. For this one, we have good global regularity and can solve the Dirichlet
More information(x k ) sequence in F, lim x k = x x F. If F : R n R is a function, level sets and sublevel sets of F are any sets of the form (respectively);
STABILITY OF EQUILIBRIA AND LIAPUNOV FUNCTIONS. By topological properties in general we mean qualitative geometric properties (of subsets of R n or of functions in R n ), that is, those that don t depend
More informationGlobal Maxwellians over All Space and Their Relation to Conserved Quantites of Classical Kinetic Equations
Global Maxwellians over All Space and Their Relation to Conserved Quantites of Classical Kinetic Equations C. David Levermore Department of Mathematics and Institute for Physical Science and Technology
More informationLebesgue Measure on R n
CHAPTER 2 Lebesgue Measure on R n Our goal is to construct a notion of the volume, or Lebesgue measure, of rather general subsets of R n that reduces to the usual volume of elementary geometrical sets
More informationThe Strong Largeur d Arborescence
The Strong Largeur d Arborescence Rik Steenkamp (5887321) November 12, 2013 Master Thesis Supervisor: prof.dr. Monique Laurent Local Supervisor: prof.dr. Alexander Schrijver KdV Institute for Mathematics
More informationGreen s Functions and Distributions
CHAPTER 9 Green s Functions and Distributions 9.1. Boundary Value Problems We would like to study, and solve if possible, boundary value problems such as the following: (1.1) u = f in U u = g on U, where
More informationExistence and Uniqueness
Chapter 3 Existence and Uniqueness An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed, if this intellect
More informationQuantum Computing Lecture 2. Review of Linear Algebra
Quantum Computing Lecture 2 Review of Linear Algebra Maris Ozols Linear algebra States of a quantum system form a vector space and their transformations are described by linear operators Vector spaces
More informationAn introduction to Birkhoff normal form
An introduction to Birkhoff normal form Dario Bambusi Dipartimento di Matematica, Universitá di Milano via Saldini 50, 0133 Milano (Italy) 19.11.14 1 Introduction The aim of this note is to present an
More information