Determinantal Processes And The IID Gaussian Power Series
Yuval Peres (U.C. Berkeley). Talk based on joint work with J. Ben Hough, Manjunath Krishnapur, and Bálint Virág.
Samples of translation-invariant point processes in the plane: Poisson (left), determinantal (center), and permanental (right), for $K(z, w) = \frac{1}{\pi} e^{z \bar w - \frac{1}{2}(|z|^2 + |w|^2)}$. Determinantal processes exhibit repulsion, while permanental processes exhibit clumping.
Determinantal Point Processes

Let $(\Omega, \mu)$ be a $\sigma$-finite measure space, $\Omega \subseteq \mathbb{R}^d$. One way to describe the distribution of a point process $\mathcal{X}$ on $\Omega$ is via its joint intensities.

Definition: $\mathcal{X}$ has joint intensities $\rho_k$, $k = 1, 2, \ldots$, if for any mutually disjoint (measurable) sets $A_1, \ldots, A_k$,
$$\mathbb{E}\Big[\prod_{j=1}^{k} \mathcal{X}(A_j)\Big] = \int_{A_1 \times \cdots \times A_k} \rho_k(x_1, \ldots, x_k)\, d\mu(x_1) \cdots d\mu(x_k).$$
In most cases of interest the following holds (assume no double points):

- $\Omega$ discrete and $\mu =$ counting measure: $\rho_k(x_1, \ldots, x_k)$ is the probability that $x_1, \ldots, x_k \in \mathcal{X}$.
- $\Omega$ open in $\mathbb{R}^d$ and $\mu =$ Lebesgue measure:
$$\rho_k(x_1, \ldots, x_k) = \lim_{\epsilon \to 0} \frac{\mathbb{P}\big(\mathcal{X} \text{ has a point in each of } B_\epsilon(z_j),\ j = 1, \ldots, k\big)}{(\mathrm{Vol}(B_\epsilon))^k}.$$
Now let $K$ be the kernel of an integral operator $\mathcal{K}$ on $L^2(\Omega)$ with spectral decomposition $K(x, y) = \sum_k \lambda_k \varphi_k(x) \overline{\varphi_k(y)}$, where $\{\varphi_k\}_k$ is an orthonormal set in $L^2(\Omega)$.

Definition: $\mathcal{X}$ is said to be a determinantal point process with kernel $K$ if its joint intensities are
$$\rho_k(x_1, \ldots, x_k) = \det\big( (K(x_i, x_j))_{1 \le i, j \le k} \big), \qquad (1)$$
for every $k \ge 1$ and $x_1, \ldots, x_k \in \Omega$.
Key facts:

- (Macchi) A locally finite determinantal process with Hermitian kernel $K$ exists if and only if $\mathcal{K}$ is locally of trace class and $0 \le \lambda_k \le 1$ for all $k$.
- If $K(x, y) = \sum_{k=1}^{n} \varphi_k(x) \overline{\varphi_k(y)}$, then the total number of points in $\mathcal{X}$ is $n$, almost surely. Since the corresponding integral operator $\mathcal{K}$ on $L^2(\Omega)$ is a projection, such processes are called determinantal projection processes.
Karlin-McGregor (1958)

Consider $n$ independent simple symmetric random walks on $\mathbb{Z}$ started from $i_1 < i_2 < \cdots < i_n$, where all the $i_j$ are even. Let $P_{i,j}(t)$ be the $t$-step transition probabilities. Then the probability that at time $t$ the random walks are at $j_1 < j_2 < \cdots < j_n$ and have mutually disjoint paths is
$$\det \begin{pmatrix} P_{i_1, j_1}(t) & \cdots & P_{i_1, j_n}(t) \\ \vdots & & \vdots \\ P_{i_n, j_1}(t) & \cdots & P_{i_n, j_n}(t) \end{pmatrix}.$$
This is intimately related to determinantal processes. For instance, one can show that if $t$ is even and we also condition the walks to return to $i_1, \ldots, i_n$ at time $t$, then the positions of the walks at any time $s$ ($1 \le s \le t$) form a determinantal process. (See Johansson (2004) for this and more general results.)
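As a sanity check (not from the talk), the Karlin-McGregor determinant can be compared against brute-force path enumeration; the setup below, two walks over two steps started at 0 and 2, is a made-up minimal example.

```python
import itertools
from math import comb

import numpy as np

def ssrw_transition(i, j, t):
    """t-step transition probability of a simple symmetric random walk on Z."""
    if (t + j - i) % 2 != 0 or abs(j - i) > t:
        return 0.0
    up = (t + j - i) // 2          # number of up-steps needed to go from i to j
    return comb(t, up) / 2**t

starts, ends, t = (0, 2), (0, 2), 2

# Karlin-McGregor: P(walks end at `ends` at time t with mutually disjoint paths)
km = np.linalg.det(np.array([[ssrw_transition(i, j, t) for j in ends]
                             for i in starts]))

# brute force over all step sequences of both walks
count = total = 0
for steps_a in itertools.product([-1, 1], repeat=t):
    for steps_b in itertools.product([-1, 1], repeat=t):
        total += 1
        path_a = np.cumsum((starts[0],) + steps_a)
        path_b = np.cumsum((starts[1],) + steps_b)
        hit = path_a[-1] == ends[0] and path_b[-1] == ends[1]
        disjoint = all(a != b for a, b in zip(path_a, path_b))
        count += hit and disjoint

print(km, count / total)  # both 3/16 = 0.1875
```

Here "disjoint" is checked in space-time: the walks never occupy the same site at the same moment.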
Uniform Spanning Tree

Let $G$ be a finite undirected graph, and let $T$ be chosen uniformly from the set of spanning trees of $G$. Orient the edges of $G$ arbitrarily, and let $\check{e}$ denote the reversal of the oriented edge $e$. For each directed edge $e$, let $\chi^e := \mathbf{1}_e - \mathbf{1}_{\check{e}}$ denote the unit flow along $e$. In the space of antisymmetric functions
$$\ell^2_-(E) = \{ f : E \to \mathbb{R} \,:\, f(e) = -f(\check{e}) \},$$
define the star space $\bigstar = \mathrm{span}\big\{ \textstyle\sum_{e \text{ exiting } v} \chi^e : v \text{ a vertex} \big\}$ and the cycle space $\diamondsuit = \mathrm{span}\big\{ \textstyle\sum_{i=1}^{n} \chi^{e_i} : e_1, \ldots, e_n \text{ an oriented cycle} \big\}$. It is easy to see that $\ell^2_-(E) = \bigstar \oplus \diamondsuit$. Define $I^e := P_{\bigstar} \chi^e$, the orthogonal projection of $\chi^e$ onto the star space. Kirchhoff (1847) proved that for any edge $e$, $\mathbb{P}[e \in T] = (I^e, I^e)$.
Theorem (Burton and Pemantle (1993)): The set of edges in $T$ forms a determinantal process with kernel $Y(e, f) := (I^e, I^f)$; i.e., for any distinct edges $e_1, \ldots, e_k$,
$$\mathbb{P}[e_1, \ldots, e_k \in T] = \det\big[ (Y(e_i, e_j))_{1 \le i, j \le k} \big]. \qquad (2)$$
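A quick numerical illustration of the Burton-Pemantle kernel (my own toy computation, not part of the talk): on a triangle there are 3 spanning trees, every edge lies in 2 of them, and any two edges form a tree.

```python
import numpy as np

# triangle graph: vertices 0, 1, 2; oriented edges (0,1), (1,2), (2,0)
edges = [(0, 1), (1, 2), (2, 0)]
B = np.zeros((3, len(edges)))          # signed vertex-edge incidence matrix
for k, (u, v) in enumerate(edges):
    B[u, k], B[v, k] = 1.0, -1.0       # coordinates of chi^e as a unit flow

# star space = row space of B; Y(e, f) = (I^e, I^f), I^e = projection of chi^e
Y = B.T @ np.linalg.pinv(B @ B.T) @ B  # orthogonal projection onto star space

print(np.round(Y.diagonal(), 6))       # each entry 2/3 = P[e in T]
print(round(np.linalg.det(Y[:2, :2]), 6))  # 1/3 = P[e1, e2 in T]
```

The diagonal entry $Y(e, e)$ is also the effective resistance across $e$ when every edge is a unit resistor, which for the triangle is $2/3$.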
Ginibre Ensemble

Let $A$ be an $n \times n$ matrix with i.i.d. standard complex normal entries. Then the eigenvalues of $A$ form a determinantal process in $\mathbb{C}$ with the kernel
$$K_n(z, w) = \frac{1}{\pi} e^{-\frac{1}{2}(|z|^2 + |w|^2)} \sum_{k=0}^{n-1} \frac{(z \bar w)^k}{k!}.$$
As $n \to \infty$, we get a determinantal process with the kernel
$$K(z, w) = \frac{1}{\pi} e^{-\frac{1}{2}(|z|^2 + |w|^2)} \sum_{k=0}^{\infty} \frac{(z \bar w)^k}{k!} = \frac{1}{\pi} e^{-\frac{1}{2}(|z|^2 + |w|^2) + z \bar w}.$$
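The convergence of the finite-$n$ kernel to the limiting kernel is just the exponential series, and can be confirmed numerically at arbitrary sample points (illustrative only):

```python
import numpy as np
from math import factorial, pi

def K_n(z, w, n):
    """Finite-n Ginibre kernel (series truncated at n terms)."""
    s = sum((z * np.conj(w))**k / factorial(k) for k in range(n))
    return np.exp(-0.5 * (abs(z)**2 + abs(w)**2)) * s / pi

def K_inf(z, w):
    """Limiting kernel (1/pi) exp(-|z|^2/2 - |w|^2/2 + z wbar)."""
    return np.exp(-0.5 * (abs(z)**2 + abs(w)**2) + z * np.conj(w)) / pi

z, w = 0.7 + 0.3j, -0.2 + 0.5j
gap = abs(K_n(z, w, 40) - K_inf(z, w))
print(gap)  # essentially machine precision
```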
Construction of determinantal projection processes

Define $K_H \delta_x(\cdot) = K(\cdot, x)$. The intensity measure of the process is given by
$$\mu_H(dx) = \rho_1(x)\, d\mu(x) = \| K_H \delta_x \|^2\, d\mu(x). \qquad (3)$$
Note that $\mu_H(M) = \dim(H)$, so $\mu_H / \dim(H)$ is a probability measure on $M$. We construct the determinantal process as follows. Start with $n = \dim(H)$ and $H_n = H$.

- If $n = 0$, stop.
- Pick a random point $X_n$ from the probability measure $\mu_{H_n}/n$.
- Let $H_{n-1} \subset H_n$ be the orthocomplement in $H_n$ of the function $K_{H_n} \delta_{X_n}$. In the discrete case, $H_{n-1} = \{ f \in H_n : f(X_n) = 0 \}$. Note that $\dim(H_{n-1}) = n - 1$ a.s.
- Decrease $n$ by 1 and iterate.
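The algorithm above can be sketched for a discrete ground set. The following is a minimal real-valued implementation (the rank-2 kernel on three points is an invented example), with a Monte Carlo check of one inclusion probability:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_projection_dpp(Q, rng):
    """Sample a determinantal projection process with kernel K = Q Q^T,
    where the columns of Q are orthonormal (real case, discrete ground set)."""
    Q = Q.copy()
    points = []
    while Q.shape[1] > 0:
        n = Q.shape[1]
        probs = np.sum(Q**2, axis=1) / n      # K(x, x)/n = squared row norms / n
        probs /= probs.sum()
        x = rng.choice(len(probs), p=probs)
        points.append(int(x))
        # keep only the functions in the current space that vanish at x:
        # coefficient vectors orthogonal to row x of Q
        u = Q[x] / np.linalg.norm(Q[x])
        _, _, Vt = np.linalg.svd(u[None, :])  # rows 1: span the hyperplane u-perp
        Q = Q @ Vt[1:].T
    return points

# a rank-2 projection kernel on a 3-point ground set (invented example)
phi1 = np.ones(3) / np.sqrt(3)
phi2 = np.array([1.0, -1.0, 0.0]) / np.sqrt(2)
Q = np.stack([phi1, phi2], axis=1)
K = Q @ Q.T                                   # K(2, 2) = 1/3

samples = [sample_projection_dpp(Q, rng) for _ in range(5000)]
freq2 = np.mean([2 in s for s in samples])
print(round(K[2, 2], 4), round(float(freq2), 4))
```

Each run returns exactly $\operatorname{rank}(K) = 2$ points, and the empirical frequency of point 2 should approximate the first intensity $K(2,2) = 1/3$.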
Proposition: The points $(X_1, \ldots, X_n)$ constructed by this algorithm are distributed as a uniform random ordering of the points of a determinantal process $\mathcal{X}$ with kernel $K$.

Proof: Let $\psi_j = K_H \delta_{x_j}$. Projecting to $H_j$ is equivalent to first projecting to $H$ and then to $H_j$, and it is easy to check that $K_{H_j} \delta_{x_j} = K_{H_j} \psi_j$. Thus, by (3), the density of the random vector $(X_1, \ldots, X_n)$ constructed by the algorithm equals
$$p(x_1, \ldots, x_n) = \prod_{j=1}^{n} \frac{\| K_{H_j} \psi_j \|^2}{j}.$$
Note that $H_j = H \ominus \mathrm{span}\{\psi_{j+1}, \ldots, \psi_n\}$, and therefore $V = \prod_{j=1}^{n} \| K_{H_j} \psi_j \|$ is exactly the repeated base-times-height formula for the volume of the parallelepiped determined by the vectors $\psi_1, \ldots, \psi_n$ in the finite-dimensional vector space $H \subset L^2(M)$. It is well known that $V^2$ equals the determinant of the Gram matrix whose $(i, j)$ entry is the scalar product of $\psi_i$ and $\psi_j$, that is, $\int \psi_i \overline{\psi_j}\, d\mu = K(x_i, x_j)$. We get
$$p(x_1, \ldots, x_n) = \frac{1}{n!} \det\big( K(x_i, x_j) \big),$$
so the random variables $X_1, \ldots, X_n$ are exchangeable. Viewed as a point process, the $n$-point joint intensity of $\{X_j\}_{j=1}^n$ is $n!\, p(x_1, \ldots, x_n)$, which agrees with that of the determinantal process $\mathcal{X}$. The claim now follows since $\mathcal{X}$ contains $n$ points almost surely.
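The base-times-height identity $V^2 = \det(\mathrm{Gram})$ used in the proof is easy to check numerically; the example below uses generic random vectors, nothing specific to the kernel:

```python
import numpy as np

rng = np.random.default_rng(1)
psi = rng.standard_normal((5, 3))   # three vectors psi_1, psi_2, psi_3 in R^5

# base-times-height: QR gives each successive orthogonal component on diag(R)
_, R = np.linalg.qr(psi)
vol_sq = np.prod(np.diag(R)**2)     # V^2 for the parallelepiped

gram = psi.T @ psi                  # Gram matrix of inner products
print(vol_sq, np.linalg.det(gram))  # equal
```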
We have the following remarkable fact connecting the kernel $K$ to the distribution of $\mathcal{X}$:

Theorem (Shirai-Takahashi (2002)): Suppose $\mathcal{X}$ is a determinantal process on $E$ with kernel $K(x, y) = \sum_k \lambda_k \varphi_k(x) \overline{\varphi_k(y)}$. Then
$$\mathcal{L}(\mathcal{X}) = \sum_{S \subseteq \mathbb{N}} \alpha(S)\, \mathcal{L}(\mathcal{X}(S)), \qquad (4)$$
where $\mathcal{X}(S)$ is the determinantal process on $E$ with kernel $\sum_{j \in S} \varphi_j(x) \overline{\varphi_j(y)}$ and
$$\alpha(S) = \prod_{j \in S} \lambda_j \prod_{j \notin S} (1 - \lambda_j).$$
In particular, the number of points in the process $\mathcal{X}$ has the distribution of a sum of independent Bernoulli($\lambda_k$) random variables.
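For a 2×2 Hermitian kernel the Bernoulli statement can be verified by hand; the snippet below (an illustrative example with made-up eigenvalues 0.3 and 0.8) compares the point-count distribution computed from the joint intensities with that of a sum of independent Bernoullis:

```python
import numpy as np

lam = np.array([0.3, 0.8])                       # eigenvalues in [0, 1]
c, s = np.cos(0.4), np.sin(0.4)
phi = np.array([[c, -s], [s, c]])                # orthonormal eigenvectors
K = phi @ np.diag(lam) @ phi.T                   # 2x2 symmetric kernel

# point-count distribution from the joint intensities
p_both = np.linalg.det(K)                        # rho_2(x1, x2)
p_none = 1 - K[0, 0] - K[1, 1] + p_both          # inclusion-exclusion
p_one = 1 - p_both - p_none

# distribution of a sum of independent Bernoulli(lam_k) variables
q_none = (1 - lam[0]) * (1 - lam[1])
q_both = lam[0] * lam[1]
q_one = 1 - q_none - q_both

print((p_none, p_one, p_both), (q_none, q_one, q_both))  # matching triples
```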
Proof (HKPV): Assume $K$ has finite rank, i.e., take $K(x, y) = \sum_{k=1}^{n} \lambda_k \varphi_k(x) \overline{\varphi_k(y)}$. Otherwise we can approximate by finite-rank kernels and deduce the result for general $K$, since the corresponding processes increase (stochastically) to the original process. Let $I_k$, $1 \le k \le n$, be independent random variables with $I_k \sim$ Bernoulli($\lambda_k$), and set $K_I(x, y) = \sum_{k=1}^{n} I_k \varphi_k(x) \overline{\varphi_k(y)}$; $K_I$ is a random analogue of the kernel $K$. We want to prove that for all $m$ and all $x_i$,
$$\mathbb{E}\Big[ \det\big( (K_I(x_i, x_j))_{1 \le i, j \le m} \big) \Big] = \det\big( (K(x_i, x_j))_{1 \le i, j \le m} \big). \qquad (5)$$
Proof of (5): Take $m = n$ first. Writing $A$ for the matrix with entries $A_{ik} = \varphi_k(x_i)$, we have
$$\big( K_I(x_i, x_j) \big)_{1 \le i, j \le n} = A \, \mathrm{diag}(I_1, \ldots, I_n) \, A^*,$$
and hence
$$\det\big( (K_I(x_i, x_j))_{1 \le i, j \le n} \big) = I_1 \cdots I_n \, \det(A A^*).$$
On taking expectations we get
$$\mathbb{E}\Big[ \det\big( (K_I(x_i, x_j))_{1 \le i, j \le n} \big) \Big] = \lambda_1 \cdots \lambda_n \det(A A^*).$$
Now we also have, with the same matrix $A_{ik} = \varphi_k(x_i)$,
$$\big( K(x_i, x_j) \big)_{1 \le i, j \le n} = A \, \mathrm{diag}(\lambda_1, \ldots, \lambda_n) \, A^*,$$
from which we get
$$\det\big( (K(x_i, x_j))_{1 \le i, j \le n} \big) = \lambda_1 \cdots \lambda_n \det(A A^*).$$
This proves that the two point processes $\mathcal{X}$ (determinantal with kernel $K$) and $\mathcal{X}_I$ (determinantal with kernel $K_I$) have the same $n$-point joint intensity. But both of these processes have at most $n$ points. Therefore, for every $m$, the $m$-point joint intensities are determined by the $n$-point joint intensities (zero for $m > n$, obtained by integrating for $m < n$). This proves the theorem.
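The identity (5) in the case $m = n$ can be checked numerically by averaging over all $2^n$ values of $(I_1, \ldots, I_n)$; the orthonormal functions below are random (an invented real-valued example):

```python
import itertools

import numpy as np

rng = np.random.default_rng(2)
lam = np.array([0.2, 0.6, 0.9])
# random orthonormal functions phi_k on a 3-point ground set (columns of Phi)
Phi, _ = np.linalg.qr(rng.standard_normal((3, 3)))

# E det(K_I(x_i, x_j)) over all values of independent I_k ~ Bernoulli(lam_k)
expected = 0.0
for I in itertools.product([0, 1], repeat=3):
    weight = np.prod([lam[k] if I[k] else 1 - lam[k] for k in range(3)])
    K_I = Phi @ np.diag(I) @ Phi.T
    expected += weight * np.linalg.det(K_I)

K = Phi @ np.diag(lam) @ Phi.T
print(expected, np.linalg.det(K))  # equal
```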
Zeros of the i.i.d. Gaussian power series [Virág-P.]

Let
$$f_U(z) = \sum_{n=0}^{\infty} a_n z^n, \qquad Z_U = \mathrm{zeros}(f_U), \qquad (6)$$
with $\{a_n\}$ i.i.d. standard complex Gaussians, $\mathrm{density}(r e^{i\theta}) = \frac{1}{\pi} e^{-r^2}$.

Theorem (Hannay, Zelditch-Shiffman, ...): The law of $Z_U$ is invariant under the Möbius transformations
$$z \mapsto e^{i\alpha} \frac{z - \lambda}{1 - \bar\lambda z}$$
that preserve the unit disk.
Euclidean analog:
$$f_{\mathbb{C}}(z) = \sum_{n=0}^{\infty} \frac{a_n z^n}{\sqrt{n!}} \qquad (7)$$
satisfies
$$\mathrm{Cov}[f_{\mathbb{C}}(z), f_{\mathbb{C}}(w)] = \mathbb{E}\Big[ \sum_n \frac{a_n z^n}{\sqrt{n!}} \, \overline{\sum_k \frac{a_k w^k}{\sqrt{k!}}} \Big] = \sum_{n=0}^{\infty} \frac{z^n \bar w^n}{n!} = e^{z \bar w}.$$
Thus
$$\mathrm{Cov}[f_{\mathbb{C}}(z + a), f_{\mathbb{C}}(w + a)] = e^{(z+a)(\bar w + \bar a)} = \mathrm{Cov}\big[ e^{|a|^2/2} e^{\bar a z} f_{\mathbb{C}}(z),\; e^{|a|^2/2} e^{\bar a w} f_{\mathbb{C}}(w) \big].$$
Since centered Gaussian processes are determined by $\mathrm{Cov}(\cdot, \cdot)$, this proves translation invariance of $\mathrm{Law}[\mathrm{zeros}(f_{\mathbb{C}})]$.
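The covariance identity behind translation invariance can be confirmed at sample points (arbitrary made-up values of $z$, $w$, $a$); since the multiplier never vanishes, multiplying by it does not change the zero set:

```python
import numpy as np

def cov(z, w):
    """Covariance kernel of f_C: E[f_C(z) conj(f_C(w))] = exp(z conj(w))."""
    return np.exp(z * np.conj(w))

z, w, a = 0.3 + 1.1j, -0.7 + 0.2j, 0.5 - 0.4j

lhs = cov(z + a, w + a)

def g(u):
    # deterministic non-vanishing multiplier exp(|a|^2/2 + conj(a) u)
    return np.exp(abs(a)**2 / 2 + np.conj(a) * u)

rhs = g(z) * np.conj(g(w)) * cov(z, w)
print(abs(lhs - rhs))  # ~0
```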
Definition: Let $p_\epsilon(z_1, \ldots, z_n)$ denote the probability that a random function $f$ has zeros in $B_\epsilon(z_1), \ldots, B_\epsilon(z_n)$. The joint intensity of zeros (if it exists) is defined to be
$$p(z_1, \ldots, z_n) = \lim_{\epsilon \to 0} \frac{p_\epsilon(z_1, \ldots, z_n)}{(\pi \epsilon^2)^n}. \qquad (8)$$

Theorem (Hammersley): Let $f$ be a Gaussian analytic function in a planar domain $D$, let $z_1, \ldots, z_n \in D$, and consider the matrix $A = \big( \mathbb{E} f(z_i) \overline{f(z_j)} \big)$. If $A$ is non-singular, then $p(z_1, \ldots, z_n)$ exists and equals
$$\frac{\mathbb{E}\big( |f'(z_1) \cdots f'(z_n)|^2 \,\big|\, f(z_1) = \cdots = f(z_n) = 0 \big)}{\det(\pi A)}.$$
Theorem (Virág-P.): The joint intensity of zeros for $f_U$ is
$$p(z_1, \ldots, z_n) = \pi^{-n} \det\Big[ \frac{1}{(1 - z_i \bar z_j)^2} \Big]_{i,j} = \det[ K(z_i, z_j) ],$$
where $K(z, w) = \frac{1}{\pi (1 - z \bar w)^2}$ is the Bergman kernel for $U$.
Proof of the Determinantal Formula

Let
$$T_\beta(z) = \frac{z - \beta}{1 - \bar\beta z} \qquad (9)$$
denote a Möbius transformation fixing the unit disk. Also, for fixed $z_1, \ldots, z_n \in U$, denote
$$\Upsilon(z) = \prod_{j=1}^{n} T_{z_j}(z). \qquad (10)$$

Key facts:

1. Let $f = f_U$ and $z_1, \ldots, z_n \in U$. The distribution of the random function $T_{z_1}(z) \cdots T_{z_n}(z) f(z)$ is the same as the conditional distribution of $f(z)$ given $f(z_1) = \cdots = f(z_n) = 0$.
2. It follows that the conditional joint distribution of $\big( f'(z_k) : k = 1, \ldots, n \big)$ given $f(z_1) = \cdots = f(z_n) = 0$ is the same as the unconditional joint distribution of $\big( \Upsilon'(z_k) f(z_k) : k = 1, \ldots, n \big)$.

3. Consider the $n \times n$ matrices
$$A_{jk} = \mathbb{E} f(z_j) \overline{f(z_k)} = (1 - z_j \bar z_k)^{-1}, \qquad M_{jk} = (1 - z_j \bar z_k)^{-2}.$$
By the classical Cauchy determinant formula,
$$\det(A) = \frac{\prod_{j < k} (z_k - z_j)(\bar z_k - \bar z_j)}{\prod_{j,k} (1 - z_j \bar z_k)} = \prod_{k=1}^{n} |\Upsilon'(z_k)|. \qquad (11)$$
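The Cauchy determinant evaluation of $\det(A)$ can be spot-checked numerically (the points below are chosen arbitrarily in the unit disk):

```python
import numpy as np

z = np.array([0.3 + 0.2j, -0.5 + 0.1j, 0.2 - 0.6j])   # points in the unit disk
n = len(z)

A = 1.0 / (1.0 - np.outer(z, np.conj(z)))             # A_jk = (1 - z_j zbar_k)^{-1}

num = np.prod([abs(z[k] - z[j])**2
               for j in range(n) for k in range(j + 1, n)])
den = np.prod(1.0 - np.outer(z, np.conj(z)))          # product over all j, k
print(np.linalg.det(A), num / den)                    # equal (and real positive)
```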
4. We also use Borchardt's identity:
$$\mathrm{perm}\Big( \frac{1}{x_j + y_k} \Big)_{j,k} \cdot \det\Big( \frac{1}{x_j + y_k} \Big)_{j,k} = \det\Big( \frac{1}{(x_j + y_k)^2} \Big)_{j,k}.$$
Setting $x_j = z_j^{-1}$ and $y_k = -\bar z_k$, and dividing both sides by $\prod_j z_j^2$, gives
$$\mathrm{perm}(A) \det(A) = \det(M). \qquad (12)$$

5. Finally, recall the Gaussian moment formula: if $Z_1, \ldots, Z_n$ are jointly complex Gaussian random variables with covariance matrix $C_{jk} = \mathbb{E} Z_j \bar Z_k$, then
$$\mathbb{E}\big( |Z_1 \cdots Z_n|^2 \big) = \mathrm{perm}(C).$$
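Borchardt's identity itself is easy to test directly for small matrices, with the permanent expanded by brute force (the values of $x$ and $y$ are arbitrary):

```python
import itertools

import numpy as np

def perm(M):
    """Permanent via brute-force expansion (fine for small n)."""
    n = M.shape[0]
    return sum(np.prod([M[i, s[i]] for i in range(n)])
               for s in itertools.permutations(range(n)))

x = np.array([0.9, 2.1, -0.4])
y = np.array([1.3, 0.7, 2.5])
C = 1.0 / (x[:, None] + y[None, :])      # Cauchy matrix 1/(x_j + y_k)

lhs = perm(C) * np.linalg.det(C)
rhs = np.linalg.det(C**2)                # entrywise squares
print(lhs, rhs)                          # Borchardt: perm(C) det(C) = det(C^{.2})
```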
From Hammersley's formula, $p(z_1, \ldots, z_n)$ equals
$$\frac{\mathbb{E}\big( |f'(z_1) \cdots f'(z_n)|^2 \,\big|\, f(z_1) = \cdots = f(z_n) = 0 \big)}{\pi^n \det(A)}.$$
The numerator equals
$$\mathbb{E}\big( |f(z_1) \cdots f(z_n)|^2 \big) \prod_k |\Upsilon'(z_k)|^2 = \mathrm{perm}(A) \det(A)^2,$$
where the last equality uses the Gaussian moment formula and (11). Thus
$$p(z_1, \ldots, z_n) = \pi^{-n}\, \mathrm{perm}(A) \det(A) = \pi^{-n} \det(M).$$
Theorem 2 (Virág-P.): Let $X_k$, $k = 1, 2, \ldots$, be independent with
$$X_k = \begin{cases} 1 & \text{with probability } r^{2k}, \\ 0 & \text{with probability } 1 - r^{2k}. \end{cases}$$
Then $\sum_{k=1}^{\infty} X_k$ and $N_r = \#\big( Z_U \cap B(0, r) \big)$ have the same distribution.

Corollary: Let $h_r = 4\pi r^2 / (1 - r^2)$ (the hyperbolic area of $B(0, r)$). Then
$$\mathbb{P}(N_r = 0) = e^{-h_r \frac{\pi}{24} + o(h_r)} = e^{\frac{-\pi^2/12 + o(1)}{1 - r}}. \qquad (13)$$

All of the above generalizes to other simply connected domains with smooth boundary:
$$\mathbb{E}\big( f_D(z) \overline{f_D(w)} \big) = 2\pi S_D(z, w) \quad \text{(Szegő kernel)}. \qquad (14)$$
Denote $q = r^2$. The key to the law of $N_r = \#\big( Z_U \cap B(0, r) \big)$:
$$\mathbb{E}\binom{N_r}{k} = \frac{1}{k!} \int_{B_r^k} p(z_1, \ldots, z_k)\, dz_1 \cdots dz_k = \frac{q^{\binom{k+1}{2}}}{(1-q)(1-q^2)\cdots(1-q^k)} =: \gamma_k.$$
Euler's partition identity implies that the generating function has product form:
$$\mathbb{E}(1 + s)^{N_r} = \sum_{k=0}^{\infty} \gamma_k s^k = \prod_{k=1}^{\infty} (1 + q^k s), \qquad (15)$$
since
$$\mathbb{E}(1 + s)^{N_r} = \sum_{k=0}^{\infty} \mathbb{E}\binom{N_r}{k} s^k = \sum_{k=0}^{\infty} \gamma_k s^k. \qquad (16)$$
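Euler's partition identity behind (15) can be confirmed numerically with truncated sums and products (the values of $q$ and $s$ are arbitrary), along with the mean $\mathbb{E} N_r = \gamma_1 = q/(1-q)$:

```python
import numpy as np

q, s, N = 0.6, 0.7, 200        # q = r^2; truncate sums/products at N terms

# product side of (15): prod_{k>=1} (1 + q^k s)
prod_side = np.prod([1 + q**k * s for k in range(1, N + 1)])

# series side: sum_k gamma_k s^k, gamma_k = q^{k(k+1)/2} / ((1-q)...(1-q^k))
series_side, gamma = 1.0, 1.0
for k in range(1, N + 1):
    gamma *= q**k / (1 - q**k)         # gamma_k from gamma_{k-1}
    series_side += gamma * s**k

# sanity check on the mean: E N_r = sum_k q^k = q/(1-q) = r^2/(1-r^2)
mean = sum(q**k for k in range(1, N + 1))
print(prod_side, series_side, mean, q / (1 - q))
```

Setting $s = -1$ in the product recovers the hole probability $\mathbb{P}(N_r = 0) = \prod_{k \ge 1}(1 - q^k)$ from the Corollary.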
Dynamics

Let
$$f_U(t, z) = \sum_n a_n(t) z^n \qquad (17)$$
with the coefficients $a_n(t)$ performing independent Ornstein-Uhlenbeck diffusions, $a_n(t) = e^{-t/2} W_n(e^t)$. Suppose that the zero set of $f_U$ contains the origin. The movement of this zero satisfies the stochastic differential equation
$$dz = \sigma\, dW, \qquad (18)$$
where
$$\sigma = \frac{1}{f_U'(0)} = c \lim_{r \to 1} \frac{1}{r^2} \prod_{\substack{z \in Z_U \\ 0 < |z| < r}} z = c \prod_{k=1}^{\infty} e^{1/k} z_k. \qquad (19)$$
More information8.1 Concentration inequality for Gaussian random matrix (cont d)
MGMT 69: Topics in High-dimensional Data Analysis Falll 26 Lecture 8: Spectral clustering and Laplacian matrices Lecturer: Jiaming Xu Scribe: Hyun-Ju Oh and Taotao He, October 4, 26 Outline Concentration
More informationMeasures on spaces of Riemannian metrics CMS meeting December 6, 2015
Measures on spaces of Riemannian metrics CMS meeting December 6, 2015 D. Jakobson (McGill), jakobson@math.mcgill.ca [CJW]: Y. Canzani, I. Wigman, DJ: arxiv:1002.0030, Jour. of Geometric Analysis, 2013
More informationLectures 15: Cayley Graphs of Abelian Groups
U.C. Berkeley CS294: Spectral Methods and Expanders Handout 15 Luca Trevisan March 14, 2016 Lectures 15: Cayley Graphs of Abelian Groups In which we show how to find the eigenvalues and eigenvectors of
More informationhere, this space is in fact infinite-dimensional, so t σ ess. Exercise Let T B(H) be a self-adjoint operator on an infinitedimensional
15. Perturbations by compact operators In this chapter, we study the stability (or lack thereof) of various spectral properties under small perturbations. Here s the type of situation we have in mind:
More informationThe Stein and Chen-Stein methods for functionals of non-symmetric Bernoulli processes
The Stein and Chen-Stein methods for functionals of non-symmetric Bernoulli processes Nicolas Privault Giovanni Luca Torrisi Abstract Based on a new multiplication formula for discrete multiple stochastic
More informationOn the inverse matrix of the Laplacian and all ones matrix
On the inverse matrix of the Laplacian and all ones matrix Sho Suda (Joint work with Michio Seto and Tetsuji Taniguchi) International Christian University JSPS Research Fellow PD November 21, 2012 Sho
More informationIntroduction to Geometry
Introduction to Geometry it is a draft of lecture notes of H.M. Khudaverdian. Manchester, 18 May 211 Contents 1 Euclidean space 3 1.1 Vector space............................ 3 1.2 Basic example of n-dimensional
More informationLinear Algebra Review
Chapter 1 Linear Algebra Review It is assumed that you have had a course in linear algebra, and are familiar with matrix multiplication, eigenvectors, etc. I will review some of these terms here, but quite
More informationClark model in the general situation
Constanze Liaw (Baylor University) at UNAM April 2014 This talk is based on joint work with S. Treil. Classical perturbation theory Idea of perturbation theory Question: Given operator A, what can be said
More informationMath 520 Exam 2 Topic Outline Sections 1 3 (Xiao/Dumas/Liaw) Spring 2008
Math 520 Exam 2 Topic Outline Sections 1 3 (Xiao/Dumas/Liaw) Spring 2008 Exam 2 will be held on Tuesday, April 8, 7-8pm in 117 MacMillan What will be covered The exam will cover material from the lectures
More informationMathematical Methods wk 2: Linear Operators
John Magorrian, magog@thphysoxacuk These are work-in-progress notes for the second-year course on mathematical methods The most up-to-date version is available from http://www-thphysphysicsoxacuk/people/johnmagorrian/mm
More informationPalm measures and rigidity phenomena in point processes
Palm measures and rigidity phenomena in point processes arxiv:1509.00898v1 [math.pr] 2 Sep 2015 Subhroshekhar Ghosh Princeton University September 4, 2015 Abstract We study the mutual regularity properties
More informationChapter 3 Transformations
Chapter 3 Transformations An Introduction to Optimization Spring, 2014 Wei-Ta Chu 1 Linear Transformations A function is called a linear transformation if 1. for every and 2. for every If we fix the bases
More informationKernel Method: Data Analysis with Positive Definite Kernels
Kernel Method: Data Analysis with Positive Definite Kernels 2. Positive Definite Kernel and Reproducing Kernel Hilbert Space Kenji Fukumizu The Institute of Statistical Mathematics. Graduate University
More informationLecture 19 October 28, 2015
PHYS 7895: Quantum Information Theory Fall 2015 Prof. Mark M. Wilde Lecture 19 October 28, 2015 Scribe: Mark M. Wilde This document is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike
More informationQuadrature for the Finite Free Convolution
Spectral Graph Theory Lecture 23 Quadrature for the Finite Free Convolution Daniel A. Spielman November 30, 205 Disclaimer These notes are not necessarily an accurate representation of what happened in
More information4.2. ORTHOGONALITY 161
4.2. ORTHOGONALITY 161 Definition 4.2.9 An affine space (E, E ) is a Euclidean affine space iff its underlying vector space E is a Euclidean vector space. Given any two points a, b E, we define the distance
More informationEmbeddings of finite metric spaces in Euclidean space: a probabilistic view
Embeddings of finite metric spaces in Euclidean space: a probabilistic view Yuval Peres May 11, 2006 Talk based on work joint with: Assaf Naor, Oded Schramm and Scott Sheffield Definition: An invertible
More informationINDISTINGUISHABILITY OF ABSOLUTELY CONTINUOUS AND SINGULAR DISTRIBUTIONS
INDISTINGUISHABILITY OF ABSOLUTELY CONTINUOUS AND SINGULAR DISTRIBUTIONS STEVEN P. LALLEY AND ANDREW NOBEL Abstract. It is shown that there are no consistent decision rules for the hypothesis testing problem
More informationStatistical Mechanics and Combinatorics : Lecture II
Statistical Mechanics and Combinatorics : Lecture II Kircho s matrix-tree theorem Deinition (Spanning tree) A subgraph T o a graph G(V, E) is a spanning tree i it is a tree that contains every vertex in
More information