Analysis on Graphs. Alexander Grigoryan Lecture Notes. University of Bielefeld, WS 2011/12


Contents

1 The Laplace operator on graphs
  1.1 The notion of a graph
  1.2 Cayley graphs
  1.3 Random walks
  1.4 The Laplace operator
  1.5 The Dirichlet problem

2 Spectral properties of the Laplace operator
  2.1 Green's formula
  2.2 Eigenvalues of the Laplace operator
  2.3 Convergence to equilibrium
  2.4 More about the eigenvalues
  2.5 Convergence to equilibrium for bipartite graphs
  2.6 Eigenvalues of Z_m
  2.7 Products of graphs
  2.8 Eigenvalues and mixing times in Z_m^n

3 Geometric bounds for the eigenvalues
  3.1 Cheeger's inequality
  3.2 Estimating the spectral gap from below via diameter
  3.3 Expansion rate

4 Eigenvalues on infinite graphs
  4.1 Dirichlet Laplace operator
  4.2 Cheeger's inequality
  4.3 Isoperimetric inequalities
  4.4 Solving the Dirichlet problem by iterations
  4.5 Isoperimetric inequalities on Cayley graphs

5 Estimates of the heat kernel
  5.1 The notion and basic properties of the heat kernel
  5.2 One-dimensional simple random walk
  5.3 Carne-Varopoulos estimate
  5.4 On-diagonal upper estimates of the heat kernel
  5.5 On-diagonal lower bound via the Dirichlet eigenvalues
  5.6 Volume growth and on-diagonal lower bound
  5.7 Escape rate of random walk

6 The type problem
  6.1 Recurrence and transience
  6.2 Recurrence and transience on Cayley graphs
  6.3 Volume tests for recurrence
  6.4 Isoperimetric tests for transience

References

Chapter 1

The Laplace operator on graphs

1.1 The notion of a graph

A graph is a couple $(V,E)$ where $V$ is a set of vertices, that is, an arbitrary set whose elements are called vertices, and $E$ is a set of edges, that is, $E$ consists of some couples $(x,y)$ where $x,y\in V$. We write $x\sim y$ ($x$ is connected to $y$, or $x$ is joined to $y$, or $x$ is adjacent to $y$, or $x$ is a neighbor of $y$) if $(x,y)\in E$. Graphs are normally represented graphically as a set of points on a plane, and if $x\sim y$ then one connects the corresponding points on the plane by a line.

There are two versions of the definition of edges:

1. The couples $(x,y)$ are ordered, that is, $(x,y)$ and $(y,x)$ are considered as different (unless $x=y$). In this case, the graph is called directed or oriented.

2. The couples $(x,y)$ are unordered, that is, $(x,y)=(y,x)$. In this case, $x\sim y$ is equivalent to $y\sim x$, and the graph is called undirected or unoriented.

Unless otherwise specified, all graphs will be undirected. The edge $(x,y)$ will normally be denoted by $xy$, and $x,y$ are called the endpoints of this edge. The edge $xx$ with the same endpoints (should it exist) is called a loop. A graph is called simple if it has no loops. A graph is called locally finite if each vertex has a finite number of edges. For each vertex $x$, define its degree
$$\deg(x)=\#\{y\in V: x\sim y\},$$
that is, $\deg(x)$ is the number of the edges with endpoint $x$. A graph is called finite if the number of vertices is finite. Of course, a finite graph is locally finite.

We start with a simple observation.

Lemma 1.1 (Double counting of edges) On any simple finite graph $(V,E)$, the following identity holds:
$$\sum_{x\in V}\deg(x)=2\,\#E.$$

Proof. Let $n=\#V$ and let us enumerate all vertices as $1,2,\dots,n$. Consider the adjacency matrix $A=(a_{ij})_{i,j=1}^n$ of the graph, which is defined as follows:
$$a_{ij}=\begin{cases}1,& i\sim j,\\ 0,& i\not\sim j.\end{cases}$$
This matrix is symmetric and the sum of the entries in the row $i$ (and in the column $i$) is equal to $\deg(i)$, so that
$$\sum_{i=1}^n \deg(i)=\sum_{i=1}^n\Big(\sum_{j=1}^n a_{ij}\Big)=\sum_{i,j=1}^n a_{ij}.$$
The entries $a_{ij}$ in the last summation are $0$ and $1$, and $a_{ij}=1$ if and only if $(i,j)$ is an edge. In this case $i\neq j$ and $(j,i)$ is also an edge. Therefore, each edge $ij$ contributes to this sum the value $1$ twice: as $(i,j)$ and as $(j,i)$. Hence,
$$\sum_{i,j=1}^n a_{ij}=2\,\#E,$$
which finishes the proof.
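As a quick illustration of Lemma 1.1 (not part of the original notes), here is a minimal Python sketch that verifies the double-counting identity on a small graph; the adjacency-list representation is an assumption of this example.

```python
# Added sketch: check the identity sum_x deg(x) = 2 * #E of Lemma 1.1
# on a small simple graph stored as an adjacency list (this storage
# format is an assumption of the example, not notation from the notes).

def degrees(adj):
    """Degree of every vertex of a simple undirected graph."""
    return {x: len(neighbors) for x, neighbors in adj.items()}

def edge_count(adj):
    """Number of undirected edges; each edge {x, y} appears in adj twice."""
    return sum(len(neighbors) for neighbors in adj.values()) // 2

# Example: the complete graph K_4 on the vertices 0, 1, 2, 3.
adj = {x: [y for y in range(4) if y != x] for x in range(4)}
assert sum(degrees(adj).values()) == 2 * edge_count(adj)
print(degrees(adj), edge_count(adj))   # {0: 3, 1: 3, 2: 3, 3: 3} 6
```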

Example. Consider some examples of graphs.

1. A complete graph $K_n$. The set of vertices is $V=\{1,2,\dots,n\}$, and the edges are defined as follows: $i\sim j$ for any two distinct $i,j\in V$. That is, any two distinct points in $V$ are connected. Hence, the number of edges in $K_n$ is
$$\frac12\sum_{i=1}^n \deg(i)=\frac{n(n-1)}{2}.$$

2. A complete bipartite graph $K_{n,m}$. The set of vertices is $V=\{1,\dots,n,n+1,\dots,n+m\}$, and the edges are defined as follows: $i\sim j$ if and only if either $i\le n$ and $j>n$, or $i>n$ and $j\le n$. That is, the set of vertices is split into two groups, $S_1=\{1,\dots,n\}$ and $S_2=\{n+1,\dots,n+m\}$, and two vertices are connected if and only if they belong to different groups. The number of edges in $K_{n,m}$ is equal to $nm$.

3. A lattice graph $\mathbb Z$. The set of vertices $V$ consists of all integers, and the integers $x,y$ are connected if and only if $|x-y|=1$.

4. A lattice graph $\mathbb Z^n$. The set of vertices consists of all $n$-tuples $(x_1,\dots,x_n)$ where the $x_i$ are integers, and $(x_1,\dots,x_n)\sim(y_1,\dots,y_n)$ if and only if
$$\sum_{i=1}^n |x_i-y_i|=1.$$
That is, $x_i$ is different from $y_i$ for exactly one value of the index $i$, and $|x_i-y_i|=1$ for this value of $i$.

Definition. A weighted graph is a couple $(\Gamma,\mu)$ where $\Gamma=(V,E)$ is a graph and $\mu_{xy}$ is a non-negative function on $V\times V$ such that

1. $\mu_{xy}=\mu_{yx}$;

2. $\mu_{xy}>0$ if and only if $x\sim y$.

Alternatively, $\mu_{xy}$ can be considered as a positive function on the set $E$ of edges, which is extended to be $0$ on non-edge pairs $(x,y)$. The weight is called simple if $\mu_{xy}=1$ for all edges $x\sim y$.

Any weight $\mu_{xy}$ gives rise to a function on vertices as follows:
$$\mu(x)=\sum_{y:\,y\sim x}\mu_{xy}. \qquad (1.1)$$
Then $\mu(x)$ is called the weight of the vertex $x$. For example, if the weight $\mu_{xy}$ is simple then $\mu(x)=\deg(x)$.

The following lemma extends Lemma 1.1.

Lemma 1.2 On any simple finite weighted graph $(\Gamma,\mu)$,
$$\sum_{x\in V}\mu(x)=2\sum_{\xi\in E}\mu_\xi.$$

Proof. Rewrite (1.1) in the form $\mu(x)=\sum_{y\in V}\mu_{xy}$, where the summation is extended to all $y\in V$. This does not change the sum in (1.1) because we add only non-edges $(x,y)$, where $\mu_{xy}=0$. Therefore, we obtain
$$\sum_{x\in V}\mu(x)=\sum_{x\in V}\sum_{y\in V}\mu_{xy}=\sum_{x,y\in V}\mu_{xy}=2\sum_{\{x,y\}:\,x\sim y}\mu_{xy}=2\sum_{\xi\in E}\mu_\xi.$$

Definition. A finite sequence $\{x_k\}_{k=0}^n$ of vertices on a graph is called a path if $x_k\sim x_{k+1}$ for all $k=0,1,\dots,n-1$. The number $n$ of edges in the path is referred to as the length of the path.

Definition. A graph $(V,E)$ is called connected if, for any two vertices $x,y\in V$, there is a path connecting $x$ and $y$, that is, a path $\{x_k\}_{k=0}^n$ such that $x_0=x$ and $x_n=y$. If $(V,E)$ is connected then define the graph distance $d(x,y)$ between any two vertices $x,y$ as follows: if $x\neq y$ then $d(x,y)$ is the minimal length of a path that connects $x$ and $y$, and if $x=y$ then $d(x,y)=0$. The connectedness here is needed to ensure that $d(x,y)<\infty$ for any two points.

Lemma 1.3 On any connected graph, the graph distance is a metric, so that $(V,d)$ is a metric space.

Proof. We need to check the following axioms of a metric.

1. Positivity: $0\le d(x,y)<\infty$, and $d(x,y)=0$ if and only if $x=y$.

2. Symmetry: $d(x,y)=d(y,x)$.

3. The triangle inequality: $d(x,y)\le d(x,z)+d(z,y)$.

The first two properties are obvious for the graph distance. To prove the triangle inequality, choose a shortest path $\{x_k\}_{k=0}^n$ connecting $x$ and $z$, and a shortest path $\{y_k\}_{k=0}^m$ connecting $z$ and $y$, so that
$$d(x,z)=n \quad\text{and}\quad d(z,y)=m.$$
Then the sequence
$$x=x_0,x_1,\dots,x_{n-1},z,y_1,\dots,y_m=y$$
is a path connecting $x$ and $y$ that has length $n+m$, which implies that
$$d(x,y)\le n+m=d(x,z)+d(z,y).$$
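The graph distance just defined can be computed by breadth-first search. The following Python sketch is an illustration added here (not from the notes); the adjacency-list input format is an assumption of the example.

```python
# Added sketch: the graph distance d(x, y) computed by breadth-first
# search; the adjacency-list input format is an assumption of the example.

from collections import deque

def graph_distance(adj, x, y):
    """Minimal length of a path from x to y (raises if none exists)."""
    dist = {x: 0}
    queue = deque([x])
    while queue:
        v = queue.popleft()
        if v == y:
            return dist[v]
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    raise ValueError("x and y lie in different connected components")

# Example: on the 4-cycle C_4 the opposite vertices have distance 2.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(graph_distance(adj, 0, 2))   # 2
```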

Lemma 1.4 If $(V,E)$ is a connected locally finite graph then the set of vertices $V$ is either finite or countable.

Proof. Fix a reference point $x\in V$ and consider the set
$$B_n:=\{y\in V: d(x,y)\le n\},$$
that is, a ball with respect to the distance $d$. Let us prove by induction in $n$ that $\#B_n<\infty$.

The inductive basis for $n=0$ is trivial because $B_0=\{x\}$. Inductive step: assuming that $B_n$ is finite, let us prove that $B_{n+1}$ is finite. It suffices to prove that $B_{n+1}\setminus B_n$ is finite. For any vertex $y\in B_{n+1}\setminus B_n$ we have $d(x,y)=n+1$, so that there is a path $\{x_k\}_{k=0}^{n+1}$ from $x$ to $y$ of length $n+1$. Consider the vertex $z=x_n$. Clearly, the path $\{x_k\}_{k=0}^n$ connects $x$ and $z$ and has length $n$, which implies that $d(x,z)\le n$ and, hence, $z\in B_n$. On the other hand, we have by construction $z\sim y$. Hence, we have shown that every vertex $y\in B_{n+1}\setminus B_n$ is connected to one of the vertices in $B_n$. However, the number of vertices in $B_n$ is finite, and each of them has finitely many neighbors. Therefore, the total number of neighbors of $B_n$ is finite, which implies $\#(B_{n+1}\setminus B_n)<\infty$ and $\#B_{n+1}<\infty$.

Finally, observe that $V=\bigcup_{n=1}^\infty B_n$, because for any $y\in V$ we have $d(x,y)<\infty$, so that $y$ belongs to some $B_n$. Then $V$ is either finite or countable as a countable union of finite sets.

1.2 Cayley graphs

Here we discuss a large class of graphs that originate from groups. Recall that a group $(G,*)$ is a set $G$ equipped with a binary operation $*$ that satisfies the following properties:

1. for all $x,y\in G$, $x*y$ is an element of $G$;

2. associative law: $x*(y*z)=(x*y)*z$;

3. there exists a neutral element $e$ such that $x*e=e*x=x$ for all $x\in G$;

4. for any $x\in G$ there exists the inverse element $x^{-1}$, such that $x*x^{-1}=x^{-1}*x=e$.

If in addition the operation is commutative, that is, $x*y=y*x$, then the group $G$ is called abelian or commutative. In the case of an abelian group, one uses the additive notation. Namely, the group operation is denoted by $+$ instead of $*$, the neutral element is denoted by $0$ instead of $e$, and the inverse element is denoted by $-x$ rather than $x^{-1}$.

Example.

1. Consider the set $\mathbb Z$ of all integers with the operation $+$. Then $(\mathbb Z,+)$ is an abelian group where the neutral element is the number $0$ and the inverse of $x$ is the negative of $x$.

2. Fix an integer $q$ and consider the set $\mathbb Z_q$ of all residues (Restklassen) modulo $q$, with the operation $+$. In other words, the elements of $\mathbb Z_q$ are the equivalence classes of integers modulo $q$. Namely, one says that two integers $x,y$ are congruent modulo $q$ and writes $x=y\ \mathrm{mod}\ q$ if $x-y$ is divisible by $q$. This relation is an equivalence relation and gives rise to $q$ equivalence classes, which are called the residues and are denoted by $0,1,\dots,q-1$ as integers, so that the integer $k$ belongs to the residue $k$. The addition in $\mathbb Z_q$ is inherited from $\mathbb Z$ as follows:
$$x+y=z \text{ in } \mathbb Z_q \iff x+y=z\ \mathrm{mod}\ q \text{ in } \mathbb Z.$$
Then $(\mathbb Z_q,+)$ is an abelian group, the neutral element is $0$, and the inverse of $x$ is $q-x$ (except for $x=0$).

For example, consider $\mathbb Z_2=\{0,1\}$. Apart from the trivial sums $x+0=x$, we have the following rules in this group: $1+1=0$ and $-1=1$. In $\mathbb Z_3=\{0,1,2\}$, we have
$$1+1=2,\qquad 1+2=0,\qquad 2+2=4=1,\qquad -1=2.$$

Given two groups, say $(A,+)$ and $(B,+)$, one can consider their direct product: the group $(A\times B,+)$ that consists of pairs $(a,b)$ where $a\in A$ and $b\in B$, with the operation
$$(a,b)+(a',b')=(a+a',b+b').$$
The neutral element of $A\times B$ is $(0_A,0_B)$, and the inverse of $(a,b)$ is $(-a,-b)$. More generally, given $n$ groups $(A_k,+)$ where $k=1,\dots,n$, define their direct product $(A_1\times A_2\times\dots\times A_n,+)$ as the set of all sequences $(a_k)_{k=1}^n$ where $a_k\in A_k$, with the operation
$$(a_1,\dots,a_n)+(a_1',\dots,a_n')=(a_1+a_1',\dots,a_n+a_n').$$
The neutral element is $(0_{A_1},\dots,0_{A_n})$ and the inverse is $-(a_1,\dots,a_n)=(-a_1,\dots,-a_n)$. If the groups are abelian then their product is also abelian.

Example.

1. The group $\mathbb Z^n$ is defined as the direct product $\underbrace{\mathbb Z\times\mathbb Z\times\dots\times\mathbb Z}_{n}$ of $n$ copies of the group $\mathbb Z$.

2. The group $(\mathbb Z_2\times\mathbb Z_3,+)$ consists of pairs $(a,b)$ where $a$ is a residue mod $2$ and $b$ is a residue mod $3$. For example, we have the following sum in this group:
$$(1,1)+(1,2)=(0,0),$$
whence it follows that $-(1,1)=(1,2)$.

What is the relation of groups to graphs? Groups give rise to a class of graphs that are called Cayley graphs. Let $(G,*)$ be a group and let $S$ be a subset of $G$ with the property that if $x\in S$ then $x^{-1}\in S$, and that $e\notin S$. Such a set $S$ will be called symmetric. A group $G$ and a symmetric set $S\subset G$ determine a graph $(V,E)$ as follows: the set $V$ of vertices coincides with $G$, and the set $E$ of edges is defined by
$$x\sim y \iff x^{-1}*y\in S. \qquad (1.2)$$
Note that the relation $x\sim y$ is symmetric in $x,y$, that is, $x\sim y$ implies $y\sim x$, because
$$y^{-1}*x=\big(x^{-1}*y\big)^{-1}\in S.$$
Hence, the graph $(V,E)$ is undirected. The fact that $e\notin S$ implies that $x\not\sim x$, because $x^{-1}*x=e\notin S$. Hence, the graph $(V,E)$ contains no loops.

Definition. The graph $(V,E)$ defined as above is denoted by $(G,S)$ and is called the Cayley graph of the group $G$ with the edge generating set $S$.

There may be many different Cayley graphs of the same group since they depend also on the choice of $S$. It follows from the construction that $\deg(x)=\#S$ for any $x\in V$. In particular, if $S$ is finite then the graph $(V,E)$ is locally finite. In what follows, we will consider only locally finite Cayley graphs and always assume that they are equipped with a simple weight.

If the group operation is $+$ then (1.2) becomes
$$x\sim y \iff y-x\in S \iff x-y\in S.$$
In this case, the symmetry of $S$ means that $0\notin S$ and that if $x\in S$ then also $-x\in S$.

Example.

1. $G=\mathbb Z$ with $+$ and $S=\{1,-1\}$. Then $x\sim y$ if $x-y=1$ or $x-y=-1$, that is, if $x$ and $y$ are neighbors on the real axis. If $S=\{1,-1,2,-2\}$ then $x\sim y$ if $|x-y|=1$ or $|x-y|=2$, so that we obtain a different graph.

2. Let $G=\mathbb Z^n$ with $+$. Let $S$ consist of the points $(x_1,\dots,x_n)\in\mathbb Z^n$ such that exactly one of the $x_i$ is equal to $\pm1$ and the others are $0$; that is,
$$S=\Big\{(x_1,\dots,x_n)\in\mathbb Z^n: \sum_{i=1}^n|x_i|=1\Big\}.$$

For example, in the case $n=2$ we have
$$S=\{(1,0),(-1,0),(0,1),(0,-1)\}.$$
The connection $x\sim y$ means that $x-y$ has exactly one coordinate equal to $\pm1$ and all others equal to $0$; equivalently, this means that
$$\sum_{i=1}^n|x_i-y_i|=1.$$
Hence, the Cayley graph $(\mathbb Z^n,S)$ is exactly the standard lattice graph $\mathbb Z^n$. In the case $n=2$ it is the square grid [figure omitted].

Consider now another edge generating set $S$ on $\mathbb Z^2$ with two more elements:
$$S=\{(1,0),(-1,0),(0,1),(0,-1),(1,1),(-1,-1)\}.$$
The corresponding graph $(\mathbb Z^2,S)$ is the square grid with one family of diagonal edges added [figure omitted].

3. Let $G=\mathbb Z_2=\{0,1\}$. The only possibility for $S$ is $S=\{1\}$ (note that $-1=1$). The graph $(\mathbb Z_2,S)$ coincides with $K_2$.

4. Let $G=\mathbb Z_q$ where $q>2$, and $S=\{1,-1\}$. That is, each residue $k=0,1,\dots,q-1$ has two neighbors: $k-1$ and $k+1$. For example, $0$ has the neighbors $1$ and $q-1$. The graph $(\mathbb Z_q,S)$ is called the $q$-cycle and is denoted by $C_q$; for instance, $C_3$ is a triangle and $C_4$ is a square [figures omitted].
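The following Python sketch (an added illustration, not from the notes) builds the Cayley graph of a finite abelian group of the form $\mathbb Z_q^n$ from an edge generating set $S$; the tuple encoding of group elements is an assumption of the example.

```python
# Added sketch: the Cayley graph (G, S) of the abelian group G = (Z_q)^n,
# written additively, so x ~ y iff y - x belongs to S.  The encoding of
# group elements as tuples is an assumption of the example.

from itertools import product

def cayley_graph(q, n, S):
    """Adjacency lists of ((Z_q)^n, S); S must be symmetric and omit 0."""
    G = list(product(range(q), repeat=n))
    def add(x, s):
        return tuple((xi + si) % q for xi, si in zip(x, s))
    return {x: [add(x, s) for s in S] for x in G}

# The cycle C_6 = (Z_6, {1, -1}) and the binary cube {0,1}^3:
C6 = cayley_graph(6, 1, [(1,), (5,)])
cube3 = cayley_graph(2, 3, [(1, 0, 0), (0, 1, 0), (0, 0, 1)])
print(len(C6[(0,)]), len(cube3[(0, 0, 0)]))   # degrees 2 and 3, i.e. #S
```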

5. Consider $\mathbb Z_q$ with the symmetric set $S=\mathbb Z_q\setminus\{0\}$. Then every two distinct elements $x,y\in\mathbb Z_q$ are connected by an edge. Hence, the resulting Cayley graph is the complete graph $K_q$.

6. Let $G=\mathbb Z_2^n:=\underbrace{\mathbb Z_2\times\mathbb Z_2\times\dots\times\mathbb Z_2}_{n}$, which consists of $n$-tuples $(x_1,\dots,x_n)$ of residues mod $2$, that is, each $x_i$ is $0$ or $1$. Let $S$ consist of all elements $(x_1,\dots,x_n)$ such that exactly one $x_i$ is equal to $1$ and all others are $0$. Then the graph $(\mathbb Z_2^n,S)$ is called the $n$-dimensional binary cube and is denoted by $\{0,1\}^n$, analogously to the geometric $n$-dimensional cube $[0,1]^n$. Clearly, $\{0,1\}^1=K_2$ and $\{0,1\}^2=C_4$. The graph $\{0,1\}^3$ is shown in two ways [figures omitted].

7. Let $G=\mathbb Z_q\times\mathbb Z_2$. Then $G$ consists of pairs $(x,y)$ where $x\in\mathbb Z_q$ and $y\in\mathbb Z_2$, and $G$ can be split into two disjoint subsets
$$G_0=\mathbb Z_q\times\{0\}=\{(x,0):x\in\mathbb Z_q\} \quad\text{and}\quad G_1=\mathbb Z_q\times\{1\}=\{(x,1):x\in\mathbb Z_q\},$$
each having $q$ elements. Set $S=G_1$. Then
$$(x,a)\sim(y,b)\iff b-a=1\iff a\neq b.$$
In other words, $(x,a)\sim(y,b)$ if and only if the points $(x,a)$ and $(y,b)$ belong to different subsets $G_0$, $G_1$. Hence, the graph $(\mathbb Z_q\times\mathbb Z_2,S)$ coincides with the complete bipartite graph $K_{q,q}$ with the partition $G_0$, $G_1$.

Definition. A graph $(V,E)$ is called $D$-regular if all vertices $x\in V$ have the same degree $D$ (that is, each vertex has $D$ edges). A graph is called regular if it is $D$-regular for some $D$.

Of course, there are plenty of examples of non-regular graphs. Clearly, all Cayley graphs are regular, and all regular graphs that we have discussed above were Cayley graphs. However, there are regular graphs that are not Cayley graphs (cf. Exercise 8).

1.3 Random walks

Consider a classical problem from Probability theory. Let $\{x_k\}_{k=1}^\infty$ be a sequence of independent random variables taking the values $1$ and $-1$ with probability $1/2$ each; that is,
$$\mathbb P(x_k=1)=\mathbb P(x_k=-1)=1/2.$$

Consider the sum
$$\xi_n=x_1+\dots+x_n$$
and ask what is the likely behavior of $\xi_n$ for large $n$. Historically, this type of problem came from game theory (and from gambling practice): at each integer value of time, a player either wins with probability $1/2$ or loses with the same probability. The games at different times are independent. Then $\xi_n$ represents the gain at time $n$ if $\xi_n>0$, and the loss at time $n$ if $\xi_n<0$. Of course, the mean value of $\xi_n$, that is, the expectation, is $0$, because
$$\mathbb E(x_k)=\tfrac12\cdot1+\tfrac12\cdot(-1)=0 \quad\text{and}\quad \mathbb E(\xi_n)=\sum_{k=1}^n\mathbb E(x_k)=0.$$
The games with this property are called fair games or martingales. However, the deviation of $\xi_n$ from the average value $0$ can still be significant in any particular game.

We will adopt a geometric point of view on $\xi_n$ as follows. Note that $\xi_n\in\mathbb Z$ and that $\xi_n$ is defined inductively: $\xi_0=0$ and $\xi_{n+1}-\xi_n$ is equal to $1$ or $-1$ with equal probabilities $1/2$. Hence, we can consider $\xi_n$ as the position on $\mathbb Z$ of a walker that jumps at any time $n$ from its current position to a neighboring integer, either right or left, with equal probabilities $1/2$, and independently of the previous movements. Such a random process is called a random walk. Note that this random walk is related to the graph structure of $\mathbb Z$: namely, the walker moves at each step along the edges of this graph. Hence, $\xi_n$ can be regarded as a random walk on the graph $\mathbb Z$.

Similarly, one can define a random walk on $\mathbb Z^N$: at any time $n=0,1,2,\dots$, let $\xi_n$ be the position of a walker in $\mathbb Z^N$. It starts at time $0$ at the origin, and at time $n+1$ it moves with equal probability $1/(2N)$ by one of the vectors $\pm e_1,\dots,\pm e_N$, where $e_1,\dots,e_N$ is the canonical basis in $\mathbb R^N$. That is, $\xi_0=0$ and
$$\mathbb P(\xi_{n+1}-\xi_n=\pm e_k)=\frac1{2N}.$$
We always assume that the random walk in question has the Markov property: the choice of the move at any time $n$ is independent of the previous movements. The following picture of the trace of a random walk on $\mathbb Z^2$ was copied from Wikipedia. [figure omitted]

More generally, one can define a random walk on any locally finite graph $(V,E)$. Namely, imagine a walker that at any time $n=0,1,\dots$ has a random position $\xi_n$ at one of the vertices of $V$, defined as follows: $\xi_0=x_0$ is a given vertex, and $\xi_{n+1}$ is obtained from $\xi_n$ by moving with equal probabilities along one of the edges of $\xi_n$, that is,
$$\mathbb P(\xi_{n+1}=y\mid\xi_n=x)=\begin{cases}\dfrac1{\deg(x)},& y\sim x,\\[4pt] 0,& y\not\sim x.\end{cases} \qquad (1.3)$$
The random walk $\{\xi_n\}$ defined in this way is called a simple random walk on $(V,E)$. The adjective "simple" refers to the fact that the walker moves to the neighboring vertices with equal probabilities.

A simple random walk is a particular case of a Markov chain. Given a finite or countable set $V$ (which is called a state space), a Markov kernel on $V$ is any function $P(x,y):V\times V\to[0,+\infty)$ with the property that
$$\sum_{y\in V}P(x,y)=1 \quad\text{for all } x\in V. \qquad (1.4)$$

If $V$ is countable then the summation here is understood as a series. Any Markov kernel defines a Markov chain $\{\xi_n\}_{n=0}^\infty$ as a sequence of random variables with values in $V$ such that the following identity holds:
$$\mathbb P(\xi_{n+1}=y\mid\xi_n=x)=P(x,y), \qquad (1.5)$$
and such that the behavior of the process from any time $n$ onwards is independent of the past. The latter requirement is called the Markov property, and it will be considered in detail below.

Observe that the rule (1.3) defining the random walk on a graph $(V,E)$ can also be written in the form (1.5), where
$$P(x,y)=\begin{cases}\dfrac1{\deg(x)},& y\sim x,\\[4pt] 0,& y\not\sim x.\end{cases} \qquad (1.6)$$
The condition (1.4) is obviously satisfied, because
$$\sum_{y\in V}P(x,y)=\sum_{y\sim x}P(x,y)=\sum_{y\sim x}\frac1{\deg(x)}=1.$$
Hence, the random walk on a graph is a particular case of a Markov chain, with the specific Markov kernel (1.6).

Let us discuss the Markov property. Its exact meaning is given by the following identity:
$$\mathbb P(\xi_1=x_1,\xi_2=x_2,\dots,\xi_{n+1}=x_{n+1}\mid\xi_0=x)
=\mathbb P(\xi_1=x_1,\xi_2=x_2,\dots,\xi_n=x_n\mid\xi_0=x)\,\mathbb P(\xi_{n+1}=x_{n+1}\mid\xi_n=x_n), \qquad (1.7)$$
which postulates the independence of the jump from $x_n$ to $x_{n+1}$ of the previous path. Using (1.5) and (1.7), one obtains by induction that
$$\mathbb P(\xi_1=x_1,\xi_2=x_2,\dots,\xi_n=x_n\mid\xi_0=x)=P(x,x_1)P(x_1,x_2)\cdots P(x_{n-1},x_n). \qquad (1.8)$$
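A Markov chain with a given kernel is easy to simulate on a computer. The sketch below is an added illustration (not from the notes); the dictionary-of-dictionaries encoding of $P(x,y)$ is an assumption of the example. It produces one trajectory of the simple random walk on the cycle $C_5$.

```python
# Added sketch: simulate a Markov chain with kernel P on a finite state
# space.  P is stored as {x: {y: P(x, y), ...}, ...}; this encoding is an
# assumption of the example.

import random

def srw_kernel(adj):
    """The kernel (1.6) of the simple random walk on a graph."""
    return {x: {y: 1.0 / len(nbrs) for y in nbrs} for x, nbrs in adj.items()}

def simulate(P, x0, n, seed=0):
    """Return one trajectory (xi_0, ..., xi_n) started at x0."""
    rng = random.Random(seed)
    path = [x0]
    for _ in range(n):
        ys, probs = zip(*P[path[-1]].items())
        path.append(rng.choices(ys, weights=probs, k=1)[0])
    return path

# Simple random walk on the cycle C_5 = (Z_5, {1, -1}):
adj = {x: [(x + 1) % 5, (x - 1) % 5] for x in range(5)}
print(simulate(srw_kernel(adj), 0, 10))
```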

Obviously, (1.8) implies (1.7) back. In fact, (1.8) can be used to actually construct the Markov chain. Indeed, it is not obvious that there exists a sequence of random variables satisfying (1.5) and (1.7).

Proposition 1.5 The Markov chain exists for any Markov kernel.

Proof. This is the only statement in this course that requires a substantial use of the foundations of Probability Theory. Indeed, it is about the construction of a probability space $(\Omega,\mathbb P)$ and the definition of a sequence $\{\xi_n\}_{n=0}^\infty$ of random variables satisfying the required conditions. The set $\Omega$ will be taken to be the set $V^\infty$ of all sequences $\{x_k\}_{k=1}^\infty$ of points of $V$, which represent the final outcome of the process. In order to construct a probability measure $\mathbb P$ on $\Omega$, we first construct probability measures $\mathbb P^{(n)}$ on the sets of finite sequences $\{x_k\}_{k=1}^n$. Note that the set of sequences of $n$ points of $V$ is nothing other than the product $V^n=\underbrace{V\times\dots\times V}_{n}$. Hence, our strategy is as follows: first construct a probability measure $\mathbb P^{(1)}$ on $V$, then $\mathbb P^{(n)}$ on $V^n$, and then extend it to a measure $\mathbb P$ on $V^\infty$. In fact, we will construct a family of probability measures $\mathbb P_x$ indexed by a point $x\in V$, so that $\mathbb P_x$ is associated with the Markov chain starting from the point $\xi_0=x$.

Fix a point $x\in V$ and observe that $P(x,\cdot)$ determines a probability measure $\mathbb P_x^{(1)}$ on $V$ as follows: for any subset $A\subset V$, set
$$\mathbb P_x^{(1)}(A)=\sum_{y\in A}P(x,y).$$
Clearly, $\mathbb P_x^{(1)}$ is $\sigma$-additive, that is,
$$\mathbb P_x^{(1)}\Big(\bigsqcup_{k=1}^\infty A_k\Big)=\sum_{k=1}^\infty\mathbb P_x^{(1)}(A_k),$$
and $\mathbb P_x^{(1)}(V)=1$ by (1.4).

Next, define a probability measure $\mathbb P_x^{(n)}$ on the product $V^n=\underbrace{V\times\dots\times V}_{n}$ as follows. Firstly, define the measure of any point $(x_1,\dots,x_n)\in V^n$ by
$$\mathbb P_x^{(n)}(x_1,\dots,x_n)=P(x,x_1)P(x_1,x_2)\cdots P(x_{n-1},x_n), \qquad (1.9)$$
and then extend it to all sets $A\subset V^n$ by
$$\mathbb P_x^{(n)}(A)=\sum_{(x_1,\dots,x_n)\in A}\mathbb P_x^{(n)}(x_1,\dots,x_n).$$
Let us verify that it is indeed a probability measure, that is, $\mathbb P_x^{(n)}(V^n)=1$. The inductive basis was proved above; let us make the inductive step from $n$ to $n+1$:
$$\begin{aligned}
\sum_{x_1,\dots,x_{n+1}\in V}\mathbb P_x^{(n+1)}(x_1,\dots,x_{n+1})
&=\sum_{x_1,\dots,x_{n+1}\in V}P(x,x_1)P(x_1,x_2)\cdots P(x_{n-1},x_n)P(x_n,x_{n+1})\\
&=\sum_{x_1,\dots,x_n\in V}P(x,x_1)P(x_1,x_2)\cdots P(x_{n-1},x_n)\sum_{x_{n+1}\in V}P(x_n,x_{n+1})\\
&=\sum_{x_1,\dots,x_n\in V}P(x,x_1)P(x_1,x_2)\cdots P(x_{n-1},x_n)=1,
\end{aligned}$$
where we have used (1.4) and the inductive hypothesis. Note that the measure $\mathbb P_x^{(n)}$ is not the product measure of $n$ copies of $\mathbb P_x^{(1)}$, since the latter would have been $P(x,x_1)P(x,x_2)\cdots P(x,x_n)$.

The sequence of measures $\{\mathbb P_x^{(n)}\}_{n=1}^\infty$ constructed above is consistent in the following sense. Fix two positive integers $n<m$. Then every point $(x_1,\dots,x_n)\in V^n$ can be regarded as a subset of $V^m$ that consists of all sequences whose first $n$ terms are exactly $x_1,\dots,x_n$ and whose remaining terms are arbitrary. Then we have
$$\mathbb P_x^{(n)}(x_1,\dots,x_n)=\mathbb P_x^{(m)}(x_1,\dots,x_n), \qquad (1.10)$$
which is proved as follows:
$$\begin{aligned}
\mathbb P_x^{(m)}(x_1,\dots,x_n)
&=\sum_{x_{n+1},\dots,x_m\in V}\mathbb P_x^{(m)}(x_1,\dots,x_n,x_{n+1},\dots,x_m)\\
&=\sum_{x_{n+1},\dots,x_m\in V}P(x,x_1)P(x_1,x_2)\cdots P(x_{n-1},x_n)\,P(x_n,x_{n+1})\cdots P(x_{m-1},x_m)\\
&=P(x,x_1)P(x_1,x_2)\cdots P(x_{n-1},x_n)\sum_{x_{n+1},\dots,x_m\in V}P(x_n,x_{n+1})\cdots P(x_{m-1},x_m)\\
&=\mathbb P_x^{(n)}(x_1,\dots,x_n)\,\mathbb P_{x_n}^{(m-n)}\big(V^{m-n}\big)=\mathbb P_x^{(n)}(x_1,\dots,x_n).
\end{aligned}$$
In the same way, any subset $A\subset V^n$ admits a cylindrical extension $A'$ to a subset of $V^m$ as follows: a sequence $(x_1,\dots,x_m)$ belongs to $A'$ if $(x_1,\dots,x_n)\in A$. It follows from (1.10) that
$$\mathbb P_x^{(n)}(A)=\mathbb P_x^{(m)}(A'). \qquad (1.11)$$
This is a Kolmogorov consistency condition, which allows one to extend the sequence of measures $\mathbb P_x^{(n)}$ on $V^n$ to a measure on $V^\infty$. Consider first cylindrical subsets of $V^\infty$, that is, sets of the form
$$A'=\{\{x_k\}_{k=1}^\infty : (x_1,\dots,x_n)\in A\}, \qquad (1.12)$$
where $A$ is a subset of $V^n$, and set
$$\mathbb P_x(A')=\mathbb P_x^{(n)}(A). \qquad (1.13)$$
Due to the consistency condition (1.11), this definition does not depend on the choice of $n$. Kolmogorov's extension theorem says that the functional $\mathbb P_x$ defined in this way on cylindrical subsets of $V^\infty$ extends uniquely to a probability measure on the minimal $\sigma$-algebra containing all cylindrical sets. Now we define the probability space as $\Omega=V^\infty$ endowed with the family $\{\mathbb P_x\}$ of probability measures. The random variable $\xi_n$ is the function on $\Omega$ with values in $V$ defined by
$$\xi_n\big(\{x_k\}_{k=1}^\infty\big)=x_n.$$
Then (1.9) can be rewritten in the form
$$\mathbb P_x(\xi_1=x_1,\dots,\xi_n=x_n)=P(x,x_1)P(x_1,x_2)\cdots P(x_{n-1},x_n). \qquad (1.14)$$
The identity (1.14) together with (1.4) are the only properties of Markov chains that we need here and in what follows.

Let us use (1.14) to prove that the sequence $\{\xi_n\}$ is indeed a Markov chain with the Markov kernel $P(x,y)$. We need to verify (1.5) and (1.8). The latter is obviously equivalent to (1.14). To prove the former, write
$$\mathbb P_x(\xi_n=y)=\sum_{x_1,\dots,x_{n-1}\in V}\mathbb P_x(\xi_1=x_1,\dots,\xi_{n-1}=x_{n-1},\xi_n=y)
=\sum_{x_1,\dots,x_{n-1}\in V}P(x,x_1)P(x_1,x_2)\cdots P(x_{n-1},y)$$
and
$$\mathbb P_x(\xi_n=y,\xi_{n+1}=z)=\sum_{x_1,\dots,x_{n-1}\in V}\mathbb P_x(\xi_1=x_1,\dots,\xi_{n-1}=x_{n-1},\xi_n=y,\xi_{n+1}=z)
=\sum_{x_1,\dots,x_{n-1}\in V}P(x,x_1)P(x_1,x_2)\cdots P(x_{n-1},y)\,P(y,z),$$

whence
$$\mathbb P_x(\xi_{n+1}=z\mid\xi_n=y)=\frac{\mathbb P_x(\xi_n=y,\xi_{n+1}=z)}{\mathbb P_x(\xi_n=y)}=P(y,z),$$
which is equivalent to (1.5).

Given a Markov chain $\{\xi_n\}$ with a Markov kernel $P(x,y)$, note that by (1.14)
$$P(x,y)=\mathbb P_x(\xi_1=y),$$
so that $P(x,\cdot)$ is the distribution of $\xi_1$. Denote by $P_n(x,\cdot)$ the distribution of $\xi_n$, that is,
$$P_n(x,y)=\mathbb P_x(\xi_n=y).$$
The function $P_n(x,y)$ is called the transition function or the transition probability of the Markov chain. Indeed, it fully describes what happens to the random walk at time $n$. For a fixed $n$, the function $P_n(x,y)$ is also called the $n$-step transition function.

It is easy to deduce a recurrence relation between $P_n$ and $P_{n+1}$.

Proposition 1.6 For any Markov chain, we have
$$P_{n+1}(x,y)=\sum_{z\in V}P_n(x,z)P(z,y). \qquad (1.15)$$

Proof. By (1.14), we have
$$P_n(x,z)=\mathbb P_x(\xi_n=z)=\sum_{x_1,\dots,x_{n-1}\in V}\mathbb P_x(\xi_1=x_1,\dots,\xi_{n-1}=x_{n-1},\xi_n=z)
=\sum_{x_1,\dots,x_{n-1}\in V}P(x,x_1)P(x_1,x_2)\cdots P(x_{n-1},z).$$
Applying the same argument to $P_{n+1}(x,y)$, we obtain
$$\begin{aligned}
P_{n+1}(x,y)&=\sum_{x_1,\dots,x_n\in V}P(x,x_1)P(x_1,x_2)\cdots P(x_{n-1},x_n)P(x_n,y)\\
&=\sum_{z\in V}\Big(\sum_{x_1,\dots,x_{n-1}\in V}P(x,x_1)P(x_1,x_2)\cdots P(x_{n-1},z)\Big)P(z,y)\\
&=\sum_{z\in V}P_n(x,z)P(z,y),
\end{aligned}$$
which finishes the proof.

Corollary 1.7 For any fixed $n$, $P_n(x,y)$ is also a Markov kernel on $V$.

Proof. We only need to verify that
$$\sum_{y\in V}P_n(x,y)=1.$$
For $n=1$ this is given; the inductive step from $n$ to $n+1$ follows from (1.15):
$$\sum_{y\in V}P_{n+1}(x,y)=\sum_{y\in V}\sum_{z\in V}P_n(x,z)P(z,y)=\sum_{z\in V}P_n(x,z)\sum_{y\in V}P(z,y)=\sum_{z\in V}P_n(x,z)=1.$$
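The recurrence (1.15) translates directly into code. The following sketch is added here for illustration, using the same assumed dictionary encoding of the kernel as in the previous sketch; by Corollary 1.7 the returned values always sum to $1$.

```python
# Added sketch: the n-step transition function computed by iterating the
# recurrence P_{n+1}(x, y) = sum_z P_n(x, z) P(z, y) of (1.15); the
# dictionary encoding of P is the same assumption as before.

def n_step_distribution(P, x0, n):
    """Return P_n(x0, .) as a dictionary y -> P_n(x0, y)."""
    row = {x0: 1.0}                     # P_0(x0, .) = indicator of {x0}
    for _ in range(n):
        new_row = {}
        for z, p in row.items():
            for y, q in P[z].items():
                new_row[y] = new_row.get(y, 0.0) + p * q
        row = new_row
    return row

# With srw_kernel and adj from the previous sketch, the distribution of
# xi_3 for the walk on C_5 started at 0:
# print(n_step_distribution(srw_kernel(adj), 0, 3))   # values sum to 1
```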

Corollary 1.8 We have, for all positive integers $n,k$,
$$P_{n+k}(x,y)=\sum_{z\in V}P_n(x,z)P_k(z,y). \qquad (1.16)$$

Proof. Induction in $k$. The case $k=1$ is covered by (1.15). The inductive step from $k$ to $k+1$:
$$\begin{aligned}
P_{n+(k+1)}(x,y)&=\sum_{w\in V}P_{n+k}(x,w)P(w,y)\\
&=\sum_{w\in V}\Big(\sum_{z\in V}P_n(x,z)P_k(z,w)\Big)P(w,y)\\
&=\sum_{z\in V}P_n(x,z)\sum_{w\in V}P_k(z,w)P(w,y)\\
&=\sum_{z\in V}P_n(x,z)P_{k+1}(z,y).
\end{aligned}$$

Now we impose one restriction on a Markov chain.

Definition. A Markov kernel $P(x,y)$ is called reversible if there exists a positive function $\mu(x)$ on the state space $V$ such that
$$P(x,y)\,\mu(x)=P(y,x)\,\mu(y). \qquad (1.17)$$
A Markov chain is called reversible if its Markov kernel is reversible.

It follows by induction from (1.17) and (1.15) that $P_n(x,y)$ is also a reversible Markov kernel. The condition (1.17) means that the function
$$\mu_{xy}:=P(x,y)\,\mu(x)$$
is symmetric in $x,y$. For example, this is the case when $P(x,y)$ is symmetric in $x,y$; then we just take $\mu(x)=1$. However, the reversibility condition can be satisfied for non-symmetric Markov kernels as well. For example, in the case of a simple random walk on a graph $(V,E)$, we have by (1.6)
$$\mu_{xy}=P(x,y)\deg(x)=\begin{cases}1,& x\sim y,\\ 0,& x\not\sim y,\end{cases}$$

which is symmetric. Hence, a simple random walk is a reversible Markov chain.

Any reversible Markov chain on $V$ gives rise to a graph structure on $V$ as follows. Define the set $E$ of edges on $V$ by the condition
$$x\sim y \iff \mu_{xy}>0.$$
Then $\mu_{xy}$ can be considered as a weight on $(V,E)$ (cf. Section 1.1). Note that the function $\mu(x)$ can be recovered from $\mu_{xy}$ by the identity
$$\sum_{y:\,y\sim x}\mu_{xy}=\sum_{y\in V}P(x,y)\,\mu(x)=\mu(x),$$
which matches (1.1).

Let $\Gamma=(V,E)$ be a graph. Recall that a non-negative function $\mu_{xy}$ on $V\times V$ is called a weight if $\mu_{xy}=\mu_{yx}$ and $\mu_{xy}>0$ if and only if $x\sim y$. A couple $(V,\mu)$ (or $(\Gamma,\mu)$) is called a weighted graph. Note that the information about the edges is contained in the weight, so that the set $E$ of edges is omitted in the notation $(V,\mu)$.

As we have seen above, any reversible Markov kernel on $V$ determines a weighted graph $(V,\mu)$. Conversely, a weighted graph $(V,\mu)$ determines a reversible Markov kernel on $V$ provided the set $V$ is finite or countable and
$$0<\sum_{y\in V}\mu_{xy}<\infty \quad\text{for all } x\in V. \qquad (1.18)$$
For example, the positivity condition in (1.18) holds if the graph $(V,E)$ determined by the weight $\mu_{xy}$ has no isolated vertices, that is, vertices without neighbors, and the finiteness condition holds if the graph $(V,E)$ is locally finite, so that the summation in (1.18) has finitely many positive terms. The full condition (1.18) is satisfied for locally finite graphs without isolated vertices. If (1.18) holds, then the weight on vertices $\mu(x)=\sum_{y\in V}\mu_{xy}$ is finite and positive for all $x$, and we can set
$$P(x,y)=\frac{\mu_{xy}}{\mu(x)}, \qquad (1.19)$$
so that the reversibility condition (1.17) is obviously satisfied. In this context, a reversible Markov chain is also referred to as a random walk on a weighted graph.
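Here is a short sketch (an added illustration) of the passage (1.19) from an edge weight $\mu_{xy}$ to the reversible kernel $P(x,y)=\mu_{xy}/\mu(x)$, together with a numerical check of the reversibility condition (1.17); the dictionary encoding of $\mu$ is an assumption of the example.

```python
# Added sketch: from a symmetric edge weight mu_xy (dictionary of
# dictionaries, an assumed encoding) to the reversible Markov kernel
# P(x, y) = mu_xy / mu(x) of (1.19), with a check of (1.17).

def vertex_weights(mu):
    """mu(x) = sum_y mu_xy, as in (1.1)."""
    return {x: sum(row.values()) for x, row in mu.items()}

def reversible_kernel(mu):
    m = vertex_weights(mu)
    return {x: {y: w / m[x] for y, w in row.items()} for x, row in mu.items()}

def is_reversible(mu):
    """Check P(x, y) mu(x) = P(y, x) mu(y) for all edges."""
    m, P = vertex_weights(mu), reversible_kernel(mu)
    return all(abs(P[x][y] * m[x] - P[y][x] * m[y]) < 1e-12
               for x in mu for y in mu[x])

# Example: the path 0 - 1 - 2 with weights mu_{01} = 1 and mu_{12} = 2.
mu = {0: {1: 1.0}, 1: {0: 1.0, 2: 2.0}, 2: {1: 2.0}}
print(is_reversible(mu))   # True
```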

From now on, we stay in the following setting: we have a weighted graph $(V,\mu)$ satisfying (1.18), the associated reversible Markov kernel $P(x,y)$, and the corresponding random walk (= Markov chain) $\{\xi_n\}$. Fix a point $x_0\in V$ and consider the functions
$$v_n(x)=\mathbb P_{x_0}(\xi_n=x)=P_n(x_0,x) \quad\text{and}\quad u_n(x)=\mathbb P_x(\xi_n=x_0)=P_n(x,x_0).$$
The function $v_n(x)$ is the distribution of $\xi_n$ at time $n$. By Corollary 1.7, we have $\sum_{x\in V}v_n(x)=1$. The function $u_n(x)$ is somewhat more convenient to deal with. The functions $v_n$ and $u_n$ are related as follows:
$$v_n(x)\,\mu(x_0)=P_n(x_0,x)\,\mu(x_0)=P_n(x,x_0)\,\mu(x)=u_n(x)\,\mu(x),$$
where we have used the reversibility of $P_n$. Hence, we have the identity
$$v_n(x)=u_n(x)\,\frac{\mu(x)}{\mu(x_0)}. \qquad (1.20)$$
Extend $u_n$ and $v_n$ to $n=0$ by setting $u_0=v_0=\mathbf 1_{\{x_0\}}$, where $\mathbf 1_A$ denotes the indicator function of a set $A\subset V$, that is, the function that has value $1$ at any point of $A$ and value $0$ outside $A$.

Corollary 1.9 For any reversible Markov chain, we have, for all $x\in V$ and $n=0,1,2,\dots$,
$$v_{n+1}(x)=\sum_{y}\frac1{\mu(y)}\,v_n(y)\,\mu_{xy} \qquad (1.21)$$
(a forward equation) and
$$u_{n+1}(x)=\frac1{\mu(x)}\sum_{y}u_n(y)\,\mu_{xy} \qquad (1.22)$$
(a backward equation).

Proof. If $n=0$ then (1.21) becomes
$$P_1(x_0,x)=\frac1{\mu(x_0)}\,\mu_{xx_0},$$
which is a defining identity for $P$. For $n\ge1$ we obtain, using (1.19) and (1.15),
$$\sum_{y}\frac1{\mu(y)}\,v_n(y)\,\mu_{yx}=\sum_{y}P_n(x_0,y)P(y,x)=P_{n+1}(x_0,x)=v_{n+1}(x),$$
which proves (1.21). Substituting (1.20) into (1.21), we obtain (1.22).

In particular, for a simple random walk we have $\mu_{xy}=1$ for $x\sim y$ and $\mu(x)=\deg(x)$, so that we obtain the following identities:
$$v_{n+1}(x)=\sum_{y\sim x}\frac{v_n(y)}{\deg(y)}, \qquad u_{n+1}(x)=\frac1{\deg(x)}\sum_{y\sim x}u_n(y).$$
The last identity means that $u_{n+1}(x)$ is the mean value of $u_n(y)$ taken over the points $y\sim x$. Note that in the case of a regular graph, when $\deg(x)\equiv\mathrm{const}$, we have $u_n\equiv v_n$ by (1.20).
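The backward equation (1.22) is straightforward to iterate numerically; the sketch below is an added illustration, with the same assumed dictionary encoding of $\mu$ as above.

```python
# Added sketch: one step of the backward equation (1.22),
#   u_{n+1}(x) = (1 / mu(x)) * sum_y u_n(y) mu_xy,
# with mu stored as a dictionary of dictionaries (assumed encoding).

def backward_step(mu, u):
    """Given u_n as a dictionary vertex -> value, return u_{n+1}."""
    return {x: sum(u.get(y, 0.0) * w for y, w in row.items()) / sum(row.values())
            for x, row in mu.items()}

# For a simple weight (all mu_xy = 1) this is exactly the mean value of
# u_n over the neighbours of x, as noted above.
```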

Example. Let us compute the function $u_n(x)$ on the lattice graph $\mathbb Z$. Since $\mathbb Z$ is regular, $u_n=v_n$, so that $u_n(x)$ is the distribution of $\xi_n$ at time $n$ provided $\xi_0=0$. We evaluate $u_n$ inductively using the initial condition $u_0=\mathbf 1_{\{0\}}$ and the recurrence relation
$$u_{n+1}(x)=\tfrac12\big(u_n(x+1)+u_n(x-1)\big). \qquad (1.23)$$
The computation of $u_n(x)$ for $n=1,2,3,4$ and $|x|\le4$ is shown here (all empty fields are $0$):

 x:      -4      -3      -2      -1       0       1       2       3       4
 n=0:                                     1
 n=1:                            1/2              1/2
 n=2:                    1/4              1/2              1/4
 n=3:            1/8              3/8              3/8              1/8
 n=4:    1/16             1/4              3/8              1/4             1/16
 ...
 n=10:   0.117            0.205            0.246            0.205           0.117

One can observe (and prove) that $u_n(x)\to0$ as $n\to\infty$.

Example. Consider also the computation of $u_n(x)$ on the graph $C_3=(\mathbb Z_3,\{1,-1\})$. The formula (1.23) is still true provided one understands $x$ as a residue mod $3$. We have then the following computations for $n=1,\dots,6$:

 x in Z_3:     0        1        2
 n=0:          1        0        0
 n=1:          0       1/2      1/2
 n=2:         1/2      1/4      1/4
 n=3:         1/4      3/8      3/8
 n=4:         3/8      5/16     5/16
 n=5:         5/16     11/32    11/32
 n=6:         11/32    21/64    21/64

Here one can observe that the function $u_n(x)$ converges to the constant function $1/3$, and later we will prove this. Hence, for large $n$, the probability that $\xi_n$ visits a given point is nearly $1/3$, which should be expected.

The following table contains a similar computation of $u_n$ on $C_5=(\mathbb Z_5,\{1,-1\})$:

 x in Z_5:     0        1        2        3        4
 n=0:          1        0        0        0        0
 n=1:          0       1/2       0        0       1/2
 n=2:         1/2       0       1/4      1/4       0
 n=3:          0       3/8      1/8      1/8      3/8
 n=4:         3/8      1/16     1/4      1/4      1/16
 ...
 (for large n, all values approach 1/5 = 0.2)

Here $u_n(x)$ approaches $\frac15$, but the convergence is much slower than in the case of $C_3$.

Consider one more example: the complete graph $K_5$. In this case, the function $u_n(x)$ satisfies the identity
$$u_{n+1}(x)=\frac14\sum_{y\neq x}u_n(y).$$
The computation shows the following values of $u_n(x)$:

 x in K_5:     0        1        2        3        4
 n=0:          1        0        0        0        0
 n=1:          0       1/4      1/4      1/4      1/4
 n=2:         1/4      3/16     3/16     3/16     3/16
 n=3:         3/16     13/64    13/64    13/64    13/64
 n=4:         0.203    0.199    0.199    0.199    0.199

This time the convergence to the constant $\frac15$ occurs much faster than in the previous example, although $C_5$ and $K_5$ have the same number of vertices. The extra edges in $K_5$ allow quicker mixing than in the case of $C_5$.

As we will see, for finite graphs it is typically the case that the transition function $u_n(x)$ converges to a constant as $n\to\infty$. For the function $v_n$ this means that
$$v_n(x)=u_n(x)\,\frac{\mu(x)}{\mu(x_0)}\to c\,\mu(x) \quad\text{as } n\to\infty$$
for some constant $c$. The constant $c$ is determined by the requirement that $c\,\mu(x)$ is a probability measure on $V$, that is, from the identity
$$c\sum_{x\in V}\mu(x)=1.$$
Hence, $c\,\mu(x)$ is asymptotically the distribution of $\xi_n$ as $n\to\infty$. The function $c\,\mu(x)$ on $V$ is called the stationary measure or the equilibrium measure of the Markov chain.
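The different mixing speeds of $C_5$ and $K_5$ can also be seen numerically. The sketch below (an added illustration) iterates the mean-value recurrence for $u_n$ on both graphs and prints how far $u_{10}$ is from the constant $1/5$.

```python
# Added sketch: compare how fast u_n approaches the constant 1/5 on the
# cycle C_5 and on the complete graph K_5 (simple weight, so u_{n+1}(x)
# is the mean of u_n over the neighbours of x).

def mean_value_step(adj, u):
    return {x: sum(u[y] for y in nbrs) / len(nbrs) for x, nbrs in adj.items()}

C5 = {x: [(x + 1) % 5, (x - 1) % 5] for x in range(5)}
K5 = {x: [y for y in range(5) if y != x] for x in range(5)}

for name, adj in (("C_5", C5), ("K_5", K5)):
    u = {x: 1.0 if x == 0 else 0.0 for x in range(5)}
    for _ in range(10):
        u = mean_value_step(adj, u)
    print(name, max(abs(u[x] - 0.2) for x in range(5)))
# After 10 steps K_5 is within about 1e-6 of 1/5, while C_5 is not yet.
```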

One of the problems for finite graphs that will be discussed in this course is the rate of convergence of $v_n(x)$ to the equilibrium measure. The point is that $\xi_n$ can be considered, for large $n$, as a random variable with the distribution function $c\,\mu(x)$, so that we obtain a natural generator of a random variable with a prescribed law. However, in order to be able to use this, one should know for which $n$ the distribution of $\xi_n$ is close enough to the equilibrium measure. The value of $n$ for which this is the case is called the mixing time.

For infinite graphs the transition functions $u_n(x)$ and $v_n(x)$ typically converge to $0$ as $n\to\infty$, and an interesting question is to determine the rate of convergence to $0$. For example, we will show that, for a simple random walk in $\mathbb Z^N$,
$$v_n(x)\simeq n^{-N/2} \quad\text{as } n\to\infty.$$
The distribution function $v_n(x)$ is very sensitive to the geometry of the underlying graph.

Another interesting question that arises on infinite graphs is to distinguish the following two alternatives in the behavior of a random walk $\xi_n$ on a graph:

1. $\xi_n$ returns infinitely often to a given point $x_0$ with probability $1$;

2. $\xi_n$ visits $x_0$ finitely many times and then never comes back, also with probability $1$.

In the first case the random walk is called recurrent, and in the second case transient. By a theorem of Polya, a simple random walk in $\mathbb Z^N$ is recurrent if and only if $N\le2$.

1.4 The Laplace operator

Let $f(x)$ be a function on $\mathbb R$. Recall that
$$f'(x)=\lim_{h\to0}\frac{f(x+h)-f(x)}{h},$$
so that
$$f'(x)\approx\frac{f(x+h)-f(x)}{h}\approx\frac{f(x)-f(x-h)}{h}$$
for small $h$. The operators $\frac{f(x+h)-f(x)}{h}$ and $\frac{f(x)-f(x-h)}{h}$ are called difference operators and can be considered as numerical approximations of the derivative. What would be an approximation of the second derivative?
$$f''(x)\approx\frac{f'(x+h)-f'(x)}{h}\approx\frac{\frac{f(x+h)-f(x)}{h}-\frac{f(x)-f(x-h)}{h}}{h}
=\frac{f(x+h)+f(x-h)-2f(x)}{h^2}
=\frac{2}{h^2}\left(\frac{f(x+h)+f(x-h)}{2}-f(x)\right).$$
Hence, $f''$ is determined by the average value of $f$ at the neighboring points $x+h$ and $x-h$, minus $f(x)$.

For functions $f(x,y)$ on $\mathbb R^2$, one can similarly develop numerical approximations for the second order partial derivatives $\frac{\partial^2f}{\partial x^2}$ and $\frac{\partial^2f}{\partial y^2}$, and then for the Laplace operator
$$\Delta f=\frac{\partial^2f}{\partial x^2}+\frac{\partial^2f}{\partial y^2}.$$
More generally, for a function $f$ of $n$ variables $x_1,\dots,x_n$, the Laplace operator is defined by
$$\Delta f=\sum_{k=1}^n\frac{\partial^2f}{\partial x_k^2}. \qquad (1.24)$$
This operator in the three-dimensional space was discovered by Pierre-Simon Laplace in 1784-85 while investigating the movement of the planets in the solar system, using the gravitational law of Newton. It turns out that the Laplace operator occurs in most of the equations of mathematical physics: wave propagation, heat propagation, diffusion processes, electromagnetic phenomena (Maxwell's equations), quantum mechanics (the Schrödinger equation).

The two-dimensional Laplace operator (1.24) admits the following approximation:
$$\Delta f(x,y)\approx\frac{4}{h^2}\left(\frac{f(x+h,y)+f(x-h,y)+f(x,y+h)+f(x,y-h)}{4}-f(x,y)\right),$$
that is, $\Delta f(x,y)$ is determined by the average value of $f$ at the neighboring points
$$(x+h,y),\ (x-h,y),\ (x,y+h),\ (x,y-h),$$
minus the value at $(x,y)$. This observation motivates us to define a discrete version of the Laplace operator on any graph as follows.

Definition. Let $(V,E)$ be a locally finite graph without isolated points (so that $0<\deg(x)<\infty$ for all $x\in V$). For any function $f:V\to\mathbb R$, define the function $\Delta f$ by
$$\Delta f(x)=\frac1{\deg(x)}\sum_{y\sim x}f(y)-f(x).$$
The operator $\Delta$ on functions on $V$ is called the Laplace operator of $(V,E)$.

In words, $\Delta f(x)$ is the difference between the arithmetic mean of $f(y)$ over all vertices $y\sim x$ and $f(x)$. Note that the set $\mathbb R$ of values of $f$ can be replaced by any vector space over $\mathbb R$, for example, by $\mathbb C$.

For example, on the lattice graph $\mathbb Z$ we have
$$\Delta f(x)=\frac{f(x+1)+f(x-1)}{2}-f(x),$$
while on $\mathbb Z^2$
$$\Delta f(x,y)=\frac{f(x+1,y)+f(x-1,y)+f(x,y+1)+f(x,y-1)}{4}-f(x,y).$$

The notion of the Laplace operator can be extended to weighted graphs as follows.

Definition. Let $(V,\mu)$ be a locally finite weighted graph without isolated points. For any function $f:V\to\mathbb R$, define the function $\Delta_\mu f$ by
$$\Delta_\mu f(x)=\frac1{\mu(x)}\sum_{y}f(y)\,\mu_{xy}-f(x). \qquad (1.25)$$
The operator $\Delta_\mu$, acting on functions on $V$, is called the weighted Laplace operator of $(V,\mu)$.

Note that the summation in (1.25) can be restricted to $y\sim x$ because otherwise $\mu_{xy}=0$. Hence, $\Delta_\mu f(x)$ is the difference between the weighted average of $f(y)$ at the vertices $y\sim x$ and $f(x)$. The Laplace operator $\Delta$ is a particular case of the weighted Laplace operator $\Delta_\mu$ when the weight is simple, that is, when $\mu_{xy}=1$ for all $x\sim y$.

Denote by $\mathcal F$ the set of all real-valued functions on $V$. Then $\mathcal F$ is obviously a linear space with respect to addition of functions and multiplication by a constant. Hence $\Delta_\mu$ can be regarded as an operator in $\mathcal F$, that is, $\Delta_\mu:\mathcal F\to\mathcal F$. Note that $\Delta_\mu$ is a linear operator on $\mathcal F$, that is,
$$\Delta_\mu(f+\lambda g)=\Delta_\mu f+\lambda\,\Delta_\mu g \quad\text{for all functions } f,g\in\mathcal F \text{ and } \lambda\in\mathbb R,$$
which is obvious from (1.25). Another useful property to mention: $\Delta_\mu\,\mathrm{const}=0$ (a similar property holds for the differential Laplace operator). Indeed, if $f(x)\equiv c$ then
$$\frac1{\mu(x)}\sum_{y}f(y)\,\mu_{xy}=c\,\frac1{\mu(x)}\sum_{y}\mu_{xy}=c,$$
whence the claim follows.

Recall that the corresponding reversible Markov kernel is given by
$$P(x,y)=\frac{\mu_{xy}}{\mu(x)},$$
so that we can write
$$\Delta_\mu f(x)=\sum_{y}P(x,y)f(y)-f(x).$$
Consider the Markov kernel also as an operator on functions as follows:
$$Pf(x)=\sum_{y}P(x,y)f(y).$$
This operator $P$ is called the Markov operator. Hence, the Laplace operator $\Delta_\mu$ and the Markov operator $P$ are related by the simple identity
$$\Delta_\mu=P-\mathrm{id},$$
where $\mathrm{id}$ is the identity operator in $\mathcal F$.
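Here is a sketch (an added illustration) of the weighted Laplace operator (1.25) acting on a function given as a dictionary of vertex values; it also checks the property $\Delta_\mu\,\mathrm{const}=0$ discussed above. The encoding of $\mu$ is the same assumption as in the earlier sketches.

```python
# Added sketch: the weighted Laplace operator (1.25),
#   (Delta_mu f)(x) = (1 / mu(x)) * sum_y f(y) mu_xy - f(x),
# acting on a function stored as a dictionary (assumed encoding).

def laplace(mu, f):
    return {x: sum(f[y] * w for y, w in row.items()) / sum(row.values()) - f[x]
            for x, row in mu.items()}

# Delta_mu of a constant function vanishes:
mu = {0: {1: 1.0}, 1: {0: 1.0, 2: 2.0}, 2: {1: 2.0}}
print(laplace(mu, {x: 3.0 for x in mu}))   # all values 0.0 (up to rounding)
```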

Example. Let us approximate $f''(x)$ on $\mathbb R$ using different values $h_1$ and $h_2$ for the steps of $x$:
$$\begin{aligned}
f''(x)&\approx\frac{\dfrac{f(x+h_2)-f(x)}{h_2}-\dfrac{f(x)-f(x-h_1)}{h_1}}{\dfrac{h_1+h_2}{2}}\\
&=\frac{2}{h_1+h_2}\left(\frac{f(x+h_2)}{h_2}+\frac{f(x-h_1)}{h_1}-\Big(\frac1{h_1}+\frac1{h_2}\Big)f(x)\right)\\
&=\frac{2}{h_1h_2}\left(\frac{h_2^{-1}f(x+h_2)+h_1^{-1}f(x-h_1)}{h_1^{-1}+h_2^{-1}}-f(x)\right).
\end{aligned}$$
Hence, we obtain the weighted average of $f(x+h_2)$ and $f(x-h_1)$ with the weights $1/h_2$ and $1/h_1$, respectively. This average can be realized via a weighted Laplace operator as follows. Consider the sequence of reals $\{x_k\}_{k\in\mathbb Z}$ defined by the rules
$$x_0=0,\qquad x_{k+1}=\begin{cases}x_k+h_2,& k \text{ is even},\\ x_k+h_1,& k \text{ is odd}.\end{cases}$$
For example, $x_1=h_2$, $x_2=h_2+h_1$, $x_{-1}=-h_1$, $x_{-2}=-h_1-h_2$, etc. Set $V=\{x_k\}_{k\in\mathbb Z}$ and define the edge set $E$ on $V$ by $x_k\sim x_{k+1}$. Now define the weight $\mu_{xy}$ on edges by
$$\mu_{x_kx_{k+1}}=\begin{cases}1/h_2,& k \text{ is even},\\ 1/h_1,& k \text{ is odd}.\end{cases}$$
Then we have
$$\mu(x_k)=\mu_{x_kx_{k+1}}+\mu_{x_kx_{k-1}}=1/h_1+1/h_2,$$
and, for any function $f$ on $V$,
$$\Delta_\mu f(x_k)=\frac{f(x_{k+1})\,\mu_{x_kx_{k+1}}+f(x_{k-1})\,\mu_{x_kx_{k-1}}}{\mu(x_k)}-f(x_k)
=\begin{cases}\dfrac{h_2^{-1}f(x_k+h_2)+h_1^{-1}f(x_k-h_1)}{h_1^{-1}+h_2^{-1}}-f(x_k),& k \text{ is even},\\[12pt]
\dfrac{h_1^{-1}f(x_k+h_1)+h_2^{-1}f(x_k-h_2)}{h_1^{-1}+h_2^{-1}}-f(x_k),& k \text{ is odd}.\end{cases}$$

1.5 The Dirichlet problem

Broadly speaking, the Dirichlet problem is a boundary value problem of the following type: find a function $u$ in a domain $\Omega$ assuming that $\Delta u$ is known in $\Omega$ and $u$ is known at the boundary $\partial\Omega$. For example, if $\Omega$ is the interval $(0,1)$ then this problem becomes as follows: find a function $u(x)$ on $[0,1]$ such that
$$\begin{cases}u''(x)=f(x) & \text{for all } x\in(0,1),\\ u(0)=a \text{ and } u(1)=b,\end{cases}$$
where the function $f$ and the reals $a,b$ are given. This problem can be solved by repeated integrations, provided $f$ is continuous. A similar problem for the $n$-dimensional Laplace operator $\Delta=\sum_{k=1}^n\frac{\partial^2}{\partial x_k^2}$ is stated as follows: given a bounded open domain $\Omega\subset\mathbb R^n$, find a function $u$ in the closure $\overline\Omega$ that satisfies the conditions
$$\begin{cases}\Delta u(x)=f(x) & \text{for all } x\in\Omega,\\ u(x)=g(x) & \text{for all } x\in\partial\Omega,\end{cases} \qquad (1.26)$$
where $f$ and $g$ are given functions. Under certain natural hypotheses, this problem can be solved, and the solution is unique.

One of the sources of the Dirichlet problem is Electrical Engineering. If $u(x)$ is the potential of an electrostatic field in a domain $\Omega\subset\mathbb R^3$ then $u$ satisfies in $\Omega$ the equation $\Delta u=f$, where $f(x)$ is the density of a charge inside $\Omega$, while the values of $u$ at the boundary are determined by the exterior conditions. For example, if the surface $\partial\Omega$ is a metal then it is equipotential, so that $u(x)=\mathrm{const}$ on $\partial\Omega$.

Another source of the Dirichlet problem is Thermodynamics. If $u(x)$ is a stationary temperature at a point $x$ in a domain $\Omega$ then $u$ satisfies the equation $\Delta u=f$, where $f(x)$ is the heat source at the point $x$. Again the values of $u$ at $\partial\Omega$ are determined by the exterior conditions.

Let us consider an analogous problem on a graph that, in particular, arises from a discretization of the problem (1.26) for numerical purposes.

Theorem 1.10 Let $(V,\mu)$ be a connected locally finite weighted graph, and let $\Omega$ be a subset of $V$. Consider the following Dirichlet problem:
$$\begin{cases}\Delta_\mu u(x)=f(x) & \text{for all } x\in\Omega,\\ u(x)=g(x) & \text{for all } x\in\Omega^c,\end{cases} \qquad (1.27)$$
where $u:V\to\mathbb R$ is an unknown function while the functions $f:\Omega\to\mathbb R$ and $g:\Omega^c\to\mathbb R$ are given. If $\Omega$ is finite and $\Omega^c$ is non-empty then, for all functions $f,g$ as above, the Dirichlet problem (1.27) has a unique solution.

Note that, by the second condition in (1.27), the function $u$ is already defined outside $\Omega$, so the problem is to construct an extension of $u$ to $\Omega$ that satisfies the equation $\Delta_\mu u=f$ in $\Omega$.

Define the vertex boundary of $\Omega$ as follows:
$$\partial\Omega=\{y\in\Omega^c : y\sim x \text{ for some } x\in\Omega\}.$$
Observe that the Laplace equation $\Delta_\mu u(x)=f(x)$ for $x\in\Omega$ involves the values $u(y)$ at the neighboring vertices $y$ of $x$, and any neighboring vertex $y$ belongs either to $\Omega$ or to $\partial\Omega$. Hence, the equation $\Delta_\mu u(x)=f(x)$ uses the prescribed values of $u$ only at the boundary $\partial\Omega$, which means that the second condition in (1.27) can be restricted to $\partial\Omega$ as follows:
$$u(x)=g(x) \quad\text{for all } x\in\partial\Omega.$$
This condition (as well as the second condition in (1.27)) is called the boundary condition.

If $\Omega^c$ is empty then the statement of Theorem 1.10 is not true. For example, in this case any constant function $u$ satisfies the same equation $\Delta_\mu u=0$, so that there is no uniqueness. The existence also fails in this case, see Exercises.

The proof of Theorem 1.10 is based on the following maximum principle. A function $u:V\to\mathbb R$ is called subharmonic in $\Omega$ if $\Delta_\mu u(x)\ge0$ for all $x\in\Omega$, and superharmonic in $\Omega$ if $\Delta_\mu u(x)\le0$ for all $x\in\Omega$. A function $u$ is called harmonic in $\Omega$ if it is both subharmonic and superharmonic, that is, if it satisfies the Laplace equation $\Delta_\mu u=0$ in $\Omega$. For example, any constant function is harmonic on all sets.

Lemma 1.11 (A maximum/minimum principle) Let $(V,\mu)$ be a connected locally finite weighted graph and let $\Omega$ be a non-empty finite subset of $V$ such that $\Omega^c$ is non-empty. Then, for any function $u:V\to\mathbb R$ that is subharmonic in $\Omega$, we have
$$\max_{\Omega}u\le\sup_{\Omega^c}u,$$

and for any function $u:V\to\mathbb R$ that is superharmonic in $\Omega$, we have
$$\min_{\Omega}u\ge\inf_{\Omega^c}u.$$

Proof. It suffices to prove the first claim. If $\sup_{\Omega^c}u=+\infty$ then there is nothing to prove. If $\sup_{\Omega^c}u<\infty$ then, by replacing $u$ by $u+\mathrm{const}$, we can assume that $\sup_{\Omega^c}u=0$. Set $M=\max_\Omega u$ and let us show that $M\le0$, which will settle the claim. Assume from the contrary that $M>0$ and consider the set
$$S:=\{x\in V : u(x)=M\}. \qquad (1.28)$$
Clearly, $S\subset\Omega$ and $S$ is non-empty.

Claim 1. If $x\in S$ then all neighbors of $x$ also belong to $S$.

Indeed, we have $\Delta_\mu u(x)\ge0$, which can be rewritten in the form
$$u(x)\le\sum_{y\sim x}P(x,y)\,u(y).$$
Since $u(y)\le M$ for all $y\in V$, we have
$$\sum_{y\sim x}P(x,y)\,u(y)\le M\sum_{y\sim x}P(x,y)=M.$$
Since $u(x)=M$, all the inequalities in the above two lines must be equalities, whence it follows that $u(y)=M$ for all $y\sim x$. This implies that all such $y$ belong to $S$.

Claim 2. Let $S$ be a non-empty set of vertices of a connected graph $(V,E)$ such that $x\in S$ implies that all neighbors of $x$ belong to $S$. Then $S=V$.

Indeed, let $x\in S$ and let $y$ be any other vertex. Then there is a path $\{x_k\}_{k=0}^n$ between $x$ and $y$, that is,
$$x=x_0\sim x_1\sim x_2\sim\dots\sim x_n=y.$$
Since $x_0\in S$ and $x_1\sim x_0$, we obtain $x_1\in S$. Since $x_2\sim x_1$, we obtain $x_2\in S$. By induction, we conclude that all $x_k\in S$, whence $y\in S$.

It follows from the two claims that the set (1.28) must coincide with $V$, which is not possible since $u\le0$ in $\Omega^c$. This contradiction shows that $M\le0$.

Proof of Theorem 1.10. Let us first prove the uniqueness. If we have two solutions $u_1$ and $u_2$ of (1.27) then the difference $u=u_1-u_2$ satisfies the conditions
$$\begin{cases}\Delta_\mu u(x)=0 & \text{for all } x\in\Omega,\\ u(x)=0 & \text{for all } x\in\Omega^c.\end{cases}$$
We need to prove that $u\equiv0$. Since $u$ is both subharmonic and superharmonic in $\Omega$, Lemma 1.11 yields
$$0=\inf_{\Omega^c}u\le\min_{\Omega}u\le\max_{\Omega}u\le\sup_{\Omega^c}u=0,$$
whence $u\equiv0$.

Let us now prove the existence of a solution to (1.27) for all $f,g$. For any $x\in\Omega$, rewrite the equation $\Delta_\mu u(x)=f(x)$ in the form
$$\sum_{y\sim x,\ y\in\Omega}P(x,y)\,u(y)-u(x)=f(x)-\sum_{y\sim x,\ y\in\Omega^c}P(x,y)\,g(y), \qquad (1.29)$$
where we have moved to the right hand side the terms with $y\in\Omega^c$ and used that $u(y)=g(y)$ for such $y$. Denote by $\mathcal F_\Omega$ the set of all real-valued functions $u$ on $\Omega$ and observe that the left hand side of (1.29) can be regarded as an operator in this space; denote it by $Lu$, that is,
$$Lu(x)=\sum_{y\sim x,\ y\in\Omega}P(x,y)\,u(y)-u(x)$$
for all $x\in\Omega$. Rewrite the equation (1.29) in the form $Lu=h$, where $h$ is the right hand side of (1.29), which is a given function on $\Omega$.

Note that $\mathcal F_\Omega$ is a linear space. Since the family $\{\mathbf 1_{\{x\}}\}_{x\in\Omega}$ of indicator functions obviously forms a basis in $\mathcal F_\Omega$, we obtain that $\dim\mathcal F_\Omega=\#\Omega<\infty$. Hence, the operator $L:\mathcal F_\Omega\to\mathcal F_\Omega$ is a linear operator in a finite dimensional space, and the first part of the proof shows that $Lu=0$ implies $u=0$ (indeed, just set $f=0$ and $g=0$ in (1.29)); that is, the operator $L$ is injective. By Linear Algebra, any injective operator acting in spaces of equal dimensions must be bijective (alternatively, one can say that the injectivity of $L$ implies $\det L\neq0$, whence it follows that $L$ is invertible and, hence, bijective). Hence, for any $h\in\mathcal F_\Omega$ there is a solution $u=L^{-1}h\in\mathcal F_\Omega$, which finishes the proof.

How does one calculate numerically the solution of the Dirichlet problem? Denote $N=\#\Omega$ and observe that solving the Dirichlet problem amounts to solving the linear system $Lu=h$, where $L$ is an $N\times N$ matrix. If $N$ is very large then the usual elimination method (not to mention inversion of matrices) requires too many operations. A more economical method of Jacobi uses an approximating sequence $\{u_n\}$ that is constructed as follows. Using that $\Delta_\mu=P-\mathrm{id}$, rewrite the equation $\Delta_\mu u=f$ in the form
$$u=Pu-f$$
and consider a sequence of functions $\{u_n\}$ given by the recurrence relation
$$u_{n+1}=\begin{cases}Pu_n-f & \text{in } \Omega,\\ g & \text{in } \Omega^c.\end{cases}$$
The initial function $u_0$ can be chosen arbitrarily so as to satisfy the boundary condition; for example, take $u_0=0$ in $\Omega$ and $u_0=g$ in $\Omega^c$. In the case $f=0$, we obtain the same recurrence relation $u_{n+1}=Pu_n$ as for the distribution of the random walk, although now we have in addition some boundary values.

Let us estimate the amount of computation for this method. Assuming that $\deg(x)$ is uniformly bounded, the computation of $Pu_n(x)-f(x)$ for all $x\in\Omega$ requires $\simeq N$ operations, and this should be multiplied by the number of iterations. As we will see later (see Section 4.4), if $\Omega$ is a subset of $\mathbb Z^m$ of a cubic shape then the number of iterations should be $\simeq N^{2/m}$. Hence, the Jacobi method requires $\simeq N^{1+2/m}$ operations. For comparison, the row reduction method requires $\simeq N^2$ row operations, and each row operation requires $\simeq N$ elementary operations, hence $\simeq N^3$ elementary operations in total. If $m=1$ then the Jacobi method also requires $\simeq N^3$ operations, but for higher dimensions $m$ the Jacobi method is more economical.

Example. Let us look at a numerical example in the lattice graph $\mathbb Z$ for the set $\Omega=\{1,2,\dots,9\}$, for the boundary value problem
$$\begin{cases}\Delta u(x)=0 & \text{in } \Omega,\\ u(0)=0,\quad u(10)=1.\end{cases}$$
The exact solution is the linear function $u(x)=x/10$. Using the explicit expression for $\Delta$, write the approximating sequence in the form
$$u_{n+1}(x)=\frac{u_n(x+1)+u_n(x-1)}{2},\qquad x\in\{1,2,\dots,9\},$$
while $u_n(0)=0$ and $u_n(10)=1$ for all $n$. Set $u_0(x)=0$ for $x\in\{1,2,\dots,9\}$. The computations yield the following (all empty fields are $0$):

 x:      0     1     2     3     4     5     6     7     8     9     10
 n=0:    0                                                            1
 n=1:    0                                                     1/2    1
 n=2:    0                                               1/4   1/2    1
 n=3:    0                                         1/8   1/4   5/8    1
 n=4:    0                                  1/16   1/8   3/8   5/8    1

Continuing the iteration, the values of $u_n$ increase towards the exact solution $x/10$: after 50 iterations they are within about $0.05$ of it, and after roughly $N^2\approx80$ iterations $u_n$ agrees with $x/10$ to about two decimal places. Here $N=9$ and, indeed, one needs $\simeq N^2$ iterations to approach the solution.
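The Jacobi iteration for this example is a few lines of code. The sketch below (an added illustration, not part of the notes) reproduces the computation for $\Omega=\{1,\dots,9\}$ with boundary values $u(0)=0$, $u(10)=1$ and $f=0$.

```python
# Added sketch: Jacobi iteration u_{n+1} = P u_n - f in Omega, u_{n+1} = g
# outside, for the example Omega = {1,...,9}, u(0) = 0, u(10) = 1, f = 0.

def jacobi_1d(num_iters):
    u = [0.0] * 11                      # vertices 0..10; u[10] is boundary
    u[10] = 1.0
    for _ in range(num_iters):
        new_u = u[:]
        for x in range(1, 10):          # interior vertices of Omega
            new_u[x] = 0.5 * (u[x - 1] + u[x + 1])
        u = new_u
    return u

approx = jacobi_1d(81)                  # roughly N^2 iterations, N = 9
exact = [x / 10 for x in range(11)]
print(max(abs(a - b) for a, b in zip(approx, exact)))   # error of order 1e-2
```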

Chapter 2

Spectral properties of the Laplace operator

Let $(V,\mu)$ be a locally finite weighted graph without isolated points, and let $\Delta_\mu$ be the weighted Laplace operator on $(V,\mu)$.

2.1 Green's formula

Let us consider the difference operator $\nabla_{xy}$ that is defined for any two vertices $x,y\in V$ and maps $\mathcal F$ to $\mathbb R$ as follows:
$$\nabla_{xy}f=f(y)-f(x).$$
The relation between the Laplace operator and the difference operator is given by
$$\Delta_\mu f(x)=\frac1{\mu(x)}\sum_{y}(\nabla_{xy}f)\,\mu_{xy}=\sum_{y}P(x,y)\,(\nabla_{xy}f).$$
Indeed, the right hand side here is equal to
$$\sum_{y}\big(f(y)-f(x)\big)P(x,y)=\sum_{y}f(y)P(x,y)-f(x)\sum_{y}P(x,y)=\sum_{y}f(y)P(x,y)-f(x)=\Delta_\mu f(x).$$
The following theorem is one of the main tools when working with the Laplace operator. For any subset $\Omega$ of $V$, denote by $\Omega^c$ the complement of $\Omega$, that is, $\Omega^c=V\setminus\Omega$.

Theorem 2.1 (Green's formula) Let $(V,\mu)$ be a locally finite weighted graph without isolated points, and let $\Omega$ be a non-empty finite subset of $V$. Then, for any two functions $f,g$ on $V$,
$$\sum_{x\in\Omega}\Delta_\mu f(x)\,g(x)\,\mu(x)=-\frac12\sum_{x,y\in\Omega}(\nabla_{xy}f)(\nabla_{xy}g)\,\mu_{xy}+\sum_{x\in\Omega}\sum_{y\in\Omega^c}(\nabla_{xy}f)\,g(x)\,\mu_{xy}. \qquad (2.1)$$
The formula (2.1) is analogous to the integration by parts formula.