Diameter of random spanning trees in a given graph
Fan Chung (University of California, San Diego), Paul Horn (University of California, San Diego), Linyuan Lu (University of South Carolina)

June 30, 2008

Abstract

We show that a random spanning tree formed in a general graph $G$ (such as a power law graph) has diameter much larger than the diameter of $G$. We show that, with high probability, the diameter of a random spanning tree of $G$ is between $c_1\sqrt{n}$ and $c_2\sqrt{n}\log n$, where $c_1$ and $c_2$ depend on the spectral gap of $G$ and the ratio of the moments of the degree sequence. For the special case of regular graphs, this result improves the previous lower bound of Aldous by a factor of $\sqrt{\log n}$.

1 Introduction

Many information networks and social networks have very small diameters, as dictated by the so-called small world phenomenon. However, in a recent paper by Liben-Nowell and Kleinberg [8], it was observed that in many social networks, typical spanning trees often have relatively large diameter. Examples of such spanning trees include those resulting from passing information to a small selected number of neighbors in various scenarios such as spam or gossip.

A sparse subgraph naturally has very different behavior from its host graph. It is of interest to understand the connections between a graph and its subgraphs. Which invariants of the host graph can or cannot be translated to its subgraphs? Under what conditions can we predict the behavior of subgraphs? In particular, why does a random spanning tree have a large diameter while the opposite is true for the host graph? In this paper, we address this paradox by evaluating the diameter of a random spanning tree of $G$.

A spanning tree $T$ of a connected graph $G$ is a subgraph on $V(G)$ which is isomorphic to a tree. The number of spanning trees is determined by the celebrated matrix-tree theorem of Kirchhoff [7]. Letting $A$ denote the adjacency matrix and $D$ the diagonal matrix of degrees, the matrix-tree theorem
states that the number of spanning trees is equal to the absolute value of the determinant of any $(n-1)\times(n-1)$ sub-matrix of $D-A$.

The diameter of a connected spanning subgraph is always at least the diameter of $G$; indeed, the diameter of a spanning tree can be much larger than the diameter of the graph. The case where the host graph $G$ is the complete graph $K_n$ is well studied in the literature. The number of spanning trees of $K_n$ is $n^{n-2}$ by Cayley's theorem. Rényi and Szekeres [11] showed that the diameter of a random spanning tree of $K_n$ is of order $\sqrt{n}$, which contrasts with the fact that the diameter of $K_n$ is $1$. Motivated by these examples, we ask what the true story is for the diameter of random spanning trees in a general graph.

Previously, Aldous [1] proved that in a regular graph $G$ with spectral bound $\sigma$ (to be defined later), the diameter of a random spanning tree $T$ of $G$, denoted $\mathrm{diam}(T)$, has expected value satisfying

$$c(1-\sigma)\sqrt{\frac{n}{\log n}} \le E(\mathrm{diam}(T)) \le c\,\frac{\sqrt{n\log n}}{1-\sigma}$$

for some absolute constant $c$, where here (and throughout this paper) $\log$ refers to the natural logarithm. We partially improve Aldous' result as follows:

Theorem 1. For a $d$-regular graph $G$ on $n$ vertices with spectral bound $\sigma$, a random spanning tree $T$ of $G$ satisfies

$$c_1\sqrt{n} \le E(\mathrm{diam}(T)) \le c_2\sqrt{\frac{n}{\log(1/\sigma)}}\,\log n$$

for some absolute constants $c_1$ and $c_2$, provided that $d \ge \log^2 n/\log^2(1/\sigma)$.

Theorem 1 is an immediate consequence of the following result for general graphs.

Theorem 2. In a connected graph $G$ on $n$ vertices, we assume that its average degree $d$, minimum degree $\delta$, and second-order average degree $\tilde d = \sum_v d_v^2/\sum_u d_u$ satisfy, for some given $\epsilon > 0$, the condition $d \ge \log^2 n/\log^2(1/\sigma)$. Then with probability $1-\epsilon$, the diameter $\mathrm{diam}(T)$ of a random spanning tree $T$ of $G$ satisfies

$$\mathrm{diam}(T) \ge (1-\epsilon)\sqrt{\frac{\epsilon nd}{\tilde d}} \qquad (1)$$

and

$$\mathrm{diam}(T) \le \frac{c}{\epsilon}\sqrt{\frac{nd}{\delta\log(1/\sigma)}}\,\log n \qquad (2)$$

for some constant $c > 0$.
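Both counts mentioned above — the matrix-tree determinant and Cayley's $n^{n-2}$ — can be cross-checked numerically. The following sketch (ours, not from the paper) computes the determinant of a Laplacian minor with integer-exact, fraction-free Bareiss elimination and verifies Cayley's formula on $K_4$:

```python
def spanning_tree_count(adj):
    """Matrix-tree theorem: the number of spanning trees equals the
    determinant of the combinatorial Laplacian D - A with one row and
    one column deleted.  Integer determinant via Bareiss elimination."""
    n = len(adj)
    # Laplacian minor: delete row 0 and column 0 of D - A.
    M = [[(sum(adj[i]) if i == j else 0) - adj[i][j]
          for j in range(1, n)] for i in range(1, n)]
    m = n - 1
    sign, prev = 1, 1
    for k in range(m - 1):
        if M[k][k] == 0:                      # pivot if necessary
            for r in range(k + 1, m):
                if M[r][k] != 0:
                    M[k], M[r] = M[r], M[k]
                    sign = -sign
                    break
            else:
                return 0                      # singular: disconnected graph
        for i in range(k + 1, m):
            for j in range(k + 1, m):
                # Bareiss update keeps every intermediate value an integer.
                M[i][j] = (M[i][j] * M[k][k] - M[i][k] * M[k][j]) // prev
            M[i][k] = 0
        prev = M[k][k]
    return sign * M[m - 1][m - 1]

# K_4: Cayley's formula predicts 4^(4-2) = 16 spanning trees.
K4 = [[0 if i == j else 1 for j in range(4)] for i in range(4)]
print(spanning_tree_count(K4))   # 16
```

The same routine applies to any adjacency matrix with non-negative integer entries; for the 4-cycle it returns 4, one spanning tree per deleted edge.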
While the conditions look technical, they are derived from the proofs in Sections 4 and 5. We note that the degree requirement is satisfied for any graph so long as, for instance, the average degree is $\Omega(\log n)$ (at least a constant multiple of $\log n$) and $\sigma = o(1)$. For random $d$-regular graphs, it is known that $\sigma$ is about $2/\sqrt{d}$.

We consider the random graph model $G(\mathbf{w})$ for a given expected degree sequence $\mathbf{w} = (w_1, w_2, \ldots, w_n)$, as introduced in [4]. The probability $p_{ij}$ that there is an edge between $v_i$ and $v_j$ is proportional to the product $w_iw_j$ (with a loop at $v_i$ occurring with probability proportional to $w_i^2$). Namely,

$$p_{ij} = \frac{w_iw_j}{\sum_k w_k}. \qquad (3)$$

It has been shown in [6] that $G(\mathbf{w})$ has $\sigma \le (1+o(1))\,4/\sqrt{d}$ provided that the minimum of the weights is $\Omega(\log n)$. Theorem 2 implies that the diameter of a random spanning tree is $\Omega(\sqrt{n/\log n})$ if the average degree $d$ is $\Omega\big((\tfrac{\tilde d}{d}\log\log n)^2\big)$. The upper bound is within a multiplicative factor of $\sqrt{\tilde d/\delta}\,\log n$ of this lower bound.

It has been observed that many real-world information networks satisfy the so-called power law. We say a graph satisfies the power law with exponent $\beta$ if the number of vertices having degree $k$ is asymptotically proportional to $k^{-\beta}$. There are many models being used to capture the behavior of such power law graphs [5], especially for exponents $\beta$ in the range between $2$ and $3$. Suppose we use the random graph model $G(\mathbf{w})$ with $\mathbf{w}$ satisfying the power law. In $G(\mathbf{w})$, the maximum degree can be as large as $\sqrt{n}$. (In other words, if the maximum degree exceeds $\sqrt{n}$, then $G(\mathbf{w})$ can only be used to model the subgraph induced on vertices of degree no larger than $\sqrt{n}$.) Also, in $G(\mathbf{w})$ the second-order average degree is of order $d^{\beta-2}m^{3-\beta}$, where $m$ denotes the maximum degree. Using Theorem 2, the diameter of a random spanning tree in such a random power law graph is at least $c_1 n^{(\beta-1)/4}(\log n)^{(1-\beta)/2}$ and at most $c_2\sqrt{n}(\log n)^{3/2}$ for some constants $c_1$ and $c_2$.

The paper is organized as follows.
In section, we will give definitions and prove some useful facts on the spectrum of the Laplacian, random walks, and spanning trees. In Section 3, we describe a method of using random walks to generate a uniform spanning tree. In Section 4, we will prove the lower bound for the diameter of a random spanning tree and give an upper bound in section 5. Preliminaries Suppose G is a connected (non-bipartite) graph on vertex set [n] {,,..., n}. Let A (a ij ) be adjacency matrix of G defined by { if ij is an edge; a ij 0 otherwise. 3
For $1 \le i \le n$, let $d_i = \sum_j a_{ij}$ be the degree of vertex $i$. Let $\Delta = \max(d_1, \ldots, d_n)$ be the maximum degree and $\delta = \min(d_1, \ldots, d_n)$ the minimum degree. For each $k$, we define the $k$-th volume of $G$, closely related to the $k$-th moment of the degree sequence, to be

$$\mathrm{vol}_k(G) = \sum_{i=1}^n d_i^k.$$

The volume $\mathrm{vol}(G) = \mathrm{vol}_1(G)$ is simply the sum of all degrees. We define the average degree $d = \mathrm{vol}_1(G)/\mathrm{vol}_0(G) = \mathrm{vol}_1(G)/n$ and the second-order average degree $\tilde d = \mathrm{vol}_2(G)/\mathrm{vol}_1(G)$.

Let $D = \mathrm{diag}(d_1, d_2, \ldots, d_n)$ denote the diagonal degree matrix. The (normalized) Laplacian is defined as $\mathcal{L} = I - D^{-1/2}AD^{-1/2}$. The spectrum of the Laplacian consists of the eigenvalues of $\mathcal{L}$ sorted in increasing order:

$$0 = \lambda_0 \le \lambda_1 \le \cdots \le \lambda_{n-1}.$$

The first eigenvalue $\lambda_0$ is always equal to $0$; $\lambda_1 > 0$ if $G$ is connected, and $\lambda_{n-1} \le 2$ with equality holding only if $G$ is bipartite. Let $\sigma = \max\{|1-\lambda_1|, |1-\lambda_{n-1}|\}$. Thus $\sigma < 1$ if $G$ is connected and non-bipartite. Note that $\sigma$ is closely related to the mixing rate of random walks on $G$.

Let $\alpha_0, \alpha_1, \ldots, \alpha_{n-1}$ be orthonormal eigenvectors of the Laplacian $\mathcal{L}$ and $U = (\alpha_0, \alpha_1, \ldots, \alpha_{n-1})$, where each $\alpha_i$ is viewed as a column vector. Also define $\Lambda = \mathrm{diag}(\lambda_0, \ldots, \lambda_{n-1})$, so that we can write $\mathcal{L} = U\Lambda U^T$. For $0 \le i \le n-1$, define $\varphi_i = \alpha_i^T D\alpha_i$. Then we have:

Lemma 1. The degree spectrum $(\varphi_0, \varphi_1, \ldots, \varphi_{n-1})$ satisfies the following properties.

1. $\varphi_0 = \tilde d$.
2. For $0 \le i \le n-1$, $\delta \le \varphi_i \le \Delta$.
3. $\sum_{i=0}^{n-1}\varphi_i = nd$.

Proof. Note that $\alpha_0 = \frac{1}{\sqrt{\mathrm{vol}(G)}}(\sqrt{d_1}, \ldots, \sqrt{d_n})^T$, since $\mathcal{L}\alpha_0 = 0$. We have

$$\varphi_0 = \alpha_0^T D\alpha_0 = \frac{\sum_{i=1}^n d_i^2}{\sum_{i=1}^n d_i} = \tilde d.$$
We also have

$$\varphi_i = \delta + \alpha_i^T(D - \delta I)\alpha_i \le \delta + \|D - \delta I\| \le \Delta,$$

and $\varphi_i \ge \delta$ since $D - \delta I$ is positive semidefinite; thus $\delta \le \varphi_i \le \Delta$. Finally,

$$\sum_i \varphi_i = \mathrm{Tr}(U^TDU) = \mathrm{Tr}(D) = nd. \qquad \square$$

Lemma 2. For any integer $j \ge 1$,

$$\mathrm{Tr}\big(A(D^{-1}A)^{j-1}\big) \le \tilde d + \sigma^j\,(nd - \tilde d).$$

Proof. We have

$$\mathrm{Tr}\big(A(D^{-1}A)^{j-1}\big) = \mathrm{Tr}\big(D(D^{-1}A)^{j}\big) = \mathrm{Tr}\big(D\,(D^{-1/2}(I-\mathcal{L})D^{1/2})^{j}\big) = \mathrm{Tr}\big(D^{1/2}(I-\mathcal{L})^{j}D^{1/2}\big)$$
$$= \mathrm{Tr}\big(U^TDU(I-\Lambda)^{j}\big) = \sum_{i=0}^{n-1}\varphi_i(1-\lambda_i)^{j} = \tilde d + \sum_{i>0}\varphi_i(1-\lambda_i)^{j} \le \tilde d + \sigma^{j}\sum_{i>0}\varphi_i = \tilde d + (nd-\tilde d)\,\sigma^{j}. \qquad \square$$

A simple random walk on $G$ is a sequence of vertices $v_0, v_1, \ldots, v_k, \ldots$ with

$$\mathbb{P}(v_{k+1} = j \mid v_k = i) = p_{ij} = \begin{cases} 1/d_i & \text{if } ij \in E(G); \\ 0 & \text{otherwise} \end{cases}$$
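The simple random walk just defined is easy to probe numerically. The following sketch (the small test graph, $K_4$ minus one edge, is our own example, not from the paper) builds the transition matrix $P = D^{-1}A$, checks that the degree-proportional vector $\pi$ is stationary, and watches $\beta P^t$ approach $\pi$:

```python
# Sketch (not from the paper): the walk's distribution beta P^t converges
# to pi(v) = d_v / vol(G) on a connected non-bipartite graph.
import numpy as np

A = np.array([[0, 1, 1, 1],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)   # K_4 minus edge {1,3}
d = A.sum(axis=1)
P = A / d[:, None]            # transition matrix P = D^{-1} A
pi = d / d.sum()              # stationary distribution, pi P = pi

beta = np.array([1.0, 0.0, 0.0, 0.0])   # point mass at vertex 0
for t in range(1, 9):
    beta = beta @ P
    # L1 gap to stationarity shrinks geometrically in t
    print(t, float(np.abs(beta - pi).sum()))
```

The printed gap decays like $\sigma^t$ for this graph, in line with the mixing discussion that follows.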
for all $k \ge 0$. The transition matrix $P$ is the $n\times n$ matrix with entries $p_{ij}$, $1 \le i, j \le n$; we can write $P = D^{-1}A$. A probability distribution over the set of vertices is a row vector $\beta \in \mathbb{R}^n$ satisfying:

1. the entries of $\beta$ are non-negative;
2. the $L_1$-norm $\|\beta\|_1 = \beta\mathbf{1}$ equals $1$, where $\mathbf{1}$ denotes the column vector with all entries $1$.

If $\beta$ is a probability distribution, so is $\beta P$. The stationary distribution (if it exists) is denoted by $\pi$, satisfying $\pi = \pi P$; here $\pi = \frac{1}{\mathrm{vol}(G)}(d_1, d_2, \ldots, d_n)$. The eigenvalues of $P$ are $1, 1-\lambda_1, \ldots, 1-\lambda_{n-1}$, since $P = D^{-1/2}(I-\mathcal{L})D^{1/2}$. In general, $P$ is not symmetric unless $G$ is regular. The following lemma concerns the mixing rate of the random walk.

Lemma 3. For any integer $t > 0$, any row vector $\alpha \in \mathbb{R}^n$, and any two probability distributions $\beta$ and $\gamma$, we have

$$\big|(\beta-\gamma)P^t D^{-1}\alpha^T\big| \le \sigma^t\,\big\|(\beta-\gamma)D^{-1/2}\big\|\,\big\|\alpha D^{-1/2}\big\|. \qquad (4)$$

In particular,

$$\big\|(\beta-\gamma)P^t D^{-1/2}\big\| \le \sigma^t\,\big\|(\beta-\gamma)D^{-1/2}\big\|. \qquad (5)$$

Proof. Let $\phi_0 = \frac{1}{\sqrt{\mathrm{vol}(G)}}(\sqrt{d_1}, \ldots, \sqrt{d_n})$ denote the (row) eigenvector of $I-\mathcal{L}$ for the eigenvalue $1$. The matrix $(I-\mathcal{L})^t - \phi_0^T\phi_0$, which is the projection of $(I-\mathcal{L})^t$ onto the hyperplane orthogonal to $\phi_0$, has $L_2$-norm $\sigma^t$. Note that $(\beta-\gamma)D^{-1/2}\phi_0^T = \frac{1}{\sqrt{\mathrm{vol}(G)}}(\beta-\gamma)\mathbf{1} = 0$. We have

$$(\beta-\gamma)P^tD^{-1}\alpha^T = (\beta-\gamma)D^{-1/2}\big[(I-\mathcal{L})^t - \phi_0^T\phi_0\big]D^{-1/2}\alpha^T \le \sigma^t\,\big\|(\beta-\gamma)D^{-1/2}\big\|\,\big\|\alpha D^{-1/2}\big\|.$$

Choosing $\alpha = (\beta-\gamma)P^t$ yields (5). $\square$

The mixing rate of the random walk on $G$ measures how fast $\beta P^t$ converges to the stationary distribution $\pi$ from an initial distribution $\beta$. The lemma above shows that $\beta P^t$ converges to $\pi$ rapidly when $\sigma$ is strictly less than $1$.
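Section 3 below turns this random walk into a generator of uniform spanning trees (the "groundskeeper" algorithm of Aldous and Broder). As a sketch of that construction (the function name and the small test graph are our own, not from the paper):

```python
# Sketch (not from the paper): the Aldous-Broder / groundskeeper
# construction.  Walk randomly on a connected graph; the first time a
# vertex is entered, keep the edge it was entered on.  The kept edges
# form a spanning tree, uniform over all spanning trees.
import random

def aldous_broder(adj, start=0, rng=random.Random(1)):
    """adj: symmetric adjacency lists of a connected graph."""
    n = len(adj)
    visited = {start}
    tree = []
    v = start
    while len(visited) < n:
        u = rng.choice(adj[v])        # uniform step to a neighbor
        if u not in visited:          # first entrance to u:
            visited.add(u)
            tree.append((v, u))       # record the entering edge
        v = u
    return tree

# 4-cycle 0-1-2-3 with chord 0-2 (adjacency lists, symmetric).
cycle_with_chord = [[1, 3, 2], [0, 2], [1, 3, 0], [2, 0]]
print(aldous_broder(cycle_with_chord))   # 3 tree edges covering all vertices
```

Each kept edge attaches a previously unvisited vertex, so the output has exactly $n-1$ edges and is acyclic, i.e. a spanning tree; the uniformity claim is Theorem 3 below.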
3 Random spanning trees generated by random walks

The following so-called groundskeeper algorithm gives a method of generating spanning trees: start a random walk at a vertex $v$. The first time a vertex is visited, record the edge along which it was visited and add that edge to the spanning tree. Once the graph is covered, the resulting set of edges forms a spanning tree. This gives a map $\Phi$ from random walks to spanning trees. Aldous [1] and Broder [2] independently showed that the groundskeeper algorithm generates a uniform spanning tree:

Theorem 3 (Groundskeeper Algorithm). The image of $\Phi$ is uniformly distributed over all spanning trees, independently of the choice of the initial vertex $v$.

We pick a random initial vertex according to the stationary distribution $\pi$; then at any step $t$, the distribution of the walk remains $\pi P^t = \pi$. For an integer $g \ge 3$, consider the following $g$-truncated random walk. We construct a random spanning tree by collecting the edges $v_{t-1}v_t$ such that $v_t$ is visited for the first time at time $t$. We allow a backtracking step $v_{t+1} = v_{t+1-k}$ for $k < g$; however, if $v_{t+1} = v_{t+1-k}$ for some $k \ge g$, the random walk stops.

Lemma 4. The probability that a $g$-truncated random walk stops before or at time $t$ is at most

$$\frac{(t-g+3)(t-g+2)}{2}\cdot\frac{\tilde d}{nd} + (t-g+2)\,\frac{\sigma^g}{1-\sigma}.$$

Proof. When the truncated random walk stops, there exists a closed walk $C = v_i, v_{i+1}, \ldots, v_{i+k} = v_i$ of length $k \ge g$ for some $0 \le i \le t-k+1$. For fixed $i$ and $k$, the probability $f(i,k)$ of such a closed walk satisfies

$$f(i,k) = \sum_{\text{closed walks } v_i,\ldots,v_{i+k}=v_i}\frac{d_{v_i}}{\mathrm{vol}(G)}\prod_{j=0}^{k-1}\frac{1}{d_{v_{i+j}}} = \frac{1}{\mathrm{vol}(G)}\,\mathrm{Tr}\big(A(D^{-1}A)^{k-1}\big) \le \frac{\tilde d + (nd-\tilde d)\sigma^k}{nd} \le \frac{\tilde d}{nd} + \sigma^k,$$

using Lemma 2. By summing over $i \ge 0$ and $k \ge g$ with $i+k \le t+1$, we have

$$\sum_{i=0}^{t-g+1}\sum_{k=g}^{t-i+1}f(i,k) \le \sum_{i=0}^{t-g+1}\sum_{k=g}^{t-i+1}\left(\frac{\tilde d}{nd}+\sigma^k\right)$$
$$\le \frac{(t-g+3)(t-g+2)}{2}\cdot\frac{\tilde d}{nd} + \sum_{i=0}^{t-g+1}\sum_{k=g}^{\infty}\sigma^k \le \frac{(t-g+3)(t-g+2)}{2}\cdot\frac{\tilde d}{nd} + (t-g+2)\,\frac{\sigma^g}{1-\sigma}. \qquad \square$$

4 Proving a diameter lower bound for random spanning trees

In this section we prove the lower bound for the diameter of a random spanning tree of $G$, as stated in inequality (1) of Theorem 2.

Proof of (1): Let $t = (1-\epsilon)\sqrt{\epsilon dn/\tilde d}$ and

$$g = \left\lceil \frac{\log\frac{\epsilon(1-\sigma)}{4t}}{\log\sigma} \right\rceil.$$

Note that $g$ is chosen so that $\frac{\sigma^g}{1-\sigma} \le \frac{\epsilon}{4t}$. Apply the $g$-truncated random walk. By Lemma 4, the $g$-truncated random walk survives up to time $t$ with probability at least

$$1 - \frac{(t-g+3)(t-g+2)}{2}\cdot\frac{\tilde d}{nd} - (t-g+2)\,\frac{\sigma^g}{1-\sigma} > 1 - \frac{t^2\tilde d}{2nd} - t\,\frac{\sigma^g}{1-\sigma} > 1 - \frac{\epsilon}{2} - \frac{\epsilon}{4} \ge 1 - \frac{3\epsilon}{4}.$$

For $i = 1, \ldots, t$, we say $v_{i-1}v_i$ is a forward step if $v_i \ne v_j$ for all $j < i$, and a $k$-backward step if $v_i = v_{i-k}$ for some $2 \le k \le g-1$. Let $X_i = 1-k$ if $v_{i-1}v_i$ is a $k$-backward step, and $X_i = 1$ otherwise. For all $i$, we have $-(g-2) \le X_i \le 1$. Let $Y$ be the distance between $v_0$ and $v_t$ in the random spanning tree, and let $X = \sum_{i=1}^t X_i$. Conditioned on the event that the truncated random walk survives up to time $t$, we have $Y \ge X$; equivalently, $\mathbb{P}(Y < X) < \frac{3\epsilon}{4}$.

Let $\mathcal{F}_i$ be the $\sigma$-algebra generated by $v_0, \ldots, v_i$. For $i = 0, \ldots, t$, the sequence $E(X \mid \mathcal{F}_i)$ forms a martingale. We would like to establish a Lipschitz condition for this martingale; it is enough to bound $|E(X_j \mid \mathcal{F}_i) - E(X_j \mid \mathcal{F}_{i-1})|$ for $1 \le i, j \le t$. For $j < i$, $X_j$ is completely determined by the information in $v_0, v_1, \ldots, v_i$; in this case we have

$$E(X_j \mid \mathcal{F}_i) = E(X_j \mid \mathcal{F}_{i-1}).$$
For $j \ge i$, $E(X_j \mid \mathcal{F}_i)$ and $E(X_j \mid \mathcal{F}_{i-1})$ differ because $v_i$ is exposed. For $i \le j \le i+g-3$, we apply the trivial bound

$$|E(X_j \mid \mathcal{F}_i) - E(X_j \mid \mathcal{F}_{i-1})| \le g-2.$$

For $j \ge i+g-2$, $X_j$ depends only on $v_{j-g+2}, v_{j-g+3}, \ldots, v_j$. Note that the random walk at step $i$ depends only on the current position $v_i$ and is independent of the past positions $v_0, \ldots, v_{i-1}$. We use the mixing of the random walk to show that the information gained from knowing $v_i$ is quickly lost. Let $p$ be the distribution of $v_i$ given $v_{i-1}$, and $q$ the distribution of $v_i$ given $v_i$ ($q$ is a singleton distribution). Let $p'$ be the distribution of $v_{j-g+2}$ given $v_{i-1}$, and $q'$ the distribution of $v_{j-g+2}$ given $v_i$. By Lemma 3, we have

$$\big\|(p'-q')D^{-1/2}\big\| \le \sigma^{j-g+2-i}\,\big\|(p-q)D^{-1/2}\big\| \le \frac{2}{\sqrt{\delta}}\,\sigma^{j-g+2-i}.$$

Therefore,

$$|E(X_j \mid \mathcal{F}_i) - E(X_j \mid \mathcal{F}_{i-1})| = \Big|\sum_{u=1}^n (p'_u - q'_u)\,E(X_j \mid v_{j-g+2} = u)\Big| \le (g-2)\,\|p'-q'\|_1 \le \frac{2(g-2)}{\delta}\,\sigma^{j-g+2-i}.$$

Summing,

$$|E(X \mid \mathcal{F}_i) - E(X \mid \mathcal{F}_{i-1})| \le \sum_{j=1}^{t}|E(X_j \mid \mathcal{F}_i) - E(X_j \mid \mathcal{F}_{i-1})| \le (g-2)^2 + \sum_{j \ge i+g-2}\frac{2(g-2)}{\delta}\,\sigma^{j-g+2-i} \le (g-2)^2 + \frac{2(g-2)}{\delta(1-\sigma)} \le 3g^2,$$

noting that $g$ has been chosen so that $\frac{\sigma^g}{1-\sigma} \le \frac{\epsilon}{4t}$ is sufficiently small to make the last inequality hold. Thus we have established $|E(X \mid \mathcal{F}_i) - E(X \mid \mathcal{F}_{i-1})| \le 3g^2$. Applying Azuma's inequality [5], we have

$$\mathbb{P}\big(X - E(X) < -a\big) < e^{-\frac{a^2}{18g^4t}}.$$
Note that

$$E(X) = \sum_{i=1}^t E(X_i) \ge \sum_{i=1}^t\Big(1 - \sum_{k=2}^{g-1} k\,\mathbb{P}(v_i = v_{i-k})\Big) \ge \Big(1 - \frac{g(g-2)\,\tilde d}{nd}\Big)t.$$

By choosing $a = \sqrt{18g^4t\log\frac{4}{\epsilon}}$, we have

$$\mathbb{P}\left(X < \Big(1 - \frac{g(g-2)\tilde d}{nd}\Big)t - \sqrt{18g^4t\log\tfrac{4}{\epsilon}}\right) < \frac{\epsilon}{4}.$$

Putting it all together, we have

$$\mathbb{P}\left(Y < \Big(1 - \frac{g(g-2)\tilde d}{nd}\Big)t - \sqrt{18g^4t\log\tfrac{4}{\epsilon}}\right) \le \mathbb{P}(Y < X) + \mathbb{P}\left(X < \Big(1 - \frac{g(g-2)\tilde d}{nd}\Big)t - \sqrt{18g^4t\log\tfrac{4}{\epsilon}}\right) < \frac{3\epsilon}{4} + \frac{\epsilon}{4} = \epsilon.$$

To complete the proof, it suffices to check that our degree conditions imply

$$\Big(1 - \frac{g(g-2)\tilde d}{nd}\Big)t - \sqrt{18g^4t\log\tfrac{4}{\epsilon}} \ge (1-\epsilon-o(1))\sqrt{\frac{\epsilon nd}{\tilde d}}.$$

In particular, it suffices to check that $g/\sqrt{d} = o(1)$, as $g^4$ is clearly $o(t)$. Since

$$g \le \frac{\log\frac{4t}{\epsilon(1-\sigma)}}{\log(1/\sigma)} + 1$$
and

$$\log\frac{4t}{\epsilon(1-\sigma)} = \log\frac{4(1-\epsilon)\sqrt{\epsilon}}{\epsilon(1-\sigma)} + \frac{1}{2}\log\frac{nd}{\tilde d} \le \log n$$

for $n$ sufficiently large, we have $g/\sqrt{d} = o(1)$, since

$$d \ge \frac{\log^2 n}{\log^2(1/\sigma)}$$

as hypothesized. $\square$

5 Proof of the upper bound

For the upper bound, we follow the general strategy of Aldous in [1]. In particular, we provide a (relatively straightforward) generalization of Theorem 5 of Aldous' paper to give an upper bound in the general degree case.

Here, we let $X_t$ denote the position of the random walk at time $t$. We denote by $T_B$ the hitting time of a set $B$, that is, $T_B = \min\{t \ge 0 : X_t \in B\}$, and the return time of a set $B$ by $T_B^+ = \min\{t \ge 1 : X_t \in B\}$. (Note that if the random walker does not start in $B$, then $T_B = T_B^+$.) When considering the probability that our random walk has some property, we use the notation $\mathbb{P}_\rho$ to denote that we condition on the walker having initial distribution $\rho$; likewise, $E_\rho$ denotes expectation conditioned on the initial distribution. If no distribution is given, it is assumed to be the stationary distribution. As a convenient abuse of notation, for a vertex $v$, $\mathbb{P}_v$ denotes starting with the distribution that places weight $1$ on $v$.

The first tool is the following, rather standard, mixing lemma.

Lemma 5. For all initial distributions $\rho$ and all $B \subseteq V(G)$,

$$\mathbb{P}_\rho\left(T_B > \frac{3\log n}{\log(1/\sigma)}\cdot\frac{\mathrm{vol}(G)}{\mathrm{vol}(B)}\right) \le \frac{1}{2}.$$

Proof. We begin by bounding $|P^s(\rho, B) - \pi(B)|$, where $\rho$ is an (arbitrary) initial distribution and $\pi(B) = \mathrm{vol}(B)/\mathrm{vol}(G)$. Write

$$\rho D^{-1/2} = \sum_i \alpha_i \phi_i$$
where the $\phi_i$ are left eigenvectors of $I-\mathcal{L}$ corresponding to the eigenvalues $1-\lambda_i$. Then

$$\alpha_0 = \Big\langle \rho D^{-1/2},\ \phi_0 \Big\rangle = \frac{1}{\sqrt{\mathrm{vol}(G)}},$$

so

$$\rho D^{-1/2} = \frac{\phi_0}{\sqrt{\mathrm{vol}(G)}} + \sum_{i\ge1}\alpha_i\phi_i.$$

Then

$$|P^s(\rho, B) - \pi(B)| = \big|\rho P^s\chi_B - \pi(B)\big| = \big|\rho D^{-1/2}(I-\mathcal{L})^sD^{1/2}\chi_B - \pi(B)\big| = \Big|\sum_{i\ge1}(1-\lambda_i)^s\alpha_i\phi_iD^{1/2}\chi_B\Big| \le \sigma^s\sqrt{n\,\mathrm{vol}(B)},$$

where the last step follows from an application of the Cauchy–Schwarz inequality. Let

$$s = \left\lceil \log\left(\frac{\mathrm{vol}(B)}{2\,\mathrm{vol}(G)\sqrt{n\,\mathrm{vol}(B)}}\right)\Big/\log\sigma \right\rceil,$$

so that

$$\sigma^s\sqrt{n\,\mathrm{vol}(B)} \le \frac{\mathrm{vol}(B)}{2\,\mathrm{vol}(G)}.$$

Fix $t_i = is$. Then

$$\mathbb{P}(T_B > x) \le \mathbb{P}\big(X_{t_1}\notin B,\ X_{t_2}\notin B,\ \ldots,\ X_{t_{\lfloor x/s\rfloor}}\notin B\big) = \mathbb{P}(X_{t_1}\notin B)\prod_{i\ge2}\mathbb{P}\big(X_{t_i}\notin B \mid X_{t_j}\notin B,\ j < i\big) \le \left(1 - \frac{\mathrm{vol}(B)}{\mathrm{vol}(G)} + \frac{\mathrm{vol}(B)}{2\,\mathrm{vol}(G)}\right)^{\lfloor x/s\rfloor} = \left(1 - \frac{\mathrm{vol}(B)}{2\,\mathrm{vol}(G)}\right)^{\lfloor x/s\rfloor}.$$

Fix $x = \log(2)\,s\,\frac{2\,\mathrm{vol}(G)}{\mathrm{vol}(B)}$; it is then easy to check that $\mathbb{P}(T_B > x) \le \frac{1}{2}$.
In all, we have

$$x = \log(2)\,\frac{2\,\mathrm{vol}(G)}{\mathrm{vol}(B)}\left\lceil\frac{\log\big(2\,\mathrm{vol}(G)\sqrt{n\,\mathrm{vol}(B)}\,/\,\mathrm{vol}(B)\big)}{\log(1/\sigma)}\right\rceil \le \frac{3\log n}{\log(1/\sigma)}\cdot\frac{\mathrm{vol}(G)}{\mathrm{vol}(B)}. \qquad \square$$

The following result (and its proof) are due to Aldous [1]. Let $B = \{v_0, \ldots, v_c\}$ denote a set of vertices, and let $\mathcal{P}_B$ denote the event that the path from $v_0$ to the root (the starting location of the random walk generating the uniform spanning tree, chosen from the stationary distribution) in the uniform spanning tree starts $v_0, v_1, \ldots, v_c$. Then:

Lemma 6.

$$\mathbb{P}(T_{v_c} = l \mid \mathcal{P}_B) = \frac{\mathbb{P}_{v_c}(T_B^+ > l)}{E_{v_c}(T_B^+)}. \qquad (6)$$

Proof. For $i < c$, denote by $D_i$ the event

$$D_i = \big\{T_{\{v_0,\ldots,v_i\}} = T_{v_i},\ X_{T_{v_i}+1} = v_{i+1}\big\}.$$

In words, $D_i$ is the event that $v_i$ is hit before $v_j$ for all $j < i$, and that the walk moves to $v_{i+1}$ immediately after first hitting $v_i$; thus $\bigcap_{i<c} D_i = \mathcal{P}_B$. Then

$$\{T_{v_c} = l\}\cap\mathcal{P}_B \subseteq \{T_{v_c} = T_B = l\}.$$

Note that, from the Markov property, it is clear that $\mathbb{P}\big(\bigcap_{i<c}D_i \mid T_{v_c} = T_B = l\big)$ does not depend on $l$ (this is the motivation for writing $\mathcal{P}_B$ in an obtuse way); thus

$$\mathbb{P}(T_{v_c} = l \mid \mathcal{P}_B) = \alpha\,\mathbb{P}(T_{v_c} = T_B = l)$$

for $l = 0, 1, \ldots$ and some $\alpha$ which (critically) does not depend on $l$. We have that

$$\mathbb{P}_{v_c}(T_B^+ > l) = \sum_w \mathbb{P}_{v_c}\big(X_0 = v_c,\ X_l = w,\ X_i\notin B \text{ for } 1\le i\le l\big)$$
$$= \frac{1}{\pi(v_c)}\sum_w \mathbb{P}_\pi\big(X_0 = v_c,\ X_l = w,\ X_i\notin B \text{ for } 1\le i\le l\big)$$
$$= \frac{1}{\pi(v_c)}\sum_w \mathbb{P}_\pi\big(X_0 = w,\ X_l = v_c,\ X_i\notin B \text{ for } 0\le i\le l-1\big)$$
$$= \frac{1}{\pi(v_c)}\,\mathbb{P}_\pi\big(X_l = v_c,\ X_i\notin B \text{ for } 0\le i\le l-1\big) = \frac{1}{\pi(v_c)}\,\mathbb{P}_\pi(T_{v_c} = T_B = l),$$

with the third-to-last equality following from time reversal for the stationary Markov chain. This implies

$$\mathbb{P}_\pi(T_{v_c} = l \mid \mathcal{P}_B) = \alpha\,\pi(v_c)\,\mathbb{P}_{v_c}(T_B^+ > l).$$
Note finally that

$$1 = \sum_{l\ge0}\mathbb{P}_\pi(T_{v_c} = l \mid \mathcal{P}_B) = \alpha\,\pi(v_c)\sum_{l\ge0}\mathbb{P}_{v_c}(T_B^+ > l) = \alpha\,\pi(v_c)\,E_{v_c}(T_B^+),$$

so $\alpha\,\pi(v_c) = 1/E_{v_c}(T_B^+)$, implying the result. $\square$

One can observe that, while the normalizing constant is easy to compute, its exact value is unnecessary for the proof of the upper bound itself.

We now prove the upper bound, establishing (2) in Theorem 2; the proof mimics that of Aldous.

Proof of (2): Start the random walk from the stationary distribution (unless explicitly noted, all probabilities related to the random walk which generates the spanning tree are taken to start with $\pi$). We begin by fixing a path $v_0, v_1, \ldots, v_c$ in our graph, and let $B$ be the set $\{v_0, \ldots, v_c\}$. As above, $\mathcal{P}_B$ denotes the event that the path from $v_0$ to the root (that is, the starting location of our random walk, $X_0$) in our uniform spanning tree starts out along the path $v_0, \ldots, v_c$. If we let

$$s = \frac{3\log n}{\log(1/\sigma)}\cdot\frac{\mathrm{vol}(G)}{\mathrm{vol}(B)} \le \frac{3\,\mathrm{vol}(G)\log n}{(c+1)\,\delta\,\log(1/\sigma)},$$

then, by iterating Lemma 5, we have

$$\mathbb{P}_{v_c}(T_B^+ > js) \le (1/2)^{j-1}\,\mathbb{P}_{v_c}(T_B^+ > s).$$

We are now in a position to apply Lemma 6 to both sides; the normalizing constant cancels, and we are left with

$$\mathbb{P}(T_{v_c} = js \mid \mathcal{P}_B) \le (1/2)^{j-1}\,\mathbb{P}(T_{v_c} = s \mid \mathcal{P}_B) \le (1/2)^{j-1}\,\frac{1}{s},$$

where the last inequality follows from the fact that the right-hand side of (6) in Lemma 6 is decreasing in $l$, and hence the left-hand side must be as well. This monotonicity property, and summing, gives

$$\mathbb{P}\big(js \le T_{v_c} \le (j+1)s \mid \mathcal{P}_B\big) \le (1/2)^{j-1}.$$

Further summing gives

$$\mathbb{P}(T_{v_c} \ge js \mid \mathcal{P}_B) \le (1/2)^{j-2}.$$

If $\mathcal{P}_B$ occurs, then naturally the distance from $v_0$ to the root $X_0$ satisfies $d(X_0, v_0) \le d(X_0, v_c) + c \le T_{v_c} + c$. We also clearly have that if $d(X_0, v_0) > c$, then $\mathcal{P}_B$ occurs for some unique path $v_0, \ldots, v_c$. Thus:

$$\mathbb{P}\big(d(X_0, v_0) > c + js\big) \le (1/2)^{j-2}.$$
Clearly $\mathrm{diam}(T) \le 2\max_v d(X_0, v)$, so

$$\mathbb{P}\big(\mathrm{diam}(T)/2 > c + js\big) \le n\,(1/2)^{j-2}.$$

This gives us

$$E(\mathrm{diam}(T)) \le c + 3s\log n \le c + \frac{9\,\mathrm{vol}(G)\log^2 n}{c\,\delta\,\log(1/\sigma)},$$

with the second inequality coming from the definition of $s$. These terms are of the same order of magnitude when setting

$$c = \sqrt{\frac{\mathrm{vol}(G)}{\delta\,\log(1/\sigma)}}\,\log n,$$

giving the desired bound. To establish the bound in the form stated in (2), simply apply Markov's inequality. Note that by minimizing $c + \frac{9\,\mathrm{vol}(G)\log^2 n}{c\,\delta\,\log(1/\sigma)}$ over $c$, we actually get

$$E(\mathrm{diam}(T)) \le 6\sqrt{\frac{\mathrm{vol}(G)}{\delta\,\log(1/\sigma)}}\,\log n. \qquad \square$$

References

[1] D. Aldous, The random walk construction of uniform spanning trees and uniform labelled trees, SIAM J. Discrete Math. 3 (1990), 450–465.

[2] A. Broder, Generating random spanning trees, Proc. 30th Annual Symposium on Foundations of Computer Science, IEEE, New York, 1989, 442–447.

[3] F. Chung, Spectral Graph Theory, AMS Publications, 1997.

[4] F. Chung and L. Lu, Connected components in random graphs with given expected degree sequences, Annals of Combinatorics 6 (2002), 125–145.

[5] F. Chung and L. Lu, Complex Graphs and Networks, CBMS Regional Conference Series in Mathematics, No. 107, AMS Publications, 2006.

[6] F. Chung, L. Lu and V. Vu, The spectra of random graphs with given expected degrees, Proceedings of the National Academy of Sciences 100 (2003), no. 11, 6313–6318.

[7] G. Kirchhoff, Über die Auflösung der Gleichungen, auf welche man bei der Untersuchung der linearen Verteilung galvanischer Ströme geführt wird, Ann. Phys. Chem. 72 (1847), 497–508.
[8] D. Liben-Nowell and J. Kleinberg, Tracing information flow on a global scale using Internet chain-letter data, Proceedings of the National Academy of Sciences 105 (2008), no. 12, 4633–4638.

[9] R. Lyons, Asymptotic enumeration of spanning trees, Combin. Probab. Comput. 14 (2005), no. 4, 491–522.

[10] R. Pemantle, Uniform random spanning trees, in: Topics in Contemporary Probability and its Applications (J. L. Snell, ed.), CRC Press, Boca Raton, 1994.

[11] A. Rényi and G. Szekeres, On the height of trees, J. Austral. Math. Soc. 7 (1967), 497–507.
Notes 6 : First and second moment methods Math 733-734: Theory of Probability Lecturer: Sebastien Roch References: [Roc, Sections 2.1-2.3]. Recall: THM 6.1 (Markov s inequality) Let X be a non-negative
More informationSpectral Graph Theory
Spectral raph Theory to appear in Handbook of Linear Algebra, second edition, CCR Press Steve Butler Fan Chung There are many different ways to associate a matrix with a graph (an introduction of which
More informationA lower bound for the Laplacian eigenvalues of a graph proof of a conjecture by Guo
A lower bound for the Laplacian eigenvalues of a graph proof of a conjecture by Guo A. E. Brouwer & W. H. Haemers 2008-02-28 Abstract We show that if µ j is the j-th largest Laplacian eigenvalue, and d
More informationGraph coloring, perfect graphs
Lecture 5 (05.04.2013) Graph coloring, perfect graphs Scribe: Tomasz Kociumaka Lecturer: Marcin Pilipczuk 1 Introduction to graph coloring Definition 1. Let G be a simple undirected graph and k a positive
More informationUniversity of Chicago Autumn 2003 CS Markov Chain Monte Carlo Methods
University of Chicago Autumn 2003 CS37101-1 Markov Chain Monte Carlo Methods Lecture 4: October 21, 2003 Bounding the mixing time via coupling Eric Vigoda 4.1 Introduction In this lecture we ll use the
More informationWAITING FOR A BAT TO FLY BY (IN POLYNOMIAL TIME)
WAITING FOR A BAT TO FLY BY (IN POLYNOMIAL TIME ITAI BENJAMINI, GADY KOZMA, LÁSZLÓ LOVÁSZ, DAN ROMIK, AND GÁBOR TARDOS Abstract. We observe returns of a simple random wal on a finite graph to a fixed node,
More informationLecture: Local Spectral Methods (2 of 4) 19 Computing spectral ranking with the push procedure
Stat260/CS294: Spectral Graph Methods Lecture 19-04/02/2015 Lecture: Local Spectral Methods (2 of 4) Lecturer: Michael Mahoney Scribe: Michael Mahoney Warning: these notes are still very rough. They provide
More informationConductance and Rapidly Mixing Markov Chains
Conductance and Rapidly Mixing Markov Chains Jamie King james.king@uwaterloo.ca March 26, 2003 Abstract Conductance is a measure of a Markov chain that quantifies its tendency to circulate around its states.
More informationMATH 205C: STATIONARY PHASE LEMMA
MATH 205C: STATIONARY PHASE LEMMA For ω, consider an integral of the form I(ω) = e iωf(x) u(x) dx, where u Cc (R n ) complex valued, with support in a compact set K, and f C (R n ) real valued. Thus, I(ω)
More informationGraph fundamentals. Matrices associated with a graph
Graph fundamentals Matrices associated with a graph Drawing a picture of a graph is one way to represent it. Another type of representation is via a matrix. Let G be a graph with V (G) ={v 1,v,...,v n
More informationProbability & Computing
Probability & Computing Stochastic Process time t {X t t 2 T } state space Ω X t 2 state x 2 discrete time: T is countable T = {0,, 2,...} discrete space: Ω is finite or countably infinite X 0,X,X 2,...
More informationSpectral Analysis of Random Walks
Graphs and Networks Lecture 9 Spectral Analysis of Random Walks Daniel A. Spielman September 26, 2013 9.1 Disclaimer These notes are not necessarily an accurate representation of what happened in class.
More informationMath 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces.
Math 350 Fall 2011 Notes about inner product spaces In this notes we state and prove some important properties of inner product spaces. First, recall the dot product on R n : if x, y R n, say x = (x 1,...,
More information18.312: Algebraic Combinatorics Lionel Levine. Lecture 19
832: Algebraic Combinatorics Lionel Levine Lecture date: April 2, 20 Lecture 9 Notes by: David Witmer Matrix-Tree Theorem Undirected Graphs Let G = (V, E) be a connected, undirected graph with n vertices,
More informationORIE 6334 Spectral Graph Theory September 22, Lecture 11
ORIE 6334 Spectral Graph Theory September, 06 Lecturer: David P. Williamson Lecture Scribe: Pu Yang In today s lecture we will focus on discrete time random walks on undirected graphs. Specifically, we
More informationIntroduction to Search Engine Technology Introduction to Link Structure Analysis. Ronny Lempel Yahoo Labs, Haifa
Introduction to Search Engine Technology Introduction to Link Structure Analysis Ronny Lempel Yahoo Labs, Haifa Outline Anchor-text indexing Mathematical Background Motivation for link structure analysis
More informationEIGENVECTOR NORMS MATTER IN SPECTRAL GRAPH THEORY
EIGENVECTOR NORMS MATTER IN SPECTRAL GRAPH THEORY Franklin H. J. Kenter Rice University ( United States Naval Academy 206) orange@rice.edu Connections in Discrete Mathematics, A Celebration of Ron Graham
More informationMarkov Chains, Random Walks on Graphs, and the Laplacian
Markov Chains, Random Walks on Graphs, and the Laplacian CMPSCI 791BB: Advanced ML Sridhar Mahadevan Random Walks! There is significant interest in the problem of random walks! Markov chain analysis! Computer
More informationMarkov Chains CK eqns Classes Hitting times Rec./trans. Strong Markov Stat. distr. Reversibility * Markov Chains
Markov Chains A random process X is a family {X t : t T } of random variables indexed by some set T. When T = {0, 1, 2,... } one speaks about a discrete-time process, for T = R or T = [0, ) one has a continuous-time
More informationCUTOFF FOR THE STAR TRANSPOSITION RANDOM WALK
CUTOFF FOR THE STAR TRANSPOSITION RANDOM WALK JONATHAN NOVAK AND ARIEL SCHVARTZMAN Abstract. In this note, we give a complete proof of the cutoff phenomenon for the star transposition random walk on the
More informationOnline Social Networks and Media. Link Analysis and Web Search
Online Social Networks and Media Link Analysis and Web Search How to Organize the Web First try: Human curated Web directories Yahoo, DMOZ, LookSmart How to organize the web Second try: Web Search Information
More informationSpectral Continuity Properties of Graph Laplacians
Spectral Continuity Properties of Graph Laplacians David Jekel May 24, 2017 Overview Spectral invariants of the graph Laplacian depend continuously on the graph. We consider triples (G, x, T ), where G
More informationSpectral radius, symmetric and positive matrices
Spectral radius, symmetric and positive matrices Zdeněk Dvořák April 28, 2016 1 Spectral radius Definition 1. The spectral radius of a square matrix A is ρ(a) = max{ λ : λ is an eigenvalue of A}. For an
More informationU.C. Berkeley Better-than-Worst-Case Analysis Handout 3 Luca Trevisan May 24, 2018
U.C. Berkeley Better-than-Worst-Case Analysis Handout 3 Luca Trevisan May 24, 2018 Lecture 3 In which we show how to find a planted clique in a random graph. 1 Finding a Planted Clique We will analyze
More informationhttp://www.math.uah.edu/stat/markov/.xhtml 1 of 9 7/16/2009 7:20 AM Virtual Laboratories > 16. Markov Chains > 1 2 3 4 5 6 7 8 9 10 11 12 1. A Markov process is a random process in which the future is
More informationLecture 13: Spectral Graph Theory
CSE 521: Design and Analysis of Algorithms I Winter 2017 Lecture 13: Spectral Graph Theory Lecturer: Shayan Oveis Gharan 11/14/18 Disclaimer: These notes have not been subjected to the usual scrutiny reserved
More informationConvergence of Random Walks
Graphs and Networks Lecture 9 Convergence of Random Walks Daniel A. Spielman September 30, 010 9.1 Overview We begin by reviewing the basics of spectral theory. We then apply this theory to show that lazy
More informationLink Analysis Ranking
Link Analysis Ranking How do search engines decide how to rank your query results? Guess why Google ranks the query results the way it does How would you do it? Naïve ranking of query results Given query
More informationA generalized Alon-Boppana bound and weak Ramanujan graphs
A generalized Alon-Boppana bond and weak Ramanjan graphs Fan Chng Abstract A basic eigenvale bond de to Alon and Boppana holds only for reglar graphs. In this paper we give a generalized Alon-Boppana bond
More informationMarkov Decision Processes
Markov Decision Processes Lecture notes for the course Games on Graphs B. Srivathsan Chennai Mathematical Institute, India 1 Markov Chains We will define Markov chains in a manner that will be useful to
More informationLecture 8 : Eigenvalues and Eigenvectors
CPS290: Algorithmic Foundations of Data Science February 24, 2017 Lecture 8 : Eigenvalues and Eigenvectors Lecturer: Kamesh Munagala Scribe: Kamesh Munagala Hermitian Matrices It is simpler to begin with
More informationCHOOSING A SPANNING TREE FOR THE INTEGER LATTICE UNIFORMLY
CHOOSING A SPANNING TREE FOR THE INTEGER LATTICE UNIFORMLY Running Head: RANDOM SPANNING TREES Robin Pemantle 1 Dept. of Mathematics, White Hall 2 Cornell University Ithaca, NY 14853 June 26, 2003 ABSTRACT:
More informationSVD, Power method, and Planted Graph problems (+ eigenvalues of random matrices)
Chapter 14 SVD, Power method, and Planted Graph problems (+ eigenvalues of random matrices) Today we continue the topic of low-dimensional approximation to datasets and matrices. Last time we saw the singular
More informationStatistics 992 Continuous-time Markov Chains Spring 2004
Summary Continuous-time finite-state-space Markov chains are stochastic processes that are widely used to model the process of nucleotide substitution. This chapter aims to present much of the mathematics
More informationA simple branching process approach to the phase transition in G n,p
A simple branching process approach to the phase transition in G n,p Béla Bollobás Department of Pure Mathematics and Mathematical Statistics Wilberforce Road, Cambridge CB3 0WB, UK b.bollobas@dpmms.cam.ac.uk
More informationLecture notes for Analysis of Algorithms : Markov decision processes
Lecture notes for Analysis of Algorithms : Markov decision processes Lecturer: Thomas Dueholm Hansen June 6, 013 Abstract We give an introduction to infinite-horizon Markov decision processes (MDPs) with
More informationSpectral Clustering. Spectral Clustering? Two Moons Data. Spectral Clustering Algorithm: Bipartioning. Spectral methods
Spectral Clustering Seungjin Choi Department of Computer Science POSTECH, Korea seungjin@postech.ac.kr 1 Spectral methods Spectral Clustering? Methods using eigenvectors of some matrices Involve eigen-decomposition
More informationSpectral Graph Theory and You: Matrix Tree Theorem and Centrality Metrics
Spectral Graph Theory and You: and Centrality Metrics Jonathan Gootenberg March 11, 2013 1 / 19 Outline of Topics 1 Motivation Basics of Spectral Graph Theory Understanding the characteristic polynomial
More informationOn sum of powers of the Laplacian and signless Laplacian eigenvalues of graphs
On sum of powers of the Laplacian and signless Laplacian eigenvalues of graphs Saieed Akbari 1,2 Ebrahim Ghorbani 1,2 Jacobus H. Koolen 3,4 Mohammad Reza Oboudi 1,2 1 Department of Mathematical Sciences
More informationA = A U. U [n] P(A U ). n 1. 2 k(n k). k. k=1
Lecture I jacques@ucsd.edu Notation: Throughout, P denotes probability and E denotes expectation. Denote (X) (r) = X(X 1)... (X r + 1) and let G n,p denote the Erdős-Rényi model of random graphs. 10 Random
More informationA Conceptual Proof of the Kesten-Stigum Theorem for Multi-type Branching Processes
Classical and Modern Branching Processes, Springer, New Yor, 997, pp. 8 85. Version of 7 Sep. 2009 A Conceptual Proof of the Kesten-Stigum Theorem for Multi-type Branching Processes by Thomas G. Kurtz,
More informationLocal Kesten McKay law for random regular graphs
Local Kesten McKay law for random regular graphs Roland Bauerschmidt (with Jiaoyang Huang and Horng-Tzer Yau) University of Cambridge Weizmann Institute, January 2017 Random regular graph G N,d is the
More informationLecture 1 and 2: Random Spanning Trees
Recent Advances in Approximation Algorithms Spring 2015 Lecture 1 and 2: Random Spanning Trees Lecturer: Shayan Oveis Gharan March 31st Disclaimer: These notes have not been subjected to the usual scrutiny
More informationLecture 7: Spectral Graph Theory II
A Theorist s Toolkit (CMU 18-859T, Fall 2013) Lecture 7: Spectral Graph Theory II September 30, 2013 Lecturer: Ryan O Donnell Scribe: Christian Tjandraatmadja 1 Review We spend a page to review the previous
More informationModern Discrete Probability Spectral Techniques
Modern Discrete Probability VI - Spectral Techniques Background Sébastien Roch UW Madison Mathematics December 22, 2014 1 Review 2 3 4 Mixing time I Theorem (Convergence to stationarity) Consider a finite
More informationDATA MINING LECTURE 13. Link Analysis Ranking PageRank -- Random walks HITS
DATA MINING LECTURE 3 Link Analysis Ranking PageRank -- Random walks HITS How to organize the web First try: Manually curated Web Directories How to organize the web Second try: Web Search Information
More informationDecomposition of random graphs into complete bipartite graphs
Decomposition of random graphs into complete bipartite graphs Fan Chung Xing Peng Abstract We consider the problem of partitioning the edge set of a graph G into the minimum number τg) of edge-disjoint
More informationEECS 495: Randomized Algorithms Lecture 14 Random Walks. j p ij = 1. Pr[X t+1 = j X 0 = i 0,..., X t 1 = i t 1, X t = i] = Pr[X t+
EECS 495: Randomized Algorithms Lecture 14 Random Walks Reading: Motwani-Raghavan Chapter 6 Powerful tool for sampling complicated distributions since use only local moves Given: to explore state space.
More information2.1 Laplacian Variants
-3 MS&E 337: Spectral Graph heory and Algorithmic Applications Spring 2015 Lecturer: Prof. Amin Saberi Lecture 2-3: 4/7/2015 Scribe: Simon Anastasiadis and Nolan Skochdopole Disclaimer: hese notes have
More informationNear-domination in graphs
Near-domination in graphs Bruce Reed Researcher, Projet COATI, INRIA and Laboratoire I3S, CNRS France, and Visiting Researcher, IMPA, Brazil Alex Scott Mathematical Institute, University of Oxford, Oxford
More informationData Analysis and Manifold Learning Lecture 7: Spectral Clustering
Data Analysis and Manifold Learning Lecture 7: Spectral Clustering Radu Horaud INRIA Grenoble Rhone-Alpes, France Radu.Horaud@inrialpes.fr http://perception.inrialpes.fr/ Outline of Lecture 7 What is spectral
More informationThe Kemeny constant of a Markov chain
The Kemeny constant of a Markov chain Peter Doyle Version 1.0 dated 14 September 009 GNU FDL Abstract Given an ergodic finite-state Markov chain, let M iw denote the mean time from i to equilibrium, meaning
More informationImproved Upper Bounds for the Laplacian Spectral Radius of a Graph
Improved Upper Bounds for the Laplacian Spectral Radius of a Graph Tianfei Wang 1 1 School of Mathematics and Information Science Leshan Normal University, Leshan 614004, P.R. China 1 wangtf818@sina.com
More information215 Problem 1. (a) Define the total variation distance µ ν tv for probability distributions µ, ν on a finite set S. Show that
15 Problem 1. (a) Define the total variation distance µ ν tv for probability distributions µ, ν on a finite set S. Show that µ ν tv = (1/) x S µ(x) ν(x) = x S(µ(x) ν(x)) + where a + = max(a, 0). Show that
More information