Fast Linear Solvers for Laplacian Systems
1 Research Exam, University of California, San Diego. November 8, 2013
2 Outline: 1. Laplacian matrices; 2. Linear systems in the graph Laplacian; 3. Solvers: an overview; 4. Solving linear systems with heat kernel pagerank (using heat kernel pagerank; computing heat kernel pagerank; a reformulation of the problem; solving the system); 5. Future directions
3 Definitions. Let G = (V, E) be an undirected, edge-weighted graph on n vertices and m edges. An unweighted graph is the special case in which all edge weights are equal to 1. The degree of a vertex v ∈ V is the sum of the weights of the edges adjacent to it, d_v = Σ_{u∼v} w(u, v). The degree matrix D is the diagonal matrix whose entries are the degrees of the vertices, (D)_vv = d_v.
4 Definitions. The adjacency matrix is the n × n matrix whose entry for each edge is its weight: (A)_uv = w(u, v) if {u, v} ∈ E, and 0 otherwise. The Laplacian is the matrix L = D − A.
5 Example: a small graph together with its Laplacian L = D − A (the explicit matrices did not survive transcription).
6 Properties of the Laplacian: symmetric; diagonally dominant; all row sums are zero; non-positive off-diagonal entries; clean quadratic form: x^T L x = Σ_{u∼v} w(u, v) (x(u) − x(v))^2.
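These properties can be checked numerically; a minimal numpy sketch (the graph here is an illustrative choice, not the slides' example):

```python
import numpy as np

# A small weighted graph on 4 vertices, given as (u, v, weight) triples.
edges = [(0, 1, 1.0), (0, 2, 2.0), (1, 2, 1.0), (2, 3, 3.0)]
n = 4

A = np.zeros((n, n))          # weighted adjacency matrix
for u, v, w in edges:
    A[u, v] = A[v, u] = w
D = np.diag(A.sum(axis=1))    # degree matrix
L = D - A                     # graph Laplacian

# The quadratic form x^T L x equals the sum over edges of w(u,v)*(x_u - x_v)^2.
x = np.array([1.0, -2.0, 0.5, 3.0])
quad = x @ L @ x
edge_sum = sum(w * (x[u] - x[v]) ** 2 for u, v, w in edges)
assert np.isclose(quad, edge_sum)

# L is symmetric and has zero row sums.
assert np.allclose(L, L.T)
assert np.allclose(L.sum(axis=1), 0)
```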
7 Consensus. A network of n decision-making agents: edges are communication channels; x_i is the decision value of each agent; values can be influenced by communicating neighbors; two agents v_i, v_j are said to agree when x_i = x_j. Goal: all agents reach a common decision value, called the consensus.
8 Consensus. The potential is a measure of disagreement in the system: Ψ_G(x) = (1/2) x^T L x = (1/2) Σ_{v_i∼v_j} (x_i − x_j)^2. All agents agree exactly when Ψ_G(x) = 0. Goal: find the vector x which minimizes the potential [OM03].
9 Why? Laplacian linear systems arise in a number of natural contexts: characterizing the motion of coupled oscillators [HS08]; approximating Fiedler eigenvectors [ST04]; computing effective resistance in an electrical network [Kir1847].
10 Electrical Networks. Edges are wires in the network; edge weights correspond to the conductance of each wire. Goal: set voltages x_1, x_2, x_3 on the vertices to create a flow of electrical current.
11 Electrical Networks. Ohm's law: taking conductance(e) = 1/resistance(e), current(e) = conductance(e) · (voltage(u) − voltage(v)). Consider the flow of current from V1. By Ohm's law: current(V1, V2) = 1 · (voltage(V1) − voltage(V2)); current(V1, V3) = 2 · (voltage(V1) − voltage(V3)).
12 Electrical Networks. Kirchhoff's law [Kir1847]: for every point in the network, netflow(v) := flow_in(v) − flow_out(v) = 0, except at the injection point, where netflow = 1, and the extraction point, where netflow = −1.
13 Electrical Networks. So, at vertex V1 the voltage x_1 must satisfy: netflow(V1) = current(V1, V2) + current(V1, V3), so 1 = 1 · (x_1 − x_2) + 2 · (x_1 − x_3) = 3x_1 − x_2 − 2x_3.
14 Electrical Networks. Applying the same rules at V2 and V3 yields the following system of equations: 3x_1 − x_2 − 2x_3 = 1; −x_1 + 2x_2 − x_3 = 0; −2x_1 − x_2 + 3x_3 = −1.
15-16 Electrical Networks. Or, in matrix form:
[ 3  −1  −2] [x_1]   [ 1]
[−1   2  −1] [x_2] = [ 0]
[−2  −1   3] [x_3]   [−1]
A system in the Laplacian of the network.
17 Electrical Networks. When current is injected at V1 and extracted at V3, the solution vector x can be used to compute the effective resistance between V1 and V3: R_(V1,V3) = x_1 − x_3.
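The effective-resistance computation can be reproduced numerically for the slides' 3-vertex network; a sketch in which the pseudoinverse handles the Laplacian's null space:

```python
import numpy as np

# Laplacian of the 3-vertex network and the current-injection vector b.
L = np.array([[ 3.0, -1.0, -2.0],
              [-1.0,  2.0, -1.0],
              [-2.0, -1.0,  3.0]])
b = np.array([1.0, 0.0, -1.0])   # inject 1 unit at V1, extract at V3

# L is singular (constant vectors are in its null space), so use the
# pseudoinverse; any solution differs only by an additive constant.
x = np.linalg.pinv(L) @ b
assert np.allclose(L @ x, b)

# Effective resistance between V1 and V3 is the voltage difference,
# which is invariant under that additive constant.
R_13 = x[0] - x[2]
print(R_13)   # ≈ 0.4
```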
18 Consensus, Again [OM03]. A connected network of decision-making agents: edges are communication channels; x_i is the decision value of each agent; values can be influenced by communicating neighbors; two agents v_i, v_j are said to agree when x_i = x_j. Goal: all agents reach a common decision value, called the consensus.
19 Consensus, Again [OM03]. Suppose each agent evolves their decision value according to the distributed linear protocol ẋ_i(t) = Σ_{v_j∼v_i} (x_j(t) − x_i(t)). Then the solution to the system ẋ = −Lx, x(0) ∈ R^n, is the vector of decision values as a function of t. Goal: all agents reach a common decision value, called the consensus.
20 Outline: 1. Laplacian matrices; 2. Linear systems in the graph Laplacian; 3. Solvers: an overview; 4. Solving linear systems with heat kernel pagerank (using heat kernel pagerank; computing heat kernel pagerank; a reformulation of the problem; solving the system); 5. Future directions
21 Solving Systems Directly. Given a system of linear equations Ax = b, we can solve the system directly using Gaussian elimination, which takes O(n^3) time in general. [CW90] show the exponent can be as small as 2.376; [Wil12] improves this to 2.373.
22 Iterative Methods. Iterative methods for solving systems Ax = b are based on a sequence of increasingly better approximations. The idea is to construct a sequence x^(0), x^(1), ..., x^(i), ..., x^(N), ... that converges to the true solution x̄: lim_{k→∞} x^(k) = x̄. In practice, the iterative process is stopped after N iterations, when ‖x^(N) − x̄‖ < ɛ for a vector norm and a prescribed error parameter ɛ.
23 Iterative Methods. Richardson's method ([Young53], [GV61]) is an iterative method that improves approximations using the residual error, b − Ax^(i), at each step: x^(i+1) = x^(i) + (b − Ax^(i)) = b + (I − A)x^(i).
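A minimal sketch of Richardson's iteration (the test matrix is an illustrative choice, picked so that I − A is a contraction; in general a damping parameter is needed, which the slide's formula omits):

```python
import numpy as np

def richardson(A, b, iters=100):
    """Plain Richardson iteration: x <- x + (b - A x).
    Converges when the spectral radius of (I - A) is below 1."""
    x = np.zeros_like(b)
    for _ in range(iters):
        x = x + (b - A @ x)     # add the residual at each step
    return x

A = np.array([[1.0, 0.2], [0.2, 1.0]])   # I - A has spectral radius 0.2
b = np.array([1.0, 2.0])
x = richardson(A, b)
assert np.allclose(A @ x, b)
```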
24-29 Preconditioned Iterative Methods. To accelerate the iterative process, instead use an approximation of the matrix, called a preconditioner. This method produces a solution to the preconditioned system B^{-1} A x = B^{-1} b. What makes a matrix B a good preconditioner for A? 1. It is a very good approximation of the matrix A. 2. It reduces the number of iterations. 3. B can be computed quickly. 4. Systems in B can be solved quickly.
30-32 Preconditioned Iterative Methods. Preconditioned Richardson's method ([Young53], [GV61]): x^(i+1) = B^{-1} b + (I − B^{-1} A) x^(i). The term B^{-1} b involves solving a system in B.
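A sketch of the preconditioned iteration, using a Jacobi (diagonal) preconditioner as a simple illustrative choice of B (not a choice made in the slides; the explicit inverse stands in for "solving a system in B"):

```python
import numpy as np

def prec_richardson(A, b, B, iters=200):
    """Preconditioned Richardson: x <- B^{-1} b + (I - B^{-1} A) x."""
    Binv = np.linalg.inv(B)     # stand-in for solving systems in B
    x = np.zeros_like(b)
    I = np.eye(len(b))
    for _ in range(iters):
        x = Binv @ b + (I - Binv @ A) @ x
    return x

# Strictly diagonally dominant A, so the Jacobi iteration converges.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 5.0]])
B = np.diag(np.diag(A))          # Jacobi preconditioner
b = np.array([1.0, 2.0, 3.0])
x = prec_richardson(A, b, B)
assert np.allclose(A @ x, b)
```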
33-35 Convergence of Iterative Methods. Since the solution x̄ is not known, different criteria must be used to determine the value N for which ‖x^(N) − x̄‖ < ɛ. One criterion uses the residual vector r^(i) = b − Ax^(i): the minimum number of iterations N should satisfy ‖x̄ − x^(N)‖/‖x̄‖ ≤ κ(A) ‖r^(N)‖/‖b‖ ≤ ɛ κ(A), where κ(A) = ‖A^{-1}‖‖A‖ is the condition number of A for any matrix norm [QRS07]. In the case of preconditioned methods, this becomes ‖B^{-1} r^(N)‖/‖B^{-1} r^(0)‖ ≤ ɛ. In particular, this means the rate of convergence will depend on how quickly systems in B can be solved.
36 Convergence of Iterative Methods. The preconditioned Chebyshev method finds solutions with absolute error ɛ in time O(m S(B) log(κ(A)/ɛ) √κ(A,B)) [GO88], where S(B) is the time required to solve a system in B and κ(A,B) = (max_{x:Ax≠0} x^T A x / x^T B x) · (max_{x:Ax≠0} x^T B x / x^T A x).
37 Convergence of Iterative Methods. Factors: the minimum number of iterations N, which depends on κ(A); the time to solve systems in B. In general, worst-case time bounds are O(mn).
38 Outline: 1. Laplacian matrices; 2. Linear systems in the graph Laplacian; 3. Solvers: an overview; 4. Solving linear systems with heat kernel pagerank (using heat kernel pagerank; computing heat kernel pagerank; a reformulation of the problem; solving the system); 5. Future directions
39-41 A First Nearly-Linear Time Solver. Spielman and Teng [ST04] presented the first nearly-linear time algorithm for solving systems of equations in symmetric, diagonally dominant (SDD) matrices. The success of the ST-solver can be ascribed to two innovations: 1. enhancing a one-level iterative solver using recursion; 2. using a preconditioning matrix based on a subgraph.
42 The Recursive Solver. The ST-solver addresses the problem of solving systems in the preconditioning matrix B with recursion:
solve(A, b):
    if dimension-check(A) and sparse(A):
        return PrecChebyshev(A, b)
    B = precondition(A)
    A1 = reduce(B)
    solve(A1, b1)
Compute B, the preconditioner for A. Reduce the system in B to a system in A1, a refined version of A. Recursively solve the system in A1. At the base, use the preconditioned Chebyshev method.
43 The Recursive Solver. The recursion produces a chain A → B → A1 → B1 → ... → Ak: compute B, the preconditioner for A; reduce the system in B to a system in A1, a refined version of A; recursively solve the system in A1; at the base, use the preconditioned Chebyshev method.
44 The Recursive Solver. Two methods are used at each level of the recursion. reduce(B): B_i → A_{i+1}; reduce the preconditioned matrix by greedily removing rows and columns with at most two non-zero entries, done with a partial Cholesky decomposition. precondition(A): A_i → B_i; sparsify the graph associated to A to obtain a subgraph H, and set B to be the Laplacian of H.
45-48 Graph Preconditioners. [GMZ95]: there is a linear-time transformation from an SDD system Ax = b to a Laplacian system Lx = b. For the recursive sparsifying procedures, Spielman and Teng use the underlying graphs of the matrices to produce a chain of progressively sparser graphs: matrices A → B → A1 → B1 → ... → Ak, with Laplacians L_A → L_B → L_{A1} → L_{B1} → ... → L_{Ak} and corresponding graphs G → H → G1 → H1 → ... → Gk.
49 Spanning Trees. [Vaidya91]: subgraphs serve as good preconditioners; a maximum-weight spanning tree is used as the preconditioning base; solves SDD systems with non-positive off-diagonal entries and maximum degree d in time O((dn)^{1.75} log(κ(A)/ɛ)). [ST04]: subgraphs serve as good preconditioners; a better base tree uses an edge measure called stretch.
50-53 The Stretch of an Edge. The detour forced by traversing T instead of G. Example: for the edge e = {V1, V4} with w(e) = 1 and a spanning tree T, the tree path from V1 to V4 is {V1, V3}, {V3, V5}, {V5, V4}.
54 The Stretch of an Edge. Let T be a spanning tree of a weighted graph G, and let the weight of an edge e = {u, v} be denoted w(e). Define w'(e) = 1/w(e), which is the resistance of the edge, r_e. Let e_1, e_2, ..., e_k be the unique path in T from u to v. Then the stretch of the edge by T is defined as stretch_T(e) = (Σ_{i=1}^k r_{e_i}) / r_e. The total stretch of a graph G by the tree T is the sum of the stretches of all off-tree edges.
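The stretch computation can be sketched directly from this definition; the graph, tree, and helper functions below are an illustrative choice (unit weights, so that the stretch of the off-tree edge is just its tree-path length):

```python
# Stretch of an off-tree edge e = {u, v}: the resistance of the unique
# tree path between u and v divided by the resistance of e itself,
# where resistance r_e = 1 / w(e).

def tree_path(tree, u, v):
    """Unique u-v path in a tree (adjacency dict), found by DFS."""
    stack, parent = [u], {u: None}
    while stack:
        node = stack.pop()
        if node == v:
            break
        for nbr in tree[node]:
            if nbr not in parent:
                parent[nbr] = node
                stack.append(nbr)
    path, node = [], v
    while parent[node] is not None:
        path.append((parent[node], node))
        node = parent[node]
    return path

def stretch(tree, weights, e):
    u, v = e
    def r(a, b):                      # resistance of edge {a, b}
        return 1.0 / weights[frozenset((a, b))]
    return sum(r(a, b) for a, b in tree_path(tree, u, v)) / r(u, v)

# Unit-weight example: off-tree edge {1, 4} with tree path 1-3, 3-5, 5-4.
tree = {1: [3], 3: [1, 5], 5: [3, 4], 4: [5]}
weights = {frozenset(e): 1.0 for e in [(1, 3), (3, 5), (5, 4), (1, 4)]}
print(stretch(tree, weights, (1, 4)))   # 3.0
```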
55-56 Low Stretch Spanning Trees. A low stretch spanning tree (LSST) for the preconditioning base: a spine that keeps resistance low. LSSTs had been used as preconditioners before ([AKPW95, BH01, BH03]), but combining the high-quality preconditioners with a recursive solver amounted to an algorithm for solving SDD systems in nearly-linear time.
57 [ST04]: Given a system of equations Ax = b in a symmetric, diagonally dominant matrix, the ST-solver computes a vector x̂ satisfying ‖Ax̂ − b‖ < ɛ and ‖x̂ − x̄‖ ≤ ɛ in time O(m log^{O(1)} m), where the exponent of the logarithm is a large constant.
58 Sparsification in the ST-solver: 1. compute an LSST T of the graph G; 2. reweight the edges of T by a constant factor k, and call the reweighted tree T'; 3. replace T in G by T'; 4. H ← T'; 5. add off-tree edges to H by sampling with probabilities related to vertex degree.
59 Sparsification of Koutis et al. [KMP10, KMP11]: 1. compute an LSST T of the graph G; 2. reweight the edges of T by a constant factor k, and call the reweighted tree T'; 3. replace T in G by T'; 4. H ← T'; 5. add off-tree edges to H by sampling with probabilities related to effective resistance.
60-61 Pro: probabilities related to effective resistance yield sparsifiers with few edges [SS11]. Con: computing effective resistances involves solving a system of linear equations. This problem is avoided by instead using upper bounds on the edge probabilities p_e ∝ w_e R_e, where R_e is the effective resistance of the edge. The sparsifier of [KMP11] improved the expected time bound to O(m log^2 n log(1/ɛ)) for an approximate solution with absolute error bounded by ɛ.
62 Summary of Discussed Solvers:
Gaussian elimination: O(n^3)
[CW90]: O(n^2.376)
[Wil12]: O(n^2.373)
Preconditioned Chebyshev [GO88]: O(m S(B) log(κ(A)/ɛ) √κ(A,B))
[Vaidya91]: O((dn)^{1.75} log(κ(A)/ɛ))
[ST04]: O(m log^{O(1)} m)
[KMP11]: O(m log^2 n log(1/ɛ))
63 Outline: 1. Laplacian matrices; 2. Linear systems in the graph Laplacian; 3. Solvers: an overview; 4. Solving linear systems with heat kernel pagerank (using heat kernel pagerank; computing heat kernel pagerank; a reformulation of the problem; solving the system); 5. Future directions
64 The Boundary of a Subset. Let S be a subset of vertices in a graph, S = {S1, S2, S3}.
65 The Boundary of a Subset. The vertex boundary of S is the set of vertices not in S which border S: δ(S) = {v ∉ S : {v, u} ∈ E for some u ∈ S}.
66 The Boundary of a Subset. Let b be a vector over the vertices of a graph. Then a vector x over the vertices satisfies the boundary condition of b for a subset S when x(v) = b(v) for every v ∈ δ(S), where δ(S) = {v ∉ S : {v, u} ∈ E for some u ∈ S}.
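The vertex boundary is straightforward to compute from this definition; a small sketch (the adjacency dict is an illustrative example):

```python
# Vertex boundary delta(S): vertices outside S adjacent to some vertex in S.
def vertex_boundary(adj, S):
    S = set(S)
    return {v for u in S for v in adj[u] if v not in S}

# Illustrative graph as an adjacency dict.
adj = {1: [2, 3], 2: [1, 3], 3: [1, 2, 4], 4: [3, 5], 5: [4]}
print(vertex_boundary(adj, [1, 2, 3]))   # {4}
```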
67 Leader-Following Formation. A network with a leader and a group of agents: values x_i correspond to position; the leader moves independently of the agents. Goal: design a distributed protocol for the agents to follow the leader.
68 Leader-Following Formation. The leader is the boundary of the subset of agents, so our solution should: respect the leader's position (fix the solution on the boundary); use local information among the agents (compute the solution on the subset). Goal: design a distributed protocol for the agents to follow the leader.
69 Leader-Following Formation [NC10]. Consider a multi-agent system of n agents and one leader. 1. The dynamics of the leader are ẋ_0 = A x_0, independent of the agents. 2. The dynamics of each agent are ẋ_i = Σ_{v_j∼v_i} (x_j − x_i), and the vector x of agent positions is given by ẋ = −Lx.
70 Satisfying the Boundary Condition. Consider a linear system Lx = b. Suppose there exists a subset S such that: the induced subgraph on S is connected; the boundary δ(S) is non-empty; support(b) ⊆ δ(S); and x satisfies the boundary condition b: x(v) = b(v) for v ∈ δ(S). Goal: find the solution x restricted to S. That is, the solution will satisfy x(v) = (1/d_v) Σ_{u∼v} x(u) if v ∈ S, and x(v) = b(v) if v ∈ δ(S).
71 Solving Linear Systems with Heat Kernel Pagerank. Main tools and techniques: fast computation of a heat kernel pagerank vector; expressing a linear system with boundary conditions in terms of the heat kernel of the graph; approximating the solution by a sum of heat kernel pagerank vectors.
72-74 Random Walks on a Graph. Consider P = D^{-1} A as a random walk matrix. (The slide's explicit 5 × 5 example matrix, with rows of the form (0, 1/2, 0, 1/2, 0), (1/4, 1/4, 0, 1/4, 1/4), and (1/3, 1/3, 1/3, 0, 0), was garbled in transcription.) When f is a probability distribution vector, f^T P^k is the distribution after k random walk steps. Define Δ to be the Laplace operator, Δ = I − P.
75-79 Heat Kernel Pagerank. The heat kernel pagerank vector is determined by parameters t ∈ R+ and f ∈ R^n: ρ_{t,f} = f^T e^{-tΔ} = e^{-t} Σ_{k=0}^∞ (t^k / k!) f^T P^k. If f is a starting distribution over the vertices, then f^T P^k is the distribution after k random walk steps. The sum of the coefficients satisfies Σ_{k=0}^∞ e^{-t} t^k / k! = e^{-t} e^t = 1. Then, if k steps of a P-random walk are taken with probability e^{-t} t^k / k!, ρ_{t,f} is the expected distribution of the process.
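The series definition can be evaluated directly by truncating the sum; a numpy sketch on a small example graph (the 3-cycle and the truncation point K are illustrative choices):

```python
import numpy as np
from math import exp, factorial

def hkpr_series(P, f, t, K=60):
    """Heat kernel pagerank rho_{t,f} = e^{-t} sum_k (t^k/k!) f^T P^k,
    computed by truncating the series at K terms."""
    rho = np.zeros_like(f)
    fPk = f.copy()                      # current value of f^T P^k
    for k in range(K):
        rho += exp(-t) * t**k / factorial(k) * fPk
        fPk = fPk @ P
    return rho

# Random walk matrix of a 3-cycle: P = D^{-1} A.
A = np.array([[0., 1., 1.], [1., 0., 1.], [1., 1., 0.]])
P = A / A.sum(axis=1, keepdims=True)
f = np.array([1.0, 0.0, 0.0])           # starting distribution

rho = hkpr_series(P, f, t=2.0)
assert np.isclose(rho.sum(), 1.0)       # rho is a probability distribution
```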
80-85 Computing Heat Kernel Pagerank. Q: What is a good way to compute the heat kernel pagerank? A: Draw enough samples of the random process to approximate the expected value. Avoid an exponential sum by taking samples; control the error by drawing enough samples, r(ɛ). Avoid long walk processes by sampling truncated random walks; control the contribution lost in later steps by stopping after enough steps, K(ɛ). Let p_k = e^{-t} t^k / k! be the probability of taking k random walk steps, and let X be the distribution over values of k. Then a good approximation is obtained by taking the average of r random walks of length at most K, where r and K are both in terms of a prescribed error parameter ɛ.
86 Computing Heat Kernel Pagerank. Let the output of the algorithm be ρ̂_{t,f}. Then, to achieve (1 − ɛ) ρ_{t,f}[v] − ɛ ≤ ρ̂_{t,f}[v] ≤ (1 + ɛ) ρ_{t,f}[v], we choose r ≥ (16/ɛ^3) log s and K ≥ log(ɛ^{-1}) / log log(ɛ^{-1}). Then, assuming that (1) taking a random walk step and (2) sampling from a distribution take constant time, the running time of the heat kernel pagerank computation is r · K = (16/ɛ^3) log s · log(ɛ^{-1}) / log log(ɛ^{-1}) = O(log s log(ɛ^{-1}) / (ɛ^3 log log(ɛ^{-1}))).
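The sampling procedure can be sketched as follows. Since the coefficients e^{-t} t^k / k! are exactly a Poisson(t) distribution over walk lengths, each sample draws a truncated Poisson length and runs one walk; here r and K are passed in directly rather than derived from ɛ as above:

```python
import numpy as np

def approx_hkpr(P, f, t, r, K, seed=0):
    """Monte Carlo estimate of heat kernel pagerank: average r random
    walks started from f, each of length k ~ Poisson(t) truncated at K."""
    rng = np.random.default_rng(seed)
    n = len(f)
    counts = np.zeros(n)
    for _ in range(r):
        k = min(rng.poisson(t), K)          # walk length ~ Poisson(t)
        v = rng.choice(n, p=f)              # start vertex ~ f
        for _ in range(k):
            v = rng.choice(n, p=P[v])       # one random walk step
        counts[v] += 1
    return counts / r

A = np.array([[0., 1., 1.], [1., 0., 1.], [1., 1., 0.]])
P = A / A.sum(axis=1, keepdims=True)        # walk matrix of a 3-cycle
f = np.array([1.0, 0.0, 0.0])
rho_hat = approx_hkpr(P, f, t=2.0, r=5000, K=20)
assert np.isclose(rho_hat.sum(), 1.0)
```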
87 Solving Linear Systems with Heat Kernel Pagerank. Main tools and techniques: fast computation of a heat kernel pagerank vector; expressing a linear system with boundary conditions in terms of the heat kernel of the graph; approximating the solution by a sum of heat kernel pagerank vectors.
88 Linear Systems with Boundary Conditions. Recall what it means for a solution vector x to satisfy the boundary condition in the system Lx = b. With S = {v ∈ V : b[v] = 0}: x(v) = (1/d_v) Σ_{u∼v} x(u) if v ∈ S, and x(v) = b(v) if v ∈ δ(S).
89 Linear Systems with Boundary Conditions. Given a graph G and a vector b of boundary conditions: let S be the subset of s = |S| vertices with S = {v ∈ V : b[v] = 0}; let A_{δS} be the s × |δ(S)| matrix A with rows restricted to the vertices of S and columns restricted to the vertices of δ(S); let L_S, A_S, and D_S be L, A, and D with rows and columns restricted to the vertices of S; let x_S be the solution vector over the vertices of S, and let b_{δS} be b over δ(S).
90-91 Linear Systems with Boundary Conditions. Then we would like to find a vector x ∈ R^s satisfying D_S x_S = A_S x_S + A_{δS} b_{δS}, or, equivalently, x_S = (D_S − A_S)^{-1} (A_{δS} b_{δS}) = L_S^{-1} (A_{δS} b_{δS}). The inverse L_S^{-1} exists when the induced subgraph on S is connected and the boundary of S is non-empty.
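This restricted system can be solved directly with submatrices; a numpy sketch on a path graph with two boundary vertices (an illustrative example, where the interior solution is the harmonic interpolation of the boundary values):

```python
import numpy as np

# Solving x_S = L_S^{-1} (A_dS b_dS) on the path graph 0-1-2-3-4
# with boundary {0, 4} and interior S = {1, 2, 3}.
n = 5
A = np.zeros((n, n))
for u in range(n - 1):
    A[u, u + 1] = A[u + 1, u] = 1.0
D = np.diag(A.sum(axis=1))

b = np.array([1.0, 0.0, 0.0, 0.0, 5.0])   # boundary values on vertices 0, 4
S = [1, 2, 3]                              # interior vertices (b is 0 there)
dS = [0, 4]                                # vertex boundary of S

L_S = (D - A)[np.ix_(S, S)]                # L restricted to S
A_dS = A[np.ix_(S, dS)]                    # rows from S, columns from dS
x_S = np.linalg.solve(L_S, A_dS @ b[dS])

print(x_S)   # harmonic interpolation of the boundary values: [2. 3. 4.]
```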
92 The Normalized Laplacian. The normalized Laplacian is defined by (𝓛)_uv = 1 if u = v, −1/√(d_u d_v) if u ∼ v, and 0 otherwise. It is the degree-normalized version of L, 𝓛 = D^{-1/2} L D^{-1/2}, and the symmetric version of Δ = I − P, 𝓛 = D^{1/2} Δ D^{-1/2}.
93-94 The Normalized Laplacian. When G has no isolated vertex, the matrix D is invertible, and the system Lx = b can be transformed into the normalized system 𝓛x̃ = b̃ by setting x̃ = D^{1/2} x and b̃ = D^{-1/2} b. Then the solution x̃ ∈ R^s satisfying the boundary condition should satisfy x̃_S = 𝓛_S^{-1} (D_S^{-1/2} A_{δS} b_{δS}) = 𝓛_S^{-1} b_1, and computing b_1 takes time proportional to the sum of the vertex degrees in δ(S), namely vol(δ(S)).
95-97 Dirichlet Heat Kernel. The Dirichlet heat kernel is defined H_{S,t} = e^{-t 𝓛_S} = D_S^{1/2} e^{-t Δ_S} D_S^{-1/2}. Lemma ([CS13]): Let S be a strict subset of vertices of a graph G and let the induced subgraph on S be connected. Then 𝓛_S^{-1} = ∫_0^∞ H_{S,t} dt, and so x̃_S = 𝓛_S^{-1} b_1 = ∫_0^∞ H_{S,t} b_1 dt.
98 Solving Linear Systems with Heat Kernel Pagerank. Main tools and techniques: fast computation of a heat kernel pagerank vector; expressing a linear system with boundary conditions in terms of the heat kernel of the graph; approximating the solution by a sum of heat kernel pagerank vectors.
99 Solving Linear Systems with Heat Kernel Pagerank. Approximating the solution: 1. express the solution as an improper integral of heat kernel pagerank; 2. approximate with a definite integral by limiting the range; 3. approximate by taking a finite sum of heat kernel pagerank vectors; 4. approximate each term by sampling truncated random walks.
100 Approximating with Heat Kernel Pagerank. Claim: For 𝓛x̃ = b̃, x̃_S = (∫_0^∞ ρ_{t,f} dt) D_S^{-1/2}, where f = b_1^T D_S^{1/2}.
101-104 Approximating with Heat Kernel Pagerank. Claim: for 𝓛x̃ = b̃, x̃_S = (∫_0^∞ ρ_{t,f} dt) D_S^{-1/2}, where f = b_1^T D_S^{1/2}.
Proof.
x̃_S = ∫_0^∞ H_{S,t} b_1 dt  (from the Lemma)
x̃_S = ∫_0^∞ b_1^T H_{S,t} dt  (H_{S,t} is symmetric)
x̃_S = ∫_0^∞ b_1^T (D_S^{1/2} e^{-t Δ_S} D_S^{-1/2}) dt  (definition of H_{S,t})
x̃_S = (∫_0^∞ b_1^T D_S^{1/2} e^{-t Δ_S} dt) D_S^{-1/2} = (∫_0^∞ ρ_{t,f} dt) D_S^{-1/2}.
105-107 Finding the Solution. Let x_1 = x̃_S D_S^{1/2}. We can compute x_1 = ∫_0^∞ ρ_{t,f} dt, with f = b_1^T D_S^{1/2}, by a number of approximations:
1. Ignore the tail; take the integral to a finite T: x_1 ≈ ∫_0^T ρ_{t,f} dt.
2. Discretize with a finite Riemann sum over small intervals T/N: x_1 ≈ Σ_{j=1}^N ρ_{jT/N, f} · T/N.
3. Obtain a good approximation with finitely many samples of ρ_{jT/N, f} over values of t:
for r times:
    draw j from the interval [1, N]
    compute ApproxHKPR(G, jT/N, f)
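The first two approximation steps can be sketched numerically. For simplicity this sketch works with column vectors, folds the D^{±1/2} bookkeeping into u = D_S^{-1} A_{δS} b_{δS} (using L_S = D_S (I − P_S)), and computes each heat kernel vector exactly by a truncated series instead of sampling; the graph and parameters T, N are illustrative choices:

```python
import numpy as np
from math import exp

def heat_vec(P, u, t, kmax=150):
    """e^{-t(I-P)} u via the series e^{-t} sum_k (t^k/k!) P^k u,
    truncated at kmax terms (kmax must comfortably exceed t)."""
    out = np.zeros_like(u)
    pku = u.copy()
    coef = exp(-t)                  # e^{-t} t^k / k!, updated in place
    for k in range(kmax):
        out += coef * pku
        pku = P @ pku
        coef *= t / (k + 1)
    return out

# Path graph 0-1-2-3-4 with boundary {0, 4} and interior S = {1, 2, 3};
# the true interior solution is [2, 3, 4].
n = 5
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0
D = A.sum(axis=1)
b = np.array([1.0, 0.0, 0.0, 0.0, 5.0])
S, dS = [1, 2, 3], [0, 4]

P_S = A[np.ix_(S, S)] / D[S][:, None]     # walk matrix restricted to S
u = (A[np.ix_(S, dS)] @ b[dS]) / D[S]     # D_S^{-1} A_dS b_dS

# Steps 1 and 2: truncate the improper integral at T and discretize it
# with a midpoint Riemann sum of N heat kernel vectors.
T, N = 40.0, 400
x_S = sum(heat_vec(P_S, u, (j - 0.5) * T / N)
          for j in range(1, N + 1)) * (T / N)
print(np.round(x_S, 2))   # close to [2. 3. 4.]
```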
108-109 Solving the System with Heat Kernel Pagerank. [CS13] show that to achieve an approximation vector x̂ satisfying ‖x − x̂‖ ≤ O(ɛ(1 + ‖b‖)), the number of samples needed is r = ɛ^{-2} (log s + log(ɛ^{-1})), where s = |S|. Then, using that the time to compute a heat kernel pagerank vector is O(log s log(ɛ^{-1}) / (ɛ^3 log log(ɛ^{-1}))), and assuming additional preprocessing time proportional to vol(δ(S)), the main result of [CS13] is the following.
110 Solving the System with Heat Kernel Pagerank. Theorem (Boundary Solver [CS13]). For a graph G and a linear system Lx = b, assume: x is required to satisfy the boundary condition b; S = V \ support(b) and s = |S|; the boundary of S is non-empty; the induced subgraph on S is connected.
111 Solving the System with Heat Kernel Pagerank. Theorem (Boundary Solver [CS13], continued). Then the approximate solution x̂ output by the boundary solver satisfies the following with probability 1 − ɛ: 1. ‖x − x̂‖ ≤ O(ɛ(1 + ‖b‖)); 2. the running time is O((log s)^2 (log(ɛ^{-1}))^2 / (ɛ^5 log log(ɛ^{-1}))), with additional preprocessing time O(vol(δ(S))).
112 Outline: 1. Laplacian matrices; 2. Linear systems in the graph Laplacian; 3. Solvers: an overview; 4. Solving linear systems with heat kernel pagerank (using heat kernel pagerank; computing heat kernel pagerank; a reformulation of the problem; solving the system); 5. Future directions
113 Future Directions. Capitalize on speed: use fast linear solvers to improve existing graph algorithms, including interior point algorithms [SD08], learning problems [ZHS05], graph partitioning [ST04], [ACL06], and sparsification [ST04], [SS11]. Find more applications for these linear solvers. Find more applications for heat kernel pagerank: expected distribution of random walks; vertex similarity and graph distance.
114 References.
Noga Alon, Richard M. Karp, David Peleg, and Douglas West (1995). A Graph-Theoretic Game and its Applications to the k-Server Problem. SIAM J. Computing, 21(1).
Reid Andersen, Fan Chung, and Kevin Lang (2006). Local Graph Partitioning Using PageRank Vectors. FOCS '06.
Erik Boman and Bruce Hendrickson (2001). On Spanning Tree Preconditioners. Sandia National Laboratories manuscript.
Erik Boman and Bruce Hendrickson (2003). Support Theory for Preconditioning. SIAM J. Matrix Analysis and Applications, 25(3).
Sergey Brin and Lawrence Page (1998). The Anatomy of a Large-Scale Hypertextual Web Search Engine. Computer Networks and ISDN Systems, 30(1).
115 References.
Fan Chung and Olivia Simpson (2013). Solving Local Linear Systems with Boundary Conditions Using Heat Kernel Pagerank. WAW '13.
Don Coppersmith and Shmuel Winograd (1990). Matrix Multiplication via Arithmetic Progressions. J. Symbolic Computation, 9(3).
Samuel I. Daitch and Daniel A. Spielman (2008). Faster Approximate Lossy Generalized Flow via Interior Point Algorithms. STOC '08.
Gene H. Golub and Michael L. Overton (1988). The Convergence of Inexact Chebyshev and Richardson Iterative Methods for Solving Linear Systems. Numerische Mathematik, 53(5).
116 References.
Gene H. Golub and Richard S. Varga (1961). Chebyshev Semi-Iterative Methods, Successive Overrelaxation Iterative Methods, and Second Order Richardson Iterative Methods. Numerische Mathematik, 3.
Keith D. Gremban, Gary L. Miller, and Marco Zagha (1995). Performance Evaluation of a New Parallel Preconditioner. International Parallel Processing Symposium '95.
Aric Hagberg and Daniel A. Schult (2008). Rewiring Networks for Synchronization. Chaos: An Interdisciplinary Journal of Nonlinear Science, 18(3).
Gustav Kirchhoff (1847). Über die Auflösung der Gleichungen, auf welche man bei der Untersuchung der linearen Vertheilung galvanischer Ströme geführt wird. Annalen der Physik, 148(12).
117 References.
Ioannis Koutis, Gary L. Miller, and Richard Peng (2010). Approaching Optimality for Solving SDD Linear Systems. FOCS '10.
Ioannis Koutis, Gary L. Miller, and Richard Peng (2011). A Nearly-m log n Time Solver for SDD Linear Systems. FOCS '11.
Ioannis Koutis, Gary L. Miller, and Richard Peng (2012). A Fast Solver for a Class of Linear Systems. Communications of the ACM, 55(10).
Wei Ni and Daizhan Cheng (2010). Leader-Following Consensus of Multi-Agent Systems Under Fixed and Switching Topologies. Systems & Control Letters, 59(3).
Reza Olfati-Saber and Richard Murray (2003). Consensus Protocols for Networks of Dynamic Agents. American Control Conference '03.
118 References.
Daniel A. Spielman and Nikhil Srivastava (2011). Graph Sparsification by Effective Resistances. SIAM J. Computing, 40(6).
Daniel A. Spielman and Shang-Hua Teng (2004). Nearly-Linear Time Algorithms for Graph Partitioning, Graph Sparsification, and Solving Linear Systems. STOC '04.
Pravin M. Vaidya (1991). Solving Linear Equations with Symmetric Diagonally Dominant Matrices by Constructing Good Preconditioners. UIUC manuscript based on a talk.
Alfio Quarteroni, Riccardo Sacco, and Fausto Saleri (2007). Numerical Mathematics, vol. 37.
Virginia Vassilevska Williams (2012). Multiplying Matrices Faster than Coppersmith-Winograd. STOC '12.
119

David Young (1950) On Richardson's Method for Solving Linear Systems with Positive Definite Matrices. J. Math. and Physics, 32.
Dengyong Zhou, Jiayuan Huang, and Bernhard Schölkopf (2005) Learning from Labeled and Unlabeled Data on a Directed Graph. International Conference on Machine Learning '05.
120 Thank You