Robust PageRank: Stationary Distribution on a Growing Network Structure

Size: px
Start display at page:

Download "Robust PageRank: Stationary Distribution on a Growing Network Structure"

Transcription

1 oname manuscript o. will be inserted by the editor Robust PageRank: Stationary Distribution on a Growing etwork Structure Anna Timonina-Farkas Received: date / Accepted: date Abstract PageRank PR is a challenging and important network ranking algorithm, which plays a crucial role in information technologies and numerical analysis due to its huge dimension and wide range of possible applications. The traditional approach to PR goes back to the pioneering paper of S. Brin and L. Page [5], who developed the initial method in order to rank websites in the search engine results. Recently, A. Juditsky and B. Polyak in the work [3] proposed a robust formulation of the PageRank model for the case, when links in the network structure may vary, i.e. some links may appear or disappear influencing the transportation matrix defined by the network structure. In this article, we make a further step forward, allowing the network to vary not only in links, but also in the number of nodes. We focus on growing network structures e.g. Internet and we propose a new robust formulation of the PageRank problem for uncertain networks with fixed growth rate i.e. the expected number of pages which appear in the future is fixed. Further, we compare our results with the ranks estimated by the method of A. Juditsky and B. Polyak [3], as well as with the true ranks of tested network structures. We formulate the robust PageRank in terms of non-convex optimization problems and we bound these formulations from above by convex but non-smooth optimization problems. In the numerical part of the article, we propose some smoothening techniques, which allow to obtain the solution accurately and efficiently in case of middle-size networks by the use of the well-known subgradient algorithm avoiding all non-smooth points. Furthermore, we address high-dimensional algorithms by modelling PageRank via the use of the multinomial distribution. Keywords World Wide Web PageRank robust optimization growing networks The research is conducted in line with the Schrödinger Fellowship of the Austrian Science Fund FWF Anna Timonina-Farkas, PhD Risk Analytics and Optimization Chair, École Polytechnique Fédérale de Lausanne RAO, EPFL EPFL-CD-TEI-RAO, Station 5, CH-05 Lausanne, Switzerland T: ; F: anna.farkas@epfl.ch Homepage:

2 Anna Timonina-Farkas Introduction PageRank PR is an automated information retrieval algorithm, which is designed by S. Brin and L. Page [5,] during the rise of the World Wide Web in order to improve the quality of at the moment existing search engines. The algorithm relies not only on keyword matching but also on additional structure present in the hypertext providing higher quality search results. athematically speaking, the rank of some page i, i =,..., is the probability for a random user to be on this page. We denote the rank of the page i by x i, i =,...,. Let L i be the set of all web-pages, which refer to the page i. The probability, that a random user sitting on a page L i clicks on the link to the page i is equal to P i Figure. Set L i Web-site k L i Web-site L i P ik P i Web-site i Fig. : Links outgoing from the web-site. The idea of the PageRank algorithm is, therefore, based on the interchange of ranks between pages, which is formulated in line with probability theory rules: x i = L i P i x, i =,...,. Intuitively, the equation can be seen in the following way: the page gives a part of its rank or score to the page i, if there is a direct link from the page to the page i. The amount of the transferred score is proportional to the probability for a random user to click on the link to the page i sitting on the page. In the initial PageRank formulation [6,8,5,], the probability P i is assumed to be an inverse of the total number of links outgoing from the page denoted by n, i.e. P i = n. This assumption lacks realism, as it assigns equal probabilities for a random user to move to the page i from the page. In reality, these probabilities are rankdependent: the user is more likely to move to the page with a higher rank, than to the page with a lower rank. If one denotes by P the transportation or transition matrix with entries P i, i, =

3 Robust PageRank 3,...,, the PageRank problem can be formulated as the problem of finding the principal eigenvector x = x,..., x T of this matrix, which exists due to the well-known Perron-Frobenius theorem [0] for non-negative matrices: P x = x. As the principal eigenvector corresponding the eigenvalue λ = is not necessarily unique for non-negative stochastic matrices [0], the original PageRank problem is replaced with finding the principal eigenvector of the following modified matrix [5, 8,] known as the Google matrix: x Σ G = αp + αs, where α 0, is the damping factor [5] and S is the doubly stochastic matrix of the form: S =, i, =,...,. i In the pioneering paper of S. Brin and L. Page [5], the following intuitive ustification is given to the matrix G: We assume there is a random surfer who is given a web page at random and keeps clicking on links, never hitting back but eventually gets bored and starts on another random page. The probability that the random surfer visits a page is its PageRank. And, the α damping factor is the probability at each page the random surfer will get bored and request another random page. The principal eigenvector ȳ = ȳ,...,ȳ of the matrix G is unique according to the Perron-Frobenius theorem for positive matrices [0]. oreover, the well-known power method y k+ = Gy k [] converges to ȳ for each starting value y 0 satisfying simplex constraints, which allows to work with high-dimensional cases. However, in the work of B. Polyak and A. Timonina see [5], it is shown that the solution ȳ of the modified problem Gȳ = ȳ can be far enough from the original x of the system P x = x. oreover, for specific network structures it may happen that the eigenvector of the matrix G stops distinguishing between high-ranked pages [5], that build the core of search engine results: if a large amount of high-ranked pages obtains the same score, the ranking becomes ineffective, making the difference between high-ranked and low-ranked pages to be almost binary. Therefore, the approach with the perturbed matrix G works well only in the presence of a small number of high-ranked pages, which has been true when the initial algorithm [5] has been developed but which currently needs reconsideration as the amount of information on the web continues to grow and the number of experienced users in the art of web research is increasing. In the work of B. Polyak and A. Timonina see [5], the authors propose an l - regularization method which avoids low-ranked pages: Px x + ε x, where parameter ε regulates the number of high-ranked pages one would like to consider and where Σ denotes the standard simplex on R, i.e. Σ = ν R : i= ν i, ν i 0. This optimization problem can be solved via the coordinate descent method analogous to the Gauss-Seidel algorithm for solving linear equations [6, 5,, 5, 7], which allows to enhance efficiency of numerical ranking and to improve accuracy by differentiating between high-importance pages.

4 4 Anna Timonina-Farkas However, the optimization problem has a drawback: it is not robust to variations in the Internet structure, while the World Wide Web is, clearly, changing over time: the amount of information on the web is growing, new pages appear, some web-sites old or spam ones disappear. This is particularly important due to the fact, that PageRank computation takes such a significant amount of time Google re-evaluation of PageRank takes a month, that around -3% of new web-pages opens during this time with new links influencing the transportation matrix P. In the work of A. Juditsky and B. Polyak [3], the robust reformulation of the PageRank model is proposed for the case, when links in the network structure may vary, i.e. some links may appear or disappear influencing the transition matrix P: x Σ P + ξ x x, 3 ξ P where is l or l norm, P is a set of allowed perturbations, such that the matrix P + ξ stays column-stochastic, Σ is the standard simplex on R. The optimization problem 3 can be bounded from above by a convex non-smooth optimization problem of the following type see [,3,3]: x Σ Px x + ε x, 4 where ε is a parameter dependent on the perturbation set P. However, the formulation 3 lacks some realism: though the matrix P+ξ is uncertain in this case, the dimension of the matrix stays the same, meaning that no pages may appear or disappear. In reality though, i the number of web-pages in the Internet grows and ii one does not explicitly know, how the structure of the Internet changes. In this article, we allow the network to vary not only in links, but also in the number of nodes, considering, therefore, growing network structures with transition matrix QP: x Σ QP Ξ QPx x, 5 where QP depends on the matrix P. Further, Ξ is the perturbation set adusted for the network growth so that the matrix Q stays column-stochastic. In Section we propose l, l and Frobenius-norm robust reformulations for PageRank of the uncertain network with fixed growth rate i.e. the expected number of pages which appear in the future is fixed and we bound these reformulations from above by convex but non-smooth optimization problems. In Section 3 we demonstrate that formultations robust to network growth impose upper bounds on the formulations robust to perturbations in links, i.e. on the formulations proposed by A. Juditsky and B. Polyak in the work [3]. In Section 4 we study properties of the chosen perturbation set, which helps to shed some light on the parameter ε of problems and 4. In Section 5 we consider smoothening techniques for the subgradient method in order to solve formulated optimization problems in middle-dimensional cases. Further in the article, we address high-dimensional algorithms by modelling PageRank via the use of the multinomial distribution and, afterwards, we conclude, as well as we state directions for future research.

5 Robust PageRank 5 Problem Formulation Consider a column-stochastic non-negative transportation matrix P R, which satisfies P i 0, i, =,..., and i= P i =, =,..., by definition. The wellknown Perron-Frobenius theorem [0] states that there exists doant or principal eigenvector x Σ, where we denote by Σ = v R : i= v i =, v i 0 the standard simplex on R : P x = x. 6 For each column of the matrix P, the entry P i, i =,..., corresponds to the transition probability from the node to the node i of the network i.e. the probability to move from the web-page to the web-page i in the Internet. In general, matrix P is huge-dimensional and very sparse, as the number of links outgoing from every node is much smaller than the total number of nodes in the network: the average number of links outgoing from a web-page in the Internet is equal to 0 for the whole web see [3], while the total number of pages in the Internet is ca The doant vector x describes the stationary distribution on the network structure: each element of the vector x denotes the rank of the corresponding node. For the Internet, the ranks can be seen as the time which an average user spends on the particular web-site. The doant vector x is not robust and may be highly vulnerable to small changes in matrix P. A. Juditsky and B. Polyak in their work [3] reformulated the problem in the robust optimization form, allowing matrix P to vary according to the law P+ξ under some conditions on ξ. For this matrix, they found the stationary distribution x robust to variations in links and, therefore, stable with respect to small changes of P. However, in their work, the size of matrix ξ was assumed to be the same as of matrix P, i.e., meaning that growing in the number of nodes was not considered in the network further, this formulation is referred to as a fixed-size model. In reality, though, changes of matrix P happen not only in links, but also in the number of nodes. The number of domains being registered per ute corresponds to the -3% growth of the Internet per month total-number-of-websites/. That is why, in this article we consider the following changes of matrix P: P + ξ ζ Q =, 7 ψ χ where P is the column-stochastic transportation matrix describing the current state of the network with pages; ξ is the matrix describing variations in links of the initial network; ζ, ψ and χ are matrices describing links to and from new pages, which may appear in the future ξ is of the size, ψ is of the size, ζ is of the size and χ is of the size. In reality, 0.03 per month. As matrices P and Q must be column-stochastic, ξ, ζ, ψ and χ must satisfy the following properties: ξ i P i, i, =,...,; ψ i 0, i =,...,, =,...,; 8 ζ i 0, i =,...,, =,...,; χ i 0, i, =,...,,

6 6 Anna Timonina-Farkas saying that all elements of the matrix Q are non-negative, as well as the following properties must hold: i= P i + ξ i + i= ψ i =, =,..., i= ζ i + i= χ i =, =,..., i= ξ i + i= ψ i = 0, =,..., i= ζ i + i= χ i =, =,...,, saying that every column of the matrix Q sums up to notice, that here we use the fact that P is also column-stochastic and i= P i =, =,...,. Similar to the work of A. Juditsky and B. Polyak [3], the function Qx x, Q Ξ where is some norm, can be seen as a measure of goodness of a vector x as a common doant eigenvector of the family Ξ, where Ξ stands for the set of perturbed stochastic matrices of the form 7 under conditions 8 and 9. x Further, let us denote by x = a feasible point, which is a candidate for the x common doant eigenvector of the family Ξ. Let x be of the size and x be of the size. Hence, x is of the size +. otice, that the vector x must belong to the standard simplex Σ + = v R + : + i= v i =, v i 0, that means x i 0, i =,...,, x 0, =,..., and i= x i + = x = i.e. x + x =. We say that the vector ˆx is a robust solution of the eigenvector problem on Ξ if ˆx Arg x Σ + Q Ξ Qx x 9, 0 where is some norm further, we consider l, l and Frobenius-norm robust formulations. The reasonable choice of the uncertainty set Ξ would impose some bounds on the column-wise norms of matrices ξ, ζ, ψ and χ, meaning that the perturbation in links of the current and future states of the network would be bounded: i.e. [ξ ] ε ξ, where [ ] denotes the -th column of a matrix. oreover, the total uncertainty budget for matrices ξ, ζ, ψ and χ could be fixed see [3]: this would imply constraints on the overall possible perturbations of the transportation matrix P. By solving the optimization problem 0, one would protect the rank vector ˆx against high fluctuation in case of link or node perturbations influencing the transportation matrix P. Currently, the Google scores vector is being updated once per month without accounting for the -3% growth rate during this month. By solving the optimization problem of the type 0 one could decrease the number of updates of the score vector and, therefore, reduce the underlying personnel and machinery costs. In the following sections, we formulate robust optimization problems for finding the stationary distribution of the extended network Q for the case of l, l and Frobenius-norms, and we bound these problems from above by convex but nonsmooth optimization problems. [ψ] ε ψ, [ζ ] ε ζ and [χ] ε χ,

7 Robust PageRank 7. l -norm formulation Let us consider =. We say that the vector ˆx l is the l -robust solution of the eigenvector problem on the set of perturbed matrices Ξ l, if ˆx l Arg Qx x : x Σ + Q Q is column-stochastic, [ξ ] ε ξ [ψ] ε ψ [ζ ] ε ζ [χ] ε χ, =,...,, and ξ i ε ξ, i,, =,...,, and ψ i ε ψ,, =,...,, and ζ i ε ζ,, =,...,, and χ i ε, χ where we have l -norm constraints on each column [ ], of matrices ξ, ζ, ψ and χ, as well as we bound the sum of absolute values of all elements of these matrices. These constraints bound the perturbation in links of the current and future states of the network. At the same time, the total uncertainty budget for matrices ξ, ζ, ψ and χ is fixed. otice, that the problem is not convex-concave, meaning that the function φ x = Q Ξ l Qx x cannot be computed efficiently. Importantly, the formulation does not discourage sparsity in transition probabilities. Consider, for example, the uncertainty matrix ψ, which describes links from existing pages to new ones. All elements of the matrix ψ are non-negative. If we reduce one positive element from -th column of this matrix by a small enough δ, the norm [ψ] and the sum i, ψ i decrease by this δ, regardless of the value of the element we decrease. This means, that l -norm formulation does not make a preference which transition probabilities to decrease in order to satisfy the constraints. By this, a lot of transition probabilities of the matrix Q can result in being zeros. In contrast, for l - and Frobenius-norm formulations, which we consider in the next sections, the reduction of larger terms of the matrix ψ by δ results in a much greater reduction in norms than doing so with smaller terms. Therefore, l - and Frobenius-norm formulations discourage sparsity by yielding diishing reductions for elements closer to zero. Proposition Optimal value φ ˆx l of the non-convex optimization problem can be bounded from above by the optimal value of the following convex optimization problem with ε = ε ξ + ε ψ and ε l = ε ζ + ε χ + : φ ˆx l Px x + ε x a + ε l x b, x Σ + i, i, i,

8 8 Anna Timonina-Farkas where x a = x b = λ+µ=x λ+µ=x λ + = λ + = Proof See the Appendix 9. for the proof. ε ξ +ε ψ µ ε ξ +ε, ψ. ε ζ +ε χ + µ ε ζ +ε χ + otice, that x b = 0 if there are no new pages in the network i.e. if = 0. In this case, the optimization problem completely coincides with the l - reformulation proposed by A. Juditsky and B. Polyak in the work [3].. l -norm formulation Let us consider =. We say that the vector ˆx l is the l -robust solution of the eigenvector problem on the set of perturbed matrices Ξ l, if ˆx l Arg Qx x : x Σ + Q Q is column-stochastic, [ξ ] ε ξ, =,...,, and ξ F ε ξ, [ψ] ε ψ, =,...,, and ψ F ε ψ, 3 [ζ ] ε ζ, =,...,, and ζ F ε ζ, [χ] ε χ, =,...,, and χ F ε χ, where we have constraints on -th column [ ] of matrices ξ, ζ, ψ and χ, as well as we have second-order constraints on matrices themselves. otice, that the problem 3 is not convex-concave, meaning that the function cannot be computed efficiently. φ x = Q Ξ l Qx x Proposition Optimal value φ ˆx l of the non-convex optimization problem 3 can be bounded from above by the optimal value of the following convex optimization problem with ε = ε ξ + ε ψ and ε l = ε ζ + ε χ + : φ ˆx l Px x + ε x c + ε l x d, 4 where x c = x d = λ+µ=x x Σ + λ+µ=x λ + = λ + = Proof See the Appendix 9. for the proof. ε ξ +ε ψ µ ε ξ +ε, ψ. ε ζ +ε χ + µ ε ζ +ε χ +

9 Robust PageRank 9.3 Frobenius-norm formulation Let us further consider =. We say that the vector ˆx F is the robust solution of the eigenvector problem on the set of perturbed matrices Ξ F, if ˆx F Arg Qx x : x Σ + Q Q is column-stochastic, ξ F ε ξ, ψ F ε ψ, 5 ζ F ε ζ, χ F ε χ, where F is the Frobenius norm. otice, that the problem 5 is not convexconcave, meaning that the function φ 3 x = Q Ξ F Qx x cannot be computed efficiently. otice also, that the formulation 5 is an upper bound for the l formulation 3. Proposition 3 Optimal value φ 3 ˆx F of the non-convex optimization problem 5 can be bounded from above by the optimal value of the following convex optimization problem: φ 3 ˆx F Px x + ε x + ε F x, 6 x Σ + where ε = ε ξ + ε ψ and ε F = ε ζ + ε χ +. Proof See the Appendix 9.3 for the proof. otice, that the parameter ε is equal to ε ξ + ε ψ and does not depend on the problem formulation. This parameter describes the total uncertainty in links of current pages, implied by the change in already existing links i.e. P + ξ and by the uncertainty ψ corresponding to newly appeared links from existing to new pages. However, parameters ε l and ε l = ε F depend on the problem formulation. For the l -norm formulation the parameter ε l is equal to ε ζ + ε χ + and denotes the total uncertainty in links between new pages, as well as from them. This parameter is clearly dependent on the number of new pages, giving more weight to their ranks as the number grows. Differently, for the l - and Frobenius-norm formulations, parameters ε l = ε F dependency on the number of new pages. Let us denote by ε the following parameter: are equal to ε ζ +ε χ + and do not show explicit ε = [ ε ζ + ε χ +, for l norm formulation; ε ζ + ε χ +, for l and Frobenius-norm formulations.

10 0 Anna Timonina-Farkas In this case, optimization problems, 4 and 6 can be written in the following general form: x Arg Px x + ε x + ε x, 7 x Σ + where, and correspond to the norms from the formulations above. For the l -norm formulation, defines the l -norm, is equal to a and is equal to b. For the l -norm formulation 4, defines the l - norm, is equal to c and is equal to d. For the Frobenius-norm formulation 6,, and denote l -norms. We refer to the vector x as to the computable robust doant eigenvector of the corresponding family of perturbed matrices Ξ l, Ξ l or Ξ F. Further in the article and before we proceed with the numerical solution of the problem 7, we show that the formulation 7 provides the upper bound on the formulation of A. Juditsky and B. Polyak in the work [3]. oreover, we discuss the choice of the perturbation set defined by parameters ε ξ, ε ψ, ε ζ and ε χ. 3 Comparison to the model with fixed-size network Optimization problems, 3 and 5 account for uncertainties in links between current and future not yet existing pages: these uncertainties are incorporated via matrices ξ, ψ, ζ and χ, which worst-case realization gives us the opportunity to compute the robust PageRank. These optimization problems differ from robust formulations corresponding to the fixed-size network model proposed by A. Juditsky and B. Polyak in the work [3], who studied uncertainties implied by the matrix P + ξ, describing variations in links of existing pages with constant network size. In this section, we study the relationship between the fixed-size and the growing network model. For this, we compute the lower bound of the norm Q Ξ Qx x for the case of l -, l - and Frobenius-norm and we prove that the growing network model imposes the upper bound for the fixed-size network model. Theorem Upper Bounds Optimization problems, 3 and 5 under conditions 8 and 9 impose upper bounds on the fixed-size network model in the following sense: φ x = Q Ξ l Qx x T [ξ ] =0 [ξ ] ε ξ i, ξ i ε ξ P + ξ x x, 8 φ x = Qx x P + ξ x x, 9 Q Ξ l T [ξ ] =0 [ξ ] ε ξ ξ F ε ξ φ 3 x = Q Ξ F Qx x P + ξ x x. 0 T [ξ ] =0 ξ F ε ξ

11 Robust PageRank Proof Let u = u u, where u is a vector of the length and u is a vector of the length. For the case of l -norm, the following equality holds Qx x = P + ξ ζ x x ψ χ x = P + ξ I x + ζ x x ψx + χ I x = = P + ξ I x + ζ x + ψx + χ I x, where I and I are identity matrices of the size and correspondingly. Using the norm duality, i.e. P + ξ I x + ζ x = u T u R P + ξ I x + ζ x u ψx + χ I x = u T u R u ψx + χ I x, we choose such feasible u = u, that P+ξ x x = u T P+ξ I x, u R and u and we fix u =, where is the vector of all-ones. By this, we compute the lower bound for the norm Qx x : Qx x u T P + ξ I x + ζ x + T ψx + χ I x = = P + ξ x x + u T ζ x + T ψx + T χx T x = = P + ξ x x + T ψx + u T ζ x, where the final equation holds due to equalities and with T ψx 0 and u T ζ x 0: T ψ = T ξ ; T χ = T T ζ. otice, that equalities and hold due to the column-stochasticity of the matrix Q i.e. due to conditions 8 and 9. Therefore, we compute the lower bound for the function φ x, i.e. φ x P + ξ x x + T ψx + u T ζ x, ξ,ψ,ζ Ξ l As soon as this bound holds for all ξ,ψ,ζ in the perturbation set, we can set ζ = 0 and ψ = 0 without any loss of generality. oreover, by setting ψ = 0 we guarantee that conditions of A. Juditsky and B. Polyak in the work [3] are satisfied, i.e. that the matrix P + ξ is column-stochastic. Therefore, we guarantee that the l -norm formulation of A. Juditsky and B. Polyak in the work [3] is a lower bound for our optimization problem, i.e. the bound 8 follows. Analogically, one can show, that the same type of bound holds for the case of l - and

12 Anna Timonina-Farkas Frobenius-norm robust formulations. otice, that the following equality holds due to the duality of the second norm: Qx x = P + ξ ζ x x ψ χ x = P + ξ Ix + ζ x x ψx + χ Ix = T u P + ξ I x = + ζ x u R + u ψx + χ I x. u To compute the lower bound for the norm Qx x, we choose such feasible u = u, that P + ξ x x = u T P + ξ Ix, u R, u which exists due to the duality of the norm and we fix u = 0 i.e. zero-vector of the length. In this case, we can write the following: Qx x u T P + ξ I x + ζ x = P + ξ x x + u T ζ x, which leads to the following bounds for l - and Frobenius-norm formulations correspondingly: Qx x P + ξ x x + u T ζ x, ξ and ζ Ξ l, Q Ξ l Qx x P + ξ x x + u T ζ x, ξ and ζ Ξ F. Q Ξ F Analogically to the l -norm formulation, we can set ζ = 0 without any loss of generality, as the bounds hold for all ξ,ζ in the perturbation set. Therefore, one can guarantee that the l - and the Frobenius-norm formulations of A. Juditsky and B. Polyak in the work [3] impose lower bounds for our optimization problems, i.e. bounds 9 and 0 hold. ow, consider robust reformulations, 4 and 6. In case = 0 i.e. if there is no growth in the network, these reformulations fully coincide with upper bounds proposed by A. Juditsky and B. Polyak in the work [3]. In general, for > 0, robust reformulations, 4 and 6 differ from the bounds imposed by the fixedsize network. However, if there are no links from old to new web-pages, the current links are perturbed only by ξ, as ψ = 0. Therefore, Q is reducible and the difference between the fixed-size and the growing network models is fully imposed by the links from new pages. If a random user starting from some new page among i = +,..., + keeps clicking on links, he eventually results in one of pages i =,..., as ζ 0. However, he is not able to get back to the starting page by clicking links as ψ = 0. Therefore, ranks x should a priori be lower than x. oreover, one cannot say which of pages i = +,..., + have higher ranks and which have lower ones, as links are actually unknown to the search engine before the Internet structure is updated. Therefore, without loss of generality, the ranks x could be assumed to be zeros and the problem could be solved numerically using algorithms proposed in [3]. Therefore, further we focus on the case of irreducible transition matrices Q, which imply ε ψ > 0, ε ζ > 0, ε χ > 0.

13 Robust PageRank 3 4 Bounds on the perturbation set Ξ Consider optimization problems, 4 and 6 in the general form 7: x Arg Px x + ε x + ε x. x Σ + otice, that ε ξ + ε ψ is denoted by ε, while ε is equal to ε ζ + ε χ + for the l -norm robust formulation and to ε ζ + ε χ + for both l - and Frobenius-norm formulations 4 and 6. The size of the optimization problem 7 can be reduced, as the problem can be subdivided into two separate smaller-size optimization problems. Lemma Let ỹ and ỹ be solutions of optimization problems 3 and 4: ỹ Arg Py y + ε y, 3 y Σ ỹ Arg ε y, 4 y Σ where Σ = ν R : i= ν i, ν i 0 and Σ = ν R : i= ν i, ν i 0. Then the following holds:. If Pỹ ỹ + ε ỹ < ε ỹ, then the optimal solution to the x problem 7 is x = with x = ỹ and x = 0; x. If Pỹ ỹ + ε ỹ > ε ỹ, then the optimal solution to the x problem 7 is x = with x = 0 and x = ỹ ; x 3. If Pỹ ỹ + ε ỹ = ε ỹ, then there are infinitely many optimal solutions to the problem 7. Proof Consider the optimization problem 7. Based on the fact, that x + x = and x 0, x 0, let us make the following change of variables with s [0, ]: x = sy, x = sy, 5 where y =, y = and y 0, y 0. Further, x = s and x = x s, s [0, ]. Hence, simplex constraints on x = are satisfied. Therefore, the optimization problem 7 can be rewritten as: ỹ Arg s Py y + ε y + s ε y, 6 y Σ y Σ s [0, ] x

14 4 Anna Timonina-Farkas where ỹ = ỹ ỹ and s denote the optimal solution of the optimization problem 6. Furthermore, ỹ and ỹ are independent of each other. At optimality of the problem 6, s = if Pỹ ỹ +ε ỹ < ε ỹ and s = 0 if Pỹ ỹ + ε ỹ > ε ỹ. In case Pỹ ỹ + ε ỹ = ε ỹ, s can take any value in the interval [0, ]. Hence, the statement of the Lemma follows. By comparing optimal values of problems 3 and 4, one can conclude the optimal solution s and, therefore, one can get the optimal solution of the problem 7 by the system 5. In general, one would like to avoid the optimal solution x = 0, x > 0, as it would mean that the uncertainty about current pages is larger than the uncertainty about future not yet existent pages. For this, one needs to guarantee that the optimal value of the problem 3 is not greater than the optimal value of the problem 4 i.e. one would like to avoid point of the Lemma. Further, we consider l, l and Frobenius-norm formulations, 4 and 6 and we explicitly solve the optimization problem 4 for each of these norms. oreover, we state conditions on parameters ε ξ, ε ψ, ε ζ and ε χ, which guarantee that points or 3 of the Lemma are sufficiently satisfied. Statement Consider the robust reformulation and apply the proposed change of variables 5. In this case, y = λ + ε ζ +ε χ + = µ ε ζ +ε χ + λ+µ=y and the following holds: y, if εζ +ε χ + = ε ζ +ε χ +, =,..., ζ y ε Σ +ε χ ζ + ε, if ε ζ +ε χ + Proof See the Appendix 9.4 for the proof. +ε χ + ε ζ +ε χ + <. 7 Statement Consider the robust reformulation 4 and apply the proposed change of variables 5. In this case, y = λ + ε ζ +ε χ + = µ ε ζ +ε χ + and the following holds: y, if εζ = y ζ ε Σ λ+µ=y +ε χ + ε ζ +ε χ + +ε χ + ε ζ +ε χ + Proof See the Appendix 9.5 for the proof., =,...,, if ε ζ +ε χ + ε ζ +ε χ + <. Statement 3 Consider the robust reformulation 6 and apply the proposed change of variables 5. In this case, y = y and the following holds: 8 y =. 9 y Σ

15 Robust PageRank 5 Proof The proof obviously follows from the direct imization of the l -norm. Statements, and 3 impose conditions on the optimal value of the problem 3, under which points or 3 of the Lemma are satisfied. These conditions vary for l, l and Frobenius-norm formulations and can be stated in the following way: For the l -case: Py y + ε y a y Σ Py y + ε y a y Σ εζ +ε χ +, ε ζ + ε χ For the l -case: Py y + ε y c y Σ Py y + ε y c y Σ εζ +ε χ +, ε ζ + ε χ +. 3 For the Frobenius-norm: Py y + ε y εζ + ε χ +. 3 y Σ otice, that ε = ε ξ + ε ψ. Theorem Sufficient Conditions Consider statements 30, 3 and 3.. Condition 30 is sufficiently satisfied, if ε ξ + ε ψ. ε ξ. Condition 3 is sufficiently satisfied, if + ε ψ, ε ζ + ε χ. 3. Condition 3 is sufficiently satisfied, if ε ζ + ε χ ε ξ + ε ψ. Proof Consider l -norm and let ȳ be such a vector, that ȳ = Pȳ and ȳ Σ notice, that it exists due to the Perron-Frobenius theorem. In this case, Py y + ε y a ε ȳ a ε = ε ξ + ε ψ, y Σ where the last inequality holds due to the definition of the norm a. Therefore, condition 30 is satisfied, if ε ξ + ε ψ + εζ +ε χ, ε ξ + ε ψ + ε ζ + ε χ. It is, therefore, sufficient that ε ξ +ε ψ in the absence of information about new pages ambiguity parameters i.e. about ε ζ and ε χ. Hence, the statement of the Theorem follows. Analogically, we obtain the sufficient conditions for l - and Frobenius-norm reformulations, i.e. statements and 3 of the Theorem.

16 6 Anna Timonina-Farkas Condition of the Theorem depends neither on the number of new pages in the network nor on the uncertainty levels ε ζ and ε χ, which makes the l formulation convenient for the analysis. For the case of l robust reformulation, condition of the Theorem states that the uncertainty about future pages ε ζ + ε χ should grow faster than as the number of new pages increases. For the case = 0 and =, this condition is automatically satisfied. For higher dimensions i.e. >, this condition can be statistically tested from real-world data. Similarly, for the Frobenius-norm formulation 6, condition 3 of the Theorem requires that the uncertainty about future pages ε ζ + ε χ grows faster than ε ξ + ε ψ as the number of new pages increases. otice, that in the work of A. Juditsky and B. Polyak [3] small perturbations ε = 0.0 were considered in order to avoid complete equalization of the scores. oreover, = 0 in the work [3]. Thus, conditions, and 3 of the Theorem were satisfied in the work of A. Juditsky and B. Polyak [3]. Further, let us assume that conditions of the Theorem are satisfied. In this case, we focus on the numerical solution of the problem 3, which we formulate in the following general form: Px x + ε x, 33 x Σ where we, with some abuse of notations, denote ỹ by x and ε by ε and where norms and correspond to the norms in l, l and Frobenius-norm formulations, 4 and 6. In the following section, we study numerical techniques for the solution of the optimization problem umerical algorithms Consider optimization problems, 4 and 6. As it is demonstrated in the previous section for each of these problems, it is sufficient to solve the optimization problem of the type 33 if conditions 30, 3 or 3 are correspondingly satisfied. In medium- to large-dimensional cases i.e. =.e3.e6, convex non-smooth optimization problem 33 can be solved numerically using available optimization techniques, including interior-point methods, mirror-descent algorithms [4, 9] and randomized techniques [,,8]. In huge-dimensional cases i.e. =.e6.e9, one can employ randomized subgradient algorithms, which are specifically designed for sparse matrices by Y. esterov [0]. oreover, one can use less accurate but extremely fast numerical methods proposed in [3,7,5]. In this section, we do not consider these algorithms, but we propose some techniques, which allow to smoothen optimization problems, 4, 6 and to solve them numerically via approximation. Further, we consider the optimization problem 33 with = and =, i.e. we focus on the Frobenius-norm formulation 6 under conditions of the Theorem. We choose this formulation, as it imposes the upper bound on the l formulation 4 and as it is a non-smooth non-separable optimization problem, which

17 Robust PageRank 7 implies additional difficulty. Differently, one could use the following upper bound for the approximate solution of the optimization problem 4: x Σ λ+µ=x Px x + ε λ + = ε µ x Σ Px x + = ε x, which we do not consider in this article but which can be approached analogically to the Frobenius-norm formulation. We also do not consider numerical algorithms for the solution of l norm formulation, as this problem can be bounded by the optimization problem with a separable non-smooth part and, therefore, it can be solved approximately via the wellknown proected coordinate descent algorithm see, for example, [5]. Therefore, let us consider the optimization problem 6. Its reformulation 3 under condition 3 implies the following optimization problem: x Arg Px x + ε x, 34 x Σ where ε = ε ξ + ε ψ. We solve the optimization problem 34 using the following steps: Step : Apply a nonstandard normalization instead of simplex constraints x Σ. This would allow us to simplify the constraints of the optimization problem 34; Step : Bound the feasible set of the problem 34 so, that it does not include any of non-smooth points; Step 3: Solve the optimization problem 34 via proected subgradient method on the feasible set. Further, we consider each of these steps in more details. onstandard normalization: Assume that one page page is known to have the highest rank see [5] and, therefore, let us use the nonstandard normalization x = instead of i= x i =. By this, we introduce the following optimization problem: z Arg x X f x, f x = Px x + ε x, 35 where X = x R, x =, x 0. otice, that the optimal value and the optimal solution of the problem 34 are related to those of the problem 35 in the following way: f x = x f z, where z = x x x x Ṇ. x = x z,. This leads to the statement x = i= z i.

18 8 Anna Timonina-Farkas on-smooth points: The optimization problem 35 is non-smooth at least at one point P x = x, which is a possible optimal solution corresponding to the doant or principal eigenvector of the non-perturbed matrix P. Therefore, we first of all check if the optimal solution of the problem 35 is always the point x: P x = x. For this, we solve the optimization problem 35 in small-dimensional cases with randomly chosen non-negative column-stochastic matrix P. We can see that the following two cases are possible:. Optimal solution of the problem 35 is the point x : P x = x Figure a, in which case the function f x is non-smooth at optimality;. Optimal solution of the problem 35 is a point x : Px x Figure b, in which case the function f x is smooth at optimality. a on-smooth optimality. b Smooth at optimality. Fig. : Function f x for randomly chosen P in small-dimensional cases. The optimal solution of the problem 35 is not necessarily the principal eigenvector of the matrix P. Therefore, there may exist at least one not optimal point, where the function is non-smooth. Further, notice that the optimization problem 35 is non-smooth x such that P x = x. As soon as the matrix P is non-negative and column-stochastic, its imal eigenvalue λ = is not necessarily a simple root of the characteristic polynomial according to the Perron-Frobenius theorem [0]. Therefore, there may exist multiple number of points x : P x = x. If the matrix P would be irreducible, the imal eigenvalue λ = would be the simple root of the characteristic polynomial. However, there still could exist multiple complex eigenvalues with absolute value, which would strongly influence convergence of numerical algorithms for example, the well-known power method would not converge. We propose a technique, which guarantees that at each iteration k the obective function is smooth at the point x k. otice, that the subgradient of the function f x at any point x : Px x is unique and is equal to: f x = P IT P Ix Px x x + ε, where Px x. 36 x

19 Robust PageRank 9 otice also, that x > 0 for all points in the feasible set of the problem 35, as x =. ow, consider a perturbed Google matrix G = αp + αs, where S is the doubly stochastic matrix with entries equal to and where α = 0.85 is the damping factor [9,6]. This matrix was initially proposed by S. Brin and L. Page [5] and was used by Google in the well-known PageRank algorithm y k+ = Gy k. For the matrix G one can guarantee the uniqueness of the imal in absolute value eigenvalue i.e. λ = and, therefore, the uniqueness of the eigenvector ȳ corresponding to it [5]. oreover, one can guarantee the convergence of the power method y k+ = Gy k to ȳ : Gȳ = ȳ due to the Perron-Frobenius theorem for positive matrices. Further, consider the following convex function and its corresponding subgradient: gx = Gx x + ε x, gx = G IT G Ix Gx x x + ε. x otice, that the function gx has a unique non-smooth point ȳ, which corresponds to Gȳ = ȳ and which, in general, may differ from x : P x = x. The Perron-Frobenius theorem for positive matrices implies that the principal eigenvector of the matrix G has only positive entries. Hence, one can guarantee that Gx x for each chosen x by setting ranks of some spam pages to zero i.e. x i = 0 for some i. By this, one guarantees the uniqueness of the subgradient at each feasible point. Therefore, we solve the following optimization problem numerically: gx, gx = Gx x + ε x, 37 x X where X = x R, x = 0, x =, x 0, where x corresponds to the spam page. Subgradient method: We solve the optimization problem 37 by the proected subgradient algorithm, where we do not iterate over elements x and x : x k+ i = x k [G]i e i,gx x i t k + ε x i, 0, i =,, 38 Gx x x where [G] i is the i-th column of the matrix G, e i is the i-th column of the matrix I and t k is the chosen step size. In the method 38, the subgradient is unique at each feasible point of the problem 37. In general, subgradient methods are not necessarily the descent methods. Therefore, one keeps track of the best function value at every iteration k. i.e. g best = i,,...,k gxi. In this article, we test the stopping rule gx k+ > gx k instead of keeping track of the best function value, as well as we do not iterate over pages with zero-ranks. This allows to enhance efficiency of the algorithm. 5. umerical results Consider the following two ranking models Figure 3 a and b, which are also discussed in the works [3,5,8]. For both models, nodes e.g. web-pages are shown as circles, while references e.g. web-links are denoted by arrows.

20 0 Anna Timonina-Farkas,,,n,,,n n, n, n,n a odel.,,,n,,,n n, n, n,n b odel. Fig. 3: etwork models for testing purposes. odel : Let us start by considering odel Figure 3 a. In this model, node n,n represents a dangling vertex, which makes the transition matrix P reducible, leading to non-uniqueness of the eigenvector corresponding to the eigenvalue λ =. To avoid reducibility of the matrix P, we need to guarantee that the number of outgoing links is non-zero for each page. For this, we assume the ability of equally probable transitions from the node n,n to any node in the odel similarly to the approach used by search engines. The transition matrix P corresponding to the odel with equally probable transitions from the vertex n,n becomes irreducible and aperiodic, which guarantees the uniqueness of the eigenvector corresponding to the eigenvalue λ =, as well as the convergence of the well-known power method x k+ = Px k [3,5, 8]. Figure 4 demonstrates the sparsity of the matrix P corresponding to the odel for networks with = n = 9 and = n = 00 nodes: zero elements of the matrix P are shown with light blue color. The matrix P is, clearly, highly sparse. a =9. b =00. Fig. 4: on-zero elements of the transition matrix P corresponding to the odel.

21 Robust PageRank We are interested in finding ranks x i, of each node of the network. Taking arbitrary value of x n,n, say x n,n = n, we obtain the system of equations describing ranks of the odel [5]: x, = x n n,n =, x i, = x,i = x i, +, i =,...,n, x i, = x i, + x i, +,,i =,...,n, 39 x n, = x,n = x n, + x n, +, =,...,n, x n,n = x n,n + x n,n + = n. The system of equations 39 has a closed form solution and, therefore, it can be solved explicitly for each x i, [5,8]. This allows us to test the performance of the subgradient algorithm 38 in comparison with the true ranks of the odel. odel : The second model i.e. odel in the Figure 3 b differs from the odel, as it has only one link from the node n,n. As the transition matrix corresponding to the odel is periodic, the power method x k+ = Px k does not converge for it [5,8]. Taking arbitrary value of x,, we obtain the system of equations describing ranks of the odel [5]: x i, = x,i = x i,, i =,...,n, x, =, x i, = x i, + x i,,,i =,...,n, x n, = x n = x n, + x 40 n,, =,...,n, x n,n = x n,n + x n,n. Analogically to the odel, the system of equations 40 can be solved explicitly. Further, we proceed to standard normalization and get the normalized ranks as x i, = x i, i, x i, for both odel and odel see Figure 5. otice, that these ranks correspond to the doant eigenvector of the matrix P. However, they are not necessarily the optimal solution of optimization problems 34 or 37. Ranks i node index node index 50 a odel. b odel. Fig. 5: Explicit ranks of a odel and b odel = 5000 and = 50 correspondingly.

22 Anna Timonina-Farkas For = we i compute exact ranks of the described models i.e. odel and odel via systems 39 and 40. Afterwards, we ii solve the reformulation 37 by the algorithm 38 described in the Section 5. Further, we test different possible step sizes t k in the algorithm 38. For odel and odel correspondingly, Figure 6 demonstrates the optimal value convergence of the problem 37 obtained by the algorithm 38 with the diishing step size i.e. lim k t k = 0, k= t k = and with the starting point x = 0,,..., T. Gx k -x k +ɛ x Gx k -x k + ԑ x Iteration number a odel Iteration number b odel. Optimal value of the initial problem 34 Optimal value of 35 under additional constraint x =0 Optimal value of the problem 37 =/k+ =/k+ 0.9 =/k+ 0.5 =/k+ 0.3 =/k+ 0. Optimal value of the initial problem 34 Optimal value of 35 under additional constraint x =0 Optimal value of the problem 37 =/k+ =/k+ 0.9 Subgradientdescent t k =/k+ 0.5 =/k+ 0.3 =/k+ 0. Fig. 6: Optimal value convergence for the problem 37 obtained by the algorithm 38 with diishing step size t k = k+ d, k with ε = and d 0,. As the subgradient method is not necessarily the descent method, one should, in general, keep track of the best function value at every iteration k: g best = i,,...,k gxi. We, however, test the performance of the stopping rule gx k+ > gx k instead of keeping track of the best function value. This allows to enhance efficiency of the algorithm Figure 6. In the Figure 6, one can see that larger step sizes lead to the earlier

23 Robust PageRank 3 break of iterations under the stopping rule gx k+ > gx k. Figure 7 compares ranks estimated via algorithm 38 with the doant eigenvector of the matrix P Last row ranks imposed by the problem 37 Ranks 4 3 Last row ranks imposed by the eigenvector of matrix P =/k+ =/k+ 0.5 =/k+ 0.3 =/k ode number in the last row Diagonal ranks imposed by the eigenvector of matrix P a Last row elements. Ranks 4 3 Diagonal ranks imposed by the problem 37 Step size /k+ 0.5 Step size /k ode number on the diagonal b Diagonal elements. Fig. 7: odel : Estimated ranks v.s the doant eigenvector. In the example of Figure 7 we use ε = as the parameter of the optimization problem 37. For larger values of ε, the robust eigenvector stops distinguishing ranks of highimportance pages. This is in line with conditions 30, 3 and 3 and the Theorem, which claim that the uncertainty about future pages becomes doant if the value of ε = ε gets too high, which makes ranks of current pages indistinguishable from each other. Therefore, increasing the value of ε even further would lead to the solution, where one cannot differ between ranks of pages x,...,x at all notice, that x = 0 and x = are fixed in the optimization problem 37. For small values of ε e.g. ε <<, the optimal solution of the problem 34 should approach the doant eigenvector of the unperturbed transition matrix P. As we solve the optimization problem 37 instead of the problem 34 for numerical pur-

24 4 Anna Timonina-Farkas poses, our optimal solution approaches the unique doant eigenvector of the Google matrix G = αp + αs, with S being the doubly stochastic matrix with elements and the damping factor α = Figure 8 a demonstrates values x i,, i, =,...,n of the doant eigenvector of the matrix G with α = 0.99 for odel. Large amount of high-importance pages have the same rank already for α = 0.99, which can be also observed in the Figure 8 b view from above [5, 8]. Even more pages become indistinguishable if we decrease the damping factor α. a Ranks x i,, i, =,...,n. b View from above. Fig. 8: Eigenvector of the Google matrix G = αp + αs with α = 0.99 corresponding to the odel. By this, one could claim that the doant eigenvector of the matrix G should be viewed as the robust eigenvector of the perturbed family of matrices Q defined by 7, 8 and 9. This, however, would not provide the methodology to rank highimportance pages, which are the core of all search engine results. For small- and medium-size problems, the optimal solution of the optimization problem 37 with ε provides better results in terms of ranking of high-importance pages than the direct use of the transition matrix G. Further, we provide the results for the subgradient method 38 for the following step sizes for odel with = 0.000: Constant step length: t k = h gx k, k; Polyak s step size: t k = gxk g, where g is the unknown optimal value for the gx k problem 34: t k = gxk g gx k t k = gxk g gx k gx k i,,...,k gxi + k gx k, 4 gx k i,,...,k gxi + k gx k. 4

25 =/ gx k Robust PageRank 5 otice, that one can use t k = and t k gx k k = k gx instead of corresponding step sizes 4 and 4 in case one applies the stopping rule gx k+ > gx k. k This follows from the fact, that gx k = g best = i,,...,k gx i if the stopping rule is implemented. Otherwise, one needs to use step sizes 4 and 4 directly Optimal value of the Optimal value of the initial problem 34 initial problem Optimal value of 35 under additional Optimal value of 35 under additional Gx k -x k + ԑ x constraint x =0 Optimal value of the problem 37 =/ gx k =0.9/ gx k =0.5/ gx k Gx k -x k + ԑ x constraint x =0 Optimal value of the problem 37 =/k gx k =0.5/k gx k =/k 0.5 gx k =0.3/ gx k =0.5/k 0.5 gx k =0./ gx k =/k 0.3 gx k Iteration number Iteration number a Constant step length. b Polyak s step size. Fig. 9: odel : Optimal solution of the problem 37 obtained by the algorithm 38 with ε = and the stopping rule gx k+ > gx k Last row ranks imposed by the problem 37 4 Last row ranks imposed by the eigenvector of matrix P Ranks 3 =0.9/ gx k =0./ gx k ode number in the last row a Last row elements. Antidiagonal ranks imposed by the problem 37 Antidiagonal ranks imposed by the eigenvector of matrix P =/k gx k Ranks.5.5 =/k 0.5 gx k =0.5/k 0.5 gx k =/k 0.3 gx k ode number on the antidiagonal b Antidiagonal elements. Fig. 0: odel : Estimated ranks v.s the doant eigenvector.

26 6 Anna Timonina-Farkas Similarly to the Figure 7, in Figures 9 and 0 we use ε = as the parameter of the optimization problem 37. For larger values of ε, the robust eigenvector does not distinguish ranks of high-importance pages according to conditions 30, 3 and 3 and the Theorem. Finally, we compare results obtained by the use of the stopping rule gx k+ > gx k and the results implied by the standard for subgradient algorithms choice of the function value: g best = i,,...,k gxi. From Figures and, one can see that there is no sufficient gain in estimation accuracy in case one keeps track of the best function value till the iteration k: the function value decreases monotonically till the iteration number k 00 see Figures and, after which the step size is being reduced but no additional accuracy is being achieved inimal value of Px-x + ԑ x on standard simplex Gx k -x k + ԑ x inimal value of Gx-x + ԑ x on standard simplex with x, =0 Subgradient descent with t k =/k 0. gx k Iteration number Fig. : odel : Optimal solution of the problem 37 obtained by the algorithm 38 without stopping rule. Gx k -x k + ԑ x k inimal value of Px-x + ԑ x on standard simplex inimal value of Gx-x + ԑ x on standard simplex with x, =0 =/k 0. gx k Iteration number Fig. : odel : Optimal solution of the problem 35 obtained by the algorithm 38 with the standard for subgradient algorithms choice of the function value g best. In the next section, we discuss the Google matrix G and corresponding power methods designed for high-dimensional cases.

Graph Models The PageRank Algorithm

Graph Models The PageRank Algorithm Graph Models The PageRank Algorithm Anna-Karin Tornberg Mathematical Models, Analysis and Simulation Fall semester, 2013 The PageRank Algorithm I Invented by Larry Page and Sergey Brin around 1998 and

More information

Uncertainty and Randomization

Uncertainty and Randomization Uncertainty and Randomization The PageRank Computation in Google Roberto Tempo IEIIT-CNR Politecnico di Torino tempo@polito.it 1993: Robustness of Linear Systems 1993: Robustness of Linear Systems 16 Years

More information

Online Social Networks and Media. Link Analysis and Web Search

Online Social Networks and Media. Link Analysis and Web Search Online Social Networks and Media Link Analysis and Web Search How to Organize the Web First try: Human curated Web directories Yahoo, DMOZ, LookSmart How to organize the web Second try: Web Search Information

More information

IR: Information Retrieval

IR: Information Retrieval / 44 IR: Information Retrieval FIB, Master in Innovation and Research in Informatics Slides by Marta Arias, José Luis Balcázar, Ramon Ferrer-i-Cancho, Ricard Gavaldá Department of Computer Science, UPC

More information

Introduction to Search Engine Technology Introduction to Link Structure Analysis. Ronny Lempel Yahoo Labs, Haifa

Introduction to Search Engine Technology Introduction to Link Structure Analysis. Ronny Lempel Yahoo Labs, Haifa Introduction to Search Engine Technology Introduction to Link Structure Analysis Ronny Lempel Yahoo Labs, Haifa Outline Anchor-text indexing Mathematical Background Motivation for link structure analysis

More information

Link Analysis Ranking

Link Analysis Ranking Link Analysis Ranking How do search engines decide how to rank your query results? Guess why Google ranks the query results the way it does How would you do it? Naïve ranking of query results Given query

More information

Application. Stochastic Matrices and PageRank

Application. Stochastic Matrices and PageRank Application Stochastic Matrices and PageRank Stochastic Matrices Definition A square matrix A is stochastic if all of its entries are nonnegative, and the sum of the entries of each column is. We say A

More information

DATA MINING LECTURE 13. Link Analysis Ranking PageRank -- Random walks HITS

DATA MINING LECTURE 13. Link Analysis Ranking PageRank -- Random walks HITS DATA MINING LECTURE 3 Link Analysis Ranking PageRank -- Random walks HITS How to organize the web First try: Manually curated Web Directories How to organize the web Second try: Web Search Information

More information

1998: enter Link Analysis

1998: enter Link Analysis 1998: enter Link Analysis uses hyperlink structure to focus the relevant set combine traditional IR score with popularity score Page and Brin 1998 Kleinberg Web Information Retrieval IR before the Web

More information

eigenvalues, markov matrices, and the power method

eigenvalues, markov matrices, and the power method eigenvalues, markov matrices, and the power method Slides by Olson. Some taken loosely from Jeff Jauregui, Some from Semeraro L. Olson Department of Computer Science University of Illinois at Urbana-Champaign

More information

A Note on Google s PageRank

A Note on Google s PageRank A Note on Google s PageRank According to Google, google-search on a given topic results in a listing of most relevant web pages related to the topic. Google ranks the importance of webpages according to

More information

Calculating Web Page Authority Using the PageRank Algorithm

Calculating Web Page Authority Using the PageRank Algorithm Jacob Miles Prystowsky and Levi Gill Math 45, Fall 2005 1 Introduction 1.1 Abstract In this document, we examine how the Google Internet search engine uses the PageRank algorithm to assign quantitatively

More information

Data Mining Recitation Notes Week 3

Data Mining Recitation Notes Week 3 Data Mining Recitation Notes Week 3 Jack Rae January 28, 2013 1 Information Retrieval Given a set of documents, pull the (k) most similar document(s) to a given query. 1.1 Setup Say we have D documents

More information

Online Social Networks and Media. Link Analysis and Web Search

Online Social Networks and Media. Link Analysis and Web Search Online Social Networks and Media Link Analysis and Web Search How to Organize the Web First try: Human curated Web directories Yahoo, DMOZ, LookSmart How to organize the web Second try: Web Search Information

More information

Lab 8: Measuring Graph Centrality - PageRank. Monday, November 5 CompSci 531, Fall 2018

Lab 8: Measuring Graph Centrality - PageRank. Monday, November 5 CompSci 531, Fall 2018 Lab 8: Measuring Graph Centrality - PageRank Monday, November 5 CompSci 531, Fall 2018 Outline Measuring Graph Centrality: Motivation Random Walks, Markov Chains, and Stationarity Distributions Google

More information

Link Analysis. Leonid E. Zhukov

Link Analysis. Leonid E. Zhukov Link Analysis Leonid E. Zhukov School of Data Analysis and Artificial Intelligence Department of Computer Science National Research University Higher School of Economics Structural Analysis and Visualization

More information

CS246: Mining Massive Datasets Jure Leskovec, Stanford University

CS246: Mining Massive Datasets Jure Leskovec, Stanford University CS246: Mining Massive Datasets Jure Leskovec, Stanford University http://cs246.stanford.edu 2/7/2012 Jure Leskovec, Stanford C246: Mining Massive Datasets 2 Web pages are not equally important www.joe-schmoe.com

More information

MATH36001 Perron Frobenius Theory 2015

MATH36001 Perron Frobenius Theory 2015 MATH361 Perron Frobenius Theory 215 In addition to saying something useful, the Perron Frobenius theory is elegant. It is a testament to the fact that beautiful mathematics eventually tends to be useful,

More information

Lecture 3: Huge-scale optimization problems

Lecture 3: Huge-scale optimization problems Liege University: Francqui Chair 2011-2012 Lecture 3: Huge-scale optimization problems Yurii Nesterov, CORE/INMA (UCL) March 9, 2012 Yu. Nesterov () Huge-scale optimization problems 1/32March 9, 2012 1

More information

0.1 Naive formulation of PageRank

0.1 Naive formulation of PageRank PageRank is a ranking system designed to find the best pages on the web. A webpage is considered good if it is endorsed (i.e. linked to) by other good webpages. The more webpages link to it, and the more

More information

PageRank. Ryan Tibshirani /36-662: Data Mining. January Optional reading: ESL 14.10

PageRank. Ryan Tibshirani /36-662: Data Mining. January Optional reading: ESL 14.10 PageRank Ryan Tibshirani 36-462/36-662: Data Mining January 24 2012 Optional reading: ESL 14.10 1 Information retrieval with the web Last time we learned about information retrieval. We learned how to

More information

Distributed Randomized Algorithms for the PageRank Computation Hideaki Ishii, Member, IEEE, and Roberto Tempo, Fellow, IEEE

Distributed Randomized Algorithms for the PageRank Computation Hideaki Ishii, Member, IEEE, and Roberto Tempo, Fellow, IEEE IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 55, NO. 9, SEPTEMBER 2010 1987 Distributed Randomized Algorithms for the PageRank Computation Hideaki Ishii, Member, IEEE, and Roberto Tempo, Fellow, IEEE Abstract

More information

Subgradient methods for huge-scale optimization problems

Subgradient methods for huge-scale optimization problems Subgradient methods for huge-scale optimization problems Yurii Nesterov, CORE/INMA (UCL) May 24, 2012 (Edinburgh, Scotland) Yu. Nesterov Subgradient methods for huge-scale problems 1/24 Outline 1 Problems

More information

Link Analysis. Stony Brook University CSE545, Fall 2016

Link Analysis. Stony Brook University CSE545, Fall 2016 Link Analysis Stony Brook University CSE545, Fall 2016 The Web, circa 1998 The Web, circa 1998 The Web, circa 1998 Match keywords, language (information retrieval) Explore directory The Web, circa 1998

More information

Link Mining PageRank. From Stanford C246

Link Mining PageRank. From Stanford C246 Link Mining PageRank From Stanford C246 Broad Question: How to organize the Web? First try: Human curated Web dictionaries Yahoo, DMOZ LookSmart Second try: Web Search Information Retrieval investigates

More information

Lecture 5 : Projections

Lecture 5 : Projections Lecture 5 : Projections EE227C. Lecturer: Professor Martin Wainwright. Scribe: Alvin Wan Up until now, we have seen convergence rates of unconstrained gradient descent. Now, we consider a constrained minimization

More information

In particular, if A is a square matrix and λ is one of its eigenvalues, then we can find a non-zero column vector X with

In particular, if A is a square matrix and λ is one of its eigenvalues, then we can find a non-zero column vector X with Appendix: Matrix Estimates and the Perron-Frobenius Theorem. This Appendix will first present some well known estimates. For any m n matrix A = [a ij ] over the real or complex numbers, it will be convenient

More information

Lecture 14: Random Walks, Local Graph Clustering, Linear Programming

Lecture 14: Random Walks, Local Graph Clustering, Linear Programming CSE 521: Design and Analysis of Algorithms I Winter 2017 Lecture 14: Random Walks, Local Graph Clustering, Linear Programming Lecturer: Shayan Oveis Gharan 3/01/17 Scribe: Laura Vonessen Disclaimer: These

More information

Math 304 Handout: Linear algebra, graphs, and networks.

Math 304 Handout: Linear algebra, graphs, and networks. Math 30 Handout: Linear algebra, graphs, and networks. December, 006. GRAPHS AND ADJACENCY MATRICES. Definition. A graph is a collection of vertices connected by edges. A directed graph is a graph all

More information

Near-Potential Games: Geometry and Dynamics

Near-Potential Games: Geometry and Dynamics Near-Potential Games: Geometry and Dynamics Ozan Candogan, Asuman Ozdaglar and Pablo A. Parrilo January 29, 2012 Abstract Potential games are a special class of games for which many adaptive user dynamics

More information

PageRank: The Math-y Version (Or, What To Do When You Can t Tear Up Little Pieces of Paper)

PageRank: The Math-y Version (Or, What To Do When You Can t Tear Up Little Pieces of Paper) PageRank: The Math-y Version (Or, What To Do When You Can t Tear Up Little Pieces of Paper) In class, we saw this graph, with each node representing people who are following each other on Twitter: Our

More information

Lecture 7 Mathematics behind Internet Search

Lecture 7 Mathematics behind Internet Search CCST907 Hidden Order in Daily Life: A Mathematical Perspective Lecture 7 Mathematics behind Internet Search Dr. S. P. Yung (907A) Dr. Z. Hua (907B) Department of Mathematics, HKU Outline Google is the

More information

The Google Markov Chain: convergence speed and eigenvalues

The Google Markov Chain: convergence speed and eigenvalues U.U.D.M. Project Report 2012:14 The Google Markov Chain: convergence speed and eigenvalues Fredrik Backåker Examensarbete i matematik, 15 hp Handledare och examinator: Jakob Björnberg Juni 2012 Department

More information

Selected Examples of CONIC DUALITY AT WORK Robust Linear Optimization Synthesis of Linear Controllers Matrix Cube Theorem A.

Selected Examples of CONIC DUALITY AT WORK Robust Linear Optimization Synthesis of Linear Controllers Matrix Cube Theorem A. . Selected Examples of CONIC DUALITY AT WORK Robust Linear Optimization Synthesis of Linear Controllers Matrix Cube Theorem A. Nemirovski Arkadi.Nemirovski@isye.gatech.edu Linear Optimization Problem,

More information

SUPPLEMENTARY MATERIALS TO THE PAPER: ON THE LIMITING BEHAVIOR OF PARAMETER-DEPENDENT NETWORK CENTRALITY MEASURES

SUPPLEMENTARY MATERIALS TO THE PAPER: ON THE LIMITING BEHAVIOR OF PARAMETER-DEPENDENT NETWORK CENTRALITY MEASURES SUPPLEMENTARY MATERIALS TO THE PAPER: ON THE LIMITING BEHAVIOR OF PARAMETER-DEPENDENT NETWORK CENTRALITY MEASURES MICHELE BENZI AND CHRISTINE KLYMKO Abstract This document contains details of numerical

More information

No class on Thursday, October 1. No office hours on Tuesday, September 29 and Thursday, October 1.

No class on Thursday, October 1. No office hours on Tuesday, September 29 and Thursday, October 1. Stationary Distributions Monday, September 28, 2015 2:02 PM No class on Thursday, October 1. No office hours on Tuesday, September 29 and Thursday, October 1. Homework 1 due Friday, October 2 at 5 PM strongly

More information

Near-Potential Games: Geometry and Dynamics

Near-Potential Games: Geometry and Dynamics Near-Potential Games: Geometry and Dynamics Ozan Candogan, Asuman Ozdaglar and Pablo A. Parrilo September 6, 2011 Abstract Potential games are a special class of games for which many adaptive user dynamics

More information

Updating PageRank. Amy Langville Carl Meyer

Updating PageRank. Amy Langville Carl Meyer Updating PageRank Amy Langville Carl Meyer Department of Mathematics North Carolina State University Raleigh, NC SCCM 11/17/2003 Indexing Google Must index key terms on each page Robots crawl the web software

More information

Page rank computation HPC course project a.y

Page rank computation HPC course project a.y Page rank computation HPC course project a.y. 2015-16 Compute efficient and scalable Pagerank MPI, Multithreading, SSE 1 PageRank PageRank is a link analysis algorithm, named after Brin & Page [1], and

More information

Entropy Rate of Stochastic Processes

Entropy Rate of Stochastic Processes Entropy Rate of Stochastic Processes Timo Mulder tmamulder@gmail.com Jorn Peters jornpeters@gmail.com February 8, 205 The entropy rate of independent and identically distributed events can on average be

More information

Slide source: Mining of Massive Datasets Jure Leskovec, Anand Rajaraman, Jeff Ullman Stanford University.

Slide source: Mining of Massive Datasets Jure Leskovec, Anand Rajaraman, Jeff Ullman Stanford University. Slide source: Mining of Massive Datasets Jure Leskovec, Anand Rajaraman, Jeff Ullman Stanford University http://www.mmds.org #1: C4.5 Decision Tree - Classification (61 votes) #2: K-Means - Clustering

More information

How does Google rank webpages?

How does Google rank webpages? Linear Algebra Spring 016 How does Google rank webpages? Dept. of Internet and Multimedia Eng. Konkuk University leehw@konkuk.ac.kr 1 Background on search engines Outline HITS algorithm (Jon Kleinberg)

More information

STA141C: Big Data & High Performance Statistical Computing

STA141C: Big Data & High Performance Statistical Computing STA141C: Big Data & High Performance Statistical Computing Lecture 6: Numerical Linear Algebra: Applications in Machine Learning Cho-Jui Hsieh UC Davis April 27, 2017 Principal Component Analysis Principal

More information

CS 277: Data Mining. Mining Web Link Structure. CS 277: Data Mining Lectures Analyzing Web Link Structure Padhraic Smyth, UC Irvine

CS 277: Data Mining. Mining Web Link Structure. CS 277: Data Mining Lectures Analyzing Web Link Structure Padhraic Smyth, UC Irvine CS 277: Data Mining Mining Web Link Structure Class Presentations In-class, Tuesday and Thursday next week 2-person teams: 6 minutes, up to 6 slides, 3 minutes/slides each person 1-person teams 4 minutes,

More information

Written Examination

Written Examination Division of Scientific Computing Department of Information Technology Uppsala University Optimization Written Examination 202-2-20 Time: 4:00-9:00 Allowed Tools: Pocket Calculator, one A4 paper with notes

More information

Slides based on those in:

Slides based on those in: Spyros Kontogiannis & Christos Zaroliagis Slides based on those in: http://www.mmds.org High dim. data Graph data Infinite data Machine learning Apps Locality sensitive hashing PageRank, SimRank Filtering

More information

A New Method to Find the Eigenvalues of Convex. Matrices with Application in Web Page Rating

A New Method to Find the Eigenvalues of Convex. Matrices with Application in Web Page Rating Applied Mathematical Sciences, Vol. 4, 200, no. 9, 905-9 A New Method to Find the Eigenvalues of Convex Matrices with Application in Web Page Rating F. Soleymani Department of Mathematics, Islamic Azad

More information

Separation of Variables in Linear PDE: One-Dimensional Problems

Separation of Variables in Linear PDE: One-Dimensional Problems Separation of Variables in Linear PDE: One-Dimensional Problems Now we apply the theory of Hilbert spaces to linear differential equations with partial derivatives (PDE). We start with a particular example,

More information

On the mathematical background of Google PageRank algorithm

On the mathematical background of Google PageRank algorithm Working Paper Series Department of Economics University of Verona On the mathematical background of Google PageRank algorithm Alberto Peretti, Alberto Roveda WP Number: 25 December 2014 ISSN: 2036-2919

More information

PageRank algorithm Hubs and Authorities. Data mining. Web Data Mining PageRank, Hubs and Authorities. University of Szeged.

PageRank algorithm Hubs and Authorities. Data mining. Web Data Mining PageRank, Hubs and Authorities. University of Szeged. Web Data Mining PageRank, University of Szeged Why ranking web pages is useful? We are starving for knowledge It earns Google a bunch of money. How? How does the Web looks like? Big strongly connected

More information

How works. or How linear algebra powers the search engine. M. Ram Murty, FRSC Queen s Research Chair Queen s University

How works. or How linear algebra powers the search engine. M. Ram Murty, FRSC Queen s Research Chair Queen s University How works or How linear algebra powers the search engine M. Ram Murty, FRSC Queen s Research Chair Queen s University From: gomath.com/geometry/ellipse.php Metric mishap causes loss of Mars orbiter

More information

ECEN 689 Special Topics in Data Science for Communications Networks

ECEN 689 Special Topics in Data Science for Communications Networks ECEN 689 Special Topics in Data Science for Communications Networks Nick Duffield Department of Electrical & Computer Engineering Texas A&M University Lecture 8 Random Walks, Matrices and PageRank Graphs

More information

Data and Algorithms of the Web

Data and Algorithms of the Web Data and Algorithms of the Web Link Analysis Algorithms Page Rank some slides from: Anand Rajaraman, Jeffrey D. Ullman InfoLab (Stanford University) Link Analysis Algorithms Page Rank Hubs and Authorities

More information

IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 11, NOVEMBER On the Performance of Sparse Recovery

IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 11, NOVEMBER On the Performance of Sparse Recovery IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 11, NOVEMBER 2011 7255 On the Performance of Sparse Recovery Via `p-minimization (0 p 1) Meng Wang, Student Member, IEEE, Weiyu Xu, and Ao Tang, Senior

More information

Web Ranking. Classification (manual, automatic) Link Analysis (today s lesson)

Web Ranking. Classification (manual, automatic) Link Analysis (today s lesson) Link Analysis Web Ranking Documents on the web are first ranked according to their relevance vrs the query Additional ranking methods are needed to cope with huge amount of information Additional ranking

More information

Degenerate Perturbation Theory. 1 General framework and strategy

Degenerate Perturbation Theory. 1 General framework and strategy Physics G6037 Professor Christ 12/22/2015 Degenerate Perturbation Theory The treatment of degenerate perturbation theory presented in class is written out here in detail. The appendix presents the underlying

More information

Google PageRank. Francesco Ricci Faculty of Computer Science Free University of Bozen-Bolzano

Google PageRank. Francesco Ricci Faculty of Computer Science Free University of Bozen-Bolzano Google PageRank Francesco Ricci Faculty of Computer Science Free University of Bozen-Bolzano fricci@unibz.it 1 Content p Linear Algebra p Matrices p Eigenvalues and eigenvectors p Markov chains p Google

More information

Lecture 12: Link Analysis for Web Retrieval

Lecture 12: Link Analysis for Web Retrieval Lecture 12: Link Analysis for Web Retrieval Trevor Cohn COMP90042, 2015, Semester 1 What we ll learn in this lecture The web as a graph Page-rank method for deriving the importance of pages Hubs and authorities

More information

Analysis of Google s PageRank

Analysis of Google s PageRank Analysis of Google s PageRank Ilse Ipsen North Carolina State University Joint work with Rebecca M. Wills AN05 p.1 PageRank An objective measure of the citation importance of a web page [Brin & Page 1998]

More information

Google Page Rank Project Linear Algebra Summer 2012

Google Page Rank Project Linear Algebra Summer 2012 Google Page Rank Project Linear Algebra Summer 2012 How does an internet search engine, like Google, work? In this project you will discover how the Page Rank algorithm works to give the most relevant

More information

Definition A finite Markov chain is a memoryless homogeneous discrete stochastic process with a finite number of states.

Definition A finite Markov chain is a memoryless homogeneous discrete stochastic process with a finite number of states. Chapter 8 Finite Markov Chains A discrete system is characterized by a set V of states and transitions between the states. V is referred to as the state space. We think of the transitions as occurring

More information

LINEAR SYSTEMS (11) Intensive Computation

LINEAR SYSTEMS (11) Intensive Computation LINEAR SYSTEMS () Intensive Computation 27-8 prof. Annalisa Massini Viviana Arrigoni EXACT METHODS:. GAUSSIAN ELIMINATION. 2. CHOLESKY DECOMPOSITION. ITERATIVE METHODS:. JACOBI. 2. GAUSS-SEIDEL 2 CHOLESKY

More information

Pseudocode for calculating Eigenfactor TM Score and Article Influence TM Score using data from Thomson-Reuters Journal Citations Reports

Pseudocode for calculating Eigenfactor TM Score and Article Influence TM Score using data from Thomson-Reuters Journal Citations Reports Pseudocode for calculating Eigenfactor TM Score and Article Influence TM Score using data from Thomson-Reuters Journal Citations Reports Jevin West and Carl T. Bergstrom November 25, 2008 1 Overview There

More information

Finding normalized and modularity cuts by spectral clustering. Ljubjana 2010, October

Finding normalized and modularity cuts by spectral clustering. Ljubjana 2010, October Finding normalized and modularity cuts by spectral clustering Marianna Bolla Institute of Mathematics Budapest University of Technology and Economics marib@math.bme.hu Ljubjana 2010, October Outline Find

More information

Lecture: Local Spectral Methods (1 of 4)

Lecture: Local Spectral Methods (1 of 4) Stat260/CS294: Spectral Graph Methods Lecture 18-03/31/2015 Lecture: Local Spectral Methods (1 of 4) Lecturer: Michael Mahoney Scribe: Michael Mahoney Warning: these notes are still very rough. They provide

More information

Web Structure Mining Nodes, Links and Influence

Web Structure Mining Nodes, Links and Influence Web Structure Mining Nodes, Links and Influence 1 Outline 1. Importance of nodes 1. Centrality 2. Prestige 3. Page Rank 4. Hubs and Authority 5. Metrics comparison 2. Link analysis 3. Influence model 1.

More information

Updating Markov Chains Carl Meyer Amy Langville

Updating Markov Chains Carl Meyer Amy Langville Updating Markov Chains Carl Meyer Amy Langville Department of Mathematics North Carolina State University Raleigh, NC A. A. Markov Anniversary Meeting June 13, 2006 Intro Assumptions Very large irreducible

More information

Selecting Efficient Correlated Equilibria Through Distributed Learning. Jason R. Marden

Selecting Efficient Correlated Equilibria Through Distributed Learning. Jason R. Marden 1 Selecting Efficient Correlated Equilibria Through Distributed Learning Jason R. Marden Abstract A learning rule is completely uncoupled if each player s behavior is conditioned only on his own realized

More information

Krylov Subspace Methods to Calculate PageRank

Krylov Subspace Methods to Calculate PageRank Krylov Subspace Methods to Calculate PageRank B. Vadala-Roth REU Final Presentation August 1st, 2013 How does Google Rank Web Pages? The Web The Graph (A) Ranks of Web pages v = v 1... Dominant Eigenvector

More information

Computing PageRank using Power Extrapolation

Computing PageRank using Power Extrapolation Computing PageRank using Power Extrapolation Taher Haveliwala, Sepandar Kamvar, Dan Klein, Chris Manning, and Gene Golub Stanford University Abstract. We present a novel technique for speeding up the computation

More information

The Push Algorithm for Spectral Ranking

The Push Algorithm for Spectral Ranking The Push Algorithm for Spectral Ranking Paolo Boldi Sebastiano Vigna March 8, 204 Abstract The push algorithm was proposed first by Jeh and Widom [6] in the context of personalized PageRank computations

More information

Conditioning of the Entries in the Stationary Vector of a Google-Type Matrix. Steve Kirkland University of Regina

Conditioning of the Entries in the Stationary Vector of a Google-Type Matrix. Steve Kirkland University of Regina Conditioning of the Entries in the Stationary Vector of a Google-Type Matrix Steve Kirkland University of Regina June 5, 2006 Motivation: Google s PageRank algorithm finds the stationary vector of a stochastic

More information

min f(x). (2.1) Objectives consisting of a smooth convex term plus a nonconvex regularization term;

min f(x). (2.1) Objectives consisting of a smooth convex term plus a nonconvex regularization term; Chapter 2 Gradient Methods The gradient method forms the foundation of all of the schemes studied in this book. We will provide several complementary perspectives on this algorithm that highlight the many

More information

Markov Chains Handout for Stat 110

Markov Chains Handout for Stat 110 Markov Chains Handout for Stat 0 Prof. Joe Blitzstein (Harvard Statistics Department) Introduction Markov chains were first introduced in 906 by Andrey Markov, with the goal of showing that the Law of

More information

Link Analysis Information Retrieval and Data Mining. Prof. Matteo Matteucci

Link Analysis Information Retrieval and Data Mining. Prof. Matteo Matteucci Link Analysis Information Retrieval and Data Mining Prof. Matteo Matteucci Hyperlinks for Indexing and Ranking 2 Page A Hyperlink Page B Intuitions The anchor text might describe the target page B Anchor

More information

Computational Economics and Finance

Computational Economics and Finance Computational Economics and Finance Part II: Linear Equations Spring 2016 Outline Back Substitution, LU and other decomposi- Direct methods: tions Error analysis and condition numbers Iterative methods:

More information

Spectral Graph Theory and You: Matrix Tree Theorem and Centrality Metrics

Spectral Graph Theory and You: Matrix Tree Theorem and Centrality Metrics Spectral Graph Theory and You: and Centrality Metrics Jonathan Gootenberg March 11, 2013 1 / 19 Outline of Topics 1 Motivation Basics of Spectral Graph Theory Understanding the characteristic polynomial

More information

The PageRank Problem, Multi-Agent. Consensus and Web Aggregation

The PageRank Problem, Multi-Agent. Consensus and Web Aggregation The PageRank Problem, Multi-Agent Consensus and Web Aggregation A Systems and Control Viewpoint arxiv:32.904v [cs.sy] 6 Dec 203 Hideaki Ishii and Roberto Tempo PageRank is an algorithm introduced in 998

More information

ARock: an algorithmic framework for asynchronous parallel coordinate updates

ARock: an algorithmic framework for asynchronous parallel coordinate updates ARock: an algorithmic framework for asynchronous parallel coordinate updates Zhimin Peng, Yangyang Xu, Ming Yan, Wotao Yin ( UCLA Math, U.Waterloo DCO) UCLA CAM Report 15-37 ShanghaiTech SSDS 15 June 25,

More information

Statistical Ranking Problem

Statistical Ranking Problem Statistical Ranking Problem Tong Zhang Statistics Department, Rutgers University Ranking Problems Rank a set of items and display to users in corresponding order. Two issues: performance on top and dealing

More information

Monte Carlo methods in PageRank computation: When one iteration is sufficient

Monte Carlo methods in PageRank computation: When one iteration is sufficient Monte Carlo methods in PageRank computation: When one iteration is sufficient Nelly Litvak (University of Twente, The Netherlands) e-mail: n.litvak@ewi.utwente.nl Konstantin Avrachenkov (INRIA Sophia Antipolis,

More information

Semidefinite and Second Order Cone Programming Seminar Fall 2012 Project: Robust Optimization and its Application of Robust Portfolio Optimization

Semidefinite and Second Order Cone Programming Seminar Fall 2012 Project: Robust Optimization and its Application of Robust Portfolio Optimization Semidefinite and Second Order Cone Programming Seminar Fall 2012 Project: Robust Optimization and its Application of Robust Portfolio Optimization Instructor: Farid Alizadeh Author: Ai Kagawa 12/12/2012

More information

Multi-Robotic Systems

Multi-Robotic Systems CHAPTER 9 Multi-Robotic Systems The topic of multi-robotic systems is quite popular now. It is believed that such systems can have the following benefits: Improved performance ( winning by numbers ) Distributed

More information

Optimization methods

Optimization methods Lecture notes 3 February 8, 016 1 Introduction Optimization methods In these notes we provide an overview of a selection of optimization methods. We focus on methods which rely on first-order information,

More information

Mathematical Properties & Analysis of Google s PageRank

Mathematical Properties & Analysis of Google s PageRank Mathematical Properties & Analysis of Google s PageRank Ilse Ipsen North Carolina State University, USA Joint work with Rebecca M. Wills Cedya p.1 PageRank An objective measure of the citation importance

More information

minimize x subject to (x 2)(x 4) u,

minimize x subject to (x 2)(x 4) u, Math 6366/6367: Optimization and Variational Methods Sample Preliminary Exam Questions 1. Suppose that f : [, L] R is a C 2 -function with f () on (, L) and that you have explicit formulae for

More information

Linear Programming: Simplex

Linear Programming: Simplex Linear Programming: Simplex Stephen J. Wright 1 2 Computer Sciences Department, University of Wisconsin-Madison. IMA, August 2016 Stephen Wright (UW-Madison) Linear Programming: Simplex IMA, August 2016

More information

THEORY OF SEARCH ENGINES. CONTENTS 1. INTRODUCTION 1 2. RANKING OF PAGES 2 3. TWO EXAMPLES 4 4. CONCLUSION 5 References 5

THEORY OF SEARCH ENGINES. CONTENTS 1. INTRODUCTION 1 2. RANKING OF PAGES 2 3. TWO EXAMPLES 4 4. CONCLUSION 5 References 5 THEORY OF SEARCH ENGINES K. K. NAMBIAR ABSTRACT. Four different stochastic matrices, useful for ranking the pages of the Web are defined. The theory is illustrated with examples. Keywords Search engines,

More information

Math 443/543 Graph Theory Notes 5: Graphs as matrices, spectral graph theory, and PageRank

Math 443/543 Graph Theory Notes 5: Graphs as matrices, spectral graph theory, and PageRank Math 443/543 Graph Theory Notes 5: Graphs as matrices, spectral graph theory, and PageRank David Glickenstein November 3, 4 Representing graphs as matrices It will sometimes be useful to represent graphs

More information

Applications to network analysis: Eigenvector centrality indices Lecture notes

Applications to network analysis: Eigenvector centrality indices Lecture notes Applications to network analysis: Eigenvector centrality indices Lecture notes Dario Fasino, University of Udine (Italy) Lecture notes for the second part of the course Nonnegative and spectral matrix

More information

6.842 Randomness and Computation March 3, Lecture 8

6.842 Randomness and Computation March 3, Lecture 8 6.84 Randomness and Computation March 3, 04 Lecture 8 Lecturer: Ronitt Rubinfeld Scribe: Daniel Grier Useful Linear Algebra Let v = (v, v,..., v n ) be a non-zero n-dimensional row vector and P an n n

More information

Markov Chains and Stochastic Sampling

Markov Chains and Stochastic Sampling Part I Markov Chains and Stochastic Sampling 1 Markov Chains and Random Walks on Graphs 1.1 Structure of Finite Markov Chains We shall only consider Markov chains with a finite, but usually very large,

More information

Computer Vision Group Prof. Daniel Cremers. 11. Sampling Methods: Markov Chain Monte Carlo

Computer Vision Group Prof. Daniel Cremers. 11. Sampling Methods: Markov Chain Monte Carlo Group Prof. Daniel Cremers 11. Sampling Methods: Markov Chain Monte Carlo Markov Chain Monte Carlo In high-dimensional spaces, rejection sampling and importance sampling are very inefficient An alternative

More information

Lecture 3: Semidefinite Programming

Lecture 3: Semidefinite Programming Lecture 3: Semidefinite Programming Lecture Outline Part I: Semidefinite programming, examples, canonical form, and duality Part II: Strong Duality Failure Examples Part III: Conditions for strong duality

More information

(i) The optimisation problem solved is 1 min

(i) The optimisation problem solved is 1 min STATISTICAL LEARNING IN PRACTICE Part III / Lent 208 Example Sheet 3 (of 4) By Dr T. Wang You have the option to submit your answers to Questions and 4 to be marked. If you want your answers to be marked,

More information

Problem 1 (Exercise 2.2, Monograph)

Problem 1 (Exercise 2.2, Monograph) MS&E 314/CME 336 Assignment 2 Conic Linear Programming January 3, 215 Prof. Yinyu Ye 6 Pages ASSIGNMENT 2 SOLUTIONS Problem 1 (Exercise 2.2, Monograph) We prove the part ii) of Theorem 2.1 (Farkas Lemma

More information

A hybrid reordered Arnoldi method to accelerate PageRank computations

A hybrid reordered Arnoldi method to accelerate PageRank computations A hybrid reordered Arnoldi method to accelerate PageRank computations Danielle Parker Final Presentation Background Modeling the Web The Web The Graph (A) Ranks of Web pages v = v 1... Dominant Eigenvector

More information

Lecture 25: Subgradient Method and Bundle Methods April 24

Lecture 25: Subgradient Method and Bundle Methods April 24 IE 51: Convex Optimization Spring 017, UIUC Lecture 5: Subgradient Method and Bundle Methods April 4 Instructor: Niao He Scribe: Shuanglong Wang Courtesy warning: hese notes do not necessarily cover everything

More information

MultiRank and HAR for Ranking Multi-relational Data, Transition Probability Tensors, and Multi-Stochastic Tensors

MultiRank and HAR for Ranking Multi-relational Data, Transition Probability Tensors, and Multi-Stochastic Tensors MultiRank and HAR for Ranking Multi-relational Data, Transition Probability Tensors, and Multi-Stochastic Tensors Michael K. Ng Centre for Mathematical Imaging and Vision and Department of Mathematics

More information

Optimisation and Operations Research

Optimisation and Operations Research Optimisation and Operations Research Lecture 22: Linear Programming Revisited Matthew Roughan http://www.maths.adelaide.edu.au/matthew.roughan/ Lecture_notes/OORII/ School

More information