Robust PageRank: Stationary Distribution on a Growing Network Structure


Anna Timonina-Farkas

Received: date / Accepted: date

Abstract PageRank (PR) is a challenging and important network ranking algorithm, which plays a crucial role in information technologies and numerical analysis due to its huge dimension and wide range of possible applications. The traditional approach to PR goes back to the pioneering paper of S. Brin and L. Page [5], who developed the initial method in order to rank websites in search engine results. Recently, A. Juditsky and B. Polyak in the work [3] proposed a robust formulation of the PageRank model for the case when links in the network structure may vary, i.e. some links may appear or disappear, influencing the transportation matrix defined by the network structure. In this article, we make a further step forward, allowing the network to vary not only in links, but also in the number of nodes. We focus on growing network structures (e.g. the Internet) and we propose a new robust formulation of the PageRank problem for uncertain networks with fixed growth rate (i.e. the expected number of pages which appear in the future is fixed). Further, we compare our results with the ranks estimated by the method of A. Juditsky and B. Polyak [3], as well as with the true ranks of tested network structures. We formulate the robust PageRank in terms of non-convex optimization problems and we bound these formulations from above by convex but non-smooth optimization problems. In the numerical part of the article, we propose some smoothening techniques which allow us to obtain the solution accurately and efficiently for middle-size networks by the use of the well-known subgradient algorithm, avoiding all non-smooth points. Furthermore, we address high-dimensional algorithms by modelling PageRank via the use of the multinomial distribution.

Keywords World Wide Web · PageRank · robust optimization · growing networks

The research is conducted in line with the Schrödinger Fellowship of the Austrian Science Fund (FWF).

Anna Timonina-Farkas, PhD
Risk Analytics and Optimization Chair, École Polytechnique Fédérale de Lausanne (RAO, EPFL)
EPFL-CDM-TEI-RAO, Station 5, CH-1015 Lausanne, Switzerland
T: +4 0 693 00 36; F: +4 0 693 4 89
E-mail: anna.farkas@epfl.ch
Homepage: http://homepage.univie.ac.at/anna.timonina/

Introduction

PageRank (PR) is an automated information retrieval algorithm, which was designed by S. Brin and L. Page [5] during the rise of the World Wide Web in order to improve the quality of the search engines existing at that time. The algorithm relies not only on keyword matching but also on additional structure present in the hypertext, providing higher-quality search results.

Mathematically speaking, the rank of some page i, i = 1,...,N, is the probability for a random user to be on this page. We denote the rank of the page i by x_i, i = 1,...,N. Let L_i be the set of all web-pages which refer to the page i. The probability that a random user sitting on a page j ∈ L_i clicks on the link to the page i is equal to P_ij (Figure 1).

Fig. 1: Links outgoing from the web-site j towards the web-site i.

The idea of the PageRank algorithm is, therefore, based on the interchange of ranks between pages, which is formulated in line with probability theory rules:

    x_i = Σ_{j ∈ L_i} P_ij x_j,   i = 1,...,N.   (1)

Intuitively, equation (1) can be seen in the following way: the page j gives a part of its rank (or score) to the page i if there is a direct link from the page j to the page i. The amount of the transferred score is proportional to the probability for a random user to click on the link to the page i while sitting on the page j.

In the initial PageRank formulation [6,8,5], the probability P_ij is assumed to be the inverse of the total number of links outgoing from the page j (denoted by n_j), i.e. P_ij = 1/n_j. This assumption lacks realism, as it assigns equal probabilities for a random user to move to the page i from the page j. In reality, these probabilities are rank-dependent: the user is more likely to move to a page with a higher rank than to a page with a lower rank.

If one denotes by P the transportation (or transition) matrix with entries P_ij, i, j = 1,...,N,

the PageRank problem can be formulated as the problem of finding the principal eigenvector x* = (x_1,...,x_N)ᵀ of this matrix, which exists due to the well-known Perron-Frobenius theorem [0] for non-negative matrices:

    P x* = x*.

As the principal eigenvector corresponding to the eigenvalue λ = 1 is not necessarily unique for non-negative stochastic matrices [0], the original PageRank problem is replaced with finding the principal eigenvector of the following modified matrix [5,8], known as the Google matrix:

    G = αP + (1 − α)S,

where α ∈ (0,1) is the damping factor [5] and S is the doubly stochastic matrix of the form S_ij = 1/N, i, j = 1,...,N.

In the pioneering paper of S. Brin and L. Page [5], the following intuitive justification is given to the matrix G: "We assume there is a random surfer who is given a web page at random and keeps clicking on links, never hitting 'back' but eventually gets bored and starts on another random page. The probability that the random surfer visits a page is its PageRank. And, the (1 − α) damping factor is the probability at each page the random surfer will get bored and request another random page."

The principal eigenvector ȳ = (ȳ_1,...,ȳ_N) of the matrix G is unique according to the Perron-Frobenius theorem for positive matrices [0]. Moreover, the well-known power method y_{k+1} = G y_k converges to ȳ for each starting value y_0 satisfying simplex constraints, which allows one to work with high-dimensional cases.

However, in the work of B. Polyak and A. Timonina (see [5]), it is shown that the solution ȳ of the modified problem Gȳ = ȳ can be far enough from the original x* of the system P x* = x*. Moreover, for specific network structures it may happen that the eigenvector of the matrix G stops distinguishing between high-ranked pages [5], which build the core of search engine results: if a large amount of high-ranked pages obtains the same score, the ranking becomes ineffective, making the difference between high-ranked and low-ranked pages almost binary. Therefore, the approach with the perturbed matrix G works well only in the presence of a small number of high-ranked pages, which was true when the initial algorithm [5] was developed, but which currently needs reconsideration, as the amount of information on the web continues to grow and the number of users experienced in the art of web research is increasing.

In the work of B. Polyak and A. Timonina (see [5]), the authors propose an l1-regularization method which avoids low-ranked pages:

    min_{x ∈ Σ_N}  ‖Px − x‖ + ε‖x‖,   (2)

where the parameter ε regulates the number of high-ranked pages one would like to consider and where Σ_N denotes the standard simplex on R^N, i.e. Σ_N = {ν ∈ R^N : Σ_{i=1}^N ν_i = 1, ν_i ≥ 0}. This optimization problem can be solved via the coordinate descent method, analogous to the Gauss-Seidel algorithm for solving linear equations [6,5,7], which allows one to enhance the efficiency of numerical ranking and to improve accuracy by differentiating between high-importance pages.
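Since the power iteration on the Google matrix is the computational workhorse referred to above, a minimal sketch may help fix notation. The snippet below is illustrative only and is not the author's implementation: it assumes a small dense column-stochastic matrix P stored as a NumPy array, and it never forms the rank-one matrix S explicitly.

    import numpy as np

    def google_matrix_power_method(P, alpha=0.85, tol=1e-10, max_iter=1000):
        """Power iteration y_{k+1} = G y_k for G = alpha*P + (1-alpha)*S.

        P is assumed column-stochastic (columns sum to one); S is the doubly
        stochastic matrix with all entries equal to 1/N.
        """
        N = P.shape[0]
        y = np.full(N, 1.0 / N)              # start on the standard simplex
        for _ in range(max_iter):
            # G @ y without forming S: S @ y = (1/N) * sum(y) * ones
            y_next = alpha * (P @ y) + (1.0 - alpha) * np.sum(y) / N
            y_next /= y_next.sum()           # guard against round-off drift
            if np.linalg.norm(y_next - y, 1) < tol:
                return y_next
            y = y_next
        return y

    # toy usage: a 3-page network
    P = np.array([[0.0, 0.5, 1.0],
                  [0.5, 0.0, 0.0],
                  [0.5, 0.5, 0.0]])
    print(google_matrix_power_method(P))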

However, the optimization problem (2) has a drawback: it is not robust to variations in the Internet structure, while the World Wide Web is, clearly, changing over time: the amount of information on the web is growing, new pages appear, some web-sites (old or spam ones) disappear. This is particularly important due to the fact that PageRank computation takes such a significant amount of time (Google re-evaluation of PageRank takes a month) that around 2-3% of new web-pages open during this time, with new links influencing the transportation matrix P.

In the work of A. Juditsky and B. Polyak [3], a robust reformulation of the PageRank model is proposed for the case when links in the network structure may vary, i.e. some links may appear or disappear, influencing the transition matrix P:

    min_{x ∈ Σ_N}  max_{ξ ∈ P}  ‖(P + ξ)x − x‖,   (3)

where ‖·‖ is the l1- or l2-norm, P is a set of allowed perturbations such that the matrix P + ξ stays column-stochastic, and Σ_N is the standard simplex on R^N. The optimization problem (3) can be bounded from above by a convex non-smooth optimization problem of the following type (see [3]):

    min_{x ∈ Σ_N}  ‖Px − x‖ + ε‖x‖,   (4)

where ε is a parameter dependent on the perturbation set P.

However, the formulation (3) lacks some realism: though the matrix P + ξ is uncertain in this case, the dimension of the matrix stays the same, meaning that no pages may appear or disappear. In reality, though, (i) the number of web-pages in the Internet grows and (ii) one does not explicitly know how the structure of the Internet changes. In this article, we allow the network to vary not only in links but also in the number of nodes, considering, therefore, growing network structures with transition matrix Q(P):

    min_{x ∈ Σ_{N+M}}  max_{Q(P) ∈ Ξ}  ‖Q(P)x − x‖,   (5)

where Q(P) depends on the matrix P. Further, Ξ is the perturbation set adjusted for the network growth so that the matrix Q stays column-stochastic.

In Section 2 we propose l1-, l2- and Frobenius-norm robust reformulations for PageRank of the uncertain network with fixed growth rate (i.e. the expected number of pages which appear in the future is fixed) and we bound these reformulations from above by convex but non-smooth optimization problems. In Section 3 we demonstrate that formulations robust to network growth impose upper bounds on the formulations robust to perturbations in links, i.e. on the formulations proposed by A. Juditsky and B. Polyak in the work [3]. In Section 4 we study properties of the chosen perturbation set, which helps to shed some light on the parameter ε of problems (2) and (4). In Section 5 we consider smoothening techniques for the subgradient method in order to solve the formulated optimization problems in middle-dimensional cases. Further in the article, we address high-dimensional algorithms by modelling PageRank via the use of the multinomial distribution and, afterwards, we conclude, as well as state directions for future research.

2 Problem Formulation

Consider a column-stochastic non-negative transportation matrix P ∈ R^{N×N}, which satisfies P_ij ≥ 0, i, j = 1,...,N and Σ_{i=1}^N P_ij = 1, j = 1,...,N by definition. The well-known Perron-Frobenius theorem [0] states that there exists a dominant (or principal) eigenvector x* ∈ Σ_N, where we denote by Σ_N = {v ∈ R^N : Σ_{i=1}^N v_i = 1, v_i ≥ 0} the standard simplex on R^N:

    P x* = x*.   (6)

For each column j of the matrix P, the entry P_ij, i = 1,...,N corresponds to the transition probability from the node j to the node i of the network (i.e. the probability to move from the web-page j to the web-page i in the Internet). In general, the matrix P is huge-dimensional and very sparse, as the number of links outgoing from every node is much smaller than the total number of nodes in the network: the average number of links outgoing from a web-page is around 10 for the whole web (see [3]), while the total number of pages in the Internet is ca. 10^9.

The dominant vector x* describes the stationary distribution on the network structure: each element of the vector x* denotes the rank of the corresponding node. For the Internet, the ranks can be seen as the time which an average user spends on the particular web-site. The dominant vector x* is not robust and may be highly vulnerable to small changes in the matrix P. A. Juditsky and B. Polyak in their work [3] reformulated the problem in the robust optimization form, allowing the matrix P to vary according to the law P + ξ under some conditions on ξ. For this matrix, they found the stationary distribution x̂ robust to variations in links and, therefore, stable with respect to small changes of P. However, in their work, the size of the matrix ξ was assumed to be the same as that of the matrix P, i.e. N×N, meaning that growth in the number of nodes was not considered in the network (further, this formulation is referred to as a fixed-size model).

In reality, though, changes of the matrix P happen not only in links, but also in the number of nodes. The number of domains being registered per minute corresponds to a 2-3% growth of the Internet per month (http://www.internetlivestats.com/total-number-of-websites/). That is why, in this article we consider the following changes of the matrix P:

    Q = [ P + ξ   ζ ]
        [   ψ     χ ] ,   (7)

where P is the column-stochastic transportation matrix describing the current state of the network with N pages; ξ is the matrix describing variations in links of the initial network; ζ, ψ and χ are matrices describing links to and from the M new pages which may appear in the future (ξ is of the size N×N, ψ is of the size M×N, ζ is of the size N×M and χ is of the size M×M). In reality, M ≈ 0.03·N per month.

As the matrices P and Q must be column-stochastic, ξ, ζ, ψ and χ must satisfy the following properties:

    ξ_ij ≥ −P_ij,  i, j = 1,...,N;      ψ_ij ≥ 0,  i = 1,...,M, j = 1,...,N;     (8)
    ζ_ij ≥ 0,  i = 1,...,N, j = 1,...,M;      χ_ij ≥ 0,  i, j = 1,...,M,

saying that all elements of the matrix Q are non-negative. In addition, the following properties must hold:

    Σ_{i=1}^N (P_ij + ξ_ij) + Σ_{i=1}^M ψ_ij = 1,   j = 1,...,N,
    Σ_{i=1}^N ζ_ij + Σ_{i=1}^M χ_ij = 1,   j = 1,...,M,

or, equivalently,

    Σ_{i=1}^N ξ_ij + Σ_{i=1}^M ψ_ij = 0,   j = 1,...,N,                          (9)
    Σ_{i=1}^N ζ_ij + Σ_{i=1}^M χ_ij = 1,   j = 1,...,M,

saying that every column of the matrix Q sums up to 1 (notice that here we use the fact that P is itself column-stochastic, i.e. Σ_{i=1}^N P_ij = 1, j = 1,...,N).

Similar to the work of A. Juditsky and B. Polyak [3], the function

    φ(x) = max_{Q ∈ Ξ} ‖Qx − x‖,

where ‖·‖ is some norm, can be seen as a measure of goodness of a vector x as a common dominant eigenvector of the family Ξ, where Ξ stands for the set of perturbed stochastic matrices of the form (7) under conditions (8) and (9).

Further, let us denote by x = (x_1; x_2) a feasible point, which is a candidate for the common dominant eigenvector of the family Ξ. Let x_1 be of the size N and x_2 be of the size M. Hence, x is of the size N + M. Notice that the vector x must belong to the standard simplex Σ_{N+M} = {v ∈ R^{N+M} : Σ_{i=1}^{N+M} v_i = 1, v_i ≥ 0}, which means that all entries of x_1 and x_2 are non-negative and Σ_{i=1}^N (x_1)_i + Σ_{j=1}^M (x_2)_j = 1 (i.e. ‖x_1‖_1 + ‖x_2‖_1 = 1).

We say that the vector x̂ is a robust solution of the eigenvector problem on Ξ if

    x̂ ∈ Arg min_{x ∈ Σ_{N+M}} max_{Q ∈ Ξ} ‖Qx − x‖,   (10)

where ‖·‖ is some norm (further, we consider l1-, l2- and Frobenius-norm robust formulations). A reasonable choice of the uncertainty set Ξ would impose some bounds on the column-wise norms of the matrices ξ, ζ, ψ and χ, meaning that the perturbation in links of the current and future states of the network would be bounded, i.e. ‖[ξ]_j‖ ≤ ε_ξ, ‖[ψ]_j‖ ≤ ε_ψ, ‖[ζ]_j‖ ≤ ε_ζ and ‖[χ]_j‖ ≤ ε_χ, where [·]_j denotes the j-th column of a matrix. Moreover, the total uncertainty budget for the matrices ξ, ζ, ψ and χ could be fixed (see [3]): this would imply constraints on the overall possible perturbations of the transportation matrix P.

By solving the optimization problem (10), one would protect the rank vector x̂ against high fluctuation in case of link or node perturbations influencing the transportation matrix P. Currently, the Google scores vector is being updated once per month without accounting for the 2-3% growth rate during this month. By solving an optimization problem of the type (10) one could decrease the number of updates of the score vector and, therefore, reduce the underlying personnel and machinery costs.

In the following sections, we formulate robust optimization problems for finding the stationary distribution of the extended network Q for the case of l1-, l2- and Frobenius-norms, and we bound these problems from above by convex but non-smooth optimization problems.
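To make the block structure (7) and the stochasticity conditions (8)-(9) concrete, the sketch below assembles a perturbed matrix Q from small random blocks and verifies the column sums numerically. It is a toy illustration under the stated assumptions (dense NumPy arrays and a crude rescaling to enforce column-stochasticity), not part of the paper's method.

    import numpy as np

    rng = np.random.default_rng(0)
    N, M = 5, 2                                  # current pages and expected new pages

    # a random column-stochastic current matrix P
    P = rng.random((N, N))
    P /= P.sum(axis=0, keepdims=True)

    # non-negative blocks: links towards new pages (psi), from new pages (zeta),
    # and between new pages (chi)
    psi  = 0.05 * rng.random((M, N))             # psi_ij >= 0
    zeta = rng.random((N, M))                    # zeta_ij >= 0
    chi  = rng.random((M, M))                    # chi_ij >= 0

    # xi removes from each old column exactly the mass given to the new pages,
    # so that sum_i xi_ij + sum_i psi_ij = 0 (condition (9)) and xi_ij >= -P_ij
    xi = -P * psi.sum(axis=0)

    # rescale the new-page columns so that each column of [zeta; chi] sums to one
    col = np.vstack([zeta, chi]).sum(axis=0)
    zeta /= col
    chi  /= col

    Q = np.block([[P + xi, zeta],
                  [psi,    chi]])
    print(np.allclose(Q.sum(axis=0), 1.0), (Q >= 0).all())   # column-stochastic and non-negative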

2.1 l1-norm formulation

Let us consider ‖·‖ = ‖·‖_1. We say that the vector x̂_{l1} is the l1-robust solution of the eigenvector problem on the set of perturbed matrices Ξ_{l1} if

    x̂_{l1} ∈ Arg min_{x ∈ Σ_{N+M}} max_Q { ‖Qx − x‖_1 :  Q is column-stochastic,
              ‖[ξ]_j‖_1 ≤ ε_ξ, j = 1,...,N,  and  Σ_{i,j} |ξ_ij| ≤ ε̂_ξ,
              ‖[ψ]_j‖_1 ≤ ε_ψ, j = 1,...,N,  and  Σ_{i,j} |ψ_ij| ≤ ε̂_ψ,          (11)
              ‖[ζ]_j‖_1 ≤ ε_ζ, j = 1,...,M,  and  Σ_{i,j} |ζ_ij| ≤ ε̂_ζ,
              ‖[χ]_j‖_1 ≤ ε_χ, j = 1,...,M,  and  Σ_{i,j} |χ_ij| ≤ ε̂_χ },

where we have l1-norm constraints on each column [·]_j of the matrices ξ, ζ, ψ and χ, as well as bounds on the sum of absolute values of all elements of these matrices. The column-wise constraints bound the perturbation in links of the current and future states of the network, while the total uncertainty budget for the matrices ξ, ζ, ψ and χ is fixed by the parameters ε̂_ξ, ε̂_ψ, ε̂_ζ and ε̂_χ.

Notice that the problem (11) is not convex-concave, meaning that the function

    φ_1(x) = max_{Q ∈ Ξ_{l1}} ‖Qx − x‖_1

cannot be computed efficiently.

Importantly, the formulation (11) does not discourage sparsity in transition probabilities. Consider, for example, the uncertainty matrix ψ, which describes links from existing pages to new ones. All elements of the matrix ψ are non-negative. If we reduce one positive element of the j-th column of this matrix by a small enough δ, the norm ‖[ψ]_j‖_1 and the sum Σ_{i,j} ψ_ij decrease by this δ, regardless of the value of the element we decrease. This means that the l1-norm formulation does not make a preference as to which transition probabilities to decrease in order to satisfy the constraints. By this, many transition probabilities of the matrix Q can end up being zeros. In contrast, for the l2- and Frobenius-norm formulations, which we consider in the next sections, the reduction of larger terms of the matrix ψ by δ results in a much greater reduction of the norms than doing so with smaller terms. Therefore, the l2- and Frobenius-norm formulations discourage sparsity by yielding diminishing reductions for elements closer to zero.

Proposition 1 The optimal value φ_1(x̂_{l1}) of the non-convex optimization problem (11) can be bounded from above by the optimal value of the following convex optimization problem with ε_1 = ε_ξ + ε_ψ and ε_{l1} = ε_ζ + ε_χ + 1:

    φ_1(x̂_{l1}) ≤ min_{x ∈ Σ_{N+M}}  ‖Px_1 − x_1‖_1 + ε_1 ‖x_1‖_a + ε_{l1} ‖x_2‖_b,   (12)

where

    ‖x_1‖_a = min_{λ+µ=x_1} { ‖λ‖_1 + [(ε̂_ξ + ε̂_ψ)/(ε_ξ + ε_ψ)] ‖µ‖_∞ },
    ‖x_2‖_b = min_{λ+µ=x_2} { ‖λ‖_1 + [(ε̂_ζ + ε̂_χ + 1)/(ε_ζ + ε_χ + 1)] ‖µ‖_∞ }.

Proof See the Appendix 9.1 for the proof.

Notice that ‖x_2‖_b = 0 if there are no new pages in the network (i.e. if M = 0). In this case, the optimization problem (12) completely coincides with the l1-reformulation proposed by A. Juditsky and B. Polyak in the work [3].

2.2 l2-norm formulation

Let us consider ‖·‖ = ‖·‖_2. We say that the vector x̂_{l2} is the l2-robust solution of the eigenvector problem on the set of perturbed matrices Ξ_{l2} if

    x̂_{l2} ∈ Arg min_{x ∈ Σ_{N+M}} max_Q { ‖Qx − x‖_2 :  Q is column-stochastic,
              ‖[ξ]_j‖_2 ≤ ε_ξ, j = 1,...,N,  and  ‖ξ‖_F ≤ ε̂_ξ,
              ‖[ψ]_j‖_2 ≤ ε_ψ, j = 1,...,N,  and  ‖ψ‖_F ≤ ε̂_ψ,                  (13)
              ‖[ζ]_j‖_2 ≤ ε_ζ, j = 1,...,M,  and  ‖ζ‖_F ≤ ε̂_ζ,
              ‖[χ]_j‖_2 ≤ ε_χ, j = 1,...,M,  and  ‖χ‖_F ≤ ε̂_χ },

where we have constraints on the j-th column [·]_j of the matrices ξ, ζ, ψ and χ, as well as second-order constraints on the matrices themselves. Notice that the problem (13) is not convex-concave, meaning that the function

    φ_2(x) = max_{Q ∈ Ξ_{l2}} ‖Qx − x‖_2

cannot be computed efficiently.

Proposition 2 The optimal value φ_2(x̂_{l2}) of the non-convex optimization problem (13) can be bounded from above by the optimal value of the following convex optimization problem with ε_2 = ε_ξ + ε_ψ and ε_{l2} = ε_ζ + ε_χ + 1:

    φ_2(x̂_{l2}) ≤ min_{x ∈ Σ_{N+M}}  ‖Px_1 − x_1‖_2 + ε_2 ‖x_1‖_c + ε_{l2} ‖x_2‖_d,   (14)

where

    ‖x_1‖_c = min_{λ+µ=x_1} { ‖λ‖_1 + [(ε̂_ξ + ε̂_ψ)/(ε_ξ + ε_ψ)] ‖µ‖_2 },
    ‖x_2‖_d = min_{λ+µ=x_2} { ‖λ‖_1 + [(ε̂_ζ + ε̂_χ + 1)/(ε_ζ + ε_χ + 1)] ‖µ‖_2 }.

Proof See the Appendix 9.2 for the proof.

2.3 Frobenius-norm formulation

Let us further consider ‖·‖ = ‖·‖_2. We say that the vector x̂_F is the robust solution of the eigenvector problem on the set of perturbed matrices Ξ_F if

    x̂_F ∈ Arg min_{x ∈ Σ_{N+M}} max_Q { ‖Qx − x‖_2 :  Q is column-stochastic,
              ‖ξ‖_F ≤ ε_ξ,  ‖ψ‖_F ≤ ε_ψ,  ‖ζ‖_F ≤ ε_ζ,  ‖χ‖_F ≤ ε_χ },          (15)

where ‖·‖_F is the Frobenius norm. Notice that the problem (15) is not convex-concave, meaning that the function φ_3(x) = max_{Q∈Ξ_F} ‖Qx − x‖_2 cannot be computed efficiently. Notice also that the formulation (15) is an upper bound for the l2-formulation (13).

Proposition 3 The optimal value φ_3(x̂_F) of the non-convex optimization problem (15) can be bounded from above by the optimal value of the following convex optimization problem:

    φ_3(x̂_F) ≤ min_{x ∈ Σ_{N+M}}  ‖Px_1 − x_1‖_2 + ε_2 ‖x_1‖_2 + ε_F ‖x_2‖_2,   (16)

where ε_2 = ε_ξ + ε_ψ and ε_F = ε_ζ + ε_χ + 1.

Proof See the Appendix 9.3 for the proof.

Notice that the parameter ε_1 (respectively ε_2) is equal to ε_ξ + ε_ψ and does not depend on the problem formulation. This parameter describes the total uncertainty in links of current pages, implied by the change in already existing links (i.e. P + ξ) and by the uncertainty ψ corresponding to newly appearing links from existing to new pages. However, the parameters ε_{l1} and ε_{l2} = ε_F depend on the problem formulation. For the l1-norm formulation (12), the parameter ε_{l1} denotes the total uncertainty in links between new pages, as well as from them; together with the norm ‖·‖_b it gives more weight to the ranks of new pages as their number M grows. Differently, for the l2- and Frobenius-norm formulations (14) and (16), the parameters ε_{l2} = ε_F are equal to ε_ζ + ε_χ + 1 and do not show explicit dependency on the number of new pages.

Let us denote by ε̄_2 the following parameter:

    ε̄_2 = { ε_{l1},            for the l1-norm formulation (12);
            ε_{l2} = ε_F,      for the l2- and Frobenius-norm formulations (14) and (16). }

In this case, optimization problems (12), (14) and (16) can be written in the following general form:

    x* ∈ Arg min_{x ∈ Σ_{N+M}}  ‖Px_1 − x_1‖ + ε_1 ‖x_1‖' + ε̄_2 ‖x_2‖'',   (17)

where ‖·‖, ‖·‖' and ‖·‖'' correspond to the norms from the formulations above. For the l1-norm formulation (12), ‖·‖ is the l1-norm, ‖·‖' is ‖·‖_a and ‖·‖'' is ‖·‖_b. For the l2-norm formulation (14), ‖·‖ is the l2-norm, ‖·‖' is ‖·‖_c and ‖·‖'' is ‖·‖_d. For the Frobenius-norm formulation (16), ‖·‖, ‖·‖' and ‖·‖'' all denote l2-norms. We refer to the vector x* as the computable robust dominant eigenvector of the corresponding family of perturbed matrices Ξ_{l1}, Ξ_{l2} or Ξ_F.

Further in the article, and before we proceed with the numerical solution of the problem (17), we show that the formulation (17) provides an upper bound on the formulation of A. Juditsky and B. Polyak in the work [3]. Moreover, we discuss the choice of the perturbation set defined by the parameters ε_ξ, ε_ψ, ε_ζ and ε_χ.

3 Comparison to the model with a fixed-size network

Optimization problems (11), (13) and (15) account for uncertainties in links between current and future (not yet existing) pages: these uncertainties are incorporated via the matrices ξ, ψ, ζ and χ, whose worst-case realization gives us the opportunity to compute the robust PageRank. These optimization problems differ from the robust formulations corresponding to the fixed-size network model proposed by A. Juditsky and B. Polyak in the work [3], who studied uncertainties implied by the matrix P + ξ, describing variations in links of existing pages with constant network size. In this section, we study the relationship between the fixed-size and the growing network models. For this, we compute a lower bound on the norm max_{Q∈Ξ} ‖Qx − x‖ for the case of the l1-, l2- and Frobenius-norms and we prove that the growing network model imposes an upper bound for the fixed-size network model.

Theorem 1 (Upper Bounds) Optimization problems (11), (13) and (15) under conditions (8) and (9) impose upper bounds on the fixed-size network model in the following sense:

    φ_1(x) = max_{Q∈Ξ_{l1}} ‖Qx − x‖_1 ≥ max { ‖(P + ξ)x_1 − x_1‖_1 : 1ᵀ[ξ]_j = 0, ‖[ξ]_j‖_1 ≤ ε_ξ, Σ_{i,j}|ξ_ij| ≤ ε̂_ξ },   (18)

    φ_2(x) = max_{Q∈Ξ_{l2}} ‖Qx − x‖_2 ≥ max { ‖(P + ξ)x_1 − x_1‖_2 : 1ᵀ[ξ]_j = 0, ‖[ξ]_j‖_2 ≤ ε_ξ, ‖ξ‖_F ≤ ε̂_ξ },   (19)

    φ_3(x) = max_{Q∈Ξ_F} ‖Qx − x‖_2 ≥ max { ‖(P + ξ)x_1 − x_1‖_2 : 1ᵀ[ξ]_j = 0, ‖ξ‖_F ≤ ε_ξ }.   (20)

Proof Let u = (u_1; u_2), where u_1 is a vector of length N and u_2 is a vector of length M. For the case of the l1-norm, the following equality holds:

    ‖Qx − x‖_1 = ‖(P + ξ − I_N)x_1 + ζx_2‖_1 + ‖ψx_1 + (χ − I_M)x_2‖_1,

where I_N and I_M are identity matrices of the size N and M correspondingly. Using the norm duality, i.e.

    ‖(P + ξ − I_N)x_1 + ζx_2‖_1 = max { u_1ᵀ[(P + ξ − I_N)x_1 + ζx_2] : u_1 ∈ R^N, ‖u_1‖_∞ ≤ 1 },
    ‖ψx_1 + (χ − I_M)x_2‖_1 = max { u_2ᵀ[ψx_1 + (χ − I_M)x_2] : u_2 ∈ R^M, ‖u_2‖_∞ ≤ 1 },

we choose a feasible u_1 = ū_1 such that ‖(P + ξ)x_1 − x_1‖_1 = ū_1ᵀ(P + ξ − I_N)x_1, ū_1 ∈ R^N, ‖ū_1‖_∞ ≤ 1, and we fix u_2 = 1, where 1 is the vector of all ones. By this, we compute a lower bound for the norm ‖Qx − x‖_1:

    ‖Qx − x‖_1 ≥ ū_1ᵀ[(P + ξ − I_N)x_1 + ζx_2] + 1ᵀ[ψx_1 + (χ − I_M)x_2]
              = ‖(P + ξ)x_1 − x_1‖_1 + ū_1ᵀζx_2 + 1ᵀψx_1 + 1ᵀχx_2 − 1ᵀx_2
              = ‖(P + ξ)x_1 − x_1‖_1 + 1ᵀψx_1 + ū_1ᵀζx_2,

where the final equality holds due to the equalities (21) and (22), with 1ᵀψx_1 ≥ 0 and ū_1ᵀζx_2 ≥ 0:

    1ᵀψ = −1ᵀξ;          (21)
    1ᵀχ = 1ᵀ − 1ᵀζ.      (22)

Notice that equalities (21) and (22) hold due to the column-stochasticity of the matrix Q (i.e. due to conditions (8) and (9)). Therefore, we obtain a lower bound for the function φ_1(x), i.e.

    φ_1(x) ≥ ‖(P + ξ)x_1 − x_1‖_1 + 1ᵀψx_1 + ū_1ᵀζx_2,   ∀ (ξ, ψ, ζ) ∈ Ξ_{l1}.

As this bound holds for all ξ, ψ, ζ in the perturbation set, we can set ζ = 0 and ψ = 0 without any loss of generality. Moreover, by setting ψ = 0 we guarantee that the conditions of A. Juditsky and B. Polyak in the work [3] are satisfied, i.e. that the matrix P + ξ is column-stochastic. Therefore, we guarantee that the l1-norm formulation of A. Juditsky and B. Polyak in the work [3] is a lower bound for our optimization problem, i.e. the bound (18) follows.

Analogically, one can show that the same type of bound holds for the case of the l2- and

Frobenius-norm robust formulations. Notice that the following equality holds due to the duality of the l2-norm:

    ‖Qx − x‖_2² = ‖(P + ξ − I_N)x_1 + ζx_2‖_2² + ‖ψx_1 + (χ − I_M)x_2‖_2²,

with

    ‖(P + ξ − I_N)x_1 + ζx_2‖_2 = max { u_1ᵀ[(P + ξ − I_N)x_1 + ζx_2] : u_1 ∈ R^N, ‖u_1‖_2 ≤ 1 },
    ‖ψx_1 + (χ − I_M)x_2‖_2 = max { u_2ᵀ[ψx_1 + (χ − I_M)x_2] : u_2 ∈ R^M, ‖u_2‖_2 ≤ 1 }.

To compute a lower bound for the norm ‖Qx − x‖_2, we choose a feasible u_1 = ū_1 such that ‖(P + ξ)x_1 − x_1‖_2 = ū_1ᵀ(P + ξ − I_N)x_1, ū_1 ∈ R^N, ‖ū_1‖_2 ≤ 1, which exists due to the duality of the norm, and we fix u_2 = 0 (i.e. the zero-vector of length M). In this case, we can write the following:

    ‖Qx − x‖_2 ≥ ū_1ᵀ[(P + ξ − I_N)x_1 + ζx_2] = ‖(P + ξ)x_1 − x_1‖_2 + ū_1ᵀζx_2,

which leads to the following bounds for the l2- and Frobenius-norm formulations correspondingly:

    max_{Q∈Ξ_{l2}} ‖Qx − x‖_2 ≥ ‖(P + ξ)x_1 − x_1‖_2 + ū_1ᵀζx_2,   ∀ ξ, ζ ∈ Ξ_{l2},
    max_{Q∈Ξ_F} ‖Qx − x‖_2 ≥ ‖(P + ξ)x_1 − x_1‖_2 + ū_1ᵀζx_2,   ∀ ξ, ζ ∈ Ξ_F.

Analogically to the l1-norm formulation, we can set ζ = 0 without any loss of generality, as the bounds hold for all ξ, ζ in the perturbation set. Therefore, one can guarantee that the l2- and the Frobenius-norm formulations of A. Juditsky and B. Polyak in the work [3] impose lower bounds for our optimization problems, i.e. the bounds (19) and (20) hold.

Now, consider the robust reformulations (12), (14) and (16). In case M = 0 (i.e. if there is no growth in the network), these reformulations fully coincide with the upper bounds proposed by A. Juditsky and B. Polyak in the work [3]. In general, for M > 0, the robust reformulations (12), (14) and (16) differ from the bounds imposed by the fixed-size network. However, if there are no links from old to new web-pages, the current links are perturbed only by ξ, as ψ = 0. Therefore, Q is reducible and the difference between the fixed-size and the growing network models is fully imposed by the links from new pages. If a random user starting from some new page among i = N+1,...,N+M keeps clicking on links, he eventually ends up in one of the pages i = 1,...,N, as ζ ≠ 0. However, he is not able to get back to the starting page by clicking links, as ψ = 0. Therefore, the ranks x_2 should a priori be lower than x_1. Moreover, one cannot say which of the pages i = N+1,...,N+M have higher ranks and which have lower ones, as the links are actually unknown to the search engine before the Internet structure is updated. Therefore, without loss of generality, the ranks x_2 could be assumed to be zeros and the problem could be solved numerically using the algorithms proposed in [3]. Therefore, further on we focus on the case of irreducible transition matrices Q, which imply ε_ψ > 0, ε_ζ > 0, ε_χ > 0.

4 Bounds on the perturbation set Ξ

Consider the optimization problems (12), (14) and (16) in the general form (17):

    x* ∈ Arg min_{x ∈ Σ_{N+M}}  ‖Px_1 − x_1‖ + ε_1 ‖x_1‖' + ε̄_2 ‖x_2‖''.

Notice that ε_ξ + ε_ψ is denoted by ε_1, while ε̄_2 is equal to ε_{l1} for the l1-norm robust formulation (12) and to ε_{l2} = ε_F for both the l2- and Frobenius-norm formulations (14) and (16).

The size of the optimization problem (17) can be reduced, as the problem can be subdivided into two separate smaller-size optimization problems.

Lemma 1 Let ỹ_1 and ỹ_2 be solutions of the optimization problems (23) and (24):

    ỹ_1 ∈ Arg min_{y_1 ∈ Σ_N}  ‖Py_1 − y_1‖ + ε_1 ‖y_1‖',   (23)
    ỹ_2 ∈ Arg min_{y_2 ∈ Σ_M}  ε̄_2 ‖y_2‖'',                 (24)

where Σ_N = {ν ∈ R^N : Σ_{i=1}^N ν_i = 1, ν_i ≥ 0} and Σ_M = {ν ∈ R^M : Σ_{i=1}^M ν_i = 1, ν_i ≥ 0}. Then the following holds:

1. If ‖Pỹ_1 − ỹ_1‖ + ε_1‖ỹ_1‖' < ε̄_2‖ỹ_2‖'', then the optimal solution to the problem (17) is x* = (x_1*; x_2*) with x_1* = ỹ_1 and x_2* = 0;
2. If ‖Pỹ_1 − ỹ_1‖ + ε_1‖ỹ_1‖' > ε̄_2‖ỹ_2‖'', then the optimal solution to the problem (17) is x* = (x_1*; x_2*) with x_1* = 0 and x_2* = ỹ_2;
3. If ‖Pỹ_1 − ỹ_1‖ + ε_1‖ỹ_1‖' = ε̄_2‖ỹ_2‖'', then there are infinitely many optimal solutions to the problem (17).

Proof Consider the optimization problem (17). Based on the fact that ‖x_1‖_1 + ‖x_2‖_1 = 1 and x_1 ≥ 0, x_2 ≥ 0, let us make the following change of variables with s ∈ [0,1]:

    x_1 = s·y_1,   x_2 = (1 − s)·y_2,   (25)

where ‖y_1‖_1 = 1, ‖y_2‖_1 = 1 and y_1 ≥ 0, y_2 ≥ 0. Further, ‖x_1‖_1 = s and ‖x_2‖_1 = 1 − s, s ∈ [0,1]. Hence, the simplex constraints on x = (x_1; x_2) are satisfied. Therefore, the optimization problem (17) can be rewritten as:

    (ỹ, s̃) ∈ Arg min_{y_1∈Σ_N, y_2∈Σ_M, s∈[0,1]}  s·( ‖Py_1 − y_1‖ + ε_1‖y_1‖' ) + (1 − s)·ε̄_2‖y_2‖'',   (26)

where ỹ = (ỹ_1; ỹ_2) and s̃ denote the optimal solution of the optimization problem (26). Furthermore, ỹ_1 and ỹ_2 are independent of each other. At optimality of the problem (26), s̃ = 1 if ‖Pỹ_1 − ỹ_1‖ + ε_1‖ỹ_1‖' < ε̄_2‖ỹ_2‖'' and s̃ = 0 if ‖Pỹ_1 − ỹ_1‖ + ε_1‖ỹ_1‖' > ε̄_2‖ỹ_2‖''. In case ‖Pỹ_1 − ỹ_1‖ + ε_1‖ỹ_1‖' = ε̄_2‖ỹ_2‖'', s̃ can take any value in the interval [0,1]. Hence, the statement of the Lemma follows.

By comparing the optimal values of the problems (23) and (24), one can determine the optimal solution s̃ and, therefore, one can get the optimal solution of the problem (17) via the system (25). In general, one would like to avoid the optimal solution x_1* = 0, x_2* > 0, as it would mean that the uncertainty about current pages is larger than the uncertainty about future (not yet existent) pages. For this, one needs to guarantee that the optimal value of the problem (23) is not greater than the optimal value of the problem (24) (i.e. one would like to avoid point 2 of the Lemma 1).

Further, we consider the l1-, l2- and Frobenius-norm formulations (12), (14) and (16) and we explicitly solve the optimization problem (24) for each of these norms. Moreover, we state conditions on the parameters ε_ξ, ε_ψ, ε_ζ and ε_χ which guarantee that points 1 or 3 of the Lemma 1 are satisfied.

Statement 1 Consider the robust reformulation (12) and apply the proposed change of variables (25). In this case, ‖y_2‖_b = min_{λ+µ=y_2} { ‖λ‖_1 + [(ε̂_ζ + ε̂_χ + 1)/(ε_ζ + ε_χ + 1)] ‖µ‖_∞ } and the following holds:

    min_{y_2∈Σ_M} ‖y_2‖_b = 1,                                     if (ε̂_ζ + ε̂_χ + 1)/(ε_ζ + ε_χ + 1) ≥ M,
    min_{y_2∈Σ_M} ‖y_2‖_b = (ε̂_ζ + ε̂_χ + 1)/(M(ε_ζ + ε_χ + 1)),    if (ε̂_ζ + ε̂_χ + 1)/(ε_ζ + ε_χ + 1) < M.    (27)

Proof See the Appendix 9.4 for the proof.

Statement 2 Consider the robust reformulation (14) and apply the proposed change of variables (25). In this case, ‖y_2‖_d = min_{λ+µ=y_2} { ‖λ‖_1 + [(ε̂_ζ + ε̂_χ + 1)/(ε_ζ + ε_χ + 1)] ‖µ‖_2 } and the following holds:

    min_{y_2∈Σ_M} ‖y_2‖_d = 1,                                     if (ε̂_ζ + ε̂_χ + 1)/(ε_ζ + ε_χ + 1) ≥ √M,
    min_{y_2∈Σ_M} ‖y_2‖_d = (ε̂_ζ + ε̂_χ + 1)/(√M(ε_ζ + ε_χ + 1)),   if (ε̂_ζ + ε̂_χ + 1)/(ε_ζ + ε_χ + 1) < √M.   (28)

Proof See the Appendix 9.5 for the proof.

Statement 3 Consider the robust reformulation (16) and apply the proposed change of variables (25). In this case, ‖y_2‖'' = ‖y_2‖_2 and the following holds:

    min_{y_2∈Σ_M} ‖y_2‖_2 = 1/√M.   (29)

Proof The proof obviously follows from the direct minimization of the l2-norm over the simplex.

Statements 1, 2 and 3 impose conditions on the optimal value of the problem (23), under which points 1 or 3 of the Lemma 1 are satisfied. These conditions vary for the l1-, l2- and Frobenius-norm formulations and can be stated in the following way.

For the l1-case:

    min_{y∈Σ_N} ‖Py − y‖_1 + ε_1‖y‖_a  ≤  min { ε_ζ + ε_χ + 1,  (ε̂_ζ + ε̂_χ + 1)/M }.   (30)

For the l2-case:

    min_{y∈Σ_N} ‖Py − y‖_2 + ε_2‖y‖_c  ≤  min { ε_ζ + ε_χ + 1,  (ε̂_ζ + ε̂_χ + 1)/√M }.   (31)

For the Frobenius-norm:

    min_{y∈Σ_N} ‖Py − y‖_2 + ε_2‖y‖_2  ≤  (ε_ζ + ε_χ + 1)/√M.   (32)

Notice that ε_1 = ε_2 = ε_ξ + ε_ψ.

Theorem 2 (Sufficient Conditions) Consider statements (30), (31) and (32).

1. Condition (30) is sufficiently satisfied if ε_ξ + ε_ψ ≤ 1.
2. Condition (31) is sufficiently satisfied if ε_ξ + ε_ψ ≤ 1 and ε̂_ζ + ε̂_χ + 1 ≥ √M(ε_ξ + ε_ψ).
3. Condition (32) is sufficiently satisfied if ε_ζ + ε_χ + 1 ≥ √M(ε_ξ + ε_ψ).

Proof Consider the l1-norm and let ȳ be a vector such that ȳ = Pȳ and ȳ ∈ Σ_N (notice that it exists due to the Perron-Frobenius theorem). In this case,

    min_{y∈Σ_N} ‖Py − y‖_1 + ε_1‖y‖_a ≤ ε_1‖ȳ‖_a ≤ ε_1 = ε_ξ + ε_ψ,

where the last inequality holds due to the definition of the norm ‖·‖_a. Therefore, condition (30) is satisfied if ε_ξ + ε_ψ ≤ ε_ζ + ε_χ + 1 and ε_ξ + ε_ψ ≤ (ε̂_ζ + ε̂_χ + 1)/M. It is, therefore, sufficient that ε_ξ + ε_ψ ≤ 1 in the absence of information about the ambiguity parameters of the new pages (i.e. about ε_ζ and ε_χ). Hence, statement 1 of the Theorem follows. Analogically, we obtain the sufficient conditions for the l2- and Frobenius-norm reformulations, i.e. statements 2 and 3 of the Theorem.
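As a compact illustration of how Lemma 1, combined with Statement 3, reduces problem (17) to the two subproblems (23) and (24) in the Frobenius-norm case, the sketch below compares their optimal values and assembles the solution via the change of variables (25). It is a schematic under simplifying assumptions: solve_current_block is a hypothetical placeholder for any routine solving (23) (for example, the subgradient scheme of Section 5 below), and it is not a function defined in the paper.

    import numpy as np

    def assemble_robust_solution(solve_current_block, P, eps1, eps2_bar, M):
        """Decision rule of Lemma 1 (Frobenius-norm case).

        solve_current_block(P, eps1) -> (y1, val1) is assumed to return a
        minimizer and the optimal value of subproblem (23).
        """
        y1, val1 = solve_current_block(P, eps1)
        # By Statement 3, the minimum of ||y2||_2 over the simplex is 1/sqrt(M),
        # attained at the uniform vector, so the optimal value of (24) is:
        y2 = np.full(M, 1.0 / M)
        val2 = eps2_bar / np.sqrt(M)
        if val1 < val2:          # point 1 of Lemma 1: all mass on current pages
            return np.concatenate([y1, np.zeros(M)])
        elif val1 > val2:        # point 2: all mass on the (unknown) new pages
            return np.concatenate([np.zeros_like(y1), y2])
        else:                    # point 3: any convex combination is optimal
            s = 0.5
            return np.concatenate([s * y1, (1 - s) * y2])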

Condition 1 of the Theorem 2 depends neither on the number of new pages in the network nor on the uncertainty levels ε_ζ and ε_χ, which makes the l1-formulation convenient for the analysis. For the case of the l2 robust reformulation, condition 2 of the Theorem 2 states that the uncertainty about future pages ε_ζ + ε_χ should grow faster than √M(ε_ξ + ε_ψ) as the number M of new pages increases. For the cases M = 0 and M = 1, this condition is automatically satisfied. For higher dimensions (i.e. M > 1), this condition can be statistically tested from real-world data. Similarly, for the Frobenius-norm formulation (16), condition 3 of the Theorem 2 requires that the uncertainty about future pages ε_ζ + ε_χ grows faster than √M(ε_ξ + ε_ψ) as the number of new pages increases.

Notice that in the work of A. Juditsky and B. Polyak [3] small perturbations ε = 0.01 were considered in order to avoid complete equalization of the scores. Moreover, M = 0 in the work [3]. Thus, conditions 1, 2 and 3 of the Theorem 2 were satisfied in the work of A. Juditsky and B. Polyak [3].

Further, let us assume that the conditions of the Theorem 2 are satisfied. In this case, we focus on the numerical solution of the problem (23), which we formulate in the following general form:

    min_{x∈Σ_N}  ‖Px − x‖ + ε‖x‖',   (33)

where we, with some abuse of notation, denote ỹ_1 by x and ε_1 by ε, and where the norms ‖·‖ and ‖·‖' correspond to the norms in the l1-, l2- and Frobenius-norm formulations (12), (14) and (16). In the following section, we study numerical techniques for the solution of the optimization problem (33).

5 Numerical algorithms

Consider the optimization problems (12), (14) and (16). As demonstrated in the previous section, for each of these problems it is sufficient to solve an optimization problem of the type (33) if conditions (30), (31) or (32) are correspondingly satisfied.

In medium- to large-dimensional cases (i.e. N = 1.e3-1.e6), the convex non-smooth optimization problem (33) can be solved numerically using available optimization techniques, including interior-point methods, mirror-descent algorithms [4,9] and randomized techniques [8]. In huge-dimensional cases (i.e. N = 1.e6-1.e9), one can employ randomized subgradient algorithms, which are specifically designed for sparse matrices by Y. Nesterov [0]. Moreover, one can use less accurate but extremely fast numerical methods proposed in [3,7,5]. In this section, we do not consider these algorithms, but we propose some techniques which allow us to smoothen the optimization problems (12), (14), (16) and to solve them numerically via approximation.

Further, we consider the optimization problem (33) with ‖·‖ = ‖·‖_2 and ‖·‖' = ‖·‖_2, i.e. we focus on the Frobenius-norm formulation (16) under the conditions of the Theorem 2. We choose this formulation as it imposes an upper bound on the l2-formulation (14) and as it is a non-smooth non-separable optimization problem, which

implies additional difficulty. Differently, one could use the following upper bound for the approximate solution of the optimization problem (14):

    min_{x∈Σ_N} { ‖Px − x‖_2 + ε min_{λ+µ=x} ( ‖λ‖_1 + [(ε̂_ξ + ε̂_ψ)/(ε_ξ + ε_ψ)] ‖µ‖_2 ) }  ≤  min_{x∈Σ_N} { ‖Px − x‖_2 + (ε̂_ξ + ε̂_ψ) ‖x‖_2 },

(taking µ = x in the inner minimum), which we do not consider in this article but which can be approached analogically to the Frobenius-norm formulation.

We also do not consider numerical algorithms for the solution of the l1-norm formulation (12), as this problem can be bounded by an optimization problem with a separable non-smooth part and, therefore, it can be solved approximately via the well-known projected coordinate descent algorithm (see, for example, [5]).

Therefore, let us consider the optimization problem (16). Its reformulation (23) under condition (32) implies the following optimization problem:

    x* ∈ Arg min_{x∈Σ_N}  ‖Px − x‖_2 + ε‖x‖_2,   (34)

where ε = ε_ξ + ε_ψ. We solve the optimization problem (34) using the following steps:

Step 1: Apply a nonstandard normalization instead of the simplex constraints x ∈ Σ_N. This allows us to simplify the constraints of the optimization problem (34);
Step 2: Bound the feasible set of the problem (34) so that it does not include any of the non-smooth points;
Step 3: Solve the optimization problem (34) via the projected subgradient method on the feasible set.

Further, we consider each of these steps in more detail.

Nonstandard normalization: Assume that one page (page 1) is known to have the highest rank (see [5]) and, therefore, let us use the nonstandard normalization x_1 = 1 instead of Σ_{i=1}^N x_i = 1. By this, we introduce the following optimization problem:

    z* ∈ Arg min_{x∈X} f(x),   f(x) = ‖Px − x‖_2 + ε‖x‖_2,   (35)

where X = {x ∈ R^N : x_1 = 1, x ≥ 0}. Notice that the optimal value and the optimal solution of the problem (34) are related to those of the problem (35) in the following way: f(x*) = x_1*·f(z*), where z* = x*/x_1* = (1, x_2*/x_1*,..., x_N*/x_1*)ᵀ. This leads to the statement x* = z*/(Σ_{i=1}^N z_i*).
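The relation between (34) and (35) rests on the positive homogeneity of f (i.e. f(c·x) = c·f(x) for c > 0), which the short check below illustrates numerically; it also shows how a point feasible for (35) is rescaled back to the simplex. This is an illustrative snippet, not part of the paper's algorithm.

    import numpy as np

    rng = np.random.default_rng(1)
    N, eps = 6, 0.5
    P = rng.random((N, N)); P /= P.sum(axis=0, keepdims=True)   # column-stochastic P

    f = lambda x: np.linalg.norm(P @ x - x) + eps * np.linalg.norm(x)

    x = rng.random(N)
    c = 3.7
    print(np.isclose(f(c * x), c * f(x)))       # positive homogeneity of f

    # a point z feasible for (35): z_1 = 1; the corresponding simplex point:
    z = x / x[0]
    x_simplex = z / z.sum()                     # back on the standard simplex
    print(np.isclose(f(x_simplex), f(z) / z.sum()))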

Non-smooth points: The optimization problem (35) is non-smooth at least at one point x̄ : P x̄ = x̄, which is a possible optimal solution corresponding to the dominant (or principal) eigenvector of the non-perturbed matrix P. Therefore, we first of all check whether the optimal solution of the problem (35) is always the point x̄ : P x̄ = x̄. For this, we solve the optimization problem (35) in small-dimensional cases with a randomly chosen non-negative column-stochastic matrix P. We can see that the following two cases are possible:

1. The optimal solution of the problem (35) is a point x̄ : P x̄ = x̄ (Figure 2 a), in which case the function f(x) is non-smooth at optimality;
2. The optimal solution of the problem (35) is a point x̃ : P x̃ ≠ x̃ (Figure 2 b), in which case the function f(x) is smooth at optimality.

Fig. 2: Function f(x) for randomly chosen P in small-dimensional cases: (a) non-smooth at optimality; (b) smooth at optimality.

The optimal solution of the problem (35) is, thus, not necessarily the principal eigenvector of the matrix P. Therefore, there may exist at least one non-optimal point where the function is non-smooth. Further, notice that the optimization problem (35) is non-smooth at every x such that P x = x. As the matrix P is non-negative and column-stochastic, its maximal eigenvalue λ = 1 is not necessarily a simple root of the characteristic polynomial according to the Perron-Frobenius theorem [0]. Therefore, there may exist multiple points x : P x = x. If the matrix P were irreducible, the maximal eigenvalue λ = 1 would be a simple root of the characteristic polynomial. However, there still could exist multiple complex eigenvalues with absolute value 1, which would strongly influence the convergence of numerical algorithms (for example, the well-known power method would not converge).

We propose a technique which guarantees that at each iteration k the objective function is smooth at the point x^k. Notice that the subgradient of the function f(x) at any point x : Px ≠ x is unique and is equal to:

    ∇f(x) = (P − I)ᵀ(Px − x)/‖Px − x‖_2 + ε·x/‖x‖_2,   where Px ≠ x.   (36)
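A direct transcription of (36), for illustration only (it assumes Px ≠ x and x ≠ 0 so that both denominators are non-zero):

    import numpy as np

    def grad_f(P, x, eps):
        """Gradient (36) of f(x) = ||Px - x||_2 + eps*||x||_2 at a smooth point."""
        r = P @ x - x                      # residual Px - x, assumed non-zero
        return (P - np.eye(len(x))).T @ r / np.linalg.norm(r) + eps * x / np.linalg.norm(x)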

Notice also that ‖x‖_2 > 0 for all points in the feasible set of the problem (35), as x_1 = 1.

Now, consider a perturbed Google matrix G = αP + (1 − α)S, where S is the doubly stochastic matrix with entries equal to 1/N and where α = 0.85 is the damping factor [9,6]. This matrix was initially proposed by S. Brin and L. Page [5] and was used by Google in the well-known PageRank algorithm y_{k+1} = G y_k. For the matrix G one can guarantee the uniqueness of the maximal in absolute value eigenvalue (i.e. λ = 1) and, therefore, the uniqueness of the eigenvector ȳ corresponding to it [5]. Moreover, one can guarantee the convergence of the power method y_{k+1} = G y_k to ȳ : Gȳ = ȳ due to the Perron-Frobenius theorem for positive matrices.

Further, consider the following convex function and its corresponding subgradient:

    g(x) = ‖Gx − x‖_2 + ε‖x‖_2,    ∇g(x) = (G − I)ᵀ(Gx − x)/‖Gx − x‖_2 + ε·x/‖x‖_2.

Notice that the function g(x) has a unique non-smooth point ȳ, which corresponds to Gȳ = ȳ and which, in general, may differ from x̄ : P x̄ = x̄. The Perron-Frobenius theorem for positive matrices implies that the principal eigenvector of the matrix G has only positive entries. Hence, one can guarantee that Gx ≠ x for each chosen x by setting the ranks of some spam pages to zero (i.e. x_i = 0 for some i). By this, one guarantees the uniqueness of the subgradient at each feasible point. Therefore, we solve the following optimization problem numerically:

    min_{x∈X̄} g(x),   g(x) = ‖Gx − x‖_2 + ε‖x‖_2,   (37)

where X̄ = {x ∈ R^N : x_1 = 1, x_2 = 0, x ≥ 0} and where the coordinate x_2 corresponds to the spam page.

Subgradient method: We solve the optimization problem (37) by the projected subgradient algorithm, where we do not iterate over the elements x_1 and x_2:

    x_i^{k+1} = max{ x_i^k − t_k·( ([G]_i − e_i)ᵀ(Gx^k − x^k)/‖Gx^k − x^k‖_2 + ε·x_i^k/‖x^k‖_2 ), 0 },   i ≠ 1, 2,   (38)

where [G]_i is the i-th column of the matrix G, e_i is the i-th column of the identity matrix I and t_k is the chosen step size. In the method (38), the subgradient is unique at each feasible point of the problem (37).

In general, subgradient methods are not necessarily descent methods. Therefore, one usually keeps track of the best function value at every iteration k, i.e. g_best = min_{i=1,...,k} g(x^i). In this article, we test the stopping rule g(x^{k+1}) > g(x^k) instead of keeping track of the best function value, as well as we do not iterate over pages with zero ranks. This allows us to enhance the efficiency of the algorithm.

5.1 Numerical results

Consider the following two ranking models (Figure 3 a and b), which are also discussed in the works [3,5,8]. For both models, nodes (e.g. web-pages) are shown as circles, while references (e.g. web-links) are denoted by arrows.

Fig. 3: Network models for testing purposes: (a) Model 1 and (b) Model 2, both consisting of the nodes (1,1), (1,2),..., (n,n) arranged in an n×n grid.

Model 1: Let us start by considering Model 1 (Figure 3 a). In this model, the node (n,n) represents a dangling vertex, which makes the transition matrix P reducible, leading to non-uniqueness of the eigenvector corresponding to the eigenvalue λ = 1. To avoid reducibility of the matrix P, we need to guarantee that the number of outgoing links is non-zero for each page. For this, we assume the ability of equally probable transitions from the node (n,n) to any node in the Model 1 (similarly to the approach used by search engines). The transition matrix P corresponding to the Model 1 with equally probable transitions from the vertex (n,n) becomes irreducible and aperiodic, which guarantees the uniqueness of the eigenvector corresponding to the eigenvalue λ = 1, as well as the convergence of the well-known power method x^{k+1} = Px^k [3,5,8].

Figure 4 demonstrates the sparsity of the matrix P corresponding to the Model 1 for networks with N = 9 and N = 100 nodes: zero elements of the matrix P are shown with light blue color. The matrix P is, clearly, highly sparse.

Fig. 4: Non-zero elements of the transition matrix P corresponding to the Model 1: (a) N = 9; (b) N = 100.
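For readers who wish to reproduce a matrix with the sparsity pattern of Figure 4, the sketch below builds a transition matrix of this grid type. The link structure (each node (i,j) pointing to its right and down neighbours, with the dangling node (n,n) redirected to all nodes equally) is an assumption inferred from Figure 3 rather than a specification taken from the text, and the helper name model1_matrix is ours.

    import numpy as np

    def model1_matrix(n):
        """Column-stochastic transition matrix of an n-by-n grid (Model 1 style).

        Node (i, j) is mapped to index i*n + j; it links to (i, j+1) and (i+1, j)
        where these exist.  The dangling node (n-1, n-1) is linked to every node
        with probability 1/N, as described for Model 1.
        """
        N = n * n
        P = np.zeros((N, N))
        for i in range(n):
            for j in range(n):
                src = i * n + j
                targets = []
                if j + 1 < n: targets.append(i * n + (j + 1))    # right neighbour
                if i + 1 < n: targets.append((i + 1) * n + j)    # down neighbour
                if targets:
                    for t in targets:
                        P[t, src] = 1.0 / len(targets)
                else:                                            # dangling node (n-1, n-1)
                    P[:, src] = 1.0 / N
        return P

    P = model1_matrix(10)
    print(P.shape, np.allclose(P.sum(axis=0), 1.0))              # (100, 100) True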

We are interested in finding the ranks x_{i,j} of each node of the network. Taking an arbitrary value of x_{n,n}, say x_{n,n} = 1/n, we obtain the system of equations describing the ranks of the Model 1 [5]:

    x_{1,1} = x_{n,n}/n²,
    x_{i,1} = x_{1,i} = x_{i−1,1}/2 + x_{n,n}/n²,   i = 2,...,n,
    x_{i,j} = x_{i−1,j}/2 + x_{i,j−1}/2 + x_{n,n}/n²,   i, j = 2,...,n−1,       (39)
    x_{n,j} = x_{j,n} = x_{n,j−1} + x_{n−1,j}/2 + x_{n,n}/n²,   j = 2,...,n−1,
    x_{n,n} = x_{n,n−1} + x_{n−1,n} + x_{n,n}/n² = 1/n.

The system of equations (39) has a closed-form solution and, therefore, it can be solved explicitly for each x_{i,j} [5,8]. This allows us to test the performance of the subgradient algorithm (38) in comparison with the true ranks of the Model 1.

Model 2: The second model (i.e. Model 2 in the Figure 3 b) differs from the Model 1, as it has only one link from the node (n,n). As the transition matrix corresponding to the Model 2 is periodic, the power method x^{k+1} = Px^k does not converge for it [5,8]. Taking an arbitrary value of x_{1,1}, we obtain the system of equations describing the ranks of the Model 2 [5]:

    x_{i,1} = x_{1,i} = x_{i−1,1}/2,   i = 2,...,n,
    x_{1,1} = x_{n,n},
    x_{i,j} = x_{i−1,j}/2 + x_{i,j−1}/2,   i, j = 2,...,n−1,                    (40)
    x_{n,j} = x_{j,n} = x_{n,j−1} + x_{n−1,j}/2,   j = 2,...,n−1,
    x_{n,n} = x_{n,n−1} + x_{n−1,n}.

Analogically to the Model 1, the system of equations (40) can be solved explicitly. Further, we proceed to the standard normalization and get the normalized ranks as x̄_{i,j} = x_{i,j}/Σ_{i,j} x_{i,j} for both Model 1 and Model 2 (see Figure 5). Notice that these ranks correspond to the dominant eigenvector of the matrix P. However, they are not necessarily the optimal solution of the optimization problems (34) or (37).

Fig. 5: Explicit ranks of (a) Model 1 and (b) Model 2.
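The experiments that follow solve problem (37) with the update (38). A minimal self-contained sketch of that scheme is given below; it is an illustration written for this summary (dense matrices, a diminishing step size t_k = 1/(k+1)^d and the stopping rule g(x^{k+1}) > g(x^k)), not the author's original code, and the function name robust_rank_subgradient is ours.

    import numpy as np

    def robust_rank_subgradient(P, eps=1.0, alpha=0.85, d=0.5, max_iter=200):
        """Projected subgradient method (38) for g(x) = ||Gx - x||_2 + eps*||x||_2.

        Coordinate 0 (the page known to have the highest rank) is fixed to 1 and
        coordinate 1 (the spam page) is fixed to 0; all other coordinates are
        updated and clamped at zero.
        """
        N = P.shape[0]
        G = alpha * P + (1.0 - alpha) / N            # Google matrix, S has entries 1/N
        g = lambda x: np.linalg.norm(G @ x - x) + eps * np.linalg.norm(x)

        x = np.zeros(N); x[0] = 1.0                  # starting point with the two fixed coordinates
        g_old = g(x)
        for k in range(max_iter):
            r = G @ x - x
            grad = (G - np.eye(N)).T @ r / np.linalg.norm(r) + eps * x / np.linalg.norm(x)
            t_k = 1.0 / (k + 1) ** d                 # diminishing step size
            x_new = x.copy()
            x_new[2:] = np.maximum(x[2:] - t_k * grad[2:], 0.0)   # update (38), i != 1, 2
            g_new = g(x_new)
            if g_new > g_old:                        # stopping rule g(x^{k+1}) > g(x^k)
                break
            x, g_old = x_new, g_new
        return x, g_old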

For N = 10.000 we (i) compute the exact ranks of the described models (i.e. Model 1 and Model 2) via the systems (39) and (40). Afterwards, we (ii) solve the reformulation (37) by the algorithm (38) described in the Section 5. Further, we test different possible step sizes t_k in the algorithm (38). For Model 1 and Model 2 correspondingly, Figure 6 demonstrates the convergence of the optimal value of the problem (37) obtained by the algorithm (38) with a diminishing step size (i.e. lim_{k→∞} t_k = 0, Σ_{k=1}^∞ t_k = ∞) and with the starting point x^1 = (1, 0, 1,..., 1)ᵀ.

Fig. 6: Optimal value convergence for the problem (37) obtained by the algorithm (38) with diminishing step sizes t_k = 1/(k+1)^d, d ∈ (0,1], and ε = 1, for (a) Model 1 and (b) Model 2. The objective g(x^k) = ‖Gx^k − x^k‖_2 + ε‖x^k‖_2 is plotted over iterations for d = 1, 0.9, 0.5, 0.3, 0.1 and compared with the optimal value of the initial problem (34), the optimal value of (35) under the additional constraint that the spam-page rank equals zero, and the optimal value of the problem (37).

As the subgradient method is not necessarily a descent method, one should, in general, keep track of the best function value at every iteration k: g_best = min_{i=1,...,k} g(x^i). We, however, test the performance of the stopping rule g(x^{k+1}) > g(x^k) instead of keeping track of the best function value. This allows us to enhance the efficiency of the algorithm (Figure 6). In the Figure 6, one can see that larger step sizes lead to an earlier

break of iterations under the stopping rule g(x^{k+1}) > g(x^k). Figure 7 compares the ranks estimated via the algorithm (38) with the dominant eigenvector of the matrix P.

Fig. 7: Model 1: Estimated ranks vs. the dominant eigenvector of the matrix P: (a) ranks of the last-row nodes; (b) ranks of the diagonal nodes. The ranks imposed by the problem (37) for different step sizes t_k = 1/(k+1)^d are compared with the ranks imposed by the eigenvector of the matrix P.

In the example of Figure 7 we use ε = 1 as the parameter of the optimization problem (37). For larger values of ε, the robust eigenvector stops distinguishing the ranks of high-importance pages. This is in line with conditions (30), (31) and (32) and the Theorem 2, which claim that the uncertainty about future pages becomes dominant if the value of ε = ε_1 gets too high, which makes the ranks of current pages indistinguishable from each other. Therefore, increasing the value of ε even further would lead to a solution where one cannot differentiate between the ranks of the free pages at all (notice that x_1 = 1 and x_2 = 0 are fixed in the optimization problem (37)).

For small values of ε (e.g. ε << 1), the optimal solution of the problem (34) should approach the dominant eigenvector of the unperturbed transition matrix P. As we solve the optimization problem (37) instead of the problem (34) for numerical

purposes, our optimal solution approaches the unique dominant eigenvector of the Google matrix G = αP + (1 − α)S, with S being the doubly stochastic matrix with elements 1/N and the damping factor α = 0.85. Figure 8 (a) demonstrates the values x_{i,j}, i, j = 1,...,n of the dominant eigenvector of the matrix G with α = 0.99 for the Model 1. A large amount of high-importance pages have the same rank already for α = 0.99, which can also be observed in the Figure 8 (b) (view from above) [5,8]. Even more pages become indistinguishable if we decrease the damping factor α.

Fig. 8: Eigenvector of the Google matrix G = αP + (1 − α)S with α = 0.99 corresponding to the Model 1: (a) ranks x_{i,j}, i, j = 1,...,n; (b) view from above.

By this, one could claim that the dominant eigenvector of the matrix G should be viewed as the robust eigenvector of the perturbed family of matrices Q defined by (7), (8) and (9). This, however, would not provide the methodology to rank high-importance pages, which are the core of all search engine results. For small- and medium-size problems, the optimal solution of the optimization problem (37) with ε = 1 provides better results in terms of ranking of high-importance pages than the direct use of the transition matrix G.

Further, we provide the results of the subgradient method (38) for the following step sizes for the Model 1 with N = 10.000:

Constant step length: t_k = h/‖∇g(x^k)‖_2, ∀k;

Polyak's step size: t_k = (g(x^k) − g*)/‖∇g(x^k)‖_2, where g* is the unknown optimal value of the problem (34), which we approximate by the best value observed so far plus a vanishing term:

    t_k = ( g(x^k) − min_{i=1,...,k} g(x^i) + 1/k ) / ‖∇g(x^k)‖_2,     (41)
    t_k = ( g(x^k) − min_{i=1,...,k} g(x^i) + 1/√k ) / ‖∇g(x^k)‖_2.    (42)

Notice that one can use t_k = 1/(k·‖∇g(x^k)‖_2) and t_k = 1/(√k·‖∇g(x^k)‖_2) instead of the corresponding step sizes (41) and (42) in case one applies the stopping rule g(x^{k+1}) > g(x^k). This follows from the fact that g(x^k) = g_best = min_{i=1,...,k} g(x^i) if the stopping rule is implemented. Otherwise, one needs to use the step sizes (41) and (42) directly.

Fig. 9: Model 1: Optimal value of the problem (37) obtained by the algorithm (38) with ε = 1 and the stopping rule g(x^{k+1}) > g(x^k): (a) constant step lengths t_k = h/‖∇g(x^k)‖_2 with h = 1, 0.9, 0.5, 0.3, 0.1; (b) Polyak-type step sizes of the form t_k = 1/(k^d ‖∇g(x^k)‖_2). The optimal values of the initial problem (34), of (35) under the additional zero-rank constraint for the spam page, and of the problem (37) are shown for comparison.

Fig. 10: Model 1: Estimated ranks vs. the dominant eigenvector of the matrix P: (a) ranks of the last-row nodes; (b) ranks of the antidiagonal nodes, as imposed by the problem (37) for different step sizes and by the eigenvector of the matrix P.
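The step-size rules compared in Figures 9 and 10 can be written compactly as below. This is an illustrative sketch; grad_norm stands for ‖∇g(x^k)‖_2, g_hist for the list of objective values observed so far, and delta_k for the vanishing term (e.g. 1/k or 1/√k) used in (41) and (42).

    # constant step length with parameter h:
    def constant_step(h, grad_norm):
        return h / grad_norm

    # Polyak-type step (41)/(42) with the unknown optimal value replaced by the
    # best value seen so far plus a vanishing term delta_k:
    def polyak_step(g_current, g_hist, delta_k, grad_norm):
        return (g_current - min(g_hist) + delta_k) / grad_norm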

Similarly to the Figure 7, in Figures 9 and 10 we use ε = 1 as the parameter of the optimization problem (37). For larger values of ε, the robust eigenvector does not distinguish the ranks of high-importance pages, in line with conditions (30), (31) and (32) and the Theorem 2.

Finally, we compare the results obtained by the use of the stopping rule g(x^{k+1}) > g(x^k) with the results implied by the choice of the function value which is standard for subgradient algorithms: g_best = min_{i=1,...,k} g(x^i). From Figures 11 and 12, one can see that there is no sufficient gain in estimation accuracy in case one keeps track of the best function value up to the iteration k: the function value decreases monotonically until iteration number k ≈ 100 (see Figures 11 and 12), after which the step size is being reduced but no additional accuracy is achieved.

Fig. 11: Model 1: Optimal value of the problem (37) obtained by the algorithm (38) without the stopping rule, compared with the minimal value of ‖Px − x‖_2 + ε‖x‖_2 on the standard simplex and the minimal value of ‖Gx − x‖_2 + ε‖x‖_2 on the standard simplex with x_{1,1} = 0.

Fig. 12: Model 1: Optimal value of the problem (35) obtained by the algorithm (38) with the choice of the function value g_best which is standard for subgradient algorithms, compared with the same reference values as in Figure 11.

In the next section, we discuss the Google matrix G and the corresponding power methods designed for high-dimensional cases.