A New Method to Find the Eigenvalues of Convex Matrices with Application in Web Page Rating


Applied Mathematical Sciences, Vol. 4, 2010, no. 19, 905-911

A New Method to Find the Eigenvalues of Convex Matrices with Application in Web Page Rating

F. Soleymani
Department of Mathematics, Islamic Azad University, Zahedan Branch, Zahedan, Iran
fazl_soley_bsb@yahoo.com

Abstract

Although many factors determine Google's overall ranking of search engine results, Google maintains that the heart of its search engine software is PageRank, the eigenvector corresponding to the maximal eigenvalue of the Google matrix. In this note we deal with a very special case of this positive matrix, namely when it is a positive convexity preserving matrix, and we provide a new way, which can be used as a direct method, to find the eigenvalues of this type of matrix. Although in practice such matrices are unlikely to be encountered, the case is worth mentioning.

Mathematics Subject Classifications: 15A86, 65F15

Keywords: Eigenvalue, Google matrix, convex matrices, PageRank

1 Introduction

The computation of the eigenvalues of a given matrix is one of the most important subjects in applied mathematics. There are several numerical methods to approximate these eigenvalues, such as the power iteration method or the Rayleigh quotient method, which are discussed extensively in the literature (see, for example, [6]). Unfortunately, these methods are computationally expensive, while ever more precision is demanded in real-world applications (see [4]), such as finding the eigenvalues of the Google matrix, which play an important role in the calculation of PageRank (the eigenvector of the Google matrix corresponding to the maximal eigenvalue, i.e., 1). This paper considers a class of special matrices (k-convexity preserving matrices for all k) that are significant in many applications, for instance in computer aided geometric design (see [1,2,5]) and in finding the PageRank.

Note that Google founders Sergey Brin and Larry Page, with the financial assistance of a small group of initial investors, founded the Web search engine company Google, Inc. in September 1998. Approximately 94 million American adults use the Internet on a typical day. Search engine use is next in line and continues to increase in popularity. In fact, survey findings indicate that nearly 60 million American adults use search engines on a given day. Even though there are many Internet search engines, Google, Yahoo!, and MSN receive over 81% of all search requests. Despite claims that the quality of search provided by Yahoo! and MSN now equals that of Google, Google continues to thrive as the search engine of choice, receiving over 46% of all search requests, nearly double the volume of Yahoo! and over four times that of MSN [7].

To gain a better understanding of the upcoming propositions, it is best to first provide some fundamental definitions.

Definition 1.1. Suppose that $0 \le k \le n-1$. A vector $u = (u_1, u_2, \ldots, u_n)^T \in \mathbb{R}^n$ is said to be k-convex if $\Delta^k u_i \ge 0$ for all $i = 1, 2, \ldots, n-k$, where $\Delta u_i := u_{i+1} - u_i$ and $\Delta^k u_i := \Delta(\Delta^{k-1} u_i)$.

So it is clear that a vector is 0-convex if and only if it is nonnegative, and a vector is 1-convex if and only if it is monotonically increasing. For instance, $u = (1, 2, 4)^T$ is 2-convex, since $\Delta^2 u_1 = u_3 - 2u_2 + u_1 = 1 \ge 0$. A matrix $A$ is said to be k-convexity preserving if for any k-convex vector $u$ the vector $Au$ is also k-convex. According to this definition, $A$ is 0-convexity preserving iff it transforms nonnegative vectors into nonnegative vectors, and a matrix is 1-convexity preserving iff it is monotonicity preserving. As a result, it is obvious that a vector $u$ is concave when $-u$ is convex.

Lemma 1.1. If we denote by $L$ the $n \times n$ lower bidiagonal matrix

$$L = \begin{pmatrix} 1 & & & \\ -1 & 1 & & \\ & \ddots & \ddots & \\ & & -1 & 1 \end{pmatrix},$$

then

$$L^{-1} = \begin{pmatrix} 1 & & & \\ 1 & 1 & & \\ \vdots & & \ddots & \\ 1 & 1 & \cdots & 1 \end{pmatrix}.$$

By Lemma 1.1, for each $k \in \{1, 2, \ldots, n\}$ the following construction can be given. Let $E_1$ be the matrix $E_1 := L$ and, for $k \in \{2, \ldots, n\}$,

$$E_k := \begin{pmatrix} I_{k-1} & 0 \\ 0 & L_{n-k+1} \end{pmatrix},$$

where $I_{k-1}$ is the identity matrix of order $k-1$ and $L_{n-k+1}$ is the matrix of order $n-k+1$ given by the above-mentioned lemma.

The following result corresponds to Corollary 4.4 of [1] and shows an important property of k-convexity preserving matrices for $k = 0, 1, \ldots, r$.

Proposition 1.1. Let $A$ be an $n \times n$ k-convexity preserving matrix for $k = 0, 1, \ldots, r$. Then

$$(E_r \cdots E_1) A (E_r \cdots E_1)^{-1} = \begin{pmatrix} \Lambda & V \\ 0 & \tilde{A} \end{pmatrix},$$

where $\Lambda$ is an $r \times r$ upper triangular matrix whose diagonal elements $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_r \ge 0$ are the $r$ largest eigenvalues of $A$, and $\tilde{A}$ is a nonnegative matrix with $\rho(\tilde{A}) \le \lambda_r$.

Observe that, by the previous result, the first $r$ columns of $(E_r \cdots E_1)^{-1}$ form a basis of the invariant subspace corresponding to the $r$ dominant eigenvalues of a matrix which is k-convexity preserving for $k = 0, 1, \ldots, r$.

Before presenting the main propositions, it is necessary to mention the following note about combinatorial numbers. By induction, it is easy to see that, for all $m \ge k \ge 0$,

$$\sum_{i=k}^{m} \binom{i}{k} = \binom{m+1}{k+1}.$$

Given a square matrix $A$ of order $n$, the results of the next section provide explicit expressions for the columns of $\bar{A} := A (E_r \cdots E_1)^{-1}$ and the rows of $\hat{A} := (E_r \cdots E_1) \bar{A}$ in terms of the columns and of the rows of $A$.
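To make the construction above concrete, the following minimal Python sketch (our own illustration, not part of the paper) builds $L$, $E_k$ and $F_r := E_r \cdots E_1$ and checks the block triangular form of Proposition 1.1. The test matrix is our choice, a Bernstein collocation matrix at equally spaced nodes, which is classically known to be k-convexity preserving for every k (cf. [2]).

    import numpy as np
    from math import comb

    def L(n):
        # Bidiagonal matrix of Lemma 1.1: 1 on the diagonal, -1 on the subdiagonal.
        return np.eye(n) - np.diag(np.ones(n - 1), -1)

    def E(k, n):
        # E_1 = L; for k >= 2, E_k = diag(I_{k-1}, L_{n-k+1}).
        M = np.eye(n)
        M[k-1:, k-1:] = L(n - k + 1)
        return M

    def F(r, n):
        # F_r := E_r ... E_2 E_1.
        M = np.eye(n)
        for k in range(1, r + 1):
            M = E(k, n) @ M
        return M

    # Test matrix (our choice, not the paper's): the Bernstein collocation
    # matrix at equally spaced nodes, which preserves k-convexity for every k.
    n, r = 6, 3
    t = np.linspace(0.0, 1.0, n)
    A = np.array([[comb(n - 1, j) * s**j * (1 - s)**(n - 1 - j) for j in range(n)]
                  for s in t])

    T = F(r, n) @ A @ np.linalg.inv(F(r, n))
    print(np.round(T, 8))    # block upper triangular: zeros below the r x r block
    print(np.diag(T)[:r])    # the r largest eigenvalues of A (Proposition 1.1)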

Another definition which is important in the rest of the paper is that of the Google matrix. The Google matrix is a positive matrix obtained by a special rank-one perturbation of a stochastic matrix which represents the hyperlink structure of the webpages. More technically, if $e = (1, 1, \ldots, 1)^T$ and $v$ is the probability vector, then the Google matrix is defined as follows:

$$G := \alpha S + (1 - \alpha) e v^T,$$

where $\alpha$, i.e. the damping factor, is in the real open interval $(0, 1)$, and $S$ is a stochastic matrix. The $n$-dimensional probability vector $v$, also called the personalization or stochastic vector, is a positive vector whose 1-norm is equal to 1. That is, $\|v\|_1 = 1$. Consequently, $v$ is a personalization vector when it satisfies the following relation: $e^T v = 1$, $v > 0$.
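As an illustration (again our own sketch, not part of the paper), the fragment below assembles the Google matrix of a tiny four-page web, with an invented row-stochastic link matrix S, and approximates PageRank, here the dominant left eigenvector of G, by the power method.

    import numpy as np

    # Hyperlink matrix of a tiny four-page web (invented for this example);
    # S[i, j] is the probability of moving from page i to page j, so S is
    # row stochastic.
    S = np.array([[0.0, 1/3, 1/3, 1/3],
                  [0.5, 0.0, 0.5, 0.0],
                  [0.0, 0.5, 0.0, 0.5],
                  [1.0, 0.0, 0.0, 0.0]])

    n = S.shape[0]
    alpha = 0.85                   # damping factor, alpha in (0, 1)
    v = np.full(n, 1.0 / n)        # personalization vector: v > 0, ||v||_1 = 1
    e = np.ones(n)

    G = alpha * S + (1 - alpha) * np.outer(e, v)   # the Google matrix

    # PageRank: the dominant left eigenvector of G (eigenvalue 1),
    # approximated by the power method.
    pi = v.copy()
    for _ in range(100):
        pi = pi @ G
    print(pi / pi.sum())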

2 Main Result

Proposition 2.1. Let $A$ be a square matrix of order $n$ with columns $c_1, c_2, \ldots, c_n$, and let $\bar{A} := A (E_r \cdots E_1)^{-1}$ for $r \in \{1, \ldots, n\}$. Let us denote by $\bar{c}_1, \bar{c}_2, \ldots, \bar{c}_n$ the columns of $\bar{A}$. Then, for each $j \in \{1, \ldots, r\}$,

$$\bar{c}_j = \sum_{i=j}^{n} \binom{i-1}{j-1} c_i, \qquad (1)$$

and, for each $j \in \{r+1, \ldots, n\}$,

$$\bar{c}_j = \sum_{i=j}^{n} \binom{i-j+r-1}{r-1} c_i. \qquad (2)$$

Proof. The simplest way to prove these formulas is by induction on $r \in \{1, \ldots, n\}$. For $r = 1$ we have, by the definition of $E_1$, that

$$\bar{c}_j = c_j + c_{j+1} + \cdots + c_n = \sum_{i=j}^{n} c_i$$

for all $j \in \{1, \ldots, n\}$, and therefore formulas (1) and (2) hold for $r = 1$. Let us suppose that (1) and (2) hold for some $r \in \{1, \ldots, n-1\}$ and let us prove them for $r+1$. We can write

$$A (E_{r+1} E_r \cdots E_1)^{-1} = \bar{A} E_{r+1}^{-1}.$$

Next, by the induction hypothesis, the columns $\bar{c}_j$ of $\bar{A}$ are given by (1) for $j \in \{1, \ldots, r\}$ and by (2) for $j \in \{r+1, \ldots, n\}$. By the definition of $E_{r+1}$ and the induction hypothesis we have that, for $j \in \{1, \ldots, r\}$, the $j$-th column of $\bar{A} E_{r+1}^{-1}$ coincides with $\bar{c}_j$. Therefore (1) holds for $j \in \{1, \ldots, r\}$. Analogously, by the definition of $E_{r+1}$ and the induction hypothesis we deduce that, for $j \in \{r+1, \ldots, n\}$, the $j$-th column of $\bar{A} E_{r+1}^{-1}$ is

$$\sum_{m=j}^{n} \bar{c}_m = \sum_{m=j}^{n} \sum_{i=m}^{n} \binom{i-m+r-1}{r-1} c_i.$$

Reordering the terms (see [3] for more) in the previous formula, it can be written as

$$\sum_{i=j}^{n} \left( \sum_{m=j}^{i} \binom{i-m+r-1}{r-1} \right) c_i.$$

Changing the index of the inner sum in the previous formula we have

$$\sum_{i=j}^{n} \left( \sum_{s=r-1}^{i-j+r-1} \binom{s}{r-1} \right) c_i,$$

and, finally, applying the note presented above, we deduce

$$\sum_{i=j}^{n} \binom{i-j+r}{r} c_i.$$

For $j = r+1$ this expression coincides with formula (1), and hence formula (2) also holds for $j \in \{r+2, \ldots, n\}$ with $r+1$ in place of $r$, and the induction follows.
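A quick numerical check (our own sketch; note that formulas (1) and (2) require no convexity assumption on $A$) compares the columns of $A (E_r \cdots E_1)^{-1}$ with the binomial sums above.

    import numpy as np
    from math import comb

    def F(r, n):
        # F_r := E_r ... E_1, with E_k as in Section 1.
        M = np.eye(n)
        for k in range(1, r + 1):
            Ek = np.eye(n)
            Ek[k-1:, k-1:] -= np.diag(np.ones(n - k), -1)
            M = Ek @ M
        return M

    rng = np.random.default_rng(0)
    n, r = 6, 3
    A = rng.random((n, n))        # any matrix: (1) and (2) need no convexity
    Abar = A @ np.linalg.inv(F(r, n))

    c = A.T                       # c[i - 1] is the i-th column of A
    for j in range(1, n + 1):
        if j <= r:                # formula (1)
            col = sum(comb(i - 1, j - 1) * c[i - 1] for i in range(j, n + 1))
        else:                     # formula (2)
            col = sum(comb(i - j + r - 1, r - 1) * c[i - 1]
                      for i in range(j, n + 1))
        assert np.allclose(Abar[:, j - 1], col)
    print("formulas (1) and (2) reproduce the columns of A (E_r ... E_1)^{-1}")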

Proposition 2.2. Let $\bar{A}$ be the matrix defined in Proposition 2.1, with rows $\bar{a}_1, \bar{a}_2, \ldots, \bar{a}_n$, and let $\hat{A} := (E_r \cdots E_1) \bar{A}$ for $r \in \{1, \ldots, n\}$. Let us denote by $\hat{a}_1, \hat{a}_2, \ldots, \hat{a}_n$ the rows of $\hat{A}$. Then, for each $i \in \{1, \ldots, r\}$,

$$\hat{a}_i = \nabla^{i-1} \bar{a}_i = \sum_{m=0}^{i-1} (-1)^m \binom{i-1}{m} \bar{a}_{i-m}, \qquad (3)$$

and, for each $i \in \{r+1, \ldots, n\}$,

$$\hat{a}_i = \nabla^{r} \bar{a}_i = \sum_{m=0}^{r} (-1)^m \binom{r}{m} \bar{a}_{i-m}, \qquad (4)$$

where $\nabla$ is the usual backward difference, $\nabla \bar{a}_i := \bar{a}_i - \bar{a}_{i-1}$.

Proof. Performing $E_k$ for $k = 1, \ldots, r$ consists of subtracting from the rows $k+1, \ldots, n$ of the current matrix the rows $k, \ldots, n-1$, respectively. Therefore we have that, for each $i \in \{1, \ldots, r\}$, $\hat{a}_i = \nabla^{i-1} \bar{a}_i$, and that, for each $i \in \{r+1, \ldots, n\}$, $\hat{a}_i = \nabla^{r} \bar{a}_i$. Finally, the combinatorial formulas in (3) and (4) are the well-known expansion formulas for the backward difference.

From the last two propositions we derive the following results.

Corollary 2.1. Let $r$ be a positive integer less than $n$, let $A$ be an $n \times n$ k-convexity preserving matrix for $k = 0, 1, \ldots, r$, and let $\hat{A} = (\hat{a}_{ij})_{1 \le i,j \le n}$ be the matrix defined in Proposition 2.2. Then $\hat{a}_{11}, \hat{a}_{22}, \ldots, \hat{a}_{rr}$ are the $r$ largest eigenvalues of $A$, satisfying $\hat{a}_{11} \ge \hat{a}_{22} \ge \cdots \ge \hat{a}_{rr} \ge 0$, and the remaining eigenvalues of $A$ are the eigenvalues of the nonnegative matrix $(\hat{a}_{ij})_{r+1 \le i,j \le n}$.
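Corollary 2.1 yields the following direct procedure, sketched here in Python (our own illustration; the Bernstein collocation matrix again stands in for a matrix that is k-convexity preserving for k = 0, 1, ..., r).

    import numpy as np
    from math import comb

    def F(r, n):
        # F_r := E_r ... E_1, with E_k as in Section 1.
        M = np.eye(n)
        for k in range(1, r + 1):
            Ek = np.eye(n)
            Ek[k-1:, k-1:] -= np.diag(np.ones(n - k), -1)
            M = Ek @ M
        return M

    def eigenvalues_direct(A, r):
        # Corollary 2.1: the r largest eigenvalues are the first r diagonal
        # entries of A_hat = F_r A F_r^{-1}; the rest of the spectrum comes
        # from the trailing nonnegative block.
        n = A.shape[0]
        Fr = F(r, n)
        Ahat = Fr @ A @ np.linalg.inv(Fr)
        return np.concatenate([np.diag(Ahat)[:r],
                               np.linalg.eigvals(Ahat[r:, r:])])

    # Bernstein collocation matrix again as a stand-in test matrix.
    n, r = 6, 3
    t = np.linspace(0.0, 1.0, n)
    A = np.array([[comb(n - 1, j) * s**j * (1 - s)**(n - 1 - j) for j in range(n)]
                  for s in t])
    print(np.sort(eigenvalues_direct(A, r).real)[::-1])
    print(np.sort(np.linalg.eigvals(A).real)[::-1])   # agrees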

Eigenvalues of convex matrices 9 Corollary 2.2. If the Google matrix be an r-convexity preserving matrix then the above algorithm can be applied to find the eigenvalues of this matrix. 3 Concluding remarks and discussion By Corollary 2., the explicit formulas of proposition 2. and 2.2 allow us to calculate the largest eigenvalues of a matrix convexity preserving for 0,,,. If is an r-convexity preserving matrix for 0,,,, then by the previous corollary, all its eigenvalues only depend on the entries of the upper triangular part of. These formulas include two phases: the first phase corresponds to the calculation of entries above and to right of the first diagonal entries of (phase ) and the second one corresponds to the calculation of the first diagonal entries of (phase 2). We know that PageRank has connections to numerous areas of mathematics and computer science such as matrix theory, numerical analysis, information retrieval, and graph theory. As a result, much research continues to be devoted to explaining and improving PageRank. Here, by Corollary 2.2, it will be easy to find the eigenvalue of Googe matrix. References [] J.M. Carnicer, M. Garcia-Esnaola, J.M. Pena, Generalized convexity preserving transformations, Comput, Aided Geom. Design 3 (996) 79-97. [2] S. Cooper, S. Waldron, The eigenstructure of the Bernstein operator, J. Approx. theory 05 (2000) 33-65. [3] J. Delgado, J.M. Peña, Computation of the eigenvalues of convexity preserving matrices, Appl. Math. Lett. 22 (2009) 470-479. [4] J. Ding, A. Zhou, Eigenvalues of rank-one updated matrices with some applications, Appl. Math. Letters (2007) 223-226. [5] N. S. Sapidis, P. D. Kaklis, An algorithm for constructing convexity and monotonicity-preserving splines in tension, Computer Aided Geometric Design, Vol 5, 988, 27-37 [6] T. Sauer, Numerical Analysis, Addison Wesley Publication, USA, (2005). [7] S. Serra Capizzano, Google PageRank problem: The model and the analysis, in: Proceedings of the Dagstuhl Conference in Web Retrieval and Numerical Linear Algebra, 2007. Received: September, 2009