Google PageRank with Stochastic Matrix

Md. Shariq, Puranjit Sanyal, Samik Mitra (M.Sc. Applications of Mathematics)

Discrete Time Markov Chain

Let $S$ be a countable set (usually $S$ is a subset of $\mathbb{Z}$ or $\mathbb{Z}^d$ or $\mathbb{R}$ or $\mathbb{R}^d$). Let $\{X_0, X_1, X_2, \ldots\}$ be a sequence of random variables on a probability space taking values in $S$. Then $\{X_n : n = 0, 1, 2, \ldots\}$ is called a Markov Chain with state space $S$ if for any $n \in \mathbb{Z}_{\geq 0}$, any $j_0, j_1, \ldots, j_{n-1} \in S$ and any $i, j \in S$ one has

$$\Pr(X_{n+1} = i \mid X_0 = j_0,\ X_1 = j_1,\ \ldots,\ X_n = j) = \Pr(X_{n+1} = i \mid X_n = j).$$

In addition, if

$$\Pr(X_{n+1} = i \mid X_n = j) = \Pr(X_1 = i \mid X_0 = j) \quad \text{for all } i, j \in S \text{ and } n \in \mathbb{Z}_{\geq 0},$$

then we say $\{X_n : n \in \mathbb{Z}_{\geq 0}\}$ is a time homogeneous Markov Chain.

Notation: We denote a time homogeneous Markov Chain by MC.

Note: The set $S$ is called the state space and its elements are called states.

Column-Stochastic Matrix

A column-stochastic matrix (or column-transition probability matrix) is a square matrix $P = ((p_{ij}))_{i,j \in S}$ (where $S$ may be a finite or countably infinite set) satisfying:

(i) $p_{ij} \geq 0$ for any $i, j \in S$

(ii) $\sum_{i \in S} p_{ij} = 1$ for any $j \in S$

Similarly, a row-stochastic matrix can be defined by requiring $\sum_{j \in S} p_{ij} = 1$ for any $i \in S$.

Consider the MC $\{X_n : n \in \mathbb{Z}_{\geq 0}\}$ on the state space $S$. Let $p_{ij} = \Pr(X_1 = i \mid X_0 = j)$ for $i, j \in S$.
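To make the column convention concrete, here is a minimal NumPy sketch (not part of the original text): a 3-state transition matrix with made-up illustrative entries, stored column-stochastically, together with checks of the two defining properties.

```python
import numpy as np

# Illustrative 3-state chain, stored column-stochastically:
# P[i, j] = Pr(X_1 = i | X_0 = j), so each *column* sums to 1.
P = np.array([
    [0.5, 0.2, 0.3],
    [0.3, 0.7, 0.3],
    [0.2, 0.1, 0.4],
])

# Property (i): all entries are non-negative.
assert (P >= 0).all()
# Property (ii): every column sums to 1.
assert np.allclose(P.sum(axis=0), 1.0)
```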

Then $P = ((p_{ij}))_{i,j \in S}$ is a column-stochastic matrix. We call $P$ the stochastic matrix of the MC $\{X_n : n \in \mathbb{Z}_{\geq 0}\}$.

Lemma: If $A$ is an $n \times n$ matrix whose rows (or columns) are linearly dependent, then $\det(A) = 0$.

Proof: Let $r_1, r_2, \ldots, r_n$ be the rows of $A$. Since $r_1, r_2, \ldots, r_n$ are dependent, there exist scalars $\alpha_1, \alpha_2, \ldots, \alpha_n$, not all zero, such that $\sum_{i=1}^n \alpha_i r_i = 0$; without loss of generality $\alpha_1 \neq 0$. Consider the matrix

$$A' = \begin{pmatrix} \alpha_1 r_1 \\ r_2 \\ \vdots \\ r_n \end{pmatrix},$$

so that $\det(A') = \alpha_1 \det(A)$. Adding $\alpha_2 r_2 + \cdots + \alpha_n r_n$ to the first row of $A'$ leaves the determinant unchanged and turns the first row into $\sum_{i=1}^n \alpha_i r_i = 0$. A matrix with a zero row has zero determinant, so $\det(A') = 0$, and hence $\det(A) = 0$ (since $\alpha_1 \neq 0$). $\square$

Theorem: A stochastic matrix $P$ always has 1 as one of its eigenvalues.

Proof: Let $S = \{1, 2, \ldots, n\}$ and $P = ((p_{ij}))_{1 \leq i,j \leq n}$. Consider the identity matrix $I_n = ((\delta_{ij}))_{1 \leq i,j \leq n}$, where $\delta_{ij}$ is the Kronecker delta. Since $\sum_i p_{ij} = 1$ and $\sum_i \delta_{ij} = 1$, we get

$$\sum_{i=1}^n (p_{ij} - \delta_{ij}) = 0 \quad \text{for every } 1 \leq j \leq n.$$
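A quick numerical check of this theorem, assuming NumPy and the illustrative matrix $P$ from the previous snippet:

```python
import numpy as np

P = np.array([[0.5, 0.2, 0.3],
              [0.3, 0.7, 0.3],
              [0.2, 0.1, 0.4]])

# The columns of P sum to 1, so the rows of P - I sum to the zero
# vector: det(P - I) = 0, and 1 appears among the eigenvalues.
print(np.linalg.det(P - np.eye(3)))   # ~0.0 (up to rounding)
print(np.linalg.eigvals(P))           # contains 1.0
```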

Consequently, the rows of $P - I_n$ sum to the zero vector, so they are linearly dependent, and hence $\det(P - I_n) = 0$ (by the above lemma). Therefore $P$ has 1 as an eigenvalue. $\square$

Definition: $P^{(n)} = ((p^{(n)}_{ij}))_{i,j \in S}$, where $p^{(n)}_{ij} = \Pr(X_n = i \mid X_0 = j)$ for $i, j \in S$. A little work shows that $P^{(n)} = P^n$ for all $n \in \mathbb{Z}_{\geq 1}$. Also, $P^{(n)}$ is a column-stochastic matrix, since $\sum_{i \in S} \Pr(X_n = i \mid X_0 = j) = 1$. (See the sketch after this passage.)

Classification of states of a Markov Chain

Definition 1: $i \leftarrow j$ (read as "$i$ is accessible from $j$", or "the process can go from $j$ to $i$") if $p^{(n)}_{ij} > 0$ for some $n \in \mathbb{Z}_{\geq 1}$.

Note: $i \leftarrow j$ iff there exist $n \in \mathbb{Z}_{\geq 1}$ and states $j_1, j_2, \ldots, j_{n-1} \in S$ such that $p_{j_1 j} > 0,\ p_{j_2 j_1} > 0,\ \ldots,\ p_{j_{n-1} j_{n-2}} > 0,\ p_{i j_{n-1}} > 0$.

Definition 2: $i \leftrightarrow j$ (read as "$i$ and $j$ communicate") if $i \leftarrow j$ and $j \leftarrow i$.

Essential and Inessential States

A state $i$ is essential if for every $j \in S$ with $j \leftarrow i$ one also has $i \leftarrow j$ (i.e., if any state $j$ is accessible from $i$, then $i$ is accessible from $j$). States that are not essential are called inessential.

Let $\xi$ be the set of all essential states. For $i \in \xi$, let $\xi(i) = \{j : i \leftrightarrow j\}$; $\xi(i)$ is the essential class of $i$. Then $\xi(i_0) = \xi(j_0)$ iff $j_0 \in \xi(i_0)$ (i.e., $\xi(i) \cap \xi(k) = \emptyset$ iff $k \notin \xi(i)$).

Definition: A stochastic matrix $P$ having exactly one essential class and no inessential states (i.e., $S = \xi$ and $\xi(i) = S$) is called irreducible, and the corresponding MC is called irreducible.

Let $A$ be an $n \times n$ matrix. The spectral radius of $A$ is defined as

$$\rho(A) = \max\{|\lambda| : \lambda \text{ is an eigenvalue of } A\}.$$

The $\infty$-norm of a vector $x$ is defined as $\|x\|_\infty = \max_{1 \leq i \leq n} |x_i|$.
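The identity $P^{(n)} = P^n$ turns accessibility into a statement about matrix powers. A minimal sketch, assuming NumPy; the helper names `n_step` and `accessible` are illustrative, and for a finite chain it suffices to check powers up to the number of states, because any path can be shortened to one without repeated intermediate states.

```python
import numpy as np

def n_step(P, n):
    # n-step transition probabilities: P^(n) = P^n.
    return np.linalg.matrix_power(P, n)

def accessible(P, i, j):
    # i <- j  iff  (P^n)[i, j] > 0 for some n >= 1; checking
    # n = 1, ..., n_states suffices for a finite chain.
    n_states = P.shape[0]
    return any(n_step(P, n)[i, j] > 0 for n in range(1, n_states + 1))
```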

The $\infty$-norm of a matrix $A$ is defined as

$$\|A\|_\infty = \max_{1 \leq i \leq n} \sum_{j=1}^n |a_{ij}|.$$

Also, $\|A\|_2 = \sqrt{\rho(A^* A)} = \sqrt{\rho(A A^*)} = \|A^*\|_2$. If $V$ is a finite dimensional vector space, then all norms on $V$ are equivalent; in particular,

$$\|A\|_\infty \leq \sqrt{n}\, \|A\|_2 \quad \text{and} \quad \|A\|_2 \leq \sqrt{n}\, \|A\|_\infty.$$

Lemma: If $P$ is an $n \times n$ column-stochastic matrix, then $\|P^T\|_\infty = 1$.

Proof: If $P$ is column-stochastic, then $P^T$ is row-stochastic, i.e., $\sum_{j=1}^n (P^T)_{ij} = \sum_{j=1}^n p_{ji} = 1$ for every $i$. Hence

$$\|P^T\|_\infty = \max_{1 \leq i \leq n} \sum_{j=1}^n |(P^T)_{ij}| = 1. \qquad \square$$

Theorem: If $P$ is a stochastic matrix, then $\rho(P) = 1$.

Proof: Let $\lambda$ be an eigenvalue of $P$. Since $P$ and $P^T$ have the same characteristic polynomial, $\lambda$ is also an eigenvalue of $P^T$. Let $x$ be an eigenvector of $P^T$ corresponding to $\lambda$. Then

$$P^T x = \lambda x \implies |\lambda| \, \|x\|_\infty = \|\lambda x\|_\infty = \|P^T x\|_\infty \leq \|P^T\|_\infty \, \|x\|_\infty = \|x\|_\infty.$$

Hence $|\lambda| \leq 1$. Also, we have proved that 1 is always an eigenvalue of $P$; hence $\rho(P) = 1$. $\square$

Definition: Let $i \in \xi$. Let $A_i = \{n \geq 1 : p^{(n)}_{ii} > 0\}$. Then $A_i \neq \emptyset$, and the greatest common divisor (gcd) of $A_i$ is called the period of state $i$.

If $i \leftrightarrow j$, then $i$ and $j$ have the same period. In particular, the period is constant on each equivalence class of essential states. If an MC is irreducible, then we can define the period of the corresponding stochastic matrix, since all the states are essential.

Definition: Let $d$ be the period of the irreducible Markov chain. The Markov chain is called aperiodic if $d = 1$.

If $q = (q_1, q_2, \ldots, q_n)^T$ is a probability distribution for the states of the Markov chain at a given iterate, with $q_i \geq 0$ and $\sum_{i=1}^n q_i = 1$, then

$$Pq = \left( \sum_{j=1}^n P_{1j} q_j,\ \sum_{j=1}^n P_{2j} q_j,\ \ldots,\ \sum_{j=1}^n P_{nj} q_j \right)^T$$

is again a probability distribution for the states at the next iterate. A probability distribution $w$ is said to be a steady-state distribution if it is invariant under the transition, i.e., $Pw = w$. Such a distribution must be an eigenvector of $P$ corresponding to the eigenvalue 1. The existence as well as the uniqueness of the steady-state distribution is guaranteed for a class of Markov chains by the following theorem due to Perron and Frobenius.

Theorem (Perron, 1907; Frobenius, 1912): If $P$ is a stochastic matrix that is irreducible in the sense that $p_{ij} > 0$ for all $i, j \in S$, then 1 is a simple eigenvalue of $P$. Moreover, the unique eigenvector can be chosen to be the probability vector $w$, which satisfies

$$\lim_{t \to \infty} P^{(t)} = [w, w, \ldots, w].$$

Furthermore, for any probability vector $q$ we have $\lim_{t \to \infty} P^{(t)} q = w$.
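A numerical illustration of the theorem, assuming NumPy and the strictly positive matrix $P$ from the earlier snippets: $w$ is extracted as the eigenvector for the eigenvalue 1 (normalized to a probability vector), and a high power of $P$ visibly has every column close to $w$.

```python
import numpy as np

P = np.array([[0.5, 0.2, 0.3],
              [0.3, 0.7, 0.3],
              [0.2, 0.1, 0.4]])   # strictly positive, as the theorem requires

# Steady state w: eigenvector of P for eigenvalue 1, scaled to sum to 1.
vals, vecs = np.linalg.eig(P)
w = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
w /= w.sum()

# lim P^(t) = [w, w, ..., w]: each column of P^50 is close to w.
print(np.linalg.matrix_power(P, 50))
print(w)
```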

Claim: $\lim_{t \to \infty} p^{(t)}_{ij} = w_i$ for all $j \in S$.

Proof: Since $P = ((p_{ij}))_{i,j \in S}$ with $p_{ij} > 0$ for all $i, j \in S$, we have

$$\delta = \min_{i,j \in S} p_{ij} > 0.$$

From $(P^{(t+1)})_{ij} = (P^{(t)} P)_{ij}$ we get

$$p^{(t+1)}_{ij} = \sum_k p^{(t)}_{ik} p_{kj}.$$

Let $m^{(t)}_i = \min_{j \in S} p^{(t)}_{ij}$ and $M^{(t)}_i = \max_{j \in S} p^{(t)}_{ij}$; note $0 < m^{(t)}_i \leq M^{(t)}_i < 1$. Now,

$$m^{(t+1)}_i = \min_{j \in S} \sum_k p^{(t)}_{ik} p_{kj} \geq m^{(t)}_i \min_{j \in S} \sum_k p_{kj} = m^{(t)}_i,$$

so the sequence $(m^{(t)}_i)$ is non-decreasing. Also,

$$M^{(t+1)}_i = \max_{j \in S} \sum_k p^{(t)}_{ik} p_{kj} \leq M^{(t)}_i \max_{j \in S} \sum_k p_{kj} = M^{(t)}_i,$$

so the sequence $(M^{(t)}_i)$ is non-increasing. Hence $\lim_{t \to \infty} m^{(t)}_i = m_i$ and $\lim_{t \to \infty} M^{(t)}_i = M_i$ both exist, with $m_i \leq M_i$. We now prove that $m_i = M_i$. Consider

$$M^{(t+1)}_i - m^{(t+1)}_i = \max_{j,l \in S} \sum_k p^{(t)}_{ik} (p_{kj} - p_{kl}) \leq \max_{j,l \in S} \left[ M^{(t)}_i \sum_k (p_{kj} - p_{kl})^+ + m^{(t)}_i \sum_k (p_{kj} - p_{kl})^- \right],$$

where $\sum_k (p_{kj} - p_{kl})^+$ denotes the sum over only the positive terms ($p_{kj} - p_{kl} > 0$) and, similarly, $\sum_k (p_{kj} - p_{kl})^-$ denotes the sum over only the negative terms ($p_{kj} - p_{kl} < 0$).

Since $\sum_k (p_{kj} - p_{kl}) = \sum_k p_{kj} - \sum_k p_{kl} = 1 - 1 = 0$, we have $\sum_k (p_{kj} - p_{kl})^- = -\sum_k (p_{kj} - p_{kl})^+$, and therefore

$$M^{(t+1)}_i - m^{(t+1)}_i \leq \left( M^{(t)}_i - m^{(t)}_i \right) \max_{j,l \in S} \sum_k (p_{kj} - p_{kl})^+.$$

If $\max_{j,l} \sum_k (p_{kj} - p_{kl})^+ = 0$, then $M^{(t+1)}_i = m^{(t+1)}_i$. If $\max_{j,l} \sum_k (p_{kj} - p_{kl})^+ \neq 0$, then for the pair $j, l$ that attains the maximum, let $r$ be the number of indices $k \in S$ for which $p_{kj} - p_{kl} > 0$, and $s$ the number of indices for which $p_{kj} - p_{kl} < 0$. Then $r \geq 1$, and $\tilde{n} = r + s$ satisfies $1 \leq \tilde{n} \leq n$. Writing $\sum^+$ for the sum over the $r$ positive indices,

$$\sum_k (p_{kj} - p_{kl})^+ = \sum^+ p_{kj} - \sum^+ p_{kl} \leq (1 - s\delta) - r\delta = 1 - \tilde{n}\delta \leq 1 - \delta < 1,$$

since each of the $s$ entries $p_{kj}$ excluded from $\sum^+$ is at least $\delta$, and each of the $r$ entries $p_{kl}$ in $\sum^+$ is at least $\delta$. Hence the estimate

$$M^{(t+1)}_i - m^{(t+1)}_i \leq (1 - \delta) \left( M^{(t)}_i - m^{(t)}_i \right) \leq \cdots \leq (1 - \delta)^t \left( M^{(1)}_i - m^{(1)}_i \right) \to 0$$

as $t \to \infty$. Let $w_i = M_i = m_i$. But $m^{(t)}_i \leq p^{(t)}_{ij} \leq M^{(t)}_i$, so

$$\lim_{t \to \infty} p^{(t)}_{ij} = w_i \quad \text{for all } j \in S, \qquad \text{i.e.,} \quad \lim_{t \to \infty} P^{(t)} = [w, w, \ldots, w].$$

Moreover,

$$[w, w, \ldots, w] = \lim_{t \to \infty} P^{(t)} = \lim_{t \to \infty} P\, P^{(t-1)} = P\, [w, w, \ldots, w] = [Pw, Pw, \ldots, Pw].$$

Hence $Pw = w$, i.e., $w$ is an eigenvector corresponding to the eigenvalue $\lambda = 1$.

Let $x (\neq 0)$ be an eigenvector corresponding to the eigenvalue $\lambda = 1$. Then

$$Px = x \implies P^{(t)} x = x \implies \lim_{t \to \infty} P^{(t)} x = [w, w, \ldots, w]\, x = \left( w_1 \sum_{i \in S} x_i,\ w_2 \sum_{i \in S} x_i,\ \ldots,\ w_n \sum_{i \in S} x_i \right)^T = \left( \sum_{i \in S} x_i \right) w.$$

But $\lim_{t \to \infty} P^{(t)} x = x$, so $x = \left( \sum_{i \in S} x_i \right) w$, and $\sum_{i \in S} x_i \neq 0$ since $x \neq 0$. Hence the eigenvector corresponding to the eigenvalue 1 is unique up to a constant multiple.

Finally, for any probability vector $q$, the above result shows that

$$\lim_{t \to \infty} P^{(t)} q = \left( w_1 \sum_{i \in S} q_i,\ w_2 \sum_{i \in S} q_i,\ \ldots,\ w_n \sum_{i \in S} q_i \right)^T = w. \qquad \square$$

Let $q$ be a probability distribution vector. Define $q^{(i+1)} = P q^{(i)}$ for $i \in \mathbb{Z}_{\geq 0}$, where $q^{(0)} = q$. From the above theorem, $q^{(t)} = P^{(t)} q^{(0)} = P^{(t)} q$ for all $t \in \mathbb{Z}_{\geq 1}$, and

$$\lim_{t \to \infty} P^{(t)} q = w \implies \lim_{t \to \infty} q^{(t)} = w.$$
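This iteration is exactly how PageRank-style computations find $w$ in practice: repeated multiplication by $P$, with no eigendecomposition needed. A minimal power-iteration sketch, assuming NumPy; the function name `steady_state`, the tolerance, and the iteration cap are illustrative choices.

```python
import numpy as np

def steady_state(P, tol=1e-12, max_iter=10_000):
    """Iterate q^(t+1) = P q^(t) until the iterates stop changing."""
    n = P.shape[0]
    q = np.full(n, 1.0 / n)            # q^(0): uniform distribution
    for _ in range(max_iter):
        q_next = P @ q                 # one transition step
        if np.linalg.norm(q_next - q, ord=np.inf) < tol:
            return q_next
        q = q_next
    return q

# Example: steady_state(P) agrees with the eigenvector w computed earlier.
```

For a strictly positive $P$, the theorem guarantees convergence at the geometric rate $(1 - \delta)^t$ established above, regardless of the starting distribution $q^{(0)}$.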