On the Solution of Linear Mean Recurrences

David Borwein, Jonathan M. Borwein, Brailey Sims

October 2012

(David Borwein: Department of Mathematics, University of Western Ontario, London, ON, Canada; email dborwein@uwo.ca. Jonathan M. Borwein and Brailey Sims: Centre for Computer-assisted Research Mathematics and its Applications (CARMA), School of Mathematical and Physical Sciences, University of Newcastle, Callaghan, NSW 2308, Australia; email jonathan.borwein@newcastle.edu.au, jborwein@gmail.com, brailey.sims@newcastle.edu.au.)

Abstract. Motivated by questions of algorithm analysis, we provide several distinct approaches to determining convergence and limit values for a class of linear iterations.

1 Introduction

Problem I. Determine the behaviour of the sequence defined recursively by

    x_n := (x_{n-1} + x_{n-2} + ... + x_{n-m})/m    for n >= m + 1,    (1)

and satisfying the initial conditions

    x_k = a_k, for k = 1, 2, ..., m,    (2)

where a_1, a_2, ..., a_m are given real numbers.

This problem was encountered by Bauschke, Sarada and Wang [1] while examining algorithms to compute zeroes of maximal monotone operators in optimization. Questions they raised concerning its resolution motivated our ensuing consideration of various approaches whereby it might be addressed.
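Before any analysis, the convergence is easy to witness experimentally. The sketch below (with arbitrary illustrative starting values) iterates Problem I and compares the result with the closed-form limit 2/(m(m+1)) * sum_k k*a_k that the paper derives in Section 2:

```python
def mean_iterate(a, steps=200):
    """Iterate x_n = (x_{n-1} + ... + x_{n-m}) / m, starting from a_1, ..., a_m."""
    m = len(a)
    x = list(a)
    for _ in range(steps):
        x.append(sum(x[-m:]) / m)
    return x[-1]

a = [3.0, 1.0, 4.0, 1.0]          # illustrative initial values, m = 4
m = len(a)
limit = mean_iterate(a)
predicted = 2.0 / (m * (m + 1)) * sum((k + 1) * a[k] for k in range(m))
print(limit, predicted)           # the two values agree to machine precision
```

Two hundred steps vastly overshoot what is needed: the error decays geometrically, at the rate of the second-largest characteristic root.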

We suspect that, like us, the first thing most readers do when presented with a discrete iteration is to try to solve for the limit, call it L, by taking the limit in (1). Supposing the limit to exist, we deduce

    L = (L + L + ... + L)/m = L    (m copies of L in the numerator),    (3)

and learn nothing, at least not about the limit. There is a clue in that the result is vacuous in large part because it involves an average, or mean. In the next three sections, we present three quite distinct approaches. While at least one will be familiar to many readers, we suspect not all three will be. Each has its advantages, both as an example of more general techniques and as a doorway to a beautiful corpus of mathematics.

2 Spectral solution

We start with what may well be the best known approach. It may be found in many linear algebra courses, often along with a discussion of the Fibonacci numbers: F_n = F_{n-1} + F_{n-2} with F_0 = 0, F_1 = 1. Equation (1) is an example of a linear homogeneous recurrence relation of order m with constant coefficients. Standard theory, see for example [5, Chapter 3] or [9], runs as follows.

Theorem 2.1 (Linear recurrences). The general solution of a linear recurrence with constant coefficients,

    x_n = Σ_{k=1}^m α_k x_{n-k},

has the form

    x_n = Σ_{k=1}^l q_k(n) r_k^n,    (4)

where r_1, ..., r_l are the l distinct roots of the characteristic polynomial

    p(r) := r^m − Σ_{k=1}^m α_k r^{m−k},    (5)

and each q_k is a polynomial of degree at most m_k − 1, with m_k the algebraic multiplicity of r_k.

Typically, elementary books only consider simple roots, but we shall use a little more.
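In the simple-root case, Theorem 2.1 is exactly what yields Binet's formula for the Fibonacci numbers mentioned above; a quick numerical sketch:

```python
import math

# F_n = F_{n-1} + F_{n-2} has characteristic polynomial r^2 - r - 1,
# whose two simple roots are the golden ratio phi and its conjugate psi.
phi = (1 + math.sqrt(5)) / 2
psi = (1 - math.sqrt(5)) / 2

def fib_closed(n):
    # Fitting c1*phi^n + c2*psi^n to F_0 = 0, F_1 = 1 gives c1 = -c2 = 1/sqrt(5).
    return round((phi ** n - psi ** n) / math.sqrt(5))

f = [0, 1]
for _ in range(20):
    f.append(f[-1] + f[-2])
print([fib_closed(n) for n in range(22)] == f)   # True
```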

2.1 Our equation analysed

The linear recurrence relation specified by equation (1) has characteristic polynomial

    p(r) := r^m − (1/m)(r^{m−1} + r^{m−2} + ... + r + 1) = (m r^{m+1} − (m+1) r^m + 1) / (m (r − 1)),    (6)

with roots r_1 = 1, r_2, r_3, ..., r_m. Since

    p′(1) = m − (1/m) Σ_{n=1}^{m−1} n = m − (m − 1)/2 = (m + 1)/2 ≠ 0,

the root at 1 is simple.

We next show that if p(r) = 0 and r ≠ 1, then |r| < 1. We argue as follows. We know from (6) that, for r ≠ 1, p(r) = 0 if and only if

    r^{−m} + m r = 1 + m,    (7)

or equivalently (multiply by r^m and divide by r − 1)

    m r^m = 1 + r + ... + r^{m−1}.

If |r| > 1, the triangle inequality gives m |r|^m ≤ 1 + |r| + ... + |r|^{m−1} < m |r|^m, since each of the m terms in the middle is strictly smaller than |r|^m; this contradiction shows that p(r) ≠ 0 when |r| > 1. (For real r > 1 this is also visible directly from (7), since the function f(x) := x^{−m} + m x is strictly increasing for real x > 1 and f(1) = 1 + m.)

Suppose therefore that p(r) = 0 with r = e^{iθ}, 0 ≤ θ < 2π. Taking real parts in (7), we must have

    cos(mθ) + m cos(θ) = 1 + m,

which, since cos(mθ) ≤ 1 and cos(θ) ≤ 1, is only possible when θ = 0. By (4) we must then have

    x_n = c_1 + Σ_{k=2}^l q_k(n) r_k^n,    (8)

where each r_k with k ≥ 2 lies in the open unit disc. Thus, the limit in (8) exists and equals c_1, the constant polynomial coefficient attached to the eigenvalue 1.

2.2 Identifying the limit

In fact we may use (6) to see that all roots are simple. It follows from (6) that

    ((1 − r) p(r))′ = (m + 1) r^{m−1} (1 − r),

and hence that the only possible multiple root of p is r = 1, which we have already shown to be simple. The solution is therefore actually of the form

    x_n = c_1 + Σ_{k=2}^m c_k r_k^n, with c_1, ..., c_m constants.    (9)

Observe now that if r is any of the roots r_2, r_3, ..., r_m, then

    Σ_{n=1}^m n r^n = (m r^{m+2} − (m+1) r^{m+1} + r) / (r − 1)^2 = m r p(r) / (r − 1) = 0,    (10)

and so multiplying (9) by n and summing from n = 1 to m (recalling that x_n = a_n for n ≤ m) we obtain

    c_1 = (2 / (m(m+1))) Σ_{n=1}^m n a_n.    (11)

Thence, we do have convergence, and the limit L = c_1 is given by (11).

Example 2.2 (The weighted mean). We may perform the same analysis if the arithmetic average in (1) is replaced by any weighted arithmetic mean

    W^(α)(x_1, x_2, ..., x_m) := α_1 x_1 + α_2 x_2 + ... + α_m x_m

for strictly positive weights α_k > 0 with Σ_{k=1}^m α_k = 1. Then W^(1/m) = A is the arithmetic mean of Problem I. As is often the case, the analysis becomes easier when we generalize. The recurrence relation in this case is

    x_n = α_m x_{n−1} + α_{m−1} x_{n−2} + ... + α_1 x_{n−m}    for n ≥ m + 1,

with companion matrix

    A_m := [ α_m  α_{m−1}  ...  α_2  α_1
             1    0        ...  0    0
             0    1        ...  0    0
             ...                     ...
             0    0        ...  1    0 ].    (12)

The corresponding characteristic polynomial of the recurrence relation,

    p(r) := r^m − (α_m r^{m−1} + α_{m−1} r^{m−2} + ... + α_2 r + α_1),

is also the characteristic polynomial of the matrix.
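The algebraic identity (10), on which the evaluation of the limit rests, is routine to spot-check numerically at points away from r = 1 (the sample points below are arbitrary):

```python
def p(m, r):
    """Characteristic polynomial p(r) = r^m - (r^{m-1} + ... + r + 1)/m of Problem I."""
    return r ** m - sum(r ** k for k in range(m)) / m

def identity_gap(m, r):
    """|sum_{n=1}^m n r^n  -  m r p(r)/(r-1)|, which should vanish identically."""
    lhs = sum(n * r ** n for n in range(1, m + 1))
    rhs = m * r * p(m, r) / (r - 1)
    return abs(lhs - rhs)

samples = (1.5 + 0.5j, -0.7 + 0.2j, 0.3 - 1.1j, -2.0 + 1.0j)
gaps = [identity_gap(m, r) for m in (2, 3, 5, 8) for r in samples]
print(max(gaps) < 1e-8)   # True: the two sides agree up to rounding
```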

Clearly p(1) = 0. Now suppose r is a root of p and set ρ := |r|. Then the triangle inequality and the mean property of W^(α) imply that

    ρ^m ≤ Σ_{k=1}^m α_k ρ^{k−1} ≤ max_{0 ≤ k ≤ m−1} ρ^k,    (13)

and so 0 ≤ ρ ≤ 1. If ρ = 1 but r ≠ 1, then r = e^{iθ} for 0 < θ < 2π and, on observing that r^{−m} p(r) = 0 and equating real parts, we get

    1 = Σ_{k=1}^m α_k cos((m + 1 − k)θ),

whence, as the weights are positive and α_m multiplies cos(θ), we must have cos(θ) = 1, which is a contradiction. [Alternatively, we may note that the modulus is a strictly convex function, whence equality in the triangle inequality forces e^{iθ} = 1, which is again a contradiction.] Thence, all roots other than 1 have modulus strictly less than one. Finally, since

    p′(1) = m − Σ_{k=1}^m (k − 1) α_k ≥ m − (m − 1) Σ_{k=1}^m α_k = 1 > 0,

the root at 1 is still simple. Moreover, if σ_k := α_1 + α_2 + ... + α_k, then

    p(r) = (r − 1) Σ_{k=1}^m σ_k r^{k−1}.    (14)

Hence, p has no other positive real root. In particular, from (4) we again have

    x_n = L + Σ_{k=2}^l q_k(n) r_k^n = L + ε_n,

where ε_n → 0, since the root at 1 is simple while all other roots are strictly inside the unit disc; but those roots need not be simple, as illustrated in Example 2.4.

Remark 2.3. An analysis of the proof in Example 2.2 shows that the conclusions continue to hold for non-negative weights as long as the highest-order weight α_m (the weight on the most recent term) is strictly positive.

Example 2.4 (A weighted mean with multiple roots). The polynomial

    p(r) = r^6 − (81 r^5 + 45 r^4 + 18 r^3 + 16 r^2 + r + 1) / 162    (15)
         = (1/162)(2r + 1)(r − 1)(1 + 9r^2)^2,    (16)

has a root at one, a root at −1/2, and a repeated pair of conjugate roots at ±i/3. Nonetheless, the weighted mean iteration

    x_n = (81 x_{n−1} + 45 x_{n−2} + 18 x_{n−3} + 16 x_{n−4} + x_{n−5} + x_{n−6}) / 162

is covered by the development of Example 2.2. The limit is

    L := (162 a_6 + 81 a_5 + 36 a_4 + 18 a_3 + 2 a_2 + a_1) / 300.    (17)

Once found, this is easily checked (in a computer algebra system) from the Invariance principle of the next section. In fact, the coefficients were found by looking in Maple at the thousandth power of the corresponding matrix and converting the entries to rationals. The polynomial was constructed by examining how to place repeated roots on the imaginary axis while preserving the increasing coefficients required in (14). One general potential form is

    p(σ, τ) := (r − 1)(r + σ)(r^2 + τ^2)^2,

and we selected p(1/2, 1/3). In the same fashion,

    p(1/2, 1/2) = r^6 − (16 r^5 + 8 r^3 + 6 r^2 + r + 1) / 32,

in which r^4 has a zero coefficient, but the corresponding iteration remains well behaved; see Remark 2.3.

We will show in Example 3.3 that the approach of the next section provides the most efficient way of identifying the limit in this generalization. (In fact, we shall discover that the numerator coefficients in (17) are the partial sums, taken from the constant term upward, of those in (15).) Example 3.3 also provides a quick way to check the assertions about limits in the next example.

Example 2.5 (Limiting examples I). Consider first

    [ 1/2  0  1/2
      1    0  0
      0    1  0 ].

The corresponding iteration is x_n = (x_{n−1} + x_{n−3})/2, with limit a_1/4 + a_2/4 + a_3/2. By comparison, for

    [ 1/2  1/2  0
      1    0    0
      0    1    0 ]

the corresponding iteration is x_n = (x_{n−1} + x_{n−2})/2, with limit (a_2 + 2 a_3)/3. This can also be deduced by considering Problem I with m = 2 and ignoring the third row and column. The third permutation,

    [ 0  1/2  1/2
      1  0    0
      0  1    0 ],

corresponding to the iteration x_n = (x_{n−2} + x_{n−3})/2, has limit (a_1 + 2 a_2 + 2 a_3)/5.

Finally,

    [ 0  0  1
      1  0  0
      0  1  0 ]

has A_3^3 = I, and so A_3^k is periodic with period three, as is obvious from the iteration x_n = x_{n−3}. We return to these matrices in Example 4.7 of the penultimate section.

3 Mean iteration solution

The second approach, based on [3, Section 8.7], deals very efficiently with equation (1); as a bonus, the proof of convergence we give below holds for nonlinear means, given positive starting values. We say that a real valued function M of m real variables is a strict m-variable mean if

    min(x_1, x_2, ..., x_m) ≤ M(x_1, x_2, ..., x_m) ≤ max(x_1, x_2, ..., x_m),

with equality only when all variables are equal. We observe that when M is a weighted arithmetic mean we may take its domain to be R^m; however, certain nonlinear means, such as the geometric mean G := (x_1 x_2 ... x_m)^{1/m}, are defined only for positive values of the variables.

3.1 Convergence of mean iterations

In the language of [3, Section 8.7], we have the following:

Theorem 3.1 (Convergence of a mean iteration). Let M be any strict m-variable mean and consider the iteration

    x_n := M(x_{n−m}, x_{n−m+1}, ..., x_{n−1}),    (18)

so that when M = A we recover the iteration in (1). Then x_n converges to a finite limit L(x_1, x_2, ..., x_m).

Proof. Indeed, specialization of [3, Exercise 7 of Section 8.7] actually establishes convergence for an arbitrary strict mean; but let us make this explicit for this case. Let x̄_n := (x_n, x_{n−1}, ..., x_{n−m+1}) and let a_n := max x̄_n, b_n := min x̄_n. As noted above, for general means we need to restrict the variables to non-negative values, but for linear means no such restriction is needed. Then for all n, the mean property implies that

    a_n ≥ a_{n+1} ≥ b_{n+1} ≥ b_n.    (19)

Thus, a := lim_n a_n and b := lim_n b_n exist, with a ≥ b. In particular, the vectors x̄_n remain bounded. Select a subsequence x̄_{n_k} converging to some vector x̄. It follows that

    b ≤ min x̄ ≤ max x̄ ≤ a,    (20)

while, by continuity and the monotonicity in (19), every further iterate x̄′ of x̄ under (18) satisfies

    min x̄′ = b and max x̄′ = a.    (21)

After m further steps, however, every entry of the iterated vector is a value of the strict mean M, and so lies strictly between b and a unless all the entries it averages are equal. Since M is a strict mean we must therefore have a = b, and the iteration converges.

It is both here and in Theorem 3.2 that we see the power of identifying the iteration as a mean iteration.

3.2 Determining the limit

In what follows, a mapping L : D^m → R, where D ⊆ R, is said to be a diagonal mapping if L(x, x, ..., x) = x for all x in D.

Theorem 3.2 (Invariance principle [3]). For any mean iteration, the limit L is necessarily a mean, and is the unique diagonal mapping satisfying the Invariance principle:

    L(x_{n−m}, x_{n−m+1}, ..., x_{n−1}) = L(x_{n−m+1}, ..., x_{n−1}, M(x_{n−m}, x_{n−m+1}, ..., x_{n−1})).    (22)

Moreover, L is linear whenever M is.

Proof. We sketch the proof; details may again be found in [3, Section 8.7]. One first checks that the limit, being a pointwise limit of means, is itself a mean and so is continuous on the diagonal. The principle follows since

    L(x̄_m) = ... = L(x̄_n) = L(x̄_{n+1}) = ... = L(lim_n x̄_n) = lim_n x_n.

We leave it to the reader to show that L is linear whenever M is.

We note that we can mix-and-match arguments: if we have used the ideas of the previous section to convince ourselves that the limit exists, the Invariance principle is ready to finish the job.

Example 3.3 (A general strict linear mean). If we suppose that M(y_1, ..., y_m) = Σ_{i=1}^m α_i y_i, with all α_i > 0, and that L(y_1, ..., y_m) = Σ_{i=1}^m λ_i y_i are both linear, we may solve (22) to determine that λ_1 = λ_m α_1 and, for k = 1, 2, ..., m − 1,

    λ_{k+1} = λ_k + λ_m α_{k+1}.    (23)

Whence, on setting σ_k := α_1 + α_2 + ... + α_k, we obtain

    λ_k / λ_m = σ_k.    (24)

Further, since L is a linear mean we have 1 = L(1, 1, ..., 1) = Σ_{k=1}^m λ_k; whence summing (24) from k = 1 to m yields 1/λ_m = Σ_{k=1}^m σ_k, and so (24) becomes

    λ_k = σ_k / Σ_{j=1}^m σ_j.    (25)

In particular, setting α_k ≡ 1/m, we compute that σ_k = k/m, and so λ_k = 2k/(m(m+1)), as was already determined in (11) of the previous section.

Example 3.4 (A nonlinear mean). We may replace A by the Hölder mean

    H_p(x_1, x_2, ..., x_m) := ((1/m) Σ_{i=1}^m x_i^p)^{1/p}

for −∞ < p < ∞. The limit will be (Σ_{k=1}^m λ_k a_k^p)^{1/p}, with λ_k as in (25). In particular, with p = 0 (taken as a limit) we obtain in the limit the weighted geometric mean

    G(a_1, a_2, ..., a_m) = Π_{k=1}^m a_k^{λ_k}.

We may also apply the same considerations to weighted Hölder means.

We conclude this section with an especially neat application of the arithmetic Invariance principle to an example of Carlson [3, Section 8.7].

Example 3.5 (Carlson's logarithmic mean). Consider the iterations with a_0 := a > 0, b_0 := b > a and

    a_{n+1} = (a_n + sqrt(a_n b_n))/2,    b_{n+1} = (b_n + sqrt(a_n b_n))/2,    for n ≥ 0.

In this case convergence is immediate, since b_{n+1} − a_{n+1} = (b_n − a_n)/2. If asked for the limit, you might make little progress. But suppose you are told the answer is the logarithmic mean

    L(a, b) := (a − b) / (log a − log b)

for a ≠ b, and a (the limit as b → a) when a = b > 0. We check that

    L(a_{n+1}, b_{n+1}) = (a_{n+1} − b_{n+1}) / log(a_{n+1}/b_{n+1}) = ((a_n − b_n)/2) / log(sqrt(a_n/b_n)) = L(a_n, b_n),

since

    a_{n+1}/b_{n+1} = (a_n + sqrt(a_n b_n)) / (b_n + sqrt(a_n b_n)) = sqrt(a_n) (sqrt(a_n) + sqrt(b_n)) / (sqrt(b_n) (sqrt(b_n) + sqrt(a_n))) = sqrt(a_n/b_n).

Thus L is invariant under the iteration, and the Invariance principle of Theorem 3.2 confirms that the common limit of a_n and b_n is L(a, b). In particular, for a > 1, starting the iteration from the pair (1, a) produces L(1, a) = (a − 1)/log a, so that

    log a = (a − 1) / L(1, a),

which quite neatly computes the logarithm (slowly) using only arithmetic operations and square roots.

4 Nonnegative matrix solution

A third approach is to exploit directly the non-negativity of the entries of the matrix A_m. This seems best organized as a case of the Perron-Frobenius theorem [6, Theorem 8.8.1]. Recall that a matrix A is row stochastic if all entries are non-negative and each row sums to one, and is irreducible if for every pair of indices i and j there exists a natural number k such that (A^k)_{ij} is not equal to zero. Recall also that the spectral radius is ρ(A) := sup{|λ| : λ is an eigenvalue of A} [6]. Since A is not assumed symmetric, we may have distinct eigenvectors for A and its transpose corresponding to the same non-zero eigenvalue. We call the latter left eigenvectors.

Theorem 4.1 (Perron-Frobenius, utility grade [2, 6, 8]). Let A be a row-stochastic irreducible square matrix. Then the spectral radius ρ(A) = 1, and 1 is a simple eigenvalue. Moreover, the right eigenvector e := [1, 1, ..., 1]^T and the left eigenvector l = [l_m, l_{m−1}, ..., l_1] (normalized so that its entries sum to one) are necessarily both strictly positive, and the corresponding eigenspaces are one-dimensional. In consequence,

    lim_{k→∞} A^k = [ l_m  l_{m−1}  ...  l_2  l_1 ]    (the same row repeated m times).    (26)

[We choose to consider l as a column vector with the highest order entry at the top.]

The full version of Theorem 4.1 treats arbitrary matrices with non-negative entries. Even in our setting, we do not know a priori that the other eigenvalues are simple; their simplicity is equivalent to the matrix A being similar to a diagonal matrix D whose entries are the eigenvalues, in decreasing order say. In that case A^n = U D^n U^{−1} → U D̄ U^{−1}, where D̄ is the diagonal matrix with diagonal [1, 0, ..., 0]. More generally, the Jordan form [7] suffices to show that (26) still follows.
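The rank-one limit (26) is easy to witness numerically. For the Problem I companion matrix with m = 3, whose left Perron eigenvector, written as in (26), is (1/2, 1/3, 1/6) by (11), a few repeated squarings drive the powers to the predicted limit:

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Row-stochastic, irreducible companion matrix of x_n = (x_{n-1}+x_{n-2}+x_{n-3})/3.
A = [[1/3, 1/3, 1/3],
     [1.0, 0.0, 0.0],
     [0.0, 1.0, 0.0]]

P = A
for _ in range(9):        # nine squarings: P = A^(2^9) = A^512
    P = matmul(P, P)

for row in P:
    print(row)            # every row is (essentially exactly) [1/2, 1/3, 1/6]
```

The subdominant eigenvalues here have modulus sqrt(3)/3, so the off-limit part of A^512 is far below machine precision.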
See [8] for a very nice reprise of the general Perron-Frobenius theory and its many applications (and see also [11]). In particular, [8, Section 4] gives Karlin's resolvent proof of Theorem 4.1.

Remark 4.2 (Collatz and Wielandt [4, 10]). An attractive proof of Theorem 4.1, originating with Collatz and before him Perron, is to consider

    g(x_1, x_2, ..., x_m) := min_{1 ≤ k ≤ m} (Σ_{j=1}^m a_{k,j} x_j) / x_k.

Then the maximum, max_{Σ x_j = 1, x_j ≥ 0} g(x) = g(v) = 1, exists and yields uniquely the Perron-Frobenius vector v (which in our case is e).

Example 4.3 (The closed form for l). The recursion we study is x̄_{n+1} = A x̄_n, where for each k the k-th row of A is a strict arithmetic mean A_k. Hence A is row stochastic and strictly positive, and so its Perron eigenvalue is 1, while l^T A = l^T shows that the limit vector l is the left, or adjoint, eigenvector. Equivalently, this is also a so-called compound iteration, determined by the means A_1, ..., A_m as in [3, Section 8.7], and so mean arguments much as in the previous section also establish convergence. Here we identify the eigenvector l with the corresponding linear function L, since L(x) = ⟨l, x⟩.

Remark 4.4 (The closed form for L). Again we can solve for the left eigenvector, l^T = l^T A, either numerically (using a linear algebra package, or direct iteration) or symbolically. Note that this closed form is simultaneously a generalisation of Theorem 3.2 and a specialization of the general Invariance principle in [3, Section 8.7]. The case originating in (1) again has A being the companion matrix

    A_m := [ a_m  a_{m−1}  ...  a_2  a_1
             1    0        ...  0    0
             0    1        ...  0    0
             ...                     ...
             0    0        ...  1    0 ]

with a_k > 0 and Σ_{k=1}^m a_k = 1.

Proposition 4.5. Suppose a_k > 0 for all 1 ≤ k ≤ m. Then the matrix A_m^m has all entries strictly positive.

Proof. We show by induction that the first k rows of A_m^k have strictly positive entries; for k = 1 this holds because every entry of the first row of A_m is one of the positive weights. Suppose it holds for some k < m. Since A_m^{k+1} = A_m A_m^k,

it follows that

    (A_m^{k+1})_{1j} = Σ_{r=1}^m (A_m)_{1r} (A_m^k)_{rj} > 0,

and that, for 2 ≤ i ≤ k + 1,

    (A_m^{k+1})_{ij} = Σ_{r=1}^m (A_m)_{ir} (A_m^k)_{rj} = (A_m^k)_{i−1,j} > 0.

Thus, the first k + 1 rows of A_m^{k+1} have strictly positive entries, and we are done.

Remark 4.6 (A picture may be worth a thousand words). The last proposition ensures the irreducibility of A_m by establishing the stronger condition that A_m^m is a strictly positive matrix. Both the irreducibility of A_m and this stronger condition may be observed in the following alternative way. There are many equivalent conditions for the irreducibility of A. One obvious condition is that an m × m matrix A with non-negative entries is irreducible if (and only if) A′ is irreducible, where A′ is A with each of its non-zero entries replaced by 1. Now, A′ may be interpreted as the adjacency matrix, see [6, Chapter 8], of the directed graph G with vertices labelled 1, 2, ..., m and an edge from i to j precisely when (A′)_{ij} = 1. In that case, the ij entry of the k-th power of A′ equals the number of paths of length k from i to j in G. Thus, irreducibility of A corresponds to G being strongly connected. For our particular matrix A_m, as given in (12), the associated graph G_m is depicted in Figure 1.

Figure 1: The graph G_m with adjacency matrix A_m′.

The presence of the cycle 1 → m → m − 1 → ... → 2 → 1 shows that G_m is strongly connected, and hence that A_m is irreducible. A moment's checking also reveals that in G_m any vertex i is connected to any other vertex j by a path of length exactly m (when forming such paths, the loop at vertex 1 may be traced as many times as necessary), thus also establishing the strict positivity of A_m^m.
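The graph-theoretic observations of Remark 4.6 can be checked mechanically: the sketch below builds the 0/1 pattern A′ of the companion matrix and uses Boolean matrix powers (path existence) to confirm the strict positivity of the pattern of A_m^m, here for an illustrative m = 6:

```python
def companion_pattern(m):
    """0/1 pattern A' of the companion matrix A_m when all weights a_k are positive."""
    A = [[0] * m for _ in range(m)]
    A[0] = [1] * m               # vertex 1 has an edge to every vertex (and a loop)
    for i in range(1, m):
        A[i][i - 1] = 1          # vertex i+1 has a single edge to vertex i
    return A

def bool_mul(A, B):
    m = len(A)
    return [[int(any(A[i][k] and B[k][j] for k in range(m))) for j in range(m)]
            for i in range(m)]

m = 6
A1 = companion_pattern(m)
powers = [A1]
for _ in range(m - 1):
    powers.append(bool_mul(powers[-1], A1))

# (A'^k)_{ij} = 1 iff there is a path of length k from i to j.
print(all(all(row) for row in powers[m - 1]))   # pattern of A_m^m strictly positive
print(all(all(row) for row in powers[m - 2]))   # but that of A_m^{m-1} is not
```

The first line prints True and the second False, matching Proposition 4.5: m powers are needed before the last row of the state vector has felt every initial value.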

Example 4.7 (Limiting examples II). We return to the matrices of Example 2.5. First we look again at

    [ 1/2  0  1/2
      1    0  0
      0    1  0 ].

Then A_3^4 is coordinate-wise strictly positive (though A_3^3 is not). Thus, A_3 is irreducible despite the first row not being strictly positive. The limit eigenvector is [1/2, 1/4, 1/4], and the corresponding iteration is x_n = (x_{n−1} + x_{n−3})/2 with limit a_1/4 + a_2/4 + a_3/2, where the a_i are the given initial values. Next we consider

    [ 1/2  1/2  0
      1    0    0
      0    1    0 ].

In this case A_3 is reducible, and the limit eigenvector [2/3, 1/3, 0] exists but is not strictly positive (see Remark 2.3). The corresponding iteration is x_n = (x_{n−1} + x_{n−2})/2 with limit (a_2 + 2 a_3)/3. This is also deducible by considering our starting case (1) with m = 2 and ignoring the third row and column. The third case,

    [ 0  1/2  1/2
      1  0    0
      0  1    0 ],

corresponds to the iteration x_n = (x_{n−2} + x_{n−3})/2. It, like the first, is irreducible, with limit (a_1 + 2 a_2 + 2 a_3)/5. Finally,

    [ 0  0  1
      1  0  0
      0  1  0 ]

has A_3^3 = I, and so A_3^k is periodic with period three and does not converge, as is obvious from the iteration x_n = x_{n−3}.

5 Conclusion

All three approaches that we have shown have their delights and advantages. It seems fairly clear, however, that for the original problem, analysis as a mean iteration, while the least well known, is by far the most efficient and also the most elementary. Moreover, all three approaches provide lovely examples in any linear algebra class, or any introduction

to computer algebra. Indeed, they offer different flavours of algorithmics, combinatorics, analysis, algebra and graph theory.

References

[1] Heinz H. Bauschke, Joshua Sarada and Xianfu Wang, On moving averages. Preprint, June 2012.

[2] Abraham Berman and Robert J. Plemmons, Nonnegative Matrices in the Mathematical Sciences, SIAM, 1994.

[3] J. M. Borwein and P. B. Borwein, Pi and the AGM: A Study in Analytic Number Theory and Computational Complexity, John Wiley, New York, 1987 (paperback 1998).

[4] Lothar Collatz, Einschließungssatz für die charakteristischen Zahlen von Matrizen, Mathematische Zeitschrift 48 (1942), 221-226, doi:10.1007/BF01180013.

[5] Carl-Erik Fröberg, Introduction to Numerical Analysis, 2nd edition, Addison-Wesley, 1969.

[6] Chris Godsil and Gordon F. Royle, Algebraic Graph Theory, Springer, New York, 2001.

[7] Gene H. Golub and Charles F. Van Loan, Matrix Computations, 3rd edition, Johns Hopkins University Press, Baltimore, 1996.

[8] C. R. MacCluer, The many proofs and applications of Perron's theorem, SIAM Review 42 (3) (2000), 487-498.

[9] Alexander M. Ostrowski, Solution of Equations in Euclidean and Banach Spaces, Academic Press, 1973.

[10] Helmut Wielandt, Unzerlegbare, nicht negative Matrizen, Mathematische Zeitschrift 52 (1950), 642-648, doi:10.1007/BF02230720.

[11] Wikipedia entry on the Perron-Frobenius theorem, http://en.wikipedia.org/wiki/Perron%E2%80%93Frobenius_theorem.