Novel Approach to Analysis of Nonlinear Recursions

G. Berkolaiko^{1,2}, S. Rabinovich^1, S. Havlin^1
^1 Department of Physics, Bar-Ilan University, 52900 Ramat-Gan, ISRAEL
^2 Department of Mathematics, Voronezh University, 394693 Voronezh, RUSSIA

(April 26, 1996)

We propose a general method to map nonlinear recursions onto linear (but matrix) ones. The solution of the recursion is represented as a product of matrices whose elements depend only on the form of the recursion and not on the initial conditions. First we present the method for polynomial recursions of arbitrary order. Then we discuss its generalizations to systems of polynomial recursions and to arbitrary analytic recursions as well. The only restriction on these equations is that they be solvable with respect to the highest-order term (existence of a normal form).

I. INTRODUCTION

Recursions take a central place in various fields of science. Numerical solution of differential equations and various models of the evolution of a system involve, in general, recursions. So far, only linear recursions with constant coefficients could be solved in closed form [1-3]. Nonlinearity changes the situation drastically. The solution of a rather simple recursion, the logistic map

    y_{n+1} = \lambda y_n (1 - y_n),

is far from being as simple as one might guess. The analysis of its behavior, while based on roundabout approaches, has revealed many amazing properties.

In this paper we suggest a new approach to the solution of nonlinear recursions. It turns out that the coefficients of the i-th iteration of the polynomial depend linearly on the coefficients of the (i-1)-th iteration. Using this fact we succeed in writing down the general

solution of the polynomial recursion and to generalize it to any recursion y_{n+1} = f(y_n) with y_0 \equiv y, where f(x) is an analytic function. In the paper we study the conditions that the analytic function must satisfy to make this generalization possible.

The gist of the method is the construction of a special transfer matrix, T. It allows one to represent the solution in the form

    y_n = \langle e| T^n |y\rangle,                                    (1)

where \langle e| is the first unit vector, |y\rangle is a vector of initial values, and T^n is the n-th power of the matrix T.

II. FIRST-ORDER POLYNOMIAL RECURSION

Here we consider a first-order recursion equation in its normal form

    y_{n+1} = P(y_n),                                                  (2)

where P(x) is a polynomial of degree m:

    P(x) = \sum_{k=0}^{m} a_k x^k,   a_m \neq 0.                       (3)

Let y_0 \equiv y be an initial value for the recursion (2). We denote by |y\rangle the column vector of powers of y, |y\rangle = \{y^\kappa\}_{\kappa=0}^{\infty}, and by \langle e| the row vector \langle e| = [\delta_{\kappa 1}]_{\kappa=0}^{\infty}. It should be emphasized that \kappa runs from 0, since in the general case a_0 \neq 0. In this notation \langle e|y\rangle is a scalar product that yields

    \langle e|y\rangle = y.                                            (4)

Theorem 1. For any recursion of type Eq. (2) there exists a matrix T = \{T_{\kappa k}\}_{\kappa,k=0}^{\infty} such that

    y_n = \langle e| T^n |y\rangle.                                    (5)

The matrix power T^n exists for all n, and all the operations on the right-hand side of Eq. (5) are associative.

Proof. For n = 0 the statement of the theorem is valid (see Eq. (4)). We introduce the column vector |y_1\rangle \equiv \{y_1^\kappa\}_{\kappa=0}^{\infty}, where y_1 = P(y). Let T be a matrix such that

    |y_1\rangle = T |y\rangle.                                         (6)

The existence of this matrix will be proven later on. If such a matrix exists then, analogously to Eq. (4), we have y_1 = \langle e|y_1\rangle = \langle e|T|y\rangle. Therefore the statement of the theorem is true for n = 1 as well.

Assume that Eq. (5) is valid for n = l for any initial value y. Then y_{l+1} can be represented as y_{l+1} = \langle e| T^l |y_1\rangle, where y_1 = P(y) is considered as a new initial value of the recursion. Then, using Eq. (6), one gets

    y_{l+1} = \langle e| T^l |y_1\rangle = \langle e| T^l T |y\rangle = \langle e| T^{l+1} |y\rangle.

Now we prove the existence of the matrix T. One has |y_1\rangle \equiv \{[P(y)]^\kappa\}_{\kappa=0}^{\infty}. In turn, [P(y)]^\kappa is the (m\kappa)-th degree polynomial

    [P(y)]^\kappa = \Big( \sum_{i=0}^{m} a_i y^i \Big)^{\kappa} = \sum_{k=0}^{m\kappa} T_{\kappa k} y^k,   (7)

and we infer that T = \{T_{\kappa k}\}_{\kappa,k=0}^{\infty} obeys Eq. (6). Note that for \kappa and k satisfying k > m\kappa we have T_{\kappa k} = 0; therefore each row is finite (i.e., there is only a finite number of nonzero matrix elements in each row). This proves the existence of the powers of T and the associativity in Eq. (5). Thus the proof is complete.

III. SPECIAL CASES

In some special cases the matrix T has a rather simple form.
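The construction in the proof lends itself to a direct numerical check. The sketch below (our illustration, not part of the paper) builds a truncated transfer matrix for the logistic map by expanding [P(y)]^\kappa row by row, and compares \langle e|T^n|y\rangle with direct iteration; the truncation order N must exceed m^n, the highest power of y reached after n steps, for the comparison to be exact.

```python
import numpy as np
from numpy.polynomial import polynomial as P

def transfer_matrix(coeffs, N):
    """Truncated transfer matrix: T[kappa, k] = coefficient of y^k in P(y)^kappa."""
    T = np.zeros((N, N))
    power = np.array([1.0])                       # P(y)^0 = 1
    for kappa in range(N):
        T[kappa, :min(N, len(power))] = power[:N]
        power = P.polymul(power, np.asarray(coeffs, float))
    return T

# Logistic map y_{n+1} = lam * y_n * (1 - y_n), i.e. P(y) = lam*y - lam*y^2
lam, y0, n, N = 0.5, 0.3, 5, 40                   # N > 2^n, the top degree reached
T = transfer_matrix([0.0, lam, -lam], N)

y = y0                                            # direct iteration
for _ in range(n):
    y = lam * y * (1 - y)

ket = y0 ** np.arange(N)                          # |y> = (1, y, y^2, ...)
y_matrix = np.linalg.matrix_power(T, n)[1] @ ket  # row 1 of T^n is <e| T^n
```

Because each row of T is finite, the truncated matrix power reproduces the first row of the infinite T^n exactly whenever N > m^n, so `y_matrix` agrees with the directly iterated `y` to machine precision.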

(a) The binomial: P(x) = a_p x^p + a_q x^q. As one can see, in the general case the elements of the matrix T have the form of rather complicated sums. However, they degenerate to a fairly simple expression when the polynomial (3) has only two terms. In this case one has

    [P(y)]^\kappa = (a_p y^p + a_q y^q)^\kappa = \sum_i \binom{\kappa}{i} a_p^{\kappa-i} a_q^{i} y^{p(\kappa-i)+qi}.

Denoting k = p(\kappa-i)+qi, i.e. i = l(k) = (q-p)^{-1}(k-p\kappa), we have

    [P(y)]^\kappa = \sum_{k=p\kappa}^{q\kappa} y^k \binom{\kappa}{l(k)} a_p^{\kappa-l(k)} a_q^{l(k)}.

Thus, the matrix elements T_{\kappa k} are

    T_{\kappa k} = \binom{\kappa}{l(k)} a_p^{\kappa-l(k)} a_q^{l(k)}.   (8)

Example 1. To demonstrate this, let us consider the following recursion:

    y_{n+1} = \lambda y_n (1 - y_n),   y_0 \equiv y.                   (9)

Plugging p = 1, q = 2, a_p = -a_q = \lambda into Eq. (8) one immediately obtains

    T_{\kappa k} = (-1)^{k-\kappa} \lambda^{\kappa} \binom{\kappa}{k-\kappa}.   (10)

Thus we recover the result of Rabinovich et al. [4] for the recursion equation known as the logistic map.

(b) The trinomial: P(x) = a_0 + a_p x^p + a_q x^q with a_0 \neq 0. Then the matrix T has a special form.

Lemma 1. Let P(x) = a_0 + a_p x^p + a_q x^q. Then the following decomposition exists: T = A T_0, where T_0 is the matrix corresponding to the polynomial P_0(x) = a_p x^p + a_q x^q and A is a triangular matrix.
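As a cross-check of the closed form (8), one can compare it against brute-force expansion of (a_p y^p + a_q y^q)^\kappa. The sketch below (ours, with the logistic parameters of Example 1) does this row by row and also spot-checks the sign and binomial pattern of Eq. (10).

```python
import math
import numpy as np
from numpy.polynomial import polynomial as P

def T_binomial(ap, aq, p, q, N):
    """Eq. (8): T[kappa, k] = C(kappa, l) ap^(kappa-l) aq^l, l = (k - p*kappa)/(q - p)."""
    T = np.zeros((N, N))
    for kappa in range(N):
        for i in range(kappa + 1):                # i plays the role of l(k)
            k = p * (kappa - i) + q * i
            if k < N:
                T[kappa, k] = math.comb(kappa, i) * ap ** (kappa - i) * aq ** i
    return T

lam, N = 0.5, 12
T = T_binomial(lam, -lam, 1, 2, N)                # logistic map: a_1 = lam, a_2 = -lam

# brute force: expand (lam*y - lam*y^2)^kappa and compare row by row
power = np.array([1.0])
for kappa in range(N):
    row = np.zeros(N)
    row[:min(N, len(power))] = power[:N]
    assert np.allclose(T[kappa], row)
    power = P.polymul(power, [0.0, lam, -lam])

# Eq. (10): T[kappa, k] = (-1)^(k-kappa) lam^kappa C(kappa, k-kappa)
assert T[2, 3] == -2 * lam ** 2 and T[2, 4] == lam ** 2
```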

Proof. Consider P_0(x) = a_p x^p + a_q x^q and the corresponding matrix T_0. It yields

    T_0 |y\rangle = |y_1^0\rangle \equiv \{[P_0(y)]^\kappa\}_{\kappa=0}^{\infty}.

For the matrix T one gets

    T |y\rangle = |y_1\rangle \equiv \{[P(y)]^\kappa\}_{\kappa=0}^{\infty},
    [P(y)]^\kappa = \sum_i \binom{\kappa}{i} a_0^{\kappa-i} (a_p y^p + a_q y^q)^i.

Denoting in the last line A_{\kappa i} = \binom{\kappa}{i} a_0^{\kappa-i}, one obtains |y_1\rangle = A |y_1^0\rangle = A T_0 |y\rangle = T |y\rangle, and hence T = A T_0. The lemma is proven.

IV. POLYNOMIAL RECURSIONS WITH NON-CONSTANT COEFFICIENTS

As shown in [4], a generalization of Eq. (9),

    y_{n+1} = \lambda_n y_n (1 - y_n),   y_0 \equiv y,                 (11)

can be solved using a similar approach. The solution is

    y_n = \langle e| T_n \cdots T_2 T_1 |y\rangle,                     (12)

where the matrix elements of T_i are now i-dependent:

    (T_i)_{\kappa k} = (-1)^{k-\kappa} \lambda_i^{\kappa} \binom{\kappa}{k-\kappa}.   (13)

The same argument is valid for the general case.

Theorem 2. Let the polynomial in Eq. (2) depend on n. Then the solution Eq. (5) takes the form of Eq. (12) with the obvious changes (a_0, ..., a_m become i-dependent functions) in the corresponding matrix elements.

To illustrate this theorem we consider

Example 2. We are going to apply the previous material to the Riccati equation. Usually this name is used for the equation

    y_{n+1} y_n + a_n y_{n+1} + b_n y_n + c_n = 0.

However, by an appropriate change of variables [1,2] this equation can be reduced to a linear one and then treated by conventional techniques. Here we shall be dealing with the following recursion:

    y_{n+1} = a_n + b_n y_n + c_n y_n^2,   y_0 \equiv y.

This is another possible (asymmetric) discrete analog of the Riccati differential equation [5-7]. It is well known that the latter cannot be solved in quadratures. The general results of the two previous sections can be employed to write down the solution of this recursion. Namely, the solution reads

    y_n = \langle e| T_n \cdots T_2 T_1 |y\rangle,

where the matrix T_i is a product of two matrices, T_i = A_i S_i. The matrix elements are given by

    (A_i)_{\kappa k} = \binom{\kappa}{k} a_i^{\kappa-k}   and   (S_i)_{\kappa k} = \binom{\kappa}{k-\kappa} b_i^{2\kappa-k} c_i^{k-\kappa}.

V. SYSTEM OF FIRST-ORDER NONLINEAR EQUATIONS

Actually, very little is known about systems of nonlinear equations [8]. We now extend the method of Sect. II to handle such systems. Let us demonstrate it on the following example:

    u_{n+1} = u_n (1 - v_n),   u_0 \equiv u,
    v_{n+1} = v_n (1 - u_n),   v_0 \equiv v.                           (14)

Proceeding here as in Sect. II, we examine the transformation of a product u^\kappa v^k:

    [u(1-v)]^\kappa [v(1-u)]^k
      = u^\kappa \sum_r \binom{\kappa}{r} (-1)^r v^r \cdot v^k \sum_s \binom{k}{s} (-1)^s u^s
      = \sum_{p,q} u^p v^q (-1)^{(p-\kappa)+(q-k)} \binom{\kappa}{q-k} \binom{k}{p-\kappa}.   (15)
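The product solution for the Riccati-type recursion can be illustrated numerically. In the sketch below (ours; the random small coefficients a_i, b_i, c_i are an arbitrary choice), each T_i = A_i S_i is assembled from the matrix elements above and the chain T_n \cdots T_1 is accumulated by left-multiplication; the truncation order N must exceed 2^n, the degree of y_n as a polynomial in the initial value.

```python
import math
import numpy as np

N = 70                                            # truncation; must exceed 2^n

def A_mat(a):
    """(A_i)[kappa, k] = C(kappa, k) * a^(kappa - k), nonzero for k <= kappa."""
    A = np.zeros((N, N))
    for ka in range(N):
        for k in range(ka + 1):
            A[ka, k] = math.comb(ka, k) * a ** (ka - k)
    return A

def S_mat(b, c):
    """(S_i)[kappa, k] = C(kappa, k - kappa) * b^(2*kappa - k) * c^(k - kappa)."""
    S = np.zeros((N, N))
    for ka in range(N):
        for k in range(ka, min(2 * ka, N - 1) + 1):
            S[ka, k] = math.comb(ka, k - ka) * b ** (2 * ka - k) * c ** (k - ka)
    return S

rng = np.random.default_rng(1)
abc = rng.uniform(-0.2, 0.2, size=(6, 3))         # arbitrary small (a_i, b_i, c_i)
y0 = 0.1
y, M = y0, np.eye(N)
for a, b, c in abc:
    y = a + b * y + c * y * y                     # direct iteration
    M = (A_mat(a) @ S_mat(b, c)) @ M              # M becomes T_n ... T_2 T_1

y_matrix = M[1] @ y0 ** np.arange(N)              # <e| T_n ... T_1 |y>
```

Since every row \kappa of T_i has nonzero entries only up to column 2\kappa, the entries of the truncated product that enter the first row are computed exactly, and `y_matrix` matches the iterated `y`.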

One can proceed with the aid of multidimensional matrices [11], but here we prefer to use more traditional two-dimensional matrices. Introduce now a vector |uv\rangle which consists of the powers u^\kappa v^k of the variables u and v, arranged in some order. Namely, introduce a bijection \sigma(\cdot,\cdot): N^2 \to N, where N is the set of the natural numbers and zero. Then the monomial u^\kappa v^k is the \sigma(\kappa,k)-th component of the vector |uv\rangle. For example, one can use

    \sigma(\kappa,k) = k + \frac{1}{2}(\kappa+k)(\kappa+k+1).

Introducing a matrix T with the elements

    T_{\sigma(\kappa,k)\,\sigma(p,q)} = (-1)^{(p-\kappa)+(q-k)} \binom{\kappa}{q-k} \binom{k}{p-\kappa},

we basically return to the familiar transfer-matrix construction. Indeed, we have

    u_n = \langle e_{\sigma(1,0)}| T^n |uv\rangle,   v_n = \langle e_{\sigma(0,1)}| T^n |uv\rangle,

where \langle e_{\sigma(\kappa,k)}| = \{\delta_{\sigma(\kappa,k)\,i}\}_{i=0}^{\infty}.

Theorem 3. Consider a system of m first-order nonlinear equations:

    x^{(i)}_{n+1} = P_i(x^{(1)}_n, ..., x^{(m)}_n),   i = 1, ..., m.    (16)

Then a matrix T exists such that

    x^{(i)}_n = \langle e_{(i)}| T^n |x^{(1)} \cdots x^{(m)}\rangle,

where the vectors \langle e_{(i)}| and |x^{(1)} \cdots x^{(m)}\rangle are defined as above, with the aid of some bijection \sigma(\cdot, ..., \cdot): N^m \to N.

Below we present the scheme of the proof of a transfer-matrix representation for arbitrary systems of first-order nonlinear equations.
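The bijection \sigma and the two-variable transfer matrix can be exercised as follows (a truncated sketch of Eqs. (14)-(15), ours). Enumerating monomials along diagonals of constant total degree realizes exactly \sigma(\kappa,k) = k + (\kappa+k)(\kappa+k+1)/2; keeping total degree at most D with D \ge 2^n makes the extracted components exact.

```python
import math
import numpy as np

D = 8                                             # keep total degree <= D (>= 2^n here)
pairs = [(s - k, k) for s in range(D + 1) for k in range(s + 1)]
sigma = {pk: i for i, pk in enumerate(pairs)}     # index of (kappa, k) is sigma(kappa, k)
M = len(pairs)

T = np.zeros((M, M))
for (ka, k), row in sigma.items():
    # [u(1-v)]^ka [v(1-u)]^k contributes to u^p v^q with p = ka + s, q = k + r
    for r in range(ka + 1):
        for s in range(k + 1):
            p, q = ka + s, k + r
            if p + q <= D:
                T[row, sigma[(p, q)]] = (-1) ** (r + s) * math.comb(ka, r) * math.comb(k, s)

u0, v0, n = 0.2, 0.3, 3
u, v = u0, v0
for _ in range(n):
    u, v = u * (1 - v), v * (1 - u)               # direct iteration of Eq. (14)

ket = np.array([u0 ** ka * v0 ** k for ka, k in pairs])
Tn = np.linalg.matrix_power(T, n)
u_mat, v_mat = Tn[sigma[(1, 0)]] @ ket, Tn[sigma[(0, 1)]] @ ket
```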

As before, we examine the product P_1^{\kappa_1} \cdots P_m^{\kappa_m} \equiv P_{(\kappa_1, ..., \kappa_m)}. The polynomial P_{(\kappa_1, ..., \kappa_m)} depends on the m variables x^{(1)}, ..., x^{(m)} and therefore can be represented as

    P_{(\kappa_1, ..., \kappa_m)}(x^{(1)}, ..., x^{(m)}) = \langle t_{(\kappa_1, ..., \kappa_m)}| x^{(1)} \cdots x^{(m)}\rangle,

where \langle t_{(\kappa_1, ..., \kappa_m)}| is the constant vector of the coefficients of the polynomial P_{(\kappa_1, ..., \kappa_m)}. This vector is the \sigma(\kappa_1, ..., \kappa_m)-th row of the transfer matrix T.

The above procedure can easily be generalized to polynomials with variable coefficients, P_i(n; x^{(1)}, ..., x^{(m)}), in complete analogy with Theorem 2.

VI. RECURSION SOLUTION FOR ARBITRARY ANALYTIC FUNCTIONS IN THE RHS

Let the series

    \sum_{k=0}^{\infty} a_k x^k \equiv f(x)                            (17)

converge absolutely in [-r, r]. Then the series

    \sum_{k=0}^{\infty} \tilde a_k x^k \equiv \tilde f(x),             (18)

where \tilde a_k = |a_k|, also converges in [-r, r]. We construct the following series, absolutely convergent in [-r, r] [9,10]:

    \Big[ \sum_{k=0}^{\infty} a_k x^k \Big]^{\kappa}
      = a_0^{\kappa} + \binom{\kappa}{1} a_1 a_0^{\kappa-1} x
      + \Big[ \binom{\kappa}{1} a_2 a_0^{\kappa-1} + \binom{\kappa}{2} a_1^2 a_0^{\kappa-2} \Big] x^2 + \cdots
      = \sum_k T_{\kappa k} x^k,                                       (19)

    \Big[ \sum_{k=0}^{\infty} \tilde a_k x^k \Big]^{\kappa}
      = \tilde a_0^{\kappa} + \binom{\kappa}{1} \tilde a_1 \tilde a_0^{\kappa-1} x
      + \Big[ \binom{\kappa}{1} \tilde a_2 \tilde a_0^{\kappa-1} + \binom{\kappa}{2} \tilde a_1^2 \tilde a_0^{\kappa-2} \Big] x^2 + \cdots
      = \sum_k \tilde T_{\kappa k} x^k.                                (20)

Definition 1. The matrix T = \{T_{\kappa k}\}_{\kappa,k=0}^{\infty} is said to be the transfer matrix of the analytic function f(x) if \sum_{k=0}^{\infty} T_{\kappa k} x^k = f^{\kappa}(x).

Using Eq. (19) one can estimate the coefficients T_{\kappa k} for a fixed k:

    |T_{\kappa k}| \le \binom{\kappa+k}{k} \tilde a_0^{\kappa-k} \max_{i \le k} \tilde a_i^{\,k} \le C (\tilde a_0 + \epsilon)^{\kappa}   (21)

for arbitrary \epsilon > 0, a constant C = C(\epsilon), and \kappa > k.

Theorem 4. Let the series

    \sum_{k=0}^{\infty} b_k x^k \equiv g(x)                            (22)

converge absolutely in [-R, R]. Let the matrix S be the transfer matrix of the function g(x), and let \tilde r \le r be such that the function \tilde f(x), defined by (18), satisfies \tilde f(x): [-\tilde r, \tilde r] \to [0, R]. Then the matrix product ST exists and is equal to the transfer matrix of the composition g \circ f(x) of the functions f(x) and g(x); i.e., the \kappa-th row of the matrix ST is the vector of the coefficients of the Maclaurin expansion of the function [g \circ f]^{\kappa}(x).

Proof. First we prove the existence of the matrix ST. One can estimate the (\kappa,k)-th element of the matrix ST using Eq. (21):

    |(ST)_{\kappa k}| \le \sum_i |S_{\kappa i}| |T_{ik}|
      \le C_1 + \sum_{i=k+1}^{\infty} |S_{\kappa i}| |T_{ik}|
      \le C_1 + C \sum_{i=k+1}^{\infty} |S_{\kappa i}| (\tilde a_0 + \epsilon)^i.

Since \min \tilde f(x) = \tilde f(0) = \tilde a_0 < R, we can choose \epsilon > 0 sufficiently small that \tilde a_0 + \epsilon < R; therefore the series in the last expression converges. Thus the matrix ST exists.

Now we should study the series

    \sum_k (ST)_{\kappa k} x^k = \sum_k \sum_i S_{\kappa i} T_{ik} x^k.   (23)

Since the series converge absolutely,

    \sum_i |S_{\kappa i}| \, [\tilde f(|x|)]^i < \infty,               (24)

we can change the order of summation in Eq. (23) [10] and obtain

    \sum_i S_{\kappa i} \sum_k T_{ik} x^k = \sum_i S_{\kappa i} [f(x)]^i = [g(f(x))]^{\kappa} = [g \circ f]^{\kappa}(x).

The theorem is proved.

Restricting ourselves to the first row of the matrix S only, we get the following

Corollary 1. Let the functions f(x) and g(x) satisfy the conditions of Theorem 4, and let the bra-vector \langle b| be the vector of the coefficients of the series (22). Then the vector \langle c| = \langle b| T consists of the coefficients of the series

    \sum_k c_k x^k = g \circ f(x).                                     (25)

It is easy to see that, calculating the scalar product of the vector \langle c| and the vector |x\rangle = \{x^i\}_{i=0}^{\infty}, we obtain

    \langle b| T |x\rangle = g \circ f(x).                             (26)

In this way we can write down the solution of the recursion

    y_{n+1} = f(y_n),   y_0 \equiv y.                                  (27)

Theorem 5. Let the series (17) converge absolutely on [-r, r], let the matrix T be the transfer matrix of the function f(x), and let \tilde r < r be such that (see Eq. (18)) \tilde f(x): [-\tilde r, \tilde r] \to [0, \tilde r]. Then for any initial condition y \in [-\tilde r, \tilde r]

    y_n = \langle e| T^n |y\rangle,                                    (28)

where the bra-vector \langle e| = [\delta_{i1}]_{i=0}^{\infty} and the ket-vector |y\rangle = \{y^i\}_{i=0}^{\infty}.

In order to solve a generalization of the recursion (27), y_{n+1} = f(n; y_n), we should consider n-dependent matrices T_n instead of the matrix T.

One can use the same technique to solve the more general case of multivariable recursions,

    y_n = f(y_{n-1}),   f: R^n \to R^n,   f(x) = \{f_i(x)\}_{i=1}^{n},   x \in R^n.

Here we should consider the series expansions of the functions \prod_{i=1}^{n} [f_i(x)]^{a_i}, instead of the series (19), and then proceed by analogy with multivariable polynomial recursions. However, since the notation becomes very cumbersome, we shall not write down this formalism.

The theorems of this section can easily be generalized to complex mappings. Namely, the statement of Theorem 4 for complex-variable functions reads:
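For a non-polynomial f the transfer matrix can be assembled from truncated Maclaurin powers. The sketch below uses f(x) = (e^x - 1)/2, our own test function (not from the paper): all its coefficients are positive, so the majorant \tilde f = f maps [0, 1] into [0, (e-1)/2] \subset [0, 1] and Theorem 5 applies with \tilde r = 1. Since a_0 = 0 here, the truncated T is upper triangular, and \langle e|T^n|y\rangle differs from direct iteration only by a tail of order y^N.

```python
import numpy as np
from math import exp, factorial

N = 30
# f(x) = (exp(x) - 1)/2: a_0 = 0, a_k = 1/(2 k!) for k >= 1
a = np.array([0.0] + [0.5 / factorial(k) for k in range(1, N)])

T = np.zeros((N, N))
power = np.zeros(N)
power[0] = 1.0                                    # f(x)^0 = 1
for kappa in range(N):
    T[kappa] = power
    power = np.convolve(power, a)[:N]             # next power, truncated at x^(N-1)

y0, n = 0.4, 4
y = y0
for _ in range(n):
    y = (exp(y) - 1) / 2                          # direct iteration

y_matrix = np.linalg.matrix_power(T, n)[1] @ y0 ** np.arange(N)
```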

Theorem 6. Let the series (17), (18) and (22) be series of the complex variable z with complex coefficients. Let the series (17) and (18) converge in the circle |z| \le r and the series (22) converge in |z| \le R. Let the matrices S and T be the transfer matrices of the functions g(z) and f(z), respectively, and let \tilde r \le r be such that \tilde f(\tilde r) \le R. Then the matrix product ST exists and equals the transfer matrix of the composition g \circ f(z) of the functions f(z) and g(z); i.e., the \kappa-th row of the matrix ST is the vector of the coefficients of the Maclaurin expansion of the function [g \circ f]^{\kappa}(z), convergent in |z| \le \tilde r.

VII. EXAMPLES

In order to show that the condition \tilde f(x): [-\tilde r, \tilde r] \to [0, R] is essential (see Theorem 4), we present the following

Example 3. Let

    g(x) = x^4 - \frac{1}{4} x^6 + \frac{1}{9} x^8 - \frac{1}{16} x^{10} + \cdots,

i.e. b_{2i} = (-1)^i / (i-1)^2 for i \ge 2 and b_{2i+1} = 0, so that the radius of convergence is R = 1. Let f(x) be defined by f(x) = 2x(1 - x^2), f: [-1, 1] \to [-1, 1]. The transfer matrix T of the function f(x), according to Eq. (8), is

    T_{\kappa k} = (-1)^{(k-\kappa)/2} \binom{\kappa}{(k-\kappa)/2} 2^{\kappa}

for even k - \kappa. The signs in the sequences \{S_{1,2i}\}_{i=1}^{\infty} and \{T_{2i,2k}\}_{i=1}^{\infty} alternate. Therefore the sign in the sequence \{S_{1,2i} T_{2i,2k}\}_{i=1}^{\infty} is constant, and for the 2k-th component of the first row of the matrix ST one has

    |c_{1,2k}| = \Big| \sum_i S_{1,2i} T_{2i,2k} \Big| = \sum_i |S_{1,2i} T_{2i,2k}|
      \ge |S_{1,2(k-2)} T_{2(k-2),2k}| = \frac{1}{(k-3)^2} \, \frac{(2k-4)(2k-5)}{2} \, 4^{k-2} \ge 4^{k-2}

for any k > 3. Thus the components of the first row cannot be the coefficients of the Maclaurin expansion of the function g \circ f, because of its divergence for |x| > 1/4.

The following example presents a family of functions whose transfer-matrix form is invariant under multiplication.

Example 4. Let the function f(x) = ax/(b + x). Then the components of the transfer matrix T are

    T_{\kappa k} = \binom{k-1}{\kappa-1} (-1)^{k-\kappa} a^{\kappa} b^{-k}   for \kappa, k > 0,
    T_{\kappa k} = \delta_{\kappa k}   for \kappa = 0 or k = 0,            (29)

where

    \binom{n}{m} = \frac{n!}{m!(n-m)!}   for integer 0 \le m \le n,   \binom{n}{m} = 0 for any other m, n,

and \delta_{nm} is the Kronecker symbol. If g(x) = cx/(d + x) and S is the transfer matrix of g(x), then the components of the matrix ST are

    (ST)_{\kappa k} = \sum_i S_{\kappa i} T_{ik}
      = \sum_i \binom{i-1}{\kappa-1} (-1)^{i-\kappa} c^{\kappa} d^{-i} \binom{k-1}{i-1} (-1)^{k-i} a^{i} b^{-k}
      = \binom{k-1}{\kappa-1} (-1)^{k-\kappa} c^{\kappa} b^{-k} \sum_{l=0}^{k-\kappa} \binom{k-\kappa}{l} (a/d)^{l+\kappa}
      = \binom{k-1}{\kappa-1} (-1)^{k-\kappa} \Big( \frac{ac}{a+d} \Big)^{\kappa} \Big( \frac{bd}{a+d} \Big)^{-k}.

One can see that the matrix elements (ST)_{\kappa k} have the same form as in Eq. (29).

VIII. SUMMARY

In this paper we have presented a new method of obtaining the solution of arbitrary polynomial recursions. The method has been generalized to systems of multivariable recursions and to analytic recursions. We have found a class of analytic recursions to which the method can be applied; in particular, this class contains functions which are analytic over the whole space. Generally, the solution is obtained in the form of a matrix power applied to the vector of initial values, and we have presented a way to construct such a matrix. Famous and important examples, such as the logistic map and the Riccati recursion, have been considered, and the corresponding matrices have been written down explicitly.

The following generalizations can also be made:

a) Multivariable analytic recursions. This can be done, for example, as in Section V.
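The closed form (29) and the closure under composition can be verified numerically. In the sketch below (ours), the first row of T is checked against the series of f(x) = ax/(b+x), and S T is compared with the matrix of the composed map; the composition parameters a' = ac/(a+d), b' = bd/(a+d) follow from the direct computation g(f(x)) = acx/(bd + (a+d)x).

```python
import math
import numpy as np

def mobius_T(a, b, N):
    """Eq. (29): T[ka, k] = C(k-1, ka-1) (-1)^(k-ka) a^ka b^(-k); delta on row/column 0."""
    T = np.zeros((N, N))
    T[0, 0] = 1.0
    for ka in range(1, N):
        for k in range(ka, N):
            T[ka, k] = math.comb(k - 1, ka - 1) * (-1) ** (k - ka) * a ** ka * b ** (-k)
    return T

N, (a, b), (c, d) = 25, (2.0, 3.0), (1.5, 2.0)
T, S = mobius_T(a, b, N), mobius_T(c, d, N)

x = 0.1                                           # row 1 of T reproduces f(x) = ax/(b+x)
assert abs(T[1] @ x ** np.arange(N) - a * x / (b + x)) < 1e-12

# closure: ST is again of the form (29), with a' = ac/(a+d) and b' = bd/(a+d)
ST_formula = mobius_T(a * c / (a + d), b * d / (a + d), N)
assert np.allclose(S @ T, ST_formula)
```

Because T is upper triangular (f has no constant term), the truncated product S @ T coincides entry-by-entry with the infinite matrix product, so the comparison is exact up to rounding.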

b) Systems of higher-order nonlinear equations. The scheme is quite obvious: introduce new variables to bring each equation to the first-order form and then construct a transfer matrix, as in the previous case.

IX. ACKNOWLEDGMENT

We thank Dany Ben-Avraham for a critical reading.

[1] R. P. Agarwal: Difference Equations and Inequalities, Marcel Dekker, New York, 1992
[2] V. Lakshmikantham, D. Trigiante: Theory of Difference Equations: Numerical Methods and Applications, Academic Press, New York, 1988
[3] F. B. Hildebrand: Finite-Difference Equations and Simulations, Prentice-Hall, Englewood Cliffs, NJ, 1968
[4] S. Rabinovich, G. Berkolaiko, S. Buldyrev, A. Shehter and S. Havlin: Physica A 218, 457 (1995)
[5] D. Zwillinger: Handbook of Differential Equations, 2nd ed., Academic Press, Boston, 1992
[6] W. T. Reid: Riccati Differential Equations, Academic Press, New York, 1972
[7] S. Bittanti, A. J. Laub, J. C. Willems, eds.: The Riccati Equation, Springer, Berlin, 1991
[8] V. L. Kocic and G. Ladas: Global Behavior of Nonlinear Difference Equations of Higher Order with Applications, Kluwer, Dordrecht, 1993
[9] R. Courant: Differential and Integral Calculus, Blackie & Son, London, Vol. 1, 1963
[10] E. Goursat: A Course in Mathematical Analysis, Dover, New York, Vol. 1, 1956
[11] S. Rabinovich, G. Berkolaiko, S. Havlin: preprint, to be published