Novel Approach to Analysis of Nonlinear Recursions

G. Berkolaiko^{1,2}, S. Rabinovich^{1}, S. Havlin^{1}
^{1} Department of Physics, Bar-Ilan University, 52900 Ramat-Gan, ISRAEL
^{2} Department of Mathematics, Voronezh University, 394693 Voronezh, RUSSIA
(April 26, 1996)

We propose a general method to map nonlinear recursions onto linear (but matrix) ones. The solution of the recursion is represented as a product of matrices whose elements depend only on the form of the recursion and not on the initial conditions. First we consider the method for polynomial recursions of arbitrary order, and then we discuss its generalization to systems of polynomial recursions and to arbitrary analytic recursions as well. The only restriction on these equations is that they be solvable with respect to the highest-order term (existence of a normal form).

I. INTRODUCTION

Recursions occupy a central place in various fields of science. Numerical solution of differential equations and various models of the evolution of a system involve, in general, recursions. By now, only linear recursions with constant coefficients can be solved in closed form [1-3]. Nonlinearity, however, changes the situation drastically. The solution of even a rather simple recursion, the logistic map $y_{n+1} = \mu y_n (1 - y_n)$, is far from being as simple as one might guess. The analysis of its behavior, while based on roundabout approaches, has revealed many amazing properties.

In this paper we suggest a new approach to the solution of nonlinear recursions. It turns out that the coefficients of the $i$-th iteration of the polynomial depend linearly on the coefficients of the $(i-1)$-th iteration. Using this fact we succeed in writing down the general
solution of the polynomial recursion and in generalizing it to any recursion $y_{n+1} = f(y_n)$ with $y_0 = y$, where $f(x)$ is an analytic function. In the paper we study the conditions that the analytic function must satisfy to make this generalization possible.

The gist of the method is the construction of a special transfer matrix $T$. It allows one to represent the solution in the form

    y_n = \langle e|T^n|y\rangle,    (1)

where $\langle e|$ is the first unit vector, $|y\rangle$ is a vector of initial values and $T^n$ is the $n$-th power of the matrix $T$.

II. FIRST-ORDER POLYNOMIAL RECURSION

Here we consider a first-order recursion equation in its normal form

    y_{n+1} = P(y_n),    (2)

where $P(x)$ is a polynomial of degree $m$:

    P(x) = \sum_{k=0}^{m} a_k x^k, \qquad a_m \neq 0.    (3)

Let $y_0 = y$ be an initial value for the recursion (2). We denote by $|y\rangle$ the column vector of powers of $y$, $|y\rangle = \{y^\nu\}_{\nu=0}^{\infty}$, and by $\langle e|$ the row vector $\langle e| = [\delta_{\nu 1}]_{\nu=0}^{\infty}$. It should be emphasized that $\nu$ runs from $0$, since in the general case $a_0 \neq 0$. In this notation $\langle e|y\rangle$ is a scalar product that yields

    \langle e|y\rangle = y.    (4)

Theorem 1. For any recursion of type Eq. (2) there exists a matrix $T = \{T_{\nu k}\}_{\nu,k=0}^{\infty}$ such that
    y_n = \langle e|T^n|y\rangle.    (5)

The matrix power $T^n$ exists for all $n$, and all the operations on the right-hand side of Eq. (5) are associative.

Proof. For $n = 0$ the statement of the theorem is valid (see Eq. (4)). We introduce the column vector $|y_1\rangle \equiv \{y_1^\nu\}_{\nu=0}^{\infty}$, where $y_1 = P(y)$. Let $T$ be a matrix such that

    |y_1\rangle = T|y\rangle.    (6)

The existence of this matrix will be proven later on. If such a matrix exists then, analogously to Eq. (4), we have $y_1 = \langle e|y_1\rangle = \langle e|T|y\rangle$. Therefore, the statement of the theorem is true for $n = 1$ as well.

Assume that Eq. (5) is valid for $n = l$ for any initial value $y$. Then $y_{l+1}$ can be represented as $y_{l+1} = \langle e|T^l|y_1\rangle$, where $y_1 = P(y)$ is considered as a new initial value of the recursion. Then, using Eq. (6), one gets

    y_{l+1} = \langle e|T^l|y_1\rangle = \langle e|T^l T|y\rangle = \langle e|T^{l+1}|y\rangle.

Now we prove the existence of the matrix $T$. One has $|y_1\rangle \equiv \{[P(y)]^\nu\}_{\nu=0}^{\infty}$. In turn, $[P(y)]^\nu$ is the $\nu m$-th degree polynomial

    [P(y)]^\nu = \Big( \sum_{i=0}^{m} a_i y^i \Big)^{\nu} = \sum_{k=0}^{\nu m} T_{\nu k} y^k,    (7)

and we infer that $T = \{T_{\nu k}\}_{\nu,k=0}^{\infty}$ obeys Eq. (6). Note that for $\nu$ and $k$ satisfying $k > \nu m$ we have $T_{\nu k} = 0$; therefore each row is finite (i.e., there is only a finite number of nonzero matrix elements in each row). This proves the existence of the powers of $T$ and the associativity in Eq. (5). Thus, the proof is complete.

III. SPECIAL CASES

In some special cases the matrix $T$ has a rather simple form.
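Before turning to the special cases, the construction in the proof of Theorem 1 — row $\nu$ of $T$ holds the coefficients of $[P(y)]^\nu$ — can be sketched numerically with a finite truncation. This is an illustration of ours, not part of the original text: the polynomial (the logistic map with $\mu = 3.5$), the initial value and the truncation size $N$ are arbitrary choices. The truncation is exact here because the third iterate has degree $2^3 = 8 < N$ and each row of $T$ is finite.

```python
import numpy as np

def transfer_matrix(coeffs, N):
    """Truncated transfer matrix: T[v, k] = coefficient of x^k in P(x)^v,
    where P(x) = sum_k coeffs[k] * x^k (cf. Eq. (7))."""
    T = np.zeros((N, N))
    row = np.zeros(N)
    row[0] = 1.0                       # P(x)^0 = 1
    for v in range(N):
        T[v] = row
        new = np.zeros(N)              # multiply row by P, truncated at x^(N-1)
        for k, a in enumerate(coeffs):
            new[k:] += a * row[:N - k]
        row = new
    return T

# illustrative check on the logistic map P(x) = mu*x - mu*x^2
mu, y0, n, N = 3.5, 0.2, 3, 16         # degree of the 3rd iterate is 8 < N
T = transfer_matrix([0.0, mu, -mu], N)
e = np.zeros(N); e[1] = 1.0            # <e| = first unit vector
ket = y0 ** np.arange(N)               # |y> = (1, y, y^2, ...)
y_matrix = e @ np.linalg.matrix_power(T, n) @ ket

y_direct = y0                          # compare with direct iteration
for _ in range(n):
    y_direct = mu * y_direct * (1 - y_direct)
assert abs(y_matrix - y_direct) < 1e-8
```

Because each row of $T$ has only finitely many nonzero entries, choosing $N > m^n$ makes $\langle e|T^n|y\rangle$ exact up to floating-point rounding.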
(a) The binomial: $P(x) = a_p x^p + a_q x^q$. As one can see, in the general case the elements of the matrix $T$ have the form of rather complicated sums. However, they degenerate to a fairly simple expression when the polynomial (3) has only two terms. In this case one has

    [P(y)]^\nu = (a_p y^p + a_q y^q)^\nu = \sum_i \binom{\nu}{i} a_p^{\nu-i} a_q^i \, y^{p(\nu-i)+qi}.

Denoting $k = p(\nu - i) + qi$, $i = l(k) = (q-p)^{-1}(k - p\nu)$, we have

    [P(y)]^\nu = \sum_{k=p\nu}^{q\nu} y^k \binom{\nu}{l(k)} a_p^{\nu-l(k)} a_q^{l(k)}.

Thus, the matrix elements $T_{\nu k}$ are

    T_{\nu k} = \binom{\nu}{l(k)} a_p^{\nu-l(k)} a_q^{l(k)}.    (8)

Example 1. To demonstrate this, let us consider the following recursion:

    y_{n+1} = \mu y_n (1 - y_n), \qquad y_0 = y.    (9)

Plugging $p = 1$, $q = 2$, $a_p = -a_q = \mu$ into Eq. (8) one immediately obtains

    T_{\nu k} = (-1)^{k-\nu} \mu^{\nu} \binom{\nu}{k-\nu}.    (10)

Thus we recover the result of Rabinovich et al. [4] for the recursion equation known as the logistic map.

(b) The trinomial: $P(x) = a_0 + a_p x^p + a_q x^q$ with $a_0 \neq 0$. Then the matrix $T$ has a special form.

Lemma 1. Let $P(x) = a_0 + a_p x^p + a_q x^q$. Then the following decomposition exists: $T = A T_0$, where $T_0$ is the matrix corresponding to the polynomial $P_0(x) = a_p x^p + a_q x^q$ and $A$ is a triangular matrix.
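Before proving the lemma, we pause for a numerical check of the binomial formula Eq. (8). This is an illustration of ours (not part of the original text); the values $p$, $q$, $a_p$, $a_q$ below correspond to the logistic choice of Example 1, but any binomial works.

```python
from math import comb

def T_binomial(v, k, p, q, ap, aq):
    """Closed-form matrix element of Eq. (8): the coefficient of y^k in
    (ap*y^p + aq*y^q)^v, nonzero only when i = l(k) is an integer with
    0 <= i <= v."""
    num = k - p * v
    if num % (q - p):
        return 0.0
    i = num // (q - p)                 # i = l(k) = (q - p)^(-1) (k - p*v)
    if not 0 <= i <= v:
        return 0.0
    return comb(v, i) * ap ** (v - i) * aq ** i

def expand_power(coeffs, v, deg):
    """Brute-force coefficients of (sum_k coeffs[k] x^k)^v up to x^deg."""
    out = [1.0] + [0.0] * deg
    for _ in range(v):
        new = [0.0] * (deg + 1)
        for k, a in enumerate(coeffs):
            for j in range(deg + 1 - k):
                new[j + k] += a * out[j]
        out = new
    return out

p, q, ap, aq = 1, 2, 3.5, -3.5         # the logistic map with mu = 3.5
for v in range(5):
    brute = expand_power([0.0, ap, aq], v, 10)
    for k in range(11):
        assert abs(T_binomial(v, k, p, q, ap, aq) - brute[k]) < 1e-9
```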
Proof. Consider $P_0(x) = a_p x^p + a_q x^q$ and the corresponding matrix $T_0$. It yields

    T_0 |y\rangle = |y_1^0\rangle \equiv \{[P_0(y)]^\nu\}_{\nu=0}^{\infty}.

For the matrix $T$ one gets $T|y\rangle = |y_1\rangle \equiv \{[P(y)]^\nu\}_{\nu=0}^{\infty}$, where

    [P(y)]^\nu = [a_0 + P_0(y)]^\nu = \sum_i \binom{\nu}{i} a_0^{\nu-i} (a_p y^p + a_q y^q)^i.

Denoting in the last line $A_{\nu i} = \binom{\nu}{i} a_0^{\nu-i}$, one obtains

    |y_1\rangle = A |y_1^0\rangle = A T_0 |y\rangle = T |y\rangle

and $T = A T_0$. The lemma is proven.

IV. POLYNOMIAL RECURSIONS WITH NON-CONSTANT COEFFICIENTS

As shown in [4], a generalization of Eq. (9),

    y_{n+1} = \mu_n y_n (1 - y_n), \qquad y_0 = y,    (11)

can be solved using a similar approach. The solution is

    y_n = \langle e|T_n \cdots T_2 T_1|y\rangle,    (12)

where the matrix elements of $T_i$ are now $i$-dependent:

    (T_i)_{\nu k} = (-1)^{k-\nu} (\mu_i)^{\nu} \binom{\nu}{k-\nu}.    (13)

The same argument is valid for the general case.

Theorem 2. Let the polynomial in Eq. (2) depend on $n$. Then the solution Eq. (5) takes the form of Eq. (12) with the obvious changes ($a_0, \ldots, a_m$ become $i$-dependent functions) in the corresponding matrix elements.

To illustrate this theorem we consider

Example 2. We are going to apply the previous material to the Riccati equation. Usually this name is used for the equation

    y_{n+1} y_n + a_n y_{n+1} + b_n y_n + c_n = 0.
However, by an appropriate change of variable [1,2] this equation can be reduced to a linear one and then treated by conventional techniques. Here we shall be dealing with the following recursion:

    y_{n+1} = a_n + b_n y_n + c_n y_n^2, \qquad y_0 = y.

This is another possible (asymmetric) discrete analog of the Riccati differential equation [5-7]. It is well known that the latter cannot be solved in quadratures. The general results of the two previous sections can be employed to write down the solution of this recursion. Namely, the solution reads

    y_n = \langle e|T_n \cdots T_2 T_1|y\rangle,

where the matrix $T_i$ is a product of two matrices,

    T_i = A_i S_i.

The matrix elements are given by

    (A_i)_{\nu k} = \binom{\nu}{k} a_i^{\nu-k} \qquad and \qquad (S_i)_{\nu k} = \binom{\nu}{k-\nu} b_i^{2\nu-k} c_i^{k-\nu}.

V. SYSTEM OF FIRST-ORDER NONLINEAR EQUATIONS

Actually, very little is known about systems of nonlinear equations [8]. We now extend the method of Sect. II to handle systems of nonlinear equations. Let us demonstrate it on the following example:

    u_{n+1} = \mu u_n (1 - v_n), \qquad u_0 = u,
    v_{n+1} = \mu v_n (1 - u_n), \qquad v_0 = v.    (14)

Proceeding here as in Sect. II, we examine the transformation of a product $u^\nu v^k$:

    [\mu u(1-v)]^{\nu} [\mu v(1-u)]^{k}
      = \mu^{\nu+k} u^{\nu} \sum_r \binom{\nu}{r} (-1)^r v^r \; v^k \sum_s \binom{k}{s} (-1)^s u^s
      = \sum_{p,q} u^p v^q \, \mu^{\nu+k} (-1)^{(p-\nu)+(q-k)} \binom{\nu}{q-k} \binom{k}{p-\nu}.    (15)
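Returning for a moment to Example 2: the factorization $T_i = A_i S_i$ can be checked numerically against direct iteration. This sketch is ours; the coefficient sequences $(a_i, b_i, c_i)$, the initial value and the truncation order $N$ are illustrative choices. The truncation is exact because after three steps only powers up to $y^8$ enter.

```python
import numpy as np
from math import comb

N = 16
steps = [(0.3, 0.8, -0.5), (0.1, 1.1, -0.4), (0.2, 0.9, -0.6)]  # (a_i, b_i, c_i)
y0 = 0.25

def Ti(a, b, c):
    """T_i = A_i S_i with the matrix elements given above."""
    A = np.array([[comb(v, k) * a ** (v - k) if k <= v else 0.0
                   for k in range(N)] for v in range(N)])
    S = np.array([[comb(v, k - v) * b ** (2 * v - k) * c ** (k - v)
                   if 0 <= k - v <= v else 0.0
                   for k in range(N)] for v in range(N)])
    return A @ S

M = np.eye(N)
for a, b, c in steps:                  # builds M = T_n ... T_2 T_1
    M = Ti(a, b, c) @ M
y_matrix = M[1] @ y0 ** np.arange(N)   # <e| M |y>

y = y0                                 # direct iteration for comparison
for a, b, c in steps:
    y = a + b * y + c * y * y
assert abs(y_matrix - y) < 1e-8
```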
One can proceed with the aid of multidimensional matrices [11], but here we prefer to use more traditional two-dimensional matrices. Introduce now a vector $|uv\rangle$ which consists of the powers $u^\nu v^k$ of the variables $u$ and $v$, arranged in some order. Namely, introduce a bijection $\sigma(\cdot,\cdot): \mathbb{N}^2 \to \mathbb{N}$, where $\mathbb{N}$ is the set of the natural numbers and zero. Then the monomial $u^\nu v^k$ is the $\sigma(\nu,k)$-th component of the vector $|uv\rangle$. For example, one can use

    \sigma(\nu, k) = k + \frac{1}{2}(\nu + k)(\nu + k + 1).

Introducing a matrix $T$ with the elements

    T_{\sigma(\nu,k)\,\sigma(p,q)} = (-1)^{(p-\nu)+(q-k)} \mu^{\nu+k} \binom{\nu}{q-k} \binom{k}{p-\nu},

we basically return to the familiar transfer-matrix construction. Indeed, we have

    u_n = \langle e_{\sigma(1,0)}|T^n|uv\rangle, \qquad v_n = \langle e_{\sigma(0,1)}|T^n|uv\rangle,

where $\langle e_{\sigma(\nu,k)}| = \{\delta_{\sigma(\nu,k)\,i}\}_{i=0}^{\infty}$.

Theorem 3. Consider a system of $m$ first-order nonlinear equations:

    x^{(i)}_{n+1} = P_i(x^{(1)}_n, \ldots, x^{(m)}_n), \qquad i = 1, \ldots, m.    (16)

Then a matrix $T$ exists such that

    x^{(i)}_n = \langle e_{(i)}|T^n|x^{(1)} \cdots x^{(m)}\rangle,

where the vectors $\langle e_{(i)}|$ and $|x^{(1)} \cdots x^{(m)}\rangle$ are defined as above, with the aid of some bijection $\sigma(\cdot,\ldots,\cdot): \mathbb{N}^m \to \mathbb{N}$.

Below we present the scheme of the proof of a transfer-matrix representation for arbitrary systems of first-order nonlinear equations.
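Before the general scheme, the construction above can be illustrated numerically on the system (14). This sketch is ours: $\mu$, the initial values, the number of steps $n$ and the cutoff degree $D$ are illustrative, and any fixed enumeration of the monomials may replace the bijection $\sigma$. The cutoff is exact since the total degree at most doubles per step, $2^3 = D$.

```python
import numpy as np
from math import comb

mu, u0, v0, n, D = 1.5, 0.3, 0.2, 3, 8
idx = {}                               # enumerate monomials u^a v^b, a + b <= D
for a in range(D + 1):
    for b in range(D + 1 - a):
        idx[(a, b)] = len(idx)
size = len(idx)

T = np.zeros((size, size))
for (a, b), r in idx.items():          # row r: coefficients of [mu*u*(1-v)]^a [mu*v*(1-u)]^b
    for (p, q), c in idx.items():
        if 0 <= q - b <= a and 0 <= p - a <= b:
            T[r, c] = ((-1) ** ((p - a) + (q - b)) * mu ** (a + b)
                       * comb(a, q - b) * comb(b, p - a))

uv = np.array([u0 ** a * v0 ** b for (a, b) in idx])   # the vector |uv>
Tn = np.linalg.matrix_power(T, n)
u_mat = Tn[idx[(1, 0)]] @ uv
v_mat = Tn[idx[(0, 1)]] @ uv

u, v = u0, v0                          # direct iteration of the system (14)
for _ in range(n):
    u, v = mu * u * (1 - v), mu * v * (1 - u)
assert abs(u_mat - u) < 1e-8 and abs(v_mat - v) < 1e-8
```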
As before, we examine the product

    P_1^{\nu_1} \cdots P_m^{\nu_m} \equiv P^{(\nu_1,\ldots,\nu_m)}.

The polynomial $P^{(\nu_1,\ldots,\nu_m)}$ depends on the $m$ variables $x^{(1)}, \ldots, x^{(m)}$ and therefore can be represented as

    P^{(\nu_1,\ldots,\nu_m)}(x^{(1)}, \ldots, x^{(m)}) = \langle t^{(\nu_1,\ldots,\nu_m)}|x^{(1)} \cdots x^{(m)}\rangle,

where $\langle t^{(\nu_1,\ldots,\nu_m)}|$ is the constant vector of coefficients of the polynomial $P^{(\nu_1,\ldots,\nu_m)}$. This vector is the $\sigma(\nu_1,\ldots,\nu_m)$-th row of the transfer matrix $T$. The above procedure can easily be generalized to polynomials with variable coefficients, $P_i(n, x^{(1)}, \ldots, x^{(m)})$, in complete analogy to Theorem 2.

VI. RECURSION SOLUTION FOR ARBITRARY ANALYTIC FUNCTIONS IN THE RHS

Let the series

    \sum_{k=0}^{\infty} a_k x^k \equiv f(x)    (17)

converge absolutely in $[-r, r]$. Then the series

    \sum_{k=0}^{\infty} \tilde{a}_k x^k \equiv \tilde{f}(x),    (18)

where $\tilde{a}_k = |a_k|$, also converges in $[-r, r]$. We construct the following series, absolutely convergent in $[-r, r]$ [9,10]:

    \Big[\sum_{k=0}^{\infty} a_k x^k\Big]^{\nu} = a_0^{\nu} + \nu a_1 a_0^{\nu-1} x + \Big[\nu a_2 a_0^{\nu-1} + \binom{\nu}{2} a_1^2 a_0^{\nu-2}\Big] x^2 + \ldots = \sum_{k=0}^{\infty} T_{\nu k} x^k,    (19)

    \Big[\sum_{k=0}^{\infty} \tilde{a}_k x^k\Big]^{\nu} = \tilde{a}_0^{\nu} + \nu \tilde{a}_1 \tilde{a}_0^{\nu-1} x + \Big[\nu \tilde{a}_2 \tilde{a}_0^{\nu-1} + \binom{\nu}{2} \tilde{a}_1^2 \tilde{a}_0^{\nu-2}\Big] x^2 + \ldots = \sum_{k=0}^{\infty} \tilde{T}_{\nu k} x^k.    (20)

Definition 1. The matrix $T = \{T_{\nu k}\}_{\nu,k=0}^{\infty}$ is said to be the transfer matrix of the analytic function $f(x)$ if $\sum_{k=0}^{\infty} T_{\nu k} x^k = f^{\nu}(x)$.
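As a numerical illustration of Definition 1 (ours, not part of the original text), take $f(x) = e^x$, for which the rows of the transfer matrix are known in closed form: $f^{\nu}(x) = e^{\nu x}$ gives $T_{\nu k} = \nu^k / k!$. We check this against a truncated version of the expansion (19).

```python
import numpy as np
from math import factorial

# Transfer matrix of f(x) = e^x, truncated to N x N; row v is built by
# repeated multiplication with the Maclaurin series of f, as in Eq. (19).
N = 12
a = np.array([1.0 / factorial(k) for k in range(N)])   # coefficients of e^x

T = np.zeros((N, N))
row = np.zeros(N); row[0] = 1.0        # f^0 = 1
for v in range(N):
    T[v] = row
    new = np.zeros(N)                  # next row: coefficients of f^(v+1)
    for k in range(N):
        new[k:] += a[k] * row[:N - k]
    row = new

# closed form: the coefficient of x^k in e^(v x) is v^k / k!
closed = np.array([[v ** k / factorial(k) for k in range(N)] for v in range(N)])
assert np.allclose(T, closed)
```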
Using Eq. (19) one can estimate the coefficients $T_{\nu k}$ for a fixed $k$:

    |T_{\nu k}| \le \tilde{T}_{\nu k} \le \binom{\nu+k}{k} \tilde{a}_0^{\nu-k} \Big(\max_{i \le k} \tilde{a}_i\Big)^k \le C (\tilde{a}_0 + \epsilon)^{\nu}    (21)

for arbitrary $\epsilon > 0$, a constant $C = C(\epsilon)$, and $\nu > k$.

Theorem 4. Let the series

    \sum_{k=0}^{\infty} b_k x^k \equiv g(x)    (22)

converge absolutely in $[-R, R]$, let the matrix $S$ be the transfer matrix of the function $g(x)$, and let $\tilde{r} \le r$ be such that the function $\tilde{f}(x)$ defined by (18) satisfies $\tilde{f}(x): [-\tilde{r}, \tilde{r}] \to [0, R]$. Then the matrix product $ST$ exists and is equal to the transfer matrix of the composition $g \circ f(x)$ of the functions $f(x)$ and $g(x)$; i.e., the $\nu$-th row of the matrix $ST$ is the vector of the coefficients of the Maclaurin expansion of the function $[g \circ f]^{\nu}(x)$.

Proof. First we prove the existence of the matrix $ST$. One can estimate the $(\nu,k)$-th element of the matrix $ST$ using Eq. (21):

    |(ST)_{\nu k}| \le \sum_{i=0}^{\infty} |S_{\nu i}| \tilde{T}_{i k} \le C_1 + \sum_{i=k+1}^{\infty} |S_{\nu i}| \tilde{T}_{i k} \le C_1 + C \sum_{i=k+1}^{\infty} |S_{\nu i}| (\tilde{a}_0 + \epsilon)^i.

Since $\min \tilde{f}(x) = \tilde{f}(0) = \tilde{a}_0 < R$, we can choose $\epsilon > 0$ sufficiently small that $\tilde{a}_0 + \epsilon < R$; therefore the series in the last expression converges. Thus, the matrix $ST$ exists.

Now we should study the series $\sum_k (ST)_{\nu k} x^k$. We have

    \sum_{k=0}^{\infty} (ST)_{\nu k} x^k = \sum_{k=0}^{\infty} \sum_{i=0}^{\infty} S_{\nu i} T_{i k} x^k = \sum_{i=0}^{\infty} S_{\nu i} \sum_{k=0}^{\infty} T_{i k} x^k,    (23)

where the change of the order of summation is justified [10] by the absolute convergence

    \sum_{i=0}^{\infty} |S_{\nu i}| \big[\tilde{f}(x)\big]^i < \infty.    (24)

Hence

    \sum_{k=0}^{\infty} (ST)_{\nu k} x^k = \sum_{i=0}^{\infty} S_{\nu i} [f(x)]^i = [g(f(x))]^{\nu} = [g \circ f]^{\nu}(x).

The theorem is proved. Restricting ourselves to the first row of the matrix $S$ only, we get the following
Corollary 1. Let the functions $f(x)$ and $g(x)$ satisfy the conditions of Theorem 4, and let the bra-vector $\langle b|$ be the vector of coefficients of the series (22). Then the vector $\langle c| = \langle b|T$ consists of the coefficients of the series

    \sum_{k=0}^{\infty} c_k x^k = g \circ f(x).    (25)

It is easy to see that, calculating the scalar product of the vector $\langle c|$ and the vector $|x\rangle = \{x^i\}_{i=0}^{\infty}$, we obtain

    \langle b|T|x\rangle = g \circ f(x).    (26)

In this way we can write down the solution of the recursion

    y_{n+1} = f(y_n), \qquad y_0 = y.    (27)

Theorem 5. Let the series (17) converge absolutely on $[-r, r]$, let the matrix $T$ be the transfer matrix of the function $f(x)$, and let $\tilde{r} < r$ be such that (see Eq. (18)) $\tilde{f}(x): [-\tilde{r}, \tilde{r}] \to [0, \tilde{r}]$. Then for any initial condition $y \in [-\tilde{r}, \tilde{r}]$

    y_n = \langle e|T^n|y\rangle,    (28)

where the bra-vector $\langle e| = [\delta_{i1}]_{i=0}^{\infty}$ and the ket-vector $|y\rangle = \{y^i\}_{i=0}^{\infty}$.

In order to solve a generalization of the recursion (27), $y_{n+1} = f(n, y_n)$, we should consider $n$-dependent matrices $T_n$ instead of the matrix $T$.

One can use the same technique to solve the more general case of multivariable recursions,

    \mathbf{y}_n = \mathbf{f}(\mathbf{y}_{n-1}), \qquad \mathbf{f}: \mathbb{R}^n \to \mathbb{R}^n, \qquad \mathbf{f}(\mathbf{x}) = \{f_i(\mathbf{x})\}_{i=1}^{n}, \qquad \mathbf{x} \in \mathbb{R}^n.

Here we should consider the series expansions of the functions $\prod_{i=1}^{n} [f_i(\mathbf{x})]^{a_i}$ instead of the series (19), and then proceed by analogy with multivariable polynomial recursions. However, since the notation becomes very cumbersome, we shall not write down this formalism.

The theorems of this section can easily be generalized to complex mappings. Namely, the statement of Theorem 4 for functions of a complex variable reads:
Theorem 6. Let the series (17), (18) and (22) be series in a complex variable $z$ with complex coefficients. Let the series (17) and (18) converge in the circle $|z| \le r$ and the series (22) converge in $|z| \le R$. Let the matrices $S$ and $T$ be the transfer matrices of the functions $g(z)$ and $f(z)$ correspondingly, and let $\tilde{r} \le r$, with $\tilde{r}, R$ real, be such that $\tilde{f}(\tilde{r}) \le R$. Then the matrix product $ST$ exists and equals the transfer matrix of the composition $g \circ f(z)$ of the functions $f(z)$ and $g(z)$; i.e., the $\nu$-th row of the matrix $ST$ is the vector of the coefficients of the Maclaurin expansion of the function $[g \circ f]^{\nu}(z)$, convergent in $|z| \le \tilde{r}$.

VII. EXAMPLES

In order to show that the condition $\tilde{f}(x): [-\tilde{r}, \tilde{r}] \to [0, R]$ is essential (see Theorem 4) we present the following

Example 3. Let

    g(x) = x^4 - \frac{1}{4} x^6 + \frac{1}{9} x^8 - \frac{1}{16} x^{10} + \ldots,

i.e., $b_{2i} = (-1)^i/(i-1)^2$, $b_{2i+1} = 0$, $i \ge 2$, with radius of convergence $R = 1$, and let $f(x)$ be defined by $f(x) = 2x(1 - x^2)$, $f: [-1, 1] \to [-1, 1]$. The transfer matrix $T$ of the function $f(x)$, according to Eq. (8), is

    T_{\nu k} = (-1)^{(k-\nu)/2} \binom{\nu}{(k-\nu)/2} 2^{\nu}

for even $k - \nu$. The signs in the sequences $\{S_{1,2i}\}_{i=1}^{\infty}$ and $\{T_{2i,2k}\}_{i=1}^{\infty}$ alternate. Therefore the sign in the sequence $\{S_{1,2i} T_{2i,2k}\}_{i=1}^{\infty}$ is constant, and for the $2k$-th component of the first row of the matrix $ST$ one has

    |c_{1,2k}| = \Big|\sum_i S_{1,2i} T_{2i,2k}\Big| = \sum_i |S_{1,2i} T_{2i,2k}| \ge |S_{1,2(k-2)} T_{2(k-2),2k}| = 4^{k-2} \, \frac{(2k-4)(2k-5)}{2(k-3)^2} \ge 4^{k-2}

for any $k > 3$. Thus, the components of the first row cannot be the coefficients of the Maclaurin expansion of the function $g \circ f$, because the corresponding series diverges for $x^2 > 1/4$.

The following example presents a family of functions whose transfer matrices have a form invariant under multiplication.
Example 4. Let the function $f(x) = ax/(b + x)$. Then the components of the transfer matrix $T$ are

    T_{\nu k} = (-1)^{k-\nu} \binom{k-1}{\nu-1} a^{\nu} b^{-k} \quad for \quad \nu, k > 0, \qquad T_{\nu k} = \delta_{\nu k} \quad for \quad \nu = 0 \ or \ k = 0,    (29)

where

    \binom{n}{m} = \frac{n!}{m!\,(n-m)!} \quad for \ integer \ 0 \le m \le n, \qquad \binom{n}{m} = 0 \quad for \ any \ other \ m, n,

and $\delta_{nm}$ is the Kronecker symbol. If $g(x) = cx/(d + x)$ and $S$ is the transfer matrix of $g(x)$, then the components of the matrix $ST$ are

    (ST)_{\nu k} = \sum_i S_{\nu i} T_{i k}
      = \sum_{l=0}^{k-\nu} (-1)^{k-\nu} c^{\nu} b^{-k} (a/d)^{l+\nu} \binom{l+\nu-1}{\nu-1} \binom{k-1}{l+\nu-1}
      = (-1)^{k-\nu} \binom{k-1}{\nu-1} \Big(\frac{ac}{a+d}\Big)^{\nu} \Big(\frac{bd}{a+d}\Big)^{-k}.

One can see that the matrix elements $(ST)_{\nu k}$ have the same form as in Eq. (29), with $a \to ac/(a+d)$ and $b \to bd/(a+d)$, in agreement with the composition $g \circ f(x) = \frac{ac}{a+d}\, x \big/ \big(\frac{bd}{a+d} + x\big)$.

VIII. SUMMARY

In this paper we have presented a new method to obtain the solution of arbitrary polynomial recursions. The method has been generalized to systems of multivariable recursions and to analytic recursions. We have found a class of analytic-function recursions to which the method can be applied; in particular, this class contains functions which are analytic over the whole space. Generally, the solution is obtained in the form of a matrix power applied to the vector of initial values. We have presented a way to construct such a matrix. Famous and important examples, such as the logistic map and the Riccati recursion, have been considered and the corresponding matrices have been written down explicitly.

The following generalizations can also be made:

a) Multivariable analytic recursions. This can be done, for example, as in Section V.
b) Systems of higher-order nonlinear equations. The scheme is quite obvious: introduction of new variables to bring each equation to first-order form and, then, construction of a transfer matrix, as in the previous case.

IX. ACKNOWLEDGMENT

We thank Dany Ben-Avraham for a critical reading of the manuscript.

[1] R. P. Agarwal: Difference Equations and Inequalities, Marcel Dekker, New York, 1992
[2] V. Lakshmikantham, D. Trigiante: Theory of Difference Equations: Numerical Methods and Applications, Academic Press, New York, 1988
[3] F. B. Hildebrand: Finite-Difference Equations and Simulations, Prentice-Hall, Englewood Cliffs, NJ, 1968
[4] S. Rabinovich, G. Berkolaiko, S. Buldyrev, A. Shehter and S. Havlin: Physica A 218, 457 (1995)
[5] D. Zwillinger: Handbook of Differential Equations, 2nd ed., Academic Press, Boston, 1992
[6] W. T. Reid: Riccati Differential Equations, Academic Press, New York, 1972
[7] S. Bittanti, A. J. Laub, J. C. Willems, eds.: The Riccati Equation, Springer, Berlin, 1991
[8] V. L. Kocic and G. Ladas: Global Behavior of Nonlinear Difference Equations of Higher Order with Applications, Kluwer, Dordrecht, 1993
[9] R. Courant: Differential and Integral Calculus, Blackie & Son, London, v. 1, 1963
[10] E. Goursat: A Course in Mathematical Analysis, Dover, New York, v. 1, 1956
[11] S. Rabinovich, G. Berkolaiko, S. Havlin: preprint, to be published