LECTURE 12 A MODEL OF LIQUIDITY PREFERENCE (CONCLUDED)

JACOB T. SCHWARTZ
EDITED BY KENNETH R. DRIESSEL

Abstract. We prove Theorem 11.6 and Theorem 11.9 of the previous lecture.

2010 Mathematics Subject Classification. Primary 91B55; Secondary 91B08.
Key words and phrases. Liquidity Preference, Business Cycle Theory.

1. Proof of Theorem 11.6

The mathematical analysis of the final section of the preceding lecture brings us close to the proof of our first target: Theorem 11.6. The proof of the theorem will be based on the following fundamental lemma, which gives detailed information on the behavior of the convex sets φ^n(b) as n → ∞.

Lemma 12.1. Let K = K_j be the set of Lemma 11.19. Then for each vector b ≥ 0 there exists a strictly positive constant C = C(b) such that φ^n(b) → CK as n → ∞.

Before proving Lemma 12.1, we will show how a proof of Theorem 11.6 may be based upon it.

Proof of Theorem 11.6: Let b ≥ 0 and d ≥ 0. Then by Lemmas 11.12, 11.17 and 12.1 we have

(12.1) C(b + d)K = lim_{n→∞} φ^n(b + d) = lim_{n→∞} [φ^n(b) + φ^n(d)] = C(b)K + C(d)K = (C(b) + C(d))K.

Thus, if x = (C(b) + C(d))/C(b + d), we have xK = K. If x > 1, it would follow that K is unbounded, which we know to be false; similarly

x < 1 is impossible, and thus

(12.2) C(b + d) = C(b) + C(d).

If we define the vector v_0 by letting its jth component be C(δ_j), δ_j being the vector whose ith component is δ_ij, then it follows from Lemma 12.1 that v_0 > 0, and from (12.2) that C(b) = v_0·b. Thus we have

(12.3) lim_{n→∞} φ^n(b) = (v_0·b)K

for each nonnegative vector b. Let ω be the vector of Lemma 11.16(a), and let its components be [ω_1, …, ω_{n+1}]; write

(12.4) |u| = Σ_{i=1}^{n+1} ω_i |u_i|

for each vector u, thereby introducing a norm for the vectors u. Lemma 11.16(a) shows that ωM(µ) ≤ ω for each µ; thus it follows immediately that |M(µ)u| ≤ |u| for each µ and each vector u. If S_ε denotes the sphere of radius ε centered at the origin, distances being measured in terms of the norm (12.4), it follows that M(µ)S_ε ⊆ S_ε for each µ.

Now suppose that b and c are nonnegative vectors, and that v_0·b ≥ v_0·c. Then, from (12.3), it follows that, for sufficiently large n,

(12.5) φ^n(c) ⊆ (v_0·c)K + S_ε ⊆ (v_0·b)K + S_ε ⊆ φ^n(b) + S_ε + S_ε = φ^n(b) + S_{2ε};

we have used statement (c) of Lemma 11.19. Since, if b is a nonnegative nonzero vector, it is plain from (11.7) and (11.5) that φ^2(b) contains a positive multiple of the invariant vector u_j of any of the optimal matrices M_j, it follows that u_j ∈ Dφ^n(b) for n ≥ 2 and any sufficiently large constant D independent of n. It therefore follows immediately that S_ε ⊆ εD′φ^n(b) for a sufficiently large constant D′ independent of

n. Thus we conclude from (12.5) that

(12.6) φ^n(c) ⊆ φ^n(b) + 2εD′φ^n(b) ⊆ (1 + 2εD′)φ^n(b),

proving that b ≽ c. Conversely, let b ≽ c. Since by Lemma 11.2 and Definition 11.4 this implies that

(12.7) (1 + ε)φ^n(b) ⊇ φ^n(c)

for any positive ε and all sufficiently large n, we find from (12.7), on letting n → ∞, that

(12.8) (1 + ε)(v_0·b)K ⊇ (v_0·c)K.

It follows, as in the first paragraph of the present proof, that (1 + ε)(v_0·b) ≥ (v_0·c); since ε is arbitrary, we have v_0·b ≥ v_0·c. Q.E.D.

Now, finally, we fill in a last gap by giving the proof of Lemma 12.1.

Proof of Lemma 12.1: Let u_j, m ≤ j ≤ n or m ≤ j ≤ n + 1, be an enumeration of the fixed vectors of the optimal matrices M_j as in Lemma 11.19(a). We define a (nonlinear) transformation A by writing Av = Σ_k c_k u_k, the summation being extended over m ≤ k ≤ n or m ≤ k ≤ n + 1 depending on whether or not M_{n+1} is optimal, and the constants c_k = c_k(v) being defined inductively by

(12.9) c_m = sup{c : c u_m ≤ v},
       c_{k+1} = sup{c : c u_{k+1} + c_m u_m + ⋯ + c_k u_k ≤ v};

here we use the symbol sup for the supremum or least upper bound of a set of real numbers. We also put Bv = v - Av, A(µ) = AM(µ), B(µ) = BM(µ). It is then plain from (11.26) that

(12.10) φ(u) = ∪_µ {A(µ)u + B(µ)u}

for each nonnegative vector u. Using Lemma 11.17 it is then easily seen that

(12.11) φ^2(u) = ∪_µ {φ(A(µ)u) + φ(B(µ)u)} = ∪_{µ_1,µ_2} {φ(A(µ_1)u) + A(µ_2)B(µ_1)u + B(µ_2)B(µ_1)u}.

Thus, inductively, we have

(12.12) φ^k(u) = ∪_{µ_1,…,µ_k} {φ^{k-1}(A(µ_1)u) + φ^{k-2}(A(µ_2)B(µ_1)u) + ⋯ + A(µ_k)B(µ_{k-1})⋯B(µ_1)u + B(µ_k)B(µ_{k-1})⋯B(µ_1)u}.

Next we show that, as k → ∞,

(12.13) ω·B(µ_k)⋯B(µ_1)u → 0

uniformly in µ_1, …, µ_k. To accomplish this, we shall prove that

(12.14) ω·B(µ_2)B(µ_1)u < ω·u

for each nonnegative nonzero vector u; since B(µ) is continuous in µ, it will follow immediately from this last inequality that there exists a constant δ < 1 such that

(12.15) ω·B(µ_2)B(µ_1)u ≤ δ(ω·u).

Thus the expression (12.13) will converge to zero not only uniformly but even as fast as the powers δ^{k/2}.
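In detail, grouping the factors B(µ_k), …, B(µ_1) into successive pairs and applying (12.15) to each pair (each intermediate vector B(µ_{2i})⋯B(µ_1)u is again nonnegative), we obtain

    ω·B(µ_k)⋯B(µ_1)u ≤ δ (ω·B(µ_{k-2})⋯B(µ_1)u) ≤ ⋯ ≤ δ^⌊k/2⌋ (ω·u),

the single unpaired factor when k is odd being handled by the observation that ω·B(µ)u ≤ ω·M(µ)u ≤ ω·u.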

To prove (12.14), note that since A(µ) + B(µ) = M(µ), since A(µ)u and B(µ)u are evidently nonnegative if u is a nonnegative vector, and since ωM(µ) ≤ ω by (11.22), (11.23), (11.25) and Lemma 11.16(a), we have A(µ_1)u = 0 if ω·B(µ_2)B(µ_1)u = ω·u. But, if A(µ_1)u = 0, then the definition (12.9) of the transformation A shows that v = B(µ_1)u = M(µ_1)u is a vector which does not exceed a positive multiple of any of the vectors u_j. Thus, if M_{n+1} is an optimal matrix, it follows that all the components u_l of u with m ≤ l ≤ n + 1 are zero; similarly, if M_{n+1} is not an optimal matrix, all the components u_l with m ≤ l ≤ n are zero. The corollary of Lemma 11.16 now shows that, in the first case,

ω·B(µ_2)B(µ_1)u = ω·M(µ_2)M(µ_1)u ≤ ω·M(µ_1)u < ω·u,

contradicting the statement ω·B(µ_2)B(µ_1)u = ω·u and establishing (12.14). In the second case (that in which M_{n+1} is nonoptimal), we come in the same way to the contradiction ω·B(µ_2)B(µ_1)u < ω·u unless all components of u but its last vanish; repeating our argument, we see that the same contradiction must arise unless all components of B(µ_1)u = M(µ_1)u but its last component vanish also. The definition (11.22), (11.23), (11.25) of the matrix M(µ_1), and the form (11.5) of the matrices M_j, then show that we must have M(µ_1)u = M_{n+1}u; since M_{n+1} is nonoptimal we have then ω·M_{n+1}u < ω·u, and we are led to the same contradiction. Thus (12.14) is established in every case; as we have seen, (12.13) follows.

Let b be a definite nonnegative nonzero vector. The definition (12.9) of the transformation A shows that the vector A(σ_k)B(σ_{k-1})⋯B(σ_1)b may be written in the form

(12.16) A(σ_k)B(σ_{k-1})⋯B(σ_1)b = Σ_j c_j(σ_k, …, σ_1)u_j,

the real constants c_j(σ_k, …, σ_1) being nonnegative and the summation being extended over all m ≤ j ≤ n if M_{n+1} is not optimal, and over all m ≤ j ≤ n + 1 if M_{n+1} is optimal. (In the remainder of the present proof, we shall suppose for the sake of definiteness that M_{n+1} is nonoptimal; the details in this case differ only trivially from the details in the other case.) Then, using (12.12) and Lemma 11.7, we may write

(12.17) φ^k(b) = ∪_{µ_1,…,µ_k} {Σ_j φ^{k-1}(u_j)c_j(µ_1) + Σ_j φ^{k-2}(u_j)c_j(µ_2, µ_1) + ⋯ + Σ_j u_j c_j(µ_k, …, µ_1) + B(µ_k)⋯B(µ_1)b}.

Let S_ε denote the sphere of radius ε, defined, as in the preceding proof, relative to the norm (12.4).
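By the definition of A, the coefficients c_j(σ_k, …, σ_1) appearing in (12.16) are simply the coefficients (12.9) computed for the vector M(σ_k)B(σ_{k-1})⋯B(σ_1)b. The following minimal sketch computes the decomposition v = Av + Bv of (12.9); the fixed vectors u_1, u_2 and the test vector below are hypothetical stand-ins, not the u_j of Lemma 11.19.

    import numpy as np

    def greedy_coefficients(v, basis):
        # c_k of (12.9): the largest c >= 0 such that c*u_k, added to the part of v
        # already assigned to u_m, ..., u_{k-1}, still stays componentwise below v.
        assigned = np.zeros_like(v, dtype=float)
        coeffs = []
        for u in basis:
            slack = v - assigned
            with np.errstate(divide="ignore", invalid="ignore"):
                c = float(np.min(np.where(u > 0, slack / u, np.inf)))
            coeffs.append(max(c, 0.0))
            assigned = assigned + coeffs[-1] * u
        return np.array(coeffs), assigned, v - assigned   # (c_k), Av, Bv = v - Av

    # hypothetical fixed vectors and test vector (illustrative values only)
    u1 = np.array([1.0, 0.5, 0.0])
    u2 = np.array([0.0, 1.0, 1.0])
    v = np.array([2.0, 2.0, 0.5])
    c, Av, Bv = greedy_coefficients(v, [u1, u2])
    print(c)    # [2.  0.5]
    print(Av)   # [2.  1.5 0.5]
    print(Bv)   # [0.  0.5 0. ]

By construction Av ≤ v and Bv ≥ 0, which are the properties of A and B used above.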

Let

(12.18) C = sup (Σ_j c_j(µ_1) + Σ_j c_j(µ_2, µ_1) + ⋯ + Σ_j c_j(µ_k, …, µ_1)),

the supremum being taken over all k and all µ_1, …, µ_k. Then it follows from (12.17), since tK ⊆ K for 0 ≤ t ≤ 1 and since φ^l(u_j) ⊆ K by Lemma 11.19, that

(12.19) φ^k(b) ⊆ CK + ∪_{µ_1,…,µ_k} B(µ_k)⋯B(µ_1)b.

Thus, by (12.13), for each positive ε we have, for k ≥ k(ε),

(12.20) φ^k(b) ⊆ CK + S_ε.

On the other hand, since by (12.13) we have 0 ∈ B(µ_k)⋯B(µ_1)b + S_ε for each positive ε and each k ≥ k(ε), it follows from (12.17) that

(12.21) φ^k(b) + S_ε ⊇ Σ_j c_j(µ_1)φ^{k-1}(u_j) + Σ_j c_j(µ_2, µ_1)φ^{k-2}(u_j) + ⋯ + Σ_j c_j(µ_k, …, µ_1)u_j

for each µ_1, …, µ_k and each sufficiently large k. A fortiori, we have

(12.22) φ^k(b) + S_ε ⊇ Σ_j (c_j(µ_1) + ⋯ + c_j(µ_k, …, µ_1))u_j

for each sufficiently large k, so that, since φ^k(b) remains bounded, the supremum C must be finite. Thus, if ε > 0 is given, we may choose µ_1, …, µ_k so that

(12.23) C ≥ Σ_j (c_j(µ_1) + ⋯ + c_j(µ_k, …, µ_1)) ≥ C - ε.

Then, for any additional µ_{k+1}, …, µ_{k_0}, we must have

(12.24) Σ_j {c_j(µ_{k+1}, …, µ_1) + ⋯ + c_j(µ_{k_0}, …, µ_1)} < ε.
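The next step uses the remark made after (12.4): a nonnegative matrix M with ωM ≤ ω (ω a row vector of positive weights) cannot increase the norm (12.4) of any vector, whatever its signs. A minimal numerical check of this implication, with hypothetical weights and a hypothetical matrix (neither taken from the model):

    import numpy as np

    omega = np.array([1.0, 2.0, 1.5])           # hypothetical weights, standing in for omega
    M = np.array([[0.2, 0.3, 0.0],              # hypothetical nonnegative matrix
                  [0.3, 0.5, 0.4],
                  [0.1, 0.4, 0.3]])
    assert np.all(omega @ M <= omega)           # the hypothesis omega*M <= omega of Lemma 11.16(a)

    def wnorm(u):
        # the norm (12.4): |u| = sum_i omega_i * |u_i|
        return float(np.sum(omega * np.abs(u)))

    rng = np.random.default_rng(0)
    for _ in range(1000):
        u = rng.normal(size=3)                  # arbitrary sign pattern, not only nonnegative
        assert wnorm(M @ u) <= wnorm(u) + 1e-12
    print("|Mu| <= |u| holds on all samples")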

Since, as we have remarked immediately following formula (12.4), |M(µ)u| ≤ |u|, it follows inductively from (11.26) and from |u_j| = ω·u_j = 1 that φ^l(u_j) ⊆ S_1. Thus, by (12.24),

(12.25) Σ_j φ^{k_0-k-1}(u_j)c_j(µ_{k+1}, …, µ_1) + Σ_j φ^{k_0-k-2}(u_j)c_j(µ_{k+2}, …, µ_1) + ⋯ + Σ_j u_j c_j(µ_{k_0}, …, µ_1) ⊆ S_ε.

Hence, by (12.21), we have

(12.26) φ^{k_0}(b) + S_{2ε} ⊇ Σ_j c_j(µ_1)φ^{k_0-1}(u_j) + ⋯ + Σ_j φ^{k_0-k}(u_j)c_j(µ_k, …, µ_1)

for each sufficiently large k_0. By Lemma 11.19, we have also

(12.27) φ^{k_0-l}(u_j) + S_ε ⊇ K

for 1 ≤ l ≤ k and each sufficiently large k_0. Thus, by (12.26),

(12.28) φ^{k_0}(b) + S_{2ε} + (Σ_j c_j(µ_1) + ⋯ + Σ_j c_j(µ_k, …, µ_1))S_ε ⊇ (Σ_j c_j(µ_1) + ⋯ + Σ_j c_j(µ_k, …, µ_1))K

for all sufficiently large k_0, which, by (12.23), implies

(12.29) φ^{k_0}(b) + S_{(C+2)ε} ⊇ (C - ε)K.

Since K is bounded, this implies the existence of a finite constant D such that

(12.30) φ^{k_0}(b) + S_{(C+D+2)ε} ⊇ CK

for all sufficiently large k_0. Formulas (12.20) and (12.30) together show that

(12.31) φ^k(b) → CK

as k → ∞, proving Lemma 12.1. Q.E.D.

2. Proof of Theorem 11.9

We now apply Theorem 11.6 and the ideas developed in its proof to prove our second main result, Theorem 11.9. Let us first note that Definitions 11.7 and 11.8 depend on the matrices M_1, …, M_{n+1}, but that if we multiply each of these matrices by a common factor t > 0, and, simultaneously, multiply the vectors b_n of Definition 11.7 by the factor t^n, then, by Lemma 11.14, optimal paths (relative to M_1, …, M_{n+1}) are mapped into optimal paths (relative to tM_1, …, tM_{n+1}); optimal matrices into optimal matrices, and the relationship is unchanged. We may consequently assume without loss of generality, just as in the preceding lecture, that dom(M_j) = 1 if and only if M_j is optimal, and that the optimal matrices are either M_m, …, M_n or M_m, …, M_{n+1}. This assumption will be made throughout the present section.

Lemma 12.2. Let b ≥ 0. Then if c ∈ φ(b) we have b ≽ c; on the other hand, there exists some vector c ∈ φ(b) such that c ≽ b.

Proof: If c ∈ φ(b), then φ^n(c) ⊆ φ^{n+1}(b). By Theorem 11.6 and by (12.3) of the proof of Theorem 11.6 it follows from this, on letting n → ∞, that (v_0·c)K ⊆ (v_0·b)K. Thus we conclude, as in the first paragraph of the proof of Theorem 11.6, that v_0·c ≤ v_0·b, so that, by Theorem 11.6, b ≽ c. On the other hand, suppose that there exists no c ∈ φ(b) such that c ≽ b, i.e., no c such that v_0·c ≥ v_0·b. Since it follows readily from (11.7) that the set φ(b) is bounded and closed, there must then exist a constant δ < 1 such that v_0·c ≤ δ(v_0·b) for all c ∈ φ(b). By what has been proved immediately above, we must then have v_0·c ≤ δ(v_0·b) for all c ∈ φ^m(b). Letting m → ∞ and using (12.3) of the proof of

Theorem 11.6, it follows that (v_0·b)(v_0·c) ≤ δ(v_0·b) for each c ∈ K, i.e., that v_0·c ≤ δ for each c ∈ K. Let u_j be one of the fixed vectors of Lemma 11.19. Then, since φ^m(u_j) → K, it follows from (12.3) that v_0·u_j = 1. On the other hand, since, as was observed following Lemma 11.18, u_j ∈ φ(u_j) ⊆ φ^2(u_j) ⊆ ⋯ → K, we have u_j ∈ K, and hence we have a contradiction, completing the proof of the present lemma. Q.E.D.

Corollary. Let v_0 be the vector of Theorem 11.6. The maximum of v_0·c as c ranges over φ(b) is v_0·b.

We may now show that b_1 = b, b_2, b_3, … is an optimal path beginning at b if and only if b_{n+1} ∈ φ(b_n) and v_0·b_n = v_0·b. Indeed, if b_n ≽ c for every c ∈ φ^n(b), for arbitrarily large n, then, by Theorem 11.6 and by the preceding corollary, v_0·b_n ≥ v_0·b for arbitrarily large n. Thus, since b_{n+1} ∈ φ(b_n) for all n, it follows by the preceding corollary that v_0·b_n ≥ v_0·b for all n; on the other hand, v_0·b ≥ v_0·b_n is even more obvious from the preceding corollary. Conversely, if v_0·b_n = v_0·b for all n, then the statement that b_n ≽ c for every c ∈ φ^n(b) follows at once from the preceding corollary and from Theorem 11.6.

Next we prove the following lemma.

Lemma 12.3. If M_j is an optimal matrix and u ≥ 0, then v_0·M_j u ≤ v_0·u, with equality if and only if all components of u except the jth and (n+1)st vanish; if M_j is a nonoptimal matrix, we have this same inequality, with equality if and only if all components of u except the jth vanish and j ≠ n + 1.

Proof: The inequality results immediately from the above corollary. The form (11.5) and (11.6) of the matrices M_j (and the fact that the constants τ_j are less than 1) makes it plain that, if the jth and (n+1)st components of a nonnegative vector u_0 are zero but u_0 is nonzero, we have M_j u_0 ≤ u_0 with M_j u_0 ≠ u_0, and thus v_0·M_j u_0 < v_0·u_0. A vector u ≥ 0 which has any components other than the jth and (n+1)st different from zero may be decomposed into the sum of two nonnegative vectors u = u_0 + u_1, with u_0 as above; thus v_0·M_j u < v_0·u follows.

Let M_j be optimal: if j = n + 1, then, using (11.6) and recollecting that each of the matrices has been multiplied by a factor t equal to the reciprocal of the largest dom(M_k), we have (M_{n+1})_{n+1,n+1} = 1, and the first statement of the present lemma follows in this case. If M_j is optimal and j ≠ n + 1, then, for notational convenience and without loss of generality, we may assume j = n. Let v_0 = [v_1, …, v_{n+1}]. If we apply the inequality v_0·M_n u ≤ v_0·u to an arbitrary vector u ≥ 0 of the form [0, …, 0, u_n, u_{n+1}], and use the form (11.5) of the matrix M_n, it follows that the two-dimensional vector v = [v_n, v_{n+1}] and the transformation M′: [u_n, u_{n+1}] → [η_n u_{n+1} + τ_n(1 - σ_n)u_n, σ_n u_n] stand in the relationship vM′ ≤ v. Since, as was observed following the statement of Theorem 11.9, M_n is optimal only if dom(M′) = 1, it follows from Lemma 2.4 of Lecture 2 that vM′ = v. Thus v_0·M_n u = v_0·u if u = [0, …, 0, u_n, u_{n+1}], completing the proof of the first assertion of the present lemma.

If M_{n+1} is not dominant, then, using (11.6) as above, we see that v_0·M_{n+1}u < v_0·u if u ≥ 0, u ≠ 0. If M_j is not dominant and j ≠ n + 1, then, for notational convenience and without loss of generality, we may assume j = n. Introducing v and M′ as in the preceding paragraph, we find, just as in the preceding paragraph, that vM′ ≤ v; since dom(M′) < 1 we find by the second corollary of Theorem 2.1 that vM′ ≠ v. Thus we have v_0·M_n u < v_0·u either for a vector u ≥ 0 of the form [0, …, 0, u_{n+1}] or for a vector u ≥ 0 of the form [0, …, 0, u_n, 0]. Since by the Corollary of Lemma 12.2, by the definition (11.7) of the mapping φ, and by the parts of the present lemma already established we must have v_0·M_n u = v_0·u for a vector of the second form, it follows that v_0·M_n u < v_0·u for a vector of the first form. Arguing now as in the first paragraph of the present proof, we find that v_0·M_n u = v_0·u if and only if all components of u except its nth component vanish. Q.E.D.

Theorem 11.6, Lemma 12.3, and the characterization of optimal paths given in the paragraph immediately preceding Lemma 12.3 plainly imply

Theorem 11.9. Similarly, Theorem 11.6, Lemma 12.3, the characterization of optimal paths given in the paragraph immediately preceding Lemma 12.3, and the Corollary of Lemma 12.2 plainly imply the Corollary of Theorem 11.9. Thus we have proved all the results stated in the present and in the preceding lecture.

3. Implications for the Theory of the Preceding Lectures

If the rate of interest is s, then we have seen that in the model just presented investment in a given line of production can only appear reasonable to an entrepreneur if the positive eigenvalue of the matrix

(12.32) [ 0   η
          σ   τ(1 - σ) ]

formed with the coefficients η, σ, and τ relevant for that line of enterprise exceeds 1 + s. This condition is evidently equivalent to the condition

(12.33) (1 + s)^2 - τ(1 - σ)(1 + s) - ση < 0,

or

(12.34) (1 + s)^2 < τ(1 - σ)(1 + s) + ση.

How will the various coefficients in this equation vary over the dynamic path of an economy? The rate s will vary, but (1 + s) will of course vary only slightly; similarly, the coefficient η will vary slightly, as prices vary. The coefficient τ is essentially invariant and less than 1, its nearness to 1 being a measure of the extent to which the given commodity is an imperishable staple. The coefficient σ, however, is the ratio of sales to stocks, and the accumulation of stocks is in principle capable of driving this coefficient down to zero. Since condition (12.34) will always fail for sufficiently small σ, we see that if stocks are sufficiently high relative to sales it is inevitably unwise to continue a given line of production. Thus we may expect each form of production, or even all together, to be suspended when stock-sales ratios reach a certain level. As we

have seen in our earlier lecture, if the set of permissible stock-sales ratios has a sufficiently low average over the whole economy, we may expect the motion of the economy to be subject to the Keynes relation, production adjusting to consumption on the average, and occasional periods occurring in which production is reduced while consumption reduces inventories.
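Returning to the investment condition of the present section: the short sketch below, with purely illustrative values of η, τ and s (none taken from the lecture), computes the positive eigenvalue of the matrix (12.32) and confirms that condition (12.34) holds exactly when that eigenvalue exceeds 1 + s, and that the condition eventually fails as σ is driven down.

    import numpy as np

    def positive_eigenvalue(eta, sigma, tau):
        # the positive eigenvalue of the matrix (12.32)
        m = np.array([[0.0, eta],
                      [sigma, tau * (1.0 - sigma)]])
        return float(np.max(np.linalg.eigvals(m).real))

    def investment_reasonable(eta, sigma, tau, s):
        # condition (12.34): (1 + s)^2 < tau*(1 - sigma)*(1 + s) + sigma*eta
        return (1.0 + s) ** 2 < tau * (1.0 - sigma) * (1.0 + s) + sigma * eta

    eta, tau, s = 3.0, 0.9, 0.05               # illustrative coefficients only
    for sigma in (0.8, 0.4, 0.1, 0.01):        # accumulating stocks drive sigma toward zero
        lam = positive_eigenvalue(eta, sigma, tau)
        print(sigma, round(lam, 3), lam > 1.0 + s, investment_reasonable(eta, sigma, tau, s))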