Advanced Mathematics for Economics, course Juan Pablo Rincón Zapatero
Contents

Topic 1: Matrix diagonalization
1. Review of Matrices and Determinants
1.1. Square matrices
1.2. Determinants
2. Diagonalization of matrices

Topic 2: Difference Equations
1. Introduction
2. Systems of first order difference equations
3. First order linear difference equations
4. Second order linear difference equations
4.1. The nonhomogeneous case
5. Linear systems of difference equations
5.1. Homogeneous systems
5.2. Nonhomogeneous systems
5.3. Stability of linear systems
6. The nonlinear first order equation
6.1. Phase diagrams

Topic 3: Ordinary Differential Equations
1. Introduction. Definitions and classifications of ODEs
2. Elementary integration methods of first order ODEs
2.1. Separable equations
2.2. Exact equations. Integrating factors
2.3. Linear equations
3. Phase diagrams
4. Applications
5. Second order linear ODEs
6. Stability of second order ODEs with constant coefficients
7. Systems of first order ODEs
7.1. Linear systems
7.2. Nonlinear systems
Topic 1: Matrix diagonalization

1. Review of Matrices and Determinants

Definition 1.1. A matrix is a rectangular array of real numbers

    A = [[a_11, a_12, ..., a_1m],
         [a_21, a_22, ..., a_2m],
         ...,
         [a_n1, a_n2, ..., a_nm]].

The matrix is said to be of order n × m if it has n rows and m columns. The set of matrices of order n × m will be denoted M_{n×m}. The element a_ij belongs to the ith row and to the jth column. Most often we will write in abbreviated form A = (a_ij)_{i=1,...,n; j=1,...,m}, or even A = (a_ij). The main, or principal, diagonal of a matrix is the diagonal from the upper left to the lower right hand corner.

Definition 1.2. The transpose of a matrix A, denoted A^T, is the matrix formed by interchanging the rows and columns of A:

    A^T = [[a_11, a_21, ..., a_n1],
           [a_12, a_22, ..., a_n2],
           ...,
           [a_1m, a_2m, ..., a_nm]] ∈ M_{m×n}.

We can define two operations with matrices, sum and multiplication. The main properties of these operations, as well as of transposition, are the following. It is assumed that the matrices in each of the following laws are such that the indicated operation can be performed, and that α, β ∈ R.

(1) (A^T)^T = A.
(2) (A + B)^T = A^T + B^T.
(3) A + B = B + A (commutative law).
(4) A + (B + C) = (A + B) + C (associative law).
(5) α(A + B) = αA + αB.
(6) (α + β)A = αA + βA.
(7) Matrix multiplication is not always commutative, i.e., in general AB ≠ BA.
(8) A(BC) = (AB)C (associative law).
(9) A(B + C) = AB + AC (distributive with respect to addition).

1.1. Square matrices. We are mainly interested in square matrices. A matrix is square if n = m. The trace of a square matrix A is the sum of its diagonal elements, trace(A) = Σ_{i=1}^n a_ii.
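These laws are easy to test on concrete matrices. Below is a minimal pure-Python sketch (the helper names `add`, `matmul` and `transpose` are ours, not from the text) checking laws (2), (3) and (7) on a 2 × 2 example.

```python
# Pure-Python sketch of the matrix laws above; the helper names are ours.

def add(A, B):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(col) for col in zip(*A)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 1]]

print(matmul(A, B) == matmul(B, A))                             # False: law (7), AB != BA in general
print(transpose(add(A, B)) == add(transpose(A), transpose(B)))  # True: law (2)
print(add(A, B) == add(B, A))                                   # True: law (3)
```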
Definition 1.3. The identity matrix of order n, denoted I_n, is the square matrix with ones on the main diagonal and zeros elsewhere. The square matrix of order n with all its entries null is the null matrix, and will be denoted O_n. It holds that I_n A = A I_n = A and O_n A = A O_n = O_n.

Definition 1.4. A square matrix A is called regular or invertible if there exists a matrix B such that AB = BA = I_n. The matrix B is called the inverse of A and is denoted A^{-1}.

Theorem 1.5. The inverse matrix is unique.

Uniqueness of A^{-1} can be easily proved. For, suppose that B is another inverse of the matrix A. Then BA = I_n and

    B = B I_n = B(A A^{-1}) = (BA) A^{-1} = I_n A^{-1} = A^{-1},

showing that B = A^{-1}.

Some properties of the inverse matrix are the following. It is assumed that the matrices in each of the following laws are regular.

(1) (A^{-1})^{-1} = A.
(2) (A^T)^{-1} = (A^{-1})^T.
(3) (AB)^{-1} = B^{-1} A^{-1}.

1.2. Determinants. To a square matrix A we associate a real number called the determinant, |A| or det(A), in the following way.

For a matrix of order 1, A = (a), det(A) = a.

For a matrix of order 2, A = [[a, b], [c, d]], det(A) = ad − bc.

For a matrix of order 3,

    det(A) = a_11 · det[[a_22, a_23], [a_32, a_33]] − a_21 · det[[a_12, a_13], [a_32, a_33]] + a_31 · det[[a_12, a_13], [a_22, a_23]].

This is known as the expansion of the determinant by the first column, but it can be done for any other row or column, giving the same result. Notice the sign (−1)^{i+j} in front of the element a_ij. Before continuing with the inductive definition, let us see an example.

Example 1.6. Compute a determinant of order 3 expanding by the second column. With entries a_12 = 2, a_22 = 3, a_32 = 1 and complementary minors −3, 0 and 1, respectively,

    (−1)^{1+2} · 2 · (−3) + (−1)^{2+2} · 3 · 0 + (−1)^{3+2} · 1 · 1 = 6 + 0 − 1 = 5.
For general n the method is the same as for matrices of order 3: expand the determinant by a row or a column, reducing in this way the order of the determinants that must be computed. For a determinant of order 4 one has to compute 4 determinants of order 3.

Definition 1.7. Given a matrix A of order n, the complementary minor of the element a_ij is the determinant of order n − 1 which results from the deletion of the row i and the column j containing that element. The adjoint A_ij of the element a_ij is the complementary minor multiplied by (−1)^{i+j}.

According to this definition, the determinant of the matrix A can be defined as

    |A| = a_i1 A_i1 + a_i2 A_i2 + ... + a_in A_in   (expansion by row i),

or, equivalently,

    |A| = a_1j A_1j + a_2j A_2j + ... + a_nj A_nj   (expansion by column j).

Example 1.8. Find the value of the determinant. Answer: Expanding the determinant by the third column, one reduces the computation to adjoints of order 3, each multiplied by its entry and by the sign (−1)^{i+3}.

The main properties of determinants are the following. It is assumed that the matrices A and B in each of the following laws are square of order n and λ ∈ R.

(1) |A| = |A^T|.
(2) |λA| = λ^n |A|.
(3) |AB| = |A| |B|.
(4) A matrix A is regular if and only if |A| ≠ 0; in this case |A^{-1}| = 1/|A|.
(5) If in a determinant two rows (or columns) are interchanged, the value of the determinant changes sign.
(6) If two rows (columns) in a determinant are identical, the value of the determinant is zero.
(7) If all the entries in a row (column) of a determinant are multiplied by a constant λ, then the value of the determinant is also multiplied by this constant.
(8) In a given determinant, a constant multiple of the elements in one row (column) may be added to the elements of another row (column) without changing the value of the determinant.

The next result is very useful to check whether a given matrix is regular or not.

Theorem 1.9. A square matrix A has an inverse if and only if |A| ≠ 0.
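Definition 1.7 translates directly into a recursive procedure. The sketch below (the function name `det` is ours) expands along the first column, for small matrices only, and also checks property (2), |λA| = λ^n |A|.

```python
# Recursive Laplace expansion along the first column (Definition 1.7);
# a sketch for small matrices only -- the cost grows factorially with n.

def det(A):
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for i in range(n):
        # complementary minor: delete row i and column 1
        minor = [row[1:] for k, row in enumerate(A) if k != i]
        total += (-1) ** i * A[i][0] * det(minor)  # sign (-1)^(i+1+1) = (-1)^i
    return total

# order 2: ad - bc
print(det([[1, 2], [3, 4]]))   # -2
# order 3, and property (2): |lambda*A| = lambda^n |A|
A = [[2, 0, 1], [1, 3, 0], [0, 1, 1]]
print(det(A))                  # 7
print(det([[3 * x for x in row] for row in A]) == 3 ** 3 * det(A))  # True
```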
2. Diagonalization of matrices

Definition 2.1. Two matrices A and B of order n are similar if there exists an invertible matrix P such that B = P^{-1} A P.

Definition 2.2. A matrix A is diagonalizable if it is similar to a diagonal matrix D, that is, if there exist D diagonal and P invertible such that D = P^{-1} A P.

Of course, D diagonal means that every element off the diagonal is null:

    D = diag(λ_1, ..., λ_n),  λ_1, ..., λ_n ∈ R.

Proposition 2.3. If A is diagonalizable, then for all m ≥ 1

(2.1)    A^m = P D^m P^{-1},  where D^m = diag(λ_1^m, ..., λ_n^m).

Proof. Since A is diagonalizable, A = P D P^{-1}, and hence

    A^m = (P D P^{-1})(P D P^{-1}) ··· (P D P^{-1})   (m factors)
        = P D (P^{-1} P) D ··· D (P^{-1} P) D P^{-1}
        = P D I_n D ··· D I_n D P^{-1} = P D^m P^{-1}.

The expression for D^m is readily obtained by induction on m.

Example 2.4. On a given day, instructor X can teach well or teach badly. After a good day, the probability of doing well in the next class is 1/2, whilst after a bad day, the probability of doing well is 1/9. Let g_t (b_t) be the probability of good (poor) teaching at day t. Suppose that at time t = 1 the class has gone right, that is, g_1 = 1, b_1 = 0. What is the probability that the 5th class goes fine (badly)? Answer: The data lead to the following equations, which relate the probability of a good/bad class to the performance shown by the teacher the day before:

    g_{t+1} = (1/2) g_t + (1/9) b_t,
    b_{t+1} = (1/2) g_t + (8/9) b_t.
In matrix form,

    (g_{t+1}, b_{t+1})^T = [[1/2, 1/9], [1/2, 8/9]] (g_t, b_t)^T.

Obviously,

    (g_5, b_5)^T = [[1/2, 1/9], [1/2, 8/9]]^4 (g_1, b_1)^T.

If the matrix were diagonalizable and we could find matrices P and D, then the computation of the 4th power of the matrix would be easy using Proposition 2.3. We will come back to this example afterwards.

Definition 2.5. Let A be a matrix of order n. We say that λ ∈ R is an eigenvalue of A and that u ∈ R^n, u ≠ 0, is an eigenvector of A associated to λ if

    A u = λ u.

The set of eigenvalues of A, σ(A) = {λ_1, ..., λ_k}, is called the spectrum of A. The set of all eigenvectors of A associated to the same eigenvalue λ, including the null vector, is denoted S(λ), and is called the eigenspace or proper subspace associated to λ.

The following result shows that an eigenvector can only be associated to a unique eigenvalue.

Proposition 2.6. Let 0 ≠ u ∈ S(λ) ∩ S(µ). Then λ = µ.

Proof. Suppose 0 ≠ u ∈ S(λ) ∩ S(µ). Then A u = λ u and A u = µ u. Subtracting both equations we obtain 0 = (λ − µ) u and, since u ≠ 0, we must have λ = µ.

Recall that for an arbitrary matrix A, the rank of the matrix is the number of linearly independent columns or rows (both numbers necessarily coincide). It is also given by the order of the largest nonzero minor of A.

Theorem 2.7. The real number λ is an eigenvalue of A if and only if |A − λ I_n| = 0. Moreover, S(λ) is the set of solutions (including the null vector) of the linear homogeneous system (A − λ I_n) u = 0, and hence it is a vector subspace, whose dimension is

    dim S(λ) = n − rank(A − λ I_n).
Proof. Suppose that λ ∈ R is an eigenvalue of A. Then the system (A − λI_n)u = 0 admits some non trivial solution u. Since the system is homogeneous, this implies that the determinant of the system is zero, |A − λI_n| = 0. The second part about S(λ) follows also from the definition of eigenvector, and from the fact that the set of solutions of a linear homogeneous system is a subspace (the sum of two solutions is again a solution, as is the product of a real number and a solution). Finally, the dimension of the space of solutions is given by the Theorem of Rouché–Frobenius.

Definition 2.8. The characteristic polynomial of A is the polynomial of degree n given by p_A(λ) = |A − λI_n|.

Notice that the eigenvalues of A are the real roots of p_A. The Fundamental Theorem of Algebra states that a polynomial of degree n has n complex roots (not necessarily different; some of the roots may have multiplicity greater than one). It could be the case that some of the roots of p_A are not real numbers. For us, a root of p_A(λ) which is not real is not an eigenvalue of A.

Example 2.9. Find the eigenvalues and the proper subspaces of

    A = [[0, −1, 0], [1, 0, 0], [0, 0, 1]].

Answer:

    p_A(λ) = |A − λI_3| = (1 − λ) · det[[−λ, −1], [1, −λ]] = (1 − λ)(λ² + 1).

The characteristic polynomial has only one real root, hence the spectrum of A is σ(A) = {1}. The proper subspace S(1) is the set of solutions of the homogeneous linear system (A − I_3)u = 0. Solving this system we obtain

    S(1) = {(0, 0, z) : z ∈ R} = <(0, 0, 1)>   (the subspace generated by (0, 0, 1)).

Notice that p_A(λ) has other roots that are not real: the complex numbers ±i, which are not (real) eigenvalues of A. If we admitted complex numbers, then they would be eigenvalues of A in this extended sense.

Example 2.10. Find the eigenvalues and the proper subspaces of the matrix B.
Answer: The eigenvalues are obtained by solving |B − λI_3| = 0. The solutions are λ = 3 (simple root) and λ = 2 (double root). To find S(3) = {u ∈ R³ : (B − 3I_3)u = 0} we compute the solutions of the system (B − 3I_3)u = 0, which are x = y and z = 2y, and hence S(3) = <(1, 1, 2)>. To find S(2) we solve the system (B − 2I_3)u = 0, from which y = z = 0, and hence S(2) = <(1, 0, 0)>.

Example 2.11. Find the eigenvalues and the proper subspaces of

    C = [[1, 2, 0], [0, 2, 0], [1, −5, 3]].

Answer: To compute the eigenvalues we solve the characteristic equation

    |C − λI_3| = (3 − λ) · det[[1 − λ, 2], [0, 2 − λ]] = (1 − λ)(2 − λ)(3 − λ) = 0.

So the eigenvalues are λ_1 = 1, λ_2 = 2 and λ_3 = 3. We now compute the eigenvectors. The eigenspace S(1) is the set of solutions of the homogeneous linear system whose associated matrix is C − λI_3 with λ = 1. Solving that system we find that

    S(1) = {(−2z, 0, z) : z ∈ R} = <(−2, 0, 1)>.
On the other hand, S(2) is the set of solutions of the homogeneous linear system whose associated matrix is C − λI_3 with λ = 2. Solving it,

    S(2) = {(2y, y, 3y) : y ∈ R} = <(2, 1, 3)>.

Finally, S(3) is the set of solutions of the homogeneous linear system whose associated matrix is C − λI_3 with λ = 3, and we obtain

    S(3) = {(0, 0, z) : z ∈ R} = <(0, 0, 1)>.

We now start describing the procedure to diagonalize a matrix. Fix a square matrix A of order n. Let λ_1, λ_2, ..., λ_k be the distinct real roots of the characteristic polynomial p_A(λ), and let m_j be the multiplicity of each λ_j (hence m_j = 1 if λ_j is a simple root, m_j = 2 if it is double, etc.). Note that m_1 + m_2 + ... + m_k ≤ n. The following result states that the number of independent vectors in the subspace S(λ_j) can never be bigger than the multiplicity of λ_j.

Proposition 2.12. For each j = 1, ..., k,

    1 ≤ dim S(λ_j) ≤ m_j.

The following theorem gives necessary and sufficient conditions for a matrix A to be diagonalizable.

Theorem 2.13. A matrix A is diagonalizable if and only if the two following conditions hold.
(1) Every root λ_1, λ_2, ..., λ_k of the characteristic polynomial p_A(λ) is real.
(2) For each j = 1, ..., k, dim S(λ_j) = m_j.

Corollary 2.14. If the matrix A has n distinct real eigenvalues, then it is diagonalizable.

Theorem 2.15. If A is diagonalizable, then the diagonal matrix D is formed by the eigenvalues of A in its main diagonal, with each λ_j repeated m_j times. Moreover, a matrix P such that D = P^{-1}AP has as columns independent eigenvectors selected from each proper subspace S(λ_j), j = 1, ..., k.
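Since Example 2.11 provides three independent eigenpairs, the matrix C is completely determined by C = PDP^{-1}. A small sketch checking each eigenpair Cu = λu; the matrix C written below is reconstructed under that assumption, so treat its entries as derived rather than quoted.

```python
# Check the eigenpairs of Example 2.11. The matrix C is reconstructed from
# C = P D P^{-1} using the eigenvalues/eigenvectors above (an assumption).

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

C = [[1, 2, 0], [0, 2, 0], [1, -5, 3]]
pairs = [(1, [-2, 0, 1]), (2, [2, 1, 3]), (3, [0, 0, 1])]

for lam, u in pairs:
    print(matvec(C, u) == [lam * x for x in u])  # True for each eigenpair
```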
Comments on the examples above. Matrix A of Example 2.9 is not diagonalizable, since p_A has complex roots. Although all roots of p_B are real, the matrix B of Example 2.10 is not diagonalizable, because dim S(2) = 1, which is smaller than the multiplicity of λ = 2. Matrix C of Example 2.11 is diagonalizable, since p_C has 3 different real roots. In this case

    D = diag(1, 2, 3),  P = [[−2, 2, 0], [0, 1, 0], [1, 3, 1]].

Example 2.16. Returning to Example 2.4, we compute

    det[[1/2 − λ, 1/9], [1/2, 8/9 − λ]] = 0,

or 18λ² − 25λ + 7 = 0. We get λ_1 = 1 and λ_2 = 7/18. Now, S(1) is the solution set of

    [[−1/2, 1/9], [1/2, −1/9]] (x, y)^T = (0, 0)^T.

We find y = (9/2)x, so that S(1) = <(2, 9)>. In the same way, S(7/18) is the solution set of

    [[1/9, 1/9], [1/2, 1/2]] (x, y)^T = (0, 0)^T.

We find y = −x, so that S(7/18) = <(1, −1)>. Hence the diagonal matrix is D = diag(1, 7/18), and

    P = [[2, 1], [9, −1]],  P^{-1} = (1/11) [[1, 1], [9, −2]].

Thus

    A^n = (1/11) [[2, 1], [9, −1]] · diag(1, (7/18)^n) · [[1, 1], [9, −2]].

In particular, for n = 4 we obtain

    A^4 = (1/11) ( [[2, 2], [9, 9]] + (7/18)^4 [[9, −2], [−9, 2]] ).

Hence

    (g_5, b_5)^T = A^4 (g_1, b_1)^T = A^4 (1, 0)^T ≈ (0.2005, 0.7995)^T.

This means that the probability that the 5th class goes right, conditional on the event that the first class was also right, is about 0.2.
We can wonder what happens in the long run, that is, supposing that the course lasts forever (oh no!). In this case

    lim_{n→∞} A^n = P (lim_{n→∞} D^n) P^{-1} = P diag(1, 0) P^{-1} = (1/11) [[2, 2], [9, 9]],

to find that the stationary distribution of probabilities is

    (g, b) = (2/11, 9/11).
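The answer to Example 2.4 can be confirmed by direct iteration with exact rational arithmetic; a sketch (the bad-day column is (1/9, 8/9), the two probabilities summing to one):

```python
from fractions import Fraction as F

# Iterate g_{t+1} = (1/2) g_t + (1/9) b_t, b_{t+1} = (1/2) g_t + (8/9) b_t
# from (g_1, b_1) = (1, 0), with exact rational arithmetic.

g, b = F(1), F(0)
history = [(g, b)]
for _ in range(4):                       # four steps: day 1 -> day 5
    g, b = F(1, 2) * g + F(1, 9) * b, F(1, 2) * g + F(8, 9) * b
    history.append((g, b))

print(float(g))                          # ~0.2005: probability the 5th class goes well

# long run: the distribution approaches (2/11, 9/11)
for _ in range(200):
    g, b = F(1, 2) * g + F(1, 9) * b, F(1, 2) * g + F(8, 9) * b
print(abs(float(g) - 2 / 11) < 1e-9)     # True
```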
Topic 2: Difference Equations

1. Introduction

In this chapter we shall consider systems of equations where each variable has a time index t = 0, 1, 2, ... and variables of different time periods are connected in a non trivial way. Such systems are called systems of difference equations and are useful to describe dynamical systems with discrete time. The study of dynamics in economics is important because it allows us to drop the (static) assumption that the process of economic adjustment inevitably leads to an equilibrium. In a dynamic context, this stability property has to be checked, rather than assumed away.

Let time be discrete, denoted t = 0, 1, .... A function X : N → R^n that depends on this variable is simply a sequence of vectors of n dimensions, X_0, X_1, X_2, .... If each vector is connected with the previous vector by means of a mapping f : R^n → R^n as

    X_{t+1} = f(X_t), t = 0, 1, ...,

then we have a system of first order difference equations. In the following definition, we generalize the concept to systems with longer time lags that can include t explicitly.

Definition 1.1. A kth order discrete system of difference equations is an expression of the form

(1.1)    X_{t+k} = f(X_{t+k−1}, ..., X_t, t), t = 0, 1, ...,

where every X_t ∈ R^n and f : R^{nk} × [0, ∞) → R^n. The system is autonomous if f does not depend on t; linear if the mapping f is linear in the variables (X_{t+k−1}, ..., X_t); of first order if k = 1.

Definition 1.2. A sequence {X_0, X_1, X_2, ...} obtained from the recursion (1.1) with initial value X_0 is called a trajectory, orbit or path of the dynamical system from X_0.

In what follows we will write x_t instead of X_t if the variable X_t is a scalar.

Example 1.3 (Geometrical sequence). Let {x_t} be a scalar sequence, x_{t+1} = q x_t, t = 0, 1, ..., with q ∈ R. This is a first order, autonomous and linear difference equation. Obviously x_t = q^t x_0. Similarly, for an arithmetic sequence, x_{t+1} = x_t + d with d ∈ R, x_t = x_0 + t d.

Example 1.4.
- x_{t+1} = x_t + t is linear, non autonomous and of first order;
- x_{t+2} = −x_t is linear, autonomous and of second order;
- x_{t+1} = x_t² + 1 is non linear, autonomous and of first order.
Example 1.5 (Fibonacci numbers (1202)). How many pairs of rabbits will be produced in a year, beginning with a single pair, if every month each pair bears a new pair which becomes productive from the second month on? With x_t denoting the pairs of rabbits in month t, the problem leads to the following recursion:

    x_{t+2} = x_{t+1} + x_t, t = 0, 1, 2, ...,

with x_0 = 1 and x_1 = 1. This is an autonomous and linear second order difference equation.

2. Systems of first order difference equations

Systems of order k > 1 can be reduced to first order systems by augmenting the number of variables. This is the reason we study mainly first order systems. Instead of giving a general formula for the reduction, we present a simple example.

Example 2.1. Consider the second order difference equation y_{t+2} = g(y_{t+1}, y_t). Let x_{1,t} = y_{t+1} and x_{2,t} = y_t; then x_{2,t+1} = y_{t+1} = x_{1,t} and the resulting first order system is

    x_{1,t+1} = g(x_{1,t}, x_{2,t}),
    x_{2,t+1} = x_{1,t}.

If we denote X_t = (x_{1,t}, x_{2,t})^T and f(X_t) = (g(x_{1,t}, x_{2,t}), x_{1,t})^T, then the system can be written X_{t+1} = f(X_t). For example, y_{t+2} = 4y_{t+1} + y_t² + 1 can be reduced to the first order system

    x_{1,t+1} = 4x_{1,t} + x_{2,t}² + 1,
    x_{2,t+1} = x_{1,t},

and the Fibonacci equation of Example 1.5 is reduced to

    x_{1,t+1} = x_{1,t} + x_{2,t},
    x_{2,t+1} = x_{1,t}.

For a function f : R^n → R^n, we shall use the following notation: f^t denotes the t-fold composition of f, i.e. f^1 = f, f^2 = f ∘ f and, in general, f^t = f ∘ f^{t−1} for t = 1, 2, .... We also define f^0 as the identity function, f^0(X) = X.

Theorem 2.2. Consider the autonomous first order system X_{t+1} = f(X_t) and suppose that there exists some subset D such that for any X ∈ D, f(X) ∈ D. Then, given any initial condition X_0 ∈ D, the sequence {X_t} is given by X_t = f^t(X_0).
Proof. Notice that

    X_1 = f(X_0),  X_2 = f(X_1) = f(f(X_0)) = f²(X_0),  ...,  X_t = f(f(··· f(X_0) ···)) = f^t(X_0).

The theorem provides the current value of X, X_t, in terms of the initial value, X_0. We are interested in the behavior of X_t in the future, that is, in the limit

    lim_{t→∞} f^t(X_0).

Generally, we are more interested in this limit than in the analytical expression of X_t. Nevertheless, there are some cases where the solution can be found explicitly, so we can study the above limit behavior quite well. Observe that if the limit exists, lim_{t→∞} f^t(X_0) = X̄_0, say, and f is continuous, then

    f(X̄_0) = f(lim_{t→∞} f^t(X_0)) = lim_{t→∞} f^{t+1}(X_0) = X̄_0,

hence the limit X̄_0 is a fixed point of the map f. This is the reason fixed points play a distinguished role in dynamical systems.

Definition 2.3. A point X̄_0 ∈ D is called a fixed point of the autonomous system f if, starting the system from X̄_0, it stays there: if X_0 = X̄_0, then X_t = X̄_0, t = 1, 2, .... Obviously, X̄_0 is also a fixed point of the map f. A fixed point is also called equilibrium, stationary point, or steady state.

Example 2.4. In Example 1.3 (x_{t+1} = q x_t), if q = 1, then every point is a fixed point; if q ≠ 1, then there exists a unique fixed point: x̄_0 = 0. Notice that the solution x_t = q^t x_0 has the following limit behavior (x_0 ≠ 0), depending on the value of q:

    −1 < q < 1:  lim_{t→∞} q^t x_0 = 0;
    q = 1:       lim_{t→∞} q^t x_0 = x_0;
    q ≤ −1:      the sequence oscillates (between +∞ and −∞ when q < −1) and the limit does not exist.

In Example 1.5, x̄_0 = 0 is the unique fixed point. Consider now the difference equation x_{t+1} = x_t² − 6. Then the fixed points are the solutions of x = x² − 6, that is, x̄_0 = −2 and x̄_0 = 3.

In the following definitions, ‖X − Y‖ stands for the Euclidean distance between X and Y. For example, if X = (1, 2, 3) and Y = (3, 6, 7), then

    ‖X − Y‖ = √((3 − 1)² + (6 − 2)² + (7 − 3)²) = √36 = 6.
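A quick sketch (the helper `iterate` is ours) illustrating Theorem 2.2 and the fixed points of x_{t+1} = x_t² − 6 just computed:

```python
# Iterating f^t (Theorem 2.2) and checking the fixed points of x_{t+1} = x_t^2 - 6.

def iterate(f, x0, t):
    """Return f^t(x0) by repeated application of f."""
    x = x0
    for _ in range(t):
        x = f(x)
    return x

f = lambda x: x * x - 6
print(iterate(f, 3, 10))    # 3: a fixed point stays put
print(iterate(f, -2, 10))   # -2: the other fixed point
print(iterate(f, 1, 3))     # a non-fixed initial point wanders: f(1) = -5, f(-5) = 19, f(19) = 355
```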
Definition 2.5. A fixed point X̄_0 is called stable if, for any close enough initial state X_0, the resulting trajectory {X_t} exists and stays close forever to X̄_0; that is, for any positive real ε there exists a positive real δ(ε) such that if ‖X_0 − X̄_0‖ < δ(ε), then ‖X_t − X̄_0‖ < ε for every t. A stable fixed point X̄_0 is called locally asymptotically stable (l.a.s.) if the trajectory {X_t} starting from any initial point X_0 close enough to X̄_0 converges to the fixed point. A stable fixed point is called globally asymptotically stable (g.a.s.) if any trajectory generated by any initial point X_0 converges to it. A fixed point is unstable if it is not stable.

Remark 2.6. If X̄_0 is stable but not l.a.s., {X_t} need not approach X̄_0. A g.a.s. fixed point is necessarily unique. If X̄_0 is l.a.s., then small perturbations around X̄_0 decay and the trajectory generated by the system returns to the fixed point as time grows.

Definition 2.7. Let P be an integer larger than 1. A series of vectors X_0, X_1, ..., X_{P−1} is called a P-period cycle of the system f if a trajectory starting from X_0 goes through X_1, ..., X_{P−1} and returns to X_0, that is,

    X_{t+1} = f(X_t), t = 0, 1, ..., P − 1, with X_P = X_0.

Observe that the series of vectors X_0, X_1, ..., X_{P−1} repeats indefinitely in the trajectory, {X_t} = {X_0, X_1, ..., X_{P−1}, X_0, X_1, ..., X_{P−1}, ...}. For this reason, the trajectory itself is called a P-cycle.

Example 2.8. In Example 1.3 (x_{t+1} = q x_t) with q = −1 all the trajectories are 2-cycles, because a typical path is {x_0, −x_0, x_0, −x_0, ...}.

Example 2.9. In Example 1.4, where y_{t+2} = −y_t, to find the possible cycles of the equation we first write it as a first order system using Example 2.1, to obtain

    X_{t+1} = (x_{1,t+1}, x_{2,t+1})^T = (−x_{2,t}, x_{1,t})^T = f(X_t).

Let X_0 = (2, 4). Then X_1 = f(X_0) = (−4, 2), X_2 = f(X_1) = (−2, −4), X_3 = f(X_2) = (4, −2), X_4 = f(X_3) = (2, 4) = X_0. Thus, a 4-cycle appears starting at X_0.
In fact, any trajectory is a 4-cycle.
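The orbit of Example 2.9 can be traced mechanically; a sketch with f(x_1, x_2) = (−x_2, x_1):

```python
# Trace the 4-cycle of Example 2.9, where f(x1, x2) = (-x2, x1).

def f(X):
    x1, x2 = X
    return (-x2, x1)

X = (2, 4)
orbit = [X]
for _ in range(4):
    X = f(X)
    orbit.append(X)

print(orbit)  # [(2, 4), (-4, 2), (-2, -4), (4, -2), (2, 4)] -- back to X0 after 4 steps
```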
3. First order linear difference equations

The linear equation is of the form

(3.1)    x_{t+1} = a x_t + b,  x_t ∈ R, a, b ∈ R.

Consider first the case b = 0 (homogeneous case). Then, by Theorem 2.2, the solution is x_t = a^t x_0, t = 0, 1, ....

Consider now the non homogeneous case, b ≠ 0. Let us find the fixed points of the equation. They solve (see Definition 2.3) x̄_0 = a x̄_0 + b, hence there is no fixed point if a = 1. However, if a ≠ 1, the unique fixed point is x̄_0 = b/(1 − a). Define now y_t = x_t − x̄_0 and replace x_t = y_t + x̄_0 into (3.1) to get y_{t+1} = a y_t, hence y_t = a^t y_0. Returning to the variable x_t, we find that the solution of the linear equation is

    x_t = x̄_0 + a^t (x_0 − x̄_0) = b/(1 − a) + a^t (x_0 − b/(1 − a)).

Theorem 3.1. In (3.1), the fixed point x̄_0 = b/(1 − a) is g.a.s. if and only if |a| < 1.

Proof. Notice that lim_{t→∞} a^t = 0 iff |a| < 1, and hence lim_{t→∞} x_t = lim_{t→∞} (x̄_0 + a^t (x_0 − x̄_0)) = x̄_0 iff |a| < 1, independently of the initial x_0. The convergence is monotone if 0 < a < 1 and oscillating if −1 < a < 0.

Example 3.2 (A Multiplier–Accelerator Model of Growth). Let Y_t denote national income, I_t total investment, and S_t total saving, all in period t. Suppose that savings are proportional to national income, and that investment is proportional to the change in income from period t to t + 1. Then, for t = 0, 1, 2, ...,

    S_t = α Y_t,
    I_{t+1} = β(Y_{t+1} − Y_t),
    S_t = I_t.

The last equation is the equilibrium condition that saving equals investment in each period. Here β > α > 0. We can deduce a difference equation for Y_t and solve it as follows. From the first and third equations, I_t = α Y_t, and so I_{t+1} = α Y_{t+1}. Inserting these into the second equation yields α Y_{t+1} = β(Y_{t+1} − Y_t), or (α − β) Y_{t+1} = −β Y_t. Thus,

    Y_{t+1} = (β/(β − α)) Y_t = (1 + α/(β − α)) Y_t, t = 0, 1, 2, ....
The solution is

    Y_t = (1 + α/(β − α))^t Y_0, t = 0, 1, 2, ....

Thus, Y grows at the constant proportional rate g = α/(β − α) each period. Note that g = (Y_{t+1} − Y_t)/Y_t.

Example 3.3 (A Cobweb Model). Consider a market model with a single commodity, where the producer's output decision must be made one period in advance of the actual sale, as in agricultural production, where planting must precede by an appreciable length of time the harvesting and sale of the output. Let us assume that the output decision in period t is based on the prevailing price P_t; since this output will not be available until period t + 1, the supply function is lagged one period, Q_{s,t+1} = S(P_t). Demand at time t, on the other hand, is determined by the current price, Q_{d,t} = D(P_t). Supposing that the functions S and D are linear and that in each time period the market clears, we have the following three equations:

    Q_{d,t} = Q_{s,t},
    Q_{d,t+1} = α − β P_{t+1},  α, β > 0,
    Q_{s,t+1} = −γ + δ P_t,  γ, δ > 0.

By substituting the last two equations into the first, the model reduces to the difference equation for prices

    P_{t+1} = −(δ/β) P_t + (α + γ)/β.

The fixed point is P̄_0 = (α + γ)/(β + δ), which is also the equilibrium price of the market, that is, S(P̄_0) = D(P̄_0). The solution is

    P_t = P̄_0 + (−δ/β)^t (P_0 − P̄_0).

Since −δ/β is negative, the solution path is oscillating. It is this fact which gives rise to the cobweb phenomenon. There are three oscillation patterns: explosive if δ > β (S steeper than D), uniform if δ = β, and damped if δ < β (S flatter than D). The three possibilities are illustrated in the graphics below. The demand is the downward sloping line, with slope −β. The supply is the upward sloping line, with slope δ. When δ > β, as in Figure 3, the interaction of demand and supply will produce an explosive oscillation as follows. Given an initial price P_0, the quantity supplied in the next period will be Q_1 = S(P_0).
In order to clear the market, the quantity demanded in period 1 must also be Q_1, which is possible if and only if the price is set at the level P_1 given by the equation Q_1 = D(P_1). Now, via the S curve, the price P_1 will lead to Q_2 = S(P_1) as the quantity supplied in period 2, and to clear the market, the price must be set at the level P_2 according to the demand curve. Repeating this reasoning, we can trace out a cobweb around the demand and supply curves.
Figure 1. Cobweb diagram with damped oscillations
Figure 2. Cobweb diagram with uniform oscillations
Figure 3. Cobweb diagram with explosive oscillations
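The damped regime is easy to reproduce numerically. In the sketch below the parameter values are illustrative choices of ours (not from the text), picked so that δ < β, as in Figure 1:

```python
from fractions import Fraction as F

# Cobweb price dynamics P_{t+1} = -(delta/beta) P_t + (alpha+gamma)/beta.
# Parameter values are illustrative, not from the text; delta < beta gives
# the damped case.
alpha, beta, gamma, delta = F(4), F(2), F(1), F(1)
P_bar = (alpha + gamma) / (beta + delta)      # fixed point (alpha+gamma)/(beta+delta) = 5/3

P = F(10)                                     # arbitrary initial price
path = [P]
for _ in range(40):
    P = -(delta / beta) * P + (alpha + gamma) / beta
    path.append(P)

print(float(P_bar))                                     # ~1.6667
print(abs(float(path[-1] - P_bar)) < 1e-9)              # True: damped convergence to P_bar
print((path[0] - P_bar > 0) != (path[1] - P_bar > 0))   # True: the deviation alternates in sign
```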
4. Second order linear difference equations

The second order linear difference equation is

    x_{t+2} + a_1 x_{t+1} + a_0 x_t = b_t,

where a_0 and a_1 are constants and b_t is a given function of t. The associated homogeneous equation is

    x_{t+2} + a_1 x_{t+1} + a_0 x_t = 0,

and the associated characteristic equation is

    r² + a_1 r + a_0 = 0.

This quadratic equation has solutions

    r_1 = (−a_1 + √(a_1² − 4a_0))/2,  r_2 = (−a_1 − √(a_1² − 4a_0))/2.

There are three different cases, depending on the sign of the discriminant a_1² − 4a_0 of the equation. When it is negative, the solutions are (conjugate) complex numbers. Recall that a complex number is z = a + ib, where a and b are real numbers and i = √−1 is called the imaginary unit, so that i² = −1. The real part of z is a, and the imaginary part of z is b. The conjugate of z = a + ib is z̄ = a − ib. Complex numbers can be added, z + z′ = (a + a′) + i(b + b′), and multiplied, zz′ = (a + ib)(a′ + ib′) = (aa′ − bb′) + i(ab′ + a′b). For the following theorem we need the modulus of z, ρ = |z| = √(a² + b²), and the argument of z, which is the angle θ ∈ (−π/2, π/2] such that tan θ = b/a. It is useful to recall the following table of trigonometric values:

    θ      sin θ    cos θ    tan θ
    π/6    1/2      √3/2     1/√3
    π/4    √2/2     √2/2     1
    π/3    √3/2     1/2      √3

For negative values of the argument θ, observe that sin(−θ) = −sin θ and cos(−θ) = cos θ, so that tan(−θ) = −tan θ. For example, the modulus and argument of 1 − i are ρ = √2 and θ = −π/4, respectively, since tan θ = −1/1 = −1.

Theorem 4.1. The general solution of

(4.1)    x_{t+2} + a_1 x_{t+1} + a_0 x_t = 0  (a_0 ≠ 0)

is as follows:
(1) If a_1² − 4a_0 > 0 (the characteristic equation has two distinct real roots),

    x_t = A r_1^t + B r_2^t,  r_{1,2} = (−a_1 ± √(a_1² − 4a_0))/2.

(2) If a_1² − 4a_0 = 0 (the characteristic equation has one real double root),

    x_t = (A + Bt) r^t,  r = −a_1/2.

(3) If a_1² − 4a_0 < 0 (the characteristic equation has no real roots),

    x_t = ρ^t (A cos θt + B sin θt),  ρ = √a_0,  tan θ = √(4a_0 − a_1²)/(−a_1),  θ ∈ [0, π].

Remark 4.2. When the characteristic equation has complex roots, the solution of (4.1) involves oscillations. Note that when ρ < 1, ρ^t tends to 0 as t → ∞ and the oscillations are damped. If ρ > 1, the oscillations are explosive, and in the case ρ = 1 we have undamped oscillations.

Example 4.3. Find the general solutions of (a) x_{t+2} − 7x_{t+1} + 6x_t = 0, (b) x_{t+2} − 6x_{t+1} + 9x_t = 0, (c) x_{t+2} − 2x_{t+1} + 4x_t = 0.

Solution: (a) The characteristic equation is r² − 7r + 6 = 0, whose roots are r_1 = 6 and r_2 = 1, so the general solution is x_t = A·6^t + B, A, B ∈ R.

(b) The characteristic equation is r² − 6r + 9 = 0, which has a double root r = 3. The general solution is x_t = 3^t (A + Bt).

(c) The characteristic equation is r² − 2r + 4 = 0, with complex solutions r_1 = (2 + √−12)/2 = 1 + i√3 and r_2 = 1 − i√3. Here ρ = √4 = 2 and tan θ = √12/2 = √3; this means that θ = π/3. The general solution is

    x_t = 2^t (A cos (π/3)t + B sin (π/3)t).

4.1. The nonhomogeneous case. Now consider the nonhomogeneous equation

(4.2)    x_{t+2} + a_1 x_{t+1} + a_0 x_t = b_t,

and let x*_t be a particular solution. It turns out that the solutions of the equation have an interesting structure, due to the linearity of the equation.

Theorem 4.4. The general solution of the nonhomogeneous equation (4.2) is the sum of the general solution of the homogeneous equation (4.1) and a particular solution x*_t of the nonhomogeneous equation.
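The complex-root solution of Example 4.3(c) can be verified numerically: for arbitrary constants A and B (the values below are ours), the left-hand side of the equation should vanish up to rounding:

```python
import math

# Check that x_t = 2^t (A cos(pi t / 3) + B sin(pi t / 3)) solves
# x_{t+2} - 2 x_{t+1} + 4 x_t = 0 (Example 4.3(c)); A, B arbitrary.
A, B = 1.5, -0.7

def x(t):
    return 2 ** t * (A * math.cos(math.pi * t / 3) + B * math.sin(math.pi * t / 3))

residual = max(abs(x(t + 2) - 2 * x(t + 1) + 4 * x(t)) for t in range(10))
print(residual < 1e-8)  # True: zero up to rounding error
```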
Example 4.5. Find the general solution of x_{t+2} − 4x_t = 3.

Solution: Note that x*_t = −1 is a particular solution. To find the general solution of the homogeneous equation, consider the solutions of the characteristic equation m² − 4 = 0, m_{1,2} = ±2. Hence, the general solution of the nonhomogeneous equation is

    x_t = A(−2)^t + B·2^t − 1.

Example 4.6. Find the general solution of x_{t+2} − 4x_t = t.

Solution: Now it is not obvious how to find a particular solution. We can use the method of undetermined coefficients and try an expression of the form x*_t = Ct + D, looking for constants C, D such that x*_t is a solution. This requires

    C(t + 2) + D − 4(Ct + D) = t, t = 0, 1, 2, ....

One must have C − 4C = 1 and 2C + D − 4D = 0. It follows that C = −1/3 and D = −2/9. Thus, the general solution is

    x_t = A(−2)^t + B·2^t − t/3 − 2/9.

Example 4.7. Find the solution of x_{t+2} − 4x_t = t satisfying x_0 = 0 and x_1 = −1.

Solution: Using the general solution found above, we have two equations for the two unknown parameters A and B:

    A + B − 2/9 = 0,
    −2A + 2B − 1/3 − 2/9 = −1.

The solution is A = 2/9 and B = 0. Thus, the solution of the nonhomogeneous equation is

    x_t = (2/9)(−2)^t − t/3 − 2/9.

The method of undetermined coefficients for solving equation (4.2) supposes that a particular solution has the form of the nonhomogeneous term b_t. The method works quite well when this term is of the form

    a^t,  t^m,  cos at,  sin at,

or linear combinations of them.

Example 4.8. Solve the equation x_{t+2} − 5x_{t+1} + 6x_t = 4^t + t² + 3.

Solution: The homogeneous equation has characteristic equation r² − 5r + 6 = 0, with two different real roots r_{1,2} = 2, 3. Its general solution is, therefore, A·2^t + B·3^t. To find a particular solution we look for constants C, D, E and F such that a particular solution is

    x*_t = C·4^t + Dt² + Et + F.
Plugging this into the equation we find

    C·4^{t+2} + D(t + 2)² + E(t + 2) + F − 5(C·4^{t+1} + D(t + 1)² + E(t + 1) + F) + 6(C·4^t + Dt² + Et + F) = 4^t + t² + 3.

Expanding and rearranging yields

    2C·4^t + 2Dt² + (−6D + 2E)t + (−D − 3E + 2F) = 4^t + t² + 3.

This must hold for every t = 0, 1, 2, ...; thus

    2C = 1,  2D = 1,  −6D + 2E = 0,  −D − 3E + 2F = 3.

It follows that C = 1/2, D = 1/2, E = 3/2 and F = 4. The general solution is

    x_t = A·2^t + B·3^t + (1/2)4^t + (1/2)t² + (3/2)t + 4.

Example 4.9 (A Multiplier–Accelerator Growth Model). Let Y_t denote national income, C_t total consumption, and I_t total investment in a country at time t. Assume that for t = 0, 1, ...,

    (i) Y_t = C_t + I_t (income is divided between consumption and investment),
    (ii) C_{t+1} = aY_t + b (consumption is a linear function of previous income),
    (iii) I_{t+1} = c(C_{t+1} − C_t) (investment is proportional to the change in consumption),

where a, b, c > 0. Find a second order difference equation describing this national economy.

Solution: We eliminate two of the unknown functions as follows. From (i), we get (iv) Y_{t+2} = C_{t+2} + I_{t+2}. Replace now t by t + 1 in (ii) and (iii) to get (v) C_{t+2} = aY_{t+1} + b and (vi) I_{t+2} = c(C_{t+2} − C_{t+1}), respectively. Then, inserting (ii) and (v) into (vi) gives

    I_{t+2} = ac(Y_{t+1} − Y_t).

Inserting this result and (v) into (iv), we get

    Y_{t+2} = aY_{t+1} + b + ac(Y_{t+1} − Y_t),

and rearranging we arrive at

    Y_{t+2} − a(1 + c)Y_{t+1} + acY_t = b, t = 0, 1, ....

The form of the solution depends on the coefficients a, b, c.
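The reduction can be confirmed by simulating the structural equations (i)–(iii) directly and checking that the resulting income path satisfies the second order equation exactly; the parameters and initial values in this sketch are illustrative choices of ours:

```python
from fractions import Fraction as F

# Simulate (i)-(iii) of Example 4.9 and check the reduced equation
# Y_{t+2} - a(1+c) Y_{t+1} + a c Y_t = b. Parameters and initial values
# below are illustrative, not from the text.
a, b, c = F(1, 2), F(100), F(1, 5)

Y = [F(1000)]          # Y_0
C = [F(600)]           # C_0 (arbitrary starting consumption)
for t in range(10):
    C.append(a * Y[t] + b)                 # (ii)  C_{t+1} = a Y_t + b
    I_next = c * (C[t + 1] - C[t])         # (iii) I_{t+1} = c (C_{t+1} - C_t)
    Y.append(C[t + 1] + I_next)            # (i)   Y_{t+1} = C_{t+1} + I_{t+1}

ok = all(Y[t + 2] - a * (1 + c) * Y[t + 1] + a * c * Y[t] == b for t in range(8))
print(ok)  # True: the simulated path satisfies the second order equation
```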
5. Linear systems of difference equations

Now we suppose that the dynamic variables are vectors, X_t ∈ R^n. A first order system of linear difference equations with constant coefficients is given by

    x_{1,t+1} = a_{11}x_{1,t} + ... + a_{1n}x_{n,t} + b_{1,t},
    ...
    x_{n,t+1} = a_{n1}x_{1,t} + ... + a_{nn}x_{n,t} + b_{n,t}.

An example is

    x_{1,t+1} = 2x_{1,t} - x_{2,t} + 1,
    x_{2,t+1} = x_{1,t} + x_{2,t} + e^t.

Most often we will rewrite systems omitting subscripts, using different letters for different variables, as in

    x_{t+1} = 2x_t - y_t + 1,
    y_{t+1} = x_t + y_t + e^t.

A linear system is equivalent to the matrix equation

    X_{t+1} = AX_t + B_t,

where

    X_t = (x_{1,t}, ..., x_{n,t})^T,   A = ( a_{11} ... a_{1n} ; ... ; a_{n1} ... a_{nn} ),   B_t = (b_{1,t}, ..., b_{n,t})^T

(semicolons separate the rows of a matrix, here and below). We will center on the case where the independent term B_t ≡ B is a constant vector.

5.1. Homogeneous systems. Consider the homogeneous system X_{t+1} = AX_t. Note that X_1 = AX_0, X_2 = AX_1 = AAX_0 = A^2 X_0. Thus, given the initial vector X_0, the solution is

    X_t = A^t X_0.

In the case that A is diagonalizable, P^{-1}AP = D with D diagonal, the expression above simplifies to

    X_t = P D^t P^{-1} X_0,

which is easy to compute since D is diagonal.

Example 5.1. Find the general solution of the system

    ( x_{t+1} ; y_{t+1} ) = ( 4  -1 ; 2  1 ) ( x_t ; y_t ).
Solution: The matrix A = ( 4  -1 ; 2  1 ) has characteristic polynomial p_A(λ) = λ^2 - 5λ + 6, with roots λ_1 = 3 and λ_2 = 2. Thus, the matrix is diagonalizable. It is easy to find the eigenspaces S(3) = <(1, 1)>, S(2) = <(1, 2)>. Hence, the matrices P, P^{-1} and D are

    P = ( 1  1 ; 1  2 ),   P^{-1} = ( 2  -1 ; -1  1 ),   D = ( 3  0 ; 0  2 ),

and the solution is

    X_t = P D^t P^{-1} X_0 = ( 2·3^t - 2^t   -3^t + 2^t ; 2·3^t - 2^{t+1}   -3^t + 2^{t+1} ) ( x_0 ; y_0 ).

Supposing that the initial condition is (x_0, y_0) = (1, 2), the solution is given by

    x_t = 2·3^t - 2^t + 2(-3^t + 2^t) = 2^t,
    y_t = 2·3^t - 2^{t+1} + 2(-3^t + 2^{t+1}) = 2^{t+1}.

5.2. Nonhomogeneous systems. Consider the system X_{t+1} = AX_t + B, where B is a non-null, constant vector. To obtain a closed-form solution of the system, we begin by noting that

    X_1 = AX_0 + B,
    X_2 = AX_1 + B = A(AX_0 + B) + B = A^2 X_0 + (A + I_n)B,
    ...
    X_t = AX_{t-1} + B = ... = A^t X_0 + (A^{t-1} + A^{t-2} + ... + A + I_n)B.

Observe that

    (A^{t-1} + A^{t-2} + ... + A + I_n)(A - I_n) = A^t + A^{t-1} + ... + A - A^{t-1} - ... - A - I_n = A^t - I_n.

Thus, assuming that (A - I_n) is invertible, we find

    A^{t-1} + A^{t-2} + ... + A + I_n = (A^t - I_n)(A - I_n)^{-1}.

Plugging this equality into the expression for X_t above one gets

    X_t = A^t X_0 + (A^t - I_n)(A - I_n)^{-1} B.

On the other hand, note that the constant solutions of the nonhomogeneous system (or fixed points of the system), denoted X̄, satisfy X̄ = AX̄ + B. Assuming again that the matrix A - I_n has an inverse, we can solve for X̄:

    (I_n - A)X̄ = B   ⟹   X̄ = (I_n - A)^{-1} B.
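The identity A^t = P D^t P^{-1} used in Example 5.1 can be illustrated numerically. The Python sketch below (not part of the original notes; it assumes a NumPy environment) builds A^t from an eigendecomposition and compares it with the closed-form entries in terms of 3^t and 2^t derived above:

```python
import numpy as np

A = np.array([[4.0, -1.0], [2.0, 1.0]])     # matrix of Example 5.1

def power_via_diagonalization(A, t):
    # A^t = P D^t P^{-1}, valid when A is diagonalizable
    eigvals, P = np.linalg.eig(A)
    Dt = np.diag(eigvals ** t)
    return P @ Dt @ np.linalg.inv(P)

def closed_form(t):
    # entries of A^t computed in the example via P D^t P^{-1}
    return np.array([[2 * 3**t - 2**t,        -(3**t) + 2**t],
                     [2 * 3**t - 2**(t + 1),  -(3**t) + 2**(t + 1)]], dtype=float)

for t in range(6):
    assert np.allclose(power_via_diagonalization(A, t), closed_form(t))
```

The check is independent of the order in which `np.linalg.eig` returns the eigenvalues, since P and D are produced consistently.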
Then, collecting all the above observations, we can write the solution of the nonhomogeneous system in a nice form as

(5.1)    X_t = A^t X_0 - (A^t - I_n)X̄ = X̄ + A^t (X_0 - X̄).

Theorem 5.2. Suppose that |A - I_n| ≠ 0. Then, the general solution of the nonhomogeneous system is given in Eqn. (5.1). Moreover, when A is diagonalizable, the above expression may be written as

(5.2)    X_t = X̄ + P D^t P^{-1} (X_0 - X̄),

where P^{-1}AP = D with D diagonal.

Proof. Eqn. (5.2) easily follows from Eqn. (5.1), taking into account that A^t = P D^t P^{-1}.

Example 5.3. Find the general solution of the system

    ( x_{t+1} ; y_{t+1} ) = ( 4  -1 ; 2  1 ) ( x_t ; y_t ) + ( 1 ; -1 ).

Solution: The fixed point X̄ is given by

    X̄ = (I_2 - A)^{-1} B = ( -3  1 ; -2  0 )^{-1} ( 1 ; -1 ) = (1/2)( 0  -1 ; 2  -3 )( 1 ; -1 ) = ( 1/2 ; 5/2 ).

By the example above we already know the general solution of the homogeneous system. The general solution of the nonhomogeneous system is then

    ( x_t ; y_t ) = ( 2·3^t - 2^t   -3^t + 2^t ; 2·3^t - 2^{t+1}   -3^t + 2^{t+1} ) ( x_0 - 1/2 ; y_0 - 5/2 ) + ( 1/2 ; 5/2 ).

5.3. Stability of linear systems. We study here the stability properties of a first order system X_{t+1} = AX_t + B where |I_n - A| ≠ 0. For the following theorem, recall that for a complex number z = α + βi, the modulus is ρ = sqrt(α^2 + β^2). For a real number α, the modulus is |α|.

Theorem 5.4. A necessary and sufficient condition for the system X_{t+1} = AX_t + B to be g.a.s. is that all roots of the characteristic polynomial p_A(λ) (real or complex) have modulus less than 1. In this case, any trajectory converges to X̄ = (I_n - A)^{-1} B as t → ∞.

We can give an idea of the proof of the above theorem in the case where the matrix A is diagonalizable. As we have shown above, the solution of the nonhomogeneous system in this case is

    X_t = X̄ + P D^t P^{-1} (X_0 - X̄),   where   D^t = diag(λ_1^t, λ_2^t, ..., λ_n^t),
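Formula (5.1) can be tested against direct iteration for Example 5.3. A Python sketch (not part of the original notes; NumPy assumed, and the initial condition below is an arbitrary choice for illustration):

```python
import numpy as np

A = np.array([[4.0, -1.0], [2.0, 1.0]])
B = np.array([1.0, -1.0])

# fixed point: (I - A) Xbar = B
Xbar = np.linalg.solve(np.eye(2) - A, B)
assert np.allclose(Xbar, [0.5, 2.5])

X0 = np.array([1.0, 0.0])                   # arbitrary initial condition
X = X0.copy()
for t in range(1, 8):
    X = A @ X + B                           # direct iteration X_{t+1} = A X_t + B
    # closed form (5.1): X_t = Xbar + A^t (X0 - Xbar)
    closed = Xbar + np.linalg.matrix_power(A, t) @ (X0 - Xbar)
    assert np.allclose(X, closed)
```

Both computations of the trajectory agree at every step, as (5.1) predicts.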
and λ_1, ..., λ_n are the real roots (possibly repeated) of p_A(λ). Since |λ_j| < 1 for all j, the diagonal elements of D^t tend to 0 as t goes to ∞, since |λ_j^t| = |λ_j|^t → 0. Hence

    lim_{t→∞} X_t = X̄.

Example 5.5. Study the stability of the system

    x_{t+1} = x_t - (1/2)y_t + 1,
    y_{t+1} = x_t - 1.

Solution: The matrix of the system is

    ( 1  -1/2 ; 1  0 ),

with characteristic equation λ^2 - λ + 1/2 = 0. The (complex) roots are λ_{1,2} = 1/2 ± i/2. Both have modulus ρ = sqrt(1/4 + 1/4) = 1/sqrt(2) < 1, hence the system is g.a.s. and the limit of any trajectory is the equilibrium point

    X̄ = (I_2 - A)^{-1} B = ( 0  1/2 ; -1  1 )^{-1} ( 1 ; -1 ) = ( 2  -1 ; 2  0 )( 1 ; -1 ) = ( 3 ; 2 ).

Example 5.6. Study the stability of the system

    x_{t+1} = x_t + 3y_t,
    y_{t+1} = x_t/2 + y_t/2.

Solution: The matrix of the system is ( 1  3 ; 1/2  1/2 ), with characteristic equation λ^2 - (3/2)λ - 1 = 0. The roots are λ_1 = 2 and λ_2 = -1/2. The system is not g.a.s. However, there are initial conditions X_0 such that the trajectory converges to the fixed point X̄ = (0, 0). This can be seen once we find the solution X_t = P D^t P^{-1} X_0. The eigenspaces are S(2) = <(3, 1)> and S(-1/2) = <(2, -1)>, thus

    P = ( 3  2 ; 1  -1 ),   P^{-1} = ( 1/5  2/5 ; 1/5  -3/5 ).

The solution is

    x_t = 2^t (3/5)(x_0 + 2y_0) + (-1/2)^t (2/5)(x_0 - 3y_0),
    y_t = 2^t (1/5)(x_0 + 2y_0) - (-1/2)^t (1/5)(x_0 - 3y_0).

If the initial conditions are linked by the relation x_0 + 2y_0 = 0, then the solution converges to (0, 0). For this reason, the line x + 2y = 0 is called the stable manifold. Notice that the stable manifold is in fact the eigenspace associated to the eigenvalue λ_2 = -1/2, since S(-1/2) = <(2, -1)> = {x + 2y = 0}. For any other initial condition (x_0, y_0) ∉ S(-1/2), the solution does not converge.
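Both stability conclusions can be checked by brute-force iteration. The Python sketch below (not part of the original notes; NumPy assumed, starting points chosen arbitrarily) iterates Example 5.5 from an arbitrary point and Example 5.6 from a point on the stable manifold:

```python
import numpy as np

# Example 5.5: complex eigenvalues of modulus 1/sqrt(2) < 1, hence g.a.s.
A = np.array([[1.0, -0.5], [1.0, 0.0]])
B = np.array([1.0, -1.0])
X = np.array([10.0, -7.0])                  # arbitrary starting point
for _ in range(200):
    X = A @ X + B
assert np.allclose(X, [3.0, 2.0])           # the equilibrium found in the example

# Example 5.6: eigenvalues 2 and -1/2; convergence only on the stable manifold
A2 = np.array([[1.0, 3.0], [0.5, 0.5]])
Y = np.array([2.0, -1.0])                   # on the line x + 2y = 0, i.e. S(-1/2)
for _ in range(60):
    Y = A2 @ Y
assert np.allclose(Y, [0.0, 0.0])
```

Starting Example 5.6 off the line x + 2y = 0 instead makes the component along the eigenvalue 2 explode geometrically.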
Example 5.7 (Dynamic Cournot adjustment). The purpose of this example is to investigate under what conditions a given adjustment process converges to the Nash equilibrium of the Cournot game. Consider a Cournot duopoly in which two firms produce the same product and face constant marginal costs c_1 > 0 and c_2 > 0. The market price P is a function of the total quantity of output produced, Q = q_1 + q_2, in the following way:

    P = α - βQ,   α > c_i, i = 1, 2,   β > 0.

In the Cournot duopoly model each firm chooses q_i to maximize profits, taking as given the production level of the other firm, q_j. At time t, firm i's profit is π_i = q_i P - c_i q_i. As is well known, setting ∂π_i/∂q_i = 0 we obtain the best response of firm i, which depends on the output of firm j as follows¹:

    br_1 = a_1 - q_2/2,   br_2 = a_2 - q_1/2,   where a_i = (α - c_i)/(2β), i = 1, 2.

We suppose that a_1 > a_2/2 and that a_2 > a_1/2 in order to have positive quantities in equilibrium, as will be seen below. The Nash equilibrium of the game, (q_1^N, q_2^N), is a pair of outputs of the firms such that no firm has an incentive to deviate from it unilaterally, that is, it is the best response against itself. This means that the Nash equilibrium of the static game solves

    q_1^N = br_1(q_2^N),   q_2^N = br_2(q_1^N).

In this case

    q_1^N = a_1 - q_2^N/2,   q_2^N = a_2 - q_1^N/2.

Solving, we have

    q_1^N = (4/3)(a_1 - a_2/2),   q_2^N = (4/3)(a_2 - a_1/2),

which are both positive by assumption. As a specific example, suppose for a moment that the game is symmetric, with c_1 = c_2 = c. Then a_1 = a_2 = (α - c)/(2β) and the Nash equilibrium is the output

    q_1^N = (α - c)/(3β),   q_2^N = (α - c)/(3β).

¹ Actually, the best response map is br_i = max{a_i - q_j/2, 0}, since negative quantities are not allowed.
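The formulas for q_1^N and q_2^N can be verified as a fixed point of the best-response pair. A Python sketch (not part of the original notes; the numerical values of a_1, a_2 are an arbitrary illustration satisfying a_1 > a_2/2 and a_2 > a_1/2):

```python
from fractions import Fraction

def best_responses(a1, a2, q1, q2):
    # br_1 = a_1 - q_2/2, br_2 = a_2 - q_1/2
    return a1 - q2 / 2, a2 - q1 / 2

a1, a2 = Fraction(3), Fraction(2)           # a1 > a2/2 and a2 > a1/2 hold
q1 = Fraction(4, 3) * (a1 - a2 / 2)         # q_1^N = (4/3)(a_1 - a_2/2)
q2 = Fraction(4, 3) * (a2 - a1 / 2)         # q_2^N = (4/3)(a_2 - a_1/2)

# Nash equilibrium: the best-response map reproduces (q1, q2)
assert (q1, q2) == best_responses(a1, a2, q1, q2)
```

With these values q_1^N = 8/3 and q_2^N = 2/3, both positive, as the assumption guarantees.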
Now we turn to the general asymmetric game and introduce a dynamic component in the game as follows. Suppose that each firm does not choose its Nash output instantaneously, but adjusts its output q_i gradually towards its best response br_i at each time t, as indicated below:

(5.3)    q_{1,t+1} = q_{1,t} + d_1(br_{1,t} - q_{1,t}) = q_{1,t} + d_1(a_1 - (1/2)q_{2,t} - q_{1,t}),
         q_{2,t+1} = q_{2,t} + d_2(br_{2,t} - q_{2,t}) = q_{2,t} + d_2(a_2 - (1/2)q_{1,t} - q_{2,t}),

where d_1 and d_2 are positive constants. The objective is to study whether this tâtonnement process converges to the Nash equilibrium. To simplify notation, let us rename x = q_1 and y = q_2. Then, rearranging terms in the system (5.3) above, it can be rewritten as

    x_{t+1} = (1 - d_1)x_t - (d_1/2)y_t + d_1 a_1,
    y_{t+1} = (1 - d_2)y_t - (d_2/2)x_t + d_2 a_2.

It is easy to find the equilibrium points by solving the system

    x = (1 - d_1)x - (d_1/2)y + d_1 a_1,
    y = (1 - d_2)y - (d_2/2)x + d_2 a_2.

The only solution is precisely the Nash equilibrium,

    (x^N, y^N) = ( (4/3)(a_1 - a_2/2), (4/3)(a_2 - a_1/2) ).

Under what conditions does this progressive adjustment of the produced output converge to the Nash equilibrium? According to the theory, it depends on the moduli of the eigenvalues being smaller than 1. Let us find the eigenvalues of the system. The matrix of the system is

    ( 1 - d_1   -d_1/2 ; -d_2/2   1 - d_2 ).

To simplify matters, let us suppose that the adjustment parameter is the same for both players, d_1 = d_2 = d. The eigenvalues of the matrix are

    λ_1 = 1 - d/2,   λ_2 = 1 - 3d/2,

which depend only on d. We have

    |λ_1| < 1 iff 0 < d < 4,   |λ_2| < 1 iff 0 < d < 4/3,

therefore |λ_1| < 1 and |λ_2| < 1 iff 0 < d < 4/3. Thus, 0 < d < 4/3 is a necessary and sufficient condition for convergence to the Nash equilibrium of the one-shot game from any initial condition (g.a.s. system).
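The tâtonnement process (5.3) is easy to simulate. The Python sketch below (not part of the original notes; NumPy assumed, and the parameter values are an arbitrary illustration) takes d_1 = d_2 = d = 1 < 4/3 and checks convergence to the Nash equilibrium:

```python
import numpy as np

def cournot_adjustment(a1, a2, d, steps=500, q0=(0.0, 0.0)):
    # q_{i,t+1} = q_{i,t} + d (a_i - q_{j,t}/2 - q_{i,t}), equal speeds d_1 = d_2 = d;
    # the tuple assignment makes the update simultaneous (both RHS use old values)
    q1, q2 = q0
    for _ in range(steps):
        q1, q2 = (q1 + d * (a1 - q2 / 2 - q1),
                  q2 + d * (a2 - q1 / 2 - q2))
    return q1, q2

a1, a2 = 3.0, 2.0                           # satisfy a1 > a2/2 and a2 > a1/2
nash = (4 / 3 * (a1 - a2 / 2), 4 / 3 * (a2 - a1 / 2))

q = cournot_adjustment(a1, a2, d=1.0)       # d < 4/3: eigenvalues 1/2 and -1/2
assert np.allclose(q, nash)
```

Rerunning with d > 4/3 (so λ_2 < -1) makes the iterates oscillate with growing amplitude instead of converging.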
6. The nonlinear first order equation

We investigate here the stability of the solutions of an autonomous first order difference equation

    x_{t+1} = f(x_t),   t = 0, 1, ...,

where f : I → I is nonlinear and I is an interval of the real line. Recall that a function f is said to be of class C^1 in an open interval if f' exists and is continuous in that interval. For example, the functions x^2, cos x or e^x are C^1 in the whole real line, but |x| is not differentiable at 0, so it is not C^1 in any open interval that contains 0.

Theorem 6.1. Let x̄_0 ∈ I be a fixed point of f, and suppose that f is C^1 in an open interval around x̄_0, I_δ = (x̄_0 - δ, x̄_0 + δ).

(1) If |f'(x̄_0)| < 1, then x̄_0 is locally asymptotically stable;
(2) If |f'(x̄_0)| > 1, then x̄_0 is unstable.

Proof. Since f' is continuous in I_δ and |f'(x̄_0)| < 1, there exist some open interval I_δ = (x̄_0 - δ, x̄_0 + δ) (shrinking δ if necessary) and a positive number k < 1 such that |f'(x)| ≤ k for any x ∈ I_δ.

(1) By the mean value theorem (also called the Theorem of Lagrange), there exists some c between x_0 and x̄_0 such that

    f(x_0) - f(x̄_0) = f'(c)(x_0 - x̄_0),   or   x_1 - x̄_0 = f'(c)(x_0 - x̄_0),

since x̄_0 = f(x̄_0) by definition of fixed point. Consider an initial condition x_0 ∈ I_δ. Then any c between x_0 and x̄_0 belongs to I_δ, and thus taking absolute values in the equality above we get

(6.1)    |x_1 - x̄_0| = |f'(c)| |x_0 - x̄_0| ≤ k |x_0 - x̄_0|.

Also note that |x_1 - x̄_0| ≤ kδ < δ, thus x_1 ∈ I_δ. Reasoning as above, one gets

    |x_2 - x̄_0| = |f(x_1) - x̄_0| = |f'(c)| |x_1 - x̄_0| ≤ k |x_1 - x̄_0| ≤ k^2 |x_0 - x̄_0|,

where c is now a number between x_1 and x̄_0 that belongs to I_δ (and thus |f'(c)| ≤ k). Continuing in this fashion we get after t steps

    |x_t - x̄_0| ≤ k^t |x_0 - x̄_0| → 0,   as t → ∞.

So x_t converges to the fixed point x̄_0 as t → ∞, and x̄_0 is l.a.s.

(2) Now suppose that |f'(x̄_0)| > 1. Again by continuity of f', there exist δ > 0 and K > 1 such that |f'(x)| > K for any x ∈ I_δ. By equation (6.1) one has

    |x_1 - x̄_0| = |f'(c)| |x_0 - x̄_0| > K |x_0 - x̄_0|,

and, as long as the iterates remain in I_δ, after t steps

    |x_t - x̄_0| > K^t |x_0 - x̄_0|.
Since K^t tends to ∞ as t → ∞, x_t departs further and further from x̄_0 at each step, and the fixed point x̄_0 is unstable. □
Remark 6.2. If |f'(x)| < 1 for every point x ∈ I, then the fixed point x̄_0 is globally asymptotically stable.

Example 6.3 (Population growth models). In the Malthus model of population growth it is postulated that a given population x grows at a constant rate r,

    (x_{t+1} - x_t)/x_t = r,   or   x_{t+1} = (1 + r)x_t.

This is a linear equation and the population grows unboundedly if the per capita growth rate r is positive². This is not realistic for large t. When the population is small, there are ample environmental resources to support a high birth rate, but for later times, as the population grows, there is a higher death rate as individuals compete for space and food. Thus, the growth rate should decrease as the population increases. The simplest case is to take a linearly decreasing per capita rate, that is,

    r(1 - x_t/K),

where K is the carrying capacity. This modification is known as the Verhulst law. Then the population evolves as

    x_{t+1} = x_t (1 + r - (r/K)x_t),

which is not linear. The function f is quadratic, f(x) = x(1 + r - rx/K). In Fig. 6.3 a solution with x_0 = 5, r = 0.5 and K = 20 is depicted. We observe that the solution converges to 20. In fact, there are two fixed points of the equation, 0 (extinction of the population) and x̄_0 = K (maximum carrying capacity). Considering the derivative of f at these two fixed points, we have

    f'(0) = (1 + r - (2r/K)x)|_{x=0} = 1 + r > 1,
    f'(K) = (1 + r - (2r/K)x)|_{x=K} = 1 - r.

Thus, according to Theorem 6.1, 0 is unstable, but K is l.a.s. iff |1 - r| < 1, or 0 < r < 2.

² The solution is x_t = (1 + r)^t x_0. Why?
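The Verhulst dynamics with the parameters of the figure can be reproduced in a few lines. A Python sketch (not part of the original notes):

```python
def verhulst(x0, r, K, steps):
    # iterate x_{t+1} = x_t (1 + r - (r/K) x_t)
    x = x0
    for _ in range(steps):
        x = x * (1 + r - r * x / K)
    return x

# parameters from the text: x0 = 5, r = 0.5, K = 20; since 0 < r < 2, K is l.a.s.
x = verhulst(5.0, 0.5, 20.0, 200)
assert abs(x - 20.0) < 1e-9                 # trajectory converges to K = 20
```

With r between 0 and 2 the iterates settle at K, matching the derivative test f'(K) = 1 - r; larger values of r (near 3 and beyond, in this parametrization) produce the oscillations and chaos discussed in the next subsection.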
6.1. Phase diagrams. The stability of a fixed point of the equation

    x_{t+1} = f(x_t),   t = 0, 1, ...,

can also be studied by a graphical method based on the phase diagram. This consists of drawing the graph of the function y = f(x) in the plane xy. Note that a fixed point x̄_0 corresponds to a point (x̄_0, x̄_0) where the graph of y = f(x) intersects the straight line y = x. The following figures show possible configurations around a fixed point. The phase diagram is on the left (plane xy), and a solution sequence is shown on the right (plane tx). Notice that we have drawn the solution trajectory as a continuous curve because this facilitates visualization, but in fact it is a sequence of discrete points.

In Fig. 4, f'(x̄_0) is positive, and the sequence x_0, x_1, ... converges monotonically to x̄_0, whereas in Fig. 5, f'(x̄_0) is negative and we observe a cobweb-like behavior, with the sequence x_0, x_1, ... converging to x̄_0 but alternating between values above and below the equilibrium. In Fig. 6, the graph of f near x̄_0 is too steep for convergence. After many iterations in the diagram, we observe an erratic behavior of the sequence x_0, x_1, ...: there are no cyclical patterns, and two sequences generated from close initial conditions depart from each other along time at an exponential rate (see Theorem 6.1 above). It is often said that the equation exhibits chaos. Finally, Fig. 7 is the phase diagram of an equation admitting a cycle of period 3.

Figure 4. x̄_0 stable, f'(x̄_0) ∈ (0, 1)
Figure 5. x̄_0 stable, f'(x̄_0) ∈ (-1, 0)

Figure 6. x̄_0 unstable, |f'(x̄_0)| > 1

Figure 7. A cycle of period 3
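The two convergent regimes of Figs. 4 and 5 can be reproduced with simple affine maps. A Python sketch (not part of the original notes; the maps below are arbitrary illustrations, both with fixed point x̄_0 = 2):

```python
def orbit(f, x0, steps):
    # return the sequence x_0, f(x_0), f(f(x_0)), ...
    xs = [x0]
    for _ in range(steps):
        xs.append(f(xs[-1]))
    return xs

# f'(xbar) in (0, 1): monotone convergence, as in Fig. 4
mono = orbit(lambda x: 0.5 * x + 1, 0.0, 30)
assert all(a < b <= 2 for a, b in zip(mono, mono[1:]))

# f'(xbar) in (-1, 0): cobweb, alternating around xbar = 2, as in Fig. 5
cob = orbit(lambda x: -0.5 * x + 3, 0.0, 30)
assert all((a - 2) * (b - 2) <= 0 for a, b in zip(cob, cob[1:]))
assert abs(cob[-1] - 2) < 1e-6
```

The first orbit increases towards 2 from below; the second jumps from one side of 2 to the other while the distance to the fixed point halves at each step.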
Topic 3: Differential Equations

1. Introduction. Definitions and classifications of ODEs

Most often decision agents take optimal actions sequentially, and economic variables evolve along time. Thus it is important to understand the tools of analysis and modeling of dynamical systems. We are looking at functions x : R → R or vector functions x : R → R^n described by equations of the form

    (d/dt)x(t) = f(t, x(t)),

possibly with an initial condition x(t_0) = x_0.

Objectives:
(1) To find x(t) in closed form or, if this is not possible,
(2) to study qualitative properties of x(t) (e.g. stability).
(3) To apply the above to economic modeling.

Notation: t is the independent variable and x the dependent or unknown variable; most often the variable t is omitted. For the first derivative we write

    (d/dt)x(t) ≡ dx/dt,   x'(t),   x',   ẋ(t),   ẋ,   x^(1)(t),   x^(1).

For higher order derivatives, (d^k/dt^k)x(t) ≡ x^(k)(t); in the special case k = 2, we write x'', ẍ, x^(2). Other variables are possible, e.g. (d/dx)y(x), y'(x).

Definition 1.1. A one dimensional ordinary differential equation (ODE) of order k is a relation of the form

(1.1)    x^(k)(t) = f(t, x(t), x^(1)(t), ..., x^(k-1)(t)).

Note that k is the highest order of derivative appearing in the equation.

Definition 1.2. A first order system of ordinary differential equations is a relation of the form

(1.2)    ẋ(t) = f(t, x(t)),

where x = (x_1, ..., x_n), f = (f_1, ..., f_n), x_i : R → R, f_i : R^{n+1} → R, i = 1, ..., n.

It is always possible to transform a kth order ODE into a first order system. Let us see how. Suppose we have the kth order ODE

    x^(k)(t) = f(t, x(t), x^(1)(t), ..., x^(k-1)(t)).
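The standard reduction sets y_1 = x, y_2 = x', ..., y_k = x^(k-1), so that y' = (y_2, ..., y_k, f(t, y_1, ..., y_k)). The Python sketch below (not part of the original notes; the forward Euler integrator and the test equation x'' = -x are illustrative choices) implements this substitution:

```python
import math

def to_first_order_system(f):
    # given x^(k) = f(t, x, x', ..., x^(k-1)), set y = (x, x', ..., x^(k-1));
    # then y' = (y_2, ..., y_k, f(t, y_1, ..., y_k))
    def F(t, y):
        return y[1:] + [f(t, *y)]
    return F

def euler(F, t0, y0, h, steps):
    # forward Euler: y_{n+1} = y_n + h F(t_n, y_n), a crude but simple integrator
    t, y = t0, list(y0)
    for _ in range(steps):
        dy = F(t, y)
        y = [yi + h * di for yi, di in zip(y, dy)]
        t += h
    return y

# illustration: x'' = -x, i.e. f(t, x, x') = -x, with x(0) = 1, x'(0) = 0,
# whose exact solution is x(t) = cos t
F = to_first_order_system(lambda t, x, xp: -x)
y = euler(F, 0.0, [1.0, 0.0], h=0.001, steps=1000)   # integrate up to t = 1
assert abs(y[0] - math.cos(1.0)) < 0.01
```

The same `to_first_order_system` wrapper works for any order k, since the dimension of the system is taken from the length of the initial vector y.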
More informationKnowledge Discovery and Data Mining 1 (VO) ( )
Knowledge Discovery and Data Mining 1 (VO) (707.003) Review of Linear Algebra Denis Helic KTI, TU Graz Oct 9, 2014 Denis Helic (KTI, TU Graz) KDDM1 Oct 9, 2014 1 / 74 Big picture: KDDM Probability Theory
More informationIMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET
IMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET This is a (not quite comprehensive) list of definitions and theorems given in Math 1553. Pay particular attention to the ones in red. Study Tip For each
More information2 Eigenvectors and Eigenvalues in abstract spaces.
MA322 Sathaye Notes on Eigenvalues Spring 27 Introduction In these notes, we start with the definition of eigenvectors in abstract vector spaces and follow with the more common definition of eigenvectors
More informationLINEAR ALGEBRA SUMMARY SHEET.
LINEAR ALGEBRA SUMMARY SHEET RADON ROSBOROUGH https://intuitiveexplanationscom/linear-algebra-summary-sheet/ This document is a concise collection of many of the important theorems of linear algebra, organized
More informationMath Linear Algebra Final Exam Review Sheet
Math 15-1 Linear Algebra Final Exam Review Sheet Vector Operations Vector addition is a component-wise operation. Two vectors v and w may be added together as long as they contain the same number n of
More informationChapter 2: Matrix Algebra
Chapter 2: Matrix Algebra (Last Updated: October 12, 2016) These notes are derived primarily from Linear Algebra and its applications by David Lay (4ed). Write A = 1. Matrix operations [a 1 a n. Then entry
More informationElementary linear algebra
Chapter 1 Elementary linear algebra 1.1 Vector spaces Vector spaces owe their importance to the fact that so many models arising in the solutions of specific problems turn out to be vector spaces. The
More informationCalculating determinants for larger matrices
Day 26 Calculating determinants for larger matrices We now proceed to define det A for n n matrices A As before, we are looking for a function of A that satisfies the product formula det(ab) = det A det
More informationLecture Notes in Mathematics. Arkansas Tech University Department of Mathematics. The Basics of Linear Algebra
Lecture Notes in Mathematics Arkansas Tech University Department of Mathematics The Basics of Linear Algebra Marcel B. Finan c All Rights Reserved Last Updated November 30, 2015 2 Preface Linear algebra
More informationLinear Algebra. The analysis of many models in the social sciences reduces to the study of systems of equations.
POLI 7 - Mathematical and Statistical Foundations Prof S Saiegh Fall Lecture Notes - Class 4 October 4, Linear Algebra The analysis of many models in the social sciences reduces to the study of systems
More informationMATHEMATICS. IMPORTANT FORMULAE AND CONCEPTS for. Final Revision CLASS XII CHAPTER WISE CONCEPTS, FORMULAS FOR QUICK REVISION.
MATHEMATICS IMPORTANT FORMULAE AND CONCEPTS for Final Revision CLASS XII 2016 17 CHAPTER WISE CONCEPTS, FORMULAS FOR QUICK REVISION Prepared by M. S. KUMARSWAMY, TGT(MATHS) M. Sc. Gold Medallist (Elect.),
More informationCHAPTER 3. Matrix Eigenvalue Problems
A SERIES OF CLASS NOTES FOR 2005-2006 TO INTRODUCE LINEAR AND NONLINEAR PROBLEMS TO ENGINEERS, SCIENTISTS, AND APPLIED MATHEMATICIANS DE CLASS NOTES 3 A COLLECTION OF HANDOUTS ON SYSTEMS OF ORDINARY DIFFERENTIAL
More informationMath 3108: Linear Algebra
Math 3108: Linear Algebra Instructor: Jason Murphy Department of Mathematics and Statistics Missouri University of Science and Technology 1 / 323 Contents. Chapter 1. Slides 3 70 Chapter 2. Slides 71 118
More informationMath Matrix Algebra
Math 44 - Matrix Algebra Review notes - 4 (Alberto Bressan, Spring 27) Review of complex numbers In this chapter we shall need to work with complex numbers z C These can be written in the form z = a+ib,
More informationChapter 1: Systems of linear equations and matrices. Section 1.1: Introduction to systems of linear equations
Chapter 1: Systems of linear equations and matrices Section 1.1: Introduction to systems of linear equations Definition: A linear equation in n variables can be expressed in the form a 1 x 1 + a 2 x 2
More informationDM554 Linear and Integer Programming. Lecture 9. Diagonalization. Marco Chiarandini
DM554 Linear and Integer Programming Lecture 9 Marco Chiarandini Department of Mathematics & Computer Science University of Southern Denmark Outline 1. More on 2. 3. 2 Resume Linear transformations and
More informationMath Bootcamp An p-dimensional vector is p numbers put together. Written as. x 1 x =. x p
Math Bootcamp 2012 1 Review of matrix algebra 1.1 Vectors and rules of operations An p-dimensional vector is p numbers put together. Written as x 1 x =. x p. When p = 1, this represents a point in the
More informationHOMEWORK PROBLEMS FROM STRANG S LINEAR ALGEBRA AND ITS APPLICATIONS (4TH EDITION)
HOMEWORK PROBLEMS FROM STRANG S LINEAR ALGEBRA AND ITS APPLICATIONS (4TH EDITION) PROFESSOR STEVEN MILLER: BROWN UNIVERSITY: SPRING 2007 1. CHAPTER 1: MATRICES AND GAUSSIAN ELIMINATION Page 9, # 3: Describe
More information1. Linear systems of equations. Chapters 7-8: Linear Algebra. Solution(s) of a linear system of equations (continued)
1 A linear system of equations of the form Sections 75, 78 & 81 a 11 x 1 + a 12 x 2 + + a 1n x n = b 1 a 21 x 1 + a 22 x 2 + + a 2n x n = b 2 a m1 x 1 + a m2 x 2 + + a mn x n = b m can be written in matrix
More informationMIDTERM REVIEW AND SAMPLE EXAM. Contents
MIDTERM REVIEW AND SAMPLE EXAM Abstract These notes outline the material for the upcoming exam Note that the review is divided into the two main topics we have covered thus far, namely, ordinary differential
More information1 Determinants. 1.1 Determinant
1 Determinants [SB], Chapter 9, p.188-196. [SB], Chapter 26, p.719-739. Bellow w ll study the central question: which additional conditions must satisfy a quadratic matrix A to be invertible, that is to
More informationChapter 4 - MATRIX ALGEBRA. ... a 2j... a 2n. a i1 a i2... a ij... a in
Chapter 4 - MATRIX ALGEBRA 4.1. Matrix Operations A a 11 a 12... a 1j... a 1n a 21. a 22.... a 2j... a 2n. a i1 a i2... a ij... a in... a m1 a m2... a mj... a mn The entry in the ith row and the jth column
More informationLinear Algebra Highlights
Linear Algebra Highlights Chapter 1 A linear equation in n variables is of the form a 1 x 1 + a 2 x 2 + + a n x n. We can have m equations in n variables, a system of linear equations, which we want to
More informationLinear Algebra Summary. Based on Linear Algebra and its applications by David C. Lay
Linear Algebra Summary Based on Linear Algebra and its applications by David C. Lay Preface The goal of this summary is to offer a complete overview of all theorems and definitions introduced in the chapters
More informationSolving a system by back-substitution, checking consistency of a system (no rows of the form
MATH 520 LEARNING OBJECTIVES SPRING 2017 BROWN UNIVERSITY SAMUEL S. WATSON Week 1 (23 Jan through 27 Jan) Definition of a system of linear equations, definition of a solution of a linear system, elementary
More informationMath 1553, Introduction to Linear Algebra
Learning goals articulate what students are expected to be able to do in a course that can be measured. This course has course-level learning goals that pertain to the entire course, and section-level
More informationEIGENVALUES AND EIGENVECTORS 3
EIGENVALUES AND EIGENVECTORS 3 1. Motivation 1.1. Diagonal matrices. Perhaps the simplest type of linear transformations are those whose matrix is diagonal (in some basis). Consider for example the matrices
More informationELEMENTARY LINEAR ALGEBRA WITH APPLICATIONS. 1. Linear Equations and Matrices
ELEMENTARY LINEAR ALGEBRA WITH APPLICATIONS KOLMAN & HILL NOTES BY OTTO MUTZBAUER 11 Systems of Linear Equations 1 Linear Equations and Matrices Numbers in our context are either real numbers or complex
More informationMATH 240 Spring, Chapter 1: Linear Equations and Matrices
MATH 240 Spring, 2006 Chapter Summaries for Kolman / Hill, Elementary Linear Algebra, 8th Ed. Sections 1.1 1.6, 2.1 2.2, 3.2 3.8, 4.3 4.5, 5.1 5.3, 5.5, 6.1 6.5, 7.1 7.2, 7.4 DEFINITIONS Chapter 1: Linear
More informationLec 2: Mathematical Economics
Lec 2: Mathematical Economics to Spectral Theory Sugata Bag Delhi School of Economics 24th August 2012 [SB] (Delhi School of Economics) Introductory Math Econ 24th August 2012 1 / 17 Definition: Eigen
More information1.1 Limits and Continuity. Precise definition of a limit and limit laws. Squeeze Theorem. Intermediate Value Theorem. Extreme Value Theorem.
STATE EXAM MATHEMATICS Variant A ANSWERS AND SOLUTIONS 1 1.1 Limits and Continuity. Precise definition of a limit and limit laws. Squeeze Theorem. Intermediate Value Theorem. Extreme Value Theorem. Definition
More informationAPPENDIX: MATHEMATICAL INDUCTION AND OTHER FORMS OF PROOF
ELEMENTARY LINEAR ALGEBRA WORKBOOK/FOR USE WITH RON LARSON S TEXTBOOK ELEMENTARY LINEAR ALGEBRA CREATED BY SHANNON MARTIN MYERS APPENDIX: MATHEMATICAL INDUCTION AND OTHER FORMS OF PROOF When you are done
More informationCS 246 Review of Linear Algebra 01/17/19
1 Linear algebra In this section we will discuss vectors and matrices. We denote the (i, j)th entry of a matrix A as A ij, and the ith entry of a vector as v i. 1.1 Vectors and vector operations A vector
More informationEXERCISE SET 5.1. = (kx + kx + k, ky + ky + k ) = (kx + kx + 1, ky + ky + 1) = ((k + )x + 1, (k + )y + 1)
EXERCISE SET 5. 6. The pair (, 2) is in the set but the pair ( )(, 2) = (, 2) is not because the first component is negative; hence Axiom 6 fails. Axiom 5 also fails. 8. Axioms, 2, 3, 6, 9, and are easily
More informationMath 302 Outcome Statements Winter 2013
Math 302 Outcome Statements Winter 2013 1 Rectangular Space Coordinates; Vectors in the Three-Dimensional Space (a) Cartesian coordinates of a point (b) sphere (c) symmetry about a point, a line, and a
More information