Lecture 1: Introduction to computation


UNIVERSITY OF WESTERN ONTARIO
LONDON ONTARIO

Paul Klein
Office: SSC 4044
Extension: 85484
Email: pklein2@uwo.ca
URL: http://paulklein.ca/newsite/teaching/619.php

Economics 9619
Computational methods in macroeconomics
Winter 2017

Lecture 1: Introduction to computation

1 Numerical software

For the most part, I'll assume that you'll be working with Matlab. However, I am quite happy for you to use Fortran, Scilab, C++, Python, R or whatever language you prefer. I will not give advice on what software to use, except to inform you of the distinction between compiled and interpreted languages, nor on how to acquire any particular kind of software. I am, however, happy to give programming advice for Matlab and Fortran. The former will be my canonical example of an interpreted language and Fortran of a compiled one.

2 Finite precision arithmetic

Computational mathematics and ordinary mathematics are not the same. For example, in ordinary mathematics there is no smallest number $\varepsilon$ such that

$$1 + \varepsilon > 1,$$

but in finite precision arithmetic, there is. How would you go about determining that number, approximately?
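One way to answer that question experimentally is to keep halving a candidate until adding it to 1 no longer makes a difference. A minimal sketch in Python, one of the languages suggested above; the same few lines translate directly into Matlab:

```python
# Estimate machine epsilon: halve a candidate until 1 + eps/2
# is no longer distinguishable from 1 in floating-point arithmetic.
eps = 1.0
while 1.0 + eps / 2 > 1.0:
    eps /= 2
print(eps)   # 2.220446049250313e-16, i.e. 2**-52
```

The loop stops as soon as `1 + eps/2` rounds back down to 1, leaving `eps` equal to the answer derived below.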

The magnitude of $\varepsilon$ ("machine epsilon") is determined by the number of bytes (1 byte = 8 bits) allocated to representing a floating-point number. In double precision, which has emerged as a standard, a floating-point number takes up 64 bits (8 bytes), 52 of which are reserved for what is called the mantissa (see below for a definition).

A floating-point number is stored on a computer in the following way, supposing for the sake of argument that computers operate in base 10. (They don't of course; they operate in base 2.)

$$\pm d_1.d_2 d_3 d_4 d_5 \times 10^{\pm N}$$

Here the sequence $d_1.d_2 d_3 d_4 d_5$ is called the mantissa (or significand) and $N$ is called the exponent. Notice that the leading digit is not zero, meaning that in binary it does not need to be stored at all; it is implicit. For instance, the number 1/2 is represented as $1.0 \times 2^{-1}$ in binary rather than $0.1 \times 2^{0}$. It is simply a convention that the leading digit is always one. It is often called an implicit bit, because it does not need to be (and isn't) stored in memory.

In double precision, 11 bits are allocated to the exponent, so $2^{11} = 2048$ distinct exponents can be represented. To be precise, the exponent can take any integer value between $-1022$ and $1023$. (By the way, the number $2^{1024}$ is not treated as a proper number in double precision; instead it is the improper number Inf. Special rules apply to Inf. For instance, Inf+Inf=Inf but Inf-Inf=NaN.)

Meanwhile, the smallest number $\epsilon$ such that $\epsilon > 0$ is a bit smaller than $2^{-1022}$; in fact it is $2^{-1074}$. The way this works is as follows. When the exponent is minimal, the number is known as subnormal, meaning that the implicit bit is set to zero to enable storage of very small numbers. Meanwhile, the exponent is interpreted as being one larger than otherwise, i.e. $-1022$ rather than $-1023$; that makes sense, doesn't it? So the smallest possible strictly positive number is represented by

$$0.\underbrace{00\ldots0}_{51}1_2 \times 2^{-1022} = 2^{-52} \times 2^{-1022} = 2^{-1074}.$$
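These limits are easy to verify from any language that exposes IEEE double precision. A quick check in Python (the constants are standard IEEE 754 values, not anything specific to these notes):

```python
import sys
import math

print(sys.float_info.min)     # 2**-1022, the smallest positive *normal* double
print(5e-324 == 2**-1074)     # True: 5e-324 is the smallest positive subnormal
print(2**-1074 / 2)           # 0.0: nothing strictly between 0 and 2**-1074
print(math.inf + math.inf)    # inf
print(math.inf - math.inf)    # nan: special rules apply to Inf
```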

In any case, the smallest strictly positive number in double precision is known as machine zero. Is it the same number as machine epsilon? No, it is not! The magnitude of machine epsilon is determined by the number of bits assigned to the mantissa, not the number of bits assigned to the exponent.

So let's continue with our example and suppose that five base 10 bits are assigned to the mantissa. Then what is machine epsilon? It should be clear that it's $10^{-4}$ (where 4 is, not coincidentally, the number of base 10 bits allocated to the mantissa minus one). Consider $x = 10^{-5}$ and let's try to add it to 1. Notice that $10^{-5}$ and one differ by five orders of magnitude, one more than we can deal with. So things don't look good. But let's try. First let's represent the two numbers on the computer.

$$1 = 1.0000 \times 10^{0}$$
$$10^{-5} = 1.0000 \times 10^{-5}$$

This representation is fine as far as it goes, but to add the two numbers, they need first to be put on a common exponent.

$$1 = 1.0000 \times 10^{0}$$
$$10^{-5} = 0.0000 \times 10^{0}$$

Oops! By putting $10^{-5}$ on a common exponent with 1, it vanished!

In double precision, the mantissa is allocated 52 bits. So you'd expect machine epsilon to be $2^{-51}$. But here's a subtlety: in binary, as we have seen, the leading digit of the mantissa is always one (unless we are dealing with a subnormal number). So there are in fact 53 binary bits in the mantissa if we include the implicit bit. As a result, machine epsilon is equal to $2^{-52}$.

Here's why machine epsilon is so important. For any reasonable $x$, the smallest number $y$ such that $x(1 + y) > x$ is precisely machine epsilon $\varepsilon$. Can you prove it? This has important implications for the sorts of things you can meaningfully compute, or more to the point, what things cannot be meaningfully computed. In particular, consider $x + y - y$. In ordinary mathematics, this is equal to $x$. In finite precision arithmetic, if $x$ and $y$ differ in
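Both claims are easy to check numerically. A small Python illustration (the particular magnitudes are my choices, picked to straddle the $2^{53}$ threshold):

```python
# x + y - y need not equal x when x and y differ too much in magnitude:
print(1.0 + 1e17 - 1e17)    # 0.0: the 1.0 is absorbed, since 1e17 > 2**53
print(1.0 + 1e10 - 1e10)    # 1.0: still within the 53-bit precision window

# And machine epsilon is the smallest y with x*(1 + y) > x:
eps = 2**-52
x = 3.7
print(x * (1 + eps) > x)        # True
print(x * (1 + eps / 2) > x)    # False: 1 + eps/2 already rounds back to 1
```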

magnitude by a sufficiently large factor, then it is garbage. For example, in double precision arithmetic, $1 + 10^{17} - 10^{17} = 0$. For more on this topic, see K. Kopecky's Computing Basics.

3 Solving the growth model by discretization

Suppose a social planner maximizes

$$\sum_{t=0}^{\infty} \beta^t \ln c_t$$

subject to

$$c_t + k_{t+1} = (1-\delta)k_t + A k_t^{\theta},$$

$k_0 > 0$ given and $k_{t+1} \geq 0$. If $\delta = 1$ then you should know the solution to this problem. It can be represented via

$$k_{t+1} = \beta \theta A k_t^{\theta}.$$

But what if $\delta < 1$? Then there are many ways to proceed. We will pursue one here. It is based on Bellman's principle of optimality, which says the following (ignoring some crucial mathematical subtleties). If the function $v : \mathbb{R}_+ \to \mathbb{R}$ satisfies

$$v(k) = \sup_{k' \geq 0} \left\{ \ln\left(A k^{\theta} + (1-\delta)k - k'\right) + \beta v(k') \right\}$$

for all $k > 0$ and the sequence $\{k_t\}_{t=1}^{\infty}$ satisfies

$$v(k_t) = \ln\left(A k_t^{\theta} + (1-\delta)k_t - k_{t+1}\right) + \beta v(k_{t+1})$$

for all $t = 0, 1, \ldots$ then $\{k_t\}_{t=1}^{\infty}$ is optimal.

What we will do is to force $k_t$ to belong to the same finite set for all $t$. This turns an infinite-dimensional problem into a finite-dimensional one. We then solve the finite-dimensional problem. We start by computing the steady state. Apparently

$$k_{ss} = \left( \frac{A\theta}{\beta^{-1} - 1 + \delta} \right)^{\frac{1}{1-\theta}}.$$
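As a sanity check, the $\delta = 1$ closed form and the steady-state formula agree: iterating $k_{t+1} = \beta\theta A k_t^{\theta}$ converges to $k_{ss}$. A few lines of Python with illustrative parameter values ($A = 1$ is my choice; nothing in the notes pins it down):

```python
beta, theta, A, delta = 0.96, 0.36, 1.0, 1.0   # illustrative values; delta = 1

# Iterate the closed-form policy k' = beta*theta*A*k**theta from an arbitrary k0
k = 0.1
for _ in range(200):
    k = beta * theta * A * k**theta

# Steady state from the general formula; with delta = 1 it reduces to
# (beta*theta*A)**(1/(1-theta)), the fixed point of the policy above
kss = (A * theta / (1/beta - 1 + delta)) ** (1 / (1 - theta))
print(abs(k - kss) < 1e-12)   # True
```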

You may want to set $A = (\beta^{-1} - 1 + \delta)/\theta$, in which case $k_{ss} = 1$. Alternatively, you may want steady state output to equal one, in which case

$$A = \left( \frac{1 - \beta(1-\delta)}{\beta\theta} \right)^{\theta}$$

and

$$k_{ss} = \frac{\beta\theta}{1 - \beta(1-\delta)}.$$

The next step is to create a grid of values of $k$. Let there be $n$ points in this grid and denote them by $k^1, k^2, \ldots, k^n$. (These superscripts are not powers.) One possibility is to let the points be equally spaced. Perhaps a better idea is to let them be equally spaced in logs. For example, suppose $k^1$ is half the steady state value and $k^n$ is twice the steady state value. Then

$$k^i = \frac{1}{2} k_{ss} \exp\left\{ \frac{2(i-1)}{n-1} \ln 2 \right\}.$$

3.1 Scalar by scalar

If $k_t$ is confined to the grid, then we have a new, constrained problem. The Bellman equation associated with that new problem is

$$v(k^i) = \max_{1 \leq j \leq n} \left\{ \ln\left(A(k^i)^{\theta} + (1-\delta)k^i - k^j\right) + \beta v(k^j) \right\}$$

for $i = 1, 2, \ldots, n$.

To find the value function (and hence the decision rule), we proceed as follows. Start with a guess of the value function consisting of its values at the grid points. Denote them by $v_0(k^i)$ for $i = 1, 2, \ldots, n$. Now you may want to be more or less sophisticated about your choice of initial guess. Any initial guess is guaranteed to work, but the question is how quickly you will find an approximate solution. One possible, but not very intelligent, guess is zero: $v_0(k^i) = 0$ for all $i$. More on that a bit later. For now, let's talk about how to update the guess to something better. Inspired by Bellman's equation, and denoting the updated value function by $v_1$, let

$$v_1(k^i) = \max_{1 \leq j \leq n} \left\{ \ln\left(A(k^i)^{\theta} + (1-\delta)k^i - k^j\right) + \beta v_0(k^j) \right\} \qquad (1)$$

for $i = 1, 2, \ldots, n$.
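The grid formula is simple to code and check. In Python (with $k_{ss}$ normalized to one and $n = 1000$ as illustrative choices):

```python
import math

n, kss = 1000, 1.0
# k_i = (1/2) k_ss exp{ 2(i-1)/(n-1) ln 2 },  i = 1, ..., n  (here i-1 = 0, ..., n-1)
grid = [0.5 * kss * math.exp(2 * i / (n - 1) * math.log(2)) for i in range(n)]

print(grid[0], grid[-1])   # k_ss/2 and 2 k_ss (up to rounding)
print(grid[1] / grid[0])   # the ratio of consecutive points is constant:
                           # the grid is equally spaced in logs
```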

Suppose now that our initial guess really was zero, or for that matter any constant. Then the optimal choice in the first round of updating will be to save as little as possible, i.e. to choose $k' = k^1$ or $j = 1$ every time. That is quite far from the true solution.

How can we come up with a more intelligent initial guess? Well, coming up with an initial guess of the value function may be hard, but guessing the decision rule is easier. How about $k' = k$, or $j = i$? At the steady state that's precisely accurate, and away from the steady state it's surely not too bad, unless it involves negative consumption.

Let's pause for a minute here to contemplate what safeguards should be taken to avoid tempting the computer to stray into negative consumption territory. You should define the utility function in terms of $k$ and $k'$ in such a way that it delivers a very large negative real number if consumption is negative. A naive approach won't do this. Alternatively, you can choose your grid so that it never happens. This may or may not be possible depending on the model you're dealing with.

Anyway, let's say that maintaining the current level of capital is a reasonable initial guess. What does that imply for the associated (not maximal) value function? Well, with this stay-put policy we have

$$v_0(k) = \ln\left(A k^{\theta} + (1-\delta)k - k\right) + \beta v_0(k).$$

Solving for $v_0(k)$, we get

$$v_0(k) = \frac{\ln\left(A k^{\theta} - \delta k\right)}{1 - \beta}.$$

Incidentally, if $\beta$ is a number close to 1, then $1 - \beta$ is an awfully small number to be dividing by; a dangerous thing numerically. You may therefore want to define your period utility function via $(1-\beta)\ln c$ instead of $\ln c$. But let's not do that here.

Anyhow, we now have a respectable initial guess and we can find a good approximation to the true value function of the constrained problem by iterating on Equation (1). An obvious approach to coding these iterations is to write explicit loops.
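Such an explicit-loop implementation might look like the sketch below, in pure Python, scalar by scalar, with the negative-consumption guard and the stay-put initial guess described above. The parameter values and the small grid are my illustrative choices:

```python
import math

beta, theta, delta = 0.96, 0.36, 0.08
A = (1/beta - 1 + delta) / theta      # normalizes k_ss to 1
n = 50
grid = [0.5 * math.exp(2 * i / (n - 1) * math.log(2)) for i in range(n)]

def u(k, kp):
    # utility of moving from capital k to k'; very large negative if c <= 0
    c = A * k**theta + (1 - delta) * k - kp
    return math.log(c) if c > 0 else -1e10

# Stay-put initial guess: v0(k) = ln(A k^theta - delta k) / (1 - beta)
v = [math.log(A * k**theta - delta * k) / (1 - beta) for k in grid]

# Iterate on equation (1) until the sup-norm change is small
for _ in range(100000):
    v_new = [max(u(ki, kj) + beta * vj for kj, vj in zip(grid, v)) for ki in grid]
    diff = max(abs(a - b) for a, b in zip(v_new, v))
    v = v_new
    if diff < 1e-7:
        break
```

Note the two nested loops (over $i$ and over $j$) hidden in the list comprehension; they are exactly the loops the next section tries to get rid of.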
However, in an interpreted language such as Matlab (as opposed to a compiled language like C++ or Fortran), loops are slow. The reason is that the code is reinterpreted at every iteration of the loop. To avoid looping, we vectorize the code.

3.2 Vectorizing

We begin by defining

$$\mathbf{k} = (k^1\;\, k^2\;\, k^3\;\, \ldots\;\, k^n)^T.$$

Similarly, define the vector $\mathbf{v}$ via

$$\mathbf{v} = \left(v(k^1)\;\, v(k^2)\;\, v(k^3)\;\, \ldots\;\, v(k^n)\right)^T.$$

Second, define the matrix $K$ via

$$K = \underbrace{[\mathbf{k}\;\, \mathbf{k}\;\, \ldots\;\, \mathbf{k}]}_{n \text{ times}}$$

and similarly the matrix $V$ via

$$V = \underbrace{[\mathbf{v}\;\, \mathbf{v}\;\, \ldots\;\, \mathbf{v}]}_{n \text{ times}}.$$

The remarkable thing about these definitions is that Bellman's equation now becomes

$$\mathbf{v} = \max\left\{ \ln\left( A K^{\theta} + (1-\delta)K - K^T \right) + \beta V^T \right\}$$

provided that we interpret the max operator, when applied to a matrix, as operating on the rows, picking out the maximal element of each row to create a column vector of maxima. (Also, scalar multiplication and raising to powers is interpreted element-wise, as is the application of the natural logarithm.)

We are now ready to describe an algorithm.

Step 1. Create, once and for all, the matrix

$$U = \ln\left( A K^{\theta} + (1-\delta)K - K^T \right),$$

remembering to put a large negative value wherever the argument of the natural logarithm happens to be negative (if it ever is). Notice that Bellman's equation can now be written

$$\mathbf{v} = \max\{ U + \beta V^T \}.$$

Step 2. Create an initial guess of the value function $\mathbf{v}_0$ and hence an initial guess $V_0$.

Step 3. Iterate on Bellman's equation via the following recipe:

$$\mathbf{v}_1 = \max\{ U + \beta V_0^T \}.$$

Stop when the norm of the vector $\mathbf{v}_1 - \mathbf{v}_0$ falls below some reasonable threshold. At this stage, before we get into the thick of finite-precision arithmetic, let me suggest $10^{-7}$ or, in computer jargon, 1E-7. Or maybe you should stop iterating when decisions no longer change. What do you think makes more sense?
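Here is a sketch of Steps 1 to 3 in Python with NumPy, which plays the role that Matlab's matrix operations play in the notes (row-wise max via `axis=1`; parameter values are again my illustrative choices):

```python
import numpy as np

beta, theta, delta = 0.96, 0.36, 0.08
A = (1/beta - 1 + delta) / theta                        # normalizes k_ss to 1
n = 500
grid = 0.5 * np.exp(np.linspace(0.0, 2*np.log(2), n))   # log-spaced on [k_ss/2, 2 k_ss]

# Step 1: build U once and for all; rows index k, columns index k'
K = np.tile(grid[:, None], (1, n))                      # K[i, j] = k_i
C = A * K**theta + (1 - delta) * K - K.T                # consumption at each (k, k')
U = np.full((n, n), -1e10)                              # large negative where c <= 0
U[C > 0] = np.log(C[C > 0])

# Step 2: initial guess (the stay-put value function)
v = np.log(A * grid**theta - delta * grid) / (1 - beta)

# Step 3: iterate v = max{U + beta*V'} row-wise until the change falls below 1E-7
for _ in range(100000):
    v_new = np.max(U + beta * v[None, :], axis=1)
    diff = np.max(np.abs(v_new - v))
    v = v_new
    if diff < 1e-7:
        break
d = np.argmax(U + beta * v[None, :], axis=1)            # decision rule, as grid indices
```

Note that broadcasting `v[None, :]` across the rows of `U` is the NumPy counterpart of adding $\beta V^T$.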

3.3 Howard's improvement

In the algorithm described above, maximization is carried out at each iteration. This is not efficient. Decision rules converge faster than value functions, and maximization is time-consuming. Howard's improvement (see Howard, 1960) consists in iterating on the value function while keeping the decision rule fixed. How do we represent a decision rule? By a vector of indices, each index telling us what column of the right-hand-side matrix of the Bellman equation to pick out. Call this vector $d$. Denote by $A(d)$ the column vector that results from applying this selection process to a matrix $A$. Howard's improvement, then, consists in iterating on

$$\mathbf{v}_1 = (U + \beta V_0^T)(d).$$

As a rule of thumb, you may want to apply Howard's improvement 10 or 20 times before applying another round of optimization.

3.4 Matlab tips and tricks

To replace complex numbers in a matrix with something unattractively small, write

U(imag(U)~=0) = -1000;

To create an $n \times n$ matrix A, each column of which is equal to the column vector x, write

A = repmat(x,1,n);

If you apply Matlab's max operator to an $m \times n$ matrix A, the result is a $1 \times n$ row vector consisting of the maxima of each column. That is the exact opposite of what we want. To get a column vector consisting of all the row-wise maxima, write max(A')'.

If you want to extract the $d_k$:th element from each row $k$ of an $n \times n$ matrix A for all $k = 1, 2, \ldots, n$ and put them all into a column vector a, here is how you do it.

idcs = sub2ind([n n],1:n,d');
a = A(idcs)';
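In a NumPy transcription of the algorithm, Howard's improvement amounts to a few extra lines: one argmax, followed by (say) 20 cheap evaluations of $\mathbf{v} = (U + \beta V^T)(d)$ before the next maximization. A sketch, with the same illustrative parameter values as before:

```python
import numpy as np

beta, theta, delta = 0.96, 0.36, 0.08
A = (1/beta - 1 + delta) / theta                        # normalizes k_ss to 1
n = 500
grid = 0.5 * np.exp(np.linspace(0.0, 2*np.log(2), n))
C = A * grid[:, None]**theta + (1 - delta) * grid[:, None] - grid[None, :]
U = np.full((n, n), -1e10)
U[C > 0] = np.log(C[C > 0])

v = np.log(A * grid**theta - delta * grid) / (1 - beta)  # stay-put guess
rows = np.arange(n)
for _ in range(10000):
    d = np.argmax(U + beta * v[None, :], axis=1)   # one (expensive) maximization,
    for _ in range(20):                            # then 20 (cheap) Howard updates:
        v = U[rows, d] + beta * v[d]               #   v1 = (U + beta*V0')(d)
    v_new = np.max(U + beta * v[None, :], axis=1)
    diff = np.max(np.abs(v_new - v))
    v = v_new
    if diff < 1e-7:
        break
```

The inner loop involves no maximization at all, only indexed additions, which is where the speed-up comes from.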

How do you compute the vector d? Wouldn't it be nice if we could code it like this:

[v,d] = max(U+beta*V');

The problem with this is that it mixes up rows and columns, and the resulting vector d is a row vector, not a column vector. I leave to the reader the problem of sorting this out. You may of course conclude at this stage that it is better to work with row vectors in the first place, since that is what Matlab seems to prefer. But for pedagogical purposes (writing these lecture notes) it is better to work with column vectors, because it forces you, the reader, to be explicitly aware of what's going on when you do the coding.

4 Solving for a deterministic transition

Consider the one-sector growth model with exogenous labour supply. A social planner maximizes

$$\sum_{t=0}^{\infty} \beta^t \ln c_t$$

subject to

$$c_t + k_{t+1} = A k_t^{\theta} + (1-\delta)k_t$$

and $k_{t+1} \geq 0$, where of course $k_0$ is given. As you know, the optimality conditions can be written as

$$\frac{1}{c_t} = \beta \, \frac{1 + A\theta k_{t+1}^{\theta-1} - \delta}{c_{t+1}}.$$

To solve for a transition from an arbitrary point to the steady state, we can just stack all these equations and solve them numerically. Well, not quite, because there are infinitely many equations. So we fix an integer $T$ and force the economy to converge to the steady state in $T$

periods. Thus we solve the following system.

$$A k_0^{\theta} + (1-\delta)k_0 - c_0 - k_1 = 0$$
$$\frac{1}{c_0} - \beta \, \frac{1 + A\theta k_1^{\theta-1} - \delta}{c_1} = 0$$
$$A k_1^{\theta} + (1-\delta)k_1 - c_1 - k_2 = 0$$
$$\frac{1}{c_1} - \beta \, \frac{1 + A\theta k_2^{\theta-1} - \delta}{c_2} = 0$$
$$\vdots$$
$$\frac{1}{c_{T-1}} - \beta \, \frac{1 + A\theta k_T^{\theta-1} - \delta}{c_T} = 0$$
$$A k_T^{\theta} + (1-\delta)k_T - c_T - k_{T+1} = 0.$$

Notice that one of the equations is repeated $T$ times, but the other is repeated $T+1$ times. This means that we have $2T+1$ equations. Why not repeat both equations $T+1$ times? Because we want to force the economy to converge to a steady state. This is important because a dynamic model will typically have explosive solutions that we are not interested in. So we will insist on $k_{T+1} = \bar{k}$, where $\bar{k}$ is defined by the steady state condition

$$\beta\left(1 + \theta A \bar{k}^{\theta-1} - \delta\right) = 1.$$

Having fixed $k_0$ and $k_{T+1}$, we have $T$ values of $k_t$ to solve for. Meanwhile, we have $T+1$ values of $c_t$ to solve for: $t = 0, 1, \ldots, T$. Thus we have $2T+1$ unknowns.

Now let's consider this problem at a somewhat higher level of generality. It turns out to be useful to distinguish between static and dynamic equilibrium conditions. The static equilibrium conditions don't involve $d_{t+1}$, and there are $n_d$ of them. The dynamic conditions do involve $d_{t+1}$, and there are $n_x$ of them. More precisely, let the equilibrium conditions be given by

$$f(x_t, x_{t+1}, d_t, d_{t+1}) = 0$$
$$g(x_t, x_{t+1}, d_t) = 0$$

for $t = 0, 1, \ldots$ Here it is not crucial that $f$ and $g$ do not depend explicitly on $t$, as long as any dependence ceases after finitely many periods. In any case, here's how the approach works:
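Here is a sketch of the stacked-system approach in Python with NumPy. The $2T+1$ residuals mirror the system above, and I solve them with a plain Newton iteration using a finite-difference Jacobian (a library root-finder would do equally well). Parameter values and the initial guess are my illustrative choices:

```python
import numpy as np

beta, theta, delta = 0.96, 0.36, 0.08
A = (1/beta - 1 + delta) / theta       # normalizes the steady state to kbar = 1
kbar = 1.0
T = 50
k0 = 0.5 * kbar                        # start at half the steady state

def residuals(z):
    # unknowns: z = (k_1, ..., k_T, c_0, ..., c_T); k_0 and k_{T+1} are fixed
    k = np.concatenate(([k0], z[:T], [kbar]))   # k_0, ..., k_{T+1}
    c = z[T:]                                   # c_0, ..., c_T
    res = np.empty(2*T + 1)
    # resource constraint, t = 0, ..., T  (T+1 equations)
    res[:T+1] = A * k[:T+1]**theta + (1 - delta) * k[:T+1] - c - k[1:T+2]
    # Euler equation, t = 0, ..., T-1  (T equations)
    res[T+1:] = 1/c[:-1] - beta * (1 + A*theta*k[1:T+1]**(theta - 1) - delta) / c[1:]
    return res

# initial guess: capital linear between k0 and kbar, consumption at its steady value
cbar = A * kbar**theta - delta * kbar
z = np.concatenate((np.linspace(k0, kbar, T + 2)[1:T+1], np.full(T + 1, cbar)))

for _ in range(50):                    # Newton with a finite-difference Jacobian
    r = residuals(z)
    if np.max(np.abs(r)) < 1e-10:
        break
    J = np.empty((2*T + 1, 2*T + 1))
    for j in range(2*T + 1):
        zp = z.copy()
        zp[j] += 1e-7
        J[:, j] = (residuals(zp) - r) / 1e-7
    z = z - np.linalg.solve(J, r)

k_path = np.concatenate(([k0], z[:T], [kbar]))   # the computed transition for capital
```

With $k_0$ below the steady state, the computed capital path should rise monotonically toward $\bar{k}$, with consumption starting below its steady-state value.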

1. Fix a time horizon $T$ after which you think the economy is very close to the steady state.

2. Set up your equilibrium conditions. Using the notation already established, we have

$$g(x_0, x_1, d_0) = 0$$
$$f(x_0, x_1, d_0, d_1) = 0$$
$$g(x_1, x_2, d_1) = 0$$
$$f(x_1, x_2, d_1, d_2) = 0$$
$$\vdots$$
$$f(x_{T-1}, x_T, d_{T-1}, d_T) = 0$$
$$g(x_T, x_{T+1}, d_T) = 0.$$

3. Don't forget that $x_0$ is fixed by an initial condition!

4. Meanwhile, $x_{T+1}$ is fixed by the steady state (if known) or by the equation $x_{T+1} = x_T$ if the steady state is not known.

5. Solve for $x_1, x_2, \ldots, x_T$ and $d_0, d_1, \ldots, d_T$. Notice that there are just as many equations as there are unknowns!

5 Exercises

1. Derive a formula for the maximum sustainable level of capital $k_{\max}$. By definition, $k_{\max}$ is the largest value of $k$ such that $A k^{\theta} - \delta k \geq 0$.

2. Write a program that solves the model described in these lecture notes. Use a grid that's evenly spaced in logs, has 1000 points and ranges from $k_{ss}/2$ to $k_{\max}$. Verify that it works by comparing the computed value function and decision rule with the true counterparts when $\delta = 1$.

3. How long does the program take to converge without Howard's improvement? What is (roughly) the optimal number of Howard improvements in between optimizations, and how much does that cut computation time? (Set parameter values as you like.)

4. Set $\beta = 0.96$, $\theta = 0.36$, $\delta = 0.08$ and $k_0 = k_{ss}/2$. How many periods does it take for $k_t \geq 3k_{ss}/4$? (What if $\delta = 0.1$? How long does it take then?)

5. The following first-order condition should hold along your computed transition path.

$$\frac{1}{c_t} = \beta \, \frac{1 + A\theta k_{t+1}^{\theta-1} - \delta}{c_{t+1}}$$

Report by what percentage consumption $c_t$ would have to change in order for this condition to hold exactly in each period $t$. When you compute this number for a particular $t$, keep all the other variables ($c_{t+1}$, $k_{t+1}$) unchanged. Compute the number for $t = 0, 1, \ldots, 20$.

6. Revisit Question 4 by computing a deterministic transition as described in Section 4. Do you get the same answer?

7. The Hodrick-Prescott filter defines the trend as the sequence $\{\hat{X}_t\}_{t=0}^{T}$ that solves

$$\min_{\{\hat{X}_t\}_{t=0}^{T}} \left\{ \sum_{t=0}^{T} \left(X_t - \hat{X}_t\right)^2 + \lambda \sum_{t=1}^{T-1} \left( \left(\hat{X}_{t+1} - \hat{X}_t\right) - \left(\hat{X}_t - \hat{X}_{t-1}\right) \right)^2 \right\} \qquad (2)$$

where $\lambda$ is a parameter and $\{X_t\}_{t=0}^{T}$ is some observed time series.

(a) Write down the first order conditions for this minimization problem.

(b) Describe the precise type of sparseness that the linear system you derived in (a) exhibits.

(c) Code a function that solves for the trend, exploiting the sparseness.

References

Howard, R. A. (1960). Dynamic Programming and Markov Processes. The M.I.T. Press.


More information

Jim Lambers MAT 610 Summer Session Lecture 2 Notes

Jim Lambers MAT 610 Summer Session Lecture 2 Notes Jim Lambers MAT 610 Summer Session 2009-10 Lecture 2 Notes These notes correspond to Sections 2.2-2.4 in the text. Vector Norms Given vectors x and y of length one, which are simply scalars x and y, the

More information

Lecture 3: Hamilton-Jacobi-Bellman Equations. Distributional Macroeconomics. Benjamin Moll. Part II of ECON Harvard University, Spring

Lecture 3: Hamilton-Jacobi-Bellman Equations. Distributional Macroeconomics. Benjamin Moll. Part II of ECON Harvard University, Spring Lecture 3: Hamilton-Jacobi-Bellman Equations Distributional Macroeconomics Part II of ECON 2149 Benjamin Moll Harvard University, Spring 2018 1 Outline 1. Hamilton-Jacobi-Bellman equations in deterministic

More information

6.080 / Great Ideas in Theoretical Computer Science Spring 2008

6.080 / Great Ideas in Theoretical Computer Science Spring 2008 MIT OpenCourseWare http://ocw.mit.edu 6.080 / 6.089 Great Ideas in Theoretical Computer Science Spring 2008 For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.

More information

Mechanics, Heat, Oscillations and Waves Prof. V. Balakrishnan Department of Physics Indian Institute of Technology, Madras

Mechanics, Heat, Oscillations and Waves Prof. V. Balakrishnan Department of Physics Indian Institute of Technology, Madras Mechanics, Heat, Oscillations and Waves Prof. V. Balakrishnan Department of Physics Indian Institute of Technology, Madras Lecture - 21 Central Potential and Central Force Ready now to take up the idea

More information

Introduction, basic but important concepts

Introduction, basic but important concepts Introduction, basic but important concepts Felix Kubler 1 1 DBF, University of Zurich and Swiss Finance Institute October 7, 2017 Felix Kubler Comp.Econ. Gerzensee, Ch1 October 7, 2017 1 / 31 Economics

More information

Lecture - 30 Stationary Processes

Lecture - 30 Stationary Processes Probability and Random Variables Prof. M. Chakraborty Department of Electronics and Electrical Communication Engineering Indian Institute of Technology, Kharagpur Lecture - 30 Stationary Processes So,

More information

Lecture 7: Numerical Tools

Lecture 7: Numerical Tools Lecture 7: Numerical Tools Fatih Guvenen January 10, 2016 Fatih Guvenen Lecture 7: Numerical Tools January 10, 2016 1 / 18 Overview Three Steps: V (k, z) =max c,k 0 apple u(c)+ Z c + k 0 =(1 + r)k + z

More information

Chapter 3. Dynamic Programming

Chapter 3. Dynamic Programming Chapter 3. Dynamic Programming This chapter introduces basic ideas and methods of dynamic programming. 1 It sets out the basic elements of a recursive optimization problem, describes the functional equation

More information

Communication Engineering Prof. Surendra Prasad Department of Electrical Engineering Indian Institute of Technology, Delhi

Communication Engineering Prof. Surendra Prasad Department of Electrical Engineering Indian Institute of Technology, Delhi Communication Engineering Prof. Surendra Prasad Department of Electrical Engineering Indian Institute of Technology, Delhi Lecture - 41 Pulse Code Modulation (PCM) So, if you remember we have been talking

More information

CSC 5170: Theory of Computational Complexity Lecture 4 The Chinese University of Hong Kong 1 February 2010

CSC 5170: Theory of Computational Complexity Lecture 4 The Chinese University of Hong Kong 1 February 2010 CSC 5170: Theory of Computational Complexity Lecture 4 The Chinese University of Hong Kong 1 February 2010 Computational complexity studies the amount of resources necessary to perform given computations.

More information

Logarithms and Exponentials

Logarithms and Exponentials Logarithms and Exponentials Steven Kaplan Department of Physics and Astronomy, Rutgers University The Basic Idea log b x =? Whoa...that looks scary. What does that mean? I m glad you asked. Let s analyze

More information

1 Jan 28: Overview and Review of Equilibrium

1 Jan 28: Overview and Review of Equilibrium 1 Jan 28: Overview and Review of Equilibrium 1.1 Introduction What is an equilibrium (EQM)? Loosely speaking, an equilibrium is a mapping from environments (preference, technology, information, market

More information

Stochastic Problems. 1 Examples. 1.1 Neoclassical Growth Model with Stochastic Technology. 1.2 A Model of Job Search

Stochastic Problems. 1 Examples. 1.1 Neoclassical Growth Model with Stochastic Technology. 1.2 A Model of Job Search Stochastic Problems References: SLP chapters 9, 10, 11; L&S chapters 2 and 6 1 Examples 1.1 Neoclassical Growth Model with Stochastic Technology Production function y = Af k where A is random Let A s t

More information

An analogy from Calculus: limits

An analogy from Calculus: limits COMP 250 Fall 2018 35 - big O Nov. 30, 2018 We have seen several algorithms in the course, and we have loosely characterized their runtimes in terms of the size n of the input. We say that the algorithm

More information

Slope Fields: Graphing Solutions Without the Solutions

Slope Fields: Graphing Solutions Without the Solutions 8 Slope Fields: Graphing Solutions Without the Solutions Up to now, our efforts have been directed mainly towards finding formulas or equations describing solutions to given differential equations. Then,

More information

Some AI Planning Problems

Some AI Planning Problems Course Logistics CS533: Intelligent Agents and Decision Making M, W, F: 1:00 1:50 Instructor: Alan Fern (KEC2071) Office hours: by appointment (see me after class or send email) Emailing me: include CS533

More information

ACCESS TO SCIENCE, ENGINEERING AND AGRICULTURE: MATHEMATICS 1 MATH00030 SEMESTER / Lines and Their Equations

ACCESS TO SCIENCE, ENGINEERING AND AGRICULTURE: MATHEMATICS 1 MATH00030 SEMESTER / Lines and Their Equations ACCESS TO SCIENCE, ENGINEERING AND AGRICULTURE: MATHEMATICS 1 MATH00030 SEMESTER 1 017/018 DR. ANTHONY BROWN. Lines and Their Equations.1. Slope of a Line and its y-intercept. In Euclidean geometry (where

More information

CSE332: Data Structures & Parallelism Lecture 2: Algorithm Analysis. Ruth Anderson Winter 2018

CSE332: Data Structures & Parallelism Lecture 2: Algorithm Analysis. Ruth Anderson Winter 2018 CSE332: Data Structures & Parallelism Lecture 2: Algorithm Analysis Ruth Anderson Winter 2018 Today Algorithm Analysis What do we care about? How to compare two algorithms Analyzing Code Asymptotic Analysis

More information

M155 Exam 2 Concept Review

M155 Exam 2 Concept Review M155 Exam 2 Concept Review Mark Blumstein DERIVATIVES Product Rule Used to take the derivative of a product of two functions u and v. u v + uv Quotient Rule Used to take a derivative of the quotient of

More information

Time-bounded computations

Time-bounded computations Lecture 18 Time-bounded computations We now begin the final part of the course, which is on complexity theory. We ll have time to only scratch the surface complexity theory is a rich subject, and many

More information

Hypothesis testing I. - In particular, we are talking about statistical hypotheses. [get everyone s finger length!] n =

Hypothesis testing I. - In particular, we are talking about statistical hypotheses. [get everyone s finger length!] n = Hypothesis testing I I. What is hypothesis testing? [Note we re temporarily bouncing around in the book a lot! Things will settle down again in a week or so] - Exactly what it says. We develop a hypothesis,

More information

Lecture 22: Quantum computational complexity

Lecture 22: Quantum computational complexity CPSC 519/619: Quantum Computation John Watrous, University of Calgary Lecture 22: Quantum computational complexity April 11, 2006 This will be the last lecture of the course I hope you have enjoyed the

More information

Quick Sort Notes , Spring 2010

Quick Sort Notes , Spring 2010 Quick Sort Notes 18.310, Spring 2010 0.1 Randomized Median Finding In a previous lecture, we discussed the problem of finding the median of a list of m elements, or more generally the element of rank m.

More information

Neoclassical Growth Model: I

Neoclassical Growth Model: I Neoclassical Growth Model: I Mark Huggett 2 2 Georgetown October, 2017 Growth Model: Introduction Neoclassical Growth Model is the workhorse model in macroeconomics. It comes in two main varieties: infinitely-lived

More information

Numerical Methods - Preliminaries

Numerical Methods - Preliminaries Numerical Methods - Preliminaries Y. K. Goh Universiti Tunku Abdul Rahman 2013 Y. K. Goh (UTAR) Numerical Methods - Preliminaries 2013 1 / 58 Table of Contents 1 Introduction to Numerical Methods Numerical

More information

Linear Programming and its Extensions Prof. Prabha Shrama Department of Mathematics and Statistics Indian Institute of Technology, Kanpur

Linear Programming and its Extensions Prof. Prabha Shrama Department of Mathematics and Statistics Indian Institute of Technology, Kanpur Linear Programming and its Extensions Prof. Prabha Shrama Department of Mathematics and Statistics Indian Institute of Technology, Kanpur Lecture No. # 03 Moving from one basic feasible solution to another,

More information

Lecture 7: Stochastic Dynamic Programing and Markov Processes

Lecture 7: Stochastic Dynamic Programing and Markov Processes Lecture 7: Stochastic Dynamic Programing and Markov Processes Florian Scheuer References: SLP chapters 9, 10, 11; LS chapters 2 and 6 1 Examples 1.1 Neoclassical Growth Model with Stochastic Technology

More information

HOMEWORK #3 This homework assignment is due at NOON on Friday, November 17 in Marnix Amand s mailbox.

HOMEWORK #3 This homework assignment is due at NOON on Friday, November 17 in Marnix Amand s mailbox. Econ 50a second half) Yale University Fall 2006 Prof. Tony Smith HOMEWORK #3 This homework assignment is due at NOON on Friday, November 7 in Marnix Amand s mailbox.. This problem introduces wealth inequality

More information

An Application to Growth Theory

An Application to Growth Theory An Application to Growth Theory First let s review the concepts of solution function and value function for a maximization problem. Suppose we have the problem max F (x, α) subject to G(x, β) 0, (P) x

More information

Unary negation: T F F T

Unary negation: T F F T Unary negation: ϕ 1 ϕ 1 T F F T Binary (inclusive) or: ϕ 1 ϕ 2 (ϕ 1 ϕ 2 ) T T T T F T F T T F F F Binary (exclusive) or: ϕ 1 ϕ 2 (ϕ 1 ϕ 2 ) T T F T F T F T T F F F Classical (material) conditional: ϕ 1

More information

CIS 2033 Lecture 5, Fall

CIS 2033 Lecture 5, Fall CIS 2033 Lecture 5, Fall 2016 1 Instructor: David Dobor September 13, 2016 1 Supplemental reading from Dekking s textbook: Chapter2, 3. We mentioned at the beginning of this class that calculus was a prerequisite

More information

How do computers represent numbers?

How do computers represent numbers? How do computers represent numbers? Tips & Tricks Week 1 Topics in Scientific Computing QMUL Semester A 2017/18 1/10 What does digital mean? The term DIGITAL refers to any device that operates on discrete

More information

Hidden Markov Models: All the Glorious Gory Details

Hidden Markov Models: All the Glorious Gory Details Hidden Markov Models: All the Glorious Gory Details Noah A. Smith Department of Computer Science Johns Hopkins University nasmith@cs.jhu.edu 18 October 2004 1 Introduction Hidden Markov models (HMMs, hereafter)

More information

Real Analysis Prof. S.H. Kulkarni Department of Mathematics Indian Institute of Technology, Madras. Lecture - 13 Conditional Convergence

Real Analysis Prof. S.H. Kulkarni Department of Mathematics Indian Institute of Technology, Madras. Lecture - 13 Conditional Convergence Real Analysis Prof. S.H. Kulkarni Department of Mathematics Indian Institute of Technology, Madras Lecture - 13 Conditional Convergence Now, there are a few things that are remaining in the discussion

More information

Richard S. Palais Department of Mathematics Brandeis University Waltham, MA The Magic of Iteration

Richard S. Palais Department of Mathematics Brandeis University Waltham, MA The Magic of Iteration Richard S. Palais Department of Mathematics Brandeis University Waltham, MA 02254-9110 The Magic of Iteration Section 1 The subject of these notes is one of my favorites in all mathematics, and it s not

More information

Water Resources Systems Prof. P. P. Mujumdar Department of Civil Engineering Indian Institute of Science, Bangalore

Water Resources Systems Prof. P. P. Mujumdar Department of Civil Engineering Indian Institute of Science, Bangalore Water Resources Systems Prof. P. P. Mujumdar Department of Civil Engineering Indian Institute of Science, Bangalore Module No. # 05 Lecture No. # 22 Reservoir Capacity using Linear Programming (2) Good

More information

Finding Limits Graphically and Numerically

Finding Limits Graphically and Numerically Finding Limits Graphically and Numerically 1. Welcome to finding limits graphically and numerically. My name is Tuesday Johnson and I m a lecturer at the University of Texas El Paso. 2. With each lecture

More information

2. Limits at Infinity

2. Limits at Infinity 2 Limits at Infinity To understand sequences and series fully, we will need to have a better understanding of its at infinity We begin with a few examples to motivate our discussion EXAMPLE 1 Find SOLUTION

More information

Practical Dynamic Programming: An Introduction. Associated programs dpexample.m: deterministic dpexample2.m: stochastic

Practical Dynamic Programming: An Introduction. Associated programs dpexample.m: deterministic dpexample2.m: stochastic Practical Dynamic Programming: An Introduction Associated programs dpexample.m: deterministic dpexample2.m: stochastic Outline 1. Specific problem: stochastic model of accumulation from a DP perspective

More information

ADVANCED MACROECONOMIC TECHNIQUES NOTE 3a

ADVANCED MACROECONOMIC TECHNIQUES NOTE 3a 316-406 ADVANCED MACROECONOMIC TECHNIQUES NOTE 3a Chris Edmond hcpedmond@unimelb.edu.aui Dynamic programming and the growth model Dynamic programming and closely related recursive methods provide an important

More information

CSE332: Data Structures & Parallelism Lecture 2: Algorithm Analysis. Ruth Anderson Winter 2018

CSE332: Data Structures & Parallelism Lecture 2: Algorithm Analysis. Ruth Anderson Winter 2018 CSE332: Data Structures & Parallelism Lecture 2: Algorithm Analysis Ruth Anderson Winter 2018 Today Algorithm Analysis What do we care about? How to compare two algorithms Analyzing Code Asymptotic Analysis

More information

2 = = 0 Thus, the number which is largest in magnitude is equal to the number which is smallest in magnitude.

2 = = 0 Thus, the number which is largest in magnitude is equal to the number which is smallest in magnitude. Limits at Infinity Two additional topics of interest with its are its as x ± and its where f(x) ±. Before we can properly discuss the notion of infinite its, we will need to begin with a discussion on

More information

Advanced topic: Space complexity

Advanced topic: Space complexity Advanced topic: Space complexity CSCI 3130 Formal Languages and Automata Theory Siu On CHAN Chinese University of Hong Kong Fall 2016 1/28 Review: time complexity We have looked at how long it takes to

More information

FLOATING POINT ARITHMETHIC - ERROR ANALYSIS

FLOATING POINT ARITHMETHIC - ERROR ANALYSIS FLOATING POINT ARITHMETHIC - ERROR ANALYSIS Brief review of floating point arithmetic Model of floating point arithmetic Notation, backward and forward errors 3-1 Roundoff errors and floating-point arithmetic

More information

2 Systems of Linear Equations

2 Systems of Linear Equations 2 Systems of Linear Equations A system of equations of the form or is called a system of linear equations. x + 2y = 7 2x y = 4 5p 6q + r = 4 2p + 3q 5r = 7 6p q + 4r = 2 Definition. An equation involving

More information

Slides II - Dynamic Programming

Slides II - Dynamic Programming Slides II - Dynamic Programming Julio Garín University of Georgia Macroeconomic Theory II (Ph.D.) Spring 2017 Macroeconomic Theory II Slides II - Dynamic Programming Spring 2017 1 / 32 Outline 1. Lagrangian

More information

CSE332: Data Structures & Parallelism Lecture 2: Algorithm Analysis. Ruth Anderson Winter 2019

CSE332: Data Structures & Parallelism Lecture 2: Algorithm Analysis. Ruth Anderson Winter 2019 CSE332: Data Structures & Parallelism Lecture 2: Algorithm Analysis Ruth Anderson Winter 2019 Today Algorithm Analysis What do we care about? How to compare two algorithms Analyzing Code Asymptotic Analysis

More information

CS 124 Math Review Section January 29, 2018

CS 124 Math Review Section January 29, 2018 CS 124 Math Review Section CS 124 is more math intensive than most of the introductory courses in the department. You re going to need to be able to do two things: 1. Perform some clever calculations to

More information

Neoclassical Business Cycle Model

Neoclassical Business Cycle Model Neoclassical Business Cycle Model Prof. Eric Sims University of Notre Dame Fall 2015 1 / 36 Production Economy Last time: studied equilibrium in an endowment economy Now: study equilibrium in an economy

More information

Lecture 7: Linear-Quadratic Dynamic Programming Real Business Cycle Models

Lecture 7: Linear-Quadratic Dynamic Programming Real Business Cycle Models Lecture 7: Linear-Quadratic Dynamic Programming Real Business Cycle Models Shinichi Nishiyama Graduate School of Economics Kyoto University January 10, 2019 Abstract In this lecture, we solve and simulate

More information

Great Theoretical Ideas in Computer Science. Lecture 9: Introduction to Computational Complexity

Great Theoretical Ideas in Computer Science. Lecture 9: Introduction to Computational Complexity 15-251 Great Theoretical Ideas in Computer Science Lecture 9: Introduction to Computational Complexity February 14th, 2017 Poll What is the running time of this algorithm? Choose the tightest bound. def

More information

Session 4: Money. Jean Imbs. November 2010

Session 4: Money. Jean Imbs. November 2010 Session 4: Jean November 2010 I So far, focused on real economy. Real quantities consumed, produced, invested. No money, no nominal in uences. I Now, introduce nominal dimension in the economy. First and

More information

ECON607 Fall 2010 University of Hawaii Professor Hui He TA: Xiaodong Sun Assignment 2

ECON607 Fall 2010 University of Hawaii Professor Hui He TA: Xiaodong Sun Assignment 2 ECON607 Fall 200 University of Hawaii Professor Hui He TA: Xiaodong Sun Assignment 2 The due date for this assignment is Tuesday, October 2. ( Total points = 50). (Two-sector growth model) Consider the

More information

MAT1302F Mathematical Methods II Lecture 19

MAT1302F Mathematical Methods II Lecture 19 MAT302F Mathematical Methods II Lecture 9 Aaron Christie 2 April 205 Eigenvectors, Eigenvalues, and Diagonalization Now that the basic theory of eigenvalues and eigenvectors is in place most importantly

More information

A Quick Introduction to Numerical Methods

A Quick Introduction to Numerical Methods Chapter 5 A Quick Introduction to Numerical Methods One of the main advantages of the recursive approach is that we can use the computer to solve numerically interesting models. There is a wide variety

More information