Lecture Notes 8
Guido Lorenzoni
Fall 2009

1 Stochastic dynamic programming: an example

We now turn to analyzing problems with uncertainty, in discrete time. We begin with an example that illustrates the power of recursive methods.

Take an unemployed worker with a linear utility function. The worker is drawing wage offers from a known distribution with continuous c.d.f. F(w) on [\underline{w}, \bar{w}]. At any point in time, he can stop and accept the current offer. If he accepts, he gets to work at wage w and then works forever, getting utility w/(1-\beta).

Sequence setup: a history is a sequence of observed offers w^t = (w_0, w_1, ..., w_t). A plan is to stop or not after any possible history, i.e., to choose \sigma(w^t) \in \{0, 1\}. The stopping time T is a random variable that depends on the plan \sigma(\cdot): T is the first time where \sigma(w^t) = 1. The objective is to choose \sigma(\cdot) to maximize

    E[ \beta^T w_T / (1-\beta) ].

Recursive setup. State variable: did you stop in the past, and if yes, what wage did you accept? So the state space is now X = {unemployed} \cup R_+. The value after stopping at wage w is just V(w) = w/(1-\beta). So we need to characterize V(unemployed), which we will denote V^U. Each period, the decision after never having stopped is

    max{ w/(1-\beta), \beta V^U }

or, equivalently,

    max_{\sigma \in \{0,1\}}  \sigma w/(1-\beta) + (1-\sigma) \beta V^U.

So the optimal policy is to stop if w > \hat{w} and not stop if w < \hat{w}, with indifference if w = \hat{w}, where \hat{w} = \beta(1-\beta) V^U.
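The sequence formulation can be made concrete by Monte Carlo: fix a cutoff policy \sigma(w^t) = 1{w_t > \hat{w}} and estimate the objective E[\beta^T w_T/(1-\beta)] by simulation. A minimal sketch, assuming wages uniform on [0, 1], \beta = 0.95, and an arbitrary candidate cutoff of 0.7 (all illustrative values, not from the notes):

```python
import numpy as np

rng = np.random.default_rng(0)
beta, w_hat = 0.95, 0.7                  # discount factor and candidate cutoff (illustrative)

def stopped_payoff():
    """Draw offers until one exceeds the cutoff; return beta**T * w_T / (1 - beta)."""
    t = 0
    while True:
        w = rng.uniform()                # offer w_t drawn from U[0, 1]
        if w > w_hat:                    # stopping rule sigma(w^t) = 1{w_t > w_hat}
            return beta**t * w / (1 - beta)
        t += 1

values = [stopped_payoff() for _ in range(20_000)]
print(np.mean(values))                   # Monte Carlo estimate of the objective
```

Trying nearby cutoffs shows the estimated value is maximized near the optimal \hat{w} characterized below by the Bellman equation.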
Bellman equation:

    V^U = \int \max\{ w/(1-\beta), \beta V^U \} dF(w).

We can rewrite it in terms of the cutoff \hat{w}: using \hat{w} = \beta(1-\beta) V^U, we have

    \hat{w} = \beta(1-\beta) \int \max\{ w/(1-\beta), \beta V^U \} dF(w) = \beta \int \max\{ w, \hat{w} \} dF(w).

To find the fixed point, simply find the \hat{w} that solves \hat{w} = T(\hat{w}), where

    T(v) = \beta \int \max\{ w, v \} dF(w).

Properties of this map:

- it is continuous and increasing on [\underline{w}, \bar{w}];
- it has derivative T'(v) = \beta [ F(v) + v f(v) - v f(v) ] = \beta F(v) \in [0, \beta] for v \in (\underline{w}, \bar{w}) (here we use the continuity of the distribution);
- it satisfies T(\underline{w}) = \beta E[w] and T(\bar{w}) = \beta \bar{w}.

Therefore, a unique fixed point \hat{w} exists and lies in (\underline{w}, \bar{w}) (you can use the contraction mapping theorem to prove it).

Comparative statics 1. An increase in \beta increases the cutoff \hat{w}. Just look at T(v) = \beta \int \max\{w, v\} dF(w) and note that it is increasing in both \beta and v at the fixed point \hat{w}.

Comparative statics 2. A first-order stochastic shift in the distribution F leads to a (weak) increase in \hat{w}.

Comparative statics 3. A second-order stochastic shift in the distribution F leads to a (weak) increase in \hat{w}.

What are first-order and second-order stochastic dominance? Take two distributions F and G on R.
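The fixed point of T can be computed by iterating the map. A minimal sketch, assuming wages uniform on [0, 1] and \beta = 0.95 (both illustrative choices, not from the notes); for this distribution T(v) = \beta(1 + v^2)/2, so the iteration can be checked against the closed-form root of \beta v^2 - 2v + \beta = 0:

```python
import numpy as np

beta = 0.95                              # discount factor (illustrative)
w_grid = np.linspace(0.0, 1.0, 10_001)   # wages uniform on [0, 1] (illustrative)

def T(v):
    """T(v) = beta * E[max(w, v)] under the uniform wage distribution."""
    return beta * np.mean(np.maximum(w_grid, v))

v = 0.0
for _ in range(1_000):                   # iterate the contraction to its fixed point
    v_new = T(v)
    if abs(v_new - v) < 1e-12:
        break
    v = v_new

# Closed form for U[0,1]: T(v) = beta*(1 + v**2)/2, so beta*v^2 - 2v + beta = 0.
w_hat_exact = (1 - np.sqrt(1 - beta**2)) / beta
print(v, w_hat_exact)
```

Convergence is guaranteed because T'(v) = beta*F(v) < 1, exactly the contraction property used in the text.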
Definition 1. The distribution F dominates the distribution G in the sense of 1st-order stochastic dominance iff

    \int h(x) dF(x) \geq \int h(x) dG(x)

for all monotone functions h : R \to R.

Definition 2. The distribution F dominates the distribution G in the sense of 2nd-order stochastic dominance iff

    \int h(x) dF(x) \geq \int h(x) dG(x)

for all convex functions h : R \to R.

Sometimes you see stochastic dominance (1st and 2nd order) defined in terms of comparisons of the c.d.f.'s of F and G, and then the definitions above are theorems!

Exercise: using the definitions above, prove comparative statics 2 and 3.

Characterizing the dynamics. Let us make the problem more interesting (and stationary) by assuming that, when employed, agents lose their job with exogenous probability \delta. The state space is still X = {unemployed} \cup R_+. Now the Bellman equations are

    V(w) = w + \beta [ \delta V^U + (1-\delta) V(w) ],
    V^U = \int \max\{ V(w), \beta V^U \} dF(w).

From the first we get

    V(w) = (w + \beta \delta V^U) / (1 - \beta(1-\delta)),

and we have to find V^U from

    V^U = \int \max\{ (w + \beta \delta V^U) / (1 - \beta(1-\delta)), \beta V^U \} dF(w).

Exercise: prove that this defines a contraction with modulus \beta.

So we still have a cutoff, now given by

    \hat{w} = \beta (1-\beta) (1-\delta) V^U.

Now the nice thing is that the optimal policy defines a Markov process for the state x_t \in X. Let us simplify by assuming the distribution of wages is a discrete distribution with J possible realizations \{\omega_1, \omega_2, ..., \omega_J\} and probabilities \{\pi_1, \pi_2, ..., \pi_J\} (the c.d.f. is now a step function). Suppose \omega_{\hat{j}-1} < \hat{w} < \omega_{\hat{j}}.
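The fixed-point equation for V^U with separations can be solved the same way, by iterating on V^U and then reading the cutoff off \hat{w} = \beta(1-\beta)(1-\delta)V^U. A sketch, again assuming wages uniform on [0, 1] with \beta = 0.95 and \delta = 0.05 (illustrative values, not from the notes):

```python
import numpy as np

beta, delta = 0.95, 0.05                 # discount factor and separation rate (illustrative)
w_grid = np.linspace(0.0, 1.0, 10_001)   # wages uniform on [0, 1] (illustrative)

VU = 0.0
for _ in range(5_000):                   # fixed-point iteration; contraction with modulus beta
    V_emp = (w_grid + beta * delta * VU) / (1 - beta * (1 - delta))   # V(w)
    VU_new = np.mean(np.maximum(V_emp, beta * VU))                    # E[ max{V(w), beta*V^U} ]
    if abs(VU_new - VU) < 1e-10:
        break
    VU = VU_new

w_hat = beta * (1 - beta) * (1 - delta) * VU   # cutoff wage with separations
print(VU, w_hat)
```

Setting delta = 0 recovers the cutoff from the no-separation problem, which is a quick consistency check on the code.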
Now we have a Markov chain with transition probabilities given as follows:

    Pr(x_{t+1} = unemployed | x_t = unemployed) = \sum_{j=1}^{\hat{j}-1} \pi_j
    Pr(x_{t+1} = \omega_j | x_t = unemployed) = 0          for j = 1, ..., \hat{j}-1
    Pr(x_{t+1} = \omega_j | x_t = unemployed) = \pi_j      for j = \hat{j}, ..., J
    Pr(x_{t+1} = unemployed | x_t = \omega_j) = \delta     for all j
    Pr(x_{t+1} = \omega_j | x_t = \omega_j) = 1 - \delta   for all j
    Pr(x_{t+1} = \omega_{j'} | x_t = \omega_j) = 0         for all j' \neq j and all j

We can then address questions like: suppose you have a large population of agents (with independent wage draws and separation shocks) and you start from some distribution over the state space X; if the economy goes on for a while, does it converge to some invariant distribution on X? This is the analogue of the deterministic dynamics, but the notion of convergence is different: there is no steady state, but there is an invariant distribution.

Example: \{\omega_1, \omega_2, \omega_3\} with \hat{j} = 2. Then X = {unemployed, \omega_1, \omega_2, \omega_3} and the transition matrix (written so that column j gives the distribution of x_{t+1} conditional on the j-th state) is

    M = [ \pi_1   \delta     \delta     \delta
          0       1-\delta   0          0
          \pi_2   0          1-\delta   0
          \pi_3   0          0          1-\delta ].

Suppose you start from a distribution \psi_0 = (\psi_{1,0}, \psi_{2,0}, \psi_{3,0}, \psi_{4,0}). What happens to the distribution after t periods?

    \psi_t = M^t \psi_0.

Does it converge?
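The convergence question can be checked directly for the three-wage example. A sketch with made-up numbers \pi = (0.3, 0.4, 0.3) and \delta = 0.05 (illustrative, not from the notes); M is written column-stochastic so that \psi_{t+1} = M \psi_t for a column vector of state probabilities:

```python
import numpy as np

delta = 0.05                      # separation rate (illustrative)
pi = np.array([0.3, 0.4, 0.3])    # offer probabilities for (w1, w2, w3), illustrative

# States ordered (unemployed, w1, w2, w3); offers w2 and w3 are accepted (j_hat = 2).
# Column j is the distribution of next period's state given current state j.
M = np.array([
    [pi[0], delta,     delta,     delta    ],  # -> unemployed
    [0.0,   1 - delta, 0.0,       0.0      ],  # -> employed at w1 (never entered)
    [pi[1], 0.0,       1 - delta, 0.0      ],  # -> employed at w2
    [pi[2], 0.0,       0.0,       1 - delta],  # -> employed at w3
])

psi = np.array([1.0, 0.0, 0.0, 0.0])   # everyone starts unemployed
for _ in range(10_000):                # psi_t = M^t psi_0
    psi = M @ psi

print(psi)                             # approximate invariant distribution
```

Starting from any initial distribution with no mass on w1, the iteration settles on the same limit, which is the invariant-distribution analogue of a steady state; the unemployed mass converges to delta / (delta + pi_2 + pi_3).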
MIT OpenCourseWare
Dynamic Optimization Methods with Applications
Fall 2009

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.
More information