Numerical Methods-Lecture XIII: Dynamic Discrete Choice Estimation
Numerical Methods, Lecture XIII: Dynamic Discrete Choice Estimation (see Rust 1987)
Trevor Gallen, Fall
Introduction

Rust (1987) wrote a paper about Harold Zurcher's decisions as the superintendent of maintenance at the Madison (Wisconsin) Metropolitan Bus Company:
- 10 years of monthly data on bus mileage and engine replacement
- 104 buses
- 1 Harold

We are not interested in Harold per se, but in the application of the dynamic discrete choice framework.
Description

- Observe monthly bus mileage
- Observe a maintenance diary with date, mileage, and list of components repaired or replaced
- Three types of maintenance operations:
  - Routine adjustments (brake adjustments, tire rotation)
  - Replacement of individual components when they fail
  - Major engine overhauls
- Model Zurcher's decision to replace bus engines based on observables and unobservables
Model

- The agent is forward-looking and maximizes expected intertemporal payoff
- Estimate the parameters of the model
- Test whether the agent's behavior is consistent with the model
Replacement data I-III

[Three slides of figures showing the engine replacement data; images not transcribed.]
Sample

- Focus on bus groups 1-4, the most recent acquisitions
- Data on replacement costs are available only for this group
- Utilization is fairly constant within each group (necessary for the model)
Model - I

Harold chooses {i_1, i_2, ..., i_t, ...} to maximize his expected utility:

    max_{i_1, i_2, ...} E[ Σ_{t=1}^∞ β^{t-1} u(x_t, ε_t, i_t; θ) ]

- x_t is the total mileage on an engine since the last replacement
- ε_t are shocks unobservable to the econometrician
- i_t is an indicator of engine replacement
- θ is the vector of parameters (to be discussed below)
Model - II

Cost function:

    c(x, θ_1) = m(x, θ_11) + μ(x, θ_12) b(x, θ_13)

- m(x, θ_11) is the conditional expectation of normal maintenance and operating expenditure
- μ(x, θ_12) is the conditional probability of an unexpected engine failure
- b(x, θ_13) is the conditional expectation of towing costs, repair costs, and lost customer goodwill resulting from engine failure

There are no data on maintenance and operating costs, so we estimate only the sum of costs c(x, θ_1).
Model - III

The stochastic process governing {i_t, x_t} is the solution to

    V_θ(x_t) = sup_Π E[ Σ_{j=t}^∞ β^{j−t} u(x_j, f_j, θ_1) | x_t ]

where utility u is given by

    u(x_t, i_t, θ_1) = −c(x_t, θ_1)             if i_t = 0
                     = −[P̄ − P + c(0, θ_1)]    if i_t = 1

- Π = {f_t, f_{t+1}, ...} is a sequence of decision rules, where each f_j gives the optimal choice (replace or not) at time j given the entire history of investment i_{t−1}, ..., i_1 and mileage x_t, x_{t−1}, ..., x_1 observed to date
- P is the scrap value of an old engine
- P̄ is the cost of a new engine
Model - IV

The evolution of x_t is given by the stochastic process

    p(x_{t+1} | x_t, i_t, θ_2) = θ_2 exp[−θ_2 (x_{t+1} − x_t)]   if i_t = 0 and x_{t+1} ≥ x_t
                               = θ_2 exp[−θ_2 x_{t+1}]           if i_t = 1 and x_{t+1} ≥ 0
                               = 0                               otherwise

- Without replacement, next-period mileage is drawn from the exponential CDF 1 − exp[−θ_2 (x_{t+1} − x_t)]
- With replacement, the odometer resets and next-period mileage is drawn from the exponential CDF 1 − exp[−θ_2 (x_{t+1} − 0)]
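A minimal simulation sketch of this transition process, assuming illustrative values for θ_2 and a fixed replacement cutoff (both hypothetical, not Rust's estimates):

```python
import random

def simulate_mileage(theta2, cutoff, periods, seed=0):
    """Simulate one bus's mileage path under the exponential transition
    process above, with a simple cutoff replacement rule."""
    rng = random.Random(seed)
    x = 0.0
    path, replacements = [], []
    for _ in range(periods):
        i = 1 if x > cutoff else 0       # replace once mileage passes cutoff
        if i == 1:
            x = 0.0                      # replacement resets the odometer
        x += rng.expovariate(theta2)     # increment ~ Exponential(theta2)
        path.append(x)
        replacements.append(i)
    return path, replacements

# mean monthly increment 1/theta2 = 5,000 miles; replace above 200,000 miles
path, reps = simulate_mileage(theta2=1 / 5000, cutoff=200_000, periods=120)
```

With a mean increment of 5,000 miles per month, a 120-month path crosses the 200,000-mile cutoff several times, generating the replacement events the estimation below uses.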
Model - V

Bellman equation:

    V_θ(x_t) = max_{i_t ∈ {0,1}} [ u(x_t, i_t, θ_1) + β EV_θ(x_t, i_t) ]

    EV_θ(x_t, i_t) = ∫_0^∞ V_θ(y) p(dy | x_t, i_t, θ_2)

The optimal policy is a cutoff rule:

    i_t = f(x_t, θ) = 1 if x_t > γ(θ_1, θ_2)
                    = 0 if x_t ≤ γ(θ_1, θ_2)

where γ(θ_1, θ_2), the investment cutoff ("optimal stopping barrier"), is given by the unique solution to

    (P̄ − P)(1 − β) = ∫_0^{γ(θ_1, θ_2)} [1 − β exp{−θ_2(1 − β)y}] ∂c(y, θ_1)/∂y dy
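To make the cutoff rule concrete, here is a value-iteration sketch for a no-shock version of the model on a discretized mileage grid. The linear cost form c(x) = θ_1·x (one of the specifications considered later), the parameter values, and the grid are illustrative assumptions:

```python
import numpy as np

def solve_cutoff(theta1, RC, beta, theta2, grid_max=400_000.0, n=200, tol=1e-6):
    """Value iteration for the deterministic-utility model on a mileage grid;
    returns V and the implied replacement cutoff (optimal stopping barrier)."""
    x = np.linspace(0.0, grid_max, n)
    # discretized exponential increment probabilities, truncated at grid_max
    inc = theta2 * np.exp(-theta2 * x)
    inc /= inc.sum()
    # transition matrix for "keep": move up by the increment, clipped at grid top
    P_keep = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            P_keep[i, min(i + j, n - 1)] += inc[j]
    c = theta1 * x                       # linear cost specification (assumed)
    V = np.zeros(n)
    for _ in range(20_000):
        v_keep = -c + beta * (P_keep @ V)
        v_replace = -RC - c[0] + beta * (inc @ V)   # odometer resets to zero
        V_new = np.maximum(v_keep, v_replace)
        if np.max(np.abs(V_new - V)) < tol:
            V = V_new
            break
        V = V_new
    replace = v_replace > v_keep
    cutoff = x[replace.argmax()] if replace.any() else np.inf
    return V, cutoff

# illustrative parameters: cost 5 cents/mile, replacement cost 5,000
V, cutoff = solve_cutoff(theta1=0.05, RC=5000.0, beta=0.99, theta2=1 / 5000)
```

The value function is decreasing in mileage and the replacement region is an upper interval, so the optimal policy reduces to the single cutoff γ computed above.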
Model - VI

Making the model stochastic:

    i_t = f(x_t, θ) + ε_t

where ε_t is known to the agent but unknown to the econometrician.
Model - VII

- C(x_t): the choice set, a finite set of allowable values of the control variable i_t when the state variable is x_t
- ε_t = {ε_t(i) : i ∈ C(x_t)}: a #C(x_t)-dimensional vector of state variables observed by the agent but not by the econometrician
- x_t = {x_t(1), ..., x_t(K)}: a K-dimensional vector of state variables observed by both the agent and the econometrician
- u(x_t, i_t, θ_1) + ε_t(i): realized single-period utility of decision i when the state variable is (x_t, ε_t); θ_1 is a vector of unknown parameters to be estimated
- p(x_{t+1}, ε_{t+1} | x_t, ε_t, i_t, θ_2, θ_3): Markov transition density for the state variable (x_t, ε_t) when alternative i_t is selected; θ_2 and θ_3 are vectors of unknown parameters to be estimated
- θ = (β, θ_1, θ_2, θ_3): the complete (1 + K_1 + K_2 + K_3)-dimensional vector of parameters to be estimated
Model - VIII

Infinite-horizon, discounted decision problem:

    V_θ(x_t, ε_t) = sup_Π E[ Σ_{j=t}^∞ β^{j−t} (u(x_j, f_j, θ_1) + ε_j(f_j)) | x_t, ε_t, θ_2, θ_3 ]

The optimal value function V_θ is the unique solution to

    V_θ(x_t, ε_t) = max_{i_t ∈ C(x_t)} [ u(x_t, i_t, θ_1) + ε_t(i_t) + β EV_θ(x_t, ε_t, i_t) ]

with the decision rule

    i_t = f(x_t, ε_t, θ) = arg max_{i_t ∈ C(x_t)} [ u(x_t, i_t, θ_1) + ε_t(i_t) + β EV_θ(x_t, ε_t, i_t) ]
Model - IX

Problems:
- ε_t appears nonlinearly, so we have to integrate over ε_t to obtain choice probabilities
- Dimensionality: a grid approach to estimating ε would be too large to be computationally tractable (especially in 1987)
Model - X

Conditional Independence Assumption:

    p(x_{t+1}, ε_{t+1} | x_t, ε_t, i_t, θ_2, θ_3) = q(ε_{t+1} | x_{t+1}, θ_2) p(x_{t+1} | x_t, i_t, θ_3)
Estimation-I

Three likelihood functions. Two partial likelihoods:

    l^1(x_1, ..., x_T, i_1, ..., i_T | x_0, i_0, θ) = Π_{t=1}^T p(x_t | x_{t−1}, i_{t−1}, θ_3)

    l^2(x_1, ..., x_T, i_1, ..., i_T | θ) = Π_{t=1}^T P(i_t | x_t, θ)

And the full likelihood function:

    l^f(x_1, ..., x_T, i_1, ..., i_T | x_0, i_0, θ) = Π_{t=1}^T P(i_t | x_t, θ) p(x_t | x_{t−1}, i_{t−1}, θ_3)

Estimation proceeds in three stages.
Estimation-II

Recall the value function, which is given by the functional equation

    EV_θ(x, i) = ∫_y log[ Σ_{j ∈ C(y)} exp(u(y, j, θ_1) + β EV_θ(y, j)) ] p(dy | x, i, θ_3)

The goal is to estimate θ using a nested fixed point algorithm:
- Inner loop: for each θ, compute EV_θ using a fixed point algorithm
- Outer loop: a hill-climbing algorithm searches for the value of θ that maximizes the likelihood function
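The inner fixed point can be sketched as follows, assuming a discretized state grid and hypothetical transition matrices (the integral above becomes a matrix product):

```python
import numpy as np

def solve_EV(u, P_keep, P_replace, beta, tol=1e-10, max_iter=10_000):
    """Inner loop of the nested fixed point algorithm: iterate
        EV(x, i) = sum_y P(y | x, i) * log( sum_j exp(u(y, j) + beta * EV(y, j)) )
    which is the extreme-value log-sum form above.
    u: (n, 2) flow utilities for i = 0 (keep), 1 (replace);
    P_keep, P_replace: (n, n) transition matrices for each choice."""
    EV = np.zeros_like(u)
    for _ in range(max_iter):
        v = u + beta * EV                        # choice-specific values
        m = v.max(axis=1, keepdims=True)         # stabilized log-sum-exp over j
        logsum = (m + np.log(np.exp(v - m).sum(axis=1, keepdims=True))).ravel()
        EV_new = np.column_stack((P_keep @ logsum, P_replace @ logsum))
        if np.max(np.abs(EV_new - EV)) < tol:
            return EV_new
        EV = EV_new
    return EV

# hypothetical small example: drift up one cell when keeping, reset when replacing
n = 50
x = np.linspace(0.0, 1.0, n)
P_keep = np.zeros((n, n))
P_keep[np.arange(n), np.minimum(np.arange(n) + 1, n - 1)] += 0.5
P_keep[np.arange(n), np.arange(n)] += 0.5
P_replace = np.zeros((n, n))
P_replace[:, 0] = 1.0
u = np.column_stack((-0.5 * x, np.full(n, -1.0)))  # keep cost rises with x
EV = solve_EV(u, P_keep, P_replace, beta=0.95)
```

Since the operator is a contraction with modulus β, successive approximation converges from any starting point; this is what makes the inner loop of the nested algorithm well defined.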
Estimation-III

First, estimate the parameters θ_3 of the transition probability

    p(x_{t+1} | x_t, i_t, θ_3) = g(x_{t+1} − x_t, θ_3)   if i_t = 0
                               = g(x_{t+1} − 0, θ_3)     if i_t = 1

using the partial likelihood

    l^1(x_1, ..., x_T, i_1, ..., i_T | x_0, i_0, θ) = Π_{t=1}^T p(x_t | x_{t−1}, i_{t−1}, θ_3)

g is a multinomial distribution on the set {0, 1, 2} corresponding to monthly mileage intervals [0, 5000), [5000, 10000), [10000, ∞), so

    θ_3j = Pr{x_{t+1} = x_t + j | x_t, i_t = 0},  j = 0, 1

(with θ_32 = 1 − θ_30 − θ_31 implied). This first stage doesn't require computation of EV_θ!
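Since g is multinomial, the first-stage MLE is just the sample frequencies of the discretized mileage jumps; a sketch (the input data format is a hypothetical assumption):

```python
import numpy as np

def estimate_theta3(jumps):
    """First-stage MLE of the multinomial mileage-transition parameters.
    jumps: discretized monthly mileage jumps j in {0, 1, 2}, one per
    bus-month, measured from last period's mileage after any reset.
    For a multinomial model the MLE is the vector of sample frequencies."""
    counts = np.bincount(np.asarray(jumps), minlength=3)
    return counts / counts.sum()   # (theta_30, theta_31, theta_32)

# toy sample of seven bus-months
theta3 = estimate_theta3([0, 1, 1, 2, 1, 0, 1])
```

Because this stage separates from the rest of the likelihood under conditional independence, θ_3 can be estimated once and held fixed in the nested fixed point search over (β, θ_1, RC).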
Estimation-IV / Estimation-V

[Two slides of tables/figures; content not transcribed.]
Estimation-VI

Given θ_3, and using l^2, we can obtain estimates of β, θ_1, and RC (where RC = P̄ − P):

    l^2(x_1, ..., x_T, i_1, ..., i_T | θ) = Π_{t=1}^T P(i_t | x_t, θ)

where

    P(i | x, θ) = exp{u(x, i, θ_1) + β EV_θ(x, i)} / Σ_{j ∈ C(x)} exp{u(x, j, θ_1) + β EV_θ(x, j)}

with

    EV_θ(x, i) = ∫_y log[ Σ_{j ∈ C(y)} exp(u(y, j, θ_1) + β EV_θ(y, j)) ] p(dy | x, i, θ_3)

and

    u(x_t, i, θ_1) + ε_t(i) = −RC − c(0, θ_1) + ε_t(1)   if i = 1
                            = −c(x_t, θ_1) + ε_t(0)      if i = 0
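Given EV_θ, the choice probabilities and the partial log-likelihood take the logit form above; a sketch on a discretized grid (the arrays and parameter values are illustrative, not estimates):

```python
import numpy as np

def choice_probs(u, EV, beta):
    """Conditional choice probabilities P(i | x, theta) in the logit form
    implied by extreme value shocks: exp(v_i) / sum_j exp(v_j),
    with choice-specific values v_i = u(x, i) + beta * EV(x, i)."""
    v = u + beta * EV
    v = v - v.max(axis=1, keepdims=True)    # numerical stabilization
    e = np.exp(v)
    return e / e.sum(axis=1, keepdims=True)

def partial_loglik(states, choices, probs):
    """Partial log-likelihood l^2 = sum_t log P(i_t | x_t, theta)."""
    return float(np.sum(np.log(probs[states, choices])))

# illustrative arrays: three mileage states, keep (col 0) vs replace (col 1)
u = np.array([[-1.0, -3.0], [-2.0, -3.0], [-5.0, -3.0]])
EV = np.zeros((3, 2))                        # placeholder continuation values
P = choice_probs(u, EV, beta=0.9)
ll = partial_loglik(np.array([0, 1, 2]), np.array([0, 0, 1]), P)
```

In the full algorithm, the outer hill-climbing step evaluates this log-likelihood at each candidate θ, with EV recomputed by the inner fixed point rather than set to a placeholder.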
Estimation-VII

- θ_1 are the parameters of the cost function
- Compare different (parsimonious) specifications
- Linear and square-root forms do well
Estimation-VIII

[Slide of results; content not transcribed.]
Estimation-IX

Next, consider a myopic replacement rule: replace only when operating costs c(x_t, θ_1) exceed the current cost of replacement RC + c(0, θ_1).

This model is rejected: β = 0.999 produces a statistically significantly better fit of the model to the data.
Estimation-X

[Slide of results; content not transcribed.]
Takeaways

So what? Why not just do a logit?
- Rust estimates Harold's problem at a micro level
- Changes in interest rates, costs, etc. can still be handled by the model
- The agent is not myopic
- A good example of indirectly inferring underlying parameters from agent behavior
More informationProbabilistic numerics for deep learning
Presenter: Shijia Wang Department of Engineering Science, University of Oxford rning (RLSS) Summer School, Montreal 2017 Outline 1 Introduction Probabilistic Numerics 2 Components Probabilistic modeling
More informationCupid s Invisible Hand
Alfred Galichon Bernard Salanié Chicago, June 2012 Matching involves trade-offs Marriage partners vary across several dimensions, their preferences over partners also vary and the analyst can only observe
More information1 Basic Analysis of Forward-Looking Decision Making
1 Basic Analysis of Forward-Looking Decision Making Individuals and families make the key decisions that determine the future of the economy. The decisions involve balancing current sacrifice against future
More informationEstimation of demand for differentiated durable goods
Estimation of demand for differentiated durable goods Juan Esteban Carranza University of Wisconsin-Madison September 1, 2006 Abstract This paper develops a methodology to estimate models of demand for
More informationLinear Classification: Probabilistic Generative Models
Linear Classification: Probabilistic Generative Models Sargur N. University at Buffalo, State University of New York USA 1 Linear Classification using Probabilistic Generative Models Topics 1. Overview
More informationSpatial Economics and Potential Games
Outline Spatial Economics and Potential Games Daisuke Oyama Graduate School of Economics, Hitotsubashi University Hitotsubashi Game Theory Workshop 2007 Session Potential Games March 4, 2007 Potential
More informationChapter 3 sections. SKIP: 3.10 Markov Chains. SKIP: pages Chapter 3 - continued
Chapter 3 sections Chapter 3 - continued 3.1 Random Variables and Discrete Distributions 3.2 Continuous Distributions 3.3 The Cumulative Distribution Function 3.4 Bivariate Distributions 3.5 Marginal Distributions
More informationMathematical Optimization Models and Applications
Mathematical Optimization Models and Applications Yinyu Ye Department of Management Science and Engineering Stanford University Stanford, CA 94305, U.S.A. http://www.stanford.edu/ yyye Chapters 1, 2.1-2,
More informationProbabilistic Graphical Models
School of Computer Science Probabilistic Graphical Models Variational Inference II: Mean Field Method and Variational Principle Junming Yin Lecture 15, March 7, 2012 X 1 X 1 X 1 X 1 X 2 X 3 X 2 X 2 X 3
More informationInfering the Number of State Clusters in Hidden Markov Model and its Extension
Infering the Number of State Clusters in Hidden Markov Model and its Extension Xugang Ye Department of Applied Mathematics and Statistics, Johns Hopkins University Elements of a Hidden Markov Model (HMM)
More informationChapter 2: Fundamentals of Statistics Lecture 15: Models and statistics
Chapter 2: Fundamentals of Statistics Lecture 15: Models and statistics Data from one or a series of random experiments are collected. Planning experiments and collecting data (not discussed here). Analysis:
More informationSoftware Reliability & Testing
Repairable systems Repairable system A reparable system is obtained by glueing individual non-repairable systems each around a single failure To describe this gluing process we need to review the concept
More information