Structural Econometrics: Dynamic Discrete Choice. Jean-Marc Robin

1. Plan

1. Dynamic discrete choice models
2. Application: college and career choice

2. Dynamic discrete choice models

See for example the presentation by Wolpin (AER, 1996).

At each (discrete) date $t$, an individual has to choose one action among $K$ possible actions. Let
$$d_k(t) = \begin{cases} 1 & \text{if } k \text{ is the chosen action,} \\ 0 & \text{otherwise.} \end{cases}$$
Let $d(t) = (d_1(t), \dots, d_K(t))$, or $d(t) = \sum_{k=1}^{K} k\, d_k(t)$, be the choice variable.

Let $S(t) \in \mathcal{S}$ be the state variable (i.e. the information available at the beginning of period $t$, when the action is chosen). Assume $S$ is discrete: $\mathcal{S} = \{s_1, \dots, s_N\}$ (in any case the computer will require a discrete state space).

Action $k$ yields payoff $R_k(S(t), t)$. The state transition probability matrix is
$$p_{ij}(k, t) = \Pr\{S(t+1) = s_j \mid S(t) = s_i,\; d_k(t) = 1\}.$$

3. Strategies

A strategy is a sequence of functions
$$D(\cdot, t) : \mathcal{S} \to \{0,1\}^K, \qquad s \mapsto D(s,t) = (D_1(s,t), \dots, D_K(s,t)).$$
Individuals seek the strategy $D$ that maximises the expected discounted sum of future payoffs:
$$V(S(t), t) = \max_{D(\cdot,\cdot)} \mathbb{E}\left[ \sum_{\tau=t}^{T} \sum_{k=1}^{K} \beta^{\tau-t}\, D_k(S(\tau), \tau)\, R_k(S(\tau), \tau) \,\middle|\, S(t) \right].$$

4. Bellman principle

Write, for $s \in \mathcal{S}$,
$$V(s, t) = \max\{V_1(s,t), \dots, V_K(s,t)\},$$
where $V_k(s,t)$ is the present value if action $k$ is chosen at $t$ when $S(t) = s$:
$$V_k(s, t) = R_k(s, t) + \beta\, \mathbb{E}\left[ V(S(t+1), t+1) \mid S(t) = s,\; d_k(t) = 1 \right],$$
and $V_k(s, T) = R_k(s, T)$.

The optimal strategy is $D_k(s,t) = 1$ iff $V_k(s,t) = \max\{V_1(s,t), \dots, V_K(s,t)\}$, and then
$$V(s, t) = \sum_{k=1}^{K} D_k(s,t)\, V_k(s,t).$$

5. Solution

Start from the terminal period $T$ and, for all $s \in \mathcal{S}$, determine the action that maximises the payoff $R_k(s, T)$:
$$D_k(s, T) = 1 \text{ iff } R_k(s, T) = \max\{R_1(s,T), \dots, R_K(s,T)\},$$
and
$$V(s, T) = \sum_{k=1}^{K} D_k(s,T)\, R_k(s,T).$$
Then determine $D(s,t)$ recursively: for all $s \in \mathcal{S}$,
$$D_k(s, t) = 1 \text{ iff } V_k(s,t) = \max\{V_1(s,t), \dots, V_K(s,t)\},$$
where, for all $i = 1, \dots, N$,
$$V_k(s_i, t) = R_k(s_i, t) + \beta\, \mathbb{E}[V(S(t+1), t+1) \mid S(t) = s_i,\; d_k(t) = 1] = R_k(s_i, t) + \beta \sum_{j=1}^{N} p_{ij}(k, t)\, V(s_j, t+1),$$
with $V(s_j, t+1) = \sum_{k=1}^{K} D_k(s_j, t+1)\, V_k(s_j, t+1)$.

Curse of dimensionality: computing $V_k(s,t)$ for all $k$, $s$, $t$ requires a huge number of operations and a large amount of memory.
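
As an illustration, here is a minimal backward-induction sketch in Python. All names, dimensions and the randomly generated primitives are placeholders invented for the example (none come from the slides); `beta` corresponds to $\beta$ above.

```python
import numpy as np

# Hypothetical primitives: K actions, N states, T periods (all made up).
rng = np.random.default_rng(0)
K, N, T = 3, 5, 10
beta = 0.95                                # discount factor
R = rng.normal(size=(K, N, T))             # R[k, i, t] = R_k(s_i, t)
p = rng.random(size=(K, N, N, T))          # p[k, i, j, t] = p_ij(k, t)
p /= p.sum(axis=2, keepdims=True)          # each transition row sums to one

V = np.zeros((N, T))                       # V[i, t] = V(s_i, t)
Vk = np.zeros((K, N, T))                   # Vk[k, i, t] = V_k(s_i, t)
D = np.zeros((N, T), dtype=int)            # D[i, t] = index of chosen action

# Terminal period: V_k(s, T) = R_k(s, T).
Vk[:, :, T - 1] = R[:, :, T - 1]
D[:, T - 1] = Vk[:, :, T - 1].argmax(axis=0)
V[:, T - 1] = Vk[:, :, T - 1].max(axis=0)

# Backward recursion: V_k(s_i, t) = R_k(s_i, t) + beta * sum_j p_ij(k, t) V(s_j, t+1).
for t in range(T - 2, -1, -1):
    Vk[:, :, t] = R[:, :, t] + beta * p[:, :, :, t] @ V[:, t + 1]
    D[:, t] = Vk[:, :, t].argmax(axis=0)   # D_k(s, t) = 1 iff V_k is maximal
    V[:, t] = Vk[:, :, t].max(axis=0)
```

Storing $V_k(s,t)$ for every $(k, s, t)$ makes the curse of dimensionality concrete: memory and time grow with $K \times N \times T$, and $N$ itself explodes with the number of state components.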

6. Estimation

Parameters: those of the payoff functions $R_k(s, t)$ and of the transition probabilities $p_{ij}(k, t)$.

Inference: maximum likelihood or (simulated) method of moments.

Data: individual sequences
$$y^h = \left( x^h(t_0^h), d^h(t_0^h), x^h(t_0^h+1), d^h(t_0^h+1), \dots, x^h(t_1^h), d^h(t_1^h) \right)$$
for individuals $h = 1, \dots, H$ and $t \in \{t_0^h, t_0^h+1, \dots, t_1^h\}$, where $x^h(t) \in \{x_1, \dots, x_I\}$ is the observed part of the state variables, i.e. $S^h(t) = (x^h(t), \varepsilon^h(t))$, with the following assumptions on the process of shocks $\varepsilon^h(t)$:

- $\varepsilon^h(t) = (\varepsilon_1^h(t), \dots, \varepsilon_K^h(t))$ iid;
- additive separability: $R_k(S^h(t), t) = R_k(x^h(t), t) + \varepsilon_k^h(t)$;
- conditional independence:
$$\Pr\{x^h(t+1), \varepsilon^h(t+1) \mid x^h(t), \varepsilon^h(t), d_k^h(t) = 1\} = \Pr\{\varepsilon^h(t+1)\} \cdot \underbrace{\Pr\{x^h(t+1) = x_j \mid x^h(t) = x_i,\; d_k^h(t) = 1\}}_{p_{ij}(k, t)}.$$

7. Likelihood

The conditional likelihood of $y^h$ given $x^h(t_0^h)$ is
$$\ell(y^h \mid x^h(t_0^h)) = \Pr\{d^h(t_0^h) \mid x^h(t_0^h)\} \cdot \Pr\{x^h(t_0^h+1) \mid x^h(t_0^h), d^h(t_0^h)\} \cdot \Pr\{d^h(t_0^h+1) \mid x^h(t_0^h+1)\} \cdot \Pr\{x^h(t_0^h+2) \mid x^h(t_0^h+1), d^h(t_0^h+1)\} \cdots \Pr\{d^h(t_1^h) \mid x^h(t_1^h)\},$$
where
$$\Pr\{d_k^h(t) = 1 \mid x^h(t)\} = \Pr\{\varepsilon^h(t) \text{ s.t. } D_k(x^h(t), \varepsilon^h(t), t) = 1 \mid x^h(t)\}.$$
The conditional likelihood of the sample is $\prod_{h=1}^{H} \ell(y^h \mid x^h(t_0^h))$.
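
To fix ideas, here is a sketch of how these factors assemble into one individual's log-likelihood, assuming the conditional choice probabilities and transition probabilities have already been computed; the function and argument names are illustrative only.

```python
import numpy as np

def individual_loglik(x, d, choice_prob, trans_prob):
    """Log-likelihood of one sequence (x(t0), d(t0), ..., x(t1), d(t1)).

    x, d        : arrays of observed state and chosen-action indices, t = t0..t1
    choice_prob : choice_prob[t][k, i] = Pr{d_k(t) = 1 | x(t) = x_i}
    trans_prob  : trans_prob[t][k, i, j] = Pr{x(t+1) = x_j | x(t) = x_i, d_k(t) = 1}
    """
    ll = 0.0
    for t in range(len(x)):
        ll += np.log(choice_prob[t][d[t], x[t]])
        if t + 1 < len(x):  # the last observed period has no observed transition
            ll += np.log(trans_prob[t][d[t], x[t], x[t + 1]])
    return ll
```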

8. Choice probabilities

$$\Pr\{d_k^h(t) = 1 \mid x^h(t) = x_i\} = \Pr\{\varepsilon_k^h(t) \ge \varepsilon_m^h(t) + V_m(x_i, t) - V_k(x_i, t), \;\forall m \ne k \mid x^h(t) = x_i\},$$
where
$$V_k(x_i, t) = R_k(x_i, t) + \beta \sum_{j=1}^{N} p_{ij}(k, t)\, V(x_j, t+1), \qquad p_{ij}(k, t) = \Pr\{x^h(t+1) = x_j \mid x^h(t) = x_i,\; d_k^h(t) = 1\},$$
$$V(x_j, t+1) = \mathbb{E}\left[ \max_{k=1,\dots,K} \left\{ V_k(x_j, t+1) + \varepsilon_k^h(t+1) \right\} \right].$$
For instance, for $(X_1, X_2)$ Gaussian with means $(m_1, m_2)$,
$$\mathbb{E}\max\{X_1, X_2\} = \mathbb{E} X_2 + \mathbb{E}\max\{X_1 - X_2, 0\} = m_2 + (m_1 - m_2)\, \Phi\!\left(\frac{m_1 - m_2}{\sigma}\right) + \sigma\, \varphi\!\left(\frac{m_1 - m_2}{\sigma}\right),$$
where $\sigma = \mathrm{Std}(X_1 - X_2) = \sqrt{\mathrm{Var}(X_1) + \mathrm{Var}(X_2) - 2\, \mathrm{Cov}(X_1, X_2)}$.
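
The closed-form expression is straightforward to code. A minimal sketch under the bivariate-Gaussian assumption above (the function name and arguments are illustrative):

```python
from math import sqrt
from scipy.stats import norm

def emax_two_gaussians(m1, m2, s1, s2, cov12=0.0):
    """E max{X1, X2} for (X1, X2) jointly Gaussian with means m1, m2,
    standard deviations s1, s2 and covariance cov12."""
    sigma = sqrt(s1**2 + s2**2 - 2.0 * cov12)   # Std(X1 - X2)
    z = (m1 - m2) / sigma
    return m2 + (m1 - m2) * norm.cdf(z) + sigma * norm.pdf(z)
```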

9. Two-stage estimation

One can proceed in two stages to save computing time, although at the cost of some efficiency loss.

1. Maximise the partial likelihood of state changes,
$$\prod_{h=1}^{H} \Pr\{x^h(t_0^h+1) \mid x^h(t_0^h), d^h(t_0^h)\} \cdot \Pr\{x^h(t_0^h+2) \mid x^h(t_0^h+1), d^h(t_0^h+1)\} \cdots \Pr\{x^h(t_1^h) \mid x^h(t_1^h-1), d^h(t_1^h-1)\},$$
with respect to the parameters of $\Pr\{x(t+1) \mid x(t), d(t), t\}$.

2. Maximise the likelihood of the sequence of decisions,
$$\prod_{h=1}^{H} \Pr\{d^h(t_0^h) \mid x^h(t_0^h)\} \cdots \Pr\{d^h(t_1^h) \mid x^h(t_1^h)\},$$
using the estimated $\Pr\{x(t+1) \mid x(t), d(t), t\}$ to compute the present-value functions needed to calculate the choice probabilities.

10. Unobserved heterogeneity

The two-stage estimation procedure does not work if there is unobserved heterogeneity.

Assume that $S^h(t) = (x^h(t), \varepsilon^h(t), \theta^h)$, where $\theta^h \in \{1, \dots, M\}$ indicates a particular way of grouping individuals. All individuals with the same $\theta^h$ share a specific value of the parameters governing payoff functions and state transition probabilities. Let $\Pr\{\theta^h = m\} = \pi_m$, $m \in \{1, \dots, M\}$.

The likelihood becomes
$$\prod_{h=1}^{H} \ell(y^h \mid x^h(t_0^h)) = \prod_{h=1}^{H} \left( \sum_{m=1}^{M} \pi_m\, \ell(y^h \mid x^h(t_0^h), m) \right),$$
where
$$\ell(y^h \mid x^h(t_0^h), \theta^h) = \Pr\{d^h(t_0^h) \mid x^h(t_0^h), \theta^h\} \cdot \Pr\{x^h(t_0^h+1) \mid x^h(t_0^h), \theta^h, d^h(t_0^h)\} \cdot \Pr\{d^h(t_0^h+1) \mid x^h(t_0^h+1), \theta^h\} \cdot \Pr\{x^h(t_0^h+2) \mid x^h(t_0^h+1), \theta^h, d^h(t_0^h+1)\} \cdots \Pr\{d^h(t_1^h) \mid x^h(t_1^h), \theta^h\}.$$

11. EM algorithm

Let $y = (y_1, \dots, y_H)$ be a vector of observations and $z = (z_1, \dots, z_H)$ unobserved covariates. The likelihood of $(y, z)$ is $f(y, z; \theta)$. Since $z$ is not observed, one estimates $\theta$ by maximising the integrated likelihood
$$f(y; \theta) = \int f(y, z; \theta)\, \mu(dz).$$
This integral may be difficult to compute, and its numerical approximation may make Newton-type optimisation algorithms unstable (numerical errors accumulate instead of averaging out). The EM algorithm is often preferable.

The EM algorithm iterates the following step until numerical convergence (which is generally slow):
$$\theta^{(p)} = \arg\max_\theta\; Q(\theta \mid \theta^{(p-1)}),$$
where
$$Q(\theta \mid \theta^{(p-1)}) = \mathbb{E}\left[ \ln f(y, z; \theta) \mid y;\, \theta^{(p-1)} \right] = \int p\{z \mid y;\, \theta^{(p-1)}\}\, \ln f(y, z; \theta)\, \mu(dz).$$
Each iteration increases the likelihood, and the algorithm converges to a local maximum of the likelihood.

12. EM algorithm: discrete mixtures

Assume $z_i \in \{1, \dots, M\}$ and $\pi_m = \Pr\{z_i = m\}$. Then $\theta = (\alpha, \pi)$, where $\alpha$ indexes $f(y_i \mid z_i; \alpha)$ and $\pi = (\pi_1, \dots, \pi_M)$. We have
$$f(y, z; \theta) = \prod_{i=1}^{H} f(y_i, z_i; \theta), \qquad f(y; \theta) = \prod_{i=1}^{H} \left[ \sum_{m=1}^{M} \pi_m\, f(y_i \mid z_i = m; \alpha) \right].$$

Step E (expectation): use Bayes' rule to compute the posterior probabilities
$$p\{z_i = m \mid y_i;\, \theta^{(p-1)}\} = \frac{\pi_m^{(p-1)}\, f(y_i \mid z_i = m;\, \alpha^{(p-1)})}{\sum_{m'=1}^{M} \pi_{m'}^{(p-1)}\, f(y_i \mid z_i = m';\, \alpha^{(p-1)})},$$
and then
$$Q(\theta \mid \theta^{(p-1)}) = \int p\{z \mid y;\, \theta^{(p-1)}\}\, \ln f(y, z; \theta)\, \mu(dz) = \sum_{i=1}^{H} \sum_{m=1}^{M} p\{z_i = m \mid y_i;\, \theta^{(p-1)}\}\, \ln\left[ \pi_m\, f(y_i \mid z_i = m; \alpha) \right].$$

13. Step M (maximisation)

Update $\alpha$ by constrained (weighted) ML,
$$\alpha^{(p)} = \arg\max_\alpha \sum_{i=1}^{H} \sum_{m=1}^{M} p\{z_i = m \mid y_i;\, \theta^{(p-1)}\}\, \ln f(y_i \mid z_i = m; \alpha)$$
(i.e. duplicate each individual observation $M$ times and attach to copy $m$ a weight equal to the posterior probability $p\{z_i = m \mid y_i;\, \theta^{(p-1)}\}$), and update $\pi$ as
$$\pi_m^{(p)} = \frac{1}{H} \sum_{i=1}^{H} p\{z_i = m \mid y_i;\, \theta^{(p-1)}\}.$$
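
A compact sketch of the full E and M steps for a discrete mixture, taking univariate Gaussian components $f(y_i \mid z_i = m; \alpha) = \mathcal{N}(\mu_m, \sigma_m^2)$ purely as a placeholder family; the Gaussian choice and every name here are assumptions for the example, not from the slides.

```python
import numpy as np
from scipy.stats import norm

def em_gaussian_mixture(y, M, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    H = len(y)
    pi = np.full(M, 1.0 / M)                   # mixing proportions pi_m
    mu = rng.choice(y, size=M, replace=False)  # component means
    sig = np.full(M, y.std())                  # component std deviations
    for _ in range(n_iter):
        # E step: posterior probabilities p{z_i = m | y_i; theta^(p-1)}.
        dens = norm.pdf(y[:, None], mu[None, :], sig[None, :])  # (H, M)
        post = pi * dens
        post /= post.sum(axis=1, keepdims=True)
        # M step: weighted ML update of (mu, sig); pi_m = average posterior.
        w = post.sum(axis=0)
        mu = (post * y[:, None]).sum(axis=0) / w
        sig = np.sqrt((post * (y[:, None] - mu) ** 2).sum(axis=0) / w)
        pi = w / H
    return pi, mu, sig
```

Each iteration is exactly the two steps of the slides: posteriors from Bayes' rule, then weighted maximisation with the posteriors as observation weights.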

14. Application: education and career choice

See the presentation by Keane and Wolpin (JPE, 1997): a model of education and career choices.

Data: an 11-year panel (National Longitudinal Survey of Youth): a cohort of youths aged 16 in 1979 and followed until 1989.

Objective: evaluate policy effects such as education subsidies.

The population studied is a cohort of individuals starting at age 16 and retiring at 65.

Choices: blue-collar work ($k = 1$), white-collar work ($k = 2$), military ($k = 3$), education ($k = 4$) or inactivity ($k = 5$).

15. Model

The payoffs associated with choices $k = 1, 2, 3$ are the corresponding wages, the logs of which are
$$\ln R_k(t) = e_k(16) + e_{k1}\, EDUC(t) + e_{k2}\, EXP_k(t) - e_{k3}\, [EXP_k(t)]^2 + \varepsilon_k(t),$$
where $e_k(16)$ is the intercept (initial condition), $EDUC(t)$ is the number of years of education, and $EXP_k(t)$ is occupation-$k$-specific experience (the number of years spent working in occupation $k$, with $EXP_k(16) = 0$).

Education's instantaneous payoff (or cost, if negative):
$$R_4(t) = e_4(16) - c_1\, \underbrace{\mathbf{1}[EDUC(t) \ge 12]}_{\text{HS graduate}} - c_2\, \underbrace{\mathbf{1}[EDUC(t) \ge 16]}_{\text{college graduate}} + \varepsilon_4(t).$$

Leisure utility:
$$R_5(t) = e_5(16) + \varepsilon_5(t).$$

State variable: $S(t) = (e(16), EDUC(t), EXP(t), \varepsilon(t))$, with
$$e(16) = (e_1(16), \dots, e_5(16)), \qquad EXP(t) = (EXP_1(t), EXP_2(t), EXP_3(t)), \qquad \varepsilon(t) = (\varepsilon_1(t), \dots, \varepsilon_5(t)).$$
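
A sketch of these per-period payoffs in code; all coefficient names and values are placeholders, nothing is calibrated to the paper.

```python
import numpy as np

def payoffs(e16, educ, exp_, eps, e1, e2, e3, c1, c2):
    """Per-period payoffs R_1..R_5 given the state.

    e16        : intercepts e_k(16), length 5
    educ       : years of education EDUC(t)
    exp_       : occupation-specific experience (EXP_1, EXP_2, EXP_3)
    eps        : shocks (eps_1, ..., eps_5)
    e1, e2, e3 : wage-equation coefficients for k = 1, 2, 3
    c1, c2     : schooling costs past grades 12 and 16
    """
    R = np.empty(5)
    for k in range(3):  # wage sectors: blue collar, white collar, military
        R[k] = np.exp(e16[k] + e1[k] * educ + e2[k] * exp_[k]
                      - e3[k] * exp_[k] ** 2 + eps[k])
    R[3] = e16[3] - c1 * (educ >= 12) - c2 * (educ >= 16) + eps[3]  # school
    R[4] = e16[4] + eps[4]                                          # leisure
    return R
```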

16. Model (continued)

Heterogeneity:
- four groups, $m = 1, 2, 3, 4$;
- $e(16) = (e_1(16), \dots, e_5(16))$ is group-specific;
- as $EDUC(16) = 9$ or $10$, assume different proportions of each type given $EDUC(16)$: $\Pr\{\theta = m \mid EDUC(16)\} \equiv \pi_{m, EDUC(16)}$.

State probabilities:
- $\varepsilon(t) = (\varepsilon_1(t), \dots, \varepsilon_5(t))$ iid $\mathcal{N}(0, \Sigma)$, with $\mathrm{Cov}(\varepsilon_k(t), \varepsilon_\ell(t)) = 0$ for $\ell \ge 4$ or $k \ge 4$ (i.e. only $\varepsilon_1(t), \varepsilon_2(t), \varepsilon_3(t)$, corresponding to employment spells, are correlated);
- education: $EDUC(t+1) = EDUC(t) + d_4(t)$;
- experience: $EXP_k(t+1) = EXP_k(t) + d_k(t)$.

Value functions:
$$V_k(S(t), t) = R_k(t) + \beta\, \mathbb{E}[V(S(t+1), t+1) \mid d_k(t) = 1],$$
where $\varepsilon(t+1)$ is the only risk factor (i.e. not predetermined) in $V(S(t+1), t+1)$ given $d(t)$.

17. Value functions

$$V_k(S(t), t) = R_k(t) + \beta\, \mathbb{E}[V(S(t+1), t+1) \mid d_k(t) = 1],$$
where $\varepsilon(t+1)$ is the only risk factor (not predetermined) in $V(S(t+1), t+1)$ given $d(t)$, as:

For $k = 1, 2, 3$:
$$EXP_\ell(t+1) = EXP_\ell(t) + \mathbf{1}(\ell = k), \;\ell = 1, 2, 3; \qquad EDUC(t+1) = EDUC(t).$$
For $k = 4$:
$$EXP_\ell(t+1) = EXP_\ell(t), \;\ell = 1, 2, 3; \qquad EDUC(t+1) = EDUC(t) + 1.$$
For $k = 5$:
$$EXP_\ell(t+1) = EXP_\ell(t), \;\ell = 1, 2, 3; \qquad EDUC(t+1) = EDUC(t).$$

18. Likelihood

Individual observations: $y^h(t) = (d^h(t), w^h(t))$, $t = 16, \dots, 26$, where $d^h(t) = (d_1^h(t), \dots, d_5^h(t))$ is the occupation choice and $w^h(t) = \sum_{k=1}^{3} d_k^h(t)\, R_k^h(t)$ is the current wage (missing if not working).

Sample likelihood:
$$L = \prod_{h=1}^{H} \left[ \sum_{m=1}^{4} \pi_{m, EDUC^h(16)}\; \ell^h\!\left( y^h(16), \dots, y^h(26) \mid e_m(16), EDUC^h(16) \right) \right].$$

Likelihood for individual $h$:
$$\ell^h\!\left( y^h(16), \dots, y^h(26) \mid e^h(16), EDUC^h(16) \right) = \prod_{t=16}^{26} \ell^h\!\left( y^h(t) \mid e^h(16), EDUC^h(t), EXP^h(t) \right).$$
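
A one-function sketch of how the sample likelihood mixes over the four unobserved types; `pi` and `lik_by_type` are assumed precomputed, and all names are illustrative.

```python
import numpy as np

def sample_loglik(pi, lik_by_type):
    """Log-likelihood mixing over the M unobserved types.

    pi[h, m]          : Pr{type = m | EDUC^h(16)}, i.e. pi_{m, EDUC^h(16)}
    lik_by_type[h, m] : l^h(y^h(16),...,y^h(26) | e_m(16), EDUC^h(16))
    """
    return np.log((pi * lik_by_type).sum(axis=1)).sum()
```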

19. Likelihood (continued)

The likelihood for individual $h$ at time $t$, $\ell^h(y^h(t) \mid e^h(16), EDUC^h(t), EXP^h(t))$, is computed as follows (we omit the conditioning to simplify the notation). The calculation differs from the general setup studied above because the wage information tells us about the shocks $\varepsilon_k^h(t)$.

Case $d^h(t) = k \in \{1, 2, 3\}$: one then knows that $w^h(t) = R_k^h(t)$ and $V_k(S(t), t) \ge V_\ell(S(t), t)$, $\ell \ne k$:
$$\ell^h(y^h(t)) = \Pr\{V_k(S^h(t), t) \ge V_\ell(S^h(t), t), \;\forall \ell \ne k \mid R_k^h(t) = w^h(t)\} \times \mathrm{pdf}\{R_k^h(t) = w^h(t)\},$$
where conditioning on $R_k^h(t) = w^h(t)$ determines $\varepsilon_k^h(t)$, and $\mathrm{pdf}\{R_k^h(t) = w^h(t)\}$ denotes the density of $R_k^h(t)$ at the observed wage $w^h(t)$.

Other cases: one only knows that $V_k(S^h(t), t) \ge V_\ell(S^h(t), t)$, $\ell \ne k$:
$$\ell^h(y^h(t)) = \Pr\{V_k(S^h(t), t) \ge V_\ell(S^h(t), t), \;\forall \ell \ne k\}.$$

20. Likelihood (continued)

Given $e^h(16)$, $EDUC^h(t)$, $EXP^h(t)$,
$$\mathrm{pdf}\{R_k^h(t) = w^h(t)\} = \frac{1}{\sigma_k\, w^h(t)}\; \varphi\!\left( \frac{\ln w^h(t) - e_k(16) - e_{k1}\, EDUC(t) - e_{k2}\, EXP_k(t) + e_{k3}\, [EXP_k(t)]^2}{\sigma_k} \right),$$
where $\sigma_k^2 = \mathrm{Var}(\varepsilon_k(t))$ and the argument of $\varphi$ equals $\varepsilon_k^h(t)/\sigma_k$. Moreover,
$$\Pr\{V_k(S^h(t), t) \ge V_\ell(S^h(t), t), \;\forall \ell \ne k \mid \varepsilon_k^h(t)\} = \Pr\{\varepsilon_\ell^h(t) \le g_\ell(t), \;\forall \ell \ne k \mid \varepsilon_k^h(t)\},$$
where
$$g_\ell(t) = \ln\left[ V_k(S^h(t), t) - \beta\, \mathbb{E}[V(S^h(t+1), t+1) \mid d_\ell(t) = 1] \right] - e_\ell(16) - e_{\ell 1}\, EDUC(t) - e_{\ell 2}\, EXP_\ell(t) + e_{\ell 3}\, [EXP_\ell(t)]^2, \quad \ell = 1, 2, 3;$$
$$g_4(t) = V_k(S^h(t), t) - \beta\, \mathbb{E}[V(S^h(t+1), t+1) \mid d_4(t) = 1] - e_4(16) + c_1\, \mathbf{1}[EDUC(t) \ge 12] + c_2\, \mathbf{1}[EDUC(t) \ge 16];$$
$$g_5(t) = V_k(S^h(t), t) - \beta\, \mathbb{E}[V(S^h(t+1), t+1) \mid d_5(t) = 1] - e_5(16).$$
One has to compute the cdf of a vector of four normal random variables. (The computation is simplified by the fact that $\varepsilon_4^h(t)$ and $\varepsilon_5^h(t)$ are assumed independent, and independent of $\varepsilon_1^h(t)$, $\varepsilon_2^h(t)$ and $\varepsilon_3^h(t)$.)
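
A sketch of the wage-density factor, using the lognormal wage structure above; the function and argument names are illustrative.

```python
import numpy as np
from scipy.stats import norm

def wage_density(w, k, e16, educ, exp_, e1, e2, e3, sigma):
    """Density of R_k(t) at the observed wage w, for k in {0, 1, 2}.

    The log wage is Gaussian, so the density of the wage itself is
    phi(eps_k / sigma_k) / (sigma_k * w), with eps_k recovered from ln w.
    """
    eps_k = (np.log(w) - e16[k] - e1[k] * educ
             - e2[k] * exp_[k] + e3[k] * exp_[k] ** 2)
    return norm.pdf(eps_k / sigma[k]) / (sigma[k] * w)
```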

21. Numerical integration

Lastly, $\Pr\{V_k(S^h(t), t) \ge V_\ell(S^h(t), t), \;\forall \ell \ne k\}$ can be computed by numerical integration of $\Pr\{V_k(S^h(t), t) \ge V_\ell(S^h(t), t), \;\forall \ell \ne k \mid \varepsilon_k^h(t)\}$ with respect to $\varepsilon_k^h(t)$.

22. Results

See the article. The fit is excellent. They find a very limited effect of college tuition subsidies (an exogenous change in $c_2$).
