Lecture 4: Dynamic models


Slide 1: Lecture 4: Dynamic models. Hedibert Freitas Lopes, The University of Chicago Booth School of Business, 5807 South Woodlawn Avenue, Chicago, IL. hlopes@chicagobooth.edu

Slide 2: Outline. 1. 1st order DLM; 2. Dynamic linear models; 3. …; 4. Stochastic volatility (SV).

Slide 3: Dynamic models (DMs).

Slide 4: 1st order DLM. The local level model (West and Harrison, 1997) has
Observation equation: y_{t+1} | x_{t+1}, θ ~ N(x_{t+1}, σ²)
System equation: x_{t+1} | x_t, θ ~ N(x_t, τ²)
where x_0 ~ N(m_0, C_0) and θ = (σ², τ²) is fixed (for now).

Slide 5: It is worth noticing that the model can be rewritten as
y | x, θ ~ N(x, σ² I_n)
x | x_0, θ ~ N(x_0 1_n, τ² Ω)
x_0 ~ N(m_0, C_0)
where Ω is the n × n matrix with (i, j) element min(i, j), i.e.
Ω = [ 1 1 1 … 1 ; 1 2 2 … 2 ; 1 2 3 … 3 ; … ; 1 2 3 … n ].

Slide 6: Therefore, the prior of x given θ is
x | θ ~ N(m_0 1_n, C_0 1_n 1_n' + τ² Ω),
while its full conditional posterior distribution is
x | y, θ ~ N(m_1, C_1)
where
C_1^{-1} = (C_0 1_n 1_n' + τ² Ω)^{-1} + σ^{-2} I_n
and
C_1^{-1} m_1 = (C_0 1_n 1_n' + τ² Ω)^{-1} m_0 1_n + σ^{-2} y.

Slide 7: Let y^t = (y_1, ..., y_t). The joint posterior of x given y^n (omitting θ for now) can be constructed as
p(x | y^n) = p(x_n | y^n) ∏_{t=1}^{n-1} p(x_t | y^n, x_{t+1}),
which is obtained from p(x_n | y^n) and by noticing that, given y^t and x_{t+1}, x_t and x_{t+h} are independent for all integer h > 1, and x_t and y_{t+h} are independent for all integer h ≥ 1.

Slide 8: 1st order DLM. Therefore, we first need to derive the above joint distribution, and this is done forward via the well-known Kalman recursions:
p(x_t | y^t) → p(x_{t+1} | y^t) → p(y_{t+1} | y^t) → p(x_{t+1} | y^{t+1}).
Posterior at t: (x_t | y^t) ~ N(m_t, C_t)
Prior at t+1: (x_{t+1} | y^t) ~ N(m_t, R_{t+1}), with R_{t+1} = C_t + τ²
Marginal likelihood: (y_{t+1} | y^t) ~ N(m_t, Q_{t+1}), with Q_{t+1} = R_{t+1} + σ²
Posterior at t+1: (x_{t+1} | y^{t+1}) ~ N(m_{t+1}, C_{t+1}), with
m_{t+1} = (1 − A_{t+1}) m_t + A_{t+1} y_{t+1} and C_{t+1} = A_{t+1} σ²,
where A_{t+1} = R_{t+1}/Q_{t+1}.
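
A minimal Python sketch of these filtering recursions for the local level model; the data-generating snippet and all variable names are illustrative assumptions, not code from the lecture.

```python
import numpy as np

def kalman_filter_local_level(y, m0, C0, sig2, tau2):
    """Return filtered means m_t and variances C_t for the 1st order DLM."""
    n = len(y)
    m, C = np.empty(n), np.empty(n)
    m_prev, C_prev = m0, C0
    for t in range(n):
        R = C_prev + tau2          # prior variance: R_t = C_{t-1} + tau^2
        Q = R + sig2               # predictive variance: Q_t = R_t + sig^2
        A = R / Q                  # Kalman gain A_t = R_t / Q_t
        m[t] = (1 - A) * m_prev + A * y[t]   # posterior mean
        C[t] = A * sig2                      # posterior variance
        m_prev, C_prev = m[t], C[t]
    return m, C

# toy usage on simulated local-level data
rng = np.random.default_rng(1)
x = np.cumsum(rng.normal(0, np.sqrt(0.5), 100))   # random-walk states
y = x + rng.normal(0, 1.0, 100)
m, C = kalman_filter_local_level(y, m0=0.0, C0=10.0, sig2=1.0, tau2=0.5)
```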

Slide 9: For t = n, x_n | y^n ~ N(m_n^n, C_n^n), where m_n^n = m_n and C_n^n = C_n. For t < n,
x_t | y^n ~ N(m_t^n, C_t^n) and x_t | x_{t+1}, y^n ~ N(a_t^n, R_t^n),
where
m_t^n = (1 − B_t) m_t + B_t m_{t+1}^n
C_t^n = (1 − B_t) C_t + B_t² C_{t+1}^n
a_t^n = (1 − B_t) m_t + B_t x_{t+1}
R_t^n = B_t τ²
and B_t = C_t/(C_t + τ²).
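
A matching sketch of the backward smoothing recursions above; it assumes the filtered moments m_t, C_t come from the Kalman filter sketch of the previous slide (names are illustrative).

```python
import numpy as np

def smoother_local_level(m, C, tau2):
    """Return smoothed means m_t^n and variances C_t^n for the 1st order DLM."""
    n = len(m)
    ms, Cs = m.copy(), C.copy()          # t = n: m_n^n = m_n, C_n^n = C_n
    for t in range(n - 2, -1, -1):
        B = C[t] / (C[t] + tau2)
        ms[t] = (1 - B) * m[t] + B * ms[t + 1]
        Cs[t] = (1 - B) * C[t] + B**2 * Cs[t + 1]
    return ms, Cs
```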

Slide 10: Simulated data with n = 100, σ² = 1.0 and τ² = 0.5. [Figure: simulated series y(t) and latent states x(t) over time.]

Slide 11: Filtered distributions p(x_t | y^t) via the Kalman filter, with m_0 = 0.0 and C_0 = 10.0, given τ² and σ². [Figure: filtered means and intervals over time.]

Slide 12: Smoothed distributions p(x_t | y^n) via the Kalman smoother, with m_0 = 0.0 and C_0 = 10.0, given τ² and σ². [Figure: smoothed means and intervals over time.]

Slide 13: We showed earlier that (y_t | y^{t-1}) ~ N(m_{t-1}, Q_t), where both m_{t-1} and Q_t were presented before and are functions of θ = (σ², τ²), y^{t-1}, m_0 and C_0. Therefore, by Bayes' rule,
p(θ | y^n) ∝ p(θ) p(y^n | θ) = p(θ) ∏_{t=1}^n f_N(y_t; m_{t-1}, Q_t).
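
The product of one-step-ahead predictive densities can be evaluated with the same forward recursions; a sketch, with scipy assumed available and all names illustrative.

```python
import numpy as np
from scipy.stats import norm

def log_integrated_likelihood(y, m0, C0, sig2, tau2):
    """log p(y^n | sig2, tau2) for the local level model."""
    loglik, m, C = 0.0, m0, C0
    for yt in y:
        R = C + tau2
        Q = R + sig2
        loglik += norm.logpdf(yt, loc=m, scale=np.sqrt(Q))  # f_N(y_t; m_{t-1}, Q_t)
        A = R / Q
        m = (1 - A) * m + A * yt
        C = A * sig2
    return loglik
```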

Slide 14: Prior specification for p(y | σ², τ²) p(σ²) p(τ²):
σ² ~ IG(ν_0/2, ν_0 σ_0²/2), where ν_0 = 5 and σ_0² = 1;
τ² ~ IG(n_0/2, n_0 τ_0²/2), where n_0 = 5 and τ_0² = 0.5.
[Figure: prior densities of σ² and τ².]

Slide 15: Sample θ from p(θ | y^n, x^n), where
p(θ | y^n, x^n) ∝ p(θ) ∏_{t=1}^n p(y_t | x_t, θ) p(x_t | x_{t-1}, θ).
Sample x^n from p(x^n | y^n, θ), where
p(x^n | y^n, θ) = ∏_{t=1}^n f_N(x_t; a_t^n, R_t^n).

Slide 16: [Figure: p(x_t | y^n) over time; posterior densities of σ² and τ².]

Slide 17: Comparison of p(x_t | y^n) versus p(x_t | y^n, σ² = 0.87, τ² = 0.63). [Figure: smoothed means over time under both approaches.]

Slide 18: Moving beyond the 1st order DLM: learning in non-normal and nonlinear dynamic models, where p(y_{t+1} | x_{t+1}) and p(x_{t+1} | x_t) are general densities. Filtering is in general rather difficult, since
p(x_{t+1} | y^t) = ∫ p(x_{t+1} | x_t) p(x_t | y^t) dx_t
p(x_{t+1} | y^{t+1}) ∝ p(y_{t+1} | x_{t+1}) p(x_{t+1} | y^t)
are usually unavailable in closed form. Over the last 20 years: FFBS for conditionally Gaussian DLMs; Gamerman (1998) for generalized DLMs; Carlin, Polson and Stoffer (1992) for more general DMs.

Slide 19: Dynamic linear models. A large class of models with time-varying parameters. Dynamic linear models are defined by a pair of equations, the observation equation and the evolution/system equation:
y_t = F_t' β_t + ε_t, ε_t ~ N(0, V)
β_t = G_t β_{t-1} + ω_t, ω_t ~ N(0, W)
where y_t: sequence of observations; F_t: vector of explanatory variables; β_t: d-dimensional state vector; G_t: d × d evolution matrix; and β_1 ~ N(a, R).

Slide 20: The linear growth model is slightly more elaborate, incorporating an extra time-varying parameter β_2 representing the growth of the level of the series:
y_t = β_{1,t} + ε_t, ε_t ~ N(0, V)
β_{1,t} = β_{1,t-1} + β_{2,t} + ω_{1,t}
β_{2,t} = β_{2,t-1} + ω_{2,t}
where ω_t = (ω_{1,t}, ω_{2,t})' ~ N(0, W), F_t = (1, 0)' and
G_t = [ 1 1 ; 0 1 ].

Slide 21: Prior, updated and smoothed distributions:
Prior distributions: p(β_t | y^{t-k}), k > 0
Updated/online distributions: p(β_t | y^t)
Smoothed distributions: p(β_t | y^{t+k}), k > 0.

Slide 22: Let y^t = {y_1, ..., y_t}.
Posterior at time t−1: β_{t-1} | y^{t-1} ~ N(m_{t-1}, C_{t-1}).
Prior at time t: β_t | y^{t-1} ~ N(a_t, R_t), with a_t = G_t m_{t-1} and R_t = G_t C_{t-1} G_t' + W.
Predictive at time t: y_t | y^{t-1} ~ N(f_t, Q_t), with f_t = F_t' a_t and Q_t = F_t' R_t F_t + V.

Slide 23: Posterior at time t:
p(β_t | y^t) = p(β_t | y_t, y^{t-1}) ∝ p(y_t | β_t) p(β_t | y^{t-1}).
The resulting posterior distribution is β_t | y^t ~ N(m_t, C_t), with
m_t = a_t + A_t e_t
C_t = R_t − A_t A_t' Q_t
A_t = R_t F_t / Q_t
e_t = y_t − f_t.
By induction, these distributions are valid for all times.

Slide 24: In dynamic models, the smoothed distribution π(β | y^n) is more commonly used:
π(β | y^n) = p(β_n | y^n) ∏_{t=1}^{n-1} p(β_t | β_{t+1}, ..., β_n, y^n) = p(β_n | y^n) ∏_{t=1}^{n-1} p(β_t | β_{t+1}, y^t).
Integrating with respect to (β_1, ..., β_{t-1}):
π(β_t, ..., β_n | y^n) = p(β_n | y^n) ∏_{k=t}^{n-1} p(β_k | β_{k+1}, y^k)
π(β_t, β_{t+1} | y^n) = p(β_{t+1} | y^n) p(β_t | β_{t+1}, y^t), for t = 1, ..., n−1.

Slide 25: p(β_t | y^n). It can be shown that
β_t | V, W, y^n ~ N(m_t^n, C_t^n),
where
m_t^n = m_t + C_t G_{t+1}' R_{t+1}^{-1} (m_{t+1}^n − a_{t+1})
C_t^n = C_t − C_t G_{t+1}' R_{t+1}^{-1} (R_{t+1} − C_{t+1}^n) R_{t+1}^{-1} G_{t+1} C_t.

Slide 26: p(β | y^n). It can be shown that (β_t | β_{t+1}, V, W, y^n) is normally distributed with mean
(G_{t+1}' W^{-1} G_{t+1} + C_t^{-1})^{-1} (G_{t+1}' W^{-1} β_{t+1} + C_t^{-1} m_t)
and variance
(G_{t+1}' W^{-1} G_{t+1} + C_t^{-1})^{-1}.

Slide 27: Forward filtering, backward sampling (FFBS). Sampling from π(β | y^n) can be performed by sampling β_n from N(m_n, C_n) and then sampling β_t from (β_t | β_{t+1}, V, W, y^t), for t = n−1, ..., 1. The above scheme is known as the forward filtering, backward sampling (FFBS) algorithm (Carter and Kohn, 1994; Frühwirth-Schnatter, 1994).
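
A sketch of FFBS for a DLM with constant F, G, V and W (time-varying versions are analogous); the function name is an assumption, and the backward conditional is written in the equivalent form using R_{t+1} rather than the precision form of the previous slide.

```python
import numpy as np

def ffbs(y, F, G, V, W, a1, R1, rng):
    """One joint draw of beta_{1:n} for y_t = F'beta_t + eps_t, beta_t = G beta_{t-1} + w_t."""
    n, d = len(y), len(a1)
    m, C = np.zeros((n, d)), np.zeros((n, d, d))
    a, R = np.zeros((n, d)), np.zeros((n, d, d))
    # forward filtering
    for t in range(n):
        a[t] = a1 if t == 0 else G @ m[t - 1]
        R[t] = R1 if t == 0 else G @ C[t - 1] @ G.T + W
        f = F @ a[t]                       # f_t = F'a_t
        Q = F @ R[t] @ F + V               # Q_t = F'R_t F + V
        A = R[t] @ F / Q                   # A_t = R_t F / Q_t
        m[t] = a[t] + A * (y[t] - f)
        C[t] = R[t] - np.outer(A, A) * Q
    # backward sampling
    beta = np.zeros((n, d))
    beta[n - 1] = rng.multivariate_normal(m[n - 1], C[n - 1])
    for t in range(n - 2, -1, -1):
        Rinv = np.linalg.inv(R[t + 1])
        mean = m[t] + C[t] @ G.T @ Rinv @ (beta[t + 1] - a[t + 1])
        cov = C[t] - C[t] @ G.T @ Rinv @ G @ C[t]
        beta[t] = rng.multivariate_normal(mean, cov)
    return beta

# toy usage with the linear growth model of slide 20
rng = np.random.default_rng(0)
F = np.array([1.0, 0.0]); G = np.array([[1.0, 1.0], [0.0, 1.0]])
y = np.cumsum(rng.normal(0.1, 1.0, 200))
draw = ffbs(y, F, G, V=1.0, W=0.01 * np.eye(2), a1=np.zeros(2), R1=10 * np.eye(2), rng=rng)
```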

Slide 28: Sampling from π(β_t | β_{-t}, y^n). Let β_{-t} = (β_1, ..., β_{t-1}, β_{t+1}, ..., β_n). For t = 2, ..., n−1,
π(β_t | β_{-t}, y^n) ∝ p(y_t | β_t) p(β_{t+1} | β_t) p(β_t | β_{t-1})
∝ f_N(y_t; F_t' β_t, V) f_N(β_{t+1}; G_{t+1} β_t, W) f_N(β_t; G_t β_{t-1}, W)
= f_N(β_t; b_t, B_t)
where
b_t = B_t (σ^{-2} F_t y_t + G_{t+1}' W^{-1} β_{t+1} + W^{-1} G_t β_{t-1})
B_t = (σ^{-2} F_t F_t' + G_{t+1}' W^{-1} G_{t+1} + W^{-1})^{-1}
for t = 2, ..., n−1.

Slide 29: For t = 1 and t = n,
π(β_1 | β_{-1}, y^n) = f_N(β_1; b_1, B_1) and π(β_n | β_{-n}, y^n) = f_N(β_n; b_n, B_n),
where
b_1 = B_1 (σ_1^{-2} F_1 y_1 + G_2' W^{-1} β_2 + R^{-1} a)
B_1 = (σ_1^{-2} F_1 F_1' + G_2' W^{-1} G_2 + R^{-1})^{-1}
b_n = B_n (σ_n^{-2} F_n y_n + W^{-1} G_n β_{n-1})
B_n = (σ_n^{-2} F_n F_n' + W^{-1})^{-1}.

Slide 30: Sampling from π(V, W | y^n, β). Assume that
φ = V^{-1} ~ Gamma(n_σ/2, n_σ S_σ/2)
Φ = W^{-1} ~ Wishart(n_W/2, n_W S_W/2).
Full conditionals:
π(φ | β, Φ) ∝ ∏_{t=1}^n f_N(y_t; F_t' β_t, φ^{-1}) f_G(φ; n_σ/2, n_σ S_σ/2) ∝ f_G(φ; n_σ*/2, n_σ* S_σ*/2)
π(Φ | β, φ) ∝ ∏_{t=2}^n f_N(β_t; G_t β_{t-1}, Φ^{-1}) f_W(Φ; n_W/2, n_W S_W/2) ∝ f_W(Φ; n_W*/2, n_W* S_W*/2)
where
n_σ* = n_σ + n, n_W* = n_W + n − 1,
n_σ* S_σ* = n_σ S_σ + Σ_{t=1}^n (y_t − F_t' β_t)²
n_W* S_W* = n_W S_W + Σ_{t=2}^n (β_t − G_t β_{t-1})(β_t − G_t β_{t-1})'.

Slide 31: To sample from p(β, V, W | y^n): sample V^{-1} from its full conditional f_G(φ; n_σ*/2, n_σ* S_σ*/2); sample W^{-1} from its full conditional f_W(Φ; n_W*/2, n_W* S_W*/2); sample β from its full conditional π(β | y^n, V, W) by the FFBS algorithm.

Slide 32: Likelihood for (V, W). It is easy to see that
p(y^n | V, W) = ∏_{t=1}^n f_N(y_t; f_t, Q_t),
which is the integrated likelihood of (V, W).

Slide 33: Sampling (β, V, W) jointly. (β, V, W) can be sampled jointly by:
sampling (V, W) from its marginal posterior π(V, W | y^n) ∝ l(V, W | y^n) π(V, W) by a rejection or Metropolis-Hastings step;
sampling β from its full conditional π(β | y^n, V, W) by the FFBS algorithm.
Jointly sampling (β, V, W) avoids MCMC convergence problems associated with the posterior correlation between parameters (Gamerman and Moreira, 2002).

Slide 34: Comparing sampling schemes. First order DLM with V = 1:
y_t = β_t + ε_t, ε_t ~ N(0, 1)
β_t = β_{t-1} + ω_t, ω_t ~ N(0, W),
with (n, W) ∈ {(100, 0.01), (100, 0.5), (1000, 0.01), (1000, 0.5)}.
400 runs: 100 replications per combination.
Priors: β_1 ~ N(0, 10); V and W have inverse gamma priors with means set at the true values and coefficients of variation set at 10.
Posterior inference: based on 20,000 MCMC draws.
[Footnote: Gamerman, Reis and Salazar (2006). Comparison of sampling schemes for dynamic linear models. International Statistical Review, 74.]

Slide 35: Effective sample size. For a given θ, let t^(n) = t(θ^(n)), γ_k = Cov_π(t^(n), t^(n+k)), the variance of t^(n) be σ² = γ_0, the lag-k autocorrelation be ρ_k = γ_k/σ², and τ_n²/n = Var_π(t̄_n). It can be shown that, as n → ∞,
τ_n² = σ² ( 1 + 2 Σ_{k=1}^{n-1} ((n − k)/n) ρ_k ) → σ² ( 1 + 2 Σ_{k=1}^∞ ρ_k ),
where the term in parentheses is the inefficiency factor. The inefficiency factor measures how far the t^(n)'s are from being a random sample and how much Var_π(t̄_n) increases because of that. The effective sample size is defined as
n_eff = n / (1 + 2 Σ_{k=1}^∞ ρ_k),
the size of a random sample with the same variance.
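
A sketch of estimating n_eff from MCMC output; the empirical autocorrelation estimator and the truncation rule are simple assumed choices, not prescribed by the lecture.

```python
import numpy as np

def effective_sample_size(draws, max_lag=200):
    """n_eff = n / (1 + 2 * sum of autocorrelations), with a crude cut-off rule."""
    x = np.asarray(draws, dtype=float)
    n = len(x)
    xc = x - x.mean()
    var = xc @ xc / n
    rho_sum = 0.0
    for k in range(1, min(max_lag, n - 1)):
        rho_k = (xc[:-k] @ xc[k:]) / (n * var)
        if rho_k < 0.05:          # stop summing once autocorrelations are negligible
            break
        rho_sum += rho_k
    return n / (1.0 + 2.0 * rho_sum)
```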

Slide 36: Sampling schemes.
Scheme I: sampling β_1, ..., β_n, V and W from their full conditionals.
Scheme II: sampling β (as a block), V and W from their full conditionals.
Scheme III: sampling (β, V, W) jointly.
[Table: computing times of schemes II and III relative to scheme I, for n = 100 and n = 1000.]
[Table: sample averages (based on the 100 replications) of n_eff for V, by scheme, W and n.]

Slide 37: Let y_t, for t = 1, ..., n, be generated by
y_t = x_t²/20 + ε_t, ε_t ~ N(0, σ²)
x_t = α x_{t-1} + β x_{t-1}/(1 + x_{t-1}²) + γ cos(1.2(t − 1)) + u_t, u_t ~ N(0, τ²)
where x_0 ~ N(m_0, C_0) and θ = (α, β, γ).
Prior distribution:
σ² ~ IG(n_0/2, n_0 σ_0²/2)
(θ, τ²) ~ N(θ_0, τ² V_0) IG(ν_0/2, ν_0 τ_0²/2).
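
A sketch simulating this nonlinear model; the parameter values mirror the simulation set-up reported later (slide 41), and the function name is illustrative.

```python
import numpy as np

def simulate_nonlinear(n, alpha, beta, gamma, sig2, tau2, x0, rng):
    """Simulate (x_1,...,x_n) and (y_1,...,y_n) from the nonlinear benchmark model."""
    x, y = np.zeros(n), np.zeros(n)
    x_prev = x0
    for t in range(n):   # t = 0,...,n-1 corresponds to times 1,...,n
        x[t] = (alpha * x_prev + beta * x_prev / (1 + x_prev**2)
                + gamma * np.cos(1.2 * t) + rng.normal(0, np.sqrt(tau2)))
        y[t] = x[t]**2 / 20 + rng.normal(0, np.sqrt(sig2))
        x_prev = x[t]
    return x, y

rng = np.random.default_rng(10)
x, y = simulate_nonlinear(100, 0.5, 25.0, 8.0, sig2=1.0, tau2=10.0, x0=0.1, rng=rng)
```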

Slide 38: Sampling (σ², θ, τ² | x_0, x^n, y^n). It follows that
(σ² | y^n, x^n) ~ IG(n_1/2, n_1 σ_1²/2),
where n_1 = n_0 + n and
n_1 σ_1² = n_0 σ_0² + Σ_{t=1}^n (y_t − x_t²/20)².
Also
(θ, τ² | x_{0:n}) ~ N(θ_1, τ² V_1) IG(ν_1/2, ν_1 τ_1²/2),
where ν_1 = ν_0 + n and
V_1^{-1} = V_0^{-1} + Z'Z
V_1^{-1} θ_1 = V_0^{-1} θ_0 + Z' x_{1:n}
ν_1 τ_1² = ν_0 τ_0² + (x_{1:n} − Z θ_1)'(x_{1:n} − Z θ_1) + (θ_1 − θ_0)' V_0^{-1} (θ_1 − θ_0)
Z = (G_{x_0}, ..., G_{x_{n-1}})'
G_{x_{t-1}} = (x_{t-1}, x_{t-1}/(1 + x_{t-1}²), cos(1.2(t − 1)))'.

Slide 39: Sampling x_1, ..., x_n. Let x_{-t} = (x_0, ..., x_{t-1}, x_{t+1}, ..., x_n) for t = 1, ..., n−1, x_{-0} = x_{1:n}, x_{-n} = x_{0:(n-1)}, and y^0 = ∅.
For t = 0:
p(x_0 | x_{-0}, y^0, ψ) ∝ f_N(x_0; m_0, C_0) f_N(x_1; G_{x_0}' θ, τ²).
For t = 1, ..., n−1:
p(x_t | x_{-t}, y^t, ψ) ∝ f_N(y_t; x_t²/20, σ²) f_N(x_t; G_{x_{t-1}}' θ, τ²) f_N(x_{t+1}; G_{x_t}' θ, τ²).
For t = n:
p(x_n | x_{-n}, y^n, ψ) ∝ f_N(y_n; x_n²/20, σ²) f_N(x_n; G_{x_{n-1}}' θ, τ²).

Slide 40: Metropolis-Hastings algorithm. A simple random-walk Metropolis algorithm with tuning variance v_x² would work as follows. For t = 0, ..., n:
1. Current state: x_t^(j).
2. Sample x_t* from N(x_t^(j), v_x²).
3. Compute the acceptance probability
α = min{ 1, p(x_t* | x_{-t}, y^t, ψ) / p(x_t^(j) | x_{-t}, y^t, ψ) }.
4. New state: x_t^(j+1) = x_t* with probability α, and x_t^(j+1) = x_t^(j) with probability 1 − α.
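
A sketch of one such sweep over x_0, ..., x_n for the nonlinear model; log_target implements the full conditional kernels of the previous slide, and the function names, argument layout and indexing are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def log_target(t, xt, x, y, theta, sig2, tau2, m0, C0):
    """Unnormalized log full conditional of x_t (t = 0,...,n); x stores (x_0,...,x_n)."""
    alpha, beta, gamma = theta
    def g(xprev, s):  # state mean G_{x_{s-1}}' theta at time s
        return alpha * xprev + beta * xprev / (1 + xprev**2) + gamma * np.cos(1.2 * (s - 1))
    n = len(y)
    lp = 0.0
    if t == 0:
        lp += norm.logpdf(xt, m0, np.sqrt(C0))                       # prior on x_0
    else:
        lp += norm.logpdf(y[t - 1], xt**2 / 20, np.sqrt(sig2))       # f_N(y_t; x_t^2/20, sig2)
        lp += norm.logpdf(xt, g(x[t - 1], t), np.sqrt(tau2))         # f_N(x_t; G_{x_{t-1}}'theta, tau2)
    if t < n:
        lp += norm.logpdf(x[t + 1], g(xt, t + 1), np.sqrt(tau2))     # f_N(x_{t+1}; G_{x_t}'theta, tau2)
    return lp

def rwm_sweep(x, y, theta, sig2, tau2, m0, C0, v_x, rng):
    """One random-walk Metropolis sweep over x = (x_0,...,x_n); y has length n."""
    for t in range(len(x)):
        prop = rng.normal(x[t], np.sqrt(v_x))
        log_alpha = (log_target(t, prop, x, y, theta, sig2, tau2, m0, C0)
                     - log_target(t, x[t], x, y, theta, sig2, tau2, m0, C0))
        if np.log(rng.uniform()) < log_alpha:
            x[t] = prop
    return x
```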

Slide 41: Simulation set up. We simulated n = 100 observations based on θ = (0.5, 25, 8), σ² = 1, τ² = 10 and x_0 = 0.1. [Figure: y_t over time; x_t over time; x_t against x_{t-1}.]

Slide 42: Prior hyperparameters.
x_0 ~ N(m_0, C_0), with m_0 = 0.0 and C_0 = 10.
θ | τ² ~ N(θ_0, τ² V_0), with θ_0 = (0.5, 25, 8) and V_0 = diag(0.0025, 0.1, 0.04).
τ² ~ IG(ν_0/2, ν_0 τ_0²/2), with ν_0 = 6 and τ_0² = 20/3, such that E(τ²) = 10.
σ² ~ IG(n_0/2, n_0 σ_0²/2), with n_0 = 6 and σ_0² = 2/3, such that E(σ²) = V(σ²) = 1.

Slide 43: MCMC setup. Metropolis-Hastings tuning parameter: v_x² = (0.1)².
Burn-in period, thinning step and MCMC sample size: M_0 = 1,000, L = 20, M = …,000 draws.
Initial values: θ = (0.5, 25, 8), τ² = 10, σ² = 1, x_{0:n} = x_{0:n}^true.

Slide 44: Parameters. [Figure: trace plots and autocorrelation functions (ACFs) for α, β, γ, τ² and σ².]

Slide 45: States. [Figure: posterior summaries of the states x_t over time.]

Slide 46: States. [Figure: ACFs of the sampled states, by lag.]

Slide 47: Parameters, now with M_0 = 100,000, L = 50 and M = …,000 draws. [Figure: trace plots and ACFs for α, β, γ, τ and σ.]

Slide 48: States. [Figure: ACFs of the sampled states, by lag.]

Slide 49: Stochastic volatility. The canonical stochastic volatility model (SVM) is
y_t = e^{h_t/2} ε_t
h_t = µ + φ h_{t-1} + τ η_t
where ε_t and η_t are N(0, 1) shocks with E(ε_t η_{t+h}) = 0 for all h and E(ε_t ε_{t+l}) = E(η_t η_{t+l}) = 0 for all l ≠ 0.
τ²: volatility of the log-volatility.
If |φ| < 1 then h_t is a stationary process.
Let y^n = (y_1, ..., y_n), h^n = (h_1, ..., h_n) and h_{a:b} = (h_a, ..., h_b).
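
A sketch for simulating from the SV model; the values of µ and τ² used below are arbitrary placeholders (the slide's exact values are not shown here), and φ = 0.99 matches the simulated-data slide that follows.

```python
import numpy as np

def simulate_sv(n, mu, phi, tau2, h0, rng):
    """Simulate returns y_t and log-volatilities h_t from the canonical SV model."""
    h, y = np.zeros(n), np.zeros(n)
    h_prev = h0
    for t in range(n):
        h[t] = mu + phi * h_prev + rng.normal(0, np.sqrt(tau2))
        y[t] = np.exp(h[t] / 2) * rng.normal()
        h_prev = h[t]
    return y, h

rng = np.random.default_rng(2024)
y, h = simulate_sv(500, mu=-0.02, phi=0.99, tau2=0.02, h0=0.0, rng=rng)
```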

Slide 50: Simulated data with n = 500, h_0 = 0.0 and (µ, φ, τ²) = (…, 0.99, …). [Figure: simulated returns and log-volatilities over time.]

Slide 51: Prior information. Uncertainty about the initial log-volatility is h_0 ~ N(m_0, C_0). Let θ = (µ, φ); then the prior distribution of (θ, τ²) is normal-inverse gamma, i.e. (θ, τ²) ~ NIG(θ_0, V_0, ν_0, s_0²):
θ | τ² ~ N(θ_0, τ² V_0)
τ² ~ IG(ν_0/2, ν_0 s_0²/2).
For example, if ν_0 = 10 and s_0² = …, then
E(τ²) = (ν_0 s_0²/2)/(ν_0/2 − 1) = … and Var(τ²) = (ν_0 s_0²/2)²/((ν_0/2 − 1)²(ν_0/2 − 2)) = (0.013)².
Hyperparameters: m_0, C_0, θ_0, V_0, ν_0 and s_0².

Slide 52: Simulated data. Simulation setup: h_0 = 0.0, µ = …, φ = 0.99, τ² = ….
Prior distribution: h_0 ~ N(0, 100), µ ~ N(0, 100), φ ~ N(0, 100), τ² ~ IG(5, 0.14) (mode = 0.0234; 95% c.i. = (0.014, 0.086)).

Slide 53: Posterior inference. The SVM is a dynamic model, and posterior inference via MCMC for the latent log-volatility states h_t can be performed in at least two ways. Let h_{-t} = (h_{0:(t-1)}, h_{(t+1):n}), for t = 1, ..., n−1, and h_{-n} = h_{1:(n-1)}.
Single moves for h_t: sample (θ, τ² | h^n, y^n), then (h_t | h_{-t}, θ, τ², y^n), for t = 1, ..., n.
Block move for h^n: sample (θ, τ² | h^n, y^n), then (h^n | θ, τ², y^n).

Slide 54: Sampling (θ, τ² | h^n, y^n). Conditional on h_{0:n}, the posterior distribution of (θ, τ²) is also normal-inverse gamma:
(θ, τ² | y^n, h_{0:n}) ~ NIG(θ_1, V_1, ν_1, s_1²)
where X = (1_n, h_{0:(n-1)}), ν_1 = ν_0 + n, and
V_1^{-1} = V_0^{-1} + X'X
V_1^{-1} θ_1 = V_0^{-1} θ_0 + X' h_{1:n}
ν_1 s_1² = ν_0 s_0² + (h_{1:n} − X θ_1)'(h_{1:n} − X θ_1) + (θ_1 − θ_0)' V_0^{-1} (θ_1 − θ_0).
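
A sketch of this conjugate draw; the function name is illustrative, and the inverse-gamma draw via a reciprocal gamma variate is an implementation choice, assuming the NIG form above.

```python
import numpy as np

def sample_theta_tau2(h, theta0, V0, nu0, s20, rng):
    """h = (h_0,...,h_n); returns one draw of theta = (mu, phi) and tau2."""
    X = np.column_stack([np.ones(len(h) - 1), h[:-1]])   # X = (1_n, h_{0:(n-1)})
    z = h[1:]                                             # h_{1:n}
    V0inv = np.linalg.inv(V0)
    V1 = np.linalg.inv(V0inv + X.T @ X)
    theta1 = V1 @ (V0inv @ theta0 + X.T @ z)
    nu1 = nu0 + len(z)
    resid = z - X @ theta1
    nu1s21 = nu0 * s20 + resid @ resid + (theta1 - theta0) @ V0inv @ (theta1 - theta0)
    tau2 = 1.0 / rng.gamma(nu1 / 2, 2.0 / nu1s21)         # tau2 ~ IG(nu1/2, nu1*s1^2/2)
    theta = rng.multivariate_normal(theta1, tau2 * V1)    # theta | tau2 ~ N(theta1, tau2*V1)
    return theta, tau2
```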

Slide 55: Sampling (h_0 | θ, τ², h_1). Combining
h_0 ~ N(m_0, C_0) and h_1 | h_0 ~ N(µ + φ h_0, τ²)
leads to (by Bayes' theorem)
h_0 | h_1 ~ N(m_1, C_1)
where
C_1^{-1} m_1 = C_0^{-1} m_0 + φ τ^{-2} (h_1 − µ)
C_1^{-1} = C_0^{-1} + φ² τ^{-2}.

Slide 56: Conditional prior distribution of h_t. Given h_{t-1}, θ and τ², it can be shown that
(h_t, h_{t+1})' | h_{t-1} ~ N( (µ + φ h_{t-1}, (1 + φ)µ + φ² h_{t-1})', τ² [ 1 φ ; φ 1 + φ² ] ).
Hence E(h_t | h_{t-1}, h_{t+1}, θ, τ²) and V(h_t | h_{t-1}, h_{t+1}, θ, τ²) are
µ_t = ((1 − φ)/(1 + φ²)) µ + (φ/(1 + φ²)) (h_{t-1} + h_{t+1})
ν² = τ² (1 + φ²)^{-1}.
Therefore,
(h_t | h_{t-1}, h_{t+1}, θ, τ²) ~ N(µ_t, ν²), t = 1, ..., n−1
(h_n | h_{n-1}, θ, τ²) ~ N(µ_n, τ²), where µ_n = µ + φ h_{n-1}.

Slide 57: Sampling h_t via RWM. Let ν_t² = ν² for t = 1, ..., n−1 and ν_n² = τ². Then
p(h_t | h_{-t}, y^n, θ, τ²) ∝ f_N(h_t; µ_t, ν_t²) f_N(y_t; 0, e^{h_t}), for t = 1, ..., n.
Random-walk Metropolis with tuning variance v_h² (t = 1, ..., n):
1. Current state: h_t^(j).
2. Sample h_t* from N(h_t^(j), v_h²).
3. Compute the acceptance probability
α = min{ 1, [f_N(h_t*; µ_t, ν_t²) f_N(y_t; 0, e^{h_t*})] / [f_N(h_t^(j); µ_t, ν_t²) f_N(y_t; 0, e^{h_t^(j)})] }.
4. New state: h_t^(j+1) = h_t* with probability α, and h_t^(j+1) = h_t^(j) with probability 1 − α.
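
A sketch of one full sweep of this sampler; treating h_0 as a fixed conditioning value and all function and variable names are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def rwm_sweep_h(h, h0, y, mu, phi, tau2, v_h, rng):
    """One RWM sweep over h = (h_1,...,h_n), given h_0, data y and (mu, phi, tau2)."""
    n = len(y)
    nu2 = tau2 / (1 + phi**2)
    for t in range(n):
        h_prev = h0 if t == 0 else h[t - 1]
        if t < n - 1:   # interior times: conditional prior N(mu_t, nu^2)
            mu_t = ((1 - phi) / (1 + phi**2)) * mu + (phi / (1 + phi**2)) * (h_prev + h[t + 1])
            nu2_t = nu2
        else:           # t = n: only h_{n-1} conditions the prior
            mu_t = mu + phi * h_prev
            nu2_t = tau2
        prop = rng.normal(h[t], np.sqrt(v_h))
        log_alpha = (norm.logpdf(prop, mu_t, np.sqrt(nu2_t))
                     + norm.logpdf(y[t], 0, np.exp(prop / 2))
                     - norm.logpdf(h[t], mu_t, np.sqrt(nu2_t))
                     - norm.logpdf(y[t], 0, np.exp(h[t] / 2)))
        if np.log(rng.uniform()) < log_alpha:
            h[t] = prop
    return h
```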

Slide 58: MCMC setup: M_0 = 1,000 and M = 1,000. [Figure: autocorrelations of the sampled h_t's for the simulated data.]

Slide 59: Sampling h_t via IMH. The full conditional distribution of h_t is given by
p(h_t | h_{-t}, y^n, θ, τ²) ∝ p(h_t | h_{t-1}, h_{t+1}, θ, τ²) p(y_t | h_t) = f_N(h_t; µ_t, ν²) f_N(y_t; 0, e^{h_t}).
Kim, Shephard and Chib (1998) explored the fact that
log p(y_t | h_t) = const − h_t/2 − (y_t²/2) exp(−h_t),
and that a Taylor expansion of exp(−h_t) around µ_t leads to
log p(y_t | h_t) ≈ const − h_t/2 − (y_t²/2) (e^{−µ_t} − (h_t − µ_t) e^{−µ_t})
⇒ g(h_t) = exp{ −(h_t/2)(1 − y_t² e^{−µ_t}) }.

Slide 60: Proposal distribution. Let ν_t² = ν² for t = 1, ..., n−1 and ν_n² = τ². Then combining f_N(h_t; µ_t, ν_t²) and g(h_t), for t = 1, ..., n, leads to the following proposal distribution:
q(h_t | h_{-t}, y^n, θ, τ²) ≡ N(h_t; µ̃_t, ν_t²)
where µ̃_t = µ_t + 0.5 ν_t² (y_t² e^{−µ_t} − 1).

Slide 61: IMH algorithm. For t = 1, ..., n:
1. Current state: h_t^(j).
2. Sample h_t* from N(µ̃_t, ν_t²).
3. Compute the acceptance probability
α = min{ 1, [f_N(h_t*; µ_t, ν_t²) f_N(y_t; 0, e^{h_t*}) f_N(h_t^(j); µ̃_t, ν_t²)] / [f_N(h_t^(j); µ_t, ν_t²) f_N(y_t; 0, e^{h_t^(j)}) f_N(h_t*; µ̃_t, ν_t²)] }.
4. New state: h_t^(j+1) = h_t* with probability α, and h_t^(j+1) = h_t^(j) with probability 1 − α.
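
A sketch of a single independence MH update using the tailored proposal above; names are illustrative.

```python
import numpy as np
from scipy.stats import norm

def imh_step(h_t, y_t, mu_t, nu2_t, rng):
    """One independence MH update of h_t given mu_t, nu_t^2 and y_t."""
    sd = np.sqrt(nu2_t)
    mu_tilde = mu_t + 0.5 * nu2_t * (y_t**2 * np.exp(-mu_t) - 1.0)   # proposal mean
    prop = rng.normal(mu_tilde, sd)
    def log_target(h):
        return norm.logpdf(h, mu_t, sd) + norm.logpdf(y_t, 0, np.exp(h / 2))
    log_alpha = (log_target(prop) - log_target(h_t)
                 + norm.logpdf(h_t, mu_tilde, sd) - norm.logpdf(prop, mu_tilde, sd))
    return prop if np.log(rng.uniform()) < log_alpha else h_t
```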

Slide 62: ACF for both schemes. [Figure: autocorrelation functions under the random-walk and independent MH schemes.]

Slide 63: Sampling h^n: normal approximation and FFBS. Let y_t* = log y_t² and ε_t* = log ε_t². The SV-AR(1) model is a DLM with non-normal observational errors, i.e.
y_t* = h_t + ε_t*
h_t = µ + φ h_{t-1} + τ η_t
where η_t ~ N(0, 1). The distribution of ε_t* is log χ_1², with
E(ε_t*) = −1.27 and V(ε_t*) = π²/2 = 4.935.

Slide 64: Normal approximation. Let ε_t* be approximated by N(α, σ²), with z_t = y_t* − α, α = −1.27 and σ² = π²/2. Then
z_t = h_t + σ v_t
h_t = µ + φ h_{t-1} + τ η_t
is a simple DLM, where v_t and η_t are N(0, 1). Sampling from p(h^n | θ, τ², σ², z^n) can be performed by the FFBS algorithm.

Slide 65: [Figure: densities of the log χ_1² distribution and its N(−1.27, π²/2) approximation.]

Slide 66: ACF for the three schemes. [Figure: autocorrelation functions under the random-walk, independent and normal-approximation schemes.]

Slide 67: Sampling h^n: mixtures of normals and FFBS. The log χ_1² distribution can be approximated by
Σ_{i=1}^7 π_i N(µ_i, ω_i²).
[Table: mixture weights π_i, means µ_i and variances ω_i², for i = 1, ..., 7.]

Slide 68: [Figure: densities of log χ_1² and the 7-component normal mixture Σ_{i=1}^7 π_i N(µ_i, ω_i²).]

Slide 69: Mixture of normals. Using an argument from the Bayesian analysis of mixtures of normals, let z_1, ..., z_n be unobservable (latent) indicator variables such that z_t ∈ {1, ..., 7} and Pr(z_t = i) = π_i, for i = 1, ..., 7. Conditional on the z's, y_t is transformed into log y_t²:
log y_t² = h_t + log ε_t²
h_t = µ + φ h_{t-1} + τ_η η_t
which can be rewritten as a normal DLM:
log y_t² = h_t + v_t, v_t ~ N(µ_{z_t}, ω²_{z_t})
h_t = µ + φ h_{t-1} + w_t, w_t ~ N(0, τ_η²)
where µ_{z_t} and ω²_{z_t} are provided in the previous table. Then h^n is jointly sampled using the FFBS algorithm.
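
A sketch of the indicator-sampling step implied above; the 7-component table (π_i, µ_i, ω_i²) is passed in as arrays (its numerical values come from Kim, Shephard and Chib, 1998, and are not reproduced here), and all names are illustrative.

```python
import numpy as np
from scipy.stats import norm

def sample_indicators(ystar, h, pis, mus, omega2s, rng):
    """Draw z_t from its multinomial full conditional given ystar_t = log y_t^2 and h_t."""
    n = len(ystar)
    z = np.zeros(n, dtype=int)
    for t in range(n):
        logw = np.log(pis) + norm.logpdf(ystar[t], h[t] + mus, np.sqrt(omega2s))
        w = np.exp(logw - logw.max())
        z[t] = rng.choice(7, p=w / w.sum())
    return z
# Conditional on z, the model is the normal DLM above and h_{0:n} can be drawn by FFBS.
```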

Slide 70: ACF for the four schemes. [Figure: autocorrelation functions under the random-walk, independent, normal-approximation and mixture schemes.]

Slide 71: Posterior means of the volatilities. [Figure: true volatilities and posterior means under the random-walk, independent, normal-approximation and mixture schemes, over time.]
