Bayesian Stochastic Volatility (SV) Model with non-gaussian Errors


Slide 1: Bayesian Stochastic Volatility (SV) Model with Non-Gaussian Errors
Seokwoo Lee (Department of Statistics) and Hedibert F. Lopes (Graduate School of Business), University of Chicago. May 19, 2008.

Slide 2: Outline of Topics
1. Preliminaries: Markov chain Monte Carlo (Gibbs sampling); Bayesian regression
2. SV model with Gaussian errors: forward filtering, backward sampling (FFBS) with the Kalman filter; Kim, Shephard, and Chib: FIX-SVM [5]
3. SVM with non-Gaussian errors: FLEX-SVM
4. Real stock data: S&P500 and 30 Dow Jones stocks
5. Simulation methodology

Slide 3: Motivation. The GE return series (rdata GE.txt) exhibits time-varying volatility and volatility clustering.

Slide 4: Standard Stochastic Volatility Model [3]
Assumptions: ε_t and h_t are stochastically independent; the log-volatility h_t is a stationary AR(1) process; h_1 ~ N(a_1, R_1).

    y_t = e^{h_t/2} ε_t,             ε_t ~ i.i.d. N(0, 1)       (1)
    h_t = ω + φ h_{t-1} + σ_w w_t,   w_t ~ N(0, 1)

Equivalently,

    log y_t² = h_t + log ε_t²,       log ε_t² ~ log χ²_1        (2)
    h_t = ω + φ h_{t-1} + w_t,       w_t ~ N(0, σ_w²)
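As a concrete reference for Eqs. (1) and (2), the generating process can be simulated directly. This is an illustrative sketch (the function name and parameter values are ours, not from the slides; the original code was C/C++):

```python
import numpy as np

def simulate_sv(n, omega, phi, sigma_w, seed=0):
    """Simulate the standard SV model:
    y_t = exp(h_t / 2) * eps_t,  eps_t ~ N(0, 1)
    h_t = omega + phi * h_{t-1} + sigma_w * w_t,  w_t ~ N(0, 1).
    h_1 is drawn from the stationary AR(1) distribution."""
    rng = np.random.default_rng(seed)
    h = np.empty(n)
    # stationary mean omega/(1-phi) and variance sigma_w^2/(1-phi^2)
    h[0] = rng.normal(omega / (1 - phi), sigma_w / np.sqrt(1 - phi**2))
    for t in range(1, n):
        h[t] = omega + phi * h[t - 1] + sigma_w * rng.normal()
    y = np.exp(h / 2) * rng.normal(size=n)
    return y, h
```

With φ close to 1 the simulated series shows the volatility clustering mentioned on the motivation slide.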

Slide 5: Preliminaries. Markov Chain Monte Carlo
In the standard SV model (Eq. 1), the likelihood function is

    f(y | θ) = ∫ f(y | h, θ) f(h | θ) dh

where y = (y_1, ..., y_n) and h = (h_1, ..., h_n). The key issue is that this likelihood is intractable. Instead, we focus on p(θ, h | y). Markov chain Monte Carlo procedures provide a way to sample from this density without directly computing the complex likelihood above. Posterior moments and marginal densities can then be estimated by averaging the relevant functions over the sampled (simulated) variates.

Slide 6: Gibbs Sampling
Gibbs sampling generates successive samples from the full conditional distributions. The algorithm proceeds by sampling each block from its full conditional distribution, where the most recent values of the conditioning blocks are used in the simulation.

Gibbs for the SV model, targeting π(θ, h | y):
1. Initialize h and θ
2. Sample h | y, θ   (hard)
3. Sample (update) θ | h, y   (easy)
4. Go to 2

Slide 7: Sampling (ω, φ, σ_w² | h, y) by Bayesian Regression
Let h₋ = (h_1, ..., h_{T-1}), h = (h_2, ..., h_T), X = (1_{n-1}, h₋), and β = (ω, φ)′.
System equation (AR(1)):

    h = X β + σ_w w,   w_t ~ N(0, 1)

Slide 8: Bayesian Regression (continued)
Priors (a_1, R_1, β_0, A, ν_0, and s_0² are known hyperparameters):

    h_1 ~ N(a_1, R_1)
    β | σ_w² ~ N(β_0, σ_w² A⁻¹)
    σ_w² ~ IG(ν_0/2, ν_0 s_0²/2)

Full conditionals:

    σ_w² | h_{1:n}, y_{1:n} ~ IG(ν_1/2, ν_1 s_1²/2)
    β | σ_w², h_{1:n}, y_{1:n} ~ N(β̂, σ_w² (X′X + A)⁻¹)

where, with X = (1_{n-1}, h_{1:n-1}),

    β̂ = (X′X + A)⁻¹ (X′h + A β_0)
    ν_1 = ν_0 + (n - 1)
    ν_1 s_1² = ν_0 s_0² + (h - X β̂)′(h - X β̂) + (β̂ - β_0)′ A (β̂ - β_0)
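The normal-inverse-gamma full conditionals above can be sketched as one joint draw of (β, σ_w²) given the current log-volatility path h. Function and hyperparameter names are illustrative, not from the slides:

```python
import numpy as np

def sample_beta_sigma(h, beta0, A, nu0, s0sq, rng):
    """One joint draw of (beta, sigma_w^2) from their full conditional
    under the conjugate N-IG prior, for h_t = omega + phi h_{t-1} + sigma_w w_t.
    The regression has n = T - 1 rows, matching nu_1 = nu_0 + (T - 1)."""
    X = np.column_stack([np.ones(len(h) - 1), h[:-1]])
    y = h[1:]
    n = len(y)
    V = np.linalg.inv(X.T @ X + A)
    beta_hat = V @ (X.T @ y + A @ beta0)
    resid = y - X @ beta_hat
    nu1 = nu0 + n
    nu1_s1sq = (nu0 * s0sq + resid @ resid
                + (beta_hat - beta0) @ A @ (beta_hat - beta0))
    # draw sigma^2 ~ IG(nu1/2, nu1 s1^2 / 2), then beta | sigma^2
    sigma2 = 1.0 / rng.gamma(nu1 / 2, 2.0 / nu1_s1sq)
    beta = rng.multivariate_normal(beta_hat, sigma2 * V)
    return beta, sigma2
```

With a long simulated AR(1) path and a vague prior, repeated draws of β concentrate around the true (ω, φ).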

Slide 9: Forward Filtering, Backward Sampling with the DLM
Idea: jointly sample (h | ω, φ, σ_w², y) using FFBS with the Kalman filter.
Ingredients: the Kalman filter (dynamic linear model) and forward filtering, backward sampling (simulation smoothing).

Slide 10: Dynamic Linear Model (DLM)
Model:

    y_t = F_t′ β_t + v_t,        v_t ~ iid N(0, V_t)
    β_t = G_t β_{t-1} + w_t,     w_t ~ iid N(0, W_t)

y_t: sequence of observations; F_t: vector of explanatory variables; β_t: d-dimensional state vector; G_t: d × d evolution matrix; β_1 ~ N(a_1, R_1); v_t ⊥ w_t.

Slide 11: Sequential Inference
Evolution: β_t | β_{t-1} ~ N(G_t β_{t-1}, W_t)
Posterior at t - 1: β_{t-1} | y_{1:t-1} ~ N(m_{t-1}, C_{t-1})
Prior at t: β_t | y_{1:t-1} ~ N(a_t, R_t), with a_t = G_t m_{t-1} and R_t = G_t C_{t-1} G_t′ + W_t
Predictive at t: y_t | y_{1:t-1} ~ N(f_t, Q_t), with f_t = F_t′ a_t and Q_t = F_t′ R_t F_t + V_t

Slide 12: Sequential Inference, Filtering
Since p(β_t | y_{1:t}) = p(β_t | y_t, y_{1:t-1}) ∝ p(y_t | β_t) p(β_t | y_{1:t-1}), the posterior at t is

    β_t | y_{1:t} ~ N(m_t, C_t)

where

    m_t = a_t + A_t e_t
    C_t = R_t - A_t Q_t A_t′
    A_t = R_t F_t Q_t⁻¹
    e_t = y_t - f_t

By induction, these distributions are valid for all times t.
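The recursions above, specialized to the scalar DLM that the SV model becomes (F_t = G_t-as-scalar = φ, with a constant ω in the state equation), can be sketched as follows. The function name and signature are illustrative:

```python
import numpy as np

def kalman_filter(y, omega, phi, V, W, a1, R1):
    """Scalar Kalman filter for
    y_t = h_t + v_t,              v_t ~ N(0, V)
    h_t = omega + phi h_{t-1} + w_t,  w_t ~ N(0, W),
    with prior h_1 ~ N(a1, R1). Returns filtered means m_t and variances C_t."""
    n = len(y)
    m = np.empty(n); C = np.empty(n)
    a, R = a1, R1
    for t in range(n):
        Q = R + V               # predictive variance of y_t (f_t = a)
        A = R / Q               # Kalman gain A_t = R_t F_t Q_t^{-1}
        m[t] = a + A * (y[t] - a)     # m_t = a_t + A_t e_t
        C[t] = R - A * Q * A          # C_t = R_t - A_t Q_t A_t
        if t < n - 1:
            a = omega + phi * m[t]    # a_{t+1} = G m_t (+ constant)
            R = phi**2 * C[t] + W     # R_{t+1} = G C_t G' + W
    return m, C
```

When V is tiny, the filter trusts the observations and m_t tracks y_t almost exactly, which is a convenient sanity check.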

Slide 13: Forward Filtering, Backward Sampling

    p(β | y) = p(β_n, ..., β_1 | y_{1:n})
             = p(β_n | y_{1:n}) p(β_{n-1} | β_n, y_{1:n}) p(β_{n-2} | β_{n-1}, β_n, y_{1:n}) ... p(β_1 | β_2, ..., β_n, y_{1:n})
             = p(β_n | y_{1:n}) p(β_{n-1} | β_n, y_{1:n}) p(β_{n-2} | β_{n-1}, y_{1:n}) ... p(β_1 | β_2, y_{1:n})

Also, p(β_{n-1} | β_n, y_{1:n}) = p(β_{n-1} | β_n, y_{1:n-1}, y_n) = p(β_{n-1} | β_n, y_{1:n-1}).
In general, p(β_t | β_{t+1}, y_{1:n}) = p(β_t | β_{t+1}, y_{1:t}) for 1 ≤ t < n.

Slide 14: FFBS
(β_t, β_{t+1}) given y_{1:t} is bivariate normal under the Gaussian assumption, and its conditional mean and covariance matrix are readily available from Kalman filtering:

    (β_t, β_{t+1})′ | y_{1:t} ~ N( (m_t, a_{t+1})′, [ C_t,  C_t G_{t+1}′ ;  G_{t+1} C_t,  R_{t+1} ] )

Consequently,

    β_t | β_{t+1}, y_{1:t} ~ N(m_t*, C_t*)

where

    m_t* = m_t + C_t G_{t+1}′ R_{t+1}⁻¹ (β_{t+1} - a_{t+1})
    C_t* = C_t - C_t G_{t+1}′ R_{t+1}⁻¹ G_{t+1} C_t
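Putting the forward filter and the backward draws together gives the full FFBS step for the scalar SV-as-DLM case. This is an illustrative sketch (names and signature are ours):

```python
import numpy as np

def ffbs(y, omega, phi, V, W, a1, R1, rng):
    """Forward filtering, backward sampling for the scalar DLM
    y_t = h_t + v_t, h_t = omega + phi h_{t-1} + w_t.
    Draws one joint sample of h_{1:n} from p(h | y, omega, phi, V, W)."""
    n = len(y)
    m = np.empty(n); C = np.empty(n); a = np.empty(n); R = np.empty(n)
    a[0], R[0] = a1, R1
    for t in range(n):                       # forward Kalman filter
        Q = R[t] + V
        A = R[t] / Q
        m[t] = a[t] + A * (y[t] - a[t])
        C[t] = R[t] - A * A * Q
        if t < n - 1:
            a[t + 1] = omega + phi * m[t]
            R[t + 1] = phi**2 * C[t] + W
    h = np.empty(n)
    h[n - 1] = rng.normal(m[n - 1], np.sqrt(C[n - 1]))
    for t in range(n - 2, -1, -1):           # backward sampling
        B = C[t] * phi / R[t + 1]            # C_t G' R_{t+1}^{-1}
        mean = m[t] + B * (h[t + 1] - a[t + 1])
        var = C[t] - B * phi * C[t]
        h[t] = rng.normal(mean, np.sqrt(var))
    return h
```

Each call returns one draw of the whole path, which is exactly the "sample h | y, θ" block of the Gibbs sampler on slide 6.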

Slide 15: Kim, Shephard, and Chib, FIX-SVM [5]. SV Model as a DLM
Recalling

    y_t* := log y_t² = h_t + v_t,   v_t := log ε_t² ~ log χ²_1
    h_t = ω + φ h_{t-1} + w_t,      w_t ~ N(0, σ_w²)

where E(log ε_t²) ≈ -1.27 and Var(log ε_t²) ≈ 4.93. This is indeed very close to a DLM, except that log ε_t² is non-Gaussian.
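The two moments quoted above are easy to verify by Monte Carlo (a quick sketch; the exact values are ψ(1/2) + log 2 ≈ -1.27 and π²/2 ≈ 4.93):

```python
import numpy as np

rng = np.random.default_rng(1)
eps = rng.standard_normal(1_000_000)
v = np.log(eps**2)        # v_t = log eps_t^2 ~ log chi^2_1

print(v.mean())           # ≈ -1.27
print(v.var())            # ≈ 4.93 (= pi^2 / 2)
```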

Slide 16: Comparison between Approximations
Figure: density of log χ²_1 versus N(-1.27, 4.9) and a mixture of 7 normals. It is obvious that the single-normal approximation is not good enough.

Slide 17: Approximation by a Normal Mixture

    log χ²_1 ≈ Σ_{k=1}^{7} π_k N(μ_k, τ_k²)

Table: the 7 normal mixture components (μ_k, τ_k², π_k). Figure 1 suggests the mixture approximation is more appropriate.

Slide 18: SV Model as a DLM with a Fixed Mixture, FIX-SVM
Let z_1, ..., z_n be latent indicators for the observation innovations, with z_t ∈ {1, ..., 7}:

    v_t | z_t = i ~ N(μ_i, τ_i²),   P(z_t = i) = π_i

Then, conditional on {z_t}_{t=1}^n, the model becomes a DLM:

    log y_t² = h_t + v_t,        v_t ~ N(μ_{z_t}, τ²_{z_t})
    h_t = ω + φ h_{t-1} + w_t,   w_t ~ N(0, σ_w²)
    h_1 ~ N(a_1, R_1)

where μ_{z_t} and τ²_{z_t} are given in Table 1. Hence (h | ω, φ, σ_w², y) can be jointly sampled by FFBS as discussed.

Slide 19: Gibbs Sampling for FIX-SVM
1. Initialize h_1
2. Jointly sample h_{2:n} by FFBS given h_1, θ, and σ_w²
3. Sample z_i | v_i ∈ {1, ..., K} with p(z_i = j | v_i) = π_j p_N(v_i | μ_j, τ_j²) / Σ_{l=1}^{7} π_l p_N(v_i | μ_l, τ_l²)
4. Update σ_w² | θ, h, y, z
5. Update θ | σ_w², h, y, z by Bayesian regression as discussed
6. Go to step 2
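Step 3, drawing the mixture indicators, can be vectorized over all observations. A minimal sketch, with placeholder mixture parameters rather than the KSC table (and 0-based component indices instead of the slide's 1, ..., K):

```python
import numpy as np

def sample_indicators(v, pi, mu, tau2, rng):
    """Sample z_i from p(z_i = j | v_i) proportional to pi_j N(v_i | mu_j, tau2_j).
    Computed in logs for numerical stability."""
    logp = (np.log(pi)
            - 0.5 * np.log(2 * np.pi * tau2)
            - 0.5 * (v[:, None] - mu)**2 / tau2)   # n x K log weights
    logp -= logp.max(axis=1, keepdims=True)
    p = np.exp(logp)
    p /= p.sum(axis=1, keepdims=True)
    # inverse-CDF draw per observation
    u = rng.random(len(v))
    return (p.cumsum(axis=1) < u[:, None]).sum(axis=1)
```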

Slide 20: Simulation Study, Heavy-Tailed Innovations ε_t ~ t_2
Figure: 4000 simulated observations with heavy-tailed innovation errors. n_obs = 4000, with fixed true values of ω, φ, and σ_w².

Slide 21: FIX-SVM Parameter Estimation, ε_t ~ t_2. INCORRECT!
Figure: histograms, trace plots, and ACFs for φ, σ_w, and ω. Table: FIX-SVM posterior mean, std, 5%, and 95% quantiles against the true values of φ, σ_w, and ω.

Slide 22: FIX-SVM E(h_t) vs. True h_t, ε_t ~ t_2. INCORRECT!
Figure: FIX-SVM estimated h_t (black solid) versus true h_t (red dotted) with t_2 innovations.

Slide 23: SVM with Non-Gaussian Errors, FLEX-SVM
Observation: a wrong assumption on the distribution of the innovation errors appears to cause incorrect inference about h, ω, φ, and σ_w.
Idea: eliminate the assumption that ε_t ~ N(0, 1) and estimate the density of ε_t from the data alone.
Inference: ω, φ, σ_w plus density estimation.

Slide 24: FLEX-SVM Mechanism
Learn the unknown density of ε_t with the mixture Σ_{i=1}^{K} π_i N(μ_i, τ_i²). The parameters {μ_i, τ_i, π_i}_{i=1}^{K} are estimated dynamically through MCMC (Gibbs) from the observations. K is regarded as fixed, e.g., K = 10 or K = 14.

Slide 25: Density Estimation by a Mixture Model
Given observations {v_1, ..., v_n}, the probability density of v can be written as

    p(v_i | γ) = Σ_{j=1}^{K} π_j p_N(v_i | μ_j, τ_j²)

A latent group classifier z_i is introduced per observation v_i: observation i is classified in group j when z_i = j. Using latent indicators such that v_i | z_i = j ~ N(μ_j, τ_j²), the joint density of (v, z | γ) is

    p(v, z | γ) = p(v | z, γ) p(z | γ) = [ Π_{j=1}^{K} Π_{i∈I_j} p_N(v_i | μ_j, τ_j²) ] Π_{i=1}^{n} p(z_i | γ)

where I_j = {t : z_t = j}.

Slide 26: Priors for the Mixture Model
Define I_j := {t : z_t = j} for j ∈ {1, ..., K}, n_j := |I_j|, n_j v̄_j = Σ_{i∈I_j} v_i, and n_j s_j² = Σ_{i∈I_j} (v_i - v̄_j)².
Priors:

    μ_j ~ N(μ_0j, s_0j²)
    τ_j² ~ IG(n_0j/2, n_0j τ_0j²/2)
    π ~ Dirichlet(α_0),   α_0 = (α_01, ..., α_0K)

where μ_0j, s_0j, τ_0j, n_0j, and α_0 are known hyperparameters.

Slide 27: Full Conditionals for the Mixture Model

    μ_j | τ_j², z, v ~ N(μ*, τ*²),  where τ*⁻² = n_j τ_j⁻² + s_0j⁻²  and  μ* = τ*² (n_j v̄_j τ_j⁻² + μ_0j s_0j⁻²)
    τ_j² | μ_j, z, v ~ IG( (n_0j + n_j)/2, (n_0j τ_0j² + n_j s_j²)/2 )
    π | γ, z, v ~ Dirichlet(α_0 + n),  where n = (n_1, ..., n_K)
    z_i | γ, v ∈ {1, ..., K},  with p(z_i = j | γ, v_i) = π_j p_N(v_i | μ_j, τ_j²) / Σ_{l=1}^{K} π_l p_N(v_i | μ_l, τ_l²)
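One full sweep over these conditionals can be sketched as follows. The function, the `prior` dictionary, and all hyperparameter names are illustrative (0-based component indices; the τ² update follows the slide in using n_j s_j²):

```python
import numpy as np

def mixture_gibbs_sweep(v, z, mu, tau2, pi, prior, rng):
    """One Gibbs sweep over the mixture full conditionals.
    `prior` holds per-component hyperparameters mu0, s0sq, n0, tau0sq, alpha0."""
    K = len(mu)
    for j in range(K):
        vj = v[z == j]
        nj = len(vj)
        # [mu_j | tau2_j, z, v]: precision-weighted normal
        prec = nj / tau2[j] + 1.0 / prior["s0sq"][j]
        mean = (vj.sum() / tau2[j] + prior["mu0"][j] / prior["s0sq"][j]) / prec
        mu[j] = rng.normal(mean, 1.0 / np.sqrt(prec))
        # [tau2_j | mu_j, z, v]: inverse-gamma
        ssq = ((vj - vj.mean())**2).sum() if nj > 0 else 0.0
        shape = (prior["n0"][j] + nj) / 2
        rate = (prior["n0"][j] * prior["tau0sq"][j] + ssq) / 2
        tau2[j] = 1.0 / rng.gamma(shape, 1.0 / rate)
    # [pi | z]: Dirichlet(alpha0 + n)
    counts = np.bincount(z, minlength=K)
    pi[:] = rng.dirichlet(prior["alpha0"] + counts)
    # [z_i | ...]: probabilities proportional to pi_j N(v_i | mu_j, tau2_j)
    logp = (np.log(pi) - 0.5 * np.log(2 * np.pi * tau2)
            - 0.5 * (v[:, None] - mu)**2 / tau2)
    logp -= logp.max(axis=1, keepdims=True)
    p = np.exp(logp); p /= p.sum(axis=1, keepdims=True)
    z[:] = (p.cumsum(axis=1) < rng.random(len(v))[:, None]).sum(axis=1)
    return mu, tau2, pi, z
```

On well-separated data the sweep quickly locks onto the component means, up to label switching.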

Slide 28: Gibbs Sampling for FLEX-SVM, Core Procedures
1. FFBS for h
2. Update ω, φ, σ_w by Bayesian regression
3. Estimate the density by updating the mixture components {π_i, μ_i, τ_i}_{i=1}^{K}

Slide 29: Gibbs Sampling for FLEX-SVM
1. Initialize h_{1:n}, θ = (ω, φ), and σ_w²
2. Initialize {μ_i, τ_i}_{i=1}^{K}
3. Make an initial assignment of z_{1:n}
4. FFBS: h_{1:n} | y_{1:n}, z_{1:n}, {μ_i, τ_i, π_i}_{i=1}^{K}, θ
5. Sample (update) θ and σ_w² using Bayesian regression
6. Compute n = (n_1, ..., n_K)
7. Sample {μ_i}_{i=1}^{K} | {τ_i², π_i}_{i=1}^{K}, v_{1:n}, z_{1:n} from the full conditionals
8. Sample {τ_i²}_{i=1}^{K} | {π_i, μ_i}_{i=1}^{K}, v_{1:n}, z_{1:n} from the full conditionals
9. Sample {π_i}_{i=1}^{K} | v_{1:n}, z_{1:n}, {μ_i, τ_i²}_{i=1}^{K} from the full conditionals
10. Sample z_{1:n} | v_{1:n}, {π_i, μ_i, τ_i²}_{i=1}^{K} from the full conditionals
11. Go to 4

Slide 30: FLEX-SVM Innovation Density Estimation, ε_t ~ t_2
Figure: FLEX-SVM density estimate of log ε_t² (empirical vs. true vs. prior).

Slide 31: FLEX-SVM Parameter Estimation, ε_t ~ t_2. CORRECT!
Figure: FLEX-SVM parameter estimation with t_2 innovations (φ, σ_w, ω): histograms, trace plots, and ACFs.

Slide 32: FLEX-SVM E(h_t) vs. True h_t, ε_t ~ t_2. CORRECT!
Figure: FLEX-SVM estimated h_t (black solid) versus true h_t (red dotted) with t_2 innovations.

Slide 33: Real Stock Data, S&P500 and 30 Dow Jones Stocks
S&P500 return series (rdata sp500r.txt).

Slide 34: Equivalent Model
Instead of the model introduced at the beginning, finance researchers often use the following parsimonious model:

    (I)   y_t = e^{h_t/2} ε_t,      h_{t+1} = ω + φ h_t + w_t
    (II)  y_t = β e^{h_t/2} ε_t,    h_{t+1} = φ h_t + w_t

where

    β = exp( ω / (2(1 - φ)) )

In the later sections we fit the parsimonious model (II) instead of model (I).
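The equivalence is easy to check numerically: ω/(1 - φ) is the stationary mean of h in model (I), so recentering h and moving e^{mean/2} into β leaves y_t unchanged. A sketch driven by common shocks (parameter values are ours):

```python
import numpy as np

# Verify that models (I) and (II) generate identical y_t paths
# when driven by the same shocks, with beta = exp(omega / (2 (1 - phi))).
rng = np.random.default_rng(7)
n, omega, phi, sigma_w = 1000, -0.2, 0.9, 0.3
eps = rng.standard_normal(n)
w = rng.standard_normal(n)

mu = omega / (1 - phi)                 # stationary mean of h in model (I)
beta = np.exp(omega / (2 * (1 - phi)))

h1 = np.empty(n); h2 = np.empty(n)
h1[0] = mu                             # model (I):  h_{t+1} = omega + phi h_t + w_t
h2[0] = 0.0                            # model (II): h_{t+1} = phi h_t + w_t
for t in range(1, n):
    h1[t] = omega + phi * h1[t - 1] + sigma_w * w[t]
    h2[t] = phi * h2[t - 1] + sigma_w * w[t]

y1 = np.exp(h1 / 2) * eps              # model (I)
y2 = beta * np.exp(h2 / 2) * eps       # model (II)
print(np.max(np.abs(y1 - y2)))         # ≈ 0
```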

Slide 35: SP500, FIX-SVM Parameter Estimation
Figure: histograms, trace plots, and ACFs for φ and σ_w. Table: FIX-SVM posterior mean, std, 5%, and 95% quantiles for φ and σ_w.

Slide 36: SP500, FLEX-SVM Innovation Density Estimation
Figure: estimated mixture density against log χ²_1, on the density and log-density scales.

Slide 37: SP500, FLEX-SVM/FIX-SVM Tail Behavior
Table: comparison of tail behavior at different thresholds k: P(v > k) and the implied number of observations under FIX-SVM and FLEX-SVM.

Slide 38: SP500, FLEX-SVM Parameter Estimation
Figure: histograms, trace plots, and ACFs for φ and σ_w. Table: FLEX-SVM posterior mean, std, 5%, and 95% quantiles for φ and σ_w.

Slide 39: SP500, FLEX-SVM/FIX-SVM Parameter Estimate Comparison
Figure: posterior densities of φ and σ_w under both models (based on 1781 draws). Table: posterior mean, std, 5%, and 95% quantiles for φ and σ_w under FIX and FLEX.

Slide 40: SP500, FLEX-SVM/FIX-SVM Model Comparison (h_t)
Figure: E(exp(h_t/2)) and E(h_t) under FLEX versus FIX.

Slide 41: Dow Jones
Figure: FLEX-SVM innovation density estimates, on the density and log-density scales, for (a, b) AXP, (c, d) BA, (e, f) GM, and (g, h) IBM.

Slide 42: Dow Jones (continued)
Figure: FLEX-SVM innovation density estimates for (i, j) JNJ, (k, l) MO, and (m, n) MRK.

Slide 43: Simulation Methodology, High-Performance MCMC Engine
Language and libraries: C/C++ (FFBS, core Gibbs sampler); Rmath, GSL, ATLAS (BLAS + LAPACK); R plot and gnuplot.
Achieves on average 4000 CPM (cycles per minute) for FIX-SVM and 2000 CPM for FLEX-SVM.
Future work: distributed computing. Construct a Sim-Grid for parallel simulation with batch jobs (such as a PBS queue or Condor).

Slide 44: Performance Comparison
Table: FFBS performance benchmark, C/C++ against R, 3000 sweeps, for several observation counts (one R timing reads 8.71 sec).
Table: full SVM estimation benchmark. GE (2100 obs): FIX-SVM 5.559 min, FLEX-SVM 12.28 min. S&P500 (6107 obs): FIX-SVM 12.20 min, FLEX-SVM 1667 sec (27.78 min).

Slide 45: Conclusion
The novel FLEX-SVM is presented to correctly estimate the density of the innovations by dynamically learning the parameters of a mixture of normals. The data analysis suggests FLEX-SVM is a better approach than FIX-SVM in the presence of non-Gaussian, particularly heavy-tailed, innovations. The desired estimation precision is achieved by sampling a sufficiently large number of variates from the MCMC, by virtue of the customized high-performance MCMC engine.

Slide 46: Future Work
Regard the number of mixture components, K, as a parameter (transdimensional jumps between different K). Incorporate particle filters (non-linear structure, non-Gaussian errors). Construct a Sim-Grid parallel simulation network to handle a variety of data series simultaneously.

Slide 47: References
[1] Berg, A., Meyer, R., and Yu, J. Deviance information criterion for comparing stochastic volatility models.
[2] Gamerman, D. and Lopes, H. F. Markov Chain Monte Carlo: Stochastic Simulation for Bayesian Inference. Chapman & Hall/CRC.
[3] Taylor, S. J. Modelling Financial Time Series. John Wiley & Sons.
[4] Jacquier, E., Polson, N. G., and Rossi, P. E. Bayesian analysis of stochastic volatility models. Journal of Business & Economic Statistics, 1994.
[5] Kim, S., Shephard, N., and Chib, S. Stochastic volatility: likelihood inference and comparison with ARCH models. The Review of Economic Studies, 65(3), Jul 1998.
[6] West, M. and Harrison, J. Bayesian Forecasting and Dynamic Models. Springer, 1998.

Slide 48: Extra, Asset Pricing (Heston Model)
An asset price that is a geometric Brownian motion:

    dS/S = μ dt + σ dB_s

where μ and σ are unknown constants and B_s is a Brownian motion under the risk-neutral measure. Equivalently,

    d log S = (μ - σ²/2) dt + σ dB_s

In the Heston model σ is not a constant but instead evolves as σ(t) = √v(t), with

    dv(t) = κ[θ - v(t)] dt + γ √v(t) dB_v,   κ, θ, γ > 0        (3)

where B_v is a Brownian motion under the risk-neutral measure having a constant correlation ρ with B_s.

Slide 49: Extra, Heston Model (continued)
We could discretize (3) as:

    log S(t_{i+1}) = log S(t_i) + (μ - ½ σ²(t_i)) Δt + √v(t_i) ΔB_s
    v(t_{i+1}) = v(t_i) + κ[θ - v(t_i)] Δt + γ √v(t_i) ΔB_v

A simple way to approximate (simulate) the changes ΔB_s and ΔB_v in the two correlated Brownian motions is to generate two independent standard normals z_1 and z_2 and take ΔB_s = √Δt z and ΔB_v = √Δt z̃, where z = z_1 and z̃ = ρ z_1 + √(1 - ρ²) z_2.
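The Euler scheme above can be sketched directly. The function name and parameter values are illustrative; the `max(v, 0)` truncation is our addition, a common fix since the Euler step can drive the variance slightly negative:

```python
import numpy as np

def simulate_heston(S0, v0, mu, kappa, theta, gamma, rho, dt, n, seed=0):
    """Euler discretization of the Heston model sketched above,
    with full truncation (max(v, 0)) to keep the variance usable."""
    rng = np.random.default_rng(seed)
    logS = np.empty(n + 1); v = np.empty(n + 1)
    logS[0], v[0] = np.log(S0), v0
    for i in range(n):
        z1, z2 = rng.standard_normal(2)
        dBs = np.sqrt(dt) * z1
        dBv = np.sqrt(dt) * (rho * z1 + np.sqrt(1 - rho**2) * z2)
        vp = max(v[i], 0.0)               # truncated variance
        logS[i + 1] = logS[i] + (mu - 0.5 * vp) * dt + np.sqrt(vp) * dBs
        v[i + 1] = v[i] + kappa * (theta - v[i]) * dt + gamma * np.sqrt(vp) * dBv
    return np.exp(logS), v
```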

Slide 50: Extra, Sampling (h | ω, φ, σ_w², y) with an Individual Sampler
Since

    y_t | h_t ~ N(0, e^{h_t})   and   h_t | h_{t-1} ~ N(ω + φ h_{t-1}, σ_w²),

we have

    p(h_t | h_{-t}, y_{1:n}) ∝ p(y_t | h_t) p(h_t | h_{t-1}) p(h_{t+1} | h_t)
                             = N(y_t | 0, e^{h_t}) · N(h_t | ω + φ h_{t-1}, σ_w²) · N(h_{t+1} | ω + φ h_t, σ_w²)
                             ∝ N(y_t | 0, e^{h_t}) · N(h_t | ω_t, σ²/(1 + φ²))

where, with μ = ω/(1 - φ),

    ω_t = μ + φ[(h_{t-1} - μ) + (h_{t+1} - μ)] / (1 + φ²)

Slide 51: Extra, Individual Sampler (continued)
Since

    p(y_t | h_t) ∝ exp{ -h_t/2 - (y_t²/2) e^{-h_t} },

let log f*(h_t | y_t) := -h_t/2 - (y_t²/2) e^{-h_t}, and observe that e^{-h_t} is a convex function of h_t, so it is bounded below by its tangent line at ω_t [5]:

    log f*(h_t | y_t) ≤ -h_t/2 - (y_t²/2)[ e^{-ω_t}(1 + ω_t) - h_t e^{-ω_t} ] := log g*(h_t | y_t)

Thus

    p(h_t | h_{-t}) p(y_t | h_t) ≤ p(h_t | h_{-t}) g*(h_t | y_t) ∝ N(h_t | ω_t, σ²/(1 + φ²)) g*(h_t | y_t) ∝ N(h_t | ω̃_t, σ²/(1 + φ²))

Slide 52: Extra, Individual Sampler (accept/reject)
Jacquier, Polson and Rossi [4] observed that p(h_t | h_{-t}, y_{1:n}) is dominated by N(h_t | ω̃_t, σ²/(1 + φ²)), where

    ω̃_t = ω_t + σ²/(2(1 + φ²)) · [y_t² e^{-ω_t} - 1]

For t = 1, ..., n:
1. Sample a candidate h̃_t ~ N(ω̃_t, σ²/(1 + φ²))
2. Accept h̃_t with probability

    f*(h̃_t | y_t, θ) / g*(h̃_t | y_t, θ) = exp{ -(y_t²/2) [ e^{-h̃_t} - e^{-ω_t}(1 + ω_t) + h̃_t e^{-ω_t} ] }

3. If rejected, return to step 1 and make a new proposal.

Slide 53: Extra, Quality of Approximation
Figure: true h_t versus filtered h_t with a single normal. Figure: true h_t versus filtered h_t with the 7-normal mixture.

Slide 54: Extra, Density Estimation by a Mixture Model
Given observations {v_1, ..., v_n}, the probability density of v can be written as

    p(v_i | γ) = Σ_{j=1}^{K} π_j p_N(v_i | μ_j, τ_j²)

where γ = {μ_1, ..., μ_K, τ_1², ..., τ_K², π_1, ..., π_K} and p_N(v_i | μ_j, τ_j²) is the normal PDF with mean μ_j and variance τ_j². Then

    p(v | γ) = Π_{i=1}^{n} [ Σ_{j=1}^{K} π_j p_N(v_i | μ_j, τ_j²) ]

Slide 55: Extra, Simulation Study (i), Normal Innovations ε_t ~ N(0, 1)
Figure: 4000 simulated observations (y_t and h_t) with normal innovation errors. n_obs = 4000, with fixed true values of ω, φ, and σ_w².

Slide 56: Extra, FIX-SVM Parameter Estimation, ε_t ~ N(0, 1)
Figure: FIX-SVM parameter estimation with normal innovations (φ, σ_w, ω): histograms, trace plots, and ACFs.

Slide 57: Extra, FIX-SVM E(h_t) vs. True h_t, ε_t ~ N(0, 1)
Figure: FIX-SVM estimated h_t (black solid) versus true h_t (red dotted) with normal innovations.

Slide 58: Extra, FLEX-SVM Innovation Density Estimation, ε_t ~ N(0, 1)
Figure: FLEX-SVM density estimate of log ε_t².

Slide 59: Extra, FLEX-SVM Parameter Estimation, ε_t ~ N(0, 1)
Figure: FLEX-SVM parameter estimation with normal innovations (φ, σ_w, ω): histograms, trace plots, and ACFs.

Slide 60: Extra, FLEX-SVM E(h_t) vs. True h_t, ε_t ~ N(0, 1)
Figure: FLEX-SVM estimated h_t (black solid) versus true h_t (red dotted) with normal innovations.

Slide 61: Extra, Dow Jones (GE), 2516 obs (rdata GE.txt)

Slide 62: Extra, FLEX-SVM Innovation Density Estimation (GE)
Figure: estimated mixture density against log χ²_1, on the density and log-density scales.

Slide 63: Extra, FLEX-SVM/FIX-SVM Model Comparison (Parameters, GE)
Figure: posterior densities of φ and σ_w under both models (based on 1001 draws).

Slide 64: Extra, FLEX-SVM/FIX-SVM Model Comparison (h_t, GE)
Figure: E(exp(h_t/2)) and E(h_t) under FLEX versus FIX.

Slide 65: Extra, DIC Comparison between FIX- and FLEX-SVM
Table: DIC for FIX-SVM and FLEX-SVM across the 30 Dow Jones stocks (AA (Alcoa), AIG, AXP, BA, C, CAT, DD, DIS, GE, GM, HD, HON, HPQ, IBM, INTC, JNJ, JPM, KO, MCD, MMM, MO, MRK, MSFT, PFE, PG, T, UTX, VZ, WMT, XOM) and the S&P500.


More information

Statistical Inference and Methods

Statistical Inference and Methods Department of Mathematics Imperial College London d.stephens@imperial.ac.uk http://stats.ma.ic.ac.uk/ das01/ 31st January 2006 Part VI Session 6: Filtering and Time to Event Data Session 6: Filtering and

More information

Learning the hyper-parameters. Luca Martino

Learning the hyper-parameters. Luca Martino Learning the hyper-parameters Luca Martino 2017 2017 1 / 28 Parameters and hyper-parameters 1. All the described methods depend on some choice of hyper-parameters... 2. For instance, do you recall λ (bandwidth

More information

Part 1: Expectation Propagation

Part 1: Expectation Propagation Chalmers Machine Learning Summer School Approximate message passing and biomedicine Part 1: Expectation Propagation Tom Heskes Machine Learning Group, Institute for Computing and Information Sciences Radboud

More information

Sequential Monte Carlo Methods (for DSGE Models)

Sequential Monte Carlo Methods (for DSGE Models) Sequential Monte Carlo Methods (for DSGE Models) Frank Schorfheide University of Pennsylvania, PIER, CEPR, and NBER October 23, 2017 Some References These lectures use material from our joint work: Tempered

More information

Particle Filtering Approaches for Dynamic Stochastic Optimization

Particle Filtering Approaches for Dynamic Stochastic Optimization Particle Filtering Approaches for Dynamic Stochastic Optimization John R. Birge The University of Chicago Booth School of Business Joint work with Nicholas Polson, Chicago Booth. JRBirge I-Sim Workshop,

More information

Cholesky Stochastic Volatility. August 2, Abstract

Cholesky Stochastic Volatility. August 2, Abstract Cholesky Stochastic Volatility Hedibert F. Lopes University of Chicago R. E. McCulloch University of Texas at Austin R. S. Tsay University of Chicago August 2, 2011 Abstract Multivariate volatility has

More information

Hierarchical models. Dr. Jarad Niemi. August 31, Iowa State University. Jarad Niemi (Iowa State) Hierarchical models August 31, / 31

Hierarchical models. Dr. Jarad Niemi. August 31, Iowa State University. Jarad Niemi (Iowa State) Hierarchical models August 31, / 31 Hierarchical models Dr. Jarad Niemi Iowa State University August 31, 2017 Jarad Niemi (Iowa State) Hierarchical models August 31, 2017 1 / 31 Normal hierarchical model Let Y ig N(θ g, σ 2 ) for i = 1,...,

More information

Modeling conditional distributions with mixture models: Applications in finance and financial decision-making

Modeling conditional distributions with mixture models: Applications in finance and financial decision-making Modeling conditional distributions with mixture models: Applications in finance and financial decision-making John Geweke University of Iowa, USA Journal of Applied Econometrics Invited Lecture Università

More information

Using Model Selection and Prior Specification to Improve Regime-switching Asset Simulations

Using Model Selection and Prior Specification to Improve Regime-switching Asset Simulations Using Model Selection and Prior Specification to Improve Regime-switching Asset Simulations Brian M. Hartman, PhD ASA Assistant Professor of Actuarial Science University of Connecticut BYU Statistics Department

More information

Labor-Supply Shifts and Economic Fluctuations. Technical Appendix

Labor-Supply Shifts and Economic Fluctuations. Technical Appendix Labor-Supply Shifts and Economic Fluctuations Technical Appendix Yongsung Chang Department of Economics University of Pennsylvania Frank Schorfheide Department of Economics University of Pennsylvania January

More information

Bayesian Linear Regression

Bayesian Linear Regression Bayesian Linear Regression Sudipto Banerjee 1 Biostatistics, School of Public Health, University of Minnesota, Minneapolis, Minnesota, U.S.A. September 15, 2010 1 Linear regression models: a Bayesian perspective

More information

Particle Filtering for Data-Driven Simulation and Optimization

Particle Filtering for Data-Driven Simulation and Optimization Particle Filtering for Data-Driven Simulation and Optimization John R. Birge The University of Chicago Booth School of Business Includes joint work with Nicholas Polson. JRBirge INFORMS Phoenix, October

More information

Markov Chain Monte Carlo methods

Markov Chain Monte Carlo methods Markov Chain Monte Carlo methods Tomas McKelvey and Lennart Svensson Signal Processing Group Department of Signals and Systems Chalmers University of Technology, Sweden November 26, 2012 Today s learning

More information

The Metropolis-Hastings Algorithm. June 8, 2012

The Metropolis-Hastings Algorithm. June 8, 2012 The Metropolis-Hastings Algorithm June 8, 22 The Plan. Understand what a simulated distribution is 2. Understand why the Metropolis-Hastings algorithm works 3. Learn how to apply the Metropolis-Hastings

More information

Pattern Recognition and Machine Learning. Bishop Chapter 11: Sampling Methods

Pattern Recognition and Machine Learning. Bishop Chapter 11: Sampling Methods Pattern Recognition and Machine Learning Chapter 11: Sampling Methods Elise Arnaud Jakob Verbeek May 22, 2008 Outline of the chapter 11.1 Basic Sampling Algorithms 11.2 Markov Chain Monte Carlo 11.3 Gibbs

More information

State-Space Methods for Inferring Spike Trains from Calcium Imaging

State-Space Methods for Inferring Spike Trains from Calcium Imaging State-Space Methods for Inferring Spike Trains from Calcium Imaging Joshua Vogelstein Johns Hopkins April 23, 2009 Joshua Vogelstein (Johns Hopkins) State-Space Calcium Imaging April 23, 2009 1 / 78 Outline

More information

Estimating Macroeconomic Models: A Likelihood Approach

Estimating Macroeconomic Models: A Likelihood Approach Estimating Macroeconomic Models: A Likelihood Approach Jesús Fernández-Villaverde University of Pennsylvania, NBER, and CEPR Juan Rubio-Ramírez Federal Reserve Bank of Atlanta Estimating Dynamic Macroeconomic

More information

Estimation of moment-based models with latent variables

Estimation of moment-based models with latent variables Estimation of moment-based models with latent variables work in progress Ra aella Giacomini and Giuseppe Ragusa UCL/Cemmap and UCI/Luiss UPenn, 5/4/2010 Giacomini and Ragusa (UCL/Cemmap and UCI/Luiss)Moments

More information

Modeling Ultra-High-Frequency Multivariate Financial Data by Monte Carlo Simulation Methods

Modeling Ultra-High-Frequency Multivariate Financial Data by Monte Carlo Simulation Methods Outline Modeling Ultra-High-Frequency Multivariate Financial Data by Monte Carlo Simulation Methods Ph.D. Student: Supervisor: Marco Minozzo Dipartimento di Scienze Economiche Università degli Studi di

More information

ComputationalToolsforComparing AsymmetricGARCHModelsviaBayes Factors. RicardoS.Ehlers

ComputationalToolsforComparing AsymmetricGARCHModelsviaBayes Factors. RicardoS.Ehlers ComputationalToolsforComparing AsymmetricGARCHModelsviaBayes Factors RicardoS.Ehlers Laboratório de Estatística e Geoinformação- UFPR http://leg.ufpr.br/ ehlers ehlers@leg.ufpr.br II Workshop on Statistical

More information

A Class of Non-Gaussian State Space Models. with Exact Likelihood Inference

A Class of Non-Gaussian State Space Models. with Exact Likelihood Inference A Class of Non-Gaussian State Space Models with Exact Likelihood Inference Drew D. Creal University of Chicago, Booth School of Business July 13, 2015 Abstract The likelihood function of a general non-linear,

More information

Spatial Dynamic Factor Analysis

Spatial Dynamic Factor Analysis Spatial Dynamic Factor Analysis Esther Salazar Federal University of Rio de Janeiro Department of Statistical Methods Sixth Workshop on BAYESIAN INFERENCE IN STOCHASTIC PROCESSES Bressanone/Brixen, Italy

More information

An introduction to Sequential Monte Carlo

An introduction to Sequential Monte Carlo An introduction to Sequential Monte Carlo Thang Bui Jes Frellsen Department of Engineering University of Cambridge Research and Communication Club 6 February 2014 1 Sequential Monte Carlo (SMC) methods

More information

Monte Carlo in Bayesian Statistics

Monte Carlo in Bayesian Statistics Monte Carlo in Bayesian Statistics Matthew Thomas SAMBa - University of Bath m.l.thomas@bath.ac.uk December 4, 2014 Matthew Thomas (SAMBa) Monte Carlo in Bayesian Statistics December 4, 2014 1 / 16 Overview

More information

Computer intensive statistical methods

Computer intensive statistical methods Lecture 11 Markov Chain Monte Carlo cont. October 6, 2015 Jonas Wallin jonwal@chalmers.se Chalmers, Gothenburg university The two stage Gibbs sampler If the conditional distributions are easy to sample

More information

Metropolis Hastings. Rebecca C. Steorts Bayesian Methods and Modern Statistics: STA 360/601. Module 9

Metropolis Hastings. Rebecca C. Steorts Bayesian Methods and Modern Statistics: STA 360/601. Module 9 Metropolis Hastings Rebecca C. Steorts Bayesian Methods and Modern Statistics: STA 360/601 Module 9 1 The Metropolis-Hastings algorithm is a general term for a family of Markov chain simulation methods

More information

The Kalman filter, Nonlinear filtering, and Markov Chain Monte Carlo

The Kalman filter, Nonlinear filtering, and Markov Chain Monte Carlo NBER Summer Institute Minicourse What s New in Econometrics: Time Series Lecture 5 July 5, 2008 The Kalman filter, Nonlinear filtering, and Markov Chain Monte Carlo Lecture 5, July 2, 2008 Outline. Models

More information

Time-Varying Parameters

Time-Varying Parameters Kalman Filter and state-space models: time-varying parameter models; models with unobservable variables; basic tool: Kalman filter; implementation is task-specific. y t = x t β t + e t (1) β t = µ + Fβ

More information

Cross-sectional space-time modeling using ARNN(p, n) processes

Cross-sectional space-time modeling using ARNN(p, n) processes Cross-sectional space-time modeling using ARNN(p, n) processes W. Polasek K. Kakamu September, 006 Abstract We suggest a new class of cross-sectional space-time models based on local AR models and nearest

More information

Markov Chain Monte Carlo Methods for Stochastic Optimization

Markov Chain Monte Carlo Methods for Stochastic Optimization Markov Chain Monte Carlo Methods for Stochastic Optimization John R. Birge The University of Chicago Booth School of Business Joint work with Nicholas Polson, Chicago Booth. JRBirge U of Toronto, MIE,

More information

GARCH Models Estimation and Inference

GARCH Models Estimation and Inference GARCH Models Estimation and Inference Eduardo Rossi University of Pavia December 013 Rossi GARCH Financial Econometrics - 013 1 / 1 Likelihood function The procedure most often used in estimating θ 0 in

More information

Point, Interval, and Density Forecast Evaluation of Linear versus Nonlinear DSGE Models

Point, Interval, and Density Forecast Evaluation of Linear versus Nonlinear DSGE Models Point, Interval, and Density Forecast Evaluation of Linear versus Nonlinear DSGE Models Francis X. Diebold Frank Schorfheide Minchul Shin University of Pennsylvania May 4, 2014 1 / 33 Motivation The use

More information

Computational statistics

Computational statistics Computational statistics Markov Chain Monte Carlo methods Thierry Denœux March 2017 Thierry Denœux Computational statistics March 2017 1 / 71 Contents of this chapter When a target density f can be evaluated

More information

Non-Parametric Bayes

Non-Parametric Bayes Non-Parametric Bayes Mark Schmidt UBC Machine Learning Reading Group January 2016 Current Hot Topics in Machine Learning Bayesian learning includes: Gaussian processes. Approximate inference. Bayesian

More information

Markov Chain Monte Carlo Methods for Stochastic

Markov Chain Monte Carlo Methods for Stochastic Markov Chain Monte Carlo Methods for Stochastic Optimization i John R. Birge The University of Chicago Booth School of Business Joint work with Nicholas Polson, Chicago Booth. JRBirge U Florida, Nov 2013

More information

State-space Model. Eduardo Rossi University of Pavia. November Rossi State-space Model Financial Econometrics / 49

State-space Model. Eduardo Rossi University of Pavia. November Rossi State-space Model Financial Econometrics / 49 State-space Model Eduardo Rossi University of Pavia November 2013 Rossi State-space Model Financial Econometrics - 2013 1 / 49 Outline 1 Introduction 2 The Kalman filter 3 Forecast errors 4 State smoothing

More information

Econ 423 Lecture Notes: Additional Topics in Time Series 1

Econ 423 Lecture Notes: Additional Topics in Time Series 1 Econ 423 Lecture Notes: Additional Topics in Time Series 1 John C. Chao April 25, 2017 1 These notes are based in large part on Chapter 16 of Stock and Watson (2011). They are for instructional purposes

More information

Lecture 6: Bayesian Inference in SDE Models

Lecture 6: Bayesian Inference in SDE Models Lecture 6: Bayesian Inference in SDE Models Bayesian Filtering and Smoothing Point of View Simo Särkkä Aalto University Simo Särkkä (Aalto) Lecture 6: Bayesian Inference in SDEs 1 / 45 Contents 1 SDEs

More information

Gibbs Sampling for the Probit Regression Model with Gaussian Markov Random Field Latent Variables

Gibbs Sampling for the Probit Regression Model with Gaussian Markov Random Field Latent Variables Gibbs Sampling for the Probit Regression Model with Gaussian Markov Random Field Latent Variables Mohammad Emtiyaz Khan Department of Computer Science University of British Columbia May 8, 27 Abstract

More information

Volatility. Gerald P. Dwyer. February Clemson University

Volatility. Gerald P. Dwyer. February Clemson University Volatility Gerald P. Dwyer Clemson University February 2016 Outline 1 Volatility Characteristics of Time Series Heteroskedasticity Simpler Estimation Strategies Exponentially Weighted Moving Average Use

More information

Kernel Sequential Monte Carlo

Kernel Sequential Monte Carlo Kernel Sequential Monte Carlo Ingmar Schuster (Paris Dauphine) Heiko Strathmann (University College London) Brooks Paige (Oxford) Dino Sejdinovic (Oxford) * equal contribution April 25, 2016 1 / 37 Section

More information

Bayesian Linear Models

Bayesian Linear Models Bayesian Linear Models Sudipto Banerjee September 03 05, 2017 Department of Biostatistics, Fielding School of Public Health, University of California, Los Angeles Linear Regression Linear regression is,

More information

CS281A/Stat241A Lecture 22

CS281A/Stat241A Lecture 22 CS281A/Stat241A Lecture 22 p. 1/4 CS281A/Stat241A Lecture 22 Monte Carlo Methods Peter Bartlett CS281A/Stat241A Lecture 22 p. 2/4 Key ideas of this lecture Sampling in Bayesian methods: Predictive distribution

More information

Hypothesis Testing. Econ 690. Purdue University. Justin L. Tobias (Purdue) Testing 1 / 33

Hypothesis Testing. Econ 690. Purdue University. Justin L. Tobias (Purdue) Testing 1 / 33 Hypothesis Testing Econ 690 Purdue University Justin L. Tobias (Purdue) Testing 1 / 33 Outline 1 Basic Testing Framework 2 Testing with HPD intervals 3 Example 4 Savage Dickey Density Ratio 5 Bartlett

More information

MCMC algorithms for fitting Bayesian models

MCMC algorithms for fitting Bayesian models MCMC algorithms for fitting Bayesian models p. 1/1 MCMC algorithms for fitting Bayesian models Sudipto Banerjee sudiptob@biostat.umn.edu University of Minnesota MCMC algorithms for fitting Bayesian models

More information

Nonlinear and/or Non-normal Filtering. Jesús Fernández-Villaverde University of Pennsylvania

Nonlinear and/or Non-normal Filtering. Jesús Fernández-Villaverde University of Pennsylvania Nonlinear and/or Non-normal Filtering Jesús Fernández-Villaverde University of Pennsylvania 1 Motivation Nonlinear and/or non-gaussian filtering, smoothing, and forecasting (NLGF) problems are pervasive

More information

CSCI-567: Machine Learning (Spring 2019)

CSCI-567: Machine Learning (Spring 2019) CSCI-567: Machine Learning (Spring 2019) Prof. Victor Adamchik U of Southern California Mar. 19, 2019 March 19, 2019 1 / 43 Administration March 19, 2019 2 / 43 Administration TA3 is due this week March

More information

1 Elements of Markov Chain Structure and Convergence

1 Elements of Markov Chain Structure and Convergence 1 Elements of Markov Chain Structure and Convergence 1.1 Definition Random quantities x in a p dimensional state space χ are generated sequentially according to a conditional or transition distribution

More information

A note on Reversible Jump Markov Chain Monte Carlo

A note on Reversible Jump Markov Chain Monte Carlo A note on Reversible Jump Markov Chain Monte Carlo Hedibert Freitas Lopes Graduate School of Business The University of Chicago 5807 South Woodlawn Avenue Chicago, Illinois 60637 February, 1st 2006 1 Introduction

More information

Gibbs Sampling in Linear Models #2

Gibbs Sampling in Linear Models #2 Gibbs Sampling in Linear Models #2 Econ 690 Purdue University Outline 1 Linear Regression Model with a Changepoint Example with Temperature Data 2 The Seemingly Unrelated Regressions Model 3 Gibbs sampling

More information

ECO 513 Fall 2008 C.Sims KALMAN FILTER. s t = As t 1 + ε t Measurement equation : y t = Hs t + ν t. u t = r t. u 0 0 t 1 + y t = [ H I ] u t.

ECO 513 Fall 2008 C.Sims KALMAN FILTER. s t = As t 1 + ε t Measurement equation : y t = Hs t + ν t. u t = r t. u 0 0 t 1 + y t = [ H I ] u t. ECO 513 Fall 2008 C.Sims KALMAN FILTER Model in the form 1. THE KALMAN FILTER Plant equation : s t = As t 1 + ε t Measurement equation : y t = Hs t + ν t. Var(ε t ) = Ω, Var(ν t ) = Ξ. ε t ν t and (ε t,

More information

Foundations of Statistical Inference

Foundations of Statistical Inference Foundations of Statistical Inference Julien Berestycki Department of Statistics University of Oxford MT 2016 Julien Berestycki (University of Oxford) SB2a MT 2016 1 / 32 Lecture 14 : Variational Bayes

More information

BAYESIAN METHODS FOR VARIABLE SELECTION WITH APPLICATIONS TO HIGH-DIMENSIONAL DATA

BAYESIAN METHODS FOR VARIABLE SELECTION WITH APPLICATIONS TO HIGH-DIMENSIONAL DATA BAYESIAN METHODS FOR VARIABLE SELECTION WITH APPLICATIONS TO HIGH-DIMENSIONAL DATA Intro: Course Outline and Brief Intro to Marina Vannucci Rice University, USA PASI-CIMAT 04/28-30/2010 Marina Vannucci

More information

Bayesian Inference in GLMs. Frequentists typically base inferences on MLEs, asymptotic confidence

Bayesian Inference in GLMs. Frequentists typically base inferences on MLEs, asymptotic confidence Bayesian Inference in GLMs Frequentists typically base inferences on MLEs, asymptotic confidence limits, and log-likelihood ratio tests Bayesians base inferences on the posterior distribution of the unknowns

More information

Marginal Specifications and a Gaussian Copula Estimation

Marginal Specifications and a Gaussian Copula Estimation Marginal Specifications and a Gaussian Copula Estimation Kazim Azam Abstract Multivariate analysis involving random variables of different type like count, continuous or mixture of both is frequently required

More information

Probabilistic Graphical Models

Probabilistic Graphical Models Probabilistic Graphical Models Brown University CSCI 2950-P, Spring 2013 Prof. Erik Sudderth Lecture 13: Learning in Gaussian Graphical Models, Non-Gaussian Inference, Monte Carlo Methods Some figures

More information

CPSC 540: Machine Learning

CPSC 540: Machine Learning CPSC 540: Machine Learning MCMC and Non-Parametric Bayes Mark Schmidt University of British Columbia Winter 2016 Admin I went through project proposals: Some of you got a message on Piazza. No news is

More information

BAYESIAN MODEL CRITICISM

BAYESIAN MODEL CRITICISM Monte via Chib s BAYESIAN MODEL CRITICM Hedibert Freitas Lopes The University of Chicago Booth School of Business 5807 South Woodlawn Avenue, Chicago, IL 60637 http://faculty.chicagobooth.edu/hedibert.lopes

More information

Session 5B: A worked example EGARCH model

Session 5B: A worked example EGARCH model Session 5B: A worked example EGARCH model John Geweke Bayesian Econometrics and its Applications August 7, worked example EGARCH model August 7, / 6 EGARCH Exponential generalized autoregressive conditional

More information

Parsimony inducing priors for large scale state-space models

Parsimony inducing priors for large scale state-space models Vol. 1 (2018) 1 38 DOI: 0000 Parsimony inducing priors for large scale state-space models Hedibert F. Lopes 1, Robert E. McCulloch 2 and Ruey S. Tsay 3 1 INSPER Institute of Education and Research, Rua

More information