
Hierarchical Markov Normal Mixture Models with Applications to Financial Asset Returns

Appendix: Proofs of Theorems and Conditional Posterior Distributions

John Geweke (a) and Gianni Amisano (b)

(a) Departments of Economics and Statistics, University of Iowa, USA
(b) European Central Bank, Frankfurt, Germany, and University of Brescia, Brescia, Italy

December 2008

Proofs of Theorems

Proof of Theorem 1. Using the methods of Rydén et al. (1998) for the Markov normal mixture model,

$$\mathrm{cov}(y_t, y_{t+s} \mid x_1, \ldots, x_T) = \mu' \Pi B^s \mu \quad (s = 1, 2, \ldots), \qquad (1)$$

where $\Pi = \mathrm{diag}(\pi)$ and $B = P - e_{m_1}\pi'$, which establishes sufficiency. If the eigenvalues of $P$ are distinct then $P$ is diagonable and has spectral decomposition $P = Q^{-1}\Lambda Q$, where the matrix $\Lambda = \mathrm{diag}(\lambda_1, \ldots, \lambda_{m_1})$ contains the ordered eigenvalues of $P$, $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_{m_1}$. The rows of $Q$ may be taken to be $q_1', \ldots, q_{m_1}'$ with $q_1 = \pi$, and the columns of $Q^{-1}$ to be $\bar{q}_1, \ldots, \bar{q}_{m_1}$ with $\bar{q}_1 = e_{m_1}$. If $P$ is also irreducible and aperiodic then $\lambda_1 = 1 > |\lambda_2|$ and we may write

$$B = Q^{-1}\Lambda Q - \bar{q}_1 q_1' = Q^{-1}\widetilde{\Lambda} Q, \qquad (2)$$

where $\widetilde{\Lambda} = \mathrm{diag}(0, \lambda_2, \ldots, \lambda_{m_1})$. From (1), absence of serial correlation is equivalent to $\mu'\Pi Q^{-1}\widetilde{\Lambda}^s Q\mu = 0$ $(s = 1, 2, \ldots)$. The first element of $Q\mu$ is $q_1'\mu = \pi'\mu = 0$, and so

$$\mu'\Pi Q^{-1}\Lambda^s Q\mu = 0 \quad (s = 1, 2, \ldots). \qquad (3)$$

Define the $(m_1 - 1) \times (m_1 - 1)$ matrix

$$D = \begin{bmatrix} \lambda_2 & \lambda_2^2 & \cdots & \lambda_2^{m_1-1} \\ \vdots & & & \vdots \\ \lambda_{m_1} & \lambda_{m_1}^2 & \cdots & \lambda_{m_1}^{m_1-1} \end{bmatrix},$$

whose determinant is $\prod_{i=2}^{m_1}\lambda_i \prod_{2 \le i < j \le m_1}(\lambda_j - \lambda_i) \neq 0$ (Rao (1965)). Let $A = D^{-1}$ and let $\delta_{ij}$ denote the Kronecker delta function; then

$$\sum_{s=1}^{m_1-1} a_{is}\lambda_j^s = \delta_{ij} \;\Longrightarrow\; \sum_{s=1}^{m_1-1} a_{is}\widetilde{\Lambda}^s = \widetilde{I}_i \quad (i = 2, \ldots, m_1),$$

where $\widetilde{I}_i$ denotes the $m_1 \times m_1$ matrix with 1 in position $(i,i)$ and zeros elsewhere, and from (3),

$$0 = \sum_{s=1}^{m_1-1} a_{is}\,\mu'\Pi Q^{-1}\widetilde{\Lambda}^s Q\mu = \mu'\Pi Q^{-1}\widetilde{I}_i Q\mu = \mu'\Pi\,\bar{q}_i q_i'\mu \quad (i = 2, \ldots, m_1).$$
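As a quick numerical illustration of the decomposition (2): the matrix $B = P - e_{m_1}\pi'$ has the same eigenvalues as $P$ except that the unit eigenvalue is replaced by zero, and $B^s = P^s - e_{m_1}\pi'$ for $s \ge 1$. The sketch below checks this for a toy transition matrix; all numerical values are illustrative assumptions, not quantities from the paper.

```python
import numpy as np

# Toy 3-state irreducible, aperiodic transition matrix (illustrative values only).
P = np.array([[0.90, 0.08, 0.02],
              [0.05, 0.85, 0.10],
              [0.10, 0.20, 0.70]])

# Stationary distribution pi: left eigenvector of P for eigenvalue 1, normalized to sum to 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
pi = pi / pi.sum()

iota = np.ones(3)                       # e_{m1}: the m1 x 1 vector of ones
B = P - np.outer(iota, pi)              # B = P - e_{m1} pi'

# Eigenvalues of B are those of P with the unit eigenvalue replaced by zero.
print(np.sort_complex(np.linalg.eigvals(P)))
print(np.sort_complex(np.linalg.eigvals(B)))

# B^s = P^s - e_{m1} pi' for s = 1, 2, ...
for s in (1, 2, 5):
    assert np.allclose(np.linalg.matrix_power(B, s),
                       np.linalg.matrix_power(P, s) - np.outer(iota, pi))
```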

Proof of Theorem 2. The instantaneous variance matrix $\Omega_0$ is obtained immediately by conditioning on the state and marginalizing:

$$\Omega_0 = E(z_t z_t') = E\big[E(z_t z_t' \mid s_t)\big] = \sum_{i=1}^{m} \pi_i (R_i + \mu_i\mu_i') = \sum_{i=1}^{m} \pi_i R_i + M\Pi M'.$$

The dynamic covariance matrices $\Omega_u$ $(u > 0)$ are obtained by conditioning on $s_t$ and $s_{t+u}$, exploiting serial independence of the observables after conditioning on the states, and then marginalizing out the states:

$$\Omega_u = \mathrm{cov}(z_t, z_{t+u}) = \sum_{i=1}^{m}\sum_{j=1}^{m} E(z_t \mid s_t = i)\, E(z_{t+u} \mid s_{t+u} = j)'\, P(s_t = i, s_{t+u} = j) - (M\pi)(M\pi)'$$
$$= M\Pi P^u M' - M\Pi e_m\pi' M' = M\Pi B^u M',$$

where $B^u = (P - e_m\pi')^u = P^u - e_m\pi'$.

Proof of Theorem 3. Adopt the notation in the proof of Theorem 2. From (2), $B^u = \sum_{i=2}^{m}\lambda_i^u \bar{q}_i q_i'$. Substituting in the expression for $\Omega_u$ in the statement of the theorem,

$$\Omega_u = \sum_{i=2}^{m}\lambda_i^u M\Pi\,\bar{q}_i q_i' M' = \sum_{i=2}^{r+1}\lambda_i^u A_i \quad (u = 1, 2, 3, \ldots),$$

where $\lambda_2, \ldots, \lambda_{r+1}$ now denote the distinct contributing eigenvalues and

$$A_i = \sum_{h \in H_i} M\Pi\,\bar{q}_h q_h' M', \qquad H_i = \{h : q_h'P = \lambda_i q_h',\; M\Pi\bar{q}_h \neq 0\}.$$

Observe that $r$ is the number of distinct eigenvalues of $P$ with modulus in the open unit interval that are associated with at least one column of $Q^{-1}$ not in the null space of $M\Pi$. In other words, $r$ can be less than $m - 1$ because some eigenvalues are equal to zero (as in the compound Markov model interpreted as having $m = m_1 m_2$ states), because some eigenvalues are repeated, or because some eigenvalues are associated with columns of $Q^{-1}$ all in the null space of $M\Pi$.

Define now a stochastic process $v_t$ with autocovariances $\widetilde{\Omega}_u = \sum_{i=2}^{r+1}\lambda_i^u A_i$ $(u > 0)$ and $\widetilde{\Omega}_0 = \sum_{i=2}^{r+1} A_i$. Then for $u > 0$, $\widetilde{\Omega}_u = \Omega_u$, while in general $\widetilde{\Omega}_0 \neq \Omega_0$. Notice that the matrix $\Omega_0 - \widetilde{\Omega}_0 = \sum_{i=1}^{m}\pi_i R_i$ is positive (semi)definite, since each $R_i$ is a variance matrix.

Given that there are $r$ distinct eigenvalues of $P$, $\lambda_2, \ldots, \lambda_{r+1}$, with modulus in the open unit interval contributing to the determination of $\Omega_u = \widetilde{\Omega}_u$, there exists a unique set of constants $\psi_1, \ldots, \psi_r$ such that

$$\lambda_i^r - \psi_1\lambda_i^{r-1} - \cdots - \psi_r = 0 \quad (i = 2, \ldots, r+1).$$

The coefficients $\psi_1, \ldots, \psi_r$ determine a degree $r$ polynomial whose roots are $\lambda_2^{-1}, \ldots, \lambda_{r+1}^{-1}$. Thus for all $u > r$,

$$\widetilde{\Omega}_u - \sum_{j=1}^{r}\psi_j\widetilde{\Omega}_{u-j} = \sum_{i=2}^{r+1}\lambda_i^{u-r}\Big(\lambda_i^r - \sum_{j=1}^{r}\psi_j\lambda_i^{r-j}\Big) A_i = 0.$$

The autocovariance function of $v_t$ therefore satisfies the Yule-Walker equations for a VAR($r$) process with coefficient matrices $\psi_1 I_n, \ldots, \psi_r I_n$.
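Since all three proofs run through the covariance identity in (1), a minimal simulation check may be helpful. The transition matrix, state means, and noise scale below are illustrative assumptions; the empirical lag-$s$ autocovariances of a simulated Markov normal mixture should be close to $\mu'\Pi B^s\mu$.

```python
import numpy as np

rng = np.random.default_rng(0)

P = np.array([[0.95, 0.04, 0.01],          # illustrative persistent-state transition matrix
              [0.03, 0.90, 0.07],
              [0.05, 0.15, 0.80]])
mu = np.array([-1.0, 0.0, 2.0])            # illustrative state means

# Stationary distribution, Pi = diag(pi), and B = P - e pi' as in the proof of Theorem 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))]); pi = pi / pi.sum()
Pi = np.diag(pi)
B = P - np.outer(np.ones(3), pi)

# Simulate y_t = mu_{s_t} + noise_t from the stationary chain.
T = 200_000
s = np.empty(T, dtype=int)
s[0] = rng.choice(3, p=pi)
for t in range(1, T):
    s[t] = rng.choice(3, p=P[s[t - 1]])
y = mu[s] + 0.5 * rng.standard_normal(T)

for lag in (1, 2, 3):
    analytic = mu @ Pi @ np.linalg.matrix_power(B, lag) @ mu      # mu' Pi B^s mu, as in (1)
    empirical = np.cov(y[:-lag], y[lag:])[0, 1]
    print(lag, round(analytic, 3), round(empirical, 3))
```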

Details of the Markov chain Monte Carlo algorithm

Let $s_1 = (s_{11}, \ldots, s_{T1})'$. Then

$$p(s_1 \mid \theta) = \pi_{s_{11}}\prod_{t=2}^{T} p(s_{t1} \mid s_{t-1,1}) = \pi_{s_{11}}\prod_{i=1}^{m_1}\prod_{j=1}^{m_1} p_{ij}^{T_{ij}}, \qquad (4)$$

where $T_{ij}$ is the number of transitions from persistent state $i$ to persistent state $j$ in $s_1$. The $m_1 \times m_1$ Markov transition matrix $P$ is irreducible and aperiodic, and $\pi = (\pi_1, \ldots, \pi_{m_1})'$ is the unique stationary distribution of $\{s_{t1}\}$. Let $s_2 = (s_{12}, \ldots, s_{T2})'$ denote all $T$ transitory states. Then

$$p(s_2 \mid s_1, \theta) = \prod_{t=1}^{T} r_{s_{t1},s_{t2}} = \prod_{i=1}^{m_1}\prod_{j=1}^{m_2} r_{ij}^{U_{ij}}, \qquad (5)$$

where $U_{ij}$ is the number of occurrences of $s_t = (i, j)$ $(t = 1, \ldots, T)$.

The observables $y_t$ depend on the latent states $s_t$ and the deterministic variables $x_t$. If $s_t = (i, j)$ then

$$y_t = \beta' x_t + \alpha_i + \phi_{ij} + \varepsilon_t, \qquad \varepsilon_t \sim N\big(0, (h\, h_i\, h_{ij})^{-1}\big). \qquad (6)$$

Conditional on $(x_t, s_t)$ $(t = 1, \ldots, T)$ the $y_t$ are independent. From (6) one expression for this distribution is

$$p(y \mid s, \theta) = (2\pi)^{-T/2}\, h^{T/2} \prod_{i=1}^{m_1} h_i^{T_i/2} \prod_{i=1}^{m_1}\prod_{j=1}^{m_2} h_{ij}^{U_{ij}/2}\, \exp\Big[-\tfrac{h}{2}\sum_{i=1}^{m_1} h_i \sum_{j=1}^{m_2} h_{ij} \sum_{t:\, s_t=(i,j)} \varepsilon_t^2\Big], \qquad (7)$$

where $T_i = \sum_{j=1}^{m_2} U_{ij}$ is the number of periods spent in persistent state $i$.

The unconditional mean of the transitory state effects within each permanent state is zero, which is equivalent to $r_i'\phi_i = 0$ $(i = 1, \ldots, m_1)$. Let $C_i$ be an $m_2 \times (m_2 - 1)$ orthonormal complement of $r_i$, define the $(m_2 - 1) \times 1$ vectors $\widetilde{\phi}_i = C_i'\phi_i$, and note that $\phi_i = C_i\widetilde{\phi}_i$ $(i = 1, \ldots, m_1)$. Construct the $m_1 m_2 \times m_1(m_2 - 1)$ block diagonal matrix $C = \mathrm{Blockdiag}[C_1, \ldots, C_{m_1}]$ and the $m_1(m_2 - 1) \times 1$ vector $\widetilde{\phi} = (\widetilde{\phi}_1', \ldots, \widetilde{\phi}_{m_1}')'$. Then $\phi = C\widetilde{\phi}$, and substituting in equation (7) at the end of Section 2.1.1 of the paper,

$$y_t = \beta' x_t + \widetilde{\alpha}' C_0' z_{t1} + \widetilde{\phi}' C' z_{t2} + \varepsilon_t, \qquad (8)$$

where $z_{t1}$ and $z_{t2}$ are the indicator vectors of the persistent and of the combined states, and $\alpha = C_0\widetilde{\alpha}$ with $C_0$ an $m_1 \times (m_1 - 1)$ orthonormal complement of $\pi$.
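The counts $T_{ij}$ and $U_{ij}$ in (4) and (5) reappear in the conditional posterior distributions of $P$ and $R$ below, so it is worth being explicit about how they are tabulated from one draw of the latent states. A minimal sketch follows; the state paths here are random placeholders rather than output of the sampler.

```python
import numpy as np

rng = np.random.default_rng(1)
m1, m2, T = 3, 4, 500

# Placeholder draws of the latent states; in the sampler these come from the
# conditional draw of S described at the end of this appendix.
s1 = rng.integers(0, m1, size=T)           # persistent states s_{t1}
s2 = rng.integers(0, m2, size=T)           # transitory states s_{t2}

# T_ij: number of transitions from persistent state i to j, as in (4).
T_counts = np.zeros((m1, m1), dtype=int)
np.add.at(T_counts, (s1[:-1], s1[1:]), 1)

# U_ij: number of occurrences of s_t = (i, j), as in (5).
U_counts = np.zeros((m1, m2), dtype=int)
np.add.at(U_counts, (s1, s2), 1)

print(T_counts, T_counts.sum())            # the transition counts sum to T - 1
print(U_counts, U_counts.sum())            # the occupancy counts sum to T
```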

Expression (8) has the form $y_t = \gamma' w_t + \varepsilon_t$ in which the $(k + m_1 m_2 - 1) \times 1$ vector $\gamma = (\beta', \widetilde{\alpha}', \widetilde{\phi}')'$ and

$$w_t' = \big(x_t',\; z_{t1}' C_0,\; z_{t2}' C\big). \qquad (9)$$

Thus conditional on the latent states $s_t$ (equivalently $z_{t1}$ and $z_{t2}$) $(t = 1, \ldots, T)$, and given the restrictions on the state means, (6) is a linear regression model with highly structured heteroscedasticity. If we take $\lambda_t = h_{s_{t1}} h_{s_t}$, then

$$p(y \mid s, \theta) = (2\pi)^{-T/2} h^{T/2} \prod_{t=1}^{T}\lambda_t^{1/2}\exp\Big[-\tfrac{h}{2}\sum_{t=1}^{T}\lambda_t\varepsilon_t^2\Big] = (2\pi)^{-T/2} h^{T/2} \prod_{t=1}^{T}\lambda_t^{1/2}\exp\Big[-\tfrac{h}{2}(y - W\gamma)'\Lambda(y - W\gamma)\Big], \qquad (10)$$

where $\Lambda = \mathrm{diag}(\lambda_1, \ldots, \lambda_T)$ and $W$ is the $T \times (k + m_1 m_2 - 1)$ matrix with rows $w_t'$.

The kernel of the prior density is the product of the following expressions:

$$p(\beta) \propto \exp\big[-(\beta - \underline{\beta})'\underline{H}_\beta(\beta - \underline{\beta})/2\big] \qquad (11)$$
$$p(p_i) \propto \prod_{j=1}^{m_1} p_{ij}^{\underline{r}_1 - 1} \quad (i = 1, \ldots, m_1) \qquad (12)$$
$$p(r_i) \propto \prod_{j=1}^{m_2} r_{ij}^{\underline{r}_2 - 1} \quad (i = 1, \ldots, m_1) \qquad (13)$$
$$p(h) \propto h^{(\underline{\nu} - 2)/2}\exp(-\underline{s}^2 h/2) \qquad (14)$$
$$p(h_i) \propto h_i^{(\underline{\nu}_1 - 2)/2}\exp(-\underline{s}_1^2 h_i/2) \quad (i = 1, \ldots, m_1) \qquad (15)$$
$$p(h_{ij}) \propto h_{ij}^{(\underline{\nu}_2 - 2)/2}\exp(-\underline{s}_2^2 h_{ij}/2) \quad (i = 1, \ldots, m_1;\; j = 1, \ldots, m_2) \qquad (16)$$
$$p(\widetilde{\alpha} \mid h) \propto h^{(m_1 - 1)/2}\exp\big(-h\,\underline{h}_\alpha\,\widetilde{\alpha}'\widetilde{\alpha}/2\big) \quad \text{(using } C_0'C_0 = I_{m_1-1}\text{)} \qquad (17)$$
$$p(\widetilde{\phi}_i \mid h, h_i) \propto (h\, h_i)^{(m_2 - 1)/2}\exp\big(-h\, h_i\,\underline{h}_\phi\,\widetilde{\phi}_i'\widetilde{\phi}_i/2\big) \quad (i = 1, \ldots, m_1) \qquad (18)$$
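A compact way to read (11)-(18) is as a recipe for simulating one draw from the hierarchical prior: the precisions are scaled chi-square draws and, given them, $\widetilde{\alpha}$ and the $\widetilde{\phi}_i$ are Gaussian. The sketch below does exactly that; the hyperparameter values (and the symbols $\underline{h}_\alpha$, $\underline{h}_\phi$ used here for the two prior precision scales) are assumptions for illustration, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(2)
m1, m2, k = 2, 3, 2

# Illustrative prior hyperparameters (assumptions only).
nu, s2 = 5.0, 5.0            # (14): s2 * h ~ chi^2(nu)
nu1, s21 = 5.0, 5.0          # (15): s21 * h_i ~ chi^2(nu1)
nu2, s22 = 5.0, 5.0          # (16): s22 * h_ij ~ chi^2(nu2)
h_alpha, h_phi = 1.0, 1.0    # prior precision scales in (17) and (18)
r1, r2 = 2.0, 2.0            # Dirichlet-kernel parameters in (12) and (13)
beta0, tau = np.zeros(k), 1.0

# One joint draw from the prior.
beta = beta0 + rng.standard_normal(k) / np.sqrt(tau)                       # (11) with H_beta = tau I
P = np.stack([rng.dirichlet(np.full(m1, r1)) for _ in range(m1)])          # (12)
R = np.stack([rng.dirichlet(np.full(m2, r2)) for _ in range(m1)])          # (13)
h = rng.chisquare(nu) / s2                                                 # (14)
h_i = rng.chisquare(nu1, size=m1) / s21                                    # (15)
h_ij = rng.chisquare(nu2, size=(m1, m2)) / s22                             # (16)
alpha_tilde = rng.standard_normal(m1 - 1) / np.sqrt(h * h_alpha)           # (17)
phi_tilde = [rng.standard_normal(m2 - 1) / np.sqrt(h * h_i[i] * h_phi)     # (18)
             for i in range(m1)]
print(h, h_i, h_ij, alpha_tilde, phi_tilde)
```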

From (18), the joint prior of the $\widetilde{\phi}_i$ is

$$p(\widetilde{\phi}_1, \ldots, \widetilde{\phi}_{m_1} \mid h_1, \ldots, h_{m_1}, h) = \prod_{i=1}^{m_1} p(\widetilde{\phi}_i \mid h, h_i) \qquad (19)$$
$$\propto h^{m_1(m_2-1)/2}\prod_{i=1}^{m_1} h_i^{(m_2-1)/2}\exp\Big[-\tfrac{h\,\underline{h}_\phi}{2}\sum_{i=1}^{m_1} h_i\,\widetilde{\phi}_i'\widetilde{\phi}_i\Big] \qquad (20)$$
$$= h^{m_1(m_2-1)/2}\Big[\prod_{i=1}^{m_1} h_i^{(m_2-1)/2}\Big]\exp\Big\{-\tfrac{h\,\underline{h}_\phi}{2}\,\widetilde{\phi}'\big[\mathrm{diag}(h_1, \ldots, h_{m_1}) \otimes I_{m_2-1}\big]\widetilde{\phi}\Big\}. \qquad (21)$$

Conditional posterior distribution of $h$. From (14), (17), (20), and (7),

$$\bar{s}^2 h \mid (\theta_{-h}, s, y) \sim \chi^2(\bar{\nu}),$$
$$\bar{s}^2 = \underline{s}^2 + \underline{h}_\alpha\,\widetilde{\alpha}'\widetilde{\alpha} + \underline{h}_\phi\sum_{i=1}^{m_1} h_i\,\widetilde{\phi}_i'\widetilde{\phi}_i + \sum_{t=1}^{T}\lambda_t\varepsilon_t^2, \qquad \bar{\nu} = \underline{\nu} + (m_1 - 1) + m_1(m_2 - 1) + T.$$

Conditional posterior distribution of the $h_i$. From (15), (20), and (7),

$$\bar{s}_i^2 h_i \mid (\theta_{-h_i}, s, y) \sim \chi^2(\bar{\nu}_i),$$
$$\bar{s}_i^2 = \underline{s}_1^2 + h\,\underline{h}_\phi\,\widetilde{\phi}_i'\widetilde{\phi}_i + h\sum_{j=1}^{m_2} h_{ij}\sum_{t:\, s_t=(i,j)}\varepsilon_t^2, \qquad \bar{\nu}_i = \underline{\nu}_1 + (m_2 - 1) + T_i \quad (i = 1, \ldots, m_1).$$

Conditional posterior distribution of the $h_{ij}$. From (16) and (7),

$$\bar{s}_{ij}^2 h_{ij} \mid (\theta_{-h_{ij}}, s, y) \sim \chi^2(\bar{\nu}_{ij}),$$
$$\bar{s}_{ij}^2 = \underline{s}_2^2 + h\, h_i\sum_{t:\, s_t=(i,j)}\varepsilon_t^2, \qquad \bar{\nu}_{ij} = \underline{\nu}_2 + U_{ij} \quad (i = 1, \ldots, m_1;\; j = 1, \ldots, m_2).$$
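Each of the three precision updates above has the same mechanical form: assemble $\bar{s}^2$ and $\bar{\nu}$, then set the precision to a $\chi^2(\bar{\nu})$ draw divided by $\bar{s}^2$. A minimal sketch of that one-line draw, with made-up posterior quantities:

```python
import numpy as np

rng = np.random.default_rng(3)

def draw_precision(rng, s2_bar, nu_bar):
    """Draw a precision from the conditional posterior  s2_bar * h ~ chi^2(nu_bar)."""
    return rng.chisquare(nu_bar) / s2_bar

# Illustrative posterior quantities only; in the sampler s2_bar and nu_bar are
# assembled from the prior hyperparameters, the residuals, alpha~ and phi~,
# exactly as in the expressions above.
s2_bar, nu_bar = 145.2, 312.0
h_draw = draw_precision(rng, s2_bar, nu_bar)
print(h_draw, nu_bar / s2_bar)   # one draw vs. the conditional posterior mean
```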

Conditional posterior distribution of $P$. From (12), (7), and (4),

$$p(P \mid \theta_{-P}, s, y) \propto \pi_{s_{11}}\Big[\prod_{i=1}^{m_1}\prod_{j=1}^{m_1} p_{ij}^{\underline{r}_1 + T_{ij} - 1}\Big]\exp\Big[-\tfrac{h}{2}\sum_{t=1}^{T}\lambda_t\varepsilon_t^2\Big].$$

Use a Metropolis within Gibbs step for each row of $P$. Draw the candidate $p_i^* \sim \mathrm{Beta}(\underline{r}_1 + T_{i1}, \ldots, \underline{r}_1 + T_{im_1})$. Account must be taken of the fact that because $\varepsilon_t = y_t - \beta'x_t - \phi_{s_t} - z_{t1}'C_0\widetilde{\alpha}$, $C_0$ is a function of $\pi$ and therefore of $P$. Let $C_0^*$ be the orthonormal complement of $\pi^*$ corresponding to the resulting $P^*$ and compute $\varepsilon_t^* = y_t - \beta'x_t - \phi_{s_t} - z_{t1}'C_0^*\widetilde{\alpha}$. The Metropolis acceptance ratio is

$$\frac{\pi^*_{s_{11}}\exp\big[-\tfrac{h}{2}\sum_{t=1}^{T}\lambda_t\varepsilon_t^{*2}\big]}{\pi_{s_{11}}\exp\big[-\tfrac{h}{2}\sum_{t=1}^{T}\lambda_t\varepsilon_t^2\big]}.$$

If the candidate is accepted, then $P$ is updated to $P^*$, $\pi$ to $\pi^*$, and $C_0$ to $C_0^*$.

The orthonormal complement $C_0$ of $\pi$ is not unique. As discussed in Section 2.1.2, nothing substantive in the model depends on which $C_0$ is used. However, if $C_0$ is not a smooth function of $\pi$ then the candidate will be rejected more often than if it is, because $C_0\widetilde{\alpha}$ will change more. To construct a unique orthonormal complement $C$ that is a smooth function of a vector of probabilities $\pi$ with $\sum_{i=1}^{m}\pi_i = 1$, note that $\pi_i \in (0, 1)$ with probability 1 $(i = 1, \ldots, m)$. Construct a matrix $\widetilde{C}$ as follows. The first column of $\widetilde{C}$ is $\widetilde{c}_{11} = \pi_2$, $\widetilde{c}_{21} = -\pi_1$, $\widetilde{c}_{i1} = 0$ $(i = 3, \ldots, m)$. The $j$-th column of $\widetilde{C}$ is $\widetilde{c}_{ij} = \pi_i$ $(i = 1, \ldots, j)$, $\widetilde{c}_{j+1,j} = -\sum_{i=1}^{j}\pi_i^2/\pi_{j+1}$, $\widetilde{c}_{ij} = 0$ $(i = j + 2, \ldots, m)$. Construct $C$ from $\widetilde{C}$ by normalizing the columns to each have Euclidean length 1. (A small numerical sketch of this construction follows the references.)

Conditional posterior distribution of $R$. From (13), (7), and (5),

$$p(r_i \mid \theta_{-r_i}, s, y) \propto \Big[\prod_{k=1}^{m_2} r_{ik}^{\underline{r}_2 + U_{ik} - 1}\Big]\exp\Big[-\tfrac{h}{2}\sum_{t:\, s_{t1}=i}\lambda_t\varepsilon_t^2\Big] \quad (i = 1, \ldots, m_1).$$

Use a Metropolis within Gibbs step for each row of $R$. Note that in $\varepsilon_t = y_t - \beta'x_t - \alpha_{s_{t1}} - z_{t2}'C_i\widetilde{\phi}_i$, $C_i$ is a function of $r_i$ whenever $s_{t1} = i$. Draw the candidate $r_i^*$ from $\mathrm{Beta}(\underline{r}_2 + U_{i1}, \ldots, \underline{r}_2 + U_{i,m_2})$. Let $C_i^*$ be the orthonormal complement of $r_i^*$. For all $t$ for which $s_{t1} = i$, compute $\varepsilon_t^* = y_t - \beta'x_t - \alpha_{s_{t1}} - z_{t2}'C_i^*\widetilde{\phi}_i$. The Metropolis acceptance ratio is

$$\frac{\exp\big[-\tfrac{h}{2}\sum_{t:\, s_{t1}=i}\lambda_t\varepsilon_t^{*2}\big]}{\exp\big[-\tfrac{h}{2}\sum_{t:\, s_{t1}=i}\lambda_t\varepsilon_t^2\big]}.$$

The Metropolis step is used only after the first 1,000 iterations.

Conditional posterior distribution of $\gamma$. Recall that $y_t = \gamma'w_t + \varepsilon_t$, with $\gamma' = (\beta', \widetilde{\alpha}', \widetilde{\phi}')$ and $w_t' = (x_t',\; z_{t1}'C_0,\; z_{t2}'C)$. From (11), (17), (21), and (10),

$$\gamma \mid (\theta_{-\gamma}, s, y) \sim N\big(\bar{\gamma}, \bar{H}^{-1}\big), \qquad \bar{H} = H_\gamma + h\sum_{t=1}^{T}\lambda_t w_t w_t', \qquad (22)$$

with mean $\bar{\gamma} = \bar{H}^{-1}\bar{c}$, where

$$H_\gamma = \begin{bmatrix} \underline{H}_\beta & 0 & 0 \\ 0 & H_\alpha & 0 \\ 0 & 0 & H_\phi \end{bmatrix}, \qquad H_\alpha = h\,\underline{h}_\alpha I_{m_1-1}, \qquad H_\phi = h\,\underline{h}_\phi\big[\mathrm{diag}(h_1, \ldots, h_{m_1}) \otimes I_{m_2-1}\big],$$

$$\bar{c} = c + h\sum_{t=1}^{T}\lambda_t w_t y_t, \qquad c' = \big(\underline{\beta}'\underline{H}_\beta,\; 0',\; 0'\big).$$

Drawing the state matrix $S$. The final step of the MCMC algorithm is the draw of the $T \times 2$ matrix of latent states from its distribution conditional on the parameters and the observed $y$ and $x$. Define

$$d_{tij} = p\big[y_t \mid s_t = (i, j), x_t, \theta\big] = (2\pi)^{-1/2}(h\,h_i\,h_{ij})^{1/2}\exp\big[-\tfrac{1}{2}\,h\,h_i\,h_{ij}\,(y_t - \beta'x_t - \alpha_i - \phi_{ij})^2\big]$$

and $d_{ti} = p(y_t \mid s_{t1} = i, x_t, \theta) = \sum_{j=1}^{m_2} r_{ij}\, d_{tij}$. We draw $s \sim p(s \mid \theta, y)$ as a two-step marginal-conditional: $s_1 \sim P(s_1 \mid \theta, y)$ followed by $s_2 \sim P(s_2 \mid s_1, \theta, y)$. First, given the $d_{ti}$ $(t = 1, \ldots, T;\; i = 1, \ldots, m_1)$ and $P$, the algorithm of Chib (1996) draws $s_1 \sim P(s_1 \mid \theta, y)$ and provides $p(y \mid \theta)$ as a byproduct of the computations. Then the transitory states $s_{t2}$ are conditionally independent with $P(s_{t2} = j \mid s_{t1} = i, y_t, x_t, \theta) \propto r_{ij}\, d_{tij}$.

References

Chib S. 1996. Calculating posterior distributions and modal estimates in Markov mixture models. Journal of Econometrics 75: 79-97.

Rao CR. 1965. Linear Statistical Inference and Its Applications. New York: Wiley.

Rydén T, Teräsvirta T, Åsbrink S. 1998. Stylized facts of daily return series and the hidden Markov model. Journal of Applied Econometrics 13: 217-244.
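The smooth orthonormal complement is the one purely computational device in the sampler whose details matter for the Metropolis acceptance rates, so a small sketch may be useful. The column recipe below follows the construction described above (the exact pre-normalization scaling of each column is an assumption); the checks at the end confirm the two properties the sampler needs, $C'\pi = 0$ and $C'C = I_{m-1}$. With $p = \pi$ this produces $C_0$ for the $P$ step, and with $p = r_i$ it produces $C_i$ for the $R$ step.

```python
import numpy as np

def smooth_complement(p):
    """Orthonormal complement C of a probability vector p (all entries in (0, 1)),
    built column by column and then column-normalized, so that C' p = 0, C' C = I,
    and C varies smoothly with p."""
    p = np.asarray(p, dtype=float)
    m = p.size
    C = np.zeros((m, m - 1))
    for j in range(m - 1):                        # column j+1 in the text's 1-based indexing
        C[: j + 1, j] = p[: j + 1]
        C[j + 1, j] = -np.sum(p[: j + 1] ** 2) / p[j + 1]
        # entries below row j+1 remain zero
    return C / np.linalg.norm(C, axis=0)          # Euclidean length 1 for each column

pi = np.array([0.5, 0.3, 0.15, 0.05])             # illustrative probability vector
C = smooth_complement(pi)
print(np.allclose(C.T @ pi, 0.0))                 # C' pi = 0
print(np.allclose(C.T @ C, np.eye(len(pi) - 1)))  # orthonormal columns
```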

where wh he mean s = H 1 c wh H = 4 H 0 0 0 H 0 0 0 H 3 5, H = h hi m1 1and H = h h Dag (h 1 ; : : : ; h m1 ) ; c = c + h T w y, c 0 = 0 H 0 ; 0 0 : Drawng he sae marx S. The nal sep of he MCMC algorhm s he draw of he T marx of laen saes from s dsrbuon condonal on he parameers and observed and. De ne d = p [y s = (; ) ; x ; ] h = () 1= (hh h ) 1= exp hh h y 0 x = and m d = p (y s 1 = ; x ; ) = d. We draw s sp (s ; y; ) as a wo sep margnal-condonal, s 1 s P (s 1 ; y; ) followed by s s P (s s 1,; y; ). Frs, gven d ( = 1; : : : ; T; = 1; : : : ; m 1 ) and P, he algorhm of Chb (1996) draws s 1 s P (s 1 ; y; ) and provdes p (y ) as a byproduc of he compuaons. Then he ransory saes s are condonally ndependen wh P (s = s 1 = ; y ; x ; ) _ d. References Chb S. 1996. Calculang poseror dsrbuons and modal esmaes n Markov mxure models. Journal of Economercs 75: 79-97. Rao CR. 1965. Lnear Sascal Inference and Is Applcaons. New ork: Wley. Rydén T, Teräsvra T, Åsbrnk S. 1998. Sylzed facs of daly reurn seres and he hdden Markov model. Journal of Appled Economercs 13: 17-44. 9