Co-integration in Continuous and Discrete Time: The State-Space Error-Correction Model


Bernard Hanzon and Thomas Ribarits

University College Cork, School of Mathematical Sciences, Cork, Ireland
European Investment Bank, Risk Management Directorate, Financial Risk Department, 100, boulevard Konrad Adenauer, L-2950 Luxembourg

Summary

In this paper¹ we consider co-integrated I(1) processes in the state-space framework. We introduce the state-space error-correction model (SSECM) and provide a complete treatment of how to estimate SSECMs by maximum likelihood methods, including reduced rank regression techniques. In doing so, we follow very closely the Johansen approach for the VAR case; see Johansen (1995). The remaining free parameters are represented using a novel type of local parametrization. An application to UK money market zero rates shows the usefulness of the new approach.

Keywords: Co-integration, state-space models, data-driven local coordinates, maximum likelihood estimation, reduced rank regression.

1. INTRODUCTION

In dynamical stochastic modelling, including financial dynamic stochastic modelling, one distinguishes stationary models and non-stationary models. The simplest example of a non-stationary model is the well-known random walk. Mean-reverting models are examples of stationary models. Intuitively, when the processes run over a long enough period of time, a non-stationary process will tend to become large compared to a stationary process. Granger's idea (for which he received the Nobel prize in 2003) was that two variables which are both non-stationary can still allow a linear combination which is stationary: the variables are then co-integrated. Such linear combinations are interpreted as equilibrium relations. This idea generalizes to the case of more than two variables in a straightforward way.
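Granger's idea can be illustrated with a small simulation: two series that load on one common stochastic trend are individually non-stationary, yet a suitable linear combination eliminates the trend. The sketch below is purely illustrative (the loadings, sample size and seed are assumptions, not taken from the paper).

```python
import numpy as np

rng = np.random.default_rng(0)
T = 5000
# One common stochastic trend: a random walk
trend = np.cumsum(rng.normal(size=T))
# Two observed series loading on the same trend, plus stationary noise
y1 = 1.0 * trend + rng.normal(size=T)
y2 = 0.5 * trend + rng.normal(size=T)
# The co-integrating combination y1 - 2*y2 removes the trend
spread = y1 - 2.0 * y2
print(np.var(y1), np.var(spread))
```

The sample variance of each individual series grows with the sample size, while the variance of the spread stays bounded: the spread is the (here known, in practice estimated) equilibrium relation.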
An important problem, which arises when one wants to see whether such models are applicable to a given set of variables, is the system identification problem: find a stochastic system, within an appropriate class of systems allowing for co-integration, that represents the available data as well as possible. Here we will use the maximum likelihood principle for this purpose. Within any given class of systems this leads to the maximum likelihood estimator, which is defined as the optimizer of the likelihood function. The optimization problem involved is non-trivial, not least due to the co-integration structure in the model. For the class of (vector) autoregressive time series models with a fixed maximal lag, S. Johansen (see e.g. Johansen (1995)) has given an elegant solution method for this optimization problem. The solution method involves an eigenvalue calculation as

¹ The present manuscript is an extended summary of the full paper. This paper has been submitted to the 5th World Congress of the Bachelier Finance Society, London, UK, July 2008.

its key step. Up till now co-integration has mostly been investigated in the context of discrete-time vector autoregressive (VAR) models. In modern mathematical finance, continuous-time stochastic models are the standard. Therefore, for applications of co-integration models in finance (e.g. to interest rate modelling), a continuous-time formulation will be useful. Here we present a discrete-time and a continuous-time state-space model with co-integration and an associated optimization method to arrive at the maximum likelihood estimator. In our approach we endeavor to stay as close as possible to Johansen's celebrated method for the VAR class. One key step is the derivation of the state-space error-correction form of the model. With the goal of arriving at the maximum likelihood estimator, we apply a series of partial optimization steps (called concentration steps in econometrics) to the likelihood function. The crucial step of finding the co-integrating relations, conditional on the choice of the remaining parameters, involves the solution of an eigenvalue problem in complete analogy with Johansen's solution for the class of VAR models. After a number of these concentration steps we end up with a concentrated likelihood function in a smaller number of parameters. In order to find an optimum of that function we apply a specific gradient search technique that circumvents the potential problems related to the fact that the eigenvalues of a matrix are not everywhere differentiable with respect to the entries of that matrix. The method that we apply is a special case of the so-called separable least-squares DDLC method for maximum likelihood estimation of stochastic linear systems; see e.g. Ribarits et al. (2005) and the references given there.
The DDLC (data-driven local coordinates) technique avoids having to work out a particular canonical form for the state-space matrices involved: the method produces its own local coordinates. As we are using a search method to find the maximum likelihood estimator, a starting point needs to be supplied; how we propose to do that is described in a separate section. In a final section an application to UK money market zero rates is described.

2. THE DISCRETE-TIME STATE-SPACE ERROR CORRECTION MODEL

Consider the following linear, time-invariant, discrete-time state-space system:

x_{t+1} = A x_t + B ε_t,  x_{t_0} = x_0,   (2.1)
y_t = C x_t + ε_t,  t = t_0, t_0 + 1, t_0 + 2, ....   (2.2)

Here, x_t is the n-dimensional state vector, which is, in general, unobserved; A ∈ R^{n×n}, B ∈ R^{n×p} and C ∈ R^{p×n} are parameter matrices; y_t is the observed p-dimensional output. In addition, (ε_t) is a p-dimensional Gaussian discrete-time white noise process with E ε_t = 0 and E ε_t ε_t' = Σ. The state-space system (2.1)-(2.2) is called minimal if

rk [B, AB, ..., A^{n-1}B] = rk [C', A'C', ..., (A')^{n-1}C'] = n.

In the remainder of the paper we will assume minimality of the state-space systems except where the contrary is explicitly stated. The matrix A is called asymptotically stable if |λ_max(A)| < 1 is satisfied, where λ_max(A) denotes an eigenvalue of A of maximum modulus, and stable if the sequence of matrices {A^k}, k ≥ 0, is bounded. Here we will consider models for which A is stable and for which each eigenvalue either equals 1 or lies within the open unit disc. It can be shown that the requirement of stability of A can then be replaced by the equivalent requirement that the geometric and algebraic multiplicities of 1 as an eigenvalue of A coincide. This multiplicity will be denoted by µ = c, where c stands for the number of common trends in the model. It can be shown that µ satisfies 0 ≤ µ ≤ p. The sub-class of models for which 1 ≤ µ ≤ p - 1 is denoted by I(1). This is the simplest class of co-integrated models.
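The minimality and stability conditions above are straightforward to check numerically. The sketch below builds a small system with µ = 1 (one eigenvalue equal to 1, the rest inside the open unit disc) and verifies the rank conditions; the particular matrices and the seed are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 3, 2
# A has a single eigenvalue 1 (one common trend, mu = 1) and a stable 2x2 block
A = np.zeros((n, n))
A[0, 0] = 1.0
A[1:, 1:] = [[0.5, 0.2], [0.0, 0.3]]
B = rng.normal(size=(n, p))
C = rng.normal(size=(p, n))

# minimality: controllability and observability matrices have full rank n
ctrb = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
obsv = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])
minimal = (np.linalg.matrix_rank(ctrb) == n) and (np.linalg.matrix_rank(obsv) == n)

eigvals = np.linalg.eigvals(A)
mu = np.sum(np.isclose(eigvals, 1.0))   # multiplicity of the eigenvalue 1
stable_rest = np.all((np.abs(eigvals) < 1) | np.isclose(eigvals, 1.0))
print(minimal, mu, stable_rest)
```

With p = 2 and µ = 1 this system falls into the I(1) class, with co-integrating rank r = p - µ = 1.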
The number r := p - µ = p - c is called the rank of co-integration. It is also the number of linearly independent co-integrating vectors (this will become clear later). Note that in the model used, the process noise (the noise in the state equation (2.1)) coincides with the

measurement noise (the noise in the measurement equation (2.2)). Starting from other noise specifications, e.g. with independent process and measurement noise, one can arrive at a specification with equal process and measurement noise by applying the Kalman filter and using the innovations ε_t = y_t - C E(x_t | y_{t-1}, y_{t-2}, ...) as the noise terms in the model. In general one will get a model with time-varying parameters in this way; however, the time variation tends to be small after a sufficient amount of time, and here we will assume that the parameters are constant in this innovations representation.

Note that we can solve for y_t by eliminating the state vector sequence from the equations, to obtain

y_t = C A^{t-t_0} x_0 + Σ_{j=1}^{t-t_0} C A^{j-1} B ε_{t-j} + ε_t,  t ≥ t_0.   (2.3)

To simplify the exposition, let us assume that x_0 = 0. Furthermore, let us formally define ε_t = 0 and y_t = 0 for all t < t_0. Let L denote the lag operator, so L y_t = y_{t-1}. Then we can write y_t = k(L) ε_t, where

k(z) = Σ_{j=1}^∞ C A^{j-1} B z^j + I = z C (I - zA)^{-1} B + I,  z ∈ C with det(I - zA) ≠ 0,

where the singularities of k are determined by the spectrum σ(A) of A. Let k̄(z) := k(z)^{-1} = z C̄ (I - z Ā)^{-1} B̄ + I, with Ā = A - BC, B̄ = B, C̄ = -C. Then

ε_t = k̄(L) y_t = Σ_{j=1}^∞ C̄ Ā^{j-1} B̄ y_{t-j} + y_t.

It is well known (and an explicit proof will be given in the full paper) that the geometric multiplicity µ of 1 as an eigenvalue of A can be read off from the rank of k̄(1); in fact

rank(k̄(1)) = r = p - µ.

An equivalent statement is that there must exist full column rank p × r matrices α and β such that

C̄ (I - Ā)^{-1} B̄ + I = -α β'.

Defining k̃(z) := I + (k̄(z) - k̄(1) z)/(z - 1), we find k̃(0) = 0 and we can rewrite ε_t = k̄(L) y_t in the so-called error-correction form

Δy_t = α β' y_{t-1} + k̃(L) Δy_t + ε_t,

where Δy_t = y_t - y_{t-1}. If y_{t-1} is far from the kernel of the matrix β' and β' y_{t-1} is large, then a large correction can be expected in Δy_t. This motivates the (standard) terminology error-correction form.
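The rank condition rank(k̄(1)) = p - µ can be verified numerically on a small example. The sketch below (illustrative matrices, not from the paper) computes k̄(1) = I - C(I - Ā)^{-1}B with Ā = A - BC, reads off r, and extracts a rank factorization -k̄(1) = αβ' via the SVD.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 3, 2
A = np.zeros((n, n))
A[0, 0] = 1.0                              # mu = 1: one common trend
A[1:, 1:] = [[0.5, 0.2], [0.0, 0.3]]
B = rng.normal(size=(n, p))
C = rng.normal(size=(p, n))

Abar = A - B @ C
# kbar(1) = I - C (I - Abar)^{-1} B, which should have rank r = p - mu
kbar1 = np.eye(p) - C @ np.linalg.solve(np.eye(n) - Abar, B)
s = np.linalg.svd(kbar1, compute_uv=False)
r = int(np.sum(s > 1e-8 * s.max()))        # numerical rank
print(r)

# alpha, beta (unique up to a nonsingular r x r factor) from -kbar(1) = alpha beta'
U, S, Vt = np.linalg.svd(-kbar1)
alpha = U[:, :r] * S[:r]
beta = Vt[:r, :].T
```

Here µ = 1 and p = 2, so the numerical rank of k̄(1) is r = 1, and the rank-r factorization reproduces -k̄(1) exactly.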
Note that, except for the innovation ε_t, the terms on the right-hand side of this equation are known at time t - 1. Therefore this form can be used as a regression model, in which the coefficients α, β and those determining k̃ have to be determined. The error-correction form can be rewritten as

x̄_{t+1} = Ā x̄_t + B̄ Δy_t,   (2.4)
Δy_t = α β' y_{t-1} + C̄ Ā (I - Ā)^{-1} x̄_t + ε_t,   (2.5)

where

α β' = C (I - Ā)^{-1} B - I.

This is called the state-space error-correction form or the state-space error-correction model (SSECM).
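The algebraic equivalence between the innovations form (2.1)-(2.2) and the state-space error-correction form can be checked by simulation: filtering the simulated output through the error-correction recursion must return the original innovations up to machine precision. The matrices below are hand-picked illustrations (chosen so that Ā = A - BC is stable); they are assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
p, n, T = 2, 2, 300
# A has eigenvalues {1, 0.5}: one common trend (mu = 1), so r = p - mu = 1.
A = np.array([[1.0, 0.0], [0.0, 0.5]])
B = np.array([[1.0, 0.2], [0.0, 1.0]])
C = np.array([[0.8, 0.0], [0.0, 0.3]])

# simulate (2.1)-(2.2) with x_{t0} = 0
eps = rng.normal(size=(T, p))
x = np.zeros(n)
y = np.empty((T, p))
for t in range(T):
    y[t] = C @ x + eps[t]
    x = A @ x + B @ eps[t]

Abar = A - B @ C                          # stable here by construction
W = np.linalg.inv(np.eye(n) - Abar)
Pi = C @ W @ B - np.eye(p)                # Pi = alpha beta'
G = C @ Abar @ W                          # state coefficient in the SSECM

dy = np.vstack([y[0], np.diff(y, axis=0)])  # convention y_{t0-1} = 0
xbar = np.zeros(n)
resid = np.empty((T, p))
for t in range(T):
    ylag = y[t - 1] if t > 0 else np.zeros(p)
    resid[t] = dy[t] - Pi @ ylag + G @ xbar  # should recover eps_t
    xbar = Abar @ xbar + B @ dy[t]           # state recursion (2.4)

print(np.max(np.abs(resid - eps)))           # close to machine precision
```

The recovered residuals coincide with the simulated innovations, confirming that the error-correction recursion is an exact rewriting of the model, not an approximation.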

3. THE CONTINUOUS-TIME STATE-SPACE ERROR CORRECTION MODEL

Consider the following continuous-time stochastic linear state-space model (with formats of vectors and matrices analogous to the discrete-time case):

dX_t = A X_t dt + B dW_t,  X_{t_0} = x_0,   (3.6)
dY_t = C X_t dt + dW_t,  t ∈ [t_0, ∞).   (3.7)

Here we assume A to be continuous-time stable, i.e. exp(At), t ∈ [0, ∞), is bounded, and each element of the spectrum of A is zero or lies in the open left half plane. Then the geometric and algebraic multiplicities of 0 as an eigenvalue of A coincide; this multiplicity will be denoted by µ. If 1 ≤ µ ≤ p - 1, then this is a co-integration model of class I(1), but now in continuous time. The model is in innovations form. Here W_t denotes a Wiener process with covariance matrix Σ, so that W_t - W_s has a normal distribution with mean zero and covariance matrix (t - s)Σ for s < t. We can now write

dW_t = dY_t - C X_t dt = dY_t + ( ∫_{t_0}^t C̄ exp(Ā(t - s)) B̄ dY_s ) dt.

In integral form this gives (using the simplifying assumptions W_{t_0} = 0 and Y_{t_0} = 0)

W_t = Y_t + ∫_{t_0}^t ( C̄ Ā^{-1} exp(Ā(t - s)) B̄ - C̄ Ā^{-1} B̄ ) dY_s.

Note that the integral is measurable with respect to the strict past of the process Y at time t, because the integrand is zero at s = t. We cannot split the integral without losing this essential property. To obtain the desired regression equation we therefore choose an arbitrary small number Δt > 0 and derive the following approximate equation:

Y_t - Y_{t-Δt} ≈ -(I - C̄ Ā^{-1} B̄) Y_{t-Δt} - C̄ Ā^{-1} X̄_{t-Δt} + W_t,

where X̄_t := ∫_{t_0}^t exp(Ā(t - s)) B̄ dY_s. Here the co-integration rank constraint is

rank(I - C̄ Ā^{-1} B̄) = r.

The representation obtained is the continuous-time state-space error-correction model in integral form.

4. MAXIMIZATION OF THE LIKELIHOOD FUNCTION

The proposed method for the maximization of the likelihood function is outlined here for the discrete-time state-space error-correction model. A similar approach can be applied to the continuous-time model with discrete-time (but not necessarily equidistant) observations.
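Discrete-time observations of the continuous-time model (3.6)-(3.7) can be generated, for instance, with an Euler-Maruyama scheme. The sketch below uses illustrative matrices (with one eigenvalue 0 of A, i.e. one common trend) and Σ = I; all numerical choices are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
p, n = 2, 2
# A has eigenvalue 0 (one common trend in continuous time) and -1 (stable)
A = np.array([[0.0, 0.0], [0.0, -1.0]])
B = np.array([[1.0, 0.2], [0.0, 1.0]])
C = np.array([[0.8, 0.0], [0.0, 0.3]])

dt, T = 0.01, 20.0
steps = int(round(T / dt))
X = np.zeros(n)
Y = np.zeros((steps + 1, p))
for k in range(steps):
    dW = np.sqrt(dt) * rng.normal(size=p)   # Sigma = I for simplicity
    Y[k + 1] = Y[k] + C @ X * dt + dW       # dY = C X dt + dW
    X = X + A @ X * dt + B @ dW             # dX = A X dt + B dW
print(Y.shape)
```

A path generated this way can serve as input to the discrete-time estimation machinery below, with the caveat that the Euler step introduces a discretization error of order Δt.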
Let us assume n > p for simplicity of exposition and let us define the following quantities:

Z_{0t} := Δy_t,  Z_{1t} := y_{t-1},  Z_{2t} := x̄_t,  M_{ij} := (1/T) Σ_{t=1}^T Z_{it} Z_{jt}',  i, j ∈ {0, 1, 2}.

Note that Z_{2t} depends on (Ā, B̄) via x̄_t = Σ_{j=1}^∞ Ā^{j-1} B̄ Δy_{t-j}. The maximum likelihood estimator is the value of (α, β, Ā, B̄, C, Σ) that minimizes L subject to

C (I - Ā)^{-1} B = I_p + α β',

where L is given by

L = log det(Σ) + (1/T) Σ_{t=1}^T (Z_{0t} - α β' Z_{1t} - C̄ Ā (I - Ā)^{-1} Z_{2t})' Σ^{-1} (Z_{0t} - α β' Z_{1t} - C̄ Ā (I - Ā)^{-1} Z_{2t}).

Note that L is minus the logarithm of the likelihood function, but will, with some abuse of language, be called the log-likelihood function here.

As the criterion is quadratic in the combined entries of C and α, and the constraint is linear in those combined entries, we can solve for C and α as a function of the other parameters. Substituting the result into the log-likelihood function we obtain a so-called concentrated log-likelihood function L_{2c} that depends on (β, Ā, B̄, Σ) and is given by

L_{2c} = log det Σ + (1/T) Σ_{t=1}^T (R_{0t} - S_{01} β (β' S_{11} β)^{-1} β' R_{1t})' Σ^{-1} (R_{0t} - S_{01} β (β' S_{11} β)^{-1} β' R_{1t}),

where

S_{ij} = (1/T) Σ_{t=1}^T R_{it} R_{jt}',  i, j ∈ {0, 1},

and where R_{0t} and R_{1t} are given by

R_{0t} = Z_{0t} - ([M_{02} (I - Ā)^{-1} Ā] H_{11} + H_{21}) Ā (I - Ā)^{-1} Z_{2t},
R_{1t} = Z_{1t} - ([M_{12} (I - Ā)^{-1} Ā] H_{11} + H_{21}) Ā (I - Ā)^{-1} Z_{2t},

with H partitioned as

H = ( H_{11}  H_{12} ; H_{21}  H_{22} ) = [ ((I - Ā)^{-1} Ā)' M_{22} (I - Ā)^{-1} Ā   (I - Ā)^{-1} B̄ ; ((I - Ā)^{-1} B̄)'   0 ]^{-1}.

For given (β, Ā, B̄), the optimal choice Σ̂ of the matrix Σ is

Σ̂ = S_{00} - S_{01} β (β' S_{11} β)^{-1} β' S_{10},

and substituting this into the concentrated log-likelihood function L_{2c} gives, after some manipulations,

L_{3c}(β, Ā, B̄) = log det(S_{00}) + log [ det(β' (S_{11} - S_{10} S_{00}^{-1} S_{01}) β) / det(β' S_{11} β) ] + p.

Note that the quotient in the second term of this expression can be viewed as a generalized Rayleigh quotient. The problem of optimizing it with respect to β can be solved by considering the associated generalized eigenvalue problem

(λ S_{11} - S_{10} S_{00}^{-1} S_{01}) v = 0,  λ ∈ R,  v ∈ R^p,  v ≠ 0.

Let λ_i, i = 1, 2, ..., r, denote the r largest eigenvalues of this problem and let v_1, v_2, ..., v_r denote corresponding (linearly independent) eigenvectors. Then β̂ = [v_1, v_2, ..., v_r] T is an optimal choice for β, for an arbitrary non-singular r × r matrix T. Substituting this into L_{3c} gives

L_{4c}(Ā, B̄) = log det S_{00} + Σ_{i=1}^r log(1 - λ_i) + p.

Note that the resulting function L_{4c} depends in a rather complicated way on the matrices Ā, B̄. Here we would like to apply a gradient-search algorithm starting from a good initial point.
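The reduced rank regression step is the same eigenvalue computation as in Johansen's VAR procedure. The sketch below applies it to synthetic stand-ins for the concentrated residuals R_{0t} and R_{1t} (the series and the seed are illustrative assumptions) and verifies numerically that L_{4c} equals log det Σ̂ + p at the optimal β̂.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(4)
p, r, T = 4, 3, 500
# Synthetic stand-ins for the concentrated residuals R_0t, R_1t
R0 = rng.normal(size=(T, p))
R1 = np.cumsum(rng.normal(size=(T, p)), axis=0) / 10 + 0.5 * R0

S00 = R0.T @ R0 / T
S01 = R0.T @ R1 / T
S10 = S01.T
S11 = R1.T @ R1 / T

# generalized eigenvalue problem (lambda S11 - S10 S00^{-1} S01) v = 0
M = S10 @ np.linalg.solve(S00, S01)
lam, V = eigh(M, S11)              # ascending order, V' S11 V = I
lam, V = lam[::-1], V[:, ::-1]     # sort descending
beta_hat = V[:, :r]                # optimal beta, up to a nonsingular factor

Sigma_hat = S00 - S01 @ beta_hat @ np.linalg.solve(
    beta_hat.T @ S11 @ beta_hat, beta_hat.T @ S10)

# L_4c = log det S00 + sum_{i<=r} log(1 - lambda_i) + p
_, logdet_S00 = np.linalg.slogdet(S00)
L4c = logdet_S00 + np.sum(np.log(1 - lam[:r])) + p
_, logdet_Sig = np.linalg.slogdet(Sigma_hat)
print(L4c, logdet_Sig + p)         # the two values agree
```

The eigenvalues are the squared canonical correlations between the two residual series, so they lie in [0, 1), and the two likelihood expressions coincide by the standard determinant identity used in the Johansen analysis.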
At first sight, finding the gradient of L_{4c} might look extremely complicated because of the dependence on the eigenvalues λ_1, λ_2, ..., λ_r, which may even be non-differentiable at some points. Our solution is surprisingly simple: for any given (Ā, B̄) we calculate the vector of partial derivatives of the original log-likelihood function L with respect to the entries of Ā, B̄, evaluate the result at the optimal values β̂, Σ̂, α̂ and Ĉ obtained in the concentration steps, and call the resulting vector of partial derivatives ∇. We have the following result:

(i) If L_{4c}(Ā, B̄) is differentiable at (Ā, B̄), then ∇ coincides with the vector of partial derivatives of L_{4c} with respect to the entries of Ā, B̄ at (Ā, B̄).
(ii) If ∇ is non-zero at (Ā, B̄), the value of L_{4c} can be improved by making a sufficiently small step in the direction of -∇.
(iii) If ∇ = 0, we have arrived at a critical point of the log-likelihood function L.

By considering the state-space error-correction model again, it can be seen that if we replace (Ā, B̄, C̄) by (T_1 Ā T_1^{-1}, T_1 B̄, C̄ T_1^{-1}) and (α, β') by (α T_2^{-1}, T_2 β'), where T_1 and T_2 are non-singular matrices of appropriate sizes, then the criterion value remains the same. This implies for L_{4c} that, for fixed (Ā, B̄), all pairs (T_1 Ā T_1^{-1}, T_1 B̄), T_1 non-singular, give the same function value. The set of all such pairs forms an equivalence class; the DDLC method constructs a system of np local coordinates that is transversal to the tangent space of this equivalence class (in fact orthogonal with respect to some suitable metric) and calculates the gradient with respect to these local coordinates.
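The dimension count behind the DDLC construction can be verified numerically: the tangent space of the equivalence class {(T_1 Ā T_1^{-1}, T_1 B̄)} at T_1 = I is the image of the linear map X ↦ (XĀ - ĀX, XB̄), which has dimension n² for a controllable pair, leaving an orthogonal complement of dimension np. The matrices below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 3, 2
Abar = np.diag([0.5, 0.3, -0.2]) + 0.1 * rng.normal(size=(n, n))
Bbar = rng.normal(size=(n, p))

# Tangent directions of {(T Abar T^{-1}, T Bbar)} at T = I:
# d/ds at s=0 of ((I+sX) Abar (I+sX)^{-1}, (I+sX) Bbar) = (X Abar - Abar X, X Bbar)
E = np.eye(n * n)
tangent = np.empty((n * n, n * n + n * p))
for k in range(n * n):
    X = E[k].reshape(n, n)
    tangent[k] = np.concatenate([(X @ Abar - Abar @ X).ravel(),
                                 (X @ Bbar).ravel()])

rank = np.linalg.matrix_rank(tangent)
# DDLC parametrizes the orthogonal complement of this tangent space
ddlc_dim = n * n + n * p - rank
print(rank, ddlc_dim)
```

For a controllable pair the map is injective (XB̄ = 0 and XĀ = ĀX force X = 0), so the tangent space has full dimension n² and the DDLC coordinate system has the np dimensions stated above.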
Given the lag order k̂_HQ, we then performed the classical trace test in Johansen's VAR error-correction model to determine r and thus c = p - r. The resulting VAR error-correction model is readily obtained as well, and one can then transform it back to a conventional VAR model, which can, in turn, be straightforwardly written in state-space form, (A°, B°, C°) say. Note that the state-space representation of a VAR model is widely known as the companion form of the VAR model. By construction, the A° matrix has one as a c-fold eigenvalue. Applying a suitable state transformation of the form (Ã, B̃, C̃) = (T A° T^{-1}, T B°, C° T^{-1}) yields a state-space system with block-diagonal Ã matrix, i.e.

( x_{t+1,1} ; x_{t+1,2} ) = [ I_c  0 ; 0  Ã_st ] ( x_{t,1} ; x_{t,2} ) + [ B̃_1 ; B̃_st ] ε_t,   (5.8)
y_t = [ C̃_1, C̃_st ] ( x_{t,1} ; x_{t,2} ) + ε_t,   (5.9)

where (Ã_st, B̃_st, C̃_st) represents a stable system, i.e. |λ_max(Ã_st)| < 1. In order to determine n, one can now apply a so-called balance-and-truncate approach to the stable subsystem. Balanced model truncation is a simple and nevertheless efficient model reduction technique for state-space systems; see e.g. Scherrer (2002) and the references provided there for the concept of stochastic

balancing, which relies upon the determination of the singular values corresponding to the system. For given (Ã_st, B̃_st, C̃_st), the computation of these singular values boils down to the solution of two matrix equations. Using an information-type criterion such as SVC(n), we consider the number of singular values that differ significantly from zero as the estimated order of the stable sub-system. By adding c, we then obtain the estimated system order n̂. Also, by simply truncating (Ã, B̃, C̃) in (5.8)-(5.9) to size n̂, we obtain an initial state-space model which can be written in SSECM form, yielding initial estimates for (Ā, B̄).

6. AN APPLICATION TO UK MONEY MARKET RATES

We consider a four-dimensional vector (p = 4) of UK money market (zero) rates r3, r6, r9 and r12, which have been obtained directly from the 4 LIBOR rates corresponding to maturities of 3 months, 6 months, 9 months and 1 year. The data set comprises 913 daily observations covering the 2.5-year period from 1 January 2004 to 30 June 2006; see Figure 1. For estimation purposes, we used the first 2 years of observations only, and we evaluated the performance of the SSECM (in comparison with Johansen's VAR error-correction model) on data corresponding to the first 6 months of 2006. For a start, we performed a VAR analysis on the estimation data set: Using the Hannan-Quinn criterion, the lag order k of the AR polynomial was determined as k̂_HQ = 2, leading to an (unrestricted) estimated model of the form y_t = A(1)y_{t-1} + A(2)y_{t-2} + ε_t. Note that the BIC criterion also led to k̂_BIC = 2, whereas the AIC criterion yielded k̂_AIC = 4. Based on the estimation of this VAR(2) model, the classical Johansen trace test (without intercept) yielded r = 3. Hence, we found only one common trend (c = 1) driving the four interest rates.²
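The balance-and-truncate step described in Section 5 can be sketched as follows. Note that this sketch uses deterministic balanced truncation (Lyapunov-equation Gramians) as a simplified stand-in for the stochastic balancing of Scherrer (2002) referred to above; the system matrices, dimensions and seed are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov, cholesky, svd

rng = np.random.default_rng(6)
n, p, k = 5, 2, 3            # reduce a stable subsystem from 5 to 3 states
A = np.diag([0.9, 0.7, 0.5, 0.2, -0.1])
B = rng.normal(size=(n, p))
C = rng.normal(size=(p, n))

# Gramians: A P A' - P + B B' = 0  and  A' Q A - Q + C' C = 0
P = solve_discrete_lyapunov(A, B @ B.T)
Q = solve_discrete_lyapunov(A.T, C.T @ C)

# square-root balancing: Hankel singular values and balancing transform
Lp = cholesky(P, lower=True)
Lq = cholesky(Q, lower=True)
U, s, Vt = svd(Lq.T @ Lp)            # s: Hankel singular values (descending)
T = np.diag(s ** -0.5) @ U.T @ Lq.T
Tinv = Lp @ Vt.T @ np.diag(s ** -0.5)

Ab, Bb, Cb = T @ A @ Tinv, T @ B, C @ Tinv   # balanced realization
Ak, Bk, Ck = Ab[:k, :k], Bb[:k], Cb[:, :k]   # keep the k dominant states
print(np.round(s, 4))
```

In the balanced realization both Gramians equal diag(s), so truncating the states with the smallest singular values discards the least important directions; an SVC-type criterion then picks the cut-off k from the decay of s.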
In the next step, following Section 5, we wrote the VAR(2) error-correction model in state-space form and applied a suitable state transformation to obtain a system of the form (5.8)-(5.9). Note that this system had state dimension 8, and c = 1 (as r = 3), i.e. the first state is I(1). In order to reduce the state dimension, we applied a balance-and-truncate approach to the 7-dimensional stable subsystem: The singular values corresponding to the stable subsystem, sometimes also referred to as canonical correlations, are used for defining the information-type criterion SVC(n) in Figure 1. As can be seen there, SVC(n) is minimized by choosing 3 canonical correlations. Hence, the stable sub-system was truncated taking into account only 3 states, and an SSECM of state dimension n = 4 was then estimated: such an SSECM has 31 free parameters. Note that the estimated VAR(2) error-correction model also had 31 free parameters (although it represents, of course, a different model class). In order not to depend on only one choice of the initial pair (Ā, B̄) for the subsequent iterative optimization algorithm, we did not only take the SSECM corresponding to Johansen's VAR(2) estimate as starting point for the estimation of the SSECM. Instead, we also randomly generated 99 other initial points (centered at the one obtained directly from the VAR(2) model). Normally one would most likely retain only the best fitting model out of the 100 estimated ones (estimation results corresponding to the best model are indicated by a square in Figure 2), but we decided to present the top 20 below and to compare their performance to the Johansen VAR estimate. Referring to Figure 2, the following observations can be made:

² Such an observation is in line with the so-called expectations hypothesis of the term structure of interest rates. It should be noted, however, that more common trends are often found if one were to include e.g. also swap rates with maturities of 2, 3, 4, etc. years in the analysis.

Figure 1. Left: UK money market (zero) rates from 1 January 2004 to 30 June 2006 for maturities of 3 months, 6 months, 9 months and 1 year, respectively. Right: Estimation of the state dimension of the stable part of the SSECM (information-type criterion SVC using canonical correlations).

The best 20 SSECMs with respect to in-sample likelihood fit were all better than the estimated VAR(2) model; see the bottom left part of Figure 2. The out-of-sample fit, as measured by log det Σ̂ + p, where Σ̂ denotes the empirical variance-covariance matrix of the prediction errors obtained on the validation sample, was better for all the 20 SSECMs, too. However, the differences were rather small. The estimated co-integrating spaces for the VAR and the SSECMs almost coincided. This is evidenced by the top right graph of Figure 2, showing the (negligible) Hausdorff distances between the estimated co-integrating spaces. Note that the Hausdorff distance between two sub-spaces (of the same dimension) of R^n may take values between 0 (the sub-spaces coincide) and 1 (the sub-spaces are orthogonal to each other). As co-integrating sub-spaces can be estimated super-consistently, this observation does not come as a big surprise. In all cases considered, the search algorithm used for the estimation of the SSECMs terminated after reasonably many iterations; see the bottom right graph of Figure 2.

As a concluding remark we want to emphasize that the Hausdorff distance between the estimated co-integrating spaces and the column span of the matrix of spread directions was of a negligibly small order of magnitude. This shows that the interest rate spreads r3 - r6, r6 - r9 and r9 - r12

can be considered to be stationary. The development of a formal test procedure in the framework of SSECMs is a topic of future research.

Figure 2. Best 20 SSECMs with respect to in-sample likelihood fit (bottom left). The other graphs show the out-of-sample fit (top left), the Hausdorff distance between the estimated co-integrating spaces for the VAR and the SSECMs (top right), and the number of iterations the estimation algorithm took until it found a local optimum (bottom right).
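The subspace distance used above can be computed as the gap between column spans, i.e. the spectral norm of the difference of the two orthogonal projections, which is 0 for coinciding and 1 for orthogonal subspaces (taking this gap metric as a concrete realization of the Hausdorff distance is our assumption here). The sketch below applies it to the spread directions r3 - r6, r6 - r9, r9 - r12, which span a 3-dimensional subspace of R^4; the helper name is hypothetical.

```python
import numpy as np

def subspace_distance(X, Y):
    """Gap between the column spans of X and Y: ||P_X - P_Y||_2, in [0, 1]."""
    Qx, _ = np.linalg.qr(X)
    Qy, _ = np.linalg.qr(Y)
    Px = Qx @ Qx.T
    Py = Qy @ Qy.T
    return np.linalg.norm(Px - Py, 2)

# spread directions r3-r6, r6-r9, r9-r12 as columns
S = np.array([[ 1.0,  0.0,  0.0],
              [-1.0,  1.0,  0.0],
              [ 0.0, -1.0,  1.0],
              [ 0.0,  0.0, -1.0]])

# identical spans (basis change by a nonsingular factor) give distance ~0
print(subspace_distance(S, S @ np.array([[2.0, 0, 0], [1, 1, 0], [0, 0, 3.0]])))
# orthogonal one-dimensional spans give distance 1
print(subspace_distance(np.eye(4)[:, :1], np.eye(4)[:, 1:2]))
```

In the application, the distance between an estimated co-integrating space and span(S) being numerically negligible is exactly the statement that the three spreads are (approximately) stationary combinations.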

REFERENCES

Johansen, S. (1995). Likelihood-Based Inference in Cointegrated Vector Auto-Regressive Models. Oxford University Press.

Ribarits, T., M. Deistler, and B. Hanzon (2005). An analysis of separable least squares data driven local coordinates for maximum likelihood estimation of linear systems. Automatica, Special Issue on Data-Based Modeling and System Identification, 41(3).

Scherrer, W. (2002). Local optimality of minimum phase balanced truncation. In Proceedings of the 15th IFAC World Congress, Barcelona, Spain.


More information

Computationally, diagonal matrices are the easiest to work with. With this idea in mind, we introduce similarity:

Computationally, diagonal matrices are the easiest to work with. With this idea in mind, we introduce similarity: Diagonalization We have seen that diagonal and triangular matrices are much easier to work with than are most matrices For example, determinants and eigenvalues are easy to compute, and multiplication

More information

Topic 4 Unit Roots. Gerald P. Dwyer. February Clemson University

Topic 4 Unit Roots. Gerald P. Dwyer. February Clemson University Topic 4 Unit Roots Gerald P. Dwyer Clemson University February 2016 Outline 1 Unit Roots Introduction Trend and Difference Stationary Autocorrelations of Series That Have Deterministic or Stochastic Trends

More information

Background Mathematics (2/2) 1. David Barber

Background Mathematics (2/2) 1. David Barber Background Mathematics (2/2) 1 David Barber University College London Modified by Samson Cheung (sccheung@ieee.org) 1 These slides accompany the book Bayesian Reasoning and Machine Learning. The book and

More information

Inverse Singular Value Problems

Inverse Singular Value Problems Chapter 8 Inverse Singular Value Problems IEP versus ISVP Existence question A continuous approach An iterative method for the IEP An iterative method for the ISVP 139 140 Lecture 8 IEP versus ISVP Inverse

More information

Linear Algebra: Matrix Eigenvalue Problems

Linear Algebra: Matrix Eigenvalue Problems CHAPTER8 Linear Algebra: Matrix Eigenvalue Problems Chapter 8 p1 A matrix eigenvalue problem considers the vector equation (1) Ax = λx. 8.0 Linear Algebra: Matrix Eigenvalue Problems Here A is a given

More information

Stochastic Volatility and Correction to the Heat Equation

Stochastic Volatility and Correction to the Heat Equation Stochastic Volatility and Correction to the Heat Equation Jean-Pierre Fouque, George Papanicolaou and Ronnie Sircar Abstract. From a probabilist s point of view the Twentieth Century has been a century

More information

Expressions for the covariance matrix of covariance data

Expressions for the covariance matrix of covariance data Expressions for the covariance matrix of covariance data Torsten Söderström Division of Systems and Control, Department of Information Technology, Uppsala University, P O Box 337, SE-7505 Uppsala, Sweden

More information

Unit roots in vector time series. Scalar autoregression True model: y t 1 y t1 2 y t2 p y tp t Estimated model: y t c y t1 1 y t1 2 y t2

Unit roots in vector time series. Scalar autoregression True model: y t 1 y t1 2 y t2 p y tp t Estimated model: y t c y t1 1 y t1 2 y t2 Unit roots in vector time series A. Vector autoregressions with unit roots Scalar autoregression True model: y t y t y t p y tp t Estimated model: y t c y t y t y t p y tp t Results: T j j is asymptotically

More information

Subdiagonal pivot structures and associated canonical forms under state isometries

Subdiagonal pivot structures and associated canonical forms under state isometries Preprints of the 15th IFAC Symposium on System Identification Saint-Malo, France, July 6-8, 29 Subdiagonal pivot structures and associated canonical forms under state isometries Bernard Hanzon Martine

More information

Stochastic process for macro

Stochastic process for macro Stochastic process for macro Tianxiao Zheng SAIF 1. Stochastic process The state of a system {X t } evolves probabilistically in time. The joint probability distribution is given by Pr(X t1, t 1 ; X t2,

More information

5: MULTIVARATE STATIONARY PROCESSES

5: MULTIVARATE STATIONARY PROCESSES 5: MULTIVARATE STATIONARY PROCESSES 1 1 Some Preliminary Definitions and Concepts Random Vector: A vector X = (X 1,..., X n ) whose components are scalarvalued random variables on the same probability

More information

EC408 Topics in Applied Econometrics. B Fingleton, Dept of Economics, Strathclyde University

EC408 Topics in Applied Econometrics. B Fingleton, Dept of Economics, Strathclyde University EC48 Topics in Applied Econometrics B Fingleton, Dept of Economics, Strathclyde University Applied Econometrics What is spurious regression? How do we check for stochastic trends? Cointegration and Error

More information

[y i α βx i ] 2 (2) Q = i=1

[y i α βx i ] 2 (2) Q = i=1 Least squares fits This section has no probability in it. There are no random variables. We are given n points (x i, y i ) and want to find the equation of the line that best fits them. We take the equation

More information

If we want to analyze experimental or simulated data we might encounter the following tasks:

If we want to analyze experimental or simulated data we might encounter the following tasks: Chapter 1 Introduction If we want to analyze experimental or simulated data we might encounter the following tasks: Characterization of the source of the signal and diagnosis Studying dependencies Prediction

More information

Tutorial lecture 2: System identification

Tutorial lecture 2: System identification Tutorial lecture 2: System identification Data driven modeling: Find a good model from noisy data. Model class: Set of all a priori feasible candidate systems Identification procedure: Attach a system

More information

Lecture 1: Systems of linear equations and their solutions

Lecture 1: Systems of linear equations and their solutions Lecture 1: Systems of linear equations and their solutions Course overview Topics to be covered this semester: Systems of linear equations and Gaussian elimination: Solving linear equations and applications

More information

RECURSIVE SUBSPACE IDENTIFICATION IN THE LEAST SQUARES FRAMEWORK

RECURSIVE SUBSPACE IDENTIFICATION IN THE LEAST SQUARES FRAMEWORK RECURSIVE SUBSPACE IDENTIFICATION IN THE LEAST SQUARES FRAMEWORK TRNKA PAVEL AND HAVLENA VLADIMÍR Dept of Control Engineering, Czech Technical University, Technická 2, 166 27 Praha, Czech Republic mail:

More information

1.4 Properties of the autocovariance for stationary time-series

1.4 Properties of the autocovariance for stationary time-series 1.4 Properties of the autocovariance for stationary time-series In general, for a stationary time-series, (i) The variance is given by (0) = E((X t µ) 2 ) 0. (ii) (h) apple (0) for all h 2 Z. ThisfollowsbyCauchy-Schwarzas

More information

Switching Regime Estimation

Switching Regime Estimation Switching Regime Estimation Series de Tiempo BIrkbeck March 2013 Martin Sola (FE) Markov Switching models 01/13 1 / 52 The economy (the time series) often behaves very different in periods such as booms

More information

Title. Description. var intro Introduction to vector autoregressive models

Title. Description. var intro Introduction to vector autoregressive models Title var intro Introduction to vector autoregressive models Description Stata has a suite of commands for fitting, forecasting, interpreting, and performing inference on vector autoregressive (VAR) models

More information

Understanding Regressions with Observations Collected at High Frequency over Long Span

Understanding Regressions with Observations Collected at High Frequency over Long Span Understanding Regressions with Observations Collected at High Frequency over Long Span Yoosoon Chang Department of Economics, Indiana University Joon Y. Park Department of Economics, Indiana University

More information

Vector Auto-Regressive Models

Vector Auto-Regressive Models Vector Auto-Regressive Models Laurent Ferrara 1 1 University of Paris Nanterre M2 Oct. 2018 Overview of the presentation 1. Vector Auto-Regressions Definition Estimation Testing 2. Impulse responses functions

More information

CS168: The Modern Algorithmic Toolbox Lecture #8: How PCA Works

CS168: The Modern Algorithmic Toolbox Lecture #8: How PCA Works CS68: The Modern Algorithmic Toolbox Lecture #8: How PCA Works Tim Roughgarden & Gregory Valiant April 20, 206 Introduction Last lecture introduced the idea of principal components analysis (PCA). The

More information

VAR Models and Applications

VAR Models and Applications VAR Models and Applications Laurent Ferrara 1 1 University of Paris West M2 EIPMC Oct. 2016 Overview of the presentation 1. Vector Auto-Regressions Definition Estimation Testing 2. Impulse responses functions

More information

Foundations of Matrix Analysis

Foundations of Matrix Analysis 1 Foundations of Matrix Analysis In this chapter we recall the basic elements of linear algebra which will be employed in the remainder of the text For most of the proofs as well as for the details, the

More information

11. Further Issues in Using OLS with TS Data

11. Further Issues in Using OLS with TS Data 11. Further Issues in Using OLS with TS Data With TS, including lags of the dependent variable often allow us to fit much better the variation in y Exact distribution theory is rarely available in TS applications,

More information

Econometría 2: Análisis de series de Tiempo

Econometría 2: Análisis de series de Tiempo Econometría 2: Análisis de series de Tiempo Karoll GOMEZ kgomezp@unal.edu.co http://karollgomez.wordpress.com Segundo semestre 2016 IX. Vector Time Series Models VARMA Models A. 1. Motivation: The vector

More information

VAR Model. (k-variate) VAR(p) model (in the Reduced Form): Y t-2. Y t-1 = A + B 1. Y t + B 2. Y t-p. + ε t. + + B p. where:

VAR Model. (k-variate) VAR(p) model (in the Reduced Form): Y t-2. Y t-1 = A + B 1. Y t + B 2. Y t-p. + ε t. + + B p. where: VAR Model (k-variate VAR(p model (in the Reduced Form: where: Y t = A + B 1 Y t-1 + B 2 Y t-2 + + B p Y t-p + ε t Y t = (y 1t, y 2t,, y kt : a (k x 1 vector of time series variables A: a (k x 1 vector

More information

Identifying SVARs with Sign Restrictions and Heteroskedasticity

Identifying SVARs with Sign Restrictions and Heteroskedasticity Identifying SVARs with Sign Restrictions and Heteroskedasticity Srečko Zimic VERY PRELIMINARY AND INCOMPLETE NOT FOR DISTRIBUTION February 13, 217 Abstract This paper introduces a new method to identify

More information

Market efficiency of the bitcoin exchange rate: applications to. U.S. dollar and Euro

Market efficiency of the bitcoin exchange rate: applications to. U.S. dollar and Euro Market efficiency of the bitcoin exchange rate: applications to U.S. dollar and Euro Zheng Nan and Taisei Kaizoji International Christian University 3-10-2 Osawa, Mitaka, Tokyo 181-8585 1 Introduction

More information

by Søren Johansen Department of Economics, University of Copenhagen and CREATES, Aarhus University

by Søren Johansen Department of Economics, University of Copenhagen and CREATES, Aarhus University 1 THE COINTEGRATED VECTOR AUTOREGRESSIVE MODEL WITH AN APPLICATION TO THE ANALYSIS OF SEA LEVEL AND TEMPERATURE by Søren Johansen Department of Economics, University of Copenhagen and CREATES, Aarhus University

More information

Empirical properties of large covariance matrices in finance

Empirical properties of large covariance matrices in finance Empirical properties of large covariance matrices in finance Ex: RiskMetrics Group, Geneva Since 2010: Swissquote, Gland December 2009 Covariance and large random matrices Many problems in finance require

More information

1 Outline. 1. Motivation. 2. SUR model. 3. Simultaneous equations. 4. Estimation

1 Outline. 1. Motivation. 2. SUR model. 3. Simultaneous equations. 4. Estimation 1 Outline. 1. Motivation 2. SUR model 3. Simultaneous equations 4. Estimation 2 Motivation. In this chapter, we will study simultaneous systems of econometric equations. Systems of simultaneous equations

More information

Math Camp Lecture 4: Linear Algebra. Xiao Yu Wang. Aug 2010 MIT. Xiao Yu Wang (MIT) Math Camp /10 1 / 88

Math Camp Lecture 4: Linear Algebra. Xiao Yu Wang. Aug 2010 MIT. Xiao Yu Wang (MIT) Math Camp /10 1 / 88 Math Camp 2010 Lecture 4: Linear Algebra Xiao Yu Wang MIT Aug 2010 Xiao Yu Wang (MIT) Math Camp 2010 08/10 1 / 88 Linear Algebra Game Plan Vector Spaces Linear Transformations and Matrices Determinant

More information

ECON 4160: Econometrics-Modelling and Systems Estimation Lecture 9: Multiple equation models II

ECON 4160: Econometrics-Modelling and Systems Estimation Lecture 9: Multiple equation models II ECON 4160: Econometrics-Modelling and Systems Estimation Lecture 9: Multiple equation models II Ragnar Nymoen Department of Economics University of Oslo 9 October 2018 The reference to this lecture is:

More information

AR, MA and ARMA models

AR, MA and ARMA models AR, MA and AR by Hedibert Lopes P Based on Tsay s Analysis of Financial Time Series (3rd edition) P 1 Stationarity 2 3 4 5 6 7 P 8 9 10 11 Outline P Linear Time Series Analysis and Its Applications For

More information

Fractional integration and cointegration: The representation theory suggested by S. Johansen

Fractional integration and cointegration: The representation theory suggested by S. Johansen : The representation theory suggested by S. Johansen hhu@ssb.no people.ssb.no/hhu www.hungnes.net January 31, 2007 Univariate fractional process Consider the following univariate process d x t = ε t, (1)

More information

Cointegrating Regressions with Messy Regressors: J. Isaac Miller

Cointegrating Regressions with Messy Regressors: J. Isaac Miller NASMES 2008 June 21, 2008 Carnegie Mellon U. Cointegrating Regressions with Messy Regressors: Missingness, Mixed Frequency, and Measurement Error J. Isaac Miller University of Missouri 1 Messy Data Example

More information

DETECTION theory deals primarily with techniques for

DETECTION theory deals primarily with techniques for ADVANCED SIGNAL PROCESSING SE Optimum Detection of Deterministic and Random Signals Stefan Tertinek Graz University of Technology turtle@sbox.tugraz.at Abstract This paper introduces various methods for

More information

Research Article Least Squares Estimators for Unit Root Processes with Locally Stationary Disturbance

Research Article Least Squares Estimators for Unit Root Processes with Locally Stationary Disturbance Advances in Decision Sciences Volume, Article ID 893497, 6 pages doi:.55//893497 Research Article Least Squares Estimators for Unit Root Processes with Locally Stationary Disturbance Junichi Hirukawa and

More information

Least Squares Optimization

Least Squares Optimization Least Squares Optimization The following is a brief review of least squares optimization and constrained optimization techniques. I assume the reader is familiar with basic linear algebra, including the

More information

SQUARE ROOTS OF 2x2 MATRICES 1. Sam Northshield SUNY-Plattsburgh

SQUARE ROOTS OF 2x2 MATRICES 1. Sam Northshield SUNY-Plattsburgh SQUARE ROOTS OF x MATRICES Sam Northshield SUNY-Plattsburgh INTRODUCTION A B What is the square root of a matrix such as? It is not, in general, A B C D C D This is easy to see since the upper left entry

More information

Lecture 7. Econ August 18

Lecture 7. Econ August 18 Lecture 7 Econ 2001 2015 August 18 Lecture 7 Outline First, the theorem of the maximum, an amazing result about continuity in optimization problems. Then, we start linear algebra, mostly looking at familiar

More information

Labor-Supply Shifts and Economic Fluctuations. Technical Appendix

Labor-Supply Shifts and Economic Fluctuations. Technical Appendix Labor-Supply Shifts and Economic Fluctuations Technical Appendix Yongsung Chang Department of Economics University of Pennsylvania Frank Schorfheide Department of Economics University of Pennsylvania January

More information

It is easily seen that in general a linear combination of y t and x t is I(1). However, in particular cases, it can be I(0), i.e. stationary.

It is easily seen that in general a linear combination of y t and x t is I(1). However, in particular cases, it can be I(0), i.e. stationary. 6. COINTEGRATION 1 1 Cointegration 1.1 Definitions I(1) variables. z t = (y t x t ) is I(1) (integrated of order 1) if it is not stationary but its first difference z t is stationary. It is easily seen

More information

Eigenvalues and Eigenvectors: An Introduction

Eigenvalues and Eigenvectors: An Introduction Eigenvalues and Eigenvectors: An Introduction The eigenvalue problem is a problem of considerable theoretical interest and wide-ranging application. For example, this problem is crucial in solving systems

More information

Math 322. Spring 2015 Review Problems for Midterm 2

Math 322. Spring 2015 Review Problems for Midterm 2 Linear Algebra: Topic: Linear Independence of vectors. Question. Math 3. Spring Review Problems for Midterm Explain why if A is not square, then either the row vectors or the column vectors of A are linearly

More information

Review of some mathematical tools

Review of some mathematical tools MATHEMATICAL FOUNDATIONS OF SIGNAL PROCESSING Fall 2016 Benjamín Béjar Haro, Mihailo Kolundžija, Reza Parhizkar, Adam Scholefield Teaching assistants: Golnoosh Elhami, Hanjie Pan Review of some mathematical

More information

Testing Error Correction in Panel data

Testing Error Correction in Panel data University of Vienna, Dept. of Economics Master in Economics Vienna 2010 The Model (1) Westerlund (2007) consider the following DGP: y it = φ 1i + φ 2i t + z it (1) x it = x it 1 + υ it (2) where the stochastic

More information

Prof. Dr. Roland Füss Lecture Series in Applied Econometrics Summer Term Introduction to Time Series Analysis

Prof. Dr. Roland Füss Lecture Series in Applied Econometrics Summer Term Introduction to Time Series Analysis Introduction to Time Series Analysis 1 Contents: I. Basics of Time Series Analysis... 4 I.1 Stationarity... 5 I.2 Autocorrelation Function... 9 I.3 Partial Autocorrelation Function (PACF)... 14 I.4 Transformation

More information

CANONICAL LOSSLESS STATE-SPACE SYSTEMS: STAIRCASE FORMS AND THE SCHUR ALGORITHM

CANONICAL LOSSLESS STATE-SPACE SYSTEMS: STAIRCASE FORMS AND THE SCHUR ALGORITHM CANONICAL LOSSLESS STATE-SPACE SYSTEMS: STAIRCASE FORMS AND THE SCHUR ALGORITHM Ralf L.M. Peeters Bernard Hanzon Martine Olivi Dept. Mathematics, Universiteit Maastricht, P.O. Box 616, 6200 MD Maastricht,

More information

Calculating determinants for larger matrices

Calculating determinants for larger matrices Day 26 Calculating determinants for larger matrices We now proceed to define det A for n n matrices A As before, we are looking for a function of A that satisfies the product formula det(ab) = det A det

More information

1 Number Systems and Errors 1

1 Number Systems and Errors 1 Contents 1 Number Systems and Errors 1 1.1 Introduction................................ 1 1.2 Number Representation and Base of Numbers............. 1 1.2.1 Normalized Floating-point Representation...........

More information

Likelihood-based Estimation of Stochastically Singular DSGE Models using Dimensionality Reduction

Likelihood-based Estimation of Stochastically Singular DSGE Models using Dimensionality Reduction Likelihood-based Estimation of Stochastically Singular DSGE Models using Dimensionality Reduction Michal Andrle 1 Computing in Economics and Finance, Prague June 212 1 The views expressed herein are those

More information

Discretization of SDEs: Euler Methods and Beyond

Discretization of SDEs: Euler Methods and Beyond Discretization of SDEs: Euler Methods and Beyond 09-26-2006 / PRisMa 2006 Workshop Outline Introduction 1 Introduction Motivation Stochastic Differential Equations 2 The Time Discretization of SDEs Monte-Carlo

More information

Elements of Multivariate Time Series Analysis

Elements of Multivariate Time Series Analysis Gregory C. Reinsel Elements of Multivariate Time Series Analysis Second Edition With 14 Figures Springer Contents Preface to the Second Edition Preface to the First Edition vii ix 1. Vector Time Series

More information

FE670 Algorithmic Trading Strategies. Stevens Institute of Technology

FE670 Algorithmic Trading Strategies. Stevens Institute of Technology FE670 Algorithmic Trading Strategies Lecture 3. Factor Models and Their Estimation Steve Yang Stevens Institute of Technology 09/12/2012 Outline 1 The Notion of Factors 2 Factor Analysis via Maximum Likelihood

More information

A Brief Outline of Math 355

A Brief Outline of Math 355 A Brief Outline of Math 355 Lecture 1 The geometry of linear equations; elimination with matrices A system of m linear equations with n unknowns can be thought of geometrically as m hyperplanes intersecting

More information

arxiv: v1 [math.na] 5 May 2011

arxiv: v1 [math.na] 5 May 2011 ITERATIVE METHODS FOR COMPUTING EIGENVALUES AND EIGENVECTORS MAYSUM PANJU arxiv:1105.1185v1 [math.na] 5 May 2011 Abstract. We examine some numerical iterative methods for computing the eigenvalues and

More information

ECON2285: Mathematical Economics

ECON2285: Mathematical Economics ECON2285: Mathematical Economics Yulei Luo FBE, HKU September 2, 2018 Luo, Y. (FBE, HKU) ME September 2, 2018 1 / 35 Course Outline Economics: The study of the choices people (consumers, firm managers,

More information

A matrix is a rectangular array of. objects arranged in rows and columns. The objects are called the entries. is called the size of the matrix, and

A matrix is a rectangular array of. objects arranged in rows and columns. The objects are called the entries. is called the size of the matrix, and Section 5.5. Matrices and Vectors A matrix is a rectangular array of objects arranged in rows and columns. The objects are called the entries. A matrix with m rows and n columns is called an m n matrix.

More information

On the Error Correction Model for Functional Time Series with Unit Roots

On the Error Correction Model for Functional Time Series with Unit Roots On the Error Correction Model for Functional Time Series with Unit Roots Yoosoon Chang Department of Economics Indiana University Bo Hu Department of Economics Indiana University Joon Y. Park Department

More information

APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2.

APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2. APPENDIX A Background Mathematics A. Linear Algebra A.. Vector algebra Let x denote the n-dimensional column vector with components 0 x x 2 B C @. A x n Definition 6 (scalar product). The scalar product

More information

(Y jz) t (XjZ) 0 t = S yx S yz S 1. S yx:z = T 1. etc. 2. Next solve the eigenvalue problem. js xx:z S xy:z S 1

(Y jz) t (XjZ) 0 t = S yx S yz S 1. S yx:z = T 1. etc. 2. Next solve the eigenvalue problem. js xx:z S xy:z S 1 Abstract Reduced Rank Regression The reduced rank regression model is a multivariate regression model with a coe cient matrix with reduced rank. The reduced rank regression algorithm is an estimation procedure,

More information

Linear algebra and applications to graphs Part 1

Linear algebra and applications to graphs Part 1 Linear algebra and applications to graphs Part 1 Written up by Mikhail Belkin and Moon Duchin Instructor: Laszlo Babai June 17, 2001 1 Basic Linear Algebra Exercise 1.1 Let V and W be linear subspaces

More information

A matrix is a rectangular array of. objects arranged in rows and columns. The objects are called the entries. is called the size of the matrix, and

A matrix is a rectangular array of. objects arranged in rows and columns. The objects are called the entries. is called the size of the matrix, and Section 5.5. Matrices and Vectors A matrix is a rectangular array of objects arranged in rows and columns. The objects are called the entries. A matrix with m rows and n columns is called an m n matrix.

More information

Ma 3/103: Lecture 24 Linear Regression I: Estimation

Ma 3/103: Lecture 24 Linear Regression I: Estimation Ma 3/103: Lecture 24 Linear Regression I: Estimation March 3, 2017 KC Border Linear Regression I March 3, 2017 1 / 32 Regression analysis Regression analysis Estimate and test E(Y X) = f (X). f is the

More information

Vector Autoregressive Model. Vector Autoregressions II. Estimation of Vector Autoregressions II. Estimation of Vector Autoregressions I.

Vector Autoregressive Model. Vector Autoregressions II. Estimation of Vector Autoregressions II. Estimation of Vector Autoregressions I. Vector Autoregressive Model Vector Autoregressions II Empirical Macroeconomics - Lect 2 Dr. Ana Beatriz Galvao Queen Mary University of London January 2012 A VAR(p) model of the m 1 vector of time series

More information

Exploring Granger Causality for Time series via Wald Test on Estimated Models with Guaranteed Stability

Exploring Granger Causality for Time series via Wald Test on Estimated Models with Guaranteed Stability Exploring Granger Causality for Time series via Wald Test on Estimated Models with Guaranteed Stability Nuntanut Raksasri Jitkomut Songsiri Department of Electrical Engineering, Faculty of Engineering,

More information

A New Subspace Identification Method for Open and Closed Loop Data

A New Subspace Identification Method for Open and Closed Loop Data A New Subspace Identification Method for Open and Closed Loop Data Magnus Jansson July 2005 IR S3 SB 0524 IFAC World Congress 2005 ROYAL INSTITUTE OF TECHNOLOGY Department of Signals, Sensors & Systems

More information

Lecture Notes in Mathematics. Arkansas Tech University Department of Mathematics. The Basics of Linear Algebra

Lecture Notes in Mathematics. Arkansas Tech University Department of Mathematics. The Basics of Linear Algebra Lecture Notes in Mathematics Arkansas Tech University Department of Mathematics The Basics of Linear Algebra Marcel B. Finan c All Rights Reserved Last Updated November 30, 2015 2 Preface Linear algebra

More information

ON VARIANCE COVARIANCE COMPONENTS ESTIMATION IN LINEAR MODELS WITH AR(1) DISTURBANCES. 1. Introduction

ON VARIANCE COVARIANCE COMPONENTS ESTIMATION IN LINEAR MODELS WITH AR(1) DISTURBANCES. 1. Introduction Acta Math. Univ. Comenianae Vol. LXV, 1(1996), pp. 129 139 129 ON VARIANCE COVARIANCE COMPONENTS ESTIMATION IN LINEAR MODELS WITH AR(1) DISTURBANCES V. WITKOVSKÝ Abstract. Estimation of the autoregressive

More information