Cointegrated linear processes in Hilbert space


Brendan K. Beare¹, Juwon Seo², and Won-Ki Seo³

¹,³Department of Economics, University of California, San Diego
²Department of Economics, National University of Singapore

July 12, 2017

Abstract

We extend the notion of cointegration for multivariate time series to a potentially infinite dimensional setting in which our time series takes values in a complex separable Hilbert space. In this setting, standard linear processes with nonzero long run covariance operator play the role of I(0) processes. We show that the cointegrating space for an I(1) process may be sensibly defined as the kernel of the long run covariance operator of its difference. The inner product of an I(1) process with an element of its cointegrating space is a stationary complex valued process. Our main result is a version of the Granger-Johansen representation theorem: we obtain a geometric reformulation of the Johansen I(1) condition that extends naturally to a Hilbert space setting, and show that an autoregressive Hilbertian process satisfying this condition, and possibly also a compactness condition, admits an I(1) representation.

We thank Juan Carlos Escanciano and Werner Ploberger for useful suggestions at an early stage, and Jim Hamilton, Peter Hansen, Lajos Horváth, Søren Johansen, Morten Nielsen, Peter Phillips, Anders Swensen, and seminar participants at the University of Montreal, the Toulouse School of Economics, and the Université libre de Bruxelles for helpful discussions. Detailed comments from Søren Johansen were particularly valuable. The paper has benefited greatly from the generosity and insight of two anonymous referees, one of whom contributed significantly to the proof of Theorem 4.1. Part of this work was completed while the first author was visiting the National University of Singapore during September

1 Introduction

The subject of time series analysis has traditionally dealt with time series that take values in finite dimensional Euclidean space. A more recent literature on so-called functional time series analysis deals with time series that take values in a possibly infinite dimensional Banach or Hilbert space, frequently a space of functions. For instance, one observation of such a time series may be a continuous record of the value of some asset over a given trading day, or the distribution of income across an economy in a given year. An important early contribution to the literature on functional time series is the monograph of Bosq (2000), which gives a detailed theoretical treatment of linear processes in Banach and Hilbert spaces. Hörmann and Kokoszka (2012) and Horváth and Kokoszka (2012) discuss much of the subsequent literature. Empirical applications of functional time series analysis to economic and financial data have studied the term structure of interest rates (Kargin and Onatski, 2008), intraday cumulative returns (Kokoszka and Zhang, 2012), intraday volatility (Hörmann et al., 2013; Gabrys et al., 2013), and the distributions of high frequency stock returns and of individual earnings (Chang et al., 2016).

The property of cointegration, well studied for time series in finite dimensional Euclidean space, was first introduced by Granger (1981). Its study transformed the practice of time series econometrics over the following two decades, especially in applications to macroeconomic data. Given the recent surge of interest in functional data analysis and functional time series, an extension of the methods of cointegration analysis to a functional time series setting may be valuable. A recent paper by Chang et al. (2016) appears to be the first effort in this direction. Those authors consider a time series taking values in a space of square integrable probability density functions.
They introduce a notion of cointegration for this functional time series, and develop associated statistical methods based on functional principal components analysis. Unfortunately, as shown by Beare (2017), nontrivial examples of probability density valued cointegrated processes in the space considered by Chang et al. do not exist, due to the nonnegativity property of densities. Nevertheless, the framework they provide is useful if we ignore their requirement that the time series be density valued and allow it to take values in a Hilbert space of square integrable real valued functions which may or may not be probability densities.

In this paper we take a closer look at what it means for a Hilbert space valued time series to be cointegrated. Our purpose is to provide solid foundations for future research. We show that, if the difference ∆X of our time series X is a standard linear process (loosely speaking, a stationary moving average of past Hilbert space valued innovations), then the cointegrating space may be sensibly defined as the kernel of the long run covariance operator associated with ∆X. Inner products of the levels of X with elements of the cointegrating space are stationary complex valued processes. We show that the trend component in a Beveridge-Nelson decomposition of X takes values in the orthogonal complement of the cointegrating space, which we call the attractor space. Our main result is Theorem 4.1, which extends the Granger-Johansen representation theorem to a Hilbert space context. It provides conditions under which a process satisfying an autoregressive Hilbertian law of motion is integrated of order one. To achieve this extension we obtain a geometric reformulation of the Johansen I(1) condition that extends naturally to our more general setting. We require the autoregressive operators to be compact when the autoregressive law of motion is second order or higher, but impose no compactness condition for a first order law of motion.

Our paper is organized as follows. In Section 2 we review some essential mathematics. In Section 3 we discuss what it means for a linear process in Hilbert space to be cointegrated and provide some related results. Our extension of the Granger-Johansen representation theorem is stated and proved in Section 4. We conclude in Section 5 by numerically simulating a cointegrated autoregressive process in the space of square integrable functions on the unit interval whose attractor space is the subspace of quadratic polynomials.

2 Essential preliminaries

Here we briefly review essential background material for the study of cointegrated linear processes in Hilbert space, and fix standard notation and terminology. Our primary sources are Conway (1990) and Bosq (2000).
While Bosq considers random elements of real rather than complex Hilbert spaces, the complex case is similar and has been studied by Cerovecki and Hörmann (2017).

2.1 Continuous linear operators on Hilbert space

Let H be a complex separable Hilbert space with inner product ⟨·,·⟩ and norm ‖·‖. We denote by L_H the space of continuous linear operators from H to H, equipped with the operator norm ‖A‖_{L_H} = sup_{‖x‖≤1} ‖A(x)‖. It can be shown that L_H is a Banach space. We denote the kernel (null space) of an operator A ∈ L_H by ker A, and its range (image) by ran A. Both are linear subspaces of H. The dimension of ker A is called the nullity of A, and the dimension of ran A is called the rank of A. The rank-nullity theorem asserts that the two must sum to the dimension of H. We denote the adjoint of an operator A ∈ L_H by A*. We will say that an operator A ∈ L_H is positive semidefinite if ⟨A(x), x⟩ is real and nonnegative for all x ∈ H, and positive definite if in addition ⟨A(x), x⟩ > 0 for all nonzero x ∈ H. Given a set G ⊆ H, we denote the orthogonal complement to G by

    G^⊥ = {x ∈ H : ⟨x, y⟩ = 0 for all y ∈ G},   (2.1)

and the closure of G (that is, the union of G and its limit points) by cl(G). We will make repeated use of the following equalities, valid for any A ∈ L_H (see e.g. Conway, 1990):

    ker A = (ran A*)^⊥  and  (ker A)^⊥ = cl(ran A*).   (2.2)

We will refer to the two equalities in (2.2) as the strong rank-nullity theorem. Two properties of operators in L_H that will be important in Section 4 are compactness and the Fredholm property. An operator A ∈ L_H is said to be compact if it is the limit of a sequence of finite rank operators in L_H. It is said to be Fredholm if ker A and ker A* are finite dimensional and ran A is closed. The index of a Fredholm operator A is defined to be

    ind A = dim ker A − dim ker A*,   (2.3)

where we write dim for dimension. If A is Fredholm and B is compact then A + B is Fredholm with the same index as A (Conway, 1990, Thm. XI.3.11). If H is finite dimensional then every operator in L_H is compact and Fredholm of index zero. See Conway (1990, ch. XI) for further discussion of Fredholm operators.

2.2 Random elements of Hilbert space

Let (Ω, F, P) be our underlying probability space. A random element of H is a measurable map Z : Ω → H, where H is understood to be equipped with its Borel σ-algebra. Noting that ‖Z‖ is a real valued random variable, if E‖Z‖ < ∞ we say that Z is integrable.
If Z is integrable then there exists a unique element of H, which we denote EZ, such that

    E⟨Z, x⟩ = ⟨EZ, x⟩ for all x ∈ H.   (2.4)

We call EZ the expected value of Z. It is also called the Bochner integral of Z. Let L²_H be the space of random elements Z of H (identifying random elements of H that are equal almost surely) that satisfy E‖Z‖² < ∞ and EZ = 0, equipped with the norm

    ‖Z‖_{L²_H} = (E‖Z‖²)^{1/2}, Z ∈ L²_H.   (2.5)

It can be shown that L²_H is a Banach space. For Z ∈ L²_H, the Cauchy-Schwarz inequality implies that Z⟨x, Z⟩ is integrable for each x ∈ H. We may therefore define the operator C_Z ∈ L_H by

    C_Z(x) = E(Z⟨x, Z⟩), x ∈ H.   (2.6)

We call C_Z the covariance operator of Z. It is guaranteed to be self-adjoint, positive semidefinite, and compact.

2.3 Linear processes in Hilbert space

Let η = (η_t, t ∈ Z) be an independent and identically distributed (iid) sequence in L²_H, and let (A_k, k ≥ 0) be a sequence in L_H satisfying ∑_{k=0}^∞ ‖A_k‖²_{L_H} < ∞. Then it can be shown (Bosq, 2000, p. 182) that for each t ∈ Z the series

    Z_t = ∑_{k=0}^∞ A_k(η_{t−k})   (2.7)

is convergent in L²_H. We call the sequence (Z_t, t ∈ Z) a linear process in H with innovations η. More generally, given any t_0 ∈ Z ∪ {−∞}, we call the sequence (Z_t, t > t_0) a linear process in H with innovations η. Such a linear process is necessarily stationary. When the operators in (2.7) satisfy ∑_{k=0}^∞ ‖A_k‖_{L_H} < ∞, our linear process (Z_t, t > t_0) is said to be a standard linear process in H. In this case the series A = ∑_{k=0}^∞ A_k is convergent in L_H, and we define the long run covariance operator V ∈ L_H for our standard linear process to be the composition

    V = A C_{η_0} A*,   (2.8)

where C_{η_0} ∈ L_H is the covariance operator of η_0. Note that V is simply the covariance operator of A(η_0), and is therefore self-adjoint, positive semidefinite, and compact.
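When H is finite dimensional, the long run covariance operator in (2.8) is an ordinary matrix product, which makes the definition easy to check numerically. The sketch below is illustrative only: the MA(1) coefficients A_0, A_1 and the innovation covariance C_η are our own hypothetical choices, not taken from the paper.

```python
import numpy as np

# Standard linear process in R^2: Z_t = A0(eta_t) + A1(eta_{t-1}).
# A0, A1 and C_eta are illustrative choices.
A0 = np.eye(2)
A1 = np.array([[0.5, 0.2],
               [0.0, 0.3]])
C_eta = np.array([[1.0, 0.3],
                  [0.3, 2.0]])   # covariance operator of eta_0

A = A0 + A1                      # A = sum of the coefficients (trivially convergent here)
V = A @ C_eta @ A.conj().T       # long run covariance operator V = A C_eta A*, as in (2.8)

# V is the covariance operator of A(eta_0): self-adjoint, positive semidefinite.
assert np.allclose(V, V.conj().T)
assert np.all(np.linalg.eigvalsh(V) >= -1e-12)
```

In infinite dimensions the same algebra applies with A the norm-convergent operator sum; the finite dimensional case simply replaces operators by matrices.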

3 Cointegration in Hilbert space

3.1 Basic setup

Our time series of interest X = (X_t, t ≥ 0) is a sequence in L²_H. Denote the first difference of X by ∆X = (∆X_t, t ≥ 1), with ∆X_t = X_t − X_{t−1}. Leaving aside the choice of the initial condition X_0 ∈ L²_H, we suppose that ∆X is a standard linear process in H. It therefore admits the representation

    ∆X_t = ∑_{k=0}^∞ Ψ_k(ε_{t−k}), t ≥ 1,   (3.1)

where ε = (ε_t, t ∈ Z) is an iid sequence in L²_H, and (Ψ_k, k ≥ 0) is a sequence in L_H satisfying ∑_{k=0}^∞ ‖Ψ_k‖_{L_H} < ∞. As discussed in Section 2.3, this summability condition is sufficient for the series in (3.1) to be convergent in L²_H and for the series Ψ = ∑_{k=0}^∞ Ψ_k to be convergent in L_H, and ∆X is necessarily stationary. Nonetheless, for reasons to become apparent we require that the stronger condition

    ∑_{k=0}^∞ k‖Ψ_k‖_{L_H} < ∞   (3.2)

is satisfied. We require the covariance operator Σ of ε_0 to be positive definite and denote the long run covariance operator of ∆X by Λ = ΨΣΨ*, as in (2.8) above. Note that since Σ is positive definite, it must be the case that

    ker Λ = ker Ψ*.   (3.3)

Assumption 3.1. The differenced process ∆X satisfies (3.1) for some iid sequence ε = (ε_t, t ∈ Z) in L²_H and some sequence (Ψ_k, k ≥ 0) in L_H satisfying (3.2). The covariance operator Σ of ε_0 is positive definite.

3.2 Beveridge-Nelson decomposition

The purpose of condition (3.2) is to facilitate a version of the Beveridge-Nelson decomposition for X. For k ≥ 0 define Ψ̃_k = −∑_{j=k+1}^∞ Ψ_j. Under (3.2) we have

    ∑_{k=0}^∞ ‖Ψ̃_k‖_{L_H} ≤ ∑_{k=0}^∞ ∑_{j=k+1}^∞ ‖Ψ_j‖_{L_H} = ∑_{k=0}^∞ k‖Ψ_k‖_{L_H} < ∞.   (3.4)

Recalling our discussion in Section 2.3, condition (3.4) ensures that the series ν_t = ∑_{k=0}^∞ Ψ̃_k(ε_{t−k}) is convergent in L²_H for each t ∈ Z, and that the sequence ν = (ν_t, t ≥ 0) in L²_H is stationary. For t ≥ 1 let ξ_t = ∑_{s=1}^t ε_s, and let ξ_0 = 0 ∈ L²_H. With a little algebra we obtain

    X_t = X_0 − ν_0 + Ψ(ξ_t) + ν_t, t ≥ 0.   (3.5)

The decomposition (3.5) was applied informally by Beveridge and Nelson (1981) for the case H = C to study business cycle fluctuations. For the multidimensional case H = C^n it is sometimes known as the common trends representation (Stock and Watson, 1988). Phillips and Solo (1992) gave a detailed treatment of the Beveridge-Nelson decomposition and illustrated its power as a tool for deriving laws of large numbers and invariance principles for linear processes. It was first applied in an infinite dimensional setting by Chang et al. (2016). The point is that X_t is decomposed into the sum of an initial condition X_0 − ν_0, a random walk component Ψ(ξ_t), and a transitory component ν_t.

3.3 Cointegrating and attractor spaces

Definition 3.1. We will call the kernel of Λ the cointegrating space, and its orthogonal complement the attractor space.

Our next two results explain why the terminology introduced in Definition 3.1 is sensible. In Proposition 3.1 we show that the cointegrating space consists precisely of those vectors whose inner products with the levels of X are stationary given suitable initialization, consistent with the use of the term in prior literature. In Proposition 3.2 we show that the random walk component Ψ(ξ_t) in the Beveridge-Nelson decomposition (3.5) is confined to the attractor space. Deviations of our process from the attractor space may therefore be attributed to the transitory component ν_t, making protracted large deviations unlikely; in this sense the process is attracted to the attractor space.
The earliest use of the term attractor in this context we could find is by Granger and Hallman (1991), who considered both linear and nonlinear attractors. It is also used by Johansen (1995).

Proposition 3.1. Under Assumption 3.1, an element x ∈ H belongs to the cointegrating space if and only if, for some choice of the initial condition X_0 ∈ L²_H, the sequence of complex valued random variables X(x) = (⟨X_t, x⟩, t ≥ 0) is stationary. Moreover, a single choice of the initial condition can make X(x) stationary for all x in the cointegrating space.

Proof. Taking the inner product of both sides of (3.5) with x ∈ H gives

    ⟨X_t, x⟩ = ⟨X_0 − ν_0, x⟩ + ⟨Ψ(ξ_t), x⟩ + ⟨ν_t, x⟩, t ≥ 0.   (3.6)

Suppose that x ∈ ker Λ. Then ⟨Ψ(ξ_t), x⟩ = ⟨ξ_t, Ψ*(x)⟩ = 0 by (3.3) and, employing the initialization X_0 = ν_0, we obtain X(x) = ν(x), where we denote ν(x) = (⟨ν_t, x⟩, t ≥ 0). We know that ν(x) is stationary because ν is stationary and ⟨·, x⟩ is linear, hence Borel measurable. Thus X(x) is stationary.

Suppose instead that x ∉ ker Λ. Then x ∉ ker Ψ* by (3.3), and so ⟨Λ(x), x⟩ = ⟨ΣΨ*(x), Ψ*(x)⟩ > 0 due to the positive definiteness of Σ. Since Ψ(ξ_t) has covariance operator tΨΣΨ*, it follows that E|⟨Ψ(ξ_t), x⟩|² = t⟨Λ(x), x⟩ is increasing in t. Using (3.6) and Minkowski's inequality we can bound the square root of E|⟨Ψ(ξ_t), x⟩|² by the quantity

    (E|⟨X_t, x⟩|²)^{1/2} + (E|⟨ν_t, x⟩|²)^{1/2} + (E|⟨X_0 − ν_0, x⟩|²)^{1/2},   (3.7)

which is finite for any X_0 ∈ L²_H. Since ν(x) is stationary, E|⟨ν_t, x⟩|² cannot depend on t, and so the fact that E|⟨Ψ(ξ_t), x⟩|² increases with t implies that E|⟨X_t, x⟩|² must also increase with t. Thus X(x) cannot be stationary. Since the initialization X_0 = ν_0 used to make X(x) stationary for x ∈ ker Λ did not depend on x, the final assertion of the Proposition is also proved.

Proposition 3.2. Under Assumption 3.1, the attractor space is the closure of the range of Ψ.

Proof. Immediate from (3.3) and the strong rank-nullity theorem.

Remark 3.1. In their discussion of cointegration in a Hilbert space setting, Chang et al. (2016) assume that H can be written as the direct sum of two subspaces H_N and H_S, such that (⟨X_t, x⟩, t ≥ 0) is nonstationary for all x ∈ H_N and stationary for all x ∈ H_S. Proposition 3.1 shows that this is always possible if we simply take H_S = ker Λ and H_N = (ker Λ)^⊥ and suitably initialize X. In addition, Proposition 3.2 ensures that Assumption 2.1(b) of Chang et al. is always satisfied, since it implies that ran Ψ ⊆ H_N and that, when H_N is finite dimensional, ran Ψ = H_N.
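Propositions 3.1 and 3.2 can be illustrated by simulation in a finite dimensional coordinate system. The sketch below is ours, not from the paper: it takes H = R³, Σ = id, and ∆X_t = Ψ(ε_t) for an illustrative rank-one Ψ, so that the attractor space is ran Ψ and the cointegrating space is ker Ψ* = ker Λ.

```python
import numpy as np

rng = np.random.default_rng(0)
n, T = 3, 5000

# Illustrative rank-one Psi: attractor space = ran Psi = span{(1,1,1)}.
a = np.ones(n) / np.sqrt(n)
Psi = np.outer(a, a)        # self-adjoint here, so ker Psi* = ker Psi

# dX_t = Psi(eps_t), eps_t iid N(0, I): Lambda = Psi Psi*, ker Lambda = ker Psi*.
eps = rng.standard_normal((T, n))
X = np.cumsum(eps @ Psi.T, axis=0)      # levels X_t, initialized at X_0 = 0

x_coint = np.array([1.0, -1.0, 0.0]) / np.sqrt(2)   # element of the cointegrating space
x_attr = a                                          # element of the attractor space

proj_coint = X @ x_coint    # <X_t, x> for cointegrating x: stationary (here identically zero)
proj_attr = X @ x_attr      # <X_t, x> for attractor x: scalar random walk, E|<X_t, x>|^2 = t
```

In this MA(0) example the transitory component ν_t vanishes, so (3.5) reduces to X_t = Ψ(ξ_t): the path lies exactly in the attractor space, and inner products with cointegrating vectors are degenerate-stationary.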
3.4 Orders of integration

In time series analysis it is common to say that a series is integrated of order d, or I(d), frequently I(0) or I(1). As discussed by Davidson (2009), the precise meaning of this expression varies by author and context, and is not always explicitly given.

Following the approach of Johansen (1995, p. 35), we will say that a sequence in L²_H is I(0) if it is a standard linear process with nonzero long run covariance operator, and that a sequence in L²_H is I(d) with d = 1, 2, ... if its dth difference is I(0). Note that the difference of a standard linear process is a standard linear process with long run covariance operator equal to zero.

In this paper we are primarily concerned with the case where X is I(1). Assumption 3.1 asserts that the first difference of X is a standard linear process, implying that X is I(1) if and only if Λ ≠ 0. Moreover, Assumption 3.1 rules out the possibility that X is I(d) for any d ≥ 2, because the dth difference of X is a standard linear process with long run covariance operator equal to zero for every d ≥ 2. In the degenerate case Λ = 0 it is not necessarily the case that X suitably initialized is I(0), because our assumptions do not guarantee that X will itself have nonzero long run covariance operator. We may nevertheless assert that X suitably initialized is stationary.

Proposition 3.3. Under Assumption 3.1, X is stationary for some choice of the initial condition X_0 ∈ L²_H if and only if Λ = 0.

Proof. It follows from (3.3) that Λ = 0 if and only if Ψ = 0. If Ψ = 0 then we see from (3.5) that X is stationary given the initialization X_0 = ν_0. If Ψ ≠ 0 then we may choose an x ∈ H with x ∉ ker Ψ*. Taking the inner product of both sides of (3.5) with x and arguing as in the proof of Proposition 3.1, we find that E|⟨X_t, x⟩|² is increasing in t for any choice of X_0 ∈ L²_H, ruling out stationarity of X.

4 A Hilbert space version of the Granger-Johansen representation theorem

Fix p ∈ N, and suppose that the sequence of random elements X_{−p+1}, X_{−p+2}, ... ∈ L²_H satisfies the autoregressive law of motion

    X_t = ∑_{j=1}^p Φ_j(X_{t−j}) + ε_t, t ≥ 1,   (4.1)

for some Φ_1, ..., Φ_p ∈ L_H. Let id_H denote the identity operator on H, and for z ∈ C define Φ(z) = id_H − zΦ_1 − ⋯ − z^p Φ_p.
This operator valued function of a complex variable is called an operator pencil (Markus, 2012), or matrix pencil when H = C^n. Let Φ^(1)(z) denote the first complex derivative of Φ(z).
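For H = C^n, the points at which the pencil Φ(z) is noninvertible are the roots of the polynomial det Φ(z), and these can be computed by linearization: det Φ(z) = 0 exactly when 1/z is a nonzero eigenvalue of the companion matrix built from the coefficients. The AR(2) coefficients below are illustrative choices of ours.

```python
import numpy as np

# Pencil Phi(z) = I - z*Phi1 - z^2*Phi2 for an illustrative AR(2) in R^2.
Phi1 = np.array([[0.5, 0.1],
                 [0.0, 0.4]])
Phi2 = np.array([[0.2, 0.0],
                 [0.1, 0.1]])
n = 2

# det Phi(z) = 0 exactly when 1/z is a nonzero eigenvalue of the companion matrix.
companion = np.block([[Phi1, Phi2],
                      [np.eye(n), np.zeros((n, n))]])
eigs = np.linalg.eigvals(companion)
roots = 1.0 / eigs[np.abs(eigs) > 1e-10]

# Each computed root z* should make Phi(z*) numerically singular.
for z in roots:
    Phi_z = np.eye(n) - z * Phi1 - z**2 * Phi2
    assert abs(np.linalg.det(Phi_z)) < 1e-8
```

Since Phi2 here is invertible, det Φ(z) has full degree np = 4 and the companion matrix has no zero eigenvalues, so all four noninvertibility points are recovered.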

Under what conditions will a process satisfying the autoregressive law of motion (4.1) be I(1)? In the case H = C^n, this thorny matter is the subject of the Granger-Johansen representation theorem. The result has a complicated history. A version of it first appears in an unpublished working paper of Granger (1983). It appears in published form without a proof in Granger (1986), and with a proof in Engle and Granger (1987). That proof, however, turned out to be incorrect: a counterexample eventually provided by Johansen (2009, p. 126) shows that Lemma 1 of Engle and Granger (1987), which is also Theorem 1 of Granger (1983), is incorrect. The first correct proof of the Granger-Johansen representation theorem was provided by Johansen (1991), who showed that the first difference of a process satisfying (4.1) admits the moving average representation (3.1) when (a) all solutions to det Φ(z) = 0 are equal to one or are outside the unit circle, and (b) Φ(z) satisfies the following technical condition ruling out I(d) solutions to (4.1) for d > 1. We write det for the determinant of a square complex matrix.

Definition 4.1. The n × n complex matrices Φ_1, ..., Φ_p are said to satisfy the Johansen I(1) condition if

    det(α_⊥* Φ^(1)(1) β_⊥) ≠ 0,   (4.2)

where α and β are full rank n × r complex matrices satisfying Φ(1) = αβ*, and α_⊥ and β_⊥ are full rank n × (n − r) complex matrices such that α*α_⊥ = 0 and β*β_⊥ = 0.

The main result of our paper, Theorem 4.1 below, is an extension of the Granger-Johansen representation theorem to a more general Hilbert space setting. To obtain this extension we rely on a geometric reformulation of the Johansen I(1) condition in which H is asserted to be the direct sum of two closed linear subspaces depending on Φ(z).

Definition 4.2. Given closed linear subspaces V, W ⊆ H, we say that H is the direct sum of V and W, and write H = V ⊕ W, if H = V + W and V ∩ W = {0}.
When H admits a direct sum decomposition H = V ⊕ W, we may uniquely decompose any element of H into the sum of an element of V and an element of W, and we may uniquely define a map P : H → H by requiring that P(v + w) = v for v ∈ V and w ∈ W. It can be shown (Conway, 1990, Thm. III.13.2) that P belongs to L_H and satisfies P² = P, ran P = V and ker P = W. We will call P the oblique projection on V along W. Oblique projections play a key role in the proof of Theorem 4.1.

The following simple lemma provides an equivalence between the invertibility of a product of matrices and a direct sum condition. In it, we write ker A and ran A for the null space and column space of an m × n complex matrix A.

Lemma 4.1. Let A be a full rank m × n complex matrix and let B be an n × m complex matrix, with m ≤ n. Then AB is invertible if and only if C^n = ker A ⊕ ran B.

Proof. Suppose first that AB is invertible. Then we must have ker A ∩ ran B = {0}, because otherwise AB maps some nonzero vector in C^m to zero. As vector subspaces of C^n, ker A and ran B also satisfy

    dim(ker A + ran B) = dim ker A + dim ran B − dim(ker A ∩ ran B)   (4.3)
                       = dim ker A + dim ran B.   (4.4)

Since AB is invertible we have dim ran A = dim ran B = m. The rank-nullity theorem implies that dim ker A = n − dim ran A = n − m. Thus dim(ker A + ran B) = n. Since ker A + ran B ⊆ C^n, it follows that ker A + ran B = C^n. We conclude that C^n = ker A ⊕ ran B.

Suppose next that C^n = ker A ⊕ ran B. Since ker A ∩ ran B = {0} we see immediately that the only vector mapped by AB to zero is zero, implying that AB is injective. To establish surjectivity of AB we note that

    AB C^m = A ran B = A(ker A + ran B) = A C^n = C^m,   (4.5)

with the final equality following from the full rank condition on A.

Using Lemma 4.1 we obtain the following reformulation of the Johansen I(1) condition as a direct sum condition.

Proposition 4.1. The n × n complex matrices Φ_1, ..., Φ_p satisfy the Johansen I(1) condition if and only if

    C^n = ran Φ(1) ⊕ Φ^(1)(1) ker Φ(1).   (4.6)

When p = 1, this may equivalently be written as

    C^n = ran Φ(1) ⊕ ker Φ(1).   (4.7)

Proof. We first show that (4.6) and (4.7) are equivalent when p = 1. In this case Φ(z) = id_{C^n} − zΦ_1, and so (a) we have Φ^(1)(1) = −Φ_1, and (b) ker Φ(1) is the set of fixed points of Φ_1. It follows from (a) and (b) that Φ^(1)(1) ker Φ(1) = ker Φ(1), which proves the equivalence of (4.6) and (4.7).

We next show that (4.2) is equivalent to (4.6). Lemma 4.1 tells us that the (n − r) × (n − r) complex matrix α_⊥* Φ^(1)(1) β_⊥ is invertible if and only if C^n is the direct sum of the kernel of α_⊥* and the range of Φ^(1)(1) β_⊥. Since the range of α is ran Φ(1), the matrix α_⊥* has kernel equal to ran Φ(1), and similarly, since the kernel of β* is ker Φ(1), the matrix β_⊥ has range equal to ker Φ(1). Thus α_⊥* Φ^(1)(1) β_⊥ is invertible if and only if C^n is the direct sum of ran Φ(1) and Φ^(1)(1) ker Φ(1), and so (4.2) and (4.6) are equivalent.

To illustrate our geometric reformulation of the Johansen I(1) condition we revisit the three running examples used in Chapter 4 of Johansen (1995), which deals with the Granger-Johansen representation theorem.

Example 4.1. Consider the first order autoregressive law of motion X_t = Φ_1 X_{t−1} + ε_t with autoregressive coefficient matrix

    Φ_1 = [ 1 + α   −α ]
          [ 0        1 ].   (4.8)

It is easy to check that Φ_1 has eigenvalues 1 and 1 + α. To ensure that these eigenvalues are equal to one or inside the unit circle we set α ∈ (−2, 0), excluding α = 0 to rule out the uninteresting case where Φ_1 is the identity. Observe that

    Φ(1) = id_H − Φ_1 = [ −α   α ]
                        [  0   0 ].   (4.9)

In Figure 4.1(a) we depict the subspaces ran Φ(1) and ker Φ(1). Since Φ(1) has real elements, we display these as subspaces of R², though of course they are in fact subspaces of C². We do the same in Examples 4.2 and 4.3 below. The range of Φ(1) is the linear span of the vector (1, 0), and the kernel of Φ(1) is the linear span of the vector (1, 1). Clearly C² is the direct sum of these two subspaces, so the Johansen I(1) condition is satisfied.

Example 4.2. Consider the first order autoregressive law of motion X_t = Φ_1 X_{t−1} + ε_t with autoregressive coefficient matrix

    Φ_1 = [ 1   0 ]
          [ 1   1 ].   (4.10)

It is easy to check that 1 is the only eigenvalue of Φ_1. In Figure 4.1(b) we depict the range and kernel of the matrix Φ(1) = id_H − Φ_1. There is not a whole lot to see, because the range and kernel both coincide with the vertical axis. We do not have C² = ran Φ(1) ⊕ ker Φ(1), and so the Johansen I(1) condition is violated. In fact, Johansen (1995, p. 47) notes that this law of motion generates I(2) processes.

Example 4.3. Consider the second order autoregressive law of motion X_t = Φ_1 X_{t−1} + Φ_2 X_{t−2} + ε_t with autoregressive coefficient matrices

    Φ_1 = [  5/4   γ − 1/4 ]        Φ_2 = [ 0   −γ ]
          [ −1/4   5/4     ],             [ 0    0 ].   (4.11)

If γ = 0 then we have a first order law of motion with the autoregressive operator having eigenvalues 1 and 3/2. This case falls outside the scope of our analysis because there is an eigenvalue outside the unit circle, so suppose that γ ≠ 0. Consider the matrix pencil

    Φ(z) = id_H − zΦ_1 − z²Φ_2 = [ 1 − (5/4)z    −(γ − 1/4)z + γz² ]
                                 [ (1/4)z         1 − (5/4)z       ].   (4.12)

It is straightforward to show that the set of points z ∈ C at which Φ(z) is noninvertible consists of two or three isolated points: 1, and the one or two solutions to the quadratic equation

    (1/4)γz² − (3/2)z + 1 = 0.   (4.13)

Suppose that these solutions are equal to one or outside the unit circle. From (4.12) we obtain

    Φ(1) = [ −1/4    1/4 ]        Φ^(1)(1) = [ −5/4   γ + 1/4 ]
           [  1/4   −1/4 ],                  [  1/4   −5/4    ].   (4.14)

It is apparent that the subspace Φ^(1)(1) ker Φ(1) depends on the parameter γ, whereas ran Φ(1) and ker Φ(1) do not. Following Johansen we set γ = 9/4, and note that in this case (4.13) has the unique solution z = 4/3, which is outside the unit circle as required. We depict the three subspaces in Figure 4.1(c). It is apparent that ran Φ(1) and Φ^(1)(1) ker Φ(1) are linearly independent subspaces whose direct sum is C², and so the Johansen I(1) condition is satisfied.
Note however that if we reduced γ from 9/4 to 2, this would have the effect in Figure 4.1(c) of rotating the line Φ^(1)(1) ker Φ(1) clockwise about the origin so that it coincides with the line ran Φ(1), leading to a failure of the Johansen I(1) condition.
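Proposition 4.1 reduces checking (4.2) to linear algebra: in C^n, condition (4.6) holds exactly when bases of ran Φ(1) and of Φ^(1)(1) ker Φ(1) together span C^n and their dimensions add to n. The sketch below checks the three examples this way; the helper function is ours, and the explicit matrix entries follow our reading of Examples 4.1–4.3 (Example 4.1 with α = −1/2, Example 4.3 parameterized by γ as in the text).

```python
import numpy as np

def johansen_i1_holds(Phis):
    """Check C^n = ran Phi(1) + Phi'(1) ker Phi(1) (direct sum) via rank computations."""
    n = Phis[0].shape[0]
    Phi_at_1 = np.eye(n) - sum(Phis)                           # Phi(1)
    dPhi_at_1 = -sum((j + 1) * P for j, P in enumerate(Phis))  # Phi'(1) = -Phi_1 - 2*Phi_2 - ...
    U, s, Vh = np.linalg.svd(Phi_at_1)
    tol = 1e-10
    ran = U[:, s > tol]                  # orthonormal basis of ran Phi(1)
    ker = Vh[s <= tol].conj().T          # orthonormal basis of ker Phi(1)
    image = dPhi_at_1 @ ker              # spans Phi'(1) ker Phi(1)
    dims_add_up = (np.linalg.matrix_rank(ran, tol=1e-8)
                   + np.linalg.matrix_rank(image, tol=1e-8) == n)
    spans = np.linalg.matrix_rank(np.hstack([ran, image]), tol=1e-8) == n
    return bool(dims_add_up and spans)

ex41 = [np.array([[0.5, 0.5], [0.0, 1.0]])]          # Example 4.1, alpha = -1/2
ex42 = [np.array([[1.0, 0.0], [1.0, 1.0]])]          # Example 4.2

def ex43(gamma):                                     # Example 4.3
    return [np.array([[1.25, gamma - 0.25], [-0.25, 1.25]]),
            np.array([[0.0, -gamma], [0.0, 0.0]])]

print(johansen_i1_holds(ex41))        # True
print(johansen_i1_holds(ex42))        # False
print(johansen_i1_holds(ex43(2.25)))  # True  (gamma = 9/4)
print(johansen_i1_holds(ex43(2.0)))   # False (gamma = 2)
```

The γ = 2 failure reproduces the rotation described above: Φ^(1)(1) ker Φ(1) collapses onto ran Φ(1), so the two subspaces no longer span C².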

Figure 4.1: Visual aid to Examples 4.1–4.3. (a) Example 4.1: ran Φ(1) and ker Φ(1); (b) Example 4.2: ran Φ(1) = ker Φ(1); (c) Example 4.3: ran Φ(1), ker Φ(1) and Φ^(1)(1) ker Φ(1).

In view of Proposition 4.1, the following extension of Definition 4.1 to a Hilbert space setting is natural.

Definition 4.3. We say that Φ_1, ..., Φ_p ∈ L_H satisfy the Johansen I(1) condition if the linear subspaces ran Φ(1) and Φ^(1)(1) ker Φ(1) are closed and if

    H = ran Φ(1) ⊕ Φ^(1)(1) ker Φ(1).   (4.15)

We are now in a position to state and prove our version of the Granger-Johansen representation theorem.

Theorem 4.1. Let Φ_1, ..., Φ_p ∈ L_H, and for z ∈ C define

    Φ(z) = id_H − zΦ_1 − ⋯ − z^p Φ_p.   (4.16)

Suppose there exists η > 0 such that either z = 1 or |z| ≥ 1 + η whenever Φ(z) is noninvertible. Suppose further that Φ_1, ..., Φ_p satisfy the Johansen I(1) condition. If p > 1, suppose further that Φ_1, ..., Φ_p are compact. Let ε = (ε_t, t ∈ Z) be an iid sequence in L²_H. Then, for suitably chosen X_{−p+1}, ..., X_0 ∈ L²_H depending on (ε_t : t ≤ 0), the sequence X = (X_t : t ≥ 0) in L²_H generated for t ≥ 1 by the autoregressive law of motion

    X_t = ∑_{j=1}^p Φ_j(X_{t−j}) + ε_t   (4.17)

satisfies

    X_t = X_0 − ν_0 + Ψ(∑_{s=1}^t ε_s) + ν_t   (4.18)

for all t ≥ 1, where Ψ is an operator in L_H satisfying ran Ψ = ker Φ(1), and ν = (ν_t, t ∈ Z) is a standard linear process in H.

Proof. We first deal with the case p = 1. By repeating the argument in the first paragraph of the proof of Proposition 4.1 with H in place of C^n, we find that the Johansen I(1) condition asserts the decomposition H = ran Φ(1) ⊕ ker Φ(1). We may therefore write id_H = P_1 + P_2, where P_1 is the oblique projection on ker Φ(1) along ran Φ(1), and P_2 is the oblique projection on ran Φ(1) along ker Φ(1). Note that P_1Φ(1) = Φ(1)P_1 = 0 and P_2Φ(1) = Φ(1)P_2 = Φ(1). We may apply P_1 to both sides of the equality ∆X_t = −Φ(1)(X_{t−1}) + ε_t to obtain P_1(∆X_t) = P_1(ε_t), whence

    P_1(X_t) = P_1(X_0) + P_1(∑_{s=1}^t ε_s), t ≥ 1.   (4.19)

Further, we may apply P_2 to both sides of the equality X_t = (id_H − Φ(1))(X_{t−1}) + ε_t to obtain

    P_2(X_t) = (id_H − Φ(1))P_2(X_{t−1}) + P_2(ε_t), t ≥ 1.   (4.20)

It is clear that the image of ran Φ(1) under id_H − Φ(1) is contained in ran Φ(1). We may therefore let Γ denote the restriction of id_H − Φ(1) to ran Φ(1), viewed as an operator in L_{ran Φ(1)}. We will show that the spectral radius of Γ is less than one. A point z ∈ C belongs to the spectrum of Γ if and only if Φ(1) − (1 − z) id_H is not invertible as a map from ran Φ(1) to ran Φ(1). We first show that z = 1 does not belong to the spectrum of Γ. Our direct sum condition implies that H = ran Φ(1) + ker Φ(1), so we have

    Φ(1) ran Φ(1) = Φ(1)(ran Φ(1) + ker Φ(1)) = Φ(1)H = ran Φ(1).   (4.21)

This shows that Φ(1) is surjective as a map from ran Φ(1) to ran Φ(1). Consequently, z = 1 belongs to the spectrum of Γ if and only if Φ(1) is not injective as a map from ran Φ(1) to ran Φ(1). Injectivity fails if and only if there exists a nonzero x ∈ ran Φ(1) such that Φ(1)(x) = 0. But any such x must satisfy x ∈ ker Φ(1), which is impossible because our direct sum condition implies that ran Φ(1) ∩ ker Φ(1) = {0}.
Thus Φ(1) is injective as a map from ran Φ(1) to ran Φ(1), and we conclude that z = 1 does not belong to the spectrum of Γ.

We next show that Φ(z⁻¹) is noninvertible for any nonzero z in the spectrum of Γ. Under our assumptions, noninvertibility of Φ(z⁻¹) implies that either z = 1 or |z| ≤ 1/(1 + η). We have shown already that z = 1 does not belong to the spectrum of Γ, so this will establish that Γ has spectral radius less than one. Observe first that if z belongs to the spectrum of Γ, then either Φ(1) − (1 − z) id_H is not injective as a map from ran Φ(1) to ran Φ(1), or it is not surjective as a map from ran Φ(1) to ran Φ(1). In the former case it is immediate that Φ(1) − (1 − z) id_H is not injective, hence not invertible. In the latter case we may use our direct sum condition to write

(Φ(1) − (1 − z) id_H)H = (Φ(1) − (1 − z) id_H)(ran Φ(1) + ker Φ(1))   (4.22)
                       = (Φ(1) − (1 − z) id_H) ran Φ(1) + ker Φ(1),   (4.23)

using z ≠ 1 to obtain the second equality. If Φ(1) − (1 − z) id_H is not surjective as a map from ran Φ(1) to ran Φ(1), then (Φ(1) − (1 − z) id_H) ran Φ(1) is a strict subset of ran Φ(1). Since H = ran Φ(1) ⊕ ker Φ(1), it follows that (Φ(1) − (1 − z) id_H)H is a strict subset of H, and so Φ(1) − (1 − z) id_H is not surjective, hence not invertible. We conclude that if z belongs to the spectrum of Γ then Φ(1) − (1 − z) id_H is not invertible. In the case p = 1 under present consideration, for z ≠ 0 we have Φ(1) − (1 − z) id_H = zΦ(z⁻¹). Thus if z is a nonzero element of the spectrum of Γ then Φ(z⁻¹) is noninvertible. This establishes that Γ has spectral radius less than one.

Let ρ(Γ) < 1 denote the spectral radius of Γ, and recall (Conway, 1990, Prop. VII.3.8) the Gelfand spectral radius formula

ρ(Γ) = lim_{k→∞} ‖Γ^k‖^{1/k}_{L_{ran Φ(1)}}.   (4.24)

Fix a ∈ (ρ(Γ), 1). It is apparent that there exists n ∈ N such that for all k ≥ n we have ‖Γ^k‖_{L_{ran Φ(1)}} < a^k. Consequently, Σ_{k=0}^∞ ‖Γ^k‖_{L_{ran Φ(1)}} < ∞. This summability condition allows us to define the standard linear process

ν_t = Σ_{k=0}^∞ Γ^k P₂(ε_{t−k}),   t ∈ Z,   (4.25)

with the series convergent in L²_{ran Φ(1)} and hence also in L²_H. If we now choose our initial condition X₀ to satisfy P₂(X₀) = ν₀, then it follows from (4.20) that P₂(X_t) = ν_t for all t ≥ 1. In view of (4.19) we therefore have

X_t = P₁(X_t) + P₂(X_t) = P₁(X₀) + P₁(Σ_{s=1}^t ε_s) + ν_t,   t ≥ 1.   (4.26)

Since P₁(X₀) = X₀ − ν₀, this completes the proof for the case p = 1.
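The role of the Gelfand formula (4.24) is easy to visualize in finite dimensions: a non-normal matrix can have operator norm above one while its powers still decay geometrically at the rate of the spectral radius, which is what makes the series in (4.25) summable. A minimal numerical sketch (the matrix Γ here is an arbitrary illustration, not taken from the paper):

```python
import numpy as np

# Illustration of the Gelfand formula rho(Gamma) = lim_k ||Gamma^k||^(1/k)
# from (4.24). Gamma is non-normal: its operator norm exceeds one, yet its
# spectral radius is 0.6, so sum_k ||Gamma^k|| converges as needed in (4.25).
Gamma = np.array([[0.5, 2.0],
                  [0.0, 0.6]])

rho = max(abs(np.linalg.eigvals(Gamma)))            # spectral radius = 0.6
norms = [np.linalg.norm(np.linalg.matrix_power(Gamma, k), 2)
         for k in range(1, 200)]
gelfand = norms[-1] ** (1.0 / 199)                  # k-th root of ||Gamma^199||

print(norms[0] > 1.0)             # True: a single power can have norm above one
print(abs(gelfand - rho) < 0.02)  # True: the k-th root approaches rho
```

The point of the example is exactly the one exploited in the proof: eventual geometric decay of ‖Γ^k‖ follows from ρ(Γ) < 1 even when ‖Γ‖ ≥ 1.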

To deal with the case p > 1 we resort to the so-called companion form. Equip the Cartesian product H^p with the inner product

⟨(x₁, ..., x_p), (y₁, ..., y_p)⟩_p = ⟨x₁, y₁⟩ + ⋯ + ⟨x_p, y_p⟩   (4.27)

to obtain a Hilbert space. Define the random elements X̂_t, ε̂_t ∈ L²_{H^p} and the operator Φ̂ ∈ L_{H^p} by

X̂_t = (X_t, X_{t−1}, ..., X_{t−p+1}),   ε̂_t = (ε_t, 0, ..., 0),   (4.28)

where Φ̂ is the block operator whose first row of blocks is (Φ₁, Φ₂, ..., Φ_{p−1}, Φ_p), with id_H in each block immediately below the diagonal and zeros elsewhere. We then have

X̂_t = Φ̂(X̂_{t−1}) + ε̂_t,   t ≥ 1;   (4.29)

see e.g. Johansen (1995, p. 15) or Bosq (2000, p. 128). For z ∈ C define Φ̂(z) = id_{H^p} − zΦ̂. Note for future reference that ker Φ̂(1) is the diagonal embedding of ker Φ(1) in H^p; that is,

ker Φ̂(1) = {(x, ..., x) ∈ H^p : x ∈ ker Φ(1)}.   (4.30)

Suppose (we will verify these two conditions shortly) that

noninvertibility of Φ̂(z) implies either z = 1 or |z| ≥ 1 + η,   (4.31)

and that

ran Φ̂(1) and ker Φ̂(1) are closed and H^p = ran Φ̂(1) ⊕ ker Φ̂(1).   (4.32)

Then our result proved for p = 1 implies that with a suitable choice of our initial condition X̂₀ we obtain

X̂_t = X̂₀ − ν̂₀ + P̂₁(Σ_{s=1}^t ε̂_s) + ν̂_t,   t ≥ 1,   (4.33)

where P̂₁ is the oblique projection on ker Φ̂(1) along ran Φ̂(1) and (ν̂_t, t ∈ Z) is a standard linear process in H^p. Let Π : H^p → H denote the (continuous, linear) map Π(x₁, ..., x_p) = x₁, and note that its adjoint Π* : H → H^p is given by Π*(x) = (x, 0, ..., 0). Applying Π to both sides of the law of motion (4.33), we obtain

X_t = X₀ − ν₀ + Ψ(Σ_{s=1}^t ε_s) + ν_t,   t ≥ 1,   (4.34)

where we have defined Ψ = Π P̂₁ Π* and ν_t = Π(ν̂_t). This establishes our desired representation for the case p > 1.

It remains only to verify that conditions (4.31) and (4.32) are satisfied. To verify condition (4.31) we write Φ̂(z) in the 2 × 2 block partition

Φ̂(z) = [A B; C D],   (4.35)

where A = id_H − zΦ₁, B is the block row (−zΦ₂, −zΦ₃, ..., −zΦ_p), C is the block column (−z id_H, 0, ..., 0), and D : H^{p−1} → H^{p−1} is lower block triangular with id_H in each diagonal block and −z id_H in each block immediately below the diagonal. The operator D is invertible, with lower block triangular inverse

D⁻¹ whose (j, k) block is z^{j−k} id_H for j ≥ k.   (4.36)

The Schur complement of D in the partition given in (4.35) is defined by the equality D⁺ = A − BD⁻¹C. If D⁺ is invertible then Φ̂(z) is invertible, with inverse given by the well-known partitioned inverse formula. It is simple to verify that D⁺ = Φ(z). Therefore, if Φ̂(z) is noninvertible then Φ(z) is noninvertible. We have assumed that noninvertibility of Φ(z) implies z = 1 or |z| ≥ 1 + η, so this establishes that condition (4.31) is satisfied.

To verify condition (4.32) we must show that ran Φ̂(1) and ker Φ̂(1) are closed, that ran Φ̂(1) ∩ ker Φ̂(1) = {0}, and that H^p = ran Φ̂(1) + ker Φ̂(1). We first show that ran Φ̂(1) and ker Φ̂(1) are closed. Closedness of ker Φ̂(1) is immediate from the continuity of Φ̂(1). To show closedness of ran Φ̂(1) we write

Φ̂(1) = J − K,   (4.37)

where J is the lower block triangular operator with id_H in each diagonal block and −id_H in each block immediately below the diagonal, and K is the block operator whose first row of blocks is (Φ₁, Φ₂, ..., Φ_p), with zeros elsewhere. It is easy to see that ker J = {0}, ker J* = {0} and ran J = H^p. Thus J is Fredholm. The operator K inherits the property of compactness from Φ₁, ..., Φ_p. The Fredholm property is preserved under compact perturbation, so Φ̂(1) = J − K is Fredholm and thus has closed range.

We next show that ran Φ̂(1) ∩ ker Φ̂(1) = {0}. If y = (y₁, ..., y_p) ∈ ran Φ̂(1) then there exists x = (x₁, ..., x_p) ∈ H^p such that

Φ̂(1)(x) = (x₁ − Σ_{j=1}^p Φ_j(x_j), x₂ − x₁, ..., x_p − x_{p−1}) = (y₁, y₂, ..., y_p).   (4.38)

If also y ∈ ker Φ̂(1) then, in view of (4.30), we have y₁ = ⋯ = y_p ∈ ker Φ(1). Thus from (4.38) we have x_j = x₁ + (j − 1)y₁ for j = 2, ..., p, and hence

Φ(1)(x₁) = y₁ + Σ_{j=1}^p (j − 1)Φ_j(y₁).   (4.39)

Since y₁ ∈ ker Φ(1) we may add either side of the equality 0 = −y₁ + Σ_{j=1}^p Φ_j(y₁) to either side of equality (4.39) to obtain

Φ(1)(x₁) = Σ_{j=1}^p jΦ_j(y₁) = −Φ^(1)(1)(y₁).   (4.40)

This shows that Φ^(1)(1)(y₁) ∈ ran Φ(1). Since y₁ ∈ ker Φ(1), and the Johansen I(1) condition guarantees that ran Φ(1) ∩ Φ^(1)(1)(ker Φ(1)) = {0}, it follows that y₁ ∈ ker Φ^(1)(1). To complete our demonstration that ran Φ̂(1) ∩ ker Φ̂(1) = {0}, we thus need to show that ker Φ(1) ∩ ker Φ^(1)(1) = {0}. For this we use the compactness of Φ₁, ..., Φ_p, which implies that Φ(1) is Fredholm with index zero. This means that ker Φ(1) and (ran Φ(1))^⊥ have equal and finite dimension. Since H = ran Φ(1) ⊕ (ran Φ(1))^⊥ and H = ran Φ(1) ⊕ Φ^(1)(1)(ker Φ(1)), we know that the dimensions of (ran Φ(1))^⊥ and Φ^(1)(1)(ker Φ(1)) are equal (Kress, 1999, p. 62). Therefore, the dimensions of ker Φ(1) and Φ^(1)(1)(ker Φ(1)) are equal and finite. It follows that there cannot exist a nonzero element common to both ker Φ(1) and ker Φ^(1)(1), because if such an element did exist then we would have dim Φ^(1)(1)(ker Φ(1)) < dim ker Φ(1). Thus ker Φ(1) ∩ ker Φ^(1)(1) = {0}, and hence ran Φ̂(1) ∩ ker Φ̂(1) = {0} as claimed.
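The Schur complement identity D⁺ = Φ(z) used above has a concrete finite-dimensional analogue: the determinant of id − zΦ̂ for the companion operator equals det Φ(z), so the companion form is noninvertible at exactly the points where Φ(z) is. A quick check with random matrices and p = 2 (the matrices and evaluation points are arbitrary illustrations, not from the paper):

```python
import numpy as np

# Finite-dimensional check of the companion-form argument: for p = 2 the
# companion operator of (4.28) satisfies det(I - z*comp) = det Phi(z),
# mirroring the Schur complement identity D+ = Phi(z) in the proof.
rng = np.random.default_rng(0)
n = 3
Phi1 = rng.normal(size=(n, n))
Phi2 = rng.normal(size=(n, n))

comp = np.block([[Phi1, Phi2],
                 [np.eye(n), np.zeros((n, n))]])

for z in (0.3, 0.7 + 0.2j, 1.0):
    lhs = np.linalg.det(np.eye(2 * n) - z * comp)             # det of companion pencil
    rhs = np.linalg.det(np.eye(n) - z * Phi1 - z**2 * Phi2)   # det Phi(z)
    print(np.isclose(lhs, rhs))  # True at every z
```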

It remains only to show that H^p = ran Φ̂(1) + ker Φ̂(1). Pick arbitrary elements x = (x₁, 0, ..., 0) ∈ H^p and y = (0, y₂, ..., y_p) ∈ H^p. It is enough to show that x ∈ ran Φ̂(1) + ker Φ̂(1) and y ∈ ran Φ̂(1) + ker Φ̂(1). Under our condition H = ran Φ(1) ⊕ Φ^(1)(1)(ker Φ(1)) we can find v ∈ H and w ∈ ker Φ(1) such that x₁ = Φ(1)(v) − Φ^(1)(1)(w). Observe that

x = (Φ(1)(v) − Φ^(1)(1)(w) − w, −w, ..., −w) + (w, w, ..., w).   (4.41)

In view of (4.30), the second vector on the right-hand side of (4.41) belongs to ker Φ̂(1). The first vector on the right-hand side of (4.41) belongs to ran Φ̂(1), as can be seen by verifying that

(Φ(1)(v) − Φ^(1)(1)(w) − w, −w, ..., −w) = Φ̂(1)(v − w, v − 2w, ..., v − pw).   (4.42)

Thus x ∈ ran Φ̂(1) + ker Φ̂(1). Observe next that

y = Φ̂(1)(0, y₂, y₂ + y₃, ..., y₂ + ⋯ + y_p) + (Σ_{k=2}^p Φ_k(y₂ + ⋯ + y_k), 0, ..., 0).   (4.43)

This shows that y can be written as the sum of a vector in ran Φ̂(1) and a vector of the form x = (x₁, 0, ..., 0) ∈ H^p. Vectors of the latter sort were already shown to belong to ran Φ̂(1) + ker Φ̂(1), so we have y ∈ ran Φ̂(1) + ker Φ̂(1), and may conclude that H^p = ran Φ̂(1) + ker Φ̂(1).

Remark 4.1. An earlier version of this paper by the first two authors contained a special case of Theorem 4.1 allowing for only first order dynamics with a compact self-adjoint autoregressive operator, proved by applying the spectral theorem. An anonymous referee, to whom we are indebted, sketched out an alternative proof not involving the spectral theorem that allowed the self-adjoint condition to be dropped. The geometric reformulation of the Johansen I(1) condition for p = 1 was suggested by this referee and also appears in an unpublished manuscript of the first and third authors (Beare and Seo, 2017), posted on the arXiv repository in January 2017, while the current paper was under initial review. Some of the material in that manuscript has been incorporated into our revision of this paper. The proof of Theorem 4.1 given above extends the proof suggested by the referee by relaxing either the compactness condition or the assumption of first order dynamics.

Remark 4.2. Our proof of Theorem 4.1 for p > 1 hinges critically on the fact that no compactness condition is imposed when p = 1. This is because the operator Φ̂ in the companion form is always noncompact when p > 1 and H is infinite dimensional, even if the operators Φ₁, ..., Φ_p are compact. To see why, observe that the block representation of Φ̂ in (4.28) contains p − 1 identity operators. These identity operators are compact if and only if H is finite dimensional (Kress, 1999, Thm. 2.19), and thus Φ̂ too is compact if and only if H is finite dimensional. Therefore, in order to apply results proved for p = 1 to the companion form when H is infinite dimensional, those results should not require compactness of Φ₁. The lack of a compactness condition complicates our proof of Theorem 4.1 for p = 1 when we verify that Γ ∈ L_{ran Φ(1)} has spectral radius less than one. If Γ were compact this would only require showing that the operator Γ − z id_{ran Φ(1)} is injective whenever |z| ≥ 1, since the nonzero elements of the spectrum of a compact operator are eigenvalues and may accumulate only at zero (Conway, 1990, Thm. VII.7.1). Without compactness, verifying that Γ has spectral radius less than one requires additional work: we must show that, for some η > 0, the operator Γ − z id_{ran Φ(1)} is both injective and surjective whenever |z| > 1/(1 + η).

Remark 4.3. In our proof that the companion form satisfies ran Φ̂(1) ∩ ker Φ̂(1) = {0}, it becomes apparent that we require

ker Φ(1) ∩ ker Φ^(1)(1) = {0}.   (4.44)

We show in our proof that (4.44) is implied by the Johansen I(1) condition when Φ₁, ..., Φ_p are compact. If p = 1 then (4.44) is always true, since Φ^(1)(1) = Φ(1) − id_H. However, if p > 1 and we do not insist on compactness of Φ₁, ..., Φ_p, then (4.44) may be violated even when the Johansen I(1) condition is satisfied. As an example, let A ∈ L_H be a Fredholm operator with index one. Let a₁, ..., a_n ∈ H be a basis for ker A, where n = dim ker A < ∞ and we assume n ≥ 2. Let b₁, ..., b_{n−1} ∈ H be a basis for (ran A)^⊥, which has dimension n − 1 since A has index one. Let B ∈ L_H be the unique operator satisfying B(a_i) = b_i for each i = 1, ..., n − 1, B(a_n) = 0, and B(x) = 0 for each x ∈ (ker A)^⊥. We now have ker A ∩ ker B equal to the linear span of a_n, and B(ker A) = (ran A)^⊥, the latter condition implying that H = ran A ⊕ B(ker A). Finally, set p = 2 and let Φ₁ = 2 id_H − 2A + B and Φ₂ = A − B − id_H. Then Φ(1) = A and Φ^(1)(1) = B, and we find that the Johansen I(1) condition is satisfied while (4.44) is violated. Examples such as these are excluded by requiring Φ₁, ..., Φ_p to be compact, because this forces Φ(1) to be Fredholm with index zero.

Remark 4.4. For simplicity we have considered purely stochastic processes in Theorem 4.1. The inclusion of a deterministic component raises no significant difficulties not already present in the finite dimensional case. Let (δ_t, t ∈ Z) be a sequence in H bounded in norm by some polynomial in t. Suppose we augment our law of motion with a deterministic component by replacing (4.17) with

X_t = δ_t + Σ_{j=1}^p Φ_j(X_{t−j}) + ε_t.   (4.45)

Define

τ₀ = Σ_{k=0}^∞ Ψ_k(δ_{−k}),   τ_t = Ψ(Σ_{s=1}^t δ_s) + Σ_{k=0}^∞ Ψ_k(δ_{t−k}),   t ≥ 1,   (4.46)

where the operators Ψ_k ∈ L_H are the coefficients associated with the standard linear process ν_t = Σ_{k=0}^∞ Ψ_k(ε_{t−k}). These coefficients decay exponentially in norm, so the polynomial bound on the norm of the deterministic component ensures that the two series in (4.46) converge in H. A simple modification to the proof of Theorem 4.1 in which ε_t is replaced by δ_t + ε_t shows that the counterpart to (4.18) when our law of motion is given by (4.45) is

X_t = X₀ − ν₀ − τ₀ + Ψ(Σ_{s=1}^t ε_s) + ν_t + τ_t.   (4.47)

In the special case where the deterministic component is constant, so that δ_t = δ for all t ∈ Z, we may rewrite (4.47) as

X_t = X₀ − ν₀ + Ψ(Σ_{s=1}^t ε_s) + ν_t + tΨ(δ).   (4.48)
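For a constant deterministic component, (4.48) says that the drift accumulates only through Ψ, that is, along the attractor space. A two-dimensional sketch (the diagonal Φ₁ and the particular numbers are illustrative assumptions, not from the paper):

```python
import numpy as np

# Two-dimensional sketch of Remark 4.4 with a constant deterministic term
# delta: Phi1 has a unit eigenvalue (the I(1) direction) and a stable one.
# Setting epsilon_t = 0 isolates the deterministic part of (4.45); per
# (4.48) it grows like t * Psi(delta), where Psi = diag(1, 0) is the
# projection onto ker(id - Phi1) along ran(id - Phi1).
Phi1 = np.diag([1.0, 0.5])
delta = np.array([2.0, 3.0])
Psi = np.diag([1.0, 0.0])

x = np.zeros(2)
T = 1000
for t in range(T):
    x = delta + Phi1 @ x        # law of motion (4.45) with epsilon_t = 0

trend = T * (Psi @ delta)       # predicted dominant term, t * Psi(delta)
print(x[0] == trend[0])         # True: unit-root coordinate is exactly t*delta_1
print(abs(x[1]) < 7.0)          # stable coordinate stays bounded (tends to 6)
```

The stable coordinate converges to δ₂/(1 − 0.5) = 6 rather than trending, so the linear trend tΨ(δ) lives entirely in the attractor space, as (4.48) asserts.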

5 Numerical illustration

We conclude by providing a numerical example of a cointegrated autoregressive Hilbertian process of order p = 1. We take H to be the complexification of the vector space of (Lebesgue equivalence classes of) square integrable real valued functions on the unit interval, equipped with the inner product ⟨x, y⟩ = ∫₀¹ x̄(u)y(u) du. Our innovation sequence ε consists of independent Brownian bridges, which we simulated numerically by exploiting their Karhunen-Loève representation as a randomly weighted sum of sinusoids. The autoregressive operator Φ₁ will be compact and self-adjoint. To fully specify it we first choose as eigenvectors a sequence of shifted Legendre polynomials forming an orthonormal basis for H:

v₁(u) = 1,   (5.1)
v₂(u) = √3(2u − 1),   (5.2)
v_n(u) = (√(2n−1)/(n−1)) (√(2n−3)(2u − 1)v_{n−1}(u) − ((n−2)/√(2n−5)) v_{n−2}(u)),   n ≥ 3.   (5.3)

To these eigenvectors we assign eigenvalues

λ₁ = λ₂ = λ₃ = 1,   λ_n = 0.9^{n−3},   n ≥ 4.   (5.4)

The operator Φ₁ is given by its spectral decomposition in terms of these eigenvectors and eigenvalues,

Φ₁(x) = Σ_{n=1}^∞ λ_n ⟨v_n, x⟩ v_n,   x ∈ H.   (5.5)

Note that Φ₁ automatically satisfies the Johansen I(1) condition since it is self-adjoint. Since Φ₁ has all eigenvalues equal to one or with modulus no greater than 0.9, Theorem 4.1 tells us that it is possible to generate an I(1) process X satisfying the law of motion X_t = Φ₁(X_{t−1}) + ε_t. To do this we set X₀ = Σ_{k=0}^∞ Ψ_k(ε_{−k}), where Ψ_k = Σ_{j=k+1}^∞ (Φ₁^{j−1} − Φ₁^j). Note that Ψ_k admits the spectral decomposition

Ψ_k(x) = Σ_{n=4}^∞ λ_n^k ⟨v_n, x⟩ v_n,   x ∈ H,   (5.6)

which makes it straightforward to compute Ψ_k(ε_{−k}). Of course, computations of infinite series involve truncating after only finitely many terms; we truncated at 10³ terms throughout. Given our initialization X₀, we generated further realizations X₁, ..., X₂₀₀ by iterating on X_t = Φ₁(X_{t−1}) + ε_t. We plot some of them in Figure 5.1.

[Figure 5.1: Cointegrated process X_t at t = 0, 1, 2, 5, 10, 25, 50, 100, 200.]

Inspecting the plots for t = 0, 1, 2, 5, 10, 25, we see that X_t gradually evolves into something that looks a lot like a quadratic polynomial. This is because, by choosing Φ₁ to have three eigenvalues of one and associating those with the first three shifted Legendre polynomials, we have ensured that the kernel of id_H − Φ₁ is the space of quadratic polynomials. This is the attractor space for X. Note that the scales on the vertical axes in the different panels are not the same. The apparent smoothness of X₂₅ owes to the fact that the random walk component in the Beveridge-Nelson decomposition of X₂₅, a quadratic polynomial, has grown to the point where it dominates the transitory component.
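The simulation just described can be sketched at the level of basis coordinates. In the eigenbasis of Φ₁ the law of motion decouples into scalar recursions: three unit-root coordinates spanning the attractor space, and damped coordinates with eigenvalues 0.9^{n−3}. As a simplifying assumption, the sketch below draws independent N(0, 1/n²) innovation coordinates as a stand-in for the paper's Brownian bridge, and truncates at N = 100 basis functions; it is an illustration of the mechanism, not a reproduction of the paper's exact computation:

```python
import numpy as np

# Coordinate-level sketch of the Section 5 simulation. In the eigenbasis of
# Phi1 the recursion X_t = Phi1(X_{t-1}) + eps_t decouples coordinate by
# coordinate. Coordinates n <= 3 are random walks (the attractor space);
# coordinates n >= 4 are stationary AR(1)'s with coefficients 0.9^(n-3).
rng = np.random.default_rng(42)
N, T = 100, 200
n = np.arange(1, N + 1)
lam = np.where(n <= 3, 1.0, 0.9 ** (n - 3))   # eigenvalues (5.4)
sd = 1.0 / n                                  # stand-in innovation scales

# Stationary initialization of the transitory coordinates, playing the
# role of X_0 in the text: coordinates n >= 4 start in steady state.
x = np.zeros(N)
x[3:] = rng.normal(0.0, sd[3:] / np.sqrt(1 - lam[3:] ** 2))

path = np.empty((T + 1, N))
path[0] = x
for t in range(1, T + 1):
    x = lam * x + rng.normal(0.0, sd)
    path[t] = x

# The unit-root coordinates wander (variance grows with t) while the
# transitory coordinates stay O(1), so the curve hovers near the attractor
# space of quadratic polynomials spanned by v_1, v_2, v_3.
print(np.var(path[:, 0]) > np.var(path[:, 10]))  # True
```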

The process continues to roam far afield in the attractor space, with X₅₀ and X₁₀₀ both approximating large parabolas, before wandering back toward the origin. The plot of X₂₀₀ no longer resembles a quadratic polynomial, as the scale on the vertical axis in this panel has returned to that of the panel for t = 0, restoring prominence to the transitory component. That X₂₀₀ loosely resembles a cubic polynomial is not unexpected in view of the large eigenvalue of 0.9 associated with the fourth shifted Legendre polynomial.

References

Beare, B. K. (2017). The Chang-Kim-Park model of cointegrated density-valued time series cannot accommodate a stochastic trend. Econ Journal Watch, 14.

Beare, B. K. and Seo, W.-K. (2017). Representation of I(1) autoregressive Hilbertian processes. arXiv e-print, v1 [math.ST].

Beveridge, S. and Nelson, C. R. (1981). A new approach to decomposition of economic time series into permanent and transitory components with particular attention to measurement of the business cycle. Journal of Monetary Economics, 7.

Bosq, D. (2000). Linear Processes in Function Spaces. Springer, New York.

Cerovecki, C. and Hörmann, S. (2017). On the CLT for discrete Fourier transforms of functional time series. Journal of Multivariate Analysis, 154.

Chang, Y., Kim, C. S. and Park, J. Y. (2016). Nonstationarity in time series of state densities. Journal of Econometrics, 192.

Conway, J. B. (1990). A Course in Functional Analysis, 2nd ed. Springer, New York.

Davidson, J. (2009). When is a time series I(0)? Ch. 13 in Castle, J. and Shephard, N. (Eds.), The Methodology and Practice of Econometrics: A Festschrift for David F. Hendry. Oxford University Press, Oxford.

Engle, R. F. and Granger, C. W. J. (1987). Co-integration and error correction: representation, estimation and testing. Econometrica, 55.

Gabrys, R., Hörmann, S. and Kokoszka, P. (2013). Monitoring the intraday volatility pattern. Journal of Time Series Econometrics, 5.

Granger, C. W. J. (1981). Some properties of time series data and their use in econometric model specification. Journal of Econometrics, 16.

Granger, C. W. J. (1983). Cointegrated variables and error-correcting models. Mimeo, Department of Economics, UC San Diego.

Granger, C. W. J. (1986). Developments in the study of cointegrated variables. Oxford Bulletin of Economics and Statistics, 48.

Granger, C. W. J. and Hallman, J. (1991). Long memory series with attractors. Oxford Bulletin of Economics and Statistics, 53.

Hörmann, S. and Kokoszka, P. (2012). Functional time series. Ch. 7 in Rao, T. S., Rao, S. S. and Rao, C. R. (Eds.), Handbook of Statistics, Vol. 30: Time Series Analysis: Methods and Applications. North-Holland, Amsterdam.

Hörmann, S., Horváth, L. and Reeder, R. (2013). A functional version of the ARCH model. Econometric Theory, 29.

Horváth, L. and Kokoszka, P. (2012). Inference for Functional Data with Applications. Springer, New York.

Johansen, S. (1991). Estimation and hypothesis testing of cointegration vectors in Gaussian vector autoregressive models. Econometrica, 59.

Johansen, S. (1995). Likelihood-Based Inference in Cointegrated Vector Autoregressive Models. Oxford University Press, Oxford.

Johansen, S. (2009). Representation of cointegrated autoregressive processes with application to fractional processes. Econometric Reviews, 28.

Kargin, V. and Onatski, A. (2008). Curve forecasting by functional autoregression. Journal of Multivariate Analysis, 99.

Kokoszka, P. and Zhang, X. (2012). Functional prediction of intraday cumulative returns. Statistical Modelling, 12.

Kress, R. (1999). Linear Integral Equations, 2nd ed. Springer, Heidelberg.

Markus, A. S. (2012). Introduction to the Spectral Theory of Polynomial Operator Pencils. American Mathematical Society, Providence.

Phillips, P. C. B. and Solo, V. (1992). Asymptotics for linear processes. Annals of Statistics, 20.

Stock, J. H. and Watson, M. W. (1988). Testing for common trends. Journal of the American Statistical Association, 83.


EE731 Lecture Notes: Matrix Computations for Signal Processing EE731 Lecture Notes: Matrix Computations for Signal Processing James P. Reilly c Department of Electrical and Computer Engineering McMaster University September 22, 2005 0 Preface This collection of ten

More information

A Test of Cointegration Rank Based Title Component Analysis.

A Test of Cointegration Rank Based Title Component Analysis. A Test of Cointegration Rank Based Title Component Analysis Author(s) Chigira, Hiroaki Citation Issue 2006-01 Date Type Technical Report Text Version publisher URL http://hdl.handle.net/10086/13683 Right

More information

The value of a problem is not so much coming up with the answer as in the ideas and attempted ideas it forces on the would be solver I.N.

The value of a problem is not so much coming up with the answer as in the ideas and attempted ideas it forces on the would be solver I.N. Math 410 Homework Problems In the following pages you will find all of the homework problems for the semester. Homework should be written out neatly and stapled and turned in at the beginning of class

More information

Recall the convention that, for us, all vectors are column vectors.

Recall the convention that, for us, all vectors are column vectors. Some linear algebra Recall the convention that, for us, all vectors are column vectors. 1. Symmetric matrices Let A be a real matrix. Recall that a complex number λ is an eigenvalue of A if there exists

More information

The following definition is fundamental.

The following definition is fundamental. 1. Some Basics from Linear Algebra With these notes, I will try and clarify certain topics that I only quickly mention in class. First and foremost, I will assume that you are familiar with many basic

More information

Overview of normed linear spaces

Overview of normed linear spaces 20 Chapter 2 Overview of normed linear spaces Starting from this chapter, we begin examining linear spaces with at least one extra structure (topology or geometry). We assume linearity; this is a natural

More information

2.2. Show that U 0 is a vector space. For each α 0 in F, show by example that U α does not satisfy closure.

2.2. Show that U 0 is a vector space. For each α 0 in F, show by example that U α does not satisfy closure. Hints for Exercises 1.3. This diagram says that f α = β g. I will prove f injective g injective. You should show g injective f injective. Assume f is injective. Now suppose g(x) = g(y) for some x, y A.

More information

GQE ALGEBRA PROBLEMS

GQE ALGEBRA PROBLEMS GQE ALGEBRA PROBLEMS JAKOB STREIPEL Contents. Eigenthings 2. Norms, Inner Products, Orthogonality, and Such 6 3. Determinants, Inverses, and Linear (In)dependence 4. (Invariant) Subspaces 3 Throughout

More information

Time Series Models and Inference. James L. Powell Department of Economics University of California, Berkeley

Time Series Models and Inference. James L. Powell Department of Economics University of California, Berkeley Time Series Models and Inference James L. Powell Department of Economics University of California, Berkeley Overview In contrast to the classical linear regression model, in which the components of the

More information

Vector Spaces. Vector space, ν, over the field of complex numbers, C, is a set of elements a, b,..., satisfying the following axioms.

Vector Spaces. Vector space, ν, over the field of complex numbers, C, is a set of elements a, b,..., satisfying the following axioms. Vector Spaces Vector space, ν, over the field of complex numbers, C, is a set of elements a, b,..., satisfying the following axioms. For each two vectors a, b ν there exists a summation procedure: a +

More information

Linear Algebra (part 1) : Vector Spaces (by Evan Dummit, 2017, v. 1.07) 1.1 The Formal Denition of a Vector Space

Linear Algebra (part 1) : Vector Spaces (by Evan Dummit, 2017, v. 1.07) 1.1 The Formal Denition of a Vector Space Linear Algebra (part 1) : Vector Spaces (by Evan Dummit, 2017, v. 1.07) Contents 1 Vector Spaces 1 1.1 The Formal Denition of a Vector Space.................................. 1 1.2 Subspaces...................................................

More information

DEFINABLE OPERATORS ON HILBERT SPACES

DEFINABLE OPERATORS ON HILBERT SPACES DEFINABLE OPERATORS ON HILBERT SPACES ISAAC GOLDBRING Abstract. Let H be an infinite-dimensional (real or complex) Hilbert space, viewed as a metric structure in its natural signature. We characterize

More information

About Grupo the Mathematical de Investigación Foundation of Quantum Mechanics

About Grupo the Mathematical de Investigación Foundation of Quantum Mechanics About Grupo the Mathematical de Investigación Foundation of Quantum Mechanics M. Victoria Velasco Collado Departamento de Análisis Matemático Universidad de Granada (Spain) Operator Theory and The Principles

More information

1. General Vector Spaces

1. General Vector Spaces 1.1. Vector space axioms. 1. General Vector Spaces Definition 1.1. Let V be a nonempty set of objects on which the operations of addition and scalar multiplication are defined. By addition we mean a rule

More information

Vector error correction model, VECM Cointegrated VAR

Vector error correction model, VECM Cointegrated VAR 1 / 58 Vector error correction model, VECM Cointegrated VAR Chapter 4 Financial Econometrics Michael Hauser WS17/18 2 / 58 Content Motivation: plausible economic relations Model with I(1) variables: spurious

More information

Fredholm Theory. April 25, 2018

Fredholm Theory. April 25, 2018 Fredholm Theory April 25, 208 Roughly speaking, Fredholm theory consists of the study of operators of the form I + A where A is compact. From this point on, we will also refer to I + A as Fredholm operators.

More information

Financial Econometrics

Financial Econometrics Financial Econometrics Long-run Relationships in Finance Gerald P. Dwyer Trinity College, Dublin January 2016 Outline 1 Long-Run Relationships Review of Nonstationarity in Mean Cointegration Vector Error

More information

On compact operators

On compact operators On compact operators Alen Aleanderian * Abstract In this basic note, we consider some basic properties of compact operators. We also consider the spectrum of compact operators on Hilbert spaces. A basic

More information

The Skorokhod reflection problem for functions with discontinuities (contractive case)

The Skorokhod reflection problem for functions with discontinuities (contractive case) The Skorokhod reflection problem for functions with discontinuities (contractive case) TAKIS KONSTANTOPOULOS Univ. of Texas at Austin Revised March 1999 Abstract Basic properties of the Skorokhod reflection

More information

Introduction to Empirical Processes and Semiparametric Inference Lecture 22: Preliminaries for Semiparametric Inference

Introduction to Empirical Processes and Semiparametric Inference Lecture 22: Preliminaries for Semiparametric Inference Introduction to Empirical Processes and Semiparametric Inference Lecture 22: Preliminaries for Semiparametric Inference Michael R. Kosorok, Ph.D. Professor and Chair of Biostatistics Professor of Statistics

More information

Linear Algebra: Matrix Eigenvalue Problems

Linear Algebra: Matrix Eigenvalue Problems CHAPTER8 Linear Algebra: Matrix Eigenvalue Problems Chapter 8 p1 A matrix eigenvalue problem considers the vector equation (1) Ax = λx. 8.0 Linear Algebra: Matrix Eigenvalue Problems Here A is a given

More information

Math Camp Lecture 4: Linear Algebra. Xiao Yu Wang. Aug 2010 MIT. Xiao Yu Wang (MIT) Math Camp /10 1 / 88

Math Camp Lecture 4: Linear Algebra. Xiao Yu Wang. Aug 2010 MIT. Xiao Yu Wang (MIT) Math Camp /10 1 / 88 Math Camp 2010 Lecture 4: Linear Algebra Xiao Yu Wang MIT Aug 2010 Xiao Yu Wang (MIT) Math Camp 2010 08/10 1 / 88 Linear Algebra Game Plan Vector Spaces Linear Transformations and Matrices Determinant

More information

Math 113 Winter 2013 Prof. Church Midterm Solutions

Math 113 Winter 2013 Prof. Church Midterm Solutions Math 113 Winter 2013 Prof. Church Midterm Solutions Name: Student ID: Signature: Question 1 (20 points). Let V be a finite-dimensional vector space, and let T L(V, W ). Assume that v 1,..., v n is a basis

More information

The Dirichlet-to-Neumann operator

The Dirichlet-to-Neumann operator Lecture 8 The Dirichlet-to-Neumann operator The Dirichlet-to-Neumann operator plays an important role in the theory of inverse problems. In fact, from measurements of electrical currents at the surface

More information

Scattered Data Interpolation with Polynomial Precision and Conditionally Positive Definite Functions

Scattered Data Interpolation with Polynomial Precision and Conditionally Positive Definite Functions Chapter 3 Scattered Data Interpolation with Polynomial Precision and Conditionally Positive Definite Functions 3.1 Scattered Data Interpolation with Polynomial Precision Sometimes the assumption on the

More information

Time Series Analysis. James D. Hamilton PRINCETON UNIVERSITY PRESS PRINCETON, NEW JERSEY

Time Series Analysis. James D. Hamilton PRINCETON UNIVERSITY PRESS PRINCETON, NEW JERSEY Time Series Analysis James D. Hamilton PRINCETON UNIVERSITY PRESS PRINCETON, NEW JERSEY & Contents PREFACE xiii 1 1.1. 1.2. Difference Equations First-Order Difference Equations 1 /?th-order Difference

More information

Econ 423 Lecture Notes: Additional Topics in Time Series 1

Econ 423 Lecture Notes: Additional Topics in Time Series 1 Econ 423 Lecture Notes: Additional Topics in Time Series 1 John C. Chao April 25, 2017 1 These notes are based in large part on Chapter 16 of Stock and Watson (2011). They are for instructional purposes

More information

A brief introduction to trace class operators

A brief introduction to trace class operators A brief introduction to trace class operators Christopher Hawthorne December 2015 Contents 1 Introduction 1 2 Preliminaries 1 3 Trace class operators 2 4 Duals 8 1 Introduction The trace is a useful number

More information

1 Math 241A-B Homework Problem List for F2015 and W2016

1 Math 241A-B Homework Problem List for F2015 and W2016 1 Math 241A-B Homework Problem List for F2015 W2016 1.1 Homework 1. Due Wednesday, October 7, 2015 Notation 1.1 Let U be any set, g be a positive function on U, Y be a normed space. For any f : U Y let

More information

Tangent spaces, normals and extrema

Tangent spaces, normals and extrema Chapter 3 Tangent spaces, normals and extrema If S is a surface in 3-space, with a point a S where S looks smooth, i.e., without any fold or cusp or self-crossing, we can intuitively define the tangent

More information

Linear Passive Stationary Scattering Systems with Pontryagin State Spaces

Linear Passive Stationary Scattering Systems with Pontryagin State Spaces mn header will be provided by the publisher Linear Passive Stationary Scattering Systems with Pontryagin State Spaces D. Z. Arov 1, J. Rovnyak 2, and S. M. Saprikin 1 1 Department of Physics and Mathematics,

More information

Supplemental Material for KERNEL-BASED INFERENCE IN TIME-VARYING COEFFICIENT COINTEGRATING REGRESSION. September 2017

Supplemental Material for KERNEL-BASED INFERENCE IN TIME-VARYING COEFFICIENT COINTEGRATING REGRESSION. September 2017 Supplemental Material for KERNEL-BASED INFERENCE IN TIME-VARYING COEFFICIENT COINTEGRATING REGRESSION By Degui Li, Peter C. B. Phillips, and Jiti Gao September 017 COWLES FOUNDATION DISCUSSION PAPER NO.

More information

A Concise Course on Stochastic Partial Differential Equations

A Concise Course on Stochastic Partial Differential Equations A Concise Course on Stochastic Partial Differential Equations Michael Röckner Reference: C. Prevot, M. Röckner: Springer LN in Math. 1905, Berlin (2007) And see the references therein for the original

More information

covariance function, 174 probability structure of; Yule-Walker equations, 174 Moving average process, fluctuations, 5-6, 175 probability structure of

covariance function, 174 probability structure of; Yule-Walker equations, 174 Moving average process, fluctuations, 5-6, 175 probability structure of Index* The Statistical Analysis of Time Series by T. W. Anderson Copyright 1971 John Wiley & Sons, Inc. Aliasing, 387-388 Autoregressive {continued) Amplitude, 4, 94 case of first-order, 174 Associated

More information

A Randomized Algorithm for the Approximation of Matrices

A Randomized Algorithm for the Approximation of Matrices A Randomized Algorithm for the Approximation of Matrices Per-Gunnar Martinsson, Vladimir Rokhlin, and Mark Tygert Technical Report YALEU/DCS/TR-36 June 29, 2006 Abstract Given an m n matrix A and a positive

More information

Darmstadt Discussion Papers in Economics

Darmstadt Discussion Papers in Economics Darmstadt Discussion Papers in Economics The Effect of Linear Time Trends on Cointegration Testing in Single Equations Uwe Hassler Nr. 111 Arbeitspapiere des Instituts für Volkswirtschaftslehre Technische

More information

HILBERT SPACES AND THE RADON-NIKODYM THEOREM. where the bar in the first equation denotes complex conjugation. In either case, for any x V define

HILBERT SPACES AND THE RADON-NIKODYM THEOREM. where the bar in the first equation denotes complex conjugation. In either case, for any x V define HILBERT SPACES AND THE RADON-NIKODYM THEOREM STEVEN P. LALLEY 1. DEFINITIONS Definition 1. A real inner product space is a real vector space V together with a symmetric, bilinear, positive-definite mapping,

More information

Kernel Method: Data Analysis with Positive Definite Kernels

Kernel Method: Data Analysis with Positive Definite Kernels Kernel Method: Data Analysis with Positive Definite Kernels 2. Positive Definite Kernel and Reproducing Kernel Hilbert Space Kenji Fukumizu The Institute of Statistical Mathematics. Graduate University

More information

arxiv: v2 [math.pr] 27 Oct 2015

arxiv: v2 [math.pr] 27 Oct 2015 A brief note on the Karhunen-Loève expansion Alen Alexanderian arxiv:1509.07526v2 [math.pr] 27 Oct 2015 October 28, 2015 Abstract We provide a detailed derivation of the Karhunen Loève expansion of a stochastic

More information

Numerically Stable Cointegration Analysis

Numerically Stable Cointegration Analysis Numerically Stable Cointegration Analysis Jurgen A. Doornik Nuffield College, University of Oxford, Oxford OX1 1NF R.J. O Brien Department of Economics University of Southampton November 3, 2001 Abstract

More information

Supplementary Notes March 23, The subgroup Ω for orthogonal groups

Supplementary Notes March 23, The subgroup Ω for orthogonal groups The subgroup Ω for orthogonal groups 18.704 Supplementary Notes March 23, 2005 In the case of the linear group, it is shown in the text that P SL(n, F ) (that is, the group SL(n) of determinant one matrices,

More information

The Canonical Gaussian Measure on R

The Canonical Gaussian Measure on R The Canonical Gaussian Measure on R 1. Introduction The main goal of this course is to study Gaussian measures. The simplest example of a Gaussian measure is the canonical Gaussian measure P on R where

More information

Dot Products. K. Behrend. April 3, Abstract A short review of some basic facts on the dot product. Projections. The spectral theorem.

Dot Products. K. Behrend. April 3, Abstract A short review of some basic facts on the dot product. Projections. The spectral theorem. Dot Products K. Behrend April 3, 008 Abstract A short review of some basic facts on the dot product. Projections. The spectral theorem. Contents The dot product 3. Length of a vector........................

More information

A TIME SERIES PARADOX: UNIT ROOT TESTS PERFORM POORLY WHEN DATA ARE COINTEGRATED

A TIME SERIES PARADOX: UNIT ROOT TESTS PERFORM POORLY WHEN DATA ARE COINTEGRATED A TIME SERIES PARADOX: UNIT ROOT TESTS PERFORM POORLY WHEN DATA ARE COINTEGRATED by W. Robert Reed Department of Economics and Finance University of Canterbury, New Zealand Email: bob.reed@canterbury.ac.nz

More information

Introduction to Algorithmic Trading Strategies Lecture 3

Introduction to Algorithmic Trading Strategies Lecture 3 Introduction to Algorithmic Trading Strategies Lecture 3 Pairs Trading by Cointegration Haksun Li haksun.li@numericalmethod.com www.numericalmethod.com Outline Distance method Cointegration Stationarity

More information

1 Linear Difference Equations

1 Linear Difference Equations ARMA Handout Jialin Yu 1 Linear Difference Equations First order systems Let {ε t } t=1 denote an input sequence and {y t} t=1 sequence generated by denote an output y t = φy t 1 + ε t t = 1, 2,... with

More information

Statistical Signal Processing Detection, Estimation, and Time Series Analysis

Statistical Signal Processing Detection, Estimation, and Time Series Analysis Statistical Signal Processing Detection, Estimation, and Time Series Analysis Louis L. Scharf University of Colorado at Boulder with Cedric Demeure collaborating on Chapters 10 and 11 A TT ADDISON-WESLEY

More information

5 Measure theory II. (or. lim. Prove the proposition. 5. For fixed F A and φ M define the restriction of φ on F by writing.

5 Measure theory II. (or. lim. Prove the proposition. 5. For fixed F A and φ M define the restriction of φ on F by writing. 5 Measure theory II 1. Charges (signed measures). Let (Ω, A) be a σ -algebra. A map φ: A R is called a charge, (or signed measure or σ -additive set function) if φ = φ(a j ) (5.1) A j for any disjoint

More information

Hilbert Spaces. Hilbert space is a vector space with some extra structure. We start with formal (axiomatic) definition of a vector space.

Hilbert Spaces. Hilbert space is a vector space with some extra structure. We start with formal (axiomatic) definition of a vector space. Hilbert Spaces Hilbert space is a vector space with some extra structure. We start with formal (axiomatic) definition of a vector space. Vector Space. Vector space, ν, over the field of complex numbers,

More information

Cointegration and representation of integrated autoregressive processes in function spaces

Cointegration and representation of integrated autoregressive processes in function spaces arxiv:1712.08748v3 [math.fa] 7 Feb 2018 Cointegration and representation of integrated autoregressive processes in function spaces Won-Ki Seo Department of Economics, University of California, San Diego

More information

MET Workshop: Exercises

MET Workshop: Exercises MET Workshop: Exercises Alex Blumenthal and Anthony Quas May 7, 206 Notation. R d is endowed with the standard inner product (, ) and Euclidean norm. M d d (R) denotes the space of n n real matrices. When

More information

Trace Class Operators and Lidskii s Theorem

Trace Class Operators and Lidskii s Theorem Trace Class Operators and Lidskii s Theorem Tom Phelan Semester 2 2009 1 Introduction The purpose of this paper is to provide the reader with a self-contained derivation of the celebrated Lidskii Trace

More information