Representation of integrated autoregressive processes in Banach space

Won-Ki Seo
Department of Economics, University of California, San Diego

April 2, 2018

Very preliminary and incomplete. Comments welcome.

Abstract

In this paper, we extend the Granger-Johansen representation theory for I(1) and I(2) autoregressive processes to accommodate processes taking values in an arbitrary complex separable Banach space. To accomplish this goal, we obtain necessary and sufficient conditions for the inverse of a holomorphic Fredholm operator pencil to have a simple pole or a second order pole. Moreover, a closed-form expression of the Laurent expansion of the inverse around an isolated singularity is obtained. Applying these results, we obtain a suitable extension of the Granger-Johansen representation theory. Due to our closed-form expression of the inverse, we may fully characterize I(1) and I(2) solutions up to a term that depends on initial values.

1 Introduction

The so-called Granger-Johansen representation theory concerns the existence and representation of I(1) (and I(2)) solutions to a given autoregressive law of motion. Due to crucial contributions by Engle and Granger (1987), Johansen (1991, 1995), Schumacher (1991), and Faliva and Zoia (2010), we already have a well developed representation theory in finite dimensional Euclidean space. Among these, for reasons that will become apparent, the last two papers deserve further mention. They develop the Granger-Johansen theory within analytic function theory, which is rigorously established in mathematics; Schumacher (1991) obtains a necessary and sufficient condition for a matrix-valued function of a single complex variable, commonly called a matrix pencil, to have a simple pole at 1, and then applies this result to derive a representation theorem for I(1) autoregressive processes. The monograph of Faliva and Zoia (2010) provides a systematic reworking and extension of Schumacher (1991), and contains a representation theorem for I(2) autoregressive processes as well.

Recently, several authors have started to extend the Granger-Johansen theory to infinite dimensional Hilbert spaces. Beare et al. (2017) appears to be the first effort in this direction. They provide a version of the theorem for AR(1) processes taking values in an arbitrary complex separable Hilbert space, and then extend the result to the AR(p) case by resorting to the companion form. Two recent papers, Seo (2017) and Beare and Seo (2018), obtain a suitable extension of the Granger-Johansen theory in the spirit of Schumacher (1991) and Faliva and Zoia (2010). In particular, Beare and Seo (2018) is notable since the authors show in detail how the analytic function theory adopted in finite dimensional Euclidean space can be suitably applied to obtain the Granger-Johansen theory in an infinite dimensional Hilbert space; they considered an index-zero Fredholm operator pencil and generalized the results obtained by Schumacher (1991) and Faliva and Zoia (2010).

In this paper, we extend the Granger-Johansen representation theory to an arbitrary complex separable Banach space setting. A common feature of the previous studies is the Hilbert space setting. This may be a crucial limitation given the recent interest in time series taking values in Banach spaces, e.g. time series of continuous functions. We derive the Granger-Johansen representation theory in a Banach space without the help of the rich geometric structure of a Hilbert space. Our approach is similar to that of Beare and Seo (2018); we first obtain necessary and sufficient conditions for the inverse of a holomorphic Fredholm pencil to have a simple pole or a second order pole, and then apply these results to obtain our representation theory. However, there are several important differences. First, our study of the inversion of a holomorphic Fredholm pencil explicitly reveals that the necessary and sufficient conditions for a simple pole and a second order pole may be expressed differently depending on the choice of complementary subspaces, and our inversion theorems take all possible choices into account. This is not to make our study intentionally complicated; rather, it is inevitable as long as we consider a Banach space, where there is no canonical notion of a complementary subspace. Moreover, we provide a closed-form expression of the inverse by deriving a recursive formula that determines all the coefficients in the Laurent expansion of the inverse around an isolated singularity. Due to this closed-form expression, I(1) and I(2) solutions to a given autoregressive law of motion can be fully characterized up to a component depending on initial values.

The remainder of the paper is organized as follows. In Section 2, we review some essential mathematics. In Section 3, we study in detail the inversion of a holomorphic Fredholm pencil based on the analytic Fredholm theorem; our main results are obtained in this section. Section 4 contains a suitable extension of the Granger-Johansen representation theory as an application of our inversion theorems. Section 5 concludes.

2 Essential preliminaries

2.1 Review of Banach spaces

Let B be a separable Banach space over the complex plane $\mathbb{C}$ with norm $\|\cdot\|$. Moreover, let $L_B$ denote the Banach space of bounded linear operators on B with the usual operator norm $\|A\|_{L_B} = \sup_{\|x\|\le 1}\|Ax\|$. Let $\mathrm{id}_B \in L_B$ denote the identity map on B. Given a subspace $V \subseteq B$, let $A|_V$ denote the restriction of an operator $A \in L_B$ to V. Given $A \in L_B$, we define two important subspaces of B as follows:

$\ker A = \{x \in B : Ax = 0\}$,  $\operatorname{ran} A = \{Ax : x \in B\}$.

Let $V_1, V_2, \ldots, V_k$ be subspaces of B.

The algebraic sum of $V_1, V_2, \ldots, V_k$ is defined by

$\sum_{j=1}^k V_j = \{v_1 + v_2 + \cdots + v_k : v_j \in V_j \text{ for each } j\}.$

We say that B is the (internal) direct sum of $V_1, V_2, \ldots, V_k$, and write $B = \oplus_{j=1}^k V_j$, if $V_1, \ldots, V_k$ are closed subspaces satisfying $V_j \cap \sum_{j' \neq j} V_{j'} = \{0\}$ and $\sum_{j=1}^k V_j = B$.

For any $V \subseteq B$, we let $V^c \subseteq B$ denote a subspace (if one exists) such that $B = V \oplus V^c$. Such a subspace $V^c$ is called a complementary subspace of V. It turns out that a subspace V allows a complementary subspace $V^c$ if and only if there exists a bounded projection onto $V^c$ along V (Megginson, 2012, Theorem 3.2.11). In general, a complementary subspace is not uniquely determined.

Given $V \subseteq B$, the cosets of V are the sets $x + V = \{x + v : v \in V\}$, $x \in B$. The quotient space B/V is the vector space whose elements are the equivalence classes of the cosets of V; the equivalence relation is given by $x + V \sim y + V \iff x - y \in V$. When $V = \operatorname{ran} A$ for some $A \in L_B$, the dimension of B/V is called the defect of A.

2.2 Fredholm operators

An operator $A \in L_B$ is said to be a Fredholm operator if $\ker A$ and $B/\operatorname{ran} A$ are finite dimensional. The index of a Fredholm operator A is the integer given by $\dim(\ker A) - \dim(B/\operatorname{ran} A)$. It turns out that a bounded linear operator with finite defect has closed range (Abramovich and Aliprantis, 2002, Lemma 4.38); therefore $\operatorname{ran} A$ is closed if A is a Fredholm operator. Fredholm operators are invariant under compact perturbations: if A is a Fredholm operator and K is a compact operator, then A + K is a Fredholm operator of the same index. In this paper, we mainly consider Fredholm operators of index zero, so we let $F \subseteq L_B$ denote the collection of such operators.
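Since the non-uniqueness of complementary subspaces drives much of what follows, a small finite-dimensional sketch may help. The following Python snippet (an illustration only; the subspaces and matrices are ad hoc choices, not objects from the paper) exhibits two different complements of the same subspace of $\mathbb{R}^2$ and the two distinct bounded projections they induce.

```python
import numpy as np

# V = span{e1} in R^2. Both span{e2} and span{(1,1)} are complements of V;
# the projection onto V along each complement differs away from V.
# Projection onto V along span{e2} (the "orthogonal" choice):
P_orth = np.array([[1.0, 0.0],
                   [0.0, 0.0]])
# Projection onto V along span{(1,1)}: change basis to [e1 | (1,1)],
# keep the first coordinate, and map back.
T = np.array([[1.0, 1.0],
              [0.0, 1.0]])
P_obl = T @ np.diag([1.0, 0.0]) @ np.linalg.inv(T)

x = np.array([2.0, 3.0])
print(P_orth @ x)  # [2. 0.]
print(P_obl @ x)   # [-1. 0.] -- same range span{e1}, different projection
```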

2.3 Generalized inverse operators

Let $B_1$ and $B_2$ be Banach spaces and let $L_{B_1,B_2}$ denote the space of bounded linear operators from $B_1$ to $B_2$. In the subsequent discussion, we need the notion of a generalized inverse of $A \in L_{B_1,B_2}$. Suppose that $B_1 = \ker A \oplus (\ker A)^c$ and $B_2 = \operatorname{ran} A \oplus (\operatorname{ran} A)^c$. Given these direct sum conditions, the generalized inverse of A, denoted $A^g$, is defined as the unique linear extension of $(A|_{(\ker A)^c})^{-1}$ (defined on $\operatorname{ran} A$) to $B_2$. Specifically, $A^g$ is given by

$A^g = (A|_{(\ker A)^c})^{-1}(\mathrm{id}_{B_2} - P_{(\operatorname{ran} A)^c})$,  (2.1)

where $P_{V^c}$ denotes the bounded projection onto $V^c$ along V. It can then be shown that the generalized inverse $A^g$ has the following properties:

$AA^gA = A$, $A^gAA^g = A^g$, $AA^g = \mathrm{id}_{B_2} - P_{(\operatorname{ran} A)^c}$, $A^gA = P_{(\ker A)^c}$.

Since complementary subspaces are not uniquely determined, $A^g$ depends on our choice of them.
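To make (2.1) concrete, here is a minimal numerical sketch (finite-dimensional, with ad hoc matrices and $B_1 = B_2 = \mathbb{R}^2$; not part of the formal development) that builds a generalized inverse for a non-orthogonal choice of complements and checks the four displayed identities.

```python
import numpy as np

# A has ker A = span{e1} and ran A = span{e2}.
A = np.array([[0.0, 0.0],
              [0.0, 1.0]])
# Choices: (ker A)^c = span{(1,1)}, (ran A)^c = span{e1}.
# P_{(ran A)^c}: projection onto span{e1} along ran A = span{e2}.
P_ranc = np.diag([1.0, 0.0])
# (A restricted to (ker A)^c)^{-1} maps ran A back: (0,1) -> (1,1),
# and A^g annihilates (ran A)^c, giving the matrix below.
Ag = np.array([[0.0, 1.0],
               [0.0, 1.0]])   # = (A|_{(ker A)^c})^{-1} (id - P_{(ran A)^c})

# The four identities displayed after (2.1):
assert np.allclose(A @ Ag @ A, A)
assert np.allclose(Ag @ A @ Ag, Ag)
assert np.allclose(A @ Ag, np.eye(2) - P_ranc)
P_kerc = Ag @ A   # projection onto span{(1,1)} along ker A = span{e1}
assert np.allclose(P_kerc, np.array([[0.0, 1.0], [0.0, 1.0]]))
```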

2.4 Operator pencils

Let U be an open connected subset of $\mathbb{C}$. A map $A : U \to L_B$ is called an operator pencil. An operator pencil A is holomorphic at $z_0 \in U$ if the limit

$A^{(1)}(z_0) := \lim_{z \to z_0} \dfrac{A(z) - A(z_0)}{z - z_0}$  (2.2)

exists in the uniform operator topology. If A is holomorphic at every $z \in D \subseteq U$ for an open connected set D, we say that A is holomorphic on D. A holomorphic operator pencil A on D admits a Taylor series at every $z \in D$. An operator pencil A is said to be meromorphic on U if there exists a discrete set $U_0 \subset U$ such that $A : U \setminus U_0 \to L_B$ is holomorphic and the following Laurent expansion holds in a punctured neighborhood of each $z_0 \in U_0$:

$A(z) = \sum_{j=-m}^{-1} A_j (z - z_0)^j + \sum_{j=0}^{\infty} A_j (z - z_0)^j$,  (2.3)

where the first term is called the principal part, and the second term the holomorphic part, of the Laurent series. The finite positive integer m is called the order of the pole at $z_0$. When m = 1 (resp. m = 2), we simply say that A(z) has a simple pole (resp. second order pole) at $z_0$. If $A_{-m}, \ldots, A_{-1}$ are finite rank operators, we say that A(z) is finitely meromorphic at $z_0$. In addition, A(z) is said to be finitely meromorphic on U if it is finitely meromorphic at each of its poles.

The set of complex numbers $z \in U$ at which the operator A(z) is not invertible is called the spectrum of A, denoted $\sigma(A)$. It turns out that the spectrum is always a closed set (Markus, 2012, p. 56). If A(z) is a Fredholm operator of index zero for every $z \in U$, A is called an F-pencil.

2.5 The analytic Fredholm theorem

We now provide a crucial input, called the analytic Fredholm theorem, for the subsequent discussion.

Analytic Fredholm Theorem (Corollary 8.4 in Gohberg et al. (2013)). Let $A : U \to L_B$ be a holomorphic Fredholm operator pencil, and assume that A(z) is invertible for some $z \in U$. Then (i) $\sigma(A)$ is a discrete set. (ii) In a punctured neighborhood of each $z_0 \in \sigma(A)$,

$A(z)^{-1} = \sum_{j=-m}^{\infty} A_j (z - z_0)^j$,

where $A_0$ is a Fredholm operator of index zero and $A_{-m}, \ldots, A_{-1}$ are finite rank operators.

That is, the analytic Fredholm theorem implies that if the inverse of a holomorphic Fredholm pencil exists, it is finitely meromorphic.

2.6 Random elements of a Banach space

We briefly introduce Banach-valued random variables, called B-random variables. A more detailed discussion of this subject can be found in Bosq (2000, Chapter 1). We let $B'$ denote the topological dual of B. Let $(\Omega, \mathcal{F}, P)$ be an underlying probability triple. A B-random variable is defined as a measurable map $X : \Omega \to B$, where B is understood to be equipped with its Borel $\sigma$-field.

X is said to be integrable if $E\|X\| < \infty$. If X is integrable, there exists a unique element $EX \in B$ such that $E[f(X)] = f(EX)$ for all $f \in B'$. Let $L^2_B$ denote the space of B-random variables X such that $EX = 0$ and $E\|X\|^2 < \infty$.

2.7 I(1) and I(2) sequences in Banach space

Let $\varepsilon = (\varepsilon_t, t \in \mathbb{Z})$ be an independent and identically distributed sequence in $L^2_B$ such that $E\varepsilon_t = 0$ and $0 < E\|\varepsilon_t\|^2 < \infty$. In this paper, $\varepsilon$ is simply called a strong white noise. For some $t_0 \in \mathbb{Z} \cup \{-\infty\}$, let $X = (X_t, t \ge t_0)$ be a time series taking values in B satisfying

$X_t = \sum_{j=0}^{\infty} A_j \varepsilon_{t-j}$,  (2.4)

where $(A_j, j \ge 0)$ is a sequence in $L_B$ satisfying $\sum_{j=0}^{\infty} \|A_j\|_{L_B} < \infty$. We call the sequence $(X_t, t \ge t_0)$ a standard linear process; in this case $\sum_{j=0}^{\infty} A_j$ is convergent in $L_B$. We say a sequence in $L^2_B$ is I(0) if it is a standard linear process with $\sum_{j=0}^{\infty} A_j \neq 0$.

For $d \in \{1, 2\}$, let $X = (X_t, t \ge -d+1)$ be a sequence in $L^2_B$. We say X is I(d) if its d-th differences $\Delta^d X = (\Delta^d X_t, t \ge 1)$ are I(0).

3 Inversion of a holomorphic F-pencil around an isolated singularity

Throughout this section, we employ the following assumption.

Assumption 3.1. $A : U \to L_B$ is a holomorphic Fredholm pencil and $z_0 \in \sigma(A)$ is an isolated element.

Since A(z) is holomorphic, it admits the Taylor series around $z_0$

$A(z) = \sum_{j=0}^{\infty} A_j (z - z_0)^j$,  (3.1)

where $A_0 = A(z_0)$, $A_j = A^{(j)}(z_0)/j!$ for $j \ge 1$, and $A^{(j)}(z)$ denotes the j-th complex derivative of A(z).
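For intuition, consider the simplest case (an illustration we will reuse, not an assumption of the theory): the AR(1) pencil $A(z) = \mathrm{id}_B - z\Phi$ for some $\Phi \in L_B$. Around $z_0 = 1$, the Taylor series (3.1) terminates after two terms:

$A_0 = A(1) = \mathrm{id}_B - \Phi$, $A_1 = A^{(1)}(1) = -\Phi$, $A_j = 0$ for $j \ge 2$,

so in this case every object constructed below is built from $\mathrm{id}_B - \Phi$ and $\Phi$ alone.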

Furthermore, we know from the analytic Fredholm theorem that $N(z) := A(z)^{-1}$ admits the Laurent series expansion in a punctured neighborhood of $z_0$

$N(z) = \sum_{j=-m}^{-1} N_j (z - z_0)^j + \sum_{j=0}^{\infty} N_j (z - z_0)^j$, $1 \le m < \infty$.  (3.2)

Our first goal is to find necessary and sufficient conditions for m = 1 and m = 2. We then provide a recursive formula to obtain $N_j$ for $j \ge -m$. Before stating our main assumptions and results of this section, we provide some preliminary results. It can be shown that any Fredholm operator pencil satisfying Assumption 3.1 is in fact an F-pencil.

Lemma 3.1. Under Assumption 3.1, $A : U \to L_B$ is an F-pencil.

Proof. Since $z_0$ is an isolated element of $\sigma(A)$, there exists some point in U at which the operator pencil is invertible. It turns out that the index of A(z) does not depend on $z \in U$ given that U is connected, and Fredholm operators of nonzero index are not invertible (Kaballo, 2012, Section 2). Therefore A(z) has index zero for every $z \in U$.

In view of Lemma 3.1, it may be deduced that the analytic Fredholm theorem provided in Section 2.5 is in fact a result about F-pencils only. The following is an important observation implied by Assumption 3.1.

Lemma 3.2. Under Assumption 3.1, (i) ran A(z) allows a complementary subspace for every $z \in U$; (ii) ker A(z) allows a complementary subspace for every $z \in U$; (iii) for any finite dimensional subspace V, ran A(z) + V allows a complementary subspace for every $z \in U$.

Proof. (i): Since A(z) is a Fredholm operator, ran A(z) is closed and B/ran A(z) is finite dimensional. Given any closed subspace V, it turns out that V allows a complementary subspace if B/V is finite dimensional (Megginson, 2012, Theorem 3.2.18). Thus (i) is proved. (ii): Every finite dimensional subspace allows a complementary subspace (Megginson, 2012, Theorem 3.2.18).

(iii): Since ran A(z) is closed and V is finite dimensional, their algebraic sum ran A(z) + V is a closed subspace, and B/(ran A(z) + V) is finite dimensional.

In a Hilbert space, every closed subspace allows a complementary subspace, and furthermore we can always fix it to be the orthogonal complement. We therefore know that ran A(z) and ran A(z) + V allow complementary subspaces in a Hilbert space whenever ran A(z) is closed. However, in a Banach space, closedness of a subspace is not sufficient for the existence of a complementary subspace. The reader is referred to Megginson (2012, pp. 31-32) for a detailed discussion of this subject.

3.1 Simple poles of holomorphic F-pencil inverses

Due to Lemma 3.2, we know that $\operatorname{ran} A_0$ and $\ker A_0$ are complemented, meaning that we may find complementary subspaces, as well as the associated bounded projections. Depending on our choice of complementary subspaces, we may also define the corresponding generalized inverse of $A_0$ as in (2.1). To simplify expressions, we let

$\mathbb{1}_{j=0} = \mathrm{id}_B$ if $j = 0$ and $\mathbb{1}_{j=0} = 0$ otherwise,

$G_j(l, m) = \sum_{k=-m}^{j-1} N_k A_{j+l-k}$, $l = 0, 1, 2, \ldots$,

$R_0 = \operatorname{ran} A_0$, $K_0 = \ker A_0$, $K_1 = \{x \in K_0 : A_1 x \in R_0\}$,

$R_0^c$: a complementary subspace of $\operatorname{ran} A_0$,
$K_0^c$: a complementary subspace of $\ker A_0$,
$P_{R_0^c}$: the bounded projection onto $R_0^c$ along $R_0$,
$P_{K_0^c}$: the bounded projection onto $K_0^c$ along $K_0$,
$S_{R_0^c} = P_{R_0^c} A_1|_{K_0} : K_0 \to R_0^c$,
$(A_0)^g_{\{R_0^c, K_0^c\}}$: the generalized inverse of $A_0$,  (3.3)

where $R_0^c$ and $K_0^c$ depend on our choice. We therefore need to bear in mind that $P_{R_0^c}$, $P_{K_0^c}$ and $S_{R_0^c}$ may be defined differently depending on our choice of complementary subspaces.
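Continuing the AR(1) illustration $A(z) = \mathrm{id}_B - z\Phi$ introduced above — again only for intuition: here $A_0 = \mathrm{id}_B - \Phi$ and $A_1 = -\Phi$, and any $x \in K_0 = \ker(\mathrm{id}_B - \Phi)$ satisfies $\Phi x = x$, so that $S_{R_0^c} x = -P_{R_0^c} x$. Since $\dim(R_0^c) = \dim(K_0) < \infty$ for an F-pencil, invertibility of $S_{R_0^c}$ amounts to $K_0 \cap R_0 = \{0\}$, i.e., to the direct sum condition $B = \operatorname{ran}(\mathrm{id}_B - \Phi) \oplus \ker(\mathrm{id}_B - \Phi)$, which is the form that condition (ii) of Proposition 3.1 below takes in this case.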

Given a specific choice of $R_0^c$ and $K_0^c$, the operators $P_{R_0^c}$, $P_{K_0^c}$ and $S_{R_0^c}$ are uniquely determined. The subscript $\{R_0^c, K_0^c\}$ of a generalized inverse indicates that it depends on our choice of $R_0^c$ and $K_0^c$. We provide another useful lemma.

Lemma 3.3. Suppose that Assumption 3.1 is satisfied. Then invertibility (or noninvertibility) of $S_{R_0^c}$ does not depend on the choice of $R_0^c$.

Proof. Let V and W be two different choices of $R_0^c$. It is trivial to show that

$\ker S_V = \ker S_W = K_1$.  (3.4)

Moreover, we know from Lemma 3.1 that a pencil A(z) satisfying Assumption 3.1 is in fact an F-pencil, which implies that $\dim(B/\operatorname{ran} A_0) = \dim(\ker A_0) < \infty$. Since a complementary subspace of $\operatorname{ran} A_0$ is isomorphic to $B/\operatorname{ran} A_0$ (Megginson, 2012, Corollary 3.2.16), we have

$\dim(V) = \dim(W) = \dim(K_0) < \infty$.  (3.5)

Any injective linear map between finite dimensional vector spaces of the same dimension is also bijective. Therefore, in view of (3.5), $K_1 = \{0\}$ is a necessary and sufficient condition for $S_V$ (and $S_W$) to be invertible. Hence if either one is invertible (resp. noninvertible), so is the other.

We next provide necessary and sufficient conditions for $A(z)^{-1}$ to have a simple pole at $z_0$, together with a closed-form expression in a punctured neighborhood of $z_0$.

Proposition 3.1. Suppose that Assumption 3.1 is satisfied. Then the following conditions are equivalent to each other.

(i) m = 1 in the Laurent series expansion (3.2).

(ii) $B = R_0 \oplus A_1 K_0$.

(iii) For any choice of $R_0^c$, $S_{R_0^c} : K_0 \to R_0^c$ is invertible.

(iv) For some choice of $R_0^c$, $S_{R_0^c} : K_0 \to R_0^c$ is invertible.

Under any of these conditions and any choice of $R_0^c$ and $K_0^c$, the coefficients $(N_j, j \ge -1)$ in (3.2) are given by the following recursive formula:

$N_{-1} = S_{R_0^c}^{-1} P_{R_0^c}$,  (3.6)

$N_j = (\mathbb{1}_{j=0} - G_j(0,1))(A_0)^g_{\{R_0^c,K_0^c\}}(\mathrm{id}_B - A_1 S_{R_0^c}^{-1} P_{R_0^c}) - G_j(1,1) S_{R_0^c}^{-1} P_{R_0^c}$, $j \ge 0$,  (3.7)

where each $N_j$ is understood as a map from B to B without restriction of the codomain.

Proof. We first show the claimed equivalence between conditions (i)-(iv), and then verify the recursive formula.

Equivalence between (i)-(iv): Due to the analytic Fredholm theorem, $A(z)^{-1}$ admits the Laurent series expansion (3.2) in a punctured neighborhood of $z_0$. Moreover, A(z) is holomorphic and thus admits the Taylor series (3.1). Combining (3.1) and (3.2), we obtain the expansion of the identity $\mathrm{id}_B = A(z)^{-1} A(z)$ as follows:

$\mathrm{id}_B = \sum_{k=-m}^{\infty} \Big( \sum_{j=0}^{m+k} N_{k-j} A_j \Big) (z - z_0)^k$.  (3.8)

Since (iii) $\iff$ (iv) is deduced from Lemma 3.3, we demonstrate the equivalence of (i)-(iv) by showing (ii) $\Rightarrow$ (i), (i) $\Rightarrow$ (iv), and (iv) $\Rightarrow$ (ii).

We first show that (ii) $\Rightarrow$ (i). Suppose that m > 1. Collecting the coefficients of $(z-z_0)^{-m}$ and $(z-z_0)^{-m+1}$ in (3.8), we obtain

$N_{-m} A_0 = 0$,  (3.9)

$N_{-m+1} A_0 + N_{-m} A_1 = 0$.  (3.10)

Equation (3.9) implies that $N_{-m} R_0 = \{0\}$, and then (3.10) implies that $N_{-m} A_1 K_0 = \{0\}$. Therefore, if the direct sum decomposition (ii) is true, we necessarily have $N_{-m} = 0$. Note that $N_{-m} = 0$ then holds for every $2 \le m < \infty$; we therefore conclude that m = 1, which proves (ii) $\Rightarrow$ (i).

We next show that (i) $\Rightarrow$ (iv). Collecting the coefficients of $(z-z_0)^{-1}$ and $(z-z_0)^0$ in (3.8) when m = 1, we have

$N_{-1} A_0 = 0$,  (3.11)

$N_{-1} A_1 + N_0 A_0 = \mathrm{id}_B$.  (3.12)

Since $A_0$ is a Fredholm operator, we know from Lemma 3.2 that $R_0$ allows a complementary subspace V, and there exists the associated bounded projection $P_V$. Equation (3.11) then implies that

$N_{-1}(\mathrm{id}_B - P_V) = 0$ and $N_{-1} = N_{-1} P_V$.  (3.13)

Moreover, (3.12) implies $\mathrm{id}_B|_{K_0} = N_{-1} A_1|_{K_0}$. In view of (3.13), it is apparent that

$\mathrm{id}_B|_{K_0} = N_{-1} S_V$.  (3.14)

Equation (3.14) implies that $S_V$ is an injection. Moreover, due to Lemma 3.1, we know $A_0 \in F$. Using the same arguments we used to establish (3.5), we obtain

$\dim(V) = \dim(B/R_0) = \dim(K_0) < \infty$.  (3.15)

Equations (3.14) and (3.15) together imply that $S_V : K_0 \to V$ is an injective linear map between finite dimensional vector spaces of the same dimension. Therefore, we conclude that $S_V : K_0 \to V$ is a bijection.

To show (iv) $\Rightarrow$ (ii), suppose that the direct sum condition (ii) is false. We first consider the case $R_0 \cap A_1 K_0 \neq \{0\}$. If there exists a nonzero element x in $R_0 \cap A_1 K_0$, then, choosing $y \in K_0$ with $A_1 y = x$, we have $S_{R_0^c} y = P_{R_0^c} x = 0$ for any choice of $R_0^c$, while $y \neq 0$; hence $S_{R_0^c}$ cannot be injective. We next consider the case $B \neq R_0 + A_1 K_0$ even though $R_0 \cap A_1 K_0 = \{0\}$ holds. In this case, $R_0 \oplus A_1 K_0$ is clearly a proper subspace of B. Since $R_0^c$ is a complementary subspace of $R_0$, it is deduced that

$\dim(A_1 K_0) < \dim(R_0^c)$.  (3.16)

Note that $S_{R_0^c}$ is the composition of $P_{R_0^c}$ with $A_1|_{K_0}$. By the rank-nullity theorem, $\dim(S_{R_0^c} K_0)$ is at most $\dim(A_1 K_0)$. In view of (3.16), this implies that $S_{R_0^c}$ cannot be surjective for any choice of $R_0^c$. Therefore, we conclude that (iv) $\Rightarrow$ (ii).

Recursive formula for $(N_j, j \ge -1)$: Fix V as a choice of $R_0^c$ and W as a choice of $K_0^c$. We first verify the claimed formulas (3.6) and (3.7) for this specific choice of complementary subspaces.

At first, we consider the claimed formula for $N_{-1}$. In our demonstration of (i) $\Rightarrow$ (iv) above, we obtained (3.14). Since the codomain of $S_V$ is restricted to V, (3.14) can be written as

$\mathrm{id}_B|_{K_0} = N_{-1}|_V S_V$.  (3.17)

Moreover, we know that $S_V : K_0 \to V$ is invertible. We therefore have $N_{-1}|_V = S_V^{-1}$, where we note that we still need to restrict the domain of $N_{-1}$ to V. Composing both sides of (3.17) with $P_V$, we obtain $N_{-1} P_V = S_V^{-1} P_V$. Recalling (3.13), which implies that $N_{-1} = N_{-1} P_V$, we have

$N_{-1} = S_V^{-1} P_V$.  (3.18)

Since the codomain of $S_V^{-1}$ is $K_0$, the map in (3.18) is the formula for $N_{-1}$ with restricted codomain. However, it can be understood as a map from B to B by composing both sides of (3.18) with a proper embedding.

Now we verify the recursive formula for $(N_j, j \ge 0)$. Collecting the coefficients of $(z-z_0)^j$ and $(z-z_0)^{j+1}$ in the identity expansion (3.8), the following can be shown:

$G_j(0,1) + N_j A_0 = \mathbb{1}_{j=0}$,  (3.19)

$G_j(1,1) + N_j A_1 + N_{j+1} A_0 = 0$.  (3.20)

Since $\mathrm{id}_B = (\mathrm{id}_B - P_V) + P_V$, $N_j$ can be written as the sum of $N_j(\mathrm{id}_B - P_V)$ and $N_j P_V$. We will obtain an explicit formula for each summand. Given complementary subspaces V and W, we may define $(A_0)^g_{\{V,W\}} : B \to W$. Since $A_0 (A_0)^g_{\{V,W\}} = \mathrm{id}_B - P_V$, (3.19) implies that

$N_j(\mathrm{id}_B - P_V) = \mathbb{1}_{j=0} (A_0)^g_{\{V,W\}} - G_j(0,1)(A_0)^g_{\{V,W\}}$.  (3.21)

Moreover, restricting the domain of both sides of (3.20) to $K_0$, we have

$G_j(1,1)|_{K_0} + N_j A_1|_{K_0} = 0$.  (3.22)

Since $N_j = N_j P_V + N_j(\mathrm{id}_B - P_V)$, it is easily deduced from (3.22) that

$N_j S_V = -G_j(1,1)|_{K_0} - N_j(\mathrm{id}_B - P_V) A_1|_{K_0}$.  (3.23)

Substituting (3.21) into (3.23), we obtain

$N_j S_V = -G_j(1,1)|_{K_0} - \mathbb{1}_{j=0} (A_0)^g_{\{V,W\}} A_1|_{K_0} + G_j(0,1)(A_0)^g_{\{V,W\}} A_1|_{K_0}$.  (3.24)

Since $S_V : K_0 \to V$ is invertible, it is deduced that $\operatorname{ran} S_V^{-1} = K_0$ and $S_V S_V^{-1} = \mathrm{id}_B|_V$. We therefore obtain the following equation from (3.24):

$N_j|_V = \big( -G_j(1,1) - \mathbb{1}_{j=0} (A_0)^g_{\{V,W\}} A_1 + G_j(0,1)(A_0)^g_{\{V,W\}} A_1 \big) S_V^{-1}$.  (3.25)

Composing both sides of (3.25) with $P_V$, we obtain an explicit formula for $N_j P_V$. Combining this result with (3.21), we obtain, after a little algebra, the formula (3.7) for $N_j$ in terms of $P_V$, $(A_0)^g_{\{V,W\}}$, $S_V$, $G_j(0,1)$, and $G_j(1,1)$. Of course, the resulting operator $N_j$ should be understood as a map from B to B.

The formula for each $N_j$ that we have obtained may seem to depend on our choice of complementary subspaces, especially through $P_V$, $S_V$ and $(A_0)^g_{\{V,W\}}$. However, if a Laurent series exists, it is unique. We could define the aforementioned operators differently by choosing different complementary subspaces, and then obtain a recursive formula for $(N_j, j \ge -1)$ in terms of those operators. However, such a newly obtained formula cannot differ from the one obtained under a fixed choice of complementary subspaces, due to the uniqueness of the Laurent series. Therefore, the recursive formula for $N_j$ derived in Proposition 3.1 does not depend on a specific choice of complementary subspaces.

3.2 Second order poles of holomorphic F-pencil inverses

To simplify expressions, we let

$R_1 = \operatorname{ran} A_0 + A_1 \ker A_0$,
$R_1^c$: a complementary subspace of $\operatorname{ran} A_0 + A_1 \ker A_0$,
$K_1^c$: a complementary subspace of $K_1$ in $K_0$,
$P_{R_1^c}$: the bounded projection onto $R_1^c$ along $R_1$,
$P_{K_1^c}$: the bounded projection of $K_0$ onto $K_1^c$ along $K_1$.  (3.26)

We know from Lemma 3.2 that $R_0$, $K_0$ and $R_1$ are complemented, so we may find complementary subspaces $R_0^c$, $K_0^c$, and $R_1^c$, as well as the bounded projections $P_{R_0^c}$, $P_{K_0^c}$ and $P_{R_1^c}$. Given $R_0^c$, $R_1^c$ is not uniquely determined in general. We require our choice to satisfy

$R_1^c \subseteq R_0^c$,  (3.27)

so that

$R_0^c = S_{R_0^c} K_0 \oplus R_1^c$,  (3.28)

$P_{R_0^c} P_{R_1^c} = P_{R_1^c} P_{R_0^c} = P_{R_1^c}$.  (3.29)

Given $R_0^c$, a choice of a complementary subspace satisfying (3.27) is always possible, and such a subspace is easily obtained.

Lemma 3.4. Suppose that Assumption 3.1 is satisfied. Given $R_0^c$, let $V_1$ be a specific choice of $R_1^c$. Then $P_{R_0^c} V_1 \subseteq R_0^c$ is also a complementary subspace of $R_1$.

Proof. Let V be a given choice of $R_0^c$. If $B = R_0 + A_1 K_0$, then $V_1 = \{0\}$ and the statement trivially holds. Now consider the case where $V_1$ is a nontrivial subspace. Since $R_0 + A_1 K_0 = R_0 \oplus P_V A_1 K_0$ holds, it is deduced that $B = R_0 \oplus P_V A_1 K_0 \oplus V_1$. This implies that $M := P_V A_1 K_0 \oplus V_1$ is a complementary subspace of $R_0$. Since $P_V B = P_V M$, clearly $P_V|_M : M \to V$ must be a surjection, so we have

$P_V M = V$.  (3.30)

Moreover, both M and V are complementary subspaces of $R_0$, and we know from Lemma 3.1 that $A_0 \in F$. It is then deduced, by arguments similar to those used to derive (3.5), that

$\dim(B/R_0) = \dim(V) = \dim(M)$.

Therefore, $P_V|_M : M \to V$ is a surjection between vector spaces of the same finite dimension, meaning that it is also an injection. We therefore obtain $P_V A_1 K_0 \cap P_V V_1 = \{0\}$, which implies that $P_V M = P_V A_1 K_0 \oplus P_V V_1$. Combining this with (3.30), it is deduced that

$B = R_0 \oplus P_V A_1 K_0 \oplus P_V V_1$.  (3.31)

Clearly, then, $P_V V_1$ is a complementary subspace of $R_1$.

Due to Lemma 3.4, we know how to make an arbitrary choice of $R_1^c$ satisfy the requirement (3.27). Therefore, in the subsequent discussion, we may simply assume that our choice of $R_1^c$ satisfies (3.27).
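Lemma 3.4 is constructive, and its recipe is easy to run in finite dimensions. The sketch below (ad hoc matrices on $B = \mathbb{R}^3$; purely illustrative, not an object from the paper) starts from a complement $V_1$ of $R_1$ that violates (3.27) and projects it into $R_0^c$ to obtain a complement that satisfies it.

```python
import numpy as np

# Setup: R0 = span{e3}, K0 = span{e1,e2}, A1 e1 = e3, A1 e2 = e1,
# so R1 = R0 + A1 K0 = span{e1, e3} and K1 = span{e1}.
P_V = np.diag([1.0, 1.0, 0.0])   # projection onto V = R0c = span{e1,e2} along R0
v1 = np.array([1.0, 1.0, 1.0])   # V1 = span{v1} complements R1, but v1 is not in V
w = P_V @ v1                     # Lemma 3.4: P_V V1 is again a complement of R1
# Check: e1, e3, w form a basis of R^3, and w = (1,1,0) lies in V, so (3.27) holds.
basis = np.column_stack([[1, 0, 0], [0, 0, 1], w])
assert np.linalg.matrix_rank(basis) == 3
```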

Under any choice of complementary subspaces satisfying (3.27), we define

$A_2^{\{R_0^c,K_0^c\}} = A_2 - A_1 (A_0)^g_{\{R_0^c,K_0^c\}} A_1$,

$S_{\{R_0^c,K_0^c,R_1^c\}} = P_{R_1^c} A_2^{\{R_0^c,K_0^c\}}|_{K_1} : K_1 \to R_1^c$,

where the subscripts again indicate the collection of complementary subspaces on which the operators depend. In this section, we consider the case $K_1 \neq \{0\}$. Then $S_{R_0^c}$ is not invertible, since $\ker S_{R_0^c} = K_1$. However, $S_{R_0^c}$ is a linear map between finite dimensional subspaces, so we can always define a generalized inverse as follows:

$(S_{R_0^c})^g_{\{R_1^c,K_1^c\}} = \big( S_{R_0^c}|_{K_1^c} \big)^{-1} (\mathrm{id}_B - P_{R_1^c})|_{R_0^c}$.  (3.32)

Before stating the main proposition of this section, we first establish the following preliminary result.

Lemma 3.5. Suppose that Assumption 3.1 is satisfied. Let V and $\tilde V$ be arbitrary choices of $R_0^c$, and let $V_1 \subseteq V$ and $\tilde V_1 \subseteq \tilde V$ be arbitrary choices of $R_1^c$. Then $\dim(V_1) = \dim(\tilde V_1) = \dim(K_1)$.

Proof. For V and $\tilde V$, we have the two operators $S_V : K_0 \to V$ and $S_{\tilde V} : K_0 \to \tilde V$. We established $\ker S_V = \ker S_{\tilde V} = K_1$ in (3.4). From Lemma 3.1 we know $A_0 \in F$, so it is easily deduced that

$\dim(V) = \dim(K_0) = \dim(S_V K_0) + \dim(K_1)$,  (3.33)

$\dim(\tilde V) = \dim(K_0) = \dim(S_{\tilde V} K_0) + \dim(K_1)$.  (3.34)

In each of (3.33) and (3.34), the first equality is deduced by the same argument as that used to derive (3.5), and the second equality is justified by the rank-nullity theorem. Moreover, the following direct sum decompositions hold:

$V = S_V K_0 \oplus V_1$,  (3.35)

$\tilde V = S_{\tilde V} K_0 \oplus \tilde V_1$.  (3.36)

To see why (3.35) and (3.36) are true, first note that $R_0 + A_1 K_0 = R_0 \oplus S_V K_0 = R_0 \oplus S_{\tilde V} K_0$. We thus have $B = R_0 \oplus S_V K_0 \oplus V_1 = R_0 \oplus S_{\tilde V} K_0 \oplus \tilde V_1$. These direct sum conditions imply that $S_V K_0 \oplus V_1$ and $S_{\tilde V} K_0 \oplus \tilde V_1$ are complementary subspaces of $R_0$. Since $V_1 \subseteq V$ and $\tilde V_1 \subseteq \tilde V$, (3.35) and (3.36) are established. It is now deduced from (3.35) and (3.36) that

$\dim(V) = \dim(S_V K_0) + \dim(V_1)$,  (3.37)

$\dim(\tilde V) = \dim(S_{\tilde V} K_0) + \dim(\tilde V_1)$.  (3.38)

Comparing (3.33) and (3.37), we obtain $\dim(K_1) = \dim(V_1)$. Similarly, from (3.34) and (3.38), we obtain $\dim(K_1) = \dim(\tilde V_1)$.

We now provide necessary and sufficient conditions for $A(z)^{-1}$ to have a second order pole at $z_0$, together with a closed-form expression in a punctured neighborhood of $z_0$.

Proposition 3.2. Suppose that Assumption 3.1 is satisfied and $K_1 \neq \{0\}$. Then the following conditions are equivalent to each other.

(i) m = 2 in the Laurent series expansion (3.2).

(ii) For some choice of $R_0^c$, $K_0^c$, we have

$B = R_1 \oplus A_2^{\{R_0^c,K_0^c\}} K_1$.  (3.39)

(iii) For any choice of $R_0^c$, $K_0^c$, and $R_1^c$ satisfying (3.27), $S_{\{R_0^c,K_0^c,R_1^c\}} : K_1 \to R_1^c$ is invertible.

(iv) For some choice of $R_0^c$, $K_0^c$, and $R_1^c$ satisfying (3.27), $S_{\{R_0^c,K_0^c,R_1^c\}} : K_1 \to R_1^c$ is invertible.

Under any of these conditions and any choice of complementary subspaces satisfying (3.27), the coefficients $(N_j, j \ge -2)$ in (3.2) are given by the following recursive formula.

$N_{-2} = (S_{\{R_0^c,K_0^c,R_1^c\}})^{-1} P_{R_1^c}$,  (3.40)

$N_{-1} = \big( Q^R_{\{R_0^c,K_0^c\}} (S_{R_0^c})^g_{\{R_1^c,K_1^c\}} P_{R_0^c} - N_{-2} A_1 (A_0)^g_{\{R_0^c,K_0^c\}} \big) Q^L_{\{R_0^c,K_0^c\}} - Q^R_{\{R_0^c,K_0^c\}} (A_0)^g_{\{R_0^c,K_0^c\}} A_1 N_{-2} - N_{-2} A_3^{\{R_0^c,K_0^c\}} N_{-2}$,  (3.41)

$N_j = \big( G_j(1,2)(A_0)^g_{\{R_0^c,K_0^c\}} A_1 - G_j(2,2) \big) N_{-2} + (\mathbb{1}_{j=0} - G_j(0,2)) (A_0)^g_{\{R_0^c,K_0^c\}} \big( \mathrm{id}_B - A_1 (S_{R_0^c})^g_{\{R_1^c,K_1^c\}} P_{R_0^c} Q^L_{\{R_0^c,K_0^c\}} \big) - G_j(1,2)(S_{R_0^c})^g_{\{R_1^c,K_1^c\}} P_{R_0^c} Q^L_{\{R_0^c,K_0^c\}}$, $j \ge 0$,  (3.42)

where

$A_3^{\{R_0^c,K_0^c\}} = A_3 - A_1 (A_0)^g_{\{R_0^c,K_0^c\}} A_1 (A_0)^g_{\{R_0^c,K_0^c\}} A_1$,
$Q^L_{\{R_0^c,K_0^c\}} = \mathrm{id}_B - A_2^{\{R_0^c,K_0^c\}} N_{-2}$,
$Q^R_{\{R_0^c,K_0^c\}} = \mathrm{id}_B - N_{-2} A_2^{\{R_0^c,K_0^c\}}$.

Each $N_j$ is understood as a map from B to B without restriction of the codomain.

Proof. We first establish some results that are used repeatedly in the subsequent proof. Given any choice of complementary subspaces satisfying (3.27), the following identity decomposition is easily deduced from (3.29):

$\mathrm{id}_B = (\mathrm{id}_B - P_{R_0^c}) + (\mathrm{id}_B - P_{R_1^c}) P_{R_0^c} + P_{R_1^c}$.  (3.43)

Since $R_1 = R_0 + A_1 K_0 = R_0 \oplus S_{R_0^c} K_0$, the direct sum condition (ii) is trivially equivalent to

$B = R_0 \oplus S_{R_0^c} K_0 \oplus A_2^{\{R_0^c,K_0^c\}} K_1$.  (3.44)

Moreover, we may obtain the following expansions of the identity from (3.1) and (3.2):

$\mathrm{id}_B = \sum_{k=-m}^{\infty} \Big( \sum_{j=0}^{m+k} N_{k-j} A_j \Big) (z - z_0)^k$  (3.45)

$= \sum_{k=-m}^{\infty} \Big( \sum_{j=0}^{m+k} A_j N_{k-j} \Big) (z - z_0)^k$.  (3.46)

Equivalence between (i)-(iv): Since (iii) $\Rightarrow$ (iv) is trivial, we will show (ii) $\Rightarrow$ (i), (i) $\Rightarrow$ (iii), and (iv) $\Rightarrow$ (ii).

To show (ii) $\Rightarrow$ (i), let V (resp. W) be a choice of $R_0^c$ (resp. $K_0^c$) for which the direct sum condition (ii) holds. Since $\ker S_V = K_1 \neq \{0\}$, $S_V$ cannot be invertible; therefore $m \neq 1$ by Proposition 3.1. Suppose then that $2 \le m < \infty$ in (3.2). Collecting the coefficients of $(z-z_0)^{-m}$, $(z-z_0)^{-m+1}$ and $(z-z_0)^{-m+2}$ in (3.45) and (3.46), we obtain

$N_{-m} A_0 = A_0 N_{-m} = 0$,  (3.47)

$N_{-m} A_1 + N_{-m+1} A_0 = A_1 N_{-m} + A_0 N_{-m+1} = 0$,  (3.48)

$N_{-m} A_2 + N_{-m+1} A_1 + N_{-m+2} A_0 = 0$.  (3.49)

We may define the generalized inverse $(A_0)^g_{\{V,W\}}$ for V and W. Composing both sides of (3.47) with $(A_0)^g_{\{V,W\}}$, we obtain

$N_{-m}(\mathrm{id}_B - P_V) = 0$ and $N_{-m} = N_{-m} P_V$.  (3.50)

From (3.48) and (3.50), it is deduced that

$N_{-m} A_1|_{K_0} = N_{-m} P_V A_1|_{K_0} = N_{-m} S_V = 0$.  (3.51)

Restricting the domain of both sides of (3.49) to $K_1$, we obtain

$N_{-m} A_2|_{K_1} + N_{-m+1} A_1|_{K_1} = 0$.  (3.52)

Moreover, (3.48) trivially implies that

$N_{-m+1} A_0 = -N_{-m} A_1$ and $A_0 N_{-m+1} = -A_1 N_{-m}$.  (3.53)

By composing each identity in (3.53) with $(A_0)^g_{\{V,W\}}$, it can be deduced that

$N_{-m+1}(\mathrm{id}_B - P_V) = -N_{-m} A_1 (A_0)^g_{\{V,W\}}$,  (3.54)

$P_W N_{-m+1} = -(A_0)^g_{\{V,W\}} A_1 N_{-m}$.  (3.55)

Composing both sides of (3.55) with $P_V$, it is deduced from (3.50) that

$P_W N_{-m+1} P_V = -(A_0)^g_{\{V,W\}} A_1 N_{-m}$.  (3.56)

From (3.56) and the identity decomposition $\mathrm{id}_B = (\mathrm{id}_B - P_W) + P_W$, we obtain

$N_{-m+1} P_V = -(A_0)^g_{\{V,W\}} A_1 N_{-m} + (\mathrm{id}_B - P_W) N_{-m+1} P_V$.  (3.57)

Summing both sides of (3.54) and (3.57) gives

$N_{-m+1} = -N_{-m} A_1 (A_0)^g_{\{V,W\}} - (A_0)^g_{\{V,W\}} A_1 N_{-m} + (\mathrm{id}_B - P_W) N_{-m+1} P_V$.  (3.58)

Therefore, (3.52) and (3.58) together imply that

$0 = N_{-m} A_2|_{K_1} - N_{-m} A_1 (A_0)^g_{\{V,W\}} A_1|_{K_1} - (A_0)^g_{\{V,W\}} A_1 N_{-m} A_1|_{K_1} + (\mathrm{id}_B - P_W) N_{-m+1} P_V A_1|_{K_1}$.  (3.59)

From the definition of $K_1$, $P_V A_1|_{K_1} = 0$; therefore the last term in (3.59) is zero. Moreover, in view of (3.50), we have $N_{-m}(\mathrm{id}_B - P_V) = 0$, which implies that the third term in (3.59) is zero. Therefore, (3.59) reduces to

$N_{-m} A_2^{\{V,W\}}|_{K_1} = 0$.  (3.60)

Given the direct sum condition (ii) (or equivalently (3.44)) together with equations (3.50), (3.51) and (3.60), we conclude that $N_{-m} = 0$. The above arguments hold for any m with $2 < m < \infty$, and we have already shown that m = 1 is impossible; therefore m must be 2. This proves (ii) $\Rightarrow$ (i).

Now we show (i) $\Rightarrow$ (iii). Let V, W, and $V_1 (\subseteq V)$ be choices of $R_0^c$, $K_0^c$ and $R_1^c$, respectively, and suppose that $S_{\{V,W,V_1\}}$ is not invertible. Due to Lemma 3.5, we know $\dim(V_1) = \dim(K_1)$, meaning that $S_{\{V,W,V_1\}}$ is then not injective; there exists a nonzero element $x \in K_1$ such that $S_{\{V,W,V_1\}} x = 0$. Collecting the coefficients of $(z-z_0)^{-2}$, $(z-z_0)^{-1}$ and $(z-z_0)^0$ in (3.45) and (3.46), we have

$\sum_{k=-m}^{-3} N_k A_{-2-k} + N_{-2} A_0 = 0$,  (3.61)

$\sum_{k=-m}^{-3} N_k A_{-1-k} + N_{-2} A_1 + N_{-1} A_0 = 0$,  (3.62)

$\sum_{k=-m}^{-3} N_k A_{-k} + N_{-2} A_2 + N_{-1} A_1 + N_0 A_0 = \mathrm{id}_B$.  (3.63)

From the identity decomposition (3.43), $N_{-2}$ can be written as the sum of $N_{-2}(\mathrm{id}_B - P_V)$, $N_{-2}(\mathrm{id}_B - P_{V_1}) P_V$ and $N_{-2} P_{V_1}$. We will obtain an explicit formula for each summand. It is deduced from (3.61) that

$N_{-2}(\mathrm{id}_B - P_V) = -\sum_{k=-m}^{-3} N_k A_{-2-k} (A_0)^g_{\{V,W\}}$.  (3.64)

Restricting both sides of (3.62) to $K_0$, we obtain

$N_{-2} A_1|_{K_0} = -\sum_{k=-m}^{-3} N_k A_{-1-k}|_{K_0}$.  (3.65)

Since $N_{-2} = N_{-2}(\mathrm{id}_B - P_V) + N_{-2} P_V$, we obtain from (3.64) and (3.65)

$N_{-2} S_V = -\sum_{k=-m}^{-3} N_k A_{-1-k}|_{K_0} + \sum_{k=-m}^{-3} N_k A_{-2-k} (A_0)^g_{\{V,W\}} A_1|_{K_0}$.  (3.66)

We may define $(S_V)^g_{\{V_1,W_1\}}$ as in (3.32). Composing both sides of (3.66) with $(S_V)^g_{\{V_1,W_1\}} P_V$, we obtain

$N_{-2}(\mathrm{id}_B - P_{V_1}) P_V = -\sum_{k=-m}^{-3} N_k A_{-1-k} (S_V)^g_{\{V_1,W_1\}} P_V + \sum_{k=-m}^{-3} N_k A_{-2-k} (A_0)^g_{\{V,W\}} A_1 (S_V)^g_{\{V_1,W_1\}} P_V$.  (3.67)

Restricting both sides of (3.63) to $K_1$, we have

$\sum_{k=-m}^{-3} N_k A_{-k}|_{K_1} + N_{-2} A_2|_{K_1} + N_{-1} A_1|_{K_1} = \mathrm{id}_B|_{K_1}$.  (3.68)

From (3.62), we can also obtain

$N_{-1}(\mathrm{id}_B - P_V) = -\sum_{k=-m}^{-3} N_k A_{-1-k} (A_0)^g_{\{V,W\}} - N_{-2} A_1 (A_0)^g_{\{V,W\}}$.  (3.69)

Since $A_1 K_1 \subseteq R_0$, we have $N_{-1} A_1|_{K_1} = N_{-1}(\mathrm{id}_B - P_V) A_1|_{K_1}$. Substituting (3.69) into (3.68), it can be obtained that

$\sum_{k=-m}^{-3} N_k A_{-k}|_{K_1} - \sum_{k=-m}^{-3} N_k A_{-1-k} (A_0)^g_{\{V,W\}} A_1|_{K_1} + N_{-2} A_2^{\{V,W\}}|_{K_1} = \mathrm{id}_B|_{K_1}$.  (3.70)

Since $N_{-2} = N_{-2}(\mathrm{id}_B - P_V) + N_{-2}(\mathrm{id}_B - P_{V_1}) P_V + N_{-2} P_{V_1}$, we have

$\mathrm{id}_B|_{K_1} = \sum_{k=-m}^{-3} N_k A_{-k}|_{K_1} - \sum_{k=-m}^{-3} N_k A_{-1-k} (A_0)^g_{\{V,W\}} A_1|_{K_1} + N_{-2}(\mathrm{id}_B - P_V) A_2^{\{V,W\}}|_{K_1} + N_{-2}(\mathrm{id}_B - P_{V_1}) P_V A_2^{\{V,W\}}|_{K_1} + N_{-2} S_{\{V,W,V_1\}}$.  (3.71)

Note that if $N_j$ is zero for every $j \le -3$, the first four terms on the right-hand side of (3.71) are equal to zero, as is easily deduced from the formulas for $N_{-2}(\mathrm{id}_B - P_V)$ and $N_{-2}(\mathrm{id}_B - P_{V_1}) P_V$ obtained in (3.64) and (3.67). However, we showed that there exists some nonzero $x \in K_1$ such that $S_{\{V,W,V_1\}} x = 0$. Therefore, for (3.71) to hold, $N_j$ must be nonzero for some $j \le -3$. This shows (i) $\Rightarrow$ (iii).

It remains to show (iv) $\Rightarrow$ (ii). Suppose that (ii) does not hold. Then for any choice of $R_0^c$ and $K_0^c$, we must have either

$R_1 \cap A_2^{\{R_0^c,K_0^c\}} K_1 \neq \{0\}$  (3.72)

or

$R_1 + A_2^{\{R_0^c,K_0^c\}} K_1 \neq B$.  (3.73)

If (3.72) is true, then clearly $S_{\{R_0^c,K_0^c,R_1^c\}}$ cannot be injective for any choice of $R_1^c$ satisfying (3.27). Moreover, if (3.73) is true, then we must have $\dim(A_2^{\{R_0^c,K_0^c\}} K_1) < \dim(R_1^c)$, which implies that $S_{\{R_0^c,K_0^c,R_1^c\}}$ cannot be surjective for any choice of $R_1^c$ satisfying (3.27). Therefore (iv) $\Rightarrow$ (ii) is easily deduced.

Formulas for $N_{-2}$ and $N_{-1}$: Let V, W, $V_1 (\subseteq V)$ and $W_1$ be our choices of $R_0^c$, $K_0^c$, $R_1^c$ and $K_1^c$, respectively. Collecting the coefficients of $(z-z_0)^{-2}$, $(z-z_0)^{-1}$ and $(z-z_0)^0$ from (3.45) and (3.46), we obtain

$N_{-2} A_0 = A_0 N_{-2} = 0$,  (3.74)

$N_{-2} A_1 + N_{-1} A_0 = A_1 N_{-2} + A_0 N_{-1} = 0$,  (3.75)

$N_{-2} A_2 + N_{-1} A_1 + N_0 A_0 = \mathrm{id}_B$.  (3.76)

From arguments and algebra similar to those in our demonstration of (ii) $\Rightarrow$ (i), it is easily deduced that

$N_{-2}|_{R_1} = 0$,  (3.77)

$N_{-2} A_2^{\{V,W\}}|_{K_1} = \mathrm{id}_B|_{K_1}$.  (3.78)

Equation (3.77) implies that

$N_{-2}(\mathrm{id}_B - P_{V_1}) = 0$ and $N_{-2} = N_{-2} P_{V_1}$.  (3.79)

Equations (3.77) and (3.78) together imply that

$N_{-2}|_{V_1} S_{\{V,W,V_1\}} = \mathrm{id}_B|_{K_1}$.  (3.80)

Composing both sides of (3.80) with $(S_{\{V,W,V_1\}})^{-1} P_{V_1}$, we obtain

$N_{-2} P_{V_1} = (S_{\{V,W,V_1\}})^{-1} P_{V_1}$.  (3.81)

In view of (3.79), (3.81) is in fact equal to $N_{-2}$ with the codomain restricted to $K_1$. Viewing this as a map from B to B, we obtain (3.40) for our choice of complementary subspaces.

We next verify the claimed formula for $N_{-1}$. In view of the identity decomposition (3.43), $N_{-1}$ may be written as the sum of $N_{-1}(\mathrm{id}_B - P_V)$, $N_{-1}(\mathrm{id}_B - P_{V_1}) P_V$ and $N_{-1} P_{V_1}$. We will find an explicit formula for each summand. From (3.45) with m = 2, the coefficients of $(z-z_0)^{-1}$, $(z-z_0)^0$ and $(z-z_0)^1$ yield

$N_{-2} A_1 + N_{-1} A_0 = 0$,  (3.82)

$N_{-2} A_2 + N_{-1} A_1 + N_0 A_0 = \mathrm{id}_B$,  (3.83)

$N_{-2} A_3 + N_{-1} A_2 + N_0 A_1 + N_1 A_0 = 0$.  (3.84)

From (3.82) and the properties of the generalized inverse, it is easily deduced that

$N_{-1}(\mathrm{id}_B - P_V) = -N_{-2} A_1 (A_0)^g_{\{V,W\}}$.  (3.85)

Restricting the domain of both sides of (3.83) to $K_0$, we obtain

$N_{-1} A_1|_{K_0} = \mathrm{id}_B|_{K_0} - N_{-2} A_2|_{K_0}$.  (3.86)

Using the identity decomposition $\mathrm{id}_B = P_V + (\mathrm{id}_B - P_V)$, (3.86) can be written as

$N_{-1} S_V = \mathrm{id}_B|_{K_0} - N_{-2} A_2|_{K_0} - N_{-1}(\mathrm{id}_B - P_V) A_1|_{K_0}$.  (3.87)

Substituting (3.85) into (3.87), we obtain

$N_{-1} S_V = \big( \mathrm{id}_B - N_{-2} A_2^{\{V,W\}} \big)\big|_{K_0}$.  (3.88)

Under the direct sum condition (ii), $S_V : K_0 \to V$ is not invertible, but it admits a generalized inverse as in (3.32). From the construction of $(S_V)^g_{\{V_1,W_1\}}$, we have $S_V (S_V)^g_{\{V_1,W_1\}} = (\mathrm{id}_B - P_{V_1})|_V$. Composing both sides of (3.88) with $(S_V)^g_{\{V_1,W_1\}} P_V$, we obtain

$N_{-1}(\mathrm{id}_B - P_{V_1}) P_V = \big( \mathrm{id}_B - N_{-2} A_2^{\{V,W\}} \big) (S_V)^g_{\{V_1,W_1\}} P_V$.  (3.89)

Restricting the domain of both sides of (3.84) to $K_1$, we have

$N_{-2} A_3|_{K_1} + N_{-1} A_2|_{K_1} + N_0 A_1|_{K_1} = 0$.  (3.90)

Composing both sides of (3.83) with $(A_0)^g_{\{V,W\}}$, it is deduced that

$N_0(\mathrm{id}_B - P_V) = (\mathrm{id}_B - N_{-2} A_2 - N_{-1} A_1)(A_0)^g_{\{V,W\}}$.  (3.91)

From the definition of $K_1$, we have $A_1 K_1 \subseteq R_0$. Therefore, it is easily deduced that

$N_0 A_1|_{K_1} = N_0(\mathrm{id}_B - P_V) A_1|_{K_1}$.  (3.92)

Combining (3.90), (3.91) and (3.92), we have

$\big( N_{-2} A_3 + N_{-1} A_2 + (\mathrm{id}_B - N_{-2} A_2 - N_{-1} A_1)(A_0)^g_{\{V,W\}} A_1 \big)\big|_{K_1} = 0$.  (3.93)

Rearranging terms, (3.93) reduces to

$N_{-1} A_2^{\{V,W\}}|_{K_1} = -N_{-2}\big( A_3 - A_2 (A_0)^g_{\{V,W\}} A_1 \big)\big|_{K_1} - (A_0)^g_{\{V,W\}} A_1|_{K_1}$.  (3.94)

Moreover, with trivial algebra it can be shown that (3.94) is equal to

$N_{-1} A_2^{\{V,W\}}|_{K_1} = -N_{-2}\big( A_3^{\{V,W\}} - A_2^{\{V,W\}} (A_0)^g_{\{V,W\}} A_1 \big)\big|_{K_1} - (A_0)^g_{\{V,W\}} A_1|_{K_1}$.  (3.95)

From the identity decomposition (3.43), we have $N_{-1} = N_{-1}(\mathrm{id}_B - P_V) + N_{-1}(\mathrm{id}_B - P_{V_1}) P_V + N_{-1} P_{V_1}$, so (3.95) can be written as

$N_{-1} S_{\{V,W,V_1\}} = -N_{-2}\big( A_3^{\{V,W\}} - A_2^{\{V,W\}} (A_0)^g_{\{V,W\}} A_1 \big)\big|_{K_1} - (A_0)^g_{\{V,W\}} A_1|_{K_1} - N_{-1}(\mathrm{id}_B - P_V) A_2^{\{V,W\}}|_{K_1} - N_{-1}(\mathrm{id}_B - P_{V_1}) P_V A_2^{\{V,W\}}|_{K_1}$.  (3.96)

We obtained explicit formulas for $N_{-1}(\mathrm{id}_B - P_V)$ and $N_{-1}(\mathrm{id}_B - P_{V_1}) P_V$ in (3.85) and (3.89). Moreover, we proved that $S_{\{V,W,V_1\}} : K_1 \to V_1$ is invertible. After some tedious algebra from (3.96), one can obtain the claimed formula (3.41) for $N_{-1}$ under our choice of complementary subspaces. Of course, the resulting $N_{-1}$ needs to be understood as a map from B to B.

Formulas for $(N_j, j \ge 0)$: Collecting the coefficients of $(z-z_0)^j$, $(z-z_0)^{j+1}$ and $(z-z_0)^{j+2}$ in the expansion of the identity (3.45) with m = 2, we have

$G_j(0,2) + N_j A_0 = \mathbb{1}_{j=0}$,  (3.97)

$G_j(1,2) + N_j A_1 + N_{j+1} A_0 = 0$,  (3.98)

$G_j(2,2) + N_j A_2 + N_{j+1} A_1 + N_{j+2} A_0 = 0$.  (3.99)

From the identity decomposition (3.43), the operator $N_j$ can similarly be written as the sum of $N_j(\mathrm{id}_B - P_V)$, $N_j(\mathrm{id}_B - P_{V_1}) P_V$, and $N_j P_{V_1}$. We will find an explicit formula for each summand. First, from (3.97) it can be easily verified that

$N_j(\mathrm{id}_B - P_V) = \mathbb{1}_{j=0} (A_0)^g_{\{V,W\}} - G_j(0,2)(A_0)^g_{\{V,W\}}$.  (3.100)

By restricting the domain of (3.98) to $K_0$, we obtain

$N_j A_1|_{K_0} = -G_j(1,2)|_{K_0}$.  (3.101)

Using the identity decomposition $\mathrm{id}_B = P_V + (\mathrm{id}_B - P_V)$ and (3.100), we may rewrite (3.101) as

$N_j S_V = -G_j(1,2)|_{K_0} - \mathbb{1}_{j=0} (A_0)^g_{\{V,W\}} A_1|_{K_0} + G_j(0,2)(A_0)^g_{\{V,W\}} A_1|_{K_0}$.  (3.102)

Composing both sides of (3.102) with $(S_V)^g_{\{V_1,W_1\}} P_V$, we obtain an explicit formula for $N_j(\mathrm{id}_B - P_{V_1}) P_V$:

$N_j(\mathrm{id}_B - P_{V_1}) P_V = -G_j(1,2)(S_V)^g_{\{V_1,W_1\}} P_V - \mathbb{1}_{j=0} (A_0)^g_{\{V,W\}} A_1 (S_V)^g_{\{V_1,W_1\}} P_V + G_j(0,2)(A_0)^g_{\{V,W\}} A_1 (S_V)^g_{\{V_1,W_1\}} P_V$.  (3.103)

Restricting the domain of (3.99) to $K_1$, we obtain

$G_j(2,2)|_{K_1} + N_j A_2|_{K_1} + N_{j+1} A_1|_{K_1} = 0$.  (3.104)

Composing both sides of (3.98) with $(A_0)^g_{\{V,W\}}$, it is easily deduced that

$N_{j+1}(\mathrm{id}_B - P_V) = -G_j(1,2)(A_0)^g_{\{V,W\}} - N_j A_1 (A_0)^g_{\{V,W\}}$.  (3.105)

Note that $N_{j+1} A_1|_{K_1} = N_{j+1}(\mathrm{id}_B - P_V) A_1|_{K_1}$ by the definition of $K_1$. Combining this with (3.104) and (3.105), we obtain the following equation:

$N_j A_2^{\{V,W\}}|_{K_1} = -G_j(2,2)|_{K_1} + G_j(1,2)(A_0)^g_{\{V,W\}} A_1|_{K_1}$.  (3.106)

We know $N_j = N_j(\mathrm{id}_B - P_V) + N_j(\mathrm{id}_B - P_{V_1}) P_V + N_j P_{V_1}$, and we have already obtained explicit formulas for the first two terms. Substituting these formulas into (3.106), we obtain

$N_j P_{V_1} A_2^{\{V,W\}}|_{K_1} = -G_j(2,2)|_{K_1} + G_j(1,2)(A_0)^g_{\{V,W\}} A_1|_{K_1} - \mathbb{1}_{j=0} (A_0)^g_{\{V,W\}} A_2^{\{V,W\}}|_{K_1} + G_j(0,2)(A_0)^g_{\{V,W\}} A_2^{\{V,W\}}|_{K_1} + G_j(1,2)(S_V)^g_{\{V_1,W_1\}} P_V A_2^{\{V,W\}}|_{K_1} + \mathbb{1}_{j=0} (A_0)^g_{\{V,W\}} A_1 (S_V)^g_{\{V_1,W_1\}} P_V A_2^{\{V,W\}}|_{K_1} - G_j(0,2)(A_0)^g_{\{V,W\}} A_1 (S_V)^g_{\{V_1,W_1\}} P_V A_2^{\{V,W\}}|_{K_1}$.  (3.107)

Composing both sides of (3.107) with $(S_{\{V,W,V_1\}})^{-1} P_{V_1}$, we obtain the formula for $N_j P_{V_1}$. Combining this formula with (3.100) and (3.103), one can verify the claimed formula (3.42) for our choice of complementary subspaces after some algebra, viewing the resulting operator as a map from B to B. Even though our recursive formula is obtained under a given choice of complementary subspaces V, W, $V_1$ and $W_1$, the uniqueness of the Laurent series again implies that it does not depend on this choice.

Remark 3.1. Let us specialize our discussion to H, a complex separable Hilbert space. In H there is a canonical notion of a complementary subspace, namely the orthogonal complement, while we have no such notion in B. We may therefore let $(\operatorname{ran} A_0)^\perp$ (resp. $(\ker A_0)^\perp$) be our choice of $R_0^c$ (resp. $K_0^c$). Then $P_{(\operatorname{ran} A_0)^\perp}$ and $P_{(\ker A_0)^\perp}$ are orthogonal projections, and the generalized inverse $(A_0)^g_{\{(\operatorname{ran} A_0)^\perp, (\ker A_0)^\perp\}}$ has the following properties:

$A_0 (A_0)^g_{\{(\operatorname{ran} A_0)^\perp, (\ker A_0)^\perp\}} = \mathrm{id}_H - P_{(\operatorname{ran} A_0)^\perp}$,

$(A_0)^g_{\{(\operatorname{ran} A_0)^\perp, (\ker A_0)^\perp\}} A_0 = P_{(\ker A_0)^\perp}$.

That is, both $A_0 (A_0)^g_{\{(\operatorname{ran} A_0)^\perp, (\ker A_0)^\perp\}}$ and $(A_0)^g_{\{(\operatorname{ran} A_0)^\perp, (\ker A_0)^\perp\}} A_0$ are self-adjoint operators, meaning that $(A_0)^g_{\{(\operatorname{ran} A_0)^\perp, (\ker A_0)^\perp\}}$ is the Moore-Penrose inverse of $A_0$ (Engl and Nashed, 1981, Section 1). Moreover, we may let $(\operatorname{ran} A_0)^\perp \ominus S_{(\operatorname{ran} A_0)^\perp} K_0$, the orthogonal complement of $S_{(\operatorname{ran} A_0)^\perp} K_0$ in $(\operatorname{ran} A_0)^\perp$, be our choice of $R_1^c$. This choice trivially satisfies (3.27), and it yields the orthogonal decomposition of H

$H = R_0 \oplus S_{(\operatorname{ran} A_0)^\perp} K_0 \oplus R_1^c$.

Letting $K_0 \ominus K_1$, the orthogonal complement of $K_1$ in $K_0$, be our choice of $K_1^c$, we can also make the generalized inverse of $S_{(\operatorname{ran} A_0)^\perp}$ the Moore-Penrose inverse. This specific choice of complementary subspaces appears to be standard in H among the many possible choices.

Remark 3.2. Under the specific choice of complementary subspaces in Remark 3.1, Beare and Seo (2018) stated and proved theorems similar to our Propositions 3.1 and 3.2, without providing a recursive formula for $N_j$; the reader is referred to Theorems 3.1 and 3.2 of their paper for details. We, on the other hand, explicitly take all other possible choices of complementary subspaces into account and provide a recursive formula yielding a closed-form expression of the Laurent series. Therefore, even if we restrict attention to a Hilbert space setting, our propositions can be viewed as extended versions of those in Beare and Seo (2018).

4 Representation theory

In this section, we derive a suitable extension of the Granger-Johansen representation theory, given as an application of the results established in Section 3. Let $A : \mathbb{C} \to L_B$ be a holomorphic operator pencil; it then admits the Taylor series

$A(z) = \sum_{j=0}^{\infty} A_{j,(0)} z^j$,  (4.1)

where $A_{j,(0)}$ denotes the coefficient of $z^j$ in the Taylor series of A(z) around 0. Note that we use the additional subscript (0) to distinguish it from $A_j$, which denotes the coefficient of $(z-1)^j$ in the Taylor series of A(z) around 1. As in the previous sections, we let N(z) denote $A(z)^{-1}$ if it exists.
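To connect the two expansions, consider (as an illustration consistent with Remark 4.3 below, not a restriction of the theory) the AR(p) law of motion $X_t = \sum_{j=1}^p \Phi_j X_{t-j} + \varepsilon_t$, whose pencil is $A(z) = \mathrm{id}_B - \sum_{j=1}^p z^j \Phi_j$. Around 0, its coefficients are $A_{0,(0)} = \mathrm{id}_B$ and $A_{j,(0)} = -\Phi_j$ for $1 \le j \le p$ (and zero for j > p), while around 1 the coefficients entering Propositions 3.1 and 3.2 are $A_0 = \mathrm{id}_B - \sum_{j=1}^p \Phi_j$ and $A_1 = A^{(1)}(1) = -\sum_{j=1}^p j \Phi_j$.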

Let $D_r \subseteq \mathbb{C}$ denote the open disk centered at the origin with radius $r > 0$, and let $\bar D_r$ be its closure. Throughout this section, we employ the following assumption.

Assumption 4.1. (i) $A : \mathbb{C} \to L_B$ is a holomorphic Fredholm pencil. (ii) A(z) is invertible on $\bar D_1 \setminus \{1\}$.

We now provide the main results of this section. To simplify expressions in the following propositions, we keep using the notation introduced in Section 3 (with $z_0 = 1$). Moreover, we introduce $\pi_j(k)$ for $j \ge 0$, given by

$\pi_0(k) = 1$, $\pi_1(k) = k$, $\pi_j(k) = k(k-1)\cdots(k-j+1)$, $j \ge 2$.

Proposition 4.1. Suppose that A(z) satisfies Assumption 4.1 and that we have a sequence $(X_t, t \ge -p+1)$ satisfying

$\sum_{j=0}^{p} A_{j,(0)} X_{t-j} = \varepsilon_t$,  (4.2)

where $\varepsilon = (\varepsilon_t, t \in \mathbb{Z})$ is a strong white noise. Then the following conditions are equivalent to each other.

(i) $A(z)^{-1}$ has a simple pole at z = 1.

(ii) $B = R_0 \oplus A_1 K_0$.

(iii) For any choice of $R_0^c$, $S_{R_0^c} : K_0 \to R_0^c$ is invertible.

(iv) For some choice of $R_0^c$, $S_{R_0^c} : K_0 \to R_0^c$ is invertible.

Under any of these equivalent conditions, $X_t$ allows the representation: for some $\tau_0$ depending on initial values,

$X_t = \tau_0 - N_{-1} \sum_{s=1}^{t} \varepsilon_s + \nu_t$, $t \ge 0$.  (4.3)

Moreover, $\nu_t \in L^2_B$ and satisfies

$\nu_t = \sum_{j=0}^{\infty} \Phi_j \varepsilon_{t-j}$, $\Phi_j = \dfrac{1}{j!} \sum_{k=j}^{\infty} (-1)^{k-j} \pi_j(k) N_k$,  (4.4)

where $(N_j, j \ge -1)$ can be explicitly obtained from Proposition 3.1.

Proof. Under Assumption 4.1, there exists $\eta > 0$ such that $A(z)^{-1}$ depends holomorphically on $z \in D_{1+\eta} \setminus \{1\}$. To see this, note that the analytic Fredholm theorem implies that $\sigma(A)$ is a discrete set. Since $\sigma(A)$ is closed, it is deduced that $\sigma(A) \cap \bar D_{1+r}$ is a closed discrete subset of $\bar D_{1+r}$ for some $0 < r < \infty$. The fact that $\bar D_{1+r}$ is a compact subset of $\mathbb{C}$ implies that there are only finitely many elements in $\sigma(A) \cap \bar D_{1+r}$. Furthermore, since 1 is an isolated element of $\sigma(A)$, it can be easily deduced that there exists $\eta \in (0, r)$ such that $A(z)^{-1}$ depends holomorphically on $z \in D_{1+\eta} \setminus \{1\}$.

Since $1 \in \sigma(A)$ is an isolated element, the equivalence of conditions (i)-(iv) is implied by Proposition 3.1. Under any of the equivalent conditions, it is deduced from Proposition 3.1 that $N(z) = N_{-1}(z-1)^{-1} + N^H(z)$, where $N^H(z)$ denotes the holomorphic part of the Laurent series. Moreover, we can explicitly obtain the coefficients $(N_j, j \ge -1)$ using the recursive formula provided in Proposition 3.1. It is clear that $(1-z)N(z)$ can be holomorphically extended over 1, and we can rewrite it as

$(1-z)N(z) = -N_{-1} + (1-z)N^H(z)$.  (4.5)

Applying the linear filter induced by (4.5) to both sides of (4.2), we obtain

$\Delta X_t := X_t - X_{t-1} = -N_{-1}\varepsilon_t + (\nu_t - \nu_{t-1})$,  (4.6)

where $\nu_s := \sum_{j=0}^{\infty} N^H_{j,(0)} \varepsilon_{s-j}$ and $N^H_{j,(0)}$ denotes the coefficient of $z^j$ in the Taylor series of $N^H(z)$ around 0. Clearly the process

$X_t = -N_{-1} \sum_{s=1}^{t} \varepsilon_s + \nu_t$  (4.7)

is a solution, and the complete solution is obtained by adding the solution to $\Delta X_t = 0$, which is given by $\tau_0$.

We then show that $\nu_s$ converges in $L^2_B$. Note that

$\sum_{j=0}^{\infty} E\|N^H_{j,(0)} \varepsilon_{s-j}\| \le \sum_{j=0}^{\infty} \|N^H_{j,(0)}\|_{L_B} E\|\varepsilon_{s-j}\| \le C \sum_{j=0}^{\infty} \|N^H_{j,(0)}\|_{L_B}$,  (4.8)

where C is some positive constant. The fact that $N^H(z)$ is holomorphic on $D_{1+\eta}$ implies that $\|N^H_{j,(0)}\|_{L_B}$ decreases exponentially as j goes to infinity. This shows that the right-hand side of (4.8) is finite, so $\nu_s$ converges in $L^2_B$. It is easy to verify (4.4) by elementary calculus.
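The decomposition (4.3) is easy to visualize in finite dimensions. The following sketch (a two-dimensional illustration with ad hoc matrices, not an implementation of anything in the paper) checks the formula $N_{-1} = S_{R_0^c}^{-1} P_{R_0^c}$ against a numerical Laurent coefficient and simulates the resulting random-walk component.

```python
import numpy as np

# AR(1): X_t = Phi X_{t-1} + eps_t with one unit eigenvalue.
Phi = np.diag([1.0, 0.5])
I = np.eye(2)
# Pencil A(z) = I - z*Phi around z0 = 1: A0 = I - Phi, A1 = -Phi.
# ker A0 = span{e1}, ran A0 = span{e2}; choose R0c = span{e1}, so
# P = diag(1, 0) and S = P A1 restricted to ker A0 maps e1 -> -e1.
# Formula (3.6): N_{-1} = S^{-1} P = diag(-1, 0).
N_m1 = np.diag([-1.0, 0.0])
# Numerical check: N_{-1} = lim_{z->1} (z - 1) A(z)^{-1}.
z = 1 + 1e-6
assert np.allclose((z - 1) * np.linalg.inv(I - z * Phi), N_m1, atol=1e-5)

# Simulate: here tau_0 = 0 and nu_t loads only on the second coordinate,
# so the first coordinate of X_t equals -N_{-1} * (sum of shocks) exactly.
rng = np.random.default_rng(0)
T = 500
eps = rng.standard_normal((T, 2))
X = np.zeros(2)
for t in range(T):
    X = Phi @ X + eps[t]
rw = -N_m1 @ eps.sum(axis=0)
print(X[0] - rw[0])  # 0.0 up to floating point error
```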

Remark 4.1. Given that $\varepsilon$ is a strong white noise, the sequence $(\nu_t, t \in \mathbb{Z})$ in our representation (4.3) is stationary. Therefore, (4.3) shows that $X_t$ can be decomposed into three components: a random walk, a stationary process, and a term depending on initial values.

Proposition 4.2. Suppose that A(z) satisfies Assumption 4.1 and that we have a sequence $(X_t, t \ge -p+1)$ satisfying (4.2). Then the following conditions are equivalent to each other.

(i) $A(z)^{-1}$ has a second order pole at z = 1.

(ii) For some choice of $R_0^c$, $K_0^c$, we have

$B = R_1 \oplus A_2^{\{R_0^c,K_0^c\}} K_1$.  (4.9)

(iii) For any choice of $R_0^c$, $K_0^c$, and $R_1^c$ satisfying (3.27), $S_{\{R_0^c,K_0^c,R_1^c\}} : K_1 \to R_1^c$ is invertible.

(iv) For some choice of $R_0^c$, $K_0^c$, and $R_1^c$ satisfying (3.27), $S_{\{R_0^c,K_0^c,R_1^c\}} : K_1 \to R_1^c$ is invertible.

Under any of these equivalent conditions, $X_t$ allows the representation: for some $\tau_0$ and $\tau_1$ depending on initial values,

$X_t = \tau_0 + \tau_1 t + N_{-2} \sum_{\tau=1}^{t} \sum_{s=1}^{\tau} \varepsilon_s - N_{-1} \sum_{s=1}^{t} \varepsilon_s + \nu_t$, $t \ge 0$.  (4.10)

Moreover, $\nu_t \in L^2_B$ and satisfies

$\nu_t = \sum_{j=0}^{\infty} \Phi_j \varepsilon_{t-j}$, $\Phi_j = \dfrac{1}{j!} \sum_{k=j}^{\infty} (-1)^{k-j} \pi_j(k) N_k$,  (4.11)

where $(N_j, j \ge -2)$ can be explicitly obtained from Proposition 3.2.

Proof. As shown in the proof of Proposition 4.1, there exists $\eta > 0$ such that $A(z)^{-1}$ depends holomorphically on $z \in D_{1+\eta} \setminus \{1\}$. Due to Proposition 3.2, we know $N(z) = N_{-2}(z-1)^{-2} + N_{-1}(z-1)^{-1} + N^H(z)$, where $N^H(z)$ is the holomorphic part of the Laurent series. $(1-z)^2 N(z)$ can be holomorphically extended over 1 so that it is holomorphic on $D_{1+\eta}$. Then we have

$(1-z)^2 N(z) = N_{-2} - N_{-1}(1-z) + (1-z)^2 N^H(z)$.  (4.12)

Applying the linear filter induced by $(1-z)^2 N(z)$ to both sides of (4.2), we obtain

$\Delta^2 X_t = N_{-2}\varepsilon_t - N_{-1}\Delta\varepsilon_t + \Delta^2\tilde\nu_t$,  (4.13)

where $\tilde\nu_t := \sum_{j=0}^{\infty} N^H_{j,(0)} \varepsilon_{t-j}$. From (4.8), we know $\tilde\nu_t$ converges in $L^2_B$. Clearly the process

$X_t = N_{-2} \sum_{\tau=1}^{t} \sum_{s=1}^{\tau} \varepsilon_s - N_{-1} \sum_{s=1}^{t} \varepsilon_s + \tilde\nu_t$  (4.14)

is a solution. Since the solution to $\Delta^2 X_t = 0$ is given by $\tau_0 + \tau_1 t$, we obtain (4.10). It is also easy to verify (4.11) by elementary calculus.

Remark 4.2. Similarly, the sequence $(\nu_t, t \in \mathbb{Z})$ in our representation (4.10) is stationary given that $\varepsilon$ is a strong white noise. The representation (4.10) then shows that $X_t$ can be decomposed into a cumulated random walk, a random walk, a stationary process, and a term depending on initial values.

Remark 4.3. Propositions 4.1 and 4.2 require the autoregressive law of motion to be characterized by a holomorphic operator pencil satisfying Assumption 4.1. However, we expect that a wide class of autoregressive processes considered in practice satisfies this requirement. For example, for $p \in \mathbb{N}$, let $\Phi_1, \ldots, \Phi_p$ be compact operators. Then the autoregressive law of motion given by

$X_t = \sum_{j=1}^{p} \Phi_j X_{t-j} + \varepsilon_t$

satisfies the requirement.

Remark 4.4. Even though we have assumed that $\varepsilon$ is a strong white noise for simplicity, we may allow more general innovations in Propositions 4.1 and 4.2. For example, we could allow the distribution of $\varepsilon_t$ to depend on t. Even in this case, if $E\|\varepsilon_t\|$ is bounded by $a + t^b$ for some $a, b \in \mathbb{R}$, the right-hand side of (4.8) is still bounded by a finite quantity, meaning that $\nu_t$ converges in $L^2_B$.

Remark 4.5. For simplicity, we have only considered purely stochastic processes in Proposition 4.1. However, the inclusion of a deterministic component does not cause significant difficulties.

For example, suppose that we have $(X_t, t \ge -p+1)$ generated by the following autoregressive law of motion:

$\sum_{j=0}^{p} A_{j,(0)} X_{t-j} = \gamma_t + \varepsilon_t$, $t \ge 1$,  (4.15)

where $(\gamma_t, t \in \mathbb{Z})$ is a deterministic sequence. In this case, we may need some condition on $(\gamma_t, t \in \mathbb{Z})$ for $\nu_t$ to converge in $L^2_B$; we could, for example, assume that $\|\gamma_t\|$ is bounded by $a + t^b$ for some $a, b \in \mathbb{R}$.

5 Conclusion

In this paper, we provide a suitable extension of the Granger-Johansen representation theory. To achieve this goal, the inversion of a holomorphic Fredholm pencil around an isolated singularity is studied in detail, based on the analytic Fredholm theorem; we obtain necessary and sufficient conditions for the inverse of a Fredholm operator pencil to have a simple pole or a second order pole, and we further derive a closed-form expression of the Laurent expansion of the inverse around an isolated singularity. Using these results, our representation theorems are easily derived. Since we obtain a closed-form expression of the Laurent series of the inverse, we can fully characterize I(1) and I(2) solutions up to a term depending on initial values.

References

Abramovich, Y. A. and Aliprantis, C. D. (2002). An Invitation to Operator Theory (Graduate Studies in Mathematics). American Mathematical Society.

Beare, B. K., Seo, J., and Seo, W.-K. (2017). Cointegrated linear processes in Hilbert space. Journal of Time Series Analysis, 38(6):1010-1027.

Beare, B. K. and Seo, W.-K. (2018). Representation of I(1) and I(2) autoregressive Hilbertian processes. ArXiv e-print, arXiv:1701.08149v2 [math.ST].

Bosq, D. (2000). Linear Processes in Function Spaces. Springer-Verlag New York.