ON THE INDEX OF INVARIANT SUBSPACES IN SPACES OF ANALYTIC FUNCTIONS OF SEVERAL COMPLEX VARIABLES


JIM GLEASON, STEFAN RICHTER, AND CARL SUNDBERG

Abstract. Let $B_d$ be the open unit ball in $\mathbb{C}^d$, $d \ge 1$, and let $H^2_d$ be the space of analytic functions on $B_d$ determined by the reproducing kernel $(1 - \langle z, \lambda \rangle)^{-1}$. This reproducing kernel Hilbert space serves a universal role in the model theory for $d$-contractions, i.e. tuples $T = (T_1, \dots, T_d)$ of commuting operators on a Hilbert space $\mathcal{K}$ such that $\|T_1 x_1 + \dots + T_d x_d\|^2 \le \|x_1\|^2 + \dots + \|x_d\|^2$ for all $x_1, \dots, x_d \in \mathcal{K}$. If $\mathcal{D}$ is a separable Hilbert space, then we write $H^2_d(\mathcal{D}) = H^2_d \otimes \mathcal{D}$ for the space of $\mathcal{D}$-valued $H^2_d$-functions, and we use $M_z = (M_{z_1}, \dots, M_{z_d})$ to denote the tuple of multiplication by the coordinate functions. We consider $M_z$-invariant subspaces $\mathcal{M} \subseteq H^2_d(\mathcal{D})$. The fiber dimension of $\mathcal{M}$ is defined to be $\sup_{\lambda \in B_d} \dim \{ f(\lambda) : f \in \mathcal{M} \}$. We show that if $\mathcal{M}$ has finite positive fiber dimension $m$, then the essential Taylor spectrum of $M_z|\mathcal{M}$, $\sigma_e(M_z|\mathcal{M})$, equals $\partial B_d$ plus possibly a subset of the zero set of a nonzero bounded analytic function on $B_d$, and $\operatorname{ind}\,(M_z - \lambda)|\mathcal{M} = (-1)^d m$ for every $\lambda \in B_d \setminus \sigma_e(M_z|\mathcal{M})$. As a corollary we prove that if $T = (T_1, \dots, T_d)$ is a pure $d$-contraction of finite rank, then $\sigma_e(T) \cap B_d$ is contained in the zero set of a nonzero bounded analytic function and $(-1)^d \operatorname{ind}\,(T - \lambda) = \kappa(T)$ for all $\lambda \in B_d \setminus \sigma_e(T)$. Here $\kappa(T)$ denotes Arveson's curvature invariant. We will also show that for $d > 1$ there are such $d$-contractions with $\sigma_e(T) \cap B_d \ne \emptyset$. These results answer a question of Arveson, [11]. We also prove related results for the Hardy and Bergman spaces of the unit ball and unit polydisc of $\mathbb{C}^d$.

1. Introduction

Let $d \ge 1$, let $\Omega$ be a region in $\mathbb{C}^d$ with $0 \in \Omega$, and let $\mathcal{H}$ be a Hilbert space of analytic functions on $\Omega$. In this paper we will be particularly interested in the cases where $\mathcal{H}$ equals one of the usual Hardy or Bergman spaces of the ball, $B_d = \{ z \in \mathbb{C}^d : |z| < 1 \}$, or the polydisc, $\mathbb{D}^d = \{ z \in \mathbb{C}^d : |z_i| < 1 \text{ for } i = 1, \dots, d \}$, or where $\mathcal{H} = H^2_d$, the Hilbert space of analytic functions on $B_d$ determined by the reproducing kernel $k_w(z) = \frac{1}{1 - \langle z, w \rangle}$, where $\langle z, w \rangle = \sum_{i=1}^d z_i \overline{w}_i$. Associated with each such space $\mathcal{H}$ we have a multiplier algebra $\mathcal{M}(\mathcal{H})$ consisting of all analytic functions $\varphi$ on $\Omega$ such that $\varphi f \in \mathcal{H}$ for each $f \in \mathcal{H}$. It is easy to see that each multiplier gives rise to a bounded operator $M_\varphi : \mathcal{H} \to \mathcal{H}$, $f \mapsto \varphi f$, and the multiplier norm $\|\varphi\|_M$ is defined to be the operator norm of $M_\varphi$. One always has $\mathcal{M}(\mathcal{H}) \subseteq H^\infty(\Omega)$, the algebra of bounded analytic functions on $\Omega$, and it is well known that for the Hardy and Bergman spaces we have $\mathcal{M}(\mathcal{H}) = H^\infty(\Omega)$. It is also known that $\mathcal{M}(H^2_d) \subsetneq H^\infty(B_d)$ ([8]). In the following we will assume that $\mathcal{M}(\mathcal{H}) \subseteq \mathcal{H}$, and that for each $i = 1, \dots, d$ the $i$-th coordinate function $z_i$ is a multiplier of $\mathcal{H}$.

2000 Mathematics Subject Classification. Primary 47A13; Secondary 47A15. Work of the second and third authors was supported by the National Science Foundation, grants DMS-0070451 and DMS-0245384.

We will write $M_z$ for the $d$-tuple $(M_{z_1}, \dots, M_{z_d})$ of commuting operators on $\mathcal{H}$. A subspace $\mathcal{M}$ of $\mathcal{H}$ is called multiplier invariant if $\varphi f \in \mathcal{M}$ for each $f \in \mathcal{M}$ and $\varphi \in \mathcal{M}(\mathcal{H})$.

In this paper we will investigate the Fredholm spectrum and Fredholm index (in the sense of Taylor invertibility) of $M_z|\mathcal{M}$ for nonzero multiplier invariant subspaces of $\mathcal{H}$. Our work is motivated in part by recent work of Arveson regarding certain tuples of commuting Hilbert space operators, the so-called $d$-contractions ([11]). In particular we answer the question of whether or not all pure $d$-contractions of finite rank are Fredholm tuples. We also prove that off the essential spectrum the curvature invariant of Arveson is the same as $(-1)^d$ times the Fredholm index of the $d$-contraction. We will explain further details later. A second motivation has been that in the study of the invariant subspace structure of spaces of analytic functions of one complex variable the index of the invariant subspace has played an important role. In fact, we consider the current paper to be the beginning of an effort to obtain a similar understanding of the index of invariant subspaces for spaces of functions of several complex variables.

Assume for a moment that $d = 1$. Then the index of an invariant subspace $\mathcal{M}$ of $\mathcal{H}$ is defined to be the dimension of $\mathcal{M}/z\mathcal{M}$. If we assume that for each $\lambda \in \Omega$ the operator $M_z - \lambda$ is Fredholm on $\mathcal{H}$, then it follows that $(M_z - \lambda)|\mathcal{M}$ is bounded below for each multiplier invariant subspace $\mathcal{M}$ of $\mathcal{H}$. Thus $(M_z - \lambda)|\mathcal{M}$ is a semi-Fredholm operator, and the continuity properties of the Fredholm index imply the following stability of the index of an invariant subspace:
$$\operatorname{ind} \mathcal{M} = \dim \mathcal{M}/z\mathcal{M} = -\operatorname{ind} M_z|\mathcal{M} = -\operatorname{ind} (M_z - \lambda)|\mathcal{M} = \dim \mathcal{M}/(z - \lambda)\mathcal{M} \quad \text{for all } \lambda \in \Omega.$$
It is easy to see that $\{0\}$ is the only invariant subspace of index $0$. Invariant subspaces of index one always exist in abundance. One checks that singly generated invariant subspaces and nontrivial zero set invariant subspaces all have index $1$, and it is well known that all nontrivial invariant subspaces of the Hardy space $H^2(\mathbb{D})$ have index $1$. It was a real surprise when results of Apostol, Bercovici, Foias and Pearcy implied that the Bergman space $L^2_a = \{ f \in \operatorname{Hol}(\mathbb{D}) : \|f\|^2 = \int_{\mathbb{D}} |f|^2 \, \frac{dA}{\pi} < \infty \}$ contains invariant subspaces of $M_z$ of arbitrary index (see [6]). Recent work explains the existence of invariant subspaces of index greater than one in analytic terms (see [3], [23], and [24]). In fact, for many spaces of analytic functions on $\mathbb{D}$ the existence of invariant subspaces of high index is linked to the existence of functions in $\mathcal{H}$ that do not have nontangential limits on any set of positive measure (see [3] and [4]).

Now let $d \ge 1$ again, and let $\mathcal{M}$ be an invariant subspace of $\mathcal{H}$. By analogy with the situation in $d = 1$ one would like to consider $\dim \mathcal{M}/((z_1 - \lambda_1)\mathcal{M} + \dots + (z_d - \lambda_d)\mathcal{M})$ for $\lambda = (\lambda_1, \dots, \lambda_d) \in \Omega$. However, two problems immediately arise. The first is that even if we assume that $(z_1 - \lambda_1)\mathcal{H} + \dots + (z_d - \lambda_d)\mathcal{H}$ is closed in $\mathcal{H}$, it is not clear that the same is true for an arbitrary invariant subspace. The second is that even for very simple examples $\dim \mathcal{M}/((z_1 - \lambda_1)\mathcal{M} + \dots + (z_d - \lambda_d)\mathcal{M})$ will depend on the choice of the point $\lambda$. Indeed, if $\mathcal{H}$ is the Hardy or Bergman space of the ball or polydisc, or $\mathcal{H} = H^2_d$, and if $\mathcal{M} = \{ f \in \mathcal{H} : f(0) = 0 \}$, then one easily checks that $\dim \mathcal{M}/(z_1\mathcal{M} + \dots + z_d\mathcal{M}) = d$ while $\dim \mathcal{M}/((z_1 - \lambda_1)\mathcal{M} + \dots + (z_d - \lambda_d)\mathcal{M}) = 1$ for all $\lambda \in B_d \setminus \{0\}$. The second problem in this case is connected to the fact that all functions in $\mathcal{M}$ vanish at $\lambda = 0$.
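To make the base-point dependence in the last example concrete, here is a short check of ours (not part of the original text) for $d = 2$ and, say, $\mathcal{H} = H^2(\mathbb{D}^2)$, $\mathcal{M} = \{ f : f(0) = 0 \}$. A function $f = \sum_{|a| \ge 1} c_a z^a \in \mathcal{M}$ lies in $z_1\mathcal{M} + z_2\mathcal{M}$ exactly when it vanishes to second order at $0$: for such $f$ one may take $g_1 = z_1^{-1} \sum_{a_1 \ge 1,\, |a| \ge 2} c_a z^a$ and $g_2 = z_2^{-1} \sum_{a_1 = 0,\, a_2 \ge 2} c_a z^a$, both of which lie in $\mathcal{M}$. Hence
$$\mathcal{M}/(z_1\mathcal{M} + z_2\mathcal{M}) \cong \operatorname{span}\{ z_1, z_2 \} \quad \text{has dimension } 2 = d,$$
while for $\lambda \ne 0$ the quotient $\mathcal{M}/((z_1 - \lambda_1)\mathcal{M} + (z_2 - \lambda_2)\mathcal{M})$ is only one dimensional, as stated above.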

It is well known that one obtains a more stable definition of index if one uses the Fredholm index of the tuple $(M_z - \lambda)|\mathcal{M}$, where $\lambda \in B_d \setminus \sigma_e(M_z|\mathcal{M})$, to define the index of an invariant subspace. Here we set $M_z - \lambda = (M_{z_1} - \lambda_1 I, \dots, M_{z_d} - \lambda_d I)$, and we use $\sigma_e(T)$ to denote the essential Taylor spectrum of the operator tuple $T$; the Fredholm index of $T$ is defined to be the alternating sum of the Betti numbers of the Koszul complex associated with $T$; we shall give the full definitions in Section 2. However, it will still turn out that for $d > 1$ and for all the spaces $\mathcal{H}$ mentioned above there are invariant subspaces $\mathcal{M}$ of $\mathcal{H}$ such that $\sigma_e(M_z|\mathcal{M}) \cap B_d$ is nonempty. Thus in general the Fredholmness of $(M_z - \lambda)|\mathcal{M}$ may depend on the base point $\lambda \in \Omega$. Nevertheless, by analogy with the situation in $d = 1$ one might expect a generic situation where $(M_z - \lambda)|\mathcal{M}$ is a Fredholm tuple of index $(-1)^d$ for many $\lambda$ and all nonzero invariant subspaces $\mathcal{M}$ of spaces $\mathcal{H}$ where all functions have boundary values in a strong enough sense. The results of this paper support such an expectation.

For $f \in \mathcal{H}$ we write $Z(f) = \{ \lambda \in \Omega : f(\lambda) = 0 \}$ and $Z(\mathcal{M}) = \{ \lambda \in \Omega : f(\lambda) = 0 \text{ for all } f \in \mathcal{M} \}$. The first theorem that we state is a consequence of Theorem 4.3 or Corollary 4.6.

Theorem 1.1. Let $\mathcal{H}$ be the Hardy or Bergman space of the ball or polydisc of $\mathbb{C}^d$, or let $\mathcal{H} = H^2_d$. If an invariant subspace $\mathcal{M}$ of $\mathcal{H}$ contains a nonzero multiplier $\varphi$, then $\sigma_e(M_z|\mathcal{M}) \cap \Omega \subseteq Z(\varphi)$ and for every $\lambda \in \Omega \setminus \sigma_e(M_z|\mathcal{M})$ the tuple $(M_z - \lambda)|\mathcal{M}$ has Fredholm index $(-1)^d$. In fact, for all $\lambda \in \Omega \setminus Z(\varphi)$ we have $\dim \mathcal{M}/((z_1 - \lambda_1)\mathcal{M} + \dots + (z_d - \lambda_d)\mathcal{M}) = 1$.

It is known that every invariant subspace of $H^2_d$ is generated by multipliers (see [10] and [26]). Thus for $H^2_d$ the theorem implies that $(M_z - \lambda)|\mathcal{M}$ is a Fredholm tuple for every nonzero invariant subspace and for every $\lambda \in B_d \setminus Z(\mathcal{M})$. We will see that for $d > 1$ the exclusion of $Z(\mathcal{M})$ is necessary, i.e. for $d > 1$ there are invariant subspaces $\mathcal{M}$ of $H^2_d$ such that $\sigma_e(M_z|\mathcal{M}) \cap B_d = Z(\mathcal{M})$. For the usual Hardy space of the ball or polydisc ($d > 1$) it is known that there are invariant subspaces that do not contain any bounded functions, so those subspaces are not covered by this theorem (see page 71 of [32]). Furthermore, we will see in Section 4.3 that for $d = 2$ there are invariant subspaces $\mathcal{M}$ of the Hardy space such that $\sigma_e(M_z|\mathcal{M}) \cap B_d$ is strictly larger than $Z(\mathcal{M})$. On the other hand we mention that for $H^2(\mathbb{D}^2)$ Yang showed in [37] that if $\mathcal{M}$ is generated by a finite number of polynomials, then $0 \notin \sigma_e(M_z|\mathcal{M})$ and $\operatorname{ind}(M_z|\mathcal{M}) = 1$. Finally, for the Bergman space of any bounded region it follows from results of Bercovici that for all $d \ge 1$ there are invariant subspaces $\mathcal{M}$ such that $\sigma_e(M_z|\mathcal{M})$ contains $\Omega$ ([14]).

We will now briefly outline the part of the proof that shows $\dim \mathcal{M}/((z_1 - \lambda_1)\mathcal{M} + \dots + (z_d - \lambda_d)\mathcal{M}) = 1$. For $d = 1$ a similar argument was given in [30]; for the Hardy space of the bidisk see also Example 3(a) of [37]. Recall that if $B$ is a Banach space of analytic functions on $\Omega$, then one says that one can solve Gleason's problem for $B$ if whenever $g \in B$ and $\lambda \in \Omega$ there are functions $g_1, \dots, g_d \in B$ such that $g - g(\lambda) = \sum_{i=1}^d (z_i - \lambda_i) g_i$ (see [33]). We may assume that the multiplier $\varphi \in \mathcal{M}$ satisfies $\varphi(\lambda) = 1$.

Let $f \in \mathcal{M}$. Then
$$f = f(\lambda)\varphi + \varphi(f - f(\lambda)) - (\varphi - 1)f.$$
Now, if we assume that one can solve Gleason's problem for both the space $\mathcal{H}$ and the multiplier algebra $\mathcal{M}(\mathcal{H})$, then there are functions $f_1, \dots, f_d \in \mathcal{H}$ and multipliers $\varphi_1, \dots, \varphi_d \in \mathcal{M}(\mathcal{H})$ such that
$$f = f(\lambda)\varphi + \sum_{i=1}^d (z_i - \lambda_i)(\varphi f_i - \varphi_i f).$$
It is clear that for each $i$ the function $\varphi_i f$ is in the multiplier invariant subspace $\mathcal{M}$, and the same is true for $\varphi f_i$, if one assumes for example that the multipliers are dense in $\mathcal{H}$. Thus $\dim \mathcal{M}/((z_1 - \lambda_1)\mathcal{M} + \dots + (z_d - \lambda_d)\mathcal{M}) = 1$, provided all the assumptions are satisfied. In Section 4.1 we will show that one can solve Gleason's problem for the multiplier algebra of $H^2_d$. All the other assumptions are already known to be true for the spaces mentioned in the theorem. We will provide more background information in Section 2. The rest of Theorem 1.1 will follow by combining the above idea with a general lemma about the structure of the Koszul complex, see Lemma 2.1.

Our theorem generalizes to cover spaces of analytic functions with values in a separable Hilbert space. This is important for applications. If $\mathcal{D}$ is a separable complex Hilbert space, then we denote by $\mathcal{H} \otimes \mathcal{D}$ the space of $\mathcal{D}$-valued $\mathcal{H}$-functions. It is the set of all analytic functions $f : \Omega \to \mathcal{D}$ such that for each $x \in \mathcal{D}$ the function $f_x(\lambda) = \langle f(\lambda), x \rangle_{\mathcal{D}}$ defines a function in $\mathcal{H}$ and such that
$$\|f\|^2 = \sum_{n=1}^\infty \|f_{e_n}\|^2 < \infty$$
for some orthonormal basis $\{e_n\}_{n \ge 1}$ of $\mathcal{D}$. One shows that the above expression is independent of the choice of orthonormal basis. In particular, one has for $f \in \mathcal{H}$, $x \in \mathcal{D}$ that the function $fx : \lambda \mapsto f(\lambda)x$ is in $\mathcal{H} \otimes \mathcal{D}$ and $\|fx\| = \|f\|\,\|x\|_{\mathcal{D}}$. If $f \in \mathcal{H} \otimes \mathcal{D}$, $x \in \mathcal{D}$, and $\lambda \in B_d$, we have $\langle f(\lambda), x \rangle_{\mathcal{D}} = \langle f, k_\lambda x \rangle$, where we have used $k_\lambda \in \mathcal{H}$ to denote the reproducing kernel for $\mathcal{H}$ at $\lambda$. There is an obvious identification of the tensor product $\mathcal{H} \otimes \mathcal{D}$ with this space, where one identifies the elementary tensors $f \otimes x$ with the functions $fx$. Considering the definition of the norm in $\mathcal{H} \otimes \mathcal{D}$, one may also think of $\mathcal{H} \otimes \mathcal{D}$ as a direct sum of $\dim \mathcal{D}$ copies of the scalar-valued space $\mathcal{H}$. Each (scalar-valued) multiplier $\varphi \in \mathcal{M}(\mathcal{H})$ defines an operator on $\mathcal{H} \otimes \mathcal{D}$ of the same norm, and we shall also denote this operator by $M_\varphi$. We shall say that a subspace $\mathcal{M}$ of $\mathcal{H} \otimes \mathcal{D}$ is scalar multiplier invariant if $M_\varphi \mathcal{M} \subseteq \mathcal{M}$ for each $\varphi \in \mathcal{M}(\mathcal{H})$.

Let $\mathcal{M}$ be a scalar multiplier invariant subspace of $\mathcal{H} \otimes \mathcal{D}$. For $\lambda \in \Omega$ we write $\mathcal{M}_\lambda = \operatorname{clos} \{ f(\lambda) : f \in \mathcal{M} \}$, and we define the fiber dimension of $\mathcal{M}$ to be $\sup_{\lambda \in \Omega} \dim \mathcal{M}_\lambda$. We will be interested in invariant subspaces with finite fiber dimension $m$. In this case we write $Z(\mathcal{M}) = \{ \lambda \in \Omega : \dim \mathcal{M}_\lambda < m \}$. Note that for the scalar case this agrees with the earlier definition of $Z(\mathcal{M})$. Let $\lambda_0 \in \Omega \setminus Z(\mathcal{M})$. If $m < \infty$, then the set $\{ f(\lambda_0) : f \in \mathcal{M} \}$ is closed, and there are $f_1, \dots, f_m \in \mathcal{M}$ such that $f_1(\lambda_0), \dots, f_m(\lambda_0)$ forms an orthonormal basis for $\mathcal{M}_{\lambda_0}$. Then
$$g(\lambda) = \det \big( \langle f_i(\lambda), f_j(\lambda_0) \rangle \big)_{1 \le i,j \le m}$$
is an analytic function on $\Omega$, and it is a standard fact from linear algebra that $\dim \mathcal{M}_\lambda = m$ whenever $g(\lambda) \ne 0$.

Thus, the family of vector spaces $\{ \mathcal{M}_\lambda \}_{\lambda \in \Omega \setminus Z(\mathcal{M})}$ defines a vector bundle over $\Omega \setminus Z(\mathcal{M})$, and $Z(\mathcal{M})$ is the intersection of at most a countably infinite number of zero sets of analytic functions. In particular, $\Omega \setminus Z(\mathcal{M})$ is connected and it is dense in $\Omega$. We are now able to state our main theorem for the case that $\mathcal{H} = H^2_d$.

Theorem 1.2. Let $\mathcal{D}$ be a separable Hilbert space and let $\mathcal{M}$ be a nonzero scalar multiplier invariant subspace of $H^2_d(\mathcal{D})$ with finite fiber dimension $m$. Then
$$\partial B_d \subseteq \sigma_e(M_z|\mathcal{M}) \subseteq \partial B_d \cup Z(\mathcal{M}),$$
and for every $\lambda \in B_d \setminus \sigma_e(M_z|\mathcal{M})$ the tuple $(M_z - \lambda)|\mathcal{M}$ has Fredholm index $(-1)^d m$. In fact, for all $\lambda \in B_d \setminus Z(\mathcal{M})$ we have $\dim \mathcal{M}/((z_1 - \lambda_1)\mathcal{M} + \dots + (z_d - \lambda_d)\mathcal{M}) = m$.

If $\mathcal{N} \subseteq \mathcal{M}$ are two invariant subspaces of $H^2_d(\mathcal{D})$, then the fiber dimension of $\mathcal{N}$ is less than or equal to the fiber dimension of $\mathcal{M}$, so the theorem implies an inequality between $\operatorname{ind}(M_z - \lambda)|\mathcal{N}$ and $\operatorname{ind}(M_z - \lambda)|\mathcal{M}$ for $\lambda \in B_d \setminus (Z(\mathcal{M}) \cup Z(\mathcal{N}))$. Thus for $d = 1$ our theorem recaptures a well-known fact ([31], p. 11). Theorem 1.2 will be seen to be a consequence of Theorem 4.3.

As a corollary we obtain an answer to a question of Arveson regarding his curvature invariant for a $d$-contraction. Recall from [8] that a commuting tuple $T = (T_1, \dots, T_d)$ of operators on a Hilbert space $\mathcal{K}$ is called a $d$-contraction if
$$\|T_1 x_1 + \dots + T_d x_d\|^2 \le \|x_1\|^2 + \dots + \|x_d\|^2$$
for all $x_1, \dots, x_d \in \mathcal{K}$. This condition is equivalent to $\sum_{i=1}^d T_i T_i^* \le 1$ (see [10]). One then defines the defect operator $\Delta_T = (I - \sum_{i=1}^d T_i T_i^*)^{1/2}$, the defect space $\mathcal{D} = \operatorname{clos} \Delta_T \mathcal{K}$, and one says that $T$ has finite rank if $\mathcal{D}$ is finite dimensional. Furthermore, associated to each $d$-contraction is a completely positive map $\Psi : B(\mathcal{K}) \to B(\mathcal{K})$ defined by $\Psi(X) = \sum_{i=1}^d T_i X T_i^*$ (see e.g. [2] for a definition of completely positive). The $d$-contraction is called pure if $\lim_{n \to \infty} \Psi^n(I) = 0$ in the strong operator topology. In [8] it is shown that every pure $d$-contraction is the compression of $(M_z, H^2_d(\mathcal{D}))$ to the orthocomplement of some scalar multiplier invariant subspace of $H^2_d(\mathcal{D})$ (see also [1] and [2], Section 14.5). We have included a precise statement of that result in Section 4.2.

The curvature invariant $\kappa(T)$ of a pure $d$-contraction of finite rank was defined in [10]. First we need to define a $B(\mathcal{D})$-valued function on $B_d$ by
$$k(\lambda) = (1 - |\lambda|^2)\, \Delta_T\, (I - T(\lambda)^*)^{-1} (I - T(\lambda))^{-1}\, \Delta_T, \qquad \text{where } T(\lambda) = \sum_{i=1}^d \overline{\lambda}_i T_i.$$
Arveson shows that for $\sigma$-a.e. $z \in \partial B_d$ the nontangential limit of $k(\lambda)$ exists in the strong operator topology as $\lambda$ approaches $z$. Here we have used $\sigma$ to denote the rotationally invariant probability measure on $\partial B_d$. We call this limit $k(z)$ and define the curvature invariant of $T$ by
$$\kappa(T) = \int_{\partial B_d} \operatorname{trace} k(z)\, d\sigma(z).$$
It is clear that $0 \le \kappa(T) \le \dim \mathcal{D}$. In [21], Theorem 5.2, it was shown that $\kappa(T)$ is always an integer, in fact that
$$\kappa(T) = \inf_{\lambda \in B_d} \dim \bigcap_{i=1}^d \ker(T_i^* - \overline{\lambda}_i),$$
and that for $\sigma$-a.e. $z \in \partial B_d$ one has $\kappa(T) = \operatorname{trace} k(z)$.

Furthermore, if we write $K_\lambda = \bigcap_{i=1}^d \ker(T_i^* - \overline{\lambda}_i)$ and
$$E_T = \{ \lambda \in B_d : \dim K_\lambda > \kappa(T) = \inf_{z \in B_d} \dim K_z \},$$
then it follows from the proof of Theorem 5.2 and Lemma 3.1 of [21] that $E_T$ is contained in the zero set of a bounded analytic function. In Theorem 4.5 we will obtain the following new information about the value of $\kappa(T)$ along with some spectral information about $T$. We write $\sigma(T)$ for the Taylor spectrum (see Section 2) and $\sigma_p(T)$ for the set of $d$-tuples of complex conjugates of eigenvalues corresponding to the common eigenvectors of $T_1^*, \dots, T_d^*$.

Theorem 1.3. If $T$ is a pure $d$-contraction with finite rank, then $\sigma_e(T) \cap B_d \subseteq E_T$ and, for $\lambda \in B_d \setminus \sigma_e(T)$,
$$\kappa(T) = (-1)^d \operatorname{ind}(T - \lambda).$$
Furthermore, we have $\sigma(T) \cap B_d = \sigma_p(T)$. Thus, if $\kappa(T) = 0$, then $\sigma(T) \cap B_d = \sigma_p(T) = E_T$.

We note that this theorem implies that if $S$ and $T$ are two pure $d$-contractions of finite rank such that $S_i - T_i$ is a compact operator for each $i = 1, \dots, d$, then $\kappa(T) = \kappa(S)$, even though $T$ and $S$ may have different rank. We also mention that if each $T_i$ is essentially normal, then one can show that $\sigma_e(T) \cap B_d = \emptyset$; thus our theorem implies in this case that $\kappa(T) = (-1)^d \operatorname{ind}(T)$. However, we will see that there are examples of pure finite rank $d$-contractions that are not Fredholm (i.e. $0 \in \sigma_e(T)$). This last observation answers Problems 1 and 2 of [11] in the negative. Nevertheless, Theorem 1.3 is a close variant of a statement that was conjectured there, and in fact Arveson proved the equality $\kappa(T) = (-1)^d \operatorname{ind}(T)$ for certain special $d$-contractions in [11]. Also see Corollary 2.2, Theorem 4.3, Conjecture C and Problem D of [12]. We also mention that for $d = 1$ Theorem 1.3 was proved in [28].

Finally, for the vector-valued Hardy and Bergman spaces we obtain the following theorem (see Corollary 4.6).

Theorem 1.4. Let $\mathcal{H}$ denote the Hardy or Bergman space of the ball or polydisc, let $\mathcal{D}$ be a separable Hilbert space, and let $\mathcal{M}$ be an invariant subspace of $\mathcal{H} \otimes \mathcal{D}$ of finite fiber dimension $m$. If $\lambda \in \Omega$ and there are bounded functions $f_1, \dots, f_m \in \mathcal{M}$ such that the set $\{ f_1(\lambda), \dots, f_m(\lambda) \}$ is linearly independent, then the tuple $(M_z - \lambda)|\mathcal{M}$ is Fredholm with index $(-1)^d m$. In fact, for all such $\lambda$ we have $\dim \mathcal{M}/((z_1 - \lambda_1)\mathcal{M} + \dots + (z_d - \lambda_d)\mathcal{M}) = m$.

In Section 3 we will prove our main theorem, which will be valid for other spaces of analytic functions and which will imply the above theorems. Without going into further detail, we mention that for $d = 1$ it holds for certain spaces with so-called Pick kernels. Thus, the identification of the index of the invariant subspace with its fiber dimension makes it clear that whenever $\mathcal{M} \subseteq \mathcal{N}$ are two invariant subspaces, then $\operatorname{ind} \mathcal{M} \le \operatorname{ind} \mathcal{N}$. For the vector-valued Dirichlet space this has been observed by Fang, [20].

We would like to thank Shashikant Mulay for several discussions that helped clarify to us certain issues of an algebraic nature.
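As a point of orientation (our remark, not taken from the paper): the simplest pure finite-rank $d$-contraction is the $d$-shift $M_z$ on $H^2_d(\mathbb{C}^r)$ itself. Its defect space is the space of constant functions, so its rank is $r$, and Arveson's computation gives $\kappa(M_z) = r$. By Proposition 2.6 below, $\sigma_e(M_z) \cap B_d = \emptyset$ and the augmented Koszul complex is exact, so
$$(-1)^d \operatorname{ind}(M_z - \lambda) = \dim H^2_d(\mathbb{C}^r) \Big/ \sum_{i=1}^d (z_i - \lambda_i) H^2_d(\mathbb{C}^r) = r \qquad (\lambda \in B_d),$$
which is consistent with Theorem 1.3.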

2. Preliminaries and Notation

Let $T = (T_1, \dots, T_d)$ be a commuting tuple of operators on a Hilbert space $\mathcal{H}$. We will now define the Koszul complex of $T$; we will follow [11]. For more information of a general type on the Koszul complex and its relationship to invertible and Fredholm tuples, the reader is also referred to [16] and [34].

Let $\Lambda = \Lambda[e] = \Lambda_d[e]$ be the exterior algebra generated by the $d$ symbols $e_1, \dots, e_d$, along with the identity $e_0$ defined by $e_0 \xi = \xi$ for all $\xi$. Then $\Lambda$ is the algebra of forms in $e_1, \dots, e_d$ with complex coefficients, subject to the anticommutative property $e_i e_j + e_j e_i = 0$ ($1 \le i, j \le d$). In fact, we can make $\Lambda$ into a $2^d$-dimensional Hilbert space with orthonormal basis
$$\{ e_0 \} \cup \{ e_{i_1} \wedge \dots \wedge e_{i_k} : i_j \in \{1, \dots, d\},\ i_1 < i_2 < \dots < i_k \}.$$
For each $i = 0, 1, \dots, d$ let $E_i : \Lambda \to \Lambda$ be given by $E_i \xi = e_i \xi$; thus $E_0$ is the identity on $\Lambda$. For $i = 1, \dots, d$ the $E_i$ are called the creation operators, and they satisfy the anticommutation relations
$$E_i E_j + E_j E_i = 0 \quad \text{and} \quad E_i E_j^* + E_j^* E_i = \delta_{ij} E_0.$$
Let $\Lambda(\mathcal{H}) := \mathcal{H} \otimes_{\mathbb{C}} \Lambda$ and define $\partial_T : \Lambda(\mathcal{H}) \to \Lambda(\mathcal{H})$ by
$$\partial_T := \sum_{i=1}^d T_i \otimes E_i.$$
It follows easily from the anticommutation relations that $\partial_T^2 = 0$. Thus the Koszul complex of the tuple $T$ can be defined by
$$K(T) : \ 0 \to \Lambda^0(\mathcal{H}) \xrightarrow{\ \partial_{T,0}\ } \Lambda^1(\mathcal{H}) \xrightarrow{\ \partial_{T,1}\ } \cdots \xrightarrow{\ \partial_{T,d-1}\ } \Lambda^d(\mathcal{H}) \to 0,$$
where $\Lambda^p(\mathcal{H})$ is the collection of $p$-forms in $\Lambda(\mathcal{H})$ and $\partial_{T,p} := \partial_T|\Lambda^p(\mathcal{H})$. For purposes of notation we also define $\Lambda^{-1}(\mathcal{H}) = 0$ and $\partial_{T,-1}$ and $\partial_{T,d}$ to be the zero maps at the two ends of the complex. For each $p = 0, 1, \dots, d$ the cohomology vector space associated to the complex at the point $p$ is the vector space $(\ker \partial_{T,p})/(\operatorname{ran} \partial_{T,p-1})$. The tuple $T = (T_1, \dots, T_d)$ is said to be invertible if the Koszul complex is exact at every $p$. Likewise, $T$ is said to be a Fredholm tuple if the cohomology vector space is finite dimensional for all $p$. We note that this implies that the range of each $\partial_{T,p}$ must be closed for $T$ to be Fredholm. If $T$ is a Fredholm tuple, then the index of $T$ is
$$\operatorname{ind} T := \sum_{p=0}^d (-1)^p \dim \frac{\ker \partial_{T,p}}{\operatorname{ran} \partial_{T,p-1}}.$$
Finally, one defines the Taylor spectrum of $T$ to be $\sigma(T) = \{ \lambda \in \mathbb{C}^d : T - \lambda \text{ is not invertible} \}$ and the Taylor essential spectrum of $T$ by $\sigma_e(T) = \{ \lambda \in \mathbb{C}^d : T - \lambda \text{ is not Fredholm} \}$.
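To fix ideas, here is the case $d = 2$ written out (our illustration, using the notation just introduced). Identifying $\Lambda^0(\mathcal{H}) \cong \mathcal{H}$, $\Lambda^1(\mathcal{H}) \cong \mathcal{H} \oplus \mathcal{H}$, and $\Lambda^2(\mathcal{H}) \cong \mathcal{H}$, the Koszul complex of a commuting pair $T = (T_1, T_2)$ becomes
$$K(T) : \ 0 \to \mathcal{H} \xrightarrow{\ x \,\mapsto\, (T_1 x,\, T_2 x)\ } \mathcal{H} \oplus \mathcal{H} \xrightarrow{\ (x_1, x_2) \,\mapsto\, T_1 x_2 - T_2 x_1\ } \mathcal{H} \to 0,$$
so $T$ is Fredholm exactly when the joint kernel $\ker T_1 \cap \ker T_2$, the middle cohomology space, and $\mathcal{H}/(T_1\mathcal{H} + T_2\mathcal{H})$ are all finite dimensional, and then $\operatorname{ind} T$ is the alternating sum of these three dimensions.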

Using the anticommutation relations of the creation operators we can prove the following lemma, which will be needed in the proof of the main results.

Lemma 2.1. If $T = (T_1, \dots, T_d)$ is a commuting tuple of operators on $\mathcal{H}$, then for each $i = 1, \dots, d$
$$(T_i \otimes E_0)\, \partial_T = \partial_T\, (T_i \otimes E_0) = \partial_T\, (I \otimes E_i^*)\, \partial_T.$$

Proof. Since $E_k E_i^* = -E_i^* E_k + \delta_{ik} I$ we have
$$\partial_T (I \otimes E_i^*) = \sum_{k=1}^d T_k \otimes E_k E_i^* = (T_i \otimes E_0) - \sum_{k=1}^d T_k \otimes E_i^* E_k = (T_i \otimes E_0) - (I \otimes E_i^*)\, \partial_T.$$
So
$$\partial_T (I \otimes E_i^*)\, \partial_T = (T_i \otimes E_0)\, \partial_T - (I \otimes E_i^*)\, \partial_T^2 = (T_i \otimes E_0)\, \partial_T,$$
since $\partial_T^2 = 0$. Since $T$ is a commuting tuple, $T_i \otimes E_0$ commutes with $\partial_T$, and this implies the desired result.

If $\mathcal{H}$ is a Hilbert space of analytic functions on $\Omega$ such that for each $i = 1, \dots, d$ multiplication by the coordinate function defines a bounded linear operator on $\mathcal{H}$, then for each $\lambda \in \Omega$ we will be interested in the Koszul complex $K(M_z - \lambda)$ of the $d$-tuple $M_z - \lambda$. Our standard hypothesis on $\mathcal{H}$ will be that this complex is exact at every stage except the last one, where its cohomology is one dimensional. This can be restated as saying that the augmented complex
$$K(M_z - \lambda, \mathbb{C}) : \ 0 \to \Lambda^0(\mathcal{H}) \xrightarrow{\ \partial_0\ } \Lambda^1(\mathcal{H}) \xrightarrow{\ \partial_1\ } \cdots \xrightarrow{\ \partial_{d-1}\ } \Lambda^d(\mathcal{H}) \xrightarrow{\ \delta_\lambda\ } \mathbb{C} \to 0$$
is exact at every stage. Here we have written $\partial_k = \partial_{M_z - \lambda, k}$ for $k = 0, \dots, d-1$, and $\delta_\lambda$ is the evaluation map, $\delta_\lambda(f \otimes e_1 \wedge \dots \wedge e_d) = f(\lambda)$.

Similarly, if $\mathcal{H}$ is as above, $\mathcal{D}$ is a separable Hilbert space, and $\mathcal{M} \subseteq \mathcal{H} \otimes \mathcal{D}$ is a scalar multiplier invariant subspace, then we will be interested in the augmented complex
$$K((M_z - \lambda)|\mathcal{M}, \mathcal{M}_\lambda) : \ 0 \to \Lambda^0(\mathcal{M}) \xrightarrow{\ \partial_0\ } \cdots \xrightarrow{\ \partial_{d-1}\ } \Lambda^d(\mathcal{M}) \xrightarrow{\ \delta_\lambda\ } \mathcal{M}_\lambda \to 0,$$
where as above $\partial_k = \partial_{(M_z - \lambda)|\mathcal{M}, k}$ for $k = 0, \dots, d-1$, and $\delta_\lambda$ is the evaluation map, $\delta_\lambda(f \otimes e_1 \wedge \dots \wedge e_d) = f(\lambda)$, $f \in \mathcal{M}$.

The purpose of introducing the augmented complex is that it allows for a simple statement of our main results. We note that the statement made in the Introduction, that if $\lambda \in \Omega \setminus Z(\mathcal{M})$ then $\dim \mathcal{M}/((z_1 - \lambda_1)\mathcal{M} + \dots + (z_d - \lambda_d)\mathcal{M})$ equals the fiber dimension of $\mathcal{M}$, is equivalent to saying that the augmented Koszul complex is exact at the penultimate stage. In fact, we have the following lemma.

Lemma 2.2. If $\lambda \in \Omega$ and $\mathcal{M}$ is a scalar multiplier invariant subspace of $\mathcal{H} \otimes \mathcal{D}$ of finite fiber dimension, then $\dim \mathcal{M}/((z_1 - \lambda_1)\mathcal{M} + \dots + (z_d - \lambda_d)\mathcal{M}) = \dim \mathcal{M}_\lambda$ if and only if the augmented Koszul complex $K((M_z - \lambda)|\mathcal{M}, \mathcal{M}_\lambda)$ is exact at the penultimate stage. In particular, if the augmented complex $K((M_z - \lambda)|\mathcal{M}, \mathcal{M}_\lambda)$ is exact, then $\lambda \notin \sigma_e(M_z|\mathcal{M})$ and $\operatorname{ind}\,(M_z - \lambda)|\mathcal{M} = (-1)^d \dim \mathcal{M}_\lambda$, where $\dim \mathcal{M}_\lambda = \dim \mathcal{M}/((z_1 - \lambda_1)\mathcal{M} + \dots + (z_d - \lambda_d)\mathcal{M})$.

Proof. Let $k = \dim \mathcal{M}_\lambda$ and let $h_1, \dots, h_k \in \mathcal{M}$ be such that $\mathcal{M}_\lambda$ equals the linear span of $h_1(\lambda), \dots, h_k(\lambda)$. It is clear that the cosets of $h_1, \dots, h_k$ are linearly independent in $\mathcal{M}/((z_1 - \lambda_1)\mathcal{M} + \dots + (z_d - \lambda_d)\mathcal{M})$.

It follows that $\dim \mathcal{M}/((z_1 - \lambda_1)\mathcal{M} + \dots + (z_d - \lambda_d)\mathcal{M}) = \dim \mathcal{M}_\lambda$ if and only if every $f \in \mathcal{M}$ is of the form
$$f(z) = \sum_{i=1}^k a_i h_i(z) + \sum_{i=1}^d (z_i - \lambda_i) g_i(z)$$
for some $a_1, \dots, a_k \in \mathbb{C}$ and $g_1, \dots, g_d \in \mathcal{M}$.

Suppose $\dim \mathcal{M}/((z_1 - \lambda_1)\mathcal{M} + \dots + (z_d - \lambda_d)\mathcal{M}) = \dim \mathcal{M}_\lambda$, and let $f \otimes e_1 \wedge \dots \wedge e_d \in \ker \delta_\lambda$. Then $f(\lambda) = 0$ and the hypothesis implies that there are $g_1, \dots, g_d \in \mathcal{M}$ such that $f(z) = \sum_{i=1}^d (z_i - \lambda_i) g_i(z)$. We set $x = \sum_{i=1}^d (-1)^{i-1} g_i \otimes e_1 \wedge \dots \hat{e}_i \dots \wedge e_d \in \Lambda^{d-1}(\mathcal{M})$, where $\hat{e}_i$ is used to indicate that $e_i$ is to be omitted. Then $\partial_{d-1} x = f \otimes e_1 \wedge \dots \wedge e_d$, i.e. the augmented complex is exact at this stage.

Conversely, suppose that the augmented complex is exact at this stage, i.e. $\ker \delta_\lambda = \operatorname{ran} \partial_{d-1}$. Let $f \in \mathcal{M}$. Since $h_1(\lambda), \dots, h_k(\lambda)$ is a basis for $\mathcal{M}_\lambda$, there are $a_1, \dots, a_k \in \mathbb{C}$ such that $f(\lambda) = \sum_{i=1}^k a_i h_i(\lambda)$. Set $f_1 = f - \sum_{i=1}^k a_i h_i$; then $f_1 \otimes e_1 \wedge \dots \wedge e_d \in \ker \delta_\lambda$. Hence there is an $x \in \Lambda^{d-1}(\mathcal{M})$ such that $\partial_{d-1} x = f_1 \otimes e_1 \wedge \dots \wedge e_d$. The definition of $\Lambda^{d-1}(\mathcal{M})$ implies that there are $g_1, \dots, g_d \in \mathcal{M}$ such that $x = \sum_{i=1}^d g_i \otimes e_1 \wedge \dots \hat{e}_i \dots \wedge e_d$. It follows that $f_1(z) = \sum_{i=1}^d (-1)^{i-1} (z_i - \lambda_i) g_i(z)$, and this implies $f(z) = \sum_{i=1}^k a_i h_i(z) + \sum_{i=1}^d (-1)^{i-1} (z_i - \lambda_i) g_i(z)$.

We shall now briefly outline the proofs of a few further facts that will be needed. These results are certainly known, and it is our intent to point the reader in the direction of elementary references. Note that all our spaces are Hilbert or Banach spaces, so by the word isomorphism we will mean a bounded linear map with a bounded inverse. Obviously, this is equivalent to a vector space isomorphism if the spaces involved are finite dimensional. We say that the Koszul complexes of two tuples $T$ and $S$ of commuting operators on $\mathcal{H}$ are isomorphic if there is an invertible operator $U : \Lambda(\mathcal{H}) \to \Lambda(\mathcal{H})$ such that each $\Lambda^p(\mathcal{H})$ is a reducing subspace for $U$ and we have $\partial_S U = U \partial_T$. It is clear that if the two Koszul complexes $K(T)$ and $K(S)$ are isomorphic, then for each $p$ the cohomology vector spaces $\ker \partial_{T,p}/\operatorname{ran} \partial_{T,p-1}$ and $\ker \partial_{S,p}/\operatorname{ran} \partial_{S,p-1}$ are isomorphic.

Lemma 2.3. Let $T$ be a $d$-tuple of commuting operators on a Hilbert space $\mathcal{H}$, let $A = (a_{ij})$ be an invertible $d \times d$ matrix, and set $S = (S_1, \dots, S_d)$, $S_i = \sum_{j=1}^d a_{ij} T_j$. Then the Koszul complex of $T$ is isomorphic to the Koszul complex of $S$. Thus $T$ is a Fredholm tuple if and only if $S$ is a Fredholm tuple.

Proof. We have $\partial_S = \sum_{i=1}^d S_i \otimes E_i = \sum_{i,j=1}^d a_{ij} T_j \otimes E_i = \sum_{j=1}^d T_j \otimes D_j$, where $D_j = \sum_{i=1}^d a_{ij} E_i$, and it is a standard fact that there is an invertible linear map $L : \Lambda \to \Lambda$ such that each $\Lambda^p$ is reducing for $L$ and $L D_i = E_i L$ for $i = 1, \dots, d$ (see [25]). Thus for each $p$ the space $\Lambda^p(\mathcal{H})$ is reducing for $I \otimes L$ and $(I \otimes L)\, \partial_S = \partial_T\, (I \otimes L)$. The lemma follows.

Recall that for $\lambda \in B_d$ we have an automorphism of $B_d$ taking $\lambda$ to $0$ of the form
$$\varphi_\lambda(z) = \frac{\lambda - P_\lambda z - s_\lambda Q_\lambda z}{1 - \langle z, \lambda \rangle} = -\frac{A_\lambda (z - \lambda)}{1 - \langle z, \lambda \rangle},$$
where we have used $P_\lambda z = \frac{\langle z, \lambda \rangle}{|\lambda|^2} \lambda$ if $\lambda \ne 0$ and $P_0 = 0$, $Q_\lambda = I - P_\lambda$, $s_\lambda = (1 - |\lambda|^2)^{1/2}$, and $A_\lambda = P_\lambda + s_\lambda Q_\lambda$ (see [33], p. 25). Thus if $T$ is a $d$-tuple of commuting operators on a Hilbert space $\mathcal{H}$, and if $\lambda \in B_d$ is such that $I - \langle T, \lambda \rangle = I - \sum_{i=1}^d \overline{\lambda}_i T_i$ is invertible, then we can form the $d$-tuple $\varphi_\lambda(T)$. If we write $A_\lambda = (a_{ij}(\lambda))_{1 \le i,j \le d}$ with respect to the standard orthonormal basis of $\mathbb{C}^d$, then for $i = 1, \dots, d$ we have
$$(\varphi_\lambda(T))_i = -(I - \langle T, \lambda \rangle)^{-1} \sum_{j=1}^d a_{ij}(\lambda)\, (T_j - \lambda_j I).$$
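In one variable this is just the familiar Möbius transform (our aside, under the sign convention displayed above): for $d = 1$ one has $P_\lambda = 1$, $Q_\lambda = 0$, and $A_\lambda = 1$, so
$$\varphi_\lambda(z) = \frac{\lambda - z}{1 - \overline{\lambda} z}, \qquad \varphi_\lambda(T) = (I - \overline{\lambda} T)^{-1} (\lambda I - T),$$
the usual operator Möbius transform of a single operator $T$ for which $I - \overline{\lambda} T$ is invertible.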

Lemma 2.4. Let $T$ be a $d$-tuple of commuting operators on a Hilbert space $\mathcal{H}$ and let $\lambda \in B_d$ be such that $I - \langle T, \lambda \rangle$ is invertible. Then the Koszul complexes of $T - \lambda$ and $\varphi_\lambda(T)$ are isomorphic. Thus $T - \lambda$ is a Fredholm tuple if and only if $\varphi_\lambda(T)$ is a Fredholm tuple.

Proof. We note that $A_\lambda$ is invertible on $\mathbb{C}^d$. Hence it is easy to see that an isomorphism $K(T - \lambda) \to K(\varphi_\lambda(T))$ is given by $U = (I - \langle T, \lambda \rangle)^{-1} \otimes L_\lambda$, where $L_\lambda$ is the isomorphism from the proof of Lemma 2.3 applied with $T - \lambda$ and the matrix for $A_\lambda$.

In view of the hypothesis of the previous lemma, and for future reference, it is important to note that for any $d$-tuple $T$ of commuting operators on $\mathcal{H}$ and for any $\lambda \in B_d$ the one-dimensional spectrum of the operator $\langle T, \lambda \rangle$ is contained in the disc of radius $|\lambda|$ whenever the Taylor spectrum of $T$ is contained in $\operatorname{clos} B_d$. In order to verify this we recall the spectral radius formula from [27]: if for an operator $X$ we define the operator $\psi(X) = \sum_{i=1}^d T_i^* X T_i$ and use $\psi^n$ to denote the $n$-th iterate of $\psi$, then the (Taylor) spectral radius of $T$ equals $\lim_{n \to \infty} \|\psi^n(I)\|^{\frac{1}{2n}}$. If $x \in \mathcal{H}$, $\lambda \in B_d$, and $n > 0$, then we have
$$\|\langle T, \lambda \rangle^n x\|^2 = \Big\| \sum_{i_1, \dots, i_n = 1}^d \overline{\lambda}_{i_1} \cdots \overline{\lambda}_{i_n}\, T_{i_1} \cdots T_{i_n} x \Big\|^2 \le \sum_{i_1, \dots, i_n = 1}^d |\lambda_{i_1} \cdots \lambda_{i_n}|^2 \ \sum_{i_1, \dots, i_n = 1}^d \|T_{i_1} \cdots T_{i_n} x\|^2 = |\lambda|^{2n}\, \langle \psi^n(I) x, x \rangle.$$
We thus conclude that the spectral radius of $\langle T, \lambda \rangle$ is less than or equal to $|\lambda|$ times the spectral radius of the tuple $T$.

The spaces $H^2_d$ and the Hardy and Bergman spaces of the ball $B_d$ are members of a family of Hilbert spaces of analytic functions. For $\alpha > 0$ we let $K_\alpha$ be the space of analytic functions on $B_d$ with reproducing kernel
$$k_\lambda(z) = \frac{1}{(1 - \langle z, \lambda \rangle)^\alpha}.$$
Obviously $K_1 = H^2_d$, and it is also well known that $K_d = H^2(\partial B_d)$ is the Hardy space and $K_{d+1} = L^2_a(B_d)$ is the Bergman space of the ball (see [15], p. 23). We shall need some spectral information about these three spaces, and it will be convenient to treat all values of $\alpha > 0$ simultaneously. We start with a preliminary result.

Lemma 2.5. Let $\alpha > 0$ and $\mathcal{H} = K_\alpha$. Then for each $i = 1, \dots, d$ the self-commutator $M_{z_i}^* M_{z_i} - M_{z_i} M_{z_i}^*$ is compact (i.e. $M_{z_i}$ is essentially normal) and $\sum_{i=1}^d M_{z_i}^* M_{z_i} = I + K$ for some compact operator $K$. Furthermore, $\sigma(M_z) \subseteq \operatorname{clos} B_d$.

Proof. For the proof we need to recall multiindex notation. Let $j = (j_1, j_2, \dots, j_d)$ be a multiindex of nonnegative integers; then $|j| = j_1 + j_2 + \dots + j_d$, $j! = j_1! j_2! \cdots j_d!$, and for $\lambda = (\lambda_1, \lambda_2, \dots, \lambda_d) \in \mathbb{C}^d$, $\lambda^j = \lambda_1^{j_1} \lambda_2^{j_2} \cdots \lambda_d^{j_d}$.

The multinomial formula implies that for $z, \lambda \in B_d$ and $n \ge 0$
$$\langle z, \lambda \rangle^n = \sum_{|j| = n} \frac{n!}{j!}\, z^j \overline{\lambda}^j.$$
Thus if we write $k_\lambda(z) = \frac{1}{(1 - \langle z, \lambda \rangle)^\alpha} = \sum_{n=0}^\infty a_n \langle z, \lambda \rangle^n$, where $a_0 = 1$ and $a_n = \frac{\alpha(\alpha+1)\cdots(\alpha+n-1)}{n!}$ for $n \ge 1$, then
$$k_\lambda(z) = \sum_j a_{|j|} \frac{|j|!}{j!}\, z^j \overline{\lambda}^j,$$
where the sum is taken over all multiindices $j$ with nonnegative integer entries. Since $k_\lambda(z) = \langle k_\lambda, k_z \rangle$ it follows that the monomials in $K_\alpha$ are mutually orthogonal and
$$\|z^j\|^2 = \frac{j!}{a_{|j|}\, |j|!} = \frac{j!}{\alpha(\alpha+1)\cdots(\alpha+|j|-1)}.$$
Now for $1 \le i \le d$ let $S_i$ denote the self-commutator of $M_{z_i}$, i.e. $S_i = M_{z_i}^* M_{z_i} - M_{z_i} M_{z_i}^*$, and let $P_n$ denote the projection of $K_\alpha$ onto the subspace of all polynomials of total degree less than $n$. We will show that $\|S_i - P_n S_i P_n\| \to 0$ as $n \to \infty$. It is clear that $S_i$ is diagonalized by the monomials, so that $S_i z^j = c_{i,j} z^j$ for each multiindex $j$ and some $c_{i,j} \in \mathbb{R}$. Hence it will suffice to show that $\sup_{|j| \ge n} |c_{i,j}| \to 0$ as $n \to \infty$. We write $e_i$ for the multiindex with a $1$ in the $i$-th spot and $0$'s otherwise. Then for any multiindex $j$ and any $1 \le i \le d$ we have $M_{z_i}^* z^j = 0$ if $j_i = 0$ and $M_{z_i}^* z^j = \frac{\|z^j\|^2}{\|z^{j-e_i}\|^2}\, z^{j-e_i}$ otherwise. Hence if $j_i = 0$ we obtain
$$\langle S_i z^j, z^j \rangle = \frac{1}{\alpha + |j|}\, \|z^j\|^2,$$
while for $j_i > 0$ we compute
$$\langle S_i z^j, z^j \rangle = \Big( \frac{\|z^{j+e_i}\|^2}{\|z^j\|^2} - \frac{\|z^j\|^2}{\|z^{j-e_i}\|^2} \Big) \|z^j\|^2 = \frac{\alpha + |j| - j_i - 1}{(\alpha + |j|)(\alpha + |j| - 1)}\, \|z^j\|^2.$$
Thus, if $n > 1$ and $|j| = n$, then
$$\langle S_i z^j, z^j \rangle \le \frac{\alpha + 2n}{(\alpha + n)(\alpha + n - 1)}\, \|z^j\|^2 \le \frac{2}{\alpha + n - 1}\, \|z^j\|^2.$$
Hence, for $|j| \ge n$ we have $c_{i,j} = \frac{\langle S_i z^j, z^j \rangle}{\|z^j\|^2} \le \frac{2}{\alpha + n - 1} \to 0$ as $n \to \infty$. This implies that $S_i$ is compact and $M_{z_i}$ is essentially normal. Similarly, we compute
$$\sum_{i=1}^d \langle M_{z_i}^* M_{z_i} z^j, z^j \rangle = \sum_{i=1}^d \|z_i z^j\|^2 = \Big( 1 + \frac{d - \alpha}{\alpha + |j|} \Big) \|z^j\|^2.$$
This implies that $\sum_{i=1}^d M_{z_i}^* M_{z_i} - I$ is compact. Finally we show that $\sigma(M_z) \subseteq \operatorname{clos} B_d$, or equivalently that the spectral radius of $M_z$ is less than or equal to $1$. As above we set $\psi(X) = \sum_{i=1}^d M_{z_i}^* X M_{z_i}$, $X \in B(K_\alpha)$. By the above-mentioned spectral radius formula we must show that $\limsup_{n \to \infty} \|\psi^n(I)\|^{\frac{1}{2n}} \le 1$. It is easy to see that $\psi^n(I)$ is diagonalized by the monomials, and one calculates that for any multiindex $j$
$$\langle \psi^n(I) z^j, z^j \rangle = \sum_{i_1, \dots, i_n = 1}^d \|M_{z_{i_1}} \cdots M_{z_{i_n}} z^j\|^2 = \prod_{k=0}^{n-1} \Big( 1 + \frac{d - \alpha}{\alpha + |j| + k} \Big) \|z^j\|^2.$$

Thus for $\alpha \ge d$ we see that $\|\psi^n(I)\| \le 1$, and for $0 < \alpha \le d$ we have that $\|\psi^n(I)\| \le \prod_{k=0}^{n-1} \big( 1 + \frac{d - \alpha}{\alpha + k} \big)$. From this the result follows easily.

Because of the previous lemma one can apply a theorem of Curto ([16], Cor. 3.9) to see that for each $\alpha > 0$ the tuple $(M_z, K_\alpha)$ is a Fredholm tuple. Actually, much more is known about the Koszul complex of $M_z$.

Proposition 2.6. Let $\alpha > 0$ and $\mathcal{H} = K_\alpha$. Then $\sigma(M_z) = \operatorname{clos} B_d$, $\sigma_e(M_z) = \partial B_d$, and for each $\lambda \in B_d$ the augmented Koszul complex for $M_z - \lambda$ is exact.

Proof. We know from Lemma 2.5 that $\sigma_e(M_z) \subseteq \sigma(M_z) \subseteq \operatorname{clos} B_d$. Next we let $\lambda = 0$, and we proceed as in the proof of [22], Theorem 2.6. From the remark before the proposition we know that $M_z$ is a Fredholm tuple. Hence the operator $\partial_{M_z}$ has closed range. Observe that $\mathcal{H} = \mathcal{H}_0 \oplus \mathcal{H}_1 \oplus \cdots$, where $\mathcal{H}_n$ is the space of homogeneous polynomials of degree $n$. Thus, for each $p$ we get $\Lambda^p(\mathcal{H}) = \Lambda^p(\mathcal{H}_0) \oplus \Lambda^p(\mathcal{H}_1) \oplus \cdots$. The definition of $\partial = \partial_{M_z}$ implies that for each $p$ and $n$, $\partial_p$ takes $\Lambda^p(\mathcal{H}_n)$ into $\Lambda^{p+1}(\mathcal{H}_{n+1})$. Now let $0 \le p \le d - 1$ and $x \in \ker \partial_p$. Then $x = \sum_{n=0}^\infty x_n$ for $x_n \in \Lambda^p(\mathcal{H}_n)$, and it is clear that $x_n \in \ker \partial_p$ for each $n$. We must show that $x \in \operatorname{ran} \partial_{p-1}$. We already know that $\operatorname{ran} \partial_{p-1}$ is closed, so it is enough to show that each $x_n \in \operatorname{ran} \partial_{p-1}$. This is equivalent to exactness of the Koszul complex at stage $p$ for the polynomial ring $\mathbb{C}[z_1, \dots, z_d]$, and that is well known (see [18], Corollary 17.5). Similarly, the exactness of the augmented complex at the last stage is clear, because $1 \in \mathcal{H}$.

Next we claim that $K_\alpha$ is automorphism invariant, i.e. for each $\lambda \in B_d$ composition with $\varphi_\lambda$ defines a bounded invertible operator on $K_\alpha$. Fix $\lambda \in B_d$, and choose a branch of $f(u) = (1 - |\lambda|^2)^{-\alpha/2} (1 - u)^{\alpha}$ that is analytic for $u \in \mathbb{D}$. For $z \in B_d$ set $g(z) = f(\langle z, \lambda \rangle)$. It follows from Lemma 2.5 and the remarks preceding it that $\sigma(\langle M_z, \lambda \rangle) \subseteq \mathbb{D}$. Thus it is clear that the operator $f(\langle M_z, \lambda \rangle)$, defined by the Riesz-Dunford functional calculus, equals the multiplication operator $M_g$, i.e. $g$ is a multiplier of $K_\alpha$. Notice that the well-known transformation formula for ball automorphisms ([33], p. 26) shows that
$$k_{\varphi_\lambda(w)}(\varphi_\lambda(z)) = g(z)\, \overline{g(w)}\, k_w(z) \qquad \text{for all } z, w \in B_d.$$
Thus the linear transformation $T$ defined on the reproducing kernels by $T k_w = k_{\varphi_\lambda(w)}$ extends to a bounded operator of norm at most $\|M_g\|$. Hence $T^*$ is also bounded, and it is easy to verify that $T^*$ is the operator of composition with $\varphi_\lambda$. The automorphism invariance of $K_\alpha$ implies that the tuples $M_z$ and $\varphi_\lambda(M_z)$ are similar, and the result about the exactness of the augmented complex follows from Lemma 2.4. This implies that $\sigma(M_z) = \operatorname{clos} B_d$ and $\sigma_e(M_z) \cap B_d = \emptyset$. It also implies that the index of $M_z - \lambda$ is $(-1)^d$ for each $\lambda \in B_d$. Thus the continuity property of the index on the components of the complement of the essential spectrum implies that $\sigma_e(M_z) = \partial B_d$ ([16]).

We remark that a similar argument works for regions other than $B_d$, as long as the analytic automorphisms act transitively and one has the needed spectral information at the origin. Thus, for example, if $\mathcal{H}$ denotes the Hardy or Bergman space of the polydisc $\mathbb{D}^d$, then for each $\lambda \in \mathbb{D}^d$ the augmented Koszul complex for $M_z - \lambda$ is exact (see e.g. [16] or [17] and the references therein).
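For orientation (our aside, not part of the paper): when $d = 1$ and $\mathcal{H}$ is, say, $H^2_1 = H^2(\mathbb{D})$ or the Bergman space $L^2_a(\mathbb{D})$, the augmented complex of Proposition 2.6 reduces to
$$0 \to \mathcal{H} \xrightarrow{\ M_z - \lambda\ } \mathcal{H} \xrightarrow{\ \delta_\lambda\ } \mathbb{C} \to 0, \qquad \lambda \in \mathbb{D},$$
and its exactness amounts to the familiar facts that $M_z - \lambda$ is injective, that $(z - \lambda)\mathcal{H} = \{ f \in \mathcal{H} : f(\lambda) = 0 \}$, and that evaluation at $\lambda$ maps onto $\mathbb{C}$ (because $1 \in \mathcal{H}$).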

3. The Main Results

Now let $\mathcal{H}$ be a Hilbert space of complex-valued analytic functions on the open, connected and nonempty set $\Omega \subseteq \mathbb{C}^d$. We assume $1 \in \mathcal{H}$; then $\mathcal{M}(\mathcal{H}) \subseteq \mathcal{H}$. We use $k_\lambda$ to denote the reproducing kernel of $\mathcal{H}$. For $\lambda \in \Omega$ it is defined by the relation $f(\lambda) = \langle f, k_\lambda \rangle$ for every $f \in \mathcal{H}$.

In the scalar-valued version of our main theorem we assume that the invariant subspace $\mathcal{M}$ contains a multiplier $\varphi$ (see Theorem 1.1). For the vector-valued versions it will be convenient to use operator-valued multipliers. Let $\mathcal{D}$ and $\mathcal{E}$ be two separable Hilbert spaces, and let $\phi : B_d \to B(\mathcal{E}, \mathcal{D})$ be an operator-valued analytic function. For $\lambda \in B_d$ and $f \in \mathcal{H} \otimes \mathcal{E}$ we define $(\Phi f)(\lambda) = \phi(\lambda) f(\lambda)$. Then $\Phi f$ is a $\mathcal{D}$-valued analytic function. If $\Phi f \in \mathcal{H} \otimes \mathcal{D}$ for every $f \in \mathcal{H} \otimes \mathcal{E}$, then $\phi$ is called an operator-valued multiplier, and the closed graph theorem shows that the associated multiplication operator $\Phi : \mathcal{H} \otimes \mathcal{E} \to \mathcal{H} \otimes \mathcal{D}$ is bounded. One hypothesis on the invariant subspace $\mathcal{M}$ of $\mathcal{H} \otimes \mathcal{D}$ in our main theorem will be that there exists a separable Hilbert space $\mathcal{E}$ and a multiplication operator $\Phi \in B(\mathcal{H} \otimes \mathcal{E}, \mathcal{H} \otimes \mathcal{D})$ such that $\operatorname{ran} \Phi \subseteq \mathcal{M}$. We will then see that the augmented complex $K((M_z - \lambda)|\mathcal{M}, \mathcal{M}_\lambda)$ is exact at every $\lambda \in \Omega \setminus Z(\mathcal{M})$ with $\operatorname{ran} \phi(\lambda) = \mathcal{M}_\lambda$. In this connection we note that it is known that every scalar multiplier invariant subspace $\mathcal{M}$ of $H^2_d(\mathcal{D})$ is of the form $\mathcal{M} = \operatorname{ran} \Phi$ for some multiplication operator $\Phi$ ([10, 26]). Furthermore, it is easy to construct a multiplication operator $\Phi$ with $\operatorname{ran} \Phi \subseteq \mathcal{M}$ for a scalar multiplier invariant subspace $\mathcal{M}$ of the Hardy or Bergman space whenever $\mathcal{M}$ contains some bounded functions (see the proof of Theorem 4.6).

We will start by refining part of the argument of Lemma 3.1 of [21]. We need to fix some notation. For the rest of this section we let $\mathcal{M}$ be a scalar multiplier invariant subspace of $\mathcal{H} \otimes \mathcal{D}$ of finite fiber dimension $m$, i.e.
$$m = \sup_{\lambda \in \Omega} \dim \mathcal{M}_\lambda < \infty,$$
and we let $\Phi \in B(\mathcal{H} \otimes \mathcal{E}, \mathcal{H} \otimes \mathcal{D})$ be a multiplication operator with associated operator-valued analytic function $\phi : \Omega \to B(\mathcal{E}, \mathcal{D})$. We assume that $\operatorname{ran} \Phi \subseteq \mathcal{M}$ and that $\sup_{\lambda \in \Omega} \dim \operatorname{ran} \phi(\lambda) = m$. For $\lambda \in \Omega$ write $\mathcal{D}_\lambda = \operatorname{ran} \phi(\lambda) \subseteq \mathcal{D}$. Then since $\mathcal{D}_\lambda \subseteq \mathcal{M}_\lambda$ we have $\dim \mathcal{D}_\lambda \le \dim \mathcal{M}_\lambda$, and $\mathcal{D}_\lambda = \mathcal{M}_\lambda$ whenever $\dim \mathcal{D}_\lambda = m$.

We fix a $\lambda_0 \in \Omega$ with $\dim \mathcal{D}_{\lambda_0} = m$. Let $\{e_n\}_{n=1}^m$ be an orthonormal basis for $(\ker \phi(\lambda_0))^\perp \subseteq \mathcal{E}$, and let $\{d_k\}_{k=1}^m$ be an orthonormal basis for $\mathcal{D}_{\lambda_0} = \operatorname{ran} \phi(\lambda_0) \subseteq \mathcal{D}$. We define the $m \times m$ matrix
$$M(\lambda) = \big( \langle \phi(\lambda) e_n, d_k \rangle_{\mathcal{D}} \big)_{1 \le n, k \le m}$$
and the analytic function $\varphi(\lambda) = \det M(\lambda)$. The choice of $\lambda_0$ implies that $\varphi(\lambda_0) \ne 0$. It is easy to check that all entries of the matrix $M$ are multipliers of $\mathcal{H}$, and it follows that $\varphi$ is a multiplier of $\mathcal{H}$ also. Finally, we write $P_{\mathcal{D}_\lambda}$ for the orthogonal projection of $\mathcal{D}$ onto $\mathcal{D}_\lambda$.

Lemma 3.1. If $f \in \mathcal{M}$ is such that for each $\lambda \in \Omega$ we have $P_{\mathcal{D}_{\lambda_0}}(f(\lambda)) = 0$, then $f = 0$.

Proof. Let $f \in \mathcal{M}$ be as in the hypothesis. We will show that $\varphi(\lambda) f(\lambda) = 0$ for all $\lambda \in \Omega$. Since $\varphi \ne 0$ this will imply that $f = 0$. Let $\lambda \in \Omega$ be such that $\varphi(\lambda) \ne 0$. We must show $f(\lambda) = 0$.

Since $\varphi(\lambda) \ne 0$, the matrix $M(\lambda)$ has full rank and the set of vectors $\{ \phi(\lambda)e_1, \dots, \phi(\lambda)e_m \}$ is linearly independent in $\mathcal{D}_\lambda$. Thus $\dim \mathcal{D}_\lambda = m$ and $f(\lambda) \in \mathcal{M}_\lambda = \mathcal{D}_\lambda = \operatorname{ran} \phi(\lambda)$. Hence there must be $a_1(\lambda), a_2(\lambda), \dots, a_m(\lambda) \in \mathbb{C}$ such that $f(\lambda) = \sum_{n=1}^m a_n(\lambda)\, \phi(\lambda) e_n$. Now the hypothesis on $f$ implies that for each $k = 1, 2, \dots, m$ we have
$$0 = \langle f(\lambda), d_k \rangle = \sum_{n=1}^m a_n(\lambda)\, \langle \phi(\lambda) e_n, d_k \rangle.$$
Thus $(a_1(\lambda), a_2(\lambda), \dots, a_m(\lambda))\, M(\lambda) = 0$. But $M(\lambda)$ has full rank, hence $a_1(\lambda) = \dots = a_m(\lambda) = 0$, and it follows that $f(\lambda) = 0$.

Lemma 3.2. If $x \in \mathcal{D}_{\lambda_0}$, then there is a $g_x \in \operatorname{ran} \Phi \subseteq \mathcal{M} \subseteq \mathcal{H} \otimes \mathcal{D}$ such that
(1) $P_{\mathcal{D}_{\lambda_0}}(g_x(\lambda)) = \varphi(\lambda)\, x$ for all $\lambda \in \Omega$, and
(2) $h g_x \in \mathcal{M}$ for all $h \in \mathcal{H}$.

Proof. For $\lambda \in \Omega$, $1 \le i, j \le m$, and $m \ge 2$ we let $b_{i,j}(\lambda)$ equal $(-1)^{i+j}$ times the determinant of the $(m-1) \times (m-1)$ matrix obtained from $M(\lambda)$ by deleting the $j$-th row and the $i$-th column. If $m = 1$, we set $b_{1,1}(\lambda) = 1$. Then each $b_{i,j}$ is a multiplier of $\mathcal{H}$, and the matrix $M^+(\lambda) = (b_{i,j}(\lambda))_{1 \le i,j \le m}$ is the adjugate matrix of $M(\lambda)$. It satisfies
$$M^+(\lambda)\, M(\lambda) = M(\lambda)\, M^+(\lambda) = \varphi(\lambda)\, I_m,$$
where $I_m$ denotes the $m \times m$ identity matrix. Now let $x \in \mathcal{D}_{\lambda_0}$ and set
$$f_x(\lambda) = \sum_{i,j=1}^m b_{j,i}(\lambda)\, \langle x, d_j \rangle\, e_i$$
for $\lambda \in \Omega$. Since $\mathcal{M}(\mathcal{H}) \subseteq \mathcal{H}$, it is clear that $f_x \in \mathcal{H} \otimes \mathcal{E}$. Thus we may set $g_x = \Phi f_x \in \operatorname{ran} \Phi$, and we claim that $P_{\mathcal{D}_{\lambda_0}}(g_x(\lambda)) = \varphi(\lambda) x$ for all $\lambda \in \Omega$. Since $\{d_n\}$ is an orthonormal basis for $\mathcal{D}_{\lambda_0}$ we have
$$P_{\mathcal{D}_{\lambda_0}}(g_x(\lambda)) = \sum_{n=1}^m \langle g_x(\lambda), d_n \rangle\, d_n = \sum_{n=1}^m \langle \phi(\lambda) f_x(\lambda), d_n \rangle\, d_n = \sum_{n,i,j=1}^m b_{j,i}(\lambda)\, \langle x, d_j \rangle \langle \phi(\lambda) e_i, d_n \rangle\, d_n = \sum_{n,j=1}^m \langle x, d_j \rangle \Big( \sum_{i=1}^m b_{j,i}(\lambda) \langle \phi(\lambda) e_i, d_n \rangle \Big) d_n = \sum_{n,j=1}^m \langle x, d_j \rangle\, \varphi(\lambda)\, \delta_{n,j}\, d_n = \varphi(\lambda)\, x.$$
Thus $g_x$ satisfies (1). If $h \in \mathcal{H}$, then $h f_x = \sum_{i,j=1}^m h\, b_{j,i}\, \langle x, d_j \rangle\, e_i \in \mathcal{H} \otimes \mathcal{E}$. Thus, for each $\lambda \in \Omega$ we have $h(\lambda)(\Phi f_x)(\lambda) = h(\lambda)\, \phi(\lambda) f_x(\lambda) = \phi(\lambda)(h f_x)(\lambda)$, i.e. $h g_x = \Phi(h f_x) \in \operatorname{ran} \Phi \subseteq \mathcal{M}$, and (2) follows.
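For $m = 2$ the adjugate identity used above is just the familiar $2 \times 2$ cofactor formula (a small illustration of ours): writing $M(\lambda) = \begin{pmatrix} m_{11} & m_{12} \\ m_{21} & m_{22} \end{pmatrix}$ with $m_{nk} = \langle \phi(\lambda) e_n, d_k \rangle$, one has
$$M^+(\lambda) = \begin{pmatrix} m_{22} & -m_{12} \\ -m_{21} & m_{11} \end{pmatrix}, \qquad M^+(\lambda)\, M(\lambda) = (m_{11} m_{22} - m_{12} m_{21})\, I_2 = \varphi(\lambda)\, I_2,$$
and each entry of $M^+(\lambda)$ is a multiplier because the entries of $M(\lambda)$ are.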

If $\lambda \in \Omega$, then $\mathcal{D}_\lambda \subseteq \mathcal{D}$, so we can think of $\mathcal{H} \otimes \mathcal{D}_\lambda$ as a subspace of $\mathcal{H} \otimes \mathcal{D}$. We will write $P_\lambda$ for the orthogonal projection of $\mathcal{H} \otimes \mathcal{D}$ onto $\mathcal{H} \otimes \mathcal{D}_\lambda$. It satisfies $(P_\lambda f)(z) = P_{\mathcal{D}_\lambda}(f(z))$ for every $f \in \mathcal{H} \otimes \mathcal{D}$ and every $z \in \Omega$. Thus it is clear that $P_\lambda$ intertwines every scalar multiplication operator, and Lemma 3.1 says that $P_{\lambda_0}$ is one-to-one when restricted to $\mathcal{M}$. The following lemma explains the structure of the inverse of $P_{\lambda_0}|\mathcal{M}$.

Lemma 3.3. Let $\mathcal{H}$ be a Hilbert space of holomorphic functions on $\Omega \subseteq \mathbb{C}^d$ with $1 \in \mathcal{H}$, let $\mathcal{E}$ and $\mathcal{D}$ be separable Hilbert spaces, and let $\mathcal{M}$ be a scalar multiplier invariant subspace of $\mathcal{H} \otimes \mathcal{D}$ with finite fiber dimension $m$. If $\Phi \in B(\mathcal{H} \otimes \mathcal{E}, \mathcal{H} \otimes \mathcal{D})$ is a multiplication operator with associated operator-valued multiplier $\phi$ such that $\operatorname{ran} \Phi \subseteq \mathcal{M}$, and if $\lambda_0 \in \Omega \setminus Z(\mathcal{M})$ is such that $\operatorname{rank} \phi(\lambda_0) = m$, then there exists a $\varphi \in \mathcal{M}(\mathcal{H})$ with $\varphi(\lambda_0) = 1$ and there is a multiplication operator $\Psi \in B(\mathcal{H} \otimes \mathcal{D}_{\lambda_0}, \mathcal{H} \otimes \mathcal{D})$ with $\operatorname{ran} \Psi \subseteq \mathcal{M}$ such that
$$P_{\lambda_0} \Psi f = M_\varphi f \ \text{ for every } f \in \mathcal{H} \otimes \mathcal{D}_{\lambda_0}, \qquad \text{and} \qquad \Psi P_{\lambda_0} f = M_\varphi f \ \text{ for every } f \in \mathcal{M}.$$

Proof. We fix $\lambda_0 \in \Omega \setminus Z(\mathcal{M})$ such that $\operatorname{rank} \phi(\lambda_0) = m$, and we note that it is sufficient to construct a function $\varphi$ and an operator $\Psi$ that satisfy the conclusions of the lemma with the weaker condition $\varphi(\lambda_0) \ne 0$ instead of $\varphi(\lambda_0) = 1$. We will continue to use the notation that was introduced before Lemma 3.1 and in Lemma 3.2. In particular, we already have the function $\varphi \in \mathcal{M}(\mathcal{H})$ with $\varphi(\lambda_0) \ne 0$. In order to construct $\Psi$, let $g_1, g_2, \dots, g_m \in \mathcal{H} \otimes \mathcal{D}$ satisfy conditions (1) and (2) of Lemma 3.2 with $x = d_1, d_2, \dots, d_m$. For $\lambda \in \Omega$ we set $\psi(\lambda) = \sum_{n=1}^m g_n(\lambda) \otimes d_n$, and if $f \in \mathcal{H} \otimes \mathcal{D}_{\lambda_0}$, then
$$(\Psi f)(\lambda) = \psi(\lambda) f(\lambda) = \sum_{n=1}^m \langle f(\lambda), d_n \rangle\, g_n(\lambda).$$
Condition (2) of Lemma 3.2 implies that $\Psi f \in \mathcal{M}$ for each $f \in \mathcal{H} \otimes \mathcal{D}_{\lambda_0}$, and a simple argument with the closed graph theorem shows that $\Psi$ is bounded. Thus $\Psi$ is a multiplication operator with $\operatorname{ran} \Psi \subseteq \mathcal{M}$. If $f \in \mathcal{H} \otimes \mathcal{D}_{\lambda_0}$, then $f(\lambda) = \sum_{n=1}^m \langle f(\lambda), d_n \rangle\, d_n$. Hence the choice of the $g_n$'s and condition (1) of Lemma 3.2 imply that $P_{\lambda_0} \Psi f = \varphi f$. Finally, if $f \in \mathcal{M}$, then the function $h = \varphi f - \Psi P_{\lambda_0} f$ satisfies $h \in \mathcal{M}$ and
$$h(\lambda) = \varphi(\lambda) f(\lambda) - \sum_{n=1}^m \langle f(\lambda), d_n \rangle\, g_n(\lambda).$$
Hence it follows from condition (1) of Lemma 3.2 and the fact that $\{d_n\}$ forms an orthonormal basis for $\mathcal{D}_{\lambda_0}$ that $P_{\mathcal{D}_{\lambda_0}}(h(\lambda)) = 0$ for each $\lambda \in \Omega$. Thus the result follows from Lemma 3.1.

We are now able to prove the main theorem.

Theorem 3.4. Let $\mathcal{H}$ be a Hilbert space of holomorphic functions on $\Omega \subseteq \mathbb{C}^d$ with the properties that $1 \in \mathcal{H}$, the coordinate functions $z_i$ are multipliers, one can solve Gleason's problem in the multiplier algebra of $\mathcal{H}$, and $M_z - \lambda$ is a Fredholm tuple with exact augmented Koszul complex $K(M_z - \lambda, \mathbb{C})$ for all $\lambda \in \Omega$. Let $\mathcal{D}$ be a separable Hilbert space and let $\mathcal{M}$ be a nonzero scalar multiplier invariant subspace of $\mathcal{H} \otimes \mathcal{D}$ of finite fiber dimension $m$ such that there is a Hilbert space $\mathcal{E}$ and a bounded multiplication operator $\Phi \in B(\mathcal{H} \otimes \mathcal{E}, \mathcal{H} \otimes \mathcal{D})$ with associated operator-valued multiplier $\phi$ such that $\operatorname{ran} \Phi \subseteq \mathcal{M}$.

Then for every $\lambda \in \Omega \setminus Z(\mathcal{M})$ such that $\operatorname{rank} \phi(\lambda) = m$ the augmented complex
$$K((M_z - \lambda)|\mathcal{M}, \mathcal{M}_\lambda)$$
is exact. In particular, we have
$$\sigma_e(M_z|\mathcal{M}) \cap \Omega \subseteq \{ \lambda \in \Omega : \operatorname{rank} \phi(\lambda) < m \},$$
and the tuple $(M_z - \lambda)|\mathcal{M}$ is Fredholm with index $(-1)^d m$ for every $\lambda \in \Omega \setminus \sigma_e(M_z|\mathcal{M})$, whenever $\{ \lambda \in \Omega \setminus Z(\mathcal{M}) : \operatorname{rank} \phi(\lambda) = m \}$ is nonempty.

Proof. Note that whenever $\{ \lambda \in \Omega \setminus Z(\mathcal{M}) : \operatorname{rank} \phi(\lambda) = m \}$ is nonempty, then it must be connected and dense in $\Omega$. Hence the statement in the last sentence follows from the exactness of the augmented complex and Lemma 2.2.

Let $\lambda_0 = (\lambda_{01}, \dots, \lambda_{0d}) \in \Omega \setminus Z(\mathcal{M})$ be such that $\operatorname{rank} \phi(\lambda_0) = m$. We must show that the augmented Koszul complex $K((M_z - \lambda_0)|\mathcal{M}, \mathcal{M}_{\lambda_0})$ is exact. The definition and finite-dimensionality of $\mathcal{M}_{\lambda_0}$ imply that $\delta_{\lambda_0}$ is onto, so the complex is exact at the last stage. Before we prove exactness at the other stages we make some remarks. The hypothesis implies that $\mathcal{D}_{\lambda_0} = \mathcal{M}_{\lambda_0}$. Also note that since $\mathcal{H} \otimes \mathcal{D}_{\lambda_0}$ is isomorphic to a direct sum of $m$ copies of $\mathcal{H}$, the augmented Koszul complex $K((M_z - \lambda_0)|\mathcal{H} \otimes \mathcal{D}_{\lambda_0}, \mathcal{D}_{\lambda_0})$ is isomorphic to a direct sum of $m$ copies of the augmented complex $K(M_z - \lambda_0, \mathbb{C})$; hence it is exact. Since $\mathcal{M}$ and $\mathcal{H} \otimes \mathcal{D}_{\lambda_0}$ are $M_z$-invariant subspaces of $\mathcal{H} \otimes \mathcal{D}$, it follows that the boundary maps for the Koszul complexes $K((M_z - \lambda_0)|\mathcal{M})$ and $K((M_z - \lambda_0)|\mathcal{H} \otimes \mathcal{D}_{\lambda_0})$ are the restrictions to $\Lambda(\mathcal{M})$ and $\Lambda(\mathcal{H} \otimes \mathcal{D}_{\lambda_0})$ of the boundary map $\partial_{M_z - \lambda_0}$ for the complex $K((M_z - \lambda_0)|\mathcal{H} \otimes \mathcal{D})$. We will write $\partial$ in all cases. We will use the multiplier $\varphi$ and the operator $\Psi$ from Lemma 3.3. Write $P = P_{\lambda_0} \otimes E_0$. Then $P$ is the projection of $\Lambda(\mathcal{H} \otimes \mathcal{D})$ onto $\Lambda(\mathcal{H} \otimes \mathcal{D}_{\lambda_0})$. Since $P_{\lambda_0}$ is one-to-one on $\mathcal{M}$, it follows that $P$ is one-to-one on $\Lambda(\mathcal{M})$. Notice that $\partial P|\Lambda(\mathcal{M}) = P \partial|\Lambda(\mathcal{M})$ and $\partial (\Psi \otimes E_0)|\Lambda(\mathcal{H} \otimes \mathcal{D}_{\lambda_0}) = (\Psi \otimes E_0)\, \partial|\Lambda(\mathcal{H} \otimes \mathcal{D}_{\lambda_0})$.

The function $1 - \varphi$ is a multiplier that vanishes at $\lambda_0$. Since we assume that one can solve Gleason's problem in the multiplier algebra, there are $\varphi_1, \dots, \varphi_d \in \mathcal{M}(\mathcal{H})$ such that
$$(3) \qquad 1 - \varphi(z) = \sum_{i=1}^d (z_i - \lambda_{0i})\, \varphi_i(z).$$
To show exactness at $\Lambda^d(\mathcal{M})$ we let $f \in \mathcal{M}$ be such that $f(\lambda_0) = 0$ (i.e. $f \otimes e_1 \wedge \dots \wedge e_d \in \ker \delta_{\lambda_0}$). We must show that there are $g_1, \dots, g_d \in \mathcal{M}$ such that $f(z) = \sum_{i=1}^d (z_i - \lambda_{0i}) g_i(z)$ (see the proof of Lemma 2.2). By Lemma 3.3 and Equation (3) we have
$$f(z) = (1 - \varphi(z)) f(z) + \varphi(z) f(z) = \sum_{i=1}^d (z_i - \lambda_{0i})\, \varphi_i(z) f(z) + (\Psi P_{\lambda_0} f)(z).$$
The hypothesis $f(\lambda_0) = 0$ implies that $(P_{\lambda_0} f)(\lambda_0) = 0$. Thus, by the exactness of the augmented complex $K((M_z - \lambda_0)|\mathcal{H} \otimes \mathcal{D}_{\lambda_0}, \mathcal{D}_{\lambda_0})$ there are $h_1, \dots, h_d \in \mathcal{H} \otimes \mathcal{D}_{\lambda_0}$ such that $(P_{\lambda_0} f)(z) = \sum_{i=1}^d (z_i - \lambda_{0i}) h_i(z)$.

Then $(\Psi P_{\lambda_0} f)(z) = \sum_{i=1}^d (z_i - \lambda_{0i})(\Psi h_i)(z)$. It follows that
$$f(z) = \sum_{i=1}^d (z_i - \lambda_{0i}) \big( \varphi_i(z) f(z) + (\Psi h_i)(z) \big),$$
and that for each $i$ the function $\varphi_i f + \Psi h_i \in \mathcal{M}$. This shows that the augmented Koszul complex is exact at $\Lambda^d(\mathcal{M})$. Exactness at $\Lambda^0(\mathcal{M})$ is obvious.

To finish the proof let $0 < k \le d - 1$ and let $y \in \Lambda^k(\mathcal{M})$ be such that $\partial_k y = 0$. We must show that there is an $x \in \Lambda^{k-1}(\mathcal{M})$ such that $\partial_{k-1} x = y$. We start by letting $y' = P y$. Then $y' \in \Lambda^k(\mathcal{H} \otimes \mathcal{D}_{\lambda_0})$ and $\partial_k y' = \partial_k P y = P \partial_k y = 0$. By the exactness at $\Lambda^k(\mathcal{H} \otimes \mathcal{D}_{\lambda_0})$ there is an $x' \in \Lambda^{k-1}(\mathcal{H} \otimes \mathcal{D}_{\lambda_0})$ such that $\partial_{k-1} x' = y'$. Now set
$$x = \sum_{i=1}^d (M_{\varphi_i} \otimes E_i^*)\, y + (\Psi \otimes E_0)\, x'.$$
It is clear that $x \in \Lambda^{k-1}(\mathcal{M})$. Since $P$ is one-to-one on $\Lambda(\mathcal{M})$, it suffices to show that $P \partial_{k-1} x = P y$. By Lemma 3.3 we have
$$P \partial_{k-1} x = \partial_{k-1} P x = \sum_{i=1}^d \partial_{k-1} (M_{\varphi_i} \otimes E_0)(I \otimes E_i^*) P y + \partial_{k-1} (P_{\lambda_0} \otimes E_0)(\Psi \otimes E_0) x' = \sum_{i=1}^d (M_{\varphi_i} \otimes E_0)\, \partial_{k-1} (I \otimes E_i^*) P y + \partial_{k-1} (M_\varphi \otimes E_0) x'.$$
Plugging in $\partial_{k-1} x'$ for $P y$ and using Lemma 2.1 we get
$$\partial_{k-1} (I \otimes E_i^*) P y = \partial_{k-1} (I \otimes E_i^*)\, \partial_{k-1} x' = \partial_{k-1} \big( (M_{z_i} - \lambda_{0i}) \otimes E_0 \big) x'.$$
Hence
$$P \partial_{k-1} x = \sum_{i=1}^d (M_{\varphi_i} \otimes E_0)\, \partial_{k-1} \big( (M_{z_i} - \lambda_{0i}) \otimes E_0 \big) x' + \partial_{k-1} (M_\varphi \otimes E_0) x' = \partial_{k-1} \big( (I - M_\varphi) \otimes E_0 \big) x' + \partial_{k-1} (M_\varphi \otimes E_0) x' = \partial_{k-1} x' = y' = P y,$$
where we have used Equation (3).

Corollary 3.5. Let $\mathcal{H}$ be a Hilbert space of holomorphic functions on $\Omega \subseteq \mathbb{C}^d$ with the properties that $1 \in \mathcal{H}$, the coordinate functions $z_i$ are multipliers, one can solve Gleason's problem in the multiplier algebra of $\mathcal{H}$, and $M_z - \lambda$ is a Fredholm tuple with exact augmented Koszul complex $K(M_z - \lambda, \mathbb{C})$ for all $\lambda \in \Omega$. Let $\mathcal{D}$ be a finite dimensional Hilbert space and let $\mathcal{M}$ be a nonzero scalar multiplier invariant subspace of $\mathcal{H} \otimes \mathcal{D}$ of finite fiber dimension $m$ such that there is a Hilbert space $\mathcal{E}$ and a bounded multiplication operator $\Phi \in B(\mathcal{H} \otimes \mathcal{E}, \mathcal{H} \otimes \mathcal{D})$ with associated operator-valued multiplier $\phi$ such that $\operatorname{ran} \Phi \subseteq \mathcal{M}$. Let $S := M_z|\mathcal{M}$ and $T := P_{\mathcal{M}^\perp} M_z|\mathcal{M}^\perp$.

Then if $\lambda \in \Omega \setminus Z(\mathcal{M})$ and $\mathcal{M}_\lambda = \operatorname{ran} \phi(\lambda)$, the tuple $T - \lambda$ is Fredholm with index $(-1)^d (\dim \mathcal{D} - m)$ and
$$0 \to K(T - \lambda) \xrightarrow{\ \delta_\lambda'\ } \mathcal{M}_\lambda^\perp \to 0$$
is an exact complex, where $\delta_\lambda' : \Lambda^d(\mathcal{M}^\perp) \to \mathcal{M}_\lambda^\perp$ is defined by $\delta_\lambda'(f \otimes e_1 \wedge \dots \wedge e_d) = P_{\mathcal{M}_\lambda^\perp} f(\lambda)$. If $\lambda \in \Omega \cap Z(\mathcal{M})$, then $\dim \bigcap_{i=1}^d \ker(T_i^* - \overline{\lambda}_i) > \dim \mathcal{D} - m$. In particular it follows that $Z(\mathcal{M}) \subseteq \sigma_p(T)$.

Proof. Let $\lambda \in \Omega$. We must first show that $0 \to K(T - \lambda) \xrightarrow{\delta_\lambda'} \mathcal{M}_\lambda^\perp \to 0$ is a complex. Since $K(T - \lambda)$ is a Koszul complex it suffices to show that the range of the last boundary map of the Koszul complex is contained in the kernel of $\delta_\lambda'$. If $f_1, \dots, f_d \in \mathcal{M}^\perp$, then
$$\delta_\lambda'\, \partial_{d-1} \Big( \sum_{i=1}^d (-1)^{i+1} f_i \otimes e_1 \wedge \dots \hat{e}_i \dots \wedge e_d \Big) = \delta_\lambda' \Big( \Big( \sum_{i=1}^d (T_i - \lambda_i) f_i \Big) \otimes e_1 \wedge \dots \wedge e_d \Big) = P_{\mathcal{M}_\lambda^\perp} \Big( \Big( \sum_{i=1}^d P_{\mathcal{M}^\perp}(M_{z_i} - \lambda_i) f_i \Big)(\lambda) \Big) = P_{\mathcal{M}_\lambda^\perp} \Big( \Big( \sum_{i=1}^d (I - P_{\mathcal{M}})(M_{z_i} - \lambda_i) f_i \Big)(\lambda) \Big) = P_{\mathcal{M}_\lambda^\perp} \Big( \Big( \sum_{i=1}^d (M_{z_i} - \lambda_i) f_i \Big)(\lambda) \Big) - P_{\mathcal{M}_\lambda^\perp} \Big( \Big( P_{\mathcal{M}} \sum_{i=1}^d (M_{z_i} - \lambda_i) f_i \Big)(\lambda) \Big) = 0,$$
since $\big( \sum_{i=1}^d (M_{z_i} - \lambda_i) f_i \big)(\lambda) = \sum_{i=1}^d (\lambda_i - \lambda_i) f_i(\lambda) = 0$ and $\big( P_{\mathcal{M}} \sum_{i=1}^d (M_{z_i} - \lambda_i) f_i \big)(\lambda) \in \mathcal{M}_\lambda$. Therefore $0 \to K(T - \lambda) \xrightarrow{\delta_\lambda'} \mathcal{M}_\lambda^\perp \to 0$ is a complex, which we will denote by $K(T - \lambda, \mathcal{M}_\lambda^\perp)$.

For $p = 0, 1, \dots, d$ let $\iota_p : \Lambda^p(\mathcal{M}) \to \Lambda^p(\mathcal{H} \otimes \mathcal{D})$ and $\iota_{d+1} : \mathcal{M}_\lambda \to \mathcal{D}$ be the natural inclusion maps. Similarly, let $\pi_p : \Lambda^p(\mathcal{H} \otimes \mathcal{D}) \to \Lambda^p(\mathcal{M}^\perp)$ and $\pi_{d+1} : \mathcal{D} \to \mathcal{M}_\lambda^\perp$ be the natural projections. With these definitions of $\iota$ and $\pi$ one can easily check that
$$0 \to K(S - \lambda, \mathcal{M}_\lambda) \xrightarrow{\ \iota\ } K(M_z - \lambda, \mathcal{D}) \xrightarrow{\ \pi\ } K(T - \lambda, \mathcal{M}_\lambda^\perp) \to 0$$
is a short exact sequence of Hilbert space complexes. Therefore, by the Fundamental Theorem of Homological Algebra ([25], Theorem 3.3), there exists an induced long exact sequence of cohomology spaces. The argument at the beginning of the proof of Theorem 3.4 shows that $K(M_z - \lambda, \mathcal{D})$ is exact. This means all of the corresponding cohomology spaces of this complex are $\{0\}$.

Now assume that $\lambda \in \Omega \setminus Z(\mathcal{M})$ and $\mathcal{M}_\lambda = \operatorname{ran} \phi(\lambda)$. From Theorem 3.4 we know that $K(S - \lambda, \mathcal{M}_\lambda)$ is exact, hence its corresponding cohomology spaces are $\{0\}$. So we have that for each $p = 0, 1, \dots, d, d+1$ the sequence
$$0 \to \ker \partial_p / \operatorname{ran} \partial_{p-1} \to 0$$
is part of a long exact sequence. Therefore these cohomology spaces are also all equal to $\{0\}$. This means that $K(T - \lambda, \mathcal{M}_\lambda^\perp)$ is exact, and since $\mathcal{M}_\lambda^\perp$ is finite dimensional, it follows that $T - \lambda$ is Fredholm and $\operatorname{ind}(T - \lambda) = (-1)^d(\dim \mathcal{D} - m)$.

Finally, we assume $\lambda \in \Omega \cap Z(\mathcal{M})$. Then $\dim \mathcal{M}_\lambda < m$, hence $\dim \mathcal{M}_\lambda^\perp > \dim \mathcal{D} - m$. The statement follows, because
$$\mathcal{M}_\lambda^\perp = \{ x \in \mathcal{D} : \langle f(\lambda), x \rangle = 0 \ \text{for all } f \in \mathcal{M} \} = \{ x \in \mathcal{D} : \langle f, k_\lambda x \rangle = 0 \ \text{for all } f \in \mathcal{M} \} = \{ x \in \mathcal{D} : k_\lambda x \in \mathcal{M}^\perp \},$$
and this is isomorphic to $\{ k_\lambda x : k_\lambda x \in \mathcal{M}^\perp \} = \bigcap_{i=1}^d \ker(T_i^* - \overline{\lambda}_i)$.

4. Applications

4.1. The space $H^2_d(\mathcal{D})$. As in the Introduction we let $\mathcal{H} = H^2_d$ be the Hilbert space of analytic functions on the unit ball of $\mathbb{C}^d$ defined by the kernel $k_\lambda(z) = \frac{1}{1 - \langle z, \lambda \rangle}$. If $\mathcal{D}$ is a separable Hilbert space, then we will write $H^2_d(\mathcal{D}) = \mathcal{H} \otimes \mathcal{D}$ for the space of $\mathcal{D}$-valued $H^2_d$-functions. We refer the reader to [8], [10], [21], and [26] for general information about this space. In particular, the polynomials are dense in $H^2_d$ and each coordinate function $z_i$ is a multiplier. The tuple $M_z = (M_{z_1}, \dots, M_{z_d})$ on $H^2_d(\mathcal{D})$ is called the $d$-shift of multiplicity $\dim \mathcal{D}$. We note that each subspace that is invariant for $M_z$ is in fact a scalar multiplier invariant subspace of $H^2_d(\mathcal{D})$ (see [21], Lemma 4.1). It was shown in [22] that the augmented Koszul complex for $(M_z, H^2_d)$ is exact, and in Proposition 2.6 we have used that argument to show that the same is true for $(M_z - \lambda, H^2_d)$ for each $\lambda \in B_d$. Furthermore, Arveson showed that for every scalar multiplier invariant subspace $\mathcal{M}$ of $H^2_d(\mathcal{D})$ there exists a Hilbert space $\mathcal{E}$ and a bounded multiplier $\Phi \in B(H^2_d(\mathcal{E}), H^2_d(\mathcal{D}))$ such that $\operatorname{ran} \Phi = \mathcal{M}$ (see also [26]). Therefore, once we show that Gleason's problem for the multiplier algebra of $H^2_d$ can be solved, we will have verified all the hypotheses of Theorem 3.4 for $\mathcal{H} = H^2_d$, $\Omega = B_d$, and $\mathcal{M}$ any nonzero $M_z$-invariant subspace of $H^2_d(\mathcal{D})$.

We shall use the following multivariable version of Leech's Theorem (see [31], page 107). It is well known that such a result follows from a commutant lifting theorem, which is available for $H^2_d$. We refer the reader to [2], [5], [13], and [19].

Theorem 4.1 ([2], Theorem 3.57). Let $\mathcal{E}$, $\mathcal{F}$ and $\mathcal{G}$ be complex Hilbert spaces and let $S \subseteq B_d$ be arbitrary. Suppose that $\alpha : S \to B(\mathcal{F}, \mathcal{G})$ and $\beta : S \to B(\mathcal{E}, \mathcal{G})$ are given operator-valued functions. Then there is a multiplier $\psi : B_d \to B(\mathcal{E}, \mathcal{F})$ with associated multiplication operator $\Psi$ such that $\|\Psi\|_{B(H^2_d(\mathcal{E}), H^2_d(\mathcal{F}))} \le 1$ and
$$\alpha(z)\, \psi(z) = \beta(z) \qquad (z \in S)$$
if and only if the mapping $K_{\alpha,\beta} : S \times S \to B(\mathcal{G})$,
$$K_{\alpha,\beta}(z, w) = \frac{\alpha(z)\alpha(w)^* - \beta(z)\beta(w)^*}{1 - \langle z, w \rangle}$$