A Short Course on Frame Theory

Veniamin I. Morgenshtern and Helmut Bölcskei

ETH Zurich, 8092 Zurich, Switzerland
E-mail: {vmorgens, boelcskei}@nari.ee.ethz.ch

April 2, 2011

Hilbert spaces [1, Def. 3.1-1] and the associated concept of orthonormal bases are of fundamental importance in signal processing, communications, control, and information theory. However, linear independence and orthonormality of the basis elements impose constraints that often make it difficult to have the basis elements satisfy additional desirable properties. This calls for a theory of signal decompositions that is flexible enough to accommodate decompositions into possibly nonorthogonal and redundant signal sets. The theory of frames provides such a tool.

This chapter is an introduction to the theory of frames, which was developed by Duffin and Schaeffer [2] and popularized mostly through [3-6]. Meanwhile frame theory, in particular the aspect of redundancy in signal expansions, has found numerous applications such as, e.g., denoising [7, 8], code division multiple access (CDMA) [9], orthogonal frequency division multiplexing (OFDM) systems [10], coding theory [11, 12], quantum information theory [13], analog-to-digital (A/D) converters [14-16], and compressive sensing [17-19]. A more extensive list of relevant references can be found in [20]. For a comprehensive treatment of frame theory we refer to the excellent textbook [21].

1 Examples of Signal Expansions

We start by considering some simple motivating examples.

Example 1.1 (Orthonormal basis in R^2). Consider the orthonormal basis (ONB)

e_1 = [1 0]^T,  e_2 = [0 1]^T

in R^2 (see Figure 1). We can represent every signal x ∈ R^2 as the following linear combination of the basis vectors e_1 and e_2:

x = ⟨x, e_1⟩ e_1 + ⟨x, e_2⟩ e_2.  (1)

To rewrite (1) in vector-matrix notation, we start by defining the vector of expansion coefficients as

c = [c_1; c_2] = [⟨x, e_1⟩; ⟨x, e_2⟩] = [e_1^T; e_2^T] x = [1 0; 0 1] x

(here and below, semicolons separate the rows of a matrix). It is convenient to define the matrix

T ≜ [e_1^T; e_2^T] = [1 0; 0 1].

Figure 1: Orthonormal basis in R^2.

Henceforth we call T the analysis matrix; it multiplies the signal x to produce the expansion coefficients c = Tx.

Following (1), we can reconstruct the signal x from the coefficient vector c according to

x = T^T c = [e_1 e_2] c = [e_1 e_2] [⟨x, e_1⟩; ⟨x, e_2⟩] = ⟨x, e_1⟩ e_1 + ⟨x, e_2⟩ e_2.  (2)

We call

T^T = [e_1 e_2] = [1 0; 0 1]  (3)

the synthesis matrix; it multiplies the coefficient vector c to recover the signal x. It follows from (2) that (1) is equivalent to

x = T^T T x = [1 0; 0 1][1 0; 0 1] x.  (4)

The introduction of the analysis and the synthesis matrix in the example above may seem artificial and may appear as complicating matters unnecessarily. After all, both T and T^T are equal to the identity matrix in this example. We will, however, see shortly that this notation paves the way to developing a unified framework for nonorthogonal and redundant signal expansions. Let us now look at a somewhat more interesting example.

Example 1.2 (Biorthonormal bases in R^2). Consider two noncollinear unit norm vectors in R^2. For concreteness, take (see Figure 2)

e_1 = [1 0]^T,  e_2 = (1/√2)[1 1]^T.

For an arbitrary signal x ∈ R^2, we can compute the expansion coefficients

c_1 ≜ ⟨x, e_1⟩,  c_2 ≜ ⟨x, e_2⟩.

As in Example 1.1 above, we stack the expansion coefficients into a vector so that

c = [c_1; c_2] = [⟨x, e_1⟩; ⟨x, e_2⟩] = [e_1^T; e_2^T] x = [1 0; 1/√2 1/√2] x.  (5)

Figure 2: Biorthonormal bases in R^2.

Analogously to Example 1.1, we can define the analysis matrix

T ≜ [e_1^T; e_2^T] = [1 0; 1/√2 1/√2]

and rewrite (5) as c = Tx.

Now, obviously, the vectors e_1 and e_2 are not orthonormal (or, equivalently, T is not unitary), so that we cannot write x in the form (1). We could, however, try to find a decomposition of x of the form

x = ⟨x, e_1⟩ ẽ_1 + ⟨x, e_2⟩ ẽ_2  (6)

with ẽ_1, ẽ_2 ∈ R^2. That this is, indeed, possible is easily seen by rewriting (6) according to

x = [ẽ_1 ẽ_2] T x  (7)

and choosing the vectors ẽ_1 and ẽ_2 to be given by the columns of T^{-1} according to

[ẽ_1 ẽ_2] = T^{-1}.  (8)

Note that T is invertible as a consequence of e_1 and e_2 not being collinear. For the specific example at hand we find

[ẽ_1 ẽ_2] = T^{-1} = [1 0; -1 √2]

and therefore (see Figure 2)

ẽ_1 = [1 -1]^T,  ẽ_2 = [0 √2]^T.

Note that (8) implies that T [ẽ_1 ẽ_2] = I_2, which is equivalent to

[e_1^T; e_2^T] [ẽ_1 ẽ_2] = I_2.

More directly, the two sets of vectors {e_1, e_2} and {ẽ_1, ẽ_2} satisfy a biorthonormality property according to

⟨e_j, ẽ_k⟩ = 1 if j = k, and 0 else,  j, k = 1, 2.
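The computation of the dual basis above is easy to reproduce numerically. The following NumPy sketch (not part of the original notes; the variable names simply mirror the notation of Example 1.2) forms the analysis matrix, obtains the dual vectors as the columns of its inverse, and checks both biorthonormality and the reconstruction formula:

```python
import numpy as np

# Analysis matrix T with rows e_1^T, e_2^T as in Example 1.2
e1 = np.array([1.0, 0.0])
e2 = np.array([1.0, 1.0]) / np.sqrt(2)
T = np.vstack([e1, e2])

# Dual basis vectors are the columns of T^{-1}, cf. (8)
Tinv = np.linalg.inv(T)
e1_dual, e2_dual = Tinv[:, 0], Tinv[:, 1]

# Biorthonormality: the matrix of inner products <e_j, e_k~> is the identity
assert np.allclose(T @ Tinv, np.eye(2))

# Reconstruction x = <x, e1> e1~ + <x, e2> e2~ holds for any x, cf. (6)
x = np.array([0.3, -1.7])
x_rec = (x @ e1) * e1_dual + (x @ e2) * e2_dual
assert np.allclose(x_rec, x)
```

Running this recovers the dual vectors ẽ_1 = [1 -1]^T and ẽ_2 = [0 √2]^T computed by hand above.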

Figure 3: Overcomplete set of vectors in R^2.

We say that {e_1, e_2} and {ẽ_1, ẽ_2} are biorthonormal bases. Analogously to (3), we can now define the synthesis matrix as follows:

T̃^T ≜ [ẽ_1 ẽ_2] = [1 0; -1 √2].

Our observations can be summarized according to

x = ⟨x, e_1⟩ ẽ_1 + ⟨x, e_2⟩ ẽ_2 = T̃^T c = T̃^T T x = [1 0; -1 √2][1 0; 1/√2 1/√2] x = [1 0; 0 1] x.  (9)

Comparing (9) to (4), we observe the following: To synthesize x from the expansion coefficients c corresponding to the nonorthogonal set {e_1, e_2}, we need to use the synthesis matrix T̃^T obtained from the set {ẽ_1, ẽ_2}, which forms a biorthonormal pair with {e_1, e_2}. In Example 1.1 {e_1, e_2} is an ONB and hence T̃ = T, or, equivalently, {e_1, e_2} forms a biorthonormal pair with itself.

As the vectors e_1 and e_2 are linearly independent, the 2 × 2 analysis matrix T has full rank and is hence invertible, i.e., there is a unique matrix T^{-1} that satisfies T^{-1} T = I_2. According to (7) this means that for each analysis set {e_1, e_2} there is precisely one synthesis set {ẽ_1, ẽ_2} such that (6) is satisfied for all x ∈ R^2.

So far we considered nonredundant signal expansions where the number of expansion coefficients is equal to the dimension of the Hilbert space. Often, however, redundancy in the expansion is desirable.

Example 1.3 (Overcomplete expansion in R^2, [20, Ex. 3.1]). Consider the following three vectors in R^2 (see Figure 3):

g_1 = [1 0]^T,  g_2 = [0 1]^T,  g_3 = [1 -1]^T.

Three vectors in a two-dimensional space are always linearly dependent. In particular, in this example we have g_3 = g_1 - g_2. Let us compute the expansion coefficients c corresponding to {g_1, g_2, g_3}:

c = [c_1; c_2; c_3] = [⟨x, g_1⟩; ⟨x, g_2⟩; ⟨x, g_3⟩] = [g_1^T; g_2^T; g_3^T] x = [1 0; 0 1; 1 -1] x.  (10)

Following Examples 1.1 and 1.2, we define the analysis matrix

T ≜ [g_1^T; g_2^T; g_3^T] = [1 0; 0 1; 1 -1]

and rewrite (10) as c = Tx. Note that here, unlike in Examples 1.1 and 1.2, c is a redundant representation of x as we have three expansion coefficients for a two-dimensional signal x.

We next ask if x can be represented as a linear combination of the form

x = c_1 g̃_1 + c_2 g̃_2 + c_3 g̃_3 = ⟨x, g_1⟩ g̃_1 + ⟨x, g_2⟩ g̃_2 + ⟨x, g_3⟩ g̃_3  (11)

with g̃_1, g̃_2, g̃_3 ∈ R^2? To answer this question (in the affirmative) we first note that the vectors g_1, g_2 form an ONB for R^2. We therefore know that the following is true:

x = ⟨x, g_1⟩ g_1 + ⟨x, g_2⟩ g_2.  (12)

Setting

g̃_1 = g_1,  g̃_2 = g_2,  g̃_3 = 0

obviously yields a representation of the form (11). It turns out, however, that this representation is not unique and that an alternative representation of the form (11) can be obtained as follows. We start by adding zero to the right-hand side of (12):

x = ⟨x, g_1⟩ g_1 + ⟨x, g_2⟩ g_2 + ⟨x, g_1 - g_2⟩ (g_1 - g_1),

where the last term vanishes because g_1 - g_1 = 0. Rearranging terms in this expression, we obtain

x = ⟨x, g_1⟩ 2g_1 + ⟨x, g_2⟩ (g_2 - g_1) - ⟨x, g_1 - g_2⟩ g_1.  (13)

We recognize that g_1 - g_2 = g_3 and set

g̃_1 = 2g_1,  g̃_2 = g_2 - g_1,  g̃_3 = -g_1.  (14)

This allows us to rewrite (13) as

x = ⟨x, g_1⟩ g̃_1 + ⟨x, g_2⟩ g̃_2 + ⟨x, g_3⟩ g̃_3.

The redundant set of vectors {g_1, g_2, g_3} is called a frame. The set {g̃_1, g̃_2, g̃_3} in (14) is called a dual frame to the frame {g_1, g_2, g_3}. Obviously another dual frame is given by g̃_1 = g_1, g̃_2 = g_2, and g̃_3 = 0. In fact, there are infinitely many dual frames. To see this, we first define the synthesis matrix corresponding to a dual frame {g̃_1, g̃_2, g̃_3} as

T̃^T ≜ [g̃_1 g̃_2 g̃_3].  (15)

It then follows that we can write

x = ⟨x, g_1⟩ g̃_1 + ⟨x, g_2⟩ g̃_2 + ⟨x, g_3⟩ g̃_3 = T̃^T c = T̃^T T x,

which implies that setting T̃^T = [g̃_1 g̃_2 g̃_3] to be a left-inverse of T yields a valid dual frame. Since T is a 3 × 2 ("tall") matrix, its left-inverse is not unique. In fact, T has infinitely many left-inverses (two of them were found above). Every left-inverse of T leads to a dual frame according to (15).

Thanks to the redundancy of the frame {g_1, g_2, g_3}, we obtain design freedom: In order to synthesize the signal x from its expansion coefficients c_k = ⟨x, g_k⟩, k = 1, 2, 3, in the frame {g_1, g_2, g_3}, we can choose between infinitely many dual frames {g̃_1, g̃_2, g̃_3}. In practice the particular choice of the dual frame is usually dictated by the requirements of the specific problem at hand. We shall discuss this issue in detail in the context of sampling theory in Section 4.2.

2 Signal Expansions in Finite-Dimensional Hilbert Spaces

Motivated by the examples above, we now consider general signal expansions in finite-dimensional Hilbert spaces. As in the previous section, we first review the concept of an ONB, we then consider arbitrary (nonorthogonal) bases, and, finally, we discuss redundant vector sets, i.e., frames. While the discussion in this section is confined to the finite-dimensional case, we develop the general (possibly infinite-dimensional) case in Section 3.

2.1 Orthonormal Bases

We start by reviewing the concept of an ONB.

Definition 2.1. The set of vectors {e_k}_{k=1}^{M}, e_k ∈ C^M, is called an ONB for C^M if

1. span{e_k}_{k=1}^{M} = {c_1 e_1 + c_2 e_2 + ... + c_M e_M : c_1, c_2, ..., c_M ∈ C} = C^M
2. ⟨e_k, e_j⟩ = 1 for k = j, and 0 for k ≠ j, k, j = 1, ..., M.

When {e_k}_{k=1}^{M} is an ONB, thanks to the spanning property in Definition 2.1, every x ∈ C^M can be decomposed as

x = Σ_{k=1}^{M} c_k e_k.  (16)

The expansion coefficients {c_k}_{k=1}^{M} in (16) can be found through the following calculation:

⟨x, e_j⟩ = ⟨Σ_{k=1}^{M} c_k e_k, e_j⟩ = Σ_{k=1}^{M} c_k ⟨e_k, e_j⟩ = c_j.

In summary, we have the decomposition

x = Σ_{k=1}^{M} ⟨x, e_k⟩ e_k.

Just like in Example 1.1 in the previous section, we define the analysis matrix

T ≜ [e_1^H; ...; e_M^H].

If we organize the inner products {⟨x, e_k⟩}_{k=1}^{M} into the vector c, we have

c = Tx = [e_1^H; ...; e_M^H] x = [⟨x, e_1⟩; ...; ⟨x, e_M⟩].

Thanks to the orthonormality of the vectors e_1, e_2, ..., e_M, the matrix T is unitary, i.e., T^H = T^{-1}, and hence

TT^H = [e_1^H; ...; e_M^H][e_1 ... e_M] = [⟨e_1, e_1⟩ ... ⟨e_M, e_1⟩; ...; ⟨e_1, e_M⟩ ... ⟨e_M, e_M⟩] = I_M = T^H T.

Thus, if we multiply the vector c by T^H, we synthesize x according to

T^H c = T^H T x = Σ_{k=1}^{M} ⟨x, e_k⟩ e_k = I_M x = x.  (17)

We shall therefore call the matrix T^H the synthesis matrix corresponding to the analysis matrix T. In the ONB case considered here the synthesis matrix is simply the Hermitian adjoint of the analysis matrix.

2.2 General Bases

We next relax the orthonormality property, i.e., the second condition in Definition 2.1, and consider general bases.

Definition 2.2. The set of vectors {e_k}_{k=1}^{M}, e_k ∈ C^M, is a basis for C^M if

1. span{e_k}_{k=1}^{M} = {c_1 e_1 + c_2 e_2 + ... + c_M e_M : c_1, c_2, ..., c_M ∈ C} = C^M
2. {e_k}_{k=1}^{M} is a linearly independent set, i.e., if Σ_{k=1}^{M} c_k e_k = 0 for some scalar coefficients {c_k}_{k=1}^{M}, then necessarily c_k = 0 for all k = 1, ..., M.

Now consider a signal x ∈ C^M and compute the expansion coefficients

c_k ≜ ⟨x, e_k⟩,  k = 1, ..., M.  (18)

Again, it is convenient to introduce the analysis matrix

T ≜ [e_1^H; ...; e_M^H]

and to stack the coefficients {c_k}_{k=1}^{M} in the vector c. Then (18) can be written as c = Tx.

Next, let us ask how we can find a set of vectors {ẽ_1, ..., ẽ_M}, ẽ_k ∈ C^M, k = 1, ..., M, that is dual to the set {e_1, ..., e_M} in the sense that

x = Σ_{k=1}^{M} c_k ẽ_k = Σ_{k=1}^{M} ⟨x, e_k⟩ ẽ_k  (19)

for all x ∈ C^M. If we introduce the synthesis matrix

T̃^H ≜ [ẽ_1 ... ẽ_M],

we can rewrite (19) in vector-matrix notation as follows:

x = T̃^H c = T̃^H T x.

This shows that finding vectors ẽ_1, ..., ẽ_M that satisfy (19) is equivalent to finding the inverse of the analysis matrix T and setting T̃^H = T^{-1}. Thanks to the linear independence of the vectors {e_k}_{k=1}^{M}, the matrix T has full rank and is, therefore, invertible.

Summarizing our findings, we conclude that in the case of a basis {e_k}_{k=1}^{M}, the analysis matrix and the synthesis matrix are inverses of each other, i.e., T̃^H T = T T̃^H = I_M. Recall that in the case of an ONB the analysis matrix T is unitary and hence its inverse is simply given by T^H [see (17)], so that in this case T̃ = T.

Next, note that T T̃^H = I_M is equivalent to

[e_1^H; ...; e_M^H][ẽ_1 ... ẽ_M] = [⟨ẽ_1, e_1⟩ ... ⟨ẽ_M, e_1⟩; ...; ⟨ẽ_1, e_M⟩ ... ⟨ẽ_M, e_M⟩] = I_M

or, equivalently,

⟨e_k, ẽ_j⟩ = 1 for k = j, and 0 else,  k, j = 1, ..., M.  (20)

The sets {e_k}_{k=1}^{M} and {ẽ_k}_{k=1}^{M} are biorthonormal bases. ONBs are biorthonormal to themselves in this terminology, as already noted in Example 1.2. We emphasize that it is the fact that T and T̃^H are square and full-rank that allows us to conclude that T̃^H T = I_M implies T T̃^H = I_M and hence to conclude that (20) holds. We shall see below that for redundant expansions T is a tall matrix and T̃^H T ≠ T T̃^H (T̃^H T and T T̃^H have different dimensions), so that dual frames will not be biorthonormal.

As T is a square matrix and of full rank, its inverse is unique, which means that for a given analysis set {e_k}_{k=1}^{M}, the synthesis set {ẽ_k}_{k=1}^{M} is unique. Alternatively, for a given synthesis set {ẽ_k}_{k=1}^{M}, there is a unique analysis set {e_k}_{k=1}^{M}. This uniqueness property is not always desirable. For example, one may want to impose certain structural properties on the synthesis set {ẽ_k}_{k=1}^{M}, in which case having freedom in choosing the synthesis set as in Example 1.2 is helpful.
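The relations T̃^H T = T T̃^H = I_M and the biorthonormality condition can be checked numerically for a generic complex basis. The sketch below (NumPy, not part of the original notes; a random Gaussian matrix is used because it is a basis with probability one) also illustrates the Hermitian convention c_k = e_k^H x used throughout this section:

```python
import numpy as np

rng = np.random.default_rng(0)
M = 4

# Rows of E are the basis vectors e_1, ..., e_M of C^M (a basis almost surely)
E = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))

T = E.conj()              # analysis matrix: row k is e_k^H
T_syn = np.linalg.inv(T)  # synthesis matrix T~^H; its columns are the duals e_k~

# Analysis followed by synthesis reconstructs any x, cf. (19)
x = rng.standard_normal(M) + 1j * rng.standard_normal(M)
c = T @ x                 # c_k = <x, e_k> = e_k^H x
assert np.allclose(T_syn @ c, x)

# Biorthonormality (20): the inner products <e_k, e_j~> form the identity
assert np.allclose(T @ T_syn, np.eye(M))
assert np.allclose(T_syn @ T, np.eye(M))
```

Since T is square and full-rank, `np.linalg.inv(T)` is the unique (left and right) inverse, mirroring the uniqueness of the dual basis noted above.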
An important property of ONBs is that they are norm-preserving: the norm of the coefficient vector c is equal to the norm of the signal x. This can easily be seen by noting that

||c||^2 = c^H c = x^H T^H T x = x^H I_M x = ||x||^2,  (21)

where we used (17). Biorthonormal bases are not norm-preserving, in general. Rather, the equality in (21) is relaxed to a double inequality, by application of the Rayleigh-Ritz theorem [22, Sec. 9.7.2.2], according to

λ_min(T^H T) ||x||^2 ≤ ||c||^2 = x^H T^H T x ≤ λ_max(T^H T) ||x||^2.  (22)

2.3 Redundant Signal Expansions

The signal expansions we considered so far are nonredundant in the sense that the number of expansion coefficients equals the dimension of the Hilbert space. Such signal expansions have a number of disadvantages. First, corruption or loss of expansion coefficients can result in significant

reconstruction errors. Second, the reconstruction process is very rigid: As we have seen in Section 2.2, for each set of analysis vectors, there is a unique set of synthesis vectors. In practical applications it is often desirable to impose additional constraints on the reconstruction functions, such as smoothness properties or structural properties that allow for computationally efficient reconstruction.

Redundant expansions allow us to overcome many of these problems as they offer design freedom and robustness to corruption or loss of expansion coefficients. We already saw in Example 1.3 that in the case of redundant expansions, for a given set of analysis vectors the set of synthesis vectors that allows perfect recovery of a signal from its expansion coefficients is not unique; in fact, there are infinitely many sets of synthesis vectors, in general. This results in design freedom and provides robustness. Suppose that the expansion coefficient c_3 = ⟨x, g_3⟩ in Example 1.3 is corrupted or even completely lost. We can still reconstruct x exactly from (12).

Now, let us turn to developing the general theory of redundant signal expansions in finite-dimensional Hilbert spaces. Consider a set of N vectors {g_1, ..., g_N}, g_k ∈ C^M, k = 1, ..., N, with N ≥ M. Clearly, when N is strictly greater than M, the vectors g_1, ..., g_N must be linearly dependent. Next, consider a signal x ∈ C^M and compute the expansion coefficients

c_k = ⟨x, g_k⟩,  k = 1, ..., N.  (23)

Just as before, it is convenient to introduce the analysis matrix

T ≜ [g_1^H; ...; g_N^H]  (24)

and to stack the coefficients {c_k}_{k=1}^{N} in the vector c. Then (23) can be written as

c = Tx.  (25)

Note that c ∈ C^N and x ∈ C^M. Differently from the ONBs and biorthonormal bases considered in Sections 2.1 and 2.2, respectively, in the case of redundant expansions the signal x and the expansion coefficient vector c will, in general, belong to different Hilbert spaces.
The question now is: How can we find a set of vectors {g̃_1, ..., g̃_N}, g̃_k ∈ C^M, k = 1, ..., N, such that

x = Σ_{k=1}^{N} c_k g̃_k = Σ_{k=1}^{N} ⟨x, g_k⟩ g̃_k  (26)

for all x ∈ C^M? If we introduce the synthesis matrix

T̃^H ≜ [g̃_1 ... g̃_N],

we can rewrite (26) in vector-matrix notation as follows:

x = T̃^H c = T̃^H T x.  (27)

Finding vectors g̃_1, ..., g̃_N that satisfy (26) for all x ∈ C^M is therefore equivalent to finding a left-inverse T̃^H of T, i.e.,

T̃^H T = I_M.

First note that T is left-invertible if and only if C^M = span{g_k}_{k=1}^{N}, i.e., if and only if the set of vectors {g_k}_{k=1}^{N} spans C^M. Next observe that when N > M, the N × M matrix T is a tall matrix, and therefore its left-inverse is, in general, not unique. In fact, there are infinitely many left-inverses. The following theorem [23, Ch. 2, Th. 1] provides a convenient parametrization of all these left-inverses.

Theorem 2.3. Let A ∈ C^{N×M}, N ≥ M. Assume that rank(A) = M. Then A^† ≜ (A^H A)^{-1} A^H is a left-inverse of A, i.e., A^† A = I_M. Moreover, the general solution L ∈ C^{M×N} of the equation LA = I_M is given by

L = A^† + M(I_N - AA^†),  (28)

where M ∈ C^{M×N} is an arbitrary matrix.

Proof. Since rank(A) = M, the matrix A^H A is invertible and hence A^† is well defined. Now, let us verify that A^† is, indeed, a left-inverse of A:

A^† A = (A^H A)^{-1} A^H A = I_M.  (29)

The matrix A^† is called the Moore-Penrose inverse of A. Next, we show that every matrix L of the form (28) is a valid left-inverse of A:

LA = (A^† + M(I_N - AA^†)) A = A^† A + MA - M A A^† A = I_M + MA - MA = I_M,

where we used (29) twice. Finally, assume that L is a valid left-inverse of A, i.e., L is a solution of the equation LA = I_M. We show that L can be written in the form (28). Multiplying the equation LA = I_M by A^† from the right, we have

LAA^† = A^†.

Adding L to both sides of this equation and rearranging terms yields

L = A^† + L - LAA^† = A^† + L(I_N - AA^†),

which shows that L can be written in the form (28) (with M = L), as required.

We conclude that for each redundant set of vectors {g_1, ..., g_N} that spans C^M, there are infinitely many dual sets {g̃_1, ..., g̃_N} such that the decomposition (26) holds for all x ∈ C^M. These dual sets are obtained by identifying {g̃_1, ..., g̃_N} with the columns of L according to

[g̃_1 ... g̃_N] = L,

where L can be written as follows:

L = T^† + M(I_N - TT^†)

and M ∈ C^{M×N} is an arbitrary matrix. The dual set {g̃_1, ..., g̃_N} corresponding to the Moore-Penrose inverse L = T^† of the matrix T, i.e.,

[g̃_1 ... g̃_N] = T^† = (T^H T)^{-1} T^H,

is called the canonical dual of {g_1, ..., g_N}. Using (24), we see that in this case

g̃_k = (T^H T)^{-1} g_k,  k = 1, ..., N.  (30)

Note that unlike in the case of a basis, the equation T̃^H T = I_M does not imply that the sets {g̃_k}_{k=1}^{N} and {g_k}_{k=1}^{N} are biorthonormal. This is because the matrix T is not a square matrix, and thus T̃^H T ≠ T T̃^H (T̃^H T and T T̃^H have different dimensions).
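Theorem 2.3 can be illustrated numerically: any matrix of the form (28) is a left-inverse of a tall full-rank T, and every such left-inverse synthesizes x from c = Tx. The following sketch (NumPy, not part of the original notes; the free parameter is named `Mfree` to avoid clashing with the dimension) checks this for a random redundant set:

```python
import numpy as np

rng = np.random.default_rng(1)
N, M_dim = 5, 3

# Tall full-rank analysis matrix T (N > M): a redundant set of N vectors in C^M
T = rng.standard_normal((N, M_dim)) + 1j * rng.standard_normal((N, M_dim))

# Moore-Penrose inverse T^+ = (T^H T)^{-1} T^H, giving the canonical dual, cf. (30)
T_pinv = np.linalg.inv(T.conj().T @ T) @ T.conj().T
assert np.allclose(T_pinv @ T, np.eye(M_dim))

# Any matrix Mfree yields another left-inverse L = T^+ + Mfree (I_N - T T^+), cf. (28)
Mfree = rng.standard_normal((M_dim, N))
L = T_pinv + Mfree @ (np.eye(N) - T @ T_pinv)
assert np.allclose(L @ T, np.eye(M_dim))

# Both left-inverses synthesize x from c = Tx: the duals differ, the result does not
x = rng.standard_normal(M_dim) + 1j * rng.standard_normal(M_dim)
c = T @ x
assert np.allclose(T_pinv @ c, x)
assert np.allclose(L @ c, x)
```

In practice one would compute `np.linalg.pinv(T)` directly; the explicit formula is used here to match (30).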

Similar to biorthonormal bases, redundant sets of vectors are, in general, not norm-preserving. Indeed, from (25) we see that

||c||^2 = x^H T^H T x,

and thus, by the Rayleigh-Ritz theorem [22, Sec. 9.7.2.2], we have

λ_min(T^H T) ||x||^2 ≤ ||c||^2 ≤ λ_max(T^H T) ||x||^2,  (31)

as in the case of biorthonormal bases.

We already saw some of the basic issues that a theory of orthonormal, biorthonormal, and redundant signal expansions should address. It should account for the signals and the expansion coefficients belonging, potentially, to different Hilbert spaces; it should account for the fact that for a given analysis set, the synthesis set is not unique in the redundant case; it should prescribe how synthesis vectors can be obtained from the analysis vectors. Finally, it should apply not only to finite-dimensional Hilbert spaces, as considered so far, but also to infinite-dimensional Hilbert spaces. We now proceed to develop this general theory, known as the theory of frames.

3 Frames for General Hilbert Spaces

Let {g_k}_{k∈K} (K is a countable set) be a set of elements taken from the Hilbert space H. Note that this set need not be orthogonal.

In developing a general theory of signal expansions in Hilbert spaces, as outlined at the end of the previous section, we start by noting that the central quantity in Section 2 was the analysis matrix T associated with the (possibly nonorthogonal or redundant) set of vectors {g_1, ..., g_N}. Now matrices are nothing but linear operators in finite-dimensional Hilbert spaces. In formulating frame theory for general (possibly infinite-dimensional) Hilbert spaces, it is therefore sensible to define the analysis operator T that assigns to each signal x ∈ H the sequence of inner products Tx = {⟨x, g_k⟩}_{k∈K}. Throughout this section, we assume that {g_k}_{k∈K} is a Bessel sequence, i.e., Σ_{k∈K} |⟨x, g_k⟩|^2 < ∞ for all x ∈ H.

Definition 3.1.
The linear operator T is defined as the operator that maps the Hilbert space H into the space l^2 of square-summable complex sequences, T : H → l^2, by assigning to each signal x ∈ H the sequence of inner products ⟨x, g_k⟩ according to

T : x → {⟨x, g_k⟩}_{k∈K}.

Note that the energy ||Tx||^2 of Tx can be expressed as

||Tx||^2 = Σ_{k∈K} |⟨x, g_k⟩|^2.  (32)

We shall next formulate properties that the set {g_k}_{k∈K}, and hence the operator T, should satisfy if we have signal expansions in mind:

1. The signal x can be perfectly reconstructed from the coefficients {⟨x, g_k⟩}_{k∈K}. This means that we want ⟨x, g_k⟩ = ⟨y, g_k⟩, for all k ∈ K (i.e., Tx = Ty), to imply that x = y, for all x, y ∈ H. In other words, the operator T has to be left-invertible, which means that T is invertible on its range space

R(T) = {y ∈ l^2 : y = Tx, x ∈ H}.

The fact that the range space of T is contained in l^2 is a consequence of {g_k}_{k∈K} being a Bessel sequence.

This requirement will clearly be satisfied if we demand that there exist a constant A > 0 such that for all x, y ∈ H we have

A ||x - y||^2 ≤ ||Tx - Ty||^2.

Setting z = x - y and using the linearity of T, we see that this condition is equivalent to

A ||z||^2 ≤ ||Tz||^2  (33)

for all z ∈ H with A > 0.

2. The energy in the sequence of expansion coefficients Tx = {⟨x, g_k⟩}_{k∈K} should be related to the energy in the signal x. For example, we saw in (21) that if {e_k}_{k=1}^{M} is an ONB for C^M, then

||Tx||^2 = Σ_{k=1}^{M} |⟨x, e_k⟩|^2 = ||x||^2, for all x ∈ C^M.  (34)

This property is a consequence of the unitarity of T, and it is clear that it will not hold for general sets {g_k}_{k∈K} (see the discussion around (22) and (31)). Instead, we will relax (34) to demand that for all x ∈ H there exist a finite constant B such that²

||Tx||^2 = Σ_{k∈K} |⟨x, g_k⟩|^2 ≤ B ||x||^2.  (35)

Together with (33) this sandwiches the quantity ||Tx||^2 according to

A ||x||^2 ≤ ||Tx||^2 ≤ B ||x||^2.

We are now ready to formally define a frame for the Hilbert space H.

Definition 3.2. A set of elements {g_k}_{k∈K}, g_k ∈ H, k ∈ K, is called a frame for the Hilbert space H if

A ||x||^2 ≤ Σ_{k∈K} |⟨x, g_k⟩|^2 ≤ B ||x||^2, for all x ∈ H,  (36)

with A, B ∈ R and 0 < A ≤ B < ∞. Valid constants A and B are called frame bounds. The largest valid constant A and the smallest valid constant B are called the (tightest possible) frame bounds.

Let us next consider some simple examples of frames.

Example 3.3 ([21]). Let {e_k}_{k=1}^{∞} be an ONB for an infinite-dimensional Hilbert space H. By repeating each element in {e_k}_{k=1}^{∞} once, we obtain the redundant set

{g_k}_{k=1}^{∞} = {e_1, e_1, e_2, e_2, ...}.

To see that this set is a frame for H, we note that because {e_k}_{k=1}^{∞} is an ONB, for all x ∈ H, we have

Σ_{k=1}^{∞} |⟨x, e_k⟩|^2 = ||x||^2

and therefore

Σ_{k=1}^{∞} |⟨x, g_k⟩|^2 = Σ_{k=1}^{∞} |⟨x, e_k⟩|^2 + Σ_{k=1}^{∞} |⟨x, e_k⟩|^2 = 2 ||x||^2.

This verifies the frame condition (36) and shows that the frame bounds are given by A = B = 2.

²Note that if (35) is satisfied with B < ∞, then {g_k}_{k∈K} is a Bessel sequence.
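A finite-dimensional analogue of Example 3.3 can be checked numerically: doubling an ONB of C^M yields a frame with A = B = 2, and by the Rayleigh-Ritz argument around (31) the tightest constants are the extreme eigenvalues of T^H T. The sketch below (NumPy, not part of the original notes; the standard basis of R^M is used for simplicity) verifies both the eigenvalue computation and the frame condition (36) directly:

```python
import numpy as np

M = 3
E = np.eye(M)          # standard ONB of R^M (a real-valued stand-in for C^M)
T = np.vstack([E, E])  # analysis matrix of the doubled set {e_1, e_1, e_2, e_2, ...}
                       # (row order does not affect the frame bounds)

# Tightest frame bounds = extreme eigenvalues of T^H T
eigvals = np.linalg.eigvalsh(T.conj().T @ T)
A, B = eigvals.min(), eigvals.max()
assert np.isclose(A, 2.0) and np.isclose(B, 2.0)

# Check (36) directly for a random x: sum_k |<x, g_k>|^2 = 2 ||x||^2
x = np.random.default_rng(2).standard_normal(M)
assert np.isclose(np.sum(np.abs(T @ x) ** 2), 2 * np.linalg.norm(x) ** 2)
```

The same check with `T = np.vstack([E, E, E])` would give A = B = 3, i.e., the frame bounds count the redundancy of a repeated ONB.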

Example 3.4 ([21]). Starting from the ONB {e_k}_{k=1}^{∞}, we can construct another redundant set as follows:

{g_k}_{k=1}^{∞} = {e_1, (1/√2) e_2, (1/√2) e_2, (1/√3) e_3, (1/√3) e_3, (1/√3) e_3, ...},

where each element (1/√k) e_k is repeated k times. To see that the set {g_k}_{k=1}^{∞} is a frame for H, take an arbitrary x ∈ H and note that

Σ_{k=1}^{∞} |⟨x, g_k⟩|^2 = Σ_{k=1}^{∞} k |⟨x, (1/√k) e_k⟩|^2 = Σ_{k=1}^{∞} k (1/k) |⟨x, e_k⟩|^2 = Σ_{k=1}^{∞} |⟨x, e_k⟩|^2 = ||x||^2.

We conclude that {g_k}_{k=1}^{∞} is a frame with the frame bounds A = B = 1.

From (32) it follows that an equivalent formulation of (36) is

A ||x||^2 ≤ ||Tx||^2 ≤ B ||x||^2, for all x ∈ H.

This means that the energy in the coefficient sequence Tx is bounded above and below by bounds that are proportional to the signal energy. The existence of a lower frame bound A > 0 guarantees that the linear operator T is left-invertible, i.e., our first requirement above is satisfied. Besides that, it also guarantees completeness of the set {g_k}_{k∈K} for H, as we shall see next. To this end, we first recall the following definition:

Definition 3.5. A set of elements {g_k}_{k∈K}, g_k ∈ H, k ∈ K, is complete for the Hilbert space H if ⟨x, g_k⟩ = 0 for all k ∈ K with x ∈ H implies x = 0, i.e., the only element in H that is orthogonal to all g_k is x = 0.

To see that the frame {g_k}_{k∈K} is complete for H, take an arbitrary signal x ∈ H and assume that ⟨x, g_k⟩ = 0 for all k ∈ K. Due to the existence of a lower frame bound A > 0 we have

A ||x||^2 ≤ Σ_{k∈K} |⟨x, g_k⟩|^2 = 0,

which implies ||x||^2 = 0 and hence x = 0.

Finally, note that the existence of an upper frame bound B < ∞ guarantees that T is a bounded linear operator³ (see [1, Def. 2.7-1]) and, therefore (see [1, Th. 2.7-9]), continuous⁴ (see [1, Sec. 2.7]).

Recall that we would like to find a general method to reconstruct a signal x ∈ H from its expansion coefficients {⟨x, g_k⟩}_{k∈K}. In Section 2.3 we saw that in the finite-dimensional case, this can be accomplished according to

x = Σ_{k=1}^{N} ⟨x, g_k⟩ g̃_k.

Here {g̃_1, ..., g̃_N} can be chosen to be the canonical dual set to the set {g_1, ..., g_N} and can be computed as follows: g̃_k = (T^H T)^{-1} g_k, k = 1, ..., N.
We already know that T is the generalization of T to the infinite-dimensional setting. Which operator will then correspond to T^H? To answer this question we start with a definition.

³Let H and H' be Hilbert spaces and A : H → H' a linear operator. The operator A is said to be bounded if there exists a finite number c such that for all x ∈ H, ||Ax|| ≤ c ||x||.

⁴Let H and H' be Hilbert spaces and A : H → H' a linear operator. The operator A is said to be continuous at a point x_0 ∈ H if for every ε > 0 there is a δ > 0 such that for all x ∈ H satisfying ||x - x_0|| < δ it follows that ||Ax - Ax_0|| < ε. The operator A is said to be continuous on H if it is continuous at every point x_0 ∈ H.

Definition 3.6. The linear operator T' is defined as

T' : l^2 → H,
T' : {c_k}_{k∈K} → Σ_{k∈K} c_k g_k.

Next, we recall the definition of the adjoint of an operator.

Definition 3.7. Let A : H → H' be a bounded linear operator between the Hilbert spaces H and H'. The unique bounded linear operator A* : H' → H that satisfies

⟨Ax, y⟩ = ⟨x, A*y⟩  (37)

for all x ∈ H and all y ∈ H' is called the adjoint of A.

Note that the concept of the adjoint of an operator directly generalizes that of the Hermitian transpose of a matrix: if A ∈ C^{N×M}, x ∈ C^M, y ∈ C^N, then

⟨Ax, y⟩ = y^H A x = (A^H y)^H x = ⟨x, A^H y⟩,

which, comparing to (37), shows that A^H corresponds to A*.

We shall next show that the operator T' defined above is nothing but the adjoint T* of the operator T. To see this, consider an arbitrary sequence {c_k}_{k∈K} ∈ l^2 and an arbitrary signal x ∈ H. We have to prove that

⟨Tx, {c_k}_{k∈K}⟩ = ⟨x, T'{c_k}_{k∈K}⟩.

This can be established by noting that

⟨Tx, {c_k}_{k∈K}⟩ = Σ_{k∈K} ⟨x, g_k⟩ c_k*,

⟨x, T'{c_k}_{k∈K}⟩ = ⟨x, Σ_{k∈K} c_k g_k⟩ = Σ_{k∈K} c_k* ⟨x, g_k⟩.

We therefore showed that the adjoint operator of T is T', i.e., T' = T*. In what follows, we shall always write T* instead of T'. As pointed out above, the concept of the adjoint of an operator generalizes the concept of the Hermitian transpose of a matrix to the infinite-dimensional case. Thus, T* is the generalization of T^H to the infinite-dimensional setting.

3.1 The Frame Operator

Let us return to the discussion we had immediately before Definition 3.6. We saw that in the finite-dimensional case, the canonical dual set {g̃_1, ..., g̃_N} to the set {g_1, ..., g_N} can be computed as follows: g̃_k = (T^H T)^{-1} g_k, k = 1, ..., N. We know that T is the generalization of T to the infinite-dimensional case and we have just seen that T* is the generalization of T^H. It is now obvious that the operator T*T must correspond to T^H T. The operator T*T is of central importance in frame theory.

Definition 3.8. Let {g_k}_{k∈K} be a frame for the Hilbert space H. The operator S : H → H defined as

S = T*T,  (38)
Sx = Σ_{k∈K} ⟨x, g_k⟩ g_k,

is called the frame operator.
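In the finite-dimensional case the frame operator is simply the matrix T^H T. As a concrete check (a NumPy sketch, not part of the original notes), the frame {g_1, g_2, g_3} of Example 1.3 gives S = [2 -1; -1 2], and the identity Σ_k |⟨x, g_k⟩|^2 = ⟨Sx, x⟩ can be verified directly:

```python
import numpy as np

# Frame {g_1, g_2, g_3} from Example 1.3; analysis matrix T has rows g_k^T
T = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, -1.0]])

S = T.T @ T  # frame operator S = T^H T (real case: T^T T)

# Energy identity: sum_k |<x, g_k>|^2 = <Sx, x>
x = np.array([0.4, 1.2])
lhs = np.sum((T @ x) ** 2)
rhs = x @ (S @ x)
assert np.isclose(lhs, rhs)
```

Note that S is symmetric (self-adjoint) and positive definite here, in line with the properties established next.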

We note that
$$\sum_{k\in\mathcal{K}} |\langle x, g_k\rangle|^2 = \|Tx\|^2 = \langle Tx, Tx\rangle = \langle T^* T x, x\rangle = \langle Sx, x\rangle . \tag{39}$$
We are now able to formulate the frame condition in terms of the frame operator $S$ by simply noting that (36) can be written as
$$A \|x\|^2 \le \langle Sx, x\rangle \le B \|x\|^2 . \tag{40}$$
We shall next discuss the properties of $S$.

Theorem 3.9. The frame operator $S$ satisfies the following properties:

1. $S$ is linear and bounded;
2. $S$ is self-adjoint, i.e., $S^* = S$;
3. $S$ is positive definite, i.e., $\langle Sx, x\rangle > 0$ for all $x \in \mathcal{H}$, $x \ne 0$;
4. $S$ has a unique self-adjoint positive definite square root (denoted as $S^{1/2}$).

Proof. 1. Linearity and boundedness of $S$ follow from the fact that $S$ is obtained by cascading a bounded linear operator and its adjoint (see (38)).

2. To see that $S$ is self-adjoint, simply note that $S^* = (T^* T)^* = T^* T = S$.

3. To see that $S$ is positive definite, note that, with (40),
$$\langle Sx, x\rangle \ge A \|x\|^2 > 0$$
for all $x \in \mathcal{H}$, $x \ne 0$.

4. Recall the following basic fact from functional analysis [1, Th. 9.4-2].

Lemma 3.10. Every self-adjoint positive definite bounded operator $A : \mathcal{H} \to \mathcal{H}$ has a unique self-adjoint positive definite square root, i.e., there exists a unique self-adjoint positive definite operator $B$ such that $A = BB$. The operator $B$ commutes with the operator $A$, i.e., $BA = AB$.

Property 4 now follows directly from Property 2, Property 3, and Lemma 3.10.

We next show that the tightest possible frame bounds $A$ and $B$ are given by the smallest and the largest spectral value [1, Def. 7.2-1] of the frame operator $S$, respectively.

Theorem 3.11. Let $A$ and $B$ be the tightest possible frame bounds for a frame with frame operator $S$. Then
$$A = \lambda_{\min} \quad\text{and}\quad B = \lambda_{\max}, \tag{41}$$
where $\lambda_{\min}$ and $\lambda_{\max}$ denote the smallest and the largest spectral value of $S$, respectively.

Proof. By standard results on the spectrum of self-adjoint operators [1, Th. 9.2-1, Th. 9.2-3, Th. 9.2-4], we have
$$\lambda_{\min} = \inf_{x\in\mathcal{H}} \frac{\langle Sx, x\rangle}{\|x\|^2} \quad\text{and}\quad \lambda_{\max} = \sup_{x\in\mathcal{H}} \frac{\langle Sx, x\rangle}{\|x\|^2} . \tag{42}$$
This means that $\lambda_{\min}$ and $\lambda_{\max}$ are, respectively, the largest and the smallest constants such that
$$\lambda_{\min} \|x\|^2 \le \langle Sx, x\rangle \le \lambda_{\max} \|x\|^2 \tag{43}$$
is satisfied for every $x \in \mathcal{H}$. According to (40), this implies that $\lambda_{\min}$ and $\lambda_{\max}$ are the tightest possible frame bounds.

It is instructive to compare (43) to its finite-dimensional counterpart in Section 2.3. Remember that $S = T^* T$ corresponds to the matrix $\mathbf{T}^H \mathbf{T}$ in the finite-dimensional case considered in Section 2.3. Thus, $\|c\|^2 = x^H \mathbf{T}^H \mathbf{T} x = \langle Sx, x\rangle$, which shows that (43) is simply a generalization of the finite-dimensional frame condition to the infinite-dimensional case.

3.2 The Canonical Dual Frame

Recall that in the finite-dimensional case considered in Section 2.3, the canonical dual frame $\{\tilde g_k\}_{k=1}^N$ of the frame $\{g_k\}_{k=1}^N$ can be used to reconstruct the signal $x$ from the expansion coefficients $\{\langle x, g_k\rangle\}_{k=1}^N$ according to
$$x = \sum_{k=1}^{N} \langle x, g_k\rangle \tilde g_k .$$
In (30) we saw that the canonical dual frame can be computed as follows:
$$\tilde g_k = (\mathbf{T}^H \mathbf{T})^{-1} g_k, \quad k = 1, \ldots, N. \tag{44}$$
We already pointed out that the frame operator $S = T^* T$ is represented by the matrix $\mathbf{T}^H \mathbf{T}$ in the finite-dimensional case. The matrix $(\mathbf{T}^H \mathbf{T})^{-1}$ therefore corresponds to the operator $S^{-1}$, which will be studied next. From (41) it follows that $\lambda_{\min}$, the smallest spectral value of $S$, satisfies $\lambda_{\min} > 0$ if $\{g_k\}_{k\in\mathcal{K}}$ is a frame. This implies that zero is a regular value [1, Def. 7.2-1] of $S$, and hence $S$ is invertible on $\mathcal{H}$, i.e., there exists a unique operator $S^{-1}$ such that $S S^{-1} = S^{-1} S = I_{\mathcal{H}}$. Next, we summarize the properties of $S^{-1}$.

Theorem 3.12. The following properties hold:

1. $S^{-1}$ is self-adjoint, i.e., $(S^{-1})^* = S^{-1}$;
2. $S^{-1}$ satisfies
$$\frac{1}{B} = \inf_{x\in\mathcal{H}} \frac{\langle S^{-1}x, x\rangle}{\|x\|^2} \quad\text{and}\quad \frac{1}{A} = \sup_{x\in\mathcal{H}} \frac{\langle S^{-1}x, x\rangle}{\|x\|^2}, \tag{45}$$
where $A$ and $B$ are the tightest possible frame bounds;
3. $S^{-1}$ is positive definite.

Proof. 1. To prove that $S^{-1}$ is self-adjoint, we write
$$(S S^{-1})^* = (S^{-1})^* S^* = I_{\mathcal{H}} .$$
Since $S$ is self-adjoint, i.e., $S^* = S$, we conclude that $(S^{-1})^* S = I_{\mathcal{H}}$.

Multiplying by $S^{-1}$ from the right, we finally obtain $(S^{-1})^* = S^{-1}$.

2. To prove the first equation in (45), we write
$$B = \sup_{x\in\mathcal{H}} \frac{\langle Sx, x\rangle}{\|x\|^2} = \sup_{y\in\mathcal{H}} \frac{\langle S S^{1/2} S^{-1} y,\; S^{1/2} S^{-1} y\rangle}{\langle S^{1/2} S^{-1} y,\; S^{1/2} S^{-1} y\rangle} = \sup_{y\in\mathcal{H}} \frac{\langle S^{-1} S^{1/2} S S^{1/2} S^{-1} y,\; y\rangle}{\langle S^{-1} S^{1/2} S^{1/2} S^{-1} y,\; y\rangle} = \sup_{y\in\mathcal{H}} \frac{\langle y, y\rangle}{\langle S^{-1} y, y\rangle}, \tag{46}$$
where the first equality follows from (41) and (42); in the second equality we used the fact that the operator $S^{1/2} S^{-1}$ is one-to-one on $\mathcal{H}$ and changed variables according to $x = S^{1/2} S^{-1} y$; in the third equality we used the fact that $S^{1/2}$ and $S^{-1}$ are self-adjoint; and in the fourth equality we used $S = S^{1/2} S^{1/2}$. The first equation in (45) is now obtained by noting that (46) implies
$$\frac{1}{B} = \frac{1}{\displaystyle\sup_{y\in\mathcal{H}} \dfrac{\langle y, y\rangle}{\langle S^{-1}y, y\rangle}} = \inf_{y\in\mathcal{H}} \frac{\langle S^{-1}y, y\rangle}{\langle y, y\rangle} .$$
The second equation in (45) is proved analogously.

3. Positive definiteness of $S^{-1}$ follows from the first equation in (45) and the fact that $B < \infty$, so that $1/B > 0$.

We are now ready to generalize (44) and state the main result on canonical dual frames in the case of general (possibly infinite-dimensional) Hilbert spaces.

Theorem 3.13. Let $\{g_k\}_{k\in\mathcal{K}}$ be a frame for the Hilbert space $\mathcal{H}$ with frame bounds $A$ and $B$, and let $S$ be the corresponding frame operator. Then, the set $\{\tilde g_k\}_{k\in\mathcal{K}}$ given by
$$\tilde g_k = S^{-1} g_k, \quad k \in \mathcal{K}, \tag{47}$$
is a frame for $\mathcal{H}$ with the frame bounds $\tilde A = 1/B$ and $\tilde B = 1/A$. The analysis operator associated with $\{\tilde g_k\}_{k\in\mathcal{K}}$, defined as
$$\tilde T : \mathcal{H} \to \ell^2, \qquad \tilde T : x \mapsto \{\langle x, \tilde g_k\rangle\}_{k\in\mathcal{K}},$$
satisfies
$$\tilde T = T S^{-1} = T (T^* T)^{-1} . \tag{48}$$

Proof. Recall that $S^{-1}$ is self-adjoint. Hence, we have $\langle x, \tilde g_k\rangle = \langle x, S^{-1} g_k\rangle = \langle S^{-1} x, g_k\rangle$ for all $x \in \mathcal{H}$. Thus, using (39), we obtain
$$\sum_{k\in\mathcal{K}} |\langle x, \tilde g_k\rangle|^2 = \sum_{k\in\mathcal{K}} |\langle S^{-1}x, g_k\rangle|^2 = \langle S(S^{-1}x), S^{-1}x\rangle = \langle x, S^{-1}x\rangle = \langle S^{-1}x, x\rangle .$$
Therefore, we conclude from (45) that
$$\frac{1}{B}\|x\|^2 \le \sum_{k\in\mathcal{K}} |\langle x, \tilde g_k\rangle|^2 \le \frac{1}{A}\|x\|^2,$$

i.e., the set $\{\tilde g_k\}_{k\in\mathcal{K}}$ constitutes a frame for $\mathcal{H}$ with frame bounds $\tilde A = 1/B$ and $\tilde B = 1/A$; moreover, it follows from (45) that $\tilde A = 1/B$ and $\tilde B = 1/A$ are the tightest possible frame bounds. It remains to show that $\tilde T = T S^{-1}$:
$$\tilde T x = \{\langle x, \tilde g_k\rangle\}_{k\in\mathcal{K}} = \{\langle x, S^{-1} g_k\rangle\}_{k\in\mathcal{K}} = \{\langle S^{-1}x, g_k\rangle\}_{k\in\mathcal{K}} = T S^{-1} x .$$

We call $\{\tilde g_k\}_{k\in\mathcal{K}}$ the canonical dual frame associated with the frame $\{g_k\}_{k\in\mathcal{K}}$. It is convenient to introduce the canonical dual frame operator:

Definition 3.14. The frame operator associated with the canonical dual frame,
$$\tilde S = \tilde T^* \tilde T, \qquad \tilde S x = \sum_{k\in\mathcal{K}} \langle x, \tilde g_k\rangle \tilde g_k, \tag{49}$$
is called the canonical dual frame operator.

Theorem 3.15. The canonical dual frame operator $\tilde S$ satisfies $\tilde S = S^{-1}$.

Proof. For every $x \in \mathcal{H}$, we have
$$\tilde S x = \sum_{k\in\mathcal{K}} \langle x, \tilde g_k\rangle \tilde g_k = \sum_{k\in\mathcal{K}} \langle x, S^{-1} g_k\rangle S^{-1} g_k = S^{-1} \sum_{k\in\mathcal{K}} \langle S^{-1} x, g_k\rangle g_k = S^{-1} S S^{-1} x = S^{-1} x,$$
where in the first equality we used (49), in the second we used (47), in the third we made use of the fact that $S^{-1}$ is self-adjoint, and in the fourth we used the definition of $S$.

Note that canonical duality is a reciprocity relation: if the frame $\{\tilde g_k\}_{k\in\mathcal{K}}$ is the canonical dual of the frame $\{g_k\}_{k\in\mathcal{K}}$, then $\{g_k\}_{k\in\mathcal{K}}$ is the canonical dual of the frame $\{\tilde g_k\}_{k\in\mathcal{K}}$. This can be seen by noting that
$$\tilde S^{-1} \tilde g_k = (S^{-1})^{-1} S^{-1} g_k = S S^{-1} g_k = g_k .$$

3.3 Signal Expansions

The following theorem can be considered one of the central results in frame theory. It states that every signal $x \in \mathcal{H}$ can be expanded into a frame; the expansion coefficients can be chosen as the inner products of $x$ with the canonical dual frame elements.

Theorem 3.16. Let $\{g_k\}_{k\in\mathcal{K}}$ and $\{\tilde g_k\}_{k\in\mathcal{K}}$ be canonical dual frames for the Hilbert space $\mathcal{H}$. Every signal $x \in \mathcal{H}$ can be decomposed as follows:
$$x = T^* \tilde T x = \sum_{k\in\mathcal{K}} \langle x, \tilde g_k\rangle g_k$$
$$x = \tilde T^* T x = \sum_{k\in\mathcal{K}} \langle x, g_k\rangle \tilde g_k . \tag{50}$$
Note that, equivalently, we have $T^* \tilde T = \tilde T^* T = I_{\mathcal{H}}$.

Proof. We have
$$T^* \tilde T x = \sum_{k\in\mathcal{K}} \langle x, \tilde g_k\rangle g_k = \sum_{k\in\mathcal{K}} \langle x, S^{-1} g_k\rangle g_k = \sum_{k\in\mathcal{K}} \langle S^{-1} x, g_k\rangle g_k = S S^{-1} x = x .$$
This proves that $T^* \tilde T = I_{\mathcal{H}}$. The proof of $\tilde T^* T = I_{\mathcal{H}}$ is similar.

Note that (50) corresponds to the decomposition (26) we found in the finite-dimensional case. It is now natural to ask whether reconstruction of $x$ from the coefficients $\langle x, g_k\rangle$, $k \in \mathcal{K}$, according to (50) is the only way of recovering $x$ from $\langle x, g_k\rangle$, $k \in \mathcal{K}$. Recall that we showed in the finite-dimensional case (see Section 2.3) that for each complete and redundant set of vectors $\{g_1, \ldots, g_N\}$, there are infinitely many dual sets $\{\tilde g_1, \ldots, \tilde g_N\}$ that can be used to reconstruct a signal $x$ from the coefficients $\langle x, g_k\rangle$, $k = 1, \ldots, N$, according to (26). These dual sets are obtained by identifying $\{\tilde g_1, \ldots, \tilde g_N\}$ with the columns of $\mathbf{L}$, where $\mathbf{L}$ is a left-inverse of the analysis matrix $\mathbf{T}$. In the infinite-dimensional case, the question of finding all dual frames for a given frame boils down to finding, for a given analysis operator $T$, all linear operators $L$ that satisfy
$$L T x = x$$
for all $x \in \mathcal{H}$. In other words, we want to identify all left-inverses $L$ of the analysis operator $T$. The answer to this question is the infinite-dimensional version of Theorem 2.3, which we state here without proof.

Theorem 3.17. Let $A : \mathcal{H} \to \ell^2$ be a bounded linear operator. Assume that $A^* A : \mathcal{H} \to \mathcal{H}$ is invertible on $\mathcal{H}$. Then, the operator $A^\dagger : \ell^2 \to \mathcal{H}$ defined as
$$A^\dagger \triangleq (A^* A)^{-1} A^*$$
is a left-inverse of $A$, i.e., $A^\dagger A = I_{\mathcal{H}}$, where $I_{\mathcal{H}}$ is the identity operator on $\mathcal{H}$. Moreover, the general solution $L$ of the equation $L A = I_{\mathcal{H}}$ is given by
$$L = A^\dagger + M (I_{\ell^2} - A A^\dagger),$$
where $M : \ell^2 \to \mathcal{H}$ is an arbitrary bounded linear operator and $I_{\ell^2}$ is the identity operator on $\ell^2$.

Applying this theorem to the operator $T$, we see that all left-inverses of $T$ can be written as
$$L = T^\dagger + M (I_{\ell^2} - T T^\dagger), \tag{51}$$
where $M : \ell^2 \to \mathcal{H}$ is an arbitrary bounded linear operator and
$$T^\dagger = (T^* T)^{-1} T^* .$$
Now, using (48), we obtain the following important identity:
$$T^\dagger = (T^* T)^{-1} T^* = S^{-1} T^* = \tilde T^* .$$
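In the finite-dimensional case this identity can be checked numerically: the matrix whose columns are the canonical dual vectors $(\mathbf{T}^H\mathbf{T})^{-1} g_k$ coincides with the Moore-Penrose inverse of the analysis matrix, and applying it to the analysis coefficients reconstructs $x$. A sketch with an arbitrary (hypothetical) example frame:

```python
import numpy as np

# Hypothetical example frame: rows of the analysis matrix T are the g_k in R^2.
T = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])

S = T.T @ T                 # frame operator (T^H T for real vectors)
D = np.linalg.inv(S) @ T.T  # column k is the canonical dual vector S^{-1} g_k

# The synthesis operator of the canonical dual frame is the Moore-Penrose inverse of T.
print(np.allclose(D, np.linalg.pinv(T)))   # True

# Perfect reconstruction: x = sum_k <x, g_k> g~_k.
x = np.array([0.3, -1.7])
print(np.allclose(D @ (T @ x), x))         # True
```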
This shows that reconstruction according to (50), i.e., by applying the operator $\tilde T^*$ to the coefficient sequence $T x = \{\langle x, g_k\rangle\}_{k\in\mathcal{K}}$, is nothing but applying the infinite-dimensional analog of the Moore-Penrose inverse $\mathbf{T}^\dagger = (\mathbf{T}^H \mathbf{T})^{-1} \mathbf{T}^H$. As already noted in the finite-dimensional case, the existence of infinitely many left-inverses of the operator $T$ provides us with freedom in designing dual frames. We close this discussion with a geometric interpretation of the parametrization (51). First observe the following.

Theorem 3.18. The operator $P : \ell^2 \to \ell^2$ defined as
$$P = T S^{-1} T^*$$
satisfies the following properties:

1. $P$ is the identity operator $I_{\ell^2}$ on $\mathcal{R}(T)$;
2. $P$ is the zero operator on $\mathcal{R}(T)^\perp$, where $\mathcal{R}(T)^\perp$ denotes the orthogonal complement of the space $\mathcal{R}(T)$.

In other words, $P$ is the orthogonal projection operator onto
$$\mathcal{R}(T) = \{\{c_k\}_{k\in\mathcal{K}} \mid \{c_k\}_{k\in\mathcal{K}} = Tx,\; x \in \mathcal{H}\},$$
the range space of the operator $T$.

Proof. 1. Take a sequence $\{c_k\}_{k\in\mathcal{K}} \in \mathcal{R}(T)$ and note that it can be written as $\{c_k\}_{k\in\mathcal{K}} = Tx$, where $x \in \mathcal{H}$. Then, we have
$$P\{c_k\}_{k\in\mathcal{K}} = T S^{-1} T^* T x = T S^{-1} S x = T I_{\mathcal{H}} x = T x = \{c_k\}_{k\in\mathcal{K}} .$$
This proves that $P$ is the identity operator on $\mathcal{R}(T)$.

2. Next, take a sequence $\{c_k\}_{k\in\mathcal{K}} \in \mathcal{R}(T)^\perp$. As the orthogonal complement of the range space of an operator is the null space of its adjoint, we have $T^*\{c_k\}_{k\in\mathcal{K}} = 0$ and therefore
$$P\{c_k\}_{k\in\mathcal{K}} = T S^{-1} T^* \{c_k\}_{k\in\mathcal{K}} = 0 .$$
This proves that $P$ is the zero operator on $\mathcal{R}(T)^\perp$.

Now, using that $T T^\dagger = T S^{-1} T^* = P$ and
$$\tilde T^* = S^{-1} T^* = S^{-1} S S^{-1} T^* = S^{-1} T^* T S^{-1} T^* = \tilde T^* P,$$
we can rewrite (51) as follows:
$$L = \tilde T^* P + M (I_{\ell^2} - P) . \tag{52}$$
Next, we show that $(I_{\ell^2} - P) : \ell^2 \to \ell^2$ is the orthogonal projection onto $\mathcal{R}(T)^\perp$. Indeed, we can directly verify the following: for every $\{c_k\}_{k\in\mathcal{K}} \in \mathcal{R}(T)^\perp$, we have $(I_{\ell^2} - P)\{c_k\}_{k\in\mathcal{K}} = I_{\ell^2}\{c_k\}_{k\in\mathcal{K}} - 0 = \{c_k\}_{k\in\mathcal{K}}$, i.e., $I_{\ell^2} - P$ is the identity operator on $\mathcal{R}(T)^\perp$; for every $\{c_k\}_{k\in\mathcal{K}} \in (\mathcal{R}(T)^\perp)^\perp = \mathcal{R}(T)$, we have $(I_{\ell^2} - P)\{c_k\}_{k\in\mathcal{K}} = I_{\ell^2}\{c_k\}_{k\in\mathcal{K}} - \{c_k\}_{k\in\mathcal{K}} = 0$, i.e., $I_{\ell^2} - P$ is the zero operator on $(\mathcal{R}(T)^\perp)^\perp$.

We are now ready to re-interpret (52) as follows: every left-inverse $L$ of $T$ acts as $\tilde T^*$ (the synthesis operator of the canonical dual frame) on the range space of the analysis operator $T$, and can act in an arbitrary linear and bounded fashion on the orthogonal complement of the range space of the analysis operator $T$.

3.4 Tight Frames

The frames considered in Examples 3.3 and 3.4 above have an interesting property: in both cases the tightest possible frame bounds $A$ and $B$ are equal. Frames with this property are called tight frames.

Definition 3.19. A frame $\{g_k\}_{k\in\mathcal{K}}$ with tightest possible frame bounds $A = B$ is called a tight frame.
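In finite dimensions, tightness is easy to test: since the tightest frame bounds are the extreme eigenvalues of $\mathbf{T}^H\mathbf{T}$ (Theorem 3.11), a frame is tight precisely when the smallest and largest eigenvalues coincide. A sketch, where both example frames are hypothetical:

```python
import numpy as np

def tightest_frame_bounds(G):
    """Tightest frame bounds of the finite frame whose rows are the vectors g_k.

    By Theorem 3.11, A and B are the smallest and largest eigenvalues of the
    frame operator S = T^H T (real case here).
    """
    eig = np.linalg.eigvalsh(G.T @ G)   # eigenvalues in ascending order
    return eig[0], eig[-1]

# The union of two ONBs for R^2 is a tight frame (frame operator 2*I).
union_of_onbs = np.array([[1.0, 0.0],
                          [0.0, 1.0],
                          [1.0, 1.0],
                          [1.0, -1.0]])
union_of_onbs[2:] /= np.sqrt(2.0)

# A generic redundant frame is usually not tight.
generic = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])

A1, B1 = tightest_frame_bounds(union_of_onbs)
A2, B2 = tightest_frame_bounds(generic)
print(np.isclose(A1, B1))   # True  (A = B = 2)
print(np.isclose(A2, B2))   # False (not tight)
```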
Tight frames are of significant practical interest because of the following central fact.

Theorem 3.20. Let $\{g_k\}_{k\in\mathcal{K}}$ be a frame for the Hilbert space $\mathcal{H}$. The frame $\{g_k\}_{k\in\mathcal{K}}$ is tight with frame bound $A$ if and only if its corresponding frame operator satisfies $S = A I_{\mathcal{H}}$, or equivalently, if
$$x = \frac{1}{A} \sum_{k\in\mathcal{K}} \langle x, g_k\rangle g_k \tag{53}$$
for all $x \in \mathcal{H}$.

Proof. First observe that $S = A I_{\mathcal{H}}$ is equivalent to $Sx = A I_{\mathcal{H}} x = Ax$ for all $x \in \mathcal{H}$, which, in turn, is equivalent to (53) by definition of the frame operator.

To prove that tightness of $\{g_k\}_{k\in\mathcal{K}}$ implies $S = A I_{\mathcal{H}}$, note that by Definition 3.19, using (40), we can write $\langle Sx, x\rangle = A\langle x, x\rangle$ for all $x \in \mathcal{H}$. Therefore, $\langle (S - A I_{\mathcal{H}})x, x\rangle = 0$ for all $x \in \mathcal{H}$, which implies $S = A I_{\mathcal{H}}$.

To prove that $S = A I_{\mathcal{H}}$ implies tightness of $\{g_k\}_{k\in\mathcal{K}}$, we take the inner product with $x$ on both sides of (53) to obtain
$$\langle x, x\rangle = \frac{1}{A} \Big\langle \sum_{k\in\mathcal{K}} \langle x, g_k\rangle g_k,\; x \Big\rangle .$$
This is equivalent to
$$A\|x\|^2 = \sum_{k\in\mathcal{K}} |\langle x, g_k\rangle|^2,$$
which shows that $\{g_k\}_{k\in\mathcal{K}}$ is a tight frame for $\mathcal{H}$ with frame bound equal to $A$.

The practical importance of tight frames lies in the fact that they make the computation of the canonical dual frame, which in the general case requires inversion of an operator and application of this inverse to all frame elements, particularly simple. Specifically, we have
$$\tilde g_k = S^{-1} g_k = \frac{1}{A} I_{\mathcal{H}}\, g_k = \frac{1}{A}\, g_k .$$
A well-known example of a tight frame for $\mathbb{R}^2$ is the following:

Example 3.21 (The Mercedes-Benz frame [20]). The Mercedes-Benz frame (see Figure 4) is given by the following three vectors in $\mathbb{R}^2$:
$$g_1 = \begin{bmatrix} 0 \\ 1 \end{bmatrix}, \quad g_2 = \begin{bmatrix} -\sqrt{3}/2 \\ -1/2 \end{bmatrix}, \quad g_3 = \begin{bmatrix} \sqrt{3}/2 \\ -1/2 \end{bmatrix} . \tag{54}$$
To see that this frame is indeed tight, note that its analysis operator $T$ is given by the matrix
$$\mathbf{T} = \begin{bmatrix} 0 & 1 \\ -\sqrt{3}/2 & -1/2 \\ \sqrt{3}/2 & -1/2 \end{bmatrix} .$$
The adjoint $T^*$ of the analysis operator is given by the matrix
$$\mathbf{T}^H = \begin{bmatrix} 0 & -\sqrt{3}/2 & \sqrt{3}/2 \\ 1 & -1/2 & -1/2 \end{bmatrix} .$$

Figure 4: The Mercedes-Benz frame $\{g_1, g_2, g_3\}$ in $\mathbb{R}^2$.

Therefore, the frame operator $S$ is represented by the matrix
$$\mathbf{S} = \mathbf{T}^H \mathbf{T} = \begin{bmatrix} 0 & -\sqrt{3}/2 & \sqrt{3}/2 \\ 1 & -1/2 & -1/2 \end{bmatrix} \begin{bmatrix} 0 & 1 \\ -\sqrt{3}/2 & -1/2 \\ \sqrt{3}/2 & -1/2 \end{bmatrix} = \frac{3}{2} \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = \frac{3}{2}\, \mathbf{I}_2,$$
and hence $S = A I_{\mathbb{R}^2}$ with $A = 3/2$, which implies, by Theorem 3.20, that $\{g_1, g_2, g_3\}$ is a tight frame (for $\mathbb{R}^2$).

The design of tight frames is challenging in general. It is hence interesting to devise simple systematic methods for obtaining tight frames. The following theorem shows how we can obtain a tight frame from a given general frame.

Theorem 3.22. Let $\{g_k\}_{k\in\mathcal{K}}$ be a frame for the Hilbert space $\mathcal{H}$ with frame operator $S$. Denote the positive definite square root of $S^{-1}$ by $S^{-1/2}$. Then $\{S^{-1/2} g_k\}_{k\in\mathcal{K}}$ is a tight frame for $\mathcal{H}$ with frame bound $A = 1$, i.e.,
$$x = \sum_{k\in\mathcal{K}} \langle x, S^{-1/2} g_k\rangle S^{-1/2} g_k, \quad \text{for all } x \in \mathcal{H} .$$

Proof. Since $S^{-1}$ is self-adjoint and positive definite by Theorem 3.12, it has, by Lemma 3.10, a unique self-adjoint positive definite square root $S^{-1/2}$ that commutes with $S^{-1}$. Moreover, $S^{-1/2}$ also commutes with $S$, which can be seen as follows: multiplying both sides of $S^{-1/2} S^{-1} = S^{-1} S^{-1/2}$ by $S$ from the left and from the right yields
$$S\, S^{-1/2} S^{-1} S = S\, S^{-1} S^{-1/2} S$$
$$S\, S^{-1/2} = S^{-1/2} S .$$
The proof is then effected by noting the following:
$$x = S^{-1} S x = S^{-1/2} S^{-1/2} S x = S^{-1/2} S\, S^{-1/2} x = S^{-1/2} \sum_{k\in\mathcal{K}} \langle S^{-1/2} x, g_k\rangle g_k = \sum_{k\in\mathcal{K}} \langle x, S^{-1/2} g_k\rangle S^{-1/2} g_k .$$
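Theorem 3.22 is easy to try out numerically in finite dimensions, where $S^{-1/2}$ can be formed from an eigendecomposition of the frame operator. The example frame below is a hypothetical non-tight one; after rescaling by $S^{-1/2}$, its frame operator becomes the identity:

```python
import numpy as np

# Hypothetical non-tight frame for R^2 (rows are the g_k).
G = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
S = G.T @ G

# Positive definite inverse square root S^{-1/2} via the eigendecomposition.
w, V = np.linalg.eigh(S)
S_inv_half = V @ np.diag(w ** -0.5) @ V.T

# Row k of G_tight is (S^{-1/2} g_k)^T (valid since S^{-1/2} is symmetric).
G_tight = G @ S_inv_half

# The new frame operator is the identity: tight frame with A = 1.
print(np.allclose(G_tight.T @ G_tight, np.eye(2)))   # True
```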

It is evident that every ONB is a tight frame with $A = 1$. Note, however, that conversely a tight frame (even with $A = 1$) need not be an orthonormal or orthogonal basis, as can be seen from Example 3.4. However, as the next theorem shows, a tight frame with $A = 1$ and $\|g_k\| = 1$ for all $k \in \mathcal{K}$ is necessarily an ONB.

Theorem 3.23. A tight frame $\{g_k\}_{k\in\mathcal{K}}$ for the Hilbert space $\mathcal{H}$ with $A = 1$ and $\|g_k\| = 1$, for all $k \in \mathcal{K}$, is an ONB for $\mathcal{H}$.

Proof. Combining
$$\langle S g_k, g_k\rangle = A \|g_k\|^2 = \|g_k\|^2$$
with
$$\langle S g_k, g_k\rangle = \sum_{j\in\mathcal{K}} |\langle g_k, g_j\rangle|^2 = \|g_k\|^4 + \sum_{j\ne k} |\langle g_k, g_j\rangle|^2,$$
we obtain
$$\|g_k\|^4 + \sum_{j\ne k} |\langle g_k, g_j\rangle|^2 = \|g_k\|^2 .$$
Since $\|g_k\|^2 = 1$ for all $k \in \mathcal{K}$, it follows that $\sum_{j\ne k} |\langle g_k, g_j\rangle|^2 = 0$ for all $k \in \mathcal{K}$. This implies that the elements of $\{g_j\}_{j\in\mathcal{K}}$ are necessarily orthogonal to each other.

There is an elegant result that tells us that every tight frame with frame bound $A = 1$ can be realized as an orthogonal projection of an ONB from a space of larger dimension. This result is known as Naimark's theorem. Here we state the finite-dimensional version of this theorem; for the infinite-dimensional version see [24].

Theorem 3.24 (Naimark, [24, Prop. 1.1]). Let $N > M$. Suppose that the set $\{g_1, \ldots, g_N\}$, $g_k \in \mathcal{H}$, $k = 1, \ldots, N$, is a tight frame for an $M$-dimensional Hilbert space $\mathcal{H}$ with frame bound $A = 1$. Then, there exists an $N$-dimensional Hilbert space $\mathcal{K} \supset \mathcal{H}$ and an ONB $\{e_1, \ldots, e_N\}$ for $\mathcal{K}$ such that $P e_k = g_k$, $k = 1, \ldots, N$, where $P : \mathcal{K} \to \mathcal{K}$ is the orthogonal projection onto $\mathcal{H}$.

We omit the proof and illustrate the theorem by an example instead.

Example 3.25. Consider the Hilbert space $\mathcal{K} = \mathbb{R}^3$, and assume that $\mathcal{H} \subset \mathcal{K}$ is the plane spanned by the vectors $[1\;0\;0]^T$ and $[0\;1\;0]^T$, i.e.,
$$\mathcal{H} = \operatorname{span}\{[1\;0\;0]^T, [0\;1\;0]^T\} .$$
We can construct a tight frame for $\mathcal{H}$ with three elements and frame bound $A = 1$ if we rescale the Mercedes-Benz frame from Example 3.21. Specifically, consider the vectors $g_k$, $k = 1, 2, 3$, defined in (54) and let $g_k' \triangleq \sqrt{2/3}\, g_k$, $k = 1, 2, 3$.
In the following, we think of the two-dimensional vectors $g_k'$ as being embedded into the three-dimensional space $\mathcal{K}$, with the third coordinate (in the standard basis of $\mathcal{K}$) equal to zero. Clearly, $\{g_k'\}_{k=1}^3$ is a tight frame for $\mathcal{H}$ with frame bound $A = 1$. Now consider the following three vectors in $\mathcal{K}$:
$$e_1 = \begin{bmatrix} 0 \\ \sqrt{2/3} \\ 1/\sqrt{3} \end{bmatrix}, \quad e_2 = \begin{bmatrix} -1/\sqrt{2} \\ -1/\sqrt{6} \\ 1/\sqrt{3} \end{bmatrix}, \quad e_3 = \begin{bmatrix} 1/\sqrt{2} \\ -1/\sqrt{6} \\ 1/\sqrt{3} \end{bmatrix} .$$
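Both claims about these vectors, orthonormality of $\{e_k\}_{k=1}^3$ and the projection property $g_k' = P e_k$, can be confirmed numerically; a sketch:

```python
import numpy as np

s2, s3, s6 = np.sqrt([2.0, 3.0, 6.0])

# Columns are e_1, e_2, e_3.
E = np.column_stack([
    [0.0, np.sqrt(2 / 3), 1 / s3],
    [-1 / s2, -1 / s6, 1 / s3],
    [1 / s2, -1 / s6, 1 / s3],
])

# {e_k} is an ONB for K = R^3.
print(np.allclose(E.T @ E, np.eye(3)))   # True

# The orthogonal projection onto H (the x1-x2 plane) maps e_k to g_k'.
P = np.diag([1.0, 1.0, 0.0])
Gp = np.sqrt(2 / 3) * np.array([[0.0, 1.0, 0.0],
                                [-s3 / 2, -0.5, 0.0],
                                [s3 / 2, -0.5, 0.0]]).T  # columns: rescaled Mercedes-Benz
print(np.allclose(P @ E, Gp))            # True
```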

Direct calculation reveals that $\{e_k\}_{k=1}^3$ is an ONB for $\mathcal{K}$. Observe that the frame vectors $g_k'$, $k = 1, 2, 3$, can be obtained from the ONB vectors $e_k$, $k = 1, 2, 3$, by applying the orthogonal projection from $\mathcal{K}$ onto $\mathcal{H}$,
$$\mathbf{P} \triangleq \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix},$$
according to $g_k' = \mathbf{P} e_k$, $k = 1, 2, 3$. This illustrates Naimark's theorem.

3.5 Exact Frames and Biorthonormality

In Section 2.2 we studied expansions of signals in $\mathbb{C}^M$ into (not necessarily orthogonal) bases. The main results we established in this context can be summarized as follows:

1. The number of vectors in a basis is always equal to the dimension of the Hilbert space under consideration. Every set of vectors that spans $\mathbb{C}^M$ and has more than $M$ vectors is necessarily redundant, i.e., the vectors in this set are linearly dependent. Removal of an arbitrary vector from a basis for $\mathbb{C}^M$ leaves a set that no longer spans $\mathbb{C}^M$.

2. For a given basis $\{e_k\}_{k=1}^M$, every signal $x \in \mathbb{C}^M$ has a unique representation according to
$$x = \sum_{k=1}^{M} \langle x, e_k\rangle \tilde e_k . \tag{55}$$
The basis $\{e_k\}_{k=1}^M$ and its dual basis $\{\tilde e_k\}_{k=1}^M$ satisfy the biorthonormality relation (20).

The theory of ONBs in infinite-dimensional spaces is well-developed. In this section, we ask how the concept of general (i.e., not necessarily orthogonal) bases can be extended to infinite-dimensional spaces. Clearly, in the infinite-dimensional case, we cannot simply say that the number of elements in a basis must be equal to the dimension of the Hilbert space. However, we can use the property that removing an element from a basis leaves us with an incomplete set of vectors to motivate the following definition.

Definition 3.26. Let $\{g_k\}_{k\in\mathcal{K}}$ be a frame for the Hilbert space $\mathcal{H}$. We call the frame $\{g_k\}_{k\in\mathcal{K}}$ exact if, for all $m \in \mathcal{K}$, the set $\{g_k\}_{k\ne m}$ is incomplete for $\mathcal{H}$; we call the frame $\{g_k\}_{k\in\mathcal{K}}$ inexact if there is at least one element $g_m$ that can be removed from the frame, so that the set $\{g_k\}_{k\ne m}$ is again a frame for $\mathcal{H}$.
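In finite dimensions, completeness of $\{g_k\}_{k\ne m}$ is a rank condition, so exactness can be tested by deleting each vector in turn. A sketch with hypothetical example frames (a redundant frame is inexact, a basis is exact):

```python
import numpy as np

def still_complete_after_removal(G):
    """For each m, does {g_k}_{k != m} still span the space? (finite-dim check)"""
    M = G.shape[1]
    return [np.linalg.matrix_rank(np.delete(G, m, axis=0)) == M
            for m in range(G.shape[0])]

# A redundant frame for R^2 (three vectors) is inexact: here every vector is removable.
redundant = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
print(still_complete_after_removal(redundant))   # [True, True, True]

# A basis (two vectors) is an exact frame: removing any vector destroys completeness.
basis = np.array([[1.0, 0.0], [1.0, 1.0]])
print(still_complete_after_removal(basis))       # [False, False]
```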
There are two more properties of general bases in finite-dimensional spaces that carry over to the infinite-dimensional case, namely uniqueness of the representation in the sense of (55) and biorthonormality between the frame and its canonical dual. To show that the representation of a signal in an exact frame is unique and that an exact frame is biorthonormal to its canonical dual frame, we will need the following two lemmas. Let $\{g_k\}_{k\in\mathcal{K}}$ and $\{\tilde g_k\}_{k\in\mathcal{K}}$ be canonical dual frames. The first lemma below states that, for a fixed $x \in \mathcal{H}$, among all possible expansion coefficient sequences $\{c_k\}_{k\in\mathcal{K}}$ satisfying $x = \sum_{k\in\mathcal{K}} c_k g_k$, the coefficients $c_k = \langle x, \tilde g_k\rangle$ have minimum $\ell^2$-norm.

Lemma 3.27 ([5]). Let $\{g_k\}_{k\in\mathcal{K}}$ be a frame for the Hilbert space $\mathcal{H}$ and $\{\tilde g_k\}_{k\in\mathcal{K}}$ its canonical dual frame. For a fixed $x \in \mathcal{H}$, let $c_k = \langle x, \tilde g_k\rangle$, so that $x = \sum_{k\in\mathcal{K}} c_k g_k$. If it is possible to find scalars $\{a_k\}_{k\in\mathcal{K}} \ne \{c_k\}_{k\in\mathcal{K}}$ such that $x = \sum_{k\in\mathcal{K}} a_k g_k$, then we must have
$$\sum_{k\in\mathcal{K}} |a_k|^2 = \sum_{k\in\mathcal{K}} |c_k|^2 + \sum_{k\in\mathcal{K}} |c_k - a_k|^2 . \tag{56}$$

Proof. We have
$$c_k = \langle x, \tilde g_k\rangle = \langle x, S^{-1} g_k\rangle = \langle S^{-1} x, g_k\rangle = \langle \tilde x, g_k\rangle$$
with $\tilde x = S^{-1} x$. Therefore,
$$\langle x, \tilde x\rangle = \Big\langle \sum_{k\in\mathcal{K}} c_k g_k,\; \tilde x \Big\rangle = \sum_{k\in\mathcal{K}} c_k \langle g_k, \tilde x\rangle = \sum_{k\in\mathcal{K}} c_k c_k^* = \sum_{k\in\mathcal{K}} |c_k|^2$$
and
$$\langle x, \tilde x\rangle = \Big\langle \sum_{k\in\mathcal{K}} a_k g_k,\; \tilde x \Big\rangle = \sum_{k\in\mathcal{K}} a_k \langle g_k, \tilde x\rangle = \sum_{k\in\mathcal{K}} a_k c_k^* .$$
We can therefore conclude that
$$\sum_{k\in\mathcal{K}} |c_k|^2 = \sum_{k\in\mathcal{K}} a_k c_k^* = \sum_{k\in\mathcal{K}} a_k^* c_k . \tag{57}$$
Hence,
$$\sum_{k\in\mathcal{K}} |c_k|^2 + \sum_{k\in\mathcal{K}} |c_k - a_k|^2 = \sum_{k\in\mathcal{K}} |c_k|^2 + \sum_{k\in\mathcal{K}} (c_k - a_k)(c_k - a_k)^* = \sum_{k\in\mathcal{K}} |c_k|^2 + \sum_{k\in\mathcal{K}} |c_k|^2 - \sum_{k\in\mathcal{K}} c_k a_k^* - \sum_{k\in\mathcal{K}} c_k^* a_k + \sum_{k\in\mathcal{K}} |a_k|^2 .$$
Using (57), we get
$$\sum_{k\in\mathcal{K}} |c_k|^2 + \sum_{k\in\mathcal{K}} |c_k - a_k|^2 = \sum_{k\in\mathcal{K}} |a_k|^2 .$$

Note that this lemma implies $\sum_{k\in\mathcal{K}} |a_k|^2 > \sum_{k\in\mathcal{K}} |c_k|^2$, i.e., the coefficient sequence $\{a_k\}_{k\in\mathcal{K}}$ has strictly larger $\ell^2$-norm than the coefficient sequence $\{c_k = \langle x, \tilde g_k\rangle\}_{k\in\mathcal{K}}$.

Lemma 3.28 ([5]). Let $\{g_k\}_{k\in\mathcal{K}}$ be a frame for the Hilbert space $\mathcal{H}$ and $\{\tilde g_k\}_{k\in\mathcal{K}}$ its canonical dual frame. Then, for each $m \in \mathcal{K}$, we have
$$\sum_{k\ne m} |\langle g_m, \tilde g_k\rangle|^2 = \frac{1 - |\langle g_m, \tilde g_m\rangle|^2 - |1 - \langle g_m, \tilde g_m\rangle|^2}{2} .$$

Proof. We can represent $g_m$ in two different ways. Obviously, $g_m = \sum_{k\in\mathcal{K}} a_k g_k$ with $a_m = 1$ and $a_k = 0$ for $k \ne m$, so that $\sum_{k\in\mathcal{K}} |a_k|^2 = 1$. Furthermore, we can write $g_m = \sum_{k\in\mathcal{K}} c_k g_k$ with $c_k = \langle g_m, \tilde g_k\rangle$. From (56) it then follows that
$$1 = \sum_{k\in\mathcal{K}} |a_k|^2 = \sum_{k\in\mathcal{K}} |c_k|^2 + \sum_{k\in\mathcal{K}} |c_k - a_k|^2 = \sum_{k\in\mathcal{K}} |c_k|^2 + |c_m - a_m|^2 + \sum_{k\ne m} |c_k - a_k|^2$$
$$= \sum_{k\ne m} |\langle g_m, \tilde g_k\rangle|^2 + |\langle g_m, \tilde g_m\rangle|^2 + |\langle g_m, \tilde g_m\rangle - 1|^2 + \sum_{k\ne m} |\langle g_m, \tilde g_k\rangle|^2$$
$$= 2 \sum_{k\ne m} |\langle g_m, \tilde g_k\rangle|^2 + |\langle g_m, \tilde g_m\rangle|^2 + |1 - \langle g_m, \tilde g_m\rangle|^2,$$
and hence
$$\sum_{k\ne m} |\langle g_m, \tilde g_k\rangle|^2 = \frac{1 - |\langle g_m, \tilde g_m\rangle|^2 - |1 - \langle g_m, \tilde g_m\rangle|^2}{2} .$$
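Lemma 3.27 can be illustrated numerically in finite dimensions: perturbing the canonical-dual coefficients by a null-space vector of the synthesis map yields another expansion of $x$ with strictly larger $\ell^2$-norm, and (56) holds exactly. A sketch with a hypothetical frame:

```python
import numpy as np

# Hypothetical redundant frame for R^2; rows of T are the frame vectors g_k.
T = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
x = np.array([1.0, 2.0])

# Canonical-dual coefficients c_k = <x, g~_k> = (T S^{-1} x)_k.
c = T @ np.linalg.solve(T.T @ T, x)

# Any other expansion differs by a null-space vector of the synthesis map T^T:
# here g_1 + g_2 - g_3 = 0, so n = (1, 1, -1) satisfies T.T @ n = 0.
n = np.array([1.0, 1.0, -1.0])
a = c + 0.7 * n

print(np.allclose(T.T @ a, x))   # True: {a_k} also synthesizes x
# Pythagorean relation (56): ||a||^2 = ||c||^2 + ||c - a||^2, hence ||a|| > ||c||.
print(np.isclose(np.sum(a**2), np.sum(c**2) + np.sum((c - a)**2)))   # True
print(np.sum(a**2) > np.sum(c**2))                                   # True
```

The identity holds because $\{c_k\}$ lies in the range of the analysis map while the perturbation lies in its orthogonal complement.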

We are now able to formulate an equivalent condition for a frame to be exact.

Theorem 3.29 ([5]). Let $\{g_k\}_{k\in\mathcal{K}}$ be a frame for the Hilbert space $\mathcal{H}$ and $\{\tilde g_k\}_{k\in\mathcal{K}}$ its canonical dual frame. Then,

1. $\{g_k\}_{k\in\mathcal{K}}$ is exact if and only if $\langle g_m, \tilde g_m\rangle = 1$ for all $m \in \mathcal{K}$;
2. $\{g_k\}_{k\in\mathcal{K}}$ is inexact if and only if there exists at least one $m \in \mathcal{K}$ such that $\langle g_m, \tilde g_m\rangle \ne 1$.

Proof. We first show that if $\langle g_m, \tilde g_m\rangle = 1$ for all $m \in \mathcal{K}$, then $\{g_k\}_{k\ne m}$ is incomplete for $\mathcal{H}$ (for all $m \in \mathcal{K}$) and hence $\{g_k\}_{k\in\mathcal{K}}$ is an exact frame for $\mathcal{H}$. Indeed, fix an arbitrary $m \in \mathcal{K}$. From Lemma 3.28 we have
$$\sum_{k\ne m} |\langle g_m, \tilde g_k\rangle|^2 = \frac{1 - |\langle g_m, \tilde g_m\rangle|^2 - |1 - \langle g_m, \tilde g_m\rangle|^2}{2} .$$
Since $\langle g_m, \tilde g_m\rangle = 1$, we have $\sum_{k\ne m} |\langle g_m, \tilde g_k\rangle|^2 = 0$, so that $\langle g_m, \tilde g_k\rangle = \langle \tilde g_m, g_k\rangle = 0$ for all $k \ne m$. But $\tilde g_m \ne 0$ since $\langle g_m, \tilde g_m\rangle = 1$. Therefore, $\{g_k\}_{k\ne m}$ is incomplete for $\mathcal{H}$, because $\tilde g_m \ne 0$ is orthogonal to all elements of the set $\{g_k\}_{k\ne m}$.

Next, we show that if there exists at least one $m \in \mathcal{K}$ such that $\langle g_m, \tilde g_m\rangle \ne 1$, then $\{g_k\}_{k\in\mathcal{K}}$ is inexact. More specifically, we will show that $\{g_k\}_{k\ne m}$ is still a frame for $\mathcal{H}$ if $\langle g_m, \tilde g_m\rangle \ne 1$. We start by noting that
$$g_m = \sum_{k\in\mathcal{K}} \langle g_m, \tilde g_k\rangle g_k = \langle g_m, \tilde g_m\rangle g_m + \sum_{k\ne m} \langle g_m, \tilde g_k\rangle g_k . \tag{58}$$
If $\langle g_m, \tilde g_m\rangle \ne 1$, (58) can be rewritten as
$$g_m = \frac{1}{1 - \langle g_m, \tilde g_m\rangle} \sum_{k\ne m} \langle g_m, \tilde g_k\rangle g_k,$$
and for every $x \in \mathcal{H}$ we have, by the Cauchy-Schwarz inequality,
$$|\langle x, g_m\rangle|^2 = \frac{1}{|1 - \langle g_m, \tilde g_m\rangle|^2} \Big| \sum_{k\ne m} \langle g_m, \tilde g_k\rangle^* \langle x, g_k\rangle \Big|^2 \le \frac{1}{|1 - \langle g_m, \tilde g_m\rangle|^2} \sum_{k\ne m} |\langle g_m, \tilde g_k\rangle|^2 \sum_{k\ne m} |\langle x, g_k\rangle|^2 .$$
Therefore,
$$\sum_{k\in\mathcal{K}} |\langle x, g_k\rangle|^2 = |\langle x, g_m\rangle|^2 + \sum_{k\ne m} |\langle x, g_k\rangle|^2 \le \frac{1}{|1 - \langle g_m, \tilde g_m\rangle|^2} \sum_{k\ne m} |\langle g_m, \tilde g_k\rangle|^2 \sum_{k\ne m} |\langle x, g_k\rangle|^2 + \sum_{k\ne m} |\langle x, g_k\rangle|^2$$
$$= \underbrace{\Bigg( 1 + \frac{1}{|1 - \langle g_m, \tilde g_m\rangle|^2} \sum_{k\ne m} |\langle g_m, \tilde g_k\rangle|^2 \Bigg)}_{C} \sum_{k\ne m} |\langle x, g_k\rangle|^2 = C \sum_{k\ne m} |\langle x, g_k\rangle|^2$$

or, equivalently,
$$\frac{1}{C} \sum_{k\in\mathcal{K}} |\langle x, g_k\rangle|^2 \le \sum_{k\ne m} |\langle x, g_k\rangle|^2 .$$
With (36) it follows that
$$\frac{A}{C}\|x\|^2 \le \frac{1}{C} \sum_{k\in\mathcal{K}} |\langle x, g_k\rangle|^2 \le \sum_{k\ne m} |\langle x, g_k\rangle|^2 \le \sum_{k\in\mathcal{K}} |\langle x, g_k\rangle|^2 \le B\|x\|^2, \tag{59}$$
where $A$ and $B$ are the frame bounds of the frame $\{g_k\}_{k\in\mathcal{K}}$. Note that (trivially) $C > 0$; moreover, $C < \infty$ since $\langle g_m, \tilde g_m\rangle \ne 1$ and $\sum_{k\ne m} |\langle g_m, \tilde g_k\rangle|^2 < \infty$ as a consequence of $\{\tilde g_k\}_{k\in\mathcal{K}}$ being a frame for $\mathcal{H}$. This implies that $A/C > 0$ and, therefore, (59) shows that $\{g_k\}_{k\ne m}$ is a frame with frame bounds $A/C$ and $B$.

To see that, conversely, exactness of $\{g_k\}_{k\in\mathcal{K}}$ implies that $\langle g_m, \tilde g_m\rangle = 1$ for all $m \in \mathcal{K}$, suppose that $\{g_k\}_{k\in\mathcal{K}}$ is exact and $\langle g_m, \tilde g_m\rangle \ne 1$ for at least one $m \in \mathcal{K}$. But the condition $\langle g_m, \tilde g_m\rangle \ne 1$ for at least one $m \in \mathcal{K}$ implies, by what we just proved, that $\{g_k\}_{k\in\mathcal{K}}$ is inexact, which results in a contradiction. It remains to show that $\{g_k\}_{k\in\mathcal{K}}$ inexact implies $\langle g_m, \tilde g_m\rangle \ne 1$ for at least one $m \in \mathcal{K}$. Suppose that $\{g_k\}_{k\in\mathcal{K}}$ is inexact and $\langle g_m, \tilde g_m\rangle = 1$ for all $m \in \mathcal{K}$. But the condition $\langle g_m, \tilde g_m\rangle = 1$ for all $m \in \mathcal{K}$ implies that $\{g_k\}_{k\in\mathcal{K}}$ is exact, which again results in a contradiction.

Now we are ready to state the two main results of this section. The first result generalizes the biorthonormality relation (20) to the infinite-dimensional setting.

Corollary 3.30 ([5]). Let $\{g_k\}_{k\in\mathcal{K}}$ be a frame for the Hilbert space $\mathcal{H}$. If $\{g_k\}_{k\in\mathcal{K}}$ is exact, then $\{g_k\}_{k\in\mathcal{K}}$ and its canonical dual $\{\tilde g_k\}_{k\in\mathcal{K}}$ are biorthonormal, i.e.,
$$\langle g_m, \tilde g_k\rangle = \begin{cases} 1, & \text{if } k = m \\ 0, & \text{if } k \ne m. \end{cases}$$
Conversely, if $\{g_k\}_{k\in\mathcal{K}}$ and $\{\tilde g_k\}_{k\in\mathcal{K}}$ are biorthonormal, then $\{g_k\}_{k\in\mathcal{K}}$ is exact.

Proof. If $\{g_k\}_{k\in\mathcal{K}}$ is exact, then biorthonormality follows by noting that Theorem 3.29 implies $\langle g_m, \tilde g_m\rangle = 1$ for all $m \in \mathcal{K}$, and Lemma 3.28 implies $\sum_{k\ne m} |\langle g_m, \tilde g_k\rangle|^2 = 0$ for all $m \in \mathcal{K}$ and thus $\langle g_m, \tilde g_k\rangle = 0$ for all $k \ne m$. To show that, conversely, biorthonormality of $\{g_k\}_{k\in\mathcal{K}}$ and $\{\tilde g_k\}_{k\in\mathcal{K}}$ implies that the frame $\{g_k\}_{k\in\mathcal{K}}$ is exact, we simply note that $\langle g_m, \tilde g_m\rangle = 1$ for all $m \in \mathcal{K}$, by Theorem 3.29, implies that $\{g_k\}_{k\in\mathcal{K}}$ is exact.

The second main result in this section states that the expansion into an exact frame is unique and, therefore, the concept of an exact frame generalizes that of a basis to infinite-dimensional spaces.

Theorem 3.31 ([5]).
If $\{g_k\}_{k\in\mathcal{K}}$ is an exact frame for the Hilbert space $\mathcal{H}$ and $x = \sum_{k\in\mathcal{K}} c_k g_k$ with $x \in \mathcal{H}$, then the coefficients $\{c_k\}_{k\in\mathcal{K}}$ are unique and given by $c_k = \langle x, \tilde g_k\rangle$, where $\{\tilde g_k\}_{k\in\mathcal{K}}$ is the canonical dual frame to $\{g_k\}_{k\in\mathcal{K}}$.

Proof. We know from (50) that $x$ can be written as $x = \sum_{k\in\mathcal{K}} \langle x, \tilde g_k\rangle g_k$. Now assume that there is another set of coefficients $\{c_k\}_{k\in\mathcal{K}}$ such that
$$x = \sum_{k\in\mathcal{K}} c_k g_k . \tag{60}$$