A Simple Proof of the Fredholm Alternative and a Characterization of the Fredholm Operators

A. G. Ramm

1. INTRODUCTION. The aim of our paper is to prove the Fredholm alternative and to give a characterization of the class of Fredholm operators in a very simple way, by a reduction of the operator equation with a Fredholm operator to a linear algebraic system in a finite-dimensional space. Our emphasis is on the simplicity of the argument. The paper is written for a wide audience.

The Fredholm alternative is a classical, well-known result whose proof for linear equations of the form $(I + T)u = f$, where $T$ is a compact operator in a Banach space, can be found in most texts on functional analysis, of which we mention just [1] and [2].

A characterization of the set of Fredholm operators is in [1, p. 500], but it is not given in most texts. The proofs in the cited books follow the classical Riesz argument in the construction of the Riesz-Fredholm theory. Though beautiful, this theory is not very simple.

Our aim is to give a very short and simple proof of the Fredholm alternative and of a characterization of the class of Fredholm operators. We give the argument for the case of a Hilbert space, but the proof is quite easy to adjust to the case of a Banach space.

The idea is to reduce the problem to one for linear algebraic systems in the finite-dimensional case, for which the Fredholm alternative is a basic fact: in a finite-dimensional space $\mathbb{R}^N$, property (1.4) in Definition 1.1 of Fredholm operators is a consequence of the closedness of any finite-dimensional linear subspace, since $R(A)$ is such a subspace in $\mathbb{R}^N$, while property (1.3) is a consequence of the simple formulas $r(A) = r(A^*)$ and $n(A) = N - r(A)$, valid for matrices, where $r(A)$ is the rank of $A$ and $n(A)$ is the dimension of the null-space of $A$.

Throughout the paper $A$ is a linear bounded operator, $A^*$ is its adjoint, and $N(A)$ and $R(A)$ are the null-space and the range of $A$, respectively. Recall that an operator $F$ with $\dim R(F) < \infty$ is called a finite-rank operator; its rank is $n := \dim R(F)$. We call a linear bounded operator $B$ on $H$ an isomorphism if it is a bicontinuous injection of $H$ onto $H$, that is, $B^{-1}$ is defined on all of $H$ and is bounded.

If $e_j$, $1 \le j \le n$, is an orthonormal basis of $R(F)$, then $Fu = \sum_{j=1}^{n}(Fu, e_j)e_j$, so

$$Fu = \sum_{j=1}^{n}(u, F^*e_j)e_j, \tag{1.1}$$

and

$$F^*u = \sum_{j=1}^{n}(u, e_j)F^*e_j, \tag{1.2}$$

where $(u, v)$ is the inner product in $H$.

Definition 1.1. An operator $A$ is called Fredholm if and only if

$$\dim N(A) = \dim N(A^*) := n < \infty, \tag{1.3}$$

and

$$R(A) = \overline{R(A)}, \qquad R(A^*) = \overline{R(A^*)}, \tag{1.4}$$

where the overline stands for the closure. Recall that

$$H = \overline{R(A)} \oplus N(A^*), \qquad H = \overline{R(A^*)} \oplus N(A), \tag{1.5}$$

for any linear densely-defined (i.e., having a domain of definition that is dense in $H$) operator $A$, not necessarily bounded. For a Fredholm operator $A$ one has:

$$H = R(A) \oplus N(A^*), \qquad H = R(A^*) \oplus N(A). \tag{1.6}$$
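The finite-dimensional facts invoked above can be checked directly. The following NumPy sketch (an illustration added here, not part of the original note; the rank-deficient example matrix is arbitrary) verifies $r(A) = r(A^*)$, $n(A) = N - r(A)$, and the orthogonal decomposition (1.6) for a matrix acting on $\mathbb{R}^N$.

```python
# Finite-dimensional check of r(A) = r(A^*), n(A) = N - r(A), and (1.6).
# Illustrative sketch only; the matrix A is an arbitrary rank-deficient example.
import numpy as np

rng = np.random.default_rng(0)
N = 6
A = rng.standard_normal((N, 4)) @ rng.standard_normal((4, N))   # rank 4 on R^6

U, s, Vt = np.linalg.svd(A)
r = int(np.sum(s > 1e-10))                  # r(A)

print(r == np.linalg.matrix_rank(A.T))      # r(A) = r(A^*)
NA = Vt[r:, :].T                            # orthonormal basis of N(A)
print(NA.shape[1] == N - r)                 # n(A) = N - r(A)

RA, NAstar = U[:, :r], U[:, r:]             # bases of R(A) and N(A^*)
print(np.allclose(RA.T @ NAstar, 0))        # R(A) is orthogonal to N(A^*)
print(RA.shape[1] + NAstar.shape[1] == N)   # together they decompose R^N, cf. (1.6)
```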

Consider the equations:

$$Au = f, \tag{1.7}$$

$$Au_0 = 0, \tag{1.8}$$

$$A^*v = g, \tag{1.9}$$

$$A^*v_0 = 0. \tag{1.10}$$

Let us formulate the Fredholm alternative:

Theorem 1.1. If $B$ is an isomorphism and $F$ is a finite-rank operator, then $A = B + F$ is Fredholm. For any Fredholm operator $A$ the following (Fredholm) alternative holds:

1) either (1.8) has only the trivial solution $u_0 = 0$, and then (1.10) has only the trivial solution, and equations (1.7) and (1.9) are uniquely solvable for any right-hand sides $f$ and $g$, or

2) (1.8) has exactly $n > 0$ linearly independent solutions $\{\phi_j\}$, $1 \le j \le n$, and then (1.10) also has $n$ linearly independent solutions $\{\psi_j\}$, $1 \le j \le n$; equations (1.7) and (1.9) are solvable if and only if $(f, \psi_j) = 0$, $1 \le j \le n$, and, correspondingly, $(g, \phi_j) = 0$, $1 \le j \le n$. If they are solvable, their solutions are not unique and their general solutions are, respectively, $u = u_p + \sum_{j=1}^{n} a_j\phi_j$ and $v = v_p + \sum_{j=1}^{n} b_j\psi_j$, where $a_j$ and $b_j$ are arbitrary constants, and $u_p$ and $v_p$ are some particular solutions to (1.7) and (1.9), respectively.

Let us give a characterization of the class of Fredholm operators, that is, a necessary and sufficient condition for $A$ to be Fredholm.

Theorem 1.2. A linear bounded operator $A$ is Fredholm if and only if $A = B + F$, where $B$ is an isomorphism and $F$ has finite rank.

We prove these theorems in Section 2.

2. PROOFS.

Proof of Theorem 1.2. From the proof of Theorem 1.1 that follows, we see that if $A = B + F$, where $B$ is an isomorphism and $F$ has finite rank, then $A$ is Fredholm. To prove the converse, choose some orthonormal bases $\{\phi_j\}$ and $\{\psi_j\}$ in $N(A)$ and $N(A^*)$, respectively, using assumption (1.3). Define

$$Bu := Au - \sum_{j=1}^{n}(u, \phi_j)\psi_j := Au - Fu. \tag{2.1}$$

Clearly $F$ has finite rank, and $A = B + F$. Let us prove that $B$ is an isomorphism. If this is done, then Theorem 1.2 is proved.

We need to prove that $N(B) = \{0\}$ and $R(B) = H$. It is known (Banach's theorem [2, p. 49]) that if $B$ is a linear injection and $R(B) = H$, then $B^{-1}$ is a bounded operator, so $B$ is an isomorphism.

Suppose $Bu = 0$. Then $Au = 0$ (so that $u \in N(A)$), and $Fu = 0$ (because, according to (1.6), $Au$ is orthogonal to $Fu$). Since $\{\psi_j\}$, $1 \le j \le n$, is a linearly independent system, the equation $Fu = 0$ implies $(u, \phi_j) = 0$ for all $1 \le j \le n$, that is, $u$ is orthogonal to $N(A)$. If $u \in N(A)$ and at the same time it is orthogonal to $N(A)$, then $u = 0$. So, $N(B) = \{0\}$.

Let us now prove that $R(B) = H$. Take an arbitrary $f \in H$ and, using (1.6), represent it as $f = f_1 + f_2$, where $f_1 \in R(A)$ and $f_2 \in N(A^*)$ are orthogonal. Thus there are a $u_p \in H$ and some constants $c_j$ such that $f = Au_p + \sum_{j=1}^{n} c_j\psi_j$. We choose $u_p$ orthogonal to $N(A)$; this is clearly possible. We claim that $Bu = f$, where $u := u_p - \sum_{j=1}^{n} c_j\phi_j$. Indeed, using the orthonormality of the system $\phi_j$, $1 \le j \le n$, one gets $Bu = Au_p + \sum_{j=1}^{n} c_j\psi_j = f$. Thus we have proved that $R(B) = H$.
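As a numerical sanity check of the construction (2.1) (an added illustration, not part of the proof; the rank-deficient matrix $A$ is an arbitrary example and matrices stand in for operators on $H$), the sketch below builds $Fu = \sum_j (u, \phi_j)\psi_j$ from orthonormal bases of $N(A)$ and $N(A^*)$ and confirms that $B = A - F$ is a bijection.

```python
# Check of (2.1) in finite dimensions: B = A - F is invertible when F is built
# from orthonormal bases of N(A) and N(A^*). Illustrative example only.
import numpy as np

rng = np.random.default_rng(1)
N = 5
A = rng.standard_normal((N, 3)) @ rng.standard_normal((3, N))   # rank 3, so n = 2

U, s, Vt = np.linalg.svd(A)
r = int(np.sum(s > 1e-10))
Phi = Vt[r:, :].T      # columns: orthonormal basis {phi_j} of N(A)
Psi = U[:, r:]         # columns: orthonormal basis {psi_j} of N(A^*)

F = Psi @ Phi.T        # F u = sum_j (u, phi_j) psi_j
B = A - F              # B as in (2.1); note A = B + F

print(np.linalg.matrix_rank(B) == N)   # B is injective and onto, i.e., an isomorphism
f = rng.standard_normal(N)
u = np.linalg.solve(B, f)              # B^{-1} is defined on all of R^N
print(np.allclose(B @ u, f))
```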

We now prove Theorem 1.1.

Proof of Theorem 1.1. If $A$ is Fredholm, then the statements 1) and 2) of Theorem 1.1 are equivalent to (1.3) and (1.4), since (1.6) follows from (1.4).

Let us prove that if $A = B + F$, where $B$ is an isomorphism and $F$ has finite rank, then $A$ is Fredholm. Both properties (1.3) and (1.4) are known for operators in finite-dimensional spaces. Therefore, to prove that $A$ is Fredholm it is sufficient to prove that equations (1.7) and (1.9) are equivalent to linear algebraic systems in a finite-dimensional space. Let us prove this equivalence.

We start with equation (1.7), denote $Bu := w$, and get an equation

$$w + Tw = f, \tag{2.2}$$

that is equivalent to (1.7). Here, $T := FB^{-1}$ is a finite-rank operator that has the same rank $n$ as $F$, because $B$ is an isomorphism. Equation (2.2) is equivalent to (1.7): each solution to (1.7) is in one-to-one correspondence with a solution of (2.2), since $B$ is an isomorphism. In particular, the dimensions of the null-spaces $N(A)$ and $N(I + T)$ are equal, $R(A) = R(I + T)$, and $R(I + T)$ is closed. The last claim is a consequence of the Fredholm alternative for finite-dimensional linear equations, but we give an independent proof of the closedness of $R(A)$ at the end of the paper.

Since $T$ is a finite-rank operator, the dimension of $N(I + T)$ is finite and is not greater than the rank of $T$. Indeed, if $u = -Tu$ and $T$ has finite rank $n$, then $Tu = \sum_{j=1}^{n}(Tu, e_j)e_j$, where $\{e_j\}$, $1 \le j \le n$, is an orthonormal basis of $R(T)$, and $u = -\sum_{j=1}^{n}(u, T^*e_j)e_j$, so that $u$ belongs to a subspace of dimension $n = r(T)$.

Since $A$ and $A^*$ enter symmetrically in the statement of Theorem 1.1, it is sufficient to prove (1.3) and (1.4) for $A$ and check that the dimensions of $N(A)$ and $N(A^*)$ are equal. To prove (1.3) and (1.4), let us reduce (1.9) to an equivalent equation of the form

$$v + T^*v = h, \tag{2.3}$$

where $T^* := (B^*)^{-1}F^*$ is the adjoint to $T$, and

$$h := (B^*)^{-1}g. \tag{2.4}$$

Since $B$ is an isomorphism, $(B^{-1})^* = (B^*)^{-1}$. Applying $(B^*)^{-1}$ to equation (1.9), one gets the equivalent equation (2.3), and $T^*$ is a finite-rank operator of the same rank $n$ as $T$.

The last claim is easy to prove: if $\{e_j\}$, $1 \le j \le n$, is a basis in $R(T)$, then $Tu = \sum_{j=1}^{n}(Tu, e_j)e_j$ and $T^*u = \sum_{j=1}^{n}(u, e_j)T^*e_j$, so $r(T^*) \le r(T)$. By symmetry one has $r(T) \le r(T^*)$, and the claim is proved.
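The reduction just described can be illustrated numerically. The sketch below (an added illustration with arbitrary example matrices standing in for operators on $H$, not part of the argument) splits a rank-deficient $A$ as $B + F$ exactly as in (2.1), forms $T := FB^{-1}$, and checks that $T$ has the same rank as $F$ and that solving (2.2) produces a solution of (1.7) whenever $f$ is orthogonal to $N(A^*)$.

```python
# Reduction of Au = f to w + Tw = f with w = Bu and T = F B^{-1}.
# Illustrative finite-dimensional sketch only; A, B, F are example matrices.
import numpy as np

rng = np.random.default_rng(2)
N = 5
A = rng.standard_normal((N, 3)) @ rng.standard_normal((3, N))   # rank 3
U, s, Vt = np.linalg.svd(A)
r = int(np.sum(s > 1e-10))

F = U[:, r:] @ Vt[r:, :]           # F u = sum_j (u, phi_j) psi_j, as in (2.1)
B = A - F                          # so A = B + F with B an isomorphism
T = F @ np.linalg.inv(B)           # T := F B^{-1}

print(np.linalg.matrix_rank(T) == np.linalg.matrix_rank(F))     # same rank as F

# A is singular, so (1.7) is solvable only for f orthogonal to N(A^*); take such an f.
Psi = U[:, r:]                                   # basis of N(A^*)
f = rng.standard_normal(N)
f = f - Psi @ (Psi.T @ f)                        # project f onto R(A)

# I + T is singular here (dim N(I+T) = dim N(A) = 2), so use a least-squares solve.
w, *_ = np.linalg.lstsq(np.eye(N) + T, f, rcond=None)           # equation (2.2)
u = np.linalg.solve(B, w)                                       # u = B^{-1} w
print(np.allclose(A @ u, f))                                    # u solves (1.7)
```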

Writing explicitly the linear algebraic systems equivalent to the equations (2.2) and (2.3), one sees that the matrices of these systems are adjoint to each other. The system equivalent to equation (2.2) is

$$c_i + \sum_{j=1}^{n} t_{ij}c_j = f_i, \tag{2.5}$$

where $t_{ij} := (e_j, T^*e_i)$, $c_j := (w, T^*e_j)$, $f_i := (f, T^*e_i)$, and the one equivalent to (2.3) is

$$\xi_i + \sum_{j=1}^{n} t^*_{ij}\xi_j = h_i, \tag{2.6}$$

where $t^*_{ij} = (T^*e_j, e_i)$, $\xi_j := (v, e_j)$, $h_i := (h, e_i)$, and $t^*_{ij}$ is the matrix adjoint to $t_{ij}$. For the linear algebraic systems (2.5) and (2.6) the Fredholm alternative is a well-known elementary result. These systems are equivalent to equations (1.7) and (1.9), respectively. Therefore the Fredholm alternative holds for equations (1.7) and (1.9), so that properties (1.3) and (1.4) are proved.

In conclusion, let us explain in detail why equations (2.2) and (2.5) are equivalent in the following sense: every solution to (2.2) generates a solution to (2.5), and vice versa. It is clear that (2.2) implies (2.5): just take the inner product of (2.2) with $T^*e_i$ and get (2.5). So each solution to (2.2) generates a solution to (2.5). We claim that each solution to (2.5) generates a solution to (2.2). Indeed, let $c_j$ solve (2.5). Define $w := f - \sum_{j=1}^{n} c_je_j$. Then

$$Tw = Tf - \sum_{j=1}^{n} c_jTe_j = \sum_{i=1}^{n}\Big[(Tf, e_i)e_i - \sum_{j=1}^{n} c_j(Te_j, e_i)e_i\Big] = \sum_{i=1}^{n} c_ie_i = f - w.$$

Here we use (2.5) and take into account that $(Tf, e_i) = f_i$ and $(Te_j, e_i) = t_{ij}$. Thus, the element $w := f - \sum_{j=1}^{n} c_je_j$ solves (2.2), as claimed.

It is easy to check that if $\{w_1, \ldots, w_k\}$ are $k$ linearly independent solutions to the homogeneous version of equation (2.2), then the corresponding $k$ solutions $\{c_{1m}, \ldots, c_{nm}\}$, $1 \le m \le k$, of the homogeneous version of the system (2.5) are also linearly independent, and vice versa.
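The equivalence between the operator equation (2.2) and the $n \times n$ system (2.5) can also be seen numerically. In the sketch below (an added illustration with an arbitrary finite-rank example $T$; in this example $I + T$ happens to be invertible, so (2.5) is uniquely solvable), the solution $c$ of (2.5) is used to reconstruct $w := f - \sum_j c_je_j$, which indeed solves (2.2), and conversely $c_j = (w, T^*e_j)$ is recovered from $w$.

```python
# Equivalence of (2.2) and the n x n system (2.5) for a finite-rank T.
# Illustrative sketch only; T and f are arbitrary example data.
import numpy as np

rng = np.random.default_rng(3)
m, n = 5, 2                                          # ambient dimension, rank of T
E, _ = np.linalg.qr(rng.standard_normal((m, n)))     # orthonormal basis {e_j} of R(T)
T = E @ rng.standard_normal((n, m))                  # finite-rank operator, R(T) = span{e_j}
f = rng.standard_normal(m)

# Entries of (2.5): t_ij = (T e_j, e_i), f_i = (f, T^* e_i) = (T f, e_i).
t = E.T @ T @ E
fvec = E.T @ (T @ f)

c = np.linalg.solve(np.eye(n) + t, fvec)             # solve c_i + sum_j t_ij c_j = f_i

w = f - E @ c                                        # w := f - sum_j c_j e_j
print(np.allclose(w + T @ w, f))                     # w solves (2.2)
print(np.allclose(E.T @ (T @ w), c))                 # and c_j = (w, T^* e_j) = (T w, e_j)
```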

Let us give an independent proof of property (1.4): $R(A)$ is closed if $A = B + F$, where $B$ is an isomorphism and $F$ is a finite-rank operator. Since $A = (I + T)B$ and $B$ is an isomorphism, it is sufficient to prove that $R(I + T)$ is closed if $T$ has finite rank.

Let $u_j + Tu_j := f_j \to f$ as $j \to \infty$. Without loss of generality choose $u_j$ orthogonal to $N(I + T)$. We want to prove that there exists a $u$ such that $(I + T)u = f$. Suppose first that $\sup_{1 \le j < \infty}\|u_j\| < \infty$, where $\|\cdot\|$ denotes the norm in $H$. Since $T$ is a finite-rank operator, $Tu_j$ converges in $H$ for some subsequence, which is denoted by $u_j$ again. (Recall that in finite-dimensional spaces bounded sets are precompact.) This implies that $u_j = f_j - Tu_j$ converges in $H$ to an element $u$. Passing to the limit, one gets $(I + T)u = f$.

To complete the proof, let us establish that $\sup_j\|u_j\| < \infty$. Assuming that this is false, one can choose a subsequence, denoted by $u_j$ again, such that $\|u_j\| > j$. Let $z_j := u_j/\|u_j\|$. Then $\|z_j\| = 1$, $z_j$ is orthogonal to $N(I + T)$, and $z_j + Tz_j = f_j/\|u_j\| \to 0$. As before, it follows that $z_j \to z$ in $H$, and passing to the limit in the equation for $z_j$ one gets $z + Tz = 0$. Since $z$ is orthogonal to $N(I + T)$, it follows that $z = 0$. This is a contradiction, since $\|z\| = \lim_{j\to\infty}\|z_j\| = 1$. This contradiction proves the desired estimate, and the proof is completed.

This proof is valid for any compact linear operator $T$. If $T$ is a finite-rank operator, then the closedness of $R(I + T)$ follows from a simple observation: finite-dimensional linear spaces are closed.

ACKNOWLEDGEMENT. The author thanks Professor R. Burckel for very useful comments.

REFERENCES

1. L. Kantorovich and G. Akilov, Functional Analysis in Normed Spaces, Macmillan, New York, 1964.
2. W. Rudin, Functional Analysis, McGraw-Hill, New York, 1973.

Kansas State University, Manhattan, KS 66506-2602
ramm@math.ksu.edu