Construction of some Generalized Inverses of Operators between Banach Spaces and their Selections, Perturbations and Applications


Ph.D. Dissertation

Construction of some Generalized Inverses of Operators between Banach Spaces and their Selections, Perturbations and Applications

by Haifeng Ma

Presented to the Faculty of Mathematics and Computer Science on September 30, 2011

Co-supervisors:
Prof. Dr. Henryk Hudzik, Faculty of Mathematics and Computer Science, Adam Mickiewicz University
Prof. Dr. Yuwen Wang, School of Mathematical Science, Harbin Normal University

Faculty of Mathematics and Computer Science, Adam Mickiewicz University & School of Mathematical Science, Harbin Normal University

Contents

Abstract
Chapter 1 Introduction
  1.1 Generalized Inverses
  1.2 Perturbation Theory of Moore-Penrose Metric Generalized Inverses
  1.3 Spectrum and Narrow Spectrum
Chapter 2 Preliminaries
  2.1 Some Geometric Properties of Banach Spaces
  2.2 Metric Generalized Inverse
Chapter 3 Selections of Metric Generalized Inverses
  3.1 Criteria for the Metric Generalized Inverses of Linear Operators
  3.2 Continuous Homogeneous Selections of Metric Generalized Inverses of Linear Operators
Chapter 4 Perturbations of Moore-Penrose Metric Generalized Inverses of Linear Operators
  4.1 Perturbation of the Solution of the Operator Equation Tx = b
  4.2 Perturbation of Moore-Penrose Metric Generalized Inverse
  4.3 The Error Bound Estimate of Perturbation for Moore-Penrose Metric Generalized Inverse
Chapter 5 Narrow Spectrum
  5.1 Locally Fine Points
  5.2 Generalized Regular Points and Narrow Spectrum Points
Chapter 6 Some Applications of Generalized Inverses
References

Abstract

In this dissertation, continuous homogeneous selections for the set-valued metric generalized inverses $T^\partial$ of linear operators $T$ between Banach spaces are investigated by means of methods of the geometry of Banach spaces. Necessary and sufficient conditions are given for the set-valued metric generalized inverse $T^\partial$ of a bounded linear operator $T$ to have a continuous homogeneous selection. These results answer a problem posed by Nashed and Votruba. Secondly, perturbations of the Moore-Penrose metric generalized inverses of linear operators in Banach spaces are described. Using the notion of metric stable perturbation and the generalized orthogonal decomposition theorem, under some assumptions we give error estimates for the single-valued Moore-Penrose metric generalized inverses of bounded linear operators, together with an estimate of the norm of the perturbation of the metric generalized inverse. The concepts of generalized regular points and narrow spectrum points of bounded linear operators on Hilbert spaces are introduced. It is proved that the narrow spectrum shares some properties with the spectrum, while other properties distinguish the two notions. Finally, it is shown that the well-known problem of the existence of invariant subspaces for bounded linear operators on separable Hilbert spaces can be restricted to operators with narrow spectrum only.

Keywords: Generalized Inverse; Metric Generalized Inverse; Moore-Penrose Metric Generalized Inverse; Perturbation; Spectrum; Regular Point; Narrow Spectrum; Invariant Subspace

Chapter 1 Introduction

1.1 Generalized Inverses

The observation that generalized inverses are like prose ("Good Heavens! For more than forty years I have been speaking prose without knowing it." - Molière, Le Bourgeois Gentilhomme) is nowhere truer than in the literature of linear operators. In fact, generalized inverses of integral and differential operators were studied by Fredholm, Hilbert, Schmidt, Bounitzky, Hurwitz, and others, before E. H. Moore formally introduced the notion of a generalized inverse in an algebraic setting; see, e.g., the historical survey by W. T. Reid [84]. The theory of generalized inverses has its genetic roots essentially in the context of so-called ill-posed linear problems.

It is well known that if $A$ is a nonsingular (square) matrix, then there exists a unique matrix $B$, called the inverse of $A$, such that $AB = BA = I$, where $I$ is the identity matrix. If $A$ is singular, or rectangular (but not square), no such matrix $B$ exists. Now if $A^{-1}$ exists, then the system of linear equations $Ax = b$ has the unique solution $x = A^{-1}b$ for each $b$. On the other hand, in many cases solutions of a system of linear equations exist even when the inverse of the matrix defining these equations does not. Moreover, when the equations are inconsistent, one is often interested in least-squares solutions, i.e., vectors that minimize the sum of the squares of the residuals. These problems, along with many others in numerical linear algebra, optimization and control, statistics, and other areas of analysis and applied mathematics, are readily handled via the concept of a generalized inverse (or pseudo-inverse) of a matrix or a linear operator.

In a paper given at the Fourteenth Western Meeting of the American Mathematical Society at the University of Chicago, April 1920, Professor E. H. Moore

first drew attention to a useful extension of the classical notion of the reciprocal of a nonsingular square matrix [68]. The definition of the pseudo-inverse of an $m \times n$ matrix $A$, denoted by $A^+$ and originally given by E. H. Moore, has been interpreted by A. Ben-Israel and A. Charnes [17] in the following way: $A^+$ is the pseudo-inverse of $A$ if
$$AA^+ = P_{R(A)}, \qquad A^+A = P_{R(A^+)}, \tag{1.1}$$
where $P_{R(A)}$ is the orthogonal projection onto the range space of $A$. E. H. Moore established the existence and uniqueness of $A^+$ for any $A$, and gave an explicit form for $A^+$ in terms of the subdeterminants of $A$ and $A^*$, the conjugate transpose of $A$. Various properties of $A^+$ and the relationships among $A$, $A^*$ and $A^+$ were incorporated in his General Analysis; concurrently, an algebraic basis and extensions were given by J. von Neumann [77] in his studies on regular rings. Unaware of Moore's results, A. Bjerhammar [22, 23] and R. Penrose [79, 80] gave independent treatments of the pseudo-inverse. In 1955 R. Penrose [79] sharpened and extended A. Bjerhammar's results on linear systems, and showed that E. H. Moore's inverse of a given matrix $A$ is the unique matrix $X$ satisfying the following four equations:
$$AXA = A, \tag{1.2}$$
$$XAX = X, \tag{1.3}$$
$$(AX)^* = AX, \tag{1.4}$$
$$(XA)^* = XA, \tag{1.5}$$
where $A^*$ is the conjugate transpose of $A$. Conditions (1.2)-(1.5) are equivalent to Moore's conditions (1.1). The latter discovery has been so important and fruitful that this unique inverse (called by some mathematicians the generalized inverse) is now commonly called the Moore-Penrose inverse. Since the first publication on this subject by E. H. Moore [68] many other papers have appeared. Generalized inverses for matrices were given by C. L. Siegel in [88], and for operators by Y. Y. Tseng [92, 93, 94, 95], F. J. Murray and J. von Neumann [70], F. V. Atkinson [8, 9], and others. The revival of interest in the subject centered around the least-squares properties (not mentioned by E. H. Moore) of certain generalized inverses.
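In the finite-dimensional (Euclidean) case the four Penrose equations are easy to verify numerically. The following sketch is an added illustration, not part of the dissertation; it uses NumPy's pinv on an arbitrary rank-deficient matrix, checks (1.2)-(1.5), and shows that $A^+ b$ is the minimal-norm least-squares solution of an inconsistent system $Ax = b$.

```python
import numpy as np

# A rank-deficient matrix: no classical inverse, and Ax = b may be inconsistent.
A = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [0.0, 0.0]])
X = np.linalg.pinv(A)  # Moore-Penrose inverse A^+

# The four Penrose equations (1.2)-(1.5).
assert np.allclose(A @ X @ A, A)          # A X A = A
assert np.allclose(X @ A @ X, X)          # X A X = X
assert np.allclose((A @ X).T, A @ X)      # (A X)* = A X
assert np.allclose((X @ A).T, X @ A)      # (X A)* = X A

# For an inconsistent right-hand side, x = A^+ b is the least-squares
# solution of minimal Euclidean norm (a best approximate solution).
b = np.array([1.0, 0.0, 1.0])
x = X @ b
print("least-squares residual:", np.linalg.norm(A @ x - b))
print("minimal-norm solution:", x)
```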

There are several types of generalized inverses, such as generalized inverses of matrices, generalized inverses of linear operators, algebraic generalized inverses, metric generalized inverses, Moore-Penrose metric generalized inverses, generalized inverses of nonlinear operators (see [3, 5, 10, 18, 19, 38]), etc. A lot of research on the theory and applications of generalized inverses has been done in the last decades (see [61, 66, 72, 104, 111, 117]). In Chapter 6, two applications are shown, extracted from [104] and [117], respectively. One of them concerns least extremal solutions of an ill-posed Neumann boundary value problem for semilinear elliptic equations in $L^p$. The other concerns the structure of the set of extremal solutions of the ill-posed operator equation $Tx = y$ with $\operatorname{codim} R(T) = 1$. It is well known that linear generalized inverses have many important applications, especially in numerical approximation [72], nonlinear analysis [61], and the structural theory of Banach manifolds [66]; but, generally speaking, generalized inverses other than metric generalized inverses are not suitable for constructing the extremal solutions, the minimal norm solutions, and the best approximate solutions of ill-posed linear operator equations in Banach spaces [73]. In order to solve the best approximation problems for ill-posed linear operator equations in Banach spaces, it is necessary to study the metric generalized inverses of linear operators between Banach spaces. This kind of generalized inverse, which is a set-valued bounded homogeneous operator, was introduced by M. Z. Nashed and G. F. Votruba in 1974 in [73]. In the same paper they raised the following suggestion: "The problem of obtaining selections with nice properties for the metric generalized inverse is worth studying." Metric generalized inverses of linear operators between Banach spaces are multivalued and in general nonlinear, so the problem of constructing their selections is natural and important. Important progress in this direction has been made by constructing selections of metric generalized inverses of linear operators in Banach spaces (see [48, 97, 99]). In 2008, H. Hudzik, Y. W. Wang and W. J. Zheng established bounded homogeneous selections for the set-valued metric generalized inverses of linear operators on Banach spaces [48]. In 2009, C. Wang, S. P. Qu and Y. W. Wang obtained linear continuous selections for metric generalized inverses of bounded linear operators [97]. In Chapter 3 of this dissertation, some continuous homogeneous selections for the set-valued metric generalized inverses of linear operators

in Banach spaces are investigated by using the methods of the geometry of Banach spaces. Some necessary and sufficient conditions for bounded linear operators to have continuous homogeneous selections of the set-valued metric generalized inverses are also given. These results answer the problem (mentioned above) formulated by Nashed and Votruba in [73].

1.2 Perturbation Theory of Moore-Penrose Metric Generalized Inverses

Throughout this dissertation, "perturbation theory" means "perturbation theory for linear operators". There are other disciplines in mathematics called perturbation theory, such as the ones in analytical dynamics (celestial mechanics) and in nonlinear oscillation theory. All of them are based on the idea of studying a system deviating slightly from a simple ideal system for which the complete solution of the problem under consideration is known. However, the problems they treat and the tools they use are quite different. Perturbation theory was created by L. Rayleigh and E. Schrödinger [51, 71], and it occupies an important place in applied mathematics. L. Rayleigh gave a formula for computing the natural frequencies and modes of a vibrating system deviating slightly from a simpler system which admits a complete determination of the frequencies and modes [83]. E. Schrödinger developed a similar method, with higher generality and systematization, for the eigenvalue problems that appear in quantum mechanics [86]. In recent years the group of mathematicians working in perturbation theory, in several directions including analytical dynamics and nonlinear oscillation theory, has grown considerably [15, 26, 27, 28, 39, 51, 54, 108]. There is a wide literature on results for linear operators, especially for generalized inverses [28, 33, 34, 35, 36, 47, 56, 57, 105, 115]. Since its creation, the theory has occupied an important place in applied mathematics, and during the last decades it has grown into a mathematical discipline with its own interests and techniques [51]. There are perturbation theories both for linear generalized inverses and for nonlinear generalized inverses. Although the perturbation

of linear generalized inverses has been studied and numerous results obtained [55, 85, 90, 91, 107, 109], the problems for nonlinear generalized inverses remained unsolved, except for some initial studies of this theory by us in [56, 57]. The Moore-Penrose metric generalized inverses of operators between Banach spaces are bounded homogeneous (and in general nonlinear) operators, which can be applied to ill-posed boundary value problems for certain equations. In 1995, Z. W. Li and Y. W. Wang introduced the notion of Moore-Penrose generalized inverses for closed linear operators with dense domain between Banach spaces [103]. In 2003, H. Wang and Y. W. Wang introduced the notion of Moore-Penrose metric generalized inverses of linear operators between Banach spaces [104]. In 2006, a description of the solution of the equation $Tx = b$ through the Moore-Penrose metric generalized inverse was obtained in [56]. In 2008, H. F. Ma and Y. W. Wang gave the definition of metric stable perturbation; after that, a new method was developed in [57] to analyze the perturbation problems for Moore-Penrose metric generalized inverses with respect to a special norm. In Chapter 4, the perturbation theory of Moore-Penrose metric generalized inverses of operators between Banach spaces is studied further. By using the continuity of the metric projection operators and the quasi-additivity of metric generalized inverses, we obtain a complete description of single-valued Moore-Penrose metric generalized inverses of operators on Banach spaces.

1.3 Spectrum and Narrow Spectrum

Spectral theory of operators is an important part of functional analysis, and many applications require it. This theory has numerous applications in many branches of mathematics and physics, including matrix theory, function space theory, complex analysis, differential and integral equations, control theory and quantum physics [32, 46, 53, 81]. In recent years, spectral theory has witnessed an explosive development. There are many types of spectra for one or several commuting operators, for example the approximate point spectrum, Taylor spectrum, local spectrum, essential spectrum, etc. [24, 37, 41, 69], all of them with important applications. In Chapter 5, we introduce a new type of spectrum for bounded linear operators on Hilbert spaces, called the narrow spectrum, by

using the concept of locally fine points. Some properties and applications of the narrow spectrum are presented. We show that the narrow spectrum, which forms a smaller set than the spectrum, still retains some important properties of the spectrum. In our studies of the narrow spectrum, the concept of locally fine points plays an important role. In 1999, J. P. Ma (one of the students of Y. Y. Tseng) introduced the concept of locally fine points for operator-valued maps through the concept of generalized inverses, as a notion which guarantees a certain stability of the existence of generalized inverses (see [61, 63]). This concept has been extensively studied in recent years. Such studies appear, for example, in a local linearization theorem, a local conjugacy theorem, a generalized preimage theorem in global analysis, and a series of rank theorems for some operators [61, 62, 63, 64, 65, 66, 67].

Let $E$ be a separable infinite-dimensional complex Hilbert space and $B(E)$ the set of all bounded linear operators from $E$ into itself. The invariant subspace problem can be formulated as: "Does every operator in $B(E)$ have a nontrivial invariant subspace?", and it is one of the most important problems in functional analysis. The problem is still open for separable infinite-dimensional complex Hilbert spaces. It has its origins around 1935 when (according to [6]) J. von Neumann proved (in an unpublished paper) that every compact operator on a separable infinite-dimensional complex Hilbert space has a non-trivial invariant subspace (the proof uses the spectral theorem for normal operators [76]). Since then, the invariant subspace problem has motivated an enormous literature in operator theory. The books [16, 20, 78, 82], the lecture notes [7] and [44], and the survey papers [1] and [40] are centered around the invariant subspace problem. Related open problems and some conjectures appeared in [2]. Invariant subspaces appear in a natural way in prediction theory (see A. N. Kolmogorov [52] and N. Wiener [114]) and in mathematical physics. The problem of the existence of nontrivial invariant subspaces for bounded linear operators on separable Hilbert spaces is reformulated in this dissertation as a problem about the narrow spectrum of bounded linear operators on Hilbert spaces. A sufficient condition for this is given in Theorem 5.2.11.

Chapter 2 Preliminaries

Throughout this dissertation, we denote by $D(T)$, $R(T)$ and $N(T)$ the domain, the range and the null space of an operator $T$, respectively. Let $X$ and $Y$ be two real Banach spaces. The space of all bounded linear operators from $X$ to $Y$ is denoted by $B(X, Y)$, and $B(X, X) =: B(X)$. Write $H(X, Y)$ for the space of all bounded homogeneous operators from $X$ to $Y$, and $H(X, X) =: H(X)$. Similarly, write $L(X, Y)$ for the space of all linear operators from $X$ to $Y$ (if $T \in L(X, Y)$, the domain $D(T)$ of $T$ is just a subspace of $X$). In this dissertation, $\theta$ always denotes the zero vector of a vector space, $X^*$ is the conjugate (dual) space of $X$, and we write $x^*(x) =: \langle x^*, x \rangle$. $F$ will denote either the real field $\mathbb{R}$ or the complex field $\mathbb{C}$.

2.1 Some Geometric Properties of Banach Spaces

Definition 2.1.1 [13] The set-valued operator $F_X : X \to X^*$ defined by
$$F_X(x) = \{ x^* \in X^* : \langle x^*, x \rangle = \|x\|^2 = \|x^*\|^2 \}, \quad x \in X,$$
is called the duality mapping of $X$.

Remark 2.1.1 For every non-zero $x \in X$ the set $F_X(x)$ is non-empty and contains a non-zero functional. In fact, for any $x_1, x_2 \in X$ with $x_1 \neq x_2$ we have $x_1 - x_2 \neq \theta$. Let us define $x_0 = x_1 - x_2$, $X_0 = \{ \lambda x_0 : \lambda \in \mathbb{C} \}$, and $f_0(\lambda x_0) = \lambda \|x_0\|$ ($\lambda \in \mathbb{C}$).

Then $f_0(x_0) = \|x_0\|$ and $\|f_0\|_0 = 1$, where $\|f_0\|_0$ is the norm of $f_0$ on $X_0$. By the Hahn-Banach Theorem, there exists $f \in X^*$ such that $f(x_0) = f_0(x_0) = \|x_0\|$ and $\|f\| = \|f_0\|_0 = 1$. Therefore, there exists $f \in X^* \setminus \{\theta\}$ such that
$$\left\langle f, \frac{x_0}{\|x_0\|} \right\rangle = \left\| \frac{x_0}{\|x_0\|} \right\|^2 = 1 = \|f\|^2.$$

Proposition 2.1.2 [13] The duality mapping of $X$ has the following properties:
1. it is homogeneous;
2. it is injective, or strictly monotone, if and only if $X$ is strictly convex;
3. it is surjective if and only if $X$ is reflexive;
4. it is single-valued if and only if $X$ is smooth;
5. it is additive if and only if $X$ is a Hilbert space.

Definition 2.1.2 [89] If $K \subset X$, the set-valued mapping $P_K : X \to K$ defined by
$$P_K(x) = \{ y \in K : \|x - y\| = d_K(x) \}, \quad x \in X,$$
where $d_K(x) = \inf_{y \in K} \|x - y\|$, is called the metric projection onto $K$.
1. $K$ is said to be proximinal if $P_K(x) \neq \emptyset$ for every $x \in X$.
2. $K$ is said to be semi-Chebyshev if $P_K(x)$ is at most a singleton for each $x \in X$.
3. $K$ is called a Chebyshev set if it is both proximinal and semi-Chebyshev.
When $K$ is a Chebyshev set, we write $P_K(x) = \{\pi_K(x)\}$ for any $x \in X$.
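In a Hilbert space the metric projection onto a closed subspace is the orthogonal projection and every closed subspace is Chebyshev, but in other norms $P_K(x)$ can be a whole set. The following numerical sketch (added for illustration only; the data are arbitrary) scans $\|x - tv\|_1$ over the line $K = \{tv\}$ in $(\mathbb{R}^2, \|\cdot\|_1)$ and finds an entire segment of best approximations, so this $K$ is proximinal but not semi-Chebyshev.

```python
import numpy as np

# Metric projection of x onto the line K = {t * v : t real} in the l^1 norm.
x = np.array([1.0, 0.0])
v = np.array([1.0, 1.0])

ts = np.linspace(-1.0, 2.0, 3001)
dists = np.array([np.sum(np.abs(x - t * v)) for t in ts])

d_K = dists.min()                                   # distance from x to K (equals 1)
minimizers = ts[np.isclose(dists, d_K, atol=1e-9)]

print("d_K(x) =", d_K)
print("best approximations for t roughly in [", minimizers.min(), ",", minimizers.max(), "]")
# Every t in [0, 1] gives ||x - t v||_1 = |1 - t| + |t| = 1, so P_K(x) is a segment.
```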

Remark 2.1.3 Every Chebyshev set is closed, and every closed convex set in a rotund (strictly convex) reflexive space is Chebyshev. In particular, every non-empty closed convex set in a Hilbert space is Chebyshev (see [25]).

Lemma 2.1.4 [89] If $X$ is a normed linear space and $L$ is a subspace of $X$, then
(i) $\pi_L^2(x) = \pi_L(x)$ for all $x \in D(\pi_L)$, i.e. $\pi_L$ is idempotent;
(ii) $\|x - \pi_L(x)\| \le \|x\|$ for all $x \in D(\pi_L)$.
Furthermore, if $L$ is a semi-Chebyshev subspace, then
(iii) $\pi_L(\alpha x) = \alpha \pi_L(x)$ for all $x \in X$ and $\alpha \in \mathbb{R}$, i.e. $\pi_L$ is homogeneous;
(iv) $\pi_L(x + y) = \pi_L(x) + \pi_L(y) = \pi_L(x) + y$ for all $x \in D(\pi_L)$ and $y \in L$, i.e. $\pi_L$ is quasi-additive.

Lemma 2.1.5 If $L$ is a closed subspace of $X$, then the following statements are equivalent:
(i) $\pi_L$ is a linear operator;
(ii) $\pi_L^{-1}(\theta)$ is a linear subspace of $X$;
(iii) $\pi_L^{-1}(y)$ is a linear manifold of $X$ for every $y \in L$.

Remark 2.1.6 This result was obtained in [89] under the assumption that the underlying Banach space $X$ is reflexive and strictly convex, but it is easy to show that it remains valid under the weaker assumption that $L$ is a Chebyshev subspace of $X$.

Theorem 2.1.7 [102] (Generalized Orthogonal Decomposition Theorem) Let $L$ be a proximinal subspace of $X$. Then for any $x \in X$ we have the decomposition
$$x = x_1 + x_2,$$
where $x_1 \in L$ and $x_2 \in F_X^{-1}(L^\perp)$. In this case we have $X = L + F_X^{-1}(L^\perp)$. If $L$ is a Chebyshev subspace of $X$, then the decomposition is unique and
$$x = \pi_L(x) + x_2, \quad x_2 \in F_X^{-1}(L^\perp).$$
In this case we have $X = L \dotplus F_X^{-1}(L^\perp)$, where $P_L(x) = \{\pi_L(x)\}$.

Lemma 2.1.8 [48] Let $L$ be a subspace of $X$, $x \in X \setminus L$ and $x_0 \in L$. Then $x_0 \in P_L(x)$ if and only if $F_X(x - x_0) \cap L^\perp \neq \emptyset$, where $F_X$ is the duality mapping of $X$ and $L^\perp = \{ x^* \in X^* : \langle x^*, x \rangle = 0 \text{ for all } x \in L \}$.
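In a Hilbert space the duality mapping reduces (via the Riesz identification) to the identity, so Theorem 2.1.7 becomes the ordinary orthogonal decomposition and the criterion of Lemma 2.1.8 reads $x - x_0 \perp L$. A minimal numerical sketch of this special case (an added illustration; the matrix is arbitrary):

```python
import numpy as np

# L = column space of A in the Euclidean space R^3 (a Hilbert space).
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 2.0]])
P_L = A @ np.linalg.pinv(A)        # orthogonal projector onto L = R(A)

x = np.array([3.0, -1.0, 2.0])
x1 = P_L @ x                       # pi_L(x), the best approximation to x from L
x2 = x - x1                        # component in F_X^{-1}(L^perp) = L^perp here

# Decomposition x = x1 + x2 and the criterion of Lemma 2.1.8: x - pi_L(x) is orthogonal to L.
assert np.allclose(x, x1 + x2)
assert np.allclose(A.T @ x2, 0.0)  # <x - x1, a> = 0 for every column a of A
print("pi_L(x) =", x1, " dist(x, L) =", np.linalg.norm(x2))
```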

Definition 2.1.3 [50] A nonempty subset $C$ of $X$ is said to be approximatively compact if for any sequence $\{x_n\}$ in $C$ and any $y \in X$ such that
$$\|x_n - y\| \to \operatorname{dist}(y, C) := \inf \{ \|y - z\| : z \in C \},$$
the sequence $\{x_n\}$ has a Cauchy subsequence. $X$ is called approximatively compact if every nonempty closed and convex subset of $X$ is approximatively compact.

Remark 2.1.9 If $C$ is approximatively compact, then $C$ is a closed and proximinal set; in particular, $P_C(x) \neq \emptyset$ for every $x \in X$.

Lemma 2.1.10 [30] Let $C$ be a semi-Chebyshev closed subset of $X$. If $C$ is approximatively compact, then $C$ is a Chebyshev subset and the metric projection $\pi_C$ is continuous.

Definition 2.1.4 [51] Let $T \in B(X, Y)$. The minimum modulus $\gamma(T)$ of $T$ is defined by
$$\gamma(T) = \inf \{ \|Tx\| : \operatorname{dist}(x, N(T)) = 1 \}.$$
Thus, from the definition of $\gamma(T)$, we deduce that
$$\|Tx\| \ge \gamma(T)\,\operatorname{dist}(x, N(T)), \quad x \in X.$$

Lemma 2.1.11 [100] On $H(X, Y)$ define addition and scalar multiplication as usual in linear structures, and define the norm of $T \in H(X, Y)$ by
$$\|T\| = \sup_{\|x\| = 1} \|Tx\|, \quad T \in H(X, Y). \tag{2.1}$$
Then $(H(X, Y), \|\cdot\|)$ is a Banach space.

Definition 2.1.5 [31] Let $T \in L(X, Y)$. If $D(T)$ is dense in $X$, then $T$ is said to be densely defined.

Definition 2.1.6 [116] Let $T \in L(X, Y)$. If $x_n \in D(T)$, $x_n \to x$ and $Tx_n \to y$ in $Y$ imply that $x \in D(T)$ and $y = Tx$, then $T$ is said to be a closed operator.
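For an operator between Euclidean spaces, $\gamma(T)$ is the smallest non-zero singular value of the matrix, and it coincides with $1/\|T^+\|$; this is the Hilbert-space specialization of Lemma 4.1.1 below. A quick numerical check (an added illustration; the matrix is arbitrary):

```python
import numpy as np

A = np.array([[2.0, 0.0, 0.0],
              [0.0, 0.5, 0.0]])      # rank 2, N(A) = span{e3}

# Smallest non-zero singular value = minimum modulus gamma(A) in the Euclidean case.
s = np.linalg.svd(A, compute_uv=False)
gamma = s[s > 1e-12].min()

# gamma(A) agrees with 1 / ||A^+|| (spectral norm of the pseudo-inverse).
pinv_norm = np.linalg.norm(np.linalg.pinv(A), 2)
print(gamma, 1.0 / pinv_norm)        # both equal 0.5

# Direct check of ||A x|| >= gamma * dist(x, N(A)) on random vectors.
P_N = np.diag([0.0, 0.0, 1.0])       # orthogonal projector onto N(A)
rng = np.random.default_rng(0)
for _ in range(1000):
    x = rng.standard_normal(3)
    assert np.linalg.norm(A @ x) >= gamma * np.linalg.norm(x - P_N @ x) - 1e-9
```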

2.2 Metric Generalized Inverse

Definition 2.2.1 An operator $T^+ \in B(Y, X)$ is said to be a generalized inverse of an operator $T \in B(X, Y)$ provided
$$T T^+ T = T \quad \text{and} \quad T^+ T T^+ = T^+.$$

Remark 2.2.1 This is of course an extension of the notion of the bounded linear inverse $T^{-1}$ of $T$. It is well known that an operator $T \in B(X, Y)$ has a generalized inverse in $B(Y, X)$ if and only if $N(T)$ and $R(T)$ are both split (complemented), which means that there exist linear subspaces $R^+ \subset X$ and $N^+ \subset Y$ such that the following decompositions of $X$ and $Y$ hold:
$$X = N(T) \oplus R^+, \qquad Y = R(T) \oplus N^+.$$
In this case, $R^+$ and $N^+$ are called topological complements of $N(T)$ and $R(T)$, respectively, and $T$ is said to be doubly split.

For any $T \in L(X, Y)$, an element $x_0 \in X$ is said to be an extremal solution of the equation $Tx = y$ if $x = x_0$ minimizes the functional $\|Tx - y\|$ on $X$, that is,
$$\inf \{ \|Tx - y\| : x \in X \} = \|Tx_0 - y\|.$$
Any extremal solution with minimal norm is called a best approximate solution (b.a.s. for short).

In 1974, M. Z. Nashed and G. F. Votruba introduced the concept of the metric generalized inverse for linear operators between Banach spaces, which is a set-valued operator in general.

Definition 2.2.2 [73] Let $T \in L(X, Y)$, and consider $y \in Y$ such that $Tx = y$ has a best approximate solution in $X$. We define
$$T^\partial(y) = \{ x \in X : x \text{ is a best approximate solution to } Tx = y \}$$
and call the set-valued mapping $y \mapsto T^\partial(y)$ the metric generalized inverse of $T$. Here $D(T^\partial) = \{ y \in Y : Tx = y \text{ has a best approximate solution in } X \}$. A (in general nonlinear) function $T^\sigma(y) \in T^\partial(y)$ is called a selection of the metric generalized inverse.
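Definition 2.2.1 imposes only the two algebraic identities, so generalized inverses are highly non-unique; the Moore-Penrose inverse is the particular one singled out by the additional projection (or metric) conditions. A small sketch (an added illustration; the matrix and the free entry are arbitrary) exhibits a generalized inverse of a rank-one matrix that is not the Moore-Penrose inverse:

```python
import numpy as np

T = np.array([[1.0, 0.0],
              [0.0, 0.0]])           # a rank-one operator on R^2

# One of infinitely many generalized inverses in the sense of Definition 2.2.1.
S = np.array([[1.0, 0.0],
              [3.0, 0.0]])           # the entry 3.0 is an arbitrary choice

assert np.allclose(T @ S @ T, T)     # T S T = T
assert np.allclose(S @ T @ S, S)     # S T S = S

# S is not the Moore-Penrose inverse: S T is not an orthogonal projection.
print("S T symmetric?", np.allclose((S @ T).T, S @ T))   # False
print("Moore-Penrose inverse:\n", np.linalg.pinv(T))      # differs from S
```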

Definition 2.2.3 [106] Let $T \in L(X, Y)$, and let $N(T)$ and $R(T)$ be Chebyshev subspaces of $X$ and $Y$, respectively. If there exists a homogeneous operator $T^M : D(T^M) \to D(T)$ such that:
1. $T T^M T = T$ on $D(T)$;
2. $T^M T T^M = T^M$ on $D(T^M)$;
3. $T^M T = I_{D(T)} - \pi_{N(T)}$ on $D(T)$;
4. $T T^M = \pi_{R(T)}$ on $D(T^M)$,
then $T^M$ is called the Moore-Penrose metric generalized inverse of $T$, where $I_{D(T)}$ is the identity operator on $D(T)$ and $D(T^M) = R(T) \dotplus F_Y^{-1}(R(T)^\perp)$.

Lemma 2.2.2 Let $X$ and $Y$ be Banach spaces, $T \in L(X, Y)$, and let $N(T)$ and $R(T)$ be Chebyshev subspaces of $X$ and $Y$, respectively. If $T$ has a Moore-Penrose metric generalized inverse $T^M$, then
(1) $T^M$ is unique on $D(T^M)$, and $T^M y = (T|_{C(T)})^{-1} \pi_{R(T)} y$ for $y \in D(T^M)$, where $D(T^M) = R(T) \dotplus F_Y^{-1}(R(T)^\perp)$;
(2) there exists a linear inner inverse $T^-$ of $T$ from $R(T)$ to $D(T)$ (i.e., $T T^- T = T$) such that
$$T^M y = (I_{D(T)} - \pi_{N(T)}) T^- \pi_{R(T)} y \tag{2.2}$$
for $y \in D(T^M)$.

Remark 2.2.3 This result was obtained in [106] by H. Wang and Y. W. Wang under the assumption that the underlying Banach spaces $X$ and $Y$ are strictly convex, but it is easy to show that it remains valid under the weaker assumption that $N(T)$ and $R(T)$ are Chebyshev subspaces of $X$ and $Y$, respectively.

Theorem 2.2.4 [75, 98] Let $T \in B(X, Y)$, and let $N(T)$ and $R(T)$ be Chebyshev subspaces of $X$ and $Y$, respectively. Then there exists a unique Moore-Penrose metric generalized inverse $T^M$ of $T$ such that
$$T^M(y) = (T|_{C(T)})^{-1} \pi_{R(T)}(y) \quad \text{for any } y \in D(T^M),$$
where $D(T^M) = R(T) \dotplus F_Y^{-1}(R(T)^\perp)$ and $C(T) = D(T) \cap F_X^{-1}(N(T)^\perp)$.
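In Hilbert spaces the metric projections $\pi_{N(T)}$ and $\pi_{R(T)}$ are the orthogonal projections and $T^M$ coincides with the classical Moore-Penrose inverse, so the four conditions of Definition 2.2.3 can be verified directly in the Euclidean case. A short sketch (an added illustration; the matrix is arbitrary):

```python
import numpy as np

T = np.array([[1.0, 2.0, 0.0],
              [0.0, 0.0, 0.0],
              [1.0, 2.0, 1.0]])
TM = np.linalg.pinv(T)                        # T^M = T^+ in the Euclidean setting
I = np.eye(3)

# Orthogonal projectors onto N(T) and R(T), built independently of TM.
n = np.array([-2.0, 1.0, 0.0])                # basis of N(T) = {x : x1 + 2 x2 = 0, x3 = 0}
P_N = np.outer(n, n) / (n @ n)                # pi_{N(T)}
U, s, _ = np.linalg.svd(T)
Ur = U[:, s > 1e-12]
P_R = Ur @ Ur.T                               # pi_{R(T)}

# The four conditions of Definition 2.2.3.
assert np.allclose(T @ TM @ T, T)             # T T^M T = T
assert np.allclose(TM @ T @ TM, TM)           # T^M T T^M = T^M
assert np.allclose(TM @ T, I - P_N)           # T^M T = I - pi_{N(T)}
assert np.allclose(T @ TM, P_R)               # T T^M = pi_{R(T)}
print("all four conditions hold")
```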

Remark 2.2.5 In Theorem 2.2.4, since $\pi_{R(T)}$ and $(T|_{C(T)})^{-1}$ are both bounded homogeneous operators, $T^M$ is also a bounded homogeneous operator. Thus, the norm of $T^M$ is well defined by (2.1) in Lemma 2.1.11.

Chapter 3 Selections of Metric Generalized Inverses

3.1 Criteria for the Metric Generalized Inverses of Linear Operators

To get continuous selections of the metric generalized inverses in a Banach space, we first refine Theorem 3.1 and Theorem 3.2 in [48], obtaining Theorems 3.1.1 and 3.1.2.

Theorem 3.1.1 Let $T \in L(X, Y)$ and assume that $R(T)$ is an approximatively compact subspace of $Y$ and $N(T)$ is a proximinal subspace of $X$. If $P_{R(T)}(y) \subset R(T)$ for each $y \in R(T) + F_Y^{-1}(R(T)^\perp)$, then
1. $D(T^\partial) = R(T) + F_Y^{-1}(R(T)^\perp)$;
2. for all $y \in D(T^\partial)$,
$$T^\partial(y) = P\big(T^{-1}P_{R(T)}(y);\ \theta\big),$$
where $T^{-1}P_{R(T)}(y) = \{ x \in D(T) : T(x) \in P_{R(T)}(y) \}$ and $P(C; \theta)$ denotes the set of elements of $C$ nearest to $\theta$.

Proof 1. Since $R(T)$ is approximatively compact in $Y$, it is a proximinal subspace of $Y$. If $Y = R(T)$, then $D(T^\partial) = R(T) = R(T) + F_Y^{-1}(R(T)^\perp)$. If $Y \neq R(T)$, it follows from the Hahn-Banach Theorem that there exists $y^* \in R(T)^\perp \subset Y^*$ such that $\|y^*\| = 1$. Hence $R(T)^\perp \neq \{\theta\}$.

Chapter 3 Selections of Metric Generalized Inverses Take any y Y \R T ). Since R T ) is a proximinal subspace of Y, we have that P RT ) y). Taking any y 0 P RT ) y), by Lemma 2.1.8, we have Hence θ y y 0 F 1 Y F Y y y 0 ) R T ). R T ) ), whence F 1 Y R T ) ) {θ}. We claim that D T ) = R T ) F 1 Y R T ) ). Indeed, the operator equation T x = y has a best approximate solution x 0 D T ) for any y D T ). Thus y 0 = T x 0 R T ) satisfies the equalities y y 0 = y T x 0 = inf x DT ) y T x = dist y, R T )) ) = dist y, R T ). Hence y 0 P RT ) y). It follows from Lemma 2.1.8 that F Y y y 0 ) R T ). Setting y 1 = y y 0, we have y 1 F 1 Y R T ) ), whence Therefore y = y 0 + y 1 R T ) F 1 Y D T ) R T ) F 1 Y Conversely, for any y R T ) F 1 Y R T ) ). R T ) ). 3.1) R T ) ), we claim that the operator equation T x = y has a best approximate solution in D T ), that is, R T ) F 1 Y We will divide the proof of 3.2) into three steps. Step 1. For any y R T ) F 1 Y y b y = R T ) ) D T ). 3.2) R T ) ), there exists b y P RT ) y) with inf y z. z RT ) 15

F 1 Y 3.1 Criteria for the Metric Generalized Inverses of Linear Operators In fact, for any y R T ) F 1 Y R T ) ) such that R T ) ), there exist b y R T ) and y 1 y = b y + y 1, i.e. y b y = y 1 F 1 Y R T ) ), whence F Y y b y ) R T ). Again, by Lemma 2.1.8, we have b y P RT ) y). Step 2. For any b y P RT ) y), we claim that there exists ˆx by D T ) such that i) ˆx by is a minimal norm solution to the operator equation T x = b y ; ii) ˆx by is an extremal solution to the operator equation T x = y. Indeed, b y R T ) for any b y P RT ) y), whence there exists x D T ) such that T x = b y. Since N T ) is a proximinal subspace of X, we may choose x P NT ) x ). Defining ˆx by = x x, we have ˆx by ) I DT ) P NT ) x ), whence T ˆx by = T x = b y, i.e. ˆx by T 1 b y, Next, we are going to show that ˆx by P T 1 b y θ), i.e. ˆx by is a minimal norm solution to the operator equation T x = b y, which will prove i). For any v N T ), setting w = x + v, we have w N T ). Since x P NT ) x ), we get ˆx by θ = x x x w = x x v = ˆx by v, 16

Chapter 3 Selections of Metric Generalized Inverses ) i.e. θ P NT ) ˆxby. Now it follows from Lemma 2.1.8 that ) F X ˆxby N T ). ) Then, we take ˆx F X ˆxby N T ), obtaining ˆx, ˆx by = ˆx 2 = ˆx by 2. For any x T 1 b y, we have T x = b y = T ˆx by, whence x 0 := x ˆx by N T ) and ˆx by 2 = ˆx, ˆx by = ˆx, ˆx by + x 0 = ˆx, x ˆx x ˆx by x. This implies that ˆx by P T 1 b y θ). Since b y P RT ) y) and b y = T ˆx by, we have y T ˆx by = y b y = inf y z z RT ) = inf x DT ) y T x, i.e. ˆx by is an extremal solution to the operator equation T x = y, so ii) follows. Step 3. For any y R T ) F 1 Y R T ) ), we claim that the operator equation T x = y has a best approximate solution, i.e. R T ) F 1 Y Indeed, we define for any y DT ) = R T ) F 1 Y L y) = R T ) ) D T ). 3.3) R T ) ) : { } inf ˆxby : ˆx by is a minimal norm solution to T x = b y. b y P RT ) y) { Next, we choose a sequence ˆx b n) y ˆx b n) y } such that ˆx n+1) b, T x n) y b = b n) y y n = 1, 2, ) 17

3.1 Criteria for the Metric Generalized Inverses of Linear Operators and L y) = lim ˆx n) n b. y Since R T ) is an approximately compact subspace of Y, {b n) y } P RT ) y) R T ) R T ), and y b n) y = inf y z. 3.4) z RT ) We may assume without loss of generality that {b n) y } is a Cauchy sequence in Y. By the completeness of Y, there exists b 0) y From 3.4), we have Hence b 0) y b 0) y R T ) such that = lim b n) y. n y b 0) y = inf y z. z RT ) P RT ) y). Since y R T ) F 1 Y that P RT ) y) R T ) for each y R T ) F 1 Y b 0) y P RT ) y) R T ), R T ) ), then by using the fact R T ) ), we have and hence b 0) y P RT ) y). From the Steps 1 and 2, we know that there exists an ˆx 0) b D T ) such that y ˆx 0) b is a minimal norm solution to the operator equation T x = b 0) y, and a best y approximate solution to the operator equation T x = y. For any extremal solution x D T ) to the operator equation T x = y, we have T x P RT ) y). Let us set b y = T x P RT ) y) in Step 2. There exists ˆx by D T ) such that ˆx by is a minimal norm solution of the operator equation T x = b y, so { By the definition of ˆx b n) y ˆx by x. }, let n such that ˆx ˆx 0) b b y x. y Hence ˆx 0) b is a best approximate solution of the operator equation T x = y and then y 3.3) follows. 18

Chapter 3 Selections of Metric Generalized Inverses Combining 3.1) and 3.3) we obtain D T ) = R T ) F 1 Y R T ) ). 2. For any y D T ), by the definition of T, there exists a best approximate solution x 0 D T ), whence i.e. P RT ) y). R T ) F 1 F 1 Y T x 0 R T ) R T ) and T x 0 P RT ) y), It is obvious that P RT ) y) is a closed convex subset of Y. Since y D T ) = Y R T ) ), by the condition P RT ) y) R T ) for each y R T ) R T ) ), we see that T 1 P RT ) y), { } where T 1 P RT ) y) = x D T ) : T x P RT ) y), which is a nonempty convex subset of X. For any y D T ) and any x 0 T y), by the definition of T y), we see that x 0 T 1 P RT ) y) and whence P x 0 = inf ) T 1 P RT ) y) ; θ and Conversely, for any y P T y) P { } x : x T 1 P RT ) y), ) T 1 P RT ) y) ; θ. 3.5) ) T 1 P RT ) y) ; θ, by the definition of the set-valued metric projection and the definition of the set-valued metric generalized inverse, we see that y T y), i.e. Combining 3.5)and 3.6), we obtain P T y) = P ) T 1 P RT ) y) ; θ T y). 3.6) ) T 1 P RT ) y) ; θ, y D T ). This finishes the proof. Theorem 3.1.2 Let T L X, Y ) and R T ) be an approximately compact Chebyshev subspace of Y, and N T ) be a proximinal subspace of X. Then 19

3.1 Criteria for the Metric Generalized Inverses of Linear Operators 1. D T ) = R T ) F 1 Y 2. for all y D T ), we have R T ) ) ; T y) = I DT ) P NT ) ) T 1 π RT ) y). Proof First we show that π RT ) y) R T ), y R T ) F 1 Y R T ) ). 3.7) Indeed, since R T ) is an approximately compact Chebyshev subspace of Y, for any y R T ) F 1 Y R T ) ), there exists a unique π RT ) y) such that P RT ) y) = { } π RT ) y). On the other hand, there exist y 0 R T ) and y 1 F 1 Y R T ) ) such that y = y 0 + y 1, whence Therefore y y 0 = y 1 F 1 Y R T ) ) = F 1 Y R T ) ). F Y y y 0 ) R T ). { } From Lemma 2.1.8, we have y 0 P RT ) y) = π RT ) y), and hence which shows that 3.7) holds. By Theorem 3.1.1, we have π RT ) y) = y 0 R T ), D T ) = R T ) F 1 Y R T ) ), and T y) = P ) T 1 π RT ) y) ; θ, y D T ). In order to finish the proof, we need only to show that P ) T 1 π RT ) y) ; θ = ) I DT ) P NT ) T 1 π RT ) y), y D T ), 3.8) where π RT ) y) R T ). 20

Chapter 3 Taking arbitrary y D T ) and x P Selections of Metric Generalized Inverses x T 1 P RT ) y) ) T 1 π RT ) y) ; θ, we obtain and x = inf{ w : w T 1 π RT ) y)}. 3.9) Theorem 2.1.7 implies that x can be decomposed in the form x = x 1 + x 2, where x 1 P NT )x), x 2 F 1 X NT )). Hence T x 2 ) = T x x 1 ) = T x) = π RT ) y), i.e. x 2 T 1 π RT ) y). For any v NT ), we have x 2 v T 1 π RT ) y), and hence it follows from 3.9) that i.e. x 1 P NT ) x 2 ). Therefore and consequently P x 2 x 1 ) = x x 2 v, x = x 2 x 1 ) I DT ) P NT ) x 2 ) I DT ) P NT ) T 1 π RT ) y), ) T 1 π RT ) y) ; θ ) I DT ) P NT ) T 1 π RT ) y). 3.10) Conversely, taking arbitrary x I DT ) P NT ) )T 1 π RT ) y), y D T ), there exists x T 1 π RT ) y) such that x I DT ) PNT )) x). Hence, there exists x P NT ) x ) such that x = x x and x NT ), 21

3.2 Continuous Homogeneous Selection of Metric Generalized Inverses of Linear Operators consequently T x) = T x ) = π RT ) y). Thus we have x T 1 π RT ) y). Next, we will verify that x PT 1 π RT ) y); θ). Taking arbitrary v NT ) and setting w = x +v, we get w NT ). Noticing that x P NT ) x ), for all v NT ), we have x θ = x x x w = x x v = x v. Whence θ P NT ) x). It follows from Theorem 2.1.8 that F X x) NT ). Choosing x F X x) NT ) such that x, x = x 2 = x 2. For any x T 1 π RT ) y), we have T x) = T x) = π RT ) y), whence x x NT ). Let x 0 = x x. Then x = x 0 + x, x 0 NT ). Therefore x 2 = x, x = x, x + x 0 = x, x x x = x x. Hence, it follows that x x for any x T 1 π RT ) y), i.e. x P T 1 π RT ) y); θ). Thus ) ) IDT ) P NT ) T 1 π RT ) y) P T 1 π RT ) y) ; θ. 3.11) Combining 3.10) and 3.11), we obtain ) ) IDT ) P NT ) T 1 π RT ) y) = P T 1 π RT ) y) ; θ, which finished the proof. 22

3.2 Continuous Homogeneous Selections of Metric Generalized Inverses of Linear Operators

Theorem 3.2.1 Let $T \in L(X, Y)$ be a densely defined closed linear operator. Suppose that $R(T)$ is an approximatively compact Chebyshev subspace of $Y$ and that $N(T)$ is a proximinal subspace which is topologically complemented in $X$. If the set-valued projection $\hat{P}_{N(T)} : D(T) \to 2^{N(T)}$, the restriction of $P_{N(T)}$ to $D(T)$, has a continuous homogeneous selection $\hat{\pi}_{N(T)} : D(T) \to N(T)$, then the metric generalized inverse $T^\partial : Y \to 2^{D(T)}$ has a continuous homogeneous selection $T^\sigma : Y \to D(T)$. In this case, we have
$$T^\sigma = (I_{D(T)} - \hat{\pi}_{N(T)})\, T_0^{-1}\, \pi_{R(T)},$$
where $T_0 = T|_{N(T)^c \cap D(T)}$ is the restriction of $T$ to the subspace $N(T)^c \cap D(T)$, and $N(T)^c$ is a topological complement of $N(T)$ in $X$.

Proof Since $R(T)$ is an approximatively compact Chebyshev subspace of $Y$, by Lemma 2.1.4 and Lemma 2.1.10 the metric projection $\pi_{R(T)} : Y \to R(T)$ is a single-valued continuous homogeneous operator. On the other hand, since $N(T)$ is topologically complemented in $X$, there exists a closed subspace $N(T)^c$ of $X$ such that $X = N(T) \oplus N(T)^c$. Let $T_0 := T|_{N(T)^c \cap D(T)}$ be the restriction of $T$ to the subspace $N(T)^c \cap D(T)$. We claim that $T_0 : N(T)^c \cap D(T) \to R(T)$ is one-to-one and onto, whence the inverse operator $T_0^{-1} : R(T) \to N(T)^c \cap D(T)$ exists and is a linear operator. Indeed, if $x, y \in N(T)^c \cap D(T)$ are such that $T_0(x) = T_0(y)$, then $x - y \in N(T)$ and $x - y \in N(T)^c$. Since $N(T) \cap N(T)^c = \{\theta\}$, we see that $x = y$, i.e. $T_0$ is one-to-one.

On the other hand, for any $y \in R(T)$ there exists $x \in D(T)$ such that $y = Tx$. Since $x \in D(T) \subset X = N(T) \oplus N(T)^c$, there exist $x_0 \in N(T)$ and $x_1 \in N(T)^c$ such that $x = x_0 + x_1$. Hence $x_1 = x - x_0 \in N(T)^c \cap D(T)$ and $T x_1 = T x = y$, i.e. $T_0$ is onto.

Next, we prove that $T_0^{-1} : R(T) \to N(T)^c \cap D(T)$ is a closed linear operator. Let $\{x_n\} \subset N(T)^c \cap D(T)$ be such that $x_n \to x_0$ and $T_0(x_n) = T x_n \to y_0$ as $n \to \infty$. Since $T$ is a closed linear operator, we have $x_0 \in D(T)$ and $y_0 = T x_0$. On the other hand, since $N(T)^c$ is a closed linear subspace, we see that $x_0 \in N(T)^c \cap D(T)$ and $y_0 = T_0(x_0)$. Therefore $T_0$ is a closed linear operator, and its inverse $T_0^{-1} : R(T) \to N(T)^c \cap D(T)$ is also a closed linear operator. Since $R(T)$ is a closed linear subspace of $Y$, and hence complete, it follows from the Closed Graph Theorem that $T_0^{-1} : R(T) \to N(T)^c \cap D(T)$ is a continuous linear operator.

Since $\hat{\pi}_{N(T)} : D(T) \to N(T)$ is a single-valued continuous homogeneous selection of the set-valued projection $\hat{P}_{N(T)} : D(T) \to 2^{N(T)}$, the operator $I_{D(T)} - \hat{\pi}_{N(T)}$ is a single-valued continuous homogeneous selection of the set-valued mapping $I_{D(T)} - \hat{P}_{N(T)}$. We define $T^\sigma : Y \to D(T)$ by the formula
$$T^\sigma(y) = (I_{D(T)} - \hat{\pi}_{N(T)})\, T_0^{-1}\, \pi_{R(T)}(y), \quad y \in Y.$$
Then, by Theorem 3.1.2, we have
$$T^\sigma(y) \in (I_{D(T)} - \hat{P}_{N(T)})\, T_0^{-1}\, \pi_{R(T)}(y)$$

$$\subset (I_{D(T)} - P_{N(T)})\, T^{-1} \pi_{R(T)}(y) = T^\partial(y)$$
for any $y \in D(T^\partial)$. Since $R(T)$ is an approximatively compact Chebyshev subspace of $Y$, by Theorem 3.1.2 and Theorem 2.1.7 we obtain
$$D(T^\partial) = R(T) \dotplus F_Y^{-1}(R(T)^\perp) = Y,$$
and $T^\sigma = (I_{D(T)} - \hat{\pi}_{N(T)})\, T_0^{-1}\, \pi_{R(T)}$ is a continuous homogeneous selection of the metric generalized inverse $T^\partial : Y \to 2^{D(T)}$.

Theorem 3.2.2 Let $L(X, Y)$ here denote the space of all bounded linear operators $T$ from $X$ to $Y$ with closed range $R(T)$ (the domain $D(T)$ of $T$ being a subspace of $X$). Let $T \in L(X, Y)$, let $R(T)$ be an approximatively compact Chebyshev subspace of $Y$, and let $N(T)$ be a proximinal subspace which is topologically complemented in $X$. Then the metric generalized inverse $T^\partial : Y \to 2^X$ has a continuous homogeneous selection $T^\sigma : Y \to X$ if and only if the set-valued projection $P_{N(T)} : X \to 2^{N(T)}$ has a continuous homogeneous selection $\pi_{N(T)} : X \to N(T)$. In this case, we have
$$T^\sigma = (I_{D(T)} - \pi_{N(T)})\, T_0^{-1}\, \pi_{R(T)},$$
where $T_0 = T|_{N(T)^c}$ is the restriction of $T$ to the subspace $N(T)^c$, and $N(T)^c$ is a topological complement of $N(T)$ in $X$.

Proof Necessity. If the set-valued metric generalized inverse $T^\partial : Y \to 2^X$ of $T$ has a continuous homogeneous selection $T^\sigma : Y \to X$, define $\pi_{N(T)}(x) := x - T^\sigma T x$; then $\pi_{N(T)} : X \to N(T)$ is a continuous homogeneous operator. By the definitions of $T^\partial$ and $T^\sigma$, we see that
$$T(\pi_{N(T)}(x)) = T x - T T^\sigma T x = \theta,$$

i.e. $\pi_{N(T)}(x) \in N(T)$, and for any $y \in T^{-1}Tx = x + N(T)$:
$$\|T^\sigma T x\| \le \|y\|. \tag{3.12}$$
Let $y = x - z$ for arbitrary $z \in N(T)$. Then $y \in x + N(T) = T^{-1}Tx$, and from (3.12) we have
$$\|x - \pi_{N(T)}(x)\| = \|T^\sigma T x\| \le \|y\| = \|x - z\|$$
for any $z \in N(T)$. Hence $\pi_{N(T)}(x) \in P_{N(T)}(x)$ ($x \in X$), i.e. $\pi_{N(T)} : X \to N(T)$ is a continuous homogeneous selection of the set-valued projection $P_{N(T)} : X \to 2^{N(T)}$.

Sufficiency. Since $T \in L(X, Y)$ is a bounded linear operator defined on all of $X$, $T$ is a densely defined closed linear operator with $D(T) = X$, so the conclusion follows from Theorem 3.2.1.

Chapter 4 Perturbations of Moore-Penrose Metric Generalized Inverses of Linear Operators

4.1 Perturbation of the Solution of the Operator Equation Tx = b

Let $T \in B(X, Y)$. Throughout this section, let $\delta T \in B(X, Y)$, $\overline{T} = T + \delta T$, $b \in R(T)$ and $b \neq \theta$. Let us define
$$S(T, b) = \{ x \in X : Tx = b \}, \qquad S(\overline{T}, b) = \{ x \in X : \overline{T}x = b \}.$$

Lemma 4.1.1 Let $T \in B(X, Y)$, and let $N(T)$ and $R(T)$ be Chebyshev subspaces of $X$ and $Y$, respectively. Then the Moore-Penrose metric generalized inverse $T^M$ of $T$ exists and
$$\frac{1}{\|T^M\|} \le \gamma(T) \le \frac{\|T^M T\|\,\|T T^M\|}{\|T^M\|}.$$

Proof By Theorem 2.2.4, there exists a unique Moore-Penrose metric generalized inverse $T^M$ of $T$ such that $T^M(y) = (T|_{C(T)})^{-1}\pi_{R(T)}(y)$, $y \in D(T^M)$. It follows from Remark 2.2.5 that the norm of $T^M$ is well defined by (2.1). For any $x \in X$ and $y \in N(T)$, we have
$$\|T^M T x\| = \|T^M T (x - y)\| \le \|T^M T\|\,\|x - y\| \tag{4.1}$$

and
$$\operatorname{dist}(x, N(T)) \le \|x - (I - T^M T)x\| = \|T^M T x\|.$$
It follows from (4.1) that $\|x - y\| \ge \|T^M T x\|\,\|T^M T\|^{-1}$, or equivalently
$$\operatorname{dist}(x, N(T)) \ge \|T^M T x\|\,\|T^M T\|^{-1}.$$
Therefore
$$\frac{\|T^M T x\|}{\|T^M T\|} \le \operatorname{dist}(x, N(T)) \le \|T^M T x\| \le \|T^M\|\,\|T x\|. \tag{4.2}$$
By the definition of $\gamma(T)$, inequality (4.2) implies that $\gamma(T) \ge \dfrac{1}{\|T^M\|}$ and
$$\|T x\| \ge \gamma(T)\,\operatorname{dist}(x, N(T)) \ge \gamma(T)\,\frac{\|T^M T x\|}{\|T^M T\|}. \tag{4.3}$$
For any $z \in Y$, if we substitute $T^M z$ for $x$ in inequality (4.3), we get
$$\|T T^M z\| \ge \gamma(T)\,\frac{\|T^M T T^M z\|}{\|T^M T\|} = \gamma(T)\,\frac{\|T^M z\|}{\|T^M T\|}.$$
Therefore
$$\gamma(T) \le \frac{\|T^M T\|\,\|T T^M\|}{\|T^M\|}.$$
The proof is completed.

Lemma 4.1.2 Let $T \in B(X, Y)$, and let $N(T)$ and $R(T)$ be Chebyshev subspaces of $X$ and $Y$, respectively. Then
$$\|T\|^{-1}\,\|\delta T x\| \le \operatorname{dist}(x, S(T, b)) \le \|T^M\|\,\|\delta T x\|$$
for every $x \in S(\overline{T}, b)$.

Proof By Definition 2.1.4, it is easy to see that
$$\|T x\| \ge \gamma(T)\,\operatorname{dist}(x, N(T)), \quad x \in X.$$
Therefore, for any $x \in S(\overline{T}, b)$,
$$\operatorname{dist}(x, S(T, b)) = \operatorname{dist}(x - T^M b, N(T)) \le \gamma(T)^{-1}\,\|T(x - T^M b)\|. \tag{4.4}$$
It follows from Lemma 4.1.1 that
$$\gamma(T)^{-1} \le \|T^M\|. \tag{4.5}$$
This inequality together with (4.4) gives
$$\operatorname{dist}(x, S(T, b)) \le \|T^M\|\,\|T(x - T^M b)\| = \|T^M\|\,\|T x - T T^M b\|.$$
The definition of $T^M$ shows that $T T^M = \pi_{R(T)}$, and thus $T T^M b = b$. Moreover,
$$\operatorname{dist}(x, S(T, b)) \le \|T^M\|\,\|T x - b\| = \|T^M\|\,\|(\overline{T} - \delta T)x - b\| = \|T^M\|\,\|\overline{T} x - b - \delta T x\| = \|T^M\|\,\|\delta T x\|.$$
On the other hand, it is easy to see that
$$\|T(x - T^M b)\| = \|T(x - T^M b - y)\| \le \|T\|\,\|x - T^M b - y\| \quad \text{for any } y \in N(T).$$
Therefore
$$\operatorname{dist}(x, S(T, b)) \ge \|T\|^{-1}\,\|T(x - T^M b)\| = \|T\|^{-1}\,\|(\overline{T} - \delta T)x - T T^M b\| = \|T\|^{-1}\,\|\overline{T}x - \delta T x - b\| = \|T\|^{-1}\,\|\delta T x\|.$$
The proof is completed.

Theorem 4.1.3 Let $T \in B(X, Y)$, and let $N(T)$ and $R(T)$ be Chebyshev subspaces of $X$ and $Y$, respectively. If $\|T^M\|\,\|\delta T\| < 1$, then for every $x \in S(\overline{T}, b)$ there exists a unique $\bar{x} \in S(T, b)$ such that
$$\frac{\|x - \bar{x}\|}{\|\bar{x}\|} \le \frac{\|T^M\|\,\|\delta T\|}{1 - \|T^M\|\,\|\delta T\|}.$$

Proof For any $x \in S(\overline{T}, b)$, it follows from $b \neq \theta$ that $x \neq \theta$. Since $S(T, b) = T^M b + N(T)$ and $N(T)$ is a Chebyshev subspace of $X$, $S(T, b)$ is a Chebyshev linear manifold. Thus, there exists a unique $\bar{x} \in S(T, b)$ such that
$$\|x - \bar{x}\| = \operatorname{dist}(x, S(T, b))$$
for any $x \in S(\overline{T}, b) \subset X$. By Lemma 4.1.2, we have $\|x - \bar{x}\| \le \|T^M\|\,\|\delta T\|\,\|x\|$, or equivalently
$$\frac{\|x - \bar{x}\|}{\|\bar{x}\|} \le \|T^M\|\,\|\delta T\|\,\frac{\|x\|}{\|\bar{x}\|}. \tag{4.6}$$
Moreover, $\|x\| \le \|x - \bar{x}\| + \|\bar{x}\|$, and hence
$$\frac{\|x\|}{\|\bar{x}\|} \le 1 + \frac{\|x - \bar{x}\|}{\|\bar{x}\|}. \tag{4.7}$$
This means, by inequality (4.6), that
$$\frac{\|x - \bar{x}\|}{\|\bar{x}\|} \le \frac{\|T^M\|\,\|\delta T\|}{1 - \|T^M\|\,\|\delta T\|},$$
which finishes the proof.

In the following, let $\delta b \in R(T)$, $\bar{b} = b + \delta b \in R(T)$ and $\bar{b} \neq \theta$. Suppose
$$S(\overline{T}, \bar{b}) = \{ x \in X : \overline{T}x = \bar{b} \}.$$

Lemma 4.1.4 Let $T \in B(X, Y)$, and let $N(T)$ and $R(T)$ be Chebyshev subspaces of $X$ and $Y$, respectively. Then
$$\|T\|^{-1}\,\|\delta T x - \delta b\| \le \operatorname{dist}(x, S(T, b)) \le \|T^M\|\,\|\delta T x - \delta b\|$$
for every $x \in S(\overline{T}, \bar{b})$.

Proof It follows from the definition of $\gamma(T)$ that
$$\|T(x - T^M b)\| \ge \gamma(T)\,\operatorname{dist}(x, S(T, b)) = \gamma(T)\,\operatorname{dist}(x - T^M b, N(T)) \tag{4.8}$$
for all $x \in S(\overline{T}, \bar{b})$. Lemma 4.1.1 means that
$$\gamma(T)^{-1} \le \|T^M\|. \tag{4.9}$$
In addition, (4.8) implies that
$$\operatorname{dist}(x, S(T, b)) \le \|T^M\|\,\|T(x - T^M b)\| = \|T^M\|\,\|T x - T T^M b\|.$$
By the definition of $T^M$, $T T^M = \pi_{R(T)}$. Thus $T T^M b = b$ and
$$\operatorname{dist}(x, S(T, b)) \le \|T^M\|\,\|T x - b\| = \|T^M\|\,\|(\overline{T} - \delta T)x - b\| = \|T^M\|\,\|\overline{T} x - b - \delta T x\| = \|T^M\|\,\|\delta T x - \delta b\|.$$
On the other hand,
$$\|T(x - T^M b)\| = \|T(x - T^M b - y)\| \le \|T\|\,\|x - T^M b - y\| \quad \text{for any } y \in N(T).$$
Hence
$$\operatorname{dist}(x, S(T, b)) \ge \|T\|^{-1}\,\|T(x - T^M b)\| = \|T\|^{-1}\,\|T x - T T^M b\| = \|T\|^{-1}\,\|(\overline{T} - \delta T)x - b\|$$

$$= \|T\|^{-1}\,\|\overline{T} x - \delta T x - b\| = \|T\|^{-1}\,\|\bar{b} - b - \delta T x\| = \|T\|^{-1}\,\|\delta b - \delta T x\|.$$
The proof is completed.

Theorem 4.1.5 Let $T \in B(X, Y)$, and let $N(T)$ and $R(T)$ be Chebyshev subspaces of $X$ and $Y$, respectively. If $\|T^M\|\,\|\delta T\| < 1$, then there exists a unique $\bar{x} \in S(T, b)$ such that, for every $x \in S(\overline{T}, \bar{b})$,
$$\frac{\|x - \bar{x}\|}{\|\bar{x}\|} \le \|T^M\|\left(\|\delta T\| + \frac{\|\overline{T}\|\,\|\delta b\|}{\|\bar{b}\|}\right)\frac{1}{1 - \|T^M\|\,\|\delta T\|}.$$

Proof By the proof of Theorem 4.1.3, there exists a unique $\bar{x} \in S(T, b)$ such that $\bar{x} \neq \theta$ and
$$\|x - \bar{x}\| = \operatorname{dist}(x, S(T, b)), \quad x \in S(\overline{T}, \bar{b}).$$
By Lemma 4.1.4, we have $\|x - \bar{x}\| \le \|T^M\|\,\|\delta T x - \delta b\|$ for every $x \in S(\overline{T}, \bar{b})$. Therefore
$$\frac{\|x - \bar{x}\|}{\|\bar{x}\|} \le \frac{\|T^M\|\,\|\delta T x - \delta b\|}{\|\bar{x}\|} \le \frac{\|T^M\|\,(\|\delta T\|\,\|x\| + \|\delta b\|)}{\|\bar{x}\|} = \|T^M\|\,\|\delta T\|\,\frac{\|x\|}{\|\bar{x}\|} + \frac{\|\delta b\|}{\|\bar{x}\|}\,\|T^M\|. \tag{4.10}$$
Furthermore, it follows from $\|\bar{b}\| = \|\overline{T} x\| \le \|\overline{T}\|\,\|x\|$ that
$$\frac{1}{\|x\|} \le \frac{\|\overline{T}\|}{\|\bar{b}\|}.$$
This implies, by (4.10), that
$$\frac{\|x - \bar{x}\|}{\|\bar{x}\|} \le \|T^M\|\,\|\delta T\|\,\frac{\|x\|}{\|\bar{x}\|} + \frac{\|\overline{T}\|}{\|\bar{b}\|}\,\|\delta b\|\,\|T^M\|. \tag{4.11}$$
On the other hand, it is easy to see that
$$\frac{\|x\|}{\|\bar{x}\|} \le \frac{\|x - \bar{x}\| + \|\bar{x}\|}{\|\bar{x}\|} = 1 + \frac{\|x - \bar{x}\|}{\|\bar{x}\|}. \tag{4.12}$$
By (4.12) and (4.11), we get
$$\frac{\|x - \bar{x}\|}{\|\bar{x}\|} \le \|T^M\|\left(\|\delta T\| + \frac{\|\overline{T}\|\,\|\delta b\|}{\|\bar{b}\|}\right)\frac{1}{1 - \|T^M\|\,\|\delta T\|}.$$
The proof is completed.
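In the invertible Euclidean case one has $S(T, b) = \{T^{-1}b\}$ and $T^M = T^{-1}$, and Theorem 4.1.3 reduces to the classical perturbation bound for linear systems. The sketch below (an added illustration; matrix, right-hand side and perturbation are arbitrary, and spectral norms are used) compares the actual relative error with the bound $\|T^M\|\,\|\delta T\|/(1 - \|T^M\|\,\|\delta T\|)$:

```python
import numpy as np

rng = np.random.default_rng(1)
T = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([1.0, 1.0])
dT = 1e-2 * rng.standard_normal((2, 2))         # small perturbation delta T

TM_norm = np.linalg.norm(np.linalg.inv(T), 2)   # ||T^M|| = ||T^{-1}|| here
dT_norm = np.linalg.norm(dT, 2)
q = TM_norm * dT_norm
assert q < 1                                    # hypothesis of Theorem 4.1.3

x_bar = np.linalg.solve(T, b)                   # S(T, b) = {T^{-1} b}
x = np.linalg.solve(T + dT, b)                  # solution of the perturbed equation

rel_err = np.linalg.norm(x - x_bar) / np.linalg.norm(x_bar)
bound = q / (1.0 - q)
print(f"relative error {rel_err:.2e} <= bound {bound:.2e}:", rel_err <= bound)
```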

4.2 Perturbation of Moore-Penrose Metric Generalized Inverse

Definition 4.2.1 [57] Let $T \in B(X, Y)$, and let $N(T)$ and $R(T)$ be Chebyshev subspaces of $X$ and $Y$, respectively. Suppose that $\delta T \in B(X, Y)$, $\overline{T} = T + \delta T$, and that $N(\overline{T})$ and $R(\overline{T})$ are also Chebyshev subspaces of $X$ and $Y$, respectively. Then $\delta T$ is called a metric stable perturbation of $T$. The set of all metric stable perturbations of $T$ is called the metric stable perturbation set of $T$.

Remark 4.2.1 For any metric stable perturbation $\delta T$ of $T$, it follows from Theorem 2.2.4 that there exist the Moore-Penrose metric generalized inverses $T^M$ and $\overline{T}^M$ of $T$ and $\overline{T}$, respectively, such that
$$D(T^M) = R(T) \dotplus F_Y^{-1}(R(T)^\perp), \qquad D(\overline{T}^M) = R(\overline{T}) \dotplus F_Y^{-1}(R(\overline{T})^\perp).$$
Since $R(T)$ and $R(\overline{T})$ are both Chebyshev subspaces of $Y$, Theorem 2.1.7 gives $D(T^M) = D(\overline{T}^M) = Y$. Let us define
$$Y_{\delta T}(T) := \{ b \in Y : F_X(\overline{T}^M b - T^M b) \cap N(T)^\perp \neq \emptyset \}.$$
It is obvious that $\theta \in Y_{\delta T}(T)$. Therefore,
1. $Y_{\delta T}(T) \neq \emptyset$,

2. $Y_{\delta T}(T) \subset Y$ is a homogeneous set.

Remark 4.2.2 Let $T \in B(X, Y)$. If $N(T)$ and $R(T)$ are Chebyshev subspaces of $X$ and $Y$, respectively, then $T$ admits non-zero metric stable perturbations. Indeed, if $\delta T = \tau T$ for any $\tau \in \mathbb{R}$, then $\overline{T} = T + \delta T = (1 + \tau)T$, $N(\overline{T}) = N(T)$ and $R(\overline{T}) = R(T)$; thus $\delta T$ is a metric stable perturbation of $T$.

Theorem 4.2.3 Let $T \in B(X, Y)$, and let $N(T)$ and $R(T)$ be Chebyshev subspaces of $X$ and $Y$, respectively. If $\delta T$ is a metric stable perturbation of $T$ and $\|T^M\|\,\|\delta T\| < 1$, then
$$\frac{\|\overline{T}^M - T^M\|_0}{\|\overline{T}^M\|} \le \|T^M\|\,\|\delta T\| \le \frac{\|T^M\|\,\|\delta T\|}{1 - \|T^M\|\,\|\delta T\|}, \tag{4.13}$$
where $\|T^M\|$ and $\|\overline{T}^M\|$ are the norms of bounded homogeneous operators given by (2.1), and
$$\|\overline{T}^M - T^M\|_0 = \sup_{\substack{b \in Y_{\delta T}(T) \\ b \neq \theta}} \frac{\|(\overline{T}^M - T^M)b\|}{\|b\|}.$$
Here $\|\overline{T}^M - T^M\|_0 = 0$ when $Y_{\delta T}(T) = \{\theta\}$.

Proof By Theorem 2.2.4, there exist the Moore-Penrose metric generalized inverses $T^M$ and $\overline{T}^M$ of $T$ and $\overline{T}$, respectively, such that
$$D(T^M) = R(T) \dotplus F_Y^{-1}(R(T)^\perp), \qquad D(\overline{T}^M) = R(\overline{T}) \dotplus F_Y^{-1}(R(\overline{T})^\perp).$$
Noting that $R(T)$ and $R(\overline{T})$ are both Chebyshev subspaces of $Y$, Theorem 2.1.7 shows that $D(T^M) = D(\overline{T}^M) = Y$. If $Y_{\delta T}(T) = \{\theta\}$, then inequality (4.13) is trivial. Otherwise, we have $b \neq \theta$ and $x = \overline{T}^M b \in S(\overline{T}, b)$ for all $b \in Y_{\delta T}(T)$. By Lemma 4.1.2, we have
$$\operatorname{dist}(x, S(T, b)) \le \|T^M\|\,\|\delta T x\|. \tag{4.14}$$
Since $S(T, b) = T^M b + N(T)$ is a closed linear manifold and $N(T)$ is a Chebyshev subspace of $X$, $S(T, b)$ is a Chebyshev linear manifold. Hence
$$\operatorname{dist}(x, S(T, b)) = \operatorname{dist}(\overline{T}^M b, T^M b + N(T)) = \|\overline{T}^M b - T^M b - \pi_{N(T)}(\overline{T}^M b - T^M b)\|, \tag{4.15}$$

where $\pi_{N(T)}$ is the metric projection operator from $X$ onto $N(T)$. Let us denote $x^b_M := (\overline{T}^M - T^M)b$. Since $b \in Y_{\delta T}(T)$ and $b \neq 0$, we may assume that $x^b_M \neq \theta$, so that
$$F_X(x^b_M) \cap N(T)^\perp \neq \emptyset. \tag{4.16}$$
Take any $x^* \in F_X(x^b_M) \cap N(T)^\perp$; then
$$\|x^b_M\|^2 = \langle x^*, x^b_M \rangle = \langle x^*, x^b_M - x \rangle \le \|x^*\|\,\|x^b_M - x\| \quad \text{for any } x \in N(T).$$
Since $\|x^b_M\| = \|x^*\| \neq 0$, we get
$$\|x^b_M - \theta\| = \inf_{x \in N(T)} \|x^b_M - x\|.$$
It follows from the assumption that $N(T)$ is a Chebyshev subspace that
$$\pi_{N(T)}(x^b_M) = \theta. \tag{4.17}$$
Next, from equality (4.15) we obtain
$$\operatorname{dist}(x, S(T, b)) = \|\overline{T}^M b - T^M b\|.$$
In addition, by inequality (4.14) we obtain $\|(\overline{T}^M - T^M)b\| \le \|T^M\|\,\|\delta T\|\,\|\overline{T}^M b\|$, whence
$$\frac{\|(\overline{T}^M - T^M)b\|}{\|b\|} \le \|T^M\|\,\|\delta T\|\,\|\overline{T}^M\|. \tag{4.18}$$
Taking the supremum of the left-hand side over $b$ in (4.18), we get
$$\frac{\|\overline{T}^M - T^M\|_0}{\|\overline{T}^M\|} \le \|T^M\|\,\|\delta T\|.$$
By the assumption that $\|T^M\|\,\|\delta T\| < 1$, we have
$$\frac{\|\overline{T}^M - T^M\|_0}{\|\overline{T}^M\|} \le \|T^M\|\,\|\delta T\| \le \frac{\|T^M\|\,\|\delta T\|}{1 - \|T^M\|\,\|\delta T\|},$$
and the proof is completed.

In general, the metric generalized inverse is a bounded homogeneous nonlinear operator, so its perturbation is a problem about nonlinear generalized inverses. In these circumstances we factor the nonlinear operator into the product of a linear operator and a quasi-linear operator, and then partially draw on the discussion of the perturbation of linear generalized inverses. To obtain the perturbation of the Moore-Penrose metric generalized inverse $T^M$ of $T \in B(X, Y)$, we first discuss the quasi-additivity of $T^M$.

Theorem 4.2.4 Let $T \in B(X, Y)$ and assume that $N(T)$ and $R(T)$ are Chebyshev subspaces of $X$ and $Y$, respectively, and that $\pi_{N(T)}^{-1}(\theta)$ is a linear subspace of $X$. Then:
(1) there exists a unique Moore-Penrose metric generalized inverse $T^M$ of $T$, and
$$T^M y = (I_{D(T)} - \pi_{N(T)})\, T^- \pi_{R(T)}\, y, \quad y \in Y, \tag{4.19}$$
where $T^-$ is a linear inner inverse of $T$;
(2) $T^M$ is quasi-additive (i.e., $T^M$ is quasi-additive on $R(T)$), and
$$T^M(x + y) = T^M x + T^M y \quad \text{for all } x \in Y,\ y \in R(T).$$

Proof (1) Since $N(T)$ and $R(T)$ are Chebyshev subspaces of $X$ and $Y$, respectively, by Lemma 2.2.2 there exists a unique Moore-Penrose metric generalized inverse $T^M$ of $T$ such that
$$T^M y = (I_{D(T)} - \pi_{N(T)})\, T^- \pi_{R(T)}\, y, \quad y \in D(T^M),$$
where $D(T^M) = R(T) \dotplus F_Y^{-1}(R(T)^\perp)$ and $T^-$ is a linear inner inverse of $T$. Since $R(T)$ is a Chebyshev subspace of $Y$, we have $D(T^M) = Y$ by Theorem 2.1.7. Therefore equality (4.19) is valid.

(2) Noticing that $\pi_{N(T)}^{-1}(\theta)$ is a linear subspace of $X$, Lemma 2.1.5 implies that $\pi_{N(T)}$ is a linear operator. Thus $I_{D(T)} - \pi_{N(T)}$ is a linear operator. By Lemma 2.2.2, there exists a linear inner inverse $T^-$ of $T$. Moreover, $\pi_{R(T)}$ is a bounded quasi-linear

(quasi-additive) metric projection, which shows that $T^M$ is a bounded homogeneous operator. Thus for each $x \in Y$ and $y \in R(T)$, we have
$$T^M(x + y) = (I_{D(T)} - \pi_{N(T)})\, T^- \pi_{R(T)}(x + y) = (I_{D(T)} - \pi_{N(T)})\, T^- [\pi_{R(T)}(x) + y]$$
$$= (I_{D(T)} - \pi_{N(T)})\, T^- \pi_{R(T)}(x) + (I_{D(T)} - \pi_{N(T)})\, T^- y = T^M x + (I_{D(T)} - \pi_{N(T)})\, T^- y = T^M x + T^M y.$$

Corollary 4.2.5 Let $T \in B(X, Y)$, $\delta T \in B(X, Y)$, and let $N(T)$ and $R(T)$ be Chebyshev subspaces of $X$ and $Y$, respectively. If we assume that $\pi_{N(T)}^{-1}(\theta)$ is a linear subspace of $X$ and $R(\delta T) \subset R(T)$, then $T^M \delta T$ is a linear operator.

Proof By Theorem 4.2.4, there exists a unique Moore-Penrose metric generalized inverse $T^M$ of $T$ such that $T^M y = (I_{D(T)} - \pi_{N(T)})\, T^- \pi_{R(T)}\, y$, $y \in Y$. Since $R(\delta T) \subset R(T)$, it is easy to see that
$$T^M \delta T = (I_{D(T)} - \pi_{N(T)})\, T^- \pi_{R(T)}\, \delta T = (I_{D(T)} - \pi_{N(T)})\, T^- \delta T.$$
Therefore $T^M \delta T$ is a linear operator, because $(I_{D(T)} - \pi_{N(T)})\, T^- \delta T$ is a linear operator. The proof is completed.

In order to prove Theorem 4.2.8, we need the following result.

Lemma 4.2.6 Let $T \in H(X)$. If $T$ is quasi-additive on $R(T)$ and $\|T\| < 1$, then the operator $(I - T)^{-1}$ exists and
(1) $(I - T)^{-1} \in H(X)$;
(2) $(I - T)^{-1} = \sum_{k=0}^{\infty} T^k$;
(3) $\|(I - T)^{-1}\| \le \dfrac{1}{1 - \|T\|}$;