GEOMETRY OF LINEAR ILL-POSED PROBLEMS IN VARIABLE HILBERT SCALES


PETER MATHÉ AND SERGEI V. PEREVERZEV

Abstract. The authors study the best possible accuracy of recovering the unknown solution of linear ill-posed problems in variable Hilbert scales. A priori smoothness of the unknown solution is expressed in terms of general source conditions, given through index functions. Emphasis is on geometric concepts. The notion of regularization is appropriately generalized, and the interplay between the qualification of a regularization and the index function becomes visible. A general adaptation strategy is presented and its optimality properties are studied.

1. Introduction

In the present paper we consider the numerical solution of operator equations Ax = y in the presence of noise, which means we are given

(1) $y_\delta = Ax + \delta\xi$,

where the operator $A$ acts between Hilbert spaces $X$ and $Y$, and the noise $\xi$ is assumed to be bounded, $\|\xi\| \le 1$. A numerical method $S$ for approximating $x$, based on the observations $y_\delta$, is given as an arbitrary mapping $S\colon Y \to X$. Its error at any problem instance $x \in X$ is then given by

(2) $e(x, S, \delta) := \sup_{\|\xi\|\le 1} \|x - S(y_\delta)\|.$

The worst-case error over a class $F$ of problem instances is determined as

(3) $e(F, S, \delta) := \sup_{x \in F} e(x, S, \delta).$

The best possible accuracy is defined by minimization over all numerical methods, i.e.,

(4) $e(F, \delta) := \inf_{S\colon Y \to X} e(F, S, \delta).$

Date: March 28,

In the present context we are interested in the asymptotic behavior of $e(F, \delta)$ as $\delta \to 0$, when the class of problem instances $A_\varphi(R)$ is given through an index function $\varphi$ as

(5) $A_\varphi(R) := \{x \in X,\ x = \varphi(A^*A)v,\ \|v\| \le R\};$

the set $A_\varphi(R)$ is called a source condition. For regularization we shall later on assume, more specifically, that the index function is continuous, increasing and satisfies $\varphi(0) = 0$. The problem under consideration is the following. Suppose we are given an index function $\varphi$. Which qualification of a chosen regularization guarantees, after an appropriate parameter choice, the optimal order of approximation uniformly over $A_\varphi(R)$? When the source conditions are given in terms of powers, $\varphi(t) := t^\mu$, the answer to this question is known: we should use regularization of qualification $p \ge \mu/2$, see [28, Thm. 1.2]. Our aim is to answer the above question in the context of general source conditions. Moreover, we are going to show how conditions on $\varphi$ can be given which are easy to verify and which allow classical regularization, in particular Tikhonov's, to yield the best order.

The presentation of the material is as follows. We use this overview to point at the relevant references; within the body of the presentation only a few further references are given, where necessary. We first recall the concept of variable Hilbert scales. Major contributions were made by Hegland in [6, 7] and Tautenhahn [25]. There are also other papers in which such a concept can implicitly be found; we mention the recent paper on regularization of non-linear problems by Bakushinskiĭ [1] and the study by Mair [11]. Having the basic parameters which drive a variable Hilbert scale, we may describe the degree of ill-posedness. This is in accordance with the notion as originally introduced by Wahba [29] in a statistical context; more recently we mention [21]. The present authors tried to capture the main idea in [14]. This approach is generalized here.
The best possible accuracy in the general context, without making the notion of variable Hilbert scales explicit, was given in [9], and here we rely on this description. Hegland [7] also relied upon it, while Tautenhahn gave another proof in [25]. Actually, a corresponding problem was studied by Melkman and Micchelli [17]; we refer also to Micchelli and Rivlin [18]. Turning from the original problem (1) to the symmetrized one is common in many studies; we mention [28, Chapt. 2]. Within the present framework, the original problem and its symmetrized version, see (15)

below, may be studied equivalently. Note, however, that this equivalence does not extend to discretizations, since there are discretizations for (1) which are not obtained from ones of the symmetrized problem. The study of discretizations is deferred to a separate paper, see [15]. The observation that under certain conditions one can switch from the original problem to a diagonal one in $\ell^2$ was exploited in [12] and also in [14] by the present authors. This is particularly useful when the noise is Gaussian white noise, thus orthogonally invariant. The important notion of the qualification of a regularization method was first introduced in [28, Chapt. 2, 3]. Here this concept is generalized to variable Hilbert scales, where its properties become transparent. The importance of appropriate qualification of regularization methods in variable Hilbert scales was recently emphasized in [20, 4.7]. Bakushinskiĭ [1] tried to capture the quality of the source condition within classical regularization. We mention the paper by Deuflhard, Engl and Scherzer [2], where the importance of determining the required qualification for general source conditions became evident (Remark 3.2, there). Data-based adaptation to find the optimal regularization parameter, known as the discrepancy principle, dates back to Phillips [23], predating even Tikhonov's original paper [26]. It was then reinvented by Morozov [19] and Marti [13]. However, it is known that this principle does not provide the best order of approximation for all types of source conditions for which Tikhonov regularization is optimal, see e.g. [5]. So the question arises whether there are strategies which adapt to unknown smoothness, uniformly for all such source conditions. This has been studied by Gfrerer [3] and more recently by Tautenhahn and Hämarik [24]. The proposed strategies are still not satisfactory, since additional approximate solutions, e.g.
by iterated Tikhonov regularization, have to be computed. The procedure of adapting to unknown smoothness on which our proposal is based was first studied in the context of statistics by Lepskiĭ [10]. Since then many authors have adopted this approach for various applications; we mention [27] and [4]. The a posteriori principle proposed in the present paper is free from the above mentioned drawback of the discrepancy principle. Namely, for the first time one has an adaptive principle that allows one to reach the best order of accuracy for all linear ill-posed problems that can in principle be treated in an optimal way by a regularization method with fixed qualification. In a final example we treat classes of source conditions where classical regularization works. This class covers all source conditions studied so far, in particular by Hohage [8]. This also sheds light on the

discussion in [2, Remark 3.2]. Under certain concavity assumptions, classical regularization of a qualification capturing these is suited for regularization.

2. Variable Hilbert scales and the degree of ill-posedness

Here we briefly summarize the concept of variable Hilbert scales, as introduced in Hegland [6, 7]. The basic ingredient is a non-negative compact self-adjoint operator $T\colon X \to X$, acting in a given initial Hilbert space $X$. Moreover, $T$ is assumed to be injective. Its singular numbers are denoted by $(s_k)_{k=1}^\infty$, arranged in non-increasing order; in particular $a := s_1 = \|T\|$. Since $T$ is compact, the only limit point is 0. $T$ admits a (monotonic) Schmidt representation for an orthonormal system $u_1, u_2, \dots$, given by

$T x = \sum_{j=1}^\infty s_j \langle x, u_j\rangle u_j, \quad x \in X.$

Any function $\varphi\colon (0, a] \to (0, \infty)$ is called an index function. We denote by $\mathcal I(0, a]$ the set of all index functions on $(0, a]$. As pointed out in [6], this forms a multiplicative group. Each such function can be assigned a pre-Hilbert space in the following way. Let

$\mathcal F := \Bigl\{x,\ x = \sum_{j=1}^n \langle x, u_j\rangle u_j,\ n < \infty\Bigr\}$

be the linear space of finite expansions in $u_1, u_2, \dots$. Given $\varphi \in \mathcal I(0, a]$ we can endow $\mathcal F$ with the scalar product

$\langle x, y\rangle_\varphi := \sum_{j=1}^\infty \frac{\langle x, u_j\rangle\,\langle y, u_j\rangle}{\varphi^2(s_j)}, \quad x, y \in \mathcal F.$

The completion of $\mathcal F$ is denoted by $X_\varphi$. The family $\{X_\varphi,\ \varphi \in \mathcal I(0, a]\}$ is called a variable Hilbert scale. For the understanding of this concept the following facts are important.

(1) There is an embedding $X_\varphi \hookrightarrow X_\psi$ iff there is a constant $C < \infty$ such that $\varphi(s_k) \le C\psi(s_k)$.
(2) The above embedding is compact if and only if $\lim_{k\to\infty} \varphi(s_k)/\psi(s_k) = 0$.
(3) The adjoint space $X_\varphi'$ of $X_\varphi$ is isometric to $X_{1/\varphi}$.

(4) The following interpolation inequality holds true for any two increasing functions $\varphi, \psi$ such that the composition $\varphi^2 \circ (\psi^2)^{-1}$ is convex:

$(\varphi^2)^{-1}\!\left(\frac{\|x\|^2_{\theta/\varphi}}{\|x\|^2_\theta}\right) \le (\psi^2)^{-1}\!\left(\frac{\|x\|^2_{\theta/\psi}}{\|x\|^2_\theta}\right), \quad x \in X_{\max\{\varphi,\psi\}^{-1}\theta} \setminus \{0\},$

for any index function $\theta$.
(5) The following transfer relation is true. Let $\{X_\varphi\}$ be a scale generated by $T$ and $\{Y_\psi\}$ be generated by the operator $\theta(T)$, for some increasing function $\theta$. Then an easy calculation shows that $X_\varphi$ is isometric to $Y_{\varphi\circ\theta^{-1}}$.
(6) Let $A\colon X \to Y$ be any injective compact operator. The scales $\{X_\varphi,\ \varphi \in \mathcal I(0, a]\}$, generated by $A^*A$, and $\{Y_\varphi,\ \varphi \in \mathcal I(0, a]\}$, generated by $AA^*$, are isometrically isomorphic.

As a consequence, any Hilbert scale in $X$, generated by some operator with singular values $s_1, s_2, \dots$, is isometrically isomorphic to some scale in $\ell^2$, generated by a corresponding diagonal operator with diagonal entries $s_1, s_2, \dots$. From now on we shall assume that the scale is generated by $T := A^*A$. We mention that then $A_\varphi(R)$ is the ball of radius $R$ in $X_\varphi$. With this preliminary discussion we may turn to the description of the degree of ill-posedness, following the previous discussion in [14]. We need to define the degree of ill-posedness of the operator $A$ as well as the effective smoothness of the solution. The latter is certainly represented by $\varphi$, if the error is measured in $X$. It is represented by $\varphi/\nu$, the distance between the spaces $X_\varphi$ and $X_\nu$, if we agree to measure the error in the latter space. The degree of ill-posedness, say $\lambda$, of the operator $A$ is defined by

(6) $\lambda := \inf\bigl\{\psi,\ \|A^{-1}\colon Y \to X_\psi\| \le 1\bigr\}.$

Note that, in contrast to [14, 2], we need to fix the constant, since otherwise any multiple of $\lambda$ would be admissible. An easy computation shows that we arrive at $\lambda(t) = 1/\sqrt t$.
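The weighted norm underlying $X_\varphi$, and the embedding fact (1), can be made concrete for a diagonal operator. The following sketch uses hypothetical data (singular values $s_k = k^{-2}$ and a short coefficient vector, chosen only for illustration, not an example from the text):

```python
import numpy as np

# Toy illustration (hypothetical data): the X_phi-norm of a finite expansion
# x = sum_j x_j u_j in the scale generated by a diagonal operator with
# singular values s_j is (sum_j x_j^2 / phi(s_j)^2)^(1/2).

def scale_norm(x, s, phi):
    """Norm of the coefficient vector x in X_phi."""
    x, s = np.asarray(x, float), np.asarray(s, float)
    return float(np.sqrt(np.sum(x ** 2 / phi(s) ** 2)))

s = 1.0 / np.arange(1, 6) ** 2           # s_1 = 1 >= s_2 >= ... > 0
x = np.array([1.0, 0.5, 0.25, 0.125, 0.0625])

phi = lambda t: t                         # index function phi(t) = t
psi = lambda t: np.sqrt(t)                # index function psi(t) = t^(1/2)

# Since phi(s_j) <= psi(s_j) on (0, 1], fact (1) gives the embedding
# X_phi -> X_psi with constant C = 1: the psi-norm never exceeds the phi-norm.
assert scale_norm(x, s, psi) <= scale_norm(x, s, phi)
```

The smaller index function carries the larger norm, so decreasing $\varphi$ shrinks the space: this is the elementary mechanism behind the compact embeddings used throughout.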
As will appear in Theorem 1 below, the error can be expressed by the two indices, the effective smoothness and the distance between $X_\varphi$ and $X_\lambda$, which is

(7) $\Theta(t) := \sqrt t\,\varphi(t), \quad t > 0,$

precisely as $\varphi \circ \Theta^{-1}$, provided $\Theta$ is increasing. This function will play a crucial role in choosing the regularization parameter as well as in representing the error, see Section 3.

3. Best possible accuracy

Within the context of variable Hilbert scales the best possible accuracy is known for a variety of index functions $\varphi$ and for deterministic noise. For later use we briefly recall the approach. As usual, the best possible accuracy is represented by

$e(A_\varphi(R), \delta) = \sup\{\|x\|,\ \|x\|_\varphi \le R,\ \|Ax\| \le \delta\}.$

Introducing $v$ with $x = \varphi(A^*A)v$, and rescaling, we may rewrite this as

(8) $e^2(A_\varphi(R), \delta) = R^2 \sup\bigl\{\|\varphi(A^*A)v\|^2,\ \|v\|^2 \le 1,\ \|A\varphi(A^*A)v\|^2 \le \delta^2/R^2\bigr\}.$

In many cases the optimization problem (8) has an explicit solution; we refer to [9] and also [6, 4], [25, Thm. 2.1]. It is immediate that the function $\Theta$ from (7) is strictly increasing if $\varphi$ is.

Theorem 1. Let $\varphi$ be any index function for which $\Theta$ is strictly increasing with $\Theta(t) \to 0$ as $t \to 0$. Then

(9) $e(A_\varphi(R), R\Theta(s_j)) \ge R\varphi(s_j), \quad j = 1, 2, \dots.$

Moreover, if the function $t \mapsto \varphi^2((\Theta^2)^{-1}(t))$ is concave, then

(10) $e^2(A_\varphi(R), \delta) = R^2\, s(\delta^2/R^2), \quad \delta \le R\Theta(a),$

where $s$ is a piece-wise linear spline interpolating

(11) $s(\Theta^2(s_j)) = \varphi^2(s_j), \quad j = 1, 2, \dots.$

As a consequence,

(12) $e(A_\varphi(R), \delta) \le R\varphi(\Theta^{-1}(\delta/R)), \quad \delta \le R\Theta(a).$

Proof. The first statement is trivial. Representation (10) with (11) follows from [9]. It remains to prove (12). Let $\delta/R \le \Theta(a)$ be given. Then there are some $k$ and a convex combination with $\alpha + \beta = 1$ for which $(\delta/R)^2 = \alpha\Theta^2(s_k) + \beta\Theta^2(s_{k+1})$. By (10) we may write

$e^2(A_\varphi(R), \delta) = R^2\alpha\varphi^2(s_k) + R^2\beta\varphi^2(s_{k+1})$
(13) $\quad = R^2\alpha\varphi^2\bigl((\Theta^2)^{-1}(\Theta^2(s_k))\bigr) + R^2\beta\varphi^2\bigl((\Theta^2)^{-1}(\Theta^2(s_{k+1}))\bigr) \le R^2\varphi^2\bigl((\Theta^2)^{-1}((\delta/R)^2)\bigr) = R^2\varphi^2(\Theta^{-1}(\delta/R)),$

where we used the concavity assumption to derive (13). The proof is complete.
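For the power-type index functions $\varphi(t) = t^\mu$ mentioned in the introduction one has $\Theta(t) = t^{\mu+1/2}$, so the bound (12) reduces to the familiar rate $R^{1/(2\mu+1)}\,\delta^{2\mu/(2\mu+1)}$. A small numerical sketch (with illustrative values of $\mu$, $\delta$, $R$, not taken from the text) checks this closed form against a bisection inversion of $\Theta$:

```python
import math

# For phi(t) = t^mu, Theta(t) = sqrt(t)*phi(t) = t^(mu + 1/2), and the bound
# R*phi(Theta^{-1}(delta/R)) from (12) equals R^(1/(2mu+1)) * delta^(2mu/(2mu+1)).

def theta_inverse(phi, y, lo=1e-16, hi=1.0, iters=200):
    """Invert Theta(t) = sqrt(t)*phi(t) on [lo, hi] by bisection."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if math.sqrt(mid) * phi(mid) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

mu, delta, R = 1.0, 1e-4, 1.0            # illustrative parameters
phi = lambda t: t ** mu

bound = R * phi(theta_inverse(phi, delta / R))
closed_form = R ** (1.0 / (2 * mu + 1)) * delta ** (2 * mu / (2 * mu + 1))
assert abs(bound - closed_form) / closed_form < 1e-6
```

The same bisection works for any increasing index function, e.g. the logarithmic ones of Section 7, where no closed-form inverse is available.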

Remark 1. We first make the important observation that the error tends to 0 only if $\varphi$ tends to 0, which means, if $X_\varphi$ is compactly embedded in $X$. We further note that the concavity assumption on $\varphi^2\circ(\Theta^2)^{-1}$ is equivalent to convexity of $\rho(t) := \Theta^2((\varphi^2)^{-1}(t))$, which is just the assumption made in Tautenhahn [25, 1.1(iii)]; thus Theorem 1 summarizes previous results. However, the representation of the error in terms of the function $\varphi\circ\Theta^{-1}$ turns out to be useful.

Under fairly general assumptions the function $\delta \mapsto \varphi(\Theta^{-1}(\delta/R))$ also provides a lower bound. Precisely, we state

Corollary 1. Assume that $\varphi$ is increasing and obeys a $\Delta_2$-condition, i.e., there is $C < \infty$ for which $\varphi(2t) \le C\varphi(t)$, $0 < t \le a$. Assume furthermore that $t \mapsto \varphi^2((\Theta^2)^{-1}(t))$ is concave. If, moreover, the singular numbers of $A^*A$ obey $s_{j+1}/s_j \ge \gamma > 0$, then there is a constant $c_\gamma > 0$ such that

(14) $e(A_\varphi(R), \delta) \ge c_\gamma R\varphi(\Theta^{-1}(\delta/R)), \quad 0 < \delta \le R\Theta(a).$

Proof. First, iterating the $\Delta_2$-condition if necessary, we can find $c_\gamma$ such that $\varphi(\gamma t) \ge c_\gamma\varphi(t)$, $0 < t \le a$. Secondly, by monotonicity of $\Theta$, given $\delta \le R\Theta(a)$, there is an index $i$ for which $\Theta^2(s_{i+1}) \le \delta^2/R^2 \le \Theta^2(s_i)$. Recall that by (11) the exact error is provided through the piece-wise linear spline $s$. Taking all the above facts into account we can estimate

$s(\delta^2/R^2) \ge s(\Theta^2(s_{i+1})) = \varphi^2(s_{i+1}) \ge \varphi^2(\gamma s_i) \ge c_\gamma^2\varphi^2(s_i) = c_\gamma^2\varphi^2\bigl((\Theta^2)^{-1}(\Theta^2(s_i))\bigr) \ge c_\gamma^2\varphi^2\bigl((\Theta^2)^{-1}(\delta^2/R^2)\bigr) = c_\gamma^2\varphi^2(\Theta^{-1}(\delta/R)),$

which allows us to complete the proof.

A sufficient condition for $\varphi^2\circ(\Theta^2)^{-1}$ to be concave is given in

Proposition 1. Suppose $\varphi$ is non-decreasing and twice differentiable on $(0, a)$. Then $\varphi^2\circ(\Theta^2)^{-1}$ is concave, provided $t \mapsto \log\varphi(t)$ is concave.

Sketch of the proof. Let $f(t) := \varphi^2((\Theta^2)^{-1}(t))$. Re-parametrization yields the implicit formula $f(t\varphi^2(t)) = \varphi^2(t)$. Differentiating this twice results in the following representation in terms of $s := \Theta^2(t)$:
$f''(s) = \frac{-\bigl((\varphi^2)'(t)\bigr)^2 + \bigl(\log\varphi^2(t)\bigr)''\,\varphi^4(t)}{\bigl((\Theta^2)'(t)\bigr)^3},$

which is negative if $(\log\varphi^2)'' \le 0$.

4. The general linear ill-posed problem

Let us recall the original equation $y_\delta = Ax + \delta\xi$. It is interesting to relate this to the symmetrized equation $A^*y_\delta = A^*Ax + \delta A^*\xi$; after letting $z_\delta := A^*y_\delta$ and $\zeta := A^*\xi$ we arrive at

(15) $z_\delta = A^*Ax + \delta\zeta.$

The advantage of (15) is the following: it is entirely defined within the Hilbert scale $X_\varphi$, $\varphi \in \mathcal I(0, a]$. But the noise $\zeta$ is now bounded in $X_{\sqrt t}$, as $\|\zeta\|_{\sqrt t} = \|A^*\xi\|_{\sqrt t} = \|\xi\|$ shows, the latter norm being the one in $Y$. Since $A$ is injective, its adjoint has dense range, such that the noise is not degenerate. Moreover, to any method $z_\delta \mapsto S(z_\delta)$ for solving equation (15) there corresponds a method for solving (1), letting $y_\delta \mapsto S(A^*y_\delta)$, which is seen to have the same error. On the other hand, the original problem is not simpler, since the best possible accuracy for problem (15) is the same as for the original one, which can be seen by an easy calculation. In this sense the problems given by equations (1) and (15) are equivalent.

Returning to (15), it is natural, and it has applications when studying random noise, a problem which will be treated elsewhere in [16], to extend to more general assumptions on the noise; precisely, we assume that $\|\zeta\|_\psi \le 1$ for some index function $\psi$. The description of the problem is complete after fixing the space $X_\nu$ in which the error will be measured. As a shorthand for the problem under consideration we will agree to write $(A^*A\colon X_\varphi \to X_\nu,\ A_\varphi(R),\ A_\psi,\ \delta)$, showing all parameters involved. The following two relations between the parameters are basic.

(1) The embedding $X_\varphi \hookrightarrow X_\nu$ is compact.
(2) The limit $\lim_{t\to 0} t\varphi(t)/\psi(t) = 0$.

As can be seen from the reasoning below, if the first assumption is violated, then the error does not tend to 0. Moreover, we note that $A^*A$ has a bounded inverse from $X_\psi$ to $X_{\psi/t}$. The second relation means that the embedding $X_\varphi \hookrightarrow X_{\psi/t}$ is compact, and our problem is really ill-posed.
Thus below we shall assume that the above relations are satisfied.

The respective degree of ill-posedness of the operator, corresponding to (6) in Section 2, is now given through $\psi/t$, which is the minimal index function for which the operator $A^*A$ is invertible. The same reasoning as in the proof of Theorem 1 yields the following

Corollary 2. Let us make the following assumptions, in addition to the relations above.

(1) The function $\varphi/\nu$ is increasing.
(2) The function $t\varphi/\psi$ is increasing.
(3) The function $t \mapsto (\varphi/\nu)^2\bigl((t^2\varphi^2/\psi^2)^{-1}(t)\bigr)$ is concave.

Then the following error bound holds true:

(16) $e(A^*A\colon X_\varphi \to X_\nu,\ A_\varphi(R),\ A_\psi,\ \delta) \le R\,(\varphi/\nu)\bigl((t\varphi/\psi)^{-1}(\delta/R)\bigr).$

Remark 2. Theorem 1 is of course a special case, with $\nu \equiv 1$ and $\psi(t) = \sqrt t$. For the important case $\nu \equiv 1$ we may provide the following sufficient condition for concavity.

Proposition 2. Suppose $\varphi$ and $\psi$ are twice differentiable on $(0, a)$. Under (1) and (2) the function $t \mapsto \varphi^2\bigl((t^2\varphi^2/\psi^2)^{-1}(t)\bigr)$ is concave, provided $\log\varphi$ is concave and $t \mapsto t^2/\psi^2(t)$ is convex.

Sketch of the proof. Let $f$ denote the function under consideration. If we furthermore abbreviate $g := t^2/\psi^2$ and $h := \varphi^2$, then we obtain the implicit representation $f(g(t)h(t)) = h(t)$. Differentiating this twice yields (suppressing the variable $t$)

$f''(gh)\bigl[(gh)'\bigr]^2 = \frac{1}{(gh)'}\bigl[2g'h^2(\log\varphi)'' - g''hh' - (h')^2g'\bigr],$

such that under the above assumptions $f''$ is non-positive.

Remark 3. Note that for the function $\psi(t) = \sqrt t$, the function $t \mapsto t^2/\psi^2(t)$ is just $t \mapsto t$, which is convex in a trivial manner.

We end this section with the following discussion. If two problems $(A^*A\colon X_\varphi \to X_\nu,\ A_\varphi(R),\ A_\psi,\ \delta)$ and $(B^*B\colon X_{\tilde\varphi} \to X_{\tilde\nu},\ A_{\tilde\varphi}(R),\ A_{\tilde\psi},\ \delta)$ are related via a monotone transformation $\theta$ for which $A^*A = \theta(B^*B)$, $\tilde\varphi = \varphi\circ\theta$ and $\tilde\psi = \psi\circ\theta$, then the best possible accuracies coincide. It is thus interesting to know whether a given ill-posed problem with operator, say $B\colon X \to Z$, mapping $X$ into some other Hilbert space $Z$, corresponds to some problem within the Hilbert scale generated by

$A^*A$. This is the case if $B$ admits a (monotonic) Schmidt representation $B = \sum_{j=1}^\infty \beta_j\langle\cdot, u_j\rangle z_j$, with the same orthonormal system $u_1, u_2, \dots$ as $A^*A$. Such a condition turned out to be useful to Mair and Ruymgaart [12]. As mentioned in Section 2, this allows one to reduce the original ill-posed problem to a diagonal one in $\ell^2$. If this is fulfilled, then we can assign $\theta(\beta_j) = s_j$, $j = 1, 2, \dots$. If this correspondence is increasing, then the above calculus applies.

5. Regularization

The above upper bounds are obtained by some abstract argument and are not supported by an explicit method. Hegland [6] and Tautenhahn [25] have indicated that certain specific methods, involving the index function $\varphi$, may be used to achieve the best possible accuracy as given in (12). It is, however, easy to design a method, say $S$, which realizes $e(A_\varphi(R), S, \delta) \le 2R\varphi(\Theta^{-1}(\delta/R))$. The considerations in Section 4 allow us to restrict our attention to self-adjoint problems (15), with noise in $X_{\sqrt t}$. To this end, given $\delta \le R\Theta(a)$, let $\alpha$ satisfy $\Theta(\alpha) = \delta/R$. Furthermore, let $N = N(\alpha) := \max\{j,\ s_j \ge \alpha\}$. It is then straightforward to see that

(17) $S_N(z_\delta) := \sum_{j=1}^N \frac{1}{s_j}\,\langle z_\delta, u_j\rangle u_j$

realizes $e(A_\varphi(R), S_N, \delta) \le R\varphi(\alpha) + \delta/\sqrt\alpha \le 2R\varphi(\alpha)$, from which the assertion follows. This reveals two things. First, up to a factor 2, the upper bound can be achieved without additional concavity assumptions. Second, the above method (17) requires complete knowledge of the operator, which is numerically infeasible. For this reason regularization methods which replace the above spectral cut-off by feasible functions of the operator $A^*A$ have been developed. The study of such regularizations within the framework of variable Hilbert scales is the scope of this section.
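The cut-off estimate above can be reproduced numerically. The sketch below sets up a hypothetical diagonal problem (singular values $s_j = j^{-2}$ and $\varphi(t) = \sqrt t$, so that $\Theta(t) = t$; these choices are illustrative, not from the text) and verifies that the reconstruction error of (17) stays below $2R\varphi(\alpha)$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy diagonal problem (hypothetical sizes): A*A has eigenvalues s_j = j^{-2},
# and the exact solution satisfies the source condition x_j = phi(s_j)*v_j
# with ||v|| <= R, for phi(t) = sqrt(t).
n, R, delta = 200, 1.0, 1e-3
s = 1.0 / np.arange(1, n + 1) ** 2
phi = np.sqrt
v = rng.standard_normal(n)
v *= R / np.linalg.norm(v)
x = phi(s) * v

# Symmetrized data z_delta = A*A x + delta*zeta with noise bounded in X_{sqrt t}.
zeta = rng.standard_normal(n)
zeta *= np.sqrt(s) / np.linalg.norm(zeta)     # makes ||zeta||_{sqrt t} = 1
z = s * x + delta * zeta

# Spectral cut-off (17): keep frequencies with s_j >= alpha, where
# Theta(alpha) = delta/R; since Theta(t) = t here, alpha = delta/R.
alpha = delta / R
keep = s >= alpha
x_rec = np.where(keep, z / s, 0.0)

err = np.linalg.norm(x_rec - x)
bound = 2 * R * phi(alpha)                    # the 2*R*phi(alpha) estimate
assert err <= bound
```

The noise part contributes at most $\delta/\sqrt\alpha$ and the truncated tail at most $R\varphi(\alpha)$, exactly the two summands balanced by $\Theta(\alpha) = \delta/R$.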
Specifically, we are interested in regularization methods given by some operator function $\alpha \mapsto g_\alpha(A^*A)$, $0 < \alpha \le a$; i.e., the approximation to $x \in A_\varphi$ is given by choosing some $\alpha = \alpha(\delta)$ and letting $x_{\alpha,\delta} := g_\alpha(A^*A)z_\delta\ (= g_\alpha(A^*A)A^*y_\delta)$.

By the spectral calculus, each real-valued function defined on $(0, a)$ can be assigned a respective function taking non-negative operators to self-adjoint ones. Therefore we may and do identify $g_\alpha$ with its real-valued function. Our aim is to discuss the interplay between a certain qualification of the regularization and the index function. A look at the error for solving (15) using $g_\alpha$ shows that for any $x \in X$ we have

(18) $x - g_\alpha(A^*A)z_\delta = x - g_\alpha(A^*A)A^*Ax - \delta g_\alpha(A^*A)\zeta.$

Therefore it is natural to assume that $\|I - g_\alpha(A^*A)A^*A\colon X \to X\| \le C$, $0 < \alpha \le a$ (even convergence to 0 as $\alpha \to 0$ will be required later on), and $\|g_\alpha(A^*A)\colon X_{\sqrt t} \to X\| \le C/\sqrt\alpha$, $0 < \alpha \le a$. The latter requirement is somewhat arbitrary. We want, however, that the optimal parameter $\alpha$ correspond to the one of the spectral cut-off, i.e., it should solve $\Theta(\alpha) = \delta/R$. Under this side restriction the asymptotics $\|g_\alpha(A^*A)\colon X_{\sqrt t} \to X\| \asymp 1/\sqrt\alpha$ is necessary. In terms of the real functions $g_\alpha$ these requirements can be expressed as follows.

Definition 1. A family $g_\alpha$, $0 < \alpha \le a$, is called a regularization, if there are constants $\gamma$ and $\gamma_*$ for which

$\sup_{0<\lambda\le a} |1 - \lambda g_\alpha(\lambda)| \le \gamma, \quad 0 < \alpha \le a,$

and

$\sup_{0<\lambda\le a} \sqrt\lambda\,|g_\alpha(\lambda)| \le \frac{\gamma_*}{\sqrt\alpha}, \quad 0 < \alpha \le a.$

The regularization $g_\alpha$ is said to have qualification $\rho$, for an increasing function $\rho\colon (0, a) \to \mathbb R_+$, if

(19) $\sup_{0<\lambda\le a} |1 - \lambda g_\alpha(\lambda)|\,\rho(\lambda) \le \gamma\rho(\alpha), \quad 0 < \alpha \le a.$

Remark 4. It is worthwhile to note that for any given $\rho$ a respective regularization of qualification $\rho$ can be constructed. Actually, the spectral cut-off (17), which corresponds to $g_\alpha(\lambda) := 1/\lambda$ for $\alpha \le \lambda \le a$ and $g_\alpha(\lambda) := 0$ for $0 < \lambda < \alpha$, has arbitrary qualification.
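Definition 1 can be checked numerically for Tikhonov regularization $g_\alpha(\lambda) = 1/(\alpha + \lambda)$ (the example appearing in Corollary 5): the residual is $1 - \lambda g_\alpha(\lambda) = \alpha/(\alpha+\lambda)$, giving $\gamma = 1$, one finds $\gamma_* = 1/2$, and the qualification is $\rho(\lambda) = \lambda$. A grid-based spot check with illustrative parameters:

```python
import numpy as np

# Numerical check of Definition 1 for Tikhonov regularization
# g_alpha(lambda) = 1/(alpha + lambda):
#   |1 - lambda*g| = alpha/(alpha + lambda)           -> gamma   = 1
#   sup sqrt(lambda)*g <= 1/(2*sqrt(alpha))           -> gamma_* = 1/2
#   |1 - lambda*g| * lambda = alpha*lambda/(alpha+lambda) <= alpha
#                                                     -> qualification rho = lambda
a = 1.0
lam = np.linspace(1e-8, a, 200_000)

def tikhonov(alpha, lam):
    return 1.0 / (alpha + lam)

for alpha in [1e-4, 1e-2, 1e-1]:
    g = tikhonov(alpha, lam)
    residual = np.abs(1.0 - lam * g)
    assert residual.max() <= 1.0
    assert (np.sqrt(lam) * g).max() <= 0.5 / np.sqrt(alpha) * (1 + 1e-12)
    assert (residual * lam).max() <= alpha * (1 + 1e-12)
```

The bound $\sup_\lambda \sqrt\lambda/(\alpha+\lambda) = 1/(2\sqrt\alpha)$ is attained at $\lambda = \alpha$, which is why the noise term of a regularization behaves like the cut-off term $\delta/\sqrt\alpha$.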

The classical regularizations are special cases of the general definition, using monomials of prescribed degree, thus

$\sup_{0<\lambda\le a} \lambda^q\,|1 - \lambda g_\alpha(\lambda)| \le \gamma_q\alpha^q, \quad \text{for every } 0 \le q \le p.$

In this case we speak of classical qualification of order $p$. We now turn to the study of the interplay between a qualification, say $\rho$, and properties of the index function, say $\varphi$.

Definition 2. We say that the qualification $\rho$ covers $\varphi$, if there is $c > 0$ such that

(20) $c\,\frac{\rho(\alpha)}{\varphi(\alpha)} \le \inf_{\alpha\le\lambda\le a} \frac{\rho(\lambda)}{\varphi(\lambda)}, \quad 0 < \alpha \le a.$

If this is the case, then we shall say that $\rho$ covers $\varphi$ with constant $c$. Formally, we need only the following slightly weaker assumption.

Lemma 1. The qualification $\rho$ covers $\varphi$, if there are $c > 0$ and $0 < t_0 \le a$ such that (20) is fulfilled with the infimum restricted to $\alpha \le \lambda \le t_0$ and constant $c$.

Proof. If $\alpha \le t_0 \le \lambda \le a$, then by monotonicity of $\rho$ and of $\varphi$ we conclude

$\frac{\rho(\lambda)}{\varphi(\lambda)} \ge \frac{\rho(t_0)}{\varphi(a)} \ge \frac{c\,\varphi(t_0)}{\varphi(a)}\,\frac{\rho(\alpha)}{\varphi(\alpha)},$

making use of (20) for $\alpha \le \lambda \le t_0$ only, to obtain the last estimate.

Remark 5. If the function $\lambda \mapsto \rho(\lambda)/\varphi(\lambda)$ is increasing, then (20) is certainly satisfied with $c = 1$. Since the infimum in (20) is certainly less than or equal to its value at $a$, necessarily $X_\rho \subset X_\varphi$. The importance of this notion is expressed in the following

Proposition 3. Let $\varphi$ be any non-decreasing index function and let $g_\alpha$ be a regularization of qualification $\rho$ that covers $\varphi$. Then

(21) $\sup_{0<\lambda\le a} |1 - \lambda g_\alpha(\lambda)|\,\varphi(\lambda) \le \frac{\gamma}{c}\,\varphi(\alpha), \quad \alpha \le a,$

where $\gamma$ is from (19) and $c$ from (20).

Proof. We introduce the function $\lambda \mapsto |1 - \lambda g_\alpha(\lambda)|\,\varphi(\lambda)$. We need to show that it is uniformly bounded by the right-hand side in (21), for any value of $\alpha$. We distinguish two cases. First, if $\lambda \le \alpha$, then (21) is

fulfilled by monotonicity of $\varphi$ with constant $\gamma$. Otherwise, if $\alpha \le \lambda \le a$, then we conclude

$|1 - \lambda g_\alpha(\lambda)|\,\varphi(\lambda) = |1 - \lambda g_\alpha(\lambda)|\,\rho(\lambda)\,\frac{\varphi(\lambda)}{\rho(\lambda)} \le \gamma\rho(\alpha)\sup_{\alpha\le\lambda\le a}\frac{\varphi(\lambda)}{\rho(\lambda)} \le \frac{\gamma}{c}\,\frac{\rho(\alpha)\varphi(\alpha)}{\rho(\alpha)} = \frac{\gamma}{c}\,\varphi(\alpha).$

The proof is complete.

It is interesting to know how a regularization acts if the qualification does not cover the actual smoothness. A look at the above proof reveals the following

Corollary 3. Let $\varphi$ be any non-decreasing index function and let $g_\alpha$ be a regularization of qualification $\rho$. If $X_\varphi \subset X_\rho$, then there is a constant $C < \infty$ for which

(22) $\sup_{0<\lambda\le a} |1 - \lambda g_\alpha(\lambda)|\,\varphi(\lambda) \le C\rho(\alpha), \quad \alpha \le a.$

We now state and prove the main result of this section.

Theorem 2. Let $\varphi$ be any increasing index function and let $\bar\alpha$ be chosen to satisfy

(23) $\sqrt{\bar\alpha}\,\varphi(\bar\alpha) = \delta/R.$

If $g_\alpha$ is any regularization of a qualification that covers $\varphi$ with constant $c$, then

(24) $e(A_\varphi(R), g_{\bar\alpha}, \delta) \le R\,(\gamma/c + \gamma_*)\,\varphi(\Theta^{-1}(\delta/R)), \quad 0 < \delta \le R\Theta(a).$

Proof. We analyze the error of $g_{\bar\alpha}$ with the above choice of $\bar\alpha$. Recall that for any $x \in X$ we have $x - x_{\bar\alpha,\delta} = x - g_{\bar\alpha}(A^*A)A^*Ax - \delta g_{\bar\alpha}(A^*A)\zeta$. Therefore, and since $x \in A_\varphi(R)$,

(25) $\|x - x_{\bar\alpha,\delta}\| \le R\,\|(I - g_{\bar\alpha}(A^*A)A^*A)\varphi(A^*A)\|_{X\to X} + \delta\,\|g_{\bar\alpha}(A^*A)\|_{X_{\sqrt t}\to X} \le R\sup_{0<\lambda\le a}|1 - \lambda g_{\bar\alpha}(\lambda)|\,\varphi(\lambda) + \delta\sup_{0<\lambda\le a}\sqrt\lambda\,|g_{\bar\alpha}(\lambda)|.$

By the qualification of $g_\alpha$ and Proposition 3, the first summand is bounded by $R(\gamma/c)\varphi(\bar\alpha)$. The second summand can be bounded by $\gamma_*\delta/\sqrt{\bar\alpha}$. This leads to

$\|x - x_{\bar\alpha,\delta}\| \le R(\gamma/c)\varphi(\bar\alpha) + \gamma_*\,\delta/\sqrt{\bar\alpha} = R\,(\gamma/c + \gamma_*)\,\varphi(\bar\alpha).$

Since $\bar\alpha = \Theta^{-1}(\delta/R)$, the proof of the theorem is complete. In view of Theorem 1 and Corollary 1 the above asymptotics cannot be beaten.

Remark 6. It is interesting to look at the respective result when measuring the error in $X_\nu$. If $\nu(t) \to 0$ as $t \to 0$, then we need regularization with less qualification! It needs to cover $\varphi/\nu$ only.

6. Adaptation to unknown source condition

To complete the picture of regularization we shall now discuss the topic of adaptation. Here the goal is to find a strategy for choosing the regularization parameter $\alpha$ without knowledge of $\varphi$. Additionally, the quality of the adaptive strategy will depend on some bound from the $\Delta_2$-condition; details are given below. This strategy will work successfully, uniformly for index functions which are uniformly covered by some known qualification. This is not true for the discrepancy principle: for example, Tikhonov regularization has qualification $\rho(\lambda) = \lambda$, while the discrepancy principle may provide an order-optimal choice of the regularization parameter only if the index function $\varphi(\lambda)$ is covered by $\sqrt\lambda$.

We propose a scheme, based on Lepskiĭ's original approach, which chooses the appropriate $\alpha$ from a finite set of possible parameters, for a given regularization with known constant $\gamma$ from (19). To this end we recall the estimate (24), here with $R = 1$:

$e(A_\varphi, g_{\bar\alpha}, \delta) \le (\gamma/c + \gamma_*)\,\varphi(\Theta^{-1}(\delta)), \quad 0 < \delta \le \Theta(a).$

Let us denote $C_\gamma := \max\{\gamma/c, \gamma_*\}$. Precisely, fix $q > 1$ and $\alpha_0 > \delta^2$, which is certainly necessary to have a nontrivial error bound. Now let $\alpha_k := \alpha_0 q^k$, $k = 1, 2, \dots, n$, with $n$ such that $\alpha_{n-1} \le a < \alpha_n$, and put

(26) $\Delta_q := \{\alpha_k,\ k = 0, 1, \dots, n\}.$

The cardinality $n$ is of order $\log(a/\delta^2)/\log q \asymp \log(1/\delta)$. Now the strategy consists in computing successively $x_{\alpha_0,\delta}, x_{\alpha_1,\delta}, \dots$ as long as

(27) $\|x_{\alpha_i,\delta} - x_{\alpha_{i-1},\delta}\| \le 4C_\gamma\,\frac{\delta}{\sqrt{\alpha_{i-1}}}.$

It terminates with

(28) $\bar\alpha := \max\Bigl\{\alpha_i:\ \|x_{\alpha_i,\delta} - x_{\alpha_{i-1},\delta}\| \le 4C_\gamma\,\frac{\delta}{\sqrt{\alpha_{i-1}}}\Bigr\}.$

Remark 7. As described above, this is an idealized situation, since in general the involved norms might not be computable. However, in practical situations the regularization is based on discretization, where these norms often can be computed.

In order to study the properties of the finally chosen regularization $x_{\bar\alpha,\delta}$ we need to introduce

(29) $\alpha^* := \max\{\alpha \in \Delta_q,\ \Theta(\alpha) \le \delta\},$

which is not accessible, since $\varphi$, and thus $\Theta$, are unknown. We note that by this definition $\Theta(\alpha^*) \le \delta$, while $\Theta(q\alpha^*) > \delta$, which will be important below.

Proposition 4. Fix any $0 < c \le 1$ and let $C_{\gamma,q} := 2C_\gamma\Bigl(1 + \frac{2\sqrt q}{\sqrt q - 1}\Bigr)$. If $\varphi$ is covered by $\rho$ with constant $c$, then the following assertions are true, uniformly for $x \in A_\varphi$.

(1) $\alpha^* \le \bar\alpha$.
(2) $\|x - x_{\bar\alpha,\delta}\| \le C_{\gamma,q}\,\delta/\sqrt{\alpha^*}$.

Proof. First, by construction, condition (27) is satisfied for $\alpha^*$: indeed, $\alpha^* = \alpha_l$ for some $l$, hence

$\|x_{\alpha^*,\delta} - x_{\alpha_{l-1},\delta}\| \le \|x - x_{\alpha_l,\delta}\| + \|x - x_{\alpha_{l-1},\delta}\| \le C_\gamma\Bigl(\varphi(\alpha_l) + \frac{\delta}{\sqrt{\alpha_l}} + \varphi(\alpha_{l-1}) + \frac{\delta}{\sqrt{\alpha_{l-1}}}\Bigr) \le 4C_\gamma\,\frac{\delta}{\sqrt{\alpha_{l-1}}},$

by monotonicity. Thus $\alpha^* \le \bar\alpha$. Also, $\bar\alpha = \alpha_m$ for some $m \ge l$. Using the triangle inequality successively, we arrive at

$\|x - x_{\bar\alpha,\delta}\| \le \|x - x_{\alpha^*,\delta}\| + \sum_{j=1}^{m-l} \|x_{\alpha_{m-j+1},\delta} - x_{\alpha_{m-j},\delta}\| \le \|x - x_{\alpha^*,\delta}\| + 4C_\gamma\,\delta\sum_{j=1}^{m-l} \frac{1}{\sqrt{\alpha_{m-j}}} \le C_\gamma\Bigl(\varphi(\alpha^*) + \frac{\delta}{\sqrt{\alpha^*}}\Bigr) + 4C_\gamma\,\frac{\delta}{\sqrt{\alpha^*}}\sum_{j=1}^{m-l} \frac{1}{\sqrt{q^{\,m-l-j}}} \le 2C_\gamma\,\frac{\delta}{\sqrt{\alpha^*}} + 4C_\gamma\,\frac{\delta}{\sqrt{\alpha^*}}\,\frac{\sqrt q}{\sqrt q - 1} = C_{\gamma,q}\,\frac{\delta}{\sqrt{\alpha^*}}.$

The proof is complete.

The adaptive strategy described above provides the optimal order of accuracy, uniformly over the following class of index functions. Given $D < \infty$, let

$\mathcal F_{c,D} := \{\varphi,\ \varphi(qt) \le D\varphi(t),\ t > 0,\ \varphi \text{ is covered by } \rho \text{ with constant } c\}.$

Theorem 3. On the class $\mathcal F_{c,D}$ the adaptive strategy provides the following error bound, uniformly for $x \in A_\varphi$:

(30) $\|x - x_{\bar\alpha,\delta}\| \le C_{\gamma,q}\,D\sqrt q\,\varphi(\Theta^{-1}(\delta)), \quad 0 < \delta \le \Theta(a).$

Proof. By Proposition 4 we can conclude

$\|x - x_{\bar\alpha,\delta}\| \le C_{\gamma,q}\,\frac{\delta}{\sqrt{\alpha^*}} = C_{\gamma,q}\sqrt q\,\frac{\delta}{\sqrt{q\alpha^*}} \le C_{\gamma,q}\sqrt q\,\varphi(q\alpha^*) \le C_{\gamma,q}\sqrt q\,D\varphi(\alpha^*) \le C_{\gamma,q}\sqrt q\,D\varphi(\Theta^{-1}(\delta)),$

which proves (30).

Remark 8. In the classical Hilbert scales, when smoothness is measured in terms of $\varphi(t) := t^\nu$ and the regularization is classical with $\rho(t) = t^\mu$, then $\rho$ covers $\varphi$ if and only if $\mu \ge \nu$. Moreover, the respective constant $c$ equals 1. The $\Delta_2$-conditions for all such $\varphi$ are then satisfied with $D := q^\mu$, which allows one to apply the adaptive strategy if the smoothness is unknown but an upper bound is given. Again, it is interesting to know how this adaptation proceeds if the qualification of the chosen regularization does not cover the actual smoothness $\varphi$.
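A minimal sketch of the strategy (26)-(28) on a toy diagonal problem may be helpful; the sizes, seed and source element below are hypothetical, chosen only for illustration. Tikhonov approximants are computed on the geometric grid and the largest grid point passing the pairwise test is kept:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy diagonal problem (hypothetical data): eigenvalues s_j = j^{-2} of A*A,
# source condition x = phi(A*A) v with phi(t) = sqrt(t), ||v|| = 1 (R = 1).
n, delta, q, C = 400, 1e-3, 1.5, 1.0
s = 1.0 / np.arange(1, n + 1) ** 2
v = rng.standard_normal(n)
v /= np.linalg.norm(v)
x = np.sqrt(s) * v

zeta = rng.standard_normal(n)
zeta *= np.sqrt(s) / np.linalg.norm(zeta)      # noise bounded in X_{sqrt t}
z = s * x + delta * zeta                        # data of the symmetrized problem

def tikhonov(alpha):
    return z / (alpha + s)

# Geometric grid alpha_k = alpha_0 * q^k with alpha_0 > delta^2, up past a = 1.
n_steps = int(np.log(1.0 / (2 * delta ** 2)) / np.log(q)) + 2
alphas = [2 * delta ** 2 * q ** k for k in range(n_steps)]

# Go up the grid as long as the pairwise test (27) holds; keep the last winner.
chosen, prev = alphas[0], tikhonov(alphas[0])
for a_prev, a_cur in zip(alphas, alphas[1:]):
    cur = tikhonov(a_cur)
    if np.linalg.norm(cur - prev) > 4 * C * delta / np.sqrt(a_prev):
        break
    chosen, prev = a_cur, cur

err = np.linalg.norm(prev - x)
```

No knowledge of $\varphi$ enters the loop; only the noise level $\delta$ and the constant $C_\gamma$ are used, which is the whole point of the adaptation.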

Corollary 4. Suppose the qualification $\rho$ of the chosen regularization does not cover $\varphi$. If $X_\varphi \subset X_\rho$, then the adaptive strategy automatically yields

(31) $\|x - x_{\bar\alpha,\delta}\| \le 6\gamma C\rho(\tilde\Theta^{-1}(\delta)), \quad 0 < \delta \le \tilde\Theta(a),$

uniformly for $x \in A_\varphi$, where $C$ is the embedding constant and $\tilde\Theta(t) := \sqrt t\,\rho(t)$. Thus $x_{\bar\alpha,\delta}$ provides the optimal order on the class $A_\rho$.

7. Example: p-concave source conditions

We now describe a class of source conditions obeying certain concavity properties. This allows us to prove that classical regularization with appropriate qualification provides the optimal order of approximation. The following notion generalizes concavity of a real function. Let $p \ge 0$ be any real number.

Definition 3. A function $\varphi\colon \mathbb R_+ \to \mathbb R_+$ is called concave of order $p$ on the interval $(0, t_0)$, if $t \mapsto \varphi(t)/t^p$ is non-decreasing and concave for $0 < t \le t_0$.

For $p = 0$ this resembles the notion of monotonicity and concavity (near 0). The following class of functions covers all cases studied so far.

Example. Given some $p \ge 0$ and $\mu \ge 0$, we let

$\varphi_{p,\mu}(t) := t^p \log^{-\mu}(1/t), \quad t > 0.$

It is straightforward to check that each $\varphi_{p,\mu}$ is concave of order $p$ on $(0, e^{-(\mu+1)})$, provided that at least one of the parameters $p$ or $\mu$ is positive. The following result shows that Theorem 1 is applicable for such source conditions.

Proposition 5. Suppose $\varphi$ is twice differentiable on $(0, a)$. If for some $p$ the function $\varphi$ is concave of order $p$ on $(0, t_0)$, then $t \mapsto \varphi^2((\Theta^2)^{-1}(t))$ is concave for $t \le t_0$.

Proof. Suppose $\varphi$ is concave of order $p$ on $(0, t_0)$. Since it is the product of a concave index function with the monomial $t^p$, $\log\varphi$ is concave. Thus Proposition 1 applies.

For $p$-concave functions we state the following basic observation.

Lemma 2. Suppose $\varphi$ is concave of order $p$ on $(0, t_0)$. Then it is covered by any classical regularization of order $p + 1$.

Proof. By assumption, since $\varphi_p\colon t \mapsto \varphi(t)/t^p$ is concave, we have $\varphi_p(s)/s \ge \varphi_p(t)/t$ whenever $0 < s \le t$. Thus the function $t \mapsto t^{p+1}/\varphi(t)$ is monotonically increasing, which ensures property (20) with constant $c = 1$.

We conclude our study with the following discussion. First, for the index functions from the above example, we can provide the asymptotically optimal error and indicate the respective parameter choice for appropriate regularization. Let us abbreviate $A_{p,\mu} := A_{\varphi_{p,\mu}}$.

Theorem 4. Let $0 \le p < \infty$ and $\mu > 0$.

(1) As $\delta \to 0$ we have

$e(A_{p,\mu}, \delta) = \Bigl((\delta^2)^p \log^{-\mu}\bigl((1/\delta^2)^{1/(2p+1)}\bigr)\Bigr)^{1/(2p+1)} (1 + O(\delta)).$

(2) Any regularization of qualification at least $p + 1$ with the choice of

$\alpha := \Bigl(\delta^2 \log^{2\mu}\bigl((1/\delta^2)^{1/(2p+1)}\bigr)\Bigr)^{1/(2p+1)}$

provides the optimal order of approximation.

The proof is based on the following assertions, which can easily be verified.

Lemma 3. Let $q, \mu > 0$ and introduce $\psi_{q,\mu}(s) := s^{1/q} \log^{\mu/q}(1/s^{1/q})$, $s > 0$. For the inverse function $\varphi_{q,\mu}^{-1}$ we have

$\lim_{s\to 0} \frac{\varphi_{q,\mu}^{-1}(s)}{\psi_{q,\mu}(s)} = 1.$

Moreover, if $\Theta_{p,\mu}(t) := \sqrt t\,\varphi_{p,\mu}(t)$, $t > 0$, then

$\lim_{\delta\to 0} \frac{\varphi_{p,\mu}^{2p+1}\bigl(\Theta_{p,\mu}^{-1}(\delta)\bigr)}{(\delta^2)^p \log^{-\mu}\bigl((1/\delta^2)^{1/(2p+1)}\bigr)} = 1.$

Remark 9. The assertions of Theorem 4 extend naturally to $\mu = 0$, yielding the classical estimates. For $\mu > 0$ and $p = 0$ this provides the asymptotic error $e(A_{0,\mu}, \delta) = \log^{-\mu}(1/\delta^2)(1 + O(\delta))$. Also, Tikhonov regularization with $\alpha := \delta^2\log^{2\mu}(1/\delta^2)$ provides the optimal order. However, the error is not very sensitive with respect to over-smoothing, i.e., choosing $\alpha$ too large, for instance $\alpha \asymp \delta^r$ for $0 < r < 2$; such a choice still provides the optimal order. This is consistent with [8] and [22].
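Both the order-$p$ concavity claimed in the Example and the monotonicity used in the proof of Lemma 2 can be spot-checked numerically; the parameters $p = 1$, $\mu = 2$ below are illustrative choices, not fixed by the text:

```python
import numpy as np

# Spot check for phi_{p,mu}(t) = t^p * log(1/t)^(-mu):
#  - t^(p+1)/phi_{p,mu}(t) = t * log(1/t)^mu should increase on (0, e^{-(mu+1)}),
#    which is the covering property (20) with c = 1 (Lemma 2);
#  - phi_{p,mu}(t)/t^p = log(1/t)^(-mu) should be non-decreasing and concave
#    there (order-p concavity of the Example).
p, mu = 1.0, 2.0
t = np.linspace(1e-6, np.exp(-(mu + 1.0)), 10_000)
phi = t ** p * np.log(1.0 / t) ** (-mu)

ratio = t ** (p + 1) / phi                  # = t * log(1/t)^mu
assert np.all(np.diff(ratio) > 0)           # rho(t) = t^(p+1) covers phi, c = 1

core = phi / t ** p                         # = log(1/t)^(-mu)
assert np.all(np.diff(core) > 0)            # non-decreasing ...
assert np.all(np.diff(core, 2) < 1e-12)     # ... and concave (2nd differences)
```

The right endpoint $e^{-(\mu+1)}$ is sharp here: beyond it the second derivative of $\log^{-\mu}(1/t)$ changes sign, so the grid deliberately stops there.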

Finally, we state an immediate consequence for concave source conditions.

Corollary 5. Tikhonov regularization ḡ_α(λ) := 1/(α + λ), used with ᾱ determined from Θ(ᾱ) = δ/R, is optimal uniformly for all source conditions given by concave index functions. With this regularization parameter its error is bounded by

(32) e(A_ϕ(R), ḡ_ᾱ, δ) ≤ (3/2) R ϕ(Θ^{−1}(δ/R)).

Moreover, since concave functions belong to F(1, q), the adaptive strategy applies and adapts to the optimal rate of convergence.

References

[1] A. B. Bakushinskiĭ. On the rate of convergence of iterative processes for nonlinear operator equations. Zh. Vychisl. Mat. Mat. Fiz., 38(4).
[2] Peter Deuflhard, Heinz W. Engl, and Otmar Scherzer. A convergence analysis of iterative methods for the solution of nonlinear ill-posed problems under affinely invariant conditions. Inverse Problems, 14(5).
[3] Helmut Gfrerer. An a posteriori parameter choice for ordinary and iterated Tikhonov regularization of ill-posed problems leading to optimal convergence rates. Math. Comp., 49(180), S5–S12.
[4] Alexander Goldenshluger and Sergei V. Pereverzev. Adaptive estimation of linear functionals in Hilbert scales from indirect white noise observations. Probab. Theory Related Fields, 118(2).
[5] C. W. Groetsch. The Theory of Tikhonov Regularization for Fredholm Equations of the First Kind, volume 105 of Research Notes in Mathematics. Pitman (Advanced Publishing Program), Boston, MA.
[6] Markus Hegland. An optimal order regularization method which does not use additional smoothness assumptions. SIAM J. Numer. Anal., 29(5).
[7] Markus Hegland. Variable Hilbert scales and their interpolation inequalities with applications to Tikhonov regularization. Appl. Anal., 59(1-4).
[8] Thorsten Hohage. Regularization of exponentially ill-posed problems. Numer. Funct. Anal. Optim., 21(3-4).
[9] V. Ivanov and T. Korolyuk. Error estimates for solutions of incorrectly posed linear problems. Zh. Vychisl. Mat. i Mat. Fiz., 9:30–41.
[10] O. V. Lepskiĭ. A problem of adaptive estimation in Gaussian white noise. Teor. Veroyatnost. i Primenen., 35(3).
[11] B. A. Mair. Tikhonov regularization for finitely and infinitely smoothing operators. SIAM J. Math. Anal., 25(1).
[12] Bernard A. Mair and Frits H. Ruymgaart. Statistical inverse estimation in Hilbert scales. SIAM J. Appl. Math., 56(5).
[13] J. T. Marti. An algorithm for computing minimum norm solutions of Fredholm integral equations of the first kind. SIAM J. Numer. Anal., 15(6), 1978.

[14] Peter Mathé and Sergei V. Pereverzev. Optimal discretization of inverse problems in Hilbert scales. Regularization and self-regularization of projection methods. SIAM J. Numer. Anal., 38(6).
[15] Peter Mathé and Sergei V. Pereverzev. Discretization strategy for ill-posed problems in variable Hilbert scales. In preparation.
[16] Peter Mathé and Sergei V. Pereverzev. Optimal error of ill-posed problems in variable Hilbert scales under the presence of white noise. In preparation.
[17] Avraham A. Melkman and Charles A. Micchelli. Optimal estimation of linear operators in Hilbert spaces from inaccurate data. SIAM J. Numer. Anal., 16(1):87–105.
[18] C. A. Micchelli and T. J. Rivlin. A survey of optimal recovery. In Optimal Estimation in Approximation Theory (Proc. Internat. Sympos., Freudenstadt, 1976). Plenum, New York.
[19] V. A. Morozov. On the solution of functional equations by the method of regularization. Soviet Math. Dokl., 7.
[20] M. T. Nair, E. Schock, and U. Tautenhahn. Morozov's discrepancy principle under general source conditions. Univ. Kaiserslautern.
[21] M. Nussbaum and S. V. Pereverzev. The degree of ill-posedness in stochastic and deterministic noise models. Preprint 509, WIAS Berlin.
[22] Sergei Pereverzev and Eberhard Schock. Morozov's discrepancy principle for Tikhonov regularization of severely ill-posed problems in finite-dimensional subspaces. Numer. Funct. Anal. Optim., 21(7-8).
[23] David L. Phillips. A technique for the numerical solution of certain integral equations of the first kind. J. Assoc. Comput. Mach., 9:84–97.
[24] U. Tautenhahn and U. Hämarik. The use of monotonicity for choosing the regularization parameter in ill-posed problems. Inverse Problems, 15(6).
[25] Ulrich Tautenhahn. Optimality for ill-posed problems under general source conditions. Numer. Funct. Anal. Optim., 19(3-4).
[26] A. N. Tikhonov. On the solution of incorrectly put problems and the regularisation method. In Outlines Joint Sympos. Partial Differential Equations (Novosibirsk, 1963). Acad. Sci. USSR Siberian Branch, Moscow.
[27] Alexandre Tsybakov. On the best rate of adaptive estimation in some inverse problems. C. R. Acad. Sci. Paris Sér. I Math., 330(9).
[28] G. M. Vaĭnikko and A. Yu. Veretennikov. Iteratsionnye protsedury v nekorrektnykh zadachakh [Iteration Procedures in Ill-Posed Problems]. Nauka, Moscow.
[29] Grace Wahba. Practical approximate solutions to linear operator equations when the data are noisy. SIAM J. Numer. Anal., 14(4).

Weierstraß Institute for Applied Analysis and Stochastics, Mohrenstraße 39, D-10117 Berlin, Germany
E-mail address: mathe@wias-berlin.de

National Academy of Sciences of Ukraine, Institute of Mathematics, Tereshenkivska Str. 3, Kiev 4, Ukraine
E-mail address: serg-p@mail.kar.net


Kramers formula for chemical reactions in the context of Wasserstein gradient flows. Michael Herrmann. Mathematical Institute, University of Oxford eport no. OxPDE-/8 Kramers formula for chemical reactions in the context of Wasserstein gradient flows by Michael Herrmann Mathematical Institute, University of Oxford & Barbara Niethammer Mathematical

More information

2. Dual space is essential for the concept of gradient which, in turn, leads to the variational analysis of Lagrange multipliers.

2. Dual space is essential for the concept of gradient which, in turn, leads to the variational analysis of Lagrange multipliers. Chapter 3 Duality in Banach Space Modern optimization theory largely centers around the interplay of a normed vector space and its corresponding dual. The notion of duality is important for the following

More information

A strongly polynomial algorithm for linear systems having a binary solution

A strongly polynomial algorithm for linear systems having a binary solution A strongly polynomial algorithm for linear systems having a binary solution Sergei Chubanov Institute of Information Systems at the University of Siegen, Germany e-mail: sergei.chubanov@uni-siegen.de 7th

More information

MODULUS OF CONTINUITY OF THE DIRICHLET SOLUTIONS

MODULUS OF CONTINUITY OF THE DIRICHLET SOLUTIONS MODULUS OF CONTINUITY OF THE DIRICHLET SOLUTIONS HIROAKI AIKAWA Abstract. Let D be a bounded domain in R n with n 2. For a function f on D we denote by H D f the Dirichlet solution, for the Laplacian,

More information

Characterization of half-radial matrices

Characterization of half-radial matrices Characterization of half-radial matrices Iveta Hnětynková, Petr Tichý Faculty of Mathematics and Physics, Charles University, Sokolovská 83, Prague 8, Czech Republic Abstract Numerical radius r(a) is the

More information

COMMON COMPLEMENTS OF TWO SUBSPACES OF A HILBERT SPACE

COMMON COMPLEMENTS OF TWO SUBSPACES OF A HILBERT SPACE COMMON COMPLEMENTS OF TWO SUBSPACES OF A HILBERT SPACE MICHAEL LAUZON AND SERGEI TREIL Abstract. In this paper we find a necessary and sufficient condition for two closed subspaces, X and Y, of a Hilbert

More information

Geometry and topology of continuous best and near best approximations

Geometry and topology of continuous best and near best approximations Journal of Approximation Theory 105: 252 262, Geometry and topology of continuous best and near best approximations Paul C. Kainen Dept. of Mathematics Georgetown University Washington, D.C. 20057 Věra

More information

PACKING-DIMENSION PROFILES AND FRACTIONAL BROWNIAN MOTION

PACKING-DIMENSION PROFILES AND FRACTIONAL BROWNIAN MOTION PACKING-DIMENSION PROFILES AND FRACTIONAL BROWNIAN MOTION DAVAR KHOSHNEVISAN AND YIMIN XIAO Abstract. In order to compute the packing dimension of orthogonal projections Falconer and Howroyd 997) introduced

More information

An asymptotic ratio characterization of input-to-state stability

An asymptotic ratio characterization of input-to-state stability 1 An asymptotic ratio characterization of input-to-state stability Daniel Liberzon and Hyungbo Shim Abstract For continuous-time nonlinear systems with inputs, we introduce the notion of an asymptotic

More information

Regularization in Banach Space

Regularization in Banach Space Regularization in Banach Space Barbara Kaltenbacher, Alpen-Adria-Universität Klagenfurt joint work with Uno Hämarik, University of Tartu Bernd Hofmann, Technical University of Chemnitz Urve Kangro, University

More information

Means of unitaries, conjugations, and the Friedrichs operator

Means of unitaries, conjugations, and the Friedrichs operator J. Math. Anal. Appl. 335 (2007) 941 947 www.elsevier.com/locate/jmaa Means of unitaries, conjugations, and the Friedrichs operator Stephan Ramon Garcia Department of Mathematics, Pomona College, Claremont,

More information

The small ball property in Banach spaces (quantitative results)

The small ball property in Banach spaces (quantitative results) The small ball property in Banach spaces (quantitative results) Ehrhard Behrends Abstract A metric space (M, d) is said to have the small ball property (sbp) if for every ε 0 > 0 there exists a sequence

More information

Ann. Polon. Math., 95, N1,(2009),

Ann. Polon. Math., 95, N1,(2009), Ann. Polon. Math., 95, N1,(29), 77-93. Email: nguyenhs@math.ksu.edu Corresponding author. Email: ramm@math.ksu.edu 1 Dynamical systems method for solving linear finite-rank operator equations N. S. Hoang

More information

Multiresolution analysis by infinitely differentiable compactly supported functions. N. Dyn A. Ron. September 1992 ABSTRACT

Multiresolution analysis by infinitely differentiable compactly supported functions. N. Dyn A. Ron. September 1992 ABSTRACT Multiresolution analysis by infinitely differentiable compactly supported functions N. Dyn A. Ron School of of Mathematical Sciences Tel-Aviv University Tel-Aviv, Israel Computer Sciences Department University

More information

Iowa State University. Instructor: Alex Roitershtein Summer Exam #1. Solutions. x u = 2 x v

Iowa State University. Instructor: Alex Roitershtein Summer Exam #1. Solutions. x u = 2 x v Math 501 Iowa State University Introduction to Real Analysis Department of Mathematics Instructor: Alex Roitershtein Summer 015 Exam #1 Solutions This is a take-home examination. The exam includes 8 questions.

More information

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra.

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra. DS-GA 1002 Lecture notes 0 Fall 2016 Linear Algebra These notes provide a review of basic concepts in linear algebra. 1 Vector spaces You are no doubt familiar with vectors in R 2 or R 3, i.e. [ ] 1.1

More information

Compressibility of Infinite Sequences and its Interplay with Compressed Sensing Recovery

Compressibility of Infinite Sequences and its Interplay with Compressed Sensing Recovery Compressibility of Infinite Sequences and its Interplay with Compressed Sensing Recovery Jorge F. Silva and Eduardo Pavez Department of Electrical Engineering Information and Decision Systems Group Universidad

More information

Packing-Dimension Profiles and Fractional Brownian Motion

Packing-Dimension Profiles and Fractional Brownian Motion Under consideration for publication in Math. Proc. Camb. Phil. Soc. 1 Packing-Dimension Profiles and Fractional Brownian Motion By DAVAR KHOSHNEVISAN Department of Mathematics, 155 S. 1400 E., JWB 233,

More information

1 Lyapunov theory of stability

1 Lyapunov theory of stability M.Kawski, APM 581 Diff Equns Intro to Lyapunov theory. November 15, 29 1 1 Lyapunov theory of stability Introduction. Lyapunov s second (or direct) method provides tools for studying (asymptotic) stability

More information

Global Maxwellians over All Space and Their Relation to Conserved Quantites of Classical Kinetic Equations

Global Maxwellians over All Space and Their Relation to Conserved Quantites of Classical Kinetic Equations Global Maxwellians over All Space and Their Relation to Conserved Quantites of Classical Kinetic Equations C. David Levermore Department of Mathematics and Institute for Physical Science and Technology

More information

Near-Potential Games: Geometry and Dynamics

Near-Potential Games: Geometry and Dynamics Near-Potential Games: Geometry and Dynamics Ozan Candogan, Asuman Ozdaglar and Pablo A. Parrilo September 6, 2011 Abstract Potential games are a special class of games for which many adaptive user dynamics

More information

Part V. 17 Introduction: What are measures and why measurable sets. Lebesgue Integration Theory

Part V. 17 Introduction: What are measures and why measurable sets. Lebesgue Integration Theory Part V 7 Introduction: What are measures and why measurable sets Lebesgue Integration Theory Definition 7. (Preliminary). A measure on a set is a function :2 [ ] such that. () = 2. If { } = is a finite

More information

Part III. 10 Topological Space Basics. Topological Spaces

Part III. 10 Topological Space Basics. Topological Spaces Part III 10 Topological Space Basics Topological Spaces Using the metric space results above as motivation we will axiomatize the notion of being an open set to more general settings. Definition 10.1.

More information