BLOCK THRESHOLDING AND SHARP ADAPTIVE ESTIMATION IN SEVERELY ILL-POSED INVERSE PROBLEMS
Theory of Probability and Its Applications, Vol. 48, 2003

CAVALIER L., GOLUBEV Y., LEPSKI O., TSYBAKOV A. BLOCK THRESHOLDING AND SHARP ADAPTIVE ESTIMATION IN SEVERELY ILL-POSED INVERSE PROBLEMS

Abstract (translated from the Russian). We consider the problem of solving linear operator equations from noisy observations, under the assumption that the singular values of the operator decrease exponentially and that the Fourier transform of the corresponding solution is also exponentially smooth. We propose an estimator of the solution based on a running version of block thresholding in the space of Fourier coefficients. It is shown that this estimator is sharp adaptive to the unknown degree of smoothness of the solution.

Key words and phrases: linear operator equation, ill-posed problems, noisy observations.

1. Introduction. The problem of solving linear operator equations from noisy observations has been extensively studied in the literature. Among the first to develop a statistical approach to this problem were Sudakov and Khalfin [17] and Bakushinskii [1]. For a survey of recent results we refer to Mathé and Pereverzev [14], Goldenshluger and Pereverzev [7], and Cavalier and Tsybakov [3]. A usual statistical framework in this context is as follows. Let $K\colon H \to H$ be a known linear operator on a Hilbert space $H$ with inner product $(\cdot\,,\cdot)$ and norm $\|\cdot\|$. The problem is to estimate an unknown function $f \in H$ from the indirect observations

$$ Y(g) = (Kf, g) + \varepsilon\,\xi(g), \qquad g \in H, \eqno(1) $$

where $0 < \varepsilon < 1$ and $\xi(g)$ is a zero-mean Gaussian random process indexed by $H$ on a probability space $(\Omega, \mathcal{A}, \mathbf{P})$, such that $\mathbf{E}\,\xi(g)\xi(v) = (g, v)$ for any $g, v \in H$, where $\mathbf{E}$ is the expectation w.r.t. $\mathbf{P}$. Relation (1) defines a Gaussian white noise model.

CMI, Université Aix-Marseille 1, 39 rue F. Joliot-Curie, F Marseille Cedex, France. Laboratoire de Probabilités et Modèles Aléatoires, Université Paris 6, 4 pl. Jussieu, BP 188, F Paris Cedex 05, France.
Instead of dealing with all the observations $Y(g)$, $g \in H$, it is usually sufficient to consider a sequence of values $\{Y(g_k)\}_{k=1}^\infty$ for some orthonormal basis $\{g_k\}_{k=1}^\infty$. The corresponding random errors $\xi(g_k) = \xi_k$ are i.i.d. standard Gaussian random variables. We assume that the basis $\{g_k\}$ is such that $(Kf, g_k) = b_k \theta_k$, where $b_k \geq 0$ are real numbers and $\theta_k = (f, \varphi_k)$ are the Fourier coefficients of $f$ w.r.t. some orthonormal basis $\{\varphi_k\}$ (not necessarily $\varphi_k = g_k$). A typical example when this occurs is when the operator $K$ admits a singular value decomposition:

$$ K\varphi_k = b_k g_k, \qquad K^* g_k = b_k \varphi_k, \eqno(2) $$

where $K^*$ is the adjoint of $K$, the $b_k$ are singular values, $\{g_k\}$ is an orthonormal basis in $\mathrm{Range}(K)$, and $\{\varphi_k\}$ is the corresponding orthonormal basis in $H$. Under these assumptions, one gets a discrete sequence of observations derived from (1):

$$ y_k = b_k \theta_k + \varepsilon \xi_k, \qquad k = 1, 2, \ldots, \eqno(3) $$

where $y_k = Y(g_k)$ and the $\xi_k$ are i.i.d. standard Gaussian random variables. The problem of estimating $f$ reduces to estimating the sequence $\{\theta_k\}_{k=1}^\infty$ from the observations (3). The model (3) also describes other problems, such as the estimation of a signal from direct observations with correlated data (see Johnstone [11]). Let $\hat\theta = (\hat\theta_1, \hat\theta_2, \ldots)$ be an estimator of $\theta = (\theta_1, \theta_2, \ldots)$ based on the data (3). Then $f$ is estimated by $\hat f = \sum_k \hat\theta_k \varphi_k$. The mean integrated squared error of the estimator $\hat f$ is

$$ \mathbf{E}\,\|\hat f - f\|^2 = \mathbf{E}_\theta \sum_{k=1}^\infty (\hat\theta_k - \theta_k)^2 \stackrel{\mathrm{def}}{=} R_\varepsilon(\hat\theta, \theta), \eqno(4) $$

where $\mathbf{E}_\theta$ denotes the expectation w.r.t. the distribution of the data in the model (3). In this paper we consider the problem of estimation of $\theta$ in the model (3) using the mean-squared risk (4). One can characterize linear inverse problems by the difficulty of the operator, i.e., with our notation, by the behavior of the $b_k$'s. If $b_k \to 0$ as $k \to \infty$, the problem is ill-posed.
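To make the sequence-space model (3) concrete, here is a small numerical sketch. All values in it are hypothetical illustrations (a noise level $\varepsilon$, singular values $b_k = e^{-k/2}$, coefficients $\theta_k = e^{-k}$), not quantities from the paper; it shows why naive inversion $y_k/b_k$ fails once $b_k$ is small relative to $\varepsilon$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setting: exponentially decaying singular values (a severely
# ill-posed operator) and an exponentially smooth signal theta.
K_MAX = 50
eps = 1e-3                       # noise level, 0 < eps < 1
k = np.arange(1, K_MAX + 1)
b = np.exp(-0.5 * k)             # singular values b_k -> 0 exponentially
theta = np.exp(-1.0 * k)         # Fourier coefficients of f

# Observations (3):  y_k = b_k * theta_k + eps * xi_k
y = b * theta + eps * rng.standard_normal(K_MAX)

# Naive inversion amplifies the noise by b_k^{-1} = e^{k/2}:
theta_naive = y / b
err_head = np.abs(theta_naive[:5] - theta[:5])    # small: signal dominates
err_tail = np.abs(theta_naive[-5:] - theta[-5:])  # huge: pure amplified noise
```

The estimators discussed below replace this naive inversion with a data-driven rule that keeps only the low frequencies where $b_k\theta_k$ still dominates the noise.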
An inverse problem will be called softly ill-posed if the sequence $b_k$ tends to 0 at a polynomial rate in $k$, and it will be called severely ill-posed if

$$ \lim_{k \to \infty} \frac{1}{k} \log \frac{1}{b_k} = c $$

for some $0 < c < \infty$. Thus, the problem is severely ill-posed if, in the main term, $b_k$ tends to 0 exponentially in $k$.
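The dichotomy is easy to see numerically; the two sequences below are illustrative choices, not ones from the paper:

```python
import numpy as np

# The degree of ill-posedness is measured by (1/k) * log(1/b_k):
#   b_k = k^(-2)        (polynomial decay)  -> limit c = 0   (softly ill-posed)
#   b_k = exp(-0.7 * k) (exponential decay) -> limit c = 0.7 (severely ill-posed)
k = np.arange(1, 10_001, dtype=float)

c_poly = 2.0 * np.log(k) / k        # (1/k) log(1/b_k) for b_k = k^{-2}
c_expo = 0.7 * np.ones_like(k)      # (1/k) log(1/b_k) for b_k = e^{-0.7k}

print(c_poly[-1])   # tends to 0 as k grows
print(c_expo[-1])   # stays at 0.7
```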
An important element of the model is the prior information about $\theta$. Successful estimation of a sequence $\theta$ is possible only if its elements $\theta_k$ tend to zero sufficiently fast as $k \to \infty$, which means that $f$ is sufficiently smooth. A standard assumption on the smoothness of $f$ is to suppose that $\theta$ belongs to an ellipsoid

$$ \Theta = \Big\{ \theta\colon \sum_{k=1}^\infty a_k^2 \theta_k^2 \leq L \Big\}, $$

where $a = \{a_k\}$ is a positive sequence that tends to infinity, and $L > 0$. Special cases of $\Theta$ are the Sobolev balls and the classes of analytic functions, corresponding to $a_k$'s increasing as a polynomial in $k$ and as an exponential in $k$, respectively. Thus appears a natural classification of different cases in the study of linear inverse problems. Regarding the difficulty of the operator described in terms of the $b_k$'s and the smoothness assumptions described in terms of the $a_k$'s, one obtains the following three typical cases.

1. Softly ill-posed problems: the $b_k$'s are polynomial and the $a_k$'s are general (usually polynomial or exponential). These problems have been studied by many authors, and they are essentially similar to estimation of derivatives of smooth functions. Sharp adaptive estimators for a general framework are given by Cavalier and Tsybakov [3] and by Cavalier, Golubev, Picard, and Tsybakov [2].

2. Severely ill-posed problems with log-rates: the $b_k$'s are exponential and the $a_k$'s are polynomial. This case is highly degenerate in the sense that the variance of the optimal estimators is asymptotically negligible as compared to their bias. The optimal rates of convergence are very slow (logarithmic), and sharp adaptation can be attained by a simple projection estimator ([4], [8]).

3. 2 exp-severely ill-posed problems: the $b_k$'s are exponential and the $a_k$'s are exponential too (the abbreviation «2 exp» stands for «two exponentials»). These problems will be studied here. They are characterized by some unusual phenomena.
Golubev and Khasminskii [9] proved that 2 exp problems admit fast optimal rates converging to 0 as a power law, despite the «severe» form of the operator. They also showed that sharp minimax estimators for these problems are nonlinear, unlike all other known cases where sharp minimaxity has been explored. The adaptation issue also turns out to be nonstandard here. As shown by Tsybakov [19], there is a logarithmic deterioration in the best rate of convergence for adaptive estimation under the $L_2$-risk. In other words, here one has to pay a price for $L_2$-adaptation, while this is not the case for the inverse problems described in 1 and 2: there the $L_2$-adaptation is possible without any loss, and even the exact constants are preserved. Since the ellipsoid $\Theta$ with exponential $a_k$'s corresponds to analytic functions, the 2 exp framework can be viewed as an analogue of convolution
through a supersmooth filter (described by exponential $b_k$'s), with an analytic function $f$ to reconstruct. There is an important reason why the 2 exp setup is of interest. In the study of inverse problems, a standard assumption is to connect the smoothness of the underlying function to the smoothness of the operator. Roughly, if a function is observed through a very smooth filter, then the function itself has to be very smooth. A formalization of this idea can be found, for example, in the well-known Hilbert scale approach to inverse problems (see [7], [13]–[15]). Nonadaptive minimax estimation for some inverse problems different from (3) but characterized by a similar «two exponentials» behavior has been analyzed by Ermakov [6], Pensky and Vidakovic [16], and Efromovich and Koltchinskii [5]. In this paper we study estimation in the 2 exp framework when the ellipsoid $\Theta$ is not known, but we know only that the $a_k$'s are exponential. We propose an adaptive estimator which attains optimal rates (up to an inevitable logarithmic-factor deterioration) simultaneously on all the ellipsoids with exponential $a_k$'s. Moreover, we show that the estimator is sharp adaptive, i.e., it cannot be improved to within a constant. This generalizes the result of Tsybakov [19] about the optimal rate of adaptation for 2 exp problems. The construction of our adaptive estimator is based on block thresholding (cf. [10], or [3] for the inverse problems setting). A difference from those papers is that, in order to get sharp optimality in our case, we need a «running» block estimator rather than an estimator with fixed blocks. Let us give some examples of severely ill-posed inverse problems related to partial differential equations.

E x a m p l e 1.
Consider the Dirichlet problem for the Laplace equation on a circle of radius 1:

$$ \Delta u = 0, \qquad u(1, \varphi) = f(\varphi), \qquad \varphi \in [0, 2\pi], \quad 0 \leq r \leq 1, \eqno(5) $$

where $\Delta$ is the Laplace operator, $u(r, \varphi)$ is a function in the polar coordinates $r \geq 0$, $\varphi \in [0, 2\pi]$, and $f$ is a $2\pi$-periodic function in $L_2[0, 2\pi]$. It is well known that the solution of (5) is

$$ u_f(r, \varphi) = \frac{\theta_0}{\sqrt{2\pi}} + \frac{1}{\sqrt{\pi}} \sum_{k=1}^\infty r^k \big[ \theta_k \cos(k\varphi) + \theta_{-k} \sin(k\varphi) \big], \eqno(6) $$

where the $\theta_k$ are the Fourier coefficients of $f$. Assume that $f$ is not known, but one can observe the solution $u_f(r, \varphi)$ on the circle of radius $r_0 < 1$ in a white Gaussian noise:

$$ dY(\varphi) = u_f(r_0, \varphi)\,d\varphi + \varepsilon\,dW(\varphi), \qquad \varphi \in [0, 2\pi], \eqno(7) $$
where $W$ is a standard Wiener process on $[0, 2\pi]$ and $0 < \varepsilon < 1$. The problem is to estimate the boundary condition $f$ based on the observation of a trajectory $Y(\varphi)$, $\varphi \in [0, 2\pi]$. Substituting (6) in (7), multiplying (7) by the trigonometric basis functions and integrating over $[0, 2\pi]$, we get the infinite sequence of observations

$$ y_k = b_k \theta_k + \varepsilon \xi_k, \qquad k \in \mathbf{Z}, $$

where $b_k = r_0^{|k|}$ and the $\xi_k$ are i.i.d. $\mathcal{N}(0, 1)$ random variables. By renumbering the indices from $k \in \mathbf{Z}$ to $k \in \mathbf{N}$ we get a particular case of the model (3). This problem is severely ill-posed since $b_k \to 0$ exponentially fast as $k \to \infty$.

E x a m p l e 2. Consider the following Cauchy problem for the Laplace equation:

$$ \Delta u = 0, \qquad u(x, 0) = 0, \qquad \frac{\partial}{\partial y}\, u(x, y)\Big|_{y=0} = g(x), \eqno(8) $$

where $u(x, y)$ is defined for $x \in \mathbf{R}$, $y \geq 0$, and the initial condition $g$ is a 1-periodic function on $\mathbf{R}$. Suppose that we do not know $g$ but we have at our disposal the noisy observations $Y(x)$, $x \in [0, 1]$, where $Y$ is the random process defined by

$$ dY(x) = g(x)\,dx + \varepsilon\,dW(x), \qquad x \in [0, 1]. \eqno(9) $$

Here $W$ is the standard Wiener process on $[0, 1]$. The problem is to estimate the solution $f(x) \stackrel{\mathrm{def}}{=} u_g(x, y_0)$ of (8) at a given $y_0 > 0$, based on these observations. Since $g$ is 1-periodic, $f$ is also 1-periodic. Denoting by $\theta_k$ the Fourier coefficients of $f$, one can find that, given (9), the following sequence of observations is available: $y_k = b_k \theta_k + \varepsilon \xi_k$, where the $\xi_k$ are i.i.d. $\mathcal{N}(0, 1)$ random variables and $b_k \asymp k \exp(-\beta y_0 k)$ as $k \to \infty$, for some $\beta > 0$ (see [8] for more details).

2. Setting of the problem. From now on we assume that the observations have the form (3), where the $\xi_k$ are i.i.d. $\mathcal{N}(0, 1)$ random variables and the values $b_k$ are defined by

$$ b_k^{-2} = r_k \exp(\rho k) \eqno(10) $$

with $\rho > 0$ and a positive sequence $r_k$ varying slower than an exponential as $k \to \infty$. Such a definition of the $b_k$'s covers the examples considered above, whereas considering the squared values of the $b_k$'s reflects the fact that the results will be insensitive to the signs.

We assume that $r_k$ is subexponential in the sense of the following definition.
We assume that r k is subexponential in the sense of the following definition. D e f i n i t i o n 1. A sequence r k k=1 is called subexponential if r k > 0 for all k and there exist constants C < and µ 0, 1] such that r k+1 1 C r, k = 1, 2, ) k k µ
The class of subexponential sequences is rather large, including polynomial, logarithmic, and other sequences. It is easy to see that a subexponential sequence $r_k$ satisfies

$$ a \exp(-c k^{1-\mu}) \leq r_k \leq a' \exp(c' k^{1-\mu}) \quad \text{if } 0 < \mu < 1, \qquad a k^{-c} \leq r_k \leq a' k^{c'} \quad \text{if } \mu = 1, \eqno(12) $$

with some positive finite constants $a$, $a'$, $c$, and $c'$. We will assume that $\theta$ belongs to an ellipsoid

$$ \Theta(\alpha, L) = \Big\{ \theta\colon \sum_{k=1}^\infty q_k \exp(\alpha k)\, \theta_k^2 \leq L \Big\}, \eqno(13) $$

where $q_k$ is a subexponential sequence and $\alpha > 0$, $L > 0$ are finite constants. In order to shed some light on the estimation of $\theta$ in this setup, consider a simple projection estimator $\tilde\theta = (\tilde\theta_1, \tilde\theta_2, \ldots)$ with bandwidth $W \in \mathbf{N}$, i.e.,

$$ \tilde\theta_k = \begin{cases} b_k^{-1} y_k, & k \leq W, \\ 0, & k > W. \end{cases} $$

The maximal risk of this estimator over $\Theta(\alpha, L)$ is bounded from above as follows:

$$ \sup_{\theta \in \Theta(\alpha,L)} R_\varepsilon(\tilde\theta, \theta) = \sup_{\theta \in \Theta(\alpha,L)} \sum_{k>W} \theta_k^2 + \varepsilon^2 \sum_{k=1}^W b_k^{-2} \leq L \sum_{k>W} \exp(-\alpha k)\, q_k^{-1} + \varepsilon^2 \sum_{k=1}^W \exp(\rho k)\, r_k. \eqno(14) $$

The minimum of the right-hand side of (14) with respect to $W$ is attained for some $W$ depending on $\varepsilon$ such that $W \to \infty$ as $\varepsilon \to 0$ (in fact, otherwise the right-hand side of (14) does not tend to 0 as $\varepsilon \to 0$). Using Lemma 2 (see below) to compute the last two sums in (14), we find that, as $W \to \infty$, the right-hand side of (14) is approximated by

$$ J(W) = \frac{L \exp(-\alpha W)\, q_W^{-1}}{1 - e^{-\alpha}} + \frac{\varepsilon^2 \exp(\rho W)\, r_W}{1 - e^{-\rho}}. $$

The minimizer of $J(W)$ gives an approximately optimal bandwidth. Since for subexponential sequences $r_k$, $q_k$ and large enough $W$ we have $r_{W-1} \approx r_W \approx r_{W+1}$ and $q_{W-1} \approx q_W \approx q_{W+1}$, the necessary conditions for a local minimum at $W$, namely $J(W) < J(W+1)$ and $J(W) < J(W-1)$, can be written in the form

$$ L \exp(-\gamma W) < \varepsilon^2 e^\rho\, r_W q_W, \qquad L \exp(-\gamma (W-1)) > \varepsilon^2 e^\rho\, r_{W-1} q_{W-1}, $$
where $\gamma = \rho + \alpha$. It can be shown that all the local minimizers of $J(W)$ provide essentially the same value of $J$, so that one can take, for example, the smallest local minimizer

$$ W(\alpha, L) = \min\{ k \in \mathbf{N}\colon L \exp(-\gamma k) < \varepsilon^2 e^\rho\, r_k q_k \}. \eqno(15) $$

In other words, the minimum of the right-hand side of (14) is approximately attained at $W(\alpha, L)$. This yields the following upper bound for the minimax risk:

$$ \inf_{\hat\theta} \sup_{\theta \in \Theta(\alpha,L)} R_\varepsilon(\hat\theta, \theta) \leq C\, r_\varepsilon(\alpha, L), $$

with a constant $C < \infty$ and the rate of convergence

$$ r_\varepsilon(\alpha, L) = \varepsilon^2\, r_{W(\alpha,L)} \exp[\rho W(\alpha, L)]. \eqno(16) $$

Using the argument of [9] it is not difficult to show that the rate of convergence $r_\varepsilon(\alpha, L)$ cannot be improved in the minimax sense. Unfortunately, the minimax approach has a serious drawback: the optimal (or nearly optimal) choice of the bandwidth depends strongly on the parameters of the functional class $\Theta(\alpha, L)$. In the next section we correct this drawback: we propose an adaptive estimator of $\theta$ independent of $\alpha$ and $L$ and attaining a rate which is only logarithmically worse than $r_\varepsilon(\alpha, L)$ on any class $\Theta(\alpha, L)$.

3. Adaptive estimator and its optimality. Define the following estimator $\tilde\theta = (\tilde\theta_1, \tilde\theta_2, \ldots)$ based on block thresholding with running blocks:

$$ \tilde\theta_k = b_k^{-1} y_k\, I\big( \bar y_k^2 \geq 2\varepsilon^2 \bar\rho k \big), \qquad k = 1, 2, \ldots, \eqno(17) $$

where $\bar\rho > \rho$ and

$$ \bar y_k^2 = \sum_{s \in \mathbf{N}\colon |k - s| \leq N} y_s^2 \eqno(18) $$

with an integer $N \geq 1$. Here and later $I(\cdot)$ denotes the indicator function. It will be clear from the proofs that $\tilde\theta_k = 0$ with a probability close to 1 whenever $k$ does not belong to a «small» neighborhood of the integer $\widetilde W$ defined by

$$ \widetilde W = \widetilde W(\alpha, L) = \min\{ k \in \mathbf{N}\colon L \exp(-\gamma k) < 2\varepsilon^2 \bar\rho k\, r_k q_k \}. \eqno(19) $$

We will call $\widetilde W$ the adaptive bandwidth. Note that $\widetilde W(\alpha, L)$ is smaller than the optimal bandwidth $W(\alpha, L)$ given in (15) for all $\varepsilon$ small enough. For instance, if $r_k = q_k \equiv 1$, we have, as $\varepsilon \to 0$,

$$ W(\alpha, L) = \frac{1}{\gamma} \log \frac{L}{\varepsilon^2} + O(1), \eqno(20) $$
whereas

$$ \widetilde W(\alpha, L) = \frac{1}{\gamma} \log \frac{L}{\varepsilon^2} - \frac{1}{\gamma} \log \log \frac{L}{\varepsilon^2} + O(1). \eqno(21) $$

For general $r_k$, $q_k$ a closed-form expression for $\widetilde W$ is not available, but using (12) one can see that $\widetilde W \asymp \log(1/\varepsilon)$ as $\varepsilon \to 0$. Nevertheless, possible terms of order $o(\log(1/\varepsilon))$ in the expression for $\widetilde W$ are not negligible, since they can affect the rate of convergence (cf. (21)). Note also that the value $\widetilde W$ need not be known for the construction of our estimator. In this section we establish the exact asymptotics of the minimax adaptive risk. It turns out that this asymptotics is expressed in terms of the value $A^*$ of the following maximization problem:

$$ A^* = A^*(\alpha, L) = \max_{\theta \in \Theta^*(\alpha,L)} \sum_{k=-\infty}^\infty \exp(\rho k)\, \theta_k^2, \eqno(22) $$

where

$$ \Theta^*(\alpha, L) = \Big\{ \theta \in l_2(\mathbf{Z})\colon \sum_{k=-\infty}^\infty \theta_k^2 \leq 1, \ \sum_{k=-\infty}^\infty \exp(\gamma k)\, \theta_k^2 \leq E^*(\alpha, L) \Big\} \eqno(23) $$

and

$$ E^*(\alpha, L) = \frac{L \exp[-\gamma \widetilde W(\alpha, L)]}{2\varepsilon^2 \bar\rho\, \widetilde W(\alpha, L)\, r_{\widetilde W(\alpha,L)}\, q_{\widetilde W(\alpha,L)}}. \eqno(24) $$

Note that (22)–(23) is a problem of linear programming w.r.t. the $\theta_k^2$'s, and it has a solution belonging to the boundary of $\Theta^*(\alpha, L)$. The values $A^*(\alpha, L)$ and $E^*(\alpha, L)$ depend on $\varepsilon$, but the dependence is not strong: they oscillate between two fixed constants as $\varepsilon$ varies. In fact, the definition of $\widetilde W$ implies that for any $L$ and $\alpha$ there exist finite positive constants $e_1, e_2$ such that $e_1 \leq E^*(\alpha, L) \leq e_2$ for all $\varepsilon$. This implies the existence of finite positive constants $a_1, a_2$ (depending on $L$ and $\alpha$) such that $a_1 \leq A^*(\alpha, L) \leq a_2$ for all $\varepsilon$. In particular, since $\gamma > \rho$, one can take $a_2 = e_2$. Define

$$ \psi_\varepsilon(\alpha, L) = 2 A^*(\alpha, L)\, \varepsilon^2 \bar\rho\, \widetilde W(\alpha, L) \exp[\rho \widetilde W(\alpha, L)]\, r_{\widetilde W(\alpha,L)}. \eqno(25) $$

The next theorem gives a bound for the maximal risk of the estimator $\tilde\theta$ over $\Theta(\alpha, L)$.

Theorem 1. Assume that $b_k$ satisfies (10) and that $r_k$ and $q_k$ are subexponential. Let $\tilde\theta$ be the estimator defined by (17)–(18) with $\bar\rho > \rho$ and $N \in \mathbf{N}$. Then for any $\alpha > 0$ and $L > 0$ we have

$$ \limsup_{\varepsilon \to 0} \sup_{\theta \in \Theta(\alpha,L)} \frac{R_\varepsilon(\tilde\theta, \theta)}{\psi_\varepsilon(\alpha, L)} \leq \frac{\bar\rho}{\rho} + C \exp\Big[ -\frac{N}{2} \min(\alpha, \rho) \Big], \eqno(26) $$

where the constant $C < \infty$ does not depend on $N$ and $\bar\rho$.
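A small Monte-Carlo sketch of the running-block rule (17)–(18) may help fix ideas. All parameter values below are hypothetical, and we take $r_k = q_k \equiv 1$ so that $b_k^{-2} = e^{\rho k}$; coefficients are inverted only where a whole block of neighboring observations carries energy above the noise-calibrated threshold $2\varepsilon^2\bar\rho k$.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical parameters; r_k = q_k = 1, so b_k^{-2} = exp(rho * k).
eps, rho, alpha = 1e-3, 1.0, 1.0
rho_bar = 1.2                     # tuning parameter, must satisfy rho_bar > rho
N = 2                             # block half-width; the block size is 2N + 1
K_MAX = 60

k = np.arange(1, K_MAX + 1)
b = np.exp(-0.5 * rho * k)        # b_k = exp(-rho * k / 2)
theta = np.exp(-0.5 * alpha * k)  # a signal from an ellipsoid of type (13)
y = b * theta + eps * rng.standard_normal(K_MAX)

# Running-block statistic (18):  ybar_k^2 = sum_{|s-k| <= N} y_s^2
y2 = y**2
ybar2 = np.array([y2[max(i - N, 0): i + N + 1].sum() for i in range(K_MAX)])

# Threshold rule (17): invert only where the block energy is significant.
theta_tilde = np.where(ybar2 >= 2.0 * eps**2 * rho_bar * k, y / b, 0.0)
```

For small $k$ the signal dominates and $\tilde\theta_k \approx \theta_k$; beyond a neighborhood of the adaptive bandwidth $\widetilde W$ the indicator switches off and the exponentially amplified noise $y_k/b_k$ is discarded rather than injected into the estimate.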
Note that if the size $2N + 1$ of the block is large and the parameter $\bar\rho$ is close to $\rho$, then the right-hand side of (26) approaches 1. Alternatively, one can take $N = N_\varepsilon \to \infty$ and $\bar\rho = \bar\rho_\varepsilon \to \rho$ as $\varepsilon \to 0$, satisfying appropriate restrictions, which leads to the next result. For $x \geq 0$ write $\lceil x \rceil = \min\{ n \in \mathbf{N}\colon n > x \}$.

Theorem 2. Assume that $b_k$ satisfies (10) and that $r_k$ and $q_k$ are subexponential. Let $\tilde\theta$ be the estimator defined by (17)–(18) with $N = \lceil \log[\log(1/\varepsilon) \vee 1] \rceil$ and $\bar\rho = \rho + N^{-1}$. Then for any $\alpha > 0$ and $L > 0$ we have

$$ \limsup_{\varepsilon \to 0} \sup_{\theta \in \Theta(\alpha,L)} \frac{R_\varepsilon(\tilde\theta, \theta)}{\psi_\varepsilon(\alpha, L)} \leq 1. \eqno(27) $$

R e m a r k 1. The estimator $\tilde\theta$ is defined as an infinite sequence. It can be proved that, under our assumptions, the number of nonzero components of this sequence is finite almost surely. However, to construct the estimator (17) one has to check the inequality $\bar y_k^2 \geq 2\varepsilon^2 \bar\rho k$ for all $k$, which is not realizable in practice. It is easy to propose a realizable version: put $\tilde\theta_k = 0$ for $k > N_{\max}$ with some $N_{\max} = N_{\max}(\varepsilon) \to \infty$ as $\varepsilon \to 0$. Inspection of the proofs shows that to keep Theorems 1 and 2 valid it suffices to take a rather small $N_{\max}$, for example, $N_{\max} \asymp \log^2(1/\varepsilon)$. Note also that the choice of $N$, $\bar\rho$ suggested in Theorem 2 is not the only possible one: there exists a variety of similar values $(N, \bar\rho)$ that allow one to attain the result of the theorem. These values are described by some technical conditions that we do not include in the theorem but that can easily be extracted from the proof.

We now show that the upper bound (27) is sharp optimal for adaptation. Note first that for every fixed $\alpha_0 > 0$, $L_0 > 0$ there exists an estimator $\hat\theta$ such that

$$ \lim_{\varepsilon \to 0} \sup_{\theta \in \Theta(\alpha_0, L_0)} \frac{R_\varepsilon(\hat\theta, \theta)}{\psi_\varepsilon(\alpha_0, L_0)} = 0. $$

For example, one can take $\hat\theta$ as a projection estimator with the optimal bandwidth corresponding to $(\alpha_0, L_0)$. Thus, $\hat\theta$ gains over $\tilde\theta$ «at a point» $(\alpha_0, L_0)$. On the other hand, there are points $(\alpha, L)$ where $\tilde\theta$ gains over $\hat\theta$.
The next theorem shows that if an estimator $\hat\theta$ gains over $\tilde\theta$ at one point $(\alpha_0, L_0)$, there exists another point $(\alpha, L)$ where $\hat\theta$ loses much more than it gains at $(\alpha_0, L_0)$.

Theorem 3. Assume that $b_k$ satisfies (10) and $r_k = q_k \equiv 1$ for all $k$. Let an estimator $\hat\theta$ be such that, for some $\alpha_0 > 0$, $L_0 > 0$,

$$ \limsup_{\varepsilon \to 0} \sup_{\theta \in \Theta(\alpha_0, L_0)} \frac{R_\varepsilon(\hat\theta, \theta)}{\psi_\varepsilon(\alpha_0, L_0)} < 1. \eqno(28) $$

Then there exists $\alpha^* > \alpha_0$ such that for all $\alpha > \alpha^*$ and all $L > 0$

$$ \liminf_{\varepsilon \to 0}\; \sup_{\theta \in \Theta(\alpha_0, L_0)} \frac{R_\varepsilon(\hat\theta, \theta)}{\psi_\varepsilon(\alpha_0, L_0)}\; \sup_{\theta \in \Theta(\alpha, L)} \frac{R_\varepsilon(\hat\theta, \theta)}{\psi_\varepsilon(\alpha, L)} = \infty. \eqno(29) $$
Theorems 2 and 3 imply, in particular, that $\psi_\varepsilon(\alpha, L)$ is the adaptive rate of convergence for our problem. This follows from Definition 3 in [18], with $\sup_{\theta \in \Theta(\alpha,L)} R_\varepsilon(\hat\theta, \theta)$ being viewed as another rate of convergence $S_\varepsilon(\alpha, L)$. In fact, Theorems 2 and 3 give even more than just the rate: they show that an attempt to improve $\psi_\varepsilon(\cdot\,, \cdot)$ at any point $(\alpha_0, L_0)$, not only in the rate but also in the constant, leads to a catastrophic behavior at another point. This property is interpreted as sharp adaptive optimality of the rate $\psi_\varepsilon(\cdot\,, \cdot)$. Another consequence of these theorems can be called sharp adaptive optimality of the estimator $\tilde\theta$. One can modify Theorem 3 by expressing the result in terms of the ratio of maximal risks. For any two estimators $\hat\theta_1$ and $\hat\theta_2$ define

$$ G_{\alpha,L}\Big( \frac{\hat\theta_1}{\hat\theta_2} \Big) = \frac{\sup_{\theta \in \Theta(\alpha,L)} R_\varepsilon(\hat\theta_2, \theta)}{\sup_{\theta \in \Theta(\alpha,L)} R_\varepsilon(\hat\theta_1, \theta)}. $$

This value is interpreted as the gain of $\hat\theta_1$ over $\hat\theta_2$ at $(\alpha, L)$. The larger $G_{\alpha,L}(\hat\theta_1/\hat\theta_2)$ is, the better $\hat\theta_1$ is as compared to $\hat\theta_2$. It is easy to see that Theorem 3 and (27) imply the following corollary.

Corollary 1. Under the assumptions of Theorem 3, let an estimator $\hat\theta$ be such that, for some $\alpha_0 > 0$, $L_0 > 0$,

$$ \liminf_{\varepsilon \to 0}\; G_{\alpha_0, L_0}\Big( \frac{\hat\theta}{\tilde\theta} \Big) > 1, \eqno(30) $$

where $\tilde\theta$ is the estimator defined by (17)–(18) and satisfying the assumptions of Theorem 2. Then there exists $\alpha > \alpha_0$ such that for all $L > 0$ and all $\varepsilon$ small enough

$$ G_{\alpha,L}\Big( \frac{\tilde\theta}{\hat\theta} \Big) > l_\varepsilon\, G_{\alpha_0, L_0}\Big( \frac{\hat\theta}{\tilde\theta} \Big) $$

with $l_\varepsilon \to \infty$ as $\varepsilon \to 0$.

R e m a r k 2. It follows from (15) and (16) that for $r_k = q_k \equiv 1$ the nonadaptive rate of convergence $r_\varepsilon(\alpha, L)$ is of order $\varepsilon^{2\alpha/\gamma}$, while (21) and (25) imply that the adaptive rate satisfies $\psi_\varepsilon(\alpha, L) \asymp \varepsilon^{2\alpha/\gamma} (\log(1/\varepsilon))^{\alpha/\gamma}$. Thus, one has to pay an extra log-factor for adaptation. This effect is similar to the one established by Lepski [12] in the case of adaptation at a fixed point, and it is due to a nondegenerate asymptotic behavior of the normalized loss of the estimators as $\varepsilon \to 0$.
Our problem provides the first example where such an effect occurs for the $L_2$-loss and not for the loss at a fixed point.

4. Proofs. In this section we denote by $C$ finite positive constants that may be different on different occasions.
4.1. Proof of Theorems 1 and 2.

Lemma 1. Let $w_k$ be a subexponential sequence. Then for any integers $T$, $t$ such that $t \geq T$ and any integer $M < \min(t, \log T)$ we have

$$ \sup_{k \in \mathbf{Z}\colon |k| \leq M} \Big| \frac{w_{t+k}}{w_t} - 1 \Big| \leq \eta_T, \eqno(31) $$

where $\eta_T$ depends only on $T$ and $\eta_T \to 0$ as $T \to \infty$.

P r o o f is straightforward.

Lemma 2. Let $w_k$ be a subexponential sequence. Then for any $\tau > 0$, as $T \to \infty$,

$$ \sum_{k=1}^T \exp(\tau k)\, w_k = (1 + o(1))\, \frac{\exp(\tau T)\, w_T}{1 - e^{-\tau}}, \eqno(32) $$

$$ \sum_{k=T}^\infty \exp(-\tau k)\, w_k = (1 + o(1))\, \frac{\exp(-\tau T)\, w_T}{1 - e^{-\tau}}. \eqno(33) $$

P r o o f. Write

$$ \sum_{k=1}^T \exp(\tau k)\, w_k = \exp(\tau T)\, w_T \sum_{k=0}^{T-1} \exp(-\tau k)\, \frac{w_{T-k}}{w_T}. $$

Set $M = \lceil \log T \rceil - 1$. If $T$ is large, we have $M < T - 1$, and hence we can write

$$ \sum_{k=0}^{T-1} \exp(-\tau k)\, \frac{w_{T-k}}{w_T} = \sum_{k=0}^{M} \exp(-\tau k)\, \frac{w_{T-k}}{w_T} + \sum_{k=M+1}^{T-1} \exp(-\tau k)\, \frac{w_{T-k}}{w_T}. \eqno(34) $$

Using Lemma 1, we get

$$ (1 - \eta_T) \sum_{k=0}^M \exp(-\tau k) \leq \sum_{k=0}^M \exp(-\tau k)\, \frac{w_{T-k}}{w_T} \leq (1 + \eta_T) \sum_{k=0}^M \exp(-\tau k), $$

where $\eta_T = o(1)$ as $T \to \infty$. Thus

$$ \lim_{T \to \infty} \sum_{k=0}^{M} \exp(-\tau k)\, \frac{w_{T-k}}{w_T} = \sum_{k=0}^{\infty} e^{-\tau k} = \frac{1}{1 - e^{-\tau}}. $$

The last sum in (34) satisfies

$$ \sum_{k=M+1}^{T-1} \exp(-\tau k)\, \frac{w_{T-k}}{w_T} \leq \sum_{k=M}^{\infty} \exp(-\tau k) \Big( 1 + \frac{C}{M^\mu} \Big)^k \leq \sum_{k=M}^{\infty} \exp\big( -(\tau - C M^{-\mu})\, k \big). $$

This term tends to 0 as $M \to \infty$. Thus we obtain (32). Equation (33) is proved similarly.
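The asymptotics (32) is easy to check numerically. In the sketch below, $w_k = \sqrt{k}$ is an illustrative subexponential sequence (not one used in the paper), and the sum is normalized by $e^{\tau T}$ to keep the computation well scaled:

```python
import numpy as np

# Check (32): sum_{k=1}^T exp(tau*k) w_k ~ exp(tau*T) w_T / (1 - e^{-tau})
# for the subexponential sequence w_k = sqrt(k).
tau, T = 0.5, 200
k = np.arange(1, T + 1, dtype=float)
w = np.sqrt(k)

lhs = np.sum(np.exp(tau * (k - T)) * w)    # the sum, divided through by exp(tau*T)
rhs = w[-1] / (1.0 - np.exp(-tau))         # the claimed main term, same normalization

ratio = lhs / rhs
print(ratio)   # close to 1, and -> 1 as T grows
```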
Lemma 3. Let $\xi_i$ be i.i.d. $\mathcal{N}(0, 1)$ random variables and set $\bar\xi_k^2 = \sum_{s\colon |s-k| \leq N} \xi_s^2$. Then, for any $k \in \mathbf{N}$, $N \in \mathbf{N}$, and $x \geq 3 + 2N$,

$$ \mathbf{E}\, \xi_k^2\, I\big( \bar\xi_k^2 \geq x \big) \leq \Big( \frac{x e}{3 + 2N} \Big)^{N+3/2} \exp\Big( -\frac{x}{2} \Big). \eqno(35) $$

P r o o f. For any $0 < \lambda < \frac12$,

$$ \mathbf{E}\, \xi_k^2\, I\big( \bar\xi_k^2 \geq x \big) \leq \exp(-\lambda x)\, \mathbf{E}\, \xi_k^2 \exp\big( \lambda \bar\xi_k^2 \big) = \exp(-\lambda x)\, \mathbf{E}\big[ \xi_k^2 \exp(\lambda \xi_k^2) \big] \prod_{i \neq k\colon |i-k| \leq N} \mathbf{E} \exp(\lambda \xi_1^2) $$
$$ = \exp(-\lambda x)\, (1 - 2\lambda)^{-3/2} (1 - 2\lambda)^{-N} = \exp\Big( -\lambda x - \frac{3 + 2N}{2} \log(1 - 2\lambda) \Big). \eqno(36) $$

The minimum with respect to $\lambda$ of the right-hand side of (36) is attained at

$$ \lambda = \frac12 - \frac{3 + 2N}{2x}. $$

Substituting this $\lambda$ into (36) we get (35).

P r o o f o f T h e o r e m 1. Let $M$ be a sufficiently large integer satisfying $N \leq M < \min(\widetilde W/2, (\log \widetilde W)/2)$. In this proof we denote by $C$ constants that do not depend on $M$, $N$, $\varepsilon$, and $\theta$. We decompose the risk of the estimator $\tilde\theta$ into three parts:

$$ \sup_{\theta \in \Theta(\alpha,L)} R_\varepsilon(\tilde\theta, \theta) \leq S_1 + S_2 + S_3, \eqno(37) $$

where

$$ S_1 = \sup_{\theta \in \Theta(\alpha,L)} \sum_{k=1}^{\widetilde W - M} \mathbf{E}_\theta (\tilde\theta_k - \theta_k)^2, \qquad S_2 = \sup_{\theta \in \Theta(\alpha,L)} \sum_{k=\widetilde W - M}^{\widetilde W + M} \mathbf{E}_\theta (\tilde\theta_k - \theta_k)^2, \qquad S_3 = \sup_{\theta \in \Theta(\alpha,L)} \sum_{k=\widetilde W + M}^{\infty} \mathbf{E}_\theta (\tilde\theta_k - \theta_k)^2. $$

Consider first the term $S_1$. Using (32) for the subexponential sequences $w_k = r_k$ and $w_k = k r_k$, and Lemma 1 for $w_k = r_k$, we have

$$ S_1 = \sup_{\theta \in \Theta(\alpha,L)} \sum_{k=1}^{\widetilde W - M} \exp(\rho k)\, r_k\, \mathbf{E}_\theta \big[ y_k\, I\big( \bar y_k^2 < 2\varepsilon^2 \bar\rho k \big) - \varepsilon \xi_k \big]^2 \leq 2\varepsilon^2 \sum_{k=1}^{\widetilde W - M} \exp(\rho k)\, r_k $$
$$ {} + 2 \sup_{\theta \in \Theta(\alpha,L)} \sum_{k=1}^{\widetilde W - M} \exp(\rho k)\, r_k\, \mathbf{E}_\theta\, y_k^2\, I\big( \bar y_k^2 < 2\varepsilon^2 \bar\rho k \big) \leq 2\varepsilon^2 \sum_{k=1}^{\widetilde W - M} \exp(\rho k)\, r_k + 4\varepsilon^2 \bar\rho \sum_{k=1}^{\widetilde W - M} k \exp(\rho k)\, r_k $$
$$ \leq C \bar\rho\, \varepsilon^2\, \widetilde W \exp(\rho \widetilde W)\, r_{\widetilde W} \exp(-\rho M) \leq C\, \psi_\varepsilon(\alpha, L) \exp(-\rho M). \eqno(38) $$

Next, we bound from above the term $S_2$. This is the main term in (37), and we will analyze it using a renormalization argument. We begin with some simple remarks. Denote

$$ \Theta_M = \Big\{ \theta = (\theta_1, \theta_2, \ldots)\colon \sum_{k=\widetilde W - M - N}^{\widetilde W + M + N} q_k \exp(\alpha k)\, \theta_k^2 \leq L \Big\}. $$

Clearly, $\Theta(\alpha, L) \subset \Theta_M$. Now we change the variables from $\theta_k$ to $\nu_k$ by setting, for $k \geq 1 - \widetilde W$,

$$ \nu_k = \frac{\theta_{k+\widetilde W}}{\varepsilon \sqrt{2\rho \widetilde W\, r_{k+\widetilde W} \exp(\rho(k + \widetilde W))}}, $$

and let $\tilde\nu_k$ be derived from $\tilde\theta_k$ by the same transformation. If $\theta \in \Theta_M$, the sequence $\nu = \{\nu_k\}$ belongs to the set

$$ \Xi_M = \Big\{ \nu \in \Xi\colon \sum_{k=-M-N}^{M+N} \exp(\gamma k)\, r_{k+\widetilde W}\, q_{k+\widetilde W}\, \nu_k^2 \leq \frac{L \exp(-\gamma \widetilde W)}{2\varepsilon^2 \rho \widetilde W} \Big\}, $$

where $\Xi$ denotes the set of all sequences of the form $\nu = (\nu_{1-\widetilde W}, \ldots, \nu_0, \nu_1, \ldots)$. Now, in view of Lemma 1, applied to the subexponential sequences $w_k = r_k$ and $w_k = r_k q_k$, there exists $\eta = \eta_\varepsilon$ depending only on $\widetilde W$, such that $\eta \to 0$ as $\varepsilon \to 0$, and

$$ \max_{|k| \leq M} r_{k+\widetilde W} \leq (1 + \eta)\, r_{\widetilde W}, \eqno(39) $$
$$ \min_{|k| \leq M+N} r_{k+\widetilde W}\, q_{k+\widetilde W} \geq (1 - \eta)\, r_{\widetilde W}\, q_{\widetilde W}. \eqno(40) $$

Fix $0 < \delta < 1$ and assume that $\varepsilon$ is small enough to have simultaneously $\eta < \delta$ and $(1 - \eta)^{-1} < 1 + \delta$. Then (40) implies that $\min_{|k| \leq M+N} r_{k+\widetilde W}\, q_{k+\widetilde W} \geq r_{\widetilde W}\, q_{\widetilde W}/(1 + \delta)$, and therefore

$$ \Xi_M \subset \Xi_M^\delta = \Big\{ \nu \in \Xi\colon \sum_{k=-M-N}^{M+N} \exp(\gamma k)\, \nu_k^2 \leq E^* (1 + \delta)\, \frac{\bar\rho}{\rho} \Big\}. $$

Furthermore, (39) guarantees that $\max_{|k| \leq M} r_{k+\widetilde W} \leq (1 + \delta)\, r_{\widetilde W}$. These remarks imply that, for $\varepsilon$ small enough,

$$ S_2 \leq \sup_{\theta \in \Theta_M} \sum_{k=\widetilde W - M}^{\widetilde W + M} \mathbf{E}_\theta (\tilde\theta_k - \theta_k)^2 $$
$$ \leq \frac{\psi_\varepsilon(\alpha, L)}{A^*} \sup_{\nu \in \Xi_M^\delta} \sum_{k=-M}^{M} \exp(\rho k)\, \frac{r_{k+\widetilde W}}{r_{\widetilde W}}\, \mathbf{E}_\theta (\tilde\nu_k - \nu_k)^2 \leq (1 + \delta)\, \frac{\psi_\varepsilon(\alpha, L)}{A^*} \sup_{\nu \in \Xi_M^\delta} \sum_{k=-M}^{M} \exp(\rho k)\, \mathbf{E}_\theta (\tilde\nu_k - \nu_k)^2. \eqno(41) $$

Using the inequality $(x + y)^2 \leq (1 + \delta)\, x^2 + (1 + \delta^{-1})\, y^2$ for any $x, y \in \mathbf{R}$, we get

$$ \mathbf{E}_\theta (\tilde\nu_k - \nu_k)^2 \leq (1 + \delta)\, \nu_k^2\, \mathbf{P}\Big( \sum_{l=k-N}^{k+N} \Big( \nu_l + \frac{\xi_{l+\widetilde W}}{\sqrt{2\rho \widetilde W}} \Big)^2 < \frac{\bar\rho}{\rho} \Big( 1 + \frac{k}{\widetilde W} \Big) \Big) + (1 + \delta^{-1})\, (2\rho \widetilde W)^{-1}. \eqno(42) $$

Next, using the inequality $(x + y)^2 \geq (1 - \delta)\, x^2 - (\delta^{-1} - 1)\, y^2$ for any $x, y \in \mathbf{R}$, one obtains

$$ \sum_{l=k-N}^{k+N} \Big( \nu_l + \frac{\xi_{l+\widetilde W}}{\sqrt{2\rho \widetilde W}} \Big)^2 \geq (1 - \delta) \sum_{l=k-N}^{k+N} \nu_l^2 - \frac{\delta^{-1} - 1}{2\rho \widetilde W} \sum_{l=k-N}^{k+N} \xi_{l+\widetilde W}^2 $$

and, therefore,

$$ \mathbf{P}\Big( \sum_{l=k-N}^{k+N} \Big( \nu_l + \frac{\xi_{l+\widetilde W}}{\sqrt{2\rho \widetilde W}} \Big)^2 < \frac{\bar\rho}{\rho} \Big( 1 + \frac{k}{\widetilde W} \Big) \Big) \leq I\Big( \sum_{l=k-N}^{k+N} \nu_l^2 \leq V \Big) + \mathbf{P}\Big( \frac{1}{4\delta \rho \widetilde W} \sum_{l=k-N}^{k+N} \xi_{l+\widetilde W}^2 > \delta \Big), \eqno(43) $$

where $V = (1 - \delta)^{-1} \big[ 2\delta + (1 + M/\widetilde W)\, \bar\rho/\rho \big]$. By Markov's inequality,

$$ \mathbf{P}\Big( \frac{1}{4\delta \rho \widetilde W} \sum_{l=k-N}^{k+N} \xi_{l+\widetilde W}^2 > \delta \Big) \leq \frac{2N + 1}{4\delta^2 \rho \widetilde W}. \eqno(44) $$

Define

$$ A(N, M) = \sup_{\nu \in \Xi_M^\delta} \sum_{k=-M}^{M} \exp(\rho k)\, \nu_k^2\, I\Big( \sum_{l=k-N}^{k+N} \nu_l^2 \leq V \Big). $$
Combining (42)–(44), one obtains

$$ \sup_{\nu \in \Xi_M^\delta} \sum_{k=-M}^{M} \exp(\rho k)\, \mathbf{E}_\theta (\tilde\nu_k - \nu_k)^2 \leq (1 + \delta)\, A(N, M) + \frac{2N+1}{4\delta^2 \rho \widetilde W} \sup_{\nu \in \Xi_M^\delta} \sum_{k=-M}^{M} \exp(\rho k)\, \nu_k^2 + C\, \frac{\exp(\rho M)}{\widetilde W} $$
$$ \leq (1 + \delta)\, A(N, M) + C\, \frac{N + \exp(\rho M)}{\widetilde W}, \eqno(45) $$

where for the last inequality we used that, since $\gamma > \rho$, the sum $\sup_{\nu \in \Xi_M^\delta} \sum_{k=-M}^{M} \exp(\rho k)\, \nu_k^2$ is bounded by a constant. From (41) and (45) we get that for any $0 < \delta < 1$ and all $\varepsilon$ small enough

$$ S_2 \leq \frac{\psi_\varepsilon(\alpha, L)}{A^*} \Big[ (1 + \delta)^2 A(N, M) + C\, \frac{N + \exp(\rho M)}{\widetilde W} \Big]. \eqno(46) $$

Our next goal is to show that $A(N, M)$ is close to $A^*$. We will proceed in steps. The first step is to remark that

$$ A(N, M) \leq V(1 + \delta) \sup_{\nu \in \Xi'} \Big[ \sum_{k=-M}^{M} \exp(\rho k)\, \nu_k^2\, I\Big( \sum_{l=k-N}^{k+N} \nu_l^2 \leq 1 \Big) \Big], \eqno(47) $$

where

$$ \Xi' = \Big\{ \nu \in \Xi\colon \sum_{k=-M-N}^{M+N} \exp(\gamma k)\, \nu_k^2 \leq E^* \Big\}. $$

In fact, to get (47), introduce the sequence $\nu' \in \Xi$ such that $\nu'_k = \nu_k / \sqrt{V(1+\delta)}$, observe that $V \geq \bar\rho/\rho > 1$, and use the embedding

$$ \Big\{ \nu'\colon V(1+\delta) \sum_{k=-M-N}^{M+N} \exp(\gamma k)\, (\nu'_k)^2 \leq E^*(1+\delta)\, \frac{\bar\rho}{\rho} \Big\} \subset \Big\{ \nu'\colon \sum_{k=-M-N}^{M+N} \exp(\gamma k)\, (\nu'_k)^2 \leq E^* \Big\}. $$

Our next step is to find an upper bound for the expression in square brackets in (47). We can write, assuming without loss of generality that $N$ is even,

$$ \sup_{\nu \in \Xi'} \sum_{k=-M-N}^{M+N} \exp(\rho k)\, \nu_k^2\, I\Big( \sum_{l=k-N}^{k+N} \nu_l^2 \leq 1 \Big) \leq A_1 + A_2 + A_3 \eqno(48) $$
with

$$ A_1 = \sup_{\nu \in \Xi'} \sum_{k=-M-N}^{-N/2-1} \exp(\rho k)\, \nu_k^2\, I\Big( \sum_{l=k-N}^{k+N} \nu_l^2 \leq 1 \Big) \leq \sum_{k=-\infty}^{-N/2-1} e^{\rho k} \leq C \exp\Big( -\frac{\rho N}{2} \Big), $$

$$ A_2 = \sup_{\nu \in \Xi'} \sum_{k=N/2+1}^{M+N} \exp(\rho k)\, \nu_k^2\, I\Big( \sum_{l=k-N}^{k+N} \nu_l^2 \leq 1 \Big) \leq C \exp\Big( -\frac{\alpha N}{2} \Big), $$

and

$$ A_3 = \sup_{\nu \in \Xi'} \sum_{k=-N/2}^{N/2} \exp(\rho k)\, \nu_k^2\, I\Big( \sum_{l=k-N}^{k+N} \nu_l^2 \leq 1 \Big) \leq \sup_{\nu \in \Xi''} \sum_{k=-N/2}^{N/2} \exp(\rho k)\, \nu_k^2, $$

where

$$ \Xi'' = \Big\{ \nu \in \Xi\colon \sum_{k=-M-N}^{M+N} \exp(\gamma k)\, \nu_k^2 \leq E^*, \ \sum_{k=-N/2}^{N/2} \nu_k^2 \leq 1 \Big\}. $$

Substitution of the inequalities for $A_1$, $A_2$, and $A_3$ into (48) yields

$$ \sup_{\nu \in \Xi'} \sum_{k=-M-N}^{M+N} \exp(\rho k)\, \nu_k^2\, I\Big( \sum_{l=k-N}^{k+N} \nu_l^2 \leq 1 \Big) \leq \sup_{\nu \in \Xi''} \sum_{k=-N/2}^{N/2} \exp(\rho k)\, \nu_k^2 + C \exp\Big( -\frac{N}{2} \min(\alpha, \rho) \Big). \eqno(49) $$

Next, introducing the set

$$ \Theta^*_{N/2}(\alpha, L) = \{ \nu \in \Theta^*(\alpha, L)\colon \nu_k = 0 \ \text{for } |k| > N/2 \}, $$

we note that

$$ \sup_{\nu \in \Xi''} \sum_{k=-N/2}^{N/2} \exp(\rho k)\, \nu_k^2 = \sup_{\nu \in \Theta^*_{N/2}(\alpha,L)} \sum_{k=-N/2}^{N/2} \exp(\rho k)\, \nu_k^2 \leq \sup_{\nu \in \Theta^*(\alpha,L)} \sum_{k=-N/2}^{N/2} \exp(\rho k)\, \nu_k^2 \leq A^*. \eqno(50) $$

In fact, the equality in (50) follows from the fact that setting $\nu_k = 0$ for $|k| > N/2$ does not increase the sum $\sum_{k=-N/2}^{N/2} \exp(\rho k)\, \nu_k^2$, and thus the supremum of this sum over $\Xi''$ is attained on the sequences $\nu$ with $\nu_k = 0$ for $|k| > N/2$. From (47) and (49)–(50) we get

$$ A(N, M) \leq V(1+\delta) \Big[ A^* + C \exp\Big( -\frac{N}{2} \min(\alpha, \rho) \Big) \Big]. $$

This, together with (46) and the fact that $A^*$ is bounded from below uniformly in $\varepsilon$, entails

$$ S_2 \leq \psi_\varepsilon(\alpha, L) \Big[ V (1+\delta)^3 + C \exp\Big( -\frac{N}{2} \min(\alpha, \rho) \Big) + \frac{C}{\widetilde W}\, \big( N + \exp(\rho M) \big) \Big]. \eqno(51) $$

Finally, we bound from above the term $S_3$. Using (3) and (10) we find

$$ S_3 \leq 2 \sup_{\theta \in \Theta(\alpha,L)} \sum_{k=\widetilde W + M}^{\infty} \theta_k^2 + 2\varepsilon^2 \sup_{\theta \in \Theta(\alpha,L)} \sum_{k=\widetilde W + M}^{\infty} \exp(\rho k)\, r_k\, \mathbf{E}\, \xi_k^2\, I\big( \bar y_k^2 \geq 2\varepsilon^2 \bar\rho k \big). \eqno(52) $$

The first term on the right-hand side satisfies

$$ \sup_{\theta \in \Theta(\alpha,L)} \sum_{k=\widetilde W + M}^{\infty} \theta_k^2 \leq L \sum_{k=\widetilde W + M}^{\infty} \exp(-\alpha k)\, q_k^{-1} \leq C \exp[-\alpha(\widetilde W + M)]\, q_{\widetilde W + M}^{-1} \leq C \exp(-\alpha M)\, \psi_\varepsilon(\alpha, L), \eqno(53) $$

where to get the last two inequalities we applied Lemma 2 and then Lemma 1 for the subexponential sequence $w_k = q_k^{-1}$.

Consider the sequence $\theta' = (\theta'_1, \theta'_2, \ldots)$ with $\theta'_k = b_k \theta_k$. For $k \geq \widetilde W + M$,

$$ \sup_{\theta \in \Theta(\alpha,L)} \sum_{l=k-N}^{k+N} (\theta'_l)^2 = \sup_{\theta \in \Theta(\alpha,L)} \sum_{l=k-N}^{k+N} r_l^{-1} \exp(-\rho l)\, \theta_l^2 \leq \Big( \max_{|l-k| \leq N} r_l^{-1} q_l^{-1} \exp(-\gamma l) \Big) \sup_{\theta \in \Theta(\alpha,L)} \sum_{l=1}^\infty q_l \exp(\alpha l)\, \theta_l^2 $$
$$ \leq C L\, r_{k-N}^{-1} q_{k-N}^{-1} \exp(-\gamma(k - N)), $$

where we used the subexponentiality of the sequence $w_l = r_l^{-1} q_l^{-1}$. Applying to the last expression successively Lemma 1 and then the fact that $L \exp(-\gamma \widetilde W) < 2\varepsilon^2 \bar\rho \widetilde W\, r_{\widetilde W}\, q_{\widetilde W}$, we get

$$ \sup_{\theta \in \Theta(\alpha,L)} \sum_{l=k-N}^{k+N} (\theta'_l)^2 \leq C\, r_{\widetilde W}^{-1} q_{\widetilde W}^{-1}\, L \exp(-\gamma(k - N)) \leq C \varepsilon^2 \widetilde W \exp(-\gamma(M - N)) \leq C \varepsilon^2 k \exp(-\gamma(M - N)). \eqno(54) $$
Now we bound the last term in (52). By Lemma 3, for any $k \geq \widetilde W + M$ we have

$$ \mathbf{E}\, \xi_k^2\, I\big( \bar y_k^2 \geq 2\varepsilon^2 \bar\rho k \big) \leq \mathbf{E}\, \xi_k^2\, I\Big( \|\bar\xi_k\| \geq \sqrt{2\bar\rho k} - \frac{\|\bar\theta'_k\|}{\varepsilon} \Big) \leq \Big( \frac{2\bar\rho k e^2}{3 + 2N} \Big)^{N+3/2} \exp\Big( -\frac{1}{2}\Big[ \sqrt{2\bar\rho k} - \frac{\|\bar\theta'_k\|}{\varepsilon} \Big]^2 \Big) $$
$$ \leq \Big( \frac{2\bar\rho k e^2}{3 + 2N} \Big)^{N+3/2} \exp\Big( -\bar\rho k + \sqrt{2\bar\rho k}\, \frac{\|\bar\theta'_k\|}{\varepsilon} \Big), \eqno(55) $$

where $\|\bar\theta'_k\|^2 = \sum_{l=k-N}^{k+N} (\theta'_l)^2$ and $\|\bar\xi_k\|^2 = \sum_{l=k-N}^{k+N} \xi_l^2$. In the rest of the proof we set

$$ M = 2 \lceil \log[\log(1/\varepsilon) \vee 1] \rceil. \eqno(56) $$

For $\varepsilon$ small enough this choice of $M$ satisfies the assumptions on $M$ imposed above, since $\widetilde W \asymp \log(1/\varepsilon)$. Since $M \to \infty$ as $\varepsilon \to 0$, for any small constant $c > 0$ there exists $\varepsilon_0$ such that for $\varepsilon < \varepsilon_0$ we have $\exp[-\gamma(M - N)] \leq c\, (\bar\rho - \rho)^2$. Then for $\varepsilon < \varepsilon_0$, in view of (54), we have

$$ (\bar\rho - \rho)\, k - \sqrt{2\bar\rho k}\, \sup_{\theta \in \Theta(\alpha,L)} \frac{\|\bar\theta'_k\|}{\varepsilon} \geq \frac{(\bar\rho - \rho)\, k}{2} $$

and, therefore,

$$ \sup_{\theta \in \Theta(\alpha,L)} \sum_{k=\widetilde W + M}^{\infty} \exp(\rho k)\, r_k\, \mathbf{E}\, \xi_k^2\, I\big( \bar y_k^2 \geq 2\varepsilon^2 \bar\rho k \big) \leq \sum_{k=\widetilde W + M}^{\infty} \Big( \frac{2\bar\rho k e^2}{3 + 2N} \Big)^{N+3/2} r_k \exp\Big( -\frac{(\bar\rho - \rho)\, k}{2} \Big) $$
$$ \leq C \Big( \frac{2\bar\rho (\widetilde W + M)\, e^2}{3 + 2N} \Big)^{N+3/2} r_{\widetilde W}\, (\bar\rho - \rho)^{-1} \exp\Big( -\frac{(\bar\rho - \rho)\, \widetilde W}{2} \Big), $$

where the last inequality follows from (33) of Lemma 2 with the subexponential sequence $w_k = k^{N+3/2} r_k$ and from Lemma 1 with $w_k = r_k$. This inequality and (52), (53) yield

$$ S_3 \leq C \big[ \exp(-\alpha M)\, \psi_\varepsilon(\alpha, L) + (\bar\rho - \rho)^{-1} \varepsilon^2 r_{\widetilde W} \big] \leq C\, \psi_\varepsilon(\alpha, L) \big[ \exp(-\alpha M) + \big( (\bar\rho - \rho)\, \widetilde W \big)^{-1} \big]. $$

Combining this result with (37), (38), and (51) we find

$$ \sup_{\theta \in \Theta(\alpha,L)} \frac{R_\varepsilon(\tilde\theta, \theta)}{\psi_\varepsilon(\alpha, L)} \leq V (1+\delta)^3 + C \Big[ \exp(-\alpha M) + \exp(-\rho M) + \exp\Big( -\frac{N}{2} \min(\alpha, \rho) \Big) + \frac{N + \exp(\rho M)}{\widetilde W} + \big( (\bar\rho - \rho)\, \widetilde W \big)^{-1} \Big]. $$
It remains to take limits as $\varepsilon \to 0$, and then as $\delta \to 0$, using the definition of $M$ in (56) and the definition of $V$. This completes the proof of (26).

P r o o f o f T h e o r e m 2. We follow the lines of the proof of Theorem 1 with $M = 2N$ (cf. (56)). The argument preceding (56) is true for any fixed $N$ and $\bar\rho > \rho$, and it remains intact. Inspection of the proof of Theorem 1 after (56) shows that the choice of $N$ and $\bar\rho$ defined in Theorem 2 is sufficient to get (27).

4.2. Proof of Theorem 3. For an integer $M$, consider the set

$$ \Theta^*_M(\alpha, L) = \{ \theta \in \Theta^*(\alpha, L)\colon \theta_k = 0 \ \text{for } |k| > M \} $$

and define

$$ A^*_M(\alpha, L) = \sup_{\theta \in \Theta^*_M(\alpha,L)} \sum_{k=-M}^{M} \exp(\rho k)\, \theta_k^2. $$

Lemma 4. For any $\alpha > 0$, $L > 0$,

$$ \lim_{M \to \infty} A^*_M(\alpha, L) = A^*(\alpha, L). \eqno(57) $$

P r o o f. Fix $\alpha > 0$, $L > 0$, and omit for brevity the indication of $\alpha$ and $L$ in brackets for $A^*_M$, $A^*$, $\Theta^*$, $\Theta^*_M$, $E^*$. Obviously, $A^*_M \leq A^*$, and we have to show only that

$$ \liminf_{M \to \infty} A^*_M \geq A^*. \eqno(58) $$

We first prove that

$$ A^*_M \geq Q \sup_{\theta \in \Theta^*} \sum_{k=-M}^{M} \exp(\rho k)\, \theta_k^2, \eqno(59) $$

where $Q = E^*/(E^* + \exp(-\gamma M))$. To do this, it suffices to show that for any $\theta \in \Theta^*$ there exists $\theta' \in \Theta^*_M$ such that

$$ \sum_{k=-M}^{M} \exp(\rho k)\, (\theta'_k)^2 \geq Q \sum_{k=-M}^{M} \exp(\rho k)\, \theta_k^2. \eqno(60) $$

If $\theta \in \Theta^*_M$, this inequality is obvious (one takes $\theta' = \theta$). If $\theta \in \Theta^* \setminus \Theta^*_M$, we have $S = \sum_{|k| > M} \theta_k^2 > 0$. Also, $S \leq 1$ since $\theta \in \Theta^*$. Define $\theta'$ by

$$ \theta'_{-M} = Q^{1/2} \big( \theta_{-M}^2 + S \big)^{1/2}, \qquad \theta'_k = Q^{1/2}\, \theta_k, \quad -M < k \leq M, \qquad \theta'_k = 0, \quad |k| > M. $$
Clearly, $\theta' \in \Theta^*_M$, since $\sum_{k=-M}^{M} (\theta'_k)^2 \leq \sum_k \theta_k^2 \leq 1$ and

$$ \sum_{k=-M}^{M} \exp(\gamma k)\, (\theta'_k)^2 = Q \Big[ \sum_{k=-M}^{M} \exp(\gamma k)\, \theta_k^2 + S \exp(-\gamma M) \Big] \leq Q \Big[ \exp(-\gamma M) + \sum_{k=-\infty}^{\infty} \exp(\gamma k)\, \theta_k^2 \Big] \leq E^*. $$

On the other hand,

$$ \sum_{k=-M}^{M} \exp(\rho k)\, (\theta'_k)^2 = Q \Big[ \sum_{k=-M}^{M} \exp(\rho k)\, \theta_k^2 + S \exp(-\rho M) \Big] \geq Q \sum_{k=-M}^{M} \exp(\rho k)\, \theta_k^2. $$

This proves (60) and therefore (59). To finish the proof of (58) it remains to combine (59) with the following inequality:

$$ \sup_{\theta \in \Theta^*} \sum_{k=-M}^{M} \exp(\rho k)\, \theta_k^2 \geq A^* - \sup_{\theta \in \Theta^*} \Big( \sum_{k < -M} \exp(\rho k)\, \theta_k^2 + \sum_{k > M} \exp(\rho k)\, \theta_k^2 \Big) \geq A^* - \exp(-\rho M) - E^* \exp(-\alpha M). $$

Lemma 5. Under the assumptions of Theorem 3, for any $0 < \alpha_0 < \alpha$ and any $L_0 > 0$, $L > 0$ we have

$$ \liminf_{\varepsilon \to 0}\; \inf_{\hat\theta}\; \max\Big\{ \sup_{\theta \in \Theta(\alpha_0, L_0)} \frac{R_\varepsilon(\hat\theta, \theta)}{\psi_\varepsilon(\alpha_0, L_0)},\ \sup_{\theta \in \Theta(\alpha, L)} \frac{R_\varepsilon(\hat\theta, \theta)}{\psi_\varepsilon(\alpha, L)} \Big\} \geq 1 - \frac{\gamma_0}{\gamma}, \eqno(61) $$

where $\inf_{\hat\theta}$ denotes the infimum over all estimators and $\gamma_0 = \alpha_0 + \rho$, $\gamma = \alpha + \rho$.

P r o o f. We will write for brevity $\widetilde W_0 = \widetilde W(\alpha_0, L_0)$, $\widetilde W = \widetilde W(\alpha, L)$. Let $M$ be an integer satisfying $1 \leq M < \widetilde W_0$, and let $\delta \in (0, 1)$ be such that $1 - \widetilde W(1+\delta)/\widetilde W_0 > 0$. Such a choice of $\delta$ is possible for all $\varepsilon$ small enough, since under the assumption $r_k = q_k \equiv 1$ we have, in view of (21),

$$ \lim_{\varepsilon \to 0} \frac{\widetilde W}{\widetilde W_0} = \frac{\gamma_0}{\gamma} < 1. \eqno(62) $$

For $L' = L_0 \big[ 1 - \widetilde W(1+\delta)/\widetilde W_0 \big]$ set

$$ \Theta_{M,0} = \Big\{ \theta = (\theta_1, \theta_2, \ldots)\colon \sum_{k=\widetilde W_0 - M}^{\widetilde W_0 + M} \exp(\alpha_0 k)\, \theta_k^2 \leq L' \ \text{and} \ \theta_k = 0 \ \text{for } |k - \widetilde W_0| > M \Big\}. $$
Clearly, $\Theta_{M,0} \subset \Theta(\alpha_0, L_0)$, and, therefore,
$$r_\varepsilon \stackrel{\rm def}{=} \inf_{\hat\theta} \max\Bigl\{\sup_{\theta \in \Theta(\alpha_0, L_0)} \frac{R_\varepsilon(\hat\theta, \theta)}{\psi_\varepsilon(\alpha_0, L_0)},\; \sup_{\theta \in \Theta(\alpha, L)} \frac{R_\varepsilon(\hat\theta, \theta)}{\psi_\varepsilon(\alpha, L)}\Bigr\} \ge \inf_{\hat\theta} \max\Bigl\{\sup_{\theta \in \Theta_{M,0} \setminus \{0\}} \frac{R_\varepsilon(\hat\theta, \theta)}{\psi_\varepsilon(\alpha_0, L_0)},\; \frac{R_\varepsilon(\hat\theta, 0)}{\psi_\varepsilon(\alpha, L)}\Bigr\}, \tag{63}$$
where $0$ denotes the sequence $\theta$ with all the elements equal to 0. To handle the last expression, we use again a renormalization. Change the variables from $\theta_k$ to $\nu_k$ by setting, for $k \ge 1 - W_0$,
$$\nu_k = \frac{b_{k+W_0}\,\theta_{k+W_0}}{\varepsilon\sqrt{2\rho W_0}} = \frac{\theta_{k+W_0}\exp(\rho(k+W_0))}{\varepsilon\sqrt{2\rho W_0}},$$
and let $\hat\nu_k$ be obtained from $\hat\theta_k$ by the same transformation, thus defining a sequence $\hat\nu \in \Xi$. We will also write $P_\nu$, $E_\nu$ instead of $P_\theta$ and $E_\theta$, respectively. Clearly, $\theta \in \Theta_{M,0}$ if and only if $\nu = (\nu_k)$ belongs to the set
$$\Xi_{M,0} = \Bigl\{\nu \in \Xi\colon \sum_{k=-M}^{M} \exp(\gamma_0 k)\,\nu_k^2 \le \widetilde E \ \text{and}\ \nu_k = 0 \ \text{for}\ |k| > M\Bigr\},$$
where $\widetilde E = E(\alpha_0, L_0)\,[1 - W(1+\delta)/W_0]$. With this notation,
$$\frac{R_\varepsilon(\hat\theta, \theta)}{\psi_\varepsilon(\alpha_0, L_0)} = \frac{1}{A(\alpha_0, L_0)}\Bigl[\sum_{k=-M}^{M} \exp(\rho k)\,E_\nu(\hat\nu_k - \nu_k)^2 + \sum_{|k| > M} \exp(\rho k)\,E_\nu \hat\nu_k^2\Bigr] \ge \frac{1}{A(\alpha_0, L_0)} \sum_{k=-M}^{M} \exp(\rho k)\,E_\nu(\hat\nu_k - \nu_k)^2,$$
which entails, together with (63), that
$$r_\varepsilon \ge \inf_{\hat\nu}\, \sup_{\nu \in \Xi_{M,0} \setminus \{0\}} \frac{(1 - \delta_0)^{-2}}{A(\alpha_0, L_0)}\Bigl(\sum_{k=-M}^{M} \exp(\rho k)\,\nu_k^2\Bigr) D_\varepsilon(\nu), \tag{64}$$
where $0 < \delta_0 < 1$,
$$D_\varepsilon(\nu) = \max\bigl\{E_\nu\, d_\nu^2(\hat\nu, \nu),\; \lambda_\varepsilon\, E_0\, d_\nu^2(\hat\nu, 0)\bigr\}, \qquad \lambda_\varepsilon = \frac{\psi_\varepsilon(\alpha_0, L_0)}{\psi_\varepsilon(\alpha, L)},$$
and $d_\nu(\nu^{(1)}, \nu^{(2)})$, for a fixed $\nu \in \Xi \setminus \{0\}$, denotes the distance between two sequences $\nu^{(1)} \in \Xi$ and $\nu^{(2)} \in \Xi$ defined by
$$d_\nu^2(\nu^{(1)}, \nu^{(2)}) = (1 - \delta_0)^2\, \frac{\sum_{k=-M}^{M} \exp(\rho k)\,(\nu^{(1)}_k - \nu^{(2)}_k)^2}{\sum_{k=-M}^{M} \exp(\rho k)\,\nu_k^2}.$$
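The normalization built into the distance $d_\nu$ is a purely algebraic device: multiplying $E_\nu d_\nu^2(\hat\nu,\nu)$ by $(1-\delta_0)^{-2}\sum_k e^{\rho k}\nu_k^2$ recovers the weighted squared error, and $d_\nu(\nu,0)=1-\delta_0$ identically. A short numerical check (with arbitrary illustrative sequences) makes both identities visible.

```python
import math

# Consistency check of the normalization used in (64): by definition of d_nu,
# (1-delta0)^{-2} * (sum e^{rho k} nu_k^2) * d_nu^2(hat, nu) equals the weighted
# squared error sum e^{rho k} (hat_k - nu_k)^2.  The sequences are arbitrary.
rho, delta0, M = 0.5, 0.1, 5
nu = {k: math.sin(k + 1) for k in range(-M, M + 1)}
hat = {k: nu[k] + 0.3 * math.cos(k) for k in range(-M, M + 1)}

w = {k: math.exp(rho * k) for k in range(-M, M + 1)}
norm2 = sum(w[k] * nu[k] ** 2 for k in nu)
weighted_err = sum(w[k] * (hat[k] - nu[k]) ** 2 for k in nu)
d2 = (1 - delta0) ** 2 * weighted_err / norm2

print(abs((1 - delta0) ** -2 * norm2 * d2 - weighted_err) < 1e-9)

# And d_nu(nu, 0) = 1 - delta0, as used when applying Theorem 6 of [18]:
d0 = ((1 - delta0) ** 2 * norm2 / norm2) ** 0.5
print(abs(d0 - (1 - delta0)) < 1e-12)
```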
In particular, $d_\nu(\nu, 0) = 1 - \delta_0$, which allows us to apply Theorem 6 of [18], resulting in
$$D_\varepsilon(\nu) \ge \frac{\tau\lambda_\varepsilon\,\delta_0^2\,(1 - 2\delta_0)^2}{(1 - 2\delta_0)^2 + \tau\lambda_\varepsilon\,\delta_0^2}\; P_\nu\Bigl(\frac{dP_0}{dP_\nu} \ge \tau\Bigr) \tag{65}$$
for any $\tau > 0$. Now, $P_\nu$ is the Gaussian measure corresponding to the observations $Y_k = \nu_k + \xi_{k+W_0}/\sqrt{2\rho W_0}$ (here $Y_k$ is the value obtained from $y_k$ by the same transformation as $\nu_k$ is obtained from $\theta_k$). Thus,
$$P_\nu\Bigl(\frac{dP_0}{dP_\nu} \ge \tau\Bigr) = P\Bigl(\exp\bigl(\xi\sqrt{2\rho W_0}\,\|\nu\| - \rho W_0\,\|\nu\|^2\bigr) \ge \tau\Bigr),$$
where $\|\nu\| = (\sum_{k=-M}^{M} \nu_k^2)^{1/2}$ and $\xi$ is the standard Gaussian random variable. Set $\tau = \exp(\rho(W - W_0) + \delta\rho W/2)$. Then, for $\|\nu\|^2 \le 1 - W(1+\delta)/W_0$ we get
$$P_\nu\Bigl(\frac{dP_0}{dP_\nu} \ge \tau\Bigr) = P\Bigl(\xi \ge \frac{\log\tau + \rho W_0\,\|\nu\|^2}{\sqrt{2\rho W_0}\,\|\nu\|}\Bigr) \ge P\Bigl(\xi \ge -\frac{\delta\rho W}{2\sqrt{2\rho W_0}\,\|\nu\|}\Bigr) \ge P\Bigl(\xi \ge -\frac{\delta\rho W}{2\sqrt{2\rho W_0}}\Bigl[1 - \frac{W(1+\delta)}{W_0}\Bigr]^{-1/2}\Bigr) \stackrel{\rm def}{=} p_\varepsilon = 1 + o(1) \quad \text{as } \varepsilon \to 0, \tag{66}$$
in view of (62) and since $W \to \infty$ as $\varepsilon \to 0$. Hence, introducing the set
$$\widetilde\Xi_{M,0} = \Bigl\{\nu \in \Xi_{M,0}\colon \|\nu\|^2 \le 1 - \frac{W(1+\delta)}{W_0}\Bigr\}$$
and using (64)–(66) we get
$$r_\varepsilon \ge \frac{p_\varepsilon\,\tau\lambda_\varepsilon\,\delta_0^2\,(1 - 2\delta_0)^2}{(1 - \delta_0)^2\,\bigl[(1 - 2\delta_0)^2 + \tau\lambda_\varepsilon\,\delta_0^2\bigr]}\,\bigl(A(\alpha_0, L_0)\bigr)^{-1} \sup_{\nu \in \widetilde\Xi_{M,0} \setminus \{0\}} \sum_{k=-M}^{M} \exp(\rho k)\,\nu_k^2. \tag{67}$$
Next, note that
$$\sup_{\nu \in \widetilde\Xi_{M,0} \setminus \{0\}} \sum_{k=-M}^{M} \exp(\rho k)\,\nu_k^2 = \Bigl[1 - \frac{W(1+\delta)}{W_0}\Bigr] \sup_{\nu' \in \Theta_M(\alpha_0, L_0) \setminus \{0\}} \sum_{k=-M}^{M} \exp(\rho k)\,(\nu'_k)^2 = \Bigl[1 - \frac{W(1+\delta)}{W_0}\Bigr] A_M(\alpha_0, L_0). \tag{68}$$
In fact, the first equality in (68) is easy to get using the change of variables $\nu_k = [1 - W(1+\delta)/W_0]^{1/2}\,\nu'_k$, whereas the second one is due to the fact that
the supremum of the sum over $\Theta_M(\alpha_0, L_0) \setminus \{0\}$ is equal to the supremum over $\Theta_M(\alpha_0, L_0)$. Finally, for $\varepsilon$ small enough, due to the definition of $\psi_\varepsilon(\cdot, \cdot)$ and (62) we have $\lambda_\varepsilon \ge C\exp(\rho(W_0 - W))$, and thus
$$\tau\lambda_\varepsilon \ge C\exp\Bigl(\frac{\delta\rho W}{2}\Bigr) \to \infty \quad \text{as } \varepsilon \to 0. \tag{69}$$
To finish the proof of the lemma it remains to substitute (68) into (67) and to take the limits of the resulting inequality first as $M \to \infty$ (using Lemma 4), then as $\varepsilon \to 0$ (using (66) and (69)), and finally as $\delta_0 \to 0$ and $\delta \to 0$.

P r o o f  o f  T h e o r e m 3. The assumption of the theorem guarantees that there exists $\delta \in (0, 1)$ such that
$$\sup_{\theta \in \Theta(\alpha_0, L_0)} \frac{R_\varepsilon(\hat\theta, \theta)}{\psi_\varepsilon(\alpha_0, L_0)} \le 1 - \delta$$
for all $\varepsilon$ small enough. Substituting this into (61) and choosing in (61) the value $\alpha' = 4\gamma_0/\delta - \rho > \alpha_0$, we get for $\alpha > \alpha'$ and for sufficiently small $\varepsilon$
$$\max\Bigl\{1 - \delta,\; \sup_{\theta \in \Theta(\alpha, L)} \frac{R_\varepsilon(\hat\theta, \theta)}{\psi_\varepsilon(\alpha', L)}\Bigr\} \ge 1 - \frac{\delta}{4} + o(1) \ge 1 - \frac{\delta}{2}$$
(here we use that only the point $\theta = 0$, which belongs to $\Theta(\alpha, L)$, enters the second component of the maximum in the proof of Lemma 5). Thus, for $\varepsilon$ small enough,
$$\sup_{\theta \in \Theta(\alpha, L)} \frac{R_\varepsilon(\hat\theta, \theta)}{\psi_\varepsilon(\alpha, L)} \ge \Bigl(1 - \frac{\delta}{2}\Bigr) \frac{\psi_\varepsilon(\alpha', L)}{\psi_\varepsilon(\alpha, L)} \ge \frac{\psi_\varepsilon(\alpha', L)}{2\,\psi_\varepsilon(\alpha, L)}. \tag{70}$$
On the other hand, it follows from [9] that the minimax risk for $\Theta(\alpha_0, L_0)$ satisfies
$$\inf_{\hat\theta}\, \sup_{\theta \in \Theta(\alpha_0, L_0)} R_\varepsilon(\hat\theta, \theta) \le C\,r_\varepsilon(\alpha_0, L_0), \tag{71}$$
and, in view of (20) and (21), for $r_k = q_k \equiv 1$ we have
$$\frac{r_\varepsilon(\alpha_0, L_0)}{\psi_\varepsilon(\alpha_0, L_0)} \le C\Bigl(\log\frac{1}{\varepsilon}\Bigr)^{-\alpha_0/\gamma_0}.$$
Using the last inequality and (70), (71), we get, as $\varepsilon \to 0$,
$$\frac{\sup_{\theta \in \Theta(\alpha, L)} R_\varepsilon(\hat\theta, \theta)\,/\,\psi_\varepsilon(\alpha, L)}{\sup_{\theta \in \Theta(\alpha_0, L_0)} R_\varepsilon(\hat\theta, \theta)\,/\,\psi_\varepsilon(\alpha_0, L_0)} \ge C\Bigl(\log\frac{1}{\varepsilon}\Bigr)^{\alpha_0/\gamma_0} \frac{\psi_\varepsilon(\alpha', L)}{\psi_\varepsilon(\alpha, L)} \ge C\Bigl(\log\frac{1}{\varepsilon}\Bigr)^{\alpha_0/\gamma_0} \exp\bigl(\rho\,[W(\alpha', L) - W(\alpha, L)]\bigr) \ge C\Bigl(\log\frac{1}{\varepsilon}\Bigr)^{\alpha_0/\gamma_0 + \rho(1/\gamma' - 1/\gamma)} \varepsilon^{-\rho(1/\gamma' - 1/\gamma)} \to \infty,$$
where $\gamma' = \alpha' + \rho < \gamma$. This completes the proof of Theorem 3.
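Two explicit choices drive the final bounds: $\tau = \exp(\rho(W - W_0) + \delta\rho W/2)$ in the proof of Lemma 5 and $\alpha' = 4\gamma_0/\delta - \rho$ in the proof of Theorem 3. Both rest on short arithmetic identities that can be checked directly; the numbers below are hypothetical illustrations, not values from the paper.

```python
import math

# (i) Choice of tau in the proof of Lemma 5: at ||nu||^2 = 1 - W(1+delta)/W0,
# the numerator log(tau) + rho*W0*||nu||^2 collapses to -delta*rho*W/2 exactly,
# so the Gaussian threshold in (66) is negative (and grows in magnitude with W).
rho, delta = 0.5, 0.2
W0 = 200.0
W = 0.6 * W0                     # mimics W/W0 -> gamma_0/gamma < 1, cf. (62)
B = 1 - W * (1 + delta) / W0     # the bound on ||nu||^2 defining tilde Xi_{M,0}
assert B > 0

log_tau = rho * (W - W0) + delta * rho * W / 2
num_at_B = log_tau + rho * W0 * B
print(abs(num_at_B + delta * rho * W / 2) < 1e-9)                # exact cancellation
print(num_at_B / (math.sqrt(2 * rho * W0) * math.sqrt(B)) < 0)   # threshold < 0

# (ii) Choice of alpha' in the proof of Theorem 3 (gamma_0 = alpha_0 + rho):
alpha_0, dlt = 1.0, 0.3
gamma_0 = alpha_0 + rho
alpha_p = 4 * gamma_0 / dlt - rho
gamma_p = alpha_p + rho
print(abs(gamma_p - 4 * gamma_0 / dlt) < 1e-9)                   # gamma' = 4*gamma_0/delta
print(abs((1 - gamma_0 / gamma_p) - (1 - dlt / 4)) < 1e-12)      # RHS of (61) is 1 - delta/4

# Since 1 - delta < 1 - delta/2, a bound max(1 - delta, x) >= 1 - delta/2 can
# only be carried by the second argument: for x < 1 - delta/2 it fails.
print(max(1 - dlt, 0.8) < 1 - dlt / 2)
```

The last check is the one-line logic that converts the two-class bound of Lemma 5 into a statement about the risk on $\Theta(\alpha, L)$ alone.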
REFERENCES

1. Bakushinskii A. B. On the construction of regularizing algorithms under random noise. Dokl. Akad. Nauk SSSR, 1969, v. 189, № 2.
2. Cavalier L., Golubev G. K., Picard D., Tsybakov A. B. Oracle inequalities for inverse problems. Ann. Statist., 2002, v. 30, № 3.
3. Cavalier L., Tsybakov A. B. Sharp adaptation for inverse problems with random noise. Probab. Theory Related Fields, 2002, v. 123, № 3.
4. Efromovich S. Robust and efficient recovery of a signal passed through a filter and then contaminated by non-Gaussian noise. IEEE Trans. Inform. Theory, 1997, v. 43, № 4.
5. Efromovich S., Koltchinskii V. On inverse problems with unknown operators. IEEE Trans. Inform. Theory, 2001, v. 47, № 7.
6. Ermakov M. S. Minimax estimation of the solution of an ill-posed convolution type problem. Problemy Peredachi Informatsii, v. 25, № 3.
7. Goldenshluger A., Pereverzev S. V. Adaptive estimation of linear functionals in Hilbert scales from indirect white noise observations. Probab. Theory Related Fields, 2000, v. 118, № 2.
8. Golubev G. K., Khasminskii R. Z. A statistical approach to some inverse problems for partial differential equations. Problemy Peredachi Informatsii, 1999, v. 35.
9. Golubev G. K., Khasminskii R. Z. A statistical approach to the Cauchy problem for the Laplace equation. State of the Art in Probability and Statistics. Festschrift for Willem R. van Zwet. Ed. by M. de Gunst, C. Klaassen, A. van der Vaart. Beachwood, OH: Institute of Mathematical Statistics, 2000. (IMS Lecture Notes — Monograph Series, v. 36.)
10. Hall P., Kerkyacharian G., Picard D. Block threshold rules for curve estimation using kernel and wavelet methods. Ann. Statist., 1998, v. 26, № 3.
11. Johnstone I. M. Wavelet shrinkage for correlated data and inverse problems: adaptivity results. Statist. Sinica, 1999, v. 9, № 1.
12. Lepskii O. V. On a problem of adaptive estimation in Gaussian white noise. Teor. Veroyatnost. i Primenen., 1990, v. 35, № 3.
13. Mair B., Ruymgaart F. H. Statistical inverse estimation in Hilbert scales. SIAM J. Appl. Math., 1996, v. 56, № 5.
14. Mathé P., Pereverzev S. V. Optimal discretization and degrees of ill-posedness for inverse estimation in Hilbert scales in the presence of random noise. Preprint 469, WIAS, Berlin.
15. Natterer F. Error bounds for Tikhonov regularization in Hilbert scales. Appl. Anal., 1984, v. 18.
16. Pensky M., Vidakovic B. Adaptive wavelet estimator for nonparametric density deconvolution. Ann. Statist., 1999, v. 27, № 6.
17. Sudakov V. N., Khalfin L. A. A statistical approach to the correctness of problems of mathematical physics. Dokl. Akad. Nauk SSSR, 1964, v. 157, № 5.
18. Tsybakov A. B. Pointwise and sup-norm sharp adaptive estimation of functions on the Sobolev classes. Ann. Statist., 1998, v. 26.
19. Tsybakov A. B. On the best rate of adaptive estimation in some inverse problems. C. R. Acad. Sci. Paris, Sér. I Math., 2000, v. 330.

Received 23.VII.2002
More informationSome lecture notes for Math 6050E: PDEs, Fall 2016
Some lecture notes for Math 65E: PDEs, Fall 216 Tianling Jin December 1, 216 1 Variational methods We discuss an example of the use of variational methods in obtaining existence of solutions. Theorem 1.1.
More informationChapter 8 Integral Operators
Chapter 8 Integral Operators In our development of metrics, norms, inner products, and operator theory in Chapters 1 7 we only tangentially considered topics that involved the use of Lebesgue measure,
More information1. Introduction 1.1. Model. In this paper, we consider the following heteroscedastic white noise model:
Volume 14, No. 3 005), pp. Allerton Press, Inc. M A T H E M A T I C A L M E T H O D S O F S T A T I S T I C S BAYESIAN MODELLING OF SPARSE SEQUENCES AND MAXISETS FOR BAYES RULES V. Rivoirard Equipe Probabilités,
More information