
On Minimax Filtering over Ellipsoids

Eduard N. Belitser and Boris Y. Levit
Mathematical Institute, University of Utrecht
Budapestlaan 6, 3584 CD Utrecht, The Netherlands

The problem of estimating the mean of an observed Gaussian infinite-dimensional vector with independent components is studied, when the vector is known to lie in an $l_2$-ellipsoid and the variances of the components need not be equal. Under some general assumptions on the ellipsoid we provide the second-order behaviour of the minimax risk.

Key words: Gaussian noise, ellipsoid, minimax linear risk, asymptotically minimax estimator, second-order asymptotics.

1 Introduction

Pinsker (1980) initiated the study of minimax estimation procedures for the filtering problem in Gaussian noise, which can be described, in equivalent terms, as

(1)  $Y_k = \theta_k + \varepsilon\sigma_k\xi_k, \quad k = 1, 2, \dots$

Here $\sigma_k \ge 0$, $k = 1, 2, \dots$, are given, the $\xi_k$'s are independent standard Gaussian random variables, $\varepsilon > 0$ is a small parameter, and $\theta = (\theta_1, \theta_2, \dots)$ is the unknown infinite-dimensional parameter of interest, $\theta \in \Theta$, where

(2)  $\Theta = \Theta(Q) = \bigl\{\theta : \sum_k a_k^2\theta_k^2 \le Q\bigr\}$

and $(a_k,\ k = 1, 2, \dots)$ is a nonnegative sequence converging to infinity.

The observation model (1) arises as the limiting experiment in many other estimation problems. This model has been actively pursued recently; see [1], [2], [5], [6] and further references therein. These papers amply demonstrate the importance of asymptotically minimax estimators and their practical relevance. In [9] it was shown that the quadratic minimax risk over the ellipsoids (2) coincides asymptotically with the minimax risk within the class of linear estimators, and a procedure for obtaining the minimax linear estimators and evaluating their risks was described.
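As a concrete illustration of the observation model (1), the following minimal Monte Carlo sketch (toy values of $\varepsilon$, $\sigma_k$, $\theta$ and of the shrinkage weights; a hypothetical illustration, not code from the paper) checks the elementary componentwise risk identity $E_\theta(x_kY_k-\theta_k)^2 = \varepsilon^2\sigma_k^2x_k^2 + (1-x_k)^2\theta_k^2$ for a linear estimator $x_kY_k$:

```python
import numpy as np

# Monte Carlo sketch of the sequence model Y_k = theta_k + eps * sigma_k * xi_k.
# All parameter values below are toy choices for illustration only.
rng = np.random.default_rng(0)
eps = 0.1
k = np.arange(1, 6)
sigma = np.sqrt(k)            # unequal noise levels are allowed in model (1)
theta = 1.0 / k               # a fixed parameter value
x = np.full(5, 0.7)           # a linear estimator theta_hat_k = x_k * Y_k

n = 400_000
xi = rng.standard_normal((n, 5))
y = theta + eps * sigma * xi
risk_mc = np.mean((x * y - theta) ** 2, axis=0)
risk_exact = eps**2 * sigma**2 * x**2 + (1 - x) ** 2 * theta**2
print(np.max(np.abs(risk_mc - risk_exact)))
```

The two risk vectors agree up to Monte Carlo error, which is the variance/squared-bias decomposition underlying the minimax linear theory below.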

In this article, developing further the approach of [9], we describe the second-order behaviour of the minimax estimators and of the quadratic minimax risk for the model (1)–(2). These results are illustrated by a number of examples.

The authors are grateful to G.K. Golubev for a number of comments resulting in the improvement of some results of the paper and their better presentation.

2 Minimax linear estimation

Let the model of observations be given by (1). For the sake of simplicity we assume that the sequence $(a_k,\ k=1,2,\dots)$ in (2) is positive and monotone. In this section we investigate the minimax linear risk, which will be shown later to be asymptotically equal, under some conditions, to the minimax risk.

Denote $x = (x_1, x_2, \dots)$ and introduce the class of linear estimators

(3)  $\hat\theta = \hat\theta(x) = (\hat\theta_1, \hat\theta_2, \dots), \quad \hat\theta_k = x_k Y_k, \ k = 1, 2, \dots$

Define the risk of a linear estimator

(4)  $R_\varepsilon(x, \theta) = E_\theta\|\hat\theta(x) - \theta\|^2$

and the minimax linear risk

(5)  $r_\varepsilon^l = r_\varepsilon^l(\Theta) = \inf_x \sup_{\theta\in\Theta} R_\varepsilon(x,\theta),$

where $\|\theta\|^2 = \sum_k \theta_k^2$. To formulate the result about the minimax linear risk, we introduce some notation. Let $c$ be a solution of the equation

(6)  $\sum_k \varepsilon^2\sigma_k^2 a_k (1 - c a_k)_+ = cQ$

and

(7)  $d = d_\varepsilon(\Theta) = \sum_k \varepsilon^2\sigma_k^2 (1 - c a_k)_+.$

Here $b_+$ denotes the nonnegative part of $b$. The following theorem is due to Pinsker [9], but we give its elementary proof for the sake of completeness.

Theorem 1. Let $c$ and $d$ be defined by (6) and (7). Then

(8)  $\inf_x \sup_{\theta\in\Theta} R_\varepsilon(x,\theta) = \sup_{\theta\in\Theta}\inf_x R_\varepsilon(x,\theta);$

the saddle point $(\tilde x, \tilde\theta)$ for the problem (5) is given by

(9)  $\tilde x_k = (1 - c a_k)_+,$
(10)  $\tilde\theta_k^2 = \varepsilon^2\sigma_k^2 (1 - c a_k)_+/(c a_k),$

and the linear minimax risk satisfies the following equations:

(11)  $r_\varepsilon^l = d = \sup_{\theta\in\Theta} \sum_k \frac{\varepsilon^2\sigma_k^2\theta_k^2}{\theta_k^2 + \varepsilon^2\sigma_k^2}.$

Proof. It follows immediately that the risk of a linear estimator has the form

(12)  $R_\varepsilon(x,\theta) = \sum_k \bigl[\varepsilon^2\sigma_k^2 x_k^2 + (1 - x_k)^2\theta_k^2\bigr].$

Since, according to (6), $Qc^2 = c\sum_k \varepsilon^2\sigma_k^2 a_k(1 - c a_k)_+$, and since $(1-\tilde x_k)/a_k \le c$ for every $k$,

(13)  $\inf_x \sup_{\theta\in\Theta} R_\varepsilon(x,\theta) \le \sup_{\theta\in\Theta} R_\varepsilon(\tilde x,\theta) \le \sum_k\varepsilon^2\sigma_k^2\tilde x_k^2 + Q\sup_k\,(1-\tilde x_k)^2 a_k^{-2} \le \sum_k\varepsilon^2\sigma_k^2(1-ca_k)_+\bigl[(1-ca_k)_+ + ca_k\bigr] = \sum_k\varepsilon^2\sigma_k^2(1-ca_k)_+ = d.$

Note now that equation (6) can also be rewritten as

(14)  $\sum_k a_k^2\tilde\theta_k^2 = Q,$

so that $\tilde\theta \in \Theta$. Minimizing (12) over $x$ for fixed $\theta$ gives $\inf_x R_\varepsilon(x,\theta) = \sum_k \varepsilon^2\sigma_k^2\theta_k^2/(\theta_k^2+\varepsilon^2\sigma_k^2)$. Taking into account (10), (14) and (11), we obtain

$\sup_{\theta\in\Theta}\inf_x R_\varepsilon(x,\theta) \ge \inf_x R_\varepsilon(x,\tilde\theta) = \sum_k \frac{\varepsilon^2\sigma_k^2\tilde\theta_k^2}{\tilde\theta_k^2+\varepsilon^2\sigma_k^2} = d.$

Since always $\sup\inf \le \inf\sup$, combining this with (13) completes the proof of Theorem 1.

Remark 1. The equations (6) and (10) can be obtained by the Lagrange multiplier method for the problem of maximizing the functional $\sum_k \varepsilon^2\sigma_k^2\theta_k^2/(\theta_k^2+\varepsilon^2\sigma_k^2)$ subject to the convex constraint (2).

Remark 2. Due to the monotonicity of $(a_k,\ k=1,2,\dots)$, $d = \sum_{k=1}^N \varepsilon^2\sigma_k^2(1-ca_k)$, where

(15)  $N = N_\varepsilon = \max\{k : a_k \le c^{-1}\}.$

One can easily derive the explicit formulas for $c$ and $N$ (cf. [3]):

$c = c_\varepsilon = \frac{\sum_{k=1}^N \varepsilon^2\sigma_k^2 a_k}{Q + \sum_{k=1}^N \varepsilon^2\sigma_k^2 a_k^2}.$
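For readers who wish to experiment, the recipe of Remark 2 can be sketched numerically as follows (a hypothetical illustration with toy choices $a_k = k$, $\sigma_k = 1$, $\varepsilon^2 = 10^{-4}$, $Q = 1$; the function name and truncation level are ours, not the paper's):

```python
import numpy as np

def pinsker_linear_risk(a, sigma2, eps2, Q):
    """Solve (6) for c via the explicit formula of Remark 2, find
    N = max{k : a_k <= 1/c}, and return (c, N, d) with d as in (7).
    `a` must be positive and increasing; sigma2 holds the sigma_k^2."""
    s1 = np.cumsum(eps2 * sigma2 * a)        # eps^2 * sum sigma_k^2 a_k
    s2 = np.cumsum(eps2 * sigma2 * a * a)    # eps^2 * sum sigma_k^2 a_k^2
    for n in range(1, len(a) + 1):
        c = s1[n - 1] / (Q + s2[n - 1])
        # N is characterized by a_N <= 1/c < a_{N+1}
        if a[n - 1] <= 1.0 / c and (n == len(a) or a[n] > 1.0 / c):
            d = eps2 * np.sum(sigma2 * np.clip(1.0 - c * a, 0.0, None))
            return c, n, d
    raise ValueError("increase the truncation level")

# Toy example: a_k = k, sigma_k = 1, eps^2 = 1e-4, Q = 1.
a = np.arange(1.0, 1001.0)
sigma2 = np.ones_like(a)
c, N, d = pinsker_linear_risk(a, sigma2, 1e-4, 1.0)
# Equation (6) should hold: sum eps^2 sigma_k^2 a_k (1 - c a_k)_+ = c Q.
lhs = np.sum(1e-4 * sigma2 * a * np.clip(1.0 - c * a, 0.0, None))
print(c, N, d, abs(lhs - c * 1.0))
```

On this toy problem the residual of equation (6) vanishes up to rounding, which confirms that the explicit formula and the characterization of $N$ are mutually consistent.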

Note that (15) entails that

$N = \max\Bigl\{l : \sum_{k=1}^l \varepsilon^2\sigma_k^2 a_k(a_l - a_k) \le Q\Bigr\},$

(16)  $0 \le c a_k \le 1, \quad k = 1, \dots, N.$

3 Asymptotically minimax estimation

In this section we investigate the asymptotic behaviour of the minimax risk with respect to all possible estimators. We define the minimax risk

(17)  $r_\varepsilon = r_\varepsilon(\Theta) = \inf_{\hat\theta}\sup_{\theta\in\Theta} E_\theta\|\hat\theta - \theta\|^2,$

where $\hat\theta$ is an arbitrary estimator based on $Y = (Y_1, Y_2, \dots)$.

In the proof of Proposition 1 we use the van Trees inequality [10, p. 72]. We now describe the version of this inequality which we use below. Let $dP_\theta(y)$, $y = (y_1, y_2, \dots)$, denote the distribution of the vector of observations $Y = (Y_1, Y_2, \dots)$ in (1), and let $\varphi(y_k; \theta_k)$ be the marginal (Gaussian) density of $Y_k$. Assume that a prior distribution $d\lambda(\theta)$, $\theta = (\theta_1, \theta_2, \dots)$, is defined according to which the $\theta_k$ are independent random variables with corresponding densities $\lambda_k(x)$. Let, for all $k$, $\lambda_k(x)$ be absolutely continuous, with finite Fisher information

$I(\lambda_k) = \int \frac{(\lambda_k'(x))^2}{\lambda_k(x)}\,dx.$

Assume also that $\lambda_k(x)$ is positive inside a bounded interval of the real line and zero outside it. We write $E$ for the expectation with respect to the joint distribution of $Y$ and $\theta$. Then, according to the van Trees inequality (cf. [4] and [5]), the Bayes risk $E(\hat\theta_k - \theta_k)^2$ admits the lower bound

(18)  $E(\hat\theta_k - \theta_k)^2 \ge \frac{1}{I_k + I(\lambda_k)},$

where $I_k = \varepsilon^{-2}\sigma_k^{-2}$ is the Fisher information about $\theta_k$ contained in the observation $Y_k$ and $\hat\theta_k = \hat\theta_k(Y)$. Since our setup here is slightly different from those of [4] and [5], we sketch a short proof of (18) below. Let

$A = \hat\theta_k - \theta_k, \qquad B = \frac{\partial}{\partial\theta_k}\log\bigl(\varphi(Y_k;\theta_k)\lambda_k(\theta_k)\bigr).$

Denote $Y^{(k)} = (Y_1, \dots, Y_{k-1}, Y_{k+1}, \dots)$, $\theta^{(k)} = (\theta_1, \dots, \theta_{k-1}, \theta_{k+1}, \dots)$, and let $dP^{(k)}(y^{(k)})$ and $d\lambda^{(k)}(\theta^{(k)})$ respectively be their distributions.

Use the Cauchy–Schwarz inequality

$EA^2 \ge (EAB)^2/EB^2.$

One can assume, without loss of generality, that $EA^2 < \infty$. Our assumptions permit integration by parts, and interchanging the order of integration in the following integral yields

$EAB = \int (\hat\theta_k - \theta_k)\,\frac{\partial}{\partial\theta_k}\bigl(\varphi(y_k;\theta_k)\lambda_k(\theta_k)\bigr)\,dy_k\,d\theta_k\,dP^{(k)}(y^{(k)})\,d\lambda^{(k)}(\theta^{(k)}) = 1.$

It remains to note that $EB^2 = I_k + I(\lambda_k)$.

The next theorem describes a lower bound for the minimax risk (17). The proof of this and of the following results of this section will be given in the Appendix.

Theorem 2. Let $(m_k,\ k = 1, 2, \dots)$ be a sequence such that, for some $\gamma > 0$,

(19)  $\sum_k a_k^2 m_k^2 + \Bigl(8\gamma\log\varepsilon^{-1}\sum_k a_k^4 m_k^4\Bigr)^{1/2} = Q.$

Then the following lower bound holds:

(20)  $r_\varepsilon \ge \sum_k \frac{\varepsilon^2\sigma_k^2 m_k^2}{m_k^2 + \varepsilon^2\sigma_k^2} + O(\varepsilon^{\gamma}), \quad \varepsilon \to 0.$

To derive a good lower bound one should, in principle, maximize the functional appearing in (20) under the restriction (19). However, the following theorem shows that, under a rather mild condition, this problem is asymptotically equivalent to the maximization problem (11), which has already been solved by Theorem 1. This implies, in particular, the asymptotic equivalence of the minimax risk and the minimax linear risk.

Theorem 3. Let $c$ and $N$ be defined by (6) and (15). If the condition

$\log\varepsilon^{-1}\,\frac{\sum_k a_k^2\,\varepsilon^4\sigma_k^4(1 - ca_k)_+^2}{\bigl(\sum_k a_k\,\varepsilon^2\sigma_k^2(1 - ca_k)_+\bigr)^2} = o(1), \quad \varepsilon \to 0,$

holds, then

$r_\varepsilon = \Bigl(\sum_{k=1}^N \varepsilon^2\sigma_k^2 - c\sum_{k=1}^N \varepsilon^2\sigma_k^2 a_k\Bigr)(1 + o(1)), \quad \varepsilon \to 0.$

The next proposition, although looking quite general, provides exact asymptotics of the minimax risk for a more restricted class of ellipsoids. In particular, it is convenient in applications where the sequence $(a_k,\ k = 1, 2, \dots)$ increases faster than $k^m$ for any $m > 0$. Since in such cases the limiting behaviour of the minimax linear risk $d$

typically does not depend on $Q$ (cf. Examples 4–5 in Section 4 below), this proposition leads also to the exact asymptotics of the minimax risk $r_\varepsilon$. In the context of curve estimation this corresponds to estimating "very smooth" functions, with rapidly decreasing Fourier coefficients (cf. [5]).

Proposition 1. Let $d_\varepsilon$ be defined by (7). Then

(21)  $d_\varepsilon(\Theta(Q/\pi^2)) \le r_\varepsilon \le d_\varepsilon(\Theta(Q)).$

Note that these lower and upper bounds for the minimax risk are nonasymptotic.

Corollary 1. Let $c$ and $N$ be defined by (6) and (15). If $\sum_k \sigma_k^2 < \infty$ and

$c\sum_{k=1}^N \varepsilon^2\sigma_k^2 a_k = o\Bigl(\sum_{k=N+1}^\infty \varepsilon^2\sigma_k^2\Bigr), \quad \varepsilon\to 0,$

then the following asymptotic expansion holds:

$r_\varepsilon = \varepsilon^2\sum_{k=1}^\infty \sigma_k^2 - \varepsilon^2\sum_{k=N+1}^\infty \sigma_k^2\,(1 + o(1)), \quad \varepsilon \to 0.$

Remark 3. There are two terms in the asymptotic expression for the minimax risk in Theorem 3. They can be either of the same order, or the second term can be of smaller order than the first. In the latter case Theorem 3 provides at least two terms of the asymptotic expansion of the minimax risk (cf. Example 1 below).

Remark 4. Recall that the sequence $(a_k,\ k=1,2,\dots)$ was assumed positive. The results remain valid under the weaker assumption $a_k \ge 0$, $k = 1, 2, \dots$ (cf. [2]).

4 Examples

The results presented below illustrate the assertions of the previous section.

Example 1. Consider model (1)–(2) with $a_k = k^\alpha$, $\alpha > 0$, $\sigma_k^2 = k^{\beta-1}$, $\alpha + \beta > 0$. In this case it is easy to prove that $ca_N \to 1$ as $\varepsilon\to 0$. Using this and (6), one can calculate

$N = \Bigl(\frac{(\alpha+\beta)(2\alpha+\beta)Q}{\alpha\varepsilon^2}\Bigr)^{1/(2\alpha+\beta)}(1 + o(1)), \qquad c = \Bigl(\frac{\alpha\varepsilon^2}{(\alpha+\beta)(2\alpha+\beta)Q}\Bigr)^{\alpha/(2\alpha+\beta)}(1 + o(1)).$

Here we make use of the asymptotic relation

(22)  $\sum_{m=1}^M m^\delta = \frac{M^{\delta+1}}{\delta+1}(1 + o(1))$ as $M\to\infty$, $\delta > -1$.

Now one can easily verify the condition of Theorem 3. By applying Theorem 3, we derive the asymptotics of the minimax risk.

Case $\beta > 0$. The asymptotics (22) and the relations for $N$ and $c$ yield (cf. [9] for $\beta = 1$)

$r_\varepsilon = \varepsilon^{4\alpha/(2\alpha+\beta)}\Bigl(\frac{Q(\alpha+\beta)(2\alpha+\beta)}{\alpha}\Bigr)^{\beta/(2\alpha+\beta)}\frac{\alpha}{\beta(\alpha+\beta)}\,(1 + o(1)).$

In this case Theorem 3 gives only the first-order term of the minimax risk. Note also that although $(\sigma_k,\ k=1,2,\dots)$ can be increasing to infinity, the minimax risk still converges to zero.

Case $\beta = 0$. By using (22) and the asymptotics

(23)  $\sum_{k=1}^M k^{-1} = \log M + C_e + o(1)$ as $M\to\infty$,

one obtains

$r_\varepsilon = \alpha^{-1}\varepsilon^2\log\varepsilon^{-1} + \varepsilon^2\bigl(C_e + (2\alpha)^{-1}\log(2\alpha Q) - \alpha^{-1}\bigr)(1 + o(1)),$

where $C_e = 0.5772\dots$ is the Euler constant ([7]).

Case $\beta < 0$. Using the asymptotic relation

$\sum_{m=M}^\infty m^{-\delta-1} = \delta^{-1}M^{-\delta}(1 + o(1))$ as $M\to\infty$, $\delta > 0$,

we calculate

$r_\varepsilon = \varepsilon^2\sum_{k=1}^\infty\sigma_k^2 + \varepsilon^{4\alpha/(2\alpha+\beta)}\Bigl(\frac{Q(\alpha+\beta)(2\alpha+\beta)}{\alpha}\Bigr)^{\beta/(2\alpha+\beta)}\frac{\alpha}{\beta(\alpha+\beta)}\,(1 + o(1)).$

Example 2. $a_k = k^\alpha$, $\alpha > 0$, $\sigma_k^2 = k^{-(\alpha+1)}$. In this case the condition of Theorem 3 is again satisfied, and

$N = \Bigl(\frac{\alpha Q\varepsilon^{-2}}{2\log\varepsilon^{-1}}\Bigr)^{1/\alpha}(1 + o(1)), \qquad c = \frac{2\varepsilon^2\log\varepsilon^{-1}}{\alpha Q}(1 + o(1)).$

Then, by Theorem 3,

$r_\varepsilon = \varepsilon^2\zeta(\alpha+1) - 4\varepsilon^4(\log\varepsilon^{-1})^2(\alpha^2 Q)^{-1}(1 + o(1)).$

Example 3. $a_k = k^\alpha$, $\alpha > 0$, $\sigma_k^2 = k^{-\beta(\alpha+1)}$, $\beta(\alpha+1) > 2\alpha+1$. One calculates

$N = \bigl(Q\varepsilon^{-2}\zeta(\beta(\alpha+1)-\alpha)^{-1}\bigr)^{1/\alpha}(1 + o(1)), \qquad c = \varepsilon^2\zeta(\beta(\alpha+1)-\alpha)\,Q^{-1}(1 + o(1)).$

With these asymptotic relations, one can show that

$d_\varepsilon(\Theta(Q)) = \varepsilon^2\sum_{k=1}^\infty\sigma_k^2 - \varepsilon^4 Q^{-1}\zeta(\beta(\alpha+1)-\alpha)^2(1 + o(1)).$

By applying Proposition 1, we can obtain only the rate of the second-order term of the minimax risk:

$r_\varepsilon = \varepsilon^2\sum_{k=1}^\infty\sigma_k^2 - \kappa_\varepsilon\,\varepsilon^4 Q^{-1}\zeta(\beta(\alpha+1)-\alpha)^2,$

where $0 < \liminf_{\varepsilon\to0}\kappa_\varepsilon \le \limsup_{\varepsilon\to0}\kappa_\varepsilon < \infty$.

Example 4. $a_k = e^{\alpha k}$, $\alpha > 0$, $\sigma_k^2 = k^{\beta-1}$. From (15) one can see that

(24)  $e^{-\alpha} < c\,e^{\alpha N} \le 1.$

Using (6), (24) and the asymptotics

$\sum_{m=1}^M m^{\beta-1}e^{\alpha m} = \frac{M^{\beta-1}e^{\alpha(M+1)}}{e^\alpha - 1}(1 + o(1))$ as $M\to\infty$

gives

$N = \alpha^{-1}\log\varepsilon^{-1} + (2\alpha)^{-1}(1-\beta)\log\log\varepsilon^{-1} + O(1).$

By the last two relations and (24), we have

$c\sum_{k=1}^N \varepsilon^2\sigma_k^2 a_k = \frac{c\,\varepsilon^2 N^{\beta-1}e^{\alpha(N+1)}}{e^\alpha-1}(1 + o(1)) \le \frac{e^\alpha}{e^\alpha-1}\,\varepsilon^2 N^{\beta-1}(1 + o(1)) = O\bigl(\varepsilon^2(\log\varepsilon^{-1})^{\beta-1}\bigr).$

We apply Proposition 1 to this example.

Case $\beta > 1$. Since, according to [7],

$\sum_{m=1}^M m^{\beta-1} = \frac{M^\beta}{\beta} + \frac{M^{\beta-1}}{2} + o(M^{\beta-1})$ as $M\to\infty$,

we calculate

$\varepsilon^2\sum_{k=1}^N \sigma_k^2 = \frac{\varepsilon^2}{\beta}\Bigl(\frac{\log\varepsilon^{-1}}{\alpha}\Bigr)^{\beta} + \frac{1-\beta}{2\alpha}\,\varepsilon^2\Bigl(\frac{\log\varepsilon^{-1}}{\alpha}\Bigr)^{\beta-1}\log\log\varepsilon^{-1}\,(1 + o(1)),$

and obtain that

$r_\varepsilon = \frac{\varepsilon^2}{\beta}\Bigl(\frac{\log\varepsilon^{-1}}{\alpha}\Bigr)^{\beta} + \frac{1-\beta}{2\alpha}\,\varepsilon^2\Bigl(\frac{\log\varepsilon^{-1}}{\alpha}\Bigr)^{\beta-1}\log\log\varepsilon^{-1}\,(1 + o(1)).$

Case $\beta = 1$. In this case we have that

$c\sum_{k=1}^N\varepsilon^2\sigma_k^2 a_k = O(\varepsilon^2), \qquad \varepsilon^2\sum_{k=1}^N \sigma_k^2 = \varepsilon^2 N = \alpha^{-1}\varepsilon^2\log\varepsilon^{-1} + O(\varepsilon^2),$

and therefore

$r_\varepsilon = \alpha^{-1}\varepsilon^2\log\varepsilon^{-1} + O(\varepsilon^2).$

Case $0 < \beta < 1$. One can show that

$\sum_{m=1}^M m^{\beta-1} = \frac{M^\beta}{\beta} + \zeta(1-\beta) + o(1)$ as $M\to\infty$, $0<\beta<1$,

where $\zeta(\cdot)$ is the Riemann zeta function ([7]). Using this asymptotics, we obtain

$\varepsilon^2\sum_{k=1}^N \sigma_k^2 = \frac{\varepsilon^2}{\beta}\Bigl(\frac{\log\varepsilon^{-1}}{\alpha}\Bigr)^\beta + \varepsilon^2\zeta(1-\beta) + o(\varepsilon^2).$

Consequently,

$r_\varepsilon = \frac{\varepsilon^2}{\beta}\Bigl(\frac{\log\varepsilon^{-1}}{\alpha}\Bigr)^\beta + \varepsilon^2\zeta(1-\beta)(1 + o(1)).$

Case $\beta = 0$. Since, by (23),

$\sum_{k=1}^N k^{-1} = \log N + C_e + o(1),$

we get

$r_\varepsilon = \varepsilon^2\log\log\varepsilon^{-1} + \varepsilon^2(C_e + \log\alpha^{-1})(1 + o(1)).$

Case $\beta < 0$. In this case one can verify that

$\sum_{k=N+1}^\infty \sigma_k^2 = -\beta^{-1}N^\beta(1 + o(1)) = -\beta^{-1}\Bigl(\frac{\log\varepsilon^{-1}}{\alpha}\Bigr)^\beta(1 + o(1)).$

Therefore, by Corollary 1 we have

$r_\varepsilon = \varepsilon^2\sum_{k=1}^\infty\sigma_k^2 + \beta^{-1}\varepsilon^2\Bigl(\frac{\log\varepsilon^{-1}}{\alpha}\Bigr)^\beta(1 + o(1)).$

Example 5. $a_k = e^{\alpha k^r}$, $\alpha > 0$, $r > 0$, $\sigma_k^2 = k^{\beta-1}$. With the asymptotics

$\sum_{m=1}^M m^{\beta-1}e^{\alpha m^r} = C_r\,M^{\beta-1+(1-r)_+}e^{\alpha M^r}(1 + o(1))$ as $M\to\infty$, where

$C_r = \begin{cases} (\alpha r)^{-1}, & 0 < r < 1,\\ e^\alpha/(e^\alpha-1), & r = 1,\\ 1, & r > 1, \end{cases}$

one can obtain

$N = \bigl(\alpha^{-1}\log\varepsilon^{-1}\bigr)^{1/r}(1 + o(1)).$

By the definition of $N$, we evaluate

$c\sum_{k=1}^N \varepsilon^2\sigma_k^2 a_k = C_r\,\varepsilon^2 N^{\beta-1+(1-r)_+}\,c\,e^{\alpha N^r}(1 + o(1)) \le C_r\,\varepsilon^2 N^{\beta-1+(1-r)_+}(1 + o(1)) = O\Bigl(\varepsilon^2(\log\varepsilon^{-1})^{(\beta-1+(1-r)_+)/r}\Bigr).$

Now the asymptotics of the minimax risk may be obtained in the same way as in Example 4.

Case $\beta > 0$:
$r_\varepsilon = \beta^{-1}\varepsilon^2\bigl(\alpha^{-1}\log\varepsilon^{-1}\bigr)^{\beta/r}(1 + o(1)).$

Case $\beta = 0$:
$r_\varepsilon = r^{-1}\varepsilon^2\log\log\varepsilon^{-1} + \varepsilon^2(C_e + r^{-1}\log\alpha^{-1})(1 + o(1)).$

Case $\beta < 0$:
$r_\varepsilon = \varepsilon^2\sum_{k=1}^\infty\sigma_k^2 + \beta^{-1}\varepsilon^2\bigl(\alpha^{-1}\log\varepsilon^{-1}\bigr)^{\beta/r}(1 + o(1)).$

Remark 5. Note that in most cases in Examples 4 and 5 both the first- and the second-order terms of the minimax risk do not depend on the "size" $Q$ of the ellipsoid $\Theta(Q)$.

Remark 6. Let $a_k = a_k(\alpha)$, $k = 1, 2, \dots$, be as in Example 4 or Example 5. Define the corresponding hyperrectangle in $l_2$-space:

$H = H_\alpha(Q) = \{\theta : |\theta_k| \le \sqrt{Q}\,a_k^{-1}(\alpha),\ k = 1, 2, \dots\}.$

The assertions of Examples 4 and 5 concerning the first-order behaviour (also the second-order behaviour for the cases $\beta = 0$ and $\beta < 0$) of the minimax risk remain valid with $\Theta$ replaced by $H$. This follows immediately from the following easily verified relation: for any $Q > 0$, $\alpha > 0$, $0 < \delta < \alpha$ there exists $Q_1 > 0$ such that

$\Theta_\alpha(Q) \subset H_\alpha(Q) \subset \Theta_{\alpha-\delta}(Q_1).$

Example 6. $a_k = k^\alpha$, $\sigma_k = e^{\beta k^r}$, $\alpha, \beta, r > 0$. Let us first establish an upper bound for the minimax risk $r_\varepsilon(\Theta)$ (see (17)). Such a bound is provided by the minimax linear risk which, according to Theorem 1, equals $d$ (see (6)–(7)). Using the asymptotic expansions (as $M\to\infty$)

$\sum_{m=1}^M m^{2\alpha}e^{2\beta m^r} = \begin{cases} (2\beta r)^{-1}M^{2\alpha+1-r}e^{2\beta M^r}(1 + o(1)), & 0 < r < 1,\\ e^{2\beta}(e^{2\beta}-1)^{-1}M^{2\alpha}e^{2\beta M}(1 + o(1)), & r = 1,\\ M^{2\alpha}e^{2\beta M^r}(1 + o(1)), & r > 1, \end{cases}$

one can solve (6)–(7), thus obtaining

$c = \bigl(\beta^{-1}\log\varepsilon^{-1}\bigr)^{-\alpha/r}(1 + o(1)), \qquad d = Qc^2(1 + o(1)) = Q\bigl(\beta^{-1}\log\varepsilon^{-1}\bigr)^{-2\alpha/r}(1 + o(1)).$

The last formula exhibits a distinctive feature of this example, as compared to all the previous ones. Indeed, analyzing the proof of Theorem 1 (cf. inequality (13)), one realizes that the term $Qc^2$, contributing to $d$, arises solely as the squared bias term of the linear minimax estimator. Thus, up to first order, only the bias of the estimator contributes to its maximal risk.

To show that $d$ coincides asymptotically with the minimax risk $r_\varepsilon(\Theta)$, we choose a prior distribution $\lambda$ on $\Theta$ and use the obvious inequality $r_\varepsilon(\Theta) \ge R_\lambda(\varepsilon)$, where $R_\lambda(\varepsilon)$ denotes the Bayes risk. Let $\lambda$ be a distribution on $\Theta$ such that

$\theta_{N_0} = \pm\tau$ with probabilities $1/2$, and $\theta_i = 0$, $i \ne N_0$, almost surely,

where $\tau = (Q/a_{N_0}^2)^{1/2}$ and $N_0 = [c^{-1/\alpha}]$. Clearly $\lambda(\Theta) = 1$,

$\tau^2 = Q a_{N_0}^{-2} = Qc^2(1 + o(1)) = d(1 + o(1)),$

and $\varepsilon\sigma_{N_0} = \varepsilon e^{\beta N_0^r}$. Due to sufficiency considerations, the Bayes risk $R_\lambda(\varepsilon)$ in estimating $\theta$ is equal to the Bayes risk in estimating $\theta_{N_0}$ based on the observation $Y_{N_0}$ only. Since

$\lim_{\varepsilon\to0}\frac{\tau^2}{\operatorname{Var} Y_{N_0}} = 0,$

it follows (see [8], proof of Lemma 3.1) that

$r_\varepsilon(\Theta) \ge R_\lambda(\varepsilon) = \tau^2(1 + o(1)) = d(1 + o(1)).$

Thus

$r_\varepsilon(\Theta) = Q\bigl(\beta^{-1}\log\varepsilon^{-1}\bigr)^{-2\alpha/r}(1 + o(1)).$

5 Appendix: Proofs

The proof of Theorem 2 is based on the following elementary result.

Proposition 2. Let $\eta_1, \dots, \eta_m$ be independent Gaussian random variables with $E\eta_k = 0$, $\operatorname{Var}\eta_k = d_k^2$. Then, for $P \ge \sum_{k=1}^m d_k^2$,

$P\Bigl(\sum_{k=1}^m \eta_k^2 > P\Bigr) \le \exp\Bigl\{-\frac{\bigl(P - \sum_{k=1}^m d_k^2\bigr)^2}{4\sum_{k=1}^m d_k^4}\Bigr\}.$
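A quick Monte Carlo sanity check of this tail bound, with toy values $m = 10$, $d_k = 1$, $P = 15$ (our illustration, not from the paper; the bound is informative in the moderate-deviation regime in which it is applied below):

```python
import numpy as np

# Monte Carlo check of the Gaussian quadratic-form tail bound (toy values).
rng = np.random.default_rng(1)
m, P = 10, 15.0
d2 = np.ones(m)                                   # Var(eta_k) = d_k^2 = 1
eta2 = rng.standard_normal((1_000_000, m)) ** 2   # samples of eta_k^2 / d_k^2
p_hat = np.mean(eta2 @ d2 > P)                    # estimates P(sum eta_k^2 > P)
bound = np.exp(-(P - d2.sum()) ** 2 / (4.0 * (d2**2).sum()))
print(p_hat, bound)
```

Here the empirical tail probability (roughly the chi-square tail $P(\chi^2_{10} > 15) \approx 0.13$) stays below the bound $e^{-5/8} \approx 0.54$, as Proposition 2 asserts.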

Indeed, using the Markov inequality and the moment generating function of $\eta_1^2 + \dots + \eta_m^2$, one obtains for any $\lambda > 0$

$P\Bigl(\sum_{k=1}^m\eta_k^2 > P\Bigr) \le e^{-\lambda P}\,E\,e^{\lambda\sum_{k=1}^m\eta_k^2} = \exp\Bigl\{-\lambda P - \frac12\sum_{k=1}^m\log(1 - 2\lambda d_k^2)\Bigr\} \le \exp\Bigl\{-\lambda\Bigl(P - \sum_{k=1}^m d_k^2\Bigr) + \lambda^2\sum_{k=1}^m d_k^4\Bigr\}.$

It remains to set

$\lambda = \frac{P - \sum_{k=1}^m d_k^2}{2\sum_{k=1}^m d_k^4}.$

Proof of Theorem 2. We select a prior measure $d\lambda(\theta)$ such that $\theta_k$, $k = 1, 2, \dots$, are distributed independently and normally with zero means and variances $m_k^2$, $k = 1, 2, \dots$. Let $E$ denote the expectation with respect to the joint distribution of $Y_1, Y_2, \dots$ and $\theta_1, \theta_2, \dots$. Since $\Theta$ is closed and convex, we may restrict attention to estimators with $\|\hat\theta\|^2 \le Qa_1^{-2}$. We bound the minimax risk from below as follows:

$r_\varepsilon = \inf_{\hat\theta}\sup_{\theta\in\Theta} E_\theta\|\hat\theta-\theta\|^2 \ge \inf_{\hat\theta}\int_\Theta E_\theta\|\hat\theta-\theta\|^2\,d\lambda(\theta)/\lambda(\Theta)$
(25)  $\ge \inf_{\hat\theta} E\|\hat\theta-\theta\|^2 - \sup_{\hat\theta}\int_{\Theta^C} E_\theta\|\hat\theta-\theta\|^2\,d\lambda(\theta)/\lambda(\Theta),$

because, by the Cauchy–Schwarz inequality and (19),

$\sup_{\hat\theta}\int_{\Theta^C} E_\theta\|\hat\theta-\theta\|^2\,d\lambda(\theta) \le 2Qa_1^{-2}\lambda(\Theta^C) + 2\int_{\Theta^C}\|\theta\|^2\,d\lambda(\theta) \le 2Qa_1^{-2}(1+\sqrt3)\bigl(\lambda(\Theta^C)\bigr)^{1/2}.$

Here we used $\sum_k m_k^2 \le Qa_1^{-2}$ and $E\|\theta\|^4 \le 3\bigl(\sum_k m_k^2\bigr)^2$. Note that Proposition 2, together with (19), entails

(26)  $\lambda(\Theta^C) \le \varepsilon^{2\gamma}.$

Now we recall the following known result. If $\xi$ and $\eta$ are independent Gaussian random variables with $E\xi = 0$, $\operatorname{Var}\xi = \sigma_1^2$, $\operatorname{Var}\eta = \sigma_2^2$, then

$\inf_f E\bigl(\xi - f(\xi+\eta)\bigr)^2 = E\bigl(\xi - E(\xi\mid \xi+\eta)\bigr)^2 = \frac{\sigma_1^2\sigma_2^2}{\sigma_1^2+\sigma_2^2}.$

Using this, we estimate the first term on the right-hand side of (25):

$\inf_{\hat\theta} E(\hat\theta_k - \theta_k)^2 = \frac{\varepsilon^2\sigma_k^2 m_k^2}{m_k^2 + \varepsilon^2\sigma_k^2}.$

From the last equality, (25) and (26) we finally obtain

$r_\varepsilon \ge \sum_k \frac{\varepsilon^2\sigma_k^2 m_k^2}{m_k^2 + \varepsilon^2\sigma_k^2} + O(\varepsilon^{\gamma}).$

Theorem 2 is proved.

Proof of Theorem 3. The following upper bound for the minimax risk follows immediately from Theorem 1:

(27)  $r_\varepsilon \le r_\varepsilon^l = d = \sum_{k=1}^N \varepsilon^2\sigma_k^2 - c\sum_{k=1}^N \varepsilon^2\sigma_k^2 a_k.$

Introduce

$\tau = \tau(\varepsilon) = \frac{\bigl[8\gamma(\log\varepsilon^{-1})\sum_k a_k^2\,\varepsilon^4\sigma_k^4(1-ca_k)_+^2\bigr]^{1/2}}{\sum_k a_k\,\varepsilon^2\sigma_k^2(1-ca_k)_+}.$

Note that $\tau \ge 0$ and $\tau = o(1)$ as $\varepsilon\to 0$ for any $\gamma > 0$, because of the condition of Theorem 3. Now we take the sequence $m_k^2 = \tilde\theta_k^2(1+\tau)^{-1}$, $k = 1, 2, \dots$, with $\tilde\theta_k$ defined by (10). Equation (19) is satisfied. Indeed, by virtue of (14) we have

$\sum_k a_k^2 m_k^2 + \Bigl(8\gamma\log\varepsilon^{-1}\sum_k a_k^4 m_k^4\Bigr)^{1/2} = (1+\tau)^{-1}(1+\tau)\sum_k a_k^2\tilde\theta_k^2 = Q.$

Applying now Theorem 2 with the sequence $(m_k,\ k=1,2,\dots)$, we calculate

$r_\varepsilon \ge (1+\tau)^{-1}\sum_k \frac{\varepsilon^2\sigma_k^2\tilde\theta_k^2}{\tilde\theta_k^2(1+\tau)^{-1} + \varepsilon^2\sigma_k^2} + O(\varepsilon^\gamma) \ge \sum_{k=1}^N \varepsilon^2\sigma_k^2(1-ca_k) - \tau\sum_{k=1}^N \varepsilon^2\sigma_k^2(1-ca_k)\,ca_k + O(\varepsilon^\gamma).$

From (6) it follows that $c$ cannot be of smaller order than $\varepsilon^2$. Choosing now some $\gamma > 4$ and recalling that $\tau \ge 0$, $\tau = o(1)$ as $\varepsilon\to 0$, and $0 \le ca_k \le 1$ for $k = 1, \dots, N$ (see (16)), we conclude that the last lower bound, together with the upper bound (27), proves Theorem 3.

Proof of Proposition 1. Let $(m_k,\ k = 1, 2, \dots)$ be some sequence of positive numbers such that

(28)  $\sum_k a_k^2 m_k^2 \le Q,$

i.e. $m = (m_k,\ k=1,2,\dots) \in \Theta(Q)$. Introduce

$\lambda_k(x) = (1/m_k)\,\lambda_0(x/m_k), \quad k = 1, 2, \dots, \quad \text{where } \lambda_0(x) = I\{|x|\le1\}\cos^2(\pi x/2).$

These are probability densities with supports $[-m_k, m_k]$ respectively. It is easy to calculate the Fisher information of the distribution defined by the density $\lambda_k(x)$:

(29)  $I(\lambda_k) = E_k\bigl[\bigl(\log\lambda_k(\theta_k)\bigr)'\bigr]^2 = I(\lambda_0)/m_k^2 = \pi^2/m_k^2,$

where $E_k$ denotes the expectation with respect to the density $\lambda_k$. We select a prior measure $d\lambda(\theta)$ such that $\theta_k$, $k = 1, 2, \dots$, are distributed independently with densities $\lambda_k(x)$, $k = 1, 2, \dots$, respectively. Since (28) provides that $\operatorname{supp}\lambda \subset \Theta$, we proceed estimating the minimax risk (17) from below as follows:

(30)  $r_\varepsilon = \inf_{\hat\theta}\sup_{\theta\in\Theta}\sum_k E_\theta(\hat\theta_k-\theta_k)^2 \ge \inf_{\hat\theta}\sum_k\int E_\theta(\hat\theta_k-\theta_k)^2\,d\lambda(\theta) \ge \sum_k\inf_{\hat\theta_k} E(\hat\theta_k-\theta_k)^2.$

For this case (see (29)) the inequality (18) yields

$E(\hat\theta_k-\theta_k)^2 \ge \bigl(\pi^2/m_k^2 + \varepsilon^{-2}\sigma_k^{-2}\bigr)^{-1}.$

From this inequality and (30) we get that, for any $m$ satisfying (28), the minimax risk $r_\varepsilon$ satisfies

(31)  $r_\varepsilon \ge \sum_k \frac{\varepsilon^2\sigma_k^2\,(m_k^2/\pi^2)}{m_k^2/\pi^2 + \varepsilon^2\sigma_k^2}.$

From this one obtains the following lower bound:

$r_\varepsilon \ge \sup_{m\in\Theta(Q)}\sum_k \frac{\varepsilon^2\sigma_k^2\,(m_k^2/\pi^2)}{m_k^2/\pi^2 + \varepsilon^2\sigma_k^2} = \sup_{u\in\Theta(Q/\pi^2)}\sum_k \frac{\varepsilon^2\sigma_k^2 u_k^2}{u_k^2 + \varepsilon^2\sigma_k^2} = d_\varepsilon(\Theta(Q/\pi^2)).$

The last relation and Theorem 1 complete the proof.

Proof of Corollary 1. The left-hand side of the inequality (31) does not depend on $m$; therefore, we can take any $m$ satisfying (28). Now we make use of the vector $(\tilde\theta_k,\ k = 1, 2, \dots)$ defined by (10). Relation (14) provides that $\tilde\theta \in \Theta(Q)$. Substituting $m_k = \tilde\theta_k$, $k = 1, 2, \dots$, into (31), one calculates

$r_\varepsilon \ge \sum_{k=1}^N \frac{\varepsilon^2\sigma_k^2(1-ca_k)}{1 + (\pi^2-1)ca_k}.$

Using now this and (16), we obtain that

$r_\varepsilon \ge \sum_{k=1}^N \varepsilon^2\sigma_k^2(1 - \pi^2 ca_k) = \sum_{k=1}^N \varepsilon^2\sigma_k^2 - \pi^2 c\sum_{k=1}^N \varepsilon^2\sigma_k^2 a_k.$

Combining the last relation with the condition of Corollary 1 and the upper bound (27) completes the proof.
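The two facts about the cosine density used above, namely that $\lambda_0(x) = \cos^2(\pi x/2)$ integrates to one on $[-1,1]$ and that $I(\lambda_0) = \pi^2$, are easy to verify numerically (a sketch; the `trapezoid` helper is ours, and the simplification $(\lambda_0')^2/\lambda_0 = \pi^2\sin^2(\pi x/2)$ is elementary calculus):

```python
import numpy as np

# Numerical check that lambda_0(x) = cos^2(pi x / 2) on [-1, 1] is a
# probability density with Fisher information pi^2, as used in (29).
x = np.linspace(-1.0, 1.0, 200_001)
lam = np.cos(np.pi * x / 2.0) ** 2
# (lambda_0'(x))^2 / lambda_0(x) simplifies to pi^2 * sin^2(pi x / 2)
score2 = np.pi**2 * np.sin(np.pi * x / 2.0) ** 2

def trapezoid(y, x):
    """Composite trapezoidal rule (kept local for NumPy-version safety)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

mass = trapezoid(lam, x)
fisher = trapezoid(score2, x)
print(mass, fisher)
```

Both integrals match their closed forms ($1$ and $\pi^2 \approx 9.8696$) to high accuracy.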

References

[1] D.L. Donoho and I.M. Johnstone, Minimax risk over l_p-balls, Technical Report, Department of Statistics, University of California, Berkeley, 1989.
[2] D.L. Donoho, R.C. Liu and B. MacGibbon, Minimax risk over hyperrectangles, and implications, Ann. Statist., 18 (1990), pp. 1416-1437.
[3] S.Y. Efroimovich and M.S. Pinsker, Estimation of square-integrable probability density of a random variable, Problems Inform. Transmission, 18 (1982).
[4] R.D. Gill and B.Y. Levit, Applications of the van Trees inequality: a Bayesian Cramer-Rao bound, Bernoulli, 1 (1995), pp. 59-79.
[5] G.K. Golubev and B.Y. Levit, On the Second Order Minimax Estimation of Distribution Functions, Preprint, Department of Mathematics, University of Utrecht, 1994.
[6] G.K. Golubev and M. Nussbaum, A risk bound in Sobolev class regression, Ann. Statist., 18 (1990), pp. 758-778.
[7] I.S. Gradshtein and I.M. Ryzhik, Table of Integrals, Series, and Products, Academic Press, New York, 1980.
[8] I.A. Ibragimov and R.Z. Khasminskii, On nonparametric estimation of the value of a linear functional in Gaussian white noise, Theory Probab. Appl., 29 (1984), pp. 18-32.
[9] M.S. Pinsker, Optimal filtering of square integrable signals in Gaussian white noise, Problems Inform. Transmission, 16 (1980), pp. 120-133.
[10] H.L. van Trees, Detection, Estimation and Modulation Theory, Part 1, Wiley, New York, 1968.


460 HOLGER DETTE AND WILLIAM J STUDDEN order to examine how a given design behaves in the model g` with respect to the D-optimality criterion one uses Statistica Sinica 5(1995), 459-473 OPTIMAL DESIGNS FOR POLYNOMIAL REGRESSION WHEN THE DEGREE IS NOT KNOWN Holger Dette and William J Studden Technische Universitat Dresden and Purdue University Abstract:

More information

Asymptotically Efficient Nonparametric Estimation of Nonlinear Spectral Functionals

Asymptotically Efficient Nonparametric Estimation of Nonlinear Spectral Functionals Acta Applicandae Mathematicae 78: 145 154, 2003. 2003 Kluwer Academic Publishers. Printed in the Netherlands. 145 Asymptotically Efficient Nonparametric Estimation of Nonlinear Spectral Functionals M.

More information

Outline. Random Variables. Examples. Random Variable

Outline. Random Variables. Examples. Random Variable Outline Random Variables M. Sami Fadali Professor of Electrical Engineering University of Nevada, Reno Random variables. CDF and pdf. Joint random variables. Correlated, independent, orthogonal. Correlation,

More information

On some properties of elementary derivations in dimension six

On some properties of elementary derivations in dimension six Journal of Pure and Applied Algebra 56 (200) 69 79 www.elsevier.com/locate/jpaa On some properties of elementary derivations in dimension six Joseph Khoury Department of Mathematics, University of Ottawa,

More information

Introduction Wavelet shrinage methods have been very successful in nonparametric regression. But so far most of the wavelet regression methods have be

Introduction Wavelet shrinage methods have been very successful in nonparametric regression. But so far most of the wavelet regression methods have be Wavelet Estimation For Samples With Random Uniform Design T. Tony Cai Department of Statistics, Purdue University Lawrence D. Brown Department of Statistics, University of Pennsylvania Abstract We show

More information

A characterization of consistency of model weights given partial information in normal linear models

A characterization of consistency of model weights given partial information in normal linear models Statistics & Probability Letters ( ) A characterization of consistency of model weights given partial information in normal linear models Hubert Wong a;, Bertrand Clare b;1 a Department of Health Care

More information

ITERATED FUNCTION SYSTEMS WITH CONTINUOUS PLACE DEPENDENT PROBABILITIES

ITERATED FUNCTION SYSTEMS WITH CONTINUOUS PLACE DEPENDENT PROBABILITIES UNIVERSITATIS IAGELLONICAE ACTA MATHEMATICA, FASCICULUS XL 2002 ITERATED FUNCTION SYSTEMS WITH CONTINUOUS PLACE DEPENDENT PROBABILITIES by Joanna Jaroszewska Abstract. We study the asymptotic behaviour

More information

Garrett: `Bernstein's analytic continuation of complex powers' 2 Let f be a polynomial in x 1 ; : : : ; x n with real coecients. For complex s, let f

Garrett: `Bernstein's analytic continuation of complex powers' 2 Let f be a polynomial in x 1 ; : : : ; x n with real coecients. For complex s, let f 1 Bernstein's analytic continuation of complex powers c1995, Paul Garrett, garrettmath.umn.edu version January 27, 1998 Analytic continuation of distributions Statement of the theorems on analytic continuation

More information

Asymptotics of minimax stochastic programs

Asymptotics of minimax stochastic programs Asymptotics of minimax stochastic programs Alexander Shapiro Abstract. We discuss in this paper asymptotics of the sample average approximation (SAA) of the optimal value of a minimax stochastic programming

More information

Covariance function estimation in Gaussian process regression

Covariance function estimation in Gaussian process regression Covariance function estimation in Gaussian process regression François Bachoc Department of Statistics and Operations Research, University of Vienna WU Research Seminar - May 2015 François Bachoc Gaussian

More information

1 Introduction It will be convenient to use the inx operators a b and a b to stand for maximum (least upper bound) and minimum (greatest lower bound)

1 Introduction It will be convenient to use the inx operators a b and a b to stand for maximum (least upper bound) and minimum (greatest lower bound) Cycle times and xed points of min-max functions Jeremy Gunawardena, Department of Computer Science, Stanford University, Stanford, CA 94305, USA. jeremy@cs.stanford.edu October 11, 1993 to appear in the

More information

Distance between multinomial and multivariate normal models

Distance between multinomial and multivariate normal models Chapter 9 Distance between multinomial and multivariate normal models SECTION 1 introduces Andrew Carter s recursive procedure for bounding the Le Cam distance between a multinomialmodeland its approximating

More information

Detection & Estimation Lecture 1

Detection & Estimation Lecture 1 Detection & Estimation Lecture 1 Intro, MVUE, CRLB Xiliang Luo General Course Information Textbooks & References Fundamentals of Statistical Signal Processing: Estimation Theory/Detection Theory, Steven

More information

Near-Optimal Linear Recovery from Indirect Observations

Near-Optimal Linear Recovery from Indirect Observations Near-Optimal Linear Recovery from Indirect Observations joint work with A. Nemirovski, Georgia Tech http://www2.isye.gatech.edu/ nemirovs/statopt LN.pdf Les Houches, April, 2017 Situation: In the nature

More information

3 Integration and Expectation

3 Integration and Expectation 3 Integration and Expectation 3.1 Construction of the Lebesgue Integral Let (, F, µ) be a measure space (not necessarily a probability space). Our objective will be to define the Lebesgue integral R fdµ

More information

Conditional Value-at-Risk (CVaR) Norm: Stochastic Case

Conditional Value-at-Risk (CVaR) Norm: Stochastic Case Conditional Value-at-Risk (CVaR) Norm: Stochastic Case Alexander Mafusalov, Stan Uryasev RESEARCH REPORT 03-5 Risk Management and Financial Engineering Lab Department of Industrial and Systems Engineering

More information

1. Introduction Over the last three decades a number of model selection criteria have been proposed, including AIC (Akaike, 1973), AICC (Hurvich & Tsa

1. Introduction Over the last three decades a number of model selection criteria have been proposed, including AIC (Akaike, 1973), AICC (Hurvich & Tsa On the Use of Marginal Likelihood in Model Selection Peide Shi Department of Probability and Statistics Peking University, Beijing 100871 P. R. China Chih-Ling Tsai Graduate School of Management University

More information

4 Sums of Independent Random Variables

4 Sums of Independent Random Variables 4 Sums of Independent Random Variables Standing Assumptions: Assume throughout this section that (,F,P) is a fixed probability space and that X 1, X 2, X 3,... are independent real-valued random variables

More information

The memory centre IMUJ PREPRINT 2012/03. P. Spurek

The memory centre IMUJ PREPRINT 2012/03. P. Spurek The memory centre IMUJ PREPRINT 202/03 P. Spurek Faculty of Mathematics and Computer Science, Jagiellonian University, Łojasiewicza 6, 30-348 Kraków, Poland J. Tabor Faculty of Mathematics and Computer

More information

Parameter Estimation

Parameter Estimation Parameter Estimation Consider a sample of observations on a random variable Y. his generates random variables: (y 1, y 2,, y ). A random sample is a sample (y 1, y 2,, y ) where the random variables y

More information

STABILITY OF INVARIANT SUBSPACES OF COMMUTING MATRICES We obtain some further results for pairs of commuting matrices. We show that a pair of commutin

STABILITY OF INVARIANT SUBSPACES OF COMMUTING MATRICES We obtain some further results for pairs of commuting matrices. We show that a pair of commutin On the stability of invariant subspaces of commuting matrices Tomaz Kosir and Bor Plestenjak September 18, 001 Abstract We study the stability of (joint) invariant subspaces of a nite set of commuting

More information

Z-estimators (generalized method of moments)

Z-estimators (generalized method of moments) Z-estimators (generalized method of moments) Consider the estimation of an unknown parameter θ in a set, based on data x = (x,...,x n ) R n. Each function h(x, ) on defines a Z-estimator θ n = θ n (x,...,x

More information

Economics 241B Review of Limit Theorems for Sequences of Random Variables

Economics 241B Review of Limit Theorems for Sequences of Random Variables Economics 241B Review of Limit Theorems for Sequences of Random Variables Convergence in Distribution The previous de nitions of convergence focus on the outcome sequences of a random variable. Convergence

More information

Gaussian Phase Transitions and Conic Intrinsic Volumes: Steining the Steiner formula

Gaussian Phase Transitions and Conic Intrinsic Volumes: Steining the Steiner formula Gaussian Phase Transitions and Conic Intrinsic Volumes: Steining the Steiner formula Larry Goldstein, University of Southern California Nourdin GIoVAnNi Peccati Luxembourg University University British

More information

Markov processes Course note 2. Martingale problems, recurrence properties of discrete time chains.

Markov processes Course note 2. Martingale problems, recurrence properties of discrete time chains. Institute for Applied Mathematics WS17/18 Massimiliano Gubinelli Markov processes Course note 2. Martingale problems, recurrence properties of discrete time chains. [version 1, 2017.11.1] We introduce

More information

Plan of Class 4. Radial Basis Functions with moving centers. Projection Pursuit Regression and ridge. Principal Component Analysis: basic ideas

Plan of Class 4. Radial Basis Functions with moving centers. Projection Pursuit Regression and ridge. Principal Component Analysis: basic ideas Plan of Class 4 Radial Basis Functions with moving centers Multilayer Perceptrons Projection Pursuit Regression and ridge functions approximation Principal Component Analysis: basic ideas Radial Basis

More information

( f ^ M _ M 0 )dµ (5.1)

( f ^ M _ M 0 )dµ (5.1) 47 5. LEBESGUE INTEGRAL: GENERAL CASE Although the Lebesgue integral defined in the previous chapter is in many ways much better behaved than the Riemann integral, it shares its restriction to bounded

More information

Richard DiSalvo. Dr. Elmer. Mathematical Foundations of Economics. Fall/Spring,

Richard DiSalvo. Dr. Elmer. Mathematical Foundations of Economics. Fall/Spring, The Finite Dimensional Normed Linear Space Theorem Richard DiSalvo Dr. Elmer Mathematical Foundations of Economics Fall/Spring, 20-202 The claim that follows, which I have called the nite-dimensional normed

More information

QUESTIONS AND SOLUTIONS

QUESTIONS AND SOLUTIONS UNIVERSITY OF BRISTOL Examination for the Degrees of B.Sci., M.Sci. and M.Res. (Level 3 and Level M Martingale Theory with Applications MATH 36204 and M6204 (Paper Code MATH-36204 and MATH-M6204 January

More information

STATS 200: Introduction to Statistical Inference. Lecture 29: Course review

STATS 200: Introduction to Statistical Inference. Lecture 29: Course review STATS 200: Introduction to Statistical Inference Lecture 29: Course review Course review We started in Lecture 1 with a fundamental assumption: Data is a realization of a random process. The goal throughout

More information

A CHARACTERIZATION OF ANCESTRAL LIMIT PROCESSES ARISING IN HAPLOID. Abstract. conditions other limit processes do appear, where multiple mergers of

A CHARACTERIZATION OF ANCESTRAL LIMIT PROCESSES ARISING IN HAPLOID. Abstract. conditions other limit processes do appear, where multiple mergers of A CHARACTERIATIO OF ACESTRAL LIMIT PROCESSES ARISIG I HAPLOID POPULATIO GEETICS MODELS M. Mohle, Johannes Gutenberg-University, Mainz and S. Sagitov 2, Chalmers University of Technology, Goteborg Abstract

More information

Stochastic Design Criteria in Linear Models

Stochastic Design Criteria in Linear Models AUSTRIAN JOURNAL OF STATISTICS Volume 34 (2005), Number 2, 211 223 Stochastic Design Criteria in Linear Models Alexander Zaigraev N. Copernicus University, Toruń, Poland Abstract: Within the framework

More information

Probability and Measure

Probability and Measure Chapter 4 Probability and Measure 4.1 Introduction In this chapter we will examine probability theory from the measure theoretic perspective. The realisation that measure theory is the foundation of probability

More information

Model Selection and Geometry

Model Selection and Geometry Model Selection and Geometry Pascal Massart Université Paris-Sud, Orsay Leipzig, February Purpose of the talk! Concentration of measure plays a fundamental role in the theory of model selection! Model

More information

Continued Fraction Digit Averages and Maclaurin s Inequalities

Continued Fraction Digit Averages and Maclaurin s Inequalities Continued Fraction Digit Averages and Maclaurin s Inequalities Steven J. Miller, Williams College sjm1@williams.edu, Steven.Miller.MC.96@aya.yale.edu Joint with Francesco Cellarosi, Doug Hensley and Jake

More information

The Equivalence of Ergodicity and Weak Mixing for Infinitely Divisible Processes1

The Equivalence of Ergodicity and Weak Mixing for Infinitely Divisible Processes1 Journal of Theoretical Probability. Vol. 10, No. 1, 1997 The Equivalence of Ergodicity and Weak Mixing for Infinitely Divisible Processes1 Jan Rosinski2 and Tomasz Zak Received June 20, 1995: revised September

More information

Econometrics I, Estimation

Econometrics I, Estimation Econometrics I, Estimation Department of Economics Stanford University September, 2008 Part I Parameter, Estimator, Estimate A parametric is a feature of the population. An estimator is a function of the

More information

Module 3. Function of a Random Variable and its distribution

Module 3. Function of a Random Variable and its distribution Module 3 Function of a Random Variable and its distribution 1. Function of a Random Variable Let Ω, F, be a probability space and let be random variable defined on Ω, F,. Further let h: R R be a given

More information

STAT 7032 Probability Spring Wlodek Bryc

STAT 7032 Probability Spring Wlodek Bryc STAT 7032 Probability Spring 2018 Wlodek Bryc Created: Friday, Jan 2, 2014 Revised for Spring 2018 Printed: January 9, 2018 File: Grad-Prob-2018.TEX Department of Mathematical Sciences, University of Cincinnati,

More information

Support Vector Machines vs Multi-Layer. Perceptron in Particle Identication. DIFI, Universita di Genova (I) INFN Sezione di Genova (I) Cambridge (US)

Support Vector Machines vs Multi-Layer. Perceptron in Particle Identication. DIFI, Universita di Genova (I) INFN Sezione di Genova (I) Cambridge (US) Support Vector Machines vs Multi-Layer Perceptron in Particle Identication N.Barabino 1, M.Pallavicini 2, A.Petrolini 1;2, M.Pontil 3;1, A.Verri 4;3 1 DIFI, Universita di Genova (I) 2 INFN Sezione di Genova

More information

1. Introduction The nonlinear complementarity problem (NCP) is to nd a point x 2 IR n such that hx; F (x)i = ; x 2 IR n + ; F (x) 2 IRn + ; where F is

1. Introduction The nonlinear complementarity problem (NCP) is to nd a point x 2 IR n such that hx; F (x)i = ; x 2 IR n + ; F (x) 2 IRn + ; where F is New NCP-Functions and Their Properties 3 by Christian Kanzow y, Nobuo Yamashita z and Masao Fukushima z y University of Hamburg, Institute of Applied Mathematics, Bundesstrasse 55, D-2146 Hamburg, Germany,

More information

Computation Of Asymptotic Distribution. For Semiparametric GMM Estimators. Hidehiko Ichimura. Graduate School of Public Policy

Computation Of Asymptotic Distribution. For Semiparametric GMM Estimators. Hidehiko Ichimura. Graduate School of Public Policy Computation Of Asymptotic Distribution For Semiparametric GMM Estimators Hidehiko Ichimura Graduate School of Public Policy and Graduate School of Economics University of Tokyo A Conference in honor of

More information

balls, Edalat and Heckmann [4] provided a simple explicit construction of a computational model for a Polish space. They also gave domain theoretic pr

balls, Edalat and Heckmann [4] provided a simple explicit construction of a computational model for a Polish space. They also gave domain theoretic pr Electronic Notes in Theoretical Computer Science 6 (1997) URL: http://www.elsevier.nl/locate/entcs/volume6.html 9 pages Computational Models for Ultrametric Spaces Bob Flagg Department of Mathematics and

More information

8 Singular Integral Operators and L p -Regularity Theory

8 Singular Integral Operators and L p -Regularity Theory 8 Singular Integral Operators and L p -Regularity Theory 8. Motivation See hand-written notes! 8.2 Mikhlin Multiplier Theorem Recall that the Fourier transformation F and the inverse Fourier transformation

More information

2 ANDREW L. RUKHIN We accept here the usual in the change-point analysis convention that, when the minimizer in ) is not determined uniquely, the smal

2 ANDREW L. RUKHIN We accept here the usual in the change-point analysis convention that, when the minimizer in ) is not determined uniquely, the smal THE RATES OF CONVERGENCE OF BAYES ESTIMATORS IN CHANGE-OINT ANALYSIS ANDREW L. RUKHIN Department of Mathematics and Statistics UMBC Baltimore, MD 2228 USA Abstract In the asymptotic setting of the change-point

More information

One important issue in the study of queueing systems is to characterize departure processes. Study on departure processes was rst initiated by Burke (

One important issue in the study of queueing systems is to characterize departure processes. Study on departure processes was rst initiated by Burke ( The Departure Process of the GI/G/ Queue and Its MacLaurin Series Jian-Qiang Hu Department of Manufacturing Engineering Boston University 5 St. Mary's Street Brookline, MA 2446 Email: hqiang@bu.edu June

More information

On the shape of solutions to the Extended Fisher-Kolmogorov equation

On the shape of solutions to the Extended Fisher-Kolmogorov equation On the shape of solutions to the Extended Fisher-Kolmogorov equation Alberto Saldaña ( joint work with Denis Bonheure and Juraj Földes ) Karlsruhe, December 1 2015 Introduction Consider the Allen-Cahn

More information

David L. Donoho. Iain M. Johnstone. Department of Statistics. Stanford University. Stanford, CA December 27, Abstract

David L. Donoho. Iain M. Johnstone. Department of Statistics. Stanford University. Stanford, CA December 27, Abstract Minimax Risk over l p -Balls for l q -error David L. Donoho Iain M. Johnstone Department of Statistics Stanford University Stanford, CA 94305 December 27, 994 Abstract Consider estimating the mean vector

More information

Mathematics 324 Riemann Zeta Function August 5, 2005

Mathematics 324 Riemann Zeta Function August 5, 2005 Mathematics 324 Riemann Zeta Function August 5, 25 In this note we give an introduction to the Riemann zeta function, which connects the ideas of real analysis with the arithmetic of the integers. Define

More information

In Chapter 14 there have been introduced the important concepts such as. 3) Compactness, convergence of a sequence of elements and Cauchy sequences,

In Chapter 14 there have been introduced the important concepts such as. 3) Compactness, convergence of a sequence of elements and Cauchy sequences, Chapter 18 Topics of Functional Analysis In Chapter 14 there have been introduced the important concepts such as 1) Lineality of a space of elements, 2) Metric (or norm) in a space, 3) Compactness, convergence

More information

OPTIMAL TRANSPORTATION PLANS AND CONVERGENCE IN DISTRIBUTION

OPTIMAL TRANSPORTATION PLANS AND CONVERGENCE IN DISTRIBUTION OPTIMAL TRANSPORTATION PLANS AND CONVERGENCE IN DISTRIBUTION J.A. Cuesta-Albertos 1, C. Matrán 2 and A. Tuero-Díaz 1 1 Departamento de Matemáticas, Estadística y Computación. Universidad de Cantabria.

More information

Stability, Queue Length and Delay of Deterministic and Stochastic Queueing Networks Cheng-Shang Chang IBM Research Division T.J. Watson Research Cente

Stability, Queue Length and Delay of Deterministic and Stochastic Queueing Networks Cheng-Shang Chang IBM Research Division T.J. Watson Research Cente Stability, Queue Length and Delay of Deterministic and Stochastic Queueing Networks Cheng-Shang Chang IBM Research Division T.J. Watson Research Center P.O. Box 704 Yorktown Heights, NY 10598 cschang@watson.ibm.com

More information

Stochastic dominance with imprecise information

Stochastic dominance with imprecise information Stochastic dominance with imprecise information Ignacio Montes, Enrique Miranda, Susana Montes University of Oviedo, Dep. of Statistics and Operations Research. Abstract Stochastic dominance, which is

More information

PCA with random noise. Van Ha Vu. Department of Mathematics Yale University

PCA with random noise. Van Ha Vu. Department of Mathematics Yale University PCA with random noise Van Ha Vu Department of Mathematics Yale University An important problem that appears in various areas of applied mathematics (in particular statistics, computer science and numerical

More information

Irr. Statistical Methods in Experimental Physics. 2nd Edition. Frederick James. World Scientific. CERN, Switzerland

Irr. Statistical Methods in Experimental Physics. 2nd Edition. Frederick James. World Scientific. CERN, Switzerland Frederick James CERN, Switzerland Statistical Methods in Experimental Physics 2nd Edition r i Irr 1- r ri Ibn World Scientific NEW JERSEY LONDON SINGAPORE BEIJING SHANGHAI HONG KONG TAIPEI CHENNAI CONTENTS

More information

LECTURE 15 + C+F. = A 11 x 1x1 +2A 12 x 1x2 + A 22 x 2x2 + B 1 x 1 + B 2 x 2. xi y 2 = ~y 2 (x 1 ;x 2 ) x 2 = ~x 2 (y 1 ;y 2 1

LECTURE 15 + C+F. = A 11 x 1x1 +2A 12 x 1x2 + A 22 x 2x2 + B 1 x 1 + B 2 x 2. xi y 2 = ~y 2 (x 1 ;x 2 ) x 2 = ~x 2 (y 1 ;y 2  1 LECTURE 5 Characteristics and the Classication of Second Order Linear PDEs Let us now consider the case of a general second order linear PDE in two variables; (5.) where (5.) 0 P i;j A ij xix j + P i,

More information

Lecture 6: Gaussian Channels. Copyright G. Caire (Sample Lectures) 157

Lecture 6: Gaussian Channels. Copyright G. Caire (Sample Lectures) 157 Lecture 6: Gaussian Channels Copyright G. Caire (Sample Lectures) 157 Differential entropy (1) Definition 18. The (joint) differential entropy of a continuous random vector X n p X n(x) over R is: Z h(x

More information

Inference for High Dimensional Robust Regression

Inference for High Dimensional Robust Regression Department of Statistics UC Berkeley Stanford-Berkeley Joint Colloquium, 2015 Table of Contents 1 Background 2 Main Results 3 OLS: A Motivating Example Table of Contents 1 Background 2 Main Results 3 OLS:

More information

Lecture 21: Expectation of CRVs, Fatou s Lemma and DCT Integration of Continuous Random Variables

Lecture 21: Expectation of CRVs, Fatou s Lemma and DCT Integration of Continuous Random Variables EE50: Probability Foundations for Electrical Engineers July-November 205 Lecture 2: Expectation of CRVs, Fatou s Lemma and DCT Lecturer: Krishna Jagannathan Scribe: Jainam Doshi In the present lecture,

More information

April 25 May 6, 2016, Verona, Italy. GAME THEORY and APPLICATIONS Mikhail Ivanov Krastanov

April 25 May 6, 2016, Verona, Italy. GAME THEORY and APPLICATIONS Mikhail Ivanov Krastanov April 25 May 6, 2016, Verona, Italy GAME THEORY and APPLICATIONS Mikhail Ivanov Krastanov Games in normal form There are given n-players. The set of all strategies (possible actions) of the i-th player

More information

Lecture 4. f X T, (x t, ) = f X,T (x, t ) f T (t )

Lecture 4. f X T, (x t, ) = f X,T (x, t ) f T (t ) LECURE NOES 21 Lecture 4 7. Sufficient statistics Consider the usual statistical setup: the data is X and the paramter is. o gain information about the parameter we study various functions of the data

More information