Optimally Bounding the GES (Hampel's Problem)

Let $\hat\mu_n$ be a (smooth) location M-estimate based on the monotone location score function $\psi$ and the consistent dispersion estimate $\hat\sigma_n$. We have shown before that the asymptotic variance of $\hat\mu_n$ at the central normal model $H_0 = N(\mu,\sigma^2)$ is given by

$$AV(\psi,H_0) = \sigma^2\,\frac{E_{H_0}\left\{\psi^2\left(\tfrac{y-\mu}{\sigma}\right)\right\}}{\left[E_{H_0}\left\{\psi\left(\tfrac{y-\mu}{\sigma}\right)\left(\tfrac{y-\mu}{\sigma}\right)\right\}\right]^2}. \qquad (1)$$

We have also shown before that the gross error sensitivity of $\hat\mu_n$ at the central normal model $N(\mu,\sigma^2)$ is given by

$$\gamma(\psi,H_0) = \frac{\sigma\,\psi(\infty)}{E_{H_0}\left\{\psi\left(\tfrac{y-\mu}{\sigma}\right)\left(\tfrac{y-\mu}{\sigma}\right)\right\}}. \qquad (2)$$

In his Ph.D. dissertation (completed in 1967), Hampel posed and solved the following problem.

Hampel's Optimality Problem: Consider the class $\mathcal{C}$ of all location M-estimates with non-decreasing and bounded score functions $\psi$. We wish to find the most efficient robust estimate within this class. Suppose that we measure efficiency by the asymptotic variance (at the target model $H_0$) and robustness by the gross error sensitivity. Therefore, we wish to choose the location M-estimate within $\mathcal{C}$ with the smallest asymptotic variance $AV(\psi,H_0)$ among all estimates in $\mathcal{C}$ with relatively small gross-error sensitivity $\gamma(\psi,H_0)$. Hampel's optimality problem can be mathematically stated as follows: find $\psi^* \in \mathcal{C}$ such that

$$\psi^* = \arg\min_{\gamma(\psi,H_0)\,\le\,K} AV(\psi,H_0). \qquad (3)$$

The change of variables $z = (y-\mu)/\sigma$ in (1) and (2) gives

$$AV(\psi,H_0) = \sigma^2\,\frac{E_{H_0}\left\{\psi^2\left(\tfrac{y-\mu}{\sigma}\right)\right\}}{\left[E_{H_0}\left\{\psi\left(\tfrac{y-\mu}{\sigma}\right)\left(\tfrac{y-\mu}{\sigma}\right)\right\}\right]^2} = \sigma^2\,\frac{E_{\Phi}\{\psi^2(z)\}}{\left[E_{\Phi}\{\psi(z)\,z\}\right]^2} \qquad (4)$$

and

$$\gamma(\psi,H_0) = \frac{\sigma\,\psi(\infty)}{E_{H_0}\left\{\psi\left(\tfrac{y-\mu}{\sigma}\right)\left(\tfrac{y-\mu}{\sigma}\right)\right\}} = \frac{\sigma\,\psi(\infty)}{E_{\Phi}\{\psi(z)\,z\}}. \qquad (5)$$
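As a quick numerical illustration of (4) and (5), the following sketch (an addition to these notes, assuming Python with numpy and scipy) evaluates $AV(\psi,\Phi)$ and $\gamma(\psi,\Phi)$ by quadrature for the unstandardized Huber score $\psi(z)=\max(-c,\min(c,z))$ with $c = 1.345$:

```python
# Numerical illustration of (4) and (5) at H0 = Phi (mu = 0, sigma = 1),
# for the unstandardized Huber score psi(z) = max(-c, min(c, z)).
import numpy as np
from scipy import integrate
from scipy.stats import norm

def psi(z, c=1.345):
    return np.clip(z, -c, c)

# E_Phi{psi^2(z)} and E_Phi{psi(z) z} by numerical quadrature
num, _ = integrate.quad(lambda z: psi(z)**2 * norm.pdf(z), -np.inf, np.inf)
den, _ = integrate.quad(lambda z: psi(z) * z * norm.pdf(z), -np.inf, np.inf)

AV  = num / den**2        # asymptotic variance (4) with sigma = 1; roughly 1.05 here
GES = psi(np.inf) / den   # gross error sensitivity (5); psi(inf) = c
print(AV, GES)
```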

Therefore, except for the fixed scaling factors ($\sigma^2$ and $\sigma$) the asymptotic variance and the GES of M-estimates essentially depend on the shape of the distribution $H_0$ and not on the particular values of its location and dispersion parameters. Therefore, without loss of generality we can assume that the fixed location and dispersion parameters are $\mu = 0$ and $\sigma = 1$ (and so $H_0 = \Phi$). Moreover, since (4) and (5) are invariant under multiplication of $\psi$ by a non-zero constant, we can assume without loss of generality that

$$E_{\Phi}\{\psi(y)\,y\} = 1. \qquad (6)$$

Therefore, problem (3) can be re-stated as follows.

Minimize (in $\psi$)  $E_{\Phi}\{\psi^2(z)\}$

Subject to

(i) $E_{\Phi}\{\psi(y)\,y\} = 1$, and

(ii) $\psi(\infty) \le K$.

For future reference let

$$\mathcal{C}_K = \{\psi \in \mathcal{C} : \text{(i) and (ii) above hold}\}. \qquad (7)$$

We have the following result:

Theorem 1  Let $H_0 = N(\mu,\sigma^2)$ and consider the family of Huber's score functions:

$$\psi_c(y) = \begin{cases} y/[2\Phi(c)-1], & |y| \le c \\ c\,\mathrm{sign}(y)/[2\Phi(c)-1], & |y| > c. \end{cases} \qquad (8)$$

Then

(a) If $K < 1/[2\varphi(0)]$, then $\mathcal{C}_K = \emptyset$ (the empty set). In other words,

$$\gamma(\psi,\Phi) \ge \sqrt{\pi/2}, \quad \text{for all } \psi. \qquad (9)$$

(b) The median minimizes the gross error sensitivity among all location M-estimates.

(c) If $K > 1/[2\varphi(0)]$, then there exists a unique constant $c > 0$ such that $\psi_c \in \mathcal{C}_K$, that is, $\gamma(\psi_c,\Phi) = K$.
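Before turning to the proof, here is a minimal sketch (an addition to these notes, assuming Python with numpy and scipy) of the standardized Huber family (8) together with a numerical check that it satisfies constraint (i) for every $c > 0$:

```python
# The standardized Huber family (8) and a check of constraint (i):
# E_Phi{ psi_c(y) * y } = 1 for every c > 0.
import numpy as np
from scipy import integrate
from scipy.stats import norm

def psi_c(y, c):
    """Huber score (8), scaled so that E_Phi{psi_c(y) y} = 1."""
    return np.clip(y, -c, c) / (2.0 * norm.cdf(c) - 1.0)

for c in (0.5, 1.0, 1.345, 2.0):
    val, _ = integrate.quad(lambda y: psi_c(y, c) * y * norm.pdf(y),
                            -np.inf, np.inf)
    print(f"c = {c:5.3f}:  E_Phi{{psi_c(y) y}} = {val:.6f}")   # 1 up to quadrature error
```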

Proof: To prove (a) write

$$E_{\Phi}\{\psi(y)\,y\} = \int \psi(y)\,y\,\varphi(y)\,dy = 2\int_0^{\infty} \psi(y)\,y\,\varphi(y)\,dy \le 2\psi(\infty)\int_0^{\infty} y\,\varphi(y)\,dy = -2\psi(\infty)\int_0^{\infty} \varphi'(y)\,dy = -2\psi(\infty)[\varphi(\infty) - \varphi(0)] = 2\psi(\infty)\varphi(0) = 2\psi(\infty)/\sqrt{2\pi}.$$

Therefore,

$$\gamma(\psi,\Phi) = \frac{\psi(\infty)}{E_{\Phi}\{\psi(y)\,y\}} \ge \frac{\sqrt{2\pi}}{2} = \sqrt{\frac{\pi}{2}}.$$

Part (b) follows directly from (a) and the fact that in the case of the median $\psi(y) = \mathrm{sign}(y)$ and

$$\gamma(\mathrm{sign},\Phi) = \frac{\mathrm{sign}(\infty)}{E_{\Phi}\{\mathrm{sign}(y)\,y\}} = \frac{1}{\int \mathrm{sign}(y)\,y\,\varphi(y)\,dy} = \frac{1}{2\int_0^{\infty} y\,\varphi(y)\,dy} = \frac{1}{-2\int_0^{\infty} \varphi'(y)\,dy} = \frac{1}{-2\varphi(\infty)+2\varphi(0)} = \frac{1}{2\varphi(0)} = \sqrt{\frac{\pi}{2}}.$$
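As a quick numerical check of part (b) (an added sketch, again assuming Python with numpy and scipy), the GES of the median computed by quadrature agrees with the lower bound $\sqrt{\pi/2} \approx 1.2533$ in (9):

```python
# Check of Theorem 1(b): the median attains the GES lower bound sqrt(pi/2).
import numpy as np
from scipy import integrate
from scipy.stats import norm

den, _ = integrate.quad(lambda y: np.sign(y) * y * norm.pdf(y), -np.inf, np.inf)
ges_median = 1.0 / den                      # psi(infinity) = 1 for psi = sign
print(ges_median, np.sqrt(np.pi / 2.0))     # both approximately 1.2533
```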

To prove (c) notice that $E_{\Phi}\{\psi_c(y)\,y\} = 1$ and therefore

$$\gamma(\psi_c,\Phi) = \frac{c}{2\Phi(c)-1} = g(c).$$

The function $g(c)$ is continuous, with $\lim_{c\to\infty} g(c) = \infty$ and

$$\lim_{c\to 0} g(c) = \lim_{c\to 0}\frac{c}{2\Phi(c)-1} = \lim_{c\to 0}\frac{1}{2\varphi(c)} = \sqrt{\pi/2}.$$

This proves the existence part of (c). The uniqueness part of (c) follows from the fact that

$$g'(c) = \frac{2\Phi(c)-1-2c\varphi(c)}{[2\Phi(c)-1]^2} > 0, \quad \text{for all } c > 0$$

(see Problem 3).

Theorem 2  Given $K > 1/[2\varphi(0)]$, there exists a unique $c > 0$ such that $\gamma(\psi_c,\Phi) = K$ (see (8)) and

$$AV(\psi,\Phi) \ge AV(\psi_c,\Phi), \quad \text{for all } \psi \in \mathcal{C}_K.$$

Proof: Let $K > 1/[2\varphi(0)]$. By the previous theorem we know there is a unique $c = c(K)$ such that $\gamma(\psi_c,\Phi) = K$. Set

$$\beta = \beta(c) = \frac{1}{2\Phi(c)-1}.$$

Now write

$$J(\psi) = E_{\Phi}\{\psi^2(y)\} \quad\text{and}\quad I(\psi) = \int [\psi(y) - \beta y]^2\,\varphi(y)\,dy.$$

For all $\psi \in \mathcal{C}_K$,

$$I(\psi) = \int [\psi^2(y) + \beta^2 y^2 - 2\beta y\,\psi(y)]\,\varphi(y)\,dy = J(\psi) + \beta^2 - 2\beta. \qquad (10)$$

Therefore, $\psi$ minimizes $J(\psi)$ over $\mathcal{C}_K$ if and only if $\psi$ minimizes $I(\psi)$ over $\mathcal{C}_K$. By symmetry,

$$I(\psi) = 2\int_0^{\infty} [\psi(y) - \beta y]^2\,\varphi(y)\,dy$$

and so it suffices to study the squared difference $[\psi(y) - \beta y]^2$ for $y \ge 0$. Clearly, since $\psi \in \mathcal{C}_K$, for $y \le c$ we have

$$[\psi(y) - \beta y]^2 \ge 0 = [\psi_c(y) - \beta y]^2, \quad \text{for } y \le c. \qquad (11)$$

On the other hand, since $\psi \in \mathcal{C}_K$, for $y > c$ we have

$$0 \le \psi(y) \le K = \psi_c(y), \quad \text{for } y > c,$$

and so

$$[\psi(y) - \beta y]^2 \ge [K - \beta y]^2 = [\psi_c(y) - \beta y]^2, \quad \text{for } y > c. \qquad (12)$$

From (11) and (12) we have (see figure)

$$I(\psi) = \int [\psi(y) - \beta y]^2\,\varphi(y)\,dy \ge \int [\psi_c(y) - \beta y]^2\,\varphi(y)\,dy = I(\psi_c). \qquad (13)$$

Finally the theorem follows (i.e. $\psi^*(y) = \psi_c(y)$) from (10) and (13).

By Theorem 2, for each $K > \sqrt{\pi/2}$ there is a unique $c > 0$ such that $\psi_c(y)$ (see Equation (8)) minimizes the asymptotic variance among all $\psi$ functions with $\gamma(\psi,\Phi) \le K$. The specific value of $c$, $c = c(K)$, can be determined from the equation

$$\frac{c}{2\Phi(c)-1} = K. \qquad (14)$$
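Equation (14) has no closed-form solution, but $c(K)$ is easy to compute numerically because the left side is continuous and strictly increasing in $c$ (this is essentially Problem 2(a)). The following is an added sketch, assuming Python with numpy and scipy:

```python
# Solving equation (14): c / (2*Phi(c) - 1) = K for c = c(K), K > sqrt(pi/2),
# using a bracketing root finder.
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def g(c):
    """g(c) = gamma(psi_c, Phi) = c / (2*Phi(c) - 1), strictly increasing in c."""
    return c / (2.0 * norm.cdf(c) - 1.0)

def c_of_K(K):
    if K <= np.sqrt(np.pi / 2.0):
        raise ValueError("no solution: K must exceed sqrt(pi/2) ~ 1.2533")
    # g(c) -> sqrt(pi/2) as c -> 0 and g(c) -> infinity as c -> infinity,
    # so a root exists; grow the bracket until it contains it.
    lo, hi = 1e-8, 1.0
    while g(hi) < K:
        hi *= 2.0
    return brentq(lambda c: g(c) - K, lo, hi)

for K in (1.3, 1.5, 2.0):
    c = c_of_K(K)
    print(f"K = {K}:  c(K) = {c:.4f},  check g(c(K)) = {g(c):.4f}")
```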

Minimax Variance (Huber's Problem)

Consider the symmetric contamination neighborhood

$$\mathcal{F}_{\epsilon} = \{H : H(y) = (1-\epsilon)\Phi + \epsilon G,\; G \text{ is symmetric}\}. \qquad (15)$$

The technical advantage of restricting attention to $\mathcal{F}_{\epsilon}$ is that for any M-estimate $\hat\mu$ we have

$$\hat\mu(H) = 0, \quad \text{for all } H \in \mathcal{F}_{\epsilon}.$$

Therefore, we do not need to worry about the problem of asymptotic bias when we restrict ourselves to $\mathcal{F}_{\epsilon}$. On the other hand, the serious practical limitations imposed by this restriction are obvious.

The original problem considered by Huber was solved assuming that the dispersion parameter $\sigma$ is known. The solution of Huber's problem when $\sigma$ is unknown (and must be robustly estimated) was obtained by Li and Zamar (1991).

Huber's goal was to find the location M-estimate score function $\psi$ that minimizes the maximum asymptotic variance over the symmetric contamination family (15). Huber's mathematical approach was very elegant and goes along the following lines. Let

$$AV(\psi) = \sup_{H \in \mathcal{F}_{\epsilon}} AV(\psi,H)$$

be the maximum asymptotic variance over $\mathcal{F}_{\epsilon}$ for the location M-estimate with score function $\psi$. Robust M-estimates are then expected to have a relatively small and stable AV over the contamination neighborhood. Therefore, an estimate with smaller $AV(\psi)$ should be preferred, from a robustness point of view (other things being equal). Then, we wish to consider the minimization problem

$$\inf_{\psi} AV(\psi) = \inf_{\psi}\,\sup_{H \in \mathcal{F}_{\epsilon}} AV(\psi,H). \qquad (16)$$

Suppose that one can prove (as Huber did) that $\psi^*$ solves (16) if and only if $\psi^*$ solves the dual problem

$$\sup_{H \in \mathcal{F}_{\epsilon}}\,\inf_{\psi} AV(\psi,H). \qquad (17)$$

Then, we may wish to solve the dual optimality problem (17) instead of the (harder) optimality problem (16). Let

$$\psi_H(y) = -\frac{h'(y)}{h(y)}$$

be the maximum likelihood score function under $H$ and let $I(H)$ be the corresponding Fisher Information index

$$I(H) = E_H\{\psi_H^2(y)\} = \int \left(\frac{h'(y)}{h(y)}\right)^2 h(y)\,dy.$$

It is not difficult to verify that

$$\sup_{H \in \mathcal{F}_{\epsilon}}\,\inf_{\psi} AV(\psi,H) = \sup_{H \in \mathcal{F}_{\epsilon}} AV(\psi_H,H) = \sup_{H \in \mathcal{F}_{\epsilon}} \frac{1}{I(H)} = \frac{1}{\inf_{H \in \mathcal{F}_{\epsilon}} I(H)}.$$

Therefore, the original min-max problem reduces to that of finding the least favorable distribution, that is, the distribution with smallest Fisher information in the family (15). Huber did precisely that and found his famous truncated-linear score function when $H_0$ is normal.
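The Fisher information index $I(H)$ is straightforward to evaluate numerically. The sketch below (an addition to these notes, assuming Python with numpy and scipy) recovers $I(\Phi) = 1$ and shows that a contaminated normal member of $\mathcal{F}_{\epsilon}$ carries less information for location:

```python
# Numerical Fisher information for location: I(H) = int (h'(y)/h(y))^2 h(y) dy.
import numpy as np
from scipy import integrate
from scipy.stats import norm

def fisher_location(h, h_prime, lim=30.0):
    val, _ = integrate.quad(lambda y: h_prime(y)**2 / h(y), -lim, lim)
    return val

# Standard normal: h'(y) = -y * phi(y), so I(Phi) = E{y^2} = 1.
I_normal = fisher_location(norm.pdf, lambda y: -y * norm.pdf(y))

# A symmetric contaminated normal in F_eps: H = 0.9 * N(0,1) + 0.1 * N(0, 3^2).
eps, s = 0.1, 3.0
h  = lambda y: (1 - eps) * norm.pdf(y) + eps * norm.pdf(y, scale=s)
hp = lambda y: -(1 - eps) * y * norm.pdf(y) - eps * (y / s**2) * norm.pdf(y, scale=s)
I_contam = fisher_location(h, hp)

print(I_normal, I_contam)   # approximately 1.0, and a value below 1
```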

An Alternative Derivation of Huber's Optimality Result

We will see that in the case of contamination neighborhoods Huber's optimality result can be directly obtained using Hampel's result, and vice-versa. That is, the two problems are mathematically equivalent. This is not too surprising if we notice that the two problems yield the same family of optimal score functions.

The elegant approach of solving the min-max variance problem by first finding the least favorable distribution is not easy to apply to models other than the simple pure location model treated by Huber. On the other hand, the approach of attacking the min-max problem directly (a brute force approach, so to speak) shows better promise of possible extensions, as it has already been successfully applied by Li and Zamar (1991) to the location-dispersion model.

It is easy to show (using a simplified version of the proof of Theorem ?? in Section ??) that in the case of known dispersion (taken equal to one, without loss of generality) the asymptotic distribution of location M-estimates is normal with mean 0 and variance

$$AV(\psi,H) = \frac{E_H\{\psi^2(x)\}}{\left[E_H\{\psi'(x)\}\right]^2}. \qquad (18)$$

We have the following result.

Theorem 3  Suppose that the dispersion parameter $\sigma$ is known (and then taken equal to one without loss of generality). If $\psi$ is non-decreasing then

$$\sup_{H \in \mathcal{F}_{\epsilon}} AV(\psi,H) = \frac{1}{1-\epsilon}\,AV(\psi,\Phi) + \frac{\epsilon}{(1-\epsilon)^2}\,\gamma^2(\psi,\Phi), \qquad (19)$$

where $\gamma(\psi,\Phi)$ is the gross-error sensitivity at the normal model. Therefore the maximum asymptotic variance under the normal $\epsilon$-contamination family $\mathcal{F}_{\epsilon}$ is a linear combination of the normal asymptotic variance and the squared gross-error sensitivity, with coefficients $1/(1-\epsilon)$ and $\epsilon/(1-\epsilon)^2$, respectively.
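As an added numerical sketch (assuming Python with numpy and scipy), the right side of (19) is easy to evaluate for the unstandardized Huber score with known scale, using $AV(\psi,\Phi) = E_\Phi\{\psi^2\}/[E_\Phi\{\psi'\}]^2$ and $\gamma(\psi,\Phi) = \psi(\infty)/E_\Phi\{\psi'\}$:

```python
# Maximum asymptotic variance over F_eps via formula (19), for the Huber score
# psi(y) = max(-c, min(c, y)) with known scale.
import numpy as np
from scipy import integrate
from scipy.stats import norm

def huber_max_av(c, eps):
    Epsi2, _ = integrate.quad(lambda y: np.clip(y, -c, c)**2 * norm.pdf(y),
                              -np.inf, np.inf)
    Edpsi = 2.0 * norm.cdf(c) - 1.0          # E_Phi{psi'} = P(|y| <= c)
    av  = Epsi2 / Edpsi**2                   # AV(psi, Phi), i.e. (18) at H = Phi
    ges = c / Edpsi                          # gamma(psi, Phi) = psi(inf) / E_Phi{psi'}
    return av / (1 - eps) + eps * ges**2 / (1 - eps)**2

for c in (0.5, 1.0, 1.345, 2.0):
    print(f"c = {c:5.3f}:  max AV over F_0.1 = {huber_max_av(c, 0.1):.4f}")
```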

Proof: Let $H \in \mathcal{F}_{\epsilon}$, that is,

$$H = (1-\epsilon)N(0,1) + \epsilon G, \quad G \text{ symmetric}. \qquad (20)$$

From (18) and (20) we have

$$AV(\psi,H) = \frac{(1-\epsilon)E_{\Phi}\{\psi^2(x)\} + \epsilon E_{G}\{\psi^2(x)\}}{\left[(1-\epsilon)E_{\Phi}\{\psi'(x)\} + \epsilon E_{G}\{\psi'(x)\}\right]^2}.$$

Since $\psi$ is non-decreasing, it is clear that

$$(1-\epsilon)E_{\Phi}\{\psi^2(x)\} + \epsilon E_{G}\{\psi^2(x)\} \le (1-\epsilon)E_{\Phi}\{\psi^2(x)\} + \epsilon\,\psi^2(\infty). \qquad (21)$$

Moreover, since $\psi'(x) \ge 0$ for all $x$, we can write

$$(1-\epsilon)E_{\Phi}\{\psi'(x)\} + \epsilon E_{G}\{\psi'(x)\} \ge (1-\epsilon)E_{\Phi}\{\psi'(x)\}. \qquad (22)$$

Using (21) and (22) we obtain

$$AV(\psi,H) = \frac{(1-\epsilon)E_{\Phi}\{\psi^2(x)\} + \epsilon E_{G}\{\psi^2(x)\}}{\left[(1-\epsilon)E_{\Phi}\{\psi'(x)\} + \epsilon E_{G}\{\psi'(x)\}\right]^2} \le \frac{(1-\epsilon)E_{\Phi}\{\psi^2(x)\} + \epsilon\,\psi^2(\infty)}{\left[(1-\epsilon)E_{\Phi}\{\psi'(x)\}\right]^2} = \frac{1}{1-\epsilon}\,\frac{E_{\Phi}\{\psi^2(x)\}}{\left[E_{\Phi}\{\psi'(x)\}\right]^2} + \frac{\epsilon}{(1-\epsilon)^2}\,\frac{\psi^2(\infty)}{\left[E_{\Phi}\{\psi'(x)\}\right]^2} = \frac{1}{1-\epsilon}\,AV(\psi,\Phi) + \frac{\epsilon}{(1-\epsilon)^2}\,\gamma^2(\psi,\Phi),$$

for all $H \in \mathcal{F}_{\epsilon}$. Therefore,

$$\sup_{H \in \mathcal{F}_{\epsilon}} AV(\psi,H) \le \frac{1}{1-\epsilon}\,AV(\psi,\Phi) + \frac{\epsilon}{(1-\epsilon)^2}\,\gamma^2(\psi,\Phi), \qquad (23)$$

where $\gamma(\psi,\Phi)$ is the gross-error-sensitivity of the M-estimate at the normal model.

On the other hand, taking

$$H_y = (1-\epsilon)N(0,1) + \epsilon\,\mathrm{Uniform}(y-\delta,\,y+\delta;\,-y-\delta,\,-y+\delta)$$

we have that

$$\sup_{H \in \mathcal{F}_{\epsilon}} AV(\psi,H) \ge \sup_{y} AV(\psi,H_y) \ge \lim_{y\to\infty} AV(\psi,H_y) = \frac{1}{1-\epsilon}\,AV(\psi,\Phi) + \frac{\epsilon}{(1-\epsilon)^2}\,\gamma^2(\psi,\Phi). \qquad (24)$$

Therefore, (19) follows from (23) and (24).

Theorem 3 shows that the maximum asymptotic variance minimized by Huber is the Lagrangian representation of Hampel's optimality problem. Hampel sets an explicit bound on the gross-error-sensitivity while Huber (in a rather fortuitous way) had the gross-error-sensitivity included as a penalty term in the maximum asymptotic variance he minimized.

The following result formally shows that Hampel's and Huber's problems are equivalent from a mathematical point of view. However, it is very important to understand that the mathematical equivalence of the two problems doesn't imply their statistical equivalence. More precisely, Hampel's problem has a deep statistical meaning while Huber's problem is just a nice mathematical elaboration.

Theorem 4 (Equivalence between Huber's and Hampel's optimal location results)

(a) Suppose that $\psi^*$ solves Huber's optimality problem for some $0 < \epsilon < 1/2$. Then $\psi^*$ solves Hampel's optimality problem with gross-error-sensitivity bound equal to $K = \gamma(\psi^*,\Phi)$.

(b) For each $0 < \epsilon < 1/2$ the solution $\psi^*$ to Huber's min-max variance problem on $\mathcal{F}_{\epsilon}$ belongs to the set of score functions $\psi_c$ given by (8). That is,

$$\min_{\psi}\max_{H \in \mathcal{F}_{\epsilon}} AV(\psi,H) = \min_{c>0}\max_{H \in \mathcal{F}_{\epsilon}} AV(\psi_c,H),$$

where the min on the left side is taken over the family of all monotone score functions $\psi$.

Proof: To prove Part (a) notice that, by hypothesis,

$$\max_{H \in \mathcal{F}_{\epsilon}} AV(\psi^*,H) \le \max_{H \in \mathcal{F}_{\epsilon}} AV(\psi,H), \quad \text{for all } \psi.$$

Let

$$\alpha = \frac{1}{1-\epsilon}, \qquad \beta = \frac{\epsilon}{(1-\epsilon)^2}. \qquad (25)$$

By Theorem 3 we have

$$\alpha\,AV(\psi^*,\Phi) + \beta\,\gamma^2(\psi^*,\Phi) \le \alpha\,AV(\psi,\Phi) + \beta\,\gamma^2(\psi,\Phi), \quad \text{for all } \psi.$$

Hence, with $K = \gamma(\psi^*,\Phi)$,

$$\alpha\,AV(\psi^*,\Phi) + \beta K^2 \le \alpha\,AV(\psi,\Phi) + \beta\,\gamma^2(\psi,\Phi), \quad \text{for all } \psi,$$

or equivalently

$$AV(\psi^*,\Phi) \le AV(\psi,\Phi) + \frac{\beta}{\alpha}\,\gamma^2(\psi,\Phi) - \frac{\beta}{\alpha}\,K^2, \quad \text{for all } \psi.$$

Therefore

$$AV(\psi^*,\Phi) \le AV(\psi,\Phi), \quad \text{for all } \psi \text{ with } \gamma(\psi,\Phi) \le K,$$

and Part (a) follows.

To prove part (b), let $0 < \epsilon < 1/2$ be fixed and set (as in (25)) $\alpha = 1/(1-\epsilon)$ and $\beta = \epsilon/(1-\epsilon)^2$. Denote by $\psi_c$ the solution to Hampel's problem with bound $K(c)$ on the gross-error-sensitivity. Then

$$AV(\psi_c,\Phi) \le AV(\psi,\Phi) \quad \text{for all } \gamma(\psi,\Phi) \le K(c).$$

This implies the weaker statement

$$AV(\psi_c,\Phi) \le AV(\psi,\Phi) \quad \text{for all } \gamma(\psi,\Phi) = K(c). \qquad (26)$$

Using (26) we can write

$$\alpha\,AV(\psi_c,\Phi) + \beta K(c)^2 \le \alpha\,AV(\psi,\Phi) + \beta K(c)^2, \quad \text{for all } \gamma(\psi,\Phi) = K(c).$$

Hence, by Theorem 3,

$$\max_{H \in \mathcal{F}_{\epsilon}} AV(\psi_c,H) \le \max_{H \in \mathcal{F}_{\epsilon}} AV(\psi,H) \quad \text{for all } \gamma(\psi,\Phi) = K(c). \qquad (27)$$

Let

$$g(c) = \alpha\,AV(\psi_c,\Phi) + \beta K(c)^2 = \frac{1}{1-\epsilon}\,AV(\psi_c,\Phi) + \frac{\epsilon}{(1-\epsilon)^2}\,K(c)^2 \qquad (28)$$

and denote by $c^*$ the minimizer of $g(c)$, that is,

$$\min_{c>0} g(c) = g(c^*) \quad \text{(see Problem 2)}. \qquad (29)$$

Finally, by (27), (29) and Theorem 1 we can write

$$\max_{H \in \mathcal{F}_{\epsilon}} AV(\psi_{c^*},H) \le \max_{H \in \mathcal{F}_{\epsilon}} AV(\psi_c,H), \quad \text{for all } c > 0,$$

$$\le \max_{H \in \mathcal{F}_{\epsilon}} AV(\psi,H), \quad \text{for all } \gamma(\psi,\Phi) = K(c) \text{ and for all } c > 0,$$

$$\le \max_{H \in \mathcal{F}_{\epsilon}} AV(\psi,H), \quad \text{for all } \psi.$$
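The objective $g(c)$ in (28) is a one-dimensional function, so the minimizer $c^* = c^*(\epsilon)$ can be found numerically (this is Problem 2(b)). The following is an added sketch, assuming Python with numpy and scipy:

```python
# Minimizing g(c) in (28) over c > 0 for a fixed eps:
#   g(c) = AV(psi_c, Phi)/(1 - eps) + eps * K(c)^2 / (1 - eps)^2,
# with K(c) = c / (2*Phi(c) - 1).  Both AV and K are invariant under rescaling
# of psi, so the unstandardized Huber score can be used.
import numpy as np
from scipy import integrate
from scipy.optimize import minimize_scalar
from scipy.stats import norm

def av_huber(c):
    Epsi2, _ = integrate.quad(lambda y: np.clip(y, -c, c)**2 * norm.pdf(y),
                              -np.inf, np.inf)
    return Epsi2 / (2.0 * norm.cdf(c) - 1.0)**2

def g(c, eps):
    K = c / (2.0 * norm.cdf(c) - 1.0)
    return av_huber(c) / (1 - eps) + eps * K**2 / (1 - eps)**2

for eps in (0.01, 0.05, 0.10, 0.25):
    res = minimize_scalar(g, bounds=(1e-3, 10.0), method="bounded", args=(eps,))
    print(f"eps = {eps:.2f}:  c*(eps) = {res.x:.4f},  g(c*) = {res.fun:.4f}")
```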

Problems

Problem 1  Show that if $\psi^*$ satisfies

$$\psi^* = \arg\min_{\gamma(\psi,H_0)\,\le\,K_1} AV(\psi,\Phi)$$

for some $K_1 > \sqrt{\pi/2}$, then $\psi^*$ also satisfies

$$\psi^* = \arg\min_{AV(\psi,H_0)\,\le\,K_2} \gamma(\psi,H_0)$$

for some $K_2 > 1$.

Problem 2  (a) Show that Equation (14) has a unique solution and write a computer program to solve it for each $K > \sqrt{\pi/2}$.

(b) Let $g(c) = \alpha\,AV(\psi_c,\Phi) + \beta K(c)^2$ and recall that $\alpha = 1/(1-\epsilon)$, $\beta = \epsilon/(1-\epsilon)^2$ (see (25)). Find (numerically if necessary) the minimizing value

$$c^* = c^*(\epsilon) = \arg\min_{c>0} g(c), \qquad (30)$$

for given $0 < \epsilon < 1/2$.

(c) Find

$$\lim_{\epsilon \to 0} c^*(\epsilon) = c^*_0$$

and

$$\lim_{\epsilon \to 0.5} c^*(\epsilon) = c^*_{1/2}.$$

(d) Explain the statistical meaning of $c^*_0$ and $c^*_{1/2}$.

Problem 3  Verify that

$$\frac{2\Phi(c) - 1 - 2c\varphi(c)}{[2\Phi(c)-1]^2} > 0$$

for all $c > 0$.

Problem 4  Find a sharp lower bound for the gross error sensitivity of dispersion M-estimates at the normal central model. Recall that

$$GES(s,\Phi) = s_0\,\frac{\max\{\chi(\infty)-b,\;b\}}{E_{H_0}\left\{\chi'\left(\tfrac{y-m_0}{s_0}\right)\left(\tfrac{y-m_0}{s_0}\right)\right\}}, \qquad s_0 = s(H_0),$$

and argue that, without loss of generality, we can work with the simplified expression

$$GES(s,\Phi) = \frac{\max\{\chi(\infty)-b,\;b\}}{E_{\Phi}\{\chi'(y)\,y\}}.$$

Problem 5  Solve Hampel's optimality problem of minimizing the asymptotic variance at the central normal model subject to a bound on the GES for dispersion M-estimates at the normal central model.

Problem 6  Suppose that there exists $(x_0,y_0)$ such that

$$\min_{x} g(x,y_0) = \max_{y} g(x_0,y) = g(x_0,y_0). \qquad (31)$$

A point $(x_0,y_0)$ with this property is called a saddle point for the function $g$. Prove that if $(x_0,y_0)$ is a saddle point for $g$ then

$$\min_{x}\max_{y} g(x,y) = \max_{y}\min_{x} g(x,y) = g(x_0,y_0).$$

Problem 7  Find the least favorable distribution $H^*$ for the family $\mathcal{F}_1$ of symmetric distributions with finite variance. Show that in this case (31) holds with $g(x,y) = AV(\psi,H)$, $x_0 = \psi_{H^*}$ and $y_0 = H^*$. Discuss the convenience and practical relevance issues in relation with $\mathcal{F}_1$ and the strengths and weaknesses of the corresponding minimax estimate.

Problem 8  Find the least favorable distribution $H^*$ for the family $\mathcal{F}_2$ of symmetric and strongly unimodal distributions $H$ with $h(\mu) \ge k$, where $\mu$ is the common center of symmetry for the distributions in $\mathcal{F}_2$. Show that in this case (31) also holds and therefore the min-max and max-min problems are equivalent. As in Problem 7, discuss the convenience and practical relevance of the family $\mathcal{F}_2$ and the strengths and weaknesses of the corresponding minimax estimate.

References

Huber, P. J. (1964). Robust estimation of a location parameter. Ann. Math. Statist. 35, 73-101.

Hampel, F. R. (1974). The influence curve and its role in robust estimation. J. Am. Stat. Assoc. 69, 383-393.

Li, B. and Zamar, R. H. (1991). Min-max asymptotic variance when scale is unknown. Statistics and Probability Letters 11, 139-145.
