DERIVING TESTS OF THE REGRESSION MODEL USING THE DENSITY FUNCTION OF A MAXIMAL INVARIANT

Jahar L. Bhowmik and Maxwell L. King
Department of Econometrics and Business Statistics
Monash University, Clayton, Victoria 3800, Australia

Abstract: In the context of the linear regression model in which some regression coefficients are of interest and others are purely nuisance parameters, we derive the density function of a maximal invariant statistic. This allows the construction of a range of optimal test statistics, including the locally best invariant test, which is equivalent to the well-known one-sided t-test. The approach is also extended to the general linear regression model in which the covariance matrix is non-scalar, and to non-linear regression models.

Key words: Invariance; linear regression model; locally best invariant test; non-linear regression model; nuisance parameters; t-test.

JEL classification: C, C.

1. Introduction

This paper is concerned with the problem of testing the null hypothesis that one regression coefficient is zero, against the alternative that it is non-negative, in the context of linear and non-linear regression models. Statistical models, and particularly those used by econometricians, involve a large number of influences. Such models contain two types of parameters: those of interest, and those not of immediate interest, which are known as nuisance parameters. Their presence causes unexpected complications in statistical inference. Kalbfleisch and Sprott (1970) discussed methods of eliminating nuisance parameters from the likelihood function so that inference can be made about the parameters of interest.

In practice, many statistical problems, including hypothesis testing problems, display symmetries, which impose additional restrictions on the choice of proper statistical tests. In the statistical testing literature, the idea of invariance dates back half a century. The unpublished work of Hunt and Stein (1946) (see Lehmann, 1959 and 1986) introduced the principle and demonstrated its applicability and meaningfulness in the framework of hypothesis testing. According to Lehmann (1950), the notion of invariance was introduced into the statistical literature in the writings of R. A. Fisher, Hotelling (1933, 1936), Pitman (1939), Stein (1948) and others, in connection with various special problems. Among others, Lehmann (1959, 1986), King (1979, 1980, 1983a), King and Smith (1986), Ara and King (1993), and Ara (1995) suggested the use of invariance arguments to overcome the problem of nuisance parameters.

It is a generally accepted principle that if a problem with a unique solution is invariant under a certain group of transformations, then the solution should be invariant under those transformations. A hypothesis testing problem is invariant under a group of

transformations acting on the sample space of the observed data vector if, for any transformation, the probability distribution of the transformed data vector belongs to the same set (null or alternative hypothesis) as that of the original data vector. The idea behind invariance is that if the hypothesis testing problem under consideration has a particular invariance property, then we should restrict attention to those tests that share this invariance property. The class of all invariant functions can be obtained as the totality of functions of a maximal invariant. A maximal invariant is a statistic which takes the same value for observed data vectors that are connected by transformations, and different values for those data vectors that are not so connected. Consequently, any invariant test statistic can be written as a function of the maximal invariant. This means we can treat the maximal invariant as the observed data, find its density, and then construct appropriate tests based on this density.

The performance of a statistical test is assessed by its size and power properties. Econometricians are always interested in optimality of power, and for any testing problem they would like to use a uniformly most powerful (UMP) test. Many testing problems involving the linear regression model can be reduced, either by conditioning on sufficient statistics or by invariance arguments, to testing a simple null hypothesis against one-sided alternatives. Ideally we would then like to use a UMP test, but unfortunately it is rarely possible to find a UMP test when the alternative hypothesis is composite and/or in the presence of nuisance parameters. Cox and Hinkley (1974) discuss three approaches for constructing tests of simple hypotheses against composite alternative hypotheses when no UMP test exists. They involve choosing a point at which the power is optimised:

(i) to pick, somewhat arbitrarily, a typical point in the alternative parameter space and use it in test construction, in order to find a test that at least has optimal power at the chosen point;

(ii) to remove this arbitrariness by choosing a point close to the null hypothesis, which leads to the locally best (LB) test, i.e. to maximize the power locally near the null hypothesis;

(iii) to choose a test which maximizes some weighted average of the powers over the alternative parameter space.

Option (i), labelled by King (1987b) as the point optimal (PO) approach, uses the most powerful test against a specific alternative. Option (ii) is the most popular for a single parameter and leads to a LB, or locally most powerful, test. These tests are constructed by maximizing power locally at the null hypothesis. The LB test is also optimal in the sense that its power curve has the steepest slope, at the null hypothesis, of all power curves from tests with the same size. Following Neyman and Pearson (1936), a number of authors, notably Ferguson (1967), Efron (1975), King (1980, 1984, 1987a), King and Hillier (1985), Sengupta (1987) and Wu and King (1994), among others, have recommended the use of LB tests.

Our interest in this article is to derive the density function of a maximal invariant statistic in the context of the regression model and an invariant transformation, and then to construct the locally best invariant (LBI) test for linear and non-linear regressors. We also show that this LBI test is equivalent to the one-sided t-test in the case of testing a linear regression coefficient.

The plan of this paper is as follows. In section 2 we derive the density function of the maximal invariant, and in section 3 we construct the LBI test statistic, which is shown to be equivalent to the one-sided t-test. In section 4, we derive the density function of the maximal invariant for a non-linear regression function and find the LBI test statistic. Finally, some concluding remarks are made in section 5.

2. Derivation of the Density Function

Let us consider the linear model

    y = X1β1 + X2β2 + u,    (2.1)

where y is n×1, X1 is an n×q nonstochastic matrix, X2 is an n×p nonstochastic matrix, β1 is a q×1 vector and β2 is a p×1 vector. Here [X1 : X2] has full column rank. Considering first the case of p = 1, we are interested in testing H0: β2 = 0 against Ha: β2 > 0 in the context of the above linear regression model. It is assumed that u ~ N(0, σ^2 In), where σ^2 is unknown. This problem is invariant under the class of transformations

    y → γ0 y + X1 γ1,    (2.2)

where γ0 is a positive scalar and γ1 is a q×1 vector. Let M = In − X1(X1′X1)^{-1}X1′. Then

    My = MX1β1 + MX2β2 + Mu = MX2β2 + Mu.    (2.3)

Multiplying both sides of (2.1) by P, where P is an m×n matrix such that PP′ = Im, P′P = M and m = n − q, we get

    Py = PX2β2 + Pu.
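To make the setup concrete, the projection matrix M and a matrix P with the stated properties can be constructed numerically. The following is a sketch with simulated dimensions and data (none of the numbers come from the paper), obtaining P from the eigenvectors of the idempotent matrix M:

```python
import numpy as np

# Build M = I - X1(X1'X1)^{-1}X1' and an m x n matrix P with PP' = I_m and
# P'P = M, where m = n - q. One valid P stacks (as rows) the eigenvectors of
# the idempotent matrix M that have eigenvalue 1.
rng = np.random.default_rng(0)
n, q = 20, 3
X1 = rng.standard_normal((n, q))             # illustrative nuisance regressors

M = np.eye(n) - X1 @ np.linalg.solve(X1.T @ X1, X1.T)

eigvals, eigvecs = np.linalg.eigh(M)         # eigenvalues are 0 (q times) and 1
P = eigvecs[:, eigvals > 0.5].T              # m x n with m = n - q
m = P.shape[0]

assert m == n - q
assert np.allclose(P @ P.T, np.eye(m))       # PP' = I_m
assert np.allclose(P.T @ P, M)               # P'P = M
assert np.allclose(P @ X1, 0.0)              # hence PX1 = 0, as MX1 = 0
```

Any orthonormal basis of the column space of M yields a valid P; the choice of basis does not affect the tests derived below.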

Note that PM = P and hence PMy = Py. Thus Py ~ N(PX2β2, σ^2 Im). Let z = Py. Then the density function of z is

    f(z) = (2πσ^2)^{-m/2} exp{−(1/(2σ^2))(z − PX2β2)′(z − PX2β2)}.    (2.4)

Let r = (z′z)^{1/2} be the usual distance of z from the origin. Now we change z to the m-dimensional polar co-ordinates (r, θ1, θ2, ..., θ_{m−1}) as follows:

    z1 = r cos θ1,
    zj = r (∏_{k=1}^{j−1} sin θk) cos θj,  for 1 < j < m,
    zm = r ∏_{k=1}^{m−1} sin θk,    (2.5)

where r ≥ 0, 0 ≤ θk ≤ π for k = 1, 2, ..., m−2, and 0 ≤ θ_{m−1} ≤ 2π. The Jacobian of the transformation is

    J(r, θ1, ..., θ_{m−1}) = ∂(z1, z2, ..., zm)/∂(r, θ1, ..., θ_{m−1}) = r^{m−1} ∏_{k=1}^{m−2} sin^{m−1−k} θk

(Miller, 1964). To construct the LBI test, we have to find the density function of the maximal invariant statistic, that is, the distribution of the maximal invariant w = z/(z′z)^{1/2} = z/r. Note that z = rw. After the above change of variables, the joint density function of z becomes

    f(r, θ1, ..., θ_{m−1}) = (2πσ^2)^{-m/2} exp{−(1/(2σ^2))(rw − PX2β2)′(rw − PX2β2)} r^{m−1} ∏_{k=1}^{m−2} sin^{m−1−k} θk

    = (2πσ^2)^{-m/2} exp{−(1/(2σ^2))(r^2 w′w − 2r w′PX2β2 + β2′X2′P′PX2β2)} r^{m−1} ∏_{k=1}^{m−2} sin^{m−1−k} θk.    (2.6)

To find the density function of w from the above joint density function of z, we first have to find the marginal density function of (θ1, θ2, ..., θ_{m−1}). The components of w are w1(θ1, ..., θ_{m−1}), ..., wm(θ1, ..., θ_{m−1}), and they are defined by (2.5). The marginal density function of (θ1, ..., θ_{m−1}) can therefore be obtained by integrating r out of (2.6). Using w′w = 1 and substituting λ = r/σ (so that dr = σ dλ) and β̄2 = β2/σ, we get

    f(θ1, ..., θ_{m−1}) = (2π)^{-m/2} ∏_{k=1}^{m−2} sin^{m−1−k} θk ∫_0^∞ exp{−(1/2)(λ^2 − 2λ w′PX2β̄2 + β̄2′X2′P′PX2β̄2)} λ^{m−1} dλ.

If we set

    a(w, β̄2) = w′PX2β̄2,    (2.7)

    b(w, β̄2) = (1/2)(β̄2′X2′P′ww′PX2β̄2 − β̄2′X2′P′PX2β̄2) = −(1/2) β̄2′X2′P′Mw PX2β̄2,    (2.8)

and Mw = Im − ww′ = Im − w(w′w)^{-1}w′, then −2b(w, β̄2) is the sum of squared errors of the OLS regression of PX2β̄2 on w, and completing the square in the exponent gives

    f(θ1, ..., θ_{m−1}) = (2π)^{-m/2} ∏_{k=1}^{m−2} sin^{m−1−k} θk · exp{b(w, β̄2)} ∫_0^∞ exp{−(1/2)(λ − a(w, β̄2))^2} λ^{m−1} dλ
    = c(w, β̄2) ∫_{−a}^{∞} (η + a(w, β̄2))^{m−1} φ(η) dη,    (2.9)

where φ(η) = (2π)^{-1/2} exp(−η^2/2) (King, 1979, chapter 3),

    c(w, β̄2) = (2π)^{-(m−1)/2} exp{b(w, β̄2)} ∏_{k=1}^{m−2} sin^{m−1−k} θk,

η = λ − a(w, β̄2), λ = η + a(w, β̄2) and dλ = dη. The integral in (2.9) can be evaluated by expanding (η + a(w, β̄2))^{m−1} binomially and integrating term by term against φ.

Therefore the marginal density function of (θ1, θ2, ..., θ_{m−1}) is

    f(θ1, ..., θ_{m−1}) = (2π)^{-(m−1)/2} exp{b(w, β̄2)} ∏_{k=1}^{m−2} sin^{m−1−k} θk ∫_{−a}^{∞} (η + a(w, β̄2))^{m−1} φ(η) dη.    (2.10)

Now the transformation from (θ1, θ2, ..., θ_{m−1}) to w = z/(z′z)^{1/2} is straightforward, since the components of w are defined by (2.5). Therefore the density function of w, with respect to the uniform measure on the unit sphere in R^m, is

    f(w) = (2π)^{-(m−1)/2} exp{b(w, β̄2)} ∫_{−a}^{∞} (η + a(w, β̄2))^{m−1} φ(η) dη
         = (2π)^{-m/2} exp{b(w, β̄2)} ∫_0^∞ λ^{m−1} exp{−(1/2)(λ − a(w, β̄2))^2} dλ,    (2.11)

where a(w, β̄2) and b(w, β̄2) are defined by (2.7) and (2.8). When β2 = 0, so that a = b = 0, (2.11) reduces to Γ(m/2)/(2π^{m/2}), the uniform density on the unit sphere, since ∫_0^∞ λ^{m−1} exp(−λ^2/2) dλ = 2^{(m−2)/2} Γ(m/2). Here (2.11) is the density function of the maximal invariant w and, using this density function, we can construct the LBI test statistic.

3. Tests of a Linear Regressor

We are interested in testing the hypothesis H0: β2 = 0 against Ha: β2 > 0 in the context of the linear regression model (2.1). Let us first consider p = 1, i.e. β2 is a scalar. A LBI test of H0 against Ha is that with critical region of the form

    ∂ log f(w)/∂β2 |_{β2=0} ≥ c_α.    (3.1)

In the previous section we derived f(w).

3.1 Construction of the LBI test statistic

The density function of the maximal invariant w is given by (2.11). Taking logs on both sides of (2.11), we get

    log f(w) = −(m/2) log(2π) + b(w, β̄2) + log ∫_0^∞ λ^{m−1} exp{−(1/2)(λ − a(w, β̄2))^2} dλ.

Since b(w, β̄2) is quadratic in β2, its derivative with respect to β2 vanishes at β2 = 0, while ∂a(w, β̄2)/∂β2 = w′PX2/σ. Differentiating and evaluating at β2 = 0, so that a(w, β̄2) = 0, gives

    ∂ log f(w)/∂β2 |_{β2=0} = (w′PX2/σ) ∫_0^∞ λ^m exp(−λ^2/2) dλ / ∫_0^∞ λ^{m−1} exp(−λ^2/2) dλ
                            = (w′PX2/σ) · 2^{1/2} Γ((m+1)/2)/Γ(m/2),

which is a positive multiple of w′PX2. Hence the LBI test rejects H0 for

    w′PX2 ≥ d_α,    (3.2)

or, equivalently,

    X2′P′Py/(y′P′Py)^{1/2} = X2′My/(y′My)^{1/2} ≥ d_α,    (3.3)

where d_α is an appropriate critical value. Thus

    s = X2′My/(y′My)^{1/2}

is the LBI test statistic.
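The LBI statistic s, and the invariance that motivates it, can be illustrated numerically. The following sketch uses simulated data (all values are illustrative assumptions, not from the paper) and checks that s is unchanged under transformations of the form (2.2):

```python
import numpy as np

# Compute s = X2'My/(y'My)^(1/2) of (3.3) and verify its invariance under
# the group (2.2): y -> g0*y + X1*g1, g0 > 0. Both numerator and denominator
# scale by g0, and M annihilates X1, so s is unchanged.
rng = np.random.default_rng(2)
n, q = 30, 4
X1 = np.column_stack([np.ones(n), rng.standard_normal((n, q - 1))])
x2 = rng.standard_normal(n)
y = X1 @ np.array([1.0, 0.5, -0.3, 0.2]) + 0.8 * x2 + rng.standard_normal(n)

M = np.eye(n) - X1 @ np.linalg.solve(X1.T @ X1, X1.T)

def lbi_s(yy):
    """LBI statistic s = x2'M yy / sqrt(yy'M yy)."""
    return (x2 @ M @ yy) / np.sqrt(yy @ M @ yy)

s = lbi_s(y)
s_transformed = lbi_s(3.0 * y + X1 @ np.array([5.0, -1.0, 2.0, 0.5]))
assert np.isclose(s, s_transformed)      # s is invariant under (2.2)
```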

3.2 Relationship between the t-test and the LBI test

From the above discussion, the LBI test rejects H0 for large values of

    s = X2′My/(y′My)^{1/2}    (3.4)

when testing H0: β2 = 0 against Ha: β2 > 0 in the linear model (2.1) with X2 a vector, i.e. p = 1. The ordinary least squares (OLS) estimator of β2 in (2.1) is

    β̂2 = (X2′MX2)^{-1}X2′My,

and the unbiased OLS estimator of the error variance is

    σ̂^2 = y′(M − MX2(X2′MX2)^{-1}X2′M)y/(m − 1).

Thus the t-test statistic is

    t = β̂2/{σ̂^2(X2′MX2)^{-1}}^{1/2} = (m − 1)^{1/2} X2′My/{X2′MX2 · y′My − (X2′My)^2}^{1/2},

or

    t = (m − 1)^{1/2} s/{X2′MX2 − s^2}^{1/2},    (3.5)

where X2′MX2 is a positive scalar. Clearly, (3.5) is a monotonic increasing function of the test statistic s. Thus we may conclude that our LBI test is equivalent to the t-test.

3.3 Whether the test statistic s is UMPI or not

We have

    w = z/(z′z)^{1/2} = Py/(y′P′Py)^{1/2} = Py/(y′My)^{1/2}.    (3.6)

Substituting (3.6) and (3.4) into (2.11), we get

    f(w) = (2π)^{-m/2} exp{−(1/2)β̄2^2(X2′MX2 − s^2)} ∫_0^∞ λ^{m−1} exp{−(1/2)(λ − sβ̄2)^2} dλ,    (3.7)

since, for p = 1, a(w, β̄2) = w′PX2β̄2 = sβ̄2 and b(w, β̄2) = −(1/2)β̄2^2(X2′MX2 − s^2).

We know that all invariant tests can be written as functions of the maximal invariant. To find the best test within the class of invariant tests, we derived the density function of the maximal invariant and then constructed the LBI test. To prove that our test statistic s is UMPI we would, following Lehmann (1986, section 6.4), have to prove that f(w) is a monotonic increasing function of s. So far, we cannot prove this theoretically. We have simulated this function using a number of different values of s, β̄2 and X2′MX2, and in each case found the function (for these particular values) to be monotonic. This is quite different from being able to prove that f(w) is monotonic. We do, however, conjecture that f(w) is monotonic, having been unable to disprove this using simulation methods. We therefore conjecture that the test based on s is UMPI, where invariance is with respect to transformations of the form (2.2).

3.4 Construction of the LBI test when u ~ N(0, σ^2 Σ(φ)), where Σ(φ) is known

Consider the following model,

    y = X1β1 + X2β2 + u,    (3.8)

where X1 is an n×q nonstochastic matrix, X2 is an n×p nonstochastic matrix

and u ~ N[0, σ^2 Σ(φ)].

3.4.1 Derivation of the density function

Let M = I − X1(X1′X1)^{-1}X1′. Using the same transformation on (3.8) that we used earlier (see section 2), we get Pu ~ N[0, σ^2 PΣ(φ)P′] and Py ~ N[PX2β2, σ^2 PΣ(φ)P′]. We want to find the distribution of the maximal invariant w = z/(z′z)^{1/2}, where z = Py and z′z = r^2. We have

    f(z) = (2πσ^2)^{-m/2} |PΣ(φ)P′|^{-1/2} exp{−(1/(2σ^2))(z − PX2β2)′(PΣ(φ)P′)^{-1}(z − PX2β2)}.    (3.9)

Now, changing z to the m-dimensional polar co-ordinates (r, θ1, θ2, ..., θ_{m−1}) as in (2.5), where r ≥ 0, 0 ≤ θk ≤ π for k = 1, 2, ..., m−2, and 0 ≤ θ_{m−1} ≤ 2π, the density of z becomes

    f(r, θ1, ..., θ_{m−1}) = (2πσ^2)^{-m/2} |PΣ(φ)P′|^{-1/2} exp[−(1/(2σ^2)){r^2 w′(PΣ(φ)P′)^{-1}w − 2r w′(PΣ(φ)P′)^{-1}PX2β2 + β2′X2′P′(PΣ(φ)P′)^{-1}PX2β2}] r^{m−1} ∏_{k=1}^{m−2} sin^{m−1−k} θk.

The marginal density function of (θ1, ..., θ_{m−1}) can be obtained by integrating out r. Substituting λ = r/σ and β̄2 = β2/σ, and completing the square in the exponent, gives

    f(θ1, ..., θ_{m−1}) = (2π)^{-m/2} |PΣ(φ)P′|^{-1/2} ∏_{k=1}^{m−2} sin^{m−1−k} θk · exp{b(w, φ, β̄2)} ∫_0^∞ λ^{m−1} exp{−(1/2) a1(w, φ)(λ − a2(w, φ, β̄2))^2} dλ,

where

    a1(w, φ) = w′(PΣ(φ)P′)^{-1}w,    (3.10)

    a2(w, φ, β̄2) = w′(PΣ(φ)P′)^{-1}PX2β̄2 / a1(w, φ),    (3.11)

    b(w, φ, β̄2) = −(1/2)[β̄2′X2′P′(PΣ(φ)P′)^{-1}PX2β̄2 − {w′(PΣ(φ)P′)^{-1}PX2β̄2}^2 / a1(w, φ)].    (3.12)

Substituting η = a1(w, φ)^{1/2}(λ − a2(w, φ, β̄2)), so that λ = a1(w, φ)^{-1/2}η + a2(w, φ, β̄2) and dλ = a1(w, φ)^{-1/2} dη, we get

    f(θ1, ..., θ_{m−1}) = (2π)^{-(m−1)/2} |PΣ(φ)P′|^{-1/2} a1(w, φ)^{-m/2} ∏_{k=1}^{m−2} sin^{m−1−k} θk · exp{b(w, φ, β̄2)} ∫_{−c}^{∞} (η + c(w, φ, β̄2))^{m−1} φ(η) dη,

where

    c(w, φ, β̄2) = a1(w, φ)^{1/2} a2(w, φ, β̄2) = w′(PΣ(φ)P′)^{-1}PX2β̄2 / a1(w, φ)^{1/2}.    (3.13)

Therefore the density function of the maximal invariant w is

    f(w) = (2π)^{-(m−1)/2} |PΣ(φ)P′|^{-1/2} a1(w, φ)^{-m/2} exp{b(w, φ, β̄2)} ∫_{−c}^{∞} (η + c(w, φ, β̄2))^{m−1} φ(η) dη.    (3.14)
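The completing-the-square step behind (3.10)–(3.13) can be checked numerically. The sketch below uses random illustrative matrices (an AR(1)-type Σ(φ) is assumed purely for illustration) and verifies the algebraic identity for a few values of λ:

```python
import numpy as np

# Check numerically that, with Q = (P Sigma P')^{-1}, for every lambda
#   a1*l^2 - 2*l*(w'Q PX2)b2 + (PX2'Q PX2)b2^2 = a1*(l - a2)^2 - 2*b,
# which is the identity used to pass from (3.9) to the integral above.
rng = np.random.default_rng(6)
n, q = 10, 2
X1 = rng.standard_normal((n, q))
x2 = rng.standard_normal(n)

M = np.eye(n) - X1 @ np.linalg.solve(X1.T @ X1, X1.T)
vals, vecs = np.linalg.eigh(M)
P = vecs[:, vals > 0.5].T                    # m x n
m = P.shape[0]

phi = 0.3                                    # illustrative AR(1)-type Sigma
Sigma = phi ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
Q = np.linalg.inv(P @ Sigma @ P.T)

w = rng.standard_normal(m)
w /= np.linalg.norm(w)                       # a point on the unit sphere
b2 = 0.9                                     # beta2_bar, a scalar (p = 1)

Px2 = P @ x2
a1 = w @ Q @ w
a2 = (w @ Q @ Px2) * b2 / a1
b = -0.5 * ((Px2 @ Q @ Px2) * b2 ** 2 - ((w @ Q @ Px2) * b2) ** 2 / a1)

for lam in (0.0, 0.7, 2.3):
    lhs = a1 * lam ** 2 - 2 * lam * (w @ Q @ Px2) * b2 + (Px2 @ Q @ Px2) * b2 ** 2
    rhs = a1 * (lam - a2) ** 2 - 2 * b
    assert np.isclose(lhs, rhs)
```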

3.4.2 Construction of the LBI test statistic

Taking logs on both sides of (3.14), we get

    log f(w) = −((m−1)/2) log(2π) − (1/2) log|PΣ(φ)P′| − (m/2) log a1(w, φ) + b(w, φ, β̄2) + log ∫_{−c}^{∞} (η + c(w, φ, β̄2))^{m−1} φ(η) dη,

where a1(w, φ), b(w, φ, β̄2) and c(w, φ, β̄2) are defined by (3.10), (3.12) and (3.13). As in section 3.1, b(w, φ, β̄2) is quadratic in β2, so its derivative with respect to β2 vanishes at β2 = 0, while

    ∂c(w, φ, β̄2)/∂β2 |_{β2=0} = w′(PΣ(φ)P′)^{-1}PX2 / {σ a1(w, φ)^{1/2}}.

Differentiating and evaluating at β2 = 0 therefore gives

    ∂ log f(w)/∂β2 |_{β2=0} = {2^{1/2} Γ((m+1)/2)/Γ(m/2)} w′(PΣ(φ)P′)^{-1}PX2 / {σ a1(w, φ)^{1/2}}.

Hence the LBI test rejects H0 for

    w′(PΣ(φ)P′)^{-1}PX2 / a1(w, φ)^{1/2} ≥ d_α,

or, because w = Py/(y′My)^{1/2},

    {X2′P′(PΣ(φ)P′)^{-1}Py/(y′My)^{1/2}} / {y′P′(PΣ(φ)P′)^{-1}Py/(y′My)}^{1/2} ≥ d_α,

or

    X2′P′(PΣ(φ)P′)^{-1}Py / {y′P′(PΣ(φ)P′)^{-1}Py}^{1/2} ≥ d_α.    (3.15)

Here (see King, 1980)

    X2′P′(PΣ(φ)P′)^{-1}Py = X2′{Σ(φ)^{-1} − Σ(φ)^{-1}X1(X1′Σ(φ)^{-1}X1)^{-1}X1′Σ(φ)^{-1}}y = X2*′M*y*,

where y* = Σ(φ)^{-1/2}y, X1* = Σ(φ)^{-1/2}X1, X2* = Σ(φ)^{-1/2}X2 and M* = I − X1*(X1*′X1*)^{-1}X1*′. Similarly, y′P′(PΣ(φ)P′)^{-1}Py = y*′M*y*. Thus the LBI test (3.15) can be written as

    X2*′M*y*/(y*′M*y*)^{1/2} ≥ d_α,

where d_α is an appropriate critical value.
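King's identity quoted above, and the equality of the two expressions for the resulting statistic, can be verified numerically. In this sketch Σ(φ) is an illustrative AR(1)-type covariance matrix and Σ(φ)^{-1/2} is taken to be the inverse of its Cholesky factor (any valid square root works):

```python
import numpy as np

# Verify P'(P Sigma P')^{-1}P = Sigma^{-1} - Sigma^{-1}X1(X1'Sigma^{-1}X1)^{-1}X1'Sigma^{-1}
# and that x2'P'(P Sigma P')^{-1}Py / sqrt(y'P'(P Sigma P')^{-1}Py) equals the
# statistic computed from the transformed (starred) model.
rng = np.random.default_rng(4)
n, q = 15, 2
X1 = rng.standard_normal((n, q))
x2 = rng.standard_normal(n)
y = rng.standard_normal(n)

phi = 0.5                                    # illustrative AR(1)-type Sigma
Sigma = phi ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))

M = np.eye(n) - X1 @ np.linalg.solve(X1.T @ X1, X1.T)
vals, vecs = np.linalg.eigh(M)
P = vecs[:, vals > 0.5].T

Sinv = np.linalg.inv(Sigma)
lhs = P.T @ np.linalg.inv(P @ Sigma @ P.T) @ P
rhs = Sinv - Sinv @ X1 @ np.linalg.inv(X1.T @ Sinv @ X1) @ X1.T @ Sinv
assert np.allclose(lhs, rhs)                 # King's identity

# The statistic two ways: via the identity, and via the starred model.
S_half_inv = np.linalg.inv(np.linalg.cholesky(Sigma))   # one valid Sigma^(-1/2)
X1s, x2s, ys = S_half_inv @ X1, S_half_inv @ x2, S_half_inv @ y
Ms = np.eye(n) - X1s @ np.linalg.solve(X1s.T @ X1s, X1s.T)
s_star = (x2s @ Ms @ ys) / np.sqrt(ys @ Ms @ ys)
s_direct = (x2 @ lhs @ y) / np.sqrt(y @ lhs @ y)
assert np.isclose(s_star, s_direct)
```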

Therefore our required LBI test statistic is

    s* = X2*′M*y*/(y*′M*y*)^{1/2}.    (3.16)

In the case of p = 1, we are interested in testing H0: β2 = 0 against Ha: β2 > 0 for the regression equation

    y* = X1*β1 + X2*β2 + v,    (3.17)

where v = Σ(φ)^{-1/2}u, v ~ N(0, σ^2 I) and φ is known. Using a similar procedure to that for the unstarred case, we can show that the t-test statistic t* for (3.17) is a monotonic increasing function of the LBI statistic s*. Thus we may conclude that our LBI test is equivalent to the one-sided t-test.

4. Tests of a Non-linear Regressor

Let us consider the following non-linear model,

    y = X1β1 + g(X2, β2) + u,    (4.1)

where X1 is an n×q nonstochastic matrix, X2 is an n×p nonstochastic matrix and g(X2, β2) is a non-linear function of β2 and X2 for which g(X2, 0) = 0. We wish to test H0: β2 = 0 against Ha: β2 > 0 in the context of the above non-linear regression model for p = 1.

4.1 Derivation of the Density Function

Let M = I − X1(X1′X1)^{-1}X1′. Then

    My = MX1β1 + Mg(X2, β2) + Mu = Mg(X2, β2) + Mu.    (4.2)

Now, multiplying (4.2) by P, we get

    Py = Pg(X2, β2) + Pu,

where P is an m×n matrix such that PP′ = Im, P′P = M and m = n − q. Then PM = P and hence PMy = Py. Thus

    Py ~ N(Pg(X2, β2), σ^2 Im).    (4.3)

Let z = Py. We want to find the distribution of w = z/(z′z)^{1/2}, where z′z = r^2. From (4.3),

    f(z) = (2πσ^2)^{-m/2} exp{−(1/(2σ^2))(z − Pg(X2, β2))′(z − Pg(X2, β2))}.    (4.4)

Now, changing z to the m-dimensional polar co-ordinates (r, θ1, ..., θ_{m−1}) as in (2.5), the density of z becomes

    f(r, θ1, ..., θ_{m−1}) = (2πσ^2)^{-m/2} exp{−(1/(2σ^2))(r^2 w′w − 2r w′Pg(X2, β2) + g(X2, β2)′P′Pg(X2, β2))} r^{m−1} ∏_{k=1}^{m−2} sin^{m−1−k} θk.

The density function of (θ1, ..., θ_{m−1}) can be obtained by integrating out r. Using transformations similar to those used earlier for the linear case, we get

    f(θ1, ..., θ_{m−1}) = (2π)^{-m/2} ∏_{k=1}^{m−2} sin^{m−1−k} θk · exp{b(w, β2)} ∫_0^∞ exp{−(1/2)(λ − a(w, β2))^2} λ^{m−1} dλ,

where

    a(w, β2) = w′Pḡ(X2, β2),    (4.5)

    b(w, β2) = (1/2){ḡ(X2, β2)′P′ww′Pḡ(X2, β2) − ḡ(X2, β2)′P′Pḡ(X2, β2)},    (4.6)

    ḡ(X2, β2) = g(X2, β2)/σ,    (4.7)

λ = r/σ and w′w = 1. Substituting η = λ − a(w, β2), so that λ = η + a(w, β2) and dλ = dη, this can be written as

    f(θ1, ..., θ_{m−1}) = c ∫_{−a}^{∞} (η + a(w, β2))^{m−1} φ(η) dη,

in which c = (2π)^{-(m−1)/2} exp{b(w, β2)} ∏_{k=1}^{m−2} sin^{m−1−k} θk and φ(x) = (2π)^{-1/2} exp(−x^2/2). Following a similar procedure to that for the linear case (section 2), the density function of the maximal invariant w is

    f(w) = (2π)^{-(m−1)/2} exp{b(w, β2)} ∫_{−a}^{∞} (η + a(w, β2))^{m−1} φ(η) dη,    (4.8)

where a(w, β2) and b(w, β2) are defined by (4.5) and (4.6). Using this density function of the maximal invariant statistic, we can construct the LBI test for a non-linear regressor.

4.2 Tests of a Non-Linear Regressor

Taking logs of both sides of (4.8) and differentiating as in section 3.1, the LBI test of H0: β2 = 0 against Ha: β2 > 0 rejects H0 for large values of ∂ log f(w)/∂β2 |_{β2=0}, which results in the critical region

    {∂g(X2, β2)/∂β2 |_{β2=0}}′ My / (y′My)^{1/2} ≥ c_α.    (4.9)

Thus, substituting the value of ∂g(X2, β2)/∂β2 |_{β2=0} into (4.9) gives the LBI test statistic for testing H0: β2 = 0, where g(X2, β2) in (4.7) is a non-linear function of X2 and β2.

5. Concluding Remarks

In this paper, having derived the density function of the maximal invariant statistic, we have constructed an LBI test in the linear regression model against one-sided alternatives. The principle of invariance is used to eliminate nuisance parameters in multiparameter one-sided testing problems; this allows the construction of the LBI test. The resultant LBI test is found to be equivalent to the one-sided t-test. In the case of non-linear regression, we have also derived the density function of the maximal invariant statistic and constructed the LBI test statistic for a non-linear regressor.
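As a closing numerical illustration (a sketch; all data and parameter values are simulated assumptions, not from the paper), the following verifies (i) that the one-sided t statistic is the monotonic transformation (3.5) of the LBI statistic s, and (ii) that for a hypothetical non-linear regressor g(x, β2) = exp(β2 x) − 1, which satisfies g(x, 0) = 0, the LBI statistic of (4.9) coincides with the linear statistic s:

```python
import numpy as np

rng = np.random.default_rng(7)
n, q = 25, 3
X1 = rng.standard_normal((n, q))
x2 = rng.standard_normal(n)
y = X1 @ np.array([0.4, -1.0, 0.7]) + 0.5 * x2 + rng.standard_normal(n)

M = np.eye(n) - X1 @ np.linalg.solve(X1.T @ X1, X1.T)
m = n - q
s = (x2 @ M @ y) / np.sqrt(y @ M @ y)          # LBI statistic (3.4)

# (i) t computed from the usual OLS formulae equals (3.5), a monotonic
# increasing transformation of s.
b2_hat = (x2 @ M @ y) / (x2 @ M @ x2)
rss = y @ M @ y - (x2 @ M @ y) ** 2 / (x2 @ M @ x2)
t_direct = b2_hat / np.sqrt((rss / (m - 1)) / (x2 @ M @ x2))
t_via_s = np.sqrt(m - 1) * s / np.sqrt(x2 @ M @ x2 - s ** 2)
assert np.isclose(t_direct, t_via_s)

# (ii) For g(x, beta2) = exp(beta2*x) - 1, the derivative at beta2 = 0 is x,
# so the non-linear LBI statistic of (4.9) reduces to s.
eps = 1e-6
dg0 = np.expm1(eps * x2) / eps                 # forward difference -> x2
lbi_nonlinear = (dg0 @ M @ y) / np.sqrt(y @ M @ y)
assert np.isclose(lbi_nonlinear, s, atol=1e-4)
```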

References

Ara, I. (1995), Marginal Likelihood Based Tests of Regression Disturbances, unpublished Ph.D. thesis, Monash University, Clayton, Melbourne.
Ara, I. and King, M.L. (1993), Marginal Likelihood Based Tests of Regression Disturbances, paper presented at the 1993 Australasian Meeting of the Econometric Society, Sydney.
Cox, D.R. and Hinkley, D.V. (1974), Theoretical Statistics, London: Chapman and Hall.
Efron, B. (1975), Defining the Curvature of a Statistical Problem (with Applications to Second Order Efficiency), The Annals of Statistics, 3.
Ferguson, T.S. (1967), Mathematical Statistics: A Decision Theoretic Approach, New York: Academic Press.
Gourieroux, C., Holly, A. and Monfort, A. (1982), Likelihood Ratio Test, Wald Test and Kuhn-Tucker Test in Linear Models with Inequality Constraints on the Regression Parameters, Econometrica, 50.
Hildreth, C. and Houck, J.P. (1968), Some Estimators for a Linear Model with Random Coefficients, Journal of the American Statistical Association, 63.
Hillier, G.H. (1986), Joint Tests for Zero Restrictions on Non-Negative Regression Coefficients, Biometrika, 73.
Hotelling, H. (1933), Analysis of a Complex of Statistical Variables into Principal Components, Journal of Educational Psychology, 24.
Hotelling, H. (1936), Relations Between Two Sets of Variates, Biometrika, 28.
Hunt, G. and Stein, C. (1946), Most Stringent Tests of Statistical Hypotheses, unpublished manuscript.
Kalbfleisch, J.D. and Sprott, D.A. (1970), Application of Likelihood Methods to Models Involving Large Numbers of Parameters, Journal of the Royal Statistical Society B, 32.
Kariya, T. and Eaton, M.L. (1977), Robust Tests for Spherical Symmetry, The Annals of Statistics, 5.
King, M.L. (1979), Some Aspects of Statistical Inference in the Linear Regression Model, unpublished Ph.D. thesis, University of Canterbury, Christchurch.

King, M.L. (1980), Robust Tests for Spherical Symmetry and their Application to Least Squares Regression, The Annals of Statistics, 8.
King, M.L. (1981), A Small Sample Property of the Cliff-Ord Test for Spatial Correlation, Journal of the Royal Statistical Society B, 43.
King, M.L. (1983a), Testing for Autoregressive Against Moving Average Errors in the Linear Regression Model, Journal of Econometrics.
King, M.L. (1983b), Testing for Moving Average Regression Disturbances, Australian Journal of Statistics, 25.
King, M.L. (1984), A New Test for Fourth-Order Autoregressive Disturbances, Journal of Econometrics, 24.
King, M.L. (1986), Modified Lagrange Multiplier Tests for Problems with One-Sided Alternatives, Journal of Econometrics.
King, M.L. (1987a), An Alternative Test for Regression Coefficient Stability, Review of Economics and Statistics, 69.
King, M.L. (1987b), Towards a Theory of Point Optimal Testing, Econometric Reviews, 6.
King, M.L. and Hillier, G.H. (1985), Locally Best Invariant Tests of the Error Covariance Matrix of the Linear Regression Model, Journal of the Royal Statistical Society B, 47.
King, M.L. and Smith, M.D. (1986), Joint One-Sided Tests of Linear Regression Coefficients, Journal of Econometrics, 32.
Lehmann, E.L. (1950), Some Principles of the Theory of Testing Hypotheses, Annals of Mathematical Statistics, 21.
Lehmann, E.L. (1959), Testing Statistical Hypotheses, New York: Wiley.
Lehmann, E.L. (1986), Testing Statistical Hypotheses, Second Edition, New York: Wiley.
Neyman, J. and Pearson, E.S. (1933), On the Problem of the Most Efficient Tests of Statistical Hypotheses, Philosophical Transactions of the Royal Society A, 231.
Neyman, J. and Pearson, E.S. (1936), Contributions to the Theory of Testing Statistical Hypotheses. I. Unbiased Critical Regions of Type A and Type A1, Statistical Research Memoirs, 1.
Pitman, E.J.G. (1939), Tests of Hypotheses Concerning Location and Scale Parameters, Biometrika, 31.

Sengupta, A. (1987), On Tests for Equicorrelation Coefficient of a Standard Symmetric Multivariate Normal Distribution, Australian Journal of Statistics, 29.
Stein, C. (1948), On Sequences of Experiments (Abstract), Annals of Mathematical Statistics, 19.
Wu, P.X. and King, M.L. (1994), One-Sided Hypothesis Testing in Econometrics: A Survey, Pakistan Journal of Statistics.


More information

Estimating Parameters for a Gaussian pdf

Estimating Parameters for a Gaussian pdf Pattern Recognition and achine Learning Jaes L. Crowley ENSIAG 3 IS First Seester 00/0 Lesson 5 7 Noveber 00 Contents Estiating Paraeters for a Gaussian pdf Notation... The Pattern Recognition Proble...3

More information

Probabilistic Machine Learning

Probabilistic Machine Learning Probabilistic Machine Learning by Prof. Seungchul Lee isystes Design Lab http://isystes.unist.ac.kr/ UNIST Table of Contents I.. Probabilistic Linear Regression I... Maxiu Likelihood Solution II... Maxiu-a-Posteriori

More information

Understanding Machine Learning Solution Manual

Understanding Machine Learning Solution Manual Understanding Machine Learning Solution Manual Written by Alon Gonen Edited by Dana Rubinstein Noveber 17, 2014 2 Gentle Start 1. Given S = ((x i, y i )), define the ultivariate polynoial p S (x) = i []:y

More information

An improved self-adaptive harmony search algorithm for joint replenishment problems

An improved self-adaptive harmony search algorithm for joint replenishment problems An iproved self-adaptive harony search algorith for joint replenishent probles Lin Wang School of Manageent, Huazhong University of Science & Technology zhoulearner@gail.co Xiaojian Zhou School of Manageent,

More information

A remark on a success rate model for DPA and CPA

A remark on a success rate model for DPA and CPA A reark on a success rate odel for DPA and CPA A. Wieers, BSI Version 0.5 andreas.wieers@bsi.bund.de Septeber 5, 2018 Abstract The success rate is the ost coon evaluation etric for easuring the perforance

More information

Pseudo-marginal Metropolis-Hastings: a simple explanation and (partial) review of theory

Pseudo-marginal Metropolis-Hastings: a simple explanation and (partial) review of theory Pseudo-arginal Metropolis-Hastings: a siple explanation and (partial) review of theory Chris Sherlock Motivation Iagine a stochastic process V which arises fro soe distribution with density p(v θ ). Iagine

More information

E0 370 Statistical Learning Theory Lecture 6 (Aug 30, 2011) Margin Analysis

E0 370 Statistical Learning Theory Lecture 6 (Aug 30, 2011) Margin Analysis E0 370 tatistical Learning Theory Lecture 6 (Aug 30, 20) Margin Analysis Lecturer: hivani Agarwal cribe: Narasihan R Introduction In the last few lectures we have seen how to obtain high confidence bounds

More information

Cosine similarity and the Borda rule

Cosine similarity and the Borda rule Cosine siilarity and the Borda rule Yoko Kawada Abstract Cosine siilarity is a coonly used siilarity easure in coputer science. We propose a voting rule based on cosine siilarity, naely, the cosine siilarity

More information

Testing the lag length of vector autoregressive models: A power comparison between portmanteau and Lagrange multiplier tests

Testing the lag length of vector autoregressive models: A power comparison between portmanteau and Lagrange multiplier tests Working Papers 2017-03 Testing the lag length of vector autoregressive odels: A power coparison between portanteau and Lagrange ultiplier tests Raja Ben Hajria National Engineering School, University of

More information

Chapter 6 1-D Continuous Groups

Chapter 6 1-D Continuous Groups Chapter 6 1-D Continuous Groups Continuous groups consist of group eleents labelled by one or ore continuous variables, say a 1, a 2,, a r, where each variable has a well- defined range. This chapter explores:

More information

W-BASED VS LATENT VARIABLES SPATIAL AUTOREGRESSIVE MODELS: EVIDENCE FROM MONTE CARLO SIMULATIONS

W-BASED VS LATENT VARIABLES SPATIAL AUTOREGRESSIVE MODELS: EVIDENCE FROM MONTE CARLO SIMULATIONS W-BASED VS LATENT VARIABLES SPATIAL AUTOREGRESSIVE MODELS: EVIDENCE FROM MONTE CARLO SIMULATIONS. Introduction When it coes to applying econoetric odels to analyze georeferenced data, researchers are well

More information

Support Vector Machines. Maximizing the Margin

Support Vector Machines. Maximizing the Margin Support Vector Machines Support vector achines (SVMs) learn a hypothesis: h(x) = b + Σ i= y i α i k(x, x i ) (x, y ),..., (x, y ) are the training exs., y i {, } b is the bias weight. α,..., α are the

More information

Fixed-to-Variable Length Distribution Matching

Fixed-to-Variable Length Distribution Matching Fixed-to-Variable Length Distribution Matching Rana Ali Ajad and Georg Böcherer Institute for Counications Engineering Technische Universität München, Gerany Eail: raa2463@gail.co,georg.boecherer@tu.de

More information

A Simple Regression Problem

A Simple Regression Problem A Siple Regression Proble R. M. Castro March 23, 2 In this brief note a siple regression proble will be introduced, illustrating clearly the bias-variance tradeoff. Let Y i f(x i ) + W i, i,..., n, where

More information

Combining Classifiers

Combining Classifiers Cobining Classifiers Generic ethods of generating and cobining ultiple classifiers Bagging Boosting References: Duda, Hart & Stork, pg 475-480. Hastie, Tibsharini, Friedan, pg 246-256 and Chapter 10. http://www.boosting.org/

More information

Experimental Design For Model Discrimination And Precise Parameter Estimation In WDS Analysis

Experimental Design For Model Discrimination And Precise Parameter Estimation In WDS Analysis City University of New York (CUNY) CUNY Acadeic Works International Conference on Hydroinforatics 8-1-2014 Experiental Design For Model Discriination And Precise Paraeter Estiation In WDS Analysis Giovanna

More information

Biostatistics Department Technical Report

Biostatistics Department Technical Report Biostatistics Departent Technical Report BST006-00 Estiation of Prevalence by Pool Screening With Equal Sized Pools and a egative Binoial Sapling Model Charles R. Katholi, Ph.D. Eeritus Professor Departent

More information

A note on the multiplication of sparse matrices

A note on the multiplication of sparse matrices Cent. Eur. J. Cop. Sci. 41) 2014 1-11 DOI: 10.2478/s13537-014-0201-x Central European Journal of Coputer Science A note on the ultiplication of sparse atrices Research Article Keivan Borna 12, Sohrab Aboozarkhani

More information

Tail Estimation of the Spectral Density under Fixed-Domain Asymptotics

Tail Estimation of the Spectral Density under Fixed-Domain Asymptotics Tail Estiation of the Spectral Density under Fixed-Doain Asyptotics Wei-Ying Wu, Chae Young Li and Yiin Xiao Wei-Ying Wu, Departent of Statistics & Probability Michigan State University, East Lansing,

More information

Boosting with log-loss

Boosting with log-loss Boosting with log-loss Marco Cusuano-Towner Septeber 2, 202 The proble Suppose we have data exaples {x i, y i ) i =... } for a two-class proble with y i {, }. Let F x) be the predictor function with the

More information

TABLE FOR UPPER PERCENTAGE POINTS OF THE LARGEST ROOT OF A DETERMINANTAL EQUATION WITH FIVE ROOTS. By William W. Chen

TABLE FOR UPPER PERCENTAGE POINTS OF THE LARGEST ROOT OF A DETERMINANTAL EQUATION WITH FIVE ROOTS. By William W. Chen TABLE FOR UPPER PERCENTAGE POINTS OF THE LARGEST ROOT OF A DETERMINANTAL EQUATION WITH FIVE ROOTS By Willia W. Chen The distribution of the non-null characteristic roots of a atri derived fro saple observations

More information

Course Notes for EE227C (Spring 2018): Convex Optimization and Approximation

Course Notes for EE227C (Spring 2018): Convex Optimization and Approximation Course Notes for EE7C (Spring 018: Convex Optiization and Approxiation Instructor: Moritz Hardt Eail: hardt+ee7c@berkeley.edu Graduate Instructor: Max Sichowitz Eail: sichow+ee7c@berkeley.edu October 15,

More information

Bayes Decision Rule and Naïve Bayes Classifier

Bayes Decision Rule and Naïve Bayes Classifier Bayes Decision Rule and Naïve Bayes Classifier Le Song Machine Learning I CSE 6740, Fall 2013 Gaussian Mixture odel A density odel p(x) ay be ulti-odal: odel it as a ixture of uni-odal distributions (e.g.

More information

Meta-Analytic Interval Estimation for Bivariate Correlations

Meta-Analytic Interval Estimation for Bivariate Correlations Psychological Methods 2008, Vol. 13, No. 3, 173 181 Copyright 2008 by the Aerican Psychological Association 1082-989X/08/$12.00 DOI: 10.1037/a0012868 Meta-Analytic Interval Estiation for Bivariate Correlations

More information

Feature Extraction Techniques

Feature Extraction Techniques Feature Extraction Techniques Unsupervised Learning II Feature Extraction Unsupervised ethods can also be used to find features which can be useful for categorization. There are unsupervised ethods that

More information

A Simplified Analytical Approach for Efficiency Evaluation of the Weaving Machines with Automatic Filling Repair

A Simplified Analytical Approach for Efficiency Evaluation of the Weaving Machines with Automatic Filling Repair Proceedings of the 6th SEAS International Conference on Siulation, Modelling and Optiization, Lisbon, Portugal, Septeber -4, 006 0 A Siplified Analytical Approach for Efficiency Evaluation of the eaving

More information

13.2 Fully Polynomial Randomized Approximation Scheme for Permanent of Random 0-1 Matrices

13.2 Fully Polynomial Randomized Approximation Scheme for Permanent of Random 0-1 Matrices CS71 Randoness & Coputation Spring 018 Instructor: Alistair Sinclair Lecture 13: February 7 Disclaier: These notes have not been subjected to the usual scrutiny accorded to foral publications. They ay

More information

Supplementary to Learning Discriminative Bayesian Networks from High-dimensional Continuous Neuroimaging Data

Supplementary to Learning Discriminative Bayesian Networks from High-dimensional Continuous Neuroimaging Data Suppleentary to Learning Discriinative Bayesian Networks fro High-diensional Continuous Neuroiaging Data Luping Zhou, Lei Wang, Lingqiao Liu, Philip Ogunbona, and Dinggang Shen Proposition. Given a sparse

More information

Online Supplement. and. Pradeep K. Chintagunta Graduate School of Business University of Chicago

Online Supplement. and. Pradeep K. Chintagunta Graduate School of Business University of Chicago Online Suppleent easuring Cross-Category Price Effects with Aggregate Store Data Inseong Song ksong@usthk Departent of arketing, HKUST Hong Kong University of Science and Technology and Pradeep K Chintagunta

More information

A Poisson process reparameterisation for Bayesian inference for extremes

A Poisson process reparameterisation for Bayesian inference for extremes A Poisson process reparaeterisation for Bayesian inference for extrees Paul Sharkey and Jonathan A. Tawn Eail: p.sharkey1@lancs.ac.uk, j.tawn@lancs.ac.uk STOR-i Centre for Doctoral Training, Departent

More information

Probability Distributions

Probability Distributions Probability Distributions In Chapter, we ephasized the central role played by probability theory in the solution of pattern recognition probles. We turn now to an exploration of soe particular exaples

More information

Inference in the Presence of Likelihood Monotonicity for Polytomous and Logistic Regression

Inference in the Presence of Likelihood Monotonicity for Polytomous and Logistic Regression Advances in Pure Matheatics, 206, 6, 33-34 Published Online April 206 in SciRes. http://www.scirp.org/journal/ap http://dx.doi.org/0.4236/ap.206.65024 Inference in the Presence of Likelihood Monotonicity

More information

arxiv: v1 [cs.lg] 8 Jan 2019

arxiv: v1 [cs.lg] 8 Jan 2019 Data Masking with Privacy Guarantees Anh T. Pha Oregon State University phatheanhbka@gail.co Shalini Ghosh Sasung Research shalini.ghosh@gail.co Vinod Yegneswaran SRI international vinod@csl.sri.co arxiv:90.085v

More information

Detection and Estimation Theory

Detection and Estimation Theory ESE 54 Detection and Estiation Theory Joseph A. O Sullivan Sauel C. Sachs Professor Electronic Systes and Signals Research Laboratory Electrical and Systes Engineering Washington University 11 Urbauer

More information

Nonlinear Log-Periodogram Regression for Perturbed Fractional Processes

Nonlinear Log-Periodogram Regression for Perturbed Fractional Processes Nonlinear Log-Periodogra Regression for Perturbed Fractional Processes Yixiao Sun Departent of Econoics Yale University Peter C. B. Phillips Cowles Foundation for Research in Econoics Yale University First

More information

SPECTRUM sensing is a core concept of cognitive radio

SPECTRUM sensing is a core concept of cognitive radio World Acadey of Science, Engineering and Technology International Journal of Electronics and Counication Engineering Vol:6, o:2, 202 Efficient Detection Using Sequential Probability Ratio Test in Mobile

More information

A general forulation of the cross-nested logit odel Michel Bierlaire, Dpt of Matheatics, EPFL, Lausanne Phone: Fax:

A general forulation of the cross-nested logit odel Michel Bierlaire, Dpt of Matheatics, EPFL, Lausanne Phone: Fax: A general forulation of the cross-nested logit odel Michel Bierlaire, EPFL Conference paper STRC 2001 Session: Choices A general forulation of the cross-nested logit odel Michel Bierlaire, Dpt of Matheatics,

More information

Quantum algorithms (CO 781, Winter 2008) Prof. Andrew Childs, University of Waterloo LECTURE 15: Unstructured search and spatial search

Quantum algorithms (CO 781, Winter 2008) Prof. Andrew Childs, University of Waterloo LECTURE 15: Unstructured search and spatial search Quantu algoriths (CO 781, Winter 2008) Prof Andrew Childs, University of Waterloo LECTURE 15: Unstructured search and spatial search ow we begin to discuss applications of quantu walks to search algoriths

More information

arxiv: v2 [math.st] 11 Dec 2018

arxiv: v2 [math.st] 11 Dec 2018 esting for high-diensional network paraeters in auto-regressive odels arxiv:803659v [aths] Dec 08 Lili Zheng and Garvesh Raskutti Abstract High-diensional auto-regressive odels provide a natural way to

More information

A proposal for a First-Citation-Speed-Index Link Peer-reviewed author version

A proposal for a First-Citation-Speed-Index Link Peer-reviewed author version A proposal for a First-Citation-Speed-Index Link Peer-reviewed author version Made available by Hasselt University Library in Docuent Server@UHasselt Reference (Published version): EGGHE, Leo; Bornann,

More information

A Note on the Applied Use of MDL Approximations

A Note on the Applied Use of MDL Approximations A Note on the Applied Use of MDL Approxiations Daniel J. Navarro Departent of Psychology Ohio State University Abstract An applied proble is discussed in which two nested psychological odels of retention

More information

A Small-Sample Estimator for the Sample-Selection Model

A Small-Sample Estimator for the Sample-Selection Model A Sall-Saple Estiator for the Saple-Selection Model by Aos Golan, Enrico Moretti, and Jeffrey M. Perloff October 2000 ABSTRACT A seiparaetric estiator for evaluating the paraeters of data generated under

More information

ESTIMATING AND FORMING CONFIDENCE INTERVALS FOR EXTREMA OF RANDOM POLYNOMIALS. A Thesis. Presented to. The Faculty of the Department of Mathematics

ESTIMATING AND FORMING CONFIDENCE INTERVALS FOR EXTREMA OF RANDOM POLYNOMIALS. A Thesis. Presented to. The Faculty of the Department of Mathematics ESTIMATING AND FORMING CONFIDENCE INTERVALS FOR EXTREMA OF RANDOM POLYNOMIALS A Thesis Presented to The Faculty of the Departent of Matheatics San Jose State University In Partial Fulfillent of the Requireents

More information

This article appeared in a journal published by Elsevier. The attached copy is furnished to the author for internal non-commercial research and

This article appeared in a journal published by Elsevier. The attached copy is furnished to the author for internal non-commercial research and This article appeared in a ournal published by Elsevier. The attached copy is furnished to the author for internal non-coercial research and education use, including for instruction at the authors institution

More information

A New Class of APEX-Like PCA Algorithms

A New Class of APEX-Like PCA Algorithms Reprinted fro Proceedings of ISCAS-98, IEEE Int. Syposiu on Circuit and Systes, Monterey (USA), June 1998 A New Class of APEX-Like PCA Algoriths Sione Fiori, Aurelio Uncini, Francesco Piazza Dipartiento

More information

Polygonal Designs: Existence and Construction

Polygonal Designs: Existence and Construction Polygonal Designs: Existence and Construction John Hegean Departent of Matheatics, Stanford University, Stanford, CA 9405 Jeff Langford Departent of Matheatics, Drake University, Des Moines, IA 5011 G

More information

ASSUME a source over an alphabet size m, from which a sequence of n independent samples are drawn. The classical

ASSUME a source over an alphabet size m, from which a sequence of n independent samples are drawn. The classical IEEE TRANSACTIONS ON INFORMATION THEORY Large Alphabet Source Coding using Independent Coponent Analysis Aichai Painsky, Meber, IEEE, Saharon Rosset and Meir Feder, Fellow, IEEE arxiv:67.7v [cs.it] Jul

More information

PAC-Bayes Analysis Of Maximum Entropy Learning

PAC-Bayes Analysis Of Maximum Entropy Learning PAC-Bayes Analysis Of Maxiu Entropy Learning John Shawe-Taylor and David R. Hardoon Centre for Coputational Statistics and Machine Learning Departent of Coputer Science University College London, UK, WC1E

More information

Optimal Jackknife for Discrete Time and Continuous Time Unit Root Models

Optimal Jackknife for Discrete Time and Continuous Time Unit Root Models Optial Jackknife for Discrete Tie and Continuous Tie Unit Root Models Ye Chen and Jun Yu Singapore Manageent University January 6, Abstract Maxiu likelihood estiation of the persistence paraeter in the

More information

Support recovery in compressed sensing: An estimation theoretic approach

Support recovery in compressed sensing: An estimation theoretic approach Support recovery in copressed sensing: An estiation theoretic approach Ain Karbasi, Ali Horati, Soheil Mohajer, Martin Vetterli School of Coputer and Counication Sciences École Polytechnique Fédérale de

More information

Ch 12: Variations on Backpropagation

Ch 12: Variations on Backpropagation Ch 2: Variations on Backpropagation The basic backpropagation algorith is too slow for ost practical applications. It ay take days or weeks of coputer tie. We deonstrate why the backpropagation algorith

More information

Smooth Test for Testing Equality of Two Densities 1

Smooth Test for Testing Equality of Two Densities 1 Sooth Test for Testing Equality of Two Densities Anil K. Bera University of Illinois. E-ail:abera@uiuc.edu. Phone: 7-333-4596. Aurobindo Ghosh Singapore Manageent University. E-ail:aurobindo@su.edu.sg.

More information

Principal Components Analysis

Principal Components Analysis Principal Coponents Analysis Cheng Li, Bingyu Wang Noveber 3, 204 What s PCA Principal coponent analysis (PCA) is a statistical procedure that uses an orthogonal transforation to convert a set of observations

More information

Determining OWA Operator Weights by Mean Absolute Deviation Minimization

Determining OWA Operator Weights by Mean Absolute Deviation Minimization Deterining OWA Operator Weights by Mean Absolute Deviation Miniization Micha l Majdan 1,2 and W lodziierz Ogryczak 1 1 Institute of Control and Coputation Engineering, Warsaw University of Technology,

More information

A Markov Framework for the Simple Genetic Algorithm

A Markov Framework for the Simple Genetic Algorithm A arkov Fraework for the Siple Genetic Algorith Thoas E. Davis*, Jose C. Principe Electrical Engineering Departent University of Florida, Gainesville, FL 326 *WL/NGS Eglin AFB, FL32542 Abstract This paper

More information

Introduction to Machine Learning. Recitation 11

Introduction to Machine Learning. Recitation 11 Introduction to Machine Learning Lecturer: Regev Schweiger Recitation Fall Seester Scribe: Regev Schweiger. Kernel Ridge Regression We now take on the task of kernel-izing ridge regression. Let x,...,

More information

Best Procedures For Sample-Free Item Analysis

Best Procedures For Sample-Free Item Analysis Best Procedures For Saple-Free Ite Analysis Benjain D. Wright University of Chicago Graha A. Douglas University of Western Australia Wright s (1969) widely used "unconditional" procedure for Rasch saple-free

More information

Intelligent Systems: Reasoning and Recognition. Perceptrons and Support Vector Machines

Intelligent Systems: Reasoning and Recognition. Perceptrons and Support Vector Machines Intelligent Systes: Reasoning and Recognition Jaes L. Crowley osig 1 Winter Seester 2018 Lesson 6 27 February 2018 Outline Perceptrons and Support Vector achines Notation...2 Linear odels...3 Lines, Planes

More information

Support Vector Machine Classification of Uncertain and Imbalanced data using Robust Optimization

Support Vector Machine Classification of Uncertain and Imbalanced data using Robust Optimization Recent Researches in Coputer Science Support Vector Machine Classification of Uncertain and Ibalanced data using Robust Optiization RAGHAV PAT, THEODORE B. TRAFALIS, KASH BARKER School of Industrial Engineering

More information

Necessity of low effective dimension

Necessity of low effective dimension Necessity of low effective diension Art B. Owen Stanford University October 2002, Orig: July 2002 Abstract Practitioners have long noticed that quasi-monte Carlo ethods work very well on functions that

More information

Empirical phi-divergence test-statistics for the equality of means of two populations

Empirical phi-divergence test-statistics for the equality of means of two populations Epirical phi-divergence test-statistics for the equality of eans of two populations N. Balakrishnan, N. Martín and L. Pardo 3 Departent of Matheatics and Statistics, McMaster University, Hailton, Canada

More information

Online Publication Date: 19 April 2012 Publisher: Asian Economic and Social Society

Online Publication Date: 19 April 2012 Publisher: Asian Economic and Social Society Online Publication Date: April Publisher: Asian Econoic and Social Society Using Entropy Working Correlation Matrix in Generalized Estiating Equation for Stock Price Change Model Serpilılıç (Faculty of

More information

Deflation of the I-O Series Some Technical Aspects. Giorgio Rampa University of Genoa April 2007

Deflation of the I-O Series Some Technical Aspects. Giorgio Rampa University of Genoa April 2007 Deflation of the I-O Series 1959-2. Soe Technical Aspects Giorgio Rapa University of Genoa g.rapa@unige.it April 27 1. Introduction The nuber of sectors is 42 for the period 1965-2 and 38 for the initial

More information

Bootstrapping Dependent Data

Bootstrapping Dependent Data Bootstrapping Dependent Data One of the key issues confronting bootstrap resapling approxiations is how to deal with dependent data. Consider a sequence fx t g n t= of dependent rando variables. Clearly

More information

CSE525: Randomized Algorithms and Probabilistic Analysis May 16, Lecture 13

CSE525: Randomized Algorithms and Probabilistic Analysis May 16, Lecture 13 CSE55: Randoied Algoriths and obabilistic Analysis May 6, Lecture Lecturer: Anna Karlin Scribe: Noah Siegel, Jonathan Shi Rando walks and Markov chains This lecture discusses Markov chains, which capture

More information

AN OPTIMAL SHRINKAGE FACTOR IN PREDICTION OF ORDERED RANDOM EFFECTS

AN OPTIMAL SHRINKAGE FACTOR IN PREDICTION OF ORDERED RANDOM EFFECTS Statistica Sinica 6 016, 1709-178 doi:http://dx.doi.org/10.5705/ss.0014.0034 AN OPTIMAL SHRINKAGE FACTOR IN PREDICTION OF ORDERED RANDOM EFFECTS Nilabja Guha 1, Anindya Roy, Yaakov Malinovsky and Gauri

More information

A Low-Complexity Congestion Control and Scheduling Algorithm for Multihop Wireless Networks with Order-Optimal Per-Flow Delay

A Low-Complexity Congestion Control and Scheduling Algorithm for Multihop Wireless Networks with Order-Optimal Per-Flow Delay A Low-Coplexity Congestion Control and Scheduling Algorith for Multihop Wireless Networks with Order-Optial Per-Flow Delay Po-Kai Huang, Xiaojun Lin, and Chih-Chun Wang School of Electrical and Coputer

More information

Model Fitting. CURM Background Material, Fall 2014 Dr. Doreen De Leon

Model Fitting. CURM Background Material, Fall 2014 Dr. Doreen De Leon Model Fitting CURM Background Material, Fall 014 Dr. Doreen De Leon 1 Introduction Given a set of data points, we often want to fit a selected odel or type to the data (e.g., we suspect an exponential

More information

Solution of Multivariable Optimization with Inequality Constraints by Lagrange Multipliers. where, x=[x 1 x 2. x n] T

Solution of Multivariable Optimization with Inequality Constraints by Lagrange Multipliers. where, x=[x 1 x 2. x n] T Solution of Multivariable Optiization with Inequality Constraints by Lagrange Multipliers Consider this proble: Miniize f where, =[. n] T subect to, g,, The g functions are labeled inequality constraints.

More information

Extension of CSRSM for the Parametric Study of the Face Stability of Pressurized Tunnels

Extension of CSRSM for the Parametric Study of the Face Stability of Pressurized Tunnels Extension of CSRSM for the Paraetric Study of the Face Stability of Pressurized Tunnels Guilhe Mollon 1, Daniel Dias 2, and Abdul-Haid Soubra 3, M.ASCE 1 LGCIE, INSA Lyon, Université de Lyon, Doaine scientifique

More information

A Self-Organizing Model for Logical Regression Jerry Farlow 1 University of Maine. (1900 words)

A Self-Organizing Model for Logical Regression Jerry Farlow 1 University of Maine. (1900 words) 1 A Self-Organizing Model for Logical Regression Jerry Farlow 1 University of Maine (1900 words) Contact: Jerry Farlow Dept of Matheatics Univeristy of Maine Orono, ME 04469 Tel (07) 866-3540 Eail: farlow@ath.uaine.edu

More information

Use of PSO in Parameter Estimation of Robot Dynamics; Part One: No Need for Parameterization

Use of PSO in Parameter Estimation of Robot Dynamics; Part One: No Need for Parameterization Use of PSO in Paraeter Estiation of Robot Dynaics; Part One: No Need for Paraeterization Hossein Jahandideh, Mehrzad Navar Abstract Offline procedures for estiating paraeters of robot dynaics are practically

More information

Optimum Value of Poverty Measure Using Inverse Optimization Programming Problem

Optimum Value of Poverty Measure Using Inverse Optimization Programming Problem International Journal of Conteporary Matheatical Sciences Vol. 14, 2019, no. 1, 31-42 HIKARI Ltd, www.-hikari.co https://doi.org/10.12988/ijcs.2019.914 Optiu Value of Poverty Measure Using Inverse Optiization

More information

Multivariate Methods. Matlab Example. Principal Components Analysis -- PCA

Multivariate Methods. Matlab Example. Principal Components Analysis -- PCA Multivariate Methos Xiaoun Qi Principal Coponents Analysis -- PCA he PCA etho generates a new set of variables, calle principal coponents Each principal coponent is a linear cobination of the original

More information

Fundamental Limits of Database Alignment

Fundamental Limits of Database Alignment Fundaental Liits of Database Alignent Daniel Cullina Dept of Electrical Engineering Princeton University dcullina@princetonedu Prateek Mittal Dept of Electrical Engineering Princeton University pittal@princetonedu

More information

The Hilbert Schmidt version of the commutator theorem for zero trace matrices

The Hilbert Schmidt version of the commutator theorem for zero trace matrices The Hilbert Schidt version of the coutator theore for zero trace atrices Oer Angel Gideon Schechtan March 205 Abstract Let A be a coplex atrix with zero trace. Then there are atrices B and C such that

More information

Priorities in Thurstone Scaling and Steady-State Probabilities in Markov Stochastic Modeling

Priorities in Thurstone Scaling and Steady-State Probabilities in Markov Stochastic Modeling Journal of Modern Applied Statistical Methods Volue Issue Article 5--03 Priorities in Thurstone Scaling and Steady-State Probabilities in Markov Stochastic Modeling Stan Lipovetsky GfK Custo Research North

More information

Pattern Recognition and Machine Learning. Artificial Neural networks

Pattern Recognition and Machine Learning. Artificial Neural networks Pattern Recognition and Machine Learning Jaes L. Crowley ENSIMAG 3 - MMIS Fall Seester 2016 Lessons 7 14 Dec 2016 Outline Artificial Neural networks Notation...2 1. Introduction...3... 3 The Artificial

More information

MSEC MODELING OF DEGRADATION PROCESSES TO OBTAIN AN OPTIMAL SOLUTION FOR MAINTENANCE AND PERFORMANCE

MSEC MODELING OF DEGRADATION PROCESSES TO OBTAIN AN OPTIMAL SOLUTION FOR MAINTENANCE AND PERFORMANCE Proceeding of the ASME 9 International Manufacturing Science and Engineering Conference MSEC9 October 4-7, 9, West Lafayette, Indiana, USA MSEC9-8466 MODELING OF DEGRADATION PROCESSES TO OBTAIN AN OPTIMAL

More information