
Nonparametric Estimation of Smoothed Principal Components Analysis of Sampled Noisy Functions

Hervé Cardot
Unité de Biométrie et Intelligence Artificielle, INRA Toulouse, BP 27, F-31326 Castanet-Tolosan Cedex, FRANCE
cardot@toulouse.inra.fr

(To appear in the Journal of Nonparametric Statistics)

Abstract

This study deals with the simultaneous nonparametric estimation of n curves, or observations of a random process corrupted by noise, whose sample paths belong to a finite dimensional functional subspace. The estimation, by means of B-splines, leads to a new kind of functional principal components analysis. Asymptotic rates of convergence are given for the mean and for the eigenelements of the empirical covariance operator. Heuristic arguments show that a well chosen smoothing parameter may improve the estimation of the subspace which contains the sample paths of the process. Finally, simulations suggest that the estimation method studied here is advantageous when there is a small number of design points.

Key words: functional principal component analysis, nonparametric regression, rates of convergence, mean square error, asymptotic expansion, hybrid splines, B-splines.

1 Introduction

This paper aims at giving the asymptotic rates of convergence, in mean integrated square error (MISE), of the nonparametric estimation of the second order characteristics of several independent and identically distributed sampled curves lying in a common finite dimensional subspace of smooth functions. Furthermore, we suppose that the curves are corrupted by a white noise at the (non random) design points. In applications, this may be the case when the data are sampled growth curves

(Boularan et al. 1993), meteorological data (Ramsay & Dalzell, 1991) or econometric data (Kneip, 1995). The reader is also referred to the book of Ramsay & Silverman (1997), which presents many statistical models for such functional data. The method of estimation examined here has been previously studied, in a practical way, by Besse et al. (1997). Their estimates are constructed by means of hybrid splines (see Kelly & Rice (1990) for a definition and Diack & Thomas-Agnan (1997) for the use of these functions to test convexity) and lead to a new functional Principal Components Analysis (PCA). Besides, this PCA is adapted to the case where the data are unbalanced. They show by simulations that this new method can improve the estimation of the signal compared to usual nonparametric methods such as smoothing splines.

Asymptotic properties of the PCA of Hilbert valued random variables have already been investigated by Dauxois et al. (1982) and various authors (see e.g. Bosq 1991, Pezzulli & Silverman 1993, Silverman 1996). When the observations are discretized, Biritxinaga (1987) and Besse (1991) have demonstrated the convergence of the spline estimates of functional PCA. However, they did not assume that the curves were corrupted by sampling noise at the design points, and they did not discuss the links between the asymptotic behaviour of the MISE and the different parameters such as the number of curves, the number of design points and the value of the smoothing parameter. In this article, we demonstrate the convergence of our estimates and bound the MISE of the mean and of the eigenelements of the smooth covariance operator as a function of all these parameters. Then we show how smoothing can improve the accuracy of the estimates.

The organization of the paper is as follows. In section 2, we establish notations, describe the model and explain how the estimates are constructed. In section 3, we give rates of convergence for the mean and the eigenfunctions of the covariance operator. In section 4, heuristic arguments based on perturbation theory (as in Pezzulli & Silverman, 1993 and Silverman, 1996) are used to show how smoothing may improve our estimates. A simulation study is presented in section 5, where it is noted that it can be advantageous to smooth when there is a small number of design points. Finally, section 6 summarizes the proofs of our propositions. More details may be found in Cardot (1997).

2 Model and estimation

2.1 Notations

Consider a random function Z defined on the probability space (Omega, A, P) and taking values in the separable Hilbert space L2[0,1] equipped with the inner product

    <f, g> = int_0^1 f(t) g(t) dt,   for all f, g in L2[0,1].

Let us denote by ||.||_L2 the norm induced by this inner product, and define f (x) g to be the rank one operator which satisfies

    (f (x) g)(h) = <f, h> g,   for all f, g, h in L2[0,1].

In order to approximate the sample paths, we use normalized B-splines (De Boor 1978, Schumaker 1981). Denote by S_k the space of spline functions of degree d with k equispaced knots. It is well known that S_k has a basis consisting of r = k + d normalized B-splines B_kj(t), j = 1, ..., k + d. Let B_k(t) be the vector of {B_kj(t), j = 1, ..., k + d} and consider the Gram matrices

    [C_k]_jl = int_0^1 B_kj(t) B_kl(t) dt,                    j, l = 1, ..., r,     (1)

    [C^_k]_jl = (1/p) sum_{i=1}^p B_kj(t_i) B_kl(t_i),        j, l = 1, ..., r,     (2)

where t_1, ..., t_p are the design points. Define, for every integer m smaller than d, the matrix G_k:

    [G_k]_jl = int_0^1 B_kj^(m)(t) B_kl^(m)(t) dt,            j, l = 1, ..., r.     (3)

It is the matrix associated to the semi-norm defined by the differential operator D^m, which measures the smoothness of any function belonging to S_k.

In this article, the norm ||.|| will denote either

- the euclidean norm, ||b||^2 = sum_j b_j^2 for b in R^r;
- the usual matrix norm, that is, for any real matrix A, ||A|| = sup_{||x||=1} ||Ax||, where ||x|| is the euclidean norm of the vector x;
- the usual norm for bounded linear operators defined on L2[0,1].
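The three matrices above can be formed numerically. The following is a minimal sketch, not taken from the paper: it assumes SciPy's BSpline objects, an open (clamped) knot vector on [0,1] with k equispaced interior knots, and simple Riemann sums as quadrature; with this knot layout the basis has k + d + 1 elements.

    # Sketch: B-spline basis and the matrices C_k, C^_k (empirical) and G_k.
    import numpy as np
    from scipy.interpolate import BSpline

    degree, m, k = 3, 2, 8                                 # cubic splines, 2nd-derivative penalty, k interior knots
    interior = np.linspace(0.0, 1.0, k + 2)[1:-1]
    knots = np.r_[np.zeros(degree + 1), interior, np.ones(degree + 1)]
    r = len(knots) - degree - 1                            # number of basis functions

    basis = BSpline(knots, np.eye(r), degree)              # vector-valued spline: basis(x) has shape (len(x), r)
    d_basis = basis.derivative(m)                          # m-th derivatives of all basis functions

    grid = np.linspace(0.0, 1.0, 2001)                     # fine quadrature grid (illustrative)
    Bg = basis(grid)
    C = (Bg.T @ Bg) / len(grid)                            # ~ [C_k]_jl = int B_kj B_kl dt

    p = 30                                                 # design points t_j = (j-1)/(p-1)
    t = np.linspace(0.0, 1.0, p)
    Bt = basis(t)
    C_hat = (Bt.T @ Bt) / p                                # empirical Gram matrix C^_k

    Dg = d_basis(grid)
    G = (Dg.T @ Dg) / len(grid)                            # roughness penalty matrix G_k

The objects basis, Bt, C, C_hat and G are reused in the later sketches.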

2.2 Model

Let S = {S(t), t in [0,1]} be a "smooth" second order random function composed of a deterministic component mu(t) and a centered random residual part Z(t):

    S(t) = mu(t) + Z(t),   t in [0,1].     (4)

The function mu is the mean of S and the random part Z is supposed to belong to a finite dimensional space, say E_q, of smooth functions. Suppose we observe n independent realizations of this signal, corrupted by a white noise at the sampling points t_1 < ... < t_p. Then we have the data Y_i in R^p:

    Y_i = mu + Z_i + eps_i,   i = 1, ..., n,     (5)

where

    Y_i(t_j) = mu(t_j) + Z_i(t_j) + eps_ij,   i = 1, ..., n,  j = 1, ..., p,
    Z_i in E_q,   E(Z_i) = 0,   E||Z_i||^2_L2 < +inf,
    E_q subset of L2[0,1],   dim E_q = q,
    Z_i independent of eps_ij,
    E(eps_ij) = 0,   E(eps_ij eps_i'j') = sigma^2 delta_ii' delta_jj'.     (6)

In the following, bold face letters denote vectors or matrices. For the sake of simplicity, the distribution of the design points is supposed to be uniform on [0,1]:

    (A.1)  t_j = (j - 1)/(p - 1),   j = 1, 2, ..., p.

It is also possible to get convergence results similar to those obtained in section 3 by assuming a weaker condition on the design points (Agarwal & Studden, 1980), at the expense, however, of more complicated expressions. Nevertheless, the estimators defined in the next section are perfectly adapted to this situation.

The process Z is a second order process and therefore it can be written, in quadratic mean, for a basis of orthonormal functions {phi_1, ..., phi_q} of the subspace E_q:

    Z(t) = sum_{j=1}^q <Z, phi_j> phi_j(t).     (7)

The smoothness assumption on the signal S is then ensured by assuming that mu and each function phi_j satisfy the condition:

    (A.2)  mu and phi_1, ..., phi_q belong to C^m[0,1], that is to say they have m continuous derivatives.

This article aims at demonstrating the convergence of the estimates of mu and phi_1, ..., phi_q deduced from the observation of the vectors {Y_i, i = 1, ..., n}. One way to compute the functions phi_j is to perform the spectral analysis of the covariance operator of the random function Z:

    Gamma = E(Z (x) Z).     (8)

In that case, these functions are the eigenfunctions of Gamma associated to the eigenvalues lambda_j = E<Z, phi_j>^2, and the expansion (7) is called the Principal Components Analysis (PCA) or the Karhunen-Loeve expansion of Z (Dauxois et al. 1982).
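As a purely illustrative check of (7)-(8), not part of the paper: for a process living in a q-dimensional space, the leading eigenfunctions of the (discretized) covariance kernel recover the basis functions phi_j and the eigenvalues lambda_j. The basis functions and eigenvalues chosen below are arbitrary.

    # Sketch: eigendecomposition of a finite-rank covariance kernel.
    import numpy as np

    grid = np.linspace(0.0, 1.0, 500)
    dt = grid[1] - grid[0]
    phi = np.array([np.sqrt(2) * np.sin((j + 1) * np.pi * grid) for j in range(3)])  # orthonormal in L2[0,1]
    lam = np.array([1.0, 0.3, 0.1])                                                  # lambda_j = E<Z, phi_j>^2

    gamma = (phi.T * lam) @ phi                        # kernel Gamma(s,t) = sum_j lam_j phi_j(s) phi_j(t)
    vals, vecs = np.linalg.eigh(gamma * dt)            # discretized covariance operator
    top_vals = vals[::-1][:3]                          # ~ lam (up to discretization error)
    top_funs = vecs[:, ::-1][:, :3] / np.sqrt(dt)      # ~ +/- phi_j evaluated on the grid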

2.3 Construction of the estimates

Fix d = m + 1 and denote by y^_i the least squares estimate of the i-th sample path in the B-splines basis. It can be written as follows:

    y^_i(t) = b^_i' C^_k^{-1} B_k(t) = s^_i' B_k(t),     (9)

where b^_i = (1/p) sum_{j=1}^p Y_ij B_k(t_j) and s^_i = C^_k^{-1} b^_i. The number of knots k depends on p and n, but these indices will be omitted for the sake of simplicity.

The specificity of the data suggests two kinds of constraints in the estimation procedure. On the one hand, a dimensionality constraint must ensure that the curves only span a finite dimensional subspace. On the other hand, each real curve has to be sufficiently smooth to satisfy condition (A.2). To estimate the sample paths and in order to take these constraints into account, Besse et al. (1997) considered the following optimization problem, which may be interpreted as a Tikhonov regularization of the least squares estimates y^_i:

    min_{y~_i in H_q^r}  (1/n) sum_{i=1}^n { ||y^_i - y~_i||^2_L2 + rho ||y~_i^(m)||^2_L2 },     (10)

where rho is a smoothing parameter and H_q^r is a q-dimensional subspace of S_k. Since s^_i' C_k s^_j = int_0^1 y^_i(t) y^_j(t) dt and s^_i' G_k s^_j = int_0^1 y^_i^(m)(t) y^_j^(m)(t) dt, the optimization problem (10) is equivalent to:

    min_{u_i in A_q}  (1/n) sum_{i=1}^n { ||s^_i - u_i||^2_{C_k} + rho ||u_i||^2_{G_k} },   dim A_q = q,     (11)

where A_q is a q-dimensional subspace of R^r. Denote by s^_{0,n} = (1/n) sum_{i=1}^n s^_i the coordinates of the empirical mean in the B-splines basis; then

    S_n = (1/n) sum_{i=1}^n (s^_i - s^_{0,n})(s^_i - s^_{0,n})'

is the empirical covariance matrix. Finally, define

    H_{k,rho} = (C_k + rho G_k)^{-1},     (12)

which acts as a "hat matrix". The solution of problem (11) is given by:

    u^_{i,rho} = P^_{q,rho} H_{k,rho} C_k (s^_i - s^_{0,n}) + H_{k,rho} C_k s^_{0,n},   i = 1, ..., n,
    y^_{i,rho}(t) = u^_{i,rho}' B_k(t),   t in [0,1],

where P^_{q,rho} = sum_{j=1}^q v^_{j,rho} v^_{j,rho}' H_{k,rho}^{-1} is the H_{k,rho}^{-1}-orthogonal projection onto the subspace generated by the first q eigenvectors (v^_{1,rho}, ..., v^_{q,rho}), normalized with respect to the metric H_{k,rho}^{-1}, of the matrix

    H_{k,rho} C_k S_n C_k.     (13)

(S-PLUS programs for carrying out the estimation are available by anonymous FTP from ftp.cict.fr in the directory pub/lstatprob/ferraty.)
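Independently of those programs, here is a minimal numerical sketch of the estimator just described (hat matrix, empirical covariance of the spline coordinates, and projection on the leading eigenvectors of (13) through the symmetric form used below in (15)). It reuses the matrices from the first sketch; all names are illustrative assumptions, not the author's code.

    # Sketch: rank-constrained smoothing of n noisy curves Y (n x p).
    import numpy as np
    from numpy.linalg import inv, eigh

    def smooth_fpca(Y, Bt, C, C_hat, G, rho, q):
        """Return smoothed coordinates U (n x r), eigenvector coordinates V (r x q), mean coordinates."""
        n, p = Y.shape
        b_hat = (Y @ Bt) / p                          # b^_i = (1/p) sum_j Y_ij B_k(t_j)
        s_hat = b_hat @ inv(C_hat).T                  # s^_i = C^_k^{-1} b^_i   (rows of s_hat)
        s_bar = s_hat.mean(axis=0)
        S_n = np.cov(s_hat, rowvar=False, bias=True)  # empirical covariance of the coordinates

        H = inv(C + rho * G)                          # hat matrix H_{k,rho}
        w, Q = eigh(H)
        H_half = (Q * np.sqrt(w)) @ Q.T               # symmetric square root H^{1/2}
        vals, vecs = eigh(H_half @ C @ S_n @ C @ H_half)
        order = np.argsort(vals)[::-1][:q]
        V = H_half @ vecs[:, order]                   # coordinates of the q eigenfunctions, H^{-1}-orthonormal

        P = V @ V.T @ inv(H)                          # projection P^_{q,rho}
        mean_coord = H @ C @ s_bar
        U = (s_hat - s_bar) @ (P @ H @ C).T + mean_coord
        return U, V, mean_coord

The j-th estimated eigenfunction is then t -> (V[:, j]) . B_k(t), and the i-th smoothed curve is t -> (U[i]) . B_k(t).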

Each estimated sample path can also be written as follows:

    y^_{i,rho}(t) = mu^_rho(t) + Z^_{i,rho}(t),   t in [0,1],     (14)

where mu^_rho = (H_{k,rho} C_k s^_{0,n})' B_k is the smoothed estimate of the mean function and Z^_{i,rho} = (P^_{q,rho} H_{k,rho} C_k (s^_i - s^_{0,n}))' B_k is the smooth estimate of the rank constrained individual random effect. The estimates of the phi_j's defined in (7) are written as follows in the B-splines basis:

    phi^_{j,rho,n}(t) = v^_{j,rho}' B_k(t),   t in [0,1],  j = 1, ..., q.

Let us notice that these estimates of the eigenfunctions are not orthonormal with respect to the usual inner product in L2[0,1] (except if rho = 0), since their coordinates are normalized with respect to H_{k,rho}^{-1}. Equivalently, the solution is given by:

    u^_{i,rho} = H_{k,rho}^{1/2} Q^_{q,rho} H_{k,rho}^{1/2} C_k (s^_i - s^_{0,n}) + H_{k,rho} C_k s^_{0,n},   i = 1, ..., n,

where Q^_{q,rho} = sum_{j=1}^q v_{j,rho} v_{j,rho}' is the orthogonal projection onto the subspace generated by the first q eigenvectors of the symmetric matrix

    H_{k,rho}^{1/2} C_k S_n C_k H_{k,rho}^{1/2}.     (15)

Remark: with a different goal, namely the estimation of smooth eigenfunctions of the covariance operator, the method proposed by Silverman (1996) leads to the same estimates of the eigenelements. The main difference is that Silverman (1996) assumed the sample paths to be continuously observed and not contaminated with noise. Silverman's estimates, defined as the solution of the following problem:

    v^_{j,rho}' C_k S_n C_k v^_{j,rho} = lambda^_{j,rho}   and   v^_{j,rho}' H_{k,rho}^{-1} v^_{i,rho} = delta_ij,

are also the eigenvectors v^_{j,rho}, normalized with respect to the metric H_{k,rho}^{-1}, of the matrix defined in (13).

3 Convergence in mean integrated square error

This section is divided into three parts. Firstly, we give rates of convergence for the mean function. Then we bound the MISE of the eigenvectors of the covariance operator without smoothing, that is to say by assuming that the smoothing parameter is fixed and equals zero. Finally, the use of perturbation theory (Kato, 1976), for small values of rho, allows us to derive the MISE for the smooth eigenvectors. The asymptotic properties rely upon approximation properties of B-spline functions and on the existence of fourth order moments of the random variables. We also assume that the growth of k and p can be controlled when n tends to infinity.

3.1 Smoothed estimates of the mean function

The expression for the smooth estimate mu^_rho, as defined in (14), is mu^_rho(t) = s^_rho' B_k(t), where s^_rho = H_{k,rho} C_k s^_{0,n} = H_{k,rho} C_k C^_k^{-1} b^_{0,n}, and let us denote its expectation by mu~_rho = E(mu^_rho). The mean square error E||mu - mu^_rho||^2_L2 is written, as usual, as squared bias plus variance:

    E||mu - mu^_rho||^2_L2 = ||mu - mu~_rho||^2_L2 + E||mu~_rho - mu^_rho||^2_L2.

Theorem 3.1  Under assumptions (A.1) and (A.2), if rho satisfies rho = o(k^{-2m}) and k = o(p), then we can bound bias and variance as follows:

    ||mu - mu~_rho||^2_L2 = O(rho^2 k^{4m}) + O(k^{-2m}),
    E||mu~_rho - mu^_rho||^2_L2 = O(1/n).

Consequently, we have:

    E||mu^_rho - mu||^2_L2 = O(1/n) + O(k^{-2m}) + O(rho^2 k^{4m}).

This theorem implies that, if k, rho and p are chosen as follows,

    k = O(n^{1/(2m)}),   rho = O(n^{-3/2}),   k = o(p),

then the MISE is bounded above by:

    E||mu^_rho - mu||^2_L2 = O(1/n).

Furthermore, if we suppose that Z = 0 almost surely, this model is just the usual nonparametric regression model. It is then easy to see that the variance is of order O(k/(np)) (see Cardot & Diack, 1998) and we obtain the optimal rate of convergence for nonparametric estimates. One can also get intermediate rates of convergence by assuming, for example, that Z satisfies mixing conditions (Burman, 1991).

Remark: the parameters rho and k both act as smoothing tools and consequently have to be chosen simultaneously. We think that the cross validation method proposed by Rice & Silverman (1991) and Hart & Wehrly (1993), obtained by leaving one entire curve out, could be usefully adapted to this problem. However, an asymptotic study should be performed to justify such a practice.
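One possible implementation of the leave-one-curve-out idea mentioned in the remark above, in the spirit of Rice & Silverman (1991), is sketched below. The specific score (distance between a held-out least squares curve and its reconstruction in the subspace fitted without it) is an illustrative assumption, not a criterion established in the paper; it reuses smooth_fpca from the earlier sketch.

    # Sketch: leave-one-curve-out score for choosing rho, with q and k fixed.
    import numpy as np

    def cv_score(Y, Bt, C, C_hat, G, rho, q):
        n, p = Y.shape
        S_all = (Y @ Bt / p) @ np.linalg.inv(C_hat).T          # least squares coordinates of every curve
        H = np.linalg.inv(C + rho * G)
        score = 0.0
        for i in range(n):
            train = np.delete(np.arange(n), i)
            _, V, _ = smooth_fpca(Y[train], Bt, C, C_hat, G, rho, q)
            P = V @ V.T @ np.linalg.inv(H)                     # projection fitted without curve i
            s_bar = S_all[train].mean(axis=0)
            u_i = P @ (H @ C @ (S_all[i] - s_bar)) + H @ C @ s_bar
            d = S_all[i] - u_i
            score += d @ C @ d                                 # squared L2 distance for the held-out curve
        return score / n

    # rho_grid = 10.0 ** np.arange(-7, -1)
    # rho_star = min(rho_grid, key=lambda r: cv_score(Y, Bt, C, C_hat, G, r, q=3))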

3.2 Results on the covariance operator without smoothing

Henceforth, we suppose for simplicity that the mean has been subtracted off, and thus the sample paths are centered. Consider the empirical covariance operator, say Gamma^_n, of the estimates y^_i(t) defined in equation (9). Split each random vector Y_i into a random signal Z_i plus noise eps_i:

    Y_i = Z_i + eps_i,   i = 1, ..., n.

Denote by Z~_i the B-splines estimate of the signal deduced from Z_i and by eps~_i the B-splines estimate of the noise obtained from eps_i. Then each nonparametric estimate of the sample paths can be written y^_i = Z~_i + eps~_i, i = 1, ..., n, and the empirical covariance operator Gamma^_n may be expressed as follows:

    Gamma^_n = (1/n) sum_{i=1}^n y^_i (x) y^_i = (1/n) sum_{i=1}^n (Z~_i + eps~_i) (x) (Z~_i + eps~_i)
             = Gamma~_n + Gamma~_{1,n} + Gamma~_{2,n} + Gamma~_{3,n},

where

    Gamma~_n = (1/n) sum_i Z~_i (x) Z~_i,        Gamma~_{1,n} = (1/n) sum_i Z~_i (x) eps~_i,
    Gamma~_{2,n} = (1/n) sum_i eps~_i (x) Z~_i,  Gamma~_{3,n} = (1/n) sum_i eps~_i (x) eps~_i.

The study of the asymptotic convergence is organized as follows: the sampling effect is studied in lemmas 3.2, 3.3, 3.4 and theorem 3.5; the discretization and B-splines approximation effect is measured in lemma 3.6. The first step can be viewed as a study of the asymptotic variance, whereas the second one is a study of the asymptotic behaviour of the bias.

Some of the following results are based on the Hilbert-Schmidt properties of covariance operators of second order Hilbert valued random variables. A Hilbert-Schmidt operator T defined on L2[0,1] is a bounded linear operator satisfying

    sum_{i in N} <T e_i, T e_i> < +inf     (16)

for any orthonormal basis (e_i)_{i in N} of L2[0,1]. This relation defines a norm, say ||T||_H, which satisfies ||T||_H >= ||T||. The covariance operator Gamma of a second order random variable Z which takes values in L2[0,1] is a Hilbert-Schmidt operator.

Furthermore, there exists a function Gamma(s,t) such that

    (Gamma f)(t) = int_0^1 Gamma(s,t) f(s) ds,   for all f in L2[0,1].     (17)

Actually, Gamma(s,t) is the covariance of the underlying continuous time process: Gamma(s,t) = E(Z(s) Z(t)). The Hilbert-Schmidt norm of Gamma can therefore be expressed equivalently as follows:

    ||Gamma||^2_H = int_0^1 int_0^1 (Gamma(s,t))^2 ds dt.     (18)

The following lemma allows us to measure the norm of the difference between the covariance operator of a second order random variable and its empirical estimate. Its demonstration is similar to the real case and is consequently omitted.

Lemma 3.2  If (X_i)_{i in N} is a sequence of i.i.d. random variables which take values in L2[0,1] and satisfy E||X||^4_L2 < inf, then the following holds:

    E||Gamma_n - Gamma||^2_H = (1/n) E||X||^4_L2 - (1/n) ||Gamma||^2_H,     (19)

where Gamma_n = (1/n) sum_{i=1}^n X_i (x) X_i and Gamma = E(X (x) X). Furthermore, if the sequence of i.i.d. second order random variables (Y_i)_{i in N} is independent of (X_i)_{i in N}, then the cross covariance operator and its empirical estimate satisfy:

    E||Delta_n - Delta||^2_H <= (1/n) E||Y||^2_L2 E||X||^2_L2,     (20)

where Delta_n = (1/n) sum_{i=1}^n Y_i (x) X_i and Delta = E(Y (x) X).

Lemma 3.3  If sup_{t in [0,1]} E[Z(t)^4] < inf, we have:

    E||Gamma~_n - E Gamma~_n||^2_H = O(1/n),     (21)

where the O(1) does not depend on k and p.

Remark: using relation (7) and the regularity of the functions phi_j, it is easy to check that the condition sup_t E[Z(t)^4] < +inf of lemma 3.3 is equivalent to the assumption E||Z||^4_L2 < +inf.

Let us define P~_{k,p} = sum_{l,j=1}^r a~_lj e~_lj, where a~_lj = [C^_k^{-1}]_lj, l, j = 1, ..., r. It is the discrete approximation of the projection onto the space S_k, written in the basis generated by the rank one operators

    e~_ij = B_kj (x) B_ki,   i, j = 1, ..., r.     (22)
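As a quick numerical aside (not from the paper), the Hilbert-Schmidt norm in (18) can be approximated by a double Riemann sum of the squared covariance kernel on a grid; the kernel below is an arbitrary illustrative choice.

    # Sketch: approximate ||Gamma||_H^2 from a covariance kernel on a grid.
    import numpy as np

    s = np.linspace(0.0, 1.0, 200)
    S, T = np.meshgrid(s, s, indexing="ij")
    kernel = np.minimum(S, T)              # e.g. the Brownian-motion covariance, for illustration only
    hs_norm_sq = np.mean(kernel ** 2)      # ~ int int Gamma(s,t)^2 ds dt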

Lemma 3.4  We have E(Gamma~_{3,n}) = (sigma^2/p) P~_{k,p}. Moreover, if E(eps^4) < inf, then the following holds:

    E|| Gamma~_{3,n} - (sigma^2/p) P~_{k,p} ||^2_H = O( k^2/(n p^2) ).

Theorem 3.5  If sup_t E[Z(t)^4] < inf, E(eps^4) < inf and k <= p, then we have

    E|| Gamma^_n - ( E Gamma~_n + (sigma^2/p) P~_{k,p} ) ||^2_H = O(1/n).

The following lemma measures the loss induced by the discretization of the curves and shows that the covariance operator of the noise tends to zero.

Lemma 3.6  Under assumption (A.2) and if k = o(p) when k and p tend to infinity, we have

    || Gamma - E Gamma~_n ||_H = O(k^{-m}),      || (sigma^2/p) P~_{k,p} || = O(1/p).

Hence

    || Gamma - ( E Gamma~_n + (sigma^2/p) P~_{k,p} ) || = O(k^{-m}) + O(1/p).

Let us denote by {phi^_{1n}, ..., phi^_{qn}} the eigenvectors of Gamma^_n associated to the decreasing sequence of eigenvalues. Since these functions are determined up to a sign change, we will suppose that <phi^_{jn}, phi_j> >= 0. We can now state the main theorem of this section:

Theorem 3.7  Under assumptions (A.1) and (A.2), if k = o(p) when n, k and p tend to infinity, then we have

    ( E||Gamma^_n - Gamma||^2 )^{1/2} = O(k^{-m}) + O(n^{-1/2}) + O(p^{-1}).     (23)

Moreover, if the eigenvalues of Gamma are distinct, lambda_1 > ... > lambda_q > 0, then the following holds:

    ( E||phi^_{jn} - phi_j||^2_L2 )^{1/2} = O(k^{-m}) + O(n^{-1/2}) + O(p^{-1}),   j = 1, ..., q.     (24)

Remarks:

1. The rate of convergence depends explicitly on the number of design points p. If we choose p >= n and k = O(n^{1/(2m)}), the condition k = o(p) is satisfied and (E||phi^_{jn} - phi_j||^2_L2)^{1/2} = O(n^{-1/2}).

2. We have supposed, for simplicity, that the eigenvalues of Gamma are distinct. This condition could be relaxed by considering the subspaces generated by the eigenvectors of Gamma. Then one should work with projections instead of eigenvectors, as in Dauxois et al. (1982).

3. The assumption q < inf is crucial in lemma 3.6 to bound ||Gamma - E Gamma~_n||. If it were not satisfied, it would still be possible to obtain some convergence results. We could show, for instance, the convergence of E Gamma~_n towards Gamma by using compactness properties of covariance operators (Besse 1991) but, in counterpart, one would lose the rates of convergence. On the other hand, one could add a condition on the decrease of the eigenvalues, such as lambda_j = a r^j, a > 0, r in ]0,1[, and control the growth of q with n, k and p.

3.3 Smooth estimates of the eigenelements

In order to prove the consistency of the smooth estimates, we study the asymptotic behaviour of the eigenvectors of the operator Gamma^_{n,k,rho}, whose matrix representation in the B-splines basis is defined by (13). The demonstration of this result is based on an asymptotic expansion, for small rho, of Gamma^_{n,k,rho}. Let {phi^_{j,rho,n}, j = 1, ..., q} be the first q eigenvectors of Gamma^_{n,k,rho}. Since they are determined only up to a sign change, we suppose that <phi^_{j,rho,n}, phi_j> >= 0. We can now state the main theorem of this section:

Theorem 3.8  Under the assumptions of theorem 3.7 and if rho = o(k^{-2m}), the following holds:

    ( E||phi^_{j,rho,n} - phi_j||^2_L2 )^{1/2} = O(n^{-1/2}) + O(k^{-m}) + O(rho k^{2m}) + O(p^{-1}),   j = 1, ..., q.

The mean square error of the smooth estimates of the eigenfunctions tends to zero at the same rate as in theorem 3.7 if rho is chosen as rho = O(n^{-3/2}).

4 Heuristic investigation of the effect of smoothing

In this section, we carry out some heuristic calculations based on perturbation theory (Kato, 1976), for large n and small rho, to determine whether, and when, smoothing has an advantageous effect on the estimation of any particular eigenfunction. In order to simplify calculations, the parameters k and p are supposed to be fixed. Similar calculations have already been performed by various authors, such as Pezzulli & Silverman (1993) and Silverman (1996). We investigate here, within the subspace of spline functions S_k, the effect of smoothing on the mean square error of estimation, in order to establish a link between the optimal smoothing parameter and the number of observed curves. This asymptotic expansion is justified by the convergence of our estimates demonstrated in theorem 3.8. Nevertheless, we have to make an additional assumption on the distribution of the random vectors Z_i and eps_i:

    (A.3)  for all i, Z_i and eps_i are independent gaussian vectors.

Since the eigenelements of the matrix M_{n,rho} defined in (13) are equal to those obtained by the Silverman (1996) procedure (remark in section 2.3), we only recall the method used to get an asymptotic expansion of the eigenelements. Then we are able to approximate the quadratic error of estimation, within S_k, of a new and independent curve.

4.1 Asymptotic expansions

Using the matrix representation of the covariance operator,

    M_{n,rho} = H_{k,rho} C_k S_n C_k = (I + rho C_k^{-1} G_k)^{-1} S_n C_k,

recall that its eigenvectors v^_{1,rho}, ..., v^_{q,rho} are orthonormal with respect to the metric H_{k,rho}^{-1}. Define M_n = S_n C_k and M = E(M_n). The eigenelements (lambda^_{l,rho}, v^_{l,rho}) of M_{n,rho} satisfy:

    M_n v^_{l,rho} = lambda^_{l,rho} (I + rho C_k^{-1} G_k) v^_{l,rho}.

We can deduce from the central limit theorem that the covariance structure of sqrt(n)(M_n - M) does not vary with n (Silverman, 1996) nor with k and p, since they are supposed to be fixed. Then one can express M_n = M + n^{-1/2} R, where R is a random matrix with mean zero. Assuming n is large enough and rho is small enough, and using the bounds obtained in theorem 3.8, one can write the asymptotic expansion of the eigenelements of M_{n,rho} as:

    lambda^ = lambda + n^{-1/2} lambda_01 + rho lambda_10 + n^{-1/2} rho lambda_11 + n^{-1} lambda_02 + rho^2 lambda_20 + ...     (25)

    v^ = v + n^{-1/2} u_01 + rho u_10 + n^{-1/2} rho u_11 + n^{-1} u_02 + rho^2 u_20 + ...     (26)

with

    M_n v^ = lambda^ (I + rho C_k^{-1} G_k) v^,   v^' H_{k,rho}^{-1} v^ = 1,
    M v = lambda v,                               v' C_k v = 1.

The subscript l is omitted for the sake of simplicity. By identifying the orders in n^{-1/2}, rho, n^{-1/2} rho, ..., and defining A = C_k^{-1} G_k, one obtains the two following sets of equations. The normalization conditions give

    2 u_01' C v = 0,
    2 u_10' C v + v' G v = 0,
    2 u_11' C v + 2 u_01' G v + 2 u_10' C u_01 = 0,
    2 u_02' C v + u_01' C u_01 = 0,
    2 u_20' C v + 2 u_10' G v + u_10' C u_10 = 0,     (27)

and the eigen-equations give

    R v + M u_01 = lambda_01 v + lambda u_01,
    M u_10 = lambda_10 v + lambda u_10 + lambda A v,
    R u_10 + M u_11 = lambda_11 v + lambda_10 u_01 + lambda_01 u_10 + lambda u_11 + lambda_01 A v + lambda A u_01,
    M u_02 + R u_01 = lambda_02 v + lambda_01 u_01 + lambda u_02,
    M u_20 = lambda_20 v + lambda_10 u_10 + lambda u_20 + lambda_10 A v + lambda A u_10.     (28)

Define P_j = v_j v_j' C, the C-orthogonal projection onto the subspace generated by the vector v_j, and

    T_l = sum_{j != l} P_j / (lambda_j - lambda_l).

These operators commute, since they have the same eigenvectors, and are symmetric with respect to the metric C. We then get from the above:

    lambda_01 = tr(P_l R) = <v, Rv>_C,
    lambda_10 = -lambda tr(P_l A) = -lambda d_l,
    u_01 = T_l R v_l,
    u_10 = -lambda T_l A v_l - (1/2) d_l v_l,
    u_02 = T_l R u_01 - lambda_01 T_l u_01 - (1/2)(u_01' C u_01) v_l,
    u_11 = T_l R u_10 - lambda T_l A u_01 - lambda_01 T_l u_10 - lambda_01 T_l A v_l - lambda_10 T_l u_01 - (u_10' C u_01 + u_01' G v_l) v_l,
    u_20 = -lambda T_l A u_10 - lambda_10 T_l A v_l - lambda_10 T_l u_10 - ((1/2) u_10' C u_10 + u_10' G v_l) v_l,     (29)

where d_l = v_l' G v_l is the semi-norm of the vector v_l, which measures the smoothness of the corresponding function belonging to S_k.

4.2 Is smoothing advantageous?

In this section we expand the MISE, in terms of n and rho, for the estimates of the eigenvectors and of a new and independent sampled curve.

4.2.1 Approximation of the eigenfunctions

Consider the following risk function,

    MISE(rho) = E||v^_{l,rho} - v_l||^2_C,

as a criterion for measuring the accuracy of the estimated eigenvectors. It is sufficient to compare MISE(rho) with MISE(0) to determine when smoothing has an advantageous effect. Neglecting higher order terms, one obtains the following approximation:

    E||v^_{l,rho} - v_l||^2_C ~ n^{-1} E||u_01||^2_C + 2 rho n^{-1/2} E(u_01' C u_10)
        + 2 rho n^{-1} { E<u_01, u_11>_C + <u_10, E u_02>_C } + rho^2 ||u_10||^2_C,     (30)

where the term E(u_01' C u_10) vanishes since E(R) = 0. Using E(R) = 0, T_l v_l = 0 and the normality assumption (A.3), we obtain E(R A R) = M A M + tr(A M) M for any symmetric matrix A (see Pezzulli & Silverman 1993). Hence we can obtain, for instance:

    E||u_01||^2_C = E<T_l R v, T_l R v>_C = E<v, R T_l^2 R v>_C = <v, M T_l^2 M v>_C + tr(T_l^2 M) <v, M v>_C
                 = sum_{j != l} lambda_l lambda_j / (lambda_l - lambda_j)^2.     (31)

Pursuing these types of manipulations, one obtains in the same way expressions for E<u_01, u_11>_C and <u_10, E u_02>_C, which involve the eigenvalues lambda_j, the roughness d_l and the cross terms d_lj = v_l' G v_j, as well as

    ||u_10||^2_C = d_l^2/4 + lambda_l^2 sum_{j != l} d_lj^2 / (lambda_l - lambda_j)^2.     (32)

The asymptotic approximation of the MISE,

    MISE(rho) - MISE(0) ~ 2 rho n^{-1} ( E<u_01, u_11>_C + <u_10, E u_02>_C ) + rho^2 ||u_10||^2_C,

can be viewed as a quadratic polynomial in rho. Hence it is helpful to use a smoothing parameter, for rho small, when

    E<u_01, u_11>_C + <u_10, E u_02>_C < 0.     (33)

At the optimum value

    rho* = - ( E<u_01, u_11>_C + <u_10, E u_02>_C ) / ( n ||u_10||^2_C ),

the gain is

    MISE(rho*) - MISE(0) = - ( E<u_01, u_11>_C + <u_10, E u_02>_C )^2 / ( n^2 ||u_10||^2_C ).

A sufficient condition for smoothing to be advantageous is that the ordered eigenvectors are more and more "rough" (lambda_j < lambda_l implies d_j >= d_l). For real data sets this condition is often satisfied. It is to be noticed that the condition obtained here is less restrictive than the one of Pezzulli & Silverman (1993) (see Silverman 1996 for further explanations).

4.2.2 Approximation of a new curve

One can also perform the same type of expansion of the MISE for a new and independent centered sample path. That is to say, suppose we have constructed our estimates from a sample of n curves and we then observe a new trajectory, independent of the sample. Write its coordinates s^ in the B-splines basis and split them into two parts, signal plus noise, s^ = s + eps. Separating its covariance into signal and noise components, we take into account that (lemma 6.2)

    E(s^ s^' C_k) = Gamma_z + (sigma^2/p) C^_k^{-1} C_k,     (34)

where Gamma_z denotes the signal part, and that sigma^2 (C^_k^{-1} C_k - I) = sigma^2 C^_k^{-1} (C_k - C^_k) = O(k/p) is negligible. Hence the eigenvalues satisfy

    lambda_l = lambda_l^(z) + sigma^2/p + o(sigma^2/p),   l = 1, ..., q,     (35)

where the lambda_l^(z) are the eigenvalues of the covariance operator of the signal. Writing

    s^_{l,rho,n} = v^_{l,rho} v^_{l,rho}' H_{k,rho}^{-1} H_{k,rho} C_k s^ = <v^_{l,rho}, s^>_C v^_{l,rho}

gives the approximation of the coordinates of this new curve in the space spanned by the l-th eigenvector; v^_{l,rho} v^_{l,rho}' H_{k,rho}^{-1} is the H_{k,rho}^{-1}-orthogonal projection onto the subspace generated by v^_{l,rho}. The following loss function,

    L(rho) = E|| s^_{l,rho,n} - P_l s ||^2_C,     (36)

allows us to evaluate the approximation error when reconstructing the curve in the subspace generated by v^_{l,rho}. To examine the effect of smoothing, one must compare L(rho) with L(0). For this, define v^_l the estimate of v_l without smoothing and s^_{l,n} the corresponding estimate of the coordinates:

    s^_{l,n} = v^_l v^_l' C_k s^ = P_l s^ + n^{-1/2} (T_l R P_l + P_l R T_l) s^ + ...

Writing s^_{l,rho,n} = v^_{l,rho} v^_{l,rho}' C_k s^ and expanding it as follows,

    s^_{l,rho,n} = s^_{l,n} + rho (u_10 v' + v u_10') C s^ + rho n^{-1/2} (u_11 v' + v u_11' + u_10 u_01' + u_01 u_10') C s^ + ...,     (37)

gives the following approximation:

    E L(rho) ~ L(0) + rho^2 E|| (u_10 v' + v u_10') C s^ ||^2_C
        + 2 rho E< s^_{l,n} - P_l s, (u_10 v' + v u_10') C s^ + n^{-1/2}(u_11 v' + v u_11' + u_10 u_01' + u_01 u_10') C s^ >_C.     (38)

The real term of interest in this equation is the last one, say W, since it is advantageous to smooth if, and only if, it is negative; in that case L(rho) is a decreasing function near zero. Using the fact that

    s^_{l,n} - P_l s = P_l eps + n^{-1/2} (T_l R P_l + P_l R T_l)(s + eps) + ...,

and that s^ and v^_l are independent, we get:

    W = 2 rho E< P_l eps, (u_10 v' + v u_10') C eps >_C
      + 2 rho n^{-1/2} E< P_l eps, (u_11 v' + v u_11' + u_10 u_01' + u_01 u_10') C s^ >_C
      + 2 rho n^{-1/2} E< (T_l R P_l + P_l R T_l) s^, (u_10 v' + v u_10') C s^ >_C.     (39)

Since E(T_l R P_l) = 0 and E(u_01) = E(u_11) = 0, the terms in n^{-1/2} are zero and

    W ~ - 2 rho (sigma^2/p) d_l.     (40)

This expression is always negative and consequently it is always better to smooth to estimate a new curve. Nevertheless, we must keep in mind that expression (40) is based on an approximation of the loss function L(rho). One can notice as well that W decreases when the number of design points p increases.

One can ask what is the ideal amount of smoothing to estimate this new curve optimally. For this we need to calculate E||(u_10 v' + v u_10') C s^||^2_C. Consider for example

    E<u_10 v' C s^, u_10 v' C s^>_C = E< (lambda T_l A + (d_l/2) I) P_l s^, (lambda T_l A + (d_l/2) I) P_l s^ >_C
        = tr{ P_l ( lambda^2 A T_l^2 A + lambda (d_l/2)(T_l A + A T_l) + (d_l^2/4) I ) P_l' E(s^ s^' C) }.

Pursuing this kind of manipulation and using (34)-(35) finally yields an explicit expression for E||(u_10 v' + v u_10') C s^||^2_C in terms of d_l, the cross terms d_lj, the signal eigenvalues lambda_j^(z) and the noise level sigma^2/p.

Hence, the ideal amount of smoothing may be expressed as follows:

    rho* ~ (sigma^2/p) d_l / ( a + b sigma^2/p ),   (a > 0, b > 0).     (41)

The more the data are rough, the larger the optimal smoothing parameter has to be; and the more the noise is dispersed, the more the data have to be smoothed.

5 Simulations

In the previous sections, the links between the optimal smoothing parameter and the number of curves n were studied. In this section we put the stress, by means of simulations, on the relation between the number of knots, the number of design points and the optimal smoothing parameter in the mean square error sense. Artificial data sets were generated as follows:

    Z_i(t) = C_1i sin(0.5 t) + C_2i sin(1.5 t) + C_3i sin(3.5 t),   i = 1, ..., n,  t in [0,1],     (42)

where {C_1i, C_2i, C_3i, i = 1, ..., n} are 3n pseudo random numbers independently and normally distributed with zero mean and variances sigma_1^2 = 1, sigma_2^2 = 0.3 and sigma_3^2 = 0.09. Clearly, the subspace E_q which contains the random process Z is of dimension three and is generated by the set of functions {sin(0.5 t), sin(1.5 t), sin(3.5 t)}. Afterwards, these curves were sampled uniformly and corrupted by adding a white noise eps, normally distributed with zero mean and unit variance, at the design points. We finally observe the n random vectors belonging to R^p:

    Y_i = Z_i + eps_i,   i = 1, ..., n,   where   Y_ij = Z_i((j-1)/(p-1)) + eps_ij,   j = 1, ..., p.

We performed nine simulations by considering three different values for n and three different values for p. Estimates were constructed from the observations Y_i using the method described in section 2. We have chosen m = 2, that is to say the roughness penalty is the L2[0,1] norm of the second derivative, and d = 3, which means that the curves were approximated by cubic B-splines. Let us denote by Y^_i(rho, q, k) the rank constrained estimation of the i-th curve with smoothing parameter rho in a q-dimensional subspace of S_k. For k and p fixed, the optimal parameters q* and rho* were chosen by minimizing the true quadratic risk function:

    R_1(rho, q) = (1/n) sum_{i=1}^n E|| Y^_i(rho, q, k) - Z_i ||^2.     (43)
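Data in the spirit of (42) can be generated as follows. The sine frequencies and the component variances are only partially legible in the source, so the values used below (0.5, 1.5, 3.5 and 1, 0.3, 0.09) are assumptions for illustration, not confirmed values.

    # Sketch: simulate n noisy curves observed at p uniform design points.
    import numpy as np

    rng = np.random.default_rng(0)

    def simulate(n, p, sigma_noise=1.0):
        t = np.linspace(0.0, 1.0, p)                        # design points (j-1)/(p-1)
        freqs = np.array([0.5, 1.5, 3.5])                   # assumed frequencies
        sd = np.sqrt(np.array([1.0, 0.3, 0.09]))            # assumed component standard deviations
        Cmat = rng.normal(size=(n, 3)) * sd                 # scores C_1i, C_2i, C_3i
        Z = Cmat @ np.sin(np.outer(freqs, t))               # signal curves, shape (n, p)
        Y = Z + sigma_noise * rng.normal(size=(n, p))       # add white noise at the design points
        return t, Z, Y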

Table 1: Quadratic risk, first (p, n) configuration considered.

    k     q*    rho*       R_1(q*, rho*)    R_1(q_0, 0)
    3     3     5.e-
    6     3      .4e-
          3      .e-6      0.               0.3

Table 2: Quadratic risk, second (p, n) configuration considered.

    k     q*    rho*       R_1(q*, rho*)    R_1(q_0, 0)
          3       e-
    6     3     5.e-
          3      .e-6      0.               0.4

In order to be able to compare the efficiency of these smooth estimates with the B-splines estimates obtained without using a smoothing parameter, we have chosen another optimal dimension, q_0, by minimizing R_1(0, q) with respect to q. Then we have compared R_1(rho*, q*) with R_1(0, q_0). The analysis of the results of these simulations leads us to make the following remarks, which can give ideas for future work. It seems that:

- the number of knots has no real effect, in these simulations, on the accuracy of the smooth estimates. A too large k is compensated by the roughness penalty controlled by the smoothing parameter value (see Table 1 and Table 3).

- for small numbers of design points, the use of a smoothing parameter may significantly improve the efficiency of these estimates (see Table 1, Table 2 and Table 3). On the other hand, if p is large, smoothing does not seem to be justified since it does not improve the estimation significantly. These remarks agree with the results of the previous section, in particular equation (41).

- smoothing gives more flexibility to the estimates and allows one to compensate a bad dimension choice (see figure 1). Actually, the true dimension is 3, but an estimate in a subspace of dimension 4 can nearly attain the optimal risk provided that rho is well chosen.

Table 3: Quadratic risk, third (p, n) configuration considered.

    k     q*    rho*       R_1(q*, rho*)    R_1(q_0, 0)
          3       e-
    6     3      .4e-
          3      .e-6      0.               0.4
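The selection of (q*, rho*) behind such tables can be mimicked numerically with the earlier sketches (simulate, basis, smooth_fpca); the grids below are illustrative assumptions.

    # Sketch: evaluate the quadratic risk (43) on a grid of (q, rho) for simulated data.
    import numpy as np

    def risk(Y, Z, Bt, C, C_hat, G, rho, q):
        """(1/n) sum_i ||Y^_i(rho, q, k) - Z_i||^2, evaluated on the design points."""
        U, _, _ = smooth_fpca(Y, Bt, C, C_hat, G, rho, q)
        fitted = U @ Bt.T                                   # smoothed curves at the design points
        return np.mean(np.sum((fitted - Z) ** 2, axis=1) / Bt.shape[0])

    # t, Z, Y = simulate(n=100, p=30)
    # Bt = basis(t); C_hat = (Bt.T @ Bt) / len(t)
    # risks = {(q, rho): risk(Y, Z, Bt, C, C_hat, G, rho, q)
    #          for q in (2, 3, 4) for rho in 10.0 ** np.arange(-7, -1)}
    # q_star, rho_star = min(risks, key=risks.get)          # compare risks[(q_star, rho_star)] with rho = 0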

Figure 1: Risk function R_1 with respect to rho, for dimensions q = 3 and q = 4.

- the optimal dimension q* does not seem to be directly linked with the optimal smoothing parameter. Actually, we notice that q* is always equal to q_0. We think that it may be possible to choose the dimension and the smoothing parameter separately, by using two different criteria. For instance, one could choose q in a first step, and then rho.

6 Proofs

6.1 Technical lemmas on B-splines

The following lemmas on B-splines are stated here since they will be needed later on to prove the convergence results.

Lemma 6.1  B-splines are nonnegative functions which satisfy the following equation:

    sum_{j=1}^r B_kj(t) = 1,   for all t in [0,1].

Furthermore, the support of each B-spline B_kj is contained in an interval formed by consecutive knots, whose length is of order 1/k; the interior knots are equispaced on [0,1] and the boundary knots are repeated at 0 and 1.
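A quick numerical check of lemma 6.1, reusing the basis object from the first sketch (not part of the paper):

    # Sketch: partition of unity and nonnegativity of the B-spline basis.
    import numpy as np

    tt = np.linspace(0.0, 1.0, 101)
    vals = basis(tt)                                  # (101, r) matrix of B-spline values
    assert np.allclose(vals.sum(axis=1), 1.0)         # sum_j B_kj(t) = 1 on [0, 1]
    assert np.all(vals >= -1e-12)                     # B-splines are nonnegative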

Lemma 6.2  Under assumption (A.1) and if k < p, we have:

    ||C_k|| = O(k^{-1})   and   ||C_k^{-1}|| = O(k),
    ||C^_k|| = O(k^{-1})  and   ||C^_k^{-1}|| = O(k),
    ||C_k - C^_k|| = O(p^{-1}),
    ||G_k|| = O(k^{2m-1}),

where ||.|| is the usual matrix norm.

Proof of lemma 6.2  The first three points were demonstrated by Agarwal & Studden (1980) and Burman (1991). They rely on the compact support and regularity properties of spline functions. On the other hand, each B_ki has m continuous derivatives for m < d and satisfies (Schumaker, 1981)

    sup_{t in [0,1]} |B_ki^(m)(t)| <= c k^m,

where the constant c does not depend on k and i. Furthermore, each B-spline has a support length of order k^{-1} (lemma 6.1). Thus the elements of the matrix G_k satisfy:

    |[G_k]_lj| <= int_0^1 |B_kj^(m)(t) B_kl^(m)(t)| dt = 0 if |l - j| > d,   and   O(k^{2m-1}) otherwise.

Therefore, bounding the norm of the symmetric matrix G_k by its maximum absolute row sum (Chatelin, 1983), we obtain:

    ||G_k|| <= sup_{j=1,...,r} sum_{l=1}^r |[G_k]_lj| = O(k^{2m-1}).

We also need to bound the distance between the two matrices H_{k,rho}^{1/2} and C_k^{-1/2} for "small" smoothing parameters rho.

Lemma 6.3  For small rho and for all k we have:

    || H_{k,rho}^{1/2} - C_k^{-1/2} || = O( rho k^{(4m+1)/2} ).

Proof of lemma 6.3  The proof of this lemma is based on perturbation theory of linear operators (Kato 1976, Chatelin 1983). Since the matrix C_k is positive, its resolvent r(xi) = (C_k + xi I)^{-1} exists for xi >= 0 and we have (Kato, 1976):

    C_k^{-1/2} = (1/pi) int_0^{+inf} xi^{-1/2} r(xi) dxi.     (44)

Let us consider r~(xi) = (H_{k,rho}^{-1} + xi I)^{-1}. The second resolvent equation gives us:

    r~(xi) - r(xi) = - r(xi) (H_{k,rho}^{-1} - C_k) r~(xi) = - rho r(xi) G_k r~(xi),     (45)

and thus, using (44) and (45), we obtain:

    H_{k,rho}^{1/2} - C_k^{-1/2} = - (rho/pi) int_0^{+inf} xi^{-1/2} r(xi) G_k r~(xi) dxi.     (46)

Let beta_k be the smallest eigenvalue of the matrices C_k and H_{k,rho}^{-1}. We have ||r(xi)|| <= (beta_k + xi)^{-1} and, from lemma 6.2, 1/beta_k = O(k). Hence, using (46), we can bound:

    || H_{k,rho}^{1/2} - C_k^{-1/2} || <= (rho/pi) int_0^{+inf} xi^{-1/2} (beta_k + xi)^{-2} dxi ||G_k|| = O( rho k^{3/2} k^{2m-1} ),     (47)

since ||G_k|| = O(k^{2m-1}) and (1/pi) int_0^{+inf} xi^{-1/2} (beta_k + xi)^{-2} dxi = (1/2) beta_k^{-3/2} = O(k^{3/2}).

6.2 Asymptotic behaviour of the MISE for the mean function

Proof of theorem 3.1  Beginning with the study of the asymptotic behaviour of the bias, denote by mu^_{0,n} the least squares estimate of the mean function in the B-splines basis, which can be written

    mu^_{0,n}(t) = b^_{0,n}' C^_k^{-1} B_k(t),     (48)

where b^_{0,n} = (1/n) sum_{i=1}^n b^_i. Considering its expectation mu~_0(t),

    mu~_0(t) = e~_b' C^_k^{-1} B_k(t),     (49)

where e~_b = (1/p) sum_{j=1}^p mu(t_j) B_k(t_j), we have for the bias:

    ||mu - mu~_rho||^2_L2 <= 2 ||mu - mu~_0||^2_L2 + 2 ||mu~_0 - mu~_rho||^2_L2.

Under assumption (A.1) and the condition k = o(p), one obtains from Agarwal & Studden (1980):

    ||mu~_0 - mu||^2_L2 = O( k^{-2m} ||D^m mu||^2_L2 ).     (50)

Furthermore, if rho = o(k^{-2m}) it is possible to expand the "hat matrix" H_{k,rho}:

    H_{k,rho} C_k = (I + rho C_k^{-1} G_k)^{-1} = I - rho C_k^{-1} G_k + o(rho k^{2m}),     (51)

because we have ||C_k^{-1} G_k|| = O(k^{2m}) from lemma 6.2. Using this expansion we get

    ||mu~_0 - mu~_rho||^2_L2 = int_0^1 [ (s~_rho - s~_0)' B_k(t) ]^2 dt = (s~_rho - s~_0)' C_k (s~_rho - s~_0)
        = s~_0' (H_{k,rho} C_k - I)' C_k (H_{k,rho} C_k - I) s~_0
        = rho^2 s~_0' G_k C_k^{-1} G_k s~_0 + o(rho^2 k^{4m}).     (52)

It is easy to check that ||s~_0|| = O(1) and, using lemma 6.2 again, we have ||G_k C_k^{-1} G_k|| = O(k^{4m-1}). Therefore the bias is bounded above as follows:

    ||mu - mu~_rho||^2_L2 = O(k^{-2m}) + O(rho^2 k^{4m}).

Write the variance term as follows:

    E||mu~_rho - mu^_rho||^2_L2 = E int_0^1 [ (s^_rho - s~_rho)' B_k(t) ]^2 dt = E[ (s^_rho - s~_rho)' C_k (s^_rho - s~_rho) ]
        = E[ (b^_{0,n} - e~_b)' C^_k^{-1} C_k H_{k,rho} C_k H_{k,rho} C_k C^_k^{-1} (b^_{0,n} - e~_b) ].     (53)

The matrix C_k is positive definite, the matrix G_k is nonnegative, and the smallest eigenvalue of C_k is of order k^{-1} (lemma 6.2), whereas G_k has m null eigenvalues since the derivative of order m of polynomials of degree less than m is null. Therefore the smallest eigenvalue of H_{k,rho}^{-1} = C_k + rho G_k is of order k^{-1} and ||H_{k,rho}|| = O(k). Thus

    ||H_{k,rho} C_k|| <= ||H_{k,rho}|| ||C_k|| = O(k) O(k^{-1}) = O(1),     (54)

so that the norm of the matrix appearing in the quadratic form (53) is O(k), and it remains to bound E||b^_{0,n} - e~_b||^2. Let us consider Z-bar_n = (1/n) sum_i Z_i and eps-bar_n = (1/n) sum_i eps_i, and write

    b^_{0,n} - e~_b = (1/p) sum_{j=1}^p [ Z-bar_n(t_j) + eps-bar_n(t_j) ] B_k(t_j).

By assumption, we have E(Z-bar_n) = E(eps-bar_n) = 0, E(Z-bar_n Z-bar_n') = (1/n) Gamma_p, E(eps-bar_n eps-bar_n') = (sigma^2/n) I, E(Z-bar_n eps-bar_n') = 0, and E[(Z-bar_n + eps-bar_n)(Z-bar_n + eps-bar_n)'] = (1/n)(Gamma_p + sigma^2 I), where Gamma_p denotes the covariance matrix of (Z(t_1), ..., Z(t_p)). From equation (7) and assumption (A.2) on the regularity of Z, one can check that sup_{t in [0,1]} E(Z(t)^2) < +inf. Let us define Sigma = (1/n)(Gamma_p + sigma^2 I) and D = max_{ij} |Sigma_ij|. Then

we have D = O(1/n). Furthermore, let us write

    E||b^_{0,n} - e~_b||^2 = (1/p^2) sum_{l,j=1}^p sum_{i=1}^{k+d} Sigma_lj B_ki(t_l) B_ki(t_j)
        <= D (1/p^2) sum_{l,j} sum_i B_ki(t_l) B_ki(t_j)
        = D (1/p^2) sum_{i=1}^{k+d} ( sum_{j=1}^p B_ki(t_j) ) ( sum_{l=1}^p B_ki(t_l) ).

From lemma 6.1, we know that each B-spline has a support length of order k^{-1}; therefore the cardinal of the set {l : B_ki(t_l) != 0} is of order p/k, and (1/p) sum_j B_ki(t_j) = O(k^{-1}). Hence

    (1/p^2) sum_{l,j} sum_{i=1}^{k+d} B_ki(t_l) B_ki(t_j) = O(k^{-1}),     (55)

and we obtain:

    E||b^_{0,n} - e~_b||^2 = O(k^{-1} n^{-1}).     (56)

Finally, we get the desired result by combining (56), (54) and (53).

6.3 Results on the covariance operator and its eigenvectors

Proof of lemma 3.3  Denote by Z~ the B-splines estimate of the signal deduced from Z. Using lemma 3.2, we get the following bound:

    E||Gamma~_n - E Gamma~_n||^2_H <= (1/n) E||Z~||^4_L2.

We only need to study E||Z~||^4_L2; let us define b~ = (1/p) sum_{j=1}^p Z(t_j) B_k(t_j) and write Z~ in the B-splines basis to get

    ||Z~||^4_L2 = ( int_0^1 [ b~' C^_k^{-1} B_k(t) ]^2 dt )^2 = ( b~' C^_k^{-1} C_k C^_k^{-1} b~ )^2 <= O(k^2) { b~' b~ }^2,

using lemma 6.2. Let us consider D_1 = sup_{t in [0,1]} E[Z(t)^4] < inf. It yields

    E{ b~' b~ }^2 = (1/p^4) sum_{a,b,c,d=1,...,p} E( Z(t_a) Z(t_b) Z(t_c) Z(t_d) ) B_k(t_a)' B_k(t_b) B_k(t_c)' B_k(t_d)
        <= D_1 (1/p^4) sum_{a,b,c,d} B_k(t_a)' B_k(t_b) B_k(t_c)' B_k(t_d)
        = D_1 ( (1/p^2) sum_{a,b} B_k(t_a)' B_k(t_b) )^2.

Then, using (55), we obtain:

    (1/p^2) sum_{a,b=1,...,p} B_k(t_a)' B_k(t_b) = (1/p^2) sum_{i=1}^{k+d} sum_a sum_b B_ki(t_a) B_ki(t_b) = O(k^{-1}),

so that E{b~' b~}^2 = O(k^{-2}). Finally, we deduce from the above that E||Z~||^4_L2 = O(1), and the desired result E||Gamma~_n - E Gamma~_n||^2_H = O(n^{-1}).

Proof of lemma 3.4  Let us define, for i = 1, ..., n, b~_{i,eps} = (1/p) sum_{j=1}^p [eps_i]_j B_k(t_j) and s~_{i,eps} = C^_k^{-1} b~_{i,eps}. Let us write

    E( b~_{i,eps} b~_{i,eps}' ) = (1/p^2) sum_{j,l=1}^p E( [eps_i]_j [eps_i]_l ) B_k(t_j) B_k(t_l)' = (sigma^2/p^2) sum_j B_k(t_j) B_k(t_j)' = (sigma^2/p) C^_k.

Hence E( s~_{i,eps} s~_{i,eps}' ) = (sigma^2/p) C^_k^{-1}, and the first part of the lemma is obtained readily by expressing Gamma~_{3,n} in the basis (e~_ij)_{i,j=1,...,r}.

Let us now consider d_4 = E(eps^4) and define b_eps = (1/p) sum_j [eps]_j B_k(t_j). Writing the B-splines approximation of the white noise, say eps~, deduced from the vector eps, we get E||eps~||^4 <= O(k^2) E{b_eps' b_eps}^2, and

    E{ b_eps' b_eps }^2 = (1/p^4) sum_{a,b,c,d} E( eps_a eps_b eps_c eps_d ) B_k(t_a)' B_k(t_b) B_k(t_c)' B_k(t_d)
        = (d_4/p^4) sum_a ( B_k(t_a)' B_k(t_a) )^2
          + (sigma^4/p^4) sum_a sum_{b != a} B_k(t_a)' B_k(t_a) B_k(t_b)' B_k(t_b)
          + 2 (sigma^4/p^4) sum_a sum_{b != a} ( B_k(t_a)' B_k(t_b) )^2.

The first term is negligible since B_k(t_a)' B_k(t_a) <= 1. Moreover, using the equality (1/p) sum_a B_k(t_a)' B_k(t_a) = tr(C^_k) and ||C^_k|| = O(k^{-1}), we finally get E{b_eps' b_eps}^2 = O(p^{-2}) and the result.

Proof of theorem 3.5  Since for all i = 1, ..., n we have E(eps~_i) = 0 and eps~_i is independent of Z~_i, it is easy to check that E(Gamma~_{1,n}) = 0 and E(Gamma~_{2,n}) = 0. Hence we have

    E(Gamma^_n) = E(Gamma~_n) + (sigma^2/p) P~_{k,p}.

Furthermore, since Gamma~_{2,n} is the adjoint of Gamma~_{1,n}, these operators have the same properties. Using (20) and the independence between Z and eps, one can bound Gamma~_{1,n} as follows:

    E||Gamma~_{1,n}||^2_H <= (1/n) E||Z~||^2_L2 E||eps~||^2_L2 = O( k/(np) ),     (57)

because E||eps~||^2_L2 = O(k/p). Let us now write

    Gamma^_n - ( E Gamma~_n + (sigma^2/p) P~_{k,p} ) = (Gamma~_n - E Gamma~_n) + Gamma~_{1,n} + Gamma~_{2,n} + ( Gamma~_{3,n} - (sigma^2/p) P~_{k,p} ),     (58)

and use lemmas 3.3, 3.4 and equation (57) in order to get

    ( E|| Gamma^_n - ( E Gamma~_n + (sigma^2/p) P~_{k,p} ) ||^2_H )^{1/2} = O(n^{-1/2}) + O( (k/(np))^{1/2} ) + O( k/(p n^{1/2}) ).

The result is obtained readily since, for k <= p, the last two terms are bounded by the first one.

Proof of lemma 3.6  Let us express E Gamma~_n in the basis (e~_lj) defined in (22):

    E Gamma~_n = sum_{l,j=1}^r [ C^_k^{-1} A~ C^_k^{-1} ]_lj e~_lj,   where   A~ = (1/p^2) sum_{j,l=1}^p E( Z(t_j) Z(t_l) ) B_k(t_j) B_k(t_l)'.

Furthermore, we get from the expansion (7) of the random function Z that its covariance may be expressed as follows:

    E( Z(t_j) Z(t_l) ) = sum_{i=1}^q lambda_i phi_i(t_j) phi_i(t_l).

Let us denote by phi~_{p,i} the B-splines approximation of the function phi_i deduced from its observation at the design points (t_j)_j. One can see easily that E Gamma~_n may be written equivalently as follows:

    E Gamma~_n = sum_{i=1}^q lambda_i phi~_{p,i} (x) phi~_{p,i}.     (59)

Let us notice that the functions phi~_{p,i}, which are approximations of the eigenfunctions of Gamma, are not a priori the eigenfunctions of the operator E Gamma~_n. By assumption, the functions phi_i satisfy the regularity condition (A.2), and so we get (Agarwal & Studden, 1980):

    || phi_i - phi~_{p,i} ||^2_L2 = O( k^{-2m} ||D^m phi_i||^2_L2 ).     (60)

Thus,

    || Gamma - E Gamma~_n ||_H = || sum_{i=1}^q lambda_i ( phi_i (x) phi_i - phi~_{p,i} (x) phi~_{p,i} ) ||_H
        <= sum_{i=1}^q lambda_i || phi_i (x) (phi_i - phi~_{p,i}) ||_H + sum_{i=1}^q lambda_i || (phi_i - phi~_{p,i}) (x) phi~_{p,i} ||_H
        <= sum_{i=1}^q lambda_i || phi_i - phi~_{p,i} ||_L2 ( ||phi_i||_L2 + ||phi~_{p,i}||_L2 ).     (61)

The first part of the lemma is obtained by applying (60) to the q functions phi~_{p,i}.

By definition of the norm, we have ||P~_{k,p}|| = sup{ |<P~_{k,p} f, f>| : ||f||_L2 = 1 }. Let us define the vector f_k whose elements are {<f, B_kj>, j = 1, ..., r}. The term <P~_{k,p} f, f> may be expressed in matrix form as follows:

    <P~_{k,p} f, f> = f_k' C^_k^{-1} f_k.

Since ||f||_L2 = 1, we have f_k' f_k = O(k^{-1}) and, using lemma 6.2, ||C^_k^{-1}|| = O(k). Hence ||P~_{k,p}|| = O(1) and ||(sigma^2/p) P~_{k,p}|| = O(1/p).

Proof of theorem 3.7  The bound (23) is obtained readily by applying theorem 3.5 and lemma 3.6. We get the convergence of phi^_{jn} towards phi_j by using lemma 3.1 of Bosq (1991). Indeed, it states that, if lambda_j > 0,

    || phi^_{jn} - phi_j ||_L2 <= (2 sqrt(2) / delta_j) || Gamma^_n - Gamma ||,

where delta_j = min( lambda_{j-1} - lambda_j, lambda_j - lambda_{j+1} ), and this allows us to obtain the second result readily.

Before embarking on the proof of theorem 3.8, we have to establish some notations. Let us denote by s^_{j,rho,n} the coordinates in the B-splines basis of the vectors phi^_{j,rho,n}. These are obtained by applying H_{k,rho}^{1/2} to the eigenvectors, say {v_{j,rho,n}, j = 1, ..., q}, of the matrix H_{k,rho}^{1/2} C_k S_n C_k H_{k,rho}^{1/2} defined in (15). Let us denote by {v_{j,rho}, j = 1, ..., q} (respectively {v_j, j = 1, ..., q}) the first q eigenvectors of H_{k,rho}^{1/2} C_k E(S_n) C_k H_{k,rho}^{1/2} (respectively of the matrix C_k^{1/2} E(S_n) C_k^{1/2}); the coordinates of the corresponding eigenfunctions of E Gamma~_n are obtained by applying C_k^{-1/2} to the v_j. Finally, let us define the matrix S = E(S_n).

Proof of theorem 3.8  This proof is split into two parts. First we prove the convergence of v_{j,rho,n} towards v_j. Then we show that the eigenfunctions phi^_{j,rho,n} converge towards phi_j.

First, let us notice that

    E|| H_{k,rho}^{1/2} C_k S_n C_k H_{k,rho}^{1/2} - H_{k,rho}^{1/2} C_k S C_k H_{k,rho}^{1/2} ||^2 = O(1/n),

by applying to the matrix H_{k,rho}^{1/2} C_k S_n C_k H_{k,rho}^{1/2} the same arguments as in lemma 3.2. Let us now compare the deterministic matrices H_{k,rho}^{1/2} C_k S C_k H_{k,rho}^{1/2} and C_k^{1/2} S C_k^{1/2}. For this, let us write C_k^{1/2} S C_k^{1/2} = C_k^{-1/2} (C_k S C_k) C_k^{-1/2} and define

    I_rho = H_{k,rho}^{1/2} C_k S C_k H_{k,rho}^{1/2} - C_k^{1/2} S C_k^{1/2}     (62)
          = ( H_{k,rho}^{1/2} - C_k^{-1/2} ) C_k S C_k H_{k,rho}^{1/2} + C_k^{-1/2} C_k S C_k ( H_{k,rho}^{1/2} - C_k^{-1/2} ).     (63)

We have, from lemma 6.3, || H_{k,rho}^{1/2} - C_k^{-1/2} || = O( rho k^{(4m+1)/2} ). Furthermore, it is easy to check that ||H_{k,rho}^{1/2}|| = O(k^{1/2}) and ||C_k^{-1/2}|| = O(k^{1/2}). It then remains to bound ||C_k S C_k|| to get a bound for I_rho. In order to do this, let us write ||C_k S C_k|| <= ||C_k|| ||C_k^{1/2} S C_k^{1/2}|| = O(k^{-1}), since E(s_i' C_k s_i) = O(1) by arguments similar to those used in the proof of theorem 3.1. Thus it yields

    || H_{k,rho}^{1/2} C_k S C_k H_{k,rho}^{1/2} - C_k^{1/2} S C_k^{1/2} || = O( rho k^{2m} ).     (64)

Furthermore, the eigenvalues of C_k^{1/2} S C_k^{1/2} are also the eigenvalues of E Gamma~_n and are thus distinct for small 1/k and large p (lemma 3.6 and theorem 3.7). Hence the eigenvectors of H_{k,rho}^{1/2} C_k S_n C_k H_{k,rho}^{1/2} satisfy

    ( E|| v_{j,rho,n} - v_j ||^2 )^{1/2} = O(n^{-1/2}) + O(rho k^{2m}),     (65)

by applying lemma 3.1 of Bosq (1991) and relations (62) and (64).

Let us now compare the eigenfunctions {phi^_{j,rho,n}, j = 1, ..., q} of Gamma^_{n,k,rho} with those of E Gamma~_n, say {phi~_{j,p}, j = 1, ..., q}. We have phi^_{j,rho,n}(t) = (s^_{j,rho,n})' B_k(t), t in [0,1], where s^_{j,rho,n} = H_{k,rho}^{1/2} v_{j,rho,n}, and phi~_{j,p}(t) = (C_k^{-1/2} v_j)' B_k(t), t in [0,1]. Thus we can bound, by writing

    || phi^_{j,rho,n} - phi~_{j,p} ||^2_L2 = || H_{k,rho}^{1/2} v_{j,rho,n} - C_k^{-1/2} v_j ||^2_{C_k}
        <= 2 ||C_k|| ( || H_{k,rho}^{1/2} - C_k^{-1/2} ||^2 ||v_{j,rho,n}||^2 + ||C_k^{-1/2}||^2 || v_{j,rho,n} - v_j ||^2 ).

Using relation (65) and lemmas 6.3 and 6.2, we finally get

    ( E|| phi^_{j,rho,n} - phi~_{j,p} ||^2_L2 )^{1/2} = O(n^{-1/2}) + O(rho k^{2m}).     (66)

The end of the proof is obtained by applying again lemma 3.1 of Bosq (1991) to the operators E Gamma~_n and Gamma:

    || phi~_{j,p} - phi_j ||_L2 = O(p^{-1}) + O(k^{-m}).

Acknowledgments: The author is indebted to Professor P.C. Besse, to Drs. C. Diack and D.B. Stephenson, and to an anonymous referee for critical comments which considerably improved this article in various ways.

References

Agarwal, G. and Studden, W. (1980). Asymptotic integrated mean square error using least squares and bias minimizing splines. The Annals of Statistics, 8, 1307-1325.

Besse, P. (1991). Approximation spline de l'analyse en composantes principales d'une variable aléatoire hilbertienne. Annales de la Faculté des Sciences de Toulouse.

Besse, P., Cardot, H., and Ferraty, F. (1997). Simultaneous nonparametric regressions of unbalanced longitudinal data. Computational Statistics and Data Analysis, 24, 255-270.

Biritxinaga, E. (1987). Estimation spline de la moyenne d'une fonction aléatoire. Thèse de troisième cycle, Université de Pau, France.

Bosq, D. (1991). Modelization, nonparametric estimation and prediction for continuous time processes. In Roussas, G. (ed.), Nonparametric Functional Estimation and Related Topics. NATO ASI Series.

Boularan, J., Ferré, L., and Vieu, P. (1993). Growth curves: a two-stage nonparametric approach. Journal of Statistical Planning and Inference, 38.

Burman, P. (1991). Regression function estimation from dependent observations. Journal of Multivariate Analysis, 36, 263-279.

Cardot, H. (1997). Contribution à l'estimation et à la prévision statistique de données fonctionnelles. Thèse de troisième cycle, Université Paul Sabatier, Toulouse, France.

Cardot, H. and Diack, C. (1998). Convergence en moyenne quadratique de l'estimateur de la régression par splines hybrides. C. R. Acad. Sci. Paris, Série I, t. 326.

Chatelin, F. (1983). Spectral Approximation of Linear Operators. Academic Press.

Dauxois, J., Pousse, A., and Romain, Y. (1982). Asymptotic theory for the principal component analysis of a random vector function: some applications to statistical inference. Journal of Multivariate Analysis, 12, 136-154.

Diack, C. and Thomas-Agnan, C. (1997). A nonparametric test of non-convexity of the regression. To appear in Journal of Nonparametric Statistics.

De Boor, C. (1978). A Practical Guide to Splines. Springer Verlag.

Hart, J. and Wehrly, T. (1993). Consistency of cross-validation when the data are curves. Stochastic Processes and their Applications, 45, 351-361.

Kato, T. (1976). Perturbation Theory for Linear Operators. Springer-Verlag.

Kelly, C. and Rice, J. (1990). Monotone smoothing with application to dose-response curves and the assessment of synergism. Biometrics, 46, 1071-1085.

Kneip, A. (1995). Nonparametric estimation of common regressors for similar curve data. The Annals of Statistics, 23, 1386-1427.

Pezzulli, S. and Silverman, B. (1993). On smoothed principal components analysis. Computational Statistics, 8, 1-16.

Ramsay, J. and Dalzell, C. (1991). Some tools for functional data analysis (with discussion). Journal of the Royal Statistical Society, Series B, 53, 539-572.

Ramsay, J. and Silverman, B. (1997). Functional Data Analysis. Springer-Verlag.

Rice, J. and Silverman, B. (1991). Estimating the mean and covariance structure nonparametrically when the data are curves. Journal of the Royal Statistical Society, Series B, 53, 233-243.

Schumaker, L. (1981). Spline Functions: Basic Theory. Wiley-Interscience.

Silverman, B. (1996). Smoothed functional principal components analysis by choice of norm. The Annals of Statistics, 24, 1-24.


More information

Applications to stochastic PDE

Applications to stochastic PDE 15 Alications to stochastic PE In this final lecture we resent some alications of the theory develoed in this course to stochastic artial differential equations. We concentrate on two secific examles:

More information

Adaptive Estimation of the Regression Discontinuity Model

Adaptive Estimation of the Regression Discontinuity Model Adative Estimation of the Regression Discontinuity Model Yixiao Sun Deartment of Economics Univeristy of California, San Diego La Jolla, CA 9293-58 Feburary 25 Email: yisun@ucsd.edu; Tel: 858-534-4692

More information

Introduction Consider a set of jobs that are created in an on-line fashion and should be assigned to disks. Each job has a weight which is the frequen

Introduction Consider a set of jobs that are created in an on-line fashion and should be assigned to disks. Each job has a weight which is the frequen Ancient and new algorithms for load balancing in the L norm Adi Avidor Yossi Azar y Jir Sgall z July 7, 997 Abstract We consider the on-line load balancing roblem where there are m identical machines (servers)

More information

MODELING THE RELIABILITY OF C4ISR SYSTEMS HARDWARE/SOFTWARE COMPONENTS USING AN IMPROVED MARKOV MODEL

MODELING THE RELIABILITY OF C4ISR SYSTEMS HARDWARE/SOFTWARE COMPONENTS USING AN IMPROVED MARKOV MODEL Technical Sciences and Alied Mathematics MODELING THE RELIABILITY OF CISR SYSTEMS HARDWARE/SOFTWARE COMPONENTS USING AN IMPROVED MARKOV MODEL Cezar VASILESCU Regional Deartment of Defense Resources Management

More information

Elementary Analysis in Q p

Elementary Analysis in Q p Elementary Analysis in Q Hannah Hutter, May Szedlák, Phili Wirth November 17, 2011 This reort follows very closely the book of Svetlana Katok 1. 1 Sequences and Series In this section we will see some

More information

Generalized Coiflets: A New Family of Orthonormal Wavelets

Generalized Coiflets: A New Family of Orthonormal Wavelets Generalized Coiflets A New Family of Orthonormal Wavelets Dong Wei, Alan C Bovik, and Brian L Evans Laboratory for Image and Video Engineering Deartment of Electrical and Comuter Engineering The University

More information

Coding Along Hermite Polynomials for Gaussian Noise Channels

Coding Along Hermite Polynomials for Gaussian Noise Channels Coding Along Hermite olynomials for Gaussian Noise Channels Emmanuel A. Abbe IG, EFL Lausanne, 1015 CH Email: emmanuel.abbe@efl.ch Lizhong Zheng LIDS, MIT Cambridge, MA 0139 Email: lizhong@mit.edu Abstract

More information

Chater Matrix Norms and Singular Value Decomosition Introduction In this lecture, we introduce the notion of a norm for matrices The singular value de

Chater Matrix Norms and Singular Value Decomosition Introduction In this lecture, we introduce the notion of a norm for matrices The singular value de Lectures on Dynamic Systems and Control Mohammed Dahleh Munther A Dahleh George Verghese Deartment of Electrical Engineering and Comuter Science Massachuasetts Institute of Technology c Chater Matrix Norms

More information

1 Extremum Estimators

1 Extremum Estimators FINC 9311-21 Financial Econometrics Handout Jialin Yu 1 Extremum Estimators Let θ 0 be a vector of k 1 unknown arameters. Extremum estimators: estimators obtained by maximizing or minimizing some objective

More information

Introduction Model secication tests are a central theme in the econometric literature. The majority of the aroaches fall into two categories. In the r

Introduction Model secication tests are a central theme in the econometric literature. The majority of the aroaches fall into two categories. In the r Reversed Score and Likelihood Ratio Tests Geert Dhaene Universiteit Gent and ORE Olivier Scaillet Universite atholique de Louvain January 2 Abstract Two extensions of a model in the resence of an alternative

More information

Improved Capacity Bounds for the Binary Energy Harvesting Channel

Improved Capacity Bounds for the Binary Energy Harvesting Channel Imroved Caacity Bounds for the Binary Energy Harvesting Channel Kaya Tutuncuoglu 1, Omur Ozel 2, Aylin Yener 1, and Sennur Ulukus 2 1 Deartment of Electrical Engineering, The Pennsylvania State University,

More information

t 0 Xt sup X t p c p inf t 0

t 0 Xt sup X t p c p inf t 0 SHARP MAXIMAL L -ESTIMATES FOR MARTINGALES RODRIGO BAÑUELOS AND ADAM OSȨKOWSKI ABSTRACT. Let X be a suermartingale starting from 0 which has only nonnegative jums. For each 0 < < we determine the best

More information

arxiv: v1 [physics.data-an] 26 Oct 2012

arxiv: v1 [physics.data-an] 26 Oct 2012 Constraints on Yield Parameters in Extended Maximum Likelihood Fits Till Moritz Karbach a, Maximilian Schlu b a TU Dortmund, Germany, moritz.karbach@cern.ch b TU Dortmund, Germany, maximilian.schlu@cern.ch

More information

Notes on Instrumental Variables Methods

Notes on Instrumental Variables Methods Notes on Instrumental Variables Methods Michele Pellizzari IGIER-Bocconi, IZA and frdb 1 The Instrumental Variable Estimator Instrumental variable estimation is the classical solution to the roblem of

More information

LECTURE 7 NOTES. x n. d x if. E [g(x n )] E [g(x)]

LECTURE 7 NOTES. x n. d x if. E [g(x n )] E [g(x)] LECTURE 7 NOTES 1. Convergence of random variables. Before delving into the large samle roerties of the MLE, we review some concets from large samle theory. 1. Convergence in robability: x n x if, for

More information

LEIBNIZ SEMINORMS IN PROBABILITY SPACES

LEIBNIZ SEMINORMS IN PROBABILITY SPACES LEIBNIZ SEMINORMS IN PROBABILITY SPACES ÁDÁM BESENYEI AND ZOLTÁN LÉKA Abstract. In this aer we study the (strong) Leibniz roerty of centered moments of bounded random variables. We shall answer a question

More information

k- price auctions and Combination-auctions

k- price auctions and Combination-auctions k- rice auctions and Combination-auctions Martin Mihelich Yan Shu Walnut Algorithms March 6, 219 arxiv:181.3494v3 [q-fin.mf] 5 Mar 219 Abstract We rovide for the first time an exact analytical solution

More information

RIEMANN-STIELTJES OPERATORS BETWEEN WEIGHTED BERGMAN SPACES

RIEMANN-STIELTJES OPERATORS BETWEEN WEIGHTED BERGMAN SPACES RIEMANN-STIELTJES OPERATORS BETWEEN WEIGHTED BERGMAN SPACES JIE XIAO This aer is dedicated to the memory of Nikolaos Danikas 1947-2004) Abstract. This note comletely describes the bounded or comact Riemann-

More information

Nonparametric estimation of Exact consumer surplus with endogeneity in price

Nonparametric estimation of Exact consumer surplus with endogeneity in price Nonarametric estimation of Exact consumer surlus with endogeneity in rice Anne Vanhems February 7, 2009 Abstract This aer deals with nonarametric estimation of variation of exact consumer surlus with endogenous

More information

#A64 INTEGERS 18 (2018) APPLYING MODULAR ARITHMETIC TO DIOPHANTINE EQUATIONS

#A64 INTEGERS 18 (2018) APPLYING MODULAR ARITHMETIC TO DIOPHANTINE EQUATIONS #A64 INTEGERS 18 (2018) APPLYING MODULAR ARITHMETIC TO DIOPHANTINE EQUATIONS Ramy F. Taki ElDin Physics and Engineering Mathematics Deartment, Faculty of Engineering, Ain Shams University, Cairo, Egyt

More information

Uncorrelated Multilinear Principal Component Analysis for Unsupervised Multilinear Subspace Learning

Uncorrelated Multilinear Principal Component Analysis for Unsupervised Multilinear Subspace Learning TNN-2009-P-1186.R2 1 Uncorrelated Multilinear Princial Comonent Analysis for Unsuervised Multilinear Subsace Learning Haiing Lu, K. N. Plataniotis and A. N. Venetsanooulos The Edward S. Rogers Sr. Deartment

More information

Robustness of classifiers to uniform l p and Gaussian noise Supplementary material

Robustness of classifiers to uniform l p and Gaussian noise Supplementary material Robustness of classifiers to uniform l and Gaussian noise Sulementary material Jean-Yves Franceschi Ecole Normale Suérieure de Lyon LIP UMR 5668 Omar Fawzi Ecole Normale Suérieure de Lyon LIP UMR 5668

More information

On Wald-Type Optimal Stopping for Brownian Motion

On Wald-Type Optimal Stopping for Brownian Motion J Al Probab Vol 34, No 1, 1997, (66-73) Prerint Ser No 1, 1994, Math Inst Aarhus On Wald-Tye Otimal Stoing for Brownian Motion S RAVRSN and PSKIR The solution is resented to all otimal stoing roblems of

More information

Unsupervised Hyperspectral Image Analysis Using Independent Component Analysis (ICA)

Unsupervised Hyperspectral Image Analysis Using Independent Component Analysis (ICA) Unsuervised Hyersectral Image Analysis Using Indeendent Comonent Analysis (ICA) Shao-Shan Chiang Chein-I Chang Irving W. Ginsberg Remote Sensing Signal and Image Processing Laboratory Deartment of Comuter

More information

Maxisets for μ-thresholding rules

Maxisets for μ-thresholding rules Test 008 7: 33 349 DOI 0.007/s749-006-0035-5 ORIGINAL PAPER Maxisets for μ-thresholding rules Florent Autin Received: 3 January 005 / Acceted: 8 June 006 / Published online: March 007 Sociedad de Estadística

More information

The following document is intended for online publication only (authors webpage).

The following document is intended for online publication only (authors webpage). The following document is intended for online ublication only (authors webage). Sulement to Identi cation and stimation of Distributional Imacts of Interventions Using Changes in Inequality Measures, Part

More information

Analysis of some entrance probabilities for killed birth-death processes

Analysis of some entrance probabilities for killed birth-death processes Analysis of some entrance robabilities for killed birth-death rocesses Master s Thesis O.J.G. van der Velde Suervisor: Dr. F.M. Sieksma July 5, 207 Mathematical Institute, Leiden University Contents Introduction

More information

Statics and dynamics: some elementary concepts

Statics and dynamics: some elementary concepts 1 Statics and dynamics: some elementary concets Dynamics is the study of the movement through time of variables such as heartbeat, temerature, secies oulation, voltage, roduction, emloyment, rices and

More information

E-companion to A risk- and ambiguity-averse extension of the max-min newsvendor order formula

E-companion to A risk- and ambiguity-averse extension of the max-min newsvendor order formula e-comanion to Han Du and Zuluaga: Etension of Scarf s ma-min order formula ec E-comanion to A risk- and ambiguity-averse etension of the ma-min newsvendor order formula Qiaoming Han School of Mathematics

More information

MATH 829: Introduction to Data Mining and Analysis Consistency of Linear Regression

MATH 829: Introduction to Data Mining and Analysis Consistency of Linear Regression 1/9 MATH 829: Introduction to Data Mining and Analysis Consistency of Linear Regression Dominique Guillot Deartments of Mathematical Sciences University of Delaware February 15, 2016 Distribution of regression

More information

Positive Definite Uncertain Homogeneous Matrix Polynomials: Analysis and Application

Positive Definite Uncertain Homogeneous Matrix Polynomials: Analysis and Application BULGARIA ACADEMY OF SCIECES CYBEREICS AD IFORMAIO ECHOLOGIES Volume 9 o 3 Sofia 009 Positive Definite Uncertain Homogeneous Matrix Polynomials: Analysis and Alication Svetoslav Savov Institute of Information

More information

AN ALGORITHM FOR MATRIX EXTENSION AND WAVELET CONSTRUCTION W. LAWTON, S. L. LEE AND ZUOWEI SHEN Abstract. This paper gives a practical method of exten

AN ALGORITHM FOR MATRIX EXTENSION AND WAVELET CONSTRUCTION W. LAWTON, S. L. LEE AND ZUOWEI SHEN Abstract. This paper gives a practical method of exten AN ALGORITHM FOR MATRIX EXTENSION AND WAVELET ONSTRUTION W. LAWTON, S. L. LEE AND ZUOWEI SHEN Abstract. This aer gives a ractical method of extending an nr matrix P (z), r n, with Laurent olynomial entries

More information

Uniformly best wavenumber approximations by spatial central difference operators: An initial investigation

Uniformly best wavenumber approximations by spatial central difference operators: An initial investigation Uniformly best wavenumber aroximations by satial central difference oerators: An initial investigation Vitor Linders and Jan Nordström Abstract A characterisation theorem for best uniform wavenumber aroximations

More information

CERIAS Tech Report The period of the Bell numbers modulo a prime by Peter Montgomery, Sangil Nahm, Samuel Wagstaff Jr Center for Education

CERIAS Tech Report The period of the Bell numbers modulo a prime by Peter Montgomery, Sangil Nahm, Samuel Wagstaff Jr Center for Education CERIAS Tech Reort 2010-01 The eriod of the Bell numbers modulo a rime by Peter Montgomery, Sangil Nahm, Samuel Wagstaff Jr Center for Education and Research Information Assurance and Security Purdue University,

More information

Chapter 3. GMM: Selected Topics

Chapter 3. GMM: Selected Topics Chater 3. GMM: Selected oics Contents Otimal Instruments. he issue of interest..............................2 Otimal Instruments under the i:i:d: assumtion..............2. he basic result............................2.2

More information

On the capacity of the general trapdoor channel with feedback

On the capacity of the general trapdoor channel with feedback On the caacity of the general tradoor channel with feedback Jui Wu and Achilleas Anastasooulos Electrical Engineering and Comuter Science Deartment University of Michigan Ann Arbor, MI, 48109-1 email:

More information

A Note on Guaranteed Sparse Recovery via l 1 -Minimization

A Note on Guaranteed Sparse Recovery via l 1 -Minimization A Note on Guaranteed Sarse Recovery via l -Minimization Simon Foucart, Université Pierre et Marie Curie Abstract It is roved that every s-sarse vector x C N can be recovered from the measurement vector

More information

Location of solutions for quasi-linear elliptic equations with general gradient dependence

Location of solutions for quasi-linear elliptic equations with general gradient dependence Electronic Journal of Qualitative Theory of Differential Equations 217, No. 87, 1 1; htts://doi.org/1.14232/ejqtde.217.1.87 www.math.u-szeged.hu/ejqtde/ Location of solutions for quasi-linear ellitic equations

More information

Spectral Properties of Schrödinger-type Operators and Large-time Behavior of the Solutions to the Corresponding Wave Equation

Spectral Properties of Schrödinger-type Operators and Large-time Behavior of the Solutions to the Corresponding Wave Equation Math. Model. Nat. Phenom. Vol. 8, No., 23,. 27 24 DOI:.5/mmn/2386 Sectral Proerties of Schrödinger-tye Oerators and Large-time Behavior of the Solutions to the Corresonding Wave Equation A.G. Ramm Deartment

More information

PETER J. GRABNER AND ARNOLD KNOPFMACHER

PETER J. GRABNER AND ARNOLD KNOPFMACHER ARITHMETIC AND METRIC PROPERTIES OF -ADIC ENGEL SERIES EXPANSIONS PETER J. GRABNER AND ARNOLD KNOPFMACHER Abstract. We derive a characterization of rational numbers in terms of their unique -adic Engel

More information

The Poisson Regression Model

The Poisson Regression Model The Poisson Regression Model The Poisson regression model aims at modeling a counting variable Y, counting the number of times that a certain event occurs during a given time eriod. We observe a samle

More information

Specialized and hybrid Newton schemes for the matrix pth root

Specialized and hybrid Newton schemes for the matrix pth root Secialized and hybrid Newton schemes for the matrix th root Braulio De Abreu Marlliny Monsalve Marcos Raydan June 15, 2006 Abstract We discuss different variants of Newton s method for comuting the th

More information

#A37 INTEGERS 15 (2015) NOTE ON A RESULT OF CHUNG ON WEIL TYPE SUMS

#A37 INTEGERS 15 (2015) NOTE ON A RESULT OF CHUNG ON WEIL TYPE SUMS #A37 INTEGERS 15 (2015) NOTE ON A RESULT OF CHUNG ON WEIL TYPE SUMS Norbert Hegyvári ELTE TTK, Eötvös University, Institute of Mathematics, Budaest, Hungary hegyvari@elte.hu François Hennecart Université

More information

Estimation of Separable Representations in Psychophysical Experiments

Estimation of Separable Representations in Psychophysical Experiments Estimation of Searable Reresentations in Psychohysical Exeriments Michele Bernasconi (mbernasconi@eco.uninsubria.it) Christine Choirat (cchoirat@eco.uninsubria.it) Raffaello Seri (rseri@eco.uninsubria.it)

More information

On information plus noise kernel random matrices

On information plus noise kernel random matrices On information lus noise kernel random matrices Noureddine El Karoui Deartment of Statistics, UC Berkeley First version: July 009 This version: February 5, 00 Abstract Kernel random matrices have attracted

More information

Asymptotically Optimal Simulation Allocation under Dependent Sampling

Asymptotically Optimal Simulation Allocation under Dependent Sampling Asymtotically Otimal Simulation Allocation under Deendent Samling Xiaoing Xiong The Robert H. Smith School of Business, University of Maryland, College Park, MD 20742-1815, USA, xiaoingx@yahoo.com Sandee

More information

Combining Logistic Regression with Kriging for Mapping the Risk of Occurrence of Unexploded Ordnance (UXO)

Combining Logistic Regression with Kriging for Mapping the Risk of Occurrence of Unexploded Ordnance (UXO) Combining Logistic Regression with Kriging for Maing the Risk of Occurrence of Unexloded Ordnance (UXO) H. Saito (), P. Goovaerts (), S. A. McKenna (2) Environmental and Water Resources Engineering, Deartment

More information

On Z p -norms of random vectors

On Z p -norms of random vectors On Z -norms of random vectors Rafa l Lata la Abstract To any n-dimensional random vector X we may associate its L -centroid body Z X and the corresonding norm. We formulate a conjecture concerning the

More information

Maximum Likelihood Asymptotic Theory. Eduardo Rossi University of Pavia

Maximum Likelihood Asymptotic Theory. Eduardo Rossi University of Pavia Maximum Likelihood Asymtotic Theory Eduardo Rossi University of Pavia Slutsky s Theorem, Cramer s Theorem Slutsky s Theorem Let {X N } be a random sequence converging in robability to a constant a, and

More information

GENERALIZED NORMS INEQUALITIES FOR ABSOLUTE VALUE OPERATORS

GENERALIZED NORMS INEQUALITIES FOR ABSOLUTE VALUE OPERATORS International Journal of Analysis Alications ISSN 9-8639 Volume 5, Number (04), -9 htt://www.etamaths.com GENERALIZED NORMS INEQUALITIES FOR ABSOLUTE VALUE OPERATORS ILYAS ALI, HU YANG, ABDUL SHAKOOR Abstract.

More information

On Doob s Maximal Inequality for Brownian Motion

On Doob s Maximal Inequality for Brownian Motion Stochastic Process. Al. Vol. 69, No., 997, (-5) Research Reort No. 337, 995, Det. Theoret. Statist. Aarhus On Doob s Maximal Inequality for Brownian Motion S. E. GRAVERSEN and G. PESKIR If B = (B t ) t

More information

ESTIMATION OF THE RECIPROCAL OF THE MEAN OF THE INVERSE GAUSSIAN DISTRIBUTION WITH PRIOR INFORMATION

ESTIMATION OF THE RECIPROCAL OF THE MEAN OF THE INVERSE GAUSSIAN DISTRIBUTION WITH PRIOR INFORMATION STATISTICA, anno LXVIII, n., 008 ESTIMATION OF THE RECIPROCAL OF THE MEAN OF THE INVERSE GAUSSIAN DISTRIBUTION WITH PRIOR INFORMATION 1. INTRODUCTION The Inverse Gaussian distribution was first introduced

More information

Some results of convex programming complexity

Some results of convex programming complexity 2012c12 $ Ê Æ Æ 116ò 14Ï Dec., 2012 Oerations Research Transactions Vol.16 No.4 Some results of convex rogramming comlexity LOU Ye 1,2 GAO Yuetian 1 Abstract Recently a number of aers were written that

More information

Topic 7: Using identity types

Topic 7: Using identity types Toic 7: Using identity tyes June 10, 2014 Now we would like to learn how to use identity tyes and how to do some actual mathematics with them. By now we have essentially introduced all inference rules

More information

Research Article An iterative Algorithm for Hemicontractive Mappings in Banach Spaces

Research Article An iterative Algorithm for Hemicontractive Mappings in Banach Spaces Abstract and Alied Analysis Volume 2012, Article ID 264103, 11 ages doi:10.1155/2012/264103 Research Article An iterative Algorithm for Hemicontractive Maings in Banach Saces Youli Yu, 1 Zhitao Wu, 2 and

More information

Metrics Performance Evaluation: Application to Face Recognition

Metrics Performance Evaluation: Application to Face Recognition Metrics Performance Evaluation: Alication to Face Recognition Naser Zaeri, Abeer AlSadeq, and Abdallah Cherri Electrical Engineering Det., Kuwait University, P.O. Box 5969, Safat 6, Kuwait {zaery, abeer,

More information

Otimal exercise boundary for an American ut otion Rachel A. Kuske Deartment of Mathematics University of Minnesota-Minneaolis Minneaolis, MN 55455 e-mail: rachel@math.umn.edu Joseh B. Keller Deartments

More information

Online Learning of Noisy Data with Kernels

Online Learning of Noisy Data with Kernels Online Learning of Noisy Data with Kernels Nicolò Cesa-Bianchi Università degli Studi di Milano cesa-bianchi@dsiunimiit Shai Shalev Shwartz The Hebrew University shais@cshujiacil Ohad Shamir The Hebrew

More information

Estimating function analysis for a class of Tweedie regression models

Estimating function analysis for a class of Tweedie regression models Title Estimating function analysis for a class of Tweedie regression models Author Wagner Hugo Bonat Deartamento de Estatística - DEST, Laboratório de Estatística e Geoinformação - LEG, Universidade Federal

More information

Research of power plant parameter based on the Principal Component Analysis method

Research of power plant parameter based on the Principal Component Analysis method Research of ower lant arameter based on the Princial Comonent Analysis method Yang Yang *a, Di Zhang b a b School of Engineering, Bohai University, Liaoning Jinzhou, 3; Liaoning Datang international Jinzhou

More information

Lower Confidence Bound for Process-Yield Index S pk with Autocorrelated Process Data

Lower Confidence Bound for Process-Yield Index S pk with Autocorrelated Process Data Quality Technology & Quantitative Management Vol. 1, No.,. 51-65, 15 QTQM IAQM 15 Lower onfidence Bound for Process-Yield Index with Autocorrelated Process Data Fu-Kwun Wang * and Yeneneh Tamirat Deartment

More information

Convex Analysis and Economic Theory Winter 2018

Convex Analysis and Economic Theory Winter 2018 Division of the Humanities and Social Sciences Ec 181 KC Border Conve Analysis and Economic Theory Winter 2018 Toic 16: Fenchel conjugates 16.1 Conjugate functions Recall from Proosition 14.1.1 that is

More information