Run-length compression of quantized Gaussian stationary signals


Random Oper. Stoch. Equ. 20 (2012), 311–328
DOI 10.1515/rose-…, © de Gruyter 2012

Run-length compression of quantized Gaussian stationary signals

Oleg Seleznjev and Mykola Shykula

Communicated by Nikolai Leonenko

Abstract. We consider quantization of random continuous-valued signals. In practice, analogue signals are quantized at sampling points and then compressed. We study probabilistic models for the run-length encoding (RLE) algorithm applied to quantized sampled random signals (Gaussian processes). This compression technique is widely used in digital signal and image processing. The mean (inverse) RLE compression ratio (or data rate savings) and its statistical inference are considered. In particular, asymptotic normality is shown for some estimators of this characteristic. Numerical experiments for synthetic and real data are presented.

Keywords. Asymptotic normality, Gaussian process, compression ratio, run-length encoding, uniform quantization.

2010 Mathematics Subject Classification. Primary 60G15; secondary 94A29, 94A34.

1 Introduction

Quantization in signal processing, or discretization in amplitude of continuous-valued signals, has been studied intensively in the last decades. For a comprehensive overview of quantization techniques, we refer to [9] (see also [8] and the references therein). Nowadays the interest in statistical problems for quantization is renewed due to the wide spread of various types of sensors and increased performance requirements. Two steps are needed to digitalize a signal: sampling (time discretization) and quantization. Quantization maps the continuous-valued measurements of the signal to a set of discrete values (quantization levels). Quantization is a part of digital signal representation for various conversion techniques such as source coding, data compression, archiving, and restoration, where data compression refers to reducing the amount of data required to represent a given quantity of information. The underlying basis of this reduction is the removal of redundant data. Later, the compressed signal is decompressed to reconstruct the original sampled signal, which can often then be used for exact recovery of the original signal (e.g., by the Nyquist sampling technique).

(The first author is partly supported by a Swedish Research Council grant.)

In lossless data compression, an original signal can be recovered exactly. The run-length encoding (RLE) algorithm (see, e.g., [12]) is one of the simplest lossless coding techniques used in data compression. In RLE, a signal that consists of repeated symbols is compressed by encoding each repetition (or run) as a pair (symbol, length of the run). This technique is widely used in signal and image processing, either independently (e.g., the BMP and PCX compression formats) or as a part of more advanced methods (see, e.g., [4] and the references therein). The prediction of the memory capacity (or quantization rate) needed for encoded (quantized) signals is an important problem, especially nowadays for huge digital signal and image databases in various applications. Probabilistic modelling for random signals (processes) allows us to analyze this problem in an average case setting (cf. [14]). In order to demonstrate the general approach, we consider the RLE compression in more detail by using the correlation structure of an original Gaussian stationary process when the sampling and quantization steps vary accordingly. Some statistical problems for quantized data are considered in [20]. General quantization problems for Gaussian processes are investigated in [11, 15]. We consider probabilistic modelling and statistical inference for the mean memory capacity of realizations of a Gaussian process quantized in value and time when RLE compression is used.

Throughout the paper it is assumed, without further comment, that $X(t)$, $t \in [0,T]$, is a Gaussian stationary process with mean $\mu$, variance $\sigma^2 > 0$, and covariance and correlation functions $K(t)$ and $k(t) = K(t)/\sigma^2$, respectively, $t \in [0,T]$. Let $k(t)$ be continuous and, for any $h > 0$, $|k(h)| < 1$, i.e., the joint distribution of $X(0)$ and $X(h)$ is nonsingular. In particular, $k(t) \to 1$ if and only if $t \to 0$.

In applications, signals are observed at sampling points. For a sampling step $h > 0$, let $S_{n,h}(X) := \{X(kh),\ k = 0, 1, \ldots, n\}$ denote a sample from $X(t)$, where $n = T/h$ is the number of sampling points. For a positive $\varepsilon$ and a real $x$, define the uniform quantizer $q_\varepsilon(x) := \varepsilon[x/\varepsilon]$, where $[x]$ denotes the integer part of $x$. Then for $S_{n,h}(X)$ and $q_\varepsilon(x)$, the random memory capacity needed for the RLE compressed quantized signal (the RLE quantization rate), up to a fixed multiplicative constant, equals

$$l_{\varepsilon,h}(X) = l_{\varepsilon,h}(X,T) := \sum_{j=0}^{n-1} I\bigl(q_\varepsilon(X(jh)) \neq q_\varepsilon(X((j+1)h))\bigr),$$

where $I(\cdot)$ is the indicator function. Henceforth we use the term RLE quantization rate for $l_{\varepsilon,h}(X)$ if no confusion can arise.
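As a concrete illustration of these definitions, here is a minimal numerical sketch of the uniform quantizer and the RLE quantization rate; NumPy, the floor-based integer part, and all names are our implementation choices, not part of the paper.

```python
import numpy as np

def q(x, eps):
    """Uniform quantizer q_eps(x) = eps*[x/eps]; floor plays the integer part."""
    return eps * np.floor(np.asarray(x) / eps)

def rle_rate(samples, eps):
    """RLE quantization rate l_{eps,h}(X): the number of indices j with
    q_eps(X(jh)) != q_eps(X((j+1)h)), i.e. the number of new runs started."""
    levels = q(samples, eps)
    return int(np.count_nonzero(np.diff(levels)))

# Toy illustration on an arbitrary sampled path.
rng = np.random.default_rng(0)
x = np.cumsum(0.05 * rng.standard_normal(200))   # some sampled signal
print(rle_rate(x, eps=0.1))                      # runs beyond the first one
```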

Denote by
$$L_{\varepsilon,h}(X) = L_{\varepsilon,h}(X,T) := \mathrm{E}\,l_{\varepsilon,h}(X)$$
the mean RLE quantization rate for $S_{n,h}(X)$. It follows from the stationarity of $X(t)$ that
$$L_{\varepsilon,h}(X) = \frac{T}{h}\Bigl(1 - P\bigl(q_\varepsilon(X(0)) = q_\varepsilon(X(h))\bigr)\Bigr) = \frac{T}{h}\,r_{\varepsilon,h}(X), \qquad (1.1)$$
where the mean RLE (inverse) compression ratio $r_{\varepsilon,h}(X)$, $0 < r_{\varepsilon,h}(X) \le 1$, characterizes the compression efficiency of RLE (on average). A related equivalent characteristic is the data rate savings $s_{\varepsilon,h} = 1 - r_{\varepsilon,h}$. Henceforth we consider the ratio $r_{\varepsilon,h}(X)$ as the main RLE compression parameter for a given sampling step $h$.

Let $\xrightarrow{d}$ denote convergence in distribution, and let $\phi(x)$ and $\Phi(x)$ be the density and distribution functions of the standard normal distribution $N(0,1)$, respectively. The asymptotic normality of an estimator $\hat\theta_m$ of a parameter $\theta$ means that $\sqrt{m}(\hat\theta_m - \theta) \xrightarrow{d} N(0, \sigma_\theta^2)$ for some $\sigma_\theta^2$ as $m \to \infty$.

The paper is organized as follows. In Section 2, we study the behavior of the mean RLE inverse compression ratio for a given probabilistic model. In Section 3, we consider statistical inference for $r_{\varepsilon,h}(X)$ when a standard time series model with an increasing number of observations is assumed. Some properties (e.g., asymptotic normality) of sample estimators for $r_{\varepsilon,h}(X)$ are investigated. Section 4 provides numerical experiments illustrating the rate of convergence in the obtained results. Section 5 contains the proofs of the statements from Sections 2 and 3.

2 Run-length encoding. Probabilistic modelling

In this section, we consider some properties of the mean RLE inverse compression ratio (or the mean RLE quantization rate) assuming a known probabilistic model. First we study the behavior of $r_{\varepsilon,h}(X)$ for arbitrary positive cell width $\varepsilon$ and sampling step $h$; then we consider the asymptotic behavior of $r_{\varepsilon,h}(X)$ when $\varepsilon$ and $h$ vary in an according way. We begin with some notation. For a sampling step $h > 0$, let
$$v(h) = v(k,h) := \sigma^{-1}\bigl(1 - k(h)^2\bigr)^{-1/2} = K(0)^{1/2}\bigl(K(0)^2 - K(h)^2\bigr)^{-1/2}.$$
Let $Y$ and $U$ be independent standard normal and $[0,1]$-uniform random variables, respectively, and let $Z := |Y|/U$,
$$g(x) := 2\int_0^x \bigl(1 - y/x\bigr)\,\phi(y)\,dy = P(Z \le x), \qquad x > 0, \qquad (2.1)$$
i.e., $g(x)$ is the probability distribution function of $Z$, with $g(0) = 0$.
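The function $g$ can be evaluated numerically directly from (2.1) and checked against its probabilistic meaning $P(Z \le x)$. A small verification sketch (SciPy quadrature plus Monte Carlo; naming and sample sizes are our assumptions):

```python
import numpy as np
from scipy import integrate, stats

def g(x):
    """g(x) = 2 * int_0^x (1 - y/x) phi(y) dy, the distribution function of Z."""
    val, _ = integrate.quad(lambda y: 2.0 * (1.0 - y / x) * stats.norm.pdf(y), 0.0, x)
    return val

# Monte Carlo check of g(x) = P(Z <= x) for Z = |Y|/U.
rng = np.random.default_rng(1)
z = np.abs(rng.standard_normal(10**6)) / rng.uniform(size=10**6)
for x in (0.25, 1.0, 4.0):
    print(x, round(g(x), 4), round(float(np.mean(z <= x)), 4))
```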

We can interpret $g(x)$ as follows. First, we note that
$$g(x) = P\bigl(Y \le x(1-U)\bigr) - P(Y \le -xU) = P(0 \le Y + xU < x). \qquad (2.2)$$
From the additive noise model for the uniform quantizer $q_\varepsilon$ (see, e.g., [17, 20]) we have, for sufficiently small positive $\varepsilon$,
$$X(0) \stackrel{d}{\approx} q_\varepsilon(X(0)) + \varepsilon U, \qquad (2.3)$$
where $U$ is a $[0,1]$-uniform random variable independent of $(X(0), X(h))$. Let $\Delta(h)^2 := \mathrm{E}(X(h) - X(0))^2 = 2(K(0) - K(h))$ and $Y = (X(h) - X(0))/\Delta(h)$, i.e., a standard normal variable independent of $U$. By definition, we get
$$P\bigl(q_\varepsilon(X(0)) = q_\varepsilon(X(h))\bigr) = P\bigl(0 \le X(h) - \varepsilon[X(0)/\varepsilon] < \varepsilon\bigr). \qquad (2.4)$$
It follows from (2.3) and (2.4) that, for sufficiently small $\varepsilon$,
$$P\bigl(q_\varepsilon(X(0)) = q_\varepsilon(X(h))\bigr) \approx P\bigl(0 \le X(h) - X(0) + \varepsilon U < \varepsilon\bigr) = P\bigl(0 \le \Delta(h)Y + \varepsilon U < \varepsilon\bigr) = g\bigl(\varepsilon/\Delta(h)\bigr) \sim g\bigl(\varepsilon v(h)\bigr) \quad \text{as } h \to 0,$$
and one may expect that
$$r_{\varepsilon,h}(X) \sim 1 - g\bigl(\varepsilon v(h)\bigr) \qquad \text{as } \varepsilon, h \to 0. \qquad (2.5)$$
Now we proceed with the main result of this section. In the following theorem we evaluate $r_{\varepsilon,h}(X)$. Write $w(h) := \bigl((1 + k(h))/2\bigr)^{1/2}$, so that $w(h) \to 1$ as $h \to 0$.

Theorem 2.1. For any positive $\varepsilon$ and $h$,
$$r_{\varepsilon,h}(X) = 1 - g\bigl(\varepsilon v(h)\bigr)\bigl(w(h) + \varepsilon\,\delta_{\varepsilon,h}(X)\bigr),$$
where
$$|\delta_{\varepsilon,h}(X)| \le (2\pi)^{-1/2}\sigma^{-1}\Bigl(1 + \frac{\varepsilon^2}{2\sigma^2 w(h)^2}\exp\Bigl\{\frac{\varepsilon^2}{2\sigma^2 w(h)^2}\Bigr\}\Bigr).$$
If $\mu \in \{k\varepsilon,\ k \in \mathbb{Z}\}$, then $|\delta_{\varepsilon,h}(X)| \le (2\pi)^{-1/2}\sigma^{-1}$.
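Theorem 2.1 suggests the approximation $r_{\varepsilon,h}(X) \approx 1 - g(\varepsilon v(h))\,w(h)$, which is easy to probe by simulating the pair $(X(0), X(h))$ directly. Below is a sketch under our reconstruction of $v$ and $w$; the closed form of $g$ used here follows from (2.1) by direct integration.

```python
import numpy as np
from scipy import stats

def g(x):
    # Closed form of 2*int_0^x (1 - y/x) phi(y) dy, obtained by integration.
    return 2.0 * stats.norm.cdf(x) - 1.0 - (2.0 / x) * (stats.norm.pdf(0.0) - stats.norm.pdf(x))

rng = np.random.default_rng(2)
sigma2, k, eps = 1.0, 0.9, 0.1                   # K(0), k(h), cell width
cov = sigma2 * np.array([[1.0, k], [k, 1.0]])
xy = rng.multivariate_normal([0.0, 0.0], cov, size=10**6)
p_change = np.mean(np.floor(xy[:, 0] / eps) != np.floor(xy[:, 1] / eps))

v = (1.0 - k**2) ** -0.5 / np.sqrt(sigma2)       # v(h)
w = np.sqrt((1.0 + k) / 2.0)                     # w(h)
print(p_change, 1.0 - g(eps * v) * w)            # r_{eps,h}(X) vs leading term
```

Here $\mu = 0 \in \{k\varepsilon\}$, so the tighter bound on $\delta_{\varepsilon,h}(X)$ applies and the two printed values should agree to within $O(\varepsilon)$ plus Monte Carlo noise.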

From Theorem 2.1 we immediately obtain lower and upper bounds for $r_{\varepsilon,h}(X)$: for any positive $\varepsilon$ and $h$,
$$r^l_{\varepsilon,h} \le r_{\varepsilon,h}(X) \le r^u_{\varepsilon,h}, \qquad (2.6)$$
since $g(x)$, $x > 0$, is monotone and positive, where
$$r^l_{\varepsilon,h} := 1 - \alpha_{\varepsilon,h} - \varepsilon\,\beta_{\varepsilon,h}, \qquad r^u_{\varepsilon,h} := 1 - \alpha_{\varepsilon,h} + \varepsilon\,\beta_{\varepsilon,h},$$
$$\alpha_{\varepsilon,h} := g\bigl(\varepsilon v(h)\bigr)\,w(h), \qquad \beta_{\varepsilon,h} := g\bigl(\varepsilon v(h)\bigr)\,(2\pi)^{-1/2}\sigma^{-1}\Bigl(1 + \frac{\varepsilon^2}{2\sigma^2 w(h)^2}\exp\Bigl\{\frac{\varepsilon^2}{2\sigma^2 w(h)^2}\Bigr\}\Bigr).$$
Notice that these bounds do not depend on the mean $\mu$ of the original process $X(t)$. The parameters $\varepsilon$ and $\sigma$ correspond to the quantization accuracy and the deviation around the mean of the process, respectively. In many applications, in order to obtain an informative quantized process realization, it is reasonable to suppose that $\varepsilon/\sigma$ is sufficiently small, and therefore the obtained bounds (2.6) are precise enough. Moreover, we have
$$r_{\varepsilon,h}(X) \sim 1 - \alpha_{\varepsilon,h} \qquad \text{as } \varepsilon \to 0, \qquad (2.7)$$
and for sufficiently small $h$ we obtain (cf. (2.5))
$$r_{\varepsilon,h}(X) \sim 1 - g\bigl(\varepsilon v(h)\bigr) \qquad \text{as } \varepsilon, h \to 0. \qquad (2.8)$$
By definition, heuristically, the relationship between the quantization cell width and the quadratic mean (q.m.) smoothness of $X(t)$ determines the asymptotic behavior of $r_{\varepsilon,h}(X)$. Denote by
$$\gamma(t) := \tfrac{1}{2}\,\mathrm{E}\bigl(X(t) - X(0)\bigr)^2 = \sigma^2\bigl(1 - k(t)\bigr) = K(0) - K(t)$$
the semivariogram of $X(t)$, $t \in [0,T]$. Semivariograms are commonly used for modelling random functions in geosciences (see, e.g., [5]). In the following theorem we investigate the asymptotic behavior of $r_{\varepsilon,h}(X)$ when $\varepsilon$ and $\gamma(h)$ vary in an according way, i.e., $h = h(\varepsilon)$. Write $\tau(\varepsilon,h) := \varepsilon\,\gamma(h)^{-1/2}$.

Theorem 2.2. When $\varepsilon \to 0$,
$$r_{\varepsilon,h}(X) \sim \begin{cases} 1 - \tau(\varepsilon,h)/(2\sqrt{\pi}), & \text{if } \tau(\varepsilon,h) \to 0,\\ 1 - g(c/\sqrt{2}), & \text{if } \tau(\varepsilon,h) \to c,\ 0 < c < \infty,\\ 2/\bigl(\sqrt{\pi}\,\tau(\varepsilon,h)\bigr), & \text{if } \tau(\varepsilon,h) \to \infty \text{ and } \varepsilon\,\tau(\varepsilon,h) \to 0. \end{cases} \qquad (2.9)$$
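The first regime of Theorem 2.2 can be checked numerically, e.g., for the Ornstein–Uhlenbeck covariance $K(t) = e^{-|t|}$ with the coupling $h = \varepsilon$, so that $\tau(\varepsilon,h) \approx \sqrt{\varepsilon} \to 0$; the coupling and all numbers below are illustrative assumptions of ours.

```python
import numpy as np
from scipy import stats

def g(x):
    return 2.0 * stats.norm.cdf(x) - 1.0 - (2.0 / x) * (stats.norm.pdf(0.0) - stats.norm.pdf(x))

# Ornstein-Uhlenbeck covariance K(t) = exp(-|t|): gamma(h) = 1 - exp(-h).
for eps in (0.4, 0.2, 0.1, 0.05):
    h = eps                                      # then tau ~ sqrt(eps) -> 0
    k = np.exp(-h)
    tau = eps / np.sqrt(1.0 - k)
    v = (1.0 - k**2) ** -0.5
    w = np.sqrt((1.0 + k) / 2.0)
    exact = 1.0 - g(eps * v) * w                 # Theorem 2.1 leading term
    limit = 1.0 - tau / (2.0 * np.sqrt(np.pi))   # first case of (2.9)
    print(eps, round(exact, 4), round(limit, 4))
```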

Remark 2.3. (i) The additional condition $\varepsilon\,\tau(\varepsilon,h) \to 0$ for $\tau(\varepsilon,h) \to \infty$ is technical, and we claim that the corresponding result is valid in a more general case.

(ii) The corresponding asymptotics for the mean RLE quantization rate $L_{\varepsilon,h}(X)$ now follow from the definition, $L_{\varepsilon,h}(X) = (T/h)\,r_{\varepsilon,h}(X)$.

For a continuously (q.m.) differentiable $X(t)$, we have $\gamma(h) \sim \lambda h^2/2$, where $\lambda = -K''(0)$ is the second spectral moment of $X(t)$, and
$$\tau(\varepsilon,h) \sim \sqrt{\frac{2}{\lambda}}\,\frac{\varepsilon}{h} \qquad \text{as } h \to 0.$$
We rewrite (2.9) more explicitly when $\varepsilon$ and $h$ vary in an according way. Let $\varepsilon \to 0$. Then
$$r_{\varepsilon,h}(X) \sim \begin{cases} 1 - (2\pi\lambda)^{-1/2}\,\varepsilon/h, & \text{if } \varepsilon/h \to 0,\\ 1 - g\bigl(c\lambda^{-1/2}\bigr), & \text{if } \varepsilon/h \to c,\ 0 < c < \infty,\\ (2\lambda/\pi)^{1/2}\,h/\varepsilon, & \text{if } \varepsilon/h \to \infty \text{ and } \varepsilon^2/h \to 0. \end{cases} \qquad (2.10)$$
In particular,
$$L_{\varepsilon,h}(X) \sim T(2\lambda/\pi)^{1/2}\,\varepsilon^{-1} = o(T/h), \qquad \text{if } \varepsilon/h \to \infty \text{ and } \varepsilon^2/h \to 0 \text{ as } \varepsilon \to 0 \qquad (2.11)$$
(cf. [15]). We claim that the asymptotic relationship (2.10) between the q.m. smoothness of the process, the quantization cell width, and the sampling step is general, up to a constant, for various lossless coding algorithms.

3 Statistical inference for RLE

In the previous section, we studied the behavior of the mean RLE inverse compression ratio $r_{\varepsilon,h}(X)$ provided that the stochastic structure of the original Gaussian process $X(t)$ is known. Now we assume that the covariance function is unknown and that there are observed data $X(jh)$, $j = 0, \ldots, m$, for the statistical evaluation of $r_{\varepsilon,h}(X)$ with a fixed sampling step $h > 0$; let $X_j := X(jh)$. Note that this sample can be different from $S_{n,h}(X)$ and is used, as in time series modelling, for the estimation of the unknown parameters $K(0), K(h)$ when the number of observations $m$ increases and $h$ is fixed. In particular, for these realizations we admit $m \ge n$, i.e., $mh \ge T$. Let
$$\hat K(0) := \frac{1}{m}\sum_{j=0}^{m-1}\bigl(X_j - \hat\mu\bigr)^2, \qquad \hat K(h) := \frac{1}{m}\sum_{j=0}^{m-1}\bigl(X_j - \hat\mu\bigr)\bigl(X_{j+1} - \hat\mu\bigr), \qquad (3.1)$$
be standard sample estimators of $K(0)$ and $K(h)$, respectively, where $\hat\mu := (m+1)^{-1}\sum_{j=0}^{m} X_j$.

In a similar way, we introduce
$$\hat\gamma(h) := \frac{1}{2m}\sum_{j=0}^{m-1}\bigl(X_{j+1} - X_j\bigr)^2, \qquad \hat\gamma_K(h) := \hat K(0) - \hat K(h), \qquad (3.2)$$
sample estimators of the semivariogram $\gamma(h)$ (see, e.g., [5, 18]). Various statistical properties of the above estimators are well known. Specifically, the asymptotic joint normality of $\hat K(0)$ and $\hat K(h)$ has been established by several authors under appropriate conditions on the covariance function $K(t)$ which ensure that the dependence in the process decreases sufficiently fast as the distance (time) between observations increases (see, e.g., [1], [7, Section 6.3] and [6]). We use the following conditions:

(A) $X(jh)$ can be represented as
$$X_j = \mu + \sum_{i \in \mathbb{Z}} a_i(h)\,V_{j-i}, \quad j \ge 0, \qquad \sum_{i \in \mathbb{Z}} |a_i(h)| < \infty,$$
where $V_i$, $i \in \mathbb{Z}$, are independent and identically distributed standard normal variables.

(B) $\sum_{j \in \mathbb{Z}} |K(jh)| < \infty$.

Note that (B) follows from (A), and (A) says that $\{X_j\}$ is a linear time series. For completeness and further reference, we give the following proposition on the asymptotic properties of $\hat K(0)$, $\hat K(h)$, $\hat\gamma(h)$, and $\hat\gamma_K(h)$ as the number of sample points $m \to \infty$.

Proposition 3.1. (i) If (A) holds, then $\hat K(0)$ and $\hat K(h)$ are asymptotically jointly normal estimators of $K(0)$ and $K(h)$, respectively, as $m \to \infty$.

(ii) If (B) holds, then $\hat\gamma(h)$ and $\hat\gamma_K(h)$ are asymptotically normal estimators of $\gamma(h)$ as $m \to \infty$.

Applying the results of the previous section, we can evaluate the mean RLE inverse compression ratio $r_{\varepsilon,h}(X)$. Denote by $\hat\alpha_{\varepsilon,h} := \alpha_{\varepsilon,h}(\hat K(0), \hat K(h))$ the sample (plug-in) estimator of $\alpha_{\varepsilon,h}$ based on $\hat K(0)$ and $\hat K(h)$. Then a natural estimator of $r_{\varepsilon,h}(X)$ is provided by $1 - \hat\alpha_{\varepsilon,h}$, (2.7), for small enough $\varepsilon$. In a similar way,
$$\hat r^{\,l}_{\varepsilon,h} := r^l_{\varepsilon,h}\bigl(\hat K(0), \hat K(h)\bigr) \quad \text{and} \quad \hat r^{\,u}_{\varepsilon,h} := r^u_{\varepsilon,h}\bigl(\hat K(0), \hat K(h)\bigr)$$
define the corresponding sample estimators of the lower and upper bounds $r^l_{\varepsilon,h}$ and $r^u_{\varepsilon,h}$ for $r_{\varepsilon,h}(X)$, respectively, (2.6).
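A sketch of the plug-in machinery just introduced, under our reconstruction of $\alpha_{\varepsilon,h}$; the function names are ours and the record is assumed to be observed at the fixed step $h$.

```python
import numpy as np
from scipy import stats

def g(x):
    return 2.0 * stats.norm.cdf(x) - 1.0 - (2.0 / x) * (stats.norm.pdf(0.0) - stats.norm.pdf(x))

def cov_estimates(x):
    """Sample estimators mu^, K^(0), K^(h) of (3.1) for a record at step h."""
    x = np.asarray(x)
    mu = x.mean()
    k0 = np.mean((x - mu) ** 2)
    kh = np.mean((x[:-1] - mu) * (x[1:] - mu))
    return mu, k0, kh

def alpha_hat(x, eps):
    """Plug-in estimator alpha^_{eps,h} = alpha_{eps,h}(K^(0), K^(h));
    1 - alpha_hat(x, eps) is the natural estimator of r_{eps,h}(X)."""
    _, k0, kh = cov_estimates(x)
    k = kh / k0                                  # correlation estimate k(h)
    v = (1.0 - k**2) ** -0.5 / np.sqrt(k0)       # v(h)
    w = np.sqrt((1.0 + k) / 2.0)                 # w(h)
    return g(eps * v) * w

# Quick demo on a synthetic AR(1) record (illustrative only).
rng = np.random.default_rng(5)
x = np.empty(500); x[0] = 0.0
for j in range(1, 500):
    x[j] = 0.8 * x[j - 1] + rng.standard_normal()
print(1.0 - alpha_hat(x, eps=0.5))
```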

In the following theorem we study the asymptotic behavior of $\hat\alpha_{\varepsilon,h}$, $\hat r^{\,l}_{\varepsilon,h}$, and $\hat r^{\,u}_{\varepsilon,h}$.

Theorem 3.2. If (A) holds, then for any positive $\varepsilon$ and $h$, $\hat\alpha_{\varepsilon,h}$, $\hat r^{\,l}_{\varepsilon,h}$, and $\hat r^{\,u}_{\varepsilon,h}$ are asymptotically normal estimators of $\alpha_{\varepsilon,h}$, $r^l_{\varepsilon,h}$, and $r^u_{\varepsilon,h}$, respectively, as $m \to \infty$.

Remark 3.3. The asymptotic variances of $\hat\alpha_{\varepsilon,h}$, $\hat r^{\,l}_{\varepsilon,h}$, and $\hat r^{\,u}_{\varepsilon,h}$ can be found by using those of $\hat K(0)$ and $\hat K(h)$ given in [1, Chapter 8.4] (see also [13, Chapter 6a]). Alternatively, data-resampling methods for stationary time series can be used (see, e.g., [16, Chapter 9]).

Theorem 3.2 together with (2.6) then allows us to construct asymptotic confidence intervals for the mean RLE inverse compression ratio $r_{\varepsilon,h}(X)$ for any positive $\varepsilon$ and $h$. By Theorem 2.2, the asymptotic behavior of $r_{\varepsilon,h}(X)$ as $\varepsilon \to 0$ is determined by the semivariogram $\gamma(h)$. Thus, applying, for instance, the sample estimator $\hat\gamma_K(h) = \hat K(0) - \hat K(h)$ (or $\hat\gamma(h)$) of $\gamma(h)$, we get plug-in (asymptotic) estimators of $r_{\varepsilon,h}(X)$ for the various cases in (2.9), e.g.,
$$\psi_{\varepsilon,h}\bigl(\gamma(h)\bigr) := 1 - \varepsilon\bigl(4\pi\gamma(h)\bigr)^{-1/2}.$$
The asymptotic normality of $\hat\psi_{\varepsilon,h} := \psi_{\varepsilon,h}(\hat\gamma_K(h))$ (or $\psi_{\varepsilon,h}(\hat\gamma(h))$) then follows from condition (B) in a standard way.

In fact, the observed data are already quantized. However, for a certain class of Gaussian processes $X(t)$, $t \in [0,T]$, and the uniform quantizer $q_\varepsilon(x)$, the normalized error process $Z_\varepsilon(t)$ in the additive noise model $X(t) = q_\varepsilon(X(t)) + \varepsilon Z_\varepsilon(t)$, $t \in [0,T]$, behaves asymptotically like a white noise (see [17]). In particular, the process $Z_\varepsilon(t)$, $t \in [0,T]$, has asymptotically independent values for $t \neq s$; $Z_\varepsilon(t)$ is asymptotically independent of $X(t)$ as $\varepsilon \to 0$; and the q.m. quantization error is of order $\varepsilon$. Up to this point we have neglected the quantization error effect; for a detailed analysis of the influence of quantization effects in some statistical problems, we refer to [20]. Assume now that the observed data $Q^\varepsilon_{m,h}(X) := \{q_\varepsilon(X(jh)) : j = 0, \ldots, m\}$ are quantized. Based on $Q^\varepsilon_{m,h}(X)$, a plain (nonparametric) estimator of the mean RLE inverse compression ratio $r_{\varepsilon,h}(X)$ is
$$\hat r_{\varepsilon,h}(X) := \frac{1}{m}\sum_{j=0}^{m-1} I\bigl(q_\varepsilon(X_j) \neq q_\varepsilon(X_{j+1})\bigr). \qquad (3.3)$$

Theorem 3.4. Let $\mu \in \{k\varepsilon,\ k \in \mathbb{Z}\}$. If (B) holds, then for any positive $\varepsilon$ and $h$, $\hat r_{\varepsilon,h}(X)$ is an asymptotically normal estimator of $r_{\varepsilon,h}(X)$ as $m \to \infty$.
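The nonparametric estimator (3.3) is one line of code; a moving-block bootstrap, one of the data-resampling schemes alluded to in Remark 3.3, gives a rough confidence interval. Block length and replication count below are tuning choices of ours.

```python
import numpy as np

def r_hat(x, eps):
    """Nonparametric estimator (3.3) of r_{eps,h}(X) from quantized data."""
    lv = np.floor(np.asarray(x) / eps)
    return float(np.mean(lv[1:] != lv[:-1]))

def r_hat_ci(x, eps, block=50, nboot=2000, seed=0):
    """Rough 95% CI for r_{eps,h}(X): moving-block bootstrap applied to the
    change-indicator series (requires len(x) - 1 >= block)."""
    lv = np.floor(np.asarray(x) / eps)
    d = (lv[1:] != lv[:-1]).astype(float)
    rng = np.random.default_rng(seed)
    nblocks = max(1, len(d) // block)
    starts = rng.integers(0, len(d) - block + 1, size=(nboot, nblocks))
    idx = starts[:, :, None] + np.arange(block)   # (nboot, nblocks, block)
    boots = d[idx].reshape(nboot, -1).mean(axis=1)
    return np.quantile(boots, [0.025, 0.975])
```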

Remark 3.5. The variance of this estimator can be evaluated by data-resampling methods for stationary time series (cf. Remark 3.3).

4 Numerical experiments

The first two numerical examples illustrate the rate of convergence in the results obtained in the previous sections. While in Example 4.1 we assume that the stochastic structure of the original process is known, in Example 4.2 this structure is supposed to be unknown and there are (simulated) observed data for evaluation. In Example 4.3 we compare the estimators $1 - \hat\alpha_{\varepsilon,h}$, $m = 50$ (Theorem 3.2), and $\hat r_{\varepsilon,h}$, $m = 700$ (Theorem 3.4), of the mean inverse compression ratio for real data from a paper production process. We approximate the (theoretical) mean RLE quantization rates $L_{\varepsilon,h}(X) = (T/h)\,r_{\varepsilon,h}$ in Examples 4.1 and 4.2 by the standard Monte Carlo (MC) method; let $N$ denote the number of MC simulations.

Example 4.1. Let $X_1(t)$, $t \in [0,T]$, $T = 4.8$, be a stationary Gaussian process with zero mean and covariance function $K(t) = e^{-t^2}$. For the sampling step $h = 0.3$ we have, by Theorem 2.1 and (2.6),
$$L_{\varepsilon,h}(X_1) \sim \frac{T}{h}\bigl(1 - \alpha_{\varepsilon,h}(K(0), K(h))\bigr) \qquad \text{as } \varepsilon \to 0. \qquad (4.1)$$
Figure 1 illustrates (4.1), i.e., the performance of the approximation of $L_{\varepsilon,h}(X_1)$ by $(T/h)(1 - \alpha_{\varepsilon,h}(K(0), K(h)))$. Note the accuracy of the approximation even for moderate values of $\varepsilon$.
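A sketch of the kind of MC approximation used in Example 4.1: simulate Gaussian paths on the grid via a Cholesky factor of the covariance matrix and count quantization-level changes. The jitter term and all constants except $T$, $h$, and $K$ are our choices.

```python
import numpy as np
from scipy import stats

def g(x):
    return 2.0 * stats.norm.cdf(x) - 1.0 - (2.0 / x) * (stats.norm.pdf(0.0) - stats.norm.pdf(x))

rng = np.random.default_rng(3)
T, h, N = 4.8, 0.3, 2000
t = np.arange(0.0, T + h / 2, h)                 # sampling grid
C = np.exp(-np.subtract.outer(t, t) ** 2)        # K(t_i - t_j), K(t) = exp(-t^2)
A = np.linalg.cholesky(C + 1e-8 * np.eye(len(t)))  # jitter: numerical safeguard
k = np.exp(-h**2)
v, w = (1.0 - k**2) ** -0.5, np.sqrt((1.0 + k) / 2.0)
for eps in (0.1, 0.3, 0.5):
    runs = [np.count_nonzero(np.diff(np.floor((A @ rng.standard_normal(len(t))) / eps)))
            for _ in range(N)]
    print(eps, np.mean(runs), (T / h) * (1.0 - g(eps * v) * w))
```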

[Figure 1. The mean RLE quantization rate $L_{\varepsilon,h}(X_1)$ (solid line) and the approximation $(T/h)(1 - \alpha_{\varepsilon,h}(K(0), K(h)))$ (dashed line); $K(t) = e^{-t^2}$, $T = 4.8$, $h = 0.3$.]

Example 4.2. For a fixed sampling step $h$, let $D_{m,h}(X_i) = \{X_i(jh),\ j = 1, \ldots, m\}$, $i = 1, 2$, be two samples (observed data) of zero-mean Gaussian stationary processes $X_i(t)$, $t \ge 0$, $i = 1, 2$, with covariance functions $K_1(t) = e^{-t^2}$ and $K_2(t) = e^{-|t|}$, respectively. Let $T = 5$. Applying Theorems 3.2 and 3.4, we obtain the estimates $(T/h)(1 - \hat\alpha_{\varepsilon,h}(X_i)) = (T/h)(1 - \alpha_{\varepsilon,h}(\hat K_i(0), \hat K_i(h)))$ and $(T/h)\hat r_{\varepsilon,h}(X_i)$, respectively, for the mean RLE quantization rate $L_{\varepsilon,h}(X_i, T)$, using the data $D_{m,h}(X_i)$, $i = 1, 2$.

Figures 2 and 3 compare the approximations of $L_{\varepsilon,h}(X_i, T)$ with the estimates $(T/h)(1 - \hat\alpha_{\varepsilon,h}(X_i))$ and $(T/h)\hat r_{\varepsilon,h}(X_i)$, $i = 1, 2$, respectively, and show increasing estimation accuracy as the number of observations increases, $m = 5, 50, 500$. Notice also that the values of the mean RLE quantization rates for the smoother process $X_1(t)$ are smaller than those for the Ornstein–Uhlenbeck process $X_2(t)$, $t \ge 0$, cf. (2.10).

[Figure 2. The mean RLE quantization rate $L_{\varepsilon,h}(X_i)$ (solid line) and the estimates $(T/h)(1 - \alpha_{\varepsilon,h}(\hat K_i(0), \hat K_i(h)))$, $i = 1$ (left), $2$ (right); $m = 5, 50, 500$, $T = 5$, $N = 5000$.]

[Figure 3. The mean RLE quantization rate $L_{\varepsilon,h}(X_i)$ (solid line) and the estimates $(T/h)\hat r_{\varepsilon,h}(X_i)$, $i = 1$ (left), $2$ (right); $m = 5, 50, 500$, $T = 5$.]

Example 4.3 (Paper machine data). The paper production process includes many steps, starting from wood chipping and ending with rolling the final outcome paper. Wood pulping is one of the intermediate steps. During thermomechanical pulping (TMP), wood chips are softened by steam; this facilitates the separation of cellulose and reduces damage to individual fibres (see, e.g., [3]). Various characteristics (e.g., TMP conductivity) are observed during the paper production process (e.g., for further analysis and prediction of paper breaks). In this example, we consider TMP conductivity data recorded over a period of several hours with no breaks, with additional hours before and after: the time series $\{Y_k,\ k = 1, \ldots, n\}$, one observation per minute, Figure 4.

[Figure 4. TMP conductivity time series.]

First, we fit the corresponding autoregressive (AR) model for the data $\{Y_k\}$ (see, e.g., [19]). Then we compare the inverse compression ratio estimates $1 - \hat\alpha_{\varepsilon,h}(Y)$ and $\hat r_{\varepsilon,h}(Y)$ for various $\varepsilon$. Figure 5 illustrates the closeness of the estimates $n(1 - \hat\alpha_{\varepsilon,h}(Y))$, $m = 50$, and $n\,\hat r_{\varepsilon,h}(Y)$, $m = 700$, for the mean RLE quantization rate $L_{\varepsilon,h}(Y) = n\,r_{\varepsilon,h}(Y)$ and the TMP conductivity data.

[Figure 5. The estimators of the mean RLE quantization rate $n(1 - \alpha_{\varepsilon,h}(\hat K(0), \hat K(h)))$ (solid line) and $n\,\hat r_{\varepsilon,h}(Y)$ (dashed line), TMP conductivity time series.]

Note that the estimates are close even for a small enough sample size, $m = 50$. The predicted mean RLE compression gains are not significant, due to the irregularity of the original data, Figure 4. The results of the numerical experiments confirm that the plug-in estimator $1 - \hat\alpha_{\varepsilon,h}$ is smoother as a function of $\varepsilon$ than the nonparametric estimator $\hat r_{\varepsilon,h}$ and has smaller variance, but is biased. Note that the unbiased nonparametric $\hat r_{\varepsilon,h}$ is applicable for a wider class of stochastic models.
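Since the TMP conductivity data are not public, the comparison of Example 4.3 can only be mimicked here on a synthetic stand-in. The sketch below contrasts the plug-in and nonparametric estimates on a stationary Gaussian AR(1) path; all parameters are illustrative assumptions of ours.

```python
import numpy as np
from scipy import stats

def g(x):
    return 2.0 * stats.norm.cdf(x) - 1.0 - (2.0 / x) * (stats.norm.pdf(0.0) - stats.norm.pdf(x))

def plug_in(x, eps):
    """1 - alpha^_{eps,h}(K^(0), K^(h)) (cf. Theorem 3.2)."""
    mu = x.mean()
    k0 = np.mean((x - mu) ** 2)
    k = np.mean((x[:-1] - mu) * (x[1:] - mu)) / k0
    return 1.0 - g(eps * (1.0 - k**2) ** -0.5 / np.sqrt(k0)) * np.sqrt((1.0 + k) / 2.0)

def nonparametric(x, eps):
    """Nonparametric estimator (3.3)."""
    lv = np.floor(x / eps)
    return float(np.mean(lv[1:] != lv[:-1]))

# Synthetic stationary Gaussian AR(1) path as a stand-in for the TMP series.
rng = np.random.default_rng(4)
phi, m = 0.9, 1700
x = np.empty(m)
x[0] = rng.standard_normal() / np.sqrt(1.0 - phi**2)   # stationary start
for j in range(1, m):
    x[j] = phi * x[j - 1] + rng.standard_normal()
for eps in (0.5, 1.0, 2.0):
    # short record for the plug-in vs the full record, as in Example 4.3
    print(eps, round(plug_in(x[:50], eps), 3), round(nonparametric(x, eps), 3))
```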

5 Proofs

Proof of Theorem 2.1. We first prove the assertion for $\sigma = 1$. By definition, it is enough to show that
$$P\bigl(q_\varepsilon(X(0)) = q_\varepsilon(X(h))\bigr) = g\bigl(\varepsilon v(h)\bigr)\,w(h)\,P(\varepsilon,h,\mu) \qquad (5.1)$$
for some $P(\varepsilon,h,\mu)$ such that
$$|P(\varepsilon,h,\mu) - 1| \le (2\pi)^{-1/2}\Bigl(\frac{\varepsilon}{w(h)} + \frac{\varepsilon^3}{2w(h)^3}\exp\Bigl\{\frac{\varepsilon^2}{2w(h)^2}\Bigr\}\Bigr). \qquad (5.2)$$
Let $p_h(x,y)$ be the joint Gaussian density function of $X(0)$ and $X(h)$. We represent $p_h(x,y)$ as follows:
$$p_h(x,y) = \Bigl(2\pi\bigl(1 - k(h)^2\bigr)^{1/2}\Bigr)^{-1} \exp\Bigl\{-\frac{(x-y)^2}{2(1 - k(h)^2)}\Bigr\} \exp\Bigl\{-\frac{(x-\mu)(y-\mu)}{1 + k(h)}\Bigr\}. \qquad (5.3)$$
Let $I_k(\varepsilon) = [\varepsilon k, \varepsilon(k+1))$, $k \in \mathbb{Z}$. By the definition of the uniform quantizer $q_\varepsilon(x)$,
$$P\bigl(q_\varepsilon(X(0)) = q_\varepsilon(X(h))\bigr) = \sum_{k \in \mathbb{Z}} P\bigl(X(0) \in I_k(\varepsilon),\ X(h) \in I_k(\varepsilon)\bigr) = \sum_{k \in \mathbb{Z}} \iint_{I_k(\varepsilon)^2} p_h(x,y)\,dx\,dy = \frac{\varphi(\varepsilon,h)}{\varepsilon}\int_{\mathbb{R}} f_{\varepsilon,h}(z)\,dz,$$
where the piecewise constant function $f_{\varepsilon,h}$ and the factor $\varphi(\varepsilon,h)$ are given by
$$f_{\varepsilon,h}(z) := \varphi(\varepsilon,h)^{-1}\iint_{I_k(\varepsilon)^2} p_h(x,y)\,dx\,dy \quad \text{for } z \in I_k(\varepsilon),\ k \in \mathbb{Z},$$
$$\varphi(\varepsilon,h) := \bigl(1 - k(h)^2\bigr)^{-1/2}\iint_{I_0(\varepsilon)^2} \exp\Bigl\{-\frac{(x-y)^2}{2(1 - k(h)^2)}\Bigr\}\,dx\,dy. \qquad (5.4)$$
It follows by calculus that, for $v(h) = (1 - k(h)^2)^{-1/2}$,
$$\varphi(\varepsilon,h) = 2 v(h)\int_0^{\varepsilon}(\varepsilon - x)\,e^{-x^2 v(h)^2/2}\,dx = 2\varepsilon\int_0^{\varepsilon v(h)}\Bigl(1 - \frac{u}{\varepsilon v(h)}\Bigr)e^{-u^2/2}\,du = (2\pi)^{1/2}\,\varepsilon\,g\bigl(\varepsilon v(h)\bigr). \qquad (5.5)$$
The integral mean-value theorem applied to $f_{\varepsilon,h}(z)$, together with (5.3) and (5.4), gives that there exist $x_k, y_k \in I_k(\varepsilon)$, $x_k \le y_k$, such that
$$f_{\varepsilon,h}(z) = (2\pi)^{-1}\exp\Bigl\{-\frac{(x_k-\mu)(y_k-\mu)}{1 + k(h)}\Bigr\} \quad \text{for } z \in I_k(\varepsilon),\ k \in \mathbb{Z}. \qquad (5.6)$$
Recall that $w(h) = ((1 + k(h))/2)^{1/2}$, so that $1 + k(h) = 2w(h)^2$. Let $k_0 \in \mathbb{Z}$ be such that $\mu \in I_{k_0}(\varepsilon)$. If $x_{k_0} < y_{k_0}$, then clearly either $\mu \in (x_{k_0}, y_{k_0})$ or $\mu \in I_{k_0}(\varepsilon) \setminus (x_{k_0}, y_{k_0})$. If $\mu \in I_{k_0}(\varepsilon) \setminus (x_{k_0}, y_{k_0})$, then $(x_{k_0}-\mu)(y_{k_0}-\mu) \ge 0$ and, by (5.6), we get
$$\int_{\mathbb{R}} f_{\varepsilon,h}(z)\,dz = (2\pi)^{-1}\varepsilon\sum_{k \in \mathbb{Z}} e^{-z_k^2/2} = (2\pi)^{-1/2}\,w(h)\,P(\varepsilon,h,\mu), \qquad (5.7)$$
where $w(h)z_k := \bigl((x_k-\mu)(y_k-\mu)\bigr)^{1/2}$, with the sign chosen so that $w(h)z_k \in [k\varepsilon - \mu, (k+1)\varepsilon - \mu)$, $k \in \mathbb{Z}$. Let
$$P(\varepsilon,h,\mu) := (2\pi)^{-1/2}\,\frac{\varepsilon}{w(h)}\sum_{k \in \mathbb{Z}} e^{-z_k^2/2}.$$

By the monotonicity of $e^{-y^2/2}$ on each side of the origin, comparing the sum with the integral $\int_{\mathbb{R}} e^{-y^2/2}\,dy = (2\pi)^{1/2}$ over a grid of step $\varepsilon/w(h)$ (cf. the Euler–Maclaurin summation formula, [10]), we have
$$P(\varepsilon,h,\mu) \le 1 + (2\pi)^{-1/2}\,\frac{\varepsilon}{w(h)} \qquad (5.8)$$
and, similarly,
$$P(\varepsilon,h,\mu) \ge 1 - (2\pi)^{-1/2}\,\frac{\varepsilon}{w(h)}, \qquad (5.9)$$
that is, inequality (5.2) holds in this case. If $\mu \in (x_{k_0}, y_{k_0})$, then $(x_{k_0}-\mu)(y_{k_0}-\mu) < 0$, and by (5.6) we obtain
$$\int_{\mathbb{R}} f_{\varepsilon,h}(z)\,dz = (2\pi)^{-1/2}\,w(h)\,\bigl(P_1(\varepsilon,h,\mu) + P_2(\varepsilon,h,\mu)\bigr), \qquad (5.10)$$
where $P_1$ is defined as $P$ with the $k_0$-th term replaced by $e^{-(x_{k_0}-\mu)^2/(1+k(h))}$, so that, as in (5.8) and (5.9), $|P_1(\varepsilon,h,\mu) - 1| \le (2\pi)^{-1/2}\varepsilon\,w(h)^{-1}$, and $P_2$ is the corresponding correction,
$$|P_2(\varepsilon,h,\mu)| = (2\pi)^{-1/2}\,\frac{\varepsilon}{w(h)}\Bigl|\exp\Bigl\{-\frac{(x_{k_0}-\mu)(y_{k_0}-\mu)}{1+k(h)}\Bigr\} - \exp\Bigl\{-\frac{(x_{k_0}-\mu)^2}{1+k(h)}\Bigr\}\Bigr| \le (2\pi)^{-1/2}\,\frac{\varepsilon}{w(h)}\cdot\frac{|(x_{k_0}-\mu)(y_{k_0}-x_{k_0})|}{1+k(h)}\,\exp\Bigl\{\frac{\varepsilon^2}{2w(h)^2}\Bigr\} \le (2\pi)^{-1/2}\,\frac{\varepsilon^3}{2w(h)^3}\,\exp\Bigl\{\frac{\varepsilon^2}{2w(h)^2}\Bigr\} \qquad (5.11)$$
by the Lagrange mean value theorem, since $|x_{k_0}-\mu| \le \varepsilon$, $|y_{k_0}-x_{k_0}| \le \varepsilon$, and $1 + k(h) = 2w(h)^2$. Hence (5.2) holds also for
$$P(\varepsilon,h,\mu) = P_1(\varepsilon,h,\mu) + P_2(\varepsilon,h,\mu).$$

Now, combining (5.1) with (5.5) and (5.7) (or (5.10)), and taking into account (5.8) and (5.9), we obtain (5.2); with $\delta_{\varepsilon,h}(X) := w(h)\bigl(P(\varepsilon,h,\mu) - 1\bigr)/\varepsilon$, (5.2) yields the bound of the theorem. Note that if $\mu \in \{k\varepsilon,\ k \in \mathbb{Z}\}$, then $q_\varepsilon(X(t)) - \mu = q_\varepsilon(X(t) - \mu)$, so we may take $\mu = 0$; in this case $(x_k-\mu)(y_k-\mu) \ge 0$ for all $k$, and the assertion follows by applying (5.8) and (5.9) only. For $\sigma > 0$, $\sigma \neq 1$, the assertion follows immediately from (5.1) together with (5.2) if we notice that
$$P\bigl(q_\varepsilon(X(0)) = q_\varepsilon(X(h))\bigr) = P\bigl(q_{\tilde\varepsilon}(\tilde X(0)) = q_{\tilde\varepsilon}(\tilde X(h))\bigr) \qquad (5.12)$$
for $\tilde X(t) = X(t)/\sigma$ and $\tilde\varepsilon = \varepsilon/\sigma$. This completes the proof of the theorem.

Proof of Theorem 2.2. First, we give the following elementary calculus result for further reference.

Lemma 5.1. The function $g(x)$ is continuous on $(0, +\infty)$ and
$$g(x) = \begin{cases} \dfrac{x}{\sqrt{2\pi}} + O(x^3), & x \to 0+,\\[2mm] 1 - \sqrt{\dfrac{2}{\pi}}\,\dfrac{1}{x} + O(x^{-3}), & x \to +\infty. \end{cases}$$

Proof of Lemma 5.1. By definition and a Taylor expansion, we obtain
$$g(x) = x\sqrt{\frac{2}{\pi}}\int_0^1 (1-u)\,e^{-x^2u^2/2}\,du = \frac{x}{\sqrt{2\pi}} + O(x^3) \qquad \text{as } x \to 0+.$$
Now integration gives
$$g(x) = 2\Phi(x) - 1 - \frac{2}{x}\bigl(\phi(0) - \phi(x)\bigr) = 1 - \sqrt{\frac{2}{\pi}}\,\frac{1}{x} + 2\Bigl(\frac{\phi(x)}{x} - \bar\Phi(x)\Bigr), \qquad x > 0,$$
and
$$\frac{\phi(x)}{x} - \bar\Phi(x) = O\Bigl(\frac{\phi(x)}{x^3}\Bigr) = O\bigl(x^{-3}\bigr) \qquad \text{as } x \to \infty,$$
where $\bar\Phi(x) := 1 - \Phi(x)$. This completes the proof of the lemma.
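The two expansions of Lemma 5.1 are easy to verify numerically against the closed form of $g$ derived above (our own check, not part of the original proof):

```python
import numpy as np
from scipy import stats

def g(x):
    return 2.0 * stats.norm.cdf(x) - 1.0 - (2.0 / x) * (stats.norm.pdf(0.0) - stats.norm.pdf(x))

for x in (0.1, 0.01):                        # x -> 0+: g(x) ~ x / sqrt(2*pi)
    print(x, g(x), x / np.sqrt(2.0 * np.pi))
for x in (10.0, 100.0):                      # x -> oo: g(x) ~ 1 - sqrt(2/pi)/x
    print(x, g(x), 1.0 - np.sqrt(2.0 / np.pi) / x)
```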

We prove the statement of Theorem 2.2 for $\sigma = 1$; for $\sigma > 0$, $\sigma \neq 1$, it is enough to recall (5.12). Note that $w(h)v(h) = (2(1-k(h)))^{-1/2}$ and $v(h) \sim (2\gamma(h))^{-1/2}$ as $h \to 0$, with $\gamma(h) = 1 - k(h)$. Therefore all three cases follow directly from Theorem 2.1 and Lemma 5.1. Indeed, if $\varepsilon v(h) \to 0$ as $\varepsilon \to 0$, then
$$r_{\varepsilon,h}(X) = 1 - \Bigl(\frac{\varepsilon v(h)}{\sqrt{2\pi}} + O\bigl(\varepsilon^3 v(h)^3\bigr)\Bigr)\bigl(w(h) + \varepsilon\,\delta_{\varepsilon,h}\bigr) = 1 - \frac{\varepsilon\,w(h)v(h)}{\sqrt{2\pi}}\bigl(1 + O(\varepsilon)\bigr) \sim 1 - \varepsilon\bigl(4\pi(1-k(h))\bigr)^{-1/2} = 1 - \frac{\tau(\varepsilon,h)}{2\sqrt{\pi}} \qquad \text{as } \varepsilon \to 0.$$
Note that $v(h) \to \infty$ is equivalent to $k(h) \to 1$, that is, $h \to 0$. Further, if $\varepsilon v(h) \to c/\sqrt{2}$ as $\varepsilon \to 0$, $0 < c < \infty$, then $v(h) \to \infty$, $w(h) \to 1$, and
$$r_{\varepsilon,h}(X) = 1 - \bigl(g(c/\sqrt{2}) + o(1)\bigr)\bigl(w(h) + \varepsilon\,\delta_{\varepsilon,h}\bigr) = 1 - w(h)\,g(c/\sqrt{2}) + o(1) \to 1 - g(c/\sqrt{2}) \qquad \text{as } \varepsilon \to 0.$$
Finally, let $\varepsilon v(h) \to \infty$ as $\varepsilon \to 0$. Then $v(h) \to \infty$ and
$$1 - w(h) = \frac{1 - k(h)}{2(1 + w(h))} = O\bigl(v(h)^{-2}\bigr) = O\bigl(\varepsilon^2(\varepsilon v(h))^{-2}\bigr) = o(\varepsilon^2) \qquad \text{as } \varepsilon \to 0.$$
Thus, by Lemma 5.1 we get
$$r_{\varepsilon,h}(X) = 1 - \Bigl(1 - \sqrt{\frac{2}{\pi}}\,\frac{1}{\varepsilon v(h)} + O\bigl((\varepsilon v(h))^{-3}\bigr)\Bigr)\bigl(w(h) + O(\varepsilon)\bigr) = \sqrt{\frac{2}{\pi}}\,\frac{1}{\varepsilon v(h)}\bigl(1 + o(1)\bigr) = \frac{\sqrt{2}\,(1-k(h))^{1/2}}{\sqrt{\pi}\,\varepsilon}\bigl(1 + o(1)\bigr) \sim \frac{2}{\sqrt{\pi}\,\tau(\varepsilon,h)} \qquad \text{as } \varepsilon \to 0,$$
if, additionally, $\varepsilon^2 v(h) \to 0$ (equivalently, $\varepsilon\,\tau(\varepsilon,h) \to 0$) as $\varepsilon \to 0$. This completes the proof of the theorem.

Proof of Proposition 3.1. Assertion (i) follows immediately from [1, Corollary 8.4.1]. Further, it follows from [2, Theorem 3], applied to the vector Gaussian time series $Y_j := (X(jh), X((j+1)h))$, $j \ge 1$, that $\hat\gamma(h)$ is an asymptotically normal estimator of $\gamma(h)$. In fact, three conditions have to be verified in this case:
(a) $\mathrm{E}(X(h) - X(0))^4 < \infty$;
and the existence of the limits
(b) $\lim_{m\to\infty} m^{-1}\sum_{i,j=0}^{m-1} K((i-j)h)^2$,
(c) $\lim_{m\to\infty} m^{-1}\sum_{i,j=0}^{m-1} K((i-j)h)$.
Condition (b) follows from simple algebra and (B),
$$\frac{1}{m}\sum_{i,j=0}^{m-1} K\bigl((i-j)h\bigr)^2 = \sum_{|k| < m}\Bigl(1 - \frac{|k|}{m}\Bigr)K(kh)^2 \to \sum_{k \in \mathbb{Z}} K(kh)^2 \qquad \text{as } m \to \infty,$$

and similarly we get (c). We may assume that $Y_j$ is zero-mean, since the estimators are shift invariant. By the Cramér–Slutsky theorem, we get the asymptotic normality of $\hat\gamma_K(h)$, since
$$m^{1/2}\bigl(\hat\gamma_K(h) - \gamma(h)\bigr) = m^{1/2}\bigl(\hat\gamma(h) - \gamma(h)\bigr) + \frac{m^{1/2}}{m-1}\,d_m,$$
where $d_m$ is a boundary correction term with $\mathrm{E}|d_m| \le 6\sigma^2$. So assertion (ii) holds. This completes the proof of the proposition.

Proof of Theorem 3.2. By Proposition 3.1 (i), $\hat K(0)$ and $\hat K(h)$ are asymptotically jointly normal estimators. Thus $\hat\alpha_{\varepsilon,h} = \alpha_{\varepsilon,h}(\hat K(0), \hat K(h))$, as a continuously differentiable two-dimensional function of $\hat K(0)$ and $\hat K(h)$, is also an asymptotically normal estimator of $\alpha_{\varepsilon,h}$ (see, e.g., [13, Chapter 6a]). In a similar way, we obtain the asymptotic normality of $\hat r^{\,l}_{\varepsilon,h}$ and $\hat r^{\,u}_{\varepsilon,h}$. This completes the proof.

Proof of Theorem 3.4. The assertion is an immediate consequence of [2, Theorem 3], applied to the stationary Gaussian vector time series $Y_j = (X(jh), X((j+1)h))$, $j \ge 1$, with an argument similar to that in Proposition 3.1. This completes the proof.

Acknowledgments. The authors would like to thank Professor Patrik Eklund, Computer Science Institute, Umeå University, for the provided datasets of paper production process measurements and for helpful discussions.

Bibliography

[1] T. W. Anderson, The Statistical Analysis of Time Series, Wiley, New York, 1971.
[2] M. A. Arcones, Distributional limit theorems over a stationary Gaussian sequence of random vectors, Stochastic Process. Appl. 88 (2000).
[3] C. J. Biermann, Handbook of Pulping and Papermaking, 2nd ed., Academic Press, Oxford, 1996.
[4] K.-Y. Chen, P.-H. Hsu and K.-M. Chao, Hardness of comparing two run-length encoded strings, J. Complexity 26 (2010).
[5] N. Cressie, Statistics for Spatial Data, rev. ed., Wiley, New York, 1993.
[6] B. M. Davis and L. E. Borgman, A note on the asymptotic distribution of the sample variogram, Math. Geology 14 (1982).

[7] W. A. Fuller, Introduction to Statistical Time Series, Wiley, New York, 1976.
[8] R. M. Gray, Entropy and Information Theory, 2nd ed., Springer-Verlag, New York, 2011.
[9] R. M. Gray and D. L. Neuhoff, Quantization, IEEE Trans. Inform. Theory 44 (1998).
[10] K. Lange, Numerical Analysis for Statisticians, Springer-Verlag, New York, 1999.
[11] H. Luschgy and G. Pagès, Functional quantization of Gaussian processes, J. Funct. Anal. 196 (2002).
[12] V. Mäkinen, G. Navarro and E. Ukkonen, Approximate matching of run-length compressed strings, Algorithmica 35 (2003).
[13] C. R. Rao, Linear Statistical Inference and Its Applications, Wiley, New York, 1973.
[14] O. Seleznjev and B. Thalheim, Random databases with approximate record matching, Methodol. Comput. Appl. Probab. 12 (2010).
[15] O. Seleznjev and M. Shykula, Uniform and non-uniform quantization of Gaussian processes, Res. Rep. Math. Stat. 2004, Department of Mathematics and Mathematical Statistics, Umeå University, 2004.
[16] J. Shao and D. Tu, The Jackknife and Bootstrap, Springer-Verlag, New York, 1995.
[17] M. Shykula and O. Seleznjev, Stochastic structure of asymptotic quantization errors, Statist. Probab. Lett. 76 (2006).
[18] M. Stein, Interpolation of Spatial Data: Some Theory for Kriging, Springer-Verlag, New York, 1999.
[19] W. N. Venables and B. D. Ripley, Modern Applied Statistics with S-Plus, Springer-Verlag, New York, 1994.
[20] B. Widrow, I. Kollár and M.-C. Liu, Statistical theory of quantization, IEEE Trans. Instrum. Measurement 45 (1996).

Received June 2011; accepted August 2011.

Author information

Oleg Seleznjev, Department of Mathematics and Mathematical Statistics, Umeå University, SE-901 87 Umeå, Sweden.
Mykola Shykula, Department of Statistics, Umeå University, SE-901 87 Umeå, Sweden.

ON A WEIGHTED INTERPOLATION OF FUNCTIONS WITH CIRCULAR MAJORANT ON A WEIGHTED INTERPOLATION OF FUNCTIONS WITH CIRCULAR MAJORANT Received: 31 July, 2008 Accepted: 06 February, 2009 Communicated by: SIMON J SMITH Department of Mathematics and Statistics La Trobe University,

More information

Lecture 2: From Linear Regression to Kalman Filter and Beyond

Lecture 2: From Linear Regression to Kalman Filter and Beyond Lecture 2: From Linear Regression to Kalman Filter and Beyond January 18, 2017 Contents 1 Batch and Recursive Estimation 2 Towards Bayesian Filtering 3 Kalman Filter and Bayesian Filtering and Smoothing

More information

Entropy Rate of Stochastic Processes

Entropy Rate of Stochastic Processes Entropy Rate of Stochastic Processes Timo Mulder tmamulder@gmail.com Jorn Peters jornpeters@gmail.com February 8, 205 The entropy rate of independent and identically distributed events can on average be

More information

ECE 4400:693 - Information Theory

ECE 4400:693 - Information Theory ECE 4400:693 - Information Theory Dr. Nghi Tran Lecture 8: Differential Entropy Dr. Nghi Tran (ECE-University of Akron) ECE 4400:693 Lecture 1 / 43 Outline 1 Review: Entropy of discrete RVs 2 Differential

More information

Topics in Stochastic Geometry. Lecture 4 The Boolean model

Topics in Stochastic Geometry. Lecture 4 The Boolean model Institut für Stochastik Karlsruher Institut für Technologie Topics in Stochastic Geometry Lecture 4 The Boolean model Lectures presented at the Department of Mathematical Sciences University of Bath May

More information

where r n = dn+1 x(t)

where r n = dn+1 x(t) Random Variables Overview Probability Random variables Transforms of pdfs Moments and cumulants Useful distributions Random vectors Linear transformations of random vectors The multivariate normal distribution

More information

Sharp inequalities and asymptotic expansion associated with the Wallis sequence

Sharp inequalities and asymptotic expansion associated with the Wallis sequence Deng et al. Journal of Inequalities and Applications 0 0:86 DOI 0.86/s3660-0-0699-z R E S E A R C H Open Access Sharp inequalities and asymptotic expansion associated with the Wallis sequence Ji-En Deng,

More information

Statistical inference on Lévy processes

Statistical inference on Lévy processes Alberto Coca Cabrero University of Cambridge - CCA Supervisors: Dr. Richard Nickl and Professor L.C.G.Rogers Funded by Fundación Mutua Madrileña and EPSRC MASDOC/CCA student workshop 2013 26th March Outline

More information

Computer Vision Group Prof. Daniel Cremers. 2. Regression (cont.)

Computer Vision Group Prof. Daniel Cremers. 2. Regression (cont.) Prof. Daniel Cremers 2. Regression (cont.) Regression with MLE (Rep.) Assume that y is affected by Gaussian noise : t = f(x, w)+ where Thus, we have p(t x, w, )=N (t; f(x, w), 2 ) 2 Maximum A-Posteriori

More information

ON VARIANCE COVARIANCE COMPONENTS ESTIMATION IN LINEAR MODELS WITH AR(1) DISTURBANCES. 1. Introduction

ON VARIANCE COVARIANCE COMPONENTS ESTIMATION IN LINEAR MODELS WITH AR(1) DISTURBANCES. 1. Introduction Acta Math. Univ. Comenianae Vol. LXV, 1(1996), pp. 129 139 129 ON VARIANCE COVARIANCE COMPONENTS ESTIMATION IN LINEAR MODELS WITH AR(1) DISTURBANCES V. WITKOVSKÝ Abstract. Estimation of the autoregressive

More information

UNIFORM BOUNDS FOR BESSEL FUNCTIONS

UNIFORM BOUNDS FOR BESSEL FUNCTIONS Journal of Applied Analysis Vol. 1, No. 1 (006), pp. 83 91 UNIFORM BOUNDS FOR BESSEL FUNCTIONS I. KRASIKOV Received October 8, 001 and, in revised form, July 6, 004 Abstract. For ν > 1/ and x real we shall

More information

ELEG 3143 Probability & Stochastic Process Ch. 6 Stochastic Process

ELEG 3143 Probability & Stochastic Process Ch. 6 Stochastic Process Department of Electrical Engineering University of Arkansas ELEG 3143 Probability & Stochastic Process Ch. 6 Stochastic Process Dr. Jingxian Wu wuj@uark.edu OUTLINE 2 Definition of stochastic process (random

More information

PRECISE ASYMPTOTIC IN THE LAW OF THE ITERATED LOGARITHM AND COMPLETE CONVERGENCE FOR U-STATISTICS

PRECISE ASYMPTOTIC IN THE LAW OF THE ITERATED LOGARITHM AND COMPLETE CONVERGENCE FOR U-STATISTICS PRECISE ASYMPTOTIC IN THE LAW OF THE ITERATED LOGARITHM AND COMPLETE CONVERGENCE FOR U-STATISTICS REZA HASHEMI and MOLUD ABDOLAHI Department of Statistics, Faculty of Science, Razi University, 67149, Kermanshah,

More information