Uniform and non-uniform quantization of Gaussian processes

Oleg Seleznjev a,b, Mykola Shykula a

a Department of Mathematics and Mathematical Statistics, Umeå University, SE-901 87 Umeå, Sweden
b Faculty of Mathematics and Mechanics, Moscow State University, Moscow, Russia

Abstract

Quantization of a continuous-value signal into a discrete form (or discretization of amplitude) is a standard task in all analog/digital devices. We consider quantization of a signal (or random process) in a probabilistic framework. The quantization methods presented in this paper can be applied to signal coding and storage capacity problems. In order to demonstrate the general approach, both uniform and non-uniform quantization of a Gaussian process are studied in more detail and compared with a conventional piecewise constant approximation. We investigate asymptotic properties of some accuracy characteristics, such as the random quantization rate, in terms of the correlation structure of the original random process as the quantization cellwidth tends to zero. Some examples and numerical experiments are presented.

MSC: primary 60G15; secondary 94A29, 94A34

Keywords: Quantization; Rate; Gaussian process; Level crossings

1 Introduction

Analog signals correspond to data represented in continuously variable physical quantities, in contrast to the digital representation of data in discrete units. In many applications, both discretization in time (or sampling) and in amplitude (or quantization) are exploited. Due to many factors (e.g., measurement errors, essential phenomenon randomness), signals are frequently random, and therefore random process models are used. Unlike standard techniques, the quantization methods presented in this paper use the correlation structure of the model random process not only to solve process coding and archiving problems but also to predict the storage capacity needed for quantized process realizations.
Ignoring the quantization problem, we may use various approximation methods to restore an initial random process with a given accuracy (algebraic or trigonometric polynomials, Seleznjev, 1991; splines, Wahba, 1990; Seleznjev, 2000; Hüsler et al., 2003; wavelets, Seleznjev and Staroverov, 2003). Quantization is generally less well understood than linear approximation, one reason being that it is a nonlinear operation. Nevertheless, quantization is a standard procedure in all analog/digital devices. Various quantization problems are considered in a number of papers, mostly in the applied literature (see, e.g., Cambanis

Corresponding author. E-mail address: oleg.seleznjev@matstat.umu.se (O. Seleznjev)

and Gerr, 1983; Widrow et al., 1995; Gray and Neuhoff, 1998, and references therein). Some statistical properties of quantizers are also studied in Elias (1970).

Let [x] and {x} be the integer and fractional parts of x, respectively, x = [x] + {x}. The uniform quantizer q_ε(x) is defined as q_ε(x) := ε[x/ε] for a fixed cellwidth ε > 0. In optimization problems for the number of quantization levels, non-uniform quantization is used (see, e.g., Gray and Neuhoff, 1998). Following Bennett's notation (see Bennett, 1948), the non-uniform n-level companding quantizer (or compander) q_{n,G}(x) is defined as q_{n,G}(x) := G^{-1}(q_{n,U}[G(x)]), where G : R → (0, 1) is onto and increasing, and q_{n,U} is the n-level uniform quantizer on (0, 1),

q_{n,U}(y) = k/n − 1/(2n)  if y ∈ ((k−1)/n, k/n], k = 1, ..., n−1,
q_{n,U}(y) = 1 − 1/(2n)    if y ∈ ((n−1)/n, 1).

Note that the inverse function S := G^{-1} transforms a uniform quantization on (0, 1) into a non-uniform one. Therefore, the quantization intervals of the companding quantizer q_{n,G} are I_{1,n} = (−∞, S(1/n)], I_{k,n} = (S((k−1)/n), S(k/n)], k = 2, ..., n−1, and I_{n,n} = (S((n−1)/n), ∞), and the corresponding grid of quantization levels is {u_k = u_k(n), k = 1, ..., n} = {S(k/n − 1/(2n)), k = 1, ..., n}.

For a random variable X and a quantizer q(X) (uniform or non-uniform), properties of the quantization error e(X) := X − q(X) are considered in a number of applied papers (see, e.g., Gray and Neuhoff, 1998, and references therein). Shykula and Seleznjev (2006) derive the stochastic structure of the asymptotic quantization errors for a random variable with values in a finite interval. For a random process X = X(t), t ∈ [0, T], the quality of a quantizer q(X) can be measured, for instance, by the maximum mean square error (MMSE)

ε(X) = ε(X, q(X)) := max_{t ∈ [0, T]} ||X(t) − q(X(t))||,

where ||Y|| := (E(Y²))^{1/2} for a random variable Y. There are quantizers with fixed or variable codeword length (or rate).
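As a concrete illustration of the two definitions above, the following sketch (ours, not from the paper) implements q_ε and a compander q_{n,G}; the compressor G is taken to be the standard normal distribution function, as in Example 3 below, and its inverse is computed by bisection purely for simplicity.

```python
from math import floor, ceil, erf, sqrt

def q_uniform(x, eps):
    """Uniform quantizer q_eps(x) = eps*[x/eps] with cellwidth eps > 0."""
    return eps * floor(x / eps)

def q_uniform_unit(y, n):
    """n-level uniform quantizer on (0, 1): y in ((k-1)/n, k/n] -> k/n - 1/(2n)."""
    k = min(max(ceil(y * n), 1), n)   # index of the cell containing y
    return k / n - 1.0 / (2 * n)

def G(x):
    """Compressor: standard normal distribution function (the choice of Example 3)."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def G_inv(p, lo=-10.0, hi=10.0):
    """S = G^{-1}, computed by bisection (illustration only)."""
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if G(mid) < p else (lo, mid)
    return 0.5 * (lo + hi)

def q_compander(x, n):
    """n-level companding quantizer q_{n,G}(x) = G^{-1}(q_{n,U}(G(x)))."""
    return G_inv(q_uniform_unit(G(x), n))
```

Note how all inputs beyond S((n−1)/n) are mapped to the same top level u_n, exactly as the interval I_{n,n} above prescribes.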
Fixed-rate quantizers use codes (or codewords) of equal length, say r. In variable-rate quantization the instant rate r(X), the length of the corresponding codeword, is a random variable, and the average rate R(X) := E(r(X)) is of interest. The MMSE characterizes the restoration quality (i.e., accuracy) of a quantizer, whereas the rate concerns the storage capacity needed for quantized process realizations. Some problems for fixed-rate quantization of Gaussian processes and optimal properties of the corresponding Karhunen-Loève expansion are considered in Luschgy and Pagès (2002). This approach is related to reproducing kernel Hilbert space techniques and to results on the ε-entropy of sets in a metric space (see also Lifshits, 1995; Lifshits and Linde, 2002). Here we consider uniform and non-uniform quantization with (random) variable rate for the linear space C^m[0, T] of random processes with continuous quadratic mean (q.m.) derivatives

up to the order m. The space C^m[0, T] of non-random functions with continuous derivatives up to the order m can be considered as a linear subspace of C^m[0, T] by the usual embedding.

Let X(t), t ∈ [0, T], be a random process (or signal) with covariance function K(t, s), t, s ∈ [0, T], and continuous sample paths. Following the standard notation for crossings of a level by a continuous function f (see, e.g., Cramér and Leadbetter, 1967), f is said to have a crossing of the level u at t if in each neighborhood of t there are points t_1 and t_2 such that (f(t_1) − u)(f(t_2) − u) < 0. Let N_u(f) := N_u(f, T) denote the number of crossings of the level u by f in [0, T]. For a realization of the random process X(t), t ∈ [0, T], denote by r_ε(X) and r_{n,G}(X) the total number of quantization points in [0, T] (or random quantization rate) for the uniform q_ε(X) and non-uniform q_{n,G}(X) quantizers, respectively,

r_ε(X) := Σ_{k ∈ Z} N_{u_k}(X), with levels u_k = εk, and r_{n,G}(X) := Σ_{k=1}^n N_{u_k}(X).

Note that the uniform r_ε(X) and non-uniform r_{n,G}(X) quantization rates are random variables whenever X(t), t ∈ [0, T], is a Gaussian process with continuous sample paths (see Cramér and Leadbetter, 1967). For the random process X(t), t ∈ [0, T], define also the average (uniform and non-uniform) quantization rates R_ε(X) := E(r_ε(X)) and R_{n,G}(X) := E(r_{n,G}(X)), respectively. We investigate the asymptotic properties of r_ε(X) and R_ε(X) as ε → 0 and of R_{n,G}(X) as n → ∞. For a set of random variables Y_ε, ε > 0, let →^{a.s.} denote convergence almost surely and →^s convergence in s-mean, s ≥ 1, as ε → 0. Denote by λ_2 = λ_2(X) the second spectral moment of a stationary process X(t), t ∈ [0, T].

In this paper we study (random) variable-rate uniform and non-uniform quantization of a random process with continuous sample paths. The main distinction from the results mentioned above is the use of the correlation properties of the random process for evaluation of the rate. Moreover, the developed technique can be applied to a wide class of random functions.
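For a realization observed on a fine grid, both rates can be estimated by counting level crossings of the piecewise-linear interpolant. A minimal sketch (our illustration, assuming NumPy):

```python
import numpy as np

def uniform_rate(path, eps):
    """r_eps for a sampled path: between consecutive samples, the linear
    interpolant crosses the grid {eps*k : k in Z} exactly
    |[x_{i+1}/eps] - [x_i/eps]| times, so the crossings can be summed."""
    k = np.floor(np.asarray(path, dtype=float) / eps).astype(np.int64)
    return int(np.abs(np.diff(k)).sum())

def nonuniform_rate(path, levels):
    """r_{n,G}: total number of crossings of the given quantization levels."""
    x = np.asarray(path, dtype=float)
    total = 0
    for u in levels:
        s = np.sign(x - u)
        total += int(np.sum(s[:-1] * s[1:] < 0))  # strict sign changes only
    return total
```

Tangencies exactly at a sample value are ignored here; for the Gaussian paths considered in the paper these are probability-zero events.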
In order to demonstrate the general approach, the asymptotic properties of the uniform and non-uniform quantizers for a Gaussian process are studied in more detail.

The paper is organized as follows. In Section 2 we study the asymptotic properties of the quantization rate r_ε(X) for a Gaussian process X(t), t ∈ [0, T], as ε → 0. The asymptotic behavior of the uniform and non-uniform average quantization rates is investigated. For some classes of Gaussian processes and a given accuracy, we compare approximation by a quantized process with approximation by a piecewise constant process. In Section 3, proofs of the statements of the previous sections are given.

2 Results

For deterministic signals, evaluating the necessary storage capacity for a quantized sequence is in general a complicated task. We develop an average case analysis of this problem assuming a probabilistic model. Henceforth, let X(t), t ∈ [0, T], be a zero mean Gaussian process with covariance function K(t, s), t, s ∈ [0, T], and continuously differentiable sample paths. For a zero mean random process Y(t), t ∈ [0, T], with covariance function B(t, s), t, s ∈ [0, T], we introduce non-singularity assumptions as follows. Let the derivative process Y^{(1)}(t), t ∈ [0, T], exist and have covariance function B_{11}(t, s) = ∂²B(t, s)/(∂t ∂s), t, s ∈ [0, T]. Let B_1(t, s)

be the covariance between Y(t) and Y^{(1)}(s), B_1(t, s) = ∂B(t, s)/∂s, t, s ∈ [0, T]. The following conditions on Y(t), t ∈ [0, T], are related to curve crossing problems for non-stationary Gaussian processes (see, e.g., Cramér and Leadbetter, 1967).

(A1) The joint normal distribution of Y(t) and Y^{(1)}(t) is non-singular for each t, i.e., B(t, t) > 0, B_{11}(t, t) > 0, |B_1(t, t)|/(B(t, t)B_{11}(t, t))^{1/2} < 1, t ∈ [0, T].

(A2) B_{11}(t, s), t, s ∈ [0, T], is continuous at all diagonal points (t, t).

In the following theorem we investigate the asymptotic behavior of the rate r_ε(X) as ε → 0 in various convergence modes, specifically, a.s.-convergence and convergence of the first and second moments. Denote by v(X) := ∫_0^T |X^{(1)}(t)| dt and V(X) := E(v(X)) the variation and the average variation of X(t), t ∈ [0, T], X ∈ C^1[0, T], respectively. Let M(X) be the number of crossings of the zero level by a sample path of the process X^{(1)}(t), t ∈ [0, T], X ∈ C^2[0, T] (i.e., the number of local extremes).

Theorem 1 Let X(t), t ∈ [0, T], X ∈ C^2[0, T], be a zero mean Gaussian process. If (A1) holds for X^{(1)}(t), t ∈ [0, T], then

(i) |εr_ε(X) − v(X)| ≤ ε(M(X) + 1) (a.s.) and εr_ε(X) →^{a.s.} v(X) as ε → 0;

(ii) |E(εr_ε(X)) − V(X)| ≤ ε(E(M(X)) + 1) and εr_ε(X) →^1 v(X) as ε → 0;

(iii) if additionally X ∈ C^3[0, T], then E|εr_ε(X) − v(X)|² ≤ ε² E(M(X) + 1)² and εr_ε(X) →^2 v(X) as ε → 0.

Remark 1 (i) Note that Gaussianity of X(t), t ∈ [0, T], is a technical assumption and, for example, Theorem 1(i) is valid for a wide class of sufficiently smooth processes with a finite (a.s.) number of local extremes. For the Gaussian case we evaluate the rate of convergence (cf. the Banach Theorem for N_u(f), the number of crossings of a level u ∈ R by a function f). In particular, in Theorem 1(ii) we have, by Cramér and Leadbetter, 1967, Chapter 13.2,

E(M(X)) = ∫_0^T ∫_R |z| g_t(0, z) dz dt,

where g_t(u, z) is the joint probability density function of the Gaussian variables X^{(1)}(t) and X^{(2)}(t), t ∈ [0, T].
For a stationary Gaussian process X(t), t ∈ [0, T], with covariance function K(t, s) = k(t − s), we get

E(M(X)) = (T/π) (k^{(4)}(0)/(−k^{(2)}(0)))^{1/2}.

(ii) If X ∈ C^2[0, T], then there exist the continuous mixed partial derivatives K_{11}(t, s) = ∂²K(t, s)/(∂t ∂s) and K_{22}(t, s) = ∂⁴K(t, s)/(∂t² ∂s²). Hence, for some constants C > 0, a > 3, and sufficiently small h, we get

Δ_h Δ_h K_{11}(t, t) = Var(X^{(1)}(t + h) − X^{(1)}(t)) ≤ C/|log |h||^a,

where by definition Δ_h Δ_h K_{11}(t, t) = K_{11}(t + h, t + h) − 2K_{11}(t, t + h) + K_{11}(t, t). Therefore the process X(t), t ∈ [0, T], can be considered as a random process with continuously differentiable sample paths (see, e.g., Cramér and Leadbetter, 1967, Chapter 9.4), and v(X) is well defined.

(iii) Theorem 1(ii) and 1(iii) can be generalized to convergence in mean of higher order for sufficiently smooth random processes.

As a corollary of Theorem 1(ii) and 1(iii) we have the following proposition, where the average quantization rate R_ε(X) = E(r_ε(X)) and the variance of r_ε(X) are studied asymptotically as ε → 0. Let σ_1(t) and ρ_{11}(t, s) be the standard deviation and the correlation function of X^{(1)}(t), t, s ∈ [0, T], respectively. Denote by

γ(t, s) := 2π^{-1} σ_1(t)σ_1(s)((1 − ρ_{11}²(t, s))^{1/2} + ρ_{11}(t, s)(π/2 − arccos ρ_{11}(t, s))).

Proposition 1 Let X(t), t ∈ [0, T], X ∈ C^2[0, T], be a zero mean Gaussian process with covariance function K(t, s). If (A1) holds for X^{(1)}(t), t ∈ [0, T], then

(i) εR_ε(X) → V(X) = (2/π)^{1/2} ∫_0^T σ_1(t) dt as ε → 0.

(ii) If X ∈ C^3[0, T], then

lim_{ε→0} ε² Var(r_ε(X)) = Var(v(X)) = ∫_{[0,T]²} γ(t, s) dt ds − 2π^{-1} (∫_0^T σ_1(t) dt)².

The following corollary is an immediate consequence of Proposition 1(i) for the stationary case.

Corollary 1 Suppose that X(t), t ∈ [0, T], X ∈ C^2[0, T], is a zero mean stationary Gaussian process, λ_2 = λ_2(X). If (A1) holds for X^{(1)}(t), t ∈ [0, T], then R_ε(X) ∼ ε^{-1} T (2λ_2/π)^{1/2} as ε → 0.

In the following theorem, the average quantization rate R_{n,G}(X) for the n-level companding quantizer q_{n,G}(X) is studied asymptotically as n → ∞. Let p_t(z) and p_t(u, z) denote the density function of X^{(1)}(t) and the joint density function of X(t) and X^{(1)}(t), t ∈ [0, T], respectively. Let σ(t) and K_1(t, s) denote the standard deviation of X(t) and the covariance between X(t) and X^{(1)}(s), K_1(t, s) = ∂K(t, s)/∂s, t, s ∈ [0, T]. We introduce the following assumption on the joint distribution of X(t) and X^{(1)}(t), t ∈ [0, T].
(B) There exists b(t, z) : [0, T] × R → R such that p_t(u|z) = p_t(u, z)/p_t(z) ≤ b(t, z), u, z ∈ R, t ∈ [0, T], and ∫_0^T ∫_R b(t, z)|z| p_t(z) dz dt < ∞.

Theorem 2 Let X(t), t ∈ [0, T], X ∈ C^1[0, T], be a zero mean Gaussian process with covariance function K(t, s). If (A1), (A2), and (B) hold for X(t), t ∈ [0, T], then

n^{-1} R_{n,G}(X) → ∫_0^T ∫_{R²} |z| p_t(u, z) dz dG(u) dt as n → ∞.
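The stationary-case formula of Remark 1, E(M(X)) = (T/π)(k^{(4)}(0)/(−k^{(2)}(0)))^{1/2}, is easy to check numerically. The following sketch (ours, not from the paper) approximates the derivatives of k at zero by central finite differences and applies the result to the covariance of Example 1 below, k(t) = cos t + cos 2t:

```python
import math

def mean_extremes_stationary(k, T, h=0.02):
    """E(M(X)) = (T/pi) * sqrt(k''''(0) / (-k''(0))) for a stationary
    covariance k; the derivatives at 0 are taken by central finite
    differences -- a numerical check, not the paper's derivation."""
    k2 = (k(h) - 2.0 * k(0.0) + k(-h)) / h**2                      # ~ k''(0)
    k4 = (k(2*h) - 4*k(h) + 6*k(0.0) - 4*k(-h) + k(-2*h)) / h**4   # ~ k''''(0)
    return (T / math.pi) * math.sqrt(k4 / (-k2))

# Covariance of Example 1: k(t) = cos t + cos 2t, so k''(0) = -5 and
# k''''(0) = 17, giving E(M(X_1)) = 2*sqrt(17/5) on [0, 2*pi].
em = mean_extremes_stationary(lambda t: math.cos(t) + math.cos(2 * t), T=2 * math.pi)
```

The step h = 0.02 balances truncation against rounding error for covariances of this smoothness; it is a choice for illustration only.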

Remark 2 (i) We claim that Theorems 1 and 2 can be generalized to Gaussian processes with sufficiently smooth mean functions (cf. Cramér and Leadbetter, 1967, Chapter 13.2).

(ii) Note that (B) holds if (A1) is fulfilled for X(t), t ∈ [0, T], and σ(t), σ_1(t), and K_1(t, s), t, s ∈ [0, T], are continuous functions.

The following corollary is an immediate consequence of Theorem 2 for the stationary case.

Corollary 2 Let X(t), t ∈ [0, T], X ∈ C^1[0, T], be a zero mean stationary Gaussian process with one-dimensional density f(x), x ∈ R, λ_2 = λ_2(X). If (A1) and (A2) hold for X(t), t ∈ [0, T], then

n^{-1} R_{n,G}(X) → T (2λ_2/π)^{1/2} ∫_R f(u) dG(u) as n → ∞.

The quantized process L_ε(t) = q_ε(X(t)), t ∈ [0, T], can be considered as a piecewise constant approximation of X(t), t ∈ [0, T], with a random number of knots corresponding to crossings of quantization levels. For Gaussian processes, we compare this method with a conventional piecewise constant approximation, P_m(t), t ∈ [0, T], when discretization in time is used. The sequence of sampling points (or designs) T_m(h) = {0 = t_0 < ... < t_m = T, t_i ∈ [0, T]} is generated by a positive continuous density function h(t),

i/m = ∫_0^{t_i} h(t) dt, i = 0, ..., m, and P_m(t) = P_m(X, T_m)(t) = X(t_i), t ∈ [t_i, t_{i+1}), i = 0, ..., m−1.

Optimal sequences of designs for spline approximation are studied in Seleznjev (2000). A corresponding result for piecewise constant approximation and the uniform mean norm max_{t ∈ [0,T]} ||X(t)|| is obtained in the following proposition. For a random process Y(t), t ∈ [0, T], Y ∈ C^1[0, T], we introduce the following (local stationarity) condition:

lim_{s→0} ||Y(t + s) − Y(t)||/|s| = ||Y^{(1)}(t)|| > 0 uniformly in t ∈ [0, T]. (1)

Let ε_m(X) = MMSE(X, P_m) := max_{t ∈ [0,T]} ||X(t) − P_m(t)|| denote the maximum mean square error for a random process X ∈ C^1[0, T]. Let h*(t) = c(t)/∫_0^T c(s) ds, t ∈ [0, T], where c(t) := ||X^{(1)}(t)||.

Proposition 2 Let a random process X(t), t ∈ [0, T], X ∈ C^1[0, T], satisfy (1). Then

(i) lim_{m→∞} m ε_m(X) = max_{t ∈ [0,T]} (c(t)/h(t)).
(ii) The sequence of designs T_m = T_m(h) is optimal iff T_m = T_m*, where T_m* = T_m(h*). For the optimal T_m*,

lim_{m→∞} m ε_m(X) = ∫_0^T c(t) dt. (2)

Henceforth, we suppose that the corresponding assumptions for the results used below are fulfilled. For a Gaussian process X ∈ C^1[0, T], it follows from the definition and Proposition 1(i) that MMSE(X, L_ε) ≤ ε for the approximation by the quantized process L_ε(t), with the average number of points (average uniform quantization rate)

m_q = m_q(ε) ∼ ε^{-1} (2/π)^{1/2} ∫_0^T c(t) dt as ε → 0. (3)

For a fixed accuracy ε_m(X) = δ, (2) implies that m_p sampling points are needed,

m_p = m_p(δ) ∼ δ^{-1} ∫_0^T c(t) dt as δ → 0. (4)
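The designs T_m(h) can be generated numerically by inverting the cumulative integral of h; a sketch of one way to do this (ours, assuming NumPy), which applies in particular to the optimal density h* once c(t) = ||X^{(1)}(t)|| is known:

```python
import numpy as np

def design_points(c, a, b, m, grid=10**5):
    """Sampling designs of Proposition 2: points t_0 < ... < t_m in [a, b]
    with i/m = integral_a^{t_i} h(t) dt for h(t) = c(t)/integral of c.
    The cumulative integral is inverted on a fine grid (illustrative only;
    c must be positive and continuous)."""
    t = np.linspace(a, b, grid)
    w = c(t)
    H = np.concatenate(([0.0], np.cumsum((w[1:] + w[:-1]) / 2 * np.diff(t))))
    H /= H[-1]                     # cumulative of h, normalized to [0, 1]
    return np.interp(np.linspace(0.0, 1.0, m + 1), H, t)

# With c constant the design is equidistant; with c(t) = sqrt(t^3/3), as for
# the integrated Wiener process of Example 2, the knots crowd toward t = 3.
```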

Note that by definition δ = MMSE(X, L_ε) ≤ ε. Thus, from (3) and (4) we have

lim_{δ→0} m_q/m_p = lim_{δ→0} (δ/ε) (2/π)^{1/2} ≤ (2/π)^{1/2} ≈ 0.798.

For a stationary Gaussian process X ∈ C^1[0, T], we have δ = MMSE(X, L_ε) = ||X(0) − L_ε(0)|| ∼ ε/√3 as ε → 0 (see, e.g., Cambanis and Gerr, 1983), and therefore

lim_{δ→0} m_q/m_p = (2/(3π))^{1/2} ≈ 0.461. (5)

A similar result can be obtained for non-uniform quantization. Let X(t), t ∈ [0, T], be a stationary process with one-dimensional density f(x), x ∈ R, and L_{n,G}(t) = q_{n,G}(X(t)), t ∈ [0, T], the quantized process with the sequence of quantization levels generated by G(u) = ∫_{−∞}^u g(v) dv. Then the distortion ε_{n,G}(X) = MMSE(X, L_{n,G}) = ||X(0) − L_{n,G}(0)||. This distortion is minimal iff G(u) = G*(u) with the corresponding optimal density g*(u) = f(u)^{1/3}/∫_R f(z)^{1/3} dz (see, e.g., Bennett, 1948), and

ε_{n,G*}(X) ∼ n^{-1} (∫_R f(u)^{1/3} du)^{3/2} (12)^{-1/2} = γ n^{-1} as n → ∞.

Moreover, if X(t), t ∈ [0, T], is Gaussian with unit variance, then γ = 3^{1/4} (π/2)^{1/2} and, for the optimal density g*(u) and a fixed accuracy ε_{n,G*}(X) = δ, n quantization levels are needed, n = n(δ) ∼ γ δ^{-1} as δ → 0. Now, Corollary 2 gives for the approximation by L_{n,G*}(t), t ∈ [0, T], that we need (on average) m_q(g*) quantization points (average non-uniform quantization rate),

m_q(g*) = m_q(g*, δ) ∼ (δ^{-1} γ)(2π)^{-1} T ||X^{(1)}(0)|| as δ → 0, and lim_{δ→0} m_q(g*)/m_p = γ/(2π) ≈ 0.263

(cf. the uniform quantization (5)). Hence, for certain classes of Gaussian processes and a given accuracy, approximation by a (uniformly or non-uniformly) quantized process demands less storage capacity for archiving digital information.

In the numerical examples below we illustrate the convergence results of Corollaries 1 and 2. Denote by Q_ε(X) := εR_ε(X) and Q_{n,G}(X) := n^{-1} R_{n,G}(X) the normalized average uniform and non-uniform quantization rates, respectively. In the examples below we approximate the processes by piecewise linear interpolation with equidistant sampling points and estimate Q_ε(X), ε > 0 (Examples 1 and 2),

and Q_{n,G}(X), n ≥ 1 (Example 3), by the standard Monte Carlo method. Let m denote the number of interpolation points and N the number of simulations.

Example 1. Let X_1(t) = Y_1 cos(t) + Y_2 sin(t) + Y_3 cos(2t) + Y_4 sin(2t), t ∈ [0, 2π], T = 2π, be a stationary Gaussian process with zero mean and covariance function K_1(t, s) = cos(t − s) + cos(2(t − s)), where Y_j, j = 1, ..., 4, are independent standard normal variables, λ_2(X_1) = 5. It follows from Corollary 1 that

Q_ε(X_1) → 2π (10/π)^{1/2} as ε → 0. (6)

Table 1 and Figure 1 demonstrate the convergence in (6) as ε → 0.

Table 1: The estimated Q_ε(X_1) as ε → 0, T = 2π.

Figure 1: The estimated normalized average quantization rate Q_ε(X_1) (solid) and the average variation V(X_1) (dashed).

Example 2. Let W(t), t ∈ [0, T], be the standard Wiener process. The integrated standard Wiener process X_2(t) with covariance function K_2(t, s) is defined by

X_2(t) := ∫_0^t (t − s) W(s) ds, t ∈ [0, T],

where X_2(0) = X_2^{(1)}(0) = 0, X_2^{(2)}(t) = W(t), and K_2(t, s) = s³(s² − 5st + 10t²)/120, t ≥ s, t, s ∈ [0, T]. The variance of X_2^{(1)}(t) is Var(X_2^{(1)}(t)) = t³/3.

Consider X_2(t), t ∈ [1, 3]. Applying Proposition 1(i), we have

lim_{ε→0} Q_ε(X_2) = (2/π)^{1/2} ∫_1^3 (t³/3)^{1/2} dt = √8 (9√3 − 1)/√(75π). (7)

Table 2 and Figure 2 demonstrate the convergence in (7) as ε → 0.

Table 2: The estimated Q_ε(X_2) as ε → 0, t ∈ [1, 3].

Figure 2: The estimated normalized average quantization rate Q_ε(X_2) (solid) and the average variation V(X_2) (dashed).

Example 3. Let X_3(t), t ∈ [0, 5], T = 5, be a stationary Gaussian process with zero mean and covariance function K_3(t, s) = e^{−(t−s)²}, λ_2(X_3) = 2. Let G(x), x ∈ R, be the standard normal distribution function. It follows from Corollary 2 that

Q_{n,G}(X_3) → 5 (4/π)^{1/2} ∫_R (2π)^{-1} e^{−u²} du = 5/π as n → ∞. (8)

Table 3 and Figure 3 demonstrate the convergence in (8) as n → ∞.

Table 3: The estimated Q_{n,G}(X_3) as n → ∞, T = 5.
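Estimates of this kind are easy to reproduce. The following Monte Carlo sketch (ours; the simulation sizes m = 2000, N = 200 and the cellwidth ε = 0.05 are our choices, not the paper's) estimates Q_ε(X_1) of Example 1 and compares it with the limit 2π(10/π)^{1/2} ≈ 11.21 from (6):

```python
import numpy as np

rng = np.random.default_rng(1)
T, m, N, eps = 2 * np.pi, 2000, 200, 0.05
t = np.linspace(0.0, T, m + 1)

rates = []
for _ in range(N):
    # X_1(t) = Y_1 cos t + Y_2 sin t + Y_3 cos 2t + Y_4 sin 2t, Y_j iid N(0,1)
    y = rng.standard_normal(4)
    x = (y[0] * np.cos(t) + y[1] * np.sin(t)
         + y[2] * np.cos(2 * t) + y[3] * np.sin(2 * t))
    k = np.floor(x / eps)                    # cell index [X/eps]
    rates.append(np.abs(np.diff(k)).sum())   # r_eps of the interpolated path

q_hat = eps * np.mean(rates)                 # estimate of Q_eps(X_1)
limit = T * np.sqrt(2 * 5 / np.pi)           # Corollary 1 with lambda_2 = 5
```

By Theorem 1, the estimate sits slightly below the limit for fixed ε, since εr_ε(X) ≤ v(X) + ε(M(X) + 1) and level crossings can be missed between grid points.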

Figure 3: The estimated normalized average quantization rate Q_{n,G}(X_3) (solid) and the theoretical limit constant (dashed).

3 Proofs

Without loss of generality, let henceforth T = 1.

Proof of Theorem 1. We consider the uniform quantization of the process X ∈ C^2[0, 1] with the infinite levels grid u(ε), the quantizer q_ε(X(t)) = ε[X(t)/ε], the quantization rate r_ε(X) = r_ε(X, [0, 1]), and the quantization error e_ε(X(t)) = X(t) − q_ε(X(t)) = ε{X(t)/ε}, 0 ≤ e_ε(X(t)) < ε, t ∈ [0, 1]. It follows from Remark 1(ii) that the realizations of the process X^{(1)}(t), t ∈ [0, 1], are continuous on [0, 1]. Hence, we have

v(X) = v(X, [0, 1]) = ∫_0^1 |X^{(1)}(t)| dt. (9)

On the other hand, let τ = {0 < τ_1 < τ_2 < ... < τ_{M(X)} < 1} be the set of local extremes of the process X(t), t ∈ [0, 1], and τ_0 := 0, τ_{M(X)+1} := 1. Using the elementary properties of functions of bounded variation, we obtain

v(X, [0, 1]) = Σ_{i=0}^{M(X)} v(X, [τ_i, τ_{i+1}]) = Σ_{i=0}^{M(X)} |X(τ_{i+1}) − X(τ_i)|. (10)

For a fixed ε ∈ (0, 1), the total number of quantization points r_ε(X) is additive on disjoint intervals, i.e., r_ε(X) = Σ_{i=0}^{M(X)} r_ε(X, [τ_i, τ_{i+1}]). Thus, it follows from (9) and (10) that

|εr_ε(X) − v(X)| ≤ Σ_{i=0}^{M(X)} |εr_ε(X, [τ_i, τ_{i+1}]) − |X(τ_{i+1}) − X(τ_i)||. (11)

Observe that by the definitions

X(τ_{i+1}) − X(τ_i) = q_ε(X(τ_{i+1})) − q_ε(X(τ_i)) + e_ε(X(τ_{i+1})) − e_ε(X(τ_i)),

εr_ε(X, [τ_i, τ_{i+1}]) = ε|[X(τ_{i+1})/ε] − [X(τ_i)/ε]| = |q_ε(X(τ_{i+1})) − q_ε(X(τ_i))|,

where i = 0, ..., M(X). Therefore, for i = 0, ..., M(X), we get

εr_ε(X, [τ_i, τ_{i+1}]) − |X(τ_{i+1}) − X(τ_i)| =
  e_ε(X(τ_{i+1})) − e_ε(X(τ_i)),     if X(τ_{i+1}) ≤ X(τ_i),
  −(e_ε(X(τ_{i+1})) − e_ε(X(τ_i))),  if X(τ_{i+1}) > X(τ_i).

Combining these relations with (11), we have

|εr_ε(X) − v(X)| ≤ Σ_{i=0}^{M(X)} |e_ε(X(τ_{i+1})) − e_ε(X(τ_i))| ≤ ε(M(X) + 1), ε ∈ (0, 1), (12)

since 0 ≤ e_ε(X(t)) < ε, t ∈ [0, 1]. Note that M(X) is the number of crossings of the zero level by a sample path of the process X^{(1)}(t), t ∈ [0, 1]. Belyaev (1966) shows that if a zero mean Gaussian process Z(t) ∈ C^k[0, 1], then the k-th moment of the number of crossings of any fixed level u by Z(t) is finite. Hence, taking into account (A1), we have for u = 0 and Z(t) = X^{(1)}(t),

E(M(X)) < ∞ if X ∈ C^2[0, T] and E(M(X)²) < ∞ if X ∈ C^3[0, T]. (13)

Further, letting ε → 0 in (12), we get (i), since M(X) < ∞ almost surely by (13). Finally, if we take expectations and let ε → 0 in (12), then due to (13) the assertion (ii) follows. Similarly, we prove the assertion (iii). This completes the proof.

Proof of Proposition 1. We use the following elementary property of moments. Let X, X_1, X_2, ... be random variables with finite s-th moments, s ≥ 1. If X_n →^s X as n → ∞, then E|X_n|^s → E|X|^s as n → ∞. This together with Theorem 1(ii) and 1(iii) yields

lim_{ε→0} εE(r_ε(X)) = E(v(X)) = V(X) = E ∫_0^1 |X^{(1)}(t)| dt, (14)

lim_{ε→0} ε² Var(r_ε(X)) = Var(v(X)) = E ∫_{[0,1]²} |X^{(1)}(t)X^{(1)}(s)| dt ds − (E(v(X)))². (15)

Further, the modification of Fubini's Theorem for random functions implies that we can change the order of integration in the right-hand sides of (14) and (15),

E ∫_0^1 |X^{(1)}(t)| dt = ∫_0^1 E|X^{(1)}(t)| dt, (16)

E ∫_{[0,1]²} |X^{(1)}(t)X^{(1)}(s)| dt ds = ∫_{[0,1]²} E|X^{(1)}(t)X^{(1)}(s)| dt ds. (17)

Note that for any standard normal random variable Y, E|Y| = (2/π)^{1/2}. Hence we obtain

E|X^{(1)}(t)| = (2/π)^{1/2} σ_1(t), t ∈ [0, 1]. (18)

Combining now (14) with (16) and (18) gives the assertion (i). Let Z_1 and Z_2 be standard normal variables such that E(Z_1 Z_2) = ρ.
By orthonormalization, we have E|Z_1 Z_2| = 2π^{-1}((1 − ρ²)^{1/2} + ρ(π/2 − arccos ρ)), ρ ∈ [−1, 1].

Thus, for Z_1 = X^{(1)}(t)/σ_1(t) and Z_2 = X^{(1)}(s)/σ_1(s),

E|X^{(1)}(t)X^{(1)}(s)| = γ(t, s). (19)

Combining (15) with (17) and (19), we get the assertion (ii). This completes the proof.

Proof of Theorem 2. Recall that the random non-uniform quantization rate r_{n,G}(X) is the total number of quantization points of the process X(t), t ∈ [0, 1], with the levels grid u_n = {u_k = u_k(n)} = {S(k/n − 1/(2n)), k = 1, ..., n}, S = G^{-1}, r_{n,G}(X) = Σ_{k=1}^n N_{u_k}(X). If (A1) and (A2) hold for the process X(t), t ∈ [0, T], then we have

E(N_{u_k}(X)) = ∫_0^1 ∫_R |z| p_t(u_k, z) dz dt, k = 1, ..., n, (20)

(see, e.g., Cramér and Leadbetter, 1967, Chapter 13.2). Summing (20) over all k = 1, ..., n, and multiplying both sides by n^{-1} = G(u_{k+1}) − G(u_k) =: ΔG(u_k), we obtain

n^{-1} R_{n,G}(X) = ∫_0^1 ∫_R |z| (Σ_{k=1}^n p_t(u_k, z) ΔG(u_k)) dz dt
                 = ∫_0^1 ∫_R (Σ_{k=1}^n p_t(u_k|z) ΔG(u_k)) |z| p_t(z) dz dt. (21)

Denote by

w_n(t, z) := Σ_{k=1}^n p_t(u_k|z) ΔG(u_k).

By the elementary properties of the Lebesgue-Stieltjes integral, we have for any fixed t ∈ [0, T] and z ∈ R

lim_{n→∞} w_n(t, z) = ∫_R p_t(u|z) dG(u). (22)

In fact, the convergence in (22) follows, for example, by splitting the real line R into a bounded part, |u| ≤ a, and an unbounded part, |u| > a, where we choose a in such a way that ∫_{|u|>a} dG(u) is sufficiently small. Further, by (B),

w_n(t, z) ≤ b(t, z), n ≥ 1, (23)

and ∫_0^T ∫_R |z| b(t, z) p_t(z) dz dt < ∞. Hence, from (21)-(23) and the Dominated Convergence Theorem we get the assertion. This completes the proof.

Proof of Proposition 2. The proof is similar to that in Seleznjev (2000) for spline approximation. By uniform continuity and positivity of c(t) = ||X^{(1)}(t)||, we get

c(t + s) = c(t)(1 + r_t(s)), r_t(s) → 0 as s → 0 uniformly in t ∈ [0, 1]. (24)

Now it follows from (1) and (24) that

||X(t) − X(t_k)|| = |t − t_k| c(t_k)(1 + r_{m,k}(t)), (25)

where max{|r_{m,k}(t)|, t ∈ [t_k, t_{k+1}], k = 0, ..., m−1} = o(1) as m → ∞. Applying (25) to the maximum mean square error ε_m(X), we obtain

ε_m(X) = max_k max_{t ∈ [t_k, t_{k+1}]} ||X(t) − X(t_k)||
       = max_k max_{t ∈ [t_k, t_{k+1}]} (|t − t_k| c(t_k)(1 + r_{m,k}(t)))
       = max_k (h_k c(t_k))(1 + o(1)) as m → ∞, (26)

where h_k := t_{k+1} − t_k, k = 0, ..., m−1. Further, the integral mean-value theorem implies h_k = 1/(h(w_k)m) for some w_k ∈ [t_k, t_{k+1}]. By the definition of h(t), t ∈ [0, 1], and (24) we have

max_k (c(t_k)h_k) = (1/m) max_k (c(t_k)/h(w_k))(1 + o(1))
                  = (1/m) max_k (c(w_k)/h(w_k))(1 + o(1))
                  = (1/m) max_{t ∈ [0,1]} (c(t)/h(t))(1 + o(1)) as m → ∞. (27)

Combining (26) and (27) gives

lim_{m→∞} m ε_m(X) = max_{t ∈ [0,T]} (c(t)/h(t)).

Note that

max_{t ∈ [0,T]} (c(t)/h(t)) ≥ ∫_0^T (c(s)/h(s)) h(s) ds = ∫_0^T c(s) ds,

with strict inequality unless c(t)/h(t) = const, i.e., unless h = h*. This completes the proof.

References

Belyaev, Yu.K., 1966. On the number of intersections of a level by a Gaussian stochastic process, I. Theor. Prob. Appl. XI.

Bennett, W.R., 1948. Spectrum of quantized signal. Bell Syst. Tech. J. 27.

Cambanis, S., Gerr, N., 1983. A simple class of asymptotically optimal quantizers. IEEE Trans. Inform. Theory 29.

Cramér, H., Leadbetter, M.R., 1967. Stationary and Related Stochastic Processes. John Wiley, New York.

Elias, P., 1970. Bounds and asymptotes for the performance of multivariate quantizers. Ann. Math. Stat. 41.

Gray, R.M., Neuhoff, D.L., 1998. Quantization. IEEE Trans. Inform. Theory 44.

Hüsler, J., Piterbarg, V., Seleznjev, O., 2003. On convergence of the uniform norms for Gaussian processes and linear approximation problems. Ann. Appl. Probab. 13.

Lifshits, M.A., 1995. Gaussian Random Functions. Kluwer, Dordrecht.

Lifshits, M., Linde, W., 2002. Approximation and Entropy Numbers of Volterra Operators with Application to Brownian Motion. Oxford University Press.

Luschgy, H., Pagès, G., 2002. Functional quantization of Gaussian processes. J. Funct. Anal. 196.

Seleznjev, O., 1991. Limit theorems for maxima and crossings of a sequence of Gaussian processes and approximation of random processes. J. Appl. Probab.
28.

Seleznjev, O., 2000. Spline approximation of random processes and design problems. J. Stat. Plan. Infer. 84.

Seleznjev, O., Staroverov, V., 2003. Wavelet approximation and simulation of Gaussian processes. Theor. Prob. Math. Stat. 66.

Shykula, M., Seleznjev, O., 2006. Stochastic structure of asymptotic quantization errors. Stat. Prob. Lett. 76.

Wahba, G., 1990. Spline Models for Observational Data. SIAM, Philadelphia.

Widrow, B., Kollár, I., Liu, M.-C., 1995. Statistical theory of quantization. IEEE Trans. Instrum. Meas. 45.

Published in Mathematical Communications 17 (2012), 447-460.

More information

ELEG 3143 Probability & Stochastic Process Ch. 6 Stochastic Process

ELEG 3143 Probability & Stochastic Process Ch. 6 Stochastic Process Department of Electrical Engineering University of Arkansas ELEG 3143 Probability & Stochastic Process Ch. 6 Stochastic Process Dr. Jingxian Wu wuj@uark.edu OUTLINE 2 Definition of stochastic process (random

More information

Non-Gaussian Maximum Entropy Processes

Non-Gaussian Maximum Entropy Processes Non-Gaussian Maximum Entropy Processes Georgi N. Boshnakov & Bisher Iqelan First version: 3 April 2007 Research Report No. 3, 2007, Probability and Statistics Group School of Mathematics, The University

More information

Asymptotics for posterior hazards

Asymptotics for posterior hazards Asymptotics for posterior hazards Pierpaolo De Blasi University of Turin 10th August 2007, BNR Workshop, Isaac Newton Intitute, Cambridge, UK Joint work with Giovanni Peccati (Université Paris VI) and

More information

Convergence rates in weighted L 1 spaces of kernel density estimators for linear processes

Convergence rates in weighted L 1 spaces of kernel density estimators for linear processes Alea 4, 117 129 (2008) Convergence rates in weighted L 1 spaces of kernel density estimators for linear processes Anton Schick and Wolfgang Wefelmeyer Anton Schick, Department of Mathematical Sciences,

More information

Run-length compression of quantized Gaussian stationary signals

Run-length compression of quantized Gaussian stationary signals Random Oper. Stoch. Equ. 0 (0), 3 38 DOI 0.55/rose-0-005 de Gruyter 0 Run-length compression of quantized Gaussian stationary signals Oleg Seleznjev and Mykola Shykula Communicated by Nikolai Leonenko

More information

PROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS

PROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS PROBABILITY: LIMIT THEOREMS II, SPRING 15. HOMEWORK PROBLEMS PROF. YURI BAKHTIN Instructions. You are allowed to work on solutions in groups, but you are required to write up solutions on your own. Please

More information

Jae Gil Choi and Young Seo Park

Jae Gil Choi and Young Seo Park Kangweon-Kyungki Math. Jour. 11 (23), No. 1, pp. 17 3 TRANSLATION THEOREM ON FUNCTION SPACE Jae Gil Choi and Young Seo Park Abstract. In this paper, we use a generalized Brownian motion process to define

More information

Extremes and ruin of Gaussian processes

Extremes and ruin of Gaussian processes International Conference on Mathematical and Statistical Modeling in Honor of Enrique Castillo. June 28-30, 2006 Extremes and ruin of Gaussian processes Jürg Hüsler Department of Math. Statistics, University

More information

Properties of the Autocorrelation Function

Properties of the Autocorrelation Function Properties of the Autocorrelation Function I The autocorrelation function of a (real-valued) random process satisfies the following properties: 1. R X (t, t) 0 2. R X (t, u) =R X (u, t) (symmetry) 3. R

More information

Density estimators for the convolution of discrete and continuous random variables

Density estimators for the convolution of discrete and continuous random variables Density estimators for the convolution of discrete and continuous random variables Ursula U Müller Texas A&M University Anton Schick Binghamton University Wolfgang Wefelmeyer Universität zu Köln Abstract

More information

POSITIVE DEFINITE FUNCTIONS AND MULTIDIMENSIONAL VERSIONS OF RANDOM VARIABLES

POSITIVE DEFINITE FUNCTIONS AND MULTIDIMENSIONAL VERSIONS OF RANDOM VARIABLES POSITIVE DEFINITE FUNCTIONS AND MULTIDIMENSIONAL VERSIONS OF RANDOM VARIABLES ALEXANDER KOLDOBSKY Abstract. We prove that if a function of the form f( K is positive definite on, where K is an origin symmetric

More information

Statistical inference on Lévy processes

Statistical inference on Lévy processes Alberto Coca Cabrero University of Cambridge - CCA Supervisors: Dr. Richard Nickl and Professor L.C.G.Rogers Funded by Fundación Mutua Madrileña and EPSRC MASDOC/CCA student workshop 2013 26th March Outline

More information

Conditional Full Support for Gaussian Processes with Stationary Increments

Conditional Full Support for Gaussian Processes with Stationary Increments Conditional Full Support for Gaussian Processes with Stationary Increments Tommi Sottinen University of Vaasa Kyiv, September 9, 2010 International Conference Modern Stochastics: Theory and Applications

More information

A Concise Course on Stochastic Partial Differential Equations

A Concise Course on Stochastic Partial Differential Equations A Concise Course on Stochastic Partial Differential Equations Michael Röckner Reference: C. Prevot, M. Röckner: Springer LN in Math. 1905, Berlin (2007) And see the references therein for the original

More information

Karhunen-Loève decomposition of Gaussian measures on Banach spaces

Karhunen-Loève decomposition of Gaussian measures on Banach spaces Karhunen-Loève decomposition of Gaussian measures on Banach spaces Jean-Charles Croix GT APSSE - April 2017, the 13th joint work with Xavier Bay. 1 / 29 Sommaire 1 Preliminaries on Gaussian processes 2

More information

Modelling energy forward prices Representation of ambit fields

Modelling energy forward prices Representation of ambit fields Modelling energy forward prices Representation of ambit fields Fred Espen Benth Department of Mathematics University of Oslo, Norway In collaboration with Heidar Eyjolfsson, Barbara Rüdiger and Andre Süss

More information

Bernardo D Auria Stochastic Processes /10. Notes. Abril 13 th, 2010

Bernardo D Auria Stochastic Processes /10. Notes. Abril 13 th, 2010 1 Stochastic Calculus Notes Abril 13 th, 1 As we have seen in previous lessons, the stochastic integral with respect to the Brownian motion shows a behavior different from the classical Riemann-Stieltjes

More information

1. Stochastic Processes and filtrations

1. Stochastic Processes and filtrations 1. Stochastic Processes and 1. Stoch. pr., A stochastic process (X t ) t T is a collection of random variables on (Ω, F) with values in a measurable space (S, S), i.e., for all t, In our case X t : Ω S

More information

THEOREMS, ETC., FOR MATH 515

THEOREMS, ETC., FOR MATH 515 THEOREMS, ETC., FOR MATH 515 Proposition 1 (=comment on page 17). If A is an algebra, then any finite union or finite intersection of sets in A is also in A. Proposition 2 (=Proposition 1.1). For every

More information

Additive functionals of infinite-variance moving averages. Wei Biao Wu The University of Chicago TECHNICAL REPORT NO. 535

Additive functionals of infinite-variance moving averages. Wei Biao Wu The University of Chicago TECHNICAL REPORT NO. 535 Additive functionals of infinite-variance moving averages Wei Biao Wu The University of Chicago TECHNICAL REPORT NO. 535 Departments of Statistics The University of Chicago Chicago, Illinois 60637 June

More information

Potential Theory on Wiener space revisited

Potential Theory on Wiener space revisited Potential Theory on Wiener space revisited Michael Röckner (University of Bielefeld) Joint work with Aurel Cornea 1 and Lucian Beznea (Rumanian Academy, Bukarest) CRC 701 and BiBoS-Preprint 1 Aurel tragically

More information

arxiv: v1 [math.pr] 23 Jan 2018

arxiv: v1 [math.pr] 23 Jan 2018 TRANSFER PRINCIPLE FOR nt ORDER FRACTIONAL BROWNIAN MOTION WIT APPLICATIONS TO PREDICTION AND EQUIVALENCE IN LAW TOMMI SOTTINEN arxiv:181.7574v1 [math.pr 3 Jan 18 Department of Mathematics and Statistics,

More information

In terms of measures: Exercise 1. Existence of a Gaussian process: Theorem 2. Remark 3.

In terms of measures: Exercise 1. Existence of a Gaussian process: Theorem 2. Remark 3. 1. GAUSSIAN PROCESSES A Gaussian process on a set T is a collection of random variables X =(X t ) t T on a common probability space such that for any n 1 and any t 1,...,t n T, the vector (X(t 1 ),...,X(t

More information

THE INVERSE FUNCTION THEOREM

THE INVERSE FUNCTION THEOREM THE INVERSE FUNCTION THEOREM W. PATRICK HOOPER The implicit function theorem is the following result: Theorem 1. Let f be a C 1 function from a neighborhood of a point a R n into R n. Suppose A = Df(a)

More information

1 Introduction. 2 White noise. The content of these notes is also covered by chapter 4 section A of [1].

1 Introduction. 2 White noise. The content of these notes is also covered by chapter 4 section A of [1]. 1 Introduction The content of these notes is also covered by chapter 4 section A of [1]. 2 White noise In applications infinitesimal Wiener increments are represented as dw t = η t The stochastic process

More information

An almost sure invariance principle for additive functionals of Markov chains

An almost sure invariance principle for additive functionals of Markov chains Statistics and Probability Letters 78 2008 854 860 www.elsevier.com/locate/stapro An almost sure invariance principle for additive functionals of Markov chains F. Rassoul-Agha a, T. Seppäläinen b, a Department

More information

Example 4.1 Let X be a random variable and f(t) a given function of time. Then. Y (t) = f(t)x. Y (t) = X sin(ωt + δ)

Example 4.1 Let X be a random variable and f(t) a given function of time. Then. Y (t) = f(t)x. Y (t) = X sin(ωt + δ) Chapter 4 Stochastic Processes 4. Definition In the previous chapter we studied random variables as functions on a sample space X(ω), ω Ω, without regard to how these might depend on parameters. We now

More information

= F (b) F (a) F (x i ) F (x i+1 ). a x 0 x 1 x n b i

= F (b) F (a) F (x i ) F (x i+1 ). a x 0 x 1 x n b i Real Analysis Problem 1. If F : R R is a monotone function, show that F T V ([a,b]) = F (b) F (a) for any interval [a, b], and that F has bounded variation on R if and only if it is bounded. Here F T V

More information

Formulas for probability theory and linear models SF2941

Formulas for probability theory and linear models SF2941 Formulas for probability theory and linear models SF2941 These pages + Appendix 2 of Gut) are permitted as assistance at the exam. 11 maj 2008 Selected formulae of probability Bivariate probability Transforms

More information

On the martingales obtained by an extension due to Saisho, Tanemura and Yor of Pitman s theorem

On the martingales obtained by an extension due to Saisho, Tanemura and Yor of Pitman s theorem On the martingales obtained by an extension due to Saisho, Tanemura and Yor of Pitman s theorem Koichiro TAKAOKA Dept of Applied Physics, Tokyo Institute of Technology Abstract M Yor constructed a family

More information

Optimal global rates of convergence for interpolation problems with random design

Optimal global rates of convergence for interpolation problems with random design Optimal global rates of convergence for interpolation problems with random design Michael Kohler 1 and Adam Krzyżak 2, 1 Fachbereich Mathematik, Technische Universität Darmstadt, Schlossgartenstr. 7, 64289

More information

Reliability Theory of Dynamically Loaded Structures (cont.)

Reliability Theory of Dynamically Loaded Structures (cont.) Outline of Reliability Theory of Dynamically Loaded Structures (cont.) Probability Density Function of Local Maxima in a Stationary Gaussian Process. Distribution of Extreme Values. Monte Carlo Simulation

More information

ON THE BOUNDEDNESS BEHAVIOR OF THE SPECTRAL FACTORIZATION IN THE WIENER ALGEBRA FOR FIR DATA

ON THE BOUNDEDNESS BEHAVIOR OF THE SPECTRAL FACTORIZATION IN THE WIENER ALGEBRA FOR FIR DATA ON THE BOUNDEDNESS BEHAVIOR OF THE SPECTRAL FACTORIZATION IN THE WIENER ALGEBRA FOR FIR DATA Holger Boche and Volker Pohl Technische Universität Berlin, Heinrich Hertz Chair for Mobile Communications Werner-von-Siemens

More information

ON THE FIRST TIME THAT AN ITO PROCESS HITS A BARRIER

ON THE FIRST TIME THAT AN ITO PROCESS HITS A BARRIER ON THE FIRST TIME THAT AN ITO PROCESS HITS A BARRIER GERARDO HERNANDEZ-DEL-VALLE arxiv:1209.2411v1 [math.pr] 10 Sep 2012 Abstract. This work deals with first hitting time densities of Ito processes whose

More information

1 Brownian Local Time

1 Brownian Local Time 1 Brownian Local Time We first begin by defining the space and variables for Brownian local time. Let W t be a standard 1-D Wiener process. We know that for the set, {t : W t = } P (µ{t : W t = } = ) =

More information

Representations of Gaussian measures that are equivalent to Wiener measure

Representations of Gaussian measures that are equivalent to Wiener measure Representations of Gaussian measures that are equivalent to Wiener measure Patrick Cheridito Departement für Mathematik, ETHZ, 89 Zürich, Switzerland. E-mail: dito@math.ethz.ch Summary. We summarize results

More information

Stability of optimization problems with stochastic dominance constraints

Stability of optimization problems with stochastic dominance constraints Stability of optimization problems with stochastic dominance constraints D. Dentcheva and W. Römisch Stevens Institute of Technology, Hoboken Humboldt-University Berlin www.math.hu-berlin.de/~romisch SIAM

More information

MOMENT CONVERGENCE RATES OF LIL FOR NEGATIVELY ASSOCIATED SEQUENCES

MOMENT CONVERGENCE RATES OF LIL FOR NEGATIVELY ASSOCIATED SEQUENCES J. Korean Math. Soc. 47 1, No., pp. 63 75 DOI 1.4134/JKMS.1.47..63 MOMENT CONVERGENCE RATES OF LIL FOR NEGATIVELY ASSOCIATED SEQUENCES Ke-Ang Fu Li-Hua Hu Abstract. Let X n ; n 1 be a strictly stationary

More information

For a stochastic process {Y t : t = 0, ±1, ±2, ±3, }, the mean function is defined by (2.2.1) ± 2..., γ t,

For a stochastic process {Y t : t = 0, ±1, ±2, ±3, }, the mean function is defined by (2.2.1) ± 2..., γ t, CHAPTER 2 FUNDAMENTAL CONCEPTS This chapter describes the fundamental concepts in the theory of time series models. In particular, we introduce the concepts of stochastic processes, mean and covariance

More information

arxiv: v2 [math.pr] 27 Oct 2015

arxiv: v2 [math.pr] 27 Oct 2015 A brief note on the Karhunen-Loève expansion Alen Alexanderian arxiv:1509.07526v2 [math.pr] 27 Oct 2015 October 28, 2015 Abstract We provide a detailed derivation of the Karhunen Loève expansion of a stochastic

More information

OPTIMAL SOLUTIONS TO STOCHASTIC DIFFERENTIAL INCLUSIONS

OPTIMAL SOLUTIONS TO STOCHASTIC DIFFERENTIAL INCLUSIONS APPLICATIONES MATHEMATICAE 29,4 (22), pp. 387 398 Mariusz Michta (Zielona Góra) OPTIMAL SOLUTIONS TO STOCHASTIC DIFFERENTIAL INCLUSIONS Abstract. A martingale problem approach is used first to analyze

More information

On prediction and density estimation Peter McCullagh University of Chicago December 2004

On prediction and density estimation Peter McCullagh University of Chicago December 2004 On prediction and density estimation Peter McCullagh University of Chicago December 2004 Summary Having observed the initial segment of a random sequence, subsequent values may be predicted by calculating

More information

Reduced-load equivalence for queues with Gaussian input

Reduced-load equivalence for queues with Gaussian input Reduced-load equivalence for queues with Gaussian input A. B. Dieker CWI P.O. Box 94079 1090 GB Amsterdam, the Netherlands and University of Twente Faculty of Mathematical Sciences P.O. Box 17 7500 AE

More information

Gaussian, Markov and stationary processes

Gaussian, Markov and stationary processes Gaussian, Markov and stationary processes Gonzalo Mateos Dept. of ECE and Goergen Institute for Data Science University of Rochester gmateosb@ece.rochester.edu http://www.ece.rochester.edu/~gmateosb/ November

More information

Laplace transforms of L 2 ball, Comparison Theorems and Integrated Brownian motions. Jan Hannig

Laplace transforms of L 2 ball, Comparison Theorems and Integrated Brownian motions. Jan Hannig Laplace transforms of L 2 ball, Comparison Theorems and Integrated Brownian motions. Jan Hannig Department of Statistics, Colorado State University Email: hannig@stat.colostate.edu Joint work with F. Gao,

More information

ANALYSIS QUALIFYING EXAM FALL 2017: SOLUTIONS. 1 cos(nx) lim. n 2 x 2. g n (x) = 1 cos(nx) n 2 x 2. x 2.

ANALYSIS QUALIFYING EXAM FALL 2017: SOLUTIONS. 1 cos(nx) lim. n 2 x 2. g n (x) = 1 cos(nx) n 2 x 2. x 2. ANALYSIS QUALIFYING EXAM FALL 27: SOLUTIONS Problem. Determine, with justification, the it cos(nx) n 2 x 2 dx. Solution. For an integer n >, define g n : (, ) R by Also define g : (, ) R by g(x) = g n

More information

Stationary particle Systems

Stationary particle Systems Stationary particle Systems Kaspar Stucki (joint work with Ilya Molchanov) University of Bern 5.9 2011 Introduction Poisson process Definition Let Λ be a Radon measure on R p. A Poisson process Π is a

More information

Weak invariance principles for sums of dependent random functions

Weak invariance principles for sums of dependent random functions Available online at www.sciencedirect.com Stochastic Processes and their Applications 13 (013) 385 403 www.elsevier.com/locate/spa Weak invariance principles for sums of dependent random functions István

More information

Asymptotic Nonequivalence of Nonparametric Experiments When the Smoothness Index is ½

Asymptotic Nonequivalence of Nonparametric Experiments When the Smoothness Index is ½ University of Pennsylvania ScholarlyCommons Statistics Papers Wharton Faculty Research 1998 Asymptotic Nonequivalence of Nonparametric Experiments When the Smoothness Index is ½ Lawrence D. Brown University

More information

be the set of complex valued 2π-periodic functions f on R such that

be the set of complex valued 2π-periodic functions f on R such that . Fourier series. Definition.. Given a real number P, we say a complex valued function f on R is P -periodic if f(x + P ) f(x) for all x R. We let be the set of complex valued -periodic functions f on

More information

Asymptotics of minimax stochastic programs

Asymptotics of minimax stochastic programs Asymptotics of minimax stochastic programs Alexander Shapiro Abstract. We discuss in this paper asymptotics of the sample average approximation (SAA) of the optimal value of a minimax stochastic programming

More information

Elements of Probability Theory

Elements of Probability Theory Elements of Probability Theory CHUNG-MING KUAN Department of Finance National Taiwan University December 5, 2009 C.-M. Kuan (National Taiwan Univ.) Elements of Probability Theory December 5, 2009 1 / 58

More information

Brownian Motion. Chapter Stochastic Process

Brownian Motion. Chapter Stochastic Process Chapter 1 Brownian Motion 1.1 Stochastic Process A stochastic process can be thought of in one of many equivalent ways. We can begin with an underlying probability space (Ω, Σ,P and a real valued stochastic

More information

GAUSSIAN PROCESSES; KOLMOGOROV-CHENTSOV THEOREM

GAUSSIAN PROCESSES; KOLMOGOROV-CHENTSOV THEOREM GAUSSIAN PROCESSES; KOLMOGOROV-CHENTSOV THEOREM STEVEN P. LALLEY 1. GAUSSIAN PROCESSES: DEFINITIONS AND EXAMPLES Definition 1.1. A standard (one-dimensional) Wiener process (also called Brownian motion)

More information

Lecture 4: Stochastic Processes (I)

Lecture 4: Stochastic Processes (I) Miranda Holmes-Cerfon Applied Stochastic Analysis, Spring 215 Lecture 4: Stochastic Processes (I) Readings Recommended: Pavliotis [214], sections 1.1, 1.2 Grimmett and Stirzaker [21], section 9.4 (spectral

More information

Weak convergence and Brownian Motion. (telegram style notes) P.J.C. Spreij

Weak convergence and Brownian Motion. (telegram style notes) P.J.C. Spreij Weak convergence and Brownian Motion (telegram style notes) P.J.C. Spreij this version: December 8, 2006 1 The space C[0, ) In this section we summarize some facts concerning the space C[0, ) of real

More information

arxiv: v1 [math.sp] 29 Jan 2018

arxiv: v1 [math.sp] 29 Jan 2018 Essential Descent Spectrum Equality Abdelaziz Tajmouati 1, Hamid Boua 2 arxiv:1801.09764v1 [math.sp] 29 Jan 2018 Sidi Mohamed Ben Abdellah University Faculty of Sciences Dhar El Mahraz Laboratory of Mathematical

More information

Stochastic Processes. M. Sami Fadali Professor of Electrical Engineering University of Nevada, Reno

Stochastic Processes. M. Sami Fadali Professor of Electrical Engineering University of Nevada, Reno Stochastic Processes M. Sami Fadali Professor of Electrical Engineering University of Nevada, Reno 1 Outline Stochastic (random) processes. Autocorrelation. Crosscorrelation. Spectral density function.

More information

Measurable functions are approximately nice, even if look terrible.

Measurable functions are approximately nice, even if look terrible. Tel Aviv University, 2015 Functions of real variables 74 7 Approximation 7a A terrible integrable function........... 74 7b Approximation of sets................ 76 7c Approximation of functions............

More information

Spectral representations and ergodic theorems for stationary stochastic processes

Spectral representations and ergodic theorems for stationary stochastic processes AMS 263 Stochastic Processes (Fall 2005) Instructor: Athanasios Kottas Spectral representations and ergodic theorems for stationary stochastic processes Stationary stochastic processes Theory and methods

More information

Polynomial Chaos and Karhunen-Loeve Expansion

Polynomial Chaos and Karhunen-Loeve Expansion Polynomial Chaos and Karhunen-Loeve Expansion 1) Random Variables Consider a system that is modeled by R = M(x, t, X) where X is a random variable. We are interested in determining the probability of the

More information

On Kusuoka Representation of Law Invariant Risk Measures

On Kusuoka Representation of Law Invariant Risk Measures MATHEMATICS OF OPERATIONS RESEARCH Vol. 38, No. 1, February 213, pp. 142 152 ISSN 364-765X (print) ISSN 1526-5471 (online) http://dx.doi.org/1.1287/moor.112.563 213 INFORMS On Kusuoka Representation of

More information

Asymptotic behavior for sums of non-identically distributed random variables

Asymptotic behavior for sums of non-identically distributed random variables Appl. Math. J. Chinese Univ. 2019, 34(1: 45-54 Asymptotic behavior for sums of non-identically distributed random variables YU Chang-jun 1 CHENG Dong-ya 2,3 Abstract. For any given positive integer m,

More information

Poisson random measure: motivation

Poisson random measure: motivation : motivation The Lévy measure provides the expected number of jumps by time unit, i.e. in a time interval of the form: [t, t + 1], and of a certain size Example: ν([1, )) is the expected number of jumps

More information

University of Toronto Department of Statistics

University of Toronto Department of Statistics Norm Comparisons for Data Augmentation by James P. Hobert Department of Statistics University of Florida and Jeffrey S. Rosenthal Department of Statistics University of Toronto Technical Report No. 0704

More information

STAT 7032 Probability Spring Wlodek Bryc

STAT 7032 Probability Spring Wlodek Bryc STAT 7032 Probability Spring 2018 Wlodek Bryc Created: Friday, Jan 2, 2014 Revised for Spring 2018 Printed: January 9, 2018 File: Grad-Prob-2018.TEX Department of Mathematical Sciences, University of Cincinnati,

More information

UDC Anderson-Darling and New Weighted Cramér-von Mises Statistics. G. V. Martynov

UDC Anderson-Darling and New Weighted Cramér-von Mises Statistics. G. V. Martynov UDC 519.2 Anderson-Darling and New Weighted Cramér-von Mises Statistics G. V. Martynov Institute for Information Transmission Problems of the RAS, Bolshoy Karetny per., 19, build.1, Moscow, 12751, Russia

More information

Pointwise convergence rates and central limit theorems for kernel density estimators in linear processes

Pointwise convergence rates and central limit theorems for kernel density estimators in linear processes Pointwise convergence rates and central limit theorems for kernel density estimators in linear processes Anton Schick Binghamton University Wolfgang Wefelmeyer Universität zu Köln Abstract Convergence

More information

RESEARCH REPORT. Estimation of sample spacing in stochastic processes. Anders Rønn-Nielsen, Jon Sporring and Eva B.

RESEARCH REPORT. Estimation of sample spacing in stochastic processes.   Anders Rønn-Nielsen, Jon Sporring and Eva B. CENTRE FOR STOCHASTIC GEOMETRY AND ADVANCED BIOIMAGING www.csgb.dk RESEARCH REPORT 6 Anders Rønn-Nielsen, Jon Sporring and Eva B. Vedel Jensen Estimation of sample spacing in stochastic processes No. 7,

More information

Econ 508B: Lecture 5

Econ 508B: Lecture 5 Econ 508B: Lecture 5 Expectation, MGF and CGF Hongyi Liu Washington University in St. Louis July 31, 2017 Hongyi Liu (Washington University in St. Louis) Math Camp 2017 Stats July 31, 2017 1 / 23 Outline

More information

ALGEBRAIC POLYNOMIALS WITH SYMMETRIC RANDOM COEFFICIENTS K. FARAHMAND AND JIANLIANG GAO

ALGEBRAIC POLYNOMIALS WITH SYMMETRIC RANDOM COEFFICIENTS K. FARAHMAND AND JIANLIANG GAO ROCKY MOUNTAIN JOURNAL OF MATHEMATICS Volume 44, Number, 014 ALGEBRAIC POLYNOMIALS WITH SYMMETRIC RANDOM COEFFICIENTS K. FARAHMAND AND JIANLIANG GAO ABSTRACT. This paper provides an asymptotic estimate

More information

Bernardo D Auria Stochastic Processes /12. Notes. March 29 th, 2012

Bernardo D Auria Stochastic Processes /12. Notes. March 29 th, 2012 1 Stochastic Calculus Notes March 9 th, 1 In 19, Bachelier proposed for the Paris stock exchange a model for the fluctuations affecting the price X(t) of an asset that was given by the Brownian motion.

More information

On the concentration of eigenvalues of random symmetric matrices

On the concentration of eigenvalues of random symmetric matrices On the concentration of eigenvalues of random symmetric matrices Noga Alon Michael Krivelevich Van H. Vu April 23, 2012 Abstract It is shown that for every 1 s n, the probability that the s-th largest

More information

Module 9: Stationary Processes

Module 9: Stationary Processes Module 9: Stationary Processes Lecture 1 Stationary Processes 1 Introduction A stationary process is a stochastic process whose joint probability distribution does not change when shifted in time or space.

More information

Applications of Ito s Formula

Applications of Ito s Formula CHAPTER 4 Applications of Ito s Formula In this chapter, we discuss several basic theorems in stochastic analysis. Their proofs are good examples of applications of Itô s formula. 1. Lévy s martingale

More information

THE INVERSE FUNCTION THEOREM FOR LIPSCHITZ MAPS

THE INVERSE FUNCTION THEOREM FOR LIPSCHITZ MAPS THE INVERSE FUNCTION THEOREM FOR LIPSCHITZ MAPS RALPH HOWARD DEPARTMENT OF MATHEMATICS UNIVERSITY OF SOUTH CAROLINA COLUMBIA, S.C. 29208, USA HOWARD@MATH.SC.EDU Abstract. This is an edited version of a

More information

PROBLEMS. (b) (Polarization Identity) Show that in any inner product space

PROBLEMS. (b) (Polarization Identity) Show that in any inner product space 1 Professor Carl Cowen Math 54600 Fall 09 PROBLEMS 1. (Geometry in Inner Product Spaces) (a) (Parallelogram Law) Show that in any inner product space x + y 2 + x y 2 = 2( x 2 + y 2 ). (b) (Polarization

More information

arxiv: v1 [math.pr] 2 Aug 2017

arxiv: v1 [math.pr] 2 Aug 2017 Hilbert-valued self-intersection local times for planar Brownian motion Andrey Dorogovtsev, adoro@imath.kiev.ua Olga Izyumtseva, olgaizyumtseva@gmail.com arxiv:1708.00608v1 [math.pr] 2 Aug 2017 Abstract

More information

WEAK VERSIONS OF STOCHASTIC ADAMS-BASHFORTH AND SEMI-IMPLICIT LEAPFROG SCHEMES FOR SDES. 1. Introduction

WEAK VERSIONS OF STOCHASTIC ADAMS-BASHFORTH AND SEMI-IMPLICIT LEAPFROG SCHEMES FOR SDES. 1. Introduction WEAK VERSIONS OF STOCHASTIC ADAMS-BASHFORTH AND SEMI-IMPLICIT LEAPFROG SCHEMES FOR SDES BRIAN D. EWALD 1 Abstract. We consider the weak analogues of certain strong stochastic numerical schemes considered

More information

Solutions: Problem Set 4 Math 201B, Winter 2007

Solutions: Problem Set 4 Math 201B, Winter 2007 Solutions: Problem Set 4 Math 2B, Winter 27 Problem. (a Define f : by { x /2 if < x

More information

COVARIANCE IDENTITIES AND MIXING OF RANDOM TRANSFORMATIONS ON THE WIENER SPACE

COVARIANCE IDENTITIES AND MIXING OF RANDOM TRANSFORMATIONS ON THE WIENER SPACE Communications on Stochastic Analysis Vol. 4, No. 3 (21) 299-39 Serials Publications www.serialspublications.com COVARIANCE IDENTITIES AND MIXING OF RANDOM TRANSFORMATIONS ON THE WIENER SPACE NICOLAS PRIVAULT

More information